
Has anybody had success getting newer AMD cards working?

ROCm support seems spotty at best. I have a 5700 XT and haven't had much luck getting it working.



I have it working on an RX 6800, used the scripts from this repo[0] to build a docker image that has ROCm drivers and PyTorch installed.

I'm running Ubuntu 22.04 LTS as the host OS and didn't have to touch anything beyond the basic Docker install. Next step is to build a new Dockerfile that adds the Stable Diffusion WebUI.[1]

[0] https://github.com/AshleyYakeley/stable-diffusion-rocm

[1] https://github.com/hlky/stable-diffusion-webui
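That next-step Dockerfile might look roughly like the sketch below. The base image tag, the presence of git in it, and the webui entry point (`webui.py`, `requirements.txt`) are all assumptions, not taken from either repo:

```dockerfile
# Sketch only: image tag and webui entry point are assumptions.
# Start from an image that already has ROCm drivers + PyTorch working.
FROM rocm/pytorch:latest

# Layer the webui from [1] on top of the known-good ROCm/PyTorch stack.
RUN git clone https://github.com/hlky/stable-diffusion-webui /sd-webui
WORKDIR /sd-webui
RUN pip install -r requirements.txt

# Model weights would be mounted at runtime, not baked into the image.
CMD ["python", "webui.py"]
```

You'd still run the container with the usual ROCm device flags (`--device=/dev/kfd --device=/dev/dri`) so the GPU is visible inside it.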


The RX6800 seems like a great card for this - 16GB of relatively fast VRAM for a good price.

How long does it take to do 50 iterations on a 512x512?


I've tried using this set of steps [1], but so far haven't had luck, mostly because the ROCm driver setup is throwing me for a loop. I tried it with an RX 6700 XT: I was first going to test on Ubuntu 22.04, but realized ROCm doesn't support that release yet, so I tried again on 20.04 and ended up breaking my GPU driver!

[1] https://gist.github.com/geerlingguy/ff3c3cbcf4416be2c0c1e0f8...


Yes. That's expected.

AMD market-segmented their RDNA2 support in ROCm to the Navi 21 set only (6800/6800 XT/6900 XT).

It is not officially supported in any way on other RDNA2 GPUs. (Strictly speaking, it isn't supported on the desktop RDNA2 range at all; Navi 21 only works because their top-end Pro cards share the same die.)


Oh... had no clue! Thanks for letting me know so I wouldn't have to spend hours banging my head against the wall.


As an aside, there is a totally unsupported hack to make it somewhat work on the smaller Navi 2x dies like the one you're using:

Set HSA_OVERRIDE_GFX_VERSION=10.3.0 to force use of the Navi 21 (gfx1030) binary slice.

Again, this is totally unsupported; please don't complain if something doesn't work while using this trick.

But basic PyTorch use does work with it, so you might get away with it for this scenario.
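Concretely, the override is just an environment variable set before launching PyTorch; the script name below is a placeholder:

```shell
# Unsupported hack: tell the ROCm runtime to treat a smaller Navi 2x die
# as gfx1030 (Navi 21) so its precompiled code objects get loaded.
# May crash or silently misbehave on unsupported dies.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
# Then launch PyTorch as usual, e.g.: python your_script.py
```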

(TL;DR: AMD just doesn't care about GPGPU on mainstream cards; better to switch to a GPU vendor that does next time...)


Looks like I may be out of luck with Navi 10.


You can try my guide; I got it working on a 6750 XT:

https://yulian.kuncheff.com/stable-diffusion-fedora-amd/


I tried getting PyTorch Vulkan inference working with RADV, but it gives me a missing-dtype error in VkFormat. FP16 and normal precision hit the same error; I think it's some bf16 thing.


6600 XT reporting in. I spent a few hours on Windows and WSL2 setup attempts and got nowhere. I don't run Ubuntu at home and don't want to dual boot just for this. From looking around, I think I'd have a better chance on native Ubuntu.


Buy an NVIDIA card. ROCm isn't supported in any way on WSL2, but CUDA is.

AMD just doesn't invest in their developer ecosystem. Also, since you use a 6600 XT, there's no official ROCm support for your die; only Navi 21 is supported.
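If you're unsure which ISA target your card reports, `rocminfo` will show it once the ROCm userspace tools are installed; the grep is just a convenience, and the `command -v` guard is there in case the tools aren't present:

```shell
# Print the GPU's ISA target as ROCm sees it.
# gfx1030 = Navi 21 (officially supported);
# gfx1031 (Navi 22, e.g. 6700 XT) and gfx1032 (Navi 23, e.g. 6600 XT) are not.
if command -v rocminfo >/dev/null; then
    rocminfo | grep -i "gfx"
fi
```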


Or wait: if it's just about Stable Diffusion, multiple people are trying to create ONNX and DirectML forks of the models/scripts, which at least in theory can work on AMD GPUs in Windows and WSL2.



