I have it working on an RX 6800; I used the scripts from this repo [0] to build a Docker image that has the ROCm drivers and PyTorch installed.
I'm running Ubuntu 22.04 LTS as the host OS and didn't have to touch anything beyond the basic Docker install. Next step is to build a new Dockerfile that adds the Stable Diffusion WebUI. [1]
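For anyone curious, a minimal sketch of what that Dockerfile could look like. The base image tag `rocm-pytorch:latest` is an assumption standing in for whatever the repo's scripts produce, and I'm using the AUTOMATIC1111 WebUI as the example:

```dockerfile
# Hypothetical base image built from the ROCm + PyTorch scripts in [0]
FROM rocm-pytorch:latest

# Tools the WebUI needs beyond PyTorch itself
RUN apt-get update && apt-get install -y git python3-venv \
    && rm -rf /var/lib/apt/lists/*

# Pull the Stable Diffusion WebUI
RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git /sd-webui
WORKDIR /sd-webui

# Install the WebUI's Python requirements against the ROCm PyTorch
# already present in the base image
RUN pip install -r requirements.txt

# Listen on all interfaces so the host can reach the UI
CMD ["python3", "launch.py", "--listen"]
```

You'd still need to pass the GPU devices through at run time, e.g. `docker run --device=/dev/kfd --device=/dev/dri ...`, same as for the base image.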
I've tried using this set of steps [1], but so far I haven't had any luck, mostly because the ROCm driver setup is throwing me for a loop. I tried it with an RX 6700 XT: first I was going to test on Ubuntu 22.04, but realized ROCm doesn't support that release yet, so I tried again on 20.04 and ended up breaking my GPU driver!
AMD market-segmented their RDNA2 support in ROCm to the Navi 21 dies only (6800/6800 XT/6900 XT).
It is not officially supported in any way on other RDNA2 GPUs. (Or really on the desktop RDNA2 range at all; that only works because their top-end Pro cards share the same die.)
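That said, there's a commonly reported (and entirely unofficial) workaround for the smaller RDNA2 dies: since Navi 22/23 share the RDNA2 ISA with Navi 21, you can tell the ROCm runtime to pretend the card is gfx1030. A sketch, with the usual caveat that it's unsupported and may still break for some cards:

```shell
# Make the ROCm runtime load the gfx1030 (Navi 21) kernels even though
# the card reports itself as gfx1031/gfx1032 (6700 XT / 6600 XT etc.)
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Quick check that the ROCm build of PyTorch now sees the GPU
# (ROCm PyTorch reuses the torch.cuda API)
python3 -c "import torch; print(torch.cuda.is_available())"
```

This needs to be set in the environment of whatever process launches PyTorch (so inside the container, if you're running one).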
I tried getting PyTorch Vulkan inference working with RADV, but it gives me a missing-dtype error in VkFormat. FP16 and normal precision give the same error. I think it's some bf16 thing.
6600 XT reporting in. I spent a few hours on Windows and WSL2 setup attempts and got nowhere. I don't run Ubuntu at home and don't want to dual-boot just for this. From looking around, I think I'd have a better chance on native Ubuntu.
Or wait, if it's just about Stable Diffusion: multiple people are trying to create ONNX and DirectML forks of the models/scripts, which at least in theory can work for AMD GPUs on Windows and WSL2.
ROCm support seems spotty at best; I have a 5700 XT and haven't had much luck getting it working.