Hacker News

Brilliant, thank you! I just got OP's setup working, but this seems much more user-friendly. Giving it a try now...

EDIT: Got it working, with a couple of prerequisite steps:

0. `rm -rf` the existing `stable-diffusion` repo (assuming you followed OP's original setup)

1. Install `conda`, if you don't already have it:

    brew install --cask miniconda
2. Install the other build requirements referenced in OP's setup:

    brew install cmake protobuf rust
3. Follow the main installation instructions here: https://github.com/lstein/stable-diffusion/blob/main/README-...

Then you should be good to go!

EDIT 2: After playing around with this repo, I've found:

- It offers better UX for interacting with Stable Diffusion, and seems to be a promising project.

- Running txt2img.py from lstein's repo seems to run about 30% faster than OP's. Not sure if that's a coincidence, or if they've included extra optimisations.

- I couldn't get the web UI to work. It kept throwing the "leaked semaphore objects" error someone else reported (even when rendering at 64x64).

- Sometimes it rendered images as just a black canvas; other times it worked. This is apparently a known issue and a fix is being tested.

I've reached the limits of my knowledge on this, but I'll be following closely as new PRs are merged in over the coming days. Exciting!



I followed all these steps, but I got this error:

> User specified autocast device_type must be 'cuda' or 'cpu'

> Are you sure your system has an adequate NVIDIA GPU?

I found the solution here: https://github.com/lstein/stable-diffusion/issues/293#issuec...


I had to manually install PyTorch for the preload_models.py step to work, because ReduceOp wasn't found. Why even use Anaconda if all the dependencies aren't included? Every time I touch an ML project, there's always a Python dependency issue. How can people use a tool that makes it impossible to provide a consistent environment?


You are completely correct that there are a lot of dependency bugs here; I would just like to pedantically point out that the issue in question is PyTorch supporting MPS, which is basically entirely a C++ dependency issue rather than a Python one. (PyTorch is mostly written in C++ despite having "py" in the name.) And yeah, the state of C++ dependency management is pretty bad.


FYI: black images are not just from the safety checker.

Yes, the safety checker will zero out images, but you can just turn it off with an "if False:". Black images are mostly due to a bug, which is especially frustrating because it turns up at high step counts and means you've wasted time running it.

My experience has been that roughly 2-4 out of a 32-image batch come back black at the default settings, regardless of the prompt.

Just stamp out images in batches and discard the black ones.
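The discard step can be sketched in plain Python. This is a hypothetical helper, not part of either repo: it treats an image as black if its mean pixel intensity is close to zero.

```python
def is_mostly_black(pixels, threshold=0.02):
    """Return True if a flattened image (0-255 channel values) is near-black.

    `threshold` is the fraction of full brightness below which we count
    the image as one of the all-black results described above.
    """
    vals = list(pixels)
    if not vals:
        return True  # an empty image has nothing worth keeping
    return sum(vals) / (255.0 * len(vals)) < threshold

def filter_batch(images):
    """Keep only the usable (non-black) images from a batch."""
    return [img for img in images if not is_mostly_black(img)]
```

Run the batch at your normal settings, then pass the decoded pixel data through `filter_batch` before saving.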


I was able to avoid black images by using a different sampler:

--sampler k_euler

full command:

"photography of a cat on the moon" -s 20 -n 3 --sampler k_euler -W 384 -H 384


I tried that as well, but it resulted in an error:

AttributeError: module 'torch._C' has no attribute '_cuda_resetPeakMemoryStats'

https://gist.github.com/JAStanton/73673d249927588c93ee530d08...


Hi jastanton. I'm on an Intel Mac running into the same problem. Did you find a workaround?


To get past `pip install -r requirements.txt` I had to muck around with CFLAGS/LDFLAGS, because I guess on your system /opt/homebrew/opt/openssl is a symlink to something? On mine it doesn't exist; I just have /opt/homebrew/opt/openssl@1.1 symlinked to /opt/Cellar/somewhere.

The command that finally worked for me:

  python3 -m venv venv
  . venv/bin/activate
  CFLAGS="-I /opt/homebrew/opt/openssl@1.1/include" LDFLAGS="-L /opt/homebrew/opt/openssl@1.1/lib -L/opt/homebrew/Cellar/openssl@1.1/1.1.1q/lib -lssl -lcrypto" PKG_CONFIG_PATH="/usr/local/opt/openssl@1.1/lib/pkgconfig" GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 GRPC_PYTHON_BUILD_SYSTEM_ZLIB=1 pip install -r requirements.txt
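To avoid guessing which prefix exists on a given machine, a tiny check can pick the right one before you export the flags. This is a hypothetical helper; the candidate paths are just the usual Homebrew locations on Apple Silicon and Intel Macs.

```python
import os

# Usual Homebrew prefixes for openssl@1.1 (Apple Silicon first, then Intel).
CANDIDATES = [
    "/opt/homebrew/opt/openssl@1.1",
    "/usr/local/opt/openssl@1.1",
]

def openssl_prefix():
    """Return the first openssl@1.1 prefix that exists, or None if neither does."""
    for path in CANDIDATES:
        if os.path.isdir(path):
            return path
    return None
```

Whatever it returns is the prefix to substitute into the `-I`/`-L` flags above.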


Thank you! With those extra steps, I got it working myself. At least, I think thank you. My work productivity for the next few days might not agree.


Instructions don't work here; it dead-ends at

  FileNotFoundError: [Errno 2] No such file or directory: 'models/ldm/stable-diffusion-v1/model.ckpt'

Looks like there's a step missing or broken for downloading the actual weights.

Going up to the parent repo points at a bunch of dead links or Hugging Face pages.


You have to download the model from the Hugging Face[0] site first (it requires a free account). The exact steps for linking the file are then detailed here[1].

[0] https://huggingface.co/CompVis/stable-diffusion-v-1-4-origin... [1] https://github.com/lstein/stable-diffusion/blob/main/README-...
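The linking step can be sketched like this. It's a guess at the flow: the target path comes from the FileNotFoundError above, while the downloaded checkpoint's file name (e.g. sd-v1-4.ckpt) is whatever Hugging Face gave you.

```python
import os

def link_checkpoint(downloaded_ckpt, repo_root="."):
    """Symlink a downloaded checkpoint to where the scripts expect it.

    The target path (models/ldm/stable-diffusion-v1/model.ckpt) is taken
    from the FileNotFoundError quoted earlier in the thread.
    """
    target_dir = os.path.join(repo_root, "models", "ldm", "stable-diffusion-v1")
    os.makedirs(target_dir, exist_ok=True)
    target = os.path.join(target_dir, "model.ckpt")
    os.symlink(os.path.abspath(downloaded_ckpt), target)
    return target
```

A symlink (rather than a copy) keeps the multi-gigabyte checkpoint in one place if you use it from more than one repo.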


I did this but then moved the directory. When re-linking and checking the path with ls, I thought "oh, alright, it's already there". Oh well; better to check with ls -l earlier next time.


Can you describe how you did (or are doing) this? Do you now need to use conda (as opposed to OP's pip-only version)?


See my edit for more info. (Just ironing out a couple of other issues I've found, so I might update it again shortly.)


I only get black images.


You have to disable the safety checker after creating the pipe.
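That usually means replacing the checker with a pass-through right after the pipe is created. A minimal sketch, where the attribute name `safety_checker` and its (images, nsfw_flags) return shape are assumptions about the diffusers-style pipeline API:

```python
def disable_safety_checker(pipe):
    """Replace a pipeline's safety checker with a pass-through.

    The stock checker returns (possibly blacked-out images, nsfw flags);
    this stand-in returns the images untouched and flags nothing.
    """
    def null_checker(images, **kwargs):
        return images, [False] * len(images)
    pipe.safety_checker = null_checker
    return pipe
```

Call it once on the pipe object immediately after construction, before generating any images.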



