Hacker News

There are forks that even work on 1.8 GB of VRAM! They work great on my GTX 1050 2GB.

This is by far the most popular and active right now: https://github.com/AUTOMATIC1111/stable-diffusion-webui



> This is by far the most popular and active right now: https://github.com/AUTOMATIC1111/stable-diffusion-webui

While technically the most popular, I wouldn't call it "by far". This one is a very close second (500 vs 580 forks): https://github.com/sd-webui/stable-diffusion-webui/tree/dev


That's why I said "right now": I feel most people have moved from the one you linked to AUTOMATIC's fork by now. hlky's fork (the one you linked) was by far the most popular until a couple of weeks ago, but problems with the main developer's attitude and a never-ending, issue-ridden migration from Gradio to Streamlit cost it its popularity.

AUTOMATIC has the attention of most devs nowadays. When you see any new ideas come up, they usually appear in AUTOMATIC's fork first.


Just as another point of reference: I followed the Windows install and I'm running this on my 1060 with 6 GB of memory. With no setting changes, it takes about 10 seconds to generate an image. I often run with the sampling steps up to 50, and that takes about 40 seconds per image.


While AUTOMATIC is certainly popular, calling it the most active/popular would be ignoring the community working on Invoke. Forks don’t lie.

https://github.com/invoke-ai/InvokeAI


> Forks don’t lie.

They sure do. InvokeAI is a fork of the original repo CompVis/stable-diffusion and thus shares its fork counter. Those 4.1k forks are coming from CompVis/stable-diffusion, not InvokeAI.

Meanwhile AUTOMATIC1111/stable-diffusion-webui is not a fork itself, and has 511 forks.


Welp - TIL.

Thanks for the correction.

Any idea on how to count forks of a downstream fork? If anyone would know... :)
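One approach is to count the fork's own direct forks via GitHub's REST API. A sketch using the unauthenticated API (subject to rate limits); the trick of reading the total off the Link header's rel="last" page with per_page=1 is a common pagination shortcut, and the function names here are mine:

```python
import json
import re
import urllib.request

API = "https://api.github.com/repos/{owner}/{repo}/forks?per_page=1"

def last_page(link_header):
    """Extract the rel="last" page number from a GitHub Link header.

    With per_page=1, that page number equals the total item count.
    """
    m = re.search(r'[?&]page=(\d+)>;\s*rel="last"', link_header or "")
    return int(m.group(1)) if m else None

def count_direct_forks(owner, repo):
    """Count direct forks of a repo, even if that repo is itself a fork."""
    req = urllib.request.Request(
        API.format(owner=owner, repo=repo),
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        link = resp.headers.get("Link")
        if link is None:  # no pagination: 0 or 1 forks total
            return len(json.load(resp))
        return last_page(link)

# count_direct_forks("invoke-ai", "InvokeAI")
```

Note this counts only direct forks; forks-of-forks would need a recursive walk over the /forks listing.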


Subjectively, AUTOMATIC has taken over -- I have not heard of invoke yet but will check it out.


Imo the only reason to use it has been Mac/M1 support, but that's probably in other forks by now.


What settings and repo are you using for GTX 1050 with 2GB?


I'm using the one I linked in my original post: https://github.com/AUTOMATIC1111/stable-diffusion-webui

The only command line argument I'm using is --lowvram, and I usually generate pictures at the default settings at a 512x512 image size.

You can see all the command line arguments and what they do here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki...
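For reference, a minimal sketch of where that flag goes: the repo's launch scripts read flags from the COMMANDLINE_ARGS variable in webui-user.bat (Windows) or webui-user.sh (Linux/macOS), so a low-VRAM setup looks roughly like this (config fragment, not a runnable script):

```shell
:: webui-user.bat (Windows)
set COMMANDLINE_ARGS=--lowvram

:: webui-user.sh (Linux/macOS) uses shell syntax instead:
:: export COMMANDLINE_ARGS="--lowvram"
```

On cards with 4-6 GB, --medvram is the usual middle ground: less aggressive offloading than --lowvram, so it's faster.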


I guess it could even work on a Jetson Nano (4 GB) then; I run models of ~1.6 GB on it 24/7. I'll give this a try.


This needs Windows 10/11 though?


Nope. There are instructions for Windows, Linux and Apple Silicon in the readme: https://github.com/AUTOMATIC1111/stable-diffusion-webui

There's also this fork of AUTOMATIC1111's fork, which also has a Colab notebook ready to run, and it's way, way faster than the KerasCV version: https://github.com/TheLastBen/fast-stable-diffusion

(It also has many, many more options and some nice, user-friendly GUIs. It's the best version for Google Colab!)


Brilliant thanks.



