From the latest Stable Diffusion development notes, it was stated that it will run acceptably on Windows 10 or higher with a discrete Nvidia video card (GPU) with 4 GB of VRAM or more - the minimum graphics hardware specification. Meaning, the GPU runs/processes the application by default, not the CPU. Otherwise, you'll run out of memory and experience PC failures. If you look at the SD GPU performance benchmarks, you'll see the big difference in how the top GPUs perform with SD - what more if your card isn't on the list at all. For those of us who don't yet own the required hardware (like me), there are still ways to install and try it, regardless of how long it takes to process your images.
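If you're not sure how much VRAM your card has, a quick check like this can tell you (a sketch assuming the NVIDIA driver's nvidia-smi tool is installed; it just prints a message on CPU-only machines):

```shell
# Report total VRAM if an NVIDIA driver is present, otherwise suggest the CPU route.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=memory.total --format=csv,noheader
else
    echo "No NVIDIA driver detected - consider the CPU-only route"
fi
```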
People can still use the open-source Stable Diffusion through free online trial demos, or via notebooks on the Google Colab service for a limited time. That's the best option.
But I'll focus on the unlimited offline use of some SD forks that work. I'm not an expert on Stable Diffusion usage, so I'm mainly concentrating on possible non-standard methods that work with only a CPU, or a low-end GPU at best. So bear with me.
First, the option we will use: a lightweight SD version, similar to another CPU-only SD fork.
This is the fork and guide I initially used for the trial, but you need to replace line #17 of install_sdco.bat with the pip command shown in the Code block below.
Stable Diffusion CPU only
This fork of Stable-Diffusion doesn't require a high end graphics card and runs exclusively on your cpu. It's been tested on Linux Mint 22.04 and Windows 10.
This isn't the fastest experience you'll have with Stable Diffusion, but it does let you use it, along with most of the current features floating around on the internet, such as txt2img, img2img, image upscaling with Real-ESRGAN, and face restoration with GFPGAN.
An install guide video is also available.
Requirements
Windows and Linux requirements
Install Anaconda. Yes, even on a Linux system, Anaconda needs to be installed.
Windows
Install Visual Studio Community Edition (needed to build one file).
Click on the free download and make sure to check "Desktop development with C++" when installing.
Install Git
A version control manager for code; we just use it to download repos from GitHub. It must be on the system PATH - when installing, select the option to add it to the system PATH.
Install Wget
Used to download models for the projects. Windows users need the .exe version; download it and put it on your PATH. (I copied it to my C:/Windows/System directory - this isn't the correct way, just the fastest to get it working.)
Reboot your system just to make sure everything is properly loaded.
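Once git and wget are installed, a quick sanity check (works in Git Bash on Windows or any Linux shell) confirms they're actually reachable from PATH:

```shell
# Confirm each prerequisite resolves on PATH before running the installer.
for tool in git wget; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING - install it and check your PATH"
    fi
done
```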
Linux Mint 22.04
Install git and wget with the following command
sudo apt-get -y install git wget build-essential
Installation of Stable-Diffusion-cpuonly
Download Stable-Diffusion-cpuonly
Download this GitHub repository and extract the files.
Download the CompVis Stable-diffusion model.
Go to the CompVis page and download the correct model. You'll have to set up an account and agree to the license, but this is the bread-and-butter AI art-generating learning model.
Copy the file to your stable-diffusion-cpuonly-main directory.
Download the GFPGAN model
Copy the file to your stable-diffusion-cpuonly-main directory.
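Before running the install script, it's worth confirming both downloads landed in the repo root. A minimal check (both filenames here are assumptions based on this guide; adjust to match what your downloads are actually called):

```shell
# Check that the model files are in the repo root before installing.
# model.ckpt and GFPGANv1.3.pth are assumed names - adjust to your downloads.
for f in model.ckpt GFPGANv1.3.pth; do
    if [ -f "$f" ]; then
        echo "$f: ok"
    else
        echo "$f: missing"
    fi
done
```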
Windows - Running the install script
Open a terminal or PowerShell window, cd to your stable-diffusion-cpuonly-main directory, and run
.\install_sdco.bat
Linux - Running the install script
Open a terminal, cd to your stable-diffusion-cpuonly-main directory, and run
bash -i install_sdco.sh
Windows - Starting Stable-Diffusion-cpuonly
Run the following command
.\run_sdco.bat
Linux - Starting Stable-Diffusion-cpuonly
Run the following command
bash -i run_sdco.sh
Code (the replacement for line #17 of install_sdco.bat):
call pip install -e git+https://github.com/crowsonkb/k-diffusion#egg=k_diffusion
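If you'd rather script the line-17 swap than edit the file by hand, something like this works. It's demonstrated on a stand-in file here so it's safe to try anywhere; against the real repo, skip the first line and run sed directly on install_sdco.bat:

```shell
# Create a 20-line stand-in so the demo is self-contained; skip this line
# and run sed on the real install_sdco.bat in the repo root instead.
printf 'call echo step %s\n' $(seq 20) > install_sdco.bat

# Replace line 17 with the k-diffusion pip command (keeps a .bak backup).
sed -i.bak '17c\
call pip install -e git+https://github.com/crowsonkb/k-diffusion#egg=k_diffusion' install_sdco.bat

sed -n '17p' install_sdco.bat   # show the new line 17
```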
I already mentioned this in another related post. Here is the pic.
My 2nd try using the same test prompts resulted in much better graphics compared to the earlier post. Though SD-cpuonly works, it took me ~900 seconds to create an image. I'm not satisfied, but that is the result. The only consolation is the use of GFPGAN (a good image restorer) plus Real-ESRGAN, a high-end upscaler, and some other standard image-processing tools.
BTW, I'm using an Intel Core i7-3632QM with 12 GB of RAM (and a 2 GB NVIDIA GT 740M with CUDA 10 drivers, but it isn't used). It took around 8 GB of memory to run the app and open the GUI via the browser link. After that, it took another ~4 GB of memory to process/create the image in that timeframe.
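For scale, the ~900 s per image figure works out to roughly four images per hour on that CPU:

```shell
# Back-of-envelope throughput from the ~900 s/image timing above.
seconds_per_image=900
echo "$(( 3600 / seconds_per_image )) images/hour"   # prints "4 images/hour"
```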
You can also try a different model: download it, place it in your working directory, and run install_sdco.bat again to replace your model.ckpt. Then see the results on your next render.
BTW, there is also a web GUI for this by the same forker.
Check the new features if you want to try it.
Note: Please read the whole development page to learn the features and limitations. At the moment, it can't use ControlNet or safetensors. You can try merging models (via Google Colab) or try the tools provided elsewhere. Do it at your own risk and create your hybrids, he he. Use your imagination.
Thanks. I'll update later.
PS: Check the other related threads to try SD as well.