
Allow for us to input desired output resolution #29

Open
KAmaia opened this issue Sep 15, 2022 · 2 comments

Comments


KAmaia commented Sep 15, 2022

Could we get some parameters that let us specify the desired output resolution? I am attempting to use SD to generate super-ultrawide desktop backgrounds, and it would be nice to have this feature.

@benedlore

I used the original arguments and was able to control the size. For example, adding "--H 704" gives a more portrait aspect ratio, 1024x1408 (see the example invocation below).
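If it helps, a minimal invocation along these lines (a sketch only: --H is the flag quoted above, while --prompt, --W, and the 512 base width are assumed from the stock CompVis txt2img interface):

    # base pass renders 512x704; txt2imghd's 2x upscale then yields 1024x1408
    python scripts/txt2imghd.py --prompt "a mountain lake at dawn" --W 512 --H 704

The doubling of each dimension is consistent with a 704-pixel base height becoming 1408 in the reported output.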


KAmaia commented Sep 24, 2022

Those arguments seem to get passed to txt2img, not txt2imghd. Trying to generate a 7680x1440 image results in the script exhausting all available VRAM and dying:

Traceback (most recent call last):
  File "scripts/txt2imgHD.py", line 549, in <module>
    main()
  File "scripts/txt2imgHD.py", line 365, in main
    text2img2(opt)
  File "scripts/txt2imgHD.py", line 443, in text2img2
    samples_ddim, _ = sampler.sample(S=opt.steps,
  File "C:\Users\3vilpcdiva\.conda\envs\ldm\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\stable-diffusion\ldm\models\diffusion\plms.py", line 97, in sample
    samples, intermediates = self.plms_sampling(conditioning, size,
  File "C:\Users\3vilpcdiva\.conda\envs\ldm\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\stable-diffusion\ldm\models\diffusion\plms.py", line 152, in plms_sampling
    outs = self.p_sample_plms(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
  File "C:\Users\3vilpcdiva\.conda\envs\ldm\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\stable-diffusion\ldm\models\diffusion\plms.py", line 218, in p_sample_plms
    e_t = get_model_output(x, t)
  File "c:\stable-diffusion\ldm\models\diffusion\plms.py", line 185, in get_model_output
    e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
  File "c:\stable-diffusion\ldm\models\diffusion\ddpm.py", line 987, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\3vilpcdiva\.conda\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\stable-diffusion\ldm\models\diffusion\ddpm.py", line 1410, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\3vilpcdiva\.conda\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\stable-diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 732, in forward
    h = module(h, emb, context)
  File "C:\Users\3vilpcdiva\.conda\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\stable-diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 85, in forward
    x = layer(x, context)
  File "C:\Users\3vilpcdiva\.conda\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\stable-diffusion\ldm\modules\attention.py", line 258, in forward
    x = block(x, context=context)
  File "C:\Users\3vilpcdiva\.conda\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\stable-diffusion\ldm\modules\attention.py", line 209, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "c:\stable-diffusion\ldm\modules\diffusionmodules\util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "c:\stable-diffusion\ldm\modules\diffusionmodules\util.py", line 127, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "c:\stable-diffusion\ldm\modules\attention.py", line 212, in _forward
    x = self.attn1(self.norm1(x)) + x
  File "C:\Users\3vilpcdiva\.conda\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\stable-diffusion\ldm\modules\attention.py", line 180, in forward
    sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
  File "C:\Users\3vilpcdiva\.conda\envs\ldm\lib\site-packages\torch\functional.py", line 330, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: CUDA out of memory. Tried to allocate 889.89 GiB (GPU 0; 23.99 GiB total capacity; 5.66 GiB already allocated; 15.42 GiB free; 5.98 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
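For what it's worth, the 889.89 GiB figure is consistent with the quadratic memory cost of the self-attention sim matrix in attention.py at full latent resolution. A rough back-of-the-envelope check (the 8x VAE downsampling factor, 8 attention heads, cond+uncond batch of 2, and fp16 activations are assumptions, not values from the log):

    # Hypothetical sanity check of the allocation size in the traceback above.
    W, H = 7680, 1440
    tokens = (W // 8) * (H // 8)                # 960 * 180 = 172,800 latent positions (assumes 8x VAE downsampling)
    batch_heads = 2 * 8                         # cond+uncond batch x attention heads (assumed)
    attn_bytes = batch_heads * tokens ** 2 * 2  # one 'b i j' similarity matrix in fp16
    print(f"{attn_bytes / 2**30:.2f} GiB")      # prints 889.89 GiB, matching the error

Since that is a single tensor, the max_split_size_mb fragmentation hint in the message cannot help here; the base diffusion pass has to run at a smaller resolution, with the upscaling done afterwards.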
