ComfyUI upscale model download (Reddit roundup)

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art, and please keep posted images SFW.

So I made an upscale test workflow that uses the exact same latent input and destination size. One does an image upscale and the other a latent upscale. I wanted to know what difference they make, and they do!
- Latent upscale looks much more detailed, but gets rid of the detail of the original image; that's because latent upscale first turns the base image into noise (blur).
- Image upscale is less detailed, but more faithful to the image you upscale.
For the best results, diffuse again with a low denoise, tiled or via Ultimate SD Upscale (without scaling!). These comparisons are done using ComfyUI with default node settings and fixed seeds, with no attempts to fix jpg artifacts, etc.

Edit: I changed models a couple of times and restarted Comfy a couple of times… and it started working again… OP: So, this morning, when I left for…

The realistic model that worked best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely.

PS: If someone has access to Magnific AI, please can you upscale and post the results for 256x384 (5 jpg quality) and 256x384 (0 jpg quality)?

After generating my images I usually do Hires.fix, but since I'm using XL I skip that and go straight to img2img and do an SD Upscale by 2x.

I've been using Stability Matrix and also installed ComfyUI portable. However, I'm facing an issue with sharing the model folder. All the models are located in M:\AI_Tools\StabilityMatrix-win-x64\Data\Models. Additionally, the animatediff_models and clip_vision folders are placed in M:\AI_Tools\StabilityMatrix-win-x64\Data\Packages\ComfyUI\models. Hope someone can advise.

Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/Refiner: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image. Upscaling: increasing the resolution and sharpness at the same time.

ComfyUI Weekly Update: DAT upscale model support and more T2I adapters.

In the ComfyUI Manager, select Install Model, then scroll down to the ControlNet models and download the second ControlNet tile model (it specifically says in the description that you need it for tile upscale). SDXL most definitely doesn't work with the old ControlNet.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale like you mentioned. The downside is that it takes a very long time.

I want to upscale my image with a model, and then select the final size of it. For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). There's "latent upscale by", but I don't want to upscale the latent image.

You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model. For example, if you start with a 512x512 empty latent image, then apply a 4x model and "upscale by" 0.5, you get a 1024x1024 final image (512 * 4 * 0.5 = 1024).

Here is an example of how to use upscale models like ESRGAN: put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. So from VAE Decode you need an "Upscale Image (using model)" node plus a "Load Upscale Model" node under loaders: connect Load Upscale Model to Upscale Image (using model), feed it the image from VAE Decode, then send the result to your preview/save image node. All of this can be done in Comfy with a few nodes, as in the sketch below.
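To make the wiring concrete, here is a minimal sketch of that graph in ComfyUI's API format, posted to a local instance over its HTTP endpoint. The node IDs, the 4x-UltraSharp filename, the example.png input and the 1500x1500 target are illustrative assumptions, not values taken from the thread.

```python
import json
import urllib.request

# model upscale (fixed 4x) followed by an exact-size resize, then save
prompt = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "example.png"}},
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x-UltraSharp.pth"}},
    "3": {"class_type": "ImageUpscaleWithModel",  # the model always does its native 4x
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "ImageScale",             # then pick the exact final size
          "inputs": {"image": ["3", 0], "upscale_method": "bicubic",
                     "width": 1500, "height": 1500, "crop": "disabled"}},
    "5": {"class_type": "SaveImage",
          "inputs": {"images": ["4", 0], "filename_prefix": "upscaled"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```

The same idea covers the "4x model, then divide by 2" trick: just set the ImageScale width/height to half of whatever the model produced.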
Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out. This is just a simple node build off what's given and some of the newer nodes that have come out.

The first option is to use a model upscaler, which will work off your image node; you can download those from a website that has dozens of models listed, but a popular one is something like ESRGAN 4x. But for the other stuff, super small models and good results. There are plenty of ready-made workflows you can find.

The restore functionality, which adds detail, doesn't work well with lightning/turbo models.

For SD 1.5 I'd go for Photon, RealisticVision or epiCRealism. As well as Juggernaut XL and other XL models.

Edit: I am sorry, I didn't see that you were looking for the SDXL clip file; I thought you wanted the Cascade clip file.

2 - Custom models/LoRAs: tried a lot from CivitAI: epiCRealism, CyberRealistic, AbsoluteReality, Realistic Vision 5.1 and 6, etc. It turns out lovely results, but I'm finding that when I get to the upscale stage the face changes to something very similar every time.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. Now I have made a workflow that has an upscaler in it and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs. So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111?

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (SD 4X Upscale Model). I decided to pit the two head to head; here are the results, workflow pasted below (did not bind to image metadata because I am using a very custom weird…).

And when purely upscaling, the best upscaler is called LDSR. It will be interesting to see LDSR ported to ComfyUI, or any other powerful upscaler.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear. I don't bother going over 4K usually though; you get diminishing returns on render times with only 8GB VRAM ;P

For some context, I am trying to upscale images of an anime village, something like Ghibli style. Does anyone have any suggestions: would it be better to do an iterative upscale, and how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best. Reply: if you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook.

I get good results using stepped upscalers, Ultimate SD Upscaler and stuff.

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from… Do you have ComfyUI Manager? If you have ComfyUI Manager, you can directly download all the models from it. There are also "face detailer" workflows for faces specifically.

From the ComfyUI_examples, there are two different 2-pass (Hires fix) methods: one is latent scaling, the other is non-latent scaling. Usually I use two of my workflows. Upscale x1.5 ~ x2: no need for a model, it can be a cheap latent upscale. Sample again with denoise = 0.5; you don't need that many steps. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution. (A sketch of what the cheap latent upscale does is below.)
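As a rough illustration of the cheap latent-upscale pass, here is a toy sketch in plain torch. The tensor shape and the x1.5 factor are illustrative assumptions; inside ComfyUI this step is just the latent upscale node followed by a second KSampler.

```python
import torch
import torch.nn.functional as F

# a stand-in for an SD-style latent: [batch, 4 channels, h/8, w/8]
latent = torch.randn(1, 4, 64, 64)        # 64x64 latent ~ a 512x512 image

# the "cheap" upscale: simple interpolation in latent space, no model needed
upscaled = F.interpolate(latent, scale_factor=1.5, mode="bicubic")
print(upscaled.shape)                     # torch.Size([1, 4, 96, 96]) ~ 768x768

# Interpolation smears the latent (the "noise/blur" mentioned earlier), which
# is why the second sampling pass at denoise ~0.5 is what restores the detail.
```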
We are just using Ultimate SD upscales with a few ControlNets and tile sizes of ~1024px. I haven't been able to replicate this in Comfy.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

My guess is you downloaded a workflow from somewhere, but the person who created that workflow has changed the filename of the upscale model, and that's why your ComfyUI can't find it. Solution: click the node that calls the upscale model and pick one. But I think simply typing the file name in the search panel of ComfyUI Manager will get you the file.

Cause I run SDXL-based models from the start and through 3 Ultimate Upscale nodes. The last one takes time, I must admit, but it runs well and allows me to generate good quality images (I managed to get a seams-fix settings config that works well for the last one, hence the long processing). I rarely use upscale by model on its own because of the odd artifacts you can get; from what I've generated so far, the model upscale edges slightly better than the Ultimate Upscale.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. You can also do latent upscales, but it's weird.

I tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as the NMKD ones, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work). Same as SwinIR, which adds a lot of detail to the image.

Is it the best way to install ControlNet? Because when I tried manually doing it…

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it and pass into the node whatever image I like. It's a lot faster than tiling, but the outputs aren't detailed.

This one generates an SD1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode and colour matching.

The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other and can be bigger. This way it replicates the sd upscale/ultimate upscale scripts from A1111 (the tile arithmetic is sketched below).
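For illustration, a small sketch of the overlapping-tile split that tiled upscalers of this kind perform. The 512px tile size comes from the comment above; the 64px overlap is an assumed value.

```python
from typing import Iterator, Tuple

def iter_tiles(width: int, height: int, tile: int = 512,
               overlap: int = 64) -> Iterator[Tuple[int, int, int, int]]:
    """Yield (left, top, right, bottom) boxes that cover the image with overlap."""
    step = tile - overlap
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            yield (left, top, min(left + tile, width), min(top + tile, height))

# each box would be diffused at low denoise and blended back into the canvas
boxes = list(iter_tiles(2048, 2048))
print(len(boxes))        # 25 tiles for a 2048x2048 image at 512px/64px overlap
```

The overlap is what hides the seams: neighbouring tiles share a band of pixels that gets feathered together, which is also why these tools ship a separate "seams fix" pass.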
If you don't want the distortion, decode the latent, do an "upscale image by", then encode it again for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.

Step 1: Download the SDXL Turbo checkpoint.
Step 2: Download this sample image.
Step 3: Update ComfyUI.
Step 4: Start ComfyUI and enable Auto Queue (under Extra Options).
Step 5: Drag and drop the sample image into ComfyUI; if the workflow is not loaded, drag and drop the image you downloaded earlier.
Step 6: The fun begins! If the queue didn't start automatically, press Queue Prompt.

That workflow consists of video frames at 15fps into VAE Encode and CNs, a few LoRAs, AnimateDiff v3, lineart and scribble-sparsectrl CNs, a basic KSampler with low cfg, a small upscale, AD detailer to fix the face (with lineart and depth CNs in SEGS, the same LoRAs, and AnimateDiff), upscale with model, interpolate, combine to 30fps. In the Load Video node, click on "choose video to upload" and select the video you want; then output everything to Video Combine.

Plus, you want to upscale in latent space if possible.

Tried the llite custom nodes with lllite models and was impressed. Good for depth and OpenPose; so far so good. I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile. It didn't work out.

In the CR Upscale Image node, select the upscale_model and set the rescale_factor. I am curious both which nodes are the best for this, and which models.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. The workflow is kept very simple for this test: Load Image > Upscale > Save Image.

The aspect ratio of 16:9 is the same from the empty latent and anywhere else that image sizes are used. I have a custom image resizer that ensures the input image matches the output dimensions. Always wanted to integrate one myself.

Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

Simply save and then drag and drop the relevant image into your ComfyUI interface window with the ControlNet Tile model installed, load the image (if applicable) you want to upscale/edit, modify some prompts, press "Queue Prompt" and wait for the AI generation to complete.

I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer. It upscales to 2x and 4x in multi-steps, both with and without the sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more). You can also run a regular AI upscale then a downscale (4x * 0.5) with an ESRGAN model.

Search the sub for what you need and download the .json, or drag and drop the workflow image into the UI (I think the image has to not be from Reddit; Reddit removes metadata, I believe).

To enable higher-quality previews with TAESD, download the taesd_decoder.pth, taesdxl_decoder.pth, taesd3_decoder.pth and taef1_decoder.pth files and place them in the models/vae_approx folder. Once they're installed, restart ComfyUI and launch it with --preview-method taesd to enable high-quality previews. (A small download script is sketched below.)
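A minimal sketch of scripting those downloads, assuming the decoder files are hosted at the root of the madebyollin/taesd GitHub repository (where ComfyUI's README points) and that the script runs from the ComfyUI folder:

```python
import urllib.request
from pathlib import Path

BASE = "https://raw.githubusercontent.com/madebyollin/taesd/main/"
FILES = ["taesd_decoder.pth", "taesdxl_decoder.pth",
         "taesd3_decoder.pth", "taef1_decoder.pth"]

dest = Path("models/vae_approx")          # ComfyUI's approx-VAE folder
dest.mkdir(parents=True, exist_ok=True)

for name in FILES:
    target = dest / name
    if not target.exists():               # skip files already downloaded
        print("fetching", name)
        urllib.request.urlretrieve(BASE + name, target)
```

After that, restart ComfyUI with --preview-method taesd as described above.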
I was looking for tools that could help me set up ComfyUI workflows automatically and also let me use it as a backend, but couldn't find any, so I made one! Right now it installs the nodes through ComfyUI Manager and has a list of about 2000 models (checkpoints, LoRAs, embeddings, etc.). You can also provide your own custom link for a node or model.

You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. Also, both have a denoise value that drastically changes the result.

Still working on the whole thing, but I got the idea down. I believe it should work with 8GB VRAM provided your SDXL model and upscale model are not super huge; e.g. use a x2 upscaler model. Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.

Thanks for the tips on Comfy! I'm enjoying it a lot so far.

Upscaling on larger tiles will be less detailed / more blurry, and you will need more denoise, which in turn will start altering the result too much. If you are going for fine details, don't upscale in 1024x1024 tiles on an SD15 model, unless the model is specifically trained on such large sizes.

I believe the problem comes from the interaction between the way Comfy's memory management loads checkpoint models (note that this issue still happens if smart memory is disabled) and Ultimate Upscale bypassing torch's garbage collection, because it's basically a janky wrapper for an Auto1111 extension.

I love to go with an SDXL model for the initial image and with a good 1.5 model for the diffusion after scaling. This is done after the refined image is upscaled and encoded into a latent; attach to it a "latent_image", in this case the "upscale latent".

I'm using the mm_sd_v15_v2.ckpt motion model with Kosinkadink's AnimateDiff Evolved. The Stable Diffusion model used in this demonstration is Lyriel.

For comparison, in A1111 I drop the ReActor output image into the img2img tab, keep the same latent size, use a tile ControlNet model, choose the Ultimate SD Upscale script and scale it by the factor 2. With a denoise setting of 0.15-0.25 I get a good blending of the face without changing the image too much. This is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption.

In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control on how much that model will multiply (often a slider from 1 to 4 or more). So in those other UIs I can use my favorite upscaler (like NMKD's 4xSuperscalers) but I'm not forced to have them only multiply by 4x.

From ChatGPT: Guide to Enhancing Illustration Details with Noise and Texture in StableDiffusion (based on 御月望未's tutorial). Overview: this guide, inspired by 御月望未's tutorial, explores a technique for significantly enhancing the detail and color in illustrations using noise and texture in StableDiffusion. (A toy sketch of the idea follows below.)
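As a toy sketch of the noise-injection idea behind that guide; the strength values are assumptions for illustration, not the tutorial's settings:

```python
import torch

def inject_noise(latent: torch.Tensor, strength: float = 0.1) -> torch.Tensor:
    """Blend gaussian noise into a latent to give the sampler texture to amplify."""
    return latent + torch.randn_like(latent) * strength

# a stand-in latent; in practice this comes from VAE-encoding the illustration
latent = torch.randn(1, 4, 96, 96)
noisier = inject_noise(latent, strength=0.15)
# ...then run an img2img/KSampler pass at low denoise over `noisier`
```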
pth "Latent upscale" is an operation in latent space and I don't know any way to use the model, mentioned above, in latent space. pth and place them in the models/vae_approx folder. 25 i get a good blending of the face without changing the image to much. it upscales the second image up to 4096x4096 (4xultrasharp) by default for simplicity but can be changed to whatever. That's because latent upscale turns the base image into noise (blur). xxltet oonjxb mzjsp rohp cpesejgy ewnrg pnu thno mxsshsk kwmp