ComfyUI guide (compiled from Reddit)

I managed to get Stable Video working in Forge, but the performance was disappointing. I heard that it can run pretty well in ComfyUI, but I haven't found a guide for installing Stable Video in ComfyUI that I've been able to follow.

What is ComfyUI and what does it do? ComfyUI is a node-based user interface for Stable Diffusion. Find tips, tricks and refiners to enhance your image quality.

However, I am curious about how A1111 handles various processes at the latent level, which ComfyUI does extensively with its node-based approach. This topic aims to answer what I believe would be the first questions an A1111 user might have about Comfy. Reproducing the behavior of the most popular SD implementation (and then surpassing it) would be a very compelling goal, I would think.

Because I definitely struggled with what you're experiencing: I'm currently 3-4 months into ComfyUI and finally understanding what each node does, and there are still so many custom nodes that I don't have the patience to read up on to find their functionality. As soon as I try to add a ControlNet model or do some inpainting, I get lost.

The biggest tip for Comfy: you can turn most node settings into an input by right-clicking the node (RMB, Convert to Input), then connecting a Primitive node to that input. You can then connect the same Primitive node to five other nodes and change them all in one place instead of editing each node.

I have no problem with Comflowy and it looks like a cool tool. But it's an ad for Comflowy posing as a tutorial for ComfyUI, and this type of crap leaves a sour taste; this tool, along with its associated domains, is going right into my DNS blocklist.

From ChatGPT: Guide to Enhancing Illustration Details with Noise and Texture in Stable Diffusion (based on 御月望未's tutorial). This guide explores a technique for significantly enhancing the detail and color in illustrations using noise and texture in Stable Diffusion.

Heads up: Batch Prompt Schedule does not work with the Python API templates provided on the ComfyUI GitHub. I believe it's due to the syntax within the scheduler node breaking the syntax of the overall prompt JSON load; I've submitted a bug to both ComfyUI and Fizzledorf, as I'm not sure which side will need to correct it. One question: when doing txt2vid with Prompt Scheduling, any tips for getting more continuous video that looks like one continuous shot, without "cuts" or sudden morphs/transitions between parts?
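For anyone poking at those API templates: at their core they just post a workflow saved in API format to ComfyUI's /prompt endpoint. Below is a minimal sketch, assuming the default local address (127.0.0.1:8188) and a placeholder workflow_api.json exported with "Save (API Format)"; it is not the Batch Prompt Schedule fix, just the plain queueing path.

```python
import json
import urllib.request

# Workflow exported from ComfyUI with "Save (API Format)" -- placeholder file name.
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Multi-line values (for example a schedule node's text) must survive JSON encoding
# intact; building the payload with json.dumps avoids hand-escaping mistakes.
payload = json.dumps({"prompt": workflow}).encode("utf-8")

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # default ComfyUI address and port
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # response includes the queued prompt id
```

If it queues successfully, the returned prompt id can later be looked up via the /history endpoint to fetch the results.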
It'll be perfect if it includes upscale too (though I can upscale it in an extra step in the Extras tab of A1111).

It is actually faster for me to load a LoRA in ComfyUI than in A1111. Trying out img2img on ComfyUI, and I like it much better than A1111. TBH, I haven't used A1111 extensively, so my understanding of A1111 is not deep, and I don't know what doesn't work in A1111. ComfyUI is not supposed to reproduce A1111 behaviour.

I'm not the creator of this software, just a fan. To give you an idea of how powerful it is: StabilityAI, the creators of Stable Diffusion, use ComfyUI to test Stable Diffusion internally. ComfyUI was created in January 2023 by Comfyanonymous, who created the tool to learn how Stable Diffusion works. It's not some secret proprietary or compiled code; that means you can 'human read' the files that make ComfyUI tick and make tweaks if you desire, in any text editor. Mine is Sublime, but there are others, even good ol' Notepad. Plus it has what I term the 'Red List of Death' and the log file to help guide the user to fixes after a crash.

I found the documentation for ComfyUI to be quite poor when I was learning it. It needs a better quick start to get people rolling.

For instructions, read the Accelerated PyTorch training on Mac Apple Developer guide (make sure to install the latest PyTorch nightly).

[ 🔥ComfyUI - InstanceDiffusion: Create Motion Guide Animation ] Powered by SD 1.5, you can create frame-by-frame animations with spline guides. It conditions the coordinate value with 2-dimensional coordinates, frame by frame.

Learn how to use Comfy UI, a powerful GUI for Stable Diffusion, with this full guide. Check out Think Diffusion for a fully managed ComfyUI online service.

I made a long guide called [Insights for Intermediates] - How to craft the images you want with A1111, on Civitai. It's the guide that I wished existed when I was no longer a beginner Stable Diffusion user.

Flux.1 ComfyUI install guidance, workflow and example: Flux is a family of diffusion models by Black Forest Labs. This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. It covers the following topics: introduction to Flux.1; overview of the different versions of Flux.1; Flux hardware requirements; and how to install and use Flux.1 with ComfyUI. You will need a working ComfyUI to follow this guide.

This is awesome! Thank you! I have it up and running on my machine. Below I have set up a basic workflow. I have done a few simple workflows and love the speed I can get with my 8GB 4060.

SDXL most definitely doesn't work with the old ControlNet.

Since LoRAs are a patch on the model weights, they can also be merged into the model. You can also subtract model weights and add them, as in the example used to create an inpaint model from a non-inpaint model with the formula: (inpaint_model - base_model) * 1.0 + other_model. If you are familiar with the "Add Difference" option in other UIs, this is how to do it in ComfyUI.
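To make that formula concrete, here is a rough standalone sketch of the same per-tensor arithmetic done outside ComfyUI with safetensors; inside ComfyUI you would wire the model-merge subtract/add nodes instead, and the checkpoint file names below are placeholders.

```python
import torch
from safetensors.torch import load_file, save_file

# Placeholder file names -- substitute your own checkpoints.
inpaint = load_file("sd15_inpaint.safetensors")
base    = load_file("sd15_base.safetensors")
other   = load_file("my_finetune.safetensors")

merged = {}
for key, tensor in other.items():
    if key in inpaint and key in base and inpaint[key].shape == base[key].shape == tensor.shape:
        # (inpaint_model - base_model) * 1.0 + other_model, applied weight by weight
        merged[key] = (inpaint[key].float() - base[key].float()) * 1.0 + tensor.float()
    else:
        # Layers the difference doesn't cover, or whose shapes differ (e.g. an inpaint
        # model's first conv with extra channels), are just passed through here;
        # real merge tools handle those cases more carefully.
        merged[key] = tensor

save_file({k: v.half() for k, v in merged.items()}, "my_finetune_inpaint.safetensors")
```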
I am fairly comfortable with A1111 but am having a terrible time understanding how to run ComfyUI. I've been using a ComfyUI workflow, but I've run into issues that I haven't been able to resolve, even with ChatGPT's help.

It is pretty amazing, but man, the documentation could use some TLC, especially on the example front. The creator has recently opted into posting YouTube examples which have zero audio, captions, or anything to explain to the user what exactly is happening in the workflows being generated.

SETUP WSL. Pull/clone, install requirements, etc. You don't need to be a Linux guru to follow this guide, although some basic skills might help. Follow the ComfyUI manual installation instructions for Windows and Linux, or see the installation guide for local installation. If you are a noob and don't have them already, grab Efficiency Nodes, too. Latest ComfyUI release and the following custom nodes installed: ComfyUI-Manager, ComfyUI Impact Pack, ComfyUI's ControlNet Auxiliary Preprocessors, ComfyUI-ExLlama, and ComfyUI set to use a shared folder that includes all kinds of models.

If you don't have TensorRT installed, the first thing to do is update your ComfyUI and get your latest graphics drivers, then go to the official Git page.

One of the strengths of ComfyUI is that it doesn't share the checkpoint with all the tabs. In A1111, when you change the checkpoint, it changes it for all the active tabs. ComfyUI will automatically load the correct checkpoint each time you generate an image, without you having to do it manually.

Beyond that, this covers foundationally what you can do with IPAdapter; however, you can combine it with other nodes to achieve even more, such as using ControlNet to add in specific poses or transfer facial expressions (video on this coming), or combining it with AnimateDiff to target animations. I'm working on a part two that covers composition and how it differs with ControlNet.

For my first successful test image, I pulled out my personally drawn artwork again, and I'm seeing a great deal of improvement. Original art by me.

Could anyone recommend the most effective way to do a quick face swap on an MP4 video? It doesn't necessarily have to be with ComfyUI; I'm open to any tools or methods that offer good quality and reasonable speed.

Image Processing: a group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

Put the flux1-dev.safetensors file in your ComfyUI/models/unet/ folder. Flux Schnell is a distilled 4-step model; you can find the Flux Schnell diffusion model weights here, and this file should also go in your ComfyUI/models/unet/ folder. You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other and can be bigger.
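Roughly, the tiling step works like the sketch below; a minimal illustration with PIL, assuming a placeholder input image that has already been GAN-upscaled, and leaving the per-tile SD img2img pass as a stub.

```python
from PIL import Image

def overlapping_tiles(img, tile=512, overlap=64):
    """Yield (box, crop) pairs that cover the image with overlapping tiles."""
    w, h = img.size
    step = tile - overlap
    xs = list(range(0, max(w - tile, 0) + 1, step))
    ys = list(range(0, max(h - tile, 0) + 1, step))
    if xs[-1] + tile < w:  # make sure the right edge is covered
        xs.append(w - tile)
    if ys[-1] + tile < h:  # make sure the bottom edge is covered
        ys.append(h - tile)
    for y in ys:
        for x in xs:
            box = (x, y, min(x + tile, w), min(y + tile, h))
            yield box, img.crop(box)

# "upscaled.png" is a placeholder for an image already enlarged by a GAN upscaler.
img = Image.open("upscaled.png")
out = img.copy()
for box, piece in overlapping_tiles(img):
    refined = piece  # stub: each tile would go through SD img2img at low denoise here
    out.paste(refined, box[:2])
out.save("tiled.png")
```

Real implementations also blend the overlapping edges back together so the seams between tiles don't show.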
I definitely agree that someone should have some sort of detailed course/guide. I created this subreddit to separate discussions from Automatic1111 and Stable Diffusion discussions in general. Loading a PNG to see its workflow is a lifesaver to start understanding the workflow GUI, but it's not nearly enough. Oh yes! I understand where you're coming from.

The ComfyUI-Wiki is an online quick reference manual that serves as a guide to ComfyUI. It primarily focuses on the use of different nodes, installation procedures, and practical examples that help users engage effectively with ComfyUI. Thanks for the tips on Comfy! I'm enjoying it a lot so far.

However, I understand that video guides benefit the guide-maker far more through possible ad revenue. Actually, I think most users here prefer written guides with illustrations over video, just judging from a lot of posts I've seen whenever a written guide is posted. Thanks! I so often end up spending 30 minutes watching a vid only to find it doesn't work with my version of whatever, or the ultimate answer is to buy the guy's plugin, script, etc. Maybe it's from Cinema 4D, with so many versions and so many tuts that don't mention the version.

First of all, a huge thanks to Matteo for the ComfyUI nodes and tutorials! You're the best! After the ComfyUI IPAdapter Plus update, Matteo made some breaking changes that force users to get rid of the old nodes, breaking previous workflows. A simple FAQ or Migration Guide is nowhere to be found.

The most direct method in ComfyUI is using prompts. In the positive prompt, I described that I want an interior design image with a bright living room and rich details. Midjourney may not be as flexible as ComfyUI in controlling interior design styles, making ComfyUI a better choice.

If you don't have t5xxl_fp16.safetensors or clip_l.safetensors already in your ComfyUI/models/clip/ directory, you can find them on this link.

Beginners' guide to ComfyUI 😊 We discussed the fundamental ComfyUI workflow in this post 😊 You can express your creativity with ComfyUI. #ComfyUI #CreativeDesign #ImaginativePictures #Jarvislabs.ai

Can someone guide me to the best all-in-one workflow that includes base model, refiner model, hi-res fix, and one LoRA? I know there is the ComfyAnonymous workflow, but it's lacking. For example, it's like performing sampling with the A model for only part of the steps.

Amazing custom node has been introduced 😲 Check out the link below for the Git address, or just use ComfyUI Manager to grab it. For anyone still looking for an easier way, I've created a @ComfyFunc annotator that you can add to your regular Python functions to turn them into ComfyUI operations. You just have to annotate your function so the decorator can inspect it and auto-create the ComfyUI node definition.
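For context, this is the sort of boilerplate a regular ComfyUI custom node otherwise needs, which is what a decorator like @ComfyFunc aims to generate from the function signature. The sketch below is written against ComfyUI's documented node interface, not ComfyFunc's actual API; the node name and behaviour are made up for illustration.

```python
# Saved as ComfyUI/custom_nodes/invert_example.py -- a generic example node.
class InvertImageExample:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # ComfyUI passes images as float tensors shaped (batch, H, W, C) in 0..1
                "image": ("IMAGE",),
                "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "invert"
    CATEGORY = "image/postprocessing"

    def invert(self, image, strength):
        # blend between the original image and its negative
        return (image * (1.0 - strength) + (1.0 - image) * strength,)

NODE_CLASS_MAPPINGS = {"InvertImageExample": InvertImageExample}
NODE_DISPLAY_NAME_MAPPINGS = {"InvertImageExample": "Invert Image (example)"}
```

Drop a file like this into ComfyUI/custom_nodes/ and restart, and the node should appear in the add-node menu under the chosen category.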