ComfyUI workflow PNG examples (from Reddit)

First of all, sorry if this has been covered before; I did search and nothing came back.

Just started with ComfyUI and really love the drag-and-drop workflow feature. Plus, there are a ton of extensions that provide plenty of ease-of-use features. For example, I just glance at my workflows, pick the one I want, drag and drop it into ComfyUI, and I'm ready to go. I totally agree, though the documentation uses highly technical language, with no examples, which makes it worse.

Hey guys, I always love seeing a cool image online and trying to reproduce it, but finding the original method or workflow is troublesome, since Google's image search just shows similar-looking images. So I added a reverse image search that queries a workflow catalog to find workflows that produce similar-looking results.

I have a workflow with this kind of loop, where the latest generated image is loaded, encoded to latent space, sampled with 0.5 noise, decoded, then saved. The sample prompt as a test shows a really great result. Each time I do a step, though, I can see the color shifting somehow, and the quality and color coherence of the newly generated pictures are hard to maintain.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make; my ComfyUI workflow was created to solve that. It took me hours to get one I'm more or less happy with: I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask), and use 'only masked area' so that it also applies to the ControlNet (applying it to the ControlNet was probably the worst part).

Using just the base model in AUTOMATIC1111 with no VAE produces this same result. That's because the base 1.0 version of the SDXL model already has that VAE embedded in it.

And my workflow itself, for something like SDXL with Refiner upscaled to 4k x 4k, is super simple. But the workflow is dead simple:
model - dreamshaper_7
Pos Prompt - sexy ginger heroine in leather armor, anime
Neg Prompt - ugly
Sampler - euler
steps - 20
cfg - 8
seed - 674367638536724
That's it.

=== How to prompt this workflow ===
Main Prompt
-----
The subject of the image, in natural language.
Example: a cat with a hat in a grass field

Secondary Prompt
-----
A list of keywords derived from the main prompt, with references to artists at the end.
Example: cat, hat, grass field, style of [artist name] and [artist name]

Style and References
-----
Here I just use: futuristic robotic iguana, extreme minimalism, white porcelain robot animal, details, built by Tesla, Tesla factory in the background. I'm not using "breathtaking", "professional", "award winning", etc., because that's already handled by "sai-enhance".

If the term "workflow" has only ever been used exclusively to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or simply "nodes". If, however, the term has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. EDIT: this workflow, for example, shows the use of the other prompt windows.

I conducted an experiment on a single image using SDXL 1.0 and ComfyUI to explore how doubling the sample count affects the output, especially at higher sample counts, seeing where the image changes relative to the sampling steps. Increasing the sample count leads to more stable and consistent results.

From the ComfyUI_examples, there are two different 2-pass (hires-fix) methods: one uses latent scaling, the other non-latent scaling. Now there's also a `PatchModelAddDownscale` node. I tried to find either of those two examples, but I have so many damn images I couldn't find them.
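Since the latent-scaling method comes up repeatedly above, here is a minimal sketch of what its second pass looks like in ComfyUI's API (JSON) format, written as a Python dict. The node ids, sizes, and wiring are illustrative assumptions, not taken from any of the posts; the field names follow the stock LatentUpscale and KSampler nodes.

```python
# Minimal sketch; node ids ("3", "4", "6", "7", "10", "11") are assumptions.
# Second pass of a latent-scaling hires fix: upscale the latent from pass
# one, then re-sample it at a reduced denoise strength.
second_pass = {
    "10": {  # upscale the latent produced by the first KSampler (node "3")
        "class_type": "LatentUpscale",
        "inputs": {
            "samples": ["3", 0],          # latent output of the first pass
            "upscale_method": "nearest-exact",
            "width": 1536,
            "height": 1536,
            "crop": "disabled",
        },
    },
    "11": {  # re-sample the upscaled latent; low denoise keeps composition
        "class_type": "KSampler",
        "inputs": {
            "model": ["4", 0],            # checkpoint loader
            "positive": ["6", 0],         # CLIP text encode (positive)
            "negative": ["7", 0],         # CLIP text encode (negative)
            "latent_image": ["10", 0],
            "seed": 674367638536724,
            "steps": 20,
            "cfg": 8.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            "denoise": 0.5,               # mirrors the 0.5-noise loop above
        },
    },
}
```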
Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. A lot of people are just discovering this technology and want to show off what they created, so please keep posted images SFW. Belittling their efforts will get you banned; above all, be nice. Also, if this is new and exciting to you, feel free to post.

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for like a month and it's finally ready, so I also made a tutorial on how to use it: https://youtu.be/ppE1W0-LJas. ComfyUI breaks down a workflow into rearrangeable elements, so you can easily make your own. In this guide I will try to help you get started and give you some starting workflows to work with.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. Where can one get ready-made, elaborate workflows? It would be nice to use them; for example, ones that do Tile Upscale like we're used to in AUTOMATIC1111, to produce huge images.

Here is the workflow for ComfyUI, updated in a folder on Google Drive with both the JSON and PNG of some of my workflows; example by @midjourney_man - img2vid. No refiner. I'm trying to do the same as hires fix, with a model and a low denoising weight.

The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA. From what I see in the ControlNet and T2I-Adapter examples, this allows me to set both a character pose and its position in the composition. There is also a group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

I'm using the ComfyUI notebook from their repo, using it remotely in Paperspace.

It works by converting your workflow JSON files into an executable Python script that can run without launching the ComfyUI server. Potential use cases include: streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values.

The API workflows are not the same format as an image workflow: you create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button you've probably used before. If you can't see that button, you need to check 'enable dev mode options'. Note that the ComfyUI editor itself only loads workflows saved with the "Save" button, not with the "Save API Format" button. On that note, I just wanted to share that I have updated the comfy_api_simplified package, and it can now be used to send images, run workflows, and receive images from a running ComfyUI server. I am personally using it as a layer between a Telegram bot and ComfyUI, to run different workflows and get results from the user's text and image input.
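To make the "Save (API Format)" output concrete, here is a minimal sketch of queueing such a file on a running ComfyUI server over its HTTP endpoint. It assumes a default local server at 127.0.0.1:8188; the filename and the node id being edited are hypothetical.

```python
import json
import urllib.request

# Load a workflow exported with "Save (API Format)" (filename is hypothetical).
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Optionally tweak inputs programmatically, e.g. the positive prompt of node "6".
# (Node ids depend on your graph; inspect the JSON to find the right ones.)
workflow["6"]["inputs"]["text"] = "futuristic robotic iguana, extreme minimalism"

# Queue the prompt on the locally running ComfyUI server.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # response includes the queued prompt_id
```

The returned prompt_id is what the history-polling sketch at the end of this page uses to fetch the finished image.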
There is the "example_workflow.png" in the file list at the top, and then you should click Download Raw File; but alas, in this case the workflow does not load. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. Apparently the dev uploaded some version with trimmed data.

I was confused by the fact that, in several YouTube videos by Sebastian Kamph and Olivio Sarikas, I saw them simply drop PNGs into an empty ComfyUI. Those images have to contain a workflow, so one you've generated yourself, for example. And you need to drag them into an empty spot, not onto a Load Image node or something.

Dragging a generated PNG onto the webpage, or loading one, will give you the full workflow, including the seeds that were used to create it. If you were asking how to put the workflow into the PNG: you just need to create the PNG in ComfyUI and it will automatically contain the workflow. Download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. This makes it potentially very convenient to share workflows with others. Hello, I'm wondering if the ability to read workflows embedded in images is connected to the workspace configuration.

Unfortunately, Reddit strips the workflow info from uploaded PNG files. I had to place the image into a zip, because people have told me that Reddit strips PNGs of their metadata. So OP, please upload the PNG to civitai.com or https://imgur.com and then post a link back here, if you are willing to share it. Otherwise, please change the flair to "Workflow not included".

- If the image was generated in ComfyUI and the metadata is intact (some users / websites remove the metadata), you can just drag the image into your ComfyUI window.
- If the image was generated in ComfyUI, the civitai image page should have a "Workflow: xx Nodes" box. Click this and paste into Comfy.

ComfyUI Examples: this repo contains examples of what is achievable with ComfyUI. Most workflows you see on GitHub can also be downloaded. You can construct an image generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler.

Need help with FaceDetailer in ComfyUI? Join the discussion and find solutions from other users in r/StableDiffusion. I found it very helpful. Just my two cents.

PS: If someone has access to Magnific AI, please can you upscale and post results for 256x384 (5 jpg quality) and 256x384 (0 jpg quality)? The workflow is kept very simple for this test: Load Image, Upscale, Save Image. No attempts to fix JPG artifacts, etc. This was really a test of ComfyUI.

Example workflows to start from:
- Merge 2 images together with this ComfyUI workflow: View Now
- ControlNet Depth ComfyUI workflow (use ControlNet Depth to enhance your SDXL images): View Now
- Animation workflow (a great starting point for using AnimateDiff): View Now
- ControlNet workflow (a great starting point for using ControlNet): View Now
- Inpainting workflow (a great starting point): View Now

I think it was 3DS Max. Hi Antique_Juggernaut_7, this could help me massively.
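Since half the problems above come down to whether a PNG still carries its metadata, here is a minimal sketch for checking a file locally before sharing it. It assumes Pillow is installed; ComfyUI stores the editor graph in the PNG's "workflow" text chunk (and the API-format graph under "prompt"). The filenames are hypothetical.

```python
import json
from PIL import Image  # pip install Pillow

# Filename is hypothetical; point this at a PNG saved by ComfyUI.
img = Image.open("ComfyUI_00001_.png")

# ComfyUI writes its graph into PNG text chunks, exposed by Pillow via .info:
# "workflow" holds the editor graph, "prompt" the API-format graph.
workflow_text = img.info.get("workflow")

if workflow_text is None:
    print("No workflow metadata found - this file would not load a graph.")
else:
    workflow = json.loads(workflow_text)
    print(f"Workflow with {len(workflow.get('nodes', []))} nodes.")
    # Save it out so it can be shared even on sites that strip PNG metadata.
    with open("workflow.json", "w", encoding="utf-8") as f:
        f.write(workflow_text)
```

Running this before uploading tells you immediately whether a host has already stripped the graph, which is exactly the failure several posters above ran into.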
A note on prompt syntax: you can use parentheses to change the emphasis of a word or phrase, like (good code:1.2) or (bad code:0.8).

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, plus upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

Breakdown of workflow content below. Example: starting workflow, ending workflow. Ignore the prompts and setup.

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. I moved my workflow host to https://openart.ai/profile/neuralunk?sort=most_liked. Hope you like some of them :) Hopefully this will be useful to you.

For your all-in-one workflow, use the Generate tab. It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. That way, the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

Aug 2, 2024: You can load or drag the following image into ComfyUI to get the workflow: Flux Schnell. This image contains the workflow (https://comfyanonymous.github.io/ComfyUI_examples/flux/flux_schnell_example.png). Flux Schnell is a distilled 4-step model. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far, showing the difference between the preliminary, base, and refiner setup. If you have any of those generated images in their original PNG, you can just drop them into ComfyUI and the workflow will load. The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not. Any ideas on this?

This workflow is entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it.

A character-sheet batch idea: give it a folder of images of outfits (with, for example, outfit1.png and outfit1.txt containing a prompt describing the outfit in outfit1.png). Give it a folder of OpenPose poses to iterate over. Create a list of emotion expressions: a text file with multiple lines in the format "emotionName|prompt for emotion" will be used. Generate one character at a time, and remove the background with the Rembg Background Removal node for ComfyUI. Remove 3/4 stick figures in the pose image. A small input-parsing sketch follows below.
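Here is a minimal sketch of parsing those batch inputs, the paired outfit .png/.txt files and the pipe-delimited emotion list, before feeding them into a queueing loop. The directory and file names are hypothetical; only the "emotionName|prompt for emotion" format comes from the description above.

```python
from pathlib import Path

# Directory and file names are hypothetical.
outfit_dir = Path("outfits")
pose_dir = Path("poses")

# Each outfit1.png is paired with outfit1.txt holding a prompt for that outfit.
outfits = [
    (png, png.with_suffix(".txt").read_text(encoding="utf-8").strip())
    for png in sorted(outfit_dir.glob("*.png"))
]

# One "emotionName|prompt for emotion" entry per line.
emotions = {}
for line in Path("emotions.txt").read_text(encoding="utf-8").splitlines():
    if "|" in line:
        name, prompt = line.split("|", 1)
        emotions[name.strip()] = prompt.strip()

# Iterate every combination; each iteration would queue one ComfyUI job
# (e.g. via the /prompt call sketched earlier) and then run Rembg on the output.
for outfit_png, outfit_prompt in outfits:
    for pose_png in sorted(pose_dir.glob("*.png")):
        for emotion_name, emotion_prompt in emotions.items():
            full_prompt = f"{outfit_prompt}, {emotion_prompt}"
            print(outfit_png.name, pose_png.name, emotion_name, "->", full_prompt)
```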
Jul 6, 2024: What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion.

The PNG files produced by ComfyUI contain all the workflow info. I'll do you one better and send you a PNG you can directly load into Comfy. OP probably thinks that ComfyUI has the workflow included with the PNG, and it does. The problem I'm having is that Reddit strips this information out of the PNG files when I try to upload them.

I generated images from ComfyUI. If I drag and drop the image, it is supposed to load the workflow, right? I also extracted the workflow from its metadata and tried to load it, but it doesn't load. About a week or so ago, I began to notice a weird bug: if I load my workflow by dragging the image into the site, it puts in the wrong positive prompt.

I can't load workflows from the example images using a second computer. I can reach ComfyUI through 192.168.x.x:8188, but when I try to load a flow through one of the example images, it just does nothing. Loading workflows from the example images through localhost:8188 works fine.

u/wolowhatever, we set 5 as the default, but it really depends on the image and image style tbh. I tend to find that most images work well around a Freedom of 3. Really chaotic images, or images that actually benefit from added details from the prompt, can look exceptionally good at ~8.

AnimateDiff in ComfyUI is an amazing way to generate AI videos. ComfyUI could have workflow screenshots, like the example repo has, to demonstrate possible usage and the variety of extensions; A1111 has great categories, like Features and Extensions, that simply show what the repo can do and what add-ons are out there. I think the perfect place for them is the Wiki on GitHub.

Comfy Workflows: share, discover, and run thousands of ComfyUI workflows with zero setup (free and open source). You can also easily upload and share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution; I created a simplified 2048x2048 workflow. It is a simple way to compare these methods, though it is a bit messy, as I have no artistic cell in my body. It's also the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you.

ComfyUI's inpainting and masking aren't perfect, and I've got three tutorials that can teach you how to set up a decent ComfyUI inpainting workflow. But as a base to start from, it'll work. Once the final image is produced, I begin working with it in A1111: refining, photobashing in some features I wanted, re-rendering with a second model, and so on.

As a programmer, I find the workflow logic relatively easy to understand, but the function of each node cannot be inferred simply by looking at its name.

Finally, a request: I have a client who has asked me to produce a ComfyUI workflow as the backend for a front-end mobile app (which someone else is developing using React). He wants a basic faceswap workflow. A sketch of fetching finished images back from a ComfyUI server follows below.
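For a backend like that, the step after queueing is retrieving the finished image from the server. Here is a minimal polling sketch, assuming a default local server and the prompt_id returned by the /prompt call sketched earlier; the response layout follows ComfyUI's standard /history and /view endpoints, and the node ids inside it depend on the graph.

```python
import json
import time
import urllib.parse
import urllib.request

SERVER = "http://127.0.0.1:8188"  # assumption: default local ComfyUI server

def fetch_result(prompt_id: str) -> bytes:
    # Poll the history endpoint until the queued prompt shows up as finished.
    while True:
        with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
            history = json.loads(resp.read())
        if prompt_id in history:
            break
        time.sleep(1)

    # Grab the first saved image from the outputs (node ids depend on the graph).
    outputs = history[prompt_id]["outputs"]
    image = next(iter(outputs.values()))["images"][0]

    # Download the image bytes via the /view endpoint.
    params = urllib.parse.urlencode({
        "filename": image["filename"],
        "subfolder": image["subfolder"],
        "type": image["type"],
    })
    with urllib.request.urlopen(f"{SERVER}/view?{params}") as resp:
        return resp.read()
```

In a mobile-app backend, a function like this would run server-side after the /prompt call and return the image bytes to the app.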