
How to use ComfyUI workflows: notes from Reddit and GitHub

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. If you have something to teach others, post here. A lot of people are just discovering this technology and want to show off what they created, so please keep posted images SFW and, above all, be nice. Belittling their efforts will get you banned. Also, if this is new and exciting to you, feel free to post.

Getting started: ComfyUI is a modular GUI for Stable Diffusion that allows you to create images, short videos, and more. Within the modular interface, you can design and customize your own workflows. Install ComfyUI from https://github.com/comfyanonymous/ComfyUI, download a model from https://civitai.com, and read the README page in the ComfyUI repo. Here's a list of example workflows in the official ComfyUI repo (Dec 19, 2023), e.g. to generate background images, character images, etc. Below is the simplest way you can use ComfyUI: we will walk through a simple example, introduce some concepts, and gradually move on to more complicated workflows. To load a workflow, simply click the Load button on the right sidebar and select the workflow .json file.

ComfyUI Launcher: hi u/Critical_Design4187, it's definitely an active work in progress, but the goal of the project is to be able to support/run all types of workflows. Currently, PROXY_MODE=true only works with Docker, since NGINX is used within the container. Once the container is running, all you need to do is expose port 80 to the outside world; this will allow you to access the Launcher and its workflow projects from a single port. If you're running the Launcher manually, you'll need to set up a reverse proxy yourself. The Launcher loads .json files saved via ComfyUI, but it also lets you export any project in a new file format called "launcher.json", which is designed to have 100% reproducibility.

Portable install: make sure the path points to the ComfyUI folder inside the comfyui_portable folder, then run python app.py to start the Gradio app on localhost. Access the web UI to use the simplified SDXL Turbo workflows; refer to the video tutorial for detailed guidance on using these workflows and the UI.

Custom nodes: some workflows require you to git clone the node's repository into your ComfyUI/custom_nodes folder and restart ComfyUI.

QR Code nodes (Aug 5, 2023): use the QR Code node for simple workflows, and the QR Code (Split) node if you want to build more advanced pipelines with additional outputs for the MODULE_LAYER, FINDER_LAYER, or FINDER_MASK.

THE LAB EVOLVED is an intuitive, all-in-one workflow. It includes literally everything possible with AI image generation: txt-to-img, img-to-img, inpainting, outpainting, image upscale, latent upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, but also video generation, pixelization, 360 image generation, and even live painting.

Heads up: Batch Prompt Schedule does not work with the Python API templates provided by the ComfyUI GitHub. I believe it's due to the syntax within the scheduler node breaking the syntax of the overall prompt JSON load. I've submitted a bug to both ComfyUI and Fizzledorf, as I'm not sure which side will need to correct it.

Tutorials: part 5 of my series of step-by-step tutorials is out. It covers improving your Advanced KSampler setup and using prediffusion with an unco-operative prompt to get more out of your workflow. It's a little rambling; I like to go in depth with things and explain why they are done rather than give you a list of rapid-fire instructions. Also check my ComfyUI Advanced Understanding videos on YouTube, part 1 and part 2.

One sampling trick took my 35-step generations down to 10-15 steps: you can now use half or less of the steps you were using before and get the same results, with no quality loss that I could see after hundreds of tests.

Jul 28, 2023: so that was not too bad! I could even use a workflow that output at 8K, but the speed was pathetic, VERY slow: well over 30 minutes for a generation.

Seeds and noise: I believe A1111 uses the GPU to generate the random numbers for the initial noise, whereas ComfyUI uses the CPU. So even with the same seed, you get different noise between the two.
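To make the seed behavior concrete, here is a minimal PyTorch sketch (my own illustration, not code from either UI) showing that CPU and GPU RNG streams diverge even with the same seed; the latent shape is the usual one for a 512x512 SD image:

```python
import torch

seed = 42
shape = (1, 4, 64, 64)  # batch, latent channels, H/8, W/8 for a 512x512 image

# CPU-seeded noise (the ComfyUI approach)
cpu_gen = torch.Generator(device="cpu").manual_seed(seed)
cpu_noise = torch.randn(shape, generator=cpu_gen, device="cpu")

# GPU-seeded noise (the A1111 default), if a GPU is available
if torch.cuda.is_available():
    gpu_gen = torch.Generator(device="cuda").manual_seed(seed)
    gpu_noise = torch.randn(shape, generator=gpu_gen, device="cuda")
    # Same seed, different RNG stream: the tensors do not match
    print(torch.allclose(cpu_noise, gpu_noise.cpu()))  # False
```

CPU-seeded noise has the side benefit of being reproducible across different GPUs, which is presumably part of why ComfyUI chose it.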
The graph that contains all of this information is referred to as a "workflow" in Comfy. The process of building and rebuilding my own workflows with the new things I've learned has taught me a lot. There are so many resources available, but you need to dive in.

Repos and extensions: GitHub - xiwan/comfyUI-workflows: stores pixel-art and other interesting ComfyUI workflows. This repository provides Colab notebooks that allow you to install and use ComfyUI, including ComfyUI-Manager: support for installing ComfyUI, for a basic installation of ComfyUI-Manager, and for automatically installing dependencies of custom nodes upon restarting the Colab notebook. But I wanted to have a standalone version of ComfyUI. Mar 23, 2024: a ComfyUI workflow and model manager extension to organize and manage all your workflows, models, and generated images in one place; seamlessly switch between workflows, track version history and image generation history, install models from Civitai in one click, and browse/update your installed models. I also use the ComfyUI Manager to take a look at the various custom nodes available and see what interests me.

ReActor face swap: start by loading the default workflow (you should be in the default workflow), then double-click in a blank area and enter ReActor. Add the ReActor Fast Face Swap node. Next, link the input image of this node to the image from the VAE Decode.

DemoFusion: once you have the node installed, search for demofusion and choose "Demofusion from single file". You need to select this node to use your local SDXL checkpoints, and it saves a ton of space. Note that there is a comment on the thread saying this node downloads 60GB on the first run.

Core ML: this node allows you to load a Core ML UNet model and use it in your ComfyUI workflow. Place the converted .mlmodelc file in ComfyUI's models/unet directory and use the node to load the model. The output of the node is a coreml_model object that can be used with the Core ML Sampler.

Inpainting with SDXL: in researching inpainting using SDXL 1.0 in ComfyUI, I've come across three methods that seem to be commonly used: Base Model with Latent Noise Mask, Base Model using InPaint VAE Encode, and the UNET "diffusion_pytorch" inpaint-specific model from Hugging Face. In this workflow, each of them will run on your input image.

Prompt styles: you'd probably want to right-click the CLIP Text Encode node and turn the prompt into an input. Then add an empty text box so you can write a prompt, add a Text Concat to combine the prompt and the style, and run that into the input. You can also do this all in one with the Mile High Styler. I created my workflow based on Olivio's video and replaced the positive and negative nodes with the new styles node; I also had to edit the styles.csv file to remove some incompatible characters (mostly accents). I have never tried the Load Styles CSV node, though.

Encrypted workflows: share the encrypted file along with the key to others. They install this extension on their ComfyUI and restart, click the Load (Decrypted) button, choose the encrypted file, and copy and paste the key into the prompt. If the key matches the file, ComfyUI should load the workflow correctly.

Sharing workflows as images: the workflow JSON info is saved with the .png file, and the metadata from PNG files saved from ComfyUI should transfer over to other ComfyUI environments. Simply load or drag the PNG into ComfyUI and it will load the workflow; if you really want the JSON, you can save it after loading the PNG into ComfyUI. Just as an experiment, drag and drop one of the PNG files you have outputted into ComfyUI and see what happens. One gotcha: downloading SDXL pics posted here on Reddit and dropping them into ComfyUI doesn't work, because Reddit re-encodes images and strips the metadata, so I guess you will need a direct download link.
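As a sketch of what's happening in that PNG round trip (my illustration, with a hypothetical filename, not code from any extension): ComfyUI embeds the graph as JSON in the PNG's text chunks, so you can pull it back out with Pillow:

```python
import json
from PIL import Image  # pip install pillow

img = Image.open("ComfyUI_00001_.png")   # hypothetical ComfyUI output file
workflow = img.info.get("workflow")      # the graph JSON lives in a PNG text chunk
if workflow is not None:
    with open("recovered_workflow.json", "w") as f:
        json.dump(json.loads(workflow), f, indent=2)  # pretty-print the recovered graph
else:
    print("no workflow metadata found; the image was probably re-encoded")
```

This is also why images re-uploaded through sites that re-encode them stop working as workflow carriers: the text chunks are discarded.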
Unsampling: you upload an image -> unsample -> KSampler Advanced -> same recreation of the original image. After that, you can reuse the same latent and tweak the start and end steps to manipulate it. I also added a second part where I just use random noise in a latent blend.

Photoshop plugin: I've put together some videos showcasing its features. Text to Image, Image to Image, Inpaint, Outpaint: the plugin allows seamless conversion between text and image, as well as image-to-image transformations. Real-Time Mode: experience the power of real-time editing with the plugin. Workflow Support: the plugin integrates seamlessly into Photoshop. In practice you can generate from Comfy and paste the result into Photoshop for manual adjustments, OR draw in Photoshop and then paste the result into one of the benches of the workflow, OR combine both methods: gen, draw, gen, draw, gen! Always check the inputs, disable the KSamplers you don't intend to use, and make sure the resolution in Photoshop matches the one in step two.

Invoke just released 3.0, which adds ControlNet and a node-based backend that you can use for plugins, so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node plugin support gives them serious potential. I wonder whether Comfy and Invoke will somehow work together or whether things will stay fragmented between all the various UIs.

SDXL text encoding: if you want to add in the SDXL encoder, you have to go out of your way. Instead of simply Add Node -> Conditioning -> CLIP Text Encode, you have to delve into Add Node -> Advanced -> Conditioning -> CLIPTextEncodeSDXL. The example images on top are using the "clip_g" slot on the SDXL encoder on the left, but with the default workflow.

AYS: just take your normal workflow and replace the KSampler with the custom sampler node so you can use the AYS sigmas.

Queueing: Q: "You did not click on Queue Prompt (I tried that), so I assume you hit a key on the keyboard?" A: Ctrl-Enter is equivalent to clicking Queue Prompt.

SparseCtrl (Dec 15, 2023) is now available through ComfyUI-Advanced-ControlNet. RGB and scribble are both supported, and RGB can also be used for reference purposes in normal non-AD workflows if use_motion is set to False on the Load SparseCtrl Model node.

ControlNet: ComfyUI ControlNet aux is a plugin with preprocessors for ControlNet, so you can generate control images directly from ComfyUI. Then you need to download the Canny model. Loop the conditioning from your CLIPTextEncode prompt through ControlNetApply and into your KSampler (or wherever it's going next). The advantage of this approach is that you can control the outlines and edges of the generated images through Canny edge maps.
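For intuition, here is roughly what a Canny preprocessor produces before the map reaches ControlNetApply; this is a standalone OpenCV sketch with made-up file names and thresholds, not the plugin's actual code:

```python
import cv2  # pip install opencv-python

img = cv2.imread("input.png")                 # hypothetical source image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Canny expects a single-channel image
edges = cv2.Canny(gray, 100, 200)             # low/high hysteresis thresholds to tune
cv2.imwrite("canny_map.png", edges)           # white edges on black, ready for ControlNet
```

Lowering the thresholds keeps more (and noisier) edges, which gives ControlNet a tighter grip on the composition; raising them keeps only the strong outlines.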
The reason why you typically don't want a final interface for workflows is that many users will eventually want to apply LUTs and other post-processing filters.

On sharing workflow previews: either you maintain a ComfyUI install with every custom node on the planet installed (don't do this), or you steal some code that consumes the JSON and draws the workflow and noodles (without the underlying functionality that the custom nodes bring) and saves it as a JPEG next to each image you upload. Without that functionality, it's "have fun teaching yourself yet another obscure, ever-changing UI".

Salt AI: we're launching Salt AI, a platform that lets you share your AI workflows with the world for free. Last week, we officially launched our alpha, which lets you deploy ComfyUI workflows to any Discord server without the constraints of a single machine. The slash command is /comfy (e.g. /comfy background or /comfy apple). The idea is to make it as easy as possible to get flows into the hands of real users.

Don't worry if the jargon on the nodes looks daunting (Mar 20, 2024).

Node packs worth knowing: ComfyUI IPAdapter Plus, ComfyUI InstantID (Native), ComfyUI Essentials, ComfyUI FaceAnalysis, Comfy Dungeon, not to mention the documentation and video tutorials. The only way to keep the code open and free is by sponsoring its development.

Sytan SDXL ComfyUI: a very nice workflow showing how to connect the base model with the refiner and include an upscaler. This workflow uses the SDXL 1.0 Refiner for very quick image generation. I uploaded my workflow to GitHub; I spent the whole week working on it.

BLIP nodes: BLIP Model Loader loads a BLIP model to input into the BLIP Analyze node; BLIP Analyze Image gets a text caption from an image, or interrogates the image with a question.
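To illustrate what a BLIP captioning node does under the hood, here is a minimal sketch using the Hugging Face transformers library; the Salesforce checkpoint is the standard public one, but treat the specific model (and the file name) as my assumption rather than what the node ships with:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

model_id = "Salesforce/blip-image-captioning-base"  # downloads on first run
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

image = Image.open("input.png").convert("RGB")      # hypothetical input image
inputs = processor(image, return_tensors="pt")      # preprocess to pixel tensors
out = model.generate(**inputs, max_new_tokens=30)   # autoregressive caption decode
print(processor.decode(out[0], skip_special_tokens=True))
```

The "interrogate with a question" mode corresponds to visual question answering, which uses a different BLIP head (BlipForQuestionAnswering) but the same basic call pattern.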
Examples and drag-and-drop: ComfyUI already has an examples repo where you can instantly load all the cool native workflows just by drag-and-dropping a picture from that repo. With the latest version of ComfyUI, you just need to drop the picture from the linked website into ComfyUI and you'll get the setup. My current gripe is that tutorials and sample workflows age out so fast, and GitHub samples from .png files just don't import via drag and drop half the time, as advertised.

Learning: building your own workflows is the best advice there is when starting out with ComfyUI, imo. Hope it helps; it sure helped me getting started. There's no reason to use Comfy if you're not willing to learn it. And it's likely that more artists will be attracted to using SD in the near future because of SDXL's quality renders.

A note on terminology: many artists, like myself, will want to discuss "workflow" in the conventional sense, and this could cause confusion, so I recommend using a different term.

ComfyUI-IF_AI_tools (if-ai/ComfyUI-IF_AI_tools) is a set of custom nodes for ComfyUI that allows you to generate prompts using a local large language model (LLM) via Ollama. This tool enables you to enhance your image generation workflow by leveraging the power of language models.

SillyTavern: I simply copied the "Stable Diffusion" extension that comes with SillyTavern and adjusted it to use ComfyUI. It should have all the features that the Stable Diffusion extension offers.

Components: instead of having a single workflow with a spaghetti of 30 nodes, it could be a workflow with 3 sub-workflows, each with 10 nodes, for example. Once all the component workflows have been created, you can save them through the "Export As Component" option in the menu. After adding a Note and changing its title to "input-spec", you can set default values for specific input slots.

Installing Insightface (for face-swap nodes): download the prebuilt Insightface package for Python 3.10, or for Python 3.11 or 3.12 (if in the previous step you see 3.11 or 3.12), and put it into the stable-diffusion-webui (A1111 or SD.Next) root folder (where you have the "webui-user.bat" file), or into the ComfyUI root folder if you use ComfyUI Portable. Step 1: open the venv folder, then click into its path bar and copy that path (we'll need it later). Return to the default folder and click into its path bar too, then remove it and type "cmd" instead. Press Enter; it opens a command prompt. In that command prompt, type: python -m venv [venv folder path]

Scripting and deployment: ComfyScript v0.3 lets you use ComfyUI as a function library, and it has backwards compatibility with running existing workflows. Basically, this lets you upload and version-control your workflows; then you can use your local machine, or any server with ComfyUI installed, and hit the endpoint just like any simple API to trigger your custom workflow. It will also handle uploading the generated output to S3-compatible storage. With Python, the easiest way I found was to grab a workflow JSON, manually change the values you want into a unique keyword, then use Python to replace that keyword with the new value. Looping through and changing values, I suspect, becomes an issue once you go beyond a simple workflow or use custom nodes.
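Here is a minimal sketch of that keyword-replace approach, assuming a workflow exported in the API format and a local ComfyUI listening on its default port; the placeholder token and file names are mine. ComfyUI's HTTP endpoint accepts the graph as JSON under a "prompt" key:

```python
import json
import urllib.request

# Load an API-format export in which the positive prompt text was
# replaced by hand with the unique placeholder __PROMPT__ (hypothetical).
with open("workflow_api.json") as f:
    workflow_text = f.read()

workflow_text = workflow_text.replace("__PROMPT__", "a castle at sunset")
payload = json.dumps({"prompt": json.loads(workflow_text)}).encode("utf-8")

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",            # default ComfyUI address
    data=payload,
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)  # queues the job; images land in the output folder
```

String-level replacement saves you from having to know node IDs in the graph, which is exactly why it starts to creak on complex workflows with custom nodes, as noted above.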
With the extension "ComfyUI manager" you can install almost automatically the missing nodes with the "install missing custom nodes" button. I have never tried the load styles CSV. I've submitted a bug to both ComfyUI and Fizzledorf as I'm not sure which side will need to correct it. In this ComfyUI Tutorial we'll install ComfyUI and show you how it works. Return in the default folder and type on its path too, then remove it and type “cmd” instead. The official Python community for Reddit! Stay up to date with the latest news, packages, and meta information relating to the Python programming language. [11]. Read README page in ComfyUI repo. If you have questions or are new to Python use r/learnpython We’re launching Salt AI, a platform that lets you share your AI workflows with the world for free. /comfy background or /comfy apple). Instead of simply Add Node -> Conditioning -> CLIP Text Encoder, you have to delve into Add Node -> Advanced ->Conditioning -> CLIPTextEncoderSDXL. Spent the whole week working on it. This was the base for my own workflows. Press Enter, it opens a command prompt. Once all the component workflows have been created, you can save them through the "Export As Component" option in the menu. Many artists, like myself, will want to discuss workflow in the conventional sense and this could cause confusion. Loop the conditioning from your ClipTextEncode prompt, through ControlNetApply, and into your KSampler (or whereever it's going next). Inputs protocol - If enabled this will prefix the textbox input with a preset to represent the internet protocol. 1. png Simply load / drag the png into comfyUI and it will load the workflow. py to start the Gradio app on localhost; Access the web UI to use the simplified SDXL Turbo workflows; Refer to the video tutorial for detailed guidance on using these workflows and UI. Looks good, but would love to have more examples on different use cases for a noob like me. su sr bu sq uq ub og ar zd tv