Custom AI prompt generator node for ComfyUI. With this node, you can use text generation models to generate prompts. Before using it, the text generation model has to be trained with a prompt dataset.
# prompt-generator-comfyui

## Table Of Contents

- Setup
- Features
- Example Workflow
- Pretrained Prompt Models
- Variables
- Troubleshooting
- Contributing
- Example Outputs
- Automatic installation is added for the portable version.
  - Clone the repository with the `git clone https://github.com/alpertunga-bile/prompt-generator-comfyui.git` command under the `custom_nodes` folder.
  - Run the `run_nvidia_gpu.bat` file.
  - Open the `hires.fixWithPromptGenerator.json` or `basicWorkflowWithPromptGenerator.json` workflow.
  - Put your generator under the `models/prompt_generators` folder. You can create your prompt generator with this repository. The generator has to be put as a folder; do not put just the `pytorch_model.bin` file, for example.
  - Click the `Refresh` button in ComfyUI.
- Clone the repository with the `git clone https://github.com/alpertunga-bile/prompt-generator-comfyui.git` command under the `custom_nodes` folder.
- Run ComfyUI.
- Open the `hires.fixWithPromptGenerator.json` or `basicWorkflowWithPromptGenerator.json` workflow.
- Put your generator under the `models/prompt_generators` folder. You can create your prompt generator with this repository. The generator has to be put as a folder; do not put just the `pytorch_model.bin` file, for example.
- Click the `Refresh` button in ComfyUI.
- Download the node with ComfyUI Manager.
- Restart ComfyUI.
- Open the `hires.fixWithPromptGenerator.json` or `basicWorkflowWithPromptGenerator.json` workflow.
- Put your generator under the `models/prompt_generators` folder. You can create your prompt generator with this repository. The generator has to be put as a folder; do not put just the `pytorch_model.bin` file, for example.
- Click the `Refresh` button in ComfyUI.
- Multiple output generation is added. You can choose from 5 outputs and check the generated prompts in the log file and terminal. The prompts are logged and printed in order.
- Optimizations are done with the Optimum package.
- ONNX and transformers models are supported.
- Preprocessing of outputs. See this section.
- Recursive generation is supported. See this section.
- Prints the generated text to the terminal and logs the node's state under the `generated_prompts` folder, with the date as the filename.
- The Prompt Generator node may look different in the final version, but the workflow in ComfyUI is not going to change.
- You can find the models in this link.
- To use a pretrained model, follow these steps:
  - Download the model and unzip it to the `models/prompt_generators` folder.
  - Click the `Refresh` button in ComfyUI.
  - Then select the generator with the node's `model_name` variable (if you can't see the generator, restart ComfyUI).
- 1,434,667 rows of unique prompts (gathering in process)
- 85% train | 15% test
- The process of data cleaning and gathering can be found here
- These Hugging Face datasets are used:
- female_positive_generator_v2 (training in process)
  - uses the distilgpt2 model
  - training loss ~0.50
- female_positive_generator_v3 (training in process)
  - uses the bigscience/bloom-560m model
  - training loss ~0.59
`num_beams` must be divisible by `num_beam_groups`, otherwise you will get errors.
| Variable Names | Definitions |
| --- | --- |
| model_name | Folder name that contains the model |
| accelerate | Enables optimizations. Some models are not supported by BetterTransformer (check your model). If your model is not supported, disable this option or convert your model to ONNX |
| prompt | Input prompt for the generator |
| cfg | CFG is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate samples more closely linked to the input prompt, usually at the expense of poorer quality |
| min_new_tokens | The minimum number of tokens to generate, ignoring the number of tokens in the prompt |
| max_new_tokens | The maximum number of tokens to generate, ignoring the number of tokens in the prompt |
| do_sample | When True, picks words based on their conditional probability |
| early_stopping | When True, generation finishes when the EOS token is reached |
| num_beams | Number of steps for each search path |
| num_beam_groups | Number of groups to divide num_beams into, in order to ensure diversity among different groups of beams |
| diversity_penalty | This value is subtracted from a beam's score if it generates a token that is the same as any beam from another group at a particular time. Note that diversity_penalty is only effective if group beam search is enabled |
| temperature | How sensitive the algorithm is to selecting low-probability options |
| top_k | How many potential answers are considered when performing sampling |
| top_p | The minimum number of tokens is selected such that their probabilities add up to top_p |
| repetition_penalty | The parameter for repetition penalty. 1.0 means no penalty |
| no_repeat_ngram_size | The size of an n-gram that cannot occur more than once (0 = no restriction) |
| remove_invalid_values | Whether to remove possible nan and inf outputs of the model to prevent the generation method from crashing. Note that using remove_invalid_values can slow down generation |
| self_recursive | See this section |
| recursive_level | See this section |
| preprocess_mode | See this section |
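The divisibility constraint between `num_beams` and `num_beam_groups` can be validated before the variables above are handed to the model's `generate` call. This is a minimal sketch; `build_generation_config` is a hypothetical helper, not part of the node:

```python
def build_generation_config(num_beams: int = 4,
                            num_beam_groups: int = 2,
                            diversity_penalty: float = 0.5,
                            temperature: float = 1.0,
                            top_k: int = 50,
                            top_p: float = 0.95,
                            repetition_penalty: float = 1.0,
                            no_repeat_ngram_size: int = 0) -> dict:
    """Collect generation kwargs and enforce the num_beams constraint."""
    if num_beams % num_beam_groups != 0:
        # Group beam search splits num_beams evenly across the groups.
        raise ValueError(
            f"num_beams ({num_beams}) must be divisible by "
            f"num_beam_groups ({num_beam_groups})"
        )
    return {
        "num_beams": num_beams,
        "num_beam_groups": num_beam_groups,
        "diversity_penalty": diversity_penalty,
        "temperature": temperature,
        "top_k": top_k,
        "top_p": top_p,
        "repetition_penalty": repetition_penalty,
        "no_repeat_ngram_size": no_repeat_ngram_size,
    }
```

For example, `num_beams=4, num_beam_groups=2` is valid (each group gets 2 beams), while `num_beams=5, num_beam_groups=2` raises a `ValueError`.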
- Let's say we give `a,` as the seed and the recursive level is 1. I am going to use the same outputs in this example to describe the functionality more accurately.
- With self-recursive generation, let's say the generator's output is `b`. The next seed is going to be `b`, and the generator's output is `c`. The final output is `a, c`. It can be used for generating random outputs.
- Without self-recursive generation, let's say the generator's output is `b`. The next seed is going to be `a, b`, and the generator's output is `c`. The final output is `a, b, c`. It can be used for more accurate prompts.
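The two modes above can be sketched as a small loop. `recursive_generate` is a hypothetical helper (the node's real implementation may differ), and `generate` stands in for any callable that maps a seed string to new text:

```python
def recursive_generate(generate, seed: str, recursive_level: int,
                       self_recursive: bool) -> str:
    """Sketch of the recursive generation described above."""
    prompt = seed
    output = generate(prompt)
    for _ in range(recursive_level):
        if self_recursive:
            # Next seed is only the latest output; the original seed
            # is prepended once at the very end.
            prompt = output
        else:
            # Next seed is everything generated so far.
            prompt = f"{prompt} {output}"
        output = generate(prompt)
    return f"{seed} {output}" if self_recursive else f"{prompt} {output}"


# Stub generator reproducing the example: gen("a,") -> "b,", etc.
outputs = {"a,": "b,", "b,": "c", "a, b,": "c"}
gen = outputs.__getitem__

recursive_generate(gen, "a,", 1, True)   # -> "a, c"
recursive_generate(gen, "a,", 1, False)  # -> "a, b, c"
```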
- exact_keyword => `(masterpiece), ((masterpiece))` is not allowed. The check is on the pure keyword, without parentheses and weights. The algorithm adds prompts from the beginning of the generated text, so add important prompts to the seed.
- exact_prompt => `(masterpiece), ((masterpiece))` is allowed, but `(masterpiece), (masterpiece)` is not. The check is on the exact match of the prompt.
- none => Everything is allowed, even repeated prompts.
```
# ---------------------------------------------------------------------- Original ---------------------------------------------------------------------- #
((masterpiece)), ((masterpiece:1.2)), (masterpiece), blahblah, blah, blah, ((blahblah)), (((((blah))))), ((same prompt)), same prompt, (masterpiece)
# ------------------------------------------------------------- Preprocess (Exact Keyword) ------------------------------------------------------------- #
((masterpiece)), blahblah, blah, ((same prompt))
# ------------------------------------------------------------- Preprocess (Exact Prompt) -------------------------------------------------------------- #
((masterpiece)), ((masterpiece:1.2)), (masterpiece), blahblah, blah, ((blahblah)), (((((blah))))), ((same prompt)), same prompt
```
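The three modes can be sketched as follows. `preprocess` is a hypothetical helper that assumes comma-separated prompts, not the node's actual implementation:

```python
import re


def preprocess(generated: str, mode: str) -> str:
    """Deduplicate comma-separated prompts per the modes described above."""
    prompts = [p.strip() for p in generated.split(",") if p.strip()]
    if mode == "none":
        return ", ".join(prompts)
    seen, kept = set(), []
    for prompt in prompts:
        if mode == "exact_keyword":
            # Strip parentheses and :weight suffixes to compare bare keywords.
            key = re.sub(r"[()]|:[\d.]+", "", prompt).strip()
        else:  # exact_prompt
            key = prompt
        if key not in seen:
            seen.add(key)
            kept.append(prompt)
    return ", ".join(kept)
```

Run on the original string above, `exact_keyword` keeps only `((masterpiece)), blahblah, blah, ((same prompt))`, while `exact_prompt` drops only the literal repeats, matching the example outputs.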
- If the solutions below do not fix your issue, please create an issue with the `bug` label.
- The node is based on the transformers and optimum packages, so most problems may be caused by these packages. To overcome these problems, you can try to update them:
  - Activate the virtual environment if there is one.
  - Run the `pip install --upgrade transformers optimum optimum[onnxruntime-gpu]` command.
- For the portable version:
  - Go to the `ComfyUI_windows_portable` folder.
  - Open the command prompt in this folder.
  - Run the `.\python_embeded\python.exe -s -m pip install --upgrade transformers optimum optimum[onnxruntime-gpu]` command.
- If updating the packages does not solve your problem, please create an issue with the `bug` label.
- Users have to check whether they activated the virtual environment, if there is one.
- Users have to check that they are starting ComfyUI in the `ComfyUI_windows_portable` folder, because the node checks whether the `python_embeded` folder exists and uses it to install the required packages.
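The interpreter-selection logic described above can be sketched roughly as follows. `pip_command` and the package list are illustrative assumptions, not the node's actual code:

```python
import sys
from pathlib import Path


def pip_command(comfy_root: str) -> list:
    """Pick the interpreter used to install the required packages.

    Falls back to the current interpreter when the embedded one
    (used by the portable version) is not found.
    """
    embedded = Path(comfy_root) / "python_embeded" / "python.exe"
    python = str(embedded) if embedded.exists() else sys.executable
    return [python, "-s", "-m", "pip", "install", "--upgrade",
            "transformers", "optimum"]
```

This is why starting ComfyUI outside `ComfyUI_windows_portable` makes the check miss the `python_embeded` folder and install into the wrong environment.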
- Contributions are welcome. If you have an idea and want to implement it yourself, please follow these steps:
  - Create a fork.
  - Create a branch with a name that describes the feature you are adding.
  - Open a pull request from your fork.
- If you have an idea but don't know how to implement it, please create an issue with the `enhancement` label.
- Contributions can be made in several ways. You can contribute to the code or to the README file.