prompt-generator-comfyui

Custom AI prompt generator node for ComfyUI. With this node, you can use text generation models to generate prompts. Before use, the text generation model has to be trained on a prompt dataset.


Setup

For the Portable Version of ComfyUI

  • Automatic installation is provided for the portable version.
  • Clone the repository under the custom_nodes folder with the git clone https://github.com/alpertunga-bile/prompt-generator-comfyui.git command.
  • Run the run_nvidia_gpu.bat file.
  • Open the hires.fixWithPromptGenerator.json or basicWorkflowWithPromptGenerator.json workflow.
  • Put your generator under the models/prompt_generators folder. You can create your prompt generator with this repository. The generator must be a folder; do not put only the pytorch_model.bin file, for example.
  • Click the Refresh button in ComfyUI.

For Manual Installation of ComfyUI

  • Clone the repository under the custom_nodes folder with the git clone https://github.com/alpertunga-bile/prompt-generator-comfyui.git command.
  • Run ComfyUI.
  • Open the hires.fixWithPromptGenerator.json or basicWorkflowWithPromptGenerator.json workflow.
  • Put your generator under the models/prompt_generators folder. You can create your prompt generator with this repository. The generator must be a folder; do not put only the pytorch_model.bin file, for example.
  • Click the Refresh button in ComfyUI.

For ComfyUI Manager Users

  • Download the node with ComfyUI Manager.
  • Restart ComfyUI.
  • Open the hires.fixWithPromptGenerator.json or basicWorkflowWithPromptGenerator.json workflow.
  • Put your generator under the models/prompt_generators folder. You can create your prompt generator with this repository. The generator must be a folder; do not put only the pytorch_model.bin file, for example.
  • Click the Refresh button in ComfyUI.

Features

  • Multiple output generation is supported. You can choose from 5 outputs and check the generated prompts in the log file and the terminal. The prompts are logged and printed in order.
  • Optimizations are done with the Optimum package.
  • ONNX and transformers models are supported.
  • Preprocessing of outputs. See this section.
  • Recursive generation is supported. See this section.
  • Prints the generated text to the terminal and logs the node's state under the generated_prompts folder, with the date as the filename.

Example Workflow

(Workflow images: example_workflow, example_workflow_basic)

  • The Prompt Generator node may look different in the final version, but the workflow in ComfyUI is not going to change.

Pretrained Prompt Models

  • You can find the models in this link

  • To use a pretrained model, follow these steps:

    • Download the model and unzip it to the models/prompt_generators folder.
    • Click the Refresh button in ComfyUI.
    • Select the generator with the node's model_name variable (if you can't see the generator, restart ComfyUI).

Dataset

Models

  • female_positive_generator_v2 (training in progress)

  • female_positive_generator_v3 (training in progress)

Variables

  • num_beams must be divisible by num_beam_groups; otherwise you will get errors.
Variable Names and Definitions

  • model_name: Folder name that contains the model.
  • accelerate: Enables optimizations. Some models are not supported by BetterTransformer (check your model); if yours is not supported, disable this option or convert your model to ONNX.
  • prompt: Input prompt for the generator.
  • cfg: CFG is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate samples more closely linked to the input prompt, usually at the expense of lower quality.
  • min_new_tokens: The minimum number of tokens to generate, ignoring the number of tokens in the prompt.
  • max_new_tokens: The maximum number of tokens to generate, ignoring the number of tokens in the prompt.
  • do_sample: When True, picks words based on their conditional probability.
  • early_stopping: When True, generation finishes when the EOS token is reached.
  • num_beams: Number of steps for each search path.
  • num_beam_groups: Number of groups to divide num_beams into, in order to ensure diversity among different groups of beams.
  • diversity_penalty: This value is subtracted from a beam's score if it generates the same token as any beam from another group at a particular time. Note that diversity_penalty is only effective if group beam search is enabled.
  • temperature: How sensitive the algorithm is to selecting low-probability options.
  • top_k: How many candidate tokens are considered when sampling.
  • top_p: The smallest set of tokens whose cumulative probability adds up to at least top_p is kept for sampling.
  • repetition_penalty: The parameter for repetition penalty; 1.0 means no penalty.
  • no_repeat_ngram_size: The size of an n-gram that cannot occur more than once (0 disables the restriction).
  • remove_invalid_values: Whether to remove possible nan and inf outputs of the model to prevent the generation method from crashing. Note that using remove_invalid_values can slow down generation.
  • self_recursive: See this section.
  • recursive_level: See this section.
  • preprocess_mode: See this section.
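
Most of these variables map directly onto keyword arguments of the Hugging Face transformers generate() method. The sketch below shows how the node's settings could be collected into such kwargs, including the num_beams / num_beam_groups divisibility check; the helper function name is illustrative, not part of the node's actual code:

```python
def build_generation_kwargs(num_beams=4, num_beam_groups=2,
                            diversity_penalty=0.5, do_sample=False,
                            min_new_tokens=16, max_new_tokens=64,
                            temperature=1.0, top_k=50, top_p=0.95,
                            repetition_penalty=1.2, no_repeat_ngram_size=2,
                            early_stopping=True, remove_invalid_values=True):
    """Collect generation settings as generate() keyword arguments."""
    # Group beam search splits num_beams evenly into num_beam_groups,
    # so the beam count must be divisible by the group count.
    if num_beams % num_beam_groups != 0:
        raise ValueError("num_beams must be divisible by num_beam_groups")
    return {
        "num_beams": num_beams,
        "num_beam_groups": num_beam_groups,
        "diversity_penalty": diversity_penalty,
        "do_sample": do_sample,
        "min_new_tokens": min_new_tokens,
        "max_new_tokens": max_new_tokens,
        "temperature": temperature,
        "top_k": top_k,
        "top_p": top_p,
        "repetition_penalty": repetition_penalty,
        "no_repeat_ngram_size": no_repeat_ngram_size,
        "early_stopping": early_stopping,
        "remove_invalid_values": remove_invalid_values,
    }

# Valid: 6 beams split into 3 groups of 2.
kwargs = build_generation_kwargs(num_beams=6, num_beam_groups=3)
# Invalid: 5 beams cannot be split into 2 equal groups.
# build_generation_kwargs(num_beams=5, num_beam_groups=2)  # raises ValueError
```

The resulting dictionary would be passed as model.generate(input_ids, **kwargs).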

How Does Recursion Work?

  • Let's say we give a, as the seed and the recursive level is 1. I am going to use the same outputs in this example to describe the functionality more clearly.
  • With self-recursive, say the generator's output is b. The next seed becomes b, and the generator's output is c. The final output is a, c. This can be used to generate more random outputs.
  • Without self-recursive, say the generator's output is b. The next seed becomes a, b, and the generator's output is c. The final output is a, b, c. This can be used for more accurate prompts.
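
The two modes can be sketched as follows; generate stands in for the actual model call (prompt in, newly generated text out) and is a placeholder for illustration, not the node's real implementation:

```python
def generate_recursive(generate, seed, recursive_level, self_recursive):
    # `generate` is a placeholder for the model call: it takes a prompt
    # string and returns the newly generated text.
    if self_recursive:
        out = generate(seed)                      # a -> b
        for _ in range(recursive_level):
            out = generate(out)                   # b -> c; only the last
        return f"{seed}, {out}"                   # output is fed back: "a, c"
    parts = [seed, generate(seed)]                # a -> b
    for _ in range(recursive_level):
        parts.append(generate(", ".join(parts)))  # "a, b" -> c
    return ", ".join(parts)                       # "a, b, c"

# Fake generator reproducing the outputs from the example above.
fake = {"a": "b", "b": "c", "a, b": "c"}.get
print(generate_recursive(fake, "a", 1, True))   # a, c
print(generate_recursive(fake, "a", 1, False))  # a, b, c
```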

How Does Preprocess Mode Work?

  • exact_keyword => (masterpiece), ((masterpiece)) is not allowed. Checks the bare keyword without parentheses and weights. The algorithm adds prompts from the beginning of the generated text, so add important prompts to the seed.
  • exact_prompt => (masterpiece), ((masterpiece)) is allowed, but (masterpiece), (masterpiece) is not. Checks for an exact match of the prompt.
  • none => Everything is allowed, even repeated prompts.

Example

# ---------------------------------------------------------------------- Original ---------------------------------------------------------------------- #
((masterpiece)), ((masterpiece:1.2)), (masterpiece), blahblah, blah, blah, ((blahblah)), (((((blah))))), ((same prompt)), same prompt, (masterpiece)
# ------------------------------------------------------------- Preprocess (Exact Keyword) ------------------------------------------------------------- #
((masterpiece)), blahblah, blah, ((same prompt))
# ------------------------------------------------------------- Preprocess (Exact Prompt) -------------------------------------------------------------- #
((masterpiece)), ((masterpiece:1.2)), (masterpiece), blahblah, blah, ((blahblah)), (((((blah))))), ((same prompt)), same prompt
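
A sketch of how the three modes could work, reproducing the example above; the function and the regular expressions are assumptions for illustration, not the node's actual code:

```python
import re

def preprocess(text, mode):
    # "none": everything passes through, including repeated prompts.
    if mode == "none":
        return text
    seen, kept = set(), []
    for prompt in (p.strip() for p in text.split(",")):
        if not prompt:
            continue
        if mode == "exact_keyword":
            # Compare the bare keyword: strip parentheses and :weight suffixes.
            key = re.sub(r"[()]", "", prompt)
            key = re.sub(r":[\d.]+$", "", key).strip()
        else:  # "exact_prompt": compare the prompt exactly as written
            key = prompt
        if key not in seen:
            seen.add(key)
            kept.append(prompt)
    return ", ".join(kept)

original = ("((masterpiece)), ((masterpiece:1.2)), (masterpiece), blahblah, "
            "blah, blah, ((blahblah)), (((((blah))))), ((same prompt)), "
            "same prompt, (masterpiece)")
print(preprocess(original, "exact_keyword"))
# ((masterpiece)), blahblah, blah, ((same prompt))
```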

Troubleshooting

  • If the solutions below do not fix your issue, please create an issue with the bug label.

Package Version

  • The node is based on the transformers and optimum packages, so most problems may be caused by these packages. To overcome such problems, you can try updating them:

For Manual Installation of ComfyUI

  1. Activate the virtual environment if there is one.
  2. Run the pip install --upgrade transformers optimum optimum[onnxruntime-gpu] command.

For the Portable Installation of ComfyUI

  1. Go to the ComfyUI_windows_portable folder.
  2. Open the command prompt in this folder.
  3. Run the .\python_embeded\python.exe -s -m pip install --upgrade transformers optimum optimum[onnxruntime-gpu] command.
  • If updating the packages does not solve your problem, please create an issue with the bug label.

Automatic Installation

For Manual Installation of ComfyUI

  • Users have to make sure the virtual environment is activated, if there is one.

For the Portable Installation of ComfyUI

  • Users have to make sure they start ComfyUI from the ComfyUI_windows_portable folder, because the node checks whether the python_embeded folder exists and uses it to install the required packages.

Contributing

  • Contributions are welcome. If you have an idea and want to implement it yourself, please follow these steps:

    1. Create a fork.
    2. Create a branch with a name that describes the feature you are adding.
    3. Open a pull request from your fork.
  • If you have an idea but don't know how to implement it, please create an issue with the enhancement label.

  • Contributions can be made in several ways; you can contribute to the code or to the README file.

Example Outputs

(Example output images: ComfyUI_00062, ComfyUI_00055, ComfyUI_00054)

About

License: MIT License