higgsfield-ai / higgsfield

Fault-tolerant, highly scalable GPU orchestration and a machine learning framework designed for training models with billions to trillions of parameters


Some issues when trying to use it in real life :)

Denys88 opened this issue

Hi, I tried to use your product but ran into a lot of small issues and found some functionality lacking.

  1. Please don't assume that the git address always looks like this:

    if link.startswith("git@github.com:"):

    I tried to use it with an internal GitHub instance, whose URLs differ from public GitHub, and got an error.
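A more forgiving approach would parse the remote instead of hard-coding `github.com`. As a sketch (this `parse_git_remote` helper is hypothetical, not higgsfield's actual API), accepting both SSH and HTTPS remotes on any host might look like:

```python
import re

def parse_git_remote(link: str):
    """Extract (host, owner, repo) from common git remote formats.

    Handles SSH remotes on any host (e.g. git@internal.corp:org/repo.git)
    as well as http(s):// URLs, instead of hard-coding github.com.
    """
    # SSH form: git@<host>:<owner>/<repo>[.git]
    m = re.match(r"git@([^:]+):([^/]+)/(.+?)(?:\.git)?$", link)
    if m:
        return m.group(1), m.group(2), m.group(3)
    # HTTP(S) form: https://<host>/<owner>/<repo>[.git]
    m = re.match(r"https?://([^/]+)/([^/]+)/(.+?)(?:\.git)?$", link)
    if m:
        return m.group(1), m.group(2), m.group(3)
    raise ValueError(f"Unrecognized git remote: {link!r}")
```

This keeps public GitHub working while also accepting self-hosted instances.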

  2. There are many cases where your error messages are unhelpful, as in the first example.
    I tried to use higgsfield manually and got a lot of messages like 'something is not a string'.
    A quick debugging session showed that I had forgotten a command-line parameter or passed a wrong one. This could be improved.
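Validating command-line input up front, with messages that name the offending flag, avoids this class of confusion. A minimal sketch using stdlib `argparse` (the `--project-name` and `--nodes` flags here are made up for illustration, not higgsfield's real CLI):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # argparse reports missing or mistyped flags by name, e.g.
    # "error: the following arguments are required: --project-name",
    # instead of an opaque "something is not a string".
    parser = argparse.ArgumentParser(prog="higgsfield")
    parser.add_argument("--project-name", required=True,
                        help="name of the project to run")
    parser.add_argument("--nodes", type=int, default=1,
                        help="number of machines to train on")
    return parser
```

A bad value such as `--nodes abc` then fails immediately with `invalid int value: 'abc'` rather than deep inside the run.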

  3. Llama and Hugging Face:

    tokenizer=LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf"),

    When I import the Llama loader, it automatically tries to access the Hugging Face Hub without my permission. In general, accessing the internet without an explicit call is a big red flag from a security point of view. In my case I have already downloaded everything and don't need to connect to HF at all.
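For reference, the Hugging Face libraries do expose knobs for fully offline use; the sketch below shows the usual combination (the model path is a placeholder for wherever the weights were downloaded):

```python
import os

# With these set, the Hugging Face libraries fail fast on any attempted
# network access instead of silently downloading.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# Loading from an already-downloaded local directory then needs no network:
# tokenizer = LlamaTokenizer.from_pretrained(
#     "/models/Llama-2-7b-hf", local_files_only=True
# )
```

Wiring something like this through the framework would address the concern about implicit network calls.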

  4. It would be nice to see more examples:

  • a very simple, manually implemented architecture that supports DeepSpeed/ZeRO distributed training.
  • an example that shows how to run everything manually without GitHub or HF access.
  • the ability to run your code on a single machine with a single GPU, and on a single machine with multiple GPUs.
    Otherwise, how do you expect people to debug their code?
    I wanted to run a simple example without setting up my machines or using GitHub and found it impossible, which is a big problem in my opinion.
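To make the "simple example" request concrete, here is a dependency-free toy of the shape such an example might take: a single scalar weight fit by manual gradient descent. A real version would replace this with a small `torch.nn.Module` wrapped by the framework, runnable on one GPU first before scaling out; none of this is higgsfield's actual API.

```python
def train(steps: int = 200, lr: float = 0.05) -> float:
    """Fit y = 2x with one scalar weight and hand-written gradients."""
    data = [(float(x), 2.0 * x) for x in range(1, 5)]  # targets: y = 2x
    w = 0.0
    for _ in range(steps):
        # Mean-squared-error gradient over the whole tiny dataset.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w  # converges to ~2.0
```

Being able to step through something this small locally is exactly what makes debugging feasible before any multi-node setup.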

Overall, great job and a nice implementation, but it could be much more user-friendly.
Thanks!

Hey! Thanks a lot for your thorough inspection.

  1. A nice catch. We hadn't thought about that.
  2. Yes, the error messages are poorly defined for now. The "something is not a string" case happens because we parse the AST directly without any proper analysis (line numbers, etc.).
  3. We'll change the logic soon so that you can use locally downloaded models and datasets with higgsfield without any implicit calls.
  4. Overall, we're going to provide more tutorials. We're sorry for not having them right now.

very simple manually implemented architecture which supports deepspeed/zero distributed training.

    • Right now our API provides support for the major LLMs. You can implement your own if you're eager to.

    example which shows how to manually run everything without github and hf access.

    • We chose GitHub purely for usability. It's not a big deal to write a converter for GitLab or other GitHub-like services that have a concept of CI/CD. And you don't have to depend on Hugging Face for datasets or models; the current implementation provides a way to avoid it.

ability to run your code on a single machine with a single GPU, and on a single machine with multiple GPUs.

    • We'll make it happen in upcoming updates.
      We appreciate your feedback very much.