meta-llama/codellama
Inference code for CodeLlama models
Stargazers: 15253
Watchers: 173
Issues: 187
Forks: 1747
meta-llama/codellama Issues
- Will there be a Codellama based on Llama 3? (Updated 13 days ago; 6 comments)
- Can't download llama2 model (Closed 15 days ago)
- Missing CUDA library files cause a crash when I start torchrun (Updated 21 days ago)
- codellama keeps lecturing about privacy and ethics (Updated a month ago)
- Achieving Deterministic Output (Updated a month ago)
- WhatsApp-Meta AI-Bug (Closed a month ago)
- WhatsApp-Meta AI-Bug (Updated a month ago)
- Loss calculation always 0 (Updated 2 months ago; 4 comments)
- Explicitly support the Pascal / Delphi programming language (Updated 2 months ago)
- Codellama 7b model prints random words when I ask it to write a script to add 2 numbers (Updated 2 months ago)
- codellama provides C++ code that does not successfully perform a basic mathematical function (Updated 2 months ago)
- Last date of the training dataset (Updated 2 months ago)
- Tree sound (Closed 2 months ago)
- Codellama (Korean: 코드라마) (Closed 2 months ago)
- "Address family not supported by protocol" error (Updated 2 months ago)
- I'd like to know whether to use eos or bos during Code Llama pre-training (Closed 2 months ago; 3 comments)
- Unable to download Code-Llama 7B via the download.sh script (Closed 2 months ago)
- Fine-tuning CodeLlama-34b loss (Closed 2 months ago; 1 comment)
- Where is the attribute `past_key_values`? (Closed 2 months ago; 1 comment)
- CodeLlama went into an infinite cycle (of communication) (Closed 2 months ago; 7 comments)
- 70B model memory issue (Closed 2 months ago; 1 comment)
- How well does CodeLlama perform at Chinese text2sql? (Closed 2 months ago; 1 comment)
- Followed all the README instructions, but the example.py files get stuck when I run them (Updated 2 months ago)
- Request for Codellama's settings on the MBPP dataset (Updated 2 months ago)
- Question on specifying the file path for a FIM prompt (Closed 3 months ago; 1 comment)
- Unable to run example_completion.py on CodeLlama-7b (Updated 3 months ago; 4 comments)
- Annoying, non-user-friendly download script (Closed 3 months ago; 6 comments)
- Do codellama 13B/34B/70B support function calling and LoRA fine-tuning for multi-turn chat with function calling? (Closed 3 months ago; 1 comment)
- Where are the docs for the `llama` API? (Closed 3 months ago; 3 comments)
- Single-line infilling results reproduction (Closed 4 months ago; 6 comments)
- Knowledge cutoff date? (Updated 4 months ago; 2 comments)
- Code Llama 13b download failed: no properly formatted checksum lines found (Closed 4 months ago; 7 comments)
- Incomplete download of the 13B and higher-parameter models (Updated 4 months ago; 1 comment)
- CodeLlama-34b fine-tune evaluation (Closed 4 months ago; 1 comment)
- Potentially incorrect configs (Closed 4 months ago; 8 comments)
- Checksum failures for 70B Instruct (and Python) (Closed 4 months ago; 3 comments)
- Context length and GPU VRAM usage in CodeLlama-7B (Updated 4 months ago)
- Questions about downloading the Code Llama model and using GPT4all (Updated 4 months ago)
- Codellama (Closed 4 months ago)
- line 1: payload:allShortcutsEnabled:false: command not found (Closed 4 months ago; 5 comments)
- Meta AI response "bug" (Closed 4 months ago; 1 comment)
- RuntimeError: "addmm_impl_cpu_" not implemented for 'Half' (Updated 4 months ago; 3 comments)
- I am curious about the form of the infilling dataset used for training (Closed 4 months ago; 1 comment)
- Aiiiii (Closed 4 months ago)
- Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks! (Closed 4 months ago)
- Error downloading models (Updated 4 months ago)
- I'd like to know if this is the right type of dataset for the model's infilling function (Closed 5 months ago; 1 comment)
- ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) (Updated 5 months ago; 1 comment)
- Questions about learning sequences longer than 4096 tokens in codellama (Closed 5 months ago; 2 comments)
- How to enable sending 100K tokens to codellama (Updated 5 months ago)