ml-explore/mlx-examples
Examples in the MLX framework
Stargazers: 5204 | Watchers: 58 | Issues: 362 | Forks: 750
ml-explore/mlx-examples Issues
GatedRepoError: 401 Client Error; "You must be authenticated to access it."
Updated 17 days ago
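This error typically means the requested Hugging Face repo is gated. A minimal sketch of the usual remedy, assuming the model in question is a gated Hub repo such as the meta-llama releases (the token placeholder and model id below are examples only):

```python
# Hedged sketch: authenticate with the Hugging Face Hub before loading
# a gated model. Token placeholder and model id are examples only.
from huggingface_hub import login
from mlx_lm import load

login(token="hf_...")  # or run `huggingface-cli login`, or set HF_TOKEN

model, tokenizer = load("meta-llama/Meta-Llama-3-8B-Instruct")
```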
Issue with Fusing Models - Output is Bad
Updated 17 days ago
Command-R-Plus, Context Window Limitations
Updated 17 days ago, 42 comments
libc++abi: terminating due to uncaught exception of type std::runtime_error
Updated 17 days ago, 23 comments
Generation after LoRA training cannot stop properly
Updated 18 days ago, 2 comments
[BUG] Qwen/Qwen1.5-7B-Chat model cannot output Chinese in stream mode
Updated 18 days ago
Add a --scan-models option to mlx_lm.server to check downloaded models
Closed 18 days ago, 5 comments
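The requested behavior can be approximated today with huggingface_hub's cache scanner; this is a hedged sketch of what such a report might contain, not the proposed flag itself:

```python
# Sketch only: list locally downloaded models by walking the Hugging
# Face cache, roughly what a --scan-models option could report.
from huggingface_hub import scan_cache_dir

for repo in scan_cache_dir().repos:
    if repo.repo_type == "model":
        print(f"{repo.repo_id}: {repo.size_on_disk / 1e9:.2f} GB")
```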
Llama-3-8B-Instruct-Gradient-1048k-4bit not working?
Closed 18 days ago, 2 comments
generate mlx-community/Meta-Llama-3-70B-Instruct-4bit doesn't halt at <|eot_id|>
Closed 23 days ago, 5 comments
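Llama 3 instruct models end each chat turn with <|eot_id|>, which can be a different token from the tokenizer's default eos. A quick diagnostic sketch (an assumption about the cause, not the project's confirmed fix), assuming the returned tokenizer exposes the standard Hugging Face methods:

```python
# Diagnostic sketch: if <|eot_id|> maps to a different id than
# tokenizer.eos_token_id, generation will not halt at the turn end.
from mlx_lm import load

model, tokenizer = load("mlx-community/Meta-Llama-3-70B-Instruct-4bit")
eot_id = tokenizer.convert_tokens_to_ids("<|eot_id|>")
print("eot_id:", eot_id, "eos_token_id:", tokenizer.eos_token_id)
```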
FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
Closed 20 days ago, 1 comment
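The warning names its own remedy: `resume_download` is now a no-op because downloads always resume, and a fresh copy is forced with `force_download=True`. A sketch using an example model id:

```python
# Downloads resume automatically; resume_download is no longer needed.
from huggingface_hub import snapshot_download

path = snapshot_download("mlx-community/Meta-Llama-3-8B-Instruct-4bit")

# Only when a possibly corrupt local copy must be discarded:
path = snapshot_download(
    "mlx-community/Meta-Llama-3-8B-Instruct-4bit", force_download=True
)
```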
omp_set_nested routine deprecated, please use omp_set_max_active_levels instead
Updated 20 days ago, 1 comment
MNIST Example Error: 403 Forbidden
Closed 20 days ago, 4 comments
Contributing SigLIP to `mlx-examples`
Updated 20 days ago, 1 comment
Model doesn't know when to stop generating.
Updated 21 days ago, 5 comments
It seems some memory cannot be correctly released during generation
Updated 22 days ago, 17 comments
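One plausible explanation (an assumption, not a confirmed diagnosis) is MLX's allocator cache: freed buffers are retained for reuse, so process memory stays high after generation without being a true leak. Recent mlx versions expose the cache; the names below may vary by version:

```python
import mlx.core as mx

# Memory held by the allocator cache looks "leaked" in Activity
# Monitor but is reusable; clearing it returns buffers to the system.
print(f"active: {mx.metal.get_active_memory() / 1e9:.2f} GB")
print(f"cached: {mx.metal.get_cache_memory() / 1e9:.2f} GB")
mx.metal.clear_cache()
```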
Converting OpenELM to MLX format (ValueError: Unrecognized configuration class)
Closed 22 days ago, 2 comments
Potential memory leak during Llama 3 8b model fine-tuning with LoRA
Closed 23 days ago, 9 comments
Phi-3 mini 4k/128k does not work with LoRA
Closed 23 days ago, 2 comments
[BUG] OpenELM Quantization broken
Closed 24 days ago, 12 comments
Phi-3 q4: systematically wrong token in first date
Closed 24 days ago, 7 comments
[Feature Request] Support for QDoRA: Efficient quantized fine-tuning
Updated 24 days ago, 1 comment
Bug due to Typo in starcoder2 model file
Closed 24 days ago
object 'QuantizedLinear' has no attribute 'quantize_module'
Closed a month ago, 4 comments
Loss nan for phi-3
Closed a month ago, 6 comments
TypeError: ModelArgs.__init__() missing 5 required positional arguments
Closed a month ago, 3 comments
Model type openelm not supported
Closed a month ago, 2 comments
Looks like llama.py sanitize_config is outdated
Closed a month ago, 3 comments
[Feature request] A version of mlx_lm.utils.generate() that acts as an iterator
Updated a month ago, 1 comment
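A sketch of the requested API, built on `mlx_lm.utils.generate_step`, which yields one (token, probability) pair per step in versions from this period; the exact signature may differ across releases, and the model id is an example:

```python
# Hedged sketch of an iterator-style generate() for mlx_lm.
import mlx.core as mx
from mlx_lm import load
from mlx_lm.utils import generate_step

def iter_generate(model, tokenizer, prompt, max_tokens=256):
    tokens = mx.array(tokenizer.encode(prompt))
    for (token, _), _ in zip(generate_step(tokens, model), range(max_tokens)):
        if token.item() == tokenizer.eos_token_id:
            break
        # Caveat: per-token decode can split multi-byte characters
        # (compare the Qwen Chinese streaming issue above); robust
        # streaming needs a detokenizer that buffers partial bytes.
        yield tokenizer.decode([token.item()])

model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")
for piece in iter_generate(model, tokenizer, "Hello"):
    print(piece, end="", flush=True)
```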
If no specific LoRA configuration is passed to the evaluate script, the program automatically overwrites adapter_config.json with the default configuration.
Closed a month ago
Colorize not working with phi-3
Closed a month ago, 3 comments
Curl response got truncated
Closed a month ago, 1 comment
Model type phi3 not supported
Closed a month ago, 1 comment
mutex lock failed: Invalid argument
Closed a month ago, 2 comments
convert.py cannot find params.json
Closed a month ago, 1 comment
In llms/mlx_lm/tuner/trainer.py, the function default_loss calculates the loss over the whole question-answer pair instead of over the answer alone
Closed a month ago, 2 comments
Llama 3 70B 4-bit seems to be generating garbage, but 8-bit works fine
Closed a month ago, 22 comments
ValueError: [dequantize] The matrix should be given as a uint32
Closed a month ago, 1 comment
Feature request: Add a `--use-temp-file` option in mlx_lm.convert
Closed a month ago, 1 comment
Can't build whisper; pip not up to date, rust compiler missing
Closed a month ago, 7 comments
Unable to convert CogVLM due to the model not existing.
Updated a month ago, 1 comment
GPU Usage dropping before completion ends
Updated a month ago, 10 comments
mistral convert.py quantize error
Closed a month ago, 1 comment
Trouble with Triplet Loss Implementation in MLX
Closed a month ago, 2 comments
Mixtral Bug
Closed a month ago, 2 comments
mlx_lm.lora training doesn't report anything
Closed a month ago, 8 comments
Whisper: Using large-v3 instead of tiny model for test fails
Closed a month ago, 1 comment
DPO training
Closed a month ago, 1 comment
Can we integrate Meta's schedule-free learning rate into MLX fine-tuning?
Updated 2 months ago, 1 comment
Failed to convert gemma model.
Closed 2 months ago, 1 comment
Interesting new fine-tuning approach from Stanford: ReFT
Updated 2 months ago