This project is part of a Bachelor's thesis at Shahid Beheshti University in Tehran, focused on optimizing LLVM intermediate representation (IR) using large language models (LLMs). The goal is to leverage the capabilities of LLMs to improve the performance and efficiency of code represented in LLVM IR.
The code samples used in this project are sourced from the Exebench and ComPile datasets. They were compiled to LLVM IR and loop-optimized before further processing.
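As a rough sketch of how such a conversion can be done (the exact commands and pass pipeline used for this dataset are an assumption, and pass names vary across LLVM versions), a C source file can be lowered to LLVM IR with Clang and then run through LLVM's loop passes with `opt`:

```shell
# Emit textual LLVM IR (.ll) from a C source file.
clang -S -emit-llvm -O1 example.c -o example.ll

# Apply loop-oriented optimization passes with the new pass manager.
# The chosen passes here are illustrative, not the dataset's exact pipeline.
opt -S -passes='loop-simplify,licm,loop-unroll' example.ll -o example.opt.ll
```

The resulting `.ll` files contain human-readable IR, which is the form consumed by the models below.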
- Dataset URL: [https://huggingface.co/datasets/maedehm02/llvm-ir-loop-optimized/tree/main](#)
- Exebench Dataset: [Link to Exebench](#)
- ComPile Dataset: [Link to ComPile](#)
- Code-gemma: [maedehm02/code-gemma-Code-Instruct-Finetuned](#)
- Llama 3: [maedehm02/Llama3-Code-Instruct-Finetuned](#)
- Code-Llama: [maedehm02/code-llama-Code-Instruct-Finetuned](#)