Byeol-hee Kim (kimbyeolhee)


Company: AMC

Location: Seoul, Korea


Byeol-hee Kim's starred repositories

llama3-from-scratch

Llama 3 implementation, one matrix multiplication at a time

Language: Jupyter Notebook · License: MIT · Stars: 11,702

tech-interview-for-junior

Technical interview knowledge that a junior backend developer should possess.

Stars: 167

RandStainNA

RandStainNA: Simple and efficient augmentations for histology [MICCAI 2022]

Language: Python · License: MIT · Stars: 49

self-supervised-histopathology

Pretrained model for self-supervised histopathology

License: MIT · Stars: 103

pytorch-grad-cam

Advanced AI explainability for computer vision. Supports CNNs, Vision Transformers, classification, object detection, segmentation, image similarity, and more.

Language: Python · License: MIT · Stars: 9,993

torchtune

A native PyTorch library for LLM fine-tuning

Language: Python · License: BSD-3-Clause · Stars: 3,723

HE2RNA_code

Train a model to predict gene expression from histology slides.

Language: Python · License: GPL-3.0 · Stars: 89

ALBEF

Code for ALBEF: a new vision-language pre-training method

Language: Python · License: BSD-3-Clause · Stars: 1,468

honeybee

Official implementation of project Honeybee (CVPR 2024)

Language: Python · License: NOASSERTION · Stars: 402

WikiChat

WikiChat curbs the hallucinations of large language models by retrieving data from Wikipedia.

Language: Python · License: Apache-2.0 · Stars: 941

Asclepius

Official code for "Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes"

Language: Python · Stars: 81

ml-engineering

Machine Learning Engineering Open Book

Language: Python · License: CC-BY-SA-4.0 · Stars: 10,356

mistral-inference

Official inference library for Mistral models

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 9,387

lm-evaluation-harness

A framework for few-shot evaluation of language models.

Language: Python · License: MIT · Stars: 6,051

TableLlama

[NAACL'24] Dataset, code and models for "TableLlama: Towards Open Large Generalist Models for Tables".

Language: Python · License: MIT · Stars: 100

yarn

YaRN: Efficient Context Window Extension of Large Language Models

Language: Python · License: MIT · Stars: 1,279

llama2-fine-tune

Scripts for fine-tuning Llama 2 via SFT and DPO.

Language: Python · Stars: 171

large-scale-lm-tutorials

Large-scale language modeling tutorials with PyTorch

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 279

PMC-LLaMA

Official code for "PMC-LLaMA: Towards Building Open-source Language Models for Medicine"

Language: Python · Stars: 564

lit-llama

Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache-2.0 licensed.

Language: Python · License: Apache-2.0 · Stars: 5,911

peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Language: Python · License: Apache-2.0 · Stars: 15,339

stanford_alpaca

Code and documentation to train Stanford's Alpaca models and generate the data.

Language: Python · License: Apache-2.0 · Stars: 29,231

awesome-Vision-and-Language-Pre-training

Recent Advances in Vision and Language Pre-training (VLP)

License: Apache-2.0 · Stars: 285

flash-attention

Fast and memory-efficient exact attention

Language: Python · License: BSD-3-Clause · Stars: 12,754

GPTScore

Source code for the paper "GPTScore: Evaluate as You Desire"

Language: Python · Stars: 217

TinyLlama

The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.

Language: Python · License: Apache-2.0 · Stars: 7,446

SAM-Med2D

Official implementation of SAM-Med2D

Language: Jupyter Notebook · License: Apache-2.0 · Stars: 815