Repositories under the flan-t5 topic:
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
Official implementation of the paper "CoEdIT: Text Editing by Task-Specific Instruction Tuning" (EMNLP 2023)
LLMs4OL: Large Language Models for Ontology Learning
This repository contains the code to train Flan-T5 with Alpaca instructions and low-rank adaptation (LoRA).
Empower your LLM to do more than you ever thought possible with these state-of-the-art prompt templates.
Tools and our test data developed for the HackAPrompt 2023 competition
A template Next.js app for running language models like FLAN-T5 with Replicate's API
Use AI to personify books, so that you can talk to them 🙊
Text classification on the IMDB dataset using the Flan-T5 large language model, achieving 93% accuracy.
The TABLET benchmark for evaluating instruction learning with LLMs for tabular prediction.
In-context learning, fine-tuning, and RLHF on Flan-T5
Fine-tuning of the Flan-T5 LLM for text classification
My solutions to the lab assignments in the Generative AI with Large Language Models course offered by Amazon Web Services.
Document summarization app built with a large language model (LLM) and the LangChain framework. Uses a pre-trained T5 model and its tokenizer from the Hugging Face Transformers library in a summarization pipeline that generates summaries.
This repository contains the lab work for Coursera course on "Generative AI with Large Language Models".
Tutorial for training a Flan-T5-based model using Flax on a GCP TPU
Research POC on the mitigation of bias in large language models (FLAN-T5 and Bloomz) through fine-tuning.
Master's thesis on Large Language Models for Document Visual Question Answering
Summarize Long Document with Pretrained sequence-to-sequence LM with long-range attention!
A preliminary investigation of ontology matching (OM) with large language models (LLMs).
A project built for an assessment and shared here: a scraper that scrapes Wikipedia pages and generates questions and answers.
Socratic models for multimodal reasoning & image captioning
Symbol Team model for PAN@AP 2023 shared task on Profiling Cryptocurrency Influencers with Few-shot Learning
This repository covers core tasks underlying modern generative AI concepts, focusing on three coding exercises with large language models. Further details are given in the README.md file.
Using open-source LLMs like FLAN-T5, built a dialogue summarization model and fine-tuned it on the DialogSum dataset from Hugging Face
Performing dialogue summarisation using generative AI, while comparing the effects of zero-shot, one-shot, and few-shot prompt engineering. These techniques are used to improve the completions of large language models (LLMs).
AI Assistant for Customer Support
The LLM-based medical chatbot, powered by the Llama-2-7b-chat-hf model from Meta and implemented within the Langchain framework, offers personalized healthcare support.
A Gradio frontend for Google's Flan-T5 Large language model; it can also be adjusted for other sizes.
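Several of the repositories above follow the same basic pattern: load a pretrained Flan-T5 checkpoint through the Hugging Face Transformers pipeline and generate summaries (or other completions) from it. A minimal sketch of that pattern is below; the model name and generation settings are illustrative assumptions, not taken from any specific repository listed here.

```python
# Minimal sketch: summarization with a pretrained Flan-T5 model via the
# Hugging Face Transformers pipeline. "google/flan-t5-base" and the
# length limits are illustrative choices, not from any repo above.
from transformers import pipeline

def summarize(text: str, model_name: str = "google/flan-t5-base") -> str:
    """Summarize `text` with a seq2seq summarization pipeline."""
    summarizer = pipeline("summarization", model=model_name)
    output = summarizer(text, max_length=60, min_length=10, do_sample=False)
    return output[0]["summary_text"]
```

Calling `summarize(...)` downloads the model weights on first use; smaller checkpoints such as `google/flan-t5-small` trade summary quality for speed and memory.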