Silvia Terragni's starred repositories
safety-tuned-llamas
ICLR 2024 paper showing properties of safety tuning and exaggerated safety.
plip
Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (Nature Medicine). PLIP is a large-scale pre-trained model that extracts visual and language features from pathology images and text descriptions. The model is a fine-tuned version of the original CLIP model.
prompt-based-user-simulator
In-Context Learning User Simulators for Task-Oriented Dialog Systems
vision-language-models-are-bows
Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR 2023
sandbox-conversant-lib
Conversational AI tooling & personas built on Cohere's LLMs
nlg-metricverse
[COLING22] An End-to-End Library for Evaluating Natural Language Generation
evalRS-CIKM-2022
Official Repository for EvalRS @ CIKM 2022: a Rounded Evaluation of Recommender Systems
contextualized-topic-models
A Python package for contextualized topic modeling. CTMs combine contextualized embeddings (e.g., BERT) with topic models to produce coherent topics. Published at EACL and ACL 2021 (Bianchi et al.).
twitter-demographer
A Python package to enrich Twitter data
clip-italian
CLIP (Contrastive Language–Image Pre-training) for Italian
BART-TL-topic-label-generation
Implementation and helper scripts for the BART-TL model - https://www.aclweb.org/anthology/2021.eacl-main.121/
topic-labeling
A framework that applies topic models to a text corpus and then assigns labels to the generated topics.