Meta Research's repositories
segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading trained model checkpoints, and example notebooks showing how to use the model.
detectron2
Detectron2 is a platform for object detection, segmentation, and other visual recognition tasks.
seamless_communication
Foundational Models for State-of-the-Art Speech and Text Translation
Kats
Kats (Kit to Analyze Time Series) is a lightweight, easy-to-use, generalizable, and extendable framework for time series analysis, from understanding key statistics and characteristics and detecting change points and anomalies to forecasting future trends.
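One of the tasks Kats covers is change-point detection. As a conceptual illustration of the idea (plain Python, not the Kats API; the threshold-free single-change-point variant and the synthetic series below are assumptions for the example), a CUSUM-style detector looks for the index where the cumulative sum of deviations from the mean peaks:

```python
# Conceptual sketch of CUSUM change-point detection, one of the
# techniques Kats provides (via its CUSUMDetector). This is a
# plain-Python illustration of the idea, not Kats code.

def cusum_change_point(series):
    """Return the index maximizing the cumulative-sum statistic,
    i.e. the most likely single change point in the mean."""
    n = len(series)
    mean = sum(series) / n
    # Cumulative sum of deviations from the global mean.
    s, cusum = 0.0, []
    for x in series:
        s += x - mean
        cusum.append(s)
    # The change point is where |S_i| peaks.
    return max(range(n), key=lambda i: abs(cusum[i]))

# A series whose mean jumps from 0 to 5 at index 50.
data = [0.0] * 50 + [5.0] * 50
print(cusum_change_point(data))  # → 49 (last index before the jump)
```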
habitat-sim
A flexible, high-performance 3D simulator for Embodied AI research.
habitat-lab
A modular high-level library to train embodied AI agents across a variety of tasks and environments.
schedule_free
Schedule-Free Optimization in PyTorch
home-robot
Mobile manipulation research tools for roboticists
OrienterNet
Source code for the paper "OrienterNet: Visual Localization in 2D Public Maps with Neural Matching"
projectaria_tools
projectaria_tools is an open-source C++/Python toolkit for interacting with Project Aria data
llm-transparency-tool
The LLM Transparency Tool (LLM-TT) is an open-source interactive toolkit for analyzing the internal workings of Transformer-based language models. *Check out the demo at* https://huggingface.co/spaces/facebook/llm-transparency-tool-demo
HolisticTraceAnalysis
A library to analyze PyTorch traces.
fbpcs
FBPCS (Facebook Private Computation Solutions) leverages secure multi-party computation (MPC) to output aggregated data without making unencrypted, readable data available to the other party or to any third parties. Facebook provides impression and opportunity data, and the advertiser provides conversion/outcome data.
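The core MPC idea behind this — computing an aggregate without either party revealing its inputs — can be illustrated with additive secret sharing. The sketch below is a conceptual toy in plain Python, not FBPCS code, and the modulus and sample values are made up:

```python
# Minimal sketch of additive secret sharing, a building block of MPC
# protocols like those FBPCS relies on: each party holds only a random
# share, yet summing the shares reconstructs the aggregate.
import random

MOD = 2**61 - 1  # arithmetic is done modulo a large prime

def share(secret, n_parties=2):
    """Split a secret into n additive shares modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Two parties add their private values share-wise: neither learns the
# other's input, but reconstruction yields the correct aggregate.
a_shares = share(120)   # e.g. one party's conversion count
b_shares = share(45)    # e.g. the other party's count
summed = [(x + y) % MOD for x, y in zip(a_shares, b_shares)]
print(reconstruct(summed))  # → 165
```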
generative-recommenders
Repository hosting code used to reproduce results in "Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations" (https://arxiv.org/abs/2402.17152).
MultiModalExplorer
Visualize multimodal embedding spaces. The first goal is to quickly get the lay of the land of any embedding space, then to scroll, zoom, and search (via any modality: text, image, audio, etc.).
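The search step in such an explorer boils down to nearest-neighbor lookup by cosine similarity: encode the query (from any modality) and rank indexed embeddings against it. A self-contained sketch, where the 3-d vectors and item labels are invented stand-ins for real encoder outputs:

```python
# Sketch of cosine-similarity nearest-neighbor search over an embedding
# index, the kind of lookup an embedding-space explorer performs.
# The tiny 3-d "embeddings" below are fabricated for illustration.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest(query, index, k=2):
    """Return the k item labels most similar to the query embedding."""
    ranked = sorted(index, key=lambda name: cosine(query, index[name]),
                    reverse=True)
    return ranked[:k]

index = {
    "cat photo": [0.9, 0.1, 0.0],
    "dog photo": [0.8, 0.3, 0.1],
    "car audio": [0.0, 0.2, 0.9],
}
print(nearest([1.0, 0.0, 0.0], index))  # → ['cat photo', 'dog photo']
```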
multisense_consistency
Repository for the paper "From Form(s) to Meaning: Probing the Semantic Depths of Language Models Using Multisense Consistency"