There are 33 repositories under the responsible-ai topic.
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
🐢 Open-Source Evaluation & Testing library for LLM Agents
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
Responsible AI Toolbox is a suite of model- and data-exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and to take better data-driven actions.
moDel Agnostic Language for Exploration and eXplanation (JMLR 2018; JMLR 2021)
Deliver safe & effective language models
A toolkit that streamlines and automates the generation of model cards
💡 Adversarial attacks on explanations and how to defend them
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments
FIBO is the first open-source, JSON-native, state-of-the-art text-to-image model, built for controllable, predictable, and legally safe image generation.
A detailed summary of "Designing Machine Learning Systems" by Chip Huyen. The book gives an end-to-end view of all the steps required to build and operate ML products in production. It is a must-read for ML practitioners and software engineers transitioning into ML.
Carefully curated list of awesome data science resources.
Open-source Gen AI testing platform. Collaborative test management that turns expertise into comprehensive automated testing. Your team defines expectations, Rhesis generates thousands of test scenarios. Know what you ship.
[NeurIPS 2023] Sentry-Image: Detect Any AI-generated Images
Reading list for adversarial perspective and robustness in deep reinforcement learning.
Official code repo for the O'Reilly Book - Machine Learning for High-Risk Applications
This is an open-source tool to assess and improve the trustworthiness of AI systems.
A curated list of awesome academic research, books, code of ethics, courses, databases, data sets, frameworks, institutes, maturity models, newsletters, principles, podcasts, regulations, reports, responsible scale policies, tools and standards related to Responsible, Trustworthy, and Human-Centered AI.
An assessment framework for responsible and trustworthy data science (Référentiel d'évaluation data science responsable et de confiance)
[ICCV 2023 Oral, Best Paper Finalist] ITI-GEN: Inclusive Text-to-Image Generation
Python library for implementing Responsible AI mitigations.
A collection of news articles, books, and papers on Responsible AI cases. The purpose is to study these cases and learn from them to avoid repeating the failures of the past.
PyTorch package to train and audit ML models for Individual Fairness
Oracle Guardian AI Open Source Project is a library consisting of tools to assess fairness/bias and privacy of machine learning models and data sets.
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
A curated list of explainability-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights into the explainability implications, challenges, and advancements surrounding these powerful models.
Responsible Prompting is an LLM-agnostic tool that dynamically supports users in crafting prompts that embed responsible intentions and helps them avoid harmful, adversarial prompts.
Responsible AI Workshop: a series of tutorials & walkthroughs illustrating how to put responsible AI into practice
When the stakes are high, intelligence is only half the equation - reliability is the other ⚠️
Official code of "StyleT2I: Toward Compositional and High-Fidelity Text-to-Image Synthesis" (CVPR 2022)
Open-source toolkit for responsible AI: CLI + SDK to scan code, collect evidence, and generate model cards, risk files, evals, and RAG indexes.
An open-content programming cookbook and a proof of concept for the responsible use of AI. Collaborative, polyglot, and multilingual.
This framework assists in documenting datasets to promote transparency, helping dataset creators and consumers make informed decisions about whether specific datasets meet their needs and what limitations they need to consider.