Qoboty's repositories

Amphion

Amphion (/æmˈfaɪən/) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

Bert-VITS2-ext

Expression and animation experiments based on Bert-VITS2.

Language: Python · License: AGPL-3.0 · Stargazers: 0 · Issues: 0

best-rq-pytorch

Implementation of BEST-RQ, a model for self-supervised learning of speech signals using a random projection quantizer, in PyTorch.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

cambrian

Cambrian-1 is a family of multimodal LLMs with a vision-centric design.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

clash

A rule-based tunnel in Go.

Language: Go · License: GPL-3.0 · Stargazers: 0 · Issues: 0

CosyVoice

An LLM-based TTS model providing full-stack inference, training, and deployment capabilities.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

fish-speech

Brand new TTS solution

License: NOASSERTION · Stargazers: 0 · Issues: 0

HierSpeechpp

The official implementation of HierSpeech++

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

Inpaint-Anything

Inpaint anything using Segment Anything and inpainting models.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

llark

Code for the paper "LLark: A Multimodal Foundation Model for Music" by Josh Gardner, Simon Durand, Daniel Stoller, and Rachel Bittner.

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0

ltu

Code, datasets, and pretrained models for "Listen, Think, and Understand", an audio and speech large language model.

Language: Python · Stargazers: 0 · Issues: 0

magic-animate

MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 0

MetaMath

MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

OpenVoice

Instant voice cloning by MyShell

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0

parler-tts

Inference and training library for high-quality TTS models.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

SoundStorm

A reproduction of Google's SoundStorm.

Language: Python · Stargazers: 0 · Issues: 0

stable-audio-tools

Generative models for conditional audio generation

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

StyleTTS2

StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

TTS-xtts

🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production

Language: Python · License: MPL-2.0 · Stargazers: 0 · Issues: 0

UMOE-Scaling-Unified-Multimodal-LLMs

Code for "Uni-MoE: Scaling Unified Multimodal Models with Mixture of Experts".

Language: Python · Stargazers: 0 · Issues: 0

UniAudio

The open-source code of UniAudio.

Language: Python · Stargazers: 0 · Issues: 0

UniCATS-CTX-txt2vec

CTX-txt2vec, the acoustic model in UniCATS

Language: Python · Stargazers: 0 · Issues: 0

UniCATS-CTX-vec2wav

Code for CTX-vec2wav in UniCATS

Language: Python · Stargazers: 0 · Issues: 0

Video-LLaVA

Video-LLaVA: Learning United Visual Representation by Alignment Before Projection

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

vocode-python

🤖 Build voice-based LLM agents. Modular + open source.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

VoiceCraft

Zero-Shot Speech Editing and Text-to-Speech in the Wild

Language: Jupyter Notebook · License: NOASSERTION · Stargazers: 0 · Issues: 0