Sergio (sergiobr)

Location: Brazil

Sergio's starred repositories

sd-webui-deforum

Deforum extension for AUTOMATIC1111's Stable Diffusion webui

Language: Python · License: NOASSERTION · Stars: 2635 · Issues: 0

Text2Video-Zero

[ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators

Language: Python · License: NOASSERTION · Stars: 3929 · Issues: 0

img2video

Use images to seed video generation.

Language: Python · License: AGPL-3.0 · Stars: 18 · Issues: 0

Text-To-Video-Finetuning

Finetune ModelScope's Text To Video model using Diffusers 🧨

Language: Python · License: MIT · Stars: 646 · Issues: 0

sd-webui-text2video

Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies

Language: Python · License: NOASSERTION · Stars: 1273 · Issues: 0

TempoFunk

(pseudo) Make-a-Video with stable diffusion

Language: Python · License: AGPL-3.0 · Stars: 24 · Issues: 0

make-a-stable-diffusion-video

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch; a fork with pseudo-3D video support

Language: Python · License: Apache-2.0 · Stars: 97 · Issues: 0

composer

Official implementation of "Composer: Creative and Controllable Image Synthesis with Composable Conditions"

License: MIT · Stars: 1531 · Issues: 0

Tune-A-Video

[ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation

Language: Python · License: Apache-2.0 · Stars: 4154 · Issues: 0

docker-diffusers-api

Diffusers / Stable Diffusion in docker with a REST API, supporting various models, pipelines & schedulers.

Language: Python · License: MIT · Stars: 199 · Issues: 0

docker-diffusers-api

Diffusers / Stable Diffusion in docker with a REST API, supporting various models, pipelines & schedulers.

Language: Python · License: MIT · Stars: 1 · Issues: 0

DiffTalk

[CVPR2023] The implementation for "DiffTalk: Crafting Diffusion Models for Generalized Audio-Driven Portraits Animation"

Language: Python · Stars: 423 · Issues: 0

diffused-heads

Official repository for Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation

Language: Python · License: NOASSERTION · Stars: 453 · Issues: 0

Thin-Plate-Spline-Motion-Model

[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.

Language: Jupyter Notebook · License: MIT · Stars: 3393 · Issues: 0

OpenEmu

🕹 Retro video game emulation for macOS

Language: Swift · Stars: 15998 · Issues: 0

nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs.

Language: Python · License: MIT · Stars: 34995 · Issues: 0

agentic

AI agent stdlib that works with any LLM and TypeScript AI SDK.

Language: TypeScript · License: MIT · Stars: 16040 · Issues: 0

matryodshka

Main repo for ECCV 2020 paper MatryODShka: Real-time 6DoF Video View Synthesis using Multi-Sphere Images. visual.cs.brown.edu/matryodshka

Language: Python · Stars: 90 · Issues: 0

Skin-Clothes-Hair-Segmentation-using-SMP

A model that performs semantic segmentation of three classes (skin, clothes, hair)

Language: Jupyter Notebook · License: MIT · Stars: 31 · Issues: 0

ebsynth_utility

AUTOMATIC1111 UI extension for creating videos using img2img and ebsynth.

Language: Python · Stars: 1218 · Issues: 0

VGGFace2-HQ

A high resolution face dataset for face editing purpose

Language: Python · License: NOASSERTION · Stars: 398 · Issues: 0

open_clip

An open source implementation of CLIP.

Language: Python · License: NOASSERTION · Stars: 9295 · Issues: 0

Paddle-CLIP

A PaddlePaddle version implementation of CLIP of OpenAI.

Language: Python · License: Apache-2.0 · Stars: 65 · Issues: 0

train-CLIP

A PyTorch Lightning solution to training OpenAI's CLIP from scratch.

Language: Python · License: MIT · Stars: 638 · Issues: 0

sd-parseq

Parameter sequencer for Stable Diffusion

Language: TypeScript · License: MIT · Stars: 349 · Issues: 0