Kiran CHHATRE (kiranchhatre)

Location: Stockholm, Sweden

Home Page: https://www.kth.se/profile/chhatre/

Twitter: @ChhatreKiran

Kiran CHHATRE's repositories

amuse

[CVPR 2024] AMUSE: Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion

Language: Python | License: NOASSERTION | Stargazers: 75 | Issues: 11 | Issues: 9

BEAM-Bayes-Opt

[LOD 2022] Parallel Bayesian Optimization of Multi-agent Systems

Language: Python | Stargazers: 1 | Issues: 0 | Issues: 0

Imagery_Analytics_GeoSatellite_Data

Geotiff python analytics to compute NDVI (normalized difference vegetation index)

Language: Jupyter Notebook | Stargazers: 1 | Issues: 0 | Issues: 0
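For orientation, the NDVI computation this repository performs can be sketched as follows. This is a minimal illustration assuming a GeoTIFF with red and near-infrared bands read via rasterio; the file name and band order are hypothetical, not taken from the notebook.

```python
# Minimal NDVI sketch (hypothetical file name and band order; the repository
# may read its bands differently).
import numpy as np
import rasterio

with rasterio.open("scene.tif") as src:          # hypothetical GeoTIFF path
    red = src.read(3).astype("float64")          # assume band 3 = red
    nir = src.read(4).astype("float64")          # assume band 4 = near-infrared

# NDVI = (NIR - Red) / (NIR + Red), values fall in [-1, 1]
ndvi = np.where((nir + red) == 0, 0.0, (nir - red) / (nir + red))
print("mean NDVI:", ndvi.mean())
```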

kiranchhatre.github.io

Data Science Portfolio

Language: SCSS | License: NOASSERTION | Stargazers: 1 | Issues: 0 | Issues: 0

ABB_robot_object_detection_module

ABB IRB 7600 object detection module for ARC online platform using OpenCV

Language: Python | Stargazers: 0 | Issues: 0 | Issues: 0
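A generic OpenCV detection loop along these lines is sketched below. The repository's actual pipeline for the IRB 7600 cell and the ARC platform is not reproduced here; the thresholding-plus-contours approach and the file names are illustrative assumptions only.

```python
# Generic contour-based detection sketch with OpenCV; treat as illustrative,
# not as the repository's actual method.
import cv2

img = cv2.imread("workcell.png")                      # hypothetical camera frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 500:                      # ignore small blobs
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.png", img)
```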

beam

The Framework for Modeling Behavior, Energy, Autonomy, and Mobility in Transportation Systems

Language: Scala | License: NOASSERTION | Stargazers: 0 | Issues: 2 | Issues: 0

Co-Speech_Gesture_Generation

An implementation of "Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots".

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 1 | Issues: 0

deep-motion-editing

An end-to-end library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]

Language: Python | License: BSD-2-Clause | Stargazers: 0 | Issues: 0 | Issues: 0

FaceFormer

[CVPR 2022] FaceFormer: Speech-Driven 3D Facial Animation with Transformers

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

genea_numerical_evaluations

Scripts for numerical evaluations for the GENEA Gesture Generation Challenge

Language: Python | Stargazers: 0 | Issues: 1 | Issues: 0

genea_visualizer

This repository provides scripts that can be used to visualize BVH files. The scripts were developed for the GENEA Challenge 2022 and enable reproducing the visualizations used for the challenge stimuli. The server consists of several containers that are launched together with docker-compose, and a Blender-ready minimal release is also provided.

License: CC-BY-4.0 | Stargazers: 0 | Issues: 0 | Issues: 0

hemvip

Experiment framework based on webMUSHRA for comparing videos in a MUSHRA-like way

Language: JavaScript | License: NOASSERTION | Stargazers: 0 | Issues: 1 | Issues: 0

meshtalk

Code for MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement

License: NOASSERTION | Stargazers: 0 | Issues: 0 | Issues: 0

MICA

MICA - Towards Metrical Reconstruction of Human Faces [ECCV2022]

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 1 | Issues: 0

MLP_pytorch

A multilayer perceptron model for MNIST data using PyTorch.

Language: Jupyter Notebook | Stargazers: 0 | Issues: 0 | Issues: 0
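As a rough sketch of the kind of model such a notebook trains, a minimal PyTorch MLP for MNIST-sized inputs might look like the following; the layer sizes and the dummy batch are illustrative assumptions, not values from the notebook.

```python
# Minimal MLP sketch for MNIST in PyTorch (layer sizes are illustrative).
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                  # 28x28 image -> 784-dim vector
            nn.Linear(28 * 28, 256),
            nn.ReLU(),
            nn.Linear(256, 10),            # 10 digit classes
        )

    def forward(self, x):
        return self.net(x)

model = MLP()
logits = model(torch.randn(8, 1, 28, 28))  # dummy batch of 8 images
print(logits.shape)                        # torch.Size([8, 10])
```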

motion-diffusion-model

The official PyTorch implementation of the paper "Human Motion Diffusion Model"

License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

MotionDiffuse

MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model

Stargazers: 0 | Issues: 0 | Issues: 0

neural-head-avatars

Official PyTorch implementation of "Neural Head Avatars from Monocular RGB Videos"

Stargazers: 0 | Issues: 0 | Issues: 0

PyMO

A library for machine learning research on motion capture data

Language: Python | License: MIT | Stargazers: 0 | Issues: 1 | Issues: 0

register

Free subdomains for personal sites, open-source projects, and more.

License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0

Robocar

Continental Challenge

Language: Jupyter Notebook | Stargazers: 0 | Issues: 1 | Issues: 0

serverless-guestbook

A serverless guestbook web application and API built with Cloud Functions

Language: JavaScript | License: NOASSERTION | Stargazers: 0 | Issues: 1 | Issues: 0

sk2torch

Convert scikit-learn models to PyTorch modules

Language: Python | Stargazers: 0 | Issues: 0 | Issues: 0
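A typical use of sk2torch looks roughly like the sketch below, based on the upstream project's wrap() entry point; the toy data, model choice, and dtype handling are assumptions, not details of this fork.

```python
# Hypothetical usage sketch of sk2torch.wrap(); exact behavior may differ
# between versions of the library.
import numpy as np
import torch
import sk2torch
from sklearn.linear_model import LogisticRegression

# Train a small scikit-learn model on toy data.
x = np.random.randn(100, 3)
y = (x[:, 0] > 0).astype(np.int64)
sk_model = LogisticRegression().fit(x, y)

# Wrap it as a torch.nn.Module and run inference with tensors.
torch_model = sk2torch.wrap(sk_model)
with torch.no_grad():
    preds = torch_model.predict(torch.from_numpy(x).double())
print(preds[:5])
```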

spectre

Official PyTorch implementation of SPECTRE: Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 1 | Issues: 0

Speech_driven_gesture_generation_with_autoencoder

This is the official implementation for IVA '19 paper "Analyzing Input and Output Representations for Speech-Driven Gesture Generation".

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 1 | Issues: 0

transflower-lightning

Multimodal transformer

Language: Python | License: MIT | Stargazers: 0 | Issues: 1 | Issues: 0

voca

This codebase demonstrates how to synthesize realistic 3D character animations given an arbitrary speech signal and a static character mesh.

Language: Python | Stargazers: 0 | Issues: 1 | Issues: 0