Lingyi's starred repositories
Expedition
Expedition suite for computing, visualizing, and analyzing single-cell alternative splicing data
rnacounter
A counter of HTS (high-throughput sequencing) reads
tasic2018analysis
Scripts related to VISp and ALM scRNA-seq and FISH analysis for Tasic et al., 2018.
sccaf_example
Examples, data preprocessing, and parameter benchmarks
fall2019-project4-sec1-grp9
fall2019-project4-sec1-grp9-1 created by GitHub Classroom
inferCNV_examples
Additional examples for use with inferCNV
ImmuneResistance
This resource provides the code developed in the study of Jerby-Arnon et al., "Single-cell RNA-seq of melanoma ecosystems reveals sources of T cell exclusion linked to immunotherapy clinical outcomes."
SingleSplice
Algorithm for detecting alternative splicing in a population of single cells. See details in Welch et al., Nucleic Acids Research 2016: http://nar.oxfordjournals.org/content/early/2016/01/05/nar.gkv1525.full
face_recognition
The world's simplest facial recognition API for Python and the command line
emotion_classifier
An emotion classifier based on the Kaggle FER2013 dataset
Fall2019-proj3-sec1-grp3
Fall2019-proj3-sec1-grp3 created by GitHub Classroom
Emotion-Detection-in-Videos
The aim of this work is to recognize six emotions (happiness, sadness, disgust, surprise, fear, and anger) from human facial expressions extracted from videos. To achieve this, we consider people of different ethnicities, ages, and genders, each of whom reacts very differently when expressing emotions. We collected a dataset of 149 short videos of both females and males expressing each of the emotions described above. The dataset was built by students, each of whom recorded a video expressing all the emotions with no directions or instructions at all. Some videos included more body parts than others; others had objects in the background or even different lighting setups. We wanted the data to be as general and unrestricted as possible, so it would be a good indicator of our main goal.

The script detect_faces.py detects faces in each video, which we saved at a resolution of 240x320. Because this step produces shaky videos, we then stabilized all of them; this can be done in code, and free online stabilizers are also available. We then ran the stabilized videos through emotion_classification_videos_faces.py, in which we developed a method to extract features based on histograms of dense optical flow (HOF) and used a support vector machine (SVM) classifier to tackle the recognition problem.

For each video we extracted optical flow at every frame. Optical flow measures the motion, relative to an observer, between two frames at each point, so each point in the image carries two values describing the motion vector between the two frames: its magnitude and its angle. Since our videos have a resolution of 240x320, each frame yields a feature descriptor of dimensions 240x320x2, and the full video descriptor has dimension #frames x 240 x 320 x 2. To make videos of different lengths comparable, we summarize each video into a single descriptor by computing a histogram of the optical flows: we separate the extracted flows into categories and count the number of flows in each. In more detail, we split the scene into an s-by-s grid of bins (s = 10 here) to record the location of each feature, and categorize each flow's direction into one of the 8 motion directions considered in this problem. Counting, per grid cell, the number of flows in each direction bin yields an s x s x 8 descriptor per frame. The video-level summary is then either the average of the histograms in each grid cell across frames (average pooling) or their per-cell maximum (max pooling).

For classification we used an SVM with a non-linear kernel to recognize the facial expressions. We also considered a Naïve Bayes classifier, but SVMs are widely known to outperform it in computer vision. A confusion matrix can be plotted to present the results more clearly.
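Below is a minimal, hypothetical sketch of the HOF feature extraction and SVM classification pipeline described above; the function names, the Farneback optical-flow parameters, and the RBF kernel are illustrative assumptions, not the repository's actual code.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

S_BINS = 10    # spatial grid: s x s cells (s = 10 in the description)
DIR_BINS = 8   # 8 motion-direction bins

def hof_descriptor(frames):
    """HOF descriptor for one video: max pooling over per-frame histograms.

    frames: list of grayscale frames (H x W uint8 arrays).
    Returns a flat (S_BINS * S_BINS * DIR_BINS,) vector.
    """
    hists = []
    for prev, nxt in zip(frames, frames[1:]):
        # Dense optical flow: an (H, W, 2) array of (dx, dy) per pixel.
        # (These Farneback parameters are OpenCV's usual example values.)
        flow = cv2.calcOpticalFlowFarneback(
            prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        _, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # angle in [0, 2*pi)
        h, w = ang.shape
        # Quantize pixel locations into the s x s grid and flow angles into
        # 8 directions, then count flows per (cell, direction) bin.
        ys = np.arange(h) * S_BINS // h
        xs = np.arange(w) * S_BINS // w
        dirs = np.minimum((ang / (2 * np.pi) * DIR_BINS).astype(int),
                          DIR_BINS - 1)
        idx = (ys[:, None] * S_BINS + xs[None, :]) * DIR_BINS + dirs
        hists.append(np.bincount(idx.ravel(),
                                 minlength=S_BINS * S_BINS * DIR_BINS))
    # Max pooling across frames gives a fixed-length video descriptor;
    # np.mean here would give the average-pooling variant instead.
    return np.max(np.stack(hists), axis=0)

# Classification: a non-linear (RBF-kernel) SVM over pooled descriptors.
# X = np.stack([hof_descriptor(v) for v in videos]); y = emotion_labels
# clf = SVC(kernel="rbf").fit(X, y)
```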
microexpnet
MicroExpNet: An Extremely Small and Fast Model For Expression Recognition From Frontal Face Images
Facial-Expression-Detection
A facial expression (emotion) detector that can tell whether a person is sad, happy, angry, and so on from their face alone. This repository can be used to carry out such a task.
Facial-Expression-Recognition
Facial expression recognition in TensorFlow: detects faces in video and recognizes the expression (emotion).
Facial-Expression-Recognition.Pytorch
A CNN-based PyTorch implementation of facial expression recognition (FER2013 and CK+), achieving 73.112% (state of the art) on FER2013 and 94.64% on the CK+ dataset
landmark-detection
Four landmark detection algorithms, implemented in PyTorch.
2D-and-3D-face-alignment
This repository implements a demo of the networks described in the paper "How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)".
face-alignment
:fire: A 2D and 3D face alignment library built using PyTorch