sgy1664

sgy1664's repositories

3D-image-warping-using-Nadaraya-Watson-non-linear-regression

Deforming a 3D image according to a given deformation vector field with Nadaraya-Watson regression; the third repo in a series of three associated with the research article "Prediction of the motion of chest internal points using an RNN trained with RTRL for latency compensation in lung cancer radiotherapy" (Pohl et al., Comput Med Imaging Graph, 2021)
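Nadaraya-Watson regression over a sparse deformation vector field can be sketched as follows. This is a minimal NumPy illustration with a Gaussian kernel; the function name, bandwidth, and array layout are my own assumptions, not the repository's API:

```python
import numpy as np

def nadaraya_watson_dvf(points, vectors, query, h=1.0):
    """Interpolate a deformation vector field at query points with
    Nadaraya-Watson kernel regression (Gaussian kernel, bandwidth h).

    points:  (N, 3) positions where displacement vectors are known
    vectors: (N, 3) displacement vectors at those positions
    query:   (M, 3) positions to interpolate at
    Returns: (M, 3) kernel-weighted average of the known vectors.
    """
    # Squared distances between every query point and every sample: (M, N)
    d2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * h ** 2))
    # Weighted average of the displacement vectors per query point
    return (w[..., None] * vectors[None]).sum(1) / w.sum(1)[:, None]
```

Each output displacement is a convex combination of the sample vectors, so the interpolated field is smooth and stays within the range of the data.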

License: BSD-3-Clause · Stargazers: 0 · Issues: 0

ct_mar_attention

Code and data for the paper "Rigid and Non-rigid Motion Artifact Reduction in X-ray CT using Attention Module"

Stargazers: 0 · Issues: 0

Motion-Estimation

Motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another, usually between adjacent frames in a video sequence. Optical flow is typically used for this task: it is the pattern of apparent motion of image objects between two consecutive frames, caused by movement of the object or the camera. Optical flow is a 2D vector field in which each vector is a displacement showing the movement of points from the first frame to the second. The Lucas-Kanade (LK) and Kanade-Lucas-Tomasi (KLT) algorithms are popular optical flow computation methods.
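The Lucas-Kanade step named above can be sketched in plain NumPy. This is a minimal single-window, single-scale illustration of the least-squares flow estimate, not the full pyramidal KLT tracker:

```python
import numpy as np

def lucas_kanade(frame1, frame2, cy, cx, win=7):
    """Estimate the flow vector (dy, dx) for the window centered at (cy, cx).

    Solves the least-squares system A v = -b, where A stacks the spatial
    gradients [Ix, Iy] over the window and b the temporal gradient It,
    following the brightness-constancy constraint Ix*u + Iy*v + It = 0.
    """
    f1 = frame1.astype(float)
    f2 = frame2.astype(float)
    Iy, Ix = np.gradient(f1)          # central-difference spatial gradients
    It = f2 - f1                      # forward temporal gradient
    h = win // 2
    sl = (slice(cy - h, cy + h + 1), slice(cx - h, cx + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = It[sl].ravel()
    v, *_ = np.linalg.lstsq(A, -b, rcond=None)
    return v[1], v[0]                 # (dy, dx)
```

The 2x2 normal equations are only well-conditioned where the window contains gradient in both directions, which is why KLT tracks corner-like features.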

Stargazers: 0 · Issues: 0

Sate2Wnd

Converting satellite images to wind fields using the pix2pix model, i.e., retrieving atmospheric motion vectors (AMVs) with generative adversarial networks (GANs)

License: MIT · Stargazers: 0 · Issues: 0

Coronary-Artery-Tracking-via-3D-CNN-Classification

A PyTorch re-implementation of a 3D CNN tracker that extracts coronary artery centerlines with state-of-the-art (SOTA) performance (paper: 'Coronary artery centerline extraction in cardiac CT angiography using a CNN-based orientation classifier')

License: MIT · Stargazers: 1 · Issues: 0

Emotion-Detection-in-Videos

The aim of this work is to recognize six emotions (happiness, sadness, disgust, surprise, fear, and anger) from human facial expressions extracted from videos. To achieve this, we consider people of different ethnicities, ages, and genders, each of whom expresses emotions very differently. We collected a data set of 149 short videos of both females and males expressing each of the emotions described above. The data set was built by students, each of whom recorded a video expressing all the emotions with no directions or instructions at all. Some videos included more body parts than others; other videos had objects in the background and even different lighting setups. We wanted the data to be as general as possible, with no restrictions at all, so that it would be a good indicator of performance on our main goal.

The script detect_faces.py detects faces in each video, which we saved at a resolution of 240x320. This step produces shaky videos, so we then stabilized them all; this can be done in code, or with freely available online stabilizers. We then ran the stabilized videos through emotion_classification_videos_faces.py. In that script we developed a method to extract features based on histograms of dense optical flow (HOF), and we used a support vector machine (SVM) classifier to tackle the recognition problem.

For each video, we extracted optical flow at each frame. Optical flow measures the motion relative to an observer between two frames at each point. Each point in the image therefore carries two values describing the vector representing the motion between the two frames: its magnitude and its angle. Since our videos have a resolution of 240x320, each frame has a feature descriptor of dimensions 240x320x2, so the full video descriptor has dimensions #frames x 240x320x2.

To make videos of different lengths comparable, we need a way to summarize each video into a single fixed-size descriptor. We achieve this by computing a histogram of the optical flows: we separate the extracted flows into categories and count the number of flows in each category. In more detail, we split the scene into a grid of s by s bins (10 in this case) to record the location of each feature, and categorize the direction of each flow as one of the 8 motion directions considered in this problem. We then count, for each grid cell, the number of flows occurring in each direction bin, ending up with an s by s by 8 bin descriptor per frame. The summarizing step for each video can then be either the average of the histograms in each grid cell over all frames (average pooling) or the maximum value of the histograms per cell across all frames (max pooling).

For the classification process, we used a support vector machine (SVM) with a non-linear kernel, discussed in class, to recognize new facial expressions. We also considered a Naive Bayes classifier, but SVMs are widely known to outperform Naive Bayes in the computer vision field. A confusion matrix can be made to plot the results better.
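The s x s x 8 histogram-of-flow descriptor and the pooling step described above can be sketched as follows. This is a minimal NumPy illustration; `flow` stands in for the per-frame dense optical-flow field, and the bin-edge convention is an assumption:

```python
import numpy as np

def hof_descriptor(flow, s=10, n_dirs=8):
    """Build an s x s x n_dirs histogram of optical flows for one frame.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacement vectors.
    The frame is split into an s x s spatial grid; each flow vector is
    assigned to one of n_dirs direction bins, and counts are accumulated.
    """
    H, W, _ = flow.shape
    hist = np.zeros((s, s, n_dirs))
    angles = np.arctan2(flow[..., 1], flow[..., 0])              # in [-pi, pi]
    dir_bin = ((angles + np.pi) / (2 * np.pi) * n_dirs).astype(int) % n_dirs
    gy = np.minimum((np.arange(H) * s) // H, s - 1)              # row -> grid cell
    gx = np.minimum((np.arange(W) * s) // W, s - 1)              # col -> grid cell
    for i in range(H):
        for j in range(W):
            hist[gy[i], gx[j], dir_bin[i, j]] += 1
    return hist

def video_descriptor(frames_hof, mode="max"):
    """Summarize per-frame histograms into one descriptor (max or average pooling)."""
    stack = np.stack(frames_hof)
    return stack.max(axis=0) if mode == "max" else stack.mean(axis=0)
```

Max pooling keeps the strongest motion response per cell and direction across the whole clip, while average pooling smooths over frames; both yield the same fixed-size input for the SVM regardless of video length.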

Stargazers: 0 · Issues: 0

HM-20.20_LF_DPB

A modified HM encoder that processes light fields (LFs) with two inputs: the original LF and a synthesized LF. The encoder overwrites the Decoded Picture Buffer (DPB) with the synthesized LF during encoding, so that the residual between the original and synthesized LF is encoded. Only zero motion vectors are employed, and the reference picture is the co-located synthesized picture (synthesized SAI).

License: NOASSERTION · Stargazers: 0 · Issues: 0

CoronaryArteryCenterline

Coronary artery centerline extraction from a cardiac CT image (.mha format)

Stargazers: 0 · Issues: 0

VFMP

👈👉 Motion profiling via vector field overlays.

Stargazers: 0 · Issues: 0

Linear-Advection-simulation

The advection equation is the partial differential equation that governs the motion of a conserved scalar field as it is advected by a known velocity vector field. It is derived from the scalar field's conservation law, together with Gauss's theorem, by taking the infinitesimal limit. The advection equation is not simple to solve numerically: the system is a hyperbolic partial differential equation, and interest typically centers on discontinuous "shock" solutions, which are notoriously difficult for numerical schemes to handle. Even with one space dimension and a constant velocity field, the system remains difficult to simulate.
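For the constant-velocity 1D case mentioned above, the first-order upwind scheme below is a standard minimal discretization. This is my own illustration of the textbook method, not necessarily the scheme the repository uses:

```python
import numpy as np

def upwind_advect(u, c, dx, dt, steps):
    """Advance u_t + c u_x = 0 with the first-order upwind scheme.

    Assumes c > 0 and periodic boundaries; stable when the CFL number
    c*dt/dx <= 1. First-order upwind is diffusive but non-oscillatory
    at discontinuities, which makes it a common baseline near shocks.
    """
    cfl = c * dt / dx
    assert 0 < cfl <= 1, "CFL condition violated"
    u = u.astype(float).copy()
    for _ in range(steps):
        # Backward (upwind) difference for c > 0, periodic via np.roll
        u = u - cfl * (u - np.roll(u, 1))
    return u
```

At CFL exactly 1 the update reduces to a pure shift and the scheme is exact; for CFL < 1 it translates the profile correctly but smears sharp fronts, which is the numerical diffusion the description alludes to.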

Stargazers: 0 · Issues: 0

Vector-Fields-Path-Planning

Inspired by 'Motion Planning and Collision Avoidance using Navigation Vector Fields' by Dimitra Panagou

Stargazers: 0 · Issues: 0

CTScanningShow

This is a short program to show how CT scanning works.

Stargazers: 0 · Issues: 0

boundary

Boundary detection using simulation of particle motion in a vector image field

Stargazers: 0 · Issues: 0

motion_detect

A simple motion detector based on the H.264 motion vector field

License: MIT · Stargazers: 0 · Issues: 0