Jonas's repositories
Traditional-Feature-Extraction-Methods
Feature extraction is an integral step in image processing tasks. This repository contains Python code for traditional feature extraction methods on an image dataset, namely Gabor, Haralick, Tamura, GLCM, and GLRLM.
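The repository's own implementations aren't reproduced here, but as a rough illustration of one of the listed descriptors, this is a minimal gray-level co-occurrence matrix (GLCM) and a Haralick contrast feature, written from the standard definitions (the function names and the tiny test image are made up for this sketch):

```python
import numpy as np

def glcm(image, levels=4, dx=1, dy=0):
    """Count co-occurrences of gray levels at pixel offset (dy, dx)."""
    mat = np.zeros((levels, levels), dtype=np.int64)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            mat[image[y, x], image[y + dy, x + dx]] += 1
    return mat

def glcm_contrast(mat):
    """Haralick contrast: sum of P(i, j) * (i - j)^2 over the normalized GLCM."""
    p = mat / mat.sum()
    i, j = np.indices(mat.shape)
    return float((p * (i - j) ** 2).sum())

# A 4x4 image with 4 gray levels, horizontal neighbor offset.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
m = glcm(img, levels=4)
contrast = glcm_contrast(m)
```

Further Haralick statistics (energy, homogeneity, correlation) are weighted sums over the same normalized matrix.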
AMSR-Few-Shot
Adapting Multi-source Representations for Cross-Domain Few-shot Learning (CD-FSL)
awesome-self-supervised-learning
A curated list of awesome self-supervised methods
building-height-deu
D. Frantz, F. Schug, A. Okujeni, C. Navacchi, W. Wagner, S. van der Linden, and P. Hostert (2021): National-scale mapping of building height using Sentinel-1 and Sentinel-2 time series. Remote Sensing of Environment 252, 112128. https://doi.org/10.1016/j.rse.2020.112128
CMIR-NET-A-deep-learning-based-model-for-cross-modal-retrieval-in-remote-sensing
We address the problem of cross-modal information retrieval in remote sensing. In particular, we are interested in two application scenarios: i) cross-modal retrieval between panchromatic (PAN) and multispectral imagery, and ii) multi-label image retrieval between very high resolution (VHR) images and speech-based label annotations. These multi-modal retrieval scenarios are more challenging than traditional uni-modal retrieval given the inherent differences in distributions between the modalities. However, with the increasing availability of multi-source remote sensing data and the scarcity of semantic annotations, the task of multi-modal retrieval has recently become extremely important. In this regard, we propose a novel deep neural network-based architecture designed to learn a discriminative shared feature space for all the input modalities, suitable for semantically coherent information retrieval. Extensive experiments are carried out on the benchmark large-scale PAN-multispectral DSRSID dataset and the multi-label UC-Merced dataset. For the UC-Merced dataset, we generate a corpus of speech signals corresponding to the labels. Superior performance with respect to the current state of the art is observed in all cases.
CPU-GPU-Benchmark
CPU-Benchmark: single-core and multi-core CPU benchmark charts.
DeepLearningNote
All of the code and data accompanying my deep learning notes.
Demo_DHCNN_for_TGRS2021
A novel deep hashing method (DHCNN) for remote sensing image retrieval and classification, published in IEEE Trans. Geosci. Remote Sens., 2021.
Geodjango-Vue-Leaflet-Demo
The project shows how to build an API using Django/GeoDjango, the Django REST framework, and django-rest-framework-gis, and output data (from a PostgreSQL database) in a GeoJSON-compatible format. The API is consumed by a Vue application that displays the data on a Leaflet web map using polling.
Image_match
An image retrieval and matching method based on color histogram.
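The repository's exact matching pipeline isn't shown here; a common baseline for histogram-based matching is per-channel color histograms compared with histogram intersection, which this sketch illustrates (the function names and the random test image are assumptions for the example):

```python
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel histogram over 0-255, concatenated and L1-normalized."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(image.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; identical histograms score ~1.0."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(32, 32, 3))  # stand-in for an RGB image
h = color_histogram(a)
sim = histogram_intersection(h, h)  # self-similarity, close to 1.0
```

For retrieval, each database image's histogram is precomputed and the query is ranked by intersection score.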
leeml-notes
Notes on Hung-yi Lee's Machine Learning course; read online at https://datawhalechina.github.io/leeml-notes
LSH_PyTorch
Source code for the paper "Similarity Search in High Dimensions via Hashing" (VLDB 1999).
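The core idea of locality-sensitive hashing is that nearby points collide in the same hash bucket with high probability. The 1999 paper hashes in Hamming space; the sketch below instead uses the random-hyperplane variant for cosine similarity, chosen only because it is short to illustrate (the class name is made up for this example):

```python
import numpy as np

class RandomHyperplaneLSH:
    """Map vectors to binary codes; vectors with small angular distance
    tend to share most bits. Illustrative only, not the paper's scheme."""

    def __init__(self, dim, n_bits, seed=0):
        rng = np.random.default_rng(seed)
        # Each random hyperplane contributes one sign bit.
        self.planes = rng.standard_normal((n_bits, dim))

    def hash(self, v):
        bits = (self.planes @ v) >= 0
        return ''.join('1' if b else '0' for b in bits)

lsh = RandomHyperplaneLSH(dim=64, n_bits=16)
rng = np.random.default_rng(1)
v = rng.standard_normal(64)
code = lsh.hash(v)
```

Since only the sign of each projection matters, scaling a vector leaves its code unchanged, and negating it flips every bit.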
map-marker-openlayers
OpenLayers map marker popups: map delivery areas, finding a location from an address or geolocation, multiple markers with HTML popups, and importing/exporting polygons on an OpenLayers map.
ml4a-guides
practical guides, tutorials, and code samples for ml4a
mmsegmentation
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
OpenSelfSup
Self-Supervised Learning Toolbox and Benchmark
PaddleClas
A treasure chest for visual recognition powered by PaddlePaddle
PV-plant-dataset-of-China
PV plant vector dataset of China for the year 2023.
PV_ScientificData_Classification_Code
Google Earth Engine (GEE) code for classifying PV power stations from Sentinel-2 imagery and DEM data. The code is written in JavaScript and covers all steps described in the paper "A 10-m national-scale map of ground-mounted photovoltaic power stations in China of 2020", including feature calculation and random forest training.
sat_to_map
Learning mappings to generate city map images from corresponding satellite images.
ssc_csharp
C# version of Sound Shape Code (SSC).
VisualTransformers
A PyTorch implementation of the paper "Visual Transformers: Token-based Image Representation and Processing for Computer Vision".