A Collection of Papers and Codes in ICCV2023 related to Low-Level Vision
[Under Construction] If you notice any missing papers or typos, feel free to open an issue or pull request.
- Awesome-ICCV2021-Low-Level-Vision
- Awesome-CVPR2023/2022-Low-Level-Vision
- Awesome-NeurIPS2022/2021-Low-Level-Vision
- Awesome-ECCV2022-Low-Level-Vision
- Awesome-AAAI2022-Low-Level-Vision
- Awesome-CVPR2021/2020-Low-Level-Vision
- Awesome-ECCV2020-Low-Level-Vision
Multi-weather Image Restoration via Domain Translation
- Paper:
- Code: https://github.com/pwp1208/Domain_Translation_Multi-weather_Restoration
- Tags: Multi-weather
Towards Authentic Face Restoration with Iterative Diffusion Models and Beyond
- Paper: https://arxiv.org/abs/2307.08996
- Tags: Authentic Face Restoration, Diffusion
Physics-Driven Turbulence Image Restoration with Stochastic Refinement
- Paper: https://arxiv.org/abs/2307.10603
- Code: https://github.com/VITA-Group/PiRN
- Tags: Turbulence Image
On the Effectiveness of Spectral Discriminators for Perceptual Quality Improvement
SRFormer: Permuted Self-Attention for Single Image Super-Resolution
Spherical Space Feature Decomposition for Guided Depth Map Super-Resolution
DiffIR: Efficient Diffusion Model for Image Restoration
- Paper:
- Code: https://github.com/Zj-BinXia/DiffIR
MoTIF: Learning Motion Trajectories with Local Implicit Neural Functions for Continuous Space-Time Video Super-Resolution
Denoising [back]
Random Sub-Samples Generation for Self-Supervised Real Image Denoising
- Paper:
- Code: https://github.com/p1y2z3/SDAP
The Devil is in the Upsampling: Architectural Decisions Made Simpler for Denoising with Deep Image Prior
- Paper: https://arxiv.org/abs/2304.11409
- Code: https://github.com/YilinLiu97/FasterDIP-devil-in-upsampling
Hybrid Spectral Denoising Transformer with Learnable Query
- Paper: https://arxiv.org/abs/2303.09040
- Code: https://github.com/Zeqiang-Lai/HSDT
- Tags: Hyperspectral Image Denoising
ExposureDiffusion: Learning to Expose for Low-light Image Enhancement
Implicit Neural Representation for Cooperative Low-light Image Enhancement
Deep Image Harmonization with Learnable Augmentation
Parallax-Tolerant Unsupervised Deep Image Stitching
- Paper:
- Code: https://github.com/nie-lang/UDIS2
Delegate Transformer for Image Color Aesthetics Assessment
Exploring Video Quality Assessment on User Generated Contents from Aesthetic and Technical Perspectives
AesPA-Net: Aesthetic Pattern-Aware Style Transfer Networks
Two Birds, One Stone: A Unified Framework for Joint Learning of Image and Video Style Transfers
- Paper:
- Code: https://github.com/NevSNev/UniST
Adaptive Nonlinear Latent Transformation for Conditional Face Editing
Multimodal Garment Designer: Human-Centric Latent Diffusion Models for Fashion Image Editing
- Paper: https://arxiv.org/abs/2304.02051
- Code: https://github.com/aimagelab/multimodal-garment-designer
MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing
Not All Steps are Created Equal: Selective Diffusion Distillation for Image Manipulation
- Paper: https://arxiv.org/abs/2307.08448
- Code: https://github.com/AndysonYs/Selective-Diffusion-Distillation
HairCLIPv2: Unifying Hair Editing via Proxy Feature Blending
- Paper:
- Code: https://github.com/wty-ustc/HairCLIPv2
MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models
ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation
Better Aligning Text-to-Image Models with Human Preference
Unleashing Text-to-Image Diffusion Models for Visual Perception
Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models
- Paper: https://arxiv.org/abs/2306.05357
- Code: https://github.com/nanlliu/Unsupervised-Compositional-Concepts-Discovery
BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion
Ablating Concepts in Text-to-Image Diffusion Models
Reinforced Disentanglement for Face Swapping without Skip Connection
BlendFace: Re-designing Identity Encoders for Face-Swapping
Conditional 360-degree Image Synthesis for Immersive Indoor Scene Decoration
Masked Diffusion Transformer is a Strong Image Synthesizer
Q-Diffusion: Quantizing Diffusion Models
Bidirectionally Deformable Motion Modulation For Video-based Human Pose Transfer
MODA: Mapping-Once Audio-driven Portrait Animation with Dual Attentions
Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators
- Paper: https://arxiv.org/abs/2303.13439
- Code: https://github.com/Picsart-AI-Research/Text2Video-Zero
FateZero: Fusing Attentions for Zero-shot Text-based Video Editing
Others [back]
DDColor: Towards Photo-Realistic and Semantic-Aware Image Colorization via Dual Decoders
- Paper: https://arxiv.org/abs/2212.11613
- Code: https://github.com/piddnad/DDColor
- Tags: Colorization
DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion
- Paper: https://arxiv.org/abs/2303.06840
- Code: https://github.com/Zhaozixiang1228/MMIF-DDFM
- Tags: Image Fusion
Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer
Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation