tsgts / Neural-Networks-on-Silicon

This is a collection of works on neural networks and neural accelerators.

Neural Networks on Silicon

My name is Fengbin Tu. I'm currently pursuing a Ph.D. degree at the Institute of Microelectronics, Tsinghua University, Beijing, China. My research interests include accelerators for neural networks, deep learning, and approximate computing. This is an exciting field where fresh ideas come out every day. You are welcome to join us!

Table of Contents

Conference Papers

This is a collection of conference papers that interest me. The emphasis is on, but not limited to, neural networks on silicon. Papers of significance are marked in bold. My comments are marked in italic.

2015 DAC

  • Reno: A Highly-Efficient Reconfigurable Neuromorphic Computing Accelerator Design. (University of Pittsburgh, Tsinghua University et al.)
  • Scalable Effort Classifiers for Energy Efficient Machine Learning. (Purdue University, Microsoft Research)
  • Design Methodology for Operating in Near-Threshold Computing (NTC) Region. (AMD)
  • Opportunistic Turbo Execution in NTC: Exploiting the Paradigm Shift in Performance Bottlenecks. (Utah State University)

2016 DAC

  • DeepBurning: Automatic Generation of FPGA-based Learning Accelerators for the Neural Network Family. (Chinese Academy of Sciences)
  • C-Brain: A Deep Learning Accelerator that Tames the Diversity of CNNs through Adaptive Data-Level Parallelization. (Chinese Academy of Sciences)
  • Simplifying Deep Neural Networks for Neuromorphic Architectures. (Incheon National University)
  • Dynamic Energy-Accuracy Trade-off Using Stochastic Computing in Deep Neural Networks. (Samsung, Seoul National University, Ulsan National Institute of Science and Technology)
  • Optimal Design of JPEG Hardware under the Approximate Computing Paradigm. (University of Minnesota, TAMU)
  • Perform-ML: Performance Optimized Machine Learning by Platform and Content Aware Customization. (Rice University, UCSD)
  • Low-Power Approximate Convolution Computing Unit with Domain-Wall Motion Based “Spin-Memristor” for Image Processing Applications. (Purdue University)
  • Cross-Layer Approximations for Neuromorphic Computing: From Devices to Circuits and Systems. (Purdue University)
  • Switched by Input: Power Efficient Structure for RRAM-based Convolutional Neural Network. (Tsinghua University)
  • A 2.2 GHz SRAM with High Temperature Variation Immunity for Deep Learning Application under 28nm. (UCLA, Bell Labs)

2016 ISSCC

  • A 1.42TOPS/W Deep Convolutional Neural Network Recognition Processor for Intelligent IoE Systems. (KAIST)
  • Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks. (MIT, NVIDIA)
  • A 126.1mW Real-Time Natural UI/UX Processor with Embedded Deep Learning Core for Low-Power Smart Glasses Systems. (KAIST)
  • A 502GOPS and 0.984mW Dual-Mode ADAS SoC with RNN-FIS Engine for Intention Prediction in Automotive Black-Box System. (KAIST)
  • A 0.55V 1.1mW Artificial-Intelligence Processor with PVT Compensation for Micro Robots. (KAIST)
  • A 4Gpixel/s 8/10b H.265/HEVC Video Decoder Chip for 8K Ultra HD Applications. (Waseda University)

2016 ISCA

  • Cnvlutin: Ineffectual-Neuron-Free Deep Convolutional Neural Network Computing. (University of Toronto, University of British Columbia)
  • EIE: Efficient Inference Engine on Compressed Deep Neural Network. (Stanford University, Tsinghua University)
  • Minerva: Enabling Low-Power, High-Accuracy Deep Neural Network Accelerators. (Harvard University)
  • Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks. (MIT, NVIDIA)
    • Present an energy analysis framework.
    • Propose an energy-efficient dataflow called Row Stationary, which considers three levels of data reuse (see the loop-nest sketch after this list).
  • Neurocube: A Programmable Digital Neuromorphic Architecture with High-Density 3D Memory. (Georgia Institute of Technology, SRI International)
    • Propose an architecture integrated in 3D DRAM, with a mesh-like NoC in the logic layer.
    • Describe the data movements in the NoC in detail.
  • ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars. (University of Utah, HP Labs)
  • A Novel Processing-in-memory Architecture for Neural Network Computation in ReRAM-based Main Memory. (UCSB, HP Labs, NVIDIA, Tsinghua University)
  • RedEye: Analog ConvNet Image Sensor Architecture for Continuous Mobile Vision. (Rice University)
  • Cambricon: An Instruction Set Architecture for Neural Networks. (Chinese Academy of Sciences, UCSB)
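
As a rough illustration of the data reuse that dataflows such as Row Stationary set out to exploit, below is a plain CONV-layer loop nest in Python. The comments mark reuse opportunities visible in the loop nest; they are illustrative rather than Eyeriss's exact taxonomy, and the array shapes and names are my own.

```python
import numpy as np

def conv2d_reference(ifmap, weights):
    """Plain CONV loop nest; the comments mark the reuse that spatial
    dataflows try to keep on-chip instead of re-fetching from DRAM.
    ifmap:   (C, H, W)     input feature maps
    weights: (M, C, R, S)  M filters of size C x R x S
    returns: (M, H-R+1, W-S+1) output feature maps
    """
    C, H, W = ifmap.shape
    M, _, R, S = weights.shape
    E, F = H - R + 1, W - S + 1
    ofmap = np.zeros((M, E, F))
    for m in range(M):           # filter m is reused for every output pixel (filter reuse)
        for e in range(E):       # overlapping windows reuse the same ifmap rows (ifmap reuse)
            for f in range(F):
                acc = 0.0        # partial sums stay local until fully accumulated (psum reuse)
                for c in range(C):
                    for r in range(R):
                        for s in range(S):
                            acc += ifmap[c, e + r, f + s] * weights[m, c, r, s]
                ofmap[m, e, f] = acc
    return ofmap
```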

2016 DATE

  • The Neuro Vector Engine: Flexibility to Improve Convolutional Network Efficiency for Wearable Vision. (Eindhoven University of Technology, Soochow University, TU Berlin)
    • Propose an SIMD accelerator for CNN.
  • Efficient FPGA Acceleration of Convolutional Neural Networks Using Logical-3D Compute Array. (UNIST, Seoul National University)
    • The compute tile is organized along three dimensions: Tm, Tr, Tc (see the tiling sketch after this list).
  • NEURODSP: A Multi-Purpose Energy-Optimized Accelerator for Neural Networks. (CEA LIST)
  • MNSIM: Simulation Platform for Memristor-Based Neuromorphic Computing System. (Tsinghua University, UCSB, Arizona State University)
  • Accelerated Artificial Neural Networks on FPGA for Fault Detection in Automotive Systems. (Nanyang Technological University, University of Warwick)
  • Significance Driven Hybrid 8T-6T SRAM for Energy-Efficient Synaptic Storage in Artificial Neural Networks. (Purdue University)
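
A minimal sketch of the Tm/Tr/Tc organization mentioned in the Logical-3D compute-array entry above: the output volume is partitioned into tiles of Tm output channels by Tr rows by Tc columns. The tile sizes below are arbitrary example values, not the paper's parameters.

```python
# Illustrative tiling of a CONV output volume along three dimensions:
# Tm output channels x Tr output rows x Tc output columns per compute tile.
Tm, Tr, Tc = 4, 8, 8   # example values only

def tiles(M, E, F):
    """Yield the (channel, row, column) index ranges each compute tile covers."""
    for m0 in range(0, M, Tm):
        for r0 in range(0, E, Tr):
            for c0 in range(0, F, Tc):
                yield (range(m0, min(m0 + Tm, M)),
                       range(r0, min(r0 + Tr, E)),
                       range(c0, min(c0 + Tc, F)))

# Example: a 16-channel 32x32 output map is covered by 4 * 4 * 4 = 64 tiles.
print(sum(1 for _ in tiles(16, 32, 32)))
```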

2016 FPGA

  • Going Deeper with Embedded FPGA Platform for Convolutional Neural Network. [Slides][Demo] (Tsinghua University, MSRA)
    • The first work I have seen that runs the entire CNN flow, including both CONV and FC layers.
    • Point out that CONV layers are computation-centric, while FC layers are memory-centric.
    • The FPGA runs VGG16-SVD without reconfiguring its resources, but the convolver only supports k=3.
    • Dynamic-precision data quantization is creative, but not implemented on hardware (see the quantization sketch after this list).
  • Throughput-Optimized OpenCL-based FPGA Accelerator for Large-Scale Convolutional Neural Networks. [Slides] (Arizona State Univ, ARM)
    • Spatially allocate the FPGA's resources to CONV/POOL/NORM/FC layers.
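
On the dynamic-precision data quantization comment above, here is a small software sketch of the general idea: pick, per layer, the fixed-point fractional length that minimizes quantization error. The function names and error metric are my own assumptions, not the paper's implementation.

```python
import numpy as np

def quantize_fixed_point(x, total_bits, frac_bits):
    """Quantize x to signed fixed-point with the given fractional length."""
    scale = 2.0 ** frac_bits
    qmin, qmax = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax)
    return q / scale

def best_frac_bits(x, total_bits=8, candidates=range(0, 9)):
    """Choose the fractional length that minimizes total quantization error.
    Letting each layer pick its own radix point is the 'dynamic precision'."""
    errs = [np.abs(quantize_fixed_point(x, total_bits, f) - x).sum()
            for f in candidates]
    return list(candidates)[int(np.argmin(errs))]

# Example: a layer with small-magnitude weights prefers a long fractional part.
w = np.random.randn(1000) * 0.05
print(best_frac_bits(w))
```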

2016 ASPDAC

  • Design Space Exploration of FPGA-Based Deep Convolutional Neural Networks. (UC Davis)
  • LRADNN: High-Throughput and Energy-Efficient Deep Neural Network Accelerator using Low Rank Approximation. (Hong Kong University of Science and Technology, Shanghai Jiao Tong University)
  • Efficient Embedded Learning for IoT Devices. (Purdue University)
  • ACR: Enabling Computation Reuse for Approximate Computing. (Chinese Academy of Sciences)

2016 VLSI

  • A 0.3‐2.6 TOPS/W Precision‐Scalable Processor for Real‐Time Large‐Scale ConvNets. (KU Leuven)
    • Use dynamic precision for different CONV layers, and scale down the MAC array's supply voltage at lower precision.
    • Prevent memory fetches and MAC operations based on ReLU sparsity (see the sketch after this list).
  • A 1.40mm2 141mW 898GOPS Sparse Neuromorphic Processor in 40nm CMOS. (University of Michigan)
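
A toy sketch of the second point under the KU Leuven entry (skipping work on ReLU-induced zeros); in the chip this would gate the weight fetch and the MAC, here it simply skips the computation.

```python
def sparse_dot(activations, weights):
    """Accumulate only where the (post-ReLU) activation is non-zero."""
    acc = 0
    for a, w in zip(activations, weights):
        if a == 0:          # ReLU output is zero: no weight fetch, no MAC
            continue
        acc += a * w
    return acc

print(sparse_dot([0, 3, 0, 1], [5, 2, 7, 4]))   # 3*2 + 1*4 = 10
```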

2016 ICCAD

  • Efficient Memory Compression in Deep Neural Networks Using Coarse-Grain Sparsification for Speech Applications. (Arizona State University)
  • Memsqueezer: Re-architecting the On-chip memory Sub-system of Deep Learning Accelerator for Embedded Devices. (Chinese Academy of Sciences)
  • Caffeine: Towards Uniformed Representation and Acceleration for Deep Convolutional Neural Networks. (Peking University, UCLA, Falcon)
    • Propose a uniformed convolutional matrix-multiplication representation for accelerating both CONV and FC layers on FPGA (an im2col-style sketch of this mapping follows the list).
    • Propose a weight-major convolutional mapping method for FC layers, which achieves good data reuse, DRAM access burst length, and effective bandwidth.
  • BoostNoC: Power Efficient Network-on-Chip Architecture for Near Threshold Computing. (Utah State University)
  • Design of Power-Efficient Approximate Multipliers for Approximate Artificial Neural Network. (Brno University of Technology, Purdue University)
  • Neural Networks Designing Neural Networks: Multi-Objective Hyper-Parameter Optimization. (McGill University)
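
To make the first Caffeine comment concrete, below is a generic im2col-style sketch of how a CONV layer (and, as a degenerate case with a 1x1 "image", an FC layer) becomes a single matrix multiply. This is the standard mapping, not the paper's weight-major variant.

```python
import numpy as np

def im2col(ifmap, R, S):
    """Unfold a (C, H, W) input so that each column holds one receptive field."""
    C, H, W = ifmap.shape
    E, F = H - R + 1, W - S + 1
    cols = np.zeros((C * R * S, E * F))
    for e in range(E):
        for f in range(F):
            cols[:, e * F + f] = ifmap[:, e:e + R, f:f + S].ravel()
    return cols

def conv_as_matmul(ifmap, weights):
    """CONV as GEMM: (M, C*R*S) x (C*R*S, E*F), reshaped to (M, E, F)."""
    M, C, R, S = weights.shape
    _, H, W = ifmap.shape
    E, F = H - R + 1, W - S + 1
    out = weights.reshape(M, -1) @ im2col(ifmap, R, S)
    return out.reshape(M, E, F)
```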

2016 MICRO

  • From High-Level Deep Neural Models to FPGAs. (Georgia Institute of Technology, Intel)
  • vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design. (NVIDIA)
  • Stripes: Bit-Serial Deep Neural Network Computing. (University of Toronto, University of British Columbia)
    • Introduce serial computation and reduced-precision computation to neural network accelerator designs, enabling accuracy vs. performance trade-offs (see the bit-serial sketch after this list).
    • Design a bit-serial computing unit so that performance scales linearly as precision is reduced.
  • Cambricon-X: An Accelerator for Sparse Neural Networks. (Chinese Academy of Sciences)
  • NEUTRAMS: Neural Network Transformation and Co-design under Neuromorphic Hardware Constraints. (Tsinghua University, UCSB)
  • Fused-Layer CNN Accelerators. (Stony Brook University)
    • Fuse multiple CNN layers (CONV+POOL) to reduce DRAM access for input/output data.
  • Bridging the I/O Performance Gap for Big Data Workloads: A New NVDIMM-based Approach. (The Hong Kong Polytechnic University, NSF/University of Florida)
  • A Patch Memory System For Image Processing and Computer Vision. (NVIDIA)
  • A Cloud-Scale Acceleration Architecture. (Microsoft Research)
  • Reducing Data Movement Energy via Online Data Clustering and Encoding. (University of Rochester)
  • The Microarchitecture of a Real-time Robot Motion Planning Accelerator. (Duke University)
  • An Ultra Low-Power Hardware Accelerator for Automatic Speech Recognition. (Universitat Politecnica de Catalunya)
  • Chameleon: Versatile and Practical Near-DRAM Acceleration Architecture for Large Memory Systems. (UIUC, Seoul National University)
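
For the Stripes entry above, a software sketch of a bit-serial inner product: one activation bit-plane is consumed per "cycle", so latency scales linearly with the precision actually used. The structure is illustrative of the idea, not the paper's datapath.

```python
def bit_serial_dot(activations, weights, precision):
    """Bit-serial inner product over unsigned activations of `precision` bits."""
    acc = 0
    for bit in range(precision):                  # one bit-plane per cycle
        plane = sum(((a >> bit) & 1) * w          # 1-bit x weight products
                    for a, w in zip(activations, weights))
        acc += plane << bit                       # shift-and-add accumulation
    return acc

# With full precision this matches the ordinary dot product.
a, w = [5, 3, 7], [2, 4, 1]
assert bit_serial_dot(a, w, precision=3) == sum(x * y for x, y in zip(a, w))
```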

2016 FPL

  • A High Performance FPGA-based Accelerator for Large-Scale Convolutional Neural Network. (Fudan University)
  • Overcoming Resource Underutilization in Spatial CNN Accelerators. (Stony Brook University)
    • Build multiple accelerators, each specialized for specific CNN layers, instead of a single accelerator with uniform tiling parameters.
  • Accelerating Recurrent Neural Networks in Analytics Servers: Comparison of FPGA, CPU, GPU, and ASIC. (Intel)

2017 FPGA

  • An OpenCL Deep Learning Accelerator on Arria 10. (Intel)
  • ESE: Efficient Speech Recognition Engine for Compressed LSTM on FPGA. (Stanford University, DeepPhi, Tsinghua University, NVIDIA)
  • FINN: A Framework for Fast, Scalable Binarized Neural Network Inference. (Xilinx, Norwegian University of Science and Technology, University of Sydney)
  • Can FPGA Beat GPUs in Accelerating Next-Generation Deep Neural Networks? (Intel)
  • Accelerating Binarized Convolutional Neural Networks with Software-Programmable FPGAs. (Cornell University, UCLA, UCSD)
  • Improving the Performance of OpenCL-based FPGA Accelerator for Convolutional Neural Network. (UW-Madison)
  • Frequency Domain Acceleration of Convolutional Neural Networks on CPU-FPGA Shared Memory System. (USC)
  • Optimizing Loop Operation and Dataflow in FPGA Acceleration of Deep Convolutional Neural Networks. (Arizona State University)

2017 ISSCC

  • A 2.9TOPS/W Deep Convolutional Neural Network SoC in FD-SOI 28nm for Intelligent Embedded Systems. (ST)
  • DNPU: An 8.1TOPS/W Reconfigurable CNN-RNN Processor for General-Purpose Deep Neural Networks. (KAIST)
  • ENVISION: A 0.26-to-10TOPS/W Subword-Parallel Computational Accuracy-Voltage-Frequency-Scalable Convolutional Neural Network Processor in 28nm FDSOI. (KU Leuven)
  • A 288µW Programmable Deep-Learning Processor with 270KB On-Chip Weight Storage Using Non-Uniform Memory Hierarchy for Mobile Intelligence. (University of Michigan, CubeWorks)
  • A 28nm SoC with a 1.2GHz 568nJ/Prediction Sparse Deep-Neural-Network Engine with >0.1 Timing Error Rate Tolerance for IoT Applications. (Harvard)
  • A Scalable Speech Recognizer with Deep-Neural-Network Acoustic Models and Voice-Activated Power Gating. (MIT)
  • A 0.62mW Ultra-Low-Power Convolutional-Neural-Network Face Recognition Processor and a CIS Integrated with Always-On Haar-Like Face Detector. (KAIST)

2017 HPCA

  • FlexFlow: A Flexible Dataflow Accelerator Architecture for Convolutional Neural Networks. (Chinese Academy of Sciences)
  • PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning. (University of Pittsburgh, University of Southern California)
  • Towards Pervasive and User Satisfactory CNN across GPU Microarchitectures. (University of Florida)
  • Supporting Address Translation for Accelerator-Centric Architectures. (UCLA)

2017 ASPLOS

  • Scalable and Efficient Neural Network Acceleration with 3D Memory. (Stanford University, EPFL)

Important Topics

This is a collection of papers on other important topics related to neural networks. Papers of significance are marked in bold. My comments are marked in italic.

Benchmarks

  • Fathom: Reference Workloads for Modern Deep Learning Methods. (Harvard University)
  • AlexNet: ImageNet Classification with Deep Convolutional Neural Networks. (University of Toronto, 2012 NIPS)
  • Network in Network. (National University of Singapore, 2014 ICLR)
  • ZFNet: Visualizing and Understanding Convolutional Networks. (New York University, 2014 ECCV)
  • OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks. (New York University, 2014 ICLR)
  • VGG: Very Deep Convolutional Networks for Large-Scale Image Recognition. (University of Oxford, 2015 ICLR)
  • GoogLeNet: Going Deeper with Convolutions. (Google, University of North Carolina, University of Michigan, 2015 CVPR)
  • PReLU: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. (MSRA, 2015 ICCV)

Network Compression

Other Topics

Object Detection

  • You Only Look Once: Unified, Real-Time Object Detection. (University of Washington, Allen Institute for AI, Facebook AI Research, 2016 CVPR)

GAN

  • Generative Adversarial Nets. (Université de Montréal, 2014 NIPS)
  • Two "adversarial" MLP models G and D: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
  • D is trained to learn the above probability.
  • G is trained to maximize the probability of D making a mistake..
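
For reference, the minimax objective from the paper, which formalizes the two training goals above (D maximizes, G minimizes):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$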

Research Groups

Industry Contributions

  • Movidius
    • Myriad 2: Hardware-accelerated visual intelligence at ultra-low power.
    • Fathom Neural Compute Stick: The world's first discrete deep learning accelerator (Myriad 2 VPU inside).
  • NVIDIA
    • Jetson TX1: Embedded visual computing development platform.
    • DGX-1: Deep learning supercomputer.
  • Google
    • TPU (Tensor Processing Unit).
  • Nervana
    • Nervana Engine: Hardware optimized for deep learning.
  • Wave Computing
    • Deep Learning Computers Powered by Dataflow Technology.
