Columns: date; name (ref. no.); resource; representative method; architecture of NN or topic of machine learning; application scenario; dataset.

Algorithmic

2016-4-11 [53]
  Resource: Memory capacity, power
  Method: Inference phase: SVD decomposition-based weight-matrix compression; fine-grained task scheduling to processors
  Architecture/topic: AlexNet [76]; 2-hidden-layer DNN for speaker ID; SVHN CNN; 2-hidden-layer DNN for audio scenes
  Application: Recognition of objects, human voice, and audio environments
  Datasets: ImageNet [76], Speaker Verification Spoofing and Countermeasures Challenge dataset [77], SVHN dataset [78], Audio Scene dataset [79]

2016-8-8 [60]
  Resource: Power
  Method: Training phase: data projection under an energy constraint
  Architecture/topic: 4-layer DNN
  Application: Imaging, smart sensing, speech recognition
  Datasets: Hyperspectral Remote Sensing Scenes [80], UCI Daily and Sports Activities [81], UCI ISOLET [82]

2017-4-17 [49]
  Resource: Memory capacity
  Method: Training phase: depthwise separable convolution; avoidance of im2col reordering; hyperparameter tuning
  Architecture/topic: 28-layer convolutional neural network; PlaNet [87, 88]; FaceNet [89, 90]
  Application: Large-scale geolocation, fine-grained image recognition, face recognition, object detection
  Datasets: ImageNet, Im2GPS [83], Stanford Dogs [84], YFCC100M [85], COCO [86]

2017-4-30 [39]
  Resource: Memory capacity
  Method: Inference phase: weight encoding; weight sharing; factorization of vector-matrix multiplication
  Architecture/topic: 2-hidden-layer DNN
  Application: Speech recognition, indoor localization, human activity recognition, handwritten digit recognition
  Datasets: UCI ISOLET, UCI UJIIndoorLoc [87], UCI Daily and Sports Activities, MNIST [88]

2018-3-19 [41]
  Resource: Power
  Method: Training phase: hyperparameter tuning; Gaussian-process Bayesian optimization
  Architecture/topic: Variants of AlexNet for MNIST and CIFAR-10
  Application: Handwritten digit recognition, image classification
  Datasets: MNIST, CIFAR-10 [89]

2019-1-21 [59]
  Resource: Memory access latency, power
  Method: Training phase: transformation of the DNN realization problem into a Boolean logic optimization problem; Boolean logic minimization
  Architecture/topic: Multilayer perceptron [92], CNN
  Application: Handwritten digit recognition
  Datasets: MNIST

2019-2-28 [52]
  Resource: Memory capacity
  Method: Training phase: group lasso regularization; intergroup lasso regularization
  Architecture/topic: Fully convolutional network with seven convolution layers, initialized with pretrained VGG-16
  Application: Face recognition
  Datasets: LFW face dataset [93]

2019-4-12 [58]
  Resource: Memory capacity
  Method: Training phase: structured sparsity regularization; Alternative Updating with Lagrange Multipliers (AULM)
  Architecture/topic: LeNet [94], AlexNet, VGG-16 [95], ResNet-50 [96], GoogLeNet [97]
  Application: Handwritten digit recognition, image classification
  Datasets: MNIST, ImageNet

Computational

2017-6-18 [40]
  Resource: Processor
  Method: Training and inference phases: enhanced parallelism by altering computing-load granularity; network splitting through depth-first traversal; data dimension reduction using dictionary learning; GPU parallelization
  Architecture/topic: A universal framework for fitting a DL network to specific hardware; AlexNet used as an example
  Application: Imaging, smart sensing, speech recognition
  Datasets: Hyperspectral Remote Sensing Scenes, UCI Daily and Sports Activities, UCI ISOLET

2017-6-19 [37]
  Resource: Processor, power
  Method: Inference phase: data caching; hardware-specific code fine-tuning; Tucker-2 matrix decomposition
  Architecture/topic: VGG-VeryDeep-16 [95], YOLO [98]
  Application: Continuous vision applications
  Datasets: ILSVRC2012 train dataset [99], Pascal VOC 2007 train dataset [100], UCF101 dataset [101], LENA dataset [102]

2017-6-23 [64]
  Resource: Processor
  Method: Inference phase: fine-granularity code execution; GPU parallelization
  Architecture/topic: LSTM model [103]
  Application: Smart sensing
  Datasets: Mobile phone sensor dataset [104]

2019-2-1 [68]
  Resource: Memory capacity
  Method: Inference phase: fine-grained memory utilization
  Architecture/topic: VGG, CaffeNet [105], GoogLeNet, AlexNet
  Application: Imaging
  Datasets: ILSVRC2012

Hardware

2018-10-4 [70]
  Resource: Computing power, power
  Method: Training and inference phases: a unified core architecture; binary feedback alignment (BFA); dynamic fixed-point-based run-length compression (RLC); dropout controller
  Architecture/topic: MDNet [106]
  Application: Real-time object tracking
  Datasets: Object Tracking Benchmark (OTB) dataset [107]

2018-9-7 [72]
  Resource: Computing power
  Method: Adopts the voltage-charge relationship of electrochemical cells to realize forgetting of parameters; describes the decision-making problem through the motion of ions
  Architecture/topic: Multi-armed bandit problems (MBPs)
  Application: Reinforcement learning
  Datasets: —

2018-12-7 [75]
  Resource: Computing power
  Method: Quantum computing; models correlations in the data with the underlying probability amplitudes of a many-body entangled state
  Architecture/topic: Generative model
  Application: Generative modeling
  Datasets: —

2019-2-15 [73]
  Resource: Computing power
  Method: Electrochemical cells
  Architecture/topic: Matrix-vector multiplications based on nonvolatile photonic memory
  Application: Basic arithmetic operations for machine learning or AI algorithms
  Datasets: —

2019-5-9 [74]
  Resource: Processing power, power
  Method: Separates the functionalities of data memory and processing; mimics the neurosynaptic system in an all-optical manner
  Architecture/topic: Neural network consisting of four neurons and sixty synapses (140 optical elements in total)
  Application: Letter recognition
  Datasets: —
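Several of the algorithmic entries above (e.g., [53]) compress fully connected layers by replacing the weight matrix with a truncated SVD factorization, so that a dense layer y = Wx becomes y = U(Vx) with far fewer parameters. A minimal numpy sketch of the idea, not the cited implementation (the function name and the rank value are illustrative):

```python
import numpy as np

def svd_compress(W, rank):
    """Approximate W (m x n) by a rank-`rank` factorization U_r @ V_r.

    Replacing y = W @ x with y = U_r @ (V_r @ x) cuts storage and
    multiply-adds from m*n to rank*(m + n), a win when rank << min(m, n).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

# Usage: compress a 256 x 512 layer to rank 32.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
U_r, V_r = svd_compress(W, 32)
params_before = W.size                 # 256 * 512 = 131072
params_after = U_r.size + V_r.size     # 256 * 32 + 32 * 512 = 24576
```

In practice the rank is chosen per layer to trade accuracy against memory, and the factored layer is usually fine-tuned briefly to recover any lost accuracy.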
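Entries [52] and [58] use group-structured sparsity penalties (group lasso, structured sparsity regularization) so that entire filters or neurons are zeroed out, leaving a smaller dense network rather than scattered zeros. A minimal sketch of the group soft-thresholding (proximal) step such penalties induce, assuming groups are the rows of a 2-D weight matrix; the names and the threshold value are illustrative, not taken from the cited works:

```python
import numpy as np

def group_soft_threshold(W, lam):
    """Proximal step for the group lasso, with groups = rows of W.

    Each row (one filter/neuron's weights) is shrunk toward zero by lam;
    rows whose l2 norm falls below lam are zeroed entirely, which is what
    produces structured, hardware-friendly sparsity.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

# Usage: the weak second row is pruned outright, the strong first row survives.
W = np.array([[3.0, 4.0],     # l2 norm 5.0  -> kept, shrunk by factor 0.8
              [0.1, 0.1]])    # l2 norm ~0.14 -> zeroed
W_pruned = group_soft_threshold(W, 1.0)
```

Training alternates ordinary gradient steps on the data loss with this proximal step (or adds the penalty's subgradient directly), and pruned rows can then be removed from the architecture.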