The Technology
Deep Neural Networks (DNNs) have substantially pushed the limits of Artificial Intelligence (AI) across a wide range of tasks. Today, DNNs are almost exclusively trained on one or many very fast and power-hungry Graphics Processing Units (GPUs). During the training phase, a network learns large numbers of parameters in order to solve multi-parameter and/or multi-objective problems. The trained neural network is then typically deployed in the digital world, commonly in the streamlined form of an application, using what it has learned to recognize images, spoken words, or a disease, and/or to suggest promotional content or products. At inference time, the trained network draws conclusions about new data it is presented with, based on its training.
The technology trains neural networks by constructing a neural network model whose neurons are each associated with a quantized activation function adapted to output a quantized activation value. The neurons are arranged in layers and connected by connections, each associated with a quantized connection weight function adapted to output a quantized connection weight value. During the training process, a plurality of weight gradients are calculated in backpropagation sub-processes by computing neuron gradients, each being the gradient of the output of a respective quantized activation function in one layer with respect to the input of that quantized activation function. Each neuron gradient is calculated such that when the absolute value of the input is smaller than a positive constant threshold value, the neuron gradient is set to a positive constant output value, and when the absolute value of the input is larger than the positive constant threshold value, the neuron gradient is set to zero.
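A minimal sketch of this gradient rule (sometimes called a straight-through-style estimator) is shown below, assuming PyTorch; the class name QuantizedSignActivation, the sign-based quantizer, and the threshold value of 1.0 are illustrative assumptions, not the patented implementation.

```python
import torch


class QuantizedSignActivation(torch.autograd.Function):
    """Quantized (binary) activation with the thresholded constant gradient
    described above: the neuron gradient is a positive constant where the
    absolute value of the input is below a threshold, and zero otherwise."""

    @staticmethod
    def forward(ctx, x, threshold=1.0):
        ctx.save_for_backward(x)
        ctx.threshold = threshold
        # Quantized activation value: +1 or -1.
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Neuron gradient: constant 1 inside the threshold, 0 outside it.
        pass_through = (x.abs() < ctx.threshold).to(grad_output.dtype)
        return grad_output * pass_through, None


# Usage example
x = torch.randn(4, requires_grad=True)
y = QuantizedSignActivation.apply(x)
y.sum().backward()
print(x.grad)  # 1 where |x| < 1, 0 elsewhere
```

The zero gradient outside the threshold prevents updates driven by inputs far from the quantization boundary, while the constant gradient inside it lets weight updates flow through the non-differentiable quantizer during backpropagation.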
Advantages
- Can potentially allow orders-of-magnitude improvements in speed, chip area, and energy consumption.
Applications and Opportunities
- Deep neural networks in a wide range of web applications and on mobile devices.