Dell PowerEdge C4140 Deep Learning Performance Comparison - Scale-up vs. Scale-out




Dell EMC | Infrastructure Solutions Group
…electrical charge reaches a specific value. When a neuron fires, it generates a signal that travels to other neurons, which in turn increase or decrease their potentials in accordance with this signal.
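This threshold behavior can be illustrated with a minimal sketch (the class name, threshold value, and reset-on-fire behavior are illustrative assumptions, not taken from the paper):

```python
# Illustrative threshold ("integrate-and-fire") neuron: it accumulates
# incoming charge and fires once its potential reaches a specific value.
class ThresholdNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, signal):
        """Accumulate an incoming signal (positive signals increase the
        potential, negative ones decrease it) and fire (return True)
        once the potential reaches the threshold."""
        self.potential += signal
        if self.potential >= self.threshold:
            self.potential = 0.0  # assumed: reset after firing
            return True
        return False

neuron = ThresholdNeuron(threshold=1.0)
fired = [neuron.receive(s) for s in (0.4, 0.3, -0.2, 0.6)]
print(fired)  # → [False, False, False, True]
```

The neuron stays silent while its accumulated charge is below the threshold and fires only on the final signal, which pushes the potential past 1.0.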
2.1 Deep Learning

Deep Learning consists of two phases: training and inference. As illustrated in Figure 2, training involves learning a neural network model from a given training dataset over a number of training iterations, guided by a loss function. The output of this phase, the learned model, is then used in the inference phase to make predictions on new data [1].
The major difference between the two is that training employs both forward propagation and backward propagation (the two passes of the deep learning process), whereas inference consists mostly of forward propagation. To produce models with good accuracy, the training phase requires many training iterations and substantial training data, and therefore many-core CPUs or GPUs to accelerate performance.
Figure 2. Deep Learning phases
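As a hedged sketch of the two phases, consider a toy one-parameter linear model (not the paper's workload; the data, learning rate, and iteration count are assumptions). Training repeats a forward pass, a loss gradient (backward pass), and a parameter update; inference is a forward pass only:

```python
# Toy training vs inference: fit y = 2*x with a single learnable weight w.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0    # model parameter
lr = 0.05  # learning rate

for _ in range(200):                     # training iterations
    for x, target in data:
        pred = w * x                     # forward propagation
        grad = 2 * (pred - target) * x   # backward propagation: d(loss)/dw
                                         # for the squared-error loss
        w -= lr * grad                   # update the model

def infer(x):
    return w * x                         # inference: forward propagation only

print(round(w, 3))           # learned weight, close to 2.0
print(round(infer(5.0), 3))  # prediction on new data, close to 10.0
```

The training loop runs both passes repeatedly until the weight converges, while `infer` reuses the learned model with a single forward pass, mirroring the training/inference split described above.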
3 Background
With recent advances in Machine Learning, and especially Deep Learning, it is becoming increasingly important to identify the right set of tools to meet the performance characteristics of these workloads.
Since Deep Learning is compute intensive, the use of accelerators such as GPUs has become the norm. But GPUs are costly, so the question often comes down to the performance difference between a system with and without GPUs.