Performance Optimization of Deep Learning Frameworks on Modern Intel Architectures


In this video from the Intel HPC Developer Conference, Elmoustapha Ould-ahmed-vall of Intel describes how the company is doubling down on optimizing machine learning frameworks for Intel platforms. Starting from the open source frameworks, surprising speedups are possible with Intel technologies.

“With the availability of high computing capabilities, deep neural networks have become the popular algorithm of choice for applications such as image classification, automatic speech recognition, natural language processing, and Advanced Driver Assistance Systems (ADAS). Intel has made significant contributions to an optimized fork of Berkeley Vision and Learning Center (BVLC) Caffe and is also making extensive contributions to TensorFlow, Theano, and Torch, all in open source. In his talk, Ould-ahmed-vall analyzes the performance characteristics of Caffe and TensorFlow on the Intel Xeon Phi x200. The Intel Xeon Phi x200 (code-named Knights Landing, or KNL) is the latest Intel Many Integrated Core processor.”
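To give a flavor of the kind of tuning discussed for many-core processors like Knights Landing, the sketch below shows how one might configure OpenMP thread affinity and TensorFlow's thread pools when running an MKL-enabled TensorFlow 1.x build. The specific thread counts and affinity settings are illustrative assumptions, not values from the talk.

```python
# Minimal sketch (assumptions, not from the talk): pin Intel OpenMP threads
# and size TensorFlow's thread pools for a many-core CPU such as Xeon Phi.
import os

os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # Intel OpenMP thread pinning
os.environ["KMP_BLOCKTIME"] = "1"    # how long threads spin after a parallel region (ms)
os.environ["OMP_NUM_THREADS"] = "64" # assumed: one OpenMP thread per physical core

import tensorflow as tf  # TF 1.x API

config = tf.ConfigProto(
    intra_op_parallelism_threads=64,  # parallelism inside a single op (e.g., one matmul)
    inter_op_parallelism_threads=2)   # independent ops run concurrently

# A simple compute-bound workload to exercise the settings.
a = tf.random_normal([4096, 4096])
b = tf.random_normal([4096, 4096])
c = tf.matmul(a, b)

with tf.Session(config=config) as sess:
    sess.run(c)
```

In practice, the right values depend on the core count, hyper-threading configuration, and the model being run, which is the kind of performance characterization the talk covers.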

See more Machine Learning videos from the Intel HPC Developer Conference

