Sponsored Post
The latest processors from Intel accelerate the most data-intensive workloads with built-in AI acceleration
Simulation, modeling, data analytics, and other demanding workloads commonly rely on high performance computing (HPC) to advance research and business in many ways. However, as converged workloads involving AI grow in adoption, HPC systems must keep pace with evolving needs. 2nd Generation Intel® Xeon® Platinum processors, with built-in AI acceleration technologies, offer leadership performance to speed the most demanding HPC workloads.
Future-ready, now
The Intel Xeon Platinum 9200 processors feature 12 DDR4 memory channels and two Intel Xeon dies in a single package. SKUs in the processor family offer 32 to 56 cores per processor, letting customers match core count to their workload needs.
For workloads regularly used in manufacturing and science, the 56-core Intel Xeon Platinum 9282 processors deliver an average of 31% higher performance compared with a system based on the 64-core AMD EPYC 7742 (Rome) processor1. Please visit our website for more benchmarking data and configuration details.
The latest Intel Xeon Platinum 9282 processors also offer a 2X average performance improvement2 compared with the Intel Xeon Platinum 8180 processors. In scenarios involving AI inference, the Intel Xeon Platinum 9282 CPU with Intel DL Boost enables up to a 30X increase in performance over the Intel Xeon Platinum 8180 processors3.
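Intel DL Boost is built around AVX-512 Vector Neural Network Instructions (VNNI), which accelerate the INT8 arithmetic used in quantized inference. The NumPy sketch below illustrates the idea behind that speedup, quantizing float32 activations and weights to int8 and accumulating in int32 the way VNNI does in hardware; the layer shapes, scales, and data are illustrative assumptions, not part of Intel's benchmark configuration.

```python
# Minimal sketch of INT8 quantized inference math, the operation Intel DL Boost
# (AVX-512 VNNI) accelerates in hardware. Shapes and data are illustrative only.
import numpy as np

def quantize_int8(x):
    """Map a float32 tensor to int8 with a single per-tensor scale."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
activations = rng.standard_normal((64, 1024)).astype(np.float32)  # e.g. one batch
weights = rng.standard_normal((1024, 1000)).astype(np.float32)    # e.g. a dense layer

qa, sa = quantize_int8(activations)
qw, sw = quantize_int8(weights)

# Multiply in int8, accumulate in int32 (as VNNI does), then rescale to float32.
int32_out = qa.astype(np.int32) @ qw.astype(np.int32)
approx = int32_out.astype(np.float32) * sa * sw

reference = activations @ weights
print("max abs error vs float32:", np.abs(approx - reference).max())
```

Because the narrow int8 operands let each instruction process more values per cycle, the hardware can trade a small, controllable quantization error for substantially higher inference throughput.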
While processors represent a critical element of overall HPC system performance, other factors weigh in too. Complementing most 2nd Generation Intel Xeon Scalable processors, Intel Optane™ DC SSDs and Intel Optane DC persistent memory help speed mission-critical endeavors. Intel Optane DC persistent memory combines high capacity with non-volatility to streamline use cases involving big data, while Intel Optane DC SSDs deliver high bandwidth and low latency for the data ingestion and inference stages of HPC workloads.
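In App Direct mode, applications can treat Intel Optane DC persistent memory as byte-addressable storage exposed through memory-mapped files on a DAX-enabled filesystem. The sketch below shows that pattern from Python; the /mnt/pmem0 mount point and file name are assumptions for illustration, and production code would typically use a library such as PMDK for finer-grained persistence control.

```python
# Minimal sketch of writing to Intel Optane DC persistent memory in App Direct mode
# via a memory-mapped file. /mnt/pmem0 is an assumed DAX-enabled filesystem mount
# backed by persistent memory; adjust the path for your system.
import mmap
import os

PMEM_PATH = "/mnt/pmem0/hpc_scratch.bin"   # hypothetical file on a DAX mount
SIZE = 64 * 1024 * 1024                    # 64 MiB working buffer

# Create (or reuse) a file of the desired size on the persistent-memory filesystem.
fd = os.open(PMEM_PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)

# Map it into the address space; loads and stores now target persistent media.
buf = mmap.mmap(fd, SIZE)
buf[:16] = b"checkpoint-0001\n"

# Flush the mapped range so the data is durable before continuing.
buf.flush()
buf.close()
os.close(fd)
```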
Science and industry, empowered
“Many organizations today use their HPC systems for multiple workload types, and that variability places different demands on their systems,” said Harris Joyce, Director of HPC Marketing at Intel Corporation. “For this reason, customers tell us they need flexible, adaptable, and future-ready systems. The latest Intel Xeon processors with built-in AI acceleration and complemented with Intel Optane DC Persistent Memory technology offer a comprehensive solution which helps companies and research institutions prepare for the new possibilities which AI enables by accelerating the convergence of HPC and AI.”
Supported by a broad ecosystem
Intel’s extensive ecosystem of software developers, solution providers, and systems integrators embraces Intel solutions as a foundation for their clients’ needs. OEMs including Atos, Cray/Hewlett Packard Enterprise, H3C, Inspur, Lenovo, Penguin, and Sugon now offer Intel Xeon Platinum 9200 processor-based solutions.
Headquartered in Iceland, Advania is the first provider to offer HPC-in-the-cloud instances powered by Intel Xeon Platinum 9200 processors. Because these cloud-based HPC instances deliver performance approaching that of on-premises systems, Advania’s clients gain both versatility and speed.
For enterprises and institutions that want a turnkey HPC system based on the Intel Xeon Platinum 9200 processors, the Intel Server System S9200WK product family, part of Intel Data Center Blocks (Intel DCB), is an excellent choice. Available through OEMs, Intel DCB offers pre-validated solutions built on Intel’s newest data center technologies. These proven, performant, and unbranded server systems are deployment-ready, helping OEMs and their customers accelerate time to market.
2019 Supercomputing Conference
At SC19, visitors to Intel’s booth #1301 can see head-to-head performance and feature comparisons showcasing the advanced capabilities of the Intel Xeon Platinum 9200 processors.
Learn more
Find out how Intel Xeon Platinum 9200 processors can benefit your organization.
1. For configuration details, visit http://www.intel.com/2019xeonconfigs/ (Intel Xeon Scalable processors – claim #31). For additional detail visit https://www.intel.com/content/www/us/en/high-performance-computing/performance-for-hpc-platforms.html
2. 2x Average Performance Improvement compared with Intel® Xeon® Platinum 8180 processor. Geomean of est SPECrate2017_int_base, est SPECrate2017_fp_base, Stream Triad, Intel® Distribution of Linpack, server side Java. Platinum 92xx vs Platinum 8180: 1-node, 2x Intel® Xeon® Platinum 9282 CPU on Walker Pass with 768 GB (24x 32GB 2933) total memory, ucode 0x400000A on RHEL7.6, 3.10.0-957.el7.x86_64, IC19u1, AVX512, HT on all (off Stream, Linpack), Turbo on all (off Stream, Linpack), result: est int throughput=635, est fp throughput=526, Stream Triad=407, Linpack=6411, server side java=332913, tested by Intel on 2/16/2019. vs. 1-node, 2x Intel® Xeon® Platinum 8180 CPU on Wolf Pass with 384 GB (12x 32GB 2666) total memory, ucode 0x200004D on RHEL7.6, 3.10.0-957.el7.x86_64, IC19u1, AVX512, HT on all (off Stream, Linpack), Turbo on all (off Stream, Linpack), result: est int throughput=307, est fp throughput=251, Stream Triad=204, Linpack=3238, server side java=165724, tested by Intel on 1/29/2019.
3. Up to 30X AI performance with Intel® Deep Learning Boost (Intel DL Boost) compared to Intel® Xeon® Platinum 8180 processor (July 2017). Tested by Intel as of 2/26/2019. Platform: Dragon rock 2 socket Intel® Xeon® Platinum 9282 (56 cores per socket), HT ON, turbo ON, Total Memory 768 GB (24 slots/ 32 GB/ 2933 MHz), BIOS: SE5C620.86B.0D.01.0241.112020180249, CentOS* 7 Kernel 3.10.0-957.5.1.el7.x86_64, Deep Learning Framework: Intel® Optimization for Caffe* version: https://github.com/intel/caffe d554cbf1, ICC 2019.2.187, MKL DNN version: v0.17 (commit hash: 830a10059a018cd2634d94195140cf2d8790a75a), model: https://github.com/intel/caffe/blob/master/models/intel_optimized_models/int8/resnet50_int8_full_conv.prototxt, BS=64, No datalayer DummyData: 3x224x224, 56 instance/2 socket, Datatype: INT8 vs Tested by Intel as of July 11th 2017: 2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to "performance" via intel_pstate driver, 384GB DDR4-2666 ECC RAM. CentOS* Linux release 7.3.1611 (Core), Linux kernel* 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC). Performance measured with: Environment variables: KMP_AFFINITY='granularity=fine,compact', OMP_NUM_THREADS=56, CPU Freq set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe: (https://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with "caffe time --forward_only" command, training measured with "caffe time" command. For "ConvNet" topologies, dummy dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (ResNet-50). Intel C++ compiler ver. 17.0.2 20170213, Intel® Math Kernel Library (Intel® MKL) small libraries version 2018.0.20170425. Caffe run with "numactl -l".
For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available security updates. No product or component can be absolutely secure.
Refer to https://software.intel.com/en-us/articles/optimization-notice for more information regarding performance and optimization choices in Intel software products.
Intel Advanced Vector Extensions (Intel AVX) provides higher throughput to certain processor operations. Due to varying processor power characteristics, utilizing AVX instructions may cause a) some parts to operate at less than the rated frequency and b) some parts with Intel® Turbo Boost Technology 2.0 to not achieve any or maximum turbo frequencies. Performance varies depending on hardware, software, and system configuration and you can learn more at http://www.intel.com/go/turbo.
Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.