Intel® Parallel Studio XE 2020: Transform Enterprise, Cloud, HPC & Artificial Intelligence with Faster Parallel Code

In this article we’ll drill down into the capabilities of Intel® Parallel Studio XE 2020, the latest release of a comprehensive parallel programming tool suite that simplifies the creation and modernization of code. Using this newest release, software developers and architects can speed AI inference with support for Intel® Deep Learning Boost and its Vector Neural Network Instructions (VNNI), which are designed to accelerate the inner loops of convolutional neural networks (CNNs).
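
As a concrete illustration of what VNNI speeds up, the sketch below shows the scalar int8 multiply-accumulate loop that quantized CNN inference reduces to; AVX-512 VNNI’s VPDPBUSD instruction (the _mm512_dpbusd_epi32 intrinsic) fuses this multiply-widen-accumulate sequence into a single operation. The function name and operand layout are illustrative, not drawn from the article.

```cpp
#include <cstdint>
#include <cstddef>

// Reference form of the dot product at the heart of a quantized convolution:
// unsigned 8-bit activations times signed 8-bit weights, accumulated into
// 32-bit integers. AVX-512 VNNI (VPDPBUSD / _mm512_dpbusd_epi32) performs
// four of these multiply-adds per 32-bit lane in a single instruction.
int32_t dot_u8s8(const uint8_t* activations, const int8_t* weights, std::size_t n) {
    int32_t acc = 0;
    for (std::size_t i = 0; i < n; ++i) {
        acc += static_cast<int32_t>(activations[i]) * static_cast<int32_t>(weights[i]);
    }
    return acc;
}
```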

oneAPI: A Unified Cross-Architecture, High-Performance Programming Model Designed to Help Shape the Future of Application Development

In this article, we’ll dive into the newly announced oneAPI, a single, unified programming model that aims to simplify development across multiple architectures, such as CPUs, GPUs, FPGAs, and other accelerators. The long-term journey is represented by two important first steps: the industry initiative and the Intel beta product.

Heterogeneous Computing Programming: oneAPI and Data Parallel C++

Sponsored Post: What you missed at the Intel Developer Conference, and how to catch up today. By James Reinders. In the interests of full disclosure … I must admit that I became sold on DPC++ after Intel approached me (as a consultant, three years retired from Intel) asking if I’d help with a book on […]
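
For readers who haven’t yet seen Data Parallel C++, here is a minimal sketch of what a DPC++ kernel looks like, assuming the beta oneAPI toolchain and its SYCL headers; the kernel name, buffer sizes, and data are illustrative, not taken from the article or the book.

```cpp
#include <CL/sycl.hpp>
#include <vector>
#include <iostream>

namespace sycl = cl::sycl;

int main() {
  constexpr std::size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  // A queue bound to whatever device the default selector finds
  // (CPU, GPU, or another accelerator).
  sycl::queue q{sycl::default_selector{}};

  {
    sycl::buffer<float, 1> bufA(a.data(), sycl::range<1>(N));
    sycl::buffer<float, 1> bufB(b.data(), sycl::range<1>(N));
    sycl::buffer<float, 1> bufC(c.data(), sycl::range<1>(N));

    q.submit([&](sycl::handler& h) {
      auto A = bufA.get_access<sycl::access::mode::read>(h);
      auto B = bufB.get_access<sycl::access::mode::read>(h);
      auto C = bufC.get_access<sycl::access::mode::write>(h);
      // The same kernel source runs on any device the queue targets.
      h.parallel_for<class vector_add>(sycl::range<1>(N), [=](sycl::id<1> i) {
        C[i] = A[i] + B[i];
      });
    });
  }  // Buffers go out of scope here, copying results back to the host vectors.

  std::cout << "c[0] = " << c[0] << std::endl;  // expected: 3
  return 0;
}
```

The same kernel source compiles for a CPU, a GPU, or an FPGA emulator simply by changing the device selector, which is the portability argument oneAPI is making.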

2nd Generation Intel® Xeon® Platinum 9200 Processors Offer Leadership Performance and Advance AI

Simulation, modeling, data analytics, and other workloads commonly use high performance computing (HPC) to advance research and business in many ways. However, as converged workloads involving AI grow in adoption, HPC systems must keep pace with evolving needs. 2nd Generation Intel® Xeon® Platinum processors, with built-in AI acceleration technologies, offer leadership performance to speed the most demanding HPC workloads.

How Astera Labs is Revolutionizing Semiconductor Product Development—100% in the Cloud

For any established semiconductor product developer, designing a next-generation PCIe 5.0 chipset in less than a year is no small feat. For a brand-new startup with no compute infrastructure other than laptops, however, it is a huge ask. That’s why, with time being of the essence, Astera Labs decided to take a chance on the efficiencies it would gain from a 100% cloud-based approach.

Six Platform Investments from Intel to Facilitate Running AI and HPC Workloads Together on Existing Infrastructure

Because HPC technologies today offer substantially more power and speed than their legacy predecessors, enterprises and research institutions benefit from combining AI and HPC workloads on a single system. Six platform investments from Intel will help reduce obstacles and make HPC and AI deployment even more accessible and practical.

DAOS Delivers Exascale Performance Using HPC Storage So Fast It Requires New Units of Measurement

Forget what you previously knew about high-performance storage and file systems. New I/O models for HPC such as Distributed Asynchronous Object Storage (DAOS) have been architected from the ground up to make use of new NVM technologies such as Intel® Optane™ DC Persistent Memory Modules (Intel Optane DCPMMs). With latencies measured in nanoseconds and bandwidth measured in tens of GB/s, new storage devices such as Intel Optane DCPMMs redefine the measures used to describe high-performance nonvolatile storage.

Using Converged HPC Clusters to Combine HPC, AI, and HPDA Workloads

Many organizations still follow the older practice of deploying AI and HPDA as distinct, dedicated systems, which leads to underutilization of their clusters. To avoid this, clusters can be converged to reduce (or potentially eliminate) capital expenditures and cut operating costs. This sponsored post from Intel’s Esther Baldwin, AI Strategist, explores how organizations are using converged HPC clusters to combine HPC, AI, and HPDA workloads.