DAOS Delivers Exascale Performance Using HPC Storage So Fast It Requires New Units of Measurement

Forget what you previously knew about high-performance storage and file systems. New I/O models for HPC such as Distributed Asynchronous Object Storage (DAOS) have been architected from the ground up to make use of new NVM technologies such as Intel® Optane™ DC Persistent Memory Modules (Intel Optane DCPMMs). With latencies measured in nanoseconds and bandwidth measured in tens of GB/s, new storage devices such as Intel Optane DCPMMs redefine the measures used to describe high-performance nonvolatile storage.

Interview: Terry Deem and David Liu at Intel

I recently caught up with Terry Deem, Product Marketing Manager for Data Science, Machine Learning, and Intel® Distribution for Python, and David Liu, Software Technical Consultant Engineer for the Intel® Distribution for Python*, both from Intel, to discuss the Intel® Distribution for Python (IDP): targeted classes of developers, use with commonly used Python packages for data science, benchmark comparisons, the solution’s use in scientific computing, and a look to the future with respect to IDP.
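As a purely illustrative aside (not drawn from the interview), the sketch below shows one way a reader might check which BLAS/LAPACK backend their NumPy build uses, since IDP ships an Intel MKL-accelerated NumPy, and time a single dense matrix multiply that could then be compared between a stock Python environment and an IDP one; the matrix size and timing approach are arbitrary choices for illustration.

    # Hypothetical sketch: inspect the BLAS/LAPACK backend of the installed
    # NumPy (IDP ships an MKL-accelerated build) and time one dense matmul.
    import time
    import numpy as np

    np.__config__.show()  # with IDP, the BLAS/LAPACK entries typically report MKL

    n = 4096  # arbitrary size chosen for illustration
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b  # dense matrix multiply, dispatched to the underlying BLAS
    print(f"{n}x{n} matmul took {time.perf_counter() - start:.3f} s")

Running the same script under both environments gives a rough, first-order comparison of the kind the benchmark discussions refer to.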

DarwinAI Generative Synthesis Platform and Intel Optimizations for TensorFlow Accelerate Neural Networks

DarwinAI, a Waterloo, Canada-based startup creating next-generation technologies for artificial intelligence development, announced that the company’s Generative Synthesis platform, when used with Intel technology and optimizations, generated neural networks with a 16.3X improvement in image classification inference performance. Intel shared the optimization results in a recently published solution brief.

Using Converged HPC Clusters to Combine HPC, AI, and HPDA Workloads

Many organizations follow the older practice of adopting AI and HPDA as distinct entities, which leads to underutilization of their clusters. To avoid this, clusters can be converged to save on (or potentially eliminate) capital expenditures and reduce operating expenses. This sponsored post from Intel’s Esther Baldwin, AI Strategist, explores how organizations are using converged HPC clusters to combine HPC, AI, and HPDA workloads.

Intel’s AI Lab Presents Several Ground-breaking Research Papers

Researchers at Intel’s AI Lab recently presented several compelling research papers at the International Conference on Machine Learning (ICML), held June 10-15, and the Conference on Computer Vision and Pattern Recognition (CVPR), held June 16-20.

Intel IT Serves Up Innovative Applications of AI, Analytics and Collaboration

One telling sign that the companies leading the AI industry truly believe in the technologies they’re promoting across the globe is how much they use these same technologies internally. An excellent case in point is Intel IT, the internal IT services arm of Intel Corp.

Exploring the Convergence of AI, Data and HPC

The demand for performant and scalable AI solutions has stimulated a convergence of science, algorithm development, and affordable technologies to create a software ecosystem designed to support the data scientist. A special insideHPC report explores how the HPC and data-driven AI communities are converging, as they arguably run the same types of data- and compute-intensive workloads on HPC hardware, whether on a leadership-class supercomputer, a small institutional cluster, or in the cloud.

Special Report: AI-HPC is Happening Now

The HPC and data-driven AI communities are converging, as they arguably run the same types of data- and compute-intensive workloads on HPC hardware, whether on a leadership-class supercomputer, a small institutional cluster, or in the cloud. Download the insideHPC Special Report, brought to you by Intel, to learn more about AI-HPC and how today’s businesses are using this technology.

BlueData, Intel Compare Bare-Metal & Containers for Big Data Workloads

Has your business ever tried to decide between a bare-metal environment and a container-based environment for its Big Data workloads? BlueData and Intel took on this very question in a benchmark study of Big Data workload performance.

Bare-Metal Performance for Big Data Workloads on Docker Containers

In a benchmark study, Intel compared the performance of Big Data workloads running on a bare-metal deployment versus running in Docker containers with the BlueData EPIC software platform. The study found that it is possible to run Big Data workloads in a container-based environment without sacrificing performance, with added benefits in agility, flexibility, and cost efficiency. Data science teams can get on-demand Hadoop and Spark clusters while leveraging enterprise-grade security in a multi-tenant architecture. Get the white paper to learn more about this benchmark study.