insideAI News Guide to Optimized Storage for AI and Deep Learning Workloads – Part 3
insideAI News Guide to Optimized Storage for AI and Deep Learning Workloads – Part 2
insideAI News Guide to Optimized Storage for AI and Deep Learning Workloads
Artificial Intelligence (AI) and Deep Learning (DL) represent some of the most demanding workloads in modern computing history, presenting unique challenges to compute, storage and network resources. In this technology guide, insideAI News Guide to Optimized Storage for AI and Deep Learning Workloads, we’ll see how traditional file storage technologies and protocols like NFS starve AI workloads of data, reducing application performance and impeding business innovation. A state-of-the-art AI-enabled data center should concurrently and efficiently service the entire spectrum of activities involved in DL workflows, including data ingest, data transformation, training, inference, and model evaluation.
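The storage bottleneck the guide describes shows up most clearly in the training input pipeline: if the file system cannot feed data as fast as the accelerators consume it, the compute sits idle. The sketch below is a minimal, hypothetical PyTorch data loader (not taken from the guide; the mount point, record size and loader settings are assumptions) illustrating the common mitigation of overlapping storage reads with compute via parallel worker processes and prefetching.

```python
# Minimal sketch, assuming a PyTorch-style training input pipeline.
# The mount point, fixed 4 KiB record size, and loader settings are illustrative only.
import os
import torch
from torch.utils.data import Dataset, DataLoader

class FileDataset(Dataset):
    """Reads raw samples from files on a shared file system (e.g. an NFS mount)."""
    def __init__(self, root):
        self.paths = [os.path.join(root, f) for f in sorted(os.listdir(root))]

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        with open(self.paths[idx], "rb") as f:
            data = f.read()                      # the storage-bound step
        # Pad/trim to a fixed-size record so samples batch cleanly (example only).
        record = bytearray(data[:4096].ljust(4096, b"\0"))
        return torch.frombuffer(record, dtype=torch.uint8).float()

if __name__ == "__main__":
    loader = DataLoader(
        FileDataset("/mnt/training_data"),       # hypothetical mount point
        batch_size=256,
        num_workers=8,                           # parallel readers hide per-file latency
        prefetch_factor=4,                       # batches staged ahead of the consumer
        pin_memory=True,
    )
    for batch in loader:
        pass  # forward/backward pass would run here; if reads lag, the GPU waits
```

Loader-side parallelism and prefetching only hide latency; when the aggregate read rate of the workers exceeds what an NFS-style filer can sustain, the pipeline stalls no matter how these knobs are tuned, which is the case the guide's parallel-storage argument addresses.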
How to Get to the Data-Enabled Data Center
Despite their many promising benefits, advancements in Artificial Intelligence (AI) and Deep Learning (DL) are creating some of the most challenging workloads in modern computing history, putting significant strain on the underlying I/O, storage, compute and network resources. An AI-enabled data center must be able to concurrently and efficiently service the entire spectrum of activities involved in the AI and DL process, including data ingest, training and inference.
Parallel Storage Fuels Groundbreaking Neuroscience and Behavioral Research at Harvard
To alleviate bottlenecks and achieve the ideal balance of parallel performance and optimized availability, Harvard University’s Faculty of Arts and Sciences Research Computing (FASRC) deployed the DataDirect Networks (DDN®) GRIDScaler® GS7KX® parallel file system appliance with 1PB of storage. The installation has accelerated the collection of images detailing synaptic connectivity in the brain’s cerebral cortex.
All-in on AI? Five Considerations to Help Ensure Long-Term Success
In this contributed article, Kurt Kuckein, Director of Marketing for DDN Storage, offers five key areas to consider carefully when creating and developing an AI data platform that ensures better answers, faster time to value, and the capability to scale rapidly. Analytics, AI and Machine Learning continue to make extensive inroads into data-oriented industries, presenting significant opportunities for enterprises and research organizations. However, the potential for AI to improve business performance and competitiveness demands a different approach to managing the data lifecycle.
DDN Named Datacenter Platform Partner of the Year at Intel Technology Partner Awards, Recognizing its Market Leadership at Scale
DataDirect Networks (DDN®) announced that it received the Intel® Technology Partner of the Year award for Datacenter Platform. DDN was honored for the creation and success of its Infinite Memory Engine® (IME®) SSD-enabled product family, which leverages Intel technologies and allows customers to accelerate time to insight and simplify workflows and applications in on-premises, hybrid and cloud environments.
DDN Storage Announces Groundbreaking 33GB/s Performance to NVIDIA DGX Servers to Accelerate Machine Learning and AI Initiatives
DataDirect Networks (DDN®) today announced its EXAScaler DGX solution, which delivers leading-edge performance through a new optimized, accelerated client that integrates tightly and seamlessly with the NVIDIA DGX architecture. Using the EXAScaler® ES14KX® high-performance all-flash array, the new solution smashed existing records by demonstrating a massive 33GB/s of throughput to a single NVIDIA […]
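For context on the headline number, 33GB/s refers to aggregate read throughput delivered to a single client node. The announcement excerpt does not detail the benchmark methodology, so the sketch below is only a rough illustration of how single-client streaming throughput can be probed from Python; the file paths and sizes are assumptions, and a real measurement would use established I/O benchmarks with direct I/O so the page cache is not what gets measured.

```python
# Rough single-client read-throughput probe (illustrative only, not DDN's benchmark).
import time
from concurrent.futures import ThreadPoolExecutor

FILES = [f"/mnt/exascaler/testfile_{i}.bin" for i in range(16)]  # hypothetical large files
CHUNK = 16 * 1024 * 1024                                         # 16 MiB per read call

def read_file(path):
    """Stream one file sequentially and return the number of bytes read."""
    total = 0
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                return total
            total += len(chunk)

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(FILES)) as pool:
    nbytes = sum(pool.map(read_file, FILES))
elapsed = time.perf_counter() - start

print(f"read {nbytes / 1e9:.1f} GB in {elapsed:.2f} s -> {nbytes / 1e9 / elapsed:.1f} GB/s")
```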
Five Data Platform Considerations When Thinking About Your Deep Learning Future
With the current maturation of Artificial Intelligence applications and Deep Learning algorithms, many organizations are spinning up initiatives to figure out how they will extract competitive differentiation from their data. This guest article comes from DDN Storage, a provider of high-performance, high-capacity big data storage systems, processing solutions and services to data-intensive, global organizations.
New Market Dynamics Report: HPC Life Sciences
Scientific research in the life sciences is often akin to searching for needles in haystacks. Finding the one protein, chemical, or genome that behaves or responds in the way the scientist is looking for is the key to the discovery process. For decades, high performance computing (HPC) systems have accelerated this process, often by helping to identify and eliminate infeasible targets sooner.