Webinar: Getting Started with Llama 3 on AMD Radeon and Instinct GPUs

July 11th, 2024, 11AM ET
[Sponsored Post] This webinar provides a guide to installing Hugging Face Transformers, Meta’s Llama 3 weights, and the necessary dependencies for running Llama locally on AMD systems with ROCm™ 6.0.1.
Webinar Topics

  • Walk through the basics of transformer models; introduce LLMs, Hugging Face, and some of the recent collaboration between AMD and Hugging Face
  • How to install the PyTorch and Hugging Face Transformers frameworks and their dependencies on AMD Instinct and Radeon GPUs with ROCm 6.1
  • Walk through a set of publicly available scripts for running and serving llama2-7b on AMD Instinct™ MI210 and AMD Radeon™ W7800 GPUs on a system with ROCm 6.1 installed
  • Share where to find documentation and blog posts from AMD
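The installation step covered in the webinar can be sketched roughly as follows. This is a minimal outline, not the webinar's actual script: the package index URL and sanity-check command are assumptions, so verify them against the official PyTorch installation matrix and AMD's ROCm documentation for your GPU and ROCm version.

```shell
# Install a ROCm 6.1 build of PyTorch (index URL is an assumption; confirm it
# in the PyTorch "Get Started" installation matrix for your ROCm version)
pip install torch --index-url https://download.pytorch.org/whl/rocm6.1

# Install Hugging Face Transformers and a common companion library
pip install transformers accelerate

# Sanity check: ROCm builds of PyTorch expose GPUs through the torch.cuda API
python -c "import torch; print(torch.cuda.is_available())"
```

If the final command prints `True`, PyTorch can see the AMD GPU and the model scripts discussed in the webinar should be able to run on it.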

Following the live coding section, there will be a brief wrap-up to share ROCm resources and have a Q&A session with ROCm experts.

Register here.
For a full list of Radeon parts supported by ROCm™ software as of 5/1/2024, go to https://rocm.docs.amd.com/en/latest/reference/gpu-arch-specs.html. GD-241
