11/06/25 - 3:00 PM - 3:45 PM EST

Location

Virtual

AI adoption is straining today’s compute infrastructure, as demand shifts from training massive models to serving billions of real-time inference requests. The industry’s future will be dominated by inference workloads, where speed, efficiency, and scalability are critical yet constrained by traditional GPU-based systems.

Groq addresses this challenge with its Language Processing Unit (LPU), a chip purpose-built for inference that delivers higher throughput, lower cost, and greater energy efficiency than GPUs. Because Groq systems have no exotic cooling or power requirements, deploying them requires no major overhaul of your existing data center infrastructure.

Join us for an overview of Groq's technology and architecture, and learn how Groq's 100% inference-focused approach can accelerate AI inference by up to 5X while significantly reducing cost.
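
For attendees who want to experiment before the session, below is a minimal sketch of calling a model served on Groq hardware via the GroqCloud Python SDK. The model name and prompt are illustrative assumptions, not part of this announcement; check the current GroqCloud model catalog before running.

```python
import os

from groq import Groq  # GroqCloud Python SDK: pip install groq

# Reads the API key from the environment; create one in the GroqCloud console.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

# Model name is an assumption for illustration; substitute a model from the
# current GroqCloud catalog.
response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Explain what an LPU is in one sentence."}],
)

print(response.choices[0].message.content)
```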