Ops4AI: Scaling RAG Architectures with VAST Data
Retrieval-augmented generation (RAG) inference gives organizations the deep insights of a custom-built model without the time and expense of training an LLM from scratch. Watch our webinar with VAST Data to learn how networking and storage work together in RAG architectures and how Juniper Apstra enables the rapid deployment of an EVPN/VXLAN-based front-end fabric to efficiently handle RAG inference and vector I/O traffic.
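The retrieve-then-augment flow at the heart of RAG can be sketched in a few lines. This is a purely illustrative toy, not Juniper or VAST Data code: the character-frequency "embedding" and in-memory corpus are stand-ins for a real embedding model and vector database.

```python
import math

def embed(text: str) -> list[float]:
    # Toy "embedding": character-frequency vector over a-z.
    # A real deployment would call a trained embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity, the usual ranking metric in vector search.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank stored documents by similarity to the query embedding;
    # a vector database performs this step at scale.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Augment the user query with retrieved context before it
    # reaches the LLM -- the "augmented" in RAG.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "EVPN/VXLAN fabrics carry traffic between GPU and storage nodes.",
    "Vector databases store embeddings for similarity search.",
    "Apstra automates intent-based data center deployments.",
]
print(build_prompt("How are embeddings searched?", corpus))
```

The retrieval step is where storage and network I/O dominate: every inference request triggers embedding lookups against the vector store, which is why the webinar pairs fabric design with high-performance storage.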
Event sessions run at:
AMER | May 6 | 10:00 a.m. PT/1:00 p.m. ET
EMEA | May 6 | 10:00 a.m. GMT
APAC | May 6 | 1:00 p.m. SGT/4:00 p.m. AEDT
Here's What You'll Learn
Experts from Juniper and VAST Data will discuss the challenges of scaling RAG inference and explore best practices for high-performance storage, data retrieval, and AI workload optimization in real-world deployments.
Design for RAG inference
Take a deep dive into network fabrics, both small and large, for RAG inference, and examine why vector databases are central to RAG use cases.
Pair the right network with the right storage
High-performance storage and data retrieval are key to RAG. We’ll explore how the VAST Data storage solution combines with Juniper Apstra to easily deploy and scale RAG solutions.
Deploy and scale your inference environment
Learn how to use Juniper Apstra to deploy a RAG inference cluster quickly, and how to seamlessly scale, monitor, and manage both inference and storage I/O.