Inside Juniper’s Ops4AI Lab: Powering the Future of AI Network Fabrics

Maximize your GPU investment with a network purpose-built to help you realize your boldest AI vision. The Juniper Ops4AI Lab is fully operational and ready to run your AI models—confidentially and securely. It’s open to all customers and partners for testing. Watch the video to learn what’s available in the lab, meet our technology partners, and take a tour with Mark Goedde, Lab Ops, HPE Juniper Networking.
You’ll learn
How Juniper helps customers and partners test and fine-tune their AI models in a trusted environment.
What hardware the Ops4AI Lab features—Nvidia, AMD, WEKA, VAST, and more.
Transcript
0:00 AI is reshaping the world and data
0:03 centers are its foundation. But
0:05 designing AI infrastructure that's open,
0:08 scalable, and simple to manage, that's
0:10 no small task. That's why HPE Juniper
0:12 Networking has made a multi-million
0:14 dollar investment in the Ops4AI Lab,
0:17 the industry's first lab dedicated to AI
0:20 R&D, operational innovations, and open
0:23 validated infrastructure design to
0:25 empower customers and partners with a
0:27 trusted environment to test and
0:29 fine-tune their AI models. The Ops4AI
0:31 Lab combines Juniper's cutting-edge
0:34 AI-native networking solutions,
0:36 industry-leading AI models, and a
0:39 powerful ecosystem of technology
0:40 partners to solve one critical
0:42 challenge: how to build AI data centers
0:45 that are open, scale efficiently,
0:48 deliver consistent performance, and
0:50 remain simple to operate. At the heart
0:52 of the Ops4AI Lab is a production-scale
0:54 topology leveraging QFX Series switches,
0:57 including the 800-gig-capable QFX5240,
1:01 running AI-optimized Junos OS software
1:03 and orchestrated by Juniper Apstra
1:05 data center director and data center
1:07 assurance for intent-based automation and
1:10 AIOps across multi-vendor
1:12 environments. You gain real-time visibility
1:15 into the flows between GPUs and take
1:18 action when needed.
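As a rough illustration of what that visibility can look like in practice, the sketch below streams interface counters from a QFX leaf over gNMI using the open-source pygnmi client. This is a minimal, hypothetical example, not tooling from the lab; the target hostname, port, credentials, OpenConfig path, and sample interval are all placeholders.

```python
# Minimal sketch: stream interface counters from a QFX leaf over gNMI.
# Assumes gNMI/OpenConfig telemetry is enabled on the switch; the target,
# credentials, and path below are placeholders, not lab-specific values.
from pygnmi.client import gNMIclient

subscription = {
    "subscription": [
        {
            "path": "/interfaces/interface/state/counters",
            "mode": "sample",
            "sample_interval": 10_000_000_000,  # 10 seconds, in nanoseconds
        }
    ],
    "mode": "stream",
    "encoding": "json",
}

with gNMIclient(target=("qfx-leaf-1.example.net", 9339),
                username="lab-user", password="lab-password",
                skip_verify=True) as gc:
    # subscribe2 yields telemetry updates as they arrive from the switch.
    for update in gc.subscribe2(subscribe=subscription):
        print(update)
```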
1:20 To accelerate AI deployments, we created
1:23 Juniper Validated Designs to support distributed
1:25 AI training and inference clusters,
1:28 GPU-as-a-service platforms, and hybrid AI
1:30 clouds, making them perfect for
1:32 enterprise and hyperscale
1:34 environments. We're collaborating with
1:36 leading partners to validate
1:38 performance, test real AI/ML models, and
1:41 design future-ready architectures that
1:43 scale with your business. Today, our lab
1:46 hosts over 100 GPUs, a mix of Nvidia
1:49 and AMD, including Nvidia DGX. The
1:52 Nvidia-based systems are running with a
1:54 combination of ConnectX-6 and ConnectX-7
1:57 NICs. In addition to that, we're also
1:59 utilizing a Keysight AresONE-S RoCEv2
2:03 traffic generator to simulate RDMA
2:06 traffic. For storage, we have a WEKA
2:09 cluster and a VAST storage cluster. From
2:11 LLM inference to image classification
2:14 and self-supervised learning, we test
2:16 how real-world AI workloads interact
2:19 with the network to ensure your
2:20 infrastructure performs flawlessly and
2:23 delivers optimal performance for your AI
2:25 workloads.
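To show the workload side of this testing, distributed training and inference jobs of this kind typically run collective operations with PyTorch over NCCL, and that GPU-to-GPU traffic rides the fabric as RDMA/RoCEv2 on the ConnectX NICs. The sketch below is a generic, hypothetical example of such a job, not a script from the lab; launch details and NIC selection (for example via NCCL environment variables such as NCCL_IB_HCA) are cluster-specific.

```python
# Minimal sketch of a distributed job whose all-reduce traffic rides the
# RoCEv2 fabric. Generic, hypothetical example, not a script from the lab.
# Typically launched with torchrun, e.g.:
#   torchrun --nnodes=2 --nproc_per_node=8 allreduce_sketch.py
import os
import torch
import torch.distributed as dist

def main():
    # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A large tensor so the all-reduce generates measurable RDMA traffic
    # (~256 MB of fp32 per rank).
    payload = torch.ones(256 * 1024 * 1024 // 4, device="cuda")
    for _ in range(100):
        dist.all_reduce(payload, op=dist.ReduceOp.SUM)
    torch.cuda.synchronize()

    if dist.get_rank() == 0:
        print("completed 100 all-reduce iterations")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```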
2:28 With the Ops4AI Lab, Juniper offers AI-native operations,
2:30 automated lifecycle management, and
2:32 telemetry-rich visibility so you can
2:34 focus on results, not complexity.
2:37 Purpose-built AI infrastructure
2:39 validated by Juniper, ready for what's
2:42 next.