Solutions & Technologies
Ops4AI Lab
Maximize your GPU investment with a network optimized from the ground up to help you achieve your greatest AI vision. Juniper’s AI data center solution is the fastest and most flexible way to deploy high-performing AI training, inference, and storage clusters, and the simplest to operate with limited IT resources. And now, you can see it for yourself in the Juniper Ops4AI Lab.
How Juniper can help
Customers and partners can now easily validate the performance and functionality of their AI models in our exclusive lab in Juniper’s Sunnyvale, CA headquarters.
Test whether Juniper AI-Optimized Ethernet matches or outperforms InfiniBand
Customers can now validate the performance of AI-optimized Ethernet switching for their use cases and get hands-on with the full-stack Juniper AI Data Center solution for inference, training, and storage. Test AI performance and functionality using MLCommons® MLPerf BERT-large, DLRM, Llama 2, or bring your own model (BYOM).
Validate your designs to ensure successful outcomes
Juniper validated designs (JVDs) for full-stack fabric, GPU, and storage networks give customers complete confidence that their AI clusters are fully vetted, tested, and ready to rock. No more guesswork, just plug and play.
Automate and manage across the full AI lifecycle
Use Juniper’s Apstra intent-based networking solution as you design, build, deploy, and automate ongoing AI operations. Apstra affords customers more hiring flexibility and enables common NetDevOps processes end-to-end across your AI infrastructure.
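To illustrate the intent-based pattern this describes, the sketch below declares a desired fabric state and diffs it against observed state to find drift, which is the core loop of intent-based networking. The field names and fabric parameters are hypothetical placeholders, not the actual Apstra data model or API.

```python
# Sketch of an intent-based workflow: declare desired state, compare it to
# observed state, and report drift. All field names here are illustrative
# placeholders, not the real Apstra data model.

def desired_intent():
    # Declarative description of the fabric we want (hypothetical fields).
    return {
        "fabric": "ai-training-pod-1",
        "spines": 4,
        "leaves": 8,
        "links_per_leaf": 4,
        "overlay": "evpn-vxlan",
    }

def diff_intent(desired, observed):
    """Return the keys whose observed values drift from intent."""
    return {
        key: {"desired": want, "observed": observed.get(key)}
        for key, want in desired.items()
        if observed.get(key) != want
    }

observed = {
    "fabric": "ai-training-pod-1",
    "spines": 4,
    "leaves": 6,          # two leaves missing versus intent
    "links_per_leaf": 4,
    "overlay": "evpn-vxlan",
}

drift = diff_intent(desired_intent(), observed)
print(drift)  # only the drifted key is reported
```

In a real deployment the "push" step that remediates drift would go through the controller rather than ad hoc device changes; the point of the sketch is only the declare-observe-diff loop.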
Ready to get started?
Fill out this form to get access to the lab.
Ops4AI Lab FAQs
What is Ops4AI?
Ops4AI is a new Juniper solution for operating AI data centers. It combines the following Juniper Networks components:
- AIOps in the data center built upon the Marvis virtual network assistant
- Intent-based automation via Juniper Apstra multivendor data center fabric management
- AI-optimized Ethernet capabilities, including RoCEv2 for IPv4/v6, congestion management, efficient load-balancing, and telemetry
Together, these capabilities accelerate time-to-value for high-performing AI data centers while reducing operational costs and streamlining processes.
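For context on the congestion-management capability listed above: RoCEv2 fabrics commonly pair PFC with ECN marking (the DCQCN scheme), where switches mark packets with a probability that ramps with queue depth. A minimal sketch of that RED-style marking curve, using illustrative thresholds rather than Juniper defaults:

```python
def ecn_mark_probability(queue_kb, k_min_kb=150, k_max_kb=1500, p_max=0.1):
    """RED-style ECN marking probability as a function of queue depth.

    Below k_min nothing is marked; between k_min and k_max the marking
    probability ramps linearly up to p_max; above k_max every packet is
    marked. Thresholds here are illustrative, not vendor defaults.
    """
    if queue_kb <= k_min_kb:
        return 0.0
    if queue_kb >= k_max_kb:
        return 1.0
    return p_max * (queue_kb - k_min_kb) / (k_max_kb - k_min_kb)

# Shallow queue: no marking. Midway up the ramp: partial marking.
print(ecn_mark_probability(100))   # 0.0
print(ecn_mark_probability(825))   # 0.05 (halfway between k_min and k_max)
print(ecn_mark_probability(2000))  # 1.0
```

Marked packets cause RoCEv2 receivers to send congestion notifications back to the sender, which rate-limits before queues overflow and PFC pauses are needed.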
What AI models are customers able to run in the lab?
The Juniper AI lab leverages various AI models, including:
- Llama 2
- BERT
- DLRM
- And customers often bring their own model (BYOM)
Customers can run the standard models (BERT, Llama 2, DLRM) for benchmarking and performance testing. They can also bring their own models to test over a Juniper Validated Design (JVD), which consists of a spine-leaf network topology with a rail-optimized design. We also test MLCommons submissions in the lab.
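To make "rail-optimized" concrete: in this design, GPU position k (rail k) on every server cables to the same leaf, so same-rail traffic between servers stays within one leaf and never crosses the spine tier. A minimal sketch of that mapping, with illustrative server and GPU counts:

```python
GPUS_PER_SERVER = 8  # illustrative: one rail leaf per GPU position

def leaf_for(server, gpu_index):
    """In a rail-optimized design, GPU k on every server connects to rail leaf k."""
    assert 0 <= gpu_index < GPUS_PER_SERVER
    return f"rail-leaf-{gpu_index}"

def crosses_spine(src, dst):
    """Traffic crosses the spine tier only when the two endpoints sit on
    different rail leaves."""
    (src_server, src_gpu), (dst_server, dst_gpu) = src, dst
    return leaf_for(src_server, src_gpu) != leaf_for(dst_server, dst_gpu)

# Same rail (GPU 3) on two different servers: single-leaf path, no spine hop.
print(crosses_spine(("srv-0", 3), ("srv-1", 3)))  # False
# Different rails must traverse the spine (or an intra-server interconnect).
print(crosses_spine(("srv-0", 0), ("srv-1", 5)))  # True
```

This is why collective operations that communicate rail-to-rail (common in distributed training) see consistently low hop counts on such fabrics.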
What is MLCommons?
MLCommons is an AI industry consortium built on the philosophy of open collaboration to improve AI systems. It helps companies and universities around the world build better AI systems.
Juniper was the first company to submit a multi-node Llama 2 inference benchmark to MLCommons in February/March and, more recently, submitted results for the MLPerf Training 4.0 benchmarks. We’re committed to open architectures that use Ethernet to interconnect GPUs in data center network fabrics.
What do we have in the lab today?
- Juniper QFX Series and PTX Series platforms with 400G and 800G interfaces
- Juniper Apstra data center fabric management and automation solution (Premium)
- WEKA storage
- NVIDIA GPUs (H100, A100); additional AI accelerators being added (e.g., AMD MI300X)
Are customer test results from the AI Lab made publicly available?
We respect the confidentiality and privacy of customer models and test data. As a result, we do not use customer models to test with other customers. While MLPerf models are publicly available, customer models are not shared.
I am a Juniper Channel Partner. How can I get access to the lab for my customers?
Please visit the partner center page to request access.