Dave Vellante; Bob Laliberte

Rami Rahim, Juniper Networks | MWC25 Barcelona

AI & ML, 400G & 800G

Dave Vellante and Bob Laliberte host a conversation with Rami Rahim, CEO of Juniper Networks, as part of theCUBE’s coverage of MWC25 from Barcelona, Spain.


You’ll learn

  • The impact of AI on networking 

  • Challenges and opportunities facing the networking industry 

  • The role silicon plays in network innovation 

Who is this for?

Network Professionals, Business Leaders

Host

Dave Vellante
Co-Founder & Co-CEO, SiliconANGLE Media, Inc.
Bob Laliberte
Principal Analyst, theCUBE Research

Guest speakers

Rami Rahim
CEO, Juniper Networks

Transcript

Navigating the AI Revolution in Networking: From Acquisition to Adaptation

0:07 Hi everybody.

0:07 Welcome back to Fira Barcelona. My name is Dave Vellante.

0:09 I'm here with Bob Laliberte.

0:11 Really excited to have Rami Rahim back on theCUBE,

0:14 CEO of Juniper Networks.

0:15 Rami, thanks for coming on.

0:17 Great to be here. - Good to see you. Last year,

0:18 a year ago, we were on this set

0:20 and of course we were talking about the acquisition.

0:22 We don't really want to get into the acquisition.

0:24 A lot of that's been said out there.

0:26 But what can you tell us about the state?

0:29 A lot of information is out there on the internet,

0:30 but give us your quick take.

0:33 Look, I'll say that we remain super excited about the

0:36 proposition of combining with HPE

0:38 and becoming a really powerful player in the industry,

0:41 especially in networking.

0:42 We're working through the process,

0:43 including the DOJ challenge.

0:45 That being said, there are so many amazing things happening at

0:48 Juniper right now, I'd love to focus this interview on that.

0:50 Yeah, good. Well, let's do that.

0:52 What's got you excited these days?

0:54 Where do you want to start?

0:57 Juniper started in the age of the internet,

0:59 and I remember way back then

1:02 we were really pushing the edge, the limits of technology:

1:05 silicon development, software,

1:06 scale-out software development, and so forth.

1:09 Fast forward to today. With what's happening

1:11 with artificial intelligence,

1:12 I feel like once again we're having to push the very limits

1:16 of what's possible with technology.

1:18 Things that were kind of nice to have, like liquid cooling,

1:22 co-packaged optics

1:23 and so on, are now becoming necessary to have to keep up

1:26 with the pace of change and the capacity requirements of AI.

1:31 So how are you seeing your customers

1:35 deal with that change?

1:36 Are they refactoring their data centers?

1:39 Obviously the hyperscalers are all over this.

1:42 The on-prem customers that we talk to, they all want

1:44 to build AI centers of excellence.

1:46 They're rethinking their air-cooled data centers.

1:51 Because they really haven't invested a ton.

1:52 They've been de-investing in data centers

1:54 for a while. What are you seeing?

1:56 I think there are two big opportunities.

1:58 The first one is AI for networks.

2:00 And that really is an opportunity that transcends customers.

2:04 It applies to every customer segment: cloud, service provider, enterprise.

2:08 And this is all about leveraging artificial intelligence

2:10 to make network operations far easier,

2:13 to basically make the heroes

2:16 of network operations really successful

2:18 for their organizations.

2:20 But to also delight the end user with an experience

2:23 that's always awesome, irrespective of

2:25 what they're using the network for.

2:27 That's the first big opportunity.

2:28 The second opportunity is networks for AI.

2:31 This is now basically making artificial intelligence

2:34 applications of any type possible to begin with.

2:37 Here, of course, you need

2:39 to have this high performance fabric connecting GPUs, tens,

2:42 hundreds, thousands, hundreds of thousands, soon

2:45 to be millions connected together.

2:47 You need a network with the right set of attributes

2:50 to keep up with that capacity.

2:51 So how is the role

2:54 of the network operations person changing?

2:59 I think about storage, like somebody managing LUNs,

3:02 that changed overnight.

3:05 Well, it felt like overnight with the cloud.

3:07 How is the network operator changing?

3:09 You got AI for networking, makes their lives easier,

3:12 but then you got networks for AI.

3:14 That's a whole new game. How does that change?

Foundations and Leadership in AI-native Networks

3:16 Well, starting with AI for networks, any big CTO,

3:20 the CTO of a big company, CIO,

3:25 is going to be typically struggling just keeping up

3:29 with maintaining the network

3:31 to provide a great experience for their end users.

3:34 Like keeping the lights on typically is an arduous task

3:38 in and of itself.

3:40 Providing artificial intelligence

3:42 and moving much of those operations to robots,

3:45 basically software that's doing that work that would

3:48 otherwise have to be done by humans, is

3:51 what this opportunity is all about.

3:52 Our solution, which is driven by Mist AI,

3:55 is truly unique in the industry today in giving operators

4:00 that freedom to focus on much more consequential,

4:03 important things, like advancing their strategies,

4:07 keeping the disruptors out, generating new revenue streams.

4:10 That's what they should be focusing on, not just the day-to-day

4:14 firefighting that's necessary

4:15 to delight their end users.

4:18 On the other side of the equation, networks for AI,

4:21 there it's just about keeping up with the capacity.

4:23 I mean, what's happening right now is unbelievable.

4:26 The pace of investment, tens of billions, hundreds

4:30 of billions of dollars going into learning.

4:33 In time all of that learning has to translate to inference

4:38 and value generation.

4:39 That's necessarily going to happen closer to

4:41 where the data is at the edge

4:43 or even in the customer premises.

4:45 And that's a whole new skill set

4:47 that we're helping our customers achieve.

4:49 Now, Mist AI has always been unique in the industry.

4:52 I want to ask you a question about when you see the race

4:55 to AGI, all the LLMs,

4:58 they're leapfrogging each other, everybody's catching up.

5:00 It's like the NFL, it's like the copycat league.

5:03 How are you able to maintain your lead

5:08 with something like Mist AI?

5:10 Or has the competition, are they closing in?

5:13 I'm sure you're going to say they're not. But why is that?

5:17 I think there are three really critical ingredients

5:20 of an AI-native network.

5:22 I know a lot of people like to use that term,

5:24 but at Juniper we like to be very specific about

5:27 what defines an AI-native network.

5:29 The first is you must have access to the right data.

5:33 And by data, I'm not just looking at whether the network is

5:35 up, whether my network elements are working, I need

5:38 to understand in real time whether my customers are actually

5:42 happy, whether the experience

5:44 of using the network is a good experience.

5:47 That is not something that many people can do.

5:49 I mean, Juniper I think is somewhat unique in that area.

5:52 Second, it's about having a proven cloud

5:55 that can scale from the smallest to the largest

5:59 of customers, as we have done, again,

6:01 with incredible wins around the world.

6:04 And the third is the accurate response.

6:06 Because some of our peers like

6:08 to talk a lot about observability,

6:10 and observability is important,

6:11 but it's only half of the equation.

6:13 You must translate observability into insights,

6:15 and insights then translate into actions.

6:18 And those actions have to be tangible,

6:20 reduction in trouble tickets, reduction in time

6:23 to deployment of new services and networks.

6:25 And then we're doing that in

6:26 spades for customers around the world.
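
To make that observability-to-insights-to-actions pipeline concrete, here is a minimal, hypothetical Python sketch. It is not Mist AI's implementation: the telemetry field, the fixed threshold, and the remediation step are invented purely to illustrate turning raw measurements into a tangible action rather than a dashboard.

```python
# Toy observability -> insight -> action loop. All names (Sample,
# detect_insights, apply_action) are hypothetical, not Juniper/Mist APIs;
# real systems learn baselines per site rather than using a fixed threshold.
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    client: str
    wifi_retry_rate: float  # observability: raw per-client telemetry

def detect_insights(samples: List[Sample], threshold: float = 0.3) -> List[str]:
    """Insight: which clients are actually having a bad experience."""
    return [s.client for s in samples if s.wifi_retry_rate > threshold]

def apply_action(client: str) -> str:
    """Action: a tangible remediation, not just an alert."""
    return f"steer {client} to a less congested access point"

telemetry = [Sample("laptop-1", 0.05), Sample("phone-7", 0.42)]
for client in detect_insights(telemetry):
    print(apply_action(client))
```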

6:28 How do you do that? How do you turn

6:32 that insight into action?

6:34 And what role will agents play?

6:36 So AI comes down to data and learning

6:41 and then translating learning into actions.

6:44 We have been doing this longer than anybody else.

6:47 We have been collecting the data now for 10 years,

6:51 if you include the years

6:52 that Mist was an independent company.

6:54 And then since we acquired Mist

6:55 and expanded that AI architecture across all aspects

7:00 of our solution.

7:01 So we have been collecting that data

7:03 and learning from real life deployments longer than

7:06 anybody else.

7:08 That gives us a unique advantage.

7:09 So Jassy's law applies here, no compression algorithm

7:12 for experience.

7:14 It seems like it's being challenged in LLMs,

7:18 but not in your space, certainly not in silicon.

7:21 Yeah. - Yeah.

7:22 Go ahead, Bob. I know you want to jump in.

Optimizing AI Network Efficiency: Expansion and Congestion Solutions

7:23 >> Yeah, no. The process, as we've talked about,

7:26 has really been the Mist-ification of Juniper.

7:29 And when we talk about the differentiation,

7:31 it's really been the extensibility of the Mist AI engine,

7:35 that you started with wireless, but it's ...

7:38 When you talk about what's differentiated,

7:39 they've got a single AI engine that can go across wireless,

7:43 wired, data center, WAN,

7:45 and the innovation just keeps on coming.

7:48 And a lot of the things that you had talked about,

7:51 it was really interesting to hear about the AI

7:53 for networking and networks for AI.

7:56 And networks for AI actually has two components.

7:59 There's the front end and the back end component of it.

8:01 And you could argue that certainly on

8:03 that back end environment, AI

8:05 for networks is critically important for that

8:08 because you're talking about billions of dollars

8:10 as an investment, needing to keep it always optimized, self-healing,

8:13 self-optimized network.

8:15 And the only way to do that in these environments is

8:18 through AI.

8:19 And I'm really glad that you brought up

8:21 that it's been a 10 year process.

8:23 >> Yeah. - Because so many times when things get exciting,

8:26 there's a lot of washing, right?

8:27 AI washing. "Hey, look, we've now got AI ops."

8:30 >> Correct. - Right. And so when you talk,

8:34 you can really tell the difference from an organization

8:36 that has a mature solution that's able to say,

8:39 "We've gone back 10 years. 8:41 A lot of the processes that you've developed over"

8:44 that time really shows that you've got a mature solution

8:47 that's delivering real value, like you said.

8:49 And the initial value is always that we've been able

8:53 to reduce the number of tickets.

8:55 The real value is

8:56 with the time saved from firefighting those tickets,

8:59 they've been able to work on strategic initiatives.

9:01 >> Yes. - And we've seen that the research shows 93%

9:05 of organizations came out

9:06 and said, "The network is more important

9:09 to achieving our business goals."

9:10 So they recognize that.

9:12 You're giving them the tools

9:14 to drive those business outcomes through the operations.

9:17 >> Yes, yes. Look, it's hard to follow that up

9:18 because I can't agree more with everything you just said.

9:21 That being said, you're absolutely right.

9:23 The strategy for us has been to make Mist AI way

9:28 beyond what it initially was, which was Wi-Fi,

9:30 to now include every single aspect of the networking stack.

9:35 We've expanded it to include wired switching,

9:37 network access control, SD-WAN, security in the cloud,

9:41 security on premises, data center.

9:43 And at this event we're now talking about Mist AI

9:47 basically simplifying the operations of WAN networks.

9:50 The other thing that you said that's really important,

9:53 when you look at networking inside of the AI data center,

9:57 the most precious resources are those extremely

10:00 expensive GPUs.

10:02 And any congestion in the network is evil

10:07 because it results in an inefficiency,

10:10 wasted cycles in those GPUs.

10:12 Nobody wants that.

10:14 And so we have built into our automation capabilities

10:18 for the data center the ability to detect congestion

10:22 and to proactively alleviate it

10:25 before it starts to reduce the utilization

10:28 of those precious GPU resources.

10:31 That has resulted in some of the wins

10:32 that we're achieving now in that space.
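
As a rough illustration of detecting congestion and relieving it before GPU utilization suffers, here is a toy Python sketch. The queue-depth threshold, link names, and the rebalancing step are assumptions made up for this example; production AI fabrics rely on mechanisms such as ECN/PFC signaling and adaptive load balancing rather than a loop like this.

```python
# Hypothetical early-congestion handler for a GPU fabric (illustrative only).
QUEUE_LIMIT = 0.7  # fraction of buffer at which congestion is "forming"

def rebalance(flows_by_link: dict) -> dict:
    """Move one flow from the most-loaded link to the least-loaded one."""
    busiest = max(flows_by_link, key=lambda link: len(flows_by_link[link]))
    idlest = min(flows_by_link, key=lambda link: len(flows_by_link[link]))
    if flows_by_link[busiest]:
        flows_by_link[idlest].append(flows_by_link[busiest].pop())
    return flows_by_link

def monitor(queue_depth: dict, flows_by_link: dict) -> dict:
    """Act on early congestion signals instead of waiting for packet loss."""
    for link, depth in queue_depth.items():
        if depth > QUEUE_LIMIT:
            flows_by_link = rebalance(flows_by_link)
    return flows_by_link

print(monitor({"leaf1-spine1": 0.85, "leaf1-spine2": 0.20},
              {"leaf1-spine1": ["allreduce-a", "allreduce-b"],
               "leaf1-spine2": []}))
```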

Juniper's Quest for Sustainable Silicon Solutions

10:34 >> Absolutely. - Look, I know there's like a backlash on DEI

10:37 and ESG,

10:38 but you guys have never been about virtue signaling.

10:42 And frankly, most people in the data center business

10:44 understand the importance of having energy-efficient

10:48 equipment and software that actually can help fine-tune

10:51 the system.

10:52 So we just put out our forecast on the

10:55 future of the data center.

10:56 It blew me away when I saw the numbers.

10:58 The data center's been relatively flat.

11:00 There's been some share shifts over the years.

11:03 And then all of a sudden, 2023, 2024, it's a spike up

11:07 and it appears to be headed on a trillion dollar trajectory.

11:10 I'm talking all in, power, cooling, everything, networking,

11:13 storage, compute.

11:15 And it's growing at a ten-year CAGR of around 15%.

11:20 >> Yes. - Which is amazing. But the one big risk

11:22 to our scenario is energy

11:24 and not being able to get enough of it.

11:25 Everywhere I go, it's like, "Yeah, well we wanted

11:27 to bring AI on-prem

11:29 or we want to build other data centers,

11:31 we just can't get energy in." So what are your thoughts on

11:35 that as a blocker?

11:36 How will the industry deal with it?

11:37 >> So not that long ago the key metric

11:42 that our customers cared about was performance,

11:44 basically bits per second.

11:46 Today, that's ancient history.

11:49 It's watts per bit per second.

11:51 It's how you achieve performance from a power

11:54 efficiency standpoint.
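
A quick, made-up calculation shows how the watts-per-throughput framing compares two systems; the power and capacity figures below are hypothetical and do not describe any specific product.

```python
# Hypothetical numbers, only to illustrate the metric (lower is better).
def watts_per_gbps(power_watts: float, throughput_gbps: float) -> float:
    return power_watts / throughput_gbps

old_gen = watts_per_gbps(power_watts=5_500, throughput_gbps=14_400)
new_gen = watts_per_gbps(power_watts=4_000, throughput_gbps=28_800)
print(f"old: {old_gen:.2f} W/Gbps, new: {new_gen:.2f} W/Gbps")  # ~0.38 vs ~0.14
```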

11:56 Now we've always at Juniper been good

11:58 at silicon development.

11:59 We continue to invest in this area.

12:01 We continue to develop

12:03 and release some incredible silicon technology that achieves

12:06 the performance gains that our customers need to keep up

12:09 with their requirements.

12:11 But increasingly,

12:12 our innovation is going in the architectures

12:14 to achieve even more efficient networking.

12:18 And we've now added software layers

12:21 that have the intelligence to understand how the

12:27 products we're developing in the network are being utilized,

12:30 and in near real time managing power, turning off engines

12:34 that are not being used, for example, in order

12:38 to just save power.

12:40 And that today is sort of a nice to have,

12:43 but I think in the very near future it's going

12:46 to become an absolutely must have. And we're prepared.
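
Here is a minimal sketch of that kind of utilization-driven power management, assuming per-engine telemetry is available. The class name, idle threshold, and watt figure are invented for illustration and do not describe Juniper's actual software.

```python
# Illustrative only: power down forwarding engines that telemetry shows are idle.
from dataclasses import dataclass

@dataclass
class PacketEngine:
    name: str
    utilization: float      # 0.0 - 1.0, from near-real-time telemetry
    powered: bool = True

def manage_power(engines, idle_threshold: float = 0.05) -> float:
    """Turn off idle engines; return an estimate of watts saved."""
    WATTS_PER_ENGINE = 60.0  # hypothetical per-engine draw
    saved = 0.0
    for engine in engines:
        if engine.powered and engine.utilization < idle_threshold:
            engine.powered = False  # a real system would drain traffic first
            saved += WATTS_PER_ENGINE
    return saved

engines = [PacketEngine("pfe0", 0.62), PacketEngine("pfe1", 0.01)]
print(f"estimated savings: {manage_power(engines):.0f} W")
```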

12:49 >> You mentioned silicon a couple of times.

12:52 When did you start your silicon journey?

Global Semiconductor Manufacturing Ecosystem

12:54 >> Well, me personally, when I joined Juniper

12:58 as the youngest employee

13:00 and engineer on the team, I was a silicon developer.

13:02 Actually initially I was a silicon verification engineer,

13:05 then I became a silicon developer.

13:07 Then I ran a bunch of silicon projects in the company.

13:10 So at heart, I'm still a silicon engineer.

13:14 It's great to see the transition over the years.

13:17 In the early days of Juniper, silicon was front

13:20 and center in the industry.

13:22 Then we went through a period where people were confused.

13:24 It was like software is eating the world

13:26 and silicon's sort of no longer-

13:28 >> Don't invest in silicon, right?

13:30 >> For us silicon guys, that was a difficult period.

13:32 >> Yeah, I bet. - Now silicon is once again

13:35 the front and center.

13:36 I mean, this is where a ton of the innovation is happening,

13:40 and what's happening in artificial intelligence

13:43 is just staggering.

13:46 Honestly, you wake up every day

13:47 and you don't even know what's coming your way in terms

13:50 of some new big innovation.

13:53 And it's an exciting time to be in IT.

13:55 It's an exciting time to be a chip developer and in technology.

13:58 >> And it gives you a competitive advantage obviously.

14:01 Maybe you could explain why.

14:05 >> So at Juniper, we leverage, depending on the use case,

14:10 either custom or merchant silicon.

14:12 In some use cases, let's say data center top

14:14 of rack switching, we've got great partners, like Broadcom

14:18 for example, that provide us with wonderful technology

14:20 that we can leverage and that gives

14:22 our customers exactly what they want.

14:24 In some use cases, take areas

14:27 where you need a high degree of flexibility.

14:30 Our Trio chipset for the MX product line, a product

14:34 that I actually had a hand in developing back when I was an

14:36 engineer many years ago, is the most flexible

14:41 network processor on the planet, one that can keep up

14:44 with new changes and use cases, protocols and so forth.

14:48 So the investment protection

14:50 that a customer gets in leveraging Trio

14:52 for the MX is basically endless.

14:55 But we've also developed a line of silicon called Express

14:58 for our PTX product lines that optimizes

15:01 for power efficiency and performance.

15:04 This is where our large cloud provider customers,

15:07 large telcos that are building converged cores,

15:10 are really turning to as a great solution

15:13 for their use cases.

15:15 >> And your process nodes, do you require

15:21 super advanced manufacturing, like an iPhone?

15:24 >> Of course. - You do.

15:25 >> We're always on the cutting edge

15:27 of process node technology, of course.

15:29 >> How important is it for you as a silicon designer

15:33 to have a US-based

15:37 advanced manufacturing capability?

15:40 >> I think it's really important for the US.

15:43 I think having a healthy ecosystem of manufacturing,

15:47 basically fabrication options for silicon, is very good.

15:51 So we would fully support

15:53 and we would look to leverage any sort

15:56 of fabrication facilities and capacity in the US.

16:00 That's going to happen over time.

16:01 So at this point in time, we're still in a wait

16:04 and see mode, but I'm quite hopeful

16:06 and optimistic that that can happen in time.

16:07 >> I am as well. But so you would agree

16:09 it needs to be an onshore.

16:11 I guess it'd be fine in Europe too.

16:13 But as Americans, at least part American, you'd like

16:17 to see it in the United States.

16:20 How important is it that that's a US-based

16:24 company?

16:27 Does that matter as much?

16:29 >> I think it does. I mean for US national interests,

16:32 having US companies, not just design,

16:35 but also have the ability to manufacture chips,

16:39 which are such an important part

16:41 of the overall technology ecosystem

16:43 and the economy of any given country, is very important.

16:46 So as an American, I think this is a

16:48 very good thing to have.

16:49 >> So not just TSMC having plants in the US,

16:53 I'm saying a US-based ...

16:55 Whether it's a joint venture,

16:56 which I think it should be perhaps,

16:59 but US domiciled

17:02 HQ. You would agree?

17:04 >> So TSMC having plants in the US I think is great

17:07 and it's a wonderful step forward.

17:09 But to your point, I think it's also a very good thing

17:14 to have a US-based company such

17:16 as Intel have fabrication facilities available to many

17:21 different network chip designs,

17:23 or any chip design for that matter -

17:25 >> I've laid out my plan for this to happen.

17:26 I'll share it with you. I've also laid out a plan

17:29 to keep Intel as an independent brand, as a designer.

17:32 I hope that happens as well.

17:34 I don't know how you feel about that as a silicon guy,

17:36 but maybe you don't want to comment, which is cool.

17:41 But you have really a vested interest in that.

17:45 >> Yeah. - And I think it's good for the

17:47 world actually if we can do it.

Closing Remarks on Juniper's Future

17:49 >> I cannot agree more.

17:51 Silicon will always be a key part of where we invest,

17:56 where we innovate, and where we differentiate.

18:00 Networking is inherently a distributed problem.

18:03 There are some aspects of it that can be centralized

18:05 and moved to the cloud, as we have demonstrated

18:08 better than anybody else with our Mist solution.

18:11 But there are some aspects of networking

18:14 that will inherently always be highly distributed,

18:17 and you must have purpose-built silicon for that function.

18:21 And this is where we will continue

18:23 to invest in our silicon technology.

18:24 >> Well, and you've seen it. A number

18:26 of companies have got ...

18:28 Apple has an advantage, clearly.

18:31 Amazon, its Annapurna acquisition.

18:34 Juniper is another great example, and Tesla.

18:37 I mean, silicon is

18:39 in a way eating the world, right? I mean, it's the underpinning-

18:41 >> It's back in fashion.

18:42 >> It is so back in fashion. - Yeah.

18:46 >> All right, we'll give you the last word, Rami.

18:48 How do you want to sort of end this segment?

18:50 What do you want our audience to know about the future

18:53 of Juniper, its impact on service providers,

18:56 and the future of AI?

18:57 >> Well, I will reiterate, for a CEO in technology

19:02 and for a technologist such as myself,

19:03 it is an absolutely exciting time to be in this industry.

19:07 Networking is back at the center of the action,

19:11 both in AI for networks, where we're leveraging AI

19:16 to simplify network operations and delight the end user.

19:19 And also it's become absolutely essential

19:23 to connecting the hundreds of thousands, if not soon

19:25 to be millions of GPUs in single clusters

19:30 to power these large LLMs

19:32 and these unbelievable applications

19:34 that we're seeing being born around us.

19:36 So let's see what the future holds.

19:39 But I'm super optimistic.

19:41 >> It's interesting you say that. Charlie Kawwas was here

19:43 last year, same time as you roughly,

19:46 and he was talking about the future

19:47 of a million GPU clusters.

19:49 We've certainly seen hundreds of thousands now.

19:51 And it's going to happen.

19:53 >> When Charlie says something,

19:54 I listen carefully. He's a smart guy.

19:56 >> Indeed, indeed. Well, Rami,

19:57 thank you so much for your time.

19:59 >> My pleasure. - Thank you, Rami.

20:00 >> And thank you for watching. Bob Laliberte, Dave Vellante,

20:02 we're here at MWC 2025.

20:04 This is day two. You're watching theCUBE.

20:06 We'll be right back to wrap up day two. Keep it right there.
