Top 5 Features of CN2 (Juniper Cloud-Native Contrail Networking)
This walkthrough describes the top features of Juniper Cloud-Native Contrail Networking (CN2). CN2 is a Kubernetes-native software-defined networking platform that enables organizations to automate Infrastructure as a Service (IaaS) operations and manage virtual network lifecycles.
To learn more about Juniper Cloud-Native Contrail Networking, visit the Juniper Networks TechLibrary: https://www.juniper.net/documentation/product/us/en/cloud-native-contrail-networking/
You’ll learn
How CN2 automates the creation and management of virtual networks by letting you connect, isolate, and secure workloads in both private and public clouds
Why CN2 is suited to multicluster environments shared by many tenants, teams, applications, and engineering phases
Transcript
0:05 In this video we’ll take a look at the top 5 features of Juniper Cloud-Native Contrail
0:10 Networking, also known as CN2.
0:14 Namespace isolation ensures that compute resources, networking traffic, and cluster applications
0:19 are segregated by namespace.
0:21 Unless a network policy explicitly allows it, workloads in different namespaces can’t communicate with one another.
0:26 This means tenants benefit from the security, redundancy, and designated resources of namespace
0:31 isolation.
0:33 CN2 maintains multitenancy, partitioning clusters across multiple users, tenants, and
0:39 applications.
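The cross-namespace exception mentioned above uses standard Kubernetes NetworkPolicy resources. As a minimal sketch (the namespace and label names here are illustrative, not from the video), a policy like this admits ingress into one tenant's namespace only from pods in namespaces carrying a designated label, while all other cross-namespace traffic stays blocked:

```yaml
# Illustrative example: allow ingress into namespace "tenant-a"
# only from pods in namespaces labeled tenant=partner.
# All other cross-namespace traffic to tenant-a remains isolated.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-partner
  namespace: tenant-a
spec:
  podSelector: {}            # applies to every pod in tenant-a
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              tenant: partner
```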
0:41 CN2 supports the deployment and management of VM workloads alongside containers with
0:46 KubeVirt.
0:48 KubeVirt is a Kubernetes project that enables clusters to support lifecycle and networking
0:52 features for VMs and containerized workloads simultaneously.
0:57 With KubeVirt, VMs in CN2 environments have virtual interfaces that let them perform user-space
1:02 networking.
1:03 Like containers, your VMs can interface with the DPDK vRouter for high-throughput applications.
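A VM workload managed by KubeVirt is declared like any other Kubernetes resource. The sketch below is a minimal VirtualMachine manifest; the image, namespace, and resource values are illustrative, and CN2-specific interface settings (such as DPDK/vhost-user options for the vRouter) vary by release and are omitted here:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: tenant-a
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}       # pod-network interface handled by the CNI (CN2)
        resources:
          requests:
            memory: 1Gi
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```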
1:09 Virtual Network Router (VNR) topologies allow your VMs and pods to communicate across
1:16 virtual networks.
1:18 VNR is a construct that lets virtual networks import routing information from other virtual
1:23 networks across namespaces.
1:26 With VNR, your VMs and pods can access resources from other workloads while still benefiting
1:31 from network isolation and security policies.
1:34 In the Mesh network topology, the VNR enables all of the pods in connected Namespaces to
1:39 communicate with one another.
1:41 In the Hub-spoke network topology, VirtualNetworks connect to a Hub type VNR or a Spoke type
1:47 VNR.
1:48 VirtualNetworks connected to Spoke type VNRs communicate with VirtualNetworks connected
1:53 to Hub type VNRs and vice-versa.
1:56 VirtualNetworks connected to Spoke type VNRs cannot communicate with other VirtualNetworks
2:00 attached to Spoke type VNRs.
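A Hub type VNR in the topology just described can be sketched as a custom resource along these lines. This is an assumption-laden illustration: the apiVersion, field names, and labels follow the general VirtualNetworkRouter pattern but should be verified against the documentation for your CN2 release.

```yaml
# Illustrative hub-side VNR; schema details must be checked
# against your CN2 release documentation.
apiVersion: core.contrail.juniper.net/v1alpha1
kind: VirtualNetworkRouter
metadata:
  name: hub-vnr
  namespace: shared-services
  labels:
    vnr: hub
spec:
  type: hub
  virtualNetworkSelector:        # VirtualNetworks this VNR connects
    matchLabels:
      vn: shared
  import:
    virtualNetworkRouters:       # accept routes exported by spoke VNRs
      - virtualNetworkRouterSelector:
          matchLabels:
            vnr: spoke
```

A matching Spoke type VNR would use `type: spoke` and import from VNRs labeled `vnr: hub`, giving the asymmetric reachability the transcript describes: spokes reach the hub, but not each other.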
2:03 CN2 enables users to embed Grafana dashboards into WebUI dashlets.
2:08 Dashlets are widgets that display visual data about different aspects of your cluster or
2:14 multicluster, such as cluster health and traffic flows.
2:18 Select the Grafana window you would like to display in the CN2 WebUI, copy its embed link
2:23 into a dashlet, and the Grafana window displays in CN2 for an overview of cluster analytics at a glance.
2:30 In a multicluster deployment, a single, centralized cluster provides CNI services like pod and
2:35 service networking, policy enforcement, and multi-interface networking to other distributed
2:41 clusters.
2:42 The CN2 config and control plane are installed on the centralized cluster.
2:46 The CN2 data plane is installed on the distributed clusters.
2:50 The centralized cluster manages API requests for CN2 resources like Virtual Network, Namespace,
2:56 and Network Policy.
2:58 The distributed clusters receive routing and interface configuration information.
3:03 Instead of requiring you to manage hundreds of small clusters individually, this type of deployment
3:07 offers a scalable solution where centralized clusters provide SDN overlay and CNI services
3:13 to many distributed clusters.
3:15 Since config and control components are centralized, you can attach and detach distributed clusters
3:20 according to operational needs.