By Kishore Jethanandani
Future networks are going cloud-native, with wide-ranging ramifications for the speed of application development. Developers will be freed to create new solutions unencumbered by hardware constraints or even the choice of platforms.
Software-defined open service provider networks are following in the footsteps of datacenters and storage devices — they are becoming pools of network resources that can be used interchangeably by multiple clients in several regions.
In characteristically cloud-like fashion, these networks will serve a variable flow of services, shifting in volume and type and delivered on demand rather than through fixed on-premise IT deployments. In this scenario, service flows can best track demand via containers that are added or removed as needed.
The heterogeneity of resources, operating systems, equipment vendors and services on telecom service provider networks is expanding as the epicenter of service delivery sprawls toward the edge to support the Internet of Things, big data analytics, and mobile and wearable devices.
The development of applications with containers dovetails seamlessly into operations and deployment, enabled by a growing range of scheduling, management and orchestration tools.
Containers are far more portable than virtual machines (VMs) because they abstract away not only the hardware but also the operating system. Stateless containers go a step further than stateful ones and decouple configuration and operating data from the container itself. That state data is stored in a database and retrieved when services are generated.
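The stateless pattern can be sketched in a few lines. This is a toy illustration, not any vendor's implementation: an in-memory dict stands in for the external state store, and all service names and fields are invented.

```python
# Sketch of the stateless-container pattern: the container holds no
# configuration or operating state of its own; it fetches both from an
# external store at service-generation time. STATE_STORE is an in-memory
# dict standing in for a real database; all names are illustrative.

STATE_STORE = {
    "vfirewall": {"config": {"rules": ["allow 443", "deny *"]}, "replicas": 2},
    "vrouter":   {"config": {"asn": 64512}, "replicas": 3},
}

def generate_service(name: str) -> dict:
    """Hydrate a fresh, stateless container instance with externally
    stored configuration when the service is instantiated."""
    record = STATE_STORE[name]          # state lives outside the container
    return {"service": name, **record["config"], "replicas": record["replicas"]}

print(generate_service("vfirewall"))
```

Because the container carries no state, any replica hydrated from the same record behaves identically, which is what makes adding and subtracting containers on demand safe.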
Service generation with containers in the telecom world
Containers, workloads, and operations
DevOps is a sequence of business processes that starts with application development by a team of developers, followed by testing for bugs. Applications then go through staging, where they are tested for the desired operating performance, and end in production. Operations has historically been a valley of death for developers, where many applications foundered because they could not work in that environment. Containers seek to smooth the transition from development to production with continuous delivery methods and tools such as Jenkins, Chef and Ansible.
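The stage gates described above can be modeled in a few lines. This is a toy sketch of the promotion flow, not the behavior of Jenkins or any real CI tool; the gate results are invented inputs.

```python
# Toy continuous-delivery pipeline mirroring the DevOps stages above:
# a build is promoted test -> staging -> production, and a failure at
# any gate halts promotion. Purely illustrative stage logic.

def run_pipeline(build: str, gates: dict) -> str:
    """Promote a build through each gate; stop at the first failure."""
    for stage in ("test", "staging", "production"):
        if not gates.get(stage, False):
            return f"{build} halted at {stage}"
    return f"{build} deployed to production"

print(run_pipeline("app-v1.3", {"test": True, "staging": True, "production": True}))
print(run_pipeline("app-v1.4", {"test": True, "staging": False}))
```

The point of continuous delivery is that every build traverses the same automated gates, so the "valley of death" between development and operations is crossed the same way every time.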
Container images enable a distributed team of developers to write code and run it in any environment. They automate the tedium of manually ensuring that code works in any IT operating environment, together with its dependencies, such as linked databases and firewalls, and the attendant configuration, from one group of developers using, say, Mac to another using Windows.
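The portability argument can be made concrete with a small sketch. The "image" here is an invented dict, not a real registry artifact, and the application, dependency and port names are hypothetical; the point is only that code, dependencies and configuration travel as one unit.

```python
# Sketch of what a container image bundles so code behaves the same on
# any host: the application, its dependencies (e.g. a database client)
# and its configuration travel together. Invented format, for
# illustration only.

IMAGE = {
    "app": "billing-api",
    "dependencies": ["python-runtime", "postgres-client"],
    "config": {"db_host": "db.internal", "fw_allow_ports": [5432, 8080]},
}

def launch(image: dict, host_os: str) -> dict:
    """Launching from the image yields the same environment on any host;
    host_os deliberately never influences the result."""
    return {"app": image["app"],
            "deps": sorted(image["dependencies"]),
            "config": image["config"]}

# Identical behavior whether the developer is on a Mac or on Windows:
assert launch(IMAGE, "macOS") == launch(IMAGE, "Windows")
```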
Containers on telecom networks
Deployment of code into the production environment of a telecom service provider is an exercise in scaling while also ensuring security and quality of service. It includes the processes of clustering containers and joining them with resources and functions on the network at the desired nodes to generate a service.
New-age tools like Mesos achieve scale by abstracting all network resources and functions so they can be invoked through a single operating system spanning a datacenter. Verizon is one carrier using Mesos for its hyperscale operations. Verizon Lab's Damascene Joachimpillai, director of technology, explained the rationale for containers and for management and orchestration platforms such as Mesos, as opposed to virtual machines.
“Most applications — IoT or otherwise — have multiple cooperating and coordinating tasks. Each of these tasks has specific resource requirements,” Joachimpillai said. “One can bundle them into a single monolithic application and provide management using a virtual machine, or deploy them independently. When one deploys these tasks as microservices, one needs a scalable resource scheduler… If they were run on bare metal, then redundancy and resiliency of the application must be considered — and one needs to provide an application management entity that monitors the health. Most of these needs and constraints are removed when using containers and an application orchestration system like Mesos.”
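The "scalable resource scheduler" Joachimpillai mentions can be illustrated with a toy first-fit loop: each microservice task declares its resource needs, and the scheduler places it on a node with spare capacity. Real Mesos uses a two-level resource-offer model, so this is only a sketch of the matching problem, with invented node and task names.

```python
# Toy first-fit scheduler: each microservice task declares its resource
# requirements, and the scheduler places it on the first node with
# enough spare CPU and memory. Illustrative only; not the Mesos
# two-level offer mechanism.

nodes = {"node-a": {"cpu": 4.0, "mem": 8.0},
         "node-b": {"cpu": 2.0, "mem": 4.0}}

tasks = [("ingest",    {"cpu": 2.0, "mem": 3.0}),
         ("analytics", {"cpu": 2.0, "mem": 4.0}),
         ("api",       {"cpu": 1.0, "mem": 2.0})]

def schedule(tasks, nodes):
    placements = {}
    for name, need in tasks:
        for node, free in nodes.items():
            if free["cpu"] >= need["cpu"] and free["mem"] >= need["mem"]:
                free["cpu"] -= need["cpu"]   # reserve capacity on the node
                free["mem"] -= need["mem"]
                placements[name] = node
                break
        else:
            placements[name] = None          # no node could host the task
    return placements

placements = schedule(tasks, nodes)
print(placements)
```

What the orchestration system adds beyond this loop is exactly what the quote lists: monitoring task health, restarting failed tasks, and rescheduling them when a node disappears, so the application no longer needs its own management entity.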
The production environment of a network does not run only containers, nor will it necessarily do so in the future, so means must be found to interlink them with options such as virtual machines and physical resources, regardless of the IT environment.
“When you get into a production environment where you have workloads on physical or virtual assets or on the cloud, it is a whole new world… Instead of using multiple platforms for a diversity of workloads, we have a single platform for all of them,” Hussein Khazaal, head of marketing at Nuage Networks, said.
In a network labyrinth of this nature, whose sprawl grows with containers, security threats lurk and customer experience can suffer as the likelihood of failure rises.
“We automate monitoring and responses to security events or failures through our Virtualized Security Services (VSS) features and integrations with security threat analytics solutions from our partners,” Hussein added. “VSAP [Virtualized Services Assurance Platform] can correlate failures in the underlay with impacted workloads in the overlay, so that operators can quickly identify faults and make corrections with minimal effort.”
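The underlay-to-overlay correlation Khazaal describes amounts to a mapping from physical links to the virtual workloads whose tunnels traverse them. The sketch below is not Nuage's VSAP implementation; the topology and naming are invented to show the shape of the lookup.

```python
# Sketch of underlay -> overlay fault correlation: map each physical
# (underlay) link to the overlay workloads whose tunnels cross it, so a
# link failure can be translated into a list of impacted services.
# Topology and names are invented for illustration.

UNDERLAY_PATHS = {
    "tenant-a/web": ["leaf1-spine1", "spine1-leaf2"],
    "tenant-a/db":  ["leaf1-spine2"],
    "tenant-b/vpn": ["leaf3-spine1", "spine1-leaf2"],
}

def impacted_workloads(failed_link: str) -> list[str]:
    """Return every overlay workload whose path crosses the failed link."""
    return [wl for wl, path in UNDERLAY_PATHS.items() if failed_link in path]

print(impacted_workloads("spine1-leaf2"))
```

Inverting the question this way is what lets operators start from a physical fault and immediately see which customer-facing services need attention.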
The emerging software-driven network gains agility and flexibility by threading together swarms of containers, virtualized and physical network elements, and abstracted resources and functions, held together by data and intelligence for visibility, automated responses, and monitoring tools for failure prevention, optimization and quality assurance. Containers help by bundling the interrelated components of a larger application and making them reusable for ever-changing needs.
A version of this article was previously published by Light Reading’s Telco Transformation