Containers have whipped up a lot of excitement in recent months. And once containers assert their position in the data center, they will open the door for the arrival of this new thing people are referring to as microservices.
Cool, right? But what does it all mean?
Containers first became popular among platform-as-a-service providers, and Google was the first to use containers at scale, said Chris Crane, vice president of product at Sysdig, which provides monitoring and visibility tools for environments using containers. Because containers are lightweight, he noted, Google was able to eliminate entire data centers due to the efficiencies they created.
Now enterprises, communications companies, and other tech firms are getting interested in containers. What’s driving their interest, Crane explained, is typically the portability of containers, which enables continuous delivery and expedited time to market for DevOps efforts.
“Containers are extremely portable and very efficient to construct,” said Ben Bernstein, CEO of Twistlock, which provides vulnerability management, access control, and runtime protection for containers. “They take the process of continuous delivery and continuous integration to another degree that was previously impossible. That is the single most important reason that containers have become the darling of developers.”
More than three out of four IT decision makers are interested in running stateful applications such as databases within containers, according to a Robin Systems survey. The key motivations driving that interest, the survey suggests, are workload consolidation and lower performance overhead as compared to traditional virtualization.
The Key Container & Orchestration Players
As you know if you’ve been following the container space or have read any of the articles on this topic, Docker is the poster child of the container movement. Docker’s core product is its container runtime. Other entities with container solutions in this realm include LXC, which came out of the Linux community and stands for Linux Containers, and CoreOS, with its rkt solution, which is also Linux-based.
But the real money is in the software that runs on top and manages all those containers: orchestration. Docker offers this kind of software in the form of a solution called Swarm. Kubernetes, which was created by Google, is another container orchestration product. And Mesos is the third big player in this arena.
Mesos is the granddaddy of the group. Developed at UC Berkeley, it was originally devised as a distributed compute platform that could present a shared pool of resources/servers as a single abstraction. But, as it turns out, Mesos is also a pretty good framework on which to run containers, and it’s scalable and stable, Crane said, though it targets more general use cases. Kubernetes, meanwhile, was designed specifically for containers and microservices. And although Docker Swarm is probably the No. 3 player in the container orchestration space, Crane said it’s backed by Docker, by far the largest container company, and is designed to deliver the same management experience as Docker containers, which is noteworthy since the company looms so large in the space.
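To make the orchestration idea concrete, here is a minimal sketch of a Kubernetes Deployment manifest. (The names and image here are illustrative placeholders, not anything drawn from the vendors quoted in this article.) It asks the orchestrator to keep three copies of a containerized app running and to replace any container that dies:

```yaml
# Hypothetical example: a Kubernetes Deployment that keeps three
# replicas of a containerized web app running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # placeholder name
spec:
  replicas: 3          # the orchestrator maintains this count,
                       # spinning up a new container if one dies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # placeholder container image
        ports:
        - containerPort: 80
```

Swarm and Mesos frameworks express the same intent with their own formats, but the pattern is identical: declare the desired state, and let the orchestrator converge on it.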
Telecom Vendors Like Containers Too
Ericsson, a giant in the telco equipment arena, a little more than a year ago took a majority interest in a company called Apcera. The nearly four-year-old company offers a platform-as-a-service solution that lets users build apps, can host those apps, and ensures security and compliance. CEO and founder Derek Collison said some people consider Apcera to fall into the PaaS product category, others call it a cloud management platform, and others see its place in the emerging container management arena. But Collison said he thinks all three product categories will merge.
Here’s what Apcera brings to the table. It offers a solution that determines what a workload is allowed to use, what is allowed to be used in a workload, and where a workload is allowed to run. It also delivers a truly programmable network that heals and configures itself on the fly in less than 10 milliseconds, Collison added.
When Apcera started out, Collison said, no one cared much about security; they just wanted speed. But now, he added, markets are starting to realize security should come first and not be bolted on later.
As for Metaswitch, it is sponsoring an open source project called Project Calico.
Chris Liljenstople, evangelist for Project Calico, explained that most approaches to cloud networking carry a lot of baggage from the enterprise days when multi-layer networking was the norm. Even OpenStack and virtual machine environments are based on multilayer networks containing Layer 2 gear and firewalls. That may work on a small scale, Liljenstople said, but now we have very fungible infrastructure, so servers in the container world might last only seconds at a time.
Project Calico is a fabric that interconnects IP endpoints and then puts policies on them. It does not create overlays, but rather delivers high-level policy abstraction so application developers can enable specific things to talk to other things with specified protocols. When a workload shows up somewhere, the Calico software on that computer tells the rest of the fabric where the app is and gets the policies related to that app. It builds a set of security rules and installs them locally right in front of that node. Think of it as a giant firewall of sorts.
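To make that policy abstraction concrete, here is a sketch of what a Calico network policy can look like. (The policy name, labels, and port are hypothetical, chosen only for illustration.) It allows workloads labeled `frontend` to talk to workloads labeled `backend` over a single TCP port, which is exactly the kind of "this thing may talk to that thing with this protocol" rule Calico installs as local security rules in front of each node:

```yaml
# Hypothetical example of a Calico NetworkPolicy resource:
# only frontend workloads may reach backend workloads, on TCP 5432.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # placeholder name
  namespace: default
spec:
  selector: app == 'backend'        # the workloads this policy protects
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: app == 'frontend'   # who is allowed in
    destination:
      ports:
      - 5432                        # illustrative database port
```

Because the rules are expressed in terms of workload labels rather than IP addresses or VLANs, they follow the workload wherever the orchestrator places it.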
Adam Rothschild, senior vice president of network at Packet, a company that offers a platform that brings public cloud-style automation to bare metal (and a user of Project Calico), said another benefit of Project Calico is mobility of containers or mobility of IP addresses. That means if a user wants to move a container from a New York data center to a Dallas data center, Packet can do that, with Calico as the orchestrator.
“No one makes money on networking in the cloud,” Rothschild said. They make money on the applications, he said, so if Packet and Project Calico can simplify the networking and make it easy to troubleshoot, businesses can focus on running apps at scale.
Keeping Watch & Solving Problems
Crane of Sysdig added that orchestration is the control layer, which handles the turning on and off of containers, assigning apps to containers, ensuring that when containers do die that new ones are spun up to keep apps running. Monitoring, which is what Sysdig does, watches the containers and the orchestration itself, he said. What Sysdig is building toward, he added, is feeding its visibility back into orchestration tools so they can make automated decisions based on that. Sysdig is already working with all three of the big container entities on that, he said.
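One simple, widely used form of this feedback loop already exists inside the orchestrators themselves: health probes, where the control layer polls a container and restarts it when the check fails. A hypothetical Kubernetes pod spec fragment (the endpoint, port, and timings are illustrative assumptions) looks like this:

```yaml
# Hypothetical pod spec fragment: the orchestrator probes the
# container's health endpoint and restarts it on repeated failure.
containers:
- name: api
  image: example/api:1.0       # placeholder image
  livenessProbe:
    httpGet:
      path: /healthz           # assumed health-check endpoint
      port: 8080
    initialDelaySeconds: 5     # grace period after startup
    periodSeconds: 10          # probe interval
```

What Sysdig describes goes a step further: feeding richer monitoring data back into the orchestrator so its decisions can be based on more than a single pass/fail check.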
It sounds as if OpsClarity is thinking along these lines. The young company, whose team is made up of data scientists and people with experience in large-scale infrastructure at such well-known companies as eBay, Facebook, Google, and Yahoo, is talking about how it applies data science to IT operations, which is important in light of the rise of containers and interest in microservices.
OpsClarity can discover network resources and collect a wealth of metrics on apps, containers, servers, and other infrastructure in a network architecture. It then analyzes in real time the health of these resources and systems, and of various logical groups based on things like geography or services. When an error happens that needs immediate attention and the developer gets a flood of alerts, OpsClarity provides a guided flow on where to focus first. Sachin Agarwal, vice president of marketing, and Sid Choudhury, vice president of product management, at OpsClarity, describe it as sort of a Google Maps-type construct, guiding users step by step on how to correct the problem.
The whole idea of containers and microservices is about erecting more flexible architectures and creating smaller services that can be changed independently at a much faster pace and with more certainty that things will not break, Choudhury noted. With virtual machines, by comparison, every piece of code changes, so the chances of things going wrong are much higher, he said. Containers do come with their own challenges, however, he added, and a key one is that they are very, very short lived. So when things go wrong, network operators need the ability to quickly understand what the problem is. A solution like the one OpsClarity launched in December, which collects data from every layer, correlates it, flags what’s not right, and guides users on how to fix it, said Choudhury, is a must-have.
Edited by Maurice Nagle