The tech world is enamored of containers, a new technology exemplified by fast-rising startup Docker that packages up applications in a resource-efficient and portable way. The advantage for businesses is that containers can run applications on less hardware and those applications can pull data from many sources. But, perhaps most important, those applications can be moved from one set of infrastructure to another with minimal muss and fuss.
Sets, or clusters, of containers can be scheduled and run on a given cloud with tools like the newly available Google Container Engine, the Google-backed Kubernetes project, or the Amazon EC2 Container Service. But now developers have their eye on the next frontier: deploying container clusters across different clouds, something IBM says it has accomplished. A team at IBM Research, working with Moustafa AbdelBaky, a doctoral candidate at Rutgers University’s Discovery Informatics Institute, used open-source technology called CometCloud as the basis of the work, which AbdelBaky calls C-Ports. According to an IBM blog post about the work, C-Ports has proven itself in at least one situation.
Being able to run container clusters across clouds and geographies can address several enterprise concerns. For example, it can ensure that a particular application and its associated data stay within a set region, which is important given data sovereignty rules. Or it can spread work across regions and clouds so that if there’s a data center issue in one area, life can go on. Or if a company runs out of capacity in one place, it can “burst” that job to another.
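To make those three policies concrete, here is a minimal sketch of constraint-based placement. This is not the C-Ports or CometCloud API; the function and field names are invented for illustration, and a real scheduler would weigh far more than region and free capacity.

```python
# Hypothetical sketch of cross-cloud cluster placement, illustrating the
# three policies above: region pinning (data sovereignty), routing around
# an outage, and "bursting" when the preferred cloud is out of capacity.
# Names are invented for illustration; this is not the C-Ports API.

def place_cluster(clouds, required_region=None, exclude=(), preferred=None):
    """Pick a cloud for a container cluster, or None if nothing fits.

    clouds: list of dicts like {"name": ..., "region": ..., "free_slots": ...}
    required_region: cluster and its data must stay in this region
    exclude: clouds currently unavailable (e.g. a data center issue)
    preferred: home cloud; burst elsewhere only when it lacks capacity
    """
    candidates = [
        c for c in clouds
        if c["name"] not in exclude
        and (required_region is None or c["region"] == required_region)
        and c["free_slots"] > 0
    ]
    if not candidates:
        return None
    # Stay on the home cloud while it has capacity; otherwise burst
    # to whichever eligible cloud has the most headroom.
    for c in candidates:
        if c["name"] == preferred:
            return c["name"]
    return max(candidates, key=lambda c: c["free_slots"])["name"]

clouds = [
    {"name": "on-prem", "region": "eu", "free_slots": 0},
    {"name": "cloud-a", "region": "eu", "free_slots": 8},
    {"name": "cloud-b", "region": "us", "free_slots": 20},
]

# An EU-only workload bursts from the full on-prem cluster to cloud-a.
print(place_cluster(clouds, required_region="eu", preferred="on-prem"))
# With cloud-a down, an unconstrained job lands on cloud-b.
print(place_cluster(clouds, exclude={"cloud-a"}))
```

In the sketch, sovereignty is a hard filter while the preferred cloud is only a soft preference, which is what lets a job burst out when its home runs dry.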