Understanding the 3rd Platform:
The very idea of the 3rd Platform is disruptive to the majority of IT shops because it represents a radical paradigm shift: the transition from a device-centric to a solution-oriented approach.
While the 1st platform was designed around mainframes and the 2nd platform was designed around client-server, the 3rd platform is designed around the cloud. In other words, applications are designed and built to live in the cloud. We can effectively think of this as pushing many of the core infrastructure concepts (like availability and scale) into the architecture of the application itself, with containers being a large part of this; they can be thought of as lightweight runtimes for these applications. With proper application architecture and a rock-solid foundation, either on-premises or in the cloud, applications can scale on demand, new versions can be pushed quickly, and components can be rebuilt and replaced easily, among many other benefits discussed below.
Does this mean you should immediately move all of your applications to this model? Not so fast! While 3rd Platform architectures are exciting and extremely useful, they will not be the answer for everyone. A thorough understanding of the benefits and, more importantly, the complexities of this new world is extraordinarily important. VMware's Cloud-Native Apps group is dedicated to ensuring our customers are well informed in this space and can adopt this technology confidently and securely when the time is right.
New Business Imperative:
Competitive businesses are delivering new applications to market in increasingly faster cycles, ushering in technologies like Linux containers and microservices. Next-generation applications are being built on infrastructure assumed to be dynamic and elastic. To keep our customers agile, our Cloud-Native Apps group builds infrastructure technologies to open, common standards that preserve security, performance, and ease-of-use, from developer desktop to the production stack.
Moving Faster Requires Design and Culture Changes:
To move faster, businesses implement a variety of cultural, design, and engineering changes. At VMware, we are striving to make the Developer a first-class citizen of the Data Center and help align them with IT's journey to achieve streamlined App and Infrastructure Delivery Automation.
History of Platforms:
1st Platform systems were based around mainframes and traditional servers without virtualization. Consolidation was a serious issue and it was normal to run one application per physical server.
2nd Platform architectures have been the standard mode for quite a while. This is the traditional Client/Server/Database model with which you are likely very familiar, leveraging the virtualization of x86 hardware to increase consolidation ratios, add high availability, and provide extremely flexible and powerful management of workloads.
3rd Platform moves up the stack, standardizing on Linux Operating Systems primarily, which allows developers to focus on the application exclusively. Portability, scalability and highly dynamic environments are valued highly in this space. We will focus on this for the rest of the module.
The recent rise of containerization has directly contributed to the uptake of microservices, as it is now very easy to quickly spin up a new, lightweight runtime environment for an application.
The ability to provide single-purpose components with clean APIs between them is an essential design requirement for microservices architecture. At their core, microservices have two main characteristics: they are stateless and distributed. To achieve this, let's take a closer look at the Twelve-Factor App methodology to help explain microservices architecture as a whole.
The Twelve-Factor App:
To allow the developer maximum flexibility in their choice of programming languages and back-end services, Software-as-a-Service web applications should be designed with the following characteristics:
Use of a declarative format to minimize or eliminate side effects by describing what the program should accomplish, rather than how to go about it. At a high level, this is the difference between a section of code and a configuration file.
Clean contract with the underlying operating system, which enables portability to run and execute on any infrastructure. APIs are commonly used to achieve this functionality.
Ability to be deployed onto modern cloud platforms, removing dependencies on the underlying hardware and platform.
Keep development, staging, and production as similar as possible, minimizing the deviation between these environments to enable continuous development.
Ability to scale up (and down) as the application requires without needing to change the tool sets, architecture or development practices.
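The declarative-versus-imperative distinction above can be made concrete with a short sketch. This is a minimal, hypothetical illustration (the names `desired_state` and `reconcile` are invented for this example, not from any particular tool): imperative code spells out the steps, while a declarative format states the desired end result and lets a reconciler work out the steps.

```python
# Imperative: spell out *how* to reach the goal, step by step.
def start_search_imperative(running):
    if "search" not in running:
        running.append("search")
    return running

# Declarative: state *what* the end result should be; a reconciler
# compares desired state to actual state and closes the gap.
desired_state = {"services": ["player", "search"]}

def reconcile(running, desired):
    """Start any service declared in the desired state but not yet running."""
    for svc in desired["services"]:
        if svc not in running:
            running.append(svc)
    return running

print(reconcile(["player"], desired_state))  # ['player', 'search']
```

The declarative form is what makes side effects easy to reason about: re-running `reconcile` against a system already in the desired state changes nothing.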
At a high level, the 12 Factors that are used to achieve these characteristics are:
- Codebase - One codebase tracked in revision control, many deploys
- Dependencies - Explicitly declare and isolate dependencies
- Config - Store config in the environment
- Backing Services - Treat backing services as attached resources
- Build, release, run - Strictly separate build and run stages
- Processes - Execute the app as one or more stateless processes
- Port Binding - Export services via port binding
- Concurrency - Scale out via the process model
- Disposability - Maximize robustness with fast startup and graceful shutdown
- Dev/Prod Parity - Keep development, staging, and production as similar as possible
- Logs - Treat logs as event streams
- Admin Processes - Run admin/management tasks as one-off processes
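To make one of these factors tangible, here is a minimal sketch of the Config factor: configuration lives in the environment, not in the codebase, so the same build can run unchanged in development, staging, and production. The variable name `APP_DATABASE_URL` and the `get_config` helper are illustrative assumptions, not part of any specific framework.

```python
import os

# Illustrative only: seed a value so the example is self-contained. In a real
# deployment the platform (not the code) would set this variable.
os.environ.setdefault("APP_DATABASE_URL", "postgres://localhost:5432/app")

def get_config(name, default=None):
    """Fetch a config value from the environment; fail fast if it is missing."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"Missing required config: {name}")
    return value

db_url = get_config("APP_DATABASE_URL")
print(db_url)
```

Because the code never hard-codes the value, promoting a build between environments is a matter of changing the environment, not the codebase.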
For additional detailed information on these factors, check out 12factor.net.
Benefits of Microservices:
Microservice architecture has benefits and challenges. If the development and operating models in the company do not change, or only partially change, things could get muddled very quickly. Decomposing an existing app into hundreds of independent services requires some choreography and a well thought-out plan. So why are teams considering this move? Because there are considerable benefits!
With a properly architected microservice-based application, the individual services will function similarly to a bulkhead in a ship. Individual components can fail, but this does not mean the ship will sink. The following tenet is held closely by many development teams - "Fail fast, fail often." The quicker a team is able to identify a malfunctioning module, the faster they can repair it and return to full operation.
Consider an online music player application - as a user, I might only care about playing artists in my library. The loss of the search functionality may not bother me at all. In the event that the Search service goes down, it would be nice if the rest of the application stays functional. The dev team is then able to fix the misbehaving feature independently of the rest of the application.
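The bulkhead behavior described above can be sketched in a few lines. This is a hypothetical illustration (the service and function names are invented): the Search service is down, but the player catches the failure at the service boundary and degrades gracefully instead of sinking the whole ship.

```python
def search_service(query):
    # Simulate an outage of the Search microservice.
    raise ConnectionError("search service unavailable")

def play_from_library(library, artist):
    # Playback is an independent service: unaffected by the Search outage.
    return f"Playing {artist}" if artist in library else "Artist not found"

def search(query):
    """Call Search, but degrade gracefully instead of failing the whole app."""
    try:
        return search_service(query)
    except ConnectionError:
        return "Search is temporarily unavailable"

library = ["Miles Davis", "Radiohead"]
print(play_from_library(library, "Radiohead"))  # Playing Radiohead
print(search("jazz"))  # Search is temporarily unavailable
```

The key design choice is where the `try`/`except` lives: at the boundary between services, so a failure in one component is contained rather than propagated.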
Defining "Service Boundaries" is important when architecting a microservice-based application!
If a particular service is causing latency, and the application is designed to take full advantage of microservices, it is almost trivial to scale up additional instances of that specific service. This is a huge improvement over monolithic applications, where one poorly-performing component can slow down the entire application. Once again, this scalability must be built into the application's DNA to function properly.
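A toy sketch shows why statelessness makes this kind of scaling almost trivial. The `ServicePool` class and the `search-N` instance names are invented for illustration: because the workers hold no state, scaling up is simply adding identical instances to a pool and spreading requests across them.

```python
from itertools import cycle

class ServicePool:
    """Round-robin pool of identical, stateless service instances."""

    def __init__(self, instances):
        self.instances = list(instances)
        self._rr = cycle(self.instances)

    def scale_to(self, n):
        # Statelessness means a new instance is interchangeable with the
        # existing ones; no data migration or warm-up is needed.
        while len(self.instances) < n:
            self.instances.append(f"search-{len(self.instances)}")
        self._rr = cycle(self.instances)

    def handle(self, request):
        worker = next(self._rr)
        return f"{worker} handled {request}"

pool = ServicePool(["search-0"])
pool.scale_to(3)  # latency spike: add two more instances of just this service
print(pool.handle("query-1"))  # search-0 handled query-1
```

The same operation on a monolith would mean replicating the entire application, which is exactly the inefficiency the microservices model avoids.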
Once again, microservices allow components to be upgraded, and even swapped out for entirely new, heterogeneous pieces of technology, without bringing down the entire application. Netflix pushes updates to production code constantly in exactly this manner.
Misbehaving code can be isolated and rolled back immediately. Upgrades can be pushed out, tested, and either rolled back or pushed out further if they have been successful.
The underlying premise here is that the application should align to the business drivers, not to the fragmentation of the teams. Microservices allow for the creation of right-sized, more flexible teams that can more easily align to the business drivers behind the application. Hence, ideas like the "two pizza rule," in which teams should be limited to the number of people that can finish two pizzas in a sitting (conventional wisdom says this is eight or less...though my personal research has proved two pizzas do not feed more than four people.)
Microservices can be accompanied by additional operations overhead compared to a monolithic application provisioned to an application server cluster. When each service is separately built out, each could potentially require clustering for failover and high availability. When you add in load balancing, logging, and messaging layers between these services, the real estate starts to become sizable even in comparison to a large off-the-shelf application. Microservices also require a considerable amount of DevOps and Release Automation skills. The responsibility of ownership does not end when the code is released into production; the Developer essentially owns the application until it is retired. The natural evolution of the code and the collaborative style in which it is developed can lend itself to challenges when making a major change to the components of the application. This can be partially solved with backwards compatibility, but it is not the panacea that some in the industry may claim.
Microservices are suited only to certain use cases, and even then they open up a world of new possibilities that come with new challenges and operational hurdles. How do we handle stateful services? What about orchestration? What is the best way to store data in this model? How do we guarantee a data persistence model? Precisely how do I scale an application properly? What about "simple" things like DNS and content management? Some of these questions do not have definitive solutions yet. A distributed system also introduces new levels of complexity in areas that may not previously have been a large concern, such as network latency, fault tolerance, versioning, and unpredictable loads. The operational cost of application developers needing to consider these potential issues in new scenarios can be high and should be expected throughout the development process.
When considering the adoption of microservices, ensure that the use case is sound, the team is aware of the potential challenges, and, above all, that the benefits of this model outweigh the costs.