- Architecting Cloud Native Applications
- Kamal Arora Erik Farr John Gilbert Piyum Zonooz
Components
With cloud-native, we wholeheartedly go out of our way to create bounded, isolated components, so it is important that we don't lose sight of this when we actually deploy these components. We have already discussed strategies for decomposing the system into components at the functional level. However, the functional components do not necessarily translate one for one into deployment units. Deployment units are natural bulkheads. Each has its own resources, each will contain its own failures, and each is independently deployable and scalable. In Chapter 3, Foundation Patterns, we will discuss the Trilateral API pattern. This pattern puts front and center the fact that components are accessed through more than just a synchronous interface, such as REST. They may also have inbound and outbound asynchronous interfaces based on streams. Some components will have all three interfaces, and some will have just one or two in various permutations. Each of these will typically be its own deployment unit, and sometimes multiple deployment units.
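To make this more concrete, here is a minimal TypeScript sketch of a single component exposing all three interfaces of the Trilateral API pattern. The event shape, the handler signatures, and the `publish` helper are illustrative assumptions rather than the book's reference implementation; in practice each handler would typically be packaged as its own deployment unit.

```ts
// Illustrative domain event shape for this hypothetical component.
interface OrderEvent { type: string; orderId: string; payload: unknown }

// 1. Synchronous interface (e.g. REST via an API gateway).
export const restHandler = async (request: { body: string }) => {
  const order = JSON.parse(request.body);
  await publish({ type: 'order-submitted', orderId: order.id, payload: order });
  return { statusCode: 201, body: JSON.stringify({ id: order.id }) };
};

// 2. Inbound asynchronous interface -- consumes events from an upstream stream.
export const streamConsumer = async (batch: { records: OrderEvent[] }) => {
  for (const event of batch.records) {
    // apply each event to this component's own materialized view
    console.log('consumed', event.type, event.orderId);
  }
};

// 3. Outbound asynchronous interface -- publishes this component's domain events.
async function publish(event: OrderEvent): Promise<void> {
  // in a real component this would write to a stream such as Kinesis or Kafka
  console.log('published', JSON.stringify(event));
}
```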
We also have options to consider regarding containers versus functions. When choosing containers, we have to keep in mind that they are just the tip of the iceberg. Containers themselves are a bulkhead, but they also have to run in a cluster alongside other containers; the cluster must be managed by a scheduler, some sort of load balancer is required, and so on. All of these pieces have the potential for failure. I always recommend using the native scheduler of your cloud provider, such as ECS on AWS or GKE on GCP. The native solutions are simply better integrated with their cloud provider and are thus lower-risk alternatives.
We also have to be wary of monolithic clusters. Anytime it takes extra elbow grease to provision and manage a set of resources, we will have a tendency to share and overburden those resources. This holds true for container clusters. I frequently see them become their own monolith, in that all containers get deployed to the same cluster, at which point a catastrophic failure of that one cluster would cripple the entire region. Instead, we need to treat the container cluster as a bulkhead as well and strategically allocate components and containers to focused clusters.
As an alternative to containers, we have functions-as-a-service. Functions provide us with fine-grained bulkheads. Each function invocation is, in essence, its own bulkhead. Functions do not share resources, a failing function instance will be immediately replaced, and functions implicitly scale as volumes increase. Furthermore, while functions are the tip of their own iceberg, the cloud provider manages that iceberg. Deploying functions to multiple regions is also essentially a turnkey exercise.
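As an illustration of how turnkey this can be, the following sketch assumes the AWS CDK is used for deployment (one of several possible tools, not the one prescribed in this book) and deploys the same function-based component to two regions; the stack names, runtime, and asset path are hypothetical.

```ts
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

class ComponentStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // each invocation of this function is its own bulkhead; the provider
    // manages the rest of the iceberg (scheduling, scaling, replacement)
    new lambda.Function(this, 'OrderHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist'),
    });
  }
}

const app = new App();
// deploying to another region is little more than instantiating the stack again
new ComponentStack(app, 'Component-us-east-1', { env: { region: 'us-east-1' } });
new ComponentStack(app, 'Component-eu-west-1', { env: { region: 'eu-west-1' } });
```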
We also need to prepare proper bulkheads within components, down in the code where they interact with external resources. This is where we need to pool resources independently, apply proper timeouts, retries, and circuit breakers, and implement backpressure; a minimal sketch follows at the end of this section. We will discuss these topics in the Stream Circuit Breaker pattern in Chapter 3, Foundation Patterns. One important concept for creating responsive, resilient, and elastic components, which I have already mentioned and which we will cover thoroughly throughout the book, is asynchronous inter-component communications. We will strive to limit synchronous communication to the boundaries of the system and to intra-component communications with cloud-native resources. We will rely on event streaming for asynchronous, inter-component communications. Next, we turn to isolating these resources themselves.
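Here is that sketch: a bounded timeout, a simple retry loop, and a basic circuit breaker wrapped around a hypothetical external call. The thresholds, the `fetchOrder` stub, and the retry count are assumptions for illustration only, not the Stream Circuit Breaker pattern itself.

```ts
// Bound how long we will wait on an external resource.
const withTimeout = <T>(promise: Promise<T>, ms: number): Promise<T> =>
  Promise.race([
    promise,
    new Promise<T>((_, reject) => setTimeout(() => reject(new Error('timeout')), ms)),
  ]);

// A deliberately simple circuit breaker: after enough consecutive failures it
// fails fast for a cool-down period, shedding load upstream (backpressure).
class CircuitBreaker {
  private failures = 0;
  private lastFailure = 0;

  constructor(private threshold = 5, private resetMs = 30_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.threshold && Date.now() - this.lastFailure < this.resetMs) {
      throw new Error('circuit open');
    }
    try {
      const result = await withTimeout(fn(), 1_000);
      this.failures = 0; // success closes the circuit again
      return result;
    } catch (err) {
      this.failures += 1;
      this.lastFailure = Date.now();
      throw err;
    }
  }
}

// Hypothetical external call standing in for a real database or HTTP client.
const fetchOrder = async (id: string): Promise<unknown> => ({ id, status: 'pending' });

const breaker = new CircuitBreaker();

// Usage: wrap the external call in the breaker and retry a bounded number of times.
const getOrder = async (id: string): Promise<unknown> => {
  for (let attempt = 1; ; attempt++) {
    try {
      return await breaker.call(() => fetchOrder(id));
    } catch (err) {
      if (attempt >= 3) throw err; // give up after a few attempts rather than pile on
    }
  }
};
```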