- Architecting Cloud Native Applications
- Kamal Arora Erik Farr John Gilbert Piyum Zonooz
Resulting context
The primary benefit of this solution is that we are leveraging the asynchronous mechanisms of value-added cloud services to reliably chain together atomic operations to achieve eventual consistency in near real-time. When a write operation encounters an error, it returns control to its caller. The caller is responsible for performing a retry. When the write is successful, the cloud service is responsible for delivering the database or domain event and triggering the next command in the chain. This simple pattern is repeated as many times as necessary to chain together the desired functionality.
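The retry-and-chain contract described above can be sketched as follows. This is a minimal illustration, not a specific SDK: `makeFlakyStore`, `writeWithRetry`, and `chain` are hypothetical names, and the in-memory store stands in for a cloud database whose change events would normally be delivered by the cloud service itself.

```javascript
// Simulated atomic write that fails transiently on its first attempt.
const makeFlakyStore = () => {
  let calls = 0;
  return {
    put: async (record) => {
      calls += 1;
      if (calls === 1) throw new Error('throttled'); // transient error
      return record;
    },
  };
};

// The caller owns the retry; the write itself remains atomic.
const writeWithRetry = async (store, record, attempts = 3) => {
  for (let i = 1; i <= attempts; i += 1) {
    try {
      return await store.put(record);
    } catch (err) {
      if (i === attempts) throw err; // give up; return control to the caller
    }
  }
};

// Once the write succeeds, the (simulated) cloud service delivers the
// event and triggers the next command in the chain.
const chain = async (store, subscribers, record) => {
  const saved = await writeWithRetry(store, record);
  subscribers.forEach((next) => next(saved)); // next atomic operation
  return saved;
};
```

In a real deployment the `subscribers` step is performed by the streaming service, not by the writer; it is inlined here only to show how successful writes link atomic operations into a chain.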
It is important to note that an atomic write operation can be a batch operation. Batch operations are critical to improving throughput. Write operations will also execute in the context of a stream processor command that is processing a micro-batch of events from a stream. Each atomic write in a stream processor is its own logical unit of work; all the items in that write must succeed or fail together. However, in a stream processor, a successful write can be followed by an unsuccessful write. We will discuss these scenarios in the Stream Circuit Breaker pattern.
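The micro-batch scenario can be sketched as follows. The shape of the processor is an assumption for illustration; the point is that a failure mid-batch leaves the earlier, already-committed writes intact, which is exactly the situation the Stream Circuit Breaker pattern addresses.

```javascript
// Sketch of a stream processor working through a micro-batch of events.
// Each write is its own atomic unit of work, so a failure partway through
// does not roll back the writes that already succeeded.
const processMicroBatch = (events, write) => {
  const results = { succeeded: [], failed: [] };
  for (const event of events) {
    try {
      write(event); // atomic unit of work
      results.succeeded.push(event.id);
    } catch (err) {
      // In practice the Stream Circuit Breaker pattern decides what
      // happens next: fault event, retry, resubmission, and so on.
      results.failed.push({ id: event.id, error: err.message });
    }
  }
  return results;
};
```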
The Event-First variant is appropriate in fire-and-forget scenarios. One example is a click-stream, where we want to track user actions in a browser, but it is not important to record a click locally before producing the event. Another example is a complex-event-processing algorithm that is evaluating many events over a time window and raising a higher-order event when a condition is met. In both cases, it is assumed that downstream components will be capturing and storing these events. The benefit of just publishing the events is that it is a very low-latency, high-throughput action. The downstream components will often be implementing the CQRS pattern that we will cover in Chapter 4, Boundary Patterns.
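The click-stream case can be sketched in a few lines. The `publish` function and the event fields are illustrative assumptions standing in for a streaming service client; the essential point is that no local write happens before the event is produced.

```javascript
// Event-first sketch: publish the event and move on, with no local
// persistence first. `publish` stands in for a streaming service client.
const publishClick = (publish, action) => {
  const event = {
    type: 'clicked',
    timestamp: Date.now(),
    tags: { ...action }, // whatever context the browser captured
  };
  publish(event); // fire-and-forget
  return event;
};
```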
The Database-First variant is appropriate when we want to record the data locally before producing the event. The component may record each event individually, but we can actually have the best of both worlds, in that the component can record only the current state and rely on the CDC feature to emit the history. This allows for more traditional CRUD implementations that can easily query the database for the current state without relying on the CQRS pattern. One issue with this variant is that we typically need to include additional metadata in the database record, such as status and lastModifiedBy, so that the database event can be transformed into a domain event. The status typically translates into an event type and the other metadata provides valuable context information for the event. We see a slight variation of this variant in the example presented in the Cloud Native Databases Per Component pattern: a component can chain together the CDC of each of its multiple databases to achieve internal eventual consistency, until the last database in the internal chain produces a domain event.
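The transformation from a database event to a domain event can be sketched as follows. The status and lastModifiedBy fields follow the text; the CDC record shape (a `newImage` of the stored item) and the remaining field names are assumptions for illustration.

```javascript
// Database-first sketch: the component stores only current state plus the
// metadata needed so a CDC record can be turned into a domain event.
const toDomainEvent = (cdcRecord) => {
  const item = cdcRecord.newImage; // current state captured by CDC
  return {
    type: `order-${item.status}`,  // status translates into the event type
    timestamp: item.lastModifiedAt,
    partitionKey: item.id,
    order: item,                   // full current state as context
    raisedBy: item.lastModifiedBy, // metadata provides context for the event
  };
};
```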
This cloud-native version of the popular event-sourcing pattern is conspicuously missing an event storing mechanism. This is in part because we are leveraging value-added cloud services that are focused on their single responsibility, but ultimately because separating this concern provides more flexibility and proper bulkheads. Instead, we use different mechanisms for different stages of the event life cycle. Initially, the event stream acts as a temporal event store that persists the events for consumption by consumers, until the events expire. The data lake, as we will discuss in the Data Lake pattern, is just another consumer that provides the perpetual event store to support replay, resubmission, and analytics. Furthermore, a great deal of flexibility and scale comes from any given component capturing its own micro event store in its own cloud-native database to perform its own logic. In Chapter 4, Boundary Patterns, in the Command Query Responsibility Segregation (CQRS) pattern, we will discuss how a component may store events to provide ACID 2.0 style logic to calculate materialized views. In Chapter 5, Control Patterns, in the Event Orchestration pattern, we will discuss how a component can store events to help calculate process transitions, such as joins.
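A micro event store can be sketched as a component-local log that is folded into a materialized view on demand. This is a deliberately simplified, in-memory illustration of the idea ahead of the full CQRS discussion; keying by event id keeps inserts idempotent, which is in the spirit of the ACID 2.0 style logic mentioned above.

```javascript
// Sketch of a micro event store: a component keeps the events it consumes
// in its own database and folds them into a materialized view on demand.
const microStore = () => {
  const events = new Map(); // keyed by event id, so re-inserts are idempotent
  return {
    record: (event) => events.set(event.id, event),
    view: () => {
      // Fold the stored events into a simple materialized view.
      let count = 0;
      let last = null;
      for (const e of events.values()) {
        count += 1;
        last = e.type;
      }
      return { count, last };
    },
  };
};
```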
Notice how the last part of the event-first variant is the beginning of the database-first variant and the last part of the database-first variant is the beginning of the event-first variant. This is exactly why we are able to chain these together to solve sufficiently complex problems. However, for any given processing flow, it is best to discuss and document the entire flow from one perspective only. If the flow starts with a user invoking a command that produces an event, then stick with the database-first perspective. If the flow starts with a user producing an event that triggers a command, then stick with the event-first perspective. Switching between these perspectives in a single flow can lead to confusion and a sort of Abbott and Costello "Who's on First?" comedy routine. Yet, keep in mind that every exit is an entrance when discussing flows across teams. From the perspective of an upstream component, an event is the result of its logic, whereas to a downstream component, an event is the beginning of its logic. Thus, switching between perspectives may be inevitable and just needs to be recognized.
Note that the Event Sourcing pattern is distinct from the lesser known Command Sourcing pattern. The Command Sourcing pattern records the inputs to a command, whereas the Event Sourcing pattern records the results of the execution of a command, aka the event. I mention this because I specifically named the second variant database-first to avoid any confusion with the Command Sourcing pattern.
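The distinction can be made concrete with a small sketch. The command and event shapes are hypothetical; what matters is which record lands in which log: command sourcing stores what was asked, event sourcing stores what happened as a result.

```javascript
// Command sourcing records the input; event sourcing records the result
// of executing the command, that is, the event.
const executeCommand = (command) => ({
  id: `${command.id}-evt`,
  type: 'item-submitted',                         // the outcome, not the request
  item: { ...command.payload, status: 'submitted' },
});

const commandLog = []; // command sourcing: what was asked
const eventLog = [];   // event sourcing: what happened

const handle = (command) => {
  commandLog.push(command);              // input, before execution
  const event = executeCommand(command);
  eventLog.push(event);                  // result, after execution
  return event;
};
```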
Event sourcing is an instrumental pattern. It builds on its predecessor patterns, Cloud-Native Databases Per Component and Event Streaming, to facilitate its successor patterns, Data Lake and CQRS, which in turn power the higher order patterns, Backend For Frontend, External Service Gateway, Event Collaboration, Event Orchestration, and Saga.