Brace yourself, a wall of text is coming. Microservices architecture is a never-ending story. It took me a couple of years to collect this many resources about microservices, and now I am sharing them with you. Also available here: https://kgoralski.gitbook.io/wiki/microservices
The more I work with such architectures, the more I feel they are about people and less about technologies. As the saying goes, "Microservices solve organizational problems and cause technical problems." They are certainly not a free lunch.
"Service-oriented architecture composed of loosely coupled elements that have bounded contexts" by Adrian Cockcroft (Amazon)
"Conway's Law states that Organizations that design systems are constrained to produce copies of the communication structures of these organizations [...] the organization chart will initially reflect the first system design, which is almost surely not the right one [...] as one learns, he changes the design [...]. Management structures also need to be changed as the system changes..."
- A single (layered) architecture
- A single technology stack
- A single code base maintained by multiple teams
In defence of the monolith https://docs.google.com/spreadsheets/d/1vjnjAII_8TZBv2XhFHra7kEQzQpOHSZpFIWDjynYYf0/edit#gid=0
- All parts are interconnected
- Many other systems are connected to your system
- Hard to change, hard to maintain
- Long time between releases, thereby increasing risks
- Slow innovation
- Hard to move to newer technologies
- Doesn’t scale very well
- Products not projects
- Decentralized governance
- Replaceable parts
- High performance
- Technology independent
- Polyglot persistence
- Easy to build
- Easy to test
- Easier deployment than monoliths
- What is a microservice exactly?
- How small is a microservice? https://youtu.be/YQp85GzoxqA?t=2m48s
- Requirements in a microservice world
- Components or services
- Who owns a microservice?
- What technologies do you use?
- What protocols do you apply?
- How to define messages
- How to test microservices
- How to coordinate when business services run across components?
- How to build deployment pipelines?
Patterns (check next section) that can help you do it: API Gateway, Backend for Frontend, Strangler, Anti-Corruption Layer, Sidecar, Ambassador
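To make the API Gateway idea from the pattern list above concrete, here is a minimal sketch of its routing core: a single entry point that maps request paths to the backend service that owns them. The service names and URL prefixes are hypothetical, and a real gateway would also handle auth, rate limiting, and response aggregation.

```python
# Minimal sketch of the API Gateway pattern: one public entry point
# that routes each request path to the owning backend service.
# Service names and addresses below are made up for illustration.

ROUTES = {
    "/orders": "http://order-service:8080",
    "/customers": "http://customer-service:8080",
    "/products": "http://product-service:8080",
}

def route(path: str) -> str:
    """Return the full backend URL responsible for the given path."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    raise LookupError(f"no backend registered for {path}")
```

The same prefix table is where a Backend for Frontend would diverge: each frontend (web, mobile) gets its own gateway with routes and payloads shaped for that client.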
The fallacies of distributed computing are a set of assertions made by L Peter Deutsch and others at Sun Microsystems describing false assumptions that programmers new to distributed applications invariably make.
"Essentially everyone, when they first build a distributed application, makes the following eight assumptions. All prove to be false in the long run and all cause big trouble and painful learning experiences." by Peter Deutsch
- The network is reliable.
- Latency is zero.
- Bandwidth is infinite.
- The network is secure.
- Topology doesn't change.
- There is one administrator.
- Transport cost is zero.
- The network is homogeneous.
A design is bad if any one of these aspects is neglected.
- Software applications are written with little error-handling on networking errors. During a network outage, such applications may stall or infinitely wait for an answer packet, permanently consuming memory or other resources. When the failed network becomes available, those applications may also fail to retry any stalled operations or require a (manual) restart.
- Ignorance of network latency, and of the packet loss it can cause, induces application- and transport-layer developers to allow unbounded traffic, greatly increasing dropped packets and wasting bandwidth.
- Ignorance of bandwidth limits on the part of traffic senders can result in bottlenecks.
- Complacency regarding network security results in being blindsided by malicious users and programs that continually adapt to security measures.
- Changes in network topology can have effects on both bandwidth and latency issues, and therefore can have similar problems.
- Multiple administrators, as with subnets for rival companies, may institute conflicting policies of which senders of network traffic must be aware in order to complete their desired paths.
- The "hidden" costs of building and maintaining a network or subnet are non-negligible and must consequently be noted in budgets to avoid vast shortfalls.
- If a system assumes a homogeneous network, then it can lead to the same problems that result from the first three fallacies.
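The first fallacy ("the network is reliable") is the one the list above warns about most concretely: code that waits forever or never retries. A common countermeasure is bounded retries with exponential backoff and jitter, sketched here; the attempt counts and delays are arbitrary example values.

```python
import random
import time

def call_with_retries(operation, attempts=3, base_delay=0.1):
    """Call a possibly-failing remote operation with a bounded number of
    retries, instead of stalling or waiting infinitely for an answer."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up after a bounded number of tries
            # exponential backoff with jitter, so many clients retrying
            # at once do not hammer the recovering service in lockstep
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

Pair this with an explicit timeout on the operation itself; retrying an unbounded wait just multiplies the stall.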
- Config Management
- Service Discovery & LB
- Resilience & Fault Tolerance
- API Management
- Service Security
- Centralized Logging
- Distributed Tracing
- Scheduling & Deployment
- Auto Scaling & Self Healing
- Service Mesh
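"Resilience & Fault Tolerance" from the capability list above is usually delivered by a circuit breaker. Below is a deliberately stripped-down sketch of the idea; real implementations (Hystrix, resilience4j, Envoy) add a half-open state, timers, and sliding windows, none of which are shown here.

```python
class CircuitBreaker:
    """Sketch of a circuit breaker: after `threshold` consecutive
    failures the circuit opens and further calls fail fast, instead
    of piling load onto an already unhealthy downstream service."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, operation):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = operation()
        except Exception:
            self.failures += 1  # count consecutive failures
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Failing fast is what turns a slow, cascading outage into a quick, contained error that callers can handle with a fallback.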
- Clean Separation of Stateless and Stateful Services
- Do Not Share Libraries or SDKs (Dependencies will kill you)
- Avoid Host Affinity
- Focus on Services with One Task in Mind
- Use a Lightweight Messaging Protocol for Communication
- Design a Well-Defined Entry Point and Exit Point
- Implement a Self-Registration and Discovery Mechanism
- Explicitly Check for Rules and Constraints
- Prefer Polyglot Over Single Stack (do we really need that?)
- Maintain Independent Revisions and Build Environments
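"Implement a Self-Registration and Discovery Mechanism" from the list above can be sketched as a registry keyed by heartbeats: instances register themselves periodically, and lookups only return instances whose heartbeat is still fresh. This is a toy in-memory model, not a stand-in for Consul, Eureka, or etcd; the TTL and addresses are illustrative.

```python
import time

class ServiceRegistry:
    """Toy self-registration registry: instances heartbeat themselves
    in, and discovery filters out instances whose heartbeat expired."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.instances = {}  # (service_name, address) -> last heartbeat time

    def register(self, service, address, now=None):
        """Called by each instance on startup and on every heartbeat."""
        self.instances[(service, address)] = now if now is not None else time.time()

    def discover(self, service, now=None):
        """Return addresses of instances seen within the TTL window."""
        now = now if now is not None else time.time()
        return [addr for (name, addr), seen in self.instances.items()
                if name == service and now - seen <= self.ttl]
```

The `now` parameter exists only to make the expiry logic testable without sleeping.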
(Again) "Conway's Law states that Organizations that design systems are constrained to produce copies of the communication structures of these organizations [...] the organization chart will initially reflect the first system design, which is almost surely not the right one [...] as one learns, he changes the design [...]. Management structures also need to be changed as the system changes..."
'Inverse Conway Maneuver' recommends evolving your team and organizational structure to promote your desired architecture. Ideally your technology architecture will display isomorphism with your business architecture. https://www.thoughtworks.com/radar/techniques/inverse-conway-maneuver
Reactive Systems, as the Reactive Manifesto defines them, are:
- Responsive
- Resilient
- Elastic
- Message Driven
"The driver is efficient resource utilization, or in other words, spending less money on servers and data centres. The promise of Reactive is that you can do more with less, specifically you can process higher loads with fewer threads. This is where the intersection of Reactive and non-blocking, asynchronous I/O comes to the foreground."
"The Reactive Essence of a Microservice. Asynchronous Communication, Isolation, Autonomicity, Single Responsibility, Exclusive State, and Mobility. These are the core traits of Microservices." by Jonas Boner
Inspired by "Go + Microservices = Go Kit" by Peter Bourgon: https://www.youtube.com/watch?v=JXEjAwNWays
- Team is too large to work effectively on shared codebase
- Teams are blocked on other teams — can't make progress
- Communication overhead too large
- Velocity stalled
- Gives more freedom about technology and ability to replace it
- Teams in different timezones
- Scalability and some technical issues
- Need well-defined business domains for stable APIs
- How to make it decoupled?
- No more shared DB — distributed transactions?
- Testing becomes really hard (chaos monkey anyone?)
- Require dev/ops culture: devs deploy & operate their work
- Job (service) scheduling — manually works, for a while…
- Addressability i.e. service discovery
- Monitoring and instrumentation — tail -f? Nagios & New Relic?
- Distributed tracing?
- Your SLA?
- Production database snapshots
- Code reuse
- Distributed systems are hard to do
- Evolutionary architecture!
- Microservices are changing organisations
- Devops/SysOps skill required, High level automation needed
- Just another level of complexity
- Always check if your framework has already solved your problem
- Code reuse might be hard (or you just don’t want to do this)
- Async communication / Event sourcing may help with decoupling
- Config / Discovery should exist from day one?
- One team per microservice?
- Think twice before going Micro
- Micro solves organisation problems, causing technical ones
- Hard to manage microservices when business requirements are unknown: where are the correct split lines?
- Is using Open Source Software the way to go?
- Don't start with microservices? Monolith first, microservices later
- Software houses: not sure whether they can do such a thing, given time-to-market pressure
- One team and microservices? - Nope.
- It is easy to create a distributed monolith
- Tools that are 'distributed' by default are hard too
- Designed for failure
With bad decisions you can develop: "Distributed Monolithic Applications: a monolithic application disguised as a collection of microservices, stitched together using JSON, simultaneously writing to a single database" by Kelsey Hightower
"Communication between Microservices needs to be based on Asynchronous Message-Passing (while the logic inside each Microservice is performed in a synchronous fashion). As was mentioned earlier, an asynchronous boundary between services is necessary in order to decouple them, and their communication flow, in time—allowing concurrency—and in space—allowing distribution and mobility. Without this decoupling it is impossible to reach the level of compartmentalization and containment needed for isolation and resilience.
Asynchronous and non-blocking execution and IO is often more cost-efficient through more efficient use of resources. It helps minimizing contention (congestion) on shared resources in the system, which is one of the biggest hurdles to scalability, low latency, and high throughput."
"It is unfortunate that synchronous HTTP is widely considered as the go-to Microservice communication protocol. Its synchronous nature introduces strong coupling between services which makes it a very bad default protocol for inter-service communication." by Jonas Boner
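Bonér's point about asynchronous message-passing can be sketched with a tiny publish/subscribe event bus: the producer emits an event without knowing, or synchronously waiting on, whoever consumes it. The topic names and payloads below are invented for illustration, and dispatch here is in-process; real systems put a broker such as Kafka or RabbitMQ between the services to get decoupling in time and space.

```python
from collections import defaultdict

class EventBus:
    """Sketch of message-driven decoupling: publishers emit events to a
    topic; subscribers react when events arrive. The publisher never
    holds a synchronous connection to any particular consumer."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        # Zero subscribers is fine: the publisher does not depend on
        # who, if anyone, is listening.
        for handler in self.handlers[topic]:
            handler(event)
```

Contrast this with synchronous HTTP between services, where the order service would block on billing's availability and latency for every call.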
Microservices Common Mistakes:
eBay & Google