So, you’ve evaluated your application’s state of affairs and have concluded that the adoption of microservices will improve overall performance and scalability. Great. What comes next? In this article, we outline the baseline considerations for microservices design and implementation.
Microservices is a software architecture strategy that breaks down applications into a collection of decoupled, autonomous services. These independent application services communicate with one another through APIs. Each service is managed by its own team of domain experts so that every software development team can control its own development cycles, test and deploy on its own schedule, use its own enterprise tools and resources, and accelerate time to market.
Our Microservice Architecture Key Concepts article drills into the foundations of this approach and offers some practical advice for getting started, including evaluating whether adopting microservice architecture makes sense for your shop. Now it’s time to get into the weeds.
The first step in designing a microservices architecture is surveying the lay of the land, so to speak. One of Developer.com’s top ten microservices design principles is the single responsibility principle, which dictates that each service is responsible for one function, and one function only, within the microservices-based application.
Software architects should consider conducting a domain analysis to map out how to compartmentalize each service and what elements need to be factored into the application stack. One well-established approach to this analysis is domain-driven design (DDD). It applies patterns, such as the entity pattern and the aggregate pattern, to a single bounded context in order to identify a single domain’s boundaries with greater precision.
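To make the entity and aggregate patterns concrete, here is a minimal sketch in Python using a hypothetical “Orders” bounded context. The `Order` and `OrderLine` names are our own illustration, not from the article: `Order` is the aggregate root, and all changes to its `OrderLine` entities go through it, so the aggregate boundary is where invariants are enforced.

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class OrderLine:
    """An entity that only exists inside the Order aggregate."""
    sku: str
    quantity: int
    unit_price: float

@dataclass
class Order:
    """Aggregate root for the hypothetical Orders bounded context."""
    order_id: str = field(default_factory=lambda: str(uuid4()))
    lines: list = field(default_factory=list)

    def add_line(self, sku: str, quantity: int, unit_price: float) -> None:
        # Invariants (e.g. positive quantities) are enforced at the
        # aggregate boundary, not by callers reaching into the data.
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self.lines.append(OrderLine(sku, quantity, unit_price))

    @property
    def total(self) -> float:
        return sum(l.quantity * l.unit_price for l in self.lines)
```

Because other services never touch `OrderLine` rows directly, the “Orders” service can change its internal model without breaking its neighbors, which is exactly the boundary-drawing that DDD is after.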
As author and Agile Manifesto signatory Martin Fowler explains, DDD is “an approach to software development that centers the development on programming a domain model that has a rich understanding of the processes and rules of a domain.” In other words, you should build each microservice around a specific business function.
Once you identify the domains and outline their boundaries, it’s time to define the variables that best suit the application stack.
Creating a microservices tech stack is a bit ad hoc. You often have to use a host of tools, frameworks, and programming languages to implement it all into a cohesive yet loosely coupled system.
Factor in these variables as you choose your tools:
Narrowing down the best programming language to use for microservices comes down to your familiarity with the language, the libraries available for the features you need, and the suite of features each language provides. Obviously, it saves time and energy to choose languages that are already in your development team’s repertoire.
Understand more about intuitive object mapping and high-level client libraries in Introducing the Redis OM Client Libraries.
When you choose a suitable database to use with the applications you build for a microservices architecture, keep scalability, availability, and security at the top of your mind. Choose a database that best supports the data model you plan to use in your microservice. Your tech stack should scale to handle any application load, ensure availability with failover protocols, and secure the application from malicious attacks.
For more information on landing a high-performance database for microservice based applications, read Microservices and the Data Layer.
Your business function may require your microservices to use synchronous interservice communication methods for certain operations and asynchronous communication for others. Several communication formats and protocols can be used to assist microservices communication, including HTTP/REST, gRPC, and AMQP.
For asynchronous communications, an event-driven message broker leveraging Streams with consumer groups can help fortify scalability and reliability so applications can grow and no service is ever out of reach.
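To illustrate why streams with consumer groups help with scalability, here is a small in-memory sketch of the semantics (this is a teaching model, not a real broker; in production you would use something like Redis Streams): every group sees every message, but within a group each message is delivered to exactly one consumer, so adding consumers to a group scales out processing without duplicating work.

```python
from collections import defaultdict

class Stream:
    """Toy append-only stream with consumer-group delivery semantics."""

    def __init__(self):
        self.entries = []                      # append-only log
        self.group_cursor = defaultdict(int)   # next unread index per group

    def add(self, message):
        self.entries.append(message)

    def read_group(self, group, consumer):
        # Deliver the next unread entry for this group to this consumer;
        # other groups keep their own independent cursors.
        cursor = self.group_cursor[group]
        if cursor >= len(self.entries):
            return None
        self.group_cursor[group] += 1
        return (consumer, self.entries[cursor])

stream = Stream()
for i in range(4):
    stream.add({"order_id": i})

# Two consumers in the "billing" group split the four messages...
deliveries = [stream.read_group("billing", c) for c in ("c1", "c2", "c1", "c2")]

# ...while a separate "audit" group independently sees all four.
audit = [stream.read_group("audit", "a1") for _ in range(4)]
```

The same division of labor is what a real broker’s consumer groups give you, with the added durability of acknowledgments and replay for consumers that crash mid-message.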
For more on choosing the right communication tools and patterns, see What to Choose for Your Synchronous and Asynchronous Communication Needs.
Each microservice team is responsible for monitoring application performance, which usually means employing logging and observability tools to keep a pulse on operations. That lets developers and operations staff track the entire system, from application performance to message broker streams to database resource utilization.
When using a message broker, consider using a logging stream where each microservice can publish messages. This way, you can connect your preferred logging and observability tools to the stream and monitor your application asynchronously without slowing things down.
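The key property of that pattern is that publishing a log record never blocks the request path. Here is a minimal sketch, with an in-process queue standing in for the shared logging stream and a consumer thread standing in for your observability tooling (names like `publish_log` are ours, for illustration):

```python
import json
import queue
import threading

log_stream = queue.Queue()   # stand-in for a shared logging stream
collected = []               # what the "observability tool" has ingested

def publish_log(service: str, message: str) -> None:
    # Non-blocking from the service's point of view: just enqueue and return.
    log_stream.put(json.dumps({"service": service, "msg": message}))

def log_consumer() -> None:
    # Drains the stream asynchronously, decoupled from request handling.
    while True:
        record = log_stream.get()
        if record is None:           # sentinel: shut down
            break
        collected.append(json.loads(record))

worker = threading.Thread(target=log_consumer)
worker.start()

publish_log("orders", "order 42 created")
publish_log("billing", "invoice sent")

log_stream.put(None)                 # stop the consumer
worker.join()
```

With a real broker, each microservice publishes to the same stream and the observability stack consumes it at its own pace, so a slow log pipeline never slows a fast service.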
Learn how the right monitoring resources can combat system complexity in our 5 Microservices Misconceptions blog post.
Want your microservice architecture to truly thrive? Here are five microservice application design principles to reference as you design an architecture built to ensure top performance.
Loose coupling and strong cohesion both follow from the single responsibility principle mentioned earlier. Giving each domain team one responsibility fortifies cohesion within that domain: the functions inside a single service become, ironically enough, a small monolith, essentially inextricably linked to one another. The services themselves stay loosely coupled, though. Each has its own domain experts and tools, but can still communicate with the others via APIs and other protocols. It’s much like the way coworkers from different departments interact: you share information with one another when it helps get the work done, without being too chatty about irrelevant-to-others details.
Business applications are rarely stagnant. Software changes as new business needs arise, industry assumptions change, and technology capabilities offer more functionality. Microservices should be evolvable and adapt to new requirements when needed.
Responding to change is one of the foundations of the Agile Manifesto and for good reason. The world changes. People change. So should your software.
One reason to implement microservices is their ability to automate processes that improve overall scalability. With container orchestration systems like Kubernetes, you can deploy each microservice, along with its own data store, from container images. With the assistance of a Kubernetes controller, these portability benefits help DevOps teams manage, schedule, and orchestrate automatic container deployments.
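As a sketch of what that looks like in practice, here is a hedged, hypothetical Kubernetes `Deployment` manifest for an imagined `orders-service`; the names, image path, and replica count are placeholders, not values from this article. The controller keeps the declared number of replicas running, which is the automation the paragraph above describes.

```yaml
# Hypothetical manifest; adjust names, image, and replicas for your stack.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3                 # scale horizontally by changing this number
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0.0
          ports:
            - containerPort: 8080
```

Because each service ships as its own image with its own manifest, teams can scale, roll out, and roll back their service independently of everyone else’s.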
Implementing microservices requires that the services in any given application maintain their own decentralized data. Service boundaries should isolate all the logic and data pertaining to any single service from other services within an application.
This is the same logic that permits containerized microservices to have independent deployments. According to Red Hat, this principle has its own set of naysayers who believe that the principle leads to a proliferation of data redundancy. But one of the greatest upsides to establishing these discrete boundaries is that “When a microservice carries its own data, any strange behavior is confined within the microservice.” Who needs the guesswork?
Disruptions occur. Application services go down without warning. Fiber-optic-seeking backhoes take down network operations. People forget to renew domains. Systems are interrupted by data connection issues resulting from a firewall failure.
Plan for all the ways that things can go pear-shaped. Do your best to account for potential failures at the implementation level. Design for resiliency, such as with the Circuit Breaker pattern, to keep failures from cascading when a microservice isn’t capable of performing a given operation.
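The Circuit Breaker pattern is simple enough to sketch directly. In this minimal Python version (the class and parameter names are ours, not from any particular library), consecutive failures past a threshold “open” the circuit so subsequent calls fail fast instead of piling up on a struggling service; after a timeout, one probe call is allowed through to test for recovery.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: closed -> open -> half-open -> closed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None   # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Fail fast: don't even attempt the downstream call.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None   # half-open: allow one probe call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()   # trip the breaker
            raise
        self.failures = 0           # any success closes the circuit
        return result
```

Production-grade implementations add per-endpoint state, metrics, and fallback responses, but the state machine above is the heart of the pattern: failing fast protects both the caller and the unhealthy service.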
Infrastructure automation may be a substantial plus in favor of microservices, but operational disruptions are still a very real possibility. In Organizing the Chaos of Data, Redis’ Allen Terleto is joined by Mike Leone of Enterprise Strategy Group (ESG) and Jim Roomer of Google Cloud to pull back the curtain on database disasters. They reveal the path toward efficient data handling, featuring Redis on Google Cloud customer stories.