As Hash constantly grows, it has become necessary to develop new tools to make it easier to integrate our partners with our products. In this context, we created a white-label Acquisition platform. This product allows new clients to sign up and purchase PoS (Point of Sale) terminals as if it were an e-commerce store.
At Hash, the services that belong to our service mesh communicate with each other through gRPC by default. However, we also needed to expose the Acquisition service over another transfer protocol: HTTP/1. This eases the integration of the Acquisition service with partners, or with applications where implementing a gRPC client isn’t viable or convenient. Given that, we studied different ways to provide this API over more than one transfer protocol.
One way to implement multiple interfaces in a single application is to create separate handlers (gRPC and REST), each with its own method to handle the acquisition flows, as shown in the example below:
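A minimal sketch of this multiple-handlers approach in Go, assuming a hypothetical `CreateMerchant` flow (the type and method names below are illustrative, not the real service contract):

```go
package main

import (
	"encoding/json"
	"net/http"
)

// CreateMerchantRequest / CreateMerchantResponse stand in for the
// generated protobuf types (hypothetical names, for illustration).
type CreateMerchantRequest struct {
	Name string `json:"name"`
}

type CreateMerchantResponse struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// AcquisitionService holds the business logic shared by both handlers.
type AcquisitionService struct{}

func (s *AcquisitionService) CreateMerchant(req CreateMerchantRequest) CreateMerchantResponse {
	// Business rules would live here; we return a canned response.
	return CreateMerchantResponse{ID: "m-1", Status: "created"}
}

// grpcHandler would implement the generated gRPC server interface,
// delegating to the service (signature simplified for this sketch).
type grpcHandler struct {
	svc *AcquisitionService
}

func (h *grpcHandler) CreateMerchant(req CreateMerchantRequest) CreateMerchantResponse {
	return h.svc.CreateMerchant(req)
}

// restHandler exposes the same flow over HTTP/1, duplicating the
// decode/encode boilerplate for every endpoint.
type restHandler struct {
	svc *AcquisitionService
}

func (h *restHandler) CreateMerchant(w http.ResponseWriter, r *http.Request) {
	var req CreateMerchantRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	json.NewEncoder(w).Encode(h.svc.CreateMerchant(req))
}

func main() {
	svc := &AcquisitionService{}
	http.HandleFunc("/v1/merchants", (&restHandler{svc: svc}).CreateMerchant)
	// A real application would also register grpcHandler on a gRPC server
	// and call http.ListenAndServe here.
}
```

Note how every new endpoint needs both a gRPC method and an HTTP method, even though they delegate to the same service call.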
The approach above is completely valid and widely used, but exposing a new endpoint requires writing a handler for each protocol, which makes the task more laborious depending on its complexity.
After searching for alternative ways to bypass this issue and make development more agile, we found the gRPC Gateway library. It is a protoc plug-in that, based on the Protocol Buffers (protobuf) definitions, generates a reverse proxy that translates a RESTful HTTP API into gRPC.
To integrate with the gRPC Gateway, we defined the API contracts in the gRPC and protobuf files. The goal was to compile those files into the stubs used by both the gRPC and REST handlers, with a single method for each endpoint. Now let’s see how we used the gRPC Gateway to obtain the same behavior as the multiple-handlers example above:
gRPC + HTTP using gRPC Gateway
Because we use the same protobuf files to generate the API contracts, we can guarantee that all our interfaces share the same contract. Another advantage of this approach is that we can generate the REST API documentation with Swagger, since the gRPC Gateway has an integration that generates the docs from a protobuf file.
With the implementation of the gRPC Gateway, we exposed the same API over two different transfer protocols but with only one source of truth: we provide our service over gRPC for internal integrations and as a REST API for external integrations. That’s how we make sure the constant development of new features remains easy to maintain. Here’s an example of what the integration on a gRPC API would look like:
In the proto file we’d have to add an annotation declaring the HTTP path to which that RPC call corresponds:
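A sketch of what such a proto file could look like, using the standard `google.api.http` annotation from the gRPC Gateway ecosystem (service, package and message names are hypothetical):

```protobuf
syntax = "proto3";

package acquisition.v1;

import "google/api/annotations.proto";

service AcquisitionService {
  // The annotation maps this RPC to POST /v1/merchants,
  // with the whole request message taken from the HTTP body.
  rpc CreateMerchant(CreateMerchantRequest) returns (CreateMerchantResponse) {
    option (google.api.http) = {
      post: "/v1/merchants"
      body: "*"
    };
  }
}

message CreateMerchantRequest {
  string name = 1;
}

message CreateMerchantResponse {
  string id = 1;
}
```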
In the main file of our application we have to start the HTTP server:
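A sketch of that main file, assuming the stubs generated from the proto above and a gRPC server already listening on port 9090 (the import path and the generated `RegisterAcquisitionServiceHandlerFromEndpoint` name are hypothetical; the gateway generates one such `Register...HandlerFromEndpoint` function per service):

```go
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/runtime"
	"google.golang.org/grpc"

	// Hypothetical import path for the stubs compiled from our proto file.
	pb "github.com/hashlab/acquisition/proto/acquisition/v1"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// The gateway mux translates incoming REST calls into gRPC calls.
	mux := runtime.NewServeMux()

	// Point the reverse proxy at the gRPC server (here, the same
	// process listening on :9090, without TLS).
	opts := []grpc.DialOption{grpc.WithInsecure()}
	if err := pb.RegisterAcquisitionServiceHandlerFromEndpoint(ctx, mux, "localhost:9090", opts); err != nil {
		log.Fatal(err)
	}

	// Serve the REST API on :8080 alongside the gRPC server on :9090.
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```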
With a possible solution in sight, we began to evaluate the trade-offs of this path, to confirm they were positive enough to keep us from abandoning the gRPC Gateway. We searched for projects that already used this library or that had experience with it. We found some and were able to understand a lot about where the gRPC Gateway helped those projects, and also about the precautions necessary when using this tool.
The most relevant projects we found were etcd, which uses this tool, and, as a counterpoint, Dex, which stopped using it. We gathered all the information collected in the process and came up with the trade-offs that would affect our project:
The first item we discussed was how to compile the protobufs in a context with multiple environments (Linux flavors, macOS and Windows). We ended up choosing a Docker image to make the build easier to replicate across environments.
In this script we define the folder containing the protocol buffers and their gateways, as well as the output for the compiled files and the Swagger configuration.
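A sketch of such a build script, assuming a hypothetical Docker image (`our-registry/protoc-gateway`) that bundles protoc with the Go, gRPC Gateway and Swagger plugins; the paths and file names are illustrative:

```sh
#!/bin/sh
# Compile the protobuf definitions inside a container so the build
# behaves the same on Linux, macOS and Windows.
PROTO_DIR=./proto

docker run --rm \
  -v "$(pwd)/${PROTO_DIR}:/defs" \
  our-registry/protoc-gateway:latest \
  protoc -I /defs \
    --go_out=plugins=grpc:/defs \
    --grpc-gateway_out=logtostderr=true:/defs \
    --swagger_out=logtostderr=true:/defs \
    /defs/acquisition.proto
```

The same invocation produces the gRPC stubs, the gateway reverse proxy and the Swagger (OpenAPI) definition from a single proto file.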
A second item that helps with the long-term maintenance of the project was the definition of the file structure and the segregation of responsibilities, since we aim to isolate responsibilities and achieve consistency to make development easier.
In this structure we have some important aspects:
- mocks: responsible for containing all the mocks used on the system’s tests;
- internal/endpoints: responsible for containing the gRPC Gateway handler implementation and for abstracting the protobuf layer by passing domain entities forward. It also validates input data;
- internal/store: responsible for the integration with the database layer;
- internal/services: responsible for containing all the application’s business rules.
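To illustrate the role of the endpoints layer, here is a minimal sketch of how a handler there could validate input and translate a protobuf message into a domain entity, so that internal/services never depends on generated code (all names below are hypothetical):

```go
package main

import (
	"errors"
	"strings"
)

// Merchant is the domain entity passed forward to internal/services;
// the generated protobuf types stop at the endpoints layer.
type Merchant struct {
	Name string
}

// pbCreateMerchantRequest stands in for the generated protobuf
// request type (hypothetical, for illustration).
type pbCreateMerchantRequest struct {
	Name string
}

// decodeCreateMerchant lives in internal/endpoints: it validates the
// input data and converts the protobuf message into a domain entity.
func decodeCreateMerchant(req pbCreateMerchantRequest) (Merchant, error) {
	name := strings.TrimSpace(req.Name)
	if name == "" {
		return Merchant{}, errors.New("merchant name is required")
	}
	return Merchant{Name: name}, nil
}

func main() {}
```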
The structure defined in the previous section (based on go-kit) led us to think about a more robust mock structure, because the interfaces are spread across different files. Our structure is focused on keeping our test tree as healthy as possible, making sure we have good unit test coverage followed by integration tests.
In order to do that, we used Mockery, which makes it easier to create mocks and helps with the maintenance of the unit tests. Just as in the process of compiling the protobuf files, we used a Docker image to generate the mocks for all the interfaces at once.
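A sketch of how that could look from the project root, assuming Mockery's published Docker image and its flag for generating mocks for every interface (output directory is illustrative):

```sh
# Generate mocks for all interfaces in the repository in one pass.
docker run --rm -v "$(pwd):/src" -w /src \
  vektra/mockery --all --output ./mocks
```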
Thanks to the API specification in the protobuf file and the use of the gRPC Gateway, we added a REST layer to our service, increasing the integration options for external parties. This brings a significant improvement in code reuse, which in turn favors maintenance and the addition of new features to our application in the future.
We’re excited about the outcome of the decisions we made throughout this project, and we are eager to bring more content like this to this channel. Our team is always open to welcoming people who get as excited about new technical challenges as we do. If you’re one of those people, you should check out our job opportunities and the values of our tech team.