Bruno Luiz Silva

GRPC: A powerful way to improve your Golang APIs

December 05, 2019

Photo by Israel Palacio on Unsplash

Web APIs are everywhere, with REST being one of the most popular ways to distribute them. With recent technologies, there are better ways to implement them, GRPC being one of them.

Why REST is popular and what are its pitfalls?

Companies used to write web services in SOAP until REST got enough hype to be the next big thing, although the concept had been around since 2000. It was like a breath of fresh air for developers: a low learning curve, simplified payloads (usually JSON instead of XML), a schema-less approach and no need for specific client/server generators.

At a certain point, some of its strengths turned out to be weaknesses. Not having a schema initially seemed a good idea, but developers eventually realised one is required. A schema is useful for validation, documentation, contract agreement between teams and code generation, to mention a few. Initiatives such as Swagger, RAML and, most recently, OpenAPI popped up trying to fill these gaps.

Even so, the lack of an official standard to guide decision processes usually ended up in analysis paralysis in big projects. Besides, code generation in most of these tools was not good or ready enough to use (example: swagger-tools).

JSON is the preferred way to return data from REST APIs. It is an excellent choice for public services used by front-end applications and third parties. But due to its transport and decoding overhead, it is not well optimised for communication between internal services.

Even with these issues, it doesn’t mean REST is a bad choice. It is by far the simplest way to implement an API, but there are known pitfalls due to that simplicity. Technologies such as GraphQL, Thrift and GRPC came to fill these gaps.

Enter GRPC and Protocol Buffers

GRPC is a modern Remote Procedure Call (RPC) framework built on top of HTTP/2, using Protocol Buffers as its interface definition language (IDL). These help it to be a low-latency, high-performance and scalable alternative to REST.

The strict use of HTTP/2 enables developers to embrace some of its features, mainly streaming (client-side and server-side). For example:

  • A client can send a bulk of messages through a stream, instead of sending one request each or a single request with all of them
  • A client can send a request and receive a stream of responses, instead of waiting for the server to finish processing and send one huge payload
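In Protobuf, these patterns are declared with the stream keyword. A minimal illustrative sketch (the Echo service and its messages below are hypothetical, not part of the example used later in this article):

```proto
syntax = "proto3";

package example;

message Msg { string body = 1; }
message Ack { bool ok = 1; }
message Sub { string topic = 1; }

service Echo {
  // Client-side streaming: the client sends many messages, the server replies once
  rpc SendBulk (stream Msg) returns (Ack) {}
  // Server-side streaming: one request, many responses as they become available
  rpc Subscribe (Sub) returns (stream Msg) {}
}
```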

Protocol Buffers are a simpler and optimised way to define and serialise structured data. They can be used not only with GRPC but for other use cases, such as event modelling and data storage.

Think Swagger (REST) or WSDL (SOAP), but lightweight, since the payloads are binary, allowing faster serialisation and a smaller footprint. Code generated by tools such as protoc brings benefits like compile-time type checking and auto-completion in IDEs.

In the snippet below, a GRPC service is defined using Protobuf. Generating a client and server requires a simple protoc call. Easy, right?

syntax = "proto3";

package api;

// Defines where your go package will be placed after compilation
option go_package = "api";

service Identity {
  rpc GetUser (GetUserRequest) returns (GetUserResponse) {}
  rpc GetUsers (GetUsersRequest) returns (GetUsersResponse) {}
}

message User {
  string user_id = 1;
  string name = 2;
  bool active = 3;
}

message GetUserRequest {
  string user_id = 1;
}

message GetUserResponse {
  User user = 1;
}

message GetUsersRequest {}

message GetUsersResponse {
  // Defines an array of users
  repeated User users = 1;
}

In any protobuf file, each field has a number associated with it. These numbers uniquely identify the position of a field in the binary message, so they shouldn’t be changed. If a field gets deprecated, its number can’t be re-used. This helps avoid breaking changes, allowing progressive model evolution: developers don’t need to worry about re-used fields clashing with previous payloads of the message.
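Protobuf offers the reserved keyword to enforce this rule at compile time. A minimal sketch, assuming a hypothetical email field of User had been removed at some point:

```proto
message User {
  // Field 4 (`email`) was removed; reserving the number and the name makes
  // protoc reject any attempt to re-use them in future revisions
  reserved 4;
  reserved "email";

  string user_id = 1;
  string name = 2;
  bool active = 3;
}
```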

Creating a simple GRPC server and client in Golang

Code of this part available at Github brunoluiz/grpc-example@master

It is possible to create a simple GRPC server and client based on the previous protobuf definition. The following tools need to be installed:

  • protoc, the Protocol Buffers compiler
  • protoc-gen-go, the Golang plugin for protoc, which generates the client and server stubs

In a Golang project, running protoc -I. --go_out=plugins=grpc:./generated api/api.proto will generate both server and client. The go_package option in the protobuf specifies the generated code output path; in this case, it will be ./generated/api.

To create a server, all methods defined by the proto need to be implemented by a Golang type. Peeking into the generated files, this is the interface generated from service Identity. A type which implements this interface is enough to implement the server.

// File: api/generated/api.pb.go

type IdentityServer interface {
	GetUser(context.Context, *GetUserRequest) (*GetUserResponse, error)
	GetUsers(context.Context, *GetUsersRequest) (*GetUsersResponse, error)
}

// File: service/service.go

type GRPCServer struct {}

func (g *GRPCServer) GetUser(_ context.Context, _ *api.GetUserRequest) (*api.GetUserResponse, error) { ... }
func (g *GRPCServer) GetUsers(_ context.Context, _ *api.GetUsersRequest) (*api.GetUsersResponse, error) { ... }

With the service implemented, the following code is enough to run it as a server.

// File: cmd/server/main.go

// Create a new GRPC Server
s := grpc.NewServer()

// Register our service implementation against the GRPC service
api.RegisterIdentityServer(s, service.NewServer())

// Listen to a specific port
lis, err := net.Listen("tcp", os.Getenv("GRPC_ADDRESS"))
if err != nil {
  return err
}

// Start serving
return s.Serve(lis)

A client will be generated by protoc as well. It can be created through NewIdentityClient, returning an IdentityClient implementation.

// File: api/generated/api.pb.go

type IdentityClient interface {
	GetUser(ctx context.Context, in *GetUserRequest, opts ...grpc.CallOption) (*GetUserResponse, error)
	GetUsers(ctx context.Context, in *GetUsersRequest, opts ...grpc.CallOption) (*GetUsersResponse, error)
}

// File: cmd/client/main.go

// Set up a connection to the server.
conn, err := grpc.Dial(c.String("grpc-address"), grpc.WithInsecure(), grpc.WithBlock())
if err != nil {
  return err
}
defer conn.Close()

// Create GRPC Client
client := api.NewIdentityClient(conn)

// Call GRPC Service and Get User
u, err := client.GetUser(context.Background(), &api.GetUserRequest{
  UserId: c.Args().Get(0),
})
if err != nil {
  return errors.Wrap(err, "issue on retrieving users")
}

Of course, this is the bare minimum set-up for a client and server. Interceptors can be added on both client and server, adding custom hooks to any method call and applying transformations.

UnaryInterceptors deal with requests which expect a single response back. StreamInterceptors apply to any streamed request or response. In this article by David Bond, there is more information about interceptors and how to implement custom ones.

Most common interceptors, such as authorisation, validation and monitoring, can be found at go-grpc-middleware. Remember: don’t repeat yourself 😉

Code of this part available at Github brunoluiz/grpc-example@master

GRPC and REST together: What is this, a crossover episode?

Code of this part available at Github brunoluiz/grpc-example@with-gateway

As previously mentioned, REST’s simplicity and low learning curve are some of its selling points. It allows quick testing without the need to set up a client binary (who has never done a quick curl to check some API?). Besides, it is easier for non-developers to play with. As the lingua franca of web services, it might be required for third-party or front-end integrations as well.

GRPC might be more efficient and better in some cases, but it is not as straightforward to use as REST. What if both could be used together, within the same service? That is where GRPC Gateway shines, integrating both worlds in one.

Through annotations, a service can define which methods will be exposed as REST. Using the previous GRPC protobuf, here is an example — note the google.api.http options:

import "google/api/annotations.proto";

service Identity {
  rpc GetUser (GetUserRequest) returns (GetUserResponse) {
    // Maps GetUser to an HTTP GET request, with the param `user_id`
    // been mapped to GetUserRequest.user_id
    option (google.api.http) = {
      get: "/v1/users/{user_id}"
    };
  }
  rpc GetUsers (GetUsersRequest) returns (GetUsersResponse) {
    // Maps GetUsers to an HTTP GET request
    option (google.api.http) = {
      get: "/v1/users"
    };
  }
}
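The same mechanism supports other HTTP verbs. A hypothetical sketch, assuming a CreateUser method (and its request/response messages) were added to the service:

```proto
rpc CreateUser (CreateUserRequest) returns (CreateUserResponse) {
  option (google.api.http) = {
    post: "/v1/users"
    body: "*" // maps the whole request message to the HTTP request body
  };
}
```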

With annotations in place, protoc needs to be changed to use the grpc-gateway plugin.

  • Install grpc-gateway locally
  • Add the GRPC Gateway paths to the import paths (-I$(GOPATH)/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis)
  • Add --grpc-gateway_out=logtostderr=true:./generated to generate the gateway code. It will be placed at the go_package path
  • Add --swagger_out=logtostderr=true:. to generate Swagger definitions. They will be placed in the same folder as the proto file

protoc -I. \
  -I$(GOPATH)/src \
  -I$(GOPATH)/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis \
  --grpc-gateway_out=logtostderr=true:./generated \
  --swagger_out=logtostderr=true:. \
  --go_out=plugins=grpc:./generated \
  api/api.proto

After running protoc, the reverse-proxy code will be available for use. It is just a matter of creating an HTTP server for it. The following implementation runs it as a separate server.

// File: cmd/gateway/main.go

// Note: Make sure the gRPC server is running properly and accessible

mux := runtime.NewServeMux()
opts := []grpc.DialOption{grpc.WithInsecure()}

// Register gRPC server endpoint
if err := api.RegisterIdentityHandlerFromEndpoint(
  context.Background(), mux, c.String("grpc-address"), opts,
); err != nil {
  return err
}

// Start HTTP reverse proxy: sends calls to GRPC server
return http.ListenAndServe(c.String("gateway-address"), mux)

Any REST client should be able to request data from the Gateway. With the server up, this is curl’s output:

➜  ~ curl localhost:8080/v1/users
{"users":[{"user_id":"xyz","name":"Pelican Steve","active":true},{"user_id":"foo","name":"John Doe","active":true},{"user_id":"bar","name":"Chauffina Carr","active":true}]}
➜  ~ curl localhost:8080/v1/users/xyz
{"user":{"user_id":"xyz","name":"Pelican Steve","active":true}}
➜  ~ curl localhost:8080/v1/users/aaa
{"error":"resource was not found","code":5,"message":"resource was not found"}

Magic, isn’t it? Having this REST set-up might put a smile on some faces. The service can be migrated to GRPC for internal usage while keeping compatibility with REST clients: third parties, non-techies and front-end applications.

There is an excellent talk by Johan Brandhorst giving more details on GRPC Gateway.

Code of this part available at Github brunoluiz/grpc-example@with-gateway

Fantastic debugging tools and where to find them

Sooner or later, a quick peek into the service will be required, for debugging or scripting purposes. In REST, this can be done through tools such as curl, Insomnia or Postman. In GRPC, a different set of tools is required.

For the graphical interface lovers, the most popular one is BloomRPC. It only requires the proto files to be loaded, and then it works similarly to any other REST client.

BloomRPC demo

For mouse avoiders — or terminal lovers 😅 — there are two quite useful tools. GRPCurl is an awesome CLI tool, resembling curl and allowing easy GRPC automation and scripting. A simple example of its usage follows:

grpcurl --plaintext --proto api/api.proto localhost:5000 api.Identity/GetUsers

Evans goes a little further, as it has a REPL mode as well, allowing easier resource inspection (my favourite).

Evans REPL demo

These tools, GUI or not, are quite useful while testing and inspecting GRPC services. Try them and find the one which fits your needs better 😉

My experience using it

At my workplace, GRPC has proved quite useful for connecting services, especially internal ones. Protocol Buffers became our main IDL for defining contracts for APIs and stream events. Thanks to these contracts, communicating implementations and changes is easier.

Implementing service clients and servers has been easier and faster. The most common issues are include path errors, which sometimes get messy, but this is more due to our learning curve than the technology itself. Some of the boilerplate, such as registering the server or dialling the client, can be tackled by creating custom protoc plugins. This proved quite useful, especially when the team is more product-oriented.

For Golang, it proved to work quite well in most cases, with good community support. protoc can generate code for other languages as well, such as NodeJS and Java, although in our NodeJS services we felt GRPC support still lags behind Golang’s.

In general, GRPC has been a good choice for our services. Hopefully, it will be good for your team as well.

