Maintaining 65k open connections in a single Ruby process


Posted on October 29, 2018 by wjwh


In the Ruby world, the Rack protocol is the specification for how web servers should communicate with applications. At its core it is quite simple: an application is an object that responds to the call method, takes the environment hash as a parameter, and returns an Array with three elements: the status, the response headers, and the body, for example ['200', {'Content-Type' => 'text/html'}, ['A barebones rack app.']]. It will be called once for every HTTP request. The ‘environment’ mentioned is a giant hash containing all the request parameters: the HTTP method, the path requested, the request headers, and so on. Almost every Ruby application that serves anything over HTTP has this form, from the largest Rails application to a single-method Sinatra app.
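The shape described above can be sketched in a few lines (a minimal illustration, using the example response from the text):

```ruby
# A minimal Rack application: any object that responds to call
# and returns a three-element array of status, headers, and body.
app = lambda do |env|
  # env carries the request method, path, headers, and so on.
  ['200', { 'Content-Type' => 'text/html' }, ['A barebones rack app.']]
end

# A Rack server would invoke it once per HTTP request, roughly like:
status, headers, body = app.call({ 'REQUEST_METHOD' => 'GET', 'PATH_INFO' => '/' })
```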

This simplicity makes it very straightforward to implement simple web servers, but it comes with an important drawback: everything works at the level of a single HTTP request. Almost every common Ruby/Rack server uses either a process-per-request or a thread-per-request model. Both of these models take considerably more resources than are needed to just hold a connection open, which matters if you want to work with websockets or Server-Sent Events (SSE).

In this post I will outline a simple method of managing a large number of connections to a single Ruby process that can be used as the basis for more advanced protocols. We will make a simple program that does nothing but receive connections and send the string BEEP\n to each connected client every three seconds.

Hijacking the Rack socket

Luckily, Rack provides an escape hatch for when you want to do more complicated things than just sending an HTTP response with a known length. The hijacking API works through the env hash passed into your app by the server. You can check whether the server supports hijacking by reading env['rack.hijack?'], which should contain a boolean. If the server supports hijacking, you can perform a “full” hijack by calling env['rack.hijack'].call, after which you can access the socket in env['rack.hijack_io']. You are then responsible for emitting both the headers and the response body, and you will have to close the socket yourself.

It is also possible to perform a “partial” hijack by assigning a lambda to the rack.hijack response header. This lambda will be called after the server has sent out headers and will receive the socket as an argument. Any response body will be completely ignored by the server. As with full hijacking, you are responsible for closing the socket yourself.

What if we never close the socket?

For this demo we will use the Puma server. It is a threaded server using a single thread per request. For efficiency (and to prevent Puma from using too many resources), threads are reused from a single thread pool with a default maximum size of 16 threads. Any additional incoming requests will have to wait until there is a thread available. However, we can improve this concurrency considerably with socket hijacking.

If we perform a full or partial hijack, execution still happens on a thread from the Puma thread pool, so by itself this gains us nothing. Letting the thread finish does not help either, as the socket would then be closed by the garbage collector. However, by storing the socket in a globally accessible Array, we can make sure that it is never garbage collected and lives at least as long as it is present in the array.

Using a partial hijack, this looks as follows:

```ruby
require 'puma'
require 'rack'

conn_storage = []

app = lambda do |env|
  response_headers = {}
  response_headers['Transfer-Encoding'] = 'binary'
  response_headers['Content-Type'] = 'text/plain'
  # Partial hijack: Puma sends the headers, then hands us the socket.
  response_headers['rack.hijack'] = lambda do |io|
    conn_storage << io
  end
  [200, response_headers, nil]
end
```

We can then use a separate thread to iterate over the array with each every few seconds and do something with each connection. We also need to make sure to remove the socket again from the array. For now we’ll just remove it whenever anything goes wrong:

```ruby
Thread.new do
  loop do
    sleep 3
    start_time = Time.now
    conn_storage.each do |c|
      begin
        c << "BEEP\n"
      rescue
        # For now, drop any connection that errors on write.
        conn_storage.delete(c)
      end
    end
    end_time = Time.now
    puts "Number of clients connected: #{conn_storage.length}"
    puts "Time for entire write run: #{end_time - start_time}"
  end
end

# Hand the app to the server (needed when running with rackup).
run app
```

Testing our new server

To test this simple setup I copied it into a file called c10k.ru on a smallish Linux server we had lying around and ran it with rackup c10k.ru. Then, using the Apache benchmark tool (ab for short), I tried establishing as many connections to it as I could. Sadly, ab can only open up to 20000 concurrent connections, and our toy server easily handled that many. It used quite a lot of memory though: about 350 MB per 10k open connections, measured by comparing the memory use of a fresh server process against the steady-state memory use of a process with 20k connections open. Apparently NGINX can do it with only 2.5 MB per 10k connections, but we’re doing pretty well so far with 30 lines of pure Ruby, so I’m not worried.

Since I wanted to test the limits, I was knee-deep in writing my own testing tool that could establish more than 20k connections before a passing coworker suggested simply running multiple instances of ab. This was clearly a good idea and we got onto it straight away. Sadly, we then ran into problems with exhausting the ephemeral ports on the client system. Eventually we got it working by using four servers, each running a single instance of ab, in addition to the one running our Ruby process. The maximum number of connections achieved was 65523, just 13 short of 2^16. This is the theoretical maximum number of connections for a single IP address on a single port, so I was pretty happy with that. Iterating through all 65k connections took about 750 milliseconds, which was also not too bad.

There were some small problems with race conditions, where multiple request threads would insert connections into the conn_storage array at the same time. This was easily fixed by using a Queue as a staging area for new connections, since it is a thread-safe data structure. Removing a socket from the array also took an excessive amount of time with larger numbers of connections, so the array got switched for a Set to speed up deletion. The complete file with more comments can be found here.

Conclusion

In very little code we have a system that can easily handle in excess of 65 thousand connections, even though it is not very economical in its memory usage. This is meant as a simple exploration of how you can use the internals of Rack and Ruby to get more out of a system than you ‘should’ be able to. It is amazing that a problem which took huge engineering efforts to solve back in the early 2000s can now be solved by anyone in just a handful of lines of code.