Play socket.io support

The Play team are proud to announce official support for socket.io. We have created a library called play-socket.io, which provides a complete engine.io and socket.io implementation, tested against the reference client (that is, the official JavaScript client), and includes a number of useful features, such as backpressure and cluster support, that the JavaScript implementations do not have.

Play has already proved itself capable of scaling to hundreds of thousands of connections per node, as demonstrated by LinkedIn for example, so the straightforward multiplexing and event-based API offered by the JavaScript client, combined with Play's powerful backend, makes for a compelling technology stack for reactive applications.

Akka streams based

play-socket.io is built on Akka streams. Each namespace is handled by an Akka streams Flow, which receives at its inlet the stream of messages for that namespace coming from the client, and emits the messages to be sent back to the client.
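As a conceptual sketch of that shape (this is a model of the idea, not the real API), a namespace handler can be thought of as a function from the stream of events arriving from the client to the stream of events sent back:

```scala
object NamespaceSketch {
  // Conceptual model only: play-socket.io uses an Akka streams Flow,
  // but the shape is the same - incoming events in, outgoing events out.
  type Handler[In, Out] = LazyList[In] => LazyList[Out]

  // A namespace that echoes every event straight back to the client
  val echo: Handler[String, String] = identity

  // A namespace that transforms events before sending them back
  val shout: Handler[String, String] = _.map(_.toUpperCase)
}
```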

One advantage of using Akka streams is that backpressure comes for free. This is an important feature for protecting servers from being overwhelmed with events. Without backpressure, there's no way for the server to tell the client to stop sending messages, so the server has to either process them, exhausting its CPU and other resources, or buffer them, and risk running out of memory. play-socket.io, however, will push back on the TCP connection when it can't keep up with the rate of messages being sent from the client, preventing the client from sending any further messages. Likewise, backpressure from slow-consuming clients gets pushed back to the sources of the Akka streams flows, ensuring a server will slow down its emission of messages and won't run out of memory buffering messages that have yet to be consumed by the client.
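The effect of push-back can be illustrated without Akka at all: a bounded buffer makes a fast producer block when a slow consumer falls behind, so memory use stays constant instead of growing with the backlog. This is only a conceptual sketch of backpressure, not how play-socket.io is implemented:

```scala
import java.util.concurrent.ArrayBlockingQueue
import scala.collection.mutable.ListBuffer

object BackpressureSketch {
  // Run a fast producer against a slow consumer over a small bounded
  // buffer; put() blocks when the buffer is full, so the producer is
  // slowed to the consumer's pace rather than buffering unboundedly.
  def run(messages: Seq[String]): Seq[String] = {
    val buffer   = new ArrayBlockingQueue[String](2)
    val consumed = ListBuffer.empty[String]

    val consumer = new Thread(() => {
      for (_ <- messages.indices) {
        Thread.sleep(20) // slow consumer
        consumed += buffer.take()
      }
    })
    consumer.start()

    messages.foreach(buffer.put) // blocks while the buffer is full
    consumer.join()
    consumed.toList
  }
}
```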

Built-in clustering

Being built on Akka, play-socket.io does not need a sticky load balancer or any intelligent routing to serve socket.io endpoints. In most other implementations, if you have a socket.io endpoint served by a cluster of servers, you need to ensure that requests for the same session always get routed to the same node. With play-socket.io, requests can be handled by any node, and Akka clustering is used to ensure that they get routed to the right node. This allows the use of dumb, stateless load balancers, simplifying your deployment. The clustered chat example app, in Scala and Java, shows how to configure play-socket.io to work in a multi-node environment, and even comes with a handy script to start three nodes behind an nginx load balancer to demonstrate the multi-node setup at work.

Detailed documentation on using play-socket.io in a clustered setup can be found in the Scala and Java documentation.
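As a rough sketch of the kind of configuration involved (these are standard Akka cluster settings, not play-socket.io specifics; the system name, host, and ports here are illustrative, so consult the documentation for the exact setup), clustering is enabled in application.conf along these lines:

```
akka {
  actor.provider = "cluster"

  remote.artery.canonical {
    hostname = "127.0.0.1"
    port     = 2551
  }

  cluster.seed-nodes = [
    "akka://application@127.0.0.1:2551",
    "akka://application@127.0.0.1:2552"
  ]
}
```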

Example code

Here's a minimal chat engine (similar to the official chat example) written in Play Scala:

import akka.stream.Materializer
import akka.stream.scaladsl.{BroadcastHub, Flow, Keep, MergeHub}
import play.engineio.EngineIOController
import play.socketio.scaladsl.SocketIO

class ChatEngine(socketIO: SocketIO)(implicit mat: Materializer) {
  import play.socketio.scaladsl.SocketIOEventCodec._

  // codec to decode "chat message" events from JSON strings
  val decoder = decodeByName {
    case "chat message" => decodeJson[String]
  }

  // codec to encode outgoing strings as "chat message" events
  val encoder = encodeByType[String] {
    case _: String => "chat message" -> encodeJson[String]
  }

  // Merge/broadcast hub that each client will connect to
  private val chatFlow = {
    val (sink, source) = MergeHub.source[String]
      .toMat(BroadcastHub.sink[String])(Keep.both)
      .run()
    Flow.fromSinkAndSourceCoupled(sink, source)
  }

  val controller: EngineIOController = socketIO.builder
    .addNamespace("/chat", decoder, encoder, chatFlow)
    .createController()
}

And then to ensure Play routes requests to the EngineIOController, add the following to your routes file:

GET     /socket.io/         play.engineio.EngineIOController.endpoint(transport)
POST    /socket.io/         play.engineio.EngineIOController.endpoint(transport)

And that's all!

Documentation and samples

For installation instructions, comprehensive documentation and links to sample apps, see the documentation for Scala and Java. To contribute, visit the project's GitHub page.

James Roper
