Design: Runtime extension points

(Original thread: https://groups.google.com/forum/#!topic/node-red/LFu_z85G3Fg)

There are a number of places in the Node-RED runtime where it would be desirable to plug in alternative implementations or provide hooks to add functionality. This would enable a number of use cases.

This page is an attempt to start organising these thoughts.

Use Cases

Running multiple instances of a flow

This is where the flows are designed so that multiple identical instances can be run with some form of load balancing in front of them. It is suited to HTTP-initiated flows, with any shared state maintained via an external service such as Redis.

The requirement is to be able to trigger the deploy of flows across all instances.

This could be done in the existing storage-plugin layer.
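
As a rough illustration of that approach, a custom storage module could keep the flow configuration in a shared store and publish a notification whenever it changes, so the other instances know to pick up the new flows. The sketch below is only that - a sketch: it assumes the init/getFlows/saveFlows part of the storage API (wired in via the storageModule setting), a callback-style Redis client, and a hypothetical redisUrl setting; error handling and the remaining storage methods are omitted.

```javascript
// redis-storage.js - hypothetical storage module, enabled in settings.js with:
//   storageModule: require("./redis-storage")
var redis = require("redis");

var client;     // used to get/set the flow configuration
var publisher;  // separate connection for publishing change notifications

module.exports = {
    init: function (settings) {
        // settings.redisUrl is an assumed, user-defined setting
        client = redis.createClient(settings.redisUrl);
        publisher = redis.createClient(settings.redisUrl);
        return Promise.resolve();
    },

    // Every instance reads the same flow configuration from the shared store
    getFlows: function () {
        return new Promise(function (resolve, reject) {
            client.get("nodered:flows", function (err, reply) {
                if (err) { return reject(err); }
                resolve(reply ? JSON.parse(reply) : []);
            });
        });
    },

    // Saving the flows also tells the other instances to redeploy
    saveFlows: function (flows) {
        return new Promise(function (resolve, reject) {
            client.set("nodered:flows", JSON.stringify(flows), function (err) {
                if (err) { return reject(err); }
                publisher.publish("nodered:flows-updated", Date.now().toString());
                resolve();
            });
        });
    }
    // getCredentials/saveCredentials, getSettings/saveSettings, library
    // methods, etc. omitted for brevity
};
```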

Running a flow across multiple instances

This is where a flow is distributed across multiple runtimes. Two connected nodes may not be running in the same instance.

The requirements are:

  • co-ordinate the deploy of the flow across multiple machines
  • perform some processing on the flows as part of the deploy process
  • be able to route messages between the instances as defined by the flow (see the sketch below)
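
A rough sketch of the deploy-time processing this implies: split the full flow configuration into per-instance pieces and identify the wires that cross instance boundaries, since those are the ones that need network routing. The per-node instance property used here is purely an assumption for illustration - how nodes get assigned to instances is an open question.

```javascript
// Hypothetical deploy-time hook: partition a flow configuration by an assumed
// per-node "instance" property and list the wires that cross instances.
function partitionFlows(fullConfig) {
    var byId = {};
    fullConfig.forEach(function (node) { byId[node.id] = node; });

    var partitions = {};          // instance name -> array of node configs
    var crossInstanceWires = [];  // wires that must be routed over the network

    fullConfig.forEach(function (node) {
        var instance = node.instance || "default";
        partitions[instance] = partitions[instance] || [];
        partitions[instance].push(node);

        // node.wires is the usual array-of-arrays of target node ids
        (node.wires || []).forEach(function (outputWires) {
            outputWires.forEach(function (targetId) {
                var target = byId[targetId];
                if (target && (target.instance || "default") !== instance) {
                    crossInstanceWires.push({
                        source: node.id,
                        target: targetId,
                        from: instance,
                        to: target.instance || "default"
                    });
                }
            });
        });
    });

    return { partitions: partitions, crossInstanceWires: crossInstanceWires };
}
```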

Running different flows for different users

Unlike support for multiple users accessing a single instance, this is the case where different users have access to different Node-RED instances/runtimes on a single multi-core server or cluster. Doing this may require a suitable UI to manage users and instances, a way to broker communications to instances for administration (flow deployment, debugging), and access to flow inputs (e.g. HTTP, WebSockets, others) based on user identity and credentials.
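
A very rough sketch of the brokering piece, assuming the http-proxy package and a hypothetical mapping from an authenticated user to the address of that user's Node-RED instance; authentication itself and instance lifecycle management are out of scope here.

```javascript
// Hypothetical broker that forwards each authenticated user's requests
// (editor, admin API, HTTP-in flows) to that user's own Node-RED instance.
var http = require("http");
var httpProxy = require("http-proxy");

var proxy = httpProxy.createProxyServer({});

// Assumed mapping of user identity -> instance address; in practice this
// would come from whatever manages the per-user instances.
var instanceForUser = {
    "alice": "http://127.0.0.1:1881",
    "bob":   "http://127.0.0.1:1882"
};

http.createServer(function (req, res) {
    // Placeholder: derive the user from the request's credentials/session
    var user = req.headers["x-user"];
    var target = instanceForUser[user];
    if (!target) {
        res.writeHead(403);
        return res.end("No instance for this user");
    }
    proxy.web(req, res, { target: target });
}).listen(8000);
```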

Extension Points

  • Storage layer - how/where things get stored - already done.
  • Message routing - how a message gets routed. The default implementation is an in-memory direct call to a node's .receive function. Alternative implementations could serialise the message to JSON and send it over the network (see the routing sketch after this list).
  • Flow handling - sits in the load/save path for flows. Need to refine how much is exposed to the API and where exactly it sits in the deploy process.
  • Context backing - we intend to make context more widely used. As soon as a flow can run in multiple instances, the shared context needs to be truly shared. This extension point would allow a backing store to be plugged in - for example Redis (see the context-store sketch after this list). Even in the single-instance case, this would allow context to be persisted.
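
To make the message-routing point above concrete, a pluggable router might look something like the sketch below: the default implementation keeps today's in-memory behaviour, while an alternative serialises the message to JSON and hands it to some network transport. The interface shape is an assumption; only the call to a node's .receive function reflects the current runtime behaviour described above.

```javascript
// Hypothetical routing extension point. Instead of invoking a target node
// directly, the runtime would ask a pluggable router to deliver the message.

// Default router: the current behaviour - an in-memory lookup followed by a
// direct call to the target node's .receive function.
function inMemoryRouter(getNode) {        // getNode: id -> node object
    return {
        send: function (targetNodeId, msg) {
            var target = getNode(targetNodeId);
            if (target) {
                target.receive(msg);
            }
        }
    };
}

// Alternative router: serialise the message to JSON and hand it to a network
// transport (MQTT, AMQP, raw sockets, ...) keyed by the target node's id.
function networkRouter(transport) {       // transport: anything with publish()
    return {
        send: function (targetNodeId, msg) {
            transport.publish("nodered/msg/" + targetNodeId, JSON.stringify(msg));
        }
    };
}

module.exports = { inMemoryRouter: inMemoryRouter, networkRouter: networkRouter };
```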
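
Similarly, the context-backing point could be a small get/set interface that the runtime's context object delegates to. The sketch below assumes such an interface and a callback-style Redis client, and ignores synchronous access and caching, which would need more thought.

```javascript
// Hypothetical pluggable context store backed by Redis, so flow/global
// context survives restarts and is shared between instances.
var redis = require("redis");

function redisContextStore(url) {
    var client = redis.createClient(url);
    return {
        // scope: e.g. a flow id or "global"
        get: function (scope, key) {
            return new Promise(function (resolve, reject) {
                client.hget("context:" + scope, key, function (err, reply) {
                    if (err) { return reject(err); }
                    resolve(reply === null ? undefined : JSON.parse(reply));
                });
            });
        },
        set: function (scope, key, value) {
            return new Promise(function (resolve, reject) {
                client.hset("context:" + scope, key, JSON.stringify(value),
                    function (err) { return err ? reject(err) : resolve(); });
            });
        }
    };
}

module.exports = redisContextStore;
```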