Design: Runtime Editor Split

Currently, Node-RED is a single monolithic package that bundles both the editor and the runtime.

The plan is to split them into discrete packages to provide more flexibility in how they are used.

Use Cases

Headless

A device that runs a flow it is given. The flow is edited elsewhere and remotely pushed to the device (or hardcoded on the device).
  • Avoids the overhead of an editor that is never going to be used
  • Still supports HTTP endpoints (it just doesn't have the full editor capability)

Multi-user, single-tenant runtimes

  • A single hosted instance of the editor is backed by multiple runtimes - one per user or group of users
  • The user accesses the editor and the underlying API requests are proxied to the appropriate runtime
  • Authentication is handled at the proxy layer
  • API requests can be proxied unmodified, other than the auth tokens (see the sketch below)
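
Purely for illustration, and not part of the design itself, a thin proxy of this kind might look as follows. The http-proxy package, the ports, the token values and the user-to-runtime lookup are all assumptions made for the sketch:

// Sketch only: a proxy that authenticates the user and forwards Node-RED
// admin API requests to that user's runtime, swapping the auth token.
const http = require("http");
const httpProxy = require("http-proxy"); // assumed available via npm

const proxy = httpProxy.createProxyServer({});

// Illustrative stand-ins for real authentication and routing data.
const users = { "editor-token-alice": "alice", "editor-token-bob": "bob" };
const runtimes = { alice: "http://10.0.0.11:1880", bob: "http://10.0.0.12:1880" };
const runtimeTokens = { alice: "runtime-token-a", bob: "runtime-token-b" };

http.createServer((req, res) => {
    // Authentication is handled here, at the proxy layer.
    const token = (req.headers["authorization"] || "").replace("Bearer ", "");
    const user = users[token];
    if (!user) { res.writeHead(401); return res.end(); }

    // Swap the user's token for the runtime's token; otherwise the
    // request is proxied unmodified to the user's runtime.
    req.headers["authorization"] = "Bearer " + runtimeTokens[user];
    proxy.web(req, res, { target: runtimes[user] });
}).listen(8000);

In the multi-tenant case below, the same proxy layer would additionally rewrite responses (for example to /flows) so that each user only sees their own flows.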

Multi-user, multi-tenant runtimes

  • A single hosted instance of the editor is backed by multiple runtimes.
  • The user accesses the editor and the underlying API requests are proxied to the appropriate runtime.
  • Each runtime may host flows belonging to different users who should not be able to see each other's flows.
  • The proxy layer provides a filtered view of the flows to ensure the editor only displays information available to the logged-in user.

Scaled

A Node-RED flow is edited in one place. When deployed, it gets sent to multiple runtimes to provide horizontal scaling.
  • Custom actions are added behind the deploy action
  • Serves up the editor without a local version of the runtime

Distributed

A Node-RED flow is edited in one place. When deployed, it gets carved up and different parts are pushed to different runtimes, either local or remote.
  • Custom actions are added behind the deploy action
  • Serves up the editor without a local version of the runtime

Repurposed Editor

The Node-RED editor is used for another system entirely that shares the same node/wire visualisation.

Packaging

The following reflects the current state of the repackage branch where this is all being done.

  • @node-red/editor-api - This provides an Express application that can be used to serve the Node-RED editor
  • @node-red/editor-client - This provides all of the client-side resources of the Node-RED editor application.
  • @node-red/nodes - This provides all of the core Node-RED nodes.
  • @node-red/registry - This provides the node registry, responsible for discovering and managing the node modules available to the Node-RED runtime.
  • @node-red/runtime - This provides the core flow engine of Node-RED. It is the main entry point for the runtime.
  • @node-red/util - This provides common utilities shared by the Node-RED components, including logging and i18n.
  • node-red - The existing package that pulls the above packages together and delivers exactly the same experience as it does today (see the embedding sketch below).
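
For reference, embedding the node-red package today looks roughly like the following; because the package keeps the existing behaviour, this pattern is expected to keep working after the split (the paths and settings values here are only illustrative):

const http = require("http");
const express = require("express");
const RED = require("node-red");

const app = express();
const server = http.createServer(app);

// Illustrative settings - see the default settings.js for the full set.
const settings = {
    httpAdminRoot: "/red",      // where the editor is served
    httpNodeRoot: "/api",       // where HTTP In nodes are mounted
    userDir: "/tmp/node-red",   // flow and credential storage
    functionGlobalContext: {}
};

// Initialise the runtime with the server and settings, then mount the
// editor UI and the HTTP node routes on the Express app.
RED.init(server, settings);
app.use(settings.httpAdminRoot, RED.httpAdmin);
app.use(settings.httpNodeRoot, RED.httpNode);

server.listen(8000);
RED.start();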

Code structure

The following reflects the current state of the repackage branch where this is all being done.

All of the modules are maintained in a single monorepo. We do not plan to use Lerna or another toolkit for managing the monorepo. Instead we are using a structure inspired by PouchDB.

All of the modules live under:

packages
  \- node_modules
       |- node-red
       |    |- package.json
       |    \- ...
       \- @node-red
            |- editor-api
            |    |- package.json
            |    \- ...
            |- editor-client
            |    |- package.json
            |    \- ...
            |- nodes
            |    |- package.json
            |    \- ...
            |- registry
            |    |- package.json
            |    \- ...
            |- runtime
            |    |- package.json
            |    \- ...
            \- util
                 |- package.json
                 \- ...

Each module has a package.json as normal, with its dependencies listed. A top-level package.json contains all non-node-red dependencies of the modules - this is the one that gets npm installed at development time.

This directory structure means that when a node-red module requires another node-red module, it can be resolved on the node path.
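
As an illustration of the layering (the module names are real, but the specific dependencies and version numbers are made up for this example), a module's package.json and the top-level package.json might look like:

packages/node_modules/@node-red/runtime/package.json (illustrative excerpt):

{
    "name": "@node-red/runtime",
    "dependencies": {
        "@node-red/registry": "0.20.0",
        "@node-red/util": "0.20.0",
        "clone": "2.1.2"
    }
}

package.json at the repository root (illustrative excerpt) - only the non-node-red dependency appears here, with a matching version, while the @node-red modules resolve locally on the node path:

{
    "dependencies": {
        "clone": "2.1.2"
    }
}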

A new script, scripts/verify-package-dependencies.js, has been added that checks the top-level package.json is in sync with the individual module dependencies and that their version numbers agree.
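
A minimal sketch of the kind of check such a script performs (this is not the actual script; it assumes it is run from the repository root and only walks the @node-red scope):

// Sketch only - not the real scripts/verify-package-dependencies.js.
const fs = require("fs");
const path = require("path");

const root = JSON.parse(fs.readFileSync("package.json", "utf8"));
const modulesDir = path.join("packages", "node_modules", "@node-red");

let failed = false;
for (const name of fs.readdirSync(modulesDir)) {
    const pkgFile = path.join(modulesDir, name, "package.json");
    const pkg = JSON.parse(fs.readFileSync(pkgFile, "utf8"));
    for (const dep of Object.keys(pkg.dependencies || {})) {
        if (dep.startsWith("@node-red/")) continue; // internal modules resolve locally
        if (root.dependencies[dep] !== pkg.dependencies[dep]) {
            console.error(pkg.name + ": " + dep + "@" + pkg.dependencies[dep] +
                " does not match top-level " + root.dependencies[dep]);
            failed = true;
        }
    }
}
process.exit(failed ? 1 : 0);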

Test material

The test material still lives under the test directory, albeit under a new unit sub-directory, and the existing grunt test task continues to work.

A new test utility has been added to help the tests load individual module files. Previously a test needed lots of "../../../../"-style paths to find the file it was testing, which was very brittle whenever files were moved. There is now a test module under test/node_modules/nr-test-utils that provides common utilities for the tests (distinct from node-red-node-test-helper). For example, the _spec file for @node-red/registry/lib/localfilesystem uses the following to load the file under test:

var NR_TEST_UTILS = require("nr-test-utils");
var localfilesystem = NR_TEST_UTILS.require("@node-red/registry/lib/localfilesystem");

Similarly, if a test needs to know the full path to a particular file, .resolve can be used:

const defaultIcon = NR_TEST_UTILS.resolve("@node-red/editor-client/src/images/icons/arrow-in.png");
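
The helper itself only needs to map a module path onto the monorepo layout. A sketch of the idea (not the actual nr-test-utils implementation; the relative path assumes the module sits directly under test/node_modules/nr-test-utils):

// Sketch of the idea behind nr-test-utils - not the actual implementation.
const path = require("path");

// <repo>/test/node_modules/nr-test-utils -> <repo>/packages/node_modules
const PACKAGES = path.resolve(__dirname, "../../../packages/node_modules");

module.exports = {
    // Full path to a file inside one of the monorepo modules
    resolve: name => path.join(PACKAGES, name),
    // Load a module file from the monorepo for testing
    require: name => require(path.join(PACKAGES, name))
};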

Discussion

(DCJ) - One of the main items to think about is the UI changes. In the distributed case, how is the partitioning of flows done? What attribute says where a particular node should be run? Is it an attribute of a node? Or a group of nodes? Or a Tab/Flow? Or should the/a new link node be the thing that distinctly separates parts and holds "location" information for anything it connects to?

While distributed and scaled are similar, there is one possible distinction worth highlighting: in the "simple" scaled case, the entire flow is sent to multiple runtimes. Is this worth bringing out separately, as it may not need any of these UI changes?

(KN) - Currently, a runtime does not receive user information from the editor. In our past discussion we noted that a runtime will need to receive this information from the editor once they are split. We may also need to consider the case where the runtime and the editor authenticate against different servers.

(MAB) - Started a test project to demonstrate how to use npm install (npm 5.x) to set up symlinks in a monorepo during development: https://github.com/mblackstock/test-project. A normal npm install uses the published npm packages; running the devInstall script instead sets up symlinks to the local code in the repo. More info in the README.

(JME) - I'd like to propose a slightly different structure for the npm packages in the editor/runtime split, per ideas from @townlyn. We've discussed this lightly on calls, but I want to document it here.

The current proposal in this design doc is to have 4 packages: runtime, core-nodes, editor, and node-red.

We've been having discussions at Particle around the idea of making this split 5 packages instead: runtime, admin-api, core-nodes, editor, and node-red. This is because what is currently bundled together as the "runtime" can be thought of as 2 separate things:

  • The application that actually processes messages and runs flows, ongoing, in real time
  • The admin API that allows for managing node and flow configuration

At Particle, we would likely want to run the message processing on separate infrastructure from the admin API. The message-processing runtime should (be able to) run as a separate process, with some form of IPC between the runtime and the admin API. This would provide more flexibility in how Node-RED could be deployed, as well as more control over scaling the different parts of the system.
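
Purely as an illustration of the separation being described - this is not a proposed API - the admin process could fork the flow-running process and exchange messages over standard Node.js child-process IPC:

// Illustration only: an "admin" process and a separate "runtime" process.

// admin.js
const { fork } = require("child_process");
const runtime = fork("./runtime.js");

// e.g. triggered by a request handled by the admin API
runtime.send({ type: "deploy", flows: [ /* flow configuration */ ] });
runtime.on("message", msg => console.log("runtime:", msg));

// runtime.js
process.on("message", msg => {
    if (msg.type === "deploy") {
        // start or replace the running flows here
        process.send({ type: "deployed" });
    }
});

The same pattern could equally be realised with sockets or a message broker between separately deployed services.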

We currently follow a similar pattern for our webhooks system. Our REST API is a standalone app that allows for CRUD of webhooks. Separately we have a webhooks service that is responsible for actually queuing, sending and receiving the HTTP requests. The API and the webhooks service can communicate with one another, but functionally carry out distinct and separate responsibilities. This has served us very well in managing these two systems as their nature requires optimizing for different things in infrastructure-land (i.e. load-balancing inbound, synchronous requests to an API is quite different to managing queues of work in the background).

This approach of seeing the runtime as 2 separate components seems to align with the architecture diagram proposed in the "Roadmap to 1.0" slide deck.

What would y'all think of this proposal?