Tornado is the spiritual successor of the NetEye EventHandler. As such, we took inspiration from its limitations: we picked the most interesting ones and tried to optimize Tornado for exactly those scenarios. One example of this is the Processing Tree.
In the past, all rules had to be placed in one of four predetermined groups, and each event ran through just one of these linear sets of rules. As these lists of rules grow longer, they become increasingly hard to manage, and you quickly lose the overview of which rules apply to which events.
This is where the Processing Tree comes into play. In Tornado, the processing logic is now split into Rules and Filters, arranged as a Tree built from two kinds of nodes:

- a Rule Set, similar to the predetermined sets of Rules in the EventHandler
- a Filter Node, containing exactly one Filter

Rule Sets contain an ordered set of Rules which, as soon as an event reaches them, will be evaluated in that order. Should any of the matching Rules trigger an action[1], it will be fired immediately.

Filter Nodes contain no Rules, just a single Filter. Its purpose is to extract meaningful content from events passing through, enriching the event for any subsequent Rules and Filters, and to let through only those events that fulfill all the constraints it imposes. Should it have no constraints, all events will pass through to its children.
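To make these two node types a bit more tangible, here is a minimal conceptual sketch in Python. It is not Tornado’s implementation or configuration format (Tornado itself is written in Rust), and every class, field and sample event in it is invented purely for illustration: a Filter Node checks its single Filter’s constraints, optionally enriches the event, and forwards it to its children, while a Rule Set evaluates its Rules in order and fires the actions of the Rules that match.

# Conceptual sketch only -- not Tornado's real implementation or configuration format.

class FilterNode:
    def __init__(self, constraint=None, enrich=None, children=()):
        self.constraint = constraint    # callable(event) -> bool; None means "no constraints"
        self.enrich = enrich            # callable(event) -> None; may add fields to the event
        self.children = list(children)  # child Filter Nodes and/or Rule Sets

    def process(self, event):
        # Drop the event unless it fulfills the Filter's constraints.
        if self.constraint is not None and not self.constraint(event):
            return
        # Optionally enrich the event for all subsequent Rules and Filters.
        if self.enrich is not None:
            self.enrich(event)
        # Forward the (possibly enriched) event to every child node.
        for child in self.children:
            child.process(event)

class RuleSet:
    def __init__(self, rules):
        self.rules = list(rules)        # ordered list of (matches, action) pairs

    def process(self, event):
        # Rules are evaluated in order; each matching Rule fires its action immediately.
        for matches, action in self.rules:
            if matches(event):
                action(event)

# A Filter Node with no constraints in front of a single Rule Set:
# every event passes through and is evaluated by the Rule Set.
root = FilterNode(children=[
    RuleSet(rules=[
        (lambda event: event["event_type"] == "email",
         lambda event: print("archiving email event:", event)),
    ]),
])
root.process({"event_type": "email", "payload": {"subject": "hi"}})   # matches the Rule, action fires
root.process({"event_type": "sms", "payload": {}})                    # reaches the Rule Set, no Rule matches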
This simple concept lets us create intuitive flows of events that eventually reach one or more Rule Sets.
The simplest example of a Processing Tree is a tree without any filter at all. At the root is a single Rule Set, which will be evaluated for every single event sent to Tornado.
[Rule Set]
It gets a little more complicated if we add a Filter Node before the Rule Set. Let’s assume it lets through only events that contain a payload field named hostname. All other events will be discarded.
[Filter: Exists(payload.hostname)]
                |
                v
            [Rule Set]
We’re happy: our Rule Set now receives only those events that contain the hostname it needs.
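Just to make the constraint concrete: the Exists(payload.hostname) check boils down to “is the field present?”. The tiny Python sketch below is illustrative only, not Tornado’s actual matching syntax:

# Hypothetical predicate corresponding to Exists(payload.hostname).
def has_hostname(event):
    return "hostname" in event.get("payload", {})

print(has_hostname({"event_type": "x", "payload": {"hostname": "web01"}}))  # True  -> passed on to the Rule Set
print(has_hostname({"event_type": "x", "payload": {}}))                     # False -> discarded by the Filter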
Now suppose that there are still too many events reaching the Rule Set – in reality we’re only interested in those arriving from a specific Tornado webhook[2], let’s call it critical_event_occurred – so we place another Filter in front. Note that the order here is significant: in general, the earlier you can discard unwanted events, the higher your throughput will be.
[Filter: event_type == "webhook_critical_event_occurred"]
                |
                v
[Filter: Exists(payload.hostname)]
                |
                v
            [Rule Set]
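The effect of the ordering is easy to see with a small, purely illustrative simulation (the numbers are made up, this is not a Tornado benchmark): if only a small fraction of the incoming events come from our webhook, putting the event_type Filter first means the hostname check only ever runs on that fraction.

# Illustrative simulation of why filter order matters (not a real Tornado benchmark).

events = (
    [{"event_type": "webhook_critical_event_occurred", "payload": {"hostname": "web01"}}] * 10
    + [{"event_type": "something_else", "payload": {"hostname": "web01"}}] * 990
)

hostname_checks = 0

def from_webhook(event):
    return event["event_type"] == "webhook_critical_event_occurred"

def has_hostname(event):
    global hostname_checks
    hostname_checks += 1
    return "hostname" in event.get("payload", {})

# Webhook filter first: the hostname check only ever sees the 10 webhook events.
hostname_checks = 0
passed = [e for e in events if from_webhook(e) and has_hostname(e)]
print(len(passed), "events reach the Rule Set after", hostname_checks, "hostname checks")   # 10 events, 10 checks

# Hostname filter first: the hostname check runs on all 1000 events.
hostname_checks = 0
passed = [e for e in events if has_hostname(e) and from_webhook(e)]
print(len(passed), "events reach the Rule Set after", hostname_checks, "hostname checks")   # 10 events, 1000 checks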
Now suppose we get notified that there was a call to the critical_event_occurred webhook, but “somehow” it didn’t contain the necessary hostname field in the payload. For debugging purposes we would like to introduce another Rule Set containing only a single Rule, which logs all those events reaching our webhook that don’t contain this important field.
[Filter: event_type == "webhook_critical_event_occurred"]
        |                                     |
        v                                     |
[Filter: Exists(payload.hostname)]            |
        |                                     |
        v                                     |
[Rule Set: Trigger Action]                    |
                                              |
        ---------------------------------------
        |
        v
[Filter: Not(Exists(payload.hostname))]
        |
        v
[Rule Set: Critical Webhook Event Without Host]
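Expressed once more as a plain Python sketch (an illustration, not Tornado configuration): the root Filter passes matching events on to both children, and each child’s own Filter decides whether the event continues down that branch.

# Illustrative routing for the tree above -- not Tornado's actual configuration.

def route(event):
    # Root filter: only events from our webhook get past this point.
    if event.get("event_type") != "webhook_critical_event_occurred":
        return
    # Left child filter: Exists(payload.hostname) -> Rule Set that triggers the action.
    if "hostname" in event.get("payload", {}):
        print("Trigger Action for host", event["payload"]["hostname"])
    # Right child filter: Not(Exists(payload.hostname)) -> debugging Rule Set.
    if "hostname" not in event.get("payload", {}):
        print("Critical Webhook Event Without Host -- logging:", event)

route({"event_type": "webhook_critical_event_occurred", "payload": {"hostname": "web01"}})
route({"event_type": "webhook_critical_event_occurred", "payload": {}})
route({"event_type": "something_else", "payload": {"hostname": "db02"}})   # dropped by the root filter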
Of course it would be possible to reach the same goal with a single Rule Set containing two Rules, one for triggering and one for logging, but this concept can be applied on a larger scale to more complex use cases.
With the Processing Tree you can combine Filters by branch office, location, subnet, event type, payload content, host group, etc., without ever losing that vital high-level perspective of what is happening when.
[1] Actions can only be triggered by Rules and are baked-in mechanisms, e.g. write a log entry, set a monitoring state, run a custom script, …
[2] The Tornado Webhook Collector is a simple HTTP server that accepts POST requests containing JSON. It can be used as an endpoint to be called from various webhooks. In the R&D team we use it to manage PRs and issues from BitBucket, GitHub and Jira.
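For completeness, calling such a webhook endpoint is nothing more than an HTTP POST with a JSON body. The snippet below is only a sketch: the host, port, path and token are placeholders and depend entirely on how your collector is configured.

# Sketch of posting an event to a webhook collector endpoint.
# The URL and token below are placeholders -- use the values from your own setup.
import json
import urllib.request

url = "http://localhost:8080/event/critical_event_occurred?token=CHANGE_ME"
body = json.dumps({"hostname": "web01", "severity": "critical"}).encode("utf-8")

req = urllib.request.Request(url, data=body, headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as response:
    print(response.status)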