First of all, I’ll briefly explain what the “Tornado” in NetEye actually is.
Tornado is a Complex Event Processor that receives reports of events from data sources such as monitoring, email, and SNMP traps, matches them against rules you’ve configured, and executes the actions associated with those rules. These actions can include sending notifications, logging to files, and annotating events in a time series graphing system like Grafana.
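To make that rule/action model concrete, here is a minimal sketch of what a Tornado rule can look like in its JSON format. The match condition and the logger action are illustrative only, not taken from the customer setup described below:

```json
{
  "description": "Match incoming SNMP traps and log them",
  "continue": true,
  "active": true,
  "constraint": {
    "WHERE": {
      "type": "equals",
      "first": "${event.type}",
      "second": "snmptrap"
    },
    "WITH": {}
  },
  "actions": [
    {
      "id": "logger",
      "payload": {}
    }
  ]
}
```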
Recently I had a customer who wanted to display their incoming SNMP traps as alerts in NetEye monitoring, and at the same time store them in their Elastic database. (I should mention in advance that the SNMP traps reach the master monitoring system via various NetEye satellites.)
To implement this requirement with a standard NetEye installation, I decided to use Tornado. As a first step, I created a new data stream called snmptraps-archive in Kibana and gave the “Tornado” user write access to it.
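For reference, the same setup could be done from the Kibana Dev Tools console. This is a sketch under my assumptions: the template mapping is deliberately minimal, and the role name tornado_snmptraps_writer is hypothetical; you may prefer to manage the role through the Kibana security UI instead:

```
# Index template that makes snmptraps-archive a data stream
PUT _index_template/snmptraps-archive
{
  "index_patterns": ["snmptraps-archive*"],
  "data_stream": {},
  "priority": 200,
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" }
      }
    }
  }
}

# Create the data stream itself
PUT _data_stream/snmptraps-archive

# Grant the Tornado user write access via a role (role name is hypothetical)
PUT _security/role/tornado_snmptraps_writer
{
  "indices": [
    {
      "names": ["snmptraps-archive"],
      "privileges": ["create_doc", "auto_configure"]
    }
  ]
}
```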
I then created an additional rule in Tornado under the ruleset snmptrap and set up the new Elasticsearch action. The action definition can be seen in the following screenshot:
Let me annotate what you’re seeing here; a JSON sketch of the complete action follows the list.
#endpoint: The local Elasticsearch server must be specified as the endpoint.
#index: The index in which Tornado should store the SNMP traps is specified — here it’s the new snmptraps-archive data stream we defined above.
#data: Here you define the content of the document to be written to Elastic, i.e. which fields of the SNMP trap end up in the document. In my example I added the @timestamp field and a username so I can tell that the document was written by Tornado, and I include the entire trap.
#auth: In this section we set up authentication with Elastic. Since the certificates for the Tornado user are already defined in NetEye, I use them to authenticate against Elasticsearch.
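Put together, the action definition from the screenshot looks roughly like the following JSON. The four field names match the sections above; the endpoint hostname, the accessor expressions, and the certificate paths are illustrative and should be checked against your own NetEye installation:

```json
{
  "id": "elasticsearch",
  "payload": {
    "endpoint": "https://elasticsearch.neteyelocal:9200",
    "index": "snmptraps-archive",
    "data": {
      "@timestamp": "${event.created_ms}",
      "user": "tornado",
      "trap": "${event.payload}"
    },
    "auth": {
      "type": "PemCertificatePath",
      "certificate_path": "/neteye/shared/tornado/conf/certs/tornado.crt.pem",
      "private_key_path": "/neteye/shared/tornado/conf/certs/private/tornado.key.pem",
      "ca_certificate_path": "/neteye/shared/tornado/conf/certs/root-ca.crt"
    }
  }
}
```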
As soon as this rule is activated, the traps are written to the desired index.
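A quick way to verify this is to query the data stream from the Dev Tools console once the first trap has come in:

```
# Fetch the most recent trap document from the data stream
GET snmptraps-archive/_search
{
  "size": 1,
  "sort": [{ "@timestamp": "desc" }]
}
```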
Have fun trying it out.
Author
Tobias Goller
I started my professional career as a system administrator. Over the years, my responsibilities shifted from administrative work to the architectural planning of systems. During my time at Würth Phoenix, my focus moved to installing and consulting on the IT system management solution WÜRTHPHOENIX NetEye. These days I take care of planning and implementing customer projects around our unified monitoring solution.