A Few Words about Logstash Filters and Dates
Some time ago I published an article describing how logs from the NetEye SMS Protocol can be stored in an ELK environment. Now, after having implemented it several times myself as described, I noticed that unfortunately it does not work entirely correctly. The reason is that the time/date handling in Logstash filters is a bit tricky.
Specifically, the date in the SMS protocol file is written like this:
June 29th 2016, 10:30:22 CEST 2016
And we used the Logstash date filter to convert it:
date {
  locale => "en"
  match  => [ "sms_timestamp_text", "EEE MMM dd HH:mm:ss" ]
}
At first glance this seemed to work, but after a while (a couple of days after the start of the next month) we noticed that the dates of the first days of the month looked like this:
July 1th 2016, 10:30:22 CEST 2016
Since we had a textual time zone, and the Logstash date filter does not support those, we had this rule to parse the sms_timestamp_text:
match =>[ "message", "%{SMS_TIMESTAMP_SHORT:sms_timestamp_text}
%{WORD:timezone} %{YEAR}:%{INT:sms_phonenumber}:%{GREEDYDATA:sms_text}"
These are the grok patterns we would use for that.
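SMS_TIMESTAMP_SHORT and SMS_TIMESTAMP are custom patterns rather than part of Logstash's standard grok set. A minimal sketch of how they might be defined in a patterns file follows; the names are taken from the grok rules in this post, while the file path and the exact regular expressions are assumptions based on a classic Unix-date style timestamp such as "Wed Jun 29 10:30:22":

# Hypothetical patterns file, e.g. /etc/logstash/patterns/sms (assumed path)
# SMS_TIMESTAMP_SHORT: day name, month, day of month and time, without zone/year
SMS_TIMESTAMP_SHORT %{DAY} %{MONTH} +%{MONTHDAY} %{TIME}
# SMS_TIMESTAMP: the same, but including the textual time zone and the year
SMS_TIMESTAMP %{SMS_TIMESTAMP_SHORT} %{WORD} %{YEAR}

The grok filter would then also need a matching patterns_dir => [ "/etc/logstash/patterns" ] entry so that these definitions are picked up.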
We realized that our filter did not work correctly because we use "dd", which only matches two-digit days. So what do we do when the day sometimes needs "d" and sometimes "dd"? After taking a closer look at the filter rules, I found the solution: it is possible to define "or" rules within a date filter and thereby match several date formats:
date {
  locale => "en"
  match  => [ "sms_timestamp_text", "EEE MMM dd HH:mm:ss Z yyyy", "EEE MMM d HH:mm:ss Z yyyy" ]
}
Note the new Z and yyyy parameters: if the pattern does not match the date exactly, parsing cannot work correctly. For this to parse properly, the grok match has to be changed as follows:
match => [ "message", "%{SMS_TIMESTAMP:sms_timestamp_text}:%{INT:sms_phonenumber}:%{GREEDYDATA:sms_text}" ]
As mentioned before, Logstash cannot parse textual time zones, yet that is exactly the situation we have here. So what should we do?
We know that our dates are in Central European time (CEST), so we have a way to convert this value:
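One common workaround, sketched below purely as an assumption (it presupposes that the logs only ever carry CET or CEST), is to rewrite the textual zone into a numeric offset with mutate/gsub before the date filter runs, so that the Z pattern can parse it:

filter {
  # Assumed workaround: translate the textual time zone into a numeric
  # offset that the date filter's "Z" pattern understands
  # (CEST = +0200, CET = +0100).
  mutate {
    gsub => [
      "sms_timestamp_text", "CEST", "+0200",
      "sms_timestamp_text", "CET",  "+0100"
    ]
  }
  date {
    locale => "en"
    match  => [ "sms_timestamp_text", "EEE MMM dd HH:mm:ss Z yyyy", "EEE MMM d HH:mm:ss Z yyyy" ]
  }
}

An alternative would be to drop the zone token from the grok pattern altogether and set the date filter's timezone option instead (e.g. timezone => "Europe/Rome"); either way, the resulting @timestamp ends up correctly normalized to UTC.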
Author
Juergen Vigna
I have over 20 years of experience in the IT sector. After first experiences in the field of software development for public transport companies, I finally decided to join the young and growing team of Würth Phoenix. Initially, I was responsible for the internal Linux/Unix infrastructure and the management of CVS software. Afterwards, my main challenge was to establish the meanwhile well-known IT System Management Solution WÜRTHPHOENIX NetEye. As a Product Manager I started building NetEye from scratch, analyzing existing open source models, extending and finally joining them into one single powerful solution. After that, my job turned into a passion: constant development, customer installations and support became a personal matter. Today I use my knowledge as a NetEye Senior Consultant as well as NetEye Solution Architect at Würth Phoenix.