NetEye installations can be either Standalone (Single Node) or Cluster configurations. In both cases it's possible to extend monitoring into segregated portions of the network or remote locations, or simply to lighten the load on the master, through the use of one or more satellites (the number of satellites that can be installed depends on the type of NetEye license purchased).
These installations do not always fit the standard mold, and they can even evolve over time according to the needs of each customer. In fact, in addition to cluster installations, where the resources configured on the pcs service are distributed among the various nodes in order to offer a highly reliable service, the need may sometimes arise to balance one or more services exposed directly by two or more satellites.
There are situations where it won't be possible for the customer to use an external load balancing service, and it will therefore be necessary to set up local software load balancing. What we're going to use in this situation is the Keepalived service, already present in the Red Hat license included with NetEye.
Let’s walk through an example of how we can balance an SNMP Trap reception service directly on two NetEye satellites.
PREREQUISITES:
- Two NetEye satellites on the same network segment (in this example 10.0.0.11 and 10.0.0.12, on interface ens192)
- A free IP address in the same subnet to use as the Virtual IP (in this example 10.0.0.100/24)
- Root access on both satellites
CONFIGURATION:
We start by installing the keepalived package on each satellite:
# dnf install keepalived
Once the package is installed, we'll edit the following configuration file on each satellite according to the role we want to assign to it:
# cd /etc/keepalived/
# vim keepalived.conf
On the first satellite we’re going to edit that configuration file so it looks like this (be sure to use your own email addresses):
global_defs {
    notification_email {
        wuerth-phoenix.net
        andrea.mariani@wuerth-phoenix.net
    }
    notification_email_from neteye@wuerth-phoenix.net
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id neteye-sat-1
}

vrrp_sync_group VG1 {
    group {
        VI_1
    }
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 1
    priority 200
    advert_int 1
    unicast_src_ip 10.0.0.11
    unicast_peer {
        10.0.0.12
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100/24
    }
}
On the second satellite we edit the configuration file to look like this:
global_defs {
    notification_email {
        wuerth-phoenix.net
        andrea.mariani@wuerth-phoenix.net
    }
    notification_email_from neteye@wuerth-phoenix.net
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id neteye-sat-2
}

vrrp_sync_group VG1 {
    group {
        VI_1
    }
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 1
    priority 100
    advert_int 1
    unicast_src_ip 10.0.0.12
    unicast_peer {
        10.0.0.11
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100/24
    }
}
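Before going further, we can run a quick sanity check on the files we just edited. Recent keepalived versions (2.0.4 and later, such as the one shipped with RHEL 8) should support a configuration test mode that reads /etc/keepalived/keepalived.conf by default and exits non-zero on parse errors:
# keepalived -t && echo "configuration OK"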
Looking at the newly edited files, we can see that in the global_defs section we defined the domain name and email address of the recipient of a potential notification email in case of problems with the service. We also configured the sender address for the NetEye machine and the local IP of the mail sending service (if configured). Finally, we set the keepalived node name (router_id), which corresponds to the satellite's hostname.
Both satellites have to belong to the same sync group, in this case VG1 (Virtual Group 1), which in turn contains the VRRP instance that has been defined, here named VI_1.
Within vrrp_instance VI_1 we define for each node whether it will be the MASTER node or the BACKUP node.
We also defined the ID of the Virtual Router (virtual_router_id), the priority (which on the master node must always be higher than on the other nodes), the local IP of each node, the IP of its peer, and finally the virtual IP (VIP) itself.
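Optionally, since in our example the VIP fronts an SNMP trap receiver, we can also tell keepalived to give up the MASTER role when that service dies, and not only when the whole node goes down. The following is a minimal sketch, assuming the trap receiver runs as a systemd unit called snmptrapd (a hypothetical name; replace it with your actual service): the vrrp_script block goes into keepalived.conf on both satellites, and the track_script reference is added inside vrrp_instance VI_1:
vrrp_script chk_snmptrapd {
    # exits 0 only while the (assumed) snmptrapd unit is active
    script "/usr/bin/systemctl is-active --quiet snmptrapd"
    interval 2      # run the check every 2 seconds
    fall 2          # 2 consecutive failures mark the check as down
    rise 2          # 2 consecutive successes mark it as up again
    weight -150     # on failure, drop the MASTER priority (200) below the BACKUP one (100)
}

vrrp_instance VI_1 {
    # ...existing settings as shown above...
    track_script {
        chk_snmptrapd
    }
}
With a negative weight, a failed check lowers the node's priority instead of hard-faulting it, so the BACKUP node wins the next VRRP election and takes over the VIP.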
After this step is completed, we can start the service and enable it so it starts automatically at boot:
# systemctl restart keepalived.service
# systemctl enable keepalived
# systemctl status keepalived.service
At this point, if we haven't encountered any errors, running the command below on the first satellite (MASTER) should produce the following output, with the VIP assigned to the interface:
# ip -brief address show
lo UNKNOWN 127.0.0.1/8 ::1/128
ens192 UP 10.0.0.11/24 10.0.0.100/24 fe80::250:56ff:feb9:cc8f/64
While on the second satellite (the BACKUP node) we should see:
# ip -brief address show
lo UNKNOWN 127.0.0.1/8 ::1/128
ens192 UP 10.0.0.12/24 fe80::250:56ff:feb9:3c4e/64
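We can also verify that the failover really works by simulating a failure. On the first satellite (MASTER) we stop the service:
# systemctl stop keepalived.service
Within a couple of advertisement intervals the VIP should show up on the second satellite; the output below is what we'd expect to see:
# ip -brief address show
lo UNKNOWN 127.0.0.1/8 ::1/128
ens192 UP 10.0.0.12/24 10.0.0.100/24 fe80::250:56ff:feb9:3c4e/64
Once keepalived is started again on the first satellite, it will take the VIP back, since it has the higher priority:
# systemctl start keepalived.service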
The final step is to verify that the ports of the services that should answer on the VIP are open in the firewall, and add them if necessary:
# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens192
  sources:
  services: cockpit dhcpv6-client http https ssh
  ports: 4222/tcp 5665/tcp
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
# firewall-cmd --zone=public --add-port=161/udp --permanent
# firewall-cmd --zone=public --add-port=162/udp --permanent
# firewall-cmd --reload
# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens192
  sources:
  services: cockpit dhcpv6-client http https ssh
  ports: 4222/tcp 5665/tcp 161/udp 162/udp
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
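Finally, we can send a test trap to the VIP to verify the whole chain end to end. Below is a minimal sketch using the snmptrap utility from the net-snmp-utils package; the community string public and the example OIDs (the heartbeat notification from NET-SNMP-EXAMPLES-MIB) are assumptions, so use the values your trap receiver actually expects:
# dnf install net-snmp-utils
# snmptrap -v 2c -c public 10.0.0.100 '' 1.3.6.1.4.1.8072.2.3.0.1 1.3.6.1.4.1.8072.2.3.2.1 i 123456
The trap should arrive on whichever satellite currently holds the VIP.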
I hope this short guide can help you quickly implement balanced addresses. If you need further insights, I refer you to the official Red Hat guides on Keepalived.
Did you find this article interesting? Does it match your skill set? Our customers often present us with problems that need customized solutions. In fact, we’re currently hiring for roles just like this and others here at Würth Phoenix.