NFS and Elasticsearch: A Storage Disaster for Data but a Lifesaver for Snapshots
When designing an Elasticsearch architecture, choosing the right storage is crucial. While NFS might seem like a convenient and flexible option, it comes with several pitfalls when used for hosting live Elasticsearch data (hot, warm, cold, and frozen nodes). However, NFS proves to be an excellent choice for storing snapshots and searchable snapshots. Here’s why.
Why NFS is a Bad Choice for Elasticsearch Data
1. Poor Performance and High Latency
NFS is a network protocol, introducing additional latency compared to local storage (NVMe, SSD, or HDD). Elasticsearch is highly sensitive to disk latency, as it performs numerous real-time I/O operations, such as:
Writing and updating indices
Searching and aggregating
Replica and segment rebalancing across nodes
The latency introduced by NFS can severely degrade cluster performance, slowing down queries and indexing operations (a recurring topic on the Elastic discussion forums).
2. Locking and Concurrency Issues
Elasticsearch, through Lucene, relies on file system locks to ensure data consistency. However, NFS does not handle file locks reliably, especially when multiple nodes access the same segments simultaneously. This can lead to:
Index corruption
Lock errors on shards
Failures during shard rebalancing or recovery
3. Weak Consistency and Crash Recovery Problems
NFS does not provide the same consistency and durability guarantees as local file systems like XFS or ext4; it offers only close-to-open cache consistency. In the event of a node crash or loss of connection to the NFS server, Elasticsearch can end up in an inconsistent state, resulting in hard-to-diagnose errors.
4. Scalability Bottlenecks
Elasticsearch is designed to scale by distributing the load across multiple nodes, each with its own local storage. Using NFS as shared storage for multiple nodes introduces contention, becoming a bottleneck that limits the cluster’s ability to scale efficiently.
Why NFS is Perfect for Snapshots (and Searchable Snapshots)
1. Snapshots: Reliable and Scalable Backups
Elasticsearch snapshots are point-in-time backups of indices, used for disaster recovery or data migration. In this case, NFS is an excellent choice (see the sketch after this list) because:
Snapshot writes are large, sequential operations, so they depend on throughput rather than on low-latency random I/O
NFS is easily expandable, allowing you to store large numbers of snapshots without impacting cluster performance
Snapshots and restores run in the background, so network latency does not affect Elasticsearch's real-time operations
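As an illustration, here is a minimal sketch of wiring this up through the snapshot REST API using Python. The cluster URL, the repository name (nfs_backups), the snapshot name, and the mount point (/mnt/nfs/es_snapshots) are placeholder assumptions; adapt them to your environment.

```python
# A minimal sketch, assuming an unsecured cluster at localhost:9200, a
# placeholder repository name "nfs_backups", and an NFS export mounted at
# /mnt/nfs/es_snapshots on every master and data node. The mount point
# must also be whitelisted in elasticsearch.yml:
#   path.repo: ["/mnt/nfs/es_snapshots"]
import requests

ES = "http://localhost:9200"

# 1. Register the NFS mount as a shared file system ("fs") repository.
resp = requests.put(
    f"{ES}/_snapshot/nfs_backups",
    json={
        "type": "fs",
        "settings": {
            "location": "/mnt/nfs/es_snapshots",
            "compress": True,  # compress index metadata in the repository
        },
    },
)
resp.raise_for_status()

# 2. Take a point-in-time snapshot of the matching indices.
resp = requests.put(
    f"{ES}/_snapshot/nfs_backups/nightly-001",
    params={"wait_for_completion": "true"},
    json={"indices": "logs-*", "include_global_state": False},
)
resp.raise_for_status()
print(resp.json()["snapshot"]["state"])  # "SUCCESS" once all shards are stored
```

Because snapshots are incremental, only segments that changed since the previous snapshot are copied to the repository, which keeps the load on the NFS share modest even with frequent snapshot schedules.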
2. Searchable Snapshots: Cost Optimization and Efficient Archiving
For the cold and frozen tiers, Elasticsearch offers searchable snapshots, allowing searches to run directly against snapshot data without restoring it first. Here, NFS provides significant advantages (see the sketch after this list):
Searchable snapshots are accessed in read-only mode, avoiding the locking and consistency issues of using NFS for live data
Local storage savings: searchable snapshots eliminate the need to keep full local copies of indices, reducing storage costs
On-demand access: data is fetched from the repository only when a query needs it, making NFS latency acceptable for data that's accessed less frequently
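As a sketch of what this looks like in practice, here is how an index from the snapshot above could be mounted as a searchable snapshot. The names are the same placeholders as in the previous example, and note that searchable snapshots require an appropriate (Enterprise) license.

```python
# A minimal sketch reusing the placeholder names from the previous example.
# storage=shared_cache mounts the index in the frozen tier, keeping only a
# local cache of recently read blocks; the cold tier uses storage=full_copy
# instead.
import requests

ES = "http://localhost:9200"

resp = requests.post(
    f"{ES}/_snapshot/nfs_backups/nightly-001/_mount",
    params={"storage": "shared_cache", "wait_for_completion": "true"},
    json={
        "index": "logs-2024.01.01",                 # index inside the snapshot
        "renamed_index": "logs-2024.01.01-frozen",  # optional new name
    },
)
resp.raise_for_status()
```

Once mounted, the index is searched like any other; Elasticsearch pulls the required blocks from the NFS repository on demand and caches them locally.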
Want to know more about snapshots? Check out the Elastic documentation.
Using NFS as primary storage for Elasticsearch is highly discouraged due to latency, locking, consistency, and scalability issues. However, NFS is an excellent solution for managing snapshots and searchable snapshots, providing a reliable backup strategy and efficient long-term data management in cold and frozen tiers.
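The two roles come together in an ILM policy that ages indices out of local hot storage and into the frozen tier as searchable snapshots on the NFS repository. A minimal sketch, again assuming the placeholder repository name nfs_backups from above:

```python
# A minimal sketch, not a production policy: indices roll over in the hot
# tier and, after 30 days, are moved into the frozen tier as searchable
# snapshots stored in the (placeholder) NFS-backed repository "nfs_backups".
import requests

ES = "http://localhost:9200"

policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {"max_age": "7d", "max_primary_shard_size": "50gb"}
                }
            },
            "frozen": {
                "min_age": "30d",
                "actions": {
                    # ILM snapshots the index into the given repository and
                    # mounts it as a shared_cache searchable snapshot.
                    "searchable_snapshot": {"snapshot_repository": "nfs_backups"}
                }
            },
        }
    }
}

resp = requests.put(f"{ES}/_ilm/policy/logs-nfs-demo", json=policy)
resp.raise_for_status()
```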
If you’re deciding where and how to use NFS in your Elasticsearch cluster, use it for snapshots—but never for live indices!
Author
Matteo Cipolletta
I'm an IT professional with a strong knowledge of Security Information and Event Management solutions.
I have proven experience in multiple enterprise contexts with managing, designing, and administering Security Information and Event Management (SIEM) solutions (including log source management, parsing, alerting, and data visualization), their related processes, and on-premises and cloud architectures, as well as implementing Use Cases and Correlation Rules to enable SOC teams to detect and respond to cyber threats.