During NetEye development, the R&D team relies on a Continuous Integration system based on automated tools that helps us identify errors, gaps and missing requirements; in other words, it checks whether the actual results match the expected results.
Developing high quality software means not only staying on the safe side by finding and correcting bugs, but also improving a product’s performance. Insightful, detailed performance analysis can reveal how a system behaves and responds during various situations. This goes for NetEye, too. We know it can perform very well when monitoring ten hosts and five services, but what about monitoring and managing thousands of objects?
In terms of performance, we’d really like to achieve high speed, scalability and system stability.
Users expect pages to load as quickly as possible; when they don't, satisfaction drops, and users may even perceive load times to be slower than they actually are. The application should also always work: NetEye must be capable of performing well even when stressed, whether by a large amount of data or by a large number of requests that have to be processed.
Investing in software performance analysis helps detect problems early in development and saves developers from spending a lot of time on poorly designed code. It also increases productivity and improves the quality and usefulness of the resulting software product.
Our experience with software performance analysis has been exciting: it lets you deeply understand your code's bottlenecks and its real-world performance. During this process we clearly identified the critical use cases, but it is not always easy to define the best approach for improving algorithm performance. We find you need a good balance between what could be defined as an optimal approach and what can realistically be achieved within a release cycle.
This means that if you face performance issues in your code, you should first address the most critical use cases, which can usually be identified as:
Tuning with a profiler can guide all levels of optimization, but we would caution that a top-down approach is generally advisable, and therefore care should be taken to avoid optimizing at too low a level, too soon.
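To make the profiler-guided, top-down idea concrete, here is a minimal sketch using Python's standard `cProfile` and `pstats` modules. The `process_objects` function is purely hypothetical, standing in for any workload that handles many monitored objects; the point is that sorting by cumulative time surfaces the most expensive high-level call paths first, which is where top-down tuning should start.

```python
import cProfile
import io
import pstats

def process_objects(n):
    # Hypothetical workload standing in for a task that iterates
    # over many monitored objects (hosts, services, ...).
    total = 0
    for i in range(n):
        total += sum(range(i % 100))
    return total

profiler = cProfile.Profile()
profiler.enable()
result = process_objects(10_000)
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
# Sorting by cumulative time shows the top-level hotspots first,
# supporting a top-down approach instead of micro-optimizing leaves.
stats.sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Reading the report from the top tells you which high-level operation dominates the run before you decide whether any low-level optimization is worth the effort.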