I've been working for the last week on implementing Zenoss to replace Nagios and Cacti. Individually Nagios and Cacti are pretty good at what they do, but they don't integrate well.
Nagios is primarily an availability monitor, so it's good for notifying you when something goes down, a disk is filling up, the load average is too high, and so on, but it's not so great for monitoring performance. Nagios 1.4 uses text configuration files. There is a templating system which can be helpful if you have a lot of identical systems; a short example follows.
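For instance, a generic template can carry all the check and notification settings, and each real host just points at it. The names and values here are made up for illustration:

    define host {
        name                  generic-linux    ; template name, not a real host
        register              0                ; don't monitor the template itself
        check_command         check-host-alive
        max_check_attempts    3
        notification_interval 120
        notification_period   24x7
        notification_options  d,u,r
    }

    define host {
        use        generic-linux               ; inherit everything above
        host_name  web01
        address    192.168.1.10
    }

With a few hundred similar hosts, that inheritance is the only thing that keeps the config files manageable.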
Cacti, on the other hand, is pretty good at monitoring performance, such as how much bandwidth you're using, resource utilization, and so on, with nice long-term graphs using RRDtool, but it's not so great at notifying you when something is down. Cacti is almost exclusively SNMP-based, and as a result, you can usually just point it at a device through the web interface and it will auto-discover everything interesting. If you have more than a few hundred items to measure, you need to use cactid, a very fast threaded poller written in C.
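You can see the same kind of data Cacti discovers by walking a device's interface table yourself; the hostname and community string here are placeholders:

    snmpwalk -v2c -c public switch01 IF-MIB::ifDescr

Every interface that turns up in that walk is a candidate for a traffic graph.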
I've been using both for about 3-4 years separately, but because they don't integrate easily (even though both use MySQL as their backend storage), there's a lot of duplication of effort in getting both of them configured.
And then there's Zenoss. Zenoss does both availability and performance monitoring, with long-term graphing using RRDtool, log analysis, and network-based auto-discovery. Zenoss is written in Python using the Zope 2 framework. Most of the device metadata is stored in ZODB, Zope's native object database. Long-term performance data is stored in RRD files. Event logs are stored in MySQL.
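Because the model lives in ZODB, you can poke at it interactively with zendmd, Zenoss' Python console; the device name below is just an example:

    # zendmd drops you into Python with "dmd" bound to the root of the object tree
    dev = dmd.Devices.findDevice('switch01')
    print dev.id, dev.getManageIp()
    print len(dmd.Devices.getSubDevices()), "devices total"
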
Everything in Zenoss integrates together very well. The data is faceted in the sense that you can browse devices by location, by class, by group, or by system. It has a built-in syslog server, it can use WMI to monitor Windows systems, and it has very flexible event handling.
There are still some rough edges in 2.1.92, which is a beta for 2.2. First, it's a bit of a memory hog, and I'm inclined to believe there are some memory leaks. After a day or two the main process will start to use over 200 MB; restarting tends to knock it back down to around 100 MB or so.
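If you want to watch this yourself, resident set size per process is enough to spot the trend; on Linux something like this works, since the daemon names all start with "zen":

    ps axo pid,rss,args | grep zen
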
Syslog support has some issues. When I first started feeding it some syslog data, all the events were being classified as "/Unknown". That part is normal: once you have some log entries, you can tell Zenoss to map each entry to an event class. The problem was, the events had components (the process name when parsing syslog data), but they had no event class keys set. Looking at the code, it seemed like it should have been setting the event class key to the component/process name. It just wasn't. After some Googling, I found out the code that builds the event class key was just plain broken. After making the suggested changes, I could start mapping events.
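Here's a minimal sketch of the kind of fallback logic involved; this is my illustration, not the actual Zenoss source, and the field names are assumptions:

    # Sketch only: give an event an eventClassKey so class mappings can match it.
    # Field names (eventClassKey, ntevid, component) are assumed for illustration.
    def buildEventClassKey(evt):
        if getattr(evt, 'eventClassKey', ''):
            return evt                                  # explicit key wins
        if getattr(evt, 'ntevid', ''):
            # Windows events: key on process name plus NT event ID
            evt.eventClassKey = "%s_%s" % (evt.component, evt.ntevid)
        elif getattr(evt, 'component', ''):
            # syslog events: fall back to the parsed process name
            evt.eventClassKey = evt.component
        return evt
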
Another syslog problem was in parsing the hostname. I have a satellite syslog-ng server in a remote location that logs to my central syslog-ng server. Because of this, the hostname field has the relay information in it. Zenoss' syslog support has an option to parse this, though, so no problem, right? Despite turning this on, I was still getting entries like IP/IP, so it was back into the syslog code. It turns out Zenoss expects the separator between the two hostnames to be "@", while syslog-ng uses "/". It was an easy fix in the code, but I suspect "@" may be right for standard syslog, so the separator really needs to be a configuration option.
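Something along these lines is all it takes; this is an illustrative sketch rather than the shipped parser, and it assumes syslog-ng's convention of writing the chain as originating-host/relay-host:

    # Sketch only: pull the originating host out of a relayed hostname field
    # such as "10.1.0.5/10.2.0.9", with the separator configurable rather
    # than hardcoded to "@".
    def parse_relayed_host(hostfield, separator='/'):
        return hostfield.split(separator)[0].strip()
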
Despite all of this, I like Zenoss a lot. I am running it in parallel with Nagios until I get all the event handling nailed down. I might need 2 GB of RAM on the monitoring server, though, and I have already moved the MySQL database onto a different server.