Anyone considered using a low-end product for a large implementation? I recently had occasion to discuss this with two pretty large implementations. On one of them, the team was excited about the lower-end product until they realized it had no distributed architecture. What does that mean? The maximum throughput the product can accept is whatever a single instance can handle, which puts a hard ceiling on scalability. If you want more, you stand up a whole separate instance – and the instances don't talk to each other.

Nimsoft has a highly distributed architecture using a multi-tiered model. Hubs (they don’t have to be dedicated machines) can be put anywhere and all connect together. It’s firewall friendly (1 port, 1 direction), it’s encrypted, and it’s guaranteed delivery. We have customers running today on thousands and thousands of devices and we can do much more.
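To make the store-and-forward idea concrete, here is a toy Python sketch of tiered hubs. This is not Nimsoft's actual protocol or code – the `Hub` class and its methods are illustrative assumptions – but it shows the key property: a hub queues events locally and only releases them once the upstream tier accepts delivery, so a broken link means delayed events rather than lost ones.

```python
from collections import deque

class Hub:
    """Illustrative monitoring hub (not Nimsoft's implementation):
    accepts events, buffers them, and forwards them upstream over a
    single outbound connection, keeping each event until delivered."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()   # store-and-forward buffer
        self.received = []     # events this hub has accepted

    def accept(self, event):
        self.received.append(event)
        self.queue.append(event)

    def flush_to(self, upstream, link_up=True):
        """Forward queued events to the upstream hub; if the link is
        down, events stay queued for the next attempt (guaranteed
        delivery) instead of being dropped."""
        while self.queue:
            if not link_up:
                return False   # keep remaining events buffered
            upstream.accept(self.queue.popleft())
        return True

# Usage: two regional hubs relay into a central hub.
central = Hub("central")
east, west = Hub("east"), Hub("west")
east.accept({"host": "db01", "sev": "critical"})
west.accept({"host": "web07", "sev": "warning"})

east.flush_to(central, link_up=False)  # link down: event stays queued
assert len(east.queue) == 1
east.flush_to(central)                 # link restored: event delivered
west.flush_to(central)
print([e["host"] for e in central.received])  # ['db01', 'web07']
```

Because each hub only needs one outbound connection to its parent, this kind of tiering is what makes a design firewall friendly: you open a single port in a single direction per hub, rather than punching holes for every monitored device.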

My advice to any buyer is to always test the scalability. I know it’s not easy to do in a lab environment, but you need to do it.

When do you most need a monitoring tool? When things are going wrong and events are flooding in.

When are non-scalable monitoring tools most likely to fail? Yep…you got it!
