As a spin-off from the other thread about Sourcefire (
http://www.techexams.net/forums/off-topic/69318-sourcefire.html), I'd like to get some generalized input from those who have deployed various IPS systems within their environments.
I'm evaluating several different vendors at the moment, each with a price tag that carries bragging rights for high dollar amounts. I'm a firm believer that what works for some organizations doesn't necessarily work for others. However, a few fundamental issues come into scope as I form conclusions and recommendations.
1) Open vs. closed rule sets. Coming from a Snort background, I've enjoyed the luxury of responding to an alert and seeing exactly which rule caused the sensor to trigger. Some commercial vendors have completely proprietary systems where you can't inspect the signature itself to determine whether the packet hit a rule because the rule was too wide in scope. Snort / Sourcefire obviously also have shared object rules, which are binary blobs; Sourcefire does this to hide vulnerability information when the affected third-party vendor wants it kept closed for the time being, given the potential of it being weaponized as a 0-day. Most of their rules are open, however. Which leads me to...
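To illustrate what I mean by an open rule language, here's a made-up rule in Snort syntax (the sid, msg, and content are hypothetical, not from any real rule set). The point is that you can read exactly what the sensor matched on and judge for yourself whether the match criteria are too broad:

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"EXAMPLE possible directory traversal"; flow:to_server,established; content:"../"; http_uri; classtype:web-application-attack; sid:1000001; rev:1;)
```

With a closed system you only get the msg string, so you can't tell if a false positive came from, say, an overly generic content match like the "../" above.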
2) Being able to see the packet, both header and payload data. Snort / Sourcefire gives you the specific packet that triggered the alert. Although it doesn't provide additional data (such as the complete stream), it does provide a small slice of evidence from which to form initial conclusions. If I don't get this with vendor X, then I'm left completely reliant on that vendor's ability to write extremely accurate rules, because there's no way to verify them other than through relatively limited surrounding data (similar alerts on the same net, the same types of hosts, within a small time window, etc.).
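As a sketch of the kind of triage I mean, here's a minimal Python function (my own illustration, not anything vendor-provided) that splits a raw IPv4/TCP packet into header fields and payload, which is roughly the first look an analyst takes when the console hands over the triggering packet:

```python
import struct

def summarize_ipv4_packet(raw: bytes) -> dict:
    """Split a raw IPv4 packet into header fields and payload bytes."""
    ihl = (raw[0] & 0x0F) * 4          # IHL is in 32-bit words
    proto = raw[9]                      # 6 = TCP
    src = ".".join(str(b) for b in raw[12:16])
    dst = ".".join(str(b) for b in raw[16:20])
    payload = raw[ihl:]                 # everything past the IP header
    sport = dport = None
    if proto == 6:
        sport, dport = struct.unpack("!HH", payload[:4])
        data_off = (payload[12] >> 4) * 4   # TCP data offset, 32-bit words
        payload = payload[data_off:]        # application payload only
    return {"src": src, "dst": dst, "proto": proto,
            "sport": sport, "dport": dport, "payload": payload}
```

Even that small slice (who talked to whom, on what port, with what bytes) is usually enough to decide whether an alert deserves a deeper look.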
3) I think the IPS vs. IDS debate still rages on, but in short I'm assuming that organizations newly deploying IPS start with a "safe" (read: default) rule set which should block most of the obviously bad stuff. I'd also guess that most organizations never bother to continuously monitor and tune. If so, a lot of things potentially still slip by, because in my mind there's no way any vendor can get everything right at that first line of defense. I haven't read all the NSS Labs reports (assuming they're even reliable), but I'll throw out a number and say out-of-the-box we see an average 80% catch rate. So, does anyone deploy additional detection-only listening posts with more aggressive tuning? I understand that large-scale analysis of trillions of packets is prohibitively expensive, but I'm just not comfortable trusting any vendor's claim of stopping it all with their default rules, no matter how large their research teams are or whether they buy vulnerabilities discovered by third parties.
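To make the layered-sensor argument concrete with my made-up numbers: if the inline IPS catches 80% of attacks and a more aggressively tuned detection-only sensor catches, say, 60% of whatever slips past it, the combined detection rate is 80% + (20% of 60%) = 92%. A trivial Python sketch of that arithmetic (both rates are assumptions for illustration, and this treats the two sensors' misses as independent, which real traffic won't be):

```python
def combined_catch_rate(inline_rate: float, secondary_rate: float) -> float:
    """Fraction of attacks detected by an inline IPS plus a
    detection-only sensor watching what the IPS missed."""
    missed_by_inline = 1.0 - inline_rate
    return inline_rate + missed_by_inline * secondary_rate

# Hypothetical: 80% inline catch rate, 60% on the residual traffic
rate = combined_catch_rate(0.80, 0.60)
```

Even with generous assumptions, you never get to 100%, which is exactly why I'm skeptical of "we stop it all" marketing.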
For those who have deployed Sourcefire vs. others (Check Point, HP TippingPoint, Cisco, McAfee, IBM, Top Layer, Juniper, etc.), how do you feel about these points, assuming you had some bandwidth for things like log analysis and alert reviews? I'm trying to give the other vendors a fair shake and want to keep an open mind to new ways of doing things. But as with all expensive things, there's a lot of marketing fluff I need to weed through.