Nagios HTTP Check
RobertKaucher
Member Posts: 4,299 ■■■■■■■■■■
in Off-Topic
Do any of you Nagios users know if there is a way to do an HTTP check (actually HTTPS) on a site whose server is unable to run the Nagios client?
Comments
-
undomiel Member Posts: 2,818
check_http -S -H $HOSTNAME$
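For reference, that one-liner gets wired into the Nagios config along these lines (a minimal sketch; the command/service names, host, and the generic-service template are placeholders, and -S tells the plugin to use SSL):

define command{
    command_name    check_https
    command_line    $USER1$/check_http -S -H $HOSTADDRESS$
}

define service{
    use                     generic-service
    host_name               webserver1
    service_description     HTTPS
    check_command           check_https
}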
-
Forsaken_GA Member Posts: 4,024
Generally speaking, yes, you just do an active check instead of a passive one. It also depends on your requirements. I personally don't like checking for just the port being open; I prefer to check for a unique string that should always be served up on the page, and if that doesn't come through, time to pop the critical.
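For example (hostname and string are placeholders; -S enables SSL, -s is the string to expect in the content):

check_http -S -H www.example.com -s "Welcome to Example"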
-
ChooseLife Member Posts: 941 ■■■■■■■□□□
Forsaken_GA wrote: »Generally speaking, yes, you just do an active check instead of a passive one. It also depends on your requirements. I personally don't like checking for just the port being open; I prefer to check for a unique string that should always be served up on the page, and if that doesn't come through, time to pop the critical.
The nice thing about Nagios is the ability to build dependencies. For web servers, I often chain a few checks of increasing complexity, e.g.
ping the server -> check the port is open -> check the / or /health is served with the right content -> check the web application works properly (user authentication, advanced app functions, etc).
That way when a warning state is raised, the failed check readily provides an indication of where in the application stack the problem is happening.
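Roughly, that chaining is expressed with Nagios service dependencies; a minimal sketch, with made-up host and service names (one such block goes between each link in the chain):

define servicedependency{
    host_name                       webserver1
    service_description             PING
    dependent_host_name             webserver1
    dependent_service_description   HTTP-Content
    execution_failure_criteria      c,u
    notification_failure_criteria   c,u
}

With this in place, a failed ping suppresses the downstream content check and its notifications, so only the lowest failing layer should alert.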
-
Forsaken_GA Member Posts: 4,024
ChooseLife wrote: »The nice thing about Nagios is the ability to build dependencies. For web servers, I often chain a few checks of increasing complexity, e.g.
ping the server -> check the port is open -> check the / or /health is served with the right content -> check the web application works properly (user authentication, advanced app functions, etc).
That way when a warning state is raised, the failed check readily provides an indication of where in the application stack the problem is happening.
Yup, I've always been a big fan of Nagios, and the folks who realize Nagios is more of a framework than an application are the ones who will be able to use it to its fullest. There are very few monitoring problems I haven't been able to solve with Nagios.
On the other hand, I also keep hearing good things about OpenNMS, so I'm thinking about giving it a spin sometime soon.
-
RobertKaucher Member Posts: 4,299 ■■■■■■■■■■
So what I am looking for is an example of how to make sure it is serving the content that is expected, and I cannot seem to find an example of how to do that properly. Sorry I was not very specific. The issue is my search terms seem far too generic...
Everyone, don't make me come over there!
-
Everyone Member Posts: 1,661
I believe what you're after is the same sort of "health check" that most hardware load balancers do. That is, response codes:
The 200 range is success codes.
The 300 range is redirect codes, but generally 302 is the only one you want to see in this range.
The 400 and 500 ranges are bad.
etc.
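If you go that route, the standard check_http plugin can be told which status codes count as success with -e (the host is a placeholder):

check_http -H www.example.com -e 200,302

The check fails if the first (status) line of the response matches none of the listed values.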
Something like this maybe: Nagios Exchange - check_http perl script
Some others that may be useful: Nagios Exchange - Websites, Forms and Transactions
-
Forsaken_GA Member Posts: 4,024
RobertKaucher wrote: »So what I am looking for is an example of how to make sure it is serving the content that is expected, and I cannot seem to find an example of how to do that properly. Sorry I was not very specific. The issue is my search terms seem far too generic...
Everyone, don't make me come over there!
Ok, well what is the content? Does it contain text that is unique to that page? If so, then this is a trivial check; just use the -s parameter of check_http. Result codes can't always be trusted, as proxies can interfere, and webservers can return 200s when the application has in fact crashed.
If you can't get a unique string, then it gets more interesting; you may have to check content headers and the like, but it's all still very possible.
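For instance, check_http can also match on response headers with -d (the expected header here is just an illustration):

check_http -H www.example.com -d "Content-Type: text/html"

A header match alone is weaker than a body match with -s, but it's an option when the body is fully dynamic.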
If this is a publicly accessible web page, PM me the URL and I'll see if I can craft you something.
-
RobertKaucher Member Posts: 4,299 ■■■■■■■■■■
Ok, but what is meant by a unique string? Could I use the entire text of the index page? It's only about 14 lines. Or do I have to use a smaller amount of text? If I can use the entire HTML content, how do I escape quotes? I would really like to see an example of this.
The documentation for that command switch is non-existent, and just try searching for "-s".
-
ChooseLife Member Posts: 941 ■■■■■■■□□□
RobertKaucher wrote: »Ok, but what is meant by a unique string? Could I use the entire text of the index page? It's only about 14 lines. Or do I have to use a smaller amount of text? If I can use the entire HTML content, how do I escape quotes? I would really like to see an example of this. The documentation for that command switch is non-existent, and just try searching for "-s".
If the page is static, one option is to hash the entire content:
curl http://www.google.ca 2>/dev/null | md5sum
then compare the output with a known value.
-
ChooseLife Member Posts: 941 ■■■■■■■□□□
E.g. this will do:
#!/usr/bin/env bash
# Exit 0 (OK) if the MD5 of the fetched page ($1) matches the expected hash ($2), else exit 2 (CRITICAL)
if [ "$(curl "$1" 2>/dev/null | md5sum | awk '{print $1}')" = "$2" ]; then exit 0; else exit 2; fi
You then run it as
./script_name https://www.example.com 91ced8078ac428e28b8b93ae066320a2
-
Forsaken_GA Member Posts: 4,024
RobertKaucher wrote: »Ok, but what is meant by a unique string? Could I use the entire text of the index page? It's only about 14 lines. Or do I have to use a smaller amount of text? If I can use the entire HTML content, how do I escape quotes? I would really like to see an example of this.
The documentation for that command switch is non-existent, and just try searching for "-s".
Well, that depends. I tend to like using Copyright notices, if they exist, as unique strings. If the entire page is dynamic content, then I'll try to match on meta data instead. The objective is to make sure the content that's supposed to be served is, and there will be *something* you can match on the page.
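E.g., a sketch with the plugin's regex option (the hostname and pattern are placeholders):

check_http -H www.example.com -r 'Copyright 20[0-9][0-9]'
-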
Forsaken_GA Member Posts: 4,024
ChooseLife wrote: »E.g. this will do:
#!/usr/bin/env bash
if [ "$(curl "$1" 2>/dev/null | md5sum | awk '{print $1}')" = "$2" ]; then exit 0; else exit 2; fi
You then run it as
./script_name https://www.example.com 91ced8078ac428e28b8b93ae066320a2
Great example of the versatility; however, this is only useful for static content that rarely, if ever, changes, as any change to the page changes the md5sum, necessitating a change to the value you call your script with. It's also entirely useless as a solution on a page with any dynamic content. (I'm not panning your idea, just pointing out that different problems require different solutions.)
-
Forsaken_GA Member Posts: 4,024
RobertKaucher wrote: »The documentation for that command switch is non-existent, and just try searching for "-s".
It surely is documented; run check_http from a command line with --help as a parameter.
(Note - I use Debian; your distro may contain a different version of check_http)
As far as practical examples:
rhaegar:/usr/lib/nagios/plugins# ./check_http -H www.google.com
HTTP OK: HTTP/1.1 200 OK - 13009 bytes in 0.059 second response time |time=0.059006s;;;0.000000 size=13009B;;;0
rhaegar:/usr/lib/nagios/plugins# ./check_http -H www.google.com -s "About Google"
HTTP OK: HTTP/1.1 200 OK - 13009 bytes in 0.058 second response time |time=0.057803s;;;0.000000 size=13009B;;;0
rhaegar:/usr/lib/nagios/plugins# ./check_http -H www.google.com -s "Forsaken is a putz"
HTTP CRITICAL: HTTP/1.1 200 OK - string 'Forsaken is a putz' not found on 'http://www.google.com:80/' - 13045 bytes in 0.062 second response time |time=0.062015s;;;0.000000 size=13045B;;;0
Like I said, incredibly trivial if you can match on a string guaranteed to be unique on the page.
-
ChooseLife Member Posts: 941 ■■■■■■■□□□
Forsaken_GA wrote: »Great example of the versatility; however, this is only useful for static content that rarely, if ever, changes, as any change to the page changes the md5sum, necessitating a change to the value you call your script with. It's also entirely useless as a solution on a page with any dynamic content. (I'm not panning your idea, just pointing out that different problems require different solutions.)
If the content is dynamically generated and needs to be verified in its entirety (the OP's requirement), that will certainly require custom scripting to generate the page against which returned content should be verified. Even in that case, I recommend comparing hashes of the two pages rather than the pages as blocks of text.
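A rough sketch of that approach in bash (generate_expected_page is a hypothetical command standing in for whatever renders the reference copy of the page):

#!/usr/bin/env bash
# Hash the locally generated reference page and the served page ($1), then compare
expected="$(generate_expected_page | md5sum | awk '{print $1}')"
served="$(curl "$1" 2>/dev/null | md5sum | awk '{print $1}')"
if [ "$expected" = "$served" ]; then exit 0; else exit 2; fi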
-
Forsaken_GA Member Posts: 4,024
ChooseLife wrote: »If the content is dynamically generated and needs to be verified in its entirety (the OP's requirement), that will certainly require custom scripting to generate the page against which returned content should be verified. Even in that case, I recommend comparing hashes of the two pages rather than the pages as blocks of text.
Well, like I said, the problem is that with everything being so dynamic these days (I mean, crap, it seems like *everything* has AJAX integrated these days), hashing is damn near impossible to pull off. For pages like those, I prefer to pull meta data to make sure the web server is actually returning data from the application on the backend (e.g., ColdFusion) and then monitor the backend services individually.