The cron job needs an entry in your crontab to work. To open your crontab (on Ubuntu), run:
crontab -e
That opens an editor allowing you to edit your crontab. At present, the following has been set:
*/1 9-16 * * 1-5 python ~/private_html/django/djamon/scripts/cronjob.py
*/5 0-8 * * 1-5 python ~/private_html/django/djamon/scripts/cronjob.py
*/5 17-23 * * 1-5 python ~/private_html/django/djamon/scripts/cronjob.py
*/5 * * * 0,6 python ~/private_html/django/djamon/scripts/cronjob.py
This ensures that cronjob.py is run by the server every minute Monday to Friday between 9.00 a.m. and 4.59 p.m., and every 5 minutes at all other times.
This script is called by the cron job.
The cron job checks that the internet connection is working before checking your site(s), so it won't record an outage when the problem may lie with your own connection rather than your web pages. You will need to add some information to settings.py to set this up:
# Add a list of URLs that you know are usually working. If they are all down, then we assume the internet is down and don't bother with the monitoring process
SANDBOX_URLS = ('http://www.bbc.co.uk/', 'http://www.google.co.uk/')
GOOD_CODES = (200, 301, 304)
GOOD_CODES is a tuple of HTTP status codes that the monitor treats as 'up'. SANDBOX_URLS is a tuple of URLs that usually work; if all of them are down, the script assumes the internet connection is at fault and skips the monitoring run.
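The connectivity check can be sketched roughly as follows. This is a minimal sketch, not the project's actual code: internet_is_up is a hypothetical name, and the status-fetching function is injected as a parameter so the logic can be tested without a network.

```python
# Assumed settings, as described above.
SANDBOX_URLS = ('http://www.bbc.co.uk/', 'http://www.google.co.uk/')
GOOD_CODES = (200, 301, 304)

def internet_is_up(get_status, urls=SANDBOX_URLS, good_codes=GOOD_CODES):
    """Return True if at least one sandbox URL responds with a good code.

    get_status is a callable taking a URL and returning a status code
    (or None), so the check can be exercised without hitting the network.
    """
    return any(get_status(url) in good_codes for url in urls)
```

If every sandbox URL fails, the monitoring run is skipped rather than logged as an outage.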
This function is my own, but it's based on render_to_json above. You pass it a dictionary or list and it renders the object as JSON over HTTP.
“I have a gweat fwiend in Wome called Biggus Dickus”
And this takes a big dict... or a small dict. Or a list. It doesn’t discriminate as long as the object is serializable as JSON.
I’d like to write something that loops through the queryset and renders as a JSON object, regardless of type, so splitting datetimes into datetimes and all subqueryset objects into dicts/lists too. But that might be one for another time.
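A minimal version of such a helper might look like this. It's a sketch, not the project's actual implementation: the to_json name and the datetime handling are my assumptions based on the description above.

```python
import json
from datetime import datetime, date

def to_json(obj):
    """Serialise a dict or list to a JSON string, converting datetime
    and date values to ISO 8601 strings so json.dumps accepts them."""
    def default(value):
        if isinstance(value, (datetime, date)):
            return value.isoformat()
        raise TypeError('%r is not JSON serializable' % value)
    return json.dumps(obj, default=default)

# In a Django view you would wrap the string in a response, e.g.:
#     return HttpResponse(to_json(data), content_type='application/json')
```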
Simple function to paginate a list
Pass it the queryset, the number of objects per page and the current page number, and it returns the paginated list of objects, e.g.:
groups = paginate(group_list, 25, request.GET.get('page'))
This means you don't need to repeat the pagination boilerplate in every paginated view.
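The behaviour can be sketched in plain Python like this. It's a sketch of the usual Django pagination boilerplate, not the project's helper, which presumably wraps django.core.paginator: a non-numeric page falls back to page 1 and an out-of-range page to the last page.

```python
from math import ceil

def paginate(object_list, per_page, page):
    """Return the slice of object_list for the requested page.

    page may be a string (e.g. from request.GET); anything non-numeric
    falls back to page 1, and out-of-range pages clamp to the last page.
    """
    num_pages = max(1, ceil(len(object_list) / per_page))
    try:
        page = int(page)
    except (TypeError, ValueError):
        page = 1
    page = min(max(page, 1), num_pages)
    start = (page - 1) * per_page
    return object_list[start:start + per_page]
```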
Use this instead of render_to_response if the page will be requested via an XMLHttpRequest. iPads return a response status of 0 (an error) for cached content fetched over XHR, meaning that the request simply doesn't work.
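One way to work around this is to tell the browser not to cache the response at all. The headers below are a common approach and my assumption; the project's helper may do something different.

```python
def no_cache_headers():
    """Headers telling browsers (including iPads fetching via XHR)
    not to serve this response from cache."""
    return {
        'Cache-Control': 'no-cache, no-store, must-revalidate',
        'Pragma': 'no-cache',  # for HTTP/1.0 caches
        'Expires': '0',
    }

# In a Django view, set them on the response before returning it:
#     response = render_to_response(template_name, context)
#     for header, value in no_cache_headers().items():
#         response[header] = value
```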
This function gets the status code of a website (host) by requesting only the HEAD data from the host, i.e. the headers rather than the full page. If the host cannot be reached or something else goes wrong, it returns None instead.
Thanks to Evan Fosmark - http://stackoverflow.com/questions/1140661/python-get-http-response-code-from-a-url
It works as follows:
>>> url = 'http://www.bbc.co.uk/'
>>> get_status_code(url)
200
>>> url = 'http://www.bbc.co.uk/404-page/'
>>> get_status_code(url)
404
>>> url = 'http://www.bbc.co.uk/500-error/'
>>> get_status_code(url)
500
If it takes more than 5 seconds to get the headers, we assume there is a problem with the server and return a 500 code. The server might be fine, but if it's taking that long to respond we should treat it as down; our users won't tolerate waits like that.
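Based on the description above, the helper might be sketched like this. It's a sketch using Python 3's http.client; Evan Fosmark's original snippet used the Python 2 httplib module, so the details here are assumptions.

```python
import http.client
import socket
from urllib.parse import urlparse

def get_status_code(url, timeout=5):
    """Return the HTTP status code for url via a HEAD request.

    Returns 500 if the host takes longer than timeout seconds to
    respond, and None if it cannot be reached at all.
    """
    parsed = urlparse(url)
    try:
        conn = http.client.HTTPConnection(parsed.netloc, timeout=timeout)
        conn.request('HEAD', parsed.path or '/')
        return conn.getresponse().status
    except socket.timeout:
        # Too slow to respond: treat the server as down.
        return 500
    except Exception:
        return None
```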
See http://djangosnippets.org/snippets/690/
Calculates and stores a unique slug of value for an instance.
slug_field_name should be a string matching the name of the field to store the slug in (and the field to check against for uniqueness).
queryset usually doesn’t need to be explicitly provided - it’ll default to using the .all() queryset from the model’s default manager.
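The core uniqueness logic works roughly as follows. This is a simplified sketch over a plain collection of existing slugs; the real snippet at djangosnippets.org/snippets/690/ queries the model's manager and stores the result on the instance's slug field, and the slugify below is a minimal stand-in for Django's own.

```python
import re
import unicodedata

def slugify(value):
    """Minimal slugify: ASCII-fold, drop punctuation, lowercase,
    hyphenate whitespace. A stand-in for Django's slugify filter."""
    value = unicodedata.normalize('NFKD', value)
    value = value.encode('ascii', 'ignore').decode('ascii')
    value = re.sub(r'[^\w\s-]', '', value).strip().lower()
    return re.sub(r'[-\s]+', '-', value)

def unique_slug(value, existing):
    """Return slugify(value), appending -2, -3, ... until the result
    does not clash with any slug in existing."""
    base = slug = slugify(value)
    suffix = 2
    while slug in existing:
        slug = '%s-%s' % (base, suffix)
        suffix += 1
    return slug
```

In the snippet proper, existing is effectively the set of slugs already in the queryset (excluding the instance being saved), so saving the same instance twice keeps its slug stable.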