How Website Monitoring Works
Automated website monitoring keeps a continuous eye on URLs so you don't have to. Here's what happens under the hood.
The basic loop
At its core, website monitoring is a scheduled fetch loop. A monitoring tool sends an HTTP request to a target URL at a configured interval — every minute, every five minutes, every hour. It captures the response, parses the relevant content, and compares it to what it recorded on the previous run.
If nothing changed, the run is logged silently. If the content differs, an alert fires. That's the entire model — but the details of what you compare and how you compare it determine whether a monitoring tool is useful or noisy.
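The loop above can be sketched in a few lines of Python. This is a minimal illustration, not any particular tool's implementation: a SHA-256 hash of the response body stands in for whatever content a real monitor parses and compares, and run_check is a name invented here for one iteration of the loop.

```python
import hashlib
import time
import urllib.request


def fetch(url: str) -> str:
    # Fetch the raw page body; a real monitor adds retries and a User-Agent.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")


def run_check(url: str, state: dict, fetch_fn=fetch):
    """One iteration: fetch, fingerprint, compare against the previous run.

    Returns (old_fingerprint, new_fingerprint) when a change is detected,
    else None (unchanged, or the first run establishing a reference value).
    """
    fingerprint = hashlib.sha256(fetch_fn(url).encode()).hexdigest()
    previous = state.get(url)
    state[url] = fingerprint
    if previous is not None and fingerprint != previous:
        return (previous, fingerprint)
    return None  # nothing changed: log silently


def monitor(url: str, interval_s: int = 60):
    # The scheduled loop itself: check, alert on change, sleep, repeat.
    state = {}
    while True:
        change = run_check(url, state)
        if change:
            print(f"ALERT: {url} changed")
        time.sleep(interval_s)
```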
Full-page vs. selector-based monitoring
Naive monitoring tools diff the entire HTML response. This produces constant false positives — ads rotate, timestamps update, personalization elements shift, and A/B tests modify markup. Teams end up ignoring alerts because most don't indicate a real change.
Selector-based monitoring solves this by watching only a specific element. Using a CSS selector like .product-price or #stock-status, the tool extracts the target element's text content and compares only that. The result is far fewer false positives, and alerts that almost always correspond to a meaningful change.
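To make the idea concrete, here is a bare-bones extractor built on Python's standard html.parser. It supports only the two selector forms mentioned above (#id and .class) and ignores unclosed void tags like a bare <br>; real tools use a full CSS selector engine.

```python
from html.parser import HTMLParser


class SelectorText(HTMLParser):
    """Collect the text inside the first element matching '#id' or '.class'."""

    def __init__(self, selector: str):
        super().__init__()
        self.attr = "id" if selector.startswith("#") else "class"
        self.value = selector[1:]
        self.depth = 0    # > 0 while inside the matched element
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1     # nested tag inside the match
            return
        for name, val in attrs:
            if name == self.attr and val and self.value in val.split():
                self.depth = 1  # entered the matched element

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.parts.append(data)


def extract(html: str, selector: str) -> str:
    parser = SelectorText(selector)
    parser.feed(html)
    parser.close()
    return "".join(parser.parts).strip()
```

The monitor would then store and compare only extract(body, selector) between runs, so markup churn elsewhere on the page never triggers an alert.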
Establishing a baseline
When a new monitor is created, the tool runs an initial check and records the current value as the baseline. Every subsequent run compares the live value against this baseline. Some tools reset the baseline automatically after each change; others keep the original baseline so you can see cumulative drift over time.
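Both baseline strategies can be sketched with a small class. The reset_on_change flag is a name invented here for illustration: True re-baselines after each alert (compare against the last alerted value), False keeps the original baseline so later checks show cumulative drift.

```python
class BaselineMonitor:
    """Track a per-URL baseline value and report changes against it."""

    def __init__(self, reset_on_change: bool = True):
        self.reset_on_change = reset_on_change
        self.baselines = {}

    def check(self, url: str, live_value: str):
        if url not in self.baselines:
            # Initial check: record the current value as the baseline.
            self.baselines[url] = live_value
            return None
        baseline = self.baselines[url]
        if live_value == baseline:
            return None  # no change
        if self.reset_on_change:
            # Re-baseline so the next alert compares against this value.
            self.baselines[url] = live_value
        return (baseline, live_value)
```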
Alert delivery
When a change is detected, the monitoring system queues an alert job. The job sends a notification to configured channels — email, Telegram, Slack, webhooks — with the old value, the new value, the URL, and a timestamp. The best monitoring tools deliver this context in the alert itself so recipients can decide whether to act without having to visit the page.
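A sketch of what such an alert job might carry and format. The field names and payload shape here are illustrative, not any particular tool's schema; the point is that the old value, new value, URL, and timestamp all travel with the notification.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ChangeAlert:
    # Full context delivered in the alert itself.
    url: str
    old_value: str
    new_value: str
    detected_at: str


def build_alert(url: str, old: str, new: str) -> ChangeAlert:
    return ChangeAlert(url, old, new, datetime.now(timezone.utc).isoformat())


def webhook_payload(alert: ChangeAlert) -> str:
    # JSON body a delivery job might POST to a configured webhook.
    return json.dumps(asdict(alert))


def email_body(alert: ChangeAlert) -> str:
    # Plain-text rendering for email or chat channels.
    return (f"Change detected on {alert.url} at {alert.detected_at}\n"
            f"Old: {alert.old_value}\n"
            f"New: {alert.new_value}")
```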
Check frequency and rate limiting
How often a monitor runs depends on the plan tier and the target server's tolerance. Checking a page every 30 seconds surfaces changes sooner, but it is also more likely to trigger rate limiting or bot detection. Most production monitoring tools use adaptive scheduling and respect the Crawl-delay directive in robots.txt.
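One way such a policy might look, as a sketch rather than any specific tool's algorithm: never check faster than the site's Crawl-delay (which Python's standard urllib.robotparser can read), and back off exponentially while the target keeps answering 429 Too Many Requests.

```python
from typing import Optional
from urllib import robotparser


def crawl_delay_for(robots_txt: str, user_agent: str) -> Optional[int]:
    # Parse a robots.txt body and return its Crawl-delay in seconds for
    # this agent, or None when the file does not specify one.
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.crawl_delay(user_agent)


def effective_interval(configured_s: float,
                       crawl_delay_s: Optional[float],
                       consecutive_429s: int = 0) -> float:
    # Floor the interval at the site's Crawl-delay, then double it for
    # each consecutive 429 response (illustrative backoff policy).
    base = max(configured_s, crawl_delay_s or 0.0)
    return base * (2 ** consecutive_429s)
```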
What WatchPage does differently
WatchPage uses selector-based monitoring with stored change history. Every detected change records the previous value and the new value so you can audit the timeline of changes on any monitored URL. Alerts include full context so your team can act without logging in to investigate.