Server Status Checker
Check server status instantly—verify web server health, response codes, and connectivity from global probe locations. Diagnose server-side issues including 500 errors, timeouts, and unreachable hosts.
Why Use Server Status Checker
Server issues manifest differently from general website problems: the domain may resolve correctly but the web server (Apache, Nginx, Node.js) crashes or stops responding. This checker specifically tests server-layer health—verifying the server process is running and returning valid responses. Essential for DevOps teams verifying server restarts completed successfully, developers confirming new deployments are live, or sysadmins diagnosing specific server-side errors like 502 Bad Gateway (app server crashed) or 504 Gateway Timeout (app server too slow).
- Server-layer focus: Tests web server response directly
- Error diagnosis: Interprets 5xx errors with likely causes
- TTFB measurement: Time to First Byte shows server processing speed
- Global validation: Confirms server reachable from multiple regions
- Header analysis: Shows server type (Nginx, Apache, Cloudflare)
Step-by-Step Tutorial
- Enter server URL: https://api.yourapp.com/health
- Click "Check Server"
- Review HTTP status: 200 = healthy, 502/503/504 = server issue
- Check TTFB: under 200ms = healthy, over 1s = slow server
- Review server header: shows Nginx, Apache, or CDN type
- Compare across regions to isolate geographic failures
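Steps 3-4 above reduce to simple interpretation rules. A minimal sketch, assuming nothing about the tool's internals; the function names are illustrative, not part of the checker:

```javascript
// Interpret a check result per steps 3-4 above (names are illustrative).
function classifyStatus(httpStatus) {
  if (httpStatus >= 200 && httpStatus < 300) return "healthy";
  if ([502, 503, 504].includes(httpStatus)) return "server issue";
  if (httpStatus >= 500) return "server error";
  return "check configuration"; // 3xx/4xx: reachable, but not a clean 200
}

function classifyTtfb(ttfbMs) {
  if (ttfbMs < 200) return "healthy";      // under 200ms = healthy
  if (ttfbMs <= 1000) return "acceptable"; // worth watching
  return "slow server";                    // over 1s = slow server
}
```
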
Real-World Use Case
A backend developer deploys a new Node.js API update. Post-deploy monitoring shows users reporting 502 errors. A server status check returns "502 Bad Gateway" from every probe location within 100ms. A fast 502 with no content indicates Nginx is running (receiving the request and responding quickly) but the Node.js process has crashed (no upstream to proxy to). The developer checks the process manager: pm2 list shows the Node.js process in "errored" state, and the PM2 logs reveal a startup error caused by a missing environment variable in the production config. The fix adds the missing env var, PM2 restarts the process, and the status check immediately returns 200 OK. Total resolution time: 12 minutes. The server status check instantly showed the proxy layer was healthy and the application layer had failed, guiding the correct fix path.
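The fast-502 pattern from this case can be expressed as a small heuristic. This is a sketch under assumed names and result shapes, not the tool's actual output format:

```javascript
// Heuristic from the case above: a 502 returned quickly from every probe
// suggests the reverse proxy is up but its upstream app process is down.
// `results` shape is hypothetical: [{ region, status, totalMs }].
function diagnose502(results) {
  const all502 = results.every(r => r.status === 502);
  const allFast = results.every(r => r.totalMs < 200);
  if (all502 && allFast) return "upstream app process likely crashed; check the process manager";
  if (all502) return "upstream error; inspect app server and proxy logs";
  return "not a uniform 502 pattern";
}
```
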
Best Practices
- Check a dedicated /health endpoint rather than full page for accurate TTFB
- TTFB under 200ms = healthy server, 200-500ms = monitor, over 500ms = investigate
- 502 errors = app server down; 503 = overloaded; 504 = app server timeout
- Monitor TTFB trends over time to detect gradual performance degradation
- Test after every deployment to catch regressions immediately
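The trend-monitoring practice above can be sketched as comparing a recent window of TTFB samples against an earlier baseline. The window size and 1.5x degradation threshold are arbitrary assumptions; tune them to your traffic:

```javascript
// Flag gradual TTFB degradation: compare the mean of the most recent
// `window` samples against the mean of the first `window` samples.
function ttfbTrend(samples, window = 5, threshold = 1.5) {
  if (samples.length < window * 2) return "insufficient data";
  const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
  const baseline = mean(samples.slice(0, window));
  const recent = mean(samples.slice(-window));
  return recent > baseline * threshold ? "degrading" : "stable";
}
```
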
Performance & Limits
- Metrics reported: HTTP status, TTFB, total response time, server headers
- Error interpretation: Plain-English explanation for common 4xx/5xx errors
- Probe locations: 4+ global regions
- Timeout handling: Shows "Connection timeout" if the server doesn't respond within 10 seconds
- Check frequency: Up to 10 manual checks per minute
Common Mistakes to Avoid
- Checking heavy pages for TTFB: Use /health endpoint—eliminates database/rendering time
- Confusing 502 and 503: 502 = upstream crashed; 503 = overloaded but running
- Ignoring server headers: "Server: cloudflare" means CDN is serving, not origin
- Not testing after deployments: Silent failures go unnoticed without post-deploy checks
Privacy and Data Handling
Server checks are performed by our probe infrastructure. Only the URL you provide is used. Response headers including server type information are displayed for diagnostics but not stored. No authentication credentials should be included in check URLs.
Frequently Asked Questions
What's the difference between 502, 503, and 504 errors?
All indicate server-side failures but with different causes:
- 502 Bad Gateway: the reverse proxy (Nginx/Apache) received an invalid response from the upstream app server. Usually means the app server (Node.js, PHP-FPM, Python) crashed or restarted. Check: is the app process running?
- 503 Service Unavailable: the server is up but refusing requests due to overload or intentional maintenance mode. Check: server resource usage (CPU/memory), active maintenance?
- 504 Gateway Timeout: the reverse proxy didn't receive a response from the upstream within the timeout period. The app server is running but too slow. Check: slow database queries, blocking operations, resource contention.
Priority: 502 = restart the app server immediately, 503 = scale up or end maintenance, 504 = profile the performance bottleneck.
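The triage guidance above can be encoded as a lookup table. The function name and message strings are illustrative; they mirror this FAQ, not any standard API:

```javascript
// Map a 5xx status to the likely cause and first action, per the FAQ above.
function triage5xx(status) {
  const map = {
    502: { cause: "reverse proxy got an invalid response from the upstream app server (likely crashed)", action: "restart the app server" },
    503: { cause: "server up but refusing requests (overload or maintenance mode)", action: "scale up or end maintenance" },
    504: { cause: "upstream did not respond within the proxy timeout (app too slow)", action: "profile the performance bottleneck" },
  };
  return map[status] || { cause: "unrecognized server error", action: "check server logs" };
}
```
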
What is TTFB and why does it matter?
Time To First Byte (TTFB) = time from client sending request until receiving first byte of response. Measures pure server processing time excluding transfer time. TTFB breakdown: DNS resolution + TCP connection + SSL handshake + server processing + first byte delivered. For static pages: TTFB should be 50-200ms. Dynamic pages (database queries): 100-500ms acceptable. API endpoints: under 200ms target. High TTFB indicates: slow database queries (most common), heavy server-side rendering, blocking I/O operations, cold starts (serverless). TTFB directly affects Core Web Vitals (Largest Contentful Paint). Google recommends TTFB under 800ms for good LCP. Use TTFB as first diagnostic signal—high TTFB → server-side problem, low TTFB but slow page → frontend/asset delivery problem.
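The TTFB breakdown above can be summed and checked against Google's 800ms recommendation. The phase names are illustrative; a real checker reports measured timings rather than supplied ones:

```javascript
// Sum the TTFB phases described above (DNS + TCP + TLS + server processing
// + first byte delivery) and compare against the 800ms good-LCP threshold.
function ttfbFromPhases(phases) {
  // phases: { dnsMs, tcpMs, tlsMs, serverMs, firstByteMs } (hypothetical shape)
  const totalMs = Object.values(phases).reduce((a, b) => a + b, 0);
  return { totalMs, goodForLcp: totalMs < 800 };
}
```
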
How do I set up a health check endpoint for monitoring?
A simple health endpoint returns 200 OK with basic status. Node.js/Express: app.get('/health', (req, res) => res.json({status:'ok',uptime:process.uptime()})). An advanced health check verifies dependencies: check database connection, cache connectivity, and external API availability, then return 200 if all are healthy or 503 if any critical dependency fails. Include: version number (for deployment verification), uptime, memory usage, and dependency statuses. Secure the health endpoint: allow public access for a simple ping, restrict detailed info to internal monitoring IPs. Use /health for basic availability, /health/detailed for dependency status (authenticated). Uptime monitoring services probe /health every minute, so it should be fast (under 100ms), stateless, and free of side effects (don't log each probe to the database).
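The dependency-aware variant described above can be sketched framework-free. `buildHealthResponse` and the dependency names are hypothetical; a real endpoint would wire this result into its HTTP handler:

```javascript
// Build a health response: 200 only if every critical dependency reports
// healthy, 503 otherwise, with version/uptime for deployment verification.
function buildHealthResponse(deps, version = "1.0.0") {
  // deps: { database: true, cache: false, ... } (hypothetical names)
  const allHealthy = Object.values(deps).every(Boolean);
  return {
    statusCode: allHealthy ? 200 : 503,
    body: {
      status: allHealthy ? "ok" : "degraded",
      version,
      uptime: process.uptime(), // Node.js process uptime in seconds
      dependencies: deps,
    },
  };
}
```
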
Why does my server show as up but API calls fail?
Server responding doesn't mean application logic is healthy. Common scenarios: (1) Server returns 200 for homepage but 500 for API endpoints—different code paths with different failure modes, (2) Database connection exhausted—new connections timeout but server still serves cached pages, (3) Background job processor crashed—server up but async tasks failing silently, (4) Third-party API dependency down—server healthy but dependent service unavailable, (5) Memory leak—server functional but slow and approaching OOM crash. Solution: implement comprehensive health checks that test critical paths (not just HTTP reachability). Test actual API endpoints, not just homepage. Add application-level monitoring (Sentry, Datadog) to catch logic errors invisible to uptime checkers.
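A minimal sketch of "test actual API endpoints, not just the homepage": aggregate per-path checks and report which critical paths fail. The paths and result shape are hypothetical examples:

```javascript
// Summarize per-endpoint checks: healthy only if every critical path passes.
function aggregateEndpointChecks(checks) {
  // checks: [{ path: "/api/users", ok: true }, ...] (hypothetical shape)
  const failing = checks.filter(c => !c.ok).map(c => c.path);
  return { healthy: failing.length === 0, failing };
}
```

This surfaces the scenarios above where the homepage returns 200 while specific API code paths fail.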