Backend Server Access and Firewall Configuration

Your edge proxy needs reliable access to your backend servers to function correctly. When firewalls or rate limiting systems block our edge servers, your endpoint stops working and visitors see errors instead of your website.

How the Edge Proxy Connects

When a request comes to your endpoint, our edge server makes HTTP or HTTPS requests to your backend URL. From your backend's perspective, all requests come from our edge IP address, not from individual visitors. Your server sees a high volume of requests from a single source, which can trigger security measures that mistake legitimate proxy traffic for an attack.
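You can see this pattern directly in your backend's access log. The sketch below counts requests per client IP (first field of a common/combined-format log read from stdin); with a proxy in front, one address should dominate. The IP 203.0.113.10 is a placeholder standing in for the edge IP shown in your dashboard:

```shell
# Count requests per client IP in an access log read from stdin.
# With an edge proxy in front, a single IP (the edge IP) dominates.
per_ip_counts() {
  awk '{ count[$1]++ } END { for (ip in count) print count[ip], ip }' | sort -rn
}
```

Run it as `per_ip_counts < /var/log/nginx/access.log` (path varies by server) to confirm the request volume your security tooling is reacting to.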

Common Problems from Blocked Access

If our edge server is blocked, your endpoint stops working entirely. Users receive timeout errors because we can't reach your backend to fetch content.

Rate limiting causes subtler problems. Your backend might allow the first few requests from our edge IP, then start rejecting subsequent ones. This creates intermittent failures where your site works inconsistently or loads slowly. Some security systems also apply progressive penalties: escalating blocks that persist even after you identify the underlying problem.

Firewall Configuration

If your backend server has a firewall, you need to explicitly allow traffic from our edge IP addresses. Check your dashboard for the current edge IP that needs access to your backend.

For Linux servers running UFW, add an allow rule: ufw allow from EDGE_IP_ADDRESS to any port 80. For iptables: iptables -A INPUT -s EDGE_IP_ADDRESS -p tcp --dport 80 -j ACCEPT. Make sure these rules persist across reboots by saving your firewall configuration.
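The commands above can be gathered into a small helper that covers both HTTP and HTTPS and reminds you to persist the rules. This is a dry-run sketch: it only prints the commands, and both the IP argument and the iptables-save path are assumptions to adapt to your distribution:

```shell
# Print (dry run) firewall commands allowing an edge IP on ports 80 and 443.
# The IP you pass in should be the edge IP from your dashboard.
emit_firewall_rules() {
  local ip="$1"
  for port in 80 443; do
    echo "ufw allow from $ip to any port $port proto tcp"
    echo "iptables -A INPUT -s $ip -p tcp --dport $port -j ACCEPT"
  done
  # Persist iptables rules across reboots (path varies by distribution):
  echo "iptables-save > /etc/iptables/rules.v4"
}
```

Review the output, then apply the lines you need, for example `emit_firewall_rules 203.0.113.10 | sudo sh` (203.0.113.10 is a placeholder).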

Many hosting providers offer firewall controls in their control panels. Look for firewall, security, or access control settings where you can add our edge IP to an allowlist. The interface varies by provider, but the concept is the same: create a rule allowing our edge IP on your application's port (usually 80 for HTTP or 443 for HTTPS).

Rate Limiting

Many hosting providers include rate limiting that can inadvertently block our edge servers. These systems trigger on high request volumes from a single IP address, which is what our edge proxy generates during normal operation.

Check your hosting control panel for rate limiting or security settings. Look for options to allowlist or exclude our edge IP from automatic rate limiting. If your hosting provider implements network-level rate limiting without configuration options, contact their support, explain that you're using a reverse proxy service, and ask to have our edge IP allowlisted.

Application-level rate limiting in your web server configuration or application code also needs consideration. If you've configured nginx, Apache, or your application to limit requests per IP, our edge server will hit those limits quickly. Either raise the limits for our edge IP or exclude it from rate limiting entirely.
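If your per-IP limiting lives in nginx, the stock limit_req module supports exactly this kind of exemption: map the trusted IP to an empty key, and requests with an empty key are never counted against the limit. A sketch for the http context, with 203.0.113.10 as a placeholder edge IP and the zone name and rate chosen arbitrarily:

```nginx
# Map the edge IP to an empty limit key; everyone else is keyed by address.
geo $edge_exempt {
    default        0;
    203.0.113.10   1;   # placeholder: the edge IP from your dashboard
}
map $edge_exempt $limit_key {
    1 "";                   # empty key: exempt from rate limiting
    0 $binary_remote_addr;  # all other clients limited per IP
}
limit_req_zone $limit_key zone=perip:10m rate=10r/s;

server {
    location / {
        limit_req zone=perip burst=20 nodelay;
        # ... your existing proxy_pass / root configuration ...
    }
}
```

Apache and application frameworks usually offer an equivalent exemption list; the principle is the same.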

Web Application Firewalls

If you're using a web application firewall, it may block our edge servers by misinterpreting proxy traffic patterns as malicious activity. Most WAFs allow you to create IP-based bypass rules for trusted sources. Add our edge IP to this allowlist so it bypasses WAF inspection.

Testing Backend Connectivity

Before pointing your DNS to our edge servers, verify connectivity by checking your backend's access logs. When you test your endpoint through our service, you should see log entries showing our edge IP as the source. If you don't see any requests arriving, traffic is being blocked.
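A quick way to do this check is to filter the access log for the edge IP and break its requests down by HTTP status, which also surfaces rate limiting (a run of 429s or 503s). A sketch assuming a combined-log-format log on stdin, with 203.0.113.10 as a placeholder edge IP:

```shell
# Show whether (and how) a given source IP is reaching the backend:
# prints one line per HTTP status code with the request count.
edge_requests_by_status() {
  local ip="$1"
  awk -v ip="$ip" '$1 == ip { status[$9]++ }
       END { for (s in status) print s, status[s] }'
}
```

For example, `edge_requests_by_status 203.0.113.10 < /var/log/nginx/access.log`; no output at all means no requests from that IP arrived.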

Our health check feature provides ongoing monitoring. If the health check fails, it indicates our edge server cannot reach your backend, usually due to firewall or network configuration issues. A successful health check confirms basic connectivity is working.

You can test directly using curl: curl -H "Host: your-domain.com" http://your-backend-ip/. This simulates what our edge server does when making backend requests. If this command times out or fails, investigate network-level blocking.
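When the curl test fails, its exit code narrows down the failure mode. The helper below maps the most common codes to their likely network-level cause (a sketch, not an exhaustive list; see curl's EXIT CODES documentation for the full set):

```shell
# Translate common curl exit codes into the likely network-level cause.
explain_curl_exit() {
  case "$1" in
    0)  echo "success: backend reachable" ;;
    6)  echo "could not resolve host: check the backend hostname or DNS" ;;
    7)  echo "could not connect: service down, wrong port, or firewall reject" ;;
    28) echo "timed out: packets likely dropped silently by a firewall" ;;
    *)  echo "curl failed with exit code $1 (see the curl manual, EXIT CODES)" ;;
  esac
}
```

Typical use: `curl -sS -m 10 -H "Host: your-domain.com" http://your-backend-ip/ >/dev/null; explain_curl_exit $?`. A timeout (28) usually points at a firewall dropping packets, while a refused connection (7) points at the service or port.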

SSL/TLS Configuration

If your backend uses HTTPS, our edge server establishes SSL/TLS connections automatically. Your backend's SSL certificate must be valid and unexpired. For backends using self-signed certificates, the "backend_ssl" setting in your endpoint configuration controls how we handle these connections.

Monitoring

Set up monitoring to detect when our edge servers are being blocked. Your firewall logs, application logs, and web server logs provide visibility into rejected connections. Configure alerts for repeated connection failures from our edge IP.
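As a starting point, a small check like the one below counts firewall rejections of the edge IP in UFW-style kernel log lines and prints an alert when a threshold is crossed. The log format, the default threshold of 5, and the IP 203.0.113.10 are all assumptions; adapt them to your firewall and alerting pipeline:

```shell
# Count firewall-rejected packets from a source IP in UFW log lines
# (read from stdin) and print an alert when a threshold is crossed.
check_edge_blocks() {
  local ip="$1" threshold="${2:-5}" n
  n=$(grep -c "UFW BLOCK.*SRC=$ip" || true)
  if [ "$n" -ge "$threshold" ]; then
    echo "ALERT: $n blocked packets from $ip"
  else
    echo "ok: $n blocked packets from $ip"
  fi
}
```

Run it from cron against your firewall log, e.g. `check_edge_blocks 203.0.113.10 5 < /var/log/ufw.log`, and route the ALERT lines into whatever notification channel you already use.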

Monitor your endpoint's health check status in our dashboard. A suddenly failing health check often indicates new firewall rules blocking our access. Track cache miss rates in your statistics—unusual drops might indicate requests are failing to reach your backend.

Best Practices

Keep your firewall rules updated if we notify you of edge IP changes. Test firewall changes in a staging environment before applying them to production. Keep rules specific to only the necessary ports—allow our edge IP on port 80 for HTTP or port 443 for HTTPS, but don't open unnecessary ports.

Document your firewall configuration for future reference, including why specific rules exist and how to test they're working correctly.

Getting Help

If you still experience connectivity issues after following this guidance, contact support with details about your backend infrastructure, firewall configuration, and any error messages. Provide your backend server logs showing connection attempts from our edge IP to help identify the issue faster.