API Reference & Cache Management
Your edge proxy endpoint comes with an API that lets you monitor and control your cache programmatically. Whether you need to check cache performance, flush stale content after a deployment, or integrate cache management into your continuous deployment pipeline, the API provides everything you need through a simple HTTP interface.
Understanding Your Secret API Key
When you create an endpoint, the system automatically generates a unique secret key that acts as your authentication credential for API operations. This key is essentially a password that grants access to cache management functions for your specific endpoint. You can view your secret key at any time by clicking the key icon next to your endpoint in the dashboard.
The secret key is specific to each endpoint, which means if you manage multiple domains, each one has its own independent key. This isolation ensures that compromising one key doesn't give access to your other endpoints. Treat your secret key like any other sensitive credential—don't commit it to public repositories, don't share it in public forums, and consider using environment variables or secret management systems to store it in your applications.
If you ever suspect your secret key has been compromised, you can regenerate it from the dashboard. Keep in mind that regenerating the key will immediately invalidate the old one, so you'll need to update any scripts, deployment pipelines, or applications that use it.
The Public Health Check Endpoint
Every endpoint includes a public health check at /.health that requires no authentication. This endpoint serves as a quick way to verify that your edge proxy is responding and that the backend server is reachable. The health check is particularly useful for monitoring services, uptime checkers, and quick diagnostic tests.
To use the health check, simply make a GET request to your domain with the /.health path. For example, if your endpoint hostname is "api.example.com", you would request "https://api.example.com/.health". The response comes back as JSON with information about the endpoint's status.
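For instance, here is a minimal check in Python, assuming the requests library is available (the exact JSON fields in the response are not documented here, so treat the parsed output as illustrative):

    import requests

    # Public health check: no secret key required.
    response = requests.get("https://api.example.com/.health", timeout=5)
    response.raise_for_status()   # raises for non-2xx status codes
    print(response.json())        # status object; exact fields vary by service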
A successful health check returns an HTTP 200 status code with a JSON object indicating that the proxy is operational. If you receive an error or timeout, it suggests either a DNS resolution issue, a problem with the edge server, or an unreachable backend server. The health check is intentionally lightweight and doesn't perform complex operations, making it suitable for frequent polling without impacting performance.
Because the health check requires no authentication, anyone can access it. This is by design—monitoring services and uptime checkers need to verify your endpoint's availability without managing credentials. The health check doesn't expose sensitive information about your configuration, cache contents, or backend infrastructure.
Flushing the Cache
The flush endpoint at /.cache/flush allows you to immediately invalidate all cached content for your endpoint. This is particularly useful after deploying new code, updating content management systems, or making configuration changes that should be reflected immediately for all users.
To flush the cache, make a GET request to /.cache/flush with your secret key in the key query parameter, for example: "https://api.example.com/.cache/flush?key=abc123def456". The operation completes quickly and returns a JSON response confirming that the cache has been cleared.
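As a sketch, the same operation in Python, reading the key from a hypothetical EDGE_API_KEY environment variable rather than hard-coding it:

    import os
    import requests

    key = os.environ["EDGE_API_KEY"]   # hypothetical variable name
    response = requests.get(
        "https://api.example.com/.cache/flush",
        params={"key": key},           # requests URL-encodes the query string
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())             # confirmation payload; exact fields vary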
Cache flushing removes content from both the RAM cache and the disk cache, ensuring a complete purge. After flushing, the next request for any URL will be a cache miss, forcing the edge server to fetch fresh content from your backend. Subsequent requests will then be served from the newly populated cache based on your TTL settings.
You should use cache flushing strategically rather than reflexively. While it's tempting to flush the cache after every small change, doing so defeats the purpose of caching and puts unnecessary load on your backend server. Instead, consider flushing only when you've made significant changes to your website or application that must be reflected immediately.
A common pattern is to integrate cache flushing into your deployment pipeline. After successfully deploying new code to your backend servers, your deployment script can make a flush request to ensure users see the updated version immediately. This approach gives you control over cache invalidation while keeping the benefits of long cache TTLs during normal operation.
Be cautious about exposing the flush endpoint to untrusted users or automated systems. Anyone with your secret key can flush your cache, potentially causing a sudden spike in backend load. If you need to provide cache clearing capabilities to multiple team members, consider creating a controlled interface rather than sharing the raw API key.
Building API Requests
All API endpoints use standard HTTP GET requests, making them accessible from virtually any programming language or tool. You can use command-line tools like curl for quick tests, web browsers for manual checks, or HTTP libraries in your preferred programming language for automated integration.
A typical curl command for checking statistics looks like this: curl "https://api.example.com/.cache/stats?key=abc123". Notice the quotes around the URL: they matter because query-string characters such as ? and & would otherwise be interpreted by the shell. The response comes back as JSON, which you can pipe through tools like jq for formatted output or parse programmatically in your scripts.
When integrating API calls into applications, remember to handle errors gracefully. Network issues, DNS problems, or backend outages can cause API requests to fail. Your code should catch exceptions, retry with exponential backoff, and have sensible fallback behavior. For example, if a statistics check fails, your monitoring system should alert you but shouldn't crash.
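One way to wrap a statistics check with retries, sketched in Python under the same assumptions as the earlier examples:

    import time
    import requests

    def get_stats(url, key, retries=3):
        """Fetch cache statistics, retrying with exponential backoff."""
        for attempt in range(retries):
            try:
                response = requests.get(url, params={"key": key}, timeout=5)
                response.raise_for_status()
                return response.json()
            except requests.RequestException:
                if attempt == retries - 1:
                    return None            # let the caller degrade gracefully
                time.sleep(2 ** attempt)   # wait 1s, then 2s, between attempts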
For flush operations, consider implementing safeguards to prevent accidental or excessive cache clearing. You might add a confirmation prompt in user interfaces, implement rate limiting to prevent abuse, or log all flush operations for audit purposes. These protections become especially important as your team grows and more people have access to deployment tools.
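A minimal sketch of such a safeguard, with an assumed five-minute minimum interval between flushes and an audit log entry for every attempt:

    import logging
    import time
    import requests

    logging.basicConfig(level=logging.INFO)
    MIN_INTERVAL = 300   # assumed policy: at most one flush per five minutes
    _last_flush = 0.0

    def safe_flush(url, key, operator):
        """Flush the cache with rate limiting and an audit log entry."""
        global _last_flush
        now = time.time()
        if now - _last_flush < MIN_INTERVAL:
            logging.warning("flush refused by rate limit (requested by %s)", operator)
            return False
        requests.get(url, params={"key": key}, timeout=10).raise_for_status()
        logging.info("cache flushed (requested by %s)", operator)
        _last_flush = now
        return True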
Integrating into Deployment Workflows
Cache management becomes most powerful when integrated into your continuous deployment pipeline. Modern deployment tools and CI/CD platforms can easily incorporate API calls as part of the deployment process, ensuring your cache stays synchronized with your backend code.
A typical deployment workflow might look like this: First, your CI system runs tests on the new code. Upon passing tests, it deploys to your staging environment for final verification. After successful staging tests, it deploys to production. Immediately after the production deployment succeeds, it calls your cache flush endpoint to clear old cached content. Finally, it might call the health check endpoint to verify the entire system is responding correctly.
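The final two steps might look like this in a post-deploy script, again assuming requests and a hypothetical EDGE_API_KEY environment variable:

    import os
    import sys
    import requests

    BASE = "https://api.example.com"   # your endpoint hostname
    KEY = os.environ["EDGE_API_KEY"]   # hypothetical variable name

    # Flush stale content now that the new code is live.
    requests.get(f"{BASE}/.cache/flush", params={"key": KEY}, timeout=10).raise_for_status()

    # Verify the proxy and backend are responding before declaring success.
    health = requests.get(f"{BASE}/.health", timeout=5)
    if health.status_code != 200:
        sys.exit("post-deploy health check failed")   # fail the pipeline run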
This automation removes the manual step of remembering to flush the cache after deployments. It also ensures consistent timing: the cache is flushed at exactly the right moment, not too early (which would let the cache repopulate with old content while the deployment is still in progress) and not too late (which would keep serving stale content after the new code is live).
You can make this even more sophisticated by implementing conditional cache flushing. If your deployment only changes backend logic without affecting the frontend, you might skip the flush. If it only updates static assets with versioned filenames, you might also skip it since the old cached content won't interfere. This selective flushing minimizes backend load while ensuring critical updates reach users immediately.
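One way to sketch this, assuming the deploy runs from a git checkout and that the listed path prefixes are the ones whose changes should trigger a flush:

    import os
    import subprocess
    import requests

    # Illustrative path prefixes; adjust to your project layout.
    FLUSH_TRIGGERS = ("templates/", "content/", "public/")

    # Files changed by the deployed commit.
    changed = subprocess.run(
        ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    if any(path.startswith(FLUSH_TRIGGERS) for path in changed):
        requests.get(
            "https://api.example.com/.cache/flush",
            params={"key": os.environ["EDGE_API_KEY"]},   # hypothetical variable
            timeout=10,
        ).raise_for_status()
    else:
        print("no user-facing changes; leaving the cache warm")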
Security Best Practices
The secret key is your endpoint's primary security mechanism for cache management. Protecting this key is essential to maintaining control over your cache operations. Never expose the key in client-side JavaScript, public repositories, or application logs. Anyone who obtains your key can view statistics and flush your cache, potentially causing performance issues or exposing usage patterns.
Use environment variables to store the key in your applications and deployment scripts. Most hosting platforms and CI/CD systems provide secure ways to store secrets that get injected at runtime. This approach keeps the key out of your codebase while making it available to the processes that need it.
Consider implementing an internal API or administrative interface that wraps the raw cache management endpoints. This abstraction layer can add authentication, authorization, logging, and rate limiting. For example, you might create an admin panel where authenticated team members can flush the cache through a button click, while your system logs who pressed the button and when.
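As one possible shape for that abstraction layer, here is a sketch using Flask, assuming an SSO proxy in front of the admin app sets an X-Authenticated-User header (both the header and the allowlist are illustrative, not part of this product):

    import logging
    import os
    import requests
    from flask import Flask, abort, request

    app = Flask(__name__)
    logging.basicConfig(level=logging.INFO)
    ALLOWED = {"alice", "bob"}   # team members permitted to flush

    @app.route("/admin/flush-cache", methods=["POST"])
    def flush_cache():
        user = request.headers.get("X-Authenticated-User", "")
        if user not in ALLOWED:
            abort(403)
        # The raw secret key stays server-side; team members never see it.
        requests.get(
            "https://api.example.com/.cache/flush",
            params={"key": os.environ["EDGE_API_KEY"]},
            timeout=10,
        ).raise_for_status()
        logging.info("cache flushed via admin panel by %s", user)
        return {"flushed": True, "by": user}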
Monitor your cache statistics regularly for unusual patterns. A sudden spike in cache misses might indicate someone is repeatedly flushing your cache. Unusually high statistics request rates could suggest your key has been compromised and is being used for reconnaissance. Setting up alerts for these anomalies helps you detect and respond to security issues quickly.
Monitoring and Alerting
Cache statistics become truly valuable when you monitor them over time and set up alerts for abnormal conditions. A healthy endpoint typically shows a high cache hit ratio, meaning most requests are served from cache. If this ratio drops significantly, it might indicate configuration problems, excessive cache flushing, or content that isn't cacheable.
Consider tracking your cache hit ratio as a key performance indicator. Calculate it by dividing cache hits by total requests (hits plus misses). A ratio above 80% is excellent for most websites, though the ideal number depends on your content and traffic patterns. Dynamic applications with mostly personalized content might have lower ratios, while static websites should achieve very high ratios.
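The arithmetic is simple; in Python, assuming the statistics payload exposes hit and miss counts under names like "hits" and "misses" (substitute whatever your payload actually uses):

    # Example payload with assumed field names.
    stats = {"hits": 9200, "misses": 800}
    hit_ratio = stats["hits"] / (stats["hits"] + stats["misses"])
    print(f"hit ratio: {hit_ratio:.1%}")   # -> hit ratio: 92.0%

    if hit_ratio < 0.80:
        print("warning: hit ratio below 80%, worth investigating")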
Set up alerts for sudden changes in cache performance. A normally high hit ratio suddenly dropping to zero might mean someone accidentally flushed the cache, changed DNS to bypass the proxy, or modified cache exclusion patterns too aggressively. Quick detection of these issues allows quick resolution before they significantly impact user experience or backend load.
You can also monitor cache size metrics to ensure you're not approaching resource limits. If your RAM cache is consistently full and evicting entries prematurely, you might need to adjust your cache TTL settings, be more selective about what you cache, or consider upgrading your plan for more cache capacity.
Working with Multiple Endpoints
If you manage multiple endpoints, each one has its own independent secret key and cache. This isolation is intentional—it allows you to manage different websites or applications with different security boundaries and operational concerns. However, it also means you need strategies for managing multiple keys and coordinating cache operations across endpoints.
Consider creating a configuration file or database that maps endpoint hostnames to their corresponding secret keys. Your deployment scripts can then look up the appropriate key based on which service is being deployed. This approach centralizes key management and makes it easier to rotate keys or add new endpoints.
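A minimal version of that lookup, assuming a JSON file named endpoints.json that is kept out of version control (or replaced by a proper secret manager):

    import json
    import requests

    # endpoints.json maps hostnames to secret keys, e.g.
    # {"api.example.com": "abc123", "www.example.com": "def456"}
    with open("endpoints.json") as f:
        ENDPOINT_KEYS = json.load(f)

    def flush(hostname):
        """Flush the cache for a single endpoint by hostname."""
        response = requests.get(
            f"https://{hostname}/.cache/flush",
            params={"key": ENDPOINT_KEYS[hostname]},
            timeout=10,
        )
        response.raise_for_status()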
For global cache clearing across all your endpoints, you might build a script that iterates through all your configured endpoints and flushes each one sequentially. This is useful when you've made changes to shared infrastructure or dependencies that affect multiple services. Just be mindful of the cumulative backend load when several caches are empty and refilling at the same time; pausing briefly between flushes helps stagger it.
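Building on the flush() helper sketched above, a global clear could be as simple as the following; the five-second pause is an arbitrary figure chosen to spread out the refill load:

    import time

    # Reuses ENDPOINT_KEYS and flush() from the previous sketch.
    for hostname in ENDPOINT_KEYS:
        flush(hostname)
        print(f"flushed {hostname}")
        time.sleep(5)   # pause between flushes to spread out cache misses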
Command Line Examples
For quick manual operations, curl commands provide a simple way to interact with the API without writing code. To check if an endpoint is responding, use: curl https://api.example.com/.health. This returns immediately with either a success response or an error.
To retrieve statistics, include your secret key in the query string: curl "https://api.example.com/.cache/stats?key=your_secret_key_here". Pipe the output through jq for pretty formatting: curl -s "https://api.example.com/.cache/stats?key=your_secret_key_here" | jq.
To flush the cache, the command looks similar: curl "https://api.example.com/.cache/flush?key=your_secret_key_here". You might add the -v flag for verbose output to see the full HTTP transaction, or -w '\n' to add a newline after the response for cleaner output.
These curl commands can be saved as shell scripts for convenience. Create a script named flush-cache.sh that contains the flush command with your key, make it executable with chmod +x flush-cache.sh, and then you can simply run ./flush-cache.sh whenever you need to clear the cache.
Programmatic Integration Examples
In Node.js or JavaScript environments, you can use the fetch API or libraries like axios. A simple statistics check might look like: const response = await fetch('https://api.example.com/.cache/stats?key=' + process.env.API_KEY), followed by const stats = await response.json() to parse the JSON response. The API key comes from an environment variable for security.
Python users can leverage the requests library with similar simplicity: response = requests.get(f'https://api.example.com/.cache/stats?key={os.environ["API_KEY"]}'), followed by stats = response.json() to access the data. Error handling with try/except blocks ensures your application doesn't crash if the API is temporarily unavailable.
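Putting those pieces together, a complete, defensively written version of the Python check might look like this:

    import os
    import requests

    try:
        response = requests.get(
            "https://api.example.com/.cache/stats",
            params={"key": os.environ["API_KEY"]},
            timeout=5,
        )
        response.raise_for_status()
        stats = response.json()
        print(stats)
    except requests.RequestException as exc:
        # Degrade gracefully instead of crashing the caller.
        print(f"stats check failed: {exc}")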
For bash scripts, you can capture the curl output into a variable: STATS=$(curl -s "https://api.example.com/.cache/stats?key=$API_KEY"). Then parse specific values using jq: HIT_RATIO=$(echo "$STATS" | jq -r '.hit_ratio'); quoting the variable preserves the JSON exactly as curl returned it. This technique is useful for deployment scripts that need to make decisions based on cache state.
Troubleshooting API Issues
If you receive authentication errors when calling the stats or flush endpoints, verify you're using the correct secret key from the dashboard. Copy and paste the key directly rather than typing it manually to avoid transcription errors. Remember that the key is case-sensitive and must match exactly.
Connection timeouts or DNS errors usually indicate network issues or incorrect hostnames. Verify that your DNS has fully propagated and that you're using the exact hostname configured in your endpoint settings. Test with the health check endpoint first since it requires no authentication and can help isolate whether the issue is with connectivity or credentials.
If you receive unexpected responses or empty data, check that your endpoint is in an active state. Pending or inactive endpoints may not respond to API calls correctly. Also verify that you're making requests over HTTPS rather than HTTP, as the API endpoints require secure connections.
For cache flush operations that seem to have no effect, remember that the operation only clears the cache—it doesn't prevent new entries from being cached immediately. If you flush and then immediately make requests, those requests will populate the cache again. This is normal behavior and means the flush worked correctly.
Best Practices Summary
Use the health check endpoint for monitoring and uptime checks since it requires no authentication and has minimal overhead. Reserve the authenticated statistics endpoint for performance monitoring and analysis, checking it at intervals that make sense for your operational needs.
Implement cache flushing judiciously as part of deployment workflows rather than as a routine operation. The cache exists to improve performance, and excessive flushing defeats this purpose. Trust your cache TTL settings to handle most content updates naturally.
Protect your secret key as you would any other credential. Store it securely, never commit it to version control, and rotate it if you suspect compromise. Use environment variables and secret management systems to keep keys secure while making them available to authorized systems.
Monitor cache performance metrics over time to understand your application's caching behavior. Use this data to tune cache TTL settings, adjust exclusion patterns, and optimize overall performance. The API gives you the visibility you need to make informed decisions about cache configuration.