# How it works

This service makes your website faster and more reliable by sitting between your visitors and your web server. Think of it as a smart middleman that remembers and reuses content instead of asking your server for the same information repeatedly.

## The Basic Concept

Your website normally works like this: a visitor requests a page, and your server generates it and sends it back. Every single visitor triggers this same process, even when they are all asking for identical content. Your server does the same work over and over.

With our edge proxy, the process changes. The first visitor requests a page, our edge server fetches it from your backend, saves a copy, and sends it to the visitor. When the second visitor requests the same page, our edge server simply sends them the saved copy without bothering your backend server. Your server does the work once, and we handle hundreds or thousands of identical requests from the cached copy.

## What Gets Cached

Static content such as images, CSS files, JavaScript, and HTML pages is perfect for caching. These files rarely change, so serving them from cache makes sense. We store these files both in fast memory (RAM) and on disk, ensuring quick access even for content that hasn't been requested recently.

Dynamic content that changes for each user shouldn't be cached. Your shopping cart, user dashboard, and checkout pages need to be fresh every time. This is why you configure cache exclusion patterns and cookie bypass rules—they tell our edge server which content is safe to cache and which must always come from your backend.

## How Caching Decisions Happen

When a request arrives at our edge server, we check several things. First, does the URL match any of your exclusion patterns? If so, we skip the cache entirely and forward the request to your backend. Second, does the request include any of your bypass cookies? If so, the result is the same: the request goes straight to your backend.

If neither condition applies, we check our cache. Do we have this exact URL saved? Is it still fresh according to your TTL settings? If yes to both questions, we serve it immediately from cache. If we don't have it cached or it's expired, we fetch it from your backend, save it for next time, and send it to the visitor.
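The decision sequence above can be sketched in a few lines. This is an illustrative model, not the actual edge implementation; the pattern syntax, config keys, and cache layout are assumptions chosen for the sketch:

```python
import fnmatch
import time

def route_request(url, cookies, cache, config):
    """Decide how one request is handled: BYPASS, HIT, or MISS."""
    # 1. Excluded URL? Skip the cache and go straight to the backend.
    if any(fnmatch.fnmatch(url, pat) for pat in config["exclusion_patterns"]):
        return "BYPASS"
    # 2. Bypass cookie present? Same outcome: straight to the backend.
    if any(name in cookies for name in config["bypass_cookies"]):
        return "BYPASS"
    # 3. Cached and still fresh according to the TTL? Serve it immediately.
    entry = cache.get(url)
    if entry and time.time() - entry["stored_at"] < config["ttl"]:
        return "HIT"
    # 4. Otherwise: fetch from the backend, store a copy, then serve it.
    return "MISS"

# Hypothetical configuration values for illustration only.
config = {
    "exclusion_patterns": ["/cart*", "/checkout*"],
    "bypass_cookies": ["session_id"],
    "ttl": 3600,
}
cache = {"/blog/hello": {"stored_at": time.time()}}

route_request("/checkout/pay", {}, cache, config)  # "BYPASS": excluded URL
route_request("/blog/hello", {}, cache, config)    # "HIT": cached and fresh
route_request("/blog/new-post", {}, cache, config) # "MISS": not cached yet
```

Note that the exclusion and bypass checks run before any cache lookup, which is why excluded content never appears in your cache statistics as hits.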

## The TTL Setting

Time To Live (TTL) determines how long content stays in the cache before we consider it stale. A 3600-second TTL means we cache content for one hour. After that hour, the next request triggers a fresh fetch from your backend to update the cache.

Longer TTLs reduce load on your backend but mean visitors might see slightly outdated content. Shorter TTLs keep content fresher but require more frequent backend requests. Most websites use TTLs between 1 and 24 hours, depending on how often their content changes.
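The freshness rule itself is simple arithmetic: a cached copy is fresh while its age is below the TTL. A minimal sketch, assuming ages are tracked as Unix timestamps:

```python
import time

TTL = 3600  # seconds; the one-hour TTL from the example above

def is_fresh(stored_at, now=None):
    """True while a cached copy is within its TTL, False once it is stale."""
    now = time.time() if now is None else now
    return (now - stored_at) < TTL

is_fresh(stored_at=0, now=1800)  # True: stored 30 minutes ago
is_fresh(stored_at=0, now=7200)  # False: stored 2 hours ago, past the TTL
```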

## Why This Makes Your Site Faster

Speed comes from two factors. First, serving content from our edge server is faster than reaching your backend, especially for geographically distant visitors. Second, we compress content with Brotli and optimize images, reducing the amount of data sent over the network.

Your backend server also benefits. Instead of handling every visitor request, it only handles cache misses and excluded content. This reduced load means your server can handle traffic spikes more easily and may allow you to run on less expensive hardware.

## Understanding Cache Hits and Misses

A cache hit means we found the requested content in our cache and served it without contacting your backend. A cache miss means we had to fetch from your backend, either because the content wasn't cached or had expired. Your dashboard shows these statistics, and a high hit ratio indicates effective caching.
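The hit ratio your dashboard reports is simply hits divided by total requests. A one-line sketch of the calculation:

```python
def hit_ratio(hits, misses):
    """Fraction of requests served from cache without contacting the backend."""
    total = hits + misses
    return hits / total if total else 0.0

hit_ratio(2700, 300)  # 0.9, i.e. a 90% hit ratio
```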

If you see low hit ratios, it suggests content isn't being cached effectively. This could mean your TTL is too short, your exclusion patterns are too broad, or your content includes elements that prevent caching, such as unique URLs with random query parameters.

## The Role of DNS

For this service to work, visitors must reach our edge server instead of going directly to your backend. This is accomplished through DNS—when someone types your domain name, DNS tells their browser to connect to our edge IP address instead of your backend IP.
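In practice, the change amounts to pointing your domain's A record at the edge IP instead of the backend IP. The sketch below uses placeholder addresses from the IP documentation ranges, not real service IPs; your dashboard shows the actual edge address to use:

```
; before: visitors connect directly to your backend
example.com.   300   IN   A   198.51.100.20

; after: visitors connect to our edge server (your backend keeps its own IP)
example.com.   300   IN   A   203.0.113.10
```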

You maintain complete control. Your backend server continues running normally at its original IP address. We simply intercept requests through DNS and handle them on your behalf, forwarding to your backend when necessary. If you ever want to stop using our service, you change your DNS back to point directly to your backend.

## Security and Access

Your backend server needs to allow connections from our edge IP address. This might require adjusting your firewall settings to whitelist our IP. Without this access, our edge server cannot fetch content from your backend, and the service won't function.
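What the firewall change looks like depends on your setup; as one example, on a server using `ufw` the rule might be the following. The edge IP shown is a placeholder from the documentation range, not a real service address:

```
# Allow our edge server (hypothetical IP; use the one from your dashboard)
# to reach your backend over HTTP and HTTPS.
sudo ufw allow from 203.0.113.10 to any port 80,443 proto tcp
```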

Each endpoint gets a unique secret key for API access. This key allows you to check cache statistics and flush the cache when needed. Keep this key secure—anyone with it can view your cache performance and clear your cached content.
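To show how the key is used, here is a sketch that builds authenticated requests for the stats and flush calls. The base URL, paths, and bearer-token header scheme are hypothetical; check the API documentation for the real ones. Nothing is actually sent:

```python
import urllib.request

API_BASE = "https://api.example.com/v1/endpoints/your-endpoint-id"  # hypothetical
SECRET_KEY = "your-secret-key"  # from the dashboard; keep it out of version control

def build_request(path, method="GET"):
    """Build an authenticated request to a hypothetical endpoint API path."""
    req = urllib.request.Request(API_BASE + path, method=method)
    req.add_header("Authorization", "Bearer " + SECRET_KEY)
    return req

stats_req = build_request("/stats")          # read hit/miss statistics
flush_req = build_request("/flush", "POST")  # clear all cached content
# urllib.request.urlopen(flush_req) would actually send the flush call
```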

## What Happens When You Update Your Site

When you deploy changes to your backend, the cached content on our edge server becomes outdated. You have two options: wait for the TTL to expire naturally, or use the cache flush API to immediately clear all cached content. Most users integrate cache flushing into their deployment process for instant updates.

For WordPress users, we provide a plugin that manages cache flushing automatically. When you publish or update content in WordPress, the plugin flushes the cache so your changes appear immediately to visitors. This removes the need for manual cache management or API integration.

After flushing, the cache is empty. The next requests will be cache misses that fetch fresh content from your backend and populate the cache again. Within minutes, your cache is serving the updated content to all visitors.

## Real-World Example

Imagine you run a blog that gets 1000 visitors per day, and each visitor views 3 pages. Without caching, your server handles 3000 page requests. With a 90% cache hit rate, only 300 requests reach your backend while 2700 are served from cache. Your server does 10% of the work while visitors get faster page loads.
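The arithmetic in this example works out as follows:

```python
visitors = 1000
pages_per_visitor = 3
total_requests = visitors * pages_per_visitor  # 3000 page requests per day

hit_rate = 0.90
backend_requests = round(total_requests * (1 - hit_rate))  # 300 reach the backend
cached_requests = total_requests - backend_requests        # 2700 served from cache
```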

The edge proxy essentially multiplies your server's capacity. A backend that might struggle with 1000 concurrent users can suddenly handle much higher traffic because most requests never reach it. This is particularly valuable during traffic spikes from social media or news coverage.

## Limitations and Considerations

This service works best for content that can be cached. Sites with mostly personalized content for logged-in users will see less benefit than sites serving the same content to many visitors. E-commerce sites benefit from caching product pages while excluding checkout flows. News sites and blogs benefit tremendously since most content is identical for all readers.

Backend response time still matters. While caching reduces backend load, cache misses require fetching from your backend. If your backend is slow, those misses will still be slow. The edge proxy improves your site by reducing how often slow requests happen, not by making slow backends fast.

## Getting Started

To use this service, you create an endpoint in our dashboard specifying your domain and backend URL. You verify domain ownership, configure your cache settings, and update your DNS. Within hours of DNS propagation, traffic flows through our edge server and your site benefits from caching.

Start with conservative cache settings and gradually optimize based on the statistics you see in your dashboard. Monitor your cache hit ratio and adjust your TTL, exclusion patterns, and cookie rules to find the right balance for your specific needs.
