NOTE: This is a work in progress. Please send questions, comments, or suggestions to r@tomayko.com.
Rack::Cache can be used with Rails 2.3 or above. Documentation and a sample application are forthcoming; in the meantime, see this example of using Rack::Cache with Rails 2.3.
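Roughly, the Rails 2.3 setup looks like the following sketch; the store URIs and the :verbose setting are illustrative, not required values:

    # config/environment.rb (Rails 2.3) - a minimal sketch
    Rails::Initializer.run do |config|
      config.gem 'rack-cache', :lib => 'rack/cache'

      # The string form lets Rails resolve the constant after gems are loaded.
      config.middleware.use 'Rack::Cache',
        :verbose     => true,                        # log cache activity to rack.errors
        :metastore   => 'file:tmp/cache/rack/meta',  # response metadata
        :entitystore => 'file:tmp/cache/rack/body'   # response bodies
    end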
Rack::Cache is often easier to set up as part of your existing Ruby application than a separate caching system. Rack::Cache runs entirely inside your backend application processes - no separate / external process is required. This lets Rack::Cache scale down to development environments and simple deployments very easily while not sacrificing the benefits of a standards-based approach to caching.
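For a plain Rack application, a config.ru along these lines is all it takes; the heap stores and the inline app are placeholders for a real configuration:

    # config.ru - a minimal sketch; the inline app stands in for your real application
    require 'rack/cache'

    use Rack::Cache,
      :metastore   => 'heap:/',   # cache metadata kept in process memory
      :entitystore => 'heap:/'    # cached response bodies kept in process memory

    run lambda { |env|
      [200,
       { 'Content-Type' => 'text/plain', 'Cache-Control' => 'public, max-age=60' },
       ['Hello']]
    }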
Rack::Cache takes a standards-based approach to caching that provides some benefits over framework-integrated systems. It uses standard HTTP headers (Expires, Cache-Control, ETag, Last-Modified, etc.) to determine what/when to cache. Designing applications to support these standard HTTP mechanisms gives the benefit of being able to switch to a different HTTP cache implementation in the future.
In addition, using a standards-based approach to caching creates a clear separation between application and caching logic. The application need only specify a basic set of information about the response, and all decisions regarding how and when to cache are moved into the caching layer.
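As a contrived sketch (the class and its contents are hypothetical), an application that wants its responses cached only has to state its policy in standard headers:

    require 'time'
    require 'digest/sha1'

    # Hypothetical endpoint: the application declares its cache policy in
    # standard HTTP headers; the caching layer decides what/when to cache.
    class ArticleApp
      UPDATED_AT = Time.utc(2009, 5, 1)   # illustrative modification time

      def call(env)
        body = "<h1>Hello</h1>"
        [200,
         { 'Content-Type'  => 'text/html',
           'Cache-Control' => 'public, max-age=300',               # fresh for five minutes
           'Last-Modified' => UPDATED_AT.httpdate,                 # validator for conditional GETs
           'ETag'          => %("#{Digest::SHA1.hexdigest(body)}") },
         [body]]
      end
    end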
No. Your design is the only thing that can make your app scale.
Also, Rack::Cache is not overly optimized for performance. The main goal of the project is to provide a portable, easy-to-configure, and standards-based caching solution for small to medium sized deployments. More sophisticated / performant caching systems (e.g., Varnish, Squid, httpd/mod-cache) may be more appropriate for large deployments with crazy-land throughput requirements.
Yes. Both freshness-based and validation-based caching are supported. A response will be cached if it has a freshness lifetime (e.g., Expires or Cache-Control: max-age=N headers) and/or includes a validator (e.g., Last-Modified or ETag headers). When the cache hits and the response is fresh, it's delivered immediately without talking to the backend application; when the cached response is stale, it's validated using a conditional GET request.
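The behavior is easy to poke at with Rack::MockRequest; the backend below is illustrative, and the X-Rack-Cache trace values shown in the comments are what you'd typically see:

    require 'rack/cache'
    require 'rack/mock'
    require 'time'

    # A tiny illustrative backend: fresh for 60 seconds, with a Last-Modified validator.
    backend = lambda do |env|
      [200,
       { 'Content-Type'  => 'text/plain',
         'Cache-Control' => 'public, max-age=60',
         'Last-Modified' => Time.utc(2009, 5, 1).httpdate },
       ['hello']]
    end

    cache   = Rack::Cache.new(backend, :metastore => 'heap:/', :entitystore => 'heap:/')
    request = Rack::MockRequest.new(cache)

    puts request.get('/').headers['X-Rack-Cache']   # first request:  "miss, store"
    puts request.get('/').headers['X-Rack-Cache']   # within max-age: "fresh" (backend not contacted)
    # After the freshness lifetime expires, the next request is revalidated
    # against the backend with a conditional GET (If-Modified-Since); a 304
    # response lets the stored body be reused.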
Not really. Rack::Cache deals with entire responses and doesn't know anything about how your application constructs them.
However, something like ESI may be implemented in the future (likely as a separate Rack middleware component that could be situated upstream from Rack::Cache), which would allow applications to compose responses based on several "fragment resources". Each fragment would have its own cache policy.
Although planned, there is currently no mechanism for manually purging an entry stored in the cache.
Note that using an Expires or Cache-Control: max-age=N header and relying on manual purge to invalidate cached entries can often be implemented more simply using efficient validation-based caching (Last-Modified, ETag). Many web frameworks are based entirely on manual purge and do not support validation at the cache level.
Set the rack-cache.force-pass variable in the Rack environment to true.
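For example, a small middleware placed upstream of Rack::Cache can flip that flag per request; the /admin path check here is purely illustrative:

    # Hypothetical middleware that must sit *before* Rack::Cache in the stack.
    class ForceCachePass
      def initialize(app)
        @app = app
      end

      def call(env)
        # Illustrative condition: never serve /admin requests from the cache.
        env['rack-cache.force-pass'] = true if env['PATH_INFO'] =~ %r{\A/admin}
        @app.call(env)
      end
    end

    # config.ru ordering:
    #   use ForceCachePass
    #   use Rack::Cache
    #   run MyApp.new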
It means that your application performs only the processing necessary to determine if a response is valid before sending a 304 Not Modified in response to a conditional GET request. Many applications that perform validation do so only after the entire response has been generated, which provides bandwidth savings but results in no CPU/IO savings. Implementing validation efficiently can increase backend application throughput significantly when fronted by a validating caching system (like Rack::Cache).
Here's an example Rack application that performs efficient validation.
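The following is one way it might look; the timestamp lookup and rendering helpers are placeholders for real application logic:

    require 'time'

    # Hypothetical app: the validity check (a cheap timestamp lookup) runs before
    # body generation, so a conditional GET that ends in a 304 never pays the
    # cost of building the response.
    class Articles
      def call(env)
        last_modified = latest_update_time      # assumed cheap, e.g. a single-column query
        if_modified_since =
          begin
            Time.httpdate(env['HTTP_IF_MODIFIED_SINCE'].to_s)
          rescue ArgumentError
            nil
          end

        if if_modified_since && last_modified <= if_modified_since
          # Valid: skip body generation entirely.
          [304, { 'Last-Modified' => last_modified.httpdate }, []]
        else
          body = expensive_render               # the costly part, only done when needed
          [200,
           { 'Content-Type'  => 'text/html',
             'Last-Modified' => last_modified.httpdate },
           [body]]
        end
      end

      private

      # Both helpers are placeholders for real application logic.
      def latest_update_time
        Time.utc(2009, 5, 1)
      end

      def expensive_render
        "<h1>Articles</h1>"
      end
    end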
Yes.
Sure. HTTPS is typically managed by a front-end web server, so this isn't really relevant to Rack::Cache.