Django Cache Versioning

But there are some problems with this approach. For one, the first (uncached) request still takes the full 3.5 seconds.

Django supports "versioning" of cache records: if you store a record with cache.set(key, value, version=my_version), then a cache.get(key, version=some_other_version) call with a different version will not return it. Using versioning, we can store the user's current 'version' number in the cache and pass it whenever we fetch that user's cached data. This lets us invalidate all of a user's old cache entries at once, without searching through the cache for each of them. An example will help clarify:

Django – Understanding X-Sendfile

I’ve been doing some research regarding file downloads with access control, using Django. My goal is to completely block access to a file, except when accessed by a specific user. I’ve read that when using Django, X-Sendfile is one of the methods of choice for achieving this (based on other SO questions, etc). My rudimentary understanding of using X-Sendfile with Django is:

  1. User requests URI to get a protected file
  2. Django app decides which file to return based on URL, and checks user permission, etc.
  3. Django app returns an HTTP Response with the ‘X-Sendfile’ header set to the server’s file path
  4. The web server finds the file and returns it to the requester (I assume the web server also strips out the ‘X-Sendfile’ header along the way)

Compared with serving the file directly from Django, X-Sendfile seems likely to be a more efficient way to handle protected downloads (since I can rely on Nginx to serve files, vs Django), though it left me with a couple of questions.
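One detail worth noting for the Nginx case: Nginx implements this mechanism under the header name X-Accel-Redirect rather than X-Sendfile, and the file is exposed through an `internal` location. A hypothetical config sketch (the /protected/ prefix and filesystem path are placeholders):

```nginx
# Only reachable via an X-Accel-Redirect header set by the app,
# never by an external request hitting /protected/ directly.
location /protected/ {
    internal;
    alias /most_secret_directory_on_the_whole_filesystem/;
}
```

With this in place, the Django view would set response['X-Accel-Redirect'] = '/protected/somefile.css' instead of the 'X-Sendfile' header.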

How to return static files passing through a view in django?

from django.http import HttpResponse

abspath = '/most_secret_directory_on_the_whole_filesystem/protected_filename.css'

response = HttpResponse()
response['X-Sendfile'] = abspath

response['Content-Type'] = 'mimetype/submimetype'
# or let your webserver auto-inject such a header field
# after auto-recognition of mimetype based on filename extension

response['Content-Length'] = <filesize>
# can probably be left out if you don't want to hassle with getting it off disk.
# oh, and:
# if the file is stored via a models.FileField, you just need myfilefield.size

response['Content-Disposition'] = 'attachment; filename=%s.css' \
    % 'whatever_public_filename_you_need_it_to_be'

return response

 

Options to efficiently synchronize 1 million files with remote servers?

At a company I work for we have things called “playlists”, which are small files of ~100–300 bytes each. There are about a million of them, and about 100,000 of them change every hour. These playlists need to be uploaded to 10 other remote servers on different continents every hour, and it needs to happen quickly — ideally in under 2 minutes.

Since instant updates are also acceptable, you could use lsyncd.
It watches directories (via inotify) and rsyncs changes to the slaves.
At startup it does a full rsync, so that will take some time, but after that only changes are transmitted.
Recursive watching of directories is supported, and if a slave server is down the sync will be retried until it comes back.
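The setup described above maps onto a small lsyncd configuration. A hedged sketch, assuming lsyncd 2.x with its default.rsync layer; the source path, target host, and delay value are placeholders, and in practice you would declare one sync block per remote server:

```lua
-- illustrative lsyncd 2.x config (Lua); paths and hosts are placeholders
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/run/lsyncd.status",
}

-- lsyncd watches the tree with inotify, does a full rsync at startup,
-- then transfers only the playlists that changed
sync {
    default.rsync,
    source = "/var/playlists",
    target = "server1.example.com:/var/playlists",
    delay  = 15,  -- batch bursts of changes before invoking rsync
}
```

A larger delay batches more of the hourly churn into each rsync run at the cost of update latency; with ~100,000 changes per hour, some batching keeps the number of rsync invocations manageable.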