Rate Limiting with Python and Redis (GitHub)

 

import time

from client import get_redis_client
from exceptions import RateLimitExceeded


def rate_per_second(count):
    def _rate_per_second(function):
        def __rate_per_second(*args, **kwargs):
            client = get_redis_client()
            key = f"rate-limit:{int(time.time())}"
            if int(client.incr(key)) > count:
                raise RateLimitExceeded
            if client.ttl(key) == -1:  # timeout is not set
                client.expire(key, 1)  # expire in 1 second
            return function(*args, **kwargs)
        return __rate_per_second
    return _rate_per_second


@rate_per_second(100)  # example: 100 requests per second
def my_function():
    pass  # do something


if __name__ == "__main__":
    success = fail = 0
    for i in range(2000):
        try:
            my_function()
            success += 1
        except RateLimitExceeded:
            fail += 1
        time.sleep(5 / 1000)  # sleep 5 milliseconds between calls
    print(f"Success count = {success}")
    print(f"Fail count = {fail}")

 

5.2.1 Storing counters in Redis

In order to update counters, we’ll need to store the actual counter information. For each counter and precision, like site hits and 5 seconds, we’ll keep a HASH that stores information about the number of site hits that have occurred in each 5-second time slice. The keys in the hash will be the start of the time slice, and the value will be the number of hits. Figure 5.1 shows a selection of data from a hit counter with 5-second time slices.
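As a rough sketch of that layout (assuming a redis-py connection; the key name count:5:hits and the function record_hit are illustrative, not taken from the book), one 5-second slice of a hit counter could be updated like this:

import time

import redis

def record_hit(conn, name, precision=5, count=1):
    now = int(time.time())
    slice_start = (now // precision) * precision   # start of the current time slice
    hash_key = f"count:{precision}:{name}"         # one HASH per counter and precision
    conn.hincrby(hash_key, slice_start, count)     # field = slice start, value = hit count

record_hit(redis.Redis(), "hits")                  # e.g. count one page hit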

As we start to use counters, we need to record what counters have been written to so that we can clear out old data. For this, we need an ordered sequence that lets us iterate one by one over its entries, and that also doesn't allow duplicates. We could use a LIST combined with a SET, but that would take extra code and round trips to Redis. Instead, we'll use a ZSET, where the members are the combinations of precisions and names that have been written to, and the scores are all 0. By setting all scores to 0 in a ZSET, Redis will try to sort by score, and finding them all equal, will then sort by member name. This gives us a fixed order for a given set of members, which will make it easy to sequentially scan them. An example ZSET of known counters can be seen in figure 5.2.
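Continuing the sketch (again assuming redis-py; the "known:" key and the update_counter name are illustrative), recording the counter in that ZSET with a score of 0 can be pipelined with the hash update so both writes cost a single round trip:

def update_counter(conn, name, precision=5, count=1):
    now = int(time.time())
    slice_start = (now // precision) * precision
    member = f"{precision}:{name}"              # precision-and-name combination
    pipe = conn.pipeline()
    pipe.zadd("known:", {member: 0})            # score 0 for every member: sorted by name
    pipe.hincrby(f"count:{member}", slice_start, count)
    pipe.execute()                              # one round trip for both writes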

Figure 5.1 A HASH that shows the number of web page hits over 5-second time slices around 7:40 a.m. on May 7, 2012
Figure 5.2 A ZSET that shows some known counters

Host Google Analytics Locally

CAOS (Complete Analytics Optimization Suite) for Google Analytics allows you to host analytics.js/gtag.js/ga.js locally and keep it updated using WordPress's built-in cron schedule. Fully automatic!

Whenever you run an analysis of your website on Google PageSpeed Insights, Pingdom, or GTmetrix, it'll tell you to leverage browser caching when you're using Google Analytics, because Google has set the cache expiry time on its script to 2 hours. This plugin will get you a higher score on PageSpeed Insights and Pingdom and make your website load faster, because the user's browser doesn't have to make a round trip to download the file from Google's external server.

Just install the plugin, enter your Tracking ID, and the plugin adds the necessary tracking code for Google Analytics to the header (or footer) of your theme, downloads and saves the analytics.js/ga.js/gtag.js file to your website's server, and keeps it updated (automagically) using a scheduled script in wp_cron(). CAOS is a set-and-forget plugin.

Please keep in mind that, although I try to make the configuration of this plugin as easy as possible, the concept of locally hosting a file or optimizing Google Analytics for PageSpeed Insights or GTmetrix has proven to be confusing for some people. If you're not sure what you're doing, please consult an SEO expert or web developer to help you with the configuration and optimization of your WordPress blog. Or feel free to contact me for a quote.

For more information: How to setup CAOS.

KeyCDN: Content delivery, simplified.

KeyCDN is a registered trademark and service of proinity LLC, a privately funded company headquartered in Switzerland. We are a leading European CDN provider that has built a next-generation content delivery architecture from the ground up. Our mission is to develop and engineer a content delivery solution that is accessible to as many people as possible. KeyCDN is not based on a federated CDN or any other form of reselling.

Your experience is our focus, so simple and easy CDN management is important to us. Our CDN gets you started in just a few clicks but also offers a multitude of configuration options. We operate a state-of-the-art infrastructure for you, which gives you the advantage of focusing on your core business. KeyCDN makes content delivery smarter and less expensive.