Rate Limiting with Python and Redis (GitHub)

 

import time

from client import get_redis_client
from exceptions import RateLimitExceeded


def rate_per_second(count):
    def _rate_per_second(function):
        def __rate_per_second(*args, **kwargs):
            client = get_redis_client()
            # One counter key per wall-clock second
            key = f"rate-limit:{int(time.time())}"
            if int(client.incr(key)) > count:
                raise RateLimitExceeded
            if client.ttl(key) == -1:  # timeout is not set yet
                client.expire(key, 1)  # expire in 1 second
            return function(*args, **kwargs)
        return __rate_per_second
    return _rate_per_second


@rate_per_second(100)  # example: 100 requests per second
def my_function():
    pass  # do something


if __name__ == "__main__":
    success = fail = 0
    for i in range(2000):
        try:
            my_function()
            success += 1
        except RateLimitExceeded:
            fail += 1
        time.sleep(5 / 1000)  # sleep 5 milliseconds between calls
    print(f"Success count = {success}")
    print(f"Fail count = {fail}")
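
The snippet imports get_redis_client and RateLimitExceeded from local modules that are not shown. A minimal sketch of what those modules might contain, assuming the redis-py package and default localhost connection settings:

# client.py -- hypothetical helper module; assumes redis-py is installed
import redis

_client = None

def get_redis_client():
    # Reuse a single connection object for the whole process
    global _client
    if _client is None:
        _client = redis.Redis(host="localhost", port=6379, db=0)
    return _client

# exceptions.py -- hypothetical exceptions module
class RateLimitExceeded(Exception):
    """Raised when the per-second request budget is exhausted."""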

 

5.2.1 Storing counters in Redis

In order to update counters, we’ll need to store the actual counter information. For each counter and precision, like site hits and 5 seconds, we’ll keep a HASH that stores information about the number of site hits that have occurred in each 5-second time slice. The keys in the hash will be the start of the time slice, and the value will be the number of hits. Figure 5.1 shows a selection of data from a hit counter with 5-second time slices.
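
As a rough sketch of that layout (the key and field names here are illustrative, not taken from the text), recording a hit against a 5-second precision counter with a redis-py client might look like this:

import time
import redis

def record_hit(conn, name="hits", precision=5):
    # Round the current time down to the start of its time slice
    now = int(time.time())
    slice_start = (now // precision) * precision
    # One HASH per counter+precision; field = slice start, value = hit count
    conn.hincrby(f"count:{precision}:{name}", slice_start, 1)

conn = redis.Redis()
record_hit(conn)  # adds 1 to the field for the current 5-second slice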

As we start to use counters, we need to record which counters have been written to so that we can clear out old data. For this, we need an ordered sequence that lets us iterate one by one over its entries, and that also doesn't allow duplicates. We could use a LIST combined with a SET, but that would take extra code and round trips to Redis. Instead, we'll use a ZSET, where the members are the combinations of precisions and names that have been written to, and the scores are all 0. By setting all scores to 0 in a ZSET, Redis will try to sort by score and, finding them all equal, will then sort by member name. This gives us a fixed order for a given set of members, which makes it easy to scan them sequentially. An example ZSET of known counters can be seen in figure 5.2.

Figure 5.1 A HASH that shows the number of web page hits over 5-second time slices around 7:40 a.m. on May 7, 2012
Figure 5.2 A ZSET that shows some known counters
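
Putting the two structures together, a single update can both increment the hash field for the current time slice and register the counter in the known-counters ZSET with a score of 0. The sketch below assumes a redis-py client and the illustrative key names used above; it is not the book's exact listing:

import time

def update_counter(conn, name="hits", precision=5, count=1):
    now = int(time.time())
    slice_start = (now // precision) * precision
    pipe = conn.pipeline()
    # Record that this counter exists so its old slices can be cleaned up later;
    # every member gets score 0, so the ZSET orders purely by member name.
    pipe.zadd("known:", {f"{precision}:{name}": 0})
    # Increment the hit count for the current time slice
    pipe.hincrby(f"count:{precision}:{name}", slice_start, count)
    pipe.execute()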

WordPress: Redis Object Cache

A persistent object cache backend powered by Redis. Supports Predis, PhpRedis (PECL), HHVM, replication, clustering and WP-CLI.

To adjust the connection parameters, prefix cache keys or configure replication/clustering, please see Other Notes.

Forked from Eric Mann’s and Erick Hitter’s Redis Object Cache.

REDIS CACHE PRO

A business class Redis object cache backend. Truly reliable, highly optimized, fully customizable and with a dedicated engineer when you most need it.

  • Rewritten for raw performance
  • WordPress object cache API compliant
  • Easy debugging & logging
  • Fully unit tested (100% code coverage)
  • Secure connections with TLS
  • Seamless WP CLI & Debug Bar integration
  • Optimized for WooCommerce, Jetpack & Yoast SEO

What is Redis Object Caching and How to Use It for Your WordPress Site

Redis and object caching can vastly speed up your WordPress page load times with each subsequent visit. It’s also used by many popular websites like GitHub, Pinterest, StackOverflow and many others.

Remote Dictionary Server (Redis) “is an open source, in-memory data structure store used as a database, cache, and message broker.” It’s a key-value store which is often called a NoSQL database.

For object caching, which caches the results of repeated queries, Redis is best used on dynamic websites such as WordPress sites.

Today, I’ll share more detail on object caching, its benefits, and how to install and use Redis for object caching on WordPress websites.

 

More: kinsta.com

Plugin: redis-cache

GitHub WordPress Plugin: