Falcon: it’s easy to use, very fast (for a Python framework, that is), and has 100% unit test coverage.
Requests per second:

CPython 3.4.3: 19.5
PyPy 2.5.1: 256.4
Indiscriminate use of CTEs considered harmful
However, one aspect of the current implementation of CTEs should give you pause: CTEs are, in effect, materialized before they can be used. That is, Postgres runs the CTE's query and stashes the results in a temporary store before executing the larger query. This has a number of consequences.
After some analysis and testing, simply inlining two CTEs caused the query in question to run in 4% of the time it had previously taken. Indiscriminate use of CTEs had made this query 25 times slower.
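The rewrite amounts to moving the CTE's body back into the main query so the planner can optimize them together. A minimal sketch of the two shapes, using Python's built-in sqlite3 and a hypothetical orders table (the materialization penalty described above is specific to pre-12 Postgres, but the SQL transformation is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, amount INTEGER, region TEXT);
    INSERT INTO orders VALUES (1, 100, 'eu'), (2, 250, 'us'), (3, 75, 'eu');
""")

# CTE version: in pre-12 Postgres, eu_orders would be materialized in full
# before the outer SELECT runs, acting as an optimization fence.
cte = """
    WITH eu_orders AS (SELECT * FROM orders WHERE region = 'eu')
    SELECT SUM(amount) FROM eu_orders
"""

# Inlined version: the predicate lives in the main query, so the planner
# can push it down (e.g. onto an index on region).
inlined = "SELECT SUM(amount) FROM orders WHERE region = 'eu'"

print(conn.execute(cte).fetchone()[0])      # 175
print(conn.execute(inlined).fetchone()[0])  # 175
```

Both forms return the same rows; the difference is purely in how the planner is allowed to execute them.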
tinys3: Quick and minimal S3 uploads for Python
import tinys3
conn = tinys3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY, tls=True)
f = open('some_file.zip', 'rb')
conn.upload('some_file.zip', f, 'my_bucket')
GitHub: Mass downloader using Python's map
How to write a simple multi-threaded mass downloader using workerpool
Solution 3: Multi-threaded, using map
WorkerPool implements a map method, similar to Python's native map. It's a convenient shortcut for when writing a custom Job class is more work than it's worth.

# download3.py - Download many URLs using multiple threads, with the ``map`` method.
import os
import urllib
import workerpool

def download(url):
    url = url.strip()
    save_to = os.path.basename(url)
    urllib.urlretrieve(url, save_to)
    print "Downloaded %s" % url

# Initialize a pool, 5 threads in this case
pool = workerpool.WorkerPool(size=5)

# The ``download`` function will be called with a line from the second
# parameter for each job.
pool.map(download, open("urls.txt").readlines())

# Send shutdown jobs to all threads, and wait until all the jobs have been completed
pool.shutdown()
pool.wait()
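Note that workerpool is a third-party, Python 2-era library (print statement, urllib.urlretrieve). On Python 3 the same thread-pooled map pattern is available in the standard library via concurrent.futures. A minimal sketch, with the network fetch replaced by a stub and a hypothetical in-memory URL list so it runs offline:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def download(url):
    url = url.strip()
    save_to = os.path.basename(url)
    # In real use: urllib.request.urlretrieve(url, save_to)
    return save_to

# Hypothetical URL list standing in for open("urls.txt").readlines()
urls = ["http://example.com/a.zip\n", "http://example.com/b.zip\n"]

# map dispatches one download job per URL across 5 worker threads;
# the with-block shuts the pool down and waits for all jobs to finish.
with ThreadPoolExecutor(max_workers=5) as pool:
    names = list(pool.map(download, urls))

print(names)  # ['a.zip', 'b.zip']
```

ThreadPoolExecutor.map also preserves input order in its results, matching the behavior you'd expect from the built-in map.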