PostgreSQL 9.5 introduces a new SKIP LOCKED option to SELECT ... FOR [KEY] UPDATE|SHARE. It’s used in the same place as NOWAIT and, like NOWAIT, affects behaviour when the tuple is locked by another transaction.
The main utility of SKIP LOCKED is for building simple, reliable and efficient concurrent work queues.
“How do I find the first row (by some given ordering) in a queue table that nobody else has claimed and claim it for myself? It needs to automatically revert to being unclaimed again if I crash or exit for any reason. Many other workers will be doing the same thing at the same time. It is vital that each item get processed exactly once; none may be skipped and none may be processed more than once.”
This is harder than you’d think because SQL statements do not execute atomically. A subquery might run before the outer query, depending on how the planner/optimizer does things. Many of the race conditions that can affect a series of statements can also affect a single statement with CTEs and subqueries, but the window in which they occur is narrower because the statement-parts usually run closer together. So lots of code that looks right proves not to be: it’s right 99.95% of the time, or it’s right until the day your business gets a big surge of traffic and concurrency goes up. Sometimes that’s good enough. Often it isn’t.
How SKIP LOCKED helps
SKIP LOCKED tries to make this easier by letting you use normal SQL to write efficient, safe queue systems. You don’t need to import a large and complex third-party app or library to implement a queue, and you don’t need to deal with the key-mapping and namespace issues of advisory locking.
Given a trivial queue:

```sql
CREATE TABLE queue(
    itemid  INTEGER PRIMARY KEY,
    is_done BOOLEAN NOT NULL DEFAULT 'f'
);

INSERT INTO queue(itemid)
SELECT x FROM generate_series(1,20) x;
```
an application can grab a single queue item safely, while holding an open transaction, with:

```sql
DELETE FROM queue
WHERE itemid = (
    SELECT itemid
    FROM queue
    ORDER BY itemid
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING *;
```

This query:
- Scans the queue table in itemid order
- Tries to acquire a lock on each row. If it fails to acquire the lock, it ignores the row as if it wasn’t in the table at all and carries on.
- Stops scanning once it’s locked one item
- Returns the itemid of the locked item
- Looks up the found itemid in the index to get its physical location
- Marks the tuple as deleted (but this doesn’t take effect until commit)
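To see why skipping locked rows gives each worker a distinct item with no waiting and no duplicates, the claim-one-item semantics above can be mimicked in-process. This is a standalone Python sketch, not a database client: a non-blocking `threading.Lock` per row stands in for a row lock held by another transaction, and a lock that is never released stands in for a claimed-and-deleted row.

```python
import threading

NUM_ITEMS = 20
NUM_WORKERS = 5

# One lock per "row"; a held lock models a row locked by another transaction.
row_locks = {i: threading.Lock() for i in range(1, NUM_ITEMS + 1)}
processed = []                     # every item some worker claimed
processed_guard = threading.Lock()

def claim_next():
    """Scan rows in itemid order; skip any whose lock is held
    (the SKIP LOCKED behaviour) and return the first row we lock."""
    for itemid in sorted(row_locks):
        if row_locks[itemid].acquire(blocking=False):
            return itemid          # held until "commit" (never released here)
    return None                    # every row is claimed: queue drained

def worker():
    while True:
        itemid = claim_next()
        if itemid is None:
            break
        with processed_guard:
            processed.append(itemid)   # "process" the claimed item

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Exactly-once handoff: no item skipped, none processed twice.
assert sorted(processed) == list(range(1, NUM_ITEMS + 1))
```

The key move is `acquire(blocking=False)`: like SKIP LOCKED (and unlike NOWAIT, which errors out), a worker that finds a row taken simply moves on to the next candidate instead of waiting or failing.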