Speed up Django’s collectstatic command with Collectfasta

Django’s collectstatic command (added in Django 1.3 – March 23, 2011) was designed for storage backends where file retrieval was cheap because it was on your local disk.

In Django 1.4 (March 23, 2012) Django introduced CachedStaticFilesStorage, which appends MD5 hashes to filenames so that multiple versions of a file can stick around while you do a blue/green deployment. It also meant you could put your app in front of a CDN and the filename hashes would ensure that when a file changed, so did the cache key. This meant you didn’t need to worry about invalidating the CDN assets or users’ browser caches.

Later on (Django 1.7 – September 2, 2014) we got ManifestStaticFilesStorage, which stores the hashed filenames in a JSON manifest, making it easier to host static files on remote storage like S3.

The original django-storages is even older than collectstatic – the initial commit was back on Jun 12, 2008. Its purpose was to provide a storage backend for AWS S3, which has since taken over the world. It also provides S3ManifestStaticStorage, which is great for static file serving – you don’t even need to set up a static web server; the files can come straight from the bucket or a CDN.

The big problem with all of this is that running collectstatic against S3-based storage is painfully slow – especially hashing storage, which uses the post-process hook to modify and re-upload files to update file references (which can then trigger further updates). There used to be a solution to this: Collectfast (released May 2013) was an awesome drop-in replacement for the collectstatic management command which would auto-magically speed things up. Unfortunately, it has been archived and is no longer maintained – the last release was in 2020. Waiting for collectstatic to run has become tiring.

I’ve spent the past few weekends forking the original Collectfast, trying to get the repo up to date and working again. It has been an interesting challenge and I’ve finally got it to a state where I am happy with the performance improvements it provides over the stock Django command and am confident it works. Introducing… Collectfasta – an updated fork of Collectfast – even faster than before.

What’s new in Collectfasta?

You can now run all tests without connecting to cloud services

One of the reasons Collectfast was archived was that it was difficult to find a new maintainer: most tests, specifically the ‘live tests’, required real Google Cloud Platform (GCP) and AWS credentials to run.

I have now set up the popular mocking tools LocalStack and fake-gcs-server so that these tests can run without any AWS or GCP credentials. This has also opened up a new avenue of testing, since the mocks are free to run: testing performance on many files rather than just a single file. I’m observing performance improvements of 5x-10x with local mocks, and these improvements are even more significant against the real remote APIs.
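As a rough illustration of what pointing the tests at a local mock looks like (bucket name and values are placeholders, and this assumes Django 4.2+’s STORAGES setting – check the Collectfasta README for the exact configuration), the Django test settings just aim django-storages at LocalStack instead of real AWS:

# Hypothetical test settings for running S3 tests against LocalStack (values illustrative).
AWS_STORAGE_BUCKET_NAME = "collectfasta-test"   # any bucket created inside the mock
AWS_S3_ENDPOINT_URL = "http://localhost:4566"   # LocalStack's default edge port
AWS_ACCESS_KEY_ID = "test"                      # LocalStack accepts dummy credentials
AWS_SECRET_ACCESS_KEY = "test"
AWS_S3_REGION_NAME = "us-east-1"

STORAGES = {
    "staticfiles": {
        "BACKEND": "storages.backends.s3boto3.S3ManifestStaticStorage",
    },
}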

I’ve kept both the live tests and the docker tests running on master for better coverage.

AWS_PRELOAD_METADATA reimplemented

AWS_PRELOAD_METADATA was removed in django-storages 1.10 (2020-08-30), and hard-coding preload_metadata = True had been a key performance optimisation that Collectfast made in its boto3 strategy. The reason is straightforward: during collectstatic, the exists method checks whether a file is already there. This is fine when exists is cheap – but for S3Storage, exists does a HeadObject request to the S3 API every time, for every file.

In contrast, when preload_metadata was enabled:

  1. the storage would initially call ListObjectsV2 to see what was already in the bucket,
  2. store the results in a dict,
  3. and then exists would check the dict first, returning True if the key was there – otherwise deferring to the original implementation.

This significantly speeds up subsequent collectstatic runs on the same files, since you’re replacing hundreds of API calls with one.
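A minimal sketch of the idea outside of Django (plain boto3 rather than Collectfast’s actual code, with a hypothetical bucket name) shows the shape of the optimisation – one paginated listing up front instead of a HeadObject call per file:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "my-static-bucket"  # hypothetical bucket name

# Steps 1 and 2: a single (paginated) ListObjectsV2 pass collects every key up front.
paginator = s3.get_paginator("list_objects_v2")
preloaded_keys = {
    obj["Key"]
    for page in paginator.paginate(Bucket=BUCKET)
    for obj in page.get("Contents", [])
}

# Step 3: exists() becomes a set lookup, only falling back to a per-file
# HeadObject request for keys the preload didn't see.
def exists(key):
    if key in preloaded_keys:
        return True
    try:
        s3.head_object(Bucket=BUCKET, Key=key)
        return True
    except ClientError:
        return False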

Removing this feature from django-storages made sense – it’s not the kind of thing you want people enabling on a web server, because it can leak memory and is not concurrency-safe. However, for a management command like collectstatic, concurrency doesn’t matter.

Re-implementing the functionality was nasty – I wrapped the storage object in my own subclass that overrides key methods, saving the preloaded data so that it could be kept up to date on save, delete, etc. There’s surely a better pattern than what I ended up with – but I was optimising for replicating the removed logic rather than for beautiful code – this is ripe for a refactor.
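Something along these lines (a hypothetical class name, not the actual Collectfasta code) shows the shape of that wrapper – the preloaded key set is built lazily from one listing, then kept in sync by the methods that mutate the bucket:

from storages.backends.s3boto3 import S3StaticStorage


class PreloadedS3StaticStorage(S3StaticStorage):
    """Sketch of a storage subclass that keeps a preloaded key set current."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._preloaded = None  # filled lazily by a single bucket listing

    def _keys(self):
        if self._preloaded is None:
            self._preloaded = {obj.key for obj in self.bucket.objects.all()}
        return self._preloaded

    def exists(self, name):
        # NOTE: real code would normalise `name` to the full bucket key
        # (taking the storage's location prefix into account) before comparing.
        if name in self._keys():
            return True
        return super().exists(name)

    def save(self, name, content, max_length=None):
        saved_name = super().save(name, content, max_length=max_length)
        self._keys().add(saved_name)
        return saved_name

    def delete(self, name):
        super().delete(name)
        self._keys().discard(name)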

The two-pass strategy

After I got preload_metadata working again, I found that my code was still pretty slow. The culprit was the multiple post-processing hashing passes that occur when the files reference each other. This confused me a lot, because there are comments in ManifestFilesMixin that specifically mention consideration for S3:

            # use the original, local file, not the copied-but-unprocessed
            # file, which might be somewhere far away, like S3
            storage, path = paths[name]
django/contrib/staticfiles/storage.py#L341

Upon further investigation, I discovered the cause was worse than I thought. Staticfiles does an exists check on L358 and then deletes the existing file on L378, which means we need to re-upload it – this happens whenever there are references between static files. As a result, these files get re-uploaded every time, even with the preload_metadata optimisations. I wanted to find a better way.

I thought of a simple solution: a two-pass strategy. It works by running collectstatic against InMemoryStorage or FileSystemStorage mixed in with ManifestFilesMixin, so all the post-processing happens locally. Then, for the second pass, we just iterate over the storage used in the first pass and copy the files, as-is, to S3. It is still quite a bit slower than the other strategies, because the first pass has to run every time. But the first pass is quick, and on subsequent runs the second pass copies zero files if nothing has changed. It also only does a single ListObjectsV2 call at the start, as we re-use the preload strategy for the second pass.
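A simplified sketch of the two passes (hypothetical helper names, assuming Django 4.2+ for InMemoryStorage – not the actual Collectfasta strategy code):

from django.contrib.staticfiles.storage import ManifestFilesMixin
from django.core.files.storage import InMemoryStorage


class FirstPassStorage(ManifestFilesMixin, InMemoryStorage):
    """Pass 1: all hashing and reference rewriting happens in memory, never touching S3."""


def iter_files(storage, path=""):
    """Recursively walk every file the first pass produced."""
    directories, files = storage.listdir(path)
    for file_name in files:
        yield f"{path}/{file_name}" if path else file_name
    for directory in directories:
        yield from iter_files(storage, f"{path}/{directory}" if path else directory)


def second_pass(local_storage, remote_storage):
    """Pass 2: copy the already-processed files as-is to the remote (S3) storage.

    With the preload strategy, files whose keys already exist unchanged can be skipped."""
    for name in iter_files(local_storage):
        with local_storage.open(name) as processed_file:
            remote_storage.save(name, processed_file)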

What needs work?

  1. The tests could be refactored to be a bit simpler – as raised in #217
  2. The two-pass strategy only works for AWS – the Google backend doesn’t even have a manifest static storage class in django-storages
  3. I haven’t touched the filesystem strategies at all – but in my experience filesystem storages are usually fast anyway. Potentially they (and the threading vars) could be removed – the main bottleneck I think has always been network requests.
  4. I fought the current Strategy abstraction quite a bit – especially for two-pass – there’s an opportunity to refactor this to something simpler.

PRs are accepted / encouraged – github.com/jasongi/collectfasta

Hottest 100 Predictions – A Comparison

This Hottest 100 I made a program to scrape Instagram for Hottest 100 votes. I then collated the predictions from other programs (100 Warm Tunas and ZestfullyGreen’s Twitter scraper) and scored them based on performance – you can see the results here (I also opened this up to manual entries, one of which outscored all the predictors).

I also decided to combine the results of the Twitter scraper and my Instagram scraper, which turned out to be a better predictor than either on its own. Next year I will have to incorporate a Twitter scraper into my predictor.

Below is a summary of some interesting stats about the three automated prediction methods, plus the combination of 100 Toasty Tofu(s) and ZestfullyGreen’s Twitter scraper – I took the results from ZestfullyGreen’s Twitter scrape and added them to my own to see if that would do any better. I also had a look at my predictions that included duplicate votes, but these performed worse than everything except the Twitter prediction, so I have excluded them. This means my hypothesis about duplicate votes (that including them makes the prediction less accurate) seems confirmed.

The final question that remains is: who truly is the internet’s most accurate Hottest 100 predictor? As you can see below, there isn’t really an answer. By my (somewhat arbitrary) scoring system, 100 Warm Tunas and I have very similar accuracy. I think we will have to wait until next year to really test them.

Metric | JG | 100 Warm Tunas | ZG | JG + ZG
Points | 7289/10000 | 7288/10000 | 5679/10000 | 7297/10000
Number of Songs in Correct Position | 7/100 | 4/100 | 1/100 | 3/100
Number of Correct Songs in any Position | 83/100 | 83/100 | 70/100 | 83/100
Number of Correct Top 5 Songs in Correct Position | 2/5 | 2/5 | 1/5 | 2/5
Number of Correct Top 5 Songs in any Top 5 Position | 4/5 | 4/5 | 4/5 | 4/5
Number of Correct Top 10 Songs in Correct Position | 2/10 | 2/10 | 1/10 | 2/10
Number of Correct Top 10 Songs in any Top 10 Position | 8/10 | 8/10 | 5/10 | 8/10
Number of Correct Top 20 Songs in Correct Position | 2/20 | 3/20 | 1/20 | 2/20
Number of best predictions (see below) | 45 | 50 | 34 | 45
Number of worst predictions (see below) | 16 | 20 | 65 | 15
Number of Correct Top 20 Songs in any Top 20 Position | 16/20 | 16/20 | 11/20 | 16/20
Guessed #1? | Yes | Yes | No | Yes

Song-by-song comparison of predictors

# JG 100 Tunas ZG JG + ZG Title Artist
1 1 1 2 1 HUMBLE. Kendrick Lamar
2 2 3 4 2 Let Me Down Easy Gang Of Youths
3 6 6 25 6 Chateau Angus & Julia Stone
4 3 4 3 3 Ubu Methyl Ethel
5 4 2 5 4 The Deepest Sighs, The Frankest Shadows Gang Of Youths
6 10 8 1 10 Green Light Lorde
7 5 5 13 5 Go Bang PNAU
8 11 10 43 11 Sally {Ft. Mataya} Thundamentals
9 16 15 33 16 Lay It On Me Vance Joy
10 9 13 14 9 What Can I Do If The Fire Goes Out? Gang Of Youths
11 7 7 29 7 SWEET BROCKHAMPTON
12 15 16 39 15 Fake Magic Peking Duk & AlunaGeorge
13 23 24 30 23 Young Dumb & Broke Khalid
14 29 30 6 29 Homemade Dynamite Lorde
15 12 11 24 12 Regular Touch Vera Blue
16 30 32 36 30 Feel The Way I Do Jungle Giants, The
17 13 12 20 13 Marryuna {Ft. Yirrmal} Baker Boy
18 14 14 9 14 Exactly How You Are Ball Park Music
19 17 19 15 17 The Man Killers, The
20 35 38 59 35 Let You Down {Ft. Icona Pop} Peking Duk
21 8 9 22 8 Birthdays Smith Street Band, The
22 26 26 27 26 Lemon To A Knife Fight Wombats, The
23 19 18 10 19 Not Worth Hiding Alex The Astronaut
24 78 86 N/A 77 rockstar {Ft. 21 Savage} Post Malone
25 34 31 18 33 Weekends Amy Shark
26 39 39 23 39 Feel It Still Portugal. The Man
27 43 41 N/A 43 Be About You Winston Surfshirt
28 47 51 76 47 Mystik Tash Sultana
29 28 27 37 28 Mended Vera Blue
30 36 35 26 36 Low Blows Meg Mac
31 25 25 48 25 Lay Down Touch Sensitive
32 27 28 91 27 NUMB {Ft. GRAACE} Hayden James
33 22 23 58 22 Slow Mover Angie McMahon
34 37 37 19 37 DNA. Kendrick Lamar
35 51 46 31 51 Passionfruit Drake
36 18 17 12 18 I Haven’t Been Taking Care Of Myself Alex Lahey
37 63 70 52 62 Slide {Ft. Frank Ocean/Migos} Calvin Harris
38 46 48 34 46 Bellyache Billie Eilish
39 53 49 N/A 52 Got On My Skateboard Skegss
40 24 21 44 24 True Lovers Holy Holy
41 41 40 35 41 Blood {triple j Like A Version 2017} Gang Of Youths
42 59 56 N/A 59 Cola CamelPhat & Elderbrook
43 91 74 74 91 Murder To The Mind Tash Sultana
44 49 50 42 49 In Motion {Ft. Japanese Wallpaper} Allday
45 21 20 7 21 Every Day’s The Weekend Alex Lahey
46 57 54 17 57 Better Mallrat
47 45 52 16 45 Want You Back HAIM
48 54 47 N/A 53 The Comedown Ocean Alley
49 33 34 82 34 Passiona Smith Street Band, The
50 77 84 84 74 On Your Way Down Jungle Giants, The
51 N/A N/A 56 N/A Man’s Not Hot Big Shaq
52 N/A N/A N/A N/A Glorious {Ft. Skylar Grey} Macklemore
53 62 68 87 63 Moments {Ft. Gavin James} Bliss N Eso
54 50 57 N/A 50 Homely Feeling Hockey Dad
55 42 44 N/A 42 6 Pack Dune Rats
56 32 29 72 32 Watch Me Read You Odette
57 67 67 N/A 67 Bad Dream Jungle Giants, The
58 20 22 11 20 The Opener Camp Cope
59 80 79 N/A 80 Used To Be In Love Jungle Giants, The
60 69 66 8 69 Boys Charli XCX
61 73 77 N/A 73 21 Grams {Ft. Hilltop Hoods} Thundamentals
62 92 89 N/A 92 Saved Khalid
63 40 43 28 40 Life Goes On E^ST
64 60 58 45 60 Fool’s Gold Jack River
65 65 62 38 64 Everything Now Arcade Fire
66 66 65 93 65 Lemon N.E.R.D. & Rihanna
67 38 36 N/A 38 Shred For Summer DZ Deathrays
68 48 45 80 48 Golden Kingswood
69 44 42 96 44 I Love You, Will You Marry Me Yungblud
70 31 33 54 31 Amsterdam Nothing But Thieves
71 N/A N/A 21 N/A Perfect Places Lorde
72 88 85 71 88 In Cold Blood alt-J
73 83 64 N/A 82 Nuclear Fusion King Gizzard & The Lizard Wizard
74 N/A N/A 98 N/A XO TOUR Llif3 Lil Uzi Vert
75 61 60 N/A 61 Braindead Dune Rats
76 76 76 N/A 75 Cloud 9 {Ft. Kian} Baker Boy
77 N/A 100 66 N/A Million Man Rubens, The
78 N/A N/A N/A N/A Electric Feel {triple j Like A Version 2017} Tash Sultana
79 N/A N/A 69 N/A Hey, Did I Do You Wrong? San Cisco
80 90 90 61 90 Say Something Loving xx, The
81 N/A N/A 32 N/A Liability Lorde
82 N/A N/A 46 N/A 1-800-273-8255 {Ft. Alessia Cara/Khalid} Logic
83 74 72 60 76 Blood Brothers Amy Shark
84 84 73 N/A 85 Oceans Vallis Alps
85 58 59 N/A 58 Does This Last Boo Seeka
86 94 91 95 94 Maybe It’s My First Time Meg Mac
87 72 63 78 71 The Way You Used To Do Queens Of The Stone Age
88 56 61 N/A 56 Edge Of Town {triple j Like A Version 2017} Paul Dempsey
89 N/A N/A N/A N/A Dawning DMA’s
90 N/A N/A N/A N/A Hyperreal {Ft. Kučka} Flume
91 N/A N/A N/A N/A Big For Your Boots Stormzy
92 N/A N/A N/A N/A LOVE. {Ft. ZACARI} Kendrick Lamar
93 95 95 85 96 Do What You Want Presets, The
94 99 93 N/A 98 Second Hand Car Kim Churchill
95 N/A N/A N/A N/A Mask Off Future
96 100 97 55 100 Chasin’ Cub Sport
97 N/A N/A N/A N/A LOYALTY. {Ft. RIHANNA} Kendrick Lamar
98 N/A N/A N/A N/A Snow Angus & Julia Stone
99 64 N/A N/A 66 Arty Boy {Ft. Emma Louise} Flight Facilities
100 N/A N/A N/A N/A Don’t Leave Snakehips & MØ

100 Toasty Tofu(s) – Another Triple J Hottest 100 Predictor

Update: Think you can do better than my prediction? Prove it by filling out your prediction here: Triple J Hottest 100 Prediction tracker submission. Also, you can look at the leaderboard of predictions over here.

100 Toasty Tofu(s) is another Triple J Hottest 100 predictor, made for your entertainment with no guarantees whatsoever.

Since 2012, various people have been predicting the Hottest 100 using social media scrapes and OCR. This started with The Warmest 100 and was continued by 100 Warm Tunas. I’ve long thought it’s an awesome experiment because the conditions are good for using social media as a predictor. Two factors make this a good experiment: the average person is willing to share their Hottest 100 votes, and the stakes are so low – unlike political elections – that there aren’t hordes of true believers/trolls/Russian government agents trying to manipulate public sentiment.

I use instagram-scraper to scrape the hashtags (the same as 100 Warm Tunas) and then a Python script that uses Tesseract OCR to convert the images to text. The results are matched against the Triple J song list (PDF) and saved. I removed any duplicate votes I found – that is, people who voted for the same songs in the same order when there are more than 3 songs in the image (a very unlikely coincidence). I figure these are probably the same person uploading the same image twice.
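Roughly, the OCR and matching step looks something like this (hypothetical helper names and a simplified flow, not the exact script):

import Levenshtein
import pytesseract
from PIL import Image


def best_match(ocr_line, song_list):
    """Match an OCR'd line to the closest 'Title - Artist' entry from the Triple J list."""
    return min(song_list, key=lambda song: Levenshtein.distance(ocr_line.lower(), song.lower()))


def votes_from_image(image_path, song_list):
    """Turn one vote screenshot into a list of matched songs."""
    text = pytesseract.image_to_string(Image.open(image_path))
    return [best_match(line, song_list) for line in text.splitlines() if line.strip()]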

This is an initial cut; there’s still some extra work to do, including:

  • Manually add songs that would be in the Hottest 100 to the song list
  • Tune the OCR, including doing some pre-processing of images if needed
  • Tune the matching algorithm – currently using Levenshtein distance
  • Do more analysis on voting combinations (e.g. are there factions who vote for particular songs together, and what can we learn from this?)
  • Make the table pretty like the other ones
  • Make a form for people to upload their own predictions and show a leaderboard as they come in on the 27th

The results are quite different to 100 Warm Tunas – I seem to be picking up more votes. I’m not sure if this is due to some sort of filtering I’m not doing or just algorithm differences, but we will see whether 100 Warm Tunas is still the internet’s most accurate prediction of Triple J’s Hottest 100 for 2017 on January 27!

[Live results table, updated automatically every few hours: running totals of images, duplicates, and votes, plus per-song columns # / Title / Artist / Votes / % / Votes Inc dupes / %.]

Linux.conf.au: Day 3

Wow. I now realise the last two days were just a warmup. The conference really came into its own today. With talks going about twice as long as on the previous two days, we really got some in-depth information from some top-notch speakers.

This is a continuation of my post about my first Linux.conf.au, here.

It started off with lightning talks in lieu of a keynote speaker. I was a bit disappointed that there weren’t more of them (they finished well before schedule), but the ones we got were very well presented, on topics ranging from raising geek girls to password storage best practices.

Next up I attended the Python 3: Making the Leap! tutorial by Tim Leslie. This was a great tutorial where Tim went through the nuances of the differences between the two versions of Python and how the -3 flag and the 2to3 tool can be used to convert a 2.x script to Python 3. I found this particularly helpful as someone who recently started having a go at Python with my (disclaimer: linking to horrible, horrible code) Blackboard scraper. For some reason, despite all my dependencies working with Python 3, I wrote it in 2.7. Learning to write pretty much deprecated code was probably not the most intelligent thing to do, but at least now I have a good explanation of what is different when writing code for 3 in the future.

Next up was Building Effective Alliances around the Trans-Pacific Partnership Agreement by Sky Croeser [ Video ]. The great thing about this talk is that it went beyond saying why the TPPA is bad – which is all we ever seem to hear from tech news sites – and actually delved into strategies for how to effectively combat it. The strategies were diverse, with no possibility left unexplored, from writing letters to politicians to 1999 Seattle WTO protest tactics – fun for the whole family! I thought the way she approached the topic was terrific: as a person who is not in the slightest bit interested in radical activism but has a strong interest in political issues such as this, it was nice to be acknowledged as somebody who can still contribute. If you’re at all worried about the TPPA, I would strongly recommend watching the talk when it’s uploaded on Monday.

Bringing More Women to Free and Open Source Software by Karen Sandler [ Video ] was a really eye-opening talk on the strategies used at the GNOME Foundation to bring their proportion of female contributors up to par with the rest of the computing sector (3% compared to 18%, still a woefully small number). The lack of women in computing is quite a serious problem which I think is overlooked (and sometimes even perpetuated) by many computer scientists and programmers, and perhaps society in general. I’m fairly certain (and very hopeful) this is a changing statistic – I’ve met many people who are committed to the issue.

Opening up government data by Pia Waugh [ Video ] continued on from Monday’s open government miniconf about making government datasets freely available to the public. As I mentioned on Monday, Pia Waugh is definitely someone who is “on our side” in the government sector when it comes to open data, and is in charge of the great data.gov.au website where you can request and vote for certain datasets to be released. She talked about issues related to open government data, including ways to automate data collection and uploading (which would always provide up-to-date data) and issues with the format in which data is provided. I love playing with datasets, so this talk was very interesting to me – I would recommend giving it a watch when it’s online.

Reverse engineering vendor firmware drivers for little fun and no profit by Matthew Garrett [ Video ]* would have to be the best talk of the day (and possibly the week). In this talk we follow the protagonist, Garrett, as he embarks on a journey to reverse engineer a vendor tool for modifying servers, which doesn’t quite go as expected. I won’t go any further because you have to watch it to really appreciate it. The thing I really like about Garrett’s talk is not only the humour sprinkled throughout, but the fact that whenever he says anything that isn’t basic computer/Linux knowledge, he stops and gives a short explanation of what it is. One mistake I feel many speakers make is assuming every attendee is as knowledgeable as they are in the domain they are speaking about, when in reality the audience covers all different skillsets. By not stopping to explain things, you run the risk of having half your audience feel stupid and switch off. As a student, these kinds of explanations are much appreciated and help me achieve my goal of learning more about the kernel.

*For those with a keen eye: the Garrett talk and the Open Data talk overlapped. I watched one of them on video after the end of today’s conference.

That was all for today. I’m very much looking forward to tomorrow, with a keynote by Matthew Garrett, who will hopefully be beating his own record of “Best Talk so Far”. If you can’t make it but want to watch, tune in at http://timvideos.us/octagon at 9 AM tomorrow.

My linux.conf.au adventure is continued here.