Stripe CTF 3 write up

Stripe CTF 3 finished a few days ago. The latest CTF focused on distributed systems instead of security. I had read about distributed systems, but getting to develop one was a good learning experience. Unfortunately, it took me a while to solve the last level, and after capturing the flag I didn’t have time to go back and optimize my code for the other levels. I will walk through my solutions here.

Level 0

Given a path to a dictionary file and a text file as input, print words which are not in the dictionary wrapped in <>

The non-optimized solution was implemented in Ruby, which built the word array and performed an O(n) lookup for every word, resulting in an O(mn) running time. Hash tables and bloom filters sprang to mind immediately. I wrote my solution in Python:

import re
import sys

path = sys.argv[1]  # path to the dictionary file
words = open(path).read().splitlines()

entries = set(words)

output = ''
for line in sys.stdin:
    # Split with a capturing group so the whitespace separators are kept
    for word in re.split('([^ \n]+)', line):
        w = word.strip()
        if not w or w.lower() in entries:
            output += word
        else:
            output += '<{0}>'.format(word)

This basic solution netted me 340 points and I moved on to the next level.
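For comparison, the bloom filter idea I considered but didn't end up needing could be sketched like this (a minimal, illustrative implementation; the class name and parameters are my own):

```python
import hashlib


class BloomFilter:
    """Probabilistic set: no false negatives, tunable false-positive rate."""

    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, word):
        # Derive k bit positions from 4-byte slices of a single MD5 digest
        digest = hashlib.md5(word.encode()).digest()
        for i in range(self.num_hashes):
            yield int.from_bytes(digest[4 * i:4 * i + 4], 'big') % self.size

    def add(self, word):
        for pos in self._positions(word):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, word):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(word))
```

A plain set is simpler and exact, which is why I stuck with it; a bloom filter only pays off when the dictionary is too large to hold in memory.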

Level 1

Create a Git commit whose SHA1 hash is lexicographically smaller than 000001, before the bot does.

The non-optimized solution was written in shell, using git commands to generate hashes, which involves spawning a new process for every attempt – a very slow operation. Git commit IDs are SHA1 hashes. Assuming the hashes are evenly distributed, one in 16^6 (= 2^24, about 17 million) hashes will match the given criteria. I used Python’s multiprocessing Pool.map to generate 2 million hashes in ~8 seconds:

def commit_hash(nonce):

    body = '''tree {2}
parent {3}
author CTF user <> {0} +0000
committer CTF user <> {0} +0000

Mined a GitCoin!
nonce {1}
'''.format(timestamp, nonce, tree, parent)

    # Git hashes the header "commit <length>\0" followed by the body
    sha1 = hashlib.sha1('commit {0}\0{1}'.format(len(body), body)).hexdigest()

    if sha1 < '000001':
        commit = 'commit/commit' + nonce
        with open(commit, 'w') as f:
            f.write(body)
        os.system('git hash-object -t commit -w "{0}"'.format(commit))
        os.system('git reset --hard "{0}" > /dev/null'.format(sha1))
        os.system('git push origin master')

def solve():
    step = 100
    p = Pool(step)
    for c in xrange(0, 2000000, step):
        a = [str(counter) for counter in xrange(c, c + step)]
        p.map(commit_hash, a)
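The header-plus-body hashing the snippet relies on is the same scheme Git uses for every object type. A quick way to convince yourself (Python 3 here; `git_object_hash` is just an illustrative helper):

```python
import hashlib


def git_object_hash(obj_type, body):
    # Git hashes "<type> <size>\0<body>" for every object
    header = '{0} {1}\0'.format(obj_type, len(body)).encode()
    return hashlib.sha1(header + body).hexdigest()


# Matches the output of `echo hello | git hash-object --stdin`
print(git_object_hash('blob', b'hello\n'))
```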

This level also had a player-vs-player round, but the high latency of my connection put me at a huge disadvantage.

Level 2

Implement DDoS protection for “Shield”. Some attackers (elephants) were sending a lot of requests, breaking the servers for legitimate users (mice).

The shield was implemented in Node.js, originally letting all requests through. I modified the code to reject requests based on each IP’s share of the recent request volume, and added a rudimentary idleness check:

if (timeNow - lastRequestTS <= 1000 &&
    requestCount > 20 &&
    ipRequestCount > requestCount / ipCount / 5) {
  // Likely an elephant – drop the request
  reqData.response.end();
} else {
  this.proxies[0].proxyRequest(reqData.request, reqData.response, reqData.buffer);
}
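The idea behind the heuristic – reject an IP once its request count is disproportionate to the fair per-IP share – can be sketched as a standalone class (Python rather than the Node.js I modified; the class name and thresholds are illustrative, not the exact values I shipped):

```python
import time
from collections import defaultdict, deque


class RateLimiter:
    """Reject an IP once it exceeds a multiple of the fair per-IP share."""

    def __init__(self, window=1.0, min_total=20, share_factor=3):
        self.window = window            # sliding window in seconds
        self.min_total = min_total      # do nothing under light load
        self.share_factor = share_factor
        self.requests = defaultdict(deque)  # ip -> recent timestamps

    def allow(self, ip, now=None):
        now = time.time() if now is None else now
        cutoff = now - self.window
        # Evict timestamps that fell out of the sliding window
        for q in self.requests.values():
            while q and q[0] < cutoff:
                q.popleft()
        self.requests[ip].append(now)
        total = sum(len(q) for q in self.requests.values())
        ips = sum(1 for q in self.requests.values() if q)
        fair_share = total / ips
        return not (total > self.min_total and
                    len(self.requests[ip]) > self.share_factor * fair_share)
```

With five mice and one elephant, the elephant gets cut off once it claims well over its fair share, while the mice keep getting through.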

Level 3

Implement a distributed search engine. You are given a master node with 3 slave nodes and 4 minutes for indexing. Each node is limited to 500 MB of RAM. Queries will be dictionary words only. Results should include any suffixes/prefixes (e.g. fan should also return results for fancy or infant).

The non-optimized solution was implemented in Scala. The master node would query all 3 nodes for the search results… and return results only from the first one. Very inefficient, to say the least. I started modifying the solution to distribute the searches across the 3 nodes and combine the results in the master, but Scala turned out to be a difficult language to hack in. I moved back to Python and threw together a small Flask app which copied the directory contents to /tmp/ (I tried /dev/shm first, but there wasn’t enough space) and used grep to get the search results (grep is fast, really fast). After a couple of tries, I got to the next level.

I am not really satisfied with my solution, because search is an interesting problem and I did this without building any index. I was hoping to come back to this problem and implement it with suffix arrays and will try to write one soon.
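The suffix-array approach I had in mind could be sketched like this (naive construction, fine for a small corpus; function names are my own):

```python
def build_suffix_array(text):
    # Naive O(n^2 log n) construction: sort all suffix start positions
    return sorted(range(len(text)), key=lambda i: text[i:])


def find_occurrences(text, sa, query):
    """All positions where query occurs, via binary search on the suffix array."""
    def prefix(i):
        return text[sa[i]:sa[i] + len(query)]

    lo, hi = 0, len(sa)
    while lo < hi:  # lower bound: first suffix with prefix >= query
        mid = (lo + hi) // 2
        if prefix(mid) < query:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    hi = len(sa)
    while lo < hi:  # upper bound: first suffix with prefix > query
        mid = (lo + hi) // 2
        if prefix(mid) <= query:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[i] for i in range(start, lo))
```

Two binary searches find every occurrence of fan in “infant fancy fan”, which is exactly the substring behaviour the level asked for, without scanning the whole corpus per query.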

Level 4

Build a distributed SQLite cluster with 5 nodes. The servers listened on Unix sockets. There was an evil Octopus though, which simulated TCP latency and lossy connections. The nodes in the SQLite cluster had to be replicated to stay in sync. Scoring was based on how many queries you answered successfully, minus the network traffic used.

This was unarguably the toughest problem in the CTF. I went through the Raft paper to understand how I could implement a distributed system, but it would have been difficult to write a Raft implementation from scratch (some folks did, though). I found raftd, a sample application built on goraft. My solution was based on raftd, with almost the same structure.

I started integrating the raftd code into the sqlcluster code, which turned out to be a lot harder than I had imagined. In Raft, only the leader can perform actions; the followers must forward their requests to the leader for responses. Forwarding was easy, but the responses could get lost due to Octopus. However, every query eventually did make it to each node through the replicated log, so I set up a local cache of query results and polled the cache until the response was ready, then returned it.
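The forward-and-poll trick can be sketched as follows (Python instead of the Go I actually used; the class and method names are illustrative):

```python
import time


class ResponseCache:
    """Follower-side cache: a forwarded query's response may be lost, but
    every committed query eventually reaches each node via the replicated
    log, so the follower polls its local cache until the result appears
    (or times out, letting the client retry)."""

    def __init__(self):
        self.results = {}

    def record(self, query_id, result):
        # Called when the replicated log applies the query on this node
        self.results[query_id] = result

    def wait_for(self, query_id, timeout=1.0, interval=0.01):
        deadline = time.time() + timeout
        while time.time() < deadline:
            if query_id in self.results:
                return self.results[query_id]
            time.sleep(interval)
        return None  # caller responds with an error; client retries
```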

It was easy to capture the flag once I got my distributed failover working. But I wanted to optimize further, and looked at decreasing my network traffic first. I tried gzip, but it doesn’t compress small strings well. I realized I could compress queries by representing each keyword with a single character (e.g. SELECT with !), which shaved almost 100 bytes off each query. To speed up query responses, I put my query forwarding in a goroutine for concurrency. Finally, I switched to in-memory SQLite instead of using a file in /tmp/. After a few hundred tries, I got my highest score of 2723, 5 ranks short of the leaderboard.
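The keyword substitution is trivial to implement; this sketch uses a hypothetical token table (my actual mapping differed):

```python
# Hypothetical token table; tokens must be characters that never
# appear in legitimate queries for the substitution to be reversible
TOKENS = [('SELECT', '!'), ('INSERT INTO', '@'), ('VALUES', '#'),
          ('UPDATE', '$'), ('WHERE', '%'), ('FROM', '^')]


def compress(query):
    for word, token in TOKENS:
        query = query.replace(word, token)
    return query


def decompress(query):
    for word, token in TOKENS:
        query = query.replace(token, word)
    return query
```

Each replaced keyword saves its length minus one byte per occurrence, which adds up quickly when scoring subtracts network traffic.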

My solution for level 4 is on GitHub.


I really liked the focus on distributed systems. It gave me a chance to get acquainted with how scaling works, and the scoring system was an interesting twist. My only gripe was the huge variance in scores for the same code. It would be nice to have more advanced, real-world test cases for each level next time.
