One of the issues I deal with since developing the UniswapV3 helper package is the latency-heavy nature of working with these pools.
In order to do offline calculation of arbitrage amounts, I must have an accurate view of the liquidity for a particular pool. To get that liquidity, I must ask the blockchain. All good, but recall that a V3 pool divides its liquidity into chunks at markers called ticks. The initialized status of those ticks is tracked in a bitmap, split into “words” of 256 bits, which I have to fetch individually. Fetching any particular word is simple, and I can use multicall to fetch several at once, but there is a floor to how fast this all can work.
If I have a few thousand pools with a few hundred words each, I can easily spend a half hour fetching words and filling up the liquidity data for each V3 helper. Fine for a long-running bot, but when I’m doing rapid testing and want to iterate on a new feature, I have to pay this startup cost each time. And it gets worse as more V3 pools are built!
The solution is maintaining a persistent cache of data that my pool helpers can read and write from. But what cache should I use? It’s simple enough to do JSON, but I didn’t want to deal with a monolithic file with tens of thousands of pools. I also wanted to try something a little more scalable, so I looked at Redis for the task.
Redis is described on the project page as —
The open source, in-memory data store used by millions of developers as a database, cache, streaming engine, and message broker.
Redis is quite simple to use, and it offers a command line interface that you can play with to get the hang of it.
I won’t cover the specifics of using Redis at length, since I’m fairly new to it, but I want to demonstrate a few nice things I’ve built in the past few days.
redis-cli via Docker
You’re likely familiar with Docker already, so I’m going to demonstrate how to spin up a Redis instance in a Docker container, connect to it on the command line, then connect to it via Python and move data in and out of that instance.
Run this on the terminal to start up an instance:
[btd@dev ~]$ sudo docker run -p 6379:6379 --rm --name redis redis:latest
Unable to find image 'redis:latest' locally
latest: Pulling from library/redis
[...]
1:M 14 Mar 2023 04:31:37.397 * Ready to accept connections
Now on another console:
[btd@dev ~]$ sudo docker exec -it redis redis-cli
127.0.0.1:6379>
This is the Redis console, and we can start pushing values into it.
The main one I care about here is the HASH data type, which you can read about in the Redis documentation. It is most similar to a Python dictionary, with one important difference: it only supports one level of key-value pairs, so a nested dictionary is not directly supported.
You can solve this one of two ways. The first and most obvious is to use JSON strings for your value, and then do the work of encoding/decoding that string as you need to access it.
That feels a bit hacky to me, so I’m going to use another technique: a text delimiter. This is a simple concept used in many places.
Let’s say that I have a Python dictionary that looks like this:
my_diary = {
    'name': 'secret thoughts on blockchains',
    'password': 'not financial advice lmao',
    'crypto': {
        'BTC': 'old and busted',
        'ETH': 'new hotness'
    }
}
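The delimiter trick is easy to automate. Here’s a small sketch (the `flatten` helper and its name are my own, not part of any library) that collapses a nested dictionary into delimiter-joined keys, ready to be pushed into a hash:

```python
def flatten(nested: dict, delimiter: str = ':', prefix: str = '') -> dict:
    """Collapse a nested dictionary into a single level,
    joining the keys of nested dictionaries with a delimiter."""
    flat = {}
    for key, value in nested.items():
        full_key = f'{prefix}{delimiter}{key}' if prefix else key
        if isinstance(value, dict):
            # Recurse into sub-dictionaries, carrying the joined key as prefix
            flat.update(flatten(value, delimiter, full_key))
        else:
            flat[full_key] = value
    return flat

my_diary = {
    'name': 'secret thoughts on blockchains',
    'password': 'not financial advice lmao',
    'crypto': {
        'BTC': 'old and busted',
        'ETH': 'new hotness'
    }
}

print(flatten(my_diary))
# {'name': 'secret thoughts on blockchains',
#  'password': 'not financial advice lmao',
#  'crypto:BTC': 'old and busted',
#  'crypto:ETH': 'new hotness'}
```

Each flattened key-value pair maps directly onto one HSET call (or one field in a bulk HSET), which is exactly what we’ll do next by hand.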
I can push most of these values into a Redis hash named ‘my_diary’ using the HSET command. In fact, let’s do that:
127.0.0.1:6379> HSET 'my_diary' 'name' 'secret thoughts on blockchains'
(integer) 1
127.0.0.1:6379> HSET 'my_diary' 'password' 'not financial advice lmao'
(integer) 1
So now I’ve pushed these key-value pairs into Redis at the ‘my_diary’ hash. Take a look using HKEYS:
127.0.0.1:6379> HKEYS 'my_diary'
1) "name"
2) "password"
And retrieve the values for these keys:
127.0.0.1:6379> HGET 'my_diary' 'name'
"secret thoughts on blockchains"
127.0.0.1:6379> HGET 'my_diary' 'password'
"not financial advice lmao"
Now what if I wanted to record the juiciest gossip of all, my thoughts on ETH and BTC?
Here’s where the delimiter comes in. I choose some character that separates portions of the string used for the key. Let’s select the colon (:) and put it between the two sub-keys of my nested dictionary:
127.0.0.1:6379> HSET 'my_diary' 'crypto:ETH' 'new hotness'
(integer) 1
127.0.0.1:6379> HSET 'my_diary' 'crypto:BTC' 'old and busted'
(integer) 1
Now how can I find these?
127.0.0.1:6379> HSCAN 'my_diary' 0 MATCH 'crypto:*'
1) "0"
2) 1) "crypto:ETH"
2) "new hotness"
3) "crypto:BTC"
4) "old and busted"
This returns successive key-value pairs for all keys matching the wildcard pattern. It looks a bit clunky, but once we understand the format, it’s fairly simple to automate.
Python and Redis
Now let’s move to Python. Install the redis package with pip install redis, start a Python shell, load the module, and create an object called r to interact with our running server:
>>> import redis
>>> r = redis.Redis(decode_responses=True)
Then check that our hash exists:
>>> r.keys()
['my_diary']
And pull the values we pushed into it earlier:
>>> r.hget('my_diary', 'name')
'secret thoughts on blockchains'
>>> r.hget('my_diary', 'password')
'not financial advice lmao'
And search for the delimited entries about crypto:
>>> r.hscan('my_diary', match='crypto:*')
(0, {'crypto:ETH': 'new hotness', 'crypto:BTC': 'old and busted'})
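Reversing the delimiter trick is just as mechanical. This sketch (the `unflatten` helper is my own, not part of the redis package) rebuilds a nested dictionary from the flat key-value pairs that hscan returned:

```python
def unflatten(flat: dict, delimiter: str = ':') -> dict:
    """Rebuild a nested dictionary from delimiter-joined keys."""
    nested = {}
    for key, value in flat.items():
        parts = key.split(delimiter)
        node = nested
        # Walk (and create) intermediate dictionaries for each key segment
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested

# The dictionary portion of the r.hscan() result above:
scanned = {'crypto:ETH': 'new hotness', 'crypto:BTC': 'old and busted'}

print(unflatten(scanned))
# {'crypto': {'ETH': 'new hotness', 'BTC': 'old and busted'}}
```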
OK, pretty straightforward.
Redis as a V3 Liquidity Pool Cache
Now let’s try something a little more wild. I maintain two dictionaries inside my V3LiquidityPool helper class called tick_bitmap and tick_data.

The tick_bitmap dictionary is keyed by word position (an integer), and its values are bitmaps (integers) recording the initialized status of the 256 ticks tracked by that word.

The tick_data dictionary is keyed by tick (an integer), and its values are tuples of integers that track liquidityNet and liquidityGross at that tick.
Together, they allow the helper to track swaps across various tick ranges. The helper is smart enough to fetch missing ticks, and will use multicall whenever possible to shorten the time spent fetching them.
But what if it could pull those values from a cache instead?
So far we’ve just used a hash to store a single dictionary. Now let’s build a hash with multiple top-level keys and push data into it from our LP.
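As a preview, here is roughly how that flattening might look. Everything below is a sketch with made-up values: the pool address is a placeholder, the key layout (address:dict_name:key) and the build_cache_mapping helper are my own choices rather than anything from the actual package, and since Redis hash values must be strings, the integers are serialized before storage:

```python
def build_cache_mapping(pool_address: str, tick_bitmap: dict, tick_data: dict) -> dict:
    """Flatten a pool's two helper dictionaries into delimited keys
    suitable for a single Redis hash. Values are serialized to strings,
    since Redis hash fields cannot hold raw Python integers or tuples."""
    mapping = {}
    for word, bitmap in tick_bitmap.items():
        mapping[f'{pool_address}:tick_bitmap:{word}'] = str(bitmap)
    for tick, (liquidity_net, liquidity_gross) in tick_data.items():
        mapping[f'{pool_address}:tick_data:{tick}'] = f'{liquidity_net},{liquidity_gross}'
    return mapping

# Made-up values for illustration only
mapping = build_cache_mapping(
    pool_address='0xExamplePool',
    tick_bitmap={-58: 2**255, 0: 1},
    tick_data={-887270: (100, 100), 0: (-100, 100)},
)

# With the server from earlier still running, the whole mapping
# can be pushed in one round trip:
# r = redis.Redis(decode_responses=True)
# r.hset('v3_pool_cache', mapping=mapping)
```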