I’ve been in the tank for a bit, but have emerged with a (mostly) robust solution for quickly “hot loading” a V3 liquidity pool helper. The default mode for the V3LiquidityPool helper is to fetch unknown liquidity regions on demand, but this scales very poorly because fetching liquidity information is a high-latency operation.
I played around with both Redis and Pickle to save and restore liquidity data. I learned a few things along the way. I found laughter, sadness, pain, and finally acceptance!
Why Is It Like This?
UniswapV3 is designed with efficiency in mind. However, you cannot be efficient at all things equally. The authors of the contracts have prioritized capital efficiency and gas efficiency at the cost of complexity. It took me several months to work through the contracts and develop helpers that could abstract away many of the details. One of those messy details is the internal representation of ticks and liquidity.
If you have not done so, I encourage you to read through the entire Uniswap V3 series, starting with the exploration of the Pool Contract. You will learn a lot!
If you don’t care about learning, or you’ve already worked through the series and just want some hot hot pools, here is the rough summary: UniswapV3 swaps occur inside distinct regions of constant liquidity and exponentially increasing price marked by regularly spaced markers called ticks. Liquidity providers can provide liquidity in any range defined by two ticks. To execute a swap, the pool takes a given amount of token0 or token1, moving up or down through liquidity regions according to the direction of the swap, adjusting the available liquidity at each tick boundary.
Simple really!
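If you’d like to see that relationship as code, here is a minimal sketch in plain Python (no Uniswap libraries; the tick values are illustrative, and the prices are in raw token units, ignoring decimal scaling):

# The documented Uniswap V3 relationship: tick i marks a raw price of
# 1.0001**i (token1 per token0, before any decimal adjustment)
def tick_to_price(tick: int) -> float:
    return 1.0001**tick

# 0.3%-fee pools use a tick spacing of 60, so only ticks divisible by 60
# can serve as the boundaries of a liquidity position
TICK_SPACING = 60
tick_lower, tick_upper = 257_400, 257_460  # one spacing wide (illustrative)
assert tick_lower % TICK_SPACING == 0 and tick_upper % TICK_SPACING == 0

# swaps inside [tick_lower, tick_upper) see constant liquidity; crossing
# a boundary applies that tick's net liquidity change
print(f"{tick_to_price(tick_lower):.6e} -> {tick_to_price(tick_upper):.6e}")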
It’s fairly effortless from the perspective of the contract, since it has access to its internal storage state. There is no latency involved with managing liquidity as a swap progresses along the price scale. There is only a gas cost associated with this behavior. Move the price a little, pay a little. Move the price a lot, pay a lot.
However, from our perspective as off-chain / on-chain integratoooooooooors, we have to work harder. We do not have instant access to the contract storage state. To predict the output of a swap off-chain, we must have a local (and most importantly, accurate) representation of that liquidity data. We can retrieve it by executing read-only calls to the network, which is where the latency comes into play.
Once this data is retrieved, simulation is fast and life is good. But during the fetching phase, life is pain and you’ll wonder why you even bother.
You think you know hardship, but you’ve probably never taken a bot offline after it has spent seven days running and building up that internal liquidity state for fast off-chain simulation. Just restarted your bot? Enjoy the next 6 hours while it rebuilds the liquidity info for thousands of pools! OOOOOF.
Liquidity Snapshots
But enough about the existential pain of bot building — let’s fix this problem once and for all!
The ideal solution:
A liquidity pool helper that accepts a state snapshot during instantiation
A method of generating a state snapshot
A method of keeping a state snapshot up-to-date
No promises that my current solution is the best, but it’s much better than what I had going before.
Here’s the game plan for today’s lesson:
Build a state snapshot from scratch for a single pool
Load that state snapshot into a pool helper
Extend to multiple pools
Build a State Snapshot
We will work again with our familiar friend (Uniswap V3: wBTC-WETH, 0.3% fee).
In the previous post on pickle, I proposed a technique of finding the pool deployment block by searching for the Initialize event, then working from that block to find Mint and Burn events. That’s fine, but in my testing I discovered that some pools are initialized and then never used. Whoops! A pool must be initialized before it can be used, so I can skip that entirely and just find the Mint and Burn events. Cool, that saves us some unnecessary calls.
I also quickly realized that fetching these events separately for each pool is wasteful. Using Infura’s event caching helps a lot, but the best method is to fetch liquidity events in a generic way: iterate over a block range once, then store the relevant events for processing later.
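Here is a rough sketch of that generic approach. It reuses the V3LP contract object and w3 connection that the full script below sets up, and assumes a start_block/end_block pair; leaving out the address argument makes the filter match Mint events from every pool:

# Hypothetical sketch: fetch Mint events across ALL pools in one pass,
# then bucket them by the emitting pool address for later processing
mint_abi = V3LP.events.Mint._get_event_abi()
_, generic_filter_params = construct_event_filter_params(
    event_abi=mint_abi,
    abi_codec=w3.codec,
    fromBlock=start_block,
    toBlock=end_block,
)
events_by_pool = {}
for log in w3.eth.get_logs(generic_filter_params):
    decoded = get_event_data(w3.codec, mint_abi, log)
    events_by_pool.setdefault(decoded["address"], []).append(decoded)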
For now, let’s keep it simple and work with a single pool.
This script will connect to your Ethereum RPC (preferably a local node), fetch all Mint and Burn events for this pool, then dump them to JSON. I have not decided how to store the serialized data long-term, but JSON is fine for demonstration purposes and requires no external modules.
ethereum_uniswapv3_liquidity_events_fetcher.py
BUG FIXES:
2023-04-02 — added force=True argument to external_update to skip checking of the last update block (it was rejecting updates for past blocks). Pull from GitHub; this optional argument was added post-lesson.
import asyncio
import json
import sys
import brownie
import web3
from web3._utils.filters import construct_event_filter_params
from web3._utils.events import get_event_data
import degenbot as bot
BROWNIE_NETWORK = "mainnet-local"
UNISWAPV3_START_BLOCK = 12_369_621
async def prime_pools():
print("Starting pool primer")
liquidity_snapshot = {}
try:
with open("ethereum_liquidity_snapshot.json", "r") as file:
json_liquidity_snapshot = json.load(file)
    except (FileNotFoundError, json.JSONDecodeError):
        # no usable snapshot on disk; start from the contract deployment block
snapshot_last_block = None
else:
snapshot_last_block = json_liquidity_snapshot["snapshot_block"]
        print(
            f"Loaded LP snapshot: {len(json_liquidity_snapshot) - 1} pools @ block {snapshot_last_block}"
        )
        for pool_address, snapshot in json_liquidity_snapshot.items():
            # the "snapshot_block" key holds metadata, not pool state
            if pool_address == "snapshot_block":
                continue
            liquidity_snapshot[pool_address] = {
                "tick_bitmap": {
                    int(k): v for k, v in snapshot["tick_bitmap"].items()
                },
                "tick_data": {
                    int(k): v for k, v in snapshot["tick_data"].items()
                },
            }
V3LP = w3.eth.contract(abi=bot.uniswap.v3.abi.UNISWAP_V3_POOL_ABI)
liquidity_events = {}
for event in [V3LP.events.Mint, V3LP.events.Burn]:
print(f"processing {event.event_name} events")
start_block = (
max(UNISWAPV3_START_BLOCK, snapshot_last_block + 1)
if snapshot_last_block is not None
else UNISWAPV3_START_BLOCK
)
block_span = 10_000
done = False
event_abi = event._get_event_abi()
while not done:
end_block = min(newest_block, start_block + block_span)
_, event_filter_params = construct_event_filter_params(
event_abi=event_abi,
abi_codec=w3.codec,
address="0xCBCdF9626bC03E24f779434178A73a0B4bad62eD",
argument_filters={},
fromBlock=start_block,
toBlock=end_block,
)
try:
event_logs = w3.eth.get_logs(event_filter_params)
            except Exception:
                # the node rejected the request (likely too large a block
                # range); reduce the span and retry
block_span = int(0.75 * block_span)
continue
            # use a distinct loop variable to avoid shadowing `event`
            # (the contract event type from the outer loop)
            for log in event_logs:
                decoded_event = get_event_data(w3.codec, event_abi, log)
pool_address = decoded_event["address"]
block = decoded_event["blockNumber"]
tx_index = decoded_event["transactionIndex"]
liquidity = decoded_event["args"]["amount"] * (
-1 if decoded_event["event"] == "Burn" else 1
)
tick_lower = decoded_event["args"]["tickLower"]
tick_upper = decoded_event["args"]["tickUpper"]
# skip zero liquidity events
if liquidity == 0:
continue
                # use setdefault to create the event list for a new pool
                liquidity_events.setdefault(pool_address, []).append(
                    (
                        block,
                        tx_index,
                        (
                            liquidity,
                            tick_lower,
                            tick_upper,
                        ),
                    )
                )
print(f"Fetched events: block span [{start_block},{end_block}]")
if end_block == newest_block:
done = True
else:
start_block = end_block + 1
block_span = int(1.05 * block_span)
    for pool_address in liquidity_events:
        # start from the pool's previous snapshot (if any), then replay
        # the newly fetched events on top of it
        pool_snapshot = liquidity_snapshot.get(pool_address, {})
        snapshot_tick_data = pool_snapshot.get("tick_data", {})
        snapshot_tick_bitmap = pool_snapshot.get("tick_bitmap", {})
try:
lp_helper = bot.V3LiquidityPool(
address=pool_address,
silent=True,
tick_data=snapshot_tick_data,
tick_bitmap=snapshot_tick_bitmap,
)
        except Exception:
            # skip any pool the helper cannot build
continue
sorted_liquidity_events = sorted(
liquidity_events[pool_address],
key=lambda event: (event[0], event[1]),
)
for liquidity_event in sorted_liquidity_events:
(
event_block,
_,
(liquidity_delta, tick_lower, tick_upper),
) = liquidity_event
lp_helper.external_update(
updates={
"liquidity_change": (
liquidity_delta,
tick_lower,
tick_upper,
)
},
block_number=event_block,
fetch_missing=False,
force=True,
)
liquidity_snapshot[pool_address] = {
"tick_data": lp_helper.tick_data,
"tick_bitmap": {
key: value
for key, value in lp_helper.tick_bitmap.items()
if key != "sparse" # ignore sparse key
if value["bitmap"]
},
}
liquidity_snapshot["snapshot_block"] = newest_block
with open("ethereum_liquidity_snapshot.json", "w") as file:
json.dump(liquidity_snapshot, file, indent=2)
print("Wrote LP snapshot")
# Create a reusable web3 object to communicate with the node
# (HTTPProvider with no arguments defaults to http://localhost:8545)
w3 = web3.Web3(web3.HTTPProvider())
try:
brownie.network.connect(BROWNIE_NETWORK)
except Exception:
sys.exit(
"Could not connect! Verify your Brownie network settings using 'brownie networks list'"
)
newest_block = brownie.chain.height
if __name__ == "__main__":
asyncio.run(prime_pools())
print("Complete")
Running this will do a focused scrape of all Mint and Burn liquidity events associated with address 0xCBCdF9626bC03E24f779434178A73a0B4bad62eD. Then it sorts and processes each liquidity event in order before dumping the LP helper’s tick_data and tick_bitmap dictionaries to a JSON file named ethereum_liquidity_snapshot.json.
Load a State Snapshot
JSON uses strings for keys, so working with the snapshot will be a little cumbersome, but it is still simple enough to understand.
Fire up a Brownie console:
[devil@dev bots]$ . .venv/bin/activate
(.venv) [devil@dev bots]$ brownie console --network mainnet-local
Brownie v1.19.3 - Python development framework for Ethereum
BotsProject is the active project.
Brownie environment is ready.
>>> import json
>>> import degenbot as bot
>>> file = open('ethereum_liquidity_snapshot.json','r')
>>> snapshot = json.load(file)
Now confirm that the snapshot includes the expected data:
>>> snapshot[
'0xCBCdF9626bC03E24f779434178A73a0B4bad62eD'
]['tick_bitmap']
{
'-58': {
'bitmap': 6917529027641081856,
'block': 13035555
},
'-6': {
'bitmap': 2,
'block': 12604301
},
[...]
'57': {
'bitmap': 50216813883093446110686315385661331328818843555712276103168,
'block': 13035555
},
'9': {
'bitmap': 21741053132098019175522475746641612337257189351054491279686364306905686343680,
'block': 12760382
}
}
>>> snapshot[
'0xCBCdF9626bC03E24f779434178A73a0B4bad62eD'
]['tick_data']
{
'-887160': {
'block': 12982678,
'liquidityGross': 80064092962998,
'liquidityNet': 80064092962998
},
'-887220': {
'block': 16932064,
'liquidityGross': 96662458571310,
'liquidityNet': 96662458571310
},
[...]
'92040': {
'block': 16243768,
'liquidityGross': 1620915153,
'liquidityNet': 1620915153
},
'92100': {
'block': 16519534,
'liquidityGross': 3624442144135,
'liquidityNet': 3624442144135
}
}
Now we need to convert the keys from strings to integers, which we can do with a dictionary comprehension.
>>> tick_bitmap = {
int(key):value for key,value in
snapshot[
'0xCBCdF9626bC03E24f779434178A73a0B4bad62eD'
]['tick_bitmap'].items()
}
>>> tick_data = {
int(key):value for key,value in
snapshot[
'0xCBCdF9626bC03E24f779434178A73a0B4bad62eD'
]['tick_data'].items()
}
Now create a pool helper with the snapshots:
>>> lp_helper = bot.V3LiquidityPool(
address='0xCBCdF9626bC03E24f779434178A73a0B4bad62eD',
tick_bitmap=tick_bitmap,
tick_data=tick_data
)
And confirm that the bitmap dictionary’s sparse key has been set to False, which signifies that the bitmap is complete and has no gaps. A bitmap like this will allow the helper to execute calculations up and down the entire range without needing to query the chain for missing regions.
>>> lp_helper.tick_bitmap['sparse']
False
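With a complete bitmap, swap simulations run entirely off-chain. Assuming your installed degenbot version includes the calculate_tokens_out_from_tokens_in helper used in the earlier lessons, you can check from the same console that a quote comes back instantly, with no RPC calls for missing tick regions:
>>> lp_helper.calculate_tokens_out_from_tokens_in(
...     token_in=lp_helper.token0,    # wBTC
...     token_in_quantity=1 * 10**8,  # 1 wBTC (8 decimals)
... )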