Thanks to the curious and helpful degens in Discord, I have identified and fixed two important bugs.
One is specific to the multiprocessing-enabled Arbitrum bot recently posted, and one is specific to the V3LiquidityPool helper in the degenbot repo.
ProcessPoolExecutor Misbehavior on Windows and MacOS
Several bug reports revealed that process pools on Windows and MacOS did not behave correctly out-of-the-box.
Big Daddy Elon isn’t a fan of Substack these days so my long tweet about this won’t appear nicely, but you can follow THIS LINK to read my write-up on this.
It took some time to sort out the cause of the issue, but eventually it became clear that operating system-specific differences were to blame. On Linux (my development platform), the default method for creating a process with the multiprocessing library is “fork”. On Windows and MacOS, the default is “spawn”. The specifics of each method aren’t particularly relevant here, but I’ve made some tweaks to the bot to restore functionality on any OS that uses “spawn” (which will become the default in a future version of Python anyway; apparently “fork” is broken in several ways).
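If you want to see the difference for yourself, here’s a minimal, self-contained sketch (an illustration, not bot code) of running a process pool under “spawn”. Each worker starts a fresh interpreter, so anything the workers need must be importable at module level and the entry point must be guarded:

import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def work(x: int) -> int:
    # must live at module level so "spawn" workers can import it
    return x * x

if __name__ == "__main__":
    # force "spawn" to reproduce the Windows/MacOS behavior on Linux
    multiprocessing.set_start_method("spawn")
    with ProcessPoolExecutor() as executor:
        print(list(executor.map(work, range(4))))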
The code has been updated on the original post, so please review and update if you are having any issues on either Windows or MacOS.
Liquidity Inaccuracies in Snapshot-Loaded V3 Pool Helpers
Occasionally my bot would send a transaction involving a UniswapV3 pool that looked very profitable, but executed at partial size. It wasn’t necessarily a money-loser, but I would be stuck holding some amount of an intermediate token that I’d have to swap back to WETH.
I attributed it to the dark forest — perhaps my bot just got tripped up by a shitcoin that didn’t allow me to swap the whole balance at once.
But then a few users in Discord reported the same thing, so I dug into it.
The issue primarily affected low-liquidity V3 pools. The UniswapLpCycle helper would build a valid payload that could be executed, but the amountSpecified argument to swap was always too high. The result was the pool being emptied of whatever liquidity it had and the sqrtPrice going to the min or max of the range.
This occurs because a call to swap at the pool requires a sqrtPriceLimitX96 argument. We generally don’t care about the ending price, just the amounts in or out. The arb helper sets that price limit to the highest or lowest value depending on the direction of the swap, with the assumption that our input will be consumed or our output will be delivered before the limit is reached.
That is typically true if the amount specified can be satisfied by the reserves within the pool, which can only be determined if the liquidity information available to the helper is accurate. If that information is wrong, the helper will calculate incorrect swaps!
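As a rough sketch of that limit selection (the constant values come from Uniswap’s TickMath library; the function name is mine, not the helper’s):

# price bounds from Uniswap's TickMath library
MIN_SQRT_RATIO = 4295128739
MAX_SQRT_RATIO = 1461446703485210103287273052203988822378723970342

def sqrt_price_limit(zero_for_one: bool) -> int:
    # swapping token0 for token1 pushes the price down, so the limit goes
    # at the bottom of the range; the other direction gets the top
    return MIN_SQRT_RATIO + 1 if zero_for_one else MAX_SQRT_RATIO - 1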
It turns out that my effort to reduce loading times via liquidity snapshotting introduced this regression.
The helper had a very simple code block that adjusted the in-range liquidity whenever a Mint or Burn event (a liquidity position change) was processed.
It was:
if lower_tick <= self.tick <= upper_tick:
    self.liquidity += liquidity_delta
Which is about as straightforward as it can be with a V3 calculation. If the current tick is inside the tick range of the liquidity event, adjust it. Simple!
This is necessary because a Mint or Burn event does not emit the new liquidity value. The updated liquidity is only emitted on Swap events, even though liquidity is modified by Mint and Burn events. If you always wait until a Swap to update liquidity, you’ll be in trouble!
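To illustrate (the handler names here are hypothetical, not the helper’s actual methods): the Swap event log carries the pool’s current liquidity, while Mint and Burn only carry the size of the position change, so the helper has to apply the delta itself:

def process_swap_event(self, event) -> None:
    # Swap events emit the pool's current in-range liquidity directly
    self.liquidity = event["args"]["liquidity"]

def process_mint_event(self, event) -> None:
    # Mint events only emit the liquidity delta and its tick range, so
    # the helper must adjust its own value if the current tick is inside
    args = event["args"]
    if args["tickLower"] <= self.tick <= args["tickUpper"]:
        self.liquidity += args["amount"]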
The helper tracks two “timestamp” attributes, self.update_block and self.liquidity_update_block. It will reject stale updates if the block associated with the update is older than the last timestamp, and whenever a new update is received and processed, the timestamp is updated. All good, except that snapshot updates require the helper to accept values for blocks in the past.
For example: you create the helper at block 1,000,000 from a snapshot that was current at block 995,000, then push liquidity updates from blocks 995,001 through 999,999. The helper will accept these “stale” updates if you pass the argument force=True. The bot code uses this method during startup, but not anywhere else. Forcing stuff is generally bad, but we do need a way to work around that check…
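A sketch of that startup sequence, using the example blocks above (the fetcher and update method names are illustrative; check the repo for the real API):

pool = V3LiquidityPool(address=POOL_ADDRESS)  # constructor fetches live state, e.g. at block 1,000,000

# replay the Mint/Burn history between the snapshot block and the live block
for event in get_liquidity_events(995_001, 999_999):  # hypothetical fetcher
    pool.external_update(event, force=True)  # accept blocks older than the stored timestamp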
The liquidity pool helper assigns itself an update_block inside the constructor, and retrieves the current in-range liquidity.
It all worked fine except for one thing: if force=True, the stale block checks were skipped, the state values were updated, and any historical liquidity change event would trigger an update of in-range liquidity!
So from the previous example, you’d have a pool helper with in-range liquidity accurate at block 1,000,000, then update all the liquidity positions up to block 995,000, and then start pushing updates to it. The timestamp check would be bypassed, and the in-range liquidity adjuster would run on every historical event that included the current tick. Needless to say, this leads to increasingly inaccurate values for in-range liquidity if many events occurred between the last snapshot block and the current block.
Highly active pools generate a lot of Swap events, so this bad in-range liquidity value was often overwritten with an accurate one from the pool. But for low-liquidity, low-activity pools, the bad value would persist. If a pool helper was built for one of these pools and then used in an arbitrage calculation, the expected profit due to this “phantom in-range liquidity” would be higher than the pool could actually deliver, and the “partial swap” bug was the result.
Despite the complexity of the bug and the difficulty of discovering it, the fix is simple.
The helper now checks the update block and only adjusts in-range liquidity for new blocks. It also skips updating the timestamp when force is set, which avoids the timestamp getting into an inconsistent state.
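In rough terms (attribute names taken from this post; the exact code is in the repo), the adjustment block now behaves like this:

# only adjust in-range liquidity for events newer than the last update
if (
    lower_tick <= self.tick <= upper_tick
    and block_number > self.liquidity_update_block
):
    self.liquidity += liquidity_delta

# a forced (historical) update should not advance the timestamp
if not force:
    self.liquidity_update_block = block_number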
I’ve pushed the fix to the repo, so please pull and review it, and please continue to report bugs!