This is a short post detailing two bug fixes for the Uniswap V3 bot in the last post.
I will release periodic posts like this as I identify and fix bugs. Thank you for reporting issues!
Last-Leg V3 Slippage
A reader pointed to some unexpected behavior in his testing bot.
Here’s the summary: A V2 pool will simply revert if the requested swap cannot be performed at current reserves. All good there. A V3 pool will attempt to execute the requested swap up to the point that one of its constraints is violated. Since the V3 pool only expects payment after the swap calculation, there is no risk of losing your input, but there is a risk of the swap being performed at < 100% of the requested size.
The behavior this reader saw occurred in an arbitrage of a Sushiswap (V2-derivative) pool to a Uniswap V3 pool. In this case, another searcher found a similar arbitrage between Uniswap V2 and that same V3 pool. The rival searcher’s bundle was included first in the block, and our reader’s bundle was included after. Because each arbitrage started in different V2 pools, the initial swap was valid. The only difference was the “last-leg” V3 swap.
The rival searcher secured the full arbitrage opportunity, while our reader’s arbitrage occurred at a slightly higher price, which reduced his net profit to below the gas cost. The requested V3 swap was still valid, so it was included by the block builder, despite being slightly unprofitable for our searcher.
How Did This Happen?
The UniswapLpCycle arbitrage helper optimizes arbitrage amounts with an “unset” sqrtPriceLimitX96, which means that the swap calculation will proceed until the ideal amount is calculated. That’s desirable in a vacuum, but the payload generator needs to protect profits by setting a limit against the effects of negative price impact.
We can do this with a simple tweak. The UniswapV3Pool contract allows the argument amountSpecified to be positive or negative. The behavior inside the swap calculation is mostly unchanged, but the difference is where computeSwapStep stops.
When positive, the swap is considered an exactInput type. The pool will attempt to spend all of your tokens.
When negative, the swap is considered an exactOutput type. The pool will attempt to spend as many tokens as needed until you receive the amount you requested.
An exactInput swap can therefore “slip” if it is the final leg in an arbitrage, since the V3 pool might deliver fewer tokens than you expect if the price changes before your transaction is included.
An exactOutput swap will not behave like this if it is the final leg in an arbitrage: if the price changes against you before your transaction is included, the final payment will be higher than your balance and the transaction will revert.
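To make the distinction concrete, here is a minimal sketch, using a hypothetical helper that is not part of degenbot, of how the signed amountSpecified value for UniswapV3Pool.swap (which takes recipient, zeroForOne, amountSpecified, sqrtPriceLimitX96, and data) might be built for each mode:
# hypothetical helper, not part of degenbot: builds the signed
# amountSpecified argument for a UniswapV3Pool.swap() call

def v3_amount_specified(
    token_in_quantity: int,
    token_out_quantity: int,
    exact_input: bool,
) -> int:
    if exact_input:
        # exactInput: positive value, the pool spends exactly this many
        # input tokens and delivers whatever the current price allows,
        # so the output can "slip"
        return token_in_quantity
    # exactOutput: negative value, the pool delivers exactly this many
    # output tokens and charges whatever input is required, reverting
    # if we cannot pay
    return -token_out_quantity

# exactInput: spend exactly 1 WETH (18 decimals), output may slip
assert v3_amount_specified(1 * 10**18, 0, exact_input=True) == 10**18

# exactOutput: receive exactly 2000 USDC (6 decimals), input may grow
assert v3_amount_specified(0, 2000 * 10**6, exact_input=False) == -2000 * 10**6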
What About Setting sqrtPriceLimitX96?
The swap calculation provides a nice “safety” on price impact, instructing the calculation to stop when the final sqrtPrice of a trade hits some limit. This is helpful if you’re willing to accept a partial swap, but we are not interested in partial swaps.
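For reference, the “unset” limits used in the payload snippets below are just one step inside the bounds defined by the Uniswap V3 TickMath library (constant values copied from the core contracts); a limit this wide means the swap loop only stops when amountSpecified is exhausted or liquidity runs out. The helper function here is illustrative only:
# sqrt price bounds from the Uniswap V3 core TickMath library
MIN_SQRT_RATIO = 4295128739
MAX_SQRT_RATIO = 1461446703485210103287273052203988822378723970342

def unrestricted_price_limit(zero_for_one: bool) -> int:
    # one step inside the bound, effectively disabling the price-limit
    # "safety" so the swap runs to completion or until liquidity is gone
    return MIN_SQRT_RATIO + 1 if zero_for_one else MAX_SQRT_RATIO - 1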
OK So What Can We Do?
The resolution is to modify the payload generator to intelligently specify an exactOutput swap under certain conditions.
This is fairly simple: just change this block in _build_multipool_tokens_out (github source):
pools_amounts_out.append(
    {
        "uniswap_version": 3,
        "amountSpecified": token_in_quantity,  # for an exactInput swap, always a positive number representing the input amount
        "zeroForOne": _zeroForOne,
        "sqrtPriceLimitX96": TickMath.MIN_SQRT_RATIO + 1
        if _zeroForOne
        else TickMath.MAX_SQRT_RATIO - 1,
    }
)
To this:
pools_amounts_out.append(
    {
        "uniswap_version": 3,
        # for an exactOutput swap, amountSpecified is a negative
        # number representing the OUTPUT amount
        "amountSpecified": -token_out_quantity,
        "zeroForOne": _zeroForOne,
        "sqrtPriceLimitX96": TickMath.MIN_SQRT_RATIO + 1
        if _zeroForOne
        else TickMath.MAX_SQRT_RATIO - 1,
    }
)
Pretty simple, actually: this specifies that we want a fixed output quantity (expressed as a negative number) instead of a fixed input (expressed as a positive number).
Not So Fast! What About First-Leg?
If the V3 pool is the first leg in the arbitrage path, it should be formatted as an exactInput swap, since we want to receive less if the price has shifted away from us. This gives us assurance that the next leg will fail in one of two ways:
If it is a V2 pool, the transferred amount will be insufficient to perform the next swap.
If it is a V3 pool, it will be formatted as an exactOutput swap, which will similarly fail because the intermediate amount will be insufficient.
If we made all V3 swaps exactOutput, the same issue would re-appear on V3 pools in the first-leg position, since we might overpay for the initial swap if the price moved away from us.
Automatic Swap Mode
One more modification allows us to intelligently set the behavior of each swap call in the payload generator. Simply look at the position of the V3 pool: if it’s the first position, call for exactInput; otherwise, call for exactOutput. I do this with a simple ternary operator here:
pools_amounts_out.append(
    {
        "uniswap_version": 3,
        # for an exactInput swap, amountSpecified is a positive
        # number representing the INPUT amount
        # for an exactOutput swap, amountSpecified is a negative
        # number representing the OUTPUT amount
        "amountSpecified": token_in_quantity
        # exactInput for first leg (i==0)
        # exactOutput for others
        if i == 0
        else -token_out_quantity,
        "zeroForOne": _zeroForOne,
        "sqrtPriceLimitX96": TickMath.MIN_SQRT_RATIO + 1
        if _zeroForOne
        else TickMath.MAX_SQRT_RATIO - 1,
    }
)
TL;DR
Run a git pull to sync up with the latest degenbot code and apply this change on your local machine.
“Uninteresting” Block Processing
One interesting thing I noticed while watching my bot is that some blocks would pass without any arbs being processed. I looked into it and realized that my watch_events coroutine only started the process_onchain_arbs coroutine when it detected a pool change that was associated with an arbitrage helper.
However, there may be arbitrage helpers that still show profit even if none of their pools have changed in that block, so I still want to start the processing once per block.
A simple fix: change this block inside watch_events from this:
while True:
    try:
        message = json.loads(
            await asyncio.wait_for(
                websocket.recv(),
                timeout=_TIMEOUT,
            )
        )
    # if no event has been received in _TIMEOUT seconds, assume all
    # events have been received, reduce the list of arbs to check with
    # set(), repackage and send for processing, then clear the working
    # queue
    except asyncio.exceptions.TimeoutError as e:
        end = time.time()
        # print(f"actual time = {end - start:0.2f}s")
        if arbs_to_check:
            asyncio.create_task(
                process_onchain_arbs(
                    deque(set(arbs_to_check)),
                )
            )
            arbs_to_check.clear()
        continue
    finally:
        start = time.time()
To this:
last_processed_block = newest_block

while True:
    try:
        message = json.loads(
            await asyncio.wait_for(
                websocket.recv(),
                timeout=_TIMEOUT,
            )
        )
    # if no event has been received in _TIMEOUT seconds, assume all
    # events have been received, reduce the list of arbs to check with
    # set(), repackage and send for processing, mark the current block
    # as processed, then clear the working queue
    except asyncio.exceptions.TimeoutError as e:
        end = time.time()
        # print(f"actual time = {end - start:0.2f}s")
        # if arbs_to_check:
        if last_processed_block < newest_block:
            asyncio.create_task(
                process_onchain_arbs(
                    deque(set(arbs_to_check)),
                )
            )
            last_processed_block = newest_block
            arbs_to_check.clear()
        continue
    finally:
        start = time.time()
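The snippet above assumes that newest_block is kept up to date elsewhere in the bot. As a point of reference, here is a minimal sketch of how a newHeads subscription could maintain it. This is assumption-only: the watch_new_blocks coroutine and the shared newest_block variable are illustrative, not the degenbot implementation, and it assumes a node websocket endpoint plus the websockets package:
import json

import websockets

newest_block = 0

async def watch_new_blocks(websocket_uri: str):
    # subscribe to new block headers and record the latest block number
    # in the shared newest_block variable used by watch_events
    global newest_block
    async with websockets.connect(websocket_uri) as websocket:
        await websocket.send(
            json.dumps(
                {
                    "jsonrpc": "2.0",
                    "id": 1,
                    "method": "eth_subscribe",
                    "params": ["newHeads"],
                }
            )
        )
        await websocket.recv()  # discard the subscription confirmation
        while True:
            message = json.loads(await websocket.recv())
            # block numbers arrive as hex strings, e.g. "0x10d4f3b"
            newest_block = int(message["params"]["result"]["number"], 16)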