I continue to extend the feature set of the degenbot code base. Many changes aren’t big enough for a dedicated post, but they risk being overlooked and untested if I just push them to GitHub and move on to the next shiny thing.
Here is a list of some interesting new developments that you might enjoy.
Sushiswap V3
I almost missed this, and no surprise: almost nobody cares about Sushiswap anymore. It got a huge surge of interest from DeFi farming aficionados during the last bull run, but it has largely fallen out of favor since.
Last month they posted a blog update about their new product offering: Sushi Concentrated Liquidity v3. The phrases “concentrated liquidity” and “v3” caught my eye for obvious reasons, so I read it.
They don’t mention it in the press release, but they have simply forked Uniswap V3 and deployed it across several blockchains (including Degen Code fan favorites Arbitrum and mainnet Ethereum).
The good news coming from this “copy paste” approach to innovation [sic] is that the V3LiquidityPool helper is reusable. The hard work continues to pay off!
You can review these GitHub repos for Sushiswap v3-core and v3-periphery. You’ll find all of the familiar contracts with the same functions. They even have the same callbacks, which is really nice for us. Very little has to change at the smart contract level.
Here is an Arbitrum Sushiswap V3 LP fetcher script, very loosely adapted from the Uniswap V3 LP fetcher:
arbitrum_lp_fetcher_sushiswapv3.py
from brownie import network, Contract, chain
import sys
import json
import time

BROWNIE_NETWORK = "arbitrum-local"

# starting block span to process with getLogs
BLOCK_SPAN = 5_000

try:
    network.connect(BROWNIE_NETWORK)
except:
    sys.exit("Could not connect!")

exchanges = [
    {
        "name": "Sushiswap (V3)",
        "filename": "arbitrum_lps_sushiswapv3.json",
        "factory_address": "0x1af415a1EbA07a4986a52B6f2e7dE7003D82231e",
        "factory_deployment_block": 75998697,
    },
]

newest_block = chain.height

for name, factory_address, filename, deployment_block in [
    (
        exchange["name"],
        exchange["factory_address"],
        exchange["filename"],
        exchange["factory_deployment_block"],
    )
    for exchange in exchanges
]:
    print(f"DEX: {name}")

    # load the factory from the local brownie store, falling back
    # to the block explorer if it is not found
    try:
        factory_contract = Contract(factory_address)
    except:
        try:
            factory_contract = Contract.from_explorer(factory_address)
        except:
            factory_contract = None
        finally:
            if factory_contract is None:
                sys.exit("FACTORY COULD NOT BE LOADED")

    try:
        with open(filename) as file:
            lp_data = json.load(file)
    except FileNotFoundError:
        lp_data = []

    if lp_data:
        previous_pool_count = len(lp_data)
        print(f"Found previously-fetched data: {previous_pool_count} pools")
        previous_block = lp_data[-1].get("block_number")
        print(f"Found pool data up to block {previous_block}")
    else:
        previous_pool_count = 0
        previous_block = deployment_block

    failure = False
    start_block = previous_block + 1

    while start_block <= newest_block:
        if failure:
            # reduce the working span by 10%
            BLOCK_SPAN = int(0.9 * BLOCK_SPAN)
        else:
            # increase the working span by 0.1%
            BLOCK_SPAN = int(1.001 * BLOCK_SPAN)

        end_block = min(newest_block, start_block + BLOCK_SPAN)

        try:
            pool_created_events = factory_contract.events.PoolCreated.getLogs(
                fromBlock=start_block, toBlock=end_block
            )
        except ValueError:
            failure = True
            time.sleep(1)
            continue
        else:
            print(
                f"Fetched PoolCreated events, block range [{start_block},{end_block}]"
            )
            # set the next start block
            start_block = end_block + 1
            failure = False

        for event in pool_created_events:
            lp_data.append(
                {
                    "pool_address": event.args.get("pool"),
                    "fee": event.args.get("fee"),
                    "token0": event.args.get("token0"),
                    "token1": event.args.get("token1"),
                    "block_number": event.get("blockNumber"),
                    "type": "SushiswapV3",
                }
            )

        # save progress after each successful chunk
        with open(filename, "w") as file:
            json.dump(lp_data, file, indent=2)

    print(f"Saved {len(lp_data) - previous_pool_count} new pools")
Feel free to modify the 2pool and 3pool parser scripts to generate new arbitrage paths using these pools. There are roughly 315 Sushiswap V3 pools on Arbitrum as I write this, and more are being created every day.
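Since these pools are byte-for-byte Uniswap V3 forks, the existing helper consumes them directly. Here is a minimal sketch, assuming an active brownie connection and the top-level V3LiquidityPool import (building helpers for every pool will make many RPC calls):

import json
from degenbot import V3LiquidityPool

# load the pool addresses saved by the fetcher script above
with open("arbitrum_lps_sushiswapv3.json") as file:
    lp_data = json.load(file)

# build a helper for each Sushiswap V3 pool; assumes an active
# brownie/web3 connection, as in earlier posts
sushi_v3_pools = [
    V3LiquidityPool(address=pool["pool_address"]) for pool in lp_data
]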
The required change at the arbitrage contract level is adjusting the pool address verification function to accept a factory address input, instead of using the hard-coded value.
Here is an updated contract that supports both factories:
arbitrum_executor_v3.vy
# @version ^0.3

from vyper.interfaces import ERC20 as IERC20

interface IWETH:
    def deposit(): payable

interface IUniswapV3Pool:
    def factory() -> address: view
    def fee() -> uint24: view
    def token0() -> address: view
    def token1() -> address: view

OWNER_ADDR: immutable(address)
WETH_ADDR: constant(address) = 0x82aF49447D8a07e3bd95BD0d56f35241523fBab1

MAX_PAYLOADS: constant(uint256) = 16
MAX_PAYLOAD_BYTES: constant(uint256) = 1024

struct payload:
    target: address
    calldata: Bytes[MAX_PAYLOAD_BYTES]
    value: uint256

@external
@payable
def __init__():
    OWNER_ADDR = msg.sender

    # wrap initial Ether to WETH
    if msg.value > 0:
        IWETH(WETH_ADDR).deposit(value=msg.value)

@external
@payable
def execute_payloads(
    payloads: DynArray[payload, MAX_PAYLOADS],
):
    assert msg.sender == OWNER_ADDR, "!OWNER"

    for _payload in payloads:
        raw_call(
            _payload.target,
            _payload.calldata,
            value=_payload.value,
        )

@internal
@pure
def generate_v3_address(
    factory: address,
    tokenA: address,
    tokenB: address,
    fee: uint24
) -> address:
    # sort the token addresses, matching the factory's CREATE2 salt
    token0: address = tokenA
    token1: address = tokenB
    if convert(tokenA, uint160) > convert(tokenB, uint160):
        token0 = tokenB
        token1 = tokenA

    # CREATE2 address: last 20 bytes of
    # keccak256(0xFF + factory + salt + init code hash)
    return convert(
        slice(
            convert(
                convert(
                    keccak256(
                        concat(
                            b'\xFF',
                            convert(factory, bytes20),
                            keccak256(
                                _abi_encode(
                                    token0,
                                    token1,
                                    fee
                                )
                            ),
                            0xe34f199b19b2b4f47f68442619d555527d244f78a3297ea89325f843f87b8b54,
                        )
                    ),
                    uint256
                ),
                bytes32
            ),
            12,
            20,
        ),
        address
    )

@external
def uniswapV3SwapCallback(
    amount0_delta: int256,
    amount1_delta: int256,
    data: Bytes[32]
):
    assert amount0_delta > 0 or amount1_delta > 0, "REJECTED 0 LIQUIDITY SWAP"

    # get the factory/token0/token1 addresses and fee reported by msg.sender
    factory: address = IUniswapV3Pool(msg.sender).factory()
    assert factory == 0x1F98431c8aD98523631AE4a59f267346ea31F984 or factory == 0x1af415a1EbA07a4986a52B6f2e7dE7003D82231e, "INVALID FACTORY"
    token0: address = IUniswapV3Pool(msg.sender).token0()
    token1: address = IUniswapV3Pool(msg.sender).token1()
    fee: uint24 = IUniswapV3Pool(msg.sender).fee()

    assert msg.sender == self.generate_v3_address(factory, token0, token1, fee), "INVALID V3 LP ADDRESS"

    # repay token back to pool
    if amount0_delta > 0:
        IERC20(token0).transfer(msg.sender, convert(amount0_delta, uint256))
    elif amount1_delta > 0:
        IERC20(token1).transfer(msg.sender, convert(amount1_delta, uint256))

@external
@payable
def __default__():
    # accept basic Ether transfers to the contract with no calldata
    if len(msg.data) == 0:
        return
    # revert on all other calls
    else:
        raise
You’ll note that the contract checks the address reported by the factory call against two known addresses in the callback. Please verify both on Arbiscan: UniswapV3 Factory & SushiswapV3 Factory.
The contract only performs a verification of msg.sender against the values returned from fee, factory, token0, and token1.
A malicious contract could attack you by reporting fake values that, hashed together with the Uniswap init code hash, produce a seemingly valid pool address. To guard against this, the contract only processes transfers for callers that report a known factory and whose own address matches the CREATE2 derivation of the reported values.
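If you want to sanity-check that derivation off-chain, here is a Python sketch of the same CREATE2 computation. The init code hash is the value embedded in the contract above; the eth_abi and eth_utils packages (with the modern eth_abi encode signature) are assumed:

from eth_abi import encode
from eth_utils import keccak, to_checksum_address

# init code hash shared by Uniswap V3 and the Sushiswap V3 fork,
# as embedded in the Vyper contract above
POOL_INIT_CODE_HASH = bytes.fromhex(
    "e34f199b19b2b4f47f68442619d555527d244f78a3297ea89325f843f87b8b54"
)

def generate_v3_address(factory: str, token_a: str, token_b: str, fee: int) -> str:
    # sort the tokens, matching the factory's CREATE2 salt
    token0, token1 = sorted([token_a, token_b], key=lambda addr: int(addr, 16))
    salt = keccak(encode(["address", "address", "uint24"], [token0, token1, fee]))
    # CREATE2: last 20 bytes of keccak256(0xFF + deployer + salt + init code hash)
    raw = keccak(b"\xff" + bytes.fromhex(factory[2:]) + salt + POOL_INIT_CODE_HASH)
    return to_checksum_address(raw[-20:])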
Camelot Stable Liquidity Pools
It was simple to support the “volatile pair” pool type from Camelot DEX, which is based on the Uniswap V2 constant product function x*y = k. However, they also offer a “stable pair” pool type within that same contract. This does not use the V2 functions, instead using the Solidly invariant x**3 * y + y**3 * x = k.
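To build some intuition for that invariant, here is a float-based sketch of solving it for a swap output with Newton’s method. This is illustrative only: the actual helper, like Solidly itself, works in fixed-point integer math and normalizes reserves for token decimals.

# illustrative only: solve x**3 * y + y**3 * x = k for the post-swap
# reserve y, then return the implied output amount

def stable_invariant(x: float, y: float) -> float:
    return x**3 * y + y**3 * x

def get_amount_out(dx: float, reserve_x: float, reserve_y: float) -> float:
    k = stable_invariant(reserve_x, reserve_y)
    x_new = reserve_x + dx
    y = reserve_y  # initial guess
    for _ in range(255):
        f = stable_invariant(x_new, y) - k
        df = x_new**3 + 3 * x_new * y**2  # derivative with respect to y
        y_next = y - f / df
        if abs(y_next - y) < 1e-6:
            break
        y = y_next
    return reserve_y - y

# a balanced stable pool quotes very close to 1:1
print(get_amount_out(100.0, 1_000_000.0, 1_000_000.0))  # just under 100.0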
I have committed this functionality to the CamelotLiquidityPool helper, so please pull, test, report bugs, and enjoy!
Pools of the “stable pair” type will be detected by the helper automatically. You can verify the outputs of the helper against a live USDT/USDC contract HERE.
Arbitrum Node Pruning
The recently released 2.0.14 version of Nitro introduced a significant change: the default configuration is now an archive node! I started investigating this after discovering that my node was consuming roughly 550 GB, a large increase compared to the initial install of 2.0.13.
The release notes include a switch (--node.staker.enable=false) to disable this behavior. Add it to the Docker compose file I shared before to switch your node back to “full”.
I have also added another switch to enable full node state pruning (--init.prune=full). I restarted my container, which has been executing the prune for roughly 4 hours (and displaying many messages about traversing the trie database).
EDIT: The prune has completed, resulting in roughly 190 GB of savings. Success! Restarting the container with the --init.prune=full option begins another complete prune, so I recommend using this option once if you have a bloated storage volume, then removing it from your container stack afterward.
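For reference, here is a sketch of where the two switches fit in a compose service definition. Everything except those two flags is a stand-in for whatever your existing container stack already uses:

services:
  nitro:
    image: offchainlabs/nitro-node:v2.0.14-xxxx  # your existing image tag
    volumes:
      - ./arbitrum-data:/home/user/.arbitrum     # your existing data volume
    command:
      # ... your existing flags ...
      - --node.staker.enable=false  # revert the new default back to a "full" node
      - --init.prune=full           # one-time prune; remove after it completes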
Module-Level Logging
I have finally implemented an industry best practice by stripping the print statements from the code in the degenbot repo. They have been replaced with calls to the logging module: I’ve primarily swapped print with logging.info, converted some lower-level output to logging.debug, and used logging.exception inside “catch-all” try/except blocks.
Logging within the degenbot code base uses a common logger named “degenbot”. It is created with some sensible defaults: level INFO, output to a StreamHandler. It is available within the module namespace as degenbot.logger, so you are free to adjust the level and add or remove stream handlers as you please. Many thanks to reader alehpineda, who shared his logging configuration in THIS GitHub issue. I’m particularly interested in the automatic log rotation, but have kept this initial change as simple as possible.
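For example, to lower the level and attach your own handler (the file name and format string here are just illustrative):

import logging
import degenbot

# the shared "degenbot" logger defaults to INFO with a StreamHandler
degenbot.logger.setLevel(logging.DEBUG)

# attach an additional handler that writes to a local file
file_handler = logging.FileHandler("bot.log")
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
)
degenbot.logger.addHandler(file_handler)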
NOTE: By default, a logger will pass messages up to its parent logger (ultimately the root logger). This can result in messages appearing twice if both loggers have handlers at the same severity level (INFO, for example). If you are seeing log messages from your bot appear twice, you can disable propagation by setting logger.propagate = False on that logger.
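In the degenbot case, that one-liner looks like this:

import degenbot

# keep "degenbot" records from also being emitted by the root logger's handlers
degenbot.logger.propagate = False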