Base Szn continues.
This project is a natural culmination of a series of articles covering Transient Storage, Uniswap Transfer Optimization, Running a Local Base Node, and Extracting Base-specific Data.
In the spirit of previous projects, this is a “batteries-included” release with several scripts, a working bot, a smart contract, and a discussion of new features, bugs and workarounds, and my experience operating the bot so far.
Reasonable Expectations
You got access to the full project code at the same time as hundreds of your fellow readers. Since the barrier to entry here is essentially zero (copy, paste, launch), you should expect that high competition will quickly reduce the profitability of any one bot operator to zero.
I have turned off my bot and will stop searching with this strategy to allow as many of you as possible to capture the remaining profit.
Luckily, playing on Base is incredibly cheap, so even if you never land a successful transaction, it will be a very inexpensive experience and great practice. Uniswap forks are a dime a dozen, and porting this bot to work on other chains (or even other exchanges on Base) is simple.
Bot Structure
This is a backrunner that executes atomic two-pool arbitrage between Uniswap and Uniswap-derived exchanges.
It monitors Uniswap V2/V3, SushiSwap V2/V3, PancakeSwap V2/V3, and SwapBased V2.
The smart contract is written in Vyper and uses transient storage to recursively deliver payloads in both V2 and V3 callbacks without relying on encoded information sent through the `data` parameter to `swap()`.
My Experience
I have been testing on Base for a little over a month, and have sent roughly 20,000 transactions across various contracts.
I run my node from my home office, which is served by gigabit cable. Latency and throughput are good for residential broadband. My typical ping time to the Base sequencer is 10ms. I can consistently broadcast a transaction at block n and land in block n+2. I occasionally confirm a transaction in block n+1 when node latency is low. It is not consistent enough that I want to compete for the top of the next block, but I am very interested in how a co-located VPS might perform. If any of you run in data centers and want to chase this strategy, please share your results!
Two-pool arbitrage is a highly competitive arena, and scoring big arbs is rare since there are many bots watching the high-return opportunities. I have landed a few $10+ profit arbs, which is fun, but they are rare and mostly came in a tidy bundle after I implemented a “leftovers” check, which I will discuss later.
I have profited roughly 0.02 ETH, approximately $40. A dollar a day is absolutely nothing to get excited about, but everyone knows how to do two-pool arbitrage and competition is fierce. I’ve also burned a lot of gas doing fee testing, working through bugs, and deploying several contract iterations. Profitability has been much better this past week, so the actual results would look more impressive if I excluded all the trial runs.
There were also several transactions with multi-dollar fees, a result of the unfortunate L1 blob fee spike in mid-June. See this nice Twitter overview for reference on how that happened.
Base Chain Lessons Learned
Sequencer
The Base sequencer is responsible for ordering transactions and block building. It is completely centralized, but seems to operate neutrally.
Transaction Ordering
The sequencer orders transactions based on gas fee, which is fair and predictable provided that you can submit a transaction to the sequencer before some cutoff. I cannot find any published information on that cutoff, so I’m mostly operating on a hunch. Someone is landing those next-block transactions, and the only “dial” that I’m aware of after gas fee is latency. Therefore I assume they are located in a low-latency data center and running an optimized strategy.
Stalled Transactions
I have observed that some transactions will be sent to the sequencer and remain in its txpool but not be recorded in a block despite having an appropriate gas fee. This caused me a lot of heartburn initially, because subsequent transactions would queue behind the stalled one and remain there. Broadcasting a replacement for the stalled transaction would clear it, then all the ones behind it would be included in new blocks. Nearly all of these would revert since their opportunities had long since expired, so just a big waste of time and money.
To deal with this, I have implemented a nonce tracking and expiry feature which mitigates the tendency for transactions to pile up behind a stalled nonce.
There is also a minimum priority fee of 50 wei for inclusion in a block. Public RPCs will reject transactions below this value, but Reth will accept them and relay them to the sequencer, where they will stall.
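For illustration, here is roughly how that floor would be applied when building an EIP-1559 transaction with web3py. This is a sketch only; the endpoint and fee headroom are placeholders, not the project's exact code:

```python
from web3 import Web3

# Placeholder endpoint for a local Base node
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

MIN_PRIORITY_FEE = 50  # wei; below this the sequencer will not include the transaction

base_fee = w3.eth.get_block("latest")["baseFeePerGas"]

tx_fields = {
    "chainId": 8453,  # Base mainnet
    "type": 2,
    "maxPriorityFeePerGas": MIN_PRIORITY_FEE,
    # Leave headroom above the current base fee so the transaction survives a few blocks
    "maxFeePerGas": 2 * base_fee + MIN_PRIORITY_FEE,
}
```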
Unintuitive L1 Fees
Base follows all of the Ethereum mainnet gas rules, so an L2 operation costs the same as an L1 operation in terms of gas. However, when a Base transaction is included, it also incurs a separate fee to record its data on L1.
The L1 cost depends mostly on the length of the serialized L2 transaction. The Base docs about fees link directly to the Optimism docs.
Here’s what you need to know: Base (and other Optimism chains) implement a contract called `GasPriceOracle` which can be called to estimate the L1 cost for a given L2 transaction. The `getL1Fee` function accepts a serialized unsigned transaction and returns an L1 fee estimate, which is added to the total cost of the transaction.
The Base `GasPriceOracle` contract lives at address `0x420000000000000000000000000000000000000F`. There are other special Base contracts listed in the docs.
The project bot will query this contract and include the L1 fee in its profit calculation.
Web3py and other Python modules do not make it easy to generate an unsigned serialized transaction. Since the L1 fee is small compared to the L2 fee, I simply send the signed serialized transaction, which is longer, and assume that the resulting overestimation of the L1 fee is insignificant.
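A minimal sketch of that query, assuming a local Base node and web3py; the ABI fragment covers only `getL1Fee`, and the helper name is mine rather than degenbot's:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder local Base node

GAS_PRICE_ORACLE = Web3.to_checksum_address("0x420000000000000000000000000000000000000F")

# Minimal ABI fragment covering only the getL1Fee function
ORACLE_ABI = [
    {
        "name": "getL1Fee",
        "type": "function",
        "stateMutability": "view",
        "inputs": [{"name": "_data", "type": "bytes"}],
        "outputs": [{"name": "", "type": "uint256"}],
    }
]

oracle = w3.eth.contract(address=GAS_PRICE_ORACLE, abi=ORACLE_ABI)


def estimate_l1_fee(serialized_tx: bytes) -> int:
    """Return the estimated L1 data fee (in wei) for a serialized transaction.

    Passing the signed serialization slightly overestimates the fee, since the
    unsigned payload is shorter, per the shortcut described above.
    """
    return oracle.functions.getL1Fee(serialized_tx).call()
```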
Ecotone Upgrade
Optimism publishes a network upgrade schedule, similar to the approach on Ethereum. It’s under-advertised, and completely omitted from the Base docs, but you can deploy Cancun-versioned EVM contracts on Base following the Ecotone upgrade. This means that the transient storage opcodes from EIP-1153 are fully supported.
New Features
Pub/Sub Updates
I have built a simple pub/sub feature into the pool and arbitrage helpers. Each helper can subscribe to messages from other helpers, and publish messages to their subscribers.
The feature is not highly refined, but I have some “canned” behavior in place. When an arbitrage helper is initialized, it will subscribe to the relevant pool helpers. When those pool helpers are updated, they will notify their subscribers. And when the arbitrage helper receives an update, it will also publish a message to its subscribers that it should be checked.
The upshot is that instead of sorting through thousands of pool events for relevant arbitrage paths on each new block, the processing routine simply subscribes to events from arbitrage helpers and adds them to a set to be evaluated on the next cycle. Very simple and low overhead. Look for it in the small `ArbitrageSubscriber` class, which subscribes to all known arbitrage helpers after startup and continues to listen for their updates. This is a good place to implement filtering, and I’ve included (commented out) a simple success threshold check that will discard arbitrage helpers that fail too often after some number of transactions have been sent.
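The pattern itself looks roughly like this. This is an illustrative sketch only; the class and method names are mine, not the actual degenbot API:

```python
class Publisher:
    """Anything that can notify subscribers of a state change."""

    def __init__(self) -> None:
        self._subscribers: set["Subscriber"] = set()

    def subscribe(self, subscriber: "Subscriber") -> None:
        self._subscribers.add(subscriber)

    def notify(self, message: str) -> None:
        for subscriber in self._subscribers:
            subscriber.notification(self, message)


class Subscriber:
    """Anything that wants to hear about state changes."""

    def notification(self, publisher: Publisher, message: str) -> None:
        raise NotImplementedError


class PoolHelper(Publisher):
    def update_state(self) -> None:
        # ... apply the new reserves/liquidity, then tell everyone watching this pool
        self.notify("pool updated")


class ArbitrageHelper(Publisher, Subscriber):
    def __init__(self, pools: list[PoolHelper]) -> None:
        super().__init__()
        for pool in pools:
            pool.subscribe(self)  # canned behavior: watch every pool in the path

    def notification(self, publisher: Publisher, message: str) -> None:
        # A pool in this path changed, so tell our own subscribers to re-check the arb
        self.notify("recheck me")
```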
One-Time “Sweep”
In addition to subscribing to active updates, the `ArbitrageSubscriber` class will dump all known arbitrage helpers into a low-priority set to be checked later. Whenever the high-priority set is empty or too small to saturate the processing queue, it will move items from the low-priority set for a one-time evaluation.
Occasionally this will result in the bot finding some lingering opportunities across low-activity pools. If these opportunities were created while the bot was offline, they will be discovered through this mechanism.
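In pseudocode, the two-tier queue behaves something like the sketch below; the set names and batch size are made up for illustration:

```python
MAX_BATCH = 64  # arbitrary cap on how many arbs to evaluate per cycle

high_priority: set = set()  # arbs triggered by live pool updates
low_priority: set = set()   # seeded at startup with every known arb helper


def next_batch() -> set:
    """Drain live-triggered arbs first, then top up from the one-time sweep backlog."""
    batch = set()
    while high_priority and len(batch) < MAX_BATCH:
        batch.add(high_priority.pop())
    while low_priority and len(batch) < MAX_BATCH:
        batch.add(low_priority.pop())
    return batch
```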
Block Priority Targeting
The block watcher now maintains a set of block priority gas targets for the previous block. It retrieves them via the `eth_feeHistory` argument `rewardPercentiles`, which returns priority gas fees ordered by block placement percentile. More simply, if you wanted to be in the middle position at block n, calling `eth_feeHistory` for that block with `rewardPercentiles=[50.0]` will return the priority fee associated with that position.
There is no guarantee that subsequent blocks will follow this same ordering, but targeting the previous block is a reasonable method for targeting your position in the next block.
Low percentiles imply a lower priority fee, which means you keep more profit at the risk of reverts from unfavorable transaction ordering.
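Here is what that call looks like with web3py, assuming a local node; the percentile values are arbitrary examples:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder local Base node

# Priority fees paid at the 10th, 50th, and 90th percentile positions of the
# most recent block (one inner list per block requested).
history = w3.eth.fee_history(1, "latest", [10.0, 50.0, 90.0])
mid_block_priority_fee = history["reward"][0][1]  # 50th percentile, in wei
```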
Single Websocket Subscriber
This uses the newer `WebsocketProviderV2` from web3py, which integrates the ability to send `eth_subscribe` requests directly. A key benefit of this approach is that events received from the websocket are delivered in the order they arrive, which is much nicer than managing them by hand across multiple subscribers. It also means that the explicit dependency on the websockets library can be dropped.
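A rough sketch of the single-subscriber loop, assuming web3py 6.x with the beta `WebsocketProviderV2`; the import path and the `process_subscriptions()` helper have shifted between releases, so check your installed version:

```python
import asyncio

from web3 import AsyncWeb3
from web3.providers.websocket import WebsocketProviderV2


async def watch(ws_url: str) -> None:
    # A single persistent connection carries every subscription
    async with AsyncWeb3.persistent_websocket(WebsocketProviderV2(ws_url)) as w3:
        await w3.eth.subscribe("newHeads")
        await w3.eth.subscribe("logs", {})  # all logs; filter by address/topics as needed
        async for message in w3.ws.process_subscriptions():
            # Messages arrive in the order the node sent them, regardless of subscription
            print(message["result"])


asyncio.run(watch("ws://localhost:8546"))  # placeholder local websocket endpoint
```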
Nonce Tracking
During times of high activity, it is common for the bot to find and send several transactions per block. Nonce tracking becomes critically important in this situation, because the nonces at the transaction simulator, the current block, and the expected next block(s) will all differ. Not to mention everything I covered above about stalled transactions…
The solution is to maintain the current nonce at the current chain tip, which is updated by the block watcher subscription, and a set of nonces for pending transactions previously sent to the sequencer.
Whenever a new arbitrage transaction is evaluated, the bot will check the current nonce plus any pending nonces, and select the next available nonce, accounting for gaps.
This allows stalled transactions to expire and drop their pending nonce, which can then be reused to “unstick” the sequencer queue.
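A simplified version of the idea; the class and method names here are illustrative, not the project's actual implementation:

```python
class NonceTracker:
    """Track the confirmed nonce plus nonces of in-flight transactions."""

    def __init__(self, expiry_blocks: int = 5) -> None:
        self.chain_nonce: int = 0          # nonce at the current chain tip
        self.pending: dict[int, int] = {}  # nonce -> block number when it was sent
        self.expiry_blocks = expiry_blocks

    def update_from_block(self, chain_nonce: int, block_number: int) -> None:
        """Called by the block watcher: drop confirmed nonces and expire stale ones."""
        self.chain_nonce = chain_nonce
        self.pending = {
            nonce: sent_block
            for nonce, sent_block in self.pending.items()
            if nonce >= chain_nonce and block_number - sent_block < self.expiry_blocks
        }

    def next_nonce(self, block_number: int) -> int:
        """Pick the lowest unused nonce, filling any gap left by an expired transaction."""
        nonce = self.chain_nonce
        while nonce in self.pending:
            nonce += 1
        self.pending[nonce] = block_number
        return nonce
```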
Pending Pool Update Check
In addition to tracking the nonces, the bot also keeps track of which pools are involved in pending transactions. When an arbitrage helper is evaluated by the processing routine, it will be discarded if it conflicts with one of these pools. This is useful because the pool state is expected to change when the pending transaction is included in a block, so the current pool state is not worth considering.
When the pending transaction confirms, the subscriber will receive another notification and re-evaluate the opportunity.
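Conceptually this is just a set membership check before evaluation; the names and the `swap_pools` attribute below are illustrative:

```python
# Addresses of pools touched by transactions that are still pending
pending_pools: set[str] = set()


def worth_evaluating(arb_helper) -> bool:
    """Skip an arb if any pool in its path is already part of a pending transaction."""
    return not any(pool.address in pending_pools for pool in arb_helper.swap_pools)
```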
Pending Arbitrage Check & Optional Delay
Because of my setup (residential broadband, no reliable way to land in block n+1), I forgo many lucrative opportunities in favor of reducing reverts.
I have implemented a default strategy that sleeps for 1-2 blocks following discovery of a profitable opportunity, then checks whether any pool updates have triggered a re-check for that arb. If so, either someone has captured the opportunity or normal swaps have driven the pool states out of alignment, and it must be re-checked.
This has reduced reverts considerably, though it lowers profit potential for operators with very low latency that might land in the next block.
Depending on your setup, you might decide to adjust this “ready, wait, fire” behavior.
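The “ready, wait, fire” logic boils down to something like the sketch below, assuming Base’s 2-second block time and a 2-block delay; the flag dictionary is illustrative:

```python
import asyncio

BLOCK_TIME = 2.0   # Base block time in seconds
DELAY_BLOCKS = 2   # how long to sit on a discovered opportunity


async def should_fire(arb_id: str, recheck_flags: dict[str, bool]) -> bool:
    """Return True if the opportunity survived the waiting period untouched.

    recheck_flags[arb_id] is set to True elsewhere whenever a relevant pool
    update arrives for this arb.
    """
    recheck_flags[arb_id] = False
    await asyncio.sleep(DELAY_BLOCKS * BLOCK_TIME)
    # If a pool update arrived while waiting, either someone captured the arb or
    # normal swaps shifted the pools, so re-check instead of firing blindly.
    return not recheck_flags[arb_id]
```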
Opportunities for Improvement
First and most obviously, you can find more opportunities by extending the arbitrage helper to support three-pool arbitrage with the transient executor.
The arbitrage helper does not implement a proper pre-calculation check for the reverse path, so the bot builds twice as many helpers as it needs to.
I am providing an ad-hoc helper here based on the previous customization lesson, and will integrate a more refined version into degenbot once I have extended it.
Version Info
This project uses degenbot version 0.2.4 and Vyper version 0.3.10. You can install both using `pip`.
The minimum supported version of Python is 3.10.