Building Custom Spammers in Rust with Contender

Contender is typically used as a CLI tool, but it was designed as a library with extensibility in mind. When we wanted to test OP-Interop transactions, which pass messages from one chain to another, we realized we needed custom code to make contender aware of the secondary transactions on the other chain, so that we could properly detect and relay the cross-chain messages and measure cross-chain latency. Thankfully, contender is fairly easy to extend and use in your own projects. This post explains how that works, and how you can apply these methods to build your own custom spammers with contender.

Throughout this article, I’ll be loosely referring to this repo: GitHub - zeroXbrock/op-interop-contender — the first custom spammer built with contender.

Callbacks

Contender has two traits that you can implement to plug in your own functionality after transactions are sent: OnTxSent and OnBatchSent.

OnTxSent is triggered for each individual transaction after it’s sent to the RPC.

OnBatchSent is triggered once after each batch of transactions is sent. A batch in contender is determined by the spam rate; if you specify spam --tps 20 (20 tx/s), this callback triggers after all 20 txs have finished sending.

Here are the trait definitions:

pub trait OnTxSent<K = String, V = String>
where
    K: Eq + std::hash::Hash + AsRef<str>,
    V: AsRef<str>,
{
    fn on_tx_sent(
        &self,
        tx_response: PendingTransactionConfig,
        req: &NamedTxRequest,
        extra: RuntimeTxInfo,
        tx_handlers: Option<HashMap<String, Arc<TxActorHandle>>>,
    ) -> Option<JoinHandle<crate::Result<()>>>;
}

pub trait OnBatchSent {
    fn on_batch_sent(&self) -> Option<JoinHandle<crate::Result<()>>>;
}

Much of this may look foreign: NamedTxRequest and RuntimeTxInfo are contender-native types. But no need to worry; for the purposes of this article, the only thing we're really interested in is tx_response, a PendingTransactionConfig (an alloy type). It contains the pending transaction, which we'll use to find and relay interop transactions.

Notice that each function returns an Option<JoinHandle> which, if you're not familiar, is a handle to a spawned async tokio task. This allows us, as library users of contender, to define async functions that contender will await before sending the next batch of transactions. It's also possible to run synchronous code in the callback body and then return None.
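To make the sync-vs-async choice concrete, here's a minimal, self-contained sketch of the pattern. Note that this uses a simplified stand-in trait with std threads in place of contender's actual tokio-based OnTxSent (which also receives the request, runtime info, and actor handles); only the return-value pattern is the point here.

```rust
use std::thread::JoinHandle;

// Simplified stand-in for contender's OnTxSent trait, using std threads in
// place of tokio tasks so the sketch is self-contained.
trait OnTxSentSketch {
    fn on_tx_sent(&self, tx_hash: String) -> Option<JoinHandle<Result<(), String>>>;
}

// Async-style callback: spawn background work and return the handle so the
// caller can join it before sending the next batch.
struct BackgroundLogger;
impl OnTxSentSketch for BackgroundLogger {
    fn on_tx_sent(&self, tx_hash: String) -> Option<JoinHandle<Result<(), String>>> {
        Some(std::thread::spawn(move || {
            println!("following up on {tx_hash} in the background");
            Ok(())
        }))
    }
}

// Sync-style callback: do the work inline and return None.
struct InlineLogger;
impl OnTxSentSketch for InlineLogger {
    fn on_tx_sent(&self, tx_hash: String) -> Option<JoinHandle<Result<(), String>>> {
        println!("tx sent: {tx_hash}");
        None
    }
}

fn main() {
    let handle = BackgroundLogger.on_tx_sent("0xabc".to_string());
    handle.unwrap().join().unwrap().unwrap();
    assert!(InlineLogger.on_tx_sent("0xdef".to_string()).is_none());
}
```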

Writing a Callback for OP-Interop

To manually send a cross-chain message with OP-Interop, we had to do three things:

  1. Send the “initiating transaction” — this calls an interop-enabled smart contract which is designed to pass a message to the other chain when the function postBulletin is called.
  2. Detect a special log that contains the cross-chain message.
  3. Relay that log’s message in a new transaction to the other chain. This is typically performed by an “auto-relay” operated by OP, but if we call it directly, we don’t have to depend on an auto-relay, which is not guaranteed to be running on any given OP-Interop deployment.

The functions we care about in the interop-enabled smart contract are postBulletin and receiveBulletin, which post cross-chain messages, and receive/store messages, respectively.

Step 1 is just contender spamming the postBulletin function. To handle the remaining steps in contender, we need to extend the OnTxSent callback. As we send spam transactions, the callback will be triggered with the associated transaction hash, which we can use to complete steps 2 & 3. When we send the “relay tx” in step 3, the receiveBulletin function will automatically be called according to the message data.
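For step 1, the spam side might look roughly like this in a contender scenario file. This is a hypothetical sketch: the contract placeholder name, account pool name, and argument are invented, and the exact schema may differ from what's shown here; check the contender docs for the real scenario format.

```toml
# Hypothetical scenario snippet: spam the postBulletin function.
# {BulletinBoard}, "spammers", and the args value are placeholders.
[[spam]]
[spam.tx]
to = "{BulletinBoard}"
from_pool = "spammers"
signature = "postBulletin(bytes data)"
args = ["0xdeadbeef"]
```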

To start writing a new callback implementation, we just need to define a new struct on which we'll implement the contender traits. To equip our callback with the ability to send new transactions to another chain, we give it two RPC providers, one for the "source chain" and one for the "destination chain":

pub struct OpInteropCallback {
    destination_provider: Arc<AnyProvider>,
    source_provider: Arc<AnyProvider>,
    // ...
}

Check out the source file to see a full implementation.

Here’s what the implementation of the interop tx callback looks like:

impl OnTxSent for OpInteropCallback {
    fn on_tx_sent(
        &self,
        pending_tx: PendingTransactionConfig,
        _tx_req: &NamedTxRequest,
        extra: RuntimeTxInfo,
        tx_actors: Option<HashMap<String, Arc<TxActorHandle>>>,
    ) -> Option<tokio_task::JoinHandle<Result<(), ContenderError>>> {
        // clone data to pass safely to threads
        let dest_provider = self.destination_provider.clone();
        let source_provider = self.source_provider.clone();
        let source_chain_id = self.source_chain_id;
        let source_tx_hash = pending_tx.tx_hash().to_owned();

        let handle = tokio_task::spawn(async move {
            let relay_tx_hash = handle_on_tx_sent(
                &source_provider,
                source_tx_hash,
                source_chain_id,
                &dest_provider,
            )
            .await
            .map_err(|e| ContenderError::with_err(e.deref(), "failed to handle on_tx_sent"))?;
            if let Some(relay_tx_hash) = relay_tx_hash {
                info!("Message {source_tx_hash} relayed by tx {relay_tx_hash}");
                let tx = CacheTx {
                    tx_hash: relay_tx_hash,
                    start_timestamp_ms: extra.start_timestamp_ms(),
                    kind: extra.kind().cloned(),
                    error: extra.error().cloned(),
                };
                if let Some(Some(actor)) =
                    tx_actors.map(|actors| actors.get(OP_ACTOR_NAME).cloned())
                {
                    // add tx to cache to watch for receipt,
                    // potentially saving results to DB
                    actor.cache_run_tx(tx).await.map_err(|e| {
                        ContenderError::with_err(e.deref(), "failed to cache run tx")
                    })?;
                }
            }
            Ok(())
        });

        Some(handle)
    }
}

By returning a JoinHandle (given by spawning a tokio task) from the callback, we can define async code that runs when we send a transaction (and get a tx hash back).

Here’s the rest:

/// Waits for transaction to land on source chain, then
/// finds the xchain log in the receipt and relays it to the destination chain.
pub async fn handle_on_tx_sent(
    source_provider: &AnyProvider,
    tx_hash: TxHash,
    source_chain_id: u64,
    destination_provider: &AnyProvider,
) -> Result<Option<TxHash>, Box<dyn std::error::Error>> {
    // wait for tx to land
    let _ = source_provider
        .watch_pending_transaction(PendingTransactionConfig::new(tx_hash))
        .await?
        .await?;

    // get receipt for logs
    let receipt = source_provider
        .get_transaction_receipt(tx_hash)
        .await?
        .ok_or(format!("tx receipt for {tx_hash} not found"))?;

    // Find xchain log; if present, relay msg to destination chain.
    let xchain_log = find_xchain_log(&receipt).await?;
    let mut relay_tx_hash = None;
    if let Some(log) = xchain_log {
        info!("Interop message {tx_hash} detected.");
        let block = source_provider
            .get_block_by_hash(receipt.block_hash.expect("receipt block hash"))
            .await?
            .ok_or_else(|| format!("Block for receipt {tx_hash} not found"))?;
        let res = relay_message(
            &log,
            block.header.timestamp,
            source_chain_id,
            destination_provider,
        )
        .await?;
        relay_tx_hash = res.map(|tx| tx.tx_hash().to_owned());
    }
    Ok(relay_tx_hash)
}

handle_on_tx_sent waits for the source transaction to land, finds the cross-chain log (if present), and then sends a tx to the destination chain to relay the message.

Relaying an OP-Interop Cross-Chain Message

When we detect a cross-chain log (by calling find_xchain_log(&receipt)) we call relay_message, which looks like this:

pub async fn relay_message(
    log: &Log,
    source_timestamp: u64,
    source_chain_id: u64,
    dest_provider: &AnyProvider,
) -> Result<Option<PendingTransactionConfig>, Box<dyn std::error::Error>> {
    let payload = build_payload(log);

    let id_req =
        IdentifierWithPayload::new(log, source_timestamp, source_chain_id, payload.to_owned());
    let access_list = get_access_list_for_identifier(&id_req).await?;

    let calldata = relayMessageCall {
        _id: id_req.to_sol(),
        _sentMessage: payload.into(),
    }
    .abi_encode();

    let tx_req = TransactionRequest::default()
        .to(*L2_TO_L2_CROSS_DOMAIN_MESSENGER)
        .input(calldata.into())
        .access_list(access_list);

    let pending_tx = dest_provider
        .send_transaction(tx_req.into())
        .await
        .inspect_err(|e| {
            println!("Failed to send transaction: {e}");
        })
        .ok();
    Ok(pending_tx.map(|tx| tx.inner().to_owned()))
}

This function builds a cross-chain message payload with build_payload, then encodes a call to relayMessage on the L2ToL2CrossDomainMessenger contract, which is OP-Interop's mechanism for relaying a message from one chain to another. We also call get_access_list_for_identifier, a convenience method from supersim's admin API (supersim is a local interop devnet) that we've reimplemented locally to make the tool portable to any OP-Interop deployment.

After calling relay_message via handle_on_tx_sent, the callback adds the transaction hash of the “relay tx” to the pending-tx cache (actor.cache_run_tx), which will make contender watch those hashes for transaction receipts and measure their inclusion time on the destination chain.

Making Your Own Spammer Callback

Contender spammer callbacks aren't required to watch transactions for inclusion, which is why we manually invoke actor.cache_run_tx to collect metrics. So if you want to collect metrics (which are needed to generate reports via contender report), make sure you cache run txs via the tx_actors in OnTxSent::on_tx_sent. Otherwise, you're free to do whatever you want in response to a sent transaction, or a sent batch of transactions!

We didn’t cover OnBatchSent in this post, but to give you a better idea of what it’s good for, the contender CLI uses it to trigger block-building calls when a custom auth API is provided to the spammer (meaning contender builds blocks manually rather than waiting for the chain to advance on its own).
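Implementing OnBatchSent follows the same shape as OnTxSent, minus the per-transaction arguments. Here's a minimal sketch using the same simplified stand-in style as before (std threads instead of tokio tasks, not contender's actual trait): a callback that counts batches as they complete and does its work synchronously.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread::JoinHandle;

// Simplified stand-in for contender's OnBatchSent trait, using std threads
// in place of tokio tasks so the sketch is self-contained.
trait OnBatchSentSketch {
    fn on_batch_sent(&self) -> Option<JoinHandle<Result<(), String>>>;
}

struct BatchCounter {
    batches_sent: AtomicU64,
}

impl OnBatchSentSketch for BatchCounter {
    fn on_batch_sent(&self) -> Option<JoinHandle<Result<(), String>>> {
        let n = self.batches_sent.fetch_add(1, Ordering::SeqCst) + 1;
        println!("batch #{n} sent");
        None // synchronous work only; nothing for the spammer to await
    }
}

fn main() {
    let cb = BatchCounter { batches_sent: AtomicU64::new(0) };
    for _ in 0..3 {
        cb.on_batch_sent();
    }
    assert_eq!(cb.batches_sent.load(Ordering::SeqCst), 3);
}
```

A block-building trigger like the one the contender CLI uses would instead spawn a task that calls the auth API and return Some(handle), so the spammer waits for the block before the next batch.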

Tell Me About It

I hope this post got you thinking about other ways to build more intelligent spammers! If you have ideas that you’d like to explore, please send them my way. I want to hear your ideas and help you build them. My DMs are always open @zeroXbrock on X.
