Snooper Overhead on get_blobs

Networks:
Ethereum Mainnet
Time range:
Start 2026-01-15T00:00:00Z
End 2026-01-16T23:59:59Z
Tags: snooper, engine-api, get-blobs, latency, 7870
Published January 16, 2026

Question

What is the latency impact of rpc-snooper on get_blobs calls between the consensus layer and execution layer?

Background

On January 16, 2026, we deployed rpc-snooper to the 7870 mainnet reference nodes. The snooper sits between the consensus layer (CL) and execution layer (EL), intercepting Engine API calls to capture detailed timing data.
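To make the interception concrete, here is a minimal sketch of how a snooper-style proxy can time an Engine API call as it forwards it. This is illustrative only, not rpc-snooper's actual implementation; `forward_to_el` is a hypothetical stand-in for the real EL transport.

```python
import json
import time

# Hypothetical stand-in for the real CL -> EL transport: echoes back a
# well-formed JSON-RPC response for an engine_getBlobsV1-style request.
def forward_to_el(request: dict) -> dict:
    return {"jsonrpc": "2.0", "id": request["id"], "result": []}

def proxy_call(request: dict, timings: list) -> dict:
    # Serialize/deserialize on the way through, as a proxy must.
    body = json.dumps(request)
    start = time.perf_counter()
    response = forward_to_el(json.loads(body))
    # Record the EL-side duration as observed from inside the proxy.
    timings.append(time.perf_counter() - start)
    return response

timings = []
resp = proxy_call(
    {"jsonrpc": "2.0", "id": 1, "method": "engine_getBlobsV1", "params": [[]]},
    timings,
)
```

The CL still sees one ordinary request/response; the proxy's only addition is the timestamping and the extra serialization pass, which is exactly the overhead this investigation measures.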

This investigation measures:

  1. End-to-end impact - Does adding a proxy affect CL-perceived latency?
  2. Internal overhead - How much time does serialization/deserialization add?

Data Sources

  • consensus_engine_api_get_blobs - Timing from Prysm's perspective (CL side)
  • execution_engine_get_blobs - Timing from the snooper (EL side)

The difference between these represents the snooper's serialization overhead.
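As a sketch, the overhead for a single call is just the subtraction of the two measurements (the function name and example values here are illustrative, not from the actual queries):

```python
# Serialization overhead = what the CL observed minus what the snooper
# measured on the EL side, for the same get_blobs call.
def snooper_overhead_ms(cl_duration_ms: float, el_duration_ms: float) -> float:
    return cl_duration_ms - el_duration_ms

# e.g. CL saw 52ms end to end, snooper measured 45ms at the EL -> 7ms overhead
overhead = snooper_overhead_ms(52.0, 45.0)  # -> 7.0
```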

Investigation

End-to-End Impact

Comparing what Prysm observed on January 15 (direct CL→EL) vs January 16 (CL→snooper→EL).

View Query: snooper_end_to_end_comparison

The snooper adds negligible end-to-end latency. Most clients show less than 5% difference between direct and proxied connections. Some variance is expected due to different blob counts and network conditions between days.
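The "less than 5% difference" figure is a relative latency change between the two days; a minimal sketch of that comparison (with made-up example values, not the query's actual numbers):

```python
def pct_change(direct_ms: float, proxied_ms: float) -> float:
    """Relative latency change after inserting the proxy, in percent."""
    return 100.0 * (proxied_ms - direct_ms) / direct_ms

# e.g. 50ms median direct (Jan 15) vs 52ms proxied (Jan 16) -> +4.0%
change = round(pct_change(50.0, 52.0), 1)  # -> 4.0
```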

Internal Overhead by EL Client

Measuring the actual serialization cost by comparing EL-side timing (from snooper) to CL-side timing (from Prysm).

View Query: snooper_overhead_by_client

Per-blob overhead varies significantly by client:

  • nethermind has the lowest per-blob overhead (~3.9ms)
  • besu has the highest per-blob overhead (~11.8ms)
  • Average across all clients: ~7ms per blob
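Per-blob overhead here means total serialization overhead divided by the number of blobs in the call, aggregated per client. A sketch of that aggregation over hypothetical rows (the values are chosen to mirror the figures above, not pulled from the real tables):

```python
from collections import defaultdict

# Hypothetical per-call rows: (el_client, total_overhead_ms, blob_count)
rows = [
    ("nethermind", 23.4, 6),
    ("besu", 70.8, 6),
    ("geth", 42.0, 6),
]

# Sum overhead and blob counts per client, then divide.
totals = defaultdict(lambda: [0.0, 0])
for client, overhead_ms, blobs in rows:
    totals[client][0] += overhead_ms
    totals[client][1] += blobs

per_blob = {client: t / n for client, (t, n) in totals.items()}
# per_blob["nethermind"] ~= 3.9, per_blob["besu"] ~= 11.8
```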

Overhead by Blob Count

How does serialization overhead scale with the number of blobs in a get_blobs call?

View Query: snooper_overhead_by_blobs

Total overhead scales roughly linearly with blob count, while per-blob overhead remains fairly consistent at 6-7ms per blob.
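If per-blob overhead is roughly constant, total overhead is a simple linear function of blob count. A sketch under that assumption (the ~7ms constant is the average stated above, not a measured model fit):

```python
PER_BLOB_MS = 7.0  # assumed average per-blob overhead from the data above

def expected_total_overhead_ms(blob_count: int, per_blob_ms: float = PER_BLOB_MS) -> float:
    """Linear model: total serialization overhead grows with blob count."""
    return blob_count * per_blob_ms

# 1 blob -> 7ms, 3 blobs -> 21ms, 6 blobs -> 42ms, 9 blobs -> 63ms
estimates = {n: expected_total_overhead_ms(n) for n in (1, 3, 6, 9)}
```

Deviations of observed totals from this line would indicate per-call fixed costs or nonlinearity in serialization.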

Takeaways

  • The snooper adds negligible end-to-end latency - most clients show less than 5% difference
  • Serialization overhead is approximately 6-7ms per blob on average
  • For a typical 6-blob block: roughly 40-45ms of serialization overhead (6 blobs × ~7ms per blob)
  • nethermind has the lowest per-blob overhead (~3.9ms), while besu has the highest (~11.8ms)
  • The overhead is acceptable for production use and gives us Engine API visibility we didn't have before