Node CPU Resource Usage

Networks:
Ethereum Mainnet
Time range:
Start 2026-03-10T00:00:00Z
End 2026-03-16T23:59:59Z
Tags: resources, cpu, consensus-layer, execution-layer, observoor
Published March 30, 2026

Question

How much CPU do consensus-layer and execution-layer clients use on Ethereum mainnet full nodes? Can we isolate non-execution overhead from total process CPU?

Background

Researchers want to know how many CPU cores are typically occupied by non-execution tasks on a full Ethereum node: attestation processing, state management, p2p networking, epoch transitions, and so on. This matters for realistic benchmarking of block execution, where you need to know how much CPU budget is left over after the CL and EL background work.

We measure process-level CPU using observoor, an eBPF-based profiler running on our Xatu nodes at 200ms intervals, giving sub-slot resolution within Ethereum's 12-second slots. The total_on_cpu_ns metric was cross-validated against Prometheus container_cpu_usage_seconds_total. It is accurate for Lighthouse, Prysm, Lodestar, Nimbus, and Geth. Grandine, Teku, Besu, Nethermind, and Reth showed inflated values (2x to 7500x) due to an eBPF profiling issue with certain runtimes and are excluded.
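To make the conversion concrete, here is a minimal sketch of how a cumulative on-CPU-nanoseconds counter sampled at 200 ms turns into "cores occupied". The function name and list-based input are my own illustration, not observoor's API; only the arithmetic (counter delta over wall-clock elapsed) is the point.

```python
SAMPLE_INTERVAL_NS = 200_000_000  # observoor's 200 ms sampling period

def cores_from_samples(total_on_cpu_ns: list[int]) -> list[float]:
    """Convert consecutive samples of a cumulative on-CPU nanosecond
    counter into average cores occupied per 200 ms interval: the counter
    delta divided by wall-clock time elapsed."""
    return [
        (curr - prev) / SAMPLE_INTERVAL_NS
        for prev, curr in zip(total_on_cpu_ns, total_on_cpu_ns[1:])
    ]
```

For example, a process that accumulates 1,120,000,000 ns of on-CPU time during one 200,000,000 ns window was occupying 5.6 cores on average over that window.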

Important caveat: These are process-level metrics. CL CPU is entirely non-execution work (attestations, state transitions, p2p). But EL CPU is a mix of block execution, mempool management, p2p networking, and state trie maintenance. We can't separate "pure execution" from "EL background work" at this level of instrumentation.

Investigation

Nodes Analyzed

One representative node per CL client, all 32-core machines, non-proposing. These are busy infrastructure nodes that also serve API calls and run debugging tools, so CPU figures here are an upper bound.

CL Client    EL Client    Node
Lighthouse   Geth         utility-mainnet-lighthouse-geth-001
Prysm        Geth         utility-mainnet-prysm-geth-001
Lodestar     Nethermind   utility-mainnet-lodestar-nethermind-001
Nimbus       Besu         utility-mainnet-nimbus-besu-001

For the EL baseline, we use the Geth instance on utility-mainnet-lighthouse-geth-001.

Overall CL CPU Usage

View Query: node_cpu_cl_summary

Lighthouse is the most CPU-hungry CL client at ~5.6 cores on average, followed by Prysm at ~1.9 cores. Lodestar and Nimbus are both under 1 core.

Over Time

View Query: node_cpu_cl_hourly

CPU usage is stable across the week. Lighthouse has the most variance, with periodic spikes likely from epoch processing.

When Processing Epoch Transitions

An epoch on Ethereum is 32 slots (~6.4 minutes). At epoch boundaries the CL client must process the epoch state transition — computing rewards, shuffling committees, and updating validator balances. This is the heaviest CL workload.
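The slot-to-epoch arithmetic used to group samples can be sketched as follows. The constants are mainnet parameters (32 slots per epoch, 12-second slots, beacon chain genesis at unix time 1606824023); the function names are my own.

```python
SLOTS_PER_EPOCH = 32
SECONDS_PER_SLOT = 12
GENESIS_TIME = 1606824023  # Ethereum mainnet beacon chain genesis (UTC)

def slot_in_epoch(slot: int) -> int:
    """Position of a slot within its epoch: 0 = first, 31 = last."""
    return slot % SLOTS_PER_EPOCH

def slot_at(timestamp: int) -> int:
    """Slot number containing a given unix timestamp."""
    return (timestamp - GENESIS_TIME) // SECONDS_PER_SLOT
```

Bucketing CPU samples by slot_in_epoch is what lets us compare slot-0 and slot-31 behavior against mid-epoch slots below.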

View Query: node_cpu_cl_by_epoch_slot

Slot 0 (first slot of the new epoch) and slot 31 (last slot, where the state transition begins) show elevated CPU for all clients. The effect is clearest for Lighthouse, which jumps from ~5.6 to ~6.2 cores at epoch boundaries.

Within a Single Slot

Each slot is 12 seconds. With 200ms sampling, we can see what happens inside a slot. These charts show both CL and EL CPU, comparing mid-epoch slots against epoch-boundary slots. EL data (dashed lines) is only available for Geth; observoor data for Besu and Nethermind is unreliable.
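A minimal sketch of the intra-slot aggregation: fold every sample's offset into a 12-second window and average per 200 ms bucket (60 buckets per slot). Function names and the (offset_ms, cores) input shape are my own illustration of the technique, not the actual query.

```python
from collections import defaultdict

SECONDS_PER_SLOT = 12
BUCKET_MS = 200  # observoor sampling interval

def intra_slot_bucket(offset_ms: int) -> int:
    """Map a sample's millisecond offset to one of 60 buckets per slot."""
    return (offset_ms % (SECONDS_PER_SLOT * 1000)) // BUCKET_MS

def bucket_means(samples):
    """samples: iterable of (offset_ms, cores) -> mean cores per bucket."""
    acc = defaultdict(lambda: [0.0, 0])
    for offset_ms, cores in samples:
        b = intra_slot_bucket(offset_ms)
        acc[b][0] += cores
        acc[b][1] += 1
    return {b: total / n for b, (total, n) in sorted(acc.items())}
```

Averaging many slots this way is what makes the sub-slot shape (e.g. epoch work landing in the 6-12s half) visible despite per-slot noise.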

View Query: node_cpu_cl_intra_slot
View Query: node_cpu_el_intra_slot

Epoch boundary work concentrates in the second half of the slot (6-12s). Lighthouse spikes from ~5 cores to ~14 at epoch boundaries. Lodestar goes from under 1 core to nearly 8, the biggest relative jump. Prysm peaks around 5 cores. Nimbus stays flat at under 1 core throughout.

EL Client Baseline

The EL client also uses CPU for mempool management, p2p, and state trie maintenance even when not producing blocks. Only Geth is shown here since other EL clients had unreliable observoor data.

View Query: node_cpu_el_summary

Geth uses under 1 core on average when idle.

EIP-7870 Reference Nodes

The EIP-7870 reference nodes all run Prysm as the CL, each paired with a different EL client. They run on 16-core (ax52) and 20-core (asus-sydney) machines with minimal additional workload, so these numbers are a lower bound.

Data source: Prometheus container_cpu_usage_seconds_total (24-hour window, 1-minute resolution, 2-minute rate).
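For readers unfamiliar with this metric: it is a cumulative counter of CPU-seconds consumed, so dividing its increase by wall-clock time yields average cores. A rough Python equivalent of a 2-minute rate over that counter (ignoring PromQL's extrapolation at window edges; function name is mine):

```python
def cpu_cores_rate(samples, window_s=120):
    """Approximate rate(container_cpu_usage_seconds_total[2m]) at the
    latest sample: counter increase over elapsed seconds within the
    window. samples: sorted list of (unix_ts, cpu_seconds) pairs."""
    cutoff = samples[-1][0] - window_s
    in_window = [(t, v) for t, v in samples if t >= cutoff]
    (t0, v0), (t1, v1) = in_window[0], in_window[-1]
    return (v1 - v0) / (t1 - t0)  # cores = CPU-seconds per second
```

A container that accrues 30 CPU-seconds per minute evaluates to 0.5 cores, the same scale as the 0.15-0.48 core EL figures below.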

View Query: node_cpu_7870_summary

With the same CL across all pairs, total non-execution overhead is 0.6 to 1.1 cores on average. EL client choice matters less than CL choice: EL idle CPU ranges from 0.15 cores (Erigon) to 0.48 cores (Reth). Prysm's CL overhead is consistent at 0.5-0.9 cores regardless of which EL it's paired with.

Takeaways

  • On the EIP-7870 reference nodes (Prysm + 6 different EL clients, 16/20-core machines), total CL + EL process CPU is 0.6-1.1 cores. This is the best lower bound we have for total node overhead.
  • CL CPU varies widely by client. On busy infrastructure nodes: Nimbus and Lodestar under 1 core, Prysm ~2 cores, Lighthouse ~5.6 cores. At epoch boundaries these briefly spike to 5-14 cores, concentrated in the second half of the slot.
  • EL process CPU mixes execution with background work (mempool, p2p, state trie), and we can't split these apart with process-level metrics. Properly isolating non-execution EL overhead would need function-level profiling.