RPC rate limiting is one of the most common production issues in Web3 apps. Here's a systematic approach.
1. Detect rate limit errors correctly
HTTP 429 is the standard rate limit response, but some providers return it differently:
```javascript
class RateLimitError extends Error {
  constructor(cause) {
    super('RPC rate limit exceeded');
    this.cause = cause;
  }
}

async function rpcCallWithDetection(method, params) {
  try {
    return await provider.send(method, params);
  } catch (error) {
    if (
      error.status === 429 ||
      error.message.includes('Too Many Requests') ||
      error.message.includes('rate limit') ||
      error.message.includes('limit exceeded')
    ) {
      // Rate limit hit — handle separately from other errors
      throw new RateLimitError(error);
    }
    throw error;
  }
}
```

2. Implement exponential backoff with jitter
Never retry immediately on a 429. Use exponential backoff:
```javascript
// Small helper used below
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function rpcWithBackoff(method, params, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await provider.send(method, params);
    } catch (error) {
      if (!isRateLimitError(error)) throw error;
      if (attempt === maxRetries - 1) throw error;
      const baseDelay = Math.pow(2, attempt) * 1000; // 1s, 2s, 4s, 8s...
      const jitter = Math.random() * 1000;
      await sleep(baseDelay + jitter);
    }
  }
}
```

3. Reduce unnecessary RPC calls
Most rate limit problems come from polling. Replace polling with event subscriptions:
```javascript
// Bad — polling on a fixed timer
setInterval(async () => {
  const balance = await provider.getBalance(address);
}, 12000);

// Good — subscribe to new blocks instead of polling
provider.on('block', async (blockNumber) => {
  const balance = await provider.getBalance(address);
});

// Better — only fetch when a relevant event occurs
contract.on('Transfer', async (from, to, amount) => {
  if (to === userAddress) {
    const balance = await provider.getBalance(userAddress);
  }
});
```

4. Implement request queuing and batching
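On the queuing side, a small in-process scheduler that caps how many requests start per interval keeps bursty code under a provider's limit. Here is a minimal sketch; the `RpcQueue` name and the default limits are illustrative, not tied to any specific provider:

```javascript
// Minimal request queue: starts at most `maxPerInterval` tasks per `intervalMs`.
class RpcQueue {
  constructor(maxPerInterval = 10, intervalMs = 1000) {
    this.maxPerInterval = maxPerInterval;
    this.intervalMs = intervalMs;
    this.queue = [];
    this.timestamps = []; // start times of recently launched tasks
  }

  // Enqueue an async task; resolves with the task's result.
  enqueue(task) {
    return new Promise((resolve, reject) => {
      this.queue.push({ task, resolve, reject });
      this._drain();
    });
  }

  _drain() {
    const now = Date.now();
    // Forget task starts older than one interval
    this.timestamps = this.timestamps.filter((t) => now - t < this.intervalMs);
    while (this.queue.length && this.timestamps.length < this.maxPerInterval) {
      const { task, resolve, reject } = this.queue.shift();
      this.timestamps.push(Date.now());
      task().then(resolve, reject);
    }
    if (this.queue.length) {
      // Capacity exhausted for this window; try again next interval
      setTimeout(() => this._drain(), this.intervalMs);
    }
  }
}
```

Wrap each RPC call as `queue.enqueue(() => provider.send(method, params))` so bursts are smoothed out before they reach the endpoint, instead of arriving all at once and triggering 429s.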
```javascript
const { ethers } = require('ethers');

// Use Multicall to batch multiple reads into one RPC call
const multicall = new ethers.Contract(MULTICALL_ADDRESS, MULTICALL_ABI, provider);

async function batchGetBalances(addresses) {
  const calls = addresses.map((addr) => ({
    target: TOKEN_ADDRESS,
    callData: tokenInterface.encodeFunctionData('balanceOf', [addr]),
  }));
  const { returnData } = await multicall.aggregate(calls);
  return returnData.map(
    (data) => tokenInterface.decodeFunctionResult('balanceOf', data)[0]
  );
}
```

5. Use multiple RPC providers with failover
Single provider = single point of failure. Implement multi-provider failover:
```javascript
const providers = [
  new ethers.JsonRpcProvider(PRIMARY_RPC_URL),
  new ethers.JsonRpcProvider(SECONDARY_RPC_URL),
];

async function resilientCall(method, params) {
  for (const provider of providers) {
    try {
      return await provider.send(method, params);
    } catch (error) {
      if (isRateLimitError(error)) continue; // Try next provider
      throw error;
    }
  }
  throw new Error('All providers rate limited');
}
```

Root cause: shared vs dedicated endpoints
If you're hitting rate limits consistently, the architecture fix is moving from shared public endpoints to a dedicated RPC endpoint. Shared endpoints aggregate traffic from all users — your limits get consumed by other applications' traffic patterns, not just yours.
Dedicated endpoints give you:
Isolated rate limits (only your traffic counts)
Predictable throughput
No degradation during other users' traffic spikes
For reference: providers like GetBlock, Alchemy, and QuickNode offer dedicated endpoints at fixed monthly rates, which often works out cheaper than the engineering time spent debugging rate-limit issues in production.
Summary checklist:
□ Detect 429s separately from other errors
□ Implement exponential backoff with jitter on retries
□ Replace polling with event subscriptions where possible
□ Batch multiple reads with Multicall
□ Implement multi-provider failover
□ For high-traffic apps: use dedicated RPC endpoints

Disclosure: I work at GetBlock, a dedicated RPC node provider.
