
Understanding Sealmetrics API Rate Limits


Overview

The Sealmetrics API uses a request-based rate-limiting system designed for consistent, high-volume data access while preserving server performance. The architecture favors bulk data-processing efficiency over conventional per-query frequency restrictions.

Rate Limit Specifications

Core Limits

  • Requests per minute: 60 requests

  • Burst capability: 60 concurrent requests

  • Records per request: Up to 10,000 records

  • Total throughput: 600,000 records per minute maximum
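Given these two numbers, the cost of an export is easy to estimate up front. A quick sketch (the 150,000-record figure matches the pagination example later in this article):

```javascript
// Sanity-check the published limits and estimate the cost of an export.
const REQUESTS_PER_MINUTE = 60;
const RECORDS_PER_REQUEST = 10000;

const maxRecordsPerMinute = REQUESTS_PER_MINUTE * RECORDS_PER_REQUEST; // 600,000

// Minimum number of requests (and full minute windows) to export `totalRecords`.
function exportCost(totalRecords) {
  const requests = Math.ceil(totalRecords / RECORDS_PER_REQUEST);
  const minutes = Math.ceil(requests / REQUESTS_PER_MINUTE);
  return { requests, minutes };
}

console.log(exportCost(150000)); // { requests: 15, minutes: 1 }
```

A 150,000-record dataset therefore fits comfortably inside a single rate-limit window.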

How It Works

┌─────────────────────────────────────────────────────────────┐
│ Sealmetrics Rate Limiting │
├─────────────────────────────────────────────────────────────┤
│ 60 requests/min × 10,000 records/request = 600K/min │
│ │
│ ┌───┐ ┌───┐ ┌───┐ ┌───┐ │
│ │Req│ │Req│ │Req│ ... │Req│ (60 total per minute) │
│ │10K│ │10K│ │10K│ │10K│ │
│ └───┘ └───┘ └───┘ └───┘ │
│ │
│ Maximum burst: All 60 requests can be made simultaneously │
└─────────────────────────────────────────────────────────────┘

Request Examples

Single Large Request

GET /api/v1/analytics/events?limit=10000&start_date=2025-01-01&end_date=2025-01-31
Authorization: Bearer your_api_token

Response: 10,000 event records (1 request consumed)

Batch Processing Pattern

// Efficient bulk data retrieval
const batchSize = 10000;
const totalBatches = 60;      // Maximum per minute
const requestInterval = 1000; // 1 second between requests for smooth distribution

async function fetchAllData() {
  const allRecords = [];

  for (let i = 0; i < totalBatches; i++) {
    const offset = i * batchSize;

    // Query parameters belong in the URL; fetch() has no `params` option
    const params = new URLSearchParams({
      limit: batchSize,
      offset: offset,
      start_date: '2025-01-01',
      end_date: '2025-01-31'
    });

    const response = await fetch(`/api/v1/analytics/events?${params}`, {
      method: 'GET',
      headers: {
        'Authorization': 'Bearer your_api_token',
        'Content-Type': 'application/json'
      }
    });

    const batch = await response.json();
    allRecords.push(...batch.data);

    // Optional: add a delay for smooth distribution
    if (i < totalBatches - 1) {
      await new Promise(resolve => setTimeout(resolve, requestInterval));
    }
  }

  return allRecords; // Up to 600,000 records
}

Rate Limit Headers

Every API response includes rate limiting information in the headers:

HTTP/1.1 200 OK
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1641024000
X-RateLimit-Used: 15
Content-Type: application/json

{
  "data": [...],  // Up to 10,000 records
  "pagination": {
    "total": 150000,
    "limit": 10000,
    "offset": 0,
    "has_more": true
  }
}

Header Descriptions

  • X-RateLimit-Limit: Maximum requests allowed per minute (60)

  • X-RateLimit-Remaining: Requests remaining in current window

  • X-RateLimit-Reset: Unix timestamp when rate limit resets

  • X-RateLimit-Used: Requests consumed in current window
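One practical use of these headers is knowing how long to pause once X-RateLimit-Remaining reaches zero. A minimal sketch, assuming a standard fetch-style headers object with a get() method:

```javascript
// Milliseconds to wait until the rate-limit window resets.
// `headers` is any object with a get() method, e.g. a fetch Response's headers.
function msUntilReset(headers, nowMs = Date.now()) {
  const resetUnixSeconds = parseInt(headers.get('X-RateLimit-Reset'), 10);
  if (Number.isNaN(resetUnixSeconds)) return 0; // header missing: don't wait
  return Math.max(0, resetUnixSeconds * 1000 - nowMs);
}
```

Typical usage after a request: `if (remaining === 0) await new Promise(r => setTimeout(r, msUntilReset(response.headers)));`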

Burst Capability Explained

Unlike traditional APIs that spread requests evenly, Sealmetrics allows burst processing for maximum efficiency:

Burst Processing Benefits

Traditional API (spreading required):
├── Request 1 ──┬── Request 2 ──┬── Request 3 ──┬── ...
│ (1 second) │ (1 second) │ (1 second) │
└── 1K records └── 1K records └── 1K records └── Total: 60K/min

Sealmetrics (burst allowed):
├── Request 1 ──┤
├── Request 2 ──┤── All 60 requests
├── Request 3 ──┤── in first 10 seconds
├── ... ──┤
└── Request 60 ──┘── Total: 600K records in 10 seconds

When to Use Burst Processing

  • Large data exports: Download complete datasets quickly

  • Real-time synchronization: Catch up after downtime periods

  • Batch ETL processes: Efficient data pipeline operations

  • Report generation: Fast data aggregation for dashboards
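Because all 60 requests may be in flight at once, a burst export reduces to firing the batch in parallel with Promise.all. A sketch of the pattern; `fetchBatch` is a placeholder for whatever function performs one authenticated request (for example, a thin wrapper around the fetch call shown earlier):

```javascript
// Fire all requests simultaneously and collect the batches in offset order.
// `fetchBatch({ limit, offset })` must return a promise resolving to an array.
async function burstExport(fetchBatch, totalBatches = 60, batchSize = 10000) {
  const jobs = Array.from({ length: totalBatches }, (_, i) =>
    fetchBatch({ limit: batchSize, offset: i * batchSize })
  );
  const batches = await Promise.all(jobs); // all in flight at once
  return batches.flat();
}
```

Injecting `fetchBatch` keeps the burst logic independent of authentication details and makes it easy to test against a stub.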

Error Handling

Rate Limit Exceeded Response

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1641024060
Retry-After: 60

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "API rate limit exceeded. You have made 60 requests in the current minute window.",
    "details": {
      "limit": 60,
      "window": "1 minute",
      "reset_time": "2025-01-01T12:01:00Z",
      "retry_after": 60
    }
  }
}

Implementing Retry Logic

async function makeRequestWithRetry(url, options = {}, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      if (response.status === 429) {
        const retryAfter = parseInt(response.headers.get('Retry-After') || '60');
        if (attempt < maxRetries) {
          console.log(`Rate limit hit. Retrying after ${retryAfter} seconds...`);
          await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
          continue;
        } else {
          throw new Error('Max retries reached for rate limit');
        }
      }

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }

      return await response.json();
    } catch (error) {
      if (attempt === maxRetries) {
        throw error;
      }
      // Exponential backoff for other errors
      const backoffTime = Math.pow(2, attempt) * 1000;
      await new Promise(resolve => setTimeout(resolve, backoffTime));
    }
  }
}

Optimization Strategies

1. Maximize Records Per Request

// ✅ Efficient: Get maximum records per request
const response = await fetch('/api/v1/analytics/events?limit=10000');

// ❌ Inefficient: Small batch sizes waste requests
const response = await fetch('/api/v1/analytics/events?limit=100');

2. Use Appropriate Date Ranges

// ✅ Efficient: Specific date ranges
const params = {
  start_date: '2025-01-01',
  end_date: '2025-01-31',
  limit: 10000
};

// ❌ Inefficient: Overly broad ranges may hit other limits
const params = {
  start_date: '2020-01-01',
  end_date: '2025-12-31',
  limit: 10000
};
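For multi-year exports, one way to keep each request's range specific is to split the span into calendar months and issue one bounded query per chunk. This helper is illustrative, not part of the Sealmetrics API:

```javascript
// Split [startDate, endDate] into calendar-month chunks so each request
// covers a specific, bounded range. Dates are 'YYYY-MM-DD' strings (UTC).
// Illustrative helper; not provided by the Sealmetrics API itself.
function monthlyRanges(startDate, endDate) {
  const ranges = [];
  let cursor = new Date(startDate + 'T00:00:00Z');
  const end = new Date(endDate + 'T00:00:00Z');

  while (cursor <= end) {
    // Day 0 of the next month is the last day of the current month
    const monthEnd = new Date(Date.UTC(cursor.getUTCFullYear(), cursor.getUTCMonth() + 1, 0));
    const chunkEnd = monthEnd < end ? monthEnd : end;
    ranges.push({
      start_date: cursor.toISOString().slice(0, 10),
      end_date: chunkEnd.toISOString().slice(0, 10)
    });
    cursor = new Date(Date.UTC(cursor.getUTCFullYear(), cursor.getUTCMonth() + 1, 1));
  }
  return ranges;
}
```

Each chunk can then be fed into the batch or burst patterns above as its own `start_date`/`end_date` pair.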

3. Implement Smart Pagination

async function fetchAllPages(baseUrl, params) {
  const allData = [];
  let hasMore = true;
  let offset = 0;

  while (hasMore && (offset / 10000) < 60) { // Respect the per-minute request limit
    const response = await fetch(`${baseUrl}?${new URLSearchParams({
      ...params,
      limit: 10000,
      offset: offset
    })}`);

    const data = await response.json();
    allData.push(...data.data);

    hasMore = data.pagination.has_more;
    offset += 10000;

    // Check rate limit headers
    const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'));
    if (remaining <= 5) {
      console.log('Approaching rate limit, pausing...');
      await new Promise(resolve => setTimeout(resolve, 60000));
    }
  }

  return allData;
}

Performance Comparison

Sealmetrics vs Industry Standards

Platform    | Requests/Min | Records/Request | Total Records/Min
------------|--------------|-----------------|------------------
Sealmetrics | 60           | 10,000          | 600,000
GA360       | Variable     | 25              | ~83,333
Amplitude   | Variable     | 2,000           | ~96,000
Mixpanel    | 60           | 2,000           | 120,000
Segment     | 6,000        | 2,500           | 150,000

Why Sealmetrics Leads in Bulk Processing

  1. High Record Density: 10,000 records per request (4-5x industry average)

  2. Burst Capability: All 60 requests can be made simultaneously

  3. No Token Complexity: Simple request counting vs complex token calculations

  4. Predictable Performance: Consistent limits without variable cost calculations

Best Practices

1. Monitor Rate Limit Headers

Always check response headers to track your usage:

function logRateLimitStatus(response) {
  // Parse headers as numbers before doing arithmetic on them
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'));
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'));
  const reset = parseInt(response.headers.get('X-RateLimit-Reset'));

  console.log(`Rate Limit Status: ${limit - remaining}/${limit} used`);
  console.log(`Resets at: ${new Date(reset * 1000).toISOString()}`);

  if (remaining < 10) {
    console.warn('Approaching rate limit threshold');
  }
}

2. Implement Request Queuing

For applications making frequent requests:

class RequestQueue {
  constructor(maxRequestsPerMinute = 60) {
    this.maxRequests = maxRequestsPerMinute;
    this.requests = [];
    this.processing = false;
  }

  async addRequest(requestFn) {
    return new Promise((resolve, reject) => {
      this.requests.push({ requestFn, resolve, reject });
      if (!this.processing) {
        this.processQueue();
      }
    });
  }

  async processQueue() {
    this.processing = true;
    const startTime = Date.now();
    let requestCount = 0;

    while (this.requests.length > 0 && requestCount < this.maxRequests) {
      const { requestFn, resolve, reject } = this.requests.shift();
      try {
        const result = await requestFn();
        resolve(result);
        requestCount++;
      } catch (error) {
        reject(error);
      }
    }

    // Wait for the next minute if we hit the limit
    if (requestCount >= this.maxRequests) {
      const elapsedTime = Date.now() - startTime;
      const waitTime = Math.max(0, 60000 - elapsedTime);
      if (waitTime > 0) {
        await new Promise(resolve => setTimeout(resolve, waitTime));
      }
    }

    // Continue processing if there are more requests
    if (this.requests.length > 0) {
      setTimeout(() => this.processQueue(), 1000);
    } else {
      this.processing = false;
    }
  }
}

// Usage
const queue = new RequestQueue(60);

// Add requests to the queue
const result1 = await queue.addRequest(() => fetchAnalyticsData('/events'));
const result2 = await queue.addRequest(() => fetchAnalyticsData('/conversions'));

3. Cache Frequently Accessed Data

Reduce API calls by implementing intelligent caching:

class SealmetricsCache {
  constructor(ttlMinutes = 60) {
    this.cache = new Map();
    this.ttl = ttlMinutes * 60 * 1000;
  }

  getCacheKey(endpoint, params) {
    return `${endpoint}:${JSON.stringify(params)}`;
  }

  async get(endpoint, params) {
    const key = this.getCacheKey(endpoint, params);
    const cached = this.cache.get(key);

    if (cached && Date.now() - cached.timestamp < this.ttl) {
      console.log('Cache hit:', key);
      return cached.data;
    }

    // Cache miss - fetch from the API
    const data = await this.fetchFromAPI(endpoint, params);
    this.cache.set(key, { data, timestamp: Date.now() });
    return data;
  }

  async fetchFromAPI(endpoint, params) {
    const response = await fetch(`/api/v1${endpoint}?${new URLSearchParams(params)}`);
    return await response.json();
  }
}

Troubleshooting Common Issues

Issue 1: "Rate Limit Exceeded" Errors

Symptoms: 429 status codes, requests failing

Solutions:

  • Implement retry logic with exponential backoff

  • Monitor X-RateLimit-Remaining header

  • Distribute requests evenly across the minute window

  • Use request queuing for high-frequency applications

Issue 2: Slow Data Retrieval

Symptoms: Long processing times for large datasets

Solutions:

  • Use burst capability - make all 60 requests quickly

  • Maximize records per request (use limit=10000)

  • Implement parallel processing for independent requests

  • Consider data filtering to reduce total volume needed

Issue 3: Incomplete Data Sets

Symptoms: Missing records, pagination issues

Solutions:

  • Always check pagination.has_more in responses

  • Use proper offset calculation for pagination

  • Implement continuation logic for large datasets

  • Verify date range parameters are correct

Enterprise Support

For Enterprise customers requiring higher rate limits or custom configurations:

  • Custom Rate Limits: Up to 120 requests/minute available

  • Dedicated Infrastructure: Isolated processing for guaranteed performance

  • Priority Support: 4-8 hour SLA for rate limit issues

  • Custom Integrations: Tailored solutions for high-volume use cases

Contact our Enterprise team at [email protected] for discussions about custom rate limits.


Summary

Sealmetrics API rate limiting is designed for maximum bulk processing efficiency:

  • 60 requests per minute with full burst capability

  • 10,000 records per request for efficient data transfer

  • 600,000 total records per minute: industry-leading throughput

  • Simple request-based counting without token complexity

  • Predictable performance with clear header feedback

This architecture makes Sealmetrics ideal for bulk data processing, ETL pipelines, and large-scale analytics applications requiring consistent, high-volume data access.
