Integrating Screenshot Automation Into Backend or Worker Pipelines

    Learn how to integrate PeekShot into backend or worker pipelines. Architecture patterns, integration examples, error handling, and scaling considerations.

    Automation & Integrations
    7 min read
    13 Jan 2026
    architecture
    automation
    backend
    developer
    integration
    worker-pipelines

    Integrating PeekShot into backend systems or worker pipelines requires understanding architecture patterns, error handling, and scaling considerations. This guide shows you how to build reliable integrations.


    Architecture Patterns

    Sync vs Async

    Synchronous:

    • Wait for screenshot to complete before responding

    • Simpler implementation

    • Blocks request until complete

    • Good for low-volume, user-triggered requests

    Asynchronous (Recommended):

    • Queue request and return immediately

    • Process in background

    • Use webhooks to receive results

    • Better for high-volume or automated workflows
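    The asynchronous pattern can be sketched in a few lines with an in-process queue. This is a minimal illustration, not production code: the `submit_capture` stub stands in for the real PeekShot API call, and the queue, job ids, and result store are all simplified placeholders.

```python
import queue
import uuid

jobs = queue.Queue()   # pending screenshot requests
results = {}           # job_id -> capture result

def submit_capture(url):
    """Stand-in for the real PeekShot API call."""
    return {"url": url, "fileUrl": f"https://cdn.example.com/{uuid.uuid4()}.png"}

def enqueue_screenshot(url, viewport):
    """Async pattern: queue the request and return a job id immediately."""
    job_id = str(uuid.uuid4())
    jobs.put({"id": job_id, "url": url, "viewport": viewport})
    return job_id

def run_worker():
    """Background worker: drain the queue and store results for later pickup."""
    while not jobs.empty():
        job = jobs.get()
        results[job["id"]] = submit_capture(job["url"])

job_id = enqueue_screenshot("https://example.com", {"width": 1280, "height": 800})
run_worker()
```

    The caller gets a job id back immediately and never blocks on the capture itself; the worker fills in `results` whenever it gets around to the job.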

    Webhooks vs Polling

    Webhooks (Recommended for Production):

    • PeekShot notifies you when ready

    • More efficient

    • Real-time updates

    • Requires webhook endpoint

    Polling:

    • Periodically check status

    • Simpler setup

    • Less efficient

    • Good for development or low volume
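    A polling loop is little more than "check, wait, check again," ideally with a growing delay between checks. In this sketch the status endpoint is replaced by a stub that reports "processing" twice before completing; in a real integration `check_status` would call PeekShot's status endpoint with your requestId.

```python
import time

def poll_until_ready(check_status, interval=0.01, max_attempts=10):
    """Poll a status function until it reports 'completed' or attempts run out."""
    for attempt in range(max_attempts):
        status = check_status()
        if status["status"] == "completed":
            return status
        time.sleep(interval * (attempt + 1))  # back off a little each round
    raise TimeoutError("screenshot not ready after max attempts")

# Stub that reports 'processing' twice, then 'completed'
responses = iter([
    {"status": "processing"},
    {"status": "processing"},
    {"status": "completed", "fileUrl": "https://cdn.example.com/shot.png"},
])
result = poll_until_ready(lambda: next(responses))
```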

    Queue-Based Architecture

    For high-volume systems:

    • Queue screenshot requests

    • Process queue with workers

    • Handle results via webhooks

    • Scale workers based on load


    Integration Examples

    Node.js Worker Example

    const express = require('express');
    const Bull = require('bull');
    const fetch = require('node-fetch');

    const app = express();
    app.use(express.json());

    // Create queue
    const screenshotQueue = new Bull('screenshots', {
      redis: { host: 'localhost', port: 6379 }
    });

    // Map requestId -> promise callbacks so the webhook can complete the job.
    // An in-memory map works for a single worker process; use shared storage
    // (e.g. Redis) when running multiple workers.
    const pending = new Map();

    // Process queue
    screenshotQueue.process(async (job) => {
      const { url, viewport } = job.data;

      // Request screenshot
      const response = await fetch('https://api.peekshot.com/v1/screenshots', {
        method: 'POST',
        headers: {
          'x-api-key': process.env.PEEKSHOT_API_KEY,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ url, viewport })
      });

      if (!response.ok) {
        throw new Error(`PeekShot request failed: ${response.status}`);
      }

      const { requestId } = await response.json();

      // Keep the job active until the webhook delivers the result
      return new Promise((resolve, reject) => {
        pending.set(requestId, { resolve, reject });
      });
    });

    // Webhook handler
    app.post('/webhook/peekshot', (req, res) => {
      const { data } = req.body;

      // Match the webhook to the waiting job by requestId
      const waiter = pending.get(data.requestId);
      if (waiter) {
        pending.delete(data.requestId);
        if (data.success) {
          waiter.resolve({ fileUrl: data.fileUrl });
        } else {
          waiter.reject(new Error(data.error));
        }
      }

      res.status(200).json({ received: true });
    });

    Python Celery Example

    import os

    from celery import Celery
    from flask import Flask, request, jsonify
    import requests

    celery_app = Celery('screenshots', broker='redis://localhost:6379/0',
                        backend='redis://localhost:6379/0')
    flask_app = Flask(__name__)

    @celery_app.task(bind=True)
    def capture_screenshot(self, url, viewport):
        response = requests.post(
            'https://api.peekshot.com/v1/screenshots',
            headers={
                'x-api-key': os.getenv('PEEKSHOT_API_KEY'),
                'Content-Type': 'application/json'
            },
            json={
                'url': url,
                'viewport': viewport
            }
        )
        response.raise_for_status()
        result = response.json()

        # Store requestId -> Celery task id for webhook matching
        # (app-specific persistence, e.g. Redis or a database)
        store_request_id(self.request.id, result['requestId'])

        return result

    # Webhook handler (Flask)
    @flask_app.route('/webhook/peekshot', methods=['POST'])
    def webhook_handler():
        data = request.json.get('data', {})

        # Look up the Celery task id stored for this requestId
        task_id = find_task_by_request_id(data['requestId'])

        if data.get('success'):
            capture_screenshot.update_state(
                task_id=task_id,
                state='SUCCESS',
                meta={'fileUrl': data['fileUrl']}
            )
        else:
            capture_screenshot.update_state(
                task_id=task_id,
                state='FAILURE',
                meta={'error': data.get('error')}
            )

        return jsonify({'received': True}), 200

    Error Handling in Pipelines

    Transient Errors

    Handle transient errors with retries:

    • Network timeouts

    • 5xx server errors

    • Temporary rate limiting
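    A retry wrapper with exponential backoff might look like the following sketch. The error classes and the `flaky_capture` stub are illustrative; in practice you would map HTTP status codes (timeouts, 5xx, 429) to the transient case.

```python
import time

class TransientError(Exception):
    """Timeouts, 5xx responses, temporary rate limits: worth retrying."""

class PermanentError(Exception):
    """Invalid URLs, auth failures: retrying will not help."""

def with_retries(operation, max_attempts=4, base_delay=0.01):
    """Retry transient failures with exponential backoff; fail fast otherwise."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
        # PermanentError propagates immediately, with no retry

# Stub that fails twice with a transient error, then succeeds
attempts = []
def flaky_capture():
    attempts.append(1)
    if len(attempts) < 3:
        raise TransientError("HTTP 503")
    return {"fileUrl": "https://cdn.example.com/shot.png"}

result = with_retries(flaky_capture)
```

    Note that permanent errors bypass the retry loop entirely; retrying an invalid URL only burns time and credits.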

    Permanent Errors

    Fail permanently for:

    • Invalid URLs

    • Authentication errors

    • Bot detection (site policy)

    Dead Letter Queues

    For failed jobs:

    • Move to dead letter queue after max retries

    • Log for manual review

    • Alert on persistent failures
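    The dead letter pattern reduces to: retry up to a limit, then park the job with its error for a human to look at. A minimal in-memory sketch (a real DLQ would live in your queue system, e.g. a separate Bull queue or a database table):

```python
MAX_RETRIES = 3
dead_letter = []  # failed jobs parked for manual review

def process_with_dlq(job, handler):
    """Run a job; after MAX_RETRIES failures, move it to the dead letter queue."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return handler(job)
        except Exception as exc:
            last_error = exc
    dead_letter.append({
        "job": job,
        "error": str(last_error),
        "attempts": MAX_RETRIES,
    })
    return None

def always_fails(job):
    raise RuntimeError("bot detection triggered")

process_with_dlq({"url": "https://blocked.example.com"}, always_fails)
```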


    Scaling Considerations

    Rate Limits

    • Respect PeekShot's rate limits

    • Implement request throttling

    • Queue requests to stay within limits

    • Monitor API usage

    Concurrency

    • Limit concurrent API requests

    • Use worker pools with size limits

    • Balance throughput with rate limits
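    A semaphore is the simplest way to cap concurrent API calls. In this sketch the capture is stubbed with a short sleep, and a counter records the peak number of simultaneous calls to show the cap holds:

```python
import threading
import time

MAX_CONCURRENT = 3
semaphore = threading.Semaphore(MAX_CONCURRENT)
lock = threading.Lock()
peak = {"current": 0, "max": 0}

def capture(url):
    """Stubbed capture call that records how many run concurrently."""
    with semaphore:  # at most MAX_CONCURRENT threads pass at once
        with lock:
            peak["current"] += 1
            peak["max"] = max(peak["max"], peak["current"])
        time.sleep(0.02)  # stands in for the API round trip
        with lock:
            peak["current"] -= 1

threads = [threading.Thread(target=capture, args=(f"https://example.com/p{i}",))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    Ten requests are submitted, but no more than three are ever in flight at once, regardless of thread scheduling.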

    Cost Management

    • Monitor credit usage

    • Optimize capture settings

    • Cache results when possible

    • Batch requests efficiently
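    Caching is the cheapest optimization: identical (url, viewport) requests can reuse an earlier result instead of spending another credit. A minimal sketch with an in-memory dictionary (a production cache would add a TTL and live in Redis or similar):

```python
CACHE = {}

def cached_capture(url, viewport, capture_fn):
    """Return a cached screenshot for identical (url, viewport) requests."""
    key = (url, tuple(sorted(viewport.items())))
    if key not in CACHE:
        CACHE[key] = capture_fn(url, viewport)  # only this path spends a credit
    return CACHE[key]

calls = []
def fake_capture(url, viewport):
    """Stub for the real API call; counts how often it is invoked."""
    calls.append(url)
    return {"fileUrl": f"https://cdn.example.com/{len(calls)}.png"}

first = cached_capture("https://example.com", {"width": 1280}, fake_capture)
second = cached_capture("https://example.com", {"width": 1280}, fake_capture)
```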


    Best Practices

    Queuing

    • Use message queues for reliability

    • Persist queue state

    • Handle queue failures

    Retries

    • Implement exponential backoff

    • Set maximum retry attempts

    • Distinguish retryable from permanent errors

    Monitoring

    • Track success/failure rates

    • Monitor processing times

    • Alert on error spikes

    • Log all operations

