curl --request POST \
  --url https://ingest.dashdive.com/s3/batch \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: <x-api-key>' \
  --data '[
    {
      "action": "ListBuckets",
      "timestamp": "yyyy-MM-ddTHH:mm:ssZ",
      "provider": "aws",
      "customerId": "string",
      "featureId": "string",
      "clientType": "string",
      "clientId": "string"
    },
    {
      "action": "GetObject",
      "timestamp": "yyyy-MM-ddTHH:mm:ssZ",
      "provider": "aws",
      "bucketName": "bucket",
      "objectKey": "key",
      "versionId": "version",
      "bytes": 123,
      "customerId": "string"
    }
  ]'
Example Response

OK

Required Headers

Header          Value
Content-Type    application/json
X-API-Key       <x-api-key>

Request Body Content

This endpoint is the batch analog to the /s3 endpoint. While the /s3 endpoint only supports ingesting a single event per request, this endpoint supports ingesting an arbitrarily large number of events in a single request.
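As a client-side illustration, the sketch below (in Python, using the requests library) posts two events in one batch request. The URL and headers mirror the curl example above; the field values and the API key placeholder are illustrative only.

import requests

DASHDIVE_BATCH_URL = "https://ingest.dashdive.com/s3/batch"
API_KEY = "<x-api-key>"  # replace with your Dashdive API key

# Two example events, mirroring the payload shown in the curl example above.
events = [
    {
        "action": "ListBuckets",
        "timestamp": "2024-01-01T00:00:00Z",
        "provider": "aws",
        "customerId": "customer-123",
    },
    {
        "action": "GetObject",
        "timestamp": "2024-01-01T00:00:05Z",
        "provider": "aws",
        "bucketName": "bucket",
        "objectKey": "key",
        "bytes": 123,
        "customerId": "customer-123",
    },
]

response = requests.post(
    DASHDIVE_BATCH_URL,
    headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
    json=events,  # the request body is a JSON array of events
)
print(response.status_code, response.text)  # 200 with body "OK" on success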

The request payload must be an array in which every entry is a valid usage event, as detailed in the /s3 endpoint docs page. If any event in the array is malformed, the entire request is rejected with a 400 error, meaning that none of the events in the payload, including the well-formed ones, are ingested.
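Because rejection is all-or-nothing, it can help to pre-validate events before sending and to treat a 400 as meaning nothing in the batch was ingested. The sketch below is one possible client-side approach, not part of the Dashdive API; the required-field set it checks is an assumption based on the example payloads, and the authoritative event schema is on the /s3 endpoint docs page.

import requests

DASHDIVE_BATCH_URL = "https://ingest.dashdive.com/s3/batch"
API_KEY = "<x-api-key>"  # replace with your Dashdive API key

# Hypothetical required-field check, inferred from the example payloads above;
# consult the /s3 endpoint docs page for the authoritative schema.
REQUIRED_FIELDS = {"action", "timestamp", "provider", "customerId"}

def split_events(events):
    """Separate events that carry all required fields from those that do not."""
    valid, malformed = [], []
    for event in events:
        (valid if REQUIRED_FIELDS <= event.keys() else malformed).append(event)
    return valid, malformed

def send_batch(events):
    valid, malformed = split_events(events)
    if malformed:
        print(f"Holding back {len(malformed)} malformed event(s) for review")
    if not valid:
        return None
    resp = requests.post(
        DASHDIVE_BATCH_URL,
        headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
        json=valid,
    )
    if resp.status_code == 400:
        # The whole batch was rejected; none of the events were ingested.
        print("Batch rejected:", resp.text)
    return resp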

Note that in the event of a rejection, the payload is still saved to the database in raw form and can be corrected manually later, as detailed here.