Public API

igscraper REST API Documentation

Integrate with igscraper using the v1 API, which authenticates with a Bearer API key. All endpoints live under /api/v1.

Overview

The igscraper API is asynchronous and job-based. You submit a scraping job, poll for completion, then fetch paginated results.

Deduplication is automatic per API key project and external_user_id, so repeated jobs for the same end user only return new, unique rows within that project.

Authentication

All requests require a Bearer token in the Authorization header.

Authorization: Bearer YOUR_API_KEY

Get your API key from the API dashboard.

For POST requests, send Content-Type: application/json.

Quickstart

Submit Job

curl -X POST "https://igscraper.co/api/v1/jobs" \
  -H "Authorization: Bearer $IGSCRAPER_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: unique-key-per-job" \
  -d '{
    "external_user_id": "customer-123",
    "tool_type": "followers",
    "params": {
      "usernames": ["nasa"],
      "max_followers": 50
    }
  }'

Use a unique Idempotency-Key for each new job (UUID recommended). Reusing the same key with the same payload returns the original job.

Response: 201 Created
{
  "job_id": "4d83c31f-ef37-4fa2-ad3e-07bbd15e9936",
  "status": "QUEUED",
  "replayed": false
}

Save the job_id from the response; you will need it to poll status and fetch results.

export JOB_ID="4d83c31f-ef37-4fa2-ad3e-07bbd15e9936"
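The same submit call can be built programmatically. The sketch below constructs (but does not send) the POST /api/v1/jobs request with Python's standard library, generating a fresh UUID Idempotency-Key per logical job as recommended above; the helper name is illustrative, not part of any official SDK.

```python
import json
import urllib.request
import uuid

API_BASE = "https://igscraper.co/api/v1"

def build_submit_request(api_key, external_user_id, tool_type, params,
                         idempotency_key=None):
    """Build the POST /api/v1/jobs request; send it with urllib.request.urlopen()."""
    body = json.dumps({
        "external_user_id": external_user_id,
        "tool_type": tool_type,
        "params": params,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/jobs",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            # A fresh UUID per logical job, reused only for retries of
            # the exact same payload.
            "Idempotency-Key": idempotency_key or str(uuid.uuid4()),
        },
    )
```

Sending the request with `urllib.request.urlopen(req)` returns the 201 response shown above.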

Poll Job Status

curl "https://igscraper.co/api/v1/jobs/$JOB_ID" \
  -H "Authorization: Bearer $IGSCRAPER_API_KEY"
Response: 200 OK (job still running)
{
  "job_id": "4d83c31f-ef37-4fa2-ad3e-07bbd15e9936",
  "external_user_id": "customer-123",
  "namespace": "team-acme",
  "status": "PROCESSING",
  "created_at": "2026-03-17T17:30:00.000Z",
  "updated_at": "2026-03-17T17:30:05.000Z",
  "completed_at": null,
  "progress_percent": 45,
  "number_of_results": 0,
  "cost_charged": 0,
  "tool_type": "followers"
}
Response: 200 OK (job completed)
{
  "job_id": "4d83c31f-ef37-4fa2-ad3e-07bbd15e9936",
  "external_user_id": "customer-123",
  "namespace": "team-acme",
  "status": "COMPLETED",
  "created_at": "2026-03-17T17:30:00.000Z",
  "updated_at": "2026-03-17T17:30:12.000Z",
  "completed_at": "2026-03-17T17:30:12.000Z",
  "progress_percent": 100,
  "number_of_results": 50,
  "cost_charged": 250,
  "tool_type": "followers"
}

Keep polling until status is COMPLETED. A 2s polling interval is recommended for most jobs.
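A polling loop with the recommended interval and a hard timeout can be sketched as follows. The status fetcher is injected as a callable returning the GET /api/v1/jobs/{job_id} body as a dict, so the loop itself needs no network access; the function name is illustrative.

```python
import time

TERMINAL_STATUSES = {"COMPLETED", "FAILED", "CANCELLED"}

def poll_until_done(fetch_status, interval_s=2.0, timeout_s=600.0,
                    sleep=time.sleep):
    """Poll fetch_status() until the job reaches a terminal status.

    fetch_status: callable returning the job-status response body (dict).
    Raises TimeoutError so a stuck job cannot loop forever.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        job = fetch_status()
        if job["status"] in TERMINAL_STATUSES:
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"job {job['job_id']} still {job['status']} after {timeout_s}s")
        sleep(interval_s)
```

Pass `interval_s=5.0` for larger jobs, per the polling best practices below.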

Fetch Results

curl "https://igscraper.co/api/v1/jobs/$JOB_ID/results?page=1&limit=50" \
  -H "Authorization: Bearer $IGSCRAPER_API_KEY"
Response: 200 OK
{
  "job_id": "4d83c31f-ef37-4fa2-ad3e-07bbd15e9936",
  "external_user_id": "customer-123",
  "status": "COMPLETED",
  "columns": [
    "source",
    "type",
    "user_id",
    "username",
    "full_name",
    "verified",
    "private"
  ],
  "data": [
    {
      "source": "nasa",
      "type": "follower",
      "user_id": "12345678",
      "username": "space_fan_01",
      "full_name": "Alex Johnson",
      "verified": "false",
      "private": "false"
    }
  ],
  "messages": [],
  "pagination": {
    "page": 1,
    "limit": 50,
    "total": 50,
    "total_pages": 1,
    "has_next": false,
    "has_prev": false
  }
}
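For jobs with more rows than one page, the pagination envelope above drives a simple walk. This sketch takes a page-fetching callable (so it is testable without the API); the generator name is illustrative.

```python
def iter_all_rows(fetch_page, limit=200):
    """Yield every result row across all pages.

    fetch_page(page, limit): callable returning the JSON body of
    GET /api/v1/jobs/{job_id}/results?page=...&limit=...
    """
    page = 1
    while True:
        body = fetch_page(page, limit)
        yield from body["data"]
        # has_next comes from the pagination envelope documented above.
        if not body["pagination"]["has_next"]:
            break
        page += 1
```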

Webhooks are not available yet. Polling is currently required for completion tracking.

Core Concepts

Job Statuses

Status     | Description                        | What To Do
QUEUED     | Job accepted and waiting to start. | Keep polling.
PROCESSING | Job is actively scraping.          | Keep polling.
COMPLETED  | All results are ready.             | Fetch results.
FAILED     | Job encountered an error.          | Check error details and resubmit.
CANCELLED  | Job was cancelled.                 | Resubmit if needed.

external_user_id

This is your stable end-user identifier (1-255 chars, no control characters). Use the same value for the same end user across jobs.

Deduplication is scoped to your API key project and this field. Returning users get only new rows not previously delivered to that same project-scoped external_user_id.

Job submit limits are also scoped to your API key project and this field inside your account, so each project-scoped external_user_id has its own submit quota.

Idempotency Keys

  • Use a unique key per new logical job.
  • Reuse the same key only when retrying the exact same payload.
  • Reusing the same key with different payload returns 409 Conflict.
  • Maximum key length is 255 characters.
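The idempotency rules above boil down to: one key per logical job, reused only on transport-level retries of the identical payload. A minimal sketch, with the submit call injected as a callable (names are illustrative):

```python
import uuid

def submit_with_retries(submit, payload, attempts=3):
    """Retry a job submission safely.

    submit(payload, idempotency_key): callable performing POST /api/v1/jobs.
    One key is generated per logical job and reused across retries, so a
    duplicate delivery replays the original job instead of creating a new one.
    """
    key = str(uuid.uuid4())
    last_error = None
    for _ in range(attempts):
        try:
            return submit(payload, key)
        except ConnectionError as exc:  # retry only transport failures
            last_error = exc
    raise last_error
```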

Credits

Credits are charged when jobs complete. Costs depend on tool type and delivered rows.

See the pricing page for details.

Polling Best Practices

  • Use 2-second polling for small jobs and 5 seconds for larger jobs.
  • Set a timeout so polling loops cannot run forever.
  • Stop polling on FAILED or CANCELLED.
  • If you get 429, respect Retry-After.

Tool Reference

Each tool type accepts different parameters. The sections below document each tool's parameters, an example request, and the base columns returned in results.

Followers

Extract follower profiles for one or more usernames.

Parameters

Field               | Type     | Required | Description
usernames           | string[] | Yes      | Instagram usernames (letters, numbers, dot, underscore).
max_followers       | integer  | No       | Maximum followers to extract per username.
enrich_profiles     | boolean  | No       | Enable optional profile enrichment on delivered rows.
enrich_country_date | boolean  | No       | Enable optional country/date add-on on delivered rows. date is normalized to MM-YYYY (or empty when unparseable).

Submit Example

curl -X POST "https://igscraper.co/api/v1/jobs" \
  -H "Authorization: Bearer $IGSCRAPER_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: unique-followers-job" \
  -d '{
  "external_user_id": "customer-123",
  "tool_type": "followers",
  "params": {
    "usernames": [
      "nasa"
    ],
    "max_followers": 50
  }
}'

Base Result Columns

source, type, user_id, username, full_name, verified, private, profile_pic_url
Conditional Appended Columns
  • When enrich_profiles=true, the result appends biography, email, phone, follower_count, media_count, following_count, account_type, business, business_category_name, category_name, category, city_id, city_name, website.
  • When enrich_country_date=true, the result appends country, date.

Optional Enrichment

Submit one tool job at a time. followers and likers are separate source families and should be submitted as separate jobs.

For followers, following, likers, hashtag, location, and single-post-likers, you can request add-ons independently or together: params.enrich_profiles=true (profile fields) and/or params.enrich_country_date=true (country, date).

profile_pic_url is base scraping data and is never appended by enrichment. For emails and phones, only params.enrich_country_date is supported; params.enrich_profiles=true is rejected. Retroactive enrichment requires at least one add-on: when both add-ons are omitted or false, preflight and start return NO_ADDON_SELECTED.

Country/date contract: date is normalized to MM-YYYY; when unparseable, it is returned as an empty string.

You can also use GET /api/v1/jobs/{job_id}/enrich (preflight) and POST /api/v1/jobs/{job_id}/enrich (start upgrade) for an existing completed job. A completed job that already has one add-on can still request the missing add-on.

Upgrade responses include supported_addons, applied_addons, enriched, and fully_enriched.

  • applied_addons only counts add-ons already materialized in the current output, while enrichment_requested stays true for submit-time selections or in-flight upgrades that are still pending.
  • selected_addons reflects the current request after source-compatible normalization, while fields_added reflects the columns actually added by that request.

If a queued or processing upgrade already covers the requested add-ons, POST /api/v1/jobs/{job_id}/enrich returns 202 with replayed: true. If the source job already covers the requested add-ons, it returns 200 with replayed: true.

Zero-cost retroactive enrichment is not supported. Upgrade add-on pricing is enforced with a minimum of 1 credit per selected add-on row.

While enrichment is queued or processing, GET /api/v1/jobs/{job_id}/results and CSV download endpoints may temporarily return 409 until upgrade processing finishes.
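Before starting an upgrade, the preflight endpoint can be used to decide which add-ons are still worth requesting. This sketch assumes the preflight body exposes supported_addons and applied_addons (the docs list these fields on upgrade responses; treating preflight the same way is an assumption here), and the helper name is illustrative.

```python
def missing_addons(preflight, wanted):
    """Return the add-ons from `wanted` that the job supports but has not
    yet materialized, i.e. the set worth passing to the upgrade request."""
    supported = set(preflight.get("supported_addons", []))
    applied = set(preflight.get("applied_addons", []))
    return (set(wanted) & supported) - applied
```

If the returned set is empty, starting an upgrade would replay an already-covered job (200/202 with replayed: true).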

Endpoints

Jobs

Submit jobs and read job status/results.

POST /api/v1/jobs
Submit a job

Submit a scraping job for one external user in your API key namespace.

Security

bearerAuth

Request Example

curl -X POST "https://igscraper.co/api/v1/jobs" \
  -H "Authorization: Bearer $IGSCRAPER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "external_user_id": "customer-123",
  "tool_type": "followers",
  "params": {
    "usernames": [
      "nasa"
    ],
    "max_followers": 50
  }
}'

Parameters

Name            | In     | Type   | Required | Description                                                                   | Rules
Idempotency-Key | header | string | No       | Optional idempotency key for safe retries of the exact same request payload. | maxLength: 255

Request Body

application/json
Schema: JobSubmitRequest (oneOf)

Schema Variants

Choose variant by tool_type.

Variant: followers (tool_type=followers)

Followers extraction job.

external_user_id (string, required)
  Your stable customer/user ID from your system (for example: customer-123). Control characters are not allowed.
  Rules: pattern: ^[^\u0000-\u001F\u007F]+$ | minLength: 1 | maxLength: 255

tool_type (string, required)
  Rules: const: followers

params (object, required)

params.usernames (array, required)
  Rules: minItems: 1 | maxItems: 10

params.max_followers (integer, optional)
  Rules: minimum: 1 | maximum: 10000 | default: 50

params.enrich_profiles (boolean, optional)
  Enable optional profile enrichment add-on on delivered rows. Base profile_pic_url is already included without this add-on, and the add-on appends fields such as biography, email, phone, website, and business.
  Rules: default: false

params.enrich_country_date (boolean, optional)
  Enable optional country/date add-on on delivered rows. date is normalized to MM-YYYY (or empty when unparseable).
  Rules: default: false

Request payload for creating a job. Select a variant by tool_type.

GET /api/v1/jobs/{job_id}
Get job

Get one job by id within your namespace.

Security

bearerAuth

Request Example

curl -X GET "https://igscraper.co/api/v1/jobs/$JOB_ID" \
  -H "Authorization: Bearer $IGSCRAPER_API_KEY"

Parameters

Name   | In   | Type   | Required | Description                            | Rules
job_id | path | string | Yes      | Job id returned by POST /api/v1/jobs.  | -

Request Body

No request body.

GET /api/v1/jobs/{job_id}/results
Get job results (JSON)

Read paginated JSON rows from a completed job result file.

Security

bearerAuth

Request Example

curl -X GET "https://igscraper.co/api/v1/jobs/$JOB_ID/results?page=1&limit=50" \
  -H "Authorization: Bearer $IGSCRAPER_API_KEY"

Parameters

Name   | In    | Type    | Required | Description                            | Rules
job_id | path  | string  | Yes      | Job id returned by POST /api/v1/jobs.  | -
page   | query | integer | No       | Offset pagination page number.         | minimum: 1; default: 1
limit  | query | integer | No       | Page size for results rows.            | minimum: 1; maximum: 200; default: 50

Request Body

No request body.

Rate Limits

Scope                                | Limit
Requests per minute                  | 60
Requests per hour                    | 1000
Job submissions per account per hour | 120
Job submissions per external_user_id | 12 per hour, with a 5-per-minute burst

The limits above are the default public limits; account-specific limits may differ and are shown in the API dashboard when you are signed in.

Admin can customize limits per API account. On 429 Too Many Requests, check Retry-After and retry with exponential backoff + jitter.
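The 429 handling described above (honor Retry-After, otherwise back off exponentially with jitter) can be sketched as a delay calculator; the function name and defaults are illustrative.

```python
import random

def retry_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Seconds to wait before the next attempt.

    Honors the server's Retry-After when present; otherwise uses capped
    exponential backoff with full jitter (delay drawn uniformly from
    [0, min(cap, base * 2**attempt)]).
    """
    if retry_after is not None:
        return float(retry_after)
    return random.uniform(0.0, min(cap, base * 2 ** attempt))
```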

Error Reference

Code | Cause                                                        | What To Do
400  | Invalid or missing parameters.                               | Check the error message to see which field failed validation.
401  | Missing or invalid API key.                                  | Verify your API key in /developers/api.
402  | Not enough credits for the requested job.                    | Top up credits or reduce max_* parameters.
403  | The external_user_id is blocked in your namespace.           | Contact support or use a different external_user_id.
404  | Job ID does not exist in your namespace.                     | Verify the job_id from your submit response.
409  | Idempotency mismatch, or results requested before completion.| Use a new idempotency key, or wait for status COMPLETED.
413  | Request body exceeds size limit.                             | Reduce request size (for example fewer usernames).
415  | Missing or incorrect Content-Type header.                    | Send Content-Type: application/json for POST requests.
429  | Rate limit exceeded.                                         | Wait for Retry-After seconds, then retry with backoff + jitter.
500  | Unexpected server-side error.                                | Retry after a short delay. Contact support if it persists.
503  | Temporary service issue.                                     | Retry after a short delay with backoff + jitter.

Do not retry 400, 402, 403, or 409 without changing your request. These are request-level issues, not transient failures.
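The retry rule above reduces to a small classifier: only transient statuses are worth retrying unchanged. A minimal sketch (the sets mirror the table above; names are illustrative):

```python
# Transient failures: safe to retry the same request with backoff.
RETRYABLE = {429, 500, 503}

def should_retry(status_code):
    """Return True only for transient failures; request-level errors
    (400, 401, 402, 403, 404, 409, 413, 415) require changing the request."""
    return status_code in RETRYABLE
```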

Webhooks

Webhook delivery is not available yet.

Poll GET /api/v1/jobs/{job_id} every 2-5 seconds until status reaches COMPLETED, FAILED, or CANCELLED.
