Pagination

All list endpoints in the CoinMENA Partner API use cursor-free, page-based pagination. This page explains how to request pages, interpret responses, and iterate through large result sets.

⚠️

Results are not guaranteed to be consistent between requests

Pagination is not snapshot-based. New records may appear or existing records may change between page requests. Always rely on the total field from each response rather than pre-calculating a fixed page count.


How It Works

Paginated endpoints accept two query parameters:

| Parameter | Type | Default | Maximum | Description |
| --- | --- | --- | --- | --- |
| page | integer | 1 | | Page number, starting from 1 |
| page_size | integer | 10 | 100 | Number of items returned per page |

Every paginated response includes these fields alongside the result items:

| Field | Type | Description |
| --- | --- | --- |
| items | array | The list of results for the current page |
| total | integer | Total number of records matching the query |
| page | integer | Current page number |
| page_size | integer | Number of items returned per page (maximum; the last page may contain fewer items) |

Example Request

Fetch the first page with default page size:

curl -X GET "https://external-api.coinmena.com/v1/partner/clients?page=1&page_size=10" \
  -H "X-Partner-ID: your_partner_id" \
  -H "X-Timestamp: 1737654321000" \
  -H "X-Signature: your_generated_signature"

Fetch with maximum page size for large datasets:

curl -X GET "https://external-api.coinmena.com/v1/partner/clients?page=1&page_size=100" \
  -H "X-Partner-ID: your_partner_id" \
  -H "X-Timestamp: 1737654321000" \
  -H "X-Signature: your_generated_signature"

Example Response

{
  "items": [...],
  "total": 85,
  "page": 1,
  "page_size": 10
}
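Given a response shaped like the one above, a single comparison tells you whether more pages remain. A minimal sketch using the fields from the response schema:

```python
response = {"items": ["..."], "total": 85, "page": 1, "page_size": 10}

# More pages remain while the records retrieved so far fall short of the total
has_more = response["page"] * response["page_size"] < response["total"]
print(has_more)  # → True (page 1 covers only 10 of 85 records)
```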

Calculating Total Pages

Use total and page_size to calculate how many pages exist:

import math

total = 85
page_size = 20

total_pages = math.ceil(total / page_size)  # → 5 pages

Iterating Through All Pages

Python

import math
import requests

def get_all_clients(make_headers, base_url):
    """Fetch every client across all pages.

    make_headers must return fresh auth headers (new timestamp and
    signature) for each request; see the Authentication guide.
    """
    page = 1
    page_size = 100
    all_clients = []

    while True:
        response = requests.get(
            f"{base_url}/v1/partner/clients",
            headers=make_headers(),  # fresh timestamp and signature per request
            params={"page": page, "page_size": page_size},
        )
        response.raise_for_status()
        data = response.json()
        items = data["items"]

        # An empty page is valid (e.g. no records match); stop here
        if not items:
            break

        all_clients.extend(items)

        # Stop when fewer items than page_size are returned (last page)
        # or when the recalculated total page count is reached
        total_pages = math.ceil(data["total"] / page_size)
        if page >= total_pages or len(items) < page_size:
            break
        page += 1

    return all_clients

Node.js

async function getAllClients(makeHeaders, baseUrl) {
  let page = 1;
  const pageSize = 100;
  const allClients = [];

  while (true) {
    const url = `${baseUrl}/v1/partner/clients?page=${page}&page_size=${pageSize}`;
    // makeHeaders() must return fresh auth headers (new timestamp and
    // signature) for each request; see the Authentication guide
    const res = await fetch(url, { headers: makeHeaders() });
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    const data = await res.json();

    allClients.push(...data.items);

    // Stop when fewer items than pageSize are returned (last page)
    // or when the recalculated total page count is reached
    const totalPages = Math.ceil(data.total / pageSize);
    if (page >= totalPages || data.items.length < pageSize) break;
    page++;
  }

  return allClients;
}

Paginated Endpoints

The following endpoints support pagination:

| Endpoint | Description |
| --- | --- |
| GET /v1/partner/clients | List clients |
| GET /v1/partner/clients/{id}/documents | List client documents |
| GET /v1/partner/orders | List orders |
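Since all of these endpoints share the same pagination scheme, a single helper can iterate any of them. A sketch in Python; the paginate_all name, the make_headers callable, and the use of the requests library are illustrative choices, not part of the API:

```python
import math

import requests


def paginate_all(path, make_headers, base_url, params=None, page_size=100):
    """Yield every item from any paginated endpoint, page by page."""
    page = 1
    while True:
        query = {**(params or {}), "page": page, "page_size": page_size}
        resp = requests.get(
            f"{base_url}{path}",
            headers=make_headers(),  # fresh timestamp and signature per request
            params=query,
        )
        resp.raise_for_status()
        data = resp.json()
        yield from data["items"]

        # Recalculate the page count from each response, since records
        # may appear or change between requests
        total_pages = math.ceil(data["total"] / page_size)
        if page >= total_pages or len(data["items"]) < page_size:
            break
        page += 1
```

The same helper then serves clients, documents, and orders: `list(paginate_all("/v1/partner/orders", make_headers, base_url))`.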

Best Practices

  • Use the maximum page size (page_size=100) when fetching large datasets to minimize the number of requests
  • Always check total before iterating — if total is 0 there is nothing to fetch
  • Re-sign each request — every paginated request requires a fresh timestamp and signature (see the Authentication guide)
  • Iterate sequentially — avoid requesting all pages in parallel to prevent rate limiting or throttling
  • Apply filters alongside pagination where possible to reduce result sets — for example filter by status or from_datetime when listing clients or orders
📘

Filter first, paginate second

If you only need a subset of records, apply query filters (status, source, from_datetime, to_datetime, partner_client_id) before paginating. Filtering reduces total, which reduces the number of pages you need to iterate. Paginating through all records and filtering client-side is slower, wastes bandwidth, and risks rate limiting.
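As a concrete sketch, the filter parameters named above can be combined with the pagination parameters in a single query string. The status and from_datetime values below are placeholders for illustration, not values confirmed by the endpoint reference:

```python
from urllib.parse import urlencode

# Filters (status, from_datetime) combined with pagination parameters;
# the filter values here are illustrative placeholders
params = {
    "status": "APPROVED",
    "from_datetime": "2025-01-01T00:00:00Z",
    "page": 1,
    "page_size": 100,
}
query = urlencode(params)
print(query)
```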


Common Mistakes

| Mistake | Fix |
| --- | --- |
| Starting page at 0 | Pages start at 1; page=0 will return an error |
| Assuming a fixed page count | Always recalculate total pages from the total field, since results may change between requests |
| Using the same signature for multiple pages | Each request requires a fresh timestamp and signature |
| Requesting page_size above 100 | The maximum is 100; higher values will be rejected |
| Not handling empty results | Check whether items is empty before processing; an empty page is valid on the last page or when no records match the filters |
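A small guard before each request catches invalid parameters client-side instead of waiting for the API to reject them. A minimal sketch; the bounds mirror the limits described in this guide:

```python
def validate_pagination(page, page_size):
    """Raise early for parameters the API would reject anyway."""
    if page < 1:
        raise ValueError("page numbering starts at 1")
    if not 1 <= page_size <= 100:
        raise ValueError("page_size must be between 1 and 100")
```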


What’s Next
| Next Step | Description |
| --- | --- |
| Authentication | Full breakdown of the signing algorithm, rules, and troubleshooting |
| IP Whitelisting | Add a second security layer to your integration |
| Error Codes | Reference for every error the API returns |