Batch generation is the process of running a model against collected data to produce batches on a ledger. This cookbook walks through the full workflow: checking what inputs are needed, previewing the results, and triggering generation.

Example Scenario

You have a biochar project with production data collected over Q1 2025. You want to generate batches for the Net Carbon Removal ledger, which will run the linked model against your feedstock delivery and pyrolysis data to produce carbon removal batches.
Step 1: Prerequisites

Before you begin, make sure you have the following:
  • A ledger with a model linked to it. You can check this by calling GET /ledgers and looking for a non-null latest_model field.
  • An API token with Production Accounting read and write permissions.
  • Data points (events with measurements) already collected for the time period you want to generate batches for.
Step 2: Find the target ledger

List your ledgers and identify the one you want to generate batches for.
Endpoint: GET /ledgers
curl -X GET https://app.gomangrove.com/api/v1/ledgers \
  -H "Authorization: Bearer YOUR_API_TOKEN"
Response
{
  "data": [
    {
      "id": "lgr_abc123def456",
      "name": "Net Carbon Removal",
      "unit": "t",
      "system_ledger_type": null,
      "partition_type": "feedstock",
      "cadence": "monthly",
      "latest_model": {
        "id": "mdl_xyz789",
        "name": "Carbon Calculation Model"
      },
      "asset_category": null,
      "created_at": "2025-01-15T10:00:00.000Z",
      "updated_at": "2025-01-15T10:00:00.000Z"
    }
  ]
}
Save the ledger id (e.g., lgr_abc123def456). Confirm latest_model is not null — batch generation requires a linked model.
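This eligibility check is easy to script. A minimal sketch in Python, using data that mirrors the sample response above (the second, unlinked ledger is illustrative):

```python
# Find ledgers eligible for batch generation: those with a linked model.
# Sample data mirrors the GET /ledgers response; "lgr_no_model" is a
# made-up example of a ledger that would be skipped.
ledgers = [
    {
        "id": "lgr_abc123def456",
        "name": "Net Carbon Removal",
        "latest_model": {"id": "mdl_xyz789", "name": "Carbon Calculation Model"},
    },
    {"id": "lgr_no_model", "name": "Unlinked Ledger", "latest_model": None},
]

# Only ledgers with a non-null latest_model can generate batches.
eligible = [l for l in ledgers if l.get("latest_model") is not None]
for ledger in eligible:
    print(ledger["id"], "->", ledger["latest_model"]["name"])
```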
Step 3: Check required inputs

Before generating, check what data inputs the model requires. This helps you verify you have the right data collected.
Endpoint: GET /ledgers/:id/required_inputs
curl -X GET https://app.gomangrove.com/api/v1/ledgers/lgr_abc123def456/required_inputs \
  -H "Authorization: Bearer YOUR_API_TOKEN"
Response
{
  "data": [
    {
      "id": 1,
      "name": "Mass of Feedstock Delivery",
      "slug": "feedstock-delivery-mass",
      "unit": "U.S. ton",
      "value_type": "number",
      "category": "input",
      "subcategory": "feedstock",
      "is_manual_select": false,
      "allocation_groups": [],
      "output_of_ledgers": []
    },
    {
      "id": 2,
      "name": "Transport Distance",
      "slug": "transport-distance",
      "unit": "mile",
      "value_type": "number",
      "category": "input",
      "subcategory": "transport",
      "is_manual_select": false,
      "allocation_groups": [],
      "output_of_ledgers": []
    }
  ]
}
Inputs with is_manual_select: true come from other ledgers and may require you to specify which data points to include via data_point_ids in the generation request.
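One way to verify coverage is to diff the required input slugs against the data point types you have collected. A sketch, where the `collected_slugs` set is illustrative (here, transport data is deliberately missing):

```python
# Cross-check the model's required inputs against collected data point
# types. The slugs mirror the required_inputs response above; the
# collected set is a hypothetical example.
required = [
    {"slug": "feedstock-delivery-mass", "is_manual_select": False},
    {"slug": "transport-distance", "is_manual_select": False},
]
collected_slugs = {"feedstock-delivery-mass"}  # example: transport data missing

# Inputs you still need to collect before generating.
missing = [r["slug"] for r in required if r["slug"] not in collected_slugs]
# Inputs sourced from other ledgers, which may need explicit data_point_ids.
manual = [r["slug"] for r in required if r["is_manual_select"]]

print("missing inputs:", missing)
print("manual-select inputs (need data_point_ids):", manual)
```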
Step 4: Preview the generation

Before committing to generation, preview how many batches will be created. This is especially useful when the model runs concurrently across multiple ledgers.
Endpoint: POST /ledgers/:id/generate_preview
curl -X POST https://app.gomangrove.com/api/v1/ledgers/lgr_abc123def456/generate_preview \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "start_time": "2025-01-01T00:00:00Z",
    "end_time": "2025-03-31T23:59:59Z"
  }'
Response
{
  "data": [
    {
      "id": "lgr_abc123def456",
      "name": "Net Carbon Removal",
      "count": 3
    }
  ]
}
The response shows how many batches will be generated per ledger. If the model runs concurrently with other models, you’ll see multiple entries.
Step 5: Trigger batch generation

Once you’re satisfied with the preview, trigger the actual generation. This is an async operation: it returns immediately with a task ID that you can poll for progress.
Endpoint: POST /ledgers/:id/batch_generations
curl -X POST https://app.gomangrove.com/api/v1/ledgers/lgr_abc123def456/batch_generations \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "start_time": "2025-01-01T00:00:00Z",
    "end_time": "2025-03-31T23:59:59Z"
  }'
Response (202 Accepted)
{
  "id": 456,
  "type": "ProductionAccounting::BatchGeneration",
  "state": "pending",
  "progress": 0,
  "created_at": "2025-04-01T12:00:00.000Z"
}
Save the task id from the response to check on progress.
Optional parameters:
  • data_point_ids: Array of specific data point friendly IDs to include. If omitted, data points are auto-discovered for the time range.
  • batch_amounts: Array of objects {batch_id, primary_output_amount, allocation_group} for specifying custom amounts on existing batches.
  • skipped_data_point_type_ids: Array of data point type IDs to exclude from generation.
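A sketch of assembling a request body that uses the optional parameters. The IDs here are placeholders, not values from a real project:

```python
import json

# Build the batch_generations request body. Only start_time and end_time
# are required; the optional keys are shown with placeholder IDs.
payload = {
    "start_time": "2025-01-01T00:00:00Z",
    "end_time": "2025-03-31T23:59:59Z",
    "data_point_ids": ["dp_example001"],  # restrict to specific data points
    "skipped_data_point_type_ids": [2],   # e.g. exclude the transport input
}

body = json.dumps(payload)
print(body)
```

Omit the optional keys entirely to let the API auto-discover data points for the time range.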
Step 6: Poll for task completion

Use the async task endpoint to check on the generation progress.
Endpoint: GET /async_tasks/:id
curl -X GET https://app.gomangrove.com/api/v1/async_tasks/456 \
  -H "Authorization: Bearer YOUR_API_TOKEN"
Response (in progress)
{
  "id": 456,
  "type": "ProductionAccounting::BatchGeneration",
  "state": "running",
  "progress": 50,
  "progress_description": "Processing batch 2 of 3",
  "started_at": "2025-04-01T12:00:01.000Z",
  "finished_at": null,
  "error_message": null,
  "result": null,
  "args": {
    "project_id": 123,
    "ledger_id": 456
  }
}
Response (complete)
{
  "id": 456,
  "type": "ProductionAccounting::BatchGeneration",
  "state": "complete",
  "progress": 100,
  "progress_description": null,
  "started_at": "2025-04-01T12:00:01.000Z",
  "finished_at": "2025-04-01T12:00:15.000Z",
  "error_message": null,
  "result": null,
  "args": {
    "project_id": 123,
    "ledger_id": 456
  }
}
Poll until state is "complete" or "failed". A reasonable polling interval is 2–5 seconds.
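The polling loop can be sketched as follows. `fetch_task` is a stand-in for a GET /async_tasks/:id call; here it is stubbed with the sample responses (and a short interval) so the sketch runs offline:

```python
import time

# Stubbed sequence of task states, mirroring the sample responses above.
_responses = iter([
    {"id": 456, "state": "pending", "progress": 0},
    {"id": 456, "state": "running", "progress": 50},
    {"id": 456, "state": "complete", "progress": 100},
])

def fetch_task(task_id):
    # Stand-in for GET /async_tasks/:id; a real client would make an
    # authenticated HTTP request here.
    return next(_responses)

def wait_for_task(task_id, interval=0.01):
    # Poll until the task reaches a terminal state. In production,
    # use an interval of 2-5 seconds rather than 0.01.
    while True:
        task = fetch_task(task_id)
        if task["state"] in ("complete", "failed"):
            return task
        time.sleep(interval)

final = wait_for_task(456)
print(final["state"], final["progress"])
```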
Step 7: Verify generated batches

Once the task completes, list the batches on the ledger to confirm they were created.
Endpoint: GET /ledgers/:id/batches
curl -X GET "https://app.gomangrove.com/api/v1/ledgers/lgr_abc123def456/batches?sort=-created_at" \
  -H "Authorization: Bearer YOUR_API_TOKEN"
Response
{
  "data": [
    {
      "id": "bat_newbatch001",
      "tracking_id": "Jan-2025-001",
      "start_time": "2025-01-01T00:00:00.000Z",
      "end_time": "2025-01-31T23:59:59.000Z",
      "state": "complete",
      "created_at": "2025-04-01T12:00:10.000Z",
      "primary_output": {
        "id": 1,
        "name": "Net Carbon Removal",
        "unit": "t",
        "value": 33.5
      },
      "ledger_balance": 33.5
    }
  ]
}
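As a final sanity check, you can total the primary output across completed batches and compare it against the ledger balance. A sketch using data that mirrors the response above:

```python
# Sum primary output across completed batches; for a single-batch ledger
# this should match ledger_balance. Data mirrors the batches response.
batches = [
    {
        "id": "bat_newbatch001",
        "state": "complete",
        "primary_output": {"unit": "t", "value": 33.5},
        "ledger_balance": 33.5,
    },
]

total = sum(
    b["primary_output"]["value"] for b in batches if b["state"] == "complete"
)
print(f"total generated: {total} t")
```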

Async task states

The state field on async tasks follows this lifecycle:
State      Description
pending    Task created, waiting to be picked up
running    Task is actively processing
complete   Task finished successfully
failed     Task encountered an error; check error_message
The async task pattern is also used by report submission. After submitting a report to a registry, you’ll receive an async_task_id that follows the same polling workflow.
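Since the same lifecycle applies to any async task, a small helper for interpreting a finished task is reusable across workflows. A sketch (the function name and error handling are illustrative):

```python
# Interpret a finished async task per the state lifecycle above:
# raise on failure so callers see error_message, reject non-terminal
# states, and return the result on success.
def resolve_task(task: dict):
    if task["state"] == "failed":
        raise RuntimeError(task.get("error_message") or "task failed")
    if task["state"] != "complete":
        raise ValueError(f"task still {task['state']}")
    return task.get("result")

print(resolve_task({"state": "complete", "result": None, "error_message": None}))
```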