Example Scenario
You have a biochar project with production data collected over Q1 2025. You want to generate batches for the Net Carbon Removal ledger, which will run the linked model against your feedstock delivery and pyrolysis data to produce carbon removal batches.
Pre-requisites
Before you begin, make sure you have the following:
- A ledger with a model linked to it. You can check this via `GET /ledgers` by looking for the `latest_model` field.
- An API token with Production Accounting read and write permissions.
- Data points (events with measurements) already collected for the time period you want to generate batches for.
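The first prerequisite can be checked programmatically. Below is a minimal sketch that filters a `GET /ledgers` response for ledgers with a linked model; the response shape is an assumption based on the fields this guide mentions (`id`, `latest_model`), so adapt it to the real payload.

```python
# Sketch: verify the "model linked" prerequisite from a GET /ledgers response.
# The ledger dict shape here is assumed, not taken from the API reference.

def ledgers_with_model(ledgers):
    """Return only ledgers that have a model linked (latest_model not null)."""
    return [ledger for ledger in ledgers if ledger.get("latest_model") is not None]

# Example payload, e.g. parsed from the GET /ledgers response body:
response_body = [
    {"id": "lgr_abc123def456", "name": "Net Carbon Removal", "latest_model": {"id": "mdl_1"}},
    {"id": "lgr_999", "name": "Unlinked ledger", "latest_model": None},
]

eligible = ledgers_with_model(response_body)
print([ledger["id"] for ledger in eligible])  # only ledgers eligible for batch generation
```

Only ledgers whose `latest_model` is non-null are candidates for batch generation.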
Find the target ledger
List your ledgers and identify the one you want to generate batches for.

Endpoint: `GET /ledgers`

Save the ledger `id` from the response (e.g., `lgr_abc123def456`). Confirm `latest_model` is not null — batch generation requires a linked model.

Check required inputs
Before generating, check which data inputs the model requires. This helps you verify that you have collected the right data.

Endpoint: `GET /ledgers/:id/required_inputs`

Inputs with `is_manual_select: true` come from other ledgers and may require you to specify which data points to include via `data_point_ids` in the generation request.

Preview the generation
Before committing to generation, preview how many batches will be created. This is especially useful when the model runs concurrently across multiple ledgers.

Endpoint: `POST /ledgers/:id/generate_preview`

The response shows how many batches will be generated per ledger. If the model runs concurrently with other models, you’ll see multiple entries.
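The per-ledger counts in the preview can be summarised like so. This is a sketch: the entry fields (`ledger_id`, `batch_count`) are assumed names, since the guide only says the response reports how many batches each ledger would receive.

```python
# Sketch: summarise a generate_preview response.
# Field names below are assumptions about the response shape.

def summarize_preview(preview_entries):
    """Map ledger id -> number of batches the generation would create."""
    return {entry["ledger_id"]: entry["batch_count"] for entry in preview_entries}

# Hypothetical preview for a model that runs concurrently across two ledgers:
preview = [
    {"ledger_id": "lgr_abc123def456", "batch_count": 12},
    {"ledger_id": "lgr_def789", "batch_count": 12},
]

print(summarize_preview(preview))
```

Seeing more than one entry here tells you the model runs concurrently with others, as described above.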
Trigger batch generation

Once you’re satisfied with the preview, trigger the actual generation. This is an async operation — it returns immediately (202 Accepted) with a task ID that you can poll for progress.

Endpoint: `POST /ledgers/:id/batch_generations`

Save the task `id` from the response to check on progress.

Optional parameters:

- `data_point_ids`: Array of specific data point friendly IDs to include. If omitted, data points are auto-discovered for the time range.
- `batch_amounts`: Array of objects `{batch_id, primary_output_amount, allocation_group}` for specifying custom amounts on existing batches.
- `skipped_data_point_type_ids`: Array of data point type IDs to exclude from generation.
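A request body for this endpoint can be assembled from the optional parameters above. The sketch below omits unused keys entirely rather than sending nulls, which is an assumption about the API's preferences; the parameter names come from this guide.

```python
# Sketch: build the POST /ledgers/:id/batch_generations request body.
# Omitting optional keys (instead of sending null) is an assumption.

def build_generation_request(data_point_ids=None, batch_amounts=None,
                             skipped_data_point_type_ids=None):
    body = {}
    if data_point_ids is not None:
        body["data_point_ids"] = data_point_ids
    if batch_amounts is not None:
        body["batch_amounts"] = batch_amounts
    if skipped_data_point_type_ids is not None:
        body["skipped_data_point_type_ids"] = skipped_data_point_type_ids
    return body

# With no arguments the body is empty and data points are auto-discovered:
print(build_generation_request())  # {}

# Pinning specific data points and excluding one data point type
# (the ids shown are placeholders):
print(build_generation_request(
    data_point_ids=["dp_001", "dp_002"],
    skipped_data_point_type_ids=["dpt_9"],
))
```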
Poll for task completion

Use the async task endpoint to check on the generation’s progress.

Endpoint: `GET /async_tasks/:id`

Poll until `state` is "complete" or "failed". A reasonable polling interval is every 2-5 seconds.

Verify generated batches
Once the task completes, list the batches on the ledger to confirm they were created.

Endpoint: `GET /ledgers/:id/batches`
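One simple verification is to compare the ledger's batch list against the count the preview promised. This is a sketch with an assumed batch shape; the field names are placeholders, not documented response fields.

```python
# Sketch: confirm the generated batches appear on the ledger.
# The batch fields are assumptions; compare against the preview count.

def verify_batch_count(batches, expected):
    """True when the ledger holds at least the number of batches expected."""
    return len(batches) >= expected

# Hypothetical GET /ledgers/:id/batches response body:
batches = [
    {"id": "bat_001", "primary_output_amount": 4.2},
    {"id": "bat_002", "primary_output_amount": 3.8},
]

print(verify_batch_count(batches, expected=2))  # True
```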
Async task states
The `state` field on async tasks follows this lifecycle:

| State | Description |
|---|---|
| `pending` | Task created, waiting to be picked up |
| `running` | Task is actively processing |
| `complete` | Task finished successfully |
| `failed` | Task encountered an error — check `error_message` |
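The polling workflow over these states can be sketched as follows. The fetch function is injected so the loop can be shown without a live API; in practice it would perform the `GET /async_tasks/:id` call. The state names come from the table above, while everything else is illustrative.

```python
import time

# Sketch: poll an async task until it reaches a terminal state.
# fetch_task is a stand-in for a real GET /async_tasks/:id call.

def poll_async_task(fetch_task, interval_seconds=2.0, max_attempts=60):
    """Poll until the task's state is 'complete' or 'failed'."""
    for _ in range(max_attempts):
        task = fetch_task()
        if task["state"] in ("complete", "failed"):
            return task
        time.sleep(interval_seconds)  # the guide suggests every 2-5 seconds
    raise TimeoutError("async task did not reach a terminal state in time")

# Simulated task that finishes on the third poll:
states = iter(["pending", "running", "complete"])
fake_fetch = lambda: {"id": "task_123", "state": next(states)}

result = poll_async_task(fake_fetch, interval_seconds=0)
print(result["state"])  # complete
```

Handle the `failed` state explicitly in real code so you can surface `error_message` to the caller.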
The async task pattern is also used by report submission. After submitting a report to a registry, you’ll receive an `async_task_id` that follows the same polling workflow.