If you’re building a SaaS product on AWS Marketplace with usage-based billing, you’ll be calling BatchMeterUsage. The API looks simple. The documentation is thin. And there are at least seven things that will bite you in production if you don’t know about them upfront.
I’ve shipped 4 SaaS products on AWS Marketplace. Here’s what I learned about metering the hard way.
What BatchMeterUsage Does
BatchMeterUsage is the AWS Marketplace API that reports how much of your product a customer used. AWS uses these reports to bill the customer. You call it periodically (typically hourly), and each call contains a batch of usage records – one per customer per metering dimension.
A usage record looks like this:
{
  "CustomerIdentifier": "cust-abc-123",
  "Dimension": "api_calls",
  "Quantity": 1500,
  "Timestamp": "2025-03-15T13:00:00.000Z"
}
Simple, right? Here are the gotchas.
Gotcha 1: You Must Send Zero-Usage Records
This is the one that surprises everyone. If a customer is subscribed but didn’t use your product in the last hour, you still need to send a record with Quantity: 0.
Why? AWS Marketplace uses the absence of metering records as a signal that something is wrong. If you stop sending records for a customer, AWS may flag the subscription for review or pause it. The zero-usage heartbeat tells AWS “this customer is still active, they just didn’t use anything this hour.”
This means your metering job needs to know about all subscribed customers, not just the ones with usage:
const subscribedCustomers = db.customers.getSubscribedCustomers();
const usageMap = aggregateUsageByCustomer(hourStart, hourEnd); // keyed by customer ID
const usageRecords = [];
const customersWithRecords = new Set();

// Build records for customers with actual usage
for (const customer of subscribedCustomers) {
  if (usageMap.has(customer.custId)) {
    customersWithRecords.add(customer.custId);
    // ... build usage records from usageMap.get(customer.custId)
  }
}

// Zero-fill for idle customers, one record per dimension
for (const customer of subscribedCustomers) {
  if (!customersWithRecords.has(customer.custId)) {
    for (const dimension of dimensions) {
      usageRecords.push({
        Timestamp: hourStart,
        CustomerIdentifier: customer.custId,
        Dimension: dimension,
        Quantity: 0
      });
    }
  }
}
But wait – which dimensions do you send zero records for? You can’t just guess. You need to know the exact set of ExternallyMetered dimensions defined for your product. More on that in Gotcha 3.
Gotcha 2: The Timestamp Is the Hour, Not “Now”
The Timestamp field in each usage record is not when you’re making the API call. It’s the start of the billing hour you’re reporting for.
If your metering job runs at 14:35 UTC, you’re reporting usage for the 13:00-14:00 UTC window. The timestamp must be 2025-03-15T13:00:00.000Z – the start of the previous hour, truncated to the hour boundary.
function getPreviousHourStart() {
const now = new Date();
const previousHour = new Date(now);
previousHour.setUTCHours(previousHour.getUTCHours() - 1);
previousHour.setUTCMinutes(0);
previousHour.setUTCSeconds(0);
previousHour.setUTCMilliseconds(0);
return previousHour;
}
Get this wrong and you’ll either:
- Double-bill a customer (reporting the current hour’s usage when the previous hour’s usage was already reported)
- Get rejected by AWS (timestamps must fall within the last 6 hours)
Your usage aggregation query needs to match this window exactly. Use a half-open interval – greater-than-or-equal to the hour start, strictly less than the hour end:
SELECT acctId, dimension, COALESCE(SUM(usage), 0) as total
FROM usage
WHERE datetime(timestamp) >= datetime(?) -- 13:00:00
AND datetime(timestamp) < datetime(?) -- 14:00:00
GROUP BY acctId, dimension
The < (not <=) on the end boundary is critical. A usage event timestamped at exactly 14:00:00.000 belongs to the next hour’s window, not this one.
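The same boundary rule is easy to express (and unit-test) in code. This is a hypothetical helper mirroring the query’s half-open interval, not part of any AWS API:

```javascript
// Sketch: half-open window check [hourStart, hourEnd) -- the start is
// included, the end is excluded, matching the SQL query's >= / < pair.
function inWindow(eventTs, hourStart, hourEnd) {
  const t = new Date(eventTs).getTime();
  return t >= new Date(hourStart).getTime() && t < new Date(hourEnd).getTime();
}

// An event at exactly 14:00:00.000 falls into the NEXT hour's window:
// inWindow('2025-03-15T14:00:00.000Z', '2025-03-15T13:00:00Z', '2025-03-15T14:00:00Z') === false
```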
Gotcha 3: You Need to Know Your Dimensions at Runtime
When you define your AWS Marketplace product, you configure metering dimensions (e.g., api_calls, storage_gb, users). Some are ExternallyMetered (you report them via BatchMeterUsage), others might be contract-based.
Your metering job needs to know which dimensions are externally metered so it can:
- Send zero-usage records for the right dimensions
- Not accidentally skip a dimension
You could hardcode them. But then you’d need a code change every time you add a dimension to your product listing. A better approach is to fetch them from the AWS Marketplace Catalog API at startup:
const { MarketplaceCatalogClient, DescribeEntityCommand } = require('@aws-sdk/client-marketplace-catalog');
async function getExternallyMeteredDimensions(productEntityId) {
const client = new MarketplaceCatalogClient({ region: 'us-east-1' });
const resp = await client.send(
new DescribeEntityCommand({
Catalog: "AWSMarketplace",
EntityId: productEntityId,
})
);
const details = JSON.parse(resp.Details ?? "{}");
const dimensions = details.Dimensions || [];
return dimensions
.filter(dim => dim.Types && dim.Types.includes('ExternallyMetered'))
.map(dim => dim.Key);
}
Cache the result in memory for the lifetime of the process. Dimensions don’t change often, and you don’t want to call the Catalog API on every metering cycle. A process restart picks up new dimensions.
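A minimal way to do that caching is a generic async memoizer. This is a sketch under the assumption that you wire it to the dimension-fetching function above; `memoizeAsync` is a hypothetical helper, not an SDK feature:

```javascript
// Sketch: cache an async lookup for the lifetime of the process.
// Wrap getExternallyMeteredDimensions (from above) with this so the
// Catalog API is hit at most once per process.
function memoizeAsync(fetcher) {
  let cache = null;
  return async (...args) => {
    if (cache === null) {
      // Store the promise, not the value, so concurrent callers share one fetch
      cache = fetcher(...args);
    }
    return cache;
  };
}
```

Usage: `const getDimensions = memoizeAsync(getExternallyMeteredDimensions);` and call `getDimensions(entityId)` from the metering job.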
One thing to note: the Marketplace Catalog API only works in us-east-1, even if your application runs in another region. (The Metering API, by contrast, is available in multiple regions – see Gotcha 6.)
Gotcha 4: The 25-Record Batch Limit
BatchMeterUsage accepts a maximum of 25 usage records per API call. If you have 10 customers and 3 dimensions, that’s 30 records – you need two API calls.
This is easy to overlook in development when you have 1-2 test customers. It breaks in production when you have real customers:
async function sendBatchMeterUsage(usageRecords) {
const BATCH_SIZE = 25;
for (let i = 0; i < usageRecords.length; i += BATCH_SIZE) {
const batch = usageRecords.slice(i, i + BATCH_SIZE);
const command = new BatchMeterUsageCommand({
ProductCode: PRODUCT_CODE,
UsageRecords: batch
});
const response = await meteringClient.send(command);
// Handle response...
}
}
Send batches sequentially, not in parallel. AWS rate limits the metering API, and you don’t want to deal with throttling errors on top of everything else.
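If you want extra breathing room between calls, a fixed pause between batches is a simple option. This is a sketch; the 500 ms value and the `sendBatch` callback are assumptions, not AWS guidance:

```javascript
// Sketch: send batches one at a time with a small pause between calls
// to stay well under the metering API's rate limits.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendAllBatches(batches, sendBatch) {
  for (let i = 0; i < batches.length; i += 1) {
    await sendBatch(batches[i]);
    if (i < batches.length - 1) {
      await sleep(500); // pause between calls, but not after the last one
    }
  }
}
```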
Gotcha 5: Failures Are Silent and Varied
A successful API call doesn’t mean all records were accepted. BatchMeterUsage has three distinct failure modes, and you need to handle all of them:
1. Per-record failures in Results:
The response includes a Results array where each record has a Status. A Status of "Success" means the record was accepted. Anything else – "CustomerNotSubscribed", "DuplicateRecord", etc. – means it wasn’t.
for (const result of response.Results) {
if (result.Status === 'Success') {
// Save the MeteringRecordId for your audit trail
saveSuccessReport(result);
} else {
// This record was rejected -- log it, save it, alert on it
saveFailureReport(result);
}
}
The MeteringRecordId returned on success is your receipt. Save it. If there’s ever a billing dispute, this is your proof.
2. Unprocessed records:
The response may include an UnprocessedRecords array – records that AWS didn’t even attempt to process. This happens under load or transient issues:
if (response.UnprocessedRecords && response.UnprocessedRecords.length > 0) {
// These need to be retried
logger.warn(`${response.UnprocessedRecords.length} records were not processed`);
}
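A minimal retry sketch, assuming a `submit` callback that wraps the BatchMeterUsage call and returns the `{ Results, UnprocessedRecords }` response shape; `sendWithRetry` and the backoff values are my own, not an SDK feature:

```javascript
// Sketch: retry unprocessed records with capped attempts and exponential
// backoff. `submit` stands in for the BatchMeterUsage call.
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendWithRetry(records, submit, maxAttempts = 3) {
  let pending = records;
  for (let attempt = 1; attempt <= maxAttempts && pending.length > 0; attempt += 1) {
    const response = await submit(pending);
    pending = response.UnprocessedRecords ?? [];
    if (pending.length > 0 && attempt < maxAttempts) {
      await wait(1000 * 2 ** (attempt - 1)); // 1s, 2s, ...
    }
  }
  return pending; // anything still here needs alerting / manual follow-up
}
```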
3. Full batch failure (network/API error):
The entire send() call throws. None of the records were submitted:
try {
const response = await meteringClient.send(command);
// handle Results + UnprocessedRecords
} catch (error) {
// None of the 25 records in this batch were submitted
// Log all of them as failed
for (const record of batch) {
saveErrorReport(record, error.message);
}
}
The key insight: you need an audit table. Every record you submit should be saved with its outcome – success (with MeteringRecordId), failure (with status), or error (with error message). Without this, you’re flying blind.
CREATE TABLE metering_reports (
  id INTEGER PRIMARY KEY,
  customer_identifier TEXT NOT NULL,
  dimension TEXT NOT NULL,
  quantity INTEGER NOT NULL DEFAULT 0,
  metering_timestamp DATETIME NOT NULL,
  metering_record_id TEXT,
  status TEXT NOT NULL DEFAULT 'success',
  error_message TEXT,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
This table is your billing truth. Query it to answer “what did we report to AWS for customer X in March?” or “which records failed last week?”
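For instance, the March question might be answered with a query like this (the customer identifier and date range are illustrative):

```sql
-- Sketch: what did we successfully report for one customer in March?
SELECT dimension, SUM(quantity) AS total_reported
FROM metering_reports
WHERE customer_identifier = 'cust-abc-123'
  AND metering_timestamp >= '2025-03-01T00:00:00Z'
  AND metering_timestamp <  '2025-04-01T00:00:00Z'
  AND status = 'success'
GROUP BY dimension;
```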
Gotcha 6: Not all regions support the Metering API
As of this writing, BatchMeterUsage is supported in the following AWS Regions (see the AWS Marketplace Metering Service documentation for the current list):
Commercial Regions:
eu-north-1, me-south-1, ap-south-1, eu-west-3, ap-southeast-3, us-east-2, af-south-1, eu-west-1, me-central-1, eu-central-1, sa-east-1, ap-east-1, ap-south-2, us-east-1, ap-northeast-2, ap-northeast-3, eu-west-2, ap-southeast-4, eu-south-1, ap-northeast-1, us-west-2, us-west-1, ap-southeast-1, ap-southeast-2, il-central-1, ca-central-1, eu-south-2, eu-central-2
China Regions:
cn-northwest-1
Gotcha 7: Two versions of the API
There are actually two versions of the BatchMeterUsage API but with the same name. One version takes in the CustomerIdentifier and ProductCode and another version takes in
CustomerAWSAccountId (instead of CustomerIdentifier) and LicenseArn (instead of ProductCode). The first version is what is widely used but you have to use the second version
starting June 1, 2026 if you want to use Concurrent Agreements.
Read about both of them including code samples here.
Putting It All Together
Here’s the overall structure of a production metering job:
Every hour:
1. Compute the time window (previous hour start/end)
2. Fetch all subscribed customers from your database
3. Aggregate usage from the usage table, grouped by (customer, dimension)
4. Build usage records:
a. Real usage records for customers who had activity
b. Zero-usage records for idle customers (for all ExternallyMetered dimensions)
5. Split into batches of 25
6. Send each batch sequentially
7. Save every result to the audit table (success, failure, or error)
And separately:
On new subscription (subscribe-success SQS event):
- Send an immediate zero-usage record for all dimensions
- This registers the customer with AWS Metering without waiting for the next hourly job
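The record-building and batching steps above can be sketched as two pure helpers, which keeps the AWS call and audit writes at the edges where they are easy to mock. The function names and input shapes here are illustrative:

```javascript
// Sketch: build one hour's records (real usage + zero-fill) and split
// them into batches of 25. usageByCustomer is a Map of customer ID ->
// Map of dimension -> quantity.
function buildHourlyRecords(customers, usageByCustomer, dimensions, hourStart) {
  const records = [];
  for (const customer of customers) {
    const usage = usageByCustomer.get(customer.custId) ?? new Map();
    for (const dimension of dimensions) {
      records.push({
        CustomerIdentifier: customer.custId,
        Dimension: dimension,
        Quantity: usage.get(dimension) ?? 0, // zero-fill covers idle customers
        Timestamp: hourStart,
      });
    }
  }
  return records;
}

function chunk(records, size = 25) {
  const batches = [];
  for (let i = 0; i < records.length; i += size) {
    batches.push(records.slice(i, i + size));
  }
  return batches;
}
```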
Takeaway
BatchMeterUsage is deceptively simple. The API call is one function. But the operational concerns around it – zero-usage heartbeats, hourly timestamp semantics, dimension discovery, batch limits, three failure modes, region availability, two API versions – are where the real complexity lives.
If you’re implementing this for the first time, budget more time than you think. And build the audit table from day one. When a customer questions their bill three months from now, you’ll be glad you did.
I’ve packaged a production-tested implementation of all of this – metering, auth, entitlements, and the rest of the AWS Marketplace plumbing – into a self-hosted Node.js gateway kit. If you’re listing a SaaS product on AWS Marketplace, check it out here.