ERP Integration FAQ
This guide answers common questions about integrating Pico MES with ERP systems. For step-by-step integration instructions, see the Work Order Integration Guide.
Data Model & Terminology
What is Pico's object model for production execution?
Pico uses a unified Operation model that represents products, subassemblies, and processes:
| Object | Description |
|---|---|
| Operation | A product, subassembly, or process with part number and revision |
| SubOperation | Parent-child relationship defining product structure and routing sequence |
| OperationOrderComplete | Completed operation order with summary and end state |
| OperationSummary | Execution metrics: timestamps and consumed materials |
| State | Capture point containing the produced serial for a completed build |
The hierarchy is: Product → Subassembly → Process, all represented as Operations linked by SubOperations.
Important notes:
ERPs may call the ordering of a product a "work order", which differs from Pico. In Pico, we differentiate an ordered product (an operation order) from the individual pieces of work that may be done at separate stations to complete the ordered product. Ordered operations may be made of many work orders. A work order is for a single process to be completed at a single station.
How does Pico represent an Order?
An order in Pico is called an Operation Order. It is created via the operationOrderSave mutation.
Lifecycle states:
| State | How It's Indicated |
|---|---|
| Created | Mutation returns message: "created" |
| Updated | Mutation returns message: "updated" |
| In Progress | Implicit when shop floor work begins |
| Completed | operationOrderCompletesStream subscription fires |
| Cancelled | Mutation returns message: "canceled" |
How are external identifiers and revisions represented?
Every Operation exposes external identifiers that your ERP likely calls part numbers:
- `id` - Pico's internal identifier
- `externalId` - External identifier for the operation (typically the part number, from Pico's "Part Number" field)
- `externalRevision` - External revision (from Pico's "Revision" field)
- `name` - Human-readable name
Consumed serials (Attribute on the operation template, ConsumedSerial on a completion event) carry their own externalId — again, typically a part number — so you can correlate consumed material to your ERP.
How do I match Pico operations to ERP operations?
Pico has a separate operator-facing "BOM in work instructions" feature that shows operators the parts needed at a given process step. That feature is a UI concern for operators and is unrelated to the GraphQL API, which does not surface it.
What the API does expose is the external identifiers for operations and the hierarchy of products, subassemblies, and processes. That information is intended to help you match operations in Pico with operations in your ERP.
The two things the API exposes serve two different purposes:
- To match orderable operations between Pico and the ERP: in most cases, the operation's `externalId` is sufficient to associate an orderable thing in Pico with the corresponding record in your ERP. If additional context helps with the mapping, use `SubOperation` to list parent part numbers.
- To consume serialized material in the ERP: the runtime `OperationSummary.consumedSerials` (each with `id`, `externalId`, and `value`) delivered on completion events tells you which serials were captured during the build, so you can post the corresponding consumption in your ERP.
Query the operation template and its children to retrieve the mapping inputs:
```graphql
query MatchOperations($productId: ID!) {
  operations(where: { id: { _eq: $productId } }) {
    id
    externalId
    consumedSerials { id, name, externalId }
  }
  subOperations(where: { parentId: { _eq: $productId } }) {
    childId
    childIndex
  }
}
```
On the Operation type, consumedSerials is the set of serials you should expect to be captured on completion of the operation. The runtime values (with value populated) arrive on the completion event via OperationSummary.consumedSerials.
How are routings represented?
Routings in Pico are defined as workflows on products or subassemblies. The structure of a workflow is surfaced through SubOperation relationships:
- `parentId` - The parent product or subassembly
- `childId` - The child operation (process or subassembly)
- `childIndex` - Position in the sequence (0-based)
Operations with the same childIndex can run in parallel. Sequential indices define linear dependencies.
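As an illustration, here is a minimal Python sketch (the `SubOperation` rows are hypothetical sample data) of turning those records into ordered routing steps, grouping parallel operations by `childIndex`:

```python
from collections import defaultdict

def routing_steps(sub_operations):
    """Group SubOperation rows into ordered routing steps.

    Rows that share a childIndex form one parallel group;
    ascending indices define the linear sequence.
    """
    groups = defaultdict(list)
    for row in sub_operations:
        groups[row["childIndex"]].append(row["childId"])
    return [sorted(groups[i]) for i in sorted(groups)]

# Hypothetical rows for one parent product:
rows = [
    {"childId": "inspect", "childIndex": 2},
    {"childId": "weld-left", "childIndex": 1},
    {"childId": "weld-right", "childIndex": 1},
    {"childId": "prep", "childIndex": 0},
]
steps = routing_steps(rows)
# prep first, the two welds in parallel, then inspect
```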
Operation Order Management
How are operation orders (work orders) created?
ERPs often call these work orders. In Pico, an ERP operationOrderSave call creates an operation order; Pico then computes, from the ordered product's workflow, the individual work orders that operators actually execute at stations on the shop floor.
Use the operationOrderSave mutation:
```graphql
mutation CreateOperationOrder {
  operationOrderSave(input: {
    operationId: "your-product-id"
    externalOrderId: "ERP-ORDER-123"
  }) {
    message
  }
}
```
The tuple (operationId, externalOrderId, orderIndex) is the unique key for an operation order; you cannot create more than one with the same values. externalOrderId is optional only when ordering a singular process. For products, subassemblies, or any multi-process operation, always provide an externalOrderId so you can correlate Pico events back to your ERP records.
Which version of an operation is used when an order is created?
On operation order creation, Pico uses the most recently deployed version of the operation to calculate the work orders created at stations. If a deployment happens near the time of order creation, inclusion of that deployment in the calculation is not guaranteed.
When update support lands (see How are operation orders updated?), updates will evaluate the operation against whatever is deployed at the time of the update — potentially adding to what's ordered, but never removing.
How are operation orders updated?
Updates use the same operationOrderSave mutation as create. Updates are not currently supported — only the initial create path is wired up today. Support is planned; see Which version of an operation is used when an order is created? for how re-evaluation will work when updates ship.
How are operation orders released?
Operation orders are released immediately upon creation. There is no separate release step - once created, orders appear on the shop floor.
How are operation orders cancelled?
Use the operationOrderCancel mutation:
```graphql
mutation CancelOperationOrder {
  operationOrderCancel(input: {
    operationId: "your-product-id"
    externalOrderId: "ERP-ORDER-123"
  }) {
    message
  }
}
```
Important: Only unstarted operation orders can be cancelled. Once work begins on the shop floor, cancellation is not supported via the API.
What happens when I order a product with subassemblies?
When you create an order for a product, Pico uses a waterfall pattern - child operation orders are created automatically as their dependencies complete. You only need to order the top-level product.
Real-Time Events
What execution events does Pico emit?
Pico provides four subscription streams:
| Subscription | Events |
|---|---|
| operationsStream | Operation definition changes |
| subOperationsStream | Product structure changes |
| operationOrderCompletesStream | Operation order completions |
| noteEventsStream | Operator notes created during a build |
What data is included in completion events?
The operationOrderCompletesStream provides:
- `externalOrderId` - Your ERP order reference
- `at` - Completion timestamp (top-level field on `OperationOrderComplete`)
- `operation` - What was built (id, part number, revision, name)
- `operationSummary.startedAt` - When the build started
- `operationSummary.consumedSerials` - Materials used (part numbers and serial/lot values)
- `operationSummary.cycleTime` - Total active build time in days (`Float!`)
- `endState.producedSerial` - Output serial number
What is the latency of execution events?
Events are typically pushed within 10 seconds of when they occur; this is not backed by an SLA and depends on network conditions.
Does Pico support event replay and recovery?
Yes. All subscriptions support cursor-based replay:
```graphql
subscription ResumeFromCheckpoint {
  operationOrderCompletesStream(
    cursor: {
      initialValue: { at: "2024-01-15T14:30:00Z" }
      ordering: ASC
    }
    batchSize: 100
  ) {
    externalOrderId
    at
  }
}
```
Store the at timestamp of each processed event. On reconnection, resume from your last checkpoint to replay missed events.
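A minimal checkpointing sketch in Python (the file path and JSON shape are assumptions of this example, not part of the Pico API):

```python
import json
import os
import tempfile

# Hypothetical local checkpoint location:
CHECKPOINT_PATH = os.path.join(tempfile.gettempdir(), "pico_completion_cursor.json")

def save_checkpoint(at: str) -> None:
    """Persist the `at` timestamp of the last processed completion event."""
    with open(CHECKPOINT_PATH, "w") as f:
        json.dump({"at": at}, f)

def load_checkpoint(default: str = "1970-01-01T00:00:00Z") -> str:
    """Return the last stored cursor, or a default for a first run."""
    try:
        with open(CHECKPOINT_PATH) as f:
            return json.load(f)["at"]
    except (FileNotFoundError, KeyError, ValueError):
        return default

# After handling an event, record its timestamp; on reconnect,
# pass load_checkpoint() as cursor.initialValue.at in the subscription.
save_checkpoint("2024-01-15T14:30:00Z")
```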
How is event reliability handled?
| Mechanism | Description |
|---|---|
| Cursor replay | Resume from any timestamp after disconnection |
| Batch control | batchSize parameter manages throughput |
| Filtering | where clauses reduce unnecessary events |
| At-least-once delivery | Events may be delivered more than once; implement idempotent handlers |
Important: Events are delivered at-least-once, not exactly-once. Your integration must handle duplicate events gracefully.
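Because delivery is at-least-once, a handler can deduplicate on a natural key. A sketch in Python, assuming (`externalOrderId`, `at`) uniquely identifies a completion (an assumption of this example; check your event's identity fields):

```python
def make_idempotent_handler(post_to_erp):
    """Wrap an ERP-posting function so duplicate completion events are skipped."""
    seen = set()

    def handle(event):
        key = (event["externalOrderId"], event["at"])
        if key in seen:
            return False          # duplicate delivery: skip
        post_to_erp(event)
        seen.add(key)             # mark processed only after a successful post
        return True

    return handle

posted = []
handle = make_idempotent_handler(posted.append)
event = {"externalOrderId": "ERP-ORDER-123", "at": "2024-01-15T14:30:00Z"}
handle(event)
handle(event)  # second delivery of the same event is ignored
```

A production version would persist `seen` (or check the ERP for an existing transaction) so deduplication survives restarts.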
Item Master & Products
Can I create or update operations via the API?
No. The operations query is read-only. Products, subassemblies, and processes must be created and maintained through the Pico UI.
The API allows you to:
- Query existing operations
- Subscribe to changes when operations are modified
- Map Pico operations to your ERP items using `externalId` and `externalRevision`
How do I sync Pico products to my ERP?
- Query all operations:
```graphql
query SyncProducts {
  operations {
    id
    externalId
    externalRevision
    name
    consumedSerials { externalId }
  }
}
```
- Subscribe to changes:
```graphql
subscription ProductChanges {
  operationsStream(
    cursor: { initialValue: { updatedAt: "2024-01-01T00:00:00Z" } }
    batchSize: 10
  ) {
    id
    externalId
    externalRevision
    updatedAt
  }
}
```
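Downstream of those two calls, a Python sketch of folding stream events into a local item-master map keyed by (`externalId`, `externalRevision`) (the map shape is an assumption of this example):

```python
def apply_item_updates(item_master, events):
    """Upsert operationsStream events into a local item-master map.

    Keyed by (externalId, externalRevision), so re-delivered events
    simply overwrite the same entry (safe under at-least-once delivery).
    """
    for e in events:
        key = (e["externalId"], e["externalRevision"])
        item_master[key] = {"pico_id": e["id"], "updated_at": e["updatedAt"]}
    return item_master

items = {}
apply_item_updates(items, [
    {"id": "op-1", "externalId": "PN-100", "externalRevision": "A",
     "updatedAt": "2024-01-01T00:00:00Z"},
    {"id": "op-1", "externalId": "PN-100", "externalRevision": "A",
     "updatedAt": "2024-02-01T00:00:00Z"},  # re-delivery / later change
])
```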
How does Pico handle engineering changes?
Changes to operations are reflected immediately upon deployment in Pico. Subscribe to operationsStream and subOperationsStream to receive notifications when definitions change.
There is no explicit ECO/ECN workflow in the API - you'll need to manage engineering change processes in your ERP or PLM system.
Production Data
What completion data is available for integration?
| Data | Field |
|---|---|
| Produced serial | endState.producedSerial |
| Completion time | at (on OperationOrderComplete) |
| Start time | operationSummary.startedAt |
| Cycle time | operationSummary.cycleTime (Float!, days) |
| Consumed materials | operationSummary.consumedSerials[].externalId, .value |
How is labor time captured?
Labor time is captured at the operation level, not clock-in/clock-out:
- `operationSummary.startedAt` - When the build started
- `at` - When the operation order completed (on `OperationOrderComplete`)
Cycle time at the operation level is exposed as operationSummary.cycleTime (Float!, days).
This is the time the build is active at a station, not payroll-grade timekeeping.
Does Pico provide payroll-grade timekeeping?
No. Pico does not track:
- Clock-in/clock-out times
- Breaks or attendance
- Shift assignments
- Indirect labor
How is scrap, rework, or yield captured?
Scrap and rework are not explicitly captured as separate fields in the GraphQL API. Completed orders represent good output. Scrap/rework tracking may be handled through Pico UI features not exposed via the API.
How do I perform material backflush?
Material backflush is done at completion events; Pico does not currently publish events that would support backflush before the entire order completes. consumedSerials is required only when some of the consumed material is serialized; for non-serialized material, backflush against the produced quantity implied by the completion itself.
When consumedSerials is present, it is a flat list of runtime consumption records with id, externalId, and value:
```graphql
operationSummary {
  consumedSerials {
    id
    externalId  # Part number to backflush
    value       # Serial/lot consumed
  }
}
```
When you receive a completion event, iterate through consumedSerials and post material issues to your ERP for each entry.
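A minimal Python sketch of that step (the ERP-side line shape is an assumption of this example):

```python
def backflush_lines(completion):
    """Turn a completion event into ERP material-issue lines,
    one per consumed serial."""
    summary = completion.get("operationSummary", {})
    return [
        {"part_number": s["externalId"], "serial": s["value"]}
        for s in summary.get("consumedSerials", [])
    ]

event = {
    "externalOrderId": "ERP-ORDER-123",
    "operationSummary": {
        "consumedSerials": [
            {"id": "a1", "externalId": "PN-100", "value": "SN-0001"},
            {"id": "a2", "externalId": "PN-200", "value": "LOT-77"},
        ]
    },
}
lines = backflush_lines(event)
# One material issue per consumed serial; post each line to the ERP
```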
Machine & Downtime
How is machine runtime tracked?
Machine runtime is not tracked in the GraphQL API. OEE, cycle counting, equipment utilization, and station-level machine data on completion events are not exposed. Completion events do not include operator or station information — those fields exist only on note events (see noteEventsStream / NoteState).
Machine/OEE telemetry is not part of the GraphQL schema and must be collected separately.
How is downtime captured?
Downtime is not available via the GraphQL API. There are no:
- Downtime event types
- Downtime reasons or categories
- Downtime duration tracking
Does Pico integrate with SCADA systems?
There is no direct SCADA integration in the GraphQL API.
Today, the GraphQL API surfaces completion events and cycle time. The rest of the measurement and torque information captured by Pico will be exposed through the GraphQL API at a later date.
Inventory
Does Pico maintain inventory balances?
No. Pico captures:
- Material consumption (what was used)
- Production output (what was produced)
Your ERP is responsible for:
- Inventory quantities
- Stock locations
- Lot/serial tracking at the inventory level
Does Pico expose inventory consumption events?
Only at order completion. Pico does not currently publish intermediate/during-build consumption events — everything you need to post a backflush arrives on the completion event (see How do I perform material backflush?).
Reliability & Error Handling
How does Pico ensure transactional integrity?
Unique key for operation orders: The tuple (operationId, externalOrderId, orderIndex) is the unique key for an operation order — you cannot create more than one with the same values. See How are operation orders (work orders) created? for the create contract and Which version of an operation is used when an order is created? for how deployed versions are resolved.
Event delivery: Pico offers three subscription delivery mechanisms — WebSocket (graphql-ws), Server-Sent Events (text/event-stream over a persistent HTTP client), and webhooks. All three provide at-least-once delivery; consumers should dedupe by event id and timestamp (see the Subscriptions guide for the per-event identity fields).
What retry mechanisms are supported?
The API itself does not retry on your behalf. Your integration should:
- Store the cursor field of each processed event (e.g., `at` for `operationOrderCompletesStream`, `updatedAt` for `operationsStream` and `subOperationsStream`)
- On reconnection, resume from your last checkpoint
- Handle duplicate events idempotently (e.g., check if transaction already posted)
- Implement exponential backoff for transient failures
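The retry discipline above can be sketched as a generic Python helper (not part of any Pico SDK; the sleep function is injectable so the logic is testable):

```python
import time

def with_backoff(fn, retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the failure
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Simulated transient failure: succeeds on the third attempt.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient")
    return "ok"

delays = []
result = with_backoff(flaky, sleep=delays.append)
```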
How should I handle ERP rejections?
Partial failures are your responsibility to handle:
- Receive completion event
- Attempt ERP transaction
- If rejected:
- Log the failure with full event data
- Store for retry or manual review
- Continue processing other events
- Do not block the subscription on individual failures
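The steps above can be sketched as a drain loop with a dead-letter list (all names here are illustrative, not part of the Pico API):

```python
def drain(events, post_to_erp, dead_letter):
    """Process events; isolate failures without blocking the stream."""
    for event in events:
        try:
            post_to_erp(event)
        except Exception as exc:
            # Park the failure (with full event data) for retry or manual
            # review, then keep going so one rejection never stalls the stream.
            dead_letter.append({"event": event, "error": str(exc)})

ok, failed = [], []

def post(event):
    # Simulated ERP: rejects one specific order.
    if event["externalOrderId"] == "BAD":
        raise ValueError("ERP rejected the transaction")
    ok.append(event)

drain(
    [{"externalOrderId": "A"}, {"externalOrderId": "BAD"}, {"externalOrderId": "B"}],
    post,
    failed,
)
# A and B are posted; the rejected event is parked with its error message
```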
How do I reconcile data between Pico and my ERP?
No built-in reconciliation exists today. Recommended approaches:
- Cursor replay - Replay completions from a known date
- External ID matching - Compare `externalOrderId` between systems
- Timestamp queries - Query operations with `updatedAt` filters
A future feature of Pico will provide a dashboard and notifications for mapping Pico operations to ERP operations, reducing the need for DIY reconciliation. No ETA today.
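Until then, the external-ID matching approach can be sketched as a set comparison in Python (the inputs are lists of `externalOrderId` values pulled from each system):

```python
def reconcile(pico_order_ids, erp_order_ids):
    """Compare completed-order IDs between Pico and the ERP."""
    pico, erp = set(pico_order_ids), set(erp_order_ids)
    return {
        # Completed in Pico but never posted in the ERP (e.g., missed events):
        "missing_in_erp": sorted(pico - erp),
        # Present in the ERP but unknown to Pico (e.g., manual entries):
        "missing_in_pico": sorted(erp - pico),
    }

report = reconcile(
    ["ERP-ORDER-1", "ERP-ORDER-2"],
    ["ERP-ORDER-2", "ERP-ORDER-3"],
)
```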
Security & Operations
What authentication is used?
See the Authentication Guide for details on how the Pico API is secured and how to obtain credentials for your integration.
How do I monitor API health?
There is no external way to monitor API health today. Subscription connection liveness and cursor-based replay (see Does Pico support event replay and recovery?) are the primitives your integration should rely on for continuity, but they are not a substitute for a dedicated health endpoint.
What data retention policies apply?
Retention policies are configured at the infrastructure level, not in the API. Operations include an archivedAt field indicating soft-delete status. Contact your Pico administrator for retention details.
Scope & Limitations
What is NOT supported via the Pico API?
| Feature | Status |
|---|---|
| Operation creation/updates | Not available (read-only) |
| Inventory balances | Not tracked |
| Inbound receipts | Not available |
| Inventory transfers | Not available |
| Quality inspection management | Not available |
| Machine maintenance | Not available |
| Production scheduling | Not available |
| Costing/pricing | Not available |
| Downtime tracking | Not available |
| Payroll timekeeping | Not available |
| Operator/station on completion events | Not exposed (only on noteEventsStream via NoteState) |
What integration responsibilities remain with the ERP?
| Responsibility | Owner |
|---|---|
| Inventory balances | ERP |
| Operation creation | Pico UI / ERP |
| Purchase orders | ERP |
| Sales orders | ERP |
| Financial transactions | ERP |
| Payroll/HR | ERP |
| Customer/vendor master | ERP |
| Production scheduling | ERP |
What assumptions should integrators avoid?
| Assumption | Reality |
|---|---|
| Pico maintains inventory balances | No - ERP responsibility |
| Operation orders can be cancelled after starting | No - only unstarted orders |
| Machine data is captured automatically | No - not in the API |
| Payroll-grade timekeeping is available | No - production time only |
| Downtime is tracked | No - not in API |
| Operations can be created via API | No - use Pico UI |
| Events are exactly-once | No - at-least-once; handle duplicates |
Implementation
What are the technical dependencies?
| Dependency | Requirement |
|---|---|
| Transport | Required for subscriptions — WebSocket (graphql-ws), Server-Sent Events via persistent HTTP client (text/event-stream), or webhooks |
| GraphQL client | Apollo Client, urql, graphql-ws, or similar (required only for WebSocket subscriptions) |
| Persistent storage | Store cursor positions for replay |
| Idempotent handlers | Handle duplicate events |
What GraphQL clients are compatible?
- Apollo Client (with `GraphQLWsLink` from `@apollo/client/link/subscriptions`)
- urql (with `@urql/exchange-graphql-ws`)
- graphql-ws (standalone)
- Relay (with subscription support)
What integration risks should I plan for?
| Risk | Mitigation |
|---|---|
| Event loss on disconnect (WebSocket/SSE only — not applicable to webhooks) | Cursor-based replay from checkpoint |
| Duplicate events | Idempotent handlers |
| Latency spikes | Monitor subscription lag |
Pico will notify integrators of any breaking schema changes well in advance of removing fields.
See Also
- Work Order Integration Guide - Step-by-step integration walkthrough
- Subscriptions Guide - Real-time streaming details
- Authentication Guide - How the API is secured
- Operation Type - Full operation reference
- OperationOrderComplete Type - Completion event structure