The customer's program manager opened the engagement with a question, not a brief: "Where does the bottleneck actually sit on an SDV launch, and what gets built against it first?" Then he answered it himself. The triage team carries roughly 5,000 active defect tickets across four lifecycle states. The bottleneck stage is time to develop fix, running at roughly 3× every other stage. He named the order in which the surfaces should land and told us what to build first.
This is a pilot built against named blockers. In most enterprise software engagements the vendor proposes the sequence and the customer accepts it or pushes back. Here the structure came from the customer side. The rest of this note explains what they asked for, what's being built, and what gates the engagement clears next.
The customer-defined sequence
The program manager named two surfaces and the order between them. Diagnostic triage first, as the minimum proof of value. Ticket analysis second, as the extension downstream into aftersales. Same underlying engineering ontology, two different audiences served by two different tools.
The PM's framing was direct: the first tool is the minimum proof of value; the second is the extension if the first lands. The path is the customer's, not the vendor's, and the sequencing is conditional on what the first surface demonstrates.
Where the bottleneck actually sits
The diagnostic challenge is volume multiplied by complexity. Across the active fleet, the cross-domain triage team tracks roughly 5,000 active defect tickets, distributed across four lifecycle states. The bottleneck isn't where most external observers assume. It is not find a bug, not reproduce a bug, not verify a fix. It is the stage in between: time to develop fix, averaging roughly 3× any other lifecycle stage.
The shape of the bar matters more than its scale. The bottleneck stage isn't find a bug; it is find a bug's cause across a software-defined vehicle's interconnected systems. That distinction is what the diagnostic tool addresses, and it is why the customer PM picked it as the minimum proof of value rather than something else. The leverage point chosen is the leverage point that exists.
What the engineer asks at the bottleneck
When a defect ticket lands, the triage engineer's question is some variant of:
"Which of the recent code changes that touch this signal path could have caused this DTC pattern on this VIN's specific configuration?"
Today, that requires walking the commit history of touched components, checking which architectural elements share the implicated signal, and ruling out unrelated changes by hand. A senior engineer's mental model is the integrating layer; the tool stack is fragmented underneath it.
The diagnostic tool compresses the "where do I even look?" phase by linking the diagnostic trouble code (DTC) to the specific configuration of the vehicle, keyed by its VIN (vehicle identification number), surfacing recent commits that touch the implicated signal path, and showing architecturally related faults that may share a root cause. The engineer starts with a ranked candidate set instead of an open-ended search. Search becomes evaluation.
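The shape of that compression can be sketched in a few lines. The sketch below is illustrative, not the tool's implementation: the data model (`Commit`, `touched_signals`, `days_ago`) and the scoring rule (overlap with the implicated signal path, tie-broken by recency) are assumptions chosen to show how an open-ended search becomes a ranked evaluation.

```python
from dataclasses import dataclass

@dataclass
class Commit:
    """Hypothetical record of a recent code change and the signals it touches."""
    sha: str
    touched_signals: set
    days_ago: int

def rank_candidates(implicated_signals: set, commits: list) -> list:
    """Keep only commits touching the implicated signal path; rank by
    overlap with that path (descending), then by recency (most recent first)."""
    candidates = [c for c in commits if c.touched_signals & implicated_signals]
    return sorted(
        candidates,
        key=lambda c: (-len(c.touched_signals & implicated_signals), c.days_ago),
    )

# Illustrative data: the DTC implicates two signals on this VIN's configuration.
commits = [
    Commit("a1f3", {"brake_torque_req"}, 12),
    Commit("b7c2", {"wheel_speed_fl", "brake_torque_req"}, 3),
    Commit("c9d4", {"hvac_fan_duty"}, 1),   # unrelated change, drops out
]
ranked = rank_candidates({"wheel_speed_fl", "brake_torque_req"}, commits)
# b7c2 (touches both implicated signals) ranks ahead of a1f3; c9d4 is filtered out.
```

The point of the sketch is the workflow shift named above: instead of walking the full commit history by hand, the engineer evaluates a short, ordered candidate list.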
The voices at the customer
Three operational groups at the customer shape what Error Inspector, the diagnostic tool, needs to look like.
The triage team. Tracks four lifecycle states across ~5,000 active defect tickets. The time-to-develop-fix stage averages roughly 3× every other stage. This is the cost basis for any meaningful improvement claim.
The vehicle testers. Most are not trained in CAN-bus tooling. Their direct ask: give me a button, I'll press it. A one-click capture, designed for a tester who isn't and shouldn't have to be a tooling specialist. This is a UX requirement, not a capability requirement, and the distinction matters because diagnostic depth is irrelevant if the data never gets captured at the line.
The data engineer. The customer-side data position is substantively ready: network-bus data and ADAS compute-module internals are available. The gating constraint is the AI governance review and the security supply chain. The mitigation path proposed by the engineer is a declaration of open-source software taken through the customer's existing procurement review. It is one of two named blockers on this engagement, and like the other it is procurement-engineering, not technology.
What the sequencing tests
The MVP-then-extension framing is a test of one premise: that compressing fix-development time at the engineering and triage layer is the right first move on an SDV launch. The ticket-analysis layer is the customer's named next step once the first surface delivers compression, extending the same engineering ontology into aftersales. Same Product Twin, different audience.
Two named blockers, both procurement-engineering, not technology
Two named blockers, both procurement-engineering rather than technology, gate the engagement before scale-up:
- AI governance and security supply chain. Data is ready; clearance to send it isn't. The customer's IT side is working a declaration-of-open-source-software path through procurement review. Ingestion starts once those gates clear.
- UX for line testers. Vehicle testers on the line are not CAN-bus tooling specialists and shouldn't have to be. Their ask is one-click capture. Engineering effort, not capability.
Both blockers are procurement-engineering work, not model work. That is a category-realistic place for a pilot of this shape to sit, and it is where the engagement sits today.
Program shape
| Program | SDV launch program |
|---|---|
| Customer-defined sequence | Diagnostic triage first (MVP) → ticket analysis (extension) |
| Active defect tickets | ~5,000 across the cross-domain triage team |
| Lifecycle bottleneck | Time to develop fix, 3× every other stage |
| Workflow shift | Open-ended search becomes ranked evaluation |
| Downstream audience | Global premium-SUV service network (orders of magnitude larger than warranty engineering) |
| Named blockers (open) | AI governance review · line-tester UX · both procurement-engineering, not technology |
| Voices shaping requirements | Triage team · vehicle testers · data engineering (framed by the program manager) |