Automotive OEM · multi-brand · Story 01

€20M+ a year, by shifting errors left on the SDV cost curve.


May 12, 2026

€20M+ annual savings

An integration error caught at architecture review costs almost nothing. The same error caught in the field, as a warranty claim, costs roughly a thousand times more. At a European automotive group's flagship electric platform, SPREAD's job is to shift errors leftward on that curve. The shift is worth €20M+ a year.

The instrument is an engineering ontology, live since 2024: one integrated graph over ARXML (AUTOSAR XML), ReqIF (Requirements Interchange Format), BoMs (bills of materials), and CAD (computer-aided design files). Three product applications sit on the graph: Product Explorer for the architect, Error Inspector for the quality engineer, Action Tower for the program manager. In SPREAD's vocabulary the graph is a Product Twin.

The cost curve in software-defined vehicle programs

There is a cost curve that runs through every software-defined vehicle (SDV) program. An error caught at architecture review costs almost nothing. The same error caught at HiL (hardware-in-the-loop) test costs ten times more. Caught after a physical build, a hundred times. Caught in the field as a warranty claim, a thousand times.

The job of an engineering ontology at a program of this scale is one thing: shift errors leftward on this curve.

What the cost curve actually looks like

Each step right on the curve is roughly an order of magnitude. The €20M+ in annual savings on the flagship electric platform is what happens when a program of this scale shifts a measurable share of its integration errors from the right side of the curve back to the left.

Architecture review — baseline (1×) — update a model
HiL test — ~10× — re-run a rig, sometimes re-spin hardware
Physical build — ~100× — unbuild and rebuild the vehicle
Field / warranty — ~1,000× — warranty claim, possibly a recall

Cost of fixing an integration error, by stage of detection. Each step right is roughly an order of magnitude.
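The arithmetic behind the curve can be sketched in a few lines. The stage multipliers are the rough orders of magnitude from the table above; the baseline cost and the error counts are hypothetical placeholders, not program data.

```python
# Illustrative model of the detection-stage cost curve. Multipliers follow
# the table above; baseline cost and error counts are made-up examples.
STAGE_MULTIPLIER = {
    "architecture_review": 1,
    "hil_test": 10,
    "physical_build": 100,
    "field_warranty": 1_000,
}

def total_fix_cost(errors_by_stage, baseline_cost=1_000):
    """Total cost of fixing errors, given the stage each was caught at."""
    return sum(
        count * STAGE_MULTIPLIER[stage] * baseline_cost
        for stage, count in errors_by_stage.items()
    )

# Before: a share of errors slips through to builds and the field.
before = {"architecture_review": 500, "hil_test": 300,
          "physical_build": 150, "field_warranty": 50}
# After: the same 1,000 errors, shifted leftward on the curve.
after = {"architecture_review": 800, "hil_test": 170,
         "physical_build": 28, "field_warranty": 2}

saving = total_fix_cost(before) - total_fix_cost(after)
```

With these invented numbers, the same error population costs roughly a tenth as much to fix once detection moves left; the leverage comes almost entirely from the field-stage errors that no longer occur there.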

The framing that put this in motion

The framing question came from the program's Delivery Manager:

"Can AI enable the group to master SDV complexity across all brands, built on one E/E and SW architecture?"Delivery Manager, flagship electric platform

The question doesn't ask whether AI can build a better architecture. It asks whether AI can make the existing electrical/electronic (E/E) architecture answerable to the engineering organization's questions, at the cadence those questions arrive.

What changes for the engineer

An architect puts a software change on the table for the powertrain variant. Before it ships, one question has to be answered:

"If I change signal X, what breaks across the platform family?"

The blast radius is real: multiple vehicle variants, plus the shared platform output to sister brands. Tens of millions of signal connections. Hundreds of ECUs (electronic control units).

Pre-Product-Explorer
  1. Open ARXML in the architecture tool
  2. Query ReqIF in the requirements tool
  3. Walk the BoM exported to Excel
  4. Cross-reference variants by hand
  5. Pin down the cross-domain colleague who knows
  6. Reconcile and document, for each affected brand
≈ days per query
With Product Explorer
  1. Issue the graph traversal
  2. Review the surfaced dependency set
  3. Confirm or escalate the residual
≈ minutes per query

A platform program produces thousands of these queries a year. Compressing each from days to minutes changes who can be in the room when a change is debated, and how late in development a regression is allowed to surface. It also yields roughly 10× engineer leverage on architecture work: one architect covers the territory that previously required ten, because the architect is no longer reconciling tool exports but querying a graph and interpreting the answers.
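The traversal behind "if I change signal X, what breaks?" can be sketched as a breadth-first walk over a dependency graph, under the assumption that the Product Twin maps each artifact to the artifacts that depend on it. The node names and edges below are hypothetical, not SPREAD's actual data model.

```python
from collections import deque

# Hypothetical slice of a dependency graph: each key maps to the
# artifacts that directly depend on it (signal -> ECUs -> variants).
DEPENDS_ON_ME = {
    "signal:torque_req": ["ecu:inverter", "req:REQ-0412"],
    "ecu:inverter": ["variant:performance", "variant:base"],
    "req:REQ-0412": ["variant:base"],
}

def blast_radius(changed, graph):
    """Breadth-first walk: everything transitively downstream of `changed`."""
    seen, frontier = set(), deque([changed])
    while frontier:
        node = frontier.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                frontier.append(dep)
    return seen

affected = blast_radius("signal:torque_req", DEPENDS_ON_ME)
```

On a graph with tens of millions of edges the same walk runs in seconds rather than the days of manual reconciliation it replaces; the engineer's remaining work is reviewing the surfaced set, not assembling it.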

Three personas, one Product Twin

The architect's question is one vantage. Two more sit on the same Product Twin.

Flagship electric platform Product Twin
ARXML × ReqIF × BoM × CAD · tens of millions of signals · hundreds of ECUs

Product Explorer — Architect — "What breaks if I change X?"
Error Inspector — Quality engineer — "What's escalating before it becomes a warranty claim?"
Action Tower — Program manager — "Where is each module on the maturity curve?"

The questions don't reduce to each other. The graph that answers them does. Ingestion of ARXML, ReqIF, BoMs, and CAD is paid once; each new product surface pulls value out of integration work already done.
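The pay-once economics can be made concrete with a small sketch: one ingested graph, three persona queries reading it. The node attributes and query logic here are invented for illustration; they stand in for whatever the real product surfaces compute.

```python
# Hypothetical ingested graph: one structure, shared by all three surfaces.
product_twin = {
    "mod:battery_mgmt": {"kind": "module", "maturity": "B-sample",
                         "open_errors": 3, "escalating": True},
    "mod:charging":     {"kind": "module", "maturity": "C-sample",
                         "open_errors": 0, "escalating": False},
}

def error_inspector(twin):
    """Quality engineer: what's escalating before it becomes a claim?"""
    return [name for name, attrs in twin.items() if attrs["escalating"]]

def action_tower(twin):
    """Program manager: where is each module on the maturity curve?"""
    return {name: attrs["maturity"] for name, attrs in twin.items()
            if attrs["kind"] == "module"}
```

Each new surface is a new function over `product_twin`, not a new ingestion pipeline; that is the sense in which the integration work is paid once.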

The footprint

R&D at the group isn't a single deployment. Four group entities ingest the same Product Twin: Group HQ, the passenger-vehicle entity, the software unit, and Aftersales. Five sub-workstreams run off the same ingestion: the flagship electric platform plus four adjacent platforms across R&D and aftersales. Each new sub-workstream is cheaper to stand up than the last because the engineering ontology doesn't replicate.

The customer procured all three product surfaces on the flagship platform. The ontology compounds across them, and that compounding is what the multi-product footprint reports.

Program shape: engineering intelligence

Want results like this? Map your numbers in 20 minutes.

See SPREAD's engineering platform map across PLM, CAD, ERP and ALM in a tailored 30-minute walkthrough.