The Compute Layer Underneath: How Cloud Spend Ate Photonic Sensing

Compute Infrastructure · Photonics · Long Read

The compute layer ate the photonics industry while nobody was looking.

A LiDAR sensor that cost seventy-five thousand dollars in 2015 now costs less than a thousand. The compute required to actually use the data that sensor produces — for training, simulation, mapping, and perception — has gone in the other direction. Cloud GPU spend is now the binding economic constraint on autonomous vehicle development, on commercial mapping platforms, and on the digital twin industry that photonic sensing was supposed to make possible. The industry is talking about the wrong cost curve.

§ 01 · The Wrong Cost Curve

The single most quoted statistic in the photonic-sensing industry is the LiDAR cost curve. A spinning Velodyne unit that ran $75,000 in 2015 has dropped to under $1,000 in commodity automotive packaging. Solid-state flash modules are targeting $200 at automotive scale. Defence-grade Geiger-mode systems still command six figures, but even those have seen meaningful unit economics improvements as InGaAs SPAD foundries have scaled. The headline conclusion every industry analyst draws from this trajectory is that LiDAR will become a commodity sensor over the next decade, the way image sensors did in the 2000s. That conclusion is probably correct as far as it goes.

The problem is that the LiDAR cost curve has stopped being the cost curve that matters. A modern autonomous vehicle company running multiple development fleets, large-scale simulation environments, neural network training pipelines, and high-definition map maintenance operations will spend more on cloud compute in a single quarter than it spends on photonic hardware in a year. A commercial mapping platform processing aerial LiDAR surveys for infrastructure clients will pay more for the GPU instances doing the photogrammetry and point cloud classification than it pays for the survey aircraft and the LiDAR units mounted on them. A digital twin company building city-scale 3D models from photonic and satellite data sources will spend the bulk of its operating budget on storage, training, and simulation compute — not on the sensors that produce the source data.

This shift has happened gradually enough that most coverage of the photonic sensing industry has not caught up to it. The trade press still treats LiDAR as a hardware story, with occasional gestures toward the AI software running on top of the data. The honest read of the industry in 2026 is that the photonic hardware has become the easy part. The hard part — the part that determines who actually wins in autonomous vehicles, in mapping, in defence ISR, in industrial automation — is the compute infrastructure underneath, the data pipelines that feed it, and the unit economics of running large-scale machine learning workloads on top of multimodal sensor streams. That layer has its own technology curve, its own supply chain, its own concentration risks, and its own emerging market dynamics. None of it gets adequately covered in the photonics trade press, which is what this article is trying to correct.

— The Cost Curves That Matter —

98% · LiDAR unit cost decline, 2015–2025 (commodity tier)

~7× · growth in cloud GPU compute spend per AV firm, 2020–2025

$166B · forecast AI sensor market by 2034 (43% CAGR)

3 · hyperscalers running nearly all of it underneath

Two curves moving in opposite directions, one supply chain at the bottom of both.

§ 02 · The Compute Stack Underneath Modern Photonic Sensing

A modern photonic sensing operation runs on an eight-layer compute stack that has accumulated over the past decade through hundreds of independent procurement decisions. None of it was designed as a coherent system. The layers interact with each other, interfere with each other, and fail in ways that none of the single-layer vendors will fully acknowledge. Understanding what is actually happening underneath requires a layer-by-layer read.

At the bottom of the stack sits the photonic hardware itself — the lasers, detectors, and optical assemblies that turn photons into electrical signals. This layer has been the focus of nearly all photonic sensing coverage. Above it sits the embedded compute layer that runs on the sensor itself or on the immediate platform — the FPGAs, the dedicated photonic SoCs, the small inference chips that handle real-time tasks like time-of-flight calculation, point cloud assembly, basic anomaly detection, and pre-classification before data leaves the sensor. This embedded layer is where the most interesting silicon work is happening right now, and where companies like Mobileye, Hailo, Ambarella, and a dozen others are quietly competing to define the architecture of next-generation perception modules.

Above the embedded layer sits the platform compute — the in-vehicle computer or the local edge node that aggregates data from multiple sensors, runs the perception stack, executes the control logic, and handles communication with the broader system. NVIDIA DRIVE platforms dominate this layer in automotive applications. Qualcomm Snapdragon Ride, NXP, and Renesas have meaningful positions. Tesla runs proprietary silicon. The platform layer is where the actual real-time decisions get made — whether to brake, whether to steer, whether to flag an object for further scrutiny — and the unit economics of this layer have become structurally tight as model complexity has grown.

Above the platform sits the data pipeline layer — the systems that ingest the firehose of sensor data, label it, store it, version it, and deliver it to wherever it needs to go for training. This is the layer where data engineering teams the size of small armies have been quietly built up at every serious autonomous vehicle company. Above it sits the training compute — the largest budget line for any modern photonic perception company — running on hyperscaler GPU clusters at scales that would have been considered science-fictional five years ago. Above the training layer sits the simulation compute, where companies generate synthetic sensor data to train and validate their perception stacks. And above all of it sits the deployment and observability layer that actually monitors the deployed systems in the field. Eight layers, three or four hyperscalers underneath all of them, and a set of unit economics that has stopped resembling the original photonics-hardware cost structure.

Layer · Function · Cost Trajectory
8 — Observability & OTA · Fleet monitoring, model updates, edge case capture · Rising; correlates with fleet size
7 — Simulation · Synthetic sensor data generation, scenario testing · Rising fast; the new battleground
6 — Training Compute · GPU clusters running perception model training · Largest budget line; rising sharply
5 — Data Pipeline · Ingestion, labelling, storage, versioning · Stable to rising; storage is the long tail
4 — Platform Compute · In-vehicle / edge perception & control · Rising; model complexity outpaces silicon
3 — Embedded Inference · Sensor-side ML, pre-classification · Falling per inference; rising per sensor
2 — Photonic Frontend · Lasers, detectors, optical assemblies · Falling sharply; the commodity story
1 — Substrate & Physics · Wafer fabs, materials, photolithography · Stable; capacity-constrained at premium nodes

The cost dynamics at each layer move differently. The aggregate effect is that the bottom layers commoditise while the upper layers absorb a rising share of the unit economics.

§ 03 · Simulation Is Where The Compute Actually Goes

The single most under-discussed cost line in the modern photonic perception industry is simulation compute. Every serious autonomous vehicle company, every commercial mapping platform, and every defence ISR contractor now runs large-scale simulation environments that generate synthetic sensor data at volumes far exceeding what their physical fleets could produce in a lifetime. The reason is straightforward: training a perception model on real-world data alone is impossibly slow, dangerous, and expensive. Edge cases — the rare scenarios that determine whether a system handles a one-in-a-million event correctly — are by definition rare. Generating them artificially in simulation is the only way to expose a perception model to enough variation to be trustworthy.

The compute economics of simulation are punishing. A high-fidelity sensor simulation that accurately models photonic behaviour — the way a real LiDAR pulse bounces off wet asphalt at a particular angle, the way a real camera handles direct sunlight through a windshield, the way a real radar return is corrupted by rain on a metal sign — is computationally expensive in a way that a simple game-engine visualisation is not. Companies like Waymo, Cruise, Wayve, Aurora, and Mobileye have spent the better part of a decade building proprietary simulation stacks specifically because off-the-shelf game engines do not produce sensor-accurate synthetic data, and the synthetic data is only useful for training if it accurately represents what the real sensor would have measured.
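
A rough sense of scale for why that fidelity is expensive: even a heavily simplified single-return LiDAR link budget has to be evaluated per emitted pulse, per surface interaction. One common simplified form for a Lambertian target (a sketch; production simulators layer beam divergence, multipath, and detector noise models on top of this):

$$
P_r = P_t \cdot \frac{\rho}{\pi} \cdot \frac{A_r}{R^{2}} \cdot \eta_{\text{sys}} \cdot e^{-2\alpha R}
$$

where $P_t$ is transmit power, $\rho$ the target reflectivity, $A_r$ the receiver aperture area, $R$ the range, $\eta_{\text{sys}}$ the combined optical and electronic efficiency, and $\alpha$ the atmospheric extinction coefficient. A game engine shades one sample per pixel; a sensor-accurate simulator evaluates something like this for every pulse in a scan pattern running at hundreds of thousands of pulses per second, with wet asphalt and rain entering through $\rho$ and $\alpha$ on a per-ray basis.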

The result is that simulation compute has become one of the largest single line items in autonomous vehicle company budgets, second only to direct training compute and ahead of physical fleet operating costs at most companies past Series C. NVIDIA DRIVE Sim, AWS RoboMaker, CARLA and the rest of the open-source ecosystem, and the major proprietary platforms collectively burn an enormous amount of GPU time. The standard industry practice has shifted toward running simulation on the same hyperscaler infrastructure as training, partly for data-locality reasons and partly because the workload patterns are similar enough that the same reserved GPU capacity can be amortised across both. None of this shows up in the headline cost-curve narratives the photonic sensing trade press tends to tell. All of it shows up in the actual operating expenses of every company in the industry.

The same dynamics apply with different specifics to commercial mapping platforms processing aerial and satellite LiDAR data. Photogrammetry, point cloud classification, change detection, and object extraction are all GPU-intensive workloads, and the volume of source data being processed has grown roughly with the cube of sensor resolution improvements over the past five years. A national LiDAR mapping programme that produced a few terabytes of point cloud data in 2018 now produces tens of petabytes annually, and every byte of that needs to pass through processing pipelines that are themselves significant cloud compute consumers. The compute layer is not a side cost of the photonic sensing industry. It is the industry, viewed through the operating-expense lens.
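
The storage arithmetic behind that growth is easy to reproduce. A back-of-envelope sketch in Python, with every acquisition parameter an illustrative assumption rather than a programme-specific figure:

```python
# Back-of-envelope annual raw point cloud volume for an aerial LiDAR programme.
# Every figure below is an illustrative assumption, not a measured value.
points_per_second     = 2_000_000   # pulse rate of a modern aerial scanner
bytes_per_point       = 30          # typical LAS point record size
survey_hours_per_year = 20_000      # fleet-wide on-sensor acquisition time

annual_bytes = points_per_second * bytes_per_point * survey_hours_per_year * 3600
print(f"~{annual_bytes / 1e15:.1f} PB/year raw acquisition")
```

Raw acquisition alone lands in the petabyte range; classification outputs, derived rasters, and versioned reprocessing runs are what multiply the total into the tens of petabytes, and every one of those bytes transits a GPU-backed processing pipeline.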

A modern autonomous vehicle company spends more on simulation compute generating synthetic LiDAR data than it spends buying actual LiDAR sensors. The hardware story is not the cost story any more.

PRINCETON LIGHTWAVE REVIEW · EDITORIAL

§ 04 · The Real Cost Economics of an AV Company in 2026

The interesting way to read autonomous vehicle company financials is to look at where the cloud spend actually goes. Public disclosures are limited, but a combination of leaked numbers, hyperscaler partnership announcements, and industry-standard estimation models produces a fairly consistent picture across the well-funded companies in the category. Training compute typically runs 35 to 50 percent of total cloud spend. Simulation compute runs another 20 to 30 percent. Data pipeline storage and processing absorbs 15 to 25 percent. Observability, OTA infrastructure, and miscellaneous workloads make up the rest. The aggregate cloud bill runs from the low tens of millions of dollars per quarter at a top-tier AV company to well into nine figures annually for the largest players, with significant year-over-year growth as model complexity and fleet sizes both expand.

For comparison, a typical modern AV development fleet of two hundred vehicles, each fitted with multiple LiDAR units, cameras, radars, and IMUs, represents a hardware bill of perhaps fifteen to twenty million dollars all-in. The annual cloud bill at the same company will frequently exceed that number by a factor of five or more. The hardware is essentially a one-time capital expense; the cloud bill is a recurring operating expense that scales with development velocity, model size, and simulation complexity. Compounded across the industry, the photonic sensing supply chain produces somewhere on the order of a few billion dollars in annual hardware revenue, and the autonomous vehicle development industry pays multiples of that to the major hyperscalers every year just to make use of what those sensors produce.
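
The fleet-versus-cloud comparison reduces to a few lines of arithmetic. A sketch using assumed figures drawn from the ranges quoted above:

```python
# One-time fleet hardware capex versus recurring annual cloud opex.
# Per-vehicle hardware cost and quarterly cloud spend are assumptions
# within the ranges quoted in the text.
fleet_vehicles = 200
hardware_per_vehicle = 90_000          # sensors, compute, integration, all-in
fleet_capex = fleet_vehicles * hardware_per_vehicle   # one-time

cloud_per_quarter = 25_000_000         # "low tens of millions per quarter"
cloud_opex_per_year = 4 * cloud_per_quarter

print(f"fleet hardware: ${fleet_capex / 1e6:.0f}M one-time")
print(f"cloud compute:  ${cloud_opex_per_year / 1e6:.0f}M per year "
      f"({cloud_opex_per_year / fleet_capex:.1f}x the hardware bill, every year)")
```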

The cost dynamics inside commercial mapping platforms are different in detail but similar in shape. A commercial aerial mapping company operating a fleet of survey aircraft will have hardware costs concentrated in the aircraft, the LiDAR and imaging payloads, and the ground equipment for survey planning and aircraft maintenance. The processing of the resulting data — photogrammetric reconstruction, point cloud classification, semantic labelling, integration with existing GIS platforms — is overwhelmingly cloud-based and represents a larger share of the company’s total operating cost than the physical assets themselves. Defence ISR contractors face similar economics, with the additional complication that classified workloads cannot run on commercial cloud infrastructure and have to be supported on government-cloud equivalents at significantly higher unit costs.

Cost Category · Share of AV Cloud Spend · Primary Driver
Perception model training · 35–50% · GPU cluster hours; model size scaling
Sensor simulation · 20–30% · Synthetic data volume; scenario coverage
Data storage & pipelines · 15–25% · Fleet data ingestion; retention policies
Map maintenance · 5–10% · HD map processing; change detection
Observability & OTA · 3–7% · Fleet size; update frequency
Misc / engineering · 5–10% · Internal tools, dashboards, dev infra

Composite ranges based on industry estimation models, leaked figures, and hyperscaler partnership announcements. Specific company allocations vary significantly based on fleet size, simulation strategy, and proprietary infrastructure investment.

§ 05 · Three Hyperscalers Underneath Almost Everything

The compute infrastructure underneath the photonic sensing industry is concentrated to a degree that surprises people new to the space. Roughly three quarters of all autonomous vehicle development workloads run on one of three hyperscalers: Amazon Web Services, Microsoft Azure, or Google Cloud Platform. The remainder is split between proprietary infrastructure built by the largest players (Waymo running primarily on Google Cloud given the corporate parent, Tesla running substantial proprietary hardware), Oracle Cloud for specific workloads with regulatory or pricing advantages, and a long tail of specialised GPU-cloud providers like CoreWeave, Lambda, and Crusoe that have grown rapidly during the AI infrastructure buildout.

Each hyperscaler has positioned itself differently for the photonic perception workload. Google Cloud has emphasised its tensor processing units and its native integration with Google’s extensive geospatial data assets, making it a natural fit for mapping-heavy workloads. AWS has emphasised its breadth of GPU instance types, its RoboMaker simulation service, and its mature data pipeline infrastructure, making it the default choice for many AV companies that prioritise operational tooling. Microsoft Azure has emphasised its OpenAI partnership, its enterprise compliance posture, and its strong defence and government cloud presence, making it well-suited to ISR and dual-use applications. The differences matter at the margin, but the structural reality is that any photonic perception company at scale ends up running on one of the three, often on more than one of them simultaneously, and increasingly with multi-cloud architectures designed to manage concentration risk.

The concentration produces a specific set of operational dynamics that are worth understanding. The hyperscalers offer significant discounts in exchange for prepaid annual commitments — reserved instances, committed use discounts, or enterprise agreements that lock in pricing in exchange for spend guarantees. A photonic perception company committing to fifty million dollars of annual GCP spend gets materially better effective per-instance pricing than one paying month-to-month. The discount math creates pressure to forecast spend optimistically and over-commit in order to qualify for the largest discount tier, which produces a different problem at the end of the commitment period: companies routinely end the year sitting on significant unused commitment balances that cannot be carried forward or refunded by the hyperscaler.
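
The over-commitment trap is easiest to see in a toy model. A minimal sketch, assuming a simple prepaid structure in which unused balance expires worthless and overflow is billed at retail; the tiers and discount rates are invented for illustration, not any provider's actual pricing:

```python
# Toy model of the prepaid-commitment trade-off.
def effective_cost(commit, usage, discount):
    prepaid = commit * (1.0 - discount)    # paid up front, regardless of usage
    overflow = max(0.0, usage - commit)    # consumption beyond commit: retail
    return prepaid + overflow

usage = 40e6  # dollars of capacity the company actually consumes in the year
for commit, discount in [(30e6, 0.35), (50e6, 0.45), (70e6, 0.55)]:
    cost = effective_cost(commit, usage, discount)
    print(f"commit ${commit / 1e6:.0f}M at {discount:.0%} off "
          f"-> effective ${cost / 1e6:.1f}M")
```

In this toy case the middle tier wins: the largest discount tier costs more in absolute terms because twenty million dollars of face value expires unused, which is precisely the end-of-period balance problem described above.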

This dynamic is becoming meaningful enough at the industry level that it has produced its own emerging market response. Marketplaces have begun to appear that match buyers and sellers of unused cloud commitments directly, allowing companies with leftover capacity to recover value that would otherwise expire and allowing companies running into their commitment ceiling to buy cloud credits at a discount to retail pricing. The economic logic is the same as in any other secondary market for prepaid enterprise services: when a meaningful percentage of contracted capacity routinely goes unused while another segment of the market is paying full retail at the margin, a marketplace will emerge to clear the inefficiency. For photonic perception companies running tight unit economics on simulation and training compute, the secondary credit market has become a real procurement consideration alongside the standard hyperscaler negotiation cycle.
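
The clearing arithmetic is correspondingly simple. A sketch with a hypothetical resale price; real marketplace spreads vary with credit type, provider terms, and time to expiry:

```python
# Secondary-market clearing for an unused commitment balance, in miniature.
face_value = 10e6   # unused balance that would otherwise expire worthless
resale = 0.80       # assumed price per dollar of face value

seller_recovers = face_value * resale           # versus $0 on expiry
buyer_saves = face_value * (1.0 - resale)       # versus paying full retail
print(f"seller recovers ${seller_recovers / 1e6:.1f}M; "
      f"buyer saves ${buyer_saves / 1e6:.1f}M")
```

Both sides come out ahead of their alternative, which is why the structural dynamic persists independently of any individual marketplace.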

Hyperscaler · Photonic Workload Strengths · Notable Customers
AWS · RoboMaker simulation, broadest GPU portfolio, mature data tooling · Mobileye, Aurora, Cruise (historical)
Google Cloud · TPU performance, geospatial data integration, mapping pipelines · Waymo, various AV simulation workloads
Microsoft Azure · Enterprise compliance, defence cloud, OpenAI integration · Defence ISR contractors, dual-use applications
Specialised GPU clouds · Bare-metal H100/B200 access, lower per-hour pricing · CoreWeave, Lambda, Crusoe customers
Proprietary infrastructure · Direct silicon control, custom interconnects, vertical integration · Tesla (Dojo), large-scale incumbents

Customer attributions reflect known historical relationships and publicly disclosed partnerships. Most photonic perception companies at scale operate multi-cloud architectures rather than committing exclusively to a single provider.

§ 06 · The Dual Cost Curve: Edge Inference vs. Cloud Training

The most important architectural decision in any modern photonic perception system is which workloads run on the edge and which run in the cloud. The trade-off is not subtle. Edge inference — running the perception model directly on the in-vehicle compute platform — has the obvious advantages of low latency, no network dependency, and no per-inference cloud cost. The disadvantages are equally obvious: the model running at the edge is necessarily smaller, less capable, and more expensive per unit of silicon than the equivalent model would be running in the cloud. Cloud-based perception — sending sensor data over a network to be processed remotely — allows for arbitrarily complex models running on optimal hardware, but introduces latency, network dependency, and recurring operating cost.

Most production autonomous vehicle systems split the workload along functional lines that have stabilised over the past several years. Real-time safety-critical perception — the sub-100ms decisions about whether to brake, steer, or accelerate — runs on edge silicon, because the latency budget makes anything else infeasible. Higher-level reasoning — route planning, behavioural prediction over multi-second horizons, traffic flow analysis — can sometimes run partially in the cloud, particularly for L4 robotaxi systems operating in geofenced areas with reliable connectivity. Training, simulation, fleet learning, and map updates run almost entirely in the cloud because the compute requirements simply cannot be supported at the edge.
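
The functional split reduces, in caricature, to a placement rule keyed on the latency budget. A minimal sketch; the round-trip figure and the example workloads are assumptions for illustration:

```python
# Illustrative edge/cloud placement rule for perception workloads.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_budget_ms: float   # hard deadline for a useful result
    safety_critical: bool

CLOUD_RTT_MS = 150.0  # assumed network round trip plus remote inference time

def placement(w: Workload) -> str:
    if w.safety_critical or w.latency_budget_ms < CLOUD_RTT_MS:
        return "edge"    # the deadline cannot survive a network round trip
    return "cloud"       # latency-tolerant: run the larger model remotely

for w in (Workload("emergency braking", 50, True),
          Workload("behaviour prediction", 2_000, False),
          Workload("HD map change detection", 60_000, False)):
    print(f"{w.name:25s} -> {placement(w)}")
```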

The interesting cost dynamic is that both ends of this split are running into capacity walls simultaneously. Edge silicon is hitting the limit of what current automotive-qualified processors can run within thermal and power budgets — the next generation of perception models is genuinely larger than the previous generation, and silicon scaling has not kept pace with model size growth. Cloud compute is hitting the limit of what enterprise budgets can absorb — the major AV companies have all reported that cloud spend is now a board-level operating cost discussion, and the discount tiers that used to cushion this cost have already been negotiated to their floors. The industry is entering a phase where neither the edge nor the cloud can simply absorb model complexity growth at the current trajectory, and architectural creativity will determine which companies handle the transition gracefully.

Several approaches to the dual cost curve have emerged. Model distillation — training large cloud models and then producing smaller specialised versions for edge deployment — has become standard practice. Mixed-precision inference and quantisation reduce edge compute requirements at modest accuracy cost. On-device caching of inference results for repeated scenarios reduces the actual rate of model evaluation. Federated learning and on-device personalisation distribute some of the training workload to the edge, although the privacy and data governance implications make this approach suitable only for specific use cases. None of these techniques has fully solved the problem. All of them are part of the operational architecture that any serious photonic perception company has to invest in if it intends to scale.
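
Of those techniques, distillation is the most codified. A minimal sketch of the standard distillation loss in PyTorch; the temperature and weighting values are conventional defaults, not anyone's production settings:

```python
# Knowledge distillation: a large cloud-trained teacher supervises a small
# edge-deployable student through temperature-softened output distributions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Hard term: the student still learns from ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft term: the student matches the teacher's softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2   # rescales soft-term gradients to stay comparable across T
    return alpha * hard + (1.0 - alpha) * soft

# Toy shapes: batch of 8 samples, 10 object classes.
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student, teacher, labels))
```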

§ 07 · What The Photonics Trade Press Is Missing

The first thing the photonics trade press is missing is the magnitude of the spend shift. Most coverage of the industry treats photonic hardware as the centre of gravity, with software and compute as supporting layers. The financial reality has inverted: in autonomous vehicles, in commercial mapping, in defence ISR, and in industrial automation, the compute layer now absorbs more capital than the hardware layer. Coverage that does not reflect this is, increasingly, describing the wrong industry. The interesting questions about photonic sensing in 2026 are mostly about the compute infrastructure underneath, not the photonic hardware on top.

The second thing missing is the concentration risk in the underlying supply chain. The photonic hardware industry has historically discussed supply chain risk in terms of GaAs wafer availability, InGaAs SPAD foundry capacity, and high-power 1550nm laser sources. Those risks remain real. But the more consequential supply chain risk facing the industry is the concentration of GPU compute capacity at a small number of hyperscalers, all of whom are themselves competing for capacity from a single dominant chip vendor. A photonic perception company that has secured its laser supply, its detector supply, and its silicon supply is still one hyperscaler-pricing-decision away from having its unit economics rewritten unilaterally. That risk does not show up in conventional supply chain analyses because conventional supply chain analyses treat cloud compute as a service rather than as a critical input.

The third thing missing is the role of secondary markets in managing cloud commitment risk. The combination of optimistic forecasting, prepaid commitment discounts, and end-of-period unused balances has created a real inefficiency that the industry has only recently started to address through marketplace mechanisms. The photonics trade press, fixated on hardware narratives, has not yet noticed that procurement teams at major photonic perception companies now routinely participate in cloud credit secondary markets as part of their cost management process. The marketplaces themselves are still small, but the structural dynamic that supports their existence is large and growing. Coverage that ignores this misses one of the more interesting commercial developments at the intersection of photonic perception and infrastructure economics.

The fourth thing missing is the regulatory dimension. As autonomous vehicles approach broader deployment, defence ISR systems become more capable, and digital twin platforms accumulate detailed records of urban environments, the regulatory frameworks governing what photonic systems can do, what data they can retain, and how that data can be processed are becoming binding constraints on system design. The compute infrastructure decisions that look purely technical — where data is stored, what regions it is processed in, which classifications of data can run on which clouds — are increasingly regulatory decisions in technical clothing. The photonics trade press, more comfortable with optical engineering than with data governance law, has been slow to integrate this dimension into its coverage. The industry would benefit from journalism that does.

§ 08 · What Comes Next For the Compute Layer

The compute layer underneath photonic sensing is going to keep absorbing capital faster than the hardware layer for the foreseeable future. Every major trend in the industry — larger perception models, more comprehensive simulation, multi-modal sensor fusion, fleet learning at scale, real-time map updates, autonomous defence platforms — pushes compute requirements upward. The trend lines in semiconductor performance, in algorithmic efficiency, and in cloud unit pricing are all moving in the right direction, but none of them are moving fast enough to offset the demand growth from the workloads themselves. Net spend per company has been rising and is likely to keep rising through at least 2028.

The structural implications are worth thinking through. First, the competitive moat in the photonic perception industry is increasingly the compute moat, not the hardware moat. Companies that have figured out how to run training and simulation efficiently — through architectural choices, vendor negotiations, secondary market participation, or proprietary infrastructure investment — have a real cost advantage that compounds over multiple model generations. Second, the consolidation pressure on the industry will be driven by compute economics as much as by sensor economics. Smaller AV companies and mapping platforms will run out of money on cloud bills before they run out of money on hardware, and the M&A landscape will reflect that. Third, the next generation of breakout companies in photonic perception will probably look more like infrastructure plays than like pure photonics plays. The companies that own the compute layer will own the industry.

None of this is a reason for pessimism about the photonics industry as a whole. The sensing capability is real, the addressable markets are large, and the underlying technology trajectory is one of the most impressive in any segment of advanced manufacturing. The point is that the centre of gravity has moved. The next decade of photonic perception will be defined less by who builds the best LiDAR sensor and more by who can run the most useful operation on top of the data that LiDAR sensors collectively produce. The trade press, the analyst community, and the investment community would all benefit from coverage that reflects where the industry actually is rather than where it was five years ago.

Frequently Asked Questions: The Compute Layer Underneath Photonic Sensing

1. Why is cloud compute now a bigger cost than LiDAR hardware for AV companies?

Because the LiDAR cost curve has dropped roughly 98 percent over the past decade while the perception model training compute curve has been rising sharply. A modern AV company runs continuous training, large-scale simulation, fleet learning, and HD map maintenance, all of which scale with development velocity rather than fleet size. The aggregate effect is that hardware spend is a one-time capital expense and cloud spend is a recurring operating expense that has grown into the dominant line item.

2. What is sensor simulation and why does it consume so much compute?

Sensor simulation generates synthetic data that mimics what a real LiDAR, camera, or radar would produce in scenarios the real fleet has not encountered. High-fidelity sensor simulation has to model photonic physics accurately — how a LiDAR pulse bounces off wet asphalt, how a camera handles direct sun, how a radar return is corrupted by rain on metal — which is computationally expensive in a way that ordinary game-engine rendering is not. The compute requirements are large because the volume of synthetic data needed to train a robust perception model is enormous.

3. Which hyperscalers run the photonic perception industry?

Roughly three quarters of autonomous vehicle development workloads run on AWS, Google Cloud, or Microsoft Azure. The remainder is split between proprietary infrastructure (Tesla Dojo, Waymo on Google Cloud given the corporate parent), specialised GPU clouds (CoreWeave, Lambda, Crusoe), and government cloud variants for classified workloads. Most companies operate multi-cloud rather than committing to a single provider.

4. How big is the AV industry’s cloud bill?

A top-tier AV company runs cloud spend in the low tens of millions of dollars per quarter at minimum, scaling to nine figures annually for the largest players. Specific numbers are rarely disclosed publicly, but industry estimation models combining hyperscaler partnership announcements, leaked figures, and inferred infrastructure size produce reasonably consistent ranges across observers.

5. What is edge inference?

Edge inference is running a perception model directly on the in-vehicle or on-device compute platform rather than sending data to the cloud for processing. Edge inference is required for safety-critical real-time decisions because cloud round-trip latency is too high. The trade-off is that edge silicon constrains the size and complexity of the model that can run; cloud compute does not.

6. Why split workloads between edge and cloud?

Because each environment has structural advantages and limitations. Edge inference is fast and network-independent but constrained in model size. Cloud compute supports arbitrarily large models but introduces latency and network dependency. Production photonic perception systems split workloads functionally: real-time safety on edge, training and simulation in cloud, and various intermediate workloads distributed based on the latency budget and compute requirements.

7. What is model distillation and why does it matter?

Model distillation is the process of training a large model in the cloud and then producing a smaller specialised version that runs on edge silicon at acceptable accuracy. It has become standard practice in autonomous vehicle perception because it lets companies use cloud-scale models for training while still deploying edge-feasible models for production. The accuracy gap between the cloud teacher model and the edge student model is one of the active research areas in the field.

8. How do reserved instance commitments work for AV companies?

Hyperscalers offer significant discounts (typically 40 to 70 percent off retail pricing) in exchange for prepaid annual or multi-year commitments to specific compute capacity. Reserved instances, committed use discounts, and enterprise agreements are the main vehicles. The trade-off is that the commitment is locked in regardless of whether the company actually consumes the capacity, which creates a forecasting problem.

9. What happens to unused cloud commitments?

Historically they expired worthless. The hyperscalers do not refund or credit forward unused commitment balances under standard contracts. The economic inefficiency this creates — companies routinely ending their commitment period 20 to 40 percent under their committed capacity — is what produced the secondary market for cloud credits over the past few years.

10. What are cloud credit marketplaces?

Marketplaces that match buyers and sellers of unused cloud commitment balances directly. A company sitting on unused GCP, AWS, or Azure capacity can list it for sale at a discount; another company that has run into its commitment ceiling can purchase it below retail pricing. The marketplaces structure the transactions to respect the underlying provider terms and verify the legitimacy of the credits being transferred.

11. Are cloud credit secondary markets compliant with hyperscaler terms?

It depends on the provider and the structure. Some hyperscalers explicitly permit account-level credit transfers under specific conditions; others require approval; others prohibit resale entirely. Reputable marketplaces structure their transactions to fit within whatever the underlying provider terms allow, but companies participating should verify the specific arrangement before entering a transaction.

12. How does GPU shortage affect the photonics industry?

Significantly. NVIDIA H100 and B200 capacity has been chronically constrained for the past several years, and major AV companies have reported delays in training cycles due to GPU availability. Specialised GPU clouds like CoreWeave have grown rapidly partly as a hedge against capacity issues at the major hyperscalers. The supply chain risk for advanced GPUs is now one of the meaningful constraints on photonic perception development.

13. What is a digital twin and how does it relate to photonic compute?

A digital twin is a three-dimensional, data-rich simulation of a real physical environment, typically built from photonic sensor data (LiDAR, photogrammetry, satellite imagery) and updated continuously. Digital twin platforms are major consumers of cloud compute because the underlying point clouds, the simulation engines that operate on them, and the AI models that interpret them all run on hyperscaler infrastructure at significant scale.

14. Is Tesla’s Dojo a real alternative to hyperscaler infrastructure?

In a narrow sense, yes — for Tesla’s specific workloads. Dojo is purpose-built silicon optimised for video-based perception training, and it gives Tesla supply chain independence and unit-cost advantages that pure hyperscaler customers do not have. The model does not generalise easily; building proprietary training silicon requires capital and expertise at a scale only the largest companies can sustain.

15. What about defence ISR — do those workloads run on commercial cloud?

Mostly no. Classified workloads run on government cloud variants — AWS GovCloud, Azure Government, Google Cloud for Government — that are physically and logically segregated from commercial infrastructure. The unit economics on government cloud are typically materially higher than commercial cloud, which is one of the reasons defence ISR contractor cost structures look different from commercial AV companies.

16. How does sensor data labelling fit into the cost picture?

It is one of the larger hidden costs. Raw sensor data has to be labelled before it can be used for supervised learning, and labelling for 3D point clouds is more expensive than labelling for 2D images. Some labelling can be automated through self-supervised techniques, but high-quality labels still require human review at scale. Major AV companies maintain large labelling operations either internally or through contracted vendors, and the labelled data they accumulate is one of their most defensible assets.

17. What is fleet learning?

Fleet learning is the practice of continuously improving a perception model by feeding back data from the deployed fleet into the training pipeline. Edge cases encountered in production are flagged, uploaded, processed, and used to update future model versions. The continuous data flow from a deployed fleet of vehicles is one of the things that makes mature AV companies hard to compete with, but the cloud infrastructure required to handle it is also one of their largest cost lines.

18. How concentrated is the GPU supply chain?

Extremely. NVIDIA holds the dominant position in AI training GPUs by a significant margin, with AMD as a credible but smaller alternative and a long tail of specialised chips for specific workloads. Below the GPU layer, TSMC manufactures most leading-edge silicon, and ASML is the sole supplier of the EUV lithography machines TSMC depends on. The aggregate supply chain therefore has at least three single points of failure (design, fabrication, lithography) that any company building a long-term photonic perception strategy needs to model carefully.

19. Will custom silicon for photonic perception become widespread?

Probably yes for the largest companies and probably no for the long tail. Companies like Mobileye, Hailo, Ambarella, and the AV-focused programs at Qualcomm and NVIDIA are producing increasingly specialised silicon for photonic perception workloads. The investment required to design and tape out custom silicon is large enough that only well-capitalised players can sustain it. Smaller companies will continue to use commercially available silicon and compete on the layers above.

20. What does the next five years look like for photonic compute?

Compute spend will continue to outpace hardware spend in the photonic perception industry. Edge silicon will get more capable but not fast enough to absorb model complexity growth without architectural innovation. Hyperscaler concentration will remain a structural risk, partly mitigated by specialised GPU clouds and proprietary infrastructure. Secondary markets for cloud commitments will mature into a routine part of procurement. The companies that win the next phase of the industry will be the ones that treated compute infrastructure as a first-class strategic concern rather than as a service to be procured.

Sources · Further Reading

AI Sensors Report: Analysis on the Market, Trends, and Technologies, TrendFeedr, January 2026.

Global AI Sensor Market Forecast 2024–2034, Market.us, 2025.

Princeton Lightwave Review’s previous coverage on detection architecture comparisons, photonic supply chain dynamics, and the consumer-electronics ToF repositioning is referenced throughout this analysis.

— Editor’s Note —

On reading the photonics industry through its compute layer.

The photonic sensing industry has spent the past decade telling its story primarily as a hardware story — the falling cost of sensors, the rising performance of detectors, the emerging architectures that promise smaller and cheaper modules at every product cycle. That story is true and important, but it has stopped being the most useful frame for understanding where the industry actually is in 2026. The cost centre of gravity has moved up the stack, the strategic moats have moved with it, and the next phase of competitive dynamics will be determined as much by compute infrastructure decisions as by photonic engineering decisions.

Princeton Lightwave Review remains editorially independent. We have no commercial relationship with any of the hyperscalers, GPU vendors, autonomous vehicle companies, photonic hardware vendors, or marketplace operators referenced in this analysis. The framings, interpretations, and structural reads in this article are our own. Readers making investment, procurement, or operating decisions on the basis of this analysis should treat it as a starting framework rather than a substitute for direct due diligence on the specific vendors, contracts, and technical architectures involved.
