
Consumer Electronics · Depth Sensing · 2026 Outlook
The Quiet Repositioning of 3D Sensing in Consumer Electronics: Where ToF Actually Stands in 2026
Five years ago, depth cameras were going to be everywhere. Every flagship phone, every tablet, every pair of glasses. The reality has turned out stranger — and more interesting. Apple quietly wound down the role of Time-of-Flight sensing in its iPhone lineup after the iPhone 14 Pro. Most Android flagships that shipped ToF in 2020 no longer do. And yet the total volume of ToF shipments keeps climbing, driven by categories almost nobody was talking about in 2021. This is a report on where 3D sensing actually lives in consumer hardware today, why the hype cycle broke, and what replaces it.
The Shape of the Market, Honestly
You will not find a shortage of industry reports claiming that Time-of-Flight sensing is about to conquer the smartphone. Most of them are written by component manufacturers who have a catalogue of ToF modules to sell. The real picture, looking across flagships shipped between 2020 and 2026, is more specific. ToF has a handful of genuine sweet spots, a handful of categories where it lost decisively to alternative approaches, and a quietly expanding long tail of industrial and robotic applications, which is where the volume growth is actually coming from.
The basic physics is simple. A ToF sensor emits a pulse or a modulated wave of light — typically near-infrared at 850 nm or 940 nm — and measures the time or phase shift of the returned signal. From that, it calculates distance. Build an array of those pixels and you get a depth map of the scene. Compared to structured-light systems (which project a known pattern and infer depth from its deformation) and stereo-vision systems (which triangulate from two cameras), ToF promises simpler optics, faster frame rates, and better performance in low light. Those advantages are real. They are also not, in every application, sufficient.
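The distance arithmetic described above is simple enough to sketch directly. A minimal illustration of both measurement styles — round-trip time and phase shift — with hypothetical values rather than figures from any real module:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dtof_distance(round_trip_s: float) -> float:
    """Direct ToF: half the measured round-trip time, times the speed of light."""
    return C * round_trip_s / 2.0

def itof_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """Indirect ToF: distance recovered from the phase shift of a
    continuous wave modulated at mod_freq_hz."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# A 10 ns round trip corresponds to roughly 1.5 m.
print(round(dtof_distance(10e-9), 3))
# A half-cycle phase shift at 100 MHz modulation lands near 0.75 m.
print(round(itof_distance(math.pi, 100e6), 3))
```

Real sensors perform this computation per pixel in hardware, with multiple modulation frequencies and calibration corrections layered on top; the formulas are the core of it.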
The Three 3D Sensing Technologies, Side by Side
Anyone evaluating 3D sensing for a consumer product is really choosing between three families of technology. Each has a distinct profile of strengths and costs, and the choice is rarely as clean as a single spec-sheet comparison suggests.
| Technology | How It Works | Best At | Worst At |
|---|---|---|---|
| Time-of-Flight (ToF) | Measures light’s round-trip time or phase shift | Medium range (0.3–5m), fast frame rates, low light | Close-range sub-millimetre precision; bright sunlight |
| Structured Light | Projects a known pattern, reads deformation | High-accuracy short-range depth (face ID, 0.1–1m) | Outdoor use, range beyond ~1.5m |
| Stereo Vision | Triangulates from two or more cameras | Outdoor use, passive operation, long range | Featureless surfaces, low light without IR illuminator |
This is the core of why the smartphone ToF story unfolded the way it did. For the specific job of face authentication at arm’s length, structured light is simply more accurate — and Apple’s Face ID system has used a structured-light dot projector since iPhone X, not a ToF sensor. For photographic bokeh, computational approaches using dual cameras and neural networks have closed most of the quality gap that ToF was supposed to solve. For outdoor AR, stereo vision with neural depth refinement has proven more robust than ToF under direct sunlight, where ambient infrared swamps the sensor.
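For contrast with ToF's direct time measurement, stereo vision recovers depth by triangulation from two viewpoints. A minimal sketch, using invented camera numbers, also shows why featureless surfaces break stereo — no disparity can be matched:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth: Z = f * B / d. Fails when no disparity can be
    measured, which is exactly the featureless-surface weakness noted above."""
    if disparity_px <= 0:
        raise ValueError("no valid disparity (featureless or unmatched region)")
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 6 cm baseline, 30 px of disparity -> 1.4 m of depth.
print(stereo_depth(700.0, 0.06, 30.0))
```

The formula also makes the long-range advantage visible: depth error grows with the square of distance for stereo, but a wider baseline pushes that degradation further out, which a passive outdoor rig can afford and a phone cannot.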
Apple’s Quiet Pullback and What It Signalled
Apple introduced a rear-facing LiDAR scanner on the iPad Pro in early 2020 and brought it to the iPhone 12 Pro line later that year. The marketing framed it as a foundation for augmented reality — faster autofocus in low light, better portrait photography, and spatial mapping for AR apps. For several product cycles, the LiDAR module was a standard feature of the Pro-tier iPhone. Developers received a new set of APIs for room-scale scanning. Third-party apps appeared for home measurement, object capture, and accessibility.
The adoption curve for consumer-facing use cases, however, was flatter than Apple had hoped. AR measurement apps turned out to be a one-time novelty for most users. Object-scanning workflows remained the domain of professionals in specialised fields — estate agents, industrial inspection, accessibility research — rather than mass-market features. Vision Pro, Apple’s spatial computing headset, ultimately relied on a different sensing stack rather than porting the iPhone’s LiDAR architecture wholesale. By the iPhone 15 Pro launch cycle, Apple had begun a quiet walk-back, and rumours circulating through the supply chain suggested the module’s future on the iPhone was not secure.
The Lesson of the iPhone LiDAR Experiment
Adding a sensor to a flagship phone is easy. Building a software ecosystem that makes ordinary users care about it is enormously hard. ToF on phones turned out to be a classic case of hardware running ahead of a killer application. The sensor worked. The features it enabled were technically impressive. But “technically impressive” and “something a user will pay a premium for” are different thresholds, and the latter was the one that mattered.
The Android Story: Rise, Retreat, and Selective Return
The Android flagship response to Face ID and Apple’s LiDAR initiative was a scramble. Samsung, Huawei, LG, Honor, Sony, and several Chinese OEMs shipped ToF sensors in their top-tier 2019 and 2020 devices. The Samsung Galaxy S10 5G, Galaxy S20+, and Note 10+ all carried dedicated rear ToF modules. Huawei’s P30 Pro and P40 Pro included ToF. LG’s G8 ThinQ attempted front-facing ToF for hand-wave gestures. For a brief period, it looked as though ToF was on its way to becoming a standard flagship spec alongside optical image stabilisation and telephoto lenses.
Then the sensors started disappearing. The Samsung Galaxy S21 line removed the ToF module. So did the Note 20 Ultra’s successors. LG exited the smartphone business entirely. Huawei’s trajectory was disrupted by US export controls that were only incidentally related to optical sensing. By 2023, ToF had become a feature that appeared selectively on specific models for specific reasons, not a default flagship expectation. The reasons were unglamorous and largely economic. ToF modules added bill-of-materials cost — the VCSEL emitter, the specialised CMOS receiver, the dedicated illumination optics, and the processing overhead all sat above the rest of the camera stack. The feature differentiation they delivered was small enough that consumers did not reliably notice its absence.
| Phone / Generation | ToF Present? | Application |
|---|---|---|
| Samsung Galaxy S10 5G (2019) | Yes (rear) | Bokeh, measurement |
| Samsung Galaxy S20+ / Note 10+ / Note 20 Ultra | Yes (rear) | Bokeh, AR |
| Samsung Galaxy S21 onward | No | Removed |
| Huawei P30 Pro / P40 Pro / Mate 40 Pro | Yes (rear) | Bokeh, AR |
| LG G8 ThinQ (2019) | Yes (front) | Gesture control, face auth |
| iPhone 12 Pro – 14 Pro (2020–2022) | Yes (rear LiDAR) | AR, autofocus, portraits |
| iPhone 15 Pro / 16 Pro | Yes (rear LiDAR, diminishing role) | AR, autofocus |
| Google Pixel line | No (any generation) | Computational depth only |
| Xiaomi / OPPO flagships | Selective | Model-specific AR, gesture |
Google’s position is worth flagging. The Pixel line has never shipped ToF, and has consistently produced industry-leading computational photography — including excellent portrait-mode bokeh — using a combination of dual-pixel autofocus data, neural depth estimation, and careful image processing. That is the real competitive threat to consumer-grade ToF in smartphones. If a pure software approach can produce 90% of the visible quality at zero additional bill-of-materials cost, the ToF module struggles to justify its inclusion.
Where ToF Is Actually Winning
The smartphone story is the most visible part of the ToF market but, in shipment-volume terms, no longer the largest. Three adjacent categories have quietly become the real drivers of ToF demand.
Robotic Vacuum Cleaners and Service Robots
The unglamorous truth is that robotic vacuums are now one of the largest consumer-facing ToF markets in unit terms. The category has moved up-market rapidly — high-end models from Roborock, Dreame, Ecovacs, and others now routinely include ToF-based obstacle avoidance and sometimes dedicated LiDAR turrets for mapping. The sensing requirements of a floor robot are almost perfectly matched to what ToF does well: medium-range depth mapping, indoor lighting conditions, continuous operation, and sub-centimetre accuracy in the environment the robot actually has to navigate. Pet-detection features added in 2023–2024 further justified the sensor payload.
AR and VR Headsets
Meta’s Quest line, Apple Vision Pro, Pico, and the various Chinese entrants all use depth sensing — often a combination of ToF and stereo-vision — to handle hand tracking, room mapping, and guardian-boundary setup. The sensors are less conspicuous than on a phone because they sit behind the headset’s front panel rather than in a visible camera aperture, but the aggregate shipment volume has grown steadily. Meta sold tens of millions of Quest units across its product lines. Each one contains multiple depth-sensing cameras. This is a quiet but meaningful pull on the ToF and IR imaging sensor supply chain.
Automotive In-Cabin Sensing
Driver-monitoring systems, occupancy detection, and in-cabin gesture control have become standard features on mid-to-high-end vehicles, driven partly by regulatory mandates in Europe requiring driver-attention monitoring in new cars. ToF is a strong fit for this application — it works in darkness, it is robust to changing ambient lighting, and it can operate at the frame rates needed for continuous monitoring. Several Tier-1 automotive suppliers have built in-cabin camera systems around ToF or hybrid ToF + IR architectures. The unit volumes here are smaller than smartphones but the design-in cycles are longer and the margins considerably better.
The Hardware Underneath the Market
A modern ToF module is a stack of specialised photonic and semiconductor components that have each gone through their own compression curve over the past five years. Understanding what lives inside the module helps clarify why cost trajectories and form-factor improvements look the way they do.
| Component | Role | Recent Trend |
|---|---|---|
| VCSEL emitter | Produces the modulated infrared light pulse | Wavelength shift toward 940 nm for better sunlight rejection; higher peak power at lower duty cycles |
| Diffractive optical element | Shapes the emitted beam across the scene | Thinner, higher-efficiency designs using wafer-level optics |
| Receiver optics | Collects returning light, filters out ambient | Narrow-band interference filters tightened to ~20 nm bandwidth |
| CMOS depth sensor | Converts photons to depth readings per pixel | Pixel pitch shrinking toward 3.5 µm; resolutions climbing to VGA and beyond |
| Timing / processing ASIC | Phase extraction, depth computation | Increasingly integrated with the sensor die itself |
| Module package | Optical alignment, thermal management | Height reductions below 5 mm now standard for mobile integration |
The single most important hardware trend underlying the current market is the maturation of VCSEL arrays at 940 nm. The older 850 nm wavelength sat closer to the peak of ambient solar infrared, which made outdoor performance a persistent problem for early mobile ToF. The shift to 940 nm — where atmospheric water absorption reduces ambient IR — combined with tighter receive-side filtering has materially improved outdoor performance. It has not eliminated the problem, but it has raised the ceiling of usable conditions.
The second important trend is the move toward indirect Time-of-Flight (iToF) architectures in consumer modules, with direct Time-of-Flight (dToF) reserved for higher-end applications. iToF measures the phase shift of a continuous-wave modulated signal, which simplifies the receiver electronics at the cost of a fixed unambiguous range. dToF measures individual photon arrival times using single-photon avalanche diode (SPAD) arrays, producing longer-range and higher-accuracy data at substantially higher component cost. Apple’s iPhone LiDAR is a dToF system. Most Android ToF modules have been iToF. The split reflects a genuine architectural trade-off, not a hierarchical “better or worse” ranking.
iToF vs dToF in One Paragraph
Indirect ToF is cheaper, denser, and simpler to integrate, but suffers from multi-path artefacts and fixed range limits. Direct ToF, using SPAD arrays, handles longer ranges and complex scenes more gracefully at significantly higher cost. Most consumer products use iToF because the use cases sit inside its comfort zone. Automotive, professional AR, and robotics applications are increasingly pulling toward dToF as SPAD manufacturing costs fall.
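The "fixed range limits" mentioned above follow directly from phase wrapping: an iToF phase reading repeats once the round trip exceeds one modulation period. A small sketch, with the modulation frequency chosen as an illustrative value:

```python
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(mod_freq_hz: float) -> float:
    """An iToF phase measurement repeats every c / (2 * f_mod) metres."""
    return C / (2.0 * mod_freq_hz)

def measured_distance(true_m: float, mod_freq_hz: float) -> float:
    """Targets beyond the unambiguous range alias back into it."""
    return true_m % unambiguous_range(mod_freq_hz)

r = unambiguous_range(100e6)            # roughly 1.5 m at 100 MHz modulation
aliased = measured_distance(2.0, 100e6)  # a 2 m target reads as roughly 0.5 m
print(round(r, 2), round(aliased, 2))
```

This is why practical iToF modules either cap their rated range or combine two modulation frequencies and disambiguate between them; dToF, timing individual photons, has no equivalent wrap.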
The Rise of Computational Depth
The most important competitive force acting on consumer ToF is not another depth sensor — it is neural depth estimation from ordinary images. Monocular depth networks now produce startlingly good dense depth maps from a single RGB frame. Multi-frame approaches, dual-pixel parallax, and stereo-from-motion pipelines close the gap further. For the core consumer uses of ToF in a smartphone — bokeh, segmentation, measurement — the purely computational path is now competitive on quality and vastly cheaper in hardware.
This does not make ToF obsolete. Neural depth networks are excellent at producing plausible depth but poor at producing verifiably accurate depth. For any application that needs ground-truth distance — AR placement, volume estimation, accessibility features, robotics, driver monitoring — a physical ToF measurement retains its edge. What has shifted is the set of applications that actually require that ground truth. Most consumer photography applications do not. Most industrial and robotic applications very much do.
Supply Chain Realities
The ToF sensor supply chain is concentrated. Sony dominates high-performance mobile ToF sensor shipments through its CMOS imaging fab capacity. STMicroelectronics has become a leader in dToF modules for consumer applications, including the sensor inside the iPhone LiDAR module. Infineon, pmd, Melexis, ams OSRAM, and Analog Devices hold important positions across automotive and industrial segments. VCSEL production for ToF sits with Lumentum, II-VI (now Coherent), and a handful of Chinese suppliers including Vertilite and Everbright Photonics.
This concentration has two practical consequences. First, the supply chain is genuinely strategic — ToF modules are among the photonic components now subject to scrutiny under Western export-control regimes. Second, price trajectories over the next several years will depend significantly on whether Chinese sensor and VCSEL manufacturers continue their trajectory of closing the quality gap with incumbent suppliers. If they do, and political conditions permit cross-border trade, consumer module prices compress further. If they do not, or if trade fragments, prices stabilise at current levels with regional supply bifurcation.
| Supply Chain Layer | Key Players | Strategic Notes |
|---|---|---|
| Mobile ToF sensors (iToF) | Sony, Samsung, SK Hynix | Japanese and Korean dominance; high barriers to entry |
| dToF / SPAD sensors | STMicroelectronics, Sony, ams OSRAM | Consolidating around a smaller set of fabs |
| Automotive ToF | Infineon, Melexis, Analog Devices, pmd | Longer design cycles, higher unit margins |
| VCSEL emitters | Coherent, Lumentum, Vertilite, Everbright | Concentration point; strategic-autonomy concern |
| Module integration | LG Innotek, Sunny Optical, O-Film, Q Tech | Asian optical-module ecosystem dominates |
Wearables and the Smart-Glasses Question
Smart glasses are the category most likely to become the next genuine volume driver for consumer ToF — and also the category where the technology faces its hardest design challenges. The Meta Ray-Ban product line demonstrated that lightweight, socially acceptable smart glasses can reach meaningful consumer adoption when they deliver a narrow, well-chosen set of features. The next generation of products — from Meta, Samsung, Apple, and a growing Chinese field — is expected to add display and spatial sensing capabilities that will almost certainly require some form of depth input.
The design envelope for glasses is brutal. Every gram matters. Every milliwatt of battery matters more. Optical apertures are tiny, housing volumes are vanishingly small, and industrial design considerations often overrule what engineers would prefer. This pushes hard against conventional ToF module form factors. Expect a wave of increasingly miniaturised, often hybrid, depth-sensing modules — combinations of dual cameras, sparse ToF dot illumination, and neural fusion — rather than the large rear-mounted modules that appeared on 2020-era smartphones. The photonics engineering problem is genuinely interesting and not yet solved.
Where This Goes: A Realistic Outlook
The neat narrative — “ToF is going to be in every consumer device by 2025” — never made physical or economic sense, and did not come true. The actual trajectory is more interesting. ToF has become a niche-but-growing component in smartphones, a near-ubiquitous component in higher-end robotic vacuums, a standard element in VR and AR headsets, a rapidly expanding part of automotive in-cabin systems, and an open question in smart glasses. The aggregate picture is healthy growth without the consumer-facing saturation the 2020 forecasts predicted.
| Segment | 2020 Narrative | 2026 Reality |
|---|---|---|
| Smartphones | ToF standard in all flagships | Selective, declining footprint; computational depth winning |
| Robotic vacuums | Niche | Major volume driver; mapping + obstacle avoidance standard |
| AR/VR headsets | Promising | Validated; every major headset ships depth sensing |
| Automotive in-cabin | Experimental | Regulated and ramping; Tier-1 standard feature |
| Smart glasses | Not yet on the roadmap | Emerging; design constraints still being solved |
| Wearables (watches, bands) | Miniaturised ToF imminent | Has not materialised; power budget too tight |
The technology itself will keep getting better. Pixel counts will rise. Module heights will fall. Power draw will drop. SPAD-based dToF will continue its migration down-market from iPhone-tier products toward mid-range devices. What will not change is the underlying truth that a sensor needs an application to justify it. Hardware that answers a question nobody is asking gets designed out of the bill of materials, regardless of how elegant it is. The ToF industry has learned that lesson the hard way and is now — quite sensibly — chasing applications where depth sensing is a load-bearing feature rather than a marketing asterisk.
Frequently Asked Questions: 3D Sensing in Consumer Electronics
What is a Time-of-Flight sensor and how does it work?
A ToF sensor emits a pulse or modulated wave of infrared light, measures the time or phase shift of the returning signal, and converts that into a per-pixel distance reading. Repeat it across an array of pixels and you get a depth map of the scene.
Is ToF the same thing as LiDAR?
They overlap. LiDAR is the broader umbrella term for light-based distance measurement, and most automotive and survey-grade LiDAR uses scanning direct-ToF architectures. Consumer ToF modules in phones are technically a form of LiDAR, but they use flash illumination and imaging arrays rather than scanning mirrors.
What is the difference between indirect ToF (iToF) and direct ToF (dToF)?
Indirect ToF (iToF) measures the phase shift of a continuous-wave modulated signal. Direct ToF (dToF) measures the arrival time of individual photons using single-photon avalanche diodes. iToF is cheaper and denser; dToF handles longer ranges and complex scenes better but costs more.
Why did Apple add LiDAR to the iPhone, and why has its role shrunk?
Apple introduced LiDAR on the iPad Pro and iPhone 12 Pro to accelerate augmented reality experiences, improve autofocus in low light, and enable room-scale scanning. The feature has been scaled back in more recent iPhone product cycles as consumer adoption of AR use cases failed to match initial expectations.
Which Samsung phones shipped ToF, and why was it removed?
Samsung included ToF modules on the Galaxy S10 5G, S20+, and Note 10+ / Note 20 Ultra. The feature was removed from the S21 generation and later models because the added bill-of-materials cost did not generate corresponding user-visible value — computational photography was closing the bokeh quality gap without extra hardware.
Does Face ID use ToF?
No. Face ID uses structured light, not ToF. A dot projector emits a known infrared pattern, and an IR camera captures the deformation of that pattern to reconstruct a depth map of the face. Apple uses LiDAR (which is ToF) on the rear of Pro iPhones for a different set of applications.
Why have Google Pixel phones never shipped ToF?
Google has consistently favoured computational approaches — neural depth estimation, dual-pixel parallax, and careful image processing — over dedicated depth hardware. Pixel phones produce competitive portrait photography without any ToF module, which is strong evidence that the sensor is not strictly necessary for many smartphone depth applications.
What wavelength of light do ToF sensors use?
Most modern consumer ToF modules operate at 940 nanometres in the near-infrared. Older modules used 850 nm, but 940 nm offers better sunlight rejection because atmospheric water absorption reduces ambient infrared at that wavelength.
Does ToF work outdoors in bright sunlight?
Partially. Bright sunlight contains substantial infrared radiation that swamps the ToF sensor’s returning signal, reducing range and accuracy. The shift to 940 nm wavelengths and tighter receive-side optical filtering has improved outdoor performance but has not eliminated the limitation. Demanding outdoor applications often pair ToF with stereo vision or switch to stereo entirely.
What is a VCSEL?
A VCSEL (Vertical-Cavity Surface-Emitting Laser) is a compact semiconductor laser that emits light perpendicular to its chip surface. VCSELs can be manufactured in dense arrays, modulated at high frequency, and packaged at low cost — making them the standard emitter for consumer ToF modules.
Are ToF sensors safe for the eyes?
Consumer ToF modules are engineered to Class 1 eye-safety standards under normal operating conditions. The invisible infrared illumination is power-limited, duty-cycled, and optically spread to ensure that even extended exposure does not exceed safety thresholds.
What is the typical working range of a smartphone ToF sensor?
Consumer smartphone iToF modules are typically accurate from around 0.3 metres to roughly 4 metres, with degraded accuracy outside that window. dToF systems like the iPhone LiDAR extend the usable range somewhat further, to roughly 5 metres in most conditions.
Is ToF or structured light better for face authentication?
For close-range face authentication, structured light generally delivers higher spatial accuracy because the projected pattern provides dense reference points regardless of ambient lighting. ToF can be used but is typically a second choice for security-grade face authentication.
How do robotic vacuum cleaners use ToF?
ToF provides real-time obstacle detection and mapping data. Higher-end models use dedicated ToF turrets for full-room LiDAR mapping, while mid-range models integrate small ToF modules for forward obstacle detection and pet recognition. This category has become one of the largest single consumer-facing markets for ToF in shipment terms.
Do VR and mixed-reality headsets use ToF?
Most current VR and mixed-reality headsets use some form of depth sensing for hand tracking, room mapping, and guardian-boundary detection. The architectures vary — some use pure stereo vision with neural depth, some use ToF, and many use hybrid combinations. Aggregate headset shipments have made this a meaningful ToF end market.
How is ToF used inside cars?
Driver-monitoring systems use small in-cabin ToF or IR imaging sensors to track driver gaze, attention, and drowsiness. The category has become effectively mandatory in new cars sold in the EU due to safety regulations, making automotive in-cabin sensing one of the fastest-growing ToF segments.
Will smart glasses use ToF?
Probably yes, but in miniaturised or hybrid form. The severe size, weight, and power constraints of smart glasses do not accommodate conventional ToF modules. Expect a new generation of sparse-dot ToF illumination combined with stereo vision and neural fusion rather than straightforward module transplants from phones.
Who makes ToF sensors?
Sony dominates mobile ToF sensor production, with Samsung and SK Hynix also active in the segment. STMicroelectronics leads in dToF / SPAD modules including the iPhone LiDAR sensor. Infineon, Melexis, Analog Devices, pmd, and ams OSRAM hold significant positions in automotive and industrial ToF.
What is an RGB-D camera?
An RGB-D camera combines a standard colour image sensor with a depth sensor — typically a ToF module or structured-light unit. The colour and depth streams are aligned to produce a single dataset containing both visual and geometric information per pixel, useful for AR, robotics, and spatial computing.
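The per-pixel pairing of colour and geometry rests on back-projecting each depth reading into 3D through the camera's pinhole intrinsics. A minimal sketch, with intrinsic values invented for illustration:

```python
def deproject(u: int, v: int, depth_m: float,
              fx: float, fy: float, cx: float, cy: float):
    """Back-project a depth pixel (u, v) into a 3D point in camera space
    using pinhole intrinsics (fx, fy: focal lengths in pixels;
    cx, cy: principal point). Pairing the result with the colour pixel
    at the same aligned coordinates yields one RGB-D sample."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# The principal-point pixel of a 640x480 sensor at 2 m lies on the optical axis.
print(deproject(320, 240, 2.0, 600.0, 600.0, 320.0, 240.0))
```

In a real device the depth and colour cameras have different positions and intrinsics, so an extrinsic transform and re-projection step sits between them; the formula above is the core operation that step repeats.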
Can neural depth estimation replace ToF entirely?
In photography and casual consumer applications, largely yes. In applications requiring ground-truth distance measurements — AR placement, robotics, driver monitoring, industrial inspection — no. Neural depth networks estimate plausible depth but do not measure it, and for applications where the difference matters, physical depth sensors remain essential.
What is SLAM, and how does ToF help?
Simultaneous Localisation and Mapping (SLAM) is the problem of building a map of an environment while simultaneously tracking your position within it. ToF data provides dense, accurate depth readings that significantly improve SLAM robustness, particularly in low-texture or low-light environments where purely visual SLAM struggles.
How much have ToF module prices fallen?
Consumer ToF module prices have compressed steadily as VCSEL manufacturing has scaled and CMOS ToF sensors have migrated to smaller process nodes. Representative module costs have fallen significantly from 2020 peaks, though rates of decline have slowed as the technology matures.
Are Chinese suppliers catching up with the incumbents?
Yes, meaningfully. Chinese VCSEL and ToF sensor manufacturers have closed a substantial portion of the quality gap with incumbent Japanese, Korean, and European suppliers over the past several years, particularly for mid-market consumer applications. The highest-performance tier still sits with the established Japanese and European players.
What is the realistic outlook for ToF in consumer electronics?
Steady, unspectacular growth concentrated in applications where depth measurements are genuinely load-bearing — robotics, headsets, automotive, industrial — rather than the all-encompassing smartphone takeover that 2020-era forecasts predicted.
What should observers watch next?
Three things. First, whether any major smartphone OEM reintroduces ToF in response to a new AR platform becoming popular. Second, the depth-sensing architecture chosen by the next generation of smart glasses from Meta, Apple, and Samsung. Third, the continuing compression of dToF / SPAD costs, which will determine how quickly the higher-accuracy architecture spreads from premium to mass-market devices.
