AVCaptureDevice.exposurePointOfInterest readback does not match set value on iPad Pro M4 / M3 (UltraWide) and Gen 2/3/4 (TrueDepth) using resizeAspectFill


Environment:

Devices: iPad Pro 11" M4, iPad Air 11" M3, iPad Pro 11" Gen2/3/4

Language: Swift

Framework: AVFoundation

Front camera: UltraWide (M4/M3), TrueDepth (Gen2–4)

Video gravity: .resizeAspectFill


Background

I am setting an exposure point of interest using coordinates defined in captured image pixel space.

Input point: (1170, 1370)

Image sizes:

Gen2/3/4: 2316 × 3088

M3/M4: 3024 × 4032

Preview sizes:

Gen2/3/4: 834 × 1194

M4: 834 × 1210

M3: 820 × 1180


What I do

First, I convert captured-image pixel coordinates to preview-layer coordinates, then call captureDevicePointConverted(fromLayerPoint:):

let devicePoint = previewLayer.captureDevicePointConverted(fromLayerPoint: layerPoint)
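For context, my pixel-to-layer conversion uses the standard aspect-fill math: scale by the larger of the two width/height ratios, then subtract half the overflow on the cropped axis. A minimal sketch of that step (the function name layerPoint(forPixel:) is mine, not an AVFoundation API; it assumes the image and layer share the same orientation and that the crop is centered):

```swift
import Foundation

// Convert a point in captured-image pixel space to preview-layer
// coordinates under .resizeAspectFill (scale to cover, centered crop).
func layerPoint(forPixel p: CGPoint,
                imageSize: CGSize,
                layerSize: CGSize) -> CGPoint {
    // Aspect fill scales by the larger ratio so the layer is fully covered.
    let scale = max(layerSize.width / imageSize.width,
                    layerSize.height / imageSize.height)
    // Overflow of the scaled image beyond the layer, split evenly per side.
    let xOffset = (imageSize.width * scale - layerSize.width) / 2
    let yOffset = (imageSize.height * scale - layerSize.height) / 2
    return CGPoint(x: p.x * scale - xOffset,
                   y: p.y * scale - yOffset)
}
```

With the Gen 2/3/4 numbers above (2316 × 3088 image, 834 × 1194 layer), the crop falls entirely on the x axis, so only x gets an offset.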

Reading back exposure point

After capture, I convert back:

let layerPoint = previewLayer.layerPointConverted(fromCaptureDevicePoint: exposurePoint)

This results in:

X ≈ 1137 (expected 1170)

Y ≈ 1488 (expected 1370)
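The final step from the read-back layer point to image pixels uses the inverse of the same aspect-fill math (helper name is mine; it again assumes a centered crop):

```swift
import Foundation

// Invert the aspect-fill mapping: preview-layer point back to
// captured-image pixel coordinates.
func pixelPoint(forLayerPoint lp: CGPoint,
                imageSize: CGSize,
                layerSize: CGSize) -> CGPoint {
    let scale = max(layerSize.width / imageSize.width,
                    layerSize.height / imageSize.height)
    let xOffset = (imageSize.width * scale - layerSize.width) / 2
    let yOffset = (imageSize.height * scale - layerSize.height) / 2
    // Undo the crop offset, then undo the scale.
    return CGPoint(x: (lp.x + xOffset) / scale,
                   y: (lp.y + yOffset) / scale)
}
```

This inverse round-trips my own forward mapping exactly, so the ≈33 px / ≈118 px drift must be introduced somewhere inside the preview-layer conversion APIs rather than in this arithmetic.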


Observation

It seems that captureDevicePointConverted(fromLayerPoint:) does not perform a simple linear scale-and-offset mapping when the layer uses .resizeAspectFill.

My understanding is that:

The preview layer crops the scaled image to maintain aspect fill

This crop introduces an offset between sensor space and preview space that a scale-only conversion does not account for


Questions

Does captureDevicePointConverted(fromLayerPoint:) account for .resizeAspectFill cropping, making it unsuitable for direct pixel mapping?

Is it correct to compute exposure points directly using normalized coordinates (pixel / image size) instead of using preview layer conversion?

Is exposurePointOfInterest always expressed in full sensor normalized coordinates (0–1), independent of preview settings?

Does this behavior differ between UltraWide (M3/M4) and TrueDepth cameras?

Is there official documentation describing correct coordinate mapping for this scenario?
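For reference, this is the direct mapping from question 2 that I am considering as a workaround. The 90° rotation into landscape sensor space ((0,0) top-left, (1,1) bottom-right with the home button on the right) is my assumption for a portrait buffer and would need checking against the connection's videoOrientation and mirroring settings:

```swift
import Foundation

// Map a pixel in a portrait-oriented captured image directly to the
// normalized landscape sensor space that exposurePointOfInterest uses.
// The rotation below is an assumption for an unmirrored portrait buffer.
func devicePoint(forPixel p: CGPoint, imageSize: CGSize) -> CGPoint {
    // Normalize in the image's own (portrait) space first.
    let nx = p.x / imageSize.width
    let ny = p.y / imageSize.height
    // Rotate portrait -> landscape sensor space.
    return CGPoint(x: ny, y: 1 - nx)
}
```

This skips the preview layer entirely, which is why I want to confirm whether exposurePointOfInterest is always expressed in full-sensor normalized coordinates.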

