diff --git a/core/java/android/hardware/camera2/CameraCharacteristics.java b/core/java/android/hardware/camera2/CameraCharacteristics.java
index 45d6e885766ca..37bead845e056 100644
--- a/core/java/android/hardware/camera2/CameraCharacteristics.java
+++ b/core/java/android/hardware/camera2/CameraCharacteristics.java
@@ -1007,7 +1007,7 @@ public final class CameraCharacteristics extends CameraMetadata {
      * each channel is specified by the offset in the
      * {@link CameraCharacteristics#SENSOR_BLACK_LEVEL_PATTERN android.sensor.blackLevelPattern} tag.</p>
      * <p>The white level is typically determined either by sensor bit depth
-     * (10-14 bits is expected), or by the point where the sensor response
+     * (8-14 bits is expected), or by the point where the sensor response
      * becomes too non-linear to be useful. The default value for this is
      * maximum representable value for a 16-bit raw sample (2^16 - 1).</p>
      *
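The hunk above documents how the black level offsets and the white level bound a raw sample's useful range. As a minimal sketch of how a RAW pipeline might consume these values (the class and method names are hypothetical, not part of this patch; in a real app the two inputs would come from `CameraCharacteristics#SENSOR_BLACK_LEVEL_PATTERN` and `CameraCharacteristics#SENSOR_INFO_WHITE_LEVEL`):

```java
// Sketch: normalize a raw Bayer sample into [0, 1] given the black level
// offset for the sample's channel and the sensor's white level.
public final class RawLevels {
    public static float normalize(int raw, int blackLevel, int whiteLevel) {
        // Samples at or below the black level clamp to 0; samples at or
        // above the white level (where the response saturates or becomes
        // too non-linear to be useful) clamp to 1.
        float v = Math.max(0, raw - blackLevel);
        return Math.min(1.0f, v / (whiteLevel - blackLevel));
    }

    public static void main(String[] args) {
        // Hypothetical 10-bit sensor: white level 1023, black level 64.
        System.out.println(normalize(64, 64, 1023));   // prints 0.0
        System.out.println(normalize(1023, 64, 1023)); // prints 1.0
    }
}
```

Note that the default white level (2^16 - 1) only applies when the HAL reports nothing more specific; real sensors typically saturate well below that, which is why the expected bit-depth range matters.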
diff --git a/core/java/android/hardware/camera2/CaptureResult.java b/core/java/android/hardware/camera2/CaptureResult.java
index 03661f0d4a375..9bf1b98d0571e 100644
--- a/core/java/android/hardware/camera2/CaptureResult.java
+++ b/core/java/android/hardware/camera2/CaptureResult.java
@@ -1719,8 +1719,12 @@ public final class CaptureResult extends CameraMetadata {
      * filter array.</p>
      * <p>The green split is calculated as follows:</p>
      * <ol>
-     * <li>A representative 5x5 pixel window W within the active
-     * sensor array is chosen.</li>
+     * <li>A 5x5 pixel (or larger) window W within the active sensor array is
+     * chosen. The term 'pixel' here is taken to mean a group of 4 Bayer
+     * mosaic channels (R, Gr, Gb, B). The location and size of the window
+     * chosen is implementation defined, and should be chosen to provide a
+     * green split estimate that is both representative of the entire image
+     * for this camera sensor, and can be calculated quickly.</li>
      * <li>The arithmetic mean of the green channels from the red
      * rows (mean_Gr) within W is computed.</li>
      * <li>The arithmetic mean of the green channels from the blue
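The hunk is truncated here mid-list; the visible steps choose a window W of 2x2 Bayer quads and average the green samples from the red rows (mean_Gr) and from the blue rows. As a rough sketch of those two means (the class name, the RGGB channel layout, and the requirement that the window be quad-aligned are assumptions for illustration, not taken from the patch or the HAL):

```java
// Sketch: mean_Gr and mean_Gb over a window of an RGGB Bayer mosaic, the
// two quantities the green split documentation compares.
public final class GreenSplit {
    // raw is a row-major width x height mosaic with an RGGB pattern:
    //   even rows: R Gr R Gr ...    odd rows: Gb B Gb B ...
    // (x0, y0) is the top-left corner of the window and must be quad-aligned
    // (both even); size counts 2x2 Bayer quads, matching the doc's use of
    // "pixel" as a group of 4 mosaic channels.
    public static float[] greenMeans(int[] raw, int width, int x0, int y0, int size) {
        long sumGr = 0, sumGb = 0;
        int n = 0;
        for (int qy = 0; qy < size; qy++) {
            for (int qx = 0; qx < size; qx++) {
                int r = y0 + 2 * qy, c = x0 + 2 * qx;
                sumGr += raw[r * width + (c + 1)]; // Gr: green sample on a red row
                sumGb += raw[(r + 1) * width + c]; // Gb: green sample on a blue row
                n++;
            }
        }
        return new float[] { (float) sumGr / n, (float) sumGb / n };
    }
}
```

A sensor with no green split would report mean_Gr == mean_Gb over such a window; the reported metric grows as the two diverge.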