Smile, it’s CameraX! [analysis and extensions]

Image analysis and applying extensions when taking pictures are use cases that users reach for often. The CameraX API lets us implement them in a very simple way. If you want to learn more, check out the code samples and the theory behind these features in this article.

In my previous articles about CameraX I covered topics like:

  • the challenges we face with the older Camera APIs
  • the advantages of using this new API
  • steps to implement the preview use case
  • steps to implement the capture use case

If you want to check the previous posts, here are the details:

Analysis Use Case

A CPU-accessible image for image processing, computer vision and machine learning.

For each use case we will first go through the implementation steps and then look at the code assigned to each step.

Step 1: create an ImageAnalysis reference

// the simplest way to create an ImageAnalysis instance
// val imageAnalysis = ImageAnalysis.Builder().build()

// the executor receives the frames sequentially
val blocking = ImageAnalysis.STRATEGY_BLOCK_PRODUCER
// the executor receives only the last available frame (default)
val nonBlocking = ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST

val imageAnalysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(nonBlocking)
    .build()
  • ImageAnalysis is one of the provided implementations of the UseCase abstract class
  • Calling ImageAnalysis.Builder().build() with no options is the simplest way to create an instance of the ImageAnalysis class
  • ImageAnalysis acquires images from the camera via an ImageReader
  • These images can be processed in 2 different ways:
    • sequentially
    • or by using only the last available frame
  • In other words, we add a backpressure strategy via the setBackpressureStrategy method
  • If it is not set, the default strategy is the one where the executor receives only the last available frame

Step 2 & 3: define a custom analyser and implement the analyze() method

class PurpleColorAnalyser : ImageAnalysis.Analyzer {

    private var lastAnalyzedTimestamp = 0L

    private fun ByteBuffer.toByteArray(): ByteArray {
        rewind()
        val data = ByteArray(remaining())
        get(data)
        return data
    }

    override fun analyze(image: ImageProxy) {
        val currentTimestamp = System.currentTimeMillis()
        val oneSecond = TimeUnit.SECONDS.toMillis(1)
        // analyze at most one frame per second
        if (currentTimestamp - lastAnalyzedTimestamp >= oneSecond) {
            val buffer = image.planes[0].buffer
            val data = buffer.toByteArray()
            val pixels = data.map { it.toInt() and 0x9370DB }
            val averagePurplePixels = pixels.average()
            Log.e("PURPLE", "Average purple pixels: $averagePurplePixels")
            lastAnalyzedTimestamp = currentTimestamp
        }
        // close the image so the next frames can be delivered
        image.close()
    }
}
  • Before returning from analyze(), close the image reference by calling image.close() to avoid blocking the production of further images (causing the preview to stall) and to avoid potentially dropping images. 
  • analyze() analyzes an image to produce a result.
  • This method is called once for each image from the camera, at the frame rate of the camera. Each analyze call is executed sequentially.

Step 4: set the custom analyser

imageAnalysis.setAnalyzer(executor, PurpleColorAnalyser())
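
The executor passed to setAnalyzer is not created in the snippet above. Here is a minimal sketch, assuming a dedicated single-threaded executor (java.util.concurrent.Executors) is enough for this analyzer; remember to shut it down when the screen is destroyed:

// a background thread on which analyze() will be called
val analysisExecutor = Executors.newSingleThreadExecutor()
imageAnalysis.setAnalyzer(analysisExecutor, PurpleColorAnalyser())

// later, e.g. in onDestroy():
// analysisExecutor.shutdown()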

Step 5: update the call to bind lifecycle

// bind the image analysis use case to camera
camera = cameraProvider.bindToLifecycle(
    this as LifecycleOwner,
    cameraSelector,
    imageAnalysis,
    preview
)

Image format

We all know the RGB color system. It constructs all the colors from the combination of the Red, Green and Blue colors.

CameraX produces images in the YUV_420_888 format. The YUV color encoding describes each pixel with 2 kinds of components:

  • Luminance = refers to the brightness of the pixel = Y (grayscale image)
  • Chrominance = refers to the color = UV

YUV => Y = luminance, U = chrominance of blue, V = chrominance of red. YUV is also known as YCbCr.

This format is a generic YCbCr format, capable of describing any 4:2:0 chroma-subsampled planar or semi-planar buffer (but not fully interleaved), with 8 bits per color sample.

With the Camera2 API, images are captured in the ImageFormat.YUV_420_888 format. With the older Camera API, they are captured in the ImageFormat.NV21 format.
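
To make the format more tangible, here is a minimal sketch of an analyzer (a hypothetical YuvInfoAnalyser, not part of the article's sample) that only logs the layout of the three planes of a YUV_420_888 frame:

class YuvInfoAnalyser : ImageAnalysis.Analyzer {
    override fun analyze(image: ImageProxy) {
        // plane 0 = Y (luminance), plane 1 = U (Cb), plane 2 = V (Cr)
        image.planes.forEachIndexed { index, plane ->
            Log.d(
                "YUV",
                "plane $index: rowStride=${plane.rowStride}, " +
                    "pixelStride=${plane.pixelStride}, bytes=${plane.buffer.remaining()}"
            )
        }
        image.close()
    }
}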

YUV is one of the encoding systems used mostly in the color image pipeline, which means it is used between an image source (the camera) and an image renderer (the display of the device). This is one of the most efficient approaches in image processing apps where displays are involved, and transmission errors are reduced compared to the RGB scheme. The luminosity of a given color is kept separate from the hue (color) information, so by using YUV we are able to apply image processing algorithms without affecting the perceived quality.

Research shows that the human eye is good at perceiving differences in brightness but not so good at distinguishing different nuances of the same color, and this is how chroma subsampling appeared. Chroma subsampling is a compression technique used for images and videos: the information about the brightness of an image matters more than the information about its colors.

There are 3 common schemes of chroma subsampling:

  1. 4:4:4
  2. 4:2:2
  3. 4:2:0

The main idea is to reduce the resolution of the color components, which reduces the size of the image without affecting how the human eye perceives it.

The natural question to ask here is: how much data do we save? For YUV_420_888 (4:2:0) we save 50% compared to keeping full color information for every pixel, so I think it is a good deal.
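
Here is the arithmetic behind that 50% figure as a small sketch; the resolution is just an example:

// example resolution of an analysis frame
val width = 640
val height = 480

// 4:4:4 (full chroma): 3 bytes per pixel (Y + U + V)
val fullSize = width * height * 3

// 4:2:0 (YUV_420_888): full-resolution Y plane,
// plus U and V planes at half resolution in both dimensions
val yBytes = width * height
val uBytes = (width / 2) * (height / 2)
val vBytes = (width / 2) * (height / 2)
val subsampledSize = yBytes + uBytes + vBytes // 1.5 bytes per pixel

println("saved: ${100 - subsampledSize * 100 / fullSize}%") // prints "saved: 50%"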

Camera Controls

CameraControl provides various asynchronous operations like zoom, focus and metering, which affect the output of all the UseCases currently bound to that camera.

CameraInfo also plays an important role in this equation: it is an interface for retrieving camera information.

val cameraControl = camera.cameraControl
val cameraInfo = camera.cameraInfo

cameraInfo.torchState.observe(this, Observer { state ->
    if (state == TorchState.ON) {
        // state on
    } else {
        // state off
    }
})
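
The snippet above only observes CameraInfo. As a complement, here is a minimal sketch of the zoom and tap-to-focus operations mentioned earlier; previewView is assumed to be the PreviewView showing the preview use case:

// zoom to the middle of the supported range (0f..1f)
cameraControl.setLinearZoom(0.5f)

// turn the torch on
cameraControl.enableTorch(true)

// tap-to-focus: build a metering point from a touch position on the preview
previewView.setOnTouchListener { _, event ->
    val point = previewView.meteringPointFactory.createPoint(event.x, event.y)
    val action = FocusMeteringAction.Builder(point).build()
    cameraControl.startFocusAndMetering(action)
    true
}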

Extensions Use Case

Dedicated API for optional effects like HDR, portrait, night-mode.

Extensions are separate from the Camera2 core of CameraX. In the next diagram, the red arrows indicate the main data flow when users trigger an extension-based feature, such as HDR image capture.

ImageCaptureExtender is an abstract class; right now the API provides 5 different implementations of it:

  • AutoImageCaptureExtender = loads the OEM (Original Equipment Manufacturer) extension implementation for the auto effect type.
  • BeautyImageCaptureExtender = loads the OEM extension implementation for the beauty effect type.
  • BokehImageCaptureExtender = loads the OEM extension implementation for the bokeh effect type.
  • HdrImageCaptureExtender = loads the OEM extension implementation for the HDR effect type.
  • NightImageCaptureExtender = loads the OEM extension implementation for the night effect type.

Bokeh effect

As with the analysis use case, we will first go through the implementation steps and then look at the code assigned to each step.

An important thing to mention about extensions is that they are not available on all devices:

  • Huawei (HDR, Portrait): Mate 20 series, P30 series, Honor Magic 2, Honor View 20.
  • Samsung (HDR, Night, Beauty, Auto): Galaxy Note 10 series.

Based on documentation: “For a device to support vendor extensions, all of the following must be true:

  • The effect has library support from the device OEM.
  • The OEM library is installed on the current device.
  • The OEM library reports the device as supporting the extension.
  • The device has a version of the operating system that the library requires.

You can enable an extension preferentially: if the extension is both supported by and physically on the device, then it will be enabled; otherwise, it will degrade gracefully.”

Step 1: create an Extender object

val builder = ImageCapture.Builder()
val beautyExtender = BeautyImageCaptureExtender.create(builder)

Step 2: enable the extension

if (beautyExtender.isExtensionAvailable(cameraSelector)) {
    beautyExtender.enableExtension(cameraSelector)
}
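
To actually capture pictures with the effect applied, the builder still has to produce the ImageCapture use case, and that use case has to be bound to the lifecycle. Here is a minimal sketch of this final step, reusing the cameraProvider, cameraSelector and preview from the earlier snippets:

// build the use case from the same builder the extender was attached to
val imageCapture = builder.build()

camera = cameraProvider.bindToLifecycle(
    this as LifecycleOwner,
    cameraSelector,
    preview,
    imageCapture
)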

That’s all folks! A short recap of CameraX’s main advantages:

If you want to learn more about CameraX, you can check the following resources:

Enjoy, and feel free to leave a comment if something is not clear or if you have questions. And if you like it, please share!

Thank you for reading! 🙌🙏😍✌

Follow me on: Twitter | Medium | Dev.to
