
How to integrate CameraX with Firebase ML Kit for real-time object detection in Kotlin Android

Here's a step-by-step tutorial on integrating CameraX with Firebase ML Kit for real-time object detection in a Kotlin Android app. (Heads-up: the firebase-ml-vision APIs used below belong to the older "ML Kit for Firebase" SDK, which Google has since deprecated in favor of the standalone ML Kit SDK; the overall CameraX integration pattern is the same.)

Step 1: Set up your project

  1. Create a new Android project in Android Studio.
  2. Add the necessary dependencies to your app module's build.gradle file:

dependencies {
    // CameraX dependencies (all camera-* artifacts should use matching versions)
    implementation 'androidx.camera:camera-core:1.1.0-alpha07'
    implementation 'androidx.camera:camera-camera2:1.1.0-alpha07'
    implementation 'androidx.camera:camera-lifecycle:1.1.0-alpha07'
    // camera-view provides PreviewView, used below for the camera preview
    implementation 'androidx.camera:camera-view:1.0.0-alpha27'

    // Firebase ML Kit dependencies
    implementation 'com.google.firebase:firebase-ml-vision:24.0.2'
    implementation 'com.google.firebase:firebase-ml-vision-object-detection-model:19.0.7'
}
  3. Sync your project with the Gradle files.
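The firebase-ml-vision artifacts also assume the app is registered with a Firebase project. If you haven't done that yet, register the app in the Firebase console, download its google-services.json into the app module, and apply the google-services Gradle plugin. A sketch of the usual setup (the plugin version shown is illustrative; use whatever the console's setup flow recommends):

// Project-level build.gradle
buildscript {
    dependencies {
        classpath 'com.google.gms:google-services:4.3.8'
    }
}

// App-level build.gradle (at the bottom of the file)
apply plugin: 'com.google.gms.google-services'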

Step 2: Set up the camera preview

  1. Open your project's main activity XML layout file (activity_main.xml).
  2. Add a PreviewView element (CameraX's preview widget from the camera-view artifact) to display the camera preview:

<androidx.camera.view.PreviewView
    android:id="@+id/previewView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

  3. In your main activity class, initialize the PreviewView and set it up to display the camera preview:
class MainActivity : AppCompatActivity() {
    private lateinit var previewView: PreviewView

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        previewView = findViewById(R.id.previewView)
        previewView.post { startCamera() }
    }

    private fun startCamera() {
        val cameraProviderFuture = ProcessCameraProvider.getInstance(this)

        cameraProviderFuture.addListener({
            val cameraProvider = cameraProviderFuture.get()

            val preview = Preview.Builder()
                .build()
                .also {
                    // PreviewView exposes a ready-made SurfaceProvider
                    it.setSurfaceProvider(previewView.surfaceProvider)
                }

            val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

            try {
                // Unbind any previously bound use cases before rebinding
                cameraProvider.unbindAll()
                cameraProvider.bindToLifecycle(this, cameraSelector, preview)
            } catch (e: Exception) {
                Log.e(TAG, "Failed to bind camera", e)
            }
        }, ContextCompat.getMainExecutor(this))
    }
}

Step 3: Implement object detection

  1. Create a new Kotlin file called ObjectDetection.kt.
  2. Add the following code to the file to define the ObjectDetection class:
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.objects.FirebaseVisionObject
import com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetector
import com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetectorOptions

class ObjectDetection {
    private val options = FirebaseVisionObjectDetectorOptions.Builder()
        .setDetectorMode(FirebaseVisionObjectDetectorOptions.STREAM_MODE)
        .enableMultipleObjects()
        .enableClassification()
        .build()
    private val detector: FirebaseVisionObjectDetector =
        FirebaseVision.getInstance().getOnDeviceObjectDetector(options)

    fun detectObjects(bitmap: Bitmap, callback: (List<String>) -> Unit) {
        val image = FirebaseVisionImage.fromBitmap(bitmap)
        detector.processImage(image)
            .addOnSuccessListener { objects ->
                // classificationCategory is an Int constant, so map it to a label
                val result = objects.map { obj ->
                    when (obj.classificationCategory) {
                        FirebaseVisionObject.CATEGORY_HOME_GOOD -> "Home good"
                        FirebaseVisionObject.CATEGORY_FASHION_GOOD -> "Fashion good"
                        FirebaseVisionObject.CATEGORY_FOOD -> "Food"
                        FirebaseVisionObject.CATEGORY_PLACE -> "Place"
                        FirebaseVisionObject.CATEGORY_PLANT -> "Plant"
                        else -> "Unknown"
                    }
                }
                callback(result)
            }
            .addOnFailureListener { e ->
                Log.e(TAG, "Object detection failed", e)
            }
    }
}
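The stream-mode detector is noisy, and ML Kit also reports a per-object classification confidence (nullable when classification isn't available), so apps commonly drop low-confidence detections before showing them. A minimal pure-Kotlin sketch of that filtering step, using a hypothetical Detection record in place of FirebaseVisionObject:

```kotlin
// Illustrative stand-in for a detected object: a label plus an optional confidence,
// mirroring the shape of ML Kit's classification results.
data class Detection(val label: String, val confidence: Float?)

// Keep only detections whose confidence is known and meets the threshold.
fun filterConfident(detections: List<Detection>, threshold: Float = 0.5f): List<String> =
    detections.filter { (it.confidence ?: 0f) >= threshold }.map { it.label }
```

In the real callback you would build the Detection values from each FirebaseVisionObject before filtering; the threshold is something to tune per use case.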

Step 4: Connect the camera preview and object detection

  1. In your main activity class, create an instance of the ObjectDetection class.
  2. Add an ImageAnalysis use case to the camera provider to process the camera frames:
private fun startCamera() {
    // ...

    val imageAnalysis = ImageAnalysis.Builder()
        // Keep only the latest frame so analysis never falls behind the camera
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()
        .also {
            it.setAnalyzer(ContextCompat.getMainExecutor(this)) { imageProxy ->
                val bitmap = imageProxy.toBitmap()
                // The bitmap is a copy of the frame, so the proxy can be closed right away
                imageProxy.close()
                objectDetection.detectObjects(bitmap) { result ->
                    // Process the object detection result
                }
            }
        }

    // ...

    cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageAnalysis)
}
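One common refinement: detectObjects is asynchronous, so analyzer frames can arrive faster than the detector finishes. A simple in-flight gate drops new frames while a detection is still running, keeping the pipeline to one detection at a time. A pure-Kotlin sketch (FrameGate is a made-up helper name, not a CameraX or ML Kit API):

```kotlin
import java.util.concurrent.atomic.AtomicBoolean

// Gate that admits at most one frame at a time: tryAcquire() returns false
// while earlier work is still in flight; release() reopens the gate.
class FrameGate {
    private val busy = AtomicBoolean(false)

    fun tryAcquire(): Boolean = busy.compareAndSet(false, true)

    fun release() {
        busy.set(false)
    }
}
```

Inside the analyzer you would call tryAcquire() before converting the frame (closing and returning early when it fails) and release() from the detectObjects callback once the result is in.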
  3. Add an extension function to convert the ImageProxy to a Bitmap. CameraX delivers frames in YUV_420_888 format, so the planes must be repacked as NV21 and encoded before BitmapFactory can decode them (this also needs imports for android.graphics.ImageFormat, Rect, YuvImage, BitmapFactory, and java.io.ByteArrayOutputStream):
private fun ImageProxy.toBitmap(): Bitmap {
    // Pack the Y plane followed by the interleaved V/U plane into an NV21 buffer
    // (assumes the common chroma layout with pixelStride == 2)
    val yBuffer = planes[0].buffer
    val vuBuffer = planes[2].buffer
    val ySize = yBuffer.remaining()
    val nv21 = ByteArray(ySize + vuBuffer.remaining())
    yBuffer.get(nv21, 0, ySize)
    vuBuffer.get(nv21, ySize, vuBuffer.remaining())
    // Compress to JPEG so BitmapFactory can decode the frame into a Bitmap
    val out = ByteArrayOutputStream()
    YuvImage(nv21, ImageFormat.NV21, width, height, null)
        .compressToJpeg(Rect(0, 0, width, height), 100, out)
    val jpegBytes = out.toByteArray()
    return BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.size)
}
  4. Process the object detection result as desired in the detectObjects callback.
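Raw per-frame results tend to flicker as objects drift in and out of view, so many apps smooth the labels before updating the UI. A pure-Kotlin sketch of a majority-vote smoother over the last few frames (LabelSmoother is a hypothetical helper, not part of ML Kit):

```kotlin
// Reports the most frequent label seen over the last `window` frames,
// which damps single-frame misclassifications.
class LabelSmoother(private val window: Int = 5) {
    private val recent = ArrayDeque<String>()

    fun add(label: String): String {
        recent.addLast(label)
        if (recent.size > window) recent.removeFirst()
        return recent.groupingBy { it }.eachCount().maxByOrNull { it.value }!!.key
    }
}
```

Feeding each frame's top label through add() and displaying the return value gives a noticeably steadier on-screen label than showing raw results.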

That's it! You have now integrated CameraX with Firebase ML Kit for real-time object detection in a Kotlin Android app. You can customize the object detection logic and UI as needed to suit your application's requirements.

Note: Don't forget to replace TAG with an appropriate logging tag (for example, a private const val TAG in a companion object). You'll also need to declare the android.permission.CAMERA permission in your manifest and request it at runtime before starting the camera.