KITTI Sensor Fusion

Multi-Modal Sensor Integration for 3D Object Detection

Sensor Fusion Approaches

Early Fusion

Fuses raw data from multiple sources (camera, LiDAR, GPS/IMU) and then performs object detection on the combined data.

Late Fusion

First detects objects independently in each sensor stream, then fuses the detection results.

Modified Late Fusion (This Implementation)

This notebook uses a hybrid approach:

  1. Detect objects in 2D camera images using YOLOv5
  2. Associate detected object centers with LiDAR point cloud data to obtain depth
  3. Use GPS/IMU data to determine world coordinates
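Steps 1 and 2 can be sketched as follows: project the LiDAR cloud into the image plane using the calibration matrices, then take the median depth of the points that land near each detected box center. This is a minimal sketch, not the notebook's exact code; the function and parameter names (`project_lidar_to_image`, `depth_at_box_center`, `radius`) are illustrative, and `T_velo_to_cam` / `P_rect` stand in for the KITTI-style LiDAR-to-camera extrinsic and rectified projection matrices.

```python
import numpy as np

def project_lidar_to_image(points, T_velo_to_cam, P_rect):
    """Project Nx3 LiDAR points into pixel coordinates.

    T_velo_to_cam: 4x4 LiDAR-to-camera extrinsic (from KITTI calibration).
    P_rect: 3x4 rectified camera projection matrix.
    Returns (uv, depths): Mx2 pixel coords and the depth of each kept point.
    """
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous
    cam = T_velo_to_cam @ pts_h.T          # 4xN points in the camera frame
    in_front = cam[2] > 0                  # discard points behind the camera
    cam = cam[:, in_front]
    uvw = P_rect @ cam                     # 3xN unnormalized pixel coords
    uv = uvw[:2] / uvw[2]                  # perspective divide
    return uv.T, cam[2]

def depth_at_box_center(box, uv, depths, radius=5.0):
    """Median depth of projected points within `radius` px of the box center.

    box: (x1, y1, x2, y2) from the 2D detector (e.g. YOLOv5).
    """
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    d2 = (uv[:, 0] - cx) ** 2 + (uv[:, 1] - cy) ** 2
    near = d2 < radius ** 2
    return float(np.median(depths[near])) if near.any() else None
```

Using the median rather than the single nearest point makes the depth estimate more robust to stray LiDAR returns from the background around the object's silhouette.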

Note: While labeled as "Early Fusion" in the title, this is fundamentally a late fusion approach because detection happens first on camera data, then detections are enriched with LiDAR depth information.
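Step 3 above, recovering world coordinates from GPS/IMU data, can be sketched as a back-projection followed by two rigid transforms. This is a hedged illustration, not the notebook's implementation: `pixel_depth_to_world` is a hypothetical helper, `K` is the 3x3 camera intrinsic matrix, and `T_cam_to_imu` / `T_imu_to_world` stand in for the camera-to-IMU calibration and the IMU pose built from a KITTI drive's GPS/IMU (OXTS) readings.

```python
import numpy as np

def pixel_depth_to_world(u, v, depth, K, T_cam_to_imu, T_imu_to_world):
    """Back-project a pixel with known depth to 3D, then map it to the world frame.

    K: 3x3 camera intrinsics.
    T_cam_to_imu, T_imu_to_world: 4x4 homogeneous rigid transforms; the latter
    would typically be assembled from the GPS/IMU pose of the vehicle.
    """
    # Inverse pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    p_cam = np.array([x, y, depth, 1.0])   # homogeneous point in camera frame
    return (T_imu_to_world @ T_cam_to_imu @ p_cam)[:3]
```

Chaining the transforms in this order (camera → IMU → world) keeps each calibration matrix independently testable against a known pose.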

Sensor Details

Why 3D Detection?

Detection in 3D space is crucial for autonomous vehicles: it provides the precise physical location of objects in the world, enabling better path planning and collision avoidance.

Demo Video