Jun Wei’s Status Report for Feb 9 2025

1. Personal accomplishments for the week

1.1 Image stitching experimentation

I experimented with image stitching to explore the feasibility of a one-camera solution. The rationale for using image stitching over merely relying on a single high field of view (FOV) camera was to:

  • Mitigate obstruction/obscurity from other objects; and
  • Gather information from different POVs (through multiple images).

I made use of the OpenCV library’s Stitcher_create() function in Python. OpenCV’s Stitcher class provides a high-level API with a built-in stitching pipeline that performs feature detection and mapping, as well as homography-based estimation for image warping. I captured images with my smartphone camera, using both the regular (FOV 85˚) and the ultra-wide (FOV 120˚) lenses. However, I found that image stitching failed on images taken with the latter. As such, I only have outputs from the regular FOV lens:

Stitched image outputs:

 

These were my learning points and takeaways:

  • Image stitching is best suited for cameras with low FOVs as higher FOVs tend to warp features on the extreme end;
  • Images need some overlap for feature mapping (ideally around 1/3);
  • Too much overlap can lead to unwanted warping during stitching/duplicated features; and
  • Drastic changes in POV (either due to sparse image intervals or objects being extremely close to the camera, such as the plastic bottle above) can cause object duplication due to diminished feature mapping effectiveness.
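The overlap guideline above can be turned into a back-of-envelope calculation: given a camera FOV and a target overlap fraction, how far can the camera pan between shots, and how many shots cover a scene of a given angular extent? The numbers here are illustrative, not measurements from my experiment:

```python
import math


def pan_step(fov_deg: float, overlap: float) -> float:
    """Angular step between consecutive shots for a given overlap fraction."""
    return fov_deg * (1.0 - overlap)


def shots_needed(scene_deg: float, fov_deg: float, overlap: float) -> int:
    """Number of shots to cover scene_deg, stepping by pan_step each time."""
    if scene_deg <= fov_deg:
        return 1
    return 1 + math.ceil((scene_deg - fov_deg) / pan_step(fov_deg, overlap))
```

For example, with the regular 85˚ lens and ~1/3 overlap, the step is about 57˚ per shot, so a 180˚ sweep needs three captures. These helpers are a rough planning aid only; the real capture interval also depends on camera placement and object distance.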

For comparison, I have the following single high-FOV shot taken from my smartphone:

In all, I believe image stitching does confer significant advantages over a single high FOV shot:

  • More information captured (apples and blue container obscured by the transparent bottle in the high-FOV shot)
  • Reduced object warp/deformation, which is crucial for accurate object classification

Following this, a natural extension would be to explore effective image stitching pipeline implementations on an embedded platform, or even a real-time implementation.

2. Progress status

While I did not fill out the equipment requisition form as early in the week as I had hoped, I was able to get a head start on the image stitching algorithm, which in turn better informs decisions on 1) camera placement, 2) frequency of image capture, and 3) desired camera FOV. I will defer the camera testing and data transmission pipelines to the coming week, which is when I will (hopefully) have received the equipment.

3. Goals for upcoming week

For the upcoming week, I would like to

  • Acquire the equipment outlined in my previous update
  • Test the camera within the confines of the fridge
  • Develop data transmission pipeline from the camera to the RPi
  • Develop transmission pipeline from RPi to cloud server (ownCloud), if time permits
