Lucky’s Status Report for April 23, 2022

This week I did the following tasks:

  • Worked towards integration
    • To make integration more seamless, I decided to include the object detection source code alongside the web app source code
      • This required transitioning the source code from a script meant to run on the Raspberry Pi camera subsystem to an internal, object-oriented library of functions that the web app can call
        • This lets the web app supply the desired aisle and items, and receive the desired JSON object, without much hassle over transfer protocols
          • The web app will create a class instance for each aisle by calling the object-oriented library and supplying the necessary information, e.g., the items expected in that aisle (see the first sketch after this list)
          • Each class instance has its own set of detection functions, which allows detection to be distributed across multiple aisles if needed
      • Also created a testing script for the library
  • Began planning the functionality for displaying an image of the detected shelf (a cool feature idea from Prof. Savvides that we will try after completing the MVP)
    • The overall programmatic design I am working on is as follows (see the second sketch after this list):
      • Produce empty shelf image base layer
      • As the sliding window moves through during detection, store the presence of items along with the x and y coordinates
      • For each item that was detected as present, use the x and y coordinates to add an additional image layer in the location where it was detected
        • The overlaid images will be preprocessed, stored images from our global item set
  • Live testing and prep for demo
    • Finalized workable live capture code
    • Acquired groceries for testing and demo day
    • Began testing of live capture object detection
  • Final Presentation
    • Put together tests and visualizations for the presentation
  • Started working on the video script
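
To make the library refactor above more concrete, here is a minimal sketch of what an aisle-level class could look like. The class, method, and item names (AisleDetector, detect, to_json) are illustrative assumptions, not the actual names in our code.

import json


class AisleDetector:
    """One instance per aisle; holds the items expected on that shelf."""

    def __init__(self, aisle_id, expected_items):
        self.aisle_id = aisle_id
        self.expected_items = expected_items
        self.results = {}

    def detect(self, frame):
        # Placeholder for the real sliding-window / model inference; here we
        # simply mark every expected item as not-yet-found.
        self.results = {item: False for item in self.expected_items}
        return self.results

    def to_json(self):
        # JSON object the web app consumes directly, with no transfer protocol.
        return json.dumps({"aisle": self.aisle_id, "items": self.results})


# The web app creates one instance per aisle it tracks:
snack_aisle = AisleDetector("aisle-3", ["chips", "pretzels", "popcorn"])
snack_aisle.detect(frame=None)
print(snack_aisle.to_json())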

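And here is a rough sketch of the shelf-image layering idea, assuming Pillow for compositing; the dimensions, colors, and detection tuples below are made up for illustration only.

from PIL import Image

SHELF_SIZE = (800, 400)   # assumed base-layer dimensions
ITEM_SIZE = (80, 120)     # assumed size of the preprocessed item thumbnails

# 1. Produce the empty shelf image base layer (in practice, a stored photo).
shelf = Image.new("RGB", SHELF_SIZE, color="tan")

# 2./3. For each item the sliding window marked as present, paste its
#       preprocessed image at the recorded x and y coordinates.
detections = [("cereal", 40, 100), ("soup", 300, 120)]  # example data only
for name, x, y in detections:
    thumb = Image.new("RGB", ITEM_SIZE, color="gray")  # stand-in for the
    # stored image of `name` from our global item set
    shelf.paste(thumb, (x, y))

shelf.save("detected_shelf.png")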

In terms of schedule, I feel a bit pressured: I am basically hoping things go as planned while preparing for when they do not, which is almost always the case for engineering projects. My biggest worry is the trigger mechanism between the background subtraction subcomponent and the object detection subcomponent, because it requires communication between multiple devices.
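
One possible shape for that trigger is sketched below, under the assumption that the Raspberry Pi running background subtraction POSTs a small HTTP message to the machine running detection; the endpoint, payload fields, and even the choice of HTTP are placeholders, not our settled design.

import requests

DETECTION_HOST = "http://192.168.1.50:5000/trigger"  # hypothetical address


def notify_change(aisle_id: str) -> bool:
    """Tell the detection host that the background-subtraction device saw a change."""
    payload = {"aisle": aisle_id, "event": "shelf_changed"}
    try:
        resp = requests.post(DETECTION_HOST, json=payload, timeout=2)
        return resp.ok
    except requests.RequestException:
        # If the devices cannot talk, the detector simply is not triggered.
        return False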


What Next

  • Test, test, and test some more, because we need to produce quantitative data and documentation of the decisions we have made thus far
  • Finalize integrated system
  • Work on setup for demo
    • Cameras and shelf locations / distances
    • Script / overview of our demonstration
  • Work on video script
  • Storyboard poster components
