- Worked with Lucky to create the final presentation slides
- After that, I wrote speaker notes and practiced my presentation, making sure I knew what my setup would be like when I present
- Listened to everyone’s final presentations during class time and filled out peer review forms
- Gave my team’s final presentation on Wednesday during class
- Discussed integration trade-offs and possible design changes with Lucky
- e.g. if integration between the RPI/ARDUCAM system and the web application fails, we can run the background subtraction code as a library on the laptop hosting the web application and use a wired webcam to run both people and object detection (see the first sketch after this list)
- e.g. the RPI/ARDUCAM could send not only a trigger to the web application to run object detection, but also the captured picture itself. This would let us feed an image directly into the object detection code without using the external webcam.
- Worked on integration between the RPI/ARDUCAM and the web application (Very close to being finished!)
- I was able to send a trigger from the RPI to the web application, and tomorrow I will try the same mechanism in the reverse direction (see the second sketch after this list)
- Discussed poster ideas with Lucky and will finalize the poster design tomorrow
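As a rough illustration of the fallback option above, here is a minimal sketch of what running the background subtraction locally could look like, using OpenCV’s MOG2 subtractor on a wired webcam. The function name, pixel threshold, and camera index are placeholders, and our actual background subtraction code may differ:

```python
# Hypothetical fallback: run background subtraction on the laptop hosting the
# web app, reading from a wired webcam, and hand any "motion" frame to the
# object detection step in the same process.
import cv2

def watch_webcam(on_motion, min_foreground_px=5000):
    cap = cv2.VideoCapture(0)                        # wired webcam on the laptop
    subtractor = cv2.createBackgroundSubtractorMOG2()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)               # foreground mask for this frame
        # A large foreground count suggests a person/object entered the scene
        if cv2.countNonZero(mask) > min_foreground_px:
            on_motion(frame)                         # pass the frame to object detection
    cap.release()
```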
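The trigger mechanism itself isn’t spelled out in this post, so the following is only a sketch under the assumption that it is a plain HTTP POST handled by a Flask route on the web application; the /trigger endpoint, port, and LAPTOP_IP placeholder are hypothetical:

```python
# Hypothetical RPI -> web app trigger, assuming a plain HTTP POST.
import requests
from flask import Flask, request

# --- RPI side: send the trigger, optionally attaching the ARDUCAM capture ---
def send_trigger(image_path, host="http://LAPTOP_IP:5000"):
    with open(image_path, "rb") as f:
        requests.post(f"{host}/trigger", files={"image": f}, timeout=5)

# --- web app side: receive the trigger and kick off object detection ---
app = Flask(__name__)

@app.route("/trigger", methods=["POST"])
def trigger():
    image_bytes = request.files["image"].read()      # the attached picture, if sent
    # run_object_detection(image_bytes)              # hypothetical hand-off to Lucky's code
    return "ok", 200
```

Attaching the picture to the same request is what would let the object detection code run without the external webcam.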
I believe we are on track for the final demo later this week, with roughly one full day’s worth of work remaining. Hopefully we can finish that as soon as possible so that we have a few days of slack time to polish up our system, run some more tests, and make sure everything is ready to go for Friday.
During this upcoming week, I plan to do the following:
- Finalize integration and testing
- connect from the web application to the RPI, using the same mechanism that proved successful today in the other direction (see the sketch at the end of this list)
- integrate Lucky’s object detection code with the web application
- Complete poster
- Complete final video
- Complete final paper
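For the web application to RPI direction, here is a minimal sketch assuming we reuse the same HTTP approach, with a small Flask listener running on the RPI; the /capture route, port, and RPI_IP placeholder are hypothetical:

```python
# Hypothetical web app -> RPI link, reusing the same HTTP mechanism.
import requests
from flask import Flask

app = Flask(__name__)

@app.route("/capture", methods=["POST"])
def capture():
    # e.g. ask the ARDUCAM capture loop to grab and send back a fresh frame
    return "capture requested", 200

def request_capture(rpi_host="http://RPI_IP:5001"):
    # Called from the web application when it wants the RPI to act
    requests.post(f"{rpi_host}/capture", timeout=5)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)               # run this listener on the RPI
```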