(12/04) Weekly Status Update: Shivi

Well, this is more of a 1.5-week report than a weekly one, and I got several things done. They were all short tasks.

1) Completion of Server Integration
Jeffrey integrated my testing server into his final server, so the two of us sat together and debugged anything that went wrong or was missed during integration. Once integrated, we ran the server for a couple of iterations of receiving images, and it worked well.

2) Implementation of Solutions from Last Week
I enlarged the ignore zone to exclude extra space that is not part of the clutter zone. On top of that, I added code to dismiss an image if a person was blocking the camera's view, which is detected by checking the biggest contour.
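The idea behind the blocking check can be sketched roughly like this. This is a minimal sketch, not our actual code: in the real pipeline the biggest contour would come from OpenCV, while here the largest blob is found with a plain flood fill, and the mask shape, ignore-zone coordinates, and 40% threshold are all made-up assumptions:

```python
from collections import deque
import numpy as np

def largest_blob_area(mask: np.ndarray) -> int:
    """Pixel area of the largest connected foreground blob (4-connectivity).
    Stand-in for taking the biggest contour by area in OpenCV."""
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    best = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # BFS flood fill from this seed pixel
                area, q = 0, deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                best = max(best, area)
    return best

def should_dismiss(mask: np.ndarray, ignore_zone: tuple,
                   person_fraction: float = 0.4) -> bool:
    """Zero out the ignore zone, then dismiss the frame if the biggest
    blob covers an implausibly large share of it (a person in view)."""
    y0, y1, x0, x1 = ignore_zone           # hypothetical zone coordinates
    mask = mask.copy()
    mask[y0:y1, x0:x1] = 0                 # exclude the enlarged ignore zone
    return largest_blob_area(mask) > person_fraction * mask.size
```

A frame flagged by `should_dismiss` would simply be skipped instead of being counted.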

3) Retesting the Images
After implementing these changes, the clutter counter showed improved accuracy on the images.

4) Testing of Unique Clutter Detection
I tested my algorithm for uniquely identifying objects on the counter by sending live pictures every 60 seconds while casually placing several objects in the clutter zone in various orientations. Testing gave me 80.2% accuracy, uniquely detecting 57 of the 71 objects placed in the clutter zone. From the objects that weren't identified separately, I learned the following:
– Intersecting shadows can cause separate objects to be detected as a single contour, so I have to find a way to ignore shadows.
– The camera in our testing area is mounted at an angle and uses a fisheye lens, so objects at the edge of the counter can appear to be overlapping even when they aren't. This issue could not be dealt with directly; fixing it would require an extensive machine-learning algorithm, which is out of scope for our project.

I believe that implementing shadow removal will improve the accuracy further and help us reach the desired accuracy of 85%.
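One common way to ignore shadows, which is a candidate for this, not what we have implemented yet, is to compare each frame against a background reference in HSV: a shadow pixel keeps roughly the background's hue and saturation, but its value drops by a bounded ratio. All the thresholds below are placeholder assumptions:

```python
import numpy as np

def shadow_mask(frame_hsv: np.ndarray, bg_hsv: np.ndarray,
                v_lo: float = 0.3, v_hi: float = 0.9,
                s_tol: float = 0.15, h_tol: float = 10.0) -> np.ndarray:
    """Classic HSV shadow test. Arrays are float images with
    H in [0, 360), S and V in [0, 1]; returns a boolean mask."""
    h_f, s_f, v_f = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    h_b, s_b, v_b = bg_hsv[..., 0], bg_hsv[..., 1], bg_hsv[..., 2]
    ratio = v_f / np.maximum(v_b, 1e-6)    # how much darker than background
    dh = np.abs(h_f - h_b)
    dh = np.minimum(dh, 360.0 - dh)        # hue wraps around
    return ((ratio >= v_lo) & (ratio <= v_hi)
            & (np.abs(s_f - s_b) <= s_tol) & (dh <= h_tol))
```

Pixels flagged by `shadow_mask` would be cleared from the foreground mask before contour extraction, so two objects joined only by an intersecting shadow separate into distinct contours.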

5) Preparing for Demo
Our Hub broke on the Friday of Thanksgiving break and couldn't be fixed until Monday because of holiday travel plans, so our preparation time was cut short. I still managed to prepare a live server and client that could send and receive a live image and perform manipulation on it. Unfortunately, right before the demo the camera started taking dark images, and I had to debug that at the last minute, so I couldn't showcase it. The bug was later resolved by making the client sleep for 2 seconds after initialization so that the camera could expose properly, and by fixing the exposure settings to take consistent images.

6) Final Presentation Slides
We prepared the final presentation slides on Sunday evening and night, reporting extensively on what happened in our testing, and took several pictures of clutter detection for the slides.

7) Testing of Bilateration*
We switched from trilateration to bilateration* due to several issues. I helped David and Jeffrey carry out the testing of the wearable, and placed objects in the scene to increase the clutter.
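For context, bilateration in 2D reduces to intersecting two circles centered on the anchors, which generally yields two mirror-image candidate points; some extra assumption (such as which side of the anchor line the wearable can be on) picks one. A small geometric sketch, with nothing project-specific in it:

```python
import math

def bilaterate(p1, p2, r1, r2):
    """Intersect circles of radius r1, r2 around anchors p1, p2.
    Returns the two candidate points (mirror images across the anchor
    line), or None if the circles don't intersect."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None                        # no intersection (or same anchor)
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from p1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half-length of the chord
    mx, my = p1[0] + a * dx / d, p1[1] + a * dy / d
    off = (-dy * h / d, dx * h / d)
    return ((mx + off[0], my + off[1]), (mx - off[0], my - off[1]))
```

This ambiguity between the two returned points is the price of dropping the third anchor that trilateration would have used.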

ToDo this coming week:
– Implement Shadow Removal
– Test after Shadow Removal
– Write a script that starts the client on boot and, if the server/client crashes, keeps restarting the client every 5 minutes until it finds a connection.
– Polish the GUI
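The restart behavior in the third item could be a small supervisor loop along these lines. The `connect_client` callable and the 5-minute interval are placeholders for the real client entry point; launching it at boot would be handled separately (e.g. a systemd unit or a crontab `@reboot` entry):

```python
import time

def run_with_retry(connect_client, retry_interval_s: float = 300.0,
                   max_attempts=None):
    """Keep (re)starting the client until it connects.
    connect_client should raise ConnectionError on failure and
    return normally once it has connected and finished."""
    attempts = 0
    while max_attempts is None or attempts < max_attempts:
        attempts += 1
        try:
            return connect_client()        # returns once connected
        except ConnectionError:
            time.sleep(retry_interval_s)   # wait 5 minutes, then retry
    raise ConnectionError("client never connected")
```

Wrapping the client this way also covers the crash case: an unhandled `ConnectionError` just puts it back in the retry loop instead of leaving the Hub offline.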


