Weekly Report #12 – 5/4

Jerry:

I took another pass at training the neural network. This time I dropped the Google Open Images dataset because of the poor quality of its bounding boxes and reverted to the COCO + Caltech Pedestrian combination, adding negative training examples from the NYU Depth v2 dataset, which has many images of relatively cluttered rooms with few people in view. I manually removed every image that contained a person and trained the network longer than before. This time we had success: the system produces far fewer false positives and is better at drawing bounding boxes for people who are partially out of view.
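For context, adding negatives to a COCO-style training set amounts to listing the person-free images with no annotation entries at all; the detector is then penalized during training for firing on room clutter, which is what drives down the false positives. A minimal sketch (the paths and annotation-file layout here are hypothetical, not our actual setup):

```python
import json
from pathlib import Path
from PIL import Image

# Hypothetical paths; the real file layout isn't part of this report.
NYU_DIR = Path("data/nyu_depth_v2/person_free")   # the manually vetted, person-free frames
ANN_FILE = Path("data/annotations/train_combined.json")

ann = json.loads(ANN_FILE.read_text())
next_id = max(img["id"] for img in ann["images"]) + 1

# Register each person-free frame as an image with no annotations at all.
for path in sorted(NYU_DIR.glob("*.jpg")):
    width, height = Image.open(path).size
    ann["images"].append({
        "id": next_id,
        "file_name": path.name,
        "width": width,
        "height": height,
    })
    next_id += 1

ANN_FILE.write_text(json.dumps(ann))
```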

We also tweaked the anchoring algorithm to give a grace period of roughly a second after a bounding box is lost, changed the smoothing algorithm to predict the motion of bounding boxes (up to a capped number of frames into the future), and tuned the parameters for a smoother tracking experience. The system can now track people who are moving relatively fast, and it is quite good at zooming in while keeping a person’s face in view.
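Roughly, the combined change looks like the sketch below (the class and constant names are invented for illustration; the real code keeps more state): when a detection drops out, the tracker coasts on a smoothed velocity estimate for a capped number of frames, and only drops the box once the grace period expires.

```python
import numpy as np

GRACE_FRAMES = 30        # ~1 s at 30 fps before a lost box is dropped (assumed frame rate)
MAX_PREDICT_FRAMES = 10  # cap on how far ahead we extrapolate motion

class SmoothedTrack:
    """Illustrative grace-period + motion-prediction smoothing for one box."""

    def __init__(self, box):
        self.box = np.asarray(box, dtype=float)   # [x, y, w, h]
        self.velocity = np.zeros(4)               # per-frame change of the box
        self.frames_since_seen = 0

    def update(self, box=None, alpha=0.5):
        if box is not None:
            # Fresh detection: update the smoothed velocity and snap to the new box.
            box = np.asarray(box, dtype=float)
            self.velocity = alpha * (box - self.box) + (1 - alpha) * self.velocity
            self.box = box
            self.frames_since_seen = 0
            return self.box
        # Detection lost: coast on predicted motion, but only up to the cap.
        self.frames_since_seen += 1
        if self.frames_since_seen <= min(GRACE_FRAMES, MAX_PREDICT_FRAMES):
            self.box = self.box + self.velocity
        # Keep the box alive through the grace period, then give it up.
        return self.box if self.frames_since_seen <= GRACE_FRAMES else None
```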

We also made the person-tracking code run automatically on boot. The remaining steps are to build the enclosure, collect some demo footage, and prepare for the final demo.

Nathan:

This week I’ve been spending most of my time preparing the enclosure for the final presentation. I’m not quite done sculpting the box in Solidworks, but I plan to have it complete by tomorrow (Sunday) morning and will post an update then. For now, here are the specifics: it will be 9″×7″×4″ (dimensions subject to review), made of plywood procured from the Makerspace and laser cut to fit. It’s taking slightly longer than I expected because I’m making the joints interlock, both for aesthetics and for superior mechanical properties; it also makes it easier to test-fit everything before we glue it all together. There will be holes in the back for three things: a power button, a 12V DC jack for power, and a combined HDMI and USB port. The USB portion will be vital for attaching a storage device, while the HDMI will provide optional display functionality. I’ll edit in some good pictures once it’s done; I plan to have the cutting finished by 10am or so and the final assembly done by noon. I’ll leave time to redo it if something goes wrong, just in case, but I’m confident no such situation will come to pass. This will be my last significant contribution to the project before the final report.

Karthik Natarajan:

This week we didn’t have too much to do, as most of the project was already done ahead of the final presentation. After Jerry finished training the updated neural network, I helped him modify the code and tweak some of the parameters to make our motion smoother with the new network. On top of that, I have been helping Jerry work with the motion sensor to incorporate the shutdown/turn-on feature. Outside of that, I helped Nathan plan out the measurements for the enclosure so we can use the laser cutter tomorrow morning. At this point most of the risk factors are gone and we are pretty much on schedule, so we should be ready for the public demo. 🙂
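The shutdown/turn-on logic is still in progress, but its shape is roughly the following. This is a sketch only: it assumes a PIR-style sensor readable through RPi.GPIO, and the pin number, idle timeout, and wake/sleep hooks are placeholders for whatever we end up using.

```python
import time
import RPi.GPIO as GPIO   # assumes a PIR sensor on a Raspberry Pi-style GPIO header

PIR_PIN = 17              # hypothetical pin; the actual wiring isn't settled yet
IDLE_TIMEOUT_S = 300      # idle the system after 5 minutes without motion (placeholder)

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)

last_motion = time.monotonic()
tracking_active = True

while True:
    if GPIO.input(PIR_PIN):          # PIR output is high while motion is detected
        last_motion = time.monotonic()
        if not tracking_active:
            tracking_active = True   # wake: resume the tracking pipeline here
    elif tracking_active and time.monotonic() - last_motion > IDLE_TIMEOUT_S:
        tracking_active = False      # sleep: pause tracking / idle the motors here
    time.sleep(0.1)
```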

Team:

It’s the home stretch, but we’re in good shape for the final demo on Monday. The only unfinished items are the final tuning of the tracking, which is more of an open-ended polishing step than a well-defined to-do item, and integration with the enclosure, which should be completed by midday tomorrow. There are no remaining risks at this late stage, and barring some sort of catastrophe we should be all set for a successful demo on Monday. There are no other changes to report.
