Week 11 Post

Ajay

This week we had our presentation and finished up all our metrics. In terms of results, we got an mAP@50 of 99% and an mAP@75 of 91%. Relative to our requirement of 80%, we exceeded it by quite a bit. Our matching accuracy was 73%, which works for our use case because of our queuing heuristics. Most of this week I spent building out the mAP computation function. I used the INRIAPerson dataset and wrote the intersection and union functions myself. We ran the function on the p2.xlarge instance. Other than that, we are working on our final report and getting ready for the poster.
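For reference, here is a minimal sketch of the intersection/union helpers behind the mAP computation, assuming boxes are given as (x_min, y_min, x_max, y_max) in pixel coordinates; the actual code and the INRIAPerson annotation format may differ slightly.

```python
# Minimal sketch of the intersection/union helpers behind the mAP function,
# assuming boxes are (x_min, y_min, x_max, y_max) in pixel coordinates.
def intersection(box_a, box_b):
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    return max(0, x2 - x1) * max(0, y2 - y1)

def union(box_a, box_b):
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return area_a + area_b - intersection(box_a, box_b)

def iou(box_a, box_b):
    # A detection counts as a true positive for mAP@50 when IoU >= 0.5.
    u = union(box_a, box_b)
    return intersection(box_a, box_b) / u if u else 0.0
```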

Vayum

This week we put all the final touches on the project to make sure it was functioning fully and behaving the way we expected. We tweaked our graph.js interface to make it animated and dynamic, and overall we increased our precision values.

 

Team Report

Overall our team is pretty much done. Our schedule is finalized and we are working together on the final report.


Week 10 Post

Ajay

This week I mostly spent cleaning up the code base and verifying that all the EC2 connections were still working properly. We actually faced an issue with IT where they kept marking my Raspberry Pi as an insecure device and restricting its network access. After repeated attempts to let them know that I had secured my device, they eventually realized there was a bug in the vendor software they used to scan the network, and our access was restored. The other half of the week I spent building the final presentation and practicing presenting. I also spent time verifying our final metrics and our mAP calculations. Most of these lined up closely with the numbers we wanted, except that our matching accuracy came in a bit low. In general, the product is working pretty well, but there are places where we could improve it.


Team Report

Our in-lab demo went well; we are still coordinating to practice our final demo and decide what it will look like. We are going to get started on the final report and poster next week.


Week 6 Status Report

Ajay

This week I ported my code to AWS. The biggest issue I faced was something I didn't even expect would be an issue: when I tried to reserve a p2.xlarge instance on AWS, I was instantly blocked. To resolve this, I had to contact AWS support and get the instance limit removed for my account. After that, we were able to test our system on AWS, and we immediately saw huge gains from the GPU. On a 6th-generation i5 processor, it took around 25 seconds to do object detection using the YOLO method. With the Tesla K80, it took 0.15 seconds. Currently, our bottleneck is how we store photos, as it takes around 2 seconds to upload each photo to the S3 bucket. If we figure out another way of storing these photos we might switch to it, but I think this is our best approach at this point. I also wrote the histogram function and sketched out the function for calculating average wait time. The average wait time is computed by keeping the total wait time so far and averaging in each new value. Next week I want to work on the matching functionality; I thought we would get to it this week, but the AWS setup ended up taking longer than expected.
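The average wait time is just an incremental average. A minimal sketch of the idea (the names here are illustrative placeholders, not the actual code):

```python
# Minimal sketch of the incremental average-wait-time idea described above;
# variable names are illustrative placeholders, not the actual code.
class AverageWaitTime:
    def __init__(self):
        self.running_total = 0.0  # total wait time observed so far, in seconds
        self.count = 0            # number of completed waits averaged in

    def add(self, wait_seconds):
        """Fold the next observed wait time into the running average."""
        self.running_total += wait_seconds
        self.count += 1
        return self.running_total / self.count
```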

Vayum

I finished the home page dashboard and the layout for the additional page after it, and connected the web service. We also sketched out the API for how this will interact with everything else, including the YOLO detection system and the hardware sensors. I still have to write the algorithms and get more data into our database, with actual data coming from our Pis and our sensors. We also ordered parts, so we are basically ready to put everything together. I would say we are on track.

 

Team Report

In terms of team status, we are a little bit behind since the AWS setup took so much time. I think we can recover the time, though, because now that we are set up on AWS, testing the system will go much faster. Other than that, our diagram for connecting the individual parts has been sketched out and the internetworking has been designed. We need to build and test it this week.

Week 5 Status Report

Ajay

We missed the last status report, so this is an update on what we've done overall. I've gotten the function that takes a photo, recognizes a person, and spits out the bounding box working. This was a straightforward application of the YOLO algorithm. After that, I wrote the histogram function to extract color data from the image. Currently I'm running this on a CPU, so my main task for this week is getting it running on an AWS instance. This week I also want to write the matching functionality to start getting matches between photos.

Vayum

I've made good progress overall on the web application portion of the project. I am currently working on the views and controller parts of the backend to deal with dynamically changing data. I've set up the database with all the relevant fields, gotten the front page working with our basic setup, and written the server code. This week, the plan is to finish the controller and try to get the additional pages working. I have been stuck on some bugs on the backend side of the project, so hopefully I will resolve those within the next few days. Apart from that, I will soon need to start integrating with Ajay and Peter to ensure that the transfer of data from the hardware portions of the project goes smoothly.


Team Report

Overall we are on schedule and are working well together. We have placed the orders for our parts and they should arrive within the next week. There are no major risks at this point, but we are monitoring the speed of the algorithms, as that is the main thing we are a little worried about. Overall, no schedule changes.

Week 3 Status Report

Vayum

This week I worked on developing the socket needed to transfer data from our cameras and sensors to a central server for all of our data processing. In addition, I specified the web app implementation and began working on the MVC architecture needed.

In addition to this, I looked more into the predictive features for determining how busy the restaurant is. After researching how Yelp and Google's Popular Times do it, I will be making a KNN classifier that buckets time segments from busiest to least busy, with a bar graph or something similar to display the relevant information.
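A rough sketch of what that classifier might look like, assuming scikit-learn and a simple (day-of-week, hour) feature vector; the training rows and labels below are made up, and the real ones will come from our occupancy data.

```python
# Rough sketch of the planned KNN busyness classifier, assuming scikit-learn
# and a simple (day_of_week, hour) feature vector. The training rows and
# labels are made-up placeholders.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[4, 12], [4, 18], [1, 15], [5, 19], [0, 10]]   # (day 0-6, hour 0-23)
y_train = ["busy", "busy", "quiet", "busy", "quiet"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# Predict how busy Saturday at 1 PM is expected to be.
print(model.predict([[5, 13]]))
```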

The next step is to start generating test data to check whether my socket works and to figure out how best to format my database. Most of the actual backend integration with the front end will begin after the break ends. I am on schedule so far with everything I am doing, and I think we are progressing well.

 

Ajay

This week we spent most of our time on the design review, writing out the report and getting everything well specced out.

In terms of the re-identification work, I did not accomplish as much as I would have liked. I got YOLOv3-tiny to work, but this turned out to be less important, since a GPU instance will be plenty fast enough with the default YOLO detector for our purposes. I did more reading, and I think I have figured out the algorithmic approach I want to take to determine the dominant color regions for the feature vector. After blurring the image with a convolution to smooth the colors together, I will try a connected-components approach to extract the color blobs. I am also looking at extracting the RGB/YUV histogram into a feature vector as well. Next week I want to place the orders for the Raspberry Pis and cameras and get a rudimentary feature extractor working.
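A rough sketch of the blur-then-connected-components idea, assuming OpenCV; the blur kernel, quantization bin size, and minimum blob area are placeholder values, not tuned parameters.

```python
# Rough sketch of the planned dominant-color extraction, assuming OpenCV:
# blur the person crop, coarsely quantize colors, then keep the largest
# connected components of each quantized color as "blobs". Kernel size,
# bin_size, and min_area are placeholders, not tuned values.
import cv2
import numpy as np

def dominant_color_blobs(person_crop, bin_size=64, min_area=200):
    blurred = cv2.GaussianBlur(person_crop, (15, 15), 0)   # smooth colors together
    quantized = (blurred // bin_size) * bin_size           # coarse color quantization
    blobs = []
    for color in np.unique(quantized.reshape(-1, 3), axis=0):
        mask = cv2.inRange(quantized, color, color)        # binary mask of this color
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        for i in range(1, n):                              # label 0 is the background
            area = int(stats[i, cv2.CC_STAT_AREA])
            if area >= min_area:
                blobs.append((tuple(int(c) for c in color), area))
    return sorted(blobs, key=lambda b: -b[1])              # biggest blobs first
```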

 

Week 2 Status Report

Ajay

This week I spent working with Darknet and the YOLOv3 detector. After running it on my laptop, I was able to get the weights loaded correctly and to detect people within images. The one issue I found was that it took about 8 seconds per image on a laptop CPU, which means I need to use a GPU to classify within our performance constraints. Next week I want to spend time working with YOLOv3-tiny, which is a more lightweight version of the object detector and might be good enough for our use case. Next week I also want to work more with the pixel data and the convolution function to see if I can extract color data from the image.

 

Vayum

This week I interfaced with the sensor data to see if the data we were getting would match the format we want to store in our DB. I realized I need to write a standalone data-parsing program to convert the JSON fields into fields that are easily accessible in the backend. In addition to this, I realized that I might also need to do the same for the camera data.
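A minimal sketch of what that standalone parser could do, assuming a nested JSON payload; the sample payload and field names are made up, and the real sensor output may look different.

```python
# Minimal sketch of the standalone parsing idea: flatten nested JSON into
# flat, column-like keys for the database. Sample payload is a placeholder.
import json

def flatten(payload, parent_key="", sep="_"):
    items = {}
    for key, value in payload.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

raw = '{"sensor": {"id": 3, "readings": {"temperature": 22.5, "humidity": 41.0}}}'
print(flatten(json.loads(raw)))
# {'sensor_id': 3, 'sensor_readings_temperature': 22.5, 'sensor_readings_humidity': 41.0}
```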

 

Next week I am going to work on writing the socket and apply the same parsing idea to the socket data, assuming I can properly form a connection. We also decided on using SQL for the database, as it was the most common and best choice we had.

 

I would say that I am on schedule barring any setbacks next week and overall the project is coming along fine.


Week 1 Status Reports

Ajay

This week I spent doing research on how to solve the problem of person re-identification. The approach we want to use is probably similar to the paper "Getting the Look: Clothing Recognition and Segmentation for Automatic Product Suggestions in Everyday Photos." We want to segment the person, isolate the articles of clothing, and use the clothing to re-identify the person. The paper mentioned using DeepFashion, which is a fully labeled dataset that we can use to train a classifier. Essentially, I need a way to extract features from an image and a way to compare two images, and using DeepFashion we should be able to extract some features from the image. The other paper I was reading attempted person re-identification using pose detection. By detecting pose, you can extrapolate details about people such as height or other characteristics. This might be useful later on if clothing is not enough to distinguish between people. I spent most of my week doing research on these methods and plan to implement them next week.

Our project is on schedule; I wanted to spend this week researching approaches for identifying humans, which I accomplished.

I want to be able to create a bounding box around a human using the YOLO algorithm and write the CNN that uses DeepFashion.

 

Vayum

This week I solidified the design requirements for the project. We are designing a web application that should seamlessly integrate with the hardware portions of the project. Given that we want to solve wait-time and occupancy management for different restaurants, the web application should display the relevant information:

⁃ restaurants on a clickable dashboard

⁃ wait times provided for each dashboard

⁃ a page with further information once the restaurant has been clicked

⁃ this page will include a heat map of occupancy, past restaurant data analytics, and future predictive analysis based on previous information

I worked on the design documentation and designed how some of the frontend features should look. As for the backend design, it was challenging to decide on a framework. After doing extensive research, looking at React for the frontend and Java, C, and Python for the backend, I decided on using Python with React. The technology that best integrates with this is the IoT-ignite sensor API, and we expect our data to be in JSON format. I am on schedule for the project so far, as it is in its early phases and just starting out.

 

Deliverables for next week are to write a test client and server connected by a socket, to verify that a proper connection can be made, and to check that the data is properly formatted. In addition to this, I will need to adjust the database, and possibly the choice of database, if the data does not take the expected format. Overall, once this is done, I will start writing the backend with test data in the weeks to come.
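A minimal sketch of such a test client and server, assuming plain TCP on localhost and a single JSON message; the host, port, and sample payload are placeholders.

```python
# Minimal sketch of the socket test: run run_server() in one process and
# run_client() in another. Host, port, and the sample payload are placeholders.
import json
import socket

HOST, PORT = "127.0.0.1", 5005

def run_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()                      # wait for the test client
        with conn:
            data = conn.recv(4096)
            print("received:", json.loads(data.decode()))

def run_client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(json.dumps({"table": 2, "occupied": True}).encode())
```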

Peter

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. 

This past week I received the ordered sensors, set up a Raspberry Pi with Raspbian, and began writing a script to log sensor data to a CSV file every second. Setting up the Pi with the OS was painless. Getting the sensor set up and connected to the right GPIO pins was fairly straightforward as well. One issue I encountered was writing to the CSV file: somehow the file contained a bunch of garbled (Chinese-looking) characters instead of doubles from the sensor. To get around this we cast the sensor output to strings before writing to the file.
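A rough sketch of that logging loop, assuming a DHT22-style temperature/humidity sensor read through the Adafruit_DHT library on GPIO pin 4; the sensor type, pin, and file name are placeholders for whatever we actually wired up.

```python
# Rough sketch of the logging loop, assuming a DHT22-style sensor read through
# the Adafruit_DHT library; sensor type, pin, and file name are placeholders.
import csv
import time
import Adafruit_DHT

SENSOR, PIN = Adafruit_DHT.DHT22, 4

with open("sensor_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "humidity", "temperature"])
    while True:
        humidity, temperature = Adafruit_DHT.read_retry(SENSOR, PIN)
        # Cast readings to strings before writing; raw values are what
        # produced the garbled characters mentioned above.
        writer.writerow([str(time.time()), str(humidity), str(temperature)])
        f.flush()
        time.sleep(1)
```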

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule? 

My progress is on schedule.  Setbacks due to sensors not being able to detect human presence were accounted for in our schedule, but I have not fully tested our current humidity and temperature sensors enough to know if they will be sufficient by themselves.

What deliverables do you hope to complete in the next week?

By the end of next week, I want to determine whether the current sensors are enough to detect human presence at a table and write some code that generates a Boolean value indicating whether a table is occupied.
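One possible shape for that occupancy check, assuming a table reads as occupied when its temperature and humidity rise enough above a baseline; the deltas and sample values are made-up placeholders to be tuned against real readings.

```python
# Minimal sketch of the planned occupancy check: a table counts as occupied
# when temperature and humidity rise enough above a baseline. The deltas and
# sample values are made-up placeholders to be tuned against real readings.
def is_occupied(baseline, reading, temp_delta=1.0, humidity_delta=3.0):
    return (reading["temperature"] - baseline["temperature"] >= temp_delta and
            reading["humidity"] - baseline["humidity"] >= humidity_delta)

baseline = {"temperature": 21.0, "humidity": 40.0}
print(is_occupied(baseline, {"temperature": 22.4, "humidity": 44.0}))  # True
```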

 

TEAM C0

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?
    • Our largest risk currently is the image recognition. Since we are planning on writing this part from scratch rather than using libraries, there could be some issues getting it to work. Our contingency plan is to fall back on OpenCV for some of the harder parts of the image detection. Currently we want to write the bounding box algorithm ourselves and pass the result into a CNN to do recognition. OpenCV has already solved a majority of these problems, so we could use those resources if we have trouble writing it from scratch (see the sketch after this list for roughly what that fallback could look like).
    • Another risk is that the sensors we have chosen are not able to reliably detect someone sitting at a table. To mitigate this risk, we have additional types of sensors that we can fall back on if this happens.
  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?
    • No major changes were made to our design.
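
As a concrete picture of the OpenCV contingency mentioned above, here is a rough sketch using OpenCV's built-in HOG + SVM pedestrian detector, which produces person bounding boxes without any custom training; the image path is a placeholder, and this is the fallback rather than our primary from-scratch approach.

```python
# Rough sketch of the OpenCV fallback: the built-in HOG + SVM pedestrian
# detector returns person bounding boxes without any custom training.
# "frame.jpg" is a placeholder path.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("frame.jpg")
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8))
for (x, y, w, h) in boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("frame_detections.jpg", image)
```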