Week 1 Status Reports

Ajay

This week I researched approaches to the person re-identification problem. The approach we want to use is probably similar to the paper "Getting the Look: Clothing Recognition and Segmentation for Automatic Product Suggestions in Everyday Photos": segment the person, isolate the articles of clothing, and use the clothing to re-identify the person. The paper mentions DeepFashion, a fully labeled dataset that we can use to train a classifier. Essentially, I need a way to extract features from an image and a way to compare two images, and training on DeepFashion should let us extract those features. The other paper I read attempts person re-identification using pose detection; from the detected pose you can extrapolate details about people such as height or other characteristics, which might be useful later on if clothing is not enough to distinguish between people. I spent most of my week researching these methods and plan to start implementing them next week.
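To make the extract-and-compare idea concrete, below is a minimal sketch that embeds two image crops with an ImageNet-pretrained ResNet and compares them with cosine similarity. This is not our final pipeline: the generic backbone stands in for the classifier we would eventually train on DeepFashion, and the file names and any similarity threshold are placeholders.

    # feature_compare.py - sketch: embed two crops with a pretrained CNN, compare with cosine similarity.
    import torch
    import torch.nn.functional as F
    from PIL import Image
    from torchvision import models, transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    # Drop the final classification layer so the network outputs a feature vector.
    backbone = models.resnet18(pretrained=True)
    extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

    def embed(path):
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            return extractor(img).flatten(1)  # shape (1, 512)

    # Similarity near 1.0 suggests the same person/outfit; the cutoff is still to be decided.
    sim = F.cosine_similarity(embed("person_a.jpg"), embed("person_b.jpg")).item()
    print(sim)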

Our project is on schedule. I wanted to spend this week researching approaches for identifying people, which I accomplished.

I want to be able to create a bounding box around a person using the YOLO algorithm and to write the CNN that uses DeepFashion.
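As a starting point for that deliverable, here is a sketch of person detection with pretrained YOLOv3 files run through OpenCV's DNN module. It stands in for the from-scratch version we plan to write, and the .cfg/.weights paths and image name are placeholders.

    # detect_people.py - sketch: person bounding boxes from pretrained YOLOv3 via OpenCV's DNN module.
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
    out_layers = net.getUnconnectedOutLayersNames()

    img = cv2.imread("frame.jpg")
    h, w = img.shape[:2]
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), (0, 0, 0), True, False)
    net.setInput(blob)

    boxes, scores = [], []
    for output in net.forward(out_layers):
        for det in output:
            class_scores = det[5:]
            class_id = int(np.argmax(class_scores))
            conf = float(class_scores[class_id])
            if class_id == 0 and conf > 0.5:  # COCO class 0 is "person"
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(conf)

    # Non-max suppression drops overlapping duplicate boxes.
    for i in np.array(cv2.dnn.NMSBoxes(boxes, scores, 0.5, 0.4)).flatten():
        x, y, bw, bh = boxes[i]
        cv2.rectangle(img, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
    cv2.imwrite("frame_boxes.jpg", img)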

 

Vayum

This week I solidified the design requirements for the project. We are designing a web application that should integrate seamlessly with the hardware portions of the project. Given that we want to address wait times and occupancy management for different restaurants, the web application should display the relevant information:

⁃ restaurants on a clickable dashboard
⁃ the wait time provided for each restaurant
⁃ a page with further information once a restaurant has been clicked
⁃ this page will include a heat map of occupancy, past restaurant analytics, and predictive analysis based on previous data

I worked on the design documentation and designed how some of the frontend features should look. As for the backend design, it was challenging to decide on a framework. After doing extensive research, looking at React for the frontend and Java, C, and Python for the backend, I decided on Python with React. The technology that best integrates with this is the IoT-Ignite sensor API, and we expect our data to be in JSON format. I am on schedule for the project so far, as it is in its early phases and just starting out.
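As a working assumption about that format, one sensor record might look like the snippet below; every field name is a guess to be revised once we see real IoT-Ignite payloads, and the backend would parse it with the standard json module.

    # Assumed shape of one occupancy record from the sensor side (field names are placeholders).
    import json

    raw = '{"restaurant_id": "r1", "table_id": 4, "timestamp": "2019-01-28T18:42:00Z", "temperature_c": 22.5, "humidity_pct": 41.0, "occupied": true}'
    record = json.loads(raw)
    print(record["restaurant_id"], record["occupied"])  # r1 True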

 

Deliverables for the next week are to write a test client, with a socket and a server, to verify that a proper connection can be made and that the data is formatted as expected. In addition, I will need to adjust the database (and possibly the choice of database) if the data does not take the expected format. Once this is done, I will start writing the backend with test data in the following weeks.
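A minimal version of that test could look like the sketch below: a throwaway TCP server that checks whether what it receives parses as JSON, and a client that sends one fake reading. The host, port, and field names are placeholders.

    # test_server.py - accept one connection and check that the payload is valid JSON.
    import json
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 5000))  # placeholder port
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            data = conn.recv(4096)
            try:
                print("valid JSON from", addr, ":", json.loads(data.decode("utf-8")))
            except ValueError:
                print("connection worked but payload is not valid JSON:", data)

    # test_client.py - send one fake sensor reading to the server above.
    import json
    import socket

    reading = {"restaurant_id": "r1", "table_id": 4, "temperature_c": 22.5, "humidity_pct": 41.0}
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", 5000))
        cli.sendall(json.dumps(reading).encode("utf-8"))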

Peter

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. 

This past week I received the sensors we ordered, set up a Raspberry Pi with Raspbian, and began writing a script that logs sensor data to a CSV file every second.  Setting up the Pi with the OS was painless, and getting the sensor set up and connected to the right GPIO pins was fairly straightforward as well.  One issue I ran into was writing to the CSV file: the file ended up full of garbled (Chinese-looking) characters instead of the numeric readings from the sensor.  To get around this, we cast the sensor output to strings before writing to the file.
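For reference, the script looks roughly like the sketch below. It assumes a DHT22-style temperature/humidity sensor read through the Adafruit_DHT library on GPIO 4; the actual sensor model, pin, and file name may differ.

    # log_sensor.py - sketch: read a DHT-type sensor once per second and append readings to a CSV.
    import csv
    import os
    import time
    import Adafruit_DHT

    SENSOR = Adafruit_DHT.DHT22  # assumption: DHT22; swap for the actual part
    PIN = 4                      # assumption: data line wired to GPIO 4

    new_file = not os.path.exists("readings.csv")
    with open("readings.csv", "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "temperature_c", "humidity_pct"])
        while True:
            humidity, temperature = Adafruit_DHT.read_retry(SENSOR, PIN)
            if humidity is not None and temperature is not None:
                # casting the readings to strings before writing is what fixed the garbled output
                writer.writerow([str(time.time()), str(temperature), str(humidity)])
                f.flush()
            time.sleep(1)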

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule? 

My progress is on schedule.  Setbacks due to sensors not being able to detect human presence were accounted for in our schedule, but I have not yet tested our current humidity and temperature sensors enough to know whether they will be sufficient on their own.

What deliverables do you hope to complete in the next week?

By the end of next week, I want to determine whether the current sensors are enough to detect human presence at a table and to write some code that outputs a Boolean value indicating whether a table is occupied.
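One way that check could work, assuming presence shows up as readings rising above a rolling empty-table baseline, is sketched below; the window size and thresholds are placeholders to be tuned against real test data.

    # occupancy.py - sketch: decide whether a table is occupied from recent sensor readings.
    from collections import deque

    class OccupancyDetector:
        def __init__(self, window=30, temp_delta=1.0, humidity_delta=3.0):
            self.baseline = deque(maxlen=window)  # rolling baseline of (temp, humidity)
            self.temp_delta = temp_delta
            self.humidity_delta = humidity_delta

        def update(self, temperature, humidity):
            """Return True if the latest reading suggests someone is at the table."""
            if len(self.baseline) < self.baseline.maxlen:
                self.baseline.append((temperature, humidity))
                return False  # still building a baseline
            base_t = sum(t for t, _ in self.baseline) / len(self.baseline)
            base_h = sum(h for _, h in self.baseline) / len(self.baseline)
            occupied = (temperature - base_t >= self.temp_delta or
                        humidity - base_h >= self.humidity_delta)
            if not occupied:
                self.baseline.append((temperature, humidity))  # only track empty-table readings
            return occupied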

 

TEAM C0

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?
    • Our largest risk currently is the image recognition. Since we are planning on writing this part from scratch rather than using libraries, there could be issues getting it to work. Our contingency plan is to fall back on OpenCV for some of the harder parts of the detection. Currently we want to write the bounding-box algorithm ourselves and pass its output into a CNN for recognition. OpenCV has already solved a majority of these problems, so we can use those resources if we have trouble writing it from scratch (a short OpenCV sketch follows this list).
    • Another risk is that the sensors we have chosen may not be able to reliably detect someone sitting at a table.  To mitigate this risk we have additional types of sensors that we can fall back on if this happens.
  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?
    • No major changes were made to our design.
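As a concrete reference for the image-recognition fallback mentioned above, OpenCV ships a pretrained HOG+SVM people detector that needs no custom training; a minimal sketch follows, with file names as placeholders.

    # opencv_fallback.py - sketch: OpenCV's built-in HOG+SVM people detector as a contingency.
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    img = cv2.imread("frame.jpg")  # placeholder input frame
    boxes, weights = hog.detectMultiScale(img, winStride=(8, 8), padding=(8, 8), scale=1.05)

    for (x, y, w, h) in boxes:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("frame_people.jpg", img)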