This week we showed our final demo, which included our MVP: a user’s wearable device lights up whenever it is their turn to do a task, and the button rotates the task order when someone finishes …
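The round-robin rotation described above can be sketched with a simple queue. This is just an illustration of the button-press behavior, not our actual hub code; the names and helper functions are hypothetical.

```python
from collections import deque

# Hypothetical sketch of the task-rotation logic: the person at the
# front of the queue is "on duty" (their wearable lights up), and a
# button press rotates the order. Names are placeholders.
roommates = deque(["Alice", "Bob", "Carol"])

def current_assignee(queue):
    """The person whose wearable should currently be lit."""
    return queue[0]

def on_button_press(queue):
    """Rotate the task order when the current task is finished."""
    queue.rotate(-1)
    return queue[0]
```

In this sketch a button press simply moves the front person to the back, so the light passes to the next roommate in order.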
Well, this is more of a 1.5-week report than a one-week report, and I got several things done, all of them short tasks. 1) Completion of Server Integration: Jeffrey had integrated my testing server into his final server, so both of us sat together and …
This week I got a lot of work done. I successfully implemented extremely low-power deep sleep modes. All that’s left is to profile the device with a multimeter to confirm that the sleep-mode current drops below 1 mA. At the same time, I will be finishing the button firmware so that it stays in deep sleep unless woken by an external interrupt (a button press). Porting the sleep-mode activation shouldn’t be too bad, since I don’t have to write a real-time timer driver to schedule wakeups like I did for the wearable device. Since my teammates seem to be busy, I also helped write the GUI for the hub server. I believe all the visual components are now done; I just need to connect to Jeffrey’s hub server so that I can pull data to display and push new data from the camera configuration. I also need Shivi’s image to display on the heatmap page. At this point I am extremely worried about our progress since the final demo is Monday, but I think that with a big final push we can end up with a very presentable product.
This week we worked on building our final product from our MVP. I traveled back home to Florida from Pittsburgh, but I was able to access the Raspberry Pi remotely to work on the integration. I was in charge of getting our location detection working. …
This week I got a lot done. I successfully ported a (still fragile) implementation of the WiFi signal-strength scraping onto the wearable device. Next up is getting low-power modes working and implementing a debug port for the wearable. I also need …
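Once the wearable can scrape WiFi signal strength, a common way to turn an RSSI reading into a rough distance is the log-distance path-loss model. This is only a sketch of that standard model, not necessarily what we run on the device, and the reference power and path-loss exponent below are illustrative assumptions, not calibrated values from our setup.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Estimate distance in meters from an RSSI reading in dBm.

    tx_power_dbm: expected RSSI at 1 m (assumed; must be calibrated
    per device).
    path_loss_exp: environment-dependent exponent (~2 in free space,
    higher indoors; 2.5 here is an assumption).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```

With these assumed constants, an RSSI of -40 dBm maps to 1 m and weaker readings map to progressively larger distances; in practice the constants have to be tuned per environment.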
I did not get to work as much on capstone this week due to exams and other deadlines, but I did manage to get some things done:
1) Testing of Clutter Detection
I had taken a bunch of images last week, so this week I spent time running them through my algorithm and checking its accuracy. I found out several things this week:
– I was detecting clutter whenever the reflection in the stainless-steel appliances changed
– I needed to use different clean images based on the different lighting conditions of the counter
– If humans came into the scene, detection would just go haywire
– The counter is made of marble, so it sometimes reflects light differently, which can also register as clutter. Since I can’t do anything about this case, it is the reason we keep an error threshold.
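The findings above all come down to comparing the current frame against a clean baseline and tolerating some noise. Here is a minimal sketch of that difference-and-threshold idea using numpy; it is not our actual algorithm, and both thresholds are assumed values.

```python
import numpy as np

def clutter_fraction(clean, current, pixel_thresh=30):
    """Fraction of pixels that differ noticeably from the clean baseline.

    clean, current: grayscale images as uint8 numpy arrays, same shape.
    pixel_thresh: per-pixel difference needed to count as "changed"
    (an assumed value).
    """
    diff = np.abs(clean.astype(np.int16) - current.astype(np.int16))
    return float(np.mean(diff > pixel_thresh))

def is_cluttered(clean, current, error_threshold=0.02):
    """Flag clutter only when enough pixels changed to exceed the
    error threshold that absorbs reflection and lighting noise."""
    return clutter_fraction(clean, current) > error_threshold
```

The error threshold here plays the same role as in the findings above: small reflection-driven changes stay below it, while real clutter pushes the changed-pixel fraction past it.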
2) Integrating Testing Server into actual Server
I forwarded the server I made to Jeffrey to integrate it into the actual hub.
3) Coming Up with Solutions for the issues in Clutter Detection
To resolve the issues that came up while testing clutter detection, I decided to add the following to the code.
– The ability to select which of several clean images should be used as the baseline. This is still in a very basic, rudimentary form.
– Human detection: if there is a very large contour in the difference image, ignore the current input frame. Searching for the biggest contour works because the only time a person should interfere is when they are between the clutter zone and the camera.
– Extending the ignore zone to a curved boundary so that it skips the appliances on the edge of the zone, since the user wants them there.
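The "biggest contour" rejection above can be approximated without OpenCV by measuring the largest connected region of changed pixels: a person produces one very large blob, ordinary clutter many small ones. This is only a sketch of the idea, assuming a grayscale difference mask; the pixel threshold and person-size cutoff are assumed values, and the real code presumably uses contours instead of this flood fill.

```python
import numpy as np
from collections import deque

def largest_blob_fraction(mask):
    """Size of the largest 4-connected blob of True pixels, as a
    fraction of the image. A stand-in for 'find the biggest contour'."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros_like(mask)
    best = 0
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                # Breadth-first flood fill of one blob.
                size = 0
                q = deque([(sy, sx)])
                visited[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            q.append((ny, nx))
                best = max(best, size)
    return best / mask.size

def frame_has_person(clean, current, pixel_thresh=30, person_fraction=0.2):
    """Ignore the frame when one changed region is so large it is
    likely a person between the camera and the clutter zone.
    person_fraction is an assumed cutoff."""
    changed = np.abs(clean.astype(np.int16) - current.astype(np.int16)) > pixel_thresh
    return largest_blob_fraction(changed) > person_fraction
```

A frame flagged by `frame_has_person` would simply be skipped rather than fed to the clutter detector.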
This week, we added more functionality to our hub by enabling it to perform bilateration to estimate the location of the wearables. On top of that, Jeffrey also made the hub more robust and debugged the message-passing interface between …
This week I continued to try to get our MVP working. I started making the hub code more robust and continued to debug the message passing interface between the hub, wearable device, and the button. I also started to finalize our location detection method with …
This week we made a push to get all of the major components of our MVP ready. We wanted the hub server to be able to take connections from both a wearable device and a button. Since our MVP did not yet require integrating the camera data and CV code, Shivi worked on a small test server with her CV code so that we can easily integrate the CV code into our final product. David also did some work attaching a WiFi breakout board to the Raspberry Pi Zero, which enables bilateration: with two fixed WiFi points (the hub and the camera), the wearable device can measure its signal to each, giving us a rough estimate of where the user is located with respect to the table. Jeffrey set up the Raspberry Pi hub code so that, as soon as David finished coding the wearable device, the hub could take a button press, rotate through the task order, and send a message to the wearable notifying the user to clean up.
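With two fixed anchors, bilateration reduces to intersecting two circles: each anchor and its estimated distance define a circle, and the wearable sits at one of the (at most) two intersection points. Here is a rough sketch of that geometry; the anchor coordinates and distances in the usage notes are made up, and it does not reflect how our hub code is actually structured.

```python
import math

def bilaterate(p1, r1, p2, r2):
    """Intersect two circles (anchor position, estimated distance).
    Returns the two candidate points, or None if the circles do not
    intersect (e.g. because the distance estimates are too noisy).

    p1, p2: (x, y) positions of the two anchors (hub and camera).
    r1, r2: distances estimated from WiFi signal strength.
    """
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None  # circles coincide or do not intersect
    # Distance from p1 along the anchor axis to the chord midpoint.
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    # Midpoint of the intersection chord.
    mx = x1 + a * (x2 - x1) / d
    my = y1 + a * (y2 - y1) / d
    # Perpendicular offset gives the two mirror-image candidates.
    dx, dy = (y2 - y1) / d, -(x2 - x1) / d
    return (mx + h * dx, my + h * dy), (mx - h * dx, my - h * dy)
```

With only two anchors the answer is ambiguous between two mirror-image points across the hub–camera line; knowing which side of that line the table is on resolves the ambiguity to a single rough position.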