Team Status Report for 3/25

Risk and Risk Management

The major risk that we have is related to full integration testing. Real-time testing requires sunny weather, and we also have other time commitments, so a sunny day needs to line up with our availability to test for capstone. This makes rapid revision and testing very difficult. We aim to mitigate this by using data recorded on past sunny days to simulate one. We won’t get real-life confirmation that way, but we can check the system’s response against what we believe the ideal response should be.
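
As a rough illustration of the replay idea, here is a minimal sketch, assuming we log timestamped readings to a CSV file on a real sunny day; the file layout, column names, and pipeline_step callback are all hypothetical placeholders, not our final harness.

    import csv

    def replay_sunny_day(log_path, pipeline_step):
        """Feed recorded sunny-day samples through the pipeline one at a time."""
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                # Each row stands in for one live sensor reading at that timestamp.
                pipeline_step(
                    timestamp=row["timestamp"],
                    light_level=float(row["light_level"]),
                    sun_altitude=float(row["sun_altitude"]),
                    sun_azimuth=float(row["sun_azimuth"]),
                )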

System Design Changes

As of now there hasn’t been a change to the existing design of the system.

Schedule

The schedule has not changed yet, and everyone is roughly on track for the time being.

Elizabeth’s Status Report for 3/25

Progress

Dianne and I worked on integration.py (the work can be found in the team repository), which integrates the LAOE algorithm and the User Position Extraction. It mostly takes the work I did last week and calls Dianne’s LAOE functions. In the process of integrating, I realized that some of my math from last week was incorrect and had to edit my calculations. There were also some other minor changes, like turning the testsuncalc.py file into a module that exposes get_suncalc() (a sketch of which is below). Other issues this week included merge conflicts. I set up the Raspberry Pi by inserting the microSD card, putting on the heatsinks, attaching the fan, and putting it in its case; however, I didn’t realize that to interact with the Pi I would need some kind of monitor. Jeff has a monitor, so the next time we are both available, I will try working with the Pi in his room.
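
For reference, here is a minimal sketch of what that module boundary looks like, assuming the Python suncalc package’s get_position(date, lng, lat) API; the exact signature of get_suncalc() in our repository may differ.

    from datetime import datetime, timezone

    from suncalc import get_position

    def get_suncalc(lat, lng, when=None):
        """Return the sun's (azimuth, altitude) in radians for a location."""
        when = when or datetime.now(timezone.utc)
        pos = get_position(when, lng, lat)
        return pos["azimuth"], pos["altitude"]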

Schedule

So far my progress is roughly on schedule.

Next Steps

Next week, I hope to get the software side of things onto the Raspberry Pi and see if it works. I will also work on connecting the Raspberry Pi and the Arduino (likely with Jeff).

Jeff’s Status Report for 3/25

Personal Progress

This week I was able to speed the motor system up to a reasonable pace by setting Vref to the correct voltage and sending out control pulses as fast as possible. I also finished setting up the interface the software side will use to tell the motor what movement to perform, and finished assembling the 3D-printed gear pieces with the motor.
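
As a rough sketch of the software-facing side of that interface, this is the kind of call it supports, assuming pyserial on the Pi; the port name and the one-letter command protocol here are hypothetical placeholders, not the final format.

    import serial

    def send_motor_command(direction, steps, port="/dev/ttyACM0"):
        """Send one blind-adjustment command to the Arduino over serial."""
        with serial.Serial(port, 9600, timeout=2) as conn:
            conn.write(f"{direction}{steps}\n".encode("ascii"))
            return conn.readline().decode("ascii").strip()  # e.g. an "OK" ack

    # Usage: raise the blinds by 200 motor steps.
    # send_motor_command("U", 200)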

I am currently still on schedule and no adjustments are needed.

Plans for Next Week

Next week I plan to test the motor system with actual blinds attached.

Dianne’s Status Report for 3/25

Personal Progress

This week I worked on both testing and integration with my teammates.

Elizabeth and I worked on integrating the main file that will continuously run face detection and do the calculations whenever a face is found. I also wrote a testing file to make sure the projection coordinates are correct, as this is the most important part for accuracy. There are a few issues right now (I might have mixed up the axis or angle relative to North, since the sign of the x-value is flipped). The full scope of the work can be found in our repository: https://github.com/djge/18500_GroupC1.
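
As an illustration of the style of test, here is a self-contained sketch; the projection math is a simplified stand-in (a point at height h casts onto the floor at horizontal distance h / tan(altitude), on the side opposite the sun), not the committed implementation.

    import math

    def project_to_floor(h, sun_azimuth, sun_altitude):
        """Floor point hit by a sun ray passing through a point at height h."""
        d = h / math.tan(sun_altitude)    # horizontal reach of the ray
        # Light lands on the side opposite the sun, hence the negations;
        # azimuth is measured clockwise from North here.
        return -d * math.sin(sun_azimuth), -d * math.cos(sun_azimuth)

    def test_projection_sign():
        # Sun due south (azimuth = pi) at 45 degrees: the lit patch lands due
        # north, so x should be ~0. Mixing up the axis relative to North would
        # show up here as a sign flip in x.
        x, y = project_to_floor(1.0, math.pi, math.pi / 4)
        assert math.isclose(x, 0.0, abs_tol=1e-9)
        assert math.isclose(y, 1.0)

    test_projection_sign()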

Jeff and I discussed potential ways to integrate the motor system with the adjustment of the blinds, but we have yet to integrate them: we first need to calibrate the turning of the motor against the blind size to figure out how to move the blinds up or down by a given amount (sketched below). We are currently on track.
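
Once we measure the constants, the calibration should reduce to a small conversion; a sketch under assumed, to-be-measured numbers:

    STEPS_PER_REV = 200       # stepper steps per revolution (typical, to confirm)
    CORD_PER_REV_CM = 6.0     # cord pulled per gear revolution (to measure)
    LIFT_PER_CORD_CM = 0.5    # blind travel per cm of cord pulled (to measure)

    def steps_for_lift(delta_cm):
        """Motor steps needed to move the blinds by delta_cm (sign = direction)."""
        cord_cm = delta_cm / LIFT_PER_CORD_CM
        return round(cord_cm / CORD_PER_REV_CM * STEPS_PER_REV)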

Next Steps

Next week, we hope to have an integrated system so we can start debugging and fine-tuning the parts of the project that are not performing as well as they should. Additionally, I will work on the issues with the algorithm and write tests for the intersect function as well.

Elizabeth’s Status Report for 3/18

Progress

I have some bits and pieces in cascade.py (the work can be found in the team repository) related to tinkering with OpenCV and super resolution, but nothing concrete has come of it yet. This week I mainly focused on finding the distance of a face’s chin by combining information from the depth and color frames. First I had to align the depth and color frames, which confused me at first because the shape of the numpy array from the depth frame was consistently (480, 848), a far cry from the resolution we were expecting (1280 x 720). Then, using calculations shown here, I computed the angle of each pixel and, from the number of pixels the person was away from the center, the x, y distance of the person from the camera (a condensed sketch is below). Essentially, I have finished an elementary version of the User Position Extraction.
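
For reference, a condensed sketch of the alignment and the geometry, assuming the pyrealsense2 API and a horizontal depth FOV of roughly 87° for the D455 (per Intel’s spec); (cx, cy) would come from the detected chin pixel.

    import math
    import pyrealsense2 as rs

    H_FOV_DEG = 87.0                       # approximate D455 horizontal FOV

    pipeline = rs.pipeline()
    pipeline.start()
    align = rs.align(rs.stream.color)      # map depth pixels onto the color frame

    frames = align.process(pipeline.wait_for_frames())
    depth = frames.get_depth_frame()

    def xy_from_pixel(cx, cy):
        """Lateral (x) and forward (y) distance in meters of pixel (cx, cy)."""
        z = depth.get_distance(cx, cy)     # depth at the chin pixel
        angle = math.radians((cx - depth.get_width() / 2)
                             / depth.get_width() * H_FOV_DEG)
        return z * math.tan(angle), z      # offset from camera axis, and depth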

Schedule

So far my progress is roughly on schedule.

Next Steps

Next week, I hope to work with Dianne on integrating the LAOE algorithm and the User Position Extraction, and to see whether the results seem reasonable. If time and weather allow, I’d like to try testing this integration.

Dianne’s Status Report for 3/18

Personal Progress

This week I finished the LAOE implementation. Since I had already finished the projection coordinates function, I worked on implementing the intersection between a user and the light area of effect, along with the necessary change to the blinds when a user is in the LAOE. The full scope of the update can be seen in this commit: https://github.com/djge/18500_GroupC1/commit/69665b1dfaf08be004fe2e29e75437b43dc36aa9
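
As a simplified, self-contained illustration of the intersection test (treating the LAOE as a convex quadrilateral on the floor and the user as a point; the commit above is the source of truth):

    def user_in_laoe(user_xy, laoe_corners):
        """True if user_xy lies inside the convex polygon laoe_corners (CCW)."""
        n = len(laoe_corners)
        for i in range(n):
            (x1, y1), (x2, y2) = laoe_corners[i], laoe_corners[(i + 1) % n]
            # The cross product's sign says which side of edge 1->2 we are on.
            if (x2 - x1) * (user_xy[1] - y1) - (y2 - y1) * (user_xy[0] - x1) < 0:
                return False
        return True

    # Usage: a unit-square LAOE with the user standing at its center.
    print(user_in_laoe((0.5, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)]))  # True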

I also wrote up some pseudocode for a main function that takes these inputs and sends them to the correct functions in the right order to get the necessary change to the blinds.
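
In spirit, that pseudocode boils down to a loop like the following; every name here (detect_face, get_suncalc, projection_coordinates, user_in_laoe, blinds_change, POLL_SECONDS) stands in for the real functions and constants in our repository.

    import time

    POLL_SECONDS = 5  # hypothetical polling interval

    def main_loop():
        while True:
            face_xy = detect_face()                  # user position extraction
            if face_xy is not None:
                azimuth, altitude = get_suncalc()    # current sun position
                laoe = projection_coordinates(azimuth, altitude)
                if user_in_laoe(face_xy, laoe):
                    blinds_change(laoe, face_xy)     # compute + send adjustment
            time.sleep(POLL_SECONDS)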

Next Steps

Next week, I hope to get some working integration. I also want to set up test cases to verify the accuracy of the LAOE projected coordinates, but we will focus on integration first. This will include both software and hardware integration, such as testing the change in the blinds from a command issued by a blinds-change function.

Team Status Report for 3/18

Risk and Risk Management

One risk that we have is related to 3D printing the parts. We know the printing itself will take 9 hours, but it will likely be a couple of days before we actually receive the parts. We also don’t yet know the cost of the order (we recently requested a quote). If it goes over budget, we might have to eliminate some of the parts we want. However, only a couple of pieces are central to the functioning of our project, so if the full order isn’t affordable, we’ll at least try to keep those pieces (e.g. the ‘gear’ that holds onto the blinds’ beaded cord lock). In the meantime, everyone is working in parallel on their own portions.

Other risks include that the resolution of the D455’s depth sensor isn’t as high as expected. From what we have seen, it is closer to 848×480 rather than the 1280×720 listed on Intel’s website; this may affect the accuracy of the User Position Extraction. Another risk is that our LAOE algorithm might not be as accurate as expected, since we have tested it on very little data so far. We hope to do more testing of the accuracy of these algorithms in the future.

System Design Changes

As of now there hasn’t been a change to the existing design of the system.

Schedule

The schedule has not changed yet, and everyone is roughly on track for the time being.

Jeff’s Status Report for 3/18

Personal Progress

This week I was able to test our light detection circuit in actual sunlight and extract the threshold for sunlight, as shown below in Figure 1. I also got the stepper motor working: it can rotate in both directions and at different speeds and durations. I also set up the initial serial interface to receive some basic commands from the serial monitor (currently just adjusting the direction of rotation). Finally, I learned how to use Cura to slice the 3D print of the gear that turns the blinds, and submitted a request for a quote to make sure it fits in our budget, as shown in Figure 2.

Figure 1: left is direct sunlight, right is ambient light

I am currently still on schedule and no adjustments are needed.

Plans for Next Week

The motor currently turns very slowly, and I’ve been researching how to make it turn faster. I found a few solutions and plan to implement and test them next week. I also hope the gear gets printed by next week so I can attach it to the motor and begin testing on the actual blinds.

Elizabeth’s Status Report for 3/4

Progress

This week I mostly worked on the Design Review Document with my teammates. For the design-specific portions, I focused on the User Position Extraction sections; some other things I worked on were the Introduction, Bill of Materials, Team Responsibilities, etc.

I also looked into using Tensorflow models to enhance an image’s resolution. Although I had planned to use an FSRCNN model to increase resolution, I might test the waters with an ESRGAN model for now instead, because there is already an established example listed here. Using the given helper functions, all one likely has to do is convert between numpy arrays and tensors (sketched below), though the conversion might not be seamless, depending on how images are represented as tensors. However, a concern I have with increasing the resolution of the image is time: it takes over one second to predict a new image (not just to train), and I believe this is the case for many other models as well. I wonder how well it would fare on the Raspberry Pi, which isn’t as powerful as a regular computer, especially since using two cascades (profile and frontal) is already somewhat slower than expected; we may end up focusing on frontal faces only. Another concern is finding a dataset of specifically downsampled face images to train the model on (the example uses the DIV2K dataset, which is a set of generic downsampled images).
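
For a sense of that round trip, here is a minimal sketch, assuming the TF Hub ESRGAN module from that example (the model expects a float32 batch whose height and width are divisible by 4, and OpenCV frames are BGR):

    import tensorflow as tf
    import tensorflow_hub as hub

    # Model handle from the TF Hub ESRGAN example (4x upscaling).
    model = hub.load("https://tfhub.dev/captain-pool/esrgan-tf2/1")

    def upscale(frame_bgr):
        """4x-upscale an OpenCV frame (HxWx3 uint8 numpy array)."""
        rgb = frame_bgr[..., ::-1].copy()       # OpenCV is BGR; model wants RGB
        h, w = rgb.shape[:2]
        rgb = rgb[: h - h % 4, : w - w % 4]     # dims must be divisible by 4
        lr = tf.expand_dims(tf.cast(rgb, tf.float32), 0)
        sr = model(lr)[0]                       # super-resolved output
        out = tf.cast(tf.clip_by_value(sr, 0, 255), tf.uint8).numpy()
        return out[..., ::-1]                   # back to BGR for OpenCV

    # Timing check for the one-second concern:
    # import time; t0 = time.perf_counter(); upscale(frame); print(time.perf_counter() - t0)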

Schedule

For the most part I think I am on schedule, but I could be doing better. I didn’t get work done over break, even though I had personally planned to, but I am still on track in terms of the team schedule.

Next Steps

For now, instead of focusing on increasing the resolution of the image, next week I will implement extracting the exact distance of the user’s chin from the system (which involves getting the number of pixels the user is away from the center and applying some geometry). I will look more into increasing image resolution after this is accomplished.

New Tools

I haven’t really implemented anything yet, so I haven’t used any new tools. But I will probably use some kind of image resolution enhancer in the future.

Jeff’s Status Report for 3/4

Personal Progress

This week our team focused on writing our design document. I wrote all the subsections that involve the sensor system and motor system, as well as the related work section and the risk mitigation plan section.

I am on schedule and don’t need to make any changes to my plan.

Plans for Next Week

I plan to test the light detection circuit under actual sunlight so I can find the brightness threshold for actual sunlight. I also hope to get the motor control fully completed.