Elizabeth’s Status Report for 3/18

Progress

I have some bits and pieces in cascade.py (work can be found on the team repository) from tinkering with OpenCV and super resolution, but nothing concrete has come of that yet. This week I mainly focused on finding the distance of a face’s chin by combining information from the depth and color frames. First I had to align the depth and color frames, which confused me at first because the shape of the numpy array from the depth frame was consistently (480, 848), a far cry from the resolution we were expecting (1280 x 720). Then, using the calculations shown here, I calculated the angle of each pixel and, using the number of pixels the person was away from the center, calculated the x, y distance of the person from the camera. Essentially, I have finished an elementary version of the User Position Extraction.
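As a sketch of the geometry involved, the angle-and-offset step might look like the following; the even spread of angles across the field of view and the ~87° horizontal FOV are my assumptions (the D455 datasheet value should be checked), and `pixel_to_xy` is a hypothetical name, not the actual implementation:

```python
import math

def pixel_to_xy(depth_m, px, image_width, hfov_deg=87.0):
    """Estimate sideways (x) and forward (y) distance of a point.

    depth_m:     depth reading at the pixel, in meters
    px:          pixel column of the detected chin
    image_width: width of the depth frame in pixels (e.g. 848)
    hfov_deg:    assumed horizontal field of view of the camera
    """
    # Pixels from the center column; positive means right of center.
    offset = px - image_width / 2
    # Angle of this pixel from the optical axis, assuming angles are
    # spread evenly across the field of view (a simplification).
    angle = math.radians(offset * hfov_deg / image_width)
    # Decompose the depth ray into sideways (x) and forward (y) parts.
    x = depth_m * math.sin(angle)
    y = depth_m * math.cos(angle)
    return x, y
```

A pixel on the center column comes out to x ≈ 0 with y equal to the raw depth, which is a quick sanity check for any implementation of this step.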

Schedule

So far my progress is roughly on schedule.

Next Steps

Next week, I hope to work with Dianne on integrating the LAOE algorithm and the User Position Extraction, and seeing whether the results seem reasonable. If time and weather allow, I’d like to try testing this integration.

Dianne’s Status Report for 3/18

Personal Progress

This week I finished the LAOE implementation. Since I had already finished the projection coordinates function, I worked on implementing the intersection between a user and the light area of effect, and the necessary change to the blinds in the case that a user is in the LAOE. The full scope of the update can be seen in this commit: https://github.com/djge/18500_GroupC1/commit/69665b1dfaf08be004fe2e29e75437b43dc36aa9

I also wrote up some pseudocode for a main function that takes these inputs and sends them to the correct functions, in the right order, to get the necessary change to the blinds.
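The intersection check itself can be sketched as a standard point-in-polygon test; this is a minimal version assuming the projected light patch is a polygon of (x, y) floor coordinates, and the function names are placeholders rather than the actual implementation:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of vertices)?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def user_in_laoe(user_xy, light_patch):
    """True if the user's floor position lies inside the projected patch."""
    return point_in_polygon(user_xy, light_patch)
```

If the user is inside the patch, the main function would then call the blinds-change calculation; otherwise no adjustment is needed.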

Next Steps

Next week, I hope to get some working integration. I also want to set up test cases to verify the accuracy of the LAOE projected coordinates, but we will be focusing on integration first. This will be both software and hardware integration, such as testing the change in blinds with a command from a blinds change function.

Team Status Report for 3/18

Risk and Risk Management

One risk that we have is related to 3D printing the parts. We know the printing itself will take 9 hours, but it will likely be a couple of days before we actually receive the parts we want. We also don’t yet know the cost of the order (we recently requested a quote). If it goes over budget, we might have to eliminate some of the parts we want. However, only a couple of pieces are central to the functioning of our project, so if the full order isn’t affordable, we’ll at least try to keep those pieces (e.g. the ‘gear’ that holds onto the blinds’ beaded cord lock). In the meantime, everyone is working in parallel on their own portions.

Other risks include that the D455’s depth sensor resolution isn’t as high as expected. From what we have seen, it is closer to 848×480 as opposed to the 1280×720 that was expected (listed on their website); this may affect the accuracy of the User Position Extraction. Another risk is the possibility that our LAOE algorithm might not work as accurately as expected, since we have only tested it on very little data so far. We hope to do more testing in the future regarding the accuracy of these algorithms.

System Design Changes

As of now there hasn’t been a change to the existing design of the system.

Schedule

The schedule has not changed yet, and everyone is roughly on track for the time being.

Jeff’s Status Report for 3/18

Personal Progress

This week I was able to test our light detection circuit in actual sunlight and extracted the threshold for sunlight, as shown below in figure 1. I also got the stepper motor working: it can rotate in both directions and at different speeds and durations. I also set up the initial serial interface to receive some basic commands from the serial monitor (currently just adjusting the direction of rotation). Finally, I learned how to use Cura for 3D-printing slicing for the 3D printing request for the gear that turns the blinds, and I submitted a request for a quote to make sure it fits in our budget, as shown in figure 2.

Figure 1: left is direct sunlight, right is ambient light
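The host-side command handling might be sketched like this; the “CW”/“CCW” plus step-count format is purely hypothetical, since the actual serial protocol isn’t specified here, and `parse_motor_command` is an invented helper name:

```python
def parse_motor_command(line):
    """Parse a command like 'CW 200' or 'CCW 50' into (direction, steps).

    Returns None for anything malformed, so bad serial input is ignored
    rather than crashing the control loop.
    """
    parts = line.strip().upper().split()
    if len(parts) != 2 or parts[0] not in ("CW", "CCW"):
        return None
    try:
        steps = int(parts[1])
    except ValueError:
        return None
    if steps <= 0:
        return None
    return parts[0], steps
```

On the microcontroller side, the parsed direction would set the stepper driver’s direction pin and the step count the number of pulses to emit.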

I am currently still on schedule, with no need for adjustment.


Plans for Next Week

The motor currently turns very slowly and I’ve been researching how to make it turn faster. I found a few solutions and plan on implementing them and testing it next week. I also hope the gear gets printed by next week so I can attach it to the motor and begin testing on the actual blinds.

Elizabeth’s Status Report for 3/4

Progress

This week I mostly worked on the Design Review Document with my team members. For the design-specific portions, I focused on the User Position Extraction sections. Some other things I worked on are the Introduction, Bill of Materials, Team Responsibilities, etc. I also looked into using Tensorflow models to enhance an image’s resolution. Although I had planned to use an FSRCNN model to increase resolution, I might test the waters with an ESRGAN model for now instead, because there is already an established example listed here. Using the given helper functions, all one likely has to do is convert between numpy arrays and tensors (though the conversion might not be seamless, depending on how images are represented as vectors in Tensorflow).

However, a concern I have with increasing the resolution of the image is time: it takes over one second to predict a new image (inference, not just training), and I believe this is the case for many other models as well. I wonder how well it would fare on the Raspberry Pi, which isn’t as powerful as a regular computer, especially because using two cascades (profile and frontal) is already somewhat slower than expected. What might happen is that we just focus on frontal faces. Another concern is finding a dataset to train the model on that contains specifically downsampled images of faces (the dataset used in the example is the DIV2K dataset, which is a set of generic downsampled images).
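The numpy-to-tensor conversion mentioned above is mostly a matter of shaping the frame the way the model expects; here is a minimal preprocessing sketch in plain numpy, assuming the usual 1×H×W×3 float32 batch layout for TF Hub image models (the exact expectations of the ESRGAN example’s helpers may differ and should be checked):

```python
import numpy as np

def prepare_for_model(bgr_frame):
    """Turn an OpenCV-style HxWx3 uint8 frame into a 1xHxWx3 float32 batch.

    OpenCV stores channels as BGR, while most TensorFlow models expect
    RGB, so the channel order is reversed here as well.
    """
    rgb = bgr_frame[..., ::-1]                       # BGR -> RGB
    batch = rgb[np.newaxis, ...].astype(np.float32)  # add batch dim, cast
    # The result can then be passed to tf.convert_to_tensor() and fed
    # to the super-resolution model.
    return batch
```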

Schedule

For the most part I think I am on schedule, but I could be doing better. I didn’t get work done over break, even though I personally planned to, but I am still on track in terms of the team schedule.

Next Steps

For now, instead of focusing on increasing the resolution of the image, next week I will just implement extracting the exact distance of the user’s chin from the system (which involves getting the number of pixels the user is from the center and performing some geometry). I will look more at increasing image resolution after this is accomplished.

New Tools

I haven’t really implemented anything yet, so I haven’t used any new tools. But I will probably use some kind of image resolution enhancer in the future.

Jeff’s Status Report for 3/4

Personal Progress

This week our team focused on writing our design document. I focused on writing all the subsections that involve the sensor system and motor system. I also worked on writing the related work section and the risk mitigation plan section.

I am on schedule and don’t need to make any changes to my schedule and plan.

Plans for Next Week

I plan to test the light detection circuit under actual sunlight so I can find the brightness threshold for actual sunlight. I also hope to get the motor control fully completed.

Team Status Report for 3/4

Risks and Risk Management

After we received the D455 LIDAR camera, our most significant risk as of now is probably that our LAOE testing depends not only on our schedules but also on the weather, since we can only test on sunny days. To combat this, we want to collect as much data as possible early on. So far, we have a few data points from sunny days earlier in the semester, but we want to continue collecting as much as possible at varying times of day.

System Design Changes

There are no system design changes this week.

Schedule

The schedule has not changed, and everything on both the software and hardware sides is on track.

Teamwork Adjustments

No adjustments have been made as of now, as everyone is roughly still on schedule. All team members have been meeting up with each other throughout the week, and there are no new design challenges yet.

New Tools

With our finished Design Report, we have no new tools to learn besides the ones we had already planned on learning.

Dianne’s Status Report for 3/4

Progress

This week, I worked primarily on the Design Report. I focused on the design sections relating to the LAOE algorithm, such as illustrating and describing how the algorithm works in implementation, trade studies on determining the AOE with hardware, and the tests we plan to conduct regarding the LAOE. I also worked on the use case requirements and the architecture and principles of operation sections.

Schedule

So far, I am on schedule. This week is part of the algorithm implementation, which was started last week and is mostly complete except for some parts of the intersection detection and blinds change calculation.

Next Steps

Going forward, I will finish up the LAOE algorithm and start setting up test cases so we can make sure everything is working correctly and make any necessary adjustments as soon as possible.

Team Status Report for 2/25

Risks and Risk Management

The most significant risk as of now is probably waiting for the Intel RealSense D455’s arrival, because we really need the D455 to meet our project requirements. It is a prerequisite for many parts of our project, like the face detection software testing. This risk is being managed by working with the L515 we currently possess, since the data produced by the D455 and L515 should be very similar outside of range and resolution differences (which is why we haven’t returned it yet). They use the same library (pyrealsense2) as well.

System Design Changes

There are no system design changes this week.

Schedule

The schedule only changed for Jeff, who is handling hardware, since the D455 did not arrive this week. Because of this, Jeff has moved the task of building the motorized blinds up in the schedule while waiting for the D455 camera to arrive, and has been working on that this week. The schedule for the software side of things has not changed, because the LAOE algorithm does not need camera data and the face detection software is temporarily being developed with the L515 camera data.

Teamwork Adjustments

No adjustments have been made as of now, as everyone is roughly still on schedule. All team members have been meeting up with each other throughout the week, and there are no new design challenges yet.

Elizabeth’s Status Report for 2/25

Progress

This week I made an OpenCV example using the old Intel RealSense L515 (which should also work for the new Intel RealSense D455). I used Haar cascades (the most common method) to detect faces with the RGB camera that the Intel RealSense device comes with. I used both a frontal and a profile cascade (so if it cannot detect a frontal face, it can fall back to the profile cascade). I also looked into the different methods for face detection; these methods are clearly described in this link. I think using OpenCV’s DNN module might be better for our project, as it is more accurate, so I might make an example of that next week. The DNN model’s accuracy will depend on its training set, though, so I will look for a representative training dataset online. In case we want to make the OpenCV process even faster, I found a C++ library that speeds up OpenCV by using different SIMD CPU extensions, which I might try if/after MVP is reached. My example can be found in our team repository.
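The frontal-then-profile fallback is easy to separate from the OpenCV calls themselves; here is a sketch of just that decision logic, with the cascade invocations shown only as comments and `pick_faces` being a hypothetical helper name rather than code from the repository:

```python
def pick_faces(frontal_detections, profile_detections):
    """Prefer frontal-cascade hits; fall back to profile-cascade hits.

    Each argument is whatever detectMultiScale returned, treated as a
    list of (x, y, w, h) boxes. Returns the chosen list plus a label
    saying which cascade it came from.
    """
    # In the real pipeline these would come from something like:
    #   frontal = frontal_cascade.detectMultiScale(gray, 1.1, 5)
    #   profile = profile_cascade.detectMultiScale(gray, 1.1, 5)
    if len(frontal_detections) > 0:
        return list(frontal_detections), "frontal"
    return list(profile_detections), "profile"
```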

Schedule

I believe as of now, our progress is on schedule.

Next Steps

Over the next week, I’ll try to get a DNN example going. More importantly, I will write the Design Review Report with my group members that is due next Friday.