Elizabeth’s Status Report for 4/29

Progress

This week I worked on setting up SSH on the Raspberry Pi. Although this seemed like an easy task, many of the guides online simply didn’t work (I’m not sure why). In the end, setting it up for campus wifi was relatively easy because CMU-DEVICE auto-assigns the device an IP address to SSH into. For the place where we were testing, though, I tried to have the device assigned a static IP address so that SSHing would be more consistent. However, even though running hostname -I on the RPi reported the static address I requested, I was never able to SSH into it while it held that address. For now I settled on a non-static address (which has been consistent enough); I will look more into why the static address doesn’t work, but as long as we can create a demo video with the Pi and run it at school for the demo, it’s probably fine.

Schedule

We are roughly on schedule.

Next Steps

Basically the only thing left is to work a bit more on the demo and demo video (we are still working out some details regarding setup). As I said last week, I also want to try downsampling the face regions that OpenCV captures (to make the edges more prominent, though I am not yet sure which filter to use) and re-running those images through the Haar cascade to reduce the model’s rate of falsely identifying faces. A rough sketch of that idea is below.
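A minimal sketch of that re-check idea, assuming the standard Haar cascade files that ship with opencv-python (the shrink factor and detection parameters are placeholders, not tuned values):

```python
import cv2

# Hypothetical re-check: downsample each detected face region and run the
# cascade again; keep only detections that survive the second pass.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def confirm_faces(gray_frame, detections, shrink=0.5):
    confirmed = []
    for (x, y, w, h) in detections:
        roi = gray_frame[y:y + h, x:x + w]
        # Downsample the region; INTER_AREA is the usual choice for shrinking.
        small = cv2.resize(roi, None, fx=shrink, fy=shrink,
                           interpolation=cv2.INTER_AREA)
        # If the cascade still sees a face in the smaller patch, keep the detection.
        if len(cascade.detectMultiScale(small, scaleFactor=1.1,
                                        minNeighbors=3)) > 0:
            confirmed.append((x, y, w, h))
    return confirmed
```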

Elizabeth’s Status Report for 4/8

Progress

This week we did some user position testing, but the error was fairly high for people who were far away or at more extreme angles. Also, if someone is too close to the camera, the camera will not catch their face. More on testing is in the section below.

I also worked on getting the Raspberry Pi to work. I finally got it to build the pyrealsense2 module from source (this build alone took multiple hours, and it had to be done in one uninterrupted pass because partially finished builds kept causing errors, so at one point I had to wipe everything and restart the build from scratch). After it built, I had to debug why the instructions on the Intel RealSense GitHub (on how to build it on Raspbian) weren’t working. It turned out that although the librealsense.so file was being put into /usr/local/lib, the pyrealsense2.so file wasn’t. Although this seems simple in hindsight, I was very confused while figuring it out. What I did instead was set $PYTHONPATH to the wrappers/python folder, which contains pyrealsense2.so. I also discovered another issue after that: to find the latitude/longitude of an address, we use geopy/Nominatim, where geopy sends a GET request to a Nominatim server. These requests were timing out because Nominatim limits how many requests you can make, this limit appears to be tracked partly by IP address, and someone at CMU seems to have used up our quota. This is definitely the cause of the issue, because once I used a VPN the problem disappeared. I’m still thinking about how to avoid this (use a different library, or make our user_agent’s name more unique with a random number), but once this is solved, all that should be left to finish on the Pi is working out which port to connect the Arduino to and updating the lines that use the Serial package.
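A minimal sketch of the unique-user_agent idea, combined with geopy’s RateLimiter helper to stay under Nominatim’s request policy (the agent name and address below are placeholders, not what we will necessarily ship):

```python
import random

from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

# A more unique user_agent makes our requests distinguishable from other CMU
# traffic; the RateLimiter keeps us under Nominatim's ~1 request/second policy.
geolocator = Nominatim(user_agent=f"sunlight-tracker-{random.randint(0, 10**6)}")
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)

location = geocode("5000 Forbes Ave, Pittsburgh, PA")
if location is not None:
    print(location.latitude, location.longitude)
```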

Testing

We did some distance testing this week, but found that for people at more extreme angles or distances, the distance calculation is not very accurate. The way we did these tests was to use an image and the depth data to calculate the user position, and then physically measure the distance to the person. Ideally we would want a percent error below 10%, but this might not be achievable, partly due to the issues mentioned above and partly because it is hard to accurately measure the distance to a person with just one measuring stick (it is difficult to keep the stick straight and perpendicular to a specific spot in the air). Also, OpenCV sometimes interprets objects as people when this is clearly inaccurate. For the distance calculations, some online sources suggest that the radians-per-pixel approach is not actually linear; in other words, it may not be acceptable to simply divide the horizontal FOV by the number of horizontal pixels. I’m not sure whether this is the case, as it is hard to get data on, nor am I sure how to correct for it if it is. In the future we plan to do more integrated testing and individual testing.
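If the linear FOV-per-pixel assumption does turn out to be the issue, one option is to let the RealSense SDK deproject pixels using the camera intrinsics instead. This is only a sketch of that alternative (assuming pyrealsense2 and an already-aligned depth frame), not what our code currently does:

```python
import pyrealsense2 as rs

def pixel_to_point(depth_frame, u, v):
    """Convert a pixel (u, v) plus its depth into camera-space (X, Y, Z) in meters."""
    # Intrinsics (focal lengths, principal point, distortion) come from the stream
    # profile, so the lens model is handled for us instead of assuming a constant
    # radians-per-pixel.
    intrinsics = depth_frame.profile.as_video_stream_profile().intrinsics
    depth = depth_frame.get_distance(u, v)  # meters at that pixel
    return rs.rs2_deproject_pixel_to_point(intrinsics, [u, v], depth)
```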

Schedule

We are a little behind schedule: basically everything works, but the Raspberry Pi’s code still needs to be edited to account for the timeout issues with Nominatim, when at this point we should be focused solely on testing. Because we aren’t very far behind, though, we can devote most of next week to testing.

Next Steps

The goal for next week is to finish up with working with the Pi, and spend most of next week testing.

Elizabeth’s Status Report for 4/1

Progress

I worked on creating test files to more systematically gather and test data for the User Position Extraction. My files can be found in test/distance_testing in the team repository. As described in the README, you run get_data.py to gather a data point, add your real measurements to measured_data.txt, and run test_data.py to calculate (x, y, z) values and their percent error. We also gathered one data point this week, and for that point our calculated (x, y, z) had (13.0%, 2.7%, 6.4%) error. The image for that data point can be viewed using the display() function in test_data.py. In the future we will strive to gather more test points, in more varied user positions. Dianne and I also spent some time prepping the Raspberry Pi; it is mostly set up and we have installed OpenCV and other dependencies on it, but installing pyrealsense2 has proven to take longer than expected, as it cannot simply be pip installed on the Pi and must be compiled from source.
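For clarity, the percent error here is just the absolute per-axis relative error; a minimal sketch with made-up example numbers (not our actual measurements):

```python
# Hypothetical example values, in meters: calculated vs. hand-measured position.
calculated = (1.13, 0.37, 2.66)
measured = (1.00, 0.38, 2.50)

percent_error = tuple(
    abs(c - m) / abs(m) * 100 for c, m in zip(calculated, measured))
print(percent_error)  # roughly (13.0, 2.6, 6.4) for these made-up numbers
```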

Schedule

We are roughly on schedule.

Next Steps

By the demo, I hope to finish at least some kind of integrated version of our product (where the motor system is connected to our software side). If this cannot be finished before the demo, then it is the goal for next week at the latest. I also want to do more testing.

Elizabeth’s Status Report for 3/25

Progress

Dianne and I worked on integration.py (in the team repository), which essentially integrates the LAOE algorithm and the User Position Extraction. It mostly takes the work I did last week and calls Dianne’s LAOE functions. In the process of integrating, I realized that some of my math from last week was incorrect and had to fix my calculations. There were also some other minor changes, like turning the testsuncalc.py file into a module exposing get_suncalc(). Other issues this week included merge conflicts. I set up the Raspberry Pi by inserting the microSD card, putting on the heatsinks, attaching the fan, and putting it in its case; however, I didn’t realize that to interact with the Pi, I would need some kind of monitor. Jeff has a monitor, so next time we are both available, I will try working with the Pi in his room.

Schedule

So far my progress is roughly on schedule.

Next Steps

Next week, I hope to upload the software end of things into the Raspberry Pi, and see if it works. I will also work on connecting the Raspberry Pi and the Arduino (likely with Jeff).

Elizabeth’s Status Report for 3/18

Progress

I have some bits and pieces in cascade.py (in the team repository) related to tinkering with OpenCV and super resolution, but nothing concrete has come of that yet. This week I mainly focused on finding the distance to a face’s chin by combining information from the depth and color frames. First I had to align the depth and color frames, which confused me for a while because the shape of the numpy array from the depth frame was consistently (480, 848), a far cry from the resolution we were expecting (1280 x 720). Then, using calculations shown here, I calculated the angle of each pixel, and using the number of pixels the person was away from the center, I calculated the x, y distance of the person from the camera. Essentially I have finished an elementary version of the User Position Extraction.
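A minimal sketch of the frame-alignment step, assuming pyrealsense2’s align helper (the stream resolutions below are illustrative, not our final configuration; the depth stream’s native resolution need not match the color stream’s, which would explain the (480, 848) shape):

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Illustrative resolutions; depth and color streams often differ natively.
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline.start(config)

# Reproject depth pixels into the color camera's frame so (row, col) indices line up.
align = rs.align(rs.stream.color)

frames = pipeline.wait_for_frames()
aligned = align.process(frames)
depth_frame = aligned.get_depth_frame()
color_frame = aligned.get_color_frame()

depth_image = np.asanyarray(depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())
print(depth_image.shape)  # now 720 x 1280, matching the color frame's resolution

pipeline.stop()
```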

Schedule

So far my progress is roughly on schedule.

Next Steps

Next week, I hope to work with Dianne in integrating the LAOE algorithm and the User Position Extraction, and seeing if the results seem reasonable. If time and weather allow for it, I’d like to try testing this integration.

Elizabeth’s Status Report for 3/4

Progress

This week I mostly worked on the Design Review Document with my team members. For the design-specific portions, I focused on the User Position Extraction sections. Some other things I worked on are the Introduction, Bill of Materials, Team Responsibilities, etc. I also looked into using TensorFlow models to enhance an image’s resolution. Although I had planned to use an FSRCNN model to increase resolution, I might test the waters with an ESRGAN model for now instead, because there is already an established example listed here. Using the given helper functions, all one likely has to do is convert between numpy arrays and tensors (though the conversion may not be straightforward, depending on how images are represented as tensors in TensorFlow). However, one concern I have with increasing the resolution of the image is time: inference alone (not training) takes over one second per image, and I believe this is the case for many other models as well. I wonder how well it would fare on the Raspberry Pi, which isn’t as powerful as a regular computer, especially since using two cascades (profile and frontal) is already somewhat slower than expected. We may end up focusing only on frontal faces. Another concern is finding a dataset of specifically downsampled face images to train the model on (the dataset used in the example is DIV2K, a set of generic downsampled images).
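As a rough sketch of the numpy-to-tensor round trip, assuming the pretrained ESRGAN model published on TensorFlow Hub (the model handle below is the one from the public TF Hub example, and the exact preprocessing may need adjusting):

```python
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Pretrained ESRGAN (4x super resolution) from TF Hub; this handle is the one
# used in the public example, so treat it as an assumption.
model = hub.load("https://tfhub.dev/captain-pool/esrgan-tf2/1")

def upscale(bgr_image: np.ndarray) -> np.ndarray:
    """Run one OpenCV-style (H, W, 3) uint8 image through ESRGAN, return uint8."""
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    h, w = rgb.shape[:2]
    rgb = rgb[: h - h % 4, : w - w % 4]  # the example model wants dims divisible by 4
    batch = tf.expand_dims(tf.cast(rgb, tf.float32), 0)  # float32 batch, values in [0, 255]
    upscaled = tf.clip_by_value(model(batch)[0], 0, 255)
    return cv2.cvtColor(upscaled.numpy().astype(np.uint8), cv2.COLOR_RGB2BGR)
```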

Schedule

For the most part I think I am on schedule, but I could be doing better. I didn’t get work done over break, even though I personally planned to, but I am still on track in terms of the team schedule.

Next Steps

For now, instead of focusing on increasing the resolution of the image, next week I will implement extracting the exact distance of the user’s chin from the system (which involves getting the number of pixels the user is from the center of the image and doing some geometry). I will look more at increasing image resolution after this is accomplished. A sketch of the geometry is below.
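To make the planned geometry concrete, here is a minimal sketch of a linear FOV-per-pixel model; the FOV and resolution values are placeholders, not our camera’s final configuration, and the linearity assumption may need revisiting later:

```python
import math

# Placeholder camera parameters, not final values.
H_FOV_DEG = 87.0   # horizontal field of view
H_PIXELS = 1280    # horizontal resolution

def lateral_offset(depth_m: float, pixel_u: float) -> float:
    """Approximate sideways distance (meters) of a pixel from the camera axis."""
    radians_per_pixel = math.radians(H_FOV_DEG) / H_PIXELS
    angle = (pixel_u - H_PIXELS / 2) * radians_per_pixel  # 0 at image center
    return depth_m * math.sin(angle)

# Example: a chin detected 200 px right of center, 2.5 m away.
print(lateral_offset(2.5, H_PIXELS / 2 + 200))
```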

New Tools

I haven’t really implemented anything yet, so I haven’t used any new tools. But I will probably use some kind of image resolution enhancer in the future.

Elizabeth’s Status Report for 2/25

Progress

This week I made an OpenCV example using the old Intel RealSense L515 (which should also work for the new Intel RealSense D455). I used Haar cascades (the most common method) to detect faces with the RGB camera that the RealSense device comes with; I used both a frontal and a profile cascade, so if it cannot detect a frontal face, it falls back to the profile one. I also looked into the different methods for face detection, which are clearly described in this link. I think using OpenCV’s DNN module might be better for our project, as it is more accurate, so I might make an example of that next week. The DNN model’s accuracy depends on its training set though, so I will look for a representative training dataset online. In case we want to make the OpenCV process even faster, I found a C++ library that speeds OpenCV up using various SIMD CPU extensions, which I might try if/after MVP is reached. My example can be found in our team repository.
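A minimal sketch of the frontal-then-profile fallback, assuming the Haar cascade files that ship with opencv-python (the detection parameters are illustrative, not the values in our repo):

```python
import cv2

frontal = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
profile = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_profileface.xml")

def detect_faces(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    # Try the frontal cascade first; fall back to the profile cascade if nothing is found.
    faces = frontal.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        faces = profile.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces  # (x, y, w, h) boxes
```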

Schedule

I believe as of now, our progress is on schedule.

Next Steps

Over the next week, I’ll try to get a DNN example going. More importantly, I will work with my group members on the Design Review Report that is due next Friday.

Elizabeth’s Status Report for 2/18

We met as a team to collect data, going around to each other’s places to see which one was the best for testing. Jeff and I first tried to get data on Tuesday morning but failed due to time constraints and physical limitations (we discovered that many of our windows face away from the sunlight at this time of year). As a team we also worked on the design presentation.

As a team, we tested out our Intel RealSense LiDAR Camera L515, but discovered it doesn’t work under sunlight/ambient light (more can be found here). We decided to switch plans and go with the Intel RealSense Depth Camera D455 instead, as its accuracy does not depend on the presence of sunlight and its range is 6 meters. I also helped order the materials and estimate a rough cost for the parts we need 3D printed.

I also looked into the pros and cons of different distance sensors, to see whether an ultrasonic distance sensor might fit our needs better. Referencing multiple websites, I found that LiDAR generally measures distance more accurately, while ultrasonic sensors tend to be cheaper and use less power. Because our accuracy goals are fairly strict, LiDAR is the better choice here for measuring the distance to the person. I also looked at the Intel RealSense GitHub, but haven’t started a small demo yet.

Progress

As a team, we are on schedule, though I feel like I could’ve accomplished more this week.

Deliverables

Over the next week I will work on an OpenCV example using the Intel RealSense LiDAR Camera L515 (while we are still in the process of getting the D455). I will also help Dianne with the LAOE adjustments and help in general with getting more test points as needed.

Courses: 18202, 15112

Elizabeth’s Status Report for 2/11

This week, Jeff and I researched the different hardware we can use in our project. We decided we wanted a unit with an integrated LiDAR and camera to keep things simpler. We initially considered a unit called Astra from Orbbec, but decided against it because its user guide and documentation were sparse and hard to navigate, and it had questionable reviews on Amazon. Because we wanted something with established documentation, we decided on an Intel unit, the Intel RealSense Depth Camera D415, and found a graph displaying its depth RMS error, which seemed acceptable. However, later, when we were looking at the ECE inventory, we found an even better item, the Intel RealSense LiDAR Camera L515, which also seems to satisfy our accuracy requirements. The depth accuracy of the L515 is shown below. On the inventory list we also found a Raspberry Pi, so we filled out the request form and notified our TA.

[Figure: L515 depth accuracy]

I looked into how to use pre-existing APIs to calculate the azimuth/altitude of the sun, for use in calculating the sun’s area of effect, but I was very inefficient in my approach, as I should have guessed there was a Python library for this. At first I tried to make GET requests to suncalc.org, but the values we needed weren’t directly in the HTML. When I looked up suncalc APIs online, at first I only saw the JavaScript API, so I spent some time figuring out how to run JavaScript from Python (using the js2py library), but it kept giving errors about being unable to link the node modules the suncalc API requires. In the end, I found a Python library that gives us these values based on time and location, and used another library called geopy to find the latitude/longitude of an address. The short code is below. I ended up not putting it on the GitHub repository for now, as it’s only a very preliminary venture into our project and it’s very simple (I was just overcomplicating things).
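Since the snippet itself isn’t included in this post, here is a rough reconstruction of what it does; it assumes the suncalc package on PyPI and geopy’s Nominatim geocoder, and the user_agent and address are just placeholders:

```python
from datetime import datetime, timezone

from geopy.geocoders import Nominatim
from suncalc import get_position

# Geocode a street address to latitude/longitude (placeholder user_agent and address).
geolocator = Nominatim(user_agent="sunlight-capstone-demo")
location = geolocator.geocode("5000 Forbes Ave, Pittsburgh, PA")

# suncalc's get_position takes (date, longitude, latitude) and returns the sun's
# azimuth and altitude in radians for that time and place.
pos = get_position(datetime.now(timezone.utc), location.longitude, location.latitude)
print(pos["azimuth"], pos["altitude"])
```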

Our progress is roughly on schedule.

For next week’s deliverables, I plan to work with Dianne on the sun position calculation API and start the implementation of our area-of-effect API. I will also work on the design review report and presentation, as my group members and I have decided that I will be in charge of the next presentation. I will also help my group members make more in-depth schematic diagrams for our project.