Jeff’s Status Report for 4/29

Personal Progress

This week I finished building the frame and lights for the final demo, and we tested the setup to make sure it works. I also performed a full comprehensive test of the hardware system to make sure it’s still performing normally.

Plans for Next Week

We plan on fine-tuning the demo setup and also working on the final poster and documentation.

Elizabeth’s Status Report for 4/29

Progress

This week I worked on setting up SSH on the Raspberry Pi. Although this seemed like an easy task, a lot of the guides online didn’t work (I’m not sure why). In the end, it was relatively easy to set up on campus wifi because CMU-DEVICE auto-assigns your device an IP address to SSH into. For the place where we were testing, though, I tried to have the device assigned a static IP address so that SSHing would be a more consistent process. However, although running hostname -I on the RPi showed the static address I requested, I was never able to SSH into the Pi while it held a static address. For now I’ve settled on a non-static address (which, for some reason, is consistent enough); I will look more into why the static address doesn’t work, but as long as we can record a demo video with it and run it at school for the demo, it’s probably fine.
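
For reference, the usual place to request a static address on Raspberry Pi OS is /etc/dhcpcd.conf; a minimal sketch of that kind of entry, assuming the wireless interface is wlan0 and with placeholder addresses that depend on the local network:

```
# /etc/dhcpcd.conf -- request a static address on the wireless interface
interface wlan0
static ip_address=192.168.1.50/24     # placeholder address for the Pi
static routers=192.168.1.1            # placeholder gateway
static domain_name_servers=192.168.1.1
```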

Schedule

We are roughly on schedule.

Next Steps

Basically the only thing left to do is to work a bit more on the demo and demo video (we are still working out some details regarding setup). And as I said last week, I want to try downsampling the squares that OpenCV captures (to make the edges more prominent, though I’m not sure which filter to use) and re-running those images through the Haar cascade to reduce the model’s error in identifying faces; a sketch of this idea is below.
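
A minimal sketch of that idea, assuming OpenCV’s bundled frontal-face Haar cascade and INTER_AREA as the downsampling filter (the filter choice is a guess, since which one works best is still an open question):

```python
import cv2

# OpenCV's bundled frontal-face cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def recheck_faces(gray, detections, scale=0.5):
    """Re-run the cascade on a downsampled crop of each detected square,
    keeping only detections that survive the second pass."""
    confirmed = []
    for (x, y, w, h) in detections:
        roi = gray[y:y + h, x:x + w]
        # INTER_AREA is a common choice for shrinking images; note the
        # crop must stay larger than the cascade's 24x24 window.
        small = cv2.resize(roi, (int(w * scale), int(h * scale)),
                           interpolation=cv2.INTER_AREA)
        if len(cascade.detectMultiScale(small, scaleFactor=1.1,
                                        minNeighbors=3)) > 0:
            confirmed.append((x, y, w, h))
    return confirmed
```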

Dianne’s Status Report for 4/29

Personal Progress

This week we worked on setting up the system for the demo (with the frame and the artificial lights), as well as adding some features for the user, such as stopping in the middle of a movement if a person leaves the LAOE and readjusting accordingly; the sketch below shows the rough shape of that logic.
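
Roughly, the movement loop interleaves motor steps with LAOE checks so a move can be cut short or retargeted; here is a sketch of that shape, where the motor interface and helper names are hypothetical stand-ins rather than our actual code:

```python
import time

def move_blinds(target_cm, motor, get_laoe_state, poll_s=0.5):
    """Step the blinds toward target_cm, re-checking the LAOE as we go.
    `motor` and `get_laoe_state` are hypothetical stand-ins for the real
    motor interface and LAOE computation."""
    while not motor.at(target_cm):
        state = get_laoe_state()
        if not state.person_in_laoe:
            motor.stop()                   # person left the LAOE: stop mid-move
            return
        if state.target_cm != target_cm:
            target_cm = state.target_cm    # light moved: readjust mid-move
        motor.step_toward(target_cm)
        time.sleep(poll_s)                 # poll rate is an arbitrary choice
```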

Next Steps

We will need to fine-tune the setup for the demo a bit more (the artificial lights) and focus on the final document, poster, and video.

Team Status Report for 4/29

Risk and Risk Management

The biggest risk to our project right now is not finishing the final document. Our mitigation plan is to start early.

System Design Changes

As of now there hasn’t been a change to the existing design of the system.

Schedule

The schedule has not changed and everyone is on schedule.

Comprehensive Testing

  • LAOE:
    • Testing the location of the LAOE projection onto the floor (16 cases where we measured the distance and size of the projection in meters and compared them to the calculated distance and size)
    • Testing the intersection of the LAOE with the user (used various locations of a person either inside the LAOE, on one of the edges of the LAOE, or outside of the LAOE, and checked whether the returned value matched)
    • Testing the necessary change calculated by the LAOE (measuring the distance from the chin at which the LAOE ends after the adjustment is made, and whether the LAOE lands past the chin or above it)
  • User Position Extraction:
    • Testing the return value of the position (taking various images of a person’s face throughout the room and measuring the returned value against the actual position)
    • Testing the face detection limits (testing various distances until the program is unable to detect a face)
  • Motor System:
    • Timed the total time it takes for the blinds to move up/down completely
  • Overall Tests:
    • Timed the feedback latency (the time it takes for the system to determine a change and send it to the motor)
    • Tested the overall accuracy (the relative error of the change made, and whether or not a change was actually made; the relative-error formula is sketched below)
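
For scoring the accuracy results above, the relative error is the standard formula; a minimal sketch in Python, where the example numbers in the comment are placeholders rather than our measured data:

```python
def relative_error(measured, calculated):
    """Relative error of a calculated value against a physical measurement."""
    return abs(calculated - measured) / abs(measured)

def mean_relative_error(cases):
    """Average relative error over (measured, calculated) pairs,
    e.g. the 16 LAOE projection cases."""
    return sum(relative_error(m, c) for m, c in cases) / len(cases)

# Placeholder usage (not our measured data):
# relative_error(measured=1.20, calculated=1.32)  # -> 0.10, i.e. 10% error
```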

Some changes made based on these findings were adding more leniency to the LAOE intersection, fine-tuning the user position detection, and adjusting the motor speeds.

Team Status Report for 4/22

Risk and Risk Management

The biggest risk to our project right now is not finishing the final document. Our mitigation plan is to start early.

System Design Changes

As of now there hasn’t been a change to the existing design of the system.

Schedule

The schedule has not changed and everyone is on schedule.

Dianne’s Status Report for 4/22

Personal Progress

This week we worked on the final presentation as well as preparing for the final demo. We planned out what we will be doing and worked on the physical and technical setup, such as the frame and the different light source location examples we will be showing.

We are currently on track.

Next Steps

Next week, we hope to finish implementing the things we need for our demo (physical setup, hardcoded sun location examples, etc.) and prepare the final report, video, and other materials.

Jeff’s Status Report for 4/22

Personal Progress

This week we did extensive testing and confirmed that our blinds work. There is, however, some error at the edges, and the blinds move much more slowly and less consistently when adjusting upwards. This is because much more torque is required to move the blinds upwards, and the gear occasionally skips, so the motor has to run much slower to deliver that torque.

I also designed and built the frame for the final demo and worked on the final presentation slides.

We are on schedule and nothing needs to be changed.

Plans for Next Week

We plan on finishing the final document and finishing building the frame.

Team Status Report for 4/8

Risk and Risk Management

The major risk is finding ways to improve our system’s accuracy. The LAOE’s and the User Position Extraction’s accuracy are alright at closer distances, but at further distances and in specific edge cases (the edges of the light projection; extreme angles relative to the camera), our system is not very accurate. We will try to think of ways to make our calculations more accurate, but we are also limited by the accuracy of things like OpenCV and the LIDAR data.

Although not exactly a risk, we are also trying to come up with ways to improve the user experience in response to our interim demo. Our motor moves slightly faster than it did before (though there is a limit to how fast it can move, since the motor was kept small to make the system less invasive). We have also tuned our LAOE algorithm to err toward false positives, to be on the safe side and ensure light does not hit the user’s face. We will continue thinking about ways to improve the user experience moving forward, though it’s hard to come up with ideas.

System Design Changes

As of now there hasn’t been a change to the existing design of the system.

Schedule

The schedule has not changed yet, and everyone is roughly on track for the time being, though the software portion is slightly behind with getting the Raspberry Pi on board.

Elizabeth’s Status Report for 4/8

Progress

This week we did some user position testing, but the error was fairly high for people who were somewhat far away and for people at more extreme angles. Also, if a person is too close to the camera, the camera will not catch their face. More on testing is in the section below.

I also worked on getting the Raspberry Pi to work. I finally got it to build the pyrealsense2 module from source (this build alone took multiple hours, and it had to be built in one uninterrupted pass because it kept hitting errors from previous partially-built attempts, so I had to remove everything and restart the build from scratch). After it built, I had to debug why the instructions on the Intel RealSense GitHub (on how to build it on Raspbian) weren’t working: although the librealsense.so file was getting put into /usr/local/lib, the pyrealsense2.so file wasn’t. Although it seems simple in hindsight, I was very confused while figuring this out. What I did instead was point $PYTHONPATH at the wrappers/python folder, which had pyrealsense2.so.

I also discovered something else after that. To find the latitude/longitude of an address, we use geopy/Nominatim, where geopy sends a GET request to a Nominatim server. These requests were timing out because Nominatim limits how many requests you can make, the limit is plausibly tracked partly by IP address, and someone at CMU appears to have used up our quota. I’m confident this is the cause because the issue disappeared once I used a VPN. I’m still thinking about how to avoid this (use a different library, or make our user_agent’s name more unique with a random number), but once this is solved, all that should remain for the Pi is to work out which port the Arduino connects to and update the lines that use the Serial package.
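
A sketch of the unique-user_agent idea mentioned above, also using geopy’s RateLimiter helper to stay under Nominatim’s request limit (the agent name and random suffix are just an example of the fix, not something we have verified):

```python
import random

from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

# A unique user_agent so our requests aren't lumped in with every other
# default geopy client; the name and random suffix are arbitrary choices.
geolocator = Nominatim(
    user_agent=f"blinds-capstone-{random.randint(0, 999999)}")

# Nominatim allows roughly 1 request/second; RateLimiter enforces the
# delay and retries on transient errors.
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)

location = geocode("5000 Forbes Ave, Pittsburgh, PA")
if location is not None:
    print(location.latitude, location.longitude)
```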

Testing

We did some distance testing this week, but found that for people at more extreme angles or distances, the distance calculation is not very accurate. We ran these tests by using an image and the depth data to calculate the user position, and then physically measuring the distance to the person. Ideally we would want a percent error below 10%, but this might not be achievable, partly due to the previously mentioned issues, and partly because it’s hard to accurately measure the distance to a person with a single measuring stick (it’s hard to keep the stick straight and perpendicular to a specific spot in the air). Also, OpenCV sometimes interprets objects as people when this is clearly inaccurate. For the distance calculations, some online sources suggest that a radians-per-pixel approach is not actually linear; in other words, it may not be acceptable to simply divide the horizontal FOV by the number of horizontal pixels. I’m not sure whether this is the case, as it’s hard to get data on, nor am I sure how to correct for it if it is; a sketch of the issue is below. In the future we plan to do more integrated testing and individual testing.
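
On the radians-per-pixel question: under a pinhole camera model the pixel-to-angle mapping is atan-shaped rather than linear, so the per-pixel angular step is largest at the image center and shrinks toward the edges. A small sketch comparing the two mappings (the width and FOV here are placeholders, not our camera’s calibrated intrinsics):

```python
import math

WIDTH = 640                 # image width in pixels (placeholder)
HFOV = math.radians(87.0)   # placeholder FOV; use the camera's real value

def angle_linear(u):
    """Naive mapping: spread the FOV evenly across the pixels."""
    return (u - WIDTH / 2) * (HFOV / WIDTH)

def angle_pinhole(u):
    """Pinhole-model mapping: the angle grows like atan, so the
    per-pixel angular step is largest at the image center."""
    fx = (WIDTH / 2) / math.tan(HFOV / 2)   # focal length in pixels
    return math.atan((u - WIDTH / 2) / fx)

# The two mappings agree at the center and the extreme edge but can
# differ by a few degrees in between:
for u in (320, 480, 639):
    print(u, math.degrees(angle_linear(u)), math.degrees(angle_pinhole(u)))
```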

Schedule

We are a little behind schedule: basically everything works, but the Raspberry Pi’s code needs some edits to account for the timeout issues with Nominatim. At this stage we should be solely focused on testing, but because we aren’t very far behind, we can devote most of next week to it.

Next Steps

The goal for next week is to finish up the work on the Pi and spend most of the week testing.

Jeff’s Status Report for 4/8

Personal Progress

This week I updated the hardware-software interface to allow blind movements to be interrupted. I also got cardboard to build the fake window frame for the final demo and began construction of the frame.

I am personally on schedule, since the hardware system is fully complete, and I am waiting for the software team to finish so we can perform full system testing.

Plans for Next Week

There are plenty of sunny days in the coming week, so we will be doing full integrated system testing and calibration. I will also finish building the frame.

The tests planned for the hardware system include sending test vectors that ask the blinds to move in 10 cm intervals (10 cm, 20 cm, 30 cm from the top); a sketch of how these could be driven is below. I also plan to test the light-detection circuit by moving it into and out of the light and making sure the response is correct.
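
A sketch of how those test vectors might be sent from the Pi with pyserial; the port name and the command format are hypothetical, since the actual Arduino protocol is our own choice:

```python
import time

import serial  # pyserial

# The port name is a placeholder; the real one depends on how the
# Arduino enumerates (e.g. /dev/ttyACM0 vs /dev/ttyUSB0).
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=2)
time.sleep(2)  # give the Arduino time to reset after the port opens

# Hypothetical command format: target distance from the top, in cm.
for target_cm in (10, 20, 30):
    arduino.write(f"MOVE {target_cm}\n".encode())
    print(target_cm, arduino.readline().decode().strip())  # expected ack
```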