Daniel’s Status Report for 12/4/2021

At the beginning of this week, I mainly helped the team work on the final presentation. In addition to working together on deciding slide content and layout, I personally worked on data analysis to extract latency values for various parts of our system. We had previously recorded slow-motion videos in an attempt to measure the latencies of core parts of the system; for example, to find the time from the Arduino sending a PWM value to the motor to actually seeing motor movement, we recorded a video with the motor in the frame, along with an LED that would turn on when the PWM value was sent. These videos all included a stopwatch in the frame to keep track of time. To extract the latency values, I imported the videos into iMovie and went frame by frame, writing down the two timestamps of when the LED turned on and when the motor actually spun. The difference between these was the latency for this part of the system. I recorded these values in a Google Sheet and found the average across all the runs we did.
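
The arithmetic behind those sheet entries is trivial, but for completeness, a small sketch of the same calculation with made-up timestamp pairs standing in for the values read off the iMovie frames:

```python
# Hypothetical LED-on / motor-spin timestamps (seconds) read off the slow-motion footage.
runs = [(1.204, 1.291), (2.876, 2.949), (4.512, 4.601)]

latencies = [motor - led for led, motor in runs]           # per-run latency
average_latency = sum(latencies) / len(latencies)          # same average computed in the sheet
print(f"average latency: {average_latency * 1000:.1f} ms")
```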

I then created a slide in the presentation to show where these latency values fit into the overall system: a block diagram of the major steps, with the latencies between them. After this, I also helped the team record a quick demo video demonstrating the camera-to-propulsion pipeline.

Later that week, we started working on finalizing the laser-cut and 3D-printed housing. Vikram and I glued the walls to the floor of the housing base using hot glue, and I also soldered 16 AWG, 2 mm to 3.5 mm adapters for the motors so we could connect them to the ESCs.

Lastly, we spent the end of the week testing our full system. As usual, I would hold the device and drop it as Lahari initiated it and Vikram caught it down below. Since this testing would be used in the final report, I took a picture from above after every drop and recorded the distance from the target that Vikram measured. During our drops, we encountered an issue where the unstable WiFi connection on the Pausch bridge meant that SSHing into the Pi and running the CV script could not be done consistently, lengthening each drop as we waited for the Pi to reconnect to the WiFi. To fix this, we went back to the lab and I edited the Python script on the Pi to run once the Arduino sends it a particular message over UART. This meant that the CV portion of the script would only start once the button connected to the Arduino was pressed, letting us stop relying on a stable SSH connection: we could start the script while we had a WiFi connection, and the Pi would simply wait for the button press to begin. I also tried to make the script run automatically after the Pi booted up, so that we would never even have to SSH into the Pi; however, issues with the Python version the .bashrc file was using meant that the script could not run properly this way.
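
The change itself is small; a rough sketch of the idea, assuming pyserial on the Pi's hardware UART and a placeholder trigger string (the real port, baud rate, and message in our script may differ):

```python
import serial

# Hypothetical settings: the Pi's hardware UART device and the string the Arduino sends
# when its start button is pressed.
START_MESSAGE = b"START"
uart = serial.Serial("/dev/serial0", 115200, timeout=1)

def wait_for_button_press():
    """Block until the Arduino reports that the start button was pressed."""
    while True:
        line = uart.readline().strip()
        if line == START_MESSAGE:
            return

wait_for_button_press()
# ...only now start the CV loop, so no live SSH session is needed at drop time...
```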

We dropped the device seven times, and while wind was an issue at certain points, we managed to get our first drop within the target foam-core rectangle, around 50 inches from the center of the target. Additionally, detection got weaker as it got darker, so Lahari and I changed the threshold of the CV algorithm, allowing us to keep dropping in darker conditions.

We will continue testing the system in the coming week, as well as record our final video.

Daniel’s Status Report for 11/20/2021

This week, I helped the team do more tests to see if we could replicate last week's results regarding our solution to the swinging problem. We also finalized our choice of cameras after some range tests off the bridge, and began designing our final housing.

For the drop tests, we first lowered the total added weight (in rocks) to exactly 1 pound (it was 600 g before, whereas ~450 g / 1 lb is the actual weight of our payload). We also charged the battery fully and ran additional thrust tests using the scale. We found that at 100% power using 16 AWG wire, we were getting almost 700 g of thrust, a marked improvement over the previous week's results.

For the actual drops, as usual, I held up the device by the parachutes as Lahari initiated it. We first ran with 50% power to the motors and noticed only very slight movement. We then increased this to 75% and dropped an additional two times, observing noticeable movement in the direction of the motors. I also took pictures looking down from the bridge to document where the device landed, so we could compare against the no-motor drop.

To try alternative cameras, I ordered a 120 degree camera from the ECE parts inventory to see if the smaller FOV would make the circle easier to detect. I tried setting it up with the Pi, only to find that the one we had was meant for the Nvidia Jetson, so the firmware was not compatible. I then helped Lahari and Vikram modify our existing scripts to support a USB webcam (the existing PiCamera-based code only supported the CSI Pi Camera module). This involved modifying the code to use the generic OpenCV camera access functions rather than PiCamera-specific methods. After this was done, we went out to test this new webcam against the old 160 degree FOV camera.
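
The switch mostly amounts to using OpenCV's generic capture interface; a minimal sketch of that path (device index and resolution here are illustrative):

```python
import cv2

# Generic OpenCV capture works for the USB webcam (and, via V4L2, the Pi camera),
# unlike the picamera-specific code it replaced.
cap = cv2.VideoCapture(0)                      # 0 = first video device; index may differ
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

ok, frame = cap.read()                         # grab one frame from whichever camera is attached
if ok:
    print("captured frame:", frame.shape)
cap.release()
```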


We came up with a quick testing strategy where I would hold the device over the bridge and say the testing parameters out loud (resolution + camera type + lateral distance from target), and Lahari would record the screen and mic feed using OBS. Using a tape measure, I measured up to 4 meters away from the target location, in increments of 1 meter, so that we could measure the performance of each camera at each resolution at each distance from the target center. Lahari and I started with the 160 degree FOV fisheye camera and noticed that it was not even picking up the target on the ground, even after we doubled the target's diameter to 2 meters (the fisheye effect made everything in the center of the frame appear very small). We decided to call off further tests with the fisheye and moved on to the webcam. With the webcam, the target was large and clear in the frame, and we were able to proceed with the entire suite of tests (640 x 480 vs 1280 x 720, at 1, 2, 3, and 4 meters away from the center of the target).

Lastly, we brainstormed ideas together for a final housing. As we ended up choosing acrylic, Lahari and I worked together in Solidworks to quickly create DXF files for the hexagonal base of the device, as well as the rectangular side walls. Our next phase is to fully construct this housing and test the device with the full camera-to-propulsion pipeline.

Daniel’s Status Report for 11/13/2021

This week, I helped the team prepare for and present our demo to the two groups on Monday and Wednesday, as well as work on modifications to the device to solve swinging problems in the air.

On Sunday before the demo, we worked together to fine-tune the demo setup by testing what thrust to use and how high up to tie the device to appropriately show off our system. Additionally, we ran various tests to see if the system performed as expected: Vikram and I moved the circle under the device, and Lahari modified the detection code accordingly to improve detection. For the demo itself, I gave a brief overview of the main components of our project, then explained specifically my part (the vectorization algorithm) and how it fits into the overall data pipeline (my algorithm receives data from the CV subsystem and passes it on to the Arduino via UART).


After the demos, due to the low thrust we measured for each of our motors using the smaller props (~220 g), I researched various thrust tests to see what results other people were getting. I noticed people were easily getting over 800 g using our motor with a 4S LiPo and 5 inch props (like us), so I suggested to the team that we construct a proper motor thrust setup (motor attached to a raised platform, with a flipped propeller to generate thrust downwards onto a scale). Lahari and I found a wooden rod and, with the help of a friend, cut it to an appropriate size. I then measured the middle of a rectangular platform and glued the rod to it to give it a solid base. We then attached the motor to the top and ran several tests. We found 330 g of thrust with the 3S LiPo (the 4S had a low battery), and 400 g using thicker wires for the ESCs.

To address the swinging problem we noticed in the air (which we attributed to the device being too light), we thought it would be beneficial to add weight to the device. I suggested we could use fish aquarium rocks (which I knew Lahari might have) as a stand-in for the payload. We measured three 200 g bags of rocks, and Lahari and I cut a hexagonal insert so we could put the bags at the bottom of the device and cover them with this platform. After this, we tested the device off the bridge (I dropped it, Lahari initiated it, and Vikram caught it) and noticed the swinging no longer occurred, but minimal to no lateral movement was observed. We then found that the 4S LiPo was low on charge, causing the motors to slow down mid-flight, so we charged it. Our next tests will use a fully charged battery and 2 rock bags instead of 3; we will then adjust thrust and weight according to what we observe.


I also helped the team run detection tests for the circle off the Pausch bridge. I held the device off the side as Lahari viewed the camera feed on the laptop and made on-the-fly edits to the detection code. We found that the circle was too small to be detected in the thresholded image, so we printed a circle twice the size (2 meter diameter), and will be testing with that as soon as we construct it.


Lastly, I researched and ordered larger propellers (7 inches) to increase thrust further, in case we still cannot move laterally after lowering the weight and using a fully charged battery.

Daniel’s Status Report for 11/6/2021

This week, I worked primarily on helping the team test the device with more drops, as well as implementing the vectorization algorithm and integrating it with the Arduino code (which controls the motors) and the CV code (which detects the circles). We then worked as a team to test the full detection-to-propulsion pipeline.


For the drops, we tried various parachute setups (2 vs. 3 parachutes, shorter and longer slack lengths). As usual, I held up the parachutes and let go of the device as Lahari initiated it with the press of a button and Vikram waited below to catch it. We noticed a pendulum motion as I dropped the device. We then tried dropping the device and initiating the motors mid-flight, but the same issue persisted. We think this may be a lightness issue, so we will resume drops after the interim demo with additional weight at the base of the device to prevent swinging.

As for the vectorization code, I implemented it in Python and sent the data over serial so the Arduino could use the values. In short, the algorithm takes in the angle and magnitude of the vector from the center of the frame towards the target, and computes the PWM values for each of the three motors that will create a resultant vector equal to the target vector. After this, we tested the script by connecting the Arduino (which was running code Vikram wrote to receive the PWM values over serial), and noticed hanging/slowdowns when using serial over the USB connection. We then moved to the GPIO RX and TX pins and, using a logic level shifter, connected the Arduino and Pi directly. This solved the hanging issue (serial over USB is prone to slowdowns as it is dependent on the processor frequency). Then, I helped the team test the vectorization algorithm by inputting various angles to the script and seeing if the correct motors spun up. Since our motors are mounted at angles 0, 120, and 240 degrees, we tried these values as well as values in between (such as 60 and 90), and found that it worked as intended, meaning I was on schedule with its completion.
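
The report only describes the algorithm in words, but a minimal sketch of one way to do the decomposition is below: drive the two motors adjacent to the target angle and solve for their non-negative thrusts, then map each thrust to a pulse width. The PWM limits and thrust scaling here are placeholders, not necessarily the exact math in my script.

```python
import numpy as np

MOTOR_ANGLES_DEG = [0, 120, 240]   # directions the three motors push, per our design
PWM_MIN, PWM_MAX = 1000, 2000      # placeholder ESC pulse widths (microseconds)
MAX_THRUST = 1.0                   # normalized magnitude that maps to full throttle

def vector_to_pwms(angle_deg, magnitude):
    """Split a target vector into thrust commands for the three fixed motors.

    The two motors adjacent to the target direction are driven so their
    (non-negative) thrusts sum to the target vector; the third stays off.
    """
    target = magnitude * np.array([np.cos(np.radians(angle_deg)),
                                   np.sin(np.radians(angle_deg))])
    thrusts = np.zeros(3)
    for i in range(3):
        j = (i + 1) % 3
        u_i = np.array([np.cos(np.radians(MOTOR_ANGLES_DEG[i])),
                        np.sin(np.radians(MOTOR_ANGLES_DEG[i]))])
        u_j = np.array([np.cos(np.radians(MOTOR_ANGLES_DEG[j])),
                        np.sin(np.radians(MOTOR_ANGLES_DEG[j]))])
        w = np.linalg.solve(np.column_stack([u_i, u_j]), target)
        if np.all(w >= -1e-9):     # both weights non-negative: this motor pair spans the target
            thrusts[i], thrusts[j] = w
            break
    # Map each normalized thrust onto the ESC pulse-width range.
    return [int(PWM_MIN + min(max(t / MAX_THRUST, 0.0), 1.0) * (PWM_MAX - PWM_MIN))
            for t in thrusts]

print(vector_to_pwms(60, 0.5))     # a target between the 0- and 120-degree motors
```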


After this, I worked to integrate the full camera-to-propulsion pipeline by having the CV Python script call the vectorization algorithm, which then communicates with the Arduino over serial. To test this, I placed a circle on the ground as the target and moved it around, checking that the angle generated by the CV code was being passed to the vectorization algorithm, and then that the correct motors spun up. With some CV tuning, we were able to get a consistent detection-to-propulsion pipeline: as the circle changes position, the motors respond accordingly. However, one issue we found, which we will fix tomorrow, is that the zero-angle reference of the camera and that of the device did not match, meaning an offset existed between the angle the device moves towards and the angle the camera produces. This should be a matter of experimentally deducing which angle is zero for the camera, which angle is zero for the device, and matching them. The above was all tested using our new demo setup (described below), as I ran the camera + vectorization script on the Pi and Vikram and Lahari turned the device on or off between tests.
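
The fix should amount to adding one calibrated constant to every camera angle before it reaches the vectorization code; a tiny sketch with a hypothetical offset value:

```python
CAMERA_TO_DEVICE_OFFSET_DEG = 90   # hypothetical value, to be determined experimentally

def camera_angle_to_device_angle(camera_angle_deg):
    """Rotate the camera-frame angle into the device's motor frame."""
    return (camera_angle_deg + CAMERA_TO_DEVICE_OFFSET_DEG) % 360
```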

I also helped the team build a testing setup to demo our device for the interim demo on Monday. This involved getting various 80-20 pieces and attaching them in such a way that would allow us to hang the device over a target using string. We tested the setup and found that it was able to hold the device stable, as well as allow it to move a sufficient amount to demonstrate the mobility of the system.

Daniel’s Status Report for 10/30/2021

This week, I mainly helped the team complete more tests to understand how our device behaves under its proposed use case. We noticed during our last tests that a lot of spin was induced by the unsynchronized release when two people let go of the device, so we decided one person should drop it. During each test, I held the parachutes by their center, held the device over the edge of the Pausch bridge, and waited for Lahari to initiate the system using the buttons we installed. Once she pressed the button, I waited for the motor to spin up and let go of the parachutes once it was at full thrust. The spin issue was solved, yet we found that there was insufficient thrust.

To fix this, we are going to add an additional motor to each side. After Vikram cut out a new piece of wood with enough space for two motors, we worked together to measure various distances to ensure the motors were equidistant from the middle mounting point and to drill the necessary holes for mounting. I then drilled the holes and attached the motors. To test both motors running at the same time, I suggested we use the lab clip wires with banana plug ends to quickly prototype a splitter without needing to solder anything just yet. Then we worked together to hook up the system. Finally, while Vikram held the splitter and Lahari held the wires in place, I held the motors and turned on the servo tester, and we found that the two motors worked together. As for my vectorization algorithm, this change simply means feeding the same PWM duty cycle to both motors on any given side, since they point in the same general direction, so the output remains three PWM duty cycle values.

Additionally, I also helped the team draw a circle to test the circle detection algorithm at a larger scale. We tied a string to a Sharpie, and while Vikram and Lahari held the string in place, I drew on the foam-core, creating a circle with the string acting as the radius (0.5 m, based on the previously described calculations scaling up the detection of a 5 cm radius circle from 1 meter away). Due to the uneven nature of this process, our circle was not perfect, and the resulting edge was too thin (even after we filled it in). To solve this, Lahari printed a perfect circle with a much thicker edge and glued it to the foam-core. I then ran the Python detection script on my laptop, holding up the camera as Vikram moved the circle further and further away. At first, we noticed the range was nowhere near our required 40 foot range. I noticed the camera resolution was set to 640×480, so I proposed that we change it to 720p (1280×720). We tested this both indoors and outdoors, and found a maximum detection range of around 50 feet, which fits our use case.
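
For reference, the resolution bump is just a camera configuration change; a sketch using the picamera module we were running at the time (the capture flow in our actual script differs):

```python
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (1280, 720)        # bumped from (640, 480) to extend detection range
raw = PiRGBArray(camera, size=camera.resolution)

camera.capture(raw, format="bgr")      # one frame as a BGR array, ready for OpenCV
frame = raw.array
print(frame.shape)
```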

Daniel’s Status Report for 10/23/2021

Oct 10-16: 

During this week, I mainly assisted the team in constructing our first prototype. We chose to build it from foam-core, so I cut out some of the shapes needed to build the different parts of the device (such as the arms and sides). After constructing the basic skeleton, I wrote an Arduino program to control the device for preliminary tests. This involved connecting some buttons to the Arduino on a breadboard and writing code to generate the PWM signal needed to run the brushless motor. We had two sets of buttons: one button was used to start the motor, and three buttons on the bottom of the device were used to stop it when it hit the ground. After I verified this worked on a breadboard, I soldered wires to the buttons and, with the help of the team, attached these and the remaining electronics to the housing. We then ran some tests as a team indoors (as it was raining) to see if the motor was able to move the device, and it was (explained in the team status report with videos). I also began thinking about how to create the vectorization algorithm for the control system, and with the help of Lahari and Vikram on the matrix math, we landed on an approach that I can begin to implement, keeping me on track.


Oct 17-23:

This week was mainly focused on testing the propulsion of the device outdoors and seeing what modifications we needed to make. We worked as a team to figure out how to mount the parachutes to the device, and went out to the Pausch bridge to drop it. Lahari and I dropped the device from the bridge while Vikram was at the bottom to catch it. The results and consequences of our findings will be discussed in the team status report. Due to the rain, it was difficult for us to do more testing outdoors. At this point, we also found that the buttons placed on the bottom of the device to stop it were not ideal, as they could sometimes fail to be pressed (if the device fell on its side, or on grass), so we decided to move to an ultrasonic sensor that stops the system once the device is close enough to the ground. We have also moved to a camera approach for our perception (discussed in the group status report), so I helped attach the ultrasonic sensor and camera to the bottom of the device and cut a slit to allow the wires through. Lastly, as we needed to test the camera on our Raspberry Pi, we used an OpenCV example script to run the Pi Camera, and I helped integrate Lahari's circle detection algorithm (which we will use to find our target) into the existing Python script. We found that the script runs fairly well, but more testing needs to be done to determine its true latency. The results from this script will be fed into the vectorization algorithm that I'm working on.
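
The detection code is Lahari's, but for context, circle detection of this kind is commonly a Hough-circle pass over a blurred grayscale frame; a generic OpenCV sketch along those lines (the parameters are placeholders, not her tuned values):

```python
import cv2
import numpy as np

frame = cv2.imread("bridge_view.jpg")                     # stand-in for a live Pi Camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                            # smooth noise before the Hough transform

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                           param1=100, param2=40, minRadius=10, maxRadius=200)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)         # strongest detection
    print(f"target at ({x}, {y}), radius {r} px")
```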

Daniel’s Status Report for 10/9/2021

This week, I helped the team create a test setup to measure the thrust of the motor. We initially had the motor set up on a flat piece of wood with legs, standing on a scale so we could measure thrust; however, the lack of space under the motor did not allow the propellers to move enough air, and no thrust was registered. I suggested that we attach the motor to an arm extending over the test setup so the motor had clearance. When we did this, we were able to actually measure the thrust of the motor. In preparation for this, I soldered wires for the motor controller and helped with the overall construction (drilling, screwing, gluing).

I also researched potential boards to connect our antennas to so we could measure RSSI. We had trouble with our ESP32 board when connecting an external antenna to it; we had to move an SMD resistor to switch the antenna from onboard to external, which proved difficult and resulted in a setup that could affect our antenna readings (excess wire, solder). So, I found a board that requires no DIY work (SparkFun Thing Plus). Once we get the board, I will be able to run more antenna tests to gauge the antenna sensitivity more accurately; specifically, I will need to test multiple antennas to see the difference in RSSI observed from different directions (before we get multiple antennas, I plan to use a single spinning one to replicate multiple ones). In the meantime, I will begin working on the design of the vectorization algorithm, which takes in RSSI and outputs PWM duty cycles for each of the motors, so I continue to be on track with my tasks.

Lastly, in considering other perception methods (in case the antennas do not work sufficiently well), I brought in an 8×8 thermal camera for Arduino that I had lying around, to test whether it is a feasible method. I soldered and wired it up to test it. However, the low resolution, combined with potential damage to the camera, meant the data was not very useful. Additionally, the small sensor size meant that the range was highly limited. If we wanted to test this perception method again, we would need to acquire a higher resolution thermal camera.



Daniel’s Status Report for 10/2/2021

This week, in preparation for our design proposal presentation (which I will be presenting), I helped with some of the content on the slides while also preparing what I should say. I wrote notes for each slide in the form of main points and rehearsed the presentation multiple times. On the technical side of the project, I helped program the ESP32 board and connect it with our new directional antenna to find out whether direction sensing can work using our approach. This involved setting up the Arduino IDE to work with the ESP32-CAM board and using the WiFi library to gather RSSI data.

I helped run some preliminary sanity checks indoors to see if the antenna was behaving as expected. We then moved outside, where I helped collect RSSI data for different test cases of our antenna (such as different angles of the WiFi beacon relative to the antenna, different distances from the antenna, and different orientations and tilts of the antenna). I then moved this data to Excel and plotted it to find out if there was a discernible peak in one direction that we could use for direction finding. We ultimately found that there is a signal-strength peak of around 30 degrees when the antenna is placed horizontally, and that it had enough resolution for us to pick out the RSSI peak from around 30 feet away all the way up to the antenna. Now that we have obtained this data, I am starting to think about the specifics of the propulsion algorithm that will take the antenna's RSSI data and output PWM values for each of the motors. My team also suggested that we may need to consider a filter to pre-process the data.
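
The Excel step is simply plotting RSSI against beacon angle to look for the peak; an equivalent matplotlib sketch with made-up numbers in place of our measurements:

```python
import matplotlib.pyplot as plt

# Hypothetical readings: beacon angle relative to the antenna boresight vs. averaged RSSI (dBm).
angles = [-90, -60, -30, 0, 30, 60, 90]
rssi   = [-78, -70, -62, -55, -61, -71, -80]

plt.plot(angles, rssi, marker="o")
plt.xlabel("Beacon angle relative to antenna (degrees)")
plt.ylabel("RSSI (dBm)")
plt.title("Directional antenna RSSI sweep")
plt.show()
```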

When the parachute and motors arrive next week, we will begin testing drop speeds and motor thrust, as well as do additional testing on the antenna in different configurations.

Daniel’s Status Report for 9/25/2021

This week, I focused on two main tasks. First, leading up to the design phase of our capstone, we had to begin ordering materials to start testing our antenna approach. Specifically, after group discussions about how the antennas should interface with the main compute (to be able to grab RSSI), I researched WiFi-capable chips that not only had an external antenna connector (so we could connect our directional antenna), but also a serial interface over USB for easy programming and debugging. Importantly, it had to be able to report the RSSI of the WiFi signal so that we can plug in our directional antennas and sense the signal strength. We landed on an ESP32 board with a physical micro antenna connector (it comes with an adapter to an RP-SMA connector) and a USB serial board that hooks onto it. I submitted the order forms for this as well as the antenna.

After we got feedback about our proposal presentation on Wednesday, I read through it and started looking into the NRF24L01 board suggested by the instructors as a possible alternative to the WiFi approach we laid out in our proposal.

Daniel’s Status Report for 9/18/2021


This week, I sat with my team and helped brainstorm and do the calculations needed to figure out certain requirements of our project, such as lateral drop distance and landing accuracy. These are crucial for estimating how we expect our device to behave, and which design considerations are relevant to make it behave the way we want. I also personally worked on the proposal slides for this week's upcoming presentation. I had to distill all the information we came up with during our calculations and group discussions into an easily understandable format for the viewers. This involved summarizing our project problem statement and solution in a way that helps the audience understand why our device exists. Additionally, the calculations had to be summarized and combined with diagrams to properly explain how the project works, why the requirements we chose make sense, and why we expect it to work (by explaining the kinematic calculations we worked on this past week).
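
As a simplified illustration of the flavor of those kinematic estimates (all numbers are placeholders, and this assumes a roughly constant descent rate under the parachute): the time spent in the air bounds how far the device can travel laterally before landing.

```python
# Placeholder numbers, not the values from our actual requirements work.
DROP_HEIGHT   = 15.0   # m, hypothetical release height
DESCENT_RATE  = 3.0    # m/s, hypothetical steady descent speed under the parachute
LATERAL_SPEED = 0.5    # m/s, hypothetical sideways speed from wind or propulsion

fall_time = DROP_HEIGHT / DESCENT_RATE           # time spent in the air
lateral_distance = LATERAL_SPEED * fall_time     # sideways travel available during the descent

print(f"time aloft: {fall_time:.1f} s, lateral travel: {lateral_distance:.1f} m")
```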


As of right now, I believe we are on schedule, as we have finalized the project idea as well as the core requirements. As for next steps, we are going to begin the design phase of the project. We will begin experimenting with water bottles attached to parachutes, as well as antennas, to figure out certain design elements of our final device, such as which type of motor is appropriate and what the antenna setup should be.