Caroline’s Status Report for 4/29

This week, I helped with the unit testing for our robot and worked on materials for our final presentation and final poster. The testing plan is described in the Team Status Report; we are mainly observing performance while noting failure cases for possible future adjustments. I also laser cut small cylinders to hold the scents for our demo. We need these cylinders to prop up the scented object so that it can be detected by the ultrasonic sensors.

Aditti’s Status Report for 4/29

This week, I focused mostly on testing and making final tuning changes to constants. I also worked on the project’s documentation, including the final report and poster, ahead of the demo. We now have a concrete testing plan thanks to Eshita and will stick to it for the rest of the week. Apart from some testing during the earlier part of the week, I had a slow week as I was sick.

Eshita’s Status Report for 4/29

This week, I came up with a concrete testing plan based on the advice we received during our final presentation and performed 32 different test runs with the paint thinner scent. Coming up with the test plan was important for gathering the correct metrics on the robot’s performance. Testing made the tradeoff between random exploration and planned paths very clear: when the object was a straight path away from the robot, it would sometimes fail to converge within 3 minutes or fail to trigger on the scented object. I also tested a new power supply using lithium-ion batteries and observed a significant increase in the battery life of the robot’s motor driver and Arduino. Alkaline 9V batteries provided 30-45 minutes of battery life, whereas the lithium-ion batteries provided 3 hours and 13 minutes thanks to their higher capacity (1200 mAh compared to 350 mAh for the alkaline batteries).

Delivering the final presentation was an important milestone for our team. I also worked on the slides and script this week, and started writing sections of the design report and poster.

Team Status Report for 4/29

This week, we focused on presenting our robot’s performance in the final presentation and getting our final documentation ready. For unit testing, we carried out the following for each of our subsystems.

Unit Testing

For unit testing, we observed values locally on the Arduino Serial Monitor. Much of our testing was done in an arena-like setup (described under Overall Testing), watching the robot move through its different states and making the adjustments recorded in the findings below.

(1) Motion control: When designing the functions for motion control, we were careful to keep random exploration separate from scent localization. By printing out distances and observing angles, distances reached, and hard stops (where the robot overshoots or undershoots the target and corrects itself), we tested the robot’s basic motion and translation ability. This involved a lot of tuning for the robot’s weight, wheel speed, and the surfaces the robot can drive on comfortably. We tested obstacle avoidance similarly, tuning the position and height of the ultrasonic sensors and running the robot in an arena with unscented obstacles (a sketch of this kind of test harness follows the findings below).

Findings: We added two additional ultrasonic sensors, and glued our DC motors to the car chassis in order to get stable and reliable movement from the robot.
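
To illustrate the kind of harness we used for this, below is a minimal, hypothetical Arduino sketch that prints front ultrasonic distances to the Serial Monitor and flags hard stops; the pin numbers, threshold, and helper name are placeholders rather than our exact code.

```cpp
// Minimal motion/obstacle unit-test harness: print distances on the
// Serial Monitor while exercising the robot, and flag hard stops.
// Pin numbers and the threshold below are illustrative placeholders.

const int TRIG_PIN = 22;         // front ultrasonic trigger pin (example)
const int ECHO_PIN = 23;         // front ultrasonic echo pin (example)
const float OBSTACLE_CM = 20.0;  // hard-stop distance, tuned empirically

float readFrontDistanceCm() {
  // fire a 10 us trigger pulse, then time the echo
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  // pulseIn returns the echo pulse width in microseconds;
  // dividing by 58 converts the round trip to centimeters
  return pulseIn(ECHO_PIN, HIGH) / 58.0;
}

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  float d = readFrontDistanceCm();
  Serial.print("front distance (cm): ");
  Serial.println(d);  // watch for overshoot/undershoot while driving
  if (d < OBSTACLE_CM) {
    Serial.println("hard stop: obstacle inside threshold");
  }
  delay(100);
}
```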

(2) Alerting: Since our robot has clearly defined states, the code separates the messages displayed and the LED color patterns for each state. We tested this by putting the robot through transitions between scan mode, random exploration, obstacle avoidance, and classification, as shown in the figure below; a sketch of the state-to-output mapping follows the findings.

Findings: For a high-speed control loop like ScentBot’s, we decided to host the entire system locally; the only communication is to the output devices, the LCD and LED display. Before settling on this, we explored cloud computing, UART, Wi-Fi, and master-slave byte transfer, as covered in our earlier status reports.
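
As a sketch of how each state maps to an output pattern, the following minimal example uses the Adafruit NeoPixel library; the state names, pin, LED count, and colors are assumptions for illustration (the real code also updates the LCD message for each state).

```cpp
// Cycle through the robot's states and show a distinct NeoPixel
// pattern for each, to visually verify every transition.
#include <Adafruit_NeoPixel.h>

enum RobotState { SCANNING, EXPLORING, AVOIDING, CLASSIFYING };

Adafruit_NeoPixel ring(16, 6, NEO_GRB + NEO_KHZ800);  // 16 LEDs on pin 6

void showState(RobotState s) {
  uint32_t color = 0;
  switch (s) {
    case SCANNING:    color = ring.Color(0, 0, 255);   break;  // blue
    case EXPLORING:   color = ring.Color(0, 255, 0);   break;  // green
    case AVOIDING:    color = ring.Color(255, 165, 0); break;  // orange
    case CLASSIFYING: color = ring.Color(255, 0, 255); break;  // magenta
  }
  ring.fill(color);
  ring.show();
}

void setup() {
  ring.begin();
}

void loop() {
  // step through every state to check each pattern by eye
  for (int s = SCANNING; s <= CLASSIFYING; s++) {
    showState(static_cast<RobotState>(s));
    delay(2000);
  }
}
```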

(3) Sensing and Scent Localization: The sensors were integrated with our LCD display, which let us see the samples being collected at any point in the robot’s traversal, along with the scan angles. We tested whether the robot translated to the correct scan angle, and tested scenarios where an obstacle was in the way while scanning and while translating to the maximum scan angle, so that the robot would not run into the object. By observing rising and falling sensor values, we could also estimate whether the robot was entering or exiting the scented region, which helped us catch false positives. We additionally tuned the thresholds that multiple sensors must cross to detect and confirm the scent (sketched after the findings below).

Findings: We learned that certain channels, namely the TVOC channel on the ENS160 and the ethanol channel on the Grove sensor, classify our scents better and work with higher/lower thresholds. We also increased our scan time from 3 s to 5 s to collect more samples, which prevents the robot from having to perform repeated scans and makes it more confident in its scent localization.
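
A minimal sketch of this detect-versus-confirm thresholding is below; the threshold values and sensor-reading stubs are placeholders, since our tuned constants and the actual ENS160/Grove drivers live elsewhere in the codebase.

```cpp
// Two-stage thresholding: either channel rising triggers a scan
// ("detected"), but both channels must agree within the 5 s scan
// window before we classify ("confirmed"). All values are examples.

const float TVOC_DETECT     = 150.0;  // ENS160 TVOC channel (example)
const float ETHANOL_DETECT  = 2.0;    // Grove ethanol channel (example)
const float TVOC_CONFIRM    = 400.0;
const float ETHANOL_CONFIRM = 5.0;
const unsigned long SCAN_MS = 5000;   // scan window, raised from 3 s

// stubs standing in for the real sensor drivers
float readTvoc()    { return 0.0; }
float readEthanol() { return 0.0; }

bool scentDetected() {
  // either channel rising is enough to start a scan
  return readTvoc() > TVOC_DETECT || readEthanol() > ETHANOL_DETECT;
}

bool scentConfirmed() {
  // require both channels to cross their confirm thresholds at some
  // point within the scan window before handing off to classification
  unsigned long start = millis();
  bool tvocHit = false, ethanolHit = false;
  while (millis() - start < SCAN_MS) {
    tvocHit    = tvocHit    || readTvoc() > TVOC_CONFIRM;
    ethanolHit = ethanolHit || readEthanol() > ETHANOL_CONFIRM;
    delay(50);  // sample roughly every 50 ms
  }
  return tvocHit && ethanolHit;
}

void setup() { Serial.begin(9600); }

void loop() {
  if (scentDetected() && scentConfirmed()) {
    Serial.println("scent confirmed: ready to classify");
  }
  delay(200);
}
```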

(4) Classification: Our SVC model was tested on a train-test split of data collected over 2 days, under varying temperature conditions, for ambient, paint thinner, and alcohol scents. This was done locally in a Colab notebook before the model was integrated onto the Arduino Mega. The ability to recognize and confirm the scent while the robot is moving was evaluated as part of our overall testing.

Findings: We moved the SVC model from a polynomial kernel to a linear kernel to account for the limited space on the Arduino. The linear model also performed better in our unit tests with live sensor readings. We also experimented with normalization and added statistics for each of our sensor channels: RMS, mean, standard deviation, maximum, and minimum values.
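
As an illustration, a window of readings from one sensor channel can be summarized into these statistics before being concatenated into the SVC feature vector; the struct layout and the demo window below are assumptions, not our exact feature order.

```cpp
// Summarize a window of raw samples from one channel into the
// five statistics used as SVC features.
#include <math.h>

struct Features {
  float rms, mean, stddev, maxv, minv;
};

Features summarize(const float *x, int n) {
  // assumes n >= 1
  Features f;
  float sum = 0, sumSq = 0;
  f.maxv = x[0];
  f.minv = x[0];
  for (int i = 0; i < n; i++) {
    sum   += x[i];
    sumSq += x[i] * x[i];
    if (x[i] > f.maxv) f.maxv = x[i];
    if (x[i] < f.minv) f.minv = x[i];
  }
  f.mean = sum / n;
  f.rms  = sqrt(sumSq / n);
  // population standard deviation: sqrt(E[x^2] - E[x]^2)
  f.stddev = sqrt(sumSq / n - f.mean * f.mean);
  return f;
}

void setup() {
  Serial.begin(9600);
  float window[4] = {1.0, 2.0, 2.0, 3.0};
  Features f = summarize(window, 4);
  Serial.println(f.mean);  // prints 2.00
}

void loop() {}
```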

Overall Testing

Initially, we conducted 24 tests with alcohol and paint thinner scents by randomly placing the object and robot in different positions. After receiving feedback from our professor on the number of trials and the statistics we had reported, we went back and conducted more testing according to the concrete plan described below.

Our overall testing plan is shown in the figure below. We place the object, a cotton ball scented with paint thinner or alcohol, at one of 9 grid positions and test the robot’s convergence time starting from each of the corners (1, 2, 3, 4) of the map, making sure the object is at least 1m away from the robot. This gives 32 valid configurations to test per scent (9 positions x 4 starting corners, excluding the pairings that violate the 1m minimum). We currently have over 35 trials with the paint thinner scent and aim to complete testing in this manner with alcohol this weekend. An example of the test metrics we collect is shown below as well.


Caroline’s Status Report for 4/22

Over the past weeks, I worked on various improvements to the robot, including modifying and reprinting the design of the top and back panels, integrating two new ultrasonic sensors on the sides of the robot to reduce obstacle collisions, and making various adjustments to the scent detection and confirmation logic to improve detection accuracy.

With regards to the robot exterior, I redesigned the top of the robot to include cutouts for the LCD screen and the newly added NeoPixel ring, so that all of the wiring is hidden inside the robot and it looks more finished overall. I also added vents to the back of the robot to help with airflow through the chamber. Lastly, I drilled two holes in the sides of the robot to install the two new side ultrasonic sensors.

The two ultrasonic sensors were added to help prevent the robot from running into the scented object, which happened often while it was scanning. I initially tried logic that made the robot reverse slightly if it detected a side object during scanning and then continue its scans from the same position; getting the robot to reverse while preserving the scan angle required significant effort, and my team helped me debug it. While testing obstacle detection, we also decided to change the obstacle avoidance logic during random exploration, because the robot sometimes got stuck when facing an obstacle. Now, if the front ultrasonic sensor is triggered, the robot reverses and then continues to a new random coordinate, and if a side ultrasonic sensor is triggered, it rotates 90 degrees in the opposite direction (sketched below).

Lastly, I changed the scent detection and confirmation logic to also take into account readings from the ENS160 TVOC channel in addition to the Grove ethanol channel. The Grove sensor has had sensitivity issues where the ethanol value will not change at all unless the scent is very close. The ENS160 TVOC channel is more sensitive to scents at a distance, but it is also more prone to random spikes, which risks triggering more false detections. We tested different thresholds and logic for combining the two sensors and will continue to tune them if necessary.
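
A minimal sketch of that exploration-time avoidance logic is below; the helper functions and thresholds are hypothetical stand-ins for our actual sensor and motor routines.

```cpp
// Obstacle avoidance during random exploration: a front trigger means
// reverse and pick a new random target; a side trigger means rotate
// 90 degrees away from the obstacle. Thresholds are examples.

const float FRONT_THRESHOLD_CM = 20.0;
const float SIDE_THRESHOLD_CM  = 10.0;

// stubs standing in for the real sensor and motor helpers
float frontDistanceCm() { return 100.0; }
float leftDistanceCm()  { return 100.0; }
float rightDistanceCm() { return 100.0; }
void reverse()              { /* drive both motors backward briefly */ }
void rotateDegrees(int deg) { /* timed in-place rotation; + is clockwise */ }
void pickNewRandomTarget()  { /* sample a new random coordinate */ }

void avoidObstaclesDuringExploration() {
  if (frontDistanceCm() < FRONT_THRESHOLD_CM) {
    reverse();              // front blocked: back up and give up on
    pickNewRandomTarget();  // the current target
  } else if (leftDistanceCm() < SIDE_THRESHOLD_CM) {
    rotateDegrees(90);      // obstacle on the left: turn right
  } else if (rightDistanceCm() < SIDE_THRESHOLD_CM) {
    rotateDegrees(-90);     // obstacle on the right: turn left
  }
}

void setup() {}

void loop() {
  avoidObstaclesDuringExploration();
  delay(100);
}
```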

In the upcoming week, my main focus will be assisting with the testing of the robot and making adjustments according to our observations. We don’t anticipate making any more major system changes; the main goal is to improve the reliability and accuracy of the scent detection for our demo.

Aditti’s Status Report for 4/22

Eshita’s Status Report for 4/22

This week, my focus was to round out and complete the overdue tasks of creating and integrating our embedded ML model and adding NeoPixel light cycles for ScentBot’s different states. We collected data for paint thinner, smoke, and alcohol as a team. While exploring ML models we could use, I came across a GCP tool called Neuton, which has tutorials showing its models running on an Arduino. After training on our dataset to create a neural network, we faced issues integrating it on the Arduino Mega specifically. Upon more research, I found that the driver required to run the model was not the same for an ATmega processor, and hence the model would not be compatible. I instead shifted tracks to MicroMLgen, which can convert a Python Support Vector Classification model into a C header file that we can include in our Arduino sketch as a library; this is the approach ScentBot currently uses.

In creating the SVC, there was a tradeoff between storage space on the Mega and the accuracy the model could obtain. We also found that while the model classified well once a high threshold was reached, localization proved difficult due to the high number of false positives. Hence, we decided on a linear SVC model that only classifies after a set threshold of sensor values is reached. In testing, we also found it difficult to test smoke in the presence of other flammable substances and to direct it toward the sensor array. We explored propane and isobutane medical sprays as potential triggers, but their concentrations were not high enough to trip the sensors. We decided to aim to have ScentBot work with alcohol, paint thinner, and ambient scents for our final demo.
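
For reference, a sketch of the Arduino-side usage is below, assuming the header generated by MicroMLgen follows its documented pattern (a class in the Eloquent::ML::Port namespace exposing predict()); the file name, feature layout, and class labels are illustrative assumptions.

```cpp
// Threshold-gated classification with a MicroMLgen-exported linear SVC.
// "model.h" is the header produced from the trained Python model;
// the class/label mapping below is an assumption for illustration.
#include "model.h"

Eloquent::ML::Port::SVM classifier;

// only run the model once the global sensor threshold has been
// reached, so ScentBot classifies only when it is confident a
// scented object is present
int classifyIfConfident(float *features, bool thresholdReached) {
  if (!thresholdReached) {
    return -1;  // not confident yet; keep exploring/scanning
  }
  // returns a class index, e.g. 0 = ambient, 1 = alcohol,
  // 2 = paint thinner (mapping depends on training labels)
  return classifier.predict(features);
}
```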

I am currently on track with all our tasks. Our testing will continue into next week as we complete trial runs beyond the final presentation. I still need to work on my script, presentation delivery, and timing for our slides on Monday, which we will refine together during a practice run tomorrow.


Team Status Report for 4/22

Coming into the final stretch of our project, we dedicated this week to achieving better obstacle detection and integrating our multiclass classification algorithm. We also introduced a NeoPixel LED ring light on top of the robot, with different color patterns for the various modes and scents that ScentBot can identify.

We added two additional ultrasonic sensors, one on each side of the robot, and implemented new obstacle avoidance logic for when the robot detects objects to its sides. This has dramatically reduced how often the robot runs into the scented object while exploring and scanning: it is now able to back up and continue its scan and/or random exploration. ScentBot also now combines multiple sensor values to determine whether a scent has been detected or confirmed, which has improved detection and confirmation performance.

We also explored utilizing propane and isobutane sprays for our third scent, as we hypothesized that substances containing hydrocarbons would trigger the sensors. Upon testing with our sensor array, we discovered that the concentrations of TVOCs, ethanol, and other hydrocarbons were not high enough to trigger our sensors. We have therefore decided to have our embedded Support Vector Classification (SVC) model work only on alcohol, paint thinner, and ambient scent. We also integrated the SVC model on the Arduino Mega so that it only classifies a scent after a global threshold has been reached, a decision that weighs the tradeoff between false positive rate and sensor sensitivity. This ensures that ScentBot is confident enough over a sampling period that there is a scented object.

We have come up with a test plan and run 20 initial trials with paint thinner and alcohol, introducing 0, 1, or 2 unscented objects per trial to observe ScentBot’s performance. So far, the classification has been correct in every trial, with an average convergence time of around 183s and an average first scan detection at 39cm. We expect these numbers to become more representative of ScentBot’s performance as we run more trials, which is our goal before our final demo. We are also fine-tuning the front ultrasonic sensor to prevent ScentBot from running into walls.

Linked is a successful test run, similar to the ones we plan to showcase at our final demonstration.

Caroline’s Status Report for 4/8

During the first half of this week, I helped test and refine our robot in preparation for the interim demo. My team tested many different locations for the demo and tuned hyperparameters to improve the robot’s ability to track down a scent. After the demo, I swapped the Arduino Uno in our robot for an Arduino Mega. This solved the memory issues we were facing, and we are now able to read from multiple sensors at a time: we can simultaneously measure ethanol values along with CO2 and CO, and get temperature and humidity readings. We tested the new sensor array with both incense smoke and isopropyl alcohol and found that both scents cause all of the sensor channels to rise. This is problematic because it means we cannot differentiate scents just by thresholding individual sensor channels. Because of this, we are switching to an ML model to distinguish between scents. Moving forward, I will help improve this model and test the robot. Once the classification component is integrated, our focus will shift to improving the existing setup.

In terms of verification and validation, my team has done extensive testing to optimize the robot’s hyperparameters and the testing setup. We placed the robot in multiple environments and observed whether or not it detected and tracked down a scented cotton ball placed there. We discovered that the robot cannot move well on certain uneven surfaces or in rooms with strong air conditioning; in these cases, the robot struggled to move or would not move in the correct direction toward the scent. Moving forward, we will continue adjusting the robot and testing setup to ensure it can consistently meet the design requirements of tracking down and correctly identifying a scent. We will track metrics such as the time the robot takes to identify a scent and the scent classification accuracy.

Team Status Report for 4/8

The first half of this week focused on testing and verification with alcohol for our interim demo and practicing our pitch. We tested in various environments to figure out which factors influence our readings. We also transitioned our robot over to the Arduino Mega and redid all the connections to fit the new board. The robot can now read from all three sensors and perform its movement calculations at the same time without memory issues. We have also begun dataset generation for paint thinner and alcohol, and will complete the remaining scents by this weekend. We will be training a neural network using TinyML, deploying it on the Arduino, and using the prediction confidence to determine the direction of movement. There are several risks associated with this: 1) generating useful feature vectors through pre-processing, 2) memory constraints in deploying the model to the Arduino, and 3) the time taken for inference. Since we are using Neuton AI (which uses TinyML), we are hopeful that the model will be deployable on the Arduino. We plan on testing our setup extensively in our newly constructed 2m x 2m field.