Caroline’s Status Report for 4/29

This week, I helped with the unit testing for our robot and worked on materials for our final presentation and final poster. The testing plan is described in the Team Status Report; we are mainly observing performance and noting failure cases for possible future adjustments. I also laser-cut small cylinders to hold the scents for our demo. These cylinders prop up the scented object so that it can be detected by the ultrasonic sensors.

Aditti’s Status Report for 04/29

This week focused on testing and on making final tuning changes to constants. I also worked on the project's documentation, including the final report and poster, ahead of the demo. We now have a concrete testing plan thanks to Eshita and will stick to it for the rest of the week. Apart from some testing early in the week, I had a slow week as I was sick.

Caroline’s Status Report for 4/22

Over the past few weeks, I worked on various improvements to the robot, including modifying and reprinting the top and back panels, integrating two new ultrasonic sensors on the sides of the robot to reduce obstacle collisions, and adjusting the scent detection and confirmation logic to improve detection accuracy.

With regards to the robot exterior, I redesigned the top of the robot to include cutouts for the LCD screen and the newly added Neopixel so that all of the wiring is hidden inside the robot and it looks more finished overall. I also added vents to the back of the robot to help with airflow through the chamber. Lastly, I drilled two holes in the sides of the robot to install the two new side ultrasonic sensors.

The two ultrasonic sensors were added to keep the robot from running into the scented object, which happened often while it was scanning. I initially implemented logic that made the robot reverse slightly if it detected a side object during scanning and then continue its scans from the same position. Getting the robot to reverse while preserving the scan angle required significant effort, and my team helped me debug it. While testing obstacle detection, we also decided to change the obstacle avoidance logic during random exploration, since the robot sometimes got stuck when facing an obstacle. Now, if the front ultrasonic sensor is triggered, the robot reverses and then continues to a new random coordinate, and if a side ultrasonic sensor is triggered, it rotates 90 degrees in the opposite direction.

Lastly, I changed the scent detection and confirmation logic to take into account readings from the ENS160 TVOC channel in addition to the Grove ethanol channel. The Grove sensor has had sensitivity issues: the ethanol value does not change at all unless the scent is very close. The ENS160 TVOC channel is more sensitive to scents at a distance, but it is also more prone to random spikes, which risks triggering more false detections. We tested different thresholds and logic for combining the two sensors and will continue to tune this as necessary.
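A rough sketch of how the two channels can be combined is below. The thresholds and the persistence count are illustrative placeholders, not our tuned values; the idea is that the reliable but insensitive ethanol channel triggers immediately, while the spike-prone TVOC channel must stay elevated for several consecutive samples.

```cpp
// Hypothetical thresholds; the real tuned values differ.
constexpr float kEthanolThresh = 120.0f;  // Grove ethanol raw reading
constexpr float kTvocThresh    = 400.0f;  // ENS160 TVOC, ppb
constexpr int   kTvocPersist   = 3;       // consecutive samples required

struct ScentDetector {
    int tvocStreak = 0;

    // Returns true when a scent is considered detected. The ethanol channel
    // triggers on its own; the TVOC channel must exceed its threshold for
    // several consecutive samples so that a single spike is ignored.
    bool update(float ethanol, float tvoc) {
        if (ethanol > kEthanolThresh) return true;
        tvocStreak = (tvoc > kTvocThresh) ? tvocStreak + 1 : 0;
        return tvocStreak >= kTvocPersist;
    }
};
```

A single TVOC spike resets nothing permanently: the streak counter simply drops back to zero the moment the reading falls below threshold.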

In the upcoming week, my main focus will be assisting with testing the robot and making adjustments based on our observations. We don't anticipate any more major system changes; the main goal is to improve the reliability and accuracy of scent detection for our demo.

Aditti’s Status Report for 04/22

Team Status Report for 4/22

Coming into the final stretch of our project, we dedicated this week to achieving better obstacle detection and integrating our multiclass classification algorithm. We also introduced a Neopixel LED ring light on top of the robot with various color patterns for different modes and scents that ScentBot can identify. 

We added two additional ultrasonic sensors, one on each side of the robot, and implemented new obstacle avoidance logic for when the robot detects objects to its side. We have seen a tremendous improvement: the robot no longer runs into the scented object while exploring and scanning. It is now able to back up and continue its scan and/or random exploration. ScentBot also now combines multiple sensor values to decide whether a scent has been detected or confirmed, which has improved detection and confirmation performance.
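During exploration, the avoidance behavior reduces to a small decision rule: front obstacle means back up and pick a new random target, side obstacle means rotate 90 degrees away from it. A sketch of that rule (the distance threshold is a placeholder, not our tuned value):

```cpp
enum class Action { Continue, ReverseAndReroute, RotateLeft90, RotateRight90 };

// Hypothetical obstacle distance in cm; the real tuned value may differ.
constexpr float kObstacleCm = 15.0f;

Action avoidanceAction(float frontCm, float leftCm, float rightCm) {
    if (frontCm < kObstacleCm) return Action::ReverseAndReroute;  // back up, new random coordinate
    if (leftCm < kObstacleCm)  return Action::RotateRight90;      // obstacle on left: turn right
    if (rightCm < kObstacleCm) return Action::RotateLeft90;       // obstacle on right: turn left
    return Action::Continue;
}
```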

We also explored using propane and isobutane sprays for our third scent, as we hypothesized that substances containing hydrocarbons would trigger the sensors. Upon testing with our sensor array, we discovered that the concentrations of TVOCs, ethanol, and other hydrocarbons were not high enough to trigger our sensors. We have therefore decided to have our embedded Support Vector Classification (SVC) model work only on the following: alcohol, paint thinner, and ambient scent. We also integrated the SVC model on the Arduino Mega so that it only classifies a scent after a global threshold has been reached. This decision was made considering the tradeoff between false positive rate and sensor sensitivity, and it ensures that ScentBot is confident enough over a sampling period that a scented object is present.
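The threshold-gated classification can be sketched as below. The feature count, weights, and biases are toy placeholders for illustration; the real SVC parameters come from training. Until the global threshold is reached, the function simply declines to classify.

```cpp
#include <array>

constexpr int kNumClasses  = 3;  // ambient, alcohol, paint thinner
constexpr int kNumFeatures = 4;  // e.g. ethanol, TVOC, CO2, CO channels

// Toy one-vs-rest weights for illustration only; real values come from training.
const float kDemoW[kNumClasses][kNumFeatures] = {
    {1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}};
const float kDemoB[kNumClasses] = {0, 0, 0};

// Returns -1 while the global detection threshold has not been reached, so
// the SVC only runs once ScentBot is confident a scented object is nearby.
// Otherwise returns the argmax of the per-class linear scores w.x + b.
int gatedClassify(float globalReading, float globalThresh,
                  const std::array<float, kNumFeatures>& x,
                  const float w[kNumClasses][kNumFeatures],
                  const float b[kNumClasses]) {
    if (globalReading < globalThresh) return -1;
    int best = 0;
    float bestScore = b[0];
    for (int f = 0; f < kNumFeatures; ++f) bestScore += w[0][f] * x[f];
    for (int c = 1; c < kNumClasses; ++c) {
        float s = b[c];
        for (int f = 0; f < kNumFeatures; ++f) s += w[c][f] * x[f];
        if (s > bestScore) { bestScore = s; best = c; }
    }
    return best;
}
```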

We have come up with a test plan and run 20 initial trials with paint thinner and alcohol, introducing 0, 1, or 2 unscented objects per trial to observe ScentBot's performance. So far the classification has been correct in every trial, with an average convergence time of around 183 s and an average first-scan detection distance of 39 cm. We expect these figures to become more representative of ScentBot's performance as we run more trials, which is our goal before the final demo. We are also fine-tuning the front ultrasonic sensor to prevent ScentBot from running into walls.

Linked is a successful test run, similar to the ones we plan to showcase at our final demonstration.

Caroline’s Status Report for 4/8

During the first half of this week, I helped test and refine our robot in preparation for the interim demo. My team tested many different locations for the demo and tuned hyperparameters to improve the robot's ability to track down a scent. After the demo, I swapped the Arduino Uno in our robot for an Arduino Mega. This solved the memory issues we were facing, and we can now read from multiple sensors at a time: we can simultaneously measure ethanol, CO2, and CO values and get temperature and humidity readings. We tested the new sensor array with both incense smoke and isopropyl alcohol and found that both scents cause all of the sensor channels to rise. This is problematic because it means we cannot differentiate scents just by thresholding individual sensor channels. Because of this, we are switching to an ML model to distinguish between scents. Moving forward, I will help improve this model and test the robot. Once classification is integrated, our focus will shift to improving the existing setup.

In terms of verification and validation, my team has done extensive testing to optimize the robot's hyperparameters and the testing setup. We have placed the robot in multiple environments and observed whether it detected and tracked down a scented cotton ball. We discovered that the robot cannot move on certain uneven surfaces or in rooms with strong air conditioning; in these cases, it struggled to move or did not move in the correct direction toward the scent. Moving forward, we will continue adjusting the robot and the testing setup to ensure it consistently meets the design requirements of tracking down and correctly identifying a scent. We will track metrics such as the time the robot takes to identify a scent and the scent classification accuracy.

Team Status Report for 4/08

The first half of this week focused on testing and verification using alcohol for our interim demo and on practicing our pitch. We tested in various environments to figure out which factors influence our readings. We also transitioned our robot to the Arduino Mega and redid all the connections to fit the new board. The robot can now read from all three sensors and perform navigation calculations and movement at the same time without memory issues. We have also begun dataset generation for paint thinner and alcohol, and will complete it this weekend for the remaining scents. We will be training a neural network using TinyML, deploying it on the Arduino, and using the prediction confidence to determine the direction of movement. The main risks are: 1) generating useful feature vectors with pre-processing, 2) memory constraints in deploying the model to the Arduino, and 3) inference time. Since we are using Neuton AI (which uses TinyML), we are hopeful that the model will be deployable on the Arduino. We plan to test our setup extensively in our newly constructed 2m x 2m field.

Aditti’s Status Report for 4/08

This week focused on testing and verification with one scent for the interim demo during the first half, and on making progress toward accommodating multiple scents in the second half. During the first half of the week, I tested our setup on multiple surfaces to ensure that the motors could carry the load at the desired speeds without locking up, and in rooms with different ventilation to make sure there was not too much interference with our sensor readings due to airflow. Once we found a suitable place, we focused on tuning the constants to detect alcohol reasonably well. Unfortunately, we later realized that the sensitivity of our sensors decreases as the ambient temperature increases, which caused some issues with testing.

After the demo, I worked on setting up the field for our robot to operate in: a 2m x 2m space walled with cardboard. I also set up the code to read from all of the sensors after transitioning to the Arduino Mega board. Caroline and I did some preliminary testing with smoke from incense and realized that most scents cause all sensor readings to rise, making it difficult to differentiate between scents with simple data processing. We worked on dataset generation to train and deploy a neural network on the Mega using TinyML. I helped collect data for paint thinner and isopropyl alcohol, and will be training the model over the weekend. Using machine learning should also help us account for differences due to temperature variations.

So far, we have done comprehensive testing for one scent (alcohol) using our slope-based detection approach in different environments and noted the factors that influence our readings: airflow due to ventilation and movement around the test field, surface texture, temperature, and sensor warm-up time. We can also do basic obstacle avoidance based on ultrasonic sensor measurements. Moving forward, we will integrate machine-learning-based approaches to see if we can meet our use-case requirement of differentiating between multiple scents with good accuracy. We will stick to our new field setup and run tests in both TechSpark and the UC gym to prepare for the final demo. We are currently meeting our budget and accessibility requirements. In the coming week, I will collect more training data and preprocess it to generate quality feature vectors. I will also work on deploying the model on the Arduino and integrating it with our motor control code.

Caroline’s Status Report for 4/1

This week I worked on implementing and testing the “targeted search” algorithm for our robot, where it tries to track down the direction of a scent upon detection. Initially, I used a simple thresholded sensor value to trigger the “targeted search” mode. Once it enters this mode, the robot stops in place and samples the sensor values at different angles of rotation, then proceeds in the direction where the maximum sensor value was measured. It confirms the location of a scent once the sensor value exceeds another, higher threshold. With perfect sensors, this algorithm should work in theory, but we have had to strategize and adjust due to noise and inconsistencies in the sensor readings. We tested different rotation ranges and tried using the slope of the best-fit line to more reliably trigger the search mode and identify the direction of the scented object, but the results are still not very consistent. We will try using a fan to amplify the distribution of the scent in preparation for the interim demo. Additionally, I integrated the ultrasonic sensor into the code, so the robot now stops and turns around if it gets too close to an obstacle. Before the interim demo, we need to build a barrier around the arena to keep the robot in bounds. We will also start testing the sensors’ sensitivity to paint thinner so that we can demo the robot detecting multiple scents.
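With ideal sensors, the heart of the scan step is just an argmax over the per-angle samples. A minimal sketch (the angle set, sample count, and any thresholds would be tuned on the robot; this is illustrative):

```cpp
#include <vector>
#include <cstddef>

// Given one sensor sample per scan angle, return the index of the angle
// with the strongest reading; the robot then drives in that direction.
int bestScanDirection(const std::vector<float>& samples) {
    int best = 0;
    for (std::size_t i = 1; i < samples.size(); ++i)
        if (samples[i] > samples[best]) best = static_cast<int>(i);
    return best;
}
```

In practice the noise described above means a single max sample can point the wrong way, which is why we experimented with slope-based variants instead.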

Aditti’s Status Report for 4/01

This week focused on integration and testing, and on implementing the code for navigation toward a scent source. Caroline set up the main logic for scanning in place upon detection of a scent (decided by a threshold), and I helped debug the code. I later changed detection to be based on gradients: an increasing slope of the best-fit line over a one-second window. We tested the code using the ENS160 TVOC readings for the alcohol scent, then switched to the Grove VOC reading instead, as it gave us more reliable results. We also concluded that a pump to suction air into the car is not a feasible fix for our sensors’ sensitivity and pick-up latency; we will instead need a fan behind the source to blow the evaporating particles so that the scent reaches the sensors in time. Since we still don’t have the Arduino Mega, we cannot read from multiple sensors at once due to memory constraints, and so cannot combine sensor readings to make predictions as we originally intended. We fixed the robot’s motor issues with some speed control and tuning, which now makes the robot go faster than we wanted but with less skidding. I also changed the random exploration to compute the next set of coordinates in polar rather than Cartesian form. Progress is slower than I wanted, but given the constraints of not having all the parts and the upcoming interim demo, we are preparing as best we can. Next week, I hope to switch over to the Arduino Mega, read from all three sensors, and test the code with multiple different substances to ensure that we can differentiate between them.
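The gradient-based trigger can be sketched as a least-squares slope over the one-second sampling window; detection fires when the slope stays above a tuned positive threshold. The sampling interval and threshold here are illustrative assumptions, not our tuned values.

```cpp
#include <vector>
#include <cstddef>

// Least-squares slope of sensor readings sampled at a fixed interval dt
// (seconds) over a window; a sustained positive slope suggests the robot
// is approaching the scent source.
float bestFitSlope(const std::vector<float>& y, float dt) {
    const std::size_t n = y.size();
    float sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        float x = i * dt;  // sample times 0, dt, 2*dt, ...
        sx += x; sy += y[i]; sxx += x * x; sxy += x * y[i];
    }
    float denom = n * sxx - sx * sx;
    return denom == 0 ? 0 : (n * sxy - sx * sy) / denom;
}
```

Fitting a line over the whole window, rather than differencing consecutive samples, smooths out the single-sample noise that made the raw-threshold trigger unreliable.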