Team Status Report for 4/29

This week, we focused on presenting the performance of our robot in our final presentations and getting our final documentation ready. For unit testing, we carried out the following for all of our subsystems.

Unit Testing

For unit testing, we observed values locally on the Arduino Serial Monitor. Much of our testing was done in an arena-like setup (described under our overall testing), where we watched the robot move through its different states and iterated toward the findings shown below.

(1) Motion control: When designing the functions for motion control, we were careful to separate random exploration from scent localization. By printing out distances and observing angles, distances reached, and hard stops (where the robot overshoots or undershoots the target and self-corrects), we tested the robot's basic motion and translation ability. This involved a lot of tuning for our robot's weight, wheel speed, and the surfaces the robot can drive on comfortably. For obstacle avoidance, we similarly tuned the position and height of the ultrasonic sensors and tested in an arena with unscented obstacles.
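As an illustration of the hard-stop behavior these tests exercised, below is a minimal sketch of a self-correction loop of the kind we describe; the helper routines and the tolerance value are hypothetical stand-ins for our actual motor and encoder code, not the exact implementation.

```cpp
// Hypothetical sketch of hard-stop self-correction: drive toward a target
// distance, stop once it is crossed, then nudge back and forth until the
// encoder reading is within tolerance. Helpers are placeholder stubs.
void driveForward()  { /* motor code omitted */ }
void driveBackward() { /* motor code omitted */ }
void stopMotors()    { /* motor code omitted */ }
float readEncoderDistanceCm() { return 0; /* encoder code omitted */ }

const float TOLERANCE_CM = 2.0;  // illustrative tolerance

void moveToTarget(float targetCm) {
  driveForward();
  while (readEncoderDistanceCm() < targetCm) { /* obstacle checks omitted */ }
  stopMotors();  // hard stop once the target is crossed

  float error = readEncoderDistanceCm() - targetCm;
  while (abs(error) > TOLERANCE_CM) {          // self-correct over/undershoot
    if (error > 0) driveBackward(); else driveForward();
    delay(50);
    stopMotors();
    error = readEncoderDistanceCm() - targetCm;
    Serial.println(error);  // observed on the Serial Monitor during tests
  }
}
```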

Findings: We added two additional ultrasonic sensors, and glued our DC motors to the car chassis in order to get stable and reliable movement from the robot.

(2) Alerting: Since our robot has clearly defined states, the code separates the messages displayed and the LED color patterns for each state. We tested this by driving the robot through its transitions between scan mode, random exploration, obstacle avoidance, and classification, as shown in the figure below.
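For reference, here is a minimal sketch of how the states can map to LED patterns, assuming the Adafruit NeoPixel library; the pin, pixel count, state names, and colors are illustrative, not our exact values.

```cpp
#include <Adafruit_NeoPixel.h>

Adafruit_NeoPixel ring(16, 6, NEO_GRB + NEO_KHZ800);  // illustrative count/pin

enum State { RANDOM_EXPLORE, SCAN, OBSTACLE_AVOID, CLASSIFY };

// One solid color per state so transitions are visible at a glance.
void showState(State s) {
  uint32_t color = 0;
  switch (s) {
    case RANDOM_EXPLORE: color = ring.Color(0, 0, 255);   break;  // blue
    case SCAN:           color = ring.Color(255, 255, 0); break;  // yellow
    case OBSTACLE_AVOID: color = ring.Color(255, 0, 0);   break;  // red
    case CLASSIFY:       color = ring.Color(0, 255, 0);   break;  // green
  }
  ring.fill(color);
  ring.show();
}

void setup() { ring.begin(); showState(RANDOM_EXPLORE); }
void loop() {}
```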

Findings: For a high-speed control loop like ScentBot's, we decided to host the entire system locally; the only communication is to the output devices, the LCD and the LED display. Before settling on this design, we explored cloud computing, UART, Wi-Fi, and master-slave byte transfer, as highlighted in our earlier status reports.

(3) Sensing and Scent Localization: The sensors were integrated with our LCD display, which helped us see the samples being collected at any point in the robot's traversal; the scan angles and samples taken appear on the LCD. We tested whether the robot turned to the correct scan angle, including scenarios where an obstacle was in the way while scanning or while turning to the maximum scan angle, so that the robot does not run into the object. By observing rising and falling sensor values, we also estimated whether the robot was entering or exiting a scent region, which helped us handle false positives. We performed additional tuning of our thresholds so that multiple sensors had to detect and confirm the scent in our testing.
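A minimal sketch of the scan sweep behavior these tests target is below; the angles, step size, sample counts, obstacle cutoff, and helper routines are all hypothetical placeholders rather than our tuned values.

```cpp
// Hypothetical scan sweep: step through scan angles, average samples at
// each, abort the sweep if the ultrasonic sensor reports an obstacle, and
// remember the angle with the strongest reading.
void rotateToAngle(float deg)  { /* motor code omitted */ }
float readScentSensor()        { return 0; /* e.g. TVOC value */ }
float readFrontUltrasonicCm()  { return 100; /* distance stub */ }

float bestAngle = 0;

void scanForScent() {
  float bestReading = 0;
  for (float angle = -90; angle <= 90; angle += 30) {  // illustrative sweep
    if (readFrontUltrasonicCm() < 15) break;  // don't run into the object
    rotateToAngle(angle);
    float sum = 0;
    for (int i = 0; i < 10; i++) { sum += readScentSensor(); delay(500); }
    float avg = sum / 10;  // ~5 s of samples at this angle
    if (avg > bestReading) { bestReading = avg; bestAngle = angle; }
  }
}
```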

Findings: We learned that certain channels classify our scents better and can work with higher or lower thresholds: namely, the TVOC channel on the ENS160 and the ethanol channel on the Grove sensor. We also increased our scan time from 3s to 5s to allow more samples to be collected, preventing the robot from having to perform repeated scans and making it more confident in its scent localization.

(4) Classification: Our SVC model was tested on a train-test split of data collected over two days, in varying temperature conditions, for ambient, paint thinner, and alcohol scents. This was done locally in a Colab notebook before integrating the model onto the Arduino Mega. The ability to recognize and confirm the scent while the robot is moving was evaluated in our overall testing.

Findings: We changed the SVC kernel from polynomial to linear to account for the limited memory on the Arduino; the linear model also performed better in our unit tests with live sensor readings. We also explored normalization and added statistics for each of our sensor channels: RMS, mean, standard deviation, maximum, and minimum values.
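As a sketch of the per-channel statistics we list, the following computes RMS, mean, standard deviation, maximum, and minimum over a window of readings; the struct and function names are ours for illustration only.

```cpp
#include <math.h>

struct Features { float rms, mean, stddev, maxv, minv; };

// Compute the summary statistics for one sensor channel's sample window.
Features computeFeatures(const float* x, int n) {
  float sum = 0, sumSq = 0, maxv = x[0], minv = x[0];
  for (int i = 0; i < n; i++) {
    sum += x[i];
    sumSq += x[i] * x[i];
    if (x[i] > maxv) maxv = x[i];
    if (x[i] < minv) minv = x[i];
  }
  float mean = sum / n;
  float var  = sumSq / n - mean * mean;   // population variance
  return { (float)sqrt(sumSq / n), mean,
           (float)sqrt(var > 0 ? var : 0), maxv, minv };
}
```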

Overall Testing

Initially, we conducted 24 tests with alcohol and paint thinner scents, randomly placing the object and robot in different positions. After receiving feedback from our professor on the number of trials and the statistics we had reported, we went back and conducted more testing according to the concrete plan described below.

Our overall testing plan is shown in the figure below. We place the object at one of 9 grid positions and test the robot's convergence time starting from each of the four corners (1, 2, 3, 4) of the map, making sure the object is at least 1m away from the robot's starting position. The object is a paint thinner or alcohol scent on a cotton ball. After excluding starting combinations that violate the 1m constraint, this gives us 32 configurations to test for each scent. We currently have over 35 trials with our paint thinner scent and aim to complete testing in this manner with alcohol this weekend. An example of our test metrics being collected is shown below as well.

 

Team Status Report for 4/22

Coming into the final stretch of our project, we dedicated this week to achieving better obstacle detection and integrating our multiclass classification algorithm. We also introduced a Neopixel LED ring light on top of the robot with various color patterns for different modes and scents that ScentBot can identify. 

We added two additional ultrasonic sensors, one on each side of the robot, and implemented new obstacle avoidance logic for when the robot detects objects to its sides. This has greatly reduced how often the robot runs into the scented object while exploring and scanning; the robot is now able to back up and continue its scan and/or random exploration. ScentBot also now combines values from multiple sensors to determine whether a scent has been detected or confirmed, which has improved detection and confirmation performance.
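A minimal sketch of side-sensor avoidance logic of this kind follows; the helper names, the 20cm clearance, and the backup duration are hypothetical, not our tuned values.

```cpp
// Hypothetical side-obstacle check: if either side ultrasonic sensor sees
// an object too close, back up briefly, then resume the current mode.
float readLeftUltrasonicCm()  { return 100; /* sensor stub */ }
float readRightUltrasonicCm() { return 100; /* sensor stub */ }
void driveBackward()     { /* motor code omitted */ }
void stopMotors()        { /* motor code omitted */ }
void resumeCurrentMode() { /* back to scan or random exploration */ }

const float SIDE_CLEARANCE_CM = 20.0;  // illustrative threshold

void checkSideObstacles() {
  if (readLeftUltrasonicCm()  < SIDE_CLEARANCE_CM ||
      readRightUltrasonicCm() < SIDE_CLEARANCE_CM) {
    driveBackward();
    delay(400);      // illustrative backup duration
    stopMotors();
    resumeCurrentMode();
  }
}
```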

We also explored utilizing propane and isobutane sprays for our third scent, as we hypothesized that substances with hydrocarbons would trigger the sensors. Upon testing with our sensor arrays, we discovered that the concentrations of TVOCs, ethanol, and other hydrocarbons were not high enough to trigger our sensors. We have therefore decided to have our embedded Support Vector Classification (SVC) model work only on alcohol, paint thinner, and ambient scents. We also integrated the SVC model on the Arduino Mega so that it only classifies a scent after a global threshold has been reached; this decision was made considering the tradeoff between false-positive rate and sensor sensitivity, and it ensures that ScentBot is confident enough over a sampling period that there is a scented object.
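The gating idea can be sketched as follows, with a placeholder aggregate signal, threshold, sample count, and svcPredict() call standing in for our actual parameters and embedded model:

```cpp
// Hypothetical threshold gate: only invoke the embedded SVC once the
// combined sensor signal has stayed above a global threshold for a full
// sampling period, trading sensitivity for fewer false positives.
float readCombinedScentSignal() { return 0; /* aggregate sensor stub */ }
int   svcPredict()              { return 0; /* embedded SVC stub */ }

const float GLOBAL_THRESHOLD = 150.0;  // illustrative
const int   CONFIRM_SAMPLES  = 20;     // consecutive samples required

int aboveCount = 0;

void classifyIfConfident() {
  if (readCombinedScentSignal() > GLOBAL_THRESHOLD) {
    if (++aboveCount >= CONFIRM_SAMPLES) {
      Serial.println(svcPredict());  // e.g. ambient / alcohol / paint thinner
      aboveCount = 0;
    }
  } else {
    aboveCount = 0;  // reset whenever the signal drops below threshold
  }
}
```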

We have come up with a test plan and run 20 initial trials with paint thinner and alcohol, introducing 0, 1, or 2 unscented objects per trial to observe ScentBot's performance. So far, the classification has been correct in every trial, with an average convergence time of around 183s and an average first-scan detection distance of 39cm. We expect these numbers to become more representative of ScentBot's performance as we run more trials, which is our goal before our final demo. We are also working on fine-tuning the front ultrasonic sensor to prevent ScentBot from running into walls.

Linked is a successful test run, similar to the ones we plan to showcase at our final demonstration.

Team Status Report for 4/08

The first half of this week focused on testing and verification using alcohol for our interim demo and practicing our pitch. We tested in various environments to figure out what factors influence our readings. We also transitioned our robot over to the Arduino Mega and redid all the connections to fit the new board. The robot can now read from all three sensors and perform robot calculations and movement at the same time without memory issues. We have also begun dataset generation for paint thinner and alcohol, which we will complete for the remaining scents by this weekend. We will be training a neural network using TinyML, deploying it on the Arduino, and using the prediction confidence to determine the direction of movement. There are several risks associated with this: 1) generating useful feature vectors with pre-processing, 2) memory constraints in deploying the model to the Arduino, and 3) the time taken for inference. Since we are using Neuton AI (which uses TinyML), we are hopeful that the model will be deployable on the Arduino. We plan on testing our setup extensively in our newly constructed 2m x 2m field.
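As a rough sketch of how prediction confidence could steer movement, consider the following; modelPredictConfidence() is a hypothetical placeholder (Neuton generates its own inference API, which we do not reproduce here), and the angular spacing is illustrative.

```cpp
// Hypothetical confidence-based steering: evaluate the feature vector
// collected at each candidate heading and drive toward the heading whose
// prediction confidence is highest.
float modelPredictConfidence(const float* f, int n) { return 0; /* stub */ }
void rotateToAngle(float deg) { /* motor code omitted */ }
void driveForward()           { /* motor code omitted */ }

void steerByConfidence(float feats[][8], int nAngles, int nFeatures) {
  int best = 0;
  float bestConf = 0;
  for (int i = 0; i < nAngles; i++) {
    float conf = modelPredictConfidence(feats[i], nFeatures);
    if (conf > bestConf) { bestConf = conf; best = i; }
  }
  rotateToAngle(best * 30.0);  // illustrative angular spacing
  driveForward();
}
```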

Team Status Report for 4/1

This week, our focus was to be prepared for our interim demos next week. Based on our meetings, we implemented a gradient estimate using a least-squares best-fit line over 10 samples taken every second. We then hardcoded a threshold value for this slope that sends the robot into a scanning mode to detect scents. Trying this with the ENS160 sensor gave very unpredictable results because its values are sensitive and inconsistent, so we switched to the Grove multichannel sensor's TVOC value to detect ethanol. This proved more consistent, and we also increased the scan time taken once the robot detects a scent, to account for the weaker response of the Grove sensor. Our experimentation shows that a lag still exists between encountering a scent and the sensor detecting it; because of how the robot calculates angles while scanning, this lag can lead it to turn the wrong way, since it picks up the scent several seconds after encountering it. We have several strategies to mitigate this risk and work around the sensors' inconsistent nature.
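For reference, the slope of a least-squares best-fit line over the 10 most recent samples can be computed as below; the threshold value is illustrative, not our tuned constant.

```cpp
// Least-squares slope over the last N samples, with x = 0..N-1:
// slope = (N*Sxy - Sx*Sy) / (N*Sxx - Sx*Sx)
const int N = 10;
float samples[N];  // filled with the most recent batch of readings

float slopeOfSamples() {
  float sx = 0, sy = 0, sxy = 0, sxx = 0;
  for (int i = 0; i < N; i++) {
    sx  += i;
    sy  += samples[i];
    sxy += i * samples[i];
    sxx += (float)i * i;
  }
  return (N * sxy - sx * sy) / (N * sxx - sx * sx);
}

const float SLOPE_THRESHOLD = 5.0;  // illustrative, hardcoded after tuning

bool shouldEnterScanMode() {
  return slopeOfSamples() > SLOPE_THRESHOLD;
}
```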

Having a consistent airflow behind the source helped in finding the object. We also tried using air pumps to direct air from the object onto the sensor, but this showed no improvement in performance. We also discovered that the wheels getting stuck was due more to wheel speed than to surface friction; increasing the speed has fixed the issue for now, but we are also monitoring the overall power usage of the robot and motors while it randomly explores.

We integrated our code with the ultrasonic sensor, and the robot now comes to a hard stop and reorients itself so as not to run into obstacles or the walls around our test arena. We are also meeting to work out the exact pitch and scenario we want to present during our interim demo. Currently, the entire system runs locally off a single Arduino sketch to detect ethanol-based scents. In most cases, given the correct airflow, it can begin a scan mode near the object's location. As mentioned earlier, where it decides to localize the scent depends on the airflow and the timing of the sensors. With this and more fine-tuning of our scent-confirmed threshold, we hope to demonstrate this functionality during the interim demo.

Communication with our classification model has proved challenging. We decided to use a NodeMCU: the Arduino sends sensor data to the NodeMCU, which then passes the data to a classification model and returns the result. I2C communication has proved impossible to implement, as the NodeMCU cannot receive data from the slave Arduino and update across Wi-Fi at the same time. An alternative we considered was to host the classification model on the NodeMCU itself and have it communicate over I2C or serial, since the two boards are physically tethered. However, the speed of I2C communication does not fit the high-speed control flow of the robot. Serial communication is the other alternative we explored; although it is faster, we are facing issues in sending an array of float data and receiving all the updated values on the NodeMCU. Looking past the interim demo, this is the biggest risk in our project, and we are actively working to mitigate it and devise alternatives.
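One common way to frame a float array over serial is sketched below: a start byte, a count, the raw float bytes, and an XOR checksum so the NodeMCU can detect partial reads. This illustrates the approach under discussion, not a finalized protocol.

```cpp
// Hypothetical framing for sending an array of floats over serial.
const uint8_t START_BYTE = 0xAA;

void sendFloats(const float* vals, uint8_t count) {
  Serial.write(START_BYTE);            // frame marker
  Serial.write(count);                 // number of floats to follow
  uint8_t checksum = 0;
  const uint8_t* bytes = (const uint8_t*)vals;
  for (size_t i = 0; i < count * sizeof(float); i++) {
    Serial.write(bytes[i]);            // raw little-endian float bytes
    checksum ^= bytes[i];
  }
  Serial.write(checksum);              // receiver re-XORs and compares
}
```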

Team Status Report for 3/25

This week, we worked on robot construction and integration of our sensor and motion subsystems. For robot assembly, we began soldering connections onto a new protoboard and organizing all of the internal connections. We also glued the robot frame together and were finally successful in securing the motors to the chassis using wood glue, which greatly improved the stability of the wheels and the accuracy of the robot's motion. For integration, we wired together all of our subsystems so that the sensors and motors are connected to and controlled by the same MCU. When we combined everything, we realized that a single 9V battery was not enough to power every component, so we connected an additional 9V battery to directly power the Arduino. We also worked on software integration, combining the motion logic and sensor logic: we can now take sensor readings at a specified sampling frequency while also issuing motor commands. Finally, we added random path planning to determine the robot's course instead of using predetermined coordinates.
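A minimal sketch of this sampling-while-driving pattern is a millis()-based scheduler, shown below with placeholder routines; the 100ms interval is illustrative rather than our actual sampling frequency.

```cpp
// Non-blocking sampling: read sensors on a fixed interval while motor
// commands keep running every loop pass, with no delay() calls.
void readAllSensors() { /* BME280 / ENS160 / Grove reads omitted */ }
void updateMotion()   { /* path planning + motor commands omitted */ }

const unsigned long SAMPLE_INTERVAL_MS = 100;  // illustrative (10 Hz)
unsigned long lastSample = 0;

void setup() { Serial.begin(115200); }

void loop() {
  unsigned long now = millis();
  if (now - lastSample >= SAMPLE_INTERVAL_MS) {
    lastSample = now;
    readAllSensors();  // runs at the sampling frequency
  }
  updateMotion();      // runs every pass, never blocked by sampling
}
```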

However, we also identified several issues while integrating our subsystems. The biggest is that our sketch takes up too much memory, which causes stability issues and does not leave enough room for local variables. Because of this, we are unable to establish the wireless connection required to send sensor data to a local machine, and we will need to upgrade to an Arduino Mega for more flash memory. Until we receive the new parts, we will not be able to continue with system integration, which might set us back by a few days.

The Wi-Fi subsystem is also presenting multiple issues with setting up TCP/IP connections to our web server, compounded by the stability and memory problems. The chip sends raw data strings, so the program needs to add a response status and header metadata for the message to be perceived as a proper response that the machine learning model can retrieve. Doing this across several sensors with multiple data streams is a big risk for the team to mitigate moving forward, and integrating these systems, as described above, is going to set us back a few days.

Moving forward, we will need to define our expectations for the interim demo and work on refining individual subsystems.

Team Status Report for 3/18

This week we focused on motor control and data communication between the sensors and the local machine. While fine-tuning the code for the robot's rotation and translation, we identified several problems and potential risks. First, the error from odometry and encoder readings accumulates very quickly, which sometimes causes the robot to overshoot and fail to converge to the target position. To account for this, we plan to reset the encoder readings periodically once the robot reaches a predetermined target coordinate on the global map, and we added a self-correction mechanism in case of overshoot. Another issue is that most of our monitoring and evaluation of the robot's coordinates and encoder readings happens through the serial port, which requires a tethered connection that affects the robot's motion. We have also identified a hardware issue: our design requires the motors to be more secure than they currently are, as compensating for hardware limitations in the codebase is not enough.
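The periodic reset can be sketched as below, with hypothetical helpers standing in for our odometry code: once a known waypoint is reached, the encoder counts are zeroed and the pose is re-anchored to the map coordinate.

```cpp
// Hypothetical drift mitigation: discard accumulated encoder error at each
// predetermined waypoint instead of trusting odometry indefinitely.
void resetEncoders()             { /* zero both wheel encoder counts */ }
bool atTarget(float x, float y)  { return false; /* pose check stub */ }
void setPose(float x, float y)   { /* re-anchor odometry at waypoint */ }

void onWaypointReached(float targetX, float targetY) {
  if (atTarget(targetX, targetY)) {
    resetEncoders();             // drop accumulated tick error
    setPose(targetX, targetY);   // trust the map coordinate, not odometry
  }
}
```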

We decided to move away from the cloud and instead process the sensor data on a local server, primarily because of the incompatibility of our Wi-Fi chip with the Azure cloud. In our experiments, we found that accessing data on localhost does not incur much additional cost.

This week we also set up the pipeline for recording sensor data to a CSV through serial communication. We also acquired 99% ethyl alcohol and spray bottles, so we now have everything we need to start dataset generation, and we will begin collecting data as soon as possible. At a minimum, we need baseline readings for one scent, such as alcohol, so that we can start testing the robot's ability to actually track down a scent.
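The Arduino side of such a pipeline can be as simple as printing one CSV row per sample, as sketched below with placeholder sensor reads and an illustrative column set; a host-side script then captures the serial port to a .csv file.

```cpp
// Print one CSV row per second over serial for dataset generation.
float readTVOC()    { return 0; /* sensor stub */ }
float readEthanol() { return 0; /* sensor stub */ }
float readTempC()   { return 0; /* sensor stub */ }

void setup() {
  Serial.begin(115200);
  Serial.println("millis,tvoc,ethanol,temp_c");  // CSV header row
}

void loop() {
  Serial.print(millis());      Serial.print(',');
  Serial.print(readTVOC());    Serial.print(',');
  Serial.print(readEthanol()); Serial.print(',');
  Serial.println(readTempC());
  delay(1000);  // illustrative sampling rate
}
```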

Team Status Report for 2/25

This week we worked on finishing up the robot design and laser cutting it, assembling the robot, getting started with the motor control, and interfacing the ESP8266 to connect to the cloud. We also worked on the design review presentation and got started on the design report. We are currently focusing our efforts on the random exploration approach and will set up the motor control code accordingly.

We had to make some modifications to the robot chassis, including holes for the wheels and a structure to lift the motors to the correct height for the wheels to touch the ground. We found it challenging to secure the motors because of the force on them from the wheels, and we are working on adding a structure that fits snugly over each motor so that it will not move while the wheels are turning. We all met to set up the motors, connect them to the Arduino, and see how the wired components fit into the robot chassis; this was not technically under our task assignments, but we felt it was necessary to understand how everything will look once our different systems are connected.

We are currently behind on the sensor system assembly, as one of our parts (the Grove Multichannel Gas Sensor) has not arrived yet. We will start collecting data once it arrives; in the meantime, we will set up Azure templates to connect the BME280 and ENS160 sensors to the cloud.

Team Status Report for 2/18

This week we got started on several aspects of our robot, including the CAD design and the Wavefront Segmentation algorithm for our path planning. We also ordered and received all the parts for our robot, focusing on how the different parts will integrate together. Additionally, we researched how to connect the Arduino and Wi-Fi module to Azure, which is now the decided cloud platform for ScentBot. Towards the end of the week, we collected this information into our Design Review slides.

We have attached a photo of our completed CAD design.

 

With the parts we have ordered, we anticipate a few challenges, which we also discussed with our advisors: achieving good motor control and getting the robot to follow a straight path, since we are assembling the robot from custom-built parts. We are also considering an alternate path planning approach because of our project's high dependence on sensor sensitivity.

If the sensors are sensitive enough to detect an object from farther than the 0.5m radial distance, we will change our test setup to a single scented object. This will be placed in a scent diffuser/spray to create a radial distribution of scent for our robot to follow. The robot will “randomly” explore the map until it detects a scent, and will then follow the direction of increasing probability. The robot will receive a travel distance and angle to follow and will reorient itself to a different angular orientation after this set distance. An image of our alternate testing approach is shown.

The Wavefront Segmentation algorithm runs on a letter-size sheet of paper in 0.22s on average and can detect objects on a white background. It thresholds the image for faster computation, then calculates and prints the locations of the objects' centroids. One challenge we immediately faced was making sure shadows do not overlap the objects in the image captured by the overhead camera.
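The threshold-and-centroid step can be sketched with OpenCV as below; the input filename, threshold value, and minimum-area filter are illustrative assumptions rather than our actual parameters.

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
  // Hypothetical overhead capture of the arena.
  cv::Mat gray = cv::imread("arena.png", cv::IMREAD_GRAYSCALE);
  if (gray.empty()) return 1;

  // Objects are darker than the white background, so invert the threshold.
  cv::Mat binary;
  cv::threshold(gray, binary, 200, 255, cv::THRESH_BINARY_INV);

  // Label connected components and print each object's centroid.
  cv::Mat labels, stats, centroids;
  int n = cv::connectedComponentsWithStats(binary, labels, stats, centroids);
  for (int i = 1; i < n; ++i) {  // label 0 is the background
    if (stats.at<int>(i, cv::CC_STAT_AREA) < 50) continue;  // skip noise
    std::printf("object %d centroid: (%.1f, %.1f)\n", i,
                centroids.at<double>(i, 0), centroids.at<double>(i, 1));
  }
  return 0;
}
```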

 

To determine a robot design that fits our use-case requirements, we drew on research into differential-drive robots, PID control, wavefront segmentation, Runge-Kutta localization, A* graph search, visibility graphs, state machines on the Arduino, fluid dynamics and scent distributions, and chemical compositions and gas sensor types.

 

Team Status Report for 2/11

This week, we delivered our proposal presentation. Based on the feedback we received, we are considering changing the design of the path planning system to rely more on the sensor module to find the scented objects without the assistance of an overhead camera to communicate the locations of objects. The feasibility of this strategy depends heavily on the performance of our sensors, so we will wait until we can test with the actual sensors before fully committing to any design changes.

We anticipate that the sensitivity of our sensors will be a concern in the near future. We plan to mitigate this risk by spending more time calibrating the sensors and reading the documentation provided by Seeed and Adafruit. We may also revise our path planning strategy if the sensors' sensitivity (in terms of distance) does not meet our requirements.

Because of the proposal presentations this week and the release date of the part ordering form, we have shifted a few items in our schedule to the following week. We plan to start the robot design next week as well as the field construction in addition to the items we already have scheduled.

Our project includes considerations for public safety. We want to ensure that our robot can path plan correctly around potential hazards without running into them or causing spills. Our use case of helping people with anosmia also contributes to the positive impact we aim to have on public health and welfare.