Eshita’s Status Report for 4/29

This week, I came up with a concrete testing plan based on the advice we received during our final presentation and performed 32 different test runs with the paint thinner scent. Coming up with the test plan was important for gathering the correct metrics on the robot's performance. Testing made the tradeoff between random exploration and planned path planning very visible: even when the object was a straight path away from the robot, the robot sometimes would not converge within 3 minutes or trigger on the scented object. I also tested a new power supply using lithium-ion batteries and observed a significant increase in the life of the robot's motor driver and Arduino. Alkaline 9V batteries provided 30-45 minutes of battery life, whereas the lithium-ion batteries provided 3 hours 13 minutes due to their higher capacity (1200 mAh compared to 350 mAh for the alkaline batteries).
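
As a rough sanity check on those numbers (back-of-the-envelope, assuming runtime scales linearly with rated capacity):

1200 mAh / 350 mAh ≈ 3.4× expected improvement
3.4 × 45 min ≈ 153 min expected, vs. 193 min (3 h 13 min) observed

The observed runtime exceeding the naive estimate is consistent with alkaline 9V cells delivering well below their rated capacity at the high currents our motors draw.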

Presenting the final presentation was an important milestone for our team. I also worked on the slides and script this week, and started writing sections of the design report and poster.

Team Status Report for 4/29

This week, we focused on presenting the performance of our robot in our final presentation and getting our final documentation ready. For unit testing, we carried out the following for each of our subsystems.

Unit Testing

For unit testing, we observed some values locally on the Arduino Serial Monitor. Much of our testing was done in an arena-like setup (described under our overall testing), watching the robot's different states and adjusting the system based on the findings shown below.

(1) Motion control: When designing the functions for motion control, we were careful to separate random exploration from scent localization. By printing out distances and observing angles, distances reached, and hard stops (where the robot overshoots or undershoots the target and self-corrects), we were able to test the robot's basic motion and translation. This involved a lot of tuning for the robot's weight, wheel speed, and the surfaces it can drive on comfortably. For obstacle avoidance, we similarly tuned the position and height of the ultrasonic sensors and tested in an arena with unscented obstacles.
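
A minimal sketch of this separation is shown below; the function and variable names are hypothetical stand-ins, not our exact code.

#include <Arduino.h>

// Hypothetical sketch: motion control keeps random exploration and
// scent localization in separate functions, switched by a mode flag.
enum MotionMode { RANDOM_EXPLORE, SCENT_LOCALIZE };
MotionMode mode = RANDOM_EXPLORE;
float bestScanAngleDeg = 0.0;  // set by the scanning routine

void driveToHeading(float targetDeg) {
  // Turn toward targetDeg and drive, self-correcting on hard stops
  // where the robot overshoots or undershoots the target.
}

void randomExplore() {
  // Pick a random heading, then drive a fixed step.
  driveToHeading(random(0, 360));
}

void localizeScent() {
  // Head toward the scan angle with the strongest reading.
  driveToHeading(bestScanAngleDeg);
}

void setup() {
  randomSeed(analogRead(A0));  // seed from a floating analog pin
}

void loop() {
  if (mode == RANDOM_EXPLORE) randomExplore();
  else localizeScent();
}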

Findings: We added two additional ultrasonic sensors, and glued our DC motors to the car chassis in order to get stable and reliable movement from the robot.

(2) Alerting: Since our robot has clearly defined states, the code separates the messages displayed and the LED color patterns for each state. We tested this by putting the robot through transitions between scan mode, random exploration, obstacle avoidance, and classification, as shown in the figure below.
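
A minimal sketch of the state-to-color mapping is below, using the Adafruit NeoPixel library; the pin, pixel count, and colors are illustrative rather than our exact values.

#include <Arduino.h>
#include <Adafruit_NeoPixel.h>

enum State { SCANNING, EXPLORING, AVOIDING, CLASSIFYING };

const int LED_PIN = 6;     // hypothetical data pin
const int NUM_PIXELS = 8;  // hypothetical strip size
Adafruit_NeoPixel strip(NUM_PIXELS, LED_PIN, NEO_GRB + NEO_KHZ800);

void showState(State s) {
  uint32_t color = 0;
  switch (s) {
    case SCANNING:    color = strip.Color(0, 0, 255);   break;  // blue
    case EXPLORING:   color = strip.Color(0, 255, 0);   break;  // green
    case AVOIDING:    color = strip.Color(255, 165, 0); break;  // orange
    case CLASSIFYING: color = strip.Color(255, 0, 255); break;  // magenta
  }
  for (int i = 0; i < NUM_PIXELS; i++) strip.setPixelColor(i, color);
  strip.show();
}

void setup() {
  strip.begin();
  showState(EXPLORING);
}

void loop() {}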

Findings: For a high-speed control loop like ScentBot's, we decided to host the entire system locally; the only communication is to the output devices, the LCD and LED display. We explored cloud computing, UART, Wi-Fi, and master-slave byte transfer before settling on this design, as highlighted in our earlier status reports.

(3) Sensing and Scent Localization: The sensors were integrated with our LCD and LED displays, which helped us track the samples being collected at any point in the robot's traversal; the scan angles and samples taken can be seen on the LCD display. We tested whether the robot turned to the correct scan angle, and also tested scenarios with an obstacle in the way while scanning and while translating to the maximum scan angle, to make sure the robot does not run into the object. By observing rising and falling sensor values, we could also estimate whether the robot was entering or exiting a scent plume in the case of false positives. We additionally tuned the thresholds used across multiple sensors to detect and confirm a scent.

Findings: We learned that certain channels classify our scents better and work with higher or lower thresholds, namely the TVOC channel on the ENS160 and the ethanol channel on the Grove sensor. We also increased our scan time from 3 s to 5 s to collect more samples per scan, making the robot more confident in its scent localization and preventing repeated scans.
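
A simplified sketch of the resulting confirmation logic is below; the threshold values and read functions are hypothetical placeholders for the tuned values and the real sensor drivers.

#include <Arduino.h>

const float TVOC_THRESHOLD = 400.0;   // ENS160 TVOC channel (hypothetical value)
const float ETHANOL_THRESHOLD = 2.5;  // Grove ethanol channel (hypothetical value)
const unsigned long SCAN_MS = 5000;   // scan window, raised from 3 s to 5 s

// Stubs standing in for the real sensor driver calls.
float readTvoc()    { return 0.0; }
float readEthanol() { return 0.0; }

// Confirm a scent only if a majority of samples in the scan window
// exceed the thresholds on BOTH channels, guarding against the
// transient false positives we saw with single-sample triggers.
bool scentConfirmed() {
  int hits = 0, samples = 0;
  unsigned long start = millis();
  while (millis() - start < SCAN_MS) {
    samples++;
    if (readTvoc() > TVOC_THRESHOLD && readEthanol() > ETHANOL_THRESHOLD) hits++;
    delay(100);  // ~10 samples per second
  }
  return hits > samples / 2;
}

void setup() {}
void loop() { scentConfirmed(); }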

(4) Classification: Our SVC model was tested on a train-test split of data collected over 2 days, in varying temperature conditions, for ambient, paint thinner, and alcohol scents. This was done locally in a Colab notebook before integrating the model onto the Arduino Mega. The ability to recognize and confirm a scent while the robot is moving was evaluated in our overall testing.

Findings: We changed the SVC kernel from polynomial to linear to account for the limited space on the Arduino. The linear model also performed better in our unit tests with live sensor readings. We also experimented with normalization and added statistics for each of our sensor readings: RMS, mean, standard deviation, maximum, and minimum values.
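
As an illustration, the per-window statistics can be computed on the Arduino along these lines; the feature ordering here is illustrative.

#include <Arduino.h>
#include <math.h>

// Compute mean, standard deviation, RMS, max, and min over a window
// of readings; these become one feature group fed to the SVC.
void computeFeatures(const float x[], int n, float out[5]) {
  float sum = 0, sumSq = 0, mn = x[0], mx = x[0];
  for (int i = 0; i < n; i++) {
    sum += x[i];
    sumSq += x[i] * x[i];
    if (x[i] < mn) mn = x[i];
    if (x[i] > mx) mx = x[i];
  }
  float mean = sum / n;
  float var = sumSq / n - mean * mean;
  if (var < 0) var = 0;      // guard against float round-off
  out[0] = mean;
  out[1] = sqrt(var);        // standard deviation
  out[2] = sqrt(sumSq / n);  // RMS
  out[3] = mx;
  out[4] = mn;
}

void setup() {}
void loop() {}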

Overall Testing

Initially, we conducted 24 tests with the alcohol and paint thinner scents by randomly placing the object and robot in different positions. After feedback from our professor on the number of trials and the statistics we had reported, we went back and conducted more testing according to the concrete plan described below.

Our overall testing plan is shown in the figure below. We place the object at one of 9 grid positions and measure the robot's convergence time starting from each of the four corners (1, 2, 3, 4) of the map, making sure the object is at least 1 m away from the robot. The object is a cotton ball scented with paint thinner or alcohol. This gives 32 configurations to test per scent (9 positions × 4 corners = 36, minus the 4 configurations that would place the object within 1 m of the robot). We currently have over 35 trials with the paint thinner scent and aim to complete testing in this manner with alcohol this weekend. An example of the test metrics being collected is also shown below.

 

Eshita’s Status Report for 4/22

This week, my focus was to round out and complete the overdue tasks of creating and integrating our embedded ML model and adding Neopixel light cycles for ScentBot's different states. As a team, we collected data for paint thinner, smoke, and alcohol, and while exploring ML models we could use, I came across a GCP tool called Neuton, which had tutorials showing it running on an Arduino. After training on our dataset to create a neural network, we ran into issues integrating it on the Arduino Mega specifically. Upon more research, I found that the architectural driver needed to run the model on an ATmega processor was different, so the model was not compatible. I instead shifted tracks to MicroMLgen, which can convert a Support Vector Classification model in Python to a C header file that we include in our Arduino sketch as a library; this is what ScentBot currently uses.
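
For reference, a minimal sketch of how the exported header gets used on the Arduino side is below; MicroMLgen's default export wraps the model in an Eloquent::ML::Port::SVM class, and the feature values and label mapping here are hypothetical.

#include <Arduino.h>
#include "model.h"  // C header exported by MicroMLgen from the Python SVC

Eloquent::ML::Port::SVM clf;

void setup() {
  Serial.begin(9600);
}

void loop() {
  // features[] would hold the statistics computed from a sensor window.
  float features[5] = {0, 0, 0, 0, 0};  // placeholder values
  int label = clf.predict(features);
  Serial.println(label);  // e.g., 0 = ambient, 1 = alcohol, 2 = paint thinner (hypothetical mapping)
  delay(1000);
}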

In creating the SVC, there was a tradeoff between storage space on the Mega and the accuracy the model could attain. We also found that while the model classified well once a high threshold was reached, localization was difficult due to the high number of false positives. Hence, we settled on a linear SVC model that only classifies once sensor values reach a set threshold. In testing, we also found it difficult to test smoke in the presence of other flammable substances and to direct it toward the sensor array. We also explored propane and isobutane medical sprays to trigger our sensors, but their concentration was not high enough to do so. We decided to aim to make ScentBot work for alcohol, paint thinner, and ambient scents for our final demo.

I am currently on track with all our tasks. Our testing will continue into next week as we complete trial runs beyond the final presentation. I still need to work on my script, delivery, and timing for Monday's slides, which we will polish together in a practice run tomorrow.

 

Eshita’s Status Report for 4/8

This week, I focused on preparing for the interim demo and verifying test results for different scents. My contributions to the initial code base were around the sensor readings, and adding the LCD display was a good way to verify live sensor values as the robot scans and randomly explores. For data routing, I tested various communication methods, each of which had its shortcomings: Arduino to the ESP8266 Wi-Fi module using MQTT via the cloud (which lacked hardware drivers), and Arduino to the ESP8266 to a hosted web server (which did not cope well with our high data frequency and high-speed control loop). Switching gears to the NodeMCU as an alternative, I also explored I2C and serial communication extensively. These presented their own pros and cons, lacking the quick updates we needed for the sensor data to be classified correctly. The unit tests helped me recognize which methods could meet the use-case and design requirements for our project: ScentBot's latency requirements could only be met if we either moved the sensor readings over to the NodeMCU or hosted everything locally on the Arduino. Even the first option would mean a delay in communicating classification results back to the robot, further slowing scent detection.

Testing with smoke and paint thinner, we found that simple thresholding and slope-calculation methods will not differentiate between scents, since all of our sensors' values rise regardless of which of our three initial scent choices (alcohol, paint thinner, or smoke) is placed in front of them.

This week, I generated initial datasets for alcohol and paint thinner and fed them through a naive binary classification CNN on GCP's Neuton TinyML platform. With the increased memory of the Arduino Mega, we can now explore hosting a model locally on the Arduino. The binary classification model shows promising results on the initial training data. I will complete dataset generation this weekend and move ahead with analyzing a CNN on these different scents. The unit testing involved will be prediction on a held-out test dataset once we export the model, along with exploring changes to the robot's scanning and detection logic from what we have currently.

We also discovered interesting aspects of the power drain of our robot, caused by the Arduino, LCD display, and fan drawing from the 5V battery. I would like to run battery-life tests for our final report, since I think it is important to characterize this from the user's perspective. For now, we are simply replacing the batteries whenever the supply drops below a usable level.

According to our updated Gantt chart, my work is on schedule: data generation will be completed this week, and the classification model will be developed in the upcoming week. I also want to start preparing for the final presentation and thinking about how to properly showcase the work my teammates and I have done.

Eshita’s Status Report for 4/1

This week, my focus was on devising an alternative for network communications. I devoted time to researching alternatives using the Serial and I2C protocols. I also helped debug and experiment with the gradient best-line-fit methodology that Aditti had written.

Investigating the I2C protocol looked promising: I configured the Arduino as a slave that sends sensor data, as it is received, to the NodeMCU (master). The NodeMCU can receive sensor readings as they update, but it is slower, as the maximum speed over I2C is 400 kbps. Moreover, the Wi-Fi communication needs a channel held open to listen for client requests to pull data, which prevents the I2C bus from receiving updated readings at the same time. The other alternative would be to store our classification model in the NodeMCU's local memory, but given the slow speed of this protocol, it is not a good fit for our high-speed control loop.
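
For reference, the slave side of this experiment looked roughly like the sketch below; the bus address and payload layout are illustrative.

#include <Arduino.h>
#include <Wire.h>

const uint8_t SLAVE_ADDR = 0x08;  // hypothetical bus address
volatile float latestReadings[4] = {0, 0, 0, 0};

// Called from the Wire interrupt whenever the NodeMCU master requests
// data: send the raw bytes of the most recent readings (16 bytes,
// within the Wire library's 32-byte buffer).
void onRequestHandler() {
  Wire.write((const uint8_t *)latestReadings, sizeof(latestReadings));
}

void setup() {
  Wire.begin(SLAVE_ADDR);  // join the I2C bus as a slave
  Wire.onRequest(onRequestHandler);
}

void loop() {
  // Sensor polling would update latestReadings here.
}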

Investigating serial communication also led to issues: the updated values sent from the Arduino were not showing up on the NodeMCU, although serial is a much faster way to receive data. I am currently working on two approaches to debugging this. While we wait for the Mega to arrive, I am setting up a TinyML embedded pipeline for the dataset generation we have completed; the resulting C file can then be included as a separate file in our Mega sketch, which should let the project work without the communication lags and issues we currently face. The other alternative is to dig further into serial communication and make it work with the model hosted locally on the NodeMCU. We are also meeting to work on our pitch for the interim demo, for which I will help devise materials and scripts.

Team’s Status Report for 4/1

This week, our focus was on being prepared for our interim demos next week. Based on our meetings, we implemented a gradient best line-fit over 10 samples taken every second, using least squares. We then hardcoded a threshold value for this slope that sends the robot into scanning mode to detect scents. Trying this with the ENS160 sensor gave very unpredictable results because its values are very sensitive and inconsistent, so we have instead switched to the Grove multichannel sensor's TVOC value to detect ethanol. This proved more consistent, and we also increased the scan time taken once a scent is detected to account for the weaker response of the Grove sensor. Our experiments show that a lag still exists between encountering a scent and the sensor detecting it; because the robot picks up the scent several seconds after encountering it, the angle it calculates while scanning can lead it to turn the wrong way. We have several strategies to mitigate this risk and work around the sensors' inconsistent nature.
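
The slope computation itself is simple; a sketch of it is below, with the threshold value as a hypothetical placeholder for the hardcoded one.

#include <Arduino.h>

const int N = 10;                   // samples per least-squares fit
const float SLOPE_THRESHOLD = 5.0;  // hypothetical trigger level

// Least-squares slope of y against sample index x = 0..n-1:
// slope = (n*Sxy - Sx*Sy) / (n*Sxx - Sx*Sx)
float slope(const float y[], int n) {
  float sx = 0, sy = 0, sxy = 0, sxx = 0;
  for (int i = 0; i < n; i++) {
    sx += i;
    sy += y[i];
    sxy += i * y[i];
    sxx += (float)i * i;
  }
  return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

// A slope above the threshold switches the robot into scanning mode.
bool shouldScan(const float samples[]) {
  return slope(samples, N) > SLOPE_THRESHOLD;
}

void setup() {}
void loop() {}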

Having a consistent airflow behind the source helped in finding the object. We also tried using air pumps to push air from the object directly over the sensor, but this showed no improvement in performance. We also discovered that the issues with surface friction and the wheels getting stuck were due more to wheel speed than to the surface itself. Increasing the speed has fixed the issue for now, but we are also monitoring the overall power usage of the robot and motors while it randomly explores.

We integrated our code with the ultrasonic sensor, and the robot now comes to a hard stop and reorients itself so it does not run into obstacles or the walls of our test arena. We are also meeting to work out the exact pitch and scenario we want to present during our interim demo. Currently, everything in our system runs locally off a single Arduino sketch to detect ethanol-based scents. In most cases, given the correct airflow, the robot can begin a scan near the object's location; where it decides to localize the scent depends, as mentioned earlier, on the airflow and the timing of the sensors. With this and more fine-tuning of our scent-confirmed threshold, we hope to demonstrate this functionality during the interim demo.
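
The hard-stop check is essentially a distance threshold on the ultrasonic readings; a sketch for one HC-SR04-style sensor is below, with illustrative pins and stop distance.

#include <Arduino.h>

const int TRIG_PIN = 9;      // hypothetical pins
const int ECHO_PIN = 10;
const float STOP_CM = 15.0;  // hypothetical stop distance

float readDistanceCm() {
  // 10 us trigger pulse starts a ranging cycle.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  // Echo pulse width (us) to distance: sound travels ~0.0343 cm/us,
  // halved for the round trip; a timeout reads as "no obstacle".
  unsigned long us = pulseIn(ECHO_PIN, HIGH, 30000UL);
  return (us == 0) ? 1.0e6 : us * 0.0343 / 2.0;
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  if (readDistanceCm() < STOP_CM) {
    // Hard stop: halt the motors, then turn away and resume exploring.
  }
}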

Working on communication with our classification model has proved challenging. We decided to use a NodeMCU: the Arduino sends sensor data to the NodeMCU, which would parse the data, pass it to a classification model, and return the result. I2C communication has proved impossible to implement, as the NodeMCU cannot receive data from the slave Arduino and update across Wi-Fi at the same time. An alternative we considered was to host the classification model on the NodeMCU itself and have the boards communicate through I2C or serial, since they are physically tethered. However, the speed of I2C does not fit the robot's high-speed control flow. Serial communication is the other alternative we explored; although it is faster, we are facing issues sending across an array of float data and receiving all the updated values on the NodeMCU. Looking past the interim demo, this is the biggest risk in our project, and we are actively working to mitigate it and devise alternatives.

Eshita’s Status Report for 3/25

This week, I worked on soldering our circuit parts together, along with integrating sensor data reading over TCP to a local web server on the ESP8266. I encountered multiple issues that set back my progress on software and hardware integration. When editing the code for hardware integration, we found stability issues that left no memory for the local variables needed to issue commands to the ESP8266 module to send and receive data. This led me to pivot toward making sensor sampling, collection, and retrieval work with a Python script using just the ENS160 sensor. The differences between the bare ESP8266 Wi-Fi module and the NodeMCU are causing a lot of debugging issues. Due to the lack of libraries for the bare chip, I have realized through research that sending a JSON string across will require manually writing the response codes and header metadata normally sent in TCP/IP communication so that it can be read by the requests library in Python. Trying to use the existing NodeMCU library with the Wi-Fi module proved unsuccessful, and I am currently working on a function that emits the needed metadata and information so that a script can run our classification models successfully. With the Arduino Mega, I imagine this will be an easier decision, so in the next week (and hopefully in the next 2 days), I need to determine whether the early issues I faced with this chip, combined with the ones I am facing now, are enough to justify an alternative. In my opinion, the lack of hardware and software drivers for this chip is making software and hardware integration harder than it needs to be, but I want to make as informed a decision as I can.
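
To make the manual approach concrete, below is a sketch of the kind of hand-assembled HTTP response this requires, assuming the standard Espressif AT-command firmware on the ESP8266; the link ID, prompt handling, wiring, and payload are all simplified and illustrative.

#include <Arduino.h>

// Build a full HTTP response (status line + headers + JSON body) and
// push it through the ESP8266 with AT+CIPSEND, so that Python's
// requests library can parse it on the other end.
void sendJsonResponse(Stream &esp, int linkId, const String &json) {
  String resp = "";
  resp += "HTTP/1.1 200 OK\r\n";
  resp += "Content-Type: application/json\r\n";
  resp += "Content-Length: ";
  resp += json.length();
  resp += "\r\nConnection: close\r\n\r\n";
  resp += json;
  esp.print("AT+CIPSEND=");  // announce link ID and byte count
  esp.print(linkId);
  esp.print(",");
  esp.println(resp.length());
  delay(20);                 // simplified: should wait for the '>' prompt
  esp.print(resp);
}

void setup() {
  Serial1.begin(115200);  // ESP8266 on a hardware serial port (hypothetical wiring)
}

void loop() {
  sendJsonResponse(Serial1, 0, "{\"tvoc\": 412}");  // illustrative payload
  delay(1000);
}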

Eshita’s Status Report for 3/18

This week, I worked on establishing a data collection pipeline for the sensors. The updated code now stores values and prints them in CSV format, which a Python script then reads over serial communication. This lets us collect data efficiently and quickly. An example of the columns we collected data with is shown below.
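
The Arduino side of the pipeline is just a CSV writer over serial; a trimmed sketch is below, where the column set and sample rate are illustrative and the zeroed placeholders stand in for the real driver calls.

#include <Arduino.h>

void setup() {
  Serial.begin(115200);
  // Header row, so the Python logger can name its CSV columns.
  Serial.println("timestamp_ms,tvoc,eco2,ethanol,voc");
}

void loop() {
  float tvoc = 0, eco2 = 0, ethanol = 0, voc = 0;  // placeholder reads
  Serial.print(millis()); Serial.print(',');
  Serial.print(tvoc);     Serial.print(',');
  Serial.print(eco2);     Serial.print(',');
  Serial.print(ethanol);  Serial.print(',');
  Serial.println(voc);
  delay(100);  // ~10 rows per second
}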

A major problem we faced was working simultaneously on the motor calibration and odometry of the robot while also wanting to collect data from within the car, to emulate the readings the sensors will get in operation. Hence, while Aditti and Caroline focused on odometry, I set up a pipeline that makes it easy to distribute work within the team. We were also facing problems with the hardware drivers for our particular Wi-Fi module with Azure, as discussed in our weekly meeting, so I am also working on code that will send the JSON response to a Python script that can run a classification model.

Eshita’s Status Report for 3/11

This week, I worked on creating the code for the Arduino and sensor array to transfer data from the sensors to the Arduino. The code is on GitHub here (https://github.com/aditti-ramsisaria/ece-capstone) in sensors/ and ens160_lib/. I faced a number of issues getting all the sensors to work together. The ENS160 library had not been kept up to date with the most recent Arduino libraries, so I had to update one of the functions in ScioSense's Arduino library to make it functional. A picture of the sensors working is attached below.

The other aspect I worked on was communication using the ESP8266 chip. There is a way to send JSON data across Wi-Fi to a local web server hosted on the Wi-Fi module; the working implementation is attached, showing a simple JSON message hosted entirely locally on the module. Sending data as JSON is very feasible, but it adds complexity in retrieving the data from the web server for the classification algorithm. On the other hand, the ESP8266 chip we ordered does not have enough documentation for use with Azure over MQTT, as highlighted in my previous status report. We have a NodeMCU that works with Azure, and I will research communication between the NodeMCU and the Arduino, since our sensors only work all together over I2C on the Arduino.