Eshita’s Status Report for 4/8

This week, I focused on preparing for the interim demo and verifying test results for different scents. For the interim demo, my contributions to the initial code base were the sensor readings, and adding the LCD display was a good way to verify initial results by showing live sensor values as the robot scans and randomly explores. For data routing, I tested several communication methods, each of which had its shortcomings: Arduino to ESP8266 Wi-Fi module using MQTT via the cloud (which lacked hardware drivers), and Arduino to ESP8266 to a hosted web server (which did not suit our high data rate and high-speed control loop). Switching gears to the NodeMCU as an alternative, I also extensively explored I2C and Serial communication. These presented their own pros and cons, lacking the quick updates we needed for the sensor data to be classified correctly. The unit tests helped me recognize which methods could meet the use-case and design requirements for our project. ScentBot’s latency requirements would only be achieved if we either moved the sensor data readings over to the NodeMCU module or hosted everything locally on the Arduino. Even if we pursued the NodeMCU option, communicating a classification result back to the robot would add delay, slowing ScentBot’s detection of the scent.

Testing with smoke and paint thinner, we found that simple thresholding and slope-calculation methods will not work to differentiate between scents: readings on all of our sensors rise regardless of which of the three target substances from our use-case requirement (alcohol, paint thinner, or smoke) is placed in front of them.

This week, I generated initial datasets for alcohol and paint thinner and fed them through a naive binary-classification CNN task on GCP’s Neuton TinyML platform. Since the Arduino Mega gives us more memory, we can now explore placing a model locally on the Arduino. The binary classification model shows promising results on the initial training data. I will complete all dataset generation by this weekend and move ahead with analyzing how a CNN performs on these different scents. Unit testing will involve running predictions on a test dataset once we export the model, and exploring how the robot’s scanning- and detection-mode logic will need to change from what we have currently.
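As a rough illustration of what hosting the model locally could look like, the exported C model would be compiled into the sketch and called on each feature vector. Everything below is a placeholder sketch: model.h, model_predict(), and the feature layout are assumptions, not the actual export’s API.

    // Hypothetical on-board inference loop; model.h and model_predict()
    // stand in for whatever the TinyML export actually provides.
    #include "model.h"

    const int NUM_FEATURES = 4;
    float features[NUM_FEATURES];

    // Placeholder: the real code would fill this from the gas sensors.
    void collectFeatures(float *f) {
      for (int i = 0; i < NUM_FEATURES; i++) {
        f[i] = analogRead(A0 + i);
      }
    }

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      collectFeatures(features);
      int label = model_predict(features, NUM_FEATURES);  // placeholder API
      Serial.println(label == 0 ? "alcohol" : "paint thinner");
      delay(200);
    }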

We also discovered interesting aspects of our robot’s power drain, caused by the Arduino, LCD display, and fan all drawing from the 5V battery. I would like to run battery-life tests as part of our final report, since I think it is important to characterize this from the user’s perspective. For now, we are simply replacing the batteries when the voltage drops below a usable level.
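One simple way to run that battery-life test would be to log the battery voltage over time through a resistor divider into an analog pin. This is only a sketch; the divider ratio, pin choice, and sample interval are assumptions, not our actual wiring.

    // Assumed setup: battery -> 2:1 resistor divider -> A0.
    const float DIVIDER_RATIO = 2.0;
    const float VREF = 5.0;            // ADC reference voltage

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      float vbat = analogRead(A0) * (VREF / 1023.0) * DIVIDER_RATIO;
      Serial.print(millis() / 1000);   // seconds since start
      Serial.print(",");
      Serial.println(vbat);            // CSV line for plotting the drain
      delay(10000);                    // one sample every 10 s
    }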

According to our updated Gantt chart, my work is on schedule: data generation will be completed this week, and the classification model will be developed in the upcoming week. I also want to start preparing for the final presentation and thinking about how to properly showcase the work my teammates and I have done.

Aditti’s Status Report for 4/8

This week, I focused on testing and verification with one scent for the interim demo in the first half, and on making progress toward accommodating multiple scents in the second half. During the first half of the week, I tested our setup on multiple surfaces to ensure that the motors could carry the load at the desired speeds without locking up, and in rooms with different ventilation to make sure airflow did not interfere too much with our sensor readings. Once we found a suitable place, we focused on tuning the constants to detect alcohol reasonably well. Unfortunately, we later realized that the sensitivity of our sensors decreases as the ambient temperature increases, which caused some issues with testing. After the demo, I worked on setting up the field for our robot to operate in: a 2m x 2m space walled with cardboard. I also set up the code to read from all of the different sensors after transitioning over to the Arduino Mega board. Caroline and I also did some preliminary testing with smoke from incense and found that most scents cause all sensor readings to go up, making it difficult to differentiate between scents with simple data processing. We worked on dataset generation to train and deploy a neural network on the Mega using TinyML; I helped collect data for paint thinner and isopropyl alcohol and will be training the model over the weekend. Using machine learning should also help us account for differences due to temperature variations.
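For reference, reading both gas sensors on the Mega could look roughly like the sketch below. The library names and calls follow my memory of the SparkFun ENS160 and Seeed multichannel-gas drivers and should be verified against the actual headers before use.

    #include <Wire.h>
    #include "SparkFun_ENS160.h"          // ENS160 driver (assumed name)
    #include <Multichannel_Gas_GMXXX.h>   // Grove driver (assumed name)

    SparkFun_ENS160 ens160;
    GAS_GMXXX<TwoWire> grove;

    void setup() {
      Serial.begin(9600);
      Wire.begin();
      ens160.begin();
      ens160.setOperatingMode(SFE_ENS160_STANDARD);
      grove.begin(Wire, 0x08);            // default Grove address
    }

    void loop() {
      Serial.print(ens160.getTVOC());     // ppb
      Serial.print(",");
      Serial.println(grove.getGM502B());  // VOC channel, raw counts
      delay(100);                         // roughly 10 Hz sampling
    }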

So far, we have done comprehensive testing of one scent (alcohol) using our slope-based detection approach in different environments, and we have noted the factors that influence our readings: airflow due to ventilation, surface texture, airflow due to movement around the test field, temperature, and sensor warm-up time. We can also do basic obstacle avoidance based on ultrasonic sensor measurements. Moving forward, we will integrate machine-learning-based approaches to see if we can meet our use-case requirement of differentiating between multiple scents with good accuracy. We will stick with our new field setup and run tests in both TechSpark and the UC gym to prepare for the final demo. We are currently meeting our budget and accessibility requirements. In the coming week, I will collect more training data and preprocess it to generate quality feature vectors. I will also work on deploying the model on the Arduino and integrating it with our motor control code.

Caroline’s Status Report for 4/1

This week, I worked on implementing and testing the “targeted search” algorithm for our robot, where it tries to track down the direction of a scent upon detection. Initially, I used a simple threshold on the sensor value to trigger “targeted search” mode. Once it enters this mode, the robot stops in place and samples the sensor values at different angles of rotation, then proceeds in the direction where the maximum sensor value was observed. It confirms the location of a scent once the sensor value exceeds a second, higher threshold (see the sketch below). With perfect sensors this algorithm should work in theory, but we have had to strategize and adjust due to noise and inconsistencies in the sensor readings. We tested different ranges of rotation and tried using the slope of the best-fit line to more reliably trigger the search mode and identify the direction of the scented object, but the results are still not very consistent. We will try using a fan to amplify the distribution of the scent in preparation for the interim demo. Additionally, I integrated the ultrasonic sensor into the code, so the robot now stops and turns around if it gets too close to an obstacle. Before the interim demo, we need to build a barrier around the arena to prevent the robot from going out of bounds. We will also start testing sensor sensitivity to paint thinner so that we can demo the robot detecting multiple scents.
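A stripped-down version of the targeted-search loop is below. The thresholds, the 45-degree step, and the motor and sensor helpers are all placeholders for our actual tuned values and code.

    const float TRIGGER_THRESHOLD = 400;   // enter targeted search (illustrative)
    const float CONFIRM_THRESHOLD = 800;   // declare scent found (illustrative)
    const int   NUM_ANGLES = 8;            // sample every 45 degrees

    // Placeholder helpers standing in for our motor and sensor code.
    void  rotateTo(int angleDeg) {}
    void  driveForward() {}
    void  stopMotors() {}
    float readScent() { return analogRead(A0); }

    void targetedSearch() {
      int bestAngle = 0;
      float bestValue = -1;
      for (int i = 0; i < NUM_ANGLES; i++) {
        int angle = i * (360 / NUM_ANGLES);
        rotateTo(angle);                   // stop in place and sample
        float v = readScent();
        if (v > bestValue) { bestValue = v; bestAngle = angle; }
      }
      rotateTo(bestAngle);                 // head toward the maximum reading
      while (readScent() < CONFIRM_THRESHOLD) {
        driveForward();                    // creep forward until confirmed
      }
      stopMotors();
    }

    void setup() {}

    void loop() {
      if (readScent() > TRIGGER_THRESHOLD) targetedSearch();
    }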

Eshita’s Status Report for 4/1

This week, my focus was on devising an alternative for network communications. I devoted time to researching alternatives based on the Serial and I2C protocols. I also helped debug and experiment with the best-fit-line gradient method that Aditti had written.

Investigating the I2C protocol initially looked promising: I configured the Arduino as a slave that sends sensor data, as it is read, to the NodeMCU (master). The NodeMCU can receive sensor readings as they update, but the link is slow, with a maximum speed of 400 kbps over I2C. Moreover, the Wi-Fi communication needs an open channel to listen for all client requests pulling data in, which prevents the I2C bus from delivering updated readings at the same time. The other alternative would be to use the NodeMCU’s local memory to store our classification model, but given the slow speed of this protocol, it is not the best fit for our high-speed control loop.
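For context, the slave/master arrangement I was testing looks roughly like this; the 0x08 address and the two-byte payload are arbitrary choices for illustration.

    // Arduino (slave): hand the latest reading to the master on request.
    #include <Wire.h>

    volatile int latestReading = 0;

    void onRequestHandler() {
      Wire.write((latestReading >> 8) & 0xFF);  // high byte first
      Wire.write(latestReading & 0xFF);
    }

    void setup() {
      Wire.begin(0x08);                // join the bus as slave 0x08
      Wire.onRequest(onRequestHandler);
    }

    void loop() {
      latestReading = analogRead(A0);  // placeholder sensor read
    }

    // NodeMCU (master), in its own sketch:
    //   Wire.begin(D2, D1);           // SDA, SCL pins on the NodeMCU
    //   Wire.requestFrom(0x08, 2);
    //   int value = (Wire.read() << 8) | Wire.read();

The issue described above is that this request/response cycle and the Wi-Fi listener cannot run on the NodeMCU at the same time.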

Investigating serial communication also led to issues: the updated values sent from the Arduino were not showing up on the NodeMCU, even though serial is a much faster way to receive data. I am currently working on two approaches. While we wait for the Mega to arrive, I am setting up a TinyML embedded pipeline for the dataset generation we have completed; the exported C file can then be included in our Mega sketch, which should let the project work without the communication lags and issues we currently face. The other alternative is to look further into serial communication and make it work with the model hosted locally on the NodeMCU (a possible framing fix is sketched below). We are also meeting to work on our pitch for the interim demo, and I will contribute materials and scripts.
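One framing approach I am trying for the float array is to prefix each frame with sync bytes so the NodeMCU can realign if bytes are dropped. This is a sketch of the idea, not our final protocol.

    const int N = 4;                  // floats per frame (illustrative)
    float readings[N];

    void sendReadings() {
      Serial.write(0xAA);             // sync bytes mark the frame start
      Serial.write(0x55);
      Serial.write((uint8_t *)readings, sizeof(readings));
    }

    void setup() { Serial.begin(115200); }

    void loop() {
      for (int i = 0; i < N; i++) readings[i] = analogRead(A0 + i);
      sendReadings();
      delay(100);
    }

    // Receiver side: scan the stream for 0xAA 0x55, then
    // Serial.readBytes((char *)readings, sizeof(readings));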

Team’s Status Report for 4/1

This week, our focus was on being prepared for our interim demos next week. Based on our meetings, we implemented a gradient detector: a least-squares best-fit slope over a window of 10 samples taken each second (sketched below). We then hardcoded a threshold on this slope that sends the robot into a scanning mode to detect scents. Trying this with the ENS160 sensor gave unpredictable results because its values are very sensitive and inconsistent, so we switched to the Grove multichannel sensor’s TVOC value to detect ethanol. This proved more consistent, and we also increased the scan time after a scent is detected to account for the weaker response of the Grove sensor. Our experimentation shows that a lag still exists between encountering a scent and the sensor detecting it; because the robot picks up the scent several seconds after first encountering it, the angles it calculates while scanning often lead it to turn the wrong way. We have several strategies to mitigate this risk and work around the sensors’ inconsistent nature.
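The slope computation itself is the standard least-squares formula with the sample index as x; a minimal version is below (the threshold value is illustrative).

    const int N = 10;                    // samples per detection window
    const float SLOPE_THRESHOLD = 5.0;   // illustrative value

    // Least-squares slope of y[0..n-1] against x = 0..n-1:
    // slope = (n*sum(xy) - sum(x)*sum(y)) / (n*sum(x^2) - sum(x)^2)
    float slopeOf(const float *y, int n) {
      float sx = 0, sy = 0, sxy = 0, sxx = 0;
      for (int i = 0; i < n; i++) {
        sx  += i;
        sy  += y[i];
        sxy += (float)i * y[i];
        sxx += (float)i * i;
      }
      return (n * sxy - sx * sy) / (n * sxx - sx * sx);
    }

    // Usage: if (slopeOf(window, N) > SLOPE_THRESHOLD) enterScanMode();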

Having consistent airflow behind the source helped in finding the object. We also tried using air pumps to pull air from the object directly onto the sensor, but this showed no improvement in performance. We also discovered that the wheels getting stuck was due more to wheel speed than to surface friction; increasing the speed has fixed the issue for now, but we are also monitoring the overall power usage of the robot and its motors while it randomly explores.

We integrated our code with the ultrasonic sensor, and the robot now comes to a hard stop and reorients itself so it does not run into obstacles or the walls around our test arena (a simplified version of the obstacle check is sketched below). We are also meeting to work out the exact pitch and scenario we want to present during our interim demo. Currently, the entire system runs locally off a single Arduino sketch to detect ethanol-based scents. In most cases, given the correct airflow, it can begin scan mode near the object’s location. Where it decides to localize the scent depends, as mentioned earlier, on airflow and sensor timing. With this and more fine-tuning of our scent-confirmation threshold, we hope to demonstrate this functionality during the interim demo.
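The obstacle check reduces to a standard ultrasonic ping; the pin numbers and the 20 cm cutoff below are illustrative, and the motor helpers are placeholders.

    const int TRIG_PIN = 9;
    const int ECHO_PIN = 10;
    const float STOP_CM = 20.0;          // illustrative cutoff

    float distanceCm() {
      digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
      digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
      digitalWrite(TRIG_PIN, LOW);
      long us = pulseIn(ECHO_PIN, HIGH, 30000UL);  // 30 ms timeout
      return us * 0.0343 / 2;                      // echo time -> cm
    }

    void setup() {
      pinMode(TRIG_PIN, OUTPUT);
      pinMode(ECHO_PIN, INPUT);
    }

    void loop() {
      float d = distanceCm();
      if (d > 0 && d < STOP_CM) {
        // stopMotors(); turnAround();   // motor helpers (not shown)
      }
    }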

Working on communication with our classification model has proved challenging. We decided to switch to a NodeMCU: the Arduino sends sensor data to the NodeMCU, which would then pass the data to a classification model and return the result. I2C communication has proved impossible to implement, as the NodeMCU cannot receive data from the slave Arduino and update across Wi-Fi at the same time. An alternative is to host the classification model on the NodeMCU itself and have the boards communicate over I2C or serial, since they are physically tethered. However, the speed of I2C does not fit the robot’s high-speed control loop. Serial communication is the other alternative we explored; although it is faster, we are facing issues sending an array of float data and receiving all the updated values on the NodeMCU. Looking past the interim demo, this is the biggest risk in our project, and we are actively working to mitigate it and devise alternatives.

Aditti’s Status Report for 4/1

This week was focused on integration and testing, and on implementing the code for navigating toward a scent source. Caroline set up the main logic for scanning in place upon detection of a scent (decided by a threshold), and I helped debug the code. I later set up the code so that detection is based on gradients: an increasing slope of the best-fit line over a period of one second. We tested the code using the ENS160 TVOC readings for the alcohol scent, then switched over to the Grove VOC reading, which gave us more reliable results. We also concluded that a pump to suction air into the car is not a feasible fix for our sensors’ sensitivity and pick-up latency; we will need a fan behind the source to blow the evaporating particles so the scent reaches the sensors in time. As we still don’t have the Arduino Mega, we cannot read from multiple sensors at once due to memory constraints, and so cannot combine sensor readings to predict our results as we originally intended. We fixed the robot’s motor issues with some speed control and tuning, which now causes the robot to go faster than we wanted, but with less skidding. I also changed the random exploration to compute the next set of coordinates in polar rather than Cartesian form (sketched below). Progress is slower than I wanted, but given the constraints of missing parts and the upcoming interim demo, we are preparing as best we can. Next week, I hope to switch over to the Arduino Mega, read from all three sensors, and test the code with multiple substances to ensure that we can differentiate between them.
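The polar version of the waypoint picker is simple: draw a random heading and distance instead of a random (x, y) pair. The bounds and pacing below are illustrative, and the motor helpers are placeholders.

    void setup() {
      randomSeed(analogRead(A5));        // unconnected pin as a noise source
    }

    void pickNextWaypoint() {
      long headingDeg = random(0, 360);  // where to face next
      long distanceCm = random(30, 120); // how far to drive (illustrative)
      // rotateTo(headingDeg); driveFor(distanceCm);  // motor helpers
    }

    void loop() {
      pickNextWaypoint();
      delay(2000);                       // placeholder pacing
    }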

Eshita’s Status Report for 3/25

This week, I worked on soldering our circuit parts together and on integrating sensor data reading over TCP to a local web server via the ESP8266. I encountered multiple issues that set back my progress on software and hardware integration. When editing the code for hardware integration, we found stability issues that left no memory for the local variables needed to issue commands to the ESP8266 module to send and receive data. This led me to pivot and try to make sensor sampling, collection, and retrieval possible with a Python script using just the ENS160 sensor. The differences between the bare Wi-Fi module and the NodeMCU are causing a lot of debugging issues. Due to the lack of libraries around the ESP chip, I have realized through research that sending a JSON string across will require manually writing the response codes and header metadata normally included in TCP/IP communication, so that it can be read by the requests library in Python (a sketch of such a response is below). Trying to use the existing NodeMCU library with the Wi-Fi module proved unsuccessful, and I am currently writing a function that produces the needed metadata so a script can consume the data and run our classification models. With the Arduino Mega, I imagine this will be an easier decision, so in the next week (and hopefully the next two days), I need to determine whether the early issues I faced with this chip, combined with the ones I am facing now, are enough to justify an alternative. The lack of hardware and software drivers for this chip is, in my opinion, making software and hardware integration harder than it needs to be, but I want to make as informed a decision as I can.
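Concretely, for Python’s requests library to parse the reply, the module has to emit a status line and headers before the JSON body, along the lines of the sketch below (the payload is illustrative).

    // Build a minimal, well-formed HTTP response around a JSON body.
    String buildHttpResponse(const String &json) {
      String r = "HTTP/1.1 200 OK\r\n";
      r += "Content-Type: application/json\r\n";
      r += "Content-Length: " + String(json.length()) + "\r\n";
      r += "Connection: close\r\n";
      r += "\r\n";                       // blank line ends the headers
      r += json;
      return r;
    }

    // Usage: send buildHttpResponse("{\"tvoc\": 412}") over the open
    // TCP connection (e.g. after AT+CIPSEND on the ESP8266).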

Team Status Report for 3/25

This week, we worked on robot construction and on integrating our sensor and motion subsystems. In terms of robot assembly, we began soldering connections onto a new protoboard and organizing all of the internal connections. We also glued the robot frame together and were finally successful in securing the motors to the chassis using wood glue, which greatly improved the stability of the wheels and the accuracy of the robot’s motion. In terms of integration, we finally wired all of our subsystems together so that our sensors and motors are connected to and controlled by the same MCU. When we combined everything, we realized that a single 9V battery could not power every component, so we connected an additional 9V battery to power the Arduino directly. We also worked on software integration, combining the motion and sensor logic: we can now take sensor readings at a specified sampling frequency while also issuing motor commands (the timing pattern is sketched below). We also added random path planning to determine the robot’s course instead of using predetermined coordinates.
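The timing pattern that makes this work is the usual millis()-based scheduling in place of delay(), roughly as below; the period and the motor helper are placeholders.

    const unsigned long SAMPLE_PERIOD_MS = 100;  // ~10 Hz (illustrative)
    unsigned long lastSample = 0;

    void updateMotors() { /* placeholder motor logic */ }

    void setup() { Serial.begin(9600); }

    void loop() {
      updateMotors();                        // never blocked by sampling
      unsigned long now = millis();
      if (now - lastSample >= SAMPLE_PERIOD_MS) {
        lastSample = now;
        Serial.println(analogRead(A0));      // placeholder sensor read
      }
    }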

However, we also identified several issues while integrating our subsystems. The biggest is that our sketch takes up too much space, which causes stability issues and does not leave enough memory for local variables. Because of this, we cannot establish the wireless connection required to send sensor data to a local machine, and we will need to upgrade to an Arduino Mega for more flash memory. Until we receive the new parts, we cannot continue with system integration, which might set us back by a few days.

The Wi-Fi subsystem is also presenting multiple issues with setting up TCP/IP connections to our web server because of the stability and memory problems. The chip sends raw data strings, so the program needs to supply a response status and header metadata for the output to be perceived as a proper response that the machine learning model can retrieve. Doing this across several sensors with multiple data streams is a big risk for the team to mitigate moving forward. Integrating these systems, as described above, is going to set us back a few days.

Moving forward, we will need to define our expectations for the interim demo and work on refining our individual subsystems.

Caroline’s Status Report for 3/25

This week, I worked more on robot assembly and on assisting with the integration of our different subsystems. I glued the components of the robot together, and the motors are finally secure, which has improved the accuracy of the robot’s movement. While working on integrating the sensor and motion systems, I helped test and debug when we ran into problems. We ran a few informal tests with the ENS160 sensor by thresholding the ethanol reading and were successful in getting the robot to stop when alcohol was placed about 6 inches from the sensor (a minimal version of this test is sketched below). Initially, our goal with integration was to let the robot run while sampling and collecting data, but we ran into unexpected issues due to Arduino hardware constraints: our program was too memory-intensive to run on the Arduino, so we were unable to transmit the sensor data wirelessly. I ordered an Arduino Mega and another Grove shield, and we will replace our old Arduino Uno as soon as the parts arrive. In the meantime, we can still collect data through a wired connection, but our priority is integration and preparing for the interim demo. We may use a simpler thresholding method for scent detection instead of a classification algorithm for the interim demo, so robust dataset collection is not the top priority at this stage. We need to define as soon as possible what we aim to accomplish before the demo and plan next week accordingly.
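The informal test amounted to only a few lines; the threshold below is illustrative, and the helpers are placeholders for our sensor and motor code.

    const int ETHANOL_THRESHOLD = 500;    // illustrative value

    int  readEthanol() { return analogRead(A0); }  // stands in for the ENS160 read
    void stopMotors()  { /* motor helper (not shown) */ }

    void setup() {}

    void loop() {
      if (readEthanol() > ETHANOL_THRESHOLD) {
        stopMotors();                     // robot halts near the alcohol
      }
    }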