Caroline’s Status Report for 2/18

This week, I primarily worked on the robot design and CAD model. I worked with Aditti to scope out the dimensions of the robot and the layout of the internal components. Then, I modeled the parts in SolidWorks and created a layout for laser cutting the pieces. I also inspected the parts that arrived for our robot this week and made additional measurements to ensure that our hardware assembly goes smoothly. We now have most of the key parts needed to assemble our robot, and are just waiting on one of the gas sensors and a shield for the Arduino. We are on track to laser cut the pieces and have the main robot chassis assembled by next week. Next week, I will start testing the parts that arrived. In particular, it is important that we verify the functionality of the motors and their compatibility with the rest of our systems.

This week, I did not have to apply any skills learned from ECE courses, as I was mostly focused on robot construction. I learned CAD modeling from previous internships and the introductory mechanical engineering course.

Aditti’s Status Report for 2/18

This week I focused on finalizing the robot CAD. I worked with Caroline to develop an effective layout for our parts on the robot and did the SolidWorks modeling for the base plate. I also researched controller strategies such as pure pursuit and PID control for robot motion and finalized some algorithms: down-sampling and wavefront segmentation for vision, A* with visibility graphs for global path planning, and Runge-Kutta integration and PID control for motion. Some of the courses that cover these topics include 18-370 (Fundamentals of Control), 16-311 (Intro to Robotics), 16-385 (Computer Vision), and 18-794 (Pattern Recognition Theory).
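As a rough illustration of the PID direction (a minimal sketch only; the gains and units are placeholders that would need to be tuned on the assembled robot):

```cpp
// Minimal PID step, illustrative only: gains and units are placeholders
// that would need to be tuned on the assembled robot.
struct PID {
  float kp, ki, kd;       // proportional, integral, derivative gains
  float integral = 0.0f;  // accumulated error over time
  float prevErr  = 0.0f;  // error from the previous control step

  // error = setpoint - measurement; dt = time step in seconds.
  // Returns the actuator command (e.g., a motor PWM adjustment).
  float step(float error, float dt) {
    integral += error * dt;
    float derivative = (error - prevErr) / dt;
    prevErr = error;
    return kp * error + ki * integral + kd * derivative;
  }
};
```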

I also started looking into alternatives to multi-threading on the Arduino, as we discovered that the robot will need to sense and move at the same time to make good use of time. So far, I have found approaches that use millis()-based timing instead of delay(), along with OOP, to build a state machine on the Arduino. Additionally, I prepared the slides for the design review and practiced for presenting next week. We plan on laser-cutting the robot chassis tomorrow (Sunday 02/19) and assembling the robot as the parts come in. Next week I plan to start testing the motors and calibrating them so that the robot can move along a fixed path. This will involve implementing the PID controller and odometry as well. Since we will be testing the alternative approach of random motion before path planning, I've decided to prioritize odometry over path planning for the moment.
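To illustrate the idea, here is a minimal sketch of the millis()-based approach (readGasSensors() and updateMotors() are hypothetical placeholders for our actual routines, and the periods are assumed values):

```cpp
// Minimal sketch of millis()-based cooperative scheduling on the Arduino.
// readGasSensors() and updateMotors() are hypothetical placeholders for
// our actual sensing and motor-control routines.
const unsigned long SENSOR_PERIOD_MS = 200;  // assumed sensor sampling period
const unsigned long MOTOR_PERIOD_MS  = 50;   // assumed motor control period

unsigned long lastSensorMs = 0;
unsigned long lastMotorMs  = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long now = millis();

  // Poll the gas sensors on their own schedule...
  if (now - lastSensorMs >= SENSOR_PERIOD_MS) {
    lastSensorMs = now;
    // readGasSensors();  // placeholder
  }

  // ...while motor updates run on a faster schedule, with no delay()
  // calls blocking either task.
  if (now - lastMotorMs >= MOTOR_PERIOD_MS) {
    lastMotorMs = now;
    // updateMotors();    // placeholder
  }
}
```

Unlike delay(), this keeps loop() free to service both tasks, which is what lets the robot sense and move at the same time.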

Team Status Report for 2/18

This week we got started on several aspects of our robot, including the CAD design and the Wavefront Segmentation algorithm for our path planning. We also ordered the parts for our robot and received most of them, focusing on how the different parts will integrate together. Additionally, we researched how to connect the Arduino and WiFi module to Azure, which is now the decided cloud platform for our ScentBot. Towards the end of the week, we compiled this information into our Design Review slides.

We have attached a photo of our completed CAD design.

[Photo: completed CAD design]

With the parts we have ordered, we anticipate a few challenges, which we also discussed with our advisors: achieving good motor control and getting the robot to follow a straight path, since we are assembling the robot from custom-built parts. We are also considering an alternate path planning approach because of our project's high dependence on sensor sensitivity.

If the sensors are sensitive enough to detect an object from farther than a 0.5 m radial distance, we will change our test setup to use a single scented object. The scent will be placed in a diffuser/spray to create a radial distribution of scent for our robot to follow. The robot will "randomly" explore the map until it detects a scent, and will then follow the direction of increasing scent concentration. The robot will receive a travel distance and angle to follow and will reorient itself to a different angular orientation after this set distance. An image of our alternate testing approach is shown below.
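As a rough sketch of this exploration behavior (all helper names and constants below are hypothetical placeholders, not our actual code):

```cpp
// Illustrative sketch of the random-exploration step described above.
// All helpers and constants here are hypothetical placeholders, not our
// actual implementation.
#include <cstdlib>

const float SEGMENT_DISTANCE_M = 0.5f;  // assumed travel distance per segment

// Stubs standing in for real robot routines.
bool  scentDetected()               { return false; }
float directionOfStrongestReading() { return 0.0f;  }
void  rotateTo(float headingDeg)    { /* turn in place */ }
void  driveStraight(float meters)   { /* drive using odometry */ }

// One exploration step: follow the scent gradient when a scent is
// detected, otherwise reorient to a random heading and keep exploring.
void exploreStep() {
  float heading = scentDetected()
                      ? directionOfStrongestReading()
                      : static_cast<float>(rand() % 360);
  rotateTo(heading);
  driveStraight(SEGMENT_DISTANCE_M);
}
```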

The Wavefront Segmentation algorithm runs on an image of a letter-size sheet of paper in 0.22 s on average, and can detect objects present on a white background. It thresholds the image for faster computation, then calculates and prints the locations of the objects' centroids. One challenge we faced immediately was ensuring that shadows do not overlap objects in the image captured from the overhead camera.
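For illustration, a minimal sketch of the threshold-then-wavefront centroid extraction described above (the threshold value, 4-connectivity, and image format are assumptions; our actual pipeline differs in detail):

```cpp
// Minimal sketch of threshold-then-wavefront centroid extraction on an
// 8-bit grayscale image with a white background. The threshold value and
// 4-connectivity are assumptions, not our exact pipeline.
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

struct Centroid { double x, y; };

std::vector<Centroid> findCentroids(const std::vector<uint8_t>& img,
                                    int width, int height,
                                    uint8_t thresh = 128) {
  std::vector<int> label(width * height, 0);  // 0 = unvisited
  std::vector<Centroid> centroids;
  int nextLabel = 1;

  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      int idx = y * width + x;
      // A dark (object) pixel that is not yet labeled seeds a new wavefront.
      if (img[idx] >= thresh || label[idx] != 0) continue;

      long long sumX = 0, sumY = 0, count = 0;
      std::queue<std::pair<int, int>> frontier;
      frontier.push({x, y});
      label[idx] = nextLabel;

      // Expand the wavefront across the connected dark region.
      while (!frontier.empty()) {
        auto [cx, cy] = frontier.front();
        frontier.pop();
        sumX += cx; sumY += cy; ++count;

        const int dx[4] = {1, -1, 0, 0};
        const int dy[4] = {0, 0, 1, -1};
        for (int d = 0; d < 4; ++d) {
          int nx = cx + dx[d], ny = cy + dy[d];
          if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
          int nidx = ny * width + nx;
          if (img[nidx] < thresh && label[nidx] == 0) {
            label[nidx] = nextLabel;
            frontier.push({nx, ny});
          }
        }
      }
      // The centroid is the mean pixel coordinate of the region.
      centroids.push_back({double(sumX) / count, double(sumY) / count});
      ++nextLabel;
    }
  }
  return centroids;
}
```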


To design a robot that fits our use-case requirements, we drew on principles from differential-drive robots, PID control, wavefront segmentation, Runge-Kutta localization, A* graph search, visibility graphs, state machines on the Arduino, fluid dynamics and scent distributions, and chemical compositions and gas sensor types.


Caroline’s Weekly Status Report for 2/11

This week, I practiced and delivered the proposal presentation for my team. In addition to the presentation, I helped research and organize different parts for our robot so that we can order components as soon as possible and begin testing and assembly. We had to shift a few items on our schedule, such as robot assembly, because they require specific parts, but we will catch up as soon as possible. Next week, I will help work on the CAD modeling for the robot chassis and also start setting up the Arduino codebase for path planning.

Aditti’s Status Report for 2/11

This past week I focused on researching different sensor modules and robot chassis options for our use-case scents and shortlisted parts for ordering. I also worked on the CV pipeline for finding the centroids of various objects on a map by thresholding and segmenting. The model takes in a raw camera image, maps it to a 4×4 grid, and finds the centroid coordinates of each object on the map. The code still needs to be tested with images from the actual camera that we will end up using. Since we are behind on ordering parts, robot assembly and CAD will have to be completed within the next week, which is why I decided to work on the segmentation section this week. By next week I aim to finalize the robot CAD, assemble the robot, and start thinking about the path planning logic.
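For illustration, mapping a centroid from pixel coordinates into the 4×4 grid might look like the following sketch (the example image resolution is an assumption, not our actual camera's):

```cpp
// Sketch of mapping a centroid from pixel coordinates into a 4x4 grid
// cell. The example image resolution is an assumption, not our camera's.
#include <cstdio>

struct GridCell { int row, col; };

GridCell toGridCell(double cx, double cy, int imgWidth, int imgHeight) {
  const int GRID = 4;  // the map is divided into a 4x4 grid
  int col = static_cast<int>(cx * GRID / imgWidth);
  int row = static_cast<int>(cy * GRID / imgHeight);
  // Clamp in case a centroid lies exactly on the far edge of the image.
  if (col >= GRID) col = GRID - 1;
  if (row >= GRID) row = GRID - 1;
  return {row, col};
}

int main() {
  GridCell cell = toGridCell(412.5, 180.0, 640, 480);  // example centroid
  std::printf("centroid falls in grid cell (%d, %d)\n", cell.row, cell.col);
  return 0;
}
```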

Team Status Report for 2/11

This week, we delivered our proposal presentation. Based on the feedback we received, we are considering changing the design of the path planning system to rely more on the sensor module to find the scented objects without the assistance of an overhead camera to communicate the locations of objects. The feasibility of this strategy depends heavily on the performance of our sensors, so we will wait until we can test with the actual sensors before fully committing to any design changes.

We anticipate that the sensitivity of our sensors will be a concern in the near future. We plan to mitigate this risk by spending more time calibrating the sensors carefully and reading the documentation provided by Seeed and Adafruit. We may also revise our path planning strategy if the sensitivity of the sensors (in terms of detection distance) does not meet our requirements.

Because of the proposal presentations this week and the release date of the part ordering form, we have shifted a few items in our schedule to the following week. We plan to start the robot design next week as well as the field construction in addition to the items we already have scheduled.

Our project includes considerations for public safety. We want to ensure that our robot can plan paths correctly around potential hazards without running into them or causing spills. Our use case of helping people with anosmia also contributes to the positive impact we aim to have on public health and welfare.