Joel’s Status Update for 03/06/2021

This week was focused on providing more concrete details about the technology to be used in the project, as well as its design and integration. In particular, I was able to verify the feasibility of our planned design from the mechanical side of things. One concern resolved this week was the use of motor controllers to manage the motor system: we settled on using two L298N H-bridges, one to control each motor pair (front, rear). This gives us fine-grained control of each wheel, allowing for decent car agility. In addition, we have finalized almost all of the components (building materials, fasteners, screws, etc.) that we will need for each vehicle, with the exception of the exact sensor units for odometry and some power supply units. With our current configuration of vehicles and parts, we anticipate being able to fully afford 3 vehicles, with some budget left over for miscellaneous items and potentially some experimentation.
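To make the H-bridge control concrete, here is a minimal sketch of the signal logic for one L298N channel. The pin naming (IN1/IN2/ENA) follows the L298N's usual inputs, but the signed-speed convention and duty-cycle mapping below are illustrative assumptions, not our final interface.

```python
# Hypothetical sketch of the L298N control logic: each channel of the
# H-bridge takes two direction inputs (IN1/IN2) and a PWM enable (ENA).
# The signed-speed convention here is an assumption, not a final design.

def l298n_channel_state(speed):
    """Map a signed speed in [-1.0, 1.0] to (IN1, IN2, duty_cycle).

    speed > 0 -> forward, speed < 0 -> reverse, speed == 0 -> coast.
    duty_cycle is the PWM duty (0-100) driven onto the ENA pin.
    """
    if not -1.0 <= speed <= 1.0:
        raise ValueError("speed must be in [-1.0, 1.0]")
    duty = round(abs(speed) * 100)
    if speed > 0:
        return (1, 0, duty)   # forward
    if speed < 0:
        return (0, 1, duty)   # reverse
    return (0, 0, 0)          # coast (both inputs low)
```

On the actual vehicle, these tuples would be driven onto GPIO/PWM pins (for example with the Jetson.GPIO library), one channel per motor on each of the two boards.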

This week the team also collaborated to come up with more concrete metrics and testing strategies to evaluate our progress. This has allowed us to narrow our scope and maintain a more focused approach as we continue to fine-tune our design.

Moving forward, I need to focus on fabricating the vehicles once we have all the parts. In addition, I will develop the motor system interface to enable the driving portion of the car.

RC Unit (No Camera)

Jeffrey’s Status Report 6/3/21

This week I mainly focused on finalising some of the object detection algorithms and figuring out what is possible given our limited compute power. I brushed up on CNNs, including some of the state-of-the-art object detection algorithms, and investigated the possibility of using RGB-D information as part of the object detection. This would mean using depth as well as the image to make detection decisions, rather than simply using the depth information for planning. This led me to read about a Faster R-CNN variant that uses RGB-D information for state-of-the-art object detection, and how it can be used in conjunction with VGG16. The authors of the paper achieved a processing rate of around 5 fps on the COCO dataset using a standard GPU. This is definitely one algorithm I will look into further; however, it depends on the quality of data we can get from the depth camera. Since they used data generated from a high-quality depth camera such as the Intel RealSense, the point clouds they can generate from the RGB-D information will be much higher quality than what we can generate with a makeshift PS4 camera. More experimentation will be needed to see if such algorithms can work even if the RGB-D information isn't as rich.

Looking into VGG16, it seems like a lightweight and accurate enough network for our purposes. To get around the problem of the network being trained on real objects, we plan to print out pictures and paste them on our obstacles, so there is no need to generate a new dataset and retrain the network from scratch. We can simply freeze most of the weights and tune the network to give the desired precision and recall.

Moving forward, I want to start generating RGB-D data from the PS4 camera that we bought this week and begin testing object detection algorithms to see what works best and whether or not we need to rethink our approach.

Team Update for 02/27/2021

This week the group gave the proposal presentation to introduce the project idea and solution approach. Based on the feedback from the presentation, the group focused on correcting the points that drew critique in our proposal. We decided to move away from including LIDAR as a form of image sensing for our devices and opted to use cameras and odometry for perception and localization. The decision to drop LIDAR was mainly due to concerns about the fidelity of the LIDAR capture at the price point we could afford for the project, as well as the difficulty that may arise in processing the raw data for use in the autonomous vehicle system.

We opted to replace LIDAR for depth sensing with a stereoscopic camera, namely the PS4 camera. Research from this week has shown that this camera may serve our purposes adequately at an affordable price point. We also made strides in the vehicle mechanics, as we now have a CAD model underway with a growing parts list that we hope to finalize by the end of next week. Finally, we have started creating a more explicit set of requirements, goals, and tests for our project. We anticipate that we will have progressed far enough by the end of the week to have a rich set of requirements and goals backed by our design choices in the hardware and software space.
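For reference, the core of stereoscopic depth sensing is the relation Z = f·B/d between disparity and depth. The focal length and baseline below are placeholders, not measured PS4-camera parameters.

```python
# Depth from stereo disparity: Z = f * B / d, where f is the focal
# length in pixels, B the baseline between the two lenses (metres), and
# d the disparity in pixels. The default f and B values below are
# placeholders, not measured PS4-camera parameters.

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.08):
    """Return depth in metres for a given disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

This also shows why fidelity matters: depth resolution degrades quadratically with distance, since a one-pixel disparity error corresponds to a larger depth error the farther away the object is.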

Goals for this coming week include experimenting with the PS4 camera to verify its usability, finalizing the CAD model and RC car parts list, developing and documenting more specific goals and requirements that connect qualitative and quantitative measures, and finalizing the design review presentation.

Joel’s Update for 02/27/2021

This week I focused on doing more research on the mechanics of the vehicle. During my research I finalized two potential motor driver designs, which we will filter down to one by the end of next week. In addition, I started working on a CAD model for the vehicle. We found that we may end up pressed for space at a traditional RC kit size, so we opted to create our own slightly larger design. Lastly, I began testing some of the hardware we plan to use to drive the motors. An interesting find is that there is a 2 V drop across the motor controllers, which we may have to compensate for to reach the speeds we are looking for. This will require some research into how best to handle power delivery for all the vehicle systems.
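A quick back-of-envelope helper for that compensation question, assuming the drop is roughly constant. The ~2 V figure is from our bench testing; the motor ratings used in the examples are placeholders until parts are finalized.

```python
# Rough supply-voltage math for the motor controller's ~2 V drop.
# The 2 V figure comes from bench testing; motor ratings below are
# placeholders, not chosen parts.

def required_supply_voltage(motor_rated_v, controller_drop_v=2.0):
    """Minimum supply voltage so the motors still see their rated voltage."""
    return motor_rated_v + controller_drop_v

def motor_voltage(supply_v, duty_pct, controller_drop_v=2.0):
    """Approximate average voltage seen by the motor at a given PWM duty."""
    return max(0.0, supply_v - controller_drop_v) * duty_pct / 100.0
```

For example, 6 V motors would need at least an 8 V supply, and a 9 V pack at 50% duty would deliver only about 3.5 V to the motor.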

Fausto’s Status Update [02-27-2021]

At the beginning of this week, I practiced and presented our group’s Proposal Presentation. It went well and gave our group some ideas to think about and some timeline reconsiderations we could make. The biggest critique we received was in regard to object detection using LIDAR, so we will most likely drift away from that idea toward a more manageable alternative. Additionally, we realized we needed to make our goals more quantitative. I researched methods of setting up a Bluetooth ad hoc network and will continue to research and settle on an implementation next week.

Jeffrey’s Status Update 27/2/21

This week I focused on trying to finalise the design details of our project. After our presentation, we reconsidered the possibility of using LIDAR in our project and decided against it. The reason is that cheap LIDAR sensors don’t produce rich enough point maps to make meaningful object detection decisions, and the good object detection algorithms require heavy computation and high-quality point maps, which isn’t feasible with the hardware we are planning to get. Instead, we researched and found a way of manipulating PS4 cameras to work with Python and OpenCV to perform depth calculations. We are heavily considering this as an alternative to buying Intel depth cameras, since those are expensive.

Aside from this, I also looked into IMU units and found that the BMI160 IMU is compatible with the NVIDIA Jetson Nano. Communication is done over I2C by default, but the chip also supports SPI for faster data transfers. I looked into how setting up SPI would work, since we will want accelerometer and gyroscope readings from the IMU fairly often in order to communicate with the following car in a timely manner. I also researched integration a bit, and it appears fairly straightforward since the BMI160 exposes a simple register map over the bus.
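Whichever bus we use, each axis comes back as a 16-bit little-endian two's-complement value, so the decode step is the same. A sketch of that conversion is below; the full-scale ranges (±2 g, ±250 °/s) are illustrative, not our final sensor configuration.

```python
import struct

# Sketch of decoding raw BMI160 samples. Each axis is a 16-bit
# little-endian two's-complement value. The full-scale ranges below
# (±2 g, ±250 deg/s) are illustrative, not our final configuration.

ACCEL_SCALE = 2.0 / 32768.0    # g per LSB at a ±2 g range
GYRO_SCALE = 250.0 / 32768.0   # deg/s per LSB at a ±250 deg/s range

def decode_axes(raw6, scale):
    """Decode a 6-byte x/y/z burst read into scaled floats."""
    x, y, z = struct.unpack('<hhh', raw6)
    return (x * scale, y * scale, z * scale)
```

On the Jetson, the 6-byte burst would come from an I2C read (e.g. via smbus2) of the sensor's data registers; the decoding itself is bus-independent.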

Moving forward, we want to research interfacing the PS4 camera with the Jetson Nano and see whether the tutorial on getting depth sensing to work is accurate enough. I also plan to look more into algorithms that can do object detection without needing to default to colour filters and painting all our obstacles the same colour.

Fausto’s Status Update [02-20-2021]

This week, my teammates and I worked on consolidating our ideas and goals for our project by narrowing down various design options. After researching different forms of communication, we decided the best choice for communication between RC cars was Bluetooth. Bluetooth works well for our purposes because it has good range, can be used for ad hoc networking, and is simpler than some of the other options we were looking at (Wi-Fi). We are still deciding how much communication we want our cars to do and are exploring the idea of cars communicating at intersections for faster transportation (which car slows down, which one speeds up). Next week, we want to finalize the hardware components we will be using for our RC cars and more clearly define communication guidelines.
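Whatever transport we land on, the cars will need an agreed-upon message format. Here is a hypothetical fixed-size packet for car-to-car state messages; the field layout (car id, sequence number, speed, heading) is purely illustrative, since the actual protocol is still being defined.

```python
import struct

# Hypothetical fixed-size packet for car-to-car state messages. The
# field layout is illustrative; the real protocol is still undefined.
PACKET_FMT = '<BIff'  # car_id (u8), seq (u32), speed (m/s), heading (deg)

def pack_state(car_id, seq, speed, heading):
    """Serialize one car-state message into bytes for transmission."""
    return struct.pack(PACKET_FMT, car_id, seq, speed, heading)

def unpack_state(payload):
    """Parse a received payload back into (car_id, seq, speed, heading)."""
    return struct.unpack(PACKET_FMT, payload)
```

A fixed-size binary format keeps parsing trivial on both ends and keeps each message small, which matters over a Bluetooth link shared by several cars.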

Team’s Update for 02/20/2021

This week the team focused on getting the details of the project finalized. This included discussions with our assigned TA and professor to evaluate the state of our ideas. The meeting concluded with our team settling on the task of a convoy system to demonstrate as a use case for V2V communication. The rest of the week was spent researching some of the core technologies we intend to use for our project. Some of these insights led us to settle on technologies for communication and perception. We also spent time preparing our planned schedule for the rest of the semester. This led us to decide on tasks for each of us to complete, which are outlined in greater detail in our proposal documentation.

In the coming week, the team will focus on getting the necessary materials finalized within our budget constraints. This will include our design decisions regarding the mechanics of the RC cars. We would also like to set up team infrastructure such as GitHub so we can better manage and coordinate our tasks.

Jeffrey’s Status Update 20/2/21

This week I mainly focused on finalising some of the details for our project. We met with the TA to discuss some of the details, including what kind of RC car to use and whether we should build one from scratch or retrofit an existing one. After some discussion, we decided to build our own car, since Joel has experience building a car with 4 motors. Furthermore, I also looked into some of the details regarding perception. I looked at previous work where students used LIDAR for object detection and found that most used some form of odometry to make the path planning more accurate. I then looked into how to efficiently implement some form of odometry. I considered both an eagle-eye camera and hall-effect sensors, as both technologies have been used for autonomous RC cars in the past.
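If we go the hall-effect route, odometry reduces to counting wheel pulses. A sketch of that conversion is below; the pulses-per-revolution and wheel diameter are placeholders until we pick actual parts.

```python
import math

# Hypothetical hall-effect odometry conversion. Pulses per wheel
# revolution and wheel diameter are placeholders until parts are chosen.
PULSES_PER_REV = 20
WHEEL_DIAMETER_M = 0.065

def distance_travelled(pulse_count):
    """Distance in metres implied by a cumulative pulse count."""
    revs = pulse_count / PULSES_PER_REV
    return revs * math.pi * WHEEL_DIAMETER_M

def speed(pulse_delta, dt_s):
    """Average speed in m/s over a sampling window of dt_s seconds."""
    return distance_travelled(pulse_delta) / dt_s
```

The pulse resolution also bounds odometry accuracy: with these placeholder numbers, one pulse corresponds to roughly a centimetre of travel, which sets the finest position increment the planner could trust.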

Looking forward, I want to finalise and discuss the odometry setup, continue finalising parts, and start looking into possible perception algorithms.

Joel’s Status Update for 02/20/2021

This week I mostly focused on finalizing core project details with the rest of my teammates. We met with our assigned TA and professor to help reshape the scope of our project. After feedback from these meetings, our team settled on the task that we would attempt to perform with the vehicles. Another thing I looked into this week was the use of WiFi Direct connections for our communication protocol. This research revealed that the technology would be quite a hassle to get right and is not critical to enabling the functionality we require for our application. This led us to the decision to use Bluetooth for our communication.

Looking forward, I want to focus on getting the required components finalized for the mechanics of the vehicle by the end of this coming week.