Jeffrey’s Status Update 27/2/21

This week I focused on finalising the design details of our project. After our presentation, we reconsidered the possibility of using LIDAR and decided against it: cheap LIDAR sensors don't produce point maps rich enough to make meaningful object detection decisions, and the better object detection algorithms require heavy computation and high-quality point maps, which isn't feasible with the hardware we are planning on getting. Instead, we researched and found a way of adapting PS4 cameras to work with Python and OpenCV to perform depth calculations. We are heavily considering this as an alternative to buying Intel depth cameras, since those are expensive.

Aside from this, I also looked into IMUs and found that the BMI160 is compatible with the NVIDIA Jetson Nano. Communication is done over I2C by default, but the chip also supports SPI for faster data transfers. I looked into how setting up SPI would work, since we will want accelerometer and gyroscope readings from the IMU fairly often in order to reliably communicate with the following car. I also researched integration a bit, and it appears to be fairly straightforward, since the BMI160 exposes its sensor data through a simple register map read over the bus.
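To get a feel for the integration effort, here is a hedged sketch of polling the BMI160 over I2C from the Jetson. The register layout (gyro data from 0x0C, accelerometer from 0x12, little-endian signed 16-bit) and the 0x68 bus address follow my reading of the datasheet, but should be treated as assumptions to double-check before wiring anything up:

```python
import struct

BMI160_ADDR = 0x68  # default I2C address; 0x69 if SDO is pulled high (assumption)
DATA_REG = 0x0C     # gyro X LSB; accel data follows at 0x12 (per datasheet)

def decode_imu_burst(raw):
    """Decode a 12-byte burst read starting at DATA_REG into
    (gx, gy, gz, ax, ay, az) as raw signed 16-bit counts (little-endian).
    Scaling to deg/s and g depends on the configured range registers."""
    return struct.unpack("<6h", bytes(raw))

def read_imu(bus):
    """Burst-read gyro + accel in one I2C transaction.

    Requires real hardware; on the Jetson Nano, bus would be
    smbus2.SMBus(1) (or the bus number the IMU is wired to).
    """
    raw = bus.read_i2c_block_data(BMI160_ADDR, DATA_REG, 12)
    return decode_imu_burst(raw)
```

Reading all six axes in one burst rather than register-by-register should help with the "fairly often" polling requirement, on either I2C or SPI.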

Moving forward, we want to research interfacing the PS4 camera with the Jetson Nano and see whether the tutorial's depth sensing approach is accurate enough. I also plan to look more into algorithms that can do object detection without defaulting to colour filters and painting all our obstacles the same colour.

Jeffrey’s Status Update 20/2/21

This week I mainly focused on finalising some of the details for our project. We met with the TA to discuss these, including what kind of RC car to use and whether we should build one from scratch or retrofit an existing one. After some discussion, we decided to build our own car, since Joel has experience building a car with four motors. I also looked into some of the details regarding perception. Looking at previous work where students used LIDAR for object detection, I found that most used some form of odometry to make the path planning more accurate. I then looked into how to efficiently implement odometry, considering both an eagle-eye camera and hall-effect sensors, as both technologies have been used for autonomous RC cars in the past.
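For the hall-effect option, the core odometry arithmetic is simple: each magnet pass is one pulse, and distance travelled is pulses divided by pulses-per-revolution, times wheel circumference. A minimal sketch (the wheel diameter and pulse counts below are made-up illustration values, not our car's specs):

```python
import math

class WheelOdometer:
    """Dead-reckon travelled distance from hall-effect sensor pulses."""

    def __init__(self, wheel_diameter_m, pulses_per_rev):
        self.circumference = math.pi * wheel_diameter_m
        self.pulses_per_rev = pulses_per_rev
        self.pulses = 0

    def on_pulse(self, count=1):
        """Would be called from the sensor's GPIO interrupt callback."""
        self.pulses += count

    def distance_m(self):
        return (self.pulses / self.pulses_per_rev) * self.circumference
```

The eagle-eye camera alternative avoids this per-wheel bookkeeping but needs an external mounting point, which is part of what we still need to discuss.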

Looking forward, I want to finalise and discuss the odometry setup, continue finalising parts, and start looking into possible perception algorithms.