Weekly Report #3 – 3/2

Jerry:

I have fine-tuned the YOLOv3-tiny network to specialize solely in person detection, using data from MS-COCO (the dataset used to train the pre-trained network) and the Caltech Pedestrian Dataset, which should increase the network's accuracy for our use case. I verified that the fine-tuned network performs better on the Caltech dataset than the pre-trained one.
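As a rough illustration of the data-preparation step (not our exact script), filtering MS-COCO down to person-only labels with pycocotools looks something like the sketch below; the annotation and output paths are placeholders.

```python
# Sketch: extract person-only boxes from MS-COCO and write them in YOLO label format.
# The annotation file path and the "labels/" output directory are placeholders.
import os
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")
person_cat = coco.getCatIds(catNms=["person"])      # COCO's person category
img_ids = coco.getImgIds(catIds=person_cat)         # images containing at least one person

os.makedirs("labels", exist_ok=True)
for img_id in img_ids:
    info = coco.loadImgs(img_id)[0]
    anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id, catIds=person_cat, iscrowd=False))
    w, h = info["width"], info["height"]
    # Convert each [x, y, width, height] box to YOLO's normalized (cx, cy, w, h) format,
    # all under class id 0, since the fine-tuned network only detects people.
    lines = []
    for a in anns:
        x, y, bw, bh = a["bbox"]
        lines.append(f"0 {(x + bw / 2) / w:.6f} {(y + bh / 2) / h:.6f} {bw / w:.6f} {bh / h:.6f}")
    with open(f"labels/{info['file_name'].rsplit('.', 1)[0]}.txt", "w") as f:
        f.write("\n".join(lines))
```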

Since we've formally decided to support battery operation, I have also been researching possible batteries for the system. At first I considered USB power banks, and found that while this is possible, a fair amount of extra equipment would be needed:

  • USB power bank
  • A way to convert the 5V USB output to a 12V 4.8mm x 2.1mm jack output
  • A way to prevent the power bank from turning itself off under low current draw (!!)
  • Courage to believe that the power bank won’t draw too much power while it’s being forced on

I breadboarded a circuit that kept one of my own power banks from shutting off, but then by chance I found a different battery that outputs 12V DC directly and is well-suited to powering low-draw devices for long periods, so we decided to use that instead.

Next time, I will try to make the neural network run on the board.

Karthik:

I have been working on interfacing with the GPIO pins on the Ultra96 board. This will become more useful as the parts we ordered start to come in; since we don't have all of them yet, I am interfacing with LEDs connected through the GPIO pins as a proof of concept. To do this, I installed several board configuration files and worked through some Ultra96 startup guides to set up both Vivado and the Xilinx SDK on my laptop for GPIO use. This involved getting pin assignments from a third-party site, after which I started configuring both Vivado and the Xilinx SDK with the proper pin assignments.
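Separately from the Vivado/SDK flow, once a Linux image is on the board there is also a quick way to sanity-check an LED from userspace through the kernel's sysfs GPIO interface. The sketch below is only illustrative: the GPIO number 338 is a placeholder that depends on the ZynqMP GPIO controller's base offset and whichever pin we actually assign.

```python
# Sketch: blink an LED on an Ultra96 GPIO pin through Linux sysfs.
# The pin number (338) is a placeholder; the real number depends on the
# ZynqMP GPIO controller's base offset plus the MIO/EMIO pin the LED is wired to.
import time

GPIO = 338
BASE = "/sys/class/gpio"

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

try:
    write(f"{BASE}/export", str(GPIO))    # expose the pin to userspace
except OSError:
    pass                                  # already exported

write(f"{BASE}/gpio{GPIO}/direction", "out")

for _ in range(10):                       # blink at roughly 1 Hz
    write(f"{BASE}/gpio{GPIO}/value", "1")
    time.sleep(0.5)
    write(f"{BASE}/gpio{GPIO}/value", "0")
    time.sleep(0.5)

write(f"{BASE}/unexport", str(GPIO))
```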

On top of this, I have been working with Nathan and Jerry to get an Arch Linux system with the Deephi libraries we need onto our board. Overall, I think we are close to on schedule: with the OS on the board and the Logitech webcam in hand, we can now start testing the GPIO interfacing together with the webcam.

Nathan:

I spent my week focusing on getting the Ultra96 functional with the Deephi demo IP. Since that process ended up pulling in Karthik and Jerry as well, I will describe it in the team section. Additionally, I worked on nailing down the specifics of our control and low-power motion subsystem. The general design is as follows. We will use the Adafruit Feather platform (specifically their ARM M0+-based board, specialized for CircuitPython) to monitor the motion sensor with low power consumption. Upon detecting motion, it will send a signal over UART to the Ultra96 to wake it from deep sleep; this seems to be the most economical low-power way to monitor the device's surroundings. The Feather will also control the servos and stepper motors of the control system, since Adafruit and the Arduino ecosystem have very robust software tools for precisely that purpose, and there is no effective power difference compared to driving them from the Ultra96 itself, despite the latter's greater complexity.
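A minimal sketch of what the Feather side could look like in CircuitPython, assuming a PIR-style motion sensor on a digital pin (D5 here is arbitrary) and a single wake byte sent over UART to the Ultra96:

```python
# Sketch (CircuitPython on the Feather M0+): watch a motion sensor and send a
# wake byte to the Ultra96 over UART. Pin D5 and the 9600 baud rate are
# placeholders for whatever we actually wire up.
import time
import board
import busio
import digitalio

uart = busio.UART(board.TX, board.RX, baudrate=9600)

pir = digitalio.DigitalInOut(board.D5)
pir.direction = digitalio.Direction.INPUT

while True:
    if pir.value:                 # motion detected
        uart.write(b"W")          # wake signal for the Ultra96
        time.sleep(5)             # debounce so we don't spam wake-ups
    time.sleep(0.1)               # slow polling keeps the loop mostly idle
```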

Also, I forgot to mention this in the last post, but I significantly improved the website before last week’s blog post, as some visitors may have noticed, and even included one-click links to our presentations (as opposed to embedding them in a post).

Team:

In addition to the contributions above, our team had an epic this Saturday to rival the Iliad. Nathan started by trying to get some Deephi demos working on the board. Nominally, this involved downloading a Xilinx board image and transferring over some test programs, a simple feat in theory. However, upon trying to boot that board image, nothing happened. We tried reflashing the original Linux distribution, which worked just fine, and also validated the flashing utility, but the Deephi boot image just wouldn't work.

Eventually, we tracked down a UART-to-USB adapter, which let us debug the boot process instead of flying blind. What we saw, however, was the following message, over and over again:

Xilinx Zynq MP First Stage Boot Loader
Release 2018.2 Dec 10 2018 – 14:03:11
Reset Mode : System Reset
Platform: Silicon (4.0), Running on A53-0 (64-bit) Processor, Device Name: XCZUG
SD0 Boot Mode
Non authenticated Bitstream download to start now
PL Configuration done successfully

We tried various solutions, including booting from a USB drive (which produced nothing over the debug console) and even modifying the boot binary, but nothing worked.

Eventually, we stumbled upon the true source of the problem: the lab power supply can only provide 0.5 A, which was just enough to boot into Linux but not enough to power the Deephi boot process. By increasing the supply voltage (with the current capped at 0.5 A, a higher voltage means more power delivered, since P = V·I), we got as far as the desktop, but still not enough to run the demos. We will receive a new power supply this week to fix this problem.

Edit 3/2: Corrected typo “logic” to “Linux”.

Weekly Report #2 – 2/23

Nathan:

This past week was largely spent completing the Vivado tutorials and building an understanding of the software and hardware components left in the system. My conclusions (pending further group discussion) are as follows.

The deep learning inference should be primarily driven by Xilinx's Deephi DNNDK edge inference technology. Deephi was originally a Beijing-based startup that Xilinx acquired just last year, which makes the technology both largely unexplored (allowing room for differentiation even with off-the-shelf IP) and yet well documented through Xilinx's involvement. The video from the camera can be imported directly through GStreamer, for which Xilinx already has some examples we can use as a reference, albeit not completely plug and play. From there, we can use a system of servos in a pan-tilt arrangement to reorient the camera. We will need to custom-write algorithms for moving the camera, focusing/zooming as necessary, and computing a movement position based on the data returned by the neural net.
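As a rough sketch of what the capture path could look like (assuming an OpenCV build with GStreamer support; the pipeline string, the /dev/video0 device node, and the detect_people stub are placeholders until we adapt Xilinx's examples):

```python
# Sketch: feed webcam frames from a GStreamer pipeline into OpenCV.
# Assumes OpenCV is built with GStreamer support; the pipeline string and
# /dev/video0 device node are placeholders until we adapt Xilinx's examples.
import cv2

PIPELINE = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=640,height=480,framerate=30/1 ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink"
)

def detect_people(frame):
    """Placeholder for the DNNDK person detector; returns a list of bounding boxes."""
    return []

cap = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    boxes = detect_people(frame)   # hand the BGR frame to the network
    # ...use the boxes to compute a new camera position...
cap.release()
```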

In terms of scheduling, the Gantt chart has a bit of a gap in my work here, so I will use it as an opportunity to get ahead on my understanding of the software and potentially investigate expanding the project's functionality, e.g. through cloud integration.

Jerry:

I have been familiarizing myself with the Linux interface of the Ultra96 board and trying to get internet access on the device to make development easier (e.g. so we can download TensorFlow). I registered the device on CMU's network, which let me successfully access the front page of CMU's website (everything else redirected to it). I think it will grant full internet access after a bit of time, but I haven't tested this yet. If it doesn't work, we can always just copy files onto the SD card.

The next step is to write a program on the Ultra96 that controls the GPIO pins, and to verify the signals with an oscilloscope.

Karthik:

Because we got the board this week, I helped Jerry set it up and try to connect it to the internet. Also, before we got the board on Thursday, I talked with Nathan and Jerry about the types of cameras we would like to use for our tracking system, and we decided on the Logitech C920 webcam. Because this webcam is not supported by gphoto or gphoto2, I have been looking into alternatives for interfacing with it. Of the options I have looked at so far, we could use either OpenCV or Video4Linux2; I am leaning towards OpenCV because it is an already-established library.
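As a quick sanity check of the OpenCV route (a sketch only; the device index 0 and the MJPG/1080p settings are assumptions about how the C920 will enumerate), grabbing a frame over Video4Linux2 would look roughly like this:

```python
# Sketch: capture one frame from the Logitech C920 via OpenCV's Video4Linux2 backend.
# Device index 0 and the MJPG/1080p settings are assumptions about the webcam.
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))  # MJPG keeps 30 fps at 1080p
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

ok, frame = cap.read()
if ok:
    print("Captured frame of shape", frame.shape)   # expect (1080, 1920, 3)
else:
    print("Failed to read from the camera")
cap.release()
```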

Because we switched from gphoto to new capture software, we are a bit behind, but I believe that next week, once we install OpenCV and receive the camera, I should be able to interface with it and send video data to the YOLOv3 neural network on the backend. If that gets done, I believe we will be back on schedule.

Team:

Our primary visible accomplishment this week was getting our FPGA board up and running. We've also discussed the control algorithm for the mechatronics. If we end up using servos, a simple feedback controller would suffice to center the image, given knowledge of whether the center of the bounding box needs to move up/down or left/right. A motor would be slightly more involved, because we'd need a PID controller, but we have experience implementing such algorithms.
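A minimal sketch of what that feedback step could look like for the servo case (the gains, frame size, and angle limits are placeholder values, and the real controller will need tuning):

```python
# Sketch: one proportional-control step that nudges pan/tilt servo angles toward
# centering a detected bounding box in the frame. Gains, frame size, and angle
# limits are placeholders; the motor-driven version would grow into a full PID loop.
FRAME_W, FRAME_H = 640, 480
KP_PAN, KP_TILT = 0.05, 0.05        # degrees of correction per pixel of error

def center_step(bbox, pan_deg, tilt_deg):
    """bbox = (x, y, w, h) from the detector; returns updated servo angles."""
    x, y, w, h = bbox
    err_x = (x + w / 2) - FRAME_W / 2     # positive: person is right of center
    err_y = (y + h / 2) - FRAME_H / 2     # positive: person is below center

    # Signs depend on how the servos are mounted; flip if the camera moves the wrong way.
    pan_deg += KP_PAN * err_x
    tilt_deg += KP_TILT * err_y

    # Clamp to the servos' mechanical range.
    pan_deg = max(0.0, min(180.0, pan_deg))
    tilt_deg = max(0.0, min(180.0, tilt_deg))
    return pan_deg, tilt_deg
```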