Nathan:
This past week was largely spent completing the Vivado tutorials and building an understanding of the software and hardware components left in the system. My conclusions (pending further group discussion) are as follows.
The deep learning inference should be primarily driven by Xilinx’s DeePhi DNNDK edge inference technology. DeePhi was originally a Beijing-based startup that Xilinx acquired just last year, which makes the technology both largely unexplored (leaving room for differentiation even with off-the-shelf IP) and yet well documented through Xilinx’s involvement. The video from the camera can be imported directly through GStreamer, for which Xilinx already has some examples we can use as a reference, albeit not completely plug-and-play. From there, we can use a system of servos in a pan-tilt arrangement to reorient the camera. We will need to write custom algorithms for moving the camera, adjusting focus/zoom as necessary, and computing a new camera position from the data returned by the neural net.
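As a rough sketch of what the capture path might look like, the snippet below assumes an OpenCV build with GStreamer support on the Ultra96 and a UVC webcam at /dev/video0; the pipeline string, resolution, and device path are placeholders rather than a tested configuration.

```python
import cv2

# Hypothetical GStreamer pipeline: pull frames from the USB webcam via v4l2src,
# convert to BGR, and hand them to OpenCV through an appsink.
# Device path and caps are placeholders and would need tuning for our camera.
pipeline = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=640,height=480,framerate=30/1 ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Failed to open GStreamer pipeline")

for _ in range(100):  # grab a bounded number of frames as a smoke test
    ok, frame = cap.read()
    if not ok:
        break
    # `frame` is a BGR numpy array; this is where it would be handed to the
    # inference stage (DNNDK / YOLOv3) and then to the camera-movement logic.

cap.release()
```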
In terms of scheduling, the Gantt chart has a bit of a gap in my work here, so I will use this as an opportunity to get ahead on my understanding of the software and potentially investigate expanding the project’s functionality, e.g. through cloud integration.
Jerry:
So I had been familiarizing myself with the Linux interface of the Ultra96 board, and was trying to get internet access on the device to make development easier (e.g. so we can download TensorFlow). I registered the device on CMU’s network, which let me successfully access the front page of CMU’s website (everything else redirected to it). I think after a bit of time it will give true internet access, but I haven’t tested this. If this doesn’t work, it’s possible to just copy files to the SD card.
The next step would be to write a program on the Ultra96 that controls the GPIO pins, and verify the signals with an oscilloscope.
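As a starting point, something like the sketch below could toggle a pin through the standard Linux sysfs GPIO interface; the GPIO number is a placeholder (the real mapping for the Ultra96 header still needs to be looked up), and the board’s image may require a different interface entirely.

```python
import time

GPIO_NUM = 338          # placeholder; depends on the Ultra96 pin mapping
GPIO_ROOT = "/sys/class/gpio"

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Export the pin and configure it as an output (requires root).
write(f"{GPIO_ROOT}/export", str(GPIO_NUM))
write(f"{GPIO_ROOT}/gpio{GPIO_NUM}/direction", "out")

# Toggle at ~1 Hz so the square wave is easy to see on an oscilloscope.
try:
    while True:
        write(f"{GPIO_ROOT}/gpio{GPIO_NUM}/value", "1")
        time.sleep(0.5)
        write(f"{GPIO_ROOT}/gpio{GPIO_NUM}/value", "0")
        time.sleep(0.5)
finally:
    write(f"{GPIO_ROOT}/unexport", str(GPIO_NUM))
```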
Karthik:
Because we got the board this week, I helped Jerry with setting it up and trying to connect it to the internet. Also, before we got the board on Thursday, I talked with both Nathan and Jerry about the types of cameras we would like to use for our tracking system, and we decided on the Logitech C920 webcam. Because this webcam is not supported by gphoto or gphoto2, I have been looking into other alternatives for interfacing with it. Out of the options I have looked into so far, we could use either OpenCV or Video4Linux2. Of these two, I am leaning towards OpenCV because it is a well-established library.
Because we switched from gphoto to new capture software, we are a bit behind, but once we install OpenCV and get the camera next week, I should be able to interface with it and send video data to the YOLOv3 neural network on the backend. If this gets done, I believe we will be back on schedule.
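As a rough sketch of the interface I have in mind (assuming OpenCV can open the C920 as a standard UVC device, and using a placeholder run_yolo function standing in for the actual YOLOv3 backend), frame capture and hand-off might look like this:

```python
import cv2

def run_yolo(blob):
    """Placeholder for the real YOLOv3 backend running on the board."""
    raise NotImplementedError

cap = cv2.VideoCapture(0)  # the C920 should enumerate as a standard UVC device
if not cap.isOpened():
    raise RuntimeError("Could not open the webcam")

ok, frame = cap.read()
if ok:
    # YOLOv3 conventionally takes a square, normalized RGB input (e.g. 416x416);
    # cv2.dnn.blobFromImage handles the resize, scaling, and channel swap.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1/255.0, size=(416, 416),
                                 swapRB=True, crop=False)
    detections = run_yolo(blob)

cap.release()
```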
Team:
Our primary visible accomplishment this week was getting our FPGA board up and running. We’ve also discussed the control algorithm for the mechatronics. If we use a servo, a simple feedback controller would suffice to center the image, given knowledge of whether the center of the bounding box should be moved up/down or left/right. A motor will be slightly more involved, because we’d need a PID controller, but we have experience implementing such algorithms.
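As a first cut at the servo case, a simple proportional step based on the bounding-box center could look like the sketch below; the gain, deadband, sign conventions, and the set_pan_tilt servo function are all placeholders we would tune once the pan-tilt hardware is in hand.

```python
# Minimal sketch of the centering feedback loop, assuming a servo interface
# set_pan_tilt(pan_deg, tilt_deg) that we still have to write.

FRAME_W, FRAME_H = 640, 480
GAIN = 0.02        # degrees of correction per pixel of error (placeholder)
DEADBAND = 20      # ignore errors smaller than this many pixels

pan_deg, tilt_deg = 90.0, 90.0  # assumed neutral servo position

def set_pan_tilt(pan_deg, tilt_deg):
    """Placeholder for the real servo driver (PWM output from the board)."""
    print(f"pan={pan_deg:.1f} tilt={tilt_deg:.1f}")

def update(bbox):
    """bbox = (x, y, w, h) from the detector, in pixel coordinates."""
    global pan_deg, tilt_deg
    cx = bbox[0] + bbox[2] / 2
    cy = bbox[1] + bbox[3] / 2
    err_x = cx - FRAME_W / 2   # positive: target is right of center
    err_y = cy - FRAME_H / 2   # positive: target is below center
    if abs(err_x) > DEADBAND:
        pan_deg -= GAIN * err_x   # sign depends on how the servo is mounted
    if abs(err_y) > DEADBAND:
        tilt_deg += GAIN * err_y
    set_pan_tilt(pan_deg, tilt_deg)
```

For the motor case, the same error terms would feed a PID loop instead of a single proportional step.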