Heather’s Status Update for 11/7
I worked on three things this week: servos, video transmission, and integration.
For the servos, I'm managing to get some movement, but not the full range I expected; I think it has something to do with the duty cycle. At least this is better than last week, when they weren't working at all. That issue turned out to be a frequency mismatch, and once I corrected the PWM frequency I stopped getting an error every time I tried to set the angle.
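Since the limited range may come from the duty-cycle mapping, here is a minimal sketch of the angle-to-duty math for a standard hobby servo on 50 Hz PWM. The 1–2 ms pulse window and 180° travel are assumptions, not our measured values; some servos need a wider window (e.g. 0.5–2.5 ms) to reach full travel, which could explain the reduced range.

```python
def angle_to_duty(angle, freq_hz=50.0, min_pulse_ms=1.0, max_pulse_ms=2.0):
    """Map a servo angle (0-180 deg) to a PWM duty cycle in percent.

    Assumes a standard hobby servo: at 50 Hz (20 ms period), a 1 ms
    pulse is 0 deg and a 2 ms pulse is 180 deg. If the servo does not
    reach its full range, try widening the pulse window (0.5-2.5 ms).
    """
    period_ms = 1000.0 / freq_hz          # 20 ms at 50 Hz
    span_ms = max_pulse_ms - min_pulse_ms
    pulse_ms = min_pulse_ms + (angle / 180.0) * span_ms
    return pulse_ms / period_ms * 100.0   # duty cycle as a percentage

# Hypothetical use with Jetson.GPIO (pin number is a placeholder):
#   pwm = GPIO.PWM(33, 50)            # 50 Hz, matching freq_hz above
#   pwm.start(angle_to_duty(90))      # center the servo
#   pwm.ChangeDutyCycle(angle_to_duty(45))
```

One thing worth checking against this: if the PWM frequency passed to the driver differs from the one assumed in the duty math, every commanded angle lands in a compressed slice of the servo's travel.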
For video transmission I tried three different things.

First, I'm able to transmit video using RTSP, but I'm having trouble integrating it into conferencing software. I followed the steps at https://gist.github.com/neilyoung/8216c6cf0c7b69e25a152fde1c022a5d to set up gst-rtsp-server. Windows 10 can detect ONVIF cameras and integrate them into my computer, and I'm pretty sure I could build on the gst-rtsp-server code to take advantage of that, but my computer is not detecting the Jetson at all. I suspect it's because I'm not advertising the stream with ONVIF.

Second, Eddy found that Linux machines can use udpsrc video as a source for Zoom via v4l2loopback, so we tried that out. I haven't gotten it to work either, because security settings on my laptop prevent modprobe v4l2loopback from loading the module.

Lastly, I've been trying to get the NVIDIA SDK Manager working on my laptop, and I finally did by booting the computer into Linux only. I originally tried dual boot, but I couldn't get my computer to give the Ubuntu partition enough space, so now I have a Linux-only laptop. Now that SDK Manager works and I can get all the OS files, I still see no way to edit the kernel, so I'm not sure that approach is even feasible. The plus side is that with a Linux laptop I was able to install Zoom, so we can keep trying Eddy's v4l2loopback idea.
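For the v4l2loopback route, the usual pattern (once modprobe is allowed to load the module) is to create a virtual camera device and then feed the UDP stream into it with GStreamer, after which Zoom can select it as a webcam. The port, device number, and H.264 encoding below are placeholder assumptions, not our actual settings; this just sketches the commands we would run.

```python
def v4l2loopback_setup_cmds(port=5000, video_nr=10):
    """Build the shell commands for the v4l2loopback approach (a sketch).

    Assumes the Jetson sends RTP-wrapped H.264 over UDP; the port and
    /dev/video number are placeholders, not our real configuration.
    """
    device = "/dev/video%d" % video_nr
    # Load the loopback module with one virtual camera device.
    modprobe = ("sudo modprobe v4l2loopback devices=1 "
                "video_nr=%d card_label=JetsonCam" % video_nr)
    # Depayload the RTP stream, decode H.264, and write raw frames into
    # the loopback device so conferencing apps see it as a camera.
    pipeline = ("gst-launch-1.0 udpsrc port=%d "
                "! application/x-rtp,encoding-name=H264 "
                "! rtph264depay ! avdec_h264 ! videoconvert "
                "! v4l2sink device=%s" % (port, device))
    return [modprobe, pipeline]
```

If the laptop's security policy (e.g. Secure Boot requiring signed modules) is what blocks modprobe, the fix is on the module-signing side; no GStreamer pipeline change will get around it.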
Eddy and I tested and fixed some bugs we found in the code structure for integrating everything. Right now, we're able to have the computer vision detect a face, compute an angle to move to, pass that angle to the motor thread, and have the motors turn. I'm hoping to get the servos working tonight so that I can add them in too.
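The hand-off described above (vision thread computes an angle, motor thread applies it) can be sketched with a queue between two threads. The names here are placeholders rather than our actual module, and `apply_angle` stands in for the real servo call.

```python
import queue
import threading

def motor_worker(angles, apply_angle, stop):
    """Drain angle commands from the queue and apply them to the motors.

    `apply_angle` stands in for the real servo call; `stop` is an Event
    the main thread sets on shutdown.
    """
    while not stop.is_set() or not angles.empty():
        try:
            angle = angles.get(timeout=0.1)
        except queue.Empty:
            continue
        apply_angle(angle)  # e.g. pwm.ChangeDutyCycle(angle_to_duty(angle))
        angles.task_done()

# Usage sketch: the vision thread would call angles.put(new_angle)
# each time face detection produces a new target.
angles = queue.Queue()
applied = []                       # records angles the "motors" received
stop = threading.Event()
t = threading.Thread(target=motor_worker, args=(angles, applied.append, stop))
t.start()
for a in (30, 90, 150):            # stand-in for detection results
    angles.put(a)
angles.join()                      # wait until every angle is applied
stop.set()
t.join()
```

The queue decouples the two rates: vision can publish targets as fast as detections arrive, while the motor thread consumes them at whatever pace the servos allow.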
There are a few goals for this upcoming week. Tomorrow will be spent prepping what we want to say and show for the demo. The rest of the week I'll work on video transmission. I'm really hoping to get the servos 100% working tonight, so that the only things left to worry about after the demo are the video and cleaning up the prototype. I'll also spend a small amount of time adding Anna's microphone solution into the physical project.