This week I spent a lot of time prepping and practicing for the final presentation over Zoom with my groupmates. On Sunday we spent a while working on the slides and on displaying information similar to what we want to show for our VR presentation. From that we got much-needed feedback on our project, most importantly that we should run more tests on our device to get more accurate metrics. We conducted more tests at each location so that we have better data to present both in our final design write-up and at the VR presentation. We also started on our poster for the VR presentation; it is mostly formatted from our final presentation slides, with some fields and information updated per the presentation guidelines. Finally, we began filming our final video, but will do the last round of filming tomorrow (Sunday), since we ran into some issues with our project and needed more time to conduct more thorough tests.
Enock’s Status Report 05/01/2021
This week has been the final stretch of implementing the last software subsystem of our project. We moved all of the hardware from the ECE lab testbench onto a cart and strapped everything down so that the wires and components don’t move around and disturb the signals traveling through them. In addition, we decided to keep doing the data collection and overlaying on the laptop, as we have been doing. In the past, however, we were using two laptops: mine to collect the SDR data and send it over UDP, and Vrishab’s to receive the data and run the beamforming algorithm. That split was fine for testing but would be inconvenient for the actual final product, so Txanton and I worked on integrating the two software scripts into one configured to run solely on my laptop. Throughout the week we ran test runs to measure lobe width, and we verified that the box in the middle of the heatmap responds to different signal strengths at different distances.
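For context, the two-laptop link was really just raw I/Q samples streamed over UDP. Below is a minimal sketch of that kind of link in Python/NumPy; the port number, packet size, and loopback demo are placeholders for illustration, not our exact script.

```python
import socket
import threading
import time
import numpy as np

PORT = 5005               # placeholder port, not our actual configuration
SAMPLES_PER_PKT = 1024    # complex64 samples per UDP datagram (8 bytes each)

def send_iq(samples, host="127.0.0.1"):
    """Stream complex64 I/Q samples in fixed-size UDP packets."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    data = np.asarray(samples, dtype=np.complex64).tobytes()
    step = SAMPLES_PER_PKT * 8
    for i in range(0, len(data), step):
        sock.sendto(data[i:i + step], (host, PORT))
    sock.close()

def recv_iq(n_packets=4):
    """Collect a few packets and return them as one complex64 array."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    chunks = []
    for _ in range(n_packets):
        pkt, _ = sock.recvfrom(SAMPLES_PER_PKT * 8)
        chunks.append(np.frombuffer(pkt, dtype=np.complex64))
    sock.close()
    return np.concatenate(chunks)

if __name__ == "__main__":
    # Loopback demo: receive in a background thread, then send a dummy tone.
    rx = threading.Thread(target=lambda: print("received", recv_iq().shape))
    rx.start()
    time.sleep(0.2)       # give the receiver a moment to bind
    send_iq(np.exp(2j * np.pi * 0.01 * np.arange(4 * SAMPLES_PER_PKT)))
    rx.join()
```

Folding the receive side and the beamforming call into the same script is essentially what the integration amounted to.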
Enock’s Status Report 04/03/2021
This week was a hardware-heavy week in which we finally got all of our parts. The beginning of the weekend was spent setting up our GNU Radio environment on Linux, along with GNU Radio Companion, to use the SDRs. Afterwards I worked on configuring the SDRs to be synced: I placed two of them in close proximity, connected through a USB hub to my computer, to verify that we could see FM radio signals from both at the same time. At that point we had two antennas, one longer than the other, and the longer one gave noticeably stronger reception, as shown in the waterfall. This was the first milestone goal that we hit: set up the SDR/GNU Radio environment, sync the SDRs, and see the signals in the Companion app.
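For reference, a stripped-down version of that dual-SDR capture can also be expressed directly in GNU Radio’s Python API. This is only a sketch under assumptions: it assumes RTL-SDR-style dongles driven through gr-osmosdr, and the device strings, sample rate, and FM frequency are illustrative placeholders rather than our actual flowgraph (which we built in GNU Radio Companion).

```python
from gnuradio import gr, blocks
import osmosdr

SAMP_RATE = 2.048e6       # illustrative sample rate
FM_FREQ = 96.1e6          # placeholder FM station to tune both SDRs to
N_SAMPLES = int(1e6)      # short capture from each device

class DualSdrCapture(gr.top_block):
    """Tune two SDRs to the same FM station and dump short I/Q captures."""
    def __init__(self):
        gr.top_block.__init__(self, "Dual SDR capture")
        for idx in range(2):
            src = osmosdr.source(args=f"rtl={idx}")   # one dongle per index
            src.set_sample_rate(SAMP_RATE)
            src.set_center_freq(FM_FREQ)
            src.set_gain(30)
            head = blocks.head(gr.sizeof_gr_complex, N_SAMPLES)
            sink = blocks.file_sink(gr.sizeof_gr_complex, f"sdr{idx}.iq")
            self.connect(src, head, sink)

if __name__ == "__main__":
    DualSdrCapture().run()   # blocks until both captures are complete
```

In our actual setup we watched both signals in the Companion waterfall to confirm the devices were picking up the same station simultaneously.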
The next goals were all related to testing once the parts came in. Initially we received SMA cables that could not connect to our antennas, so we put in an order for SMA-to-RP-SMA adapters, which fixed the issue.
Upon receiving the VCO, I used an oscilloscope to verify that its output signal was a sine wave.
We initially decided to use an Arduino to run the setup process for the VCO registers. The Arduino’s data lines output 5 V, which would be shifted down by a logic level converter (LLC) and sent to the VCO, but we could not see the proper voltage outputs; the LLC could not keep up with the high SPI clock rate. We then tried an MSP430, which ended up not working either, so we settled on an STM32, which natively outputs 3.3 V and let us phase the LLC out of our design.
To verify that the STM32 was properly configured, we attached the antenna to the downmixer and connected the downmixer to the VCO and the SDR, then checked that we could see WiFi signals on the GNU Radio waterfall. This proved to be a success.
Next week’s goal will be to configure and optimize the beamforming algorithm based on how the data is transmitted, now that we are able to capture it.
Enock’s Status Report 03/27/2021
This week and last I spent time working with the SDRs that we received. We were glad to receive SDRs from Dr. Kumar’s lab, and Vrishab and I spent time setting up our digital environment to use the devices. We set up multiple computers across all three major OSs (Mac, Windows, Linux) with the GNU Radio software to find out which was easiest to work with. Once that was done, we were able to plug both devices into our computers and see their signals in sync. The signals we were looking at were broadcasts from FM radio stations; since they carry audio, we could verify that we were picking up the signals properly by playing them back in real time. We have also started porting the I/Q data so it can be analyzed in real time and fed to our beamforming algorithm; this is currently in the works and will most likely be finished the day after this post goes out, before our next meeting.
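As a small aside on what porting the I/Q data means in practice: GNU Radio’s complex file sinks write raw interleaved 32-bit float I/Q, which maps directly onto NumPy’s complex64, so pulling a capture into Python for analysis is nearly a one-liner. The filename and sample rate below are placeholders, not a real capture of ours.

```python
import numpy as np

SAMP_RATE = 2.048e6   # placeholder sample rate matching the capture settings

def load_iq(path):
    """Read a GNU Radio complex-float file sink capture as a NumPy array."""
    return np.fromfile(path, dtype=np.complex64)

samples = load_iq("sdr0.iq")   # placeholder filename
duration = len(samples) / SAMP_RATE
avg_power_db = 10 * np.log10(np.mean(np.abs(samples) ** 2) + 1e-12)
print(f"{len(samples)} samples ({duration:.2f} s), average power {avg_power_db:.1f} dB")
```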
On the hardware side, I have started configuring my environment and software suite (Vivado, HLS, etc.). I was able to pick up the Zynq board and have been working with it to make sure that everything works and interfaces properly.
Team-wise, we have also been refining our team schedule to be more detailed, and I have been working on a more detailed list that specifies exactly which deliverables I need and when.
Enock’s Status Report 03/13/2021
This week we spent a lot of time on the design aspect of the project in order to put out the presentation and report.
I focused mainly on the FPGA-and-onward part of the block diagram, which ended up not being complete when we uploaded the diagram to our presentation; that was our fault. However, I worked on understanding how the signals coming into the FPGA should be formatted so I could start writing block diagrams for the internal SystemVerilog modules. After figuring out the signal format, I roughly determined that the 32 input signals must be time-delayed internally to produce properly aligned inputs for the DSP beamforming module. Once the signals have been time-delayed, the DSP algorithm shown in our block diagram runs the computations to output a single signal. That signal is then run through another module that takes its amplitude to get a “pixel” intensity value for the heatmap; the pixel in this case is actually a grid box, as shown in the artist’s rendition. This value is then sent to our code, which renders the heatmap overlaid on the input video feed and outputs it via mini DisplayPort to a monitor.
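To make the pipeline more concrete, here is the delay-and-sum idea sketched in Python/NumPy purely for illustration; the real implementation will live in SystemVerilog/HLS on the FPGA, and the array geometry, carrier frequency, sample rate, and angle sweep below are my own assumptions rather than our final parameters.

```python
import numpy as np

C = 3e8                  # speed of light (m/s)
FC = 2.4e9               # WiFi carrier frequency (Hz)
SAMP_RATE = 20e6         # illustrative baseband sample rate (Hz)
N_ELEM = 32              # 32 antenna inputs, matching the block diagram
SPACING = C / FC / 2     # assumed half-wavelength element spacing (m)
ELEM_POS = np.arange(N_ELEM) * SPACING

def delay_and_sum(channels, steer_deg):
    """Phase-align the 32 channels toward steer_deg and sum to one signal.

    channels: (N_ELEM, n_samples) complex baseband samples. For a
    narrowband signal the per-element time delay is applied as a
    phase rotation at the carrier frequency.
    """
    delays = ELEM_POS * np.sin(np.deg2rad(steer_deg)) / C
    aligned = channels * np.exp(2j * np.pi * FC * delays)[:, None]
    return aligned.sum(axis=0)

def pixel_intensity(beam_signal):
    """Amplitude of the beamformed signal -> one heatmap grid-box value."""
    return float(np.mean(np.abs(beam_signal)))

if __name__ == "__main__":
    # Simulate a tone arriving from 20 degrees, then sweep steering angles.
    t = np.arange(4096) / SAMP_RATE
    true_deg = 20.0
    arrival = np.exp(-2j * np.pi * FC * ELEM_POS * np.sin(np.deg2rad(true_deg)) / C)
    channels = arrival[:, None] * np.exp(2j * np.pi * 100e3 * t)[None, :]
    sweep = {deg: pixel_intensity(delay_and_sum(channels, deg))
             for deg in range(-60, 61, 10)}
    print("strongest response at", max(sweep, key=sweep.get), "degrees")
```

Each steering direction’s amplitude is the kind of per-grid-box intensity that the overlay code would paint onto the video feed.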
This week and in the upcoming one, I will be working on a detailed block diagram and specification for the FPGA SystemVerilog modules and the heatmap creation for the design report. I will continue refining the design and start writing code/pseudo-code, which keeps us on schedule for implementation and then, hopefully, testing.
Enock’s Status Report 03/06/2021
This week has been very conceptual in terms of the next steps of our project. Since I am primarily focused on the hardware and the later stages of our pipeline, there isn’t much I can physically do before we have a test implementation of the antenna PCB against which to start designing the SystemVerilog modules.
Since last week I have narrowed our FPGA choice down to the Zynq Ultra96 v2 board, since it has an MPSoC, which allows for more parallelism and better efficiency than the SoC FPGAs in Capstone stock. This board has enough GPIOs, whereas the current stock SoC FPGAs do not have enough GPIO pins for our proposed design. Although the traditional FPGAs may have enough GPIO, they lack the flexibility of High-Level Synthesis (HLS), which lets us write in C that synthesizes to SystemVerilog (SV); this improves workflow efficiency, since in some cases it is much easier to implement algorithms in C than in SV. I also have more experience with this board than with any other board listed, which will allow for faster implementation turnaround since we will spend much less time setting up and learning the board.
Furthermore, since last week I have thought about a more specific testing metric, and we decided on a % accuracy of relative areas rather than a count of devices. This is because if two devices are stacked on top of each other, we will not readily see two devices on the heatmap, but rather that the location has a strong WiFi signal; so we would like to accurately locate device positions rather than count devices.
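To pin down what that metric could look like, here is one possible way to score it; the 2D grid, the intensity threshold, and the intersection-over-union style of scoring are my own illustrative assumptions, not a finalized definition.

```python
import numpy as np

def area_accuracy(heatmap, truth_boxes, threshold=0.5):
    """Score how well the 'hot' grid boxes match the known device areas.

    heatmap:     2D array of per-grid-box intensities, normalized to [0, 1].
    truth_boxes: 2D boolean array marking grid boxes that contain a device.
    Returns the overlap between the thresholded hot region and the
    ground-truth region (intersection over union).
    """
    hot = heatmap >= threshold
    intersection = np.logical_and(hot, truth_boxes).sum()
    union = np.logical_or(hot, truth_boxes).sum()
    return 1.0 if union == 0 else intersection / union

# Toy example: a 4x4 room grid with one device in the top-left corner.
heatmap = np.zeros((4, 4))
heatmap[0, 0] = 0.9        # strong response where the device sits
heatmap[3, 3] = 0.6        # spurious hot spot elsewhere
truth = np.zeros((4, 4), dtype=bool)
truth[0, 0] = True
print(f"area accuracy: {area_accuracy(heatmap, truth):.0%}")   # -> 50%
```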
In the upcoming week I will start looking into the HDL side of things and searching for DSP modules and algorithms that we may be able to use, depending on the kind and format of input data we will get.
Enock’s Status Report 02/27/2021
This week I spent a long time narrowing the scope of our project and critiquing the different parts of our approach. We had a very broad sense of the purpose of our project, with no definitive or interesting use cases, which made for a weak argument for the project. From this I came up with very specific use cases in the realm of security and law enforcement, since this was one of the important parts of pitching our idea.
Together, we decided that we would localize WiFi devices in a standard bedroom-sized area. I determined that a 1 Hz response would be a reasonable target metric for real-time detection, so I looked into FPGAs that could support the signal processing. Some FPGAs that could be used are the Zynq Ultra96 v2 and the Terasic DE10-Nano, since they are relatively cheap and easy to work with, and I have experience using both devices. Their form factor is small and light, which fits our design in terms of size and mobility.
Lastly, I spent time narrowing our testing methods into specific metrics that address some of the technical challenges and goals mentioned during the presentation. One thing we still need to work out, however, is a specific number of devices to locate, since our proposal was very general and only pinned down lobe width.
In addition to determining how many devices to locate, I will start looking into signal-processing modules that we can use in conjunction with our algorithms. Finding existing IP will make the programming much easier and let us interface with the embedded parts more smoothly, so that we don’t have to write all of our own algorithms/interfaces.
Enock’s Status Report 02/20/2021
This week I focused on deciding what kind of hardware we would use for the signal-processing and programming-intensive computations. Within our budget and availability, I found that the Zynq Ultra96 v2 board would be best in terms of price, size, and ease of use. This board is used in 18-643, so we are hoping to borrow one if possible. If we use Vivado, it will be very easy to use existing IP blocks, letting us use modules that interface with certain components or perform certain DSP computations without having to write our own in SystemVerilog. Furthermore, the board is an MPSoC, which lets us write certain parts in C++ and establish separate workflows for those working on the programming side and those working on the hardware side (i.e., Verilog). Aside from this, I decided it might be a good idea to focus on LTE signals rather than WiFi, since it is more common for devices to be transmitting/receiving LTE than WiFi, so from a security standpoint this frequency might catch more users. This, however, may limit our testing capabilities, since we will have fewer LTE devices to test with than WiFi-enabled devices. I hope to find out whether we can use this board in our project, as well as whether we will have enough test data to use LTE as our base signal.