Varun’s Status Report for 4/6

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week was mostly spent trying to debug my JPEG decoder implementation. As of right now, the JPEG decoder flashed onto the FPGA doesn’t actually produce any results. I’m not entirely sure whether this is because there’s an issue in the way the Arduino is sending the FPGA the JPEG bit stream or whether there’s an inherent bug in the JPEG decoder itself.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I’m a bit behind on the JPEG decoder itself, but luckily it’s the only thing that’s behind. I plan on continuing to test, most likely starting by preloading the image onto the FPGA and seeing if it decodes properly.

What deliverables do you hope to complete in the next week?

I plan on hopefully getting everything sorted out with the JPEG decoder next week.

Verification Test

Luckily, because almost everything is done in hardware, most of the verification has already taken place. The remaining checks concern things like how stable the HDMI output to the display is and the rate at which the JPEG decoder operates.

The HDMI frame rate can be monitored on an external display. The display can show the input frame rate, and this will be monitored for an hour to ensure that it stays at a stable 60fps. Success here means not missing any frames during this time.

The JPEG decoding rate is solely determined by how the design is pipelined and its clock speed. Currently, the 25MHz clock that the JPEG decoder runs at is more than sufficient to meet the effective 60fps (6 streams @ 10fps) required for the project.
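As a sanity check on that claim, the budget can be modeled in a few lines of C. The 320×240 frame size and one decoded pixel per cycle are assumptions for illustration only (the final camera resolution was still being decided); the 25MHz clock and 6 streams @ 10fps come from the report.

```c
#include <stdint.h>

typedef struct {
    double required_pps;   /* pixels per second the streams demand */
    double available_pps;  /* pixels per second the decoder can emit */
    double headroom;       /* available / required */
} budget_t;

/* Back-of-the-envelope throughput check for the decoder clock. */
budget_t decode_budget(int width, int height, int streams, int fps_per_stream,
                       double clk_hz, double pixels_per_cycle) {
    budget_t b;
    b.required_pps  = (double)width * height * streams * fps_per_stream;
    b.available_pps = clk_hz * pixels_per_cycle;
    b.headroom      = b.available_pps / b.required_pps;
    return b;
}
```

Under these assumptions the required rate is about 4.6 Mpixels/s against 25 Mpixels/s available, i.e. roughly 5× headroom, which is consistent with "more than sufficient."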

 

Team Status Report For 3/30

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

We have mainly addressed the main risk from last week. We solved it by setting up the WiFi on the ESP so that more bandwidth was freed up for the image encoding on the remote node side.

Because this solution has been tested and is reliable, we currently do not have any further risks. We are happy with it because it did not require us to reduce the image quality being sent from the camera.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes

Varun’s Status Report for 3/30

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week was mostly spent integrating the JPEG decoder into the pipeline and debugging the issues that arose. Another big piece of the puzzle that I had missed was converting the decoded YCbCr output of the JPEG pipeline back into RGB.
This was fairly tricky to implement, as it would normally require a lot of floating-point computation. I also had to pipeline the design so that it would actually fit on the FPGA. There are about 28 18×18 multipliers available on the FPGA, and each pixel requires about 4 of them, so I had to find a way to sufficiently parallelize/serialize the decoding to use the multipliers most effectively.
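To avoid floating point, the conversion can be done entirely in fixed point. Below is a C sketch of the standard JPEG (full-range BT.601) YCbCr→RGB math using 16.16 fixed-point constants; this is a software model of the idea, not the actual SystemVerilog, and the exact scaling used on the FPGA may differ. Note the four multiplies per pixel, matching the 4-multiplier budget mentioned above.

```c
#include <stdint.h>

/* Clamp an intermediate value to the 8-bit pixel range. */
static uint8_t clamp8(int32_t v) {
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return (uint8_t)v;
}

/* Full-range BT.601 YCbCr -> RGB in 16.16 fixed point.
 * Constants are the usual coefficients scaled by 65536:
 *   1.402 -> 91881, 0.344136 -> 22554, 0.714136 -> 46802, 1.772 -> 116130 */
void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                  uint8_t *r, uint8_t *g, uint8_t *b) {
    int32_t c_b = (int32_t)cb - 128;
    int32_t c_r = (int32_t)cr - 128;
    *r = clamp8(y + ((91881  * c_r) >> 16));               /* Y + 1.402*Cr */
    *g = clamp8(y - ((22554  * c_b + 46802 * c_r) >> 16)); /* Y - 0.344*Cb - 0.714*Cr */
    *b = clamp8(y + ((116130 * c_b) >> 16));               /* Y + 1.772*Cb */
}
```

A useful sanity check is that neutral chroma (Cb = Cr = 128) must map any Y straight through to a gray pixel.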

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

As of right now, I am on schedule, so I’m not worried about my progress, though it will be a little tight to get everything in for the interim demo.

What deliverables do you hope to complete in the next week?

I plan on hopefully getting everything sorted out for the interim demo. 

Varun’s Status Report for 3/23

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I mostly worked on improving the speed of the JPEG decoder so that it better meets timing. I’ve included a copy of the SystemVerilog file for this. Previously, the design ran at about 100MHz, but with a better pipeline (an 8-stage pipeline processing 8 pixels of the MCU), I’m able to increase the throughput by a factor of 16: the FPGA’s resources are better utilized to process more pixels per clock, and the clock speed increases to around 200MHz. This should make it possible to handle the effective 120FPS requirement from the input streams.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

As of right now, I am on schedule so I’m not worried about my progress.

What deliverables do you hope to complete in the next week?

I plan on integrating this design further into the current pipeline. Right now the JPEG processor stands alone, but I need to connect the SPI interface to it as well as appropriately pass its output to BRAM so that the display can show the image.

Varun’s Status Report for 3/16

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week was mostly focused on writing the JPEG decoder for the FPGA. I had to rewrite my SPI interface so that it matches the JPEG stream that will be received by the FPGA. Then I wrote the code for the IDCT conversion that needs to take place. Code and testbenches for this are attached as screenshots.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

As of right now, I am on schedule so I’m not worried about my progress.

What deliverables do you hope to complete in the next week?

The main deliverable for next week is to improve the timing of the JPEG decoder. As it stands it runs at about 100MHz on the FPGA, and ideally it would be closer to 200MHz. I will work on pipelining the IDCT transform better.

https://drive.google.com/file/d/1DKnRUHdgg2I1rG0Blsez0BALWXh-Vlra/view?usp=sharing

Varun’s Status Report for 3/9

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I got through quite a lot of the work required for the FPGA. My main accomplishment this week was writing some code to interface with the HDMI driver, along with some modifications to the driver itself. One of the issues I faced with the previous implementation was that the display was a little finicky. I was able to fix this by implementing something called Reduced Blanking in the HDMI driver. This let me decrease the clock speed of the HDMI IP by about 15%, which made it much easier to meet timing. The previous design was actually failing timing.
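The reason Reduced Blanking lowers the clock is that the pixel clock is set by the *total* frame size (active plus blanking) times the refresh rate, and Reduced Blanking shrinks the blanking region. A one-line C model makes the effect visible; the 720p timing totals below are illustrative values, not necessarily the exact timings used on the board.

```c
/* Pixel clock = total pixels per line * total lines * refresh rate.
 * Reduced Blanking shrinks h_total/v_total for the same active area,
 * so the same resolution needs a slower clock. */
double pixel_clock_hz(int h_total, int v_total, double refresh_hz) {
    return (double)h_total * v_total * refresh_hz;
}
```

For example, standard 720p60 timing (1650×750 total) needs a 74.25MHz pixel clock, while a reduced-blanking total in the neighborhood of 1440×741 needs only about 64MHz, a reduction on the order of the ~15% quoted above.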

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

As of right now, I am on schedule so I’m not worried about my progress.

What deliverables do you hope to complete in the next week?

Next week I plan on working on the run-length decoder and redoing the SPI peripheral to work with the ESP32 properly. The JPEG decoding is on the back burner while the finalized implementation is being coded.

https://photos.app.goo.gl/LBfp1qN6J4SgLGJp9

Varun’s Status Report for 2/24

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I got through quite a lot of the work required for the FPGA. My main accomplishment this week was writing some code to interface with the HDMI driver. I was able to rewrite the driver to work with our refresh rate and our unique setup of compositing multiple images into one frame. I have attached a video showing an Arduino driving some precomputed pixels to the display over its SPI connection.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

As of right now, I am on schedule so I’m not worried about my progress.

What deliverables do you hope to complete in the next week?

Next week I hope to start work on writing code for the actual JPEG decoder. This will require translating the C implementation that is currently written into hardware.

https://photos.app.goo.gl/LBfp1qN6J4SgLGJp9

Varun’s Status Report for 2/17

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I got through quite a lot of the work required for the FPGA. My main accomplishment this week was writing some code to interface with the DRAM chip on my FPGA. This will be extremely important, as it will serve as the backbone for how the frame buffer for the display is stored. I also found some sample code online on how to set up HDMI on my FPGA and was able to display a static image. I also spent a lot of time working on the design presentation as well as helping my teammate practice for his presentation next week.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

As of right now, I am on schedule so I’m not worried about my progress.

What deliverables do you hope to complete in the next week?

I hope to rewrite the HDMI interface to work with our specific resolution needs as well as integrate it with the DRAM.

 

Team Status Report 2/10/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready? 

 

As described in the project proposal, the main technical risks we have identified in the project are as follows:

 

1. We need to be able to compute frame compression fast enough:

One of our primary requirements for the project was to ensure that the entire system is inexpensive and requires low power to run. In light of this, we decided to use ESP32 microcontrollers for compressing and transmitting the image frames from the camera. ESP32s present a compute limitation to the system and could cause the frame compression algorithm to not run as fast as expected. If need be, we plan on switching to a different compression algorithm, such as delta compression.

2. Stream enough data over the wireless connection:

Our prototype MVP uses 6 cameras and a single receiver node. The receiver node’s ESP32 will be receiving data frames from all 6 camera nodes simultaneously, so a high amount of data will be transmitted over the wireless connection. We need to ensure that we drop no more than 10% of the data frames; staying under that threshold means that only 100ms of video is lost when a frame is dropped, and no animal will be able to cross the surveillance area in less than 100ms. If we exceed the 10% threshold, we plan on increasing the number of access points on the receiver node. The frame-drop percentage will be computed by comparing the number of frames transmitted against the number of frames received and ensuring that the loss percentage does not go above 10%.
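The frame-drop check described above reduces to a simple sent-vs-received comparison; a C sketch of how the verification script might compute it (function names are our own, not part of any existing tooling):

```c
/* Percentage of transmitted frames that never arrived. */
double drop_percent(long sent, long received) {
    if (sent <= 0) return 0.0;
    return 100.0 * (double)(sent - received) / (double)sent;
}

/* 1 if the loss stays within the project's 10% requirement, else 0. */
int within_threshold(long sent, long received) {
    return drop_percent(sent, received) <= 10.0;
}
```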

3. Decompress all the incoming frames fast enough:

As mentioned above, the receiver node will be collecting data from 6 camera nodes, decompressing it, and then driving the display. The decompression algorithm has to be fast enough to support concurrent streaming from all 6 camera nodes without high latency or computation errors. We will be using an FPGA on the receiver node to perform the decompression, and if an issue arises with the FPGA, we plan on opting for a larger FPGA with higher compute capability and parallel-processing techniques.

4. Optimize performance to minimize power consumption:

One of the major requirements for any portable security system is that it doesn’t need to be charged often. Keeping this in mind, we envision our system running for at least 24 hours on a single charge / battery. The entire system’s performance, including the camera’s feed capture, compression, transmission, decompression, and streaming to the portable monitor, has to be optimized so that the setup works for at least 24 hours on a single charge / battery. Our contingency plan would be to increase the battery size if the system ends up drawing too much power, even after final optimizations.

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward? 

 

No major changes have been made yet; however, we still need to decide on the final specs for the camera (240p or not) and other technical specs for the microcontroller and monitor, based on our use case.

 

Provide an updated schedule if changes have occurred. 

No schedule changes as such.

Varun’s Status Report for 2/10/2024

I mainly worked on getting the toolchain for the FPGA set up this week, as well as setting up build scripts so that it’s easier to flash a program onto the FPGA. I’ve attached some of the Makefiles that I’ve set up for the various tools required.

To validate my setup, I wrote a SPI interface for my FPGA and wrote some Arduino code to transmit data from the Arduino to the FPGA.
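For that validation, the FPGA side boils down to a mode-0 shift register: sample MOSI on each rising SCK edge, MSB first. A tiny C model of that behavior (our own sketch, not the actual SystemVerilog) is handy for predicting what the FPGA should latch for any byte the Arduino sends:

```c
#include <stdint.h>

/* Software model of the FPGA's SPI mode-0 receive path:
 * one MOSI sample per rising SCK edge, shifted in MSB first. */
uint8_t spi_shift_byte(const int mosi_bits[8]) {
    uint8_t rx = 0;
    for (int i = 0; i < 8; i++) {
        rx = (uint8_t)((rx << 1) | (mosi_bits[i] & 1)); /* shift in one sampled bit */
    }
    return rx;
}
```

For example, sampling the bit sequence 1,0,1,0,0,1,0,1 should latch 0xA5 on the FPGA side.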

In general, I would say I’m pretty far ahead of where I want to be with the FPGA. Hopefully next week I can do more research on a memory setup scheme for my FPGA as well as implement it.