Team Status Report For 3/30

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

We have largely addressed the main risk from last week. We were able to solve it by reworking the Wi-Fi setup on the ESP32, which freed up more bandwidth for the image encoding on the remote node side.

Since this fix has been tested and is reliable, we currently do not have any further risks. We are happy with this solution because it did not require us to reduce the quality of the images being sent from the camera.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes

Michael’s Status Report 3/30

What did you personally accomplish this week on the project? 

 

For this week, I got the performance of the JPEG pipeline to a level sufficient for 10 fps streaming and verified it using Wireshark. Since integration has not yet happened, I used my phone's hotspot as an access point and had the ESP32 join it; I also connected my laptop, running Wireshark, to the same access point. The ESP32 was then commanded to send data to my laptop via the phone's access point, and the arrival time of each packet was noted down for measurement purposes. Since the maximum transmission unit is 1500 bytes, each frame has to be fragmented into multiple packets; to measure the frame interval correctly, the trailing packet fragments must be discarded so that only one packet per frame is counted. The picture below shows the measurement result: we are able to hit a frame interval of about 100 ms, which corresponds to 10 fps.
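As a rough illustration of the setup above, here is a minimal sketch of how the ESP32 side might push one compressed frame per UDP datagram through lwIP's BSD-socket API. The destination address, port, and buffer names are placeholders, and the real firmware may packetize frames differently; sending each frame as a single datagram larger than the 1500-byte MTU is what produces the IP fragments that have to be filtered out during measurement.

```c
#include <stdint.h>
#include <string.h>
#include "lwip/sockets.h"

static int frame_sock = -1;
static struct sockaddr_in dest;

/* Placeholder setup: open a UDP socket aimed at the laptop's IP and port. */
void stream_init(const char *laptop_ip, uint16_t port)
{
    frame_sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_IP);
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(port);
    dest.sin_addr.s_addr = inet_addr(laptop_ip);
}

/* One datagram per JPEG frame: anything bigger than the 1500-byte MTU is
 * fragmented by the IP layer, which is why only the first fragment of each
 * frame is counted when measuring the ~100 ms frame interval. */
void stream_send_frame(const uint8_t *jpeg_buf, size_t jpeg_len)
{
    sendto(frame_sock, jpeg_buf, jpeg_len, 0,
           (struct sockaddr *)&dest, sizeof(dest));
}
```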

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Currently on schedule

 

What deliverables do you hope to complete in the next week?

For next week, I will begin integration with Neelansh on the receiving node. The remote node is now functionally complete; the only thing still missing is the detection of movement using the luminance channel data.
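The report does not pin down how the luminance-based motion detection will work, so the following is only a hypothetical sketch of one common approach: compute the mean absolute difference between the Y planes of consecutive frames and flag motion when it crosses a threshold. The function name and threshold are placeholders.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch (not the implemented firmware): flag motion when the
 * mean absolute difference between the Y planes of two consecutive frames
 * exceeds a caller-chosen threshold. */
bool motion_detected(const uint8_t *y_prev, const uint8_t *y_curr,
                     size_t num_pixels, uint32_t mad_threshold)
{
    uint64_t sad = 0;
    for (size_t i = 0; i < num_pixels; i++) {
        sad += (uint64_t)abs((int)y_curr[i] - (int)y_prev[i]);
    }
    return (sad / num_pixels) > mad_threshold;
}
```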

Varun’s Status Report for 3/30

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week was mostly spent integrating the JPEG decoder into the pipeline and debugging issues that arose. Another big piece of the puzzle that I had missed was converting the decoded YCbCr output of the JPEG pipeline back into RGB.
This was fairly tricky to implement, as it required a lot of floating-point computation. I had to pipeline the design so that I could actually fit it onto the FPGA. There are about 28 18×18 multipliers available on the FPGA, and each pixel requires about 4 of them, so I had to find a way to appropriately parallelize/serialize the decoding to make the best use of the multipliers.
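For reference, the conversion itself is the standard JPEG (full-range BT.601) YCbCr-to-RGB transform. Below is a small fixed-point C sketch of it, with coefficients scaled by 2^16; the four multiplies per pixel line up with the roughly four 18×18 multipliers per pixel mentioned above. The exact bit widths and rounding used in the actual FPGA implementation may differ.

```c
#include <stdint.h>

/* Clamp an intermediate result to the 0..255 range of an 8-bit channel. */
static inline uint8_t clamp8(int32_t v)
{
    return (v < 0) ? 0 : (v > 255) ? 255 : (uint8_t)v;
}

/* Full-range BT.601 YCbCr -> RGB with coefficients scaled by 2^16. */
void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                  uint8_t *r, uint8_t *g, uint8_t *b)
{
    int32_t c = (int32_t)y;
    int32_t d = (int32_t)cb - 128;
    int32_t e = (int32_t)cr - 128;

    *r = clamp8(c + ((91881  * e) >> 16));              /* + 1.402  * Cr' */
    *g = clamp8(c - ((22554 * d + 46802 * e) >> 16));    /* - 0.344*Cb' - 0.714*Cr' */
    *b = clamp8(c + ((116130 * d) >> 16));               /* + 1.772  * Cb' */
}
```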

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

As of right now, I am on schedule, so I'm not worried about my progress, though it will be a little tight to get everything in for the interim demo.

What deliverables do you hope to complete in the next week?

I plan on getting everything sorted out for the interim demo.

Neelansh’s Status Report for 30th March, 2024

What did you personally accomplish this week on the project? Give files or
photos that demonstrate your progress. Prove to the reader that you put sufficient
effort into the project over the course of the week (12+ hours).

I worked on the SPI interface and on setting everything up: configuring the data access points on the receiver ESP32 node, bringing up the SPI interface, transmitting data through it, and verifying that the data received is legitimate rather than garbled bytes. I then worked on ensuring the entire system works together as one module, which involved wiring everything up together, using oscilloscopes to measure the data being transmitted over the SPI interface, and sending and receiving data between multiple ESP32s.
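To illustrate the kind of verification described above, here is a hypothetical ESP-IDF-style helper that clocks a fixed, easily recognizable test pattern out of an already-configured SPI device so the bytes can be checked on the oscilloscope. The pattern, length, and device handle are placeholders, not the real image payload or the exact code used.

```c
#include <stdint.h>
#include "driver/spi_master.h"

/* Send a known byte pattern over SPI so the waveform on the scope can be
 * compared against the expected values. */
esp_err_t spi_send_test_pattern(spi_device_handle_t dev)
{
    static const uint8_t pattern[8] = {0xA5, 0x5A, 0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0xFF};

    spi_transaction_t t = {
        .length    = sizeof(pattern) * 8,   /* length is given in bits */
        .tx_buffer = pattern,
    };
    return spi_device_transmit(dev, &t);    /* blocks until the transfer completes */
}
```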

Is your progress on schedule or behind? If you are behind, what actions will be
taken to catch up to the project schedule?

It is on schedule.

What deliverables do you hope to complete in the next week?

Next week I plan on doing the interim demo, working with my teammates on getting their components ready for a final test, and then starting to join together the individual components each of us has built.

Varun’s Status Report for 3/23

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week was mostly spent improving the speed of the JPEG decoder so that it better meets timing. I've included a copy of the SystemVerilog file for this. Previously, the design ran at about 100 MHz, but with a better pipeline (an 8-stage pipeline that processes 8 pixels of the MCU per clock), I'm able to increase the throughput by a factor of 16: 8 pixels per clock instead of one, at roughly twice the clock rate. I'm able to better utilize the resources of the FPGA to process more pixels per clock and also push the clock speed up to around 200 MHz. This should make it more feasible to handle the effective 120 FPS requirement from the input streams.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

As of right now, I am on schedule so I’m not worried about my progress.

What deliverables do you hope to complete in the next week?

I plan on integrating this design further into the current pipeline. Right now the JPEG processor stands alone, but I need to connect the SPI interface to it and appropriately pass its output to BRAM so that the display can show the image.

Team Status Report For 3/23

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk right now is the encoder side. As mentioned in Michael's status report, there are severe constraints on what we can do on the encoder side due to the need to use the PSRAM module on the ESP32. The PSRAM module's 40 MB/s is a hard limit that is difficult for us to work around. Since the encoder is one of the first stages of the pipeline, any changes on the encoder side will trickle down and cause issues for the central receiving node and the FPGA decoder. The current contingency plan, in case this PSRAM issue does materialize, is to set the encoder to a lower quality, which will reduce the amount of data that needs to be processed by the LWIP and Wi-Fi stacks. The reduced workload will in turn alleviate the pressure on the PSRAM module.

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes

Michael’s Status Report for 3/23

What did you personally accomplish this week on the project? 

 

For this week, I finally got the compression code ported over to the ESP32. This means that we now have all the steps needed on the remote camera node coded up and working. The remote camera node is now able to take a picture, compress it, and then send it over Wi-Fi to a receiver. Porting this over was significantly harder than I expected, since we kept running into issues with the camera driver co-existing with the Wi-Fi code. While integrating these two, the Wi-Fi sending code would stop transmitting at random intervals, which would then trigger a fragment reassembly time exceeded error message on the receiver. After a lot of debugging and configuration changes, I was able to solve the issue by making some changes on the encoder side and pinning the Wi-Fi task to core 1, which leaves core 0 free to handle the camera. However, performance is still on the lower side, since we are now limited by the bandwidth to the PSRAM. The PSRAM lives on a QSPI bus that runs at 80 MHz, so with four data lines we are limited to a maximum of 40 MB/s of memory bandwidth and unknown latency. The internal data RAM is only 320 KB in size, so it is not an option for storing a complete frame buffer. Keep in mind that this PSRAM is shared between LWIP, the camera, Wi-Fi, and compression.
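For reference, the core-pinning change described above would look something like the following FreeRTOS call; the task body, stack size, and priority shown here are placeholders rather than the actual firmware code.

```c
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

/* Placeholder task body: in the real firmware this would pull encoded frames
 * from a queue and hand them to the Wi-Fi/LWIP stack. */
static void wifi_send_task(void *arg)
{
    for (;;) {
        /* ... dequeue a compressed frame and send it ... */
        vTaskDelay(1);
    }
}

void start_wifi_sender(void)
{
    /* Pin the sender to core 1 so core 0 stays free for the camera driver.
     * Stack size (4096 words) and priority (5) are placeholders. */
    xTaskCreatePinnedToCore(wifi_send_task, "wifi_send",
                            4096, NULL, 5, NULL, 1);
}
```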

 

Image captured, encoded, and then transmitted from ESP32

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

 

Currently on schedule

What deliverables do you hope to complete in the next week?

 

For next week, I hope to begin integration with the FPGA. This will mostly entail me providing a set of test data to Varun, which he will then run through the FPGA to make sure that we agree on the data format and algorithm stages.

Neelansh’s Status Report for 23rd March, 2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours). 

This week was a long one. I worked on building the QuadSPI interface for transmitting data from the receiver ESP32 to the Lattice ECP5 FPGA. This involved studying the FPGA side of the design to understand how to set up the quad data-transfer points and send bits. I also had to write code to set up the SPI peripheral on the ESP32 after reading and understanding the entire manual. I had to experiment with the clock speed and configure the Quad SPI mode based on the clock polarity and clock phase that would match the FPGA (the slave device). All of these tasks involved reading, writing code, and extensive debugging. I also attended the ethics seminar and actively contributed to the discussion.
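As a rough sketch of the master-side configuration described above, the ESP-IDF spi_master driver exposes the relevant knobs (clock speed, CPOL/CPHA via the mode field, and the two extra quad data lines). The GPIO numbers, SPI host, and clock rate below are placeholders, and the quad-transaction details may differ from what the final firmware uses.

```c
#include "esp_err.h"
#include "driver/spi_master.h"

/* Bring up the SPI master with the two extra quad data pins mapped.
 * All pin numbers, the host, and the clock rate are placeholders. */
void quadspi_master_init(spi_device_handle_t *out_dev)
{
    spi_bus_config_t buscfg = {
        .mosi_io_num     = 11,   /* D0 */
        .miso_io_num     = 13,   /* D1 */
        .quadwp_io_num   = 14,   /* D2 */
        .quadhd_io_num   = 9,    /* D3 */
        .sclk_io_num     = 12,
        .max_transfer_sz = 4096,
    };
    spi_device_interface_config_t devcfg = {
        .clock_speed_hz = 20 * 1000 * 1000,      /* placeholder clock */
        .mode           = 0,                     /* CPOL/CPHA must match the ECP5 slave */
        .spics_io_num   = 10,
        .queue_size     = 4,
        .flags          = SPI_DEVICE_HALFDUPLEX, /* required for dual/quad transfers */
    };
    ESP_ERROR_CHECK(spi_bus_initialize(SPI2_HOST, &buscfg, SPI_DMA_CH_AUTO));
    ESP_ERROR_CHECK(spi_bus_add_device(SPI2_HOST, &devcfg, out_dev));
    /* Individual transactions would then request the quad data lines via the
     * SPI_TRANS_MODE_QIO transaction flag. */
}
```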

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule? 

The project is on schedule.

What deliverables do you hope to complete in the next week?

Next week, I plan on integrating the QuadSPI interface with the ESP32, then starting to receive data on the ESP32, serialize it, and transmit it to the connected FPGA to simulate real-world conditions.



Team Status Report 3/16/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Wi-Fi was previously our greatest unknown. None of us had ever programmed a microcontroller with Wi-Fi capabilities, and we didn't really understand what the ESP's Wi-Fi capabilities were. We have now shown that the Wi-Fi capabilities are adequate for our needs and won't be a risk factor going forward. The greatest remaining risk is the JPEG runtime. All of the JPEG code written so far runs on a laptop, not on a microcontroller. Even though the runtime on the laptop is an order of magnitude faster than the required 100 ms, even accounting for all the setup code, that still doesn't give us concrete data on whether the ESP can run JPEG at the speed we need. Michael is currently making the final changes to port the code over, so we will soon know whether this risk materializes.

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes

Provide an updated schedule if changes have occurred

No changes

This is also the place to put some photos of your progress or to brag about a component you got working.

I’m super excited to have gotten some baseline work for the JPEG decoder running on the FPGA. By next week I should be able to display an uncompressed frame on the monitor!

Varun’s Status Report for 3/16

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week was mostly focused on writing the JPEG decoder for the FPGA. I had to rewrite my SPI interface so that it matches the JPEG stream that the FPGA will receive. I was then able to write the code for the IDCT conversion that needs to take place. Code and testbenches for this are attached as screenshots.
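For context, the transform in question is the standard 8×8 two-dimensional IDCT from the JPEG spec. Below is a naive floating-point C reference model of it, the kind of golden model a testbench could compare the RTL output against; it is not the FPGA implementation itself.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Naive reference model of the 8x8 2-D IDCT used in JPEG decoding:
 * out[x][y] = 1/4 * sum over u,v of C(u)C(v) * in[u][v]
 *             * cos((2x+1)u*pi/16) * cos((2y+1)v*pi/16),
 * where C(0) = 1/sqrt(2) and C(k) = 1 otherwise. */
void idct_8x8_ref(const double in[8][8], double out[8][8])
{
    for (int x = 0; x < 8; x++) {
        for (int y = 0; y < 8; y++) {
            double sum = 0.0;
            for (int u = 0; u < 8; u++) {
                for (int v = 0; v < 8; v++) {
                    double cu = (u == 0) ? (1.0 / sqrt(2.0)) : 1.0;
                    double cv = (v == 0) ? (1.0 / sqrt(2.0)) : 1.0;
                    sum += cu * cv * in[u][v]
                         * cos((2 * x + 1) * u * M_PI / 16.0)
                         * cos((2 * y + 1) * v * M_PI / 16.0);
                }
            }
            out[x][y] = 0.25 * sum;
        }
    }
}
```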

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

As of right now, I am on schedule so I’m not worried about my progress.

What deliverables do you hope to complete in the next week?

The main deliverable for next week is to improve the timing of the JPEG decoder. As it stands, it runs at about 100 MHz on the FPGA, and ideally it would be closer to 200 MHz. I will work on pipelining the IDCT transform better.

https://drive.google.com/file/d/1DKnRUHdgg2I1rG0Blsez0BALWXh-Vlra/view?usp=sharing