Neelansh’s Status Report for 4/28

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours). 

This week, I worked on further testing to make sure the system works, and on preparing for the final presentation that we gave on Wednesday. It was a lighter week than usual, since we had put in a lot of time in previous weeks to get to a safe position.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule? 

The progress is on schedule.

What deliverables do you hope to complete in the next week?

We have our final demo next week and will spend time preparing for that and ensuring our final solution is well tested and correct in all aspects.

List all unit tests and overall system tests carried out for experimentation of the system. List any findings and design changes made from your analysis of test results and other data obtained from the experimentation.

There weren’t any major design changes on my part, except that we moved from a QuadSPI implementation to a standard SPI implementation, since that was sufficient for our needs.

For the tests, we carried out everything described in the final presentation. This included the range test, where we placed the remote camera node and the receiver 50 m apart with multiple obstacles in between, and were still able to send and receive frames with fewer than 10% drops.

Another test was the battery test, where we ran the system for 24 hours and observed it working continuously.

Another test on my part was using the logic analyzer to confirm that the SPI implementation works correctly and that the transferred bytes are padded and arrive in the right order. This test also covered integration with the FPGA and bring-up of the complete system.



Team Status Report 4/27

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

For this week, we got all the components fully integrated and running for our project. We were able to get 6 simultaneous streams of 240p video running at 10fps. Everything is working as expected and there is no more work left to do.

The following is a list of unit tests that we have run:

  1. FPGA JPEG decoding
  2. DRAM controller timings
  3. HDMI driver consistency
  4. OctoSPI peripheral functional and data integrity tests
  5. Full FPGA end-to-end pipeline testing
  6. ESP32 Wi-Fi transmission range test
  7. ESP32 and FPGA subsystem power consumption tests
  8. ESP32 frame transmission interval consistency test
  9. ESP32 to FPGA frame interval tests
  10. ESP32 motion detection test
  11. Full end-to-end system test

Varun’s Status Report 4/27

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I worked on preparing for the final presentation this week. I was also able to fully finish my portion of this project. Everything works together!

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Everything is done!

What deliverables do you hope to complete in the next week?

Wrap up some small things for the final demo, including the poster and report.

 

Michael’s Status Report 4/27

What did you personally accomplish this week on the project? 

For this week, I wrote a system to address the streams that are sent to the FPGA. This is needed for the FPGA to identify where it should draw a picture when it gets a frame from the central ESP. With the addressing scheme in place, we are now able to run 6 simultaneous streams at once and have each one show up in its own location.
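
As a rough sketch of how such an addressing scheme can work (the header layout, field names, and spi_send_bytes helper below are illustrative assumptions, not our exact code), each frame forwarded to the FPGA carries a small header that tells the display pipeline which tile to draw into:

    #include <stdint.h>
    #include <stddef.h>

    /* Placeholder for whatever SPI transmit routine the central ESP uses. */
    extern void spi_send_bytes(const uint8_t *data, size_t len);

    /* Hypothetical per-frame header, kept 4-byte aligned so the FPGA can
     * read it as whole 32-bit words. */
    typedef struct {
        uint8_t  stream_id;   /* which remote camera node sent the frame (0-5) */
        uint8_t  tile_index;  /* which of the 12 on-screen 240p tiles to draw into */
        uint16_t reserved;    /* padding to keep the header a multiple of 4 bytes */
        uint32_t length;      /* JPEG payload length in bytes */
    } frame_header_t;

    /* Prepend the addressing header, then forward the JPEG payload. */
    void forward_frame(uint8_t stream_id, const uint8_t *jpeg, uint32_t len)
    {
        frame_header_t hdr = {
            .stream_id  = stream_id,
            .tile_index = stream_id,  /* simple 1:1 node-to-tile mapping for illustration */
            .reserved   = 0,
            .length     = len,
        };
        spi_send_bytes((const uint8_t *)&hdr, sizeof(hdr));
        spi_send_bytes(jpeg, len);
    }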

In addition, I also added some pacing code to the central ESP. We need to pace the frames at roughly a 20-25 ms interval, since the FPGA has only one instance of the decoder and it runs sequentially, so it can only accept a new image to draw every 20-25 ms. Since it is much easier for the ESP to buffer this data, we decided to keep the pacing and buffering code on the ESP side.
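
A minimal sketch of that pacing logic, assuming a FreeRTOS task on the central ESP with a hypothetical frame_queue of buffered frames and a send_to_fpga() placeholder for the SPI path:

    #include "freertos/FreeRTOS.h"
    #include "freertos/task.h"
    #include "freertos/queue.h"

    #define FRAME_INTERVAL_MS 25  /* FPGA decoder accepts a new image every 20-25 ms */

    extern QueueHandle_t frame_queue;       /* frames buffered as they arrive over Wi-Fi */
    extern void send_to_fpga(void *frame);  /* placeholder for the SPI transmit path */

    /* Drain the buffer no faster than the decoder can keep up with. */
    void pacing_task(void *arg)
    {
        (void)arg;  /* unused */
        void *frame;
        for (;;) {
            if (xQueueReceive(frame_queue, &frame, portMAX_DELAY) == pdTRUE) {
                send_to_fpga(frame);
                /* Hold off before handing the FPGA its next image. */
                vTaskDelay(pdMS_TO_TICKS(FRAME_INTERVAL_MS));
            }
        }
    }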

The last loose end that needed to be cleaned up was blacking out the unused picture locations. At 720p we have divided the frame into 12 individual 240p streams. Since we are only using half of them, we need the ESP to send a black frame to the unused locations on initialization to make the system look nice. Without the black frames, the unused locations just show random colors, which looks bad.

Finally, I also integrated the whole system with Varun. We were able to get all 6 streams working simultaneously. The video below is of the system working with all 6 streams. Note that the code to black out the unused locations is not active in the video. 

https://drive.google.com/file/d/1J4ZgzfkFmw4zAhaMK7OQOmpwzOf-8wDz/view?usp=sharing

Output with Unused Locations Blacked Out

Neelansh’s Status Report for 4/20

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge? 

I was not familiar with ESP32s, which are among the most important microcontrollers we are using in this project. I had to learn from YouTube videos, online tutorials, and websites how to set one up and get the entire project working. I also consulted friends who are proficient with the platform to learn and get advice.

I learnt about using the ESP-IDF environment, SPI interfacing, and working with microcontrollers. My biggest learning strategy was trying to find things out from the internet or books, and being ready to ask my peers and mentors for help and advice along the way.

We recognize that there are quite a few different methods (i.e. learning strategies) for gaining new knowledge — one doesn’t always need to take a class, or read a textbook to learn something new. Informal methods, such as watching an online video or reading a forum post are quite appropriate learning strategies for the acquisition of new knowledge.

Yes, I agree with this statement completely. I had never worked with microcontrollers before, since I had always been more on the pure software side of things. However, when tasked with making the ESP32 act as an Access Point and writing code for the SPI interface, I had to research online forums, especially during debugging. At times I had to ask my teammates for help and bring any doubts to professors and TAs. These experiences are valuable and have shown me the importance of all the different resources available for learning and gaining knowledge.

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours). 

I worked on developing the SPI interface and ensuring it works correctly. I then added more structure to the data, such as padding it to be 4-byte aligned to allow for easier decoding on the FPGA end. I also did manual testing in Schenley Park with my teammates and analyzed the data we collected.
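
As an illustrative sketch of the padding step (the pad byte and function name are assumptions, not the exact implementation), the payload length is simply rounded up to the next multiple of 4 before the SPI transfer, on the assumption that the FPGA decoder only consumes data up to the JPEG end-of-image marker and ignores the filler:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Pad a JPEG payload out to a 4-byte boundary so the FPGA can consume
     * the stream as whole 32-bit words. Returns the padded length, or 0 if
     * the caller's buffer is too small to hold the extra bytes. */
    size_t pad_to_word(uint8_t *buf, size_t len, size_t capacity)
    {
        size_t padded = (len + 3u) & ~(size_t)3u;  /* round up to a multiple of 4 */
        if (padded > capacity) {
            return 0;
        }
        memset(buf + len, 0x00, padded - len);     /* filler value is an assumed choice */
        return padded;
    }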

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule? 

It is on schedule.

What deliverables do you hope to complete in the next week?

I will be working on testing and making the entire 6-camera-node system work well within our constraints. We also need to prepare for the presentation and the final demo.



Team’s Status Report for 4/20

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

There are no more significant risks facing the project. Earlier this week the system was tested with a single camera node sending to the central node, and we were able to get a good output on the monitor. Along with the range validated by testing in an outdoor situation and the timing plots from the logic analyzer, we now have good coverage of all of the potential areas of concern. All that is left is to integrate the stream addressing system with the FPGA; the addressing system was already in place on the ESP side when we did all the testing, so it is just a matter of sorting out the details.

Video of system working end to end: https://drive.google.com/file/d/1vU1QA5X1gf_H7oJ-TFqDhKXAVMIMwOnm/view?usp=sharing

Varun’s Status Report for 4/20

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

At this point almost everything works together. We were able to transmit an image from the remote node all the way through the pipeline to the display; I’ve deferred the image of this working to the team status report. Getting to this stage included a bit of bug fixing. I made some mistakes when it came to clock domain crossing and was able to sort that out.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on track right now. We were able to do some real world testing today and should be able to finish everything up!

What deliverables do you hope to complete in the next week?

I hope to have everything done, including increasing the processing speed of the decoder and the display driver. I realized that it’s a little too slow right now to hit the 6 streams @ 10fps target, but I am pretty close to getting it all sorted out.

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks?

I learnt quite a few new things putting this project together. One of the new tools that I got very familiar with was a new vendor of FPGA devices (Lattice) and all the quirks that come with it. Using open-source tools meant that some of the features that alternatives like Xilinx or Altera offer (such as a platform builder to get IPs integrated) simply didn’t exist, so I had to use more manual methods to get things up and running.

What learning strategies did you use to acquire this new knowledge?

As with most things, the way I learned was just by doing it! I was able to hunker down, figure things out, and learn a lot during the process.

Michael’s Status Report 4/20

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

I had to learn about the specifics of ESP32 development. The toolchain and configuration methods were all new to me. Along with that, I also had to learn about the intricacies of how Wi-Fi works on the ESP32s and how to debug them when the performance wasn’t where we wanted it. I mostly learned by reading the Espressif programming manual and the provided example code. It also helped that I could draw on my previous class work and experience with microcontrollers.

What did you personally accomplish this week on the project? 

For this week, I used a multimeter to check the power consumption of the ESP32 to verify the claimed values on its datasheet. I measured a maximum power draw of 1.5 W for the remote camera node. This was with the camera continuously transmitting imagery, which is an absolute worst-case situation. The average power draw was closer to 1 W, which is in line with our initial assumptions based on the manufacturer’s datasheet. The power consumption of the central node was also measured and it was slightly lower, at only about 0.8 W maximum.
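
As a rough, back-of-the-envelope check on what this implies for runtime: an average draw of about 1 W corresponds to roughly 24 Wh of energy for a full day of operation, so a battery on the order of 6,500 mAh at 3.7 V (assuming ideal conversion) would be needed to sustain 24 hours of continuous streaming.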

I got the change detector running on the ESP32. The change detector is implemented as a separate task from the main sending task so that it doesn’t block sending. Since the ESP32 has two cores, the two tasks can run simultaneously for maximum performance. The change detector works by computing the number of pixels that have changed significantly between frames; if that number is above a threshold, change is assumed to have happened and video streaming commences.
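
A simplified sketch of that idea (the thresholds, grayscale frame format, and function name are illustrative assumptions, not the exact implementation); on the ESP32 this check would run in its own task, e.g. one created with xTaskCreatePinnedToCore, so it never blocks the sending task on the other core:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdlib.h>

    #define PIXEL_DELTA_THRESHOLD   30    /* per-pixel change considered significant (assumed) */
    #define CHANGED_PIXEL_THRESHOLD 2000  /* changed-pixel count that triggers streaming (assumed) */

    /* Count pixels that changed significantly between the previous and the
     * current frame; if enough pixels changed, motion is assumed. */
    bool motion_detected(const uint8_t *prev, const uint8_t *curr, size_t num_pixels)
    {
        size_t changed = 0;
        for (size_t i = 0; i < num_pixels; i++) {
            int delta = (int)curr[i] - (int)prev[i];
            if (abs(delta) > PIXEL_DELTA_THRESHOLD) {
                changed++;
            }
        }
        return changed > CHANGED_PIXEL_THRESHOLD;
    }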

I also got all 6 camera nodes sending to the central node at the same time. I checked the SPI output with a logic analyzer to verify the data and the frame spacing. Even with 6 nodes, the system is able to reliably deliver frames at 100 ms intervals with only slight variations. The SPI clock rate was also bumped up to 20 MHz to account for the time needed to decode the image on the FPGA side.

Finally, I also completed a range test for the remote camera nodes sending to the central node. The current system with omnidirectional antennas is able to hit the targeted 50 m while maintaining a stable 10fps stream. The range test was done in Schenley Park to simulate the system being used in an outdoor environment. For the test, the remote node was mounted on a small tree and I walked slowly away with the central node, stopping when the video stream no longer worked consistently.
 

Range Test Distance
Images from Range Test

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Currently on schedule

 

What deliverables do you hope to complete in the next week?

For next week, I hope to complete final integration with Varun’s FPGA using the new stream addressing scheme.

Team’s Status Report for 4/6

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The risks that are yet to be fully quantified are the issue of range and the receiving capabilities of the central ESP. 

In regards to the issue of range, we already have some data points to compare against. The first is that we were able to get about 30 meters of range indoors in a non-line-of-sight situation, through two thick brick walls. From this, we judge that a 50-meter range is likely achievable in a wilderness environment where there is minimal interference from neighboring access points and no need for the signal to penetrate multiple brick walls. Should it not be possible to reach the 50-meter figure, we can always install more directional antennas; the current external antennas are 3 dBi omnidirectional antennas that can easily be replaced with higher-gain ones if needed. To verify, we can set up a camera node and receiving node in Schenley Park and track the distance at which the stream drops. The test can be run under a variety of conditions, for example in an open area with direct line of sight and then in a wooded area where the line of sight is blocked by a couple of trees. Terrain would have to be accounted for as well, since in the wilderness it cannot be guaranteed that every node is at the same elevation.

Our current knowledge of the receiving capabilities of the central ESP is that it can handle one stream of camera data; we have yet to test beyond that. While we do have 6 cameras, most of the time the system will have at most one active stream, because the cameras only send data when there is movement. Thus, it is unlikely that all 6 cameras will be active and sending data to the central node at once. In case we do run into processing limitations on the central ESP, we can always drop the quality of the frames, which decreases the transmission size and in turn lowers the processing demand. Alternatively, we could include a second ESP to split the load, but this is the less preferred option because it adds extra complexity.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The main change is on the FPGA front. Due to logic element sizing and time constraints, the JPEG decoder and the video driver will be split across two different FPGAs. This does increase the price of the central node, but it remains within our budget of $150.

Provide an updated schedule if changes have occurred

No changes

This is also the place to put some photos of your progress or to brag about a component you got working.

Validation Plan

One of the validation plans is to ensure that the communication between the central ESP and the FPGAs is steady. The metrics for this are twofold: a counter will be implemented on the FPGA so that we know the rate at which the ESP is streaming data to the FPGA, and performance counters on the JPEG decoder will be added so that we know how many invalid JPEG frames are received.

In terms of actual metrics, we expect to see 60 JPEG frames transmitted every second by the ESP, and we expect no more than 10% of the transmitted JPEG frames to be invalid.

We will also perform testing by sending varying forms of input (images with different gradients, colors, and patterns) to ensure robustness. We will also verify that multiple camera streams can transmit simultaneously, so that the system works under high load (all 6 cameras sensing and streaming). The receiver node needs to be able to handle all 6 incoming streams and then transmit them to the FPGA for further processing.

Varun’s Status Report for 4/6

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week has mostly been spent trying to debug my JPEG decoder implementation. As of right now, the JPEG decoder flashed onto the FPGA doesn’t actually produce any results. I’m not entirely sure whether this is because there’s an issue in the way the Arduino is sending the FPGA the JPEG bitstream or whether there’s some inherent bug in the JPEG decoder.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I’m a bit behind on the JPEG decoder itself, but luckily this is the only thing that’s behind. I plan on continuing testing, most likely starting with preloading an image onto the FPGA and seeing if it decodes properly.

What deliverables do you hope to complete in the next week?

I plan on hopefully getting everything sorted out with the JPEG decoder next week.

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week was mostly spent integrating the JPEG decoder into the pipeline and debugging the issues that arose. Another big piece of the puzzle that I had missed was converting the decoded YCbCr output of the JPEG pipeline back into RGB.
This was fairly tricky to implement, as the conversion equations involve floating-point coefficients. I had to work on pipelining the design so that I could actually fit it onto the FPGA: there are about 28 18×18 multipliers available on the FPGA, and each pixel requires about 4 of them, so I had to find a way to sufficiently parallelize or serialize the decoding to use the multipliers most appropriately.
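
For reference, the conversion itself is a handful of multiply-accumulates per pixel. A fixed-point C model of the standard JFIF equations (the 8.8 scaling below is an illustrative choice, not necessarily the exact precision used on the FPGA) would look like:

    #include <stdint.h>

    /* Clamp an intermediate result into the valid 8-bit range. */
    static inline uint8_t clamp_u8(int32_t v)
    {
        if (v < 0)   return 0;
        if (v > 255) return 255;
        return (uint8_t)v;
    }

    /* JFIF YCbCr -> RGB using 8.8 fixed-point constants (1.402*256 ~= 359, etc.).
     * Four multiplies per pixel, matching the "about 4 multipliers per pixel"
     * figure above. */
    void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                      uint8_t *r, uint8_t *g, uint8_t *b)
    {
        int32_t luma = (int32_t)y;
        int32_t d    = (int32_t)cb - 128;
        int32_t e    = (int32_t)cr - 128;

        *r = clamp_u8(luma + ((359 * e) >> 8));           /* R = Y + 1.402(Cr-128) */
        *g = clamp_u8(luma - ((88 * d + 183 * e) >> 8));  /* G = Y - 0.344(Cb-128) - 0.714(Cr-128) */
        *b = clamp_u8(luma + ((454 * d) >> 8));           /* B = Y + 1.772(Cb-128) */
    }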

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

As of right now, I am on schedule so I’m not worried about my progress. Though it will be a little tight to get everything in for the interim demo.

What deliverables do you hope to complete in the next week?

I plan on hopefully getting everything sorted out for the interim demo.

Verification Test

Luckily, by doing almost everything in hardware, most of the verification has already taken place. This covers things like how stable the HDMI output to the display is and the rate at which the JPEG decoder operates.

The HDMI frame rate can be monitored by an external display. The display is able to show the input frame rate, and this can be monitored for an hour to ensure that it stays at a stable 60fps. Success here is marked by not missing any frames during this time.

The JPEG decoding rate is solely determined by how the design is pipelined and its clock speed. Currently, the 25 MHz clock that the JPEG decoder runs at is more than sufficient to meet the effective 60fps (6 streams @ 10fps) required for the project.
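
As a rough back-of-the-envelope check (assuming 320×240 tiles, i.e. a 720p frame divided into 12 locations): 25 MHz divided by 60 frames per second gives about 417,000 clock cycles per frame, while a 320×240 tile contains 76,800 pixels, so the decoder has a budget of roughly 5 clock cycles per pixel on average.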