Team Status Report 4/27

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

This week, we got all the components fully integrated and running for our project. We were able to get 6 simultaneous streams of 240p video running at 10 fps. Everything is working as expected and there is no more work left to do.

The following is a list of unit tests that we have run:

  1. FPGA JPEG decoding
  2. DRAM controller timings
  3. HDMI driver consistency
  4. OctoSPI peripheral functional and data integrity tests
  5. Full FPGA end-to-end pipeline testing
  6. ESP32 Wi-Fi transmission range test
  7. ESP32 and FPGA subsystem power consumption tests
  8. ESP32 frame transmission interval consistency test
  9. ESP32 to FPGA frame interval tests
  10. ESP32 motion detection test
  11. Full end-to-end system test

Team’s Status Report for 4/6

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The risks that are yet to be fully quantified are the issue of range and the receiving capabilities of the central ESP. 

In regards to the issue of range, we already have some data points to compare against. The first is that we were able to get about 30 meters of range indoors in a non-line-of-sight situation, through two thick brick walls. From this, we judge that a 50 meter range is likely achievable in a wilderness environment where there is minimal interference from neighboring access points and no need for the signal to penetrate multiple brick walls. Should it not be possible to reach the 50 meter figure, we can always install more directional antennas. The current external antennas are 3 dBi omnidirectional antennas, which can easily be replaced with higher-gain antennas if needed. To verify, we can set up a camera and receiving node in Schenley Park and keep track of the distance until the stream drops. The test can be run under a variety of conditions, for example in an open area with direct line of sight and then in a wooded area where the line of sight is blocked by a couple of trees. Terrain would have to be accounted for as well, since in the wilderness it cannot be guaranteed that every node is at the same elevation.

The current knowledge of the receiving capabilities of the central ESP is that it can handle one stream of camera data; we have yet to test beyond that. While we do have 6 cameras, most of the time the system will have at most one active stream. This is because the cameras only send data when there is movement and stay idle when there isn't, so it is unlikely that all 6 cameras will be active and sending data to the central node at once. In case we do run into processing limitations on the central ESP, we can always drop the quality of the frames, which will decrease the transmission size and in turn lower the processing demand. Alternatively, we can include a second ESP to split the load. This is the less preferred option because it adds extra complexity.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The main change is on the FPGA front. Due to logic element sizing and time constraints, the JPEG decoder and the video driver will be split across two different FPGAs. This does increase the price of the central node, but it remains within our budget of $150.

Provide an updated schedule if changes have occurred

No changes

This is also the place to put some photos of your progress or to brag about a component you got working.

Validation Plan

One of the validation plans will be to ensure that the communication between the central ESP and the FPGAs is steady. The metrics for this will be twofold: a counter will be implemented on the FPGA so that we know the data rate at which the ESP is streaming data to the FPGA, and performance counters will be added to the JPEG decoder so that we know how many invalid JPEG frames are received.

In terms of actual metrics, we expect to see 60 JPEG frames transmitted every second by the ESP, and we expect no more than 10% of the transmitted JPEG frames to be invalid.
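
As a minimal sketch of how these counters could be checked against the targets, assuming the two FPGA counters (total frames received and invalid frames) can be read back by a test harness over some debug interface; read_frame_counter and read_invalid_counter are hypothetical placeholders for that interface:

    /* Sketch of the validation check, assuming the FPGA exposes two counters
     * (total JPEG frames received and invalid JPEG frames) that a test
     * harness can read back. The read_* functions are hypothetical
     * placeholders for that debug interface. */
    #include <stdint.h>
    #include <stdio.h>

    extern uint32_t read_frame_counter(void);    /* hypothetical: frames received by FPGA */
    extern uint32_t read_invalid_counter(void);  /* hypothetical: frames rejected by JPEG decoder */

    /* Returns 1 if the link meets the targets over a window of `seconds`
     * seconds: roughly 60 frames/s from the ESP and under 10% invalid frames. */
    int link_meets_targets(uint32_t seconds)
    {
        uint32_t frames  = read_frame_counter();
        uint32_t invalid = read_invalid_counter();

        double fps          = (double)frames / (double)seconds;
        double invalid_frac = frames ? (double)invalid / (double)frames : 1.0;

        printf("frame rate: %.1f fps, invalid: %.1f%%\n", fps, 100.0 * invalid_frac);
        return fps >= 60.0 && invalid_frac < 0.10;
    }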

We will also perform testing by sending varying forms of input (images with different gradients, colors, and patterns) to ensure robustness. In addition, we will verify that multiple camera streams can still transmit simultaneously, so that the system works under high load (all 6 cameras sensing and streaming). The receiver node needs to handle all 6 incoming streams and then forward them to the FPGA for further processing.

Team Status Report For 3/30

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

We have mainly addressed the main risk from last week. We were able to solve it by reworking the Wi-Fi setup on the ESP, which freed up more bandwidth for the image encoding on the remote node side.

Because this has been tested and is reliable, we currently do not have any further risks. We are happy with this solution, as it did not require us to reduce the image quality being sent from the camera.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes

Team Status Report For 3/23

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk right now is the encoder side. As mentioned in Michael's status report, there are severe constraints on what we can do on the encoder side due to the need to use the PSRAM module on the ESP32. The PSRAM module's 40 MB/s is a hard limit that is difficult for us to work around. Because the encoder is one of the first stages of the pipeline, any changes on the encoder side will trickle down and cause issues for the central receiving node and the FPGA decoder. The current contingency plan, in case this PSRAM issue does materialize, is to set the encoder to a lower quality, which will minimize the amount of data that needs to be processed by the lwIP and Wi-Fi stacks. The reduced workload will in turn alleviate the pressure on the PSRAM module.
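
To illustrate why a lower quality setting shrinks the workload: in a standard IJG-style JPEG encoder, the quality value simply scales the quantization tables, and coarser quantization yields more zero DCT coefficients and therefore smaller compressed frames. The sketch below uses the libjpeg scaling formula and is an illustration of the mechanism, not necessarily our encoder's exact code.

    /* Sketch of the usual IJG-style quality knob: a quality value (1-100)
     * scales a base JPEG quantization table. Lower quality -> larger
     * quantizer steps -> more zero DCT coefficients -> smaller frames and
     * less data pushed through the lwIP/Wi-Fi stacks. */
    #include <stdint.h>

    void scale_quant_table(const uint16_t base[64], uint16_t out[64], int quality)
    {
        if (quality < 1)   quality = 1;
        if (quality > 100) quality = 100;

        /* libjpeg-style scale factor: quality 50 leaves the base table unchanged */
        int scale = (quality < 50) ? (5000 / quality) : (200 - 2 * quality);

        for (int i = 0; i < 64; i++) {
            int q = (base[i] * scale + 50) / 100;
            if (q < 1)   q = 1;     /* keep quantizer steps in valid range */
            if (q > 255) q = 255;   /* baseline JPEG uses 8-bit table entries */
            out[i] = (uint16_t)q;
        }
    }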

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes

Team Status Report 3/16/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Wi-Fi was previously our greatest unknown. None of us had ever programmed a microcontroller with Wi-Fi capabilities, and we didn't really understand what the ESP's Wi-Fi capabilities were. We have now proved that the Wi-Fi capabilities are adequate for our needs and won't be a risk factor going forward. The greatest remaining risk is the JPEG runtime. All of the JPEG code written so far runs on a laptop, not a microcontroller. Even though the laptop runtime is an order of magnitude faster than the required 100 ms, even accounting for all the setup code, it still doesn't give us concrete data on whether the ESP can run JPEG at the speed we need. Michael is currently working on making the final changes to port the code over, so we will soon know whether this risk materializes.
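
Once the port lands, one way the on-target measurement might look is a simple timing harness under ESP-IDF using esp_timer_get_time(); the encode_frame_jpeg function and the buffers below are hypothetical placeholders for the ported encoder, so this is a sketch rather than the actual benchmark code.

    /* Sketch of timing the ported encoder on the ESP32 itself (ESP-IDF),
     * checking against the 100 ms budget. encode_frame_jpeg() and the
     * buffers are hypothetical placeholders for the actual encoder port. */
    #include <stdint.h>
    #include <stddef.h>
    #include "esp_timer.h"   /* esp_timer_get_time(): microseconds since boot */
    #include "esp_log.h"

    static const char *TAG = "jpeg_bench";

    extern size_t encode_frame_jpeg(const uint8_t *rgb, uint8_t *out, size_t out_cap); /* hypothetical */

    void benchmark_encoder(const uint8_t *frame, uint8_t *out, size_t out_cap)
    {
        int64_t t0 = esp_timer_get_time();
        size_t jpeg_len = encode_frame_jpeg(frame, out, out_cap);
        int64_t elapsed_us = esp_timer_get_time() - t0;

        ESP_LOGI(TAG, "encoded %u bytes in %lld us (budget: 100000 us)",
                 (unsigned)jpeg_len, (long long)elapsed_us);
    }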

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes

Provide an updated schedule if changes have occurred

No changes

This is also the place to put some photos of your progress or to brag about a component you got working.

I’m super excited to have gotten some baseline work for the JPEG decoder running on the FPGA. By next week I should be able to display an uncompressed frame on the monitor!

Team Status Report 3/9/2024

Part A: … with consideration of global factors. Global factors are world-wide contexts and factors, rather than only local ones. They do not necessarily represent geographic concerns. Global factors do not need to concern every single person in the entire world. Rather, these factors affect people outside of Pittsburgh, or those who are not in an academic environment, or those who are not technologically savvy, etc. 

 

Being a technology product, EyeSpy can present a challenge for people who are not very tech savvy. However, our design goal of plug-and-play operation should make it easy to use even for them. EyeSpy is also fully compatible with wireless regulations globally, as it uses the internationally standardized 2.4 GHz ISM band. Since EyeSpy doesn’t require an electric grid connection to work, varying grid voltage and frequency shouldn’t be a hindering factor.

 

Part B: … with consideration of cultural factors. Cultural factors encompass the set of beliefs, moral values, traditions, language, and laws (or rules of behavior) held in common by a nation, a community, or other defined group of people. 

 

Our product does not have a high cultural impact, and it does not offend or conflict with the rules of any particular culture. Our camping security system can help people who are performing religious prayers or ceremonies and need a sense of their surroundings, allowing them to continue the proceedings in peace. Our product does not break any existing laws, as long as it is used for the right purposes and not for anything illegal.

 

 Part C: … with consideration of environmental factors. Environmental factors are concerned with the environment as it relates to living organisms and natural resources.

 

Our product does have some environmental considerations. The main concern is how it will interact with wildlife: animals could be harmed if they try to eat or play with the remote camera nodes, since the nodes contain harmful chemicals and components such as the camera and the battery. The main concern relating to natural resources is the sustainability of sourcing materials for the remote node, in particular the battery and how long its health will last. Batteries are also expensive to manufacture and carry a significant environmental impact of their own.

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward? 

 

No major changes have been made to the design of the system yet.

 

Provide an updated schedule if changes have occurred.

 

There are no major updates to our schedule as of yet and no changes have occurred.

Team Status report for 24th February, 2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Currently, the major risks are ensuring that enough frames are transmitted from the remote camera nodes to the central receiver node so that our threshold of less than 10% dropped frames is not crossed, and decompressing the 6 incoming frames on the FPGA fast enough so that the streaming on the monitor is seamless and free of glitches.

As explained in our presentations, we plan on testing these things in the coming week. If we face issues with transmission, we plan on adding more data access points on the receiver ESP32, and if there are compute limitations on the FPGA, we plan on further optimizing the SystemVerilog code, offloading some computation to the ESP32, or, as a worst-case scenario, switching to a larger FPGA.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Currently no major changes have been made to the system and we are working as per our initial design plan.

Provide an updated schedule if changes have occurred. 

No changes have occurred on our schedule as of now. Things are on track.

This is also the place to put some photos of your progress or to brag about a component you got working.

Here is a video of the FPGA driving a display with the Arduino commanding pixel values: https://photos.app.goo.gl/LBfp1qN6J4SgLGJp9

 

Team Status Report for 2/17

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

 

As of right now, the main concerns we have are related to the Wi-Fi stack and how much throughput we are actually able to get. We attempted to minimize these risks by ordering the ESP32s that we plan on using last week; we will bring them up as soon as possible and determine if more changes have to be made. The main contingency plan we have currently is that, if the throughput isn’t sufficient, we will include a larger antenna on the ESP32 or set up multiple ESP32s on the central node so that we can use multiple Wi-Fi channels for the camera nodes. Other measures could include increasing the compression setting on the JPEG compression algorithm or downsampling the Cb and Cr color components.
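
As a concrete illustration of the chroma-downsampling contingency, the sketch below averages each 2x2 block of a Cb or Cr plane (4:2:0 subsampling), cutting that plane's data by a factor of four before compression; it is an example of the idea, not code from our encoder.

    /* Sketch of 4:2:0 chroma subsampling: average each 2x2 block of a chroma
     * plane (Cb or Cr) so it is stored at half resolution in both dimensions,
     * cutting that plane's data by 4x before compression. Width and height
     * are assumed to be even. */
    #include <stdint.h>

    void downsample_chroma_420(const uint8_t *src, uint8_t *dst, int width, int height)
    {
        for (int y = 0; y < height; y += 2) {
            for (int x = 0; x < width; x += 2) {
                int sum = src[y * width + x]
                        + src[y * width + x + 1]
                        + src[(y + 1) * width + x]
                        + src[(y + 1) * width + x + 1];
                dst[(y / 2) * (width / 2) + (x / 2)] = (uint8_t)((sum + 2) / 4);
            }
        }
    }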

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

 

No changes have been made to the current design. Proof of concept work for the modified JPEG compression algorithm is underway. Once the proof of concept compression algorithm is completed, we will have more information to decide if any aspects of the system need to be changed. Work on the FPGA is also ongoing, and once a proof of concept is complete, we will be able to revisit the design to determine if any changes are needed.

Part A was written by Neelansh Kaabra, B was written by Varun Rajesh and C was written by Michael Lang.

 

Part A: 

Our product has immense applications and considerations for public health, safety, and welfare. Our product is a security system, so we essentially care most about people’s safety and welfare, allowing them to monitor their surroundings when they are camping, and giving them an additional layer of protection and information about their campsite. 

 

Being able to monitor the surroundings from all 360 degrees brings a sense of safety and security to the user, thus helping with the psychological and physiological well-being of the user too. Our product helps better the public’s psychological state by giving them a higher sense of comfort while providing them with security benefits.

 

Part B:

This product has some social implications. One of the major benefits is for people who are afraid of the wilderness or have concerns about their safety when they are outdoors. This could bring a more diverse group of people together to go enjoy the outdoors. Beyond that, however, I do not believe there are more social impacts.

 

Part C:

Our product uses commonly available components for all of the different modules. There shouldn’t be a significant issue in setting up a production line to mass-produce these. The only somewhat specialized component is the FPGA, and even so, the FPGA we are using is actively supported by a large company (Lattice). Distribution also shouldn’t be an issue, since none of our parts require special handling or have an expiration date. Being cost-effective is one of our major design goals, and thus owning the product shouldn’t be too difficult financially.

 

Team Status Report 2/10/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready? 

 

As described in the project proposal, the main technical risks we have identified in the project are as follows:

 

1. We need to be able to compute frame compression fast enough:

One of our primary requirements for the project was to ensure that the entire system is inexpensive and requires low power to run. In light of this, we decided to use ESP32 microcontrollers for compressing and transmitting the image frames from the camera. ESP32s present a compute limitation to the system and could cause the frame compression algorithm to not run as fast as expected. If need be, we plan on switching to a different compression algorithm, such as delta compression (a minimal sketch of the idea follows this list).

2. Stream enough data over the wireless connection:

Our prototype MVP uses 6 cameras and a single receiver node. The receiver node’s ESP32 will be receiving data frames from all 6 camera nodes simultaneously, so there will be a high amount of data being transmitted over the wireless connection. We need to ensure that we drop no more than 10% of the data frames. Staying under 10% dropped frames means that only about 100 ms of video will be lost when a frame is dropped, and no animal will be able to cross the surveillance area in less than 100 ms. In case we exceed that 10% threshold, we plan on increasing the data access points on the receiver node. Frame drop percentage will be computed by comparing the number of frames transmitted against the number of frames received, and ensuring that the loss percentage does not go above 10%.

3. Decompress all the incoming frames fast enough:

As mentioned above, the receiver node will be collecting data from 6 camera nodes, decompressing the frames, and then driving the display. The decompression algorithm has to be fast enough to ensure concurrent streaming from all 6 camera nodes, without high latency or computation errors. We will be using an FPGA on the receiver node to perform the decompression, and if an issue arises with the FPGA, we plan on opting for a larger FPGA with higher compute capability and parallel processing techniques.

4. Optimize performance to minimize power consumption:

One of the major requirements for any portable security system is to ensure that it doesn’t need to be charged often. Keeping this in mind, we envision our system being able to run for at least 24 hours on a single charge / battery. The entire system’s performance, including the camera’s feed capture, compression, transmission, decompression, and streaming to the portable monitor, has to be optimized so that the setup works for at least 24 hours on a single charge / battery. Our contingency plan would be to increase the battery size if the system ends up drawing too much power, even after final optimizations.
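
For item 1 above, the following is a minimal sketch of what a simple delta-compression fallback could look like: compare the current raw frame with the previous one and send only the changed bytes as (offset, value) pairs. With a mostly static campsite scene, few bytes change between frames, so the output is much smaller than a full frame. This is an illustration of the idea under those assumptions, not a committed design.

    /* Sketch of a simple delta-compression fallback: compare the current raw
     * frame with the previous one and emit only changed bytes as
     * (offset, value) pairs. Illustration only. */
    #include <stdint.h>
    #include <stddef.h>

    /* Returns the number of bytes written to `out` (capacity `out_cap`),
     * or 0 if the delta would not fit and a full frame should be sent instead. */
    size_t delta_encode(const uint8_t *prev, const uint8_t *cur, size_t frame_len,
                        uint8_t *out, size_t out_cap)
    {
        size_t used = 0;
        for (size_t i = 0; i < frame_len; i++) {
            if (cur[i] != prev[i]) {
                if (used + 5 > out_cap)
                    return 0;                     /* delta bigger than budget: fall back */
                out[used++] = (uint8_t)(i >> 24); /* 32-bit offset, big-endian */
                out[used++] = (uint8_t)(i >> 16);
                out[used++] = (uint8_t)(i >> 8);
                out[used++] = (uint8_t)(i);
                out[used++] = cur[i];             /* new byte value */
            }
        }
        return used;
    }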

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward? 

 

No major changes have been made yet; however, we still need to decide on the final specs for the camera (240p or not) and the other technical specs for the microcontroller and monitor, based on our use case.

 

Provide an updated schedule if changes have occurred. 

No schedule changes as such.

Introduction and Project Summary

When camping out in the wilderness, there is a need to set up a security perimeter to monitor one’s surroundings. However, existing systems on the market are expensive, internet dependent, and overall not suited for the task of campsite surveillance. EyeSpy is a project that aims to change this. The system that we plan on building will allow the end user to monitor multiple camera streams simultaneously without the use of any wires, while being cheaper than any commercial systems that currently exist. Each camera will be fully wireless and battery powered for maximal flexibility in placement. The combined camera feed will be displayed on a portable monitor to allow for continuous surveillance should the user desire it.