Michael’s Status Report 3/9/2024

What did you personally accomplish this week on the project? 

For this week, I got the camera driver working on the ESP32. The code that I have currently works at the 240p resolution that we are targeting. I also tested it at 360p, 480p, and 1080p to make sure that it is able to accommodate future extensions if need be. In addition to writing the camera code, I also have some code written that will write the image data from the camera to a microSD card. While this is not needed for our project, it serves as a useful debugging and development tool. Since the Wi-Fi communication has not yet been written, the microSD card is how we are currently testing the camera driver and pulling images from it. Finally, I also made a couple of small modifications to the JPEG encoder and decoder programs to change the pixel format to match that of the OV2640. The OV2640 outputs images with 5-bit red and blue channels and a 6-bit green channel. The three channels are packed together into 2 bytes before being stored. Such a configuration is commonly referred to as RGB565. However, the proof-of-concept JPEG encoder and decoder use 8 bits for all three channels, also known as RGB888. To eliminate the need to convert from RGB565 to RGB888, I decided it was easier to just modify the proof of concept to handle the RGB565 format directly.
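For reference, the sketch below shows how a single RGB565 pixel splits into and repacks from its three channels. This is only an illustration of the format; the helper names are hypothetical rather than taken from our driver code, and depending on configuration the OV2640's byte order may need a swap before the two bytes are combined into a 16-bit word.

```c
#include <stdint.h>

/* Illustrative RGB565 helpers (not from our actual driver). A pixel is
 * assumed to already be combined into a uint16_t in the correct byte order. */
static inline void rgb565_unpack(uint16_t px, uint8_t *r5, uint8_t *g6, uint8_t *b5)
{
    *r5 = (px >> 11) & 0x1F; /* top 5 bits: red    */
    *g6 = (px >> 5)  & 0x3F; /* middle 6 bits: green */
    *b5 = px & 0x1F;         /* bottom 5 bits: blue  */
}

static inline uint16_t rgb565_pack(uint8_t r5, uint8_t g6, uint8_t b5)
{
    return (uint16_t)(((r5 & 0x1F) << 11) | ((g6 & 0x3F) << 5) | (b5 & 0x1F));
}
```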
With the camera driver and JPEG pipeline modified, we are now able to take a picture, compress it, and then decompress it. The following image shows the end result after going through all of those steps. The reflections in the image are from the screen that was displaying the color test bars.

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Currently ahead of schedule by approximately 2 weeks.


What deliverables do you hope to complete in the next week?

For next week, I hope to finish porting the encoder over to the ESP32. After this, we should only have to run the decoder on the laptop, and we can also test the encoder's performance when running on the ESP32.

Michael’s Status Report 2/24/2024

What did you personally accomplish this week on the project? 

For this week, we have the proof-of-concept JPEG encoder and decoder written and tested end to end. The current code is able to take in an array of RGB values and then compress them using methods inspired by the JPEG specification. The decoder is also there to aid Varun when he implements the JPEG decoder on the FPGA. The current code is able to achieve a compression ratio of about 5.75:1, which is in line with our design assumptions.

It is possible to improve this further by truncating the lower bits of the Cr and Cb values so that only the most significant 4 bits of each channel are left. With this implemented, the compression ratio rises to only slightly above 6:1. My hypothesis is that the bit packing of Cr and Cb significantly increases the entropy compared to just encoding Cr and Cb separately. Therefore, even though we save encoding an entire channel, the rise in entropy wipes out most of the gains. Since the human eye is far more sensitive to the luminance encoded in the Y channel than to chrominance, the loss of the lower chroma bits shouldn't significantly degrade image quality.
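To make the idea concrete, here is a minimal sketch of the truncation and packing described above, assuming 8-bit Cr and Cb inputs; the function names are hypothetical, and the real proof-of-concept code may differ in its details.

```c
#include <stdint.h>

/* Hypothetical sketch: keep only the top 4 bits of each 8-bit chroma sample
 * and pack the Cr/Cb pair into a single byte. */
static inline uint8_t pack_crcb(uint8_t cr, uint8_t cb)
{
    return (uint8_t)((cr & 0xF0) | (cb >> 4)); /* Cr in high nibble, Cb in low nibble */
}

/* Unpacking recovers only the top 4 bits of each channel; the low bits are lost. */
static inline void unpack_crcb(uint8_t packed, uint8_t *cr, uint8_t *cb)
{
    *cr = packed & 0xF0;
    *cb = (uint8_t)(packed << 4);
}
```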

The only big advantage of this bit packing is the run time. The version where the Cr and Cb values are truncated and packed is almost 25% faster than the regular version. Therefore, I have saved a copy of that code in case we need a little more performance and are willing to sacrifice some quality.

 

On the toolchain setup front, I was also able to verify my toolchain installation by flashing one of the hello world examples onto the boards that arrived earlier in the week and verifying that its output is what is expected.

 

Image Before Compression
Image After Compression
Image Compressed Using Packed Cr & Cb

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Currently on schedule; the JPEG encoder and decoder didn't present too many bugs in the process of writing them.

What deliverables do you hope to complete in the next week?

For next week, I hope to run the JPEG encoder and decoder across a more rigorous set of test images to verify their functionality. Assuming all goes well, I plan to also start porting the JPEG encoder code to the ESP32 in anticipation of integration with the camera driver.

Team Status Report for 2/17

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

 

As of right now, the main concerns we have are related to the WiFi stack and how much throughput we are actually able to get. We have attempted to minimize these risks by placing the order last week for the ESP32s that we plan on using. We will implement them as soon as possible and determine if more changes will have to be made. The main contingency plan we have currently is that if the throughput isn't sufficient, we will include a larger antenna on the ESP32 or set up multiple ESP32s on the central node so that we can use multiple WiFi channels for the camera nodes. Other measures could include increasing the compression setting of the JPEG algorithm or downsampling the Cr and Cb color components.

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

 

No changes have been made to the current design. Proof-of-concept work for the modified JPEG compression algorithm is underway. Once the proof-of-concept compression algorithm is completed, we will have more information to decide if any aspects of the system need to be changed. Work on the FPGA is also ongoing, and once a proof of concept is complete, we will be able to revisit the design to determine if any changes are needed.

Part A was written by Neelansh Kaabra, B was written by Varun Rajesh and C was written by Michael Lang.

 

Part A: 

Our product has significant implications for public health, safety, and welfare. It is a security system, so we care most about people's safety and welfare: it allows them to monitor their surroundings when they are camping and gives them an additional layer of protection and information about their campsite.

 

Being able to monitor the surroundings from all 360 degrees brings a sense of safety and security to the user, which also helps with the user's psychological and physiological well-being. Our product helps better the public's psychological state by allowing them a higher sense of comfort while providing them with security benefits.

 

Part B:

This product has some social implications. One of the major benefits is for people who are afraid of the wilderness or have concerns about their safety when they are outdoors. This could bring a more diverse group of people together to enjoy the outdoors. Beyond that, however, I do not believe there are many more social impacts.

 

Part C:

Our product uses commonly available components for all of the different modules. There shouldn't be a significant issue in getting a production system set up to mass-produce these. The only somewhat specialized component is the FPGA, and even so, the FPGA we are using is actively supported by a large company (Lattice). Distribution also shouldn't be an issue since none of our parts require special handling or have an expiration date. Being cost-effective is one of our major design goals, and thus owning the product shouldn't be too difficult financially.

 

Neelansh’s Status Report 2/17/2024

What did you personally accomplish this week on the project? Give files or
photos that demonstrate your progress. Prove to the reader that you put
sufficient effort into the project over the course of the week (12+ hours).

This week was not a lot of physical work, but more about planning and ideating. We finalized the components we need and started preparing for the design presentation next week. I am working on the ESP32 on the receiver node, and consequently I spent my time going over datasheets and doing mathematical calculations to finalize the hardware necessary. I also researched the existing toolchains and how they can be incorporated into our solution.
I then spent time making a basic timeline for my own part and designing the initial structure of the receiver node, along with the coding components. I spent time writing basic code for the decoding part on the receiver node, and I researched existing solutions online so I could differentiate ours from them and take the best parts from the existing ones.
My teammates and I also met with Professor Tamal and Jason, our TA, this week and finalized our plans for the upcoming few weeks.
We also spent time figuring out what products to order for an initial order so that we can start testing, and we ended up placing an order for a few items.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

We are currently on schedule and are waiting for the products to come so that we can start testing.

What deliverables do you hope to complete in the next week?

For next week, we plan on giving the design presentation on either Monday or Wednesday, and we will also be doing basic testing on the initial products so that we can give a better representation of what our final product is going to look like. I will also be getting the toolchain setup finalized and ready.

Michael’s Status Report for 2/17

What did you personally accomplish this week on the project?

For this week, I got the Espressif IDF for the ESP32 set up on my laptop. The IDF is needed for us to compile and flash code to the ESP32 once we have the hardware in hand. The order for the ESP32 was also put in on Monday along with the camera module. I hope to get them in hand soon so I can test out the toolchain that I have installed and debug any issues that may arise.
I also started writing the JPEG encoder code while I am waiting for the ESP32 to come in. My hope is that once the ESP32 comes in, I can immediately port the code over to it, saving us some time later on. Coding it up on my laptop also serves as a proof of concept of our modified JPEG algorithm and allows us to begin making optimizations so that it can better run on a low-power ESP32. So far, I have the RGB to YCrCb color space conversion code done and verified, along with an unoptimized version of the discrete cosine transform (DCT). For each one of the encoder components, I also have to write a decoder for it as well to verify functionality.
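For context, the color space conversion follows the usual JPEG (JFIF) equations; a rough sketch is below. The function name is hypothetical, and our actual encoder may use fixed-point arithmetic or different rounding, so treat this only as an illustration.

```c
#include <stdint.h>

/* Illustrative RGB888 -> YCrCb conversion using the standard JFIF
 * coefficients; the real encoder may use fixed-point math instead. */
static uint8_t clamp_u8(float v)
{
    if (v < 0.0f)   return 0;
    if (v > 255.0f) return 255;
    return (uint8_t)(v + 0.5f);
}

static void rgb_to_ycrcb(uint8_t r, uint8_t g, uint8_t b,
                         uint8_t *y, uint8_t *cr, uint8_t *cb)
{
    *y  = clamp_u8( 0.299f    * r + 0.587f    * g + 0.114f    * b);
    *cr = clamp_u8( 0.5f      * r - 0.418688f * g - 0.081312f * b + 128.0f);
    *cb = clamp_u8(-0.168736f * r - 0.331264f * g + 0.5f      * b + 128.0f);
}
```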

Is your progress on schedule or behind? If you are behind, what actions will be
taken to catch up to the project schedule?

Currently slightly ahead of schedule, but it is possible that unforeseen bugs in the compression pipeline could result in it taking longer than initially expected.

What deliverables do you hope to complete in the next week?

For next week, I hope to verify my toolchain installation on an actual ESP32 when it arrives. On the compression front, I hope to have a fully working version of the code so that our initial assumptions can be verified.

Team Status Report 2/10/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready? 

 

As described in the project proposal, the main technical risks we have identified in the project are as follows:

 

1. We need to be able to compute frame compression fast enough:

One of our primary requirements for the project is to ensure that the entire system is inexpensive and requires low power to run. In light of this, we decided to use ESP32 microcontrollers for compressing and transmitting the image frames from the camera. The ESP32s present a compute limitation to the system and could cause the frame compression algorithm to not run as fast as expected. If need be, we plan on switching to a different compression algorithm, such as delta compression (a rough sketch of this idea is included after these risk items).

2. Stream enough data over the wireless connection:

Our prototype MVP uses 6 cameras and a single receiver node. The receiver node's ESP32 will be receiving data frames from all 6 camera nodes simultaneously, and hence there will be a high amount of data being transmitted over the wireless connection. We need to ensure that we drop no more than 10% of the data frames. Having less than 10% dropped frames means that only about 100 ms of video will be lost when a frame is dropped, and no animal will be able to cross the surveillance area in less than 100 ms. In case we exceed that 10% threshold, we plan on increasing the number of access points on the receiver node. The frame drop percentage will be computed by comparing the number of frames transmitted against the number of frames received and ensuring that the loss percentage does not go above 10%.

3. Decompress all the incoming frames fast enough:

As mentioned above, the receiver node will be collecting frames from the 6 camera nodes, decompressing them, and then driving the display. The decompression algorithm has to be fast enough to ensure concurrent streaming from all 6 camera nodes without high latency or computation errors. We will be using an FPGA on the receiver node to perform the decompression, and if an issue arises with the FPGA, we plan on opting for a larger FPGA with higher compute capability and parallel processing techniques.

4. Optimize performance to minimize power consumption:

One of the major requirements of any portable security system is that it doesn't need to be charged often. Keeping this in mind, we envision our system being able to run for at least 24 hours on a single charge / battery. The entire system's performance, including the camera's feed capturing, compression, transmission, decompression, and streaming to the portable monitor, has to be optimized so that the setup works for at least 24 hours on a single charge / battery. Our contingency plan would be to increase the battery size if the system ends up taking too much power, even after final optimizations.
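As a reference for the delta-compression fallback mentioned in risk 1, the sketch below shows the basic idea of encoding only the byte-wise difference between consecutive frames. No such code has been written yet; the names here are purely illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* Purely illustrative delta-compression idea: store each frame as the
 * difference from the previous frame. For a mostly static scene the deltas
 * are mostly zeros, which a simple entropy coder can then shrink cheaply. */
static void delta_encode(const uint8_t *prev, const uint8_t *curr,
                         uint8_t *delta, size_t len)
{
    for (size_t i = 0; i < len; i++)
        delta[i] = (uint8_t)(curr[i] - prev[i]); /* wraps modulo 256 */
}

static void delta_decode(const uint8_t *prev, const uint8_t *delta,
                         uint8_t *curr, size_t len)
{
    for (size_t i = 0; i < len; i++)
        curr[i] = (uint8_t)(prev[i] + delta[i]);
}
```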

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward? 

 

No major changes have been made yet; however, we still need to decide on the final specs for the camera (240p or not) and the other technical specs for the microcontroller and monitor, based on our use case.

 

Provide an updated schedule if changes have occurred. 

No schedule changes as such.

Neelansh’s Status Report 2/10/2024

This was the first week after initial team selection and project ideation. We finalized what we will be creating over the course of the semester, and divided up tasks based on each team member’s interests and proficiency. I will be managing more of the software stack, and handling the ESP32 on the receiver node. 

 

My major role will be to ensure that all the data being transmitted from the 6 camera nodes is received appropriately on the receiver, and is then transmitted in the correct format to the FPGA for further processing. 

 

A major task for this week was the Proposal Presentation. I was the speaker, and therefore had to rehearse and plan the presentation with my teammates. I believe the presentation went well, as per the feedback received from my peers and the instructors. There are no images that we took during the presentation; however, the proposal presentation slides have been uploaded to the website for reference.

 

My teammates and I met multiple times over the course of the previous week to finalize the slides and rehearse before going into the presentation. Apart from that, I timed myself and rehearsed solo by recording myself, talking into the mirror, and planning out exactly what I would be saying.

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule? 

 

We are currently on schedule and there are no major delays as of yet.

 

What deliverables do you hope to complete in the next week?

 

In the coming week, we plan to do more research into finalizing our use case and, consequently, the hardware specifications we require to fulfill our goals. This will involve multiple iterations of ideas and the selection of suitable hardware based on both our budget and intended use. My teammates and I will also do mathematical calculations and run simulations (perhaps trying out different FPGAs, monitor displays, microcontrollers, etc.) over the course of the week to finalize the hardware specifications we need, and we will then curate a list of all the products we would require.

Michael’s Status Report for 2/10/2024

For the remote camera node, I looked at the possible options that could fit our needs: TI's CC3200, the Raspberry Pi Zero W, and the ESP32. Ultimately, I settled on the ESP32, mostly because there is a robust ecosystem built up to support the chip while it offers a high degree of flexibility to accommodate our needs should they evolve. I specifically chose the ESP32-CAM development kit because it has a built-in camera connector, making wiring very easy and hassle free. The ESP32-CAM also has one of the highest clock speeds of the entire ESP32 lineup and has two execution cores, which should help to avoid any compute limitations.

I also looked into camera options and decided on using an OV2640 camera. The OV2640 supports standard resolutions up to 1600x1200, giving us a lot of flexibility in choosing resolutions. The OV2640 is able to use the camera connector on the ESP32-CAM without any modification, a very important consideration at the current stage.

Next week, I plan on putting in the initial order for one complete module (OV2640 and ESP32-CAM) and beginning to set up the toolchain in anticipation of parts arriving.

Introduction and Project Summary

When camping out in the wilderness, there is a need to set up a security perimeter to monitor one's surroundings. However, existing systems on the market are expensive, internet dependent, and overall not suited for the task of campsite surveillance. EyeSPy is a project that aims to change this. The system that we plan on building will allow the end user to monitor multiple camera streams simultaneously without the use of any wires while being cheaper than any commercial system that currently exists. Each camera will be fully wireless and battery powered for maximum flexibility in placement. The combined camera feed will be displayed on a portable monitor to allow for continuous surveillance should the user desire it.