Jaspreet’s Status Report for 12.09.23

This week, I continued to help gather images to test and train our ML models. I went to TechSpark and captured about 400 images of presentation slides displayed on a large monitor. The slides used differently formatted slide numbers at the bottom right, and testing with these images helped us determine which format works best for our slide matching model. I also added clips to the side of our component case so that it can now attach to the side of a pair of glasses. However, in the middle of the week I tested positive for COVID and was unable to work for several days.

As with last week, my progress is behind schedule because the team's is. Since we have not finished integration, I still have to deploy our code on the Jetson; the current plan is to do so on Sunday once integration is finalized. As a team, we must complete integration before the demo on Monday, and complete user testing of our system either before or during that demo. After that, what remains is the final set of deliverables for our project.

Jaspreet’s Status Report for 12.2.23

Since the last status report, I have made significant progress on the hardware subsystem, helped integrate our subsystems, and gathered data for our ML models.

Regarding the hardware subsystem, the component case has finally been printed and assembled, and is completely finished. Printing the case was far more difficult than I expected: prints failed halfway through, someone stopped one of my prints, and even successful prints had minor design errors that forced a reprint. Despite these problems, I have now assembled the glasses attachment, which is pictured below.

The camera is positioned on the right face, the buttons are positioned on the top face, and the charging port, the power switch, and other ports are positioned on the left face. As you can see, there is some minor discoloration, but fixing this is not a priority at the moment. If I have extra time next week, I should be able to fix this relatively easily.

Furthermore, I helped integrate the subsystems. Specifically, I added code to the Raspberry Pi so that once the start button is pressed, it not only sends an image to the Jetson but also receives the extracted description corresponding to that image. It then forwards this description to our iOS app, where it is read aloud. For now, though, our code runs locally on our laptops instead of on the Jetson, since we are prioritizing making the system functional end to end.
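The Pi-side round trip can be sketched roughly as below. This is a minimal illustration using only the standard library; the server URL, the `/describe` route, and the function names are placeholders I chose for the sketch, not our exact implementation.

```python
import urllib.request


def build_image_request(server_url: str, image_bytes: bytes) -> urllib.request.Request:
    """Build the POST request that carries one captured frame to the Jetson.

    The endpoint path and content type here are illustrative assumptions.
    """
    return urllib.request.Request(
        url=server_url + "/describe",
        data=image_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )


def request_description(server_url: str, image_bytes: bytes, timeout: float = 10.0) -> str:
    """Send the image and block until the extracted description comes back."""
    req = build_image_request(server_url, image_bytes)
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        # The description text is then forwarded to the iOS app to be read aloud.
        return resp.read().decode("utf-8")
```

On the actual device, `request_description` would run inside the start-button callback, with the returned text relayed to the phone.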

Finally, I spent many hours gathering image data for our ML models and manually annotating it. For our slide matching model, I used a script I had previously written for the Pi to gather images of slides with slide numbers in boxes at the bottom right; one such picture is shown below. We were able to gather a couple hundred of these images. For our graph description model, I helped write descriptions for a few hundred graphs, including information about their trends and general shape.

My progress is currently behind schedule, since our team's progress is behind schedule. We are supposed to be testing our system with users, but cannot do so until the system is complete. I am also supposed to have deployed our code on the Jetson, but cannot do so without finalized code. We will have to spend time as a team finalizing the integration of our subsystems in order to get back on track.

In the next week, I hope to be able to put all necessary code on the Jetson so that the hardware is completely ready. I also will help finalize integration for our project. After that, I will help with user testing, as well as working on final deliverables for our project.

Jaspreet’s Status Report for 11.18.23

This week, I made progress on the CAD for our hardware component case. It was slightly difficult to import the CAD models for some of the components we bought, but all that's left is finalizing the case design that surrounds them. I also spent time this week programming our Raspberry Pi so we could use it to gather image data for our slide matching model. We were able to take close to 6000 images to train on. Finally, I ordered new cameras with different FOVs and dimensions so that we can compare their performance and effects on our ML models.

My progress is behind schedule. I expected to have a printed component case by the end of the week, but have not been able to do so yet. In order to catch up, I will finish the CAD by Sunday so that we can print out the case as soon as possible. In the next week, I hope to fully complete my subsystem so we can begin testing.

Jaspreet’s Status Report for 11.11.23

This week, I finished implementing the hardware pipeline for sending images from our Raspberry Pi to our Jetson. There were several steps involved. First, I made the Jetson run a Flask server on startup, so that it can receive images from the Raspberry Pi through a POST request. I then made the Raspberry Pi camera capture also run on startup, so that pressing the start button sends an image. I also worked with Aditi to set up the stop button, so that any audio description playing from the iOS app stops as soon as the button is pressed. I experimented with the RPi's camera settings, but will need to tune them further to capture satisfactory images.
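The Jetson-side server described above can be sketched as a minimal Flask app. The `/image` route name, port, and reply format are assumptions for illustration; the real server hands the bytes to our ML pipeline rather than just acknowledging them.

```python
from flask import Flask, request

app = Flask(__name__)


@app.route("/image", methods=["POST"])
def receive_image():
    # The Pi POSTs the raw JPEG bytes as the request body.
    jpeg_bytes = request.get_data()
    if not jpeg_bytes:
        return "no image received", 400
    # ...pass jpeg_bytes to the slide matching / description models here...
    return f"received {len(jpeg_bytes)} bytes", 200


if __name__ == "__main__":
    # Started on boot (e.g. via a systemd unit) so the Pi can reach the
    # server as soon as the Jetson powers on.
    app.run(host="0.0.0.0", port=5000)
```

Binding to `0.0.0.0` lets the Pi reach the server over the shared WiFi network rather than only from the Jetson itself.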

My progress is now on schedule, since we have adjusted our schedule to reflect our current level of progress. Despite being on track, there is still plenty of work to be done. The first and most important task is to design and print the component case that will attach to the user's glasses; according to our schedule, this must be completed within the next week. If I finish faster than expected, I will work on decreasing the latency between when the start button is pressed and when the Jetson receives a new image, which is currently much larger than we estimated during design.

According to our schedule, in two weeks I will be running the following tests on the hardware system:

  1. I will measure the latency between when the start button is pressed and when the Jetson receives a new image. This is one component of the total latency between the button press and the user receiving an audio description. I estimated the hardware component of the latency at about 600 ms, but did not properly account for the time it takes to actually capture an image. I do not expect this to be a large issue, as our use case latency requirement allows multiple seconds of leeway.
  2. I will measure the total size and weight of the device. In our requirements, we stated that it must be at most 25 mm x 35 mm x 100 mm in dimension and at most 60 g in weight.
  3. I will measure the battery life and power consumption of the device. We stated that the device should be usable for at least 6 hours at a time before needing to be recharged.
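For test 1, a simple timing harness is enough. The sketch below is an assumption about how I might run it: `trigger_fn` stands in for the full press-capture-send path, and the function returns the mean latency over repeated trials.

```python
import time


def measure_latency(trigger_fn, n_trials: int = 20) -> float:
    """Return the mean wall-clock latency of trigger_fn in milliseconds.

    trigger_fn stands in for the full capture-and-send path: simulate a
    start-button press, capture a frame, and wait for the Jetson to
    acknowledge receipt. time.monotonic() is used so clock adjustments
    on the Pi cannot skew the measurement.
    """
    total = 0.0
    for _ in range(n_trials):
        start = time.monotonic()
        trigger_fn()
        total += time.monotonic() - start
    return (total / n_trials) * 1000.0
```

Averaging over many trials smooths out WiFi jitter, which on our network can easily dominate a single measurement.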

Jaspreet’s Status Report for 11.04.23

This week, I continued to make progress on the pipeline for sending images to the Jetson from our Raspberry Pi. The system can now save an image after the start button is pressed, and it can take input from the stop button as well. The button setup currently lives on a breadboard so that it is ready for this week's demo; in the final assembly it should fit compactly within the component case. Once I have set up our HTTP server on the Jetson, we will be able to transfer the captured image via POST request.
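The button handling can be sketched as below. The capture and send callables are kept abstract so the callback logic is plain Python; the commented gpiozero wiring and the BCM pin numbers are assumptions for illustration, not our exact breadboard layout.

```python
# Sketch of the start-button handling on the Pi. capture_fn returns the
# JPEG bytes of a freshly taken photo; send_fn ships those bytes to the
# Jetson (e.g. via an HTTP POST once the server is up).

def make_button_handler(capture_fn, send_fn):
    """Compose capture and send into a single start-button callback."""
    def on_start_pressed():
        image = capture_fn()
        send_fn(image)
    return on_start_pressed


# On the Pi itself, the GPIO wiring would look roughly like this
# (gpiozero ships with Raspberry Pi OS; pins 17 and 27 are placeholders):
#
#   from gpiozero import Button
#   start_button = Button(17)
#   start_button.when_pressed = make_button_handler(capture, send)
#   stop_button = Button(27)   # signals the iOS app to halt playback
```

Keeping the callback as a pure composition of two functions makes it easy to test on a laptop with fakes before the hardware is assembled.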

My progress is behind schedule. I did not realize that both the Raspberry Pi and NVIDIA Jetson Orin Nano would require so many external components to operate. Specifically, I had to obtain microSD cards as well as various cables for display and input. I also had more trouble setting up the WiFi connections than anticipated, and in hindsight I should have asked for help as soon as I ran into issues. To catch up, I will need to speed up the process of designing and printing all of our 3D printed parts.

In the next week, I will first finish preparations for the interim demo, which include setting up the HTTP server on the Jetson and connecting both the Raspberry Pi and Jetson to campus WiFi. After the demo, I will finally begin designing our hardware component case as well as the textured button caps. This will put me back on track for completing the hardware subsystem on time.

Jaspreet’s Status Report for 10.28.23

This week I continued working on the image-to-server pipeline using our Raspberry Pi Zero and Unistorm camera. I realized that the OS image I had flashed onto the SD card was not compatible, so I went back and re-installed Raspberry Pi OS. I then reconfigured the Pi so that I can access it over SSH and use VNC Viewer. I still have to finish setting up the GPIO button input and sending an image from the camera to an external server.

I ended up having to spend time on work for other classes and was not able to complete the goals I set for this week. I plan to spend most of Sunday finishing this week's tasks so that I stay on schedule. Then, next week, I will begin creating a CAD model of our 3D printed component case.

Jaspreet’s Status Report for 10.21.23

This week, we received the hardware components that we ordered, and I will begin testing and assembling them once we are back from Fall Break. While waiting for the components, I tested capturing and sending images from the Raspberry Pi 4 and Arducam camera module that we borrowed, which will make it much easier to set up the same pipeline with the Raspberry Pi Zero and Unistorm camera module. Most of the rest of the week was spent on our design report, which took much longer than we expected.

My planned tasks for the near future are to set up the image-to-server data pipeline and create a 3D printed case for our hardware components. To accomplish this, I will have to look into how to capture an image with the Raspberry Pi on a button press, and how to then send that image to a remote web server. For the case, I will have to learn how to create a functional 3D model in CAD software, and look into different methods of attaching our device to the side of glasses.

My progress is not on schedule, as we had planned to receive our components earlier in the week. However, when I submitted our order forms I forgot to check them off with our group's TA, so our order was delayed. According to our schedule, I will need to set up the image-to-server pipeline and test data transfer from our camera by the end of the week to catch up. In the next week, I also hope to set up a server on the Jetson so that we can test sending our images to it. Since I am behind schedule, it will be necessary to spend extra time to finish these tasks by the end of the week.

Jaspreet’s Status Report for 10.07.23

This week, I ordered the hardware components that we plan on using for our project: the Raspberry Pi Zero, camera, battery, and NVIDIA Jetson. One issue I ran into was that the original camera I selected was out of stock in many stores and would take multiple weeks to arrive from others. Therefore, I ordered the backup camera instead, the Unistorm Raspberry Pi Zero W Camera. It has the same resolution and FOV and a very similar size, so I feel comfortable ordering it as a replacement. I also ordered an NVIDIA Jetson Orin Nano Dev Kit from ECE inventory, which will host the server running our ML models. Finally, I spent some time working with a Raspberry Pi 4 and Arducam module to test how to send an image wirelessly from the Pi; I plan on making more progress on this throughout the coming week.

I am slightly behind schedule: although I have ordered all of the hardware components, I have not yet looked into how to texture the buttons so that they can be easily differentiated. I don't expect this to take much time, and I should be able to settle on a solution this weekend. In the coming week, while I wait for components, I hope to continue testing how to send images wirelessly with a Raspberry Pi 4 and compatible camera. Since I don't expect that to take up much time, I plan to help my group members with their work, specifically gathering image data for training our ML models; I will get more information on what data to gather from Nithya.

Jaspreet’s Status Report for 09.30.23

This week, I selected the necessary hardware components for our design, including the camera, battery, and computing device.

Computing Device: Raspberry Pi Zero WH. Our computing device needs to send image data wirelessly to our server at the press of a button, and it needs to be small and light enough to attach comfortably to glasses. The Raspberry Pi Zero WH meets all of these requirements: the W indicates WiFi support, and the H indicates pre-soldered GPIO headers that we can connect to our buttons. It measures 65 mm x 30 mm x 10 mm and weighs 11 g, which is small enough for our purposes. Another plus is its built-in CSI camera connector, which we can take advantage of.

Camera: Arducam 5MP OV5647 Miniature Camera Module for Pi Zero. Since we are using a Raspberry Pi Zero, it makes sense to use a camera made specifically for that board, so I chose the Arducam Miniature Camera Module. The camera itself is about 6 mm x 6 mm and attaches via a 60 mm flex cable; in total it weighs about 2 g, which is small compared to other camera modules.

Battery: PiSugar 2 Power Module. After searching for rechargeable lithium batteries for the Raspberry Pi Zero, I came across the PiSugar 2, a custom board and battery made specifically for the Pi Zero that simplifies powering it. It weighs about 25 g, which is heavy, but most batteries that provide enough power for our use case requirements weigh about this much. Together, the Pi (11 g), camera (2 g), and battery (25 g) come to roughly 38 g, leaving about 22 g of our 60 g weight budget for the case, buttons, and wiring.

Buttons: Any medium-sized push buttons will work for our use case. I still need to investigate how to texture these buttons so that a blind user can easily distinguish the start button from the stop button.

Courses that were useful this week include 18-441 Computer Networks and 18-349 Introduction to Embedded Systems, where I learned about sending data over wireless connections and reading button inputs through GPIO pins.

My progress is now on schedule. In the next week I hope to order all necessary components, and begin working on the pipeline for sending an image from our camera through a Pi. I have acquired a Raspberry Pi 4 and a compatible camera that I can test on and use to gain insight into the work I will need to do once we receive our components.

Jaspreet’s Status Report for 09.23.23

This week, I focused primarily on preparing for the proposal presentation. My secondary goal was to do research on which hardware components we should be using for our design. These components are the camera, microcontroller, buttons, and battery.

I am behind schedule, as I expected to make more progress on selecting hardware components for our expected design. Therefore, I plan on spending extra time this weekend to catch up.

In the next week, I hope to have a complete initial list of selected hardware components, with detailed explanations of why each selection was made. This includes listing all components that were considered and the tradeoffs between them. SWaP-C must be considered for each component, especially since most of our use case requirements depend on the size, weight, and power consumption of our device.