Team Status Report for 12/10/2022

  1. What are the most significant risks that could jeopardize the success of the project?
  • Pre-processing: Pre-processing is complete, meets all design requirements, and also has a machine-learning-based alternative with longer latency.
  • Classification: Classification is essentially complete and meets the requirements set forth in our design review. We do not foresee any risks in this subsystem.
  • Hardware/integration: We are still measuring the latency of the entire system, but we know that end-to-end latency is around 5 seconds on the AGX Xavier, a big improvement over the Nano. We will continue to measure and optimize the system, but we are at some risk of missing our latency requirement.
  • Report: We are beginning to outline the contents of our report and video. It is too early to say if any risks jeopardize our timeline.

2. How are these risks being managed? 

Nearly everything has been completed as planned. 

3. What contingency plans are ready? 

Post-Processing: Given how everything is coming together, no contingency plans are necessary at this point.

4. Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)?

Since last week, we have been able to measure the performance of our system on the AGX Xavier, and have chosen to pivot back to the Xavier, as we had originally planned in our proposal and Design Review. 

5. Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward? 

This change was necessary to better meet our latency requirements in the classification subsystem, where inference is now 7x faster. This also improved the overall latency of the system.

6. Provide an updated schedule if changes have occurred. 

We are on schedule to present our solution at the final demo and make final measurements for our final report without making any schedule changes.

Team Status Report for 12/03/2022

What are the most significant risks that could jeopardize the success of the project?

  • Pre-Processing: Currently, pre-processing has been fully implemented to its original scope.
  • Classification: Classification achieves 99.8% accuracy on the dataset it was trained on (using an 85% training / 5% validation split). One risk is that the model requires 0.0125s per character on average, while the average page of braille contains ~200 characters; this amounts to 2.5s of latency, which breaches our latency requirement. However, that requirement was written on the assumption that each image would contain around 10 words (~60 characters), in which case latency would be around 0.8s (see the sketch after this list).
  • Post-Processing: Spellchecking is a very broad problem, and one of the many tradeoffs we have had to make for this project is complexity versus efficiency, along with setting realistic expectations for the time we are allocated. The main risk here is oversimplifying in a way that overlooks certain errors and puts our final output at risk.
  • Integration: The majority of our pipeline has been integrated on the Jetson Nano. Because we have communicated well between members, calls between phases are working as expected. We have yet to measure the latency of the integrated software stack.
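
To make the classification latency risk concrete, here is the arithmetic from the bullet above as a minimal sketch (the constants are the measurements quoted in this report):

```python
# Latency estimate from the measurements above: 0.0125 s per character.
PER_CHAR_S = 0.0125        # average model inference time per character
FULL_PAGE_CHARS = 200      # approximate characters on a full braille page
ASSUMED_CHARS = 60         # ~10 words, the assumption behind our requirement

print(f"full page: {PER_CHAR_S * FULL_PAGE_CHARS:.2f} s")  # 2.50 s
print(f"10 words:  {PER_CHAR_S * ASSUMED_CHARS:.2f} s")    # 0.75 s
```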

How are these risks being managed? 

  • Pre-Processing: Further accuracy adjustments are handled in the post-processing pipeline.
  • Classification: We are experimenting with migrating back to the AGX Xavier, given our rescoping (below). This could give us enough of a performance boost that even wordier pages can be translated under our latency requirement. Another option is to experiment with making the input layer wider: right now, our model accepts 10 characters at a time, and it is unclear how sensitive latency is to this parameter (see the sketch after this list).
  • Post-Processing: One of the main ways to mitigate risk in this area is thorough testing and analysis of results. By sending different forms of data through my pipeline specifically, I am seeing how the algorithm reacts to specific errors.
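
To probe how sensitive latency is to the input-layer width, something like the sketch below could be used, assuming the model is exported to ONNX with a flexible batch dimension (the file name and the single-channel 28x28 input shape are placeholder assumptions):

```python
import time

import numpy as np
import onnxruntime as ort

# Measure per-character cost as the number of characters per inference grows.
sess = ort.InferenceSession("braille_classifier.onnx")  # placeholder path
input_name = sess.get_inputs()[0].name

for width in (1, 10, 20, 50):
    batch = np.random.rand(width, 1, 28, 28).astype(np.float32)
    sess.run(None, {input_name: batch})  # warm-up
    start = time.perf_counter()
    for _ in range(20):
        sess.run(None, {input_name: batch})
    per_char_ms = (time.perf_counter() - start) / (20 * width) * 1000
    print(f"{width:3d} chars/inference: {per_char_ms:.2f} ms per character")
```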

What contingency plans are ready?

  • Classification: If we are unable to meet requirements for latency, we have a plan in place to move to AGX Xavier. The codebase has been tested and verified to be working as expected on the platform.
  • Post-Processing: Given how everything is coming together, no contingency plans are necessary at this point.
  • Integration: If the latency of the integrated software stack exceeds our requirements, we are able to give ourselves more headroom by moving to the AGX Xavier. This was part of the plan early in the project, since we were not able to use the Xavier in the beginning of the semester.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)?

Based on our current progress, we have decided to re-scope the final design of our product to be more in line with our initial testing rig design. This would make our project a stationary appliance that would be fixed to a frame for public/educational use in libraries or classrooms.

Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

This change was necessary because we felt that the full wearable product would be too complex to process and too big a push for the time we have left. While less flexible, we believe this design is also more usable for the end user, who no longer needs to gauge how far to hold the document from the camera.

Provide an updated schedule if changes have occurred. 

Based on the changes made, we have extended integration testing and revisions until the public demo, giving us time to make sure our pipeline integrates as expected.

How have you rescoped from your original intention?

As mentioned above, we have decided to re-scope the final design of our product to be a stationary appliance, giving us control over distance and lighting. This setup was more favorable to our current progress and allows us to meet the majority of our requirements and make a better final demo without rushing major changes.

Team Status Report for 11/19/2022

  • What are the most significant risks that could jeopardize the success of the project?

This week, the team debriefed and began implementing changes based on feedback from our Interim Demo. We primarily focused on making sure that we had measurable data that could be used to justify decisions made in our system’s design. 

Pre-processing: I am primarily relying on the stats values (the left and top coordinates of individual braille dots, as well as the width and height of neighboring dots) from the cv2.connectedComponentsWithStats() function. I have checked the exact pixel locations in the output matrices against the original image and confirmed that the values are accurate. The redundant dots I am currently seeing, sporadically distributed in nearby locations, come from an inherent limitation of connectedComponentsWithStats(), and I need to remove them using non-maximum suppression. There is a small issue with this step, and since I would rather not write the whole function myself, I am looking for ways to fix it; once this is done, I am nearly finished with the pre-processing procedures (a sketch of the approach follows).
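
A minimal sketch of this dot-detection step, assuming a thresholded page image; the suppression here uses cv2.dnn.NMSBoxes as one possible built-in, since the exact non_max_suppression function we end up with is still being worked out:

```python
import cv2
import numpy as np

# Detect candidate braille dots, then suppress near-duplicate components.
# The image path and both thresholds are placeholders.
gray = cv2.imread("braille_page.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
# Each stats row holds [left, top, width, height, area]; row 0 is background.
boxes = [stats[i, :4].tolist() for i in range(1, n)]
scores = [float(stats[i, cv2.CC_STAT_AREA]) for i in range(1, n)]

# Among overlapping candidates, keep the largest-area component.
keep = cv2.dnn.NMSBoxes(boxes, scores, score_threshold=0.0, nms_threshold=0.3)
dots = [boxes[i] for i in np.asarray(keep).flatten()]
print(f"{len(boxes)} raw components -> {len(dots)} dots after suppression")
```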

Classification: Latency risks for classification have mostly been addressed this week by changing the input layer of our neural network to accept 10 images per inference. The number of images accepted per inference will be tuned later to optimize against our testing environment. In addition, the model was converted from MXNet to ONNX, which is interoperable with NVIDIA’s TensorRT framework. However, using TensorRT seems to have introduced some latency, counterintuitively making inference faster on the CPU.
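
For reference, this kind of conversion is typically done with MXNet's ONNX export along the lines below; the file names, checkpoint, and the 10-image input shape are placeholders for our actual artifacts:

```python
import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

# Export an MXNet symbol/params checkpoint to ONNX. File names and the
# (10, 1, 28, 28) input shape are placeholders; 10 is the number of
# character images accepted per inference.
onnx_path = onnx_mxnet.export_model(
    "braille-symbol.json",       # serialized network definition
    "braille-0020.params",       # trained weights
    [(10, 1, 28, 28)],           # input shape(s)
    np.float32,                  # input type
    "braille_classifier.onnx",   # output file
)
print("exported:", onnx_path)
```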

Post-processing: The primary concern with the post-processing section at the moment is determining how audio will integrate with the Jetson Nano. Given the difficulties we had with camera integration, we hope this will be a less difficult process, since we only need to send audio out rather than also recognizing sound input.

  • How are these risks being managed? 

Pre-processing: I am looking further into the logic behind non_max_suppression for removing the redundant dots, to facilitate the debugging process.

Classification: More extensive measurements will be taken next week using different inference providers (CPU, TensorRT, CUDA) to inform our choice for the final system. 
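
A rough sketch of the measurement harness we have in mind, assuming the ONNX model from above and an onnxruntime build that includes the relevant execution providers (the path and input shape are placeholders):

```python
import time

import numpy as np
import onnxruntime as ort

# Compare per-inference latency across execution providers on the Jetson.
sample = np.random.rand(10, 1, 28, 28).astype(np.float32)  # placeholder shape
for provider in ("TensorrtExecutionProvider", "CUDAExecutionProvider",
                 "CPUExecutionProvider"):
    if provider not in ort.get_available_providers():
        print(f"{provider}: not available in this build")
        continue
    sess = ort.InferenceSession("braille_classifier.onnx", providers=[provider])
    name = sess.get_inputs()[0].name
    sess.run(None, {name: sample})  # warm-up; TensorRT builds its engine here
    start = time.perf_counter()
    for _ in range(50):
        sess.run(None, {name: sample})
    print(f"{provider}: {(time.perf_counter() - start) / 50 * 1000:.2f} ms")
```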

Post-processing: Now that the camera is integrated, it is important to shift toward the stereo output. I do think it will integrate more easily than the camera did, but it is still important to get everything connected as soon as possible to avoid last-minute hardware needs.

  • What contingency plans are ready? 

Pre-processing: If the built-in non_max_suppression() function does not work after continuous debugging attempts, I will have to write it myself. 

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)?

Classification: The output of the classification pipeline has been modified to include not only a string of translated characters, but also a dictionary mapping the lowest-confidence character indexes to the next 10 most likely letters at each position. This metadata is provided to help improve the efficiency of the post-processing spell checker.
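
For illustration, the augmented output could look something like the sketch below; the field names are our own placeholders rather than a finalized schema:

```python
# Illustrative shape of the augmented classification output. Field names
# and values are hypothetical placeholders.
classification_output = {
    "text": "translated braille strng",
    # character index -> the next 10 most likely letters at that position,
    # provided for the characters the model was least confident about
    "low_confidence": {
        21: ["i", "o", "a", "u", "e", "y", "r", "s", "t", "n"],
    },
}

def candidates(output: dict, index: int) -> list:
    """Letters the spell checker should try first at a flagged index."""
    return output["low_confidence"].get(index, [])

print(candidates(classification_output, 21))
```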

  • Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward? 

This change was not strictly necessary, but it will significantly improve the overall efficiency of the pipeline if the spell checker is able to stand on its own. It also does not require any significant overhead in time or effort, so it is easy to implement.

  • Provide an updated schedule if changes have occurred. 

Team Status Report for 11/12/2022

  1. What are the most significant risks that could jeopardize the success of the project?
  • Pre-processing:
    • Currently, I am relying on OpenCV’s cv2.connectedComponentsWithStats() function, which outputs various statistical values about the input image, including the left and top coordinates as well as the width, height, and area of the most commonly appearing objects (braille dots, in our case). However, depending on the lighting and the quality of the original image, the accuracy of this stats function needs further testing to guide further modifications.
  • Classification:
    • On the classification side, one new risk introduced when testing our neural network inference on the Jetson Nano is latency. Since each character incurs around 0.1s of latency, processing characters sequentially on an especially long sentence could produce substantial delay.
  • Hardware:
    • The Jetson Nano hardware also presented some challenges due to its limited support as a legacy platform in the Jetson ecosystem. Missing drivers and slow package build times make bring-up particularly slow. This is, however, a one-time cost which should not have any significant impact on our final product.
  • Post-processing:
    • Another hardware related risk to our final product is the audio integration capabilities of the Nano. Since this is one of the last parts of integration, complications could be critical. 

2. How are these risks being managed? 

  • Pre-processing:
    • At a primary level, a pixel-by-pixel comparison between the image and the matrices printed to the terminal will be performed to understand the current accuracy level and to further tweak the parameters. Furthermore, OpenCV’s non_max_suppression() function is being investigated to mitigate some of the inaccuracies that can arise from the initial connectedComponentsWithStats().
  • Classification:
    • To address possible latency issues resulting from individual character inference latency, we are hoping to convert our model from the MXNet framework to NVIDIA’s TensorRT, which the Jetson can use to run the model on a batch of images in parallel. This should reduce the sequential bottleneck that we are currently facing.
  • Hardware:
    • Since hardware risks are a one-time cost, as mentioned above, we do not feel that we will need to take steps to manage them at this time. However, we are considering using a docker image to cross-compile larger packages for the Jetson on a more powerful system.
  • Post-processing:
    • After finishing camera integration, we will work on audio output through the USB port. We have a stereo adapter ready to connect to headphones (a quick smoke test is sketched after this list).
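
Once the adapter is connected, a smoke test along these lines should confirm audio output; this assumes the sounddevice package and that the adapter is the default output device:

```python
import numpy as np
import sounddevice as sd

# Play one second of a 1 kHz stereo tone through the default output device.
# If the USB adapter is not the default, pick it via sd.query_devices().
SAMPLE_RATE = 44100
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)
tone = (0.2 * np.sin(2 * np.pi * 1000 * t)).astype(np.float32)
sd.play(np.column_stack([tone, tone]), SAMPLE_RATE)  # two identical channels
sd.wait()  # block until playback finishes
```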

3. What contingency plans are ready? 

  • Classification:
    • If the inference time on the Jetson Nano is not significantly improved by moving to TensorRT, one contingency plan we have in place is to migrate back to the Jetson AGX Xavier, which has significantly more computing power. While this comes at the expense of portability and power efficiency, it is within the parameters of our original design.
  • Post-Processing:
    • There is a sound-board input/output PCB that we could attach to the Nano to play sound. This comes with added expense and complexity, but it seems more likely to prove effective.

4. Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)?

Integrating each of our individual components into our overall software pipeline did not introduce any obvious challenges, so we do not think any significant changes to our software system are necessary. However, in response to interim demo feedback, we are looking to create more definitive testing metrics for deciding on particular algorithms or courses of action. This will allow us to justify our choices moving forward and give our final report clarity. In addition to the testing, we are considering a more unified interaction between classification and post-processing that enables a more deterministic approach to identifying which characters are most often wrong.

5. Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward? 

The minor changes that we are making to the individual subsystems are crucial for the efficiency and effectiveness of our product, and they help make sure that we stay on top of optimal decisions and the advice given by our professors and TAs.

Team Status Report for 11/05/2022

What are the most significant risks that could jeopardize the success of the project?

One issue we ran into this week was with connecting the eCAM50 MIPI CSI camera that we had expected to use initially. Due to unforeseen issues in the camera driver, the Jetson Nano is left in a boot loop after running the provided install script. We have reached out to the manufacturers for troubleshooting but have yet to hear back.

Looking at the feed from our two alternative cameras, the quality of the video feed and the resulting captured images may not exhibit optimal resolution. Furthermore, the IMX219 camera, with its ribbon design and wide-angle FOV, is highly vulnerable to shakes and distortions that can disrupt the fixed dimensional requirements for the captured image, so further means of minimizing dislocation should be investigated.

How are these risks being managed? 

There are alternative cameras that we can use and have already tested connecting to the Nano. One is a USB camera (Logitech C920) and the other is an IMX219, which is natively supported by the Nano platform and does not require additional driver installation. Overall, our product is not at risk, but there are trade-offs to consider when deciding on a final camera. The C920 seems to provide a clearer raw image since some processing is done on-board, but it will likely have higher latency as a result.
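
For reference, both cameras can be opened through OpenCV: the C920 as a plain USB (V4L2) device, and the IMX219 through a GStreamer pipeline (nvarguscamerasrc) on the Nano. The resolution and framerate below are illustrative defaults, not final choices:

```python
import cv2

def open_c920(index: int = 0) -> cv2.VideoCapture:
    """Logitech C920 over USB."""
    return cv2.VideoCapture(index)

def open_imx219() -> cv2.VideoCapture:
    """IMX219 via the Nano's CSI port, using a GStreamer pipeline."""
    pipeline = (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink"
    )
    return cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

cap = open_c920()
ok, frame = cap.read()
print("frame captured:", ok, frame.shape if ok else None)
cap.release()
```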

We will mount the camera in a fixed place (rod, …) and create dimensional guidelines for placing the braille document to be interpreted. Since the primary users of our product may have visual impairments, we will add tangible physical guides for positioning the braille document.

What contingency plans are ready?

We have several contingency plans in place at the moment. Right now we are working with a temporary USB camera alternative to guarantee the camera feed and connection to the Nano. In addition, we also have another compatible camera that is smaller and has lower latency with a small quality trade off. Finally, our larger contingency plan is to work with the Jetson AGX Xavier connected to the wifi extension provided by the Nano, and mount the camera for the most optimal processing speeds.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)?

Since our last report, no significant changes have been made to the system design of our product. We are still in the process of testing out the camera integrated with the Jetson Nano, and depending on how this goes we will make a final decision as we start to finalize the software. In terms of individual subsystems, we are considering different variations of filtering that work best with segmentation and the opencv processing, as well as having the classification subsystem be responsible for concatenation to simplify translation of the braille language.

Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Since we did not change any major sections of the system design, we do not have any associated costs currently. Right now, we are prepping for the interim demo, and if everything goes well we will be in solid form to finish the product on schedule. Individually, the work will not vary too much and can be shifted around when people are more available to help. At this stage of the project, most of the software for our individual subsystems has been written, and we can start to work more closely with each other toward the final demo.

Provide an updated schedule if changes have occurred.

Since we had been working based off of the deadlines on the Canvas assignments, our Gantt chart was structured under the impression that Interim Demos would take place during the week of Nov. 14. We have since changed our timeline to match the syllabus.

Team Status Report for 10/29/2022

  • What are the most significant risks that could jeopardize the success of the project?

At the moment, our main risks are associated with meeting timing requirements while making sure we can work with the hardware effectively. Since our eCAM50 is built for the Jetson Nano, we are temporarily pivoting to the Nano platform and working on getting the camera integrated. From this experience, we are seeing that an extended ribbon cable will be essential for connecting the camera to the Jetson while ensuring reasonable wearability. However, as important as wearability is, we do not want it to hinder our overall product capabilities. One thing that Alex mentioned early in the design process was that lengthening the camera cable could significantly affect latency. Until now, we have mostly been working individually on our personal systems; now that we are testing camera integration with the Nano and beginning to integrate our independent parts on the device, we may need to rely on WiFi, which the Nano provides and the Xavier AGX does not.

  •  How are these risks being managed? 

We currently have 22 days until the expected date of the interim demo on Nov 16th. Our goal for the interim demo is to be able to showcase how raw captured data is processed at each stage of our pipeline, from the camera, to the text-to-speech. Because we are temporarily pivoting to the Nano, we are putting less of a focus on latency so that we can focus on demoing functionality. As a result, we plan to work extensively on camera and software integration starting this coming week, and speaker integration the week after. We believe that such a schedule will guarantee enough time to troubleshoot any of the potential issues and further optimize our system. 

  • What contingency plans are ready? 

If integration of the e-CAM50 goes wrong or the Nano does not provide the performance that we need, we have a final contingency plan of falling back on the non-wearable fixed format using the Jetson Xavier AGX instead of the Nano. However, with proper time management and collaboration, we firmly believe that everything will be completed in time.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)?

Due to several constraints of the Jetson Xavier AGX (wireless connectivity, weight, I/O), we are considering altering our plan to work with the Jetson Nano. The Jetson Nano would provide wifi capabilities as well as integrate well with the camera that we already have. It also serves to decrease power draw in case we want to package our final prototype as a portable battery powered wearable. The main trade-off would be the performance difference. With this being said, we believe that the Nano will provide us with enough speed to match the necessary pre/post processing as well as classification subsystems. 

  • Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward? 

This change is necessary due to our product’s use case, and need for mobility. With the Nano being smaller and containing its own Wifi, we can better integrate our design and make it more wearable and usable. The main cost for this change would be the decrease in performance capabilities, but we believe it will be able to handle our processing sufficiently. Going forward, we do not believe it will change the overall course of our schedule, and the next two weeks will still be essential for the development of our product before the interim demo. 

  • Provide an updated schedule if changes have occurred.

Following Fall Break, we debriefed and re-assessed our progress and what we needed to do before the Interim Demo. As a result, we’ve moved camera and speaker integration up earlier in our Gantt chart. As we move closer to the integration phase, we will need to keep a closer eye on the Gantt chart to make sure everyone is on the same page and ready to integrate their deliverables.

Team Status Report for 10/08/2022

  1. What are the most significant risks that could jeopardize the success of the project?

      This week, the team focused on wrapping up and presenting our design review. We also spent some time experimenting with the Jetson and individually researching approaches for our respective phases. This early exploratory work has set us up nicely to begin writing our in-depth design report and finalize our bill of materials to order parts.

      Based on our research, we have also identified some further potential risks that could jeopardize the success of our project. While researching the classification phase, we realized that the time spent training iterations of our neural network may become a blocker for optimization and development. Originally, we had envisioned that we could use a pre-trained model or that we only needed to train a model once. However, it has become clear that iteration will be needed to optimize layer depth and size for best performance. Using the equipment we have on hand (Kevin’s RTX 3080), we were able to train a neural network for 20 epochs (13 batches per epoch) in around 1-2 hours. 

2. How are these risks being managed?

      To address training time as a possible blocker, we have reached out to Prof. Mukherjee to discuss options for an AWS workflow using SageMaker. Until this is working, we will have to be selective and intentional about what parameters we would like to test and iterate on.

3. What contingency plans are ready?

     While we expect to be able to use AWS or other cloud computing services to train our model, our contingency plan will likely be to fall back on local hardware. While this will be slower, we will simply need to be more intentional about our decisions as a result. 

     Based on initial feedback from our design review presentation, one of the things we will be revising for our design report will be clarity of the datapath. As such, we are creating diagrams which should help clearly visualize a captured image’s journey from sensor to text-to-speech. 

4. Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)?

     One suggestion that we discussed for our design review was the difference between a comfortable reading speed and a comfortable comprehension speed. Prof. Yu pointed out that while we would like to replicate the performance of braille reading, it is unlikely that text-to-speech at this word rate would be comfortable to listen to and comprehend entirely. As a result, we have adjusted our expectations and use-case requirements to take this into account. Based on our research, a comfortable comprehension speed is around 150wpm. Knowing this metric will allow us to better tune our text-to-speech output.
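
As one concrete way to apply this, pyttsx3 (a candidate offline text-to-speech engine, not necessarily our final choice) exposes the speaking rate in words per minute directly:

```python
import pyttsx3

# Cap the speaking rate at the ~150 wpm comprehension speed found above.
engine = pyttsx3.init()
engine.setProperty("rate", 150)  # pyttsx3's rate property is in wpm
engine.say("Sample output spoken at roughly one hundred fifty words per minute.")
engine.runAndWait()
```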

5. Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward? 

Placing an upper limit on the final output speed of translated speech does not incur any monetary or performance costs.

6. Provide an updated schedule if changes have occurred. 

Based on our Gantt chart, it seems that we have done a good job so far of budgeting time generously to account for lost time, and we are mostly on pace with our scheduled tasks. In fact, we are ahead of schedule in some tasks thanks to experimentation we performed to drive the design review phase. However, one task we forgot to account for in our original Gantt chart was the Design Report. We have modified the Gantt chart to take this into consideration.

Team Status Report for 10/01/2022

1. What are the most significant risks that could jeopardize the success of the project?

This week, our team was focusing on solidifying the technical details of our design. One of the main blockers for us was flashing the AGX Xavier board and getting a clean install of the newest OS. Because the necessary software was not available on the host devices that we had access to, we spent some time setting up the Xavier temporarily on the board itself. During this process, we also considered the pros and cons of using an Xavier when compared to the more portable, energy efficient Nano. 

Our work is split into three parts. The first is pre-processing, where the initial picture is taken and processed. In this initial phase, due to the difficulty of natural-scene braille detection, we are starting our image-processing work with reasonably cropped images of braille text. However, since our use-case requirements and apparatus model call for a head-mounted camera, we may need to consider different camera mounts that provide more reliable capture angles in case ML-based natural-scene braille detection does not meet our 90% use-case accuracy requirement.

The second phase of our algorithm is the recognition phase. For this phase, because we want to work with ML, the greatest risks are poorly labeled data and compatibility with the pre-processing phase. For the former, we encountered a public dataset that was not labeled correctly for English Braille. Therefore, we had to look for alternatives that could be used instead. To make sure that this phase is compatible with the phase before it, Jay has been communicating with Kevin to add the pre-processing output to the classifier’s training dataset.

The final phase of our algorithm is post-processing, which includes spellcheck and text-to-speech in our MVP. One design consideration was whether to use an external text-to-speech API or build our own in-house software. We decided against an in-house solution because we think the final product is better served by a tried-and-true publicly available package, particularly for our latency metrics.

2. How are these risks being managed? 

These risks are being mitigated by working through the process of getting the Xavier running with a freshly flashed environment. This will allow us to work through some of the direct technical challenges, like connecting to ethernet, storage capacity, and general processing power. By staying proactive and looking ahead, we can scale down to the Nano if necessary; if steady progress is made on the Xavier, we will be able to demo and use it for our final product. Overall, we have divided our work so that no one is heavily reliant on anyone else or on the hardware working perfectly (though it is, of course, necessary for testing and requirements).

3. What contingency plans are ready? 

As far as our core device is concerned, we have set up a Jetson Xavier AGX in REH340 and can run it via SSH. We will also be ordering a Jetson Nano, since we have concluded that our programs could also run on the Nano with reasonable latency, along with other perks such as WiFi support and the relative compactness of the device. For the initial image pre-processing phase, in case ML-based natural-scene detection proves unreliable, we are considering various ways to mount the camera in a regulated manner that fixes the initial dimensions of the image. For the second phase of our primary algorithm, recognition, we have researched the possibility of using the Hough transform, which is supported by OpenCV's Hough transform functions, in case ML-based recognition proves unreliable (a sketch follows below). For our final phase, audio output, we are currently investigating various web-based text-to-speech APIs.
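
A minimal sketch of that Hough-based fallback using OpenCV's HoughCircles; the image path and every parameter value here are illustrative starting points rather than tuned numbers:

```python
import cv2

# Detect braille dots as small circles with the circular Hough transform.
gray = cv2.imread("braille_page.png", cv2.IMREAD_GRAYSCALE)
gray = cv2.medianBlur(gray, 5)  # suppress noise before the transform
circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT,
    dp=1,          # accumulator resolution (same as the input image)
    minDist=10,    # minimum distance between dot centers, in pixels
    param1=100,    # Canny high threshold used internally
    param2=15,     # accumulator threshold: lower -> more (possibly false) dots
    minRadius=2, maxRadius=8,
)
count = 0 if circles is None else circles.shape[1]
print(f"detected {count} candidate dots")
```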

4. Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)?

Overall, no significant changes were made to our existing system design except for creating a solidified testing approach. This testing approach validates the physical design of our product, quantifies “success”, and tests in a controlled environment. Alongside it, we are still deciding whether the Xavier is the correct fit for our project or whether we will have to pivot to the Nano for its WiFi capabilities and simpler design. This would only change our system specs at the moment.

5. Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward? 

Adding a fully defined testing plan will allow us to properly measure our quantitative use case requirements, as well as give our audience/consumer a better understanding of the product as a whole. In addition, the Nano will not cost any more for us to use as it is available, but it may cost time to get adjusted to the new system and capabilities. This system has a significantly lower power draw (positive), but a slower processing speed (negative). Overall, we think that it will still be able to meet our expectations and mold well into our product design. Because we are still ahead of schedule, this cost will be a part of our initial research and design phase. 

6. Provide an updated schedule if changes have occurred. 

Everything is currently on schedule and does not require further revision.

Team Status Report for 09/24/2022

1. What are the most significant risks that could jeopardize the success of the project?

At this point in our project, most of our significant risks involve the general success of the software we deliver. Alongside this, we are relying on the processing capabilities of the hardware to support our quantitative requirements and will need to optimize for proper performance. Also, if we are unable to find significant prior research on braille CV detection, we will need a more bottom-up development approach that could shift time from optimization to research.

2. How are these risks being managed? 

By staying ahead of schedule in development, we can ensure we have plenty of time to do both unit testing and integration testing to give us a baseline for what needs to be worked on and optimized. We can continuously develop software in parallel so that it is easier to sidestep or add to the process if needed. 

3. What contingency plans are ready? 

Working steps have been modularized and parallelized to facilitate team cooperation and collaboration.

4. Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)?

While we are actively workshopping our design, some of the major considerations we made in the past weeks apply to narrowing the scope of our project and ironing out the details of our MVP. After speaking with Professor Yu, it became clear that we wanted to prioritize functionality and performance to meet our use-case requirements, with form factor and comfort as a secondary goal. Therefore, we decided to follow Alex’s advice to develop our MVP on the Jetson Xavier, which would provide ample headroom for optimization. However, due to its size and weight, the Jetson would not fit comfortably on a helmet or mounted to glasses, as we had originally envisioned. Therefore, we are likely to update our MVP to a wearable camera linked to the Jetson worn on a vest.

Following our Proposal Presentation, we received a lot of insightful feedback from our instructors and peers. Namely, there was some confusion about the technical details of our MVP and what our test environment would look like. As we move into the design stage of our development cycle, we will make sure to emphasize these features for our report and presentation. This is especially important so that our team has a clear idea of our goal by the end of the semester and so that we can order relevant materials ahead of time. There were also questions about how our solution addressed the problems we introduced in our use case. As we have narrowed our scope down to a more manageable size, we have also managed some tradeoffs in terms of functionality. However, we hope that our MVP will provide a strong foundation from which the path to an ideal solution will become clear.

5. Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward? 

Specifically, obtaining the actual Jetson Xavier board made us realize that it would be realistically impossible for users to carry all the parts on top of a helmet, due to its weight and bulk. Therefore, we will be adopting a combination of camera-mounted glasses and a vest for our initial build design. Since we have been in the design phase so far and have not built the hardware yet, there are no costs that require further mitigation.

6. Provide an updated schedule if changes have occurred. 

We have not made any changes to our schedule as a result of the updates we made to our design this week. Looking ahead on our Gantt chart, next week will be dedicated to planning out the technical details of our project and preparing our Design Review presentation. This will likely involve experimenting with the hardware and software involved in developing our MVP.