Tahaseen’s Status Report for 04/27

This week, I worked with Caroline on integrating the homography logic into the overall Flutter UI. This required some changes to the calibration process and to how images are input. I also added more error handling so that the program fails gracefully instead of crashing, for example when more than 4 corners are detected. A problem I have been working on is the light gradient across different lighting conditions: as the light disperses with the projector’s angle, the far side of the projection becomes considerably dimmer and washed out. To fix this, I may have to move the overall projection closer to the projector. We also standardized the size of the projection space to ensure that we can work reliably in a set space while remaining flexible enough to tune to other spaces.
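
As a concrete illustration of that error handling, here is a minimal sketch of the corner-count guard, assuming an OpenCV-style pipeline; the function and variable names are placeholders rather than our actual project code.

```python
# Minimal sketch of the corner-count guard described above, assuming an
# OpenCV-style pipeline. Names are illustrative placeholders.
import cv2
import numpy as np

def compute_table_homography(corners, projector_size=(1280, 720)):
    """Map detected table corners to the projector frame.

    Fails loudly instead of crashing downstream when detection
    returns the wrong number of corners (e.g., more than 4).
    """
    if len(corners) != 4:
        raise ValueError(f"expected 4 table corners, got {len(corners)}")

    w, h = projector_size
    src = np.array(corners, dtype=np.float32)
    dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
    return cv2.getPerspectiveTransform(src, dst)
```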

This coming week, I will be spending a lot of time completing the final report and filling out the poster. Additionally, I will be adding wire management and aesthetic factors to the setup (specifically the camera mount). As needed, I will continue tuning the calibration and overall system display.

Caroline’s Status Report for 04/27

This week, I continued to work on integration. We switched to using my laptop for the project, so I was mostly working on system integration. The calibration script has been added to the pipeline so that the program waits for the user to interact during the table calibration step. Previously, we assumed calibration would happen without user interaction; now, when the user starts a new calibration, the web app shows a loading screen -> tells the user to move the red dots on the table -> the user confirms on the web app that they are done -> the web app shows a calibration loading screen until calibration is complete. I have also been fine-tuning the voice commands to work more consistently when everything is running together. Recognition works very well when the module runs independently, but it still misses too many commands when the whole system is running.
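
For reference, here is a rough sketch of that hand-off in Python; the real system coordinates this through the web app and sockets, so `show_screen`, `calibrate`, and the confirmation hook are hypothetical stand-ins.

```python
# Rough sketch of the calibration hand-off described above. The real
# pipeline coordinates this over the web app and sockets; these names
# are hypothetical stand-ins.
import threading

user_confirmed = threading.Event()

def on_user_confirm():
    """Called by the web-app handler when the user presses 'Done'."""
    user_confirmed.set()

def run_calibration_flow(show_screen, calibrate):
    show_screen("loading")        # initial loading screen
    show_screen("move_red_dots")  # prompt the user to place the markers
    user_confirmed.wait()         # block until the user confirms
    show_screen("calibrating")    # loading screen while the script runs
    calibrate()                   # run the calibration script
    show_screen("done")
```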

The voice commands are a bit behind schedule, as I hoped to be finished by now, but I don’t think this will take too much longer.

To Do:
– improve voice command accuracy
– integration and recipe processing
– polish web app (appearance-wise only)

Team Status Report for 04/20

Our main risk right now is meeting the user requirement that this device work on any table. We are tuning the configuration to the table and lighting setup of the conference room we are using, so it is unknown how well our system will run on other tables. For now, we are focused on getting our initial configuration to work, and then we will focus on generalizing to other tables. If we do not have enough time for this, we will reduce the scope of the system, perhaps still supporting different tables but only ones similar to the one in the conference room. We will also have to tune the system to the room and table where we hold the final showcase, which is the next step.

No changes were made to the system.

Implementing the software on the AGX was pushed back due to hardware constraints. No other schedule changes.

Tahaseen’s Status Report for 04/20

This week, I finalized a camera mount set to an appropriate height for our testing environment in HH 1303. I also decided to use red markers to guide the user in outlining the table in cases of projector overflow. Processing these markers sped up calibration significantly and made the table-outline detection more consistent. We are circumventing the rainbow-streaking problem by adjusting the colors selected for projection and reducing the overall amount of projected content. After these updates and the new camera setup, I have been working on mapping the projection onto the table so that the warped output retains a consistent brightness. This means further adjusting the height and angle of the projector to find an optimal setup.
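
As a sketch of the red-marker processing, here is roughly how the markers could be picked out with an OpenCV HSV threshold; the ranges and names are assumptions, not our tuned values.

```python
# Sketch of red-marker detection via an OpenCV HSV threshold; the
# ranges and names are illustrative assumptions, not our tuned values.
import cv2
import numpy as np

def find_red_markers(frame_bgr):
    """Return centroids of red calibration markers in a camera frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges.
    mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255])) | \
           cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate contours
            centers.append((int(m["m10"] / m["m00"]),
                            int(m["m01"] / m["m00"])))
    return centers
```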

This coming week, I will finalize my process and complete the integration of calibration into the UI. This will enable us to run everything from one device and one package. In this project, I learned a lot more about calculating homographies and about the limitations of a projector system. I also learned how to use Flutter, which I had never used before. My main strategy was to rely on forums and datasheets to gain a full understanding of the components I was using. For Flutter, I relied on its online documentation and on example apps other people had built with it.

Caroline’s Status Report for 04/20

Last week, I worked on changing the voice module to use the GPU instead of the CPU. This involved configuring CUDA on my device and changing function calls in Python. I did see better results when the system was running, but it still needs to be tested with a fully integrated system. This week, I focused on moving our design to the AGX. I had many issues while installing it, but I have been making progress. First, I realized that the device given to us had only 32 GB of storage, so we had to buy an SSD to continue using it. I attempted to make progress with just the 32 GB, but I used so much storage that the device would no longer boot, and I had to reflash it. I got the SSD on Friday and started moving our design onto it. I had a lot of trouble getting CUDA to work, but it has been working as of Saturday morning. CUDA worked easily on my own laptop when I tested it this week, but it was very difficult on the Jetson. I am almost done working through compatibility issues with our code on the AGX as well.
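
For context, the CPU-to-GPU change amounts to selecting a CUDA device when loading the model. The sketch below assumes a PyTorch-backed speech model such as openai-whisper, which may not match our actual voice module; the model size and file name are illustrative.

```python
# Sketch of the CPU-to-GPU switch, assuming a PyTorch-backed speech
# model (openai-whisper here). Model size and file name are illustrative.
import torch
import whisper

device = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("base", device=device)

# fp16 inference is only valid on the GPU
result = model.transcribe("command.wav", fp16=(device == "cuda"))
print(result["text"])
```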

I am a bit behind schedule. I was hoping to have finished installing things on the AGX earlier in the week, but it was pushed back due to the hardware issue. By the end of Saturday, I expect our code to run fully on the AGX.

This coming week, I plan to tweak my Flutter UI apps to work smoothly with the overall system. The UI is already mostly implemented, but it will change slightly as we integrate further.

Team Status Report for 4/6

Our biggest risk this week is integrating all of our code onto the AGX Xavier. We don’t anticipate this being a terribly big risk, but it is something we have yet to do. It is a high priority this week, and we hope to finish moving all of the current code onto the device. This will also give us the opportunity to test the camera with the AGX, which we anticipate will give us significantly higher performance. We began working on this process on Friday. We have not made any other changes to our timeline, and we anticipate being on track.

Validation:

  • User Acceptance Study: Our primary method of validation will be the user acceptance study, because it allows us to assess how well the design flow suits the user experience. The study will cover factors such as ease of using the UI, taste of the product, training time, and intuitiveness. This will ensure that we meet our basic use case requirements.
  • Total Error Rates: Total error rates for hangups, crashes, and general errors that occur throughout the user’s experience are key to understanding how we can handle errors better. Part of this is also accounting for cases where an error is detected and cases where it is not. This will ensure that we meet our engineering design requirements.
  • Multi-Recipe Functionality: Our system should be compatible with multiple recipes in our library. This prevents the danger of hard-coding to a specific recipe, which is not what we want. This will ensure we meet both our engineering design requirements and our general use case requirements.
  • Latency: Latency is important to validate at the system level, with all of the subsystems working together. Since several of our subsystems are computationally expensive, running them in parallel should not introduce noticeable latency while the user is interacting with the system. This is key to our engineering design requirements and also impacts our use case requirements.
  • Expected vs. Recorded Time: This validation compares the time a recipe is expected to take against how long it actually takes when the user interacts with the device. This is a major use case requirement because it conveys the actual effectiveness of our product.

Tahaseen’s Status Report for 04/06

This week, I focused on putting together the entire pipeline on the actual projector and camera setup. There were a lot of lessons learned during this portion of integration: we need a more stable mounting mechanism and clearer guidance for the user during setup. Furthermore, when testing the camera with the projector, I realized that the camera sees the projection in its separate RGB states because of its faster fps. This will likely not be a problem, however, once we grayscale and threshold the frames, which is necessary for the auto-calibration process anyway. The USB camera we were testing with has a fisheye effect that our actual camera does not have. None of these factors will delay our timeline significantly, but it does mean I need to put more bandwidth toward manufacturing more secure mounts for the camera and projector.
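
A minimal sketch of that grayscale-and-threshold step, assuming OpenCV; the threshold value is illustrative and would be tuned per room.

```python
# Minimal sketch of the grayscale-and-threshold step mentioned above;
# the threshold value is illustrative and would be tuned per room.
import cv2

def binarize_projection(frame_bgr, thresh=200):
    """Collapse the camera's RGB color states into a stable binary image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return binary
```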

Next week, I will manufacture the mounts for the projector and camera. I will also work on speeding up the auto-calibration by running some computer vision on the captured images.

Verification:

  • Projector Calibration
    • Ensure the bleed-over onto the floor is within the range specified in the requirements. If not, update the table corner detection to better identify the table bounds.
    • Ensure that the spacing matches the distances specified in the requirements. If not, adjust the table bounds.
  • Hardware
    • Ensure that the projector mount is secure and not susceptible to falling from minor bumps. Verified by checking the camera-detected change in the projection.
    • Ensure that the camera mount is secure and not susceptible to falling from minor bumps. Verified the same way.
  • Recipe Processing
    • Ensure that the recipe is split into appropriate steps. This will be assessed through the user acceptance process.

Caroline’s Status Report for 04/06

I continued to work on both the web application and the projector interface. Both components are mostly functional, but they need some more work. For example, the web application currently works with hard-coded data, and I am making it so that recipe data can be dynamically loaded from files on the device. I was also working on integrating the calibration step into the user flow. For the demo, we assumed that an old calibration was already saved; now I am working on giving the user the option to start a new calibration and wait for the script to finish in the backend.
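
One way the dynamic recipe loading could look is reading recipe JSON files from a directory at startup; the directory layout and schema here are assumptions for illustration.

```python
# Illustrative sketch of loading recipe data from files instead of
# hard-coding it; the directory layout and JSON schema are assumptions.
import json
from pathlib import Path

def load_recipes(recipe_dir="recipes"):
    """Load every recipe JSON file in the directory into a dict."""
    recipes = {}
    for path in Path(recipe_dir).glob("*.json"):
        with open(path) as f:
            recipes[path.stem] = json.load(f)
    return recipes
```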

I am on schedule.

This upcoming week, I will install the WiFi card on the AGX and make sure that the network interface is set up properly. I will also try to reduce the latency in the running processes that we noticed during the demo. I want to focus on these things this coming week and then work on the rest of the UI if I have time.

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

One module that I am in charge of is the voice commands module. In the design review, we outlined tests that I was planning to run to verify that the voice commands work. One aspect is the latency of the voice commands. During the demo, I noticed a higher latency than expected, so we will run tests where we say each command and measure how long it takes for the command to be recognized in the script. It should take only 2-3 seconds to register a command. This test is important for figuring out whether the latency is a problem within the voice command module itself or only arises under integration. Another test checks the accuracy of the commands themselves: we will say commands and count how many are correctly identified. Additionally, we will repeat this with people talking in the background to measure accuracy in that environment.
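
A simple harness for the latency test could look like the sketch below; `recognize_once` is a hypothetical blocking hook into our recognizer, and the measured time includes the utterance itself, so it is an upper bound on processing latency.

```python
# Simple latency-test harness; recognize_once is a hypothetical blocking
# hook into our recognizer. The measured time includes the utterance
# itself, so it is an upper bound on processing latency.
import time

def timed_recognition(recognize_once, n_trials=20, budget_s=3.0):
    latencies = []
    for _ in range(n_trials):
        input("Press Enter, then say the command: ")
        start = time.monotonic()
        command = recognize_once()          # blocks until a command registers
        latency = time.monotonic() - start
        latencies.append(latency)
        print(f"{command!r} in {latency:.2f}s "
              f"({'OK' if latency <= budget_s else 'over budget'})")
    return latencies
```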

Another module that I am in charge of is the web application. I will test each possible navigation path and make sure that there are no errors while a user traverses the website. I will also test the display on different devices, both physically and through simulation, to make sure that the interface spacing and component sizes are styled correctly across screen sizes.

To verify the projector interface, I will make sure that all the required elements are present and function correctly together, and that different components, such as the video and the timer, work simultaneously.

Tahaseen’s Status Report for 03/30

This week, I worked on translating the warp homography code to Flutter and tried it on the actual planned projector setup; previously, I had been using a smaller projector for testing. There have been several snags in designing the calibration for the initial warping. Since the current method requires user input for the homography, the process is a little harder and less user-friendly than intended. The checkerboard is meant to make this easier but is still under quality testing. Integrating with Flutter has been more difficult than anticipated given its different hierarchical structure. Furthermore, I think it may be necessary to design a lip for the new projector stand to ensure the projector sits comfortably on the stand while optimizing the projection onto the table.
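
For reference, the checkerboard step could look like the following OpenCV sketch; the board dimensions are illustrative, not our actual pattern.

```python
# Sketch of checkerboard detection, assuming OpenCV's chessboard
# detector; the board dimensions are illustrative, not our actual pattern.
import cv2

def detect_checkerboard(frame_bgr, pattern=(9, 6)):
    """Find inner checkerboard corners and refine them to subpixel accuracy."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    return found, corners
```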

I am on track this week, but plan to spend significant time on streamlining the calibration procedure.

Caroline’s Status Report for 03/30

I worked on the backend integration. I was able to build a web page for the web app user interface, launch it on a server using Python, and create a socket that connects the user interface interactions with the rest of the system. I also switched from a queue to a pub/sub model, which is an easier and more efficient way to send messages because multiple modules need to see values pushed by another module. I used ZeroMQ in Python to do this and successfully implemented it in the web app user interface. Additionally, I have been working with my team on integrating the system at large.
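
A minimal version of that ZeroMQ pub/sub setup is sketched below; the port and topic names are placeholders, and each function would run in a separate module/process.

```python
# Minimal ZeroMQ pub/sub sketch; port and topic names are placeholders,
# and each function would run in a separate process.
import time
import zmq

def publisher():
    """Runs in the module that produces events (e.g., the web-app backend)."""
    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://127.0.0.1:5556")
    while True:
        pub.send_string("ui/next_step")  # topic-prefixed message
        time.sleep(1)

def subscriber():
    """Runs in any module that needs to react to those events."""
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, "ui/")  # filter by topic prefix
    while True:
        print(sub.recv_string())
```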

On schedule.

Next week, I will continue to integrate with my team by implementing the pub/sub model in other modules and also polishing the UI for demo day.