Aditi’s Status Report for 12.09.23

This week, I focused mostly on integration. I was able to fix all of the bugs with integrating the slide matching model, and I am continuing to integrate the graph description model. I also worked on the final portfolio and on the script for the final video.

Tomorrow, before the final demo, I need to finish integrating the graph description model, and our team needs to perform some user testing. We are still behind schedule because the graph description model and its integration are not finished, but we hope to complete both before the demo.

Aditi’s Status Report for 11.11.23

This week, I was able to integrate the hardware and the software by creating a server on my app side. This server waits for a STOP signal from the stop button and then immediately stops the audio (a minimal sketch of this stop-signal flow is below). I was having problems earlier with the audio not playing, but after updating my Xcode version it works as normal. Now, every part of my app is fully working except the API key functionality. That will not take much time, but I don't want to add it unless I'm sure it's absolutely necessary: I am not certain whether every professor needs to send their API key, or whether just one will suffice. This week, I plan to have my teammates create test Canvas classes with their own accounts to see if that matters, and then either add or remove that functionality.
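For reference, here is a minimal sketch of the stop-signal pattern. The real listener lives inside the Swift app, so this is Python purely for illustration, and the port number and the literal "STOP" message are my assumptions rather than the actual protocol.

```python
import socket

HOST, PORT = "0.0.0.0", 5005  # assumed port for the stop-button connection

def stop_audio():
    # Placeholder: in the app, this would halt the currently playing audio.
    print("Audio stopped")

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()          # one connection per button press
        with conn:
            msg = conn.recv(64).decode().strip()
            if msg == "STOP":           # signal sent by the hardware stop button
                stop_audio()
```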

I am very much on track with my progress: my subsystem is almost completely done, and I am just waiting on integration from the ML side. Next week, I plan to help Nithya collect data for the ML model and do anything else my teammates need.

These are the tests I’m planning on running:
1. Have a visually impaired person test the app for compatibility with VoiceOver. I will give them my phone, have them navigate the app, and ask for feedback on where it can be improved to make it more accessible.
2. Have a visually impaired person rate the usefulness of the device and of the ML model's outputs by having them use the app with a pre-prepared slideshow, and compare these ratings with our use-case requirement (>90% "useful").
3. Have sighted volunteers rate the usefulness of the device and of the graph description model, and compare with our use-case requirement (>95% "useful"). If they feel we are excluding too much of the graph's information, we will refine the ML model and gather and train with more data.
4. Test the latency from button press to the start/stop of sound, and compare with our use-case requirement (<100 ms); a small timing sketch follows this list.
5. Test the accuracy of our ML model by comparing it to the test-set error, per our use-case requirement (>95% accurate).
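For test 4, here is a rough sketch of how the latency measurement could be automated. It assumes the stop listener described in the 11.11 report either replies or closes the connection as soon as it stops the audio; the host, port, and trial count are placeholders, not the real setup.

```python
import socket
import time

def measure_stop_latency(host="192.168.1.10", port=5005, trials=20):
    # Times the round trip from sending STOP to the app acknowledging it.
    samples_ms = []
    for _ in range(trials):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=1) as s:
            s.sendall(b"STOP")
            s.recv(8)  # returns as soon as the app replies or closes the socket
        samples_ms.append((time.perf_counter() - start) * 1000)
    return sum(samples_ms) / len(samples_ms)

if __name__ == "__main__":
    avg_ms = measure_stop_latency()
    print(f"average latency: {avg_ms:.1f} ms ({'PASS' if avg_ms < 100 else 'FAIL'})")
```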

Aditi’s Status Report for 10.28.23

This week, I worked on the Canvas scraping software and finished it completely. It was pretty difficult to figure out everything about using the API, so this was the bulk of my work for the week. First, I had to create a new Canvas instructor account in order to create new test classes, and then I had to get the API key associated with the account. Then, I had to perform the following steps in the code (a short sketch of the flow follows the list):

1. Make a request to Canvas for the most recent file under the “Lectures” module, if such a module exists. I sent my API token in the Authorization header to authenticate the request.

2. Canvas sent back a JSON file of metadata about the most recent file. This was the most confusing part of the API: when you request a specific file, it returns this metadata JSON rather than the file itself, with no warning, and the JSON’s name is exactly identical to the name of the file.

3. Within this JSON there is a value called “html_url,” which is the actual location of the file we need to extract, so we must make another request to that URL in order to retrieve the file.
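Here is a rough Python sketch of that flow, put together from the steps above. The course ID and token are placeholders, pagination is ignored, and the exact endpoints and the “html_url” field should be double-checked against the Canvas API docs rather than taken as the definitive implementation.

```python
import requests

API_BASE = "https://canvas.instructure.com/api/v1"  # assumed Canvas instance
API_TOKEN = "<instructor API token>"                # placeholder
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def latest_lecture_file(course_id):
    # Step 1: find the "Lectures" module and its most recent file item.
    modules = requests.get(f"{API_BASE}/courses/{course_id}/modules",
                           headers=HEADERS, params={"include[]": "items"}).json()
    lectures = next((m for m in modules if m.get("name") == "Lectures"), None)
    if lectures is None:
        return None
    file_items = [i for i in lectures.get("items", []) if i.get("type") == "File"]
    if not file_items:
        return None
    newest = file_items[-1]  # assumes items are listed oldest to newest

    # Step 2: requesting the file returns a JSON blob of metadata, not the file.
    meta = requests.get(f"{API_BASE}/files/{newest['content_id']}",
                        headers=HEADERS).json()

    # Step 3: follow the URL in the metadata to download the actual file.
    # "html_url" is the field named in step 3; verify against your instance.
    return requests.get(meta["html_url"], headers=HEADERS).content
```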

Next week, I plan on modifying the app to add a settings screen, where users can set the classes they are taking and the class code associated with each one. This also means I need to add functionality to support extracting information from multiple classes (a rough sketch of what that might look like is below). I also plan to refine the UI to make it more accessible, following the WCAG 2 guidelines.
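On the extraction side, the multi-class support might look something like this sketch, which maps each class name to its Canvas course ID (the “class code” the settings screen would store) and reuses the latest_lecture_file function from the sketch above. The class names and IDs are made-up examples.

```python
# Made-up example of the settings the app would store.
class_settings = {
    "18-500 Capstone": 41235,
    "Intro to ML": 58812,
}

def fetch_all_latest_lectures(settings):
    # Returns {class name: latest lecture file bytes, or None if not found}.
    return {name: latest_lecture_file(course_id)
            for name, course_id in settings.items()}
```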

I am still ahead of schedule and on track to finish the app within the next couple of weeks. The final thing I need to do after next week is to integrate the button press/camera with the Flask server, and then my subsystem will be complete.