Junrui’s Status Report for 4/27

Since I am the presenter for our team’s final presentation, I spent the first three days of this week preparing for it. Then I discussed with Lin and modified our system’s integration logic. Initially, when the system looped through the fingering data, it would take ‘slices’ of multiple lines and calculate the most common line, which made the code inefficient. We have now removed that step and settled on new mistake-tracking logic with mismatch tolerance, which should improve the latency. However, since I have some final homework deadlines this week, I didn’t have time to run new tests on the new implementation. I will continue to work on that with Lin on Sunday.
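To make the change concrete, here is a minimal sketch of what mistake tracking with mismatch tolerance could look like. This is an illustration, not our actual code: the function name, the line format, and the choice of counting consecutive mismatched lines are all assumptions.

```python
# Hypothetical sketch: instead of voting over a slice of lines, compare
# each incoming fingering line to the expected one and only record a
# mistake once the mismatch has persisted for `tolerance` consecutive
# lines. This avoids recomputing a "most common line" over every slice.
def track_mistakes(lines, expected, tolerance=3):
    mistakes = []
    run = 0  # consecutive mismatched lines seen so far
    for i, line in enumerate(lines):
        if line != expected:
            run += 1
            if run == tolerance:
                # record where the persistent mismatch started
                mistakes.append(i - tolerance + 1)
        else:
            run = 0
    return mistakes
```

Because each line is compared once and no slice is re-scanned, this runs in a single pass, which is where the latency improvement would come from.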

I am almost on track with the schedule in the integration phase, but the schedule is quite tight as the final report, demo, and video deadlines are approaching. Next week I plan to finish the integration, conduct more tests with real inputs from users, and also start working on the report and video.

Team’s Status Report for 4/20

The biggest risk we face now is still integration. Our individual components are complete, though some improvements could still increase accuracy. However, synchronizing the fingering detection data with the audio detection data is problematic: both the timing and the fault tolerance need precise refinement. The accuracy after integration currently falls short of our expectations, and the web app also incorrectly flags some minor deviations from the reference solution due to the inaccurate synchronization. To mitigate this risk, our team will run the integration tests together and assist Junrui in writing some of the integration code.

The schedule remains the same as last week, since the remaining time is allocated to integration tests.

Junrui’s Status Report for 4/20

During the past two weeks, I have been working on integrating the web app with the other two systems. I managed to write to the serial port to notify the fingering detection system of the start, and to read from the serial port to get real-time fingering info. The backend then stores the fingering info in a buffer and generates a relative timestamp from each line’s position in the buffer. For the audio detection system, I added a new API endpoint that triggers the audio detection script in the backend. In addition, I modified the logic of the practice page and added start, end, and replay buttons so that a user can only see their performance after the entire practice is recorded, since the audio detection process is not real-time.
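The buffering-and-timestamping step can be sketched as follows. This is a hedged illustration, not the real backend: `LINES_PER_SECOND` is a hypothetical constant standing in for whatever rate the fingering system actually streams lines at, and the function name is made up.

```python
# Assumed sketch: the backend buffers fingering lines as they arrive
# over serial and derives each line's relative timestamp from its index
# in the buffer, assuming a roughly fixed streaming rate.
LINES_PER_SECOND = 10  # hypothetical rate, not the real firmware value

buffer = []

def store_line(raw_line):
    """Append one fingering line and return its relative timestamp (s)."""
    index = len(buffer)
    buffer.append(raw_line)
    return index / LINES_PER_SECOND
```

One consequence of this design, noted below, is that any jitter in the actual streaming rate makes the derived timestamps drift from real time, which is exactly the synchronization issue we ran into.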

I am currently on schedule. However, since the timestamps generated for the fingering data are not very accurate, the synchronized result is not ideal most of the time. It’s hard for me to conduct integration tests and get results with moderate accuracy at this point. Next week, I plan to think of ways to improve the situation, better synchronize all three parts, and try to get better test results.

 

Extra question:

To successfully implement and debug my project, I had to learn JavaScript for dynamic client-side interactions in HTML. Since most of what Django renders is static, JS sections in the HTML helped me a lot in constructing the web app. I also learned the Web Serial API for integrating the web app with the fingering detection system, and SVG for dynamic diagram coloring. I used interactive tutorials, official documentation, online discussion forums like Stack Overflow, and hands-on experimentation to acquire this new knowledge and deepen my understanding.

Junrui’s Status Report for 4/6

This week our team discussed the modifications we needed to make to each individual part to move towards integration. I changed the logic of my app to let the user play after clicking start, and click replay to display the diagrams and feedback generated from the fingering and pitch data in the practice session. Since the expected outputs from our pitch detection system and fingering system contain start and end timestamps for each fingering/note, I was able to tolerate a 0.5 s mismatch between the synchronized reference data and user data, allowing users to play in a way that is not perfectly aligned with the reference.
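The 0.5 s tolerance check can be illustrated with a short sketch. Names and the exact comparison are assumptions; our real code operates on the full synchronized streams, but the per-note test reduces to something like this:

```python
# Illustrative sketch: a user note is accepted if its start and end
# timestamps are each within TOLERANCE seconds of the reference note's.
TOLERANCE = 0.5  # seconds, per the design decision above

def note_matches(ref, user, tol=TOLERANCE):
    """ref and user are (start, end) timestamp pairs in seconds."""
    return (abs(ref[0] - user[0]) <= tol and
            abs(ref[1] - user[1]) <= tol)
```

Checking start and end independently means a note played slightly early but held slightly long can still pass, which matches the goal of forgiving imperfect alignment rather than imperfect notes.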

I am on track this week. Next week I will test with the actual results from the other parts to see whether the web app functions as expected, and also check whether the 0.5 s threshold is suitable for real-world data.

Junrui’s Status Report for 3/30

This week I continued to work on my web app. As I mentioned in the meeting, I was struggling with the misplacement of diagram parts and the strange display of the user interface. Since I had exams on Wednesday and interviews afterward, I didn’t have much time to fix the problem. It still persists, and I am considering researching some graphics libraries to solve it.

Since our group has decided on a new schedule, I am almost on track this week. Next week is interim demo week, and after the interim demo I plan to work with my teammates to start the integration.

Team’s Status Report for 3/23

The most significant risk we face now is that integration may start later than we planned in the schedule, so if problems occur during integration, we may not have enough time to fix them. The contingency plan is to ensure the systems function after integration but lower the metrics we aim to reach. We are also refining each component to make sure it can work independently, which we believe will greatly reduce the burden of debugging integration mistakes.

There is no change made to our existing design this week.

Junrui’s Status Report for 3/23

This week I managed to display different text and diagrams based on different inputs. I hardcoded the user input as a vector, where 0 indicates a key is unpressed and 1 indicates it is pressed, and the diagram and feedback for the correct fingering change based on this vector. The practice page is now almost finished.
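The mapping from the 0/1 vector to feedback can be sketched roughly as below. The key names and feedback messages are placeholders, not the actual strings in the app, and the real version also drives the SVG diagram coloring:

```python
# Rough sketch: compare the user's hardcoded press vector against the
# reference fingering and emit one feedback line per mismatched key.
KEYS = ['thumb', 'index', 'middle', 'ring']  # placeholder key names

def fingering_feedback(user, reference):
    """user and reference are lists of 0/1, one entry per key."""
    feedback = []
    for key, u, r in zip(KEYS, user, reference):
        if u == 1 and r == 0:
            feedback.append(f'Release the {key} key.')
        elif u == 0 and r == 1:
            feedback.append(f'Press the {key} key.')
    return feedback  # an empty list means the fingering is correct
```

Keeping the vector as the single source of truth means swapping the hardcoded input for live glove data later only requires changing where the vector comes from.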

I am on track this week. Next week I am planning to link the database and implement the history statistics, and also check local hosting methods. If time permits, I will work with Lin to start integration with the pitch detection system.

Junrui’s Status Report for 3/16

This week I was working on the other pages of our web app. Since these pages only involve some static descriptive text, the process went smoothly. I also refined the practice page’s UI a bit to make it look clearer. I was not able to start mapping the contents of the practice page to specific inputs, so my progress is still slightly behind the overall schedule, although last week’s plan was almost fulfilled. I will try to do more work on this over the weekend.

Next week I need to start the user testing and the content mapping, and plan to prepare for the integration.

Junrui’s Status Report for 3/9

Last week, my efforts were concentrated on crafting the foundational elements of the practice page for the web app. I have processed and integrated fingering diagrams for common fingerings, enabling the display of these diagrams within the application interface. Additionally, I was able to display pre-determined, hard-coded feedback related to common fingering mistakes. This feedback, though not dynamically generated based on real-time user input, serves as an initial step towards an interactive practice page.

My progress is slightly off track, but not by much, mostly because I had planned to work during spring break but several personal matters prevented me from doing so. To mitigate the delay and catch up with the overall project timeline, I plan to extend the time I spend on this project in the upcoming week. This effort will mainly focus on filling in the contents of the other pages and preparing for user testing. Integrating dynamic feedback and fingering diagrams based on real-time user input is a complex feature, so I hope to tackle it as part of the user testing phase, which will be split between next week and the week after.

Team’s Status Report for 2/24

The most significant risk now is that all of us are slightly behind schedule, since we all have midterms coming up and assignments due. To manage this risk, we will discuss and revise our plan and work together in next week’s mandatory lab to catch up with the schedule as quickly as possible.

One change to our existing design is on the audio processing side. After the presentation on Wednesday, we received suggestions from the instructors about the rhythm processor. We initially planned to use only a pitch detector when processing the audio, but since we need to handle rhythms rather than isolated single notes, a rhythm processor is definitely needed. Lin is now researching rhythm processors by reading papers and learning from previous projects, and the other teammates will also help if problems arise during implementation.

An updated schedule is below: