Progress
I first tested the speech recognition pipeline on the RPI and it worked as desired. I then spent most of my time this week implementing the web application: I wrote the CSS styling based on the UI design diagrams and constructed the financial report charts. Users can now view financial report charts for the time range they select. The current page layouts are attached below.
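As a minimal sketch of the time-range selection behind the charts, the helper below filters dated records down to the chosen window before plotting. The record shape (date, amount) and the function name are assumptions for illustration, not the app's actual data model.

```python
from datetime import date

def filter_by_range(records, start, end):
    """Keep only the (date, amount) records inside [start, end] for charting.

    `records` is a hypothetical list of (datetime.date, number) pairs;
    the real app's data model is not described in this report.
    """
    return [(d, amt) for d, amt in records if start <= d <= end]
```

A call such as `filter_by_range(records, date(2024, 1, 15), date(2024, 3, 1))` would return only the records dated within that window, ready to feed into the chart.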
As for the VUI side, the user can now press the button to give voice commands, and the web app redirects to the corresponding page based on keywords in the input. For example, if the user says "generate report", the report page is rendered and the charts are displayed successfully.
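The keyword-to-page redirection can be sketched as a simple lookup over the transcribed command. The route names and keyword table below are assumptions for illustration; the report does not specify the app's actual URL scheme.

```python
# Hypothetical keyword-to-route table; the real routes may differ.
ROUTES = {
    "generate report": "/report",
    "home": "/",
    "settings": "/settings",
}

def route_for_command(command: str, default: str = "/") -> str:
    """Return the page route whose keyword appears in the transcribed command."""
    text = command.lower()
    for keyword, route in ROUTES.items():
        if keyword in text:
            return route
    return default  # fall back to the home page if no keyword matches
```

For instance, a transcription like "please generate report now" would map to the report page, which then renders the charts.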
I also wrote preliminary scripts for generating audio outputs. The script converts an input txt file to audio and plays it directly, and the audio file is deleted after playback to save storage space.
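The synthesize-play-delete flow can be sketched as below. The TTS engine is injected as a callable because the report does not name the library used; the `mpg123` player command is likewise an assumption for a typical RPI setup.

```python
import os
import subprocess
import tempfile

def play_text(text: str, synthesize, player_cmd=("mpg123",)) -> str:
    """Synthesize `text` to a temp audio file, play it, then delete the file.

    `synthesize(text, path)` is a hypothetical hook that writes audio to
    `path`; injecting it keeps the TTS engine swappable. Returns the
    (now-deleted) temp file path so callers can verify cleanup.
    """
    fd, path = tempfile.mkstemp(suffix=".mp3")
    os.close(fd)
    try:
        synthesize(text, path)
        subprocess.run([*player_cmd, path], check=True)
    finally:
        os.remove(path)  # delete after playback to save space on the Pi
    return path
```

The `finally` block guarantees the audio file is removed even if playback fails, which matters on an RPI with limited storage.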
Schedule
I am on schedule now.
Next Steps
I am planning to design the audio output template txt files and generate the corresponding content from user data within the web app, so that users receive audio output while interacting with the application.
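One way such a template could be filled before being sent to the audio script, assuming Python's `str.format` placeholders; the template text and field names here are placeholders, since the actual templates are still to be designed.

```python
# Hypothetical template text; the real templates are not yet designed.
TEMPLATE = "You spent {total} dollars between {start} and {end}."

def render_audio_text(template: str, **fields) -> str:
    """Fill a template string with user data before converting it to audio."""
    return template.format(**fields)
```

The rendered string could then be written to a txt file and handed to the existing text-to-audio script.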