Author: yunjiaz

Olina’s Status Report for 4/19/2025

This week, I finalized the fix for the gesture recording issue we identified earlier. I re-checked the dataset, re-identified the six gestures with the highest recognition success rates, and modified the CNN model architecture accordingly.

I also doubled the size of the dataset, making the model even more robust. As a result, the wand now correctly identifies all six target gestures with markedly improved performance.
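For illustration, one simple way to double an IMU gesture dataset is to append noise-jittered copies of each recording. The sketch below assumes gesture windows stored as NumPy arrays; the jitter scale is a hypothetical parameter, and the exact augmentation we used may differ.

```python
import numpy as np

def augment_with_jitter(X, y, noise_std=0.02, seed=0):
    """Double a gesture dataset by appending noise-jittered copies.

    X: array of shape (num_samples, timesteps, channels) of IMU windows.
    y: array of shape (num_samples,) of gesture labels.
    noise_std is a hypothetical jitter scale; tune it to the sensor's range.
    """
    rng = np.random.default_rng(seed)
    X_jittered = X + rng.normal(0.0, noise_std, size=X.shape)
    X_aug = np.concatenate([X, X_jittered], axis=0)
    y_aug = np.concatenate([y, y], axis=0)
    return X_aug, y_aug
```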

I remain on schedule as per the project timeline.

During development, I tried out a number of model architectures to determine which one best fits the task of classifying gestures from IMU data. I started with a simple RNN, which could capture basic motion patterns but generalized poorly. I then experimented with an LSTM architecture, which improved training accuracy but overfitted quickly on our small dataset and required long training times. To balance spatial and temporal modeling, I tried a Conv1D + LSTM hybrid, but it did not produce better accuracy. Lastly, I tested a Conv1D-based CNN with two small-kernel convolutional layers followed by flatten and dropout layers. This model had the highest validation accuracy throughout and stayed within the size limit; a sketch of it appears below.
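A minimal sketch of that final architecture, assuming Keras; the window length (150 timesteps, roughly 1.5 s of IMU data), channel count, filter counts, kernel size, and dropout rate are illustrative assumptions rather than the exact deployed values.

```python
import tensorflow as tf

NUM_CLASSES = 6  # six target gestures

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(150, 6)),           # (timesteps, IMU channels)
    tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.3),                    # regularize the small dataset
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```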

I learned new skills in time-series modeling, model optimization, and data augmentation. I watched YouTube tutorials to pick up key modeling techniques, consulted the TensorFlow documentation for implementation and tuning guidance, and read blog posts and Stack Overflow discussions to troubleshoot overfitting and understand best practices for dropout and kernel sizing. These resources helped me iterate quickly and tailor the model to our system constraints.

 

Olina’s Status Report for 4/12/2025

This week, I worked on diagnosing the unexpectedly poor performance of our wand gesture recognition system. One likely cause is the timer used during data collection: because of a bug in our timer setup code, each gesture was previously recorded over an 8-second window, which introduced a lot of irrelevant or noisy data, since most gestures take only a second or two to complete. We fixed this by shortening the recording window to 1.5 seconds per gesture and updated the rest of the code to support the new timing setup. I am now testing whether this change improves model performance; a sketch of the fixed-window recording loop is shown below. I am still on track and expect to finish by next week.
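A minimal sketch of a fixed 1.5-second recording window, assuming a hypothetical read_imu_sample() helper and a 100 Hz sampling rate; both are illustrative, not our exact firmware code.

```python
import time

WINDOW_S = 1.5        # new recording window (was 8 s)
SAMPLE_RATE_HZ = 100  # assumed IMU sampling rate

def record_gesture(read_imu_sample):
    """Collect IMU samples for a fixed 1.5-second window.

    read_imu_sample is a hypothetical callable returning one
    (ax, ay, az, gx, gy, gz) tuple from the sensor.
    """
    samples = []
    deadline = time.monotonic() + WINDOW_S
    while time.monotonic() < deadline:
        samples.append(read_imu_sample())
        time.sleep(1.0 / SAMPLE_RATE_HZ)  # pace reads at the sample rate
    return samples
```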

Team Status Report for 3/22/2025

Risks and Contingency Plans
We completed fine-tuning the firmware this week and began integration testing. So far, the system appears to be functioning as expected. Firmware remains the most critical component, but with the recent fine-tuning, we are more confident in its stability.

Design Changes
There are no changes to the design at this stage.

Schedule
We remain on schedule.

Olina’s Status Report for 3/22/2025

This week, I finalized the fine-tuning of the CNN model. I also finalized the input feature set, based on prior experimentation with different column combinations.

To address the trade-off between model complexity and overfitting, I adjusted the batch size and verified that a slightly larger batch size improved training stability without exceeding available memory limits.

Next week, I will move forward with deployment and integration into the system.

Olina’s Status Report for 3/15/2025

This week, I spent time refining and testing the CNN model. I employed a series of Conv1D layers with LeakyReLU activations and a final Dense layer for classification. I tuned the hyperparameters and experimented with various learning rates for the Adam optimizer; a learning rate of 0.001 gave the best balance between convergence speed and stability. I also experimented with different combinations of columns as input features and decided on the best-performing combination.
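A hedged sketch of that learning-rate experiment is below; the layer widths, input shape, and candidate learning rates are illustrative assumptions, and only the winning value of 0.001 comes from the experiments described above.

```python
import tensorflow as tf

def build_model(learning_rate, input_shape=(150, 6), num_classes=6):
    """Conv1D stack with LeakyReLU activations and a Dense classifier head.

    The layer widths and input shape here are illustrative assumptions.
    """
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv1D(16, kernel_size=3),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Conv1D(32, kernel_size=3),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Sweep candidate learning rates; 0.001 gave the best trade-off for us.
for lr in (1e-2, 1e-3, 1e-4):
    model = build_model(lr)
    # model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=30)
```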

There is a trade-off to manage between model complexity and overfitting. I also found that larger batch sizes give more stable training, but they require more memory to cache intermediate results during training; a toy batch-size comparison is sketched below.
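For illustration, a toy comparison along these lines, using synthetic stand-in data so the sketch runs on its own; the data shapes, batch sizes, and epoch count are all hypothetical.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for the real gesture data; replace with actual arrays.
X = np.random.randn(256, 150, 6).astype("float32")
y = np.random.randint(0, 6, size=256)

for batch_size in (16, 32, 64):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(150, 6)),
        tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(6, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(X, y, validation_split=0.2,
                        batch_size=batch_size, epochs=5, verbose=0)
    print(f"batch_size={batch_size}: "
          f"val_acc={history.history['val_accuracy'][-1]:.3f}")
```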

Next week, I will perform the final fine-tuning of our model.

Olina’s Status Report for 3/7/2025

Over the past two weeks, I compared the CNN, RNN, and LSTM models. While all three performed well in training, the CNN demonstrated the best generalization with minimal overfitting, the fastest training time, and high robustness to noise. In addition to the comparison, I worked on fine-tuning the models to improve their performance and reduce overfitting, focusing on optimizing hyperparameters and refining preprocessing techniques.

During fine-tuning, I encountered difficulties in balancing model performance and overfitting. Additionally, optimizing computational efficiency while maintaining accuracy was challenging.

Next week, I will complete the fine-tuning process and finalize model evaluations. Additionally, I will analyze the impact of fine-tuning adjustments and document findings to guide further improvements.

Team Status Report for 2/22/2025

Risks and Contingency Plans

The most significant risk right now is that our model evaluation is behind schedule, which may delay integration with the IR wand system. Since we need to compare the CNN, RNN, and LSTM models before finalizing one for deployment, delays in testing could hold up subsequent implementation and optimization work.

To mitigate this risk, Olina will prioritize completing model evaluations next week and continue working through the spring break weekend to ensure we stay on track.

Design Changes

No changes to our design at this point.

Schedule

We are behind schedule due to delays in model testing. However, we are working to catch up next week and will use spring break to ensure we meet our project milestones.

Olina’s Status Report for 2/22/2025

This week, I focused on setting up and refining the model evaluation process. I continued working with the CNN, RNN, and LSTM models, preparing them for a detailed comparison in terms of accuracy, speed, and computational efficiency.

To ensure a fair comparison, I refined the data preprocessing steps and standardized the input formats. I also set up evaluation scripts to measure key performance metrics such as accuracy, precision, recall, and inference time; a sketch of such a script is shown below. While initial testing is in progress, I have yet to complete a full analysis.
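A minimal sketch of this kind of evaluation script, assuming a Keras-style model and scikit-learn for the metrics; the function name and the choice of macro averaging are my illustrative assumptions.

```python
import time
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

def evaluate(model, X_test, y_test):
    """Measure accuracy, precision, recall, and per-sample inference time.

    Assumes a Keras-style model with predict() and integer class labels.
    """
    start = time.perf_counter()
    probs = model.predict(X_test, verbose=0)
    elapsed = time.perf_counter() - start
    y_pred = np.argmax(probs, axis=1)
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred, average="macro"),
        "recall": recall_score(y_test, y_pred, average="macro"),
        "inference_ms_per_sample": 1000 * elapsed / len(X_test),
    }
```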

To get back on track, I plan to complete all model evaluations next week by running full-scale tests and comparing the architectures based on their efficiency and accuracy. Since I am behind schedule, I will continue working during spring break to ensure that I meet the project goals.