Category: Olina’s Status Reports

Olina’s Status Report for 4/19/2025

This week, I finalized the fix for the gesture recording issue we identified earlier. I re-checked the dataset, re-identified the six gestures with the highest success rates, and modified the CNN model architecture accordingly.

In addition, I doubled the size of the dataset, making the model more robust. As a result, the wand now correctly identifies all six target gestures with much better performance.
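One common way to double an IMU gesture dataset is to add jittered copies of each recording. A minimal NumPy sketch, assuming Gaussian-noise augmentation and hypothetical array shapes (the report does not pin down how the extra data was produced):

```python
import numpy as np

def jitter(samples, sigma=0.01, seed=0):
    """Return noisy copies of IMU gesture windows.

    samples: array of shape (num_windows, timesteps, channels),
    e.g. accelerometer/gyroscope channels. sigma is the noise
    standard deviation (hypothetical value).
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=samples.shape)
    return samples + noise

# Doubling the dataset: keep the originals and append jittered copies.
# X: (N, T, C) float array of recorded gestures; y: (N,) labels.
# X_aug = np.concatenate([X, jitter(X)], axis=0)
# y_aug = np.concatenate([y, y], axis=0)
```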

I remain on schedule as per the project timeline.

During development, I tried a number of model architectures to determine which one best fits the task of classifying gestures from the IMU data. I started with a simple RNN, which could capture basic motion patterns but generalized poorly. I then experimented with an LSTM architecture, which improved training accuracy but overfit quickly on our small dataset and required long training times. To balance spatial and temporal modeling, I tried a Conv1D + LSTM hybrid, but it did not produce better accuracy. Lastly, I tested a Conv1D-based CNN with two small-kernel convolutional layers followed by flatten and dropout layers. This model had the highest validation accuracy throughout and stayed within the size limit.
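A minimal Keras sketch of this kind of Conv1D model; the filter counts, kernel sizes, activations, and the 1.5-second/6-channel input shape below are illustrative assumptions, not the exact settings used:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical shapes: 150 timesteps (1.5 s window), 6 IMU channels, 6 gestures.
TIMESTEPS, CHANNELS, NUM_GESTURES = 150, 6, 6

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, CHANNELS)),
    # Two small-kernel Conv1D layers to pick up short motion patterns.
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.Flatten(),
    layers.Dropout(0.5),          # regularization against overfitting
    layers.Dense(NUM_GESTURES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```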

I learned new skills in time-series modeling, model optimization, and data augmentation. I watched YouTube tutorials to learn key modeling techniques and referred to the TensorFlow documentation while implementing and tuning the model. Additionally, I read blog posts and Stack Overflow discussions to troubleshoot overfitting and understand best practices for dropout and kernel sizing. These resources helped me iterate quickly and tailor the model to our system constraints.


Olina’s Status Report for 4/12/2025

This week, I have been investigating what might be causing the unexpectedly poor performance of our wand gesture recognition system. One possible issue I identified is the timer used during data collection. Previously, because of a bug in our timer setup code, each gesture was recorded over an 8-second window, which likely introduced a lot of irrelevant or noisy data since most gestures take only a second or two to complete. We fixed this by adjusting the timer so that each gesture is now recorded for 1.5 seconds, and we updated the rest of the code to support the new timing setup. Right now, I am testing whether this change improves model performance. I am still on track and expect to finish by next week.
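The fix itself was in the recording timer, but the same idea can be illustrated offline: given a longer recording, keep only a 1.5-second window around the most energetic motion. A minimal NumPy sketch, assuming a hypothetical 100 Hz sample rate:

```python
import numpy as np

SAMPLE_RATE_HZ = 100          # hypothetical IMU sample rate
WINDOW_SECONDS = 1.5
WINDOW_SAMPLES = int(SAMPLE_RATE_HZ * WINDOW_SECONDS)

def trim_to_window(recording):
    """Keep a 1.5 s slice of a longer recording, centered on the
    region with the most motion energy.

    recording: (timesteps, channels) array from one gesture capture.
    """
    energy = np.sum(recording ** 2, axis=1)
    # Sliding-window sum of energy; pick the window with the largest total.
    cumsum = np.concatenate([[0.0], np.cumsum(energy)])
    window_sums = cumsum[WINDOW_SAMPLES:] - cumsum[:-WINDOW_SAMPLES]
    start = int(np.argmax(window_sums))
    return recording[start:start + WINDOW_SAMPLES]
```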

Olina’s Status Report for 3/15/2025

This week, I spent time refining and testing the CNN model. I used a series of Conv1D layers with LeakyReLU activations and a final Dense layer for classification. I adjusted the hyperparameters and experimented with various learning rates for the Adam optimizer; a learning rate of 0.001 gave the best balance between convergence speed and stability. I also experimented with different combinations of sensor columns as input features and settled on the combination that performed best.
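A minimal sketch of this kind of learning-rate sweep with the Adam optimizer; the placeholder data, layer sizes, candidate rates, and epoch count are assumptions, not the exact experiment settings:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder data standing in for the real gesture windows:
# 300 samples, 150 timesteps, 6 IMU channels, 6 gesture classes.
X = np.random.randn(300, 150, 6).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 6, size=300), 6)

def build_model():
    return models.Sequential([
        layers.Input(shape=(150, 6)),
        layers.Conv1D(32, 3), layers.LeakyReLU(),
        layers.Conv1D(64, 3), layers.LeakyReLU(),
        layers.Flatten(),
        layers.Dense(6, activation="softmax"),
    ])

for lr in (1e-2, 1e-3, 1e-4):
    model = build_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    hist = model.fit(X, y, validation_split=0.2,
                     epochs=10, batch_size=32, verbose=0)
    print(f"lr={lr}: best val_accuracy={max(hist.history['val_accuracy']):.3f}")
```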

There is a trade-off between model complexity and overfitting. I also found that larger batch sizes give more stable training, but they require more memory to cache intermediate results during training.

Next week, I will perform the final fine-tuning of our model.

Olina’s Status Report for 3/7/2025

Over the past two weeks, I compared the CNN, RNN, and LSFM models. While all three performed well in training, the CNN demonstrated the best generalization with minimal overfitting, the fastest training time, and high robustness to noise. In addition to the comparison, I worked on fine-tuning the models to improve performance and reduce overfitting, focusing on optimizing hyperparameters and refining the preprocessing techniques.

During fine-tuning, I encountered difficulties in balancing model performance and overfitting. Additionally, optimizing computational efficiency while maintaining accuracy was challenging.

Next week, I will complete the fine-tuning process and finalize model evaluations. Additionally, I will analyze the impact of fine-tuning adjustments and document findings to guide further improvements.

Olina’s Status Report for 2/22/2025

This week, I focused on setting up and refining the model evaluation process. I continued working with the CNN, RNN, and LSFM models, preparing them for a detailed comparison in terms of accuracy, speed, and computational efficiency.

To ensure a fair comparison, I refined the data preprocessing steps and standardized the input formats. I also set up evaluation scripts to measure key performance metrics such as accuracy, precision, recall, and inference time. Initial testing is in progress, but I have yet to complete a full analysis.
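A minimal sketch of such an evaluation script using scikit-learn metrics; the function and variable names are placeholders for the actual test set and models:

```python
import time
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

def evaluate(model, X_test, y_test):
    """Report accuracy, precision, recall, and mean inference time.

    y_test holds integer class labels; the model outputs per-class
    probabilities of shape (num_samples, num_gestures).
    """
    start = time.perf_counter()
    probs = model.predict(X_test, verbose=0)
    elapsed = time.perf_counter() - start

    y_pred = np.argmax(probs, axis=1)
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred, average="macro"),
        "recall": recall_score(y_test, y_pred, average="macro"),
        "ms_per_sample": 1000.0 * elapsed / len(X_test),
    }
```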

To get back on track, I plan to complete all model evaluations next week by running full-scale tests and comparing the architectures based on their efficiency and accuracy. Since I am behind schedule, I will continue working during spring break to ensure that I meet the project goals.

Olina’s Status Report for 2/15/2025

This week, I worked on model building, training optimization, and dataset processing. To begin multi-class classification, I first cleaned and prepared the motion data, applied sequence padding, and converted the gesture labels to one-hot encodings.
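A minimal sketch of the padding and one-hot encoding steps with Keras utilities; the placeholder recordings, maximum sequence length, and variable names are assumptions:

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

# Placeholder recordings of unequal length (timesteps x 6 IMU channels)
# standing in for the cleaned gesture data.
sequences = [np.random.randn(t, 6) for t in (120, 150, 90)]
labels = [0, 3, 5]                      # integer gesture IDs in [0, 5]

MAX_LEN = 150                           # assumed fixed window length
X = pad_sequences(sequences, maxlen=MAX_LEN, dtype="float32",
                  padding="post", truncating="post")   # -> (3, 150, 6)
y = to_categorical(labels, num_classes=6)              # one-hot labels
```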

For model development, I experimented with three distinct architectures: CNN, RNN, and LSFM. The RNN (Bidirectional LSTM) tracks motion across time, while the CNN captures spatial patterns. The LSFM model combines both strategies to increase precision and effectiveness.
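A minimal sketch of a hybrid model along those lines, with a Conv1D front end feeding a bidirectional LSTM; the layer sizes and input shape are illustrative assumptions, not the exact LSFM configuration:

```python
from tensorflow.keras import layers, models

# Hypothetical layer sizes; 150 x 6 input window and 6 gesture classes.
hybrid = models.Sequential([
    layers.Input(shape=(150, 6)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),   # spatial patterns
    layers.MaxPooling1D(pool_size=2),
    layers.Bidirectional(layers.LSTM(32)),                 # temporal dependencies
    layers.Dropout(0.3),
    layers.Dense(6, activation="softmax"),
])
```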

I made progress adjusting hyperparameters such as the learning rate, dropout rate, epoch count, and batch size. I also implemented model checkpointing and early stopping to avoid overfitting.
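A minimal sketch of checkpointing and early stopping with Keras callbacks; the file name, monitored metric, and patience value are assumptions:

```python
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

# Save the best weights seen so far and stop once validation loss stalls.
callbacks = [
    ModelCheckpoint("best_gesture_model.keras",
                    monitor="val_loss", save_best_only=True),
    EarlyStopping(monitor="val_loss", patience=5,
                  restore_best_weights=True),
]

# Usage (model and data splits are placeholders):
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=100, batch_size=32, callbacks=callbacks)
```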

I am on track with the project timeline. Next week, I will fine-tune the models by adjusting kernel sizes, units, and regularization. I will also run more evaluations and start real-time testing to ensure the model works smoothly with the wand system.

Olina’s Status Report for 2/8/2025

This week, I focused on data collection for the gesture recognition model in our wand project. I collected data for six different gestures, recording 50 samples for each. I manually recorded and labeled each set to keep the data consistent. In addition to data collection, I began preliminary data preprocessing, including normalizing the motion data inputs and organizing the dataset into a format compatible with our CNN training pipeline. I also spent time reviewing related research on gesture recognition to identify potential improvements to our model architecture.
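A minimal sketch of per-channel normalization for the motion data; the exact scheme isn't spelled out here, so zero-mean/unit-variance scaling is an assumption:

```python
import numpy as np

def normalize_per_channel(X):
    """Zero-mean, unit-variance scaling of each IMU channel.

    X: (num_samples, timesteps, channels). Statistics are computed
    over the whole training set; the small epsilon avoids division
    by zero for nearly constant channels.
    """
    mean = X.mean(axis=(0, 1), keepdims=True)
    std = X.std(axis=(0, 1), keepdims=True) + 1e-8
    return (X - mean) / std
```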

My progress is on schedule.

Next week, I plan to:

  1. Develop and implement the initial version of the CNN model for gesture recognition.
  2. Conduct initial training and testing of the model to evaluate baseline performance.