Alan’s Status Report for 12/4

This week and last week, I started doing system and component testing of our project and worked on and presented the Final Presentation for our group. For our metrics, I ran a number of trials of our clicking detection and jitter distance tests to contribute results to our group average. I also helped contribute code to increase the smoothness of mouse movement (by averaging positional inputs) and improve practical detection of gestures (by adjusting the number of detections needed to execute a mouse function). In the coming week, I will continue to gather metrics for the other mouse functions and work on the final deliverables. I will keep adjusting the parameters in our code to get the best measurements possible so that we can surpass our requirements. I will also recruit new users, such as my roommates, to participate in user testing. I am on schedule and should be in good shape to meet the final deadlines and contribute to a successful final demo.
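The two adjustments mentioned above look roughly like this (a minimal sketch; the window size, detection threshold, and function names are illustrative placeholders rather than the actual values we tuned in our system):

```python
# Sketch of the smoothness/detection tweaks: average recent hand positions
# to reduce cursor jitter, and require several consecutive detections of a
# gesture before executing the corresponding mouse function.
from collections import deque

SMOOTHING_WINDOW = 5        # placeholder: number of recent positions to average
DETECTIONS_REQUIRED = 3     # placeholder: consecutive detections before acting

recent_positions = deque(maxlen=SMOOTHING_WINDOW)
consecutive_detections = 0

def smoothed_position(x, y):
    """Average the last few hand positions to smooth cursor movement."""
    recent_positions.append((x, y))
    xs = [p[0] for p in recent_positions]
    ys = [p[1] for p in recent_positions]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def gesture_confirmed(detected_this_frame):
    """Only act on a gesture after it is detected several frames in a row."""
    global consecutive_detections
    consecutive_detections = consecutive_detections + 1 if detected_this_frame else 0
    return consecutive_detections >= DETECTIONS_REQUIRED
```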

Team Status Report for 12/4

This week and last week, the team started doing testing, verification, and metric collection alongside consistent integration and improvement of our system. All of the risks and challenges throughout the semester have been handled, and at this point we are just refining our product so that it functions as well as possible for the final demo. We are mostly testing and making small system adjustments to meet or surpass our requirements. The schedule and design for our project are still the same, and we will focus on finishing the final deliverables in the coming week.

Team Status Report for 11/20

This week, the team reflected on the feedback from our Interim Demo, most of which involved incorporating some quality-of-life and smoothness changes into our design as well as being better prepared to explain our system and responsibilities at the final demo. The team has also transitioned from pure implementation to focusing on observing tradeoffs and collecting metrics before the final presentation, and more so for the final report. The biggest risk our team faces is gathering enough information to present as tradeoffs, specifically for creating a tradeoff graph. Development of our system is still proceeding nicely as we continue to improve our gesture model accuracy and implement smoothness changes to our mouse functions, but we have yet to see how smoothly our metric collection progresses. Our design and schedule are still the same, and we are on track to present tradeoffs and metrics during the final presentation after Thanksgiving break.

Alan’s Status Report for 11/20

This week, I finished implementing quality-of-life changes for our current mouse functions as well as some test code for the other mouse functions, such as right clicking and scrolling. I added a method of differentiating between clicking and holding by sampling gestures over time and deciding which to perform based on whether the click gesture was detected multiple times within a half-second time frame. I also experimented with other mouse functions such as right clicking (which works the same as left clicking) and scrolling (which uses mouse.wheel instead of mouse.move). mouse.wheel is interesting in that scrolling sensitivity is determined by how often the function is called, so the change in the hand's y position is used to determine how often to call the function rather than directly assigning a "distance" to scroll. Since we have a working system, the team and I will focus for now on gathering tradeoffs and metrics to prepare for the final presentation after Thanksgiving break. I am on schedule and will turn my focus towards finishing the implementation of the other mouse functions once the gesture recognition model is more accurate, as well as noting tradeoffs and collecting metrics for the final presentation.
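The frequency-based scrolling idea looks roughly like this (a rough sketch using the mouse package's wheel() call; the deadzone, rate cap, and scaling constants are placeholder values, not our tuned numbers):

```python
# Scroll by calling mouse.wheel() more or less often depending on how far
# the hand has moved vertically, since wheel() has no distance argument.
import time
import mouse

SCROLL_DEADZONE = 0.02   # placeholder: ignore tiny hand movements (normalized units)
MAX_RATE_HZ = 30         # placeholder: fastest allowed wheel() call rate

def scroll_step(delta_y, last_call_time):
    """Call mouse.wheel() at a rate proportional to |delta_y|."""
    if abs(delta_y) < SCROLL_DEADZONE:
        return last_call_time            # hand roughly still: no scrolling
    # Larger hand movement -> shorter interval between wheel() calls.
    interval = 1.0 / (MAX_RATE_HZ * min(abs(delta_y) / 0.2, 1.0))
    now = time.time()
    if now - last_call_time >= interval:
        mouse.wheel(1 if delta_y > 0 else -1)  # one notch per call; sign convention may flip
        return now
    return last_call_time
```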

Alan’s Status Report for 11/13

This week, most of my work was done in preparation for the Interim Demo. We managed to successfully demonstrate mouse movement as planned, and we even made progress with integrating the gesture recognition model to allow for left mouse clicking and dragging. While Brian continues to refine the model, I will work on implementing the other mouse operations such as right clicking and scrolling. Right now I am working on quality-of-life changes to the mouse functions. One example concerns clicking versus dragging with the mouse. Due to the nature of the mouse module functions, if the gesture for holding the mouse is input continuously, it is somewhat difficult to differentiate between a user who wants to drag something around for a short while and a user who wants to click. I am working on code that differentiates between these by sampling gesture inputs over a short period of time. This will be implemented along with a sort of "cooldown timer" on mouse clicking, which will prevent the module from accidentally spamming click calls to the mouse cursor as it continually detects the clicking gesture, and instead allow a user to click once every half a second or so. These changes are being made to ensure a smoother user experience. Currently I am on track with the schedule.
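One way to realize the sampling and cooldown ideas is sketched below (an illustrative interpretation, not the final implementation; the class name, half-second constants, and exact press/release logic are assumptions):

```python
# Sample the click gesture over a short window: a brief burst becomes a
# single click (rate-limited by a cooldown), while a sustained gesture
# becomes press-and-hold so the user can drag.
import time
import mouse

SAMPLE_WINDOW = 0.5    # seconds the gesture must persist to count as a drag
CLICK_COOLDOWN = 0.5   # minimum time between click events

class ClickController:
    def __init__(self):
        self.gesture_start = None
        self.last_click = 0.0
        self.dragging = False

    def update(self, click_gesture_detected):
        now = time.time()
        if click_gesture_detected:
            if self.gesture_start is None:
                self.gesture_start = now
            elif now - self.gesture_start >= SAMPLE_WINDOW and not self.dragging:
                mouse.press(button='left')       # held long enough: start a drag
                self.dragging = True
        else:
            if self.dragging:
                mouse.release(button='left')     # gesture ended: finish the drag
                self.dragging = False
            elif self.gesture_start is not None:
                # Short burst of the gesture: treat it as a single click,
                # subject to the cooldown so repeated detections don't spam.
                if now - self.last_click >= CLICK_COOLDOWN:
                    mouse.click(button='left')
                    self.last_click = now
            self.gesture_start = None
```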

Team Status Report for 11/13

This week, the team finished a successful Interim Demo! We were able to show off a large chunk of our system’s functionality, namely mouse movement and left mouse clicking/dragging, and received great feedback and suggestions with which to proceed. With this deadline out of the way and looking forward towards the final presentation and final report, the biggest risk in our path is dealing with testing and verification. Now that we have a working system, we can start with gathering testing metrics and preparing user stories for future testing.

Our system has no design changes, although we are all now making small changes to improve the smoothness and functionality of the system. We did, however, discover a bug in our schedule thanks to the professors! Here is an updated and fixed version of our schedule.

Schedule

Team Status Report for 11/6

This week, the whole team continued to work on the deliverables that we plan to show during the Interim Demo. There was more collaboration and discussion between team members this week as we started to integrate our components. The integration of pose estimation and mouse movement is already functional but can still be fine-tuned. Training of the gesture recognition model using pose estimation has also begun and is progressing smoothly. At this point, the biggest risk is whether the implementation we have committed to can meet the requirements and quantitative metrics that we set for ourselves. Hopefully through the Interim Demo we can receive feedback on whether any aspect of our project needs to be rescoped or whether there are other considerations we have to make. Currently the system design and schedule are the same, and we are working towards preparing a successful Interim Demo.

Alan’s Status Report for 11/6

This week, I continued to develop the mouse movement module and worked on a calibration module to prepare for the Interim Demo. The mouse movement module was updated to allow for different tracking sensitivities for users at different distances from our input camera. Additionally, if the computer vision fails to detect the hand at any time, the mouse now stays in place and continues movement from the same location instead of jumping around once the hand is detected in a different location. This update video shows the new mouse movement at a larger distance. As seen in the video, even from across my room, which is around 8-10 feet from the camera, the mouse movement is still able to precisely navigate over the minimize, resize, and exit buttons in VSCode. The cursor also stays in place whenever the hand is not detected and continues moving relative to the new location of hand detection.

Even with the poor webcam quality that misses some hand detection, the motion of the cursor is still somewhat smooth and does not jump around in an unwanted manner.
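The hold-in-place behavior works roughly like this (a simplified sketch; the sensitivity constant, coordinate conventions, and function name are illustrative assumptions):

```python
# Freeze the cursor while the hand is undetected, then resume relative
# movement from wherever the hand reappears instead of jumping.
import mouse

SENSITIVITY = 1500  # placeholder: cursor pixels per unit of normalized hand motion

prev_hand = None    # last known (normalized) hand position, or None if lost

def update_cursor(hand_pos):
    """hand_pos is (x, y) in normalized camera coordinates, or None if not detected."""
    global prev_hand
    if hand_pos is None:
        prev_hand = None                       # detection lost: cursor stays put
        return
    if prev_hand is not None:
        dx = (hand_pos[0] - prev_hand[0]) * SENSITIVITY
        dy = (hand_pos[1] - prev_hand[1]) * SENSITIVITY
        mouse.move(dx, dy, absolute=False)     # move relative to the current cursor
    # On re-detection (prev_hand was None) we only record the new anchor,
    # so motion resumes from wherever the cursor currently is.
    prev_hand = hand_pos
```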

For the demo, I will also have a calibration module ready that will automatically adjust the sensitivity for users based on their maximum range of motion within the camera’s field of view. Currently, I am on schedule and should be ready to show everything that we have planned to show for the Interim Demo.
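The planned calibration logic is roughly along these lines (a hypothetical sketch; the function names, 5-second sweep duration, and sampling rate are assumptions, since the module is still being finalized):

```python
# Watch the hand while the user sweeps through their comfortable range of
# motion, then scale sensitivity so that range maps onto the full screen.
import time

def calibrate(get_hand_position, screen_width, screen_height, duration=5.0):
    """Record hand positions for `duration` seconds and derive per-axis sensitivity."""
    xs, ys = [], []
    end = time.time() + duration
    while time.time() < end:
        pos = get_hand_position()    # hypothetical helper returning normalized (x, y) or None
        if pos is not None:
            xs.append(pos[0])
            ys.append(pos[1])
        time.sleep(0.03)             # sample at roughly 30 Hz
    # Avoid divide-by-zero if the user barely moved during calibration.
    range_x = max(max(xs) - min(xs), 1e-3)
    range_y = max(max(ys) - min(ys), 1e-3)
    # Sensitivity such that the observed range of motion spans the whole screen.
    return screen_width / range_x, screen_height / range_y
```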

Alan’s Status Report for 10/30

This week, I first created an OS Interface document to help plan for the OS Interface implementation while waiting on progress with other modules.

Here is the OS Interface document.

Andrew finished enough of the pose estimation code for me to start developing the mouse movement portion of the OS Interface with actual camera input data, so I began feeding the pose estimation output into my mouse movement test code.

One problem I found, and am currently working on fixing, is that if the pose estimation stops detecting the hand and then detects it again at a different location, the cursor can move drastically. Instead, the cursor should stay in the same location and resume relative movement once the hand is detected again, rather than moving from the previous location to the location of the new detection. I hope to fix this and also possibly improve the smoothness of cursor movement before this coming Monday. I am on pace with our new schedule and hope to have a functioning mouse module, with movement tracking via hand detection and the calibration feature, ready for the Interim Demo.

Team Status Report for 10/30

This week, the team came together to discuss our individual progress and to make plans going forward towards future deadlines, especially the Interim Demo. As mentioned last week, now that the team has had time to work on the actual implementation of the project, we decided to update our schedule to more accurately reflect the tasks and timeframes.

Here is the Updated Schedule.

Additionally, we decided on a design change for our project. Originally, we planned on feeding full image data directly from our input camera into the gesture recognition model. However, since our approach for hand detection involves pose estimation, which places landmark coordinates onto the detected hand in each image, we decided to use these landmark coordinates to train our machine learning gesture recognition model instead. All of the image data in the model dataset would first be put through our pose estimation module to obtain landmark coordinates for the hands in each image, and these coordinates would be passed into the model for training and testing. This should allow for a simpler model that can be trained more quickly and produce more accurate results, since a set of landmark coordinates is much simpler than raw image data. This updated design choice is reflected in our schedule with an earlier pose estimation integration task that all of us are involved in.
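In outline, the new training pipeline would look something like the sketch below (the 21-landmark layout and the specific classifier shown here are assumptions for illustration, not necessarily what our final gesture model uses):

```python
# Convert pose estimation hand landmarks into flat feature vectors and
# train a small classifier on them instead of on raw image data.
import numpy as np
from sklearn.neural_network import MLPClassifier

def landmarks_to_features(landmarks):
    """Flatten 21 (x, y) hand landmarks into a single 42-value feature row."""
    return np.array(landmarks, dtype=np.float32).reshape(-1)

def train_gesture_model(landmark_rows, labels):
    """landmark_rows: one landmark set per dataset image (produced by the
    pose estimation module); labels: the gesture label for each image."""
    X = np.stack([landmarks_to_features(lm) for lm in landmark_rows])
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    model.fit(X, labels)
    return model
```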

As we near the end of our project, integration no longer seems as daunting a risk; instead, we need to plan ahead for how we will carry out testing and verification. The bigger risk now is figuring out how to measure the metrics we outlined in the project requirements. For now, we will focus on finishing our planned product for the Interim Demo and begin testing on this interim product as we continue towards our final product.