Andrew’s Status Report 10/30

This week, I worked on finishing up the code for the hand detector class file. I’m almost done writing the classes for the cursor update object, which will continuously store the cursor location and keep it stationary whenever the camera fails to detect a hand in individual frames. We also started integrating my code with Alan’s and were able to get some basic cursor movement driven by the video camera. Once I finish the cursor class, cursor movement in our application should be much more streamlined.
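As a rough illustration of the idea, here is a minimal sketch of what the cursor update object could look like. It assumes the hand detector produces normalized (x, y) coordinates, or None when no hand is found; the class and method names are placeholders, not our final interface.

```python
# Minimal sketch of the cursor update object (illustrative names, not the final interface).
# Assumes the hand detector yields normalized (x, y) coordinates, or None when no hand is found.

class CursorState:
    def __init__(self, screen_w, screen_h):
        self.screen_w = screen_w
        self.screen_h = screen_h
        self.x = screen_w // 2   # last known cursor position, start centered
        self.y = screen_h // 2

    def update(self, hand_point):
        """Update the stored cursor location; hold it stationary if no hand was detected."""
        if hand_point is None:
            return self.x, self.y            # no detection this frame: keep the cursor where it is
        nx, ny = hand_point                  # normalized coordinates in [0, 1]
        self.x = int(nx * self.screen_w)
        self.y = int(ny * self.screen_h)
        return self.x, self.y
```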

As a personal update, I did not have much time to work on the project this week because I was catching up on other work and matters related to my injury. I am almost fully recovered, so I plan to get back on track this week. I’ve also run into one minor issue with the external webcam (which I met up with Tao during the week to try to fix): a permission issue on my laptop is preventing me from accessing it. Even so, I am more or less on schedule, if lagging slightly. The webcam issue is not a big deal since the camera works on other computers, and I can always use my laptop’s built-in webcam for testing.

Alan’s Status Report for 10/30

This week, I first created an OS Interface document to help plan for the OS Interface implementation while waiting on progress with other modules.

Here is the OS Interface document.

Andrew finished enough pose estimation code for me to start developing the mouse movement portion of the OS Interface with actual camera input data. I started feeding the pose estimation data into my mouse movement test code.

One problem I found and am currently fixing: if pose estimation loses the hand and then re-detects it at a different location, the cursor can jump drastically. Instead, the cursor should stay in place while the hand is lost and resume relative movement once the hand is detected again, rather than jumping from the old location to the new detection. I hope to fix this, and possibly improve the smoothness of cursor movement, before this coming Monday. I am on pace with our new schedule and hope to have a functioning mouse module, with movement tracking driven by hand detection and the calibration feature, ready for the Interim Demo.
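To illustrate the fix, here is a small sketch of the relative-movement logic, assuming normalized (x, y) hand positions and pyautogui for cursor control (the library choice here is an assumption, not necessarily what our OS Interface uses).

```python
# Sketch of the re-detection fix: freeze the cursor while the hand is lost, then
# re-anchor on the new detection so movement resumes relatively with no jump.
import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()

class RelativeMouseMover:
    def __init__(self):
        self.last_hand = None            # last hand position seen; None while the hand is lost

    def on_frame(self, hand_point):
        """Move the cursor by the hand's displacement since the previous frame."""
        if hand_point is None:
            self.last_hand = None        # hand lost: keep the cursor still, forget the old anchor
            return
        if self.last_hand is None:
            self.last_hand = hand_point  # re-detected: re-anchor here instead of jumping
            return
        dx = (hand_point[0] - self.last_hand[0]) * SCREEN_W
        dy = (hand_point[1] - self.last_hand[1]) * SCREEN_H
        pyautogui.moveRel(int(dx), int(dy))   # relative movement from the new anchor
        self.last_hand = hand_point
```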

Team Status Report for 10/30

This week, the team came together to discuss our individual progress and to make plans going forward towards future deadlines, especially the Interim Demo. As mentioned last week, now that the team has had time to work on the actual implementation of the project, we decided to update our schedule to more accurately reflect the tasks and timeframes.

Here is the Updated Schedule.

Additionally, we decided on a design change for our project. Originally, we planned on feeding full image data directly from our input camera into the gesture recognition model. However, since our approach to hand detection uses pose estimation, which places landmark coordinates on the detected hand in each image, we decided to train our machine learning gesture recognition model on these landmark coordinates instead. All of the image data in the model’s dataset will first be passed through our pose estimation module to obtain landmark coordinates for the hands in each image, and these coordinates will then be fed into the model for training and testing. This should allow for a simpler model that trains more quickly and produces more accurate results, since a set of landmark coordinates is much lower-dimensional than raw image data. This updated design choice is reflected in our schedule as an earlier pose estimation integration task that we are all involved in.
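As an illustration of this preprocessing step, the sketch below runs a dataset image through MediaPipe Hands and keeps only the landmark coordinates as the model’s features. The file handling and feature layout are placeholders rather than our final pipeline.

```python
# Illustrative preprocessing sketch: image in, flat landmark feature vector out.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def image_to_landmarks(image_path):
    """Return a flat list of 21 (x, y, z) hand landmarks, or None if no hand is found."""
    image = cv2.imread(image_path)
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    landmarks = results.multi_hand_landmarks[0].landmark
    return [coord for lm in landmarks for coord in (lm.x, lm.y, lm.z)]   # 63 features per image
```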

As we near the end of our project, integration no longer seems as daunting a risk; instead, we need to plan ahead for how we will carry out testing and verification. The bigger risk now is figuring out how to measure the metrics we outlined in the project requirements. For now, we will focus on finishing our planned product for the Interim Demo and begin testing on this interim product as we continue toward our final product.

Brian Lane’s Weekly Status Report for 10/23

I spent this week setting up some preliminary PyTorch scripts for training our model and adapting the pre-built model we have slated to use. Because of a shift in our planning, I now need to assist Andrew in the creation of our pose estimation software, since the hand pose data, rather than raw image data, will now be used to train our model.

Following this quick pivot, I will spend next week creating and running a script to apply hand pose estimation to our dataset and training our pre-built model in preparation for our upcoming interim demo.
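For a sense of what training on landmark features looks like, here is a minimal PyTorch sketch. The small classifier, gesture count, and random stand-in data are placeholders for our actual pre-built model and dataset.

```python
# Minimal sketch of training a gesture classifier on hand landmark features.
import torch
import torch.nn as nn

NUM_LANDMARK_FEATURES = 63   # 21 landmarks x (x, y, z), assuming MediaPipe Hands output
NUM_GESTURES = 5             # placeholder gesture count

model = nn.Sequential(
    nn.Linear(NUM_LANDMARK_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_GESTURES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(features, labels):
    """One optimization step over a batch of landmark vectors and gesture labels."""
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in data:
# train_step(torch.randn(32, NUM_LANDMARK_FEATURES), torch.randint(0, NUM_GESTURES, (32,)))
```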

Andrew’s Status Report 10/23

Last week was mid-semester break, so the team did not submit a status report; our primary focus that week was the design review. This past week, we each completed the ethics assignment and discussed our work in class with the guest speaker as well as other teams in 18-500 and 18-555. As for the actual project, the webcam we ordered arrived this week, so I dedicated some time to setting it up on my computer. I’m still finishing the code for pose estimation, but it is nearly complete. I have also started writing the calibration process for our range of motion algorithm and should have that finished within the next couple of days. The main difficulty now will be porting my individual code onto an interface that connects with the OS Interface library, but this is accounted for in our integration phase. As a personal update, I have been injured for the past two days and have communicated this to the professor. I’ll be a bit stalled over the next few days, so I will be extra communicative with the team and the teaching staff about it.
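Since the calibration algorithm itself is not spelled out here, below is a rough sketch of one plausible approach, assuming the user sweeps their hand to the edges of their comfortable range while we record normalized positions; the actual procedure may differ.

```python
# Rough sketch of a range-of-motion calibration (one plausible approach, not the final design).

class RangeOfMotionCalibration:
    def __init__(self):
        self.min_x = self.min_y = 1.0
        self.max_x = self.max_y = 0.0

    def record(self, hand_point):
        """Grow the calibrated box to cover each observed normalized hand position."""
        x, y = hand_point
        self.min_x, self.max_x = min(self.min_x, x), max(self.max_x, x)
        self.min_y, self.max_y = min(self.min_y, y), max(self.max_y, y)

    def to_screen(self, hand_point, screen_w, screen_h):
        """Map a hand position inside the calibrated box to full-screen coordinates."""
        x, y = hand_point
        u = (x - self.min_x) / max(self.max_x - self.min_x, 1e-6)
        v = (y - self.min_y) / max(self.max_y - self.min_y, 1e-6)
        u, v = min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0)
        return int(u * screen_w), int(v * screen_h)
```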

Team Status Report for 10/23

Over the past two weeks, the team focused on completing the Design Review Report and on our individual portions of the project. Individual circumstances (Andrew’s injury) may require a change of schedule and responsibilities. This is a risk we did not plan for, and we intend to handle it by having Alan help with Andrew’s project responsibilities in the immediate future. There are no design changes so far, but the schedule will be updated in next week’s status report depending on how things proceed this coming week.

Alan’s Status Report for 10/23

Status Report for 10/16:

Last week, I mainly worked on writing the Design Review Report. I wrote the abstract and sections I, II, III, VI, and VII, as well as the OS Interface portions of sections IV and V. I also formatted the report, added in my teammates’ contributions, and submitted it.

Status Report for 10/23:

This week, I met with Tamal early in the week to discuss the mid-semester quantitative feedback that I received. We talked about the Design Review Report, where I learned that the biggest issue was the lack of figures, which is something we will add for the final report. We also talked about the future of the project and how I could adjust my own personal contributions. We agreed that the OS Interface portion of the project was not too big a challenge by itself, and that I should instead contribute more to the other two parts of the project (hand detection and gesture recognition). I was also tasked with creating an OS Interface document illustrating the finite state machine as well as the interface for each functional module in the OS Interface, including the calibration step and the running system state. I foresee some scheduling changes, most likely included in next week’s status report, due to the adjustment of my responsibilities.
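As a rough picture of that document, here is an illustrative sketch of a finite state machine with a calibration step and a running system state; the state names and transition triggers are placeholders rather than the actual design.

```python
# Illustrative sketch of the OS Interface finite state machine (placeholder states and events).
from enum import Enum, auto

class InterfaceState(Enum):
    IDLE = auto()         # waiting for the user to start the system
    CALIBRATING = auto()  # calibration step: learn the user's range of motion
    RUNNING = auto()      # running system state: track the hand and drive the cursor

def next_state(state, event):
    """Advance the state machine on a high-level event string."""
    transitions = {
        (InterfaceState.IDLE, "start"): InterfaceState.CALIBRATING,
        (InterfaceState.CALIBRATING, "calibration_done"): InterfaceState.RUNNING,
        (InterfaceState.RUNNING, "stop"): InterfaceState.IDLE,
    }
    return transitions.get((state, event), state)
```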

Brian Lane’s Status Update 10/11

I spent the last week drafting and refining our final design presentation, as well as preparing for and delivering it before the class.

Further, I did some more research into gesture recognition and found that many studies use a pose estimation algorithm to identify hand landmarks and run the estimated pose through their model for gesture detection. This contrasts with my initial approach, which was to feed camera data in directly. Upon further consideration, this new paradigm makes sense: it eliminates much of the noise from background colors and imagery and reduces the number of features the model needs to learn.

This week, the adaptation of the pre-trained model will begin, as well as work on the final design documents and report. We are still waiting on AWS GPU access, which will speed up the training of our model.

Andrew’s Status Report 10/10

This week, we came off the design review presentation and got some good feedback from the TAs. Personally, I am ahead of schedule on MediaPipe hand tracking research and am in the process of building visualization tools with Matplotlib so I can more easily work on the pose estimation algorithm. I should finish this up by the end of the week, along with my part of the design report.
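As an example of the kind of visualization tool I am building, here is a small sketch that scatter-plots MediaPipe hand landmarks with Matplotlib; the plotting details are illustrative rather than the actual tool.

```python
# Sketch of a landmark visualization helper: plot the 21 MediaPipe hand landmarks.
import matplotlib.pyplot as plt

def plot_hand_landmarks(hand_landmarks):
    """Scatter-plot one detected hand's landmarks in normalized image coordinates."""
    xs = [lm.x for lm in hand_landmarks.landmark]
    ys = [lm.y for lm in hand_landmarks.landmark]
    plt.scatter(xs, ys)
    plt.gca().invert_yaxis()    # image y-axis grows downward
    plt.title("Detected hand landmarks")
    plt.show()
```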

Team Status Report for 10/9

As a team, our biggest focus this week was learning from the Design Review presentations and reflecting on the feedback we received for our own. The team discussed the feedback and how we would incorporate it as we began writing our Design Review report. As pointed out in our feedback, we currently do not have many preliminary results, so initial parameters such as distance from the camera and people in the background will likely be the biggest risks we have to work around. As we wrap up our Design Review report in the coming week, we will also begin building and testing the hand detection and gesture recognition components, as we should have our needed AWS credits and camera by then. The design and schedule remain the same, though we will provide more details in the Design Review report.