This week, our team made steady progress in preparing our systems for the final demonstration. Luke focused on enhancing the camera system by disabling the auto-focus and auto-blur features that were causing instability during fast motion capture. He made progress on motion detection capabilities through frame-difference analysis, though this remains a post-MVP feature. After team discussion, we collectively decided to deprioritize the Raspberry Pi web server implementation to concentrate on core deliverables.
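Frame-difference motion detection of the kind Luke prototyped can be sketched as follows. This is a minimal illustration, not Luke's actual code; the function name, threshold values, and grayscale-frame assumption are all ours.

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=25, min_changed_fraction=0.01):
    """Flag motion when enough pixels differ between consecutive grayscale frames.

    Hypothetical helper: thresholds are illustrative, not tuned project values.
    """
    # Absolute per-pixel difference; widen dtype to avoid uint8 wraparound
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    # Fraction of pixels whose change exceeds the noise threshold
    changed = np.count_nonzero(diff > threshold) / diff.size
    return changed >= min_changed_fraction

# Example: a static frame vs. one where a bright patch has appeared
frame_a = np.zeros((120, 160), dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[40:60, 50:80] = 200  # simulate a ball entering the scene
print(detect_motion(frame_a, frame_b))  # → True
print(detect_motion(frame_a, frame_a))  # → False
```

Thresholding the changed-pixel fraction, rather than any single pixel, is what makes this robust to sensor noise; it is also why disabling auto-focus and auto-blur matters, since focus hunting changes many pixels at once and would register as spurious motion.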
Samuel achieved a major improvement in our shot simulation algorithm by implementing scipy.spatial.cKDTree for collision detection. This change sped up the simulation by roughly 600% by reducing the per-query computational complexity from O(n) to O(log n). Alongside these technical improvements, Samuel dedicated time to finalizing our presentation materials and preparing demonstration assets.
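The k-d-tree approach can be sketched with `cKDTree.query_pairs`, which returns every pair of points closer than a cutoff without checking all n² pairs. This is a sketch of the technique, not Samuel's implementation; the ball radius and coordinates are made up for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

BALL_RADIUS = 1.0  # assumed unit radius; the real simulation's value may differ

def find_collisions(positions, radius=BALL_RADIUS):
    """Return index pairs of balls whose centers are within 2*radius (touching)."""
    tree = cKDTree(positions)
    # query_pairs finds all pairs within the cutoff via the tree structure,
    # replacing the naive all-pairs distance check
    return tree.query_pairs(r=2 * radius)

positions = np.array([
    [0.0, 0.0],
    [1.5, 0.0],    # within 2*radius of ball 0 -> collision
    [10.0, 10.0],  # far away -> no collision
])
print(find_collisions(positions))  # → {(0, 1)}
```

Because the tree prunes whole regions of space per query, each ball's neighbor lookup costs O(log n) instead of O(n), which is where the reported speedup comes from.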
Kevin improved our ball categorization system by transitioning from square HSV sampling regions to circular ones, significantly increasing accuracy. His work on differentiating between similarly colored balls using HSV gradient analysis has brought our categorization accuracy to 95% in testing. Kevin continues to refine the system to handle the remaining edge cases, particularly distinguishing between the 1-ball and 9-ball.
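The motivation for circular sampling can be shown with a small sketch: a circular mask over a detected ball excludes the square's corner pixels, which mostly contain table felt and drag the mean HSV toward the background. This is an illustrative sketch, assuming a synthetic HSV image and a known ball center and radius; it is not Kevin's actual pipeline.

```python
import numpy as np

def circular_mean_hsv(hsv_image, center, radius):
    """Average HSV over a circular region centered on the ball."""
    cy, cx = center
    h, w = hsv_image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return hsv_image[mask].mean(axis=0)

# Synthetic scene: a ball-colored disc on felt-colored background
img = np.zeros((50, 50, 3), dtype=np.float64)
img[...] = [90.0, 40.0, 40.0]           # background (felt-like HSV)
yy, xx = np.ogrid[:50, :50]
disc = (yy - 25) ** 2 + (xx - 25) ** 2 <= 10 ** 2
img[disc] = [30.0, 255.0, 255.0]        # ball color

circular = circular_mean_hsv(img, (25, 25), 10)
square = img[15:36, 15:36].reshape(-1, 3).mean(axis=0)
print(circular)  # → [ 30. 255. 255.] (pure ball color)
print(square)    # contaminated by felt pixels in the square's corners
```

The square average blends in background pixels, which matters most for similarly colored balls (like the 1-ball and 9-ball) where a small hue shift can flip the classification.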
Looking ahead, Luke will finalize camera calibration for the demo environment, Samuel will polish the simulation integration and demo flow, and Kevin will complete the remaining refinements to ball identification. The team remains on schedule for our final demonstration, with all core systems operational and only minor tuning remaining. While motion detection has been deferred to post-MVP development, our primary systems are performing at target levels, with ball categorization currently achieving 95% accuracy and expected to improve further with final adjustments.