This week I focused on building the camera input pipeline. Early in the week I set up a webpage that requests camera permission and streams live video using the MediaDevices API, with start/stop controls, a front/rear camera picker, and a frame loop that draws each frame to a hidden canvas for processing (first sketch below). Later in the week I added single-frame capture: I can now grab the current video frame and export it as a JPEG, encoded via the canvas, with ImageCapture used when the browser supports it (second sketch below). Next week I plan to write an API that feeds these frames into the CV component and to begin basic keystroke-event prototyping.
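Here's a rough sketch of how that streaming setup fits together. The element ids (`cameraVideo`, `frameCanvas`) and the function names are placeholders for illustration, not the project's actual code:

```typescript
// Assumed markup: <video id="cameraVideo" playsinline></video> and a
// hidden <canvas id="frameCanvas">; both ids are illustrative.
const video = document.getElementById("cameraVideo") as HTMLVideoElement;
const canvas = document.getElementById("frameCanvas") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

let stream: MediaStream | null = null;
let rafId = 0;

// Start the camera; facing is "user" (front) or "environment" (rear).
async function startCamera(facing: "user" | "environment"): Promise<void> {
  // getUserMedia triggers the browser's permission prompt on first use.
  stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: facing },
    audio: false,
  });
  video.srcObject = stream;
  await video.play();

  // Match the hidden canvas to the actual stream resolution.
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  // Frame loop: copy each video frame onto the canvas for processing.
  const loop = () => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    rafId = requestAnimationFrame(loop);
  };
  rafId = requestAnimationFrame(loop);
}

// Stop the frame loop and release the camera.
function stopCamera(): void {
  cancelAnimationFrame(rafId);
  stream?.getTracks().forEach((t) => t.stop());
  video.srcObject = null;
  stream = null;
}
```

Switching between front and rear is just `stopCamera()` followed by `startCamera()` with the other `facingMode`.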

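And the capture path, reusing `video`, `canvas`, and `ctx` from the sketch above. Again the names are illustrative; the ImageCapture branch is the optional path, and the canvas does the JPEG encoding either way:

```typescript
// Grab the current frame and encode it as a JPEG blob.
async function captureFrame(quality = 0.92): Promise<Blob> {
  if (stream && "ImageCapture" in window) {
    // ImageCapture typings may not ship with the TS DOM lib, so cast to
    // keep the sketch self-contained. grabFrame() yields an ImageBitmap.
    const track = stream.getVideoTracks()[0];
    const capture = new (window as any).ImageCapture(track);
    const bitmap: ImageBitmap = await capture.grabFrame();
    ctx.drawImage(bitmap, 0, 0, canvas.width, canvas.height);
    bitmap.close();
  } else {
    // Fallback: draw the current <video> frame directly.
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  }

  // Encode whatever is on the canvas as a JPEG.
  return new Promise<Blob>((resolve, reject) => {
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("JPEG encoding failed"))),
      "image/jpeg",
      quality,
    );
  });
}
```

A captured blob can then be saved or handed to the CV stage, e.g. `const jpeg = await captureFrame();`.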