This week I finished revising the use-case requirements, abstract, and testing sections of the design report in preparation for our final report. I also integrated the computer vision component with the vectorization component: once an image is captured, it is written as a .jpeg file to DrawBuddy/vectorization/inputs, where it is then picked up by a call to the VTracer API and vectorized into a .svg file in the DrawBuddy/vectorization/results folder.
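For reference, here is a minimal sketch of that handoff, assuming the vtracer Python bindings and OpenCV for writing the capture; the function and file names are illustrative rather than our exact code:

```python
import os
import cv2      # assumed: OpenCV handles the capture side
import vtracer  # Python bindings for VTracer

INPUT_DIR = "DrawBuddy/vectorization/inputs"
RESULT_DIR = "DrawBuddy/vectorization/results"

def vectorize_capture(frame, name="capture"):
    """Write a captured frame as .jpeg, then vectorize it into a .svg."""
    os.makedirs(INPUT_DIR, exist_ok=True)
    os.makedirs(RESULT_DIR, exist_ok=True)
    jpeg_path = os.path.join(INPUT_DIR, f"{name}.jpeg")
    svg_path = os.path.join(RESULT_DIR, f"{name}.svg")
    cv2.imwrite(jpeg_path, frame)                          # persist the capture
    vtracer.convert_image_to_svg_py(jpeg_path, svg_path)   # trace to vector paths
    return svg_path
```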
I also began looking into ways to display .svg files directly onto our whiteboard display, rather than converting them to .pngs, since this was a problem we did not account for. We had assumed svgUtils had additional functions for displaying .svg files, but currently PyQt looks promising. The remaining integration work includes taking the .svg file and rendering it onto our virtual whiteboard.
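As a starting point, here is a minimal sketch of what the PyQt route might look like, assuming PyQt5 with the QtSvg module installed; the file path is hypothetical:

```python
import sys
from PyQt5.QtWidgets import QApplication
from PyQt5.QtSvg import QSvgWidget

app = QApplication(sys.argv)
# QSvgWidget renders an .svg file directly, with no .png conversion step
viewer = QSvgWidget("DrawBuddy/vectorization/results/capture.svg")
viewer.setWindowTitle("DrawBuddy whiteboard preview")
viewer.resize(800, 600)
viewer.show()
sys.exit(app.exec_())
```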
In terms of identifying straight lines, I discussed the problem with Lisa, and we are thinking of pivoting to performing this on the post-processing side, so that the user can explicitly mark which lines they would like to straighten, as opposed to trying to produce an algorithm that estimates the user's preferences.
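To make the idea concrete, here is a rough sketch of what straightening a user-marked stroke could look like; nothing here is implemented yet, and the least-squares fit is just one candidate approach:

```python
import numpy as np

def straighten_stroke(points):
    """Replace a user-marked stroke (list of (x, y) points) with its best-fit segment."""
    pts = np.asarray(points, dtype=float)
    # Fit y = m*x + b; assumes the stroke is not near-vertical. A real
    # implementation would fit in a rotation-invariant way (e.g. via PCA).
    m, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    x0, x1 = pts[:, 0].min(), pts[:, 0].max()
    return [(x0, m * x0 + b), (x1, m * x1 + b)]
```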
I am a little behind schedule, since setting up the whiteboard and displaying objects onto it were obstacles we did not expect. However, this is a minor setback, and we should be able to get back on schedule within the next week or so.
Next week we hope to have a fully integrated vectorization system, such that a user can click our vectorize button and it will take whatever image the user has and output it onto the whiteboard. More work will need to be done to filter the noise from the image and to send the images to connected peers.
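For the noise filtering, one likely direction is a simple OpenCV pre-processing pass before the frame reaches VTracer; the specific filters and parameters below are assumptions we would still need to tune:

```python
import cv2

def denoise(frame):
    """Suppress sensor noise and speckle before handing the frame to VTracer."""
    blurred = cv2.medianBlur(frame, 5)               # remove salt-and-pepper noise
    return cv2.bilateralFilter(blurred, 9, 75, 75)   # smooth regions, keep edges sharp
```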
I also hope to begin working on translation and resizing features for objects within the image on our whiteboard.
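Since the whiteboard content will be SVG, one plausible starting point is applying a transform attribute to the selected element; this sketch uses only the standard library, and the element id and parameters are hypothetical:

```python
import xml.etree.ElementTree as ET

def transform_object(svg_path, element_id, dx=0, dy=0, scale=1.0):
    """Translate and scale one object inside the whiteboard's .svg file."""
    ET.register_namespace("", "http://www.w3.org/2000/svg")
    tree = ET.parse(svg_path)
    for el in tree.getroot().iter():
        if el.get("id") == element_id:
            # Standard SVG transform syntax: move first, then scale
            el.set("transform", f"translate({dx} {dy}) scale({scale})")
    tree.write(svg_path)
```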