Lisa’s Status Report for April 30, 2022

This week, I’ve mainly been working on finalizing features of the whiteboard. In particular, there was a bug that I had to spend a lot of time debugging. It was related to the rotation and translation features, as well as the group lines feature (the program would sometimes crash if I tried to group multiple lines at once and then move them). I was able to figure out what the issue was and resolve it, along with a few other bugs.

I also focused a lot on testing and making sure that there weren’t any other hidden bugs or edge cases that I might have missed. I’ve also been working with my team on the final poster and have started thinking about what to put in the video. I am on schedule so far and will continue testing and integrating my code with the rest of the team’s code, as well as working on the final video and preparing for the live demo.

 

Team Status Report for April 30, 2022

One risk that we face is any issues we might run into while integrating our individual components together. Other than that, our components are complete, and we will continue testing to root out any issues that we discover. Another concern is any network issues that might arise from our sockets/server-related code during the live demo, but we will acquire a router from Professor Savvides to account for that issue.

We are on schedule.  Our current schedule is the same as what was shown in the last team status report. We are mainly focusing on integrating and testing this week, and smoothing out any final issues that pop up during the process. We are also working on the poster and planning what to put in our final video.

 

 

Lisa’s Status Report for April 23, 2022

This week, I was able to get the whiteboard working so that it can now take in the camera feed as input and vectorize it. It detects the lines using computer vision, and the detected lines then get passed into the vectorizer to produce the output that we see on the whiteboard. Originally, without the computer vision step, the vectorized image was showing up as several smaller lines (as I showed during the demo on Monday), but I fixed the issue by using computer vision to detect the lines beforehand.
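To give a sense of the kind of computer-vision pass involved, here is a minimal sketch assuming OpenCV’s probabilistic Hough transform is used to pick out the strokes before vectorizing; the thresholds and parameters here are placeholders, not our final values.

```python
import cv2
import numpy as np

def detect_lines(frame):
    """Detect straight strokes in a camera frame before vectorizing (sketch only)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Probabilistic Hough transform returns segments as (x1, y1, x2, y2)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=40, maxLineGap=10)
    if segments is None:
        return []
    return [tuple(seg[0]) for seg in segments]
```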

I also created an “Erase all” feature that erases the entire whiteboard, as well as a “group lines” feature. The “group lines” feature groups all the lines that the user selects into one line.
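As a rough illustration (the function and data-structure names here are hypothetical, not the actual whiteboard code), the “group lines” action boils down to merging the point lists of the selected lines into a single element:

```python
def group_lines(selected_ids, lines):
    """Merge the selected whiteboard lines into one grouped line (sketch only).

    `lines` is assumed to be a dict mapping an integer element id to its list
    of (x, y) points; the real whiteboard state may be structured differently.
    """
    grouped_points = []
    for line_id in selected_ids:
        grouped_points.extend(lines.pop(line_id))  # remove the originals
    new_id = max(lines, default=0) + 1             # allocate a fresh id
    lines[new_id] = grouped_points
    return new_id
```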

The biggest accomplishment this week was getting both rotating and scaling to work. I can’t upload a video here, but we will show a video of the feature during our presentation in the coming week. One issue that is less about functionality and more about usability is that you can’t drag too fast; otherwise, the GUI cannot update the line quickly enough to keep up with the mouse events. I will try to improve this next week.
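For reference, the math behind rotating and scaling comes down to transforming each element’s points about their centroid; the sketch below shows the idea (it is not the exact whiteboard code, and the function name is made up).

```python
import math

def transform_points(points, angle_deg=0.0, scale=1.0):
    """Rotate and scale a list of (x, y) points about their centroid."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    transformed = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        # rotate the offset about the centroid, then scale it
        rx = dx * cos_t - dy * sin_t
        ry = dx * sin_t + dy * cos_t
        transformed.append((cx + scale * rx, cy + scale * ry))
    return transformed
```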

Finally, I worked on the presentation slides for this week, mainly the vectorizing part of the solution approach and the updated schedule.

Next week, in addition to working on the issue I mentioned earlier, I’ll also work on integrating my code with Denise and Ronald’s and work on some of the stretch goals. So far, we are on schedule.

Lisa’s Status Report for April 16, 2022

This week I continued to work on the display function that I started last week (it takes in a parsed SVG file and converts the information in each line of the file into editable elements on the whiteboard). The main setback this week was handling an unexpected “transform” attribute in some lines of the SVG file, which indicates that the line needs to be shifted by a specific amount in the x or y direction. I also had to adjust the values to fit the proportions of the whiteboard, using the ratio between the width and height noted in the SVG file and those of the whiteboard display in the GUI.
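A hedged sketch of those two adjustments is below: applying the optional “transform” translate offset and rescaling from SVG coordinates to the whiteboard canvas. The whiteboard dimensions are placeholders and the regex is simplified.

```python
import re

def adjust_endpoint(x, y, transform, svg_w, svg_h, board_w=600, board_h=400):
    """Apply an SVG translate offset, then scale to the whiteboard (sketch only)."""
    # e.g. transform="translate(12.5, -3.0)"
    if transform:
        match = re.search(r"translate\(([-\d.]+)[ ,]+([-\d.]+)\)", transform)
        if match:
            x += float(match.group(1))
            y += float(match.group(2))
    # keep the drawing proportional to the whiteboard display
    ratio = min(board_w / svg_w, board_h / svg_h)
    return x * ratio, y * ratio
```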

I was able to finish this, which completes the integration of the SVG parser with the whiteboard display function. In the coming week, I will test this more to make sure it works with all types of lines and with multiple lines in the input image. Right now, it draws all lines with the same width, but next week I’ll try to have the lines on the whiteboard match the relative widths of the lines in the drawing.

I’ll also keep working on adding the translation feature for the lines and start researching how to scale them. So far, I am on schedule.

Team Status Report for April 9, 2022

Currently, the most significant risk remains similar to last week’s: the method we use to display the different components of the SVG file on the whiteboard so that we can also edit them. Our current solution is to parse each line of the SVG file into either a line or a polygon, and then use our Python GUI library (PySimpleGUI) to display it on the whiteboard, adjusted for proportions.
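As a rough sketch of that approach (assuming PySimpleGUI’s Graph element, which our GUI already uses; the parsed-element format here is hypothetical):

```python
import PySimpleGUI as sg

def draw_parsed_svg(graph: sg.Graph, parsed):
    """Draw parsed SVG elements, given as ("line"|"polygon", [(x, y), ...]) tuples."""
    for kind, points in parsed:
        if kind == "line":
            # draw each consecutive pair of points as a segment
            for (x1, y1), (x2, y2) in zip(points, points[1:]):
                graph.draw_line((x1, y1), (x2, y2), color="black", width=2)
        elif kind == "polygon":
            graph.draw_polygon(points, line_color="black", line_width=2)
```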

One change we made is that we now want to remove the background whenever a user holds a piece of paper up to the camera, so that the background doesn’t get vectorized and added to the SVG file. No additional costs will be incurred by this added feature.
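One possible way to do this (purely illustrative; the thresholds are placeholders, not our final approach) is to keep only the largest bright region in the frame, i.e. the sheet of paper, and white out everything else:

```python
import cv2
import numpy as np

def isolate_paper(frame):
    """Keep only the largest bright region (the paper) and white out the rest."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return frame
    paper = max(contours, key=cv2.contourArea)
    keep = np.zeros_like(mask)
    cv2.drawContours(keep, [paper], -1, 255, thickness=cv2.FILLED)
    result = frame.copy()
    result[keep == 0] = 255  # white out everything outside the paper region
    return result
```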

We are on track. We made good progress on converting the SVG file into editable components and should be able to complete that next week. We also got sending an SVG through the GUI working, and will work on the receiver code in the coming week. Next week, we hope to fully integrate our SVG parser with the code that displays objects on the whiteboard, and to make some progress on eliminating the background when a user holds up a drawing for the app to vectorize.
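This is not our exact protocol, but roughly, sending an SVG amounts to pushing the file’s bytes over a TCP socket with a length prefix so the receiver knows how much to read; the host and port here are placeholders.

```python
import socket
import struct

def send_svg(path, host="localhost", port=5000):
    """Send an SVG file over TCP with a 4-byte length prefix (sketch only)."""
    with open(path, "rb") as f:
        data = f.read()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("!I", len(data)))  # big-endian length header
        sock.sendall(data)
```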

 

Lisa’s Status Report for April 9, 2022

This week I implemented a function that takes a parsed SVG file (which Denise wrote the parser for) and draws lines on the whiteboard. The function takes in a list of line endpoints and also reads the SVG file to obtain the width and height of the image, extracting the values from the second line of the file. I also wrote code that simply displays the vectorized image on the whiteboard for now, but I would like to replace it with code that allows translation of the elements (and displays only lines and polygons).
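For reference, pulling the dimensions out of the SVG header can look roughly like the sketch below; the exact layout of the files our vectorizer produces may differ slightly.

```python
import re

def svg_dimensions(svg_path):
    """Read the image width and height from the <svg> header line (sketch only)."""
    with open(svg_path) as f:
        f.readline()              # skip the XML declaration on the first line
        header = f.readline()     # e.g. <svg width="640" height="480" ...>
    width = float(re.search(r'width="([\d.]+)', header).group(1))
    height = float(re.search(r'height="([\d.]+)', header).group(1))
    return width, height
```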

We had to modify the schedule a bit for now, which we discussed during the demo on Monday. Next week, I hope to complete integrating the SVG parser with the code for creating translatable and scalable elements on the whiteboard.

Lisa’s Status Report for April 2, 2022

This week I worked with Denise on combining the vectorization logic with the GUI. We also discussed how to handle identifying lines as straight lines, and settled on the solution of letting the user identify which lines should be straight after the image is processed.

We are a little behind schedule. One setback this week was converting the vectorized SVG file into something that can be displayed on the whiteboard in the GUI. We’ll have to parse the SVG file and use the GUI interface to display the objects on the whiteboard. For now, we’re sticking to just getting lines to show up on the whiteboard for the demo, and we’ll work with other shapes in the coming weeks. We’ve made a lot of progress on this, though, and should be able to get back on track by next week.

Next week, we want to complete a full vectorize-to-GUI pipeline. We also want to try to implement sending and receiving vectorized images between users within the GUI.

 

Lisa’s Status Report for March 26, 2022

On Monday, I went to the ethics discussion during class. Prior to the discussion, I had completed the ethics paper and done the two required readings. During class, we thought about and discussed ethical issues with different teams’ projects.

We are on schedule so far. This week, I mostly worked on the GUI/frontend, particularly the whiteboard portion of it. An image of what the whiteboard UI looks like right now is shown below.

On the left is the whiteboard, which will contain the vectorized image. On the right there is a title that says “Camera Feed” and three buttons: “Receive”, “Send”, and “Vectorize Image”. The user can click the “Send” button to send the contents of the vectorized image to another user. The “Receive” button can be used to check for any received messages in the inbox, and the “Vectorize Image” button will be used once the user is ready to vectorize the image that they are holding up.
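Roughly, the window’s layout in PySimpleGUI looks like the sketch below (the element keys and sizes are hypothetical): a drawing canvas on the left and the camera-feed controls on the right.

```python
import PySimpleGUI as sg

whiteboard = sg.Graph(canvas_size=(600, 450), graph_bottom_left=(0, 0),
                      graph_top_right=(600, 450), background_color="white",
                      key="-WHITEBOARD-")
controls = [
    [sg.Text("Camera Feed")],
    [sg.Button("Receive"), sg.Button("Send"), sg.Button("Vectorize Image")],
]
layout = [[whiteboard, sg.Column(controls)]]
window = sg.Window("Whiteboard", layout)
```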

Next week, I want to actually display the vectorized contents of the image on the whiteboard. Once Ronald implements creating and hosting a session through the GUI, we will try to implement sending and receiving messages. We will also try to get started on a basic UI framework for the user’s inbox.

Team Status Report for March 19, 2022

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

A risk that we discussed this week is the issue of our app not recognizing hand-drawn lines as perfectly straight lines, since people are generally not going to draw completely straight lines. We discussed the merits of a temporal approach, but decided against it because it would require the user to have a camera mounted over their drawing, which defeats the primary purpose of our project (we’re trying to make the whole experience easier than making the diagram digitally). We then settled on a different solution: the user can circle where the corners of the line are, and the program will automatically draw a line between them. We also added a stretch goal of snapping a nearly (but not perfectly) vertical or horizontal line to exactly 0° or 90°.
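For the stretch goal, the snapping logic itself is simple; a sketch (with a placeholder tolerance) is below.

```python
import math

def snap_segment(x1, y1, x2, y2, tol_deg=5.0):
    """Snap a nearly horizontal or vertical segment to exactly 0° or 90°."""
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180
    if angle < tol_deg or angle > 180 - tol_deg:   # nearly horizontal
        y2 = y1
    elif abs(angle - 90) < tol_deg:                # nearly vertical
        x2 = x1
    return x1, y1, x2, y2
```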

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There are no updates to the schedule for our project. We did, however, add an additional design page for our GUI/frontend. This page will contain all of the user’s received images (up to 100 images). An example of what this would look like is below:

There will be an “Open Image” button, as shown in the last column, that the user can click whenever they decide to open an image. The image name is the name given by the sender. We also want to display the sender of each image so that the user can keep track of which image came from whom, reducing the amount of external communication (the sender wouldn’t have to text the user to let them know they sent an image). This also helps users keep track when multiple users are sending images to each other within one session. Currently, we’ve been working on our frontend/GUI and our code for vectorizing a diagram.
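A hedged sketch of this page using a PySimpleGUI Table is below. Since a plain Table can’t embed a per-row button, the sketch uses a single “Open Image” button under the table that acts on the selected row instead of a button column; the rows are dummy data.

```python
import PySimpleGUI as sg

rows = [["diagram1.svg", "Denise"],
        ["circuit.svg", "Ronald"]]
inbox_layout = [
    [sg.Table(values=rows, headings=["Image Name", "Sender"],
              num_rows=10, enable_events=True, key="-INBOX-")],
    [sg.Button("Open Image")],  # opens whichever row is currently selected
]
```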

Lisa’s Status Report for March 19, 2022

This week, I worked with Ronald on the GUI frontend for the project. Specifically, I read the PySimpleGUI documentation and created a very basic home page for the GUI. I’ve attached an image of what it looks like below.

Ronald and I then discussed how to split up the basic frontend of the GUI, and what we aimed to accomplish for the GUI by Monday. I’m also in charge of making the whiteboard interface for the GUI (including the send and receive buttons on the whiteboard).

So far, our progress is on schedule. Next week, Ronald and I would like to get most of the functional aspects of the GUI done (we’ll focus on making it more user-friendly and aesthetic in the week after). Specifically, the main things we want to get done next week are developing the whiteboard interface for the GUI and ensuring that we can start a server session through the GUI.