Jaden D’Abreo Status Report 11/18/2023

This week I spent time debugging the display of the circuits: when using the coordinates produced by the computer vision, the display was not rendering correctly. Because most of the remaining phone application work depends on the circuit display, this bug blocked progress on everything else. I made good progress on the fix and should not need much more time to get the display fully working. I was not able to allocate as much time to the project this week because other commitments took up a lot of my time. While I am behind on my work, I plan to spend a good amount of time during Thanksgiving to get back on track; the timeline did not set Thanksgiving as work days, so working through the break will let me catch up. I expect to have the phone application completely done by the end of next week, and the week after that I will integrate the computer vision with the phone application and begin testing. That testing includes making sure the computer vision receives the uploaded picture correctly and that the application receives the coordinates needed to display the circuit. As the phone application code is already tailored to send and receive these items, I plan to get that done early in the week.


Stephen Dai’s Status Report for 11/18/23

This week I began creating integration code and continued working toward our accuracy requirements. One of the things I did was write a script that turns a directory of component images into a single YML file that can be parsed. Before (and for the demo), every run re-ran image preprocessing and feature detection on every image in the directory; now the YML file is created once and parsed for the already-computed feature vectors on subsequent runs. The dataset file is bigger than I had anticipated and mentioned in the design report: there I estimated the file would be about 500 KB for a 50-image dataset, but right now, with a 27-image dataset, the file is 1.4 MB, which means we can expect our 50-image dataset to be around 3 MB. Although this is larger than we anticipated, it is still plenty small. The YML format adds overhead because of metadata that makes the file easy to parse, like a dictionary/map lookup, so we are okay with this size tradeoff.
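
For anyone curious, OpenCV’s FileStorage handles exactly this kind of serialization. Below is a minimal sketch of the idea, not the actual project script: the folder name, entry names, and layout are assumptions. Keypoints and descriptors are written once, then parsed back instead of recomputed.

    // Sketch only: "components" folder, entry names, and layout are assumptions.
    #include <opencv2/opencv.hpp>
    #include <filesystem>
    #include <string>
    #include <vector>

    int main() {
        cv::Ptr<cv::ORB> orb = cv::ORB::create();
        cv::FileStorage fs("dataset.yml", cv::FileStorage::WRITE);

        int idx = 0;
        for (const auto& entry : std::filesystem::directory_iterator("components")) {
            cv::Mat img = cv::imread(entry.path().string(), cv::IMREAD_GRAYSCALE);
            if (img.empty()) continue;

            std::vector<cv::KeyPoint> kps;
            cv::Mat desc;
            orb->detectAndCompute(img, cv::noArray(), kps, desc);

            // One map entry per image: label, keypoints, descriptors.
            fs << ("entry_" + std::to_string(idx++)) << "{"
               << "label" << entry.path().stem().string()
               << "keypoints" << kps
               << "descriptors" << desc << "}";
        }
        fs.release();

        // At classification time, parse once instead of re-running detection:
        cv::FileStorage in("dataset.yml", cv::FileStorage::READ);
        std::vector<cv::KeyPoint> kps;
        cv::Mat desc;
        in["entry_0"]["keypoints"] >> kps;
        in["entry_0"]["descriptors"] >> desc;
        return 0;
    }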

I have also started testing and polishing accuracy. I ran a test on 66 component images, and 64 of them were identified correctly (~97% accuracy)! That number is misleading, though: 42 of the images were of components that have an orientation (voltage and current sources, diodes), and only 24 of those were identified with the correct orientation. Besides the difficulty of classifying the correct orientation of those components, I also noticed that current sources and voltage sources often had very similar matching scores (sometimes the same score, with the correct one only happening to be listed first). As a result, one thing I want to experiment with is using SIFT instead of ORB for feature detection. Now that orientation actually matters, SIFT makes sense, so this is definitely something I want to try next week.
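
For reference, the swap itself is small; the main wrinkle is that SIFT produces float descriptors, so the brute-force matcher norm changes from Hamming to L2. A rough sketch (function and variable names are illustrative, not the project code):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Sketch of the ORB -> SIFT swap being considered.
    void match_with_sift(const cv::Mat& query, const cv::Mat& train) {
        cv::Ptr<cv::Feature2D> detector = cv::SIFT::create();  // was cv::ORB::create()
        std::vector<cv::KeyPoint> kq, kt;
        cv::Mat dq, dt;
        detector->detectAndCompute(query, cv::noArray(), kq, dq);
        detector->detectAndCompute(train, cv::noArray(), kt, dt);

        // ORB's binary descriptors use NORM_HAMMING; SIFT's floats need NORM_L2.
        cv::BFMatcher matcher(cv::NORM_L2, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(dq, dt, matches);
    }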

Last week I said that I wanted to improve the node detection, but I realized in testing this week that it actually performs pretty well. I played around with some parameters and it worked consistently.

My next steps are to continue working on both individual component accuracy and full circuit accuracy. By the next report I want to have the complete dataset that will be used, and accuracies hopefully in the ballpark of 80-90%.

Team Status Report for 11/11/23

Stephen made several large changes to the computer vision subsystem in an effort to improve classification accuracy so that we can hit our accuracy requirements. Because of this, he did not get to running benchmark tests to quantify how accurate the system currently is. The goal was to reach 80% accuracy by the end of this week and 90% by the end of next week. As long as Stephen can get close to the 90% mark by the end of next week, he will remain on schedule; otherwise, we will have to use some slack until that mark is hit.

Jaden worked on the application’s UI, specifically the value input for components. He got the feature working so that you can tap each individual component on the circuit and enter a value, though he still needs to figure out how to display these values on the screen. Devan worked mostly on integrating diodes into his circuit simulator. This proved very challenging because he has to iteratively solve a system of equations in a loop until the solution converges. He is a little behind, but will make this up over Thanksgiving.

Starting next week we’re going to gradually start working on the integration of our subsystems. This means creating bridge files for the C++ classes that Stephen and Devan have created, as well as handling some other minor details like those mentioned in last week’s report. We will also start doing unit and integration testing in order to benchmark requirements and make further refinements.


Devan Grover’s Status Report for 11/11/23

This week I worked on integration and on getting diodes into my model. I wrote a function to create a netlist from the (coordinate, component) data structure we use to send information between our backend and frontend. The function registers the coordinates as nodes and creates a netlist entry for each component, with that component’s nodes determined from its coordinates. It then condenses the netlist by treating nodes connected by a wire as a single node. This process makes it easier to integrate the different parts of our application.
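
A minimal sketch of what that condensation step can look like, using union-find to merge wire-connected nodes. The Component struct and field names are assumptions about our data layout, not the actual code:

    #include <numeric>
    #include <string>
    #include <vector>

    // Assumed layout: each component knows its two endpoint node ids.
    struct Component { std::string type; int nodeA; int nodeB; };

    // Union-find over node ids, with path compression.
    struct DSU {
        std::vector<int> parent;
        explicit DSU(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
        int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
        void unite(int a, int b) { parent[find(a)] = find(b); }
    };

    // Merge wire-connected nodes, then rewrite every component's endpoints
    // so the netlist refers to condensed electrical nodes.
    void condense_wires(std::vector<Component>& comps, DSU& dsu) {
        for (const auto& c : comps)
            if (c.type == "wire") dsu.unite(c.nodeA, c.nodeB);
        for (auto& c : comps) {
            c.nodeA = dsu.find(c.nodeA);
            c.nodeB = dsu.find(c.nodeB);
        }
    }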

I have also been trying to get the diode working in my modified nodal analysis circuit simulator. This is proving to be incredibly difficult because I must iteratively solve the system of equations to arrive at a correct solution. I have been researching the Newton-Raphson method and how to implement it in my code with the diode model, but I have not been successful yet. Since I have not gotten the diode to work, I am falling a little behind, but I will work on this over Thanksgiving break to compensate.
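
To show the shape of the loop I am aiming for, here is a self-contained sketch of the Newton-Raphson companion-model iteration on the simplest possible case (a voltage source in series with a resistor and a diode) rather than the full MNA system, which is the part still giving me trouble. Component values, the initial guess, and the tolerance are illustrative.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double Is = 1e-14;    // diode saturation current (A)
        const double Vt = 0.02585;  // thermal voltage at room temperature (V)
        const double Vs = 5.0, R = 1000.0;

        double vd = 0.6;  // initial guess near a silicon forward drop
        for (int iter = 0; iter < 100; ++iter) {
            double id   = Is * (std::exp(vd / Vt) - 1.0);
            double g_eq = Is / Vt * std::exp(vd / Vt);  // linearized conductance
            double i_eq = id - g_eq * vd;               // companion current source
            // Solve the linearized circuit: (Vs - vd_new)/R = g_eq*vd_new + i_eq
            double vd_new = (Vs / R - i_eq) / (g_eq + 1.0 / R);
            // NOTE: production simulators damp/limit this step to avoid
            // exp() overflow when the update overshoots.
            if (std::abs(vd_new - vd) < 1e-9) { vd = vd_new; break; }  // converged
            vd = vd_new;
        }
        std::printf("diode voltage: %.4f V\n", vd);
        return 0;
    }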

I have run randomly created test circuits through both my circuit simulator and an existing web-based simulator. In the web simulator I have to drag and drop the components and then click each node I want to measure; this works at a small scale but will be painful when I am stress testing my simulator. Thus, I plan to run automated testing against LTspice: I will run the same netlists through my simulator and LTspice and compare the outputs to ensure the accuracy requirement is met.
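
The comparison itself can be a simple relative-tolerance check on the node voltages; how the LTspice results get exported and parsed is still to be worked out. A sketch, with the 1% tolerance as a placeholder rather than our actual requirement:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Compare node voltages from my simulator against LTspice's.
    bool outputs_match(const std::vector<double>& mine,
                       const std::vector<double>& ltspice,
                       double rel_tol = 0.01) {
        if (mine.size() != ltspice.size()) return false;
        for (std::size_t i = 0; i < mine.size(); ++i) {
            double denom = std::max(std::abs(ltspice[i]), 1e-12);  // avoid /0
            if (std::abs(mine[i] - ltspice[i]) / denom > rel_tol) return false;
        }
        return true;
    }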

Stephen Dai’s Status Report for 11/11/23

I made A LOT of additions/changes to the computer vision code this week. Here is an overview:

  1. If a detected node does not have at least two detected neighbors, it is removed. This filters out stray circles that might be detected as nodes, erring on the side of missing nodes rather than identifying fake nodes/circles. Because this filter catches false positives, we can make the Hough circle transform less strict, so it is more lenient in its circle detection.
  2. I changed the way that component subimages are generated. Previously I created a hard-coded box with fixed X by Y dimensions based on the coordinates of the component’s nodes. Now I create a straight line from one node to the other and sweep it upwards and downwards (or left and right, depending on whether the component is horizontal or vertical), stopping once the line contains no black pixels. Then I pad by a fixed 20 pixels, which gives a box that is guaranteed to encompass the entire component.
  3. I finally found a good combination of preprocessing functions to run on component subimages. I spent so much time trying to find a way to remove shadows and the grain of the paper from the images, which were seriously messing up the individual component classification. The combination I ended up with was median blurring -> non-local means denoising -> adaptive thresholding -> median blur (this chain appears in the sketch after this list). From my testing, this combination does really well at removing grain from the paper (noise) and smoothing out shadows. I don’t think I will stray from this combination besides fine-tuning parameters in the future.
  4. I added an additional step before feature detection and matching (also shown in the sketch after this list). I run a Hough circle transform on the processed subimage to determine whether the component symbol is one with a circle (voltage source, current source, lightbulb). If it is, feature matching is only performed against those components in the dataset. If it is not, the component must be a wire, resistor, switch, or LED, so I perform probabilistic Hough line detection and look at the difference between the maximum and minimum X or Y coordinates (depending on whether the component is vertical or horizontal). If the difference is small (less than a third of the subimage width/height), the component must be a wire; otherwise it must be a resistor, switch, or LED. I did these things because the individual component detection was quite sad: sometimes a wire would get classified as a voltage/current source, which made no sense. I figured that because wires are so much simpler than every other component, I could special-case them and not even require feature matching for them.
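
Here is the promised sketch of steps 3 and 4 combined: the preprocessing chain followed by the circle/line gating, assuming an 8-bit grayscale subimage. All parameter values are illustrative placeholders rather than the tuned ones, and the function and enum names are made up for the sketch.

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <climits>
    #include <vector>

    enum class Family { Circular, WireLike, Other };

    Family classify_family(const cv::Mat& sub, bool horizontal) {
        // Step 3: median blur -> non-local means denoise -> adaptive
        // threshold -> median blur, to strip paper grain and soften shadows.
        cv::Mat t1, t2;
        cv::medianBlur(sub, t1, 5);
        cv::fastNlMeansDenoising(t1, t2, 10);
        // THRESH_BINARY_INV so ink pixels become nonzero for the Hough calls.
        cv::adaptiveThreshold(t2, t1, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv::THRESH_BINARY_INV, 11, 2);
        cv::medianBlur(t1, t2, 3);
        const cv::Mat& p = t2;

        // Step 4a: a detected circle means voltage/current source or lightbulb.
        std::vector<cv::Vec3f> circles;
        cv::HoughCircles(p, circles, cv::HOUGH_GRADIENT, 1, p.rows / 4.0,
                         100, 30, 10, 0);
        if (!circles.empty()) return Family::Circular;

        // Step 4b: probabilistic Hough lines; a small spread perpendicular
        // to the component's axis means it is just a wire.
        std::vector<cv::Vec4i> lines;
        cv::HoughLinesP(p, lines, 1, CV_PI / 180, 40, 20, 5);
        int lo = INT_MAX, hi = INT_MIN;
        for (const auto& l : lines) {
            int a = horizontal ? l[1] : l[0];  // y for horizontal, x for vertical
            int b = horizontal ? l[3] : l[2];
            lo = std::min({lo, a, b});
            hi = std::max({hi, a, b});
        }
        int extent = horizontal ? p.rows : p.cols;
        if (!lines.empty() && (hi - lo) < extent / 3) return Family::WireLike;
        return Family::Other;  // resistor, switch, or LED -> feature matching
    }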

The improvements I want to make next week are to make the node detection more consistent even with shadows. I think I will experiment with new preprocessing algorithms like what I did with the subimage preprocessing. The other thing I want to try to improve is the feature detection by tweaking some parameters.

Tests that I have run and am planning to run look like this:

Node detection/subimage generation: I take images of drawn circuits and then feed them into my subimage generator. I validate that the detected nodes correspond to the drawn nodes by showing an image of detected circles overlaid on top of the circuit image. I also visually validate that the subimages generated properly encompass the entire component.
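
The overlay check is only a few lines of OpenCV; a minimal sketch, assuming a grayscale input and circles from cv::HoughCircles:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Draw detected node circles over the circuit image for visual checking.
    void show_detected_nodes(const cv::Mat& gray_circuit,
                             const std::vector<cv::Vec3f>& circles) {
        cv::Mat overlay;
        cv::cvtColor(gray_circuit, overlay, cv::COLOR_GRAY2BGR);
        for (const auto& c : circles) {
            cv::Point center(cvRound(c[0]), cvRound(c[1]));
            cv::circle(overlay, center, cvRound(c[2]), cv::Scalar(0, 0, 255), 2);
        }
        cv::imshow("detected nodes", overlay);
        cv::waitKey(0);
    }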

Component classification: In order to determine which image preprocessing algorithms to use, I tried many different combinations and displayed the image resulting from running each algorithm. This way I could intuitively tune which algorithms to use and which parameters to change. To validate the output, I print the top three component classifications. Based on our use-case requirements, the top choice should be the correct classification 90% of the time.

Full circuit classification: I run everything together and print the output of the circuit classification, which is a list of the five best circuit classifications. Based on our use case requirements, one of the five classifications should be correct 90% of the time.

The next step in my testing is to actually measure these results for a quantitative picture of how well my subsystems are performing. For individual component classification, I will work with a testing dataset of 50 drawn components from our user group and see if we hit the 90% mark. For full circuit classification, I will work with a testing dataset of 28 different circuit images from our user group and see if we hit the 90% mark as well.

Jaden D’Abreo Status Report 11/11/2023

This week I completed the functionality for the user to tap the circuit they want to analyze, as well as tapping components to input values. However, there is still work to be done on the component-tapping functionality. First, I need to display the value the user entered somewhere on the page so they can keep track of it. In addition, I need to display a next button once all the components have values, so the circuit can be sent to the circuit simulator. There is also one bug to fix with the tapping functionality: if the user taps a component on the previous page (the page for selecting a circuit), the value-input box appears. This bug is minor and does not cause any functionality problems, but it is worth fixing if time permits. I am on track with my work and plan to have the input functionality fully completed by Tuesday. I am planning on testing my subsystem with the other subsystems sometime in the next week. I will analyze the results by making sure the application sends and receives data correctly and still behaves the same as it does with the hardcoded values it currently uses.

Team Status Report for 11/04/23

We are on schedule with our progress. This week we finished developing our subsystems so that we are ready for the demo next week. Stephen finished converting the Python proof of concept to C++ and has developed a functional example of the computer vision system that he will show during the presentation. Jaden finished debugging the phone application, which can now display the circuit components from coordinates using a sliding-page model. Devan was able to correctly simulate circuits that contain voltage sources, current sources, and resistors. Therefore, we now have each individual subsystem working at a basic level to show for the demo next week.

For the coming weeks we plan to start integrating the subsystems. We expect integrating the phone application and the computer vision code to take time and involve debugging. Therefore, before making further advancements on each subsystem our priority is to integrate and test the subsystems as one system. It should not be too tough to integrate the simulator with the application because the simulator has been developed within the build environment for the application. This means we already know the code to simulate circuits can be built and run on an iOS device like we intended. We are currently on track with the schedule we have planned.

Devan Grover’s Status Report for 11/04/2023

I made a lot of really good progress this week. I spent lots of time trying to implement modified nodal analysis in C++ using the Eigen library, and I was able to get it working for circuits that use resistors, independent current sources, and independent voltage sources. This means I can now take in a netlist, parse it, generate the required matrices, and analyze the circuit. The result of the analysis gives me the voltage at each node and the current through each voltage source; with this information, I can then find the current going through every component.
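
For a picture of what this looks like, here is a hedged sketch of the MNA core with Eigen: resistors stamp conductances into the node block, each voltage source adds a row/column pair, and a single LU solve yields the node voltages followed by the source branch currents. Node 0 is treated as ground. The struct and helper names are illustrative, not my actual code.

    #include <Eigen/Dense>
    #include <cstdio>

    // A is the (n+m)x(n+m) MNA matrix for n non-ground nodes and m voltage
    // sources; z is the right-hand side.
    struct MNA {
        Eigen::MatrixXd A;
        Eigen::VectorXd z;
        int n;  // number of non-ground nodes

        MNA(int nodes, int vsources)
            : A(Eigen::MatrixXd::Zero(nodes + vsources, nodes + vsources)),
              z(Eigen::VectorXd::Zero(nodes + vsources)), n(nodes) {}

        // Resistor between nodes a and b (node 0 = ground).
        void resistor(int a, int b, double ohms) {
            double g = 1.0 / ohms;
            if (a > 0) A(a - 1, a - 1) += g;
            if (b > 0) A(b - 1, b - 1) += g;
            if (a > 0 && b > 0) { A(a - 1, b - 1) -= g; A(b - 1, a - 1) -= g; }
        }

        // Independent current source pushing amps into node a, out of node b.
        void isource(int a, int b, double amps) {
            if (a > 0) z(a - 1) += amps;
            if (b > 0) z(b - 1) -= amps;
        }

        // k-th voltage source from node a (+) to node b (-).
        void vsource(int k, int a, int b, double volts) {
            int row = n + k;
            if (a > 0) { A(row, a - 1) = 1;  A(a - 1, row) = 1; }
            if (b > 0) { A(row, b - 1) = -1; A(b - 1, row) = -1; }
            z(row) = volts;
        }

        // Solution: node voltages, then the current through each source.
        Eigen::VectorXd solve() const { return A.partialPivLu().solve(z); }
    };

    int main() {
        MNA mna(2, 1);               // two nodes, one voltage source
        mna.vsource(0, 1, 0, 5.0);   // 5 V from node 1 to ground
        mna.resistor(1, 2, 1000.0);  // 1k/1k divider
        mna.resistor(2, 0, 1000.0);
        Eigen::VectorXd x = mna.solve();
        std::printf("V1=%.2f V2=%.2f\n", x(0), x(1));  // expect 5.00, 2.50
        return 0;
    }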

I also wrote all this code within the environment of our mobile app, which means I know it can compile for iOS. Although I have not yet written an Objective-C header to bridge between C++ and Swift and make my file interact with the main app, I can call my netlist analyze function and ensure it works. I am now going to try to model a diode and get diodes working within the simulator next week: after some studying and research, I will likely have to use the Newton-Raphson method to find a solution to a circuit with a diode. I am on schedule and hope to successfully analyze circuits with diodes in them by the end of next week.
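
Since Swift cannot call C++ directly, the bridge will likely be either an Objective-C++ wrapper class or a plain C interface exposed through the bridging header. A hedged sketch of the latter approach; the names are hypothetical stand-ins, not our API:

    #include <string>

    // Stand-in for the real C++ simulator entry point.
    static std::string analyze_netlist_cpp(const std::string& netlist) {
        return "results for: " + netlist;
    }

    // Swift sees this as a plain C function via the bridging header.
    extern "C" const char* analyze_netlist(const char* netlist) {
        // Returned buffer lives in a static so Swift does not have to manage
        // C++ memory; fine for one call at a time in a single-threaded app.
        static std::string result;
        result = analyze_netlist_cpp(netlist ? netlist : "");
        return result.c_str();
    }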

Jaden D’Abreo Status Report 11/04/2023

This week I was able to complete my expected work. The phone application can now display circuit components from coordinates correctly. The code is ready to be demoed, but since it is not yet integrated with the computer vision code, it uses hardcoded coordinates at this point. I have created five pages, one for each recommended circuit, and will create a sixth page in case the user needs to re-upload a circuit. Next week I plan to focus on integrating the computer vision code with the current state of the phone application. This includes correctly sending the image the user uploaded to the phone application and parsing the coordinates the program creates to display the circuit. As I have never done this before, I do not know how long it will take, but I expect less than a week, hence allocating the whole week for it. I am on track with my work and plan to spend all of Sunday prepping for the interim demo. In addition, if we finish integrating the computer vision and phone application early in the week, I expect to finish the page allowing users to input the values for each of their components.

Stephen Dai’s Status Report for 11/4/23

I finally finished the Python -> C++ code conversion, and I am happy to say that the code is ready to be demoed! The things I converted this week were the dataset parser file (which currently reads images from a folder), the individual component classifier file, and the main file that classifies the entire circuit.

Tomorrow I will experiment with running the code on either Jaden’s or Devan’s laptop for the demo. I have been working on a Linux machine, and the problem with that is I don’t have a GUI installed that can show images, which I want to do for the demonstration (so I am not just showing standard output). It would also be much better if we could run all of our code off one machine anyway instead of switching computers every time someone presents their part.

The steps for next week will be to start testing and improving the accuracy of the circuit and component detection. The other task is to start working on integration with the mobile application, which will require creating a bridge file so that the C++ classes I made can be used from Swift. I also need to do a little redesigning of the code, such as the dataset parser. Right now, every time you run the program, the dataset parser takes each image in the dataset directory and generates the keypoints and descriptors. What we want instead is for the dataset to be represented by a single file that already contains the keypoints and descriptors. This will honestly be a pretty easy change; coding the function will probably take an hour or two at most. I will also probably make this one of the last things I do because the dataset is not finalized yet.

I am on schedule. I foresee that the accuracy improvements could potentially spill over into the week after next as well, but we have given a slack period in our schedule that can account for this.