Vineeth, Shikha: This week, we put together our initial algorithm and code for barcode localization from an image. By localization, we mean isolating the barcode region, i.e., drawing a colored contour around it. Decoding the barcode is not part of this step. These are the steps in our algorithm:
- Open the image from which to read the barcode and convert it to grayscale. We do this to take advantage of the fact that barcodes have a high horizontal image gradient and a low vertical image gradient. In a grayscale image, each pixel has a value between 0 and 255, where 0 is solid black and 255 is white.
- To identify the areas of high horizontal (x) gradient and low vertical (y) gradient, subtract the y gradient from the x gradient. We use the Scharr operator for this computation; it is essentially a high-pass filter.
- Blur with a 9x9 kernel to smooth out high-frequency noise.
- Threshold so that any pixel with a grayscale intensity greater than 225 is set to white, and everything else to black.
- Close the gaps that exist between the individual bars of the barcode.
- Run a cycle of erosions (which shrink the white region around the bars) and dilations (which grow the white region back).
- Find contours using the findContours function and sort them by area.
- Draw a bounding box around the largest contour.
- Display the image with the bounding box around the barcode.
For code, please see GitHub: https://github.com/SChandra96/barcode_detection/blob/master/barcode_decode.py

Goals for Next Update: First, transition from localizing barcodes in a static image to localizing them in a live video feed. We plan to do this with a polling loop that repeatedly takes a snapshot and runs the algorithm, fast enough for real-time localization display.
MinSun:
- Designed and structured all pages and every feature of the SmartFridge web application on paper
- Created a GitHub repo and a working web application called SmartFridge
- The web app now runs on localhost
- Created templates and static folders and added the jQuery file
- The web app has a basic template structure (base.html)

https://github.com/minsunp/ece_capstone
Goals for Next Update: basic register, login, and confirmation functions; the web app shows the inventory and allows manual input (items, expiry dates).
Because the past week was spring break, we did not make much progress overall. However, in collaboration with our TA, we came up with a set of goals for this upcoming Monday (March 26th), outlined below:
MinSun: Up until the last checkpoint, the web application (built with Django) just had a general template structure. The goal for next Monday is to complete the UI for every page in the application, including the “My Fridge,” “Shopping List,” “My Recipes,” and “My Profile” pages. You will be able to navigate between the pages, and the website will have its complete final look; however, it will not contain the actual inventory and recipe data yet, so the products will be hardcoded for now. Another feature to implement by Monday is registration and login: you will be able to register using your email, and the website will be personalized with your name. https://github.com/minsunp/ece_capstone
Vineeth & Shikha: We have two main goals for the next week. The first is barcode localization on a live video feed. Currently, we have accurate localization for a static image as input (note that the image was not taken through the RasPi camera). We will now write code to do localization in real time from a video feed: in essence, a running loop in which each iteration runs the localization algorithm on a captured image. One issue we may run into is deciding how frequently to run this loop, based on how long the localization algorithm takes, but we'll cross that bridge when we come to it. The second goal is to get our sensor layout up and running, with readable data being produced. We don't necessarily have to interpret the data yet, but we should be able to verify that it makes sense and that the sensors work according to spec and our needs. We plan on setting up the sensors and writing scripts that produce some sort of output demonstrating proper data retrieval. Since these sensors are read as resistance values, simply outputting the values under different amounts of pressure may suffice.
MinSun: Last week, I designed and created a web application with a complete UI. The goal for next week's mid-point demo is a working web application, with all the buttons and features fully functional using hard-coded data for the product inventory. I also plan to put an example recipe on the “My Recipes” page; this should be done through web scraping. https://github.com/minsunp/ece_capstone
Shikha and Vineeth: This week we completed two objectives. The first was to set up and test the FSR sensors to make sure that the readings could be received as expected and fit our requirements. We hooked up the sensors using a breadboard, a 3.5 kOhm resistor, and jumper wires to quickly set up a test environment. The following was our test script; we will add the more refined version to our repo later, but this was strictly a test script:
To describe the readings, a third of a small milk carton is equivalent to about 100 units, and eggs gave readings from the low to mid 100s, varying with the size of the egg. The next goal we reached was localizing barcodes from a video feed. We used the MacBook camera for this, although it would work with any camera of sufficient quality. Simply put, we start a video feed and send frames, or snapshots, resized to a 350x350 image. If a barcode is detected, we contour it; otherwise, rinse and repeat. The localization code is identical to last week's; this week was simply a wrapper to handle a video feed, which was not too hard once you realize that a video is just a series of pictures played fast. The code can be viewed at our Git link: https://github.com/SChandra96/barcode_detection/blob/master/barcode_decode.py
Shikha and Vineeth: This week, our main focus was to put together a large subsystem of our project, the sensor subsystem. For the demo, we arranged 3 FSR sensors in total: 2 circular sensors and 1 square sensor. The first challenge was to extend the sensors away from the breadboard, which we did using jumper cables. Next, we built a setup on the breadboard in which each sensor is connected to an analog input of the Arduino, with a resistor in parallel with the sensor. For the resistor, anything within the reading spec of the sensors would have worked, so we chose a middle grade of 3.3 kOhms. We then had to come up with a setup for the eggs in order to get accurate, precise, and repeatable readings to form threshold benchmarks. This was extremely challenging, as we had to find a position where the sensor would lie completely flat and in contact with the egg. Vertically, we had to cut out cardboard of the right height and fit for the inside of the egg holder to achieve the final setup. Once we had it, we simply thresholded a minimum value for an egg. For the milk, we followed the same approach, except that we had to switch from a plastic container to a carton for a better contact area. We were able to snap thresholds for under 5%, around 50%, and 100%. We also broke 3 eggs in this process. The code can be viewed here: https://github.com/vm9999/SensorsCode
MinSun:
- For the mid-semester demo, we had a fully functional web application with all the features implemented, except for a minor bug. You can manually add new items to the “My Fridge” and “Shopping List” pages, edit them, and delete them.
- The web-scraping part hasn't been implemented yet.
- The goal for next week is to refine and debug the features that are already implemented, and possibly add features that might be helpful for the user.
- I'm also going to use BeautifulSoup to web-scrape one example recipe from a recipe website.
MinSun:
Last week, the web application had all its features implemented, centered on the My Fridge and Shopping List pages. The user could manually add and delete items, and add items from My Fridge to the shopping list. These features currently have minor bugs, where item updates are shown only after the page is refreshed.
This week, I went through a BeautifulSoup tutorial and learned how to use it. I also managed to extract a recipe name from an HTML file by accessing the source code of recipe websites. I'm also in the process of debugging the buggy features mentioned above (it's taking longer than I expected).
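A minimal version of that extraction looks like the following; the file name and the tag holding the recipe name are assumptions for illustration, since real recipe pages structure this differently.

```python
# Hypothetical sketch: extract a recipe name from a saved HTML page.
# The <h1 class="recipe-name"> selector is an assumed page structure.
from bs4 import BeautifulSoup

def recipe_name(path):
    with open(path, encoding="utf-8") as f:
        soup = BeautifulSoup(f, "html.parser")
    tag = soup.find("h1", class_="recipe-name")
    return tag.get_text(strip=True) if tag else None
```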
For next week, we plan to deploy the whole web application to AWS, and possibly also show the items recognized from barcode scanning on the page by looking items up in a grocery database.
Vineeth and Shikha: This week, we implemented the decoding portion of barcode scanning. We initially attempted to do it using only OpenCV, implementing our own version of a scanline approach. However, we had difficulty due to inconsistencies in the number of pixels representing a binary digit, or ‘bar,’ in this context. This, combined with variance in the distance of the barcode from the camera, made it practically impossible to form a threshold at which we could decode the numbers. So we accepted defeat and, after consulting with our TA, ended up using a module called pyzbar. We still need to clean up the code, and will write up a proper description of the algorithm once we optimize it for reading from wider angles and slightly greater distances. We now have a satisfactory accuracy to work with from here on, as the numbers are properly decoded. We are still using the scanline technique described in previous blog posts; however, we make use of existing functions in the pyzbar module to make life easier, much as one uses the transformation or contour functions in OpenCV. Our goals for next week are to output the number to the web app, finish the full sensor setup (assuming parts arrive on time), set up the RasPi camera if possible, and work on the recipe scraping. Our code is viewable in our usual repository.
Vineeth (and partially Shikha for the second session): This week, my task was to get the Raspberry Pi up and running with an OS of choice, as well as the packages and dependencies required to run our code natively on the Pi. At first, I tried to do this using the pre-installed SD card we had ordered. However, this SD card was broken: the splash screen would not show up, and there was no GUI. I was only able to access the Pi kernel to configure the Pi, which was useless. As a result, I did a fresh install of Raspbian, and have been struggling to install OpenCV since there is no officially supported OpenCV build for the Pi. The first attempt failed: the module installed, but incorrectly, due to my improvised installs of cmake and wget. The module existed, but none of its attributes or built-in functions could be accessed. The next round of installation attempts followed a tutorial suggested by our TA. During this, however, we hit two issues. First, the Raspberry Pi's SoC temperature warning came on to indicate that the device was over temperature; then the HDMI connection was lost. When we let the device cool down, we regained the HDMI connection, but the Pi was hung. Tonight, another attempt will be made to keep the Pi from overheating, as the problem may be due to using all 4 cores during the install; we may also use a memory-management technique.
Shikha: This week, I worked on getting our decoding script to communicate with our web app. In the Python script, I used the requests library to make an HTTP GET request to the web app, with the 12-digit UPC code passed in as a query parameter. On the web app side, I added a Django function on the backend that receives the barcode and updates the web app with the corresponding product. To do this, I first downloaded an open-source grocery UPC database (the second database listed on https://www.grocery.com/open-grocery-database-project/), as there seemed to be no free RESTful API that could do a barcode lookup reliably. Next, I used the pandas library to read the CSV database into a pandas DataFrame. Since UPC-12 is a column label in the database, I retrieved the row corresponding to the 12-digit UPC code received from the script and extracted the product name from that row. The new item was then added to the web app's database for persistent storage. This whole process currently runs locally on my laptop; the only part left in this pipeline is to replicate it on the web app running on AWS.
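In outline, the two ends of that pipeline look like this. The endpoint URL, the query-parameter name, and the CSV column names are placeholders for illustration, not the app's actual identifiers:

```python
# Sketch of the barcode -> web-app pipeline. URL, parameter name, and
# CSV column names ("upc12", "name") are placeholders, not the real ones.
import pandas as pd
import requests

def report_barcode(upc, base_url="http://localhost:8000/add_item/"):
    # Send the decoded 12-digit UPC to the web app as a query parameter.
    return requests.get(base_url, params={"upc": upc})

def lookup_product(upc, db_path="grocery_upc.csv"):
    # Read the UPC database into a DataFrame. Keep UPCs as strings so
    # leading zeros survive, then pull the product name for the scanned code.
    df = pd.read_csv(db_path, dtype={"upc12": str})
    match = df.loc[df["upc12"] == upc]
    return None if match.empty else match.iloc[0]["name"]
```

Reading the CSV on every lookup is fine for a demo; a production version would load the DataFrame once at startup.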
MinSun:
- Deployed the app on AWS (EC2): created a new instance, changed the settings for the database and static files, and the app now uses Apache to serve the Django app. Installed and configured MySQL as the database for the fridge inventory items, shopping list items, and profile models.
- Finished debugging: you don't have to refresh anymore to see the updated list of inventory and shopping-list items. The app now deletes all the items in the list and re-renders every item in the database.
- Took barcode-scanned items and rendered them on the My Fridge page. It shows the name of the product, with a default count of 1 and the expiry date set to today's date by default. The user can change these by clicking the Edit button whenever they want.
Vineeth and Shikha: The sensor setup has been successfully extended and calibrated/thresholded for the full setup of 4 eggs and 2 milk cartons. We had to change out the pull-down resistors, as for some odd reason the 3.3 kOhm ones weren't able to provide an adequate reading range in the full setup. The Raspberry Pi is also fully able to run the complete barcode-scanning code and send a request to the web app. We tested the feasibility of direct serial communication between the RPi and the Arduino Uno 3, and it works perfectly, so this is the form of communication we will use. The last tasks left are the automation scripts, final object placement in the demo layout, and the transfer of egg and milk info to the web app. Hopefully, we will complete all of this well in time.
This week, we worked on installing OpenCV and pyzbar on the Raspberry Pi, following an existing online tutorial. Initially, the Raspberry Pi's HDMI output froze when we ran make using all four of the Pi's cores. To get around this and run make successfully, we changed the swap size from 100 MB to 1024 MB in /etc/dphys-swapfile, then changed it back to 100 MB after make finished. Once OpenCV was installed, we installed pyzbar on the Raspberry Pi. Next, we worked on setting up the Raspberry Pi camera module to capture images of product barcodes. The main issue we faced here was that the captured images were extremely blurry when the product was held close to the camera. We fixed this by using pliers to adjust the lens focus and by eliminating the resizing of the captured image from our original barcode-decoding script. At the end of this, we were essentially able to use the camera module (connected to the RasPi) as a barcode scanner. Once a barcode was successfully detected by our decoding script, we made GET requests to the web app hosted on AWS to identify and register the product in our database. Lastly, we tried unsuccessfully to connect the Arduino to the internet with our chosen Wi-Fi module, the ESP-8266. Instead, we decided to connect the Arduino to our Raspberry Pi via a serial/USB connection, and read the data being sent by the Arduino over the serial interface from the /dev/ttyACM0 device file.
MinSun & Shikha: Last week, the web app received barcode-scanned items and rendered them on the My Fridge page on localhost. This week, we managed to send the requests to AWS and store the items in the cloud database. We debugged several issues that came up while porting the code from Python 2 to Python 3, especially with locating files and storing dates. The web app initially showed multiple duplicate items if they were scanned multiple times; instead, we made the application find the corresponding item in the database and simply increment its count. We also fixed the recurring issue of BeautifulSoup not being able to read any recipe HTML files on Python 3 (AWS) by opening the files in read mode with the encoding forced to “utf-8.” The recipe page now properly shows a list of recipe names. We also wrote temporary functions that take in URLs containing sensor-data information, save the data, and render the corresponding items onto the My Fridge page. The final goal on the recipe side is to show a random set of recipes on the recipes page in an organized way. Next week, we plan to take in the sensor data for the eggs and milk cartons and display the correct information on the web app. The Edit function on the My Fridge page should also be implemented.
MinSun:
This week, I modified the Item model of the web app to include an AMOUNT attribute, to incorporate the additional information milk carries. Due to the design differences, I made AMOUNT a mandatory field for all items, and I also thought it would be a useful addition on the user's end. The AMOUNT is shown as “some_number/100.”
I also wrote and debugged 3 functions that receive GET requests of sensor-detected item data from URLs, save the data to the database, and display the information on the web app: one function each for eggs, milk1, and milk2. Since the requests come in continuously from the Arduino, the functions need to find the existing egg or milk items and remove them before overwriting the information with the new data.
I also implemented an EDIT functionality, which allows users to edit both automatically added and manually added data. Each item has an EDIT button; clicking it pops up a modal with all the fields pre-filled with the existing information.
Regarding the recipes, although I managed to do web scraping from recipe websites, I realized that the same ingredients aren't named consistently across different recipe pages, so there's no point in web scraping for our purpose. Instead, I started looking for a food API to implement the find-recipes functionality. Although Shikha and I will try to get this to work, there's no guarantee that it will, so it's a stretch goal.
Shikha and Vineeth: This week, we worked on implementing and testing the final piece of our sensor pipeline: getting the Arduino, with the attached round and square FSRs, to transmit the number of detected eggs and the amount of milk in the cartons to the web app. Initially, we had planned to use the ESP8266 Wi-Fi module to connect the Arduino to the internet. After a lot of struggle with this, however, we decided instead to establish serial communication between the Arduino and the Raspberry Pi and have the Pi make HTTP GET requests to the web app, since it has onboard Wi-Fi. In the loop() function of the Arduino sketch, we write the number of detected eggs and the amount of milk in the cartons as bytes to the serial interface using Serial.write(). We then installed pyserial on the Raspberry Pi and read the bytes sent by the Arduino from /dev/ttyACM0 using pyserial's read(1) call. If the byte read was a number between 0 and 4, we knew the Arduino was transmitting the number of eggs; in this case, our Python code on the Pi would construct the web app URL responsible for registering eggs in the database and make a GET request to it using requests.get(). If the byte read was more than 5, we knew the Arduino was transmitting the amount of milk in the cartons, and the Pi would instead construct and request the web app URL responsible for registering milk in the database. Once we were convinced this pipeline was working, we measured the time between removing or placing eggs on the egg tray and seeing the change reflected on the web app, and likewise the time between changing the amount of milk in one of the cartons and seeing that change reflected on the web app.

In both cases, we observed a delay of about 2-3 seconds, which is quasi-real-time performance. Lastly, we also worked on our final presentation this week. The only thing left to do on our end is to port the sensor and Arduino setup to the fridge and test that both the barcode-scanning and sensor pipelines work reliably and accurately before the demo.
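The byte protocol above reduces to a small dispatch loop on the Pi. A sketch, with placeholder endpoint URLs (the real ones belong to the web app):

```python
# Sketch of the Pi-side relay. Endpoint URLs are placeholders; the byte
# protocol follows the post: 0-4 = egg count, larger values = milk amount.
import requests

EGG_URL = "http://example.com/sensor/eggs/"   # placeholder endpoints
MILK_URL = "http://example.com/sensor/milk/"

def classify(byte_value):
    """Map one serial byte to the endpoint and value it should report."""
    if 0 <= byte_value <= 4:
        return EGG_URL, byte_value            # number of eggs on the tray
    return MILK_URL, byte_value               # milk-amount reading

def relay(port="/dev/ttyACM0", baud=9600):
    import serial                             # pyserial; imported here so
    with serial.Serial(port, baud) as ser:    # classify() is testable alone
        while True:
            data = ser.read(1)                # one byte per reading
            if data:
                url, value = classify(data[0])
                requests.get(url, params={"value": value})
```

Dispatching on the value range keeps the Arduino sketch trivial, at the cost of reserving the 0-4 range exclusively for egg counts.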