Team Status Report for 2/17/2024

Main Accomplishments for This Week

  • Design presentation
  • Initial ML and CV combined integration for basic ASL alphabet testing
  • Confirmation of inventory items for purchase

Risks & Risk Management

  • Currently there are no significant risks for the team as a whole, so no mitigation is needed. Each team member has concerns within their respective role, but nothing to the extent of jeopardizing the entire project.

Design Changes

  • Natural language processing (NLP) has been added to the software development plan. Because sign language does not translate directly into full, syntactic sentences, we realized we need a machine learning algorithm for grammar correction to achieve proper translation. We intend to use open-source code once we understand the NLP computation, and plan to implement it in later stages, specifically after the ASL ML algorithm and CV programming are complete. Although this expands the software scope somewhat, all team members are on board to contribute to this part together to minimize any cost it may add to the overall timeline.
  • Three reach goals have been confirmed for after the MVP is completed: 1. speech-to-text, 2. a signal from the ASL user marking the end of a sentence (a flash of light or an audio notification), and 3. facial recognition to enhance the ASL translations. All of these aim to make conversation between the user and the receiver smoother and more fluid.

Schedule Changes

 

Additional – Status Report 2

Part A was written by Ran, Part B by Sejal, and Part C by Leia.

Part A: Our project by nature enhances public health and welfare by ensuring effective communications for sign language users. In the context of health, both obtaining and expressing accurate information about material requirements, medical procedures, and preventive measures are vital. Our project facilitates these communications, contributing to the physiological well-being of users. More importantly, we aim to elevate the psychological happiness of sign language users by providing them with a sense of inclusivity and fairness in daily social interactions. In terms of welfare, our project enables efficient access to basic needs such as education, employment, community services and healthcare via its high portability and diverse use-case scenarios. Moreover, we make every effort to secure the functionality of mechanical and electronic components: the plastic backbone of our phone attachment will be 3-D printed with round corners, and the supporting battery will operate at a human-safe low voltage.

Part B: Our project prioritizes cultural sensitivity, inclusivity, and accessibility to meet the diverse needs of sign language users in various social settings. Through image processing, the system ensures clarity and accuracy in gesture recognition, accommodating different environments. The product will promote mutual understanding and respect among users from different cultural backgrounds, uniting them through effective communication. Additionally, recognizing the importance of ethical considerations in technology development, the product will prioritize privacy and data security, for example by implementing data protection measures and transparent data practices throughout the user journey. By promoting trust and transparency, the product will foster positive social relationships and user confidence in the technology. Ultimately, the product aims to bridge communication barriers and promote social inclusion by facilitating seamless interaction through sign language translation, meeting the needs of diverse social groups and enabling inclusive communication in social settings.

Part C: Our product is meant to be manufactured and distributed at very low cost. The complete package is a free mobile application and a phone attachment, which will be 3D printed and require no screws, glue, or assembly. The attachment is simply put on or taken off the phone at the user’s discretion, even if the phone has a case. The product’s most costly component is the Arduino, at about $30, and we expect the total hardware cost to be under $100. Not only are production costs minimal, but given that the product’s purpose is equity and diversity, it will not be exclusively distributed; purchasing it is like buying any other effective, helpful item for daily living. If it enters the market, it should not significantly impact existing goods and services for the deaf and hard-of-hearing communities. However, our product and software are optimized for the Apple ecosystem. Our team members all use Apple products, so while the project has the potential for cross-platform support, it will not be tested for it. Currently, this may be a cost for users who do not use Apple operating systems; still, since Apple products are popular and common, we feel the product remains economically reasonable overall.

Leia’s Status Report for 2/17/2024

Progress

I have done more research and trade-off analyses on the items we will need for the hardware side. I intend to purchase the Arduino Nano 33 BLE for its Bluetooth capability and compact size. For the product to be portable and rechargeable, I will attach a lithium-ion battery to the Nano to power it. The reason for this battery in particular is that most handheld devices use these batteries; hence, when a person charges their phone with a wired or inductive/wireless charger, they can also charge the product’s battery. Currently, I am considering three different display types for the dual-screen aspect: LCD, OLED, and E-INK. They are all comparably priced, and each has balanced pros and cons. I plan to try all three screen types after I find the most suitable model of each. They all must be about 2.5” diagonally so the screen is large enough for the other person to see, but not so big that it makes the phone difficult to handle. Everything will be wire-connected, and I have planned beforehand how I will connect the components together.

I did a minimalist design of the mobile app wireframes. After receiving feedback that I should check whether mobile app development for Apple operating systems requires a $99 licensing subscription, I confirmed that I do not need that annual purchase to create an app for local use. The subscription is for distributing an app on Apple’s App Store, which we do not intend to do for this project.

Next Steps

After confirming with team members, I will submit purchase forms for the above items and plan in depth how I will connect them all together. I will also practice using CAD software so that I can eventually design a phone attachment, to be 3D printed, that holds all the components. It must not be too bulky and must be adjustable in tilt, which will require studying phone stands currently on the market for comparison and development. Since I established the wireframes for the mobile app, I will begin developing the front-end in Xcode. When I have time, I will also delve into the back-end to identify how the app can connect to a cloud database. Further along the timeline, after I receive the hardware parts, I will try to connect the Arduino to the mobile app via Bluetooth and test the app’s ability to manipulate the display.
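
As a rough starting point for the Bluetooth step, below is a minimal sketch of how the app could scan for and connect to the Nano 33 BLE using Apple’s CoreBluetooth framework. The service UUID and class name are placeholders chosen for illustration, not final design decisions.

import CoreBluetooth

// Minimal sketch: connect the iOS app to the Nano 33 BLE.
// The service UUID below is a placeholder; the real one will come from
// whatever sketch we flash onto the Arduino.
final class DisplayLink: NSObject, CBCentralManagerDelegate {
    private let displayServiceUUID = CBUUID(string: "180A")  // placeholder
    private var central: CBCentralManager!
    private var nano: CBPeripheral?

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    // Called when Bluetooth powers on; start scanning for our service.
    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [displayServiceUUID], options: nil)
    }

    // Found a matching board; keep a reference and connect.
    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        nano = peripheral
        central.stopScan()
        central.connect(peripheral, options: nil)
    }

    // Connected; the next step would be discovering services and characteristics.
    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        print("Connected to \(peripheral.name ?? "Nano 33 BLE")")
    }
}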

Leia’s Status Report for 2/10/2024

Progress

I did a little more comparative research between Arduino and Raspberry Pi to ensure that using an Arduino module is right for our project. The articles support using the former, particularly the Arduino UNO, because of its user-friendliness, energy efficiency, simplicity, and versatility. Moreover, it supports Bluetooth as well as seamless connection to LCD displays. However, that unit is too bulky for our purposes, so I looked into the numerous other Arduino boards available and have homed in on the Nano 33 BLE. It is much smaller, capable of machine learning in case we need it, and has Bluetooth features. It also has 1 MB of flash memory, which is enough for storing simple text translations, which would take at most a couple of KB. I believe it can be coupled with 2.8–3.5 inch LCD displays; I examined its data sheet, and since it can connect to 16×2 displays, I expect it can do the same with wider screens. A backup I am considering is the Arduino GIGA Display Bundle, which packages an Arduino GIGA R1 WiFi board and a GIGA Display Shield together and is relatively flat, but it is very complex and powerful, so it may not be the right fit for our project.

I have set up the Xcode platform and the Arduino application on my computer to prepare for Swift and Arduino programming. I also studied (1) how to develop an app that can control the Arduino via Bluetooth, and (2) how to connect to cloud storage through the app. Further plans for both are addressed in the “Next Steps” section.

Next Steps

A concern is how to attach a rechargeable battery to the Arduino so it does not need to be constantly plugged in for power. Further investigation is needed to find a charger that won’t fry the board and is small, flat, and sleek enough. The integration between the battery and the Arduino needs to be added as a task. Moreover, I need to decide which LCD display to get so I can determine whether a Nano can be wired to it.

For Arduino control with the mobile app, the first step will be to design a plain UI. Then, once the Arduino is acquired, I will test its Bluetooth capabilities, probably with LEDs or temperature sensing, and move on from there to testing text transmissions.
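
As a sketch of what that text-transmission test could eventually look like on the app side, the snippet below writes a short translation string to a writable BLE characteristic. It assumes the peripheral is already connected and a characteristic has been discovered; the function and parameter names are hypothetical.

import CoreBluetooth

// Sketch only: assumes `peripheral` is already connected and
// `textCharacteristic` was found during characteristic discovery.
func sendTranslation(_ text: String,
                     to peripheral: CBPeripheral,
                     over textCharacteristic: CBCharacteristic) {
    // Encode the translated sentence as UTF-8 and write it to the board,
    // which would then draw it on the attached display.
    guard let payload = text.data(using: .utf8) else { return }
    peripheral.writeValue(payload, for: textCharacteristic, type: .withResponse)
}

// Example usage once scanning/connection is working:
// sendTranslation("HELLO, HOW ARE YOU?", to: nano, over: textCharacteristic)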

For cloud data retrieval from the app, I identified Firebase, an application development platform from Google that provides backend cloud computing services. I found a guide on how to install and use the Firebase SDK (software development kit) in Xcode, but the cloud storage implementation needs to be discussed further with team members, since it concerns retrieving ML and CV data for use in the app.
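
Since those details are still under discussion, the following is only a sketch of what reading a translation from Firebase’s Firestore could look like in Swift. The “translations” collection and “latest” document are hypothetical names, and it assumes the Firebase iOS SDK and a GoogleService-Info.plist have already been added to the Xcode project.

import FirebaseCore
import FirebaseFirestore

// Sketch only: fetch the most recent translation produced by the ML/CV
// pipeline. Collection and field names are placeholders for illustration.
func fetchLatestTranslation() {
    if FirebaseApp.app() == nil {
        FirebaseApp.configure()   // normally called once at app launch
    }
    let db = Firestore.firestore()
    db.collection("translations").document("latest").getDocument { snapshot, error in
        if let error = error {
            print("Fetch failed: \(error)")
            return
        }
        // Expecting something like ["text": "HELLO"] written by the backend.
        let text = snapshot?.data()?["text"] as? String
        print("Latest translation: \(text ?? "none")")
    }
}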