Drones That Provide Visual & Sensory Info
18551- Biometric Quadcopter Security:
https://drive.google.com/open?id=1n7-88uXZdLNgPvCgoUYl53FvMd9fNqEN
https://drive.google.com/open?id=1S4pFCIYDsNxFYt50TVuXdDUyDG3hnyYr
- Goal: Drone follows people it does not recognize
- Simple face detection if person is facing drone
- Complexity comes from gait identification and drone navigation
- Gait identification is a relatively new field
- There have been multiple feature-engineering approaches, e.g. background subtraction, optical flow, and motion energy
- ML algorithms can achieve similar accuracy, but at the price of expensive computation
- Hardware included a drone and a tablet
- Achieved 87% accuracy for gait identification using a simple LDA + KNN classifier (see the sketch after this list)
- Measured different accuracies for people walking at different angles in relation to the drone camera
- The group did not get around to implementing gait recognition on the drone, only the basic face recognition. However, they were able to show that the gait classification algorithm worked independently
- They ran into problems with Autopylot code and integration
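A minimal sketch of the LDA + KNN pipeline the report describes, under the assumption that each walking sequence has already been reduced to a fixed-length feature vector (e.g. from background subtraction / motion-energy features); the data below is synthetic and the 87% figure is theirs, not this sketch's.
```python
# Sketch only: LDA for dimensionality reduction + k-NN for gait classification,
# assuming each walking sequence is already a fixed-length feature vector
# (e.g. averaged silhouette / motion-energy features). Data here is synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_people, seqs_per_person, n_features = 5, 20, 128

# Placeholder gait feature vectors: one row per walking sequence.
X = rng.normal(size=(n_people * seqs_per_person, n_features))
y = np.repeat(np.arange(n_people), seqs_per_person)

clf = make_pipeline(
    LinearDiscriminantAnalysis(n_components=n_people - 1),  # project to a discriminative subspace
    KNeighborsClassifier(n_neighbors=3),                     # classify in that subspace
)

scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```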
If we do this project:
- Signals: Face mapping algorithm (PCA, LDA, NN), gait recognition algorithm (an OpenCV-based approach, or ML, since actual drones have succeeded by using it); see the face-detection sketch after this list
- Software: Automatic drone navigation and communication between camera and database
- Hardware/EE: Maybe Arduino & Raspberry Pi
- Basically, make a cheap version of https://www.theverge.com/2018/4/5/17195622/skydio-r1-review-autonomous-drone-ai-autopilot-dji-competitor
- A lot of research would need to go into finding which drone would work best, but I think we need to find drones with flight controllers
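If we pick this up, the "simple face detection when the person faces the drone" piece could start from something like the OpenCV Haar-cascade sketch below. This is a generic baseline, not the group's actual code; the webcam stands in for the drone's video feed, and recognition (matching a detected face to a known person via PCA/LDA/NN) would be a separate step.
```python
# Sketch only: baseline face detection with an OpenCV Haar cascade.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)  # webcam as a stand-in for the drone's video stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```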
18500- Snitch:
https://drive.google.com/open?id=13Rmza26JYVkNdvHaNwJTcUo68GZF2UFh
- We should revisit this paper to review a list of problems/solutions this group faced
- Goal: Create a “snitch” to avoid players and obstacles while flying around an area
- The laptop computed where the players and the quadcopter were located, then sent those locations over Wi-Fi to the Raspberry Pi 3 onboard the quadcopter as coordinates of the players of interest and of the quadcopter itself, expressed in the 3D space of the pitch (see the messaging sketch after this list)
- Components: the Raspberry Pi 3 would receive Hall-effect sensor data (from the IMU's magnetic sensors) from the snitch ball, height information from an ultrasonic rangefinder (via an Arduino Nano), and gyroscopic information from a 9-axis IMU
- We can read this in more depth to understand how to work around network problems and issues with Arduino/Raspberry Pi communication
- Pivoted upon realizing their project was unsafe
- It is dangerous to have people grab things near rotating blades or hang too much weight from a drone
- Faced issue with the drone not correctly reporting the angle it was at
- Abandoned ROS for a ROS-like framework in Python
- Need relatively fast decisions in order to avoid obstacles
- Pivoted to making an assisted flight room mapper that could somewhat avoid obstacles
- New problem: Moving objects and moving the drone broke the map that was created by their mapping software (HECTOR SLAM)
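A rough sketch of the laptop → Raspberry Pi coordinate messaging described above. UDP + JSON is an assumption on my part (the report does not specify the wire format), and the IP address/port are placeholders.
```python
# Sketch only: laptop sends player/quadcopter coordinates to the onboard
# Raspberry Pi over Wi-Fi. UDP + JSON is an assumption; address/port are placeholders.
import json
import socket

PI_ADDR = ("192.168.1.50", 5005)  # placeholder Raspberry Pi address/port

def send_coordinates(sock, players, quad):
    """players: list of (x, y, z) tuples in pitch space; quad: (x, y, z) of the quadcopter."""
    msg = json.dumps({"players": players, "quad": quad}).encode("utf-8")
    sock.sendto(msg, PI_ADDR)

# Laptop side
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_coordinates(sock, players=[(1.0, 2.0, 0.0)], quad=(0.5, 0.5, 1.2))

# Raspberry Pi side (sketch): bind to the same port and decode each message
# rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# rx.bind(("", 5005))
# data, _ = rx.recvfrom(1024)
# coords = json.loads(data)
```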
If we go with this project:
- Goal: Create a 3D mapping tool from drone images. Allow a person to move the drone around to image and scan an area. We could sell this as a disaster-relief drone that could identify missing people and help map out an area that rescuers would need to go into, for their safety
- Signals: Facial recognition, 2D → 3D conversion algorithm (see the disparity-map sketch after this list)
- Software: Visualizing the 3D map. Maybe allow people to interact with the map, maybe not. Also navigation for the drone (flight mapping?) and handling communication between the drone and the remote
- Hardware: Maybe wiring up an Arduino/camera
- Potential challenges: See the group above. Seriously. They had like a billion
- There is already a whole bunch of software out there that tries to do this; I found several companies selling drones specifically for this purpose, and a subreddit dedicated to the idea
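For the 2D → 3D piece, one possible starting point (an assumption, not what either group did) is a disparity map from two overlapping, roughly rectified drone frames using OpenCV's block matcher; a real mapping pipeline would need camera calibration plus structure-from-motion or SLAM on top.
```python
# Sketch only: a very rough 2D -> 3D starting point. Given two overlapping,
# roughly rectified grayscale frames, compute a disparity map (inverse depth,
# up to scale). Filenames are placeholders.
import cv2

left = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("frame_right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # larger values = closer to the camera

vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```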
Identification through Voice Authentication (for smart home?)
18551- Spoken Language Identifier:
https://drive.google.com/open?id=1MyV1K0w8DoISeQXnmvOtuOsV3R_6l7wS
18500- YOLO:
http://course.ece.cmu.edu/~ece500/projects/s19-teamda/
- Goal: One-shot speech identification and speaker recognition via a web app
- Simple voice recognition with extra complexities
- Complexity comes from speech id and speech recognition
- This should be in real time – a recording of an authorized voice should not be able to pass
- Should work within 10 seconds of recording
- Web app and signal processing with ML
- The group did not get around to implementing this specifically with smart home/IoT devices, but it did work with a “logon service”
- Goal: Create device/detector that matches voices to unlocking the ability to control a smart home/hub device
- Simple voice recognition
- Complexity comes from speech id and speech recognition
- This should be in real time – a recording of an authorized voice should not be able to pass
If we do this project:
- Signals: speech ID and recognition with ML
- Software: Authenticator service: if the voice is authorized, then commands are issued (see the sketch after this list)
- Hardware/EE: Maybe Arduino & Raspberry Pi or some sort of controller?
- Probably need to buy a smart home device or hub to use for testing
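A toy sketch of the authenticator gate, assuming averaged MFCCs as a crude stand-in for proper speaker embeddings and cosine similarity against an enrolled profile. Note it does nothing about replay attacks (the "a recording should not pass" requirement), which would need liveness/anti-spoofing on top; filenames and the threshold are placeholders.
```python
# Sketch only: toy voice-authentication gate. Averaged MFCCs stand in for real
# speaker embeddings; a new recording is compared to the enrolled profile with
# cosine similarity, and a command is only issued on a match. No anti-spoofing.
import numpy as np
import librosa

def voice_embedding(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # crude fixed-length "voiceprint"

def is_authorized(enrolled, attempt, threshold=0.9):  # placeholder threshold
    cos = np.dot(enrolled, attempt) / (np.linalg.norm(enrolled) * np.linalg.norm(attempt))
    return cos >= threshold

enrolled = voice_embedding("owner_enrollment.wav")  # placeholder file
attempt = voice_embedding("new_command.wav")        # placeholder file

if is_authorized(enrolled, attempt):
    print("voice authorized: forward command to smart home hub")
else:
    print("voice rejected")
```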
Run With It
18551- DJ Run:
https://drive.google.com/open?id=1j7x7gFJguh4NDrayci-jIjX37-cb6uup
https://drive.google.com/open?id=1ACXJrRQjt2ibIdfNTTfwn2BPBIxn6bMA
18500- Song Hunter:
https://drive.google.com/open?id=1GKYRdOH90qN87q7cdNlU2zZHJN3Mt5LI
https://drive.google.com/open?id=11myryvI3wP7eiJqsaEPzYekTQjYbV7Gb
** They manually assigned BPM labels for each song rather than detecting them (why?); this was a change from the proposal to the final project
- Mobile app (Android or smartwatch)
- Detect runner’s pace (retrieve from phone’s accelerometer)
- Choose songs based on pace (database with song tempo range stored) OR (ML algo that can detect pace of song)
- Detect song pace – MIT Media Lab algorithm OR statistical beat detection (less computation, less accuracy)
- Use existing songs & timewarp (signal processing up/downsampling)
- A phase vocoder changes speed without changing pitch. It uses an STFT, modification in the frequency domain, then the inverse STFT – both groups used this (see the librosa sketch after these notes)
- The wavelet transform is better because it is more advanced & has logarithmic precision ** we will do this
- TDHS (time-domain harmonic scaling) is also an option. Song Hunter decided not to use this because it is not suited for polyphonic material (i.e., music with multiple simultaneous notes/instruments rather than a single melodic line)
- Use existing songs & make an automated remix (add in beats/impulses at the desired pace, with the same intensity/pitch as the song playing)
- Music player with special features? OR fitness tracker with special features?
- Quality of music is a subjective measure
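A sketch of the detect-tempo-and-stretch idea using librosa, whose time_stretch is phase-vocoder based (STFT → modify → inverse STFT), i.e. the approach both groups used; the wavelet-based variant we prefer would replace that step. Filenames and the target cadence are placeholders.
```python
# Sketch only: estimate a song's tempo and time-stretch it toward the runner's
# cadence. Tempo changes while pitch is preserved (phase vocoder under the hood).
import librosa
import soundfile as sf

target_bpm = 170.0                # e.g. runner cadence from the phone accelerometer
y, sr = librosa.load("song.mp3")  # placeholder file

tempo, _ = librosa.beat.beat_track(y=y, sr=sr)  # estimated song BPM
rate = target_bpm / float(tempo)                 # >1 speeds up, <1 slows down

y_stretched = librosa.effects.time_stretch(y, rate=rate)
sf.write("song_at_cadence.wav", y_stretched, sr)
print(f"song tempo {float(tempo):.1f} BPM -> stretched by {rate:.2f}x")
```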
Apps like this exist: https://www.iphoneness.com/iphone-apps/music-pace-running-app/
- These apps match songs, chosen from their own library or from your existing playlist, to your pace, either automatically or manually
- None of these apps alter songs themselves to match pace
Walkie Talkies for Location Tracking
18551- Marco:
https://drive.google.com/open?id=14VWhMspw-yBSIJAEpdgZAIuEKngLfx5-
- New device, or use a smartphone or a smartphone-extension device
- Bluetooth or other way of keeping track of the 2 devices
- Notification when going out of range
- Use ground waves to avoid obstructions – this just means a certain frequency range
- Or use satellite/cell towers – but how would that work out in the mountains?
- Mesh networking: the smartphone running the app acts as the router creating the network
Apps that exist: https://www.technologyreview.com/s/533081/communication-app-works-without-a-cellular-network/
Additional source: https://www.geckoandfly.com/22562/chat-without-internet-connection-mesh-network/
- They use mesh networks
- No internet connection, just cell to cell with bluetooth
- Text and call capabilities
- Radius capabilities
- 200 feet radius distance for this to work
- 330 feet for person-to-person; greater than 330 feet requires a middleman within a 330-foot radius
- Considered important for disaster situations
- Briar is open-source
** This would also be a secure way of communicating, if messages are stored on the phone OR messages are encoded/decoded.
→ this tells us that communication is possible despite whatever obstructions.
→ 330 feet ≈ 0.06 mile ≈ about a 1-minute walk… feasibility?
→ but we also don't need to decode the data from the signal, just measure the strength of the signal (see the path-loss sketch after these notes)
→ “Using bluetooth for localization is a very well known research field (ref.). The short answer is: you can’t. Signal strength isn’t a good indicator of distance between two connected bluetooth devices, because it is too much subject to environment condition (is there a person between the devices? How is the owner holding his/her device? Is there a wall? Are there any RF reflecting surfaces?). Using bluetooth you can at best obtain a distance resolution of few meters, but you can’t calculate the direction, not even roughly.
You may obtain better results by using multiple bluetooth devices and triangulating the various signal strength, but even in this case it’s hard to be more accurate than few meters in your estimates.” – https://stackoverflow.com/questions/3624945/how-to-measure-distance-between-two-iphone-devices-using-bluetooth (has more sources)
→ have a home base device at campsite or “meeting point”
→ use sound
→ metal devices are the biggest obstruction. In the mountains we would be OK. What are other use cases? Do we care if this is only targeted towards hikers/campers?
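For reference, the usual way people turn an RSSI reading into a distance is the log-distance path-loss model sketched below. It also illustrates the Stack Overflow point above: small RSSI fluctuations (bodies, walls, how the phone is held) swing the estimate by meters. The calibration constants here are assumptions.
```python
# Sketch only: log-distance path-loss model for estimating distance from RSSI.
# tx_power_dbm is the RSSI measured at 1 m (calibration); the path-loss exponent
# is ~2 in free space and ~2-4 indoors. Both values are assumptions.
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.5):
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

for rssi in (-60, -70, -80, -90):
    print(f"RSSI {rssi} dBm -> ~{rssi_to_distance(rssi):.1f} m")
```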
NOT SUPER CONFIDENT ABOUT THIS. We could probably make something work, but it requires LOTS of scholarly research before we have any idea of what the design/implementation will look like at a high level.