Luke’s Status Report for 3/30/24

As discussed last week, the primary goal for this week was to integrate all of our subsystems. With that in mind, I focused on the inter-Pi communication that bridges the gap between the core logic and the recommendation code I’ve written over the past couple of weeks. This involved some more work with websockets, but before long we were able to connect the two Pis using a unified data transmission object (the MessageRequest and MessageResponse classes).
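A minimal sketch of what that unified transmission object could look like. Only the class names (MessageRequest, MessageResponse) come from our actual code; the fields and the delimited wire format here are illustrative assumptions.

```java
// Hypothetical sketch of the unified transmission object sent between the
// two Pis over the websocket. Field names and the serialization format are
// assumptions, not our real implementation.
public class MessageRequest {
    public final String type;    // e.g. "QUEUE_SONG" or "SESSION_REC" (assumed tags)
    public final String songId;  // Spotify track id, if applicable
    public final String userId;  // requesting user

    public MessageRequest(String type, String songId, String userId) {
        this.type = type;
        this.songId = songId;
        this.userId = userId;
    }

    // Pack the request into a single delimited websocket frame.
    public String serialize() {
        return type + "|" + songId + "|" + userId;
    }

    // Rebuild the request on the receiving Pi.
    public static MessageRequest deserialize(String frame) {
        String[] parts = frame.split("\\|", 3);
        return new MessageRequest(parts[0], parts[1], parts[2]);
    }
}
```

A MessageResponse would mirror this shape in the other direction (for example, carrying the recommended song id back to the first Pi).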

This process involved a lot of subtle debugging. For example, I caught a few minor bugs in my recommendation code, particularly some edge cases in the session recs. We had issues if fewer than 3 songs had been queued, and also if all the songs had 0 likes for the session; that made the weighted centroid computation incorrect, so I had to fix it.
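The two edge-case guards could be sketched as below. This is a hedged illustration, not our exact fix: the class and method names are made up, but it captures the idea of falling back to uniform weights when every song has 0 likes, and tolerating sessions with fewer than 3 queued songs.

```java
import java.util.Arrays;

// Illustrative guards for the session-rec edge cases described above.
public class SeedGuards {

    // Per-song weights from like counts; if all likes are 0, degenerate to
    // uniform weights so the weighted centroid stays well-defined.
    static double[] weights(int[] likes) {
        int total = Arrays.stream(likes).sum();
        double[] w = new double[likes.length];
        for (int i = 0; i < likes.length; i++) {
            w[i] = (total == 0) ? 1.0 / likes.length : (double) likes[i] / total;
        }
        return w;
    }

    // Take at most `max` seed ids, so sessions with fewer than 3 songs work.
    static String[] seedIds(String[] sortedIds, int max) {
        return Arrays.copyOfRange(sortedIds, 0, Math.min(max, sortedIds.length));
    }
}
```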

After we fixed those bugs, we did some fun testing. I had a bunch of my teammates from soccer go to our website, concurrently queue songs, and experiment with the recommendation functionality. It was really cool to see everything working well. This also exposed some minor bugs that I then worked on.

I would say that we are on schedule, but there is still a lot to be done. The integration process revealed many robustness challenges we are going to have to address, mainly with the management of the queue.

Next week I am going to work on the integration between our backend-managed queue and the actual Spotify queue. This is going to be a little tough because we need to implement song timing by querying the duration of each song. This will be my main priority, and if I have any extra time I will work more on the semantic matching accuracy.

Team Status Report for 3/30/24

  • The most significant risk that could jeopardize this project is still the lighting system. Since our lighting fixture responded at first before starting to fail consistently, we need to test some other configurations, or consider obtaining a second set of lighting fixtures. We plan to mitigate this risk by borrowing equipment from the IDeATe lab: first, we will borrow an ENTTEC DMX-to-USB converter, which a previous ECE Capstone team (Group D4) told us they used. If this does not work, we will borrow their DMX lights as well, to test whether the issue is with our fixture. Our second concern is managing real-world song timings, as the Spotify Web API does not provide a callback when a song finishes playing, although it does return the exact duration of any track. We intend to mitigate this risk by experimenting with different clocks and internal song tracking. The final potential risk is keeping track of which Users have recently engaged with the app, in order to determine the majority count of active users. We plan to tackle this with keep-alive signals from our web app clients, checking whether Users have engaged with our service within the last timeout epoch. 
  • No changes have been made to the system diagram or the project schedule.
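The keep-alive bookkeeping described in the risks above could be sketched as follows. The class, the method names, and the timeout epoch handling are all assumptions for illustration; we have not implemented this yet.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical keep-alive tracker: each client ping records a last-seen
// timestamp, and a User counts as active if they pinged within the timeout
// epoch. Used to compute the majority threshold over active Users only.
public class ActivityTracker {
    private final Map<String, Long> lastSeenMs = new HashMap<>();
    private final long timeoutMs;

    public ActivityTracker(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    // Called whenever a client sends a keep-alive signal.
    public void ping(String userId, long nowMs) {
        lastSeenMs.put(userId, nowMs);
    }

    public boolean isActive(String userId, long nowMs) {
        Long seen = lastSeenMs.get(userId);
        return seen != null && nowMs - seen <= timeoutMs;
    }

    // Simple majority over Users active within the last timeout epoch.
    public int majorityOfActive(long nowMs) {
        long active = lastSeenMs.values().stream()
                .filter(t -> nowMs - t <= timeoutMs).count();
        return (int) (active / 2) + 1;
    }
}
```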

Matt’s Status Report for 3/30/24

  • This week was focused on system integration to get ready for the demo. First, I wanted to make sure that the Pis would be able to talk to each other for every song request made. It worked with one user, as I showed over the past two weeks, but for some reason it did not work when an extra user was making song requests. To fix this I tried making the queue client class use async functions, researched and tried async Spring Boot annotations, and even tried creating a whole new queue client class for every message; none of those worked. I then tried a different approach that does not use a separate queue client class at all. This works every time for multiple users and is actually the simplest approach, with the smallest amount of code. Second, Luke, Thomas, and I integrated all of our parts over three different meeting sessions, so that we can host the app on the first Pi, receive user requests on the second Pi, do the computation on the second Pi, send the recommended song back to the first Pi, and update the queue in the backend (I don’t think it actually pushes changes to the frontend until another song is queued/requested, so that will be the next step). There were multiple bugs associated with this integration, caused by name changes and surfaced through testing, which we worked as a team to fix. Finally, I implemented functionality for changing button colors when users like and dislike, as well as having the app save which songs each user likes and dislikes. The saving functionality is important so that when users request new songs, it won’t reset their likes for other songs, which it used to do. We do this by having a dictionary on each song object that maps user id to whether they liked, disliked, or felt neutral about the song. When the whole song queue gets sent to the JavaScript, the dictionary is checked to see what color each song button should be. 
While I did not implement the actual user keep-alive this week like I said I wanted to, I did implement parts that are important for it. I made a new class that maps each user to the songs they have voted on, and I am adding to it appropriately. This is the object that will be consulted when removing user votes due to inactivity.
  • I’d say my progress is still on schedule. I did not do what I said I wanted to do last week, but I still got a lot of needed work done, so it was a good week. 
  • Next week I want to have the app update the queue when a song recommendation completes, hide the like and dislike buttons for recommended songs, and implement the user keep-alive so that users who are not active will not be able to vote.
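The per-song vote dictionary described above can be sketched as follows. The enum and method names are illustrative; the substance, a map from user id to like/dislike/neutral stored on each song object, matches what is described.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative per-song vote map: each song object holds user id -> vote,
// so requesting a new song no longer resets a user's earlier likes.
public class SongVotes {
    public enum Vote { LIKE, DISLIKE, NEUTRAL }

    private final Map<String, Vote> votes = new HashMap<>();

    public void setVote(String userId, Vote vote) {
        votes.put(userId, vote);
    }

    // Checked when the queue is sent to the JavaScript, to decide what
    // color each song's button should be for this user.
    public Vote voteFor(String userId) {
        return votes.getOrDefault(userId, Vote.NEUTRAL);
    }

    public long likeCount() {
        return votes.values().stream().filter(v -> v == Vote.LIKE).count();
    }

    public long dislikeCount() {
        return votes.values().stream().filter(v -> v == Vote.DISLIKE).count();
    }
}
```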

Thomas’ Status Report for 3/30/24

Thomas Lee

  • Much of this week’s work was done together as a team in the Hamerschlag labs, testing our project ahead of the Interim Demo next week. There were a lot of hours of recompiling, retesting, and refactoring our code as we debugged and made sure our modules worked in the lab demo environment. This was done to ensure that our end-to-end functionality behaved as intended. Additionally, I changed our code so that the User requests would work as an async function, so that the queue displayed on the Web App would update instantaneously, without waiting for the Spotify querying round-trip time. This improved the ‘snappiness’ of our app, making it appear more responsive. Furthermore, I worked with Matt to revamp our Like/Dislike system to serve Users the specific Like button configuration that is unique to them. If Users have queued a song themselves, that song should initially appear as already Liked, and the like buttons on the queued songs that they did not request should initially appear as blank (no Like or Dislike clicked yet), with the Like counts of each song updated accordingly. Previously the states of the buttons would flip if different Request buttons were pressed; now the correct button state and behavior is shown on each User’s own web app client. I also tracked down the ENTTEC DMX-to-USB translation unit, as well as the DMX to XLR/DMX cable we would need to integrate it with our Raspberry Pi and lighting fixture in the IDeATe lab. Hopefully we will have that early next week.
  • Progress is on schedule, synchronized to the timing of the Interim Demo.
  • Next week, I hope to be able to run the lighting test script and produce some controllable lighting fixture behavior, and begin integrating it with the rest of our codebase such that the Raspberry Pi automatically controls it. I will also improve our app backend and queue controller modules, specifically in regards to changing our song veto system to only consider veto votes (dislikes) from active/recently online Users.

Team Status Report for 3/23/24

  • The most significant risks that could jeopardize our project are the lights and knowing when to remove a song from the queue after it has been played. As described a bit in Matt’s status update, the lights worked for a little while until they stopped. We think it has something to do with the cord connection between the computer and the lights; we may need an extra device to connect the two. This risk is being managed by devoting a lot of attention to the lights this week and communicating with a team that has used the lights in the past. We have no contingency plan yet, since we think we can figure it out and have not struggled for long. As for the timing of removing a song, our backend code does not know how long a song is at the time of queuing; we would have to query the Spotify API. That is tricky, since Spotify API calls take time and we want to be as accurate as possible. We are managing this risk by starting to think through the options available to us. We could get the song length back from the Spotify API, play the song only for a certain time (less than the total song length) decided by the backend, then tell the API to play the next song and increment the queue. That would keep all of the timing in one place. We will be thinking of better solutions this week.
  • No changes
  • Photos/videos are in individual status reports
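The timing option described in the first bullet could be sketched as below. This is only an illustration of the idea under assumed names; the safety margin value is a placeholder, not something we have decided on.

```java
// Hypothetical sketch of duration-based queue advancement: since the Spotify
// Web API has no "song finished" callback, the backend plays each track for a
// window slightly shorter than its reported duration, then advances the
// queue itself, keeping all timing in one place.
public class SongTimer {
    // Placeholder safety margin so we advance before the track actually ends.
    static final long MARGIN_MS = 2_000;

    // How long the backend lets a track play before advancing the queue.
    static long playWindowMs(long durationMs) {
        return Math.max(0, durationMs - MARGIN_MS);
    }

    // True once the current track's play window has elapsed.
    static boolean shouldAdvance(long startedAtMs, long durationMs, long nowMs) {
        return nowMs - startedAtMs >= playWindowMs(durationMs);
    }
}
```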

Matt’s Status Report for 3/23/24

  • This week Thomas and I worked together on both trying to figure out the lights and adding features to our app. The parts I took the lead on were the vetoing functionality and adding options to queue a song that the DJ recommends. There are two options: 1. a song like the one inputted, and 2. a song like the ones previously played in the session. The vetoing allows users to like and dislike songs, and when there are more dislikes than likes (for now; this may be changed later) the song will be removed from the queue, with the change concurrently displayed to all users through the WebSocket connections. I also integrated the lights with our Raspberry Pi and connected them to the rest of our system. We are currently having issues with controlling the lights: it worked at first, but now every signal we send just turns them off. So as proof of concept, here is a video of a user requesting a song on a computer from the app hosted on Pi 1, which invokes a message to Pi 2; when Pi 2 receives the message, it sends a signal to the lights, which turns them off. Video
  • My progress is on schedule
  • For next week I first want to try and figure out what is going on with the lights and how to fix it. I also want to try and implement the user keep alive so that users who are not active will not be able to vote.

Luke’s Status Report for 3/23/24

As stated in last week’s report, I spent a lot more time on the recommendation system this week. I worked to improve the recommendations from a single song, and then implemented the session recommendations as well. I will describe both in detail:

Recommendation From Song:

In the last post, I described how I generated a seed to use with Spotify’s recommendation endpoint to get a list of recommendations. But now we want to improve further on Spotify’s recs to ensure the user gets the best possible recommendation. This is where I had some fun. At this point we have two things: an input song, and a list of recommended songs that Spotify returns. We want to determine which of these songs to return, ideally the one most similar to the input song. Also recall that, as mentioned before, we have many song characteristics available to make this decision. So I narrowed the parameters to the values that are actually meaningful when comparing two songs, leaving the following 9 characteristics: acousticness, danceability, energy, instrumentalness, liveness, loudness, speechiness, tempo, and valence. Now, let’s treat each song as a point in a 9-dimensional vector space. The problem described above then simplifies to choosing the recommendation that minimizes the L2 norm to the original input song (or another distance metric). I implemented this exact process in code, and now we can successfully refine Spotify’s initial recommendations further, with an output that more closely matches the input.
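The refinement step above can be sketched in a few lines. This is a simplified illustration with assumed names, not my exact code: each song is a 9-element feature vector, and we pick the candidate with the smallest L2 distance to the input.

```java
// Illustrative refinement of Spotify's candidate list: treat each song as a
// point in the 9-dimensional feature space and return the candidate that
// minimizes the L2 norm to the input song's feature vector.
public class NearestRec {

    // Euclidean (L2) distance between two feature vectors.
    static double l2(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Index of the candidate closest to the input song.
    static int closest(double[] input, double[][] candidates) {
        int best = 0;
        for (int i = 1; i < candidates.length; i++) {
            if (l2(input, candidates[i]) < l2(input, candidates[best])) best = i;
        }
        return best;
    }
}
```

In practice the vectors would need normalization first, since characteristics like tempo and loudness live on very different scales than the 0-to-1 features.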

Session Recommendation:

Now, we also want to support a generic ‘session’ recommendation: a recommendation that takes all of the songs that have been played into account. As discussed before, we can do this in a smarter manner because we have access to user feedback: the likes and dislikes each song has received at the event. So the problem simplifies to: given a map of played songs and their number of likes, generate a song recommendation. Mainly, how can we translate this information into a seed to send to Spotify’s recommendation endpoint? This boils down to two things: compiling the seed songs and artists, and choosing the song characteristic values for the seed. For the former, we can sort the session songs by their likes, then choose the song_ids of the top 3 songs and the artists of the top 2 songs. This gives us the needed 5 seed parameters in terms of songs and artists. Where things get more interesting is selecting the numerical values for the song characteristics in the seed. An initial idea is to use the centroid (the component-wise average) of the input songs in the 9-dimensional vector space I talked about earlier. But this creates dull values. Think about the case of 3 session songs, each drastically different: if we just average these songs’ characteristic vectors, we get song attributes that don’t resemble any of the songs at all, just a bland combination of them. So instead, we compute the weighted centroid, the weighted average in which each song’s weight is its number of likes. Then we use this weighted centroid as the input to our seed generator. This works well, which is great.
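The weighted centroid computation can be sketched as below. Names are illustrative; this version assumes at least one like in the session (the all-zero-likes fallback is handled separately).

```java
// Illustrative weighted centroid: a like-weighted average of the session
// songs' feature vectors, so heavily liked songs pull the seed toward them
// instead of blending everything into a bland uniform mean.
public class WeightedCentroid {

    // features[i] is song i's 9-dimensional characteristic vector,
    // likes[i] its like count. Assumes the total like count is nonzero.
    static double[] compute(double[][] features, int[] likes) {
        int dims = features[0].length;
        double[] centroid = new double[dims];
        double totalWeight = 0;
        for (int likeCount : likes) totalWeight += likeCount;
        for (int i = 0; i < features.length; i++) {
            for (int d = 0; d < dims; d++) {
                centroid[d] += likes[i] * features[i][d];
            }
        }
        for (int d = 0; d < dims; d++) centroid[d] /= totalWeight;
        return centroid;
    }
}
```

With weights of 1 and 3 on two songs, the result sits three quarters of the way toward the better-liked song rather than at the midpoint.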

The next interesting question in this realm is doing the same thing we did for the single-song rec: once we get recommendation results from Spotify, how do we further refine them to get the best possible recommendation? This is not urgent, but it’s super interesting, so I’ll spend time on it in the coming weeks. What I really want to do is K-means cluster the session songs and then choose the recommendation result that minimizes the L2 norm to any of the cluster centroids, but that’s a bit over the top. We’ll see.

Also, note that the team as a whole did a lot of integration together this week. We integrated the queueing and recommendation features with the actual frontend and backend internal queue components, so now Music Mirror is fully functional (an early version). You can now go to our site from multiple sessions and queue songs, which are then played automatically. In fact, I’m using Music Mirror right now as I’m writing this.

This is really good progress ahead of the demo. The next step is to continue to integrate all of these parts within the context of the broader system. Immediately, we will be integrating all of this code with the two Pis and managing the communication between the two.

In all, the team is in a good spot and this has been another great week for us in terms of progress towards our MVP.

Thomas’ Status Report for 3/23/24

Thomas Lee

  • This week, in close collaboration with Matt, I updated the web app backend as well as the queue manager, began the lighting control application, and worked on general system integration to make sure our modules were cooperating properly. On Sunday I spent a couple hours with my team in the Hamerschlag lab integrating our systems, and was able to achieve end-to-end functionality: a song request (or song recommendation request) from a User goes through the web app onto the main system, then to the Spotify Web API as a ‘Play Song’ query, and finally to the bluetooth speaker audio output. This was demoed in our meeting this week. During the week I took the lead on adding the Likes & Dislikes user inputs on the web app and tying them to the song data stored by the backend on the queue manager. Likes & Dislikes is a critical feature as it drives both the recommendation system and the song veto capabilities: songs with more Likes are given more consideration by Luke’s recommendation service, and songs downvoted by the majority are vetoed and removed from the song queue. I updated the frontend so that users could interface with this functionality and the data models in the backend, from which I accessed the song voting data on the queue manager. Attached below are some examples of the updated UI and the song veto/remove-from-queue functionality:


    I also created and worked on the lighting fixture control module, as we received our DMX-controlled lighting unit and DMX-to-USB cable this week. I started a new Maven project and wrote a test script utilizing the DmxPy library (ported over to Java for coherence) to toggle the different channels of the lights in a regular manner. This was done both to check that we could, indeed, control the lighting fixture via DMX signals generated by our own software, and to begin the steps toward a persistent controller microservice that continually operates the lights based on which songs are playing/on the queue. We were able to start and run a simple light show from this lighting application:
    [ video here ]
  • Our progress is on schedule. Our project now has the core functionality in place and the peripheral features are becoming fully constructed.
  • Next week we hope to make more progress actually getting the lights to work properly and in sync with the queue and the data received from the Spotify Web API. We are also looking into song start/stop timing mechanisms in order for the internal system to know what song is currently being played by the Spotify Web API.
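The channel-toggling test pattern could be sketched as below. This is only the pure pattern-generation half, under assumed names; actually writing the 512-channel universe to the ENTTEC adapter is done by the ported DmxPy-style layer and is omitted here.

```java
// Illustrative DMX test pattern: step through channels in a regular cycle,
// driving one channel to full (255) per frame while the rest stay at zero.
// A DMX universe is 512 channels; channels are 0-indexed in this sketch.
public class LightsTest {
    static final int UNIVERSE_SIZE = 512;

    // Build frame t of the test show: one channel high at a time.
    static int[] frame(int t, int numChannels) {
        int[] universe = new int[UNIVERSE_SIZE];
        universe[t % numChannels] = 255;
        return universe;
    }
}
```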

Team Status Report for 3/16/2024

  • We still have not picked up our lights, and everything else seems to be going well. Until we have more info on the lights (which we should have this week), the risk for them remains the same: the biggest risk is not being able to properly compile and run code for controlling our light fixture automatically through our control program, which would use Flask, Python, and the Open Lighting Architecture framework to transmit DMX signals to the lighting system. Our concern stems from comments on other projects attempting to control lights with the DMX protocol: the OLA framework is a little finicky and difficult to bootstrap, even though after initial setup progress should be smooth and predictable. To mitigate this risk we will be testing our setup before committing completely to OLA.
  • One change that was made was we are now connecting the Pis through wifi rather than a direct ethernet cable. This change was not necessary but wifi works just as well for our purposes and is easier to implement. This change did not incur any costs. 
  • No updated schedule

Matt’s Status Report for 3/16/2024

  • This week Thomas and I added functionality to our WebSocket app. I was involved in merging our queue class with the frontend so that when users request a song, it is added to our queue and is the same for all users. The same queue is also now displayed on the frontend for all users (pictures are shown in Thomas’s status report). I was also able to set up our second Pi and establish communication between the two Pis. As shown in the link below, a user requests a song from their computer; through the WebSocket, our first Pi receives the request and then forwards the song to the second Pi. https://share.icloud.com/photos/01b9O72-v7Uqz9KqOgRQfZ09g Communication between our Pis is important, so this is good to see.
  • My progress is on schedule
  • Next week I hope to have a simple vetoing system so users can vote against a song, with all the votes stored on the server. Also, we will try to integrate the backend of our app with the work Luke has been doing.