Amazon Music Leader on Product Design (Part 3)
When you look at product design for media services, there are two ways users search for content. First, the user knows what they want and enters a title into the search box to locate it. Second, the user doesn’t know what they want and is looking for recommendations. Both scenarios work well when you’re searching online for books or browsing real estate listings.
However, when we looked at voice command products like Echo, we found that these interactions are completely different. For example, you may know exactly which song you want to listen to; you can sing the chorus or hum the opening section. But if you don’t know the title, it can be difficult to search for it.
This gave us an opportunity to make it easier for users to find an artist’s newest song. For example, when Adele released a new album, the song “Hello” was very popular. We imagined a scenario where someone hears the song but doesn’t know that the title is “Hello.” When that person says, “Alexa, play Adele’s newest song,” the song “Hello” plays because it’s Adele’s latest release.
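Under the hood, resolving that request comes down to a small piece of logic: match the artist, then pick the track with the most recent release date. Here is a minimal sketch of that idea in Python, assuming a simple in-memory catalog; the Track type and newest_track function are hypothetical illustrations, not Amazon Music’s actual API.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Track:
    title: str
    artist: str
    release_date: date


def newest_track(catalog: list[Track], artist: str) -> Track | None:
    """Resolve "play <artist>'s newest song" to a concrete track."""
    releases = [t for t in catalog if t.artist.lower() == artist.lower()]
    if not releases:
        return None
    # Recency does the work the user would otherwise do with a title.
    return max(releases, key=lambda t: t.release_date)


# Usage: the intent behind "Alexa, play Adele's newest song"
catalog = [
    Track("Rolling in the Deep", "Adele", date(2010, 11, 29)),
    Track("Hello", "Adele", date(2015, 10, 23)),
]
print(newest_track(catalog, "Adele").title)  # prints "Hello"
```

A production system would layer fuzzy artist matching and catalog lookups on top of this, but the core design choice is the same: let recency, not the title, identify the song.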
By contrast, users do not type “what’s the latest song from Adele” into a search box. This illustrates the profound difference between voice commands and searches on a computer: in the end, it’s much easier to say what’s on your mind than to type out a query.
In summary, product design at its best removes friction so customers can access content easily. With voice commands, music lovers can find their new favorite song with a single, simple request.