The New Yorker has a piece on how Netflix relies on human gut instinct working alongside its trove of customer data to determine which shows will be successful.
“It is important to know which data to ignore,” he conceded, before saying, at the end, “In practice, it’s probably a seventy-thirty mix.” But which is the seventy and which is the thirty? “Seventy is the data, and thirty is judgment,” he told me later. Then he paused, and said, “But the thirty needs to be on top, if that makes sense.”
— Ted Sarandos, Chief Content Officer, Netflix
It’s a nice tempering of the ubiquitous “Big Data!” battle cry, and encouraging to hear someone speak up for the role of human intuition in the age of the algorithm.
I was reminded of the partnership (and probable conflict) between these elements when, earlier this evening, Netflix recommended a few movies from Tyler Perry’s Madea franchise under the section “Movies Featuring a Strong Female Lead”.
While the New Yorker piece focused on the curation of content rather than the recommendation algorithms, I can’t help but wonder whether an algorithm or a human categorized the Madea films.
At first I took it as a failure of the algorithm, but on second thought, this feels more human than machine. To a machine, it would be obvious that Tyler Perry is a man and that the top-billed actors in these films are men. Categorizing films by the gender of the lead would be an unambiguous task when working solely with data. This kind of soft categorization feels like something a human would do after asking, “Well, where else are we supposed to put this?” On the other hand, a very human decision would be to say, “There’s no way we can call this a female lead.”
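As a rough illustration of that point, here’s a minimal sketch of how a purely data-driven tagger might make the call, assuming it had access to cast metadata with billing order and gender. The schema, field names, and billing details below are my own assumptions for the sake of the example, not Netflix’s actual data model.

```python
# Hypothetical sketch: deciding "female lead" strictly from cast metadata.
# The data structure and field names are assumptions, not Netflix's schema.

madea = {
    "title": "Madea's Family Reunion",
    "cast": [
        # billing_order 1 = top-billed
        {"name": "Tyler Perry", "gender": "male", "billing_order": 1},
        {"name": "Blair Underwood", "gender": "male", "billing_order": 2},
        {"name": "Lynn Whitfield", "gender": "female", "billing_order": 3},
    ],
}

def has_female_lead(film):
    """Return True only if the top-billed cast member is a woman."""
    lead = min(film["cast"], key=lambda member: member["billing_order"])
    return lead["gender"] == "female"

print(has_female_lead(madea))  # False — by the data alone, Madea never lands in that row
```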
And maybe that’s the most confounding thing about it: it’s hard to say whether it was a human or a machine decision that put Madea in my recommendations.