Why Algorithm-Generated Recommendations Fall Short
They rely too heavily on our past behaviors to predict our future preferences. But there’s a better way.
January 09, 2024
Summary.
The online systems that make recommendations to us often rely on our digital footprints (our clicks, views, purchases, and other online behaviors) to infer our preferences. But this means that human biases are baked into the algorithms. To build algorithms that more effectively predict users’ true preferences and better enhance consumer well-being and social welfare, organizations need to employ ways to measure user preferences that take these biases into account. This article explains how to do so.

Companies, nonprofit organizations, and governments design algorithms to learn and predict user preferences. They embed these algorithms in recommendation systems that help consumers make choices about everything from which products or services to buy to which movies to see to which jobs to pursue. Because these algorithms rely on users’ behavior to infer their preferences, human biases are baked into the algorithms’ design. To predict those preferences more accurately, organizations need ways of measuring them that account for these biases.
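To see how a bias can get baked in, consider position bias: items displayed in prominent slots get clicked more often simply because more people see them, so raw click counts conflate visibility with preference. The short Python sketch below illustrates one standard correction, inverse-propensity weighting. The data, the propensity values, and the choice of technique are illustrative assumptions, not a method taken from this article.

```python
# Illustrative sketch (hypothetical data): correcting position bias in click
# logs with inverse-propensity weighting. Items shown in prominent positions
# get clicked more because they are seen more, not necessarily because they
# are preferred; weighting each click by 1 / P(item was seen) adjusts for that.

from collections import defaultdict

# Each log entry: (item_id, clicked, exposure_propensity), where
# exposure_propensity estimates the chance the item was actually seen
# given its display position (assumed known or estimated elsewhere).
logs = [
    ("A", 1, 0.9),  # top slot: almost always seen
    ("A", 1, 0.9),
    ("A", 0, 0.9),
    ("B", 1, 0.2),  # bottom slot: rarely seen
    ("B", 0, 0.2),
]

naive = defaultdict(float)     # raw click counts (position-biased)
debiased = defaultdict(float)  # propensity-weighted click counts

for item, clicked, propensity in logs:
    naive[item] += clicked
    debiased[item] += clicked / propensity  # up-weight clicks on rarely seen items

print(dict(naive))     # {'A': 2.0, 'B': 1.0}   -> A looks twice as popular
print(dict(debiased))  # {'A': ~2.22, 'B': 5.0} -> B may actually be preferred
```

In practice the exposure propensities must themselves be estimated from logs or experiments, which is one reason recovering true preferences from behavioral data is hard.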