What Does it Mean to Give Someone What They Want? The Nature of Preferences in Recommender Systems

Luke Thorburn, Jonathan Stray, Priyanjana Bengani

A central goal of recommender systems is to select items according to the “preferences” of their users. “Preferences” is a complicated word that has been used across many disciplines to mean, roughly, “what people want.” In practice, most recommenders instead optimize for engagement. This has been justified by the assumption that people always choose what they want, an idea from 20th-century economics called revealed preference. However, this approach to preferences can lead to a variety of unwanted outcomes, including clickbait, addiction, and algorithmic manipulation.

Doing better requires both a change in thinking and a change in approach. We’ll propose a more realistic definition of preferences, taking into account a century of interdisciplinary study, and two concrete ways to build better recommender systems: asking people what they want instead of just watching what they do, and using models that separate motives, behaviors, and outcomes.

The definition of preferences we find most useful for recommender systems comes from behavioral economics:

Preferences are the judgments a person recalls or constructs when making choices.

This is a somewhat unusual way to describe “what people want,” but it’s designed to highlight two important ideas. First, preferences are psychological, not behavioral. While choices do correlate with preferences, they are not the same thing, and neither preferences nor choices are perfectly aligned with user welfare. Second, in many situations people construct their preferences on the fly. While some desires are stable and consistent, for many questions we don’t know what we want until we think about it, and our answers can be situational, which contradicts standard mathematical models of preference.

We believe this is a more productive way to think about preferences, whereas much current work in recommender systems (and AI alignment generally) depends on convenient but somewhat unrealistic conceptions. In this post we’ll briefly trace the history of the concept of preferences from its roots in 19th-century economics, tease apart the difference between preferences, choices, and welfare, survey recent work in recommender systems to demonstrate the continuing reliance on the revealed preference paradigm, and explain how this reliance may fail to give people what they need and want. Fortunately, there are at least two directions for going beyond revealed preferences in recommender systems: preference elicitation, a phrase that covers a variety of ways of asking people what they want, and preference modeling, mathematical techniques that don’t assume choice and preference are identical.

The Origin of Preferences

This understanding of preferences is a synthesis of insights from philosophy, economics, psychology, game theory, and other disciplines over the last century or so.

Economists have long sought a realistic theory of preferences on which to base models of consumer behavior — that is, what things people buy and at what prices. One of the oldest candidates is utility theory, developed by economists in the early 1870s (who, in turn, drew on earlier theories of utilitarianism in moral philosophy). This was extended in the 1940s and 50s to handle uncertainty and risk with the development of formal decision theory.

Utility theory models preferences as implicit numbers (utilities) that people assign to different outcomes. In other words, it assumes that each of us can assign some “goodness” score to each possible situation we might find ourselves in, whether that’s owning a particular vacuum cleaner, viewing a particular cat video, or being diagnosed with cancer. To connect preferences to behavior, individuals are usually assumed to make choices that maximize their expected utility.
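To make this concrete, here is a minimal sketch of expected-utility choice. Every number in it is an illustrative assumption rather than anything measured; the point is only the structure of the model: assign utilities to outcomes, weight them by probability, and pick the action with the highest expected score.

```python
# Minimal sketch of expected-utility maximization. The outcomes, probabilities,
# and utility scores below are illustrative assumptions.

actions = {
    # action: {outcome: probability of that outcome if the action is taken}
    "watch_cat_video": {"amused": 0.8, "bored": 0.2},
    "read_long_article": {"informed": 0.5, "bored": 0.5},
}

utility = {"amused": 3.0, "informed": 5.0, "bored": -1.0}  # assumed "goodness" scores

def expected_utility(action: str) -> float:
    """Utility of each possible outcome, weighted by its probability."""
    return sum(p * utility[outcome] for outcome, p in actions[action].items())

# Utility theory assumes the chooser picks the action with the highest expected utility.
best = max(actions, key=expected_utility)
print(best, {a: round(expected_utility(a), 2) for a in actions})
```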

Is a nice cup of coffee a 10 on the utility scale, or a 100? From early on, utility theory was criticized because utilities can only be measured relative to one another, not in absolute terms. This prompted a series of proposals that assumed only that individuals can rank a set of alternatives from most to least preferred, removing the need to assume the existence of numerical utilities. This “ordinal revolution” culminated in a 1938 paper by Paul Samuelson, who built a theory on the single assumption that individuals’ choices are consistent — if we choose x over y, then we would have chosen x over y in any analogous situation. With this assumption, economists could use choices (which were observable) to reconstruct preferences (which were not), a paradigm known as revealed preference.
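As a rough illustration of the revealed preference paradigm (our sketch, not Samuelson’s formalism), the snippet below reconstructs a preference relation from a hypothetical choice log, treating an item as revealed preferred to everything it was chosen over, and checks the consistency assumption:

```python
# Sketch of revealed preference: reconstruct a preference relation from
# observed choices, assuming choices are consistent. The choice log is hypothetical.

observed_choices = [
    # (chosen item, set of items that were available)
    ("coffee", {"coffee", "tea"}),
    ("tea", {"tea", "water"}),
    ("coffee", {"coffee", "water", "juice"}),
]

# "x is revealed preferred to y" whenever x was chosen while y was available.
revealed = {(chosen, other)
            for chosen, menu in observed_choices
            for other in menu if other != chosen}

def consistent(relation) -> bool:
    """Samuelson-style consistency check: we never observe both
    'x chosen over y' and 'y chosen over x'."""
    return not any((y, x) in relation for (x, y) in relation)

print(sorted(revealed))
print("choices are consistent:", consistent(revealed))
```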

Preferences are not Choices or Welfare

Throughout the late 20th century, economists and philosophers such as Nobel laureate Amartya Sen criticized the revealed preference paradigm for conflating three concepts:

  • preferences, what people want
  • choices, what people do
  • welfare, what happens as a result

It was important to economists to distinguish these ideas, because if people do not always make choices that align with their welfare then market mechanisms cannot be said to maximize individual or societal welfare. Each of these three is a broad category, and a variety of words are used in different contexts to describe items of each type. In recommender systems, engagement is a type of behavior, and is also a user choice. Welfare is closely related to user well-being, value to the user, and outcomes more generally, and is similar to the classical concept of utility. Preferences are fundamentally psychological and involve motivations, judgments, interests, needs, and desires.

There are several reasons why preferences ≠ welfare, that is, why the judgments we construct when making choices do not always align with our self-interest. We list some below, with examples from the context of recommender systems.

  • Influence — Preferences may be influenced by persuasion efforts that do not align with the person’s best interests (e.g. a user may select a film to watch on Netflix based on an ad they happened to see).
  • Context — Preferences are influenced by the environment in which they are made, which can lead people to construct judgments in particular ways (e.g. a user may perceive a need to check their notifications frequently because of addictive dark patterns in the user interface design).
  • Beliefs — People may not have the time or information required to evaluate the consequences of some alternatives, and hence desire something that they wouldn’t if they thought for longer or knew more (e.g. a user might click on a clickbait YouTube thumbnail because the title misled them about the content of the video).
  • Expectations — Disadvantaged people may ‘adapt their self-interested preferences to their limited opportunities: they lower their aspirations to avoid frustration’ (e.g. a user might not click on a post about a scholarship or job opportunity because of a mistaken view that an application from someone like them would not be successful).
  • Altruism — People have preferences about the welfare of others, not just their own (e.g. a user may explicitly like or share content produced by a friend to support them, even if they do not personally find the content valuable).

Similarly, there are several reasons why preferences ≠ choices — or, more precisely, why observing that a choice was made does not always tell you the reasons it was made.

  • Indifference — Choices may be made when people do not know what to choose, but have to choose something (e.g. none of the items recommended in a user’s feed may constitute the type of content they aspirationally prefer to pay attention to, but they choose one arbitrarily to kill time).
  • Preference Falsification — Choices might be strategically made to differ from preferences in order to achieve particular outcomes (e.g. users may choose not to interact with content that aligns with their political beliefs if the topic is controversial, to avoid being affiliated with that group).
  • Instrumental Choices — Choices might be made purely as a means to an end. A person can have a preference for the end but not care about the means (e.g. an online shopper might buy a widget to perform critical home maintenance, but have no ongoing desire for such widgets).

Because of these differences, our preferred definition focuses on preferences as a psychological process that contributes to, but is not identical with, choices or welfare.

All of this leaves recommenders in a complicated place. Choice seems at best a proxy for what we might really care about. Preferences seem like a much more attractive thing to optimize for, to the extent that they can be determined. Agency is a deeply important value, and giving someone agency requires respecting their preferences. Yet someone may genuinely want something that hurts themselves or others. For example, someone with an eating disorder might be fascinated with diet videos, even if it’s not good for them. In these sorts of cases it may be better to try to maximize welfare. There will always be some tension between seeking user agency and seeking user well-being. How such a balance should be struck will depend heavily on context and the ethical views of stakeholders.

Preference Construction

Our suggested definition uses the phrase “judgments a person … constructs” because there is substantial evidence from psychology that we do not have a consistent, underlying set of “true” preferences — a sort of lookup table — that we refer to. We do have strong, long-lasting preferences in many cases, be they biological (e.g. preferring food to starvation) or learned (e.g. preferring particular music genres). Ideally, recommender systems are able to learn these stable preferences. But in many situations we have not previously considered what we prefer, and so must construct our preferences on the fly. This preference construction can be necessary in situations where

  • Some of the available alternatives are unfamiliar (e.g. a user researching an unfamiliar product category on Amazon).
  • We don’t know how to trade off different types of preferences (e.g. liking, wanting, and approving) or how to trade off preferences about different attributes (e.g. a user may want content that is both educational and entertaining, but must prioritize one over the other).
  • We are asked to express our preferences using a numerical rating or multiple-choice scale, and find it difficult to translate our qualitative feelings into that format (e.g. when rating or reacting to items in a recommender feed).

Some scholars make a stronger claim that not just some, but all preferences and choices are constructed on the fly. For example, in The Mind is Flat (2018), behavioral scientist Nick Chater makes the case that all human thought and activity is improvised in the moment to fit our history and circumstances.

The “judgments” in our definition refer to the intermediary comparisons we make in the process of making a choice. For example, when choosing from a slate of recommended YouTube videos, we will internally make a series of comparative (potentially subconscious) judgments about their perceived attributes such as relevance, timeliness, factualness, production value, length, novelty, consistency with our self-image, and so on. We also make judgments about how to trade off these attributes against one another to reach a decision. According to our suggested definition, preferences are exactly this set of judgments. Preferences are the internal rationale for the choice, rather than the action taken: a richer definition than one based on behavior alone.
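One deliberately simplified way to picture this process is as a weighted combination of attribute judgments whose weights shift with context. The attributes, scores, and weights below are all assumptions made for illustration; real judgments are messier and often subconscious.

```python
# Illustrative sketch of preferences as constructed judgments: score items on
# perceived attributes, then trade the attributes off with context-dependent
# weights. All numbers are assumptions.

videos = {
    "one_minute_bow_tie_tutorial": {"relevance": 0.9, "production": 0.4, "entertainment": 0.2},
    "ten_minute_bow_tie_comedy":   {"relevance": 0.6, "production": 0.8, "entertainment": 0.9},
}

def judge(attributes, weights):
    """Combine attribute judgments into an overall judgment (here, a weighted sum)."""
    return sum(weights[a] * score for a, score in attributes.items())

# The same person might weight attributes differently in different situations.
contexts = {
    "in a hurry":   {"relevance": 0.7, "production": 0.1, "entertainment": 0.2},
    "killing time": {"relevance": 0.2, "production": 0.3, "entertainment": 0.5},
}

for context, weights in contexts.items():
    ranked = sorted(videos, key=lambda v: judge(videos[v], weights), reverse=True)
    print(f"{context}: prefers {ranked[0]}")
```

The same pair of videos gets ranked differently in the two contexts, which is the sense in which preferences are constructed rather than simply looked up.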

Still, this is only a model of human wants — and implicitly a model of human agency too, if respecting agency means respecting preferences. Because of the ad hoc and diverse ways in which we construct preferences, they are often mutually inconsistent, changeable, and subject to factors that we are not aware of. For example, work on human decision-making heuristics and biases by Daniel Kahneman and Amos Tversky has shown that when asked about their preferences with differently worded but equivalent questions, people often give contradictory answers.

The existence of such framing effects suggests that recommender systems likely influence which preferences we construct. One example of this is position bias, the fact that items ranked near the top of a recommender slate are more likely to be seen and thus have preferences constructed in their favor. There are dozens of other design choices which can influence preference construction, including how the reputation of information sources is presented (e.g. popularity stats, “trusted publisher” labels, fact checks, etc.), which attributes of the items are made more or less salient (e.g. the inclusion of estimated carbon emissions in Google Flights), and how items are arranged (e.g. in categories like Netflix, or combined like the Twitter timeline).
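Position bias is one of the few such effects with a standard quantitative correction. One common approach in the learning-to-rank literature is inverse propensity weighting: a click at a rarely examined position counts for more than a click at the top. In the sketch below, the examination probabilities and click log are made up; in practice examination probabilities are estimated from logged data or experiments.

```python
# Sketch of inverse propensity weighting for position bias. The examination
# probabilities and click log are illustrative assumptions.

from collections import defaultdict

examination_prob = [0.95, 0.60, 0.35, 0.20, 0.10]  # assumed P(user looks at position k)

# Hypothetical click log: (item, position shown, clicked?)
click_log = [
    ("item_a", 0, True), ("item_a", 0, True), ("item_a", 1, False),
    ("item_b", 3, True), ("item_b", 2, True), ("item_b", 4, False),
]

weighted_clicks = defaultdict(float)
impressions = defaultdict(int)
for item, position, clicked in click_log:
    impressions[item] += 1
    if clicked:
        # Up-weight clicks that happened despite a low chance of being seen.
        weighted_clicks[item] += 1.0 / examination_prob[position]

for item in impressions:
    print(item, "debiased click score:", round(weighted_clicks[item] / impressions[item], 2))
```

Here item_b ends up with a higher debiased score than item_a, even though both were clicked twice, because its clicks happened at positions users rarely examine.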

Cumulatively, this work points to an understanding of preferences as somewhat fuzzy and resistant to formal modeling: there is no perfectly reliable method for eliciting preferences, and no objective “ground truth.” Any formal definition will only imperfectly describe the psychological processes that lead to us choosing some alternatives over others — and will lend itself to particular ideas of what it means to want something, and to give someone what they want.

The point here isn’t that nothing works. Building controls, watching what people do, and asking them what they want in various ways does work. Even simple optimization for engagement is often a very good starting point, because the choices users make when interacting with recommenders do generally reflect a meaningful form of preference and welfare. A user will usually have higher welfare watching Netflix series they finish instead of those they abandon, reading tweets they explicitly “like” instead of those they don’t, and browsing Amazon products they choose to purchase instead of similarly priced alternatives they pass over. Choices are often a good proxy for preferences — but not always.

Preferences in Recommender Systems

Recommender system builders are usually aware of the limitations of revealed preference and attempt to mitigate them. However, there are no simple alternatives, and in practice most recommenders operate on the assumption that users’ choices reflect their preferences and welfare.

To demonstrate the pervasiveness of this approach, we classified all papers from the 15th ACM Conference on Recommender Systems, held in late September 2021, according to whether or not they adhered to the revealed preference paradigm. Beyond RP denotes papers that took concrete steps to move beyond revealed preference, usually by introducing non-behavioral signals of value or modeling the choice-making process in a manner that distinguished between choices and preferences. Papers that took no such steps were labeled RP, and papers that were not directly about preference learning were labeled Unrelated. You can see the full list of papers here, along with our classifications and rationales.

Of 49 papers in the main conference track, only 8 made concrete proposals outside of the revealed preference paradigm. Most of these eight were either conversational recommenders (which interactively elicited natural language critiques of recommendations) or e-commerce recommenders that modeled user preferences over product attributes (that is, they arguably modeled the judgments users construct when choosing products).

In practice, the dominance of the revealed preference paradigm means that most recommenders rely on engagement to learn user preferences, without accounting for situations where behavior and preferences diverge.

Few researchers would claim that “choices = preferences,” but this equivalence is often implied by the language and algorithms they use. For example, this research conflates platform activity with user preference:

…At the end of this stage, each user is associated with a list of communities they participate in, along with the scores quantifying the strength of their affiliation to each of those communities. We refer to this output as “User Interest Representations”…

Satuluri et al. from Twitter (KDD 2020), SimClusters: Community-Based Representations for Heterogeneous Recommendations at Twitter

And here’s a platform executive equating watch time, a choice-based engagement metric, with user well-being:

… A user goes to YouTube and types the query “How do I tie a bow tie?” And we have two videos on the topic. The first is one minute long and teaches you very quickly and precisely how to tie a bow tie. The second is ten minutes long and is full of jokes and really entertaining, and at the end of it you may or may not know how to tie a bow tie. … Which video should be ranked as our first search result? … “I want to show them the second video.” … By definition, viewers are happier watching seven minutes of a ten-minute video (or even two minutes of a ten-minute video) than all of a one-minute video. And when they’re happier, we are, too.

John Doerr, Measure What Matters (2018) [p162]

Why are choices, preferences and welfare so often collapsed? First, for pragmatic reasons — it is easy to optimize for engagement, that is, behavior, because behavioral data is plentiful. The revealed preference paradigm is then used to justify this approach, by arguing that engagement correlates strongly with value to the user. This belief is attractive because it neatly unifies altruistic and more profit-oriented goals:

Conflating satisfaction and retention helped mediate a tension between developers, who often expressed to me a strong desire to help users, and business people, who wanted to capture them. Appeals to user ‘satisfaction’ hold a moral power within the software industry, and are thus turned to justify a variety of technical decisions … But they also express a basic ambivalence in technologies of enchantment: people desire and enjoy enchantment, and the tension between ‘satisfying’ users and capturing them is not easily resolved. Thus, [a music streaming platform employee] could, without any apparent irony, tell me that he was both working in his listeners’ interest and trying to get them addicted.

Nick Seaver, Captivating algorithms: Recommender systems as traps (2018)

To be clear, historical engagement data really is an important signal of what users value, as we discuss elsewhere. The choices made by users do, in many situations, correlate with and provide information about their preferences and welfare. But for the reasons outlined above, it is a mistake to treat the choices we make as users as synonymous with what we value, and what is good for us.

Recommendations Beyond Revealed Preferences

A critique of revealed preference is not very useful without practical alternatives. Fortunately, we know of at least two broad strategies for improving on the revealed preference paradigm.

The first is to broaden the available methods for asking users what they want, rather than relying solely on implicit signals from engagement data. The most direct way to do this is to provide user controls. These include everyday controls such as subscribing to a topic or channel, but also the ability to stop recommendations from particular sources (e.g. the “Don’t recommend channel” buttons on YouTube), or to choose between different recommendation algorithms (e.g. the ability to view the Twitter timeline in reverse chronological order). However, in practice, only a small subset of users make use of controls, and even with more widespread adoption, it may be infeasible for users to manually specify what they want in sufficient detail.

Alternative methods for collecting information about preferences include survey questions (potentially open-ended), third-party quality assessments (such as fact checks or source credibility ratings), and human-facilitated elicitation of a value model. All of these are used at least to some degree in production recommenders, especially surveys. For example, it is possible to gain insight into problems such as addiction and poorly informed choices by asking people to reflect on their use of a product, and some platforms ask users retrospective questions such as “was this post worth your time?” On some issues, it may be feasible to elicit better informed, more consistent preferences through the use of deliberative elicitation methods.
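As one sketch of how such signals might be used (not a description of any particular platform’s system), retrospective survey answers can serve as labels for a model that predicts whether an item was worth the user’s time from behavioral features, and ranking can then be driven by that prediction rather than by raw engagement. The features, data, and labels below are hypothetical.

```python
# Hedged sketch: learn to predict the survey judgment "worth your time" from
# behavioral signals. Features, data, and labels are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Behavioral features per impression: [dwell_seconds, clicked, late_night_session]
X = np.array([
    [120, 1, 0],
    [ 15, 1, 1],
    [300, 1, 0],
    [ 10, 1, 1],
    [ 45, 0, 0],
    [200, 1, 1],
])
# Survey label: did the user say the item was worth their time?
y = np.array([1, 0, 1, 0, 1, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Rank candidates by predicted "worth your time" rather than by raw engagement.
candidates = np.array([[20, 1, 1], [180, 0, 0]])
print(model.predict_proba(candidates)[:, 1])
```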

The second strategy is to model the choice-making process in a way that respects differences between choices, preferences and welfare. For example, the value of a tweet can be treated as a latent variable in a Bayesian network that only partially contributes to engagement. Alternatively, evolving preferences can be modeled as an internal state in a Hidden Markov Model, and certain types of preference change can be manually defined as incompatible with welfare. That is, the recommender should not “manipulate” the user by persuading them in an unethical way, or towards unethical ends. Another approach is to model users as having two choice-making processes — one impulsive, one deliberative (the latter being more aligned with individual welfare). There are many other possibilities when preferences, choices, and welfare are separated and explicitly modeled.
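Here is a minimal sketch of the first of these ideas, treating value as a latent variable that only partially drives engagement. The probabilities are illustrative assumptions, but they show how the same observed click can be stronger or weaker evidence of value depending on how the item was presented.

```python
# Minimal sketch of value as a latent variable behind engagement (a tiny
# Bayesian-network flavour). All probabilities are illustrative assumptions.

p_value = 0.5  # prior belief that an item is genuinely valuable to this user

# Probability of a click given (valuable?, clickbait presentation?)
p_click = {
    (True,  True):  0.9,
    (True,  False): 0.6,
    (False, True):  0.7,   # clickbait attracts clicks even without value
    (False, False): 0.1,
}

def posterior_value_given_click(clickbait: bool) -> float:
    """P(valuable | clicked, presentation) via Bayes' rule."""
    numerator = p_click[(True, clickbait)] * p_value
    denominator = numerator + p_click[(False, clickbait)] * (1 - p_value)
    return numerator / denominator

print("clicked, plain presentation:     P(valuable) =", round(posterior_value_given_click(False), 2))
print("clicked, clickbait presentation: P(valuable) =", round(posterior_value_given_click(True), 2))
```

Under these assumptions, a click on a plainly presented item raises the estimated probability of value to about 0.86, while a click on a clickbait-styled item only raises it to about 0.56, so the recommender learns to discount engagement that the presentation itself likely caused.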

Both approaches can draw on connections with the emerging fields of AI alignment, which seeks to align algorithmic decisions with human values, and reward learning, which seeks to reconstruct the reward function of an agent from its behavior. Human preferences can be considered a qualitative, richer, but less consistent version of a reward function. The problem of learning a reward function from behavioral data is analogous to learning preferences from engagement data, and similar challenges arise, such as the fact that multiple reward functions are often consistent with the same observed behavior.
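The identifiability problem is easy to illustrate. In the hypothetical example below, two reward functions disagree about which of two topics the user prefers, yet they predict exactly the same choices on every menu that was actually observed, so behavior alone cannot distinguish them.

```python
# Illustration of reward ambiguity: different reward functions, identical
# observed behavior. Items, menus, and reward values are hypothetical.

observed_menus = [
    {"news", "cats"},     # the user chose "news"
    {"sports", "cats"},   # the user chose "sports"
]

reward_a = {"news": 3.0, "sports": 2.0, "cats": 1.0}  # prefers news over sports
reward_b = {"news": 2.0, "sports": 3.0, "cats": 1.0}  # prefers sports over news

def choices(reward):
    """The items a reward-maximizing agent would pick from each observed menu."""
    return [max(menu, key=lambda item: reward[item]) for menu in observed_menus]

print(choices(reward_a))                       # ['news', 'sports']
print(choices(reward_b))                       # ['news', 'sports']
print(choices(reward_a) == choices(reward_b))  # True: the data cannot tell them apart
```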

All recommender systems operationalize a model of human preferences, and the choice of this model has normative implications. Moreover, even if recommendations align “perfectly” with the preferences of individuals, they might not produce outcomes that are best for society as a whole, as social choice theory explores. Giving people what they need or want requires modeling the distinctions between preferences, choices, and welfare, and understanding that the system itself can develop or change people’s preferences — and above all, asking people what they want, in some way.

Thank you to Michael Dennis, Smitha Milli, Adam Gleave, Micah Carroll, Aviv Ovadya and others for their feedback on this post.

Luke Thorburn was supported in part by UK Research and Innovation [grant number EP/S023356/1], in the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence (safeandtrustedai.org), King’s College London.
