
Recommending Products When Consumers Learn Their Preferences by

Daria Dzyabura

and

John R. Hauser

March 2016

Daria Dzyabura is an Assistant Professor of Marketing, Leonard N. Stern School of Business, New York University, Tisch Hall, 40 West Fourth Street, 805, New York, NY 10012, (212) 992-6843, [email protected].

John R. Hauser is the Kirin Professor of Marketing, MIT Sloan School of Management, Massachusetts Institute of Technology, E62-538, 77 Massachusetts Avenue, Cambridge, MA 02139, (617) 253-2929, [email protected].


Recommending Products When Consumers Learn Their Preferences

Abstract

Consumers often learn their preferences as they search. For example, after test driving new

cars, a consumer might find she undervalued trunk space and overvalued sunroofs. Preference learning

makes search complex because, each time a product is searched, updated preferences affect the value

of all products and the value of subsequent (optimal) search. Recommendations that take preference

learning into account help consumers navigate search. We motivate a model in which consumers learn

(update) the preferences they ascribe to attribute levels as they search. Formal results suggest

modifications to the common foci in the search literature and the recommendation-system literature. It

may not be optimal to recommend the product with the highest option value, as in most search models,

or the product that is most likely to be chosen or has the highest expected utility, as in traditional

recommendation systems. Recommendations are more effective if they encourage consumers to search

undervalued products and/or products with diverse attributes. Both modifications enhance the value of

preference learning. Recommendation systems, based on the formal theory, outperform benchmark

models in synthetic worlds, especially when consumers are novices and when the recommendation

system can develop its own priors by targeting relatively homogeneous segments.

Keywords: Recommendation systems, learned preferences, multiattribute utility, consumer search


1. Introduction, Motivation, and Preview of Results

Consumers often seek recommendations for products that best match their preferences. But

what if consumers are unsure about their own preferences and must learn their preferences through

search? Recommendations might influence consumers’ search and consumers’ ultimate choices in new

and different ways. We propose a model of preference learning and use the model to illustrate how

preference learning affects search and recommendation. Formal insights challenge accepted wisdom,

but help explain new directions in automated recommendation systems. More importantly, the formal

results suggest modifications to recommendation-system algorithms—improvements we explore with

synthetic data.

1.1. Motivation—Illustrative Examples and Other Evidence

Example 1: Candace and Dave were moving to a new city. They wanted a home with three

bedrooms, two bathrooms, a good school district, hardwood floors, adequate lighting, and proximity to

work. Based on a realtor’s recommendation, they visited one home that had a playground across the

street. Seeing the playground, Candace and Dave realized how convenient this feature would be for

them. Although they always valued playgrounds, they had not previously considered proximity to

playgrounds to be an important decision criterion for a home.

Example 2: Evan was a high school student beginning his college search. Based on his prior

beliefs, he valued colleges that have high academic ratings, are nearby, and are well-known for their

athletic teams. Upon visiting a nearby college, based on a recommendation from his guidance counselor,

Evan learned about a program that enabled undergraduate students to get involved in research with

faculty. After experiencing the availability of such programs, Evan realized this attribute was important

to him. It became a key factor in his subsequent search.

Example 3: Pat wanted to replace her old car. Initially, she thought she would prefer sporty cars

with sunroofs, attractive styling, and road-hugging handling. After visiting dealers and test driving five

recommended cars, she realized that sporty meant low gas mileage and that nothing satisfying her

initial preferences had sufficient trunk space for her needs. She revised her preferences to value sporty

less and gas mileage more. Sufficient trunk space, an attribute she had considered of little importance,

became one of the most important criteria in her choice. Sunroofs became less important.

Example 4: First-time parents Amy and Bob were novices in hiring nannies. When they began

their search, they thought the most important attribute was previous experience with daycare or with

another family. After searching for information (visiting daycare centers, talking with colleagues, and

interviewing nannies), Amy and Bob realized they valued a nanny’s experience less, but that they valued


empathy with their child more. They also realized they placed a high value on a willingness to help with

nominal household chores, an attribute that, a priori, they believed unimportant. Freed from nominal

household chores, Amy and Bob could spend more quality time with their children.

The common thread in these examples is that (novice) consumers revised their decision criteria

during search as they evaluated products with various attributes. Before searching, their beliefs about the importance of many attributes differed from the preferences they ultimately held. In three examples, consumers followed recommendations; in

the fourth example they searched on their own. Perhaps the realtor sensed that Candace and Dave

would value playground proximity after they experienced it. Perhaps, Evan’s guidance counselor, based

on her experience advising hundreds of college-bound students, knew that the research program would

excite Evan. Perhaps the auto dealers had learned from prior experience to recommend cars that would

help Pat revise her preferences. The recommenders were not omniscient; none could perfectly predict

consumers’ preferences. But the recommenders had enough experience to recommend products that

helped each consumer learn his or her own preferences. Consumers learned that some attributes were

more important than they previously thought and that other attributes were less important than they

previously thought.

Other sources support preference learning and recommendations that help consumers learn. A

recent New York Times article on real estate buyer behavior by Rogers (2013, page F4) describes a

novice consumer’s search for a home: “Often people don’t know what they want. […] You may think you

want X, but if you’re shown Y, you may love Y better than you ever loved X. […] Even (or especially) in

these days of consumer online access, some of an agent’s value lies in her being able to offer a buyer a

choice different from his preconception.” Similarly, college counselors often suggest that students “look

at schools of various types for the express purpose of helping them refine their thinking about what they

want” (personal communication, November 2012). A college admissions officer from the University of

Pennsylvania said, "The vast majority of [potential students] have no idea what they really want to do

when they grow up. Even the ones who claim that they do” (Sheehy 2013). The director of a large

childcare agency notes a similar pattern in parents’ preferences for nannies:

“We find that parents spend a lot of energy focused on attributes of a caregiver, like college

education or CPR training, which have no real correlation to how good of a caregiver someone

will actually be. They learn, after being presented and disappointed with their ‘perfect

candidate’ multiple times, that their focus should be on finding someone based on child-rearing

philosophy, temperament, and other belief systems” (personal communication, June 2014).

In dating, young men and women do not always know what they value in a potential partner,


even though they usually believe they have well-formed preferences (Finkel et al. 2012). Newer dating

services, such as Minidates.com and CoffeeMeetsBagel.com, offer blind dates to encourage real-life

interaction so that daters learn what really matters in a mate (Cook 2012). The websites claim users

explore more because they are not limited by their prior beliefs.

1.2. Overview of Model and Results

We model preferences with a multiattribute utility function and allow the consumer to learn the

parameters of the utility function. The consumer has prior beliefs about the weight of each attribute

level. (Such weights are sometimes called partworths.) The consumer learns (or updates) his or her

beliefs about the weight on an attribute level after searching a product with the attribute level.

We derive the consumer’s optimal search when preferences are learned. Because searching one

product updates attribute preferences, the utilities of all products that share these attributes are

updated. Existing solutions may not apply (e.g., Weitzman 1979). Consumers may not search products

with high option value (positive tail) or high utility variance (Result 2). Instead, searching products

with diverse attributes is particularly valuable to consumers (Result 1).

We next introduce a recommendation system, henceforth RecSys (plural RecSystems). We

demonstrate that when the RecSys anticipates the consumer will learn, it will recommend products that

help the consumer experience many attribute levels. The RecSys may recommend an “undervalued

product” even if the product has a low probability of being chosen and a low expected utility (Result 3).

Numerical analyses illustrate how a benevolent RecSys can choose recommendations to lead consumers

to the optimal net utility. A non-benevolent RecSys can lead consumers to more profitable products that

may not be best for the consumer. In the latter case, the consumer will not know what he or she does

not know and will be happy with the outcome. Finally, based on the insights from the formal theory, we

propose two practical learning-based RecSys modifications, test their properties with synthetic data, and

explore when they are most advantageous.

2. Related Literatures

We build on two related literatures: the RecSys literature, mostly from computer science, and

the sequential search literature, mostly from economics and marketing.

2.1. Literature on Recommendation Systems (abbreviated RecSys, plural RecSystems)

Traditionally, the primary goal of a (Top-$N$) RecSys is to recommend items that maximize a

user’s utility (Adomavičius and Tuzhilin 2005). Typically, the RecSys observes a utility surrogate, a rating

or a rank, for some users and some items and attempts to extrapolate that surrogate to all users and


items. As a result, most RecSystems are evaluated on the accuracy of that extrapolation (Herlocker et

al. 2004; McNee and Konstan 2006; Vargas and Castells 2011; Zhang and Hurley 2008). This focus was

most notable in the $1M Netflix Challenge that began in 2006 and finished in 2009. The Netflix

Challenge sought the RecSys algorithm that best predicted held-out user ratings. Successful RecSystems

focus on similarities among users (collaborative filters), similarities among items (content-based filters), or

hybrids to recommend products with high expected utility (Adomavičius and Tuzhilin 2005). Although

some RecSystems attempt to match attribute-based utility, attributes are typically defined with

taxonomies such as genre (Ansari, Essegaier, and Kohli 2000). Recently, many authors have criticized the

RecSys focus on accuracy as providing recommendations that are too similar, for example,

recommending the same author if the RecSys seeks to recommend books (Fleder and Hosanagar 2009;

McNee and Konstan 2006; Zhang and Hurley 2008).

In response, researchers have proposed algorithms, and metrics to evaluate those algorithms,

that include goals that complement accuracy (Bodapati 2008; Herlocker et al. 2004). New algorithms

avoid recommending items that the consumer would have bought without a recommendation. They

augment accuracy with diversity, novelty, and serendipity (Adamopoulos and Tuzhilin 2014; Celma and

Herrera 2008; Castells, Vargas, and Wang 2011; Ge, Delgado-Battenfeld, and Jannach 2010; Vargas and

Castells 2011; Zhou et al. 2010; Ziegler et al. 2005). Diverse items are items that are not similar to one

another; novel items are items the consumer would not have chosen without a recommendation;

serendipitous items are items that are unexpected, relevant, and useful. To achieve these goals,

RecSystems penalize recommendations that are similar to “accurate” recommendations or recommend

products from the “long tail.” Product attributes, when available, are used to define similarity matrices.

Diversity, novelty, and serendipity are based on products, not the levels of attributes of the products.

We augment this literature by studying how a RecSys might consider preference learning when making

recommendations. Our results challenge the traditional RecSys focus on products that are likely to be

chosen or likely to have high utility. On the other hand, our analyses provide a theoretical explanation

for diversity, novelty, and serendipity. We suggest RecSys modifications that are based on a

reinterpretation of these concepts and highlight when such modified foci benefit consumers.

2.2. Literature on Sequential Search

Papers in marketing and economics recognize the importance of the consumer’s search for

information. For example, Kim, Albuquerque, and Bronnenberg (2010) use Amazon’s view-rank data to

infer consumer preferences for observed attributes of camcorders. Consumers know these attribute

levels without search, but consumers search to resolve the unobserved utility of the products (error


term). Bronnenberg, Kim, and Mela (2016) study observed online search and find that search is over a

relatively small region of attribute space that declines with subsequent search. The final choice is rarely

the first item searched. Hong and Shum (2006), Honka (2014), and Seiler (2013) analyze search to infer

price distributions and/or search costs. Although a few authors consider non-sequential search,

Bronnenberg, Kim, and Mela (2016) report strong evidence to support sequential search.

Much of this literature is based on Weitzman (1979), who studies product utilities that are

independent over searched products. Weitzman’s search strategy is based on an option value index—

the upper tail of the utility distribution. The optimal strategy is to search the products with the highest

indices as long as they are above the reservation value. See extensions by Adam (2001) and

Bikhchandani and Sharma (1996). Branco, Sun, and Villas-Boas (2012) focus on the optimal search for

multiple attributes of a single product. Their optimal strategy is also index-based: the consumer

searches as long as utility is bounded between purchase and not-purchase thresholds. Ke, Shen, and

Villas-Boas (2016) extend the model to derive appropriate bounds for two products.

Our analyses are consistent with this literature in the sense that the consumer’s optimal search

path is the solution to a dynamic program (a Bellman Equation). However, we modify the recursion to

allow consumers to update their preferences for attributes as they search. Because products share

attributes, the optimal search strategy is no longer indexable (e.g., Weitzman’s solution). High option

value or high variance in utility matters less; strategies to learn preferences efficiently matter more.

Finally, our model of preference learning is consistent with examples of preference learning in

the marketing science literature. Greenleaf and Lehmann (1995) demonstrate that consumers delay

purchases to learn attribute preferences, and She and MacDonald (2013) show that “trigger features”

cause consumers to update attribute preferences. Hauser, Dong, and Ding (2014) show that, as

consumers become more expert, their attribute preferences stabilize. Predictions, even one-to-three

weeks later, improve. Dzyabura, Jagabathula, and Muller (2016) demonstrate that attribute preferences

change when consumers evaluate physical products rather than hypothetical profiles.

3. Model of Consumer Search with Preference Learning

To model preference learning, we decompose products into attributes and allow consumers to

learn their preferences for attribute levels as they search the products. For example, when Pat test

drives a car, she learns (updates) her preference (partworth) for sufficient trunk space. We start by

defining the utility of a product and then present the model of search.

3.1. Consumer Utility is Defined on Attributes

Let $j = 1, \ldots, J$ index products and $a = 1, \ldots, A$ index attributes. We begin with binary attributes


such that a product either has or does not have an attribute. When it is clear in context, we refer to

binary attributes simply as attributes. Later, after we discuss multilevel attributes in §4.3, we introduce

terminology to distinguish attribute levels from attributes. Let $x_{ja} = 1$ if product $j$ has binary attribute $a$ and let $x_{ja} = 0$ otherwise. Let $\vec{x}_j$ be the binary vector that describes product $j$.

Let $u_j$ be the utility of product $j$ and let $u^*$ be the utility of the outside option. Let $w_a$ be the relative weight that the consumer places on attribute $a$ such that:

(1) $u_j = \vec{w} \cdot \vec{x}_j = \sum_{a=1}^{A} w_a x_{ja}$

Let $\vec{w} = [w_1, \ldots, w_A]$ be the vector of partworths. Because the consumer may have prior beliefs about the values of $\vec{w}$, the $w_a$ are random variables before they are revealed. The probability density for the consumer's prior beliefs is denoted by $f_a(w_a)$ for each attribute $a$. For simplicity, we assume the prior distributions (and any updated distributions) are independent over $a$.
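As a concrete illustration of Equation 1, the following minimal Python sketch scores products by summing partworths over the aspects a product has. The attribute vectors and partworth values are made up for illustration and are not from the paper.

```python
import numpy as np

# Hypothetical two-attribute (binary) product space; values are illustrative only.
X = np.array([[0, 0],   # x_00
              [1, 0],   # x_10
              [0, 1],   # x_01
              [1, 1]])  # x_11
w = np.array([1.5, -0.5])   # partworths w_a (true or expected values)

# Equation 1: u_j = sum_a w_a * x_ja for every product j.
u = X @ w
print(u.tolist())           # [0.0, 1.5, -0.5, 1.0] -> x_10 has the highest utility here
```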

To focus on preference learning, we assume consumers know, or can search at negligible cost,

whether or not a product has an attribute. That is, we assume they know $\vec{x}_j$ for all $j$. This simplification is not

unrealistic. Zillow, Trulia, or multiple-listing services (MLS) provide attribute levels of new homes; U.S.

News & World Report and Business Week provide attribute levels for colleges; Autotrader, Edmunds, and Kelley Blue Book provide attribute levels for automobiles; and travel websites, dating websites, and

Amazon provide attribute levels for other products. We focus on situations, such as the examples in

§1.1, where attribute levels are easy to observe, but more-costly search is necessary for consumers to

experience attributes and learn their value. We revisit this assumption in §8.

3.2. Consumers Learn Preferences during Search

Consumers engage in sequential search, searching one product at a time. Searching represents sufficient effort by a consumer to examine and evaluate product $j$, for example, by test driving a car, visiting a house/condo for sale, visiting a college, or interviewing a nanny. The cost of searching a product is $c > 0$. We need additional notation to keep track of searched attributes. Let $S_n$ be the set of the first $n$ products searched, and let $\mathcal{A}(S_n)$ be the set of attributes that have been searched in the first $n$ products: $\mathcal{A}(S_n) = \{a : \sum_{j \in S_n} x_{ja} \geq 1\}$.

The consumer's search reveals the true value of $w_a$, which we label $\tilde{w}_a$. That is, whenever $x_{ja} = 1$, $\tilde{w}_a$ is revealed when the consumer searches product $j$. For simplicity of exposition, we assume the value of $\tilde{w}_a$ is fully revealed by the search. With this abstraction, the consumer's posterior belief distribution about the partworths after the $n$th product is searched is:


(2) $f_a^n(w_a) = \begin{cases} f_a(w_a) & \text{if } a \notin \mathcal{A}(S_n) \\ \delta(w_a - \tilde{w}_a) & \text{if } a \in \mathcal{A}(S_n), \end{cases}$

where $\delta(\cdot)$ is the Dirac delta function. The assumption that preferences are fully revealed when an attribute is searched allows us to focus on the concept of preference learning. Our formal results can be derived for cases in which the posterior is not a delta function, but doing so might obscure the basic insights.
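A minimal sketch of the updating rule in Equation 2, under the full-revelation abstraction: each attribute's belief is either the prior or a point mass at the revealed value. The data structure and function names are illustrative, not from the paper.

```python
# Represent each attribute's belief as either ("prior", mean, sd) or ("revealed", w_tilde).
def initial_beliefs(means, sds):
    return [("prior", m, s) for m, s in zip(means, sds)]

def update_after_search(beliefs, x_j, w_true):
    """Equation 2: searching product j reveals w_a for every attribute a with x_ja = 1."""
    new = list(beliefs)
    for a, has_aspect in enumerate(x_j):
        if has_aspect:                        # attribute a joins the searched set A(S_n)
            new[a] = ("revealed", w_true[a])  # posterior collapses to a point mass at w_tilde_a
    return new

def expected_w(belief):
    return belief[1]                          # prior mean, or the revealed value

beliefs = initial_beliefs(means=[1.0, 0.0], sds=[1.0, 2.0])
w_true = [0.2, 2.5]                           # true partworths, unknown to the consumer
beliefs = update_after_search(beliefs, x_j=[0, 1], w_true=w_true)   # search product x_01
print([expected_w(b) for b in beliefs])       # [1.0, 2.5]: attribute 2 is now known exactly
```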

3.3. Optimal Search

If no recommendations are made, the consumer searches sequentially and optimally. If

recommendations are made, the consumer searches all recommended products, updates his or her

preferences, and searches optimally thereafter. The consumer is forward looking; therefore, the

consumer solves a dynamic programming recursion to select the next product to search, or to stop and

purchase. The state is the set of products already searched, $S_n$, and the posterior beliefs. Because distributions are independent over $a$, posterior beliefs after searching those products are $\vec{f}^n(\vec{w}) = \prod_a f_a^n(w_a)$.

If $V(S_n, \vec{f}^n)$ is the continuation value, the Bellman Equation recognizes that this value is the maximum over choosing the outside option ($u^*$), or choosing the maximum-utility product without searching based on $\vec{f}^n$, or continuing to search. The value of continuing to search is the maximum over all unsearched products, taking into account that preferences will be updated through further search (if further search is optimal). Expectations are based on $\vec{f}^n$, which is the consumer's belief about his or her preferences when the search decision is made. To focus on preference learning, we assume search happens relatively quickly such that the discount factor, $\beta$, equals 1. (Results 1-3 can be proven for $\beta < 1$, but with more complicated notation.) The Bellman Equation is:

(3) $V(S_n, \vec{f}^n) = \max\Big\{ u^*,\ \max_{j} \sum_a E[w_a \mid \vec{f}^n]\, x_{ja},\ \max_{j \notin S_n} \big\{ -c + E\big[ V(S_n \cup \{j\}, \vec{f}^{\,n+1}) \,\big|\, \vec{f}^n \big] \big\} \Big\}.$

Updating from $\vec{f}^n$ to $\vec{f}^{\,n+1}$ follows Equation 2, replacing $\mathcal{A}(S_n)$ with $\mathcal{A}(S_n \cup \{j\})$. Note that if product $j$ reveals no new attributes, $\vec{f}^{\,n+1} = \vec{f}^n$ and $\mathcal{A}(S_n \cup \{j\}) = \mathcal{A}(S_n)$. For the remainder of the paper, we assume the consumer solves Equation 3 after first searching products that have been recommended, if any.
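For very small product spaces, Equation 3 can be evaluated by direct recursion, approximating the expectation over not-yet-revealed partworths by Monte Carlo. The sketch below assumes independent normal priors, a common search cost, and a known outside option; parameter values and function names are illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_w(a, revealed, prior_mean):
    return revealed[a] if a in revealed else prior_mean[a]

def value(S, revealed, X, prior_mean, prior_sd, c, u_star, n_draws=200):
    """Brute-force sketch of Equation 3 for tiny product spaces.

    S        : frozenset of indices of products already searched
    revealed : dict attribute -> revealed partworth w_tilde_a
    """
    J, A = X.shape
    # Option 1: take the outside option.
    best = u_star
    # Option 2: buy the product with the highest expected utility under current beliefs.
    for j in range(J):
        best = max(best, sum(X[j, a] * expected_w(a, revealed, prior_mean) for a in range(A)))
    # Option 3: search one more product, then continue optimally (Monte Carlo over revelations).
    for j in range(J):
        if j in S:
            continue
        new_attrs = [a for a in range(A) if X[j, a] == 1 and a not in revealed]
        if not new_attrs:          # searching j reveals nothing new, so it cannot help
            continue
        draws = []
        for _ in range(n_draws):
            rev = dict(revealed)
            for a in new_attrs:    # sample what the search might reveal
                rev[a] = rng.normal(prior_mean[a], prior_sd[a])
            draws.append(value(S | {j}, rev, X, prior_mean, prior_sd, c, u_star, n_draws=50))
        best = max(best, -c + float(np.mean(draws)))
    return best

# Illustrative two-attribute full factorial.
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
print(value(frozenset(), {}, X, prior_mean=[0.5, 0.5], prior_sd=[1.0, 1.0], c=0.2, u_star=0.0))
```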

4. Preference Learning Affects Search

We know of no closed-form solution to Equation 3, even in the case of binary attributes. If a


consumer is surprised in the sense that his or her posterior preference for an attribute is different from

his or her prior beliefs, then the updated beliefs change the expected utility of all products that share

this attribute. All subsequent searches are affected. This interdependency encourages exploration, but

the value of exploration is complex and qualitatively different from that of learning attribute values. To

understand better the exploration value of search and recommendations and to examine when and how

preference learning expands the consumer’s options, we examine product spaces defined by two or

three attributes. We examine more complicated product spaces in §§6-7. In this section we examine

search without recommendations. We introduce recommendations in §5.

4.1. Consumer Search in a Full-Factorial Product Space

We consider search without recommendations in a two-attribute full-factorial product space of four products: $\vec{x}_{00} = (0, 0)$, $\vec{x}_{10} = (1, 0)$, $\vec{x}_{01} = (0, 1)$, and $\vec{x}_{11} = (1, 1)$. In this product space, the consumer's first decision is to buy one of the products or to search $\vec{x}_{10}$, $\vec{x}_{01}$, or $\vec{x}_{11}$. If the consumer searches $\vec{x}_{11}$, the values of both attributes are revealed and the consumer can make a choice without further search. If the consumer searches $\vec{x}_{10}$ or $\vec{x}_{01}$, then he or she has the option to buy one of the searched products or to search one of the remaining products. Searching $\vec{x}_{00}$ is not optimal, because it has neither binary attribute. After searching two products other than $\vec{x}_{00}$, all attribute weights are revealed, and if no choice has been made, the consumer has all information necessary to make a choice.

Even for two binary attributes, the equations for the optimal path are complicated and

dependent on what is revealed. In Appendix 1, we derive the full dynamic search path using Equation 3.

For example, the expected continuation value after searching $\vec{x}_{10}$ is

(4) $E\big[V(\{\vec{x}_{10}\}, \vec{f}^{\,1})\big] = -c + \int \max\{0, w_1\}\, f_1(w_1)\, dw_1 + \max\Big\{ 0,\ E[w_2],\ -c + \int \max\{0, w_2\}\, f_2(w_2)\, dw_2 \Big\}.$

By deriving equations for all possible options and outcomes, we prove the following result:

Result 1. When the two-attribute product space is a full factorial, the consumer will either search

the full-information product, $\vec{x}_{11} = (1, 1)$, or purchase one of the products without searching.

Appendix 1 provides the formal proofs of Results 1-3. The intuition behind Result 1 is that

searching the full-information product is an efficient way to obtain information. For example, consumers

might prefer test driving highly featured cars, or consumers might prefer searching condos with high

levels of many attributes. Result 1 generalizes nicely. The result and intuition extend to more than two

attributes and to sub-searches in which one attribute is held constant. For example, searching product

(1, 1, 0) is better than searching either (0, 0, 0), (1, 0, 0), or (0, 1, 0).
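A quick numerical check of Result 1 is possible with Monte Carlo, assuming independent normal priors, a common search cost, and an outside option normalized to zero (all values illustrative). Because the product space is a full factorial, the value of each search policy separates by attribute, exactly as in Equation 4, and searching $\vec{x}_{11}$ first weakly dominates searching $\vec{x}_{10}$ first.

```python
import numpy as np

rng = np.random.default_rng(2)
c = 0.3                                              # common search cost (illustrative)
mu, sd = np.array([0.2, 0.4]), np.array([1.0, 1.5])  # assumed normal priors on w_1, w_2
w = rng.normal(mu, sd, size=(200_000, 2))            # draws of (w_1, w_2); outside option is 0

# Policy A: search x_11 first (both partworths revealed), then buy the best product or nothing.
v_search_11 = -c + np.mean(np.maximum(w[:, 0], 0) + np.maximum(w[:, 1], 0))

# Policy B: search x_10 first (w_1 revealed); then either stop, buy using the prior mean of w_2,
# or pay c again to reveal w_2 -- the Equation-4 decomposition.
option_attr2 = max(0.0, mu[1], -c + np.mean(np.maximum(w[:, 1], 0)))
v_search_10 = -c + np.mean(np.maximum(w[:, 0], 0)) + option_attr2

print(round(v_search_11, 3), round(v_search_10, 3))  # Policy A is (weakly) better, as Result 1 implies
```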


Result 1 raises a fundamental issue: if the best search is always the full-factorial product, a

RecSys appears to have little value. But RecSystems are popular and highly researched. The answer to

the conundrum is simple. In most realistic cases, full-factorial products are not available. Highly featured

cars are priced high, and may be outside the consumer’s price range, leading to “sticker shock.” Many

possible features describe houses and condos—a fully featured property is exceedingly rare. The same

can be said for nannies, colleges, jobs, furniture on Craigslist, and even dating opportunities.

Recommendations are particularly valuable in product spaces where full-factorial products are not

available. For example, product spaces with multilevel attributes rarely have a full-factorial alternative: a

car cannot simultaneously have an automatic transmission, a manual transmission, and a shiftable

automatic transmission. We discuss multilevel attributes further in §4.3.

Result 1 provides insight for a new modification for RecSystems—favoring products with diverse

attributes. The focus on diverse attributes is different from current RecSys trends toward diverse, novel,

or serendipitous products. RecSys trends focus on products rather than attributes. By contrast, Result 1

suggests that a consumer may be better off searching product combinations that introduce attributes

that have not yet been searched. (Previewing §7, consumers are better off with such a RecSys

modification.) Result 1 also suggests that manufacturers might simplify the consumer’s search process

by making available products with many attributes.

4.2. Search Strategy in Non-Full-Factorial Space

Result 1 relies on the full-information product being available. When it is not, solving Equation 3

implies that optimal search focuses on information. In Appendix 1, we prove the following:

Result 2. When the product space is not a full-factorial and the consumer receives no

recommendations, the consumer may choose to search a product:

• that has a low or zero probability of being chosen,

• that does not have the largest expected utility,

• that does not have the largest option value, and

• that does not have the largest variance in utility.

Result 2 is important because it tells us that common prescriptions may no longer be optimal

when consumers learn their preferences. Relative to the RecSys literature, focusing on “accuracy” is not

optimal for the consumer; searching a product that has a low expected value or a zero probability of

being purchased might be optimal. Relative to the search literature, ranking products on their

reservation prices (or indices) that depend upon the upper tails of the $u_j$'s—the option values—is not


optimal. All else equal, a reservation-price-rank solution favors high variance in a product’s utility. High

option value or high variance may not be best when consumers learn preferences.

Result 2 tells us what may not be optimal to search, but we gain insight on what may be optimal

to search by examining the intuition behind the formal result. To demonstrate existence in Result 2, we

use a product space of three attributes: $\vec{x}_{000} = (0, 0, 0)$, $\vec{x}_{100} = (1, 0, 0)$, $\vec{x}_{010} = (0, 1, 0)$, $\vec{x}_{001} = (0, 0, 1)$, $\vec{x}_{110} = (1, 1, 0)$, $\vec{x}_{101} = (1, 0, 1)$, and $\vec{x}_{011} = (0, 1, 1)$, but remove the full-factorial product, $\vec{x}_{111} = (1, 1, 1)$. In this product space, a generalization of the reasoning behind Result 1 says that searching one of the products with two attributes present, $\vec{x}_{110}$, $\vec{x}_{101}$, or $\vec{x}_{011}$, is better than searching a product with only one or

no attributes present.

We establish existence with a class of prior distributions in which the consumer is reasonably sure that $w_3 > w_1$ and $w_3 > w_2$, but the consumer believes sufficient uncertainty exists about whether $w_1 > w_2$ or $w_1 < w_2$. We solve the dynamic program to show that the optimal search solution is to resolve the uncertainty in $w_1$ and $w_2$, and then purchase the best product. The consumer resolves uncertainty by searching $\vec{x}_{110} = (1, 1, 0)$, and then purchases either $\vec{x}_{101} = (1, 0, 1)$ or $\vec{x}_{011} = (0, 1, 1)$. Because the consumer is reasonably sure $w_3 > w_1$ and $w_3 > w_2$, buying $\vec{x}_{110}$ is highly unlikely, and $E[u(\vec{x}_{110})]$ is less than the expected utility of the chosen product. Resolving the uncertainty involves the relative distributions of $w_1$ and $w_2$, not the option value (positive tail) of $u(\vec{x}_{110})$. This property allows us to choose distributions such that $E[u(\vec{x}_{110}) \mid u(\vec{x}_{110}) \geq u^*] < E[u(\vec{x}_{101}) \mid u(\vec{x}_{101}) \geq u^*]$ and $E[u(\vec{x}_{110}) \mid u(\vec{x}_{110}) \geq u^*] < E[u(\vec{x}_{011}) \mid u(\vec{x}_{011}) \geq u^*]$. Our example is based on uniform distributions and

non-overlapping support; these distributions are sufficient, but not necessary. The simple idea

underlying Result 2 is that information about preferences may be more valuable to consumers than

standard criteria based directly on the probability distribution of utility. We use this intuition in §5 to

suggest a RecSys modification that is based on learning preferences for attributes.

4.3. Multilevel Attributes

Results 1 and 2 are not limited to binary attributes. Every multilevel attribute can be written as a

series of binary attributes. For example, a product space of $\vec{x}_{100} = (1, 0, 0)$, $\vec{x}_{010} = (0, 1, 0)$, and $\vec{x}_{001} = (0, 0, 1)$ can represent a three-level attribute whereby a car has either an automatic transmission, $\vec{x}_{100}$, a manual transmission, $\vec{x}_{010}$, or a shiftable automatic transmission, $\vec{x}_{001}$. Indeed, most product spaces with multilevel attributes will not have the full-factorial product available—no car has an automatic transmission, a manual transmission, and a shiftable automatic transmission—no $\vec{x}_{111} = (1, 1, 1)$.

Both our formal theory and the RecSys analyses in §5 allow product spaces that are not full-

factorials. Our theory and analyses deal readily with such product spaces. In fact, because the full-


factorial product is rarely available when multilevel attributes are present, the insights of Result 2 are

highly relevant for multilevel-attribute product spaces. The synthetic data analyses in §§6-7 are based

on multilevel attributes.

5. A RecSys Can Improve a Consumer’s Search

A RecSys is valuable if it uses information the consumer does not have. In our case, that

information is improved prior beliefs about the preference weights. An experienced realtor might know

that young couples, such as Candace and Dave, tend to undervalue playground proximity, or an

experienced guidance counselor might infer that Evan would value an undergraduate research program.

The guidance counselor might even know Evan would have to see the program himself; Evan, being a

teenager, would not update his priors based on the guidance counselor’s word alone. Machine-learning

algorithms might use prior experience, collaborative filters, cookies, social-network posts, geolocation,

or other means to gain insight into consumers' true preferences. RecSys priors could very well be better

than the naïve consumers’ priors.

We let $\vec{g}(\vec{w})$ represent the RecSys's prior beliefs about the consumer's preferences. The interesting case is when RecSys priors differ from the consumer's priors, $\vec{g} \neq \vec{f}$. Otherwise, the RecSys would provide little value, and the RecSys's recommended products would coincide with those the consumer would otherwise search. In this section, we assume the RecSys knows the consumer's

beliefs, $\vec{f}$ (and the search costs), but the consumer does not know the RecSys's beliefs, $\vec{g}$. We study the case in which the RecSys cannot simply tell the consumer to replace $\vec{f}$ with $\vec{g}$. Instead, the RecSys

recommends products and the consumer searches those products. After searching the

recommended products, the consumer continues to search and/or purchase optimally. The RecSys

makes its recommendations knowing the consumer will continue to search optimally.

A good RecSys need not know the true partworths, $\tilde{w}_a$, with certainty, but will have priors that are closer to the truth than the consumer's priors. In §7, we propose a RecSys modification that uses $\vec{f}$ and $\vec{g}$. Unless

otherwise specified, we assume the RecSys is benevolent and acts in the best interests of the consumer.

For example, benevolence might be valuable for a realtor who wishes to maintain his or her reputation

for future sales, a guidance counselor who is judged on student satisfaction, a RecSys that is monetized

by website visitors who are motivated by prior visitor satisfaction, or a RecSys that provides value across

many product categories by acting in the consumer’s best interests.

Given these assumptions, we compare (1) the value the RecSys believes the consumer can

expect if he or she follows the RecSys’s recommendations and (2) the value the RecSys believes the

consumer can expect if he or she searches without recommendations. For example, for a two-attribute


non-full-factorial product space, we compare the value of recommending $\vec{x}_{01}$, labeled $\{\vec{x}_{01}\}_R$, to searching with no recommendations, labeled $\emptyset_R$. The derivation is in Appendix 1:

(5) $E\big[V(\{\vec{x}_{01}\}_R, \vec{f})\,\big|\,\vec{g}\big] \;\geq\; E\big[V(\emptyset_R, \vec{f})\,\big|\,\vec{g}\big] + I_1 + I_2,$

where $I_1$ and $I_2$ are integrals against the RecSys's density $g_2(w_2)$ that are derived in Appendix 1.

By deriving related equations for all possible recommendations and the resulting search paths,

we identify the best recommendation(s) and prove a RecSys result that parallels Result 2:

Result 3. There exist cases where it is optimal for a RecSys to recommend an undervalued

product that the consumer would not otherwise search or choose. The recommended product:

• may have a low or zero probability of being chosen,

• may not have the largest expected utility,

• may not have the largest option value, and

• may not have the largest variance in utility.

A two-attribute product space is sufficient to prove existence. We choose consumer priors, $\vec{f}$, such that, from the consumer's perspective, it is optimal to search only $\vec{x}_{10} = (1, 0)$. We do this by choosing $\vec{f}$ such that (1) the value of resolving the uncertainty in $w_1$ is greater than the search cost, but (2) the value of resolving the uncertainty in $w_2$ is less than the search cost. These properties of $\vec{f}$ assure that, with no recommendations, the consumer will search $\vec{x}_{10} = (1, 0)$, but not $\vec{x}_{01} = (0, 1)$. Note that, by construction, the option value of searching $\vec{x}_{01}$ is less than the option value of searching $\vec{x}_{10}$.

We then keep the mean of the RecSys's beliefs the same as the mean of the consumer's prior beliefs, but increase the uncertainty implied by $g_2(w_2)$ relative to $f_2(w_2)$, which, in turn, implies the consumer undervalues $\vec{x}_{01}$ relative to the RecSys's beliefs. We show that recommending $\vec{x}_{01}$, and only $\vec{x}_{01}$, is optimal. Basically, the RecSys believes that searching $\vec{x}_{01}$ to resolve $w_2$ and then evaluating whether to resolve $w_1$ is best for the consumer. Because the optimal consumer may or may not search to resolve $w_1$ after observing $\tilde{w}_2$, the option afforded the consumer implies that $\{\vec{x}_{01}\}_R$ is better than $\{\vec{x}_{01}, \vec{x}_{10}\}_R$. The recommendation $\{\vec{x}_{10}\}_R$ is inferior to other recommendations because, after searching $\vec{x}_{10}$, the consumer would not search $\vec{x}_{01}$ without a recommendation.

We see the basis of these relationships mathematically in Equation 5: the value of following the

recommendation, $\{\vec{x}_{01}\}_R$, is higher than the value of acting with no recommendations, $\emptyset_R$. The


difference in value is represented by the two integrals, both of which are clearly non-negative. (In Result 3, the number of recommended products is an endogenous decision of the RecSys.) We show by example that all relationships can be satisfied even if the mean and variance of the beliefs about $w_2$ are less than the mean and variance of the beliefs about $w_1$ and if the probability of choosing $\vec{x}_{01}$ is less than the probability of choosing $\vec{x}_{10}$.

The intuition underlying the proof of Result 3 is simple—the consumer benefits most from recommendations of products the consumer would not have chosen to search. These products have attributes for which $\vec{g}$ is most different from $\vec{f}$. Together, Results 2 and 3 imply that rather than

recommending high-utility, high-probability-of-choice, high-option-value, or high-variance products, the

RecSys should focus on high-learning products.

Insights from the proof of Result 3 suggest a simple RecSys modification—recommend products for which $E[u_j \mid \vec{g}]$ is maximally different from $E[u_j \mid \vec{f}]$. We call such products “undervalued products.” One RecSys modification maximizes the sum of $E[w_a \mid g_a] - E[w_a \mid f_a]$ over all $a$ with $x_{ja} = 1$. Alternative modifications might focus on absolute differences, positive differences, or other functions of $\vec{f}$ and $\vec{g}$.
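A minimal sketch of the undervalued-product score just described, using the signed-sum variant; the prior means are made-up numbers, not values from the paper.

```python
import numpy as np

def undervalued_scores(X, mean_f, mean_g):
    """Sum E[w_a | g] - E[w_a | f] over the aspects each product has (x_ja = 1)."""
    gap = np.asarray(mean_g, dtype=float) - np.asarray(mean_f, dtype=float)
    return X @ gap

# Illustrative two-attribute example in which the consumer undervalues attribute 2.
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
print(undervalued_scores(X, mean_f=[1.0, 0.0], mean_g=[1.0, 1.0]))
# [0. 0. 1. 1.] -> products with attribute 2 are the "undervalued products"
```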

Result 1 suggested that RecSystems be modified to reward products with a diverse set of

attributes. The Result 1 modification is different from the Result 3 modification, but the two

modifications are not unrelated. If diverse attributes are unsearched attributes, then searching them changes beliefs: $f_a^{n+1}(w_a) \neq f_a^{n}(w_a)$. “Undervalued products” depend on $\tilde{w}_a \neq E[w_a \mid f_a]$. Both RecSys modifications focus on

changing consumer beliefs.

This completes our formal results. The basic insight is that preference learning changes the

criteria with which RecSystems are evaluated. High expected utility, high probability of choice, high

variance, and high option values may not lead to the best recommendations. Instead, when preference

learning is important, a good RecSys will compare its beliefs, $\vec{g}$, to those of the consumer, $\vec{f}$, and

attempt to recommend products that encourage the consumer to seek the most useful information. The

most useful information is that which updates preference beliefs.

The suggested RecSys modifications are consistent with, and provide a marketing-science

explanation for, recent trends in the RecSys literature. These trends include diversity, novelty, and

serendipity. The preference-learning modifications suggest that diversity should be with respect to

attributes, novelty should be based on not-yet-searched attributes, and serendipity should focus on

products that are undervalued. In §7, we explore these theory-based RecSys modifications.

6. RecSys Recommendations with a Larger Product Space

Before we test the proposed RecSys modifications, we illustrate RecSys capabilities in product


spaces that are larger than those used in the formal results.

6.1. Structure of the Synthetic Data

We examine a product space defined by three five-level attributes. With multilevel attributes,

full-information (full-factorial) products (as per Result 1) are neither feasible nor available. Henceforth,

to avoid confusion, we follow Tversky (1972) and refer to a binary attribute level as an aspect. The

fifteen (15 = 3 × 5) aspects in our product space are sufficient to illustrate interesting phenomena,

but not so complex as to make calculating optimal post-recommendation search infeasible. With three

five-level attributes, 5 × 5 × 5 = 125 feasible products exist in the product space, which is far fewer

than the $2^{15} = 32{,}768$ combinations of 15 aspects, though still large.

Fortunately, we can reduce the computational burden for optimal post-recommendation search

by allowing all 125 feasible products to be available to the consumer. When all products are available,

the highest-utility product has the best level for each attribute, which simplifies search within an

attribute, although not between attributes. Despite the reduced computation, we retain sufficient

complexity to model the insights from Results 1-3. We set the number of recommended products to one because a single recommendation is

sufficient to illustrate the phenomena made possible by preference learning.

The prior beliefs, $f_a(w_a)$, for all aspects, $a$, are normally distributed and independent over aspects. We denote the means and standard deviations of the aspect-based normal distributions by $\mu_a$ and $\sigma_a$ such that $f_a(w_a) = N(\mu_a, \sigma_a^2)$. The specific parameter values are given in Appendix 2. Patterns similar to those discussed in this section emerge for a wide range of parameter values, for different numbers of attributes, and for different search costs.
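A sketch of how such a synthetic world can be generated: enumerate the $5 \times 5 \times 5$ products as binary aspect vectors (one level per attribute) and draw independent normal priors per aspect. The parameter values below are placeholders; the values actually used are those reported in Appendix 2.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_attributes, n_levels = 3, 5
n_aspects = n_attributes * n_levels                 # 15 aspects

# All 5 x 5 x 5 = 125 products as binary aspect vectors (exactly one level per attribute).
products = []
for levels in itertools.product(range(n_levels), repeat=n_attributes):
    x = np.zeros(n_aspects, dtype=int)
    for attr, lvl in enumerate(levels):
        x[attr * n_levels + lvl] = 1
    products.append(x)
X = np.array(products)                              # shape (125, 15)

# Independent normal priors over aspect partworths (placeholder parameter values).
mu = rng.normal(0.0, 1.0, size=n_aspects)
sigma = np.full(n_aspects, 1.0)
w_true = rng.normal(mu, sigma)                      # one synthetic consumer's true partworths

print(X.shape, X.sum(axis=1)[:5])                   # (125, 15) [3 3 3 3 3]
```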

6.2. Comparison of Recommended and Chosen Products

We consider all possible recommended products. We assume the consumer searches the

recommended product and then continues searching optimally. The consumer either chooses a product

or the outside option. We summarize the results in Figures 1 and 2.

The vertical axis of Figure 1 plots the consumer's net payoff—the utility of the chosen product minus the incurred search costs. The horizontal axis of Figure 1 represents the true utility of the recommended product (based on $\tilde{\vec{w}}$). The consumer learns these values when he or she searches the

recommended product. In Figure 1, all recommendations lead the consumer to purchase one of three

products as indicated by the horizontal rows. The net utility of the chosen product differs slightly

because search costs differ. (“Three products” is not a general result. Different parameter values give

different numbers of post-search products.)


Figure 1. Net Utility of Product Chosen After Search vs. Utility of Recommended Products

[Figure 1 plots the net utility of the chosen product (vertical axis, approximately 7.0 to 9.5) against the true utility of the recommended product (horizontal axis, approximately 2 to 10). Annotated points: “Highest utility recommendation, close to highest choice” (◆); “Modest recommendation, but consumer finds and chooses the best” (▲); “Good recommendation, consumer chooses well, but not the best” (■); “Bad recommendation, despite further search, consumer chooses poorly” (●).]

We first examine the product recommendation in Figure 1 that is represented by the diamond (◆). In this case, the RecSys recommended the highest-utility product and the consumer chose that product, but ultimately did so after incurring more search costs than would have been incurred for other recommendations. As Result 3 anticipates, there is a better recommendation than the diamond (◆). The recommendation indicated by a triangle (▲) also leads the consumer to the highest-utility product, but does so by recommending a much lower-utility product. After receiving this recommendation (▲),

the consumer learns efficiently that some attributes are more important and some are less important

than previously thought.

The recommendation indicated by a square (■) is interesting. For such recommendations, the

consumer is satisfied with one of the recommended products and has no incentive to search further.

Such recommendations by non-benevolent recommenders exploit the consumer’s naïveté and lead him

or her to purchase a product that does not have the highest true utility. The consumer would not update his or

her priors sufficiently, never learn of better-utility products, but would be satisfied with his or her

chosen product. The consumer might even thank the RecSys for a recommendation. For example, a non-

benevolent realtor might be tempted to recommend his or her own listing to obtain both seller and

buyer commissions. (The value of the realtor’s reputation might be a force to prevent non-benevolence,

but only if the consumer would learn of the better-utility product after purchase, and then communicate



that information to new consumers, causing them to seek other realtors—and then only if the realtor did

not discount the future too much.) Similarly, and with similar concerns, short-term gains might be

tempting if paid advertising supported a RecSys. The other interesting feature about the

recommendation denoted by a square (■) is that other recommendations exist, with lower initial

utility, that lead to higher post-search utility. Although we are agnostic about non-benevolent

RecSystems, we note that other researchers posit that a RecSys should recommend products to

maximize its own utility (Ansari, Essegaier and Kohli 2000).

Recommendations, such as indicated by a circle (●), lead the consumer to choose lower-utility

products. After following such recommendations, the consumer, acting optimally based on his or her

priors, updates some of his or her preferences, but never updates them sufficiently to find the highest-

utility product. Post search, the consumer believes falsely that he or she has found the product that has

the highest utility. Despite the opportunity loss, the consumer might be satisfied.

Figure 2 provides a different perspective on the same recommendations. Figure 2 compares net

post-search utility to RecSys beliefs. In Figure 2, the mean of the RecSys's beliefs, $E[w_a \mid g_a]$, is a convex combination of the mean of the consumer's prior beliefs, $E[w_a \mid f_a]$, and the consumer's true partworth, $\tilde{w}_a$. As anticipated by Result 3 and as represented by a hexagon (⬡), a recommended product, thought by the RecSys to be the highest-utility product, does not lead the consumer to the highest-utility product. After receiving that recommendation (⬡), the consumer, acting optimally based on priors, would not search sufficiently to find the true highest-utility product. A (benevolent) RecSys would have served the consumer better had it recommended the product indicated by a diamond (◆), or even the product indicated by a triangle (▲). In the latter case (▲), the recommendation itself would not have been a

high-utility product, but, consistent with Result 3, the recommendation would have caused the

consumer to update his or her beliefs and continue searching until the highest-utility product was found.

In Figure 2, the utility of the recommended products and the utility of the chosen products are

correlated ($r = 0.28$), but the relationship is well below 1.0. Preference learning drives the lack of

perfect correlation. Detailed examination of the search path reinforces the insights obtained from Result

3—the best recommendations are those that encourage the consumer to search products that reveal

undervalued aspects. Figure 2 reinforces the insight that good recommendations provide valuable

information. Figures 1 and 2 allow the consumer to be surprised either positively (attribute preference

higher than priors) or negatively (attribute preference lower than priors). Both forms of preference

learning are valuable to consumers.


Figure 2. Net Utility of Product Chosen after Search vs. RecSys’s Beliefs about the Utility of the Recommended Product

[Figure 2 plots the net utility of the chosen product (vertical axis) against the utility of the recommended product under the RecSys’s beliefs (horizontal axis). Annotated point (⬡): “Recommender tries to recommend the best product, but, even after search, the consumer does not find the highest-utility product.”]

7. RecSystems that Encourage Learning—Synthetic Data Experiments

The formal analyses (§§4-5) and the numerical examples (§6) suggest two preference-learning

RecSys modifications: recommending products with diverse attributes (Result 1) and recommending

undervalued products (Result 3). In this section we explore whether those modifications improve RecSys

performance on synthetic data. The synthetic data expand the analyses of §6 to 10,000 consumers in

each of over 100 synthetic worlds. Because the value of preference learning depends upon differences in $\vec{f}$ and $\vec{g}$, our synthetic worlds vary with respect to the quality of consumers' priors (naïveté) and the

ability of the RecSys to identify which consumers have which preference weights (RecSys knowledge). By

design, consumers learn their preferences in our synthetic worlds. Our experiments are proof-of-

concept experiments: we examine whether the RecSys modifications can improve RecSys performance

when consumers learn their preferences.

We expect preference learning to be particularly relevant for naïve consumers who are new to a

product category. Naïveté is more likely for infrequent purchases such as automobiles, housing, college

choice, and nannies. Naïveté is also more likely for consumers who feel they need recommendations.

RecSys knowledge about (naïve) consumers is more likely when recommendations can be tailored to a

relatively homogeneous target segment (homogeneous in $\tilde{\vec{w}}$) so that experience with previous

consumers applies well to the target consumer. This is more likely when the RecSys tracks consumers’



browsing patterns and choices and when marketing-science methods identify homogeneous sets of

consumers from observed variables (demographics, geolocation, social network posts, collaborative

filters, etc.).

7.1. The Need for Computationally Efficient Heuristic Algorithms

The illustrative results of §6 were based on one synthetic consumer. We solved numerically for

the optimal search path after each of 125 possible recommendations. An applied RecSys would likely

recommend more products in a space with more attributes. For example, if all attribute combinations

were feasible, a Top-1 RecSys would evaluate almost 10 million recommendations in a space of ten five-

level attributes. This number would grow to 50 trillion pairwise recommendations for a Top-2 RecSys

and over 100 quintillion recommendations for a Top-3 RecSys. Although the dynamic program in Equation

3 makes search more efficient, its memory requirements grow exponentially with the size of the

problem. It is no surprise that applied RecSystems are based on heuristics. To be consistent with

applications and with the RecSys literature, we examine RecSys modifications that are likely to be

feasible in applied situations. These modifications are necessarily heuristic, but capture the essence of

the formal results.

7.2. RecSys Modifications that Encourage Preference Learning

“Maximum Expected Utility.” A maximum-expected-utility RecSys provides a benchmark

against which to compare the two RecSys modifications. The maximum-expected-utility RecSys

recommends products that it expects will give the consumer the highest utility.

“Undervalued Products.” This RecSys modification compares the consumer's prior beliefs ($\vec{f}$) to the RecSys's beliefs ($\vec{g}$) and maximizes $D_j \equiv \sum_a x_{ja} \big( E[w_a \mid g_a] - E[w_a \mid f_a] \big)$. When $\max_j D_j$ is above a threshold, the RecSys modification recommends the corresponding product. When $\max_j D_j$ is

below a threshold, the modification reverts to the benchmark of recommending the maximum-

expected-utility product. In the latter case, no significantly undervalued products exist, indicating the

consumer has learned his or her preferences.
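A sketch of the undervalued-products heuristic with its fallback rule. The threshold value is a placeholder, and the signed-sum gap is one of the variants discussed in §5.

```python
import numpy as np

def recommend_undervalued(X, mean_f, mean_g, threshold=0.5):
    """Recommend the product with the largest summed (RecSys-minus-consumer) prior-mean gap;
    if no gap exceeds the threshold, fall back to the maximum-expected-utility benchmark."""
    gaps = X @ (np.asarray(mean_g, dtype=float) - np.asarray(mean_f, dtype=float))
    if gaps.max() > threshold:
        return int(np.argmax(gaps))                  # a significantly undervalued product exists
    return int(np.argmax(X @ np.asarray(mean_g, dtype=float)))  # benchmark recommendation

X = np.array([[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]])        # illustrative aspect vectors
print(recommend_undervalued(X, mean_f=[2.0, 0.1, 0.3, 1.0], mean_g=[2.0, 1.4, 0.3, 1.0]))  # -> 1
```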

We expect the undervalued-product modification to improve performance when the consumer

is particularly naïve ($\vec{f}$ is different from the true $\tilde{\vec{w}}$) and when the RecSys has relatively good knowledge ($\vec{g}$ is close to $\tilde{\vec{w}}$). Because the modification focuses on $\vec{f}$-vs.-$\vec{g}$ differences at the expense of

maximum utility, we expect the modification will do well for naïve consumers or knowledgeable

RecSystems.

“Aspect Diversity.” The aspect-diversity modification is a generalization of “attribute-diversity”


to apply when multilevel attributes exist. Aspect diversity modifies the benchmark RecSys by subtracting

a penalty proportional to the number of aspects in common with the product the consumer would have

searched without a recommendation. To avoid overly tuning the modification to any specific synthetic

world, we set the weight on the penalty to 1.0, thus assuring that our results are conservative and could

only do better by tuning this parameter.
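A sketch of the aspect-diversity heuristic with the penalty weight set to 1.0 as described above. How the RecSys obtains the product the consumer would have searched without a recommendation (by observation or by prediction) is outside this sketch.

```python
import numpy as np

def recommend_aspect_diverse(X, mean_g, consumer_first_choice, penalty_weight=1.0):
    """Benchmark score (expected utility under the RecSys's beliefs) minus a penalty
    proportional to the number of aspects shared with the consumer's own first choice."""
    overlap = X @ X[consumer_first_choice]           # aspects in common with the consumer's pick
    scores = X @ np.asarray(mean_g, dtype=float) - penalty_weight * overlap
    return int(np.argmax(scores))

X = np.array([[1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 0, 1]])   # illustrative aspect vectors
print(recommend_aspect_diverse(X, mean_g=[1.0, 0.8, 0.6, 0.5], consumer_first_choice=0))
# -> 2: a lower-expected-utility product that shares no aspects with the consumer's own pick
```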

The aspect-diversity modification uses $\vec{g}$ but not $\vec{f}$. Not needing to know $\vec{f}$ is an advantage in applications when inferring $\vec{f}$ is hard for the RecSys, but it is a disadvantage because aspect diversity does not use data on $\vec{f}$. We expect the aspect-diversity modification will perform as well or better than

the benchmark and will be more robust than the undervalued-product modification, particularly for

more-expert consumers and less-knowledgeable RecSystems.

7.3. Product Space and True Consumer Utilities

We use a product space of two five-level attributes and set the number of recommendations to one. For each synthetic world and

for each of 10,000 consumers in that world, we draw true aspect preferences (partworths) from a

mixture of two normal distributions: one with a low mean to represent unimportant aspects and one

with a high mean to represent important aspects. The consumer’s prior beliefs are normally distributed

and independent over aspects and depend on naïveté as described in §7.4. The variances of the priors

are drawn i.i.d. from an exponential distribution. The specific values of the parameters of the preference

distributions are given in Appendix 2. For readers wishing to explore other parameter values or other

RecSys modifications, software is available from the authors.
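A minimal sketch of the generating process, using the parameter values reported in Appendix 2 (mixture components with means 0.0 and 4.0 and variances 0.5 and 0.2, a mixing weight of 0.7 on the important component, and exponential prior variances with mean 0.1); the aspect count and all names are illustrative:

```python
import numpy as np

N_ASPECTS = 10                      # two five-level attributes
rng = np.random.default_rng(0)

def draw_true_partworths(rng=rng):
    """True aspect partworths: mixture of an 'important' and an 'unimportant' normal."""
    important = rng.random(N_ASPECTS) < 0.7                      # mixing weight
    return np.where(important,
                    rng.normal(4.0, np.sqrt(0.2), N_ASPECTS),    # important aspects
                    rng.normal(0.0, np.sqrt(0.5), N_ASPECTS))    # unimportant aspects

def draw_prior_variances(rng=rng):
    """Variances of the consumer's prior beliefs, i.i.d. exponential with mean 0.1."""
    return rng.exponential(0.1, N_ASPECTS)
```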

7.4. Synthetic Worlds Vary with Respect to Consumer Naïveté and RecSys Knowledge

Consumer Naïveté. For each consumer, we set the prior means equal to the true means for a

fraction of the aspects. For the remaining aspects, we redraw the prior means randomly. A consumer is

more naïve (less expert) if a larger fraction of the consumer’s prior beliefs are redrawn randomly. We

vary this fraction. An expert has naïveté equal to zero and a novice has naïveté equal to 1.
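A minimal sketch of the naïveté manipulation, continuing the generator above; each aspect’s prior mean is redrawn from the generating distribution with probability equal to the naïveté parameter:

```python
def draw_prior_means(true_w, naivete, rng=rng):
    """With probability `naivete`, replace an aspect's prior mean with a fresh
    draw from the generating distribution; otherwise set it to the true partworth."""
    redraw = rng.random(len(true_w)) < naivete
    return np.where(redraw, draw_true_partworths(rng), true_w)
```

An expert corresponds to naivete = 0 (prior means equal the true partworths); a novice corresponds to naivete = 1 (all prior means redrawn).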

RecSys Knowledge. It would be unrealistic for a RecSys to know a consumer’s true partworths exactly. More likely, a

RecSys would be able to observe mean preferences within a segment of consumers. Such observations

might be obtrusive via preference elicitation such as conjoint analysis or unobtrusive via marketing-

science models that relate attribute levels to observed choices. If consumers have homogeneous

preference weights within a segment, the RecSys’s prior is likely to be highly accurate for the target

consumer. If the distribution of preference weights within a segment is diffuse, then, even with the

most-accurate estimation methods, the RecSys prior will be less accurate for a target consumer. We

manipulate heterogeneity within a segment as a realistic surrogate for RecSys knowledge.


Manipulating heterogeneity must be done carefully. If we were to simply manipulate the

variances of the normal distributions from which we draw true aspect preferences, we would

manipulate heterogeneity, but we would simultaneously manipulate the expected value of the

maximum-utility product (Gumbel 1958). Instead, the variances of true beliefs are the same in each

synthetic world, but we manipulate the order of aspect preferences within an attribute. In the lowest

heterogeneity condition, the rank-order preferences within an attribute are the same for all consumers.

In the highest heterogeneity condition, the rank orders vary randomly. Details are in Appendix 2.

7.5. RecSys Performance as a Function of Consumer Naïveté and RecSys Knowledge

We vary consumer naïveté and RecSys knowledge in 10 equal steps for a total of 121 synthetic

worlds. (Naïveté and RecSys knowledge vary from 0 to 1 in steps of 0.1). The benchmark and each

suggested RecSys modification make their recommendations. Each of 10,000 consumers searches the

recommended product and then searches optimally until a stopping rule is reached. We measure

performance by the relative improvement in utility of the chosen product (net of search costs). That is,

performance is the net utility achieved with a recommendation minus the net utility achieved with no recommendation (computed for consumers whose prior beliefs differ enough from the RecSys’s beliefs for a recommendation to matter). We normalize this improvement by the maximum improvement that an omniscient consumer, who knew all of his or her preferences without

search, would have achieved. Tables A1 to A3 in Appendix 3 provide the mean performance for each

synthetic world. In those tables, we highlight in bold where each of the RecSystems does best. RecSys

performance behaves as predicted in §7.2.
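A minimal sketch of the performance measure, assuming net utilities (utility of the chosen product minus total search costs) have already been simulated for each consumer and that the normalization is applied to the averages; function and argument names are illustrative:

```python
import numpy as np

def relative_improvement(net_with_rec, net_without_rec, net_omniscient):
    """Arrays over consumers: net utility with a recommendation, with no
    recommendation, and for an omniscient consumer who needs no search."""
    improvement = np.mean(net_with_rec - net_without_rec)
    max_improvement = np.mean(net_omniscient - net_without_rec)
    return improvement / max_improvement
```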

We find it instructive to illustrate the variation in RecSys performance with two plots, one that

represents matched rows in Tables A1-A3 and one that represents matched columns in Tables A1-A3. In

particular, Figure 3 varies consumer naïveté, holding RecSys knowledge constant, and Figure 4 varies

RecSys knowledge, holding consumer naïveté constant. (We choose intermediate values for the

parameters held constant, naïveté = 0.6 and heterogeneity = 0.4 as defined in Appendix 2.) The error

bars represent variation over sets of consumers for each synthetic world.

As expected, both preference-learning modifications often improve performance relative to the

benchmark. The undervalued-product modification does particularly well when consumers are very

naïve or when the RecSys has good knowledge of preferences, but it does less well when consumers are

expert or RecSys knowledge is low. The aspect-diversity modification is more robust—its performance is

less extreme. It almost always improves performance (in Figures 3 and 4) relative to the benchmark. The

aspect-diversity modification does well relative to the undervalued-product modification for consumers

who are neither naïve nor expert and for RecSystems with low knowledge.


Figure 3. Improvement in Net Utility of the Chosen Product due to Recommendations as a Function of the Consumer’s Naïveté (series: Undervalued Products, Aspect Diversity, Max Expected Utility)

Figure 4. Improvement in Net Utility of the Chosen Product due to Recommendations as a Function of RecSys Knowledge (series: Undervalued Products, Aspect Diversity, Max Expected Utility)

7.6. Summary of Synthetic Data Experiments

Figures 3 and 4 (and Tables A1-A3) demonstrate that situations exist in which RecSys

modifications based on preference learning improve the consumer’s net utility relative to a standard



RecSys benchmark. Both RecSys modifications do relatively well in at least some synthetic worlds. With

refinement and tuning, we expect the relative performances of the modifications to improve further.

For example, we might test a heuristic that combines the best features of the undervalued-product and

the aspect-diversity modifications, or we might tune either or both modifications. For readers wishing to

explore refinements, software is available from the authors.

8. Summary and Discussion

8.1. Summary

When consumers update their preferences as they search, the optimal search strategy is to

search the full-information product, purchase immediately, or choose the outside option. However, most product spaces, particularly product spaces with multilevel attributes, do not contain a full-information

product. For typical product spaces, the consumer will search products that reveal the most useful

information about aspect preferences. The Weitzman strategy is not necessarily optimal—consumers

may search products with low option values or low variance in utility. Furthermore, in contrast to

traditional RecSys literature, but consistent with recent developments with respect to diversity, novelty,

and serendipity, preference learning suggests that RecSystems should not be evaluated solely based on

accuracy (probability of choice or expected utility). Benevolent RecSystems should focus on helping the

consumer learn rather than recommending products that the consumer would buy, that have high utility, or that have high option values. We also demonstrate that RecSystems, with goals other than benevolence, can steer

consumers to profitable choices and that such consumers may never learn that such steering has

occurred.

Using insight from the formal analyses and numerical examples, we propose two RecSys

modifications that encourage preference learning. Synthetic data experiments demonstrate that both

RecSys modifications outperform a common benchmark, especially when consumers are naïve and

when RecSys knowledge is high. For naïve consumers and high RecSys knowledge, the undervalued-product modification does extremely well by exploiting differences between the RecSys’s beliefs and the consumer’s prior beliefs. The aspect-diversity modification is more robust and has the advantage that it does not need to know the consumer’s prior beliefs.

8.2. Generality

We expect the insights from the formal results, the numerical examples, and the synthetic data

experiments to scale to larger product spaces. The proposed RecSys modifications are stylized proof-of-

concept algorithms; we expect more-sophisticated RecSys modifications to achieve even greater

improvements, especially when tuned to specific settings. Our formal model abstracts in two ways. First,

we assume that search fully updates a consumer’s preferences for searched attributes. Relaxing this

Page 25: Recommending Products When Consumers Learn Their Preferencesweb.mit.edu/hauser/www/Updates - 4.2016/Dzyabura Hauser RecSys... · Recommending Products When Consumers Learn Their Preferences

23

abstraction to allow normally-distributed posteriors should not change the basic insights. We can obtain

counterparts to Results 1-3 for this case. Our second abstraction is that consumers know attribute

levels. Standard search theory is based on learning about products (attribute levels). Relaxing this

abstraction greatly complicates the model, but does not change the insight that preference learning

affects search. Preference learning and aspect learning are complementary phenomena.

8.3. Future Research

The analyses in this paper demonstrate that preference learning can have a major impact on the

study of consumer search and on the design of RecSystems. Further avenues of research are possible.

One might explore interdependence among preferences for attribute levels, and model how learning

about one aspect informs preferences about another aspect. Interdependence is especially interesting

for “more-is-better” attributes such as square feet in a condo. RecSystems might be developed that

identify the consumer’s relative naïveté and morph depending upon that naïveté. Ensembles of

RecSystems might do well. In §6, non-benevolent RecSystems influenced outcomes to the benefit of the

RecSys, yet left the consumer satisfied. Such RecSystems could prove interesting. We assumed forward-

looking consumers solve Equation 3. An alternative interpretation is that heuristic solutions to Equation

3 approximate consumer search. For example, Lin, Zhang, and Hauser (2015) illustrate situations in

which consumers use a cognitively simple learning strategy that approximates a complex dynamic

program.

Our examples and references from the marketing-science literature motivate preference

learning. Research on the underlying mechanism could improve the stylized model. The synthetic data

experiments suggest empirical tests. Such tests are not trivial because they require professional web-

programming, optimized code, a professional website with tens of thousands of visitors, and a partner

interested in an A/B RecSys experiment. However, we predict that if RecSys modifications that

encourage preference learning are implemented, the modifications will achieve noticeable

improvements.


References

Adam K (2001) Learning while searching for the best alternative. Journal of Economic Theory,

101(1):252-280.

Adamopoulos P, Tuzhilin A (2014) On unexpectedness in recommender systems: or how to better expect

the unexpected. ACM Transactions on Intelligent Systems and Technology 5(4):54:1-32.

Adomavičius G, A Tuzhilin (2005) Towards the next generation of recommender systems: a survey of the

state-of-the-art and possible extensions. IEEE Transactions on Knowledge and Data Engineering

17(6):734-749.

Ansari A, S Essegaier, R Kohli (2000) Internet recommendation systems. Journal of Marketing Research,

37(August):363-376.

Bikhchandani S, Sharma S (1996) Optimal search with learning. Journal of Economic Dynamics and Control

20(1):333-359.

Bodapati A (2008) Recommendation systems with purchase data. Journal of Marketing Research 45(1):77-93.

Branco F, Sun M, Villas-Boas JM (2012) Optimal search for product information. Management Science 58(11):2037-2056.

Bronnenberg BJ, Kim J, Mela CF (2015) Zooming in on choice: How do consumers search for cameras

online? Forthcoming Marketing Science.

Castells P, Vargas S, Wang J (2011) Novelty and diversity metrics for recommender systems: choice,

discovery and relevance. International Workshop on Diversity in Document Retrieval (DDR 2011)

at the 33rd European Conference on Information Retrieval (ECIR 2011). Dublin, Ireland, April.

Celma Ò, Herrera P (2008) A new approach to evaluating novel recommendations. RecSys ’08,

Proceedings of the 2008 ACM Conference on Recommender Systems. Lausanne, Switzerland.

Cook J (2012) MiniDates schedules real-life (legitimately) blind dates for you. TechCrunch, May 30, 2012.

Dzyabura D, Jagabathula S, Muller E (2016) Using online preference measurement to infer offline

purchase behavior. Under review, Management Science.

Finkel EJ, PW Eastwick, BR Karney, HT Reis, S Sprecher (2012) Online dating: a critical analysis from the

perspective of psychological science. Psychological Science in the Public Interest, 13:3-66.

Fleder D, Hosanagar K (2009) Blockbuster culture’s next rise or fall: the impact of recommender systems

on sales diversity. Management Science 55(5):697-712.

Ge M, Delgado-Battenfeld C, Jannach D (2010) Beyond accuracy: Evaluating recommender systems by

coverage and serendipity. RecSys ’10. Proceedings of the 2010 ACM Conference on

Recommender Systems, Barcelona, Spain. 257–260.

Page 27: Recommending Products When Consumers Learn Their Preferencesweb.mit.edu/hauser/www/Updates - 4.2016/Dzyabura Hauser RecSys... · Recommending Products When Consumers Learn Their Preferences

25

Greenleaf EA, DR Lehmann (1995) Reasons for substantial delay in consumer decision making. Journal of Consumer Research 22(2):186-199.

Gumbel EJ (1958) Statistics of extremes. New York, NY: Columbia University Press.

Hauser JR, S Dong, M Ding (2014) Self-reflection and articulated consumer preferences. Journal of

Product Innovation Management 31(1):17-32.

Hauser JR, GL Urban, G Liberali, M Braun (2009) Website morphing. Marketing Science 28(2):202-224.

Herlocker J, Konstan J, Terveen L, Riedl J (2004) Evaluating collaborative filtering recommender systems.

ACM Transactions on Information Systems 22(1):5–53.

Hong H, Shum M (2006) Using price distributions to estimate search costs. RAND Journal of Economics

37(2):257–275.

Honka E (2014) Quantifying search and switching costs in the US auto insurance industry. The RAND

Journal of Economics 45(4):847-884.

Ke TT, Shen Z-JM, Villas-Boas, JM (2016) Search for information on multiple products. Forthcoming

Management Science.

Kim JB, Albuquerque P, Bronnenberg BJ (2010) Online demand under limited consumer

search. Marketing Science 29(6):1001-1023.

Lin S, J Zhang, JR Hauser (2015) Learning from experience, simply. Marketing Science 34(1):1-19.

McNee S, Riedl J, Konstan J (2006) Accurate is not always good: how accuracy metrics have hurt

recommender systems. CHI EA 2006, Extended Abstracts on ACM Human Factors in Computing

Systems, Quebec, Canada, 1097-1101.

Rogers A (2013) After you read the listings, your agent reads you. New York Times March 26, 2013: F4.

Seiler S (2013) The impact of search costs on consumer behavior: A dynamic approach. Quantitative

Marketing and Economics 11(2):155-203.

She J, EF MacDonald (2013) Trigger features on prototypes increase preference for sustainability.

Proceedings of the 25th ASME International Conference on Design Theory and Methodology,

Portland, OR. August 04, 2013.

Sheehy K (2013) Study: High School Grads Choosing Wrong College Majors. U.S. News & World Report

November 11. At http://www.usnews.com/education/blogs/high-school-

notes/2013/11/11/study-high-school-grads-choosing-wrong-college-majors.

Tversky A (1972) Elimination by aspects: a theory of choice. Psychological Review 79(4):281-299.

Vargas S, P Castells (2011) Rank and relevance in novelty and diversity metrics for recommender

systems. RecSys ’11. Proceedings of the Fifth ACM Conference on Recommender Systems,


Chicago, IL.

Weitzman ML (1979) Optimal search for the best alternative. Econometrica 47(3):641-654.

Zhang M, Hurley N (2008) Avoiding monotony: improving the diversity of recommendation lists. RecSys

’08. Proceedings of the 2008 ACM Conference on Recommender Systems, Lausanne,

Switzerland, 123-130.

Zhou T, Z Kuscsik, J-G Liu, M Medo, JR Wakeling, Y-C Zhang (2010) Solving the apparent diversity-

accuracy dilemma of recommender systems. PNAS 107(10):4511-4515.

Ziegler C-N, McNee, SM, Konstan JA, Lausen G (2005) Improving recommendation lists through topic

diversification. WWW '05. ACM Proceedings of the 14th International Conference on World

Wide Web, Chiba, Japan, 22-32.


Appendix 1: Proofs to Formal Results

Result 1. When the two-attribute product space is a full factorial, without any recommendations the

consumer will either search the full-information product, $x_4 = (1, 1)$, or purchase one of the products

without searching.

Proof. Result 1 is based on binary attributes. A two-attribute full-factorial space consists of four products: $x_1 = (0, 0)$, $x_2 = (1, 0)$, $x_3 = (0, 1)$, and $x_4 = (1, 1)$. Utility is unique to a positive linear transformation, thus we normalize $u(x_1) = 0$. The consumer gets neither attribute from $x_1$, thus its utility is the same as the outside option. The consumer can either choose one of the products, or search. If the consumer chooses one of the products, then the expected payoffs are $E[u(x_j)]$, where $E[u(x_1)] = 0$, $E[u(x_2)] = \bar{w}_1$, $E[u(x_3)] = \bar{w}_2$, and $E[u(x_4)] = \bar{w}_1 + \bar{w}_2$. Here $\bar{w}_k$ denotes the mean of the consumer’s prior belief, with density $f_k(w_k)$, about the partworth $w_k$ of attribute $k$.

If the consumer searches, the continuation value is given by Equation 3 in the text, repeated here for our specific conditions:

(A1) $V(S, F) = \max\left\{0,\ \max_j \sum_k x_{jk} E[w_k \mid F],\ \max_{j \notin S}\left\{-c + E[V(S \cup \{j\}, F) \mid F]\right\}\right\}$

Suppose the consumer searches $x_2$. After searching $x_2$, the consumer observes attribute weight $w_1$. Either $w_1 \geq 0$ or $w_1 < 0$. Take first the case of $w_1 \geq 0$. If $w_1 \geq 0$, then the consumer knows that $u(x_2) \geq u(x_1)$ and $u(x_4) \geq u(x_3)$. The consumer has three non-dominated options (weakly non-dominated): choose $x_2$, search $x_4$, or choose $x_4$ without search. The additional payoffs, not counting the search cost sunk in searching $x_2$, are (1) choose $x_2$ with payoff $w_1$, (2) choose $x_4$ with expected payoff $E[u(x_4)] = E[w_1 + w_2] = w_1 + \bar{w}_2$, or (3) search $x_4$ with expected payoff $-c + w_1 + \int_0^\infty w_2 f_2(w_2)\,dw_2$. Note that the integral is over the positive range to reflect the option of choosing $x_2$ after search. The continuation value after observing $w_1 \geq 0$ is:

(A2) $V(\{x_2\}, F, w_1 \geq 0) = \max\left\{w_1,\ w_1 + \bar{w}_2,\ -c + w_1 + \int_0^\infty w_2 f_2(w_2)\,dw_2\right\} = w_1 + \max\left\{0,\ \bar{w}_2,\ -c + \int_0^\infty w_2 f_2(w_2)\,dw_2\right\}$

When $w_1 < 0$, the consumer knows that $u(x_1) > u(x_2)$ and $u(x_3) \geq u(x_4)$ and has three non-dominated options of choose $x_1$, search $x_3$, or choose $x_3$ without search. Similar reasoning implies:

(A3) $V(\{x_2\}, F, w_1 < 0) = \max\left\{0,\ \bar{w}_2,\ -c + \int_0^\infty w_2 f_2(w_2)\,dw_2\right\}$

We combine Equations A2 and A3 to obtain the expected value of searching $x_2$:

(A4) $-c + E_{w_1}[V(\{x_2\}, F)] = -c + \int_0^\infty w_1 f_1(w_1)\,dw_1 + \max\left\{0,\ \bar{w}_2,\ -c + \int_0^\infty w_2 f_2(w_2)\,dw_2\right\}$

If the consumer searches $x_3$, the result looks similar, swapping $k = 2$ for $k = 1$:

(A5) $-c + E_{w_2}[V(\{x_3\}, F)] = -c + \int_0^\infty w_2 f_2(w_2)\,dw_2 + \max\left\{0,\ \bar{w}_1,\ -c + \int_0^\infty w_1 f_1(w_1)\,dw_1\right\}$

If the consumer searches $x_4$, both $w_1$ and $w_2$ are revealed. If both attributes have negative weight, the consumer chooses $x_1$, which is equivalent to the outside good because the consumer gets neither attribute. If both attributes have positive weight, the consumer chooses $x_4$. If one attribute weight is positive and the other negative, the consumer chooses either $x_2$ or $x_3$, depending upon which attribute has positive weight. Thus,

(A6) $-c + E_{w_1, w_2}[V(\{x_4\}, F)] = -c + 0 \cdot \int_{-\infty}^0 \int_{-\infty}^0 f_1(w_1) f_2(w_2)\,dw_2\,dw_1 + \int_0^\infty \int_{-\infty}^0 w_1 f_1(w_1) f_2(w_2)\,dw_2\,dw_1 + \int_{-\infty}^0 \int_0^\infty w_2 f_1(w_1) f_2(w_2)\,dw_2\,dw_1 + \int_0^\infty \int_0^\infty (w_1 + w_2) f_1(w_1) f_2(w_2)\,dw_2\,dw_1 = -c + \int_0^\infty w_1 f_1(w_1)\,dw_1 + \int_0^\infty w_2 f_2(w_2)\,dw_2$

For any $f_k(w_k)$, it is always true that $\int_0^\infty w_k f_k(w_k)\,dw_k \geq \int_{-\infty}^\infty w_k f_k(w_k)\,dw_k = \bar{w}_k$. Thus, comparing Equation A6 to Equations A4 and A5, we see that searching $x_4$ weakly dominates searching either $x_2$ or $x_3$, which proves the first part of the result. If $c$ is sufficiently small, the consumer will search $x_4$. If $c$ is sufficiently large, choosing one of the products without searching is optimal. Which product depends upon the signs of $\bar{w}_1$ and $\bar{w}_2$. This completes the proof.
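A minimal numerical check of the dominance comparison, assuming independent normal prior beliefs and illustrative parameter values (the comparison follows the reconstruction of Equations A4 through A6 above; all constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
c = 0.2                                                    # illustrative search cost
mu, sigma = np.array([0.3, -0.1]), np.array([1.0, 0.8])    # illustrative prior beliefs

w = rng.normal(mu, sigma, size=(200_000, 2))
pos_part = np.maximum(w, 0.0).mean(axis=0)                 # Monte Carlo E[max(w_k, 0)]

value_search_x4 = -c + pos_part.sum()                                    # Equation A6
value_search_x2 = -c + pos_part[0] + max(0.0, mu[1], -c + pos_part[1])   # Equation A4
value_search_x3 = -c + pos_part[1] + max(0.0, mu[0], -c + pos_part[0])   # Equation A5

assert value_search_x4 >= max(value_search_x2, value_search_x3)  # searching x4 dominates
```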

Result 2. When the product space is not a full-factorial and the consumer receives no recommendations,

the consumer may choose to search a product:

• that has a low or zero probability of being chosen,

• that does not have the largest expected utility,

• that does not have the largest option value, and

• that does not have the largest variance in utility.


Proof. Like Result 1, Result 2 is based on binary attributes. However, to show the result we need at least three binary attributes and not a full-factorial product space. Thus, we consider a product space of $x_1 = (0, 0, 0)$, $x_2 = (1, 0, 0)$, $x_3 = (0, 1, 0)$, $x_4 = (0, 0, 1)$, $x_5 = (1, 1, 0)$, $x_6 = (1, 0, 1)$, and $x_7 = (0, 1, 1)$. Although the notation is tedious, we can easily generalize Result 1 to demonstrate that the consumer will prefer searching those products that reveal two attributes, $x_5$, $x_6$, or $x_7$, rather than those products that reveal only one attribute.

Because Result 2 is an existence proof, we need only show an example. We actually show a class of examples. To keep the analogs of Equations A2 through A6 relatively simple, we consider distributions for the consumer’s prior beliefs that assure $w_1, w_2, w_3 \geq 0$ and $\min\{w_3\} \geq \max\{w_1, w_2\}$. Result 2 is not limited to such distributions, but such distributions suffice. There are many distributions that satisfy these properties. For example, the conditions hold for uniformly distributed beliefs, $w_k \sim U[a_k, b_k]$, with parameters $a_1, a_2 \geq 0$ and $a_3 \geq \max[b_1, b_2]$. The assumption of non-negative true importances simplifies the tree of conditions in the dynamic program and assures that the consumer weakly prefers $x_5$ to $x_1$, $x_2$, and $x_3$, weakly prefers $x_6$ to $x_1$, $x_2$, and $x_4$, and weakly prefers $x_7$ to $x_1$, $x_3$, and $x_4$. We need only consider a product space of $x_5$, $x_6$, and $x_7$. These are really the most interesting products for our purposes. The outside option is $u^* = u(x_1) = 0$.

We first consider searching on $x_5$. With $\min\{w_3\} \geq \max\{w_1, w_2\}$, the consumer would never choose $x_5$, but may consider searching $x_5$ to learn $w_1$ and $w_2$ efficiently. After searching $x_5$, the consumer will either choose the outside option, one of $x_5$, $x_6$, and $x_7$, or search $x_6$ or $x_7$, and, perhaps, if the consumer searches, the consumer will continue thereafter. Using Equation 3 in the text we obtain:

(A7) $V(\{5\}, F) = \max\left\{0,\ \max\{w_1 + w_2,\ w_1 + \bar{w}_3,\ w_2 + \bar{w}_3\},\ \max_{j \in \{6, 7\}}\left\{-c + E[V(\{5, j\}, F) \mid w_1, w_2]\right\}\right\}$

With $\min\{w_3\} \geq \max\{w_1, w_2\}$, there is no value to searching $x_6$ or $x_7$ to reveal $w_3$, because knowing $w_3$ does not change the consumer’s decision. Using $w_1, w_2, w_3 \geq 0$, we eliminate the outside option as a choice. Using $\min\{w_3\} \geq \max\{w_1, w_2\}$, we eliminate $x_5$ as a choice. Hence, we obtain the expected value of searching $x_5$ as:

(A8) $-c + E_{w_1, w_2}[V(\{5\}, F)] = -c + E_{w_1, w_2}\big[\max\big\{\max\{w_1 + \bar{w}_3,\ w_2 + \bar{w}_3\},\ -c + \max\{w_1 + \bar{w}_3,\ w_2 + \bar{w}_3\}\big\}\big] = -c + E_{w_1, w_2}[\max\{w_1, w_2\}] + \bar{w}_3$

The last step uses the consumer’s prior beliefs to compute expected values for the partworths that are revealed by search.

We now consider searching on $x_6$. Using similar reasoning to Equation A7 we obtain:

(A9) $V(\{6\}, F) = \max\left\{0,\ \max\{w_1 + \bar{w}_2,\ w_1 + w_3,\ \bar{w}_2 + w_3\},\ \max_{j \in \{5, 7\}}\left\{-c + E[V(\{6, j\}, F) \mid w_1, w_3]\right\}\right\}$

We first examine the value of further search. Searching either $x_5$ or $x_7$ reveals $w_2$, so the consumer is indifferent between searching $x_5$ or $x_7$. We replace the value of further search by the value of searching one of the two products. As in the case of searching $x_5$, further search beyond $x_6$ and $x_5$ or $x_6$ and $x_7$ has no value. Thus, we have, if the consumer were to choose to search:

(A10) $-c + E[V(\{5, 6\}, F)] = -c + E[V(\{6, 7\}, F)] = -c + E_{w_2}[\max\{w_1 + w_3,\ w_2 + w_3\}] = -c + E_{w_2}[\max\{w_1, w_2\}] + w_3$

If, after searching only $x_6$, the consumer were to choose without search, then $w_1$ and $w_3$ are revealed, but the consumer expects $\bar{w}_2$ if $x_5$ or $x_7$ are chosen. Recall that the outside option and $x_5$ are dominated if consumer beliefs follow the example class of distributions. Thus, the value of choosing without search, that is, the second internal max in Equation A9, is given by the following.

(A11) $\max\{w_1 + \bar{w}_2,\ w_1 + w_3,\ \bar{w}_2 + w_3\} = \max\{w_1, \bar{w}_2\} + w_3$

Putting it all together and factoring out $w_3$, we obtain:

(A12) $V(\{6\}, F) = \max\left\{\max\{w_1, \bar{w}_2\},\ -c + E_{w_2}[\max\{w_1, w_2\}]\right\} + w_3$

And the value of searching $x_6$ is:

(A13) $-c + E_{w_1, w_3}[V(\{6\}, F)] = -c + E_{w_1}\left[\max\left\{\max\{w_1, \bar{w}_2\},\ -c + E_{w_2}[\max\{w_1, w_2\}]\right\}\right] + \bar{w}_3 = -c + E_{w_1}\left[\max\left\{w_1,\ \bar{w}_2,\ -c + E_{w_2}[\max\{w_1, w_2\}]\right\}\right] + \bar{w}_3$

Suppose $w_1 \geq \bar{w}_2$. Then, for all $w_1$, $\max\{w_1,\ \bar{w}_2,\ -c + E_{w_2}[\max\{w_1, w_2\}]\} \leq E_{w_2}[\max\{w_1, w_2\}]$. This is true because, for the last internal max, $\max\{w_1, w_2\} \geq w_1$. Hence $E_{w_1}\big[\max\{w_1,\ \bar{w}_2,\ -c + E_{w_2}[\max\{w_1, w_2\}]\} \mid w_1 \geq \bar{w}_2\big] \leq E_{w_1, w_2}[\max\{w_1, w_2\} \mid w_1 \geq \bar{w}_2]$. Suppose $w_1 < \bar{w}_2$. Then, for all $w_1$, $\max\{w_1,\ \bar{w}_2,\ -c + E_{w_2}[\max\{w_1, w_2\}]\} \leq E_{w_2}[\max\{w_1, w_2\}]$. This is true because $\bar{w}_2 \leq E_{w_2}[\max\{w_1, w_2\}]$. Thus, $E_{w_1}\big[\max\{w_1,\ \bar{w}_2,\ -c + E_{w_2}[\max\{w_1, w_2\}]\} \mid w_1 < \bar{w}_2\big] \leq E_{w_1, w_2}[\max\{w_1, w_2\} \mid w_1 < \bar{w}_2]$. Putting these expressions together establishes that $E_{w_1}\big[\max\{w_1,\ \bar{w}_2,\ -c + E_{w_2}[\max\{w_1, w_2\}]\}\big] \leq E_{w_1, w_2}[\max\{w_1, w_2\}]$. Comparing Equation A13 to Equation A8, we have the result that $-c + E_{w_1, w_3}[V(\{6\}, F)] \leq -c + E_{w_1, w_2}[V(\{5\}, F)]$.

We have demonstrated that the consumer prefers to search $x_5$ rather than $x_6$. The consumer’s preference for searching $x_5$ rather than $x_7$ follows by symmetry. We have also demonstrated that, after the first product is searched, the consumer does not search further. Finally, we can easily show that the consumer will choose to search $x_5$ whenever $-c + E_{w_1, w_2}[\max\{w_1, w_2\}] \geq \max\{\bar{w}_1, \bar{w}_2\}$. This must hold for some $c$. By design, all possible realized values of $w_3$ are greater than any possible realized value of either $w_1$ or $w_2$, hence both the expected utility and the option value of $u(x_6)$ and $u(x_7)$ dominate the expected utility and option value of $u(x_5)$. By option value we mean that $E[u(x_5) \mid u(x_5) \geq u^*]$ is less than both $E[u(x_6) \mid u(x_6) \geq u^*]$ and $E[u(x_7) \mid u(x_7) \geq u^*]$. We have nowhere restricted the variances of prior beliefs. The result holds even if the variance of $w_3$ is greater than the variance of $w_1$ and the variance of $w_2$. Finally, because all possible realized values of $w_3$ are greater than any possible realized value of either $w_1$ or $w_2$, the consumer will never choose $x_5$ after searching $x_5$. This completes the proof.

Result 3. There exist cases where it is optimal for a RecSys to recommend an undervalued product that

the consumer would not otherwise search or choose. The recommended product:

• may have a low or zero probability of being chosen,

• may not have the largest expected utility,

• may not have the largest option value, and

• may not have the largest variance in utility.

Proof. To be consistent with Results 1 and 2, Result 3 is based on binary attributes. We return to

the product space of Result 1, but delete the full-information product: $x_1 = (0, 0)$, $x_2 = (1, 0)$, $x_3 = (0, 1)$. Because Result 3 is an existence proof, we need only show an example. We begin by deriving general

conditions and then show that there exists a class of examples that satisfy the general conditions.

We first examine the consumer’s optimal search path in the absence of a recommendation. The

consumer makes decisions based on the consumer’s prior beliefs and revealed values. We are

particularly interested in the case where one of the attributes is undervalued by the consumer, but not

by the RecSys: ( ) < , but ( ) > . The other attribute is not undervalued: ( ) = ( ) and ( ) = ( ) > . We are interested in the case where

the consumer searches (at least) , so we further assume that ( ) − > . For simplicity

we set the utility of the outside option to the utility of choosing = (0, 0) such that ∗ = ( ) = 0.

We evaluate whether the consumer searches . The value of searching is:


(A14) ({ }, ) = max 0, , − + ( ){ , }

Recognizing that ( ){ , } ≤ ( ) < , the consumer will not search after ,

hence ({ }, ) = max{0, }. Furthermore, because ( ) − > , [ ({ }, )] − >, hence the consumer prefers to search rather than simply choose .

We now evaluate whether the consumer searches . The value of searching is:

(A15) ({ }, ) = max 0, , − + ( ){ , } ≤ max{0, } = ( ) <

which implies the consumer will not choose to search (without a recommendation). Putting these together, if no recommendations are made, the consumer would search and then either purchase

if ≥ 0 or accept the outside option, ∗ = 0, if < 0. The expected payoff based on the

consumer’s beliefs is given in Equation A16. Because ( ) = ( ), this is also the RecSys’s beliefs

about what the consumer will achieve without any recommendation.

(A16) [ (∅, )| ] = − + ( ) = − + ( ). We now analyze the cases where the RecSys recommends a product, or , and the

consumer follows that recommendation. We assume the consumer acts optimally after the

recommendation. (We will later consider what would happen if the RecSys could recommend both

products.) We seek to establish a case where the RecSys recommends even though ( ) < ( ). We first consider the expected payoff if the RecSys recommends only. If is the

recommendation, then the consumer does not deviate from the optimal path that the consumer would

have chosen without a recommendation. Define ({ } , ) as the continuation value given that the

consumer searched due to a recommendation. Then:

(A17) [ ({ } , )| ] = − + ( ). We now consider the expected payoff if the RecSys recommends . After searching the

recommended product, the consumer may either stop and purchase , take the outside option ( ∗ =0), or search the remaining product, . (Our condition that ( ) − > assures that the

consumer will not purchase without searching. It’s easy to show that Equation A18 does not change

that.) From the consumer’s perspective:


(A18) ({ } , | ) = max 0, , − + ( ){ , }

The consumer will search if { , } > + . We are interested in the RecSys’s

expectation of this payoff, or ( ({ } , )| ). This value depends on whether or not the

consumer would choose to search after searching . Define to be the maximum observed value

of for which it would be optimal for the consumer to search after . This value is defined

implicitly by = + . If > , the consumer purchases and does not search ,

because > − + ( ). To compute the expected value we consider three regions for the outcome of the search.

Each outcome corresponds to a different action by the consumer after searching . These regions are: ≤ 0, 0 < ≤ ,and > . Note that is defined by , but we will compute expectations

based on the RecSys’s beliefs, .

Case 1: ≤ 0. In this region, the consumer searches after because ( ) > . After searching , there are no products left to search; the consumer purchases if > 0 and takes

the outside good if ≤ 0. In this region, the RecSys’s beliefs imply:

(A19) − + [ ({ } , )| ≤ 0, ] = −2 + ( ) Case 2: 0 < ≤ . In this region, by the definition of , the consumer searches after ,

after which there are no products left to search. The net payoff to the consumer is −2 +max{0, , }. Case 3: < . In this region, by the definition of , the consumer chooses and does not

search . But because > − + ( ) ≥ − + ( ) and > 0, we know that:

(A20) ({ } , | ) = max 0, , − + ( ){ , } ≥ − + ( ){ , }

Thus, the net payoff in Case 3 is at least as large as that which the consumer would obtain by searching

after , hence the net payoff is at least as large as −2 +max{0, , }. By combining Cases 2 and

3, which occur with probability, Pr( > 0) = ( ), we obtain a lower bound on the

RecSys’s beliefs for Cases 1, 2, and 3 as −2 +max{0, , }:


(A21)

− + [ ({ } , )| ]≥ −2 + ( ) ( )+ ( ) ( ) + ( ) ( )+ ( ) ( )

Note that we would also obtain the right-hand side of Equation A21 if the RecSys were to

recommend both and . Thus, we have shown that recommending alone weakly dominates

recommending both products. We now rearrange the limits of integration to obtain.

(A22)

− + [ ({ } , )| ]≥ −2 + ( ) ( )+ ( ) ( )+ ( − ) ( ) ( )≥ [ (∅, )| ] − + ( ) ( )+ ( − ) ( ) ( )

Equation A22 is a general condition for when the RecSys will recommend to the consumer.

All that remains to complete the proof is to establish at least one example where the last two integrals

in Equation A22 exceed the search cost. We can do this for many distributions. We do it for at least one.

We consider uniform distributions, all of which have the zero mean: ( ) = ( ) =[− , ], ( ) = [− , ], and ( ) = [− , ], where < 1 assures the option

value of is less than the option value of and the variance of ( ) is less than the variance of ( ). We assure ( ) < with < 4 . We assure ( ) = ( ) >

and ( ) > with > 4 . The value of the next-to-last integral is /8 and the value

of the last integral is /24. Thus, for any positive < 1, the result holds as long as /24


+ /8 > . Because < 1, we have that < . Thus, we require only that > 6 . This

condition, and the condition > 4 , are satisfied for many values of and . For example, if = 1/2, then it is sufficient that > 24 . { ℎ } = + < − = { ℎ } for

all < 1. Finally, replacing the support of with ( ) = [− − , − ], condition A22

becomes − + ( ) + ( ) > 0. We choose > 0 so that [ ( )] < [ ( )]. With = 1/2 , = 3, and = 0.1, setting = 0.05 suffices. This completes the proof.

Appendix 2: Details of Synthetic Data Experiments

Figures 1 and 2. Figures 1 and 2 are based on a product space with three five-level attributes. All 5 × 5 × 5 = 125 products are available. The consumer’s prior beliefs are normally distributed, independently over aspects, with aspect-specific prior means and variances. The specific values are: prior means (3.82, 1.43, 2.83, 1.60, 1.60, 1.88, 2.82, 0.24, 0.49, 3.71, 3.14, 1.05, 3.07, 1.67, 1.58), prior variances (0.09, 0.27, 0.14, 0.15, 0.30, 0.48, 0.26, 0.18, 0.37, 0.27, 0.27, 0.37, 0.18, 0.23, 0.13), and true partworths (3.40, 1.56, 2.38, 2.44, 2.39, 2.22, 1.11, 1.67, 2.68, 0.42, 1.75, 2.78, 2.48, 1.78, 3.36). One of the 125 products is recommended to the consumer, after which the consumer searches optimally. The vertical axis in both figures is the utility of the product, net of search costs, chosen by the consumer after optimal search. The horizontal axis in Figure 1 is the true utility of the recommended product. The horizontal axis in Figure 2 is the RecSys’s beliefs about the utility of the recommended product. RecSys beliefs are a convex combination of the consumer’s prior beliefs and the consumer’s true beliefs, with the combination parameter set to 0.5.
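A minimal sketch of this construction, assuming the convex combination is applied to the belief means and the weight name is illustrative:

```python
def recsys_beliefs(prior_mean, true_w, weight=0.5):
    """Convex combination of the consumer's prior means and true partworths."""
    return weight * true_w + (1.0 - weight) * prior_mean
```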

Figure 3. Consumers in Recommendation-System Synthetic-Data Experiments. Consumer beliefs are normally distributed and independent over aspects. For each consumer we draw the true aspect importances from a mixture of two normal distributions, one with mean 0.0 and variance 0.5; the other with mean 4.0 and variance 0.2. The mixing parameter, 0.7, favors the second normal distribution to illustrate a case where most of the aspects are important. We draw the variances of the consumer’s prior beliefs i.i.d. from an exponential distribution with parameter 10. This gives an expected prior variance of 0.1 (a standard deviation of roughly 0.32). In Figure 3, we vary naïveté with a parameter between 0 and 1. We set the mean of the consumer’s prior for each aspect to a “naïve” value with probability equal to the naïveté parameter, and equal to the true aspect importance otherwise. The naïve value is redrawn randomly from the same generating distribution—a mixture of two normal distributions. For Figure 3 we hold heterogeneity constant at 0.4; the heterogeneity parameter is defined next.

Figure 4. RecSys Knowledge (Heterogeneity within Consumer Segment). We vary RecSys

knowledge by varying heterogeneity within a consumer segment. We assume that a RecSys can learn

the mean value of consumers’ preferences within a segment. When consumers are homogeneous within


a segment, the segment-mean preferences are close to the target consumer’s preferences. When consumers are heterogeneous within a segment, the segment mean may not be a very good guess at the target consumer’s preferences.

We have to be careful when we vary heterogeneity in the synthetic-data experiments. We

need to avoid spurious increases in performance of the RecSys with the variance of consumer beliefs.

The value of the maximum utility product is based on the upper tail of the utility distribution. Thus, we

hold the variance of aspect importances constant, but manipulate heterogeneity as the variation across

consumers in the rank order of aspect importances. We first draw the means and variances of prior beliefs and true beliefs as above, but assign these importance weights to aspects as follows. We draw a ten-valued vector from a Dirichlet distribution with one parameter per aspect, where each parameter corresponds to one of the ten aspects. We then use the rank of the Dirichlet draws to determine the rank of the aspect importances. In Figure 4, we vary RecSys knowledge (heterogeneity within a segment) with a parameter between 0 and 1, and set the Dirichlet parameter for the first aspect to 15 times the knowledge parameter plus 1. The Dirichlet parameters decrease linearly to 1 for the 10th aspect. When the knowledge parameter is 0, the Dirichlet distribution is a uniform distribution and the rank orders of preference values are random within a segment. When the knowledge parameter is 1, the Dirichlet distribution is highly skewed and rank orders of preference values are close to homogeneous within a segment. In Figure 4 we hold consumer naïveté constant at 0.6.
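A minimal sketch of the rank-order manipulation, assuming the Dirichlet parameters fall linearly from 15 times the knowledge parameter plus 1 down to 1 across the ten aspects, and that each consumer’s own importance weights are reassigned to aspects according to the rank of a single Dirichlet draw; names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def assign_by_rank(importances, knowledge, rng=rng):
    """Reorder one consumer's aspect importances. As knowledge -> 1, rank orders
    become nearly identical across consumers; as knowledge -> 0, they are random."""
    n = len(importances)
    alpha = np.linspace(15.0 * knowledge + 1.0, 1.0, n)   # Dirichlet parameters
    draw = rng.dirichlet(alpha)
    ranks = np.argsort(np.argsort(-draw))    # rank 0 = aspect with the largest draw
    ordered = np.sort(importances)[::-1]     # importance weights, largest first
    return ordered[ranks]                    # largest weight -> highest-ranked aspect
```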

Algorithmic efficiency. We simulate optimal search after recommendations for each of 121 × 10,000 consumers, each of whom received a recommendation from each of three RecSys

modifications. Fortunately, we can make calculations efficient while not compromising the complexity

inherent in Results 1-3. We gain computational efficiency with product spaces that contain all 125

products. We call such product spaces “fully-crossed” to distinguish them from the much larger full-

factorial aspect spaces. In a fully-crossed product space, the maximum-utility product is the product

with the maximum-utility attribute level from each attribute. It is easy to show that the option value of

searching a new attribute level is indexable. The problem does not fully revert to Weitzman’s solution

because the consumer will continue to search as long as the sum of the option values is above the

search cost, but the computational burden is reduced substantially. We gain further computational

advantage by first eliminating consumers for whom the comparison of consumer beliefs with RecSys beliefs implies almost no

differences.
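A minimal sketch of the stopping rule implied by this indexing, assuming normally distributed beliefs for unsearched attribute levels and using the best level revealed so far within the same attribute as the reference point (the reference point and all names are assumptions):

```python
import numpy as np
from scipy.stats import norm

def option_value(mu, sd, best_known):
    """E[max(w - best_known, 0)] for w ~ Normal(mu, sd): expected gain from
    revealing one unsearched attribute level."""
    z = (mu - best_known) / sd
    return sd * (norm.pdf(z) + z * norm.cdf(z))

def keep_searching(unsearched, best_known_by_attr, search_cost):
    """unsearched: iterable of (attribute, mu, sd) for unrevealed attribute levels.
    Continue searching while the summed option values exceed the search cost."""
    total = sum(option_value(mu, sd, best_known_by_attr[a]) for a, mu, sd in unsearched)
    return total > search_cost
```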


Appendix 3: Synthetic-Data Results for Combinations of Naïveté and Heterogeneity

Table A1. Improvement in Net Utility of Chosen Product due to Undervalued-Product Modification

                  Consumer naïveté
                  0      .1     .2     .3     .4     .5     .6     .7     .8     .9     1
Knowledge = 0     0.073  0.069  0.059  0.052  0.049  0.064  0.082  0.089  0.095  0.110  0.126
Knowledge = .1    0.074  0.071  0.063  0.060  0.065  0.080  0.093  0.099  0.106  0.120  0.148
Knowledge = .2    0.072  0.070  0.066  0.068  0.082  0.097  0.108  0.111  0.121  0.130  0.155
Knowledge = .3    0.073  0.070  0.071  0.080  0.093  0.106  0.117  0.121  0.117  0.124  0.140
Knowledge = .4    0.075  0.072  0.077  0.084  0.100  0.107  0.121  0.123  0.117  0.134  0.141
Knowledge = .5    0.075  0.074  0.079  0.089  0.101  0.111  0.122  0.129  0.120  0.137  0.154
Knowledge = .6    0.074  0.074  0.082  0.093  0.104  0.113  0.124  0.128  0.125  0.130  0.148
Knowledge = .7    0.075  0.077  0.082  0.095  0.105  0.116  0.126  0.129  0.127  0.137  0.152
Knowledge = .8    0.075  0.078  0.084  0.096  0.113  0.116  0.126  0.127  0.123  0.132  0.151
Knowledge = .9    0.076  0.080  0.086  0.098  0.108  0.120  0.125  0.132  0.127  0.134  0.158
Knowledge = 1     0.076  0.078  0.087  0.101  0.111  0.114  0.128  0.128  0.134  0.132  0.158

Table A2. Improvement in Net Utility of Chosen Product due to Aspect-Diversity Modification

                  Consumer naïveté
                  0      .1     .2     .3     .4     .5     .6     .7     .8     .9     1
Knowledge = 0     0.100  0.100  0.101  0.107  0.113  0.109  0.100  0.099  0.100  0.100  0.101
Knowledge = .1    0.100  0.099  0.104  0.110  0.113  0.112  0.110  0.102  0.101  0.101  0.101
Knowledge = .2    0.100  0.100  0.105  0.111  0.118  0.117  0.111  0.103  0.102  0.103  0.102
Knowledge = .3    0.100  0.100  0.105  0.112  0.119  0.118  0.113  0.106  0.102  0.101  0.102
Knowledge = .4    0.100  0.100  0.106  0.114  0.120  0.119  0.114  0.110  0.102  0.102  0.101
Knowledge = .5    0.100  0.100  0.106  0.110  0.121  0.121  0.115  0.109  0.105  0.102  0.101
Knowledge = .6    0.100  0.100  0.106  0.114  0.123  0.121  0.118  0.110  0.104  0.102  0.102
Knowledge = .7    0.100  0.100  0.107  0.114  0.120  0.122  0.119  0.109  0.103  0.103  0.104
Knowledge = .8    0.100  0.100  0.107  0.114  0.122  0.121  0.119  0.109  0.104  0.102  0.104
Knowledge = .9    0.100  0.100  0.107  0.115  0.120  0.123  0.118  0.111  0.104  0.102  0.104
Knowledge = 1     0.100  0.100  0.107  0.113  0.122  0.121  0.118  0.109  0.105  0.102  0.103

Table A3. Improvement in Net Utility of Chosen Product due to Max-Expected-Utility Benchmark

                  Consumer naïveté
                  0      .1     .2     .3     .4     .5     .6     .7     .8     .9     1
Knowledge = 0     0.100  0.100  0.100  0.100  0.100  0.100  0.100  0.100  0.100  0.100  0.100
Knowledge = .1    0.100  0.100  0.101  0.102  0.102  0.102  0.103  0.100  0.100  0.100  0.100
Knowledge = .2    0.100  0.100  0.101  0.103  0.104  0.103  0.103  0.101  0.101  0.101  0.100
Knowledge = .3    0.100  0.100  0.102  0.103  0.104  0.103  0.104  0.101  0.101  0.100  0.100
Knowledge = .4    0.100  0.101  0.102  0.104  0.104  0.104  0.104  0.102  0.101  0.101  0.101
Knowledge = .5    0.100  0.101  0.102  0.103  0.105  0.104  0.104  0.103  0.101  0.100  0.100
Knowledge = .6    0.100  0.100  0.102  0.103  0.105  0.105  0.106  0.102  0.101  0.100  0.100
Knowledge = .7    0.100  0.100  0.103  0.103  0.105  0.104  0.105  0.102  0.100  0.101  0.102
Knowledge = .8    0.100  0.100  0.103  0.103  0.104  0.104  0.105  0.102  0.101  0.101  0.102
Knowledge = .9    0.100  0.100  0.102  0.103  0.103  0.104  0.104  0.102  0.101  0.102  0.102
Knowledge = 1     0.100  0.100  0.102  0.102  0.105  0.104  0.105  0.102  0.101  0.101  0.101

Notes to Tables A1, A2, and A3

1. A bold font indicates the RecSys modification in the table performs best or not significantly different than the best RecSys modification in that synthetic world.

2. Rows indicate RecSys knowledge, with 0 the least knowledgeable; columns indicate consumer naïveté, with 0 the least naïve.

3. Each synthetic world is based on 10,000 consumers who behave as described in Appendix 2.