Contact Info
Amin Sayedi
Foster School of Business
Paccar 483
University of Washington
Seattle, WA 98195
Email: aminsa@uw.edu

Papers Under Review and Working Papers


Pricing in a Duopoly with Observational Learning [pdf]

We study pricing in a duopoly with observational learning. Prior literature shows that when the number of customers in a monopoly is sufficiently large and the monopolist is sufficiently patient, an informational cascade always happens; furthermore, the probability that the cascade is wrong, i.e., that virtually all customers receive negative utility ex post, is always positive. We find similar results in a duopoly with static pricing. Interestingly, and in contrast to the previous literature, we show that in a duopoly with dynamic pricing, there are equilibria in which no cascade happens. More importantly, we show that a wrong cascade cannot happen in equilibrium. Our results could explain why, in practice, cascades, and specifically wrong cascades, are not as common as prior literature has predicted.
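
To make the cascade mechanism concrete, here is a minimal Python simulation of the classic single-firm observational-learning setting the abstract references (an illustrative sketch, not the paper's duopoly model): customers with noisy private signals observe predecessors' choices, and once the public history outweighs any single signal, everyone herds; with positive probability the herd forms on the wrong action.

import random

def cascade_outcome(p=0.7, n=100, rng=None):
    # True quality is +1 (good) or -1 (bad), equally likely; each
    # customer's private signal matches it with probability p > 1/2.
    rng = rng or random.Random()
    quality = rng.choice([+1, -1])
    d = 0  # net positive-minus-negative signals inferred from history
    for _ in range(n):
        if d >= 2:
            return "up-cascade", quality    # all later customers buy
        if d <= -2:
            return "down-cascade", quality  # all later customers reject
        signal = quality if rng.random() < p else -quality
        d += signal  # outside a cascade, actions reveal signals
    return "no-cascade", quality

rng = random.Random(1)
outcomes = [cascade_outcome(rng=rng) for _ in range(10000)]
wrong = sum(1 for kind, q in outcomes
            if (kind, q) in [("up-cascade", -1), ("down-cascade", +1)])
print("wrong-cascade frequency:", wrong / len(outcomes))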


Publications


Open and Private Exchanges in Display Advertising, joint with J. Choi [pdf], Marketing Science (forthcoming)

We study the impact of the emergence of private exchanges (PX) on the display advertising market. Unlike open exchanges (OX), the original exchange type open to all publishers and advertisers, private exchanges are available only to a smaller set of pre-screened advertisers and publishers through an invite-only process. OX exposes advertisers to ad fraud and brand safety risks, whereas PX ensures that advertisers purchase high-quality impressions from reputable publishers. While the assurance of higher quality increases advertisers’ valuation for PX impressions, we find that selling through both OX and PX can hurt publishers by creating an information asymmetry among advertisers. Intuitively, the presence of PX informationally advantages the “connected” advertisers, who have access to PX, while simultaneously disadvantaging the “unconnected” advertisers, who only have access to OX. Therefore, compared to the OX-only benchmark where advertisers are equally uninformed about impression quality, the unconnected advertisers anticipate a lower chance of winning high-quality impressions and thus lower their valuations. This devaluation effect softens bidding competition and reduces the publisher’s expected profit. In equilibrium, the publisher may sell through OX, PX, or both, depending on the baseline fraud intensity and the advertisers’ average valuations. Finally, our model sheds light on OX’s incentive to fight fraud. In the absence of PX, OX has low incentive to combat fraud because it earns commission from fraudulent transactions. However, the introduction of PX may create competitive pressure such that OX screens fake impressions; i.e., PX may induce the market to self-regulate.


First-price Auctions in Online Display Advertising, joint with S. Despotakis and R. Ravi [pdf], Journal of Marketing Research (2021)

We explain the rapid and dramatic move from the second-price to the first-price auction format in the display advertising market as a simple consequence of the industry's shift from waterfalling, in which publishers solicit bids from exchanges in a pre-ordered cascade, to header bidding, in which the bid request is broadcast to all exchanges at once. First, we argue that the publishers' move from waterfalling to header bidding was revenue improving in the old regime, when exchanges employed second-price auctions. Given the publisher move to header bidding, we show that exchanges move from second-price to first-price auctions to increase their expected clearing prices. Interestingly, when all exchanges move to first-price auctions, each exchange faces stronger competition from other exchanges and some exchanges may end up with lower revenue than when all exchanges use second-price auctions; yet, all exchanges move to first-price auctions in the unique equilibrium of the game. We show that the new regime commoditizes the exchanges’ offerings and drives their buyer-side fees to zero in equilibrium. Furthermore, it allows the publishers to achieve the revenue of the optimal mechanism despite not having direct access to the advertisers.
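
A toy Python sketch of the two selling mechanisms (hypothetical bids, not the paper's model) shows why header bidding raised publisher revenue under second-price auctions: waterfalling commits to the first exchange that clears its floor, while header bidding takes the best clearing price across all exchanges.

def second_price_clearing(bids, floor=0.0):
    # Price a second-price exchange forwards to the publisher: the larger
    # of the floor and the second-highest bid (None if nothing clears).
    top = sorted(bids, reverse=True)
    if not top or top[0] < floor:
        return None
    second = top[1] if len(top) > 1 else 0.0
    return max(floor, second)

def waterfall(exchanges, floors):
    # Exchanges are polled in a fixed order; the impression sells at the
    # first exchange whose clearing price meets its floor.
    for bids, floor in zip(exchanges, floors):
        price = second_price_clearing(bids, floor)
        if price is not None:
            return price
    return 0.0

def header_bidding(exchanges):
    # All exchanges respond at once; the publisher takes the best price.
    prices = [second_price_clearing(bids) for bids in exchanges]
    return max((p for p in prices if p is not None), default=0.0)

exchanges = [[10.0, 2.0], [8.0, 7.0]]           # bids on two exchanges
print(waterfall(exchanges, floors=[1.0, 1.0]))  # 2.0: exchange 1 clears first
print(header_bidding(exchanges))                # 7.0: exchange 2's price wins

Once exchanges compete on the forwarded price, a first-price exchange can forward its winning bid (up to 10.0 here) rather than the second-highest bid, which is the competitive pressure behind the format switch described above.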


The Perils of Personalized Pricing with Network Effects, joint with B. Hajihashemi and J. Shulman [pdf], Marketing Science (2021)

This research examines how network effects, whereby a product or service becomes more valuable to each consumer as more consumers adopt it, shape the impact of price personalization on firms and consumers. We study the impact of network effects on price personalization in terms of price, demand, profit, and consumer surplus. One belief commonly held by companies is that price personalization increases their profit by capturing heterogeneity in consumers’ willingness to pay and enticing demand from heterogeneous consumer segments. Using a game-theoretic model, we find conditions under which price personalization not only shrinks demand, but also decreases the profit of the firm and reduces the surplus of all consumers. Specifically, our model shows that in markets with network effects, unless the impact of expected consumer heterogeneity exceeds the impact of network effects, personalized pricing reduces demand, lowers profit, reduces the surplus of all consumers, and thus creates a deficiency in the market.
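
For intuition on the network-effects channel, here is a minimal sketch (assuming uniform valuations; this is not the paper's model) of how demand becomes a fixed point when each consumer's willingness to pay rises with the number of buyers:

def demand_fixed_point(price, g, iters=200):
    # A consumer with valuation v ~ Uniform[0,1] buys iff v + g*Q >= price,
    # where Q is total demand and g the network-effect strength, so demand
    # solves Q = 1 - (price - g*Q); simple iteration finds the fixed point.
    q = 0.0
    for _ in range(iters):
        q = min(1.0, max(0.0, 1.0 - (price - g * q)))
    return q

for g in (0.0, 0.3, 0.6):
    print(g, round(demand_fixed_point(price=0.8, g=g), 3))
# 0.0 -> 0.2, 0.3 -> 0.286, 0.6 -> 0.5: stronger network effects amplify
# the demand consequences of any pricing decision, which is why the
# demand-shrinking side of personalization can end up dominating.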


Learning in Online Advertising, joint with W. Choi [pdf], Marketing Science (2019)

Prior literature on pay-per-click advertising assumes that publishers know advertisers' click-through rates (CTR). This information, however, is not available when a new advertiser first joins a publisher. The new advertiser's CTR can be learned only if its ad is shown to enough consumers, i.e., the advertiser wins enough auctions. Since publishers use CTRs to calculate payments and allocations, the lack of information about a new advertiser can affect the advertisers' bids. Using a game theory model, we analyze advertisers' strategies, their payoffs, and the publisher's revenue in a learning environment. Our results indicate that a new advertiser always bids higher (sometimes above valuation) in the beginning. The incumbent advertiser's strategy depends on its valuation and CTR. A strong incumbent increases its bid to deter the publisher from learning the new advertiser's CTR, whereas a weak incumbent decreases its bid to facilitate learning. Interestingly, the publisher may benefit from not knowing the new advertiser's CTR because its ignorance could induce advertisers to bid more aggressively. Nonetheless, the publisher's revenue sometimes decreases because of this lack of information. The publisher can mitigate this loss by lowering the reserve price of, offering advertising credit to, or boosting the bids of new advertisers.
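
The role CTRs play in payments and allocations can be seen in a small GSP-style sketch (hypothetical numbers; the ranking and pricing rule below is the standard textbook one, not the paper's full learning model): a new advertiser with a pessimistic CTR estimate loses the auction, so its true CTR is never learned unless it raises its bid.

def run_auction(bidders):
    # Rank by expected revenue per impression (bid * estimated CTR); the
    # winner's price per click is the runner-up's score over its own CTR.
    ranked = sorted(bidders, key=lambda b: b["bid"] * b["ctr_est"], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, runner_up["bid"] * runner_up["ctr_est"] / winner["ctr_est"]

def observe_impression(b, clicked):
    # Beta-Bernoulli update: only possible for the advertiser whose ad
    # was actually shown, which is exactly the learning bottleneck.
    b["shows"] += 1
    b["clicks"] += int(clicked)
    b["ctr_est"] = (1 + b["clicks"]) / (2 + b["shows"])

incumbent = {"bid": 2.0, "ctr_est": 0.10, "clicks": 0, "shows": 0}
entrant   = {"bid": 2.5, "ctr_est": 0.05, "clicks": 0, "shows": 0}  # prior only
winner, ppc = run_auction([incumbent, entrant])
print(winner is incumbent, round(ppc, 2))  # True 1.25: at these bids the
# entrant is never shown, so its CTR estimate never improves.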


Exclusive Placement in Online Advertising, joint with M. Baghaie and K. Jerath [pdf], Marketing Science (2019)

A recent development in online advertising has been the ability of advertisers to have their ads displayed exclusively on (a part of) a web page. We study this phenomenon in the context of both sponsored search advertising and display advertising. Ads are sold through auctions, and when exclusivity is allowed the seller accepts two bids from each advertiser: one for the standard format in which multiple advertisers are displayed together, and one for being shown exclusively (such auctions are therefore called two-dimensional, or 2D, auctions). We identify two opposing forces at play in an auction that provides the exclusive placement option. Allowing more flexible expression of preferences through bidding for exclusivity increases competition among advertisers, leading to higher bids and higher seller revenue (the between-advertiser competition effect); however, it also gives advertisers an incentive to shade their bids for their non-preferred outcomes, which decreases the seller’s revenue (the within-advertiser competition effect). Depending on which effect is stronger, the revenue may increase or decrease. We find that the GSP2D auction, which is an extension of the widely used GSP auction and on which currently used auctions for exclusive placement are based, may lead to higher or lower revenue under different parametric conditions; paradoxically, the revenue from allowing exclusive placement decreases as bidders have higher valuations for exclusive placement. We verify several key implications from our analysis of GSP2D using data from Bing for over 100,000 auctions. As a possible solution (applicable to both sponsored search and display advertising), we show that using the VCG2D auction, the adaptation of the VCG auction to the 2D setting, guarantees weakly higher revenue when exclusive display is allowed. This is because it induces truthful bidding, which alleviates the problem of bid shading due to the within-advertiser competition effect.
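
As a hedged illustration of the 2D format (hypothetical bids; this shows only the efficient allocation, not GSP2D or VCG2D pricing), the seller compares the best exclusive bid against the best combination of shared bids:

def allocate_2d(bids_shared, bids_exclusive):
    # Each advertiser i submits bids_shared[i] for appearing alongside
    # others and bids_exclusive[i] for appearing alone; with two shared
    # slots, the efficient outcome is whichever side reports more value.
    best_excl = max(range(len(bids_exclusive)), key=lambda i: bids_exclusive[i])
    top2 = sorted(range(len(bids_shared)), key=lambda i: -bids_shared[i])[:2]
    if bids_exclusive[best_excl] >= sum(bids_shared[i] for i in top2):
        return "exclusive", [best_excl]
    return "shared", top2

print(allocate_2d(bids_shared=[4.0, 3.0, 2.5], bids_exclusive=[6.0, 3.2, 2.6]))
# -> ('shared', [0, 1]): 4.0 + 3.0 beats the best exclusive bid of 6.0.
# Under GSP2D pricing, advertiser 0 would want to shade the 4.0 bid for
# her non-preferred shared outcome; truthful VCG2D removes that incentive.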


Real-time Bidding in Online Display Advertising [pdf], Marketing Science (2018)

Display advertising is a major source of revenue for many online publishers and content providers. Historically, display advertising impressions have been sold through pre-negotiated contracts, known as reservation contracts, between publishers and advertisers. In recent years, a growing number of impressions have been sold through real-time bidding (RTB), where advertisers bid for impressions in real time, as consumers visit publishers' websites. RTB allows advertisers to target consumers at an individual level using browser cookie information, and enables them to customize their ads for each individual. The rapid growth of RTB has created new challenges for advertisers and publishers regarding how much budget and ad inventory to allocate to RTB. In this paper, we use a game theory model with two advertisers and a publisher to study the effects of RTB on advertisers' and publishers' strategies and their profits. We show that symmetric advertisers use asymmetric strategies where one advertiser buys all of his impressions in RTB whereas the other advertiser focuses on reservation contracts. Interestingly, we find that while both advertisers benefit from the existence of RTB, the advertiser that focuses on reservation contracts benefits more than the advertiser that focuses on RTB. We show that while RTB lowers the equilibrium price of impressions in reservation contracts, it increases the publisher's total revenue. Despite many analysts' belief that, being more efficient, RTB will replace reservation contracts in the future, we show that publishers have to sell a sufficiently large fraction of their impressions through reservation contracts in order to maximize their revenue. We extend our model to consider premium consumers, the publisher's uncertainty about the number of future visitors, and the effectiveness of ad customization.


The Effects of Autoscaling in Cloud Computing on Product Launch, joint with Amir Fazli and Jeff Shulman [pdf], Management Science (2018)

Web-based firms often rely on cloud-based computational resources to serve customers, but the number of customers they will serve is rarely known at the time of product launch. A recent innovation in cloud computing, known as autoscaling, allows companies to automatically scale their computational load up or down to match customer demand. We build a game theory model to examine how autoscaling will affect firms' decisions to enter a new market and the resulting equilibrium prices, profitability, and consumer surplus. The model produces novel results depending on the likelihood of a firm's success in the new market, differentiation among potential entrants, and the cost of computational capacity. For instance, in contrast to previous capacity commitment models with demand certainty, we show that autoscaling can mitigate price competition if the likelihood of a firm's success in the market is moderate and the cost of capacity is sufficiently low. This is because without autoscaling, the firms' uncertainty about demand would lead to excessive computational capacities and thus aggressive price competition. We also find that when the likelihood of success is sufficiently high, autoscaling facilitates entry for one firm yet deters a second firm from simultaneously entering the market.
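
A back-of-the-envelope Python sketch (hypothetical numbers, not the paper's equilibrium model) captures the capacity-commitment tradeoff the abstract turns on: without autoscaling a firm pays for capacity chosen before demand is known, while autoscaling bills only realized demand.

def expected_profit(price, unit_cost, p_high, d_low, d_high, capacity=None):
    # capacity=None models autoscaling: paid capacity tracks realized
    # demand; otherwise a fixed capacity is bought before demand is known.
    profit = 0.0
    for prob, d in ((p_high, d_high), (1 - p_high, d_low)):
        served = d if capacity is None else min(d, capacity)
        paid = served if capacity is None else capacity
        profit += prob * (price * served - unit_cost * paid)
    return profit

args = dict(price=1.0, unit_cost=0.3, p_high=0.5, d_low=10, d_high=100)
print(expected_profit(**args))                # 38.5 with autoscaling
print(expected_profit(**args, capacity=100))  # 25.0 committing to success
print(expected_profit(**args, capacity=10))   # 7.0 committing to failure
# The committed-capacity firm holds idle machines in the low state; in the
# paper's game, such excess capacity is what fuels aggressive price cuts.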


Strategic Compliments in Sales, joint with J. Shulman [pdf], Quantitative Marketing and Economics (2017)

Salespersons often spend time and money giving prospective buyers compliments such as kind words, meals, and gifts. Though prior research has shown that compliments influence a prospective buyer’s decision, the extent to which salespersons should make these investments is not well understood. In this paper, we develop an analytical model to examine how seller and buyer characteristics affect the equilibrium provision of compliments by the seller. We establish that the optimal magnitude of compliments is non-monotonic in the buyer’s sensitivity to compliments. We identify conditions under which a seller of a high-quality product will offer greater (or lesser) compliments than a seller of a lower-quality product. We show that, under certain conditions, an uninformed buyer earns greater utility than a buyer who knows the quality of the seller’s product. The findings have implications for sellers in their choice of compliments and for buyers in the inferences they draw from the compliments received.


Expertise in Online Markets, joint with S. Despotakis, I. Hafalir and R. Ravi [pdf], Management Science (2016)

We examine the effect of the presence of knowledgeable buyers (experts) in online markets where auctions with a hard close and posted prices are widely used. We model buyer expertise as the ability to accurately predict the quality, or condition, of an item. In auctions with a hard close, sniping, i.e., submitting bids in the last minute, emerges as an equilibrium strategy for experts. We show that non-experts bid more aggressively as the proportion of experts increases. As a consequence, we establish that the auction platform may obtain higher revenue by (i) enforcing a hard close and allowing sniping, and (ii) withholding information regarding the quality of the item. Moreover, in online markets where both auctions and posted prices are available, we show that the presence of experts allows the sellers of high-quality items to signal their quality by choosing to sell via auctions.
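
A minimal sketch of why sniping pays for experts in a hard-close, second-price auction (hypothetical values; not the paper's model): an early expert bid signals high quality and lets the non-expert raise her bid, while a last-second snipe leaves no time to respond.

def hard_close_auction(expert_value, novice_bid_given, expert_snipes):
    # Second-price, hard-close auction: an early expert bid is visible and
    # signals quality; a sniped bid arrives too late to be responded to.
    info = "no signal" if expert_snipes else "expert is bidding"
    novice_bid = novice_bid_given(info)
    winner = "expert" if expert_value >= novice_bid else "novice"
    return winner, min(expert_value, novice_bid)  # pay second-highest bid

novice = lambda info: 8.0 if info == "expert is bidding" else 5.0
print(hard_close_auction(10.0, novice, expert_snipes=False))  # ('expert', 8.0)
print(hard_close_auction(10.0, novice, expert_snipes=True))   # ('expert', 5.0)
# Sniping keeps the expert's information out of rivals' bids and lowers
# the price she pays, which is why it emerges in equilibrium.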


Keyword Management Costs and "Broad Match" in Sponsored Search Advertising, joint with W. Amaldoss and K. Jerath [pdf], Marketing Science (2015)

In sponsored search advertising, advertisers choose the keywords to bid on, decide how much to bid on them, customize the ads for the chosen keywords by geographic region, time of day and season, and then measure and manage campaign effectiveness. We call the costs that arise from such activities keyword management costs. To reduce these costs and increase advertisers’ participation in keyword auctions, search engines offer a tool called broad match which automates bidding on keywords. These automated bids are based on the search engine’s estimates of the advertisers’ valuations and, therefore, may be less accurate than the bids the advertisers would have submitted themselves. Using a game-theoretic model, we examine the strategic role of keyword management costs, and of broad match, in sponsored search advertising. We show that because these costs inhibit participation by advertisers in keyword auctions, the search engine has to reduce the reserve price, which reduces the search engine’s profits. This motivates the search engine to offer broad match as a tool to reduce keyword management costs. If the accuracy of broad match bids is high enough, advertisers adopt broad match and benefit from the cost reduction, whereas if the accuracy is very low, advertisers do not use it. Interestingly, at moderate levels of bid accuracy, advertisers individually find it attractive to reduce costs by using broad match, but competing advertisers also adopt broad match and the increased competition hurts all advertisers’ profits, and thus a “prisoner’s dilemma” arises. Adoption of broad match by advertisers increases search engine profits, and it therefore seems natural to expect that the search engine will be motivated to improve broad match accuracy. Our analysis shows that the search engine will increase broad match accuracy up to the point where advertisers choose broad match, but increasing the accuracy any further reduces the search engine’s profits. We consider a number of extensions to the basic model to show the robustness of our insights.
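
The prisoner's dilemma at moderate bid accuracy can be written out with illustrative payoffs (the numbers below are hypothetical, chosen only to reproduce the dilemma's structure):

# Per-advertiser profits for each (row, column) choice of match type.
payoffs = {
    ("exact", "exact"): (5, 5),
    ("broad", "exact"): (6, 3),  # unilateral broad match saves keyword costs
    ("exact", "broad"): (3, 6),
    ("broad", "broad"): (4, 4),  # joint adoption intensifies competition
}

def best_response(opponent, player):
    def pay(me):
        pair = (me, opponent) if player == 0 else (opponent, me)
        return payoffs[pair][player]
    return max(["exact", "broad"], key=pay)

print(best_response("exact", 0), best_response("broad", 0))  # broad broad
# Broad match is each advertiser's best response to anything the rival
# does, yet (broad, broad) pays 4 < 5: both would prefer (exact, exact).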


Competitive Poaching in Sponsored Search Advertising and Strategic Impact on Traditional Advertising, joint with K. Jerath and K. Srinivasan [pdf], Marketing Science (2014)

An important decision for a firm is how to allocate its advertising budget among different types of advertising. Most traditional channels of advertising, such as advertising on television and in print, serve the purpose of building consumer awareness and desire about the firm's products. With recent developments in technology, sponsored search (or paid search) advertising at search engines in response to a keyword searched by a user has become an important part of most firms' advertising efforts. An advantage of sponsored search advertising is that, since the firm advertises in response to a consumer-initiated search, it is a highly targeted form of communication and the sales-conversion rate is typically higher than in traditional advertising. However, a consumer would search for a specific product or brand only if she is already aware of it due to previous awareness-generating traditional advertising efforts. Moreover, competing firms can use sponsored search to free-ride on the awareness-building efforts of other firms by directly advertising on their keywords and therefore "poaching" their customers. Anecdotal evidence shows that this is a frequent occurrence. In other words, traditional advertising builds awareness, while sponsored search is a form of technology-enabled communication that helps to target consumers in a later stage of the purchase process, which induces competitors to poach these potential customers.
Using a game theory model, we study the implications of these tradeoffs on the advertising decisions of competing firms, and on the design of the sponsored search auction by the search engine. We find that symmetric firms may follow asymmetric advertising strategies, with one firm focusing on traditional advertising and the other firm focusing on sponsored search with poaching. Interestingly, the search engine benefits from discouraging competition in its own auctions in sponsored search advertising by shielding firms from competitors' bids. This explains the practice of employing "keyword relevance" measures, under which search engines such as Google, Yahoo! and Bing under-weight the bids of firms bidding on competitors' keywords. We also obtain various other interesting insights on the interplay between sponsored search advertising and traditional advertising.
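
The "keyword relevance" shielding can be illustrated in a few lines (hypothetical bids and weights; search engines' actual relevance formulas are proprietary):

def rank_score(bid, relevance):
    # Ads are ranked by a relevance-weighted bid; bids on a competitor's
    # keyword receive a low relevance weight, under-weighting poachers.
    return bid * relevance

owner   = rank_score(bid=1.0, relevance=1.0)  # firm on its own brand keyword
poacher = rank_score(bid=2.0, relevance=0.4)  # rival bidding on that keyword
print(owner, poacher)  # 1.0 vs 0.8: the owner keeps the slot despite the
# rival's higher bid, shielding firms from competitors' bids as above.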


A Near Pareto Optimal Auction with Budget Constraints, joint with I. Hafalir and R. Ravi [pdf], Games and Economic Behavior (2011)

In a setup where a divisible good is to be allocated to a set of bidders with budget constraints, we introduce a mechanism in the spirit of the Vickrey auction. In the mechanism we propose, understating budgets or values is weakly dominated. Since the revenue is increasing in budgets and values, all kinds of equilibrium deviations from true valuations turn out to be beneficial to the auctioneer. We also show that the ex-post Nash equilibrium of our mechanism is near Pareto optimal in the sense that all full winners' values are above all full losers' values.


Other Publications


We Know Who You Followed Last Summer: Inferring Social Link Creation Times in Twitter, joint with C. Borgs, J. Chayes, B. Karrer, B. Meeder and R. Ravi [pdf]

Understanding a network's temporal evolution appears to require multiple observations of the graph over time. These often expensive repeated crawls can only answer questions about what happened from observation to observation, and not what happened before or between network snapshots. Contrary to this picture, we propose a method for Twitter's social network that takes a single static snapshot of network edges and user account creation times to accurately infer when these edges were formed. This method can be exact in theory, and we demonstrate empirically that, for a large subset of Twitter relationships, it is accurate to within hours in practice. We study users who have a very large number of edges or who are recommended by Twitter. We examine the graph formed by these nearly 1,800 Twitter celebrities and their 862 million edges in detail, showing that a single static snapshot can give novel insights about Twitter's evolution. We conclude from this analysis that real-world events and changes to Twitter's interface for recommending users strongly influence network growth.

Appeared in the Twentieth International World Wide Web Conference, WWW '11 (Acceptance Rate 12%).
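
A simplified sketch of the inference idea (assuming the snapshot lists each user's followings in link-creation order, the property such inference relies on): an edge cannot predate the followed account, and edge times are non-decreasing along the ordered list, so a running maximum of account creation times lower-bounds every edge.

def edge_time_lower_bounds(followings_in_order, account_created):
    # followings_in_order: one user's followees, oldest link first.
    # account_created: account -> creation time.
    bounds, lower = {}, float("-inf")
    for followee in followings_in_order:
        lower = max(lower, account_created[followee])
        bounds[followee] = lower  # tight when a fresh account is followed
    return bounds

created = {"a": 10, "b": 50, "c": 20, "d": 90}
print(edge_time_lower_bounds(["a", "b", "c", "d"], created))
# {'a': 10, 'b': 50, 'c': 50, 'd': 90}: following 'b' (created at time 50)
# pins every later edge, including the one to the older account 'c'.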


Game-theoretic Models of Information Overload in Social Networks, joint with C. Borgs, J. Chayes, B. Karrer, B. Meeder, R. Ravi and R. Reagans [pdf]

We study the effect of information overload on user engagement in an asymmetric social network like Twitter. We introduce simple game-theoretic models that capture rate competition between celebrities producing updates in such networks where users non-strategically choose a subset of celebrities to follow based on the utility derived from high quality updates as well as disutility derived from having to wade through too many updates. Our two variants model the two behaviors of users dropping some potential connections (followership model) or leaving the network altogether (engagement model). We show that under very simple and natural models of celebrity rate competition, there is no pure strategy Nash equilibrium under the first model. We then identify special cases in both models when pure rate equilibria exist for the celebrities: For the followership model, we show existence of pure rate equilibria when there is a global ranking of the celebrities in terms of the quality of their updates to users. For the engagement model, pure rate equilibria exist when all users are interested in the same number of celebrities, or when they are interested in at most two. Finally, we also give a finite though inefficient procedure to determine if pure equilibria exist in the general case of the followership model.

Appeared in the Seventh Workshop on Algorithms and Models for the Web Graph, WAW '10.


Trading Off Mistakes and Don't-know Predictions, joint with A. Blum and M. Zadimoghaddam [pdf]

We discuss an online learning framework in which the agent is allowed to say "I don't know", as well as to make incorrect predictions, on given examples. We analyze the trade-off between saying "I don't know" and making mistakes. If the number of don't-know predictions is forced to be zero, the model reduces to the well-known mistake-bound model introduced by Littlestone [Lit88]. On the other hand, if no mistakes are allowed, the model reduces to the KWIK framework introduced by Li et al. [LLW08]. We propose a general, though inefficient, algorithm for arbitrary finite concept classes that minimizes the number of don't-know predictions when a certain number of mistakes is allowed. We then present specific polynomial-time algorithms for the concept classes of monotone disjunctions and linear separators.

Appeared in the Twenty-fourth Annual Conference on Neural Information Processing Systems, NIPS '10, as a spotlight paper (spotlight acceptance rate below 6%).
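
A hedged sketch of the trade-off for a finite concept class (illustrative, not the paper's algorithm): predict when the version space is unanimous, spend the mistake budget on halving-style majority votes, and abstain once the budget is gone.

def predict_and_update(version_space, x, label, mistakes_left):
    # version_space: set of hypotheses h with h(x) in {0, 1} that are
    # consistent with every example seen so far.
    votes = [h(x) for h in version_space]
    if len(set(votes)) == 1:
        pred = votes[0]                           # unanimous: safe, free
    elif mistakes_left > 0:
        pred = int(2 * sum(votes) >= len(votes))  # majority (halving) vote
    else:
        pred = "IDK"                              # budget exhausted: abstain
    if pred != "IDK" and pred != label:
        mistakes_left -= 1                        # a wrong majority halves the space
    version_space = {h for h in version_space if h(x) == label}
    return pred, version_space, mistakes_left

# Thresholds on {0,...,4}: h_t(x) = 1 iff x >= t.
H = {(lambda t: (lambda x: int(x >= t)))(t) for t in range(5)}
pred, H, budget = predict_and_update(H, 2, 1, mistakes_left=1)
print(pred, len(H), budget)  # 1 3 1: correct majority, space shrinks to t <= 2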


Expressive Auctions for Externalities in Online Advertising, joint with A. Ghosh [pdf]

When online ads are shown together, they compete for user attention and conversions, imposing negative externalities on each other. While the competition for user attention in sponsored search can be captured via models of clickthrough rates, the post-click competition for conversions cannot: since the value-per-click of an advertiser is proportional to the conversion probability conditional on a click, which depends on the other ads displayed, the private value of an advertiser is no longer one-dimensional, and the GSP mechanism is not adequately expressive. We study the design of expressive GSP-like mechanisms for the simplest form that an advertiser's private value can have in the presence of such externalities: an advertiser's value depends on exclusivity, i.e., whether her ad is shown exclusively or along with other ads.

Part of our results appeared in the Fifth Workshop on Ad Auctions, AAW '09.
Appeared in the Nineteenth International World Wide Web Conference, WWW '10 (Acceptance Rate 14%).


Mechanism Design for Complexity-constrained Bidders, joint with R. Kumar and M. Mahdian [pdf]

A well-known result due to Vickrey gives a mechanism for selling a number of goods to interested buyers in a way that achieves the maximum social welfare. In practice, a problem with this mechanism is that it requires the buyers to specify a large number of values. In this paper we study the problem of designing optimal mechanisms subject to constraints on the complexity of the bidding language, in a setting where buyers have additive valuations for a large set of goods. This setting is motivated by sponsored search auctions, where the valuations of the advertisers are more or less additive and the number of keywords up for sale is huge. We give a complete solution for this problem when the valuations of the buyers are drawn from simple classes of prior distributions. For a more realistic class of priors, we show that a mechanism akin to the broad match mechanism currently in use provides a reasonable bicriteria approximation.

Appeared in the Fifth Workshop on Internet and Network Economics, WINE '09.


Minimizing Movement, joint with E. Demaine, M. Hajiaghayi, H. Mahini, S. Oveisgharan and M. Zadimoghaddam
Journal Version [pdf] Conference Version [pdf]

We give approximation algorithms and inapproximability results for a class of movement problems. In general, these problems involve planning the coordinated motion of a large collection of objects (representing anything from a robot swarm or firefighter team to map labels or network messages) to achieve a global property of the network while minimizing the maximum or average movement. In particular, we consider the goals of achieving connectivity (undirected and directed), achieving connectivity between a given pair of vertices, achieving independence (a dispersion problem), and achieving a perfect matching (with applications to multicasting). This general family of movement problems encompasses an intriguing range of graph and geometric algorithms, with several real-world applications and a surprising range of approximability. In some cases, we obtain tight approximation and inapproximability results using direct techniques (without use of PCP), assuming just that P != NP.

Journal version appeared in ACM Transactions on Algorithms, ACM TALG 5(3), 2009.
Conference version appeared in Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '07.


Scheduling to Minimize Gaps and Power Consumption, joint with E. Demaine, M. Ghodsi, M. Hajiaghayi and M. Zadimoghaddam [pdf]

This paper considers scheduling tasks while minimizing the power consumption of one or more processors, each of which can go to sleep at a fixed cost alpha. There are two natural versions of this problem, both considered extensively in recent work: minimize the total power consumption (including computation time), or minimize the number of "gaps" in execution. For both versions in a multiprocessor system, we develop a polynomial-time algorithm based on sophisticated dynamic programming. In a generalization of the power-saving problem, where each task can execute in any of a specified set of time intervals, we develop a (1 + (2/3)alpha)-approximation, and show that dependence on alpha is necessary. In contrast, the analogous multi-interval gap scheduling problem is set-cover hard (and thus not o(lg n)-approximable), even in the special cases of just two intervals per job or just three unit intervals per job. We also prove several other hardness-of-approximation results. Finally, we give an O(sqrt(n))-approximation for maximizing throughput given a hard upper bound on the number of gaps.

Appeared in Proceedings of the 19th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA '07.
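
The core cost structure is easy to state in code. For a fixed schedule on one processor (an illustration only; the paper's dynamic programs choose the schedule itself), each gap between busy intervals costs the cheaper of staying awake through it or paying the wake-up cost alpha:

def idle_cost(busy_intervals, alpha):
    # Power used outside execution: staying on through a gap costs its
    # length; sleeping and waking back up costs alpha; take the cheaper.
    busy = sorted(busy_intervals)
    cost = 0.0
    for (_, end), (start, _) in zip(busy, busy[1:]):
        gap = start - end
        if gap > 0:
            cost += min(gap, alpha)
    return cost

# alpha = 3: the length-2 gap is bridged awake; the length-7 gap is slept.
print(idle_cost([(0, 2), (4, 6), (13, 15)], alpha=3))  # 2 + 3 = 5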


Spanning Trees with Minimum Weighted Degrees, joint with M. Ghodsi, H. Mahini, K. Mirjalali, S. Oveisgharan and M. Zadimoghaddam [pdf]

Given a metric graph G, we are concerned with finding a spanning tree of G whose maximum weighted degree over vertices is minimum. In a metric graph (or its spanning tree), the weighted degree of a vertex is defined as the sum of the weights of its incident edges. In this paper, we propose a 4.5-approximation algorithm for this problem. We also prove that it is NP-hard to approximate this problem within a factor of 2 - epsilon, for any epsilon > 0.

Appeared in Information Processing Letters, IPL 104(3), 2007.
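
For intuition, here is a brute-force Python check on a tiny instance (exhaustive search is only feasible for very small graphs; the paper's contribution is the 4.5-approximation for the general case):

from itertools import combinations

def is_spanning_tree(n, tree):
    # Union-find cycle check: n-1 acyclic edges on n vertices form a tree.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _ in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def min_max_weighted_degree(n, edges):
    # Try every (n-1)-edge subset; keep the spanning tree whose maximum
    # weighted degree (sum of incident edge weights) is smallest.
    best = float("inf")
    for tree in combinations(edges, n - 1):
        if not is_spanning_tree(n, tree):
            continue
        wdeg = [0.0] * n
        for u, v, w in tree:
            wdeg[u] += w
            wdeg[v] += w
        best = min(best, max(wdeg))
    return best

# Tiny metric example on 4 vertices (weights satisfy the triangle inequality).
edges = [(0, 1, 1), (0, 2, 2), (0, 3, 2), (1, 2, 1), (1, 3, 2), (2, 3, 1)]
print(min_max_weighted_degree(4, edges))  # 2: the unit-weight path 0-1-2-3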