Electronic-Commerce Day 2013

Electronic-Commerce Day 2013 is a one-day workshop covering many aspects of electronic commerce.

Schedule

09:30 - 10:00 Reception + coffee
10:00 - 10:15 Moshe Tennenholtz
Opening remarks
10:15 - 11:15 Ashish Goel
Some Algorithmic Aspects of Social Commerce
11:15 - 11:45 Coffee break
11:45 - 12:15 Fiana Raiber and Oren Kurland
Ranking Document Clusters using Markov Random Fields
12:15 - 12:45 Gideon Blocq and Ariel Orda
Worst-Case Coalitions in Routing Games
12:45 - 13:15 Yonatan Yaniv and Seffi Naor
Efficient Online Scheduling for Deadline-Sensitive Jobs
13:15 - 14:15 Lunch
14:15 - 15:15 Yishay Mansour
Implementing the "Wisdom of the Crowd"
15:15 - 15:45 Dan Garber and Elad Hazan
Projection-free online learning
15:45 - 16:15 Coffee break
16:15 - 16:45 Ori Plonsky and Ido Erev
Agile Environments and the Tendency to Rely on Small Samples
16:45 - 17:45 Ehud Kalai
Learning, Equilibrium and Stability in Large Population Games
17:45 - 18:30 Pizza & Ice cream





Ashish Goel

Ashish Goel is a Professor of Management Science and Engineering and (by courtesy) Computer Science at Stanford University, and a member of Stanford's Institute for Computational and Mathematical Engineering. He received his PhD in Computer Science from Stanford in 1999, and was an Assistant Professor of Computer Science at the University of Southern California from 1999 to 2002. His research interests lie in the design, analysis, and applications of algorithms; current application areas of interest include social networks, Internet commerce, and large-scale data processing. Professor Goel is a recipient of an Alfred P. Sloan faculty fellowship (2004-06), a Terman faculty fellowship from Stanford, an NSF Career Award (2002-07), and a Rajeev Motwani mentorship award (2010). He was a co-author of the paper that won the best paper award at WWW 2009. Professor Goel is also a research fellow and technical advisor to Twitter, Inc.


Some Algorithmic Aspects of Social Commerce

A significant fraction of socio-economic activity is now conducted on the Internet, using paradigms that did not exist before the Internet. Some examples are personalized advertising and product recommendations; online reputation systems to rate buyers, sellers, and service providers; question/answer forums and discussion boards; and novel pricing mechanisms. In this talk, we will focus on two examples from social networks, illustrating the use of algorithmic tools:

  1. Personalized advertising and recommendations: We will describe our experience designing Twitter's revenue model and the initial version of its ad targeting algorithms. We will also point out the synergies between personalized advertising and personalized recommendations, and describe algorithmic techniques for real-time personalized recommendations.
  2. Reputation Systems and Virtual Currencies: Reputation systems are crucial for social commerce, given the large number of sellers and buyers, the ease of acquiring fake identities, and the large number of first-time transactions. We will describe a model of networked trust that is an alternative to centralized reputation/virtual currency systems on the Internet. Informally, every node in the network acts as a bank and prints its own currency, which is then used to purchase services within the network. Transactions between untrusting agents proceed through exchange of IOUs along a chain of trust. Such "trust networks" are robust to infiltration. We will show that in many interesting cases, the liquidity of these trust networks is comparable to a system where currency is issued by a single centralized bank.
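
A minimal sketch of the IOU-chaining idea, using a made-up three-node graph and hypothetical helper names (credit, find_trust_path, pay); it illustrates the mechanism described above, not the speaker's actual model:

    from collections import deque

    # credit[u][v] = residual amount of u's IOUs that v is still willing
    # to accept (i.e., v trusts u up to this amount). The graph is made up.
    credit = {
        "alice": {"bob": 10.0},
        "bob":   {"carol": 10.0},
    }

    def find_trust_path(src, dst, amount):
        # BFS for a chain of trust from payer to payee in which every hop
        # still has enough residual credit to carry the payment.
        queue, seen = deque([(src, [src])]), {src}
        while queue:
            node, path = queue.popleft()
            if node == dst:
                return path
            for nxt, cap in credit.get(node, {}).items():
                if nxt not in seen and cap >= amount:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
        return None

    def pay(src, dst, amount):
        # Settle by exchanging IOUs along the chain: credit is consumed
        # forward and restored backward, with no central bank involved.
        path = find_trust_path(src, dst, amount)
        if path is None:
            return False
        for u, v in zip(path, path[1:]):
            credit[u][v] -= amount
            credit.setdefault(v, {})[u] = credit.get(v, {}).get(u, 0.0) + amount
        return True

    print(pay("alice", "carol", 3.0))   # True: alice pays carol through bob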


Ranking Document Clusters using Markov Random Fields

The ad hoc retrieval task is to rank the documents in a corpus by their relevance to the information need expressed by a query. One retrieval paradigm that has attracted considerable research attention is cluster-based retrieval: the documents most highly ranked by an initial search are clustered, and information induced from these clusters is used to improve retrieval effectiveness.

An important challenge in cluster-based document retrieval is ranking document clusters by their relevance to the query. We present a novel cluster ranking approach that utilizes Markov Random Fields (MRFs). MRFs enable the integration of various types of cluster-relevance evidence; e.g., the query-similarity values of the cluster’s documents and query-independent measures of the cluster. The retrieval effectiveness of our approach is substantially better than that of several highly effective retrieval models that are used for the initial search. Furthermore, our cluster ranking method significantly outperforms state-of-the-art cluster ranking approaches. We also show that our method can be used to improve the performance of (state-of-the-art) results-diversification methods.
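
To make the pipeline concrete, here is a toy sketch of cluster-based ranking with a log-linear score in the spirit of an MRF; the clustering scheme, the features, and the weights are illustrative assumptions, not the learned model from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    docs = rng.random((20, 50))      # stand-in vectors for the top-20 documents
    query = rng.random(50)

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    query_sim = np.array([cos(d, query) for d in docs])

    # Nearest-neighbor clusters: each document together with its 4 most
    # similar peers, a common construction in cluster-based retrieval.
    sims = np.array([[cos(a, b) for b in docs] for a in docs])
    clusters = {tuple(np.argsort(-sims[i])[:5]) for i in range(len(docs))}

    # Log-linear cluster score mixing query-dependent evidence with a
    # query-independent prior; the weights here are made up, not learned.
    w_mean, w_max, w_prior = 1.0, 0.5, 0.1
    def cluster_score(members):
        m = list(members)
        prior = np.log1p(docs[m].sum(axis=1).mean())  # placeholder prior feature
        return (w_mean * query_sim[m].mean()
                + w_max * query_sim[m].max()
                + w_prior * prior)

    best = max(clusters, key=cluster_score)
    print("documents of the top-ranked cluster:", best)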


Worst-Case Coalitions in Routing Games

Recently, there has been growing interest in network scenarios where selfish agents are able to communicate, bargain and form coalitions, i.e., play a cooperative game, in order to overcome the inefficiency of noncooperative equilibria. We study such coalitional games under worst-case conditions in the fundamental load-balancing setting of routing over parallel links. Specifically, we define the cost of a coalition as the outcome of a Stackelberg game in which a rival leader aims to maximize the cost of the coalition. Quite surprisingly, we establish that the leader acts as if it were a continuum of infinitesimal self-optimizing agents. With this structural result at hand, we investigate fundamental solution concepts of the considered cooperative game, namely the Nash Bargaining Scheme, the (Inner) Core and the min-max fair Nucleolus. Most notably, we show that they all bring about the system optimum.
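
The structural result can be illustrated numerically: a continuum of infinitesimal self-optimizing agents on parallel links equalizes the latency of all used links (a Wardrop equilibrium), which can be computed by water-filling. A sketch under assumed linear latencies l(x) = a*x + b; the cost coefficients and demand are illustrative, not from the talk:

    # Parallel links with linear latency l(x) = a*x + b, listed in
    # increasing free-flow latency b. Infinitesimal selfish agents split
    # the total demand so that every used link has the same latency L.
    links = [(1.0, 0.0), (2.0, 1.0), (1.0, 3.0)]   # (a, b) per link, assumed
    demand = 6.0

    def wardrop(links, demand):
        for k in range(1, len(links) + 1):
            used = links[:k]
            # Solve sum over used links of (L - b)/a = demand for L.
            L = (demand + sum(b / a for a, b in used)) / sum(1 / a for a, b in used)
            if k == len(links) or L <= links[k][1]:  # remaining links stay empty
                return L, [max(0.0, (L - b) / a) for a, b in links]

    L, flows = wardrop(links, demand)
    print(f"common latency {L:.2f}, link flows {flows}")   # 3.80, [3.8, 1.4, 0.8]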


Efficient Online Scheduling for Deadline-Sensitive Jobs

We consider mechanisms for online deadline-aware scheduling in large computing clusters.

In our framework, jobs arrive dynamically and are characterized by their value and total resource demand (or estimation thereof), along with their reported deadlines. The scheduler's objective is to maximize the aggregate value of jobs completed by their deadlines.

We circumvent known lower bounds for this problem by assuming that the input has slack, meaning that any job could be delayed and still finish by its deadline.

Under the slackness assumption, we design a preemptive scheduler with a constant-factor worst-case performance guarantee.
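
As a rough illustration of how deadlines, preemption, and value interact, the toy scheduler below runs, at each time step, the still-feasible jobs of highest value density; it conveys the flavor only and is not the paper's constant-competitive algorithm (the job data and the one-unit-per-step machine model are assumptions):

    # Toy discrete-time preemptive scheduler. Each job is
    # (value, total demand in machine-steps, deadline); with m machines,
    # every step runs the m feasible jobs of highest value/remaining-demand.
    def schedule(jobs, horizon, machines):
        remaining = {j: d for j, (v, d, dl) in jobs.items()}
        done = set()
        for t in range(horizon):
            feasible = [j for j, (v, d, dl) in jobs.items()
                        if j not in done
                        and 0 < remaining[j] <= dl - t]   # can still meet deadline
            feasible.sort(key=lambda j: -jobs[j][0] / remaining[j])
            for j in feasible[:machines]:                 # preempt everyone else
                remaining[j] -= 1
                if remaining[j] == 0:
                    done.add(j)
        return sum(jobs[j][0] for j in done)

    jobs = {  # job -> (value, demand, deadline); assumed inputs
        "A": (10, 3, 5),
        "B": (6, 2, 3),
        "C": (8, 4, 8),
    }
    print("completed value:", schedule(jobs, horizon=8, machines=1))  # 18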

Along the way, we pay close attention to practical aspects, such as runtime efficiency, data locality and demand uncertainty. We evaluate the algorithm via simulations over real job traces taken from a large production cluster, and show that its actual performance is significantly better than other heuristics used in practice.

We then extend our framework to handle provider commitments: the requirement that jobs admitted to service must be executed to completion. We prove that no algorithm can obtain worst-case guarantees when the commitment decision must be made at a job's arrival time. Nevertheless, we design efficient heuristics that commit on job admission, in the spirit of our basic algorithm. We show empirically that these heuristics perform just as well as (or better than) the original algorithm.

Finally, we discuss how our scheduling framework can be used to design truthful scheduling mechanisms, motivated by applications to commercial public cloud offerings.


Yishay Mansour

Prof. Yishay Mansour received his PhD from MIT in 1990, after which he was a postdoctoral fellow at Harvard and a Research Staff Member at the IBM T. J. Watson Research Center. Since 1992 he has been at Tel-Aviv University, where he is currently a Professor of Computer Science; he served as the first head of the Blavatnik School of Computer Science during 2000-2002. He is currently the director of the Israeli Center of Research Excellence in Algorithms.

Prof. Mansour has mentored start-ups such as Riverhead (acquired by Cisco), Ghoonet and Verix. He holds a part-time position at Microsoft Research Israel, and has held visiting positions at Bell Labs, AT&T Research Labs, IBM Research, and Google Research.

Prof. Mansour has published over 50 journal papers and over 100 conference papers in various areas of computer science, with special emphasis on communication networks, machine learning, and algorithmic game theory, and has supervised over a dozen graduate students in those areas.

Prof. Mansour is currently an associate editor of a number of distinguished journals and has served on numerous conference program committees. He was the program chair of COLT (1998) and has served on the COLT steering committee.


Implementing the "Wisdom of the Crowd"

In this paper we study a novel model in which agents arrive sequentially, and each in turn chooses one action from a fixed set of actions so as to maximize his expected reward given the information he possesses at the time of arrival.

The information that becomes available affects the incentives of an agent to explore and generate new information. We characterize the optimal disclosure policy of a planner whose goal is to maximize social welfare, and show that this policy is intuitive and very simple to implement. As the number of agents increases, the social welfare converges to the optimal welfare of the unconstrained mechanism. One interpretation of our result is the implementation of what is known as the 'wisdom of the crowd', a topic that has become increasingly relevant during the last decade with the rapid adoption of the Internet. This is joint work with Ilan Kremer and Motty Perry.
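
The exploration incentive at the heart of the result can be simulated. The toy below compares myopic agents, who best-respond to all disclosed payoffs, against a planner who simply asks the first few agents to explore; the two-action payoffs and the forced-exploration rule are illustrative assumptions, not the paper's optimal policy:

    import random

    # Action 0 (safe) pays 0.5 for sure; action 1 (risky) pays 1 w.p. 0.6.
    # Myopic agents take the risky action only while its observed average
    # beats 0.5, so a few early zeros can stop exploration forever.
    def average_welfare(n_agents, forced_explorers, seed):
        rng = random.Random(seed)
        draws, welfare = [], 0.0
        for i in range(n_agents):
            explore = (i < forced_explorers or not draws
                       or sum(draws) / len(draws) > 0.5)
            if explore:
                payoff = 1.0 if rng.random() < 0.6 else 0.0
                draws.append(payoff)
            else:
                payoff = 0.5
            welfare += payoff
        return welfare / n_agents

    trials = 300
    myopic = sum(average_welfare(1000, 0, s) for s in range(trials)) / trials
    planned = sum(average_welfare(1000, 20, s) for s in range(trials)) / trials
    print(f"myopic: {myopic:.3f}   planner forces 20 explorers: {planned:.3f}")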


Projection-free online learning

Modern large data sets prohibit any super-linear time operations. This motivates the study of iterative optimization algorithms with low complexity per iteration. The computational bottleneck in applying state-of-the-art iterative methods is usually the so-called "projection step".

We consider projection-free optimization that replaces projections by more efficient linear optimization steps. We give the first linearly-converging algorithm of this type, and describe applications to online learning and several combinatorial optimization problems.
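
The classical template for this approach is the conditional-gradient (Frank-Wolfe) method, in which each iteration solves a linear problem over the feasible set instead of projecting onto it. A sketch over the probability simplex, where the linear step simply picks a vertex; the least-squares objective is an assumed toy example:

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0]])
    b = np.array([1.0, 2.0])

    def grad(x):                             # gradient of 0.5*||Ax - b||^2
        return A.T @ (A @ x - b)

    x = np.ones(3) / 3                       # start inside the simplex
    for t in range(1, 200):
        g = grad(x)
        v = np.zeros(3)
        v[np.argmin(g)] = 1.0                # linear optimization over the simplex
        gamma = 2.0 / (t + 2)                # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * v      # convex combination stays feasible
    print("solution:", x, "objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)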


Agile Environments and the Tendency to Rely on Small Samples

Four studies demonstrate that the tendency to rely on small samples is not always a result of cognitive limitations; it can be a byproduct of complex computations that evolved to facilitate performance in agile (dynamic) environments. Study 1 is a computational analysis demonstrating why the attempt to maximize performance in agile settings can lead to similarity-based strategies that imply reliance on small samples and underweighting of rare events. Three experimental studies show that human behavior is consistent with this hypothesis. Study 2 shows near-perfect performance in a complex agile environment, and deviation from maximization in a simple rigid setting. Study 3 reveals that people can adjust their strategies to dynamic settings even when the adjustment requires good memory of long sequences of outcomes. Study 4 suggests that the set of similarity-based strategies people consider is wide.
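
The underweighting-of-rare-events prediction is easy to reproduce in simulation. In the toy below (the payoff scheme and sampling rule are assumptions, not the studies' designs), an agent who judges a risky option from a small sample of its past outcomes usually misses the rare large loss and therefore over-chooses it:

    import random

    # Option R pays 1 w.p. 0.95 and -20 w.p. 0.05 (true mean -0.05, worse
    # than the safe payoff of 0). An agent estimating R from k past draws
    # usually sees no loss, so small k implies underweighting the rare event.
    rng = random.Random(1)
    history = [1 if rng.random() < 0.95 else -20 for _ in range(10_000)]

    def chooses_risky(k):
        sample = rng.sample(history, k)
        return sum(sample) / k > 0.0     # pick R if its sample mean beats 0

    for k in (3, 10, 100):
        rate = sum(chooses_risky(k) for _ in range(5_000)) / 5_000
        print(f"sample size {k:3d}: picks the risky option {rate:.0%} of the time")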


Ehud Kalai

Ehud Kalai works in game theory and its interface with economics, computer science and other areas. He is a Principal Researcher at Microsoft Research New England and the James J. O'Connor Distinguished Professor of Decision and Game Sciences at Northwestern University, where he is the director of the Center for Game Theory and Economic Behavior. He holds an AB (UC Berkeley, 1967), a PhD (Cornell, 1972) and an Honorary Doctorate (University of Paris, 2010) in mathematics. Kalai is the founding Editor of Games and Economic Behavior, and a co-founder and past President of the Game Theory Society. He is a Fellow of the Econometric Society and of the American Academy of Arts and Sciences.


Learning, Equilibrium and Stability in Large Population Games

The sensitivity of Nash equilibrium to structural details of a game makes its predictions unreliable in many applications. In one-shot games with many statistically independent players, however, Nash equilibria are structurally robust and the modeling difficulties are less severe.

Using a new concept of compressed Nash equilibrium, we discuss these properties in a general model that describes repeated interaction among players who are correlated through unknown fundamentals of nature. Although stability fails in short interactions, after sufficiently long play players “learn to be independent” and stability is obtained.

Motivating examples include games played on the web and price stability in market games.

Joint work with Eran Shmaya.