

25 years of innovation in collective intelligence,
cognitive science, and interactive media

Emile is a world-renowned specialist in prediction markets: collective intelligence in the service of forecasting. Before creating Lumenogic, he founded and ran NewsFutures for 10 years (2000-2010). Under his leadership, NewsFutures and then Lumenogic developed many innovative prediction-market applications for a worldwide clientele of large companies, media outlets, and governments.

NewsFutures' work at the forefront of the prediction-market industry is mentioned several times in the worldwide best-seller "The Wisdom of Crowds" ("La Sagesse des Foules", Lattès, 2007) and has been covered extensively by the international press, including The New York Times, The Economist, Businessweek, TIME, Newsweek, The New Yorker, The Wall Street Journal, and Nature.

A seasoned speaker, Emile has been invited to talk about collective intelligence and prediction markets around the world: World Economic Forum, World Bank, Wharton, INSEAD, Carnegie Mellon, Ecole Militaire, Collège de France, Sciences Po, the universities of Frankfurt, Hong Kong, St. Petersburg… He is one of the associate editors of the Journal of Prediction Markets.

As an expert in cognitive science, trained at the highest level at Carnegie Mellon University (USA), he advised the OECD's Centre for Educational Research and Innovation for eight years (1999-2007) on its "Learning Sciences and Brain Research" project.

Before creating NewsFutures and Lumenogic, Emile was an artificial-intelligence engineer at Ilog and a science & technology reporter for SVM magazine. He is also the author, with Bruno Lévy, of two multimedia CD-ROMs whose originality lies in interweaving the perspectives of two dozen of the greatest scientists of our time, including eight Nobel laureates: Le Défi de l'Univers (Hypermind, 1995), winner of L'Ordinateur Individuel's CD-Rom d'Or, and Les Secrets de l'Intelligence (Hypermind, 1997), winner of FNAC's Flèche d'Or. Together they sold more than 200,000 copies worldwide.

Emile holds degrees from Carnegie Mellon (USA) in applied mathematics (BS) and cognitive psychology (PhD).


The Marketcast Method for Aggregating Prediction Market Forecasts 

With Pavel Atanasov, Phillip Rescober, Eric Stone, Barbara Mellers, Philip Tetlock, & Lyle Ungar
Proceedings of The 2013 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction

We describe a hybrid forecasting method called marketcast. Marketcasts are based on bid and ask orders from prediction markets, aggregated using techniques associated with survey methods, rather than market matching algorithms. We discuss the process of conversion from market orders to probability estimates, and simple aggregation methods. The performance of marketcasts is compared to a traditional prediction market and a traditional opinion poll. Overall, marketcasts perform approximately as well as prediction markets and opinion poll methods on most questions, and performance is stable across model specifications.
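The abstract does not spell out the order-to-probability conversion or the aggregation formula. As a rough, hypothetical illustration of the general idea (the directional `margin` nudge and the choice of median below are assumptions, not the authors' actual method), each limit order on a 0-100 binary contract can be read as a probability estimate and pooled with a survey-style statistic:

```python
from statistics import median

def order_to_probability(side, price):
    """Interpret a limit order on a 0-100 binary contract as a probability
    estimate. A bid at price p suggests the trader believes P(event) is at
    least p/100; an ask suggests at most p/100. We nudge the quoted price
    in the implied direction (the 0.05 margin is an assumed parameter)."""
    p = price / 100.0
    margin = 0.05
    if side == "bid":
        return min(1.0, p + margin)
    if side == "ask":
        return max(0.0, p - margin)
    raise ValueError(f"unknown side: {side}")

def marketcast(orders):
    """Aggregate order-derived estimates with a survey-style statistic
    (here the median) instead of a market matching engine."""
    return median(order_to_probability(side, price) for side, price in orders)

orders = [("bid", 60), ("bid", 55), ("ask", 70), ("ask", 80)]
print(round(marketcast(orders), 3))  # → 0.65
```

The median is a natural survey-style aggregator here because it is robust to extreme orders, but the paper's "simple aggregation methods" may differ.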



Intelligence Is Collective

Lumenogic Research Report, 2012

How does collective intelligence relate to the individual kind? A high-level review of the state of the art in cognitive science suggests that both brains and the best artificial intelligence programs can be described as "wise crowds" (in the sense of Surowiecki, 2004). Conversely, the most celebrated instances of collective intelligence bear deep similarities to the human mind’s organization and mechanisms. A tentative conclusion is that intelligence is collective to its core and that Surowiecki’s recipe for crowd wisdom provides a framework to unify human, artificial and collective intelligence. Some implications for research, business and individual decision makers are explored.



Trading Uncertainty for Collective Wisdom

Book Chapter in Collective Wisdom
Landemore & Elster (Eds.), Cambridge University Press, 2012

Prediction markets have captured the public’s imagination with their ability to predict the future by pooling the guesswork of many. This paper summarizes the evidence and examines the economic, mathematical, and neurological foundations of this form of collective wisdom. Rather than the particular trading mechanism used, the ultimate driver of accuracy seems to be the betting proposition itself: on the one hand, a wager attracts contrarians, which enhances the diversity of opinions that can be aggregated. On the other hand, the mere prospect of reward or loss promotes more objective, less passionate thinking, thereby enhancing the quality of the opinions that can be aggregated.



Betting On a Better World

International Studies Association, Annual Meeting, New York 2009

As The Economist recently wrote, "the most heeded futurists these days are not individuals, but prediction markets, where the informed guesswork of many is consolidated into hard probability." This paper explores the principles driving this powerful new form of collective intelligence, and how it may be applied in the field of International Relations.



Prediction Markets: Does Money Matter? 

with Justin Wolfers, David Pennock, and Brian Galebach
Electronic Markets, 2004, 14:3

The accuracy of prediction markets has been documented for both markets based on real money and those based on play money. To test how much extra accuracy can be obtained by using real money versus play money, we set up a real-world on-line experiment pitting the predictions of a real-money exchange against those of a play-money exchange regarding American football outcomes during the fall-winter 2003-2004 NFL season. As expected, both types of markets exhibited significant predictive power, and remarkable performance compared to individual humans. Perhaps more surprisingly, the play-money markets performed as well as the real-money markets. We speculate that this result reflects two opposing forces: real-money markets may better motivate information discovery while play-money markets may yield more efficient information aggregation.
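The abstract does not name the accuracy metric used. A standard choice for scoring probability forecasts against binary outcomes, shown here with made-up numbers purely as an illustration, is the Brier score (mean squared error, lower is better):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    An uninformed constant forecast of 0.5 scores 0.25; perfect
    forecasting scores 0."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical game-by-game win probabilities from two markets
real_money = [0.70, 0.60, 0.80, 0.40]
play_money = [0.65, 0.55, 0.85, 0.45]
outcomes   = [1, 1, 1, 0]

print(round(brier_score(real_money, outcomes), 4))  # → 0.1125
print(round(brier_score(play_money, outcomes), 4))  # → 0.1375
```

Comparing the two scores over a full season of games is one way to test whether real-money and play-money markets differ in accuracy.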



Macro-markets and Environmental Futures

The Hague Conference on Environment, Security and Sustainable Development, May 2004

Prediction markets offer a new forecasting method whereby people trade future outcomes as on a stock exchange. This results in dynamic consensus probability estimations of future events whose superior accuracy has been documented in numerous domains including box-office, product sales, political elections, sports, regulatory forecasting, etc. Applied to the forecasting of important environmental outcomes, prediction markets could help deepen public awareness of the problems and possible solutions, cut through the fog of ambiguous scientific discourse, evaluate alternative scenarios of doom or salvation, and identify the more trustworthy experts.




A Group Is Always Stronger Than You

CLÉS, February-March 2014

It's proven: even a genius working alone is no match for an ordinary group. Google and the CIA have understood this well, and have embraced the principles of collective intelligence. Discover how 1 + 1 = 3.



Secrets of the Mind

with Bruno Lévy
Ubisoft, 1997; Softkey, 2004

Do you know how your memory works? Why do we have emotions? Why do you dream? How do you learn? How does a baby’s brain develop? Is there a limit to artificial intelligence? Unlock the secrets of your mind and discover some of the theories that attempt to answer these diverse questions. This in-depth educational software features a wealth of information from 11 pioneering psychologists and neuroscientists, including Nobel Prize winners Herbert Simon and Eric Kandel, and features interactive psychological experiments to illustrate key concepts.



The Challenge of the Universe

with Bruno Lévy
Oxford University Press, 1996

Get the big picture on the structure of matter, the origins of the Big Bang, and other scientific mysteries from 13 of the greatest contemporary physicists, including Stephen Hawking and 6 Nobel Prize winners. Watch, listen, and interact as these dynamic thinkers expound their theories, share their discoveries, and explain the most fascinating of scientific concepts in straightforward, jargon-free language. If you’ve ever wondered what the universe is made of, where it comes from, and if it can ever be understood, this CD-ROM is for you.


Chunking Processes and Context Effects in Letter Perception 

Proceedings of the 14th Annual Conference of the Cognitive Science Society, 1992

Chunking is formalized as the dual process of building percepts by recognizing in stimuli chunks stored in memory, and creating new chunks by welding together those found in the percepts. As such, it is a very attractive process with which to account for phenomena of perception and learning. Servan-Schreiber and Anderson (1990) demonstrated that chunking is at the root of the "implicit learning" phenomenon, and Servan-Schreiber (1990; 1991) extended that analysis to cover category learning as well. This paper aims to demonstrate the potential of chunking as a theory of perception by presenting a model of context effects in letter perception. Starting from a set of letter segments the model creates from experience chunks that encode partial letters, then letters, then partial words, and finally words. The model’s ability to recognize letters alone, or in words, pseudowords, or strings of unrelated letters is then tested using a backward masking task. The model reproduces the word and pseudoword superiority effects.
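The dual process described above can be sketched in a toy form (this is an illustration of the general idea, not the paper's model, which uses strength-mediated competition among chunks rather than simple longest-match): percepts are built by covering a stimulus with known chunks, and new chunks are created by welding adjacent percept elements together.

```python
def parse(stimulus, chunks):
    """Build a percept by greedily covering the stimulus with the longest
    known chunks (a crude stand-in for competition among chunks)."""
    percept, i = [], 0
    while i < len(stimulus):
        match = max((c for c in chunks if stimulus.startswith(c, i)),
                    key=len, default=stimulus[i])
        percept.append(match)
        i += len(match)
    return percept

def learn(percept, chunks):
    """Weld adjacent percept elements into new, larger chunks."""
    for a, b in zip(percept, percept[1:]):
        chunks.add(a + b)

chunks = {"t", "h", "e", "th"}
p1 = parse("the", chunks)   # ['th', 'e']
learn(p1, chunks)           # welds 'th' + 'e' into a new chunk 'the'
p2 = parse("the", chunks)   # ['the'] — now perceived as a single unit
```

Repeated over letter segments, letters, and words, this kind of loop yields the hierarchy of progressively larger chunks the abstract describes.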


Online

Classification of Dot Patterns with Competitive Chunking

Proceedings of the 12th Annual Conference of the Cognitive Science Society, 1990

Chunking, a familiar idea in cognitive science, has recently been formalized by Servan-Schreiber and Anderson (1990) into a theory of perception and learning, and it successfully simulated the human acquisition of an artificial grammar through the simple memorization of exemplar sentences. In this article I briefly present the theory, called Competitive Chunking, or CC, as it has been extended to deal with the task of encoding random dot patterns. I explain how CC can be applied to the classic task of classifying such patterns into multiple categories, and report a successful simulation of data collected by Knapp and Anderson (1984). The tentative conclusion is that people seem to process dot patterns and artificial grammars in the same way, and that chunking is an important part of the process.


Online

Learning Artificial Grammars with Competitive Chunking

with John R. Anderson
Journal of Experimental Psychology: Learning, Memory, & Cognition, 1990, 16:4

When exposed to a regular stimulus field, for instance, that generated by an artificial grammar, subjects unintentionally learn to respond efficiently to the underlying structure (Miller, 1958; Reber, 1967). We explored the hypothesis that the learning process is chunking and that grammatical knowledge is implicitly encoded in a hierarchical network of chunks. We trained subjects on exemplar sentences while inducing them to form specific chunks. Their knowledge was then assessed through judgments of grammaticality. We found that subjects were less sensitive to violations that preserved their chunks than to violations that did not. We derived the theory of competitive chunking (CC) and found that it successfully reproduces, via computer simulations, both Miller’s experimental results and our own. In CC, chunks are hierarchical structures strengthened with use by a bottom-up perception process. Strength-mediated competitions determine which chunks are created and which are used by the perception process.



A Connectionist Approach to the Diagnosis of Dementia

with Benoit H. Mulsant
Proceedings of the 12th Annual Symposium on Computer Applications in Medical Care, 1988

This paper describes an implemented connectionist network that performs clinical diagnosis in the domain of dementia. During the past decade, connectionism (also called parallel distributed processing or neural processing) has been established as a new cognitive and computational paradigm, with strong claims that it provides powerful mechanisms to bring solutions to problems previously intractable. To study the suitability of connectionist networks to perform a sequential diagnostic classification task under uncertainty, we have implemented a network that learns to diagnose cases of dementia. We describe in detail the implementation, training, and behavior of this network. We also discuss directions for future research suggested by the limitations of this network.
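The basic connectionist computation at work in such a network can be sketched with a single unit (the features, weights, and single-unit architecture below are invented for illustration; the paper's network is larger and learns its weights from training cases): symptom activations are combined in a weighted sum and squashed into a probability-like diagnostic output.

```python
import math

def sigmoid(x):
    """Logistic squashing function mapping any real number to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def diagnose(symptoms, weights, bias):
    """One connectionist unit: weighted sum of symptom activations plus a
    bias, squashed to a probability-like confidence in the diagnosis."""
    return sigmoid(sum(w * s for w, s in zip(weights, symptoms)) + bias)

# Hypothetical binary features: memory loss, disorientation, sudden onset
p = diagnose([1, 1, 0], [2.0, 1.5, 0.5], -2.0)
print(round(p, 3))  # → 0.818
```

Training then amounts to adjusting the weights and bias so the output matches the known diagnoses of the training cases; stacking such units in layers gives the networks the paper studies.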



For more than 10 years, Lumenogic has been helping companies harness their collective intelligence through prediction markets, idea competitions, and other collaborative processes.


© Lumenogic, 2017