The tutorials and invited talks will take place in the Centre de Transfert: tutorials in the Apollo/Ariane rooms and invited talks in the amphitheatre.
TUTORIALS (1st July, Afternoon)
MATTHIAS TROFFAES: "A Gentle Introduction to Imprecise Probabilities: Brief History and First Principles"
THIERRY DENOEUX: "Introduction to Belief Functions" (tentative title)
INVITED SPEAKERS
ALESSIO BENAVOLI: "Pushing Dynamic Estimation to the Extremes: from the Moon to Imprecise Probability."
Summary
LINDA VAN DER GAAG: "Recent Advances in Sensitivity Analysis of Bayesian Networks." Summary
ISAAC ELISHAKOFF: "Recent Developments in Applied Mechanics with Uncertainties" Summary
CHRISTOPHE LABREUCHE: "Robustness in Multi-Criteria Decision Making and its relation with Imprecise Probabilities." Summary
JEAN-MARC TALLON: "Ambiguity and ambiguity attitudes in economics." Summary
SUMMARIES
Pushing Dynamic Estimation to the Extremes: from the Moon to Imprecise Probability.
Dynamic estimation deals with the problem of estimating the state of a dynamic system on the basis of observations. Some examples are:
- the estimate of the position/speed of a car through GPS measurements;
- the estimate of the future price of a derivative through observations of the current interest rate and past prices;
- the estimate of the hatching time of insect eggs through measurements of the environmental temperature; etc.
One way of approaching this problem is by means of the theory of Hidden Markov Models (HMMs) with scalar or vector real-valued state variables.
Here, the initial condition, the dynamics (state transition) and the observation model are represented by (conditional) probability distributions, and the estimation problem is then solved by inferring the state of the system given all the past observations.
This is the so-called Bayesian approach to dynamic estimation.
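In its standard textbook form, the Bayesian filtering recursion alternates a prediction step with an update step:

    p(x_t \mid y_{1:t-1}) = \int p(x_t \mid x_{t-1}) \, p(x_{t-1} \mid y_{1:t-1}) \, dx_{t-1},
    p(x_t \mid y_{1:t}) \propto p(y_t \mid x_t) \, p(x_t \mid y_{1:t-1}).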
If the dynamics and observations are linear functions of the state and the probability distributions are assumed to be Gaussian, it is well known that the conditional distribution of the state given the past observations is still Gaussian. In this case, this conditional distribution can simply be computed by propagating the mean and covariance of the state through time.
This closed-form solution to the Bayesian state estimation problem is known as the Kalman filter (KF).
The KF (and its numerous variants) has dominated the scene of dynamic estimation in engineering and science applications for decades.
The attractiveness of the KF lies in the fact that it is simple and optimal in several different senses: in the Gaussian case it is the minimum-variance estimator; in the non-Gaussian case it is the best linear minimum-variance estimator.
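As a minimal sketch (with illustrative numbers, not code from the talk), one prediction/update cycle of a scalar KF for the linear-Gaussian model x_t = A x_{t-1} + w_t, y_t = H x_t + v_t can be written in Python as:

    # Minimal scalar Kalman filter step: w ~ N(0, Q) is process noise,
    # v ~ N(0, R) is observation noise; all parameter values are illustrative.
    def kalman_step(mean, var, y, A=1.0, H=1.0, Q=0.1, R=0.5):
        # Prediction: propagate the Gaussian state estimate through the dynamics.
        mean_pred = A * mean
        var_pred = A * var * A + Q
        # Update: correct the prediction with the new observation y.
        K = var_pred * H / (H * var_pred * H + R)   # Kalman gain
        mean_post = mean_pred + K * (y - H * mean_pred)
        var_post = (1.0 - K * H) * var_pred
        return mean_post, var_post

    # Example: filter a short sequence of noisy observations.
    mean, var = 0.0, 1.0
    for y in [0.9, 1.1, 0.95]:
        mean, var = kalman_step(mean, var, y)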
However, the underlying modelling assumptions of the KF (and, more generally, of Bayesian filtering), such as perfect knowledge of the probability distributions of the state transition and observation model, are not met in many practical applications. For this reason, suitable generalizations of the KF, known as robust filters, have been sought.
More recently, there has been increasing interest, also in the imprecise probability community, in the generalization of hidden Markov models to the case in which the state transition and observation models are described by sets of probabilities. Here too, the aim is to obtain more robust and more reliable inferences than those derived via standard approaches.
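To give a flavour of the set-valued idea (a deliberately naive sketch, not one of the estimators from the talk): if the transition coefficient A is only known to lie in an interval, one can run the standard filter for each extreme value and report the envelope of the posterior means; rigorous imprecise filters instead propagate the whole set of distributions.

    # A is only known to lie in [0.8, 1.0]; reuses kalman_step from the sketch above.
    filters = {A: (0.0, 1.0) for A in (0.8, 1.0)}    # per-extreme (mean, var)
    for y in [0.9, 1.1, 0.95]:
        for A in filters:
            filters[A] = kalman_step(*filters[A], y, A=A)
    lower = min(m for m, v in filters.values())      # lower envelope of the means
    upper = max(m for m, v in filters.values())      # upper envelope of the means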
The goal of this talk is:
- to review the state of the art of imprecise hidden Markov models, with particular emphasis on the case of dynamic systems;
- to present two or three implementations of imprecise dynamic estimators based on different models for state transition and observation;
- to discuss optimality criteria for imprecise dynamic estimators;
- to show the connections with the so-called robust filters developed in the area of control theory;
- to show the application of these estimators to practical problems.
Recent Advances in Sensitivity Analysis of Bayesian Networks.
Sensitivity analysis is a general technique for investigating the robustness of the output of a mathematical model and is performed for various purposes. The practicability of conducting such an analysis of a Bayesian network has recently been studied extensively, resulting in a variety of new insights and effective methods, ranging from properties of the mathematical relation between a parameter and an output probability of interest, to methods for establishing the effects of parameter imprecision on decisions based on the output distributions computed from a network. In this talk, we present a survey of some of these recent research results and explain their significance.
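One well-known such property (illustrated here with a made-up two-node network X -> Y, not an example from the talk) is that an output probability is a quotient of two functions that are each linear in the parameter being varied; brute-force evaluation makes this easy to see:

    # Vary the parameter x = P(X=1) and watch the output P(X=1 | Y=1), which by
    # Bayes' rule equals (p1*x) / (p1*x + p0*(1-x)): a ratio of linear functions.
    p1, p0 = 0.9, 0.2            # P(Y=1|X=1) and P(Y=1|X=0), illustrative values

    def posterior(x):
        return p1 * x / (p1 * x + p0 * (1.0 - x))

    for x in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"x = {x:.1f}  ->  P(X=1 | Y=1) = {posterior(x):.3f}")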
Recent Developments in Applied Mechanics with Uncertainties.
It has been recognized during past decades that deterministic mechanics as such cannot answer all problems that arise in engineering. For example, the safety factor that is utilized in engineering design cannot possibly be justified within deterministic mechanics. Thus, uncertainty analysis is introduced into deterministic analysis 'via the back door.' The realistic analysis and design of structures demands the introduction of uncertainty analyses. To accomplish this goal, until very recently the only methodology used was the probabilistic analysis initiated by the great French scientists Blaise Pascal and Pierre Fermat. It is interesting to note that the first attempt to utilize probability in engineering appears to have been a dissertation by Max Mayer, published in 1926 and devoted to safety factor allocation in civil engineering. In this spirit, the lecture first reviews the safety factor idea and then the most common method applied in the stochastic analysis of nonlinear structures, namely the stochastic linearization technique.
Then the lecture deals with alternatives to probabilistic analysis, namely interval and ellipsoidal analyses, and shows which one should be used in which circumstances. In these analyses, no probability measures or fuzzy measures need to be known. They rely on the scarce knowledge (often all that is available) of the uncertain variables involved: only bounds, given as either intervals or ellipsoids, are incorporated into the analysis. The notion of combined optimization and anti-optimization will be discussed. In the last part, the lecture reviews the notion of the fuzzy safety factor.
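As a minimal illustration of the interval approach (toy numbers, not taken from the lecture): propagating bounds through the axial-stress formula sigma = F / A requires no distributional assumptions, only endpoint bookkeeping:

    # Only bounds on the load F and the cross-section A are assumed known.
    F_lo, F_hi = 9.0e3, 11.0e3       # load bounds in N (illustrative)
    A_lo, A_hi = 0.9e-4, 1.1e-4      # cross-section bounds in m^2 (illustrative)

    # For a quotient of positive intervals the extremes occur at the endpoints.
    sigma_lo = F_lo / A_hi
    sigma_hi = F_hi / A_lo
    print(f"stress lies in [{sigma_lo / 1e6:.1f}, {sigma_hi / 1e6:.1f}] MPa")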
Many researchers prefer to use one of these techniques exclusively and maintain that only it is useful. In fact, it appears that a veritable Tower of Babel has been erected between the different methodologies of uncertainty analysis. As pragmatic creatures, engineers need to know each of these techniques and to use them in different circumstances, depending on the character and the amount of the available data.
Robustness in Multi-Criteria Decision Making and its relation with Imprecise Probabilities.
Multi-Criteria Decision Aid (MCDA) aims at helping an individual to make choices among alternatives described by several attributes, from a (small) set of learning data representing her preferences. MCDA has a wide range of applications in smart cities, public policy assessment, recommender systems and so on. Among the variety of available decision models, one can cite the weighted majority, additive utility, weighted sum or the Choquet integral.
Once the expression of the decision model has been chosen, the generation of choices among alternatives is classically done as follows. In a constraint-based approach, one starts from a set of learning data (representing, for instance, comparisons of alternatives) and looks for the values of the model parameters that are compatible with the learning data and that maximize some functional, e.g. an entropy or a separation variable on the learning data. Comparisons among alternatives are then obtained by applying the model with the previously constructed parameters. The major difficulty the decision maker faces is that there usually does not exist a unique value of the parameters compatible with the learning data. Hence this approach introduces much arbitrariness, since the generated preferences are much stronger than the learning data.
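For the weighted sum, this step reduces to a small linear program. A minimal sketch (with an invented learning datum "p is preferred to q" over three criteria; scipy is assumed available): maximize a separation variable eps subject to w.(p - q) >= eps, with w in the probability-like simplex.

    import numpy as np
    from scipy.optimize import linprog

    p = np.array([0.9, 0.3, 0.4])    # learning datum: p is preferred to q
    q = np.array([0.4, 0.6, 0.4])

    # Variables (w1, w2, w3, eps); linprog minimises, so the objective is -eps.
    res = linprog(c=[0.0, 0.0, 0.0, -1.0],
                  A_ub=[np.append(q - p, 1.0)],             # w.(q - p) + eps <= 0
                  b_ub=[0.0],
                  A_eq=[[1.0, 1.0, 1.0, 0.0]], b_eq=[1.0],  # weights sum to one
                  bounds=[(0.0, None)] * 3 + [(None, None)])
    w, eps = res.x[:3], res.x[3]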
Robust preference relations have recently been introduced in MCDA to overcome this difficulty. An alternative is said to be necessarily preferred to another if the first dominates the second for every value of the parameters compatible with the learning data. In Artificial Intelligence, this operator is often called entailment; it is actually a closure operator. This necessity preference relation is usually incomplete, unless the model is completely specified by the preferential information of the decision maker.
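The necessity test itself is again a linear program (same invented data and assumptions as the sketch above): a is necessarily preferred to b exactly when the minimum of w.(a - b), over all compatible weight vectors, is nonnegative.

    a = np.array([0.8, 0.5, 0.6])    # two alternatives to compare (invented)
    b = np.array([0.6, 0.7, 0.5])

    # Minimise w.(a - b) over the weight vectors compatible with the learning data.
    res = linprog(c=a - b,
                  A_ub=[q - p], b_ub=[0.0],                 # enforces w.(p - q) >= 0
                  A_eq=[np.ones(3)], b_eq=[1.0],
                  bounds=[(0.0, None)] * 3)
    print("a necessarily preferred to b:", res.fun >= 0.0)

(With these particular numbers the minimum is negative, so the pair remains incomparable.)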
The introduction of robust preference relations brings many new challenges:
- characterization: which axioms characterize this preference relation?
- algorithmic aspects: how to design efficient algorithms to construct it?
- explanation: how to explain to the decision maker the recommended robust preferences? In other words, how are the recommendations derived from the learning data?
We will address these points in the talk.
We will also mention some similarities between this new approach and imprecise probabilities. For instance, using the analogy between a weight vector in a weighted sum and a probability distribution, the set of weight vectors compatible with a learning set is similar to an imprecise probability.
Ambiguity and ambiguity attitudes in economics.
Ambiguity and ambiguity attitudes are not always disentangled in models of decision under uncertainty used in economics.
I plan to survey some models that do attempt to disentangle these two phenomena and to draw some implications, for instance for ambiguity sharing and asset pricing.
More information about the tutorials and talks at ISIPTA 2013 coming soon!