
# Bayesian Inference Algorithms Revisited

## Presentation transcript: "Bayesian Inference Algorithms Revisited"


Inference

Two optimisation problems

Symbolic Simplification

Exact symbolic simplification (example)

Question dependent: 9×10⁶

Reordering

Applying normalization

Factorizing

Result (1): 40

Result (2): (21×10) + 9 + 1 = 220

Summary: Reorder, Normalize, Factorize
SPI: Symbolic Probabilistic Inference; SRA: Successive Restriction Algorithm
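The payoff of reordering and factorizing sums can be checked numerically. Below is a minimal sketch (distributions and cardinalities are made up for illustration) that marginalizes a three-variable chain two ways: naively through the full joint, and with each sum pushed inside the product, which is what the reordering/factorization step buys:

```python
import numpy as np

# Hypothetical chain P(A) P(B|A) P(C|B) with |A| = |B| = |C| = 10;
# we want the marginal P(C).
rng = np.random.default_rng(0)
pA = rng.dirichlet(np.ones(10))               # P(A)
pB_A = rng.dirichlet(np.ones(10), size=10)    # P(B|A), one row per value of A
pC_B = rng.dirichlet(np.ones(10), size=10)    # P(C|B), one row per value of B

# Naive: materialize the full joint, then marginalize (O(|A||B||C|) work).
joint = pA[:, None, None] * pB_A[:, :, None] * pC_B[None, :, :]
pC_naive = joint.sum(axis=(0, 1))

# Reordered and factorized: two small matrix products instead.
pB = pA @ pB_A        # sum_A P(A) P(B|A)
pC_fact = pB @ pC_B   # sum_B P(B) P(C|B)

assert np.allclose(pC_naive, pC_fact)
```

Both routes give the same marginal; only the operation count differs, which is the point of the slides' 9×10⁶-to-220 reduction.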

Question independent

Sharing parts (1)–(7)

Message passing algorithms

Example 2

Question dependent

Junction Tree Algorithm

Cut-Set Algorithm

Max-Product & Min-Sum Algorithms
No free variables

Viterbi Algorithm
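As a concrete instance of max-product decoding, here is a minimal Viterbi sketch on a toy two-state HMM; every number below is invented for the example:

```python
import numpy as np

# Viterbi (max-product in log space) on a toy 2-state HMM.
log = np.log
pi = log(np.array([0.6, 0.4]))                # initial P(state)
A  = log(np.array([[0.7, 0.3], [0.4, 0.6]]))  # transition P(next|current)
B  = log(np.array([[0.9, 0.1], [0.2, 0.8]]))  # emission P(obs|state)
obs = [0, 0, 1, 1, 1]

delta = pi + B[:, obs[0]]        # best log-prob of any path ending in each state
back = []                        # backpointers
for o in obs[1:]:
    scores = delta[:, None] + A  # scores[i, j]: best path through i arriving at j
    back.append(scores.argmax(axis=0))
    delta = scores.max(axis=0) + B[:, o]

# Backtrack the most probable state sequence.
path = [int(delta.argmax())]
for bp in reversed(back):
    path.append(int(bp[path[-1]]))
path.reverse()
print(path)   # → [0, 0, 1, 1, 1]
```

The same dynamic program with max replaced by sum gives the forward pass of sum-product message passing.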

Approximate symbolic simplification: Variational methods
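Variational methods trade exact simplification for an optimization over a tractable family. One classic toy case where the mean-field fixed point is available in closed form is a correlated Gaussian approximated by a factorized one: each factor's precision equals the corresponding diagonal entry of the joint precision matrix. A quick numeric check (correlation value chosen arbitrarily):

```python
import numpy as np

# Mean-field sketch: approximate p = N(0, Sigma), a correlated 2-D
# Gaussian, by a factorized q(x1) q(x2). For Gaussians the coordinate-
# ascent fixed point is known: each q_i is N(0, 1 / Lambda_ii),
# where Lambda = Sigma^{-1}.
rho = 0.8
Sigma = np.array([[1.0, rho], [rho, 1.0]])
Lambda = np.linalg.inv(Sigma)

q_var = 1.0 / np.diag(Lambda)   # variances of the factorized approximation
print(q_var)                    # each is 1 - rho^2 = 0.36 < 1
```

The factorized approximation underestimates the marginal variances (0.36 instead of 1), a well-known feature of mean-field fits.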

Crunching numbers: Sampling methods
- Monte Carlo (MC)
- Importance sampling
- Rejection sampling
- Markov Chain Monte Carlo (MCMC): Metropolis sampling, Gibbs sampling

Reference: D. MacKay, *Information Theory, Inference and Learning Algorithms* (2003)
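Of the samplers listed above, Metropolis is the shortest to sketch. A minimal illustration (target, proposal scale, and iteration counts are all choices of this example, not the slides'): draw from a standard normal with a symmetric random-walk proposal:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    return -0.5 * x * x          # log N(0,1), up to an additive constant

x, samples = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal(scale=1.0)   # symmetric random-walk proposal
    # Accept with probability min(1, target(prop) / target(x)).
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x)

samples = np.array(samples[5_000:])    # discard burn-in
print(samples.mean(), samples.std())   # close to 0 and 1
```

Because the proposal is symmetric, the Hastings correction cancels and only the target ratio appears in the acceptance test.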

Bayesian Learning Revisited

Data and Preliminary knowledge

Using Preliminary Knowledge
How to Deal with Data?

Direct problem: Inverse problem:

Bernoulli's Urn (1): Variables, Draw, Decomposition, Parametrical Form
Preliminary Knowledge p: "We draw from an urn containing w white balls and b black balls"

Bernoulli's Urn (2): Variables, Decomposition, Parametrical Form, Note

Bernoulli's Urn (3)
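The inverse problem for the urn can be sketched in a few lines. Assumptions of this example (not from the slides): the urn holds 10 balls, w of them white with w unknown, draws are with replacement, the prior over w is uniform, and the observed draws are made up:

```python
import numpy as np

n_balls = 10
prior = np.full(n_balls + 1, 1.0 / (n_balls + 1))  # uniform over w = 0..10
draws = [1, 1, 0, 1, 1, 0, 1, 1]                   # 1 = white (invented data)

w = np.arange(n_balls + 1)
p_white = w / n_balls                   # P(white draw | w)
k, n = sum(draws), len(draws)
likelihood = p_white**k * (1 - p_white)**(n - k)    # Bernoulli likelihood
posterior = prior * likelihood
posterior /= posterior.sum()            # normalize over compositions
print(posterior.argmax())               # most probable composition → 7
```

The direct problem is the likelihood line; the inverse problem is the Bayes inversion two lines later.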

Parameters Identification
Variables: Decomposition:

Model Selection Variables: Decomposition:

Summary

Entropy Principles. Content:
- Entropy Principle Statement
- Frequencies and Laplace succession law
- Observables and Exponential laws
- Wolf's dice

Entropy Principle Statement
Flips of a coin: 9553 heads. What is the probability distribution of the coin?
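Two standard answers to that question can be computed directly; the 20 000-flip total comes from the companion slide, and this arithmetic is an illustration rather than the deck's own derivation:

```python
heads, flips = 9553, 20_000

# Maximum-likelihood answer: the observed frequency.
freq = heads / flips

# Laplace's rule of succession (uniform prior over the coin's bias).
laplace = (heads + 1) / (flips + 2)

print(freq, laplace)   # nearly identical at this sample size
```

With this much data the prior's pull is negligible, which is why the two estimates agree to several decimals.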

Observables and Exponential Laws
Constraint levels; Maximum Entropy Distribution (proof); Partition Function; Constraints differential equation

20 000 Flips: Observable; Constraint levels; Maximum Entropy Distribution; Partition Function; Constraints differential equation
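The exponential-law machinery above can be exercised on a die. Under a single observed-mean constraint, the maximum-entropy distribution over the faces is p_k ∝ exp(−λk), normalized by the partition function; the Lagrange multiplier λ is fixed by matching the constraint. The target mean 3.6 below is a made-up value for illustration:

```python
import numpy as np

faces = np.arange(1, 7)
target_mean = 3.6

def mean_at(lam):
    w = np.exp(-lam * faces)          # unnormalized exponential weights
    return (faces * w).sum() / w.sum()  # E[face], Z = w.sum() is the partition fn

# mean_at is decreasing in lam, so solve mean_at(lam) = target by bisection.
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_at(mid) > target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

p = np.exp(-lam * faces)
p /= p.sum()
print(p, (faces * p).sum())   # mean ≈ 3.6
```

At λ = 0 the result collapses to the uniform die (mean 3.5); a mean above 3.5 forces λ slightly negative, tilting mass toward the high faces.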

Frequencies and Laplace Succession Law
Preliminary Knowledge:
1. Each of the digits is a number
2. The data come from the same phenomenon
3. A single variable V has been observed times
4. The order of these observations is not relevant
5. The variable V may take 6 different values
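Under this preliminary knowledge, the Laplace succession law estimates each value's probability from its count. A small worked example (the counts are invented, not the slides' data):

```python
# Laplace succession law for a variable with 6 possible values:
# P(V = v_i) = (n_i + 1) / (N + 6), so no value gets zero probability.
counts = [3, 7, 2, 5, 4, 9]   # n_i: times each value was observed (made up)
N = sum(counts)               # total number of observations

probs = [(n + 1) / (N + 6) for n in counts]
print(probs, sum(probs))      # the estimates sum to 1
```

The +1 per value (and +6 in the denominator) is exactly the uniform-prior smoothing that distinguishes this estimate from the raw frequencies.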

Wolf's dice (1). Hypothesis H1: the excavations shifted the center of gravity

Wolf's dice (2). Hypothesis H2: the die is oblong along the 1–6 direction and the excavations shifted the center of gravity

Wolf's dice (3) Inverse Problem:

Theoretical Basis. Objective: justify the use of the entropy function H
Content:
- What is a good representation?
- Combinatorial justification
- Information theory justification
- Bayesian justification
- Axiomatic justification
- Entropy concentration theorem justification

What is a Good Representation?

Combinatorial Justification
| Statistical Mechanics | Probabilistic Inference |
| --- | --- |
| q microscopic states | q propositions |
| Macroscopic state | Distribution |
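The combinatorial justification rests on the multinomial multiplicity W = N! / (n₁! … n_q!): the number of microscopic arrangements realizing given counts, whose normalized logarithm converges to the entropy of the frequencies. A quick numeric check with arbitrarily chosen counts:

```python
import numpy as np
from math import lgamma

# (1/N) log W approaches H(n/N) as N grows (Stirling's approximation).
counts = np.array([50, 30, 20])   # made-up counts over q = 3 states
N = counts.sum()

logW = lgamma(N + 1) - sum(lgamma(n + 1) for n in counts)  # log multinomial
freqs = counts / N
H = -(freqs * np.log(freqs)).sum()                          # entropy of frequencies

print(logW / N, H)   # close, and converging for larger N
```

Maximizing entropy therefore picks the macroscopic distribution realizable in the greatest number of microscopic ways.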

Entropy Principles Preliminary Knowledge
"Exchangeability" Preliminary Knowledge: D has no meaningful order Each "experience" in D is independent from the others knowing the model and its parameters Each "experience" in D corresponds to a unique phenomenon

Maximum Entropy for Frequencies
Variables: Decomposition: Proof

Minimum X-entropy with Observed Frequencies
Variables: Decomposition:

Shannon’s justification
Shannon, C. E. (1948); "A Mathematical Theory of Communication"; Bell System Technical Journal; 27. Reprinted as Shannon, C. E. & Weaver, W. (1949); *The Mathematical Theory of Communication*; University of Illinois Press, Urbana.

Shore’s Axiomatic Justification
Shore, J.E. & Johnson, R.W. (1980) ; “Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy” ; IEEE Transactions on Information Theory ; IT

Entropy Concentration Theorem
Jaynes, E. T. (1982); "On the Rationale of Maximum Entropy Methods"; Proceedings of the IEEE. Robert, C. (1990); "An Entropy Concentration Theorem: Applications in Artificial Intelligence and Descriptive Statistics"; Journal of Applied Probability.

Probabilistic reasoning and decision making in sensory-motor systems
Want to know more? Bayesian-Programming.org. *Probabilistic Reasoning and Decision Making in Sensory-Motor Systems*, Springer, STAR series.
