Windsor Diary 19

Refresher on Bayesian and Frequentist Concepts

Frequentists: From the Neyman/Pearson/Wald setup. An orthodox view that sampling is infinite and decision rules can be sharp.

Bayesians: From Bayes/Laplace/de Finetti tradition. Unknown quantities are treated probabilistically and the state of the world can always be updated.

Likelihoodists: From Fisher. Single-sample inference based on maximizing the likelihood function and relying on the Birnbaum (1962) theorem. (Bayesians, but they don't know it.)

According to my friend Jeff Gill (ACCP 37th Annual Meeting; casella@ufl.edu):

Frequentist: Data are a repeatable random sample; underlying parameters remain constant during this repeatable process; parameters are fixed.

Bayesian: Data are observed from the realized sample; Parameters are unknown and described probabilistically; Data are fixed.

Three General Steps for Bayesian Modeling:

I. Specify a probability model for unknown parameter values that includes some prior knowledge about the parameters if available.

II. Update knowledge about the unknown parameters by conditioning this probability model on observed data.

III. Evaluate the fit of the model to the data and the sensitivity of conclusions to the assumptions.
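To make these three steps concrete, here is a minimal Python sketch (not part of the original notes) using a conjugate Beta-Binomial model; the Beta(2, 2) prior and the 7-heads-out-of-10 data are made-up numbers for illustration only.

# Minimal Beta-Binomial sketch of the three steps (illustrative assumptions throughout).
import numpy as np
from scipy import stats

# I. Prior: theta ~ Beta(2, 2) encodes mild prior knowledge about a coin's bias.
a0, b0 = 2.0, 2.0

# Hypothetical observed data: 7 heads out of 10 flips.
heads, n = 7, 10

# II. Condition on the data: the Beta prior is conjugate to the Binomial likelihood,
#     so the posterior is Beta(a0 + heads, b0 + n - heads).
posterior = stats.beta(a0 + heads, b0 + n - heads)
print("posterior mean:", posterior.mean())

# III. Check fit and sensitivity: draw replicated data from the posterior predictive,
#      compare with the observed count, and repeat with a different prior.
rng = np.random.default_rng(0)
theta_draws = posterior.rvs(size=5000, random_state=rng)
heads_rep = rng.binomial(n, theta_draws)
print("posterior predictive P(replicated heads >= 7):", np.mean(heads_rep >= 7))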

The History of Bayesian Statistics–Milestones:

Reverend Thomas Bayes (1702-1761). Pierre-Simon Laplace. Pearson (Karl), Fisher, Neyman and Pearson (Egon), Wald. Jeffreys, de Finetti, Good, Savage, Lindley, Zellner. A world divided (mainly over practicality). The revolution: Gelfand and Smith (1990). Today...


Differences Between Bayesians and Frequentists:

Bayesian: View the world probabilistically, rather than as a set of fixed phenomena that are either known or unknown. Prior information abounds and it is important and helpful to use it. Very careful about stipulating assumptions and are willing to defend them. Every statistical model ever created in the history of the human race is subjective; we are willing to admit it.

Frequentist: Parameters of interest are fixed and unchanging under realistic circumstances. No information prior to the model specification. Statistical results assume that data were from a controlled experiment. Nothing is more important than repeatability, no matter what we pay for it.

Bring what is needed to solve the problem!

Frequentist: Evaluative paradigm; repeatability can be important.

Bayesian: Modeling Paradigm; Inference can be appropriate.


Bayesian Modelling: An Information Revolution?

We are in an era of abundant data; we need tools for modelling, searching, visualising, and understanding large data sets.

+ Society: the web, social networks, mobile networks, government, digital archives.

+ Science: large-scale scientific experiments, biomedical data, climate data, scientific literature.

+ Business: e-commerce, electronic trading, advertising, personalisation.

A model describes data that one could observe from a system. If we use the mathematics of probability theory to express all forms of uncertainty and noise associated with our model...then inverse probability (i.e. Bayes rule) allows us to infer unknown quantities, adapt our models, make predictions and learn from data.

Bayes Rule: P(\theta | D) = \frac{P(D | \theta)\, P(\theta)}{P(D)}, i.e. posterior \propto likelihood \times prior.

Machine Learning seeks to learn models of data: define a space of possible models; learn the parameters and structure of the models from data; make predictions and decisions.

Canonical Machine Learning Problems: Linear Regression; Polynomial Regression; Clustering with Gaussian Mixtures (Density Estimation). 

Bayesian Machine Learning - Everything follows from two simple rules:

Sum rule: P(x) = \sum_y P(x, y). Product rule: P(x, y) = P(x)\, P(y|x).

Prediction: P(x | D, m) = \int P(x | \theta, D, m)\, P(\theta | D, m)\, d\theta.
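These rules can be checked directly with a discrete parameter, which also turns the prediction integral into a sum. The grid of coin biases and the observed flips below are hypothetical, chosen only to illustrate the mechanics.

# Discrete illustration of the sum rule, product rule, Bayes rule and prediction (sketch).
import numpy as np

thetas = np.array([0.3, 0.5, 0.8])       # assumed grid of possible coin biases
prior = np.array([1/3, 1/3, 1/3])        # P(theta | m): uniform prior over the grid
data = np.array([1, 1, 0, 1, 1])         # hypothetical flips (1 = heads)

# Likelihood P(D | theta, m), built flip by flip with the product rule.
lik = np.prod(np.where(data[:, None] == 1, thetas, 1 - thetas), axis=0)

# Sum rule: marginal likelihood P(D | m) = sum_theta P(D, theta | m).
evidence = np.sum(lik * prior)

# Bayes rule: posterior P(theta | D, m).
posterior = lik * prior / evidence

# Prediction: P(x = heads | D, m) = sum_theta P(x = heads | theta) P(theta | D, m),
# the discrete analogue of the integral above.
print("posterior:", posterior, " P(next flip = heads | D):", np.sum(thetas * posterior))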

Model Comparison:

P(m|D) = \frac{P(D|m)\, P(m)}{P(D)}, where the marginal likelihood is P(D|m) = \int P(D|\theta, m)\, P(\theta|m)\, d\theta.
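A small worked sketch of this comparison, with made-up data and two hypothetical models: m1 fixes the coin bias at 0.5, while m2 places a Beta(1, 1) prior on it and integrates it out.

# Model comparison via marginal likelihoods (illustrative data and models).
import numpy as np
from scipy.special import betaln

heads, n = 9, 10                      # hypothetical data D: 9 heads in 10 flips
log_prior_m = np.log([0.5, 0.5])      # P(m): both models equally probable a priori

# Model m1: fair coin, so P(D | m1) = 0.5^n for this particular sequence.
log_pD_m1 = n * np.log(0.5)

# Model m2: theta ~ Beta(1, 1); integrating theta out gives
# P(D | m2) = B(heads + 1, n - heads + 1) / B(1, 1) for the same sequence.
log_pD_m2 = betaln(heads + 1, n - heads + 1) - betaln(1, 1)

# Bayes rule over models: P(m | D) proportional to P(D | m) P(m).
log_joint = np.array([log_pD_m1, log_pD_m2]) + log_prior_m
post_m = np.exp(log_joint - np.logaddexp.reduce(log_joint))
print("P(m | D):", post_m)            # the flexible model m2 wins for this data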

Consider a robot. In order to behave intelligently, the robot should be able to represent beliefs  about propositions in the world:

“my charging station is at location (x,y,z)”

“my rangefinder is malfunctioning”

“that stormtrooper is hostile”

We want to represent the strength of these beliefs numerically in the brain of the robot, and we want to know what mathematical rules we should use to manipulate those beliefs. Let’s use b(x) to represent the strength of belief in (plausibility of) proposition x.

Consistency: 

+ If a conclusion can be reasoned in several ways, then each way should lead to the same answer.

+ The robot must always take into account all relevant evidence.

+ Equivalent states of knowledge are represented by equivalent plausibility assignments.

Consequence: Belief functions (e.g. b(x), b(x|y), b(x, y)) must satisfy the rules of probability theory, including sum rule, product rule and therefore Bayes rule.

Asymptotic Certainty:

Assume the data set D_n, consisting of n data points, was generated from some true θ^∗. Then, under some regularity conditions, as long as p(θ^∗) > 0, \lim_{n\to\infty} p(\theta | D_n) = \delta(\theta - \theta^*). In the unrealizable case, where the data were generated from some p^*(x) that cannot be modelled by any θ, the posterior converges to \lim_{n\to\infty} p(\theta | D_n) = \delta(\theta - \hat{\theta}), where \hat{\theta} minimizes KL(p^*(x) \,\|\, p(x|\theta)):

\hat{\theta} = \mathrm{argmin}_{\theta} \int p^*(x) \log \frac{p^*(x)}{p(x|\theta)}\, dx = \mathrm{argmax}_{\theta} \int p^*(x) \log p(x|\theta)\, dx.
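A quick simulation shows this concentration in the realizable case; the true parameter 0.7 and the Beta(1, 1) prior are assumptions of the sketch, not part of the notes.

# Posterior concentration as n grows (illustrative simulation, realizable case).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
theta_true = 0.7                                      # assumed true parameter

for n in [10, 100, 1000, 10000]:
    x = rng.binomial(1, theta_true, size=n)           # data generated from theta_true
    post = stats.beta(1 + x.sum(), 1 + n - x.sum())   # conjugate update of a Beta(1, 1) prior
    lo, hi = post.ppf([0.025, 0.975])
    print(n, "posterior mean %.3f, 95%% interval width %.3f" % (post.mean(), hi - lo))
# The interval width shrinks toward zero: the posterior approaches a point mass at theta_true.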

Asymptotic Consensus:

Consider two Bayesians with different priors, p_1(θ) and p_2(θ), who observe the same data D. Assume both Bayesians agree on the set of possible and impossible values of θ: \{θ : p_1(θ) > 0\} = \{θ : p_2(θ) > 0\}. Then, in the limit n → ∞, the posteriors p_1(θ|D_n) and p_2(θ|D_n) will converge (in the uniform distance between distributions, \rho(P_1, P_2) = \sup_E |P_1(E) - P_2(E)|).
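The sketch below (with an assumed true bias of 0.4 and two hypothetical Beta priors sharing the same support) approximates this distance numerically and shows it shrinking as n grows.

# Two different priors, same data: the posteriors converge (illustrative simulation).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
theta_true = 0.4                                   # assumed true bias
x = rng.binomial(1, theta_true, size=100000)

grid = np.linspace(0.0, 1.0, 2001)
dx = grid[1] - grid[0]
for n in [10, 1000, 100000]:
    h = x[:n].sum()
    p1 = stats.beta(1 + h, 1 + n - h)              # Bayesian 1: Beta(1, 1) prior
    p2 = stats.beta(20 + h, 2 + n - h)             # Bayesian 2: Beta(20, 2) prior, same support
    # sup_E |P1(E) - P2(E)| equals the total variation distance, approximated on a grid.
    tv = 0.5 * np.sum(np.abs(p1.pdf(grid) - p2.pdf(grid))) * dx
    print(n, "approximate uniform distance: %.4f" % tv)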

On Choosing Priors:

+ Objective Priors: non-informative priors that attempt to capture ignorance and have good frequentist properties.

+ Subjective Priors: priors should capture our beliefs as well as possible. They are subjective but not arbitrary.

+ Hierarchical Priors: multiple levels of priors, p(\theta) = \int d\alpha\, p(\theta|\alpha)\, p(\alpha) = \int d\alpha\, p(\theta|\alpha) \int d\beta\, p(\alpha|\beta)\, p(\beta).

+ Empirical Priors: learn some of the parameters from data, \hat{\alpha} = \mathrm{argmax}_\alpha\, p(D|\alpha); robust, since it overcomes some limitations of misspecification of the prior (a sketch of this step follows the list).
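A minimal empirical-Bayes sketch of that last idea, assuming a symmetric Beta(alpha, alpha) prior and made-up Bernoulli data, chooses alpha by maximizing the marginal likelihood on a grid.

# Empirical Bayes sketch: pick the prior hyperparameter by maximizing p(D | alpha).
import numpy as np
from scipy.special import betaln

heads, n = 30, 100                     # hypothetical data
alphas = np.linspace(0.1, 50, 500)     # candidate symmetric Beta(alpha, alpha) priors

# log p(D | alpha) for a Bernoulli sequence with theta ~ Beta(alpha, alpha):
# theta is integrated out analytically via Beta functions.
log_evidence = betaln(alphas + heads, alphas + n - heads) - betaln(alphas, alphas)

alpha_hat = alphas[np.argmax(log_evidence)]
print("alpha maximizing the marginal likelihood:", alpha_hat)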

Approximation Methods for Posteriors and Marginal Likelihoods: Laplace approximation; Bayesian Information Criterion (BIC); variational approximations; expectation propagation (EP); Markov chain Monte Carlo methods (MCMC); exact sampling; ...
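As one concrete example of the first of these methods, the Laplace approximation fits a Gaussian at the posterior mode using the curvature of the log posterior; the sketch below uses hypothetical Beta-Binomial numbers so the exact posterior is available for comparison.

# Laplace approximation sketch: Gaussian centred at the posterior mode.
import numpy as np
from scipy import stats

heads, n = 7, 10                                 # hypothetical data
a, b = 1 + heads, 1 + n - heads                  # exact posterior is Beta(a, b)

# Mode of the Beta(a, b) log density and its second derivative at the mode.
theta_map = (a - 1) / (a + b - 2)
hess = -(a - 1) / theta_map**2 - (b - 1) / (1 - theta_map)**2
laplace = stats.norm(theta_map, np.sqrt(-1.0 / hess))   # variance = -1 / hessian

print("exact posterior mean:", stats.beta(a, b).mean())
print("Laplace approximation:", laplace.mean(), "+/-", laplace.std())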


The Variational Bayesian EM algorithm has been used to approximate Bayesian learning in a wide range of models such as: 

• mixtures of Gaussians and mixtures of factor analysers

• hidden Markov models

• state-space models (linear dynamical systems)

• independent components analysis (ICA)

• discrete graphical models...

The main advantage is that it can be used to perform model selection automatically and does not suffer from overfitting to the same extent as ML methods do.
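One readily available implementation of this idea is scikit-learn's BayesianGaussianMixture, which fits a Gaussian mixture by variational inference. In the sketch below (synthetic data and assumed hyperparameters), deliberately over-specifying the number of components lets the variational posterior switch off the ones it does not need, which is the automatic model selection mentioned above.

# Variational Bayes for a Gaussian mixture via scikit-learn (one possible implementation;
# the VB-EM references in the notes cover a much broader family of models).
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic 1-D data from two well-separated Gaussians.
X = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)]).reshape(-1, 1)

# Ask for far more components than the data need; variational inference should
# drive the weights of the superfluous components toward zero.
vb = BayesianGaussianMixture(n_components=10, weight_concentration_prior=0.01,
                             max_iter=500, random_state=0).fit(X)
print(np.round(vb.weights_, 3))   # most weights collapse to ~0; effectively 2 components remain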



Infinite mixture models: p(x) = \sum_{k=1}^{K} \pi_k\, p_k(x).

Start from a finite mixture model with K components and take the limit as the number of components K → ∞. You then have infinitely many parameters, so you integrate them out using (see the sketch after this list):

– MCMC sampling (Escobar & West 1995; Neal 2000; Rasmussen 2000)

– expectation propagation (EP; Minka and Ghahramani, 2003)

– variational methods (Blei and Jordan, 2005)

– Bayesian hierarchical clustering (Heller and Ghahramani, 2005)
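One standard way to picture the prior behind such infinite mixtures (not spelled out in the notes) is the Chinese restaurant process representation of the Dirichlet process; this sketch samples cluster assignments from that prior with an assumed concentration of 1.

# Chinese-restaurant-process sketch of the Dirichlet process prior behind infinite mixtures.
import numpy as np

rng = np.random.default_rng(3)
alpha = 1.0                  # assumed concentration parameter
counts = []                  # number of points already seated at each cluster

for i in range(200):
    # P(join existing cluster k) is proportional to counts[k]; P(new cluster) to alpha.
    probs = np.array(counts + [alpha], dtype=float)
    probs /= probs.sum()
    k = rng.choice(len(probs), p=probs)
    if k == len(counts):
        counts.append(1)                       # start a new cluster
    else:
        counts[k] += 1                         # join an existing cluster

print("clusters used for 200 points:", len(counts), "sizes:", sorted(counts, reverse=True))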

Myths and misconceptions about Bayesian methods:

+ Bayesian methods make assumptions where other methods don’t: All methods make assumptions! Otherwise it’s impossible to predict. Bayesian methods are transparent in their assumptions whereas other methods are often opaque.

+ If you don't have the right prior you won't do well. Certainly a poor model will predict poorly, but there is no such thing as the right prior! Your model (both prior and likelihood) should capture a reasonable range of possibilities. When in doubt you can choose vague priors (cf. nonparametrics).

+ Maximum A Posteriori (MAP) is a Bayesian method. MAP is similar to regularization and offers no particular Bayesian advantages. The key ingredient in Bayesian methods is to average over your uncertain variables and parameters, rather than to optimize (see the sketch after this list).

+ Bayesian methods don’t have theoretical guarantees. One can often apply frequentist style generalization error bounds to Bayesian methods (e.g. PAC-Bayes). Moreover, it is often possible  to prove convergence, consistency and rates for Bayesian methods.

+ Bayesian methods are generative. You can use Bayesian approaches for both generative and discriminative learning (e.g. Gaussian process classification). With the right inference methods (variational, MCMC) it is possible to scale to very large datasets, but it’s true that averaging is often more expensive than optimization.
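To illustrate the MAP point above, here is a tiny Beta-Bernoulli sketch with made-up data: the MAP plug-in predicts the next flip with certainty, while averaging over the posterior gives the more cautious 0.8.

# MAP plug-in versus posterior averaging (illustrative numbers, not from the notes).
from scipy import stats

heads, n = 3, 3                    # three flips, all heads
a, b = 1 + heads, 1 + n - heads    # posterior under a Beta(1, 1) prior is Beta(4, 1)

theta_map = (a - 1) / (a + b - 2)  # MAP estimate = 1.0
p_next_map = theta_map             # plug-in prediction: next flip is heads with probability 1

# Bayesian prediction averages over the posterior:
# P(next = heads | D) = E[theta | D] = a / (a + b) = 0.8.
p_next_bayes = stats.beta(a, b).mean()

print("MAP plug-in:", p_next_map, " posterior-averaged:", p_next_bayes)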

Frequentist theory tends to focus on the sampling properties of estimators, i.e. what would have happened had we observed other data sets from our model. It also looks at the minimax performance of methods, i.e. the worst-case performance if the environment is adversarial. Frequentist methods often optimize some penalized cost function.

Bayesian methods focus on expected loss under the posterior. Bayesian methods generally do not make use of optimization, except at the point at which decisions are to be made.

Pros and cons of Bayesian methods: Bayesian machine learning treats learning as a probabilistic inference problem. Bayesian methods work well when the models are flexible enough to capture the relevant properties of the data, and performance is often good. On the other hand, the closed-world assumption means we need to consider all possible hypotheses for the data before observing the data, and the use of approximations weakens the coherence argument.
