Document Type

Dissertation

Degree

Doctor of Philosophy

Major

Applied Mathematics

Date of Defense

7-21-2006

Graduate Advisor

Haiyan Cai, Ph.D.

Committee

Charles Chui, Ph.D.

Ronald Dotzel, Ph.D.

Qingtang Jiang, Ph.D.

Abstract

An increasingly popular method for fitting complex models, particularly those with a hierarchical structure, involves the use of Markov chain Monte Carlo (MCMC) simulation. Within a Bayesian framework, two major strategies are Gibbs sampling and Metropolis-Hastings methods. Recent research on MCMC methods has witnessed the emergence of modeling approaches that permit the chain to move across models of varying dimension. When properly constructed, such Markov chains converge to the joint posterior distribution of the parameters to be estimated, making Bayesian averaging an attractive option once convergence has occurred. With this transdimensional methodology, the Bayesian averaging process takes place across models of different dimensions. The purpose of this research is to incorporate a penalty function within the transition kernel of the Markov chain in order to impose desired constraints on the final estimated function. The class of functions used for modeling consists of ordinary cubic splines on a closed, finite interval. The knots of each candidate spline function are allowed to change over the course of the Markov chain, and this feature is reflected in the final results: not only the number of knots but also their locations vary in the simulated chain. The penalty function of primary interest in this research is a form of the Kullback-Leibler distance between statistical distributions. It is shown that this penalty function is equivalent to placing a certain Bayesian prior distribution on the number of knots. Results using this penalty approach are compared with results using no penalty.
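For illustration only, below is a minimal Python sketch of the kind of penalized birth/death Metropolis-Hastings move over knot configurations described in the abstract. The Gaussian likelihood, the least-squares spline fit, the simple per-knot penalty standing in for the Kullback-Leibler-based penalty, the uniform knot proposals, and all identifiers are assumptions made for the example; the Hastings proposal-ratio and dimension-matching terms of a full reversible-jump sampler are omitted, so this is not the dissertation's exact construction.

import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Hypothetical sketch: one birth/death Metropolis-Hastings step over
# cubic-spline knot configurations, with a penalty on the number of
# knots entering the acceptance ratio.

rng = np.random.default_rng(0)

def log_likelihood(x, y, knots, sigma=0.1):
    """Gaussian log-likelihood of a least-squares cubic spline fit
    with the given interior knots (assumed noise level sigma)."""
    spline = LSQUnivariateSpline(x, y, t=np.sort(knots), k=3)
    resid = y - spline(x)
    return -0.5 * np.sum(resid**2) / sigma**2

def penalty(num_knots, lam=2.0):
    """Penalty on model size; exp(-penalty) acts like a prior on the
    number of knots (a simple stand-in for a KL-based penalty)."""
    return lam * num_knots

def birth_death_step(x, y, knots, a=0.05, b=0.95):
    """Propose adding (birth) or removing (death) one interior knot
    and accept or reject with the penalized acceptance ratio."""
    k = len(knots)
    birth = rng.random() < 0.5 or k <= 1
    if birth:
        new_knots = np.append(knots, rng.uniform(a, b))
    else:
        new_knots = np.delete(knots, rng.integers(k))
    try:
        log_ratio = (log_likelihood(x, y, new_knots)
                     - log_likelihood(x, y, knots)
                     - penalty(len(new_knots)) + penalty(k))
    except ValueError:   # proposal violates the spline-fit constraints
        return knots
    if np.log(rng.random()) < log_ratio:
        return new_knots
    return knots

# Toy usage: noisy sine data on [0, 1], a few hundred sweeps.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
knots = rng.uniform(0.05, 0.95, size=4)
for _ in range(500):
    knots = birth_death_step(x, y, knots)
print("final number of interior knots:", len(knots))

Because exp(-penalty(k)) multiplies the acceptance ratio exactly as a prior on the number of knots would, the sketch also illustrates, under these assumptions, the penalty-prior equivalence noted in the abstract.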

OCLC Number

565977771
