International Conference on Statistical Distributions and Applications
Oct. 14-16, 2016, Crowne Plaza, Niagara Falls, Canada

Conference Keynote and Plenary Speakers

Dr.
Peter McCullagh is John D. MacArthur Distinguished Service
Professor at the University of Chicago, Chicago, Illinois. Before moving to
Chicago, Peter obtained his Bachelor’s degree in mathematics from the
University of Birmingham, and his doctoral degree in statistics from Imperial
College. He has held visiting positions at the University of British Columbia
and at Bell Labs. He is a Fellow of the Institute of Mathematical Statistics,
the American Association for the Advancement of Science, the American Academy
of Arts and Sciences, and the Royal Society. Peter’s research focuses on
probabilistic modelling, statistical theory, and the application of
statistical methods in diverse scientific areas such as biostatistics,
agricultural research, ecology, and animal behaviour. Recent probabilistic work includes boson point
processes, exchangeability and random discrete structures such as random
partitions, Gibbs random trees, random graphs and so on. Recent statistical
work has focused on health monitoring and survival processes. Peter is the
author of two books, Tensor Methods in Statistics, and Generalized Linear
Models, with co-author John Nelder. He has served
as editor of the journal Bernoulli, and as an associate editor of Biometrika, Journal of the Royal Statistical Society, and
the Annals of the Institute of Statistical Mathematics.
Title: Statistical models for survival processes
8:00 am – 9:00 am, October 15 in Niagara Room

The
focus of a survival study is partly on the distribution of survival times,
and partly on the health or quality of life of patients while they live. Health
varies over time, and survival is the most basic aspect of health, so the two
aspects are closely intertwined. Depending on the nature of the study, a
range of variables may be measured; some constant in time, others not; some
regarded as responses, others as explanatory risk factors; some directly and
personally health-related, others less directly so. We begin by classifying
variables that may arise in such a setting, emphasizing, in particular, the
mathematical distinction between vital variables, non-vital variables and
external or exogenous variables. The goal is to construct a family of
continuous-time stochastic processes for vital health variables, and to use
such models for the analysis of data collected intermittently in time,
especially in situations where mortality is appreciable.
Dr. Kjell Doksum is Senior Scientist in the Statistics Department at the University of Wisconsin, Madison, and Emeritus Professor in the Statistics Department at the University of California, Berkeley. He has held visiting positions at the Université de Paris, the University of Oslo, the Norwegian Institute of Technology in Trondheim, Harvard University, Harvard Medical School, Columbia University, the Bank of Japan, Hitotsubashi University in Tokyo, and Stanford University. He is a Fellow of the Institute of Mathematical Statistics and of the American Statistical Association, as well as an elected member of the International Statistical Institute and the Royal Norwegian Society of Sciences and Letters. His research focuses on statistical theory and modeling. It includes inference for nonparametric regression and correlation curves, global measures of association in semiparametric and nonparametric settings, estimation of regression quantiles, Bayesian nonparametric inference, and high dimensional data analysis. Applications include statistical modeling of HIV data and the analysis of financial data. Kjell Doksum is the co-author, with Peter Bickel, of the book “Mathematical Statistics: Basic Concepts and Selected Topics, Volumes I and II”, CRC Press.
Title: Ensemble subspace methods for high dimensional data
8:00 am – 9:00 am, October 16 in Niagara Room

We consider high dimensional regression frameworks where the number p of predictors exceeds the number n of subjects. Recent work in high dimensional regression analysis has embraced an approach that consists of selecting random subsets with fewer than n predictors, doing statistical analysis on each subset, and then merging the results from the subsets. This ensemble approach makes it possible to construct methods for high dimensional data using methods designed for low dimensional data. Moreover, penalty methods such as Lasso, which are unstable when p > n unless very stringent conditions are imposed, perform much better when used in the ensemble approach. We examine the extent of the improvement achieved by the ensemble approach when it is applied to Lasso, Lars, and the Elastic Net. Comparisons are also made with variable selection methods.
Dr.
Mei-Ling Ting Lee is the Director and Professor of the Biostatistics &
Risk Assessment Center, Department of Epidemiology & Biostatistics,
University of Maryland, College Park, MD. Previously she was a faculty member
at Boston University, Harvard University, and was professor and chair of the
Department of Biostatistics at the Ohio State University. She earned her PhD
degree from the Department of Mathematics at University of Pittsburgh. Dr.
Lee is a biostatistician with a wide range of research interests. Her works
in statistical distributions include dependence properties of multivariate
distributions and generalizing the Sarmanov
distributions. Dr. Lee is the founding editor and editor-in-chief of the
international journal Lifetime Data
Analysis, the only international statistical journal specializing
in modeling time-to-event data. Dr. Lee has received many awards and
recognitions including Fellow of the American Statistical Association, the Institute
of Mathematical Statistics, and the Royal Statistical Society.
Title: From Bacon and Eggs to Fréchet Shock-Degradation Models
7:30 pm – 8:30 pm, October 15 in Niagara Room

Some distributions arise naturally to meet practical needs. I’ll discuss two interesting examples, Sarmanov multivariate distributions and Fréchet shock-degradation models. One can generate many multivariate distributions having given marginals. The density of the bivariate Sarmanov distribution with beta marginals can be expressed as a linear combination of products of independent beta densities. This pseudo-conjugate property greatly reduces the complexity of posterior computations when this bivariate beta distribution is used as a prior (Lee, 1996). An interesting marketing study found that people who purchase bacon will often buy eggs; hence the bivariate beta-binomial distributions applied well in analyzing the data. Recently the method has also been applied in multivariate meta-analysis. Many systems experience gradual degradation while simultaneously being exposed to a stream of random shocks that eventually cause failure when a shock exceeds the residual strength of the system. This failure mechanism is found in diverse fields of application. A tractable new family of Fréchet shock-degradation models will be presented. This family has the attractive feature of defining the failure event as a first passage event and the time to failure as a first hitting time (FHT) of a threshold by an underlying stochastic process. The Fréchet shock-degradation family accommodates a wide class of underlying degradation processes. We derive the survival function for the shock-degradation process as a convolution of the Fréchet shock process and any candidate degradation process that possesses stationary independent increments (Lee and Whitmore, 2016). Statistical properties of the survival distribution will be discussed.
Conference Plenary Speakers

Dr.
Fraser is a Professor Emeritus at the University of Toronto, Toronto, Canada.
His Bachelor’s degree is from the University of Toronto and his PhD is from
Princeton University. He has held visiting positions at many establishments
including Princeton University, Stanford University, University of Wisconsin,
University of Geneva and University College, London. Dr. Fraser has numerous
honors and awards. He is a Fellow of many professional societies including
Institute of Mathematical Statistics, Royal Statistical Society, American
Statistical Association, Royal Society of Canada, American Association for
the Advancement of Science, and American Mathematical Society. His research
interests include, but are not limited to, likelihood asymptotic theory, large
sample theory of statistics, Bayesian analysis and qualitative data analysis.
Dr. Fraser is the author of many popular books including Nonparametric Methods in Statistics, The Structure of Inference, and Inference and Linear Models. He is currently addressing the
conflicts between reproducibility and ‘objective’ Bayesian methodology.
Title: Distributional methods have changed statistical inference
2:15 pm – 2:45 pm, October 16 in Niagara Room

Saddlepoint methods entered statistics rather slowly: Henry Daniels in 1954, then Barndorff-Nielsen and Cox in 1979, a gap of 25 years. But since then the methods have radically changed the landscape for the core methods of inference, and p-values no longer need to be in the wild-west stage. We briefly survey the distributional methods that altered the statistical landscape.
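As a concrete taste of what a saddlepoint approximation does, here is a small worked example of my own (not from the talk): the classic density approximation f̂(x) = exp(K(ŝ) − ŝx) / √(2π K″(ŝ)), with K the cumulant generating function and ŝ solving K′(ŝ) = x, applied to a Gamma variable where everything is available in closed form.

```python
# Saddlepoint density approximation for Gamma(shape=a, rate=1) at x.
import math

a = 3.0                       # Gamma shape; K(s) = -a*log(1 - s), valid for s < 1
x = 2.5
s_hat = 1 - a / x             # solves K'(s) = a / (1 - s) = x
K = -a * math.log(1 - s_hat)
K2 = a / (1 - s_hat) ** 2     # K''(s_hat)

f_hat = math.exp(K - s_hat * x) / math.sqrt(2 * math.pi * K2)
f_exact = x ** (a - 1) * math.exp(-x) / math.gamma(a)
print(round(f_hat, 4), round(f_exact, 4))   # 0.2637 vs 0.2565, within about 3%
```

For the Gamma family the relative error is constant in x (it is exactly the Stirling-approximation error for the shape parameter), which is part of why these approximations are so accurate in practice.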
Dr. John Stufken is the Charles Wexler Professor of Statistics in the School of Mathematical and Statistical Sciences at Arizona State University. Previously he served as Head of the Department of Statistics at the University of Georgia (2003-2014) and as Program Director for Statistics in the Division of Mathematical Sciences at the National Science Foundation (2000-2003). He also held faculty positions at Iowa State University (1988-2002) and the University of Georgia (1986-1990). His primary area of research interest is design and analysis of experiments. He is co-author of the book Orthogonal Arrays: Theory and Applications (1999, Springer Verlag, with A. Hedayat and N.J.A. Sloane), and co-Editor of the Handbook of Design and Analysis of Experiments (2015, Chapman and Hall/CRC, with D. Bingham, A. Dean and M. Morris). He currently serves as Associate Editor for the Journal of the American Statistical Association, Statistica Sinica, and the Journal of Statistical Theory and Practice. He served as Executive Editor for the Journal of Statistical Planning and Inference (2004-2006) and as Editor for The American Statistician (2009-2011). He is an elected Fellow of the Institute of Mathematical Statistics (2000) and of the American Statistical Association (2001), and was the Rothschild Distinguished Visiting Fellow at the Isaac Newton Institute for Mathematical Sciences in Cambridge, UK, for the program on Design and Analysis of Experiments in 2011.
Title: Optimal design and subdata selection for big data
2:15 pm – 2:45 pm, October 15 in Niagara Room

The theory of optimal design has been developed for experiments that, typically, yield “small” amounts of data. Consequently, there is no immediate connection to big data. However, if big data is really big, then a common strategy is to select subdata and draw conclusions from the subdata. Just as in experimental design, this amounts to a selection problem, namely that of selecting appropriate subdata. We will discuss how ideas from design of experiments can help us select subdata judiciously.
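To make the design connection concrete, here is a toy sketch of my own (not the speaker's method): for a single predictor, the variance of the least-squares slope is inversely proportional to the spread of x in the subdata, so a design-style rule that keeps the most extreme x-values beats a uniform random subsample of the same size.

```python
# Design-inspired subdata selection vs a uniform random subsample.
import numpy as np

rng = np.random.default_rng(1)
N, k = 100_000, 200                        # full data size, subdata size
x = rng.standard_normal(N)

idx_rand = rng.choice(N, size=k, replace=False)          # random subsample
order = np.argsort(x)
idx_doe = np.concatenate([order[: k // 2], order[-k // 2:]])  # extreme x-values

# Slope variance is proportional to 1 / sum((x - xbar)^2) on the subdata.
var_rand = 1.0 / np.sum((x[idx_rand] - x[idx_rand].mean()) ** 2)
var_doe = 1.0 / np.sum((x[idx_doe] - x[idx_doe].mean()) ** 2)
print(var_doe < var_rand)                  # True: the design-based subdata wins
```

The extreme-value rule is the one-predictor analogue of spreading design points to the ends of the design region; with many predictors the selection problem becomes genuinely combinatorial, which is where design-of-experiments ideas earn their keep.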
Dr. Marc G. Genton is Professor of Statistics in the division of
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) at
King Abdullah University of Science and Technology (KAUST) in Saudi Arabia.
He received his Ph.D. in Statistics from the Swiss Federal Institute of
Technology (EPFL), Lausanne, in 1996. He also holds a M.Sc. in
Applied Mathematics teaching and a degree of Applied Mathematics Engineer
from the same institution. Prof. Genton is a Fellow
of the American Statistical Association, of the Institute of Mathematical
Statistics, of the American Association for the Advancement of Science, and
an elected member of the International Statistical Institute. In 2010, he
received the El-Shaarawi award for excellence from
the International Environmetrics Society and the
Distinguished Achievement award from the Section on Statistics and the
Environment of the American Statistical Association. Prof. Genton has published over 180 articles in scientific
journals, has edited a book on multivariate skew-elliptical distributions,
and has given over 300 presentations at conferences and universities
worldwide. Prof. Genton's research interests
include statistical analysis, flexible modeling, prediction, and uncertainty
quantification of spatio-temporal data, with
applications in environmental and climate science, renewable energies,
geophysics, and marine science.
Title: Tukey g-and-h random fields and max-stable processes
1:15 pm – 1:45 pm, October 15 in Niagara Room

We
propose a new class of trans-Gaussian random fields named Tukey g-and-h (TGH)
random fields to model non-Gaussian spatial data. The proposed TGH random
fields have extremely flexible marginal distributions, possibly skewed and/or
heavy-tailed, and, therefore, have a wide range of applications. The special
formulation of the TGH random field enables an automatic search for the most
suitable transformation for the dataset of interest while estimating model
parameters. Asymptotic properties of the maximum likelihood estimator and the
probabilistic properties of the TGH random fields are investigated. An
efficient estimation procedure, based on maximum approximated likelihood, is
proposed and an extreme spatial outlier detection algorithm is formulated. Kriging
and probabilistic prediction with TGH random fields are developed along with
prediction confidence intervals. The predictive performance of TGH random
fields is demonstrated through extensive simulation studies and an
application to a dataset of total precipitation in the southeastern
United States. Extensions of these ideas to the construction of new spatial
max-stable processes are presented as well.
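For readers unfamiliar with the name, the Tukey g-and-h transformation maps a standard normal z to τ(z) = g⁻¹(e^{gz} − 1) e^{hz²/2}, with g controlling skewness and h controlling tail heaviness; a TGH random field applies such a transform to a Gaussian field. The sketch below, with illustrative parameter values of my choosing, simulates the marginal distribution this induces.

```python
# Marginal behaviour of the Tukey g-and-h transform of a standard normal.
import numpy as np

def tukey_gh(z, g=0.5, h=0.2):
    """Tukey g-and-h transform; this simple form assumes g != 0."""
    return (np.exp(g * z) - 1.0) / g * np.exp(h * z * z / 2.0)

rng = np.random.default_rng(2)
z = rng.standard_normal(200_000)
x = tukey_gh(z)

# Positive g induces right skew: the mean sits above the median (tau(0) = 0).
print(bool(x.mean() > np.median(x)))       # True
```

Setting g = 0 (with the limit τ(z) = z e^{hz²/2}) gives a symmetric heavy-tailed marginal, and h = 0 gives a pure skewing transform, which is the flexibility the abstract refers to.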
Dr. Gwo Dong Lin is a Research Fellow in the Institute of Statistical Science at Academia Sinica, Taiwan. He received his BS and MS in Mathematics from National Taiwan Normal University, and Ph.D. in Management Sciences from Tamkang University in Taiwan. He is an Elected Member of the International Statistical Institute and has served or is serving as an Associate Editor of several journals including Statistica Sinica, IEEE Transactions on Reliability, Journal of Statistical Distributions and Applications, and Statistics-A Journal of Theoretical and Applied Statistics. His research interests include Distribution Theory, Applied Probability and Survival Analysis. He has published over 70 papers in a variety of theoretical and applied journals such as Bernoulli, Probability Theory and Related Fields, Sankhya, JAP, TPA, JOTP, JMVA, AISM, JSPI, JSDA, JMAA, and others.
Title: Recent Developments on the Moment Problem
1:15 pm – 1:45 pm, October 16 in Niagara Room

We consider univariate distributions with finite moments of all positive orders. The moment problem is to determine whether or not a given distribution is uniquely determined by the sequence of its moments. There is an inexhaustible literature on this classical topic. In this survey, we focus on recent developments in checkable moment-(in)determinacy criteria, including Cramér's condition, Carleman's condition, Hardy's condition, Krein's condition, and the growth rate of moments, which help us solve the problem more easily. Both the Hamburger and Stieltjes cases are investigated: the former concerns distributions on the whole real line, while the latter deals only with distributions on the right half-line. Some new results, and new simple (direct) proofs of previous criteria, are provided. Finally, we review the moment problem for products of independent random variables with different distributions, which occur naturally in stochastic modelling of complex random phenomena.
Dr. Yi Li is a Professor of Biostatistics and Director of
the Kidney Epidemiology and Cost Center, University of Michigan. His current research interests are
survival analysis, longitudinal and correlated data analysis, measurement
error problems, spatial models and clinical trial designs. His group is
developing methodologies for analyzing large-scale and high-dimensional
datasets, with direct applications in observational studies as well as in genetics/genomics.
His methodologic research has been funded by various NIH statistical grants
since 2003. Yi Li is actively involved in collaborative research
in clinical trials and observational studies with researchers from the
University of Michigan and Harvard University. The applications have included
chronic kidney disease surveillance, organ transplantation, cancer preventive
studies and cancer genomics. Professor Li is a Fellow of the American
Statistical Association and has been serving as associate editor in various
journals including JASA, Biometrics, and Lifetime Data Analysis.
Title: Classification with Ultrahigh-Dimensional Features
1:45 pm – 2:15 pm, October 15 in Niagara Room

Although much progress
has been made in classification with high-dimensional features,
classification with ultrahigh-dimensional features, wherein the features far
outnumber the sample size, defies most existing work. This paper
introduces a novel and computationally feasible multivariate screening and
classification method for ultrahigh-dimensional data. Leveraging
inter-feature correlations, the proposed method enables detection of
marginally weak and sparse signals and recovery of the true informative
feature set, and achieves asymptotic optimal misclassification rates. We also show that
the proposed procedure provides more powerful discovery boundaries compared
to those in Cai and Sun (2014) and Jin et al. (2009). The performance of the proposed
procedure is evaluated using simulation studies and demonstrated via
classification of patients with different post-transplantation renal
functional types.
Dr. Anand Vidyashankar is a
Professor at George Mason University. He received his doctoral degree
in mathematics and statistics at Iowa State University. His research
interests span a wide variety of areas including branching processes, large deviations,
high-dimensional data analysis, robust inference, stochastic fixed point
equations, clinical trials, financial and actuarial risk assessment, machine
learning, non-parametric methods, and statistical foundations. His research
has been supported extensively by industry and by the NSF.
Title: Implicit Networks in High Dimensional Problems
1:45 pm – 2:15 pm, October 16 in Niagara Room

In a
variety of contemporary applications, especially those involving big-data, it
is becoming a common practice to use high-dimensional regression models for
data analysis. While such methods yield important information concerning
associations between a response and a set of features, they fail to capture
the global characteristics of the feature set. To address some of these
limitations, we introduce the concept of supervised implicit networks and
investigate the theoretical properties of various network wide metrics (NWM).
Specifically, we provide an assessment of variability in the statistical
estimates of NWM and discuss their use in the context of data analysis.
Finally, we apply these methods to develop supervised clustering algorithms
and use them to identify communities in the network.