International Conference on Statistical Distributions and Applications
Oct. 14–16, 2016, Crowne Plaza, Niagara Falls, Canada


Titles and abstracts shown below were updated on September 6th.
Please review your title and abstract, and email carl.lee@cmich.edu if any revision is needed.
Titles and abstracts for Keynote and Plenary
speakers are on the ‘Keynotes & Plenary
Speakers’ Page.
Topic-Invited Sessions: Topics and Organizers
Room Abbreviations: NI – Niagara Room, BR – Brock Room, EL – Elisabeth Room, CAN/B – Canadian Room/B
Session | Topic | Organizer | Date | Time | Room
TI 1 | Applications of Statistical Distributions in Business, Management and Economics | Sarabia, Jose Maria | Oct 15 | 9:15 am – 10:35 am | NI
TI 2 | Some Recent Issues and Methods in Statistics and Biostatistics | Yi, Grace | Oct 15 | 9:15 am – 10:35 am | BR
TI 3 | Relative Belief Inferences | Evans, Michael | Oct 15 | 9:15 am – 10:35 am | EL
TI 4 | Recent Developments in Designs and Analysis of Statistical Experiments | Xu, Xiaojian | Oct 15 | 9:15 am – 10:35 am | CAN/B
TI 5 | Generalized Distributions and Their Applications | Alzaatreh, Ayman | Oct 15 | 3:00 pm – 4:20 pm | NI
TI 6 | Don't Count on Poisson! Introducing the Conway-Maxwell-Poisson Distribution for Statistical Methodology Regarding Count Data | Sellers, Kimberly | Oct 15 | 3:00 pm – 4:20 pm | BR
TI 7 | Extreme Value Distributions and Models | Huang, Mei-Ling | Oct 15 | 3:00 pm – 4:20 pm | EL
TI 8 | Moment-Based Methodologies for Approximating and Estimating Density Functions | Provost, Serge B. | Oct 15 | 3:00 pm – 4:20 pm | CAN/B
TI 9 | Dependence Modelling with Applications in Insurance and Finance | Furman, Edward | Oct 16 | 9:15 am – 10:35 am | NI
TI 10 | Multivariate Distributions | Richter, Wolf-Dieter | Oct 16 | 9:15 am – 10:35 am | BR
TI 11 | Bayesian Analysis for Highly Structured Processes | Ferreira, Marco A. R. | Oct 16 | 9:15 am – 10:35 am | EL
TI 12 | Recent Developments in Complex Data Analysis | Gao, Xiaoli | Oct 16 | 9:15 am – 10:35 am | CAN/B
TI 13 | Copula Modeling of Discrete Dependent Data | De Oliveira, Victor | Oct 16 | 10:50 am – 12:10 pm | NI
TI 14 | Statistics and Modelling | Stehlik, Milan | Oct 16 | 10:50 am – 12:10 pm | CAN/B
TI 15 | Copula Theory and Applications to Insurance and Finance | Cooray, Kahadawala | Oct 16 | 3:00 pm – 4:20 pm | NI
TI 16 | Bayesian Approaches to Model and Distribution Estimation | Cheng, Chin-I | Oct 16 | 3:00 pm – 4:20 pm | BR
TI 17 | Compounding and Copulas: Generalized and Extended Distributions | Oluyede, Broderick O. | Oct 16 | 3:00 pm – 4:20 pm | EL
TI 18 | Modeling Complex Data | Amezziane, Mohamed | Oct 16 | 3:00 pm – 4:20 pm | CAN/B
TI 19 | Mixtures of Non-Gaussian Distributions with Applications in Clustering | McNicholas, Paul | Oct 16 | 4:30 pm – 5:50 pm | NI
TI 20 | Likelihood-Based Inference: Methods and Applications | Coelho, Carlos A. | Oct 16 | 4:30 pm – 5:50 pm | BR
TI 21 | Statistical Methods for Analysis of Industrial and Medical Data | Ng, Hon Keung Tony | Oct 16 | 4:30 pm – 5:50 pm | EL
TI 22 | Construction of New Statistical Distributions and Statistical Data Modeling | Akinsete, Alfred | Oct 16 | 4:30 pm – 5:50 pm | CAN/B
Abstracts – Topic-Invited Speakers (Alphabetically Ordered)
Session Name: TI m_k (m_k = k-th speaker in the m-th session)
TI 22_3 
Al-Aqtash, Raid 
Title 
Gumbel-Burr XII {logistic} distribution 
In this project, a member of the Gumbel-X family of distributions is defined. Many properties will be presented, including shapes, moments, skewness, kurtosis, and parameter estimation. The distribution will be used to fit real-life data, and its performance will be compared with that of other commonly used probability distributions. 

TI 3_4 
Al-Labadi, Luai 
Title 
Prior-based model checking 
Model checking procedures are
considered based on the use of the Dirichlet
process and relative belief. This combination is seen to lead to some unique
advantages for this problem. 

TI 5_4 
Alzaatreh, Ayman 
Title 
Parameter estimation for the log-logistic distribution based on order statistics 
In this talk, the moments and product moments of the order statistics in a sample of size n drawn from the log-logistic distribution are discussed. We provide more compact forms for the mean, variance and covariance of order statistics. Parameter estimation for the log-logistic distribution based on order statistics is studied. In particular, best linear unbiased estimators (BLUEs) for the location and scale parameters of the log-logistic distribution with known shape parameter are studied. The Hill estimator is proposed for estimating the shape parameter. 
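The Hill estimator mentioned above admits a compact numerical sketch. The following is illustrative only (the log-logistic shape, sample size, and choice of k are made-up values, not from the talk):

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the tail index from the k largest order statistics."""
    x = np.sort(np.asarray(sample, dtype=float))
    top = x[-k:]                 # k largest observations
    threshold = x[-k - 1]        # (k+1)-th largest acts as the threshold
    # The mean log-excess estimates 1/alpha; invert to get the shape.
    return 1.0 / np.mean(np.log(top) - np.log(threshold))

# Log-logistic(shape=beta) variates via the inverse CDF (u/(1-u))^(1/beta);
# the survival function decays like x^(-beta), so the tail index equals beta.
rng = np.random.default_rng(0)
beta = 2.5
u = rng.uniform(size=200_000)
sample = (u / (1.0 - u)) ** (1.0 / beta)
est = hill_estimator(sample, k=2_000)   # should be close to beta
```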

TI 10_2 
Arslan, Olcay 
Title 
A
unified approach to some multivariate skew distributions 
The main objective of the present work is to introduce a unified class of skew and heavy-tailed distributions. We construct the new class by defining the variance–mean mixture of a skew-normal distributed random variable with a positive scalar-valued random variable independent of it. The new class can be regarded as an
extension of the following classes: the normal variance mixture
distributions, the variance mixture of the skew normal distribution and the
normal variance–mean mixture distributions. An explicit expression for the
density function of the new class is given and some of its distributional
properties are examined. We give a simulation algorithm to generate random
variates from the new class and propose an EM algorithm for maximum
likelihood estimation of its parameters. 

TI 17_1 
Baharith, Lamya A. 
Title 
Bivariate Truncated Type I
Generalized Logistic Distribution 
The truncated type I generalized logistic distribution has been used in a variety of applications. In this article, a new bivariate truncated type I generalized logistic distribution based on different types of copula functions is introduced. A study of some of its properties is presented. Different methods of estimation are used to estimate the parameters of the proposed distribution. A Monte Carlo simulation is carried out to examine the performance of the estimators. Finally, a real data set is analyzed to illustrate the satisfactory performance of the proposed distribution. 

TI 7_1 
Brill, Percy and Huang, Mei Ling 
Title 
A
Renewal Process for Extremes 
We derive the finite time-t probability density function (pdf) of the excess, age, and total life of a renewal process whose interarrival times have a heavy-tailed distribution, namely a no-mean Pareto distribution with shape parameter alpha in (0,1]. We compare the time-t pdf's with the corresponding limiting pdf's of a renewal process with interarrival times distributed as a finite-mean, truncated Pareto distribution having the same shape parameter alpha. We give an example with fixed value t and right truncation point K such that the corresponding limiting pdf's closely approximate the finite time-t pdf's on a subset of the support. 
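The finite-mean truncated Pareto used for comparison has a closed-form mean; a small simulation check (with arbitrary illustrative values of the shape alpha and truncation point K, not those of the talk) is:

```python
import numpy as np

alpha, K = 0.5, 1_000.0      # illustrative shape in (0,1] and truncation point
rng = np.random.default_rng(4)
u = rng.uniform(size=100_000)

# Pareto on [1, inf) with S(x) = x^(-alpha): infinite mean for alpha <= 1.
pareto = u ** (-1.0 / alpha)

# Right-truncated Pareto on [1, K] by inverse-CDF sampling.
c = 1.0 - K ** (-alpha)
truncated = (1.0 - c * u) ** (-1.0 / alpha)

# Closed-form mean of the truncated Pareto (valid for alpha != 1).
mean_trunc = alpha / (1.0 - alpha) * (K ** (1.0 - alpha) - 1.0) / (1.0 - K ** (-alpha))
```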

TI 16_2 
Chatterjee, Arpita 
Title 
A note on Dirichlet Process
based semiparametric Bayesian models 
Semiparametric Bayesian models have become increasingly popular over the past few decades. Compared to their parametric counterparts, semiparametric models allow for greater flexibility in capturing parameter uncertainty. Dirichlet process mixed models form a particular class of Bayesian semiparametric models by assuming a random mixing distribution for the mixture, taken to be a realization from a Dirichlet process. In this research, we show that while hierarchical DP models may provide flexibility in model fit, they may not perform uniformly better in other aspects than the parametric models. 

TI 16_4 
Cheng, Chin-I 
Title 
Bayesian
Estimators of the Odd Weibull distribution with censored data 
The Odd Weibull distribution is a three-parameter generalization of the Weibull distribution. Bayesian methods with Jeffreys priors for estimating the parameters of the Odd Weibull distribution with censored data are considered. Adaptive Rejection Sampling (ARS) and Adaptive Rejection Metropolis Sampling (ARMS) are adapted to generate random samples from the full conditionals for inference on the parameters. Bayesian and maximum likelihood estimates based on censored data are compared. In order to clarify and advance the validity of the Bayesian and likelihood estimators of the Odd Weibull distribution, one simulated data set and two examples concerning failure times are analyzed. 

TI 6_3 
Choo-Wosoba, Hyoyoung 
Title 
Marginal Regression Models for Clustered Count Data Based on the Zero-Inflated Conway-Maxwell-Poisson Distribution with Applications 
We propose a marginal regression model with a Conway-Maxwell-Poisson (CMP) distribution for clustered count data exhibiting excessive zeros and a wide range of dispersion patterns. Two estimation methods (MPL and MES) are introduced. Finite-sample behaviors of the estimators and the resulting confidence intervals are studied in an extensive simulation study. We apply our methodologies to data from the Iowa Fluoride Study and identify significant protective and risk factors among dietary and non-dietary covariates. We also provide an application to an underdispersed data set from a maize hybrids experiment. 
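The CMP probability mass function at the heart of this model is P(X = x) ∝ λ^x/(x!)^ν. A minimal log-scale evaluation with a simple series truncation (an illustrative sketch, not the authors' estimation code):

```python
import math

def cmp_pmf(x, lam, nu, max_terms=200):
    """Conway-Maxwell-Poisson pmf: P(X = x) = lam^x / (x!)^nu / Z(lam, nu).

    nu = 1 recovers the Poisson; nu < 1 gives overdispersion and nu > 1
    underdispersion. The normalizer Z is approximated by truncating its
    series; working on the log scale avoids overflow in the factorial powers.
    """
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1)
                 for j in range(max_terms)]
    m = max(log_terms)
    log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
    return math.exp(x * math.log(lam) - nu * math.lgamma(x + 1) - log_z)
```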

TI 14_4 
Christara,
Christina C. 
Title 
PDE option
pricing with variable correlations 
Correlation between financial quantities plays an important role in pricing financial derivatives. Existing popular models assume that correlation either is constant or exhibits some deterministic behaviour. However, market observations suggest that correlation is a more complicated process. We consider correlation structures that are guided by regime switching or by a stochastic process. We derive the related Partial Differential Equation (PDE) problems for pricing several types of financial derivatives and solve them by accurate and efficient numerical methods. We also study the effect of the model parameters on the prices. We present the PDEs, the numerical solution, and a comparison of the PDE results with Monte Carlo simulations. We also discuss the relevant numerical challenges. This is joint work with Chun Ho (Nat) Leung. 

TI 20_4 
Coelho, Carlos A. 
Title 
Likelihood ratio test for the equality of mean vectors when the joint covariance matrix is block-circulant or block compound symmetric 
The test developed and presented may be seen not only as a generalization of the common test of equality of mean vectors, under the assumption of independence of the corresponding random vectors or of independence of the samples, but also as a generalization of the tests for equality of means under the assumption of a circulant or compound symmetric covariance matrix. Since the exact p.d.f. and c.d.f. of this likelihood ratio statistic do not have tractable expressions, near-exact distributions are developed, which enable easy computation of sharp quantiles and p-values, and thus the practical implementation of these tests. 

TI 15_4 
Cooray, Kahadawala 
Title 
Strictly
Archimedean Copula with Complete Association for Multivariate Dependence
Based on the Clayton Family 
The Clayton copula is one of the most discussed Archimedean copulas for dependence measurement. However, its major drawback is that when it accounts for negative dependence, the copula becomes non-strict and its support depends on the parameter. To address this issue, this talk introduces a new two-parameter family of strict Archimedean copulas to measure exchangeable multivariate dependence. Closed-form formulas for the complete monotonicity and the d-monotonicity parameter region of the generator, the copula distribution function, and Kendall's distribution function are derived. Simulation studies are conducted to assess the performance of the ML estimators of the d-variate copula under known margins. 

TI 9_1 
Cossette, Hélène / Itre Mtalai / Etienne Marceau / Déry Veilleux 
Title 
Archimedean copulas: Aggregation and capital allocation 
Risk aggregation evaluates the distribution of the sum of n random variables representing individual risks. Researchers in insurance and finance have investigated the aggregation of dependent risks to determine an adequate level of capital to offset the global risk S = X₁ + … + Xₙ of a portfolio of n risks with known joint distribution. Risk measures, such as the VaR and TVaR, can be used to calculate the minimum capital requirement associated with S and the amount of capital allocated to each risk within the portfolio. We consider a portfolio of dependent risks represented by a vector of positive random variables whose joint distribution function is defined by a copula C and its margins F₁, …, Fₙ. We assume that the copula C is either an Archimedean copula or a nested Archimedean copula. Our objective is to introduce a deterministic method for computing the distribution of S which relies on the fact that an Archimedean copula can be represented as a common mixture with a positive mixing variable. The exchangeability property of Archimedean copulas restricts their application. We hence extend some results to nested Archimedean copulas and propose a different approach that permits getting around certain constraints of these copulas. 
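The common-mixture representation invoked here can be illustrated with the Clayton family, whose generator is the Laplace transform of a Gamma mixing variable (a standard Marshall-Olkin-type construction; the dimension and parameter below are arbitrary illustrative choices):

```python
import numpy as np

def clayton_sample(n, d, theta, rng):
    """Sample a d-variate Clayton copula (theta > 0) via its common-mixture
    representation: U_i = psi(E_i / V) with V ~ Gamma(1/theta) the positive
    mixing variable, E_i iid Exp(1), and psi(t) = (1 + t)^(-1/theta)."""
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))
    e = rng.exponential(size=(n, d))
    return (1.0 + e / v) ** (-1.0 / theta)

rng = np.random.default_rng(1)
# Kendall's tau for Clayton is theta/(theta + 2) = 0.5 here.
u = clayton_sample(50_000, 2, theta=2.0, rng=rng)
```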

TI 19_1 
Dang, Sanjeena 
Title 
Mixtures of Dirichlet-Multinomial Regression Models for Microbiome Data 
The human gut microbiome is a source of great genetic and metabolic diversity. Microbiome samples that share similar biota compositions are known as enterotypes. Exploring the relationship between biological/environmental covariates and the taxonomic composition of the gut microbial community can shed light on the enterotype structure. Dirichlet-multinomial models have previously been suggested to investigate this relationship; however, these models did not account for any latent group structure. Here, a finite mixture of Dirichlet-multinomial regression models is proposed and illustrated. These models account for the enterotype structure and allow for a probabilistic investigation of the relationship between bacterial abundance and biological/environmental covariates within each inferred enterotype. Furthermore, a generalization of these models is also proposed that can incorporate the concomitant effect of the covariates on the resulting mixing proportions. 
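The Dirichlet-multinomial building block of these mixtures is straightforward to simulate; the concentration parameters and read depth below are hypothetical values for illustration:

```python
import numpy as np

def dirichlet_multinomial_sample(alpha, n_reads, size, rng):
    """Draw count vectors from a Dirichlet-multinomial: a composition
    p ~ Dirichlet(alpha) per sample, then counts ~ Multinomial(n_reads, p)."""
    p = rng.dirichlet(alpha, size=size)
    return np.array([rng.multinomial(n_reads, pi) for pi in p])

rng = np.random.default_rng(5)
alpha = np.array([2.0, 3.0, 5.0])   # hypothetical taxa concentrations
counts = dirichlet_multinomial_sample(alpha, n_reads=100, size=20_000, rng=rng)
# Marginal mean of taxon i is n_reads * alpha_i / alpha.sum().
```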

TI 19_3 
Dang, Utkarsh 
Title 
Parsimonious skew power-exponential mixture models 
A family of parsimonious mixtures of multivariate power-exponential distributions is presented. The multivariate power-exponential distribution is a flexible elliptical alternative to the Gaussian and Student-t distributions, allowing for both varying tail weight (light or heavy) and peakedness of the data. For particular values of the shape parameter, special and limiting cases of this distribution include the double-exponential, Gaussian, and uniform distributions. Furthermore, an extension of these models is presented that can also model asymmetric data. Computational and inference challenges will be discussed. Lastly, the utility of the proposed models is illustrated using both toy and benchmark data. 

TI 13_4 
De Oliveira, Victor 
Title 
On the
Correlation Structure of Gaussian Copula Models for Geostatistical
Count Data 
We describe a class of random field models for geostatistical count data based on Gaussian copulas. Unlike hierarchical Poisson models often used to describe this type of data, Gaussian copula models allow more direct modelling of the marginal distributions and association structure of the count data. We study in detail the correlation structure of these random fields when the family of marginal distributions is either negative binomial or zero-inflated Poisson; these represent two types of overdispersion often encountered in geostatistical count data. We also contrast the correlation structure of one of these Gaussian copula models with that of a hierarchical Poisson model having the same family of marginal distributions, and show that the former is more flexible than the latter in terms of the range of feasible correlation, sensitivity to the mean function, and modelling of isotropy. An exploratory analysis of a dataset of Japanese beetle larvae counts illustrates some of the findings. All of these investigations show that Gaussian copula models are useful alternatives to hierarchical Poisson models, especially for geostatistical count data that display substantial correlation and small overdispersion. 
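The construction can be sketched in a bivariate toy version: correlate latent Gaussians, transform them to uniforms, and apply the target count quantile function to each margin. All parameter values here are illustrative, not from the paper:

```python
import numpy as np
from scipy import stats

def gaussian_copula_nb_pairs(rho, r, p, n, rng):
    """Simulate pairs of negative binomial counts whose dependence comes
    from a Gaussian copula with latent correlation rho."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    u = stats.norm.cdf(z)                  # uniform margins, Gaussian dependence
    return stats.nbinom.ppf(u, r, p).astype(int)

rng = np.random.default_rng(2)
counts = gaussian_copula_nb_pairs(rho=0.8, r=5, p=0.5, n=20_000, rng=rng)
# Each margin is NB(r, p) with mean r*(1-p)/p = 5; the count correlation
# is positive but attenuated relative to the latent rho.
```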

TI 3_1 
Evans, Michael 
Title 
Measuring
Statistical Evidence Using Relative Belief 
A fundamental concern of any theory of statistical inference is how one should measure statistical evidence. Certainly the words 'statistical evidence', or perhaps just 'evidence', are much used in statistical contexts. Still, it is fair to say that the precise characterization of this concept is somewhat elusive. Our goal here is to provide a definition of how to measure statistical evidence for any particular statistical problem. Since evidence is what causes beliefs to change, we measure evidence by the change in belief from a priori to a posteriori. As such, our definition involves prior beliefs, and this raises issues of subjectivity versus objectivity in statistical analyses. We deal with this through a principle requiring the falsifiability of any ingredients of a statistical analysis. This leads to a discussion of checking for prior-data conflict and measuring the a priori bias in a prior. 
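The "change in belief" principle can be made concrete with a toy discrete example (the numbers are entirely hypothetical): the relative belief ratio RB(θ) = posterior(θ)/prior(θ) exceeds 1 precisely for those values of θ that the data lend evidence to.

```python
from math import comb

# Uniform prior over three candidate success probabilities, binomial data.
thetas = [0.2, 0.5, 0.8]
prior = {t: 1 / 3 for t in thetas}
n, y = 10, 7                         # observe 7 successes in 10 trials
like = {t: comb(n, y) * t ** y * (1 - t) ** (n - y) for t in thetas}
norm = sum(prior[t] * like[t] for t in thetas)
post = {t: prior[t] * like[t] / norm for t in thetas}
# Relative belief ratio: how much the data moved belief at each theta.
rb = {t: post[t] / prior[t] for t in thetas}
```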

TI 12_3 
Fang, Yixin 
Title 
Variable
selection for partially linear models via learning gradients 
Partially linear models, a compromise between parametric and nonparametric regression models, are very useful for analyzing high-dimensional data. Variable selection plays an important role in the use of partially linear models, which consist of both linear and nonlinear components. Variable selection for the linear component has been well studied. However, variable selection for the nonlinear component usually relies on some assumption imposed on the structure of the nonlinear component. For example, variable selection methods have been developed for additive partially linear models and generalized additive partially linear models. In this manuscript, we propose a new variable selection method based on learning gradients for partially linear models without any assumption on the structure of the nonlinear component. The proposed method utilizes the reproducing kernel Hilbert space tool to learn the gradients and the group-lasso penalty to select variables. In addition, a block-coordinate descent algorithm is described and some theoretical properties are derived. The performance of the proposed method is evaluated via simulation studies and a real data application. 

TI 14_2 
Filus, Jerzy 
Title 
Two Kinds of Stochastic Dependencies Bivariate
Distributions; Part 2 
A new class of bivariate probability densities is constructed as stochastic models for some biomedical as well as reliability phenomena. The models are fusions of the already known bivariate "pseudo-distributions" (pseudo-exponential and pseudo-Weibullian, in particular) with a rather new class of bivariate survival functions that, basically, look like a generalization of the first bivariate Gumbel survival function. This generalization is obtained by use of 'additive hazard models' (see Aalen, 1989), which are modifications of the famous model by Cox (1972). The class of "Gumbel-like" models we will present is quite general, so that it possibly contains "most of" the bivariate survival functions met in practical applications. In the biomedical (or reliability) situations we consider, a member of this class is supposed to model some particular stochastic dependence between biomedical quantities arising from a biophysical phenomenon. In addition, a stochastic description of other, more complex types of phenomena is obtained by applying to the previous bivariate distribution a pseudo-linear transformation of the random vector possessing the previously mentioned property of being "Gumbel-like" distributed. Applied to independent random variables, the pseudo-linear transformation produces the pseudo-distributions; applied to random variables having a joint Gumbel-like distribution, it yields a fusion of the two different stochastic models. Some analysis of the "combined" bivariate distributions will be presented. 

TI 14_1 
Filus, Lidia 
Title 
Two
Kinds of Stochastic Dependencies Bivariate Distributions; Part 1 
A new class of bivariate probability densities is constructed as stochastic models for some biomedical as well as reliability phenomena. The models are fusions of the already known bivariate "pseudo-distributions" (pseudo-exponential and pseudo-Weibullian, in particular) with a rather new class of bivariate survival functions that, basically, look like a generalization of the first bivariate Gumbel survival function. This generalization is obtained by use of 'additive hazard models' (see Aalen, 1989), which are modifications of the famous model by Cox (1972). The class of "Gumbel-like" models we will present is quite general, so that it possibly contains "most of" the bivariate survival functions met in practical applications. In the biomedical (or reliability) situations we consider, a member of this class is supposed to model some particular stochastic dependence between biomedical quantities arising from a biophysical phenomenon. In addition, a stochastic description of other, more complex types of phenomena is obtained by applying to the previous bivariate distribution a pseudo-linear transformation of the random vector possessing the previously mentioned property of being "Gumbel-like" distributed. Applied to independent random variables, the pseudo-linear transformation produces the pseudo-distributions; applied to random variables having a joint Gumbel-like distribution, it yields a fusion of the two different stochastic models. Some analysis of the "combined" bivariate distributions will be presented. 

TI 19_2 
Gallaugher, Michael 
Title 
Clustering Clickstream Data Using a Mixture of Continuous
Time Markov Models 
In today's society, the internet is quickly becoming a major source of data. One interesting type of data that can be obtained from the internet is clickstream data, which records a user's web browsing patterns. Clustering is the process of finding underlying group structure in a dataset, and although there has been ample work on clustering clickstream data, the methods often neglect the amount of time spent on each website. By failing to include a time component in the model, we are robbing ourselves of potentially valuable information. We propose a mixture of continuous-time first-order Markov models for the clustering of clickstreams which incorporates the time aspect. Both simulated and real datasets will be considered in the evaluation of the proposed methodology. 

TI 5_3 
Ghosh, Indranil 
Title 
Some
alternative bivariate Kumaraswamy models 
In this paper we discuss
various strategies for constructing bivariate Kumaraswamy
distributions. As alternatives to the Nadarajah, Cordeiro and Ortega (2011) bivariate model, four
different models are introduced utilizing a conditional specification
approach, a conditional survival function approach, an Arnold-Ng bivariate
beta distribution construction approach, and a copula based construction
approach. Distributional properties for such bivariate distributions are
investigated. Parameter estimation strategies for the models are discussed,
as are the consequences of fitting two of the models to a particular data set
involving hemoglobin content in blood samples before and after treatment. 

TI 18_4 
Giurcanu, Mihai 
Title 
Thresholding Least Squares Inference in High-Dimensional Regression Models 
We propose a thresholding least-squares method of inference for high-dimensional regression models in which the number of parameters, p, tends to infinity with the sample size, n. Extending the asymptotic behavior of the F-test in high dimensions, we establish the oracle property of the thresholding least-squares estimator when p = o(n). We propose two automatic selection procedures for the thresholding parameter using Scheffé and Bonferroni methods. We show that, under additional regularity conditions, the results continue to hold even if p = exp(o(n)). Lastly, we show that, if properly centered, the residual-bootstrap estimator of the distribution of the thresholding least-squares estimator is consistent, while a naive bootstrap estimator is inconsistent. In an intensive simulation study, we assess the finite-sample properties of the proposed methods for various sample sizes and model parameters. The analysis of a real-world data set illustrates an application of the methods in practice. 
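The core thresholding step can be sketched as follows. This is a toy version with a fixed threshold on simulated data, not the paper's estimator or its Scheffé/Bonferroni threshold selection:

```python
import numpy as np

def thresholded_ols(X, y, tau):
    """Hard-threshold the OLS estimate: coefficients with |b_j| <= tau
    are set to zero, leaving a sparse estimate."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.where(np.abs(b) > tau, b, 0.0)

rng = np.random.default_rng(3)
n, p = 500, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]               # sparse true signal
y = X @ beta + rng.standard_normal(n)
b_hat = thresholded_ols(X, y, tau=0.25)   # recovers the support of beta
```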

TI 1_2 
Gomez-Deniz, Emilio 
Title 
Computing Credibility Bonus-Malus Premiums Using a Bivariate Discrete Distribution 
A simple modification for computing automobile insurance bonus-malus premiums is proposed here. Traditionally, in automobile insurance the premium assigned to each policyholder is based only on the number of claims made. Therefore, a policyholder who has had an accident producing a relatively small loss is penalised to the same extent as one who has had a more costly accident, which seems unfair. We propose a statistical model which distinguishes between two different types of claims, incorporating a bivariate distribution based on the assumption of dependence. We also describe a bivariate prior distribution that is conjugate with respect to the likelihood. This approach produces credibility bonus-malus premiums that satisfy appropriate transition rules. A practical example of its application is presented, and the results obtained are compared with those derived from the traditional model in which only the number of claims is taken into account. 

TI 5_2 
Hamedani, Gholamhossein 
Title 
Characterizations of Probability Distributions via the Concept of Sub-Independence 
Limit theorems, as well as other well-known results in probability and statistics, are often based on the distribution of the sums of independent (and often identically distributed) random variables rather than the joint distribution of the summands. Therefore, the full force of independence of the summands is not required. In other words, it is the convolution of the marginal distributions that is needed, rather than the joint distribution of the summands. The concept of sub-independence, which is much weaker than that of independence, is shown to be sufficient to yield the conclusions of these theorems and results. It also provides a measure of dissociation between two random variables which is much stronger than uncorrelatedness. In this talk, certain characterizations of probability distributions based on the concept of sub-independence will be presented. 

TI 16_1 
He, Jianghua 
Title 
Bayesian Reliability Assessment of Facility-Level Patient Outcome Measures 
Patient health outcome measures at the facility level are often used as quality indicators of patient care. Within-facility variation of such measures often differs among facilities. The intraclass correlation coefficient, which assumes equal within-subject variation, may not be directly applicable. A signal-to-noise approach can be used to assess the facility-specific reliability of a measure with different within-subject variation among facilities. In this study, we propose a new approach to assessing the reliability of facility-level patient outcome measures in differentiating one facility from others by allowing for facility-specific variation. A Bayesian framework is utilized to handle measures of event rates with non-negligible zeros. 

TI 2_2 
He, Wenqing 
Title 
Improving Performance of Support Vector Machine
Classifiers with Data Adaptive Kernel 
The Support Vector Machine (SVM) is widely used in the classification/prediction of discrete outcomes, especially in high-dimensional data analysis such as gene expression data analysis and image analysis. In this talk, a new enhanced SVM method will be presented. The initial kernel function of the SVM is rescaled in an adaptive way, based on prior knowledge obtained from the conventional SVM, so that the separation between the two classes is effectively enlarged. The modified classifier takes into consideration the distribution of the support vectors in the feature space, and correlation is dealt with by properly selecting only a limited number of parameters. The improvement in prediction accuracy from this data-dependent SVM is shown in numerical studies. 

TI 12_1 
Hirose, Kei 
Title 
Robust
estimation for sparse Gaussian graphical model 
In Gaussian graphical modeling, we often use a penalized maximum likelihood approach with the L1 penalty for learning a high-dimensional graph. However, the penalized maximum likelihood procedure is sensitive to outliers. To overcome this problem, we introduce a robust estimation procedure based on the γ-divergence. The parameter estimation procedure is constructed using the Majorize-Minimization algorithm, which guarantees that the objective function monotonically decreases at each iteration. The method has a redescending property, which is known to be desirable in robust statistics. Extensive simulation studies show that our procedure performs much better than existing methods. 

TI 7_3 
Hlynka, Myron 
Title 
Comments on the Gumbel Distribution 
The talk will discuss the
Gumbel distribution and its relationship to integer partitions. 

TI 11_4 
Hoegh, Andrew 
Title 
Multiscale
Spatiotemporal Modeling for Predicting Civil Unrest 
Civil unrest is a complicated,
multifaceted social phenomenon that is difficult to forecast. Relevant data
for predicting future protests consist of a massive set of heterogeneous data
sources, primarily from social media. A modular approach to extract pertinent
information from disparate data sources is implemented to develop a
multiscale spatiotemporal framework to fuse predictions from algorithms
mining social media. The novel multiscale spatiotemporal framework is
scalable to handle massive spatiotemporal datasets and can incorporate
hierarchical covariates. An efficient sequential Monte Carlo procedure
coupled with the multiscale framework enables rapid computation of a richly
specified Bayesian hierarchical model for spatiotemporal data. 

TI 13_1 
Hughes, John 
Title 
Hierarchical Copula
Regression Models for Areal Data 
Regression analysis for spatially aggregated data is common in a number of fields, including public health, ecology, and econometrics. Often, the goal of such an analysis is to quantify the relationship between an outcome of interest and one or more covariates. The mixed model with proper conditional autoregressive (CAR) spatial random effects is commonly used to model such data but suffers from serious drawbacks. First, an analyst must interpret covariate effects conditionally although marginal effects may be of interest. Second, the dependence parameter of the proper CAR model has an intuitive conditional interpretation, but the parameter's marginal interpretation is complicated and counterintuitive; specifically, spatial units with a similar number of neighbors have different marginal correlations. To overcome these two drawbacks, we propose a copula-based hierarchical model with covariance selection. Our approach allows for unbiased estimation of marginal parameters and thus an 

TI 14_3 
Ishimura, Naoyuki 
Title 
Evolution
of copulas and its applications 
Copulas are known to provide a flexible method for understanding the dependence structure among random events. However, a copula function does not usually involve a time variable. We have developed, on the other hand, the concept of evolution of copulas, which claims that the copula itself evolves according to the time variable. In this presentation, we review our recent study on this evolution of copulas and consider its applications, which include in particular the analysis of exchange rate modeling. 

TI 3_2 
Jang, Gun Ho and
Stein, Lincoln 
Title 
Relative Belief based Signal Segmentation 
Cancers display a considerable
degree of genomic copy number alteration (CNA), manifested as chromosomal and
segmental amplifications and deletions. Many CNA detection algorithms assume
the events follow a locally constant signal model, but low tumor fractions
and/or subclonal heterogeneity create weak signals
that are difficult to interpret accurately. We propose a segmentation method
using a relative belief inference on a locally constant model. The
performance of the proposed method is presented and compared with several
segmentation algorithms including circular binary segmentation,
allele-specific piecewise constant fitting and SCAN algorithms. 

TI 21_2 
Jayalath, Kalanka

Title 
A
Graphical Test for Testing Random Effects in Common Statistical Designs 
Analysis of means (ANOM) is a
powerful graphical testing procedure for comparing means and variances in
fixed effect models. The graphical interpretation of ANOM is a great
advantage over the classical ANOVA approach. However, the ANOM only deals
with the fixed factor effects. In this talk, we discuss the ability to extend
the ANOM approach to testing random effects. We also discuss the use of the
new ANOM approach in many different statistical designs including both random
and mixed effects models with illustrative examples. The power performance of
the proposed procedure is compared to the ANOVA approach via a simulation
study. 

TI 9_4 
Jevtić, Petar/Hurd, Thomas R.

Title 
The joint mortality of couples in continuous time 
This paper introduces a probabilistic
framework for the joint survivorship of couples in the context of dynamic
stochastic mortality models. In contrast to previous literature, where the
dependence between male and female times of death was achieved using a copula
approach, this new framework gives an intuitive and flexible pairwise
cohort-based probabilistic mechanism that can accommodate both deterministic
and stochastic effects that the death of one member of a couple causes on the
other. It is sufficiently flexible to allow modeling of effects that are
short term (broken heart) or long term in their durations. In addition, it
can account for the state of health of both the surviving and the dying
spouse and thus can allow for dynamic and asymmetric reactions of varying
complexity. Finally, it can accommodate the dependence of lives before the
first death. Analytical expressions for bivariate survivorship in
representative models are given, and their estimation, done in two stages, is
seen to be straightforward. First, marginal survivorship functions are
calibrated based on UK mortality data for males and females of chosen
cohorts. Second, the maximum
likelihood approach is used to estimate the remaining parameters from
simulated joint survival data. We show that the calibration methodology is
simple, robust and fast, and can be readily used in practice. 

TI 11_3 
Keefe, Matthew J. 
Title 
Objective
Bayesian Analysis for Gaussian Improper CAR Models 
Choosing appropriate priors for
parameters of Bayesian hierarchical models for areal data is challenging. In
particular, an improper conditional autoregressive (CAR) component is often
used to account for spatial association. The use of vague proper priors for
this model requires the selection of suitable hyperparameters.
In this talk, we derive objective priors for the Gaussian hierarchical model
with an improper CAR component and show that the reference prior results in a
proper posterior distribution. We present results from a simulation study to
compare properties of the proposed Bayesian procedures. We illustrate our
methodology by modeling foreclosure rates in Ohio. 

TI 17_2 
Kim, JongMin 
Title 
Directional Dependence via a Copula Stochastic Volatility
Model 
By a theorem due to Sklar in 1959, a multivariate distribution can be represented
in terms of its underlying margins by binding them together with a copula
function. Copulas are useful devices for explaining the dependence structure
between variables by eliminating the influence of the marginals.
A copula method for understanding multivariate distributions has a relatively
short history in statistics literature; most of the statistical applications
have arisen in the last twenty years. In this talk, directional dependence
via copula stochastic volatility model will be introduced with real example
using financial data. 
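Sklar's representation can be illustrated with a short numerical sketch. The code below is illustrative only and not from the talk: the Clayton copula, its parameter theta = 2, and the Exponential(1) margins are arbitrary choices made here. It binds two exponential margins together with a Clayton copula and checks H(x, y) = C(F(x), G(y)) by simulation:

```python
import math
import random

def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    return (u**-theta + v**-theta - 1.0)**(-1.0 / theta)

def clayton_sample(n, theta, seed=0):
    """Sample (U, V) from a Clayton copula by conditional inversion."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u, w = rng.random(), rng.random()
        v = (u**-theta * (w**(-theta / (1.0 + theta)) - 1.0) + 1.0)**(-1.0 / theta)
        out.append((u, v))
    return out

# Sklar's construction: Exponential(1) margins bound together by the copula
theta, x, y = 2.0, 1.0, 1.0
F = lambda t: 1.0 - math.exp(-t)
pairs = [(-math.log(1.0 - u), -math.log(1.0 - v))
         for u, v in clayton_sample(20000, theta)]
empirical = sum(1 for a, b in pairs if a <= x and b <= y) / len(pairs)
theoretical = clayton_cdf(F(x), F(y), theta)
```

The empirical joint probability agrees with C(F(x), G(y)) up to Monte Carlo error, which is the content of Sklar's decomposition.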

TI 6_1 
Sellers, Kimberly 
Title 
Introducing
the Conway-Maxwell-Poisson distribution 
The Conway-Maxwell-Poisson
(COM-Poisson) distribution is a flexible alternative for modeling count data,
and it is quickly growing in popularity in both the statistics and applied
quantitative disciplines. While the Poisson distribution maintains the
constrained equidispersion assumption (where the
variance and mean are equal), the COM-Poisson distribution allows for data over-
or underdispersion (where the variance is larger or smaller than the mean),
and captures three classical distributions as special cases. This talk will
introduce the distribution and serve as a survey of the work done,
and a prologue to the subsequent talks in the session. 
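The dispersion behaviour described above follows directly from the COM-Poisson pmf P(X = x) ∝ λ^x / (x!)^ν. The sketch below (illustrative code with arbitrary parameter values; the infinite normalizing sum is truncated numerically) shows that ν = 1 recovers the Poisson, while ν > 1 and ν < 1 give under- and overdispersion:

```python
import math

def com_poisson_pmf(lam, nu, max_x=100):
    """COM-Poisson pmf P(X = x) proportional to lam^x / (x!)^nu, computed
    in log space and normalized over a numerical truncation of the support."""
    logw = [x * math.log(lam) - nu * math.lgamma(x + 1) for x in range(max_x)]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    z = sum(w)
    return [wi / z for wi in w]

def mean_var(pmf):
    mean = sum(x * p for x, p in enumerate(pmf))
    var = sum((x - mean)**2 * p for x, p in enumerate(pmf))
    return mean, var

m1, v1 = mean_var(com_poisson_pmf(4.0, 1.0))   # nu = 1: Poisson, variance = mean
m2, v2 = mean_var(com_poisson_pmf(4.0, 2.0))   # nu > 1: underdispersed
m3, v3 = mean_var(com_poisson_pmf(0.9, 0.3))   # nu < 1: overdispersed
```

Working in log space avoids overflow in (x!)^ν, and the truncation point only needs to lie well past the bulk of the distribution.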

TI 10_3 
Kleiber, Christian 
Title 
On moment indeterminacy of the generalized variance 
The moment problem asks whether
a distribution can be uniquely characterized by the sequence of its moments. In
the univariate case, counterexamples have been known for decades, e.g., the
lognormal and certain generalized gamma distributions. In the multivariate
case, knowledge is still much more limited. Here we consider a univariate
sampling distribution from classical multivariate analysis, the generalized
variance, which leads to a Stieltjes-type moment
problem. It is shown that this object is not determined by the sequence of
its moments although all the moments are finite. There is a dimension effect:
in the bivariate case the distribution is moment-determinate, whereas in
dimensions greater than two the distribution is moment-indeterminate. 

TI 4_3 
Li, Pengfei 
Title 
Controlling
IER and EER in replicated regular two-level factorial experiments 
Replicated regular two-level
factorial experiments are very useful in industry. The goal of these
experiments is to identify active effects that affect the mean and variance
of the response. Hypothesis testing procedures are widely used for this
purpose. However, the existing methods give results that are either too
liberal or too conservative in controlling the individual and experimentwise
error rates (IER and EER). In this paper, we propose a resampling procedure
and an exact-variance method to identify active effects for the mean and
variance, respectively, of the response. Monte Carlo studies show that our
methods control the IER and EER well. 

TI 13_2 
Madsen, Lisa 
Title 
Simulating Dependent Count Data 
Statisticians simulate data for
a variety of purposes: to assess and compare the performance of statistical
procedures and to design studies. Therefore, the ability to simulate
realistic data is an important tool. I will discuss a method to simulate
count-valued dependent random variables from the Gaussian copula that mimic
observed data sets. Researchers typically characterize dependence by
Pearson's product-moment correlation, but for small-mean counts, this is not
as sensible as other measures such as Spearman's rank correlation. Furthermore,
for small-mean count distributions, the high probability of ties requires
special attention. I will show how to determine the Gaussian copula
correlation matrix that will lead to any specified feasible Spearman or
Pearson correlation matrix. I will demonstrate the method with an example
based on an actual data set. 
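The forward step of this simulation recipe can be sketched in a few lines. The code below is an illustrative sketch, not the author's method: it only draws correlated normals, maps them through the normal CDF, and inverts a Poisson CDF; the inverse problem the talk addresses, choosing the Gaussian correlation to achieve a target Spearman or Pearson correlation, is not shown.

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_quantile(u, lam):
    """Smallest x with P(X <= x) >= u for a Poisson(lam) margin."""
    x, p = 0, math.exp(-lam)
    cdf = p
    while cdf < u:
        x += 1
        p *= lam / x
        cdf += p
        if p == 0.0:  # guard against u rounding to 1.0
            break
    return x

def simulate_counts(n, lam1, lam2, rho, seed=1):
    """Dependent Poisson pairs via a bivariate Gaussian copula."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
        pairs.append((poisson_quantile(norm_cdf(z1), lam1),
                      poisson_quantile(norm_cdf(z2), lam2)))
    return pairs

pairs = simulate_counts(2000, 3.0, 2.0, 0.7)
```

The simulated pairs have the requested Poisson margins, and the Pearson correlation of the counts is somewhat below the Gaussian copula correlation, which is exactly the attenuation effect that motivates matching the copula correlation to a target.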

TI 9_3 
Mailhot, Mélina

Title 
Reciprocal
Reinsurance Treaties Under an Optimal and Fair Joint Survival Probability 
Optimal reinsurance treaties
between an insurer and a reinsurer considering both parties' interests will
be presented. Most articles only focus on the insurer's point of view. The
latest research considering both sides have considerably oversimplied
the joint survival function. This situation leads to an unrealistic optimal
solution; one of the parties can make riskfree profits while the other bears
all the risk. A fair joint survival probability will be defined and optimized
for a reciprocal reinsurance treaty under different principles and types of
contract. 

TI 17_5 
Makubate, Boikanyo, Galetlhakanelwe Motsewabagale,
Broderick O. Oluyede, Alphonse Amey 
Title 
Dagum Power Series Class of
Distributions with Applications to Lifetime Data 
In this
paper, we present a new class of distributions called the Dagum-Power Series
(DPS) distribution and in particular the Dagum-Poisson (DP) distribution. This
model is obtained by compounding the Dagum distribution with the power series
distribution. The hazard function, reverse hazard function, moments and mean
residual life function are obtained. Methods of finding estimators such as
Minimum Distance, Maximum Product of Spacing, Bayesian estimators, Least
Squares, Weighted Least Squares and Maximum Likelihood will be discussed. A
simulation study will be carried out to compare these estimation methods. Each
method has its own strengths and weaknesses. We also carry out some hypothesis
tests using the Wald test statistic. This distribution will be shown to be a
competitive model for describing censored observations in lifetime reliability
problems. Finally, we apply the Dagum-Poisson distribution to a real dataset
to illustrate the usefulness and applicability of the distribution. 

TI 22_4 
Mallick, Avishek

Title 
Robustness
of Multiple Comparison Methods for One-way and Two-way ANOVA with Repeated
Measurements 
In many experiments several
observations are taken over time or with several treatments applied to each
subject. These observations tend to be highly correlated, particularly those
observed adjacent to each other with respect to time. In this paper we
investigate the effect of the correlations among the observations in
one-way and two-way ANOVA. A modification of the standard tests
suitable for an AR(1) correlation structure is proposed
and its properties are investigated. We also apply the approximations to the
distribution of F tests as suggested by Andersen, Jensen, and Schou (1981) and
carry out the analysis. The modified procedure allows us to have better
control of the nominal significance level α. Consequently, the multiple
comparisons and multiple tests based on this modified procedure will lead to
conclusions with better accuracy. 

TI 4_1 
Mandal, Saumen 
Title 
Optimal designs for minimizing correlations among
parameter estimators in a linear model 
In many
regression designs it is desired to render certain parameter estimators
uncorrelated with others. Motivated by this fact, we construct optimal
designs for minimizing covariances among the
parameter estimators in a linear model, thereby rendering the parameter
estimators approximately uncorrelated with each other. In the case of
rendering a parameter estimator uncorrelated with another two estimators, we
set up a compound optimization problem and transform the problem to one of
maximizing two functions of the design weights simultaneously. The approaches
are formulated for a general regression model and are explored through some
examples including one practical problem arising in Chemistry. 

TI 19_4 
McNicholas, Paul 
Title 
Mixture
of Coalesced Generalized Hyperbolic Distributions 
A mixture of multiple scaled
generalized hyperbolic distributions (MSGHDs) is introduced. Then, a mixture
of coalesced generalized hyperbolic distributions is developed by joining a
finite mixture of generalized hyperbolic distributions with an MSGHD. After
detailing the development of the mixture of MSGHDs, which arises via
implementation of a multidimensional weight function, the density of the
coalesced distribution is developed. A parameter estimation scheme is
developed using the everexpanding class of MM algorithms and the Bayesian
information criterion is used for model selection. The issue of cluster
convexity is examined and a special case of the MSGHDs is developed that is
guaranteed to have convex clusters. These approaches are illustrated and
compared using simulated and real data. 

TI 6_4 
Morris, Darcy S. 
Title 
Bivariate Conway-Maxwell-Poisson Distribution: Formulation,
Properties, and Inference 
The bivariate Poisson
distribution is a popular distribution for modeling bivariate count
data. Its basic assumptions and marginal equidispersion,
however, may prove limiting in some contexts. To allow for data dispersion,
we developed a bivariate Conway-Maxwell-Poisson (COM-Poisson) distribution
that includes the bivariate Poisson, bivariate geometric, and bivariate
Bernoulli distributions all as special cases. As a result, the bivariate
COM-Poisson distribution serves as a flexible alternative and unifying
framework for modeling bivariate count data, especially in the presence of
data dispersion. This is joint work with Kimberly Sellers (Georgetown
University) and Narayanaswamy Balakrishnan
(McMaster University). 

TI 20_3 
Moura, Ricardo 
Title 
Likelihood-based
exact inference for Posterior and Fixed-Posterior Predictive Sampling
synthetic data under the MLR model 
Synthesizing datasets as a
Statistical Disclosure Control (SDC) technique has become more and more
popular. Under the multivariate linear regression model, likelihood-based
exact inference for singly and multiply imputed synthetic data generated
under Posterior Predictive Sampling (PPS) will be presented, filling a gap in
the existing SDC literature. Likelihood-based exact inference will also be
presented for multiply imputed data generated via a new method, called
Fixed-Posterior Predictive Sampling (FPPS), proposed to overcome problems
inherent to the PPS method. An application using U.S. 2000 Current Population
Survey data will be discussed and comparisons between PPS and FPPS will be
presented. 

TI 3_3 
Muthukumarana, Saman 
Title 
Noninferiority Hypothesis Testing in Two-arm Trials
using Relative Belief Ratios 
We discuss a Bayesian approach for
assessing noninferiority in two-arm trials using the relative belief ratio. A
relative belief ratio is a measure of the evidence in favour
of a hypothesis. It is similar to the Bayes factor as both measure the change
in belief from a priori to a posteriori, but it has
better optimal properties. Under different conditions, we obtain the
posterior distribution of the difference in treatment effects between
experimental treatment and reference treatment. Once this distribution is
determined, we propose a Bayesian decision criterion using the relative
belief ratio. We illustrate the proposed method by applying it to data
arising from two-arm clinical trials. Some extensions to discrete data with
excessive zeros will also be discussed. 

TI 21_4 
Ng, Hon Keung Tony 
Title 
Statistical
Inference for Component Distribution from System Lifetime Data 
In this talk, statistical
inference of the reliability characteristics of the components in the system
based on the lifetimes of systems will be discussed. We study the problem of
testing the homogeneity of distributions of component lifetimes based on
system lifetime data with known system signatures. Both parametric and
nonparametric procedures are developed for this problem. The performance and
limitations of the proposed nonparametric method are discussed. Then, we
assume the component lifetimes follow exponential distributions and develop
exact and asymptotic parametric tests. A Monte Carlo simulation study is used
to compare the performance of different parametric and nonparametric
procedures. 

TI 7_4 
Nguyen, Christine and Huang, Mei-Ling 
Title 
On High Quantile Regression 
The estimation of conditional
quantiles at very high or low tails of a heavy tailed distribution is of interest
in numerous applications. We study a linear quantile regression model, which
uses an L1 loss function and the optimal solution of a linear program for
estimating the regression coefficients. This paper proposes a weighted
quantile regression method for certain extreme value sets. Monte Carlo
simulations show good results for the proposed weighted method. Comparisons
of the proposed method and existing methods are given. The paper also
investigates real-world examples by using the proposed weighted method. 
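The L1 (check) loss underlying quantile regression can be sketched in a few lines. The toy code below is illustrative only: it estimates a single unconditional quantile by direct minimization over candidate values, not the linear-programming solution or the weighted extreme-value method the abstract proposes. It shows that minimizing the pinball loss recovers a sample quantile:

```python
import random

def pinball_loss(tau, residuals):
    """Check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return sum(u * (tau - (1.0 if u < 0 else 0.0)) for u in residuals)

def quantile_by_loss(data, tau):
    """A tau-quantile of the sample minimizes the total pinball loss;
    the objective is piecewise linear, so searching over the data
    points themselves is enough."""
    return min(data, key=lambda q: pinball_loss(tau, [y - q for y in data]))

rng = random.Random(0)
sample = [rng.expovariate(1.0) for _ in range(801)]
q50 = quantile_by_loss(sample, 0.5)  # close to ln 2 for Exponential(1)
q90 = quantile_by_loss(sample, 0.9)  # close to ln 10
```

In a regression setting the same loss is minimized over coefficients rather than over a single location, which yields the linear program mentioned in the abstract.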

TI 18_3 
Nkurunziza, Sévérien

Title 
A class
of restricted estimators in multivariate measurement error regression model 
In this paper, we study an
estimation problem in multivariate regression model with measurement error.
In particular, we consider the case where the regression coefficient may
satisfy some restrictions. We propose the unrestricted estimator (UE) and a
class of restricted estimators, which includes as special cases three
restricted estimators given in the recent literature. Further, we study the
asymptotic properties of the proposed class of estimators under the null and
alternative hypotheses. To this end, we generalize some findings underlying
the elliptically contoured distributions. Thanks to the generalized findings,
we establish Asymptotic Distributional Risk (ADR) for the UE as well as the
ADR of any member of the proposed class of the restricted estimators and we
compare their relative performance. It is established that near the null
hypothesis, the restricted estimators (REs) perform better than the UE. But
the REs perform worse than the UE when one moves far away from the null
hypothesis. Finally, in order to illustrate the application of the proposed
method, we present some simulations and we analyze a real data set. The numerical
findings corroborate the established theoretical results. 

TI 10_1 
Nolde, Natalia 
Title 
Multivariate lighttailed distributions: from the
asymptotic shape of sample clouds to properties of multivariate extremes. 
Sample clouds
of multivariate data points from lighttailed distributions can often be
scaled to converge onto a deterministic set as the sample size tends to
infinity. It turns out that the shape of this limit set can be related to a
number of extremal tail and dependence properties of the underlying
multivariate distribution. In this talk, I will present several simple
relations, and illustrate how they can be used to replace frequently
cumbersome or intractable analytical computations. 

TI 15_3 
Oh, Dong Hwan 
Title 
TimeVarying Systemic Risk: Evidence from a
Dynamic Copula Model of CDS Spreads 
This
paper proposes a new class of copula-based dynamic models for high-dimensional
conditional distributions, facilitating the estimation of a wide variety of
measures of systemic risk. Our use of copula-based models enables the
estimation of the joint model in stages, greatly reducing the computational
burden. We use the proposed new models to study a collection of daily CDS
spreads on 100 U.S. firms. We find that while the probability of distress for
individual firms has been greatly reduced since the 2008 financial crisis, a
measure of systemic risk is substantially higher now than in the precrisis
period. 

TI 17_4 
Oluyede, Broderick O. 
Title 
The Burr XII Weibull Power Series
Distribution: Theory and Applications 
A new
class of power series distributions is developed and presented. In
particular, the new Burr XII Weibull-Poisson (BWP) distribution is introduced
and its properties are explored in detail. Some estimation techniques
including maximum likelihood estimation method are used to estimate the model
parameters and finally applications of the model to real data sets are
presented to illustrate the usefulness of the proposed class of distributions. 

TI 22_2 
Otunuga, Michael 
Title 
Distribution Models of Energy Commodity Spot
Price Processes 
In this
work, we undertake a study to shed light on the world oil market and price
movement, the price balancing process and energy commodity behavior. A system
of stochastic models for the dynamics of the energy pricing process is
proposed. Different methods for parameter estimation are discussed. In
addition, by developing a Local Lagged Adapted Generalized Method of Moments
(LLGMM) method, an attempt is made to compare the simulated estimates derived
using LLGMM and other existing methods. These developed results are applied
to the Henry Hub natural gas, crude oil, coal, and ethanol data sets. 

TI 8_3 
Paolella, Marc 
Title 
Stable Paretian
Distribution Testing 
A fast method
for estimating the parameters of a stable-APARCH model not requiring
likelihood or iteration is proposed. Several powerful tests for the
(asymmetric) stable Paretian distribution with tail index $1 < \alpha <
2$ are developed and used for assessing the appropriateness of the stable
assumption as the innovations process in stable-GARCH-type models for daily
stock returns. Overall, there is strong evidence against the stable as the
correct innovations assumption for all stocks and time periods, though for many
stocks and windows of data, the stable hypothesis is not rejected. 

TI 22_1 
Pararai, Mavis 
Title 
A New Lifetime Distribution With Applications 
The beta
Lindley-Poisson (BLP) distribution, which is an extension of the
Lindley-Poisson distribution, is introduced and its properties are explored.
This new distribution represents a more flexible model for lifetime data.
Some statistical properties of the proposed distribution including the
expansion of the density function, hazard rate function, moments and moment
generating function, skewness and kurtosis are explored. Rényi
entropy and the distribution of the order statistics are given. The maximum
likelihood estimation technique is used to estimate the model parameters and
finally applications of the model to real data sets are presented for the
illustration of the usefulness of the proposed distribution. 

TI 2_3 
Peng, Yingwei 
Title 
Prediction accuracy for cure
probability in cure models 
Prediction
accuracy of a cure model to predict the cure probability of a subject is an
important but not well addressed issue in survival analysis. We propose a
method to assess the prediction accuracy of a mixture cure model in
predicting cure probability based on inverse probability of censoring weights
to incorporate the censoring and latent cure status in the data. The
consistency of the estimator is examined. A simulation study is conducted to
investigate the performance of the estimator based on training data only. An
application of the method to a real data set is illustrated. 
TI 8_2 
Pigeon, Mathieu 
Title 
Composite
(mixed) models for individual loss reserving 
In this talk, we consider composite models
(CM) based on a distribution f up to an
unknown threshold and a distribution g thereafter. Instead of using a
single threshold value applying uniformly to the whole dataset, a composite mixed
model (CMM) allows for heterogeneity with respect to the threshold and lets it
vary among observations. More specifically, the threshold value for a
particular observation is seen as the realization of a random variable and
the CMM is obtained by averaging over the population of interest. We apply
these models, and some extensions, to evaluate loss reserves in a microlevel
actuarial dataset. We illustrate results with an empirical analysis using a
real portfolio as well as with simulations. 
TI 15_1 
Plante, JeanFrançois 
Title 
Rank Correlation under Categorical
Confounding 
Rank
correlation is invariant to marginal transformations, but it is not immune to
confounding. Assuming a categorical confounding variable is observed, the author
proposes weighted coefficients of correlation developed within a larger
framework based on copulas. While the weighting is clear under the assumption
that the dependence is the same within each group implied by the confounder,
the author extends the Minimum Averaged Mean Squared Error (MAMSE) weights to
borrow strength between groups when the dependence may vary across them.
Asymptotic properties of the proposed coefficients are derived and
simulations are used to assess their finite sample properties. 

TI 1_3 
Prieto, Faustino and Sarabia,
Jose Maria 
Title 
Family of generalized power law (GPL)
distributions: Properties and Applications 
Many
real phenomena can be modelled by the Power Law (Pareto) distribution in
their upper tail. However, that distribution usually fails when we focus on
their whole range. In this paper, we provide empirical evidence that the
family of Generalized Power Law (GPL) distributions can be useful for
modelling the whole range of those real phenomena with a power law tail. To
do that, we combine the maximum likelihood method, as a fitting technique,
with a Kolmogorov-Smirnov test based on bootstrap resampling, as a
goodness-of-fit test. In addition, we compare that family of distributions
with other well-known distributions. 

TI 8_4 
Provost, Serge 
Title 
Differentiated Logdensity
Estimates and Approximants as Rational Functions 
We
propose a density approximation methodology whereby the derivative of the
logarithm of a density approximant is expressed as a polynomial or a rational
function. The polynomial coefficients are determined by matching moments and
solving the resulting system of linear equations. This methodology is applied
to two test statistics as well as certain mixtures of density functions. As
well, it is explained that this approach can produce density estimates. 
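As a minimal illustration of the moment-matching idea (a degree-1 sketch of my own, not the authors' implementation): if (log f)'(x) = c0 + c1 x and the density vanishes in the tails, integration by parts gives the linear system sum_k c_k mu_{k+j} = -j mu_{j-1} for j = 0, 1 in the coefficients. For a normal sample the solved coefficients recover c1 = -1/sigma^2 and c0 = mu/sigma^2:

```python
import random

def logdensity_derivative_coeffs(m1, m2):
    """Solve the j = 0, 1 moment equations for (log f)'(x) = c0 + c1*x:
        j = 0:  c0 + c1*m1      = 0
        j = 1:  c0*m1 + c1*m2   = -1
    whose solution is c1 = -1/(m2 - m1^2), c0 = m1/(m2 - m1^2)."""
    var = m2 - m1**2
    return m1 / var, -1.0 / var

rng = random.Random(42)
data = [rng.gauss(2.0, 3.0) for _ in range(50000)]
m1 = sum(data) / len(data)            # first sample moment
m2 = sum(x * x for x in data) / len(data)  # second sample moment
c0, c1 = logdensity_derivative_coeffs(m1, m2)
# For N(2, 9): (log f)'(x) = (2 - x)/9, i.e. c0 is near 2/9 and c1 near -1/9
```

Higher polynomial degrees (or a rational function, as in the talk) lead to the same kind of linear system in more moments, which is what makes the approach computationally simple.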

TI 6_2 
Raim, Andrew 
Title 
A flexible zeroinflated model to address data
dispersion 
The
Conway–Maxwell–Poisson distribution has seen increased interest in recent
years due to its ability to model both overdispersion
and underdispersion relative to the Poisson
distribution. This work considers a zero-inflated Conway–Maxwell–Poisson
(ZICMP) distribution for the common problem of excess zeroes in count data.
ZICMP becomes a flexible regression model by linking covariates to its count
rate and zero-inflation parameters. Through simulation, we examine some
properties of the maximum likelihood estimator and a test for
equidispersion. ZICMP performs favorably compared to
related count models in analyzing several synthetic datasets, as well as a
real study of unwanted pursuit behaviors in separated couples. 

TI 8_1 
Ren, Jiandong 
Title 
MomentBased Density
Approximations for Aggregate Losses 
The determination
of the distribution of aggregate losses is of crucial importance for an
insurer. We apply a momentbased density approximation method to approximate
the distributions of univariate and bivariate aggregate losses. The proposed
technique, which is conceptually simple and computationally efficient,
constitutes a viable alternative to the commonly used recursive and FFT
methods. As well, given a set of observed aggregate losses, the methodology
advocated herein can readily be applied in conjunction with the sample
moments for modeling purposes. 

TI 10_4 
Richter, WolfDieter 
Title 
Statistical reasoning on scaling parameters in
dependent p-generalized elliptically contoured distributions 
Scaling
parameters of two dependent variables having known expectations are compared
if the two-dimensional observation vector follows a p-generalized
elliptically contoured distribution. Basic properties of the geometric
representation of the multivariate sample distribution are used in
constructing exact significance tests and confidence estimates. 

TI 1_1 
Saez-Castillo, Antonio Jose and Conde-Sanchez,
Antonio 
Title 
Regression models based on
extended Poisson distributions in R 
Many
discrete probability distributions have been proposed to extend the Poisson
distribution and solve the problem of lack of equidispersion, mainly due to
the presence of individual heterogeneity, but also caused by the existence of
a negative contagion effect. Only a reduced subset of these extended Poisson
distributions has been employed to develop regression models for count data.
In this work, we present a survey of such models which have been implemented
in the R statistical software. Code to describe applications in Business,
Management and Economics is included and commented to facilitate their use. 

TI 15_2 
Samanthi, Ranadeera

Title 
Comparing the Riskiness of Dependent Insurance
Portfolios 
A
nonparametric test based on nested L-statistics to compare the riskiness of
portfolios was introduced by Brazauskas, Jones, Puri, and Zitikis (2007). In
this work, we investigate how the performance of the test changes when
insurance portfolios are dependent. To achieve that goal, we perform a
simulation study using spectral risk measures. Further, three insurance
portfolios are generated, and their interdependence is modeled with
three-dimensional elliptical copulas. It is found that the presence of
comonotonicity makes the test liberal for all the risk
measures under consideration. We illustrate how to incorporate such findings
into sensitivity analysis of decisions. 

TI 1_4 
Sarabia, Jose Maria and Prieto, Faustino 
Title 
A Hierarchy of Multivariate Pareto
Distributions with Applications in Risk Analysis 
The
Pareto distribution and all its different versions have long been used as a suitable
model for many nonnegative economic variables, including losses and other
variables in risk analysis. In this paper we introduce a hierarchy of
multivariate Pareto distributions. The hierarchy is composed by three
families, which permits more and more flexibility. We consider the aggregated
risks and we study the individual and collective risk models based on the
three dependence structures. In two of these families we consider some
relevant collective models with Poisson and negative binomial as primary
distributions. Finally, some
applications with data are given. 

TI 7_2 
Sclove, Stanley 
Title 
Extreme Values or Mixture Distribution? 
For
modeling a dataset of employee days ill, or accidents among insureds, levels
of granularity are considered in describing the population, from a single
distribution, possibly with extreme values, to a bimodal distribution, to a
mixture of two or more distributions, to modeling the population at the
individual level. 

TI 16_3 
Shahtahmassebi, Golnaz 
Title 
Bayesian Estimation of Change
Point problems using Conditionally Specified Prior Distributions with
Applications 
In data
analysis, change point problems correspond to abrupt changes in stochastic mechanisms
generating data. The detection of change points is a relevant problem in the
analysis and prediction of time series. In this talk, we propose and
illustrate a Bayesian solution to the estimation of change point problems.
The estimation is based on a broad class of conjugate prior distributions
constructed from a conditional specification methodology. Hyperparameter
elicitation methodologies are discussed and simulation from the resulting
posterior distributions is obtained using the Gibbs sampler. We demonstrate some
examples with simulated and real data. 

TI 4_2 
Sinha, Sanjoy 
Title 
Joint modeling of longitudinal and survival data
with a covariate subject to limit of detection 
Joint
models are often used for investigating the effect of an endogenous
timedependent covariate on survival times. I will discuss a novel method for
jointly analyzing longitudinal and timetoevent data when a covariate is
subject to the limit of detection. We often assume latent processes based on
random effects in order to describe the association between longitudinal and
timetoevent data. We study the effects of misspecified
random effects distributions on the estimates of the model parameters. We
also present an application of the proposed method using a large clinical
dataset. 

TI 21_3 
So, Hon Yiu 
Title 
The EM Algorithm for One-shot Device Testing with Competing Risks under
Different Lifetime Distributions 
In this
talk, we extend the recent works of Balakrishnan and
Ling by introducing a competing risk model into a oneshot device testing
analysis under accelerated life test setting. Expectation maximization (EM)
algorithms are developed for the estimation of model parameters under di_erent lifetime distributions. Extensive Monte Carlo
simulations are carried out to assess the performance of the proposed method
of estimation. The advantages of the EM algorithms over the traditional
Fisher scoring method are displayed through simulation. 

TI 13_3 
Song, Peter 
Title 
Copula Random Field with
Application to Longitudinal Neuroimaging Data Analysis 
Motivated
by the needs of analyzing massive longitudinal imaging data, we present an
extension of the GeoCopula model proposed by Bai et al.
(2014). This new model, termed imageCopula,
helps us address multilevel spatial-temporal dependencies arising from
longitudinal imaging data. We propose an efficient composite likelihood
approach by constructing joint composite estimating equations (JCEE) and
develop a computationally feasible algorithm to solve the JCEE. We show that
the computation is scalable to large-scale imaging data. We conduct several
simulation studies to evaluate the performance of the proposed models and
estimation methods. We apply the imageCopula to
analyze a longitudinal PET data set from the Alzheimer's Disease Neuroimaging
Initiative (ADNI) study. 

TI 5_1 
Su, Steve 
Title 
Transformation and Family of Generalised Lambda Distributions 
Generalised lambda distributions (GLDs) are
very versatile distributions that can effectively model a wide range of
continuous empirical data, despite their simple-looking formulae. The
versatility of GLDs can be extended further by considering one-to-one,
monotonic transformations of GLD variables to generate new distributions. This
presentation discusses the theory, application and fitting algorithms of
exponential, arctan, inverse (domain of data being
positive or negative, but not both) and squared (positive values only)
transformations of GLDs for survival analysis, truncated data and extreme
value modelling, and for attaining shapes that traditionally can only be achieved
using mixtures of statistical distributions. 

TI 11_1 
Tegge, Allison N. 
Title 
Bayesian analysis for multi-subject time-course
RNA-seq experiments 
We
introduce Bayesian methodology for the analysis of multi-subject time-course
RNA-seq experiments. Our methodology facilitates
the study of gene reactions to certain biological processes through time.
Specifically, we develop an empirical Bayes approach to detect differentially
expressed genes that reduces the high dimensionality of time-course data via
empirical orthogonal functions. The proposed model assumes distinct
distributions for differentially and non-differentially expressed genes, and
borrows strength across genes and subjects to increase detection power. We
illustrate our methodology with an analysis of an RNA-seq
dataset from B cells to study their temporal response pattern to the human
influenza vaccine. 

TI 20_2 
Teodoro, M. Filomena 
Title 
Modeling the time between failures
using likelihood ratio tests 
The aim
of this work is to model the time between failures of a certain type of
equipment essential for the proper functioning of ships of a certain class of the Portuguese Navy, so that maintenance can be adjusted to avoid additional costs. To help
us choose among different distributions that may be fitted to these data, we
will use likelihood ratio tests for the equality of Gamma distributions.
Since the exact distributions of the statistics are not tractable, near-exact
distributions will be developed to obtain very sharp p-values and quantiles.
This will allow for the easy practical implementation of these tests. 

TI 17_3 
Tsukahara, Hideatsu 
Title 
The empirical beta copula 
Applying
Baker's construction of copulas based on order statistics, with the ranks
as coefficients, leads us to define the empirical beta copula, which is a
particular case of the empirical Bernstein copula. We show that the empirical
beta copula is a genuine copula by providing (necessary and) sufficient
conditions for a Bernstein transformation to be a copula. Furthermore, we
establish the assumptions under which the standard asymptotic results hold
for the empirical Bernstein copula.
Our Monte Carlo simulation study shows that the empirical beta copula
outperforms the empirical copula in terms of both bias and integrated mean
squared error. 

TI 18_1 
Vinogradov, Vladimir / Paris, Richard B. 
Title 
Poisson-Tweedie mixtures: a case
study 
Poisson-Tweedie
mixtures constitute a subclass of the family of Poisson mixtures
corresponding to the case where the mixing measure is generated by a member of
the power-variance family of distributions with non-negative support. For a
specific value of the “power” parameter, such mixtures comprise both an
additive exponential dispersion model and a factorial dispersion model, which
are characterized by the variance and the dispersion functions, respectively.
We concentrate on the former structure, illustrating our results by paying
attention to Neyman type A
distributions. We construct local approximations for Poisson-Tweedie
mixtures. 

TI 12_2 
Wang, Bin 
Title 
Normalizing Next-Generation Sequencing Data via
Density Estimation and Binning 
Next-generation
sequencing (NGS) is widely used in biomedical studies. Normalization is
challenging and crucial in NGS gene expression profiling data analysis. We propose to normalize gene profiles by
binning the data and estimating the distributions using three methods: 1)
the root-unroot algorithm, 2) a finite normal mixture
model fitted via the expectation-maximization algorithm, and 3) fitting a generalized
lambda distribution. In addition, a novel measure of similarity of the gene
profiles is proposed to assess the normalization results and to detect
differentially expressed genes as well. The proposed methods will be applied
to multiple NGS data sets and benchmarked against some existing NGS
normalization methods. 

TI 21_1 
Wang, Dongliang 
Title 
Penalized Empirical Likelihood for
the Cox Regression Model 
Current
penalized regression methods for selecting and estimating regression coefficients
in the Cox model are mainly developed from the partial likelihood. In this paper,
an empirical likelihood method is proposed in conjunction with an appropriate
penalty function. Asymptotic properties of the resulting estimators,
including consistency, asymptotic normality and the oracle property with
respect to variable selection, are theoretically proved. Simulation studies
suggest that empirical likelihood is superior to partial likelihood in terms
of selecting correct risk factors and reducing estimation error. The
well-known primary biliary cirrhosis data set is used to illustrate and
compare the empirical likelihood method with existing methods. 

TI 2_1 
Wu, Changbao 
Title 
Distribution Theory in Empirical Likelihood for
Complex Survey Data 
Empirical
likelihood has been shown to be a useful tool for handling parameters defined
through estimating equations. The use of empirical likelihood for complex
survey data, however, encounters various issues due to the “non-standard”
asymptotic distribution of the empirical likelihood ratio statistics. In this
talk, we present some basic distribution theory for two different
formulations of the empirical likelihood methods for survey data. We further
present results on the posterior distribution of the Bayesian empirical
likelihood methods and the related computational issues in Bayesian inference
for surveys. 

TI 11_2 
Wu, HoHsiang 
Title 
Mixtures of Nonlocal Priors for
Variable Selection in Generalized Linear Models 
We propose
two novel scale mixtures of nonlocal priors (SMNP) for variable selection in
generalized linear models. In addition, we develop a Laplace integration
procedure to compute posterior model probabilities. We show that under
certain regularity conditions the proposed methods are variable-selection
consistent. Simulation studies indicate that our proposed SMNP-based methods
select true models with higher success rates than other existing Bayesian
methods. Furthermore, our methods lead to mean posterior probabilities for
the true models that are closer to their empirical success rates. Finally, we
illustrate the application of our SMNP-based methods with the analyses of two
real datasets. 

TI 9_2 
Wu, Jiang / Zitikis, Ricardas 
Title 
Background risk models, two-period economies, and
optimal strategies that minimize financial losses 
Background
risk models, including a myriad of Pareto and beta-type multivariate
distributions, provide a particularly intuitive and fruitful way of modeling
dependence in real-life applications. In this talk, we shall discuss one
such application, which concerns decision-making in a two-period economy when a pivotal decision
needs to be made during the first time period and cannot be subsequently
reversed. 

TI 12_4 
Xie, Yuying 
Title 
Joint Estimation of Multiple
Dependent Gaussian Graphical Models with Applications to Mouse Genomics 
Gaussian
graphical models are widely used to represent conditional dependence among random
variables. In this paper we propose a novel estimator for data arising from a
group of Gaussian graphical models that are themselves dependent. A
motivating example is that of modeling gene expression collected on multiple
tissues from the same individual: a multivariate outcome that is affected by
dependencies at the level of both the tissue and the whole body, for which existing
methods that assume independence among graphs are not applicable. To estimate
multiple dependent graphs, we decompose the problem into two graphical
layers: the systemic layer, which is the network affecting all outcomes and
thereby inducing cross-graph dependency, and the category-specific layer,
which represents the graph-specific variation. We propose a graphical EM
technique that estimates the two layers jointly, establish the estimation
consistency and selection sparsistency of the
proposed estimator, and confirm by simulation that the EM method is superior
to a simple one-step method. Lastly, we apply our graphical EM technique to
mouse genomics data and obtain biologically plausible results. 

TI 4_4 
Xu, Xiaojian 
Title 
Optimal designs for regression when measurement
error is present 
Optimal
designs for regression have a great impact on the precision of model
parameter estimation. Utilizing a D-optimal design may ensure that the joint
confidence regions for the true model parameters will be as small as possible for
a fixed sample size. Moreover, measurement error is present in the
majority of models and should be taken into account when designing an
experiment. Considering a simple linear model with possible measurement error
in both the response and explanatory variables, we have investigated the
properties of exact and approximate D-optimal
designs for various cases of the variance structure associated with the measurement
error involved. 

TI 20_1 
Yagi, Ayaka and Seo, Takashi 
Title 
The null distribution of the LRT
statistic for mean vectors with monotone missing data 
In this talk,
we consider the likelihood ratio test (LRT) for a normal mean vector or two
normal mean vectors when the data have a monotone pattern of missing
observations. For the one-sample and two-sample problems, we derive the
modified likelihood ratio test statistics by using the asymptotic expansion
approximation. Further, we investigate the accuracy of the upper percentiles
of these test statistics by Monte Carlo simulation. 

TI 2_4 
Yi, Grace Y. 
Title 
Analysis of High-Dimensional Correlated Data in the Presence of Missing Observations and
Measurement Error 
In
contrast to the extensive attention on model selection for univariate data,
research on correlated data remains relatively limited. Furthermore, in the
presence of missing data and/or measurement error, standard methods
typically break down. To address these issues, we propose marginal methods
that simultaneously carry out model selection and estimation for
high-dimensional correlated data which are subject to missingness
and measurement error. To justify the proposed methods, we provide both
theoretical properties and numerical assessments. 

TI 18_2 
Yu, Guan 
Title 
Sparse Regression for
Block-missing Multimodality Data 
In
modern scientific research, many data are collected from multiple
modalities (sources or types). Since different modalities can provide
complementary information, sparse regression methods using multimodality
data can deliver better prediction performance. However, one special
challenge in using multimodality data is missing data. In
practice, the observations of a certain modality can be missing completely,
i.e., a complete block of the data is missing. In this paper, we propose a
new two-step sparse regression method for block-missing multimodality data.
In the first step, we estimate the covariance matrix. Rather than
deleting samples with missing data or imputing the missing observations, the
proposed method makes use of all available information. In the second step,
based on the estimated covariance matrix, a Lasso-type estimator is used to
deliver a sparse estimate of the regression coefficients in the linear
regression model. The effectiveness of the proposed method is
demonstrated by theoretical studies, simulated examples, and a real data
example from the Alzheimer’s Disease Neuroimaging Initiative. The comparison
between the proposed method and some existing methods also indicates that our
method has promising performance. 
General-Invited Sessions: Topics and
Session Chairs
Session Name: GI m_k
(m_k = k^{th} speaker in the m^{th} session)
Room Abbreviation: NI – Niagara Room, BR –
Brock Room, EL – Elisabeth Room, CAN/B – Canadian Room/B
Session 
Topic 
Session Chair 
Date 
Time 
Room 
GI 1 
Modeling 1 – Lifetime, Biostatistics 
Pararai, Mavis 
Oct 15 
10:50 am – 12:05 pm 
NI 
GI 2 
High Dimension Data Analysis 
Amezziane, Mohamed 
Oct 15 
10:50 am – 12:05 pm 
BR 
GI 3 
Bayesian 1, Spatial 
Samanthi, Madhuka 
Oct 15 
10:50 am – 12:05 pm 
EL 
GI 4 
Other – Miscellaneous 
Sepanski, Steve 
Oct 15 
10:50 am – 12:05 pm 
CAN/B 
GI 5 
Generalized Distributions 1 
Pararai, Mavis 
Oct 15 
4:30 pm – 5:45 pm 
NI 
GI 6 
Inference – Estimation, Testing 
Amezziane, Mohamed 
Oct 15 
4:30 pm – 5:45 pm 
BR 
GI 7 
Modeling 2 – Estimation 
Samanthi, Madhuka 
Oct 15 
4:30 pm – 5:45 pm 
EL 
GI 8 
Reliability, Risk 
Daniels, John 
Oct 15 
4:30 pm – 5:45 pm 
CAN/B 
GI 9 
Bayesian 2: Estimation, Model 
Cheng, Chin-I 
Oct 16 
10:50 am – 12:05 pm 
BR 
GI 10 
Generalized Distributions 2 
Cooray, K. 
Oct 16 
10:50 am – 12:05 pm 
EL 
Abstracts for General-Invited Speakers (Alphabetic Order)
Session Name: GI m_k
(m_k = k^{th} speaker in the m^{th} session)
GI 4_4 
Abdelrazeq,
Ibrahim 
Title 
Goodness-of-Fit Test: Lévy-Driven
Continuous ARMA Model 
The Lévy-driven CARMA(p,q) process is becoming a
popular one with which to model stochastic volatility. However, there has
been little development of statistical tools to verify this model assumption
and assess the goodness-of-fit of real-world data (realized volatility). When a Lévy-driven
CARMA(p,q) process is
observed at high frequencies, the unobserved driving process can be
approximated from the observed process. Since, under general conditions, the
Lévy-driven CARMA(p,q)
process can be written as a sum of p dependent Lévy-driven Ornstein-Uhlenbeck processes, the methods developed in Abdelrazeq et al. (2014) can be employed in order to use
the approximated increments of the driving process to test the assumption
that the process is Lévy-driven. Performance of the test is illustrated
through simulation, assuming that the model parameters are known. 

GI 10_1 
Aljarrah, Mohammad 
Title 
Exponential-Normal Distribution 
In this paper, a new three-parameter
distribution called the exponential-normal distribution is defined
and studied. Various properties of the distribution, such as the hazard function,
quantile function, moments, and Shannon entropy, are discussed. The method of maximum
likelihood is proposed to estimate the parameters of the distribution. A real
data set is used to illustrate the flexibility of the distribution. 

GI 10_2 
Alshkaki, Rafid S. 
Title 
An Extension to the Zero-Inflated Generalized Power
Series Distributions 
In many sampling situations involving non-negative
integer data, the zeros are observed significantly more often than
expected under the assumed model. Such models are called zero-inflated models, and
have recently been cited in the literature in various fields of science, including
engineering and the natural, social and political sciences. The class of
zero-inflated generalized power series distributions was recently considered
and studied due to its empirical needs and applications. In this paper, an
extension to the class of zero-inflated power series distributions is
introduced, and its characteristics are studied and analyzed. 

GI 10_3 
Alzaghal, Ahmad 
Title 
The Exponentiated Gamma-Pareto Distribution: Properties and
Application 
A new distribution, the exponentiated gamma-Pareto
distribution, is introduced and studied. Some of its properties, including
distribution shapes, limit behavior, hazard function, Rényi
and Shannon entropies, moments, and deviations from the mean and median, are
discussed. The method of maximum likelihood is used to estimate the exponentiated gamma-Pareto
distribution parameters, and a simulation study is carried out to assess its
performance. The flexibility of the exponentiated
gamma-Pareto distribution is illustrated by applying it to real data sets and
comparing the results with other distributions. 

GI 7_5 
Arowolo, Olatunji and Ayinde, Kayode 
Title 
Parameter estimation techniques for a simultaneous equation
model with a multicollinearity problem 
The multicollinearity problem remains inevitable in the
Simultaneous Equation Model (SEM). This work adapts the single-equation
estimators for handling multicollinearity, the Ordinary
Ridge Regression Estimator (ORRE) and the Generalized Ridge Regression Estimator
(GRRE), to SEM and proposes some estimators using the approach of the
conventional ones. Monte Carlo
experiments were conducted with two types of exogenous variable at seven
levels of multicollinearity, correlation
between error terms and sample sizes.
The estimators were compared and ranked on the basis of their
performances vis-à-vis their finite sampling properties. The proposed
estimators, ORRGRRE, 2SGRRE and OLSGRRE, are recommended for parameter
estimation in SEM. 

GI 1_5 
Bakar, Shaiful Anuar
Abu 
Title 
Actuarial
loss modeling with the composite models and its computer implementation 
A composite model is a
statistical distribution made by piecing together two distributions at
a certain threshold. It has increasingly drawn attention in actuarial loss
modelling. In this study, we propose several variations of the composite
model in which a Weibull distribution is assumed up to the threshold and a
family of Beta distributions beyond it. We also specify two of the composite
model parameters in terms of the other parameters of the model, which in turn
reduces the number of parameters and forms a general construction rule for any
two arbitrary distributions. The significance of this approach is further
demonstrated with respect to its computer implementation in the R programming
language. Finally, the performance of the models is assessed via application
to real loss data sets using an information-criteria-based approach. 

GI 1_4 
Bayramoglu, Konul Kavlak 
Title 
The mean wasted lifetime of a component of a system 
A reliability inspection model is considered in
which a component of a technical system has lifetime X and inspection time S.
It is assumed that X and S are random variables with
absolutely continuous joint distribution function F_(X,S) and joint probability density function
f_(X,S). Firstly, we consider the mean residual life function of the component
under two different setups of inspection. Secondly, we consider an inspection
model where, at the inspection time, the component is replaced with its spare
regardless of whether the component is alive or failed at this time. Under the
condition that 0 < t < S < X, we are interested in the expected value of
X − S, which is the mean wasted time of a component that is intact at time t in the
case that it has not failed by the inspection time, but will nevertheless be replaced with
a new one. A formula for the mean wasted lifetime, expressed in terms of f_(X,S) and partial
derivatives of F_(X,S), is derived.
Some examples with graphical representations are also provided. 

GI 4_2 
Bingham,
Melissa 
Title 
Quantifying
Spread in 3D Rotation Data: Comparison of Nonparametric and Parametric
Techniques 
A measure of spread for 3D
rotation data, called the average misorientation
angle, is introduced, and bootstrapping is used to develop confidence
intervals for this measure. Existing parametric inference methods for
estimating spread in 3D rotations for the von Mises Uniform Axis-Random Spin
and matrix Fisher distributions are then compared to the bootstrapping
procedure through a simulation study.
Based on the results of the simulation study, it is determined when
the nonparametric or parametric techniques are preferred in different
scenarios. 

GI 3_1 
Boulieri, Areti 
Title 
A Bayesian detection model for chronic disease
surveillance: application to COPD hospitalisation
data 
Disease
surveillance is an important public health practice, as it provides information that can be
used to make successful interventions. Innovative surveillance systems are
being developed to improve the investigation of outbreaks, with Bayesian
models attracting much interest. In this work, we propose an
extension of a Bayesian hierarchical model introduced by Li et al. (2012), which is able to detect areas with an
unusual temporal trend, and a simulation study is carried out to assess the
performance of the model. The method is illustrated by application to chronic
obstructive pulmonary disease (COPD) hospitalisation data in England at
clinical commissioning group (CCG) level, from April 2010 to March 2011. 

GI 9_5 
Chacko, Manoj 
Title 
Bayesian
density estimation using ranked set sample when ranking is not perfect 
In this paper, we consider
ranked set sampling in which an auxiliary variable X is used to rank the
sample units. A Bayesian method for estimating the underlying density function
of the study variate Y using a ranked set sample is proposed when measurements
of X are also available along with those of Y. A Markov
chain Monte Carlo procedure is developed to obtain the Bayesian estimator of
the density function of Y by assuming a parametric distribution for (X,Y), with the distribution of the parameters having a Dirichlet process prior. A simulation study is used to
evaluate the performance of the proposed method. 

GI 3_4 
Daniels, John 
Title 
Variogram Fitting Based on
the Wilcoxon Norm 
Within geostatistics
research, estimation of variogram points has
been examined, particularly in developing robust alternatives. The fit of
these variogram points, which eventually defines
the kriging weights, has not received the same attention from a robust
perspective. This paper proposes the use of the nonlinear Wilcoxon norm over
weighted nonlinear least squares as a robust fitting alternative. First, we
introduce the concept of variogram estimation and
fitting. Then, as an alternative to nonlinear weighted least squares, we
discuss the nonlinear Wilcoxon estimator. Next, the robustness properties of
the nonlinear Wilcoxon are demonstrated using a contaminated spatial data
set. Finally, under simulated conditions, increasing levels of contaminated
spatial processes have their variogram points
estimated and fit. In the fitting of these variogram
points, both nonlinear weighted least squares and nonlinear Wilcoxon fits
are examined for efficiency. At all levels of contamination, the non-weighted
Wilcoxon outperforms weighted least squares. 

GI 8_4 
Doray, Louis G. 
Title 
The
Double Pareto-Lognormal Distribution with Covariates and its Applications in
Finance and Actuarial Science 
We describe the double
Pareto-lognormal distribution, present some new properties, and show how the
model can be extended by introducing explanatory variables. First, the
double Pareto-lognormal distribution is built from the normal-Laplace
distribution, and some of its properties are presented. The parameters can be
estimated using the method of moments or maximum likelihood. Next,
explanatory variables are added to the model through the mean of the normal
distribution. The estimation procedure for this model is also discussed.
Finally, examples of applications of the model in finance and fire insurance
are illustrated, and some useful statistical tests are conducted. 

GI 7_4 
El Ktaibi, Farid

Title 
Change point detection for stationary linear models and
MBB applications 
The problem of structural
stability in a time series environment is a classical problem in statistics.
In this presentation, we analyze the problem of detecting a change in the
marginal distribution of a stationary linear process using MBB techniques.
Our model incorporates simultaneously any change in the coefficients
and/or the innovations of the linear process. Moreover, the change-point can
be random and data dependent. Our results hold under very mild conditions on
the existence of any moment of the innovations and a corresponding condition
of summability of the coefficients. Lastly, the
performance of our approach is demonstrated through simulations. 

GI 2_4 
Faisal, Shahla 
Title 
Improved
Nearest Neighbors Imputation for High-Dimensional Longitudinal Data 
Longitudinal data often come
with missing values. These values cannot be ignored, as doing so can result in the loss
of important information about the samples. Therefore,
imputation is a good strategy to overcome this problem. In this paper, we
present a single imputation method based on weighted nearest neighbors that
uses information from other variables to estimate the missing values.
These neighbors use information both from within the sample, whose response is
measured at different time points, and between samples. The simulation results
show that the suggested imputation method provides better results with
smaller imputation errors. 

GI 5_2 
Ferrari, Silvia L. P.
and Fumes, Giovana 
Title 
Box-Cox symmetric distributions and applications to
nutritional data 
We introduce and study the
Box-Cox symmetric class of distributions, which is useful for modeling
positively skewed, possibly heavy-tailed, data. The new class of
distributions includes the Box-Cox t, Box-Cox Cole-Green, and Box-Cox power
exponential distributions, and the class of log-symmetric distributions,
as special cases. It provides easy parameter interpretation, which makes it
convenient for regression modeling purposes. Additionally, it provides
enough flexibility to handle outliers. The usefulness of the Box-Cox
symmetric models is illustrated in a series of applications to nutritional
data. 

GI 8_1 
Gleaton, James 
Title 
Characteristics
of Generalized Log-Logistic Families of Lifetime Distributions and
Asymptotic Properties of Parameter Estimators 
A brief overview is presented of the
generalized log-logistic (GLL) transformation (also called the odd
log-logistic transformation) group and the characteristics of lifetime
distributions generated using this type of transformation. It is shown that, for a baseline
distribution in an exponential class, the MLEs for the parameters of an exponentiated exponential-class (EE) distribution are
jointly asymptotically normal and efficient.
A representation of the GLL-exponential-class density as a series in
which each term is proportional to an EE density is developed. Work on the asymptotic properties of the
MLEs for the GLL-exponential-class distribution is in progress. 

GI 10_5 
Godbole, Anant 
Title 
Statistical Distributions in Combinatorics:
Moving from Intractability to Tractability 
In this talk, we will present
several examples of problems from combinatorics for
which the entire distribution of a key variable X is of interest in its own
right to distribution theorists, beyond the point probability P(X=0), which
is often the primary concern of combinatorialists. The distributions are either impossible to
write in closed form, or available only in an intractable closed form. The Stein-Chen method of Poisson
approximation can be used, however, to yield Poisson estimates together with
error bounds. 

GI 5_1 
Hodge, Miriam 
Title 
Comparison
of liquefaction data: An application
of a logistic normal distribution in the simplex sample space 
Liquefaction occurs when an earthquake
liquefies water-saturated soil and ejects it to the surface. This
physical process is not well understood. We address this uncertainty with a
novel model selection strategy to evaluate models which include: ejecta originating from a combination of multiple layers of
sediment; the source sediment layer changing during the ejection process; and the
source sediment layer being deeper than the candidate samples. The data
are logistic normal and comprise percentages of 120-plus grain size ranges.
Compositional analysis in the simplex space identified the ejecta origins, and the
result is confirmed by qualitative analysis. 

GI 4_3 
Hoshino, Nobuaki 
Title 
On the marginals of a random
partitioning distribution 
Kolchin’s model is a class of random partitioning
distributions of a positive integer, which includes the celebrated Ewens distribution. This type of distribution defines
the joint probability of the frequencies of frequencies, but the marginal
distribution of the frequency of a given frequency is not straightforward to
derive because of its combinatorial nature. This talk motivates the
derivation of such a marginal distribution and shows two methods: the first
inverts factorial moments, and the second exploits the fact that Kolchin’s model is the product of certain conditional
distributions. 

GI 5_3 
Hristopulos, Dionissios
T. 
Title 
A
probability distribution function for finite-size systems with renormalized
weakest-link behavior 
We investigate weakest-link scaling
in systems with complex interactions expressed as ensembles of representative
volume elements (RVEs). The system
survival probability function is expressed in terms of interdependent RVEs
using a product rule. For a finite number of RVEs, we propose the
κ-Weibull distribution. We
discuss properties of the κ-Weibull and present results from the
analysis of experimental data and simulations pertaining to the return
interval distributions of seismic data and of avalanches in fiber bundle
models. Areas of potential
application involve the fracture strength of quasi-brittle
materials, precipitation, wind speed, and earthquake return times. 

GI 3_2 
Huang, HsinHsiung 
Title 
New Mixed Gaussian AffineInvariant Bayesian Clustering
Method 
We develop a clustering
algorithm that does not require knowing the number of clusters in advance
and is rotation-, scale- and translation-invariant coordinate-wise. A highly efficient split-merge Gibbs
sampling algorithm is proposed. Using the Ewens
sampling distribution as the prior on partitions and the profile residual
likelihoods of the responses under three different covariance matrix
structures, we obtain a posterior distribution on partitions. Our experimental
results indicate that the proposed method outperforms other competing
methods. In addition, the proposed algorithm is irreducible and aperiodic, so
that the estimate is guaranteed to converge to the posterior distribution. 

GI 6_4 
Jiang, Jiancheng 
Title 
A new
diversity estimator 
The maximum likelihood
estimator (MLE) of the Gini-Simpson diversity (GS) index is widely used but
suffers from large bias when the number of species is large relative to the
sample size. We propose a new estimator of the GS index and show its
unbiasedness. Asymptotic normality of the proposed estimator is established
when the number of species in the population is finite and known, finite but
unknown, and infinite. Our theory demonstrates that the proposed estimator
has the same efficiency as the MLE when the number of species is finite and known,
and is more efficient than the MLE in the other situations. Simulations
demonstrate the advantages of our estimators over the MLE, and an example concerning the
extinction of dinosaurs endorses the use of our approach. 
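For readers unfamiliar with the index: the Gini-Simpson index is GS = 1 − Σᵢ pᵢ², and the plug-in MLE replaces each pᵢ by its sample proportion. The sketch below is illustrative only (it is not the authors' proposed estimator; it uses the classical unbiased estimator 1 − Σᵢ nᵢ(nᵢ−1)/(n(n−1)) for comparison) and demonstrates the bias of the MLE when the number of species is large relative to the sample size:

```python
import random
from collections import Counter

def gs_mle(counts):
    """Plug-in (MLE) estimate of the Gini-Simpson index 1 - sum(p_i^2)."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gs_unbiased(counts):
    """Classical unbiased estimate: 1 - sum(n_i * (n_i - 1)) / (n * (n - 1))."""
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

random.seed(0)
S, n, reps = 200, 50, 2000          # many species, small sample
true_gs = 1.0 - 1.0 / S             # uniform population: p_i = 1/S

mle_avg = unb_avg = 0.0
for _ in range(reps):
    counts = Counter(random.randrange(S) for _ in range(n)).values()
    mle_avg += gs_mle(counts) / reps
    unb_avg += gs_unbiased(counts) / reps
# The plug-in MLE is biased downward by roughly true_gs / n in this setting;
# the unbiased estimator centers on the true value.
```

With S = 200 species and n = 50 observations the downward bias of the MLE is near true_gs/n ≈ 0.02, which the simulation makes visible.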

GI 5_5 
Jureckova, Jana 
Title 
Specifying the tails of a distribution 
The first question raised by
observed data is whether they are governed by a heavy- or light-tailed
probability distribution. Such a decision is not always straightforward. When a specific test rejects the Gumbel
hypothesis of exponentiality of the tails, we
have no information about how heavy the distribution really is. Instead,
we can verify the hypothesis that the tails of a distribution
are heavier than a specific level, measured by the Pareto index. We will
discuss some nonparametric tests of this hypothesis and compare them with the
parametric likelihood ratio test on the parameters of the generalized Pareto
distribution. The nonparametric tests use the specific behavior of some sample
statistics coming from a heavy-tailed distribution; this is of independent
interest and can be extended, e.g., to AR time series. While the parametric
test behaves better when the data really come from a generalized Pareto
distribution, the nonparametric tests are typically better in other cases. 

GI 8_2 
Karlis, Dimitris 
Title 
On
mixtures of multiple discrete distributions with application 
In this paper we present a
model to appropriately fit data with many periodic spikes at certain values.
The motivation comes from a dataset on the number of absences from work. The
data clearly show spikes on certain days, implying different scales of
doctors' decisions. A new modeling approach, based on finite mixtures of
multiple discrete distributions of different multiplicities, is proposed to
fit this kind of data. Multiple
Poisson and negative binomial distributions are defined and used for
modeling. A numerical application to a real dataset concerning the length,
measured in days, of inability to work after an accident is treated.
The main finding is that the model provides a very good fit when working-week,
calendar-week and month multiplicities are taken into account.
Properties of the derived model are examined together with estimation and
inference. 

GI 6_1 
Lewin, Alex 
Title 
Fuzzy multiple testing procedures for discrete test
statistics 
Commonly used multiple testing
procedures controlling the Family-Wise Error Rate or the False Discovery Rate
can be conservative when used with discrete test statistics. We propose fuzzy
multiple comparison procedures which give a fuzzy decision function, using
the critical function of randomised p-values. We
also define adjusted p-values for the new multiple comparison procedures. The
method is demonstrated on four data sets involving discrete statistics. A
software package for the R language is available. 
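The randomised p-values behind such fuzzy procedures have a standard form: for an observed statistic t and an independent U ~ Uniform(0,1), p = P(T > t) + U·P(T = t), which is exactly uniform under the null even when T is discrete. A minimal sketch for an upper-tail binomial test (illustrative only; not the authors' R package):

```python
import random
from math import comb

def randomized_pvalue(t, n, p0, u):
    """p = P(T > t) + u * P(T = t) for T ~ Binomial(n, p0);
    exactly Uniform(0,1) under the null hypothesis."""
    pmf = lambda k: comb(n, k) * p0 ** k * (1 - p0) ** (n - k)
    return sum(pmf(k) for k in range(t + 1, n + 1)) + u * pmf(t)

random.seed(1)
n, p0 = 10, 0.3
pvals = []
for _ in range(5000):                       # simulate under the null
    t = sum(random.random() < p0 for _ in range(n))
    pvals.append(randomized_pvalue(t, n, p0, random.random()))
mean_p = sum(pvals) / len(pvals)            # should be close to 0.5
```

By contrast, the ordinary discrete p-value P(T ≥ t) is stochastically larger than uniform, which is exactly the conservativeness the abstract addresses.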

GI 7_1 
Liu, Sifan and Xie, Minge 
Title 
Exact
Inference on Meta-Analysis with Generalized Fixed-Effects and Random-Effects
Models 
For meta-analysis with
fixed-effects and random-effects models, conventional methods rely on
Gaussian assumptions and/or large-sample approximations. However, when the
number of studies is not large, or the sample sizes of individual studies are
small, such assumptions and approximations may be inaccurate and lead to
invalid conclusions. In this talk, we will present "exact" confidence
intervals for the overall effect using all available data. Our proposals
cover generalized models without Gaussian assumptions, and no approximation
is needed. Confidence distribution interpretations and numerical
studies, including quantifying the efficacy of BCG vaccine against
tuberculosis, will be given as illustrations. 

GI 1_3 
Mandrekar, Jay 
Title 
Statistical approach for the development, prediction, and
validation of a simple risk score: application to a neurocritical
care study. 
Patients admitted to neurocritical care units often have devastating
neurologic conditions and are likely candidates for organ donation after
cardiac death. Improving our ability to predict the time of death after
withdrawal of life-sustaining measures could have a significant impact on rates
of organ donation after cardiac death and allocation of appropriate medical
resources. In the first part of the presentation, we will discuss how we
arrived at a prediction model based on a retrospective database using
logistic regression and ROC analysis. Next, we will discuss the validation of
the model and the development of the score using data from a multicenter prospective
study. 

GI 9_4 
Maruyama, Yuzo 
Title 
Harmonic
Bayesian prediction under alpha-divergence 
We investigate Bayesian
shrinkage methods for constructing predictive distributions. We consider the multivariate
normal model with a known covariance matrix and show that the Bayesian
predictive density with respect to Stein's harmonic prior dominates the best
invariant Bayesian predictive density when the dimension is not less than
three. Alpha-divergence from the true distribution to a predictive
distribution is adopted as the loss function. 

GI 1_1 
Matheson, Matthew 
Title 
The Shape of the Hazard Function: The Generalized Gamma
and Its Competitors 
A large number of distributions
have been proposed for parametric survival analysis. The generalized gamma,
with its flexible taxonomy of four distinct hazard shapes and ease of
implementation, has proven to be one of the most popular. In search of
distributions with potentially richer hazard behavior, we have investigated
the exponentiated Weibull, generalized Weibull, and
beta-generalized gamma using both real and simulated data. Somewhat
surprisingly, these distributions appear unable to significantly improve on
the flexibility of the generalized gamma for applications, with the
generalized gamma being able to closely match almost any parameter
combination of the other three distributions. 

GI 4_1 
Mi, Jie 
Title 
Instant
System Availability 
In this talk, we study the instant
availability A(t) of a repairable system using an integral equation. We
prove the initial monotonicity of the availability, and derive various lower
bounds of A(t) and of the average availability. The availabilities of two systems
are also compared with the help of stochastic ordering. 

GI 8_3 
Minkova, Leda 
Title 
Distributions of order K in risk models 
The most widely used generalization of
the counting process in the risk model is the compound Poisson process. In this
talk a counting process with distributions of order K is given. We first
introduce the compound birth process of order K; as a particular case we consider the
compound Poisson process. As examples, the Poisson process of order K and
two types of Polya-Aeppli processes of order K are
given. Some functions related to the corresponding risk models are analyzed. The
joint distribution of the time to ruin and the deficit at
ruin, as well as the ruin probability, are derived. We discuss in detail the
particular case of exponentially distributed claims. 

GI 10_4 
Nolan, John 
Title 
Classes
of generalized spherical distributions 
A flexible class of
multivariate generalized spherical distributions with star-shaped level sets
is developed. Tools from computational
geometry and multivariate integration are used to work in dimensions above
two. The R package gensphere
allows one to compute multivariate densities and simulate from such
distributions. 

GI 4_5 
Ozturk, Omer 
Title 
Ratio estimators based on ranked set sampling in survey
sampling 
In this talk, we consider the
ratio estimator in a finite population setting under a ranked set sampling (RSS)
design when the sample is constructed without replacement. We show that the
proposed ratio estimator is slightly biased, but the amount of bias is
smaller than that of the simple random sample (SRS) ratio estimator. We provide an explicit expression for the
approximate mean square error of the ratio estimator and for its precision
relative to other competing estimators. We show that the new estimator offers a
substantial improvement in efficiency over the SRS
estimator. We apply the proposed estimator to estimate apple production in the
Marmara Region of Turkey in a finite population setting. 
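For reference, the SRS ratio estimator that serves as the benchmark has the classical form and first-order properties (standard survey-sampling approximations, not taken from the abstract):

```latex
\hat{\bar{Y}}_R = \frac{\bar{y}}{\bar{x}}\,\bar{X}, \qquad
\operatorname{Bias}\bigl(\hat{\bar{Y}}_R\bigr) \approx \frac{1-f}{n\,\bar{X}}\bigl(R S_x^2 - S_{xy}\bigr), \qquad
\operatorname{MSE}\bigl(\hat{\bar{Y}}_R\bigr) \approx \frac{1-f}{n}\bigl(S_y^2 - 2R S_{xy} + R^2 S_x^2\bigr),
```

where $R = \bar{Y}/\bar{X}$, $f = n/N$ is the sampling fraction, and $S_x^2$, $S_y^2$, $S_{xy}$ are the finite-population variances and covariance.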

GI 3_5 
Paul, Rajib 
Title 
Real
Time Estimation of ILI (Influenza Like Illnesses) Rates Using Dynamic
Downscaling 
Despite novel advances in the
surveillance of flu trends, real-time daily estimates of ILI cases are
often unavailable. Community health departments collect daily information
on reported respiratory and constitutional symptoms (for example, fever,
headache and cough). Google Flu Trends provides weekly estimates per one
hundred thousand people. We develop a Bayesian hierarchical model for dynamic
downscaling of ILI rates to the daily scale by fusing these two datasets. We also
incorporate environmental factors such as temperature and humidity. A
sequential Monte Carlo algorithm is developed for faster computation. Our
model is tested and validated using Michigan data over the years 2009–2013. 

GI 9_2 
Peer Bilal Ahmad 
Title 
Bayesian analysis of misclassified generalized Power
Series distributions under different loss functions 
In certain experimental investigations
involving discrete distributions, external factors may induce a measurement
error in the form of misclassification. For instance, a situation may arise
where certain values are erroneously reported; such a situation, termed
modified or misclassified, has been studied by many researchers. Cohen (1960)
studied misclassification for Poisson and binomial random variables. In this
paper, we discuss misclassification for a more general class of discrete
distributions, the generalized power series distributions (GPSD), where some
of the observations corresponding to x=c+1, c≥0, are erroneously
observed, or at least reported, as x=c with probability α. This
class includes, among others, the binomial, negative binomial, logarithmic
series and Poisson distributions. We derive the Bayes estimators of the
parameters of the misclassified generalized power series distributions under
different loss functions. The results obtained for the misclassified GPSD are
then applied to particular cases such as the negative binomial, logarithmic
series and Poisson distributions. An example is provided to illustrate the
results, and a goodness-of-fit test is performed using the moment, maximum
likelihood and Bayes estimators. 
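The misclassification mechanism described in the abstract can be written out explicitly: if $p_x$ denotes the GPSD probability mass function, the observed variable $Y$ has

```latex
P(Y = c) = p_c + \alpha\, p_{c+1}, \qquad
P(Y = c+1) = (1-\alpha)\, p_{c+1}, \qquad
P(Y = x) = p_x \quad \text{for } x \notin \{c,\, c+1\}.
```

Setting $\alpha = 0$ recovers the unmodified GPSD.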

GI 5_4 
Pérez-Casany, Marta 
Title 
Random-Stopped
Extreme distributions 
The distribution of the maximum
(minimum) of
a random number of independent and identically distributed random variables is characterized by means of
probability generating functions, and a duality property between the two sets
of distributions is derived. These
distributions appear in a natural way as data collection mechanisms, similar
to the stopped-sum distributions. When the sample size is geometrically
distributed, one obtains the Marshall-Olkin
transformation of the sampled distribution as a particular case. Special attention will be paid to the case
where the sample size is Poisson distributed, since it is the one with the most
practical appeal. 
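The characterization sketched in the abstract is standard: writing $g_N$ for the probability generating function of the sample size $N$ and $\bar{F} = 1 - F$,

```latex
P\!\Bigl(\max_{1\le i\le N} X_i \le x\Bigr) = g_N\!\bigl(F(x)\bigr), \qquad
P\!\Bigl(\min_{1\le i\le N} X_i > x\Bigr) = g_N\!\bigl(\bar{F}(x)\bigr).
```

For $N$ geometric on $\{1,2,\dots\}$ with $g_N(s) = ps/\bigl(1-(1-p)s\bigr)$, the minimum has survival function $p\bar{F}(x)/\bigl(1-(1-p)\bar{F}(x)\bigr)$, which is the Marshall-Olkin transformation mentioned above.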

GI 9_1 
Ross, Sheldon 
Title 
Friendship Paradox and Friendship Network Model 
The friendship paradox says
that "your friends tend to have more friends than you". We explore
this paradox and then suggest a model for a friendship network. 
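The paradox has a simple degree-based form: the expected degree of a randomly chosen friend is E[D²]/E[D] ≥ E[D], with equality only for regular graphs. A minimal simulation on an Erdős–Rényi graph (illustrative only; not the speaker's network model):

```python
import random

random.seed(2)
n, p = 200, 0.05
adj = {i: set() for i in range(n)}           # Erdos-Renyi random graph G(n, p)
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

degs = [len(adj[i]) for i in range(n)]
mean_deg = sum(degs) / n                     # a random person's friend count
# Degree of the person at the end of a uniformly chosen friendship link:
mean_friend_deg = sum(d * d for d in degs) / sum(degs)   # = E[D^2] / E[D]
# mean_friend_deg exceeds mean_deg: your friends have more friends than you.
```

The gap mean_friend_deg − mean_deg equals Var(D)/E[D], so the paradox is stronger the more heterogeneous the degrees are.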

GI 2_2 
Ruth, David M. 
Title 
An approach
to the multivariate two-sample problem using classification and regression
trees with minimum-weight spanning subgraphs 
The multivariate two-sample
problem is one of continued interest in statistics. Approaches to this
problem usually require a dissimilarity measure on the observation sample
space; such measures are typically restricted to numeric variables. In order to accommodate both categorical
and numeric variables, we use a new dissimilarity measure based on
classification and regression trees. We briefly discuss this new measure and
then employ it with a recently developed graph-based multivariate test. New improvements to this test are
discussed, test performance is examined via a simulation study, and test
efficacy is investigated using real-world data. 

GI 7_2 
Schick, Anton 
Title 
Estimation of the error distribution function in a
varying coefficient regression model 
This talk discusses estimation
of the error distribution function in a varying coefficient regression model.
Three estimators are introduced and their asymptotic properties described by
uniform stochastic expansions. The first estimator is a residual-based
empirical distribution function utilizing an undersmoothed local quadratic
smoother of the coefficient function. The second estimator exploits the fact
that the error distribution has mean zero. It improves on the first
estimator, but is not yet efficient. An efficient estimator is obtained by
adding a stochastic correction term to the second estimator. 

GI 1_2 
Song, Xinyuan 
Title 
Analysis
of proportional mean residual life model with latent variables 
In this study, we propose a
proportional mean residual life (MRL) model with latent variables to examine
the effects of potential risk factors on the MRL function of ESRD in a cohort
of Chinese type 2 diabetic patients. The proposed model generalizes
conventional proportional MRL models to accommodate latent risk factors. We
employ a factor analysis model to characterize latent risk factors via
multiple observed variables. We develop a borrow-strength estimation
procedure incorporating the EM algorithm and the corrected estimating equation
approach. The empirical performance of the proposed methodology is evaluated
via numerical studies. 

GI 6_5 
Stehlik, Milan 
Title 
Exact distributions of LR tests and their
applications (Johannes Kepler University Linz, Austria) 
During the talk we introduce
exact statistical procedures based on the likelihood ratio; practical
examples will also be given. We introduce exact likelihood ratio tests in the
exponential family and for a generalized gamma distribution, and discuss their
properties. We will derive general forms of the distributions for the exact
likelihood ratio tests of homogeneity and of scale. Applications and
illustrative examples (missing and censored data, mixtures) will be given.
The geometry of lifetime data will be discussed and related to the I-divergence
decomposition. Small-sample testing for frailty through the homogeneity test
will be discussed. We will provide a methodology for an exact and robust test
of normality. 

GI 3_3 
Sun, Ying 
Title 
A
Stochastic Space-Time Model for Intermittent Precipitation Occurrences 
Modeling a precipitation field
is challenging due to its intermittent and highly scale-dependent nature.
Motivated by the features of high-frequency precipitation data from a network
of rain gauges, we propose a threshold space-time t random field (tRF) model for 15-minute precipitation occurrences. This
model is constructed through a space-time Gaussian random field (GRF) with
random scaling varying over time, or over space and time. It can be viewed as a
generalization of the purely spatial tRF, and has a
hierarchical representation that allows for Bayesian interpretation.
Developing appropriate tools for evaluating precipitation models is a crucial
part of the model-building process, and we focus on evaluating whether models
can reproduce the observed conditional dry and rain probabilities given that
some set of neighboring sites all have rain or all have no rain. These conditional
probabilities show that the proposed space-time model offers noticeable
improvements in some characteristics of joint rainfall occurrences for the
data we have considered. 

GI 2_5 
Sylvan, Dana 
Title 
Exploration and visualization of space-time data with
complex structures 
We introduce a versatile
exploratory tool that may be used to describe and visualize various
distributional characteristics of data with complex spatial and
spatialtemporal dependencies. We present a flexible mathematical framework
for modeling spatial random fields and give possible extensions to space-time
data. For illustration we show applications to air pollution and
baseball data. 

GI 6_2 
Thomas, Hoben and Hettmansperger,
T.P. 
Title 
Test
Scores, HRX, and Distribution Function
Tail Ratios 
Let A and D denote advantaged
and disadvantaged populations with cdfs $F(x)$ and
$G(x)$ respectively, with $F(x) \leq G(x)$. Assume a
selection setting: those selected have $x>c$, with $p_A$ and $p_D$ the selected
proportions; $p_D/p_A \ll 1$.
Often the desire is $p_D/p_A \approx 1$, so $c$ is lowered. Surprisingly, the fail ratio $G(c)/F(c)$
and the success ratio $[1-G(c)]/[1-F(c)]$ can both increase with decreasing $c$,
a phenomenon Scanlan (2006) calls HRX. He argues HRX is widely misunderstood, with
deleterious public policy results. Conditions for HRX are presented along
with data examples. 

GI 9_3 
Wang, Min and Li, Shengnan 
Title 
Bayesian estimation of the generalized lognormal
distribution using objective priors 
The generalized lognormal
distribution plays an important role in analyzing data from different life
testing experiments. In this paper, we consider Bayesian analysis of this
distribution using various objective priors for the model parameters.
Specifically, we derive expressions for three types of Jeffreys priors, the reference priors with different
group orderings of the parameters, and the first-order matching priors. We
further investigate the properties of the corresponding posterior
distributions of the unknown parameters under the various improper priors. It
is shown that only two of them result in proper posterior distributions.
Numerical simulation studies are conducted to compare the performance of the
Bayesian approaches under the considered priors as well as the maximum
likelihood estimates. A real-data study is also provided for illustrative
purposes. 

GI 7_3 
Wang, Qiying 
Title 
Limit
theorems for nonlinear cointegrating regression 
The past decade has witnessed
great progress in the development of nonlinear cointegrating
regression. Unlike linear cointegration and
nonlinear regression with stationarity, where traditional and classical
methods are widely used in practice, estimation and inference theory in
nonlinear cointegrating regression produces new
mechanisms involving local time, mixtures of normal distributions and
stochastic integrals. This talk aims to introduce the machinery of the
theoretical developments, providing up-to-date limit theorems for nonlinear cointegrating regression. 

GI 2_1 
Yu, Chong Ho 
Title 
Pattern recognition: The role of data visualization and
data mining in statistics 
Many people say, “Let the data
speak for themselves,” yet in higher education the standard curriculum design
is still overwhelmingly devoted to hypothesis testing. Although rejecting
the null hypothesis based on the p-value alone is questionable when there is
no detectable pattern in the data, data visualization (DV) and
data mining (DM) very often go unheeded among researchers. To rectify this
situation, the presenter will show how various DV/DM tools, such as the
ternary plot, the diamond plot, the bubble plot, and the GIS map, can be
utilized to gain a holistic view of data patterns. 

GI 8_5 
Yu, Jihnhee; Yang, Luge; Vexler,
Albert and Hutson, Alan D. 
Title 
Variance
Estimation of the Nonparametric Estimator of the Partial Area under the ROC
Curve 
The partial area under the ROC curve (pAUC)
is commonly estimated based on a U-statistic with the plug-in sample
quantile, making the estimator a nontraditional U-statistic. In this talk,
an accurate and easy method to obtain the variance of the nonparametric pAUC estimator is proposed. The proposed method is easy
to implement both for a single biomarker test and for the comparison of two correlated
biomarkers, since it simply adapts the existing variance estimator of
U-statistics. Further, an empirical likelihood inference method is developed
based on the proposed variance estimator through a simple implementation. 

GI 2_3 
Zahid, Faisal Maqbool and Heumann,
Christian 
Title 
Multiple Imputation using Regularization 
Multiple imputation (MI) is an
increasingly popular approach for filling in missing data with plausible values.
With a large number of covariates with missing data, existing MI software
packages are likely to perform poorly or fail. We propose
an MI algorithm based on regularized sequential regression models.
Each variable (e.g., normal, binary, Poisson) is imputed using its own
imputation model. The proposed approach performs well even with a large number
of covariates and small samples. The results are compared with those of existing
software packages such as mice, VIM, and Amelia in simulation
studies, using the Mean Squared Imputation Error (MSIE)
and the Mean Absolute Imputation Error (MAIE). 
(Alphabetically
Ordered)
All Student Poster Presentations are in Canadian/A Room
The student posters must be posted by 3:00 pm
on October 15
The student authors must be at their posters
from 5:45 pm – 6:30 pm, October 15
Authors 
Aldeni, Mahmoud 
Title 
Families of distributions
arising from the quantile function of the generalized lambda distribution 
Statistical
distributions play an important role in theory and applications; they are
used to fit, model and describe real-world phenomena. For this reason,
developing new and more flexible univariate statistical distributions has
received an increasing amount of attention over the last two decades. In this
work, the class of T-R{generalized lambda} families
of distributions based on the quantile function of the generalized lambda distribution
is proposed using the T-R{Y} framework. Different choices of the random
variables T and R naturally lead to different families of T-R{generalized lambda} distributions. Some general
properties of these families of distributions are studied. Four members of
the T-R{generalized lambda} families of
distributions are derived, namely, the uniform-exponential{generalized
lambda}, the normal-uniform{generalized lambda}, the
Pareto-Weibull{generalized lambda} and the log-logistic-logistic{generalized
lambda}. The shapes of these distributions can be symmetric, skewed to the
left, skewed to the right, or bimodal. Two real-life data sets are used to
illustrate the flexibility of the distributions, and the results are compared
with those from some existing distributions. 

Authors 
Arapis, Anastasios N. 
Title 
Joint
distribution of k-tuple statistics in zero-one sequences 
Consider a sequence of random
variables with zero-one values ordered on a line. We consider runs of ones
of length greater than or equal to a fixed number. Statistics denoting the
number of such runs, the number of ones in the runs, and the distance between
the first and the last run in the sequence are defined. The paper provides,
in closed form, the exact joint distribution of these three statistics
given that the number of such runs in the sequence is at least two.
The study is first developed for sequences of independent and identically
distributed random variables and is then extended to exchangeable
(symmetrically dependent) sequences. Numerical examples further illustrate
the theoretical results. 

Authors 
Bentoumi, Rachid 
Title 
Dependence measure under lengthbiased sampling 
In epidemiological studies,
subjects with disease (prevalent cases) differ from the newly diseased (incident
cases). Methods for regression analyses have recently been proposed to
measure the potential effects of covariates on survival. Our goal is to
extend Kent's dependence measure, based on information gain, to the
context of length-biased sampling. In
this regard, to estimate the information gain and dependence measure for
length-biased data, we propose to use
kernel density estimation with a
regression procedure. The performance of the proposed estimators
under length-biased sampling is demonstrated through simulation studies. 

Authors 
Chaba, Linda
and Omolo, Bernard* 
Title 
Using
copulas to select prognostic genes in melanoma patients 
We developed a copula model for
gene selection that does not depend on the distributions of the covariates,
except that their marginal distributions are continuous. A comparison of the
ability of the copula-based model to control the FDR with that of the SAM and
Bayesian models is performed via simulations. Simulations indicated that the
copula-based model provided better control of the FDR and yielded a more
prognostic signature than the SAM and Bayesian model-based signatures. These
results were validated in three publicly available melanoma datasets.
Relaxing parametric assumptions on microarray data may yield gene signatures
for melanoma with better prognostic properties. 

Authors 
Chan, Stephen 
Title 
Extreme value analysis of electricity demand in the UK 
For the first time, an extreme
value analysis of electricity demand in the UK is provided. The analysis is
based on the generalized Pareto distribution. Its parameters are allowed to
vary linearly and sinusoidally with respect to time
to capture patterns in the electricity demand data. The models are shown to
give reasonable fits. Some useful predictions are given for the value at risk
of the returns of electricity demand. 
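The generalized Pareto model underlying such analyses is the standard threshold-exceedance form (with the scale $\sigma > 0$ and shape $\xi$ here allowed to vary with time, as the abstract describes):

```latex
P(X - u > y \mid X > u) = \Bigl(1 + \frac{\xi y}{\sigma}\Bigr)^{-1/\xi}, \qquad y \ge 0,
```

interpreted as $\exp(-y/\sigma)$ in the limit $\xi = 0$.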
Authors 
Cordero, Osnamir Elias Bru; Jaramillo, Mario César and Canal, Sergio Yáñez 
Title 
Random Number Generation for a Survival Bivariate Weibull Distribution 
A bivariate survival function of the Weibull distribution is
presented as Model VI(a)5 by Murthy, Xie and
Jiang. It is shown that the model corresponds to a Gumbel-Hougaard
survival copula evaluated at two Weibull survival marginals. Its properties
are studied to compare three methods of random generation from that
distribution. The CDVines methodology is used as the base reference for the
purpose of methodology evaluation. 
Authors 
De Silva, Kushani 
Title 
Bayesian Approach to Profile Gradient Estimation using
Exponential Cubic Splines 
Reliable profile and profile
gradient estimates are of utmost importance for many physical models. In most
situations, the derivative is either difficult to compute or impossible
to obtain by direct measurement. Most
importantly, for discrete noisy measurements, differentiation magnifies
the random error buried in the measurements, especially for high-frequency
components. Estimating the derivative
from pointwise noisy measurements is well known to be an ill-posed
problem. A Bayesian recipe based on a
model using exponential cubic splines is implemented to estimate the profile
gradient of discrete noisy measurements. The spline model is formulated in
the space where the quantity (gradient) to be modeled is continuous, instead
of being placed in the data space. The gradient profile is well determined by
the mean value of the posterior distribution, calculated using a Markov chain
Monte Carlo sampling technique. 

Authors 
Darkenbayeva, Gulsim 
Title 
Convergence
of some quadratic forms used in regression analysis 
We consider convergence in
distribution of two quadratic forms arising in unit root tests for a
regression with a slowly varying regressor. The
error term is a unit root process with linear processes as disturbances. The
linear processes are noncausal and short-memory, with independent identically
distributed innovations. Our results generalize some statements from Phillips
and Solo (1992). 

Authors 
Hamed, Duha 
Title 
T-Pareto family of distributions: Properties and
Applications 
Six families of generalized
Pareto distributions, defined and studied using the T-R{Y} framework, will
be presented together with some of their properties and special cases, including the
Lorenz and Bonferroni curves. The flexibility of
two members of these generalized families, namely the normal-Pareto{Cauchy}
and the exponentiated-exponential-Pareto{Weibull}
distributions, is assessed by applying them to a couple of real data sets and
comparing the results with those of other distributions. 

Authors 
Kang,
Kai 
Title 
Bayesian
semiparametric mixed hidden Markov models 
In this study, we develop a
semiparametric mixed hidden Markov model to analyze longitudinal data. The
proposed model comprises a parametric transition model for examining how
potential predictors influence the probability of transition from one state
to another and a nonparametric conditional model for revealing the functional
effects of explanatory variables on outcomes of interest. We propose a
Bayesian approach that combines Bayesian P-splines and MCMC methods to
conduct the statistical analysis. The empirical performance of the proposed
methodology is evaluated via simulation studies. An application to a
reallife example is presented. 

Authors 
Krutto, Annika 
Title 
Estimation in Univariate Stable Laws 
In this study, four-parameter
stable laws are considered. Explicit
representations of the densities of stable laws in terms of
elementary functions are unknown, which complicates the estimation of the
parameters. All stable laws can be uniquely expressed through their characteristic
function. The motivation for this study arises from an estimation procedure
based on the empirical characteristic function, known as the method of
moments. In this study an amended and more fruitful version of the procedure
is proposed, and extensive simulation experiments over the parameter space are
performed. 

Authors 
Mdziniso, Nonhle
Channon 
Title 
Odd
Pareto Families of Distributions for Modeling Loss Payment Data 
A threeparameter
generalization of the Pareto distribution is presented to deal with general
situations in modeling loss payment data with various shapes in the density
function. This generalized Pareto distribution will be referred to as the Odd
Pareto family since it is derived by considering the distributions of the
odds of the Pareto and inverse Pareto families. Various statistical
properties of the Odd Pareto distribution are provided, including hazard
function and moments. Loss payment data is used to illustrate applications of
the Odd Pareto distribution. The method of maximum likelihood estimation is
proposed for estimating the model parameters. 

Authors 
Nitithumbundit, Thanakorn 
Title 
Maximum leave-one-out likelihood estimation for the location
parameter of unbounded densities 
Maximum likelihood estimation
of a location parameter fails when the density has
an unbounded mode. An alternative approach is to leave out one data
point to avoid the unbounded density in the full likelihood. This
modification gives rise to the leave-one-out likelihood. We propose an
expectation/conditional maximisation (ECM)
algorithm which maximises the leave-one-out
likelihood. Podgórski and Wallin
(2015) showed that the estimator which maximises
the leave-one-out likelihood is consistent and super-efficient. To
investigate other asymptotic properties, such as the optimal rate of
convergence and the asymptotic distribution, we apply our proposed algorithm to
simulated data sets while also evaluating the accuracy of our estimator. 

Authors 
Odhiambo,
Collins Ojwang 
Title 
A Smooth
Test of Goodness-of-fit for the Weibull Distribution: An Application to
HIV Retention Data 
In this paper, we propose a
smooth test of goodness-of-fit for the two-parameter Weibull distribution.
The smooth test described here is a score test that extends Neyman's smooth tests. Simulations are conducted to
compare the power of the smooth test with three other goodness-of-fit tests
for the Weibull distribution against gamma and lognormal
alternatives. Results show that the smooth tests of orders three and four are
more powerful than the other goodness-of-fit tests. For validation, we apply
the goodness-of-fit procedure to retention data from an HIV care setting in
Kenya. 

Authors 
Selvitella, Alessandro 
Title 
The Simpson's Paradox in Quantum Mechanics 
In probability and statistics, the \emph{Simpson's paradox} is a paradox in which a trend that appears in different groups of data disappears when these groups are combined, while the reverse trend appears for the aggregate data. In this paper, we give some results about the occurrence of the \emph{Simpson's Paradox} in Quantum Mechanics. In particular, we prove that the \emph{Simpson's Paradox} occurs for solutions of the \emph{Quantum Harmonic Oscillator} both in the stationary case and in the nonstationary case. In the nonstationary case, the \emph{Simpson's Paradox} is persistent: if it occurs at any time $t=\tilde{t}$, then it occurs at any time $t\neq \tilde{t}$. Moreover, we prove that the \emph{Simpson's Paradox} is not an isolated phenomenon, namely that, close to initial data for which it occurs, there are lots of initial data (a open neighborhood), for which it still occurs. Differently fro 