Publications
Google Scholar

- Zietkiewicz, P, Kosmidis, I (2024). Bounded-memory adjusted scores estimation in generalized linear models with large data sets. Statistics and Computing, 34, 138. DOI | arXiv | Supplementary Material
- Kosmidis, I, Zietkiewicz, P (2025). Jeffreys-prior penalty for high-dimensional logistic regression: A conjecture about aggregate bias. Statistical Science (accepted). arXiv | Supplementary Material
- Zietkiewicz, P (2024). Model selection by thresholding with applications to generalized linear models. PhD Thesis. Warwick archive
The widespread use of maximum Jeffreys'-prior penalized likelihood in binomial-response generalized linear models, and in logistic regression in particular, is supported by the results of Kosmidis and Firth (2021, Biometrika), who show that the resulting estimates are always finite-valued, even in cases where the maximum likelihood estimates are not; infinite estimates are a practical issue regardless of the size of the data set. In logistic regression, the implied adjusted score equations are formally bias-reducing in asymptotic frameworks with a fixed number of parameters, and appear to deliver a substantial reduction in the persistent bias of the maximum likelihood estimator in high-dimensional settings where the number of parameters grows asymptotically as a proportion of the number of observations. In this work, we develop and present two new variants of iteratively reweighted least squares (IWLS) for estimating generalized linear models with adjusted score equations for mean bias reduction and maximization of the likelihood penalized by a positive power of the Jeffreys-prior penalty. The variants eliminate the requirement of storing O(n) quantities in memory and can operate with data sets that exceed computer memory or even hard drive capacity. We achieve this through incremental QR decompositions, which give the IWLS iterations access only to data chunks of predetermined size. Both procedures can also be readily adapted to fit generalized linear models when distinct parts of the data are stored across different sites and, due to privacy concerns, cannot be fully transferred across sites. We assess the procedures through a real-data application with millions of observations.
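To give a flavour of the bounded-memory idea, here is a minimal NumPy sketch of one weighted least-squares solve of the kind that sits inside each IWLS iteration, carried out by QR-updating one chunk at a time so that only a small triangular factor is ever held in memory. The `chunks` iterable and the function name are illustrative assumptions; this is not the paper's implementation, which also folds the score adjustments into the working quantities.

```python
import numpy as np

def chunked_wls_solve(chunks):
    """One weighted least-squares solve via incremental QR.

    `chunks` yields (X_c, z_c, w_c): a block of the model matrix, the IWLS
    working response, and the working weights. Only a (p + 1) x (p + 1)
    triangular factor is kept between blocks, never O(n) rows.
    """
    R = None
    for X_c, z_c, w_c in chunks:
        sw = np.sqrt(w_c)[:, None]
        # Augment the weighted block with the weighted working response
        A = np.hstack([X_c * sw, z_c[:, None] * sw])
        stacked = A if R is None else np.vstack([R, A])
        R = np.linalg.qr(stacked, mode="r")  # collapse back to (p + 1) x (p + 1)
    p = R.shape[1] - 1
    # Back-solve R[:p, :p] @ beta = R[:p, p] for the WLS coefficients
    return np.linalg.solve(R[:p, :p], R[:p, p])
```

Because the triangular factor summarizes a pass over the data exactly (R'R equals the weighted cross-products), sites could in principle exchange only these small factors rather than raw observations, which is what makes the privacy-aware multi-site variant plausible.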
Firth (1993, Biometrika) shows that the maximum Jeffreys'-prior penalized likelihood estimator in logistic regression has asymptotic bias that decreases with the square of the number of observations when the number of parameters is fixed, an order faster than the typical rate under maximum likelihood. The widespread use of that estimator in applied work is supported by the results in Kosmidis and Firth (2021, Biometrika), who show that it takes finite values even in cases where the maximum likelihood estimate does not exist. Kosmidis and Firth (2021, Biometrika) also provide empirical evidence that the estimator has good bias properties in high-dimensional settings where the number of parameters grows asymptotically linearly with, but slower than, the number of observations. We design and carry out a large-scale computer experiment covering a wide range of such high-dimensional settings, and produce strong empirical evidence that a simple rescaling of the maximum Jeffreys'-prior penalized likelihood estimator delivers high accuracy in signal recovery in the presence of an intercept parameter. The rescaled estimator is effective even in cases where estimates from maximum likelihood and other recently proposed corrective methods based on approximate message passing do not exist.
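For context, a compact sketch of the estimator both abstracts build on: Firth-type adjusted-score IWLS for logistic regression, in plain NumPy. The stopping rule and the dense leverage computation are illustrative choices for moderate n and p, and the high-dimensional rescaling itself is deliberately not shown, since its exact form is the paper's contribution.

```python
import numpy as np

def jeffreys_logistic(X, y, max_iter=100, tol=1e-8):
    """Maximum Jeffreys'-prior penalized likelihood for logistic regression
    (Firth, 1993) via adjusted-score iterations; a sketch, not the paper's code."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(max_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
        w = mu * (1.0 - mu)                      # working weights
        XtWX = (X * w[:, None]).T @ X            # Fisher information
        # Leverages h_i = w_i * x_i' (X'WX)^{-1} x_i of the weighted fit
        h = np.einsum("ij,ij->i", X @ np.linalg.inv(XtWX), X * w[:, None])
        # Adjusted score: X' (y - mu + h * (1/2 - mu))
        step = np.linalg.solve(XtWX, X.T @ (y - mu + h * (0.5 - mu)))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

Unlike ordinary IWLS, whose estimates diverge under data separation, these adjusted iterations settle at finite values, which is the practical property highlighted by Kosmidis and Firth (2021).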
In this thesis, we introduce a consistent model selection procedure for regression problems based on thresholding statistics that are readily available after estimating the full model. The method depends only on standard assumptions ensuring typical asymptotic properties of estimators, such as consistency. Unlike standard penalized likelihood methods (e.g., the LASSO) or best-subset methods, our approach requires neither tuning parameters nor fitting all possible models. While we focus on generalized linear models (GLMs), the procedure is broadly applicable. Its performance hinges on the estimator used, so we also advance GLM estimation in scenarios where data cannot be stored locally because of the large number of observations, or where the number of covariates grows proportionally with the number of observations. Traditional model selection methods struggle in these settings, but thresholding remains feasible.
For GLMs, we present thresholding as a novel, computationally efficient, and variable-selection-consistent model selection method. The post-selection estimator retains the same limiting distribution as if the true set of variables were known, a property known as the oracle property. This is achieved by thresholding asymptotic Wald statistics, which can be computed from standard maximum likelihood outputs. Additionally, the method allows model confidence sets to be constructed via bootstrapping, making it more accessible to practitioners. Extensive simulations demonstrate the method's effectiveness against existing techniques, providing a practical and efficient model selection procedure for GLMs. Building on Chee et al. (2023), we extend model selection by thresholding in GLMs to Wald statistics formed from stochastic gradient descent iterates. This extension also possesses the oracle property when the selected model is re-estimated by maximum likelihood.
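The core selection rule is simple enough to state in a few lines. In the sketch below, the threshold n**a with 0 < a < 1/2 is an illustrative choice: it diverges, yet more slowly than the sqrt(n) growth of Wald statistics at truly nonzero coefficients, which is the standard route to selection consistency. The thesis' recommended threshold sequence may differ.

```python
import numpy as np

def threshold_select(beta_hat, se_hat, n, a=0.25):
    """Select covariates whose absolute Wald statistics exceed a slowly
    diverging threshold. At zero coefficients the statistics stay O_p(1);
    at nonzero ones they grow like sqrt(n), so any lambda_n with
    lambda_n -> infinity and lambda_n / sqrt(n) -> 0 separates the two."""
    z = np.abs(np.asarray(beta_hat) / np.asarray(se_hat))
    return np.flatnonzero(z > n ** a)

# Hypothetical full-model output: estimates and standard errors
selected = threshold_select([0.03, 1.2, -0.9], [0.05, 0.06, 0.05], n=10_000)
# `selected` holds the indices of the covariates retained in the model
```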
Regarding estimation, we present two new iteratively reweighted least squares variants for fitting GLMs with adjusted score equations for mean bias reduction, and for maximizing the likelihood penalized by a positive power of the Jeffreys-prior penalty, which eliminate the requirement of storing O(n) quantities in memory. This is particularly useful for binomial-response models, since the adjusted score equations ensure finite-valued estimates even when maximum likelihood fails (Kosmidis and Firth, 2021), and the procedures compute the statistics needed for thresholding. In high-dimensional settings, where the number of parameters grows linearly with but slower than the number of observations, we propose a rescaling conjecture for the maximum Jeffreys'-prior penalized likelihood estimator. The rescaled estimator achieves high accuracy in signal recovery in the presence of an intercept parameter, yielding parameter estimates, proxies for the standard errors, and consequently Wald statistics for model selection by thresholding.
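As a closing sketch of how the two threads meet: if the bounded-memory solve above ends with a p x p triangular factor R satisfying R'R = X'WX at convergence (for instance, R[:p, :p] from the earlier chunked-QR sketch), then Wald statistics for thresholding follow from that small factor alone, with nothing O(n) retained. The function name and the availability of such a factor are assumptions of this illustration.

```python
import numpy as np

def wald_from_factor(R, beta_hat):
    """Wald z-statistics from the triangular factor of the final IWLS solve,
    assuming R.T @ R equals X'WX at convergence. Then
    cov(beta_hat) ~ (X'WX)^{-1} = R^{-1} R^{-T}, whose diagonal is the
    row-wise sum of squares of R^{-1}."""
    Rinv = np.linalg.inv(R)                   # p x p, cheap for moderate p
    se = np.sqrt(np.sum(Rinv ** 2, axis=1))   # sqrt of diag(R^{-1} R^{-T})
    return np.asarray(beta_hat) / se
```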