Address: Computing Applications Building, Room 141
605 E. Springfield Ave, Champaign, IL 61820 USA
I am broadly interested in the mathematics of data science and artificial intelligence. My research centers on high-dimensional statistics, machine learning, and optimal transport.
We determine the information-theoretic cutoff on the separation of cluster centers for exact recovery of cluster labels in a K-component Gaussian mixture model with equal cluster sizes. Moreover, we show that a semidefinite programming (SDP) relaxation of the K-means clustering method achieves this sharp threshold for exact recovery without assuming symmetry of the cluster centers.
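As a rough illustration of the method, here is a minimal sketch of the standard SDP relaxation of K-means (the Peng–Wei formulation) in Python with `cvxpy`; the function name, the solver choice, and the Gram-matrix affinity are illustrative assumptions, not code from the paper.

```python
import numpy as np
import cvxpy as cp

def sdp_kmeans(X, K):
    """Sketch of the K-means SDP relaxation:
    maximize <X X^T, Z>  s.t.  Z PSD, Z >= 0 entrywise, Z 1 = 1, tr(Z) = K."""
    n = X.shape[0]
    A = X @ X.T  # Gram matrix of the n data points
    Z = cp.Variable((n, n), PSD=True)
    cons = [Z >= 0, cp.sum(Z, axis=1) == 1, cp.trace(Z) == K]
    cp.Problem(cp.Maximize(cp.trace(A @ Z)), cons).solve(solver=cp.SCS)
    return Z.value

# Above the sharp separation threshold, the solution is (close to) the
# block-diagonal cluster membership matrix, so labels can be read off,
# e.g., by spectral rounding of Z.
```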
@article{9366690,
  author  = {Chen, Xiaohui and Yang, Yun},
  title   = {Cutoff for Exact Recovery of Gaussian Mixture Models},
  journal = {IEEE Transactions on Information Theory},
  year    = {2021},
  month   = jun,
  volume  = {67},
  number  = {6},
  pages   = {4223--4238},
  doi     = {10.1109/TIT.2021.3063155},
  issn    = {1557-9654}
}
We introduce the diffusion K-means clustering method on Riemannian submanifolds, which maximizes the within-cluster connectedness based on the diffusion distance. Diffusion K-means constructs a random walk on a similarity graph whose vertices are data points randomly sampled on the manifolds and whose edge weights are similarities given by a kernel that captures the local geometry of the manifolds. Diffusion K-means is a multi-scale clustering tool suitable for data with non-linear and non-Euclidean geometric features in mixed dimensions. Given the number of clusters, we propose a polynomial-time convex relaxation algorithm via semidefinite programming (SDP) to solve the diffusion K-means problem. In addition, we propose a nuclear-norm-regularized SDP that is adaptive to the number of clusters. In both cases, we show that exact recovery of the SDPs for diffusion K-means can be achieved under suitable between-cluster separability and within-cluster connectedness of the submanifolds, which together quantify the hardness of the manifold clustering problem. We further propose the localized diffusion K-means, which uses a local adaptive bandwidth estimated from the nearest neighbors. We show that exact recovery of the localized diffusion K-means is fully adaptive to the local probability density and geometric structures of the underlying submanifolds.
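A schematic sketch of the pipeline described above, under simplifying assumptions: a Gaussian-kernel similarity graph, the random-walk transition matrix, a diffusion affinity taken here as a power of the transition matrix, and the same SDP feasible set as in the K-means relaxation. The fixed `bandwidth`, the diffusion time `t`, and the omission of the stationary-distribution weights appearing in the paper's formulation are all illustrative choices.

```python
import numpy as np
import cvxpy as cp

def diffusion_kmeans_sdp(X, K, bandwidth=1.0, t=1):
    # Similarity graph with a Gaussian kernel capturing local geometry.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / (2 * bandwidth ** 2))
    P = W / W.sum(axis=1, keepdims=True)    # random walk on the graph
    A = np.linalg.matrix_power(P, 2 * t)    # t-step diffusion affinity (schematic)
    n = X.shape[0]
    Z = cp.Variable((n, n), PSD=True)
    cons = [Z >= 0, cp.sum(Z, axis=1) == 1, cp.trace(Z) == K]
    cp.Problem(cp.Maximize(cp.trace(A @ Z)), cons).solve(solver=cp.SCS)
    return Z.value
```

The localized variant would replace the fixed `bandwidth` with a per-point bandwidth estimated from nearest-neighbor distances.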
@article{CHEN2021303,
  author   = {Chen, Xiaohui and Yang, Yun},
  title    = {Diffusion K-means clustering on manifolds: Provable exact recovery via semidefinite relaxations},
  journal  = {Applied and Computational Harmonic Analysis},
  year     = {2021},
  month    = may,
  volume   = {52},
  pages    = {303--347},
  doi      = {10.1016/j.acha.2020.03.002},
  issn     = {1063-5203},
  keywords = {Manifold clustering, K-means, Riemannian submanifolds, Diffusion distance, Semidefinite programming, Random walk on random graphs, Laplace-Beltrami operator, Mixing times, Adaptivity},
  url      = {https://www.sciencedirect.com/science/article/pii/S106352032030021X}
}
Cumulative sum (CUSUM) statistics are widely used in change point inference and identification. For the problem of testing for the existence of a change point in an independent sample generated from the mean-shift model, we introduce a Gaussian multiplier bootstrap to calibrate critical values of the CUSUM test statistics in high dimensions. The proposed bootstrap CUSUM test is fully data-dependent and has strong theoretical guarantees under arbitrary dependence structures and mild moment conditions. Specifically, we show that with a boundary removal parameter the bootstrap CUSUM test enjoys uniform validity in size under the null and achieves the minimax separation rate under sparse alternatives when the dimension p can be larger than the sample size n. Once a change point is detected, we estimate the change point location by maximising the ℓ∞-norm of the generalised CUSUM statistics at two different weighting scales, corresponding to covariance stationary and non-stationary CUSUM statistics. For both estimators, we derive their rates of convergence and show that the dimension impacts the rates only through logarithmic factors, which implies that consistency of the CUSUM estimators is possible when p is much larger than n. In the presence of multiple change points, we propose a principled bootstrap-assisted binary segmentation (BABS) algorithm to dynamically adjust the change point detection rule and recursively estimate their locations. We derive its rate of convergence under suitable signal separation and strength conditions. The results derived in this paper are non-asymptotic, and we provide extensive simulation studies to assess the finite sample performance. The empirical evidence shows an encouraging agreement with our theoretical results.
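A compact sketch of the single-change-point test, assuming i.i.d. rows under the null: compute the ℓ∞-norm CUSUM statistic over a boundary-removed window, then calibrate its critical value by recomputing the CUSUM process on centered data multiplied by i.i.d. Gaussian weights. The names, the window choice, and `B` are illustrative.

```python
import numpy as np

def max_cusum(X, boundary):
    # sup over s of the weighted l_inf CUSUM norm, away from the endpoints
    n = X.shape[0]
    stats = [np.sqrt(s * (n - s) / n)
             * np.abs(X[:s].mean(axis=0) - X[s:].mean(axis=0)).max()
             for s in range(boundary, n - boundary)]
    return max(stats)

def bootstrap_cusum_test(X, boundary, B=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    T = max_cusum(X, boundary)            # observed test statistic
    Xc = X - X.mean(axis=0)               # center under the null
    boot = [max_cusum(Xc * rng.standard_normal((X.shape[0], 1)), boundary)
            for _ in range(B)]            # Gaussian multiplier draws
    crit = np.quantile(boot, 1 - alpha)   # bootstrap critical value
    return T, crit, T > crit
```

Once the test rejects, the change point location is estimated by the argmax of the (generalised) CUSUM norms, and BABS applies this rule recursively on sub-segments.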
@article{https://doi.org/10.1111/rssb.12406,
  author   = {Yu, Mengjia and Chen, Xiaohui},
  title    = {Finite sample change point inference and identification for high-dimensional mean vectors},
  journal  = {Journal of the Royal Statistical Society: Series B (Statistical Methodology)},
  year     = {2021},
  month    = apr,
  volume   = {83},
  number   = {2},
  pages    = {247--270},
  doi      = {10.1111/rssb.12406},
  keywords = {binary segmentation, bootstrap, CUSUM, change point analysis, Gaussian approximation, high-dimensional data},
  url      = {https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/rssb.12406}
}
This paper is concerned with finite sample approximations to the supremum of a non-degenerate U-process of a general order indexed by a function class. We are primarily interested in situations where the function class and the underlying distribution change with the sample size, and the U-process itself is not weakly convergent as a process. Such situations arise in a variety of modern statistical problems. We first consider Gaussian approximations, namely, approximate the U-process supremum by the supremum of a Gaussian process, and derive coupling and Kolmogorov distance bounds. Such Gaussian approximations are, however, not often directly applicable in statistical problems since the covariance function of the approximating Gaussian process is unknown. This motivates us to study bootstrap-type approximations to the U-process supremum. We propose a novel jackknife multiplier bootstrap (JMB) tailored to the U-process, and derive coupling and Kolmogorov distance bounds for the proposed JMB method. All these results are non-asymptotic, and established under fairly general conditions on function classes and underlying distributions. Key technical tools in the proofs are new local maximal inequalities for U-processes, which may be useful in other problems. We also discuss applications of the general approximation results to testing for qualitative features of nonparametric functions based on generalized local U-processes.
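A minimal sketch of the JMB for an order-two U-statistic, with the "function class" reduced to the coordinates of a vector-valued kernel; the kernel, the names, and the multiplier distribution are assumptions for illustration, not the paper's general empirical-process setting.

```python
import numpy as np

def jmb_sup_draws(X, kernel, B=500, seed=0):
    # Jackknife multiplier bootstrap for sup_f |centered U-statistic|,
    # with f ranging over the coordinates of a vector-valued kernel.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    H = np.array([[kernel(X[i], X[j]) for j in range(n)]
                  for i in range(n)])            # H[i, j] = h(X_i, X_j)
    U = H[~np.eye(n, dtype=bool)].mean(axis=0)   # U-statistic: off-diagonal average
    diag = H[np.arange(n), np.arange(n)]
    G = (H.sum(axis=1) - diag) / (n - 1)         # jackknife Hajek-projection estimates
    draws = np.empty(B)
    for b in range(B):
        xi = rng.standard_normal(n)              # Gaussian multipliers
        draws[b] = np.abs(xi @ (G - U) / np.sqrt(n)).max()
    return U, draws                              # quantiles of draws calibrate the supremum
```

For example, `kernel = lambda x, y: np.abs(x - y)` gives coordinatewise Gini mean differences.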
@article{ChenKato2020_PTRF,
  author  = {Chen, Xiaohui and Kato, Kengo},
  title   = {Jackknife multiplier bootstrap: finite sample approximations to the U-process supremum with applications},
  journal = {Probability Theory and Related Fields},
  year    = {2020},
  month   = jul,
  volume  = {176},
  pages   = {1097--1163},
  doi     = {10.1007/s00440-019-00936-y},
  issn    = {1432-2064},
  url     = {https://doi.org/10.1007/s00440-019-00936-y}
}
This paper studies inference for the mean vector of a high-dimensional U-statistic. In the era of big data, the dimension d of the U-statistic and the sample size n of the observations tend to both be large, and computation of the U-statistic is prohibitively demanding. Data-dependent inferential procedures, such as the empirical bootstrap for U-statistics, are even more computationally expensive. To overcome this computational bottleneck, incomplete U-statistics obtained by sampling fewer terms of the U-statistic are attractive alternatives. In this paper, we introduce randomized incomplete U-statistics with sparse weights whose computational cost can be made independent of the order of the U-statistic. We derive nonasymptotic Gaussian approximation error bounds for the randomized incomplete U-statistics in high dimensions, namely in cases where the dimension d is possibly much larger than the sample size n, for both nondegenerate and degenerate kernels. In addition, we propose generic bootstrap methods for the incomplete U-statistics that are computationally much less demanding than existing bootstrap methods, and establish finite sample validity of the proposed bootstrap methods. We illustrate our methods with an application to nonparametric testing for the pairwise independence of a high-dimensional random vector under weaker assumptions than those appearing in the literature.
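The Bernoulli-sampling construction is easy to sketch for order two: keep each pair independently with probability p chosen so that roughly `N_target` kernel evaluations are made, whatever the dimension of the kernel. The names and the budget parameter are illustrative.

```python
import numpy as np
from itertools import combinations

def incomplete_ustat(X, kernel, N_target, seed=0):
    # Randomized incomplete U-statistic of order 2 with Bernoulli sampling:
    # each of the n-choose-2 pairs is retained with probability
    # p = N_target / (n choose 2), so only ~N_target (expensive) kernel
    # evaluations are performed.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    p = min(1.0, N_target / (n * (n - 1) / 2))
    vals = [kernel(X[i], X[j]) for i, j in combinations(range(n), 2)
            if rng.random() < p]       # kernel evaluated only for kept pairs
    return np.mean(vals, axis=0), len(vals)
```

Sampling the pairs with replacement instead of by Bernoulli coin flips gives the other randomized variant mentioned in the keywords.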
@article{10.1214/18-AOS1773,
  author    = {Chen, Xiaohui and Kato, Kengo},
  title     = {{Randomized incomplete $U$-statistics in high dimensions}},
  journal   = {The Annals of Statistics},
  year      = {2019},
  volume    = {47},
  number    = {6},
  pages     = {3127--3156},
  doi       = {10.1214/18-AOS1773},
  publisher = {Institute of Mathematical Statistics},
  keywords  = {Bernoulli sampling, bootstrap, Divide and conquer, Gaussian approximation, incomplete $U$-statistics, randomized inference, sampling with replacement},
  url       = {https://doi.org/10.1214/18-AOS1773}
}
This paper studies the Gaussian and bootstrap approximations for the probabilities of a nondegenerate U-statistic belonging to the hyperrectangles in ℝ^d when the dimension d is large. A two-step Gaussian approximation procedure that does not impose structural assumptions on the data distribution is proposed. Subject to mild moment conditions on the kernel, we establish the explicit rate of convergence uniformly in the class of all hyperrectangles in ℝ^d that decays polynomially in sample size for a high-dimensional scaling limit, where the dimension can be much larger than the sample size. We also provide computable approximation methods for the quantiles of the maxima of centered U-statistics. Specifically, we provide a unified perspective for the empirical bootstrap, the randomly reweighted bootstrap, and the Gaussian multiplier bootstrap with the jackknife estimator of the covariance matrix as randomly reweighted quadratic forms, and we establish their validity. We show that all three methods are inferentially first-order equivalent for high-dimensional U-statistics in the sense that they achieve the same uniform rate of convergence over all d-dimensional hyperrectangles. In particular, they are asymptotically valid when the dimension d can be as large as O(e^{n^c}) for some constant c ∈ (0, 1/7). The bootstrap methods are applied to statistical applications for high-dimensional non-Gaussian data, including: (i) principled and data-dependent tuning parameter selection for regularized estimation of the covariance matrix and its related functionals; (ii) simultaneous inference for the covariance and rank correlation matrices. In particular, for the thresholded covariance matrix estimator with the bootstrap-selected tuning parameter, we show that for a class of sub-Gaussian data, error bounds of the bootstrapped thresholded covariance matrix estimator can be much tighter than those of the minimax estimator with a universal threshold. In addition, we show that Gaussian-like convergence rates can be achieved for heavy-tailed data, which are less conservative than those obtained by the Bonferroni technique that ignores the dependency in the underlying data distribution.
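A sketch of application (i): pick the threshold for the thresholded covariance estimator as a multiplier-bootstrap quantile of the sup-norm estimation error. The 1/n covariance scaling, the hard-thresholding rule, and all names are illustrative choices, not the paper's code.

```python
import numpy as np

def bootstrap_thresholded_cov(X, B=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    prods = Xc[:, :, None] * Xc[:, None, :]   # (n, d, d) per-sample products
    S = prods.mean(axis=0)                    # sample covariance matrix
    boot = np.empty(B)
    for b in range(B):
        e = rng.standard_normal(n)            # Gaussian multipliers
        # (1/n) sum_i e_i (x_i x_i^T - S) mimics the error S - Sigma in sup-norm
        boot[b] = np.abs(np.tensordot(e, prods - S, axes=(0, 0)) / n).max()
    lam = np.quantile(boot, 1 - alpha)        # data-dependent threshold
    return np.where(np.abs(S) > lam, S, 0.0), lam
```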
@article{10.1214/17-AOS1563,
  author    = {Chen, Xiaohui},
  title     = {{Gaussian and bootstrap approximations for high-dimensional U-statistics and their applications}},
  journal   = {The Annals of Statistics},
  year      = {2018},
  volume    = {46},
  number    = {2},
  pages     = {642--678},
  doi       = {10.1214/17-AOS1563},
  publisher = {Institute of Mathematical Statistics},
  keywords  = {bootstrap, Gaussian approximation, high-dimensional inference, U-statistics},
  url       = {https://doi.org/10.1214/17-AOS1563}
}