Publications

My research centers on developing efficient optimization algorithms for large-scale machine learning, with a focus on distributed and communication-efficient methods.

Preprints

  1. Huang, J., Wu, R., Liu, D., Lan, J., and Wang, X. (2025). Communication-efficient decentralized optimization via double-communication symmetric ADMM. arXiv preprint arXiv:2511.05283.

  2. Lan, J., Lin, H., and Wang, X. (2024). Minimax and communication-efficient distributed best subset selection with oracle property. arXiv preprint arXiv:2408.17276.

Working Papers

  1. Lan, J. and Wang, X. Distributed semismooth Newton method for sparse optimization. In preparation.

  2. Lan, J. and Wang, X. Coordinate-wise ADMM for piecewise linear/quadratic optimization. In preparation.

Research Details

Distributed Best Subset Selection

To address high-dimensional sparse estimation in distributed environments, we propose an efficient two-stage distributed best subset selection algorithm. The algorithm attains accurate sparse estimates with low communication cost and few iterations, and a data-driven information criterion lets it adaptively select the sparsity level. Theoretical analysis shows that the algorithm recovers the true sparse support exactly and achieves the minimax-optimal error rate, matching the centralized estimator.
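For reference, the estimation target is the standard \(\ell_0\)-constrained least-squares problem; the notation below (design matrix \(X\), response \(y\), sparsity level \(s\)) is generic rather than taken from the paper:

\[
\hat{\beta} \;=\; \arg\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2n}\,\|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le s,
\]

where \(\|\beta\|_0\) counts the nonzero coefficients. In the distributed setting the data are split across machines, and the sparsity level \(s\) is chosen by the data-driven information criterion described above.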

Communication-Efficient Decentralized Optimization

We introduce a novel symmetric ADMM framework for decentralized composite optimization in networks without central coordination. The method uses a double-communication strategy within each iteration; although this raises the per-iteration communication cost, it substantially reduces the total number of iterations and hence the overall communication. We establish optimal communication protocols and prove linear convergence under standard conditions. This is the first decentralized optimization framework to achieve a net reduction in total communication by leveraging a fixed number of communication rounds within each iteration.
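As a rough sketch of the problem class, in generic notation not taken from the paper, decentralized composite optimization over a network of \(m\) agents can be written as

\[
\min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{m} \bigl( f_i(x) + g_i(x) \bigr),
\]

where agent \(i\) holds a smooth local loss \(f_i\) and a possibly nonsmooth term \(g_i\), and may exchange information only with its neighbors in the communication graph. The total communication cost of a method is the per-iteration number of neighbor exchanges multiplied by the number of iterations, which is the quantity the double-communication strategy reduces.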

Distributed Semismooth Newton Method

For distributed optimization problems involving semismooth functions, we propose an efficient method that combines a dual framework with the semismooth Newton method. Theoretical analysis shows that the algorithm converges linearly, and each machine needs to transmit only one vector per iteration, which keeps the communication cost low. Experiments on Lasso and quantile regression show that the method is substantially faster than traditional first-order methods and ADMM.
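For context, the generic (centralized) semismooth Newton iteration solves a nonsmooth equation \(F(x) = 0\) that characterizes optimality; the notation here is illustrative and not the paper's:

\[
x^{k+1} \;=\; x^{k} - H_k^{-1} F\!\left(x^{k}\right), \qquad H_k \in \partial F\!\left(x^{k}\right),
\]

where \(\partial F\) denotes the Clarke generalized Jacobian of the semismooth mapping \(F\). Each step reduces to solving a linear system, which is what gives the method its speed advantage over first-order updates; our contribution lies in carrying this out within a dual, distributed framework in which each machine transmits a single vector per iteration.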

Coordinate-wise ADMM (Co-ADMM)

Inspired by the solving strategy of LibSVM, the state-of-the-art solver for support vector machine (SVM) problems, we extend its approach to a broader class of objectives and propose a coordinate-wise alternating direction method of multipliers (Co-ADMM). The method handles optimization problems composed of piecewise linear or quadratic loss functions plus general convex penalties. Co-ADMM converges linearly and has \(O(n)\) per-iteration computational complexity. The solver built on this method is considerably faster than existing solvers such as CVX, ECOS, and SCS on SVM, Huber regression, and quantile regression problems.
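A sketch of the problem class that Co-ADMM targets, in generic notation not taken from the paper:

\[
\min_{\beta \in \mathbb{R}^p} \; \sum_{i=1}^{n} \phi\!\left(y_i,\; x_i^{\top}\beta\right) + P(\beta),
\]

where \(\phi\) is a piecewise linear or piecewise quadratic loss and \(P\) is a general convex penalty. The hinge loss recovers the SVM, the Huber loss robust regression, and the check loss quantile regression, which are exactly the settings used in the solver comparison above.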