* indicates alphabetical ordering of author names.
No quantum speedup over gradient descent for non-smooth convex optimization [*]
A. Garg, R. Kothari, P. Netrapalli and S. Sherif
ITCS 2021
The Pitfalls of Simplicity Bias in Neural Networks
H. Shah, K. Tamuly, A. Raghunathan, P. Jain and P. Netrapalli
NeurIPS 2020
Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method
K. K. Thekumparampil, P. Jain, P. Netrapalli and S. Oh
NeurIPS 2020 (Spotlight)
Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms [*]
G. Bresler, P. Jain, D. Nagaraj, P. Netrapalli and X. Wu
NeurIPS 2020 (Spotlight)
MOReL: Model-Based Offline Reinforcement Learning
R. Kidambi, A. Rajeswaran, P. Netrapalli and T. Joachims
NeurIPS 2020
Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games
A. S. Suggala and P. Netrapalli
NeurIPS 2020
On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points
C. Jin, P. Netrapalli, R. Ge, S. M. Kakade and M. I. Jordan
Accepted to Journal of the ACM
Efficient Domain Generalization via Common-Specific Low-Rank Decomposition
V. Piratla, P. Netrapalli and S. Sarawagi
ICML 2020
What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?
C. Jin, P. Netrapalli and M. I. Jordan
ICML 2020
Online Non-Convex Learning: Following the Perturbed Leader is Optimal
A. S. Suggala and P. Netrapalli
ALT 2020 (Best Student Paper Award)
Leverage Score Sampling for Faster Accelerated Regression and ERM [*]
N. Agarwal, S. M. Kakade, R. Kidambi, Y. T. Lee, P. Netrapalli and A. Sidford
ALT 2020
P-SIF: Document Embeddings Using Partition Averaging
V. Gupta, A. Saw, P. Nokhiz, P. Netrapalli, P. Rai and P. Talukdar
AAAI 2020
Efficient Algorithms for Smooth Minimax Optimization
K. K. Thekumparampil, P. Jain, P. Netrapalli and S. Oh
NeurIPS 2019
SGD for Least Squares Regression: Towards Minimax Optimality with the Final Iterate [*]
R. Ge, S. M. Kakade, R. Kidambi and P. Netrapalli
NeurIPS 2019
Making the Last Iterate of SGD Information Theoretically Optimal [*]
P. Jain, D. Nagaraj and P. Netrapalli
COLT 2019
SGD without Replacement: Sharper Rates for General Smooth Convex Functions [*]
P. Jain, D. Nagaraj and P. Netrapalli
ICML 2019
A Short Note on Concentration Inequalities for Random Vectors with SubGaussian Norm
C. Jin, P. Netrapalli, R. Ge, S. M. Kakade and M. I. Jordan
Manuscript
Support Recovery for Orthogonal Matching Pursuit: Upper and Lower Bounds
R. Somani, C. Gupta, P. Jain and P. Netrapalli
NeurIPS 2018 (Spotlight)
Smoothed analysis for low-rank solutions to semidefinite programs in quadratic penalty form [*]
S. Bhojanapalli, N. Boumal, P. Jain and P. Netrapalli
COLT 2018
See Nicolas' and Srinadh's interview about this paper on Dustin Mixon's blog
On the insufficiency of existing momentum schemes for Stochastic Optimization
R. Kidambi, P. Netrapalli, P. Jain and S. M. Kakade
ICLR 2018 (Oral)
Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent
C. Jin, P. Netrapalli and M. I. Jordan
COLT 2018
Accelerating Stochastic Gradient Descent For Least Squares Regression [*]
P. Jain, S. M. Kakade, R. Kidambi, P. Netrapalli and A. Sidford
COLT 2018
Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness [*]
C. Musco, P. Netrapalli, A. Sidford, S. Ubaru and D. P. Woodruff
ITCS 2018
A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares) [*]
P. Jain, S. M. Kakade, R. Kidambi, P. Netrapalli, V. K. Pillutla and A. Sidford
FSTTCS 2017 (Invited)
How to Escape Saddle Points Efficiently
C. Jin, R. Ge, P. Netrapalli, S. M. Kakade and M. I. Jordan
ICML 2017
Thresholding based Efficient Outlier Robust PCA [*]
Y. Cherapanamjeri, P. Jain and P. Netrapalli
COLT 2017
Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification [*]
P. Jain, S. M. Kakade, R. Kidambi, P. Netrapalli and A. Sidford
Journal of Machine Learning Research (JMLR) 18(223)
Computing Matrix Squareroot via Non Convex Local Search [*]
P. Jain, C. Jin, S. M. Kakade and P. Netrapalli
AISTATS 2017
Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent
C. Jin, S. M. Kakade and P. Netrapalli
NIPS 2016
Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm [*]
P. Jain, C. Jin, S. M. Kakade, P. Netrapalli and A. Sidford
COLT 2016
Information-theoretic thresholds for community detection in sparse networks [*]
J. Banks, C. Moore, J. Neeman and P. Netrapalli
COLT 2016
This paper was a merge of the following two papers:
Non-Reconstructability in the Stochastic Block Model [*]
J. Neeman and P. Netrapalli
and
Information-theoretic thresholds for community detection in sparse networks
J. Banks and C. Moore
Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis [*]
R. Ge, C. Jin, S. M. Kakade, P. Netrapalli and A. Sidford
ICML 2016
Faster Eigenvector Computation via Shift-and-Invert Preconditioning [*]
D. Garber, E. Hazan, C. Jin, S. M. Kakade, C. Musco, P. Netrapalli and A. Sidford
ICML 2016
This paper was a merge of the following two papers:
Robust Shift-and-Invert Preconditioning: Faster and More Sample Efficient Algorithms for Eigenvector Computation [*]
C. Jin, S. M. Kakade, C. Musco, P. Netrapalli and A. Sidford
and
Fast and Simple PCA via Convex Optimization
D. Garber and E. Hazan
Learning Planar Ising Models
J. K. Johnson, D. Oyen, M. Chertkov and P. Netrapalli
Journal of Machine Learning Research (JMLR) 2016, Vol. 17, Issue 15
Convergence Rates of Active Learning for Maximum Likelihood Estimation [*]
K. Chaudhuri, S. M. Kakade, P. Netrapalli and S. Sanghavi
NIPS 2015
Fast Exact Matrix Completion with Finite Samples [*]
P. Jain and P. Netrapalli
COLT 2015
Non-convex Robust PCA
P. Netrapalli, U. N. Niranjan, S. Sanghavi, A. Anandkumar and P. Jain
NIPS 2014 (Spotlight)
Learning Structure of Power-Law Markov Networks
A. K. Das, P. Netrapalli, S. Sanghavi and S. Vishwanath
ISIT 2014
Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization [*]
A. Agarwal, A. Anandkumar, P. Jain and P. Netrapalli
SIAM Journal on Optimization 2016, Vol. 26, Issue 4, Pages 2775–2799
A Clustering Approach to Learn Sparsely-Used Overcomplete Dictionaries [*]
A. Agarwal, A. Anandkumar and P. Netrapalli
IEEE Transactions on Information Theory 2017, Vol. 63, Issue 1, Pages 575–592
An extended abstract of the above two papers appeared as:
Learning Sparsely Used Overcomplete Dictionaries [*]
A. Agarwal, A. Anandkumar, P. Jain, P. Netrapalli and R. Tandon
COLT 2014
See also this paper by Arora, Ge and Moitra.
Phase Retrieval using Alternating Minimization
P. Netrapalli, P. Jain and S. Sanghavi
IEEE Transactions on Signal Processing 2015, Vol. 63, Issue 18, Pages 4814–4826
An extended abstract appeared in NIPS 2013
IEEE Signal Processing Society Best Paper Award 2019 (awarded annually to up to six papers published during the previous five years)
One-Bit Compressed Sensing: Provable Support and Vector Recovery
S. Gopi, P. Netrapalli, P. Jain and A. Nori
ICML 2013
Low-rank Matrix Completion using Alternating Minimization [*]
P. Jain, P. Netrapalli and S. Sanghavi
STOC 2013
Learning Markov Graphs Up To Edit Distance
A. K. Das, P. Netrapalli, S. Sanghavi and S. Vishwanath
ISIT 2012
Finding the Graph of Epidemic Cascades
P. Netrapalli and S. Sanghavi
SIGMETRICS/Performance 2012
Greedy Learning of Markov Network Structure
P. Netrapalli, S. Banerjee, S. Sanghavi and S. Shakkottai
Allerton 2010 (Invited)