Publications

[*] indicates alphabetical ordering of author names.

Surveys

  1. Stochastic Gradient Descent and Its Variants in Machine Learning
    P. Netrapalli
    Journal of the Indian Institute of Science

Research articles

  1. Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs [*]
    N. Agarwal, S. Chaudhuri, P. Jain, D. Nagaraj and P. Netrapalli
    ICLR 2022

  2. Focus on the Common Good: Group Distributional Robustness Follows
    V. Piratla, P. Netrapalli and S. Sarawagi
    ICLR 2022

  3. Minimax Optimization with Smooth Algorithmic Adversaries [*]
    T. Fiez, C. Jin, P. Netrapalli and L. Ratliff
    ICLR 2022

  4. Near-Optimal Lower Bounds For Convex Optimization For All Orders of Smoothness [*]
    A. Garg, R. Kothari, P. Netrapalli and S. Sherif
    NeurIPS 2021 (Spotlight) & QIP 2022

  5. Sample Efficient Linear Meta-Learning by Alternating Minimization
    K. K. Thekumparampil, P. Jain, P. Netrapalli and S. Oh
    NeurIPS 2021

  6. Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems [*]
    P. Jain, S. S. Kowshik, D. Nagaraj and P. Netrapalli
    NeurIPS 2021 (Spotlight)

  7. Streaming Linear System Identification with Reverse Experience Replay [*]
    P. Jain, S. S. Kowshik, D. Nagaraj and P. Netrapalli
    NeurIPS 2021

  8. Do Input Gradients Highlight Discriminative Features?
    H. Shah, P. Jain and P. Netrapalli
    NeurIPS 2021

  9. Efficient Bandit Convex Optimization: Beyond Linear Losses
    A. S. Suggala, P. Ravikumar and P. Netrapalli
    COLT 2021

  10. Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization
    A. Saha, N. Natarajan, P. Netrapalli and P. Jain
    ICML 2021

  11. No quantum speedup over gradient descent for non-smooth convex optimization [*]
    A. Garg, R. Kothari, P. Netrapalli and S. Sherif
    ITCS 2021 & QIP 2021

  12. The Pitfalls of Simplicity Bias in Neural Networks
    H. Shah, K. Tamuly, A. Raghunathan, P. Jain and P. Netrapalli
    NeurIPS 2020

  13. Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method
    K. K. Thekumparampil, P. Jain, P. Netrapalli and S. Oh
    NeurIPS 2020 (Spotlight)

  14. Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms [*]
    G. Bresler, P. Jain, D. Nagaraj, P. Netrapalli and X. Wu
    NeurIPS 2020 (Spotlight)

  15. MOReL: Model-Based Offline Reinforcement Learning
    R. Kidambi, A. Rajeswaran, P. Netrapalli and T. Joachims
    NeurIPS 2020

  16. Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games
    A. S. Suggala and P. Netrapalli
    NeurIPS 2020

  17. On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points
    C. Jin, P. Netrapalli, R. Ge, S. M. Kakade and M. I. Jordan
    Accepted to Journal of the ACM

  18. Efficient Domain Generalization via Common-Specific Low-Rank Decomposition
    V. Piratla, P. Netrapalli and S. Sarawagi
    ICML 2020

  19. What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?
    C. Jin, P. Netrapalli and M. I. Jordan
    ICML 2020

  20. Online Non-Convex Learning: Following the Perturbed Leader is Optimal
    A. S. Suggala and P. Netrapalli
    ALT 2020 (Best Student Paper Award)

  21. Leverage Score Sampling for Faster Accelerated Regression and ERM [*]
    N. Agarwal, S. M. Kakade, R. Kidambi, Y. T. Lee, P. Netrapalli and A. Sidford
    ALT 2020

  22. P-SIF: Document Embeddings Using Partition Averaging
    V. Gupta, A. Saw, P. Nokhiz, P. Netrapalli, P. Rai and P. Talukdar
    AAAI 2020

  23. Efficient Algorithms for Smooth Minimax Optimization
    K. K. Thekumparampil, P. Jain, P. Netrapalli and S. Oh
    NeurIPS 2019

  24. SGD for Least Squares Regression: Towards Minimax Optimality with the Final Iterate [*]
    R. Ge, S. M. Kakade, R. Kidambi and P. Netrapalli
    NeurIPS 2019

  25. Making the Last Iterate of SGD Information Theoretically Optimal [*]
    P. Jain, D. Nagaraj and P. Netrapalli
    COLT 2019

  26. SGD without Replacement: Sharper Rates for General Smooth Convex Functions [*]
    P. Jain, D. Nagaraj and P. Netrapalli
    ICML 2019

  27. A Short Note on Concentration Inequalities for Random Vectors with SubGaussian Norm
    C. Jin, P. Netrapalli, R. Ge, S. M. Kakade and M. I. Jordan
    Manuscript

  28. Support Recovery for Orthogonal Matching Pursuit: Upper and Lower Bounds
    R. Somani, C. Gupta, P. Jain and P. Netrapalli
    NeurIPS 2018 (Spotlight)

  29. Smoothed analysis for low-rank solutions to semidefinite programs in quadratic penalty form [*]
    S. Bhojanapalli, N. Boumal, P. Jain and P. Netrapalli
    COLT 2018
    See Nicolas' and Srinadh's interview about this paper on Dustin Mixon's blog.

  30. On the insufficiency of existing momentum schemes for Stochastic Optimization
    R. Kidambi, P. Netrapalli, P. Jain and S. M. Kakade
    ICLR 2018 (Oral)

  31. Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent
    C. Jin, P. Netrapalli and M. I. Jordan
    COLT 2018

  32. Accelerating Stochastic Gradient Descent For Least Squares Regression [*]
    P. Jain, S. M. Kakade, R. Kidambi, P. Netrapalli and A. Sidford
    COLT 2018

  33. Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness [*]
    C. Musco, P. Netrapalli, A. Sidford, S. Ubaru and D. P. Woodruff
    ITCS 2018

  34. A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares) [*]
    P. Jain, S. M. Kakade, R. Kidambi, P. Netrapalli, V. K. Pillutla and A. Sidford
    FSTTCS 2017 (Invited)

  35. How to Escape Saddle Points Efficiently
    C. Jin, R. Ge, P. Netrapalli, S. M. Kakade and M. I. Jordan
    ICML 2017

  36. Thresholding based Efficient Outlier Robust PCA [*]
    Y. Cherapanamjeri, P. Jain and P. Netrapalli
    COLT 2017

  37. Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification [*]
    P. Jain, S. M. Kakade, R. Kidambi, P. Netrapalli and A. Sidford
    Journal of Machine Learning Research (JMLR) 18(223)

  38. Computing Matrix Squareroot via Non Convex Local Search [*]
    P. Jain, C. Jin, S. M. Kakade and P. Netrapalli
    AISTATS 2017

  39. Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent
    C. Jin, S. M. Kakade and P. Netrapalli
    NIPS 2016

  40. Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm [*]
    P. Jain, C. Jin, S. M. Kakade, P. Netrapalli and A. Sidford
    COLT 2016

  41. Information-theoretic thresholds for community detection in sparse networks [*]
    J. Banks, C. Moore, J. Neeman and P. Netrapalli
    COLT 2016

    This paper was a merger of the following two papers:
    Non-Reconstructability in the Stochastic Block Model [*]
    J. Neeman and P. Netrapalli
    and
    Information-theoretic thresholds for community detection in sparse networks
    J. Banks and C. Moore

  42. Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis [*]
    R. Ge, C. Jin, S. M. Kakade, P. Netrapalli and A. Sidford
    ICML 2016

  43. Faster Eigenvector Computation via Shift-and-Invert Preconditioning [*]
    D. Garber, E. Hazan, C. Jin, S. M. Kakade, C. Musco, P. Netrapalli and A. Sidford
    ICML 2016

    This paper was a merger of the following two papers:
    Robust Shift-and-Invert Preconditioning: Faster and More Sample Efficient Algorithms for Eigenvector Computation [*]
    C. Jin, S. M. Kakade, C. Musco, P. Netrapalli and A. Sidford
    and
    Fast and Simple PCA via Convex Optimization
    D. Garber and E. Hazan

  44. Learning Planar Ising Models
    J. K. Johnson, D. Oyen, M. Chertkov and P. Netrapalli
    Journal of Machine Learning Research (JMLR) 2016, Volume 17, Issue 15

  45. Convergence Rates of Active Learning for Maximum Likelihood Estimation [*]
    K. Chaudhuri, S. M. Kakade, P. Netrapalli and S. Sanghavi
    NIPS 2015

  46. Fast Exact Matrix Completion with Finite Samples [*]
    P. Jain and P. Netrapalli
    COLT 2015

  47. Non-convex Robust PCA
    P. Netrapalli, U. N. Niranjan, S. Sanghavi, A. Anandkumar and P. Jain
    NIPS 2014 (Spotlight)

  48. Learning Structure of Power-Law Markov Networks
    A. K. Das, P. Netrapalli, S. Sanghavi and S. Vishwanath
    ISIT 2014

  49. Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization [*]
    A. Agarwal, A. Anandkumar, P. Jain and P. Netrapalli
    SIAM Journal on Optimization 2016, Vol. 26, Issue 4, Pages 2775–2799

  50. A Clustering Approach to Learn Sparsely-Used Overcomplete Dictionaries [*]
    A. Agarwal, A. Anandkumar and P. Netrapalli
    IEEE Transactions on Information Theory 2017, Vol. 63, Issue 1, Pages 575–592

    An extended abstract of the above two papers appeared as:
    Learning Sparsely Used Overcomplete Dictionaries [*]
    A. Agarwal, A. Anandkumar, P. Jain, P. Netrapalli and R. Tandon
    COLT 2014

    See also the related paper by Arora, Ge and Moitra.

  51. Phase Retrieval using Alternating Minimization
    P. Netrapalli, P. Jain and S. Sanghavi
    IEEE Transactions on Signal Processing 2015, Vol. 63, Issue 18, Pages 4814–4826
    An extended abstract appeared in NIPS 2013
    IEEE Signal Processing Society Best Paper Award 2019 (awarded annually to up to six papers published during the previous five years)

  52. One-Bit Compressed Sensing: Provable Support and Vector Recovery
    S. Gopi, P. Netrapalli, P. Jain and A. Nori
    ICML 2013

  53. Low-rank Matrix Completion using Alternating Minimization [*]
    P. Jain, P. Netrapalli and S. Sanghavi
    STOC 2013

  54. Learning Markov Graphs Up To Edit Distance
    A. K. Das, P. Netrapalli, S. Sanghavi and S. Vishwanath
    ISIT 2012

  55. Finding the Graph of Epidemic Cascades
    P. Netrapalli and S. Sanghavi
    SIGMETRICS/Performance 2012

  56. Greedy Learning of Markov Network Structure
    P. Netrapalli, S. Banerjee, S. Sanghavi and S. Shakkottai
    Allerton 2010 (Invited)