By Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright (Eds.)
Price : Rs. 795.00 / Rs. 636.00 ISBN : 9788120347540
Pages : 508 Year of Publication : 2013
Publisher : MIT Press
Description :
The interplay between optimization and machine learning is one of the most important developments in modern computational science.
Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today’s machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.
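As a small taste of the proximal methods and sparsity-inducing norms surveyed in the book, the sketch below implements a basic proximal-gradient (ISTA) loop for the lasso problem min_x 0.5||Ax - b||^2 + lam||x||_1. This is an illustrative example written for this listing, not code from the book; the function names and problem sizes are arbitrary.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iters=500):
    """Proximal-gradient (ISTA) iterations for the lasso problem."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)           # gradient of 0.5 * ||Ax - b||^2
        x = soft_threshold(x - grad / L, lam / L)  # gradient step, then prox
    return x

# Synthetic sparse-recovery instance: only 3 of 20 coefficients are nonzero.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
```

The soft-thresholding step is what produces exactly-zero coefficients, which is the mechanism behind the sparsity-inducing-norm chapters; accelerated and stochastic variants of this loop are among the first-order methods the book treats in depth.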
Content :
Series Foreword. Preface.
1. Introduction: Optimization and Machine Learning
2. Convex Optimization with Sparsity-Inducing Norms
3. Interior-Point Methods for Large-Scale Cone Programming
4. Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey
5. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, I: General Purpose Methods
6. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, II: Utilizing Problem's Structure
7. Cutting-Plane Methods in Machine Learning
8. Introduction to Dual Decomposition for Inference
9. Augmented Lagrangian Methods for Learning, Selecting, and Combining Features
10. The Convex Optimization Approach to Regret Minimization
11. Projected Newton-type Methods in Machine Learning
12. Interior-Point Methods in Machine Learning
13. The Tradeoffs of Large-Scale Learning
14. Robust Optimization in Machine Learning
15. Improving First and Second-Order Methods by Modeling Uncertainty
16. Bandit View on Noisy Optimization
17. Optimization Methods for Sparse Inverse Covariance Selection
18. A Pathwise Algorithm for Covariance Selection.