Numerical Optimization (2nd Edition)

  • Word count: 872,000
  • Binding: paperback
  • Publisher: Science Press
  • Author: (US) Jorge Nocedal et al.
  • Publication date: 2019-03-01
  • Barcode (ISBN): 9787030605511
  • Edition: 1
  • Format: B5
  • Pages: 692
  • Publication year: 2019
List price: ¥198
Synopsis
Numerical Optimization (2nd Edition, English-language photoreprint, hardcover), part of the Series of Foreign Mathematics Classics, is by Jorge Nocedal and Stephen J. Wright. Drawing on their experience in teaching, research, and consulting, the authors have written a book suited to both students and practitioners. It provides a comprehensive, up-to-date account of the most effective methods in continuous optimization; each chapter starts from basic concepts and builds up to the techniques in current use.
The book emphasizes practical methods and contains numerous illustrations and exercises. It is accessible to a wide audience: it can serve as a graduate textbook in engineering, operations research, mathematics, computer science, and business, and as a handbook for researchers and practitioners in the field.
Contents
Preface
Preface to the Second Edition
1 Introduction
Mathematical Formulation
Example: A Transportation Problem
Continuous versus Discrete Optimization
Constrained and Unconstrained Optimization
Global and Local Optimization
Stochastic and Deterministic Optimization
Convexity
Optimization Algorithms
Notes and References
2 Fundamentals of Unconstrained Optimization
2.1 What Is a Solution?
Recognizing a Local Minimum
Nonsmooth Problems
2.2 Overview of Algorithms
Two Strategies: Line Search and Trust Region
Search Directions for Line Search Methods
Models for Trust-Region Methods
Scaling
Exercises
3 Line Search Methods
3.1 Step Length
The Wolfe Conditions
The Goldstein Conditions
Sufficient Decrease and Backtracking
3.2 Convergence of Line Search Methods
3.3 Rate of Convergence
Convergence Rate of Steepest Descent
Newton's Method
Quasi-Newton Methods
3.4 Newton's Method with Hessian Modification
Eigenvalue Modification
Adding a Multiple of the Identity
Modified Cholesky Factorization
Modified Symmetric Indefinite Factorization
3.5 Step-Length Selection Algorithms
Interpolation
Initial Step Length
A Line Search Algorithm for the Wolfe Conditions
Notes and References
Exercises
4 Trust-Region Methods
Outline of the Trust-Region Approach
4.1 Algorithms Based on the Cauchy Point
The Cauchy Point
Improving on the Cauchy Point
The Dogleg Method
Two-Dimensional Subspace Minimization
4.2 Global Convergence
Reduction Obtained by the Cauchy Point
Convergence to Stationary Points
4.3 Iterative Solution of the Subproblem
The Hard Case
Proof of Theorem 4.1
Convergence of Algorithms Based on Nearly Exact Solutions
4.4 Local Convergence of Trust-Region Newton Methods
4.5 Other Enhancements
Scaling
Trust Regions in Other Norms
Notes and References
Exercises
5 Conjugate Gradient Methods
5.1 The Linear Conjugate Gradient Method
Conjugate Direction Methods
Basic Properties of the Conjugate Gradient Method
A Practical Form of the Conjugate Gradient Method
Rate of Convergence
Preconditioning
Practical Preconditioners
5.2 Nonlinear Conjugate Gradient Methods
The Fletcher-Reeves Method
The Polak-Ribière Method and Variants
Quadratic Termination and Restarts
Behavior of the Fletcher-Reeves Method
Global Convergence
Numerical Performance
Notes and References
Exercises
6 Quasi-Newton Methods
6.1 The BFGS Method
Properties of the BFGS Method
Implementation
6.2 The SR1 Method
Properties of SR1 Updating
6.3 The Broyden Class
6.4 Convergence Analysis
Global Convergence of the BFGS Method
Superlinear Convergence of the BFGS Method
Convergence Analysis of the SR1 Method
Notes and References
Exercises
7 Large-Scale Unconstrained Optimization
7.1 Inexact Newton Methods
Local Convergence of Inexact Newton Methods
Line Search Newton-CG Method
Trust-Region Newton-CG Method
Preconditioning the Trust-Region Newton-CG Method
Trust-Region Newton-Lanczos Method
7.2 Limited-Memory Quasi-Newton Methods
Limited-Memory BFGS
Relationship with Conjugate Gradient Methods
General Limited-Memory Updating
Compact Representation of BFGS Updating
Unrolling the Update
7.3 Sparse Quasi-Newton Updates
7.4 Algorithms for Partially Separable Functions
7.5 Perspectives and Software
Notes and References
Exercises
8 Calculating Derivatives
8.1 Finite-Difference Derivative Approximations
Approximating the Gradient
Approximating a Sparse Jacobian
Approximating the Hessian
Approximating a Sparse Hessian
8.2 Automatic Differentiation
An Example
The Forward Mode
The Reverse Mode
Vector Functions and Partial Separability
Calculating Jacobians of Vector Functions
Calculating Hessians: Forward Mode
Calculating Hessians: Reverse Mode
Current Limitations
Notes and References
Exercises
9 Derivative-Free Optimization
9.1 Finite Differences and Noise
9.2 Model-Based Methods
Interpolation and Polynomial Bases
Updating the Interpolation Set
A Method Based on Minimum-Change Updating
9.3 Coordinate and Pattern-Search Methods
Coordinate Search Method
Pattern-Search Methods
9.4 A Conjugate-Direction Method
9.5 Nelder-Mead Method
9.6 Implicit Filtering
Notes and References
Exercises
10 Least-Squares Problems
10.1 Background
10.2 Linear Least-Squares Problems
10.3 Algorithms for Nonlinear Least-Squares Problems
The Gauss-Newton Method
Convergence of the Gauss-Newton Method
The Levenberg-Marquardt Method
Implementation of the Levenberg-Marquardt Method
Convergence of the Levenberg-Marquardt Method
Methods for Large-Residual Problems
10.4 Orthogonal Distance Regression
Notes and References
Exercises
11 Nonlinear Equations
11.1 Local Algorithms
Newton's Method for Nonlinear Equations
Inexact Newton Methods
Broyden's Methods
Tensor Methods
11.2 Practical Methods
Merit Functions
Line Search Methods
Trust-Region Methods
11.3 Continuation/Homotopy Methods
Motivation
Practical Continuation Methods
Notes and References
Exercises
12 Theory of Constrained Optimization
Local and Global Solutions
Smoothness
12.1 Examples
A Single Equality Constraint
A Single Inequality Constraint
Two Inequality Constraints
12.2 Tangent Cone and Constraint Qualifications
12.3 First-Order Optimality Conditions
12.4 First-Order Optimality Conditions: Proof
Relating the Tangent Cone and the First-Order Feasible Direction Set
A Fundamental Necessary Condition
Farkas' Lemma
Proof of Theorem 12.1
12.5 Second-Order Conditions
Second-Order Conditions and Projected Hessians
12.6 Other Constraint Qualifications
12.7 A Geometric Viewpoint
12.8 Lagrange Multipliers and Sensitivity
12.9 Duality
Notes and References
Exercises
13 Linear Programming: The Simplex Method
Linear Programming
13.1 Optimality and Duality
Optimality Conditions
The Dual Problem
13.2 Geometry of the Feasible Set
Bases and Basic Feasible Points
Vertices of the Feasible Polytope
13.3 The Simplex Method
Outline
A Single Step of the Method
13.4 Linear Algebra in the Simplex Method
13.5 Other Important Details
Pricing and Selection of the Entering Index
Starting the Simplex Method
Degenerate Steps and Cycling
13.6 The Dual Simplex Method
13.7 Presolving
13.8 Where Does the Simplex Method Fit?
Notes and References
Exercises
14 Linear Programming: Interior-Point Methods
14.1 Primal-Dual Methods
Outline
The Central Path
Central Path Neighborhoods and Path-Following Methods
14.2 Practical Primal-Dual Algorithms
Corrector and Centering Steps
Step Lengths
Starting Point
A Practical Algorithm
Solving Linear Systems
14.3 Other Primal-Dual Algorithms and Extensions
Other Path-Following Methods
Potential-Reduction Methods
Extensions
14.4 Perspectives and Software
Notes and References
Exercises
15 Fundamentals of Algorithms for Nonlinear Constrained Optimization
15.1 Categorizing Optimization Algorithms
15.2 The Combinatorial Difficulty of Inequality-Constrained Problems
15.3 Elimination of Variables
Simple Elimination using Linear Constraints
General Reduction Strategies for Linear Constraints
Effect of Inequality Constraints
15.4 Merit Functions and Filters
Merit Functions
Filters
15.5 The Maratos Effect
15.6 Second-Order Correction and Nonmonotone Techniques
Nonmonotone (Watchdog) Strategy
Notes and References
Exercises
16 Quadratic Programming
16.1 Equality-Constrained Quadratic Programs
Properties of Equality-Constrained QPs
16.2 Direct Solution of the KKT System
Factoring the Full KKT System
Schur-Complement Method
Null-Space Method
16.3 Iterative Solution of the KKT System
CG Applied to the Reduced System
The Projected CG Method
16.4 Inequality-Constrained Problems
Optimality Conditions for Inequality-Constrained Problems
Degeneracy
16.5 Active-Set Methods for Convex QPs
Specification of the Active-Set Method for Convex QP
Further Remarks on the Active-Set Method
Finite Termination of Active-Set Algorithm on Strictly Convex QPs
Updating Factorizations
16.6 Interior-Point Methods
Solving the Primal-Dual System
Step Length Selection
A Practical Primal-Dual Method
16.7 The Gradient Projection Method
Cauchy Point Computation
Subspace Minimization
16.8 Perspectives and Software
Notes and References
Exercises
17 Penalty and Augmented Lagrangian Methods
17.1 The Quadratic Penalty Method
Motivation
Algorithmic Framework
Convergence of the Quadratic Penalty Method
Ill Conditioning and Reformulations
17.2 Nonsmooth Penalty Functions
A Practical ℓ1 Penalty Method
A General Class of Nonsmooth Penalty Methods
17.3 Augmented Lagrangian Method: Equality Constraints
Motivation and Algorithmic Framework
Properties of the Augmented Lagrangian
17.4 Practical Augmented Lagrangian Methods
Bound-Constrained Formulation
Linearly Constrained Formulation
Unconstrained Formulation
17.5 Perspectives and Software
Notes and References
Exercises
18 Sequential Quadratic Programming
18.1 Local SQP Method
SQP Framework
Inequality Constraints
18.2 Preview of Practical SQP Methods
IQP and EQP
Enforcing Convergence
18.3 Algorithmic Development
Handling Inconsistent Linearizations
Full Quasi-Newton Approximations
Reduced-Hessian Quasi-Newton Approximations
Merit Functions
Second-Order Correction
18.4 A Practical Line Search SQP Method
18.5 Trust-Region SQP Methods
A Relaxation Method for Equality-Constrained Optimization
Sℓ1QP (Sequential ℓ1 Quadratic Programming)
Sequential Linear-Quadratic Programming (SLQP)
A Technique for Updating the Penalty Parameter
18.6 Nonlinear Gradient Projection
18.7 Convergence Analysis
Rate of Convergence
18.8 Perspectives and Software
Notes and References
Exercises
19 Interior-Point Methods for Nonlinear Programming
19.1 Two Interpretations
19.2 A Basic Interior-Point Algorithm
19.3 Algorithmic Development
Primal vs. Primal-Dual System
Solving the Primal-Dual System
Updating the Barrier Parameter
Handling Nonconvexity and Singularity
Step Acceptance: Merit Functions and Filters
Quasi-Newton Approximations
Feasible Interior-Point Methods
19.4 A Line Search Interior-Point Method
19.5 A Trust-Region Interior-Point Method
An Algorithm for Solving the Barrier Problem
Step Computation
Lagrange Multiplier Estimates and Step Acceptance
Description of a Trust-Region Interior-Point Method
19.6 The Primal Log-Barrier Method
19.7 Global Convergence Properties
Failure of the Line Search Approach
Modified Line Search Methods
Global Convergence of the Trust-Region Approach
19.8 Superlinear Convergence
19.9 Perspectives and Software
Notes and References
Exercises
A Background Material
A.1 Elements of Linear Algebra
Vectors and Matrices
Norms
Subspaces
Eigenvalues, Eigenvectors, and the Singular-Value Decomposition
Determinant and Trace
Matrix Factorizations: Cholesky, LU, QR
Symmetric Indefinite Factorization
Sherman-Morrison-Woodbury Formula
Interlacing Eigenvalue Theorem
Error Analysis and Floating-Point Arithmetic
Conditioning and Stability
A.2 Elements of Analysis, Geometry, Topology
Sequences
Rates of Convergence
Topology of the Euclidean Space Rn
Convex Sets in Rn
Continuity and Limits
Derivatives
Directional Derivatives
Mean Value Theorem
Implicit Function Theorem
Order Notation
Root-Finding for Scalar Equations
B A Regularization Procedure
References
Index
