Applied Iterative Analysis (English Edition)



By Yuan Jinyun (袁锦昀)


Product Information



Publisher: Science Press (科学出版社)
ISBN: 9787030420787
Edition: 1
Product code: 11580288
Binding: Hardcover
Series: Series in Information and Computational Science (信息与计算科学丛书)
Format: 16mo (16开)
Publication date: 2014-11-01
Paper: Offset paper
Pages: 268
Text language: English

Book Description

About the Book

Professor Yuan Jinyun is a distinguished Chinese scholar based in Brazil. Born in 1957 in Tangliu, Xinghua, Jiangsu Province, he entered Nanjing Institute of Technology in 1977. He is a tenured professor in the Department of Mathematics of the Federal University of Paraná, Brazil, and director of its Institute of Industrial Mathematics; vice president of the Brazilian Society of Computational and Applied Mathematics; president of the Paraná chapter of the Brazilian Mathematical Society; head of applied and computational mathematics on the final mathematics review panel of the research funding agency of the Brazilian Ministry of Science and Technology; and a permanent member of the mathematics panel of the Paraná research foundation. Applied Iterative Analysis (English Edition, hardcover) is his English-language monograph on applied iterative analysis.

Contents

Preface to the Series in Information and Computational Science
Preface
Chapter 1 Introduction
1.1 Background in linear algebra
1.1.1 Basic symbols, notations, and definitions
1.1.2 Vector norm
1.1.3 Matrix norm
1.1.4 Spectral radii
1.2 Spectral results of matrix
1.3 Special matrices
1.3.1 Reducible and irreducible matrices
1.3.2 Diagonally dominant matrices
1.3.3 Nonnegative matrices
1.3.4 p-cyclic matrices
1.3.5 Toeplitz, Hankel, Cauchy, Cauchy-like and Hessenberg matrices
1.4 Matrix decomposition
1.4.1 LU decomposition
1.4.2 Singular value decomposition
1.4.3 Conjugate decomposition
1.4.4 QZ decomposition
1.4.5 ST decomposition
1.5 Exercises
Chapter 2 Basic Methods and Convergence
2.1 Basic concepts
2.2 The Jacobi method
2.3 The Gauss-Seidel method
2.4 The SOR method
2.5 M-matrices and splitting methods
2.5.1 M-matrix
2.5.2 Splitting methods
2.5.3 Comparison theorems
2.5.4 Multi-splitting methods
2.5.5 Generalized Ostrowski-Reich theorem
2.6 Error analysis of iterative methods
2.7 Iterative refinement
2.8 Exercises
Chapter 3 Non-stationary Methods
3.1 Conjugate gradient methods
3.1.1 Steepest descent method
3.1.2 Conjugate gradient method
3.1.3 Preconditioned conjugate gradient method
3.1.4 Generalized conjugate gradient method
3.1.5 Theoretical results on the conjugate gradient method
3.1.6 Generalized product-type methods based on Bi-CG
3.1.7 Inexact preconditioned conjugate gradient method
3.2 Lanczos method
3.3 GMRES method and QMR method
3.3.1 GMRES method
3.3.2 QMR method
3.3.3 Variants of the QMR method
3.4 Direct projection method
3.4.1 Theory of the direct projection method
3.4.2 Direct projection algorithms
3.5 Semi-conjugate direction method
3.5.1 Semi-conjugate vectors
3.5.2 Left conjugate direction method
3.5.3 One possible way to find left conjugate vector set
3.5.4 Remedy for breakdown
3.5.5 Relation with Gaussian elimination
3.6 Krylov subspace methods
3.7 Exercises
Chapter 4 Iterative Methods for Least Squares Problems
4.1 Introduction
4.2 Basic iterative methods
4.3 Block SOR methods
4.3.1 Block SOR algorithms
4.3.2 Convergence and optimal factors
4.3.3 Example
4.4 Preconditioned conjugate gradient methods
4.5 Generalized least squares problems
4.5.1 Block SOR methods
4.5.2 Preconditioned conjugate gradient method
4.5.3 Comparison
4.5.4 SOR-like methods
4.6 Rank deficient problems
4.6.1 Augmented system of normal equation
4.6.2 Block SOR algorithms
4.6.3 Convergence and optimal factor
4.6.4 Preconditioned conjugate gradient method
4.6.5 Comparison results
4.7 Exercises
Chapter 5 Preconditioners
5.1 LU decomposition and orthogonal transformations
5.1.1 Gilbert and Peierls algorithm for LU decomposition
5.1.2 Orthogonal transformations
5.2 Stationary preconditioners
5.2.1 Jacobi preconditioner
5.2.2 SSOR preconditioner
5.3 Incomplete factorization
5.3.1 Point incomplete factorization
5.3.2 Modified incomplete factorization
5.3.3 Block incomplete factorization
5.4 Diagonally dominant preconditioner
5.5 Preconditioner for least squares problems
5.5.1 Preconditioner by LU decomposition
5.5.2 Preconditioner by direct projection method
5.5.3 Preconditioner by QR decomposition
5.6 Exercises
Chapter 6 Singular Linear Systems
6.1 Introduction
6.2 Properties of singular systems
6.3 Splitting methods for singular systems
6.4 Nonstationary methods for singular systems
6.4.1 Symmetric and positive semidefinite systems
6.4.2 General systems
6.5 Exercises
Bibliography
Index

Excerpt

Chapter 1
Introduction
In this chapter, we first give an overview of relevant concepts and some basic results of matrix linear algebra. The material contained here will be used throughout the book.
1.1 Background in linear algebra
1.1.1 Basic symbols, notations, and definitions
Let $\mathbb{R}$ be the set of real numbers; $\mathbb{C}$, the set of complex numbers; and $i \equiv \sqrt{-1}$. The symbol $\mathbb{R}^n$ denotes the set of real $n$-vectors and $\mathbb{C}^n$, the set of complex $n$-vectors. $\alpha$, $\beta$, $\gamma$, etc., denote real numbers or constants. Vectors are almost always column vectors. We use $\mathbb{R}^{m \times n}$ to denote the linear vector space of all $m$-by-$n$ real matrices and $\mathbb{C}^{m \times n}$, the linear vector space of all $m$-by-$n$ complex matrices. The symbol $\dim(S)$ denotes the dimension of a linear vector space $S$.
The upper case letters $A$, $B$, $C$, etc., denote matrices, and the lower case letters $x$, $y$, $z$, etc., denote vectors.
Let $(A)_{ij} = a_{ij}$ denote the $(i,j)$th entry of a matrix $A = (a_{ij})$. For any $n$-by-$n$ matrix, the indices $i$, $j$ usually run from 1 to $n$, but sometimes run from 0 to $n-1$ for convenience. Let $A^T$ be the transpose of $A$; $A^*$, the conjugate transpose of $A$; $\operatorname{rank}(A)$, the rank of $A$; and $\operatorname{tr}(A)$, the trace of $A$. An $n$-by-$n$ diagonal matrix is denoted by $\operatorname{diag}(d_1, d_2, \ldots, d_n)$.
We use the notation $I_n$ for the $n$-by-$n$ identity matrix. When there is no ambiguity, we shall write it as $I$. The symbol $e_j$ denotes the $j$th unit vector, i.e., the $j$th column vector of $I$.
A matrix $A \in \mathbb{R}^{n \times n}$ is symmetric if $A^T = A$, and skew-symmetric if $A^T = -A$. A symmetric matrix $A$ is positive definite (semidefinite) if $x^T A x > 0$ ($\geq 0$) for any nonzero vector $x \in \mathbb{R}^n$. A matrix $A \in \mathbb{C}^{n \times n}$ is Hermitian if $A^* = A$. A Hermitian matrix $A$ is positive definite (semidefinite) if $x^* A x > 0$ ($\geq 0$) for any nonzero vector $x \in \mathbb{C}^n$.
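To make the definition concrete, here is a minimal NumPy sketch (an illustration of ours, not code from the book): for a symmetric matrix, a Cholesky factorization succeeds exactly when the matrix is positive definite, which gives a practical test.

```python
# A minimal sketch with NumPy (an illustration, not from the book).
# For a symmetric matrix, Cholesky factorization succeeds exactly when
# the matrix is positive definite, which gives a practical test.
import numpy as np

def is_positive_definite(A: np.ndarray) -> bool:
    """Return True if A is symmetric positive definite."""
    if not np.allclose(A, A.T):        # check A^T = A first
        return False
    try:
        np.linalg.cholesky(A)          # fails iff A is not positive definite
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # x^T A x > 0 for all nonzero x
print(is_positive_definite(A))         # True
```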
A number $\lambda \in \mathbb{C}$ is an eigenvalue of $A \in \mathbb{C}^{n \times n}$ if there exists a nonzero vector $x \in \mathbb{C}^n$ such that $Ax = \lambda x$, where $x$ is called the eigenvector of $A$ associated with $\lambda$. It is well known that the eigenvalues of all Hermitian matrices are real. Let $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the smallest and largest eigenvalues of a Hermitian matrix $A$, respectively. We use $\rho(A) = \max_i |\lambda_i(A)|$ to denote the spectral radius of $A$, where $\lambda_i(A)$ runs through the spectrum of $A$. Recall that the spectrum of $A$ is the set of all the eigenvalues of $A$.
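As a quick illustration (ours, not the book's), the spectral radius can be computed directly from the spectrum with NumPy:

```python
# Illustrative only (not from the book): rho(A) = max |lambda_i(A)|.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
eigenvalues = np.linalg.eigvals(A)     # the spectrum of A: {-1, -2}
rho = max(abs(eigenvalues))            # spectral radius
print(rho)                             # 2.0
```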
We use $\|\cdot\|$ to denote a norm of a vector or matrix. The symbols $\|\cdot\|_1$, $\|\cdot\|_2$, $\|\cdot\|_\infty$ denote the $p$-norm with $p = 1, 2, \infty$, respectively. We also use $\kappa_\alpha(A)$, defined by $\kappa_\alpha(A) = \|A\|_\alpha \|A^{-1}\|_\alpha$, to denote the condition number of the matrix $A$. When $\alpha$ is omitted, the definition applies to any norm, but the most commonly used norm is the 2-norm.
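A small sketch of the condition-number definition with NumPy (illustrative; `np.linalg.cond` is the library shortcut for the same quantity):

```python
# Illustrative sketch: kappa_2(A) = ||A||_2 ||A^{-1}||_2 computed from the
# definition, then via NumPy's built-in helper np.linalg.cond.
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 100.0]])
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
print(kappa)                           # 100.0
print(np.linalg.cond(A, 2))            # 100.0, same value
```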
We use $\mathcal{N}(A)$ and $\mathcal{R}(A)$ to represent the null space and image space (or range) of a given matrix $A \in \mathbb{R}^{m \times n}$, respectively, where $\mathcal{N}(A) = \{x \in \mathbb{R}^n : Ax = 0\}$ and $\mathcal{R}(A) = \{y \in \mathbb{R}^m : y = Ax \text{ for some } x \in \mathbb{R}^n\}$.
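A short NumPy sketch (ours, not the book's) that extracts bases for $\mathcal{N}(A)$ and $\mathcal{R}(A)$ from the singular value decomposition; the rank tolerance 1e-12 is an arbitrary illustrative choice:

```python
# A sketch (not from the book): bases for N(A) and R(A) from the SVD.
# scipy.linalg.null_space implements the same idea for the null space.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank 1, so dim N(A) = 3 - 1 = 2
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))          # tolerance chosen for illustration
null_basis = Vt[rank:].T               # columns span N(A) = {x : Ax = 0}
range_basis = U[:, :rank]              # columns span R(A) = {Ax : x in R^n}
print(np.allclose(A @ null_basis, 0))  # True
```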
For matrix iterative analysis, we need some tools, such as vector norms, matrix norms and their extensions, and spectral radii.
1.1.2 Vector norm
In fact, a norm is an extension of the length of a vector in $\mathbb{R}^2$ or of the absolute value in $\mathbb{R}$. It is well known that for all $x \in \mathbb{R}$, the absolute value $|x|$ satisfies the following properties:
(1) $|x| \geq 0$, and $|x| = 0$ if and only if $x = 0$;
(2) $|\alpha x| = |\alpha|\,|x|$ for all $\alpha \in \mathbb{R}$;
(3) $|x + y| \leq |x| + |y|$ for all $x, y \in \mathbb{R}$.
We generalize the three properties above to the vector space $\mathbb{R}^n$ as follows.
Definition 1.1.1 $\mu : \mathbb{R}^n \to \mathbb{R}$ is a vector norm on $\mathbb{R}^n$ if
(1) $\mu(x) \geq 0$ for all $x \in \mathbb{R}^n$, and $\mu(x) = 0$ if and only if $x = 0$;
(2) $\mu(\alpha x) = |\alpha|\,\mu(x)$ for all $\alpha \in \mathbb{R}$ and $x \in \mathbb{R}^n$;
(3) $\mu(x + y) \leq \mu(x) + \mu(y)$ for all $x, y \in \mathbb{R}^n$.
Example 1.1.1 There are three common norms on $\mathbb{R}^n$, defined by
$\|x\|_1 = \sum_{i=1}^n |x_i|, \qquad \|x\|_2 = \Big(\sum_{i=1}^n x_i^2\Big)^{1/2}, \qquad \|x\|_\infty = \max_{1 \leq i \leq n} |x_i|.$
There are some important elementary consequences of Definition 1.1.1 of the vector norm.
Proposition 1.1.1 For any vector norm $\mu$ on $\mathbb{R}^n$ and all $x, y \in \mathbb{R}^n$,
$|\mu(x) - \mu(y)| \leq \mu(x - y), \quad (1.1.1)$
$|\mu(x) - \mu(y)| \leq \mu(x + y). \quad (1.1.2)$
Proof Since $x = (x - y) + y$, the triangle inequality gives $\mu(x) \leq \mu(x - y) + \mu(y)$. Then,
$\mu(x) - \mu(y) \leq \mu(x - y). \quad (1.1.3)$
By interchanging $x$ and $y$, we can obtain
$\mu(y) - \mu(x) \leq \mu(y - x) = \mu(x - y). \quad (1.1.4)$
The result (1.1.1) follows from (1.1.3) and (1.1.4) together. We can prove (1.1.2) by replacing $y$ with $-y$ in (1.1.1).
The 2-norm is the natural generalization of the Euclidean length of a vector in $\mathbb{R}^2$ or $\mathbb{R}^3$ and is called the Euclidean norm. The $\infty$-norm is also sometimes called the maximum norm or the Chebyshev norm. In fact, these are special cases of the $p$-norm, defined as
$\|x\|_p = \Big(\sum_{i=1}^n |x_i|^p\Big)^{1/p}, \quad p \geq 1.$
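A quick NumPy check of these definitions (illustrative, not from the book):

```python
# A quick check of the norm definitions above with NumPy (illustrative).
import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x, 1))            # 7.0 : sum of |x_i|
print(np.linalg.norm(x, 2))            # 5.0 : Euclidean length
print(np.linalg.norm(x, np.inf))       # 4.0 : max |x_i| (Chebyshev norm)
print(np.linalg.norm(x, 3))            # ~4.498 : general p-norm with p = 3
```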
Sometimes the usual norms are not enough for our research, and we have to construct a new norm. One useful technique for constructing new norms from a well-known norm is given in the following theorem.
Theorem 1.1.2 Let $\nu$ be a norm on $\mathbb{R}^m$ and let $A \in \mathbb{R}^{m \times n}$ have linearly independent columns. Then $\mu(x) = \nu(Ax)$ is a norm on $\mathbb{R}^n$.
The proof is easy: just check the properties of the norm in Definition 1.1.1. We leave it to the reader. This technique also works for matrix norms in the next subsection.
Corollary 1.1.3 Let $A \in \mathbb{R}^{n \times n}$ be symmetric and positive definite. Then $\mu(x) = \sqrt{x^T A x}$ is a norm on $\mathbb{R}^n$, denoted $\|x\|_A$, and called the weighted norm (with $A$).
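A small sketch (ours, not the book's) of the weighted norm: factoring $A = LL^T$ via Cholesky exhibits it as an instance of Theorem 1.1.2 with $\mu(x) = \|L^T x\|_2$.

```python
# A sketch (not from the book): the weighted norm ||x||_A = sqrt(x^T A x)
# for symmetric positive definite A. Factoring A = L L^T shows it is an
# instance of Theorem 1.1.2 with mu(x) = ||L^T x||_2.
import numpy as np

def weighted_norm(A: np.ndarray, x: np.ndarray) -> float:
    """||x||_A, assuming A is symmetric positive definite."""
    return float(np.sqrt(x @ A @ x))

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, -1.0])
L = np.linalg.cholesky(A)              # A = L L^T
print(weighted_norm(A, x))             # sqrt(2) ~ 1.4142
print(np.linalg.norm(L.T @ x, 2))      # identical: ||L^T x||_2
```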
When we study iterative methods, we need to know whether the sequence they generate converges to the solution. For this purpose, we shall give some concepts concerning limits of sequences in vector spaces.
Definition 1.1.2 Let $\{x^{(k)}\}$ be a sequence of $n$-vectors, and let $x \in \mathbb{R}^n$. …
