Divided differences are a recursive division process. In numerical analysis, they can be used to compute the coefficients of a polynomial interpolant written in Newton polynomial form. In calculus, divided differences and derivatives are together referred to as difference quotients; a divided difference measures the average rate of change of a function over an interval[1][2][3].
Divided differences are also an algorithm: Charles Babbage's difference engine, an early mechanical computer proposed in a paper he published in 1822 and historically intended for computing tables of logarithms and trigonometric functions, was designed to use this algorithm in its operation[4].
Definition
Given n+1 data points
$$(x_{0},y_{0}),\ldots ,(x_{n},y_{n})$$
the forward divided differences are defined as:
$$\begin{aligned}[y_{\nu }]&=y_{\nu },\quad \nu \in \{0,\ldots ,n\}\\[y_{\nu },\ldots ,y_{\nu +j}]&={\frac {[y_{\nu +1},\ldots ,y_{\nu +j}]-[y_{\nu },\ldots ,y_{\nu +j-1}]}{x_{\nu +j}-x_{\nu }}},\quad \nu \in \{0,\ldots ,n-j\},\ j\in \{1,\ldots ,n\}\end{aligned}$$
The backward divided differences are defined as:
$$\begin{aligned}[y_{\nu }]&=y_{\nu },\quad \nu \in \{0,\ldots ,n\}\\[y_{\nu },\ldots ,y_{\nu -j}]&={\frac {[y_{\nu },\ldots ,y_{\nu -j+1}]-[y_{\nu -1},\ldots ,y_{\nu -j}]}{x_{\nu }-x_{\nu -j}}},\quad \nu \in \{j,\ldots ,n\},\ j\in \{1,\ldots ,n\}\end{aligned}$$
Notation
Suppose the data points are given by a function ƒ,
$$(x_{0},f(x_{0})),\ldots ,(x_{n},f(x_{n}))$$
Then its divided differences can be written:
$$\begin{aligned}f[x_{\nu }]&=f(x_{\nu }),\qquad \nu \in \{0,\ldots ,n\}\\f[x_{\nu },\ldots ,x_{\nu +j}]&={\frac {f[x_{\nu +1},\ldots ,x_{\nu +j}]-f[x_{\nu },\ldots ,x_{\nu +j-1}]}{x_{\nu +j}-x_{\nu }}},\quad \nu \in \{0,\ldots ,n-j\},\ j\in \{1,\ldots ,n\}\end{aligned}$$
There are other notations for the divided difference of the function ƒ on the nodes x_0, ..., x_n, such as:
$$\begin{matrix}[x_{0},\ldots ,x_{n}]f\\ [x_{0},\ldots ,x_{n};f]\\ D[x_{0},\ldots ,x_{n}]f\end{matrix}$$
Examples
For ν = 0:
$$\begin{aligned}[y_{0}]&=y_{0}\\[y_{0},y_{1}]&={\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}\\[y_{0},y_{1},y_{2}]&={\frac {[y_{1},y_{2}]-[y_{0},y_{1}]}{x_{2}-x_{0}}}\\[y_{0},y_{1},y_{2},y_{3}]&={\frac {[y_{1},y_{2},y_{3}]-[y_{0},y_{1},y_{2}]}{x_{3}-x_{0}}}\\[y_{0},y_{1},\dots ,y_{n}]&={\frac {[y_{1},y_{2},\dots ,y_{n}]-[y_{0},y_{1},\dots ,y_{n-1}]}{x_{n}-x_{0}}}\end{aligned}$$
To make the recursion clearer, the computation of the divided differences can be laid out in tabular form[5]:
$$\begin{matrix}x_{0}&[y_{0}]=y_{0}&&&\\&&[y_{0},y_{1}]&&\\x_{1}&[y_{1}]=y_{1}&&[y_{0},y_{1},y_{2}]&\\&&[y_{1},y_{2}]&&[y_{0},y_{1},y_{2},y_{3}]\\x_{2}&[y_{2}]=y_{2}&&[y_{1},y_{2},y_{3}]&\\&&[y_{2},y_{3}]&&\\x_{3}&[y_{3}]=y_{3}&&&\end{matrix}$$
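The tabular computation above can be sketched in Python. This is a minimal illustration of the forward recurrence, not code from any particular library; the function name and layout are illustrative. Exact rational arithmetic is used so the example is free of rounding.

```python
from fractions import Fraction

def divided_differences(xs, ys):
    """Return the divided-difference tableau as a list of columns.

    Column j holds [y_0,...,y_j], [y_1,...,y_{1+j}], ..., computed by the
    forward recurrence [y_v,...,y_{v+j}] =
    ([y_{v+1},...,y_{v+j}] - [y_v,...,y_{v+j-1}]) / (x_{v+j} - x_v).
    """
    n = len(xs)
    table = [list(ys)]  # column 0: [y_v] = y_v
    for j in range(1, n):
        prev = table[-1]
        col = [(prev[v + 1] - prev[v]) / (xs[v + j] - xs[v])
               for v in range(n - j)]
        table.append(col)
    return table

# Example: y = x^2 on the nodes 0, 1, 2, 3.  For a degree-2 polynomial the
# second-order column is constant (here 1) and the third-order column is 0.
xs = [Fraction(v) for v in (0, 1, 2, 3)]
ys = [x * x for x in xs]
table = divided_differences(xs, ys)
```

Each column has one fewer entry than the last, matching the triangular layout of the tableau; the top entries `table[j][0]` are exactly the Newton coefficients [y_0, ..., y_j].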
Expanded form
By mathematical induction one can prove[6]:
$$\begin{aligned}[y_{0}]&=y_{0}\\[y_{0},y_{1}]&={\frac {y_{0}}{x_{0}-x_{1}}}+{\frac {y_{1}}{x_{1}-x_{0}}}\\[y_{0},y_{1},y_{2}]&={\frac {y_{0}}{(x_{0}-x_{1})(x_{0}-x_{2})}}+{\frac {y_{1}}{(x_{1}-x_{0})(x_{1}-x_{2})}}+{\frac {y_{2}}{(x_{2}-x_{0})(x_{2}-x_{1})}}\\[y_{0},y_{1},\dots ,y_{n}]&=\sum _{j=0}^{n}{\frac {y_{j}}{\prod _{k=0,\,k\neq j}^{n}(x_{j}-x_{k})}}\end{aligned}$$
This formula exhibits the symmetry of divided differences.[7] It follows that the value is unchanged under any reordering of the data points.[8]
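The symmetry claim can be checked numerically from the expanded form. The sketch below is illustrative only (the function name is not from any library); exact rationals avoid floating-point noise.

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def divided_difference(xs, ys):
    """Evaluate [y_0, ..., y_n] directly from the expanded (symmetric) form."""
    n = len(xs)
    return sum(ys[j] / prod(xs[j] - xs[k] for k in range(n) if k != j)
               for j in range(n))

xs = [Fraction(v) for v in (0, 1, 3, 4)]
ys = [x ** 3 for x in xs]
base = divided_difference(xs, ys)

# Every permutation of the data points yields the same value.
assert all(
    divided_difference([xs[i] for i in p], [ys[i] for i in p]) == base
    for p in permutations(range(4))
)
```

For y = x^3 on four nodes the third-order divided difference equals the leading coefficient, 1, regardless of which nodes are chosen.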
Properties
Symmetry: if
$$\sigma :\{0,\dots ,n\}\to \{0,\dots ,n\}$$
is a permutation, then
$$f[x_{0},\dots ,x_{n}]=f[x_{\sigma (0)},\dots ,x_{\sigma (n)}]$$
Linearity:
$$\begin{aligned}(f+g)[x_{0},\dots ,x_{n}]&=f[x_{0},\dots ,x_{n}]+g[x_{0},\dots ,x_{n}]\\(\lambda \cdot f)[x_{0},\dots ,x_{n}]&=\lambda \cdot f[x_{0},\dots ,x_{n}]\end{aligned}$$
Leibniz rule:
$$(f\cdot g)[x_{0},\dots ,x_{n}]=f[x_{0}]\cdot g[x_{0},\dots ,x_{n}]+f[x_{0},x_{1}]\cdot g[x_{1},\dots ,x_{n}]+\dots +f[x_{0},\dots ,x_{n}]\cdot g[x_{n}]$$
Mean value theorem for divided differences (for n-times differentiable f):
$$\exists \xi \in (\min\{x_{0},\dots ,x_{n}\},\max\{x_{0},\dots ,x_{n}\})\quad f[x_{0},\dots ,x_{n}]={\frac {f^{(n)}(\xi )}{n!}}$$
Equivalent definition
By swapping (x_0, y_0) with (x_{n-1}, y_{n-1}) in the n-th order divided difference, an equivalent definition is obtained:
$$\begin{aligned}[y_{0},y_{1},\dots ,y_{n-1},y_{n}]&={\frac {[y_{1},y_{2},\dots ,y_{n}]-[y_{0},y_{1},\dots ,y_{n-1}]}{x_{n}-x_{0}}}\\&={\frac {[y_{1},\dots ,y_{n-2},y_{0},y_{n}]-[y_{n-1},y_{1},\dots ,y_{n-2},y_{0}]}{x_{n}-x_{n-1}}}\\&={\frac {[y_{0},\dots ,y_{n-2},y_{n}]-[y_{0},y_{1},\dots ,y_{n-1}]}{x_{n}-x_{n-1}}}\end{aligned}$$
This definition implies a different order of computation:
$$\begin{aligned}[y_{0}]&=y_{0}\\[y_{0},y_{1}]&={\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}\\[y_{0},y_{1},y_{2}]&={\frac {[y_{0},y_{2}]-[y_{0},y_{1}]}{x_{2}-x_{1}}}\\[y_{0},y_{1},y_{2},y_{3}]&={\frac {[y_{0},y_{1},y_{3}]-[y_{0},y_{1},y_{2}]}{x_{3}-x_{2}}}\\[y_{0},y_{1},\dots ,y_{n}]&={\frac {[y_{0},\dots ,y_{n-2},y_{n}]-[y_{0},y_{1},\dots ,y_{n-1}]}{x_{n}-x_{n-1}}}\end{aligned}$$
The computation under this definition can likewise be laid out in tabular form[9]:
$$\begin{matrix}x_{0}&[y_{0}]=y_{0}&&&\\&&[y_{0},y_{1}]&&\\x_{1}&[y_{1}]=y_{1}&&[y_{0},y_{1},y_{2}]&\\&&[y_{0},y_{2}]&&[y_{0},y_{1},y_{2},y_{3}]\\x_{2}&[y_{2}]=y_{2}&&[y_{0},y_{1},y_{3}]&\\&&[y_{0},y_{3}]&&\\x_{3}&[y_{3}]=y_{3}&&&\end{matrix}$$
Newton interpolation
Illustration of Lemma V in Book III ("The System of the World") of the Mathematical Principles of Natural Philosophy. Six points H, I, K, L, M, N on the abscissa correspond to six values A, B, C, D, E, F; a polynomial taking these six values at the six points is generated, and the value R corresponding to an arbitrary point S is computed. Newton gave both the case of unit spacing and that of arbitrary spacing.
The Newton interpolation formula is named after Sir Isaac Newton. It was first published as Lemma V in Book III ("The System of the World") of his Mathematical Principles of Natural Philosophy of 1687; James Gregory in 1670 and Newton himself in 1676 had each already obtained the result independently. It is commonly described as the discrete counterpart of the continuous Taylor expansion.
Newton interpolation using divided differences is[10]:
$$\begin{aligned}N_{n}(x)&=y_{0}+(x-{x}_{0})\left([{y}_{0},{y}_{1}]+(x-{x}_{1})\left([{y}_{0},{y}_{1},{y}_{2}]+\cdots \right)\right)\\&=[y_{0}]+[{y}_{0},{y}_{1}](x-{x}_{0})+\cdots +[{y}_{0},{y}_{1},\ldots ,{y}_{n}]\prod _{k=0}^{n-1}(x-{x}_{k})\end{aligned}$$
Nodes such as (x_{n+1}, y_{n+1}) can be added at any point during the computation: only the new (n+1)-th order divided difference and its basis polynomial need to be computed, avoiding the recomputation of all basis functions that Lagrange interpolation would require.
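The nested (Horner-like) form of the Newton polynomial can be sketched as follows. This is an illustrative implementation, not from any library; the coefficient table is built in place, and exact rationals keep the example clean.

```python
from fractions import Fraction

def newton_coeffs(xs, ys):
    """Compute the coefficients [y_0], [y_0,y_1], ..., [y_0,...,y_n] in place."""
    coef = list(ys)
    for j in range(1, len(xs)):
        # Walk from the bottom so entries of order j-1 are still available.
        for v in range(len(xs) - 1, j - 1, -1):
            coef[v] = (coef[v] - coef[v - 1]) / (xs[v] - xs[v - j])
    return coef

def newton_eval(xs, coef, x):
    """Horner-like nested evaluation of the Newton form."""
    acc = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        acc = acc * (x - xs[k]) + coef[k]
    return acc

# Interpolate y = x^2 at the nodes 0, 1, 2 and evaluate at x = 5.
xs = [Fraction(v) for v in (0, 1, 2)]
coef = newton_coeffs(xs, [x * x for x in xs])
value = newton_eval(xs, coef, Fraction(5))
```

Appending a node only appends one coefficient and one factor to the product, which is exactly the incremental property described above.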
Using the expanded form of the divided differences[11]:
$$\begin{aligned}N_{n}(x)&=y_{0}+y_{0}{\frac {x-{x}_{0}}{x_{0}-x_{1}}}+y_{1}{\frac {x-{x}_{0}}{x_{1}-x_{0}}}+\cdots +\sum _{j=0}^{n}y_{j}{\frac {\prod _{k=0}^{n-1}(x-{x}_{k})}{\prod _{k=0,\,k\neq j}^{n}(x_{j}-x_{k})}}\end{aligned}$$
Taking second-order Newton interpolation as an example:
$$\begin{aligned}N_{2}(x)&=y_{0}\left(1+{\frac {x-{x}_{0}}{x_{0}-x_{1}}}+{\frac {(x-x_{0})(x-x_{1})}{(x_{0}-x_{1})(x_{0}-x_{2})}}\right)+y_{1}\left({\frac {x-{x}_{0}}{x_{1}-x_{0}}}+{\frac {(x-x_{0})(x-x_{1})}{(x_{1}-x_{0})(x_{1}-x_{2})}}\right)+y_{2}{\frac {(x-x_{0})(x-x_{1})}{(x_{2}-x_{0})(x_{2}-x_{1})}}\\&=y_{0}{\frac {(x-x_{1})(x-x_{2})}{(x_{0}-x_{1})(x_{0}-x_{2})}}+y_{1}{\frac {(x-x_{0})(x-x_{2})}{(x_{1}-x_{0})(x_{1}-x_{2})}}+y_{2}{\frac {(x-x_{0})(x-x_{1})}{(x_{2}-x_{0})(x_{2}-x_{1})}}\\&=\sum _{j=0}^{2}y_{j}\prod _{\begin{smallmatrix}k=0\\k\neq j\end{smallmatrix}}^{2}{\frac {x-{x}_{k}}{x_{j}-x_{k}}}\end{aligned}$$
Forward differences
When the data points are equally spaced, this special case is called the "forward difference". Forward differences are easier to compute than general divided differences.
Definition
Given n+1 data points
$$(x_{0},y_{0}),\ldots ,(x_{n},y_{n})$$
satisfying
$$x_{i}=x_{0}+ih,\quad h>0,\ 0\leq i\leq n$$
the forward differences are defined as:
$$\begin{aligned}\triangle ^{0}y_{i}&=y_{i}\\\triangle ^{k}y_{i}&=\triangle ^{k-1}y_{i+1}-\triangle ^{k-1}y_{i},\quad 1\leq k\leq n-i\end{aligned}$$
The divided difference corresponding to the forward difference is[12]:
$$f[x_{0},x_{1},\ldots ,x_{k}]={\frac {1}{k!h^{k}}}\Delta ^{(k)}f(x_{0})$$
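This relation between divided differences and forward differences is easy to verify numerically. The sketch below is illustrative (helper names are not from any library); it builds the forward differences by repeated differencing and checks the identity for f(x) = x^3 on equispaced nodes.

```python
from fractions import Fraction
from math import factorial

def forward_differences(ys):
    """Return [Δ^0 y_0, Δ^1 y_0, ..., Δ^n y_0] by repeated differencing."""
    out, col = [ys[0]], list(ys)
    while len(col) > 1:
        col = [b - a for a, b in zip(col, col[1:])]
        out.append(col[0])
    return out

# f(x) = x^3 on the equispaced nodes 1, 3, 5, 7 (step h = 2).
h = 2
xs = [1 + i * h for i in range(4)]
ys = [x ** 3 for x in xs]
deltas = forward_differences(ys)

# Third-order divided difference of x^3 is its leading coefficient, 1,
# which should equal Δ^3 y_0 / (3! h^3).
dd3 = Fraction(deltas[3], factorial(3) * h ** 3)
```

Since each forward difference needs only a subtraction (no division by node spacings), the equispaced tableau is cheaper than the general divided-difference tableau.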
Example
$$\begin{matrix}y_{0}&&&\\&\triangle y_{0}&&\\y_{1}&&\triangle ^{2}y_{0}&\\&\triangle y_{1}&&\triangle ^{3}y_{0}\\y_{2}&&\triangle ^{2}y_{1}&\\&\triangle y_{2}&&\\y_{3}&&&\end{matrix}$$
Expanded form
The expanded form of the forward difference is a special case of the expanded form of the divided difference[13]:
$$\triangle ^{k}y_{i}=\sum _{j=0}^{k}(-1)^{k-j}{\binom {k}{j}}y_{i+j},\quad 0\leq k\leq n-i$$
where the expression
$${n \choose k}={\frac {(n)_{k}}{k!}}\quad \quad (n)_{k}=n(n-1)(n-2)\cdots (n-k+1)$$
is the binomial coefficient, (n)_k is the "falling factorial", and the empty product (n)_0 is defined to be 1.
Interpolation formula
The corresponding Newton interpolation formula is:
$$\begin{aligned}f(x)&=y_{0}+{\frac {x-x_{0}}{h}}\left(\Delta ^{1}y_{0}+{\frac {x-x_{0}-h}{2h}}\left(\Delta ^{2}y_{0}+\cdots \right)\right)\\&=y_{0}+\sum _{k=1}^{n}{\frac {\Delta ^{k}y_{0}}{k!h^{k}}}\prod _{i=0}^{k-1}(x-x_{0}-ih)\\&=y_{0}+\sum _{k=1}^{n}{\frac {\Delta ^{k}y_{0}}{k!}}\prod _{i=0}^{k-1}\left({\frac {x-x_{0}}{h}}-i\right)\\&=\sum _{k=0}^{n}{{\frac {x-x_{0}}{h}} \choose k}~\Delta ^{k}y_{0}\end{aligned}$$
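The binomial-coefficient form of this formula can be evaluated directly, since C(s, k) with s = (x − x0)/h is just the falling factorial s(s−1)⋯(s−k+1)/k!. The following is an illustrative sketch (names are not from any library) interpolating f(x) = x^2 on unit-spaced nodes.

```python
from fractions import Fraction

def newton_forward_eval(x0, h, deltas, x):
    """Evaluate sum_k C(s, k) * Δ^k y_0 with s = (x - x0) / h,
    accumulating the generalized binomial coefficient incrementally."""
    s = Fraction(x - x0, h)
    total, term = 0, Fraction(1)  # term = s(s-1)...(s-k+1) / k!
    for k, d in enumerate(deltas):
        total += term * d
        term = term * (s - k) / (k + 1)
    return total

# f(x) = x^2 at nodes 0, 1, 2: y = [0, 1, 4], so Δ^0..Δ^2 = [0, 1, 2].
result = newton_forward_eval(0, 1, [0, 1, 2], 5)
```

Because the interpolant of x^2 is exact, evaluating at x = 5 recovers 25 even though 5 lies outside the node range.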
Infinite series
Newton obtained the infinite series for ln(1+x) in 1665 and published it in the Method of Fluxions, written in 1671; he obtained the series for arcsin(x) and arctan(x) in 1666, and published those for sin(x), cos(x), arcsin(x), and e^x in De Analysi of 1669. Leibniz probably also obtained the series for sin(x), cos(x), and arctan(x) in 1673. Brook Taylor studied the method of "finite differences" in his 1715 book Methodus Incrementorum Directa et Inversa[14], where he presented the Taylor theorem he had obtained in 1712; James Gregory in 1670 and Leibniz in 1673 had already obtained this result, and Johann Bernoulli had published it in the Acta Eruditorum in 1694.
Taking the limit of Newton's divided differences as the step size tends to zero, he obtained:
$$\begin{aligned}f(x)&=f(a)+\lim _{h\to 0}\sum _{k=1}^{\infty }{\frac {\Delta _{h}^{k}[f](a)}{k!h^{k}}}\prod _{i=0}^{k-1}((x-a)-ih)\\&=f(a)+\sum _{k=1}^{\infty }{\frac {d^{k}}{dx^{k}}}f(a){\frac {(x-a)^{k}}{k!}}\end{aligned}$$
Divided differences of power functions
Writing exponentiation in ordinary function notation,
$$p_{n}(x)=x^{n}$$
we have:
$$\begin{aligned}p_{j}[x_{0},\dots ,x_{n}]&=0\qquad \forall j<n\\p_{n}[x_{0},\dots ,x_{n}]&=1\\p_{n+1}[x_{0},\dots ,x_{n}]&=x_{0}+\dots +x_{n}\\p_{n+m}[x_{0},\dots ,x_{n}]&=\sum _{k_{0}+\cdots +k_{n}=m}\prod _{t=0}^{n}x_{t}^{k_{t}}\end{aligned}$$
Here the notation for the homogeneous polynomial of degree m in the n+1 variables follows the multinomial theorem.
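The first three identities are easy to verify for small cases. This is an illustrative check (the helper name is not from any library) using the expanded form of the divided difference with exact rationals.

```python
from fractions import Fraction
from math import prod

def dd(xs, ys):
    """Divided difference [y_0, ..., y_n] via the expanded (Lagrange-style) form."""
    n = len(xs)
    return sum(ys[j] / prod(xs[j] - xs[k] for k in range(n) if k != j)
               for j in range(n))

xs = [Fraction(v) for v in (0, 1, 2, 4)]  # four nodes, so n = 3

low = dd(xs, [x ** 2 for x in xs])   # degree j < n  ->  0
top = dd(xs, [x ** 3 for x in xs])   # degree j = n  ->  1
nxt = dd(xs, [x ** 4 for x in xs])   # degree n + 1  ->  x_0 + ... + x_n
```

With these nodes, nxt should equal 0 + 1 + 2 + 4 = 7, the sum of the nodes.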
Taylor form
The Taylor series, or in principle any other series of functions, can be used to approximate divided differences. Writing the Taylor series as:
$$f=f(0)p_{0}+f'(0)p_{1}+{\frac {f''(0)}{2!}}p_{2}+\dots$$
the Taylor series of the divided difference is:
$$f[x_{0},\dots ,x_{n}]=f(0)p_{0}[x_{0},\dots ,x_{n}]+f'(0)p_{1}[x_{0},\dots ,x_{n}]+\dots +{\frac {f^{(n)}(0)}{n!}}p_{n}[x_{0},\dots ,x_{n}]+\dots$$
The first n terms vanish, because the order of the divided difference exceeds the degree of those polynomials. Hence the Taylor series of the divided difference essentially begins with:
$${\frac {f^{(n)}(0)}{n!}}$$
By the mean value theorem for divided differences, this is also the simplest approximation of the divided difference.
Peano form
The divided difference can also be expressed as
$$f[x_{0},\ldots ,x_{n}]={\frac {1}{n!}}\int _{x_{0}}^{x_{n}}f^{(n)}(t)B_{n-1}(t)\,dt$$
Here B_{n-1} is the B-spline of degree n−1 on the data points x_0, ..., x_n, and f^{(n)} is the n-th derivative of the function f. This is called the Peano form of the divided difference, and B_{n-1} is its Peano kernel.
Notes and references
^ Frank C. Wilson; Scott Adamson. Applied Calculus . Cengage Learning. 2008: 177 . ISBN 0-618-61104-5 .
^ Tamara Lefcourt Ruby; James Sellers; Lisa Korf; Jeremy Van Horn; Mike Munn. Kaplan AP Calculus AB & BC 2015. Kaplan Publishing. 2014: 237. ISBN 978-1-61865-686-5 .
^ Thomas Hungerford; Douglas Shaw. Contemporary Precalculus: A Graphing Approach. Cengage Learning. 2008: 211–212. ISBN 0-495-10833-2 .
^ Isaacson, Walter. The Innovators . Simon & Schuster. 2014: 20 . ISBN 978-1-4767-0869-0 .
^
$${\begin{array}{lcl}{\begin{matrix}x_{0}&x_{0}^{2}&&\\&&x_{0}+x_{1}&&\\x_{1}&x_{1}^{2}&&1&\\&&x_{1}+x_{2}&&0\\x_{2}&x_{2}^{2}&&1&\\&&x_{2}+x_{3}&&&\\x_{3}&x_{3}^{2}&&&\\\end{matrix}}\\\\{\begin{matrix}x_{0}&x_{0}^{n}&\\&&\sum _{i=0}^{n-1}x_{0}^{n-1-i}x_{1}^{i}\\x_{1}&x_{1}^{n}&\\\end{matrix}}\\\\{\begin{matrix}x_{0}&x_{0}^{n+1}&\\&&{\frac {x_{1}^{n+1}-x_{1}x_{0}^{n}+x_{1}x_{0}^{n}-x_{0}^{n+1}}{x_{1}-x_{0}}}=x_{1}{\frac {x_{1}^{n}-x_{0}^{n}}{x_{1}-x_{0}}}+x_{0}^{n}=x_{1}\sum _{i=0}^{n-1}x_{0}^{n-1-i}x_{1}^{i}+x_{0}^{n}=\sum _{i=0}^{n}x_{0}^{n-i}x_{1}^{i}\\x_{1}&x_{1}^{n+1}&\\\end{matrix}}\\\\{\begin{matrix}x_{0}&x_{0}^{n+1}&&\\&&\sum _{i=0}^{n}x_{0}^{n-i}x_{1}^{i}&\\x_{1}&x_{1}^{n+1}&&{\frac {\sum _{i=0}^{n}x_{1}^{n-i}x_{2}^{i}-\sum _{i=0}^{n}x_{0}^{n-i}x_{1}^{i}}{x_{2}-x_{0}}}={\frac {\sum _{i=0}^{n-1}x_{1}^{i}(x_{2}^{n-i}-x_{0}^{n-i})}{x_{2}-x_{0}}}=\sum _{i+j+k=n-1}{x_{0}^{i}x_{1}^{j}x_{2}^{k}}\\&&\sum _{i=0}^{n}x_{1}^{n-i}x_{2}^{i}&\\x_{2}&x_{2}^{n+1}&&\\\end{matrix}}\\\\{\begin{matrix}x_{0}&x_{0}^{n+1}&&&\\&&\sum _{i=0}^{n}x_{0}^{n-i}x_{1}^{i}&&\\x_{1}&x_{1}^{n+1}&&\sum _{i+j+k=n-1}{x_{0}^{i}x_{1}^{j}x_{2}^{k}}&\\&&\sum _{i=0}^{n}x_{1}^{n-i}x_{2}^{i}&&{\frac {\sum _{i+j+k=n-1}{x_{1}^{i}x_{2}^{j}x_{3}^{k}}-\sum _{i+j+k=n-1}{x_{0}^{i}x_{1}^{j}x_{2}^{k}}}{x_{3}-x_{0}}}=\sum _{i+j+k+l=n-2}{x_{0}^{i}x_{1}^{j}x_{2}^{k}x_{3}^{l}}\\x_{2}&x_{2}^{n+1}&&\sum _{i+j+k=n-1}{x_{1}^{i}x_{2}^{j}x_{3}^{k}}&\\&&\sum _{i=0}^{n}x_{2}^{n-i}x_{3}^{i}&&\\x_{3}&x_{3}^{n+1}&&&\\\end{matrix}}\\\\{\begin{matrix}x_{0}&x_{0}^{3}&&&&\\&&x_{0}^{2}+x_{0}x_{1}+x_{1}^{2}&&\\x_{1}&x_{1}^{3}&&x_{0}+x_{1}+x_{2}&&\\&&x_{1}^{2}+x_{1}x_{2}+x_{2}^{2}&&1&\\x_{2}&x_{2}^{3}&&x_{1}+x_{2}+x_{3}&&0\\&&x_{2}^{2}+x_{2}x_{3}+x_{3}^{2}&&1&\\x_{3}&x_{3}^{3}&&x_{2}+x_{3}+x_{4}&&\\&&x_{3}^{2}+x_{3}x_{4}+x_{4}^{2}&&&\\x_{4}&x_{4}^{3}&&&&\\\end{matrix}}\\\\{\begin{matrix}x_{0}&x_{0}^{4}&&&&&\\&&x_{0}^{3}+x_{0}^{2}x_{1}+x_{0}x_{1}^{2}+x_{1}^{3}&&&\\x_{1}&x_{1}^{4}&&x_{0}^{2}+x_{0}x_{1}+x_{1}^{2}+x_{0}x_{2}+x_{1}x_{2}+x_{2}^{2}&&&\\&&x_{1}^{3}+x_{1}^{2}x_{2}+x_{1}x_{2}^{2}+x_{2}^{3}&&x_{0}+x_{1}+x_{2}+x_{3}&\\x_{2}&x_{2}^{4}&&x_{1}^{2}+x_{1}x_{2}+x_{2}^{2}+x_{1}x_{3}+x_{2}x_{3}+x_{3}^{2}&&1&\\&&x_{2}^{3}+x_{2}^{2}x_{3}+x_{2}x_{3}^{2}+x_{3}^{3}&&x_{1}+x_{2}+x_{3}+x_{4}&&0\\x_{3}&x_{3}^{4}&&x_{2}^{2}+x_{2}x_{3}+x_{3}^{2}+x_{2}x_{4}+x_{3}x_{4}+x_{4}^{2}&&1&\\&&x_{3}^{3}+x_{3}^{2}x_{4}+x_{3}x_{4}^{2}+x_{4}^{3}&&x_{2}+x_{3}+x_{4}+x_{5}&&\\x_{4}&x_{4}^{4}&&x_{3}^{2}+x_{3}x_{4}+x_{4}^{2}+x_{3}x_{5}+x_{4}x_{5}+x_{5}^{2}&&&\\&&x_{4}^{3}+x_{4}^{2}x_{5}+x_{4}x_{5}^{2}+x_{5}^{3}&&&&\\x_{5}&x_{5}^{4}&&&&&\\\end{matrix}}\\\\{\begin{matrix}x_{0}&x_{0}^{5}&&&&&&\\&&\sum _{i=0}^{4}x_{0}^{4-i}x_{1}^{i}&&&&\\x_{1}&x_{1}^{5}&&\sum _{i+j+k=3}x_{0}^{i}x_{1}^{j}x_{2}^{k}&&&&\\&&\sum _{i=0}^{4}x_{1}^{4-i}x_{2}^{i}&&\sum _{i+j+k+l=2}x_{0}^{i}x_{1}^{j}x_{2}^{k}x_{3}^{l}&&&\\x_{2}&x_{2}^{5}&&\sum _{i+j+k=3}x_{1}^{i}x_{2}^{j}x_{3}^{k}&&\sum _{i=0}^{4}x_{i}&&\\&&\sum _{i=0}^{4}x_{2}^{4-i}x_{3}^{i}&&\sum _{i+j+k+l=2}x_{1}^{i}x_{2}^{j}x_{3}^{k}x_{4}^{l}&&1&\\x_{3}&x_{3}^{5}&&\sum _{i+j+k=3}x_{2}^{i}x_{3}^{j}x_{4}^{k}&&\sum _{i=1}^{5}x_{i}&&0\\&&\sum _{i=0}^{4}x_{3}^{4-i}x_{4}^{i}&&\sum _{i+j+k+l=2}x_{2}^{i}x_{3}^{j}x_{4}^{k}x_{5}^{l}&&1&\\x_{4}&x_{4}^{5}&&\sum _{i+j+k=3}x_{3}^{i}x_{4}^{j}x_{5}^{k}&&\sum _{i=2}^{6}x_{i}&&\\&&\sum _{i=0}^{4}x_{4}^{4-i}x_{5}^{i}&&\sum _{i+j+k+l=2}x_{3}^{i}x_{4}^{j}x_{5}^{k}x_{6}^{l}&&&\\x_{5}&x_{5}^{5}&&\sum _{i+j+k=3}x_{4}^{i}x_{5}^{j}x_{6}^{k}&&&&\\&&\sum _{i=0}^{4}x_{5}^{4-i}x_{6}^{i}&&&&&\\x_{6}&x_{6}^{5}&&&&&&\\\end{matrix}}\\\end{array}}$$
^
$${\begin{aligned}{\mathopen {[}}y_{0}]&=y_{0}\\{\mathopen {[}}y_{0},y_{1}]&={\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}={\frac {y_{0}}{x_{0}-x_{1}}}+{\frac {y_{1}}{x_{1}-x_{0}}}\\&=\sum _{j=0}^{1}{\frac {y_{j}}{\prod _{k=0,k\neq j}^{1}(x_{j}-x_{k})}}\\{\mathopen {[}}y_{0},y_{1},y_{2}]&={\frac {{\cfrac {y_{1}}{x_{1}-x_{2}}}+{\cfrac {y_{2}}{x_{2}-x_{1}}}-{\cfrac {y_{0}}{x_{0}-x_{1}}}-{\cfrac {y_{1}}{x_{1}-x_{0}}}}{x_{2}-x_{0}}}\\&={\frac {y_{0}}{(x_{0}-x_{1})(x_{0}-x_{2})}}+{\frac {y_{1}}{(x_{1}-x_{0})(x_{1}-x_{2})}}+{\frac {y_{2}}{(x_{2}-x_{0})(x_{2}-x_{1})}}\\&=\sum _{j=0}^{2}{\frac {y_{j}}{\prod _{k=0,k\neq j}^{2}(x_{j}-x_{k})}}\\{\mathopen {[}}y_{0},y_{1},\dots ,y_{n}]&=\sum _{j=0}^{n}{\frac {y_{j}}{\prod _{k=0,k\neq j}^{n}(x_{j}-x_{k})}}\\{\mathopen {[}}y_{0},y_{1},\dots ,y_{n+1}]&={\frac {\sum _{j=1}^{n+1}{\frac {y_{j}}{\prod _{k=1,\,k\neq j}^{n+1}(x_{j}-x_{k})}}-\sum _{j=0}^{n}{\frac {y_{j}}{\prod _{k=0,\,k\neq j}^{n}(x_{j}-x_{k})}}}{x_{n+1}-x_{0}}}\\&={\frac {{\frac {y_{n+1}}{\prod _{k=1}^{n}(x_{n+1}-x_{k})}}+\sum _{j=1}^{n}y_{j}\left({\frac {1}{\prod _{k=1,\,k\neq j}^{n+1}(x_{j}-x_{k})}}-{\frac {1}{\prod _{k=0,\,k\neq j}^{n}(x_{j}-x_{k})}}\right)-{\frac {y_{0}}{\prod _{k=1}^{n}(x_{0}-x_{k})}}}{x_{n+1}-x_{0}}}\\&={\frac {{\frac {y_{n+1}}{\prod _{k=1}^{n}(x_{n+1}-x_{k})}}+\sum _{j=1}^{n}y_{j}\left({\frac {x_{j}-x_{0}}{\prod _{k=0,\,k\neq j}^{n+1}(x_{j}-x_{k})}}-{\frac {x_{j}-x_{n+1}}{\prod _{k=0,\,k\neq j}^{n+1}(x_{j}-x_{k})}}\right)-{\frac {y_{0}}{\prod _{k=1}^{n}(x_{0}-x_{k})}}}{x_{n+1}-x_{0}}}\\&={\frac {{\frac {y_{n+1}}{\prod _{k=1}^{n}(x_{n+1}-x_{k})}}+\sum _{j=1}^{n}y_{j}\left({\frac {x_{n+1}-x_{0}}{\prod _{k=0,\,k\neq j}^{n+1}(x_{j}-x_{k})}}\right)-{\frac {y_{0}}{\prod _{k=1}^{n}(x_{0}-x_{k})}}}{x_{n+1}-x_{0}}}\\&={\frac {y_{n+1}}{\prod _{k=0}^{n}(x_{n+1}-x_{k})}}+\sum _{j=1}^{n}{\frac {y_{j}}{\prod _{k=0,\,k\neq j}^{n+1}(x_{j}-x_{k})}}+{\frac {y_{0}}{\prod _{k=1}^{n+1}(x_{0}-x_{k})}}\\&=\sum _{j=0}^{n+1}{\frac {y_{j}}{\prod _{k=0,\,k\neq j}^{n+1}(x_{j}-x_{k})}}\end{aligned}}$$
^ 《数值分析及科学计算》 (Numerical Analysis and Scientific Computing), Xue Yi (ed.), Chapter 6, Section 2, Newton interpolation, p. 200.
^ 《数值分析及科学计算》 (Numerical Analysis and Scientific Computing), Xue Yi (ed.), Chapter 6, Section 2, Newton interpolation, p. 201.
^
$${\begin{matrix}x_{0}&x_{0}^{3}&&&&\\&&x_{0}^{2}+x_{0}x_{1}+x_{1}^{2}&&\\x_{1}&x_{1}^{3}&&x_{0}+x_{1}+x_{2}&&\\&&x_{0}^{2}+x_{0}x_{2}+x_{2}^{2}&&1&\\x_{2}&x_{2}^{3}&&x_{0}+x_{1}+x_{3}&&0\\&&x_{0}^{2}+x_{0}x_{3}+x_{3}^{2}&&1&\\x_{3}&x_{3}^{3}&&x_{0}+x_{1}+x_{4}&&\\&&x_{0}^{2}+x_{0}x_{4}+x_{4}^{2}&&&\\x_{4}&x_{4}^{3}&&&&\\\end{matrix}}$$
^ The Newton Polynomial Interpolation. [2019-04-19]. (Archived from the original on 2019-04-19.)
^
$${\begin{array}{lcl}{\begin{aligned}N_{1}(x)&=[y_{0}]+[{y}_{0},{y}_{1}](x-{x}_{0})=y_{0}+y_{0}{\frac {x-{x}_{0}}{x_{0}-x_{1}}}+y_{1}{\frac {x-{x}_{0}}{x_{1}-x_{0}}}=y_{0}\left(1+{\frac {x-{x}_{0}}{x_{0}-x_{1}}}\right)+y_{1}{\frac {x-{x}_{0}}{x_{1}-x_{0}}}\\&=y_{0}{\frac {x-{x}_{1}}{x_{0}-x_{1}}}+y_{1}{\frac {x-{x}_{0}}{x_{1}-x_{0}}}=\sum _{j=0}^{1}y_{j}\prod _{k=0,k\neq j}^{1}{\frac {x-{x}_{k}}{x_{j}-x_{k}}}\\\end{aligned}}\\{\begin{aligned}N_{n}(x)&=\sum _{j=0}^{n}y_{j}\prod _{k=0,k\neq j}^{n}{\frac {x-{x}_{k}}{x_{j}-x_{k}}}\\\end{aligned}}\\{\begin{aligned}N_{n+1}(x)&=N_{n}(x)+[{y}_{0},{y}_{1},\ldots ,{y}_{n+1}]\prod _{k=0}^{n}(x-{x}_{k})\\&=\sum _{j=0}^{n}y_{j}\prod _{k=0,k\neq j}^{n}{\frac {x-{x}_{k}}{x_{j}-x_{k}}}+\sum _{j=0}^{n+1}y_{j}{\frac {\prod _{k=0}^{n}(x-{x}_{k})}{\prod _{k=0,\,k\neq j}^{n+1}(x_{j}-x_{k})}}\\&=\sum _{j=0}^{n}y_{j}\left({\frac {\prod _{k=0,k\neq j}^{n}(x-{x}_{k})}{\prod _{k=0,k\neq j}^{n}(x_{j}-x_{k})}}+{\frac {\prod _{k=0}^{n}(x-{x}_{k})}{\prod _{k=0,\,k\neq j}^{n+1}(x_{j}-x_{k})}}\right)+y_{n+1}{\frac {\prod _{k=0}^{n}(x-{x}_{k})}{\prod _{k=0}^{n}(x_{n+1}-x_{k})}}\\&=\sum _{j=0}^{n}y_{j}\left({\frac {\left(\prod _{k=0,k\neq j}^{n}(x-{x}_{k})\right)(x_{j}-x_{n+1})}{\prod _{k=0,k\neq j}^{n+1}(x_{j}-x_{k})}}+{\frac {\left(\prod _{k=0,k\neq j}^{n}(x-{x}_{k})\right)(x-x_{j})}{\prod _{k=0,\,k\neq j}^{n+1}(x_{j}-x_{k})}}\right)+y_{n+1}{\frac {\prod _{k=0}^{n}(x-{x}_{k})}{\prod _{k=0}^{n}(x_{n+1}-x_{k})}}\\&=\sum _{j=0}^{n}y_{j}{\frac {\left(\prod _{k=0,k\neq j}^{n}(x-{x}_{k})\right)(x-x_{n+1})}{\prod _{k=0,k\neq j}^{n+1}(x_{j}-x_{k})}}+y_{n+1}{\frac {\prod _{k=0}^{n}(x-{x}_{k})}{\prod _{k=0}^{n}(x_{n+1}-x_{k})}}\\&=\sum _{j=0}^{n+1}y_{j}\prod _{k=0,k\neq j}^{n+1}{\frac {x-{x}_{k}}{x_{j}-x_{k}}}\\\end{aligned}}\\\end{array}}$$
^ Burden, Richard L.; Faires, J. Douglas. Numerical Analysis 9th. 2011: 129 .
^
$${\begin{aligned}\triangle ^{k}y_{i}&=\sum _{j=0}^{k}{\frac {k!}{\prod _{l=0,\,l\neq j}^{k}(j-l)}}y_{i+j},\quad 0\leq k\leq n-i\\&=\sum _{j=0}^{k}{\frac {k!}{j!(-1)^{k-j}(k-j)!}}y_{i+j},\quad 0\leq k\leq n-i\\&=\sum _{j=0}^{k}(-1)^{k-j}{\binom {k}{j}}y_{i+j},\quad 0\leq k\leq n-i\end{aligned}}$$
^ Methodus Incrementorum Directa et Inversa (page archived at the Internet Archive)
Bibliography
See also