Contents
- Gradient of the Rosenbrock function at [1,1], the global minimizer
- The Hessian matrix at the minimizer should be positive definite
- Gradient estimation using gradest - a function of 5 variables
- Simple Hessian matrix of a problem with 3 independent variables
- A semi-definite Hessian matrix
- Directional derivative of the Rosenbrock function at the solution
- Directional derivative at other locations
- Jacobian matrix of a scalar function is just the gradient
- Jacobian matrix of a linear system will reduce to the design matrix
- The jacobian matrix of a nonlinear transformation of variables
% Multivariate calculus demo script
%
% This script file is designed to be used in cell mode from the matlab
% editor, or best of all, use the publish to HTML feature from the matlab
% editor. With older versions of matlab, you can copy and paste entire
% blocks of code into the matlab command window.
%
% Typical usage of the gradient and Hessian might be in optimization
% problems, where one might compare an analytically derived gradient
% against a numerical estimate for correctness, or use the Hessian matrix
% to compute confidence interval estimates on parameters in a maximum
% likelihood estimation.
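As a minimal sketch of that workflow (an addition to this demo, using a hypothetical objective nll and minimizer thetahat purely for illustration), one could check an analytic gradient against gradest and form a rough covariance estimate from the inverse Hessian:

% Hypothetical quadratic negative log-likelihood with known minimizer [1 2]
nll = @(theta) 0.5*sum((theta - [1 2]).^2);
thetahat = [1 2];
analytic_grad = thetahat - [1 2];        % analytic gradient, zero at the minimizer
numeric_grad = gradest(nll,thetahat);    % numerical estimate for comparison
max(abs(numeric_grad - analytic_grad))   % should be ~0
H = hessian(nll,thetahat);
approx_cov = inv(H)                      % rough covariance estimate at the MLE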
Gradient of the Rosenbrock function at [1,1], the global minimizer
rosen = @(x) (1-x(1)).^2 + 105*(x(2)-x(1).^2).^2;
% The gradient should be zero (within floating point noise)
[grad,err] = gradest(rosen,[1 1])
grad =
     0     0

err =
  1.0e-017 *
    0.1896         0
The Hessian matrix at the minimizer should be positive definite
H = hessian(rosen,[1 1])
% The eigenvalues of H should be positive
eig(H)
H =
  842.0000 -420.0000
 -420.0000  210.0000

ans =
  1.0e+003 *
    0.0004
    1.0516
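As an added cross-check (not part of the original demo), the analytic Hessian of rosen at [1,1], with the 105 coefficient used above, is [2+8*105, -4*105; -4*105, 2*105]; positive definiteness can also be confirmed with a Cholesky factorization:

Hexact = [2+8*105, -4*105; -4*105, 2*105];   % analytic Hessian at [1,1]
max(abs(H(:) - Hexact(:)))                   % should be small
[~,p] = chol(H);                             % p == 0 means H is positive definite
p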
Gradient estimation using gradest - a function of 5 variables
[grad,err] = gradest(@(x) sum(x.^2),[1 2 3 4 5])
grad =
    2.0000    4.0000    6.0000    8.0000   10.0000

err =
  1.0e-013 *
    0.0434    0.0868    0.1418    0.1737    0.2005
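An added sanity check: the analytic gradient of sum(x.^2) is 2*x, so the estimate above can be compared directly.

x0 = [1 2 3 4 5];
max(abs(grad - 2*x0))   % should be ~0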
Simple Hessian matrix of a problem with 3 independent variables
[H,err] = hessian(@(x) x(1) + x(2)^2 + x(3)^3,[1 2 3])
H =
         0         0         0
         0    2.0000         0
         0         0   18.0000

err =
  1.0e-013 *
         0         0         0
         0    0.0751         0
         0         0    0.6664
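An added check (not in the original demo): the analytic Hessian of x(1) + x(2)^2 + x(3)^3 is diag([0, 2, 6*x(3)]), which at [1 2 3] is diag([0 2 18]).

Hexact = diag([0 2 6*3]);    % analytic Hessian at [1 2 3]
max(abs(H(:) - Hexact(:)))   % should be ~0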
A semi-definite Hessian matrix
H = hessian(@(xy) cos(xy(1) - xy(2)),[0 0])
% one of these eigenvalues will be zero (approximately)
eig(H)
H =
   -1.0000    1.0000
    1.0000   -1.0000

ans =
   -2.0000
   -0.0000
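An added note: the analytic Hessian of cos(xy(1)-xy(2)) is [-c c; c -c] with c = cos(xy(1)-xy(2)), so at [0 0] it is [-1 1; 1 -1] with eigenvalues -2 and 0; the zero eigenvalue is therefore expected.

Hexact = [-1 1; 1 -1];       % analytic Hessian at [0 0]
max(abs(H(:) - Hexact(:)))   % should be ~0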
Directional derivative of the Rosenbrock function at the solution
This should be zero. Ok, it's a trivial test case.
[dd,err] = directionaldiff(rosen,[1 1],[1 2])
dd =
     0

err =
     0
Directional derivative at other locations
[dd,err] = directionaldiff(rosen,[2 3],[1 -1])

% We can test this example
v = [1 -1];
v = v/norm(v);
g = gradest(rosen,[2 3]);

% The directional derivative will be the dot product of the gradient with
% the (unit normalized) vector. So this difference will be (approx) zero.
dot(g,v) - dd
dd =
  743.8763

err =
  1.8467e-012

ans =
  1.5916e-012
Jacobian matrix of a scalar function is just the gradient
[jac,err] = jacobianest(rosen,[2 3])
grad = gradest(rosen,[2 3])
jac =
   842  -210

err =
  1.0e-011 *
    0.1283         0

grad =
   842  -210
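As an added one-line check, the 1-by-n Jacobian of a scalar function and its gradient should agree to within the reported uncertainties.

max(abs(jac - grad))   % should be ~0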
Jacobian matrix of a linear system will reduce to the design matrix
A = rand(5,3);
b = rand(5,1);
fun = @(x) (A*x-b);
x = rand(3,1);
[jac,err] = jacobianest(fun,x)
disp 'This should be essentially zero at any location x'
jac - A
jac =
    0.8147    0.0975    0.1576
    0.9058    0.2785    0.9706
    0.1270    0.5469    0.9572
    0.9134    0.9575    0.4854
    0.6324    0.9649    0.8003

err =
  1.0e-014 *
    0.1772    0.0157    0.0313
         0    0.1535    0.1772
    0.0443         0    0.1253
    0.1772    0.2171    0.0627
    0.1253    0.1253    0.1253

This should be essentially zero at any location x

ans =
  1.0e-015 *
         0   -0.3331   -0.0278
   -0.2220    0.2220         0
         0   -0.1110   -0.1110
   -0.2220         0   -0.1665
    0.4441    0.1110    0.1110
The jacobian matrix of a nonlinear transformation of variables
evaluated at some arbitrary location [-2, -3]
fun = @(xy) [xy(1).^2, cos(xy(1) - xy(2))];
[jac,err] = jacobianest(fun,[-2 -3])
jac =
   -4.0000         0
   -0.8415    0.8415

err =
  1.0e-013 *
    0.0868         0
    0.6102    0.1997
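An added cross-check: the analytic Jacobian of [xy(1)^2, cos(xy(1)-xy(2))] is [2*xy(1), 0; -sin(xy(1)-xy(2)), sin(xy(1)-xy(2))], which at [-2 -3] is [-4 0; -sin(1) sin(1)].

Jexact = [-4 0; -sin(1) sin(1)];   % analytic Jacobian at [-2 -3]
max(abs(jac(:) - Jexact(:)))       % should be ~0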