
MTH243 (Calculus for Functions of Several Variables)
MAPLE. Chapter 15: Optimization
Vladimir A. Dobrushkin, Lippitt Hall 202C, 874-5095, dobrush@uri.edu
In this course we will use the computer algebra system (CAS) Maple, which is not available in the computer
labs at URI; it must be purchased separately. The Maple projects are created to help you learn new concepts. Its engine is an essential part of WileyPlus, Matlab, and Mathcad. Maple is very useful for
visualizing graphs and surfaces in three dimensions. Matlab (commercial software) is also available in the engineering labs;
its free counterpart is called Octave. A student can also use a free CAS: SymPy (based on Python), Maxima, or Sage.
We follow the standard textbook "Multivariable Calculus" (6th edition) by McCallum, Hughes-Hallett, Gleason et al.

Section 15.1. Local Extrema
Example 1. Plotting the Graph of the Function \( F (x, y) = x^2+y^2 \)
Plot3D[x^2 + y^2, {x, -3, 3}, {y, -3, 3}, Axes -> True,
PlotStyle -> Green]
Now we create a new graph:
\( g (x, y) = x^2 + y^2 + 3 \)
Plot3D[x^2 + y^2 + 3, {x, -3, 3}, {y, -3, 3}, Axes -> True]
Another graph of \( h(x, y) = 5 - x^2 - y^2 \)
Plot3D[5 - x^2 - y^2, {x, -3, 3}, {y, -3, 3}, Axes -> True,
PlotStyle -> Orange]
One more: \( k (x, y) = x^2 + (y - 1)^2 \)
Plot3D[x^2 + (y - 1)^2, {x, -3, 3}, {y, -3, 3}, PlotStyle -> None]
Example 2. Plotting the Graph of the Function \( G(x,y)=e^{-(x^2+y^2)} \)
Plot3D[E^(-(x^2 + y^2)), {x, -5, 5}, {y, -5, 5},
PlotStyle -> Opacity[.8]]
Cross Sections and the Graph of a Function where x = 2
Plot3D[{x^2 + y^2, 4 + y^2}, {x, -3, 3}, {y, -3, 3}]
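Since the handout also recommends free Python-based tools, here is a plain-Python sketch of the same cross section (no plotting; the grid step is an arbitrary choice): slicing \( f(x,y)=x^2+y^2 \) at x = 2 gives the single-variable parabola \( 4+y^2 \), whose smallest value on the slice is 4, attained at y = 0.

```python
# Cross section of f(x, y) = x^2 + y^2 along the plane x = 2.
# Plain-Python check; the grid resolution below is an arbitrary choice.

def f(x, y):
    return x**2 + y**2

# Sample the slice x = 2 for y in [-3, 3].
ys = [i / 10 for i in range(-30, 31)]
slice_values = [f(2, y) for y in ys]

# The slice coincides with the single-variable parabola 4 + y^2 ...
assert all(abs(f(2, y) - (4 + y**2)) < 1e-12 for y in ys)

# ... whose minimum on the slice is 4, attained at y = 0.
print(min(slice_values))  # -> 4.0
```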
Section 15.2. Optimization
Section 15.3. Constrained Optimization
The method of Lagrange multipliers (named after Joseph-Louis Lagrange, 1736-1813) is a strategy for finding the
local maxima and minima of a function subject to equality constraints. For the case of only one constraint and only
two choice variables (as exemplified in Figure 1), consider the optimization problem
\[
\begin{cases}
\mbox{maximize/minimize} & \quad f(x,y), \\
\mbox{subject to} & \quad g(x,y) =c .
\end{cases}
\]
We assume that both f and g have continuous first partial derivatives. We introduce a new variable (λ) called a
Lagrange multiplier and study the Lagrange function (or Lagrangian or Lagrangian expression) defined by
\[
{\cal L}(x,y,\lambda ) = f(x,y) - \lambda \left( g(x,y) - c \right) ,
\]
where the λ term may be either added or subtracted. If \( f(x_0, y_0) \) is a maximum of
\( f(x, y) \) for the original constrained problem, then there exists \( \lambda_0 \)
such that \( (x_0, y_0, \lambda_0 ) \) is a stationary point for the Lagrange function
(stationary points are those points where the partial derivatives of the Lagrange function are zero). However, not all
stationary points yield a solution of the original problem. Thus, the method of Lagrange multipliers yields a necessary
condition for optimality in constrained problems. Sufficient conditions for a minimum or maximum also exist.
For the general case of an arbitrary number n of choice variables and an arbitrary number M of constraints,
the Lagrangian takes the form
\[
{\cal L}(x_1, x_2, \ldots , x_n , \lambda_1 , \ldots , \lambda_M ) = f(x_1 , \ldots , x_n ) - \sum_{k=1}^M \lambda_k
\left( g_k (x_1 , \ldots , x_n ) - c_k \right) ;
\]
again the constrained optimum of f coincides with a stationary point of the Lagrangian.
For the two-dimensional problem introduced above, the extreme points are determined by solving the following
nonlinear system of equations:
\[
\nabla_{x,y,\lambda} {\cal L} =0 \qquad \Longleftrightarrow \qquad \begin{cases}
\nabla_{x,y} f(x,y) &= \lambda \,\nabla_{x,y} g(x,y) , \\
g(x,y) &= c. \end{cases}
\]
The method generalizes readily to functions of \( n \) variables:
\[
\nabla_{x_1 ,\ldots , x_n , \lambda} {\cal L} =0 ,
\]
which amounts to solving \( n+1 \) equations in \( n+1 \) unknowns.
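As a concrete sketch of solving these \( n+1 \) equations, the snippet below applies Newton's method to the stationarity system for the classic problem of maximizing \( f(x,y)=x+y \) on the unit circle \( g(x,y)=x^2+y^2=1 \); this example, and the starting guess, are our own illustration choices, not from the text. The system \( 1=2\lambda x,\ 1=2\lambda y,\ x^2+y^2=1 \) has the maximizer \( x=y=\lambda=1/\sqrt{2} \).

```python
# Newton's method for the Lagrange stationarity system of
#   maximize f(x, y) = x + y   subject to   g(x, y) = x^2 + y^2 = 1.
# Grad L = 0 gives: 1 - 2*lam*x = 0, 1 - 2*lam*y = 0, x^2 + y^2 - 1 = 0.

def F(v):
    x, y, lam = v
    return [1 - 2*lam*x, 1 - 2*lam*y, x*x + y*y - 1]

def J(v):
    # Jacobian of F with respect to (x, y, lam).
    x, y, lam = v
    return [[-2*lam, 0.0, -2*x],
            [0.0, -2*lam, -2*y],
            [2*x, 2*y, 0.0]]

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system.
    A = [row[:] for row in A]
    b = b[:]
    n = 3
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

v = [1.0, 1.0, 1.0]  # starting guess (arbitrary choice)
for _ in range(25):
    step = solve3(J(v), [-r for r in F(v)])
    v = [vi + si for vi, si in zip(v, step)]

x, y, lam = v
print(x, y, lam)  # each converges to 1/sqrt(2), about 0.7071
```

Newton's method only finds one stationary point per starting guess; different guesses are needed to locate the minimizer at \( x=y=-1/\sqrt{2} \).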
The constrained extrema of f are critical points of the Lagrangian \( {\cal L}, \)
but they are not necessarily local extrema of \( {\cal L} \) (see Example 2 below).
One may reformulate the Lagrangian as a Hamiltonian, in which case the solutions are local minima for the Hamiltonian. This is done in optimal control theory, in the form of Pontryagin's minimum principle. It was formulated in 1956 by the Russian mathematician Lev Pontryagin and his students.
Example 1.
Example 2. Suppose we want to find the maximum values of
\( f(x,y) = x^2 y \) with the condition that the x and y coordinates lie on
the circle around the origin with radius 3, that is, subject to the constraint
\[
x^2 + y^2 - 9 = 0 .
\]
As there is just a single constraint, we will use only one multiplier, say λ.
The constraint function \( g(x, y) = x^2 + y^2 - 9 \) is identically zero on the circle of radius 3.
Hence any multiple of g(x, y) may be added to f(x, y) leaving f(x, y) unchanged in the region of interest (on the circle where our original constraint is satisfied).
Apply the ordinary Lagrange multiplier method. Let:
\[
{\cal L}(x,y,\lambda ) = f(x,y) - \lambda \left( g(x,y) - c \right) = x^2 y - \lambda \left( x^2 + y^2 - 9 \right) .
\]
Now we can calculate the gradient of the Lagrangian and get the system of equations
\[
\begin{cases}
2xy &= 2\lambda x, \\
x^2 &= 2\lambda y , \\
x^2 + y^2 &= 9 ,
\end{cases} \qquad \Longleftrightarrow \qquad \begin{cases}
2x\left( y - \lambda \right) &= 0, \\
x^2 &= 2\lambda y , \\
x^2 + y^2 &= 9 .
\end{cases}
\]
If x = 0, then \( y=\pm 3 \) and λ = 0. If \( y=\lambda , \)
we get \( x^2 = 2 y^2 . \) Substituting this into the constraint condition and
solving for y gives \( y = \pm \sqrt{3} , \) and hence \( x = \pm\sqrt{6} . \) Thus, there are six critical points of the
Lagrangian:
\[
(0,3), \quad (0,-3), \quad (\sqrt{6} , \sqrt{3} ) , \quad (-\sqrt{6} , \sqrt{3} ) , \quad (\sqrt{6} , -\sqrt{3} ) , \quad (-\sqrt{6} , -\sqrt{3} ) .
\]
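Solving the system directly: with λ = y one gets \( x^2 = 2y^2 = 6 \), so \( x=\pm\sqrt 6 ,\ y=\pm\sqrt 3 \), while x = 0 forces λ = 0 and \( y=\pm 3 \). The plain-Python check below (our own sketch, with λ recovered as λ = y when x ≠ 0 and λ = 0 when x = 0) verifies that each triple satisfies the gradient system.

```python
import math

s3, s6 = math.sqrt(3), math.sqrt(6)

# Critical points (x, y, lambda):
# lambda = y when x != 0 (from 2x(y - lambda) = 0), lambda = 0 when x = 0.
points = [(0.0, 3.0, 0.0), (0.0, -3.0, 0.0),
          (s6, s3, s3), (-s6, s3, s3),
          (s6, -s3, -s3), (-s6, -s3, -s3)]

for x, y, lam in points:
    assert abs(2*x*y - 2*lam*x) < 1e-12   # dL/dx = 0
    assert abs(x*x - 2*lam*y) < 1e-12     # dL/dy = 0
    assert abs(x*x + y*y - 9) < 1e-12     # constraint g = 0
print("all six points satisfy the system")
```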
Evaluating the objective at these points, we find that
\[
f(0,\pm 3) = 0, \qquad f\left( \pm \sqrt{6} , \sqrt{3} \right) = 6\sqrt{3} , \qquad f\left( \pm \sqrt{6} , -\sqrt{3} \right) = -6\sqrt{3} .
\]
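Evaluating \( f(x,y)=x^2 y \) at the critical points derived from the system, \( (\pm\sqrt 6, \pm\sqrt 3) \) and \( (0,\pm 3) \), gives extreme values \( \pm 6\sqrt 3 \approx \pm 10.392 \); a quick plain-Python check:

```python
import math

def f(x, y):
    return x**2 * y

s3, s6 = math.sqrt(3), math.sqrt(6)
candidates = [(0, 3), (0, -3),
              (s6, s3), (-s6, s3),
              (s6, -s3), (-s6, -s3)]
values = [f(x, y) for x, y in candidates]

# Maximum and minimum of f over the six critical points.
assert math.isclose(max(values), 6 * math.sqrt(3))
assert math.isclose(min(values), -6 * math.sqrt(3))
print(round(max(values), 3))  # -> 10.392
```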
Therefore, the objective function attains its global maximum (subject to the constraint) at \( \left( \pm \sqrt{6} , \sqrt{3} \right) \)
and its global minimum at \( \left( \pm \sqrt{6} , -\sqrt{3} \right) .\) The point \( ( 0 , 3 ) \)
is a local minimum of f and \( ( 0 , -3 ) \) is a local maximum of f, as may be determined by consideration
of the Hessian matrix of \( {\cal L} ( x , y , 0 ) . \)
Note that while \( \left( \sqrt{6} , \sqrt{3} , \sqrt{3} \right) \) is a critical point of
\( {\cal L} ( x , y , \lambda ) ,\) it is not a local extremum of
\( {\cal L} ( x , y , \lambda ) . \)