How Do You Find the Optima of a Function Under Constraints?

  • #1
Inertigratus

Homework Statement


Find the optima of [itex]f(x, y)[/itex] subject to the constraint [itex]g(x, y) = 3[/itex] with [itex]x > 0[/itex].

Homework Equations


[itex]f(x, y) = x + y[/itex]
[itex]g(x, y) = x^2 + xy + y^2[/itex]
[itex]\nabla f(x, y) = (1, 1)[/itex]
[itex]\nabla g(x, y) = (2x + y, 2y + x)[/itex]

The Attempt at a Solution


[itex]\nabla f(x, y) = \lambda \nabla g(x, y) \iff \lambda(2x + y) = \lambda(2y + x) = 1 \Rightarrow x = y[/itex]. Then [itex]g(x, x) = 3x^2 = 3 \Rightarrow x = \pm 1[/itex], and since [itex]x > 0[/itex], [itex]x = y = 1[/itex], giving [itex]f(1, 1) = 2[/itex].

Now, how do I find the minimum?
The minimum is supposed to be [itex]-\sqrt{3}[/itex].
 
  • #2
Inertigratus said:
Now, how do I get the minima...?
Minima is supposed to be [itex]-\sqrt{3}[/itex].

Be VERY careful in stating the problem. The problem with x > 0 has NO MINIMUM; however, the problem with x >= 0 does have a minimum. Never, never, never write optimization problems with strict inequalities unless you absolutely have to. (Simplest example: minimize x subject to x > 0 has no solution, while minimize x subject to x >= 0 does have a solution.)

If you have covered the Karush-Kuhn-Tucker (KKT) conditions you can use them. (These generalize the Lagrange conditions to handle inequality constraints.) However, in this case you can get away without them: a solution either satisfies x > 0 (so you get the ordinary Lagrange solution) or else satisfies x = 0. In the latter case you need dL/dy = 0, together with dL/dx <= 0 for a maximum or dL/dx >= 0 for a minimum.
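A quick symbolic check of both cases (just a SymPy sketch, not something the course requires) might look like:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x + y
g = x**2 + x*y + y**2

# Interior case (x > 0): ordinary Lagrange conditions grad f = lam * grad g.
eqs = [sp.Eq(sp.diff(f, v), lam * sp.diff(g, v)) for v in (x, y)] + [sp.Eq(g, 3)]
interior = sp.solve(eqs, [x, y, lam], dict=True)

# Boundary case (x = 0): the constraint alone fixes y.
boundary = sp.solve(sp.Eq(g.subs(x, 0), 3), y)

candidates = [(s[x], s[y]) for s in interior if s[x] > 0] \
           + [(sp.Integer(0), yv) for yv in boundary]
values = [f.subs({x: a, y: b}) for a, b in candidates]
print(candidates, values)
```

The interior system returns x = y = 1 (after discarding x = -1), and the boundary gives y = ±√3, so the candidate values are 2 and ±√3.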

RGV
 
  • #3
You used Lagrange multipliers to find the critical points; there is only one, at (1, 1).
Optima necessarily occur either at critical points or at the boundary of the region under consideration. Two boundary points satisfy the constraint [itex]g(x, y) = 3[/itex]: [itex](0, \sqrt{3})[/itex] and [itex](0, -\sqrt{3})[/itex]. The maximum is [itex]\max\{f(1,1), f(0,\sqrt{3}), f(0,-\sqrt{3})\} = 2[/itex] and the minimum is [itex]\min\{f(1,1), f(0,\sqrt{3}), f(0,-\sqrt{3})\} = -\sqrt{3}[/itex].
 
  • #4
Ray Vickson said:
Be VERY careful in stating the problem. The problem with x > 0 has NO MINIMUM; however, the problem with x >= 0 does have a minimum.

Ohh, sorry, you're right. It's supposed to be x >= 0; maybe I even missed that myself.
I thought that was just a restriction and not actually the boundary.
So the derivative of the Lagrange multiplier can be used when either variable is zero?

upsidedowntop said:
Optima necessarily occur either at critical points or at the boundary of the region under consideration.

Right, I totally missed that. Thanks to both!
 
  • #5
Inertigratus said:
So the derivative of the Lagrange multiplier can be used when either variable is zero?

Yes, the derivative of the Lagrangian (NOT the Lagrange multiplier) can be used when the variable is zero, but the derivative need not be zero there. However, it should be >= 0 for a minimum or <= 0 for a maximum. As I said, these are the so-called Karush-Kuhn-Tucker conditions. See, e.g., http://en.wikipedia.org/wiki/Karush–Kuhn–Tucker_conditions .
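To make that concrete (a SymPy sketch, assuming the Lagrangian L = f - λ(g - 3)): at both x = 0 boundary points, solving dL/dy = 0 for λ leaves dL/dx = 1/2 >= 0, so the sign test admits (0, -√3) as a minimum and rules out either boundary point as a maximum.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
# Lagrangian for f = x + y subject to g = x^2 + x*y + y^2 = 3.
L = x + y - lam * (x**2 + x*y + y**2 - 3)

signs = []
for y0 in (sp.sqrt(3), -sp.sqrt(3)):   # the two boundary points (0, y0)
    # dL/dy = 0 at (0, y0) determines the multiplier.
    lam0 = sp.solve(sp.diff(L, y).subs({x: 0, y: y0}), lam)[0]
    # The sign of dL/dx at (0, y0) decides max vs min on the boundary.
    signs.append(sp.simplify(sp.diff(L, x).subs({x: 0, y: y0, lam: lam0})))
```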

RGV
 

