What makes an optimization problem hard

This problem is polynomial-time solvable, but its usual polyhedral representation admits no small linear programming formulation. Perhaps this belief is caused by a mistaken interpretation of the fact that linear programming is P-complete. Note that P-hardness is defined with respect to a more restrictive type of reduction than NP-hardness; the former is usually defined with respect to log-space reductions.

Wrong answer 4: optimization problems are difficult when there are many variables.

Consider a linear program that has exponentially many constraints, but that can be solved using the ellipsoid method and an efficient separation procedure. Its dual LP has exponentially many variables, but is polynomial-time solvable.
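To make the separation idea concrete, here is a minimal sketch (not from the original text; the constraint family and function names are illustrative) of a separation oracle for an LP with exponentially many constraints: for every index subset S with |S| = k, require the sum of x over S to be at most 1. There are C(n, k) such constraints, yet separation only needs the k largest coordinates.

```python
def separate(x, k):
    """Return the index set of a violated constraint, or None.

    Constraint family (illustrative): for every subset S with |S| == k,
        sum(x[i] for i in S) <= 1.
    The most violated constraint uses the k largest coordinates of x,
    so separation runs in polynomial time despite C(n, k) constraints.
    """
    largest = sorted(range(len(x)), key=lambda i: x[i], reverse=True)[:k]
    if sum(x[i] for i in largest) > 1 + 1e-9:
        return largest
    return None

# The pair of the two largest entries violates 0.7 + 0.6 <= 1.
print(separate([0.6, 0.7, 0.1], 2))   # [1, 0]
print(separate([0.2, 0.2, 0.2], 2))   # None: all constraints hold
```

A cutting-plane loop would call `separate()` on the current LP solution, add the returned constraint, re-solve, and repeat until the oracle reports no violation; this is exactly the access pattern the ellipsoid method needs.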

One counterexample to the claim is the problem of optimizing a quadratic function subject to an ellipsoid constraint (the trust-region subproblem). This problem is nonconvex, but is polynomial-time solvable. Some of these wrong answers are appealing, partly because their converses are true. For example, the converses of wrong answers 2 and 4 are true for integer programs.
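The tractability of this nonconvex problem is easiest to see in a special case. The sketch below (my own illustration, not from the original text) solves the diagonal instance in closed form: minimizing a weighted sum of squares over the unit ball puts all the weight on the most negative coefficient.

```python
def trust_region_diag(d):
    """Diagonal special case of the trust-region subproblem:
        minimize  sum(d[i] * x[i]**2)  subject to  sum(x[i]**2) <= 1.
    Nonconvex whenever some d[i] < 0, yet the global optimum is closed-form.
    Returns (optimal value, optimal x)."""
    j = min(range(len(d)), key=lambda i: d[i])
    if d[j] >= 0:                  # convex case: the origin is optimal
        return 0.0, [0.0] * len(d)
    x = [0.0] * len(d)
    x[j] = 1.0                     # unit vector along the most negative direction
    return d[j], x

print(trust_region_diag([2.0, -3.0, 1.0]))   # (-3.0, [0.0, 1.0, 0.0])
```

The general (non-diagonal) case reduces to this one by an eigendecomposition of the quadratic's matrix, which is itself polynomial-time.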

Indeed, while integer programming is thought to be hard in general, it becomes polynomial-time solvable when we restrict ourselves to instances with a fixed number of variables or a fixed number of constraints. I might add that the converse of wrong answer 1 does not seem to hold. For example, it is thought to be hard to find a second Hamiltonian cycle in a particular class of graphs even when one is guaranteed to exist!

Moreover, we can quickly check whether a number is prime, but actually factoring a composite number is thought to be hard for classical computers. This is essentially what we are showing when we prove that a problem is NP-hard: we are showing that we can express any problem from NP as an equivalent instance of our problem.
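The gap between checking and finding is the heart of NP. As a small illustration (the function and example graph are mine, not from the text): verifying that a claimed Hamiltonian cycle really is one takes time linear in the cycle, even though finding such a cycle is believed hard.

```python
def is_hamiltonian_cycle(edges, n, cycle):
    """Check a certificate: does `cycle` visit all n vertices exactly
    once and return to its start using only edges of the graph?"""
    if sorted(cycle) != list(range(n)):
        return False                       # must visit every vertex exactly once
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((cycle[i], cycle[(i + 1) % n])) in edge_set
               for i in range(n))

square = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle graph
print(is_hamiltonian_cycle(square, 4, [0, 1, 2, 3]))  # True
print(is_hamiltonian_cycle(square, 4, [0, 2, 1, 3]))  # False: (0, 2) is no edge
```

An NP-hardness proof turns this around: it shows that a fast solver for our problem would let us decide every problem whose certificates can be checked this quickly.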

Of course, using NP-hardness as a justification for the hardness of a problem depends on NP actually having difficult problems in it. Of note, for non-convex problems, providing an initial guess, even if infeasible, can greatly help steer the optimization toward a better local optimum.

Do constraints make an optimization problem harder or easier?

Maybe you had better show us an example where you are really in doubt. Consider smooth nonlinear optimization problems, where all of the relationships are smooth functions (i.e., functions with continuous derivatives).

Models with thousands of variables and constraints can often be solved in minutes on modern PCs. If the problem is convex, you can have very high confidence that the solutions obtained are globally optimal. If the problem is non-convex, you can have reasonable confidence that the solutions obtained are locally optimal, but not necessarily globally optimal.
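The local-versus-global distinction is easy to demonstrate. In this sketch (a one-dimensional toy of my own; step size and iteration count are illustrative choices), plain gradient descent on the nonconvex function f(x) = x**4 - 2*x**2, which has minima at x = -1 and x = +1, converges to whichever basin the starting point lies in.

```python
def gradient_descent(x0, steps=200, lr=0.05):
    """Minimize f(x) = x**4 - 2*x**2 by fixed-step gradient descent."""
    x = x0
    for _ in range(steps):
        x -= lr * (4 * x**3 - 4 * x)   # f'(x) = 4x^3 - 4x
    return x

print(round(gradient_descent(0.5), 3))    # 1.0  (right basin)
print(round(gradient_descent(-0.5), 3))   # -1.0 (left basin)
```

Both endpoints are genuine local optima with the same objective value here; in general, the basin you land in can be arbitrarily worse than the global optimum, which is why the guarantees above weaken once convexity is lost.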

You can only have confidence that the solutions obtained are "good", i.e., better than the other solutions the method has examined, with no guarantee of global optimality.


