Minimization
Minimization is a subtopic of the broader “mathematical optimization” topic:
- Snippet from Wikipedia: Mathematical optimization
In mathematics, computer science, or management science, mathematical optimization (alternatively, optimization or mathematical programming) is the selection of a best element (with regard to some criteria) from some set of available alternatives.
Minuit minimization
Any function that extends jhplot.FNon can be minimized. How to define an arbitrarily complicated function in multiple dimensions is described in the section Non-parametric function.
1D case
Let us give an example of how to find a minimum of the function shown in this figure, which is defined as f(x) = a*(x-2)^2*sqrt(x) + b*x^2. To minimize this function we will use JMinuitOptimizer. Here is a small script that finds and prints the value of the variable which minimizes the function using the Migrad method:
# Function. Finding a minimum using Minuit
from java.lang import Math
from jhplot import *

class MyFunc(FNon):   # define a function a*(x-2)*(x-2)*sqrt(x) + b*x^2
    def value(self, x):
        return (x[0]-2)*(x[0]-2)*Math.sqrt(x[0])*self.p[0]+self.p[1]*x[0]*x[0]

c1 = HPlot("Plotting canvas")
c1.visible(); c1.setAutoRange()
pl = MyFunc("test",1,2)           # 1 variable, 2 parameters
print pl.numberOfParameters()
print pl.dimension()
print pl.parameterNames().tolist()
pl.setParameter("par0",50)        # define parameters
pl.setParameter("par1",-60)
print pl.parameters()
c1.draw( F1D(pl,0,10) )           # create plottable function and show it

from hep.aida.ref.optimizer.jminuit import *
op=JMinuitOptimizerFactory().create()
op.setFunction(pl)                      # set function
op.variableSettings("x0").setValue(10)  # set initial value
op.optimize()                           # perform optimization
res=op.result()
print "Minimization results=",res.parameters(), " status=",res.optimizationStatus()
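As a quick cross-check of the Minuit result, the same objective with par0=50 and par1=-60, i.e. f(x) = 50*(x-2)^2*sqrt(x) - 60*x^2, can be scanned numerically. The sketch below is plain standard Python for illustration only; it is not part of the jhplot or Minuit API:

```python
import math

def f(x):
    # same objective as the script above, with par0=50, par1=-60:
    # f(x) = 50*(x-2)^2*sqrt(x) - 60*x^2
    return 50.0 * (x - 2.0) ** 2 * math.sqrt(x) - 60.0 * x * x

def grid_minimize(f, a, b, n=2000, rounds=3):
    """Coarse grid scan, refined around the best point each round."""
    xbest = a
    for _ in range(rounds):
        step = (b - a) / n
        xs = [a + i * step for i in range(n + 1)]
        xbest = min(xs, key=f)
        # shrink the search bracket around the current best point
        a = max(a, xbest - step)
        b = min(b, xbest + step)
    return xbest

xmin = grid_minimize(f, 0.0, 10.0)
print("x at minimum ~", xmin, " f(xmin) ~", f(xmin))
```

On the interval [0, 10] this locates the global minimum near x ~ 4.2, which is what a successful Minuit run should also report for these parameter values.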
Configuring the minimization
One can define the minimization method, strategy, precision, tolerance and the maximum number of iterations. This is especially useful when the minimization fails. The example above uses the so-called Migrad method, which relies on first derivatives, so it can fail when the derivative information is unreliable.
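To see why a derivative-based method depends on reliable derivative information, here is a hedged plain-Python sketch of Newton-style steps with numerically estimated derivatives (a toy illustration of the general idea, not the Migrad algorithm itself; the quadratic objective is hypothetical):

```python
def newton_minimize(f, x, steps=20, h=1e-5):
    """Newton's method for 1D minimization using numerical derivatives.
    It converges fast near a minimum, but fails if the curvature f''
    is noisy, zero, or negative -- the same reason derivative-based
    minimizers may need a different strategy on hard functions."""
    for _ in range(steps):
        d1 = (f(x + h) - f(x - h)) / (2.0 * h)            # first derivative
        d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)  # second derivative
        if d2 <= 0.0:
            raise RuntimeError("non-positive curvature: Newton step invalid")
        x -= d1 / d2
    return x

# a smooth toy objective with its minimum at x = 3
xmin = newton_minimize(lambda x: (x - 3.0) ** 2 + 1.0, x=10.0)
print(xmin)
```

When the curvature check fails, a robust minimizer would switch to a slower but safer strategy, which is the role of the strategy setting mentioned above.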
2D case
Minimization can be done for a function of any dimension, with any number of parameters. Let us consider the 2D case. The example script that performs minimization and plotting in 2D is shown below:
The output is (2,-1) and the generated image file is:
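The formula of this 2D example is not reproduced in the text, but its reported minimum is (2,-1); a simple paraboloid f(x,y) = (x-2)^2 + (y+1)^2 is consistent with that output and is assumed here purely for illustration. A minimal plain-Python gradient-descent sketch (not the JMinuitOptimizer API):

```python
def f(x, y):
    # hypothetical 2D objective with its minimum at (2, -1)
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

def grad(x, y):
    # analytic gradient of the paraboloid above
    return 2.0 * (x - 2.0), 2.0 * (y + 1.0)

x, y, lr = 0.0, 0.0, 0.1
for _ in range(200):
    gx, gy = grad(x, y)
    x -= lr * gx
    y -= lr * gy

print("minimum near (%.6f, %.6f)" % (x, y))
```

Any minimizer applied to this assumed function should converge to the same point (2,-1) reported by the script.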
Migrad function minimization
This is an alternative approach. First we will consider a minimization of a function in 1D using the MnMigrad class.
1D case
Let us consider a trivial example: minimization of the function f(x) = 10 + x^2.
- minuit.py
from org.freehep.math.minuit import *

class func(FCNBase):              # define user function
    def valueOf(self, par):
        return 10+par[0]*par[0]   # 10+x^2 function

Par = MnUserParameters()
Par.add("x", 1., 0.1)
migrad = MnMigrad(func(), Par)
vmin = migrad.minimize()
state = vmin.userState()
print "Min value=:",vmin.fval(), "function calls=",vmin.nfcn()
print "Parameters=",state.params()
print "Print more information=:",vmin.toString()
The output of this code is:
Min value=: 10.0 function calls= 13
Parameters= array('d', [-2.1424395590941003e-10])
Print more information=: Minuit did successfully converge.
# of function calls: 13
minimum function value: 10.0000
minimum edm: 4.57506e-20
minimum internal state vector: LAVector parameters: -2.14244e-10
minimum internal covariance matrix: LASymMatrix parameters: 1.00000
# ext. || name || type || value       || error +/-
  0    || x    || free || -2.14244e-10 || 1.00000
MnUserCovariance: 1.00000
MnUserCovariance parameter correlations: 1.00000
MnGlobalCorrelationCoeff: 0.00000
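The same minimum of f(x) = 10 + x^2 can be reproduced with a derivative-free golden-section search. The sketch below is plain standard Python for illustration, independent of the Minuit classes:

```python
import math

def golden_section(f, a, b, tol=1e-9):
    """Derivative-free 1D minimization on a unimodal bracket [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/golden-ratio ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

f = lambda x: 10.0 + x * x
xmin = golden_section(f, -1.0, 1.0)
print(xmin, f(xmin))
```

It converges to x ~ 0 with the function value 10, matching the Migrad output above.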
Note that for a complicated function the minimisation may fail. It is important to check that the minimisation succeeded; if not, one can retry with an alternative Minuit strategy. Modify the above code to include the isValid() method:
......
migrad = MnMigrad(func(), Par)
vmin = migrad.minimize()
if vmin.isValid()==False:
    # try again with a higher strategy
    migrad = MnMigrad(func(), Par, 2)
    vmin = migrad.minimize()
.....
The above example can also be written in Java, as shown below:
- example.java
import org.freehep.math.minuit.*;

public class Main {
   public static void main(String[] args) {
      FCNBase myFunction = new FCNBase() {
         public double valueOf(double[] par) {
            return 10 + par[0]*par[0];
         }
      };
      MnUserParameters myParameters = new MnUserParameters();
      myParameters.add("x", 1., 0.1);
      MnMigrad migrad = new MnMigrad(myFunction, myParameters);
      FunctionMinimum min = migrad.minimize();
      System.out.printf("Minimum value is %g found using %d function calls",
                        min.fval(), min.nfcn());
   }
}
The above code needs to be modified in order to produce graphic output. Let us consider a more complex example in which we will minimize a more complicated function. In order to plot the function and minimize it at the same time, we will add an additional method, addPlot(), to the function definition. This method returns the (X,Y) arrays for plotting the function. This example shows how to minimize and plot the function; we also mark the result of the minimisation with a red dot:
The output of the above script is shown below:
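The idea of an addPlot()-style helper, returning sampled (X,Y) arrays plus the located minimum for the "red dot" marker, can be sketched in plain Python. All names here are hypothetical illustrations, not the jhplot API, and the quadratic is a toy stand-in for the function in the text:

```python
def f(x):
    # toy stand-in for the function being minimized and plotted
    return 10.0 + x * x

def add_plot(f, xlo, xhi, n=200):
    """Sample f on a uniform grid and return (X, Y) arrays for plotting,
    in the spirit of the addPlot() helper described in the text."""
    step = (xhi - xlo) / (n - 1)
    xs = [xlo + i * step for i in range(n)]
    ys = [f(x) for x in xs]
    return xs, ys

xs, ys = add_plot(f, -2.0, 2.0)
# mark the sampled point with the smallest value (the "red dot")
i = min(range(len(ys)), key=lambda k: ys[k])
red_dot = (xs[i], ys[i])
print("red dot at", red_dot)
```

In the real script the (X,Y) arrays would be handed to the plotting canvas, and the red dot would be drawn at the minimizer's result rather than at the best grid sample.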
2D case
Now let us consider a more complicated case of minimizing a function in 2D, which will also give you an idea of how to do this for an arbitrary function in any dimension. The function has a known minimum at (1,1) with a Z value close to 0.
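The formula is not reproduced in the text; a standard test function with its minimum at (1,1) and value 0, matching the description, is the Rosenbrock function f(x,y) = (1-x)^2 + 100(y-x^2)^2, assumed here for illustration. A plain-Python gradient-descent sketch follows; gradient descent is notoriously slow in the curved valley of this function, which is exactly why robust minimizers such as Migrad are useful:

```python
def rosenbrock(x, y):
    # assumed 2D test function: minimum at (1, 1), f(1, 1) = 0
    return (1.0 - x) ** 2 + 100.0 * (y - x * x) ** 2

def rosenbrock_grad(x, y):
    # analytic gradient of the function above
    gx = -2.0 * (1.0 - x) - 400.0 * x * (y - x * x)
    gy = 200.0 * (y - x * x)
    return gx, gy

x, y, lr = 0.0, 0.0, 0.001
f0 = rosenbrock(x, y)
for _ in range(50000):
    gx, gy = rosenbrock_grad(x, y)
    x -= lr * gx
    y -= lr * gy

print("start f =", f0, " final f =", rosenbrock(x, y), " at", (x, y))
```

Tens of thousands of tiny steps are needed to crawl along the valley toward (1,1), whereas Minuit-style minimizers reach it in a handful of iterations.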
Constrained optimization
In this section we discuss constrained optimization by linear approximation (COBYLA) for an arbitrary function with any number of variables. The function can be constrained by certain conditions. This algorithm is publicly available in the Jcobyla project, but it was adapted for use with scripting languages.
Let us consider a minimisation of a function with two variables. The function can be defined in Java as:
10.0 * Math.pow(x[0] + 1.0, 2.0) + Math.pow(x[1], 2.0)
Let us minimize this function and determine the point (x,y) where its value is smallest. We minimize the objective function above with respect to a set of inequality constraints “CON”. The function and CON may be non-linear, and should preferably be smooth.
The output of the above script is:
[-1.0, 0.0]
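The COBYLA result can be cross-checked directly: 10(x+1)^2 + y^2 is smallest when both squared terms vanish, i.e. at (-1, 0). The plain-Python sketch below verifies this with simple gradient descent on the same unconstrained problem (COBYLA itself builds linear approximations instead of using gradients):

```python
def f(x, y):
    # objective from the text: 10*(x+1)^2 + y^2, minimum at (-1, 0)
    return 10.0 * (x + 1.0) ** 2 + y * y

x, y, lr = 1.0, 1.0, 0.04
for _ in range(300):
    x -= lr * 20.0 * (x + 1.0)   # d f / dx = 20*(x+1)
    y -= lr * 2.0 * y            # d f / dy = 2*y

print([round(x, 6), round(y, 6)])
```

The iterates contract geometrically onto (-1, 0), matching the COBYLA output above.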
Note that the corresponding Java code looks as follows:
public void test01FindMinimum() { // Java example
    double rhobeg = 0.5;
    double rhoend = 1.0e-6;
    int iprint = 1;
    int maxfun = 3500;
    System.out.format("%nOutput from test problem 1 (Simple quadratic)%n");
    Calcfc calcfc = new Calcfc() {
        @Override
        public double Compute(int n, int m, double[] x, double[] con) {
            return 10.0 * Math.pow(x[0] + 1.0, 2.0) + Math.pow(x[1], 2.0);
        }
    };
    double[] x = {1.0, 1.0};
    CobylaExitStatus result = Cobyla.FindMinimum(calcfc, 2, 0, x,
                                                 rhobeg, rhoend, iprint, maxfun);
}
The code illustrates that we override the Compute() method of Calcfc with custom code.
Let us make a more complicated example: minimization of a function of two variables inside the unit circle. This adds a constraint to the minimization. The example is shown below:
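The exact objective of this constrained example is not reproduced in the text; to illustrate the idea, assume the same quadratic 10(x+1)^2 + y^2 restricted to the unit disk x^2 + y^2 <= 1. The hedged plain-Python sketch below uses projected gradient descent, a different algorithm from COBYLA but solving the same kind of constrained problem:

```python
import math

def project_to_disk(x, y):
    """Project a point onto the unit disk x^2 + y^2 <= 1."""
    r = math.hypot(x, y)
    return (x / r, y / r) if r > 1.0 else (x, y)

x, y, lr = 1.0, 1.0, 0.04
for _ in range(2000):
    # gradient step on the assumed f(x, y) = 10*(x+1)^2 + y^2 ...
    x -= lr * 20.0 * (x + 1.0)
    y -= lr * 2.0 * y
    # ... followed by projection back into the feasible disk
    x, y = project_to_disk(x, y)

print("constrained minimum near (%.4f, %.4f)" % (x, y))
```

Here the unconstrained minimum (-1, 0) happens to lie on the boundary of the disk, so the constrained iteration converges to the same point; with a tighter constraint the projection would hold the solution on the circle away from the unconstrained minimum.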
The next example shows minimization in three variables: we will minimize an ellipsoid with one constraint.
Finally, the next example shows minimization of a function with 9 variables and 14 constraints: