tracts.legacy.optimize
- optimize(p0, bins, Ls, data, nsamp, model_func, outofbounds_fun=None, cutoff=0, verbose=0, flush_delay=0.5, epsilon=0.001, gtol=1e-05, maxiter=None, full_output=True, func_args=None, fixed_params=None, ll_scale=1)
Optimizes parameters to fit model to data using the BFGS method.
- Parameters:
p0 – Initial parameters.
data – Spectrum with data.
model_func – Function to evaluate the model spectrum. Should take arguments (params, pts).
outofbounds_fun (default None) – A function evaluating to True if the current parameters are in a forbidden region.
cutoff (default 0) – The number of bins to drop at the beginning of the array. This could be achieved with masks.
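As a small illustration (toy data, not tracts code), dropping the first cutoff bins with a slice gives the same result as hiding them with a numpy masked array:

```python
import numpy as np

counts = np.array([5.0, 3.0, 8.0, 2.0, 1.0])  # toy per-bin tract counts
cutoff = 2  # drop the two shortest-tract bins

# Option 1: slice off the first `cutoff` bins.
sliced = counts[cutoff:]

# Option 2: mask them; masked entries are ignored by reductions like sum().
masked = np.ma.masked_array(counts, mask=[i < cutoff for i in range(len(counts))])

print(sliced.sum())  # 11.0
print(masked.sum())  # 11.0 -- same total, the bins are just hidden
```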
verbose (default 0) – If greater than zero, print optimization status every verbose steps.
flush_delay (default 0.5) – Standard output will be flushed once every flush_delay minutes. This is useful to avoid overloading I/O on clusters.
epsilon (default 1e-3) – Step-size to use for finite-difference derivatives.
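For intuition, here is a minimal sketch of a forward finite-difference gradient with step epsilon (a stand-in for what the optimizer does internally, not the scipy implementation):

```python
def finite_diff_grad(f, params, epsilon=1e-3):
    """Approximate the gradient of f at params by forward differences."""
    base = f(params)
    grad = []
    for i in range(len(params)):
        stepped = list(params)
        stepped[i] += epsilon  # perturb one parameter by the step size
        grad.append((f(stepped) - base) / epsilon)
    return grad

# Example: f(x, y) = x**2 + 3*y has gradient (2x, 3).
g = finite_diff_grad(lambda p: p[0]**2 + 3*p[1], [1.0, 0.0])
# g[0] is close to 2, g[1] is close to 3
```

A smaller epsilon reduces truncation error but amplifies floating-point noise, which is why it is exposed as a tunable.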
gtol (default 1e-5) – Convergence criterion for optimization. For more info, see help(scipy.optimize.fmin_bfgs).
maxiter (default None) – Maximum iterations to run for.
full_output (default True) – If True, returns full outputs as described in help(scipy.optimize.fmin_bfgs).
func_args (default None) – List of additional arguments to model_func. It is assumed that model_func’s first argument is an array of parameters to optimize.
fixed_params (default None) – (Not yet implemented). If not None, should be a list used to fix model parameters at particular values. For example, if the model parameters are (nu1, nu2, T, m), then fixed_params = [0.5, None, None, 2] will hold nu1=0.5 and m=2; the optimizer will only vary nu2 and T. Note that the bounds lists must include all parameters. Optimization will fail if the fixed values lie outside their bounds. A full-length p0 should be passed in; values corresponding to fixed parameters are ignored.
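Since fixed_params is not yet implemented here, a common workaround is to rebuild the full parameter vector inside the objective. The helper below is a hypothetical sketch of that pattern, not part of tracts:

```python
def expand_params(free_params, fixed_params):
    """Merge optimizer-controlled values into the slots marked None.

    fixed_params: full-length list; None marks a free parameter.
    free_params:  values for the None slots, in order.
    """
    it = iter(free_params)
    return [next(it) if fixed is None else fixed for fixed in fixed_params]

# With model parameters (nu1, nu2, T, m), hold nu1=0.5 and m=2
# while the optimizer searches over (nu2, T):
full = expand_params([1.7, 0.3], [0.5, None, None, 2])
# full == [0.5, 1.7, 0.3, 2]
```

The optimizer then works on the short free-parameter vector, and the objective calls expand_params before evaluating the model.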
ll_scale (default 1) – The BFGS algorithm may fail if your initial log-likelihood is too large. (This appears to be a flaw in the scipy implementation.) To overcome this, pass ll_scale > 1, which will simply reduce the magnitude of the log-likelihood. Once in a region of reasonable likelihood, you’ll probably want to re-optimize with ll_scale=1.
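Dividing the objective by ll_scale is safe because rescaling a log-likelihood does not move its optimum. A toy sketch with scipy's fmin_bfgs, using a large-magnitude quadratic as a stand-in for the negative log-likelihood (not tracts code):

```python
import numpy as np
from scipy.optimize import fmin_bfgs

def neg_ll(params):
    # Stand-in objective with a huge magnitude, minimized at params = [3, -1].
    return 1e8 * ((params[0] - 3.0) ** 2 + (params[1] + 1.0) ** 2 + 1.0)

ll_scale = 1e8  # shrink the objective to order 1 so BFGS behaves

xopt = fmin_bfgs(lambda p: neg_ll(p) / ll_scale,
                 np.array([0.0, 0.0]), disp=False)
# xopt is close to [3, -1]; scaling changed the values, not the minimizer.
```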
Notes
This optimization method works well when the starting point is reasonably close to the optimum; it is best at burrowing down a single minimum. It may also perform better when parameters range over large scales.