tracts.core.optimize_brute_multifracs

optimize_brute_multifracs(bins, Ls, data_list, nsamp_list, model_func, fracs_list, searchvalues, outofbounds_fun=None, cutoff=0, verbose=0, flush_delay=1, full_output=True, func_args=None, fixed_params=None, ll_scale=1)

Optimizes parameters to fit the model to the data using a brute-force search over the grid of candidate values in searchvalues.

Parameters:
  • searchvalues – Grid of candidate values for each parameter, over which the brute-force search is performed.

  • data_list – List of spectra with data.

  • model_func – Function to evaluate the model spectrum. Its first argument should be the array of parameters to optimize.

  • outofbounds_fun – A function that evaluates to True if the current parameters lie in a forbidden region.

  • cutoff – The number of bins to drop at the beginning of the array. This could be achieved with masks.

  • verbose – If > 0, print optimization status every verbose steps.

  • flush_delay – Standard output will be flushed once every flush_delay minutes. This is useful to avoid overloading I/O on clusters.

  • full_output – If True, return the full output of the underlying optimizer.

  • func_args – Additional arguments to model_func. It is assumed that model_func’s first argument is an array of parameters to optimize.

  • fixed_params – (Not yet implemented) If not None, should be a list used to fix model parameters at particular values. For example, if the model parameters are (nu1,nu2,T,m), then fixed_params = [0.5,None,None,2] will hold nu1=0.5 and m=2. The optimizer will only change nu2 and T. Note that the bounds lists must include all parameters. Optimization will fail if the fixed values lie outside their bounds. A full-length parameter list should be passed in; values corresponding to fixed parameters are ignored.

  • ll_scale – Optimization may fail if the initial log-likelihood is too large in magnitude. Using ll_scale > 1 reduces the log-likelihood magnitude, helping the optimizer reach a reasonable region. Afterward, re-optimize with ll_scale=1.
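The outofbounds_fun and fixed_params conventions above can be illustrated with a small standalone sketch. All names here (including the expand_fixed helper) are hypothetical and not part of tracts itself:

```python
# Hypothetical examples of the conventions described above; these
# helpers are illustrative only and do not come from tracts.

def outofbounds_fun(params):
    """Return True if the parameters lie in a forbidden region.

    Here we forbid a negative time T and a migration proportion m
    outside [0, 1] (an assumed two-parameter model for illustration).
    """
    T, m = params
    return T < 0 or not (0 <= m <= 1)

def expand_fixed(fixed_params, free_values):
    """Merge optimizer-controlled values into the full parameter list.

    fixed_params follows the convention above: a number at every
    position held fixed, and None at every position to be optimized.
    """
    free = iter(free_values)
    return [next(free) if fp is None else fp for fp in fixed_params]
```

For example, with fixed_params = [0.5, None, None, 2], the optimizer supplies only the two free values, and expand_fixed([0.5, None, None, 2], [0.1, 0.3]) reconstructs the full list [0.5, 0.1, 0.3, 2].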

Notes

This optimization performs well when the starting point is reasonably close to the optimum and is most effective at converging to a single minimum. It also tends to perform better when parameters vary across different scales.
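For reference, the brute-force strategy amounts to evaluating the objective at every combination of the per-parameter candidate values and keeping the best one. A minimal, tracts-independent sketch of that idea (all names are illustrative; tracts' own implementation handles the data spectra and likelihood internally):

```python
from itertools import product

def brute_force_minimize(objective, searchvalues):
    """Evaluate objective at every combination of the per-parameter
    value lists in searchvalues; return the best (params, value)."""
    best_params, best_val = None, float("inf")
    for params in product(*searchvalues):
        val = objective(params)
        if val < best_val:
            best_params, best_val = params, val
    return best_params, best_val

# Toy usage: minimize a quadratic over a coarse 3x3 grid.
params, val = brute_force_minimize(
    lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
    [[0, 1, 2], [-3, -2, -1]],
)
```

Because every grid point is evaluated, the result is only as good as the grid: the candidate values must bracket the optimum, and cost grows multiplicatively with the number of values per parameter.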