Fitting data with python

Curve fitting

Preparing noisy data: <source lang="py">
import numpy as np

Npoints = 30
x = np.linspace(1, 10, 100)       # fine grid, used later for plotting
xb = np.linspace(1, 10, Npoints)  # sample points
f = lambda x: np.sin(x)
yb = f(xb) + 0.3*np.random.normal(size=len(xb))  # noisy samples
</source>
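If reproducible noise is preferred, the random generator can be seeded before drawing the samples; a minimal optional step, not part of the original recipe (the seed value is arbitrary): <source lang="py">
import numpy as np

np.random.seed(0)  # arbitrary seed, only to make the noisy samples reproducible
</source>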

Using a polynomial fit, which is based on a generalized linear-regression algorithm and solves a linear system. <source lang="py">
from numpy.polynomial import polynomial as P

coeff, stats = P.polyfit(xb, yb, 9, full=True)  # degree-9 least-squares fit
fitpoly = P.Polynomial(coeff)                   # callable polynomial
print(stats)                                    # residuals, rank, singular values, rcond
</source> fitpoly is a callable function and coeff contains the coefficients of the optimal polynomial, ordered from the constant term upwards.
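As a quick check (a minimal sketch, reusing the objects defined above), fitpoly can be evaluated like any function and coeff inspected directly: <source lang="py">
print(fitpoly(2.5))  # value of the fitted polynomial at x = 2.5
print(coeff[0:3])    # constant, linear and quadratic coefficients
</source>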

Using curve_fit, which calls the leastsq algorithm and performs a step-by-step search for the minimum. <source lang="py">
from scipy.optimize import curve_fit

fitfunc = lambda x, a, b: a*np.sin(b*x)
p, pcov = curve_fit(fitfunc, xb, yb, p0=[1.0, 1.0])
print(p, np.sqrt(np.diag(pcov)))
</source> The last line prints the optimal parameters found and their uncertainties (the square roots of the diagonal of the covariance matrix). It is worth trying several initial guesses p0.
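A minimal sketch of what trying several guesses could look like (the candidate values below are arbitrary; curve_fit raises a RuntimeError when it fails to converge): <source lang="py">
for guess in ([1.0, 1.0], [0.5, 2.0], [2.0, 0.5]):
    try:
        p_try, pcov_try = curve_fit(fitfunc, xb, yb, p0=guess)
        print(guess, '->', p_try)
    except RuntimeError:
        print(guess, '-> no convergence')
</source>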

Plotting the results: <source lang="py">
import matplotlib.pyplot as plt

plt.scatter(xb, yb)                  # noisy data
plt.plot(x, f(x))                    # exact function
plt.plot(x, fitpoly(x))              # polynomial fit
plt.plot(x, fitfunc(x, p[0], p[1]))  # curve_fit result
plt.show()
</source>
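If labelled curves are wanted, the same plot can carry a legend (an optional variant, not in the original recipe): <source lang="py">
plt.scatter(xb, yb, label='noisy data')
plt.plot(x, f(x), label='exact')
plt.plot(x, fitpoly(x), label='polynomial fit')
plt.plot(x, fitfunc(x, p[0], p[1]), label='curve_fit')
plt.legend()
plt.show()
</source>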

Using the least-squares function directly

The basic syntax is the following: <source lang="py">
#!/usr/bin/python

from scipy import optimize
from numpy import array

# your data as lists
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 0.5, 0.0, -1.0]

# define a fitting function and the corresponding error function that must be minimized
# use the lambda shortcut or define standard functions with def fit():
# p is the list of parameters
fit = lambda p, x: p[0] + p[1]*x + p[2]*x**2
err = lambda p, x, y: fit(p, x) - y

# initial guess
p0 = [1.0, 1.0, 1.0]

# call optimize.leastsq to find the optimal parameters, converting the lists into numpy arrays on the fly
p, success = optimize.leastsq(err, p0, args=(array(x), array(y)), ftol=5e-9, xtol=5e-9)

# some info about convergence is in success and the optimized parameters are in p
</source>
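A minimal sketch of how the convergence flag might be checked: optimize.leastsq returns an integer flag, and values 1 to 4 indicate that a solution was found. <source lang="py">
# flag values 1, 2, 3 and 4 mean leastsq found a solution
if success in (1, 2, 3, 4):
    print('converged, optimal parameters:', p)
else:
    print('leastsq did not converge, flag =', success)
</source>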