Should the standard error (SE) of the best-fit coefficients be propagated?

Hi all,

This is how I understand it. Please correct me if I’m wrong:

When fitting models to experimental data, the statistical software produces a set of best-fit coefficients and automatically calculates an ‘asymptotic’ standard error (SE) for each one. The SE describes the precision with which each best-fit value has been determined. We fit 10 datasets one by one by nonlinear curve fitting, which gives us a list of 10 ‘b’ values with their respective SEs. The fitting is done in SigmaPlot using a single-exponential, three-parameter equation called ‘exponential rise to maximum’:
y = y0 + a ( 1 - e^(-bx) )

Based on this, how should the SE of the mean ‘b’ be calculated?
Should the SEs of the individual ‘b’ values be propagated to the mean, and if so, how is that done?

Some example ‘b’ values with their SEs:
b1: 2.3775 SE: 0.3505
b2: 2.5900 SE: 0.2740
b3: 1.9496 SE: 0.2937
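To make the question concrete, here is a small sketch (in Python, purely for illustration; the variable names are my own) of the two approaches I can think of: the ordinary standard error of the mean computed from the scatter of the ‘b’ values, and an inverse-variance weighted mean in which each fit’s SE is propagated:

```python
import math

# Three of the ten fitted 'b' values and their asymptotic SEs (from the post)
b  = [2.3775, 2.5900, 1.9496]
se = [0.3505, 0.2740, 0.2937]
n  = len(b)

# Option 1: ignore the per-fit SEs; use the scatter of the 'b' values
mean_b = sum(b) / n
sd_b   = math.sqrt(sum((x - mean_b) ** 2 for x in b) / (n - 1))
sem_b  = sd_b / math.sqrt(n)          # ordinary standard error of the mean

# Option 2: propagate the per-fit SEs via inverse-variance weighting
w      = [1.0 / s ** 2 for s in se]   # weight each 'b' by 1/SE^2
mean_w = sum(wi * bi for wi, bi in zip(w, b)) / sum(w)
se_w   = math.sqrt(1.0 / sum(w))      # SE of the weighted mean

print(f"unweighted: {mean_b:.4f} +/- {sem_b:.4f}")
print(f"weighted:   {mean_w:.4f} +/- {se_w:.4f}")
```

Is one of these the correct way to summarize the 10 fits, or is neither appropriate here?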

I will be grateful for any help you can provide!