Determining the error of a slope when the dependent variable itself contains error

I am currently attempting to compare the slopes of many linear graphs. Each graph is a plot of Time (independent variable) vs. Effect Size (dependent variable), and I know the standard error associated with each measure of Effect Size. I want to know the error of the slope of each graph. However, every approach I have tried so far ignores the standard error of each Effect Size and only reports an error computed from the scatter about the line of best fit. How can I calculate an error of the slope that also takes into account this other source of error (the error from the calculation of each Effect Size)?

(Note: The Effect Sizes of each graph are not independent; I think they are correlated.)

I am currently using Graphical Analysis, but I am open to using other statistical software such as R, although I have very limited experience with these options.

Thank you.

Update: The Effect Sizes are in fact independent of one another, not correlated as I first thought. That should make things easier.

Approaches that may work:

1) Using bootstrap methods to create confidence intervals: resample the data to generate several thousand slopes, then estimate the error of the slope from the distribution of those slopes. Guidance would be appreciated, as I am not experienced with this.
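A minimal sketch of what approach 1 might look like in Python with NumPy (the data here are made up for illustration; this is a parametric bootstrap that perturbs each Effect Size by its own known standard error and refits the line each time):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example data: one graph's time points, effect sizes,
# and the known standard error of each effect size
time = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
effect = np.array([0.10, 0.35, 0.52, 0.80, 1.02])
se = np.array([0.05, 0.06, 0.05, 0.07, 0.06])

n_boot = 5000
slopes = np.empty(n_boot)
for i in range(n_boot):
    # Perturb each effect size by its own measurement error
    y = effect + rng.normal(0.0, se)
    # Refit a straight line; np.polyfit returns [slope, intercept]
    slopes[i] = np.polyfit(time, y, 1)[0]

slope_se = slopes.std(ddof=1)  # bootstrap standard error of the slope
ci_low, ci_high = np.percentile(slopes, [2.5, 97.5])  # 95% interval
print(f"slope SE ~ {slope_se:.4f}, 95% CI ~ ({ci_low:.4f}, {ci_high:.4f})")
```

This only captures the measurement error in the Effect Sizes; resampling the points themselves (a nonparametric bootstrap) would additionally capture scatter about the line.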

2) Performing linear regression using the most conservative estimates: refit the line with each Effect Size shifted up (or down) by its standard error, and take the resulting slopes as bounds. This would overestimate the error, however.
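One way to make approach 2 concrete, assuming the goal is to bracket the slope (same hypothetical data as above; the tilting rule below is one possible interpretation of "most conservative"):

```python
import numpy as np

# Hypothetical example data: Time vs. Effect Size with known standard errors
time = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
effect = np.array([0.10, 0.35, 0.52, 0.80, 1.02])
se = np.array([0.05, 0.06, 0.05, 0.07, 0.06])

def slope(t, y):
    # Ordinary least-squares slope; np.polyfit returns [slope, intercept]
    return np.polyfit(t, y, 1)[0]

# Tilt the data as far as the error bars allow in each direction:
# the steepest line uses low values early and high values late;
# the shallowest line is the reverse
sign = np.where(time < time.mean(), -1.0, 1.0)
steepest = slope(time, effect + sign * se)
shallowest = slope(time, effect - sign * se)

print(f"conservative slope range: [{shallowest:.3f}, {steepest:.3f}]")
```

Note that shifting every point in the same direction mostly moves the intercept; it is tilting the points in opposite directions at the two ends that produces the extreme slopes, which is why this bracket is wider than a proper error estimate.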