# Using the delta method

*This was originally posted at Point Mass Prior and features MathML. If you’re viewing from StatsBlogs the math probably won’t show up properly and it would be beneficial to view the post here*

Somebody recently asked me about the delta method, and specifically the `deltamethod` function in the msm package. I thought I would write about that, and to motivate it we'll look at a simple example where we fit a quadratic regression to some data. This means our model has the form

$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$$

where the $\epsilon_i$ are independent and identically distributed normal random variables with mean 0 and variance $\sigma^2$.

To start we'll generate some data from a quadratic with roots at $x = 0$ and $x = 10$, oriented so that it has a maximum instead of a minimum.
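The original data-generating code isn't reproduced here; below is a sketch under assumptions — the seed, sample size, and noise standard deviation are all my choices, not the post's. The true curve $y = -x(x - 10)$ has the stated roots and a maximum.

```r
set.seed(500)                           # assumed seed, not from the original post
n <- 30                                 # assumed sample size
x <- runif(n, 0, 10)                    # inputs spanning the two roots
# True mean curve: y = -x(x - 10) = 10x - x^2, maximum at x = 5
y <- -x * (x - 10) + rnorm(n, sd = 3)   # assumed error standard deviation
```

Any seed and moderate noise level will do; the exact numbers later in the post depend on the author's particular simulated data.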

We can plot the data to get a feel for it:
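The plotting call isn't shown in this copy of the post; a minimal version (data generation repeated so the block is self-contained, with my assumed seed and noise level):

```r
set.seed(500)                                   # assumed seed
x <- runif(30, 0, 10)
y <- -x * (x - 10) + rnorm(30, sd = 3)
plot(x, y, main = "Simulated quadratic data")   # scatterplot of the raw data
```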

Now it might be that what we're really interested in is the input value that gives us the maximum value for the response (on average). Let's call that value $\theta$. If we knew the true parameters for this data we could figure out exactly where that maximum occurs: for a quadratic function the maximum occurs at $-\beta_1/(2\beta_2)$. In our specific case the true curve is $y = -x(x - 10) = 10x - x^2$, so $\beta_1 = 10$ and $\beta_2 = -1$, and the maximum occurs at $\theta = 5$. Just eyeballing our plot, it doesn't look like the fitted quadratic will give us a maximum that occurs exactly at $x = 5$. Let's actually fit the quadratic regression and see what we get for the estimated value of $\theta$, which I will call $\hat\theta$.
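The fitting code isn't shown here; a sketch follows. The fit object is named `o` to match the later calls to `coef(o)`; the data generation (seed, noise level) is my assumption, so the estimate below won't exactly match the 5.1903 reported in the post.

```r
set.seed(500)                                   # assumed seed
x <- runif(30, 0, 10)
y <- -x * (x - 10) + rnorm(30, sd = 3)

o <- lm(y ~ x + I(x^2))                         # quadratic regression fit
b <- coef(o)
theta_hat <- -b["x"] / (2 * b["I(x^2)"])        # plug-in estimate of the maximum's location
theta_hat
```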

So our estimate of the value where the maximum occurs is $\hat\theta = 5.1903$. This is pretty close, but it would still be nice to have some sort of interval to go along with our estimate. This is where the delta method can help us out. The delta method can be thought of as a way to get an estimated standard error for a transformation of estimated parameter values. In our case we're interested in applying the function

$$g(\beta_0, \beta_1, \beta_2) = \frac{-\beta_1}{2\beta_2}$$

to our estimated parameters.

To perform the delta method we need to know a little bit of calculus: the method requires taking derivatives of our function of interest. This isn't too bad to do in practice, but not everybody who wants to perform an analysis will know how to take derivatives (or at least it might have been a long time since they've taken one).
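For reference, this is the standard first-order delta method: it approximates the variance of the transformation $g$ of the estimates $\hat\beta$ by

$$\widehat{\operatorname{Var}}\big(g(\hat\beta)\big) \approx \nabla g(\hat\beta)^{\top} \, \widehat{\Sigma} \, \nabla g(\hat\beta)$$

where $\nabla g$ is the gradient of $g$ (this is where the derivatives come in) and $\widehat{\Sigma}$ is the estimated covariance matrix of $\hat\beta$. The estimated standard error is the square root of this quantity.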

Luckily for us, we don't have to do the delta method by hand, though, as long as we know the transformation of interest. The `deltamethod` function in the msm package provides a convenient way to get the estimated standard error of the transformation as long as we can provide:

1. The transformation of interest
2. The estimated parameter values
3. The covariance matrix of the estimated parameters

We already know (1), but we have to make sure we write it in the proper syntax for `deltamethod`. We can easily obtain (2) by using `coef` on our fitted model, and we can just as easily obtain (3) by using `vcov` on the fitted model.

When writing the syntax for the transformation for the `deltamethod` function, you need to refer to the first parameter as `x1`, the second parameter as `x2`, and so on. So if I wanted to find the standard error for the sum of two parameters I would write that as `~ x1 + x2`. In our case our estimated parameters come from the output of `coef(o)`, so let's sneak a peek at them to remind ourselves of the output order.
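The printed output isn't preserved in this copy; a sketch of the peek (fit repeated under my assumed data generation so the block runs on its own):

```r
set.seed(500)                            # assumed seed
x <- runif(30, 0, 10)
y <- -x * (x - 10) + rnorm(30, sd = 3)
o <- lm(y ~ x + I(x^2))

coef(o)   # printed in order: (Intercept), x, I(x^2)
```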

So in this case, when writing our transformation, we would refer to $\hat\beta_0$ (the intercept) as `x1`, $\hat\beta_1$ (the coefficient on $x$) as `x2`, and $\hat\beta_2$ (the coefficient on $x^2$) as `x3`. As a reminder, the transformation we applied was $g(\beta_0, \beta_1, \beta_2) = -\beta_1/(2\beta_2)$, so the formula we want is `~ -x2 / (2 * x3)`.
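Putting the pieces together; this sketch assumes the msm package is installed and rebuilds the fit under my assumed data generation, so the interval's endpoints won't match the post's exactly. The interval is a 95% Wald-style interval, $\hat\theta \pm 1.96 \cdot \widehat{\mathrm{se}}$.

```r
library(msm)                                    # provides deltamethod()

set.seed(500)                                   # assumed seed
x <- runif(30, 0, 10)
y <- -x * (x - 10) + rnorm(30, sd = 3)
o <- lm(y ~ x + I(x^2))

theta_hat <- -coef(o)["x"] / (2 * coef(o)["I(x^2)"])
# Delta-method standard error of g(beta) = -beta1 / (2 * beta2)
se <- deltamethod(~ -x2 / (2 * x3), coef(o), vcov(o))

theta_hat + c(-1, 1) * 1.96 * se                # approximate 95% confidence interval
```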

So we see that our confidence interval does contain the true value. We could also do a hypothesis test if we wanted to test against a certain value; here we'll use the null hypothesis that the true $\theta = 5$.
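A sketch of the Wald test against $H_0\colon \theta = 5$ (again assuming msm and my simulated data; the p-value reported below in the post, 0.473, came from the author's own data and won't be reproduced exactly):

```r
library(msm)

set.seed(500)                                   # assumed seed
x <- runif(30, 0, 10)
y <- -x * (x - 10) + rnorm(30, sd = 3)
o <- lm(y ~ x + I(x^2))

theta_hat <- -coef(o)["x"] / (2 * coef(o)["I(x^2)"])
se <- deltamethod(~ -x2 / (2 * x3), coef(o), vcov(o))

z <- (theta_hat - 5) / se                       # Wald statistic under H0: theta = 5
p_value <- 2 * pnorm(-abs(z))                   # two-sided p-value
p_value
```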

Our p-value of 0.473 doesn’t allow us to reject the null hypothesis in this situation.

So we can see that it's fairly easy to implement the delta method in R. This isn't necessarily my favorite way to get intervals for transformations of parameters, but if you're a frequentist it can be quite useful.