docstring release coming soon

This was originally posted at Point Mass Prior and features MathML. If you’re viewing the blog elsewhere, the math probably won’t show up properly, so it would be beneficial to view the post here.

I officially submitted version 1.0.0 of docstring to CRAN. Hopefully the release will go live in the near future. Once it does, I’ll post an update with more information about the package.


Turn Down (For What)

The following is an analysis of Lil’ Jon’s magnum opus “Turn Down (For What)”.

The buildup features a beat with a Shepard tone in the background building up the tension. One gets the impression that this is going to be a fun time. Lil’ Jon doesn’t take too long to come in, and at the 16 second mark we hear him proclaim “Fire up your loud - another round of shots”. The implication, of course, is that it’s time to party and that clearly they’ve had at least one round of shots already. This seems like it’s going to be a good night.

After the music drops, Lil’ Jon grabs your attention with the question “Turn down for what?”. It’s a question he wants you to ponder. What is there that would make you want to ‘turn down’? Now, turning up is taken to mean having a crazy good time partying, typically by means of large amounts of alcohol and/or illegal drug use. Lil’ Jon can’t find a reason one wouldn’t want to turn up. He asks the question five times in total. He is demanding an answer and you have failed to provide one. He is searching for meaning in this life and, failing to find it, he turns to other means to numb the pain he feels. Notice that he is not alone during his inquisition. There are clearly others having a good time in the background calling out “eh” and making some fun-sounding noises! He has his posse there but is still searching. His crew isn’t quite enough for him. He wants to know why somebody would turn down but can’t think of a reason.

Failing to get an answer, he proclaims once again “Fire up your loud - another round of shots”. This is at least the third round of shots in the song. He isn’t slowing down, and neither is his search for meaning. He once again asks “turn down for what” five more times. What seemed like a party anthem is quickly turning into a cry for help. One really connects with the pain and the longing that Lil’ Jon is expressing through the repeated asking of this simple question.

Just then it seems like he has given up hope of finding an answer, because he starts again with “fire up your loud - another round of shots”. But he doesn’t stop. Just like Miley, he can’t stop. Again he yells “fire up your loud - another round of shots”. And once more: “fire up your loud - another round of shots”. Six shots at least so far in this short period of time. Just when you start to think he has to slow down eventually, he picks up the pace and does “fire up your loud - another round of shots shots shots …”, repeating “shots” at least thirty times. He is downing these shots, and one has to hope they are either watered down or tiny, tiny shots, because he is going to end up in the hospital if he really did consume that much hard alcohol in such a short period of time. That very well might be his goal. This is quite clearly a cry for help. He is desperate and seeking an answer by any means necessary, even if that means death via alcohol poisoning. How much pain has this young man felt? What else has he tried in his life that he found to be no suitable replacement for turning up? Was it a lover that hurt him? A rough childhood that he wants to forget? Does he place his hope, like the philosopher 50 Cent, in the goal to “get rich or die trying”, but find himself unable to satisfy this lust for money? Whatever is causing his pain, you can feel it in this song.

After sinking into this depression and trying to kill all feeling you would think that he just wouldn’t care anymore. Lil’ Jon is full of surprises though because just when you think he’s down and out he turns philosopher once more. He asks the question “turn down for what” five more times. However, the last “turn down for what” sounds different. It is pained. It sounds like death. Could this be an attempt to symbolize the death of hope that we thought we saw so many times in this song already? There are no more lyrics to shed any light on this. All we are left with is a simple beat. And like all things in life - that too must end.


Finally bought my own domain name

I did it. I finally bought my own domain name. You’ll notice that the site now lives at the new domain instead of the old GitHub address. Previously I didn’t want to have to pay a monthly fee, and that’s what I thought I would have to do for a domain name. Turns out I’m an idiot: since GitHub is already hosting my site, all I needed to do was purchase the domain name itself. I used Hover, and the domain cost $12 a year (I found an online coupon to reduce the price by $3). I could have obtained the domain name cheaper, but I’ve read good things about Hover so I decided to go through them. Then it was just a matter of configuring a few things so that the GitHub site would show up when you visit the new domain. I’m sure most of you know all of this, but I never really cared enough to learn it and finally decided to take that step.

Now that I’ve updated the domain name, over the next few days I’m thinking I’m going to give this entire place an overhaul: new theme, new layout, make it more than just a blog…


Using nls in place of the delta method


It’s been a while since my last post, which was on using the delta method in R with a specific application to finding the ‘x’ value that corresponds to the maximum/minimum value in a quadratic regression. This post will be about how to do the same thing in a slightly different way. Quadratic regression can be fit using a linear model of the form

$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \varepsilon_i$$

where the $\varepsilon_i$ are independent and identically distributed normal random variables with mean 0 and variance $\sigma^2$. However, if our concern is the ‘x’ value that provides the minimum/maximum, and possibly the value of the response at that point, we can reformulate the model as

$$y_i = \theta_1 (x_i - \theta_2)^2 + \theta_3 + \varepsilon_i$$

so that the x value corresponding to the minimum/maximum is represented directly through the parameter $\theta_2$. The actual minimum/maximum value is also represented as a parameter, $\theta_3$. In this case $\theta_1$ can be interpreted as half of the second derivative with respect to x, but that isn’t of as much interest here. Note that we can expand this model out to get the same form as the linear model, so it really is representing the same model; notice, though, that it is no longer linear in the parameters.
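
Writing the quadratic mean function as $\theta_1 (x - \theta_2)^2 + \theta_3$ (the t1, t2, t3 of the nls fit below) and expanding the square makes the correspondence with the linear parameterization $\beta_0 + \beta_1 x + \beta_2 x^2$ explicit:

```latex
\theta_1 (x - \theta_2)^2 + \theta_3
  = \theta_1 x^2 - 2\theta_1\theta_2\, x + \left(\theta_1\theta_2^2 + \theta_3\right)
```

Matching coefficients gives $\beta_2 = \theta_1$, $\beta_1 = -2\theta_1\theta_2$, and $\beta_0 = \theta_1\theta_2^2 + \theta_3$; inverting, $\theta_2 = -\beta_1/(2\beta_2)$ and $\theta_3 = \beta_0 - \beta_1^2/(4\beta_2)$, which are exactly the maximum/minimum location and value considered in the delta method post.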

To fit this we need to use something other than lm. The natural choice in R is nls. We’ll look at an example of how to fit this model and get confidence intervals for the quantities of interest, using the same simulated data as my previous post so we can compare the delta method and nls approaches on the same problem.

As a reminder, the exact model we fit previously was $y = 10x - x^2 + \varepsilon$ with $\varepsilon \sim N(0, 8^2)$, so written in the same form as our nonlinear model it is $y = -(x - 5)^2 + 25 + \varepsilon$. So the maximum occurs at $x = 5$ and produces an output of 25 at that location.

n <- 30
x <- runif(n, 0, 10)
y <- -x * (x - 10) + rnorm(n, 0, 8)  # y = 0 +10x - x^2 + error

Now we fit our model using nls. We need to provide starting values for the parameters since nls fits using an iterative procedure. I provide some pretty bad starting values here but it still fits just fine.

o <- nls(y ~ t1 * (x - t2)^2 + t3, start = list(t1 = 1, t2 = 1, t3 = 1))

Now we can look at the output from summary:

summary(o)

## Formula: y ~ t1 * (x - t2)^2 + t3
## Parameters:
##    Estimate Std. Error t value Pr(>|t|)    
## t1   -1.040      0.239   -4.35  0.00017 ***
## t2    5.190      0.265   19.57  < 2e-16 ***
## t3   25.089      2.661    9.43  4.9e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## Residual standard error: 8.71 on 27 degrees of freedom
## Number of iterations to convergence: 8 
## Achieved convergence tolerance: 2.22e-07

We see that the estimated value at which the maximum occurs is 5.1903. If we go back to the delta method post we see that we obtained the same estimate. Another interesting point is that the standard error for this term is the same as the one obtained using the delta method: in both cases we get a standard error of 0.2652.

We can easily obtain confidence intervals for the parameters using confint:

confint(o)

## Waiting for profiling to be done...
##      2.5%   97.5%
## t1 -1.530 -0.5499
## t2  4.639  5.8815
## t3 19.650 30.5620

Now recall that when we used the delta method we relied on the asymptotic normality of the transformed estimator, so the previous interval, which went from 4.671 to 5.710, was based on a normal distribution assumption. When using confint with an nls object the interval comes from profiling the likelihood (hence the “Waiting for profiling to be done...” message above), so here it comes out a little wider and need not be symmetric around the estimate. Since we have the same estimate and the same standard error as with the delta method, we could construct the same asymptotic-normality interval here if we wanted. Alternatively, confint.default will use a normal distribution to create the confidence intervals:

confint.default(o)

##     2.5 %  97.5 %
## t1 -1.508 -0.5719
## t2  4.671  5.7101
## t3 19.874 30.3054

And here we see that we get the same confidence interval as when we used the asymptotic normality argument to get the confidence intervals for the delta method approach.
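
As a sketch of what confint.default is doing under the hood, the normal-based interval for t2 can be built by hand from the estimate and standard error. Note the set.seed call and the starting values near the truth are my additions for reproducibility (the post itself doesn’t fix a seed), so these numbers won’t match the output above exactly:

```r
set.seed(42)  # assumed seed; the original post does not set one
n <- 30
x <- runif(n, 0, 10)
y <- -x * (x - 10) + rnorm(n, 0, 8)

# Same nonlinear fit as in the post (starting values chosen near the truth)
o <- nls(y ~ t1 * (x - t2)^2 + t3, start = list(t1 = -1, t2 = 5, t3 = 25))

est <- coef(o)["t2"]
se  <- sqrt(diag(vcov(o)))["t2"]

# Normal-based 95% interval: estimate +/- z_{0.975} * standard error
ci_hand <- unname(est + c(-1, 1) * qnorm(0.975) * se)

ci_hand
unname(confint.default(o)["t2", ])  # same construction, so same interval
```

The two printed intervals agree because confint.default does exactly this computation from coef and vcov.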


Using the delta method


Somebody recently asked me about the delta method, specifically the deltamethod function in the msm package. I thought I would write about that, and to motivate it we’ll look at an example. The example we’ll consider is a simple case where we fit a quadratic regression to some data. This means our model has the form

$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \varepsilon_i$$

where the $\varepsilon_i$ are independent and identically distributed normal random variables with mean 0 and variance $\sigma^2$.

To start we’ll generate some data such that we have roots at x=0 and x=10 and the quadratic is such that we have a maximum instead of a minimum.

n <- 30
x <- runif(n, 0, 10)
y <- -x * (x - 10) + rnorm(n, 0, 8)  # y = 0 +10x - x^2 + error

We can plot the data to get a feel for it:

plot(x, y)


Now it might be that what we’re really interested in is the input value that gives us the maximum value for the response (on average). Let’s call that value $x_{max}$. If we knew the true parameters for this data we could figure out exactly where that maximum occurs: for a quadratic function the maximum occurs at $x = -\beta_1/(2\beta_2)$. In our specific case we have $\beta_1 = 10$ and $\beta_2 = -1$, so the maximum occurs at $x = 5$. Just eyeballing our plot, it doesn’t look like the fitted quadratic will give us a maximum that occurs exactly at $x = 5$. Let’s actually fit the quadratic regression and see what we get for the estimated value of $x_{max}$, which I will call $\hat{x}_{max}$.

# Estimate quadratic regression
o <- lm(y ~ x + I(x^2))
# View the output
summary(o)
## Call:
## lm(formula = y ~ x + I(x^2))
## Residuals:
##    Min     1Q Median     3Q    Max 
## -14.26  -6.13  -1.49   7.62  13.87 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   -2.923      4.983   -0.59  0.56233    
## x             10.794      2.422    4.46  0.00013 ***
## I(x^2)        -1.040      0.239   -4.35  0.00017 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 
## Residual standard error: 8.71 on 27 degrees of freedom
## Multiple R-squared: 0.424,	Adjusted R-squared: 0.381 
## F-statistic: 9.93 on 2 and 27 DF,  p-value: 0.000585
# Make scatterplot and add the estimated curve
plot(x, y, main = "Scatterplot with estimated regression curve", xlim = c(0, 10))
curve(coef(o)[1] + coef(o)[2] * x + coef(o)[3] * x^2, col = "red", add = TRUE)
# Add a line at the theoretical maximum
abline(v = 5, col = "black")
# Estimate the xmax value
beta2 <- coef(o)["I(x^2)"]
beta1 <- coef(o)["x"]
estmax <- unname(-beta1/(2 * beta2))
# Add a line at estimated maximum
abline(v = estmax, col = "blue", lty = 2)
legend("topleft", legend = c("True max", "Estimated max", "Estimated curve"), 
    col = c("black", "blue", "red"), lty = c(1, 2, 1))


So our estimate of the value where the maximum occurs is 5.1903. This is pretty close, but it would still be nice to have some sort of interval to go along with our estimate. This is where the delta method can help us out. The delta method can be thought of as a way to get an estimated standard error for a transformation of estimated parameter values. In our case we’re interested in applying the function

$$g(\beta_0, \beta_1, \beta_2) = -\frac{\beta_1}{2\beta_2}$$

to our estimated parameters.

To perform the delta method we need to know a little bit of calculus: the method requires taking derivatives of our function of interest. This isn’t too bad to do in practice, but not everybody who wants to perform an analysis will know how to take derivatives (or at least it might have been a long time since they’ve taken one).
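
For our transformation $g(\beta_0, \beta_1, \beta_2) = -\beta_1/(2\beta_2)$, those derivatives are $\partial g/\partial\beta_0 = 0$, $\partial g/\partial\beta_1 = -1/(2\beta_2)$, and $\partial g/\partial\beta_2 = \beta_1/(2\beta_2^2)$, and the delta-method standard error is $\sqrt{\nabla g^\top V \nabla g}$ where $V$ is the covariance matrix of the estimates. A minimal by-hand sketch of that computation (with an assumed set.seed for reproducibility, so the numbers won’t match the post’s output exactly):

```r
set.seed(42)  # assumed seed; the original post does not set one
n <- 30
x <- runif(n, 0, 10)
y <- -x * (x - 10) + rnorm(n, 0, 8)

o <- lm(y ~ x + I(x^2))
b <- coef(o)   # order: (Intercept), x, I(x^2)
V <- vcov(o)   # estimated covariance matrix of the coefficients

# Analytic gradient of g(b0, b1, b2) = -b1 / (2 * b2):
#   dg/db0 = 0,  dg/db1 = -1/(2*b2),  dg/db2 = b1/(2*b2^2)
grad <- c(0, -1 / (2 * b[3]), b[2] / (2 * b[3]^2))

# Delta-method standard error: sqrt(grad' V grad)
se_hand <- sqrt(drop(t(grad) %*% V %*% grad))
se_hand

# Sanity check: a finite-difference gradient should agree with the analytic one
g <- function(b) -b[2] / (2 * b[3])
num_grad <- sapply(seq_along(b), function(i) {
  bp <- b
  bp[i] <- bp[i] + 1e-6
  (g(bp) - g(b)) / 1e-6
})
max(abs(num_grad - grad))  # should be tiny
```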

Luckily, we don’t have to do the delta method by hand as long as we know the transformation of interest. The deltamethod function in the msm package provides a convenient way to get the estimated standard error of the transformation as long as we can provide:

  1. The transformation of interest
  2. The estimated parameter values
  3. The covariance matrix of the estimated parameters

We already know (1), but we have to make sure we write it in the proper syntax for deltamethod. We can easily obtain (2) by using coef on our fitted model, and we can just as easily obtain (3) by using vcov on the fitted model.

When writing the syntax for the transformation for deltamethod, you need to refer to the first parameter as x1, the second parameter as x2, and so on. So if I wanted to find the standard error for the sum of two parameters I would write that as ~ x1 + x2. In our case our estimated parameters come from the output of coef(o), so let’s sneak a peek at them to remind ourselves of the output order.

coef(o)
## (Intercept)           x      I(x^2) 
##      -2.923      10.794      -1.040

So in this case, when writing our transformation, we refer to $\beta_0$ as x1, $\beta_1$ as x2, and $\beta_2$ as x3. As a reminder, the transformation we applied was $-\beta_1/(2\beta_2)$, so the formula we want is ~ -x2 / (2 * x3).

library(msm)
standerr <- deltamethod(~-x2/(2 * x3), coef(o), vcov(o))
standerr
## [1] 0.2652
# Make a confidence interval
(ci <- estmax + c(-1, 1) * qnorm(0.975) * standerr)
## [1] 4.671 5.710

So we see that our confidence interval does contain the true value. We could also do a hypothesis test against a particular value; here we’ll test the null hypothesis that the true $x_{max} = 5$.

# If we want to do a hypothesis test of Ho: xmax = 5 Ha: xmax != 5
z <- (estmax - 5)/standerr
# Calculate p-value
pval <- 2 * pnorm(-abs(z))
pval
## [1] 0.4729

Our p-value of 0.473 doesn’t allow us to reject the null hypothesis in this situation.

So we can see that it’s fairly easy to implement the delta method in R. Now this isn’t necessarily my favorite way to get intervals for transformations of parameters but if you’re a frequentist then it can be quite useful.