Measures of Variability – Absolute Deviation

The range, as we discussed last time, isn’t really used as a measure of variability these days and the IQR isn’t terribly informative.

Since I’m a big fan of checking the density of data as a descriptor let’s try that before we continue. Let’s grab those objects we made last time.

versi <- iris[iris$Species=='versicolor',]
petal <- versi$Petal.Length
y <- 1:50
versidf <- data.frame(y, petal)

Okay, let’s try it out with both ggplot2 and with the base R functions. Remember that you have to load ggplot2 with the library() function each time you start R!

Just for fun let's compare the typical Gaussian kernel with the much more exotic rectangular kernel. It would be comforting to see that the densities are essentially in agreement even when we change something as fundamental as the kernel. After all, we know that no description of the data will be perfect, so it's nice to know that the description isn't going to be randomly misleading.

plot(density(petal),col="#E69F00")
lines(density(petal, kernel = "rectangular"), col = "#000000")

p <- ggplot(versidf, aes(x = petal))
p + 	geom_density(kernel = 'rectangular', size=1, color="#000000") +
	geom_density(size=1, color="#E69F00") +
	theme(panel.grid.major = element_line(size = 1),
		panel.grid.minor = element_line(size = .8))

[Figure rectgaus: Gaussian and rectangular kernel densities of the versicolor petal lengths]

Yep, those are both in agreement about where the mass of the data is. There are a lot of other kernels we could use, like the Epanechnikov, triangular, biweight, cosine, and optcosine. They all have particular strengths and weaknesses but they broadly agree in most cases.

Let's go back to just using the Gaussian density for now since it's much easier to read. We can put the measures we developed onto the plot, with base R and with ggplot2, in order to evaluate them. Here's the IQR shown against the density.

plot(density(petal))
abline(v=c(4,4.6), col = '#33CC33')

p <- ggplot(versidf, aes(x = petal))
p + geom_density(size = 1) +
	geom_vline(xintercept = c(4,4.6), size = 1, color = '#33CC33') +
	theme(panel.grid.major = element_line(size = 1),
		panel.grid.minor = element_line(size = .8))

[Figure iqrden: the IQR plotted against the density of the petal lengths]

So clearly the IQR is capturing a lot of the density (including the point of highest density) and it does represent a piece of information that is very easy to understand. Unfortunately it isn't making use of much of the information in the data. Because of this the IQR is not very efficient; it doesn't generalize very well without a very large sample.

The most popular measures of spread are ones that make use of all of the data points. Of these the most basic is the median absolute deviation. We saw the related concept of the mean absolute deviation in an earlier post when talking about the measures of central tendency. The MAD is the median distance from each data point to the median of the data. Like the median and the IQR, the MAD is considered robust: it is not easily distorted by outliers. It is also more efficient than the IQR.

It is somewhat rare to report the strict MAD, though, because the exact number is not of much interest. As a result it is usually scaled to mimic a more popular measure of variability called the standard deviation. With R we can get the strict MAD if we want but it gives us the traditional scaled version by default.

mad(petal)
0.51891

mad(petal, constant = 1)
0.35

median(petal)
4.35
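
If we want to check the definition by hand, here's a quick sketch using the petal vector from above (1.4826 is the scaling constant that mad() applies by default):

median(abs(petal - median(petal)))            # the strict MAD: 0.35
1.4826 * median(abs(petal - median(petal)))   # R's default scaling, matches mad(petal)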

For convenience later we will now make a function that gives us p+q and p-q as a conveniently structured dataframe. These sorts of convenience functions are extremely useful if you expect to do something many times.

plusminus <- function(p, q, groups = seq(1, max(length(p), length(q))) ) {
	a <- p-q
	b <- p+q
	c <- c(a,b)
	d <- data.frame(vals = c, group = groups)
	d[order(d$group),]
	}

The code here is very simple; you just have to keep in mind how R treats vectors. There are three arguments: p and q are numeric objects (either scalars or vectors); groups is a vector the same size as the larger of p and q. In the function itself we subtract q from p, add q to p, and then make all of those results into a single vector. Notice that the vector "c" will contain all of the minus values followed by all of the plus values. Finally we put the values into a dataframe along with the groups. For the sake of neatness we then order the dataframe so that all of the groups are together.

plusminus(4.35,c(0.52,0.35))
  vals group
1 3.83     1
3 4.87     1
2 4.00     2
4 4.70     2

You can see immediately that, in this case, the strict MAD (group 2) is about the same as the IQR while the traditional MAD is obviously quite different. Let's compare the traditional MAD with the IQR in an image. We saw the cbPalette vector of color codes a while back; I've deleted the first one so that the lines stand out a little better here.

spread <- plusminus(c(4.3,4.35), c(.3,.52), c('IQR','MAD'))
cbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")

p <- ggplot(versidf, aes(x=petal))
p + 	geom_density(size=1) +
	geom_vline(data=spread,
		aes(xintercept=vals,
		color = group),
		show_guide = TRUE, size = 1) +
	scale_color_manual(values=cbPalette) +
	theme(panel.grid.major = element_line(size = 1),
		panel.grid.minor = element_line(size = .8))

[Figure madiqr: the traditional MAD and the IQR plotted against the density]

Measures of Variability – The Ranges

The measures of central tendency tell us how far from zero the data tends to be. The measures of variability tell us how far from the center the data tends to be. Obviously we know that all of the data isn't exactly at the mean or median. Just for the sake of illustration let's look at some sample data from the iris dataframe. This will also give us a chance to look at more ggplot2 code.

For clarity let’s start by defining a few objects we’ll be using later.

versi <- iris[iris$Species=='versicolor',]
petal <- versi$Petal.Length
y <- 1:50
versidf <- data.frame(y, petal)

versi will hold all the information about the versicolor irises; petal then extracts just the information about the length of the petals. y is just a sequence from 1 to 50 to help us keep track of individual observations. Finally we make versidf to hold just the petal lengths and the assigned numbers.

p <- ggplot(versidf, aes(y = y, x = petal))
p + geom_point(size = 3) + 
	theme(panel.grid.major = element_line(size = 1)) +
	theme(panel.grid.minor = element_line(size = 1)) +
	ylab("") +
	scale_y_continuous(labels=NULL) +
	theme(axis.ticks.y = element_blank()) +
	geom_vline(xintercept = 4.26, size = 1.5) +
	scale_x_continuous(limits=c(3,5.5))

[Figure versi1: the versicolor petal lengths with a vertical line at the mean]

Most of the ggplot code there is just getting the plot to look exactly the way I wanted. The familiar theme elements make the grid lines a nicer size. Next we have several lines getting rid of everything that would normally be on the y-axis since there's no information on the y-axis; we're just separating out the various data points so we can see them. Finally we use geom_vline() to make a vertical line at 4.26 (the mean of the petal lengths).

Clearly the data isn’t all in the same place.

When we discussed the mean we touched on the idea of the mean absolute error, which measures how far from the mean the data is, on average. This could be used as a measure of variability but isn't common. We'll come back to a cousin of it later in our discussion of variability.

The simplest measure of variation is the range, the distance between the lowest value and the highest value. With the range() function we can retrieve the min() and max() values of the data. The diff() function finds the difference between them.

petal <- versi$Petal.Length

range(petal)
3.0 5.1

diff(range(petal))
2.1

This isn't a very good description of variability. While it gives a vague notion of where the data is (everything is within the range) it doesn't tell us where it clusters within the range. One mutant flower with, say, a 14 centimeter petal would totally alter the range despite not reflecting where the majority of the data is.

The main reason that people look at the range is that in certain kinds of data missing information is coded as an extreme value like 9999 in order to warn people that something has to be dealt with. Coding missing values this way avoids a program silently removing them.

A more effective way to measure variability is with the interquartile range (often just IQR). The 1st and 3rd quartiles cut off the bottom 25% and top 25% of the data (the median is the 2nd quartile), so the middle half of the data is between them. It is somewhat better than the range but not enormously so.

The easiest way to get the IQR is with the summary() and IQR() functions. With summary() you can see the 1st and 3rd quartiles (along with the min, max, median, and mean).

summary(petal)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   3.00    4.00    4.35    4.26    4.60    5.10
 
IQR(petal)
0.6

The IQR has the advantage of cutting off extreme values so a single mutant flower isn't going to throw it off. In fact the IQR is so popular that it has been enshrined in the boxplot.
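
Here's a quick sketch of that robustness, adding one hypothetical 14 centimeter mutant to the petal vector (the value is made up purely for illustration):

mutant <- c(petal, 14)
diff(range(petal)); diff(range(mutant))   # the range balloons from 2.1 to 11
IQR(petal); IQR(mutant)                   # the IQR barely moves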

Let’s look at two ways to visualize the IQR. First we just highlight the data.

vedf <- with(versidf, versidf[petal >= 4 & petal <= 4.6,])

p <- ggplot(versidf, aes(y = y, x = petal))
p + geom_point(size = 3) + geom_vline(xintercept = 4.26, size = 1.5) +
	theme(panel.grid.major = element_line(size = 1)) +
	theme(panel.grid.minor = element_line(size = 1)) +
	ylab("") +
	scale_y_continuous(labels=NULL) +
	theme(axis.ticks.y = element_blank()) +
	geom_point(data = vedf, aes(y = y, x= petal),color='red',size=3)

[Figure versi2: the petal lengths with the points inside the IQR highlighted in red]

As a boxplot.

p <- ggplot(versidf, aes(y = petal, x = ''))
p + geom_boxplot() +
	scale_x_discrete(breaks = NULL)

[Figure versibox: boxplot of the versicolor petal lengths]

The boxplot() function in base R and the geom_boxplot() layer in ggplot2 will automatically create a Tukey boxplot. The edges of the box are at the 1st and 3rd quartiles. The middle line is the median. The whiskers go to the data points that are no more than 1.5 times the IQR from the 1st and 3rd quartiles. Outliers are shown as dots.
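
As a quick sketch of that whisker rule, using the petal vector from earlier (the fence values simply follow Tukey's 1.5 times the IQR convention):

q <- quantile(petal, c(.25, .75))
fences <- c(q[1] - 1.5 * IQR(petal), q[2] + 1.5 * IQR(petal))
petal[petal < fences[1] | petal > fences[2]]   # the points that get drawn as dots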

Boxplots have both strengths and weaknesses. Compared to violin plots or densities they show us less information about the data, however they are less subjective and showing the IQR can be very useful. Identifying outliers via Tukey’s method is also a nice touch.

Graphics – ggplot2

The ggplot2 package is one of Hadley Wickham’s many wildly successful R packages.

Installing and using a package is easy.

To install:

install.packages("ggplot2")

To use once installed:

library(ggplot2)

The citation that Wickham prefers is available with the citation() function.

H. Wickham. ggplot2: Elegant Graphics for Data Analysis. Springer New York, 2009.

The primary advantage of ggplot2 over the base R graphics is Wickham's decision to use natural language rather than highly specific abbreviations. Changing the size of a line is done the same way as changing the size of a point. The second advantage is that ggplot objects store information inside of them where it is easy to reference when making an image. This makes switching techniques effortless.

Let’s look at an example using the iris data.

This is how we create the most common kind of ggplot object. Our data is drawn from the iris dataframe and two aesthetics are set. The x-axis will show the length of the sepals and every data point will be colored based on its species.

p <- ggplot(iris, aes(x = Sepal.Length, fill=Species))

A simple visualization is a stacked histogram, which is how ggplot2 automatically deals with this kind of data. Layers are added with the + operator, which is usually reserved for addition.

p + geom_histogram()

[Figure stackedgg1: default stacked histogram of sepal length by species]

That's serviceable but we can do better. The first thing to do is change to a colorblind-friendly color palette; this has no impact on readability and opens the plot up to more readers. Winston Chang provides a pre-packaged vector of hexadecimal color codes that does this for us.

cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")

The theme layer gives us very fine control over the image. We can save space by moving the legend inside of the plot. Since I make graphics for this site I also change the size of the grid lines so they’re easier to see when the image is small.

p + geom_histogram() +
	theme(legend.position = c(.9, .9)) +
	theme(panel.grid.major = element_line(size = 1)) +
	theme(panel.grid.minor = element_line(size = 1)) +
	scale_fill_manual(values=cbPalette)

[Figure stackedgg2: the stacked histogram with the colorblind-friendly palette and adjusted theme]

Unfortunately a stacked histogram doesn’t seem to be a good way of representing this information. We need another method to represent it. In base R this means going back to the drawing board and carefully calibrating the commands for another type of plot. With ggplot2, however, all the information we want has been stored inside of the object p that we made.

A density plot might work better. It's going to overlap, though, so let's adjust the transparency argument, called alpha, so we can see through them. In fact we should throw in the theme elements from before as well.

p + geom_density(alpha = .5) +
	theme(legend.position = c(.9, .9)) +
	theme(panel.grid.major = element_line(size = 1)) +
	theme(panel.grid.minor = element_line(size = 1)) +
	scale_fill_manual(values=cbPalette)

[Figure stackedgg3: overlapping density plots of sepal length with transparency]

Let’s compare that to the code we would use for the same thing in base R.

dens <- tapply(iris$Sepal.Length, iris$Species, density)
plot(dens$setosa$x, dens$setosa$y, 
	xlim=c(4,8.5), 
	type='l', lwd=3, col="#999999",
	xlab='Sepal Length', ylab='Density')
lines(dens$versicolor$x, dens$versicolor$y, 
	lwd=3, col="#E69F00")
lines(dens$virginica$x, dens$virginica$y, 
	lwd=3, col="#56B4E9")
grid()
text(8, c(1.2, 1.1 ,1), 
	c('Setosa', 'Versicolor', 'Virginica'), 
	col=c("#999999", "#E69F00", "#56B4E9"),
	cex=1.5)

[Figure stackedgg4: the same densities drawn with base R]

I actually like this style of density plot better than the one we just made in ggplot2, but it's fairly easy to mimic in ggplot2 as well. Nonetheless this is a much more elaborate piece of code, a lesson in the R language all by itself. We use tapply() to break down the dataframe the way we want. Then some elaborate subsetting notation gets us the pieces of that new object. We adjust the limits of the x-axis so it fits everything and rewrite the axis labels. The lines() function adds the second and third densities. Since there's no legend we have to make one up with some clever use of the text() function.
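
For comparison, here is a sketch of one way that unfilled style might be approximated in ggplot2 (mapping color, rather than fill, to Species is my own choice, not code from the original post):

ggplot(iris, aes(x = Sepal.Length, color = Species)) +
	geom_density(size = 1) +
	scale_color_manual(values = cbPalette)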

On the other hand ggplot2 saves us a lot of work and does so with much clearer code. That clarity is very helpful if you ever look back at old code.

In any event this still might not be the ideal way to represent these comparisons. A style of graphic called a violin plot is perfect for this situation. It isn't available in base R but ggplot objects can create them easily. We'll have to adjust some things first. The Species now needs to go on the x-axis and the length of the sepals will go on the y-axis. Let's keep the color codes but get rid of the legend since we won't need it.

p <- ggplot(iris, aes(x = Species, y = Sepal.Length, fill = Species))
p + geom_violin() +
	theme(panel.grid.major = element_line(size = 1)) +
	theme(panel.grid.minor = element_line(size = 1)) +
	theme(legend.position='none')

[Figure stackedgg5: violin plots of sepal length by species]

I'd call that a significant improvement! We can see all three of the species without them interrupting each other. The colors are also less of an issue. In fact we can use this kind of plot in black and white if necessary while the other versions degrade significantly in readability.

Exporting Pretty Graphics

Making nice graphics with R!

While R does come with some very nice built in graphics capabilities, many of which create production quality graphics out of the box, it takes some effort to produce the exact image you want. The first thing you might notice when trying to produce graphics is that the bitmap images R creates are incredibly grainy when printed to their own file. Unfortunately you’re almost always going to need a bitmap image (png, jpg, gif) to work with in practice. So how to improve the quality of the images?

The vector graphics (svg) are much better. The lines are smooth and they can be resized as you like.

Here’s how I create the images for this blog:

svg("filename.svg", width=7, height=7)
# plotting commands go here
dev.off()

This produces a square svg file, 7 inches by 7 inches.

In the program Inkscape, which is free online, turning it into a bitmap is just a matter of going to File → Export Bitmap, setting the dpi to something reasonable (like 200 or 300) and clicking Export.

Just like that you have a very nice image to use however you want. Storing it as an svg file also makes it convenient to edit and resize without loss of quality later. That’s the whole reason vector graphics exist!

It is possible to write code that produces a nicely sized png file directly but unfortunately the results are not impressive.

png("filename.png", width=1400, height=1400)
# plotting commands go here
dev.off()

Let's compare two versions of a plot of some data from the faithful dataset.

[Figure testcompareimg: the same plot exported via svg and directly to png]

The file that started life as an svg is much easier to read and looks a lot more like what I intended to create. Saving the files in one format and then converting them to another may seem fiddly but trying to make R print bitmaps that look nice requires a lot more tinkering. The best way to make sure that what you see is what you get is by using svg files.

Central Tendency – Other Measures

Although there is a laundry list of measures of central tendency, they are fairly specialized. The trimmed mean and winsorized mean are meant to be better descriptors of the data than the mean and more efficient than the median.

The geometric and harmonic means are specialized to describe certain kinds of data.

The barycenter, centroid, geometric median, and medoid are used for describing the center point for data that has several dimensions to it.

The rivers data shows the lengths of 141 major rivers in North America.

str(rivers)
plot(density(rivers))

Yep, the density is unimodal.

mean(rivers)
591.1844

median(rivers)
425

with(density(rivers),x[y==max(y)]) # Code we used to find the mode.
334.6666

That isn’t a good sign! All three of our measures of central tendency are giving us wildly different values.

The trimmed means and winsorized means are simple concepts; the idea is to remove the most extreme values from the data. With a trimmed mean we remove the largest values and the smallest values entirely while with a winsorized mean we replace those values with whatever our two cutoff points were.

The interquartile mean is a popular form of trimmed mean. We remove the top 25% of the data and the bottom 25% of the data. With the summary() function finding the quartiles is easy.

summary(rivers)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  135.0   310.0   425.0   591.2   680.0  3710.0

rivTrim <- rivers[rivers > 310 & rivers < 680]

Try using the sort() function on the rivers data and the rivTrim data so you can see the data that has been trimmed from the edges. For example, the 3,710 mile Mississippi is no longer there. You can see the same thing by checking the density.
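
As an aside, the mean() function can do this sort of trimming for us through its trim argument; a quick sketch, which should land close to the value we get from rivTrim below:

mean(rivers, trim = 0.25)   # drop the lowest 25% and highest 25% before averaging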

Making the data for the winsorized mean is also easy. We take the 1st quartile and the 3rd quartile (which we just used) and use them to fill in for the values we removed.

rivWins <- c(rivTrim, rep(310,35), rep(680,35))

Now let’s take a look at those two measures.

mean(rivTrim)
448.6087
mean(rivWins)
471.9712

plot(density(rivers))
abline(v = c(448, 472), col = c('green','purple'))

They agree pretty well and they happen to agree with the median as well.

While the measures of central tendency for one dimensional data have a few well agreed upon standards, this is less true of multidimensional data. The centroid and barycenter are the only two well agreed upon measurements, because they are clear analogues of the mean. The centroid is easy to find: it is the point where the means of each of the variables intersect.

plot(faithful)
abline(v = mean(faithful$eruptions), h = mean(faithful$waiting))
points(x = mean(faithful$eruptions), mean(faithful$waiting),  cex=1.5, pch=19, col='red')

[Figure center: the centroid of the faithful data marked on a scatterplot]

The barycenter is the weighted average of the data and corresponds to the center of mass of a physical system.
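
There is no single base R function for this, but a minimal sketch using weighted.mean() with made-up weights (w here is purely hypothetical) might look like:

w <- runif(nrow(faithful))   # hypothetical weights, one per observation
c(x = weighted.mean(faithful$eruptions, w),
	y = weighted.mean(faithful$waiting, w))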

The medoid and geometric median, which are both analogous to the median in two or more dimensions, are unfortunately subject to many competing definitions, none of which are quite as trivial to calculate as the centroid. Minimizing the absolute errors requires deciding on how you wish to measure distance; typically either the Euclidean distance (a straight line) or the Manhattan distance (the sum of the distances along each axis) is used. The medoid is defined as the point from the data that accomplishes this best while the geometric median is simply whatever point does it best.
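
A rough sketch of finding a medoid under Euclidean distance with base R's dist() function (one of the competing definitions mentioned above):

d <- as.matrix(dist(faithful))      # pairwise Euclidean distances
faithful[which.min(rowSums(d)), ]   # the data point closest, in total, to all the others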

The geometric mean and harmonic mean are rather specialized concepts that aren't used heavily as descriptive statistics. Their main uses are for efficiently solving certain kinds of problems rather than taking the time to transform data. In a later post we will look at how to use them as well as the concept that connects them to the arithmetic mean, the generalized mean.

Next week we’ll be covering graphics! How to output nice images! How to use the ggplot2 package!

Measures of Central Tendency – Efficient Estimator

Before we move on to some of the less common measures of central tendency let's compare the efficiency of the mean and median. We would like for our description of the data to reflect not just the data but also the real world. This is important because there are few things we can truly take a census of; in practice we must rely upon samples.

This seems, in the abstract, like a difficult thing to determine but techniques for it do exist.

A wonderfully simple R program by Ajay Shah lets us study a simple example.

http://www.mayin.org/ajayshah/KB/R/html/p1.html

Let’s go through a slightly modified version of the code line by line, all of it will be collected at the end of the post for you to copy down. The ggplot2 graphics package will be used to make a nice plot of the information.

This custom function does a specialized task very quickly. We make a sequence that goes from 0 to 1000, select thirty random numbers from it, then calculate the mean and median of that sample. It is important to know that the mean and median of that sequence are both exactly 500. Since we're taking a sample of thirty we're probably not going to get a mean or median that is exactly correct.

one.simulation <- function(N=30) {
	s <- seq(0,1000,1)
	x <- sample(s,N)
  	return(c(mean(x), median(x)))
}

The replicate function runs our function one hundred thousand times with different random samples. It creates a matrix with 2 rows and 100000 columns; the first row holds the means and the second row the medians.

results <- replicate(100000, one.simulation(30))

Now we use the density() function to find the distribution of the means and medians. The density should be very familiar by now. Hopefully the means and medians will tend to be close to 500.

k1 <- density(results[1,])
k2 <- density(results[2,])

Ajay Shah then provides some code to plot the densities with base R. Here is the whole program with the actual functional part taking up a mere eight lines.

one.simulation <- function(N=30) {
	s <- seq(0,1000,1)
	x <- sample(s,N)
  	return(c(mean(x), median(x)))
}

results <- replicate(100000, one.simulation(30))

k1 <- density(results[1,])
k2 <- density(results[2,])

xrange <- range(k1$x, k2$x)
plot(k1$x, k1$y, xlim=xrange, type="l", xlab="Estimated value", ylab="")
grid()
lines(k2$x, k2$y, col="red")
abline(v=500)
legend(x="topleft", bty="n",
       lty=c(1,1),
       col=c("black", "red"),
       legend=c("Mean", "Median"))

With the help of ggplot2 the visualization looks like this.
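
A sketch of one way the same picture could be drawn with ggplot2, reusing the results matrix (the reshaping and aesthetic choices here are my own, not the original code):

simdf <- data.frame(estimate  = c(results[1, ], results[2, ]),
	statistic = rep(c("Mean", "Median"), each = ncol(results)))
ggplot(simdf, aes(x = estimate, color = statistic)) +
	geom_density(size = 1) +
	geom_vline(xintercept = 500)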

[Figure efficient: densities of the simulated means and medians]

The mean is much more concentrated than the median; that concentration is what efficiency is. The mean of a sample is a better estimate of the mean of the population it comes from than the median of a sample is for the population median. This is another reason that the mean is sometimes preferred over the median despite criticisms.
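
To put a rough number on that concentration, a quick sketch using the results matrix from above:

apply(results, 1, sd)   # spread of the simulated means (row 1) and medians (row 2)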

Central Tendency – The Median and Mean

When people say "average" they are generally referring to what mathematicians call the arithmetic mean. They might also be talking about the mode, the median, the trimmed means, the winsorized mean, the centroid or medoid, or (in rare cases where they are relevant) the geometric mean or harmonic mean.

In these posts we will use R as a teaching tool so that we can work with the various kinds of average hands on. We’ve already taken a look at the mode, which is the most common number in the data. Today we will take a look at the mean and median which are, by far, the most popular measures of central tendency.

What do the mean and median do? Oddly few people bother to address this when discussing the measures of central tendency. It’s important. It defines exactly what they really are.

The purpose of the arithmetic mean is to “balance” the data.

To find the mean we add up all of the values of the data and divide by the number of data points, it is the familiar average you probably learned in grade school.

Finding this point means determining where the errors add up to zero. The total error on either side of the mean is the same.

a <- c(13,1,8,4,10,0,5,3,10)
mean(a)
6

a-6
7 -5  2 -2  4 -6 -1 -3  4

Those numbers do indeed sum to zero but how should we interpret that fact?

Let's say that we're working in construction and the project will last one year. Our weekly progress ought to be determined based on the mean amount of work done each week. If the mean falls below 1.9% a week we are not on track to finish. Some weeks might be slow and some weeks might be fast but the mean (and no other number) is what tells us if we're keeping on schedule.

The mean is also popular because it minimizes the root mean squared error. The RMSE is important for a variety of reasons in more advanced statistics but isn't particularly relevant to the description of data.

The purpose of the median is to find the number for which half of the data is greater and half of the data is smaller. It is the “middle” number.

a <- c(13,1,8,4,10,0,5,3,10)
sort(a)
0  1  3  4  5  8 10 10 13  #We can see 5 right there in the center once we sort the data.

median(a)
5

Somewhat coincidentally, but perhaps more importantly, the median minimizes the mean absolute error, a relative of the RMSE that actually does matter when describing data.

The absolute error is the distance from a value to a given datapoint. The mean absolute error, then, is the mean of all the absolute errors. The number that minimizes the MAE is the number that is strictly closest to all of the numbers in the data! That’s why the median is so effective at showing the typical value of the data.

Let’s quickly work through this with our simple data.

a <- c(13,1,8,4,10,0,5,3,10)
median(a)
5

abs(a-5)
8 4 3 1 5 5 0 2 5

The median is eight less than thirteen, four more than one, and so on. Those are the absolute errors. Consequently the MAE is as follows.

mean(abs(a-5))
3.666667

It is possible to visualize what the data looks like from the point of view of the MAE very nicely in order to see that the median minimizes it.
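
Here is a minimal sketch of how such a picture could be drawn (the grid of candidate centers is my own choice):

centers <- seq(0, 13, by = 0.1)
mae <- sapply(centers, function(m) mean(abs(a - m)))
plot(centers, mae, type = "l", xlab = "candidate center", ylab = "MAE")
abline(v = median(a))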

[Figure minimize: the MAE as a function of the candidate center]

The MAE clearly shrinks as we approach the median and rises as we move away from it.

You will often see the assertion that the median is a better description of the typical value in a dataset than the mean is. While this is often true, certainly it tracks better with how we might want to describe typicality, it is far from a panacea. Indeed believing that the median is simply better than the mean is a dangerous trap because attempting to describe data in terms of a single number rarely ends well. For thousands of years we had no choice. Thanks to computers, however, calculating a better description is just as easy as calculating the mean and median. We introduced the idea of probability density in the last post. Where the density is higher the data is more tightly clustered.

Let’s take a look at the duration of eruptions in the faithful data. The visualization here is done with the ggplot2 package to mimic the output of the code provided.

plot(density(faithful$eruptions))
abline(v=mean(faithful$eruptions),col='red')
abline(v=median(faithful$eruptions),col='blue')

[Figure meanmedden: density of eruption durations with the mean and median marked]

Would you say that the mean or the median is a better description of this data? Obviously we could talk about optimization as much as we want and go in circles. The faithful data is clustered around two values (in technical terms we call it bimodal because there are two peaks). The mean and median are both terrible descriptions of this data!

Reporting the density or histogram of some data is very simple, contains a great deal of information, can be supplemented by whatever summary statistics we like, and has the added benefit of being more visually engaging than numbers. It is a window into the data both for the public and researchers.

In any event the takeaway from this post should be that the mean and median are very different things with consequently different interpretations. The mean misses the numbers above it by the same degree as the numbers below it while the median is the mathematical center of the data. Neither is a complete description of the data.

In the next post we will take a look at the notion of efficiency and why it matters.

Central Tendency – The Mode and Density

A very common goal in descriptive statistics is to determine the value that is typical or average. By far the three most common measures of central tendency are the mode, the median, and the mean. We call them measures of central tendency because data tends to form clumps and we would like to measure the center of that clump, where most of the data is.

In this series of posts we will use R as a teaching tool to explore central tendency and related concepts.

Because the mode is very simple this post will use R to introduce it along with the histograms and probability density.

The mode is, in common use, not actually a measure of centrality. It looks for the most common value that occurs in categorical data. In practice you can usually determine the mode by looking at a table without the need for any math. In fact R doesn't even have a function for calculating the mode. (There is a mode() function but it does something else.)

Using the table() function it is fairly easy to identify the mode of a vector, and factors can quickly and easily be read as vectors with the as.vector() function. Here we look at the nationalities of racers from the boston data.

table(as.vector(boston$Country))

ESP GBR JPN RSA SUI USA 
  1   1   3   1   2   2 

Visualizations are another reasonable alternative, although base R doesn’t make it easy to do. Fortunately the ggplot2 package has this functionality built in to the geom_bar() layer of its graphics.

p <- ggplot(boston,aes(x=Country))
p + geom_bar()

Clearly Japanese racers were most common at the marathon.

Finally in the rare case where you need to extract the mode from the data directly a custom function is your best bet. This truly wonderful function was made public by Ken Williams on stackoverflow a few years ago:

Mode <- function(x) {
  ux <- unique(x)
  ux[which.max(tabulate(match(x, ux)))]
}

Mode(boston$Country)
[1] JPN
Levels: ESP GBR JPN RSA SUI USA

Notice that Williams' function is called 'Mode' rather than 'mode' so that it doesn't clash with R's built-in mode() function.

Understanding why the mode is considered a measure of central tendency requires us to delve a little deeper into statistics. It is a frequently repeated truism that the mode can only be used to describe categorical data, both because other measures are better for ordinal and interval data and because, strictly speaking, the mode doesn't exist for interval data. In effect, though, it can be used for any kind of data. Thanks to the rise of personal computers and software like R it's very easy to show concepts that are otherwise quite abstract. Let's look at a histogram of some data. What is the modal miles per gallon of cars in the mtcars data?

hist(mtcars$mpg)

[Figure mpghist: histogram of mpg from the mtcars data]

Looking at the histogram we might be tempted to say that the mode is "between 15 and 20 mpg" but that would be a mistake. We can vary the number of "bins" that the histogram has and, moreover, a histogram suffers from somewhat arbitrary limitations: R likes to make bins with widths that are round numbers and which start at round numbers. Nonetheless histograms give us a good starting place.

What would be better than a histogram is something we can mathematically optimize and which would give us a single value. The probability density does this very nicely. A few decades ago it was impractical to calculate a density for anything but a pre-defined function. Computers allow the use of a process called kernel density estimation to determine it for any arbitrary data. Let's take a look at how R does it.

plot(density(mtcars$mpg))

[Figure mpgden: kernel density estimate of mpg]

Rather than many discrete bins R is showing us a continuous curve. The density is a measure of how densely packed the data points are around each value. The exact value of the density is difficult to interpret and generally unimportant.

What we see here is that the density is highest at about 18 mpg.

Extracting the precise point of highest density is actually pretty simple. The density() function produces a bunch of information.

mpg <- density(mtcars$mpg)
str(mpg)
List of 7
 $ x        : num [1:512] 2.97 3.05 3.12 3.2 3.27 ...
 $ y        : num [1:512] 0.000114 0.000125 0.000137 0.00015 0.000164 ...
 $ bw       : num 2.48
 $ n        : int 32
 $ call     : language density.default(x = mtcars$mpg)
 $ data.name: chr "mtcars$mpg"
 $ has.na   : logi FALSE
 - attr(*, "class")= chr "density"

The x and y coordinates are what R uses to plot the image. We can use our ever useful friend, the subsetting notation, to find the value of x where y is equal to the greatest value of y. We're looking for the point of highest density… the mode.

with(mpg,x[y==max(y)])
17.90862

Are there any cars that get a mileage of 17.90862 in the data?

No, there are not. The density is making a prediction about what we would see if we tested many millions of cars.

However we probably should not trust that value!

While probability densities are extremely useful the exact numbers are not reliable since there are a variety of kernels. The Gaussian (which we've just seen) and the Epanechnikov are the most popular. They produce slightly different shapes with slightly different modes.

mpgep <- density(mtcars$mpg,kernel="epanechnikov")
with(mpgep,x[y==max(y)])
18.13383

[Figure mpgep: Epanechnikov kernel density estimate of mpg]

Obviously different in shape, although the similarity should be clear and we still estimate the mode as being very close to 18 mpg.

Use the ? function to call up the help menu for density and try out some of the less popular kernels for calculating the density. With plot() you can check out the shape of those alternative methods.
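
For example, a quick sketch that overlays a few of them on one plot (the kernel names come straight from the kernel argument documented in ?density):

plot(density(mtcars$mpg), main = "Kernel comparison")   # the default Gaussian kernel
for (k in c("epanechnikov", "triangular", "biweight", "cosine")) {
	lines(density(mtcars$mpg, kernel = k), col = "grey50")
}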

We will come back to probability density in a later post and discuss them in more detail. For now the density is important because we will use it to compare the other measures of central tendency.

Even if the point of highest density were entirely stable there wouldn't be much call for using it because it lacks any useful mathematical properties. True, it is strictly more "typical" than anything else but very little can be done with that. The mean and median, on top of being trivial to calculate, are very functional. When we look at the mean and median we'll see what those properties are.

Structure of Data

Before we can really get a handle on concepts in data science we need to understand the structure of data, both in R and in general. There are a lot of great datasets in the history of statistics but for understanding the basics of data nothing beats information from a race, so for this post we will look at the boston data we made in our introduction to dataframes.

Here are the contents of the dataframe.

boston
   Place    Sex   Time Country
1      1   Male  80.60     RSA
2      2   Male  81.23     JPN
3      3   Male  81.23     JPN
4      4   Male  84.65     SUI
5      5   Male  84.70     ESP
6      1 Female  95.10     USA
7      2 Female  97.40     JPN
8      3 Female  98.55     USA
9      4 Female  99.65     SUI
10     5 Female 101.70     GBR

And here is the structure of the dataframe.

str(boston)
'data.frame':   10 obs. of  4 variables:
 $ Place  : int  1 2 3 4 5 1 2 3 4 5
 $ Sex    : Factor w/ 2 levels "Female","Male": 2 2 2 2 2 1 1 1 1 1
 $ Time   : num  80.6 81.2 81.2 84.7 84.7 ...
 $ Country: Factor w/ 6 levels "ESP","GBR","JPN",..: 4 3 3 5 1 6 3 6 5 2

As you can see there are several different kinds of data represented here. R identifies them as an integer, two factors, and a numeric although these aren’t the common names.

Let’s begin by looking at the factors.

In general, information that R stores as a factor is Categorical data. Both Country and Sex are categorical data. Being from Spain (ESP) is different from being from Switzerland (SUI) but that's all we can say about the difference. Notice how R describes the factors.

Country: Factor w/ 6 levels "ESP","GBR","JPN",..: 4 3 3 5 1 6 3 6 5 2

It is a factor with 6 levels, the first few of which are listed followed by a list of numbers. In order to save space with huge datasets R stores factors as a list of names (in alphabetical order) along with corresponding numbers.
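
A quick sketch of peeking at those two pieces for the Country factor:

levels(boston$Country)       # the level names, stored once, in alphabetical order
as.integer(boston$Country)   # the integer codes actually stored for each row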

Although they are both factors there is a difference between Sex and Country. Sex is also a Dichotomous variable; it has exactly two possible values (since the Boston Athletic Association records sex only as female or male). Certain kinds of analysis can only be done with dichotomous variables while many other kinds are impossible with them.


Next let's take a look at the integer data. The Place variable is an example of Ordinal data: it has an order to it but no more information. The woman who came in first was ahead of the woman who came in second but their rank doesn't tell us anything more than that. Ordinal data is also discrete; there is no such thing as 1.33th place in a race, although there might be a tie in which case several people get 1st.

Place  : int  1 2 3 4 5 1 2 3 4 5

What does it mean that R stores this kind of data as an integer? In computing there is an important difference between integers and what are known as floating-point numbers. Here the relevant difference is that integers take less space to store and are only whole numbers.


Finally there is the Time variable which R stores as a numeric; it is Interval data. This means that it has magnitude as well as rank. The woman in third took 3.45 minutes longer to finish than the woman who came in first, so we know not just that she was slower but exactly how much slower she was. Take a closer look and notice that the values are not all whole numbers.

Time   : num  80.6 81.2 81.2 84.7 84.7 ...

The Time variable is also an example of Ratio data which means we can say things like “the fastest woman was 1.07 times as fast as the slowest woman”. This is possible because the zero point corresponds to an actual zero, a person with a time of 0.0 would finish the race instantly.

Isn’t that true of all interval data, though? No! Most temperature scales do not have a meaningful zero point. When it is 40° out it isn’t twice as hot as when it is 20° out because 0° doesn’t mean there is no heat at all, just that water will freeze. In fact, since absolute zero is -273.15°, for it to be twice as hot as 20° it would have to be over 300° out!

In fact 40° is merely 1.07 times as hot as 20°. Human beings happen to be incredibly sensitive to changes in temperature.
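
Here is the arithmetic behind those claims, using the 273.15 degree offset between Celsius and Kelvin:

(20 + 273.15) * 2 - 273.15      # twice as hot as 20 degrees is about 313 degrees
(40 + 273.15) / (20 + 273.15)   # 40 degrees is only about 1.07 times as hot as 20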


Let’s take a look at the types of data in another dataframe. mtcars comes built in to R.

head(mtcars)
                   mpg cyl disp  hp drat    wt  qsec vs am gear carb
Mazda RX4         21.0   6  160 110 3.90 2.620 16.46  0  1    4    4
Mazda RX4 Wag     21.0   6  160 110 3.90 2.875 17.02  0  1    4    4
Datsun 710        22.8   4  108  93 3.85 2.320 18.61  1  1    4    1
Hornet 4 Drive    21.4   6  258 110 3.08 3.215 19.44  1  0    3    1
Hornet Sportabout 18.7   8  360 175 3.15 3.440 17.02  0  0    3    2
Valiant           18.1   6  225 105 2.76 3.460 20.22  1  0    3    1

What kinds of data do we see here?

If you use the str() function to look at the structure of the dataframe you'll see that all of the variables are stored as numerics! Nonetheless it is evident that they are different kinds of data. A car has a whole number of cylinders, so the cyl variable is really ordinal; it has discrete values. Meanwhile mpg, wt, and qsec are examples of ratio data. Interestingly the am variable is categorical (it says whether the transmission is automatic or manual) even though it is stored as a numeric.
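
A sketch of making that categorical nature explicit (in mtcars, am is documented as 0 = automatic and 1 = manual):

mtcars$am <- factor(mtcars$am, labels = c("automatic", "manual"))
str(mtcars$am)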


Remember that data is best understood based on its properties rather than by looking at the choices made by the person who recorded it. They may have had all sorts of reasons to do what they did.


There is some advice often given about these various levels of measurement which says that the mode ought to be used for categorical data, the median for ordinal data, and the mean for interval or ratio data. Unfortunately these suggestions are unhelpful at best and probably misleading. We’ll go over why as we take a look at the measures of central tendency.

Lies, Damned Lies, and the Truth – Descriptive Statistics

Descriptive statistics is the field that seeks to characterize information. It does not generally seek to make predictions. The fundamental goal is to turn data, of which we now have mountains, into something actually useful. In this series of posts we will start with the measures of central tendency, the various averages, then move on to the higher “moments” and their analogues.

Why should we study descriptive statistics at all, though? Surely it would be better to let the data speak for itself rather than use a bunch of complicated formulas that will just confuse and mislead people? Maybe there’s no point in trying to teach the typical person this kind of thing and we should leave it up to trusted experts.

We need descriptive statistics because nothing speaks for itself (and even if it did there is no real degree of understanding that a human being can gain from looking at thousands of lines of data). Ultimately everything is a matter of interpretation. People who say they are not providing an interpretation are either lying or deluded. Worse, failing to accept that knowledge comes from interpretation of information blinds people to the actual content of their beliefs. Do you know what the median is and why we use it? Exactly what does it tell us?

Since descriptive statistics are the only hope we have of understanding the world around us it is critical that everyone have at least a basic understanding of the field.

In that spirit this series of posts will focus on explaining not just the names of the various statistics and when it's best to use them but also what they mean and where they come from. We will look at traditional statistical methods, robust statistics, and a few methods that are available thanks to the rise of personal computers.

Throughout this series I will also be providing sample R code so that people who want to follow along after reading the Absolute Introduction to R can do so.

I’d also like to take this chance to introduce a dataset that provides a rich environment in which to play, the Magic data. Most of this data was acquired using the Gatherer Extractor program, arranged in a form compatible with R in Excel 2010, then finalized with some work in R.

The structure of the Magic Data dataframe.

'data.frame':   6337 obs. of  10 variables:
 $ ID    : int  175097 175042 176444 175038 179432 174941 174848 175147 175009 176452 ...
 $ Rating: num  0.82 0.938 1.107 1.206 1.352 ...
 $ Rare  : Ord.factor w/ 4 levels "C"<"U"<"R"<"M": 1 1 1 1 1 1 1 1 1 1 ...
 $ CMC   : int  4 5 3 3 4 3 5 3 6 6 ...
 $ Set   : Ord.factor w/ 31 levels "MRD"<"DST"<"FDN"<..: 17 17 17 17 17 17 17 17 17 17 ...
 $ Block : Ord.factor w/ 10 levels "MRDBL"<"CHKBL"<..: 6 6 6 6 6 6 6 6 6 6 ...
 $ Price : num  0.16 0.16 0.16 0.16 0.15 0.16 0.16 0.15 0.16 0.15 ...
 $ Year  : int  2008 2008 2008 2008 2008 2008 2008 2008 2008 2008 ...
 $ Color : Factor w/ 32 levels "","B","BG","BR",..: 7 9 2 9 9 6 2 9 9 2 ...
 $ Type  : Factor w/ 16 levels "Ar","ArCr","ArLa",..: 4 6 16 8 8 6 4 2 2 4 ...

This data provides continuous and discrete numerical data, categorical data, a chance to practice with regular expressions, ordered vs unordered factors, and a wide variety of chances to explore visualizations.

Since the Magic dataset is larger and less familiar than others I’ll reintroduce it later with a more detailed explanation of the variables. Nonetheless here are dropbox links to the Rdata and csv versions.

https://www.dropbox.com/s/r8tr01sp72meuf0/MagicData.Rdata
https://www.dropbox.com/s/rh6uvlumymnz9x3/MagicData.csv