blogR

Walkthroughs and projects using R for data science.



k-fold cross validation with modelr and broom

@drsimonj here to discuss how to conduct k-fold cross validation, with an emphasis on evaluating models supported by David Robinson’s broom package. Full credit also goes to David, as this is a slightly more detailed version of his past post, which I read some time ago and felt like unpacking.

Assumed knowledge: k-fold cross validation

This post assumes you know what k-fold cross validation is. If you want to brush up, here’s a fantastic tutorial from Stanford University professors Trevor Hastie and Rob Tibshirani.

Creating folds

Before worrying about models, we can generate k folds using crossv_kfold from the modelr package. Let’s practice with the mtcars data to keep things simple.

library(modelr)
set.seed(1)  # Run to replicate this post
folds <- crossv_kfold(mtcars, k = 5)
folds
>  A tibble: 5 × 3
>            train           test   .id
>           <list>         <list> <chr>
> 1
...
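To preview where the post is heading, each `train` element can be used to fit a model and each `test` element to assess it on held-out data. Here's a minimal sketch using a simple `mpg ~ wt` model (an illustrative formula, not necessarily the one used later in the post):

```r
library(modelr)
library(purrr)

set.seed(1)
folds <- crossv_kfold(mtcars, k = 5)

# Fit a model to each training fold...
fits  <- map(folds$train, ~ lm(mpg ~ wt, data = .))

# ...then compute the RMSE on the corresponding held-out fold
rmses <- map2_dbl(fits, folds$test, rmse)
mean(rmses)  # average out-of-fold error across the 5 folds
```

The post goes further by using broom to tidy and inspect the per-fold results.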

Continue reading →


Plotting my trips with ubeR

@drsimonj here to explain how I used ubeR, an R package for the Uber API, to create this map of my trips over the last couple of years:

[Image: init-example-1.png — map of Uber trips]

Getting ubeR

The ubeR package, which I first heard about here, is currently available on GitHub. In R, install and load it as follows:

install.packages("devtools")  # Run to install the devtools package if needed
devtools::install_github("DataWookie/ubeR")  # Install ubeR
library(ubeR)

For this post I also use many of the tidyverse packages, so install and load it too if you want to follow along:

library(tidyverse)

Setting up an app

To use ubeR and the Uber API, you’ll need an Uber account, and you’ll need to register a new app. In a web browser, log in to your Uber account and head to this page. Fill in the details. Here’s an example:

[Image: register_new_app.JPG]

Once created, under the Authorization tab, set the Redirect URL to http://localhost:1410/

[Image: redirect_url.JPG]

Further down, under General Scopes...

Continue reading →


Ordering categories within ggplot2 facets

@drsimonj here to share my method for ordering categories within facets to create plots that look like this…

[Image: unnamed-chunk-2-1.png]

instead of like this…

[Image: unnamed-chunk-3-1.png]

Motivation: Tidy Text Mining in R

The motivation for this post comes from Tidy Text Mining in R by Julia Silge and David Robinson. It is a must read if text mining is something that interests you.

I noticed that Julia and David had left themselves a “TODO” in Chapter 5 that was “not easy to fix.” Not easy to fix? Could Julia Silge and David Robinson face challenges as the rest of us do?!

[Image: shocked-koala.jpg]

Shocking, I know.

Well, it was probably just a matter of time until they fixed it. Still, I thought it was an interesting challenge, so I gave it some thought and wanted to share my solution.

The problem

They were using ggplot2 to create a bar plot with the following features:

  • Faceted into separate panels
  • One bar for each category (words in their case)
  • ...
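The general trick, sketched below on mtcars rather than the book's text data, is to make category labels unique within each facet (so each facet can carry its own order), reorder by value, and then strip the disambiguating suffix when labelling the axis:

```r
library(dplyr)
library(ggplot2)

# Count cars by cylinder within each gear "facet"
d <- mtcars %>%
  count(gear, cyl)

d %>%
  # Paste the facet variable onto the category so levels are
  # unique per facet, then order the combined labels by n
  mutate(cyl_f = reorder(paste(cyl, gear, sep = "___"), n)) %>%
  ggplot(aes(cyl_f, n)) +
  geom_col() +
  facet_wrap(~ gear, scales = "free_x") +
  # Strip the suffix so the axis shows the original labels
  scale_x_discrete(labels = function(x) sub("___.*$", "", x))
```

The `"___"` separator is arbitrary; any string that won't appear in the real category labels works.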

Continue reading →


Plotting individual observations and group means with ggplot2

@drsimonj here to share my approach for visualizing individual observations with group means in the same plot. Here are some examples of what we’ll be creating:

[Image: init-example-1.png]

[Image: init-example-2.png]

[Image: init-example-3.png]

I find these sorts of plots to be incredibly useful for visualizing and gaining insight into our data. We often visualize group means only, sometimes with the likes of standard error bars. Alternatively, we plot only the individual observations using histograms or scatter plots. On its own, each of these methods has problems. For example, we can’t easily see sample sizes or variability with group means, and we can’t easily see underlying patterns or trends in individual observations. But when individual observations and group means are combined into a single plot, we can produce some powerful visualizations.
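As a minimal sketch of the idea (not the exact code from the post), individual points and group means can be layered in one plot with geom_jitter() and stat_summary():

```r
library(ggplot2)

ggplot(mtcars, aes(factor(cyl), mpg)) +
  geom_jitter(width = .1, alpha = .4) +               # individual observations
  stat_summary(fun = mean, geom = "point", size = 4)  # group means
```

Each layer can then be styled independently, e.g. colouring the means or adding lines connecting them across groups.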

A quick note that, after publishing this post, the paper, “Modern graphical methods to compare two groups of...

Continue reading →


Exploring the effects of healthcare investment on child mortality in R

@drsimonj here to investigate the effects of healthcare investment on child mortality rates over time. I hope that you find the content as interesting as I do. However, please note that this post is intended to be an informative exercise in exploring and visualizing data with R and my new ourworldindata package. The conclusions drawn here require independent, peer-reviewed verification.

On this note, thank you to Amanda Glassman for bringing this research paper to my attention after this post was first published. The paper suggests that healthcare expenditure has no effect, or only a weak effect, on child mortality rates. I think it’s an excellent paper and, if you’re interested in the content, a far more rigorous resource in terms of the scientific approach taken. After reading that paper, with the exception of this paragraph, I’ve left this post unchanged for interested readers.

...

Continue reading →


corrr 0.2.1 now on CRAN

@drsimonj here to discuss the latest CRAN release of corrr (0.2.1), a package for exploring correlations in a tidy R framework. This post will describe corrr features added since version 0.1.0.

You can install or update to this latest version directly from CRAN by running:

install.packages("corrr")

Let’s load corrr into our workspace and create a correlation data frame of the mtcars data set to work with:

library(corrr)
rdf <- correlate(mtcars)
rdf
>  A tibble: 11 × 12
>    rowname        mpg        cyl       disp         hp        drat
>      <chr>      <dbl>      <dbl>      <dbl>      <dbl>       <dbl>
> 1      mpg         NA -0.8521620 -0.8475514 -0.7761684  0.68117191
> 2      cyl -0.8521620         NA  0.9020329  0.8324475 -0.69993811
> 3     disp -0.8475514  0.9020329         NA  0.7909486 -0.71021393
> 4       hp -0.7761684  0.8324475  0.7909486         NA -0.44875912
> 5
...
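As a taste of what a correlation data frame makes easy, here are a few corrr verbs (illustrative examples, not necessarily the features new in 0.2.1):

```r
library(corrr)

rdf <- correlate(mtcars)

rdf %>% focus(mpg)             # correlations of every other variable with mpg
rdf %>% shave() %>% fashion()  # lower triangle only, printed cleanly
rdf %>% rplot()                # visualize the correlations
```

Because rdf is an ordinary tibble, it also plays nicely with dplyr verbs like filter() and select().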

Continue reading →


ourworldindata: an R data package

@drsimonj here to introduce ourworldindata: a new data package for R.

The ourworldindata package contains data frames that are generated by combining datasets from OurWorldInData.org: “an online publication that shows how living conditions around the world are changing”. The data frames in this package have undergone tidying so that they are suited to quick analysis in R. The purpose of this package is to serve as a central R resource for these datasets so that they might be used for the likes of practice or exploratory data analysis in a replicable manner.
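Getting started is the usual GitHub install; the repository path below is an assumption based on the author's handle, so check the package's page for the canonical location:

```r
# Assumed GitHub location -- verify against the package's own documentation
# install.packages("devtools")  # if devtools isn't installed yet
devtools::install_github("drsimonj/ourworldindata")
library(ourworldindata)
```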

Thanks to the OurWorldInData team

Before discussing the package, I’d like to express my thanks to Max Roser and the rest of the OurWorldInData team, who collate the data sets that form the foundation of this package. If you appreciate their work and make use of this package, please consider supporting OurWorldInData. Personal...

Continue reading →


Running a model on separate groups

Ever wanted to run a model on separate groups of data? Read on!

Here’s an example of a regression model fitted to separate groups: predicting a car’s Miles per Gallon with various attributes, but separately for automatic and manual cars.

library(tidyverse)
library(broom)
mtcars %>% 
  nest(-am) %>% 
  mutate(am = factor(am, levels = c(0, 1), labels = c("automatic", "manual")),
         fit = map(data, ~ lm(mpg ~ hp + wt + disp, data = .)),
         results = map(fit, augment)) %>% 
  unnest(results) %>% 
  ggplot(aes(x = mpg, y = .fitted)) +
    geom_abline(intercept = 0, slope = 1, alpha = .2) +  # Line of perfect fit
    geom_point() +
    facet_grid(am ~ .) +
    labs(x = "Miles Per Gallon", y = "Predicted Value") +
    theme_bw()

[Image: init-example-1.png]

Getting Started

A few things to do/keep in mind before getting started…

A lot of detail for novices

I started this post after working on a larger...

Continue reading →


Five ways to calculate internal consistency

Let’s get psychometric and learn a range of ways to compute the internal consistency of a test or questionnaire in R. We’ll be covering:

  • Average inter-item correlation
  • Average item-total correlation
  • Cronbach’s alpha
  • Split-half reliability (adjusted using the Spearman–Brown prophecy formula)
  • Composite reliability

If you’re unfamiliar with any of these, here are some resources to get you up to speed:

  • https://en.wikipedia.org/wiki/Internal_consistency
  • https://en.wikipedia.org/wiki/Cronbach%27s_alpha
  • http://www.socialresearchmethods.net/kb/reltypes.php
  • http://zencaroline.blogspot.com.au/2007/06/composite-reliability.html
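To make the first couple of measures concrete before reaching the real data, here is a sketch on simulated item responses. The psych package is an assumed helper here, not necessarily what the post uses:

```r
library(psych)  # for alpha(); an assumed helper, not necessarily the post's choice

set.seed(1)
# Fake 5-item questionnaire: one latent score plus per-item noise
latent <- rnorm(100)
items  <- as.data.frame(replicate(5, latent + rnorm(100)))

# Average inter-item correlation
r <- cor(items)
mean(r[lower.tri(r)])

# Cronbach's alpha
alpha(items)$total$raw_alpha
```

Because every item shares the same latent score, both values should come out comfortably positive; uncorrelated noise items would drive them toward zero.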

The data

For this post, we’ll be using data on a Big 5 measure of personality that is freely available from Personality Tests. You can download the data yourself HERE, or run the following code to handle the download and save the data...

Continue reading →


Visualising Residuals

Residuals. Now there’s something to get you out of bed in the morning!

OK, maybe residuals aren’t the sexiest topic in the world. Still, they’re an essential means of identifying potential problems in any statistical model. For example, the residuals from a linear regression model should be homoscedastic. If not, this indicates an issue with the model such as non-linearity in the data.
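A minimal residuals-vs-fitted plot, the usual first check for homoscedasticity (a sketch, not the post's exact code):

```r
library(broom)
library(ggplot2)

fit <- lm(mpg ~ hp + wt, data = mtcars)

# augment() attaches .fitted and .resid columns to the model data
ggplot(augment(fit), aes(.fitted, .resid)) +
  geom_hline(yintercept = 0, linetype = "dashed") +  # zero-residual line
  geom_point() +
  labs(x = "Fitted values", y = "Residuals")
```

If the model fits well, the points should scatter evenly around the dashed line with no funnel or curve.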

This post will cover various methods for visualising residuals from regression-based models. Here are some examples of the visualisations that we’ll be creating:

[Image: init-example1-1.png]

[Image: init-example2-1.png]

[Image: init-example3-1.png]

What you need to know

To get the most out of this post, there are a few things you should be aware of. Firstly, if you’re unfamiliar with the meaning of residuals, or what seems to be going on here, I’d recommend that you first do some introductory reading on the topic. Some places to get started are Wikipedia and this...

Continue reading →