Chapter 9 Advanced Topics

We have covered a lot of material in the previous chapters, but have only scratched the surface of what R can do for us. To wrap up, we will look briefly at a few of its more advanced capabilities.

9.1 Learning Objectives

  • Use reticulate to share data between R and Python.
  • Use reticulate to call Python functions from R code and vice versa.
  • Run Python scripts directly from R programs.
  • Correctly identify the most commonly used object-oriented programming system in R.
  • Explain what attributes are and correctly set and query objects’ attributes, class, and dimensions.
  • Explain how to define a new method for a class.
  • Describe and implement the three functions that should be written for any user-defined class.
  • Query a relational database from R.

9.2 How can I use Python with R?

You can put Python code in R Markdown documents:

print("Hello R")
Hello R

but how can those chunks interact with your R code and vice versa? The answer is a package called reticulate that provides two-way communication between Python and R. To use it, run install.packages("reticulate"). By default, it uses the system-default Python:


but you can configure it to use different versions, or to use a virtualenv or a Conda environment—see the documentation for details.
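For example, reticulate provides configuration functions for each of these cases. The sketch below assumes reticulate is installed; the interpreter path and environment names are hypothetical placeholders.

```r
library(reticulate)

# Pick one of these before Python is first used in the session:
# use_python("/usr/local/bin/python")  # a specific interpreter (hypothetical path)
# use_virtualenv("my-analysis")        # a virtualenv (hypothetical name)
# use_condaenv("my-analysis")          # a Conda environment (hypothetical name)

py_config()  # reports which Python reticulate has settled on
```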

If you want to run the Pythonic bits of code we present as well as the R, run install.packages("reticulate") and then set the RETICULATE_PYTHON environment variable to point at the version of Python you want to use before you launch RStudio. This is necessary because you may have a system-installed version somewhere like /usr/bin/python and a conda-managed version in ~/anaconda3/bin/python.

Figure 9.1: XKCD on Python Environments

9.2.1 How can I access data across languages?

The most common way to use reticulate is to do some calculations in Python and then use the results in R or vice versa. To show how this works, let’s read our infant HIV data into a Pandas data frame:

import pandas
data = pandas.read_csv('results/infant_hiv.csv')
data.head()
  country  year  estimate  hi  lo
0     AFG  2009       NaN NaN NaN
1     AFG  2010       NaN NaN NaN
2     AFG  2011       NaN NaN NaN
3     AFG  2012       NaN NaN NaN
4     AFG  2013       NaN NaN NaN

All of our Python variables are available in our R session as part of the py object, so py$data is our data frame inside a chunk of R code:

head(py$data)
  country year estimate  hi  lo
1     AFG 2009      NaN NaN NaN
2     AFG 2010      NaN NaN NaN
3     AFG 2011      NaN NaN NaN
4     AFG 2012      NaN NaN NaN
5     AFG 2013      NaN NaN NaN
6     AFG 2014      NaN NaN NaN

reticulate handles type conversions automatically, though there are a few tricky cases: for example, the number 9 is a float in R, so if you want an integer in Python, you have to add the trailing L (for “long”) and write it 9L.
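One way to see the difference for yourself (a sketch, assuming reticulate is installed and configured with a working Python) is to convert values explicitly with reticulate's r_to_py and inspect what comes back:

```r
library(reticulate)

# r_to_py() converts an R value into the corresponding Python object;
# printing the result shows which Python type the value became
r_to_py(9)   # an R double becomes a Python float
r_to_py(9L)  # an R integer becomes a Python int
```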

On the other hand, reticulate does not translate between 0-based and 1-based indexing. Suppose we create a character vector in R:

elements = c('hydrogen', 'helium', 'lithium', 'beryllium')

Hydrogen is in position 1 in R:

elements[1]
[1] "hydrogen"

but position 0 in Python:

r.elements[0]
'hydrogen'
Note our use of the object r in our Python code: just as py$whatever gives us access to Python objects in R, r.whatever gives us access to R objects in Python.

9.2.2 How can I call functions across languages?

We don’t have to run Python code, store values in a variable, and then access that variable from R: we can call the Python directly (or vice versa). For example, we can use Python’s random number generator in R as follows:

pyrand <- import("random")
pyrand$gauss(0, 1)
[1] -1.331085

(There’s no reason to do this—R’s random number generator is just as strong—but it illustrates the point.)

We can also source Python scripts. For example, suppose that a Python script contains this function:

#!/usr/bin/env python

import pandas as pd

def get_countries(filename):
    data = pd.read_csv(filename)
    return data.country.unique()
We can run that script using source_python:


There is no output because all the script did was define a function. By default, that function and all other top-level variables defined in the script are now available in R:

get_countries('results/infant_hiv.csv')
  [1] "AFG" "AGO" "AIA" "ALB" "ARE" "ARG" "ARM" "ATG" "AUS" "AUT" "AZE"
 [12] "BDI" "BEL" "BEN" "BFA" "BGD" "BGR" "BHR" "BHS" "BIH" "BLR" "BLZ"
 [23] "BOL" "BRA" "BRB" "BRN" "BTN" "BWA" "CAF" "CAN" "CHE" "CHL" "CHN"
 [34] "CIV" "CMR" "COD" "COG" "COK" "COL" "COM" "CPV" "CRI" "CUB" "CYP"
 [45] "CZE" "DEU" "DJI" "DMA" "DNK" "DOM" "DZA" "ECU" "EGY" "ERI" "ESP"
 [56] "EST" "ETH" "FIN" "FJI" "FRA" "FSM" "GAB" "GBR" "GEO" "GHA" "GIN"
 [67] "GMB" "GNB" "GNQ" "GRC" "GRD" "GTM" "GUY" "HND" "HRV" "HTI" "HUN"
 [78] "IDN" "IND" "IRL" "IRN" "IRQ" "ISL" "ISR" "ITA" "JAM" "JOR" "JPN"
 [89] "KAZ" "KEN" "KGZ" "KHM" "KIR" "KNA" "KOR" "LAO" "LBN" "LBR" "LBY"
[100] "LCA" "LKA" "LSO" "LTU" "LUX" "LVA" "MAR" "MDA" "MDG" "MDV" "MEX"
[111] "MHL" "MKD" "MLI" "MLT" "MMR" "MNE" "MNG" "MOZ" "MRT" "MUS" "MWI"
[122] "MYS" "NAM" "NER" "NGA" "NIC" "NIU" "NLD" "NOR" "NPL" "NRU" "NZL"
[133] "OMN" "PAK" "PAN" "PER" "PHL" "PLW" "PNG" "POL" "PRK" "PRT" "PRY"
[144] "PSE" "ROU" "RUS" "RWA" "SAU" "SDN" "SEN" "SGP" "SLB" "SLE" "SLV"
[155] "SOM" "SRB" "SSD" "STP" "SUR" "SVK" "SVN" "SWE" "SWZ" "SYC" "SYR"
[166] "TCD" "TGO" "THA" "TJK" "TKM" "TLS" "TON" "TTO" "TUN" "TUR" "TUV"
[177] "TZA" "UGA" "UKR" "UNK" "URY" "USA" "UZB" "VCT" "VEN" "VNM" "VUT"
[188] "WSM" "YEM" "ZAF" "ZMB" "ZWE"

There is one small pothole in this. When the script is run, the special Python variable __name__ is set to '__main__', i.e., the script thinks it is being called from the command line. If it includes a conditional block to handle command-line arguments like this:

if __name__ == '__main__':
    input_file, output_files = sys.argv[1], sys.argv[2:]
    main(input_file, output_files)

then that block will be executed, but will fail because sys.argv won’t contain the arguments the script expects.
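One defensive pattern (a sketch, not the only option) is to make the guard check for actual arguments as well, so that sourcing the script from R skips the command-line block instead of crashing. The main function below is a hypothetical placeholder.

```python
#!/usr/bin/env python
import sys

def main(input_file, output_files):
    # hypothetical driver: process one input file and several output files
    print(input_file, output_files)

# Only run the command-line logic if arguments were actually supplied;
# when the script is sourced from R, sys.argv holds no arguments,
# so this block is skipped rather than failing.
if __name__ == '__main__' and len(sys.argv) > 1:
    input_file, output_files = sys.argv[1], sys.argv[2:]
    main(input_file, output_files)
```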

9.3 How does object-oriented programming work in R?

Programmers spend a great deal of their time trying to create order out of chaos, and the rest of their time inventing new ways to create more chaos. Object-oriented programming serves both needs well: it allows good software designers to create marvels, and less conscientious or less experienced ones to manufacture horrors.

R has not one, not two, but at least three different frameworks for object-oriented programming. By far the most widely used is S3 (because it was first introduced with Version 3 of S, the language from which R is derived). Unlike the approaches used in Python and similarly pedestrian languages, S3 does not require users to define classes. Instead, they add attributes to data, then write specialized versions of generic functions to process data identified by those attributes. Since attributes can be used in other ways as well, we will start by exploring them.

9.3.1 What are attributes?

Let’s begin by creating a matrix containing the first few hundreds:

values <- 100 * 1:9 # creates c(100, 200, ..., 900)
m <- matrix(values, nrow = 3, ncol = 3)
     [,1] [,2] [,3]
[1,]  100  400  700
[2,]  200  500  800
[3,]  300  600  900

Behind the scenes, R continues to store our nine values as a vector. However, it adds an attribute called class to the vector to identify it as a matrix:

class(m)
[1] "matrix"

and another attribute called dim to store its dimensions as a 2-element vector:

dim(m)
[1] 3 3

An object’s attributes are simply a set of name-value pairs. We can find out what attributes are present using attributes and show or set individual attributes using attr:

attr(m, "prospects") <- "dismal"
attributes(m)
$dim
[1] 3 3

$prospects
[1] "dismal"
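Since attributes are just name-value pairs, base R also lets us attach them at creation time with structure, a shortcut worth knowing:

```r
# structure() builds the vector and sets its attributes in one step,
# equivalent to creating the matrix and calling attr() afterwards
m2 <- structure(100 * 1:9, dim = c(3, 3), prospects = "dismal")
attributes(m2)
```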

What are the type and attributes of a tibble?

t <- tribble(
  ~a, ~b,
  1, 2,
  3, 4)
typeof(t)
[1] "list"
attributes(t)
$names
[1] "a" "b"

$row.names
[1] 1 2

$class
[1] "tbl_df"     "tbl"        "data.frame"

This tells us that a tibble is stored as a list (the first line of output), and that it has an attribute called names that stores the names of its columns, another called row.names that stores the names of its rows (a feature we should ignore), and three classes. These classes tell R what functions to search for when we are (for example) asking for the length of a tibble (which is the number of columns it contains, since a tibble is stored as a list of columns):

length(t)
[1] 2

9.3.2 How are classes represented?

To show how classes and generic functions work together, let’s customize the way that 2D coordinates are converted to strings. First, we create two coordinate vectors:

first <- c(0.5, 0.7)
class(first) <- "two_d"
print(first)
[1] 0.5 0.7
attr(,"class")
[1] "two_d"
second <- c(1.3, 3.1)
class(second) <- "two_d"
print(second)
[1] 1.3 3.1
attr(,"class")
[1] "two_d"

Separately, we define the behavior of toString for such objects:

toString.two_d <- function(obj){
  paste0("<", obj[1], ", ", obj[2], ">")
}

toString(first)
[1] "<0.5, 0.7>"
toString(second)
[1] "<1.3, 3.1>"

S3’s protocol is simple: given a function F and an object of class C, S3 looks for a function named F.C. If it doesn’t find one, it looks at the object’s next class (assuming it has more than one); once its user-assigned classes are exhausted, it uses whatever function the system has defined for its base type (in this case, numeric vector). We can trace this process by loading the sloop package and calling s3_dispatch:

library(sloop)
s3_dispatch(toString(first))
=> toString.two_d
 * toString.default

Compare this with calling toString on a plain old numeric vector:

s3_dispatch(toString(c(7.1, 7.2)))
=> toString.default

The specialized functions associated with a generic function like toString are called methods. Unlike languages that require methods to be defined all together as part of a class, S3 allows us to add methods when and as we see fit. But that doesn’t mean we should: minds confined to three dimensions of space and one of time are simply not capable of comprehending complex class hierarchies. Instead, we should always write three functions that work together for a class like two_d:

  • A constructor called new_two_d that creates objects of our class.
  • An optional validator called validate_two_d that checks the consistency and correctness of an object’s values.
  • An optional helper, simply called two_d, that most users will call to create and validate objects.

The constructor’s first argument should always be the base object (in our case, the two-element vector). It should also have one argument for each attribute the object is to have, if any. Unlike matrices, our 2D points have no extra attributes, so our constructor needs no extra arguments. Crucially, the constructor checks the type of its arguments to ensure that the object has at least some chance of being valid.

new_two_d <- function(coordinates){
  class(coordinates) <- "two_d"
  coordinates
}

example <- new_two_d(c(4.4, -2.2))
toString(example)
[1] "<4.4, -2.2>"

Validators are only needed when checks on data correctness and consistency are expensive. For example, if we were to define a class to represent sorted vectors, checking that each element is no less than its predecessor could take a long time for very long vectors. To illustrate this, we will check that we have exactly two coordinates; in real code, we would probably include this (inexpensive) check in the constructor.

validate_two_d <- function(coordinates) {
  stopifnot(length(coordinates) == 2)
  stopifnot(class(coordinates) == "two_d")
}

validate_two_d(example)    # should succeed silently
validate_two_d(c(1, 3))    # should fail
Error in validate_two_d(c(1, 3)): class(coordinates) == "two_d" is not TRUE
validate_two_d(c(2, 2, 2)) # should also fail
Error in validate_two_d(c(2, 2, 2)): length(coordinates) == 2 is not TRUE

The third and final function in our trio provides a user-friendly way to construct objects of our new class. It should call the constructor and the validator (if one exists), but should also provide a richer set of defaults, better error messages, and so on. To illustrate this, we shall allow the user to provide either one argument (which must be a two-element vector) or two (which must each be numeric):

two_d <- function(...){
  args <- list(...)
  if (length(args) == 1) {
    args <- args[[1]]    # extract original value
  } else if (length(args) == 2) {
    args <- unlist(args) # convert list to vector
  }
  result <- new_two_d(args)
  validate_two_d(result)
  result
}

here <- two_d(10.1, 11.2)
toString(here)
[1] "<10.1, 11.2>"
there <- two_d(c(15.6, 16.7))
toString(there)
[1] "<15.6, 16.7>"

9.3.3 How does inheritance work?

We said above that an object can have more than one class, and that S3 searches the classes in order when it wants to find a method to call. Methods can also trigger invocation of other methods explicitly in order to supplement, rather than replace, the behavior of other classes. To show how this works, we shall look at that classic of object-oriented design: shapes. (The safe kind, of course, not those whose non-Euclidean angles have placed such intolerable stress on the minds of so many of our colleagues over the years.) We start by defining a polygon class:

new_polygon <- function(coords, name) {
  points <- map(coords, two_d)
  class(points) <- "polygon"
  attr(points, "name") <- name
  points
}

toString.polygon <- function(poly) {
  paste0(attr(poly, "name"), ": ", paste0(map(poly, toString), collapse = ", "))
}

right <- new_polygon(list(c(0, 0), c(1, 0), c(0, 1)), "triangle")
toString(right)
[1] "triangle: <0, 0>, <1, 0>, <0, 1>"

Now we will add colored shapes:

new_colored_polygon <- function(coords, name, color) {
  object <- new_polygon(coords, name)
  attr(object, "color") <- color
  class(object) <- c("colored_polygon", class(object))
  object
}

pinkish <- new_colored_polygon(list(c(0, 0), c(1, 0), c(1, 1)), "triangle", "roseate")
class(pinkish)
[1] "colored_polygon" "polygon"        
toString(pinkish)
[1] "triangle: <0, 0>, <1, 0>, <1, 1>"

So far so good: since we have not defined a method to handle colored polygons specifically, we get the behavior for a regular polygon. Let’s add another method that supplements the behavior of the existing method:

toString.colored_polygon <- function(poly) {
  paste0(toString.polygon(poly), "+ color = ", attr(poly, "color"))
}

toString(pinkish)
[1] "triangle: <0, 0>, <1, 0>, <1, 1>+ color = roseate"
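Calling toString.polygon by name works, but couples the child method to its parent. An equivalent sketch uses NextMethod(), which asks S3 to continue dispatch along the object's remaining classes:

```r
toString.colored_polygon <- function(poly) {
  # NextMethod() invokes the next method in the dispatch chain
  # (here, toString.polygon) without naming it explicitly
  paste0(NextMethod(), "+ color = ", attr(poly, "color"))
}
```

If we later inserted another class between colored_polygon and polygon, this version would keep working unchanged.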

In practice, we will almost always place all of the methods associated with a class in the same file as its constructor, validator, and helper. The time has finally come for us to explore projects and packages.

9.4 How can I write web applications in R?

R has this awesome gnarly web programming framework called Shiny. It uses reactive variables (not, whatever you may have heard, sympathetic magic or quantum entanglement) to update the application’s interface when data changes. You should, like, totally check it out.
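To give a flavor of it, here is a minimal sketch of a Shiny app (assuming the shiny package is installed); every time the slider moves, the renderText expression re-runs automatically because it reads the reactive value input$n:

```r
library(shiny)

# interface: one slider and one text output
ui <- fluidPage(
  sliderInput("n", "How many eldritch horrors?", min = 1, max = 100, value = 10),
  textOutput("message")
)

# server: output$message reacts to changes in input$n
server <- function(input, output) {
  output$message <- renderText(paste("Summoning", input$n, "horrors..."))
}

# shinyApp(ui, server)  # uncomment to launch the app in a browser
```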

9.5 How can I work with relational databases in R?

Data frames and database tables go together as naturally as chocolate and the tears of our fallen foes. As in Python and other languages, there is a standard interface for connecting to and querying relational databases; each database is then supported by a package that implements that interface. This doesn’t completely hide the differences between databases—we must still worry about the quirks of various SQL dialects—but it does keep the R side of things simple.

This tutorial uses the SQLite database and the RSQLite interface package. The former is included with the latter, so install.packages("RSQLite") will give you everything you need. We assume that you already speak enough SQL to get yourself into trouble; if you do not, this tutorial is a good place to start.
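The general shape of a DBI session is sketched below using SQLite; other databases swap in their own driver object and connection arguments, while the dbConnect/query/dbDisconnect pattern stays the same:

```r
library(DBI)

db <- dbConnect(RSQLite::SQLite(), ":memory:")  # driver plus database-specific arguments
dbListTables(db)                                # what tables does the database contain?
dbDisconnect(db)                                # always release the connection when done
```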

9.5.1 How can I get data from a database?

Suppose we have a small database in data/example.db containing survey data salvaged from a series of doomed expeditions to the Antarctic in the 1920s and 1930s. The database contains four tables:

Person: people who took readings.

person_id personal_name family_name
dyer William Dyer
pb Frank Pabodie
lake Anderson Lake
roe Valentina Roerich
danforth Frank Danforth

Site: locations where readings were taken.

site_id lat long
DR-1 -49.85 -128.57
DR-3 -47.15 -126.72
MSK-4 -48.87 -123.4

Visited: when readings were taken at specific sites.

visit_id site_id dated
619 DR-1 1927-02-08
622 DR-1 1927-02-10
734 DR-3 1930-01-07
735 DR-3 1930-01-12
751 DR-3 1930-02-26
752 DR-3 -null-
837 MSK-4 1932-01-14
844 DR-1 1932-03-22

Measurements: the actual readings.

visit_id visitor quantity reading
619 dyer rad 9.82
619 dyer sal 0.13
622 dyer rad 7.8
622 dyer sal 0.09
734 pb rad 8.41
734 lake sal 0.05
734 pb temp -21.5
735 pb rad 7.22
735 -null- sal 0.06
735 -null- temp -26.0
751 pb rad 4.35
751 pb temp -18.5
751 lake sal 0.1
752 lake rad 2.19
752 lake sal 0.09
752 lake temp -16.0
752 roe sal 41.6
837 lake rad 1.46
837 lake sal 0.21
837 roe sal 22.5
844 roe rad 11.25

Let’s get the data about the people into a data frame:

library(DBI)
db <- dbConnect(RSQLite::SQLite(), here::here("data", "example.db"))
dbGetQuery(db, "select * from Person;")
  person_id personal_name family_name
1      dyer       William        Dyer
2        pb         Frank     Pabodie
3      lake      Anderson        Lake
4       roe     Valentina     Roerich
5  danforth         Frank    Danforth

That seems simple enough: the database connection is the first argument to dbGetQuery, the query itself is the second, and the result is a data frame whose column names correspond to the names of the fields in the database table. What if we want to parameterize our query? Inside the text of the query, we use :name as a placeholder for a query parameter, then pass a list of name-value pairs to specify what we actually want:

dbGetQuery(db,
           "select * from Measurements where quantity = :desired",
           params = list(desired = "rad"))
  visit_id person_id quantity reading
1      619      dyer      rad    9.82
2      622      dyer      rad    7.80
3      734        pb      rad    8.41
4      735        pb      rad    7.22
5      751        pb      rad    4.35
6      752      lake      rad    2.19
7      837      lake      rad    1.46
8      844       roe      rad   11.25

Do not use glue or some other kind of string interpolation to construct database queries, as this can leave you open to SQL injection attacks and other forms of digital damnation.

If you expect a large set of results, it’s best to page through them:

results <- dbSendQuery(db, "select * from Measurements limit 15;")
while (!dbHasCompleted(results)) {
  chunk <- dbFetch(results, n = 3) # artificially low for tutorial purposes
  print(chunk)
}
  visit_id person_id quantity reading
1      619      dyer      rad    9.82
2      619      dyer      sal    0.13
3      622      dyer      rad    7.80
  visit_id person_id quantity reading
1      622      dyer      sal    0.09
2      734        pb      rad    8.41
3      734      lake      sal    0.05
  visit_id person_id quantity reading
1      734        pb     temp  -21.50
2      735        pb      rad    7.22
3      735      <NA>      sal    0.06
  visit_id person_id quantity reading
1      735      <NA>     temp  -26.00
2      751        pb      rad    4.35
3      751        pb     temp  -18.50
Warning in result_fetch(res@ptr, n = n): Column `reading`: mixed type,
first seen values of type real, coercing other values of type string
  visit_id person_id quantity reading
1      751      lake      sal    0.00
2      752      lake      rad    2.19
3      752      lake      sal    0.09

Once the loop has completed, we should call dbClearResult(results) to release the result set.

9.5.2 How can I populate databases with R?

Data scientists spend most of their time reading data, but someone has to create it. RSQLite makes it easy to map a data frame directly to a database table; to show how it works, we will create an in-memory database:

colors <- tribble(
  ~name, ~red, ~green, ~blue,
  'black', 0, 0, 0,
  'yellow', 255, 255, 0,
  'aqua', 0, 255, 255,
  'fuchsia', 255, 0, 0
)
db <- dbConnect(RSQLite::SQLite(), ':memory:')
dbWriteTable(db, "colors", colors)

Let’s see what the combination of R and SQLite has done with our data and the types thereof:

dbGetQuery(db, "select * from colors;")
     name red green blue
1   black   0     0    0
2  yellow 255   255    0
3    aqua   0   255  255
4 fuchsia 255     0    0

Good: the types have been guessed correctly.

But what about dates?

appointments <- tribble(
  ~who, ~when,
  'Dyer', '1927-03-01',
  'Peabody', '1927-05-05'
) %>% mutate(when = lubridate::as_date(when))
dbWriteTable(db, "appointments", appointments)
dbGetQuery(db, "select * from appointments;")
      who   when
1    Dyer -15647
2 Peabody -15582

What fresh hell is this? After considerable trial and error, we discover that our dates have been returned to us as the number of days since January 1, 1970:

dbExecute(db,
          "insert into appointments values('Testing', :the_date);",
           params = list(the_date = lubridate::as_date('1971-01-01')))
[1] 1
dbGetQuery(db, "select * from appointments where who = 'Testing';")
      who when
1 Testing  365

There is no point screaming: those who might pity you cannot hear, and those who can hear will definitely not pity you.
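One way out (a sketch; converting inside the query or with lubridate would work as well) is to turn the day counts back into dates after reading, since as.Date accepts a numeric offset from an origin:

```r
# SQLite hands dates back as days since 1970-01-01, so convert explicitly
appts <- dbGetQuery(db, "select * from appointments;")
appts$when <- as.Date(appts$when, origin = "1970-01-01")
appts
```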

9.6 Key Points

  • The reticulate library allows R programs to access data in Python programs and vice versa.
  • Use py$whatever to access a top-level Python variable from R.
  • Use r.whatever to access a top-level R definition from Python.
  • R is always indexed from 1 (even in Python) and Python is always indexed from 0 (even in R).
  • Numbers in R are floating point by default, so use a trailing ‘L’ to force a value to be an integer.
  • A Python script run from an R session believes it is the main script, i.e., __name__ is '__main__' inside the Python script.
  • S3 is the most commonly used object-oriented programming system in R.
  • Every object can store metadata about itself in attributes, which are set and queried with attr.
  • The dim attribute stores the dimensions of a matrix (which is physically stored as a vector).
  • The class attribute of an object defines its class or classes (it may have several character entries).
  • When F(X, ...) is called, and X has class C, R looks for a function called F.C (the . is just a naming convention).
  • If an object has multiple classes in its class attribute, R looks for a corresponding method for each in turn.
  • Every user-defined class C should have functions new_C (to create it), validate_C (to validate its integrity), and C (to create and validate).
  • Use the DBI package to work with relational databases.
  • Use DBI::dbConnect(...) with database-specific parameters to connect to a specific database.
  • Use dbGetQuery(connection, "query") to send an SQL query string to a database and get a data frame of results.
  • Parameterize queries using :name as a placeholder in the query and params = list(name = value) as a third parameter to dbGetQuery to specify actual values.
  • Use dbFetch in a while loop to page results.
  • Use dbWriteTable to write an entire data frame to a table, and dbExecute to execute a single insertion statement.
  • Dates… why did it have to be dates?