David Spiegelhalter has written in several places about trust in algorithms, e.g. "Should We Trust Algorithms?", Harvard Data Science Review, Jan 31, 2020, DOI: 10.1162/99608f92.cb91a35a.
The general idea is that we shouldn’t think just about trust in algorithms but also trustworthiness. He provides a checklist of questions he would like to ask:
Is it any good when tried in new parts of the real world?
Would something simpler, and more transparent and robust, be just as good?
Could I explain how it works (in general) to anyone who is interested?
Could I explain to an individual how it reached its conclusion in their particular case?
Does it know when it is on shaky ground, and can it acknowledge uncertainty?
Do people use it appropriately, with the right level of skepticism?
Does it actually help in practice?
Excel vs R
A common criticism I've heard from both sides of the argument is that R (or Excel) is not trusted because it is not clear how the model has been implemented. Excel users claim the WYSIWYG interface is transparent, while R users claim that it is exactly this interface that makes interrogating and testing the model difficult, and therefore not transparent.
Can we use David Spiegelhalter’s ideas to compare Excel and R for doing HTA?
The previous list concerns the underlying algorithms, the data they're used on and how the results are used. There is no mention of the implementation, which is what we are interested in here.
So, borrowing from above, a possible Excel vs R check list could be:
Is it able to simply implement a given model?
Can someone easily understand the implementation (against the mathematical description)?
Is the flow through the model clear?
Are there tests and checks built in to the model?
Are the inputs constrained or errors produced for bad values?
Of course, these elements are interrelated. If a model is easy to implement in software then it is more likely to be easy to understand and to follow its pipeline. So there is an assumption that the model builder has implemented the model in the best(ish) way appropriate for that software.
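As a small illustration of the last question on the checklist, constraining inputs is easy to make explicit in R. This is only a sketch with a hypothetical model function, not a prescribed pattern:

# hypothetical cost-effectiveness model function with explicit input checks:
# bad values fail loudly rather than silently propagating through the model
run_model <- function(n_cycles, p_transition, cost_per_cycle) {
  stopifnot(
    n_cycles > 0,
    n_cycles == round(n_cycles),
    p_transition >= 0, p_transition <= 1,
    cost_per_cycle >= 0
  )
  # ... model calculations ...
}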
There are other benefits to using R, such as speed, extensibility and reuse, but these aren't directly linked to trustworthiness.
Listening to a recent talk, I realised that there doesn't seem to be a common name for something commonly used in health economic evaluation.
There are the staple plots which everyone will recognise: the cost-effectiveness plane and the cost-effectiveness acceptability curve (CEAC). The CEAC gives the probability of being cost-effective for different willingness to pay thresholds. This could similarly be done for any other model parameter e.g. say the sensitivity of a diagnostic test.
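As a reminder of the mechanics, the CEAC can be computed directly from probabilistic sensitivity analysis output. A minimal sketch, assuming vectors delta_e and delta_c of incremental QALYs and incremental costs across the PSA samples:

# sketch: CEAC from PSA samples (delta_e, delta_c are assumed inputs)
wtp <- seq(0, 50000, by = 500)  # willingness-to-pay grid
prob_ce <- sapply(wtp, function(k) mean(k * delta_e - delta_c > 0))  # P(incremental net benefit > 0)
plot(wtp, prob_ce, type = "l",
     xlab = "Willingness to pay", ylab = "Probability cost-effective")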
We can further show the probability of being cost-effective for two parameters on a grid of points. This could be a mesh, contour, surface, hex plot or whatever is your favourite.
The plot above is a related previous plot of mine incorporating a third parameter, showing the unit cost of a test at which the probability of being cost-effective is greater than 50% for different test sensitivities and specificities.
The point here is that this sort of 2D plot doesn't appear to have an agreed name in the health economics literature in the same way as the CEAC. Perhaps a cost-effectiveness acceptability surface, or CEAS?
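For illustration, a minimal ggplot2 sketch of such a surface, assuming a dataframe ceac_grid with columns sens, spec and prob_ce holding the probability of being cost-effective evaluated on a grid of test sensitivity and specificity values:

library(ggplot2)

# sketch of a "cost-effectiveness acceptability surface" (CEAS);
# ceac_grid is an assumed dataframe of PSA results on a (sens, spec) grid
ggplot(ceac_grid, aes(x = sens, y = spec, fill = prob_ce)) +
  geom_tile() +
  geom_contour(aes(z = prob_ce), breaks = 0.5, colour = "white") +  # 50% contour
  labs(x = "Test sensitivity",
       y = "Test specificity",
       fill = "Probability\ncost-effective")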
On January 21st – 22nd 2020 at Queen’s University Belfast, we hosted the second health economics in R event – a workshop/hackathon/data dive mash-up. (Read about the first one here).
Generally, day one was aimed more at people new to health economics and R. Day two was aimed more at those familiar with health economic evaluation who were interested in creating new R tools to tackle problems in health economics, large and small.
There was a lot of interest in the event beforehand and it was oversubscribed. Attendees came from academia, government, consultancies and industry, including UCL, University of Bristol, Glasgow and NICE amongst others. In particular, we had a lot of attendees from Ireland and outside of London which was one of our intentions. The Belfast hackathon website has more details about the structure of the two days and the aims.
Day 1
The day was structured as a series of introductory lectures on health economics and healthcare evaluation. These were led by Dr Felicity Lamrock and covered cost-effectiveness modelling, uncertainty and Markov modelling. The day ended with a demonstration of all of these things in R using the heemod package. The slides can be found on GitHub here.
It was also great to socialise with everyone at the evening meal at the end of the first day, following a hard day's work, at the lovely Riddel Hall.
Day 2
The hackathon started with brief project pitches by several attendees. Following this, the participants split into groups to work on these or other projects proposed within the groups.
The hackers were supported by our expert research software engineers Rob Ashton, Igor Siveroni and Giovanni Charles from Imperial College London.
Projects included:
pdf2data – To take a pdf table of input data (from an HTA report) and wrangle it into a form for use in a cost-effectiveness model in R.
We also had focused, advanced R skills sessions to up-skill current R users on Git and GitHub, Shiny, and package building in RStudio.
Some of the final day participants.
These meetings have been made possible by generous support from the Medical Research Council, Centre for Global Infectious Disease Analysis (Grant Reference MR/R015600/1), NIHR Health Protection Research Unit in Modelling Methodology and Imperial College London.
I have been aware of RStudio AddIns for a while now but never really saw the usefulness in them (see https://rstudio.github.io/rstudioaddins/). To me, the benefit of using RStudio is that coding in R is more fun, and adding more menus and clicking, taking you away from the keyboard, seemed like a step backwards. I know that some of my Vim colleagues would agree!
However, I came across a GitHub repo which is an AddIn of AddIns. That is, it has a list of lots of them (https://github.com/daattali/addinslist). What I found potentially useful were those that auto-generate some template code, like the one that generates Roxygen code from functions via the dropdown menu.
To this end, I've written a small AddIn myself which generates some code for running JAGS from R. Each session in which I do this uses more or less the same boilerplate code, so I realised that this could actually be useful. It's something like this:
cat("model { for (i in 1:N) { }") data <- list("fdfd") params <- list("fdfd") nc <- 1 ns <- 1000 nb <- 100 nt <- 1 bug_out <- bugs(data = dat, inits = inits, parameters.to.save = params, n.chains = nc, n.iter = ns, n.burnin = nb, n.thin = nt, DIC = TRUE working.directory = getwd)
I have since had my AddIn included in the addinslist table, so you can see for yourself whether it is in fact useful. Visit https://github.com/n8thangreen/jagsAddIn.
Using the methodology given in Guyot et al. BMC Medical Research Methodology 2012, 12:9 (http://www.biomedcentral.com/1471-2288/12/9), we held a practical session to show how to reconstruct individual patient data from published Kaplan-Meier curves in R.
The algorithm in Guyot (2012) is included in the digitise() function in the survHE package.
The first step is to extract the data points from an image using a plot digitiser app. The plot used to digitise is below. The red points are manually selected along the length of the curve.
The software used by Guyot (2012) is called DigitizeIt. A fee needs to be paid in order to use it for the purposes of this tutorial, so we instead used a freely available equivalent tool called WebPlotDigitizer. The tools are very similar in functionality; however, the output of each is formatted differently. Also, each digitisation may differ slightly simply due to the manual selection of points. So we needed to do a little preprocessing before we could use the existing function.
Two matrices are required: the survival data taken directly from the Kaplan-Meier plot, and the at-risk matrix, which includes the row numbers at which the data are divided into intervals. Read in the data from the digitiser.
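A sketch of what this step looks like; the file names are placeholders and the preprocessing itself depends on the WebPlotDigitizer export:

library(survHE)

# hypothetical file names for the preprocessed digitised survival points
# and the numbers-at-risk table, in the layout digitise() expects
surv_points <- read.table("survival_points.txt", header = TRUE)
at_risk     <- read.table("at_risk.txt", header = TRUE)

# digitise() (the Guyot 2012 algorithm) then reconstructs the
# Kaplan-Meier data and pseudo individual patient data from these files
digitise(surv_inp = "survival_points.txt",
         nrisk_inp = "at_risk.txt",
         km_output = "KMdata.txt",
         ipd_output = "IPDdata.txt")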
On November 6th – 7th 2019 at Imperial College London, we hosted the first health economics in R hackathon.
The event was aimed at health economists, statisticians and R users who are interested in creating new R tools to tackle problems in health economics, large or small.
There was a lot of interest in the event (despite the fact the dates coincided with one of the main health economics conferences!). Attendees came from academia, government, consultancies and industry, including UCL, University of Bristol, Glasgow, NICE amongst others. The hackathon website has details about the structure of the two days and some of the ambitions.
Packaging an R cost-effectiveness tutorial for ease of use and reproducibility
We also had focused advanced R skills sessions to upskill current R users on Git and GitHub, Shiny, testing (unit tests/testthat, Travis CI), and package building in RStudio.
There will be a second event happening at Queen’s University Belfast on 21st and 22nd of January 2020. The format will be slightly different with a half-day of teaching followed by a data dive (delving into a health economics related dataset). This is aimed at less experienced R users than the hackathon. The event is proving even more popular than the London hackathon so we are really looking forward to it.
These meetings have been made possible by generous support from the Medical Research Council, Centre for Global Infectious Disease Analysis (Grant Reference MR/R015600/1), NIHR Health Protection Research Unit in Modelling Methodology and Imperial College London.
This blog post is written as an R Markdown document. Markdown is a simple formatting syntax for authoring HTML, PDF, and MS Word documents. For more details on using R Markdown see http://rmarkdown.rstudio.com.
When you click the Knit button a document will be generated that includes both content as well as the output of any embedded R code chunks within the document.
I've used the RWordPress package and the instructions from this blog post:
#activate the necessary libraries
library(RWordPress)
library(knitr)
library(XMLRPC)
library(RCurl)
# tell RWordPress how to set the user name, password, and URL for your WordPress site.
options(WordPressLogin = c(n8thangreen20 = 'PASSWORD'),
WordPressURL = 'https://healthdatacounts.wordpress.com/')
# tell knitr to create the html code and upload it to your WordPress site
knit2wp(input = "postname.Rmd",
title = "posttitle",
publish = FALSE,
action = "newPost")
We want to keep only an individual's clinic visits which are for new infections. Therefore, we want to remove repeat visits to the clinic, where a repeat is defined as a visit within 6 weeks of the previous one.
Solutions
There are a number of ways to tackle this problem, which vary in terms of computation speed, simplicity and intuition.
Load in the data. This is in the form of individual patient data (IPD) clinic visit dates and ids.
library(knitr)

load(file = here::here("cleaned_visit_date.RData"))
dat0 <- dat
kable(head(dat))
        uniq_id               date        person_id
67219   F053090_2012-03-07    2012-03-07  F053090
135647  F105975_2014-05-21    2014-05-21  F105975
211067  16M03749_2016-04-18   2016-04-18  16M03749
49805   M091207_2012-03-27    2012-03-27  M091207
39751   F066543_2011-12-29    2011-12-29  F066543
37228   F108730_2011-12-02    2011-12-02  F108730
Simple loops solution
We could simply step through the original data line by line.
library(dplyr)
keep_record <- c()  # logical flag for each row: keep (TRUE) or drop (FALSE)
dat <- dat0
# take a subset of rows and sort by patient then date
dat <-
  dat[1:5000, ] %>%
  arrange(person_id, date)
previous_patient <- ""
previous_date <- 0
for (i in seq_len(nrow(dat))){
person_id <- dat[i, "person_id"]
date <- dat[i, "date"]
# first visit
if (person_id != previous_patient){
# print("new id")
keep <- TRUE
previous_patient <- person_id
previous_date <- date
} else if ((date - previous_date) > 42){
# print("over 42")
keep <- TRUE
previous_date <- date
} else{
keep <- FALSE
}
keep_record <- c(keep_record, keep)
}
table(keep_record)
## keep_record
## FALSE TRUE
## 31 4969
Tidyverse solution
So, within a while loop, we group by individual and, for the earliest date, calculate a 6-week time window (with end date window_end) and a flag for whether or not subsequent clinic visits fall within it (within_window).
We keep track of two output patient lists: repeat_visit_ids and first_visit_ids. The latter is just the id from the first date. Once we have added to these lists using the information from window_end, we remove the first date and the repeat visits associated with it. Finally, we repeat the whole thing until the dataframe is empty.
library(lubridate)

time_window <- weeks(6)  # alternatively duration(42, "days"), i.e. 6 weeks in days
repeat_visit_ids <- NULL #initialise
first_visit_ids <- NULL
dat <- dat0
while(nrow(dat) > 0) {
dat <-
dat %>%
group_by(person_id) %>%
dplyr::arrange(date, .by_group = TRUE) %>%
mutate(window_end = min(date) + time_window,
within_window = date <= window_end,
first_date = date == min(date)) %>%
ungroup()
repeat_visit_ids <-
dat %>%
filter(within_window & !first_date) %>%
transmute(uniq_id) %>%
rbind(repeat_visit_ids, .)
first_visit_ids <-
dat %>%
filter(first_date) %>%
transmute(uniq_id) %>%
rbind(first_visit_ids, .)
dat <-
dat %>%
filter(!within_window)
}
The output is a list of unique visit ids.
kable(head(first_visit_ids))
uniq_id
14F00004_2014-05-07
14F00006_2014-05-07
14F00016_2014-05-07
14F00019_2014-05-07
14F00021_2014-05-07
14F00027_2014-06-12
nrow(first_visit_ids)
## [1] 90389
Pre-processed data structure solution
We could create a data structure for easier computation.
First, let's index the repeat visits for each patient by creating a 'visit' column with a counter.
dat <- dat0
dat <-
dat %>%
group_by(person_id) %>%
arrange(date) %>%
mutate(visit = row_number()) %>%
arrange(person_id, visit)
kable(head(dat, n = 10))
uniq_id              date        person_id  visit
14F00004_2014-05-07  2014-05-07  14F00004   1
14F00004_2014-05-27  2014-05-27  14F00004   2
14F00004_2014-06-06  2014-06-06  14F00004   3
14F00006_2014-05-07  2014-05-07  14F00006   1
14F00016_2014-05-07  2014-05-07  14F00016   1
14F00019_2014-05-07  2014-05-07  14F00019   1
14F00021_2014-05-07  2014-05-07  14F00021   1
14F00021_2014-06-11  2014-06-11  14F00021   2
14F00027_2014-06-12  2014-06-12  14F00027   1
14F00034_2014-06-06  2014-06-06  14F00034   1
For any computation using this data set we can simply remove the single-visit individuals, because we already know these are kept. This reduces the size of the dataframe considerably.
We can cast the data to a wide form to show a kind of patient trajectory, such that the columns are the visit counts and the entries are dates (on a numeric scale).
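The cast itself might look something like this. This is a sketch, assuming dat still carries the visit counter from above, producing the yy object used below:

library(tidyr)

# sketch: one row per patient, one column per visit number,
# entries are the visit dates on a numeric scale; single-visit
# individuals are dropped as described above
yy <-
  dat %>%
  group_by(person_id) %>%
  filter(n() > 1) %>%
  ungroup() %>%
  mutate(date = as.numeric(date)) %>%
  pivot_wider(id_cols = person_id,
              names_from = visit,
              values_from = date) %>%
  as.data.frame()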
Rather than this flat representation, we can split the data into a list by visit frequency. That is, each element in the list consists of trajectories for those individuals with that number of visits. The entries in the dataframes are the differences between the ith visit and all subsequent visits. We'll use a subsample just to keep the computation time down. Note that I remove the patient id so that I can do the computation on all of the visit times, before including it back in.
yy <- yy[1:1000, ]
visits_wide <- select(yy, -person_id)
zz <- list()
for (i in seq_along(visits_wide)) {
zz[[i]] <- visits_wide - visits_wide[, i] # difference between origin visit and subsequent dates
zz[[i]][zz[[i]] <= 0] <- NA # replace negative times
# keep only rows with at least one non-missing later visit
not_all_NA <- rowSums(is.na(zz[[i]])) != ncol(zz[[i]])
zz[[i]]$person_id <- yy$person_id # include patient id
zz[[i]] <- zz[[i]][not_all_NA, ] # remove all-NA rows
}
lapply(zz[1:3], function(x) x[1:6, 1:11])
## [[1]]
## 1 2 3 4 5 6 7 8 9 10 11
## 1 NA 20 30 NA NA NA NA NA NA NA NA
## 2 NA 35 NA NA NA NA NA NA NA NA NA
## 3 NA 6 286 420 803 1026 1290 1546 1633 NA NA
## 4 NA 11 882 NA NA NA NA NA NA NA NA
## 5 NA 47 154 168 182 301 329 NA NA NA NA
## 6 NA 161 1218 NA NA NA NA NA NA NA NA
##
## [[2]]
## 1 2 3 4 5 6 7 8 9 10 11
## 1 NA NA 10 NA NA NA NA NA NA NA NA
## 3 NA NA 280 414 797 1020 1284 1540 1627 NA NA
## 4 NA NA 871 NA NA NA NA NA NA NA NA
## 5 NA NA 107 121 135 254 282 NA NA NA NA
## 6 NA NA 1057 NA NA NA NA NA NA NA NA
## 10 NA NA 7 NA NA NA NA NA NA NA NA
##
## [[3]]
## 1 2 3 4 5 6 7 8 9 10 11
## 3 NA NA NA 134 517 740 1004 1260 1347 NA NA
## 5 NA NA NA 14 28 147 175 NA NA NA NA
## 12 NA NA NA 151 785 NA NA NA NA NA NA
## 15 NA NA NA 185 NA NA NA NA NA NA NA
## 18 NA NA NA 69 182 NA NA NA NA NA NA
## 20 NA NA NA 161 NA NA NA NA NA NA NA
The reason for splitting the data in this way is that the deduplication operations are now straightforward, because it's kind of already sorted. The below was my first attempt at this approach. It's slow and pretty ugly.
out <- zz
from_visit_seq <- head(seq_along(zz), -1)
for (i in from_visit_seq){
# only consider later columns (to the right)
keep_cols <- names(out[[i]])[!names(out[[i]]) %in% (1:i)]
future_visits <- out[[i]][ ,keep_cols]
for (j in out[[i]]$person_id){
if (nrow(out[[i]]) == 0) break
future_id <- future_visits[future_visits$person_id == j, ]
# which visit number (column name) is within time window (42 days) for each patient?
times <- select(future_id, -person_id)
visit_rm <- colnames(times)[times <= time_window & !is.na(times)]
if (length(visit_rm) > 0) {
# remove these repeat visits in list element
for (k in as.numeric(visit_rm))
out[[k]] <- out[[k]][out[[k]]$person_id != j, ]
}
}
}
lapply(out[1:3], function(x) x[1:6, 1:11])
## [[1]]
## 1 2 3 4 5 6 7 8 9 10 11
## 1 NA 20 30 NA NA NA NA NA NA NA NA
## 2 NA 35 NA NA NA NA NA NA NA NA NA
## 3 NA 6 286 420 803 1026 1290 1546 1633 NA NA
## 4 NA 11 882 NA NA NA NA NA NA NA NA
## 5 NA 47 154 168 182 301 329 NA NA NA NA
## 6 NA 161 1218 NA NA NA NA NA NA NA NA
##
## [[2]]
## 1 2 3 4 5 6 7 8 9 10 11
## 273 NA NA 2069 NA NA NA NA NA NA NA NA
## 398 NA NA 831 1317 1321 NA NA NA NA NA NA
## 494 NA NA 995 1038 NA NA NA NA NA NA NA
## 504 NA NA 258 NA NA NA NA NA NA NA NA
## 678 NA NA 578 672 1519 NA NA NA NA NA NA
## NA NA NA NA NA NA NA NA NA NA NA NA
##
## [[3]]
## 1 2 3 4 5 6 7 8 9 10 11
## NA NA NA NA NA NA NA NA NA NA NA NA
## NA.1 NA NA NA NA NA NA NA NA NA NA NA
## NA.2 NA NA NA NA NA NA NA NA NA NA NA
## NA.3 NA NA NA NA NA NA NA NA NA NA NA
## NA.4 NA NA NA NA NA NA NA NA NA NA NA
## NA.5 NA NA NA NA NA NA NA NA NA NA NA
I realised that the break could be replaced with a next if I moved the if statement up a level, outside of the inner j loop. Also, I don't have to do the duplicate visit identification and removal separately. I only need to consider the visits in the future of the current visit, hence starting at column i + 1. This produces a much cleaner chunk of code.
from_visit_seq <- head(seq_along(zz), -1)
for (i in from_visit_seq){
if (nrow(out[[i]]) == 0) next
for (j in out[[i]]$person_id){
# stop at first time outside window
t <- i + 1
out_person <- out[[i]][out[[i]]$person_id == j, ]
# remove these repeat visits in list element
while(out_person[, t] <= time_window & !is.na(out_person[, t])){
out[[t]] <- out[[t]][out[[t]]$person_id != j, ]
t <- t + 1
}
}
}
lapply(out[1:3], function(x) x[1:6, 1:11])
## [[1]]
## 1 2 3 4 5 6 7 8 9 10 11
## 1 NA 20 30 NA NA NA NA NA NA NA NA
## 2 NA 35 NA NA NA NA NA NA NA NA NA
## 3 NA 6 286 420 803 1026 1290 1546 1633 NA NA
## 4 NA 11 882 NA NA NA NA NA NA NA NA
## 5 NA 47 154 168 182 301 329 NA NA NA NA
## 6 NA 161 1218 NA NA NA NA NA NA NA NA
##
## [[2]]
## 1 2 3 4 5 6 7 8 9 10 11
## 273 NA NA 2069 NA NA NA NA NA NA NA NA
## 398 NA NA 831 1317 1321 NA NA NA NA NA NA
## 494 NA NA 995 1038 NA NA NA NA NA NA NA
## 504 NA NA 258 NA NA NA NA NA NA NA NA
## 678 NA NA 578 672 1519 NA NA NA NA NA NA
## NA NA NA NA NA NA NA NA NA NA NA NA
##
## [[3]]
## 1 2 3 4 5 6 7 8 9 10 11
## NA NA NA NA NA NA NA NA NA NA NA NA
## NA.1 NA NA NA NA NA NA NA NA NA NA NA
## NA.2 NA NA NA NA NA NA NA NA NA NA NA
## NA.3 NA NA NA NA NA NA NA NA NA NA NA
## NA.4 NA NA NA NA NA NA NA NA NA NA NA
## NA.5 NA NA NA NA NA NA NA NA NA NA NA
Finally, include a visit count column and remove empty dataframes.
names(out) <- as.character(seq_along(out) + 1)
for (i in names(out)){
if (nrow(out[[i]]) == 0) out[[i]] <- NULL
else out[[i]]$visit <- as.numeric(i)
}
Stack the lists and include back in the initial visits to give all visits with duplicates removed.
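The code for this last step isn't shown above, but a sketch of what it might look like follows. The columns retained and the way the first visits are recombined are assumptions here, using the out and dat objects defined earlier:

# sketch: stack the per-visit-count dataframes, keep the identifiers,
# then add back each individual's first visit (always kept), which also
# covers the single-visit individuals removed earlier
dedup_repeats <-
  bind_rows(out) %>%
  select(person_id, visit)

all_kept <-
  dat %>%
  filter(visit == 1) %>%
  select(person_id, visit) %>%
  bind_rows(dedup_repeats) %>%
  arrange(person_id, visit)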