You are looking at archived content from my "Bookworm" blog, an experiment that ran from 2014-2016. Not all content may work. For current posts, see here.

Sep 14 2015

This post is just kind of playing around in code, rather than any particular argument. It shows the outlines of using the features stored in a Bookworm for all sorts of machine learning, by testing how well a logistic regression classifier can predict IMDB genre based on the subtitles of television episodes.

Jon Fitz Fitzgerald was asking me for an example of training a genre classifier on textual data. To reduce the dimensionality of the model, we have been thinking of using the topics from a topic model as the classifiers instead of the raw tokens. The idea is that classifiers with more than several dozen variables tend to get finicky and hard to interpret, and with more than a few hundred become completely unmanageable. If you want to classify texts based on their vocabularies, you have two choices:

  1. Only use some of the words as classifiers. This is the normal approach, used from Mosteller and Wallace on the Federalist papers through to Ted Underwood's work on classifying genre in books.

  2. Aggregate the words somehow. The best way, from an information-theoretic point of view, is to use the first several principal components of the term-document matrix as your aggregators. This is hard, though, because principal component vectors are difficult to interpret, and on very large corpora (for which the term-document matrix doesn't fit in memory) they are somewhat tedious to calculate.
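To make option 2 concrete, here is a minimal sketch of the principal-components move using base R's prcomp on a tiny made-up term-document matrix (the matrix, its dimensions, and all the names are invented for illustration, not Bookworm output):

```r
# Toy term-document matrix: 6 "documents" by 5 "words" of invented counts.
set.seed(1)
tdm = matrix(rpois(30, lambda = 3), nrow = 6, ncol = 5,
             dimnames = list(paste0("doc", 1:6), paste0("word", 1:5)))

# Principal components of the scaled matrix: each document gets a score on
# each component, and those scores can stand in for the raw word counts.
pca = prcomp(tdm, scale. = TRUE)

# Keep just the first two components as the feature set for a classifier:
# 6 documents described by 2 numbers each instead of 5 word counts.
features = pca$x[, 1:2]
```

The interpretability problem shows up immediately: each column of `features` is a weighted blend of every word, with no obvious human-readable label.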

Using topic models as classifiers is somewhat appealing. They should be worse at classification than principal components, but they should also be readable like words, to some degree. I haven't seen it done that much because there are some obvious problems: topic models are time-consuming to fit, and they usually throw out stopwords, which tend to be extremely successful at classification problems. That's where a system like Bookworm, which will just add in a one-size-fits-all topic model with a single command, can help; it lets you try loading in a pre-computed model to see what works.

So this post just walks through some of the problems with genre classification in a corpus of 44,000 television episodes and a pre-fit topic model. I don't compare it directly to existing methods, in large part because it quickly becomes clear that IMDB genre is such a flexible thing that it's all but impossible to assess whether a classifier is working on anything but a subjective level. But I do include all of the code for anyone who wants to try fitting something else.

Code, Descriptions, and charts

Note: all the code below assumes the libraries dplyr, bookworm, and tidyr are loaded.

First we make the data wide (columns as topic labels). That gives us 127 topics across 44,258 episodes of television, each tagged with a genre by IMDB.

wide = movies %>% spread(topic_label, WordsPerMillion, fill = 0)
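For readers unfamiliar with tidyr's spread (since superseded by pivot_wider, but still available), here is the long-to-wide move on a tiny invented data frame whose column names mirror the Bookworm output above:

```r
library(dplyr)
library(tidyr)

# Invented long-format data: one row per (episode, topic) pair.
long = data.frame(
  episode = c("e1", "e1", "e2", "e2"),
  topic_label = c("crime", "family", "crime", "family"),
  WordsPerMillion = c(120, 80, 40, 200)
)

# One row per episode, one column per topic label;
# any missing (episode, topic) combination becomes 0.
wide_toy = long %>% spread(topic_label, WordsPerMillion, fill = 0)
```

The real call above does exactly this, just with 127 topic columns and 44,258 episode rows.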

Now we'll train a model. We're going to do logistic regression (in R, a glm with family=binomial), but I'll define a more general function that can take an svm or something more exotic for testing.

# Our feature set is a matrix without the categorical variables and a junk variable getting introduced somehow.
modeling_matrix = wide %>% select(-TV_show,-primary_genre,-season,-episode,-`0`) %>% as.matrix
training = sample(c(TRUE,FALSE),nrow(modeling_matrix),replace=T)

dim(modeling_matrix)
## [1] 44258   127
training_frame = data.frame(modeling_matrix[training,])
training_frame$match = NA
build_model = function(genre, model_function = glm, ...) {
  # genre is a string naming one of the primary_genre fields;
  # model_function is something like "glm" or "svm";
  # ... are further arguments passed on to that function.
  training_frame$match = as.numeric(wide$primary_genre == genre)[training]
  # we model against a matrix: the columns are the topics, which we get
  # by dropping out the other four elements above
  model = model_function(match ~ ., training_frame, ...)
  model
}
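As a sanity check of that call shape, here is the same logistic-regression pattern run on simulated data (the data frame, feature names, and signal are all made up for illustration; they are not the Bookworm topics):

```r
# Simulated stand-in for training_frame: 200 rows, 3 numeric features,
# and a binary outcome that depends on the first feature plus noise.
set.seed(2)
toy = data.frame(f1 = rnorm(200), f2 = rnorm(200), f3 = rnorm(200))
toy$match = as.numeric(toy$f1 + rnorm(200) > 0)

# The same call that build_model makes with its defaults:
# logistic regression of match on every other column.
model = glm(match ~ ., data = toy, family = binomial)

# Fitted probabilities for the training rows, one per observation.
probs = predict(model, type = "response")
```

With the real data, swapping model_function for e1071's svm (or anything else with a formula interface) is the point of keeping build_model generic.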

Here's a plot of the top genres. I'll model on the first ten, because there's a nice break before game show, reality, and fantasy.

library(ggplot2)

wide %>%
  filter(training) %>%
  group_by(primary_genre) %>%
  summarize(episodes = n()) %>%
  mutate(rank = rank(-episodes)) %>%
  arrange(rank) %>%
  ggplot() +
  geom_bar(aes(y = episodes, x = reorder(primary_genre, episodes), fill = rank <= 7),
           stat = "identity") +
  coord_flip() +
  labs(title = "most common genres, by number of episodes in training set")