NLP

What does Kappa mean?

I've written about some basic NLP on Twitch chats before. This post is an extension of that, with more data, more sophisticated methods, and hopefully better results!

Word2vec is an algorithm that takes a one-hot encoding of words, a sparse, high-dimensional representation, and maps it to a dense, low-dimensional representation. In this low-dimensional word embedding, the distance between word vectors reflects how likely the words are to appear near each other in a sentence. This results in synonyms and related words being close to each other; in fact, misspelled words (which are common in Twitch chats) are often among the nearest neighbors of their correctly spelled counterparts. The low-dimensional representation is learned by training a shallow neural network to predict a word given the words around it. The embedding has some intriguing properties, like being good at analogies (discussed later).

So, I took a bunch of Twitch chat logs I could find and trained a word2vec model on them. The logs came from the chats of about 360 streamers over the past four years. Unfortunately, this isn't the most unbiased source of data: larger streams have more chatters, so their chats are overrepresented, and 360 streamers are a small fraction of all streamers on Twitch. In fact, none of the streamers I regularly watch had available chat logs.

I did some cleaning: I took out single-word messages as well as anything twitchnotify and bots said¹. Even if a chat message is actually multiple sentences, I treat each message as one sentence for the word2vec training. I also strip all symbols in messages to ignore punctuation, which has the downside of making words such as <3 and 3 equivalent. I chose a context size of 2, require a word to appear more than 50 times in the corpus to be included, and train a 200-dimensional word2vec model.
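
A minimal sketch of this setup, assuming gensim (4.x parameter names, where `size` became `vector_size`; `raw_messages` is a hypothetical list of chat-message strings, and the exact cleaning I used may differ):

```python
import re
from gensim.models import Word2Vec

def tokenize(message):
    # Strip symbols so punctuation is ignored (this conflates "<3" and "3").
    return re.sub(r"[^\w\s]", "", message).split()

sentences = [tokenize(m) for m in raw_messages]   # one "sentence" per message
sentences = [s for s in sentences if len(s) > 1]  # drop single-word messages

model = Word2Vec(
    sentences,
    vector_size=200,  # 200-dimensional embedding
    window=2,         # context size of 2
    min_count=51,     # word must appear more than 50 times
    workers=4,
)
```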

Once the word2vec model is trained, the cosine distance between word vectors can be used to determine their similarity. This showed that the word closest to Kappa was, unsurprisingly, Keepo, followed by 4Head and DansGame, also Twitch emotes. The closest non-emote word to Kappa was lol. This was unsatisfying for me because I feel like information is lost in equating Kappa with lol, but it makes sense: Kappa is likely to appear at the end of sarcastic sentences, and it's quite reasonable for lol to occur in a similar context.
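
With gensim this lookup is a one-liner (a sketch, assuming the model trained above):

```python
# Nearest neighbors by cosine similarity; for "Kappa" the top hits
# were Keepo, 4Head, and DansGame.
print(model.wv.most_similar("Kappa", topn=5))
```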

I then looked into analogies. Word2vec has a cool property that by adding and subtracting word vectors, a relationship between two words can be applied to another word. The textbook example of this is man + queen - woman = king: the difference between queen and woman (royalty) is added to man to get king. My word2vec model did, in fact, learn this relationship. With some of the game-related analogies, the model fares a bit worse: the expected result was not necessarily the closest vector to the vector sum, but it would be one of the closest.
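
The same gensim API handles analogies; positive vectors are added and negative ones subtracted (again a sketch against the model above):

```python
# man + queen - woman: the nearest vectors should include "king".
print(model.wv.most_similar(positive=["man", "queen"], negative=["woman"], topn=3))
```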


[Figure: analogy results]

The top three closest word vectors to each of the vector sums (or differences) shown. The model may not learn the relationships between words perfectly, but it develops a pretty good idea.

Next, I plotted how words were distributed globally in the word embedding. I used PCA to reduce the 200-dimensional embedding to 2 dimensions to visualize the relationships. This showed that foreign words cluster separately from English words, which makes sense, as it should be rare to combine German words with English ones in the same sentence. Another effect was that commonly used words clustered together, with a separate region of context-specific words and meme words. Sub emotes are an example of context-specific words, as they are likely to be found only in the chat of one streamer, where similar chat topics are present. A meme word would be something like the "word" AAAAE-A-A-I-A-U-, which usually only appears with the Brain Power meme and is unlikely to show up in any other context.
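
A sketch of the projection with scikit-learn (assuming the gensim model from above):

```python
from sklearn.decomposition import PCA

words = model.wv.index_to_key        # full vocabulary
vectors = model.wv[words]            # (n_words, 200) matrix
coords = PCA(n_components=2).fit_transform(vectors)  # (n_words, 2) for plotting
```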


[Figure: word embedding visualization]

The word embedding of all the words in the corpus, with PCA used to reduce the dimensions from 200 to 2. Each dot represents a word. Natural clusters form in the word embedding.

Zooming into the common-words area, the relationships between words become apparent. Most of the global emotes are toward the lower half of the common-words range, while games sit in the top half. TriHard is closer to the left, approaching the context-specific range, which makes sense: while TriHard is a global emote, it's probably used most in TriHex's chat. The politicians cluster together, with Obama closer to Clinton than to Sanders or Trump.


[Figure: common-words region]

Zooming into the common-words part of the previous graph, with some of the words labeled.

With the success of vector representations of words, a natural extension is a vector representation of chat messages. This can be a useful way to identify similar sentiments or intents between chatters. A simple way to get a vector representation of a sentence is to average the word vectors in the sentence in a bag-of-words approach, ignoring any words that do not have a vector representation. Averaging is commutative, so word order is lost and this is not a perfect representation: the sentences "the cat bit the dog" and "the dog bit the cat" have different meanings but identical averaged vectors. However, this is a good starting point to capture the overall intent of sentences.
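
A sketch of the averaging, reusing the hypothetical tokenizer from above:

```python
import numpy as np

def sentence_vector(message, model):
    # Bag-of-words average; words outside the vocabulary are ignored.
    vecs = [model.wv[w] for w in tokenize(message) if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else None
```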

I clustered the sentence vectors to find relationships between messages. I sampled 1,000,000 chat messages, as this was about as much as my computer could handle. Typically I would use DBSCAN to cluster, but as this was computationally prohibitive, I opted for minibatch k-means. I chose k = 15, and 6 large clusters emerged from the clustering, shown below.
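
A sketch with scikit-learn, where `sentence_vectors` would be the (1,000,000 × 200) array built with the function above:

```python
from sklearn.cluster import MiniBatchKMeans

kmeans = MiniBatchKMeans(n_clusters=15, batch_size=10_000, random_state=0)
labels = kmeans.fit_predict(sentence_vectors)  # one cluster label per message
```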

[Figure: sentence vector clusters]

The sentence vectors of 1,000,000 chat messages, with PCA used to reduce the dimensions from 200 to 2. Each dot now represents a chat message, calculated as the average of the word vectors in the message. The different colors represent clusters found by k-means.

Like the words, sentences containing foreign phrases cluster separately from the rest of the sentences. Likewise, chats with sub emotes and channel-specific memes tend to cluster together. Spamming of global emotes forms another cluster, and, reasonably, there is a further cluster where global emotes are combined with text. Chat messages without emotes tend to fall into two regions: one where the chatter is interacting with the streamer, and another where the chatter is talking about themselves or referencing others in chat (the personal-pronouns category). These are general trends, and six clusters are not enough to capture all the intents of chatters, but this gives a broad idea.

As mentioned earlier, in the context of Twitch, this used only a small fraction of the available data. I'd expect that training on more data would help the model better learn some of the analogies. Another intriguing prospect is how the word embeddings change over time. I'm sure relationships between words like Clinton and Trump evolved over the course of last year's election, which raises interesting questions about what time period word2vec should use as a training corpus.

Code for this post is available here.

1. I define bots as tags containing the string 'bot.'

Natural Language Processing and Twitch Chat

This post will be about using natural language processing (NLP) to extract information from Twitch chat. I found the chat log of a popular streamer, and I'll be analyzing one day of chats. I'll keep the streamer anonymous just because I didn't ask his permission to analyze his chat.

On this particular day, the streamer had 88,807 messages in his chat from 11,312 distinct users, an average of about 7.9 chats per user. However, this doesn't mean that most people actually post this much. In fact, 4,579 users (about 40%) posted only one message. This doesn't take into account the people who never posted, but it shows that it is quite common for users to "lurk," or watch the stream without actively taking part in chat. The distribution of posts per user is shown below:
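
Counting messages per user is straightforward (a sketch; `messages` is a hypothetical list of (user, text) pairs):

```python
from collections import Counter

per_user = Counter(user for user, text in messages)
single = sum(1 for n in per_user.values() if n == 1)
print(len(messages) / len(per_user))  # ~7.9 chats per user
print(single / len(per_user))         # ~0.40 posted only once
```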



A histogram of the number of messages posted per user (note the log scale). Almost all of the people in chat posted fewer than 10 messages. Chat bots were not included, so everyone represented in the plot should be an actual user.

Only 1,677 users (about 15%) posted 10 or more messages in chat, but they accounted for 65,284 messages (about 73.5%). This seems to imply that some form of the Pareto principle is at work here.

What are people talking about?

I used tf-idf on the chat log to get a sense of common words and phrases. The tf in tf-idf stands for term frequency, and is, for each chat message, how many times a certain word appears in that message [1]. idf stands for inverse document frequency, and is, for each term, the negative log of the fraction of all messages that the term appears in. The idf is an indicator of how much information there is in a word; common words like "the" and "a" don't carry much information. tf-idf multiplies the two into one index for each term in each chat message. The words with the highest tf-idf scores are then the most heavily used informative words in chat. The following table shows some of the common words in chat:

 

All Chatters    >10 Chat Messages    One Chat Message
lol             lol                  game
kappa           kappa                lol
kreygasm        kreygasm             kappa
pogchamp        pogchamp             kreygasm
game            game                 wtf
myd             kkona                stream
kkona           dansgame             followage

The words with the highest tf-idf scores for all chatters, for those who posted more than 10 messages, and for those who posted only one message.
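
A sketch of computing these rankings with scikit-learn. Note that sklearn's TfidfVectorizer uses a smoothed idf rather than the plain negative log described above, so scores will differ slightly; `messages` is again a hypothetical list of chat strings:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(messages)  # rows: messages, columns: terms

# Sum each term's per-message scores and rank across the whole chat log.
scores = tfidf.sum(axis=0).A1
terms = vectorizer.get_feature_names_out()
top = sorted(zip(terms, scores), key=lambda ts: -ts[1])[:10]
```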

Not surprisingly, the people who chat a lot have a similar distribution of words to all the messages (remember, their messages make up about 73.5% of the total). Those who had only one message in chat talk about slightly different things than those who chat a lot. There are a few interesting features, which I will elaborate on below.

myd and followage are bot commands on the streamer's channel. Apparently gimmicks like this are fairly popular, but it means that many people are chatting without adding content to the stream. It is interesting that those who post more are far less likely to play with these bot commands.

On this day the streamer was playing random games that his subscribers had suggested. This led to weird games, and many people commented on them, hence the prevalence of words like "game" and "wtf". People who only post one message seem more likely to complain about the game than those who talk often. For words like this, it could be interesting to see how their prevalence shifts when different games are played.

For those not familiar with Twitch, kappa, kreygasm, pogchamp, kkona, and dansgame are all emotes. Clearly, the use of emotes is quite popular. kkona is an emote on BTTV (a Twitch extension), so it is quite interesting how many people have adopted its use, and this may also indicate why it is more popular with people who post more.

Who do people talk about?

I wanted to see what kind of "conversations" take place in Twitch chat, so I selected messages that reference other users and again looked at the most common words under tf-idf. Unfortunately, this method will miss many references (e.g., if there were a user named Nick482392, other people might simply refer to him as Nick), but for an exploratory analysis it seemed sufficient.
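
A sketch of the selection (hypothetical helper; `usernames` is a lowercase set of the users who chatted that day):

```python
def mentions(message, usernames):
    # Matches only exact username tokens, so "Nick" for "Nick482392" is missed.
    return [w.lstrip("@") for w in message.split()
            if w.lstrip("@").lower() in usernames]

referencing = [m for m in messages if mentions(m, usernames)]
```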

The most referenced person was, predictably, the streamer himself, with 1,232 messages mentioning him. The top words for the streamer included "play," with countless suggestions for what other games he should play. During this day, another prominent streamer was apparently talking about the streamer I analyzed, and many people commented on this. There were also many links directed at the streamer. There were no particularly negative words among the most common words directed at him.

I also considered references to other users. There were 4,697 of these, though some are simply due to a user having the same name as an emote. Other than the emotes prevalent in general (Kappa, PogChamp), a common word among references was "banned," from people talking about users who had been banned from the chat by moderators. It could also be interesting to look at what kinds of things mods ban for and try to automate some of that process. Another common word was "whisper," a feature recently added to Twitch. People are at least talking about the feature, which probably means it is getting used as well.

Profanity?

I then looked at all chat messages containing profane words to see if there were trends in how this language was directed. There were 5,542 messages containing profanity, with the most common word being variants of "fuck." The word "game" often appeared in posts with profanity, which isn't too strange because, as mentioned earlier, a lot of people were complaining about the game choice on this day. Other words that were popular in general, such as kappa and kreygasm, were also present in posts with profanity.

The streamer had a visible injury on this day, and there were a few words related to this injury that correlated highly with profanity. These would be messages like "what the hell happened to your arm?" The streamer's name was also quite prevalent in messages that contained profanity.

A little less common than that were references to "mods." It seems that people get upset with moderators for banning people and possibly being too harsh. Right below this is "subs," toward whom there seems to be quite a bit of hostility. I'm not sure if this is from when subscriber-only chat was used, but the use of profanity with "subs" is spread throughout the messages of the day.

Some profane terms come in bursts, presumably as a reaction to what is happening on the stream. "Sex her" is one such term, which seems to show some of the more sexist aspects of Twitch chat ("sex" was included as a profane word even though it may not qualify as that in all cases).

Conclusions

The ubiquity of emotes on Twitch may be an interesting reason to conduct general NLP research through Twitch chat. Many of these emotes have sentiments or intentions tied to them, and for the most part, people use them for the "right" purpose. For example, Kappa is indicative of sarcasm or someone who is trolling. Sarcasm is notoriously hard for NLP to detect so having a hint like the Kappa emote could reveal general trends in sarcasm [2]. This would be a cool application of machine learning to NLP (maybe a future blog post?).

From a more practical point of view, information like this could be useful to streamers to figure out how they are doing. For example, if a streamer is trying techniques to get chat more involved, it may be interesting to see whether they succeed in increasing the number of chatters with many posts. One thing I didn't consider is how the top words change from day to day. The game being played and other factors, such as recent events, may cause these to fluctuate, which could be interesting. Of course, more sophisticated analyses could be conducted than looking at top words, for example, looking at the grammar of the messages and seeing what the target of profanity is.

I also just considered one streamer's stream (because I couldn't find many chat logs), and I'm sure it would be interesting to see how other streams differ. The streamer I analyzed is clearly extremely popular, but it may be interesting to see if the distribution of chatter engagement is different on smaller streams. It would also be interesting to see if the things said to female streamers are particularly different than those said to male streamers.

The code I used for this post is available here.

References
1. Yin, D. et al., 2009. Detection of Harassment on Web 2.0. Proceedings of the Content Analysis in the WEB 2.0 (CAW 2.0) Workshop at WWW2009.
2. Gonzalez-Ibanez, R., Muresan, S., and Wacholder, N., 2011. Identifying Sarcasm in Twitter: A Closer Look. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 2, 581-586.