So when presented with the lowly GIF, an image format of great Internet utility — if little depth — Rich and Hu didn’t see a joke. They saw a compelling new venue for non-verbal communication. A subject for study. And soon thereafter, GIFGIF was born.
GIFGIF is, at its core, an attempt to map GIFs to specific emotions. Rich and Hu collect their data from the crowd, using a voting platform that shows users two GIFs and asks them to pick which one best represents a feeling. The GIFs are sourced from Giphy — there are 6,000 in total. The feelings are drawn from Paul Ekman’s theory of emotions, which posits a set of universal emotions everybody shares; GIFGIF tracks 17 of them.
So far, more than 2.5 million votes have been cast — and the “so what” of all this is pretty big, too. Rich and Hu are basically adding another layer of data to one of the Internet’s trendiest formats, which they’ll release to developers and other academics later this year. Already they theorize their data could be used to bring easy GIF-messaging to mobile — a huge frontier. They also think there are applications in psychology and sociology, where researchers could use the platform to weigh how people interpret emotions across cultures and mental health spectra. At least one English as a second language professor is already using GIFGIF in class.
Rich and Hu recently agreed to answer some questions about their work by e-mail. This being the Internet, of course, I asked them to answer every question with a GIF, as well as with text. A lightly edited transcript of our conversation follows.
All right, so let’s start at the beginning: Where in the world did you get this idea?
GIFGIF is a product of serendipity. We work in the same lab space and were talking about non-verbal communication. Though we approach the problem from different angles (Kevin from visualization, Travis from wireless communication), we agreed that GIFs are an increasingly powerful means of non-verbal communication and set out to capture their magic. We adopted the pairwise-comparison methodology [editor’s note: the whole pick-one-of-two-GIFs set-up, as pictured in the screenshot up top] from a previous project from the lab, Place Pulse, which mapped urban perception based on images from Google Street View. We spent a couple weekends getting a first version ready, and then pushed it to the world!
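To give a flavor of how pairwise votes can be turned into per-emotion rankings: one simple approach is an Elo-style rating update, where each vote nudges the winner’s score up and the loser’s down. This is only an illustrative sketch — the GIF names and votes below are made up, and GIFGIF’s actual scoring method may well differ:

```python
from collections import defaultdict

def elo_update(ratings, winner, loser, k=32):
    """Adjust Elo-style scores after one pairwise vote."""
    # Probability the winner was expected to win, given current scores
    expected_win = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1 - expected_win)
    ratings[loser] -= k * (1 - expected_win)

# Hypothetical votes for the emotion "happiness": (winner, loser) pairs
votes = [("gif_a", "gif_b"), ("gif_a", "gif_c"), ("gif_b", "gif_c")]

ratings = defaultdict(lambda: 1500.0)  # every GIF starts at a neutral 1500
for winner, loser in votes:
    elo_update(ratings, winner, loser)

ranked = sorted(ratings, key=ratings.get, reverse=True)
print(ranked)  # gif_a ranks first after winning both of its match-ups
```

The appeal of pairwise voting is that people are much better at choosing between two options than at assigning an absolute score, and the aggregate ranking falls out of many cheap comparisons.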
How do you use GIFs in your own lives/personal communications?
Like all good children of the Internet, we use GIFs to convey a slightly exaggerated and caricatured emotion of what’s going on in our lives. Like that moment when you realize someone has a crush on you, and all the signs hit you like a flood of facepalms combined with the joy and excitement of the flattering news — as perfectly captured by Spongebob when he discovers Squidward likes Krabby Patties.
Much of our GIF use is trapped in desktop- and laptop-based messaging, though — in GChat or across Facebook or Twitter. The mobile GIF messaging scene, though growing, is still in its infancy, and the tools for getting the right GIF into your message are still a bit too clunky on mobile devices. This is something we hope GIFGIF (using our API) can change. We’re itching for the perfect GIF messaging app, and we think it’ll be a game-changer.
Okay, cool — hold that thought! Let’s talk about the whole emotion and communication aspect first. I understand GIFGIF is supposed to be a look at non-verbal communication — do you mean the gestures and expressions pictured in the GIFs? Or the GIFs themselves? (Couldn’t you theoretically carry out the same type of experiment with photos or short videos?)
When we think of non-verbal communication, we mean it in the broadest sense: anything that conveys a message without the use of spoken words. While this includes gestures and expressions, it could also include your fashion or behavior that communicates some message (passive-aggressively not feeding your roommate’s dog as a sign that you hate the dog, for example).
Regardless of the definition, we could absolutely do the same type of experiment with photos or short videos. In fact, we could do similar experiments with essentially any media (sounds, colors, rollercoasters) and any set of questions (exploring things other than emotions). We’re currently building a generalized framework allowing both developers and lay-people to build out their own GIFGIF-style experiment with whatever set of data and questions they choose. We will be launching that platform this coming fall.
So have you found any GIF/emotion correlations so far?
While we’re not ready to release specific numbers, we have been working on a bunch of analyses. We’ve identified many ‘obvious’ trends (which is reassuring), such as that GIFs with high anger scores tend to correlate strongly with high contempt scores, and likewise for happiness and pleasure. We’ve also performed principal component analysis on the scores and found that not all 17 emotions are needed to fully describe the ‘emotion-space’ we’ve captured. More simply, this implies that there is redundancy in the emotions, and the ‘fundamental’ emotions (as defined by Paul Ekman) may not be so fundamental after all. No specific numbers to release right now, but more coming in August!
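The redundancy they describe is exactly what principal component analysis surfaces: if the 17 emotion scores are highly correlated, a handful of components captures almost all of the variance. A toy sketch of that idea, using numpy and made-up scores (the real GIFGIF data and analysis are not public, so the matrix here is deliberately constructed from 5 hidden factors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real data: rows are GIFs, columns are 17 emotion scores.
# We deliberately build the columns from 5 underlying factors (so e.g. an
# "anger"-like column and a "contempt"-like column end up correlated).
factors = rng.normal(size=(500, 5))        # 5 hidden "true" dimensions
mixing = rng.normal(size=(5, 17))          # each emotion is a mix of factors
scores = factors @ mixing + 0.1 * rng.normal(size=(500, 17))

# PCA via SVD on the mean-centered score matrix
centered = scores - scores.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

# With only 5 underlying factors, the first 5 components capture
# nearly all the variance, despite there being 17 emotion columns.
print(f"variance captured by first 5 components: {explained[:5].sum():.3f}")
```

On real vote-derived scores the drop-off would be less clean, but the same plot of explained variance is how one would argue that fewer than 17 dimensions suffice.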
Non-analytically, we’ve found qualitative patterns in many of the top results. Patterns like arms-in-the-air appearing in nearly all the GIFs that rank high on excitement seem obvious in retrospect, but it is interesting to see such a trend bubble up from the votes.
Are there any characters or subjects that seem most popular?
Yeah, some recurring characters on GIFGIF are Stephen Colbert, Finn and Jake from “Adventure Time,” SpongeBob and Patrick, Jennifer Lawrence, Michelle Obama, and Jerry Seinfeld.
Interesting — those are all distinctly American figures. Are you looking into how people interpret gestures and GIFs across cultures? I noticed you’re not asking respondents to give their native language or location or demographic information like that.
We’re keenly interested in looking at how people interpret GIFs across cultures and countries. In fact, we do collect the IP address of each vote, which in turn allows us to see the country of origin of each vote. The 2.5 million votes we have so far are all geo-located and we have been working to look at the variance across cultures and countries. We hope to release our findings in August along with everything else.
What do you see as the primary applications of this research?
We entered the project with a narrower purpose in mind. It was something along the lines of building the tools necessary to create the world’s best GIF search engine — the data and backend needed to accurately map the emotional content of every GIF on the Internet. With this data, we would then be able to answer some questions about the medium, how emotion is conveyed online, and other things relating to the use of GIFs across Internet culture.
Other researchers at MIT have been supportive of the work — we’ve collaborated with another Media Lab student on a project using the GIFGIF API. We’ve also been in contact with [computer science] and linguistics researchers. So far the response has been overwhelmingly positive — academics seem to realize that, although GIFs are fun, they contain enough content to be taken seriously.
What we’ve discovered is that the research is not about animated GIFs. It is about measuring that intangible human experience (whether it be emotion, knowledge, or other abstract concepts) and building tools that let us computationally act on them.