Using Twitter data to study politics? Fine, but be careful!

The role of social media in shaping the new politics is undeniable, and the volume of research on this topic, relying on data produced by these same technologies, is ever increasing. And let's be honest: when we say "social media" data, we almost always mean Twitter data!

Twitter is arguably the most studied and used source of data in the new field of Computational Political Science, even though in many countries Twitter is not the main player. But we all know why we use Twitter data in our studies and not, for instance, data mined from Facebook: Twitter data are (almost) publicly available, whereas it's (almost) impossible to collect any useful data from Facebook.

That is understandable. However, there are numerous issues with studies that rely entirely on Twitter data.

In a mini-review paper titled "A Biased Review of Biases in Twitter Studies on Political Collective Action", we discussed some of these issues. Only some, not all, and that's why we called our paper a "biased review".

The reason I'm reminding you of the paper now is mostly the new surge of research on "politics and Twitter" in relation to the recent events in the UK and US, and the forthcoming elections in European countries this summer.

Here is the abstract:

In recent years researchers have gravitated to Twitter and other social media platforms as fertile ground for empirical analysis of social phenomena. Social media provides researchers access to trace data of interactions and discourse that once went unrecorded in the offline world. Researchers have sought to use these data to explain social phenomena both particular to social media and applicable to the broader social world. This paper offers a minireview of Twitter-based research on political crowd behavior. This literature offers insight into particular social phenomena on Twitter, but often fails to use standardized methods that permit interpretation beyond individual studies. Read more….


Social Media: an illustration of overestimating the relevance of social media to social events, from XKCD.

Even good bots fight and a typology of Internet bots

Our new paper titled “Even good bots fight: The case of Wikipedia” has finally appeared on PLOS One.

There are two things I find particularly worth highlighting about this work. First, this is the first time anyone has looked at an ecosystem of Internet bots at scale using hard data and tried to come up with a typology of Internet bots (see the figure). Second, the arrangement of our team is a good example of multidisciplinary research in action: Milena Tsvetkova, the lead author, is a sociologist by training; Ruth Garcia is a computer engineer; Luciano Floridi is a professor of philosophy; and I have a PhD in physics.

If you find the paper too long, have a look at the University of Oxford press release, or the one by the Alan Turing Institute, where both Luciano and I are Faculty Fellows.

Among the many media reports on our work, I think the one in The Guardian comes closest to ideal.


A first typology of Internet bots. See the source.


New Paper: Personal Clashes and Status in Wikipedia Edit Wars


Originally posted on HUMANE blog by Milena Tsvetkova.

Our study on disagreement in Wikipedia was just published in Scientific Reports (impact factor 5.2). In this study, we find that disagreement and conflict in Wikipedia follow specific patterns. We use complex network methods to identify three kinds of typical negative interactions: an editor repeatedly confronting another editor, an editor confronting back an equally experienced attacker, and less experienced editors confronting someone else's attacker.

Disagreement and conflict are a fact of social life but we do not like to disclose publicly whom we dislike. This poses a challenge for scientists, as we rarely have records of negative social interactions.

To circumvent this problem, we investigate when and with whom Wikipedia users edit articles. We analyze more than 4.6 million edits in 13 different language editions of Wikipedia in the period 2001-2011. We identify when an editor undoes the contribution of another editor and create a network of these "reverts".
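As a rough illustration of this step (the record format and data below are invented for the example, not the paper's actual pipeline), building such a revert network amounts to counting directed "who reverted whom" edges:

```python
from collections import Counter

# Hypothetical edit records: (article, timestamp, editor, reverted_editor),
# where reverted_editor is None when the edit does not undo anyone.
edits = [
    ("Anarchism", 1, "alice", None),
    ("Anarchism", 2, "bob", "alice"),   # bob reverts alice
    ("Anarchism", 3, "alice", "bob"),   # alice reverts back
    ("Anarchism", 4, "carol", "bob"),   # carol reverts bob too
]

def build_revert_network(edits):
    """Count directed revert edges (reverter -> reverted editor)."""
    network = Counter()
    for _, _, editor, reverted in edits:
        if reverted is not None and reverted != editor:  # skip self-reverts
            network[(editor, reverted)] += 1
    return network

network = build_revert_network(edits)
# each of the three directed pairs above occurs once in the network
```

The resulting weighted, directed network is what the structural analysis (repeated reverts, revenge reverts, third-party reverts) operates on.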

A revert may be intended to improve the content in the article but may also indicate a negative social interaction among the editors involved. To see if the latter is the case, we analyze how often and how fast pairs of reverts occur compared to a null model. The null model removes any individual patterns of activity but preserves important characteristics of the community. It preserves the community structure centered around articles and topics and the natural irregularity of activity due to editors being in the same time zone or due to the occurrence of news-worthy events.
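The general logic of such a null-model comparison can be sketched as follows: shuffle who gets reverted while keeping everyone's activity level fixed, then compare an observed statistic against the shuffled ensemble. This is a simplified illustration with invented data, not the paper's exact null model (which also preserves article structure and timing):

```python
import random
from collections import Counter

def repeat_revert_count(reverts):
    """Number of reverter->reverted pairs that occur more than once."""
    counts = Counter(reverts)
    return sum(1 for c in counts.values() if c > 1)

def null_distribution(reverts, n_shuffles=1000, seed=0):
    """Shuffle who gets reverted, keeping each editor's activity level."""
    rng = random.Random(seed)
    reverters = [a for a, _ in reverts]
    reverted = [b for _, b in reverts]
    stats = []
    for _ in range(n_shuffles):
        rng.shuffle(reverted)
        shuffled = [(a, b) for a, b in zip(reverters, reverted) if a != b]
        stats.append(repeat_revert_count(shuffled))
    return stats

# Invented revert list: bob reverts alice twice, plus two other reverts.
reverts = [("bob", "alice"), ("bob", "alice"), ("alice", "bob"), ("carol", "bob")]
observed = repeat_revert_count(reverts)
null = null_distribution(reverts)
# Empirical p-value: how often the shuffled data look at least as extreme.
p = sum(s >= observed for s in null) / len(null)
```

If `observed` sits far in the tail of `null`, the repeated reverts cannot be explained by activity levels alone, which is the kind of evidence the study relies on.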

Using this method, we discover that certain interactions occur more often and during shorter time intervals than one would expect from the null model. We find that Wikipedia editors systematically revert the same person, revert back their reverter, and come to defend a reverted editor beyond what would be needed just to improve and maintain the encyclopedia objectively. In addition, we analyze the editors’ status and seniority as measured by the number of article edits they have completed. This reveals that editors with equal status are more likely to respond to reverts and lower-status editors are more likely to revert someone else’s reverter, presumably to make friends and gain some social capital.

We conclude that the discovered interactions demonstrate that social processes interfere with how knowledge is negotiated. Large-scale collaboration by volunteers online provides much of the information we obtain and the software products we use today. The repeated interactions of these volunteers give rise to communities with shared identity and practice. But the social interactions in these communities can in turn affect knowledge production. Such interferences may induce biases and subjectivities into the information we rely on.

Biases in Online Attention; Whose life matters more

It has become common knowledge that certain lives matter more when it comes to media coverage of and public attention to natural or man-made disasters. Among the many papers and articles that report such biases, my favourite is one by William C. Adams titled "Whose Lives Count?", dating back to 1986. It reports, for example, that an Italian life matters as much to American TV as some 200 Indonesian lives.


The MH17 crash site in eastern Ukraine. Analysis of Wikipedia found that its article about the crash was the most read of all the aircraft incidents reported in Wikipedia. Photo by Robert Ghement/EPA.

We also studied such biases in online attention, in relation to aircraft crashes. Our paper, recently published in Royal Society Open Science, reports, for example, that a North American life matters almost 50 times more than an African life to the pool of Wikipedia readers.

The paper has received a lot of media attention and made it into Science and the Guardian.

The abstract of the paper reads:

The Internet not only has changed the dynamics of our collective attention but also through the transactional log of online activities, provides us with the opportunity to study attention dynamics at scale. In this paper, we particularly study attention to aircraft incidents and accidents using Wikipedia transactional data in two different language editions, English and Spanish. We study both the editorial activities on and the viewership of the articles about airline crashes. We analyse how the level of attention is influenced by different parameters such as number of deaths, airline region, and event locale and date. We find evidence that the attention given by Wikipedia editors to pre-Wikipedia aircraft incidents and accidents depends on the region of the airline for both English and Spanish editions. North American airline companies receive more prompt coverage in English Wikipedia. We also observe that the attention given by Wikipedia visitors is influenced by the airline region but only for events with a high number of deaths. Finally we show that the rate and time span of the decay of attention is independent of the number of deaths and a fast decay within about a week seems to be universal. We discuss the implications of these findings in the context of attention bias.

and the full paper is available here.

Understanding voters’ information seeking behaviour

Jonathan and I recently published a paper titled "Wikipedia traffic data and electoral prediction: towards theoretically informed models" in EPJ Data Science.

In this article we examine the possibility of predicting election results by analysing Wikipedia traffic to the articles on the parties involved in the election.

Unlike similar work, in which socially generated online data are fed into an automated learning system to predict electoral results without much understanding of the mechanisms, here we try to build a theoretical understanding of voters' information-seeking behaviour around election time and use that understanding to make predictions.


The left panel shows the normalized daily views of the article on the European Parliament Election, 2009 in different language editions of Wikipedia. The right panel shows the relative change between the 2009 and 2014 election turnout in each country vs the relative change in the page view counts of the election article in the corresponding Wikipedia language edition. Germany and the Czech Republic are marked as outliers from the general trend.

We test our model on a variety of countries in the 2009 and 2014 European Parliament elections. We show that Wikipedia offers good information about changes in overall turnout at elections and also about changes in vote share for parties. It gives a particularly strong signal for new parties which are emerging to prominence.
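The core signal behind these comparisons is simple: the relative change in attention to the election article between two consecutive elections, set against the relative change in turnout. A minimal sketch with invented page-view numbers (the real analysis works from actual Wikipedia traffic data):

```python
def relative_change(old, new):
    """Relative change, used for both page views and turnout."""
    return (new - old) / old

# Hypothetical page-view counts for the election article in two
# language editions around the two election dates (numbers made up).
views = {
    "de": {"2009": 120_000, "2014": 90_000},
    "fr": {"2009": 80_000, "2014": 95_000},
}

signals = {lang: relative_change(v["2009"], v["2014"])
           for lang, v in views.items()}
# Under the model, a positive signal (fr: +0.1875) suggests rising turnout,
# a negative one (de: -0.25) suggests falling turnout.
```

The right panel of the figure above is essentially a scatter plot of this page-view signal against the corresponding turnout change per country.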

We use these results to enhance existing theories about the drivers of aggregate patterns in online information seeking, by suggesting that:

voters are cognitive misers who seek information only when considering changing their vote.

This shows the importance of informal online information in forming the opinions of swing voters, and emphasizes the need for parties, campaign organizers, and institutions that regulate elections to take the potential of systems like Wikipedia seriously.

Read more here.

P-values: misunderstood and misused

Since I launched this blog, I have wanted to write something about the dangers of big data: the things that can easily go wrong when you study large-scale transactional data. Obviously, I haven't done this yet!

But recently we (Bertie, my PhD student, and I) finished a paper titled "P-values: misunderstood and misused".

Statistical "misunderstanding" is, of course, one of the dangers of big data. Calculating p-values has become the most-used method to prove the "significance" of an analysis. However, as we say in the abstract:

P-values are widely used in both the social and natural sciences to quantify the statistical significance of observed results. The recent surge of big data research has made the p-value an even more popular tool to test the significance of a study. However, substantial literature has been produced critiquing how p-values are used and understood. In this paper we review this recent critical literature, much of which is rooted in the life sciences, and consider its implications for social scientific research. We provide a coherent picture of what the main criticisms are, and draw together and disambiguate common themes. In particular, we explain how the False Discovery Rate is calculated, and how this differs from a p-value. We also make explicit the Bayesian nature of many recent criticisms, a dimension that is often underplayed or ignored. We also identify practical steps to help remediate some of the concerns identified, and argue that p-values need to be contextualised within (i) the specific study, and (ii) the broader field of inquiry.
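To make the p-value/FDR distinction concrete: under standard assumptions, the False Discovery Rate depends not only on the significance threshold alpha but also on statistical power and on the prior fraction of tested hypotheses that are actually true. This short sketch is my own illustration of that textbook relationship, not a calculation taken from the paper:

```python
def false_discovery_rate(alpha, power, prior_true):
    """Expected fraction of 'significant' results that are false positives,
    given the significance threshold, the test's power, and the prior
    probability that a tested hypothesis is true."""
    false_positives = alpha * (1 - prior_true)   # true nulls wrongly rejected
    true_positives = power * prior_true          # real effects detected
    return false_positives / (false_positives + true_positives)

# alpha = 0.05 and 80% power, but only 10% of tested hypotheses are true:
fdr = false_discovery_rate(0.05, 0.80, 0.10)
# fdr = 0.36: over a third of the "significant" findings are false discoveries
```

In other words, "p < 0.05" does not mean a 5% chance the finding is false; in fields where most tested hypotheses are wrong, the false discovery rate can be far higher.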


"Significant".


Wikipedia: modern platform, ancient debates on Land and Gods

What are the most controversial topics in Wikipedia? What articles have been subject to edit wars more than others? We now have a tool to explore what topics are most controversial in different languages and different parts of the world.

Wikipedia is great! There is no doubt about it. You may argue that it's not reliable, it's incomplete, it's biased, etc., and I might agree. However, despite all these issues, Wikipedia IS useful, fast, practical, and phenomenal!

Do you know of any other example of mass collaboration at the scale of Wikipedia, with more than 40 million editors having produced more than 37 million articles in more than 280 languages?

Coordinating even a small group of friends becomes a big issue when it comes to collaborating and reaching agreement on a topic. How is it possible that this huge number of non-professional individuals, with different backgrounds, cultures, and opinions, come together and produce the largest encyclopaedia of all time?

Well, the answer is: it's not easy, and it's not always smooth. Many Wikipedia articles are about neutral topics, like watermelons and hamsters. But there are also lots of editorial wars and opinion clashes happening behind the scenes of Wikipedia. What are the main characteristics of these wars? What are the most disputed articles? Do they give us a window into how people in different parts of the world think? It's not difficult to observe some of the editorial wars in the English Wikipedia, for example in the list of controversial issues in Wikipedia. But there is no guarantee that such lists are comprehensive, and more importantly, they are only available for the biggest language editions, like the English Wikipedia.

There have already been nice studies of conflict in Wikipedia, but unfortunately limited to the English edition. In a recent multidisciplinary project (see the paper), my colleagues Anselm Spoerri (communication and information scientist), Mark Graham (geographer), János Kertész (senior physicist), and I (physicist in transition to computational social scientist) studied Wikipedia editorial wars in 13 different language editions: English, German, French, Spanish, Portuguese, … Persian, Arabic, Hebrew, … Czech, Hungarian, Romanian, … Chinese and Japanese.

We developed tools to locate, quantify, and rank the most controversial articles in different language editions without being able to read the language! Our method for measuring editorial wars is reported in our previous papers on Dynamics of conflicts in Wikipedia and Edit wars in Wikipedia.
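As a heavily simplified sketch of the idea behind such a measure (the actual measure defined in those papers is more elaborate, and all data below are invented), one can score an article by its mutual reverts, weighting each mutually reverting pair by the experience of its less experienced member:

```python
# Hypothetical input for one article: reverts as (reverter, reverted)
# pairs, and each editor's total edit count as a proxy for experience.
reverts = {("alice", "bob"), ("bob", "alice"), ("carol", "bob")}
edit_counts = {"alice": 500, "bob": 1200, "carol": 30}

def controversy_score(reverts, edit_counts):
    """Simplified controversy score: sum min(experience) over mutually
    reverting pairs, scaled by the number of editors involved in them.
    (The published measure is more elaborate than this sketch.)"""
    mutual = {frozenset((a, b)) for a, b in reverts if (b, a) in reverts}
    editors = {e for pair in mutual for e in pair}
    weight = sum(min(edit_counts[a], edit_counts[b])
                 for a, b in (tuple(p) for p in mutual))
    return len(editors) * weight

score = controversy_score(reverts, edit_counts)
# carol's one-way revert does not count; the alice<->bob war contributes
# min(500, 1200) = 500, times 2 editors involved = 1000
```

The key point, which such a score captures, is that a fight between two veteran editors signals deeper controversy than a newcomer being routinely corrected, and the computation never requires reading the article's language.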

Now that we have controversy measures for all the articles in the language editions under study, we can have lots of fun!

First take a look at Mark's awesome post mapping the geographical locations of the controversial articles, and then I'll tell you something about the most debated topics in different language editions.

Here’s the top-10 list of most controversial articles in different languages:

Rank | English | German | French | Spanish | Portuguese | Czech | Hungarian | Romanian | Arabic | Persian | Hebrew | Japanese | Chinese
1 | George W. Bush | Croatia | Ségolène Royal | Chile | São Paulo | Homosexuality | Gypsy Crime | FC Universitatea Craiova | Ash’ari | Báb | Chabad | Koreans in Japan | Taiwan
2 | Anarchism | Scientology | Unidentified flying object | Club América | Brazil | Psychotronics | Atheism | Mircea Badea | Ali bin Talal al Jahani | Fatimah | Chabad messianism | Korea origin theory | List of upcoming TVB series
3 | Muhammad | 9/11 conspiracy theories | Jehovah’s Witnesses | Opus Dei | Rede Record | Telepathy | Hungarian radical right | Disney Channel (Romania) | Muhammad | Mahmoud Ahmadinejad | 2006 Lebanon War | Men’s rights | TVB
4 | List of WWE personnel | Fraternities | Jesus | Athletic Bilbao | José Serra | Communism | Viktor Orbán | Legionnaires’ rebellion & Bucharest pogrom | Ali | People’s Mujahedin of Iran | B’Tselem | internet right-wing | China
5 | Global warming | Homeopathy | Sigmund Freud | Andrés Manuel López Obrador | Grêmio Foot-Ball Porto Alegrense | Homophobia | Hungarian Guard Movement | Lugoj | Egypt | Criticism of the Quran | Benjamin Netanyahu | AKB48 | Chiang Kai-shek
6 | Circumcision | Adolf Hitler | September 11 attacks | Newell’s Old Boys | Sport Club Corinthians Paulista | Jesus | Ferenc Gyurcsány’s speech in May 2006 | Vladimir Tismăneanu | Syria | Tabriz | Jewish settlement in Hebron | Kamen Rider Series | Ma Ying-jeou
7 | United States | Jesus | Muhammad al-Durrah incident | FC Barcelona | Cyndi Lauper | Moravia | The Mortimer case | Craiova | Sunni Islam | Ali Khamenei | Daphni Leef | One Piece | Chen Shui-bian
8 | Jesus | Hugo Chávez | Islamophobia | Homeopathy | Dilma Rousseff | Sexual orientation change efforts | Hungarian far-right | Romania | Wahhabi | Ruhollah Khomeini | Gaza War | Kim Yu-Na | Mao Zedong
9 | Race and intelligence | Minimum wage | God in Christianity | Augusto Pinochet | Luiz Inácio Lula da Silva | Ross Hedvíček | Jobbik | Traian Băsescu | Yasser Al-Habib | Massoud Rajavi | Beitar Jerusalem F.C. | Mizuho Fukushima | Second Sino-Japanese War
10 | Christianity | Rudolf Steiner | Nuclear power debate | Alianza Lima | Guns N’ Roses | Israel | Polgár Tamás | Romanian Orthodox Church | Arab people | Muhammad | Ariel Sharon | GoGo Sentai Boukenger | Tiananmen Square protests of 1989

Interesting and familiar titles, right? Did you notice that some titles appear in many different language editions? Many of them are about religion (Jesus), countries (Israel, Brazil), or politics (Ségolène Royal, George W. Bush).


If you'd like to take a look at the top-100 lists, or fancy having the complete lists with controversy scores, get them from here.

What you see on the right is a word cloud of all the titles in the top-100 lists.

There are interesting patterns: similarities and differences, international and global issues alongside very local items. An interactive visualization of the top-100 lists in different languages, showing overlaps and similarities, is waiting for you here.

To get a more general picture, we have to look beyond individual titles and consider the broader topics and concepts into which the articles can be categorised.

We hand-coded all the articles in the top-100 lists with 10 different category tags. See the distribution of topical categories in each language in the interactive chart below (click on it!).


Some interesting patterns: religion and politics are debated in Persian, Arabic, and Hebrew even more than in the other languages. The Spanish and Portuguese Wikipedias are full of wars over football clubs. The French and Czech Wikipedias have relatively more disputed articles on science- and technology-related topics. The Chinese and Japanese Wikipedias are battlefields for manga, anime, TV series, and entertainment fans. TVB products appear quite often in the Chinese list, and, well, the number 19 most disputed article in the Japanese Wikipedia is "Penis"!

"So what?", you are probably asking. Generally speaking, the implications of this kind of study are two-fold:

1) These results could help Wikipedia and similar projects (of which there are already many, with more emerging) to be better designed, taking into account these experiences and observations. Local effects shouldn't be neglected; in particular, Wikipedias with smaller communities of editors could otherwise end up inefficiently focused on local issues.

2) We believe that such case studies (with Wikipedia as the case) could help us and social scientists understand more about human societies. Topics like the emergence of conflict, its dynamics, its universal features, and its resolution mechanisms can be empirically examined for the first time. Most theories in social science could never be tested against real-world experiments (in contrast to the natural sciences). But now, thanks to our digital lives, we are able to track and analyse the actions and interactions of a huge society of individuals (here, Wikipedia editors), so why not test pre-existing social theories in the large "social experiment" that is Wikipedia?

Read more about this project:

Yasseri, T., Spoerri, A., Graham, M., and Kertész, J. (2014) The Most Controversial Topics in Wikipedia: A Multilingual and Geographical Analysis. In: Fichman, P. and Hara, N. (eds), Global Wikipedia: International and Cross-Cultural Issues in Online Collaboration. Scarecrow Press, forthcoming. Available at SSRN.

And more on Wikipedia by our team:

Török, J., Iñiguez, G., Yasseri, T., San Miguel, M., Kaski, K., and Kertész, J. (2013) Opinions, Conflicts and Consensus: Modeling Social Dynamics in a Collaborative Environment. Physical Review Letters 110 (8).

Yasseri, T., Sumi, R., Rung, A., Kornai, A., and Kertész, J. (2012) Dynamics of conflicts in Wikipedia. PLoS ONE 7(6): e38869.

Yasseri, T., Kornai, A., and Kertész, J. (2012) A practical approach to language complexity: a Wikipedia case study. PLoS ONE 7(11): e48386.

Yasseri, T., Sumi, R., and Kertész, J. (2012) Circadian patterns of Wikipedia editorial activity: A demographic analysis. PLoS ONE 7(1): e30091.

Mestyán, M., Yasseri, T., and Kertész, J. (2012) Early Prediction of Movie Box Office Success based on Wikipedia Activity Big Data.