First, it is important to understand that you do not necessarily need specialized software to analyze texts. Sometimes, even if what you are interested in is style, you will be better off creating a “classical” dataset (one row per text, one column per interesting feature of each text) and analyzing it with a spreadsheet editor or standard functions in R, rather than feeding the entire text into specialized software (provided that you have the means to digitize it). But sometimes, as we also explain in Chapter 7, you might want to use tools devised specifically for texts – the object of this post. Here is a useful, if far from exhaustive, survey of software; we give more details on some options below.
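To make the first option concrete, here is a minimal sketch in R of such a “classical” dataset; the titles, features, and values are invented for illustration:

```r
# A minimal "classical" dataset about texts: one row per text,
# one column per feature of interest (all values invented for illustration).
texts <- data.frame(
  title        = c("Speech A", "Speech B", "Speech C"),
  year         = c(1946, 1952, 1958),
  n_words      = c(1200, 870, 2310),   # length of each text
  first_person = c(14, 3, 22)          # e.g. count of first-person pronouns
)

# Ordinary functions then apply, with no text-specific software needed:
summary(texts)
cor(texts$n_words, texts$first_person)  # does pronoun use vary with length?
```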
Blogs that introduce several tools
Tutorials on diverse techniques
Topic modeling: tools and tutorials
Blogs that introduce several tools
First, we would like to mention two colleagues who also offer substantive analyses, but whose blog and website are generally very useful as regards methods:
– Cheryl Schonhardt-Bailey’s website gives examples of student research using textual analysis, as well as papers of her own (in political science), all of which are especially clear on methods (incl. Alceste – see below – and other methods). The “text mining handbook”, however, is not for beginners in statistics. Because of the type of texts that she studies, you might prefer to start there if you consider yourself a social scientist rather than a humanist (although, in fact, the two groups of researchers tend to end up using the same tools).
– Christof Schöch‘s “The Dragonfly’s Gaze” discusses the computational analysis of literary texts, with frequent (and generally advanced) discussions of tools, esp. for topic modeling. It is the place to go, among other things, if you want to know more about stylometry, which we briefly mention in our chapter.
Tutorials on diverse techniques
We briefly discuss word clouds in our book: a fashionable tool with many drawbacks if you want to visually summarize the use of words in a text. If you want to try drawing word clouds, keeping these reservations in mind, you can use this chapter in the excellent handbook by Graham, Milligan, and Weingart (which you can buy to show your support, even though it is freely accessible online). If you want to create a quick visualization, you could also try TreeCloud, which emphasizes not just the frequency of words but also their mutual proximity. It can be used online if the text is short, or downloaded as open-source software. The principles are explained here.
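If you would rather experiment directly in R, a basic word cloud takes a few lines with the wordcloud package (a sketch, assuming the package is installed; note that this weights frequency only, whereas TreeCloud also takes proximity into account):

```r
# A minimal word-cloud sketch in R (assumes install.packages("wordcloud")).
# Keep the reservations above in mind: frequency alone can mislead.
library(wordcloud)

text  <- "the quick brown fox jumps over the lazy dog the fox"
words <- tolower(unlist(strsplit(text, "[^a-zA-Z]+")))
words <- words[nchar(words) > 0]
freq  <- table(words)

# min.freq = 1 keeps every word in this toy example
wordcloud(names(freq), as.numeric(freq), min.freq = 1)
```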
In our chapter, we emphasize the value of a more classical (statistically simple and robust) technique used to contrast corpora (e.g. to compare authors or periods): the computation of the words over- and under-used in each corpus as compared to the others. Many tools allow you to do this (we will have to expand this section in the future to describe them; IRaMuTeQ, described below, also includes this function).
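To give the flavor of this computation, here is a sketch in R for a single word, with invented counts: the question is whether the word’s relative frequency differs significantly between two corpora.

```r
# Sketch of the over-/under-use logic for one word: build a 2x2 table of
# (this word vs. all other words) x (corpus A vs. corpus B) and test it.
# Counts below are invented for illustration.
word_in_A <- 120; total_A <- 50000   # occurrences of the word / size of corpus A
word_in_B <- 40;  total_B <- 60000   # same for corpus B

m <- matrix(c(word_in_A, total_A - word_in_A,
              word_in_B, total_B - word_in_B),
            nrow = 2, byrow = TRUE)

chisq.test(m)  # is the difference in relative frequency significant?
(word_in_A / total_A) / (word_in_B / total_B)  # ratio > 1 = over-used in A
```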
Graham, Milligan, and Weingart very clearly present several other useful tools. For not-too-large corpora, AntConc offers excellent search and visualization functions that allow you to track the context of words and their changing use over time. Voyant, a classic among digital humanities tools, does the same thing for larger corpora.
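To make concrete what such concordancers automate, here is a naive keyword-in-context (KWIC) function in base R (a toy sketch, not a substitute for AntConc or Voyant):

```r
# Naive keyword-in-context (KWIC): show each occurrence of a word with a few
# words of context on either side.
kwic <- function(text, keyword, window = 3) {
  tokens <- unlist(strsplit(tolower(text), "[^a-z']+"))
  tokens <- tokens[nchar(tokens) > 0]
  for (i in which(tokens == tolower(keyword))) {
    left  <- if (i > 1) tokens[max(1, i - window):(i - 1)] else character(0)
    right <- if (i < length(tokens))
               tokens[(i + 1):min(length(tokens), i + window)] else character(0)
    cat(paste(left, collapse = " "), "[", tokens[i], "]",
        paste(right, collapse = " "), "\n")
  }
}

kwic("The state must act, and the state will act.", "state")
```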
Topic modeling: tools and tutorials
We also present “topic modeling”, the use of the computer to define “topics” in texts, i.e. “bags of words” that often appear together in those texts. There are many tools for topic modeling. A first approach to the general idea is offered by Overview, which clusters texts according to their shared (or non-shared) vocabularies: Graham, Milligan, and Weingart introduce it here.
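Overview itself is point-and-click, but the underlying idea (grouping texts by their shared vocabulary) can be sketched in a few lines of base R, with toy texts invented for illustration:

```r
# Toy sketch of clustering texts by shared vocabulary, the idea behind Overview.
docs <- c(a = "tax budget deficit tax spending",
          b = "budget tax spending deficit",
          c = "war army border war troops",
          d = "troops army war border")

# Build a small document-term matrix by hand
tokens <- lapply(strsplit(docs, " "), table)
vocab  <- unique(unlist(lapply(tokens, names)))
dtm    <- t(sapply(tokens, function(tb) as.numeric(tb[vocab])))
dtm[is.na(dtm)] <- 0
colnames(dtm) <- vocab

# Hierarchical clustering on word-count distances: a and b group together,
# as do c and d, because they share vocabulary.
plot(hclust(dist(dtm)))
```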
For more refined analysis, we tend to favor IRaMuTeQ*. It is free, open source, and comes with a reasonably user-friendly graphical interface: you don’t have to use a command line. However, it relies on R, so you have to install R before installing and running IRaMuTeQ. The tutorial here leads you from installation to analysis. If you run into trouble during installation, you can also write to the users’ list, which is quite active and efficient. IRaMuTeQ has dictionaries that enable it to lemmatize texts in (modern) English, French, Italian, Spanish, Portuguese, Swedish, and Greek (some of the dictionaries are still experimental or minimal as of this writing, in March 2019); for other languages, you can use it, but you won’t get lemmatization. The actual use of the software (where to click, etc.) is quite straightforward; it is more difficult to learn how to prepare the texts and how to interpret the analysis, but it can be rewarding. For topic modeling, IRaMuTeQ uses the same logic as Alceste, which has been more often presented in English, esp. by Schonhardt-Bailey. If you understand the results of Alceste (for example, thanks to this video), you will understand the results of IRaMuTeQ.
[Edit] In 2020, Stephen Gourlay wrote a very useful tutorial explaining how to prepare input files for IRaMuTeQ. Thanks to him for this work and for signaling it to me! One comment as regards its application to historical sources: if your original material is a historical source (as opposed to interview transcripts that you produce yourself), you probably won’t want, at least in a first step, to standardize spelling, hyphenated words, or acronyms (e.g. consistently using either UN or United Nations), let alone synonyms, because patterns in spelling, in the use of acronyms or complete names, etc. might be interesting for you. In a second stage, however, you might try such a standardization and see whether this focus on words, as opposed to their exact spelling, changes some results.
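For orientation, an IRaMuTeQ corpus is a single plain-text file in which each text is introduced by a line of four stars followed by starred metadata variables (*name_value). The general shape is roughly as below; the variable names are invented for illustration, and the tutorials above give the authoritative details:

```
**** *source_debate1 *year_1945
Text of the first document, as plain paragraphs of prose.

**** *source_debate2 *year_1946
Text of the second document.
```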
Graham, Milligan, and Weingart present alternative techniques of topic modeling, often less easy to learn (i.e. involving some coding), here and in the following chapters. The classic here is Mallet. For many well-organized pointers to further readings and tutorials, you can also read Scott Weingart’s “guided tour for humanists” of topic modeling.
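Mallet is a Java command-line tool; if you would rather make a first attempt from R, the topicmodels package offers a standard LDA implementation. This is a sketch under the assumption that the tm and topicmodels packages are installed; it is not Mallet itself, just the same family of methods:

```r
# Sketch: LDA topic modeling in R with the topicmodels package
# (assumes install.packages(c("tm", "topicmodels"))). Toy documents invented.
library(tm)
library(topicmodels)

docs <- c("budget tax spending deficit tax",
          "tax budget deficit spending",
          "war army border troops war",
          "troops border army war")

# Build a document-term matrix from the toy corpus
dtm <- DocumentTermMatrix(Corpus(VectorSource(docs)))

# Ask for two topics; fixed seed for reproducibility
lda <- LDA(dtm, k = 2, control = list(seed = 1))

terms(lda, 4)   # top words ("bag of words") for each topic
topics(lda)     # most likely topic for each document
```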
* Some would argue that it does not perform topic modeling in the strict sense of the phrase, but the underlying idea is the same.