Years of experience have convinced us that you don’t need a more sophisticated tool than a spreadsheet editor (i.e. Excel, or Calc, its Open/LibreOffice equivalent) if you want to properly store (input) and analyze (categorize, visualize, and perform calculations on) historical data for a research project. For a PhD dissertation, say, or for any other project that is personal or involves just a few colleagues. (Sharing data with the general public is a different matter.)
So you won’t find advice on this blog about database managers or the TEI. There is nothing wrong with those, and please do use them if you know how (you’ll still have to export .csv files if you want to quantify). Our point is simply that learning them is not necessary to quantify properly, and that they are, in our experience, more difficult to learn than the proper use of a spreadsheet editor. (We discuss this in more detail in Chapter 3.) So here is a list of functions that you should learn in order to store and analyze your data with a spreadsheet editor. It is the more practical complement to our “ten commandments of inputting data” (those are the core of our book, in our view, and they seem to have helped many users of the French version).
Continue reading “Things that you should learn to do with a spreadsheet editor”
In Chapter 2, we explain why “statistical tests” such as the chi-squared test can be useful. If you use R (or other statistical software), it will routinely run such tests. Running them in a spreadsheet editor is very cumbersome (impossible for some tests). Fortunately, you can still use these tests without installing or learning statistical software: there are dedicated websites that run the test (in fact, run the statistical software) for you.
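If you do use R, a chi-squared test is a one-liner. Here is a minimal sketch on an invented two-by-two table (the counts are made up purely for illustration):

# Invented counts: gender by sector, purely for illustration
tab <- matrix(c(43, 21,
                12, 38),
              nrow = 2, byrow = TRUE,
              dimnames = list(gender = c("women", "men"),
                              sector = c("trade", "agriculture")))
chisq.test(tab)   # prints the chi-squared statistic and its p-value

The p-value helps you judge whether the association visible in the table could plausibly be due to chance alone.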
Continue reading “Running statistical tests online”
Here are a few suggestions for further reading after you have read our Introduction and Chapter 1. Read them if you would like to better understand where we (and some others!) stand in today’s debates as to why and how to use quantification – or formalization, to use a broader term. Continue reading “Why count? Why formalize?”
Which tool should I use? This is a complicated question, and the answers are not very satisfying and keep changing, so prepare for an evolving post. But first, three things that actually matter more than the choice of software.
Continue reading “Network analysis tools and tutorials”
Geoff Cunfer’s “Scaling the Dust Bowl” won’t teach you how to create a GIS or draw a map (it presents the results of his research, not how it unfolded step by step). But it is an excellent demonstration of the value of maps for data exploration. Maps are sometimes used to show something that numbers would demonstrate better; here, on the contrary, specific spatial patterns matter. In fact, what we like even more in this paper is the way it weaves together quantitative and spatial analysis with a thorough, truly “humanistic” discussion of different sources on the Dust Bowl, what they show and what they hide. The author’s results contradict received wisdom: he does not stop there, but investigates where this knowledge came from and how to make sense of it.
Continue reading “Favorite papers using maps or GIS”
Johanna Drucker’s “Humanities Approaches to Graphical Display,” a classic among critical digital humanists, is ostensibly a paper on visualization, with nice, original figures. The author discusses the implicit assumptions behind “objective” visualizations, which often endorse simple conceptions of time (e.g. linear, or even without a past) and of categories (mutually exclusive, with firm boundaries, etc.). In doing so, she in fact addresses the implicit assumptions behind “objective” categorization generally – and she explicitly makes the point that data are capta (constructed), that humanists know this, and that using computers or quantification should not make them forget it. Her paper might free your imagination for the categorization of your own data. Continue reading “Categorization: Principles and examples”
Here, we have too many options! As a relatively recent, still marginal technique, promoted by sociologists who like fine-grained description and thinking about time, sequence analysis has produced many papers that we like – not just because they explain the technique rather clearly, but because their authors tend to care about their sources and categories. So if you want to read just one paper after our overview, choose among the following depending on your substantive research interests. They are all clear and accessible as regards sequence analysis per se. Continue reading “Favorite papers in sequence analysis”
You will find references to other tools in older papers, but the situation is now clear: sequence analysis is performed in R, in command-line mode, using the specifically developed and well-documented package TraMineR. Continue reading “Sequence analysis”
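To give a concrete idea of what this looks like, here is a minimal sketch using the example dataset shipped with TraMineR (monthly school-to-work transitions); it only uses functions documented in the package’s user guide:

library(TraMineR)                     # the package discussed above
data(mvad)                            # example data shipped with the package
mvad.seq <- seqdef(mvad, var = 17:86) # columns 17 to 86 hold the monthly states
costs <- seqsubm(mvad.seq, method = "TRATE")  # substitution costs from transition rates
dists <- seqdist(mvad.seq, method = "OM", indel = 1, sm = costs)  # optimal matching distances
seqdplot(mvad.seq, border = NA)       # plot the distribution of states over time

The distance matrix (dists) is then typically fed into a clustering routine to build a typology of trajectories.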
You can find all the references that we cite in the book in the Zotero collection here. We have tried, as far as possible, to include links to legal open-access versions of the publications.
The subcollection “additions” references texts that are not cited in the book, but that we find very relevant and discuss somewhere on this blog.
First, it is important to understand that you do not necessarily need dedicated software to analyze texts. Sometimes, even if what you are interested in is style, you will be better off creating a “classical” dataset (one row per text, one column per interesting feature of the text) and analyzing it with a spreadsheet editor or classical functions in R, rather than feeding the entire text into dedicated software (provided that you have the means to digitize it). But sometimes, as we also explain in Chapter 7, you might want to use specific tools devised for texts – the object of this post. Here is a useful, if far from exhaustive, survey of software; we give more details on some options below.
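As an aside, such a “classical” dataset is easy to handle in R; a minimal sketch, with invented titles and numbers, purely for illustration:

# One row per text, one column per feature; all values invented
texts <- data.frame(
  title        = c("Sermon A", "Sermon B", "Pamphlet C"),
  year         = c(1651, 1660, 1688),
  word_count   = c(2310, 1875, 4102),
  first_person = c(12, 31, 5)   # e.g. counted occurrences of first-person pronouns
)
texts$fp_per_1000 <- 1000 * texts$first_person / texts$word_count  # a simple stylistic rate
texts[order(-texts$fp_per_1000), ]  # rank the texts by this feature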
Blogs that introduce several tools
Tutorials on diverse techniques
Topic modeling: tools and tutorials
Continue reading “Analyzing texts – software and tutorials”
In Chapter 7, we warn readers against the use of Google Books as a source on the historical use of words: Google Books does not contain everything that has ever been printed, and it is biased in many directions (e.g. towards the English language), most of which are unknown. There are, however, corpora, generally built by linguists, that let you know exactly inside which group of texts you are plotting the use of this or that word, or investigating which words were often used together. You can also choose to restrict your analysis to a part of the corpus (by author, say, or period). These are also texts that have been checked and indexed by humans: they do not include errors due to optical character recognition, and in most cases they allow an analysis of grammatical categories. The drawback is, of course, that you can only explore what is in the corpus, i.e., often, something like the canon. But then again, you cannot seriously count if you have not defined a perimeter of what you include and what you do not. Continue reading “Analyzing texts – structured historical corpora”
Factor analysis is addressed in our Chapter 4. Factor analysis graphs may look strange, and it certainly takes some time to learn how to read them (to that end, you should re-read our chapter and look into our suggestions for further reading). Still, many historians want to use factor analysis once they have understood that it is an exploratory method that allows them to get a general idea of “everything that is in their dataset” and to create typologies. (Perhaps this is also true in other disciplines, but the words “exploratory,” “having a look at everything at once,” and “typology” seem to have a special appeal for historians – ourselves included.)
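If you would like to experiment in R, here is a minimal sketch of a correspondence analysis (one common member of the factor analysis family) on an invented contingency table; it assumes the FactoMineR package, and the counts are made up:

library(FactoMineR)  # provides CA() for correspondence analysis
# Invented counts: occupations by city, purely for illustration
tab <- data.frame(Lyon  = c(30, 10, 5),
                  Rouen = c(12, 25, 9),
                  Dijon = c(8, 15, 40),
                  row.names = c("merchant", "artisan", "farmer"))
res <- CA(tab)   # computes the analysis and draws the usual two-axis graph
summary(res)     # eigenvalues and contributions, which help in reading the graph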
Continue reading “Factor analysis”
As we write in the book, for most historical and humanistic research, you won’t need methods of analysis more complicated than contingency tables (which you can create easily with a spreadsheet editor), perhaps complemented by a test such as the chi-squared test, which will help you interpret the tables (you can easily perform this test online). We strongly believe that the most important, interesting, and difficult task is to create good data: our book therefore focuses on the inputting and categorizing stages of this process. A straightforward descriptive analysis is then sufficient in many cases.
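For those who prefer R to a spreadsheet, building a contingency table takes one line; a minimal sketch on invented data:

# Invented data: one row per person
people <- data.frame(
  gender     = c("F", "M", "F", "M", "F", "M"),
  occupation = c("weaver", "smith", "weaver", "weaver", "servant", "smith")
)
tab <- table(people$gender, people$occupation)  # the contingency table
tab
prop.table(tab, 1)  # row percentages, often easier to interpret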
Should I learn R?
Graphical user interfaces
Tutorials
Continue reading “Learning R”
This book is intended for many different audiences. We hope that you will read it bit by bit, but return to it often. Here are some ideas on how to do it.
We also hope that you will read papers, books, tutorials, or blog entries afterwards: the book is not a self-contained object (even with its accompanying blog). It is an invitation to discover quantitative research that respects the core values of humanistic research – and is engaging! On this blog, we point you to our favorite papers and books cited in each chapter, so that you can plan further reading. Continue reading “How should I read this book?”
Of course we will value reviews in journals and your own feedback even more. But for now, we are lucky enough to be able to exhibit blurbs by colleagues whose work – and efforts to make quantification compatible with historical practice – we admire. Continue reading “Praise for the book by colleagues we admire”