
Teaching with our book

We could only write our book because we have taught quantitative methods (mostly to historians, but also to other humanists and social scientists) for more than 15 years. In France as elsewhere, however, there are few courses specifically aimed at historians or humanists – or they only address specific methods (GIS, network analysis, text analysis, etc.) and seldom discuss inputting and categorizing data. Our first intention when we wrote the initial French version of the book in 2007 was therefore to offer a self-help handbook: something that beginning graduate students (and anyone beginning a research project) could use to learn the basics on their own; the book and the companion website would point them to further readings and tutorials. The 2019 English book can be used in the same way (here are some ways to read it on your own, depending on your needs).

If, however, you are teaching quantitative history, digital humanities, or methods in general, and you want to devote at least a few lessons to the contents of our book, this post will give you a general idea of our own teaching formats. We are very willing to discuss ways to teach quantitative methods to diverse audiences (in comments here, over e-mail, at conferences, etc.). Continue reading “Teaching with our book”


How should I read this book?

This book is intended for many different audiences. We hope that you will read it bit by bit, but return to it often. Here are some ideas on how to do it.

We also hope that you will read papers, books, tutorials, or blog entries afterwards: the book is not a self-contained object (even with its accompanying blog). It is an invitation to discover quantitative research that respects the core values of humanistic research – and is engaging! In this blog, we point you to our favorite papers/books cited in each chapter, so that you can plan further readings. Continue reading “How should I read this book?”


What is this blog?

This site is under construction! (And, classically, it is not coming together as fast as we had hoped – sorry about that!) We will regularly add new posts to the “How to” and “Good Reads” categories, so as to (hopefully) cover all the methods addressed in the book. Meanwhile, do not hesitate to use the comments if you want to send us feedback or your wishlist for the site.

This is the companion blog to the book Quantitative Methods in the Humanities: An Introduction, by Claire Lemercier and Claire Zalc (University of Virginia Press, 2019). In the book, we explain why quantitative methods can become a useful addition to historians’ and humanists’ toolkits and recipe books. In this blog, we give you more direct access to the actual tools and recipes, thanks to tutorials, discussions of particularly interesting papers, pointers to software, and much more. Like the book, this blog is written for all historians and humanists, not specifically for those who consider themselves “digital humanists” or quantifiers. Our aim is to make quantification accessible without mathematical formalism and to show that it can be useful beyond specific themes such as economic or demographic history. For now, this perhaps looks more like a static website than a blog, because we first have to put together the main pointers (further readings, tutorials, etc.). In the future, we will publish actual blog posts commenting on new research, new tools, etc.

On strike / Supporting digital workers at OpenEdition

Since the beginning of the social movement on 5 December, we have suspended our quantitative history workshop, so as to give students and ourselves the opportunity to demonstrate or otherwise take part in the opposition to the pension reform. (Incidentally, researchers and professors are among the categories that would lose the most in percentage terms; but we know that absolute numbers matter more here, and we want to defend people who will not have complete, full-time careers – most of whom are women.) Beyond the pension reform, the movement defends public services in health and education, and now public universities and research, threatened by yet another reform [see here – in French].

In this context, digital workers at the public platform OpenEdition, who maintain the infrastructure supporting this blog along with hundreds of other blogs, journals, digital books, etc., have encountered opposition to their right to strike [see here – in French]. As producers of content, we are fully aware of our dependence on their work and we want to emphasize our support for their collective action. In our book, we argue against the invisibilization of digital labor in research projects. The same applies to platforms and other services: there are actual workers behind the “digital” or the “automatic.”


Reviews of the book

Pat Hudson wrote a review of our book on eh.net. Having herself authored an introduction to quantitative methods in history (one more centered than ours on economic and demographic history), she describes what we do and do not offer in our book with great fairness. This companion website is intended to address some of the shortcomings that she identified in our – indeed short – book.

Things that you should learn to do with a spreadsheet editor

Years of experience have convinced us that you do not need a more sophisticated tool than a spreadsheet editor (i.e. Excel, or Calc, its OpenOffice/LibreOffice equivalent) to properly store (input) and analyze (categorize, visualize, and run calculations on) historical data for a research project – a PhD dissertation, say, or any other project that is personal or involves just a few colleagues. (Sharing data with the general public is a different matter.)

So you will not find advice on this blog about database managers or the TEI. There is nothing wrong with those, and please do use them if you know how to (you will still have to export .csv files if you want to quantify). We simply say that learning them is not necessary to quantify properly, and that they are, in our experience, more difficult to learn than the proper use of a spreadsheet editor. (We discuss this in more detail in Chapter 3.) So here is a list of functions that you should learn to use in order to store and analyze your data with a spreadsheet editor. This is the practical complement to our “ten commandments of inputting data” (those are the core of our book, in our view, and they seem to have helped many users of the French version).
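To make the “export .csv files if you want to quantify” step concrete, here is a minimal sketch in R of what can happen after the spreadsheet stage; the file name and its columns (occupation, migrated) are hypothetical examples, not data from the book.

```r
# Minimal sketch: a dataset input in a spreadsheet editor, saved as .csv,
# then cross-tabulated in R. "individuals.csv" and its columns are invented.
individuals <- read.csv("individuals.csv",
                        stringsAsFactors = FALSE,
                        fileEncoding = "UTF-8")

# One row per individual, one column per piece of information,
# e.g. an "occupation" category and a yes/no "migrated" column.
crosstab <- table(individuals$occupation, individuals$migrated)
crosstab                                           # raw counts
round(prop.table(crosstab, margin = 1) * 100, 1)   # row percentages
```

Any spreadsheet editor can save a sheet as .csv, and the same cross-tabulation can also be obtained with a pivot table directly in Calc or Excel.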

Continue reading “Things that you should learn to do with a spreadsheet editor”

Running statistical tests online

In Chapter 2, we explain why “statistical tests” such as the chi-squared test can be useful. If you use R (or other statistical software), it will run such tests routinely. Running them in a spreadsheet editor is very cumbersome (and impossible for some tests). Fortunately, you can still use these tests without installing or learning statistical software: there are dedicated websites that run the test (in fact, run the statistical software) for you.
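If at some point you do install R, a test like this takes a single line. A minimal sketch, with a made-up contingency table (the counts and labels are invented for illustration):

```r
# Hypothetical 2x2 contingency table: counts are invented for illustration.
# Rows: women / men; columns: cited in the source / not cited.
counts <- matrix(c(35, 65,
                   52, 48),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("women", "men"),
                                 c("cited", "not cited")))

chisq.test(counts)    # chi-squared test of independence
fisher.test(counts)   # Fisher's exact test, often safer with small counts
```

The online calculators discussed in the full post typically ask for the same kind of table of counts.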

Continue reading “Running statistical tests online”

Why count? Why formalize?

Here are a few suggestions for further readings after you have read our Introduction and Chapter 1. Read them if you would like to better understand where we (and some others!) stand in the debates today as to why and how to use quantification – or formalization, to use a broader term. Continue reading “Why count? Why formalize?”

Network analysis tools and tutorials

Which tool should I use? This is a complicated question, with unsatisfying and constantly evolving answers, so prepare for an evolving post. But first, here are three things that are actually more important than the choice of software.
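To give a sense of what a first pass can look like with one widely used option, the R package igraph, here is a minimal sketch; the edge list, names, and ties are invented for illustration, and this is not a recommendation of igraph over the tools discussed in the full post.

```r
# Minimal sketch with the igraph package: a network built from an edge list
# (one row per tie). Names and ties are invented for illustration.
library(igraph)

edges <- data.frame(
  from = c("Anna", "Anna", "Bruno", "Carla"),
  to   = c("Bruno", "Carla", "Carla", "Dora")
)

g <- graph_from_data_frame(edges, directed = FALSE)

degree(g)       # number of ties per person
betweenness(g)  # one simple centrality measure
plot(g)         # quick visual check, not a publication-ready figure
```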

Continue reading “Network analysis tools and tutorials”

Favorite papers using maps or GIS

Geoff Cunfer’s “Scaling the Dust Bowl” will not teach you how to create a GIS or draw a map (it presents the results of his research, not how it unfolded step by step). But it is an excellent demonstration of the value of maps for data exploration. Maps are sometimes used to show something that numbers would demonstrate better; here, on the contrary, specific spatial patterns matter. In fact, what we like even more about this paper is the way it weaves together quantitative and spatial analysis with a thorough, genuinely “humanistic” discussion of the different sources on the Dust Bowl – what they show and what they hide. The author’s results contradict received knowledge: he does not stop there, but investigates where this knowledge came from and how to make sense of it.

Continue reading “Favorite papers using maps or GIS”

Categorization: Principles and examples

Johanna Drucker’s “Humanities Approaches to Graphical Display,” a classic among critical digital humanists, is ostensibly a paper on visualization, with nice, original figures. The author discusses the implicit assumptions behind “objective” visualizations, which often endorse simple conceptions of time (e.g. linear, or even without a past) and of categories (mutually exclusive, with firm boundaries, etc.). In doing so, she in fact addresses the implicit assumptions behind “objective” categorization generally – and she explicitly makes the point that data are capta (constructed), that humanists know this, and that using computers or quantification should not make them forget it. Her paper might free your imagination for the categorization of your own data. If you want to begin with an even shorter piece, she makes some of the same points with incisive clarity in this interview (with Miriam Posner, by Miriam Kienle). Continue reading “Categorization: Principles and examples”

Favorite papers in sequence analysis

Here, we have too many options! Because sequence analysis is a relatively recent, still marginal technique, promoted by sociologists who like fine-grained description and thinking about time, it has produced many papers that we like – not just because they explain the technique rather clearly, but because they tend to care about their sources and categories. So if you want to read just one paper after reading our overview, choose among the following depending on your substantive research interests. They are all clear and accessible as regards sequence analysis per se. Continue reading “Favorite papers in sequence analysis”

Analyzing texts – software and tutorials

First, it is important to understand that you do not necessarily need specific software to analyze texts. Sometimes, even if what you are interested in is style, you will be better off creating a “classical” dataset (one row per text, one column per interesting feature of the text) and analyzing it with a spreadsheet editor or classical functions in R, rather than feeding the entire text into specific software (provided that you have the means to digitize it). But sometimes, as we also explain in Chapter 7, you might want to use specific tools devised for texts – the object of this post. Here is a useful, if far from exhaustive, survey of software; we give more details on some options below. Continue reading “Analyzing texts – software and tutorials”
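As a small illustration of the “classical dataset” option (one row per text, one column per interesting feature), here is a sketch in base R; the texts and the features are invented and deliberately simple.

```r
# Sketch: compute simple features per text and store them in a data frame
# (one row per text, one column per feature). The texts are invented examples.
texts <- c(
  petition_1832 = "We the undersigned humbly request ...",
  petition_1848 = "Citizens demand the immediate repeal ..."
)

count_words <- function(x) {
  words <- unlist(strsplit(tolower(x), "[^[:alpha:]]+"))
  length(words[words != ""])
}

features <- data.frame(
  text        = names(texts),
  n_words     = sapply(texts, count_words),
  n_chars     = nchar(texts),
  mentions_we = grepl("\\bwe\\b", tolower(texts))
)
features   # this table could also be built, column by column, in a spreadsheet
```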

Analyzing texts – structured historical corpora

In Chapter 7, we warn readers against the use of Google Books as a source on the historical use of words: Google Books does not contain everything that has ever been printed, and it is biased in many directions (e.g. towards the English language), most of which are unknown. There are, however, corpora, generally built by linguists, that let you know exactly within which group of texts you are plotting the use of this or that word, or investigating which words were often used together. You can also choose to restrict your analysis to a part of the corpus (by author, say, or by period). These are also texts that have been checked and indexed by humans: they do not include errors due to optical character recognition, and in most cases they allow an analysis of grammatical categories. The drawback is, of course, that you can only explore what is in the corpus, i.e., often, something like the canon. But in any case, you cannot seriously count if you have not defined the perimeter of what you include and do not include. Continue reading “Analyzing texts – structured historical corpora”
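Most of these corpora have their own query interface, but if you end up with an exported table of counts, restricting the analysis to a sub-period and plotting relative frequencies is straightforward. A minimal sketch in R, assuming a hypothetical export “corpus_hits.csv” with columns year, n_hits, and n_tokens (this is not the format of any particular corpus):

```r
# Sketch: relative frequency of a word over time, restricted to part of a
# corpus. "corpus_hits.csv" and its columns (year, n_hits, n_tokens) are
# an invented export format, used only for illustration.
hits <- read.csv("corpus_hits.csv")

sub <- hits[hits$year >= 1800 & hits$year <= 1900, ]   # restrict the period
sub$per_million <- sub$n_hits / sub$n_tokens * 1e6     # normalize by corpus size

plot(sub$year, sub$per_million, type = "l",
     xlab = "Year", ylab = "Occurrences per million tokens")
```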

Factor analysis

Factor analysis is addressed in our Chapter 4. Factor analysis graphs may look strange, and it certainly takes some time to learn how to read them (to that end, you should re-read our chapter and look into our suggestions for further reading). Still, many historians want to use factor analysis once they have understood that it is an exploratory method that allows them to get a general idea of “everything that is in their dataset” and to create typologies. (Perhaps this is also true in other disciplines, but the words “exploratory,” “having a look at everything at once,” and “typology” seem to have a special appeal for historians – ourselves included.)
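If, as is frequent with historical data, your variables are categorical, one common flavor of factor analysis is multiple correspondence analysis. Here is a minimal sketch with the R package FactoMineR; the file and column names are hypothetical, and this is only one of several packages that produce such graphs.

```r
# Sketch: multiple correspondence analysis on categorical variables with the
# FactoMineR package. "persons.csv" and its columns are invented examples.
library(FactoMineR)

persons <- read.csv("persons.csv", stringsAsFactors = TRUE)

# Keep only the categorical columns you want to explore together,
# e.g. occupation, birthplace, and type of marriage record.
res <- MCA(persons[, c("occupation", "birthplace", "marriage_type")],
           graph = TRUE)   # graph = TRUE draws the factor analysis graphs

summary(res)   # coordinates and contributions of the categories
```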

Continue reading “Factor analysis”