Teaching with our book

We could only write our book because we have taught quantitative methods (mostly to historians, also to other humanists and social scientists) for more than 15 years. In France as elsewhere, however, there are few courses specifically aimed at historians or humanists – or they only address specific methods (GIS, network analysis, text analysis, etc.) and seldom discuss inputting and categorizing data. Therefore, our first intention when we wrote the initial, French version of the book in 2007 was to offer a self-help handbook: something that beginning graduate students (and anyone beginning a research project) could use to learn the basics on their own; the book and the companion website would point them to further readings and tutorials. The English book of 2019 can be used in the same way (here are some ways to read it on your own, depending on your needs).

If, however, you are teaching quantitative history, or digital humanities, or methods generally, and you want to devote at least a few lessons to the contents of our book, this post will give you a general idea of our own teaching formats. We are very willing to discuss ways to teach quantitative methods to diverse audiences (in comments here, over e-mail, in conferences, etc.).

The two main formats that we have used to teach quantitative methods generally (not just, e.g., network analysis or regression) are a graduate workshop based on the participants’ research (mostly attended by doctoral students, in the European classification) and a master-1 course built around an ad hoc collective research project (a capstone project) for students who have not yet seen primary sources or even defined a specific research topic; the latter could be adapted to advanced undergraduates or beginning graduate students in the US system.

Graduate workshop

This is a format that I have also used for one-shot workshops with small groups (3-6 participants) in various locations where I was for a short visit (the Institute for the history of science in Berlin, the department of history at Urbana-Champaign, etc.). It works really well as an introduction and, even more importantly, actually helps the participants in their research. It can be improvised by anyone who knows well what we discuss in Chapters 2 and 3 of our book (deciding which population or sample to study, inputting and categorizing data, creating contingency tables) and who has a general idea of which method suits which type of data and questions.

As an actual course of, for example, 24 hours, it seems much more useful for graduate students, in my view, than the more specific workshops or summer schools on a single method (which graduate schools offer more frequently). Everybody needs to discuss samples, databases, and categorization, to learn how to create a contingency table, and to get a general idea of when to use (and thus learn) other methods – whereas in no humanities graduate school will all students use GIS, network analysis, or regression. If we had our way, our type of workshop (or the course described below) would be compulsory in all graduate schools 🙂 – also because it’s really fun and rewarding to teach. (Colleagues have experimented with the same format, on a voluntary basis, in other places, esp. Stéphanie Ginalski and Alix Heiniger in Switzerland.)

In our Parisian version, there are ca. 20 participants per year (any number between 5 and 25 will do). They come from many different universities (in Paris and sometimes elsewhere), study diverse topics, periods, and places, and are not all historians; one important benefit of the workshop is precisely that they learn about what happens in fields of research very different from their own (and how to present their research to such an audience).

We ask participants to have read our book in advance. We use the first hour to establish the rules of the game, and then devote one hour to each participant (you can see here what it looks like – in French). They have ca. 15 minutes to present their research questions and examples of their materials (it is important to show what the sources look like), and to explain why they think they need some sort of quantification, what they have already done (we ask them to show their databases/spreadsheets, esp. if they are not proud of them) and how we and the group could help them. Then there is a 45-minute general discussion (if we could, we would love to have more time for each person; but I have also seen one-shot workshops work with 6 participants and 3 hours). We ask participants to take part in all or most sessions: it has happened quite often that they have learned more from the case of a colleague (sometimes studying a very different period or topic) than from our direct answers to their questions during their own session. Even though the setting is completely interactive, our answers often end up repeating the same principles on diverse cases. It is from those interactions that we derived most of the contents of our books; yet it’s always more efficient for participants to hear how general advice practically applies to their neighbor’s research than just to read it.

We try to create sessions that group persons who study similar types of sources (e.g. photographs; judicial archives; school textbooks) and/or persons who seem to have similar needs (how to sample from a large population; how to categorize occupations; the basics of network analysis). However, quite often, we collectively find that the big problem or the interesting issue is not what the person had anticipated. The classic case is that of a colleague who thinks that she needs to learn network analysis, but ends up not needing it or not having data that allows it; instead, she’ll be able to draw conclusions on social ties from a different method (Marten Düring discusses this case here). This is why the workshop does not require a rigid, step-by-step or method-by-method structure – and why you can easily improvise a 5-person, 3-hour workshop without worrying about the participants’ specialties or previous expertise. Looking at sources and at preliminary, ugly databases together is a great experience, and everybody has something to learn – provided, of course, that there is no spirit of competition or judgment involved. We often liken our workshop to collective therapy. Everybody has problems with data and quantification, and indeed with the research process generally (too many or not enough sources, etc.), and it’s nice to discuss this together at a stage when problems can still be solved.

Course for beginners in research

The format

Claire Zalc created the format in 2008 and then we taught two groups in parallel from 2009 to 2017 at Sciences Po, Paris. The students were beginning a “research master” in history but, for reasons specific to Sciences Po, many had never seen a primary source or read a paper in a history journal before. It was the first or second semester of the master, when they learned the basics of research methods, and they often did not yet know which topic they would personally investigate (only that they would work on the 1750-2000 period). In this context, we experimented with several variants of a format intended to a/ have them read our book, plus papers in history that used diverse methods (to teach them how to read papers generally, and quantitative papers specifically) and b/ have them conduct a small collective research project (a capstone project) in prosopography, from reading the sources to producing essays commenting on contingency tables (and chi-squared tests). This implied c/ a few sessions – very few, actually – in which we showed them how to actually do things with software and then asked them to practice at home, even though they knew almost nothing about spreadsheet editors beforehand (beyond how to sort rows). We taught them 1/ to draw a random sample using the RAND() function, 2/ to create pivot tables, and 3/ to run a chi-squared test or Fisher’s exact test using an online tool.
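If you would rather script these operations than do them in a spreadsheet, here is a minimal sketch in Python with pandas – a translation, not what we actually used in class (we stayed in the spreadsheet and an online calculator), and the file name and column names are invented:

```python
import pandas as pd

# Hypothetical export of the inputting spreadsheet; the file name and the
# column names ("gender", "political_affiliation") are invented for this sketch.
bios = pd.read_csv("biographies.csv", na_values=["NA"])

# 1/ Draw a random sample: the script equivalent of adding a RAND() column,
#    sorting on it, and keeping the first 50 rows.
sample = bios.sample(n=50, random_state=42)

# 2/ Build a pivot-style contingency table counting biographies
#    by two categorized variables.
table = pd.crosstab(sample["gender"], sample["political_affiliation"])
print(table)
```

The third operation (the chi-squared or Fisher test) is sketched further down, after the list of criteria for a good essay.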

Over time, we tended to spend more hours on b/ than on a/, because the course, and especially the collective research, was very demanding in terms of work between sessions. Depending on how many hours you have (we had 12 two-hour sessions, and students worked perhaps 2-3 hours per week on their own), you can do just a/, just b/, or a little bit of both! The most time-consuming – but rewarding – tasks, both for the teachers and the students, are the definition of the inputting grid and of the categorizing schemes.

One important thing that we learned from this course is that it mattered that it was compulsory – something some of our colleagues (almost none of whom practice quantification) supported, but many did not understand; the course ended in 2017. Had it been optional, few students would probably have chosen “quantification” when they did not yet know much about historical research generally. Yet many expressed a lot of interest in the course and put a lot of energy into the collective research – not just because of quantification, of course, but because it was a first experience with primary sources, they liked the “collective” part, and they were allowed to make many choices on their own. More importantly, many used some sort of quantification in their master’s thesis afterwards – including many who would never have thought about it without the course, e.g. in art history or religious history. Almost all those who went on to a doctoral dissertation did too – and the others knew that it was an option.

In our experience, it was optimal to have two groups of 10-15 students: each class was small enough to allow a lot of interaction, and the two groups joined forces to build a database large enough (in terms of numbers of persons and variables) to be interesting to explore. With fewer students, you would probably have to input fewer variables (discuss the choice with them so as to avoid frustrations).

Readings

The a/ part (reading texts together) was classic in format: the students had to read one paper per week, and then we discussed the text with them (with or without a formal presentation by one student), focusing on how the author explained her questions, her choice of sources and methods, and her results; how she had defined her population or sampled; how qualitative and quantitative analyses were linked; etc. If you want to do this with the topics of our book (or of some chapters), we present our favorite pieces for further reading here. Using the index in our book, you can also choose texts on your favorite topic, period, or area (do ask us if you don’t find what you would like – we know that we have included too few references on some topics, periods, and areas).

Working on biographical dictionaries

For the b/ part, the collective research, we did not want students to work on materials from our own personal research (we felt that it would be exploitation), and it was not practical to have them go to the archives and take pictures (which would of course be very interesting when/where it is possible). We decided to have them work on second-hand sources: online biographical dictionaries. We experimented with biographies of members of the French Parliament (on the official parliamentary website) for several periods, of persons in the workers’ movement (from a French-language dictionary written partly by historians and partly by amateurs, following serious guidelines), and of the Righteous among the Nations (on the official Yad Vashem website) – sources that have equivalents in many countries and languages (you can also find biographical dictionaries that could be used in a similar way for more distant time periods, as well as for literature, music, art history, the history of science, etc.). This was more familiar and less intimidating for complete beginners in research, and we discovered that it was indeed interesting to create and analyze a database of this sort. The drawback was, of course, the “second-hand” quality of the material; but as you will see, we used it to have students think critically both about quantification and about the source.

The first step of the research was for each student to read a few biographies, try to understand how they had been produced (the sources of the biographies, the history of the dictionary), and think about “what type of information we could extract from these biographies and try to quantify“. Most students mentioned things that we are accustomed to thinking of as objective and quantifiable: gender, age, occupation, political affiliation. At this stage, we introduced our ideas on inputting data (see Chapter 3 of our book) – keeping track of the source, keeping its exact words, etc. – but we also emphasized the fact that it was perfectly OK to put other things in the database:

  • things that were unevenly mentioned across biographies (absent in many, very long in some), e.g. information on private life. An interesting essay produced in this course, in 2015, by Paul Magisson and Alban Sénault discussed the marital status of female members of the workers’ movement – including the absence of any mention of a marital status (what it was correlated with, and what it implied);
  • things that looked complicated or “too qualitative,” e.g. the circumstances of an arrest or the presence of images in the biography. One of the most brilliant essays discussed the photographs of representatives of the colonial Empire in the French Parliament. The students (Elias Burgel and Thomas Mareite, in 2013) had inputted and categorized information on what was visible (was there a hat? a tie? in which direction did the person look? etc.) and independently researched the circumstances of production of these pictures, which helped them to interpret correlations between the contents of the texts and the photographs – for example, to think about the intentions of those deputies who wore supposedly traditional attire on their official photographs.
  • information on how the source was written, e.g. the author of each biography, its date of writing, whether or not the author had used interviews with the person, or the words used. Another of the most fascinating essays, in 2011, was based on a column in the database where students had inputted “all words used” (in the biographies of the Righteous among the Nations on the Yad Vashem website) “to characterize the character of the person.” On the basis of what had been inputted, Maxence Aucouturier and Marie-Camille Fourtané created inductive categories grouped around the most frequent words, e.g. words denoting warmth (a frequent metaphor in this context) or courage. Their hypothesis was that these words would be correlated with the gender of the person described. They found no such correlation, but found one with the type of person(s) helped (esp. children vs. members of the underground).

Decisions on how to input data

This led to fascinating discussions on what to include in our collective inputting instructions. In addition to deciding which columns to create (we tried to have as many as possible), we had to collectively write instructions that were as clear and manageable as possible, so that the students’ work would be consistent. It is interesting to have students experience this early on; then, when they create a database by themselves, they will hopefully also write clear instructions (something we often forget to do, thinking that we are going to be consistent over time). Here are short extracts of our 2016 grid – that year, we worked on artists in the dictionary of the workers’ movement for the interwar period (click on the images to enlarge). These are the first and third pages of a six-page document. Even if you don’t read French, you’ll hopefully see that we tried to be quite specific about what was to be included in each column.

A few noticeable things about this grid – and this type of collective research generally:

  • A discussion conducted in parallel to the definition of the grid concerned the definition (and, sometimes, sampling) of the population to study. Here, we decided on a working definition of “artists” that made sense for the students and could be applied to our source. We also decided that it would be more interesting not just to study the artists’ biographies, but to be able to compare them with the vast majority of non-artists in the dictionary (so as to interpret results). So the students worked on a/ all artists in the dictionary and b/ a random sample of the others, using the same grid (a minimal sketch of this design follows this list). In another year, we had fascinating discussions on how to study “the Jews” in the same dictionary (we used several different definitions of “the Jews” and compared the results). More unexpectedly, another group wanted to work on “women in the workers’ movement who had been imprisoned”, and had to decide what counted as “imprisoned.” You can read here about similar discussions in a capstone project at UCLA aimed at creating a database on early African-American silent “race films.” What was a “race film”? Defining the genre was not easy, so the students made inclusive choices at the beginning, refined them afterwards, and kept track of all choices as well as the discarded data.
  • One of the first items in each grid, before the discussion of columns, was “when do I create a separate row?”. Even in the apparently simple situation of biographies, we encountered limit cases. Should a member of Parliament who died just after his election and never took part in debates get his own row? (it depends on whether you study elections or parliamentary work!) How do we deal with persons with two or more separate biographies in the same dictionary? (create separate rows at the inputting stage, to be on the safe side; then merge them at the categorizing stage if your question deals with individuals, and keep them separate if it deals with biographies), etc.;
  • We showed the students how to apply the “ten commandments” in our book (for example, on the second page above, you can see an application of the “episode” format, used to input successive or simultaneous involvements in parties, newspapers, occupations, etc.) and when it was necessary to negotiate with the commandments because following them all would be too time-consuming or cumbersome (e.g. we put all first names in the same column) – while staying aware of the consequences;
  • We showed students that devising instructions on what to include or not was a great opportunity to think about their own categories of analysis, in the interest of research generally, not just quantification. For example, the grid above defines what should be considered “an occupation” for input – limit cases abound in biographies (when we don’t know whether the person was paid, how much time she spent doing something, etc.). The same goes for “political action,” “political opinion,” etc. (what is “political”? what counts as an “action”? an “opinion”?). This was a good introduction to research generally, and we wanted to make the point that quantification does not mean skipping over such questions. On the contrary, in our view, trying to be systematic can help us to think about our often too implicit definitions.
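To make the first bullet above concrete, here is a minimal sketch of the “all artists plus a random sample of the others” design in Python with pandas; the file name, the column name, and the sample size are invented, and our actual work was done in a shared spreadsheet:

```python
import pandas as pd

# Hypothetical table of all biographies in the dictionary, with a boolean
# "is_artist" column reflecting our working definition of "artist".
bios = pd.read_csv("dictionary_biographies.csv", na_values=["NA"])
is_artist = bios["is_artist"].astype(bool)

# Keep every artist...
artists = bios[is_artist]

# ...and draw a random sample of the non-artists, inputted with the same grid,
# so that the two groups remain comparable.
comparison = bios[~is_artist].sample(n=100, random_state=0)

study_population = pd.concat([artists, comparison])
print(study_population["is_artist"].value_counts())
```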

Here is a short extract of the resulting spreadsheet (click on the image to enlarge), which had a total of 291 rows and 177 columns (including many “comments” columns), but did not take nearly as long to fill in as you might think (each student read and inputted ca. 10 biographies; their median length was only 1,000 characters). Notice that there are no empty cells (but “NA” for unknown information). The colors were used to identify substantive types of information (on the biography, on politics, on art, etc.) so as to make inputting easier.
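If you move such a spreadsheet into a script at some point, this “no empty cells, NA for unknown” convention can be checked automatically; here is a sketch (the file name and contents are hypothetical – we did the equivalent by eye and with spreadsheet filters):

```python
import pandas as pd

# Load the collective spreadsheet without converting "NA" to missing values,
# so that truly empty cells can be told apart from explicit "NA" entries.
raw = pd.read_csv("collective_database.csv", keep_default_na=False)

# Cells left empty are inputting oversights under our "no empty cells" rule.
empty_cells = (raw == "").sum()
print("Columns with empty cells:")
print(empty_cells[empty_cells > 0])

# Explicit "NA" entries, by contrast, mean "information not in the biography".
print("Share of 'NA' per column:")
print((raw == "NA").mean().sort_values(ascending=False).head(10))
```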

Categorization and analysis

Then came the categorizing stage. Groups of students took charge of groups of columns (one group dealt with artistic activity, another with political activity, etc.) and decided on categories that, in their view, made sense and were manageable given what was in the database. They applied their own categorization schemes and wrote notes to explain to the other groups what they had done (of course, we gave them advice during the process, but the ultimate decisions were theirs – there is no intrinsically bad categorization scheme, only insufficiently explicit ones, and many that are not adapted to the research question!). For each group, the explanatory document was often four pages long. For example, Chloé Duvivier, Chloé D’Arcy, and Manon Khalfi dealt with descriptions of what the persons did in periodical publications. They created several variables; one described the “function in the publication”: author, funder, editor-in-chief, seller, etc. They had to explain what exactly they had grouped under each category and how they had dealt with multiple functions and very rare functions. They created a second, yes/no variable, “internal progress,” describing an ascending career inside a periodical, which required (necessarily imperfect) decisions on how to consider hierarchies. Thomas Busciglio, Inès El Alami, and Gabriela Larrain worked on occupations. In addition to thinking about economic sectors, they created separate variables indicating occupational instability (how many distinct occupations were mentioned?), the use (or not) of the term “worker” (important in the context of a dictionary of the “workers’ movement”), and whether the individual had the same occupation as one of his or her parents, did not, or whether it was impossible to tell from the biography. Different groups would have made different choices – and that was also the take-home message.
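To make the mechanics concrete, here is a minimal sketch, with an invented scheme, of what applying such a categorization scheme to one column can look like in pandas (the students did this directly in the spreadsheet, in new columns, and documented every grouping decision in their notes):

```python
import pandas as pd

# Hypothetical file and column names.
bios = pd.read_csv("collective_database.csv", na_values=["NA"])

# An invented, deliberately simple categorization scheme for the
# "function in the publication" column; the real schemes were longer
# and came with a written explanation of every grouping decision.
function_scheme = {
    "author": "contributor",
    "illustrator": "contributor",
    "editor-in-chief": "management",
    "funder": "management",
    "seller": "distribution",
}

bios["function_category"] = bios["publication_function"].map(function_scheme)
# Values outside the scheme stay NaN here and would be flagged for discussion,
# rather than silently lumped into an "other" category.
print(bios["function_category"].value_counts(dropna=False))
```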

The students knew from the beginning what this was all about: they would have to write essays, in groups of two to four, presenting a question about the material (citing at least a few items from the relevant literature) and trying to answer it on the basis of at least two contingency tables (i.e. on the basis of correlations between variables in the database) accompanied by tests – a sketch of this step follows the list below. (The students often chose to include more than two tables; some even took the time to learn the basics of text analysis, correspondence analysis, or regression and to experiment with them. Over the years, we had to limit the length of the essays, because the students were so motivated that grading took a really long time.) Moreover, they knew that just giving the numbers would not suffice: among the criteria for a good essay, we had listed

  • reminding the reader of the categorization schemes for the variables that they used, insofar as this was relevant for interpretation;
  • explaining in plain French what the table and the test said;
  • including qualitative elements in the analysis, e.g. looking closely at individual examples of what they were trying to demonstrate, or at exceptions;
  • including prospects about what could be done with the material to push matters further (which other sources and methods to use) if they had more time.
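As an illustration of the table-plus-test step mentioned above, here is a minimal sketch in Python with pandas and scipy; the file and variable names are invented, and the students themselves used pivot tables and an online calculator:

```python
import pandas as pd
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical file and column names.
bios = pd.read_csv("collective_database.csv", na_values=["NA"])

# Contingency table crossing two categorized variables.
table = pd.crosstab(bios["gender"], bios["function_category"])
print(table)

# Chi-squared test of independence; with small counts in a 2x2 table,
# Fisher's exact test is the safer choice.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p-value = {p:.3f}")

if table.shape == (2, 2):
    _, p_exact = fisher_exact(table)
    print(f"Fisher's exact test p-value = {p_exact:.3f}")
```

The numbers alone would not make an essay, of course: the point of the criteria above is that the table and the test have to be explained, contextualized, and confronted with qualitative evidence.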

Many of the essays were excellent and, with a little polish, could easily have made their way into scientific journals. A few students said that they were interested, but it never happened – they had many other things to do in the following semesters. Still, we learned a lot about prosopography (and about the Parliament, the workers’ movement, and the Righteous among the Nations), and we think that they learned a lot not just about quantification, but also about the research process generally.


Author: Claire Lemercier

CNRS research professor of history in Paris, at the Centre de sociologie des organisations
