Memory is inextricably linked to repetition: a memory exists only to the extent that it is recalled, and it becomes socially significant only when it is shared through media across various platforms.
The key to the study of cultural memory is the identification of reiterations over time, by means of various cultural artefacts and through various media platforms, of references to particular names and events. Computational methods hold the promise of allowing us to trace such patterns in a more systematic way and on a larger scale.
The digitization of large corpora of analogue materials (texts, images) and the accessibility of born-digital resources (such as social media platforms and Wikipedia) have made this possible, as has the development of tools for computer-assisted text analysis, image recognition, and network analysis.
Google Ngram Viewer provides a graph showing how words (e.g. the name of an author) have recurred in different corpora of books (such as “English Fiction”, “Latin”, “British English”) over a selected period of time. In the domain of memory, it can be used to observe changes in the frequency of references to historically important people and events. You can, for instance, search for terms such as Anne Frank or the Berlin Wall, or compare the occurrence of synonyms to detect linguistic changes.
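What the Ngram Viewer plots for each year is a relative frequency: the number of matches for a phrase divided by the total number of words in the corpus that year. A minimal sketch of that calculation, using invented counts (the figures below are hypothetical, not real Google Books data):

```python
# Hypothetical yearly match counts for a phrase, and hypothetical
# total word counts for the corpus in the same years.
anne_frank_counts = {1950: 120, 1960: 980, 1970: 2400}
corpus_totals     = {1950: 1_000_000, 1960: 1_400_000, 1970: 2_000_000}

def relative_frequency(counts, totals):
    """Return {year: frequency}, the quantity the Ngram Viewer plots."""
    return {year: counts[year] / totals[year] for year in counts}

freqs = relative_frequency(anne_frank_counts, corpus_totals)
print(freqs)
```

Normalising by corpus size is what makes years comparable: raw match counts would rise simply because more books were published.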
Another free tool is Google Trends, which lets you visualise and investigate trends in people’s search behaviour over time. Google Trends can thus be a useful way of exploring how we remember, as well as what has historically been the centre of attention. This can be explored further by comparing how different points of interest have been reflected in newspapers using Smurf, a tool that visualises changes in language use in Danish newspapers since the 18th century. How are certain events reflected later on? How do media archives remember things, persons, and events?
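Google Trends does not report absolute search volumes; it rescales interest to a 0–100 range, where 100 marks the peak within the chosen period. A sketch of that normalisation over made-up weekly volumes (the raw numbers are invented for illustration):

```python
def trends_scale(volumes):
    """Rescale raw volumes so the period's maximum becomes 100,
    mirroring how Google Trends presents search interest."""
    peak = max(volumes)
    return [round(100 * v / peak) for v in volumes]

weekly_searches = [120, 450, 900, 300]  # hypothetical raw search volumes
print(trends_scale(weekly_searches))    # the peak week scores 100
```

This is worth keeping in mind when interpreting the graphs: a declining curve means declining relative interest, not necessarily fewer searches in absolute terms.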
Language also creates memory through semantics. The Macroscope, built on Google Books data, allows you to explore historical changes in language. Semantic drift shows how words acquire new meanings over time. Beyond semantic change, shifts in a word’s typical contexts can signal changes in general attitudes or movements in society. Think, for example, of the words “web” and “gay” and the changes in their usage. You can also explore the sentiment values of different terms. Does change in vocabulary lead to changes in our collective or cultural memory, and in how we perceive history?
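One common way to quantify semantic drift is to train word embeddings on texts from different periods and compare the same word's vectors: low cosine similarity suggests the word's typical contexts have changed. A toy sketch with tiny invented vectors standing in for real period embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Made-up 3-dimensional stand-ins for embeddings of "web" in two eras:
web_1950 = [0.9, 0.1, 0.0]   # hypothetical: spider-related contexts dominate
web_2000 = [0.1, 0.2, 0.95]  # hypothetical: internet-related contexts dominate

drift = 1 - cosine(web_1950, web_2000)
print(f"semantic drift of 'web': {drift:.2f}")
```

Real studies use high-dimensional embeddings trained per decade and must first align the vector spaces, but the drift measure itself is this simple comparison.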
Reviews posted on social media venues such as Goodreads can, as shown by Tangherlini et al. (2019), offer a window onto reader responses to fiction; they can be especially informative about what “sticks” in readers’ memory after reading a novel. Tangherlini et al. used machine learning to generate a consensus model of literary fiction from thousands of Goodreads reviews of Frankenstein (1818), Of Mice and Men (1937), The Hobbit (1937), and To Kill a Mockingbird (1960). Although generating such a model is not a trivial exercise, it illustrates how much information can in fact be extracted from reader reviews.
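The full consensus model requires serious machine learning, but the underlying intuition can be sketched simply: count which content words recur across many independent reviews of the same novel, treating recurrence as a proxy for what "sticks". The reviews and stopword list below are invented for illustration, not taken from Goodreads or from Tangherlini et al.:

```python
from collections import Counter

# Invented mini-reviews of a Frankenstein-like novel:
reviews = [
    "the monster is lonely and the creator is cruel",
    "a lonely monster rejected by his creator",
    "science gone wrong and a monster seeking sympathy",
]
stopwords = {"the", "is", "and", "a", "by", "his", "gone"}

doc_freq = Counter()
for review in reviews:
    # count each word at most once per review (document frequency)
    doc_freq.update(set(review.split()) - stopwords)

# words appearing in a majority of reviews form a crude "consensus"
consensus = [w for w, n in doc_freq.most_common() if n >= 2]
print(consensus)
```

Here "monster", "lonely", and "creator" surface as the shared residue of reading; scaled to thousands of reviews, this kind of aggregation is what makes reader memory observable.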
Literary production continually contributes to the making of cultural memory: what is remembered, and in which way. Linguistic and cultural phenomena are intertwined, and a big-data approach allows large-scale changes and cultural trends to be explored quantitatively, what Michel et al. (2011) call ‘culturomics’. The corpus collected by the research group, based on digitized Google Books data, makes it possible to observe how historical events, such as the World Wars or the invention and spread of new technological devices, appear in a literary context.
To customise your exploration of Google Books, retrieve data with Python from the Culturomics project. You can even download the whole Google Books Ngram dataset, or parts of it, for larger-scale analysis. Get inspired by the way Michel et al. (2011) make visible the appearance of epidemics, periods of censorship, and even trends in food culture. Do your own ngram searches, visualise the data, and reflect on your observations about historical events and how cultural memory is constructed.
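The downloadable Ngram files are, to the best of our knowledge, tab-separated lines of the form `ngram<TAB>year<TAB>match_count<TAB>volume_count`. A sketch of filtering such a file for one term and collecting its yearly match counts (the sample lines below are invented, not real dataset rows):

```python
# Invented sample in the tab-separated Ngram file layout:
sample = """Frankenstein\t1950\t180\t90
Frankenstein\t1951\t210\t101
Dracula\t1950\t400\t150"""

def yearly_counts(lines, term):
    """Collect {year: match_count} for one ngram from raw dataset lines."""
    counts = {}
    for line in lines.splitlines():
        ngram, year, matches, _volumes = line.split("\t")
        if ngram == term:
            counts[int(year)] = int(matches)
    return counts

print(yearly_counts(sample, "Frankenstein"))
```

For a real run you would stream the (very large) gzipped files line by line rather than loading them into memory, but the parsing step stays the same.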
Another valuable resource is Wikipedia, whose archives are also available for download. The data can be used to explore which periods of time receive the most coverage and whether there are differences across languages. Following a guide to the Wikipedia API and the methods used by Samoilenko et al. (2017), build your own corpus of historical events and analyse how they are represented in different languages. Compare article lengths, and use named entity recognition to compare the people and places mentioned, to see how history is remembered in a collective, open-source encyclopedia. See also Contropedia and related projects for analysing how cultural heritage and collective memory are socially negotiated on Wikipedia.
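A cross-language comparison of this kind can be sketched in a few lines. The MediaWiki API endpoint pattern below is real; the article lengths are mocked so the example runs offline (in practice you would fetch each URL and measure the returned extract):

```python
def api_url(lang, title):
    """Build a MediaWiki API query for an article's plain-text extract."""
    return (f"https://{lang}.wikipedia.org/w/api.php"
            f"?action=query&prop=extracts&explaintext=1&format=json"
            f"&titles={title}")

# Invented character counts for one event's article across editions:
lengths = {"en": 54000, "de": 78000, "fi": 21000}

# rank language editions by how much space they give the event
ranked = sorted(lengths, key=lengths.get, reverse=True)
print(ranked)
```

Even this crude length comparison already hints at the questions Samoilenko et al. pursue: which communities devote the most attention to which parts of history.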
Google Books Ngram Viewer, an online search engine that plots the yearly frequencies of any set of search strings (n-grams).
Culturomics, a project exploring cultural trends through Google Books Ngrams.
A GitHub repository for retrieving Google Book Ngram data.
Instructions for setting up a Wikipedia API for Python.
Smurf, visualising language from Danish newspapers.
Li, Y., Engelthaler, T., Siew, C. S., & Hills, T. T. (2019). The Macroscope: A tool for examining the historical structure of language. Behavior Research Methods, 51(4), 1864-1877. https://doi.org/10.3758/s13428-018-1177-6
Michel, J. B., Shen, Y. K., Aiden, A. P., Veres, A., Gray, M. K., Pickett, J. P., ... & Aiden, E. L. (2011). Quantitative analysis of culture using millions of digitized books. Science, 331(6014), 176-182. https://doi.org/10.1126/science.1199644
Pentzold, C., Weltevrede, E., Mauri, M., Laniado, D., Kaltenbrunner, A., & Borra, E. (2017). Digging Wikipedia: The online encyclopedia as a digital cultural heritage gateway and site. Journal on Computing and Cultural Heritage, 10(1), 1-19. https://doi.org/10.1145/3012285
Rogers, R. (2013). Wikipedia as cultural reference. In Digital methods (pp. 165-202). The MIT Press. https://doi.org/10.7551/mitpress/8718.003.0009
Samoilenko, A., Lemmerich, F., Weller, K., Zens, M., & Strohmaier, M. (2017). Analysing timelines of national histories across Wikipedia editions: A comparative computational approach. In Proceedings of the International AAAI Conference on Web and Social Media, 11(1). https://arxiv.org/abs/1705.08816v1