
Research Scenario CLARIAH-VL: Mapping flood damages in the past (part I)

Introducing the case of the Belgian Historical Gazetteer

The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in Humanities and the Arts by, among other things, providing data-level access to digitized and born-digital resources. In this blogpost series, we will communicate on research scenarios leading to and building upon the datasets made available through CLARIAH-VL. This blogpost introduces the research scenario "Mapping flood damages in the past using the Belgian Historical Gazetteer". The two following posts will present the methodology and the results.

Introduction

Our societies are facing major meteorological disasters, both because the climate is changing rapidly and because societies are no longer prepared to face them: many people live in risk-prone areas at a time when the risk itself is culturally no longer accepted. Recent research has shown that past societies used to be more resilient to adverse weather events (Soens 2018; De Keyzer et al. 2024), at least to the non-exceptional ones. Thinking about the everyday resilience of past societies to such events means taking stock of past bad weather: how frequent were these events? What were their causes? Which places were the most affected? This can be difficult because such events either left little trace in the documentation (because they were common) or left so many traces (because they were numerous) that they are difficult to manage. However, digital tools can help researchers in both cases. They can, for instance, greatly facilitate the visualization of specific natural disasters by automating the mapping of (forgotten) affected places. This research scenario aims to show how this can be done.

Dataset

For this research scenario, we will focus on one specific source, the "Gazet van Antwerpen", a daily newspaper published in Flanders since 1891. Among other subjects, this newspaper reports the occurrences and consequences of bad weather across Belgium and abroad, but especially in Flanders. The newspaper has been digitized and is available online from 1911 onwards on BelgicaPress, the online newspaper database of the Royal Library of Belgium (KBR).

Figure 1 – First page of the Gazet van Antwerpen (BelgicaPress)

For this case study, we will look for every mention of the phrase "onder water" ("under water" in English, as in "this hamlet was under water because of the storm") used in this newspaper from 1911 to 1921. We will select the articles that use this expression in connection with storms or floods that took place in the provinces of Antwerp and East Flanders.

Research Question

For this research scenario, we would like to obtain a visualisation of the damage caused by (non-exceptional) storms and floods in the provinces of Antwerp and East Flanders, in order to take the measure of these events for each year between 1911 and 1921. The first step is, of course, to be able to map the places mentioned in those articles. Our research questions are therefore: where are the places that were impacted by storms and floods in the provinces of Antwerp and East Flanders between 1911 and 1921, and to what extent were they affected?

Challenges

The challenges are numerous if one wants to automate, at least partly, the working process. The biggest are 1) that information on storms and floods needs to be aggregated into locations, which 2) need to be disambiguated and linked to an existing spatial database for subsequent mapping within a Geographic Information System.

Solutions

This research scenario will make use of the tools provided by SIC 4 (Aggregate) to (semi)automatically map the locations mentioned in the selected newspaper articles.

This includes, specifically, the Belgian Historical Gazetteer, a historical gazetteer of toponyms. Its main goal is to provide researchers with a collection of data that 1) does not stop at Belgian provincial borders, 2) goes beyond the level of municipalities and 3) does not stop at the 19th century but goes deeper into the past.

Next Steps (and next blogposts)

  • Building the dataset and identifying locations (blogpost part II)
  • Visualizing flood damage (blogpost part III)

References

Soens, Tim. 2018. "Resilient Societies, Vulnerable People: Coping with North Sea Floods Before 1800", Past & Present, 241 (1): 143–177. https://doi.org/10.1093/pastj/gty018

De Keyzer, Maïka, Tim Soens and Christophe Verbruggen. 2024. Mens en natuur: een geschiedenis. Gent: Academia Press, 313 p.

Diff Annotator: An Annotation Tool for Text Comparisons

Diff Annotator

Diff Annotator, developed at the University of Antwerp as part of eXtant, a toolkit for digital scholarly editing, is a lightweight tool for annotating text-comparisons of two plain text files. It was initially created for a specific use case, a research project that included a comparative close reading of two versions of Stephen King’s novel IT. It was programmed to facilitate an analysis of the mechanics of suspense throughout King’s revision campaigns during the novel’s composition process.

How Does Diff Annotator Work?

The tool provides a reading environment that presents the variation in a pleasing way and enables users to take notes and attach annotations to variants. As the name suggests, the Diff Annotator uses the ‘git diff’ command from git (an open-source distributed version control system) for the collation process itself. It visualizes invariant text in black and the variants from the first version in red and the second version in green. The collation is accurate, and the output is clear. Equally important to the collation itself, however, is Diff Annotator’s mechanism to note down what one notices in the variants: to make general notes about revision patterns that one might discover along the way and to attach annotations to particular variants, flagging and tallying those patterns.

Technical Specifications and Installation

Diff Annotator's simple web-based content management system is a lightweight Node.js application. To add a new text comparison, users click on the '+ NEW DIFF' button, supply a name for the comparison and then select two plain text files from their computer (file formats other than .txt are not supported, and only two versions can be compared). The output generated by git diff is transformed to HTML by a Python script that adds unique identifiers to all variants. The adding of notes and annotations is handled by a combination of Python scripts (which make the changes to the HTML files) and client-side JavaScript functions (which update the visualisation in the browser as the user works).
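To make this architecture concrete, here is a minimal sketch (not Diff Annotator's actual script) of how word-level git diff output, obtained with `git diff --word-diff=plain`, could be wrapped in spans with unique identifiers; the class names and id scheme mirror the markup shown further below, while the parsing approach itself is an assumption:

```python
import re

# Illustrative only: convert `git diff --word-diff=plain` output into HTML spans
# similar in spirit to Diff Annotator's markup.
def word_diff_to_html(diff_text: str, start_id: int = 1) -> str:
    app_id = start_id
    html_parts = []
    # [-removed-] = text only in the first file, {+added+} = text only in the second
    pattern = re.compile(r"\[-(.*?)-\]\{\+(.*?)\+\}|\[-(.*?)-\]|\{\+(.*?)\+\}", re.S)
    pos = 0
    for m in pattern.finditer(diff_text):
        html_parts.append(diff_text[pos:m.start()])   # invariant text stays as-is
        f1 = m.group(1) or m.group(3) or ""
        f2 = m.group(2) or m.group(4) or ""
        html_parts.append(
            f'<span class="app" id="app-{app_id}">'
            f'<span class="f1">{f1}</span><span class="f2">{f2}</span></span>'
        )
        app_id += 1
        pos = m.end()
    html_parts.append(diff_text[pos:])
    return "".join(html_parts)

print(word_diff_to_html("Say [-Hello-]{+Hi+} to the reader."))
```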

The automatic collation output can be corrected in three ways:

1) by clicking on a variant and making changes in the text fields marked ‘f1’ (the first text file) and ‘f2’ (the second text file)

2) by clicking the ‘Normalize to f2’ button to remove a variant and insert the variant from f2 as normal text

3) by editing the source code, calling up the source HTML of a selection of paragraphs (all variants are contained in a <span class="app">, for instance: <span class="app" id="app-652"><span class="f1">Hello</span><span class="f2">Hi</span></span>).

Diff Annotator supplies statistics for every paragraph on the increase or decrease in words between the versions. '[inv.:19 — f1:14 — f2:12 (-6.06%)]', for instance, describes the collation of a paragraph with 19 invariant words, 14 variant words in the first text file, compared to only 12 variant words in the second text file, which comes down to a decrease of 6.06% in total word count between the versions.
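For clarity, the percentage in that example can be reproduced with a few lines of Python; this is a minimal illustration of the calculation described above, not code taken from the tool:

```python
def word_count_change(invariant: int, f1_variants: int, f2_variants: int) -> float:
    """Relative change in total word count between the two versions of a paragraph."""
    total_f1 = invariant + f1_variants   # words in the first version
    total_f2 = invariant + f2_variants   # words in the second version
    return (total_f2 - total_f1) / total_f1 * 100

# [inv.:19 — f1:14 — f2:12 (-6.06%)]
print(round(word_count_change(19, 14, 12), 2))  # -6.06
```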

Usage and Customization

Users of Diff Annotator will need some technical skill to install it, since a few dependencies must be installed first (Git, Node.js and Python). Text comparisons can be easily transported from one installation of the Diff Annotator to another: simply copy the relevant folder from the 'Diff-Annotator/public/data' directory to the same directory in another installation.

The tool does not come with a user management system; it is intended to be used as local software on an individual’s computer. But since the application is open source, users are free to add such a feature to it or set it up in such a way that it can be used safely from a shared server. If so desired, a user with a technical background could remove the interactive functionality (adding new text comparisons, adding or changing annotations and notes) and publish the collection of annotated text comparisons on the web.

The source code of this application can be downloaded from:

https://github.com/eXtant-CMG/Diff-Annotator

Research Scenario: Unlocking Born-Digital Literary Heritage II

Second blogpost on the case of the Herman de Coninck floppy disks


The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in Humanities and the Arts by, among other things, providing data-level access to digitized and born-digital resources. In this blogpost series, we will communicate on research scenarios leading to, and building upon, the datasets made available through CLARIAH-VL. This is the second blogpost on the research scenario ‘Unlocking born-digital literary heritage: the case of the Herman de Coninck floppy disks’, in which we reflect on the usefulness of computational methods for discovering the contents of the born-digital files of Herman de Coninck.

Aim of the research scenario

As mentioned in the previous blogpost, the Letterenhuis acquired 218 floppy disks (5¼- and 3½-inch) of the prominent Belgian poet, essayist, journalist, and publisher Herman de Coninck (1994-1997) in 1998, and more than 25 years later, the content of the digital files has never been analysed or compared with the paper archive on item level. The aim of the research scenario is, therefore, to partially analyse and discover the content of the digital files stored on the floppy disks by describing their contents, creating sub-datasets that group related files and linking the files with related files in the paper archive. While the full dataset of all the files held on the floppy disks could be used to answer various research questions relevant to biographical research, textual scholarship and genetic criticism, the first research question driving this research scenario relates to the initial access to the files: What is the content of each of the files on the disks, and what digital tools and methods can be used to reveal it?

Floppy disk of Herman de Coninck at the Letterenhuis

Steps taken

In 2018, the Letterenhuis took the first step in making disk images of the floppy disks. These disk images are not just copies of all the files, but are a literal representation of every bit of information on the floppy disks, to ensure that the original data is preserved. In total, the floppy disks contain over 1300 files (including .wpd and .doc format), which were converted by the Letterenhuis to .txt and .pdf for further processing. In some cases, the conversion rendered some characters incorrect (e.g. ‘ë’ instead of ‘é’) or illegible, but most files are perfectly suitable for further analysis. Several steps were then taken to gain more insight into the content of these plaintext files: 1) preprocessing, 2) filtering of correspondences, 3) content review, and 4) identifying corresponding files.

1) Preprocessing

Firstly, we ran a script to check whether the dataset contained duplicate files; ultimately, we found about 25 duplicates. We then conducted an initial analysis to discern patterns in the file content. Initial observations indicated the presence of recurring elements, such as files beginning with "De vliegende keeper" followed by the column title. We therefore attempted to extract the column name (e.g., 'De vliegende keeper'), the column title, and the author, which is crucial given that the files also include contributions from authors of the Nieuw Wereldtijdschrift (NWT) under the editorship of De Coninck. To maximize data extraction efficiency, various exceptions were incorporated into the script. For files with significantly different structures, manual correction was more time-efficient than modifying the script. Adjustments for line breaks, capitalization, and similar variations were integrated into the optimization process. During optimization, a subset of files was manually annotated and used to compare and refine the script's accuracy.
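A pattern-based header extraction of this kind could look like the following minimal sketch; the regular expression, the field names and the sample text are illustrative assumptions, and the project's actual script is more elaborate and handles many exceptions:

```python
import re

# Illustrative pattern: a file that starts with the column name
# ("De vliegende keeper"), followed by a title line and an author line.
HEADER = re.compile(
    r"^(?P<column>De vliegende keeper)\s*\n"
    r"(?P<title>.+?)\s*\n"
    r"(?P<author>.+?)\s*\n",
    re.IGNORECASE,
)

def extract_header(text):
    match = HEADER.match(text.lstrip())
    return match.groupdict() if match else None

sample = "De vliegende keeper\nEen hypothetische titel\nHerman de Coninck\n\n..."
print(extract_header(sample))
```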

2) Filtering of correspondences 

A large proportion of the files contained personal correspondence, and we wanted to separate these personal files from the work-related files. They may contain personal information and will have to go through a more extensive sensitivity review at a later stage. We employed pattern recognition to filter out correspondence. De Coninck's letters typically begin with a salutation, such as 'geachte' or 'beste'. The script therefore identifies files with the most common salutations in Dutch, English and German at the beginning of a line and categorises them as correspondence. The remainder of the files were subjected to a quick manual inspection to filter out personal files without salutations. As such, we excluded 838 files from further analysis. This is, of course, a very bare-bones approach and not a very strong parameter for a more comprehensive sensitivity review; it needs refinement for further applications.
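A minimal version of such a salutation filter could look like this; the list of salutations is an illustrative subset, not the project's full list:

```python
import re

# Common Dutch/English/German salutations at the start of a line (illustrative subset).
SALUTATIONS = ("geachte", "beste", "dear", "lieber", "liebe", "sehr geehrte")
SALUTATION_RE = re.compile(r"(?mi)^\s*(" + "|".join(SALUTATIONS) + r")\b")

def looks_like_correspondence(text: str) -> bool:
    """Heuristic: a file is treated as correspondence if any line starts with a salutation."""
    return SALUTATION_RE.search(text) is not None

print(looks_like_correspondence("Beste Jan,\n\nBedankt voor je brief."))   # True
print(looks_like_correspondence("De vliegende keeper\nOver poëzie ..."))   # False
```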

3) Content review

We then turned our attention to Named Entity Recognition (NER) to get a general overview of the content of the files. The NER script used the nl_core_news_lg model by spaCy to extract person and place names from the text files. The results were compiled into separate text files and a CSV file with term frequencies and categories (person or place), alongside their occurrences in the dataset. It was necessary to manually clean the results to remove non-entity nouns, as uppercase initials at the beginning of sentences caused some false positives to be detected. The list still contains some errors and false positives, but the 3490 entries should provide a solid starting point for further research, such as: in which essays and in what way De Coninck discussed certain people or places – and whether this was subjected to any revision during the writing process.
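In outline, the NER step can be reproduced with spaCy's nl_core_news_lg model roughly as follows; this is a sketch rather than the project's exact script, and the input folder, the entity labels to keep and the CSV layout are assumptions:

```python
from collections import Counter
from pathlib import Path
import csv
import spacy

nlp = spacy.load("nl_core_news_lg")          # Dutch model mentioned in the text
counts = Counter()

for txt_file in Path("floppy_texts").glob("*.txt"):   # assumed input folder
    doc = nlp(txt_file.read_text(encoding="utf-8"))
    for ent in doc.ents:
        # entity labels of interest; exact label names depend on the model version
        if ent.label_ in ("PER", "PERSON", "LOC", "GPE"):
            counts[(ent.text, ent.label_)] += 1

with open("entities.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["entity", "category", "frequency"])
    for (entity, label), freq in counts.most_common():
        writer.writerow([entity, label, freq])
```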

4) Identifying corresponding files

The objective of this step was to identify files that corresponded to each other across De Coninck’s archive and publications, which included:

  1. The floppy disk files;
  2. Printouts of the files in De Coninck's paper archive at the Letterenhuis;
  3. Newspaper clippings in De Coninck's paper archive at the Letterenhuis;
  4. Published columns in the essay collections De flaptekstlezer (1992), Intimiteit onder de melkweg (1994) and De vliegende keeper (1995).

To facilitate the comparison of these sources, we first digitized the materials. We made use of an existing EPUB version of the essay collections. For the printouts and newspaper clippings, we employed Optical Character Recognition (OCR) using Python-tesseract. Once all sources had been digitized, we proceeded to compare their contents using similarity ratios and to list the top 5 similarities. For instance, comparing published columns with floppy disk files revealed a 0.90 similarity ratio between the column "Poëzie en toeval" and the file "DICKEY_74_968.txt", which might suggest a strong correspondence.
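Such similarity ratios can be computed with Python's standard library, for example with difflib; this is a minimal sketch of the comparison step, and the project may well use a different similarity measure:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio between two texts (0 = no overlap, 1 = identical)."""
    return SequenceMatcher(None, a, b).ratio()

def top_matches(target: str, candidates: dict, n: int = 5):
    """Return the n candidate texts most similar to the target."""
    scored = [(name, similarity(target, text)) for name, text in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:n]

# Hypothetical usage: compare one published column against all floppy disk files.
# floppy_files = {"DICKEY_74_968.txt": "...", ...}
# print(top_matches(published_column_text, floppy_files))
```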

Next Steps

By means of computational methods such as NER and similarity checks, we could gain some general insights into the content of the files on De Coninck’s floppies. The next and final blogpost within this series will provide further details on the dataset with the file titles related to De Coninck’s columns published under the title “De vliegende keeper” in De Morgen.

Link to the dataset

The CSV file containing the persons and places mentioned in the files stored on the floppy disks (with the exception of De Coninck's personal communication) is made publicly available on Zenodo through CLARIAH-VL; the files from the floppies can, after approval, be consulted at the Letterenhuis.

References

Bekius, L., & Thijs, J. (2024). What does that little black square store? The contents of Herman de Coninck’s floppy disks in the Letterenhuis. DH Benelux 2024, Leuven, Belgium. Zenodo. https://doi.org/10.5281/zenodo.11401905

Written by Lamyk Bekius & Jordan Thijs

Bibundina: a Writer’s Library app

The Writer's Library application "Bibundina", developed at the University of Antwerp as part of eXtant, a toolkit for digital scholarly editing, aims to provide an environment both to create and to publish an edition of a collection of books and the reading traces in them. It was originally created to publish an edition of a writer's library. It could for instance be used to make an edition of Virginia Woolf's personal library, or Toni Morrison's, or James Joyce's: a digital reconstruction of the books that occupied their shelves. Such an edition can contain not just the bibliographic details of those books, but also focus on the marginalia left in them by the writers, as well as include information about the books' provenance and when they were read.

Target audience

The project's target audience consists of people who have an interest in creating such an edition of a library and are looking for a publication environment. The only technical skill users will need is an understanding of XML: filling in an XML document following a well-documented schema. Bibundina was written as an application in eXist-db, an open-source NoSQL XML database and application platform. Bibundina can be easily installed via eXist's "package manager" by dropping a compressed version of the app into the dropzone. Included with the eXist installation is eXide, eXist-db's code editor. Users can edit the Writer's Library app's XML files in eXide and immediately see their work visualized in the publication environment. There are just two XML files to edit: a config file in which users can give their edition a name and select which sorting and browsing categories to use, and a file called "library.xml", the main data file in which to encode the books.

Images

Images of the books can be added. The app accommodates three different ways of incorporating images, including IIIF, which will allow scholars to create editions based on their own research data or on collections of images that have been made available via IIIF at institutions all around the world. In the book view, users can browse through the images of the pages the editor wishes to include, cover and title page, for instance, and all pages with reading traces.

Reading traces

For pages with reading traces, zone numbers appear in the left margin of the image, at the same level as their corresponding trace. Clicking on a zone number will activate two pop-up windows on the right: a cropped picture of the reading trace in question and a text box with a transcription of the reading trace, the marked passage and an extract of the passage in context.

Some admin tools have been embedded into the interface to facilitate data input. There is a tool to help editors create a new book entry, a tool to extract image links from a IIIF manifest and format them into the required XML schema, and a tool that assists admins with drawing rectangular zones around reading traces on IIIF images.

Available on GitHub: the source code, the current binary release, the documentation, and the user manual.

https://github.com/eXtant-CMG/writerslibrary

Research Scenario: Spoken Academic Belgian Dutch (SABeD)

Introducing the case of Spoken Academic Belgian Dutch

The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in Humanities and the Arts by, among other things, providing data-level access to digitized and born-digital resources. In this blogpost series, we will communicate on research scenarios leading to and building upon the datasets made available through CLARIAH-VL. This blogpost introduces the research scenario ‘Spoken Academic Belgian Dutch (SABeD)’.

Introduction

In higher education, students are confronted with academic language use, with which they are often not familiar. Since academic language skills are a necessary condition for study success, higher education institutions in Flanders and the Netherlands focus on language support for students. In many institutions, these efforts have evolved into formal, embedded language policies, but research into their implementation is limited (Bonne & Casteleyn, 2022). Research (Deygers, 2017; Deygers et al., 2017) shows that Dutch language learners struggle with academic spoken Dutch, even when they passed the university entrance language tests, ITNA or CNaVT. Although academic listening is part of the test, learners indicate that the listening tasks in the language entrance tests are easier than actual lectures (Deygers et al., 2018). Linguistic features of the listening task in the test have not been empirically validated because of the lack of a corpus of spoken academic Dutch. This is one of the main reasons for building a corpus of spoken academic Belgian Dutch, which consists of (recordings of) academic lectures. Lectures are typical of higher education, and due to the covid pandemic, recorded video lectures are available in abundance.

SABeD

Dataset

We selected academic lectures, because these constitute the predominant form of instruction in higher education institutions in Flanders, especially in the first bachelor year. Lectures are defined as instructional discourse given before an audience of at least 40 students, in which the lecturer is the dominant speaker and the level of interactivity is modest to low. We chose lectures for first year bachelor students, as both native speakers and foreign learners of Dutch indicate that the language used in lectures is one of the hurdles for comprehension and academic success (Deygers, 2017; Deygers et al., 2017). First year bachelor lectures also constitute the first encounter of the target group (i.e., Flemish first bachelor students and international students commencing university education in Belgian-Dutch) of our corpus with spoken academic Dutch. As such, these lectures make up a solid basis for our corpus compilation, especially considering that we cannot be certain if, and to what degree, the language of lectures in later bachelor and master years differs from that in the first bachelor. It is also important to take into account the primary pedagogical goal of the corpus, i.e., developing learning materials for students entering Flemish higher education.

Research question

One of the main motivations of the project is to create empirically validated vocabulary lists of spoken academic Belgian Dutch, in order to lower the barrier to entering university education by providing learning material for students.

Challenges

  1. The first challenge was to obtain approval from the Social and Societal Ethics Committee of the University of Leuven for publication of the collected data, even after having obtained written permission from the speakers in the data.
  2. Another challenge was to actually obtain the videos. Due to the COVID-19 pandemic, recorded courses were abundantly available, but there was no technically feasible way to bulk-download them.
  3. Transcription of audio is a slow and expensive process.
  4. Linguistic processing of spoken language is quite different from the processing of written language.
  5. The corpus needs to be made searchable, and frequency lists which provide information about coverage need to be compiled.

Solutions

  1. Permission was only obtained for anonymized data, in the sense that all mentions of course and professor names have to be removed or masked in the data. This is doable for the data that is manually transcribed, as these mentions are marked, and automatic audio muting for segments with a mention is applied.
  2. The videos had to be manually downloaded one by one.
  3. In order to speed up the transcription process, automated speech recognition (Van Dijck et al., 2021) was applied, and the output was manually corrected.
  4. In order to obtain word lists, we need to work with lemmas, i.e. the dictionary forms of words rather than their surface forms, so shallow linguistic processing was performed. A first step consisted of automated punctuation insertion, for which a specific module was developed (Vandeghinste & Guhr 2023), and next, part-of-speech tagging with Frog (van den Bosch et al. 2007) was applied (a minimal sketch of building a lemma frequency list from such output follows this list).
  5. The CLARIN Autosearch infrastructure (de Does et al. 2017) allows for uploading a corpus with annotations and makes it searchable with corpus query language. This can be shared with any user with a CLARIN account upon request. Autosearch also allows for frequency list extraction.
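As mentioned in step 4 above, word lists are built from lemmas. A lemma frequency list with cumulative coverage can be sketched as follows; this generic illustration assumes a plain sequence of lemmas as input (for instance extracted from the tagger's output) and is not the actual SABeD pipeline:

```python
from collections import Counter

def lemma_frequencies(lemmas):
    """Build a frequency list and cumulative coverage from a sequence of lemmas."""
    counts = Counter(lemmas)
    total = sum(counts.values())
    coverage, rows = 0.0, []
    for lemma, freq in counts.most_common():
        coverage += freq / total
        rows.append((lemma, freq, round(coverage * 100, 2)))  # cumulative coverage in %
    return rows

# Hypothetical input: lemmatized tokens from the corpus.
tokens = ["de", "student", "volgen", "de", "les", "de", "les", "zijn", "moeilijk"]
for lemma, freq, cov in lemma_frequencies(tokens):
    print(f"{lemma}\t{freq}\t{cov}%")
```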

Next steps

The following blogposts within this series will detail the next steps taken as part of this research scenario. This will include:

  • A first version of the dataset (containing only the written data) has already been deposited and published by the CLARIN-B Centre INT and can be downloaded at: https://hdl.handle.net/10032/tm-a2-w4. Metadata has been harvested by the CLARIN Virtual Language Observatory. A version which is publicly accessible for anyone with a CLARIN login is work in progress. A detailed description of the construction of the corpus can be found in: Jolien Mathysen, Vincent Vandeghinste, Elke Peters and Patrick Wambacq (2024). Constructing SABeD: A Spoken Academic Belgian Dutch Corpus. CLARIN2023: Selected Papers. https://doi.org/10.3384/ecp210001
  • The creation and publication of a specific Belgian Dutch spoken academic terminology list is ongoing.

References

Bonne, P., & Casteleyn, J. (2022). Taalbeleid en taalondersteuning: Op zoek naar een gedeelde basis en strategie voor implementatie. Tijdschrift voor Onderwijsrecht en Onderwijsbeleid, 4, 279–293. 

de Does, J., Niestadt, J., & Depuydt, K. (2017). Creating Research Environments with BlackLab. In CLARIN in the Low Countries. Ubiquity Press.

Deygers, B. (2017). Validating university entrance policy assumptions. Some inconvenient facts. In E. Gutíerrez Eugenio (Ed.), Learning and Assessment: Making the Connections – Proceedings of the ALTE 6th International Conference (pp. 46–50). Cambridge: ALTE. 

Deygers, B., Van den Branden, K., & Peters, E. (2017). Checking assumed proficiency: comparing L1 and L2 performance on a university entrance test. Assessing Writing, 32, 43–56.

Deygers, B., Van den Branden, K., & Van Gorp, K. (2018). University entrance language tests: A matter of justice. Language Testing, 35, 449–476

Vandeghinste, V., & Guhr, O. (2023). FullStop: Punctuation and Segmentation Prediction for Dutch with Transformers. Language Resources and Evaluation. Springer. https://doi.org/10.1007/s10579-023-09676-x

Van den Bosch, A., Busser, G., Daelemans, W., & Canisius, S. (2007). An efficient memory-based morphosyntactic tagger and parser for Dutch. In Selected Papers of the 17th Computational Linguistics in the Netherlands Meeting (pp. 99–114).

Van Dyck, B., BabaAli, B., & Van Compernolle, D. (2021). A Hybrid ASR System for Southern Dutch. Computational Linguistics in the Netherlands Journal, 11, 27–34.

CLARIN Annual Conference 2024: report from the CLARIAH-VL representatives

The CLARIN Annual Conference 2024 was held in Barcelona from October 15 to 17 and brought together over 200 in-person attendees and nearly 100 virtual participants, establishing it as a central forum for advancing language technology and resource sharing across Europe. Chaired by Vincent Vandeghinste, coordinator of CLARIN-Belgium, the conference explored both theoretical and applied aspects of linguistic resource management and computational processing.

One of the keynote speakers was Maite Melero from the Barcelona Supercomputing Centre, whose talk, “The Future of Language (and Cultural) Diversity in the Age of AI,” addressed AI’s complex impact on linguistic diversity. Melero discussed AI’s dual potential: while it can support endangered languages, it also risks reinforcing the dominance of major languages and perpetuating biases. Her presentation highlighted the necessity for careful AI integration in multilingual contexts, with researchers and policymakers playing a crucial role in guiding AI toward linguistic inclusion.

Steven Bird from Charles Darwin University provided a complementary perspective in his address, “Making it Meaningful.” Drawing on his extensive work with under-resourced languages, Bird stressed the importance of community-driven approaches to linguistic preservation, especially through tools that enable local communities to document endangered languages. He argued that such technology should empower communities to preserve their linguistic heritage actively.

The conference program featured more than 20 oral presentations and multiple poster sessions, fostering lively discussions on the latest research and methodologies in the field.

The CLARIAH-VL delegation included Els Lefever, Belgium's representative on CLARIN's User Involvement Committee; Jonas Doumen, who gave a first demo of TextLens; and Tess Dejaeghere, a PhD researcher at Ghent CDH. Tess contributed both to the main conference with an oral presentation and to the PhD session with a poster.

Further details on the conference program, presentations, and proceedings can be found on the official CLARIN Annual Conference 2024 page.

Research Scenario: Newspapers as Data

Introducing the case of digitised historical newspapers

The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in Humanities and the Arts by, among other things, providing data-level access to digitised and born-digital resources. In this blogpost series, we will communicate on research scenarios leading to and building upon the datasets made available through CLARIAH-VL. This blogpost introduces the research scenario ‘Newspapers as Data‘.

Introduction

KBR, Royal Library of Belgium's digitised historical newspaper platform BelgicaPress provides access to over 4.1 million pages of searchable full text from over 135 major Belgian newspaper titles published between 1814 and 1987. These collections have significant potential for digital humanities research. However, platforms such as BelgicaPress are less than ideal for researchers who are looking to build datasets around specific research questions.

Inspired by the 'Collections as Data' movement (Padilla et al., 2019; Candela et al., 2020; Ames, 2021; Candela et al., 2023) as an approach for cultural heritage institutions to prepare their digital collections for analysis using digital methods, CLARIAH-VL has been exploring how to provide data-level access to KBR's digitised and born-digital collections for digital humanities research. 'Collections as Data' enables digital cultural heritage collections to be made available as FAIR (Findable, Accessible, Interoperable and Reusable) datasets.

Dataset

Data-level access to digitised historical newspaper collections, or ‘Newspapers as Data’ has been the focus of a number of ‘Collections as Data’ initiatives, for example at the National Library of Luxembourg, the Digital Library of the Caribbean and CLARIN’s newspaper corpora.

Data-level access means providing access to the underlying files of digitised cultural heritage resources, enabling a fine-grained level of access which will facilitate data analysis by means of tools and methods developed in the field of digital humanities.

Underlying files of digitised historical newspapers

This could include offering access to the METS (Metadata Encoding and Transmission Standard) and ALTO (Analysed Layout and Text Object) files (e.g. in XML or JSON); PDFs of the scanned images (e.g. by newspaper issue or page); and image files, both as lower-resolution JPEGs and as high-resolution TIFF (Tagged Image File Format) images.
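To give an idea of what data-level access to such files enables, the following sketch extracts plain text from an ALTO file; the element and attribute names follow the ALTO standard, but the file name is hypothetical and namespace handling is deliberately simplified:

```python
import xml.etree.ElementTree as ET

def alto_to_text(path: str) -> str:
    """Concatenate the CONTENT attributes of ALTO <String> elements into plain text."""
    tree = ET.parse(path)
    lines = []
    for elem in tree.iter():
        tag = elem.tag.split("}")[-1]          # strip the ALTO namespace
        if tag == "TextLine":
            words = [
                child.attrib.get("CONTENT", "")
                for child in elem
                if child.tag.split("}")[-1] == "String"
            ]
            lines.append(" ".join(words))
    return "\n".join(lines)

# print(alto_to_text("belgicapress_page_0001.xml"))   # hypothetical file name
```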

Research Questions

From a Library and Information Science (LIS) perspective, 'Collections as Data' marks a sea change in how cultural heritage institutions provide access to their digital collections for humanities researchers. This paradigm shift is accompanied by a number of LIS research questions, such as: How can the data files of digitised historical newspapers be sustainably extracted from collection management systems? How can we provide guidelines for curating sample datasets that encourage researchers to create FAIR datasets of their own? How can we measure whether the quality of the OCR (Optical Character Recognition) is sufficient for digital humanities research?

Challenges

To facilitate the process of building these requirements, the CLARIAH-VL project team made use of the "Collections as Data Checklist", which is currently being developed by the International GLAM Labs Community. This emerging checklist, introduced during the Towards implementing Collections as Data in GLAM institutions webinar in October 2022, is intended to provide "an easy to apply method to encourage, especially small and medium-size cultural heritage institutions, to publish their digital collections as 'Collections as Data'". Additionally, the DATA-KBR-BE project team at KBR, Royal Library of Belgium, used the checklist as a framework to help structure the development of the functional and technical requirements for the DATA-KBR-BE platform.

Solutions

Building on the initial webinar in 2022, the CLARIAH-VL team contributed to the International GLAM Labs Community's further development of the "Collections as Data Checklist" (Candela et al., 2023). To increase its usability and robustness within the context of the common European Data Space for Cultural Heritage, the checklist has been transformed into a reproducible research workflow within the Social Sciences and Humanities Open Marketplace. A workflow is a sequence of steps describing how to perform a task within the research data lifecycle (see: How to create a workflow in the SSH Open Marketplace?).

Next Steps

Within the context of CLARIAH-VL, SIC 2: Aggregate is collaborating with the DATA-KBR-BE project team to develop Newspapers as Data. The resulting curated datasets will be published in the Social Sciences and Digital Humanities Archive (SODHA). Where best to publish the humanities research datasets resulting from CLARIAH-VL is currently being explored. Possibilities include the CLARIAH-VL Zenodo Community.

References

Ames, Sarah. 2021. “Transparency, Provenance and Collections as Data: The National Library of Scotland’s Data Foundry.” LIBER Quarterly 31 (1): 1–13. https://doi.org/10.18352/lq.10371

Candela, G., Sáez, M. D., Escobar Esteban, Mp., & Marco-Such, M. (2020). Reusing digital collections from GLAM institutions. Journal of Information Science. https://doi.org/10.1177/0165551520950246 

Candela, G., Gabriëls, N., Chambers, S., Pham T-A., Ames, S., Fitzgerald, N., Hofmann, K., Harbo, V., Potter, A., Ferriter, M., Manchester, E., Irollo, A., Van Keer, E., Mahey, M., Holowinia, O and Dobreva, M. (2023)  A Checklist to Publish Collections as Data in GLAM Institutions. https://doi.org/10.48550/arXiv.2304.02603

Padilla, T., Allen, L., Frost, H., Potvin, S., Russey Roke, E. & Varner, S. (2019). Final Report : Always Already Computational: Collections as Data. http://doi.org/10.5281/zenodo.3152935 & https://osf.io/mx6uk/wiki/home/

The Belgian Historical Gazetteer: an index of Belgian historical place names to link archival collections and explore landscapes histories

Legend: extracts from various sheets of the ‘reduced cadastre’ (©IGN/NGI)

Gazetteers can help historians map the toponyms that appear in the sources they analyse by providing lists of historical place names, together with extra information that can be used to disambiguate them when several different places share the same name. This is the aim of initiatives such as the World Historical Gazetteer or Pleiades. However, despite their merits, the international scope of these tools impedes their use for regional and local historical research. Historians thus need more suitable gazetteers if they want to work at a more local scale.

CLARIAH-VL aims to fill this gap for Belgium by launching the project "Belgian Historical Gazetteer" in the framework of its SIC "Enrich". Started in November 2022, the pilot of this project will last two years, and the project eventually aims to cover the whole country in the coming years. It is hosted at the University of Antwerp and undertaken in liaison with the Belgian National Geographical Institute (IGN/NGI). The aim of this project is to set up a historical gazetteer of toponyms for the whole present-day territory of Belgium, in order to provide researchers with a collection of data that 1) does not stop at Belgian provincial borders, 2) goes beyond the level of municipalities and 3) goes back in time as much as possible.

Pilot test

The first phase of the project (pilot 2022–2024) consists of gathering an old and geographically homogeneous set of toponyms for the provinces of Antwerp, East Flanders and Liège. This is done by collecting all the toponyms mentioned on the 'reduced cadastre', a reduced version of the primitive cadastre drawn between 1847 and 1855 to prepare the creation of the first topographical map of Belgium in the second half of the 19th century.

Workflow of the 'reduced cadastre' (©IGN/NGI)

From QGis to Linked Open Data

These toponyms are located as points in QGIS and described, via the PostGIS extension, in a PostgreSQL database whose structure is highly adaptable to every source and compatible with Linked Open Data principles. For each toponym we record not only its name, location and type (as described in the source, plus a standardized version of it), but also, if applicable, its matching number in the databases of other projects (for instance the "placename" project of the State Archives or the "Dorpskernen" project for Flemish toponyms), its corresponding Wikidata page if it already exists, and its corresponding notice in the book Gemeenten van België[1], in which Hervé Hasquin and his team summarized historical information (notably administrative and ecclesiastical affiliation during the Ancien Régime) about municipalities.

The ultimate aim of drawing these links with existing (notably historical) resources is to provide researchers with sufficient information on each toponym to facilitate the disambiguation of place names. For instance, if a historian wants to locate a place called "Heikant" mentioned in a primary source, and it appears that several places in Belgium bear that name, knowing to which bishopric each of them belonged in the seventeenth century may help to identify the correct one. Finally, thanks to another table of the database (the "relations" table), we also describe elementary relations between the toponyms, such as administrative ones ("this hamlet belongs to this town" / "this isolated chapel belongs to this town").
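To make the record structure concrete, here is a minimal sketch of the fields described above as Python dataclasses; the field names and sample values are illustrative and do not reproduce the actual database schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Toponym:
    """One gazetteer entry as described above (illustrative field names)."""
    name: str                                       # toponym as written in the source
    lon: float                                      # point location (QGIS / PostGIS geometry)
    lat: float
    source_type: str                                # type as given in the source
    standard_type: str                              # standardized version of that type
    state_archives_id: Optional[str] = None         # match in the "placename" project
    dorpskernen_id: Optional[str] = None            # match in the "Dorpskernen" project
    wikidata_id: Optional[str] = None               # e.g. a Wikidata Q-number
    gemeenten_van_belgie_ref: Optional[str] = None  # notice in Gemeenten van België

@dataclass
class Relation:
    """Elementary relation between two toponyms, e.g. 'this hamlet belongs to this town'."""
    source: str
    target: str
    relation_type: str = "belongs_to"

# Placeholder coordinates and names, for illustration only.
hamlet = Toponym("Heikant", 4.72, 51.18, "gehucht", "hamlet")
rel = Relation(source="Heikant", target="Zele", relation_type="belongs_to")
```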

Future integration and accessibility

At the end of the project, this gazetteer will be published on the World Historical Gazetteer and will progressively be loaded onto Druid (a Linked Open Data platform), in order to make it easily reachable for everyone as well as potentially extendable. This gazetteer will enable researchers to easily identify place names they find in their sources, and could in the future be used to automatically 'map' (i.e. annotate) written sources to a great extent.

Beyond these very practical benefits, we think that this Belgian gazetteer can also open new perspectives in landscape history by making it possible to follow the evolution of the Belgian landscape through the evolution of toponyms (names and locations) over the long term. This is why, in a second phase (which in practice runs parallel to the first), we compare the nineteenth-century toponyms with present-day ones using the database of the IGN/NGI. By establishing links between old and current toponyms, we can explore how a toponym evolved during the nineteenth and twentieth centuries in order to research variations of place names through time. In a third phase, toponyms extracted from older material will be added to the database and the same sort of links will be described, in order to make it possible to follow the evolution of place names over a longer term.


[1] Duvosquel, Jean-Marie, Hervé Hasquin, and Raymond Van Uytven. Gemeenten Van België: Geschiedkundig En Administratief-Geografisch Woordenboek. Bruxelles: La Renaissance du Livre, 1980.

DH Benelux 2024: report about the CLARIAH-VL contributions

From June 4-7, the 11th edition of the DH Benelux Conference was held at the Irish College in Leuven, Belgium. With the theme “Breaking Silos, Connecting Data: Advancing Integration and Collaboration in Digital Humanities,” the event gathered a diverse group of Digital Humanities researchers, eager to explore new ways to connect and share knowledge across disciplines. The conference kicked off with pre-conference workshops on June 4, followed by three days of presentations and discussions.

CLARIAH-VL pre-conference workshop

As part of the DH Benelux pre-conference program, CLARIAH-VL hosted a full-day, in-person workshop. The workshop highlighted three tools developed within the CLARIAH-VL infrastructure project: a named entity referencing toolkit, a tool for digital scholarly editing, and a participatory digital asset enrichment platform based on IIIF technology.

The workshop was organised into two interactive sessions:

Morning Session: Attendees explored a named entity referencing toolkit (presented by Frederic Pietowski & Tom Gheldof) and a tool for digital scholarly editing (presented by Lamyk Bekius, Vincent Neyt & Nooshin Shahidzadeh Asadi), gaining practical insights into their functionalities and applications.

Afternoon Session: Focused on Madoc, an IIIF-based platform (presented by Lise Foket, Fien Danniau, Rein Debrulle, Julie Birkholz & Annamaria Van Ingelgem) designed for participatory digital asset enrichment, allowing researchers to annotate and collaborate on image-based datasets.

This hands-on workshop offered participants a deeper understanding of these tools’ potential to enhance digital humanities research. The workshop bridged theoretical knowledge and practical application, underscoring CLARIAH-VL’s commitment to advancing digital scholarship.

CLARIAH-VL contributions

Members of CLARIAH-VL played an important role throughout the conference. Tom Gheldof, the CLARIAH-VL coordinator at KU Leuven’s Faculty of Arts, served on the DH Benelux Organising and Program Committee. Supervisors of KU Leuven, Mark Depauw and Fred Truyen acted as the local chairs of the conference, strengthening the event’s ties to the local academic community. Moreover, several CLARIAH-VL coordinators (Lamyk Bekius, Julie Birkholz, Mike Kestemont, and Mark Depauw) chaired sessions, contributing their expertise and fostering discussions throughout the conference.

Several CLARIAH-VL researchers also presented a (joint) paper:

Lamyk Bekius presented in the session Literature and Fiction on day 1, with the paper “What does that little black square store? The contents of Herman de Coninck’s floppy disks in the Letterenhuis”, co-authored with Jordan Thijs.

Mike Kestemont was truly a conference marathoner at DH Benelux 2024! On day 1, he presented in the session Statistics and Patterns, with the paper “Painting a bigger picture: the annotation of emotion in Middle Dutch literature”, co-authored with Cecile Vermaas and Laurent Breeus-Loos. He was back again on day 2 in the session Textual Analysis and Stylometry with the paper Abbreviation Application: A Stylochronometric Study of the Abbreviations in the Oeuvre of Herne’s Speculum Scribe, co-authored with Caroline Vandyck. He also made it to day 3, where he presented in the Disambiguating and Annotating Historical Text session with the paper “Coverage-based Comparisons of Cultural Diversity”, co-authored with Folgert Karsdorp and Melvin Wevers. Three days in a row – talk about dedication!

Tim Van de Cruys shared his expertise on day 2 in the session on NER, with the paper “Named Entity Recognition for a Large Scale Analysis of Individuals in Antiquity“, co-authored with Marijke Beersmans, Evelien de Graaf, Alek Keersmaekers, and Margherita Fantoli.

Julie Birkholz took part in the Computer Vision session on Day 3, with the paper “Finding a Needle in a Haystack: Computer vision and machine learning techniques for extracting comics from Belgian Illustrated Periodicals in the Interwar Period“, co-authored with Benoît Crucifix, Erwin Dejasse, Sébastien Hermans, Krishna Kumar Thirukokaranam Chandrasekar, and Bas Vercruysse.

Sytze Van Herck presented in the Disambiguating and Annotating Historical Text session on day 3, with the paper “PiCo in Practice: An ontology to standardise person reconstructions”, co-authored with Rick Mourits and Ivo Zandhuis.

Tom Gheldof

Tom Gheldof is the day-to-day coordinator for CLARIAH-VL at KU Leuven's Faculty of Arts. He is responsible for the SICs developed as part of the Linked Open Data Tool Suite (SIC 3). In addition, he is the Belgian national Moderator for the Digital Humanities Course Registry (DHCR) and one of the chairs of the DARIAH-EU WG 'Sustainable publishing of (meta)data'.


Styloscope and Toposcope: Towards User-Friendly Digital Text Analysis

Natural Language Processing (NLP) has been one of the fastest-growing research fields in the last decade. Innovations such as pre-trained large language models based on transformer neural networks have not only led to the popularization of AI and NLP among the general public, but also to interdisciplinary research projects in the humanities and social sciences, facilitated by the scalability of these methods. In this post, we present two tools that aim to facilitate such interdisciplinary research: Styloscope and Toposcope. The tools were developed in Python and can be used from the command line or from a user interface. The code, detailed installation instructions, and user guidelines can be found on GitHub:

Styloscope

Styloscope is a tool for automatic writing style analysis. It can be used to test hypotheses about large-scale corpora, parse documents, or detect outliers. Users can provide data by either uploading a local file or by using a publicly available Huggingface dataset. When uploading a corpus, the tool accepts CSV files with one document per row, and ZIP folders in which documents are stored in individual text files. The output contains the parsed documents, raw statistics on various writing style features such as syntactic dependencies, lexical richness, readability, etc., and visualizations of aggregated results. An example for syntactic dependencies is provided below:

Toposcope

Toposcope can be used to detect topics in unstructured text data. It provides annotations and visualizations of the detected topics, including (changes in) topic frequency over time. The tool features four algorithms: BERTopic (Grootendorst, 2022), Top2Vec (Angelov, 2020), Non-negative Matrix Factorization (Choo et al., 2013), and Latent Dirichlet Allocation (Blei et al., 2003). Users can modify a selection of topic model parameters, and apply a number of built-in preprocessing steps, such as lemmatization and stopword removal. The input format is identical to the Styloscope format: users can upload a local corpus (CSV/ZIP), or use a Huggingface dataset. The output includes visualizations of the topic-document clusters (as shown below) and the most important keywords per topic. The raw results, among other things, consist of annotations, a topic-document matrix, and a topic-term matrix. Topic diversity and topic coherence are also computed in order to support the user during the evaluation of the tool.
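As an indication of what the underlying algorithms produce, the following minimal example runs one of the four listed algorithms, Latent Dirichlet Allocation, via scikit-learn on a toy corpus; it illustrates the kind of topic-document and topic-term output described above and is not Toposcope's own interface:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical toy corpus; in practice this would be hundreds of documents.
docs = [
    "The council discussed flooding along the river Scheldt.",
    "Heavy rain caused the meadows near Antwerp to flood again.",
    "The new poetry collection received enthusiastic reviews.",
    "Critics praised the poet's latest volume of essays and poems.",
]

# English stopwords here for the toy example; a Dutch corpus would need a Dutch list.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(doc_term)                 # document-topic matrix

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):   # topic-term matrix
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```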

How to cite

Jens Lemmens and Walter Daelemans. 2024. Styloscope and Toposcope: Towards user-friendly digital text analysis. CLiPS Technical Report Series (CTRS): 10. https://www.uantwerpen.be/en/research-groups/clips/research/computational-linguistics/compling-resources/clips-technical-repo/

References

  • Dimo Angelov. 2020. Top2Vec: Distributed representation of topics. arXiv:2008.09470.
  • David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, vol. 3, pp. 993–1022.
  • Jaegul Choo, Changhyun Lee, Chandan K. Reddy and Haesun Park. 2013. UTOPIAN: User-driven topic modeling based on interactive nonnegative matrix factorization. IEEE Transactions on Visualization and Computer Graphics, vol. 19 (12), pp. 1992–2001. Institute of Electrical and Electronics Engineers (IEEE).
  • Maarten Grootendorst. 2022. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv:2203.05794.

CLARIAH-VL SIC 5 tool descriptions