
Research Scenario: RePublic III

A reputational perspective on structural reforms

The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in Humanities and the Arts by, among other things, providing data-level access to digitized and born-digital resources. In this blogpost series, we will communicate on research scenarios leading to and building upon the software and datasets made available through CLARIAH-VL. In the previous blogposts in this series, the RePublic model was introduced, and it was explained how the model was used to show the interaction between parliamentary attention and media attention for public agencies. This third and final part discusses the research conducted in Boon et al. (2025), in which RePublic was utilized.

Introduction

The study described in this blogpost explores how the amount and tone of media coverage affect the likelihood of structural reforms in public agencies.

Research questions

  1. Does the amount of media attention influence the likelihood of structural reforms in public agencies? 
  2. Does the tone of media coverage (positive or negative sentiment) impact the likelihood of reforms? 
  3. Are negative reputations more influential than positive reputations in triggering reforms?

Hypotheses

  1. Agencies with more media attention are less likely to face structural reforms. 
  2. Positive reputations reduce the likelihood of reforms, while negative reputations increase it. 
  3. Negative reputations have a stronger impact on reform likelihood than positive reputations. 

Methodology

The study used two main datasets to answer the research questions mentioned above. The first dataset was the same corpus introduced in the previous blogpost in this series: Flemish news articles (published between 2000 and 2015) that discuss a public agency. The RePublic model was used to add sentiment annotations (positive, negative, neutral) to these articles. The second dataset was extracted from the Belgian State Administration Database (Kleizen, Verhoest, and Wynen 2018) and contains information on the structural reforms undergone by the same agencies that occur in the first corpus.

In order to detect any effects of media sentiment on the likelihood of structural reform, both linear and non-linear statistical models were used. Whether an agency experienced a structural reform in a given year (binary: yes/no) was treated as the dependent variable; media sentiment and total media attention were the independent variables. Political turnover and neutral sentiment were used as control variables.

Results

The results indicate an inverted U-shaped relationship between negative media coverage and reform likelihood: negative media reputations initially increase the likelihood of reforms, but this effect diminishes when negativity becomes extreme. Agencies with consistently negative reputations are less likely to experience reforms, as negativity becomes normalized. Positive media reputations, on the other hand, do not significantly impact the likelihood of reforms. 

References

Jan Boon, Jan Wynen, Koen Verhoest, Walter Daelemans, Jens Lemmens. 2025. A Reputational Perspective on Structural Reforms: How Media Reputations are Related to the Structural Reform Likelihood of Public Agencies. In Journal of Public Administration Research and Theory, pp. 1-15. Oxford University Press.

Jan Boon, Jan Wynen, Walter Daelemans, Jens Lemmens, Koen Verhoest. 2023. Agencies on the Parliamentary Radar: Exploring the Relations between Media Attention and Parliamentary Attention for Public Agencies Using Machine Learning Methods. In Public Administration 102:3, pp. 1026-1044. Wiley Online Library.

Bjorn Kleizen, Koen Verhoest, and Jan Wynen. 2018. Structural Reform Histories and Perceptions of Organizational Autonomy: Do Senior Managers Perceive Less Strategic Policy Autonomy When Faced with Frequent and Intense Restructuring? Public Administration 96: pp. 349-67. https://doi.org/10.1111/padm.12399

Evelien Willems and Frederik Heylen. 2023. FlemPar: An interface to the API of the Flemish Parliament. https://github.com/PolscienceAntwerp/Flempar

Authors

Jens Lemmens*, Jan Boon**, Koen Verhoest*, and Walter Daelemans*

(*University of Antwerp, **University of Hasselt)

Research Scenario: Mapping flood damages in the past (part II)

Building the dataset and identifying locations

The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in Humanities and the Arts by, among other things, providing data-level access to digitized and born-digital resources. In this blogpost series, we will communicate on research scenarios leading to and building upon the datasets made available through CLARIAH-VL. The previous blogpost introduced the research scenario “Mapping flood damages”. This one presents the methodology that was used, while the last one will display the results obtained. 

Aim of the research scenario

Our societies are facing major meteorological disasters, both because the climate is changing rapidly and because societies are no longer prepared to face such events: many people live in risky areas, while the risk itself is culturally no longer accepted. Recent research has shown that past societies used to be more resilient towards adverse weather events (Soens 2018, De Keyzer et al. 2024), at least to the non-exceptional ones. Thinking about the daily resilience of past societies to such events means taking stock of past bad weather events: how frequent were they? What were their causes? Which places were the most impacted? This can be difficult because these events have either left little trace in the documentation (because they were common) or so much trace (because they were numerous) that it is hard to manage. However, digital tools can help researchers in both cases. They can, for instance, greatly facilitate the visualization of specific natural disasters by automating the mapping of (forgotten) affected places. The aim of this research scenario is precisely to map, semi-automatically, the locations mentioned in the Gazet van Antwerpen as having been under water because of a flood or a storm between 1911 and 1921, in order to answer the following questions: which places were impacted by storms and floods in the provinces of Antwerp and East Flanders between 1911 and 1921, and to what extent were they affected?

Steps taken

Step 1: Building the dataset

We looked for every mention of the words “onder water” (“under water” in English, e.g. “this hamlet was under water because of the storm”) from 1911 to 1921 in the newspaper Gazet van Antwerpen, accessed on BelgicaPress, the online newspaper database of the Royal Library of Belgium (KBR).

Extract from the Gazet van Antwerpen, 2 August 1911 (https://uurl.kbr.be/1361477)

The words “onder water” appear on 317 pages between 1911 and 1921. Most of the time, this expression is used in the following contexts:

    1. following a flood or a storm in Belgium
    2. following a flood or a storm abroad
    3. when someone died from drowning
    4. when a fire broke out (most of the time on a ship, but not always) and had to be extinguished using massive quantities of water

Once the articles reporting one of the last three contexts were set aside, 58 newspaper issues remained in which the expression “onder water” is used at least once to report the consequences of a flood or a storm that occurred somewhere in Belgium.

Unfortunately, BelgicaPress does not yet give access to the OCR version of the newspapers, which means that search results have to be extracted by hand. We therefore manually copied the content of all the relevant newspaper articles containing the expression “onder water” into an Excel sheet.

Step 2:  Matching the articles with the Belgian Historical Gazetteer

All these articles contain at least one place name. However, localizing those places can prove difficult, since the names of towns, villages and hamlets changed greatly in Belgium over the last two centuries, because of incorrect transfers or translations and the variability introduced by local dialects (Von Busekist 1998, Taeldeman 2001, Witte 2011). We therefore cannot use a contemporary gazetteer to locate the places mentioned in those articles, and have to use a historical one, namely the Belgian Historical Gazetteer. This gazetteer – still under construction (CLARIAH-VL, SIC 4) – aims at providing researchers with a collection of data that 1) does not stop at Belgian provincial borders, 2) goes beyond the level of municipalities and 3) does not stop at the 19th century but goes deeper into the past.

For the needs of this research scenario, we made a first matching test between the content of the articles and the Belgian Historical Gazetteer (BHG). We did this using a formula that asks Excel to look for specific character strings (place names registered in the gazetteer) within a text (the newspaper articles).
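The logic of that lookup can be expressed in a few lines of Python. The sketch below is a hypothetical simplification of the Excel formula, not the actual spreadsheet; it tries longer names first, which avoids the partial-match problem described below (e.g. reducing “Gentbrugge” to “Gent”).

```python
# Hypothetical simplification of the Excel lookup, not the actual formula.
# 'gazetteer' stands in for place names from the Belgian Historical Gazetteer.
gazetteer = ["Gent", "Gentbrugge", "Antwerpen", "Sint-Niklaas"]

def find_places(article_text: str, names: list[str]) -> list[str]:
    """Return every gazetteer name occurring in the article, trying longer
    names first so that 'Gentbrugge' is not reduced to 'Gent'."""
    found = []
    remaining = article_text
    for name in sorted(names, key=len, reverse=True):
        if name in remaining:
            found.append(name)
            remaining = remaining.replace(name, "")  # prevent partial re-matches
    return found

print(find_places("Gentbrugge en Sint-Niklaas stonden onder water.", gazetteer))
# ['Sint-Niklaas', 'Gentbrugge']
```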

The BHG currently covers only the provinces of Antwerp and East Flanders. Many of the place names mentioned in the articles (36% of the total) could therefore not be found in the gazetteer. For 42% of the remaining data, the matching worked perfectly, giving the correct result immediately. In 15% of the cases, a small manual intervention was needed to obtain the correct result (for instance, a small correction in the spelling of a place name that had been incompletely or badly transcribed). In 38% of the cases, the formula did not succeed in finding the correct place name in the gazetteer, for various reasons:

  • the formula found the correct toponym, but not in the right province (a disambiguation problem)
  • the formula could only find one toponym among the several mentioned in the article (the formula is not made to repeat the search once one toponym has been found)
  • the formula stopped the search once it had found a toponym in the gazetteer that was only part of the one mentioned in the article (for example, “Gentbrugge” was matched with “Gent”)

In those cases we had to manually correct or supply the toponym found by the formula. These problems will be solved in the future, in the framework of CLARIAH+, when a collaboration with experts in text extraction will help us build a better matching query.

In the next blogpost, we will explain how we obtained a visualization of the results of this matching test and what research perspectives it offers.

Next step (and next blogpost)

Visualizing flood damages (blogpost 3/3)

References

Von Busekist, Astrid. 1998. La Belgique. Politique des langues et construction de l’État, de 1780 à nos jours. Paris-Bruxelles: Éditions Duculot.

De Keyzer, Maïka, Tim Soens and Christophe Verbruggen. 2024. Mens en natuur: een geschiedenis. Gent: Academia Press, 313p.

Taeldeman, Johan. “De Regenboog van de Vlaamse Dialecten.” Het Taallandschap in Vlaanderen, Johan Taeldeman et al. (eds.), Academia Press, 2001, 49–58.

Witte, Els. 2011. “La question linguistique en Belgique dans une perspective historique.” Pouvoirs 136(1): 37–50.

Soens, Tim. 2018. “Resilient Societies, Vulnerable People: Coping with North Sea Floods Before 1800.” Past & Present 241(1): 143–177. https://doi.org/10.1093/pastj/gty018

Research scenario: RePublic II

Introducing the case of reputation analysis of government organizations 

The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in Humanities and the Arts by, among other things, providing data-level access to digitized and born-digital resources. In this blogpost series, we will communicate on research scenarios leading to and building upon the software and datasets made available through CLARIAH-VL. In the previous blogpost in this series, the RePublic model was introduced. This second part discusses the research conducted in Boon et al. (2023), in which RePublic was utilized.

Introduction

Public agencies operate with significant autonomy, often holding more information about their activities than legislators. This information imbalance makes it challenging for politicians to monitor the performance of these agencies. Previous research has shown that news media play an important role in shaping political debates by drawing attention to societal issues, which helps to fill this knowledge gap. While earlier studies have explored the media’s impact on politics broadly, it is still unclear how media sentiment affects political scrutiny of agencies (Boon et al., 2023). By using RePublic to analyze sentiment in news media and its influence on political debates, we aimed to provide new insights into this matter.

Research question

How are media attention and parliamentary attention for public agencies related?

Hypotheses

  • Media attention in newspapers precedes parliamentary questions about public agencies. This effect is more pronounced for news with a negative tone, compared to news with a neutral or positive tone.
  • Negative media attention for public agencies in newspapers is more likely to precede negatively toned parliamentary questions than positive and neutral media attention.

Methodology

In order to investigate the effect of news media attention (the main independent variable) on parliamentary attention (the main dependent variable), news data and parliamentary data about 24 public agencies were collected and statistical regression tests were applied.

Attention and reputation analysis

The numbers of published news articles and parliamentary questions about public agencies were used as metrics for attention. To provide reputation annotations, RePublic was used to predict a “neutral”, “positive” or “negative” label for each document. To determine which documents discuss which organizations, regular expressions were used (sketched below). These statistics, aggregated per month, formed the unit of analysis.
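As an illustration, the mention detection could look like the following. The patterns shown are hypothetical; the study’s actual regular expressions (covering abbreviations and spelling variants) are not reproduced here.

```python
import re

# Hypothetical sketch of the mention detection; the study's actual patterns
# (covering abbreviations and spelling variants) are not reproduced here.
ORGANIZATIONS = {
    "De Lijn": re.compile(r"\bDe Lijn\b"),
    "NMBS": re.compile(r"\bNMBS\b"),
}

def mentioned_orgs(text: str) -> list[str]:
    """Return the organizations mentioned in an article or parliamentary question."""
    return [name for name, pattern in ORGANIZATIONS.items() if pattern.search(text)]

print(mentioned_orgs("Reizigers klagen over vertragingen bij De Lijn."))
# ['De Lijn']
```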

Data

More than 90,000 news articles about 24 government organizations were collected, all published in one of three popular Flemish newspapers (De Standaard, De Morgen, Het Laatste Nieuws) between 2000 and 2020. For the parliamentary data, written questions from commissions and plenary sessions from the same time span that mentioned the same organizations were scraped using the FlemPar package for R (Willems and Heylen, 2023).

Results

The study revealed that media coverage influences parliamentary attention to public agencies, with media attention often preceding parliamentary attention. Positive media coverage prompts favorable questions within the same month, but negative coverage has a larger impact and increases all types of questions. Surprisingly, majority legislators, not just the opposition, actively respond to negative news, likely to protect their reputation. Written questions, though symbolic, reflect how legislators rely on media to monitor agencies. While causality isn’t definitive, the media’s agenda-setting role is clear: negative coverage triggers scrutiny, while positive coverage results in more favorable treatment of agencies in parliament.

Next Steps

In the third post of this series, the research described in Boon et al. (2025) will be presented.

References

Jan Boon, Jan Wynen, Koen Verhoest, Walter Daelemans, Jens Lemmens. 2025. A Reputational Perspective on Structural Reforms: How Media Reputations are Related to the Structural Reform Likelihood of Public Agencies. In Journal of Public Administration Research and Theory, pp. 1-15. Oxford University Press.

Jan Boon, Jan Wynen, Walter Daelemans, Jens Lemmens, Koen Verhoest. 2023. Agencies on the Parliamentary Radar: Exploring the Relations between Media Attention and Parliamentary Attention for Public Agencies Using Machine Learning Methods. In Public Administration 102:3, pp. 1026-1044. Wiley Online Library.

Evelien Willems and Frederik Heylen. 2023. FlemPar: An interface to the API of the Flemish Parliament. https://github.com/PolscienceAntwerp/Flempar

Authors

Jens Lemmens*, Jan Boon**, Koen Verhoest*, and Walter Daelemans*

(*University of Antwerp, **University of Hasselt)

Research Scenario: RePublic I

Introducing the case of reputation analysis of government organizations

The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in Humanities and the Arts by, among other things, providing data-level access to digitized and born-digital resources. In this blogpost series, we will communicate on research scenarios leading to and building upon the software and datasets made available through CLARIAH-VL. This blogpost introduces the research scenario of the RePublic NLP model.

Introduction

To evaluate government organizations, their (mal)performance is discussed both in news media and during parliamentary sessions. The relationship between news attention and parliamentary attention (and its contingent nature), however, is understudied. In this three-piece blogpost, we describe a tool developed during CLARIAH-VL that can be used to predict the reputation of public agencies from text data, and two political studies conducted with this tool, which have been published as peer-reviewed journal articles. In this first part, the tool itself – RePublic – is described.

Research question

Both news media and parliamentary discussions play an important role in the evaluation of government organisations. Due to the large scale of the available data, however, it is necessary to use automatic methods to estimate the reputation of these organizations and gain comprehensive, diachronic insights. Hence, we proposed the following research question: How can we leverage Natural Language Processing methods to automatically analyze the reputation of government organisations from text?

Method

Data

An annotation task was set up to collect 4404 sentences mentioning Flemish government organizations: 1257 positive, 1485 negative and 1662 neutral. The sentences were extracted from news articles published between 2000 and 2020 in “Het Laatste Nieuws”, “De Standaard” or “De Morgen”, each containing a mention of at least one of 24 government organizations, such as De Lijn, NMBS and Agentschap Natuur en Bos. These mentions were detected using regular expressions.

Model

We used BERTje, the Dutch version of BERT – a pre-trained transformer model – to build a tool for automatic reputation prediction (De Vries et al., 2019). Initially, we used a Masked Language Modeling task to let the model learn the text genre, using a corpus of more than 90,000 unlabeled news articles that mentioned at least one of the 24 government organisations. Then, a fine-tuning task was conducted to predict whether a given text about a certain organization expresses a positive, negative or neutral attitude towards its reputation. For this task, the labeled data mentioned above was used. Our final model, which we named ‘RePublic’ (reputation analyzer for public agencies), is publicly available on the Hugging Face hub: https://huggingface.co/clips/republic.
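Because the model is hosted on the hub, it can be loaded with the transformers library. A minimal usage sketch (the example sentence and printed scores are illustrative):

```python
from transformers import pipeline

# Load RePublic from the Hugging Face hub and classify a Dutch sentence.
# The returned labels correspond to the model's reputation classes
# (positive / negative / neutral).
classifier = pipeline("text-classification", model="clips/republic")

print(classifier("De Lijn krijgt opnieuw kritiek op haar stiptheid."))
# e.g. [{'label': 'negative', 'score': 0.97}]
```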

Evaluation

A 10-fold cross validation experiment was conducted on the labeled data to optimize the hyperparameters of the model and evaluate it. The results can be found below.

Class            Precision (%)   Recall (%)   F1-score (%)
Positive         87.3            88.6         88.0
Negative         86.4            86.5         86.5
Neutral          85.3            84.2         84.7
Macro-averaged   86.3            86.4         86.4

Table 1. Results of the 10-fold cross-validation experiment with RePublic using optimal hyperparameters.

Next Steps

Using RePublic, two reputation studies have been conducted. These are published in Boon et al. (2025) and Boon et al. (2023), and will be described in two separate blogposts.

References

Jan Boon, Jan Wynen, Koen Verhoest, Walter Daelemans, Jens Lemmens. 2025. A Reputational Perspective on Structural Reforms: How Media Reputations are Related to the Structural Reform Likelihood of Public Agencies. In Journal of Public Administration Research and Theory, pp. 1-15. Oxford University Press.

Jan Boon, Jan Wynen, Walter Daelemans, Jens Lemmens, Koen Verhoest. 2023. Agencies on the Parliamentary Radar: Exploring the Relations between Media Attention and Parliamentary Attention for Public Agencies Using Machine Learning Methods. In Public Administration 102:3, pp. 1026-1044. Wiley Online Library.

Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, Malvina Nissim. 2019. BERTje: A Dutch BERT Model. arXiv:1912.09582.

Authors

Jens Lemmens*, Jan Boon**, Koen Verhoest*, and Walter Daelemans*

(*University of Antwerp, **University of Hasselt)

Madoc as a tool to enrich and analyse sources

Madoc is an open-source tool developed by Digirati for the enrichment of digital objects. The primary building block for content enrichment is the International Image Interoperability Framework (IIIF), an open set of standards for sharing images and their metadata. IIIF is used by over 130 research institutions worldwide, mostly in image repositories, cultural heritage institutions, and research libraries. Participatory enrichment possibilities include annotations, translations, transcriptions, and metadata entry. In Madoc users can assemble digital collections with IIIF objects from multiple archives, libraries and museums and build an interpretative website around them. Another possibility is to run a crowdsourcing project and invite the public, researchers or students to contribute.

Madoc allows users to import IIIF objects, including metadata and the result of Optical Character Recognition (OCR). After data import, enrichment projects can be configured in the backend using a capture model or customizable template to describe entry fields. Capture models can consist of annotations to gather transcriptions, translations, entities, tags, commentaries, or metadata curation. Internal projects can be kept private, while public crowdsourcing projects allow users to participate and contribute after signing in.  Submissions made by the public can easily be reviewed and validated through a user-friendly dashboard. Madoc provides a public digital exhibition space to showcase the enriched collection along with the annotations. The results of a project can also be exported using the Madoc API.

The Ghent Centre for Digital Humanities has invested in Madoc since 2019 through the Flemish consortium CLARIAH-VL. Other investors include the National Library of Wales and the Indigenous Digital Archive. We have been actively involved since the development stage and have ample experience setting up projects.

Documentation

Madoc combines a multitude of functionalities and components, both in the front-end and in the back-end. Therefore, a clear documentation page is indispensable. During CLARIAH-VL, the documentation page was significantly expanded to support developers, project managers and users. The page provides explanations, examples and screenshots, and is linked to the source code on GitHub. There is also a subpage documenting the releases of new features. 

Although the documentation page is not yet complete, it facilitates wider use of the platform and strengthens the open-source character of Madoc.

Feature development

To make Madoc more user-friendly for both project managers and project participants, a range of functionalities was developed, resulting in the release of Madoc version 2.2 in October 2023. These functionalities were chosen based on comparative research and a user survey, performed in collaboration with Flemish Art Collection, meemoo and Meise Botanic Garden.

  • Polygon selectors

Up until now, the Capture Model Service in Madoc only supported rectangular box selectors to define regions on the image (i.e. the IIIF canvas) as the target of an annotation. Because rectangular boxes make certain projects impossible, we added the possibility to draw non-rectangular polygonal regions on the image. Digirati used Scalable Vector Graphics (SVG), following the SVG standards defined by the World Wide Web Consortium (W3C). This selector enables more detailed annotations in Madoc.

  • Split view

Project participants can add an image to the viewer via a split view. This enables a comparison of different images. The dimensions of the viewer can also be adjusted to make the form more visible.

  • Structured vocabularies

Project managers can add structured vocabularies themselves, or link to existing vocabularies. By using an auto-complete field, project participants can easily create a connection to vocabularies such as Wikidata.

  • Help button

Instructions for the tasks can be linked to a help button. This allows extensive instructions to be given to the project participant without overloading the form with text.

  • Project dashboard

In addition to a progress bar and an overview of the collections and manifests, every project participant can access an overview of their own contributions, statistics of the other participants and updates from the project team for each project. The project landing page also allows every participant to provide feedback to the project manager(s). To enable quick and efficient communication, managers can send an email to all participants of the project with one click.

  • Auto-save

In addition to a ‘save for later’ button, an auto-save can be enabled. In this way, a copy of the contribution is saved to the browser’s local storage every 10 seconds. If desired, the participant can restore the previous version.

  • User profile

The user profile provides an overview of a user’s personal information (name, e-mail address, settings, …) and the tasks they performed. The profile page also allows users to indicate to whom their personal information should be visible.

  • Terms of use

Project managers can add a page with terms of use to the project site. Each new participant is prompted upon registration to read the terms of use and agree to them. When the terms of use change, every user will be notified. Data on whether or not the terms of use have been agreed to are stored and can be used to deny access to certain functionalities.

Ghent Enriched – platform for customized participation

To test and improve the technology, methodology and user experience of Madoc as a participative tool, we learned a lot from the project Ghent Enriched (Gent Verrijkt), a spin-off of the cultural heritage project Ghent Mapped (Gent Gemapt, 2020-2023). Through Ghent Enriched, museums and archives offer parts of their collections and ask the public or experts to help transcribe, date, describe and identify them. Heritage partners used Madoc as a customized toolbox for digital and hybrid volunteer work. Moreover, the platform made it possible to set up shared projects with similar collections, and to learn from each other’s capture models.

We first tested the approach and technology in pilot projects. Huis van Alijn, Amsab-ISG and Archief Gent formulated problem statements around specific collection registration cases. We then translated and implemented these cases via IIIF manifests in a capture model, and embedded the Madoc projects in existing public and volunteer operations. Students worked on the Belgian-Dutch feminist-socialist periodical De Vrouw (Woman; 1893‒1900) from the digital collections of the Belgian Amsab-Institute of Social History. They collaborated to identify all poems (over 200 in total), developed a distinct data model to describe the poems, and each conducted their own case study. An important requirement of these collection registration cases was the ability to shield copyrighted collections, an advantage of Madoc.

Archief Gent made an open call for volunteers to help date the collection of 3,000 postcards of the Municipal Commission for Monuments and Townscapes (SCMS). These unique pieces contain a lot of interesting information for researchers and folklorists, but have yet to be dated. Aside from historical purposes, dating is crucial for clearing copyright and thus making the postcards more accessible. The dating methodology was translated into a capture model taking into account postage and publisher stamps, writing, colours and layout. After a training session on dating methodology in the archive, five volunteers individually worked on the task in Madoc. They could return every two weeks for feedback from the archivist.

The Industry Museum uses Madoc to present and describe historical photographs of metalworking at the ACEC site. (Former) metal workers were asked to identify machines and measuring instruments on photographs from several different companies. During a workshop with seventeen volunteers, three groups worked together with curators to complete the description of this collection and share their expertise.

Erfgoedcel Gent set up a community project to identify people and places on photographs and slides made by amateur photographer Lucien Bogaert between 1960 and 2010. People from the Muide neighbourhood are asked to further describe the collection.

Research Scenario CLARIAH-VL: Mapping flood damages in the past (part I)

Introducing the case of the Belgian Historical Gazetteer

The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in Humanities and the Arts by, among other things, providing data-level access to digitized and born-digital resources. In this blogpost series, we will communicate on research scenarios leading to and building upon the datasets made available through CLARIAH-VL. This blogpost introduces the research scenario “Mapping flood damages in the past using the Belgian Historical Gazetteer”. The two following ones will introduce the methodology and the results.

Introduction

Our societies are facing major meteorological disasters, both because the climate is changing rapidly and because societies are no longer prepared to face such events: many people live in risky areas, while the risk itself is culturally no longer accepted. Recent research has shown that past societies used to be more resilient towards adverse weather events (Soens 2018, De Keyzer et al. 2024), at least to the non-exceptional ones. Thinking about the daily resilience of past societies to such events means taking stock of past bad weather events: how frequent were they? What were their causes? Which places were the most impacted? This can be difficult because these events have either left little trace in the documentation (because they were common) or so much trace (because they were numerous) that it is hard to manage. However, digital tools can help researchers in both cases. They can, for instance, greatly facilitate the visualization of specific natural disasters by automating the mapping of (forgotten) affected places. This research scenario aims at showing how it can be done.

Dataset

For this research scenario, we will focus on one specific source, namely the “Gazet van Antwerpen”, a daily newspaper published in Flanders since 1891. Among other subjects, this newspaper reports on the occurrences and consequences of bad weather throughout Belgium and abroad, but especially in Flanders. The newspaper has been digitized and is available online from 1911 onwards on BelgicaPress, the online newspaper database of the Royal Library of Belgium (KBR).

Figure 1 – First page of the Gazet van Antwerpen (BelgicaPress)

For this case study, we will look for every mention of the words “onder water” (“under water” in English, e.g. “this hamlet was under water because of the storm”) used in this newspaper from 1911 to 1921. We will select the articles that use this expression in the context of storms or floods that took place in the provinces of Antwerp and East Flanders.

Research Question

For this research scenario, we would like to obtain a visualisation of the damage caused by (non-exceptional) storms and floods in the provinces of Antwerp and East Flanders, in order to take the measure of these events for each year between 1911 and 1921. The first step is of course to be able to map the places mentioned in those articles. Our research questions are therefore: which places were impacted by storms and floods in the provinces of Antwerp and East Flanders between 1911 and 1921, and to what extent were they affected?

Challenges

The challenges are numerous if one wants to automate, at least partly, the working process. The biggest are 1) that information on storms and floods needs to be aggregated into locations, which 2) need to be disambiguated and linked to an existing spatial database for subsequent mapping within a Geographic Information System.

Solutions

This research scenario will make use of the tools provided by SIC 4 (Aggregate) to (semi)automatically map the locations mentioned in the selected newspaper articles.

This specifically includes the Belgian Historical Gazetteer, a historical gazetteer of toponyms. Its main goal is to provide researchers with a collection of data that 1) does not stop at Belgian provincial borders, 2) goes beyond the level of municipalities and 3) does not stop at the 19th century but goes deeper into the past.

Next Steps (and next blogposts)

  • Building the dataset and identifying locations (blogpost part II)
  • Visualizing flood damages (blogpost part III)

References

Soens, Tim. 2018. “Resilient Societies, Vulnerable People: Coping with North Sea Floods Before 1800.” Past & Present 241(1): 143–177. https://doi.org/10.1093/pastj/gty018

De Keyzer, Maïka, Tim Soens and Christophe Verbruggen. 2024. Mens en natuur: een geschiedenis. Gent: Academia Press, 313p.

Diff Annotator: An Annotation Tool for Text Comparisons

Diff Annotator

Diff Annotator, developed at the University of Antwerp as part of eXtant, a toolkit for digital scholarly editing, is a lightweight tool for annotating text comparisons of two plain text files. It was initially created for a specific use case: a research project that included a comparative close reading of two versions of Stephen King’s novel IT. It was programmed to facilitate an analysis of the mechanics of suspense throughout King’s revision campaigns during the novel’s composition process.

How Does Diff Annotator Work?

The tool provides a reading environment that presents the variation in a pleasing way and enables users to take notes and attach annotations to variants. As the name suggests, the Diff Annotator uses the ‘git diff’ command from git (an open-source distributed version control system) for the collation process itself. It visualizes invariant text in black and the variants from the first version in red and the second version in green. The collation is accurate, and the output is clear. Equally important to the collation itself, however, is Diff Annotator’s mechanism to note down what one notices in the variants: to make general notes about revision patterns that one might discover along the way and to attach annotations to particular variants, flagging and tallying those patterns.

Technical Specifications and Installation

Diff Annotator’s simple web-based content management system is a lightweight Node.js application. To add a new text comparison, users can click on the ‘+ NEW DIFF’ button, supply a name for the comparison and then select two plain text files from their computer (file formats other than .txt are not supported, and only two versions can be compared). The output generated by git diff is transformed to HTML by a Python script that adds unique identifiers to all variants. Adding notes and annotations is handled by a combination of Python scripts (which make the changes to the HTML files) and client-side JavaScript functions (which update the visualisation in the browser as the user is working).
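The idea behind that pipeline can be illustrated with a simplified Python sketch: run git’s word-level diff on the two files and wrap the variants in the spans the viewer styles. This is an illustration of the approach, not Diff Annotator’s actual script.

```python
import subprocess

# Simplified illustration of the collation step, not Diff Annotator's
# actual script: run git's word-level diff and wrap variants in the
# spans that the viewer styles (f1 = red, f2 = green).
def collate(file1: str, file2: str) -> str:
    proc = subprocess.run(
        ["git", "diff", "--no-index", "--word-diff=porcelain", file1, file2],
        capture_output=True, text=True,
    )
    html = []
    for line in proc.stdout.splitlines():
        if line.startswith("-") and not line.startswith("---"):
            html.append(f'<span class="f1">{line[1:]}</span>')
        elif line.startswith("+") and not line.startswith("+++"):
            html.append(f'<span class="f2">{line[1:]}</span>')
        elif line.startswith(" "):
            html.append(line[1:])  # invariant text
    return " ".join(html)

print(collate("version1.txt", "version2.txt"))
```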

The automatic collation output can be corrected in three ways:

1) by clicking on a variant and making changes in the text fields marked ‘f1’ (the first text file) and ‘f2’ (the second text file)

2) by clicking the ‘Normalize to f2’ button to remove a variant and insert the variant from f2 as normal text

3) by editing the source code: users can call up the source HTML of a selection of paragraphs (all variants are contained in a <span class="app">, for instance: <span class="app" id="app-652"><span class="f1">Hello</span><span class="f2">Hi</span></span>).

Diff Annotator supplies statistics for every paragraph on the increase or decrease in words between the versions. ‘[inv.:19 — f1:14 — f2:12 (-6.06%)]’, for instance, describes the collation of a paragraph with 19 invariant words, 14 variant words in the first text file and 12 variant words in the second, which comes down to a decrease of 6.06% in total word count between the versions (33 words in f1 versus 31 in f2).
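Assuming the percentage is measured against the first version’s total word count, the label can be reproduced as follows:

```python
# Reproducing the label '[inv.:19 — f1:14 — f2:12 (-6.06%)]': each version's
# word count is the invariant words plus its own variant words.
inv, f1, f2 = 19, 14, 12
total_f1 = inv + f1              # 33 words in the first version
total_f2 = inv + f2              # 31 words in the second version
print(f"{(total_f2 - total_f1) / total_f1:+.2%}")  # -6.06%
```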

Usage and Customization

The intended audience of Diff Annotator will need moderate technical skills to install it, since it has several dependencies that must be installed first (git, Node.js and Python). Text comparisons can easily be transported from one installation of Diff Annotator to another: simply copy the relevant folder from the ‘Diff-Annotator/public/data’ directory to the same directory in another installation.

The tool does not come with a user management system; it is intended to be used as local software on an individual’s computer. But since the application is open source, users are free to add such a feature to it or set it up in such a way that it can be used safely from a shared server. If so desired, a user with a technical background could remove the interactive functionality (adding new text comparisons, adding or changing annotations and notes) and publish the collection of annotated text comparisons on the web.

The source code of this application can be downloaded from:

https://github.com/eXtant-CMG/Diff-Annotator

Unlocking born-digital literary heritage: The case of the Herman de Coninck floppy disks III

Final blogpost on the case of the Herman de Coninck floppy disks

The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in Humanities and the Arts by, among other things, providing data-level access to digitized and born-digital resources. In this blogpost series, we will communicate on research scenarios leading to and building upon the datasets made available through CLARIAH-VL. This is the final blogpost on the research scenario ‘Unlocking born-digital literary heritage: the case of the Herman de Coninck floppy disks’, in which we will focus on the dataset with the file titles related to De Coninck’s columns published under the title “De vliegende keeper” in De Morgen.

Aim of the research scenario

In the previous blogpost, we discussed how computational methods could provide some preliminary insights into the contents of the files stored on De Coninck’s floppies. It became clear that many of the files on Herman de Coninck’s floppy disks in the archive of the Letterenhuis are related to De Coninck’s weekly column ‘De vliegende keeper’ for the Flemish newspaper De Morgen, a selection of which also appeared in the collection of essays De vliegende keeper (1995). Creating a dataset with the files related to this column would allow for further research, for example within the field of textual scholarship and genetic criticism. Within genetic criticism, the study of the writing process is usually divided into three levels: the examination of the endogenesis, the exogenesis and the epigenesis. The endogenesis encompasses the actual composition of the text, the exogenesis the incorporation of research into the text, and the epigenesis describes the continuation of the genesis after a text has been published and studies the revisions in later publications. The files on the floppy disks are therefore well suited to epigenetic research, focusing on questions such as: What can the files on the disks tell us about the publication of this collection? Do the files tell us anything about the selection process? Did the essays need to be rewritten to be republished, or could they simply be published as they were? And what is the status of the digital files on the disks compared to the newspaper version, the version published in the collection of essays, and the versions in the paper archive?

Method

The steps discussed in the previous blogpost enabled us to create a dataset of documents that could be linked to the column ‘De vliegende keeper’, both on the floppy disks and in De Coninck’s paper archive at the Letterenhuis. We then compared these files with the essays that were published in the collection of essays De vliegende keeper, in which 36 essays were republished. For only two of these, no digital version seems to have survived in the archive. There are also four essays for which one or more digital versions exist in the archive, along with a printout and the version printed in the newspaper. This is the case for the essay “Poëzie en toeval” (“Poetry and Coincidence”), with one digital floppy disk version, one paper version, one newspaper version and one version as published in the collection of essays. The essay “De kunst van het stamelen” (“The Art of Stammering”) comes in two digital versions along with all the other printed versions, just like “Feestelijke zinloosheid” (“Festive Futility”) and “De avonturen van een spermatozoön” (“The Adventures of a Spermatozoon”). We compared the versions with each other using Diff Annotator, a lightweight environment for annotating text comparisons of two plain text files, developed by Vincent Neyt for eXtant, a toolkit for digital scholarly editing, as part of CLARIAH-VL. It produces a visualisation of the variation using the ‘git diff’ command from git (an open-source distributed version control system) for the collation itself; in git, the diff function is used to show the changes between commits. The collation results showed that these versions each differ slightly from one another.

Handwritten revision of the essay’s title on the printout

Findings

Let’s look more closely at the differences between the versions of the essay “De avonturen van een spermatozoön”, of which the printout in the paper archive also contains major handwritten revisions. In the essay, De Coninck wrote about the poet Hans Andreus, of whom he had just read a biography written by Jan van der Vegt. None of the versions are the same: each represents a unique stage of the text. This allows us to formulate a hypothesis about the changes that were made to the essay from the first version to the reprint.

The change in title visualised with Diff Annotator

The first version – within the remaining documents – can be found on one of the floppy disks, in a file called ANDREUS. In this version, the essay is titled “De fenomenologie van een klootzak” (“The Phenomenology of a Scumbag”). This is also the title given to the printed version of the essay. The entire base layer appears to be the same as the first version found on the floppy disks, but it contains unique handwritten revisions: paragraphs are marked for deletion or relocation, sentences are struck through, and interlinear and marginal additions appear throughout. One of the revisions addresses the title: it is changed from “De fenomenologie van een klootzak” to “De avonturen van een spermatozoïde” (“The Adventures of a Spermatozoid”). De Coninck implemented the handwritten revisions in the second version found on the floppy disks, but in doing so he included other revisions as well. The digital version is therefore not exactly the same as the paper version with the revisions taken into account. Additionally, there is the version published in the newspaper. This version most closely resembles the revised paper version: the handwritten revisions have been incorporated, but the two are not identical. Finally, there is the version published in the collection of essays. This version bears the closest resemblance to the second version found on the floppy disks, but also contains some differences – mostly minor editorial changes and another title change: the spelling of “spermatozoïde” was changed to “spermatozoon”. This indicates that De Coninck did not use the version previously published in the newspaper, but delved into his own archive to find a version to publish in the book.

Final remarks

This very brief overview of the differences between the different versions shows the importance of the digital files in De Coninck’s archive. They are not just digital surrogates for documents in the paper archive, or vice versa, but they are necessary for understanding his working method and the publication process of the “De vliegende keeper” columns.

This research scenario aimed to make the born-digital archival material in Herman de Coninck’s archive at the Letterenhuis findable and more accessible. This led to the creation and publication of one CSV file with the people and places mentioned in the files and one with all the files related to the “De vliegende keeper” columns. This was the first exploration of the files, and there is still much to do to fully unlock the born-digital archive. An important question is, for example, how this archive should be described. We therefore hope we can continue tackling challenges related to born-digital archives in CLARIAH-VL+, in which the Letterenhuis will participate as a third-party partner. In this next phase, we plan to develop a description model for born-digital and hybrid literary archives.

Link to the dataset

The CSV file with the list of documents related to the “De vliegende keeper” columns is made publicly available on Zenodo through CLARIAH-VL; the files from the floppies can – after approval – be consulted at the Letterenhuis.

References

Bekius, Lamyk. (to be published). “Genetic Criticism Applied to Born-Digital Literary Heritage in Flanders: From Floppy Disks to Keystroke Logging Data.” Proceedings of The Intangible Papers: Authorial Philology and Born-Digital Texts. Bologna: Il Mulino.

Bekius, L., & Thijs, J. (2024). What does that little black square store? The contents of Herman de Coninck’s floppy disks in the Letterenhuis. DH Benelux 2024, Leuven, Belgium. Zenodo. https://doi.org/10.5281/zenodo.11401905

Written by Lamyk Bekius & Jordan Thijs

Research Scenario: Unlocking Born-Digital Literary Heritage II

Second blogpost on the case of the Herman de Coninck floppy disks


The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in Humanities and the Arts by, among other things, providing data-level access to digitized and born-digital resources. In this blogpost series, we will communicate on research scenarios leading to, and building upon, the datasets made available through CLARIAH-VL. This is the second blogpost on the research scenario ‘Unlocking born-digital literary heritage: the case of the Herman de Coninck floppy disks’, in which we reflect on the usefulness of computational methods for discovering the contents of the born-digital files of Herman de Coninck.

Aim of the research scenario

As mentioned in the previous blogpost, the Letterenhuis acquired 218 floppy disks (5¼- and 3½-inch) of the prominent Belgian poet, essayist, journalist and publisher Herman de Coninck (1944-1997) in 1998, and more than 25 years later, the content of the digital files had never been analysed or compared with the paper archive at item level. The aim of the research scenario is, therefore, to partially analyse and discover the content of the digital files stored on the floppy disks by describing their contents, creating sub-datasets that group related files and linking the files with related files in the paper archive. While the full dataset of all the files held on the floppy disks could be used to answer various research questions relevant to biographical research, textual scholarship and genetic criticism, the first research question driving this research scenario relates to the initial access to the files: what is the content of each of the files on the disks, and what digital tools and methods can be used to reveal it?

Floppy disk Herman de Coninck at the Letterenhuis

Steps taken

In 2018, the Letterenhuis took the first step by making disk images of the floppy disks. These disk images are not just copies of all the files, but a literal representation of every bit of information on the floppy disks, ensuring that the original data is preserved. In total, the floppy disks contain over 1300 files (including .wpd and .doc formats), which were converted by the Letterenhuis to .txt and .pdf for further processing. In some cases, the conversion rendered some characters incorrect (e.g. ‘ë’ instead of ‘é’) or illegible, but most files are perfectly suitable for further analysis. Several steps were then taken to gain more insight into the content of these plain text files: 1) preprocessing, 2) filtering of correspondence, 3) content review, and 4) identifying corresponding files.

1) Preprocessing

First, we ran a script to check whether the dataset contained duplicate files; ultimately, we found about 25 duplicates. We then conducted an initial analysis to discern patterns in the file content. Initial observations indicated the presence of recurring elements, such as files beginning with “De vliegende keeper” followed by the column title. We therefore attempted to extract the column name (e.g. ‘De vliegende keeper’), the column title and the author, which is crucial given that the files also include contributions from authors of the Nieuw Wereldtijdschrift (NWT) under De Coninck’s editorship. To maximize extraction efficiency, various exceptions were incorporated into the script; for files with significantly different structures, manual correction was more time-efficient than further script modification. Adjustments for line breaks, capitalization and similar variations were integrated during optimization, and a subset of files was manually annotated to compare against and refine the script’s accuracy.
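A sketch of this kind of header extraction is shown below; the pattern and sample are illustrative, not the project’s actual script.

```python
import re

# Illustrative sketch of the header extraction, not the project's actual
# script: many files open with the column name, followed by the column
# title and, in some cases, an author line.
HEADER = re.compile(
    r"^(?P<column>De vliegende keeper)\s*\n"
    r"(?P<title>.+?)\s*\n"
    r"(?:(?P<author>[A-Z][\w .'-]+)\s*\n)?",
    re.MULTILINE,
)

def parse_header(text: str) -> dict:
    match = HEADER.search(text)
    return match.groupdict() if match else {}

sample = "De vliegende keeper\nPoëzie en toeval\nHerman de Coninck\n..."
print(parse_header(sample))
# {'column': 'De vliegende keeper', 'title': 'Poëzie en toeval',
#  'author': 'Herman de Coninck'}
```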

2) Filtering of correspondence

A large proportion of the files contained personal correspondence, which we wanted to separate from the work-related files. These letters may contain personal information and will have to go through a more extensive sensitivity review at a later stage. We employed pattern recognition to filter out correspondence: De Coninck’s letters typically begin with a salutation, such as ‘geachte’ or ‘beste’. The script therefore identifies files with the most common salutations in Dutch, English and German at the beginning of a line and categorises them as correspondence. The remaining files were subjected to a quick manual inspection to filter out personal files without salutations. In this way, we excluded 838 files from further analysis. This is, of course, a very bare-bones approach, not a strong basis for a comprehensive sensitivity review, and it needs refinement for further applications.
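The filter itself can be as simple as the following sketch (illustrative; the real salutation list was longer):

```python
# Bare-bones sketch of the correspondence filter (illustrative; the real
# salutation list was longer): flag a file as a letter when a line starts
# with a common salutation in Dutch, English or German.
SALUTATIONS = ("geachte", "beste", "dear", "lieber", "liebe", "sehr geehrte")

def is_correspondence(text: str) -> bool:
    return any(
        line.lstrip().lower().startswith(SALUTATIONS)
        for line in text.splitlines()
    )

print(is_correspondence("Beste Jan,\nBedankt voor je brief."))  # True
```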

3) Content review

We then turned our attention to Named Entity Recognition (NER) to get a general overview of the content of the files. The NER script used the nl_core_news_lg model by spaCy to extract person and place names from the text files. The results were compiled into separate text files and a CSV file with term frequencies and categories (person or place), alongside their occurrences in the dataset. The results had to be cleaned manually to remove non-entity nouns, as uppercase initials at the beginning of sentences caused some false positives. The list still contains some errors and false positives, but the 3490 entries should provide a solid starting point for further research, such as: in which essays, and in what way, De Coninck discussed certain people or places, and whether this was subject to revision during the writing process.
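In outline, the NER pass looked like the sketch below (assumed usage of the spaCy model named above; entity label names differ between model versions, hence the permissive label set).

```python
import spacy
from collections import Counter

# Sketch of the NER pass over one converted .txt file. Entity label names
# differ between spaCy model versions, hence the permissive label set.
nlp = spacy.load("nl_core_news_lg")

def extract_entities(text: str) -> Counter:
    """Count person and place names in one text."""
    doc = nlp(text)
    wanted = {"PERSON", "PER", "GPE", "LOC"}
    return Counter((ent.text, ent.label_) for ent in doc.ents if ent.label_ in wanted)

print(extract_entities("Herman de Coninck woonde in Antwerpen."))
```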

4) Identifying corresponding files

The objective of this step was to identify files that corresponded to each other across De Coninck’s archive and publications, which included:

  1. The floppy disk files;
  2. Printouts of the files in De Coninck’s paper archive at the Letterenhuis;
  3. Newspaper clippings in De Coninck’s paper archive at the Letterenhuis;
  4. Published columns in the collections of essays De flaptekstlezer (1992), Intimiteit onder de melkweg (1994) and De vliegende keeper (1995).

To facilitate the comparison of these sources, we first digitized the materials. We made use of an existing EPUB version of the collections of essays. For the printouts and newspaper clippings, we employed Optical Character Recognition (OCR) using Python-tesseract. Once all sources had been digitized, we compared their contents using similarity ratios and listed the top 5 similarities. For instance, comparing published columns with floppy disk files revealed a similarity ratio of 0.90 between the column “Poëzie en toeval” and the file “DICKEY_74_968.txt”, suggesting a strong correspondence.
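Python’s standard difflib can produce such ratios; the sketch below (an assumption about the implementation, not the project’s exact script) ranks candidate files against a target text.

```python
from difflib import SequenceMatcher

# Assumed implementation sketch: rank candidate files by their similarity
# ratio to a target text and keep the top five.
def top_matches(target: str, candidates: dict[str, str], n: int = 5):
    scores = {
        name: SequenceMatcher(None, target, text).ratio()
        for name, text in candidates.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

# e.g. top_matches(column_text, {"DICKEY_74_968.txt": floppy_text, ...})
```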

Next Steps

By means of computational methods such as NER and similarity checks, we could gain some general insights into the content of the files on De Coninck’s floppies. The next and final blogpost within this series will provide further details on the dataset with the file titles related to De Coninck’s columns published under the title “De vliegende keeper” in De Morgen.

Link to the dataset

The CSV file containing the persons and places mentioned in the files stored on the floppy disks (with the exception of De Coninck’s personal correspondence) is made publicly available on Zenodo through CLARIAH-VL; the files from the floppies can – after approval – be consulted at the Letterenhuis.

References

Bekius, L., & Thijs, J. (2024). What does that little black square store? The contents of Herman de Coninck’s floppy disks in the Letterenhuis. DH Benelux 2024, Leuven, Belgium. Zenodo. https://doi.org/10.5281/zenodo.11401905

Written by Lamyk Bekius & Jordan Thijs

Bibundina: a Writer’s Library app

The Writer’s Library application “Bibundina”, developed at the University of Antwerp as part of eXtant, a toolkit for digital scholarly editing, aims to provide an environment both to create and to publish an edition of a collection of books and the reading traces in them. It was originally created to publish an edition of a writer’s library. It could for instance be used to make an edition of Virginia Woolf’s personal library, or Toni Morrison’s, or James Joyce’s—a digital reconstruction of the books that occupied their shelves. Such an edition can contain not just the bibliographic details of those books, but also focus on the marginalia left in them by the writers, as well as include information about the books’ provenance and when they were read.

Target audience

The project’s target audience consists of people who are interested in creating such an edition of a library and are looking for a publication environment. The only technical skill users need is an understanding of XML: filling in an XML document following a well-documented schema. Bibundina was written as an application in eXist-db, an open-source NoSQL XML database and application platform. It can be easily installed via eXist’s “package manager” by dropping a compressed version of the app into the dropzone. Included with the eXist installation is eXide, eXist-db’s code editor. Users can edit the Writer’s Library app’s XML files in eXide and immediately see their work visualized in the publication environment. There are just two XML files to edit: a config file, in which users give their edition a name and select which sorting and browsing categories to use, and “library.xml”, the main data file in which the books are encoded.

Images

Images of the books can be added. The app accommodates three different ways of incorporating images, including IIIF, which will allow scholars to create editions based on their own research data or on collections of images that have been made available via IIIF at institutions all around the world. In the book view, users can browse through the images of the pages the editor wishes to include, cover and title page, for instance, and all pages with reading traces.

Reading traces

For pages with reading traces, zone numbers appear in the left margin of the image, at the same level as their corresponding trace. Clicking on a zone number will activate two pop-up windows on the right: a cropped picture of the reading trace in question and a text box with a transcription of the reading trace, the marked passage and an extract of the passage in context.

Some admin tools have been embedded into the interface to facilitate data input: a tool that helps editors create a new book entry, a tool that extracts image links from a IIIF manifest and formats them into the required XML schema, and a tool that assists admins with drawing rectangular zones around reading traces on IIIF images.

Available on GitHub: the source code, the current binary release, the documentation, and the user manual.

https://github.com/eXtant-CMG/writerslibrary