Styloscope and Toposcope: Towards User-Friendly Digital Text Analysis

Natural Language Processing (NLP) has been one of the fastest-growing research fields in the last decade. Innovations such as pre-trained large language models based on transformer neural networks have not only led to the popularization of AI and NLP among the general public, but also to interdisciplinary research projects in the humanities and social sciences, facilitated by the scalability of these methods. In this post, we present two tools that aim to facilitate such interdisciplinary research: Styloscope and Toposcope. The tools were developed in Python and can be used from the command line or from a user interface. The code, detailed installation instructions, and user guidelines can be found on GitHub.

Styloscope

Styloscope is a tool for automatic writing style analysis. It can be used to test hypotheses about large-scale corpora, parse documents, or detect outliers. Users can provide data either by uploading a local file or by using a publicly available Huggingface dataset. When uploading a corpus, the tool accepts CSV files with one document per row, and ZIP folders in which documents are stored in individual text files. The output contains the parsed documents, raw statistics on various writing style features such as syntactic dependencies, lexical richness, readability, etc., and visualizations of aggregated results. An example for syntactic dependencies is provided below:
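To give an impression of the kind of features Styloscope reports, the sketch below computes two of them, syntactic dependency counts and a simple lexical richness measure (type-token ratio), for a CSV corpus with one document per row. This is only an illustration using pandas and spaCy, not Styloscope's own code; the file name, the "text" column, and the English model are assumptions.

```python
# Minimal sketch (not Styloscope's own code): two of the feature types mentioned
# above, computed per document and then aggregated over the corpus.
from collections import Counter

import pandas as pd
import spacy

nlp = spacy.load("en_core_web_sm")  # assumption: an English corpus

def style_features(text: str) -> dict:
    doc = nlp(text)
    tokens = [t.text.lower() for t in doc if t.is_alpha]
    deps = Counter(t.dep_ for t in doc)                       # syntactic dependency labels
    ttr = len(set(tokens)) / len(tokens) if tokens else 0.0   # type-token ratio (lexical richness)
    return {"type_token_ratio": ttr, **deps}

corpus = pd.read_csv("corpus.csv")        # hypothetical file: one document per row
features = pd.DataFrame([style_features(t) for t in corpus["text"]]).fillna(0)
print(features.describe())                # aggregate statistics per feature
```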

Toposcope

Toposcope can be used to detect topics in unstructured text data. It provides annotations and visualizations of the detected topics, including (changes in) topic frequency over time. The tool features four algorithms: BERTopic (Grootendorst, 2022), Top2Vec (Angelov, 2020), Non-negative Matrix Factorization (Choo et al., 2013), and Latent Dirichlet Allocation (Blei et al., 2003). Users can modify a selection of topic model parameters and apply a number of built-in preprocessing steps, such as lemmatization and stopword removal. The input format is identical to that of Styloscope: users can upload a local corpus (CSV/ZIP) or use a Huggingface dataset. The output includes visualizations of the topic-document clusters (as shown below) and the most important keywords per topic. The raw results consist of, among other things, annotations, a topic-document matrix, and a topic-term matrix. Topic diversity and topic coherence are also computed to support users in evaluating the results.
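As an illustration of what the matrix-factorization back-ends produce, the sketch below fits an NMF model with scikit-learn and extracts the document-topic and topic-term matrices mentioned above, together with the top keywords per topic. This is not Toposcope's own code; the file and column names, the English stopword list, and the number of topics are assumptions.

```python
# Minimal sketch (not Toposcope's own code): an NMF topic model with scikit-learn,
# yielding the kind of matrices that Toposcope exports among its raw results.
import pandas as pd
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = pd.read_csv("corpus.csv")["text"].tolist()   # hypothetical corpus, one document per row

vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)  # assumption: English corpus
X = vectorizer.fit_transform(docs)

nmf = NMF(n_components=10, random_state=0)          # 10 topics, an arbitrary choice
doc_topic = nmf.fit_transform(X)                    # document-topic matrix
topic_term = nmf.components_                        # topic-term matrix

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(topic_term):
    top = [terms[i] for i in weights.argsort()[::-1][:10]]
    print(f"Topic {k}: {', '.join(top)}")           # most important keywords per topic
```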

How to cite

Jens Lemmens and Walter Daelemans. 2024. Styloscope and Toposcope: Towards user-friendly digital text analysis. CLiPS Technical Report Series (CTRS): 10. https://www.uantwerpen.be/en/research-groups/clips/research/computational-linguistics/compling-resources/clips-technical-repo/

References

  • Dimo Angelov. 2020. Top2Vec: Distributed representation of topics. arXiv:2008.09470.
  • David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, vol. 3, pp. 993-1022.
  • Jaegul Choo, Changhyun Lee, Chandan K. Reddy, and Haesun Park. 2013. UTOPIAN: User-driven topic modeling based on interactive nonnegative matrix factorization. IEEE Transactions on Visualization and Computer Graphics, vol. 19 (12), pp. 1992-2001.
  • Maarten Grootendorst. 2022. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv:2203.05794.


Research Scenario: Unlocking Born-Digital Literary Heritage

Introducing the case of the Herman de Coninck floppy disks

The aim of the CLARIAH-VL Open Humanities Service Infrastructure is to advance digitally-enabled research in the Humanities and the Arts by, among other things, providing data-level access to digitised and born-digital resources. In this blogpost series, we will report on research scenarios leading to and building upon the datasets made available through CLARIAH-VL. This blogpost introduces the research scenario ‘Unlocking born-digital literary heritage: the case of the Herman de Coninck floppy disks’.

Introduction

Computers have been a widespread writing technology since the popularisation of the word processor in the early 1980s (Kirschenbaum et al. 2009), but digital materiality is only slowly entering (literary) archival institutions because born-digital archives often remain in the private ownership of the author (Reside 2014). Still, when born-digital archives are part of the collection, they are often “uncatalogued, unfindable and unusable” (Jaillant 2022, 418). Research on born-digital archives is therefore scarce (Jaillant 2022; Ries & Palkó 2019; Reside 2014). This specific research scenario makes use of the CLARIAH-VL infrastructure to make born-digital literary archival material in Flanders findable and more accessible, starting with Herman de Coninck’s floppy disks in the collection of the Letterenhuis (Antwerp).

Floppy disk of Herman de Coninck at the Letterenhuis

Dataset

The floppy disks of the prominent Belgian poet, essayist, journalist, and publisher Herman de Coninck (1944-1997) hold a special place in the Letterenhuis collection, as they were the first born-digital archival material acquired by the institution. In 1998, the Letterenhuis received a donation of De Coninck’s literary archive, consisting of manuscripts and typescripts, correspondence, diaries, notebooks, photographs and 218 floppy disks (5¼- and 3½-inch). De Coninck’s paper archive is now fully catalogued, but the content of the digital files is still largely unknown. The aim of the research scenario is therefore to partially ‘unlock’ the digital files stored on the floppy disks by describing their contents, creating sub-datasets that group related files while documenting their original context (e.g., the files stored on the same floppy disk), and linking the files with related files in the paper archive.

Research Question

While the full dataset of all the files held by the floppy disks could be used to answer various research questions relevant to biographical research, textual scholarship and genetic criticism, the first research question driving this research scenario relates to the initial access to the files: What is the content of each of the files on the disks, and what digital tools and methods can be used to reveal it?

Challenges

In total, the floppy disks contain over 1,300 files, which poses several challenges: identifying sensitive material, establishing connections within and between the born-digital and analogue material in the same archive, and documenting duplicates and folder structures, all without having to work at the level of individual files. Moreover, researchers must be able to find this born-digital part of De Coninck’s archive and, preferably, have a rough idea of its contents in order to make full use of it in their research.
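As a concrete illustration of one of these challenges, the sketch below shows how exact duplicates among the extracted files could be flagged automatically by hashing their contents while keeping track of where each copy is stored. This is not part of the workflow described here, and the folder name is a placeholder.

```python
# Illustrative sketch (not the project's actual workflow): flag exact duplicates
# among files extracted from the floppy disks by hashing their contents.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    by_hash = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    # keep only hashes that occur more than once, i.e. duplicated content
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

for digest, copies in find_duplicates("floppy_disk_extracts/").items():  # placeholder folder
    print(digest[:12], "->", [str(p) for p in copies])
```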

Solutions

Several initiatives have already applied natural language processing (NLP) techniques, such as named entity recognition (NER) and topic modelling, to provide information about the content of born-digital collections (Lee and Woods 2017; Jaillant and Aske 2024).

This research scenario will therefore make use of the tools provided by SIC 5 (Analyse) to automatically create (meta)data and/or to identify sensitive records. These include tools for named entity recognition, stylometric analysis, sentiment and emotion detection, document similarity clustering, topic modelling, and distant reading.
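To illustrate what one of these analysis steps, named entity recognition, does in practice, the snippet below runs spaCy's Dutch model over a sample sentence. This is a generic sketch, not the SIC 5 tooling itself; the assumption that the files are in Dutch follows from De Coninck's language, and the sentence is merely an example based on the description above.

```python
# Generic NER illustration with spaCy's Dutch pipeline (install with:
#   python -m spacy download nl_core_news_sm ); not the actual SIC 5 tool.
import spacy

nlp = spacy.load("nl_core_news_sm")

text = "Het Letterenhuis in Antwerpen bewaart het archief van Herman de Coninck."  # example sentence
for ent in nlp(text).ents:
    print(ent.text, ent.label_)  # each recognised entity (person, place, organisation, ...) with its label
```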

Next Steps

The following blogposts within this series will report on the next steps taken as part of this research scenario. These will include:

  • the documentation on the usefulness of the NLP tools for unlocking the born-digital files of Herman de Coninck;
  • the creation of sub-datasets for text genetic research;
  • the description of the (meta)data-set.

References

Jaillant, L. “How can we make born-digital and digitised archives more accessible? Identifying obstacles and solutions.” Arch Sci 22 (2022): 417-36. https://doi.org/10.1007/s10502-022-09390-7

Jaillant, L. and K. Aske. “Are Users of Digital Archives Ready for the AI Era? Obstacles to the Application of Computational Research Methods and New Opportunities.” ACM J. Comput. Cult. Herit. 16.4 (2024): 16 pages. https://doi.org/10.1145/3631125

Kirschenbaum, M.G., E. Farr, K. Kraus, N. Nelson, C.S. Peters, G. Redwine, and D. Reside. “Digital Materiality: Preserving Access to Computers as Complete Environments.” iPRES 2009: the Sixth International Conference on Preservation of Digital Objects (2009). https://escholarship.org/uc/item/7d3465vg

Lee, C.A., and K. Woods. “Diverse Digital Collections Meet Diverse Uses: Applying Natural Language Processing to Born-Digital Primary Sources.” iPRES. 2017.

Reside, D. “File Not Found: Rarity in an Age of Digital Plenty”. RBM 15.1 (2014): 68-74. https://doi.org/10.5860/rbm.15.1.416

Ries, T., and G. Palkó. “Born-digital archives.” Int J Digit Humanities 1 (2019): 1-11. https://doi.org/10.1007/s42803-019-00011-x

Reading historical maps in a Digital Era

The promises of Artificial Intelligence technologies to extract historical information from maps are becoming more impressive each day. However, these tools have been developed not only on the basis of, but also for, modern maps. Older maps, including hand-drawn ones (early nineteenth century and earlier), therefore remain, as is often the case with AI, the poor siblings of the AI revolution: the models used to extract information from maps do not work on these older documents. Older maps have different characteristics than modern ones and often do not display the same kind of information, which also means that researchers working on such documents do not always have the same research questions as their colleagues working on modern maps.

On the 17th of November, members of the Antwerp group of CLARIAH-VL (Iason Jongepier and Léa Hermenault, assisted by Lamyk Bekius and Rein Debrulle) organized a workshop in Antwerp, with the support of CLARIAH-VL, the University of Antwerp and Ghent University, that aimed to tackle this issue. The underlying idea of the workshop was to facilitate brainstorming around technical solutions that would allow older maps to also benefit from AI technology and AI-related workflows.

The workshop gathered 25 participants and welcomed 13 speakers from various backgrounds. In the morning, researchers from the leading Alan Turing Institute gave presentations and demos of two applications from the “Living with Machines” project (namely MapReader and Machines Reading Maps). In the afternoon, colleagues from the University of Antwerp, Ghent University and the University of Amsterdam presented their own work on AI and AI-related workflows, among other topics.

Program of the ‘Reading Maps in a Digital Era’ workshop

MapReader, a computer vision pipeline for exploring and analysing images

We first had the great pleasure of hearing Katherine McDonough (Lancaster University & Alan Turing Institute) and Daniel Wilson (Alan Turing Institute), who introduced MapReader, the tool they developed with their team in the framework of the “Living with Machines” project. The tool is open source and can be installed and used by anyone thanks to the instructions available here. Originally developed to automatically identify railway components on the Ordnance Survey maps of England, it can help researchers find any element they are looking for on a raster document by automatically identifying it in pre-defined patches.

MapReader can also simply be used as a way to annotate patches. The tool first needs to be trained on a sample of patches, whose number depends on the size of the patches and on the specific characteristics of the element being searched for: if the latter is not easily distinguishable from other elements, the tool will need to be trained on a very large number of patches, but if, on the contrary, the element has very clear and specific characteristics, the model only needs to be trained on a relatively small number of patches. It should be possible to use the tool to explore old and hand-drawn maps if elements are easily identifiable, provided that enough maps with these elements exist. One of the main problems with older maps indeed remains that we usually do not have enough material to train the models: very few map collections dating from before the nineteenth century constitute series.
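To make the patch idea concrete, the sketch below simply cuts a scanned map sheet into fixed-size patches that could then be annotated and used to train a classifier. This is not MapReader's own API (its documented workflow is available via the instructions linked above); the file names and patch size are placeholders.

```python
# Illustration of the patch-based approach described above -- not MapReader's API:
# slice a scanned map into fixed-size patches for later annotation/classification.
from pathlib import Path
from PIL import Image

def cut_into_patches(map_path: str, out_dir: str, size: int = 100) -> None:
    img = Image.open(map_path)
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    width, height = img.size
    for x in range(0, width, size):
        for y in range(0, height, size):
            # crop box is (left, upper, right, lower); edge patches may be smaller
            patch = img.crop((x, y, min(x + size, width), min(y + size, height)))
            patch.save(Path(out_dir) / f"patch_{x}_{y}.png")

cut_into_patches("ordnance_survey_sheet.png", "patches/")  # placeholder file and folder names
```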

MapReader patches on the Ordnance Survey

“Machines Reading Maps”, a tool to automatically transcribe texts on maps

After a short break, Katherine McDonough and Valeria Vitale (University of Sheffield & Alan Turing Institute) introduced the audience to another tool, called “Machines Reading Maps”, developed by the University of Southern California Digital Library, the Computer Science & Engineering Department at the University of Minnesota and the Alan Turing Institute. This tool is trained to identify printed text on maps and to transcribe it. It was first tested on the Rumsey collection and then added to the platform that allows users to browse that collection. It enables everyone to look for a toponym not only in the metadata, but also on the map itself. Machines Reading Maps can therefore also be used to count the occurrences of a specific place-name variant in a collection, for instance, or to gather text written in a specific graphic style (bold, italic, etc.). While the latter possibility is mainly of interest for maps produced in series, which drastically limits its use for old maps, the tool still looks promising for pre-nineteenth-century cartography, since writing tends to be standardized more quickly than symbology. It should definitely be tested on maps with non-printed text.

Example of the results given by the search “Antwerpen” in the Rumsey Collection

Applying computer vision on historical documents

The afternoon was organized in three different sessions, with two short papers in each. The first session, entitled ‘Computer vision‘, aimed at broadening our scope to the application of computer vision to geospatial and related data in general. José Oramas (University of Antwerp, Imec/IDLab) gave a paper on his research on computer vision models applied to pictures, in which he tries to understand how those models really work in order to eventually improve their results. Then Thomas Smits (University of Amsterdam) introduced the audience to research he carried out together with Mona Allaert, Loren Verreyen, Wouter Haverals and Mike Kestemont at the University of Antwerp, which consisted of using computer vision, HTR and Large Language Models to transcribe and geo-localize addresses found on 100,000 historical postcards. This research shows how promising HTR techniques are, but also reveals how important it is to have a solid address database for geo-localizing information, which is certainly within reach for modern periods, but more challenging for older ones.

The second session was dedicated to the specific challenges of historical maps. The latter have specificities, advantages and inconveniences that we have to be aware of if we want to build efficient and relevant applications and workflows to facilitate their digital use. The aim of this session was to focus on two very different corpora of maps to broaden our knowledge of their specificities. Dieter De Witte (Ghent University and Imec/IDLab) and Iason Jongepier (University of Antwerp and State Archives) introduced us to the specificities of historical maps of Belgium, but also to the first attempts that have been made to extract information from them. Then, Katherine McDonough and Daniel Wilson showed how they used MapReader to explore “railway spaces” on the Ordnance Survey and explained the advantages of reflecting on those spaces using patches instead of vector data.

The audience at the end of a long day of work

Pipelines and workflows to scale up the digitization of data

The third and last session focused on pipelines and workflows that can be used to handle data derived from maps, or historical data with a strong spatial component. Janna Aerts (University of Amsterdam) and Leon van Wissen (University of Amsterdam and UvACreate) presented different projects for which historical data have been gathered and connected via the linked open data system AdamLink. Next, Vincent Ducatteeuw (Ghent University) and Léa Hermenault (University of Antwerp) gave a paper related to an article they are currently writing, which aims to show that small-scale/local gazetteers can contribute significantly to the debate on the structure of gazetteers, by helping to determine which information should be available in a gazetteer both to secure its interest for research purposes and to meet FAIR standards.

This fruitful day helped each of the participants to get to know new tools and to reflect on new methodological issues. It will without a doubt lead to further explorations and discussions that will hopefully help to unlock access to the wealth of historical information that old maps are packed with.

A CLARIAH-VL supported data management system: nodegoat

Information on people, places, and things is related in many different ways. There are countless ways we can infer these relations, depending on the research question and the sources. There are also many ways to store this information as data. The nodegoat platform is an object-oriented, web-based relational data management system that also provides network and geospatial visualizations in a single environment. It allows users to develop custom data models, collaborate on the data, generate visualizations, and export data. Within CLARIAH-VL, nodegoat is used to store, organize, maintain and analyze relational data. Below you can find the cases where the platform is currently being used or has been used.

To support researchers in their use of such data, CLARIAH-VL & GhentCDH are hosting two nodegoat workshops, in which you will learn how to use this platform for your research. The workshops will be held in two parts on the same day:

1) beginners

2) advanced users (users who are already using nodegoat or have set up an instance).

The workshops, given by the developers of the platform (Lab1100), will be held on 16 November from 10-12h and 14-16h respectively at Ghent University. Both workshops will be given in English, but questions may be asked in Dutch. The exact location will be confirmed to registered participants two weeks before the event. Please register here: https://event.ugent.be/registration/nodegoat.

An example of the nodegoat instance, as used in the TIC Collaborative project. This network represents a social visualisation of people and conferences.

To get an idea of how nodegoat can be used in research, see this list of use cases, all projects supported by GhentCDH (CLARIAH-VL):

CLARIAH-VL tools in a computer-assisted genetic editing summer school

The CLARIAH-VL team from the Literature Department of the University of Antwerp looks back on an intense but inspiring week of teaching in the Antwerp Summer University. This year, the theme of the summer school was: Digital Humanities – Computer-assisted genetic editing: from handwritten text recognition to keystroke logging. From 3 to 7 July 2023, participants from Belgium, the Netherlands, Germany, Denmark, Italy, Poland, Turkey, Finland, and Slovenia immersed themselves in the world of digital scholarly editing, handwritten text recognition, keystroke logging and X-technologies. In addition to inspiring others to use digital tools in scholarly editing projects, the summer school also provided a great opportunity to test two tools developed within CLARIAH-VL, as part of eXtant, a toolkit for digital scholarly editing: Axolotl and Keystroke Loxensis.

Making XML less scary with Axolotl

The week began on Monday with a lecture on digital scholarly editing by Dirk Van Hulle. After this lecture, which set the context for the summer school, Josip Batinić and Loren Verreyen gave participants a theoretical introduction to handwritten text recognition (HTR). This provided the participants with a solid basis for learning how to use the HTR tools the next day. The afternoon sessions were devoted to learning the basic principles of XML, currently the most widely used markup metalanguage in digital scholarly editing. To practice TEI-XML for manuscript encoding, the participants were divided into groups of five and asked to transcribe a page from Mary Shelley’s Frankenstein manuscript – as available in the Shelley-Godwin Archive – according to the BDMP (Beckett Digital Manuscript Project) encoding manual. Whereas normally each participant would have to work on their own transcription, or assign one participant the task of typing the transcription, the participants could now join forces and work together on one transcription simultaneously. How? By using Axolotl, a collaborative XML editor, developed by Nooshin Shahidzadeh Asadi. Working in this scenario, the participants were the first users of Axolotl to put the tool to the test. Although the tool is still under development, it proved to be very useful from a pedagogical point of view: enabling group work where participants could help each other, correct each other’s transcriptions, and learn from one another directly by seeing each other write TEI-XML. One participant even pointed out that it “made the learning process of TEI-XML a bit less confronting and terrifying”.

First time testing Axolotl during the summer school Digital Humanities – Computer-assisted genetic editing @ the University of Antwerp

The sessions on Tuesday were dedicated to handwritten text recognition, and in particular to working with Transkribus. Everyone wrote a short text to be transcribed later, using Transkribus. This way, the participants learned the workflow in Transkribus, how to run the available models and how to interpret the Character Error Rate (CER). Finally, the HTR results were exported to TEI-XML for use in a later phase of the summer school.
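Transkribus reports the Character Error Rate itself; for readers unfamiliar with the metric, the sketch below shows how CER is commonly defined: the edit distance between the automatic transcription and the ground truth, divided by the length of the ground truth. The example strings are placeholders.

```python
# Minimal sketch of the Character Error Rate (CER) metric mentioned above.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(prediction: str, ground_truth: str) -> float:
    return levenshtein(prediction, ground_truth) / max(len(ground_truth), 1)

print(cer("Tne quick brwn fox", "The quick brown fox"))  # ~0.105: 2 edits / 19 characters
```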

Editing digital writing processes

As scholarly editors in the 21st century, we are not only working with handwritten or typescript material. More and more of our literary heritage is born-digital, and therefore eXtant also provides tools to help and encourage us to work with born-digital material, particularly with keystroke logging data. The sessions on Wednesday were therefore devoted to an introduction to keystroke logging (taught by Lamyk Bekius). Participants began the day by writing a short text and logging their writing process using the keystroke logging tool Inputlog. To enable text genetic analysis of the writing process, the keystroke data needs to be presented in a way that captures only the information relevant to text genetic research, and that conveys the interpretation of the data in a format that is easy to read and preferably familiar to peers. Encoding the keystroke logging data in TEI-XML can be considered as one solution for this task, as Lamyk has argued in the Track Changes project (Huygens Institute/University of Antwerp). Using the encoding manual for keystroke logging data, provided as part of eXtant, the participants had to reconstruct their writing processes in the same way. Encoding the keystroke data is a laborious task, but it is a great exercise for learning XML, and it raises awareness of the peculiarities of digital writing (e.g. typing behaviour) and the challenges of encoding it. On Wednesday afternoon, Mike Kestemont gave the keynote lecture on intertextuality in Middle Dutch epic literature: “The wandering verse: the computational detection of micro-intertexts in medieval literature”. This lecture was open to the public, as part of the Platform{DH} Lecture Series.

Replay it with Keystroke Loxensis!

Since a number of tools in eXtant are developed as applications for eXist-db, we decided to offer an introduction to working with eXist-db during the summer school. eXist-db is an open-source NoSQL XML database and application platform. It provides an integrated infrastructure for the development of any web-based software that is driven by XML data and is increasingly used for many digital edition projects. Including eXist-db in the programme also had the benefit of allowing us to briefly introduce Keystroke Loxensis to the participants. This is a tool for visualizing keystroke logging data encoded in TEI-XML and is part of the eXtant toolkit. Once you have the XML document with the correct encoding, you can upload the document to the application. This allowed the participants to see the results of their encoding efforts: all the writing actions can be shown at one glance or replayed in the order in which they were carried out. On Thursday, we also took the first steps towards making a simple ‘digital edition’ of the Transkribus output in eXist-db, including an introduction to HTML and CSS – still key to the visualization of transcriptions – given by Nooshin. In the afternoon, we left the classroom behind for an excursion to the Plantin-Moretus Museum and the summer school dinner.

On Friday, we discussed a variety of X-technologies, moving from XPath to XQuery to XSLT. In eXist-db, the participants learned how to use these technologies by making a simple edition of the HTR assignment. With just a little coding, all the images and transcriptions from all the participants could be visualized, and the participants could then modify the HTML and CSS to their own liking. A great result after a week of hard work.
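During the summer school these queries ran inside eXist-db; as a rough taste of the kind of XPath involved, the sketch below runs comparable queries from Python with lxml instead. The file name and the TEI elements queried (line and deletion elements) are assumptions based on common TEI usage, not the summer school's actual materials.

```python
# A small taste of XPath outside eXist-db: querying a TEI-XML transcription with lxml.
from lxml import etree

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

tree = etree.parse("transcription.xml")                         # hypothetical TEI file
lines = tree.xpath("//tei:l", namespaces=TEI_NS)                # all line elements
deletions = tree.xpath("//tei:del/text()", namespaces=TEI_NS)   # text of all deletions

print(f"{len(lines)} lines, {len(deletions)} deletions")
```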

We are very happy with the way the summer school went and with meeting the wonderful participants. It was also great to be able to test our tools with some potential users, as it showed us where to improve and where to fix bugs. At the end of the summer school, participants anonymously filled in a survey to rate the tools they had worked with, giving us useful feedback on Axolotl and Keystroke Loxensis. And, importantly, the majority of participants acknowledged the usefulness of both tools.

Curious about what we will offer in the Summer School Digital Humanities 2024? Keep an eye on the Antwerp Summer University website where you can also sign up for the newsletter.

From Zoom conference to Vampire sessions: looking back on the first DHBenelux #GoesOnline

Following annual custom, we attended the DH Benelux conference from 3-5 June 2020. Due to the COVID-19 pandemic, the 7th edition of this event, which was supposed to take place as a physical event at the University of Leiden, was rescheduled to 2-4 June 2021. Luckily, several stakeholders took the initiative to organize a virtual alternative (DHBenelux 2020 #GoesOnline) in order to allow the DH community to present and discuss some of their latest research findings and to demonstrate tools and projects.

Now that we are a few months further into the pandemic we have gotten more or less used to participating in online calls, meetings and workshops. In the beginning of June 2020 when the online DH Benelux Conference took place, however, it was quite a challenge for the organizers to find ways to make meaningful interactions possible without actually being in the same room. Nevertheless, the organizers succeeded in their plan, making clever use of Zoom (and its breakout rooms) and an interesting and varied 3-day program was prepared, with the conference welcoming more than 150 registered participants.

During the first day, in the Late-riser Session moderated by our own Sally Chambers, our DARIAH colleague Erzsébet Tóth-Czifra demonstrated the use of OpenMethods, a metablog where a collaborative knowledge hub for Digital Humanities tools and methods is being built. After the (virtual) lunch – and a nap for some – we were, among other things, presented with a demonstration of the Open Data Kit (ODK) for mobile linguistic metadata entry. The Graveyard Session, chaired by our colleague Mike Kestemont, highlighted some tools making use of lexica and dictionaries, for example in the research of historical Dutch.

On day 2 the Late-riser Session focused on the combination of Digital Humanities and pedagogy (for instance, in DH Master programmes or with aggregators such as DARIAH Campus). The second After Lunch Nap and Graveyard Sessions provided more interesting talks, from advances in Digital Music Iconography to using signing (with pose estimation) as input for Sign Language Dictionary Search.

Day 3 offered a variety of topics: Mike Kestemont & Folgert Karsdorp kicked off with their amazing estimation of the loss of medieval texts using methods from ecodiversity, and after lunch many very interesting insights about OCR and HTR were shared with the Zoom audience. Also on this final day, Sara Tonelli gave a keynote lecture on the now more relevant than ever issue of abusive language detection, and more specifically on where we are and what we still need to do. Finally, during the Graveyard Session (chaired by another CLARIAH-VL colleague, Julie Birkholz), the participants enjoyed some state-of-the-art Linked Open Data tools and best practices and showed their interest once again by asking tons of questions – as throughout the whole conference, even in breakout rooms during the breaks – and helping the presenters with useful suggestions.

At a physical conference the golden rule is ‘no work without pleasure’, and normally we would conclude each day in a more informal way that may or may not be accompanied by something to drink or eat. Finding an alternative for this part of the conference was probably even harder than for the more formal part. Nevertheless, the organizers tried and came up with the so-called Vampire Sessions, which were intended as the ‘social event’ of the conference. Like true vampires we came together when the sun went down during ‘bar-like’ sessions with a DH theme, each day organised by another country. The Netherlands ended day 1 with a thoroughly enjoyed combination of work and pleasure: the ‘DHBenelux Drunkenness Ontology’ proved to be only one of the many ways to create a proper DH ‘Intoxication Ontology’. Badges and customized Zoom backgrounds were awarded for all the participants’ efforts, well earned at the end of an interactive event. On the second day our colleagues from Luxembourg had created a lively and, above all, very musical quiz to test everyone’s knowledge. From metal songs to proper classics, all participants’ heads were buzzing after this late-night session. The cherry on the cake was the final Vampire Session, in which our Belgian team organized an actual DH quiz. From CLARIAH-VL – no, we didn’t ask about the acronym – to questions about long-forgotten DH heroes, in the end everyone was a winner (although we did hand out a few prizes to the best contestants).

If this DHBenelux 2020 #GoesOnline event demonstrated one thing, it’s that the DH communities in our respective countries are more than capable of adapting to the challenging times we’re experiencing at the moment. Even a global pandemic cannot prevent us from effectively sharing our DH research, helping each other out and still finding ways to participate interactively in social events from behind a computer screen. Let’s hope that next year we will have the opportunity to meet up again in person (and in beautiful Leiden) for DHBenelux 2021, but if not, we will definitely learn from all the very good – and some a little less good – experiences of this first DHBenelux online event!

Tom Gheldof

Tom Gheldof is the day-to-day coordinator for DARIAH-BE/CLARIAH-VL at KU Leuven's Faculty of Arts...


Opening up the humanities: an infrastructural opportunity

CLARIAH-VL

The CLARIAH-VL Open Humanities Service Infrastructure, funded for at least two years by the Research Foundation Flanders (FWO), is being developed by an interdisciplinary team of researchers, digital humanities experts and infrastructure specialists from the universities of Ghent, Antwerp, Leuven and Brussels, together with linguists from the Dutch Language Institute (INT).

CLARIAH-VL will offer a unique combination of high-quality and user-friendly tools and resources that can be seamlessly embedded into the everyday workflows of humanities researchers, designed to improve the utility, accessibility, reusability, and sustainability of their research processes and data.

Open humanities is a guiding principle for contemporary research. As researchers, we need to demonstrate the relevance of the humanities for the general public, heritage community groups and policy makers. Collaborative research paradigms of co-creation and participatory engagement, sharing authority and actively engaging with different communities, are at the heart of our vision for the digital future of the humanities. For this, we need open humanities infrastructures that make it technically possible to share knowledge, including sharing and co-creating knowledge with non-specialist users, and to facilitate citizen science.

Some of the tools CLARIAH-VL will offer include:

  • IIIF-enabled Corpus Management Platform, enabling researchers to build a research corpus from a variety of digital resources and export the textual data for analysis with digital research tools (see the sketch after this list)
  • participatory deep mapping platform to facilitate crowdsourcing a rich array of geo-spatially annotated resources
  • web-based platform for the publication of digital scholarly editions and
  • linked open data infrastructure for analysing, sharing, connecting and enriching arts and humanities research data.
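As a rough sketch of what “IIIF-enabled” means in practice for the first item above, the snippet below fetches a IIIF Presentation (version 2) manifest and lists its canvases. The manifest URL is a placeholder, and the CLARIAH-VL platform itself may expose IIIF differently.

```python
# Generic IIIF Presentation API (v2) illustration; not the CLARIAH-VL platform's own API.
import requests

manifest_url = "https://example.org/iiif/manifest.json"  # placeholder manifest URL
manifest = requests.get(manifest_url, timeout=30).json()

for sequence in manifest.get("sequences", []):
    for canvas in sequence.get("canvases", []):
        print(canvas.get("label"), canvas.get("@id"))    # one canvas per digitised page/image
```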

Furthermore, CLARIAH-VL will stimulate computational advances in the arts and humanities, such as advancing machine learning towards in-depth processing of big data and granular computing, with a prime role for Natural Language Processing (NLP) as a way of meeting the expectations of humanities scholars who want to interpret the vast amount of digitized data that has become available.

Finally, CLARIAH-VL will help prepare arts and humanities researchers for participation in initiatives such as the European Open Science Cloud (EOSC), which is set to become an open and trusted environment for storing, sharing and re-using scientific data across disciplines and borders.
