Styloscope and Toposcope: Towards User-Friendly Digital Text Analysis

Natural Language Processing (NLP) has been one of the fastest-growing research fields of the last decade. Innovations such as pre-trained large language models based on transformer neural networks have not only popularized AI and NLP among the general public, but have also enabled interdisciplinary research projects in the humanities and social sciences, facilitated by the scalability of these methods. In this post, we present two tools that aim to support such interdisciplinary research: Styloscope and Toposcope. The tools were developed in Python and can be used from the command line or from a user interface. The code, detailed installation instructions, and user guidelines can be found on GitHub.

Styloscope

Styloscope is a tool for automatic writing style analysis. It can be used to test hypotheses about large-scale corpora, parse documents, or detect outliers. Users can provide data either by uploading a local file or by using a publicly available Huggingface dataset. When uploading a corpus, the tool accepts CSV files with one document per row, and ZIP folders in which documents are stored in individual text files. The output contains the parsed documents, raw statistics on various writing style features such as syntactic dependencies, lexical richness, readability, etc., and visualizations of aggregated results. An example for syntactic dependencies is provided below:
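To give a concrete impression of the kind of raw statistics involved, the sketch below computes two simple writing style features (syntactic dependency counts and a type-token ratio as a crude lexical richness measure) over a CSV corpus with one document per row. It only illustrates the type of output Styloscope aggregates and is not its actual implementation; the file name "corpus.csv", the column name "text" and the use of spaCy are assumptions made for the example.

```python
# Illustration only: two writing-style features of the kind Styloscope reports,
# computed with spaCy over a CSV corpus that stores one document per row.
# "corpus.csv", its "text" column and the spaCy model are assumed here.
import collections
import pandas as pd
import spacy

nlp = spacy.load("en_core_web_sm")

def style_features(text: str) -> dict:
    doc = nlp(text)
    words = [t for t in doc if t.is_alpha]
    ttr = len({t.lower_ for t in words}) / max(len(words), 1)  # crude lexical richness
    return {
        "n_tokens": len(doc),
        "type_token_ratio": ttr,
        "dep_counts": dict(collections.Counter(t.dep_ for t in doc)),  # syntactic dependencies
    }

corpus = pd.read_csv("corpus.csv")            # one document per row
stats = corpus["text"].apply(style_features)  # per-document raw statistics
print(stats.head())
```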

Toposcope

Toposcope can be used to detect topics in unstructured text data. It provides annotations and visualizations of the detected topics, including (changes in) topic frequency over time. The tool features four algorithms: BERTopic (Grootendorst, 2022), Top2Vec (Angelov, 2020), Non-negative Matrix Factorization (Choo et al., 2013), and Latent Dirichlet Allocation (Blei et al., 2003). Users can modify a selection of topic model parameters and apply a number of built-in preprocessing steps, such as lemmatization and stopword removal. The input format is identical to the Styloscope format: users can upload a local corpus (CSV/ZIP) or use a Huggingface dataset. The output includes visualizations of the topic-document clusters (as shown below) and the most important keywords per topic. The raw results include, among other things, annotations, a topic-document matrix, and a topic-term matrix. Topic diversity and topic coherence are also computed to support the user in evaluating the results.
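To make the topic modelling output more tangible, the sketch below runs one of the four supported algorithms (LDA) with scikit-learn on a toy corpus and prints the most important keywords per topic. Toposcope itself wraps this kind of pipeline with preprocessing, visualisations and coherence/diversity metrics; the toy documents and parameter values are invented for the example.

```python
# Minimal LDA sketch (one of the algorithms Toposcope offers), not Toposcope's own code.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the council discussed the new harbour infrastructure",
    "students analysed medieval manuscripts in the archive",
    "the port expanded its container terminal last year",
    "the library digitised a collection of early printed books",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(docs)        # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(doc_term)          # topic-document matrix

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):    # topic-term matrix
    top = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")        # most important keywords per topic
```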

How to cite

Jens Lemmens and Walter Daelemans. 2024. Styloscope and Toposcope: Towards user-friendly digital text analysis. CLiPS Technical Report Series (CTRS): 10. https://www.uantwerpen.be/en/research-groups/clips/research/computational-linguistics/compling-resources/clips-technical-repo/

References

  • Dimo Angelov. 2020. Top2Vec: Distributed representation of topics. arXiv:2008.09470.
  • David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, vol. 3, pp. 993–1022.
  • Jaegul Choo, Changhyun Lee, Chandan K. Reddy, and Haesun Park. 2013. UTOPIAN: User-driven topic modeling based on interactive nonnegative matrix factorization. IEEE Transactions on Visualization and Computer Graphics, vol. 19 (12), pp. 1992–2001.
  • Maarten Grootendorst. 2022. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv:2203.05794.

Reading historical maps in a Digital Era

The promises of Artificial Intelligence technologies to extract historical information from maps are becoming more impressive each day. However, these tools have been developed on the basis of, and for, modern maps. Older maps, including hand-drawn ones (early nineteenth century and earlier), therefore remain, as is often the case with AI, the poor siblings of the AI revolution: the models used to extract information from maps do not work on these older documents. Older maps have different characteristics than modern ones and often do not display the same sort of information, which also means that researchers working on such documents do not always have the same research questions as their colleagues working on modern maps.

On the 17th of November, members of the Antwerp group of CLARIAH-VL (Iason Jongepier and Léa Hermenault, assisted by Lamyk Bekius and Rein Debrulle) organized a workshop in Antwerp, with the support of CLARIAH-VL, the University of Antwerp and Ghent University, that aimed to tackle this issue. The underlying idea of the workshop was to facilitate brainstorming around technical solutions that would allow older maps to also benefit from AI technology and AI-related workflows.

The workshop gathered 25 participants and welcomed 13 speakers from various backgrounds. In the morning, researchers from the Alan Turing Institute gave presentations and demos of two applications from the “Living with Machines” project (namely MapReader and Machines Reading Maps). In the afternoon, colleagues from the University of Antwerp, Ghent University and the University of Amsterdam presented their own work on AI and AI-related workflows, among other topics.

Program of the ‘Reading Maps in a Digital Era’ workshop

MapReader, a computer vision pipeline for exploring and analysing images

We first had the great pleasure of hearing Katherine McDonough (Lancaster University & Alan Turing Institute) and Daniel Wilson (Alan Turing Institute), who introduced MapReader, the tool they developed with their team in the framework of the “Living with Machines” project. The tool is open source and can be installed and used by anyone thanks to the instructions available here. Originally developed to automatically detect railway components on the Ordnance Survey of England, it can help researchers find any element they are looking for on a raster document by automatically identifying it in pre-defined patches.

MapReader can also simply be used as a way to annotate patches. The tool first needs to be trained on a sample of patches, whose number depends on the size of the patches and on the specific characteristics of the element being searched for: if that element is not easily distinguishable from other elements, the tool will need to be trained on a very large number of patches; if, on the contrary, the element has very clear and specific characteristics, the model only needs to be trained on a relatively small number of patches. It should be possible to use the tool to explore old and hand-drawn maps if elements are easily identifiable, provided that enough maps with these elements exist. Indeed, one of the main problems with older maps remains that we usually do not have enough material to train the models: very few map collections dating from before the nineteenth century constitute series.
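To illustrate the patch-based idea described above (this is not MapReader's own API, just a sketch of the principle), the snippet below cuts a scanned map sheet into square patches that could subsequently be annotated and used to train a classifier. The file name and patch size are invented for the example.

```python
# Sketch of patch creation: slice a scanned map into square patches.
# "map_sheet.png" and the 100-pixel patch size are illustrative values only.
from PIL import Image

def to_patches(path: str, patch_size: int = 100):
    img = Image.open(path)
    width, height = img.size
    for top in range(0, height - patch_size + 1, patch_size):
        for left in range(0, width - patch_size + 1, patch_size):
            yield (left, top), img.crop((left, top, left + patch_size, top + patch_size))

for (x, y), patch in to_patches("map_sheet.png"):
    patch.save(f"patch_{x}_{y}.png")  # each patch can then be labelled/classified
```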

MapReader patches on the Ordnance Survey

“Machines Reading Maps”, a tool to automatically transcribe texts on maps

After a short break, Katherine McDonough and Valeria Vitale (Sheffield University & Alan Turing Institute) introduced the audience to another tool, called “Machines Reading Maps”, developed by the University of Southern California Digital Library, the Computer Science & Engineering Department at the University of Minnesota and the Alan Turing Institute. This tool is trained to identify printed text on maps and to transcribe it. It was first tested on the Rumsey collection and then added to the platform that allows users to browse that collection. It enables everyone to look for a toponym not only in the metadata, but also on the map itself. Machines Reading Maps can therefore also be used to count the occurrences of a specific place name variant in a collection, for instance, or to gather text written in a specific graphic style (bold, italic, etc.). While the latter possibility would only be of interest for maps produced in series, which drastically limits its use for old maps, the tool still looks promising for pre-nineteenth-century cartography, since writing tended to be standardized earlier than symbology. It should definitely be tested on maps with non-printed text.

Example of the results given by the search “Antwerpen” in the Rumsey Collection

Applying computer vision on historical documents

The afternoon was organized in three sessions with two short papers in each. The first session, entitled ‘Computer vision’, aimed at broadening our scope to the application of computer vision to geospatial and related data in general. José Oramas (University of Antwerp, Imec/IDLab) gave a paper on his research into computer vision models applied to pictures, in which he tries to understand how those models really work in order to eventually improve their results. Then Thomas Smits (University of Amsterdam) introduced the audience to research he carried out together with Mona Allaert, Loren Verreyen, Wouter Haverals and Mike Kestemont at the University of Antwerp, which used computer vision, HTR and Large Language Models to transcribe and geolocate addresses found on 100,000 historical postcards. This research shows how promising HTR techniques are, but also reveals how important it is to have a solid address database for geolocating information, which is certainly achievable for modern periods but more challenging for older ones.

The second session was dedicated to the specific challenges of historical maps. The latter have specificities, advantages and inconveniences that we have to be aware of if we want to build efficient and relevant applications and workflows to facilitate their digital use. The aim of this session was to focus on two very different corpora of maps to broaden our knowledge of their specificities. Dieter De Witte (Ghent University and Imec/IDLab) and Iason Jongepier (University of Antwerp and State Archives) introduced us to the specificities of historical maps of Belgium, as well as to the first attempts that have been made to extract information from them. Then, Katherine McDonough and Daniel Wilson showed how they used MapReader to explore “railway spaces” on the Ordnance Survey and explained the advantages of reflecting on those spaces using patches instead of vector data.

The audience, at the end of a long day of work

Pipelines and workflows to scale up the digitization of data

The third and last session focused on pipelines and workflows that can be used to handle data derived from maps, or historical data with a strong spatial component. Janna Aerts (University of Amsterdam) and Leon van Wissen (University of Amsterdam and UvACreate) presented different projects for which historical data have been gathered and connected via the linked open data system AdamLink. Next, Vincent Ducatteeuw (University of Ghent) and Léa Hermenault (University of Antwerp) gave a paper related to an article they are currently writing, which aims to show that small-scale/local gazetteers can greatly contribute to the debate on gazetteer structure by helping to determine which information should be available in a gazetteer, both to secure its value for research purposes and to meet FAIR standards.

This fruitful day helped each of the participants to get to know new tools and to reflect on new methodological issues. It will without a doubt lead to further explorations and discussions that will hopefully help unlock access to the historical information that old maps are packed with.

Discover Ghent through historical maps: unveiling the past with Gent Gemapt

Our partners from the Ghent Centre for Digital Humanities are proud to present their newest project: Ghent Mapped. The digital city map ‘Gent Gemapt’ stacks 20 historical maps and connects them to 4,000 places and 10,000 pieces of heritage. Collections and history that over time have become dispersed among museums, libraries, archives, associations and living rooms, come together again virtually. Reunited, they offer a multifaceted view of the city and its past. Go explore for yourself via gentgemapt.be and discover Ghent layer by layer, from the Middle Ages to today.

The cultural heritage project Gent Gemapt is a collaboration of Ghent University Library, STAM – Stadsmuseum Gent, Huis van Alijn, Industriemuseum, Archief Gent, Amsab-Institute for Social History, Liberas and Erfgoedcel Gent. It was developed and coordinated by the Ghent Centre for Digital Humanities at Ghent University, with support from CLARIAH-VL.

Example of the enriched map of Ghent Mapped

Want to know more?

  1. “De Kaart” is the core of Gent Gemapt. It is an innovative presentation platform that spatially unlocks the splendor of Ghent’s heritage collections. It allows you to search, zoom and scroll through the time and space of the city.
  2. “Gent Verrijkt” stands next to De Kaart and is the digital toolbox of Gent Gemapt. Here, museums and archives enlist the help of volunteers to transcribe, date, describe and identify their collections.
  3. With the project, the heritage partners are innovating in digital technology. Gent Gemapt uses the open source technologies Omeka S, Madoc and IIIF (International Image Interoperability Framework), Linked Open Data, and a new Ghent place register.
  4. Gent Gemapt is funded through a project grant from the Department of Culture, Youth and Media of the Flemish Government and through Clariah-Vlaanderen.
  5. After this launch, Gent Gemapt continues to grow and the partners continue to build on the heritage presentation and enrichment via GentGemapt.be.
  6. Read About Gent Gemapt and About the project.

Launch of DigHimapper, a platform to analyse historical maps via georeferencing and annotating

CLARIAH-VL is proud to announce the release of DigHimapper, a platform to analyse historical maps via georeferencing and annotating. It involves the public in two important steps in the processing of historical maps into analysable sources of information, making it possible to visualise landscape evolutions and to search maps for place names.

  • In the Georeferencing portal, historical maps are aligned as closely as possible with the current landscape. This is done by looking for points that can be found both on a historical map and in the present-day landscape. By repositioning the maps, it becomes possible to compare them directly with the present, as well as with other historical maps, and thus to visualise landscape evolutions (a minimal sketch of the underlying principle follows below this list).
  • In the Annotation portal, place names (toponyms) on historical maps are converted into text that can be read by computers. Combined with the repositioning of the map itself, these toponyms are given a place in space. Since toponyms often contain a wealth of information about past landscapes, this provides an indispensable resource for studying landscape evolution.
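For readers who wonder what georeferencing amounts to technically, the sketch below estimates a simple affine transform from a handful of control points linking pixel coordinates on a scanned map to present-day geographic coordinates. Platforms such as the Allmaps Editor (shown further below) support more sophisticated transformations; the coordinates here are invented for illustration.

```python
# Toy georeferencing: fit an affine transform from ground control points (GCPs).
# All coordinates below are made up for demonstration purposes.
import numpy as np

# (pixel_x, pixel_y) on the scanned historical map
pixels = np.array([[120, 80], [1540, 95], [130, 1010], [1525, 1040]], dtype=float)
# corresponding (longitude, latitude) of the same spots in today's landscape
world = np.array([[4.395, 51.225], [4.432, 51.224], [4.396, 51.203], [4.431, 51.202]])

# least-squares solution of world = [px, py, 1] @ A for the 3x2 affine matrix A
design = np.hstack([pixels, np.ones((len(pixels), 1))])
affine, *_ = np.linalg.lstsq(design, world, rcond=None)

def to_world(px: float, py: float) -> np.ndarray:
    """Project a pixel coordinate of the scanned map into geographic space."""
    return np.array([px, py, 1.0]) @ affine

print(to_world(800, 500))  # approximate lon/lat of a point near the map centre
```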

The collection

Central to a first phase of the development of this platform are the magnificent maps of the Arenberg family, which can be found in the General State Archives. The Arenberg family had possessions all over Europe, but mainly in the Low Countries. The collection has now been scanned at high quality and totals some 4,000 maps, ranging from parcel maps of villages to world maps.

An example of georeferencing in the Allmaps Editor

Contribute as a volunteer yourself!

Refine existing georeferences or get to work on toponyms. The results will be used for scientific research on past landscapes, to make digital historical maps more widely available, and as a test case for expanding the platform with more functions and many more historical maps. Go to the DigHimapper website and you can get started right away!

Contact: dighimaps@uantwerpen.be

DigHimapper is a collaboration between the University of Antwerp, the State Archives of Belgium, Webmapper and Bert Spaan. The project is funded by the Special Research Fund of the University of Antwerp and CLARIAH-VL. The original historical maps were transferred by the Arenberg Foundation to the State Archives of Belgium, which was responsible for scanning them. DigHimapper is also part of the FED-tWIN DigHimaps, a project collaboration between the University of Antwerp and the State Archives of Belgium, funded by BELSPO.

CLARIN-BE joins CLARIN ERIC

Exciting news! CLARIN-BE has joined the CLARIN European Research Infrastructure Consortium (ERIC) for the Social Sciences and Humanities focusing on Language Resources and Technology. Within CLARIN ERIC, CLARIN-BE will focus on the inclusion of tools and resources which are not yet present in the CLARIN infrastructure.

logo CLARIN-BE

More specifically, CLARIN-BE will increase knowledge on:

  • Digital Text Analysis Dashboard and Pipeline
  • Benchmarking of NLP tools for natural language understanding
  • Integration of NLP tools in Digital Text Analysis Dashboard and Pipeline (DTADP)
  • Tools for linguistic preprocessing for Dutch, English, French and German
  • Tools providing automatic annotation for tokenization, lemmatization, part-of-speech tagging, syntax parsing, chunking and named entity recognition (illustrated in the sketch after this list)
  • Tools for natural language understanding (sentiment analysis, emotion detection, document similarity clustering, topic modelling, stylometry)
  • Tools for parallel data (sentence and word alignment)
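To make these annotation layers concrete, the sketch below produces the same kind of output (tokens, lemmas, part-of-speech tags, dependency relations and named entities) with spaCy's Dutch pipeline. This is a generic illustration rather than the DTADP itself; the choice of spaCy and the model name are assumptions.

```python
# Generic preprocessing illustration with spaCy (not necessarily the DTADP's stack).
# Requires: pip install spacy && python -m spacy download nl_core_news_sm
import spacy

nlp = spacy.load("nl_core_news_sm")  # Dutch pipeline; similar models exist for EN/FR/DE
doc = nlp("CLARIN-BE sluit zich aan bij CLARIN ERIC in Brussel.")

for token in doc:
    # token, lemma, part-of-speech tag and syntactic dependency relation
    print(token.text, token.lemma_, token.pos_, token.dep_)

for ent in doc.ents:
    # named entities with their labels
    print(ent.text, ent.label_)
```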

The CLARIN-BE consortium will fulfill its role by organizing user involvement sessions at Belgian humanities faculties, CLARIN courses at summer schools, and more specific CLARIN training events. Furthermore, sessions about using the CLARIN infrastructure will be included in several university courses.

CLARIN-BE also aims to work in close cooperation with CLARIAH-NL and will continue its contribution to K-Dutch, the CLARIN Knowledge Centre hosted by the IVDNT (Instituut voor de Nederlandse Taal).

Official launch of CLARIAH-VL with European support

Although our ‘CLARIAH-VL: Advancing the Open Humanities Service Infrastructure’ project started in February 2021, it wasn’t until Friday 10th of September that we were finally able to organize our internal launch meeting. Obviously we have had many virtual meetings in the past months, but we were all looking forward to seeing each other in person, to reflect on the work that has been done so far and to look ahead to the many challenging tasks that are planned in the coming months and years.

Sally Chambers presenting the outline of the CLARIAH-VL research infrastructure

We chose the KBR Royal Library of Belgium as the location for our launch meeting. KBR is the national scientific library and preserves, manages and studies more than 8 million documents. Before starting the actual meeting, the project team was happy to catch up over a cup of coffee in the beautiful Consilium meeting space.

As a starter, an interactive group exercise was planned to think about how we want to demonstrate the value of our integrated Open Humanities Service Infrastructure by the end of the project in December 2024. To introduce new members to the project and to remind ourselves of the working plan, we discussed the following questions: What do we need? What do we hope our infrastructure will look like? How should we brand our services? How can we improve our communication? Every team member approached and thought about these questions from the perspective of their own service infrastructure component (SIC):

  • Aggregating Humanities Research Data Sets
  • Linked (Open) Humanities Data Tool Suite
  • User-driven platforms for data enrichment
  • Digital Text Analysis Pipeline
  • Outreach, Uptake & Sustain

Toma Tasovac addressing the CLARIAH-VL consortium during the launch meeting

After a short break, we discussed the next steps based on the output of the interactive group exercise and decided on the agenda for our next team meeting. Then it was time to welcome two guests for a European Intermezzo: Franciska de Jong (Executive Director of CLARIN-ERIC) and Toma Tasovac (Director of DARIAH-ERIC). Both stressed the importance of the work of the CLARIAH consortium in Flanders, and also the potential to collaborate within the European context.

We concluded this launch meeting with a small reception at the Albert Bar, a rooftop terrace at KBR, and with the promise to see each other soon during our collaborations in the next phase of the development of the CLARIAH-VL services.

CLARIAH-VL awarded funding from FWO

FWO

Our CLARIAH-VL Consortium has received funding for the project “CLARIAH-VL: Advancing the Open Humanities Service Infrastructure” from the International Research Infrastructures programme of the Research Foundation Flanders (FWO) for the period 2021-2024.

At the universities of Antwerp, Brussels, Ghent and Leuven as well as the Dutch Language Institute, researchers of the CLARIAH-VL consortium will implement a modular research infrastructure embedding high-quality, user-friendly tools and resources into the workflows of humanities researchers.

Offering an open infrastructure which facilitates public humanities will ensure the accessibility and relevance of the humanities to the general public, specific (heritage) community groups and policy makers. It will make it technically possible to share and co-create knowledge with non-specialist users, for example by facilitating citizen science and crowdsourcing projects.

https://www.fwo.be/media/1024233/results-international-research-infrastructurs-2020.pdf
