The solution for your digital text analysis needs
The aim of the ‘analyse’ activity within CLARIAH-VL is to build a user-friendly Digital Text Analysis Dashboard and Pipeline (DTADP).
The DTADP allows you to:
- upload your own corpus,
- select from already available corpora,
- select which analysis steps you need (illustrative sketches of each category follow this list):
  - Tools for linguistic preprocessing: tools to annotate text with linguistic information, such as tokenization, lemmatization, part-of-speech tagging, syntactic parsing and named entity recognition, for the standard varieties of Dutch, English, French and German.
  - Tools for natural language understanding: tools for stylometric analysis, sentiment and emotion detection, document similarity clustering and topic modelling, as well as tools for distant reading.
  - Tools for parallel data: tools for sentence alignment and word alignment.
- download results in a variety of formats, including non-textual visualisations (e.g. aligned syntax trees, word clouds).
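To make the preprocessing category concrete, here is a minimal sketch of a single annotation pass. It uses spaCy with its pretrained pipelines for the four languages; the library choice, model names and output format are illustrative assumptions, not the DTADP's actual implementation.

```python
import spacy

# One pretrained pipeline per supported standard language (small models chosen
# for brevity; install them first, e.g. `python -m spacy download nl_core_news_sm`).
MODELS = {
    "nl": "nl_core_news_sm",
    "en": "en_core_web_sm",
    "fr": "fr_core_news_sm",
    "de": "de_core_news_sm",
}

def preprocess(text: str, lang: str):
    """Tokenize, lemmatize, POS-tag, parse and NER-tag a text in one pass."""
    nlp = spacy.load(MODELS[lang])
    doc = nlp(text)
    tokens = [
        {"token": t.text, "lemma": t.lemma_, "pos": t.pos_,
         "head": t.head.text, "dep": t.dep_}
        for t in doc
    ]
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    return tokens, entities

tokens, entities = preprocess("Antwerpen ligt aan de Schelde.", "nl")
for row in tokens:
    print(row)
print(entities)  # entity spans and labels; exact output depends on the model
```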
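In the same spirit, the natural language understanding category can be illustrated with a small document-similarity clustering sketch built on TF-IDF vectors and scikit-learn; the library, the toy documents and the two-cluster setting are assumptions for illustration only.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The court ruled on the contract dispute.",
    "The judge dismissed the appeal in the lawsuit.",
    "The patient was treated for a viral infection.",
    "Doctors prescribed antibiotics for the infection.",
]

# Represent each document as a TF-IDF vector.
vectors = TfidfVectorizer().fit_transform(docs)

# Pairwise document similarity (the raw material for clustering or search).
similarity = cosine_similarity(vectors)

# Group similar documents (`metric=` requires scikit-learn >= 1.2).
labels = AgglomerativeClustering(
    n_clusters=2, metric="cosine", linkage="average"
).fit_predict(vectors.toarray())

print(similarity.round(2))
print(labels)  # expected: legal and medical documents land in separate clusters
```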
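For the parallel-data category, the sketch below shows length-based sentence alignment in the spirit of Gale & Church (1993), using dynamic programming over 1:1, 1:0 and 0:1 links. Real aligners also score 2:1 and 1:2 links and use lexical evidence; the cost function and `skip_cost` value here are simplifying assumptions.

```python
def align(src, tgt, skip_cost=4.0):
    """Align two sentence lists by character length, returning (link type, src, tgt) tuples."""
    n, m = len(src), len(tgt)
    INF = float("inf")
    # cost[i][j] = best cost of aligning src[:i] with tgt[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:  # 1:1 link, cheap when lengths match
                c = cost[i][j] + abs(len(src[i]) - len(tgt[j])) / max(len(src[i]), len(tgt[j]), 1)
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j, "1:1")
            if i < n:            # 1:0 link (source sentence left unaligned)
                c = cost[i][j] + skip_cost
                if c < cost[i + 1][j]:
                    cost[i + 1][j], back[i + 1][j] = c, (i, j, "1:0")
            if j < m:            # 0:1 link (target sentence left unaligned)
                c = cost[i][j] + skip_cost
                if c < cost[i][j + 1]:
                    cost[i][j + 1], back[i][j + 1] = c, (i, j, "0:1")
    # Trace the cheapest path back into a list of links.
    links, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj, kind = back[i][j]
        links.append((kind,
                      src[pi] if kind != "0:1" else None,
                      tgt[pj] if kind != "1:0" else None))
        i, j = pi, pj
    return list(reversed(links))

src = ["Dit is een zin.", "En nog een langere tweede zin."]
tgt = ["This is a sentence.", "And another, longer second sentence."]
print(align(src, tgt))  # two 1:1 links for this toy pair
```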
The ‘analyse’ activity consists of 6 tasks:
- Creation of a modular framework for the DTADP: In this task, we will formally define and implement a modular framework for the linguistic infrastructure, allowing new tools and analyses to be added easily (a sketch of the idea follows below). In designing the framework, particular attention will be paid to compatibility with and integration into the CLARIN Switchboard infrastructure.
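As a hypothetical illustration of such a modular framework, the sketch below registers each analysis step under a stable name behind a uniform interface, so a new tool can be added by registering one function rather than changing the pipeline code; it does not reflect the DTADP's actual architecture or its Switchboard integration.

```python
from typing import Callable, Dict, List

REGISTRY: Dict[str, Callable[[dict], dict]] = {}

def component(name: str):
    """Decorator that registers an analysis step under a stable name."""
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        REGISTRY[name] = fn
        return fn
    return register

@component("tokenize")
def tokenize(doc: dict) -> dict:
    doc["tokens"] = doc["text"].split()  # placeholder for a real tokenizer
    return doc

@component("count")
def count(doc: dict) -> dict:
    doc["n_tokens"] = len(doc["tokens"])
    return doc

def run_pipeline(steps: List[str], doc: dict) -> dict:
    """Chain the selected components; adding a tool means registering one function."""
    for step in steps:
        doc = REGISTRY[step](doc)
    return doc

print(run_pipeline(["tokenize", "count"], {"text": "A modular pipeline sketch."}))
```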
- User-centred creation of a user-friendly dashboard: In this task, the developer will set up a user-friendly dashboard in cooperation with the NLP researchers and involve users in its evaluation. To this end, a list of dedicated use cases will be defined and worked out in detail, and users will be consulted on personalising the dashboard and adapting it to their own needs.
- Benchmark existing NLP tools and new models: For a number of NLP tasks, several alternatives are available, in different forms and programming languages. We will benchmark existing NLP tools and new models; the results will inform which alternatives to integrate into the DTADP (a schematic benchmark sketch follows below). This includes (re)training tools on existing resources, such as creating state-of-the-art methods for language modelling of historical Dutch or of specialised text corpora (e.g. medical or legal text), and setting up an interface with the High Performance Computing infrastructure of the VSC to allow training state-of-the-art models on large datasets. The tools to be benchmarked are the linguistic preprocessing tools and the NLP tools for natural language understanding.
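Schematically, such a benchmark boils down to running each candidate tool over the same gold-annotated data and comparing scores. The sketch below does this for token-level tagging accuracy; the "taggers" and gold data are toy placeholders, and span-based tasks such as named entity recognition would instead use precision, recall and F1.

```python
from typing import Callable, Dict, List, Tuple

Gold = List[Tuple[str, str]]  # (token, gold POS tag)

def accuracy(pred: List[str], gold: List[str]) -> float:
    """Fraction of tokens whose predicted tag matches the gold tag."""
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def benchmark(tools: Dict[str, Callable[[List[str]], List[str]]], data: Gold) -> None:
    """Run every candidate tool on the same gold data and report its accuracy."""
    tokens = [t for t, _ in data]
    gold_tags = [g for _, g in data]
    for name, tag in sorted(tools.items()):
        print(f"{name}: {accuracy(tag(tokens), gold_tags):.2%}")

# Two toy "taggers" standing in for real candidate tools.
naive = lambda toks: ["NOUN"] * len(toks)
rule_based = lambda toks: ["DET" if t.lower() in {"the", "a", "de", "een"} else "NOUN"
                           for t in toks]

gold = [("the", "DET"), ("cat", "NOUN"), ("sat", "VERB")]
benchmark({"naive-baseline": naive, "rule-based": rule_based}, gold)
```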
- Integration of NLP tools: a) tools for linguistic preprocessing for Dutch, English, French and German, b) tools for natural language understanding and c) tools for parallel data.
- Release NLP analyses on existing datasets: Public datasets that have been processed with certain tools will be made available from within the dashboard, allowing users to search them.
- Creation of a Help Desk for users: The goal of the Help Desk is to help users by tailoring tools to specific use cases or domains, as well as to deal with feedback on annotation and analysis errors, leading to improved models.