
The Belgian Historical Gazetteer: an index of Belgian historical place names to link archival collections and explore landscape histories

Legend: extracts from various sheets of the ‘reduced cadastre’ (©IGN/NGI)

Gazetteers can help historians map the toponyms that appear in the sources they analyse by providing lists of historical place names together with extra information that can be used to disambiguate them when several different places share the same name. This is the aim of initiatives such as the World Historical Gazetteer or Pleiades. However, despite their merits, the international scope of these tools limits their usefulness for regional and local historical research. Historians therefore need more suitable gazetteers if they want to work at a more local scale.

CLARIAH-VL aims to fill this gap for Belgium by launching the project “Belgian Historical Gazetteer” in the framework of its SIC “Enrich”. Started in November 2022, the pilot phase will last two years; in the longer run the project aims to cover the whole country. It is hosted at the University of Antwerp and undertaken in liaison with the Belgian National Geographical Institute (IGN/NGI). The aim of this project is to set up a historical gazetteer of toponyms for the whole present-day territory of Belgium, in order to provide researchers with a collection of data that 1) does not stop at Belgian provincial borders, 2) goes beyond the level of municipalities and 3) goes back in time as far as possible.

Pilot test

The first phase of the project (pilot 2022-2024) consists of gathering an early and geographically homogeneous set of toponyms for the provinces of Antwerp, East Flanders and Liège. This is done by collecting all the toponyms mentioned on the ‘reduced cadastre’, a reduced version of the primitive cadastre drawn between 1847 and 1855 to prepare the creation of the first topographical map of Belgium in the second half of the nineteenth century.

Workflow of the ‘reduced cadastre’ (©IGN/NGI)

From QGIS to Linked Open Data

These toponyms are located as points in QGIS and described, via the PostGIS extension, in a PostgreSQL database whose structure is highly adaptable to every source and compatible with Linked Open Data principles. For each toponym we record not only its name, location and type (as described in the source, together with a standardized version of it), but also, where applicable, its matching number in the databases of other projects (for instance the “placename” project of the State Archives or the “Dorpskernen” project for Flemish toponyms), its corresponding Wikidata page if one already exists, and its corresponding notice in the book Gemeenten van België[1], in which Hervé Hasquin and his team summarized historical information about municipalities (notably their administrative and ecclesiastical belonging during the Ancien Régime).
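To make this structure more concrete, here is a minimal sketch, in Python with psycopg2, of the kind of PostGIS-backed table the paragraph describes. All table and column names are our own illustrative assumptions, not the project’s actual schema, and the Belgian Lambert 72 grid (EPSG:31370) is chosen only as a plausible projection.

```python
# Minimal sketch of a PostGIS-backed toponym table (hypothetical schema).
import psycopg2

conn = psycopg2.connect("dbname=gazetteer user=postgres")  # adjust credentials
cur = conn.cursor()

cur.execute("""
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE TABLE IF NOT EXISTS toponym (
    id              serial PRIMARY KEY,
    name            text NOT NULL,            -- name as spelled in the source
    type_source     text,                     -- type as described in the source
    type_std        text,                     -- standardized version of the type
    geom            geometry(Point, 31370),   -- location, Belgian Lambert 72
    ext_placename   text,                     -- id in the State Archives project
    ext_dorpskernen text,                     -- id in the 'Dorpskernen' project
    wikidata_qid    text,                     -- Wikidata page, if one exists
    hasquin_ref     text                      -- notice in 'Gemeenten van België'
);
""")
conn.commit()
```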

The ultimate aim of drawing these links with existing (notably historical) resources is to provide researchers with sufficient information on each toponym to facilitate the disambiguation of place names. For instance, if a historian wants to locate a place called “Heikant” mentioned in a primary source, and it appears that several places in Belgium bear that name, knowing to which bishopric each of them belonged in the seventeenth century may help identify the correct one. Finally, thanks to another table of the database (the “relations” table), we also describe elementary relations between toponyms, such as administrative ones (“this hamlet belongs to this town” / “this isolated chapel belongs to this town”).
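Continuing the hypothetical schema sketched above (and reusing its database cursor), the “relations” table and the “Heikant” lookup could look as follows; again, all names and columns are illustrative assumptions.

```python
# Hypothetical 'relations' table for links such as "this hamlet belongs
# to this town", plus the 'Heikant' disambiguation lookup from the text.
cur.execute("""
CREATE TABLE IF NOT EXISTS relations (
    parent_id integer REFERENCES toponym(id),
    child_id  integer REFERENCES toponym(id),
    rel_type  text    -- e.g. 'administrative'
);
""")
conn.commit()

# List every 'Heikant' so each hit can be cross-checked against its
# Ancien Régime bishopric via the linked notice in Gemeenten van België.
cur.execute("""
SELECT t.id, t.name, ST_AsText(t.geom), t.hasquin_ref
FROM toponym t
WHERE lower(t.name) = 'heikant';
""")
for row in cur.fetchall():
    print(row)
```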

Future integration and accessibility

At the end of the project, this gazetteer will be published on the World Historical Gazetteer and progressively loaded onto Druid (a Linked Open Data platform), in order to make it easily reachable for everyone as well as potentially extendable. The gazetteer will enable researchers to easily identify place names they find in their sources, and could in the future be used to automatically ‘map’ (i.e. annotate) written sources to a great extent.
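For readers unfamiliar with how gazetteer data reaches the World Historical Gazetteer, the sketch below builds a single record shaped after our reading of the Linked Places format (the WHG’s contribution format). The URI, coordinates and dates are invented for illustration, and the JSON-LD @context is omitted for brevity.

```python
# Sketch of one Linked Places-style record (illustrative values only).
import json

feature = {
    "@id": "https://example.org/toponym/123",  # hypothetical URI
    "type": "Feature",
    "properties": {"title": "Heikant"},
    "names": [{"toponym": "Heikant",
               "citations": [{"label": "reduced cadastre, c. 1850"}]}],
    "types": [{"label": "hamlet"}],
    "geometry": {"type": "Point", "coordinates": [4.40, 51.22]},  # WGS84 lon/lat
    "when": {"timespans": [{"start": {"in": "1847"}, "end": {"in": "1855"}}]},
}
collection = {"type": "FeatureCollection", "features": [feature]}
print(json.dumps(collection, indent=2))
```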

Beyond these very practical benefits, we think that this Belgian gazetteer can also open new perspectives in landscape history by making it possible to follow the evolution of the Belgian landscape through the evolution of toponyms (name and location) over the long term. This is why, in a second phase (which in practice runs in parallel with the first), we compare the nineteenth-century toponyms with present-day ones using the database of the IGN/NGI. By establishing links between old and present-day toponyms, we can explore how a toponym evolved during the nineteenth and twentieth centuries and research variations of place names through time. In a third phase, toponyms extracted from older material will be added to the database and the same sort of links will be described, making it possible to follow the evolution of place names over an even longer term.
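As a rough illustration of what such old-to-new linking can involve (not the project’s actual matching procedure), one could combine name similarity with spatial proximity; the thresholds and sample coordinates below are invented.

```python
# Propose candidate links between 19th-century and present-day toponyms
# by combining string similarity with distance in a projected CRS.
from difflib import SequenceMatcher
from math import hypot

def candidate_links(old_pts, new_pts, max_dist=500.0, min_sim=0.75):
    """old_pts/new_pts: lists of (name, x, y) tuples, coordinates in metres."""
    links = []
    for o_name, ox, oy in old_pts:
        for n_name, nx, ny in new_pts:
            dist = hypot(ox - nx, oy - ny)
            sim = SequenceMatcher(None, o_name.lower(), n_name.lower()).ratio()
            if dist <= max_dist and sim >= min_sim:
                links.append((o_name, n_name, round(dist), round(sim, 2)))
    return links

# e.g. 'Heykant' (c. 1850) vs 'Heikant' (today), about 120 m apart
print(candidate_links([("Heykant", 152000, 212000)],
                      [("Heikant", 152100, 212065)]))
```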


[1] Duvosquel, Jean-Marie, Hervé Hasquin, and Raymond Van Uytven. Gemeenten van België: Geschiedkundig en administratief-geografisch woordenboek. Brussels: La Renaissance du Livre, 1980.

Reading historical maps in a Digital Era

The promises of Artificial Intelligence technologies to extract historical information from maps are becoming more impressive each day. However, these tools have been developed not only on the basis of, but also for, modern maps. Older ones, including hand-drawn maps (early nineteenth century and earlier), therefore remain, as often with AI, the poor siblings of the AI revolution: the models used to extract information from maps do not work on these older documents. Older maps have different characteristics from modern ones and often do not display the same sort of information, which also means that researchers working on this kind of document do not always have the same research questions as their colleagues working on modern maps.

On the 17th of November, members of the Antwerp group of CLARIAH-VL (Iason Jongepier and Léa Hermenault, assisted by Lamyk Bekius and Rein Debrulle) organized a workshop in Antwerp, with the support of CLARIAH-VL, the University of Antwerp and Ghent University, that aimed to tackle this issue. The underlying idea of the workshop was to facilitate brainstorming around technical solutions that would allow older maps to also benefit from AI technology and AI-related workflows.

The workshop gathered 25 participants and welcomed 13 speakers from various backgrounds. In the morning, researchers from the Alan Turing Institute gave presentations and demos of two applications from the “Living with Machines” project (namely MapReader and Machines Reading Maps). In the afternoon, colleagues from the University of Antwerp, Ghent University and the University of Amsterdam presented their own work on AI and AI-related workflows, among other topics.

Program of the ‘Reading Maps in a Digital Era’ workshop

MapReader, a computer vision pipeline for exploring and analysing images

We first had the great pleasure of hearing Katherine McDonough (Lancaster University & Alan Turing Institute) and Daniel Wilson (Alan Turing Institute), who introduced MapReader, the tool they developed with their team in the framework of the “Living with Machines” project. The tool is open source and can be installed and used by anyone thanks to the instructions available here. Originally developed to automatically detect railway components on the Ordnance Survey maps of England, it can help researchers find any element they are looking for on a raster document by automatically identifying it in pre-defined patches.

MapReader can also simply be used as a way to annotate patches. The tool first needs to be trained on a sample of patches, whose number depends on the size of the patches and on the specific characteristics of the element being searched for: if the latter is not easily distinguishable from other elements, the tool will need to be trained on a very large number of patches, but if, on the contrary, the element has very clear and specific characteristics, the model only needs to be trained on a relatively small number of patches. It should be possible to use the tool to explore old and hand-drawn maps if elements are easily identifiable, given that enough maps with these elements exist. Indeed, one of the main problems with older maps remains that we usually do not have enough material to train the models: very few map collections dating from before the nineteenth century constitute series.
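To make the patch idea concrete, here is a schematic sketch of slicing a scanned sheet into patches, written against PIL rather than MapReader’s own API; the classifier call at the end is a hypothetical placeholder.

```python
# Schematic illustration of the 'patch' approach (not MapReader's API):
# slice a scanned map sheet into fixed-size tiles that a classifier can
# then label, e.g. 'contains railway' vs 'does not'.
from PIL import Image

def patchify(path, patch_size=100):
    """Yield (x, y, patch) tiles of patch_size x patch_size pixels."""
    sheet = Image.open(path)
    w, h = sheet.size
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            yield x, y, sheet.crop((x, y, x + patch_size, y + patch_size))

# A trained model would then score each patch; here only a placeholder:
# for x, y, patch in patchify("sheet_042.png"):
#     label = model.predict(patch)   # hypothetical classifier
```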

MapReader patches on the Ordnance Survey

“Machines Reading Maps”, a tool to automatically transcribe texts on maps

After a short break, Katherine McDonough and Valeria Vitale (Sheffield University & Alan Turing Institute) introduced the audience to another tool, called “Machines Reading Maps”, developed by the University of Southern California Digital Library, the Computer Science & Engineering Department at the University of Minnesota and the Alan Turing Institute. This tool is trained to identify printed text on maps and to transcribe it. It was first tested on the Rumsey collection and then added to the platform that allows users to browse that collection, enabling everyone to look for a toponym not only in the metadata but also on the map itself. Machines Reading Maps can therefore also be used to count the occurrences of a specific place name variant in a collection, for instance, or to gather text written in a specific graphic style (bold, italic, etc.). Even if the latter possibility is mainly of interest for maps produced in series, which drastically limits its use for old maps, the tool still looks promising for pre-nineteenth-century cartography, since writing tended to be standardized earlier than symbology. It should definitely be tested on maps with non-printed text.
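As an illustration of the kinds of queries this enables, once the spotted text is exported as records one could count name variants or filter by graphic style; the record fields and values below are hypothetical, not the tool’s actual output format.

```python
# Hypothetical records of text spotted on map sheets, and two simple queries.
annotations = [
    {"text": "Antwerpen", "style": "bold",   "sheet": "Rumsey-0123"},
    {"text": "Antwerpen", "style": "italic", "sheet": "Rumsey-0456"},
    {"text": "Anvers",    "style": "italic", "sheet": "Rumsey-0789"},
]

# Count occurrences of one place-name variant across the collection:
print(sum(1 for a in annotations if a["text"] == "Antwerpen"))      # -> 2

# Gather all text written in a given graphic style:
print([a["text"] for a in annotations if a["style"] == "italic"])   # -> ['Antwerpen', 'Anvers']
```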

Example of the results given by the search “Antwerpen” in the Rumsey Collection

Applying computer vision on historical documents

The afternoon was organized in three sessions with two short papers in each. The first session, entitled ‘Computer vision’, aimed at broadening our scope to the application of computer vision to geospatial and related data in general. José Oramas (University of Antwerp, Imec/IDLab) gave a paper related to his research on computer vision models applied to pictures, in which he tries to understand how those models really work in order to eventually improve their results. Then Thomas Smits (University of Amsterdam) introduced the audience to research that he conducted together with Mona Allaert, Loren Verreyen, Wouter Haverals and Mike Kestemont at the University of Antwerp, which used computer vision, HTR and Large Language Models to transcribe and geolocate addresses found on 100,000 historical postcards. This research shows how promising HTR techniques are, but also reveals how important it is to have a solid address database for geolocating information, something that is certainly attainable for modern periods but more challenging for older ones.
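The dependence on a reference address database can be illustrated with a toy lookup step; both the normalisation and the database below are stand-ins for illustration, not the team’s actual pipeline.

```python
# Toy geolocation step: resolve an HTR-transcribed address against a
# reference address database (both invented for illustration).
def normalise(addr: str) -> str:
    """Lower-case, strip punctuation and collapse whitespace."""
    return " ".join(addr.lower().replace(",", " ").split())

address_db = {  # hypothetical reference database: address -> (lon, lat)
    "meir 12 antwerpen": (4.4051, 51.2184),
}

htr_output = "Meir, 12  ANTWERPEN"
print(address_db.get(normalise(htr_output)))  # -> (4.4051, 51.2184)
```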

The second session was dedicated to the specific challenges of historical maps. The latter have specificities, advantages and inconveniences that we have to be aware of if we want to build efficient and relevant applications and workflows to facilitate their digital use. The aim of this session was to focus on two very different corpora of maps to broaden our knowledge of their specificities. Dieter De Witte (Ghent University and Imec/IDLab) and Iason Jongepier (University of Antwerp and State Archives) introduced us to the specificities of historical maps of Belgium, as well as to the first attempts at extracting information from them. Then Katherine McDonough and Daniel Wilson showed how they used MapReader to explore “railway spaces” on the Ordnance Survey and explained the advantages of reflecting on those spaces using patches instead of vector data.

The audience, at the end of a long day of work

Pipelines and workflows to scale up the digitization of data

The third and last session focused on pipelines and workflows that can be used for handling data derived from maps, or historical data with a strong spatial component. Janna Aerts (University of Amsterdam) and Leon van Wissen (University of Amsterdam and UvACreate) presented different projects for which historical data have been gathered and connected via the linked open data system AdamLink. Next, Vincent Ducatteeuw (Ghent University) and Léa Hermenault (University of Antwerp) gave a paper related to an article they are currently writing, which aims to show that small-scale/local gazetteers can contribute greatly to the debate regarding the structure of gazetteers, by helping to determine what information should be available in a gazetteer to secure its value for research purposes and to meet FAIR standards.

This fruitful day helped each of the participants to discover new tools and to reflect on new methodological issues. It will without a doubt lead to further explorations and discussions that will hopefully help to unlock access to the historical information that old maps are packed with.