
A Survey of Historical Document Image Datasets

arXiv.org, 2022-10

2022. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. EISSN: 2331-8422; DOI: 10.48550/arxiv.2203.08504

  • Title:
    A Survey of Historical Document Image Datasets
  • Author: Nikolaidou, Konstantina ; Seuret, Mathias ; Mokayed, Hamam ; Liwicki, Marcus
  • Subjects: Algorithms ; Benchmarks ; Computer Science - Computer Vision and Pattern Recognition ; Computer vision ; Datasets ; Documents ; Evaluation ; Handwriting ; Image analysis ; Literature reviews ; Machine learning ; Support systems ; Visual aspects ; Visual tasks
  • Is Part Of: arXiv.org, 2022-10
  • Description: This paper presents a systematic literature review of image datasets for document image analysis, focusing on historical documents such as handwritten manuscripts and early prints. Finding appropriate datasets is a crucial prerequisite for research using different machine learning algorithms, yet it is difficult because of the wide variety of the actual data (e.g., scripts, tasks, dates, support systems, and amount of deterioration), the differing formats for data and label representation, and the differing evaluation processes and benchmarks. This work fills that gap by presenting a meta-study on existing datasets. After a systematic selection process (following the PRISMA guidelines), we selected 65 studies based on factors such as year of publication, number of methods implemented in the article, reliability of the chosen algorithms, dataset size, and journal outlet. We summarize each study by assigning it to one of three pre-defined tasks: document classification, layout structure, or content analysis. For every dataset we report statistics, document type, language, tasks, input visual aspects, and ground-truth information, and we provide the benchmark tasks and results from these papers or recent competitions. We further discuss gaps and challenges in this domain, and we advocate providing conversion tools to common formats (e.g., the COCO format for computer vision tasks) and always reporting a set of evaluation metrics, rather than just one, to make results comparable across studies (a minimal COCO-conversion sketch follows this record).
  • Publisher: Ithaca: Cornell University Library, arXiv.org
  • Language: English
  • Identifier: EISSN: 2331-8422
    DOI: 10.48550/arxiv.2203.08504
  • Source: Freely Accessible Journals
    arXiv.org
    ROAD: Directory of Open Access Scholarly Resources
    ProQuest Central
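
The description above recommends shipping conversion tools to common formats such as COCO. The sketch below shows one minimal way such a converter could look in Python, assuming a hypothetical page-level annotation format with labelled bounding boxes; the input field names ("page", "boxes", "label") and the category label are illustrative assumptions, not taken from any of the surveyed datasets.

import json

# Hypothetical input: one record per page image, each with labelled
# bounding boxes. This format is an illustrative assumption.
pages = [
    {"page": "ms_001.jpg", "width": 2000, "height": 3000,
     "boxes": [{"label": "text_region", "x": 100, "y": 150, "w": 1700, "h": 400}]},
]

# Build the COCO categories table from the labels actually present,
# so the converter stays agnostic to any particular label set.
categories = sorted({b["label"] for p in pages for b in p["boxes"]})
cat_ids = {name: i + 1 for i, name in enumerate(categories)}

coco = {
    "images": [],
    "annotations": [],
    "categories": [{"id": i, "name": n} for n, i in cat_ids.items()],
}

ann_id = 1
for img_id, p in enumerate(pages, start=1):
    coco["images"].append({"id": img_id, "file_name": p["page"],
                           "width": p["width"], "height": p["height"]})
    for b in p["boxes"]:
        coco["annotations"].append({
            "id": ann_id,
            "image_id": img_id,
            "category_id": cat_ids[b["label"]],
            "bbox": [b["x"], b["y"], b["w"], b["h"]],  # COCO convention: [x, y, width, height]
            "area": b["w"] * b["h"],
            "iscrowd": 0,
        })
        ann_id += 1

with open("annotations_coco.json", "w") as f:
    json.dump(coco, f, indent=2)

Deriving the categories table from the data rather than hard-coding it reflects the survey's point about heterogeneous label formats: the same converter can then be pointed at datasets with different region vocabularies.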
