
A survey of historical document image datasets

International journal on document analysis and recognition, 2022-12, Vol.25 (4), p.305-338 [Peer Reviewed Journal]

© The Author(s) 2022 · ISSN: 1433-2833 · EISSN: 1433-2825 · DOI: 10.1007/s10032-022-00405-8


  • Author: Nikolaidou, Konstantina ; Seuret, Mathias ; Mokayed, Hamam ; Liwicki, Marcus
  • Subjects: Computer Science ; Document image analysis ; Historical documents ; Image datasets ; Image Processing and Computer Vision ; Machine learning ; Pattern Recognition ; Special Issue Paper
  • Description: This paper presents a systematic literature review of image datasets for document image analysis, focusing on historical documents, such as handwritten manuscripts and early prints. Finding appropriate datasets for historical document analysis is a crucial prerequisite to facilitate research using different machine learning algorithms. However, because of the very large variety of the actual data (e.g., scripts, tasks, dates, support systems, and amount of deterioration), the different formats for data and label representation, and the different evaluation processes and benchmarks, finding appropriate datasets is a difficult task. This work fills this gap, presenting a meta-study on existing datasets. After a systematic selection process (according to PRISMA guidelines), we select 65 studies that are chosen based on different factors, such as the year of publication, number of methods implemented in the article, reliability of the chosen algorithms, dataset size, and journal outlet. We summarize each study by assigning it to one of three pre-defined tasks: document classification, layout structure, or content analysis. We present the statistics, document type, language, tasks, input visual aspects, and ground truth information for every dataset. In addition, we provide the benchmark tasks and results from these papers or recent competitions. We further discuss gaps and challenges in this domain. We advocate for providing conversion tools to common formats (e.g., COCO format for computer vision tasks) and always providing a set of evaluation metrics, instead of just one, to make results comparable across studies.
  • Publisher: Berlin/Heidelberg: Springer Berlin Heidelberg
  • Language: English
  • Source: SWEPUB (freely available online)
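The abstract advocates shipping conversion tools to common formats such as COCO so that results become comparable across datasets. As a minimal sketch of what a COCO-style layout annotation file contains (the file name, image dimensions, category labels, and coordinates below are hypothetical examples, not taken from the survey):

```python
import json

# Minimal COCO-style record for a historical document layout dataset.
# All concrete values (file name, sizes, labels, boxes) are illustrative.
coco = {
    "images": [
        {"id": 1, "file_name": "manuscript_page_001.jpg",
         "width": 2480, "height": 3508},
    ],
    "categories": [
        {"id": 1, "name": "text_region"},
        {"id": 2, "name": "illustration"},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels, per the COCO convention
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [200, 300, 1800, 2500],
         "area": 1800 * 2500, "iscrowd": 0},
    ],
}

# Serializing to JSON yields a file that common detection toolkits
# (which expect the images/annotations/categories layout) can load.
annotation_json = json.dumps(coco, indent=2)
print(annotation_json.splitlines()[0])
```

A dataset released in a bespoke XML or PAGE-style format could provide a small script of this shape as its conversion tool, which is the kind of interoperability the authors argue for.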
