The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances

Data mining and knowledge discovery, 2017-05, Vol.31 (3), p.606-660 [Peer Reviewed Journal]

© The Author(s) 2016. Data Mining and Knowledge Discovery © Springer, 2017. ISSN: 1384-5810; EISSN: 1573-756X; DOI: 10.1007/s10618-016-0483-9; PMID: 30930678

  • Title:
    The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances
  • Author: Bagnall, Anthony ; Lines, Jason ; Bostrom, Aaron ; Large, James ; Keogh, Eamonn
  • Subjects: Academic Surveys and Tutorials ; Algorithms ; Archives ; Archives & records ; Artificial Intelligence ; Benchmarking ; Chemistry and Earth Sciences ; Classification ; Classifiers ; Computer Science ; Data mining ; Data Mining and Knowledge Discovery ; Datasets ; Experiments ; Information Storage and Retrieval ; Java ; Machine learning ; Physics ; Statistics for Engineering ; Time series
  • Is Part Of: Data mining and knowledge discovery, 2017-05, Vol.31 (3), p.606-660
  • Description: In the last 5 years there have been a large number of new time series classification algorithms proposed in the literature. These algorithms have been evaluated on subsets of the 47 datasets in the University of California, Riverside time series classification archive. The archive has recently been expanded to 85 datasets, over half of which have been donated by researchers at the University of East Anglia. Aspects of previous evaluations have made comparisons between algorithms difficult. For example, several different programming languages have been used, experiments involved a single train/test split, and some used normalised data whilst others did not. The relaunch of the archive provides a timely opportunity to thoroughly evaluate algorithms on a larger number of datasets. We have implemented 18 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other) by performing 100 resampling experiments on each of the 85 datasets. We use these results to test several hypotheses relating to whether the algorithms are significantly more accurate than the benchmarks and each other. Our results indicate that only nine of these algorithms are significantly more accurate than both benchmarks and that one classifier, the collective of transformation ensembles, is significantly more accurate than all of the others. All of our experiments and results are reproducible: we release all of our code, results and experimental details and we hope these experiments form the basis for more robust testing of new algorithms in the future. (A sketch of the resampling protocol described here follows the record below.)
  • Publisher: New York: Springer US
  • Language: English
  • Identifier: ISSN: 1384-5810
    EISSN: 1573-756X
    DOI: 10.1007/s10618-016-0483-9
    PMID: 30930678
  • Source: AUTh Library subscriptions: ProQuest Central
    Springer Open Access journals
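
The description above outlines the paper's evaluation protocol: score each classifier on 100 seeded train/test resamples of every dataset and compare mean accuracies against benchmark classifiers. Below is a minimal, self-contained Java sketch of that loop (Java being the language of the authors' common framework). Everything in it is illustrative: the Instance record, the plain random shuffle, and the 1-NN Euclidean stand-in classifier are assumptions made for the sketch, not the paper's actual API; the published experiments use stratified resamples that preserve each archive dataset's original train/test sizes, and stronger benchmarks such as 1-NN with dynamic time warping.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

/**
 * Minimal sketch of the resampling evaluation protocol described in the
 * abstract: repeatedly resample a dataset into train/test splits with a
 * fixed seed per fold, score a classifier on each resample, and report
 * the mean accuracy. The types and the 1-NN Euclidean stand-in classifier
 * are illustrative assumptions, not the paper's actual Java framework.
 */
public class ResampleEvaluation {

    /** One labelled time series (hypothetical representation). */
    record Instance(double[] series, int label) {}

    /** 1-NN with squared Euclidean distance, used here only as a simple stand-in. */
    static int predict(List<Instance> train, double[] query) {
        Instance best = null;
        double bestDist = Double.MAX_VALUE;
        for (Instance t : train) {
            double d = 0.0;
            for (int i = 0; i < query.length; i++) {
                double diff = query[i] - t.series()[i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = t; }
        }
        return best.label();
    }

    /** One accuracy estimate from a single seeded train/test resample. */
    static double evaluateResample(List<Instance> data, int trainSize, long seed) {
        List<Instance> shuffled = new ArrayList<>(data);
        Collections.shuffle(shuffled, new Random(seed)); // plain shuffle; the paper stratifies
        List<Instance> train = shuffled.subList(0, trainSize);
        List<Instance> test = shuffled.subList(trainSize, shuffled.size());
        int correct = 0;
        for (Instance inst : test) {
            if (predict(train, inst.series()) == inst.label()) correct++;
        }
        return correct / (double) test.size();
    }

    public static void main(String[] args) {
        // Toy two-class dataset; real experiments would load one of the 85 archive datasets.
        Random rng = new Random(42);
        List<Instance> data = new ArrayList<>();
        for (int i = 0; i < 60; i++) {
            int label = i % 2;
            double[] series = new double[16];
            for (int j = 0; j < series.length; j++) {
                series[j] = label + 0.3 * rng.nextGaussian();
            }
            data.add(new Instance(series, label));
        }

        int folds = 100; // the paper runs 100 resamples per dataset
        double sum = 0.0;
        for (int fold = 0; fold < folds; fold++) {
            sum += evaluateResample(data, data.size() / 2, fold); // seed by fold index
        }
        System.out.printf("Mean accuracy over %d resamples: %.3f%n", folds, sum / folds);
    }
}
```

Seeding each resample by its fold index mirrors the reproducibility goal stated in the abstract: anyone re-running the loop recovers the identical 100 train/test splits.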
