The OCR4all tool enables the conversion of historical printings into computer-readable text. It is reliable, user-friendly, and open source, and was developed by researchers at the University of Würzburg.
We use text mining, deep learning, and big data analytics to unlock the potential of unstructured data and to integrate previously unused assets into decision-making processes.
In this post, I want to show how I use NLTK for preprocessing and tokenization, and then apply machine learning techniques (e.g., training a linear SVM with stochastic gradient descent) using Scikit-Learn.
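A minimal sketch of the Scikit-Learn side of this workflow: `SGDClassifier` with `loss="hinge"` trains a linear SVM via stochastic gradient descent. The toy documents and labels below are invented for illustration; NLTK's `word_tokenize` could be plugged into the vectorizer via its `tokenizer` parameter once the relevant NLTK data is downloaded.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline

# Toy corpus for illustration only
docs = [
    "free prize money now",
    "meeting agenda attached",
    "win cash prize today",
    "project status update",
]
labels = ["spam", "ham", "spam", "ham"]

# loss="hinge" makes SGDClassifier equivalent to a linear SVM
# trained with stochastic gradient descent.
model = Pipeline([
    # tokenizer=nltk.word_tokenize could replace the default here
    ("tfidf", TfidfVectorizer()),
    ("svm", SGDClassifier(loss="hinge", random_state=0)),
])
model.fit(docs, labels)
print(model.predict(["cash prize inside"]))
```

Wrapping the vectorizer and classifier in a `Pipeline` keeps the tokenization and learning steps together, so the same preprocessing is applied at training and prediction time.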
@startuml
participant User
User -> A: DoWork
activate A #FFBBBB
A -> A: Internal call
activate A #DarkSalmon
A -> B: << createRequest >>
activate B
B --> A: RequestCreated
deactivate B
deactivate A
A -> User: Done
deactivate A
@enduml
P. Moreira, Y. Bizzoni, K. Nielbo, I. Lassen, and M. Thomsen. Proceedings of the 5th Workshop on Narrative Understanding, pages 25--35. Toronto, Canada, Association for Computational Linguistics, (July 2023)
Z. Yang, D. Yang, C. Dyer, X. He, A. Smola, and E. Hovy. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480--1489. San Diego, California, Association for Computational Linguistics, (June 2016)
A. Nenkova, and R. Passonneau. Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 145--152. Boston, Massachusetts, USA, Association for Computational Linguistics, (2004)