Mirror of https://github.com/tesseract-ocr/tessdata_fast.git, synced 2024-11-14 18:28:06 +01:00
Formatting Changes
All added info is from Ray's comments on different issues in different repositories.
parent cd93ef77f8
commit 066ce2dc04
1 changed file with 9 additions and 9 deletions
README.md: 18 changed lines
@@ -3,26 +3,26 @@
 This repository contains fast integer versions of trained models for the
 [Tesseract Open Source OCR Engine](https://github.com/tesseract-ocr/tesseract).
 
-Most users will want to use these traineddata files and this is what is planned to be shipped as part of Linux distributions.
+Most users will want to use these traineddata files and this is what is planned to be shipped as part of Linux distributions. Fine tuning/incremental training will **NOT** be possible from these `fast` models, as they are 8-bit integer. It will be possible to convert a tuned `best` to integer to make it faster, but some of the speed in `fast` will be from the smaller model.
 
-When using the models in this repository, only the new LSTM-based OCR engine is supported. The legacy 'tesseract' engine is not supported with these files, so Tesseract's oem modes '0' and '2' won't work with them.
+When using the models in this repository, only the new LSTM-based OCR engine is supported. The legacy `tesseract` engine is not supported with these files, so Tesseract's oem modes '0' and '2' won't work with them.
 
 Initial capitals indicate the one model for all languages in that script.
 
-Latin is all latin-based languages,
-except vie, which has its own Vietnamese.
+**Latin** is all latin-based languages,
+except vie, which has its own **Vietnamese**.
 
-Devanagari is hin+san+mar+nep+eng
+**Devanagari** is hin+san+mar+nep+eng
 
-Fraktur is basically a combination of all the latin-based languages that have an 'old' variant.
+**Fraktur** is basically a combination of all the latin-based languages that have an 'old' variant.
 
-Most of the script models include English training data as well as the script, but not for Cyrillic, as that would have a major ambiguity problem.
+Most of the script models include English training data as well as the script, but not for **Cyrillic**, as that would have a major ambiguity problem.
 
-For Latin-based languages, the existing model data provided has been trained on about 400000 textlines spanning about 4500 fonts. For other scripts, not so many fonts are available, but they have still been trained on a similar number of textlines. `
+For Latin-based languages, the existing model data provided has been trained on about 400000 textlines spanning about 4500 fonts. For other scripts, not so many fonts are available, but they have still been trained on a similar number of textlines.
 
 For Latin, I have ~4500 fonts to train with. For Devanagari ~50, and for Kannada 15. With a theory that poor accuracy on test data and over-fitting on training data was caused by the lack of fonts, I tried mixing the training data with English, thinking that English is often mixed in anyway, and some of the font diversity might generalize to the other script. The overall effect was slightly positive, so I left it that way.
 
-'jpn' contains whatever appears on the www that is labelled as the language, trained only with fonts that can render Japanese. As with most of the other Script traineddatas, 'Japanese' contains all the languages that use that script (in this case just the one) PLUS English. The resulting model is trained with a mix of both training sets, with the expectation that some of the generalization to 4500 English training fonts will also apply to the other script that has a lot less.
+'jpn' contains whatever appears on the www that is labelled as the language, trained only with fonts that can render Japanese. As with most of the other Script traineddatas, **Japanese** contains all the languages that use that script (in this case just the one) PLUS English. The resulting model is trained with a mix of both training sets, with the expectation that some of the generalization to 4500 English training fonts will also apply to the other script that has a lot less.
 
 'jpn_vert' is trained on text rendered vertically (but the image is rotated so the long edge is still horizontal).
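The README's note about oem modes refers to Tesseract's engine-selection switch (`--oem`, values 0-3). A small illustrative Python sketch of which modes can work with these integer-only `fast` files; the mode descriptions follow the output of `tesseract --help-oem`, while the function name is our own:

```python
# OCR engine modes, as listed by `tesseract --help-oem`.
OEM_MODES = {
    0: "Legacy engine only",
    1: "Neural nets LSTM engine only",
    2: "Legacy + LSTM engines",
    3: "Default, based on what is available",
}

def works_with_fast_model(oem: int) -> bool:
    """A fast traineddata file ships only the 8-bit LSTM model, so any
    mode that needs the legacy engine (0 and 2) cannot run with it."""
    return oem in (1, 3)

for mode, name in OEM_MODES.items():
    status = "ok" if works_with_fast_model(mode) else "unsupported"
    print(f"--oem {mode} ({name}): {status}")
```

In practice, with a `fast` model installed a command such as `tesseract image.png out --oem 1 -l eng` would use the LSTM engine, while `--oem 0` or `--oem 2` would fail because no legacy model data is present in the file.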
|