ParaCrawl Corpus Release 7
ParaCrawl 7 is the final release of ParaCrawl Action 2, "Broader Web-Scale Provision of Parallel Corpora for European Languages", and it uses a brand-new version of Bicleaner, version 0.14 (see the full log of changes). Some highlights are as follows:
- new rules have been implemented to filter out noise, e.g. sentences containing many glued words or inappropriate language
- the classifier now uses a different technology: extremely randomised trees replace random forest as the default classifier (see the sketch after this list)
- classifier features have been improved to better cope with OOVs and make the most of the probabilistic dictionaries
- the training procedure has been simplified and logging messages are now more informative
- access to pre-trained language packs has also been made easier
- the 29 available language packs have been updated
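As a rough illustration of the classifier change mentioned above, the following Python sketch swaps a random forest for extremely randomised trees using scikit-learn. The feature matrix and labels are synthetic placeholders, not Bicleaner's actual sentence-pair features or training code.

# Minimal sketch (not Bicleaner's actual training code): replacing a random
# forest with extremely randomised trees as the sentence-pair classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic placeholder features standing in for real sentence-pair features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = {
    "random forest (previous default)": RandomForestClassifier(n_estimators=200, random_state=0),
    "extremely randomised trees (new default)": ExtraTreesClassifier(n_estimators=200, random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))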
Corpora sizes and download links are available from ParaCrawl's website (https://paracrawl.eu/v7).
The latest release of the ParaCrawl OpenSource Pipeline (Bitextor) is available on GitHub.
Web-Scale Acquisition of Parallel Corpora, ParaCrawl in ACL
The main goal of the ParaCrawl project is to create the largest publicly available parallel corpora by crawling hundreds of thousands of websites using open-source tools. As part of this effort, several open-source components have been developed and integrated into Bitextor, a highly modular pipeline that allows harvesting parallel corpora from multilingual websites or from pre-existing or historical web crawls such as Common Crawl or the one available as part of the Internet Archive. The processing pipeline consists of the following steps: crawling, text extraction, document alignment, sentence alignment, and sentence pair filtering. The ACL paper describes these steps in detail and empirically evaluates alternative methods in terms of their impact on machine translation quality: Hunalign, Bleualign and Vecalign are compared for the sentence alignment step, and Zipporah, Bicleaner and LASER for the sentence pair filtering step. Benchmarking data sets for these evaluations are also published.
The released parallel corpora are also described in the paper, with statistics on corpus sizes before and after cleaning for the different languages. The quality and usefulness of the data are measured by training Transformer-based machine translation models with Marian for five different languages, and improvements in BLEU scores are reported over models trained on WMT data sets. Furthermore, the energy consumption of running and maintaining such a computationally expensive pipeline is discussed, and positive environmental impacts are highlighted. The paper aims to contribute to the further development of novel methods for better processing of raw parallel data and to neural machine translation training with noisy data, especially for low-resource languages.
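A hypothetical end-to-end skeleton of the five pipeline steps described above is sketched below in Python; the function names, signatures and stub bodies are illustrative assumptions and do not correspond to Bitextor's actual interfaces.

# Hypothetical skeleton of the five pipeline steps (crawling, text extraction,
# document alignment, sentence alignment, sentence pair filtering). All names
# and bodies are illustrative stubs, not Bitextor's real code.

def crawl(websites):
    """Fetch HTML documents from multilingual websites (stub)."""
    return [{"url": w, "html": "<p>...</p>"} for w in websites]

def extract_text(documents):
    """Strip markup and boilerplate, keeping plain text (stub)."""
    return [{"url": d["url"], "text": d["html"]} for d in documents]

def align_documents(texts):
    """Pair documents that appear to be translations of each other (stub)."""
    return [(a, b) for a in texts for b in texts if a is not b]

def align_sentences(document_pairs):
    """Produce candidate sentence pairs, the step where tools such as Hunalign, Bleualign or Vecalign are compared (stub)."""
    return [("source sentence", "target sentence") for _ in document_pairs]

def filter_pairs(sentence_pairs, threshold=0.7):
    """Keep pairs scored as clean; Zipporah, Bicleaner and LASER are compared for this step (dummy scores here)."""
    scored = [(src, tgt, 1.0) for src, tgt in sentence_pairs]
    return [(src, tgt) for src, tgt, score in scored if score >= threshold]

if __name__ == "__main__":
    corpus = filter_pairs(align_sentences(align_documents(extract_text(crawl(["example.com", "example.org"])))))
    print(len(corpus), "sentence pairs")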
Watch our pre-recorded talk on the ACL 2020 Virtual Conference website.
and join the live Q&A sessions on Tuesday, July 7, 2020:
Session 8A: Resources and Evaluation-7 14:00–15:00 CEST
Session 9A: Resources and Evaluation-9 19:00–20:00 CEST
ParaCrawl Corpus Release 6
Release 6 adds a new language pair, English-Icelandic, and substantially more data for many other languages. Restorative cleaning with Bifixer produces more data through improved sentence splitting, better data by fixing wrong encodings, HTML issues, alphabet issues and typos, and more unique data by identifying not only exact duplicates but also near duplicates. Improved Bicleaner models have also been applied to filter out noisy parallel sentences for this release.
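The near-duplicate removal mentioned above can be pictured with the following minimal sketch, which hashes sentence pairs after aggressive normalisation; the normalisation rules here are illustrative assumptions, not Bifixer's exact implementation.

# Minimal sketch of near-duplicate removal by hashing aggressively normalised
# sentence pairs. The normalisation rules are assumptions, not Bifixer's own.
import hashlib
import re
import unicodedata

def normalise(text):
    """Lowercase, strip accents, drop punctuation and digits, collapse whitespace."""
    text = unicodedata.normalize("NFKD", text.lower())
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    text = re.sub(r"[\W\d_]+", " ", text)
    return " ".join(text.split())

def deduplicate(pairs):
    """Yield one representative sentence pair per normalised hash."""
    seen = set()
    for src, tgt in pairs:
        key = hashlib.md5((normalise(src) + "\t" + normalise(tgt)).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            yield src, tgt

pairs = [("Hello, world!", "¡Hola, mundo!"), ("hello world", "hola mundo")]
print(list(deduplicate(pairs)))  # the second pair is dropped as a near duplicate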
Corpora sizes and download links are available from ParaCrawl's website (https://paracrawl.eu/v6).
The latest release of the ParaCrawl OpenSource Pipeline (Bitextor) is available on GitHub.
The corpus and software are released as part of the ParaCrawl project co-financed by the European Union through the Connecting Europe Facility (CEF). This release used an existing toolchain that will be refined throughout the project and expanded to cover all official EU languages (23 languages parallel with English).
The corpora are released under the Creative Commons CC0 license ("no rights reserved"). (https://creativecommons.org/share-your-work/public-domain/cc0/)
ParaCrawl Corpus Release 5.1
Version 5.1 builds upon the same raw corpus as version 5. Thanks to improvements in the filtering procedure, the official subset extracted as version 5.1 is larger for almost all language pairs (except ga, de, sl and et). Quality, measured extrinsically through MT for several language pairs, also shows improvement.
Corpora sizes and download links are available from ParaCrawl's website (https://paracrawl.eu/v5-1).
This is the official release to be used in WMT20. Stay tuned for more news and follow us on Twitter @ParaCrawl.
The latest release of the ParaCrawl OpenSource Pipeline (Bitextor) is available on GitHub.
The corpus and software are released as part of the ParaCrawl project co-financed by the European Union through the Connecting Europe Facility (CEF). This release used an existing toolchain that will be refined throughout the project and expanded to cover all official EU languages (23 languages parallel with English).
The corpora are released under the Creative Commons CC0 license ("no rights reserved"). (https://creativecommons.org/share-your-work/public-domain/cc0/)
ParaCrawl - A CEF Digital Success Story
EU funding supports ParaCrawl, the largest collection of language resources for many European languages – significantly improving machine translation quality. Read the Success Story published by CEF Digital, titled "ParaCrawl taps the World Wide Web for language resources".