A Wikipedia dataset containing cleaned articles in all languages. The datasets are built from the Wikipedia dumps (https://dumps.wikimedia.org/) with one split per ...
We're on a journey to advance and democratize artificial intelligence through open source and open science.
Load a dataset in a single line of code, and use our powerful data processing methods to quickly get your dataset ready for training a deep learning model.
Dataset Summary. Wiki Question Answering corpus from Microsoft. The WikiQA corpus is a publicly available set of question and sentence pairs, collected and ...
The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/) with one subset per language, each containing a single train split. Each ...
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia.
wikipedia persons masked: A filtered version of the wikipedia dataset containing only pages about people. Dataset Summary. Contains ~70k pages from Wikipedia, ...