Wikipedia dataset containing cleaned articles of all languages. The datasets are built from the Wikipedia dump (https://dumps.wikimedia.org/) with one split per language.
People also ask
What is the Hugging Face dataset?
🤗 Datasets is a library for easily accessing and sharing datasets for Audio, Computer Vision, and Natural Language Processing (NLP) tasks. Load a dataset in a single line of code, and use its powerful data processing methods to quickly get your dataset ready for training a deep learning model.
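A minimal sketch of that single-line workflow, assuming the `datasets` library is installed; the dataset name ("imdb") and the derived column are illustrative choices, not part of the answer above:

```python
# Minimal sketch of the 🤗 Datasets workflow described above.
# "imdb" is just an example dataset; any Hub dataset id works the same way.
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")   # one line to fetch a dataset
print(dataset[0]["text"][:100])                 # inspect the first example

# Built-in processing methods (e.g. map) prepare the data for training.
dataset = dataset.map(lambda ex: {"n_chars": len(ex["text"])})
```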
What is Hugging Face used for?
Hugging Face is a machine learning (ML) and data science platform and community that helps users build, deploy and train machine learning models. It provides the infrastructure to demo, run and deploy artificial intelligence (AI) in live applications.
How do I download a dataset from Hugging Face?
Go to the Datasets section of the Hugging Face Hub and search for the dataset you want to download. Open its "Files and versions" tab, where you can find the required data files.
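Besides browsing "Files and versions" by hand, the files can also be fetched programmatically. A sketch using `snapshot_download` from the `huggingface_hub` library; the repo id below is an assumption about where that dataset currently lives on the Hub:

```python
# Download a whole dataset repository to the local cache instead of
# clicking through "Files and versions". The repo id is illustrative.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Salesforce/wikitext", repo_type="dataset")
print(local_dir)  # path to the downloaded files
```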
How does Hugging Face make money?
At Hugging Face, we build a collaboration platform for the ML community (i.e., the Hub), and we monetize by providing simple access to compute for AI, with services like AutoTrain, Spaces and Inference Endpoints, directly accessible from the Hub, and billed by Hugging Face to the credit card on file.
Dataset summary: WikiQA is a Wiki Question Answering corpus from Microsoft, a publicly available set of question and sentence pairs collected and annotated for research on open-domain question answering.
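A sketch of loading WikiQA with 🤗 Datasets; the "microsoft/wiki_qa" repo id is an assumption about where the corpus is hosted on the Hub:

```python
# Load the WikiQA question/sentence pairs. Each row carries a question,
# a candidate answer sentence, and a relevance label.
from datasets import load_dataset

wikiqa = load_dataset("microsoft/wiki_qa", split="train")
print(wikiqa[0])
```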
The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/) with one subset per language, each containing a single train split. Each example contains the content of one full Wikipedia article, cleaned to strip markdown and unwanted sections (references, etc.).
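A sketch of loading one language subset of this Wikipedia dataset; the repo id and the dated config name ("20231101.fa" for Persian) are assumptions about how the subsets are currently named on the Hub:

```python
# Each language is a separate config with a single "train" split,
# as described above. "20231101.fa" selects the Persian subset.
from datasets import load_dataset

wiki_fa = load_dataset("wikimedia/wikipedia", "20231101.fa", split="train")
print(wiki_fa[0]["title"])  # title of the first cleaned article
```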
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia.
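A sketch of loading the WikiText-103 configuration mentioned above; the repo and config names ("Salesforce/wikitext", "wikitext-103-raw-v1") reflect the current Hub layout as an assumption and may need adjusting if the dataset moves:

```python
# WikiText-103: over 100 million tokens from verified Good/Featured articles.
from datasets import load_dataset

wikitext = load_dataset("Salesforce/wikitext", "wikitext-103-raw-v1", split="train")
print(len(wikitext), "text rows in the training split")
```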
PQuAD is a crowd-sourced reading comprehension dataset for the Persian language. It includes 80,000 questions along with their answers, with 25% of the questions being adversarially unanswerable.
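A sketch of loading PQuAD with 🤗 Datasets; the repo id below is hypothetical and the field names assume a SQuAD-style schema, so check the dataset's actual location and features on the Hub before relying on it:

```python
# Hypothetical repo id and SQuAD-style fields (question / context / answers).
from datasets import load_dataset

pquad = load_dataset("Gholamreza/pquad", split="train")  # hypothetical repo id
print(pquad[0]["question"])
```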