AI Reference Datasets
Latest revision as of 19:05, 31 May 2024
UFRC maintains a repository of reference AI datasets that can be accessed by all HiPerGator users. The primary purposes of this repository are researcher convenience, efficient use of filesystem space, and cost savings. Research groups do not have to use their Blue or Orange quota to host their own copies of these reference datasets.
Please note that although these datasets are all freely available, data use licenses and restrictions vary among them. If you use these datasets, it is your responsibility to ensure that your use of the data complies with applicable licenses and any other use restrictions.
Use https://support.rc.ufl.edu to request the addition of a reference dataset. All reference datasets hosted on HiPerGator must comply with Research Computing's AI reference dataset hosting policy.
Some software applications may need the full path, /blue/data/ai, rather than the shorter /data/ai paths listed in the catalog below, in order to locate the files.
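Based on the note above, the catalog's `/data/ai` locations are assumed to also resolve under the full `/blue/data/ai` path; a minimal helper sketch for rewriting the prefix (the helper name and the equivalence of the two prefixes are assumptions, not documented API):

```python
SHORT_PREFIX = "/data/ai"      # prefix used in the catalog below
FULL_PREFIX = "/blue/data/ai"  # full path some applications require (assumed equivalent)

def full_path(catalog_path: str) -> str:
    """Rewrite a catalog location to the full /blue path, if it uses the short prefix."""
    if catalog_path == SHORT_PREFIX or catalog_path.startswith(SHORT_PREFIX + "/"):
        return FULL_PREFIX + catalog_path[len(SHORT_PREFIX):]
    return catalog_path

print(full_path("/data/ai/ref-data/audio/LibriSpeech"))
# → /blue/data/ai/ref-data/audio/LibriSpeech
```

Paths that do not start with `/data/ai` are returned unchanged.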
Catalog of available datasets
| Name | Categories | Location on HiPerGator | Dataset size (approximate) | Version | License | Date added | Description |
|---|---|---|---|---|---|---|---|
| Free Spoken Digit Dataset (FSDD) | Audio | `/data/ai/ref-data/audio/free-spoken-digit-dataset-1.0.10` | 20.4 MiB | v1.0.10 | Creative Commons Attribution-ShareAlike 4.0 International | 11-Mar-21 | A simple audio/speech dataset consisting of recordings of spoken digits in wav files at 8 kHz. The recordings are trimmed so that they have near-minimal silence at the beginnings and ends. |
| Freesound Dataset 50k (FSD50K) | Audio | `/data/ai/ref-data/audio/FSD50K` | 32.2 GiB | 1.0 (10.5281/zenodo.4060432) | Mixed Creative Commons licenses | 12-Mar-21 | FSD50K is an open dataset of human-labeled sound events containing 51,197 Freesound clips unequally distributed in 200 classes drawn from the AudioSet Ontology. |
| LibriSpeech ASR corpus | Audio | `/data/ai/ref-data/audio/LibriSpeech` | 59.4 GiB | SLR12 | Creative Commons Attribution 4.0 International | 11-Mar-21 | LibriSpeech is a corpus of approximately 1,000 hours of 16 kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project and has been carefully segmented and aligned. |
| ADE20K | Computer vision | `/data/ai/ref-data/video/ADE20K` | 7.2 GiB | | Not reported | 23-Aug-22 | The ADE20K semantic segmentation dataset contains more than 20K scene-centric images exhaustively annotated with pixel-level object and object-part labels. There are 150 semantic categories in total, including stuff such as sky, road, and grass, and discrete objects such as person, car, and bed. |
| CelebA | Computer vision | `/data/ai/ref-data/image/Celebfaces` | 8.5 GiB | | Not reported | 23-Aug-22 | The CelebFaces Attributes dataset contains 202,599 face images of size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes such as hair color, gender, and age. |
| CIFAR-10 | Computer vision | `/data/ai/ref-data/image/cifar-10-batches-py` | 177.6 MiB | | Not reported | 12-Mar-21 | The CIFAR-10 dataset consists of 60,000 32×32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images. |
| CIFAR-100 | Computer vision | `/data/ai/ref-data/image/cifar-100` | 177.7 MiB | | Not reported | 23-Aug-22 | The CIFAR-100 dataset consists of 60,000 32×32 colour images in 100 classes, with 600 images per class. There are 50,000 training images and 10,000 test images. The 100 classes are grouped into 20 superclasses. Each image comes with a 'fine' label. |
| COIL | Computer vision | `/data/ai/ref-data/image/COIL` | 821.3 MiB | | Not reported | 23-Aug-22 | COIL-100 was collected by the Center for Research on Intelligent Systems at the Department of Computer Science, Columbia University. The dataset contains 7,200 color images of 100 objects (72 images per object). The objects were placed on a motorized turntable against a black background, and images were taken at pose intervals of 5 degrees. This dataset was used in a real-time 100-object recognition system whereby a system sensor could identify the object and display its angular pose. |
| FIRE: Fundus Image Registration Dataset | Computer vision | `/data/ai/ref-data/image/FIRE` | 478.1 MiB | | Not reported | 23-Aug-22 | The dataset consists of 129 retinal images forming 134 image pairs, split into 3 categories depending on their characteristics. The images were acquired with a Nidek AFC-210 fundus camera, which produces images with a resolution of 2912×2912 pixels and a FOV of 45° in both the x and y dimensions. Images were acquired at the Papageorgiou Hospital, Aristotle University of Thessaloniki, from 39 patients. |
| Google Landmarks Dataset | Computer vision | `/data/ai/ref-data/image/google-landmark` | 588.0 GiB | v2 | Not reported | 23-Aug-22 | The second version of the Google Landmarks dataset (GLDv2) contains images annotated with labels representing human-made and natural landmarks, and can be used for landmark recognition and retrieval experiments. This version contains approximately 5 million images, split into 3 sets: train, index, and test. |
| HOLLYWOOD-2 | Computer vision | `/data/ai/ref-data/video/Hollywood2` | 40.4 GiB | | Not reported | 23-Aug-22 | Hollywood-2 is a dataset with 12 classes of human actions and 10 classes of scenes distributed over 3,669 video clips and approximately 20.1 hours of video in total, composed of clips from 69 movies. It is intended to provide a comprehensive benchmark for human action recognition in realistic and challenging settings. |
| ImageNet-1K | Computer vision | `/data/ai/ref-data/image/ImageNet/imagenet1k` | 156.2 GiB | ILSVRC2012-2017 | Custom (research, non-commercial) | 23-Aug-22 | The most widely used subset of ImageNet is the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012-2017 image classification and localization dataset. This dataset spans 1,000 object classes and contains 1,281,167 training images, 50,000 validation images, and 100,000 test images. |
| ImageNet-21K | Computer vision | `/data/ai/ref-data/image/ImageNet/imagenet21k_resized` | 269.4 GiB | | Custom (research, non-commercial) | 23-Aug-22 | ImageNet is an image database organized according to the WordNet hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds to thousands of images. This dataset is a processed version of ImageNet-21K. |
| Kinetics-400 | Computer vision | `/data/ai/ref-data/video/Kinetics-400/kinetics-dataset` | 438.0 GiB | | Creative Commons Attribution 4.0 International | 23-Aug-22 | The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos, consisting of around 500,000 video clips covering 600 human action classes with at least 600 clips per class. Each clip lasts around 10 seconds and is labeled with a single action class. The videos are collected from YouTube. |
| MPI-Sintel | Computer vision | `/data/ai/ref-data/video/MPI-Sintel` | 5.7 GiB | | Not reported | 23-Aug-22 | MPI (Max Planck Institute) Sintel is a dataset for optical flow evaluation that has 1,064 synthesized stereo images and ground-truth data for disparity. Sintel is derived from the open-source 3D animated short film Sintel. The dataset has 23 different scenes. The stereo images are RGB while the disparity is grayscale; both have a resolution of 1024×436 pixels and 8 bits per channel. |
| Open Images Dataset | Computer vision | `/data/ai/ref-data/image/OpenImagesDataset` | 1.1 TiB | V6 and Extended | Creative Commons Attribution 4.0 International | 23-Aug-22 | Open Images is a dataset of ~9M images annotated with image-level labels, object bounding boxes, object segmentation masks, visual relationships, and localized narratives. This subset includes bounding boxes (600 classes), object segmentations, visual relationships, and localized narratives. The annotation files cover the 600 boxable object classes and span the 1,743,042 annotated training images as well as the full validation (41,620 images) and test (125,436 images) sets. |
| VisualGenome | Computer vision | `/data/ai/ref-data/image/VisualGenome/1.2` | 23.9 GiB | 1.2 | Creative Commons Attribution 4.0 International | 23-Aug-22 | Visual Genome is a dataset and knowledge base, an ongoing effort to connect structured image concepts to language. Total images: 108,077; total region descriptions: 4,297,502; total image object instances: 1,366,673; unique image objects: 75,729; total object-object relationship instances: 1,531,448. |
| VisualGenome | Computer vision | `/data/ai/ref-data/image/VisualGenome/1.4` | 1.0 GiB | 1.4 | Creative Commons Attribution 4.0 International | 23-Aug-22 | Visual Genome is a dataset and knowledge base, an ongoing effort to connect structured image concepts to language. Total images: 108,077; total region descriptions: 4,297,502. |
| Youtube-8M | Computer vision | `/data/ai/ref-data/video/Youtube-8M` | 27.4 GiB | | Apache License 2.0 | 23-Aug-22 | The YouTube-8M dataset is a large-scale video dataset that includes more than 7 million videos with 4,716 classes labeled by the annotation system. The dataset consists of three parts: training set, validation set, and test set. In the training set, each class contains at least 100 training videos. Features of these videos are extracted by state-of-the-art popular pre-trained models and released for public use. Each video contains audio and visual modalities. Based on visual information, videos are divided into 24 topics, such as sports, games, and arts & entertainment. |
| DeepAtomDB_v2018-MD | Molecular Dynamics Trajectories | `/data/ai/ref-data/proteinbinding` | 3.3 TB | 0.1 | Creative Commons Attribution-ShareAlike 4.0 International | 23-Mar-22 | MD trajectories for drug-protein complexes extracted from the PDBBind, Binding MOAD, and Astex databases. |
| PASCAL | Object segmentation | `/data/ai/ref-data/image/PASCAL` | 4.4 GiB | VOC2012 | Not reported | 19-Apr-22 | Object datasets from the VOC challenges. The main goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e., not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. |
| COCO | Object segmentation | `/data/ai/ref-data/image/COCO` | 47.7 GiB | 2017 | CC-BY 4.0 | 19-Apr-22 | COCO is a large-scale object detection, segmentation, and captioning dataset. COCO has several features: 330K images (>200K labeled), 1.5 million object instances, 80 object categories, 91 stuff categories, 5 captions per image, and 250,000 people with keypoints. |
| Babelnet-v5 | Text | `/data/ai/ref-data/nlp/babelnet` | 10.3 KiB | May 2022 | Researchers who agree to the license will be granted access to BabelNet databases | 23-May-22 | BabelNet is an NLP dictionary of words and their meanings. |
| Wikipedia | Text | `/data/ai/ref-data/nlp/wikipedia` | 30.9 GiB | January 2021 | Creative Commons Attribution-ShareAlike 4.0 International | 23-Jan-21 | Wikipedia articles as downloaded in January 2021 from https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2, cleaned using the wikiextractor Python library. |
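The catalog is generated from a machine-readable CSV published by UFRC at https://data.rc.ufl.edu/pub/ufrc/data/ai_reference_data.csv. A minimal sketch for reading it programmatically follows; the field names (`name`, `dirpath`, `categories`, etc.) are taken from that file's header, but the exact schema may change, and the helper functions here are illustrative, not an official API:

```python
import csv
import io
import urllib.request

CATALOG_URL = "https://data.rc.ufl.edu/pub/ufrc/data/ai_reference_data.csv"

def parse_catalog(csv_text: str) -> list:
    """Parse the catalog CSV into one dict per dataset, keyed by the header row."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def datasets_in_category(rows: list, category: str) -> list:
    """Names of datasets whose 'categories' field mentions the given category."""
    return [r["name"] for r in rows if category.lower() in r["categories"].lower()]

if __name__ == "__main__":
    # Requires network access to the UFRC data server.
    with urllib.request.urlopen(CATALOG_URL) as resp:
        rows = parse_catalog(resp.read().decode("utf-8"))
    for name in datasets_in_category(rows, "Audio"):
        print(name)
```

The same parsing works on a locally saved copy of the CSV, which avoids re-fetching it for every query.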