AI Reference Datasets

From UFRC
Revision as of 21:02, 11 March 2021


UFRC maintains a repository of reference AI datasets that can be accessed by all HiPerGator users. The primary purposes of this repository are researcher convenience, efficient use of filesystem space, and cost savings. Research groups do not have to use their Blue or Orange quota to host their own copies of these reference datasets.

Use https://support.rc.ufl.edu to request the addition of a reference dataset.

Summary of available datasets

Each dataset is listed with its name, category, location on HiPerGator, approximate size, version (where recorded), and description. (The underlying data is published at https://data.rc.ufl.edu/pub/ufrc/data/ai_reference_data.csv.)

Free Spoken Digit Dataset
* Category: Audio
* Location: /data/ai/ref-data/audio/free-spoken-digit-dataset-1.0.10
* Size: ~20.4 MiB
* Version: v1.0.10
* Description: A simple audio/speech dataset consisting of recordings of spoken digits in WAV files at 8 kHz. The recordings are trimmed so that they have near-minimal silence at the beginnings and ends.

FSD50K
* Category: Audio
* Location: /data/ai/ref-data/audio/FSD50K
* Size: ~32.2 GiB
* Version: 1.0 (DOI 10.5281/zenodo.4060432)
* Description: FSD50K is an open dataset of human-labeled sound events containing 51,197 Freesound clips unequally distributed in 200 classes drawn from the AudioSet Ontology.

LibriSpeech
* Category: Audio
* Location: /data/ai/ref-data/audio/LibriSpeech
* Size: ~59.4 GiB
* Version: SLR12
* Description: LibriSpeech is a corpus of approximately 1,000 hours of 16 kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project and has been carefully segmented and aligned.

ADE20K
* Category: Computer vision
* Location: /data/ai/ref-data/video/ADE20K
* Size: ~7.2 GiB
* Description: The ADE20K semantic segmentation dataset contains more than 20K scene-centric images exhaustively annotated with pixel-level object and object-part labels. There are 150 semantic categories in total, including "stuff" categories such as sky, road, and grass, and discrete objects such as person, car, and bed.

CelebFaces Attributes
* Category: Computer vision
* Location: /data/ai/ref-data/image/Celebfaces
* Size: ~8.5 GiB
* Description: The CelebFaces Attributes dataset contains 202,599 face images of size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes such as hair color, gender, and age.

CIFAR-10
* Category: Computer vision
* Location: /data/ai/ref-data/image/cifar-10-batches-py
* Size: ~177.6 MiB
* Description: The CIFAR-10 dataset consists of 60,000 32×32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.

CIFAR-100
* Category: Computer vision
* Location: /data/ai/ref-data/image/cifar-100
* Size: ~177.7 MiB
* Description: The CIFAR-100 dataset consists of 60,000 32×32 colour images in 100 classes, with 600 images per class. There are 50,000 training images and 10,000 test images. The 100 classes are grouped into 20 superclasses, and each image comes with a "fine" label (its class) and a "coarse" label (its superclass).

COIL-100
* Category: Computer vision
* Location: /data/ai/ref-data/image/COIL
* Size: ~821.3 MiB
* Description: COIL-100 was collected by the Center for Research on Intelligent Systems at the Department of Computer Science, Columbia University. It contains 7,200 color images of 100 objects (72 images per object). The objects were placed on a motorized turntable against a black background, and images were taken at pose intervals of 5 degrees. The dataset was used in a real-time 100-object recognition system in which a sensor could identify an object and display its angular pose.

FIRE
* Category: Computer vision
* Location: /data/ai/ref-data/image/FIRE
* Size: ~478.1 MiB
* Description: The dataset consists of 129 retinal images forming 134 image pairs, split into 3 categories depending on their characteristics. The images were acquired with a Nidek AFC-210 fundus camera, which produces images with a resolution of 2912×2912 pixels and a FOV of 45° in both the x and y dimensions, from 39 patients at the Papageorgiou Hospital, Aristotle University of Thessaloniki.

Google Landmarks v2
* Category: Computer vision
* Location: /data/ai/ref-data/image/google-landmark
* Size: ~588.0 GiB
* Version: v2
* Description: This is the second version of the Google Landmarks dataset (GLDv2), which contains images annotated with labels representing human-made and natural landmarks. The dataset can be used for landmark recognition and retrieval experiments. This version contains approximately 5 million images, split into 3 sets: train, index, and test.

Hollywood-2
* Category: Computer vision
* Location: /data/ai/ref-data/video/Hollywood2
* Size: ~40.4 GiB
* Description: Hollywood-2 is a dataset with 12 classes of human actions and 10 classes of scenes distributed over 3,669 video clips and approximately 20.1 hours of video in total, composed of clips from 69 movies. It is intended as a comprehensive benchmark for human action recognition in realistic and challenging settings.

ImageNet-1k
* Category: Computer vision
* Location: /data/ai/ref-data/image/ImageNet/imagenet1k
* Size: ~156.2 GiB
* Version: ILSVRC2012-2017
* Description: The most widely used subset of ImageNet is the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012-2017 image classification and localization dataset. It spans 1,000 object classes and contains 1,281,167 training images, 50,000 validation images, and 100,000 test images.

ImageNet-21k (resized)
* Category: Computer vision
* Location: /data/ai/ref-data/image/ImageNet/imagenet21k_resized
* Size: ~269.4 GiB
* Description: ImageNet is an image database organized according to the WordNet hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds or thousands of images. This dataset is a processed version of ImageNet-21K.

Kinetics-400
* Category: Computer vision
* Location: /data/ai/ref-data/video/Kinetics-400/kinetics-dataset
* Size: ~438.0 GiB
* Description: The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos. It consists of around 500,000 video clips covering 600 human action classes, with at least 600 clips per class. Each clip lasts around 10 seconds and is labeled with a single action class. The videos are collected from YouTube.

MPI-Sintel
* Category: Computer vision
* Location: /data/ai/ref-data/video/MPI-Sintel
* Size: ~5.7 GiB
* Description: MPI (Max Planck Institute) Sintel is a dataset for optical flow evaluation with 1,064 synthesized stereo images and ground-truth disparity data, derived from the open-source 3D animated short film Sintel. It covers 23 different scenes. The stereo images are RGB while the disparity is grayscale; both have a resolution of 1024×436 pixels at 8 bits per channel.

Open Images
* Category: Computer vision
* Location: /data/ai/ref-data/image/OpenImagesDataset
* Size: ~1.1 TiB
* Version: V6 and Extended
* Description: Open Images is a dataset of ~9M images annotated with image-level labels, object bounding boxes, object segmentation masks, visual relationships, and localized narratives. The hosted subset provides bounding boxes (600 classes), object segmentations, visual relationships, and localized narratives. These annotation files cover the 600 boxable object classes and span the 1,743,042 training images with bounding-box, segmentation, relationship, and narrative annotations, as well as the full validation (41,620 images) and test (125,436 images) sets.

Visual Genome 1.2
* Category: Computer vision
* Location: /data/ai/ref-data/image/VisualGenome/1.2
* Size: ~23.9 GiB
* Version: 1.2
* Description: Visual Genome is a dataset, a knowledge base, and an ongoing effort to connect structured image concepts to language. Total images: 108,077; total region descriptions: 4,297,502; total image object instances: 1,366,673; unique image objects: 75,729; total object-object relationship instances: 1,531,448.

Visual Genome 1.4
* Category: Computer vision
* Location: /data/ai/ref-data/image/VisualGenome/1.4
* Size: ~1.0 GiB
* Version: 1.4
* Description: Visual Genome is a dataset, a knowledge base, and an ongoing effort to connect structured image concepts to language. Total images: 108,077; total region descriptions: 4,297,502.
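
As an illustration of pointing code at the repository, here is a minimal Python sketch for reading one CIFAR-10 batch. It assumes the copy under /data/ai/ref-data/image/cifar-10-batches-py uses the standard CIFAR-10 "python version" layout (pickled dicts with b'data' and b'labels' keys, batch files named data_batch_1 through data_batch_5); the filenames and layout are an assumption, not confirmed by this page.

```python
import pickle

import numpy as np


def load_cifar_batch(path):
    """Load one CIFAR-10 python-format batch file: a pickled dict with
    b'data' as (N, 3072) uint8 rows and b'labels' as a list of ints."""
    with open(path, "rb") as f:
        batch = pickle.load(f, encoding="bytes")
    # Each 3072-byte row is one 32x32 RGB image stored channel-first.
    images = np.asarray(batch[b"data"], dtype=np.uint8).reshape(-1, 3, 32, 32)
    labels = np.asarray(batch[b"labels"])
    return images, labels


# On HiPerGator, assuming the standard batch filenames:
#   images, labels = load_cifar_batch(
#       "/data/ai/ref-data/image/cifar-10-batches-py/data_batch_1")
```

Because the point of the repository is to avoid duplicate copies, code can read directly from the /data/ai/ref-data paths instead of copying datasets into a group's Blue or Orange storage.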