Your search
Results: 23 resources
- USC-TIMIT is a database of speech production data under ongoing development, which currently includes real-time magnetic resonance imaging data from five male and five female speakers of American English, and electromagnetic articulography data from four of these speakers. The two modalities were recorded in two independent sessions while the subjects produced the same 460-sentence corpus. In...
- We have been collecting real-time MRI data from phoneticians producing the sounds of the International Phonetic Alphabet, together with standard sentences and texts. The collected data are accessible from the resource page.
- Real-time magnetic resonance imaging (RT-MRI) of human speech production is enabling significant advances in speech science, linguistics, bio-inspired speech technology development, and clinical applications. Easy access to RT-MRI is, however, limited, and comprehensive datasets with broad access are needed to catalyze research across numerous domains. The imaging of the rapidly moving...
- CREMA-D is a dataset of 7,442 original clips from 91 actors. These clips were from 48 male and 43 female actors between the ages of 20 and 74, coming from a variety of races and ethnicities (African American, Asian, Caucasian, Hispanic, and Unspecified). Actors spoke from a selection of 12 sentences. The sentences were presented using one of six different emotions (Anger, Disgust, Fear, Happy,...
- The British National Corpus (BNC) is a 100-million-word collection of samples of written and spoken language from a wide range of sources, designed to represent a wide cross-section of British English, both spoken and written, from the late twentieth century. Access the data here: https://llds.ling-phil.ox.ac.uk/llds/xmlui/handle/20.500.14106/2554
- The Voices Obscured in Complex Environmental Settings (VOiCES) corpus is a Creative Commons speech dataset targeting acoustically challenging and reverberant environments, with robust labels and ground-truth data for transcription, denoising, and speaker identification. This is one of the largest corpora to date with transcriptions and simultaneously recorded real-world noise. The details: -...
- This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads about 400 sentences, selected from a newspaper, the rainbow passage, and an elicitation paragraph used for the Speech Accent Archive.
- Expressive Anechoic Recordings of Speech (EARS). Highlights: 100 h of speech data from 107 speakers; high-quality recordings at 48 kHz in an anechoic chamber; high speaker diversity, with speakers from different ethnicities and ages ranging from 18 to 75 years; the full dynamic range of human speech, from whispering to yelling; 18 minutes of freeform monologues per speaker; sentence...
Explore

Audio
- Accent/Region (6)
  - Arabic (1)
  - British English (4)
  - World Englishes (2)
- Conversation (2)
- Directed Speech (1)
- Electroglottography / Electrolaryngography (1)
- Emotional Speech (3)
- Language (11)
  - Arabic (1)
  - English (8)
  - L2+ (1)
  - Mandarin (1)
- Language Learning (2)
- Multi-Speaker (7)
- Multi-Style (2)
- Pathological (2)
- Speech in Noise (2)
Speech Production & Articulation
- Articulography (1)
- Brain Imaging (1)
- MRI (7)
- Ultrasound (4)
- Video (2)
Teaching Resources
Vocal Anatomy
- Larynx and Glottis (1)
- Vocal Tract (6)
Tags
- read speech
- audio data (19)
- adult (13)
- English (11)
- female (10)
- male (10)
- spontaneous speech (8)
- real-time MRI (rtMRI) (6)
- MRI (5)
- articulatory data (5)
- video (4)
- emotional speech (3)
- transcribed (3)
- teaching resource (3)
- speech production (3)
- ultrasound tongue imaging (UTI) (3)
- whisper (2)
- English accents (2)
- British (2)
- angry (2)
- audiovisual (2)
- happy (2)
- older adult (2)
- sad (2)
- articulation (2)
- multimodal (2)
- volumetric MRI (2)
- International Phonetic Alphabet (IPA) (2)
- vowels (2)
- lip video (2)
- conversation (2)
- L2 English (2)
- annotated (2)
- speech-language pathology (2)
- anechoic (1)
- fast speech (1)
- high pitch (1)
- loud speech (1)
- low pitch (1)
- shout (1)
- slow speech (1)
- rainbow passage (1)
- environmental noise (1)
- noisy audio (1)
- reverberation (1)
- disgust (1)
- phonetics (1)
- American English (1)
- electromagnetic articulography (EMA) (1)
- perceptually annotated (1)
- consonants (1)
- vocal tract shape (1)
- accent map (1)
- Arabic (1)
- accent variability (1)
- dialect variability (1)
- sociophonetic (1)
- Derby (1)
- Leeds (1)
- Manchester (1)
- Newcastle (1)
- York (1)
- digits (1)
- L2 speech (1)
- interview (1)
- language learning (1)
- electroglottography (EGG) (1)
- intraoral pressure (1)
- validation (1)
- Mandarin (1)
- dysarthria (1)
- pathological speech (1)
- Amyotrophic Lateral Sclerosis (ALS) (1)
- Down syndrome (1)
- Parkinson's disease (1)
- cerebral palsy (1)
- stroke (1)
- Lombard speech (1)
- clear speech (1)
- computer-directed speech (1)
- infant-directed speech (1)
- non-native-directed speech (1)
- speech in noise (1)
- brain activity (1)
- fMRI (1)
- rtMRI (1)
- vocal imitation (1)
- professional voice (1)
- silent speech (1)
- ultrasound (1)
- World Englishes (1)
- dyadic (1)
Resource type
- Dataset (17)
- Journal Article (2)
- Report (1)
- Web Page (3)