Your search
Results: 23 resources
- English is the most widely spoken language in the world, used daily by millions of people as a first or second language in many different contexts. As a result, there are many varieties of English. Despite the many advances in English automatic speech recognition (ASR) over the past decades, results are usually reported based on test datasets which fail to represent the diversity of...
- A multi-speaker corpus of ultrasound images of the tongue and video images of the lips. The Tongue and Lips (TaL) corpus contains synchronised imaging data of extraoral (lips) and intraoral (tongue) articulators from 82 native speakers of English. The TaL corpus consists of two datasets: - TaL1...
- This collection contains behavioural and brain activation data from 3 laboratory studies of speech imitation. Each of the three studies involved behavioural and imaging (MRI) test sessions in which participants were familiarised with novel auditory speech targets, and were asked to imitate them as closely as possible. Across the three studies, there were variations in the type of sounds...
- Single male native British English talker recorded producing 25 TIMIT sentences in 5 conditions, two natural: (i) quiet, (ii) while the talker listened to high-intensity speech-shaped noise; and three acted: (iii) as if to a non-native listener, (iv) as if to a computer speech-recognition system, (v) as if to an infant. Accompanied by automatic and hand-corrected phone-level transcription.
- This dataset contains simultaneous recordings of electroglottography (EGG recorded with Glottal Enterprises EG2-PCX2), unfiltered audio, and intraoral pressure (recorded with Glottal Enterprises PG-60) from 14 subjects. It is meant to facilitate the validation of physical models of glottal control during voicing, in which the glottal/source waveform for speech is controlled by a combination of...
- We introduce the Speak & Improve Corpus 2025, a dataset of L2 learner English data with holistic scores and language error annotation, collected from open (spontaneous) speaking tests on the Speak & Improve learning platform. The aim of the corpus release is to address a major challenge in developing L2 spoken language processing systems, the lack of publicly available data with high-quality...
- The current data package includes 1,090 hours of recorded speech (as .wav files) from about 1,130 participants, including those with ALS, cerebral palsy, Down syndrome, Parkinson’s disease and those who have had a stroke. The download also includes text of the original speech prompts and a transcript of the participants’ responses. A subset includes annotations describing the speech...
- The MSP-AVW is an audiovisual whisper corpus for audiovisual speech recognition purposes. The MSP-AVW corpus contains data from 20 female and 20 male speakers. For each subject, three sessions are recorded, consisting of read sentences, isolated digits and spontaneous speech. The data is recorded under neutral and whisper conditions. The corpus was collected in a 13ft x 13ft ASHA certified...
- This 3-year project investigates language change in five urban dialects of Northern England: Derby, Newcastle, York, Leeds and Manchester. Data collection method: linguistic analysis of speech data (conversational, word list) from samples of different northern English urban communities. Data collection consisted of interviews, which included (1) some structured questions about the interviewee...
- Ultrasound imaging has been widely adopted in speech research to visualize dynamic tongue movements during speech production. These images are commonly used as visual feedback in interventions for articulation disorders or as visual cues in speech recognition. Nevertheless, high-quality audio-ultrasound datasets remain scarce. The present study, therefore, aims to...
- Twenty-five countries have Arabic as an official language, but the dialects spoken vary greatly, and even within one country different accents are heard. Many features create the impression of 'a different accent', including how particular sounds are pronounced, where stress falls in a word, and what intonation pattern is used. There is extensive prior research on the first two of these for...
- Welcome to our interactive International Phonetic Association (IPA) chart website! Clicking on the IPA symbols on our charts will allow you to listen to their sounds and see vocal-organ movements imaged with ultrasound or MRI, or shown in animated form. To find out more about how our IPA charts were made, click on the buttons on the left-hand side of this page. The website contains two main...
- Dynamic Dialects contains an articulatory video-based corpus of speech samples from world-wide accents of English. Videos in this corpus contain synchronised audio, ultrasound-tongue-imaging video and video of the moving lips. We are continuing to augment the database. The website contains three main resources: - A clickable Accent Map: clicking on points of the map will open up links to...
- The USC Speech and Vocal Tract Morphology MRI Database consists of real-time magnetic resonance images of dynamic vocal tract shaping during read and spontaneous speech with concurrently recorded denoised audio, and 3D volumetric MRI of vocal tract shapes during vowels and continuant consonants sustained for 7 seconds, from 17 speakers.
- USC-EMO-MRI is an emotional speech production database which includes real-time magnetic resonance imaging data with synchronized speech audio from five male and five female actors, each producing a passage and a set of sentences in multiple repetitions, while enacting four different target emotions (neutral, happy, angry, sad). The database includes emotion quality evaluation from at least...
Explore
Audio
- Accent/Region (6)
- Arabic (1)
- British English (4)
- World Englishes (2)
- Conversation (2)
- Directed Speech (1)
- Electroglottography / Electrolaryngography (1)
- Emotional Speech (3)
- Language (11)
- Arabic (1)
- English (8)
- L2+ (1)
- Language Learning (2)
- Mandarin (1)
- Multi-Speaker (7)
- Multi-Style (2)
- Pathological (2)
- Speech in Noise (2)
Speech Production & Articulation
- Articulography (1)
- Brain Imaging (1)
- MRI (7)
- Ultrasound (4)
- Video (2)
Teaching Resources
Vocal Anatomy
- Larynx and Glottis (1)
- Vocal Tract (6)
Tags
- read speech
- audio data (19)
- adult (13)
- English (11)
- female (10)
- male (10)
- spontaneous speech (8)
- real-time MRI (rtMRI) (6)
- MRI (5)
- articulatory data (5)
- video (4)
- emotional speech (3)
- transcribed (3)
- teaching resource (3)
- speech production (3)
- ultrasound tongue imaging (UTI) (3)
- whisper (2)
- English accents (2)
- British (2)
- angry (2)
- audiovisual (2)
- happy (2)
- older adult (2)
- sad (2)
- articulation (2)
- multimodal (2)
- volumetric MRI (2)
- International Phonetic Alphabet (IPA) (2)
- vowels (2)
- lip video (2)
- conversation (2)
- L2 English (2)
- annotated (2)
- speech-language pathology (2)
- anechoic (1)
- fast speech (1)
- high pitch (1)
- loud speech (1)
- low pitch (1)
- shout (1)
- slow speech (1)
- rainbow passage (1)
- environmental noise (1)
- noisy audio (1)
- reverberation (1)
- disgust (1)
- phonetics (1)
- American English (1)
- electromagnetic articulography (EMA) (1)
- perceptually annotated (1)
- consonants (1)
- vocal tract shape (1)
- accent map (1)
- Arabic (1)
- accent variability (1)
- dialect variability (1)
- sociophonetic (1)
- Derby (1)
- Leeds (1)
- Manchester (1)
- Newcastle (1)
- York (1)
- digits (1)
- L2 speech (1)
- interview (1)
- language learning (1)
- electroglottography (EGG) (1)
- intraoral pressure (1)
- validation (1)
- Mandarin (1)
- dysarthria (1)
- pathological speech (1)
- Amyotrophic Lateral Sclerosis (ALS) (1)
- Down syndrome (1)
- Parkinson's disease (1)
- cerebral palsy (1)
- stroke (1)
- Lombard speech (1)
- clear speech (1)
- computer-directed speech (1)
- infant-directed speech (1)
- non-native-directed speech (1)
- speech in noise (1)
- brain activity (1)
- fMRI (1)
- rtMRI (1)
- vocal imitation (1)
- professional voice (1)
- silent speech (1)
- ultrasound (1)
- World Englishes (1)
- dyadic (1)
Resource type
- Dataset (17)
- Journal Article (2)
- Report (1)
- Web Page (3)