An overview of Automatic Speaker Recognition Gérard CHOLLET GET-ENST/CNRS-LTCI 46 rue Barrault 75634 PARIS cedex 13


1 An overview of Automatic Speaker Recognition Gérard CHOLLET GET-ENST/CNRS-LTCI 46 rue Barrault 75634 PARIS cedex 13

2 Outline Motivations, Applications Speech production background Speaker characteristics in the speech signal Automatic Speaker Verification : Decision theory Text dependent Text independent Databases, Evaluation, Standardization Audio-visual speaker verification Conclusions Perspectives

3 Why should a computer recognize who is speaking ? Protection of individual property (habitation, bank account, personal data, messages, mobile phone, PDA,...) Limited access (secured areas, databases) Personalization (only respond to its master's voice) Locate a particular person in an audio-visual document (information retrieval) Who is speaking in a meeting ? Is a suspect the criminal ? (forensic applications)

4 Domains of Automatic Speaker Recognition Your voice is a signature Speaker verification (Voice Biometric) Are you really who you claim to be ? Identification within an open set : Is this speech segment coming from a known speaker ? Identification within a closed set Speaker detection, segmentation, indexing, retrieval : Looking for recordings of a particular speaker Combining Speech and Speaker Recognition Adaptation to a new speaker Personalization in dialogue systems

5 Applications Access Control Physical facilities, Computer networks, Websites Transaction Authentication Telephone banking, e-Commerce Speech data Management Voice messaging, Search engines Law Enforcement Forensics, Home incarceration

6 Voice Biometric Advantages Often the only modality over the telephone, Low cost (microphone, A/D), Ubiquity Possible integration on a smart (SIM) card Natural bimodal fusion : speaking face Disadvantages Lack of discretion Possibility of imitation and electronic imposture Lack of robustness to noise, distortion,… Temporal drift

7 Speaker Identity in Speech Differences in vocal tract shapes and muscular control Fundamental frequency (typical values) 100 Hz (male), 200 Hz (female), 300 Hz (child) Glottal waveform Phonotactics Lexical usage, idiolects The difference between the voices of twins is a limiting case Voices can also be imitated or disguised

8 spectral envelope of / i: / f A Speaker A Speaker B Speaker Identity segmental factors (~30ms) glottal excitation: fundamental frequency, amplitude, voice quality (e.g., breathiness) vocal tract: characterized by its transfer function and represented by MFCCs (Mel Freq. Cepstral Coef) suprasegmental factors speaking speed (timing and rhythm of speech units) intonation patterns dialect, accent, pronunciation habits

9 Speech production

10 Speech analysis

11 Inter-speaker Variability We were away a year ago.

12 Intra-speaker Variability We were away a year ago.

13 Mel Frequency Cepstral Coefficients
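The mel scale underlying MFCC analysis can be illustrated with the standard Hz-to-mel conversion; a minimal sketch (the 2595/700 constants are the common O'Shaughnessy formulation, roughly linear below 1 kHz and logarithmic above):

```python
import math

def hz_to_mel(f_hz):
    """Convert a frequency in Hz to the mel scale."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse mapping, mels back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# By construction 1000 Hz maps to ~1000 mels, and the mapping is invertible.
print(round(hz_to_mel(1000.0), 1))            # ~1000.0
print(round(mel_to_hz(hz_to_mel(440.0)), 1))  # 440.0 (round trip)
```

An MFCC front-end applies a filterbank spaced uniformly on this scale, then takes log energies and a DCT.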

14 Speaker Verification Typology of approaches (EAGLES Handbook) Text dependent Public password Private password Customized password Text prompted Text independent Incremental enrolment Evaluation

15 Automatic Speaker Verification Claimed Identity Automatic Speaker Verification System Acceptance / Rejection Speech processing Biometric Technology

16 What are the sources of difficulty ? Intra-speaker variability of the speech signal (due to stress, pathologies, environmental conditions,…) Recording conditions (filtering, noise,…) Temporal drift Intentional imposture Voice disguise

17 Decision theory for identity verification Two types of errors : False rejection (a client is rejected) False acceptance (an impostor is accepted) Decision theory : given an observation O and a claimed identity H0 hypothesis : it comes from an impostor H1 hypothesis : it comes from our client H1 is chosen if and only if P(H1|O) > P(H0|O), which can be rewritten (using Bayes' law) as : accept if P(O|H1) / P(O|H0) > P(H0) / P(H1)
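The likelihood-ratio decision rule above can be sketched numerically; the 1-D Gaussian client and impostor models below are toy, made-up values, not from a real system:

```python
import math

def gauss_loglik(x, mean, var):
    """Log-likelihood of a scalar observation under a 1-D Gaussian."""
    return -0.5 * (math.log(2.0 * math.pi * var) + (x - mean) ** 2 / var)

def verify(obs, client, impostor, p_client=0.5):
    """Accept the claimed identity iff the likelihood ratio
    P(O|H1)/P(O|H0) exceeds the prior ratio P(H0)/P(H1) (in log domain)."""
    llr = sum(gauss_loglik(x, *client) for x in obs) \
        - sum(gauss_loglik(x, *impostor) for x in obs)
    threshold = math.log((1.0 - p_client) / p_client)
    return llr > threshold

# Toy 1-D features: client model centered at 1.0, impostor model at -1.0.
client_model = (1.0, 1.0)    # (mean, variance) -- illustrative values
impostor_model = (-1.0, 1.0)
print(verify([0.8, 1.2, 0.9], client_model, impostor_model))  # True
print(verify([-1.1, -0.7], client_model, impostor_model))     # False
```

Moving `p_client` (or an explicit threshold) trades false rejections against false acceptances, which is exactly what the DET curve below visualizes.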

18 Decision

19 Distribution of scores

20 Receiver Operating Characteristic (ROC) curve

21 Detection Error Tradeoff (DET) Curve
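A DET curve plots false rejection against false acceptance as the decision threshold varies; its usual summary operating point, the equal error rate (EER), can be computed from score lists as in this sketch (the trial scores are hypothetical):

```python
def eer(client_scores, impostor_scores):
    """Equal error rate: sweep a threshold over all observed scores and
    return the operating point where false rejection ~ false acceptance."""
    best = (1.0, None)
    for t in sorted(client_scores + impostor_scores):
        frr = sum(s < t for s in client_scores) / len(client_scores)
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        gap = abs(frr - far)
        if gap < best[0]:
            best = (gap, (frr + far) / 2.0)
    return best[1]

clients = [2.1, 1.8, 2.5, 0.4, 2.2]     # hypothetical client trial scores
impostors = [0.3, 0.9, -0.2, 0.5, 2.0]  # hypothetical impostor trial scores
print(eer(clients, impostors))          # 0.2, i.e. 20% EER on this toy set
```

A DET plot shows the same (FRR, FAR) pairs on normal-deviate axes, which turns well-behaved score distributions into near-straight lines.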

22 History of Speaker Recognition

23 Current approaches

24 Text-dependent Speaker Verification Uses Automatic Speech Recognition techniques (DTW, HMM, …) Client model adaptation from speaker independent HMM (World model) Synchronous alignment of client and world models for the computation of a score.

25 Dynamic Time Warping (DTW)

26 Best path "Bonjour" test speaker Y "Bonjour" speaker X "Bonjour" speaker 1 "Bonjour" speaker 2 "Bonjour" speaker n DODDINGTON 1974, ROSENBERG 1976, FURUI 1981, etc.
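The best-path alignment sketched above can be written as the classic dynamic-programming recursion; a minimal sketch over scalar features:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping: cumulative cost of the best monotonic
    alignment between two sequences of scalar features."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # insertion, deletion, match -- the classic 3-way recursion
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# The same "word" said at two speeds: DTW absorbs the timing difference.
ref = [1.0, 3.0, 4.0, 3.0, 1.0]
test = [1.0, 1.0, 3.0, 4.0, 4.0, 3.0, 1.0]
print(dtw_distance(ref, test))   # 0.0 despite the different lengths
```

In practice the scalars would be MFCC vectors and the cost a vector distance, but the recursion is unchanged.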

27 Vector Quantization (VQ) Best quantization Speaker 1 dictionary Speaker 2 dictionary Speaker n dictionary "Bonjour" test speaker Y Speaker X dictionary SOONG, ROSENBERG 1987
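VQ-based verification scores an utterance by its average distortion against each speaker's codebook (dictionary); a minimal 1-D sketch with made-up frames and codebooks:

```python
def quantization_distortion(frames, codebook):
    """Average distance from each frame to its nearest codeword.
    The true speaker's codebook should yield the lowest distortion."""
    total = 0.0
    for f in frames:
        total += min(abs(f - c) for c in codebook)
    return total / len(frames)

# Hypothetical 1-D "spectral" frames and two speaker codebooks.
test_frames = [1.1, 2.9, 1.0, 3.2]
codebook_A = [1.0, 3.0]   # matches the test frames well
codebook_B = [5.0, 7.0]   # poor match
print(quantization_distortion(test_frames, codebook_A) <
      quantization_distortion(test_frames, codebook_B))   # True
```

Real systems build each codebook from enrolment speech with k-means (LBG) over MFCC vectors; the comparison across codebooks is the same.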

28 Hidden Markov Models (HMM) Best path "Bonjour" speaker 1 "Bonjour" speaker 2 "Bonjour" speaker n "Bonjour" test speaker Y "Bonjour" speaker X ROSENBERG 1990, TSENG 1992

29 Ergodic HMM Best path Speaker 1 HMM Speaker 2 HMM Speaker n HMM "Bonjour" test speaker Y Speaker X HMM PORITZ 1982, SAVIC 1990

30 Gaussian Mixture Models (GMM) REYNOLDS 1995

31 An example of a Text-dependent Speaker Verification System : The PICASSO project Sequences of digits Speaker independent HMM of each digit Adaptation of these HMMs to the client voice (during enrolment and incremental enrolment) EER of less than 1 % can be achieved Customized password The client chooses his password using some feedback from the system Deliberate imposture

32 The impostor has some recordings of the target client's voice. He can record the same sentences and align these speech signals with the recordings of the client. A transformation (Multiple Linear Regression) is computed from these aligned data. The impostor has heard the target client's password. He records that password and applies the transformation to this recording. The PICASSO reference system with less than 1 % EER is defeated by this procedure (more than 30 % EER)

33 Incremental enrolment of customised password The client chooses his password using some feedback from the system. The system attempts a phonetic transcription of the password. Incremental enrolment is achieved on further repetitions of that password. Speaker independent phone HMMs are adapted with the client enrolment data. Synchronous-alignment likelihood-ratio scoring is performed on access trials.

34 HMM structure depends on the application

35 Speaker Verification (text independent) The ELISA consortium ENST, LIA, IRISA, DDL, Uni-Fribourg, Uni-Balamand... NIST evaluations Gaussian Mixture Models, Graphical models Segmental approaches (ALISP)

36 Gaussian Mixture Model Parametric representation of the probability distribution of observations: p(x) = Σ_{i=1..M} w_i N(x; μ_i, Σ_i), with Σ_i w_i = 1

37 Gaussian Mixture Models 8 Gaussians per mixture
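The mixture density is just a weighted sum of Gaussian components; a minimal 1-D sketch (the toy mixture below has 2 components rather than the 8 per mixture mentioned above, and its parameters are invented):

```python
import math

def gmm_pdf(x, weights, means, variances):
    """Density of a 1-D Gaussian mixture: p(x) = sum_i w_i N(x; mu_i, var_i)."""
    p = 0.0
    for w, mu, var in zip(weights, means, variances):
        p += w * math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)
    return p

# A toy 2-component mixture; real systems use many components over
# MFCC vectors, usually with diagonal covariance matrices.
weights = [0.6, 0.4]
means = [0.0, 3.0]
variances = [1.0, 0.5]
assert abs(sum(weights) - 1.0) < 1e-12   # mixture weights must sum to 1
print(gmm_pdf(0.0, weights, means, variances))
```

Training fits the weights, means, and variances with the EM algorithm on enrolment frames.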

38 GMM speaker modeling Front-end GMM MODELING WORLD GMM MODEL Front-end GMM model adaptation TARGET GMM MODEL

39 Baseline GMM method HYPOTH. TARGET GMM MOD. Front-end WORLD GMM MODEL Test Speech LLR SCORE = (1/T) Σ_t [ log p(x_t | target GMM) − log p(x_t | world GMM) ]
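The baseline scoring can be sketched as an average per-frame log-likelihood ratio between the target and world GMMs; the single-component "GMMs" below are illustrative only:

```python
import math

def log_gmm(x, comps):
    """Log-density of a 1-D GMM given (weight, mean, variance) tuples."""
    return math.log(sum(
        w * math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2 * math.pi * v)
        for w, m, v in comps))

def llr_score(frames, target, world):
    """Average per-frame log-likelihood ratio against the world model:
    (1/T) * sum_t [log p(x_t|target) - log p(x_t|world)]."""
    return sum(log_gmm(x, target) - log_gmm(x, world)
               for x in frames) / len(frames)

# Hypothetical 1-component models for brevity: (weight, mean, variance).
world = [(1.0, 0.0, 4.0)]
target = [(1.0, 1.5, 1.0)]
# Frames near the target mean give a positive score -> accept.
print(llr_score([1.4, 1.6, 1.2], target, world) > 0)   # True
```

The final accept/reject decision compares this score to a threshold tuned on development data.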

40 Support Vector Machines and Speaker Verification A hybrid GMM-SVM system is proposed: an SVM scoring model is trained on development data to classify true-target and impostor accesses, using a new feature representation based on GMMs Modeling: GMM Scoring: SVM

41 SVM principles An input vector X is mapped into a feature space by Φ(X). Among the separating hyperplanes H, the SVM selects the optimal (maximum-margin) hyperplane H0 and outputs Class(X).
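Once trained, an SVM classifies by the sign of the signed distance to the optimal hyperplane; a minimal sketch where Φ is the identity and the weights and bias are invented for illustration, not from a trained system:

```python
def svm_decide(x, w, b):
    """Sign of w . phi(x) + b for the separating hyperplane; phi is
    taken as the identity here for simplicity."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return +1 if score >= 0 else -1

# Hypothetical hyperplane parameters (in a real GMM-SVM system the
# input would be a GMM-derived representation of the whole access).
w = [0.8, -0.4]
b = -0.1
print(svm_decide([1.0, 0.5], w, b))   # +1 : classified as true target
print(svm_decide([-0.5, 1.0], w, b))  # -1 : classified as impostor
```

Training chooses w and b to maximize the margin between the two classes; kernels replace the dot product when Φ is nonlinear.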

42 Results

43 State of the art – research directions (3) World model: speaker independent, trained on all available speakers with the EM algorithm Client model: obtained by adaptation of the world model, via MAP (with a prior distribution) or MLLR (with a transform function) Unified approach
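MAP adaptation of a client model from the world model can be sketched for a single Gaussian mean; the relevance factor r = 16 is a typical but assumed choice:

```python
def map_adapt_mean(world_mean, frame_sum, frame_count, relevance=16.0):
    """MAP adaptation of one GMM mean: interpolate between the world-model
    mean and the client-data mean, weighted by the amount of adaptation
    data relative to the relevance factor r."""
    alpha = frame_count / (frame_count + relevance)
    data_mean = frame_sum / frame_count
    return alpha * data_mean + (1.0 - alpha) * world_mean

# With little enrolment data the adapted mean stays near the world model;
# with a lot of data it moves toward the client-data mean (here 1.0).
print(map_adapt_mean(0.0, frame_sum=2.0, frame_count=2))       # near 0.0
print(map_adapt_mean(0.0, frame_sum=1000.0, frame_count=1000)) # near 1.0
```

This data-dependent interpolation is what makes MAP robust with short enrolment; MLLR instead estimates a shared affine transform of the world-model means.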

44 Adaptation Variable number of degrees of freedom Variable partitioning of the distributions After each E step of EM, a partitioning that provides enough data per class

45 Hierarchical - MLLR adapted System

46 National Institute of Standards & Technology (NIST) Speaker Verification Evaluations Annual evaluation since 1995 Common paradigm for comparing technologies

47 NIST evaluations: overview The recognized standard for evaluating speaker verification systems Several hundred distinct speakers, several tens of thousands of test accesses Participation of the best laboratories worldwide (MIT, IBM, Nuance, …) ENST has participated since 1997

48 NIST evaluations: protocol Training phase: 2 minutes of spontaneous speech, telephone condition, cellular network Test phase: files of 5 s to 50 s of spontaneous speech

49 NIST evaluations: results Results are presented and discussed at an annual workshop Steady improvement of ENST's performance (18% → 9%) despite increasing difficulty: shorter training duration, switched network → cellular network

50 NIST evaluations: results

51 Combining Speech Recognition and Speaker Verification. Speaker independent phone HMMs Selection of segments or segment classes which are speaker specific Preliminary evaluations are performed on the NIST extended data set (one hour of training data per speaker)

52 1.1 Speech Segmentation Large Vocabulary Continuous Speech Recognition (LVCSR): needs a huge amount of transcribed speech data, is language (and task) dependent, gives good results for a small set of languages (with existing AND available transcripts); we do not have such a system Data-driven speech segmentation: not yet usable for speech recognition purposes, needs no annotated databases, is language and task independent; we can use it to segment the speech data for a text-independent speaker verification task and for language identification ALISP (Automatic Language Independent Speech Processing) method

53 1.2 ALISP data-driven speech segmentation

54 3. Data-driven Speech Segmentation for Speaker Verification Current best speaker verification systems are based on Gaussian Mixture Models (each speech frame is treated independently, and no temporal information is taken into account); improvements are still necessary Speech is composed of different sounds Phonemes have different discriminant characteristics for speaker verification: nasals and vowels convey more speaker characteristics than other speech classes; we would like to exploit this idea, but with data-driven ALISP units An automatic speech segmentation tool is needed

55 3.1 Advantages and disadvantages of the speech segmentation step Problems: Need for an automatic speech segmentation tool Speaker modeling per speech class => more data needed More classes => more complicated systems Advantages: Possibility to use it in combination with dialogue-based systems Text-prompted speaker verification Better accuracy if enough speech data is available

56 3.2 Proposed system: ALISP-based Segmental Speaker Verification using DTW Speaker-specific information is extracted from the ALISP-based speech segments => Client Dictionary Non-speaker (world speakers) ALISP-based speech segments => World Dictionary Dynamic Time Warping (DTW) was already used for speaker verification, but in a text-dependent mode: comparison of two speech samples with similar linguistic content; the DTW distance between two speech segments conveys some speaker-specific characteristics Originality: use DTW in text-independent mode The speech data are first segmented into ALISP classes, in order to remove the linguistic variability Measure the distances among speaker and non-speaker speech segments

57 3.3 Searching in client and world speech dictionaries for speaker verification purposes

58 3.4 Database and experimental setup for the speaker verification experiments Development data: NIST 2001 cellular data (American English) World speakers (60 female + 59 male): used to train the ALISP speech segmenter and to model the non-speakers Evaluated on a small subset (14 female + 14 male speakers) from NIST 2001 cellular data and on the full set of NIST 2002 cellular data (??? speakers) Speech parameterization: LPCC for the initial ALISP segmentation and MFCC afterward 64 ALISP speech classes

59 3.5 Results: example of data-driven speech segmentation for speaker verification Comparison of a manual transcription with the ALISP segmentation ("I think my my daughter") Two occurrences of the English phone sequence m - ay; corresponding ALISP sequences: HM-Hf-Ha and HM-Hz-Ha-HC

60 3.6 Results: another example of data-driven speech segmentation for speaker verification Two more occurrences of the English phone ay; the corresponding ALISP sequences: HX-Hf and Hf-Ha (previous slide: Hf-Ha and Ha-Hz)

61 3.7 Speaker Verification DET curves

62 3.8 Conclusions State-of-the-art NIST 2002 results for EER: best 8% to worst 28% Problem with the small data set results: influence of the size of the test set and/or mismatched train/test conditions What we have NOT done: exploit the speech classes (silence classes are also included), normalization (with pseudo-impostors), exploit the DTW distance value rather than only the preference result

63 SuperSID experiments

64 GMM with cepstral features

65 Selection of nasals in words ending in -ing: being, everything, getting, anything, thing, something, things, going

66 Fusion
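Score-level fusion of the speech and face modalities is often a simple weighted sum of the per-modality scores; a minimal sketch with assumed weights and threshold (in practice both are tuned on development data):

```python
def fuse_scores(speech_score, face_score, w_speech=0.6):
    """Weighted-sum fusion of two modality scores; the 0.6/0.4 split
    is an assumed, illustrative weighting."""
    return w_speech * speech_score + (1.0 - w_speech) * face_score

def accept(fused, threshold=0.0):
    """Final accept/reject decision on the fused score."""
    return fused > threshold

# A weak speech score can be rescued by a strong face score, and
# two weak scores are still rejected.
print(accept(fuse_scores(-0.2, 1.5)))   # True
print(accept(fuse_scores(-1.0, -0.5)))  # False
```

More elaborate schemes replace the fixed weights with a trained classifier over the score pair.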

67 Fusion results

68 Talking faces and identity verification Face and speech provide complementary information about a person's identity Many PCs, PDAs and phones are, or soon will be, equipped with a camera and a microphone Imposture becomes harder to carry out A research topic developed at ENST within the IST-SecurePhone project

69 Talking faces and identity verification Digit sequence (PIN code) Customized password

70 Speech and Face Fusion (Conrad Sanderson's PhD thesis, August 2002)

71 Example application 1. Acquisition of the biometric signals for each modality 2. Computation of a decision score for each system 3. Computation of a final decision score based on fusion of the single-modality scores Insecure Network Remote server: 1. Access to secure services 2. Validation of transactions 3. Etc.

72 Conclusions and Perspectives Speech allows identity verification over the telephone Combining text-dependent and text-independent approaches improves reliability If the face is used to verify identity, adding speech costs little (and pays off handsomely!) More and more PCs, PDAs and phones are equipped with a microphone and a camera; audio-visual recognition should become widespread

73 Perspectives Speech is often the only usable biometric modality (over the telephone network). Fusion of modalities. A number of R&D projects within the EU.

