1 Reunion Bayestic: Excuse me! (Murat Deviren, 16/05/2003)

2 Contents
– Frequency and wavelet filtering
– Supervised-predictive compensation
– Language modeling with DBNs
– Hidden Markov Trees for acoustic modeling

3 Contents: Frequency and wavelet filtering · Supervised-predictive compensation · Language modeling with DBNs · Hidden Markov Trees for acoustic modeling

4 Frequency Filtering
Proposed by Nadeu (1995) and Paliwal (1999).
Goal: spectral features comparable with MFCCs.
Properties:
– Quasi-decorrelation of the log filter-bank energies (logFBEs)
– Cepstral weighting effect
– Emphasis on spectral variations
Typical derivative-type frequency filters:
– FF1: H(z) = 1 - z^-1
– FF2: H(z) = z - z^-1
– FF3: H(z) = 1 - z^-2
Simplified block diagrams: MFCC = logFBEs followed by a DCT; FF = logFBEs followed by H(z).
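The three derivative-type filters above act along the frequency axis of a single frame's logFBE vector. A minimal numpy sketch (the filter-bank values are invented for illustration):

```python
import numpy as np

def ff1(x):  # FF1: H(z) = 1 - z^-1  ->  y[k] = x[k] - x[k-1]
    y = np.zeros_like(x); y[1:] = x[1:] - x[:-1]; return y

def ff2(x):  # FF2: H(z) = z - z^-1  ->  y[k] = x[k+1] - x[k-1]
    y = np.zeros_like(x); y[1:-1] = x[2:] - x[:-2]; return y

def ff3(x):  # FF3: H(z) = 1 - z^-2  ->  y[k] = x[k] - x[k-2]
    y = np.zeros_like(x); y[2:] = x[2:] - x[:-2]; return y

# x: log filter-bank energies of one frame (index k = frequency band)
x = np.array([2.0, 3.0, 5.0, 4.0, 1.0])
ff_features = ff1(x)   # differences between adjacent bands
```

Each filter replaces the DCT of the MFCC pipeline, so the output dimension stays that of the filter bank and the features remain frequency-localized.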

5 Evaluation of FF on Aurora-3
Significant performance decrease for FF2 (H(z) = z - z^-1) and FF3 (H(z) = 1 - z^-2) in the high-mismatch case; FF1 (H(z) = 1 - z^-1) holds up better.

6 Wavelets and Frequency Filtering
– FF1 = Haar wavelet
– Reformulate FF as wavelet filtering
– Use higher-order Daubechies wavelets
– Promising results
– Published in ICANN 2003
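The FF1/Haar connection is just a scale factor: the undecimated Haar detail filter is (1, -1)/sqrt(2), while FF1 uses (1, -1). A quick numerical check on an invented signal:

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])   # e.g. logFBEs along frequency
ff1_out  = x[1:] - x[:-1]                       # FF1: H(z) = 1 - z^-1
haar_out = (x[1:] - x[:-1]) / np.sqrt(2)        # undecimated Haar detail coefficients
assert np.allclose(ff1_out, np.sqrt(2) * haar_out)
```

Replacing the Haar pair with a longer Daubechies filter pair is what yields the higher-order variants mentioned above.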

7 Perspectives
BUT: these results could not be verified on other subsets of the Aurora-3 database.
To do:
– Detailed analysis of FF and wavelet filtering
– Develop models that exploit frequency-localized features
– Exploit statistical properties of the wavelet transform

8 Contents: Frequency and wavelet filtering · Supervised-predictive compensation · Language modeling with DBNs · Hidden Markov Trees for acoustic modeling

9 Noise Robustness
Signal processing techniques:
– CMN, RASTA, enhancement techniques
Compensation schemes:
– Adaptive (MLLR, MAP): requires adaptation data and a canonical model
– Predictive (PMC): hypothetical errors in the mismatch function; strong dependence on the front-end parameterization
Multi-condition training

10 Supervised-Predictive Compensation
Goal: exploit available data to devise a tool for robustness.
Available data: speech databases recorded in different acoustic environments.
Principles:
– Train matched models for each condition.
– Train noise models.
– Construct a parametric model that describes how the matched models vary with the noise model.

11 Supervised-Predictive Compensation
Advantages:
– No mismatch function
– Independent of the front-end
– No canonical model required
– Computationally efficient
– The model can be trained incrementally, i.e., updated with new databases

12 Deterministic Model
Databases: D_1, …, D_K
Noise conditions: n_1, …, n_K
S_w(k): matched speech model for acoustic unit w ∈ W, trained on noise condition n_k.
N ∈ {1, …, K}: noise variable.
For each w ∈ W, there exists a parametric function f_w such that
||S_w(k) - f_w(N)|| ≈ 0 for some given norm ||.||

13 Probabilistic Model
Given:
– S: speech model parameterization
– N: noise model parameterization
Learn the joint probability density P(S, N).
Given the noise model N, what is the best set of speech models to use?
S* = argmax_S P(S | N)
[Figure: P(S, N) as a static Bayesian network with speech nodes S1, S2, S3 and noise nodes N1, N2, N3]
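For a discrete noise variable, the argmax query reduces to a lookup over the joint table. A toy sketch (the probabilities below are invented, not from the experiments):

```python
import numpy as np

# Joint P(S, N): rows = 3 candidate speech-model sets, cols = 3 noise conditions
P = np.array([[0.20, 0.05, 0.05],
              [0.05, 0.20, 0.05],
              [0.05, 0.05, 0.30]])

def best_speech_models(n):
    # argmax_S P(S | N=n) = argmax_S P(S, N=n), since P(N=n) is a constant factor
    return int(np.argmax(P[:, n]))
```

The normalization P(N=n) cancels inside the argmax, which is why the joint table suffices.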

14 A Simple Linear Model
Speech model: mixture-density HMM. Noise model: single Gaussian.
mu_wls(n_k) = A_wls * mu_nk + B_wls
– mu_wls(n_k): mean vector for mixture component l of state s (of unit w)
– mu_nk: mean vector of the noise model
f_w is parameterized by A_wls and B_wls.
Supervised training using MMSE minimization.
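Under this model, A_wls and B_wls for one mixture component can be fit by ordinary least squares over the K pairs of (noise-model mean, matched speech-model mean). A self-contained numpy sketch on synthetic data that follows the linear model exactly (all dimensions and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
K, dn, ds = 15, 13, 13          # noise conditions, noise/speech mean dimensions

# mu_n[k]: noise-model mean in condition k; mu_s[k]: matched speech-model mean
# (one regression like this per mixture component (w, l, s))
A_true = rng.normal(size=(ds, dn))
B_true = rng.normal(size=ds)
mu_n = rng.normal(size=(K, dn))
mu_s = mu_n @ A_true.T + B_true          # synthetic data obeying the linear model

# MMSE fit of mu_s(k) = A mu_n(k) + B: least squares on the augmented matrix [mu_n | 1]
X = np.hstack([mu_n, np.ones((K, 1))])
W, *_ = np.linalg.lstsq(X, mu_s, rcond=None)
A_hat, B_hat = W[:-1].T, W[-1]
```

Because the synthetic targets contain no noise, the least-squares solution recovers A_true and B_true exactly; with real matched-model means the same fit minimizes the mean squared error.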

15 Experiments
Connected-digit recognition on TIDigits.
15 different noise sources from NOISEX (volvo, destroyer engine, buccaneer, …).
Evaluations:
– Model performance in training conditions
– Robustness comparison with multi-condition training: under new SNR conditions and under new noise types

16 Results
– Even a simple linear model can almost recover matched-model performance.
– The proposed technique generalizes to new SNR conditions and new noise types.
– Results submitted to EUROSPEECH 2003.

17 Contents: Frequency and wavelet filtering · Supervised-predictive compensation · Language modeling with DBNs · Hidden Markov Trees for acoustic modeling

18 Classical n-grams
Word probability based on the word history:
P(W) = prod_i P(w_i | w_{i-1}, w_{i-2}, …, w_{i-n})
[Graph: word nodes w_{i-n}, …, w_{i-2}, w_{i-1} pointing to w_i]
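An MLE bigram (one word of history) illustrates the factorization; this is a toy sketch with an invented corpus and no smoothing or back-off:

```python
from collections import Counter

def bigram_lm(tokens):
    """MLE bigram: P(w_i | w_{i-1}) = count(w_{i-1}, w_i) / count(w_{i-1})."""
    uni = Counter(tokens[:-1])              # history counts
    bi  = Counter(zip(tokens, tokens[1:]))  # (history, word) counts
    return lambda w, h: bi[(h, w)] / uni[h]

p = bigram_lm("the cat sat on the mat".split())
```

Here p("cat", "the") is 0.5, since "the" occurs twice as a history and is followed by "cat" once.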

19 Class-based n-grams
Class-based word probability for a given class history:
P(W) = prod_i P(w_i | c_i) P(c_i | c_{i-1}, c_{i-2}, …, c_{i-n})
[Graph: class chain c_{i-n}, …, c_{i-1} pointing to c_i, each class node emitting its word w]
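With hand-specified CPTs the class-based factorization reads off directly as a product. A sketch with a bigram class history; the word classes and all probabilities are invented for illustration:

```python
def class_lm_prob(sent, cls, p_w_c, p_c_c):
    """P(W) = prod_i P(w_i | c_i) * P(c_i | c_{i-1})  (bigram class history;
    the first word is treated as given)."""
    p = 1.0
    for prev, w in zip(sent, sent[1:]):
        p *= p_w_c[(w, cls[w])] * p_c_c[(cls[w], cls[prev])]
    return p

cls   = {"the": "DET", "cat": "N", "dog": "N"}
p_w_c = {("the", "DET"): 1.0, ("cat", "N"): 0.5, ("dog", "N"): 0.5}
p_c_c = {("N", "DET"): 1.0, ("DET", "N"): 1.0}
```

Tying words to classes shrinks the history CPT from vocabulary-sized to class-sized, which is the point of the class-based model.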

20 Class-based LM with DBNs
Class-based word probability in a given class context:
P(W) = prod_i P(w_i | c_{i-n}, …, c_i, …, c_{i+n}) P(c_i | c_{i-1}, c_{i-2}, …, c_{i-n})
[Graph: as in the class-based n-gram, with w_i additionally conditioned on the future classes c_{i+1}, c_{i+2}, …]

21 Initial Results
Training corpus: 11 months of Le Monde, ~20 million words.
Test corpus: ~1.5 million words.
Vocabulary size: 500. Number of class labels: 198.
[Table: model structure vs. perplexity; the model diagrams and perplexity values were not preserved in the transcription]

22 Perspectives
Initial results are promising.
To do:
– Structure learning with an appropriate scoring metric, e.g., one based on perplexity
– Appropriate back-off schemes
– Efficient CPT representations for computational constraints, e.g., noisy-OR gates

23 Contents: Frequency and wavelet filtering · Supervised-predictive compensation · Language modeling with DBNs · Hidden Markov Trees for acoustic modeling

24 Speech recognition using hidden Markov models on wavelet trees
Sanaa GHOUZALI, DESA Infotelecom, Université Med V, RABAT

25 Problems in Speech Recognition
Parameterization:
– The parameters of the speech signal must be localized in the time-frequency domain
– Performance should be as good as that of MFCCs
Modeling:
– Statistical models must be robust to noise
– The frequency dynamics of the speech signal must be modeled as well as its temporal dynamics

26 Parameterization
The wavelet transform has many interesting properties that allow a finer analysis than the Fourier transform:
– Locality
– Multi-resolution
– Compression
– Clustering
– Persistence

27 Modeling
Several types of statistical models account for the properties of the wavelet transform:
– Independent Mixtures (IM): treat each coefficient independently of the others (primary properties)
– Markov chains: consider only the correlations between coefficients over time (clustering)
– Hidden Markov Trees (HMT): consider the correlations across scales (persistence)

28 Statistical models for the wavelet transform
[Figure: time-frequency (t, f) tilings illustrating the dependency structure of each model]

29 Description of the Chosen Model
The chosen model, the wavelet-domain HMT (WHMT):
– captures the clustering and persistence properties of the wavelet transform
– models the complex dependencies between wavelet coefficients
Modeling of the wavelet transform proceeds in two steps:
– model each coefficient individually with a Gaussian mixture model
– capture the dependencies between the coefficients with the HMT
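The HMT likelihood can be evaluated with a leaf-to-root ("upward") recursion over the wavelet tree. A minimal sketch, assuming a two-state model with zero-mean Gaussians whose variance depends on a binary hidden state (small/large coefficient), in the spirit of Crouse et al.; the tree and all parameter values below are invented:

```python
import math

def gauss(w, sigma):
    """Zero-mean Gaussian density with standard deviation sigma."""
    return math.exp(-w * w / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def hmt_likelihood(children, w, pi, T, sigma):
    """children[i]: child indices of node i (node 0 is the root);
    w[i]: wavelet coefficient at node i; pi[s]: root-state prior;
    T[s][t]: P(child state = t | parent state = s); sigma[s]: state std-dev."""
    def beta(i):  # beta[s] = p(observations in the subtree of i | state of i = s)
        b = [gauss(w[i], sigma[s]) for s in (0, 1)]
        for c in children[i]:
            bc = beta(c)
            for s in (0, 1):
                b[s] *= T[s][0] * bc[0] + T[s][1] * bc[1]
        return b
    b0 = beta(0)
    return pi[0] * b0[0] + pi[1] * b0[1]
```

The persistence property shows up in T: large-coefficient states tend to propagate from parent to child across scales, which a chain or independent-mixture model cannot express.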

30 References
– M. S. Crouse, R. D. Nowak, and R. G. Baraniuk, "Wavelet-Based Statistical Signal Processing Using Hidden Markov Models," IEEE Trans. Signal Processing, vol. 46, no. 4, Apr. 1998.
– M. Crouse, H. Choi, and R. Baraniuk, "Multiscale Statistical Image Processing Using Tree-Structured Probability Models," IT Workshop, Feb. 1999.
– K. Keller, S. Ben-Yacoub, and C. Mokbel, "Combining Wavelet-Domain Hidden Markov Trees With Hidden Markov Models," IDIAP-RR 99-14, Aug. 1999.
– M. Jaber Borran and R. D. Nowak, "Wavelet-Based Denoising Using Hidden Markov Models."

