Universiti Teknologi Malaysia Institutional Repository

Multi-biometrics fusion (heart sound-speech authentication system)

Al-hamdani, Osama and Chekima, Ali and Dargham, Jamal and Shaikh Salleh, Sheikh Hussain and Mohd. Noor, Alias and Noman, Fuad (2012) Multi-biometrics fusion (heart sound-speech authentication system). In: Proceedings of the IASTED International Symposia on Imaging and Signal Processing in Health Care and Technology, ISPHT 2012. ACTA Press, Calgary, AB, Canada, pp. 112-118. ISBN 978-088986920-2

Full text not available from this repository.

Official URL: http://dx.doi.org/10.2316/P.2012.771-007

Abstract

Biometric recognition systems deployed in real-world environments often have to contend with adverse signal acquisition conditions that can vary greatly, including acoustic noise that contaminates speech signals and artifacts that alter heart sound signals. To overcome the resulting recognition errors, researchers apply various methods such as normalization, feature extraction, and classification. Recently, combining biometric modalities has proven to be an effective strategy for improving the performance of biometric systems. The approach in this paper is based on biometric recognition that uses the heart sound signal as a feature that cannot be easily copied. The Mel-Frequency Cepstral Coefficient (MFCC) is used as the feature vector and vector quantization (VQ) as the matching algorithm. A simple yet highly reliable method is introduced for biometric applications. Experimental results show that the recognition rate of the heart sound speaker identification (HS-SI) model is 81.9%, while the rate for the speech speaker identification (S-SI) model is 99.3%, for a database of 21 clients and 40 imposters. Heart sound speaker verification (HS-SV) provides an average EER of 17.8%, while the average EER for the speech speaker verification (S-SV) model is 3.39%. To reach a higher security level, an alternative based on multimodal fusion is implemented in the system. The best performance is obtained with simple-sum score fusion and a piecewise-linear normalization technique, which provides an EER of 0.69%.
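The sketch below illustrates the general pipeline described in the abstract: each modality (heart sound and speech) yields a match score from a VQ codebook of MFCC frames, the scores are rescaled to a common range, and the rescaled scores are added (simple-sum fusion). It is a minimal sketch under stated assumptions: the k-means codebook training, the min-max rescaling used here in place of the paper's piecewise-linear normalization, and all function names are illustrative, not the authors' implementation.

    # Minimal sketch: VQ matching per modality + simple-sum score fusion.
    # Assumes MFCC frames are already extracted (n_frames x n_coeffs).
    import numpy as np
    from scipy.cluster.vq import kmeans, vq


    def train_codebook(mfcc_frames: np.ndarray, codebook_size: int = 64) -> np.ndarray:
        """Build a speaker's VQ codebook from enrollment MFCC frames via k-means."""
        codebook, _ = kmeans(mfcc_frames.astype(float), codebook_size)
        return codebook


    def vq_match_score(mfcc_frames: np.ndarray, codebook: np.ndarray) -> float:
        """Match score = negative average distortion to the nearest codewords;
        higher means a better match to the claimed speaker."""
        _, distances = vq(mfcc_frames.astype(float), codebook)
        return -float(np.mean(distances))


    def minmax_normalize(score: float, lo: float, hi: float) -> float:
        """Rescale a raw modality score to [0, 1] using bounds estimated on
        development data (stand-in for the paper's piecewise-linear mapping)."""
        return (score - lo) / (hi - lo)


    def simple_sum_fusion(heart_score: float, speech_score: float,
                          heart_bounds: tuple, speech_bounds: tuple) -> float:
        """Fuse the two normalized modality scores by simple summation.
        Verification accepts the claim if the fused score exceeds a threshold
        chosen on development data (e.g. at the EER operating point)."""
        return (minmax_normalize(heart_score, *heart_bounds)
                + minmax_normalize(speech_score, *speech_bounds))

The design point of score-level fusion is that the two classifiers stay independent; only their output scores are combined, so a noisy heart sound score can be compensated by a clean speech score before the accept/reject decision is made.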

Item Type:Book Section
Additional Information:Indexed by Scopus
Uncontrolled Keywords:fusion, piecewise-linear, speaker recognition, vector quantization
Subjects:R Medicine > R Medicine (General)
Divisions:Biosciences and Medical Engineering
ID Code:35727
Deposited By: Fazli Masari
Deposited On:29 Oct 2013 01:09
Last Modified:06 Aug 2017 03:50
