Commit beb91425 authored by Joanna Luberadzka's avatar Joanna Luberadzka

added some comments to localizer

parent 9f0a88df
......@@ -10,10 +10,21 @@ global simwork
simwork.tmpSig = work.signal;
simwork.DOA = [];
% for each interval, run doa=localizer_preproc(interval sound),
% so that we have an estimate of the DOA for each presented interval
% NOTE!!! the target interval of the current trial is always in the
% first two columns of work.signal (so also of simwork.tmpSig), no matter
% in which order it was presented to the subject!
for i=1:def.intervalnum
simwork.DOA(i) = eval([work.vpname '_preproc(simwork.tmpSig(:,2*i-1:2*i))']); % actual sig processing
% Create the reasoning-scheme of the machine:
% The task for the subject is always: press the interval, which you think
% was different from two other intervals, so response is always 1,2 or 3.
% How will the machine respond if the DOA estimated for all intervals is
% the same? How would it be if they are all different?
if length(unique(simwork.DOA))==1
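The decision scheme described in the comments above (answer with the interval whose DOA estimate differs from the other two; guess when all estimates agree or all differ) could be sketched roughly as follows in Python. `pick_odd_interval` is a hypothetical helper for illustration, not part of the AFC toolbox:

```python
import random
from collections import Counter

def pick_odd_interval(doas, rng=random):
    """Pick the interval whose DOA estimate differs from the other two.

    doas: list of three DOA estimates, one per interval.
    Returns a 1-based interval index (1, 2 or 3).
    If all estimates are equal, or all three differ, the machine has
    no evidence for an odd one out, so it guesses at random.
    """
    counts = Counter(doas)
    if len(counts) == 2:
        # exactly one interval stands out: find the value seen only once
        odd_value = next(v for v, c in counts.items() if c == 1)
        return doas.index(odd_value) + 1
    # all equal or all different: random guess among the intervals
    return rng.randrange(len(doas)) + 1
```
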
......@@ -29,12 +40,11 @@ elseif where(1)~=where(2)
% now select the interval with the maximum standard deviation
% [tmp,response] = max(simwork.actStd); % select max power
% if it is an interval other than the first, the response is wrong, since
% work.signal always carries the target interval in the first columns
% NOTE!!! the target interval of the current trial is always in the
% first two columns of work.signal (so also of simwork.tmpSig), no matter
% in which order it was presented to the subject! Therefore the right answer
% is always response=1. If it is 2 or 3, we set it here to 0 (wrong):
if response ~= 1
response = 0;
......@@ -9,11 +9,17 @@ global work
global simwork
% specify the input channels of the localization algorithm (sensors)
if strcmp(work.userpar4,'in-ear')
simwork.sSensors = 'inear';
error('please go to function localizer_init.m and specify corresponding sensors')
% specify model type
simwork.sModeltype = 'HRIR';
% load DOASVM model
simwork.LocModel = load(['LocModel_' simwork.sModeltype '_' simwork.sSensors]);
% make a variable containing azimuth angles
simwork.DOAazimuth =;
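For reference, the model file loaded by `localizer_init` is named from the model type and sensor set (e.g. `LocModel_HRIR_inear`, mirroring `load(['LocModel_' simwork.sModeltype '_' simwork.sSensors])`). A minimal Python sketch of that naming convention; the helper name is ours:

```python
def locmodel_filename(model_type, sensors):
    """Reproduce the DOASVM model filename convention used in
    localizer_init.m: 'LocModel_<modeltype>_<sensors>'."""
    return "LocModel_{}_{}".format(model_type, sensors)
```
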
......@@ -7,18 +7,28 @@ function response = localizer_main
global def
global work
% in this case the example model calls a detect routine for this, which returns 1 if the target
% interval is detected.
detect = eval([work.vpname '_detect'])
% if detected then select the current signal position as the response interval; otherwise
% select a random one from the remaining intervals
switch detect
case 1
% if the target signal was detected, the response of the model is set to the
% number of the target interval (in the order in which it was actually presented
% to the listener)
response = work.position{work.pvind}(end);
% work.position{work.pvind} contains a history of where the target
% interval was hidden, e.g. [3 3 2 2 2]; the last entry is always the
% current trial
case 0
responseTmp = randperm( def.intervalnum );
response = work.position{work.pvind}(end);
i = 1;
% this loop chooses a response that is different from the correct one
while ( response == work.position{work.pvind}(end) )
response = responseTmp(i);
i = i + 1;
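The detect-to-response logic above (answer with the true target interval when detected, otherwise pick a random wrong interval by walking a permutation) can be sketched in Python; `choose_response` is a hypothetical helper illustrating the same idea:

```python
import random

def choose_response(detected, target_interval, n_intervals, rng=random):
    """Map detector output to a response interval (1-based).

    If the target interval was detected, respond with its true position;
    otherwise walk a random permutation of the intervals and return the
    first one that is not the target, mirroring the while-loop over
    randperm in localizer_main.
    """
    if detected == 1:
        return target_interval
    # random permutation of 1..n_intervals
    order = rng.sample(range(1, n_intervals + 1), n_intervals)
    return next(i for i in order if i != target_interval)
```
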
The goal of this experiment is to compare human and machine performance
in spatial hearing of speech signals (localisation abilities).
What do we compare?
Both humans (you) and the machine (a localisation algorithm) have
to participate in a psychoacoustic (AFC) experiment that measures the SNR at
which a subject is still able to perceive a change in the direction
of arrival of speech.
The following tools/data are used:
- AFC toolbox (for designing the AFC experiment)
- Localisation Algorithm by H. Kayser (GCC-Phat features + SVM)
- OLLO database (as speech material)
The following scripts are important to work with:
- afc/models/localizer_cfg.m (configuration, e.g. whether to display the GUI and the machine's answers)
- afc/models/localizer_init.m (initializing the localiser)
- afc/models/localizer_detect.m (the DOA estimated for each interval is turned into a response)
- afc/models/localizer_preproc.m (computes the DOA for one interval - this is where the algorithm lives)
The AFC experiment can be started by running the following line in the MATLAB command window:
afc('main','experimentDOA',subject,ref_az,change_angle,speech_data,hrir_type, noise_type)
The first two input arguments are fixed.
The remaining ones are described below:
subject - is the string containing the subject name
a) human subject name, like 'joanna'
b) algorithm name, in this case 'localizer'
ref_az - is the string containing the reference angle in degrees
at which the SNR will be measured
change_angle - is the string containing the change in angle in degrees for
which the SNR will be measured
speech_data - is the string specifying the speech material
a) 'ollo_female' - Logatomes spoken by female speaker
b) 'ollo_male' - Logatomes spoken by male speaker
hrir_type - specifies the HRIRs with which the mono speech signals are
convolved to generate signals coming from different directions
a) 'in-ear' - binaural in-ear impulse responses
b) 'front' - impulse responses measured at front microphones
of the hearing aid
noise_type - specifies the noise that is used in the experiment
a) 'white' - channel uncorrelated white noise
b) 'cohnoise' - Coherent spectrally speech-shaped noise that comes from -160°
c) 'diffuse' - Spatially diffuse spectrally speech-shaped noise
Research questions: What is better - human or machine? What happens in coherent noise?
What happens for different change angles?