Journal of Guangdong University of Technology ›› 2024, Vol. 41 ›› Issue (03): 91-101. doi: 10.12052/gdutxb.230037

• Computer Science and Technology •

Speaker-Aware Cross Attention Speaker Extraction Network

Li Zhuo-zhang1, Xu Bo-yan1, Cai Rui-chu1, Hao Zhi-feng1,2

  1. School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China;
  2. College of Science, Shantou University, Shantou 515063, China
  • Received: 2023-02-27 Online: 2024-05-25 Published: 2024-06-14
  • Corresponding authors: Xu Bo-yan (born 1991), male, Ph.D., whose research interests include machine learning and natural language processing, E-mail: hpakyim@gmail.com; Cai Rui-chu (born 1983), male, Ph.D., professor and doctoral supervisor, whose research interests include machine learning and causality, E-mail: cairuichu@gmail.com
  • About the author: Li Zhuo-zhang (born 1998), male, master's student, whose research interest is speech separation, E-mail: 920942323@qq.com
  • Funding: Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2021ZD0111501); National Science Fund for Excellent Young Scholars (62122022); National Natural Science Foundation of China (61876043, 61976052, 62206064)

Abstract: Target speaker extraction aims to extract the speech of a specific speaker from mixed audio, with an enrolled utterance of the target speaker usually provided as auxiliary information. Existing approaches have two main limitations: (1) the speaker-recognition auxiliary network cannot capture the critical information in the enrolled audio; (2) there is no interactive learning mechanism between the mixed-audio and enrolled-audio embeddings. These limitations lead to speaker confusion when the enrolled audio differs significantly from the target audio. To address this, a speaker-aware cross-attention speaker extraction network (SACAN) is proposed. First, SACAN introduces an attention-based speaker aggregation module into the speaker-recognition auxiliary network, which effectively aggregates the critical information about the target speaker's voice characteristics and uses the mixed audio to enhance the target speaker embedding. SACAN then builds an interactive learning mechanism through cross-attention to promote the joint learning of the speaker embedding and the mixed-audio embedding, strengthening the model's speaker awareness. Experimental results show that SACAN improves STOI by 0.0133 and SI-SDRi by 1.0695 dB over the benchmark method, and speaker-confusion evaluations and ablation studies validate the effectiveness of its individual modules.

Key words: speech separation, target speaker extraction, speaker embedding, cross attention, multi-task learning

CLC number: TP391.2
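
To make the two mechanisms named in the abstract concrete, the sketch below shows one plausible PyTorch rendering of (1) attention-based aggregation of frame-level enrollment features into a target-speaker embedding and (2) cross-attention fusion of that embedding with the mixed-audio embedding. This is a minimal sketch under stated assumptions, not the authors' implementation; all names (SpeakerAggregation, CrossAttentionFusion, spk_emb, and the choice to prepend the speaker embedding as an extra key/value token) are illustrative.

```python
# Minimal sketch (not the paper's code) of attention-based speaker
# aggregation and cross-attention speaker-mixture fusion.
import torch
import torch.nn as nn

class SpeakerAggregation(nn.Module):
    """Attention-based pooling of frame-level enrollment features into a
    single target-speaker embedding (in the spirit of attentive pooling)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, enroll_feats):                        # (B, T_e, D)
        w = torch.softmax(self.score(enroll_feats), dim=1)  # per-frame weights
        return (w * enroll_feats).sum(dim=1)                # (B, D)

class CrossAttentionFusion(nn.Module):
    """Interactive learning via cross-attention: each mixture frame queries
    the speaker embedding plus the enrollment frames for target evidence."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, mix_feats, spk_emb, enroll_feats):    # (B,T,D), (B,D), (B,T_e,D)
        # Speaker embedding as an extra key/value token alongside enrollment frames.
        kv = torch.cat([spk_emb.unsqueeze(1), enroll_feats], dim=1)
        fused, _ = self.attn(query=mix_feats, key=kv, value=kv)
        return self.norm(mix_feats + fused)                 # residual, speaker-aware

# Shape check on random tensors.
B, T, T_e, D = 2, 100, 60, 256
enroll, mix = torch.randn(B, T_e, D), torch.randn(B, T, D)
spk_emb = SpeakerAggregation(D)(enroll)                     # (B, D)
out = CrossAttentionFusion(D)(mix, spk_emb, enroll)         # (B, T, D)
print(out.shape)                                            # torch.Size([2, 100, 256])
```

The residual connection keeps the mixture representation intact while the attention branch injects speaker-conditioned evidence, which is one common way to realize the kind of speaker-aware fusion the abstract describes.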