論文・実績 – Publications and Achievements

論文 – Journal Papers

  1. Hokuto Munakata, Yoshiaki Bando, Ryu Takeda, Kazunori Komatani, Masaki Onishi:
    Joint Separation and Localization of Moving Sound Sources Based on Neural Full-Rank Spatial Covariance Analysis.
    IEEE Signal Processing Letters, Vol. 30, pp. 384-388, 2023. [doi]
  2. Shun Katada, Shogo Okada, Kazunori Komatani:
    Effects of Physiological Signals in Different Types of Multimodal Sentiment Estimation.
    IEEE Transactions on Affective Computing, Vol. 14, Issue 3, pp. 2443-2457, 2023. [doi]
  3. Kazunori Komatani, Kohei Ono, Ryu Takeda, Eric Nichols, Mikio Nakano:
    User Impressions of System Questions to Acquire Lexical Knowledge during Dialogues.
    Dialogue and Discourse, Vol. 13, No. 1, pp. 96-122, 2022. [doi]
  4. 武田 龍, 駒谷 和範, 中島 圭祐, 中野 幹生:
    Design Guidelines for System Development in Multiple Dialogue System Competitions (in Japanese).
    Transactions of the Japanese Society for Artificial Intelligence, Vol. 37, No. 3, IDS-B_1-9, 2022. [doi]
  5. Yuta Matsumoto, Takafumi Fujita, Arne Ludwig, Andreas D. Wieck, Kazunori Komatani, Akira Oiwa:
    Noise-robust classification of single-shot electron spin readouts using a deep neural network.
    npj Quantum Information, Vol. 7, Article 136, 2021. [doi]
  6. 平野 裕貴, 岡田 将吾, 西本 遥人, 駒谷 和範:
    Estimation of Multiple Labels Assigned to Each Utterance Pair via Multitask Learning (in Japanese).
    IEICE Transactions A, Vol. J104-A, No. 2, pp. 84-94, 2021. [doi]
  7. Mikio Nakano, Kazunori Komatani:
    A Framework for Building Closed-Domain Chat Dialogue Systems.
    Knowledge-Based Systems, Vol. 204, Article 106212, 24 pages, September 2020. [doi]

国際会議 – International Conferences

  1. Ryu Takeda, Kazunori Komatani:
    Toward OOV-word Acquisition during Spoken Dialogue using Syllable-based ASR and Word Segmentation.
    International Workshop on Spoken Dialogue Systems (IWSDS), (accepted), 2024.
  2. Mikio Nakano, Hisahiro Mukai, Yoichi Matsuyama, Kazunori Komatani:
    Evaluating Dialogue Systems from the System Owners’ Perspectives.
    International Workshop on Spoken Dialogue Systems (IWSDS), (accepted), 2024.
  3. Ryu Takeda, Hokuto Munakata, Kazunori Komatani:
    Link Prediction Based on Large Language Model and Knowledge Graph Retrieval under Open-World and Resource-Restricted Environment.
    International Joint Conference on Knowledge Graphs (IJCKG 2023), (accepted), 2023.
    (Best Research Paper Award)
  4. Zhaojie Luo, Stefan Christiansson, Bence Ladoczki, Kazunori Komatani:
    Speech Emotion Recognition Using Threshold Fusion for Enhancing Audio Sensitivity.
    Workshop of Multimodal, Multilingual and Multitask Modeling Technologies for Oriental Languages (M3Oriental), (accepted), 2023.
  5. Shuichi Chikatsuji, Kenta Yamamoto, Ryu Takeda, Kazunori Komatani:
    Knowledge Graph Augmentation with Entity Identification for Improving Knowledge Graph Completion Performance.
    Pacific Rim International Conference on Artificial Intelligence (PRICAI 2023), (accepted), 2023.
  6. Ryu Takeda, Yui Sudo, Kazunori Komatani:
    Flexible Evidence Model to Reduce Uncertainty Mismatch Between Speech Enhancement and ASR Based on Encoder-Decoder Architecture.
    Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), (accepted), 2023.
  7. Miki Oshio, Hokuto Munakata, Ryu Takeda, Kazunori Komatani:
    Out-Of-Vocabulary Word Detection in Spoken Dialogues Based on Joint Decoding with User Response Patterns.
    Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), (accepted), 2023.
  8. Zhaojie Luo, Kazunori Komatani:
    Multi-Modal Emotion Recognition Based on 2D Kernel Density Estimation for Multiple Labels Fusion.
    Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), (accepted), 2023.
  9. Kazunori Komatani, Ryu Takeda, Shogo Okada:
    Analyzing Differences in Subjective Annotations by Participants and Third-party Annotators in Multimodal Dialogue Corpus.
    24th Annual SIGDIAL Meeting on Discourse and Dialogue (SIGDIAL2023), pp. 104-113, Sep. 13, 2023.
    [doi]
  10. Hokuto Munakata, Ryu Takeda, Kazunori Komatani:
    Recursive Sound Source Separation with Deep Learning-based Beamforming for Unknown Number of Sources.
    Interspeech 2023, pp. 1688-1692, Aug. 20, 2023.
    [doi]
  11. Shun Katada, Shogo Okada, Kazunori Komatani:
    Transformer-Based Physiological Feature Learning for Multimodal Analysis of Self-Reported Sentiment.
    International Conference on Multimodal Interaction (ICMI), pp. 349-358, Nov. 8, 2022.
    [doi]
  12. Ryu Takeda, Yui Sudo, Kazuhiro Nakadai and Kazunori Komatani:
    Empirical Sampling from Latent Utterance-wise Evidence Model for Missing Data ASR based on Neural Encoder-Decoder Model.
    Interspeech 2022, pp. 3789-3793, Sep. 22, 2022.
    [doi]
  13. Hokuto Munakata, Ryu Takeda, Kazunori Komatani:
    Training Data Generation with DOA-based Selecting and Remixing for Unsupervised Training of Deep Separation Models.
    Interspeech 2022, pp. 861-865, Sep. 20, 2022.
    [doi]
  14. Michimasa Inaba, Yuya Chiba, Ryuichiro Higashinaka, Kazunori Komatani, Yusuke Miyao, Takayuki Nagai:
    Collection and Analysis of Travel Agency Task Dialogues with Age-Diverse Speakers.
    Language Resources and Evaluation Conference (LREC), pp. 5759-5767, Jun. 21-23 (remote), 2022.
    [pdf]
  15. Zhaodong Wang, Kazunori Komatani:
    Graph-combined Coreference Resolution Methods on Conversational Machine Reading Comprehension with Pre-trained Language Model.
    Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering, pp. 72-82, May 26, 2022.
    [doi]
  16. Ryu Takeda, Kazuhiro Nakadai, Kazunori Komatani:
    Spatial Normalization to Reduce Positional Complexity in Direction-Aided Supervised Binaural Sound Source Separation.
    Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 248-253, Dec. 15, 2021.
    [link]
  17. Hokuto Munakata, Ryu Takeda, Kazunori Komatani:
    Multiple-Embedding Separation Networks: Sound Class-Specific Feature Extraction for Universal Sound Separation.
    Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 961-967, Dec. 17, 2021.
    [link]
  18. Kazunori Komatani, Ryu Takeda, Keisuke Nakashima, Mikio Nakano:
    Design guidelines for developing systems for dialogue system competitions.
    International Workshop on Spoken Dialogue Systems (IWSDS), 16 pages, Nov. 16, 2021.
    [doi]
  19. Yuki Hirano, Shogo Okada, Kazunori Komatani:
    Recognizing Social Signals with Weakly Supervised Multitask Learning for Multimodal Dialogue Systems.
    International Conference on Multimodal Interaction (ICMI), pp. 141-149, Oct. 20, 2021.
    [doi]
  20. Wenqing Wei, Sixia Li, Shogo Okada, Kazunori Komatani:
    Multimodal User Satisfaction Recognition for Non-task Oriented Dialogue Systems.
    International Conference on Multimodal Interaction (ICMI), pp. 586-594, Oct. 20, 2021.
    [doi]
  21. Kazunori Komatani, Shogo Okada:
    Multimodal Human-Agent Dialogue Corpus with Annotations at Utterance and Dialogue Levels.
    International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 1-8, Sep. 29, 2021.
    [doi]
  22. Ryu Takeda, Kazunori Komatani:
    Age Estimation with Speech-age Model for Heterogeneous Speech Datasets.
    Interspeech 2021, pp. 4164-4168, Sep. 3, 2021.
    [doi]
  23. Kazunori Komatani, Yuma Fujioka, Keisuke Nakashima, Katsuhiko Hayashi, Mikio Nakano:
    Knowledge Graph Completion-based Question Selection for Acquiring Domain Knowledge through Dialogues.
    Annual Conference on Intelligent User Interfaces (IUI), pp. 531-541, Apr. 15, 2021.
    [doi]
  24. Shun Katada, Shogo Okada, Yuki Hirano, Kazunori Komatani:
    Is She Truly Enjoying the Conversation?: Analysis of Physiological Signals toward Adaptive Dialogue Systems.
    International Conference on Multimodal Interaction (ICMI), pp. 315-323, Oct. 27, 2020.
    [doi]
  25. Ryu Takeda, Kazunori Komatani:
    Frame-wise Online Unsupervised Adaptation of DNN-HMM Acoustic Model from Perspective of Robust Adaptive Filtering.
    Interspeech 2020, pp. 1291-1295, Oct. 26, 2020.
    [doi]
  26. Kazunori Komatani, Mikio Nakano:
    User Impressions of Questions to Acquire Lexical Knowledge.
    21st Annual SIGDIAL Meeting on Discourse and Dialogue (SIGDIAL2020), pp. 147-156, Jul. 2, 2020.
    [doi]

解説 – Explanatory Articles

  • 駒谷 和範, 岡田 将吾:
    Multimodal Dialogue Corpus Hazumi (in Japanese).
    Journal of Natural Language Processing, Vol. 29, No. 4, pp. 1322-1329, 2022. [doi]
  • 駒谷 和範:
    Design and Release of a Multimodal Dialogue Corpus (Special Issue: How to Cross the 'Uncanny Valley' in Spoken Dialogue Systems) (in Japanese).
    Journal of the Acoustical Society of Japan, Vol. 78, No. 5, pp. 265-270, 2022. [doi]
  • 駒谷 和範:
    Information Extraction from Speech for Smooth Dialogue Progression (Special Issue: The Past and Future of Spoken Language Understanding) (in Japanese).
    Journal of the IEICE, Vol. 101, No. 9, pp. 908-913, 2018. [paper]
  • 駒谷 和範:
    Speech Interaction Based on Hierarchical Understanding of Utterance Behaviors (in Japanese).
    Journal of the Society of Instrument and Control Engineers, Vol. 55, No. 10, pp. 878-883, 2016. [doi]

国内研究会 – Domestic SIGs

  1. 駒谷 和範, 武田 龍, 岡田 将吾:
    Analysis of Subjective Annotation Results on a Multimodal Dialogue Corpus (in Japanese).
    96th Meeting of JSAI SIG-SLUD (13th Dialogue System Symposium), pp. 181-186, presented Dec. 14, 2022. [doi]
  2. 中野 幹生, 駒谷 和範:
    DialBB: A Dialogue System Construction Framework Oriented toward Educational Materials for Information Technology (in Japanese).
    96th Meeting of JSAI SIG-SLUD (13th Dialogue System Symposium), pp. 175-180, presented Dec. 14, 2022. [doi]
  3. 谷口 琉聖, 武田 龍, 駒谷 和範, 翠 輝久, 細見 直希, 山田 健太郎:
    Impact of Confidence Scores Obtained from an Object Detector on Dialogue System Performance (in Japanese).
    96th Meeting of JSAI SIG-SLUD (13th Dialogue System Symposium), pp. 63-68, presented Dec. 13, 2022. [doi]
  4. 久保 祐喜, 生嶋 竜実, 二瀬 颯斗, 武田 龍, 駒谷 和範:
    Building a Dialogue System That Guides User Utterances by Combining Rules and Examples (in Japanese).
    93rd Meeting of JSAI SIG-SLUD (12th Dialogue System Symposium), pp. 125-130, presented Nov. 29, 2021. [doi]
    (Excellence Award at the 4th Dialogue System Live Competition)
  5. 黒田 佑樹, 武田 龍, 駒谷 和範:
    Applying Deep Reinforcement Learning to Utterance Selection Emphasizing Consistency between System Utterances (in Japanese).
    93rd Meeting of JSAI SIG-SLUD (12th Dialogue System Symposium), pp. 62-67, presented Nov. 29, 2021. [doi]
  6. 駒谷 和範, 岡田 将吾, 堅田 俊:
    Release of the Multimodal Dialogue Corpus Hazumi and New Data Collection Including Physiological Signals (in Japanese).
    JSAI SIG Technical Reports, SIG-SLUD-C002-35, pp. 170-177, presented Dec. 1, 2020. [link]
  7. 中島 圭祐, 駒谷 和範:
    Automatic Extraction of System Utterance Candidates Using the Distribution of Entities in a Graph Database (in Japanese).
    JSAI SIG Technical Reports, SIG-SLUD-C002-05, pp. 16-19, presented Nov. 30, 2020. [link]

全国大会 – National Conventions

  1. 久保 祐喜, 羅 兆傑, 武田 龍, 駒谷 和範:
    Feature Selection for Cross-Corpus Sentiment Estimation in Multimodal Dialogue (in Japanese).
    IPSJ National Convention, 7U-04, presented Mar. 4, 2023. (Student Encouragement Award)
  2. 久保 裕之輔, 羅 兆傑, 武田 龍, 駒谷 和範:
    Allocation of Training Data for Per-User Sentiment Estimation in Multimodal Dialogue (in Japanese).
    IPSJ National Convention, 7P-06, presented Mar. 4, 2023.
  3. 生嶋 竜実, 武田 龍, 合原 一究, 駒谷 和範:
    Data Augmentation by Sound Synthesis for Supervised Monaural Source Separation of Frog Chorus Recordings (in Japanese).
    IPSJ National Convention, 5S-03, presented Mar. 3, 2023.
  4. 宗像 北斗, 坂東 宜昭, 武田 龍, 駒谷 和範, 大西 正輝:
    Deep Blind Source Separation of Moving Sound Sources Based on Joint Learning of Localization and Separation (in Japanese).
    IPSJ National Convention, 5S-01, presented Mar. 3, 2023. (Student Encouragement Award)
  5. 近辻 脩壱, 宗像 北斗, 武田 龍, 駒谷 和範:
    Knowledge Graph Augmentation Using Entity Identification for Improving Knowledge Graph Completion Performance (in Japanese).
    IPSJ National Convention, 4W-06, presented Mar. 3, 2023. (Student Encouragement Award)
  6. 大塩 幹, 宗像 北斗, 武田 龍, 駒谷 和範:
    Recognition of Unknown Words in Spoken Utterances Based on User Response Patterns during Dialogue (in Japanese).
    IPSJ National Convention, 4S-04, presented Mar. 3, 2023. (Student Encouragement Award)
  7. 奥野 尚己, 武田 龍, 駒谷 和範:
    Deciding Whether to Self-Disclose Using Multimodal Information in Dialogue Systems (in Japanese).
    IPSJ National Convention, 7ZE-01, presented Mar. 5, 2022. (Student Encouragement Award)
  8. 時末 卓幹, 武田 龍, 駒谷 和範, 翠 輝久, 細見 直希, 山田 健太郎:
    Question Selection for Destination Estimation through Dialogue Based on Incomplete Object Detection Results (in Japanese).
    IPSJ National Convention, 2ZE-04, presented Mar. 3, 2022.
  9. 黒田 佑樹, 武田 龍, 駒谷 和範:
    Utterance Selection Based on Reinforcement Learning Using Content Consistency between System Utterances (in Japanese).
    IPSJ National Convention, 4P-03, presented Mar. 19, 2021. (Student Encouragement Award)
  10. 奥野 尚己, 武田 龍, 駒谷 和範:
    Design and Estimation of User States for Adaptive Utterance Selection in Dialogue Systems (in Japanese).
    IPSJ National Convention, 4ZB-02, presented Mar. 6, 2020.

受賞 – Awards