
CURRICULUM VITAE: JUNICHI YAMAGISHI

The Centre for Speech Technology Research University of Edinburgh, Informatics Forum, 10 Crichton Street, Edinburgh, EH8 9AB, UK E-mail: [email protected] Telephone: +44-131-651-3284

Employment History
1. April 2007 - Present: Senior Research Fellow, The Centre for Speech Technology Research (CSTR), University of Edinburgh

2. April 2004 ‒ March 2007: Research Fellow, Japan Society for the Promotion of Science (JSPS), Japan

3. October 2003 ‒ March 2006: Intern Researcher, Advanced Telecommunications Research (ATR), Japan

Education
1. 2006 Ph.D., ‘Average-voice-based speech synthesis’, Tokyo Institute of Technology (TIT)
2. 2003 MEng, Information processing, Tokyo Institute of Technology
3. 2002 BEng, Computer science, Tokyo Institute of Technology

(1) Research and Development Achievements

Three Selected Major Publications

3.1. J. Yamagishi, T. Nose, H. Zen, Z. Ling, T. Toda, K. Tokuda, S. King, S. Renals, “A Robust Speaker-Adaptive HMM-based Text-to-Speech Synthesis,” IEEE Trans. Audio, Speech, & Language Processing, vol.17, no.6, pp.1208-1230, August 2009

3.2. Z-H. Ling, K. Richmond, J. Yamagishi, R.-H. Wang, “Integrating Articulatory Features into HMM-based Parametric Speech Synthesis,” IEEE Trans. Audio, Speech, & Language Processing, vol.17 No.6 pp.1171-1185 August 2009. (IEEE Signal Processing Society Young Author Best Paper Award 2010)

3.3. S. Creer, P. Green, S. Cunningham, and J. Yamagishi, “Building personalised synthesised voices for individuals with dysarthria using the HTS toolkit,” Computer Synthesized Speech Technologies: Tools for Aiding Impairment, John W. Mullennix and Steven E. Stern (Eds), IGI Global press, Jan. 2010. ISBN: 978-1-61520-725-1

List of Research Achievements

1. Degrees and Doctoral Thesis

1.1. Average-voice-based speech synthesis, PhD thesis, Tokyo Institute of Technology, 2006 (Tejima Doctoral Dissertation Award, 2007)


2. Peer-Reviewed Journal Papers

2.1. S. Andersson, J. Yamagishi, R.A.J. Clark, “Synthesis and Evaluation of Conversational Characteristics in HMM-based Speech Synthesis,” Speech Communication, 2011

2.2. J. Dines, H. Liang, L. Saheer, M. Gibson, W. Byrne, K. Oura, K. Tokuda, J. Yamagishi, S. King, M. Wester, T. Hirsimaki, R. Karhila, M. Kurimo, “Personalising Speech-to-Speech Translation: Unsupervised Cross-lingual Speaker Adaptation for HMM-based Speech Synthesis,” Computer Speech & Language, 2011

2.3. A. Stan, J. Yamagishi, S. King, and M. Aylett, “The Romanian Speech Synthesis (RSS) corpus: building a high quality HMM-based speech synthesis system using a high sampling rate”, Speech Communication, 2011 (awaiting publication)

2.4. T. Raitio, A. Suni, J. Yamagishi, H. Pulakka, J. Nurminen, M. Vainio, and P. Alku, “HMM-Based Speech Synthesis Utilizing Glottal Inverse Filtering,” IEEE Trans. Audio, Speech, & Language Processing vol.19, No.1, pp.153-165, January 2011

2.5. J. Dines, J. Yamagishi, and S. King, “Measuring the gap between HMM-based ASR and TTS,” IEEE Trans. Selected Topics in Signal Processing, vol.4, no.6, pp.1046‒1058, December 2010

2.6. Z-H. Ling, K. Richmond, J. Yamagishi, “An Analysis of HMM-based Prediction of Articulatory Movements,” Speech Communication, Volume 52, Issue 10, pages 834-846, October 2010

2.7. J. Yamagishi, B. Usabaev, S. King, O. Watts, J. Dines, J. Tian, R. Hu, Y. Guan, K. Oura, K. Tokuda, R. Karhila, M. Kurimo, “Thousands of Voices for HMM-based Speech Synthesis -- Analysis and Application of TTS Systems Built on Various ASR Corpora,” IEEE Trans. Audio, Speech, & Language Processing, vol.18, issue.5, pp.984-1004, July 2010

2.8. O. Watts, J. Yamagishi, S. King, K. Berkling, “Synthesis of Child Speech with HMM Adaptation and Voice Conversion,” IEEE Trans. Audio, Speech, & Language Processing, vol.18, issue.5, pp.1005-1016, July 2010

2.9. R. Barra-Chicote, J. Yamagishi, S. King, J. Manuel Montero, J. Macias-Guarasa, “Analysis of Statistical Parametric and Unit-Selection Speech Synthesis Systems Applied to Emotional Speech,” Speech Communication, Volume 52, Issue 5, pp. 394-404, May 2010

2.10. M. Pucher, D. Schabus, J. Yamagishi, F. Neubarth, “Modeling and Interpolation of Austrian German and Viennese Dialect in HMM-based Speech Synthesis,” Speech Communication, Volume 52, Issue 2, Pages 164-179, February 2010

2.11. J. Yamagishi, T. Nose, H. Zen, Z. Ling, T. Toda, K. Tokuda, S. King, S. Renals, “A Robust Speaker-Adaptive HMM-based Text-to-Speech Synthesis,” IEEE Trans. Audio, Speech, & Language Processing, vol.17, no.6, pp.1208-1230, August 2009

2.12. Z-H. Ling, K. Richmond, J. Yamagishi, R.-H. Wang, “Integrating Articulatory Features into HMM-based Parametric Speech Synthesis,” IEEE Trans. Audio, Speech, & Language Processing, vol.17 No.6 pp.1171-1185 August 2009. (IEEE Signal Processing Society Young Author Best Paper Award 2010)

2.13. J. Yamagishi, T. Kobayashi, Y. Nakano, K. Ogata, J. Isogai, “Analysis of Speaker Adaptation Algorithms for HMM-based Speech Synthesis and a Constrained SMAPLR Adaptation Algorithm,” IEEE Trans. Audio, Speech, & Language Processing, vol.17, issue 1, pp.66-83, January 2009


2.14. J. Yamagishi, H. Kawai, T. Kobayashi, “Phone Duration Modeling Using Gradient Tree Boosting,” Speech Communication, vol.50, issue 5, pp.405-415, May 2008

2.15. T. Nose, J. Yamagishi, T. Kobayashi, “A Style Control Technique for HMM-based Expressive Speech Synthesis,” IEICE Trans. Information and Systems, E90-D, no.9, pp.1406-1413, Sept. 2007

2.16. J. Yamagishi and T. Kobayashi, “Average-Voice-based Speech Synthesis using HSMM-based Speaker Adaptation and Adaptive Training,” IEICE Trans. Information and Systems, E90-D, no.2, pp.533-543, Feb. 2007

2.17. H. Kawai, T. Toda, J. Yamagishi, T. Hirai, J. Ni, N. Nishizawa, M. Tsuzaki, and K. Tokuda, “XIMERA: a concatenative speech synthesis system with large scale corpora,” IEICE Trans. Information and Systems, J89-D-II, no.12, pp.2688-2698, Dec. 2006 (in Japanese)

2.18. M. Tachibana, J. Yamagishi, T. Masuko, T. Kobayashi, “A Style Adaptation Technique for Speech Synthesis Using HSMM and Suprasegmental Features,” IEICE Trans. Information and Systems, E89-D, no.3, pp.1092-1099, March 2006.

2.19. M. Tachibana, J. Yamagishi, T. Masuko, T. Kobayashi, “Speech Synthesis with Various Emotional Expressions and Speaking Styles by Style Interpolation and Morphing,” IEICE Trans. Information and Systems, E88-D, no.11, pp.2484-2491, November 2005.

2.20. N. Niwase, J. Yamagishi, T. Kobayashi, “Human Walking Motion Synthesis with Desired Pace and Stride Length Based on HSMM,” IEICE Trans. Information and Systems, E88-D, no.11, pp.2492-2499, November 2005.

2.21. J. Yamagishi, K. Onishi, T. Masuko, T. Kobayashi, “Acoustic Modeling of Speaking Styles and Emotional Expressions in HMM-based Speech Synthesis,” IEICE Trans. Information and Systems, E88-D, no.3, pp.503-509, March 2005.

2.22. J. Yamagishi, M. Tamura, T. Masuko, K. Tokuda, T. Kobayashi, “A Training Method of Average Voice Model for HMM-based Speech Synthesis,” IEICE Trans. Fundamentals of Electronics, Communications and Computer Sciences, E86-A, no.8, pp.1956-1963, August 2003.

2.23. J. Yamagishi, M. Tamura, T. Masuko, K. Tokuda, T. Kobayashi, “A Context Clustering Technique for Average Voice Models,” IEICE Trans. Information and Systems, E86-D, no.3, pp.534-542, March 2003

Journal Papers Under Review

2.24. K. Hashimoto, J. Yamagishi, W. Byrne, S. King, and K. Tokuda, “Impact of Machine Translation And Speech Synthesis on Speech-To-Speech Translation System”, Speech Communication 2011

2.25. K. Oura, K. Tokuda, J. Yamagishi, S. King, M. Wester, “Analysis of unsupervised cross-lingual speaker adaptation for HMM-based speech synthesis using KLD-based transform mapping”, Speech Communication 2011

2.26. P.L. De Leon, M. Pucher, J. Yamagishi, I. Hernaez, I. Saratxaga, “Evaluation of speaker verification security and detection of synthetic speech”, IEEE Trans. Audio, Speech, & Language Processing 2011


3. Books and Review Articles

3.1. J. Yamagishi, C. Veaux, S. King, S. Renals, “Voice banking and reconstruction - Speech synthesis technologies for individuals with vocal disabilities,” The Acoustical Society of Japan, Dec 2011

3.2. J. Yamagishi, C. Veaux, S. King, S. Renals, “Voice banking and reconstruction - Speech synthesis technologies for patients with dysarthria” (in Japanese), The Acoustical Society of Japan, Dec 2011

3.3. S. Creer, P. Green, S. Cunningham, and J. Yamagishi, “Building personalised synthesised voices for individuals with dysarthria using the HTS toolkit,” Computer Synthesized Speech Technologies: Tools for Aiding Impairment, John W. Mullennix and Steven E. Stern (Eds), IGI Global press, Jan. 2010. ISBN: 978-1-61520-725-1

4. Patents, Software, and Databases

Patents

4.1. "Efficient speaker adaptation system for robust HMM-based speech synthesis", UK priority application No: 0911494.3

4.2. 2006-84967, “Method and computer program for creating prediction models” (in Japanese)

4.3. 2005-269624, “Speech synthesis apparatus, training data generation apparatus, and pause prediction apparatus” (in Japanese)

4.4. 2006-41172, “Speech style detection apparatus, method, and program” (in Japanese)

Software

4.5. ATR Text-to-speech synthesis system “XIMERA”

4.6. HMM-based speech synthesis toolkit “HTS”, http://hts.sp.nitech.ac.jp/

4.7. Festival speech synthesis system, http://www.cstr.ed.ac.uk/projects/festival/

4.8. Signal processing toolkit “SPTK”, http://sp-tk.sourceforge.net/

4.9. HTS Voice Library, http://homepages.inf.ed.ac.uk/jyamagis/library/

4.10. Voice of the world, http://homepages.inf.ed.ac.uk/jyamagis/Demo-html/map-new.html

4.11. Voice Clone Toolkit, http://homepages.inf.ed.ac.uk/jyamagis/software/software.html

Databases

4.12. Romanian Speech Synthesis (RSS) corpus

4.13. Scottish English Speech database

5. Peer-Reviewed International Conference Papers and Presentations

Tutorials

5.1. Junichi Yamagishi and Simon King, “New and emerging applications of speech synthesis,” Invited tutorial at the 2010 International Symposium on Chinese Spoken Language Processing, Taiwan, November 2010, http://conf.ncku.edu.tw/iscslp2010/Tutorial.htm


5.2. L. Saheer, J. Yamagishi, P.N. Garner, J. Dines, “Combining Vocal Tract Length Normalization with Linear Transformations in a Bayesian Framework,” Proc. ICASSP 2012

5.3. Cassia Valentini-Botinhao, Ranniery Maia, Junichi Yamagishi, Simon King, Heiga Zen, “Cepstral analysis based on the Glimpse proportion measure for improving the intelligibility of HMM-based synthetic speech in noise,” Proc. ICASSP 2012

5.4. Christophe Veaux, Junichi Yamagishi, Simon King, “Voice banking and voice reconstruction for MND patients,” Proc. ASSETS 2011

5.5. Cassia Valentini-Botinhao, Junichi Yamagishi, and Simon King, “Can objective measures predict the intelligibility of modified HMM-based synthetic speech in noise?”, Proc. Interspeech 2011

5.6. O. Watts, J. Yamagishi, S. King, "Unsupervised continuous-valued word features for phrase-break predictions without a part-of-speech tagger” Proc Interspeech 2011

5.7. M. Lei, J. Yamagishi, K. Richmond, Z.-H. Ling, S. King, L.-R. Dai, “Formant-controlled HMM-based speech synthesis,” Proc. Interspeech 2011

5.8. Z.-H. Ling, K. Richmond, J. Yamagishi, “Feature-space transform tying in unified acoustic-articulatory modelling for articulatory control of HMM-based speech synthesis” Proc. Interspeech 2011

5.9. Cassia Valentini-Botinhao, Junichi Yamagishi, and Simon King, “Evaluation of Objective Measures for Intelligibility Prediction of HMM-Based Synthetic Speech In Noise”, Proc. ICASSP 2011

5.10. Sandra Andraszewicz, Junichi Yamagishi, and Simon King, “Vocal Attractiveness of Statistical Speech Synthesisers”, Proc. ICASSP 2011,

5.11. Phillip L. De Leon, Inma Hernaez, Ibon Saratxaga, Michael Pucher, and Junichi Yamagishi “Detection of Synthetic Speech for the Problem of Imposture”, Proc. ICASSP 2011

5.12. Kei Hashimoto, Junichi Yamagishi, William Byrne, Simon King, and Keiichi Tokuda, “An Analysis of Machine Translation And Speech Synthesis In Speech-To-Speech Translation System”, Proc. ICASSP 2011,

5.13. Joao P. Cabral, Steve Renals, Junichi Yamagishi and Korin Richmond, “HMM-Based Speech Synthesiser Using The LF-Model of The Glottal Source,” Proc. ICASSP 2011

5.14. O. Watts, J. Yamagishi, S. King, "Letter-based speech synthesis", Proc. 7th ISCA speech synthesis workshop, SSW7, September 2010

5.15. S. Andersson, J. Yamagishi, R. Clark. "Utilising Spontaneous Conversational Speech in HMM-Based Speech Synthesis" Proc. 7th ISCA speech synthesis workshop, SSW7, September 2010

5.16. J.P. Cabral, S. Renals, K. Richmond, J. Yamagishi, "Transforming Voice Source Parameters in a HMM-based Speech Synthesiser with Glottal Post-Filtering", Proc. 7th ISCA speech synthesis workshop, SSW7, September 2010

5.17. Yong Guan, Jilei Tian, Yi-Jian Wu, Junichi Yamagishi and Jani Nurminen, “An Unified and Automatic Approach of Mandarin HTS System,” Proc. 7th ISCA speech synthesis workshop, SSW7, September 2010


5.18. M. Wester, M. Kurimo, W. Byrne, J. Dines, P.N. Garner, M. Gibson, Y. Guan, T. Hirsimaki, R. Karhila, S. King, H. Liang, K. Oura, L. Saheer, M. Shannon, S. Shiota, J. Tian, K. Tokuda, Y.-J. Wu, J. Yamagishi, "Speaker adaptation and the evaluation of speaker similarity in the EMIME speech-to-speech translation project" Proc. 7th ISCA speech synthesis workshop, SSW7, September 2010

5.19. J. Yamagishi, O. Watts, S. King, B. Usabaev, "Roles of the Average Voice in Speaker-adaptive HMM-based Speech Synthesis" Proc. Interspeech 2010, September 2010

5.20. Oliver Watts, Junichi Yamagishi, Simon King, "The role of higher-level linguistic features in HMM-based speech synthesis" Proc. Interspeech 2010, September 2010

5.21. M. Pucher, D. Schabus, J. Yamagishi, "Synthesis of fast speech with interpolation of adapted HSMMs and its evaluation by blind and sighted listeners" Proc. Interspeech 2010, September 2010

5.22. Z.-H. Ling, K. Richmond, J. Yamagishi, "HMM-based Text-to-Articulatory-Movement Prediction and Analysis of Critical Articulators" Proc. Interspeech 2010, September 2010

5.23. M. Kurimo, W. Byrne, J. Dines, P.N. Garner, M. Gibson, Y. Guan, T. Hirsimaki, R. Karhila, S. King, H. Liang, K. Oura, L. Saheer, M. Shannon, S. Shiota, J. Tian, K. Tokuda, M. Wester, Y.-J. Wu, J. Yamagishi, “Personalising speech-to-speech translation in the EMIME project,” Proc ACL 2010

5.24. P.L. De Leon, M. Pucher, J. Yamagishi, “Evaluation of the Vulnerability of Speaker Verification to Synthetic Speech,” Proc. 2010 Odyssey (The Speaker and Language Recognition Workshop)

5.25. J. Yamagishi, S. King, “Simple methods for improving speaker-similarity of HMM-based speech synthesis”, Proc. ICASSP 2010

5.26. P.L. De Leon, V.R. Apsingekar, M. Pucher, J. Yamagishi, “Revisiting the security of speaker verification systems against imposture using synthetic speech”, Proc. ICASSP 2010

5.27. K. Oura, K. Tokuda, J. Yamagishi, S. King, M. Wester, “Unsupervised cross-lingual speaker adaptation for HMM-based speech synthesis”, Proc. ICASSP 2010

5.28. Junichi Yamagishi, Bela Usabaev, Simon King, Oliver Watts, John Dines, Jilei Tian, Rile Hu, Yong Guan, Keiichiro Oura, Keiichi Tokuda, Reima Karhila, Mikko Kurimo, “Thousands of Voices for HMM-based Speech Synthesis,” Proc. Interspeech 2009

5.29. John Dines, Junichi Yamagishi, Simon King, “Measuring the gap between HMM-based ASR and TTS,” Proc. Interspeech 2009

5.30. Oliver Watts, Junichi Yamagishi, Simon King, Kay Berkling, “HMM adaptation and voice conversion for the synthesis of child speech: a comparison” Proc. Interspeech 2009

5.31. Leonardo Badino, J. Sebastian Andersson, Junichi Yamagishi, Robert A.J. Clark, “Identification of Contrast and Its Emphatic Realization in HMM-based Speech Synthesis,” Proc. Interspeech 2009

5.32. Matthew P. Aylett, Simon King, Junichi Yamagishi, “Speech synthesis without a phone inventory,” Proc. Interspeech 2009


5.33. Heiga Zen, Keiichiro Oura, Takashi Nose, Junichi Yamagishi, Shinji Sako, Tomoki Toda, Takashi Masuko, Alan W. Black, Keiichi Tokuda, “Recent development of the HMM-based speech synthesis system (HTS),” Proc. 2009 Asia-Pacific Signal and Information Processing Association (APSIPA)

5.34. Oliver Watts, Junichi Yamagishi, Kay Berkling, Simon King, “HMM-based synthesis of child speech,” The 1st Workshop on Child, Computer and Interaction (ICMI'08 post-conference workshop), 2008.

5.35. Junichi Yamagishi, Zhenhua Ling, Simon King, “Robustness of HMM-based Speech Synthesis,” Proc. Interspeech 2008

5.36. Zhen-Hua Ling, Korin Richmond , Junichi Yamagishi, Ren-Hua Wang, “Articulatory Control of HMM-based Parametric Speech Synthesis Driven by Phonetic Knowledge,” Proc. Interspeech 2008

5.37. Simon King, Keiichi Tokuda, Heiga Zen, Junichi Yamagishi, “Unsupervised adaptation for HMM-based speech synthesis,” Proc. Interspeech 2008

5.38. Gregor Hofer, Junichi Yamagishi, Hiroshi Shimodaira, “Speech-driven Lip Motion Generation with a Trajectory HMM,” Proc. Interspeech 2008

5.39. Joao P. Cabral, Steve Renals, Korin Richmond and Junichi Yamagishi, “Glottal Spectral Separation for Parametric Speech Synthesis,” Proc. Interspeech 2008

5.40. Junichi Yamagishi, Takashi Nose, Heiga Zen, Tomoki Toda, Keiichi Tokuda, “Performance Evaluation of The Speaker-Independent HMM-based Speech Synthesis System "HTS-2007" for the Blizzard Challenge 2007,” Proc. ICASSP 2008

5.41. Matthew P. Aylett, Junichi Yamagishi, “Combining Statistical Parametric Speech Synthesis and Unit-Selection for Automatic Voice Cloning,” Proc. LangTech 2008

5.42. Junichi Yamagishi, Takao Kobayashi, Steve Renals, Simon King, Heiga Zen, Tomoki Toda, Keiichi Tokuda, “Improved Average-Voice-based Speech Synthesis using Gender-Mixed Modeling and A Parameter Generation Algorithm considering GV,” Proc. SSW 6, pp.125-130, 2007.

5.43. Heiga Zen, Takashi Nose, Junichi Yamagishi, Shinji Sako, Keiichi Tokuda, “The HMM-based speech synthesis system (HTS) version 2.0,” Proc. SSW 6, pp.294-299, Aug. 2007.

5.44. Toshio Hirai, Junichi Yamagishi, Seiichi Tenpaku, “Utilization of an HMM-Based Feature Generation Module in 5 ms Segment Concatenative Speech Synthesis,” Proc. SSW 6, pp.81-84, Aug. 2007.

5.45. Joao P. Cabral, Steve Renals, Korin Richmond and Junichi Yamagishi, “Towards an Improved Modeling of the Glottal Source in Statistical Parametric Speech Synthesis,” Proc. SSW 6, pp.113-118, Aug. 2007.

5.46. Gregor Hofer, Hiroshi Shimodaira, Junichi Yamagishi, “Lip Motion synthesis using a context dependent trajectory Hidden Markov Model,” Proc. the 2007 ACM/Eurographics Symposium on Computer Animation Poster, 2007.

5.47. Gregor Hofer, Hiroshi Shimodaira, Junichi Yamagishi, “Speech driven Head Motion Synthesis based on a Trajectory Model,” Proc. SIGGRAPH2007 Poster, 2007.


5.48. Makoto Tachibana, Keigo Kawashima, Junichi Yamagishi, Takao Kobayashi, “Performance Evaluation of HMM-Based Style Classification with a Small Amount of Training Data,” Proc. EUROSPEECH 2007, Aug. 2007.

5.49. Junichi Yamagishi, Takao Kobayashi, Makoto Tachibana, Katsumi Ogata, Yuji Nakano, “Model adaptation approach to speech synthesis with diverse voices and styles,” Proc. ICASSP 2007, pp.1233--1236, April 2007.

5.50. Takashi Nose, Junichi Yamagishi, Takao Kobayashi, “A Style Control Technique for Speech Synthesis Using Multiple Regression HSMM,” Proc. ICSLP 2006, pp.1324--1327, Sept. 2006.

5.51. Katsumi Ogata, Makoto Tachibana, Junichi Yamagishi, Takao Kobayashi, “Acoustic Model Training Based on Linear Transformation and MAP Modification for HSMM-Based Speech Synthesis,” Proc. ICSLP 2006, pp.1328--1331, Sept. 2006.

5.52. Makoto Tachibana, Takashi Nose, Junichi Yamagishi, Takao Kobayashi, “A Technique for Controlling Voice Quality of Synthetic Speech Using Multiple Regression HSMM,” Proc. ICSLP 2006, pp.2438--2441, Sept. 2006.

5.53. Yuji Nakano, Makoto Tachibana, Junichi Yamagishi, Takao Kobayashi, “Constrained Structural Maximum A Posteriori Linear Regression for Average-Voice-Based Speech Synthesis,” Proc. ICSLP 2006, pp.2286--2289, Sept. 2006.

5.54. Junichi Yamagishi, Katsumi Ogata, Yuji Nakano, Juri Isogai, Takao Kobayashi, “HSMM-based model adaptation algorithms for average-voice-based speech synthesis,” Proc. ICASSP 2006, May 2006.

5.55. Takashi Yamazaki, Naotake Niwase, Junichi Yamagishi, Takao Kobayashi, “Human Walking Motion Synthesis Based on Multiple Regression Hidden Semi-Markov Model,” Proc. LUAR 2005, Nov. 2005.

5.56. Makoto Tachibana, Junichi Yamagishi, Takashi Masuko, Takao Kobayashi, “Performance Evaluation of Style Adaptation for Hidden Semi-Markov Model,” Proc. EUROSPEECH 2005, Sept. 2005.

5.57. Juri Isogai, Junichi Yamagishi, Takao Kobayashi, “Model Adaptation and Adaptive Training using ESAT Algorithm for HMM-based Speech Synthesis,” Proc. EUROSPEECH 2005, Sept. 2005.

5.58. Junichi Yamagishi, Takao Kobayashi, “Adaptive training for hidden semi-Markov model,” Proc. ICASSP 2005, vol.I, pp.365-368, March 2005.

5.59. Junichi Yamagishi, Takashi Masuko, Takao Kobayashi, “MLLR adaptation for hidden semi-Markov model based speech synthesis,” Proc. ICSLP 2004, vol.II, pp.1213--1216, October 2004.

5.60. Junichi Yamagishi, Makoto Tachibana, Takashi Masuko, Takao Kobayashi, “Speaking style adaptation using context clustering decision tree for HMM-based speech synthesis” Proc. ICASSP 2004 , vol.I, pp.5-8, May 2004.

5.61. Makoto Tachibana, Junichi Yamagishi, Koji Onishi, Takashi Masuko, Takao Kobayashi, “HMM-based speech synthesis with various speaking styles using model interpolation” Proc. Speech Prosody 2004 , pp.413-416, March 2004.


5.62. Junichi Yamagishi, Takashi Masuko, Takao Kobayashi, “HMM-based expressive speech synthesis -- Towards TTS with arbitrary speaking styles and emotions” Special Workshop in Maui (SWIM) , January 2004.

5.63. Junichi Yamagishi, Koji Onishi, Takashi Masuko, Takao Kobayashi, “Modeling of various speaking styles and emotions for HMM-based speech synthesis,” Proc. EUROSPEECH '03, vol.III, pp.2461-2464, September 2003.

5.64. Junichi Yamagishi, Takashi Masuko, Keiichi Tokuda, Takao Kobayashi, “A training method for average voice model based on shared decision tree context clustering and speaker adaptive training” Proc. ICASSP 2003, vol.I, pp.716-719, April 2003.

5.65. Junichi Yamagishi, Masatsune Tamura, Takashi Masuko, Keiichi Tokuda, Takao Kobayashi, “A context clustering technique for average voice model in HMM-based speech synthesis” Proc. ICSLP 2002, vol.1, pp.133-136, September 2002.

6. Other Papers

Speech Synthesis Challenges

6.1. Junichi Yamagishi and Oliver Watts, “The CSTR/EMIME HTS System for Blizzard Challenge 2010”, Proc Blizzard Challenge, Kyoto Japan, September 2010

6.2. J. Yamagishi, M. Lincoln, S. King, J. Dines, M. Gibson, J. Tian, Y. Guan, “Analysis of Unsupervised and Noise-Robust Speaker-Adaptive HMM-Based Speech Synthesis Systems toward a Unified ASR and TTS Framework” Proc. Blizzard Challenge 2009

6.3. J. Sebastian Andersson, J.P. Cabral, L. Badino, J. Yamagishi, R.A.J. Clark, “Glottal Source and Prosodic Prominence Modelling in HMM-based Speech Synthesis for the Blizzard Challenge 2009,” Proc. Blizzard Challenge 2009

6.4. Junichi Yamagishi, Heiga Zen, Yi-Jian Wu, Tomoki Toda, Keiichi Tokuda, “The HTS-2008 System: Yet Another Evaluation of the Speaker-Adaptive HMM-based Speech Synthesis System in The 2008 Blizzard Challenge,” Proc. Blizzard Challenge 2008

6.5. Korin Richmond, Volker Strom, Robert Clark, Junichi Yamagishi, Sue Fitt, “Festival Multisyn Voices for the 2007 Blizzard Challenge,” Proc. Blizzard Challenge 2007, 2007.

6.6. Junichi Yamagishi, Heiga Zen, Tomoki Toda, Keiichi Tokuda, “Speaker-Independent HMM-based Speech Synthesis System -- HTS-2007 System for the Blizzard Challenge 2007,” Proc. Blizzard Challenge 2007, 2007.

6.7. Tomoki Toda, Hisashi Kawai, Toshio Hirai, Jinfu Ni, Nobuyuki Nishizawa, Junichi Yamagishi, Minoru Tsuzaki, Keiichi Tokuda, and Satoshi Nakamura, “Developing a Test Bed of English Text-to-Speech System XIMERA for the Blizzard Challenge 2006,” The Blizzard Challenge 2006

Presentations at Spanish Conferences (in Spanish)

6.8. R. Barra-Chicote, J. Yamagishi, J. M. Montero, O. Watts, S. King, and J. Macias-Guarasa, “The GTH-CSTR Entries for the Speech Synthesis Albayzin 2010 Evaluation: HMM-based Speech Synthesis Systems considering morphosyntactic features and Speaker Adaptation Techniques”, Proc FALA 2010 (VI Jornadas en Tecnología del Habla and II Iberian SLTech Workshop), November 2010


6.9. R. Barra-Chicote, J. Yamagishi, J. M. Montero, S. King, S. Lufti, J. Macias-Guarasa, “GENERACIÓN DE UNA VOZ SINTÉTICA EN CASTELLANO BASADA EN HSMM PARA LA EVALUACIÓN ALBAYZÍN 2008: CONVERSIÓN TEXTO A VOZ,” (Spanish text-to-speech synthesis challenge) Proc. V Workshop on Speech Technology 2008

Presentations and Technical Reports at Domestic Workshops and Conferences in Japan

6.10. Keiichiro Oura, Junichi Yamagishi, Mirjam Wester, Simon King, and Keiichi Tokuda, “Unsupervised Speaker Adaptation for Speech-to-Speech Translation System,” IEICE Technical Report, vol.109, no.356, pp.13-18, December 2009.

6.11. Keiichiro Oura, Junichi Yamagishi, Simon King, Mirjam Wester, and Keiichi Tokuda, “Unsupervised English-to-Japanese speaker adaptation for HMM-based speech synthesis,” Proc. Autumn Meeting of the Acoustical Society of Japan 2009, vol.I, 3-P-18, pp.401-402, September 2009.

6.12. Heiga Zen, Keiichiro Oura, Takashi Nose, Junichi Yamagishi, Shinji Sako, Tomoki Toda, Takashi Masuko, Alan W. Black, Keiichi Tokuda, “Recent developments of the HMM-based speech synthesis system (HTS),” IEICE Technical Report, vol.147, SP2007, pp.301-306, December 2007 (Information Processing Society of Japan Yamashita SIG Research Award 2009)

6.13. Katsumi Ogata, Makoto Tachibana, Junichi Yamagishi, Takao Kobayashi, “Acoustic model training based on linear transformation and MAP for average-voice-based speech synthesis” (in Japanese), IEICE Technical Report, vol.106, no.333, SP2006-84, pp.49-54, November 2006

6.14. Takashi Nose, Junichi Yamagishi, Takao Kobayashi, “Style control of synthetic speech using multiple-regression HSMM” (in Japanese), IEICE Technical Report, vol.105, no.572, SP2005-160, pp.61-66, January 2006

6.15. Keigo Kawashima, Makoto Tachibana, Junichi Yamagishi, Takao Kobayashi, “A study on style classification of speech based on MSD-HMM” (in Japanese), IEICE Technical Report, vol.105, no.496, SP2005-136, pp.151-156, December 2005.

6.16. Junichi Yamagishi, Hisashi Kawai, Toshio Hirai, Takao Kobayashi, “Phone duration modeling based on ensemble learning” (in Japanese), IEICE Technical Report, vol.105, no.253, SP2005-53, pp.7-12, August 2005.

6.17. Makoto Tachibana, Junichi Yamagishi, Takao Kobayashi, “Evaluation of style adaptation techniques for speech synthesis based on hidden semi-Markov models” (in Japanese), IEICE Technical Report, vol.105, no.252, SP2005-51, pp.29-34, August 2005.

6.18. Juri Isogai, Katsumi Ogata, Yuji Nakano, Takashi Nose, Junichi Yamagishi, Takao Kobayashi, “A study on model adaptation and adaptive training algorithms for diverse speech synthesis” (in Japanese), IEICE Technical Report, vol.105, no.252, SP2005-50, pp.23-28, August 2005.

6.19. Toshio Hirai, Hisashi Kawai, Tomoki Toda, Junichi Yamagishi, Jinfu Ni, Nobuyuki Nishizawa, Minoru Tsuzaki, Keiichi Tokuda, “Corpus-based speech synthesis system XIMERA” (in Japanese), IEICE Technical Report, vol.105, no.98, SP2005-18, pp.37-42, May 2005.

6.20. Junichi Yamagishi, Makoto Tachibana, Takashi Masuko, Takao Kobayashi, “A study on style adaptation using maximum likelihood linear regression for HSMM-based speech synthesis” (in Japanese), IEICE Technical Report, vol.104, no.252, SP2004-49, pp.13-18, August 2004

6.21. Junichi Yamagishi, Takashi Masuko, Keiichi Tokuda, Takao Kobayashi, “A study on speaker adaptation using context-clustering decision trees in HMM-based speech synthesis” (in Japanese), IEICE Technical Report, vol.103, no.264, SP2003-79, pp.31-36, August 2003.


6.22. Makoto Tachibana, Junichi Yamagishi, Koji Onishi, Takashi Masuko, Takao Kobayashi, “A study on diversifying speaking styles by model interpolation and adaptation in HMM-based speech synthesis” (in Japanese), IEICE Technical Report, vol.103, no.264, SP2003-80, pp.37-42, August 2003.

6.23. Junichi Yamagishi, Masatsune Tamura, Takashi Masuko, Takao Kobayashi, Keiichi Tokuda, “A study on context clustering and speaker adaptive training for average voice model construction” (in Japanese), IEICE Technical Report, vol.102, no.292, SP2002-72, pp.5-10, August 2002.

6.24. Junichi Yamagishi, Masatsune Tamura, Takashi Masuko, Takao Kobayashi, Keiichi Tokuda, “A study on context clustering techniques for average voice model construction” (in Japanese), IEICE Technical Report, vol.102, no.108, SP2002-28, pp.25-30, May 2002.

A further 40 papers in Japanese are listed at http://homepages.inf.ed.ac.uk/jyamagis/publication/publication.html

Invited Talks and Seminars

6.25. Junichi Yamagishi, “Text-to-speech synthesis,” TH:A Application of speech technology, Summer School in Granada, Spain, July 2011

6.26. Simon King, Junichi Yamagishi, Oliver Watts, “New and emerging applications of adaptive speech synthesis,” SCALE workshop, Edinburgh, UK, June 2011

6.27. Junichi Yamagishi, “New and emerging applications of adaptive speech synthesis,” Speech synthesis seminar series, University of Cambridge, UK, February 2011

6.28. Junichi Yamagishi, “New and emerging applications of speech synthesis”, Nokia Research Center, Beijing, China, September 2010

6.29. Junichi Yamagishi, “1000s voices and attractive voices for speech synthesis”, Signal processing laboratory, University of the Basque country, Bilbao, Spain, September 2010

6.30. Junichi Yamagishi, "Recent development and applications of HMM-based speech synthesis", iFlytek and University of Science and Technology of China, Hefei, China, March 2009

6.31. Junichi Yamagishi, “HMM-based expressive speech synthesis: Towards TTS with arbitrary speakers, speaking styles and emotions”, Telecommunications Forum, The telecommunications research center Vienna (FTW), Vienna, Austria, January 2008

6.32. Junichi Yamagishi, “Model Adaptation Approach to Speech Synthesis with Diverse Voice and Styles”, Nuance Communications Inc, Norwich, UK, February 2007

6.33. Junichi Yamagishi, “Model Adaptation Approach to Speech Synthesis with Diverse Voice and Styles”, SpandH seminar, The Speech and Hearing Research Group, University of Sheffield, Sheffield, UK, December 2006

Mini-Workshops

6.34. Junichi Yamagishi, “Open-source/creative commons speech databases for speech synthesis,” Special session “Open source initiative for speech synthesis,” 7th ISCA Speech Synthesis Workshop, Kyoto, Japan, September 2010

6.35. Junichi Yamagishi, “Overview of speech synthesis research of the EMIME project”, Joint EMIME workshop with Toshiba and Phonetic Arts (now Google) on HMM-based speech synthesis, Cambridge, UK May 2010


6.36. Junichi Yamagishi, “CSTR HTS Voice Library and Voice Cloning Toolkits for the Festival Text-to-Speech Synthesis System” The first HTS meeting, Brighton UK, 2009

6.37. Junichi Yamagishi, “Hundreds of Voices for HMM-based Speech Synthesis ‒ Building TTS Systems on ASR Corpora,” 2nd One Day Meeting on Unified Models for Speech Recognition and Synthesis, Department of Electronic, Electrical and Computer Engineering, The University of Birmingham, March 2009.

6.38. Junichi Yamagishi, “Model Adaptation Approach to Speech Synthesis with Diverse Voice and Styles”, One day meeting on unified Models for Speech Recognition and Synthesis, Department of Electronic, Electrical and Computer Engineering The University of Birmingham, January 2007.

Demonstrations and Exhibits

6.39. Christophe Veaux and Junichi Yamagishi “Voice banking and reconstruction for MND patients” Second Workshop on Speech and Language Processing for Assistive Technology (SLPAT), Edinburgh, U.K., July 2011

6.40. Mikko Kurimo et al. “Personalising speech-to-speech translation in the EMIME project”, Exhibits, Association for Computational Linguistics (ACL) 2010, Uppsala Sweden, July 2010

6.41. Junichi Yamagishi et al., “While-you-wait Voice Cloning”, Public Exhibition, Interspeech 2009

7. Practical Applications and Technology Transfer

7.1. Knowledge transfer partnership (KTP) with Orange Telecommunication

7.2. Participation in standardization experiments on speech synthesis systems for visually impaired users, run by the ANSI standards group on text-to-speech synthesis technology (S3-WG91)

7.3. Clinical applications and trials at Barnsley Hospital NHS Foundation Trust (Sheffield, UK) and the Euan MacDonald Centre for motor neuron disease research (Edinburgh, UK)

7.4. Provided a speech synthesis announcement system to CERN (the European Organization for Nuclear Research)

7.5. Provided a speech synthesis system to Navaho Technologies Ltd (UK)

8. Academic Awards

8.1. Tejima Doctoral Dissertation Award, Tejima Foundation (2006)

8.2. Itakura Memorial Research Award, Acoustical Society of Japan (2010)

8.3. Best Poster Award, Acoustical Society of Japan (co-authored; awarded to Yuji Nakano, 2004)

8.4. IPSJ Yamashita SIG Research Award (co-authored; awarded to Heiga Zen, 2009)

8.5. IEEE Signal Processing Society Young Author Best Paper Award 2010 (co-authored; awarded to Zhenhua Ling, 2011)


(2) Activities in Academic Societies

1. Society Memberships and Positions
I. IEEE, Member
II. IEEE Signal Processing Society, Member
III. ISCA (International Speech Communication Association), Member
IV. Acoustical Society of Japan, Member

2. Society Offices and Session Chairing
I. Interspeech 2010 (Makuhari, Japan): Session chair, “Speech synthesis III”
II. Interspeech 2011 (Florence, Italy): Session chair, “Voice conversion”
III. Interspeech 2011 (Florence, Italy): Session chair, “Speech synthesis - Unit selection and hybrid approaches”
IV. Interspeech 2012 (Oregon, US): Area coordinator, “Speech production and synthesis”

3. Research Grants
European Commission (EC FP7)
I. 2011-2014 SIMPLE4ALL, EC FP7, £500k, co-author
II. 2010-2013 The Listening Talker, EC FP7, £510k, work-package leader
III. 2008-2011 Effective multilingual interaction in mobile environments, EC FP7, £500k, co-author

UK (EPSRC, RSE)
IV. 2011-2013 Unified articulatory-acoustic modelling for flexible and controllable speech synthesis, RSE-NSFC joint projects, travel grant, £11k, Principal investigator
V. 2011-2016 Deep architectures for statistical speech synthesis, EPSRC CAF, £914k, Principal investigator

Charity
VI. 2011 Silent speech interface for MND patients, EMC seed funding, £5k, Principal investigator
VII. 2011-2012 Voice reconstruction, Donation, £50k, Principal investigator

Japan
VIII. 2011-2016 Establishment of a next-generation speech technology infrastructure centred on a content-generation cycle, JST CREST, £700k, Co-investigator
IX. 2004-2007 JSPS Research Fellowship (DC1), Japan Society for the Promotion of Science

4. Other
I. Planned, coordinated, and ran the special session “Open Source Initiative for Speech Synthesis” at the 7th ISCA Speech Synthesis Workshop (SSW7), jointly with Carnegie Mellon University and Nagoya Institute of Technology.

II. Planned, coordinated, and ran “The HTS Meeting” as a mini-workshop of Interspeech 2009, jointly with Nagoya Institute of Technology.

III. Exhibited and demonstrated the “Voice Clone Toolkit” at the public exhibition booth of Interspeech 2009.


(3) Professional Activities

1. Major Positions and Responsibilities
I. PhD Supervision
2006-2010: Ph.D. student Joao Cabral (co-supervisor): voice source modelling
2007-present: Ph.D. student Oliver Watts (co-supervisor): unsupervised text analysis
2009-present: Ph.D. student Cassia Valentini (co-supervisor): speech synthesis using auditory models

II. European Commission Seventh Framework Programme (FP7) projects, work-package leader:
The Listening Talker, WP3 (Text-to-speech); SIMPLE4ALL, WP4 (Acoustic modelling)

III. Other Positions
2010-2012: Project Associate Professor, Nagoya Institute of Technology
2010-present: Member, The Euan MacDonald Centre for motor neuron disease research
2008-present: Technical advisor, CereProc Ltd., UK

2. Other
I. Related Projects
I. 2007-2008 Automatically-determined unit inventories for unit selection text-to-speech synthesis, EPSRC

II. 2007-2009 Viennese Sociolect and Dialect Synthesis, WWTF, Science for creative industries programme

III. 2008-2011 Effective multilingual interaction in mobile environments, EC FP7
IV. 2010-2013 The Listening Talker, EC FP7
V. 2011-2012 Voice reconstruction, Donation
VI. 2011-2013 Unified articulatory-acoustic modelling for flexible and controllable speech synthesis, RSE-NSFC

VII. 2011-2014 SIMPLE4ALL, EC FP7
VIII. 2011-2014 INSPIRE, EC FP7-PEOPLE-ITN
IX. 2011-2016 Deep architectures for statistical speech synthesis, EPSRC
X. 2011-2016 Natural speech technology, EPSRC
XI. 2011-2016 Establishment of a next-generation speech technology infrastructure centred on a content-generation cycle, JST CREST

II. Hosting and Supervision of Visiting Researchers
2007: Sarah Creer (University of Sheffield, UK)
2007: Zhenhua Ling (University of Science and Technology of China, China)
2008: Bela Usabaev (University of Tübingen, Germany)
2008: Roberto Barra-Chicote (Universidad Politecnica de Madrid, Spain)
2009: Tuomo Raitio (Helsinki University of Technology, Finland)
2009: Keiichiro Oura (Nagoya Institute of Technology, Japan)
2010: Adriana Stan (Universitatea Tehnica din Cluj-Napoca, Romania)
2010: Kei Hashimoto (Nagoya Institute of Technology, Japan)
2011: Ming Lei (University of Science and Technology of China, China)
2011: Moses Ekpenyong (University of Uyo, Nigeria)
2011: Kei Hashimoto (Nagoya Institute of Technology, Japan)

15

Page 16: CURRICULUM VITAE: JUNICHI YAMAGISHIhomepages.inf.ed.ac.uk/jyamagis/Bio/CV/files/my-cv... · 2011-10-06 · CURRICULUM VITAE: JUNICHI YAMAGISHI The Centre for Speech Technology Research

2011: Chenyu Yang (University of Science and Technology of China, China)
III. External Examiner of PhD Examinations
I. 2011 Roberto Barra-Chicote (Universidad Politecnica de Madrid, Spain)

IV. Invited Talks
2011 University of Granada, Spain
2011 University of Cambridge, UK
2010 The International Symposium on Chinese Spoken Language Processing, Taiwan
2010 Nokia Research Center, Beijing, China
2010 Signal processing laboratory, University of the Basque Country, Bilbao, Spain
2009 iFlytek and University of Science and Technology of China, Hefei, China
2008 The telecommunications research center Vienna (FTW), Vienna, Austria
2007 Nuance Communications Inc, Norwich, UK
2007 University of Sheffield, Sheffield, UK

V. Open-Source Software Development and Release
1. HTS, http://hts.sp.nitech.ac.jp/
2. Festival, http://www.cstr.ed.ac.uk/projects/festival/
3. SPTK, http://sp-tk.sourceforge.net/
4. Voice Clone Toolkit, http://homepages.inf.ed.ac.uk/jyamagis/software/software.html
5. HTS Voice Library, http://homepages.inf.ed.ac.uk/jyamagis/library/

VI. Commercialization Activities
1. Knowledge transfer partnership with Orange Telecommunication
2. Collaboration with CereProc Ltd


(4) Public and Social Activities

1. Committee, Panel, and Advisory Board Membership for Government and Related Organizations

2011: Participated in standardization experiments on speech synthesis systems for visually impaired users, run by the ANSI standardization group on text-to-speech synthesis technology (S3-WG91)

2. Other

2008: Provided a speech synthesis announcement system to CERN (the European Organization for Nuclear Research)

2008: Provided a speech synthesis system to the artist James Coupe

2009: Conducted clinical applications and trials at Barnsley Hospital NHS Foundation Trust (Sheffield, UK) and the Euan MacDonald Centre for motor neuron disease research (Edinburgh, UK)

Prototyped speech synthesis systems for patients with laryngeal cancer and motor neuron disease

2011: With approval from the NHS (National Health Service), plan to build personalized speech synthesis systems for 50 patients with motor neuron disease each year
