Zhou Yu   俞舟

Zhou (pronounced like "Jo")

Assistant Professor

Computer Science Department

University of California, Davis


Academic Surge 2085

1 Shields Ave, Davis, CA 95616

Email: joyu@ucdavis.edu

CV [pdf] Research Statement [pdf]


I am an Assistant Professor in the Computer Science Department at the University of California, Davis, where I direct the Language, Multimodal and Interaction Lab. I received my PhD from the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University, advised by Prof. Alan W Black and Prof. Alexander I. Rudnicky. In the summers of 2015 and 2016, I interned with Prof. David Suendermann-Oeft at the ETS San Francisco office, working on cloud-based multimodal dialog systems. In Fall 2014, I interned with Dan Bohus and Eric Horvitz at Microsoft Research, working on situated multimodal dialog systems.

Prior to CMU, I received a B.S. in Computer Science and a B.A. in Linguistics (English Language major) from Zhejiang University in 2011. There I worked with Prof. Xiaofei He and Prof. Deng Cai on machine learning and computer vision, and with Prof. Yunhua Qu on machine translation.

Research Interests

I design algorithms for real-time intelligent interactive systems that coordinate with user actions beyond spoken language, including non-verbal behaviors, to achieve effective and natural communication. In particular, I optimize human-machine communication through studies of multimodal sensing and analysis, speech and natural language processing, machine learning, and human-computer interaction. The central focus of my dissertation research is to bring these areas together to design, implement, and deploy end-to-end real-time interactive intelligent systems that plan globally, considering both interaction history and current user actions, to achieve better user experience and task performance. I also enjoy collaborating with researchers from different backgrounds on interdisciplinary research across the sciences, such as health care, education, and robotics.

Most recent talk: Grounding Reinforcement Learning with Real-World Dialog Applications slides

Here is a YouTube video of my research overview talk at AI2 from a year ago (thanks to AI2 for the recording): YouTube video

Here is a talk in Chinese that covers more recent projects in less technical terms: YouTube video


Two papers accepted at EMNLP 2018

Our UC Davis team, Gunrock, was awarded $250,000 to compete in the Amazon Alexa Prize. webpage

I was featured in Forbes 2018 30 Under 30 in Science. webpage

I am recruiting postdocs with interests and background in areas such as robotics, NLP, ML, and AI in general.

If you are thinking of doing a PhD with me, read Mor's advice on applying to PhD programs: Applying PhD

I co-organized the AAAI Fall workshop on Natural Communication for Human-Robot Collaboration (Nov. 9-11). webpage

Please try our chatbot: TickTock. Here is the webpage

A human-chatbot conversation database. Here is the webpage

Selected Publications

Mingyang Zhou, Runxiang Cheng, Yong Jae Lee and Zhou Yu, A Visual Attention Grounding Neural Model for Multimodal Machine Translation, EMNLP 2018

Weiming Wen, Songwen Su and Zhou Yu, Cross-Lingual Cross-Platform Rumor Verification Pivoting on Multimedia Content, EMNLP 2018

Jiaping Zhang, Tiancheng Zhao and Zhou Yu, Multimodal Hierarchical Reinforcement Learning Policy for Task-Oriented Visual Dialog, SIGDIAL 2018 [pdf]

Weiyan Shi and Zhou Yu, Sentiment Adaptive End-to-End Dialog Systems, ACL 2018 [pdf]

Ryant et al., Enhancement and Analysis of Conversational Speech: JSALT 2017, ICASSP 2018

Zhou Yu, Vikram Ramanarayanan, Patrick Lange and David Suendermann-Oeft, An Open-Source Multimodal Dialog System with Real-Time Engagement Tracking for Job Interview Training Applications, IWSDS 2017 [pdf]

Zhou Yu, Alan W Black and Alexander I. Rudnicky, Learning Conversational Systems that Interleave Task and Non-Task Content, IJCAI 2017 [pdf]

Zhou Yu, Xinrui He, Alan W Black and Alexander I. Rudnicky, User Engagement Modeling in Virtual Agents Under Different Cultural Contexts, IVA 2016

Zhou Yu, Ziyu Xu, Alan W Black and Alexander Rudnicky, Strategy and Policy Learning for Non-Task-Oriented Conversational Systems, SIGDIAL 2016 [pdf]

Zhou Yu, Leah Nicolich-Henkin, Alan W Black and Alexander Rudnicky, A Wizard-of-Oz Study on A Non-Task-Oriented Dialog Systems that Reacts to User Engagement, SIGDIAL 2016 [pdf]

Zhou Yu, Ziyu Xu, Alan W Black and Alexander Rudnicky, Chatbot Evaluation and Database Expansion via Crowdsourcing, RE-WOCHAT workshop at LREC 2016 [pdf]

Sean Andrist, Dan Bohus, Zhou Yu and Eric Horvitz, Are You Messing with Me?: Querying about the Sincerity of Interactions in the Open World, HRI 2016 [pdf]

Zhou Yu, Vikram Ramanarayanan, Robert Mundkowsky, Patrick Lange, Alan Black, Alexei Ivanov and David Suendermann-Oeft, Multimodal HALEF: An Open-Source Modular Web-Based Multimodal Dialog Framework, IWSDS 2016 [pdf]

Alexei Ivanov, Patrick Lange, David Suendermann-Oeft, Vikram Ramanarayanan, Yao Qian, Zhou Yu and Jidong Tao, Speed vs. Accuracy: Designing an Optimal ASR System for Spontaneous Non-Native Speech in a Real-Time Application, IWSDS 2016 [pdf]

Zhou Yu, Vikram Ramanarayanan, David Suendermann-Oeft, Xinhao Wang, Klaus Zechner, Lei Chen, Jidong Tao and Yao Qian, Using Bidirectional LSTM Recurrent Neural Networks to Learn High-Level Abstractions of Sequential Features for Automated Scoring of Non-Native Spontaneous Speech, ASRU 2015 [pdf]

Zhou Yu, Dan Bohus and Eric Horvitz, Incremental Coordination: Attention-Centric Speech Production in a Physically Situated Conversational Agent, SIGDIAL 2015 [pdf]

Zhou Yu, Alexandros Papangelis and Alexander Rudnicky, TickTock: Engagement Awareness in a Non-Goal-Oriented Multimodal Dialogue System, AAAI Spring Symposium on Turn-taking and Coordination in Human-Machine Interaction 2015 [pdf] [slides]

Zhou Yu, Stefan Scherer, David DeVault, Jonathan Gratch, Giota Stratou, Louis-Philippe Morency and Justine Cassell, Multimodal Prediction of Psychological Disorder: Learning Verbal and Nonverbal Commonality in Adjacency Pairs, SEMDIAL 2013 [pdf] [slides]

Zhou Yu, David Gerritsen, Amy Ogan, Alan W Black and Justine Cassell, Automatic Prediction of Friendship via Multi-model Dyadic Features, SIGDIAL 2013 [pdf]

Demo Videos

TickTock: a multimodal chatbot with user engagement coordination
- Below is a demo of using an automatically generated conversational strategy to improve user engagement.

Direction-giving Robot: a direction-giving humanoid robot with user attention coordination
- Below is a demo, along with some real user cases, of people interacting with the robot.

HALEF: a distributed web-based multimodal dialog system with user engagement coordination
- Below is a demo of an Amazon Turker interacting with our job interview training application via a web browser. It live-streams video from the user's local webcam to the server.