Zhou Yu   俞舟

Zhou (pronounced like "Jo")

Assistant Professor

Computer Science Department

University of California, Davis


Academic Surge 2085

1 Shields Ave, Davis, CA 95616

Email: joyu@ucdavis.edu

CV [pdf] Research Statement [pdf]


I am an Assistant Professor in the Computer Science Department at the University of California, Davis, where I direct the Language, Multimodal and Interaction Lab. I received my PhD from the Language Technology Institute in the School of Computer Science at Carnegie Mellon University, working with Prof. Alan W Black and Prof. Alexander I. Rudnicky. In the summers of 2015 and 2016, I interned with Prof. David Suendermann-Oeft at the ETS San Francisco office on cloud-based multimodal dialog systems. In Fall 2014, I interned with Dan Bohus and Eric Horvitz at Microsoft Research on situated multimodal dialog systems.

Prior to CMU, I received a B.S. in Computer Science and a B.A. in Linguistics (English Language major) from Zhejiang University in 2011. There I worked with Prof. Xiaofei He and Prof. Deng Cai on machine learning and computer vision, and with Prof. Yunhua Qu on machine translation.

Research Interests

I design algorithms for real-time intelligent interactive systems that coordinate with user actions beyond spoken language, including non-verbal behaviors, to achieve effective and natural communication. In particular, I optimize human-machine communication through studies of multimodal sensing and analysis, speech and natural language processing, machine learning, and human-computer interaction. The central focus of my dissertation research was to bring these areas together to design, implement, and deploy end-to-end real-time interactive intelligent systems that plan globally, considering both interaction history and current user actions, to achieve better user experience and task performance. I also enjoy collaborating with researchers from different backgrounds on interdisciplinary research across the sciences, such as health care, education, and robotics.

I am also interested in various tasks in natural language processing, such as language understanding, language generation, and commonsense reasoning.


Most recent talk: Grounding Reinforcement Learning with Real-World Dialog Applications slides

Here is a YouTube video of my research overview talk at AI2 (thanks to AI2 for the recording) video

Here is a talk in Chinese that covers more recent projects in less technical terms. video


We just won the Amazon Alexa Prize ($500,000) webpage

I was featured in Forbes' 2018 30 Under 30 in Science webpage

If you want to try our chatbot, just say "Alexa, let's chat!" to any Alexa device, or download the Amazon Alexa app on your phone.

I am recruiting postdocs with interest and background in areas such as Robotics, NLP, ML, and AI in general.

If you are thinking of doing a PhD with me, read Mor's advice on Applying PhD. Due to the large volume of email I receive, I am not able to reply to individual students. Please apply to our UC Davis PhD program.

Selected Publications

Jiaao Chen, Jianshu Chen and Zhou Yu, Incorporating Structured Commonsense Knowledge in Story Completion, AAAI 2019 [pdf]

Youzhi Tian, Zhiting Hu and Zhou Yu, Structured Content Preservation for Unsupervised Text Style Transfer, arXiv 2018 [pdf]

Mingyang Zhou, Runxiang Cheng, Yong Jae Lee and Zhou Yu, A Visual Attention Grounding Neural Model for Multimodal Machine Translation, EMNLP 2018 [pdf][data]

Weiming Wen, Songwen Su and Zhou Yu, Cross-Lingual Cross-Platform Rumor Verification Pivoting on Multimedia Content, EMNLP 2018 [pdf] [code&data]

Jiaping Zhang, Tiancheng Zhao and Zhou Yu, Multimodal Hierarchical Reinforcement Learning Policy for Task-Oriented Visual Dialog, SIGDIAL 2018 [pdf]

Weiyan Shi and Zhou Yu, Sentiment Adaptive End-to-End Dialog Systems, ACL 2018 [pdf]

Ryant et al., Enhancement and Analysis of Conversational Speech: JSALT 2017, ICASSP 2018 [pdf]

Zhou Yu, Alan W Black and Alexander I. Rudnicky, Learning Conversational Systems that Interleave Task and Non-Task Content, IJCAI 2017 [pdf]

Zhou Yu, Vikram Ramanarayanan, Patrick Lange and David Suendermann-Oeft, An Open-Source Multimodal Dialog System with Real-Time Engagement Tracking for Job Interview Training Applications, IWSDS 2017 [pdf]

Zhou Yu, Ziyu Xu, Alan W Black and Alexander Rudnicky, Strategy and Policy Learning for Non-Task-Oriented Conversational Systems, SIGDIAL 2016. [pdf]

Zhou Yu, Leah Nicolich-Henkin, Alan W Black and Alexander Rudnicky, A Wizard-of-Oz Study on A Non-Task-Oriented Dialog Systems that Reacts to User Engagement, SIGDIAL 2016. [pdf]

Sean Andrist, Dan Bohus, Zhou Yu, Eric Horvitz, Are You Messing with Me?: Querying about the Sincerity of Interactions in the Open World. HRI 2016. [pdf]

Zhou Yu, Dan Bohus and Eric Horvitz, Incremental Coordination: Attention-Centric Speech Production in a Physically Situated Conversational Agent, SIGDIAL 2015. [pdf]

Demo Videos

TickTock: a multimodal chatbot with user engagement coordination
- below is a demo of using an automatically generated conversational strategy to improve user engagement.

Direction-giving Robot: a direction-giving humanoid robot with user attention coordination
- below is a demo and some real user cases of people interacting with the robot.

HALEF: a distributed web-based multimodal dialog system with user engagement coordination
- below is a demo of an Amazon Mechanical Turk worker interacting with our job interview training application via a web browser. It live-streams video from the user's local webcam to the server.