Special Focus: We would especially like to highlight emerging and underexplored areas of human- and model-in-the-loop learning, such as eliciting richer forms of human feedback than labels alone, learning from dynamic adversarial data collection in which humans are employed to find weaknesses in models, learning from human teachers who instruct computers through conversation and/or demonstration, investigating the role of humans in model interpretability, and assessing the social impact of ML systems. We aim to bring together interdisciplinary researchers from academia and industry to discuss major challenges, outline recent advances, and facilitate future research in these areas.

Topics of interest for submission include but are not limited to the following:

Active and Interactive Learning: Machine teaching, including instructable agents for real-world decision making (robotic systems, natural language processing, computer vision)
Interpretability: The role of humans in building trustworthy AI systems, including model interpretability and algorithmic fairness
Human as Model Adversary: Richer human feedback and probing the weaknesses of machine learning models
System Design: Design of creative interfaces for data annotation, data visualization, and interactive visualization
Model Evaluation: The role of humans in evaluating model performance, e.g., for generation tasks and robustness to inputs
Crowdsourcing: Best practices for improving worker engagement, preventing annotation artifacts, and maximizing the quality and efficiency of crowdsourced data

Questions? Contact hamlets.neurips2020@gmail.com.