HAMLETS (Human And Machine in-the-Loop Evaluation and Learning Strategies) - December 12th @ NeurIPS 2020, from 8:15 AM PT
Human involvement in AI system design, development, and evaluation is critical to ensuring that the insights derived are practical and that the systems built are meaningful, reliable, and relatable to those who need them. Humans play an integral role in all stages of machine learning development, be it generating data, interactively teaching machines, or interpreting, evaluating, and debugging models. With growing interest in such "human-in-the-loop" learning, we aim to highlight research on evaluation and training strategies for humans and models in the loop.
Special Focus: We would especially like to highlight emerging and underexplored areas of human- and model-in-the-loop learning, such as employing humans to provide richer forms of feedback than labels alone, learning from dynamic adversarial data collection in which humans probe models for weaknesses, learning from human teachers who instruct computers through conversation and/or demonstration, investigating the role of humans in model interpretability, and assessing the social impact of ML systems. We aim to bring together interdisciplinary researchers from academia and industry to discuss major challenges, outline recent advances, and facilitate future research in these areas.
Topics of interest for submission include but are not limited to the following:
Active and Interactive Learning | Machine teaching, including instructable agents for real-world decision making (robotic systems, natural language processing, computer vision)
Interpretability | Role of humans in building trustworthy AI systems: model interpretability and algorithmic fairness
Humans as Model Adversaries | Richer human feedback; probing the weaknesses of machine learning models
System Design | Design of creative interfaces for data annotation, data visualization, and interactive visualization
Model Evaluation | Role of humans in evaluating model performance, e.g., generation quality and robustness to input perturbations
Crowdsourcing | Best practices for improving worker engagement, preventing annotation artifacts, and maximizing crowdsourced data quality and efficiency
Questions? Contact hamlets.neurips2020@gmail.com.
News
Aug 19, 2020 | Our Call for Papers is live!
Aug 14, 2020 | Our workshop proposal was accepted at NeurIPS 2020!
Organizers
Divyansh Kaushik | Carnegie Mellon University
Bhargavi Paranjape | University of Washington / Facebook AI Research
Max Bartolo | University College London
Yixin Nie | University of North Carolina at Chapel Hill
Yanai Elazar | Bar-Ilan University / Allen Institute for Artificial Intelligence
Polina Kirichenko | New York University
Forough Arabshahi | Facebook AI Research
Pontus Stenetorp | University College London
Mohit Bansal | University of North Carolina at Chapel Hill
Zachary C. Lipton | Carnegie Mellon University
Douwe Kiela | Facebook AI Research