HAMLETS 2021
Human involvement in AI system design, development, and evaluation is critical to ensuring that the insights derived are practical and that the systems built are meaningful, reliable, and relatable to those who need them. Humans play an integral role at every stage of machine learning development, whether generating data, interactively teaching models, or interpreting, evaluating, and debugging them. With growing interest in such "human in the loop" learning, we aim to highlight research on evaluation and training strategies for humans and models in the loop.
We would especially like to highlight emerging and underexplored areas of human- and model-in-the-loop learning, such as employing humans to provide richer forms of feedback than labels alone, learning from dynamic adversarial data collection in which humans probe for model weaknesses, learning from human teachers who instruct computers through conversation and/or demonstration, investigating the role of humans in model interpretability, and assessing the social impact of ML systems. We aim to bring together interdisciplinary researchers from academia and industry to discuss major challenges, outline recent advances, and facilitate future research in these areas.
Special Focus: This 2021 edition of the workshop aims to bring together interdisciplinary researchers and policy leaders to discuss major challenges, outline recent advances, and facilitate future research on the technical and societal challenges we face in human-AI interaction, and how we might address them.
Topics of interest for submission include but are not limited to the following:
Active and Interactive Learning: machine teaching, including instructable agents for real-world decision making (robotic systems, natural language processing, computer vision)
Interpretability: the role of humans in building trustworthy AI systems, including model interpretability and algorithmic fairness
Humans as Model Adversaries: richer human feedback and probing the weaknesses of machine learning models
System Design: design of creative interfaces for data annotation, data visualization, and interactive visualization
Model Evaluation: the role of humans in evaluating model performance, e.g., generation quality and robustness to input perturbations
Crowdsourcing: best practices for improving worker engagement, preventing annotation artifacts, and maximizing the quality and efficiency of crowdsourced data
Questions? Contact hamlets.neurips2020@gmail.com.