Darren Butler
they/them
Carnegie Mellon University
Human-Computer Interaction
Biography
Hi! I’m Ren, a Ph.D. student in Human–Computer Interaction at Carnegie Mellon University. My work sits at the intersection of Human-AI Interaction, Software Engineering, and Accessibility. I design AI-augmented practices that improve psychological safety so that diverse teams can critique models, surface risks, and co-create trustworthy systems. My dissertation, SAFER-AI, combines mixed-methods analysis of collaborative work (surveys, discourse, social networks) with participatory design and lightweight prototyping of video/whiteboard tools that detect misunderstandings and scaffold inclusive decision-making. I bring hands-on experience in applied ML, data analytics, and full-stack prototyping (Python, R, JavaScript) across education research, industry (Deloitte), and product evaluation (Vector Capital). I have published at CHI, FSE, LAK, ICER, and ASEE on inclusive AI education and collaboration. I aim to translate these skills to national lab teams by building systems that directly contribute to scientific research in high-stakes, data-intensive domains such as climate, energy, and health. I also hope to experiment with new tools and protocols that increase trust, equity, and team effectiveness. I am eager to partner with lab mentors to co-design interventions that help multidisciplinary groups reason with AI responsibly and accelerate socially beneficial science. I hope to embrace new people and new approaches to research, and I look forward to meeting everyone!
Academic Status
Ph.D. Student, 5th year
Research Area/Department
Data Science; Machine Learning/AI
Major/Specialty
Human-Computer Interaction, Software Engineering
Degrees Earned or in Progress
Carnegie Mellon University (CMU), Pittsburgh, PA
Ph.D., Human–Computer Interaction, Aug 2022–May 2027
M.S., Human–Computer Interaction, Aug 2022–May 2026
Philander Smith University, Little Rock, AR
B.S., Computer Science, May 2022
Academic Preparation
Interactive Data Science; Data Science for Psych/Neuroscience; Augmenting Intelligence; Experimental Design & Analysis; Education Technology Design; Physics 1 & 2
Research/Publications
Darren Butler. 2025. Fostering Psychological Safety for Learning in Neurodiverse Software Teams. In ACM Conference on International Computing Education Research V.2 (ICER 2025 Vol. 2), August 3–6, 2025, Charlottesville, VA, USA. ACM, New York, NY, USA, 3 pages. https://doi.org/10.1145/3702653.3744295

Darren Butler. 2025. Fostering Psychological Safety for Interpersonal Learning in Neurodiverse Software Teams. In Proceedings of the 2025 Conference for Research on Equitable and Sustained Participation in Engineering, Computing, and Technology (RESPECT 2025), July 14–16, 2025, Newark, NJ, USA. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3704637.3734749

Joon Jang, Rory McDaniell, Darren Butler, Matthew Boyer, and Andrew Begel. Preparing Autistic Students for the AI Workforce. To appear in the ACM International Conference on the Foundations of Software Engineering (FSE 2025), Trondheim, Norway, June 2025.

Matthew Boyer, Andrew Begel, Rick Kubina, Somayeh Asadi, Taniya Mishra, Darren Butler, and JiWoong Jang. Navigating the Social-Emotional Landscape of Neurodiversity in AI Education. To appear in the 2025 ASEE Annual Conference & Exposition, Montreal, QC, Canada, June 2025.

Andrew Begel, Matthew Boyer, Rick Kubina, Somayeh Asadi, Taniya Mishra, Darren Butler, and JiWoong Jang. Investigating Instructors' Experiences in a Neurodiversity-Focused AI Training Program. To appear in the 2025 ASEE Annual Conference & Exposition, Montreal, QC, Canada, June 2025.

Darren Butler, Conrad Borchers, Michael W. Asher, Yongmin Lee, Sonya Karnataki, Sameeksha Dangi, Samyukta Athreya, John Stamper, Amy Ogan, and Paulo F. Carvalho. 2025. Does the Doer Effect Exist Beyond WEIRD Populations? Toward Analytics in Radio and Phone-Based Learning. In LAK25: The 15th International Learning Analytics and Knowledge Conference (LAK 2025), March 03–07, 2025, Dublin, Ireland. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3706468.3706505

Christine Kwon, Darren Butler, Judith Odili Uchidiuno, John Stamper, and Amy Ogan. 2024. Investigating Demographics and Motivation in Engineering Education Using Radio and Phone-Based Educational Technologies. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 664, 1–15. https://doi.org/10.1145/3613904.3642221

D'Andre Wilson-Ihejirika, Darren Butler, Tasha Zephirin, Yasmine M. Elmi, Amanda Tamakloe, and Anuli Ndubuisi. Black Post-Secondary Student Experiences in STEMM across the US and Canada: A Review of the Literature. Special Issue: Canadian Journal of Science, Mathematics and Technology Education, Black to the Future: Achieving Racial Equity in STEMM in Canada through Research and Policy. (To appear)
Research/Academic Interests
Human-AI Interaction | Software Engineering | Accessibility | Computing Education
I design AI-augmented practices and tools that help diverse teams communicate risks toward safer software. My dissertation, SAFER-AI, combines contextual inquiry (surveys, interviews, observations) with participatory design and rapid prototyping (video/whiteboard plugins; conversational agents) to detect misunderstandings, scaffold inclusive critique, and increase trust in high-stakes, data-rich work.
Computational and Data Science Areas
Artificial Intelligence and Intelligent Systems; Computer Science; Educational Sciences; Informatics, Analytics and Information Science; Other Computer and Information Sciences; Psychology; Sociology; Visualization and Human-Computer Systems; Organization
Motivation
I want to participate in Sustainable Research Pathways to: 1) launch my career as a professional researcher beyond my graduate studies; and 2) translate my computational and human-centered computing skills into insights and tools that support large-scale, collaborative software development. I’m a Ph.D. student in Human–Computer Interaction at Carnegie Mellon, building SAFER-AI, a research agenda contributing insights, frameworks, tools, and practices that help software teams turn messy communication about high-stakes AI into better software.

Generative AI is unpredictably reshaping software engineering and user experiences, requiring software practitioners (developers, designers, and data scientists) to negotiate risks when discussing software design. Without psychological safety, the belief that critique is welcomed, teams neglect risks and software quality suffers. I have witnessed software practitioners, novice and experienced alike, decline help and withhold knowledge when they feel unsafe due to power differences and knowledge gaps. As a minoritized scholar with experience in computing research, education, and practice, I’m motivated to make psychologically safe collaboration an everyday engineering practice so teams can build software development processes and artifacts that are reliable and respectful to developers and end users.

I bring concrete experience delivering both insights and artifacts: combining interviews with social-network and discourse analyses of communications from student engineering teams to diagnose collaboration bottlenecks and prototype automated communication support; and conducting regression and classification analyses on over 10,000 learner records to link behavior to learning outcomes in digital learning platforms, then translating those insights into dashboards that support teacher decision-making. I’ve also built full-stack tools that help NGO staff and teacher-training teams coordinate knowledge sharing with AI agents.
These projects reflect my work process: co-design with practitioners and measure what matters. I’ll contribute practical deliverables: a conference paper, well-documented code, reports, and checklists for auditing software and team processes that improve software and research outcomes. Through Sustainable Research Pathways, I want to pair human-centered methods with computational competencies to support NAIRR or HPSF projects doing large-scale collaborative software development. I wish to explore projects that offer me new experiences in science and engineering: understanding the needs of scientists and software engineers, and addressing those needs with better tools and practices. From SRP, I’m seeking mentorship and partnership: guidance from lab leads and research software engineers on integrating sociotechnical measures into AI operations and evaluation workflows; exposure to NAIRR’s or HPSF’s large-scale collaborative software development, compute, and datasets; and a community committed to sustaining inclusive, rigorous science. SRP would be an effective bridge between my Ph.D. and a professional career in scientific research and development. I hope to improve the quality of research and research software through sociotechnical tools for responsible, inclusive collaboration, and to join a national lab or company partner after graduation.
Lightning Talk Title
Developing AI Assistants for Psychologically Safe, Neurodiverse, Collaborative Software Development
Keywords (Maximum 20 words)
Human-Centered Computing; Software Engineering; Human-AI Interaction; Accessibility; Education; Responsible AI; Analytics