Been Kim is a staff research scientist at Google Brain. Her research focuses on helping humans communicate with complex machine learning models: not only by building tools (and tools to criticize them) but also by studying their nature compared to humans. She gave a talk at the G20 meeting in Argentina in 2019. Her work on TCAV received the UNESCO Netexplo award and was featured at Google I/O '19, and her work appears in a chapter of Brian Christian's book "The Alignment Problem". Been gave a keynote at ECML 2020 and has given tutorials on interpretability at ICML, the University of Toronto, CVPR, and Lawrence Berkeley National Laboratory. She was a workshop co-chair of ICLR 2019 and has been a (senior) area chair at NeurIPS, ICML, ICLR, AISTATS, and others. She is a steering committee member of the FAccT conference and a former executive board member and VP of Women in Machine Learning. She received her PhD from MIT.
Nancy F. Chen received her Ph.D. from MIT and Harvard in 2011, conducting her doctoral research in multilingual speech processing at MIT Lincoln Laboratory. She currently leads research efforts in conversational AI and natural language generation with applications related to education, healthcare, journalism, and defense at the Institute for Infocomm Research (I2R), A*STAR (Agency for Science, Technology, and Research), Singapore. Speech evaluation technology developed by her team is deployed by the Ministry of Education in Singapore to support home-based learning during the COVID-19 pandemic. Dr. Chen also led a cross-continent team for low-resource spoken language processing, which was one of the top performers in the NIST Open Keyword Search Evaluations (2013-2016), funded by the IARPA Babel program.
Dr. Chen has received numerous awards, including Singapore 100 Women in Tech (2021), the Young Scientist Award at MICCAI 2021, the Best Paper Award at SIGDIAL 2021, the 2020 P&G Connect + Develop Open Innovation Award, the 2019 L’Oréal Singapore For Women in Science National Fellowship, the Best Paper Award at APSIPA ASC (2016), the MOE Outstanding Mentor Award (2012), the Microsoft-sponsored IEEE Spoken Language Processing Grant (2011), and the NIH (National Institutes of Health) Ruth L. Kirschstein National Research Service Award (2004-2008).
Dr. Chen is …
Prof. Vukosi Marivate is an Associate Professor of Computer Science and holds the ABSA UP Chair of Data Science at the University of Pretoria. He specialises in developing Machine Learning (ML) and Artificial Intelligence (AI) methods to extract insights from data, with a particular focus on the intersection of ML/AI and Natural Language Processing (NLP). His research is dedicated to improving the methods, tools, and availability of data for local or low-resource languages. As the leader of the Data Science for Social Impact research group (https://dsfsi.github.io/) in the Computer Science department, Vukosi is interested in using data science to solve social challenges. He has worked on projects related to science, energy, public safety, and utilities, among others. Prof. Marivate is a co-founder and CTO of Lelapa AI (https://lelapa.ai/), an African startup focused on AI for Africans by Africans. Vukosi is a chief investigator on the Masakhane NLP project (https://www.masakhane.io/), which aims to develop NLP technologies for African languages. He is also a co-founder of the Deep Learning Indaba, the leading grassroots Machine Learning and Artificial Intelligence conference on the African continent, which aims to empower and support African researchers and practitioners in the field.
Dr. Aisha Walcott-Bryant is a research scientist and manager at IBM Research Africa in Nairobi, Kenya. She leads a team of researchers and engineers who use AI, blockchain, and other technologies to develop innovations in water access and management, core AI, and healthcare, particularly for emerging countries. She has a strong interest in global health and development. Her team's recent healthcare work on Enabling Care Continuity received an honorable mention at the International Conference on Health Informatics (ICHI 2019).
Current PhD student at the University of Vermont
I am a Ph.D. student supervised by Simon Lacoste-Julien. I graduated from ENS Ulm and Université Paris-Saclay, and I was a visiting Ph.D. student at Sierra. I also worked for 6 months as a freelance data scientist for Monsieur Drive (acquired by Criteo), and I recently co-founded a startup called Krypto. I am currently pursuing my Ph.D. at Mila. My work focuses on optimization applied to machine learning. More details can be found in my resume.
My research aims to develop new optimization algorithms and to understand the role of optimization in the learning procedure; in short, to learn faster and better. I identify with the fields of machine learning (NIPS, ICML, AISTATS, and ICLR) and optimization (SIAM OP).
Patrick Lin, PhD, is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is a philosophy professor. Current affiliations include Stanford Law School, the 100-Year Study on AI, the World Economic Forum, the Czech Academy of Sciences, and the Center for a New American Security. Previous affiliations include Stanford Engineering, the US Naval Academy, Dartmouth College, Notre Dame, the University of Iceland (Fulbright specialist), the New America Foundation, and UNIDIR. He is well published in technology ethics, especially in AI and robotics, with five books that include Robot Ethics (MIT Press, 2012) and Robot Ethics 2.0 (Oxford University Press, 2017), as well as several funded policy reports on military robotics, cyberwarfare, and enhanced warfighters. Dr. Lin regularly gives invited briefings to industry, media, and governments worldwide, and he teaches courses in ethics, technology, and law. He earned his BA at UC Berkeley and his PhD at UC Santa Barbara.
I'm a professor in the Faculty of Information at the University of Toronto, affiliated with both the Schwartz Reisman Institute and the Vector Institute.
I work on deep representation learning and predictive methods in ecological modeling and environmental risk assessment, as well as real-world generalization, learning theory, and practical auditing tools (e.g., unit tests, sandboxes). I'm broadly interested in responsible AI development, AI for climate and science applications, AI safety, negative externalities and cooperation, and approaches to sociotechnical issues.
If you’re interested in a position with me, in collaborating or chatting about these topics, or know someone who is, please get in touch!