Panel on Ethics in NLG
Tuesday 07/19 10:30 EST
INLG will feature a panel discussing different ethical aspects of NLG systems and NLG research. Likely topics to be discussed include:
- Ethics in industry versus academia
- Documentation of data and models
- Non-western ethics traditions
- Science communication/dissemination
- Involving users/communities in developing tools and resources
Members:
- Moderator: Margaret Mitchell (@mmitchell_ai)
Margaret Mitchell is a researcher working on Ethical AI, currently focused on the ins and outs of ethics-informed AI development in tech. She has published over 50 papers on natural language generation, assistive technology, computer vision, and AI ethics, and holds multiple patents in the areas of conversation generation and sentiment classification. She previously worked at Google AI as a Staff Research Scientist, where she founded and co-led Google's Ethical AI group, focused on foundational AI ethics research and operationalizing AI ethics within Google. Before joining Google, she was a researcher at Microsoft Research, focused on computer vision-to-language generation, and a postdoc at Johns Hopkins, focused on Bayesian modeling and information extraction. She holds a PhD in Computer Science from the University of Aberdeen and a Master's in computational linguistics from the University of Washington. While earning her degrees, she also worked from 2005-2012 on machine learning, neurological disorders, and assistive technology at Oregon Health and Science University. She has spearheaded a number of workshops and initiatives at the intersections of diversity, inclusion, computer science, and ethics. Her work has received awards from Secretary of Defense Ash Carter and the American Foundation for the Blind, and has been implemented by multiple technology companies. She likes gardening, dogs, and cats.
- Nina da Hora (@ninadhora)
Nina da Hora is a 26-year-old scientist under construction - as she identifies herself - and an anti-racist hacker. Passionate about science, Nina holds a BS in Computer Science from PUC-Rio and researches fairness and ethics in AI. She has developed two initiatives, 'Computação da Hora' and 'Ogunhê'. The first is her YouTube channel, where she disseminates computing education. Ogunhê is a podcast where she interviews great scientists, mainly building bridges between the African continent and Brazil in the hard sciences. Nina is also a teacher; teaching is in her family's blood and roots. Her research spans the relations between algorithms and society, AI ethics, data privacy, and the dissemination of science education. Nina is also a developer certified by the Apple Developer Academy and a columnist for MIT Technology Review Brazil. She joined the TikTok Brazil Security Advisory Council, where she contributes to discussions on topics such as content policies, safety strategies, and product launches. She also joined the Elections Transparency Advisory Council in Brazil, where she collaborates on transparency and security for the 2022 elections. Recently she joined Thoughtworks as a Tech Lead in Responsible Tech.
- Sebastian Gehrmann (@sebgehr)
- Sabelo Mhlambi (@sabelonow)
- Nava Tintarev (@navatintarev)
Nava Tintarev is a Full Professor of Explainable Artificial Intelligence at Maastricht University and a visiting professor at TU Delft. She leads or contributes to several projects in the field of human-computer interaction in artificial advice-giving systems, such as recommender systems, specifically developing the state of the art in automatically generated explanations (transparency) and explanation interfaces (recourse and control). She participates in a Marie-Curie Training Network on Natural Language for Explainable AI (October 2019-October 2023). Currently, she represents Maastricht University as a Co-Investigator in the ROBUST consortium, pre-selected for a national (NWO) grant with a total budget of 95M (25M from NWO) to carry out long-term (10-year) research into trustworthy artificial intelligence. Previously, she was awarded several smaller grants and prizes at TU Delft to support her work on viewpoint diversity (e.g., DDFV seed funding, Mekel Prize). She regularly shapes international scientific research programs (e.g., on steering committees of journals or as program chair of conferences) and actively organizes and contributes to strategy workshops relating to responsible data science, both in the Netherlands and internationally.
- Frank Schilder (@fsign)
Frank Schilder is a Sr. Research Director at Thomson Reuters Labs. Before joining Thomson Reuters in 2004, he was an assistant professor in the Department of Informatics at the University of Hamburg, Germany. He obtained his Ph.D. in Cognitive Science from the University of Edinburgh, Scotland, and a graduate degree in Computer Science from the University of Hamburg. His research interests include question answering, information extraction, automatic summarization, natural language generation, and applications of deep learning techniques.