Ethical issues in natural language processing arise both from the applications of NLP and from bias and discrimination within the systems themselves. I will talk about virtuous and evil applications of NLP, and I will suggest some characteristics of the latter. I'll then describe the problem of bias in learning from linguistic data, especially as it affects minority groups and languages other than English that have limited resources, and the bias in word embeddings that results from linguistic data and that affects even second-order uses of the data.
Graeme Hirst is a computer scientist at the University of Toronto. His research covers a broad range of applied computational linguistics and natural language processing. His recent topics include detecting markers of Alzheimer's disease in language; determining ideology in political texts; and identifying the native language of a second-language writer of English. He is the editor of the series Synthesis Lectures on Human Language Technologies, the leading venue for monograph publication in natural language processing. In 2017, he received the Lifetime Achievement Award from the Canadian Artificial Intelligence Association.