Jacqueline Rowe
Email: Jacqueline.Rowe@ed.ac.uk
Research keywords: Equitable and safe NLP, low-resource languages, hate speech, multilingual NLP, alignment
Bio:
Jacqueline has a long-standing interest in linguistic bias and marginalisation, originally grounded in her experience as an English teacher in West Africa (2014-2016). This led her to pursue a BA in Linguistics (University of Cambridge, 2016-2019), where she conducted sociolinguistic research on the use of creoles and indigenous languages in administrative and educational settings in Guinea-Bissau. Later, during her LLM in Human Rights (Birkbeck, University of London, 2020-2021), she focused her dissertation on how the rights to education and non-discrimination apply to minority language speakers who wish to be taught in their first language. More recently, for her MSc in Computer Science with Speech and Language Processing (University of Sheffield, 2023-2024), she developed a novel dataset and a series of low-resource machine translation models for Bissau-Guinean Creole, investigating cross-lingual transfer between the Creole and Portuguese.
In addition to her academic studies, Jacqueline has several years’ experience in technology policy and digital governance, first as Coordinator of the International Law Programme at Chatham House and then as Policy Lead at a digital rights NGO. In these roles, she conducted research on the application of international human rights law in cyberspace, focusing on online content governance and platform regulation, methods to address online disinformation and hate speech, and transparency and accountability for automated systems.
PhD research:
Jacqueline is curious about how speakers of minority languages use NLP tools and technologies, and how these technologies can be designed, developed and deployed in ways that distribute their benefits more equitably across different language groups. Her PhD research focuses on detecting and mitigating hate speech and other harmful text-based content in low-resource languages, with applications in both online content moderation and LLM safety. She also works on developing more inclusive and culturally sensitive benchmarks for evaluating bias and other textual harms, with particular attention to how models are used in practice and in downstream applications.
Supervisors: Alexandra Birch, Shannon Vallor