Rachel Thomas (academic)

Rachel Thomas is an American computer scientist and the founding director of the Center for Applied Data Ethics at the University of San Francisco. Together with Jeremy Howard, she is a co-founder of fast.ai. Thomas was named by Forbes magazine as one of 20 "incredible women" advancing artificial intelligence research.

Rachel Thomas
Rachel Thomas speaks at the Linux Foundation in 2018
Alma mater: Duke University (PhD); Swarthmore College
Known for: Data ethics; artificial intelligence
Scientific career
Institutions: University of San Francisco; Uber

Early life and education

Thomas grew up in Galveston, Texas. In high school she began programming in C++. Thomas earned her bachelor's degree in mathematics at Swarthmore College in 2005.[1] At Swarthmore she was elected to the Phi Beta Delta honor society. She moved to Duke University for her graduate studies and finished her PhD in mathematics in 2010.[2] Her doctoral research involved a mathematical analysis of biochemical networks. During her doctorate she completed an internship at RTI International where she developed Markov models to evaluate HIV treatment protocols. Thomas joined Exelon as a quantitative analyst, where she scraped internet data and built models to provide information to energy traders.[3]

In 2013 Thomas joined Uber where she developed the driver interface and surge algorithms using machine learning.[4] She then became a teacher at Hackbright Academy, a school for women software engineers.[5]

Research and career

Thomas joined the University of San Francisco in 2016, where she founded the Center for Applied Data Ethics.[6][7] There she has studied the rise of deepfakes[8] and bias in machine learning and deep learning.

When Thomas started to develop neural networks, only a few academics were doing so, and she was concerned by the lack of shared practical advice.[9] While there is considerable recruitment demand for artificial intelligence researchers, Thomas has argued that although these careers have traditionally required a PhD, access to supercomputers, and large data sets, none of these are essential prerequisites.[9] To close this apparent skills gap, Thomas established Practical Deep Learning for Coders, the first university-accredited open-access certificate in deep learning, and created the first open-access machine learning programming library.[10] Thomas and Jeremy Howard co-founded fast.ai, a research laboratory that seeks to make deep learning more accessible.[11] Her students have included a Canadian dairy farmer, African doctors and a French mathematics teacher.[4]

Thomas has studied unconscious bias in machine learning,[12][13] emphasising that even when race and gender are not explicit input variables in a data set, algorithms can produce racist and sexist outcomes when that information is latently encoded in other variables.[13][14] Alongside her academic career, Thomas has called for more diverse workforces to prevent bias in systems using artificial intelligence.[9][15] She believes that more people from historically underrepresented groups should work in tech, both to mitigate the harms certain technologies may cause and to ensure that the systems created benefit all of society.[16] In particular, she is concerned about the retention of women and people of colour in tech jobs.[4] Thomas serves on the Board of Directors of Women in Machine Learning (WiML).[17] She has served as an advisor for Deep Learning Indaba, a non-profit that trains African people in machine learning. In 2017 she was named by Forbes magazine as one of 20 "incredible women" advancing artificial intelligence research.[18]
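The mechanism described above, a protected attribute leaking into other variables, can be illustrated with a small synthetic sketch. The data, group labels, and ZIP codes below are invented for illustration and are not drawn from Thomas's work:

```python
# Illustrative sketch (synthetic data): how a protected attribute can remain
# "latently encoded" in a proxy variable even after it is dropped from a data set.
import random

random.seed(0)

# Synthetic population: 'group' is the protected attribute (0 or 1).
# 'zip_code' acts as a proxy: group 1 members mostly live in ZIP 94110,
# group 0 members mostly in ZIP 94040 (hypothetical codes).
rows = []
for _ in range(10_000):
    group = random.randint(0, 1)
    zip_code = "94110" if random.random() < (0.9 if group else 0.1) else "94040"
    rows.append((group, zip_code))

# Even with the 'group' column removed, a trivial rule recovers it from the proxy:
# predict group = 1 whenever zip_code == "94110".
correct = sum((zip_code == "94110") == bool(group) for group, zip_code in rows)
accuracy = correct / len(rows)
print(f"group recoverable from ZIP code alone: {accuracy:.0%} accuracy")
# With these synthetic proportions, accuracy lands near 90%, far above the
# 50% chance level, so dropping the column did not remove the information.
```

Any model trained on such a proxy can therefore reproduce disparities tied to the protected attribute, which is the concern Thomas raises about latent encoding.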

Thomas has also written on the application of data science and machine learning in medicine. In an article in the Boston Review titled "Medicine's Machine Learning Problem", she outlines uses of machine learning in the medical field and highlights some of the ethical issues involved, arguing that as big data tools reshape health care, biased datasets and unaccountable algorithms threaten to further disempower patients.[19]

Work on data ethics and diversity

Thomas is concerned about the lack of diversity in AI, believing that many qualified people are not being hired.[5] She has particularly focused on the poor retention of women in tech, noting that "41% of women working in tech leave within 10 years. That's over twice the attrition rate for men. And those with advanced degrees, who presumably have more options, are 176% more likely to leave."[5] Thomas believes that AI's "cool and exclusive aura" needs to be broken in order to make the field accessible to people with non-traditional and non-elite backgrounds.[5]

References

  1. "Rachel Thomas '05 Among Top 20 Women Advancing A.I. Research". www.swarthmore.edu. 2017-05-25. Retrieved 2019-12-18.
  2. "Rachel Thomas". University of San Francisco. 2017-04-20. Retrieved 2019-12-18.
  3. "Rachel Thomas | fast.ai founder & USF assistant professor". QCon.ai San Francisco. Retrieved 2019-12-18.
  4. "Rachel Thomas, Founder of fast.ai & Assistant Professor at the University of San Francisco". OnlineEducation.com. Retrieved 2019-12-18.
  5. Stegman, Casey. "Open Source Stories: Possible Futures". Open Source Stories. Retrieved 2019-12-24.
  6. "EGG San Francisco 2019". sf.egg.dataiku.com. Archived from the original on 2019-12-19. Retrieved 2019-12-18.
  7. "USF Launches Data Ethics Center". Datanami. 2019-08-07. Retrieved 2019-12-18.
  8. Pangburn, D. J. (2019-09-21). "You've been warned: Full body deepfakes are the next step in AI-based human mimicry". Fast Company. Retrieved 2019-12-18.
  9. Snow, Jackie. "The startup diversifying the AI workforce beyond just "techies"". MIT Technology Review. Retrieved 2019-12-18.
  10. Ray, Tiernan. "Fast.ai's software could radically democratize AI". ZDNet. Retrieved 2019-12-18.
  11. "New schemes teach the masses to build AI". The Economist. ISSN 0013-0613. Retrieved 2019-12-18.
  12. "Can AI Have Biases?". Techopedia.com. 2 October 2019. Retrieved 2019-12-18.
  13. "Analyzing & Preventing Unconscious Bias in Machine Learning". InfoQ. Retrieved 2019-12-18.
  14. "BBC World Service - The Real Story, Can algorithms be trusted?". BBC. Retrieved 2019-12-18.
  15. "A tug-of-war over biased AI". Axios. 14 December 2019. Retrieved 2019-12-18.
  16. "Artificial Intelligence needs all of us | Rachel Thomas PhD | TEDxSanFrancisco". Retrieved 2019-12-18.
  17. "Board of Directors". Women in Machine Learning. Retrieved 2019-12-18.
  18. Yao, Mariya. "Meet These Incredible Women Advancing A.I. Research". Forbes. Retrieved 2019-12-18.
  19. "Medicine's Machine Learning Problem". Boston Review.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.