Hao Li

Hao Li (Chinese: 黎顥; pinyin: Lí Hào, born 1980 or 1981)[1] is a computer scientist, innovator, and entrepreneur from Germany, working in the fields of computer graphics and computer vision. He is co-founder and CEO of Pinscreen, Inc., as well as associate professor of computer vision at the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI).[2] He was previously a Distinguished Fellow at the University of California, Berkeley,[3] an associate professor of computer science[4] at the University of Southern California, and director of the Vision and Graphics Lab at the USC Institute for Creative Technologies.[5] He was also a visiting professor at Weta Digital and a research lead at Industrial Light & Magic / Lucasfilm.

Hao Li
黎顥
Born: 1980 or 1981[1]
Citizenship: German
Alma mater: ETH Zurich (PhD, 2010); Karlsruhe Institute of Technology (Diplom, 2006)
Known for: Human digitization, facial performance capture
Awards: TR35 Award; ONR Young Investigator Award
Scientific career
Fields: Computer graphics, computer vision
Institutions: Pinscreen (founder and CEO); Mohamed bin Zayed University of Artificial Intelligence (associate professor)
Thesis: Animation Reconstruction of Deformable Surfaces (2010)
Doctoral advisor: Mark Pauly
Website: www.hao-li.com

For his work in non-rigid shape registration, human digitization, and real-time facial performance capture, Li received the TR35 Award in 2013 from the MIT Technology Review.[6] He was named Andrew and Erna Viterbi Early Career Chair in 2015, and was awarded the Google Faculty Research Award and the Okawa Foundation Research Grant the same year. Li won an Office of Naval Research Young Investigator Award in 2018[7] and was named to the DARPA ISAT Study Group in 2019.[8] He is a member of the Global Future Council on Virtual and Augmented Reality of the World Economic Forum.[9]

Early life

Li's parents are Taiwanese and lived in Germany as of 2013.[1]

Education

Li attended a French-German high school in Saarbrücken and speaks four languages (English, German, French, and Mandarin Chinese). He obtained his Diplom (equivalent to an M.Sc.) in computer science at the Karlsruhe Institute of Technology (then the University of Karlsruhe (TH)) in 2006 and his PhD in computer science at ETH Zurich in 2010.[10] He was a visiting researcher at ENSIMAG in 2003, the National University of Singapore in 2006, Stanford University in 2008, and EPFL in 2010. He was also a postdoctoral fellow at Columbia University and Princeton University between 2011 and 2012.[3]

Career

Li joined Industrial Light & Magic / Lucasfilm in 2012 as a research lead to develop next-generation real-time performance capture technologies for virtual production and visual effects. He later joined the computer science department[4] at the University of Southern California as an assistant professor in 2013 and was promoted to associate professor in 2019. In 2014, he spent a summer as a visiting professor at Weta Digital working on facial tracking and hair digitization technologies for the visual effects of Furious 7 and The Hobbit: The Battle of the Five Armies. In 2015, he founded Pinscreen, Inc., an artificial intelligence startup that specializes in the creation of photorealistic virtual avatars using advanced machine learning algorithms.[10] In 2016, he was appointed director of the Vision and Graphics Lab at the USC Institute for Creative Technologies, and in 2020 he joined the University of California, Berkeley as a Distinguished Fellow. In 2022, Li was appointed associate professor of computer vision at the Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi to direct a new AI center for Metaverse research.[11]

Research

He has worked on dynamic geometry processing and data-driven techniques for 3D human digitization and facial animation. During his PhD, Li co-created a real-time, markerless system for performance-driven facial animation based on depth sensors, which won the best paper award at the ACM SIGGRAPH / Eurographics Symposium on Computer Animation in 2009.[12] The team later commercialized a variant of this technology as the facial animation software Faceshift[13] (acquired by Apple Inc. in 2015 and incorporated into the iPhone X in 2017[14][15]). His technique for deformable shape registration is used by the company C-Rad AB and deployed in hospitals for tracking tumors in real time during radiation therapy. In 2013, he worked on a home scanning system that uses a Kinect to turn people into game characters or realistic miniature versions of themselves.[16] This technology was licensed by Artec and released as the free software Shapify.me. In 2014, he was brought on as a visiting professor at Weta Digital to build the high-fidelity facial performance capture pipeline used to reenact the deceased actor Paul Walker[17] in the movie Furious 7 (2015).

Hao Li speaking at the World Economic Forum 2020

His recent research focuses on combining techniques from deep learning and computer graphics to facilitate the creation of 3D avatars and to enable immersive face-to-face communication and telepresence in virtual reality.[18] In collaboration with Oculus / Facebook, in 2015 he helped develop a facial performance sensing head-mounted display,[19] which allows users to transfer their facial expressions onto their digital avatars while immersed in a virtual environment. In the same year, he founded the company Pinscreen, Inc.[20] in Los Angeles, which introduced a technology that can generate a realistic 3D avatar of a person, including the hair, from a single photograph.[21] The company also works on deep neural networks that can infer photorealistic faces[22] and expressions,[23] work that was showcased at the Annual Meeting of the New Champions 2019 of the World Economic Forum in Dalian.[10]

Due to the ease of generating and manipulating digital faces, Li has been raising public awareness about the threat of manipulated videos such as deepfakes.[24][25] In 2019, Li and media forensics expert Hany Farid of the University of California, Berkeley, released a research paper outlining a new method for spotting deepfakes by analyzing the facial expression and movement patterns of a specific person.[10] Citing the rapid progress in artificial intelligence and computer graphics, Li predicted in September 2019 that genuine videos and deepfakes would become indistinguishable in as soon as 6 to 12 months.[26] In January 2020, Li spoke at the World Economic Forum Annual Meeting 2020 in Davos about deepfakes[27] and how they could pose a danger to society. Li and his team at Pinscreen, Inc. also demonstrated a real-time deepfake technology[28] at the annual meeting, in which the faces of celebrities were superimposed onto participants' faces.

In 2020, Li and his team developed a volumetric human teleportation system that can digitize an entire human body in 3D from a single webcam and stream the content in real time. The technology uses 3D deep learning to infer a complete textured model of a person from a single view. The team presented the work at ECCV 2020 and demonstrated the system live at ACM SIGGRAPH's Real-Time Live! show, where they won the "Best in Show" award.[29][30]

Awards

  • ACM SIGGRAPH 2020 Real-Time Live! "Best in Show" Award.[29]
  • DARPA Information Science and Technology (ISAT) Study Group Member.[8]
  • Office of Naval Research Young Investigator Award.[7]
  • Andrew and Erna Viterbi Early Career Chair.[31]
  • Okawa Foundation Research Grant.[32]
  • Google Faculty Research Award.[33]
  • Named one of the world's top 35 innovators under 35 (TR35) by MIT Technology Review.[6]
  • Best Paper Award at the ACM SIGGRAPH / Eurographics Symposium on Computer Animation 2009.

Media

For his work on visual effects, Li has been credited in several motion pictures, including Blade Runner 2049 (2017), Valerian and the City of a Thousand Planets (2017), Furious 7 (2015), The Hobbit: The Battle of the Five Armies (2014), and Noah (2014). He has also appeared as himself in various documentaries on artificial intelligence and deepfakes, including BuzzFeed's Follow This in 2018, CBC's The Fifth Estate in 2018, and iHuman[34] in 2019. In 2022, Li and his company, Pinscreen, were featured in an episode of the documentary series Amazon re:MARS Luminaries.[35]

References

  1. Manjoo, Farhad (21 August 2013). "Innovators - Hao Li". MIT Technology Review. Massachusetts Institute of Technology. Retrieved 9 February 2023.
  2. "Say hello to virtual human Hao Li".
  3. "ACM SIGGRAPH Member Profile: Hao Li". www.siggraph.org. Retrieved 2021-04-25.
  4. Faculty roster, USC Computer Science Department. Archived 2016-07-07 at the Wayback Machine. Retrieved 2015-03-03.
  5. "Hao Li to Spearhead Vision and Graphics Lab at the USC Institute for Creative Technologies". viterbi.usc.edu. USC.
  6. TR35 Awards. MIT Technology Review. Retrieved 2015-03-03.
  7. "Hao Li Earns Office of Naval Research Young Investigator Award - USC Viterbi | School of Engineering". USC Viterbi | School of Engineering. Retrieved 2018-03-25.
  8. "Hao Li Selected for the DARPA ISAT Study Group". USC Institute for Creative Technologies. Retrieved 2019-12-08.
  9. "Global Future Council on Virtual and Augmented Reality". World Economic Forum. Retrieved 2019-12-08.
  10. Knight, Will. "The world's top deepfake artist is wrestling with the monster he created". MIT Technology Review. Retrieved 2019-12-08.
  11. "Say hello to virtual human Hao Li". News - Mohamed bin Zayed University of Artificial Intelligence. Mohamed bin Zayed University of Artificial Intelligence.
  12. Weise, Thibaut; Li, Hao; Van Gool, Luc; Pauly, Mark (August 2009). "Face/Off: Live facial puppetry". Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. ACM. pp. 7–16. doi:10.1145/1599470.1599472. ISBN 9781605586106. S2CID 2980982.
  13. "Performance driven facial animation". www.fxguide.com. fxguide. 4 August 2015.
  14. "All of Apple's Face-Tracking Tech Behind the iPhone X's Animoji". WIRED. Retrieved 2017-10-25.
  15. "Professor's research contributed to iPhone X | Daily Trojan". Daily Trojan. 2017-09-27. Retrieved 2017-10-25.
  16. "Hao Li wants to scan you into your favourite games". Wired UK. Wired.
  17. "How I Made It: USC professor brings computer animation to life". LA Times. 25 October 2015.
  18. "Who wants to show up as Gandalf at their next meeting?". news.usc.edu. USC. 11 October 2016.
  19. "Oculus Rift Hack Transfers Your Facial Expressions onto Your Avatar". technologyreview.com. MIT Technology Review.
  20. "Stealth Face-Tracking Startup Pinscreen Raises $1.8 Million". uploadvr.com. UploadVR. 9 August 2016.
  21. "Pinscreen launches with high-tech distractions for a nerve-wracking election". techcrunch.com. TechCrunch. 9 November 2016.
  22. "Photorealistic facial texture from a single still". fxguide.com. fxguide. 12 December 2016.
  23. Pierson, David (19 February 2018). "Fake videos are on the rise. As they become more realistic, seeing shouldn't always be believing". Los Angeles Times. Retrieved 2018-03-25.
  24. "FOX 11 In Depth: The dangers of social media". FOX 11 Los Angeles. 5 August 2019. Retrieved 2019-12-08.
  25. O'Neill, Patrick Howell. "The world's top deepfake artist: 'Wow, this is developing more rapidly than I thought.'". MIT Technology Review. Retrieved 2019-12-08.
  26. Stankiewicz, Kevin (2019-09-20). "'Perfectly real' deepfakes will arrive in 6 months to a year, technology pioneer Hao Li says". CNBC. Retrieved 2019-12-08.
  27. "Deepfakes: Do Not Believe What You See". World Economic Forum. Retrieved 2020-01-26.
  28. Thomas, Daniel (2020-01-23). "Deepfakes: A threat to democracy or a bit of fun?". BBC News. Retrieved 2020-01-26.
  29. "We're One Step Closer to Consumer-accessible Immersive Teleportation". ACM SIGGRAPH Blog. 2020-10-22. Retrieved 2021-04-11.
  30. "Connecting People in a Distanced World". USC Viterbi | Magazine. Retrieved 2021-04-11.
  31. "Endowed chairs and professorships at USC Viterbi School of Engineering". USC.
  32. "USC Professors Earn International Award". USC.
  33. "Google Faculty Research Award 2015" (PDF).
  34. Keslassy, Elsa (2019-11-15). "Cinephil Acquires AI-Themed Political Thriller Documentary 'iHuman' (EXCLUSIVE)". Variety. Retrieved 2019-12-09.
  35. "Amazon re:MARS Luminaries".