Computer Graphics World

July-Aug-Sept 2021


"Project Vincent," page 11). "However, back when we started, even the term 'Metaverse' didn't exist, so for a small team like ours from a midsize company, it was not an easy decision to put a lot of money into developing such technology."

It is the studio's belief that realistic virtual humans will have enormous potential and practical value in the near future, followed in the far distant future by robot development. To this end, the studio started Project Vincent so it could see the possibilities and develop the underlying skills needed for this tech.

In the case of offline rendering, such as movies, artists and technologists can throw a lot of manual labor and computing power at the problem to adjust final outcomes. However, when it comes to a real-time digital human, the difficulty factor rises to a whole new level. As Kang points out, every calculation has to be done in 1/30 of a second and produce a finished image.

"Usually the shape of the performing actor and the 3D model's face are different, so there is a retargeting process whereby the actor's facial movement is read, converted, and then applied to the shape of the 3D model. Even this process takes more than 1/30 of a second," says Kang. "If you extremely optimize the data, this could be possible, but we chose a better solution."

As Kang stresses, AI is the most important part in the development process. While AI was an essential component in the creation of Vincent, it is not without limitations, particularly when it came to solving all of the retargeting. As a result, Giantstep developed a tool using a lightweight AI that solved the retargeting process discussed above in 1/30 of a second.

However, technology alone is not the answer. Kang agrees with Nichols, Hendler, and Roble in that the solution is also dependent on having great artists and technicians.
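To make the 1/30-of-a-second constraint Kang describes concrete, here is a minimal frame-budget check. The function names, channel count, and the trivial weight mapping are all hypothetical stand-ins, not Giantstep's actual pipeline; the point is only that every per-frame step, retargeting included, must fit inside roughly 33 milliseconds.

```python
import time

FRAME_BUDGET_S = 1.0 / 30.0  # ~33.3 ms per frame at 30 fps

def retarget_frame(actor_weights, scale=1.0):
    """Hypothetical stand-in for retargeting: map the actor's tracked
    facial weights onto the 3D model's corresponding channels."""
    return [w * scale for w in actor_weights]

def frame_fits_budget(work, *args):
    """Time one frame's worth of work and report whether it fits the budget."""
    start = time.perf_counter()
    work(*args)
    elapsed = time.perf_counter() - start
    return elapsed <= FRAME_BUDGET_S

weights = [0.2] * 52  # e.g. 52 face-tracking channels (illustrative)
print(frame_fits_budget(retarget_frame, weights))
```

This toy mapping fits the budget easily; the article's point is that a full retargeting solve does not, which is why Giantstep moved the step to a lightweight AI model.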
When Giantstep first started focusing on digital humans, it set a goal to reduce its reliance on highly skilled artists and technicians. So, instead of modeling by hand, the studio opted for scanning and generating blendshapes with automated algorithms rather than handmade production. "But, ironically, what we realized later was that had it not been for great artists and technicians, the look of Vincent would not have been possible, even with the best scanning service. The same was true for technical development, and we wouldn't have been able to choose the direction that we've gone."

Soul Machines

Another company making very significant advancements in the area of autonomous digital humans is Soul Machines, with headquarters in San Francisco and R&D in Auckland, New Zealand. In fact, its co-founder, Mark Sagar, spent a number of years building realistic CG human characters for various blockbuster films while at Weta Digital and Sony Pictures Imageworks. Later, he started the Laboratory for Animate Technologies at the Auckland Bioengineering Institute, where he focused on bringing digital human technology to life, pulling together mathematical physics simulations and combining them with computer graphics.

With a PhD in bioengineering, Sagar had studied how faces are animated, how they deform, how they reflect light, and so forth. Indeed, he spent time during his VFX career building systems that simulated actors' emotions and behaviors. But it all centered on the surface structure. He wanted to take things further, having an interest in biomechanics, physiology, how the brain worked, and, ultimately, consciousness.

"I had always looked at those areas and the progress being made in artificial intelligence," Sagar says. "I was interested in what actually drives actors' performances and motivates their behaviors."
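The scan-derived blendshapes mentioned above are typically combined with the standard linear blendshape model: each captured expression is stored as a per-vertex offset from the neutral scan, and the animated face is the neutral mesh plus a weighted sum of those offsets. The shapes and weights below are toy values for illustration, not Giantstep's actual rig.

```python
import numpy as np

# Each blendshape is a per-vertex delta from the neutral scan.
neutral = np.zeros((4, 3))                        # 4 vertices, xyz
smile_delta = np.tile([0.0, 0.10, 0.0], (4, 1))   # mouth-corners-up shape
brow_delta = np.tile([0.0, 0.0, 0.05], (4, 1))    # brow-raise shape

def apply_blendshapes(neutral, deltas, weights):
    """Return neutral + sum(w_i * delta_i): the linear blendshape model."""
    face = neutral.copy()
    for delta, w in zip(deltas, weights):
        face += w * delta
    return face

face = apply_blendshapes(neutral, [smile_delta, brow_delta], [0.8, 0.3])
# face[0] is now [0.0, 0.08, 0.015]
```

Automating the capture of those deltas is what scanning buys you; as the quote above notes, it still took strong artists to make the resulting shapes read as Vincent.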
At the lab, Sagar began pulling those aspects together in an attempt to create truly autonomous, fully animated characters that would have a proper brain model driving the behaviors of the character. "It meant building a character in the complete opposite way that it had been done before. It meant, basically, building a character from the inside out," he says. "I wanted to make digital characters that would see and hear the world, and be able to take that all in and learn, and to have a memory and react to the world."

According to Sagar, the work in the lab encompassed two aspects. One involved building the brain models to drive a fully autonomous character, which was called BabyX (see "BabyX," page 12). The other was using parts of that technology to generate facial simulations.

After four and a half years of working toward this goal, the lab expanded into the company Soul Machines. On the research side, the company continued the extraordinary work on BabyX, exploring a new way to animate characters from the inside out. On the business side, the company began looking at using these lifelike digital humans in various applications.

The culmination of this work is Digital People for real-world applications. The company's Digital People contain its patented Digital Brain, which contextualizes and adapts in real time to situations, similar to human interactions, for enhancing customer brand experiences.

Currently, Soul Machines offers three categories of products. The Human OS Platform features a Digital Brain (for AI learning resulting in appropriate responses) and Autonomous Animation for lifelike interactions. The Digital DNA Studio Blender enables customers to scale their unique Digital Person, creating multiple looks and leveraging up to 12 languages. And with Soul Solutions, customers can choose among three service levels, from entry level to advanced, to fit their needs.
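Sagar's "inside out" approach can be caricatured as a sense-think-act loop in which perception updates an internal state, and that state, rather than a pre-authored clip, drives the expressed behavior. The class below is purely conceptual: the memory list, scalar mood, and smoothing factor are invented for illustration and bear no relation to Soul Machines' actual Digital Brain.

```python
# Conceptual sketch only: perception feeds internal state, and the
# internal state drives behavior (not keyframes applied from outside).
class InsideOutCharacter:
    def __init__(self):
        self.memory = []   # what the character has experienced
        self.mood = 0.0    # toy scalar internal state

    def perceive(self, stimulus, valence):
        """Take in the world: remember it, and let it shift the mood."""
        self.memory.append(stimulus)
        self.mood = 0.8 * self.mood + 0.2 * valence

    def react(self):
        """Behavior emerges from state, not from an external script."""
        return "smile" if self.mood > 0 else "frown"

character = InsideOutCharacter()
character.perceive("friendly greeting", +1.0)
print(character.react())  # "smile"
```

The contrast with the traditional pipeline is the direction of causation: an animator-driven character has expressions pushed onto its surface, while here the expression is read out of whatever the internal model currently holds.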
The technology used to drive Soul Machines' adult digital humans, or virtual assistants, combines elements of the brain model with other technologies, such as speech recognition and NLP (natural language processing) algorithms, which companies such as Microsoft and Google are working on. Together these enable the creation of conversational models, albeit ones that are more curated, as opposed to resulting from pure intelligence. Sagar compares the two methods to an actor saying lines: the person can either recite them from a detailed script or improvise on the fly. Soul Machines is working on both ends of that scale, depending on the application it will be used for.

In fact, the use cases for Digital People are vast, spanning many sectors including consumer goods, health care, retail, tech, call centers, and more.

Soul Machines is continuing development of its technology and will be releasing its Digital DNA Blender for creating different faces in the near future. It also will be releasing technology that makes the digital humans content-aware, for meaningful interactions, generated autonomously and live.

RT Digital Humans at Work

Roble, Hendler, Nichols, Kang, and Sagar
