Computer Graphics World

July-Aug-Sept 2021

And until we get that, we're not going to be able to get a fully realistic human out there. The only other way to accomplish it is to completely fake it with deepfakes, but then it becomes a completely different way of looking at the rendering. You're not actually looking at digital humans, you're looking at a warped face of some kind."

Giantstep GX Lab

Like Digital Domain and the Digital Human League, the Korea-based creative studio Giantstep is also focused on realistic real-time digital humans. It started R&D in 2018, and "now, we're in the era of the Metaverse, where the development of the virtual human (metahuman) is very popular, and there are many companies challenging the technology," says Sungkoo Kang, director at GX Lab, the internal R&D division of Giantstep, who was the lead on an application called Project Vincent (see "Project Vincent," below).

Project Vincent

Douglas is not the only realistic real-time digital human out there. In fact, there are a number of them, including ongoing work by digital humans researcher Mike Seymour, whose project MeetMike, a real-time performance-captured VR version of himself, was shown at SIGGRAPH 2017.

Another impressive application is Project Vincent from Giantstep's GX Lab. The aim of the project is to create a realistic digital human with a real-time engine. Vincent can move in real time and is capable of emotional expression, and Giantstep is developing a way for him to communicate with people by grafting on AI. While Vincent is not fully autonomous at present, Giantstep is continuing to develop his communication skills.

Initially, GX Lab was formed with just three people, but by the time the project was finished, that number had grown to 17. According to Kang, it took approximately one year to see the first visual results of the work, and since then, R&D has continued as the team finesses the necessary technology.

To create Vincent, the group needed 300 to 2,000 facial blendshapes, a process that would have been too time-consuming with existing digital content creation tools, and nearly impossible in a real production. Instead, the group created an internal plug-in that divided the face into areas and automated the division and editing of facial expressions, which hugely improved quality and shortened the production time.

"We also reviewed several solutions to simulate facial expressions in real time, but we didn't find a real-time solution that could produce the level of quality we wanted," says Kang. "So, we developed an artificial neural network that uses machine learning to implement facial expressions."

Determining the shape of a facial expression is a very complex problem, Kang points out. For example, there are about 12 parameters related to the corner of the mouth alone. "You can see how complicated this problem is by looking at only three of the parameters, like the horizontal pull of the mouth, the height of the mouth, and the degree of the chin's opening," he explains. "The corner of the mouth should have a completely different shape depending on whether the mouth is pulled to the side with the chin open, or it is pulled to the side with the mouth closed."

That is a cubic equation, Kang notes, and the functions become more complicated when other factors are involved, such as the surrounding nose, cheeks, and eyelids. "It's almost impossible to approach this mathematically," he says.
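To make the combinatorics concrete, here is a minimal sketch in Python of the kind of parameter interaction Kang describes. This is not Giantstep's rig: the offsets, names, and values are all hypothetical. The point is that a corrective shape driven by the product of three controls is cubic in those controls, and with roughly 12 interacting parameters around the mouth corner alone, authoring every corrective by hand quickly becomes intractable.

```python
import numpy as np

# Minimal sketch (not Giantstep's actual rig): the mouth-corner position is
# not a simple sum of per-parameter offsets. Combinations of parameters need
# their own corrective terms, and a term driven by the product of three
# controls (pull * height * jaw) is cubic in those controls.

# Illustrative 2D offsets for the mouth corner (hypothetical values).
D_PULL   = np.array([1.0, 0.0])   # horizontal pull of the mouth
D_HEIGHT = np.array([0.0, 0.6])   # height of the mouth
D_JAW    = np.array([0.1, -1.2])  # degree of the chin's opening

# Corrective offsets that fire only when parameters combine.
C_PULL_JAW        = np.array([-0.4, -0.3])  # pull while the jaw is open
C_PULL_HEIGHT_JAW = np.array([0.2, 0.15])   # all three at once: a cubic term

def mouth_corner(pull: float, height: float, jaw: float) -> np.ndarray:
    """Corner offset as linear terms plus pairwise and triple correctives."""
    return (pull * D_PULL
            + height * D_HEIGHT
            + jaw * D_JAW
            + pull * jaw * C_PULL_JAW                    # quadratic interaction
            + pull * height * jaw * C_PULL_HEIGHT_JAW)   # cubic interaction

# The same sideways pull lands in a different place with the jaw open vs. closed:
print(mouth_corner(pull=1.0, height=0.0, jaw=0.0))  # mouth closed
print(mouth_corner(pull=1.0, height=0.0, jaw=1.0))  # chin open
```

With 12 parameters, the pairwise and triple correctives alone number in the hundreds, which is why Kang calls a purely mathematical approach nearly impossible.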
However, if there is enough training data, the problem can be solved with machine learning, and even faster computation in real-time simulation becomes possible. For this reason, Giantstep used machine learning to link the parameters that determine facial expressions to a frontal image of the face, so complex expressions could be implemented from just one streamed frontal image.

"At the time of Vincent's development, only a small number of actors' facial expressions were trained, so only a small number of people could move Vincent," says Kang. Since then, the group has continued training its AI networks, and now any face can be read and can drive Vincent's face. Depending on the level of training, however, the network may still introduce errors in the form of unwanted facial expressions. To exclude this error entirely, the team also developed a technology that processes the eyes and the mouth as completely separate networks (see the sketch below).

Vincent has since been used in a number of fields. At the opening of IBM's Artificial Intelligence Conference, he spoke with the representative of IBM's Korean branch, and the project won an Epic Games MegaGrant. Internally, GX Lab has used Vincent as a testbed for many other emerging technologies, and, says Kang, he's played a huge part in the continued development of digital human technology.

[Image: Giantstep GX Lab's CG Vincent.]
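As an illustration of the two ideas above, here is a minimal PyTorch sketch: regressing expression parameters from a single frontal frame, with the eyes and the mouth handled by completely separate networks so an error in one region cannot bleed into the other. This is not Giantstep's model; the architecture, crop boxes, and parameter counts are assumptions made for the sake of the example.

```python
import torch
import torch.nn as nn

# Minimal sketch of the idea Kang describes, not Giantstep's model: regress
# blendshape-style expression parameters from one frontal image, with the
# eyes and the mouth processed by completely separate networks.

def make_region_net(num_params: int) -> nn.Sequential:
    """A tiny CNN regressor for one facial region's parameters."""
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, num_params),
    )

eye_net   = make_region_net(num_params=24)  # hypothetical eye parameter count
mouth_net = make_region_net(num_params=40)  # hypothetical mouth parameter count

frame = torch.rand(1, 3, 128, 128)          # one streamed frontal frame

# Crop each region from the frame (fixed boxes here; a real system would
# locate the regions with landmark detection first) and run its own network.
eye_params   = eye_net(frame[:, :, 20:60, 24:104])
mouth_params = mouth_net(frame[:, :, 70:120, 32:96])

print(eye_params.shape, mouth_params.shape)  # (1, 24) and (1, 40)
```

Because the two networks share nothing, each can be retrained or corrected independently, which is one way to rule out cross-region errors of the kind Kang describes.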
