Researchers develop new computational method to make data-driven 3D modeling easier

Source: Xinhua | 2017-07-19 07:25:48 | Editor: Zhou Xin

LOS ANGELES, July 18 (Xinhua) -- A new computational method, to be demonstrated at SIGGRAPH 2017 later this month in Los Angeles, California, is addressing a well-known bottleneck in computer graphics: 3D content creation.

The new generative model, called GRASS and based on deep neural networks, was developed by an international research team. It automatically creates plausible, novel 3D shapes, letting graphic artists in video games, virtual reality (VR) and film more quickly and effortlessly generate and explore multiple candidate shapes on the way to a final product.

With GRASS, "Everything is driven implicitly by the examples, or learned from data," Kai Xu, a co-author of a new research paper, said in a press release.

According to the paper, published in ACM Transactions on Graphics, the new method uses machine-learning techniques and artificial intelligence to eliminate the burden of generating multiple 3D shapes by hand.

Researchers believe that computational methods like GRASS could one day transform the video game, film, computer-aided design (CAD) and VR industries.

Generating new 3D shapes is challenging. "The time-consuming process of 3D content creation prevents computer graphics from being as ubiquitous as we had hoped," said Xu, an associate professor of computer science at the National University of Defense Technology (NUDT) in China and a soon-to-be visiting professor at Princeton University.

"Our work is a data-driven automatic shape generation computational method. Given a set of example 3D shapes, our task is to generate multiple shapes of one object class, automatically," Xu said.

As an example, Xu explained that given a set of chairs, the new method can quickly create more chairs with different geometric structures, and do so simply enough that even a novice user can apply it.

According to the press release on Tuesday, the core ideas that led to GRASS were born out of two questions the researchers had been asking themselves for years.

The first is whether 3D shapes can be transformed into and generated through genetic codes mimicking human DNA. Fittingly, the first code name for the project was "Shape DNA". GRASS learns to encode an arbitrarily complex 3D shape into a fixed set of parameters, and regenerate it from those parameters.
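The "shape DNA" idea can be pictured as an autoencoder: any shape descriptor, however complex, is compressed into a code of fixed size, and the shape is regenerated from that code alone. The following is a minimal, hypothetical numpy sketch of that encode/decode loop; the dimensions and random, untrained weights are illustrative assumptions, not the authors' actual network.

```python
import numpy as np

# Toy illustration (NOT the GRASS model): compress a shape descriptor
# into a fixed-length "genetic code" and regenerate a descriptor from it.
rng = np.random.default_rng(0)

SHAPE_DIM = 12   # e.g. a flattened part-box descriptor (assumed size)
CODE_DIM = 4     # the fixed set of parameters -- the "shape DNA"

# Random, untrained weights; a real model would learn these from examples.
W_enc = rng.standard_normal((CODE_DIM, SHAPE_DIM)) * 0.1
W_dec = rng.standard_normal((SHAPE_DIM, CODE_DIM)) * 0.1

def encode(shape_vec):
    """Map an arbitrarily detailed shape descriptor to a fixed-size code."""
    return np.tanh(W_enc @ shape_vec)

def decode(code):
    """Regenerate a shape descriptor from the fixed-size code."""
    return W_dec @ code

shape = rng.standard_normal(SHAPE_DIM)
code = encode(shape)
reconstruction = decode(code)
```

Whatever the input shape looks like, `code` always has the same fixed length, which is what makes sampling and mixing such codes to produce novel shapes possible in principle.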

The second question is how to best represent 3D shapes for computer-aided synthesis. The team, led by NUDT, eventually settled on a structural representation, describing a 3D shape as an organized hierarchy of its constituent parts.
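The hierarchical representation can be sketched in the same toy style: each part carries a fixed-size code, and a recursive merge combines two child codes into one parent code of the same size, so an entire part hierarchy collapses to a single root code. Again, the weights below are random and the chair grouping is a made-up illustration, not the trained GRASS hierarchy.

```python
import numpy as np

# Toy illustration of a structural hierarchy (NOT the trained model):
# recursively merge child part codes into a single fixed-size root code.
rng = np.random.default_rng(1)
CODE_DIM = 4
W_merge = rng.standard_normal((CODE_DIM, 2 * CODE_DIM)) * 0.1  # untrained

def merge(left, right):
    """Combine two child codes into one parent code of the same size."""
    return np.tanh(W_merge @ np.concatenate([left, right]))

# A chair described as a hierarchy of parts: ((back, seat), legs)
back, seat, legs = (rng.standard_normal(CODE_DIM) for _ in range(3))
root = merge(merge(back, seat), legs)  # whole chair -> one fixed code
```

Because `merge` maps two codes back to one code of the same size, hierarchies of any depth reduce to a single fixed-length vector, tying the structural representation to the "shape DNA" idea above.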

The authors, from NUDT, Adobe Research, IIT Bombay, Simon Fraser University and Stanford University, are set to showcase their work at SIGGRAPH 2017, which is known for the spotlight it shines annually on the most innovative computer graphics research and interactive techniques worldwide.