Five months ago, Autodesk showed us Project Bernini, a proof of concept its Research Lab was working on that produced generative AI 3D shapes from a wide range of inputs in less than a minute. What made Bernini's results so impressive was that the models reflected real-world functionality. At Autodesk University, the company said it is refining the work and looking at the potential of applying the techniques from Bernini to build new foundation models that can automate certain tasks in manufacturing as well as other industries.
In May of this year, Autodesk shared its new research project, Project Bernini, a proof of concept for generative AI 3D shape creation in which 3D structures and shapes produced from a range of inputs are functional in the real world, making them usable within the design and make industries. Developed by Autodesk Research's AI Lab, Project Bernini aims to produce multiple functional variations of a 3D shape from a single 2D image, multiple images showing different views of an object, point clouds, voxels, or text (a natural language prompt). Bernini then synthesizes real geometry, helping the user rapidly generate concepts.
Unlike other 3D generators that produce simplistic 3D shapes, Bernini produces fully realized and detailed objects, and it does so in just under a minute.
The Autodesk AI Lab trained the initial Bernini model on 10 million diverse 3D shapes, a composite dataset comprising publicly available data, a mixture of CAD objects, and organic shapes. Autodesk says its research group built the largest dataset of 3D training data ever assembled, drawn from across the research community. Because it is open data, Bernini remains a research project in its current form.
“We can make it openly available to people,” said Mike Haley, senior VP of research at Autodesk.
Project Bernini carries the "Project" moniker because it is a research effort, but one that Autodesk ultimately hopes to bring into production. Before that can happen, however, new non-restrictive training data would be needed, and lots of it. Right now, there is not an abundance of data out there that fits the requirements.
Since unveiling this experimental work, Autodesk has continued to refine the technology. And with so much attention on the company's AI efforts at this year's Autodesk University, Bernini was bound to surface there, and it did.
During his keynote at the design and make conference, CEO Andrew Anagnost updated the audience on Project Bernini, adding that Autodesk is looking at potentially moving the Bernini prototype from concept to the company’s manufacturing cloud. This would help designers conduct more analysis during the conceptual design phase of a project.
Jeff Kinder, Autodesk EVP of design and manufacturing, reiterated that Autodesk has made progress with foundation models in CAD through Project Bernini, which can rapidly generate functional 3D models that can then be brought into Fusion, where a designer can continue iterating on the concept. In fact, Autodesk said it is applying the techniques from Bernini to build new foundation models that will automate complex, repetitive, and error-prone tasks, and not just in manufacturing.
“Bernini is indicative of how we can use future models to augment your creativity, how you can overcome your own inertia and jump three steps ahead in the process. We’re confident this has broad applicability for conceptual design and manufacturing,” said Kinder. He added that, given the success of Form Explorer in Alias, which allows rapid exploration and evolution of new vehicle concepts in seconds rather than hours or days, the company is confident that models like Bernini will augment and accelerate creativity in other areas of conceptual design.
Bernini has promising applications in other industries, too, including M&E, and Autodesk is partnering with companies including Electronic Arts to explore the potential of generative AI in game development.
Project Bernini is Autodesk’s first fully generative model trained on 3D geometry. According to Autodesk, if trained on buildings, such models could generate geometrically rigorous creative designs and inspire a new generation of buildings and architects. If trained on video game character models or fantasy environments, they could produce fascinating new creatures or virtual worlds. If trained on car designs, they could assist in imagining an innovative new series of vehicles.
As Haley explained, traditional CAD relies on geometry kernels, so applying AI there means algorithmically changing geometry through the kernel. Project Bernini, by contrast, is not based on a geometry kernel; the AI itself, a large neural network model, is directly able to manipulate geometry.
“There’s a lot of small companies or folks out there who are taking language models and putting them on top of CAD APIs to create geometry. That’s okay. The problem is that the language model doesn’t understand geometry, so while it can do certain things, it actually breaks quite quickly,” Haley explained. “In our experience, what you actually need to do is build an AI that’s natively geometry, that it has an intrinsic understanding of geometry. So when you see something like Bernini, that’s the model itself synthesizing the geometry, because it comprehends it at the same time.”
Bernini is the tip of the spear for the state of the art of this technology and for how we see people potentially interacting with 3D, Haley said. Many people have commented that they do not think 3D modeling is really needed anymore, he added, but that couldn’t be further from the truth. Bernini helps get people to that first stage of design very quickly.
AI foundation models like Bernini have the potential to shift paradigms for Autodesk’s industries as users spend less time creating geometry and more time designing and making, noted Autodesk CTO Raji Arasu.
“It’s very exciting technology, and we’re looking forward to taking the learnings from Bernini and building it into some of the future generative capabilities that we’ll be offering from Autodesk in our advanced AI,” Haley said.