As content creators, keeping up with the latest technology is essential for us. The less time we spend on technical aspects, the more time we can devote to creativity, which directly impacts the work we produce for clients. That’s why we were excited to check out Luma AI and its free tool for generating NeRFs. In this article, we’ll share what we’ve learned and our experience testing the tool so far.

First, let’s define Neural Radiance Fields (NeRFs). A NeRF uses a neural network to reconstruct a 3D scene from ordinary 2D images, revolutionizing the way we think about 3D capture. Unlike the NERF toys we played with as kids, this technology is complex, but Luma AI has made it surprisingly simple. Given an image set or a video of an object or environment, Luma AI learns a radiance field and generates a 3D model. Depending on the level of detail captured, these models can serve as anything from concept references to production assets. Capturing an object takes a fraction of the time it would take to model it from scratch in programs like Cinema 4D or Blender. Once you have a 3D model, you can add live-action elements like actors and props using camera tracking information, opening up creative possibilities both during the concept phase and for final production assets. Corridor Crew has an excellent walkthrough of NeRF technology and how it evolved into user-friendly apps like Luma AI.
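
For the technically curious, the core idea can be written down compactly. This is the standard formulation from the original NeRF research, not anything specific to Luma AI’s implementation: a neural network $F_\Theta$ maps a 3D position $\mathbf{x}$ and viewing direction $\mathbf{d}$ to a color $\mathbf{c}$ and density $\sigma$, and a pixel’s color is the volume-rendered blend of samples along its camera ray $\mathbf{r}(t)$:

$$
F_\Theta(\mathbf{x}, \mathbf{d}) = (\mathbf{c}, \sigma), \qquad
\hat{C}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt, \qquad
T(t) = \exp\!\Big(-\!\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\Big)
$$

Training adjusts the network until rays rendered this way reproduce the pixels of your input frames; once trained, the scene can be rendered from viewpoints you never actually captured.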

To get started, we captured a single small object, following the on-screen prompts to circle around it for approximately 5 minutes. After capturing, we uploaded the footage to Luma AI for processing. Within 10 minutes, we had a 3D preview of the model with a basic orbiting camera move. The app offers various modes for viewing the model, and for more control you can create custom camera moves right inside the browser-based version of the app. You can also export the object’s mesh in various formats for use in 3D and compositing programs.
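
If you want to take an exported capture further, here’s a minimal sketch of bringing it into Blender with the application’s Python API. We’re assuming a .glb export here, and the file path is hypothetical; adapt both to your own capture and format.

```python
# Minimal sketch: importing a NeRF-derived mesh into Blender.
# Run from Blender's Scripting tab.
import bpy

# Hypothetical path to the mesh exported from Luma AI
capture_path = "/Users/me/Downloads/office_entryway.glb"

# Blender's built-in glTF importer brings in the mesh and its
# baked textures in one step
bpy.ops.import_scene.gltf(filepath=capture_path)

# Newly imported objects are selected after import; list them
for obj in bpy.context.selected_objects:
    print(obj.name, obj.type)
```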

We were surprised by the results of our first capture and immediately wanted to try the tool on larger spaces, both interior and exterior, to see if any issues arose. The texturing of objects was impressive and looked photorealistic, all from simply capturing video with our iPhone. You can view examples of our captures in this article. Our team plans to continue exploring this technology and will share more about post-production steps, such as compositing and integrating captures into 3D programs, in a future post.

To learn more about Luma AI, visit their website or download the app from Apple’s App Store and create a free account in minutes.

This NeRF render of our office entryway was created with Luma AI. Total capture time was around 5 minutes. After uploading, we built a custom camera move in their desktop app; a few simple keyframes gave us a pretty dynamic fly-through. The entire process, including the app’s processing time, took around 30 minutes.
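
For those who would rather block out a similar fly-through manually once a capture is inside a 3D program, the same idea translates to a few keyframes on a camera. The sketch below uses Blender’s Python API purely as an illustration; it is not how Luma’s desktop app works, and the positions and frame numbers are made up.

```python
# Illustrative sketch: a simple two-keyframe camera fly-through
# in Blender, mimicking the kind of move we built in Luma's app.
import bpy

cam = bpy.context.scene.camera  # assumes the scene already has a camera

# Start of the move: wide on the entryway
cam.location = (4.0, -6.0, 1.6)
cam.keyframe_insert(data_path="location", frame=1)

# End of the move: pushed in through the doorway
cam.location = (0.5, -1.0, 1.4)
cam.keyframe_insert(data_path="location", frame=120)
```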