Niloom.AI launches one-stop generative AI content creation platform for spatial computing

Niloom.AI launched the beta test of its generative AI content creation platform for spatial computing today. Niloom.AI is a comprehensive platform that harnesses GenAI within the spatial computing ecosystem to create, prototype, edit and instantly publish sophisticated AR/VR content at a fraction of the usual time and cost, said CEO Amir Baradaran in an interview with GamesBeat. It is a streamlined software-as-a-service (SaaS) solution that consolidates the entire creative process, from ideation and development to testing, collaborating and publishing.

“We are the first GenAI tech in the ecosystem and we are one of the first ones who are creating spatial computing content. You can generate individual assets such as 3D models, 2D/360 images, music, sound effects, or even use text-to-speech to give voice to your characters,” Baradaran said. “I’m very happy to say that now you can personalize the personality of your character with an AI agent in a very sophisticated way. We also provide video-to-animation (integrating Kinetix tech). And then we have streamlined the process of interjecting any animation into any character. These are some of the heavy-lift things that we have been doing.”


He added, “Most importantly, you can easily generate an entire story on a timeline that allows you to have access from a bird’s eye view on a timeline. You have sophisticated editing capabilities and interactivity, which is really important. For me, gamification is inherent to the nature of AR/VR.”

There is a lot more in the pipeline, with new upgrades arriving every two weeks or so for things like revenue generation, buying and selling projects, web AR and more.

By integrating over 100 key features into one platform, Niloom.AI reduces production time and costs, optimizes production workflows, and solves the interoperability pain point of the spatial computing market. Eliminating reliance on costly engineers, the browser-based, no-code platform is easy to use for professional and casual creators alike.

“Niloom.AI opens the floodgates to the creative community who have been sidelined by the technical requirements of content creation in spatial computing,” said Baradaran. “As an early adopter of spatial computing, I experienced firsthand the limitations of relying on an army of engineers to bring my artworks to life. Niloom.AI monumentally transforms the spatial computing content creation process by dismantling the technical and cost barriers that exist in the market, allowing anyone to generate and publish AR/VR experiences in minutes.”

The platform features GenAI within the spatial computing ecosystem to create, prototype and edit sophisticated AR/VR content. With easy text or speech prompts, Niloom.AI’s GenAI generates complete AR/VR experiences, personalized AI agents and individual assets. It can now create and publish projects directly within Apple Vision Pro and Meta Quest headsets.

It can be used for advanced creation, editing and prototyping. Developers can create immersive AR/VR experiences with advanced features including interactive 3D models and animated characters enabled with verbal communication, compelling storylines, rich backgrounds, music, visual and sound effects, AI-driven voices and more.

Amir Baradaran is CEO of Niloom.AI.

Editing tools allow for live collaboration, precision editing, version control, testing and simulation. Prototyping allows for the simulation of scenes to facilitate feedback and collaboration.

Developers can capture a bird’s eye view of entire projects against visual timelines and decision trees to “add logic” to scenes, enabling complex stories and endless possibilities for user interaction: touch, hand gestures and verbal commands.

And they can integrate directly with third-party tools including Sketchfab, Ready Player Me, Inworld, and Google TTS for a one-stop solution. Niloom.AI is hardware and software agnostic, facilitating both content creation and instant publishing across spatial computing mobile devices (iOS, Android) and headsets (Apple Vision Pro, Meta Quest).

It has a management system where developers can optimize workflows with a cloud-based asset and project library, team management tools and access to data and analytics.

Live demo

Baradaran gave me a live demo of the tech in action.

“You can upload your own assets, import new ones from Sketchfab or simply generate them. Be it a 3D asset, characters, animations, 2D or 360 images, or music and sound effects. And these things can all come together in order for you to control them and put them on a timeline,” he said.

He created a project in front of me in a matter of minutes and noted that in the past it would have taken weeks.

“Most importantly, we allow for content creators who are not developers. They can be part of this.”

“Over the past decade I have seen demos of dozens of tools trying to simplify the creation of XR experiences,” said Ori Inbar, adviser and cofounder of Augmented World Expo, in a statement. “Niloom.AI nails it by not only empowering creators of all technical backgrounds to quickly prototype AR and VR experiences, but also going deeper, creating sophisticated scenes and interactions.”

“Niloom.AI offers a groundbreaking technology that will drive a new era in spatial computing. This is exactly the sort of scalable, transformational software that major tech companies seek to partner with or acquire to empower a new generation of content creators,” added Debu Purkayastha, strategic adviser to Niloom.AI and managing partner at 3rd Eye, in a statement. “What Niloom.AI has built is revolutionary; it simply does not exist elsewhere.”

Niloom.AI is now available in the U.S. on the web and in the iOS App Store, the visionOS App Store and the Meta Quest store. The first 1,000 creators will be given exclusive early access to the platform, including a 14-day bonus to the Pro version. Following that, they will be offered exclusive beta subscription rates.

Baradaran has been in the world of augmented reality for 15 or so years, starting first as a content creator.

“I was an artist who got excited by the realm of spatial computing, augmented reality and virtual reality back in the day. I was really lucky to stumble upon the technology,” he said. He did shows for the tech at the Louvre, the British Museum, Art Basel and more, and taught classes on spatial computing at the Columbia University School of Engineering.

“I was one of the only artists who was saying, ‘Hey, this will monumentally change how we create content, how we tell stories, and how we understand our sense of self.’ It was quite exciting because the art world was also very reticent to understand this new technology. And I’m very happy to have been one of the kind of early adopters, but also early evangelists of that space.”

More recently, he was excited to see Apple get into the market with the Apple Vision Pro.

“We started basically building what has become a generative AI-powered content creation platform to create content in AR/VR for spatial computing experiences,” he said.

He started the firm three years ago with a couple of his students and raised a $2.5 million pre-seed round in 2021. The team has a core group of three people, and it is now adding a marketing team.

“It was so hard to build that vision, that I had to simplify the entirety of this complex process to create content,” he said. “It was technical, time consuming and very pricey.”

Niloom.AI can generate personalities.

“We have done our calculations. And we’re very excited to say that the same project that will take you about six months with Unity takes about six hours with us,” Baradaran said. “And that’s a very sophisticated project. You can prompt it with us because we have incorporated a full generative AI engine.”

While many big tech platforms are siloed, Niloom.AI wants to address the pain points of the ecosystem and make the tech interoperable.

Baradaran said it is unusual to see people walking around outside with an Apple Vision Pro on their heads, but he noted the form factor will change over time, and there isn’t really anything natural about walking around with a smartphone in your hand, looking down all the time.
