This post was originally published by Dean Takahashi at VentureBeat
Move.ai uses artificial intelligence to capture a 3D representation of an actor, a process known as motion capture. But it doesn’t need actors in Lycra suits covered in white balls, and it lets game companies do motion capture remotely during the pandemic.
That’s an important technological advancement, because the hassles of motion-capture systems have led to a stall in production for both movie makers and video game companies. Move.ai hopes to fix that with “markerless” motion capture that can lower the costs and hassles of doing the work.
The technology comes from a London company that started out capturing the images of athletes and turning them into digital animated objects. But the pandemic hobbled that business with the cancellation of live sports events. Luckily, games need better realism to give players total immersion and engagement in an alternate reality, and that means they need motion capture.
“We are definitely operating in a space where creativity and technology come together,” Move.ai CEO Mark Endemano said in an interview with GamesBeat. “Our journey started more in the sports space. But we always had the desire to move into video games. One event actually created a catalyst for us to move into games, and that was COVID-19. With live sports being such a challenge, we pivoted quickly and accelerated our movement into video games.”
Move.ai takes 2D video of a person recorded on an iPhone or a Samsung Galaxy phone and turns it into a 3D avatar. What’s really exciting about this, Endemano said, is that the traditional way that developers create engaging graphics is either through painstaking work by a human artist or a complicated motion capture system.
Before the pandemic, big-time game developers used motion-capture studios with banks of cameras that captured the movement of actors on a stage and composited those images into 3D representations. The cameras tracked white markers on bodysuits. The work was hard on the actors, and the motion-capture stages and systems were expensive and had to be surrounded by green screens so that the outlines of the actors were easy to distinguish.
But it worked. The actors’ representations were captured in fluid movements, making it much easier for animators to create game characters based on the captured images. Those game characters moved fluidly, making the games seem much more realistic to players. Motion capture is critical to making us believe that characters on the movie screens or game screens are real.
The pandemic effect
But during the pandemic, many of those stages shut down as developers worked from home, and motion capture could no longer be done safely. That’s where Move.ai comes in. It developed “markerless motion capture,” which captures the natural motion of a person from video and creates a representation with far less equipment.
On top of that, there’s no need for bodysuits with white balls, so actors can move more freely without getting worn out. They don’t have a full range of movement in a traditional capture suit, and that’s a problem if you’re trying to simulate combat or a martial arts scene.
You may at this point be thinking about Microsoft’s Kinect camera technology from the Xbox 360 game console, used for games like Dance Central. But Kinect didn’t capture as much data, its AI wasn’t as capable, and its hardware lacked processing power. Move.ai is like a grown-up version of Kinect: it can capture 100,000 data points on a person, compared to the 10 or 15 points that markers provide. And Move.ai says its method costs less than traditional motion capture.
“We believe that we can democratize this capability and put it in the hands of everybody from influencers doing user-generated content to triple-A game developers making big games,” Endemano said.
Move.ai uses computer vision and artificial intelligence software to examine the video of a standard camera to create animated representations. The actor is unhindered and uninhibited, resulting in a better performance. For example, they can wear any shoes they want, so that their gait isn’t affected by special shoes.
“It’s certainly easier to deliver that to animators and creators,” Endemano said. “We capture the movement and project it onto a game character.”
Much of that process is automated now, so the output can go directly into a game engine or 3D art software such as Maya. “The result is you get higher quality results faster,” Endemano said. “You don’t need a studio at all. You can do this in a park.”
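Move.ai hasn’t published the details of that pipeline, but the downstream steps it describes — capturing per-frame joint estimates from video, cleaning them up, and exporting a clip that animation software can read — can be sketched in miniature. The joint names, coordinates, and JSON clip format below are all hypothetical stand-ins, not Move.ai’s actual output:

```python
import json

# Toy stand-in for per-frame 3D joint estimates that a markerless
# system might derive from plain video. Joint names and values are
# hypothetical; a real system tracks vastly more points per person.
frames = [
    {"hip": (0.0, 1.0, 0.0), "knee": (0.0, 0.5, 0.1)},
    {"hip": (0.1, 1.0, 0.0), "knee": (0.1, 0.5, 0.2)},
    {"hip": (0.2, 1.0, 0.0), "knee": (0.2, 0.5, 0.1)},
]

def smooth(frames, window=3):
    """Moving-average smoothing to damp per-frame estimation jitter."""
    out = []
    for i in range(len(frames)):
        lo = max(0, i - window // 2)
        hi = min(len(frames), i + window // 2 + 1)
        joint_avg = {}
        for joint in frames[i]:
            pts = [f[joint] for f in frames[lo:hi]]
            joint_avg[joint] = tuple(sum(c) / len(pts) for c in zip(*pts))
        out.append(joint_avg)
    return out

def export_clip(frames, fps=30):
    """Serialize the smoothed joints as a clip an importer could read."""
    return json.dumps({"fps": fps, "frames": frames})

clip = export_clip(smooth(frames))
```

The point of the sketch is the shape of the workflow, not the math: estimation happens per frame, a filtering pass removes jitter, and the result is machine-readable animation data rather than hand-keyed artwork.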
The tech was developed by Tino Millar, the founder and chief technology officer of Move.ai. He worked on it at Imperial College London and earned a number of patents for it. With a few months of work left on his doctorate, he decided to quit to work on the company. He had originally thought it would be fun to use a camera to track his own movements while exercising.
“We’re not talking about Tetris and Space Invaders anymore,” Millar said. “Games require total immersion of characters from all different directions and angles. This presents lots of challenges. If you are playing Assassin’s Creed or God of War, for example, you need many animations of the character from so many different angles.”
He added, “We’ve shown this to animation directors, and they have said it’s mind-blowing.”
This is pretty heady stuff for a 12-person company with under $2 million in angel funding. But the company is starting to sign up clients.
The challenge will be for Move.ai to get its foot in the door at big game studios, which may or may not be working on the same kind of technology. But Move.ai sees other potential work. The technology could work for volumetric animation and virtual advertising. Film and game directors could also use the tech for previsualizing a scene and storyboarding, since the data is far less costly to produce.
“I spent many years at the university, and we’ve trained a machine learning model to understand people’s true forms in 3D,” Millar said. “We’ve amassed a big library of 3D shapes with people from scans, and then we’ve given it different viewpoints from cameras, and then we’ve trained it to be able to go from there. We wanted the camera to then be able to recreate the 3D mesh of the person.”
“As humans, we can look around and, if you just close one eye, you can tell that objects are in 3D even though you’re technically only using one eye. Technically you need the two eyes or cameras to understand something in 3D. But because of our brains, we’ve seen so many 3D objects that we can actually now close an eye and determine the form of things in 3D so effectively.”
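The idea Millar describes — recovering 3D form from a single viewpoint by leaning on a library of previously seen 3D shapes — can be illustrated with a crude nearest-neighbor toy. The two-pose “library,” the orthographic projection, and the joint coordinates below are all invented for illustration; Move.ai’s actual system is a trained machine learning model over body scans, not a lookup:

```python
import math

# Hypothetical miniature "library" of known 3D poses (x, y, z per joint),
# standing in for the large corpus of body scans Millar describes.
library = [
    [(0.0, 1.0, 0.2), (0.0, 0.5, 0.4)],  # e.g. a crouched pose
    [(0.0, 1.7, 0.0), (0.0, 0.9, 0.0)],  # e.g. an upright pose
]

def project(pose_3d):
    """Orthographic projection: what one camera sees (depth discarded)."""
    return [(x, y) for x, y, _ in pose_3d]

def lift_to_3d(observed_2d, library):
    """Recover 3D by picking the library pose whose projection best
    matches the 2D observation -- a crude analogue of a learned prior."""
    def err(pose_3d):
        return sum(math.dist(p, q)
                   for p, q in zip(project(pose_3d), observed_2d))
    return min(library, key=err)

observed = [(0.0, 1.65), (0.0, 0.95)]  # 2D joints seen by a single camera
recovered = lift_to_3d(observed, library)  # closest match: the upright pose
```

A single 2D view is ambiguous on its own; like the one-eyed viewer in Millar’s analogy, the system resolves that ambiguity because it has already “seen” enough 3D bodies to know which shapes are plausible.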
There are rivals out there that Move.ai will have to beat out to get video game clients. Fortunately, there are a lot of those clients around.
Endemano and Millar are excited about the potential for the technology in the hands of folks who aren’t programmers or artists. Influencers could take this tech and make games because it simplifies the process.
“People could create the most extraordinary stuff in their own homes,” Millar said. “The power of that shouldn’t be underestimated.”