Cyber Characters That Can Move It

If you are developing characters for a film, television, or game project, you will certainly want them to move, and even more, you want them to come off with personality, punch, and pizzazz. Whether the character is dancing, fighting, or just being a zombie, the audience will only believe in the character if its body language shows it.


There is a level of expectation when it comes to people because we all know how people are supposed to interact and move. The most reliable way to measure movement is to record it and emulate it. The real test of human movement is in the face, and that is where the most focused attention is needed: we have to stay out of the uncanny valley when a character holds a conversation with real actors. Creating an avatar using motion capture helps it move with the same likeness as its source. At some point in the future we could even mix our cyber characters with real actors and not know the difference.


When this is mastered, we will be headed toward all-CG or digital entertainment programming that looks like the real thing. It is an exciting time in the industry as digital elements permeate the screen. With more and more effects added seamlessly, the audience is left wondering, “How did they do that?”


We still need to master the clone in appearance and performance, in both body language and facial performance, and in doing so become the puppeteers of future programming.

Where is your closest location?

tng vfx locations

We welcome your feedback! Comment on our blog or visit our Facebook page.

Click here to see our list of credits on IMDb.

Inertial Based Mocap Vs Optical Based Mocap

What is motion capture? Motion capture (mocap for short) is the use of technology to capture the intricate motions of an actor or actress for use in film, television, or video games.


Traditional hand-keyed animation is a slow and tedious process. You can go through countless iterations over the course of several months to get the right movement. With motion capture you can literally take the keying into your own hands by acting it out yourself, or by having an actor do it. This allows a director to go through as many takes as they need in a single day of shooting. Depending on the number of CG characters needed, an entire show could be shot in just a few days. You can also continue working on the data to create the exact animation you want for a character. For a film, using pre-vis with motion capture can help set up shots well in advance, saving time in production.


Inertial-based motion capture can go anywhere; space is not a deciding factor in where and when you shoot. With optical-based motion capture systems, you are generally limited to a building with optical cameras already set up. Bringing an optical system to a set is time-consuming and costly, and requires a lot of prep time. Optical systems also have pitfalls like occlusion. Occlusion happens when two markers come into close proximity, preventing the camera from seeing both markers; this can happen simply by crossing your arms during a shoot. With inertial-based motion capture, there are no cameras and no line-of-sight issues, and everything can be up and running in 30 minutes. The sensors track movement wirelessly, much like Wi-Fi remotes, and everything is worn on the body: 17 sensors covering 15 joints. One thing to note: inertial-based motion capture systems need to be used in an environment with little to no metal.
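To make the occlusion pitfall concrete, here is a minimal sketch, in Python, of how an optical system can flag markers that drift close enough together to risk merging in a camera's view. The marker positions, labels, and distance threshold are all hypothetical, not values from any specific optical system.

```python
import numpy as np

# Hypothetical marker positions for one frame, shape (n_markers, 3), in cm.
markers = np.array([
    [0.0, 150.0, 0.0],   # left wrist
    [2.0, 151.0, 0.0],   # right wrist (arms crossed: very close)
    [0.0, 170.0, 10.0],  # chest
])

OCCLUSION_RISK_CM = 4.0  # markers closer than this may merge in camera view

def occlusion_risk_pairs(points, threshold):
    """Return index pairs of markers close enough to risk occlusion."""
    pairs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if np.linalg.norm(points[i] - points[j]) < threshold:
                pairs.append((i, j))
    return pairs

print(occlusion_risk_pairs(markers, OCCLUSION_RISK_CM))  # → [(0, 1)]
```

The crossed wrists (markers 0 and 1) are about 2.2 cm apart, so they are flagged; the chest marker is well clear of both.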


Let the Inertia System Move You

We’ve been working with an inertial-based motion capture (mocap) system called Perception. It is capable of providing accurate motion capture data for single or multiple characters. Perception is compact and wireless, freeing you from needing a building or large space with hundreds of lights and a team of people directing a performance. An optical system is an alternative mocap system, but there may not always be a need for that level of attention, stage size, and expense.

Perception doesn’t have a long calibration or setup time, and you don’t have to worry about occlusion. You can also use it outside, which allows even greater freedom of movement and capture.

One of the most important things for us is the ability to bring our 3D models to life after we scan them. There are a number of ways to make your character move, from hand-keyed animation to several different types of mocap.

When choosing a mocap system, look at what is available. The systems are now very specialized, so find out both what you need and the budget you have for your project. Also factor in how long you have to complete the work and meet your project deadlines.

Good luck with your productions, and as the saying goes, “The mind is like a parachute; it works best when open.”


Artec Interviews TNG: Eva and Spider: Bringing you mind-blowing visuals

Artec scanners help us deliver excellent, high-quality 3D models to our clients. Our founder, Nick Tesi, sat down with Artec Group Inc. to discuss the process behind 3D scanning and the big part it plays in the entertainment industry.

Read the full article here.


TNG adds Motion Capture (MOCAP) to their Services

As a 3D scanning company that primarily scans human bodies and heads, branching out into motion capture was a natural progression for us. This technology brings static objects to life. A 3D scan captures the surface of a person’s skin and clothing. Once we’ve put together the scan, we unwrap the UVs and remesh it to prep it for texturing. After this step we may render the model, but the next step is to insert joints (a skeleton) so that there is something to drive the skin of the character into movement. This process is called rigging.
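As a rough illustration of what “inserting joints” means, here is a minimal Python sketch of a skeleton as a joint hierarchy. The joint names, offsets, and translation-only math are hypothetical simplifications for illustration, not TNG's actual rig.

```python
import numpy as np

class Joint:
    def __init__(self, name, parent=None, local_offset=(0.0, 0.0, 0.0)):
        self.name = name
        self.parent = parent
        self.local_offset = np.array(local_offset, dtype=float)

    def world_position(self):
        # Walk up the hierarchy, accumulating offsets. A real rig would
        # also compose each joint's rotation, not just its translation.
        if self.parent is None:
            return self.local_offset.copy()
        return self.parent.world_position() + self.local_offset

hips = Joint("hips", local_offset=(0.0, 100.0, 0.0))  # root of the skeleton
spine = Joint("spine", parent=hips, local_offset=(0.0, 20.0, 0.0))
head = Joint("head", parent=spine, local_offset=(0.0, 40.0, 0.0))

print(head.world_position())  # head height is 160: 100 + 20 + 40
```

Moving the hips moves everything below them in the hierarchy, which is exactly why a skeleton can drive a whole character from a handful of joints.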

Melika-Mocap

Once our skeleton is sitting nicely within our CG digital double (the 3D-scanned human), a process called weighting takes place. Weighting determines how much each joint drives the skin near it, and smooth, dialed-in weights create lifelike movement when the character is animated. To animate these joints without having to grab the joints themselves, a GUI (graphical user interface) is created and connected to the joints via orientation constraints, which lets an animator work more easily and intuitively. After animation, the video is rendered frame by frame on a network of computers called a render farm.

The mocap system we are using is Noitom’s Perception. It is fully wireless, fast to set up, and easy to use. The first step is to have the talent strap on the sensors. We calibrate them to the system and then analyze the talent’s gait (walking style). Since posture varies from person to person, you can adjust for imperfections such as a torso that leans forward or backward, or a heel strike that is heavier or weaker than usual. These features let us essentially pre-process the animation before it is even recorded, which results in human-looking animation despite any real-life incongruities. Once the animation is recorded, we post-process it in Noitom’s proprietary software, focusing on each moment the character’s foot makes and loses contact with the floor to make the motion more realistic and believable. This process is quite easy, and quite fascinating. After this, the animation is taken into software like MotionBuilder, where further post-processing takes place along with binding the animation to a character. Finally, you can export your realistic human motion capture data with your CG model and render it in any software you’d like.
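The foot-contact cleanup step can be sketched as flagging frames where the foot is both low and nearly stationary, so those frames can then be pinned to kill foot slide. The thresholds and sample data below are illustrative only, not Noitom's actual algorithm.

```python
import numpy as np

def contact_frames(foot_height, foot_speed, height_thresh=2.0, speed_thresh=0.5):
    """Return a boolean mask of frames where the foot is in floor contact."""
    foot_height = np.asarray(foot_height, dtype=float)
    foot_speed = np.asarray(foot_speed, dtype=float)
    return (foot_height < height_thresh) & (foot_speed < speed_thresh)

# Hypothetical heel height (cm) and heel speed (cm/frame) over six frames.
heights = [8.0, 3.0, 1.0, 1.2, 1.1, 6.0]
speeds  = [4.0, 2.0, 0.2, 0.1, 0.3, 3.0]

print(contact_frames(heights, speeds))  # frames 2 through 4 are contact frames
```

Once the contact frames are known, the cleanup pass can lock the foot's position over each contact run so the character never appears to skate across the floor.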
