Lidar Scanning


Using Lidar Technology

In recent years, Lidar has become one of the most widely used scanning technologies and a necessity for many industries. Lidar is typically used to scan large areas or environments, buildings, and vehicles. It has been used for mapping cities and for agriculture; however, our main use has been for visual effects purposes.



When we use the FARO scanner on a job, the data captured can be used for previs and then again as a background for a set. The scanner can either be mounted on a tripod to scan a large area or object, or attached to a vehicle or helicopter to collect the data. Some units can now capture point clouds at a much higher rate than past units. If the job is in a more remote area, we make sure to bring along a portable generator for the scanner. One thing to note is the need for a collection of SD cards to store the scans on.
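As a rough illustration of what happens to one of those scans once it comes off the cards, the hypothetical snippet below loads a single exported scan with the open-source Open3D library and thins it down to a previs-friendly density. The filename, the 2 cm voxel size, and the use of Open3D are assumptions made for the example, not a description of FARO's own software.

```python
# Hypothetical example using the open-source Open3D library (not FARO's own
# software). Filenames and the 2 cm voxel size are placeholders.
import open3d as o3d

# One exported scan position, straight off the SD card.
scan = o3d.io.read_point_cloud("scan_001.ply")
print("raw points:", len(scan.points))

# Voxel downsampling keeps one representative point per 2 cm cell, which is
# usually plenty of density for a previs background.
previs_cloud = scan.voxel_down_sample(voxel_size=0.02)
print("previs points:", len(previs_cloud.points))

o3d.io.write_point_cloud("scan_001_previs.ply", previs_cloud)
```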

Lidar scanning on site: a battleship

Whether you are scanning in a studio or at a baseball stadium for a commercial, planning is paramount to getting the best results in the shortest period of time. You observe your surroundings, consider the timeframe you have been allotted and are willing to spend on location, and think about how many scans you will need and where to position them to optimally cover everything vital.


There are some tricks to aligning the point cloud Lidar data. For example, the bulbs in street lights can be used as markers, or, for that matter, anything of a consistent size that appears throughout the scene. Another approach to alignment is to use printed targets that are visible from each scanner position, with enough overlap between scans. Line of sight is very important, as is watching conditions such as wind and anything else that can move in the scene during the capture period.
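The scans themselves are aligned in dedicated scan-processing software, but the same idea can be sketched generically: start from a rough placement based on the shared markers or targets, then refine the fit with ICP. The snippet below is a hypothetical Open3D example; the filenames, the 5 cm search radius, and the identity initial guess are all placeholders.

```python
# Hypothetical Open3D (0.10+) sketch; filenames, the search radius, and the
# identity initial guess stand in for a rough marker-based placement.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_position_1.ply")
target = o3d.io.read_point_cloud("scan_position_2.ply")

rough_guess = np.eye(4)  # would normally come from the shared targets/markers

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,
    init=rough_guess,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

print("fitness:", result.fitness)        # fraction of points that found a match
source.transform(result.transformation)  # apply the refined alignment
```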

 


TNG adds Motion Capture (MOCAP) to their Services

As a 3D scanning company that primarily scans human bodies and heads, it was only a natural progression for us to branch out into motion capture. This technology brings static, dormant objects to life. A 3D scan captures the surface of a person's skin and clothing. Once we've put together the scan, we unwrap the UVs and remesh it to prep it for texturing. After this step we could render the model as-is, but the next step is to insert joints (a skeleton) so that there is something to drive the skin of the character into movement. This process is called rigging.
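As a minimal sketch of that rigging step, the hypothetical Maya Python snippet below builds a tiny joint chain and binds it to a stand-in mesh; in practice the mesh would be the cleaned-up scan and the skeleton would have far more joints. All of the names here are placeholders rather than our production setup.

```python
# Hypothetical Maya Python sketch: a stand-in cylinder plays the role of the
# cleaned-up scan, and a three-joint chain plays the role of the skeleton.
import maya.cmds as cmds

body = cmds.polyCylinder(name='scan_body', height=10, radius=1)[0]

# Build a simple spine chain: hips -> spine -> chest.
cmds.select(clear=True)
hips = cmds.joint(name='hips_jnt', position=(0, -3, 0))
spine = cmds.joint(name='spine_jnt', position=(0, 0, 0))
chest = cmds.joint(name='chest_jnt', position=(0, 3, 0))

# Bind the mesh to the joints so the skeleton can drive the "skin".
skin = cmds.skinCluster([hips, spine, chest], body,
                        toSelectedBones=True, name='body_skinCluster')[0]
print('bound %s with %s' % (body, skin))
```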


Once our skeleton is sitting nicely within our CG digital double (the 3D scanned human), a process called weighting takes place. Weighting determines how much each joint drives the skin around it, and smooth, dialed-in weights produce lifelike movement when the character is animated. To animate these joints without having to grab the joints themselves, a GUI (graphical user interface) of controls is created and connected to the joints via orientation constraints. This allows an animator to animate a 3D character more easily and intuitively. After animation, the video is rendered frame by frame on a network of computers called a render farm.
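Continuing the toy rig from the sketch above, the hypothetical snippet below shows both ideas in miniature: nudging skin weights on a few vertices and wiring a control curve to a joint with an orientation constraint. The node names are placeholders, not our production naming.

```python
# Continues the hypothetical rig above; names like 'hips_ctrl' are placeholders.
import maya.cmds as cmds

# Nudge the weights on a band of vertices so the hips and spine share influence.
cmds.skinPercent('body_skinCluster', 'scan_body.vtx[0:19]',
                 transformValue=[('hips_jnt', 0.7), ('spine_jnt', 0.3)])

# Create a NURBS-circle control, snap it to the joint, freeze it, and let it
# drive the joint's rotation through an orientation constraint.
ctrl = cmds.circle(name='hips_ctrl', normal=(0, 1, 0), radius=2)[0]
cmds.delete(cmds.pointConstraint('hips_jnt', ctrl))
cmds.makeIdentity(ctrl, apply=True, translate=True, rotate=True, scale=True)
cmds.orientConstraint(ctrl, 'hips_jnt', maintainOffset=True)
```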

The MoCap system we are using is Noitom's Perception. It is fully wireless, fast to set up, and easy to use. The first step is to have the talent strap on the sensors. We calibrate them to the system and then analyze the talent's gait (walking style). Since a person's posture varies, you can adjust for any imperfections, such as how far the torso leans forward or backward and how heavy or light the heel strike is. These features let us essentially pre-process the animation before it is even recorded, which results in human-looking animation despite any real-life incongruities. Once the animation is recorded, we post-process it in Noitom's proprietary software, focusing on each time the character's foot makes and loses contact with the floor to make the motion more realistic and believable. This process is quite easy, and quite fascinating. After this, the animation is taken into software such as MotionBuilder, where further post-processing takes place and the animation is bound to a character; finally, you can export your realistic human motion capture data along with your CG model and render it in any software you'd like.
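Noitom's post-processing tools are proprietary, so the sketch below only illustrates the underlying idea of the foot-contact pass: look at a foot's height curve over the take and flag the frames where it is effectively on the floor. The frame rate, tolerance, and synthetic heel curve are all invented for the example.

```python
# Illustration only: Noitom's actual tools are proprietary. The frame rate,
# floor tolerance, and synthetic heel curve below are invented for the example.
import numpy as np

FPS = 120
FLOOR_TOLERANCE = 1.0  # cm above the floor that still counts as a contact

# Synthetic heel height for a two-second take, standing in for an exported curve.
t = np.linspace(0.0, 2.0, 2 * FPS)
heel_y = np.clip(8.0 * np.sin(2.0 * np.pi * t), 0.0, None)

# Frames where the heel sits within tolerance of the floor count as contacts.
in_contact = heel_y <= FLOOR_TOLERANCE

# Frames where the contact state flips are where the foot plants or lifts off.
changes = np.flatnonzero(np.diff(in_contact.astype(int))) + 1
print('foot plants/lift-offs at frames:', changes)
```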

 


Creating a CG Car for a Commercial, Film Prop, or Video Game

The process of creating computer-generated imagery (CGI) involves both hardware and software to produce a quality product. A 3D scanner is first used to capture the model, and then the raw data is processed in 3D modeling software such as Autodesk Maya, or exported from AutoCAD. The high-resolution data from the scan or CAD file is then optimized for CGI.

Let's say the body of a vehicle is scanned and the scan or CAD data comes in at more than 2 million polygons. It then needs to be retopologized to a much lower resolution. A reasonable maximum for any vehicle is between 100,000 and 200,000 polygons, depending on how close the vehicle gets to the camera, since the required polygon density varies with shot distance. Wherever the scan data has holes, the geometry needs to be cleaned up and/or retopologized. One tool we use for this (others are available) is ZBrush. When CAD data is used, inner parts that are invisible to the camera need to be deleted to make the file lighter.
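A quick way to keep an eye on that budget is to script the check. The hypothetical Maya Python snippet below counts faces on every mesh and runs a rough automatic reduction on anything over the limit; a proper retopology pass in ZBrush would replace the automatic step.

```python
# Hypothetical Maya Python budget check; the 200k limit mirrors the range above.
import maya.cmds as cmds

POLY_BUDGET = 200000  # upper end of the 100k-200k range for a hero vehicle

for mesh in cmds.ls(type='mesh', noIntermediate=True, long=True):
    faces = cmds.polyEvaluate(mesh, face=True)
    if faces > POLY_BUDGET:
        print('%s: %d faces, over budget' % (mesh, faces))
        # polyReduce is a blunt stand-in for a manual ZBrush retopology, but it
        # gives a quick preview of a lighter version of the mesh.
        cmds.polyReduce(mesh, version=1, percentage=50.0)
```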

In the data prep stage we also start to create UV space for the geometry. If materials such as fabrics are hand painted, there should be a greater focus on getting even UV space so that no stretching occurs. We use Headus UVLayout for creating the UVs. After the base model is done, we make sure to check the normals and look for non-manifold geometry. We then export the data to Maya to polish the model.
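Those final checks can also be scripted. The sketch below is again hypothetical Maya Python with a placeholder mesh name: it reports non-manifold edges and conforms the face normals.

```python
# Hypothetical Maya Python checks; 'carBody_geo' is a placeholder mesh name.
import maya.cmds as cmds

mesh = 'carBody_geo'
cmds.select(mesh)

# List any non-manifold edges so they can be repaired before export.
bad_edges = cmds.polyInfo(nonManifoldEdges=True) or []
print('non-manifold edges found:', len(bad_edges))

# Conform the face normals so the whole body faces the same way (mode 2 = conform).
cmds.polyNormal(mesh, normalMode=2, ch=False)
```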

Scanning the exterior of a vehicle takes approximately a week or so, depending on the detail. If the interior also needs to be scanned, add an additional week or more depending on the level of detail and coverage needed.

Toyota Interior

The product is normally delivered with UVs and textures/shaders, but can be customized to our customers' specific needs.


TNG is Growing

TNG opened its Canadian corporation in Vancouver and is now expanding into Toronto and Montreal. Vancouver has been referred to as "Hollywood North" since the late 1970s, a reference to its standing as the third-largest movie production center after Los Angeles and New York, along with being in the same time zone as Los Angeles and having the infrastructure in place to handle production projects. The title has also been applied to Toronto and to the Canadian film industry as a whole, which offers extremely versatile landscapes for production studios to take advantage of, attractive tax credits, and highly skilled labor in all aspects of production.

USA and Canada

TNG is headquartered in Los Angeles, and along with its move into the major Canadian cities, it has a location in Louisiana and has been working on projects in New York. These efforts give TNG the ability to service the production industry and its major hubs across North America while ensuring tax credits for production houses in those areas.

 


What is 3D Scanning?

3D scanning is becoming a favored service in several industries, such as entertainment, medical, crime scene investigation, and merchandising. It can be thought of as a type of photography in which you capture a person, object, or scene in 3D, allowing it to be rotated a full 360 degrees. You can also projection-map photos onto whatever you have 3D scanned, creating a true-to-life, in-color 3D replica that can be inserted and composited into any form of video or computer-generated (CG) render.

Lidar Scanned Charger

To scan an environment, a type of 3D scanner is used that emits lasers for measurement and 3D reconstruction. This is called Lidar, and it captures what is known as a point cloud of 3D data. Accurate measurements of the real-life scene carry over into the 3D data, preserving correct proportions and allowing an entire environment to be converted to 3D. This data can be used for many things and allows for the use of 3D cameras, which are not under any real-world constraints.
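Because the scanner measures in real-world units, dimensions can be read straight out of the point cloud. The toy numpy example below uses synthetic points standing in for a scan, but the extent calculation is the same on real data.

```python
# Toy numpy example: synthetic points stand in for a scan measured in meters.
import numpy as np

points = np.random.default_rng(0).uniform(
    low=[0.0, 0.0, 0.0], high=[12.0, 3.5, 8.0], size=(10000, 3))

extents = points.max(axis=0) - points.min(axis=0)
print('scene dimensions (m): %.2f x %.2f x %.2f' % tuple(extents))
```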

Lidar scan of an old building

It's becoming more common for the main cast of a film or episodic series to be 3D scanned for visual effects purposes, including stunt work and digital manipulation through the use of 3D shaders and texture maps. Just as green screens are used to swap out the background on a set and allow a location to be anywhere, a 3D scanned object can be inserted into any location.

For 3D scanning humans, white light technology has many advantages. It is safe, fast, and transportable while still capturing dense, accurate data. An alternative is photogrammetry, in which DSLR cameras capture the data and software with complex algorithms interprets the photos from 2D to 3D, using a wide enough variety of angles for coverage to determine positions in X, Y, Z space.
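At the heart of photogrammetry is triangulation: the same feature seen from two known camera positions pins down a point in X, Y, Z. The snippet below is a textbook linear triangulation in numpy, assuming the camera projection matrices are already known; real photogrammetry software solves for those as well.

```python
# Textbook linear (DLT) triangulation in numpy; assumes the 3x4 camera
# projection matrices P1 and P2 are already known.
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover one 3D point from its (u, v) pixel positions in two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean X, Y, Z

# Tiny demo: two hypothetical cameras one meter apart along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point(P1, P2, x1, x2))  # ~ [0.2, 0.1, 4.0]
```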


Regardless of the type of 3D scanning used, there will be post-processing to clean up the data and fit a quad-based mesh over the 3D scan data. This makes the data more flexible, allowing it to be sculpted, given UVs, and then textured. At this point, the 3D object or person can be rigged (digital joints/bones are inserted) and then animated, either like a CG cartoon with plenty of squash and stretch, or like real-world characters like you and me.

Ultimately the result is a 3D object that is a replica of its real-life counterpart. It can be used in a CG scene or composited into a live-action scene. The viewer gets to experience the performance and may not even notice when the 3D effect is on screen. The craft is becoming more seamless and allows for an array of creative choices that at one time weren't even possible.
