Common production techniques include photogrammetry, digital sculpting, simulation, and others.
Commonly used software includes 3ds Max, Maya, Photoshop, Painter, Blender, ZBrush, and photogrammetry tools.
Commonly used game platforms include mobile phones (Android, iOS), PC (Steam and other storefronts), consoles (Xbox, PS4/PS5, Switch, etc.), handhelds, and cloud gaming.
The distance between an object and the human eye can, in a sense, be described as "depth." From the depth of each point on an object we can infer its geometry, while the photoreceptor cells on the retina supply its color. 3D scanning devices (whether single-scanner or multi-scanner array setups) work in much the same way: they collect depth information from the object to generate a point cloud, the set of vertices produced when the scanner samples the model. The primary attribute of each point is its position. Connecting these points yields triangle faces, the basic building block of a 3D model in the computer. The collection of vertices and triangle faces is the mesh, and it is the mesh that the computer renders as a three-dimensional object.
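The vertex-and-face structure described above can be sketched in a few lines of code. This is a minimal illustration, not the article's actual pipeline; the toy data and the `face_area` helper are purely hypothetical.

```python
# A point cloud is just a set of vertex positions captured by the scanner.
point_cloud = [
    (0.0, 0.0, 0.0),  # each tuple is an (x, y, z) position
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (1.0, 1.0, 0.0),
]

# Triangulation connects those points into triangle faces,
# each face being a triple of indices into the vertex list.
faces = [
    (0, 1, 2),
    (1, 3, 2),
]

# Vertices plus faces together form the mesh the renderer draws.
mesh = {"vertices": point_cloud, "faces": faces}

def face_area(mesh, face_index):
    """Area of one triangle face, via the cross-product formula."""
    a, b, c = (mesh["vertices"][i] for i in mesh["faces"][face_index])
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    cross = [
        ab[1] * ac[2] - ab[2] * ac[1],
        ab[2] * ac[0] - ab[0] * ac[2],
        ab[0] * ac[1] - ab[1] * ac[0],
    ]
    return 0.5 * sum(v * v for v in cross) ** 0.5

print(face_area(mesh, 0))  # each triangle covers half the unit square: 0.5
```

Real scan data differs only in scale: millions of points instead of four, with triangulation performed automatically by the reconstruction software.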
Texture refers to the pattern on the surface of the model, i.e., its color information; in game art this is typically the diffuse map. Textures are stored as 2D image files in which each texel is addressed by U and V coordinates and carries the corresponding color. The process of wrapping a texture onto a mesh is called UV mapping, or texture mapping. Adding this color information to the 3D model yields the final asset we want.
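To make the UV idea concrete, here is a toy nearest-neighbor texture lookup: UV coordinates in [0, 1] are scaled into the image and the closest texel's color is returned. The 2×2 texture and the `sample_nearest` function are illustrative assumptions, not part of the article's toolchain.

```python
# A texture as a 2D grid of RGB tuples (a real one would be a large image file).
texture = [
    [(255, 0, 0), (0, 255, 0)],    # row 0: red, green
    [(0, 0, 255), (255, 255, 0)],  # row 1: blue, yellow
]

def sample_nearest(texture, u, v):
    """Map UV coordinates in [0, 1] to the nearest texel's color."""
    height = len(texture)
    width = len(texture[0])
    # Scale UV into pixel space and clamp to the valid index range.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

print(sample_nearest(texture, 0.1, 0.1))  # top-left texel: (255, 0, 0)
print(sample_nearest(texture, 0.9, 0.9))  # bottom-right texel: (255, 255, 0)
```

Renderers normally use bilinear or trilinear filtering instead of nearest-neighbor, but the UV-to-color mapping is the same principle.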
A DSLR camera array forms the core of our 3D scanning device: a 24-sided cylindrical rig that mounts the cameras and light sources. A total of 48 Canon cameras were installed for the best acquisition results, along with 84 light panels of 64 LEDs each, 5,376 LEDs in all. Each panel acts as an area light of uniform brightness, giving the scanned subject a more even exposure.
In addition, to improve the photogrammetry results, we added a polarizing film to each light panel and a polarizer to each camera lens; this cross-polarization suppresses specular highlights, so the captured texture contains mostly diffuse color.
After obtaining the automatically generated 3D data, we still need to import the model into the traditional sculpting tool ZBrush for slight adjustments and to remove imperfections such as eyebrows and hair (hair-like assets are produced by other means).
The topology and UVs also need to be adjusted for better results when animating facial expressions. The left picture below shows the automatically generated topology, which is messy and irregular; the right side shows the adjusted topology, whose edge flow is much better suited to facial-expression animation.
Adjusting the UVs likewise lets us bake cleaner, more readable texture maps. In the future, both of these steps could potentially be automated with AI.
With 3D scan-based modeling, the pore-level-precision model shown below takes two days or less to produce. Built the traditional way, a model of this realism would conservatively take a very experienced modeler a month to complete.
Obtaining a CG character model quickly and easily is no longer a difficult task; the next step is making the character move. Humans have evolved to be extremely sensitive to the expressions of their own kind, and character facial expressions have always been a hard problem, whether in games or in film CG.