How can I correctly align a 2D face image to preserve identity on a generated 3D human mesh?


I am generating a 3D human body mesh from user attributes (height, weight, gender, etc.) using a parametric model. The body is created correctly and proportions look accurate.

After that, I try to apply the user's real face using one or more input images. The issue is that the final 3D model does not preserve the person's identity: the face looks generic, blurred, or poorly aligned with the head mesh.

Current pipeline:

1. Generate the body mesh from parameters.
2. Detect the face region in the uploaded image.
3. Extract 2D facial landmarks.
4. Project the face image as a texture onto the head region of the 3D mesh.
5. Render the combined model.
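For context, step 4 is currently a direct projection: each head vertex is pushed through an assumed camera to get a pixel coordinate, which becomes its texture UV. A minimal sketch of that projection (all names here are hypothetical; the intrinsics `K`, pose `R`, `t`, and vertex array are placeholders, not my real values):

```python
import numpy as np

def project_vertices(vertices, K, R, t):
    """Project Nx3 mesh vertices into image pixel coordinates."""
    cam = vertices @ R.T + t          # world frame -> camera frame
    uvw = cam @ K.T                   # apply pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> Nx2 pixels

def pixels_to_uv(pixels, img_w, img_h):
    """Normalise pixel coordinates to [0, 1] texture space."""
    uv = pixels / np.array([img_w, img_h], dtype=float)
    uv[:, 1] = 1.0 - uv[:, 1]         # flip v: image rows grow downward
    return uv

# toy example: identity rotation, camera looking at a mesh 2 units away
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
verts = np.array([[0.0, 0.0, 0.0],
                  [0.1, 0.0, 0.0]])
px = project_vertices(verts, K, R, t)   # first vertex lands at (320, 240)
uv = pixels_to_uv(px, 640, 480)
```

The stretching I see presumably happens because this assigns UVs even to vertices that are occluded or nearly tangent to the camera ray, where one pixel gets smeared across a large mesh area.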

Problem:
Although the texture is applied, the face does not align correctly with the 3D geometry, and the resulting model does not resemble the input person.

Expected behavior:
The projected face should match the geometry of the head and preserve recognizable identity.

Actual behavior:
The texture appears stretched or averaged, and facial structure does not match the mesh.

Question:
From a geometric and computer-vision perspective, what is the missing step required to correctly map a 2D face image onto an existing 3D head mesh?

Is an intermediate 3D face reconstruction (estimating depth/shape from landmarks) required before texture projection, instead of directly projecting the 2D image?
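To make the question concrete: is the missing step something like fitting a camera from 2D-3D landmark correspondences before projecting? A sketch of what I mean, fitting a scaled-orthographic (affine) camera by linear least squares; the landmark arrays are synthetic placeholders, not real data:

```python
import numpy as np

def fit_affine_camera(pts3d, pts2d):
    """Least-squares fit of a 2x4 affine camera P so that
    pts2d ~= [pts3d | 1] @ P.T, from landmark correspondences."""
    n = len(pts3d)
    X = np.hstack([pts3d, np.ones((n, 1))])        # Nx4 homogeneous points
    P, *_ = np.linalg.lstsq(X, pts2d, rcond=None)  # solves for 4x2
    return P.T                                     # 2x4 camera matrix

# toy check: synthesize 2D landmarks from a known camera, then recover it
rng = np.random.default_rng(0)
pts3d = rng.normal(size=(10, 3))                   # stand-ins for 3D landmark vertices
P_true = np.array([[100.0, 0.0, 0.0, 320.0],
                   [0.0, 100.0, 0.0, 240.0]])      # scale 100, centered principal point
pts2d = np.hstack([pts3d, np.ones((10, 1))]) @ P_true.T
P_est = fit_affine_camera(pts3d, pts2d)            # recovers P_true up to noise
```

With such a fitted camera the texture projection would at least be consistent with the landmarks, but it still would not change the head geometry itself, which is why I suspect a shape-fitting step is also needed.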

I am trying to understand the correct mathematical/vision pipeline rather than looking for library suggestions.
