To morph two faces, we need to define corresponding points on the two images. I selected the points by hand with ginput. I chose portraits of Zinedine Zidane and George Clooney by Martin Schoeller, and resized both by 50 percent for faster computation.
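The selection step looks roughly like this (a minimal sketch; the file names, the number of points, and the rescale call are placeholders, not my exact code):

```python
import matplotlib.pyplot as plt
import numpy as np
import skimage.io as skio
import skimage.transform as sktr

n_points = 40  # placeholder: however many correspondences you want to click

# Load both portraits and downscale by 50% for faster computation.
im1 = sktr.rescale(skio.imread("zidane.jpg"), 0.5, channel_axis=-1)
im2 = sktr.rescale(skio.imread("clooney.jpg"), 0.5, channel_axis=-1)

pts = []
for im in (im1, im2):
    plt.imshow(im)
    plt.title(f"Click {n_points} corresponding points")
    pts.append(np.array(plt.ginput(n_points, timeout=0)))  # list of (x, y) clicks
    plt.close()

pts1, pts2 = pts
np.save("pts1.npy", pts1)
np.save("pts2.npy", pts2)
```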
Here is the triangulation of the mean shape shown on both photos. I also added four points at the corners so that the whole image gets warped, not just the face.
In this section, I computed the "midway face" by morphing between two images. The first step was to define corresponding points on both faces and compute the average points. Using these average points, I calculated a Delaunay triangulation to ensure the facial structures align correctly. For each triangle, I applied an inverse affine transformation to warp both images into the average shape. Then, I averaged the pixel values from both images to generate the final midway face.
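In outline, the shape averaging and per-triangle transform can be set up like this (a minimal sketch; it assumes the correspondences are stored as N x 2 arrays of (x, y) coordinates named pts1 and pts2):

```python
import numpy as np
from scipy.spatial import Delaunay

def affine_matrix(tri_from, tri_to):
    """3x3 affine matrix mapping the triangle tri_from (3x2) onto tri_to (3x2)."""
    A = np.hstack([tri_from, np.ones((3, 1))])  # source vertices, homogeneous
    B = np.hstack([tri_to, np.ones((3, 1))])    # destination vertices, homogeneous
    # Three vertex correspondences determine the affine map exactly: A @ M.T = B.
    return np.linalg.solve(A, B).T

avg_pts = (pts1 + pts2) / 2.0   # average shape
tri = Delaunay(avg_pts)         # triangulation computed on the average points
```

The same triangle connectivity (tri.simplices) is reused for both images, so corresponding triangles always cover the same facial region.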
In my naive implementation, thin white lines appeared along triangle edges in the warped image. I believe they came from overlapping triangle borders: pixels on a shared edge were accumulated more than once, pushing their values toward white. To fix this, I accumulated a weight mask alongside the image and divided by it at the end. I also switched from griddata to map_coordinates for pixel sampling because it was much faster.
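A sketch of the inverse warp with the weight-mask fix (it reuses affine_matrix from the sketch above; the array layouts and argument order are assumptions, not my exact code):

```python
import numpy as np
from scipy.ndimage import map_coordinates
from skimage.draw import polygon

def warp_image(im, src_pts, dst_pts, tri, out_shape):
    """Inverse-warp im from src_pts geometry into dst_pts geometry."""
    out = np.zeros(out_shape)
    weight = np.zeros(out_shape[:2])
    for simplex in tri.simplices:
        # Affine map from the destination triangle back into the source image.
        M = affine_matrix(dst_pts[simplex], src_pts[simplex])
        # All pixels inside the destination triangle (rows, then columns).
        rr, cc = polygon(dst_pts[simplex, 1], dst_pts[simplex, 0], out_shape[:2])
        src = M @ np.vstack([cc, rr, np.ones_like(rr)])  # (x, y, 1) back to source coords
        for ch in range(out_shape[2]):
            out[rr, cc, ch] += map_coordinates(im[..., ch], [src[1], src[0]], order=1)
        weight[rr, cc] += 1
    # Dividing by the accumulated weights removes the bright seams on shared edges.
    return out / np.maximum(weight, 1)[..., None]
```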
I think I got a better result when morphing George into Bradley Cooper. In the George and Zidane morph, the noses don't align well; Zidane's nose has a different shape from Clooney's, and I probably should have added more keypoints around it for better alignment.
To create a morph sequence, I essentially reused the code from part 2 and introduced two parameters, warp_frac and dissolve_frac, to control shape warping and cross-dissolving. These parameters allowed for smooth transitions between the two faces: warp_frac adjusts the geometric warping, while dissolve_frac controls the blending between the two warped images. The following GIF demonstrates the morphing sequence at 30 fps.
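The frame loop is roughly the following (a sketch; warp_image, pts1/pts2, and tri come from the earlier sketches, and the frame count is a placeholder):

```python
import numpy as np
import imageio.v2 as imageio

n_frames = 60
frames = []
for t in np.linspace(0, 1, n_frames):
    warp_frac, dissolve_frac = t, t
    # Intermediate shape between the two point sets.
    mid_pts = (1 - warp_frac) * pts1 + warp_frac * pts2
    # Warp both images into the intermediate shape, then cross-dissolve.
    warped1 = warp_image(im1, pts1, mid_pts, tri, im1.shape)
    warped2 = warp_image(im2, pts2, mid_pts, tri, im2.shape)
    frame = (1 - dissolve_frac) * warped1 + dissolve_frac * warped2
    frames.append((np.clip(frame, 0, 1) * 255).astype(np.uint8))

# Newer imageio versions may prefer duration= instead of fps=.
imageio.mimsave("morph.gif", frames, fps=30)
```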
For this part, I used the Danes dataset. I again added four points at the corners so that the final result covers the whole image and not just a face on a black background. The only modification to my code was a function that reads points from a .asf file; I then looped over every picture in the dataset.
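The reader can look roughly like this (a sketch that assumes the usual .asf layout, with the relative x and y coordinates in the third and fourth fields of each point line; the exact field positions should be checked against the dataset documentation):

```python
import numpy as np

def read_asf_points(path, img_w, img_h):
    """Read keypoints from a Danes .asf file and append the four image corners."""
    pts = []
    with open(path) as f:
        for line in f:
            fields = line.strip().split()
            # Point lines have 7 fields; comment and header lines do not.
            if len(fields) == 7 and not line.startswith('#'):
                x_rel, y_rel = float(fields[2]), float(fields[3])
                pts.append([x_rel * img_w, y_rel * img_h])
    pts = np.array(pts)
    # Four extra corner points so the warp covers the whole image.
    corners = np.array([[0, 0], [img_w - 1, 0], [0, img_h - 1], [img_w - 1, img_h - 1]])
    return np.vstack([pts, corners])
```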
Here are my face warped to the mean face shape and the mean face warped to my face shape.
I produced a caricature of my face by extrapolating from the Danes' mean face. This is done by adding alpha * (my_keypoints_resized - average_shape) to my face shape. I used alpha = 1 for this image.
Here I used alpha = 0.5.
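In code the extrapolation is just the following (a sketch; the variable names match the formula above, and warp_image and tri come from the earlier sketches):

```python
alpha = 1.0  # alpha = 0.5 for the second image
# Exaggerate how my shape differs from the Danes mean shape.
caricature_shape = my_keypoints_resized + alpha * (my_keypoints_resized - average_shape)
caricature = warp_image(my_face, my_keypoints_resized, caricature_shape, tri, my_face.shape)
```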
I implemented auto-labeling of keypoints using the Mediapipe library by Google. I referenced their guide on face feature detection here: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md. I used stock images for a better example.
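The detection step looks roughly like this (a sketch based on the linked Face Mesh guide; the option values here are assumptions, not necessarily my exact settings):

```python
import cv2
import mediapipe as mp
import numpy as np

mp_face_mesh = mp.solutions.face_mesh

def detect_keypoints(img_bgr):
    """Return an N x 2 array of (x, y) face keypoints in pixel coordinates."""
    h, w = img_bgr.shape[:2]
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1,
                               refine_landmarks=True) as face_mesh:
        results = face_mesh.process(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB))
    landmarks = results.multi_face_landmarks[0].landmark
    # Landmarks are normalized to [0, 1]; scale them to pixel coordinates.
    return np.array([[lm.x * w, lm.y * h] for lm in landmarks])
```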
I modified the code to save the frame at the halfway point of the morph GIF, so I could get my face morphed halfway into the average female face from the Danes dataset (warp_frac = 0.5, dissolve_frac = 0.5).