Introduction

Objective

We are building a set of capabilities to replace the manual work required of researchers and experimenters who want to understand the facial characteristics of their subject pool. Characteristics such as skin tone and facial measurements (e.g., facial width-to-height ratio) can be used to better understand perceived biases toward subjects. Statistical studies typically rely on self-reported metrics. While self-reported metrics are useful for reflecting a subject's self-identity, they are less useful for systematically reflecting how the world sees that individual. We see a relevant use case in Schniter and Shields (2020), which examines the pairwise interactions between subjects in a repeated prisoner's dilemma game; in that setting, it is useful to understand how each subject is perceived by their partner.

Dataset

The dataset used here was collected by Schniter and Shields (2020) and is comprised of portraits captured manually from video frames. The skin tone and facial width-to-height ratio (fWHR) values used as ground truth were graciously contributed by Eric Schniter, who collected them using Photoshop.

Citations

1. Soellinger, A., Schniter, E. (2020). "Training a UNet for Accurate Facial Attribute Profiling". https://blog.prcvd.ai/research/perception/image/2020/10/12/Training-a-Face-Segmentation-Model-for-Automatic-Skin-Tone-Detection.html.

2. Schniter, E., Shields, T. (2020). "Participant Faces From a Repeated Prisoner’s Dilemma". Unpublished raw data.

Code

%matplotlib inline

from pathlib import Path

import matplotlib.pyplot as plt
import matplotlib.patches as patches
import pandas as pd
import numpy as np
import torch  # used below when constructing reference points from centroids

import skimage
from skimage import color

from prcvd.img import (
    compute_theta, apply_rgb2lab, calc_euclidean_error, MaskedImg,
    LabelMask, FaceMask, MeshMask, TrainedSegmentationModel
)
from prcvd.core import IndexerDict

## shared with training
def get_y_fn(fp):
    """Maps an image path to its integer-label mask path.

    Relies on the `img_to_l` lookup built in the training notebook.
    """
    l_str = str(img_to_l[fp])
    return l_str \
        .replace('labels', 'labels_int') \
        .replace('png', 'tif')

class FacialProfile:
    def __init__(self, img, model, sampling_strategy='use_all', align_face=True):
        self.segmask = None
        self.eye_slope_threshold = 0.50 #degrees
        self.model = model
        self.img = img
        self.eye_slope = np.nan
        self.apply_rotation(angle=0.0)
        self.fix_rotation()
        self.mesh = MeshMask(img=self.segmask.decoded_img)
        self.skin_tones = self.compute_skintones(
            sampling_strategy=sampling_strategy
        )
        self.tot_head_area, self.face_areas = self.create_area_features()

        self.bizygomatic_left, self.bizygomatic_right, self.bizygomatic_dist = \
            self.compute_bizygomatic_width()
    
        self.upperfacial_top, self.upperfacial_bottom, self.upperfacial_dist = \
            self.compute_upperfacial_height()
        
        self.fwhr = self.bizygomatic_dist / self.upperfacial_dist
        # self.modeled_img_fp = write_to/'imgs'/(fp.stem+'.jpg') #TODO
    
    def get_profile(self):
        """Returns the face profile object."""
        tones = {'rgb_of_{}'.format(k):v for k,v in self.skin_tones.items()}
        other = {
            'img_rot_degrees': self.img.rotation_degrees,
            'img_num_rotations': self.img.num_rotations,
            'img_eye_slope': self.eye_slope,
            'fwhr': self.fwhr,
            'bizygomatic_w_px': self.bizygomatic_dist,
            'upperfacial_h_px': self.upperfacial_dist,
            'tot_head_area_px': self.tot_head_area
        }
        out = {**tones, **other}
        out = {**out, **self.face_areas}
        return out
    
    
    def apply_rotation(self, angle):
        """Applies a rotation by angle (in degrees) to self.img"""
        if angle != 0.0:
            self.img.rotate(angle)
            
        self.segmask = FaceMask(img=self.img, model=self.model)
        self.create_eye_features()
        # TODO: add check to see if the face rotation worked out
        self.create_secondary_features()
    
    
    def fix_rotation(self,):
        """Loops over rotations until the image is corrected."""
        while abs(self.theta_degrees) > self.eye_slope_threshold:
            self.apply_rotation(angle=self.theta_degrees)
        
    
    def create_area_features(self):
        nots = ['Background/undefined']
        total_area = {
            lab: np.count_nonzero(self.segmask.mask == code) 
            for lab, code in self.segmask.label_to_code.items()
            if lab not in nots
        }
        s = sum(total_area.values())
        total_area = {'pct_of_head_{}'.format(k): v/s for k,v in total_area.items()}
        return s, total_area
    
    
    def create_eye_features(self):
        """Writes the eye features and the rotation angle implied by the eye slope."""
        self.segmask.add_eyes()
        self.eye_slope = self.segmask.compute_eye_slope()
        if self.eye_slope > 0.0:
            self.theta_degrees = \
                -1.0 * compute_theta(slope1=self.eye_slope, slope2=0.0)
        else:
            # eye_slope <= 0.0: same correction with the opposite sign convention
            self.theta_degrees = \
                compute_theta(slope2=self.eye_slope, slope1=0.0)
        
        
    def create_secondary_features(self):
        """Writes the secondary facial features, dependent on reference features."""
        self.segmask.add_cheeks()
        self.segmask.add_forehead()

    
    def compute_skintones(self, sampling_strategy, thresh=None):
        """Computes skin tones for all regions"""
        if thresh:
            self.segmask.calc_new_decision(thresh=thresh)
        
        return {
            region: self.segmask.calc_region_color(
                region=region, 
                sampling_strategy=sampling_strategy
            )
            for region in self.segmask.label_to_code.keys()
        }
    
    
    def compute_bizygomatic_width(self):
        """Computes bizygomatic (cheek-to-cheek) width in pixels.

        Source: https://carta.anthropogeny.org/moca/topics/upper-facial-height
        """

        right_eye_right = self.segmask.find_region_extrema(
            region='right_eye',direction='right',
        )
        right_cheek_right = self.segmask.find_region_extrema(
            region='right_cheek',direction='right',
        )
        bizygomatic_right = (float(right_cheek_right[0]), float(right_eye_right[1]))

        left_eye_left = self.segmask.find_region_extrema(
            region='left_eye',direction='left',
        )
        left_cheek_left = self.segmask.find_region_extrema(
            region='left_cheek',direction='left',
        )
        bizygomatic_left = (float(left_cheek_left[0]), float(left_eye_left[1]))
        bizygomatic_dist = bizygomatic_right[0] - bizygomatic_left[0]

        return bizygomatic_left, bizygomatic_right, bizygomatic_dist
        
        
    def compute_upperfacial_height(self):
        """Computes upper facial height in pixels (bottom of eyebrows to top of lips)."""
        upperfacial_top = self.segmask.find_region_extrema(
            region='Eyebrows',direction='bottom',
        )
        upperfacial_bottom = self.segmask.find_region_extrema(
            region='Lips',direction='top',
        )
        upperfacial_top = (float(upperfacial_top[0]), float(upperfacial_top[1]))
        upperfacial_bottom = (float(upperfacial_bottom[0]), float(upperfacial_bottom[1]))
        upperfacial_dist = upperfacial_bottom[1] - upperfacial_top[1]
        
        return upperfacial_top, upperfacial_bottom, upperfacial_dist

        
    def get_skintones_in_lab(self):
        """
        Returns skin tones in LAB colors
        """
        return {
            k: apply_rgb2lab(v) 
            for k, v in self.skin_tones.items()
        }

path = Path("/ws/data/skin-tone/headsegmentation_dataset_ccncsa")
mod_dir = path/'Models'
mod_fp = mod_dir/'checkpoint_20201007'
# TODO: figure out how to get the labels from the trained model...
output_classes = ['Background/undefined', 'Lips', 'Eyes', 'Nose', 'Hair', 
  'Ears', 'Eyebrows', 'Teeth', 'General face', 'Facial hair',
  'Specs/sunglasses']
size = 224
truth_fp = Path('/ws/data/skin-tone/ScreenshotFaceAfterStatement/erics_imgs/MasterFacesDatabaseLabels.xlsx - labels.csv') 
truth = pd.read_csv(truth_fp)
truth.index = truth['PostStatementPhotoFilename']
del truth['PostStatementPhotoFilename']
truth['L_m'] = (truth['L_l'] + truth['L_r']) / 2
truth['a_m'] = (truth['a_l'] + truth['a_r']) / 2
truth['b_m'] = (truth['b_l'] + truth['b_r']) / 2

def run_summary(fp, summary, attempt=1, num_attempts=10):
    print(fp)
    outimg = write_to/'imgs'/(fp.stem+'.jpg')
    
    if outimg.exists():
        return None, None

    try: del img
    except: pass
    img = MaskedImg()
    img.load_from_file(fn=fp)

    try: del model
    except: pass

    model = TrainedSegmentationModel(
        mod_fp=mod_fp, 
        input_size=size,
        output_classes=output_classes
    )

    try: del profile
    except: pass
    
    try:
        profile = FacialProfile(
            model=model, 
            img=img, 
            sampling_strategy='use_all', 
            align_face=True
        )
    except:
        if not attempt > num_attempts:
            return run_summary(
                fp=fp, 
                summary=summary,
                attempt=attempt+1,
                num_attempts=num_attempts
            )
        else:
            return None, None
    
    plt.figure(figsize=(10,10))
    plt.imshow(profile.segmask.decoded_img.img)
    plt.imshow(
        skimage.color.label2rgb(np.array(profile.segmask.mask)), 
        alpha=0.3
    )
    plt.title('Computed fWHR based on Segmentation Only (not FaceMesh).\nfWHR: {}'.format(profile.fwhr))

    plt.scatter(x=[profile.bizygomatic_right[0]], 
                y=[profile.bizygomatic_right[1]], 
                marker='+', c='orange')
    plt.scatter(x=[profile.bizygomatic_left[0]], 
                y=[profile.bizygomatic_left[1]], 
                marker='+', c='orange')
    plt.plot(
        [profile.bizygomatic_right[0], profile.bizygomatic_right[0]], 
        [0, profile.segmask.mask.shape[1]-1],'ro-')
    plt.plot(
        [profile.bizygomatic_left[0], profile.bizygomatic_left[0]], 
        [0, profile.segmask.mask.shape[1]-1],'ro-')

    plt.scatter(x=[profile.upperfacial_top[0]], 
                y=[profile.upperfacial_top[1]],
                marker='+', c='red')
    plt.plot(
        [0, profile.segmask.mask.shape[0]-1], 
        [profile.upperfacial_top[1], profile.upperfacial_top[1]],
        'go-'
    )

    plt.scatter(x=[profile.upperfacial_bottom[0]], 
                y=[profile.upperfacial_bottom[1]],
                marker='+', c='red')
    plt.plot(
        [0, profile.segmask.mask.shape[0]-1], 
        [profile.upperfacial_bottom[1], profile.upperfacial_bottom[1]], 'go-')
    plt.savefig(outimg, format='jpeg')
    
    row = profile.get_profile()
    row['model_id'] = mod_fp
    
    return row, plt
    

erics_img = Path('/ws/data/skin-tone/from_zenodo/Media/MediaForExport/')
ls = [fp for fp in erics_img.iterdir() if fp.suffix == '.jpg']  # plain pathlib; avoids the fastai Path.ls() patch
# row = truth.loc[fp.name].to_dict()
write_to = Path('/ws/data/skin-tone/output1')
summary = []

fp = ls[1]
outimg = write_to/'imgs'/(fp.stem+'.jpg')

try: del img
except: pass
img = MaskedImg()
img.load_from_file(fn=fp)

try: del model
except: pass

model = TrainedSegmentationModel(
    mod_fp=mod_fp, 
    input_size=size,
    output_classes=output_classes
)

try: del profile
except: pass


profile = FacialProfile(
    model=model, 
    img=img, 
    sampling_strategy='use_all', 
    align_face=True
)
plt.imshow(profile.img.img)
<matplotlib.image.AxesImage at 0x7f907811e6a0>
print(profile.bizygomatic_right)
profile.bizygomatic_left
(172.0, 83.0)
(92.0, 82.0)
def plot_all(save=False):
    """Plots the current `profile` overlay and writes it to `outimg` (uses notebook globals)."""
    plt.figure(figsize=(10,10))
    plt.imshow(profile.segmask.decoded_img.img)
    plt.imshow(
        skimage.color.label2rgb(np.array(profile.segmask.mask)), 
        alpha=0.3
    )
    plt.title('Computed fWHR based on Segmentation Only (not FaceMesh).\nfWHR: {}'.format(profile.fwhr))

    plt.scatter(x=[profile.bizygomatic_right[0]], 
                y=[profile.bizygomatic_right[1]], 
                marker='+', c='orange')
    plt.scatter(x=[profile.bizygomatic_left[0]], 
                y=[profile.bizygomatic_left[1]], 
                marker='+', c='orange')
    plt.plot(
        [profile.bizygomatic_right[0], profile.bizygomatic_right[0]], 
        [0, profile.segmask.mask.shape[1]-1],'ro-')
    plt.plot(
        [profile.bizygomatic_left[0], profile.bizygomatic_left[0]], 
        [0, profile.segmask.mask.shape[1]-1],'ro-')

    plt.scatter(x=[profile.upperfacial_top[0]], 
                y=[profile.upperfacial_top[1]],
                marker='+', c='red')
    plt.plot(
        [0, profile.segmask.mask.shape[0]-1], 
        [profile.upperfacial_top[1], profile.upperfacial_top[1]],
        'go-'
    )

    plt.scatter(x=[profile.upperfacial_bottom[0]], 
                y=[profile.upperfacial_bottom[1]],
                marker='+', c='red')
    plt.plot(
        [0, profile.segmask.mask.shape[0]-1], 
        [profile.upperfacial_bottom[1], profile.upperfacial_bottom[1]], 'go-')
    plt.savefig(outimg, format='jpeg')

    row = profile.get_profile()
    row['model_id'] = mod_fp

    # known failure modes observed during batch runs: ZeroDivisionError, RuntimeError
outimg
Path('/ws/data/skin-tone/output1/imgs/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg')
summary = []
failures = []
for fp in ls[:10]:
    if str(fp)[-4:] != '.jpg': continue
    try:
        row, plt = run_summary(fp=fp, summary=summary)
    except:
        failures.append(fp)
        continue
    if not row and not plt:
        failures.append(fp)
        continue
    
    summary.append(row)
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg
/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg
failures
[Path('/ws/data/skin-tone/from_zenodo/Media/MediaForExport/3e9d3a9c-c62d-4e17-b370-ff1b8c0a16c4.jpg'),
 Path('/ws/data/skin-tone/from_zenodo/Media/MediaForExport/a9f8a933-3428-4213-bfac-30b6a8c8daca.jpg')]
profile.get_profile()
{'rgb_of_Background/undefined': (107, 104, 104),
 'rgb_of_Lips': (132, 100, 110),
 'rgb_of_Nose': (168, 144, 140),
 'rgb_of_Hair': (50, 48, 52),
 'rgb_of_Ears': (56, 21, 34),
 'rgb_of_Eyebrows': (80, 73, 77),
 'rgb_of_Teeth': None,
 'rgb_of_General face': (118, 112, 112),
 'rgb_of_Facial hair': None,
 'rgb_of_Specs/sunglasses': None,
 'rgb_of_right_eye': (91, 87, 93),
 'rgb_of_left_eye': (80, 76, 85),
 'rgb_of_left_cheek': (173, 153, 150),
 'rgb_of_right_cheek': (148, 129, 126),
 'rgb_of_forehead': (180, 156, 148),
 'img_eye_slope': -0.0,
 'fwhr': 1.7222222222222223,
 'bizygomatic_w_px': 62.0,
 'upperfacial_h_px': 36.0,
 'tot_head_area_px': 10828,
 'pct_of_head_Lips': 0.012929442186922793,
 'pct_of_head_Nose': 0.025951237532323604,
 'pct_of_head_Hair': 0.17694865164388623,
 'pct_of_head_Ears': 0.0001847063169560399,
 'pct_of_head_Eyebrows': 0.010251200591060215,
 'pct_of_head_Teeth': 0.0,
 'pct_of_head_General face': 0.5391577391946805,
 'pct_of_head_Facial hair': 0.0,
 'pct_of_head_Specs/sunglasses': 0.0,
 'pct_of_head_right_eye': 0.004155892131510898,
 'pct_of_head_left_eye': 0.0034170668636867383,
 'pct_of_head_left_cheek': 0.05744366457332841,
 'pct_of_head_right_cheek': 0.05227188769855929,
 'pct_of_head_forehead': 0.11728851126708534}
plt.figure(figsize = (8,8))
plt.title('Original Image, Rotated and Shrunken to size={}'.format(size))
plt.imshow(profile.segmask.decoded_img.img)
<matplotlib.image.AxesImage at 0x7f6a093e5940>
plt.figure(figsize = (8,8))
plt.title('Enhanced Segmentation Model Output')
plt.imshow(profile.segmask.mask)
<matplotlib.image.AxesImage at 0x7f6a06a3a2e8>

Compute FaceMesh (Google)

profile.mesh.draw(thickness=1, circle_radius=1)
plt.figure(figsize=(10,10))
plt.imshow(profile.segmask.decoded_img.img)
plt.imshow(
    skimage.color.label2rgb(np.array(profile.segmask.mask)), 
    alpha=0.3
)
plt.title('Computed fWHR based on Segmentation Only (not FaceMesh).\nfWHR: {}'.format(profile.fwhr))

plt.scatter(x=[profile.bizygomatic_right[0]], 
            y=[profile.bizygomatic_right[1]], 
            marker='+', c='orange')
plt.scatter(x=[profile.bizygomatic_left[0]], 
            y=[profile.bizygomatic_left[1]], 
            marker='+', c='orange')
plt.plot(
    [profile.bizygomatic_right[0], profile.bizygomatic_right[0]], 
    [0, profile.segmask.mask.shape[1]-1],'ro-')
plt.plot(
    [profile.bizygomatic_left[0], profile.bizygomatic_left[0]], 
    [0, profile.segmask.mask.shape[1]-1],'ro-')

plt.scatter(x=[profile.upperfacial_top[0]], 
            y=[profile.upperfacial_top[1]],
            marker='+', c='red')
plt.plot(
    [0, profile.segmask.mask.shape[0]-1], 
    [profile.upperfacial_top[1], profile.upperfacial_top[1]],
    'go-'
)

plt.scatter(x=[profile.upperfacial_bottom[0]], 
            y=[profile.upperfacial_bottom[1]],
            marker='+', c='red')
plt.plot(
    [0, profile.segmask.mask.shape[0]-1], 
    [profile.upperfacial_bottom[1], profile.upperfacial_bottom[1]], 'go-')
[<matplotlib.lines.Line2D at 0x7f0fd737a048>]

Plot all Models Together

plt.figure(figsize=(10,10))
# plt.imshow(profile.segmask.decoded_img.img)
plt.imshow(profile.segmask.decoded_img.img)
plt.imshow(
    skimage.color.label2rgb(np.array(profile.segmask.mask)), 
    alpha=0.3
)
plt.title('fWHR: {}'.format(profile.fwhr))

plt.scatter(x=[profile.bizygomatic_right[0]], 
            y=[profile.bizygomatic_right[1]], 
            marker='+', c='orange')
plt.scatter(x=[profile.bizygomatic_left[0]], 
            y=[profile.bizygomatic_left[1]], 
            marker='+', c='orange')
plt.plot(
    [profile.bizygomatic_right[0], profile.bizygomatic_right[0]], 
    [0, profile.segmask.mask.shape[1]-1],'ro-')
plt.plot(
    [profile.bizygomatic_left[0], profile.bizygomatic_left[0]], 
    [0, profile.segmask.mask.shape[1]-1],'ro-')

plt.scatter(x=[profile.upperfacial_top[0]], 
            y=[profile.upperfacial_top[1]],
            marker='+', c='red')
plt.plot(
    [0, profile.segmask.mask.shape[0]-1], 
    [profile.upperfacial_top[1], profile.upperfacial_top[1]],
    'go-'
)

plt.scatter(x=[profile.upperfacial_bottom[0]], 
            y=[profile.upperfacial_bottom[1]],
            marker='+', c='red')
plt.plot(
    [0, profile.segmask.mask.shape[0]-1], 
    [profile.upperfacial_bottom[1], profile.upperfacial_bottom[1]], 'go-')
[<matplotlib.lines.Line2D at 0x7f0fa570d5c0>]
(a1,b1) = profile.segmask.find_region_extrema(
            region='right_eye', direction='bottom'
        )
(a2,b2) = profile.segmask.find_region_extrema(
            region='left_eye', direction='bottom'
        )
plt.figure(figsize = (10,10))
grp = (profile.segmask.mask == 1).nonzero()
plt.gca().invert_yaxis()
plt.scatter(grp[:,1],grp[:,0],)
<matplotlib.collections.PathCollection at 0x7f79a92135f8>

Validation

Basic Validation of the Facial Segmentation Model and Added Regions

Original Image

plt.figure(figsize = (10,10))
plt.imshow(profile.segmask.decoded_img.img)
<matplotlib.image.AxesImage at 0x7f0fa584bb38>

Facial Segmentation Model Output Mask

plt.figure(figsize = (10,10))
plt.imshow(profile.segmask.mask)
<matplotlib.image.AxesImage at 0x7f0fd7e79588>
centroid_leye = profile.segmask.find_region_extrema(region='left_eye', direction='centroid')
centroid_reye = profile.segmask.find_region_extrema(region='right_eye', direction='centroid')
slope = (float(centroid_reye[1])-centroid_leye[1])/(centroid_reye[0]-centroid_leye[0])

plt.figure(figsize = (10,10))
plt.imshow(profile.segmask.mask)
plt.scatter(x=[centroid_leye[0]], y=[centroid_leye[1]], marker='+', c='black')
plt.scatter(x=[centroid_reye[0]], y=[centroid_reye[1]], marker='+', c='black')
plt.plot([centroid_leye[0], centroid_reye[0]], [centroid_leye[1], centroid_reye[1]], 'ro-')
flat_pt1 = centroid_leye
flat_pt2 = torch.Tensor([centroid_leye[0], centroid_reye[1]])  # point level with the left-eye centroid
plt.scatter(x=[flat_pt2[0]], y=[flat_pt2[1]], marker='+', c='black')
plt.plot([centroid_leye[0], flat_pt2[0]], [centroid_leye[1], flat_pt2[1]], 'bo-')
[<matplotlib.lines.Line2D at 0x7f0f76d34048>]

Validating by Intuition on Single Samples and Color Swatches

Validate Nose Sample

# `row` holds the hand-coded truth values for this image,
# e.g. row = truth.loc[fp.name].to_dict() when fp is in the labeled set
left_cheek_truth = np.array([row['L_l'], row['a_l'], row['b_l']])
right_cheek_truth = np.array([row['L_r'], row['a_r'], row['b_r']])
left_cheek_truth_rgb = color.lab2rgb(left_cheek_truth.astype(float))
right_cheek_truth_rgb = color.lab2rgb(right_cheek_truth.astype(float))
pred_left_cheek_lab = abs(skimage.color.rgb2lab(profile.skin_tones['left_cheek'],
    illuminant='D50', observer='2'
)).astype('int')
pred_right_cheek_lab = abs(skimage.color.rgb2lab(
    profile.skin_tones['right_cheek'],
    illuminant='D50', observer='2'
)).astype('int')
print('predicted left cheek (LAB)', pred_left_cheek_lab)
print('predicted right cheek (LAB)', pred_right_cheek_lab)
print('true left cheek',left_cheek_truth)
print('true right cheek',right_cheek_truth)


print('left cheek error', calc_euclidean_error(pred_left_cheek_lab, left_cheek_truth))
print('right cheek error',calc_euclidean_error(pred_right_cheek_lab, right_cheek_truth))
predicted left cheek (LAB) [64  5  9]
predicted right cheek (LAB) [55  4  7]
true left cheek [68  8  4]
true right cheek [57  7  4]
left cheek error 7.0710678118654755
right cheek error 4.69041575982343
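The error reported by `calc_euclidean_error` is the straight-line (Euclidean) distance between the predicted and hand-coded points in LAB space, i.e., the CIE76 color-difference formula:

$$\Delta E_{76} = \sqrt{(L_1 - L_2)^2 + (a_1 - a_2)^2 + (b_1 - b_2)^2}$$

For the left cheek above, $\sqrt{(64-68)^2 + (5-8)^2 + (9-4)^2} = \sqrt{50} \approx 7.07$, which matches the printed value; the right cheek gives $\sqrt{22} \approx 4.69$.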
plt.figure(figsize = (10,10))
# plt.imshow(img)

region = 'Nose'
fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0), 1, 1,
        edgecolor = (np.array(profile.skin_tones[region]).astype(float)) / 255,
        facecolor = (np.array(profile.skin_tones[region]).astype(float)) / 255,
        fill=True
     )
)

ax.annotate('predicted - {}'.format(region), (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()

fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0),1,1,
        edgecolor = right_cheek_truth_rgb,
        facecolor = right_cheek_truth_rgb,
        fill=True
     )
)

ax.annotate('Right Cheek Truth', (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()

fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0),1,1,
        edgecolor = left_cheek_truth_rgb,
        facecolor = left_cheek_truth_rgb,
        fill=True
     )
)

ax.annotate('Left Cheek Truth', (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()
<Figure size 720x720 with 0 Axes>

Validate Left Cheek Sample

region = 'left_cheek'
fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0),1,1,
        edgecolor = (np.array(profile.skin_tones[region]).astype(float)) / 255,
        facecolor = (np.array(profile.skin_tones[region]).astype(float)) / 255,
        fill=True
     ) 
)
ax.annotate('predicted - {}'.format(region), (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()

fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0),1,1,
        edgecolor = right_cheek_truth_rgb,
        facecolor = right_cheek_truth_rgb,
        fill=True
     )
)

ax.annotate('Right Cheek Truth', (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()

fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0),1,1,
        edgecolor = left_cheek_truth_rgb,
        facecolor = left_cheek_truth_rgb,
        fill=True
     )
)

ax.annotate('Left Cheek Truth', (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()

Validate Right Cheek Sample

region = 'right_cheek'
fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0),1,1,
        edgecolor = (np.array(profile.skin_tones[region]).astype(float)) / 255,
        facecolor = (np.array(profile.skin_tones[region]).astype(float)) / 255,
        fill=True
     ) 
)
ax.annotate('predicted - {}'.format(region), (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()

fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0),1,1,
        edgecolor = right_cheek_truth_rgb,
        facecolor = right_cheek_truth_rgb,
        fill=True
     )
)

ax.annotate('Right Cheek Truth', (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()

fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0),1,1,
        edgecolor = left_cheek_truth_rgb,
        facecolor = left_cheek_truth_rgb,
        fill=True
     )
)

ax.annotate('Left Cheek Truth', (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()

Validate Forehead Sample

region = 'forehead'
fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0),1,1,
        edgecolor = (np.array(profile.skin_tones[region]).astype(float)) / 255,
        facecolor = (np.array(profile.skin_tones[region]).astype(float)) / 255,
        fill=True
     ) 
)
ax.annotate('predicted - {}'.format(region), (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()

fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0),1,1,
        edgecolor = right_cheek_truth_rgb,
        facecolor = right_cheek_truth_rgb,
        fill=True
     )
)

ax.annotate('Right Cheek Truth', (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()

fig, ax = plt.subplots()
ax.add_patch(
     patches.Rectangle(
        (0, 0),1,1,
        edgecolor = left_cheek_truth_rgb,
        facecolor = left_cheek_truth_rgb,
        fill=True
     )
)

ax.annotate('Left Cheek Truth', (0.5, 0.5), color='w', weight='bold', 
                fontsize=18, ha='center', va='center')

plt.show()

Facial width-to-height ratio (fWHR)

# hor_lip: top of lip, parallel to hor_eye
# ver_rear: left side of right ear (subject perspective), orthogonal to hor_eye
# ver_lear: right side of left ear (subject perspective), orthogonal to hor_eye


# requires: right eye vs left eye
    # requires facial orientation transform (upside down -> right side up)

# Simplifying Assumptions
import cv2
import imutils
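The orientation transform noted above can be sketched with the imports in this cell: estimate the eye-line angle from the two eye centroids and rotate the frame so the eyes sit level. This is only an illustrative sketch (the `FacialProfile` class above does the equivalent via `compute_theta` and `MaskedImg.rotate`); the file path and centroid values in the usage comment are hypothetical, and the sign of the correction may need flipping depending on the rotation helper's convention.

```python
import cv2
import imutils
import numpy as np

def level_eyes(img_bgr, left_eye_xy, right_eye_xy):
    """Rotates the image so the line between the eye centroids is horizontal."""
    dx = right_eye_xy[0] - left_eye_xy[0]
    dy = right_eye_xy[1] - left_eye_xy[1]
    angle = np.degrees(np.arctan2(dy, dx))  # slope of the eye line, in degrees
    # rotate_bound keeps the whole frame in view after rotation;
    # flip the sign here if the eyes come out more tilted, not less
    return imutils.rotate_bound(img_bgr, -angle)

# usage (hypothetical path and centroids):
# frame = cv2.imread('some_portrait.jpg')
# leveled = level_eyes(frame, left_eye_xy=(92, 86), right_eye_xy=(172, 83))
```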

Validation Summary Table for Metrics Against All Images

import traceback

out = {}  # accumulates one result row per image
tags = ['forehead', 'right_cheek', 'left_cheek', 'Nose']  # regions with hand-coded comparisons (inferred from the error_* columns below)

for im in ls:
    try:
        if im in out: continue
        if 'jpg' not in str(im): continue
        # run_img is a helper (not shown in this notebook) that returns the
        # sampled region colors in LAB for an image, keyed by region tag.
        colors_profile_lab = run_img(fn=im, tags=tags)
        row = truth.loc[im.name].to_dict()
        row['fp'] = im
        row['PostStatementPhotoFilename'] = im.name
        for tag in tags:
            row['Lab_{}'.format(tag)] = colors_profile_lab[tag]
            row['error_{}'.format(tag)] = calc_euclidean_error(
                colors_profile_lab[tag],
                np.array([row['L_m'], row['a_m'], row['b_m']])
            )
        out[im] = row
    except KeyboardInterrupt:
        break
    except:
        traceback.print_exc()
df = pd.DataFrame(out.values())
df[['error_forehead','error_right_cheek','error_left_cheek','error_Nose']].sum()
error_forehead       4201.102983
error_right_cheek    4253.485379
error_left_cheek     3998.065089
error_Nose           6192.452324
dtype: float64
df.to_csv('early_look_validation.csv')
df
L_l a_l b_l L_r a_r b_r Leftcheeksize_sq Rightcheeksize_sq coloration_notes L_m ... fp PostStatementPhotoFilename Lab_forehead error_forehead Lab_right_cheek error_right_cheek Lab_left_cheek error_left_cheek Lab_Nose error_Nose
0 67 13 4 51 12 3 51 51 NaN 59.0 ... /ws/data/skin-tone/ScreenshotFaceAfterStatement/erics_imgs/vlcsnap-2020-09-04-16h01m54s241.jpg vlcsnap-2020-09-04-16h01m54s241.jpg [68.25674162237833, 8.91622504113837, -15.457491770822251] 21.399000 [55.425016610496215, 11.704792912227369, -14.396626105998145] 18.267515 [63.52683332635404, 11.380539178516557, -14.996442437544054] 19.075214 [83.42976058083897, 12.153559972545857, -19.33626419328327] 33.442909
1 72 14 -4 83 13 -2 51 51 NaN 77.5 ... /ws/data/skin-tone/ScreenshotFaceAfterStatement/erics_imgs/vlcsnap-2020-09-05-13h02m33s216.jpg vlcsnap-2020-09-05-13h02m33s216.jpg [68.25674162237833, 8.91622504113837, -15.457491770822251] 16.175225 [55.425016610496215, 11.704792912227369, -14.396626105998145] 24.908046 [63.52683332635404, 11.380539178516557, -14.996442437544054] 18.537965 [83.42976058083897, 12.153559972545857, -19.33626419328327] 17.431250
2 64 8 7 59 6 5 51 51 NaN 61.5 ... /ws/data/skin-tone/ScreenshotFaceAfterStatement/erics_imgs/vlcsnap-2020-09-04-11h55m33s741.jpg vlcsnap-2020-09-04-11h55m33s741.jpg [68.25674162237833, 8.91622504113837, -15.457491770822251] 22.577631 [55.425016610496215, 11.704792912227369, -14.396626105998145] 21.795937 [63.52683332635404, 11.380539178516557, -14.996442437544054] 21.544089 [83.42976058083897, 12.153559972545857, -19.33626419328327] 33.902800
3 70 16 1 63 14 0 51 51 NaN 66.5 ... /ws/data/skin-tone/ScreenshotFaceAfterStatement/erics_imgs/vlcsnap-2020-09-05-15h44m56s025.jpg vlcsnap-2020-09-05-15h44m56s025.jpg [68.25674162237833, 8.91622504113837, -15.457491770822251] 17.167994 [55.425016610496215, 11.704792912227369, -14.396626105998145] 18.852669 [63.52683332635404, 11.380539178516557, -14.996442437544054] 16.188883 [83.42976058083897, 12.153559972545857, -19.33626419328327] 26.233498
4 59 14 5 55 15 6 51 51 NaN 57.0 ... /ws/data/skin-tone/ScreenshotFaceAfterStatement/erics_imgs/vlcsnap-2020-09-05-12h50m12s634.jpg vlcsnap-2020-09-05-12h50m12s634.jpg [68.25674162237833, 8.91622504113837, -15.457491770822251] 24.435819 [55.425016610496215, 11.704792912227369, -14.396626105998145] 20.153647 [63.52683332635404, 11.380539178516557, -14.996442437544054] 21.735564 [83.42976058083897, 12.153559972545857, -19.33626419328327] 36.343886
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
198 73 14 -3 82 14 -3 51 51 NaN 77.5 ... /ws/data/skin-tone/ScreenshotFaceAfterStatement/erics_imgs/vlcsnap-2020-09-05-13h00m37s394.jpg vlcsnap-2020-09-05-13h00m37s394.jpg [68.25674162237833, 8.91622504113837, -15.457491770822251] 16.323961 [55.425016610496215, 11.704792912227369, -14.396626105998145] 24.949067 [63.52683332635404, 11.380539178516557, -14.996442437544054] 18.601763 [83.42976058083897, 12.153559972545857, -19.33626419328327] 17.476983
199 71 16 -3 67 15 -1 51 51 NaN 69.0 ... /ws/data/skin-tone/ScreenshotFaceAfterStatement/erics_imgs/vlcsnap-2020-09-05-19h12m51s833.jpg vlcsnap-2020-09-05-19h12m51s833.jpg [68.25674162237833, 8.91622504113837, -15.457491770822251] 15.000087 [55.425016610496215, 11.704792912227369, -14.396626105998145] 18.771258 [63.52683332635404, 11.380539178516557, -14.996442437544054] 14.691257 [83.42976058083897, 12.153559972545857, -19.33626419328327] 22.802691
200 56 8 6 63 8 6 51 51 NaN 59.5 ... /ws/data/skin-tone/ScreenshotFaceAfterStatement/erics_imgs/vlcsnap-2020-09-04-13h21m27s855.jpg vlcsnap-2020-09-04-13h21m27s855.jpg [68.25674162237833, 8.91622504113837, -15.457491770822251] 23.193619 [55.425016610496215, 11.704792912227369, -14.396626105998145] 21.127076 [63.52683332635404, 11.380539178516557, -14.996442437544054] 21.644723 [83.42976058083897, 12.153559972545857, -19.33626419328327] 35.097176
201 68 8 3 58 7 3 51 51 NaN 63.0 ... /ws/data/skin-tone/ScreenshotFaceAfterStatement/erics_imgs/vlcsnap-2020-09-05-13h16m00s788.jpg vlcsnap-2020-09-05-13h16m00s788.jpg [68.25674162237833, 8.91622504113837, -15.457491770822251] 19.243649 [55.425016610496215, 11.704792912227369, -14.396626105998145] 19.434589 [63.52683332635404, 11.380539178516557, -14.996442437544054] 18.417602 [83.42976058083897, 12.153559972545857, -19.33626419328327] 30.625797
202 78 8 6 71 7 4 51 51 NaN 74.5 ... /ws/data/skin-tone/ScreenshotFaceAfterStatement/erics_imgs/vlcsnap-2020-09-05-11h35m03s917.jpg vlcsnap-2020-09-05-11h35m03s917.jpg [68.25674162237833, 8.91622504113837, -15.457491770822251] 21.435786 [55.425016610496215, 11.704792912227369, -14.396626105998145] 27.527520 [63.52683332635404, 11.380539178516557, -14.996442437544054] 23.137128 [83.42976058083897, 12.153559972545857, -19.33626419328327] 26.337236

203 rows × 22 columns

Conclusions

In Summation

Next Steps

  • The accuracy of the segmentation model is ~92% across the training set, so there will be some errors. Reportedly it gets a lot more accurate when paired with a face-box (face detection) model that isolates the face from the rest of the picture; a sketch of that idea follows below.
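One way to add that face-isolation step is a lightweight face-detection pass before segmentation. The sketch below uses MediaPipe's face detector to crop the frame to the detected face box (plus a small margin) before handing it to the segmentation model; this is an illustrative sketch, not part of the current pipeline, and the margin value is an assumption.

```python
import cv2
import mediapipe as mp

def crop_to_face(img_bgr, margin=0.15):
    """Returns the image cropped to the first detected face box (plus a relative margin)."""
    h, w = img_bgr.shape[:2]
    with mp.solutions.face_detection.FaceDetection(min_detection_confidence=0.5) as detector:
        results = detector.process(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB))
    if not results.detections:
        return img_bgr  # no face found; fall back to the full frame
    box = results.detections[0].location_data.relative_bounding_box
    x0 = max(int((box.xmin - margin) * w), 0)
    y0 = max(int((box.ymin - margin) * h), 0)
    x1 = min(int((box.xmin + box.width + margin) * w), w)
    y1 = min(int((box.ymin + box.height + margin) * h), h)
    return img_bgr[y0:y1, x0:x1]
```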
### Color Metric Spaces
#### RGB vs LAB (XYZ): The CIE 1931 Color Space
To represent bias, it is desirable to choose a color space that comes as close as possible to representing the way humans perceive color. In this respect, the ubiquitous RGB color space has been found to be undesirable; for example, RGB does not accurately represent darkness, because human perception of lightness is nonlinear and RGB encodes color without accounting for those distortions. [citation]()

The [CIE 1931 Color Space](https://en.wikipedia.org/wiki/CIE_1931_color_space) is designed to be in line with how humans perceive colors. XYZ is one of several transformations of RGB that integrates those adjustments and encodes lightness/darkness values (not just chromaticity). When judging the [relative luminance](https://en.wikipedia.org/wiki/Relative_luminance) (brightness) of different colors in well-lit situations, humans tend to perceive light within the green parts of the spectrum as brighter than red or blue light of equal power. The [luminosity function](https://en.wikipedia.org/wiki/Luminosity_function) that describes the perceived brightness of different wavelengths is thus roughly analogous to the spectral sensitivity of [M cones](https://en.wikipedia.org/wiki/Cone_cell).

The CIE model capitalizes on this fact by setting Y as [luminance](https://en.wikipedia.org/wiki/Luminance). Z is quasi-equal to blue, or the S cone response, and X is a mix of response curves chosen to be nonnegative. The XYZ tristimulus values are thus analogous to, but different from, the LMS cone responses of the human eye. Setting Y as luminance has the useful result that for any given Y value, the XZ plane will contain all possible chromaticities at that luminance.

The unit of the tristimulus values X, Y, and Z is often arbitrarily chosen so that Y = 1 or Y = 100 is the brightest white that a color display supports. In this case, the Y value is known as the relative luminance. The corresponding whitepoint values for X and Z can then be inferred using the standard illuminants.
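As a quick check of the Y-as-luminance point, skimage can convert sRGB swatches to XYZ and LAB directly. The swatches below are the pure red and green sRGB primaries, so the numbers are reproducible: green carries far more relative luminance (Y ≈ 0.72) than red (Y ≈ 0.21) despite equal 8-bit intensity, and L* follows the same ordering.

```python
import numpy as np
from skimage import color

# Two equal-intensity swatches: pure red and pure green (sRGB, floats in [0, 1]).
red = np.array([[[1.0, 0.0, 0.0]]])
green = np.array([[[0.0, 1.0, 0.0]]])

for name, swatch in [('red', red), ('green', green)]:
    X, Y, Z = color.rgb2xyz(swatch)[0, 0]
    L, a, b = color.rgb2lab(swatch)[0, 0]
    # Y is relative luminance; green ~0.72 vs red ~0.21
    print(f'{name}: Y={Y:.3f}  L*={L:.1f}')
```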

#### LAB as a color standardization layer
It seems that a careful measurement approach to this task would also attempt to standardize the lighting across all photos, so as to calibrate the color measurements. A common way to achieve this is to select a standard "known" color that is consistent across photos. We have two kinds of backgrounds in our photos (the partitions and the rear wall), so we could use these as two standards. The idea is that if you compare swatches of the background you may find some lighter or darker than others (despite being the same physical color), and that difference could be used to calibrate the measured face color. This level of rigor (a standardization step) appears to be expected for reliable estimates, given the position and camera variability in our sample.
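A minimal sketch of that calibration step, assuming we have a reference LAB value for the background material under "standard" lighting and a per-photo LAB sample of the same background: shift each photo's face measurement by that photo's background offset. The reference value and the helper are assumptions for illustration, not part of the current pipeline.

```python
import numpy as np

# Assumed LAB of the background material under the reference lighting.
BACKGROUND_REF_LAB = np.array([70.0, 0.0, 0.0])

def calibrate_lab(face_lab, photo_background_lab, ref_lab=BACKGROUND_REF_LAB):
    """Shifts a face LAB sample by this photo's background offset from the reference.

    If the photo's background reads darker than the reference, the face sample is
    brightened by the same amount (and likewise for the a/b channels).
    """
    offset = np.asarray(ref_lab, float) - np.asarray(photo_background_lab, float)
    return np.asarray(face_lab, float) + offset

# e.g. a photo whose background reads L*=65 instead of 70 gets +5 L* on the face sample
print(calibrate_lab(face_lab=[58.0, 7.0, 5.0], photo_background_lab=[65.0, 1.0, 0.5]))
```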

### Existing approach to feature encoding in academic social sciences
It seems that in the literature people have coded photos by hand, or else used virtual images of people. As far as an automated way of measuring facial skin color goes, I am fairly sure no such thing exists. It is a bit strange to start thinking about how useful this could be to so many people; I keep thinking I am delusional and missing something.

### Other Attributes: Hair, BMI, etc.
I wouldn't bet my savings on eye or hair color being a big deal, but who knows - there might be a "blond effect." I suspect eye and hair color are of second-order importance relative to skin color in terms of judgments. Obviously, gender is the biggest single factor of human variation, besides age and physical stature, that affects impressions. DeBruin has (among her 180 publications) a paper on detecting BMI from faces that did not really work (to my surprise). Do you think AI is any good at visually detecting gender (to match against self-reports as the answer key, I suppose)?
### On "Race" and "Ethnicity"
I can give you a long explanation for the many reasons why, but bottom line is that skin coloration crosses all these perceived categories and is not diagnostic in and of itself of race or ethnicity - nor is our goal to use it for those judgements
2:48
but we do acknowledge that skin color could affect perception of one another for a variety of reasons, and so agnostically approach the research topic of measuring coloration in the face.
https://www.wsj.com/articles/the-quiet-growth-of-race-detection-software-sparks-concerns-over-bias-11597378154


### Manual Encoding
I finished coding the coloration of cheeks on 204 photographs (attached) as follows: I used a 51x51-pixel square to find the best "cheek" area, never including the edge of the face, the lips, or the eyelashes. Sometimes I needed to overlap with features of the nose or the upper-upper-lip area (but not the reddened lip area) within the 51-pixel square. There is only one set of two photos (one person) where I noted the head is too far away/too small in the frame, such that I could not sample two distinct cheeks according to this protocol. I centered a "color sampler tool" in Photoshop 2020 that averaged, over the 51x51-pixel area, the L, a, b values relevant to human color perception (of the pixels shown in the image and sampled). These coloration values have been added to the MasterFacesDatabaseLabels.xlsx file (now updated), which is in the skin-color folder you set up on Drive.

Eric: I learned, while doing my sampling and initially figuring out how to set my procedure, that if the edges of the sample area include the lines which define features (e.g., on the face the edges are darkened because of shadow and topography - the same goes for the lip edge, the nose edge, and especially the eyelashes, which are very dark on most), then the averaged values will be heavily influenced by those darker pixels.
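For reference, the manual protocol (average L*, a*, b* over a 51x51-pixel square centered on the cheek) can be reproduced programmatically along these lines. This is a sketch only: the file path and center coordinates in the usage comment are hypothetical, and the D50 illuminant is assumed to match the Photoshop setting discussed just below.

```python
import numpy as np
from skimage import io, color

def sample_patch_lab(img_path, center_xy, size=51, illuminant='D50', observer='2'):
    """Averages LAB over a size x size square centered on (x, y), mirroring the manual protocol."""
    rgb = io.imread(img_path)[..., :3] / 255.0          # drop alpha if present, scale to [0, 1]
    lab = color.rgb2lab(rgb, illuminant=illuminant, observer=observer)
    half = size // 2
    x, y = center_xy
    patch = lab[y - half:y + half + 1, x - half:x + half + 1]
    return patch.reshape(-1, 3).mean(axis=0)             # mean L*, a*, b*

# usage (hypothetical image and cheek center):
# print(sample_patch_lab('portrait.jpg', center_xy=(180, 240)))
```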

There is a setting called "illuminant" in whatever program you used to extract the LAB values. Can you confirm that its value is D50?
- No, I think Photoshop uses D50.

Photoshop doesn't take digits after the decimal, so the values were rounded to the nearest whole number.

Feature requests:
- Can you report how many pixels are in the selected area?
- If you could also avoid the few pixels at the outer edges/borders of faces, that would be ideal, because they are dark due to contour/shading and shift the average considerably (I've tested with and without). A sketch of both ideas follows this list.
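Both requests are straightforward on the segmentation side: erode the region mask a few pixels so border/contour pixels are excluded, and report the pixel count actually sampled. A minimal sketch, assuming a boolean region mask like those produced by `FaceMask` (the `shrink_px` value is an assumption):

```python
import numpy as np
from skimage.morphology import binary_erosion, disk

def region_mean_rgb(img_rgb, region_mask, shrink_px=2):
    """Averages RGB over a region after eroding away `shrink_px` of its border."""
    eroded = binary_erosion(region_mask, disk(shrink_px))
    if not eroded.any():          # region too small to shrink; fall back to the raw mask
        eroded = region_mask
    pixels = img_rgb[eroded]      # N x 3 array of sampled pixels
    return pixels.mean(axis=0), int(eroded.sum())

# usage (with a decoded image array and a boolean mask for one region code):
# mean_rgb, n_px = region_mean_rgb(np.array(decoded_img), np.array(mask) == code)
```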

A couple of papers describe methods for measuring fWHR (facial width to height). The simpler Carre paper seems like a good way to go; both that easier method and the more involved multiple-ratio method of Pentonvoak have been used by others in the literature.
[A](https://prcvdworkspace.slack.com/files/U015N333YQN/F01DJNTGSE6/carremccormick2008facialwidthtoheight.pdf)
[B](https://prcvdworkspace.slack.com/files/U015N333YQN/F01D411CR2B/pentonvoaketal2001facialproportions.pdf)


My top idea is to use it to summarize a bunch of the big face datasets that are used to train models. This has, of course, been done before by bias researchers, but to my knowledge the methods to reproduce those stats have not been made available.

Eric Schniter:
Those are good ambitions - but obviously more lofty and involved than the even lower-hanging fruit: to first produce a publication for a freely available tool that is as good as or better than human hand-coding at measuring color and dimensions on standardized datasets, such as those used by researchers or as part of an identification/yearbook system. It seems like we were actually getting close to that with what you were developing - you were almost getting accurate coloration values. Let's see that through in the short term, right?

The issue I see is that the proposed metric - the distance between the manually collected and the model-generated skin tone values - doesn't address the question of how well the estimate describes the variance between people's skin tones.

Eric Schniter:
That is not the question, though. That question would be a question of true colorimetry. The question I would like us to focus on is a question of pixel-sample colorimetry.

Comparing methods: all we need to show is that the automated method has lower variance, or is at least statistically indistinguishable, and then we can present it as a good-enough replacement for the standard method (see the sketch below).
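A minimal sketch of that comparison, assuming we have the same quantity measured by both methods on each image (for example, a mean cheek L*). `scipy.stats.levene` tests the null hypothesis that the two sets of measurements have equal variance; the column names in the usage comment are hypothetical.

```python
import numpy as np
from scipy import stats

def compare_method_spread(hand_coded, automated):
    """Compares the dispersion of two measurement methods applied to the same images."""
    hand_coded = np.asarray(hand_coded, dtype=float)
    automated = np.asarray(automated, dtype=float)

    # Levene's test: H0 = the two methods have equal variance.
    stat, pval = stats.levene(hand_coded, automated, center='median')
    return {
        'sd_hand': hand_coded.std(ddof=1),
        'sd_auto': automated.std(ddof=1),
        'levene_stat': stat,
        'levene_p': pval,  # large p: no evidence the spreads differ
    }

# e.g. compare_method_spread(df['L_m'], automated_L_values)
```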


First, we can produce a tool that does what humans do - but obviously it can do a lot more, and already does - so all the while there will be these other versions of cheek, nose, or whatever getting measured, and we can use these down the line to ask whether any have predictive power for behavior. Yes, I thought that at minimum we would want to use my dataset, and also the London dataset, which is in the literature and has even better standardization.

The problem here is that there is so much variation in human perception around this that any evaluation of the match between your model's measure of #1 and your difficult- and expensive-to-gather measures of #2 will be very niche, and perhaps not generalizable to the sensibilities of people not representative of your sample for #2.

The literature on physiognomy and judgments/perceptions of faces also makes increasing use of virtual models (e.g., the "cheek colors" could be painted back onto these). It is cool to see the mixed methods being employed in this research. Look over the table of contents for an overview, or spend five minutes reading the summary of results at the end: many of the questions explored in this dissertation overlap with the broad interests you mentioned today:
https://prcvdworkspace.slack.com/files/U015N333YQN/F01DMND2DUK/facial_discrimination__dissertaton_.pdf

I like the phase morphed models and the features they claim regarding trustworthiness:

https://www.newscientist.com/article/mg20126957-300-how-your-looks-betray-your-personality/?ignored=irrelevant

When we started this collaboration, it was based on an intersection of interests. The document I shared is an overview of the research program I began developing several years back. As you and I have developed our conversation and mutual focus, we have gotten into these discussions of what can be measured about faces and what those measures might inform. I would expect that you would want to look at the above, at least for the literature review and the summary of results pertinent to these interests of ours; e.g., it would be hugely informative if you were considering writing a blog post like you mentioned. Finally, as I wrote above, "Depending on what we agree to follow through with, I look forward to writing up more detailed methods of the tests of automated measures (i.e., for now at least coloration and fWHR) and development/deployment of the tool" - I was referring to elaborating more detail in this project overview. As of right now, we have not agreed on any of the methods we have discussed for an actual empirical study comparing hand-coding vs. machine-coding methods for coloration and fWHR. I am hoping that you will want to discuss and commit to a plan with me, which I will then carefully write up and possibly even preregister before we test and post datasets and tools. So, there is plenty of opportunity to contribute to this, and as soon as we are committed to a plan for tool testing and release (that fits in with this project), I will add you to the set of coauthors on this working document, we will add our project goals to the schedule, and I will begin developing the writing for the actual preregistration/publication. Alternatively, if you have new interests or constraints, I'd like to know.

One side note about what is covered in the review of literature and the scope of inquiry about faces: I decided to leave out any connections between faces and personality - first because I think any such connection might really be based on attractiveness and how personality develops in light of attractiveness-related opportunities, and second because, despite some papers claiming to have found a connection, other papers cannot replicate the link, and this new review concludes it can't be done: https://psyarxiv.com/4x7d8

One final comment: there is speculative value in adding you and a plan of our collaboration to this document. A really naive early version of this document got us thousands of dollars from the Koch Foundation, which we used to hire a programmer to develop our original experiment software and to pay participants. So, if you join in my optimism, getting you onto this document for good reason could pave a path toward funding again in the future.
## Methodology
- Color Summarizer tool (see _notebooks/images/... for examples of the output)
- [Paper from Google](https://arxiv.org/pdf/1701.08393.pdf)