
Learning Notes TF058: Face Recognition

Face recognition is a biometric technology that identifies a person from facial features. A camera or webcam captures face images or a video stream, automatically detects and tracks faces in the images, and then performs face-related processing: face detection, facial landmark detection, face verification, and so on. Alipay's "Paying with Your Face" made MIT Technology Review's 2017 list of the ten breakthrough technologies worldwide.

Advantages of face recognition: it is non-intrusive (capture is hard for the subject to notice, and face images can be acquired without active cooperation), contactless (the user does not have to touch any device), and concurrent (multiple faces can be detected, tracked, and recognized at once). Before deep learning, face recognition took two steps: extracting high-dimensional hand-crafted features, then reducing their dimensionality. Traditional face recognition techniques were based on visible-light images. Deep learning plus big data (massive labeled face datasets) is now the mainstream route: a neural network is trained on large numbers of sample images, learns features automatically during training with no manual feature selection, and can reach 99% recognition accuracy.

The face recognition pipeline.

Face image capture and detection. Capture: a camera collects face images or video, static or dynamic, at different positions and with different expressions; when the user is within the capture device's range, the device automatically searches for and shoots the face. Face detection is a form of object detection: statistics over candidate object probabilities yield features of the object to be detected, from which a detection model is built; the model is matched against the input image and the matching regions are output. Face detection is the preprocessing step for face recognition: it accurately marks the position and size of the face in the image. Face images have rich pattern features, such as histogram features, color features, template features, structural features, and Haar-like features. Face detection picks out the useful information and uses these features to detect the face. Common detection algorithms include template-matching models and the AdaBoost model; AdaBoost has the best combined speed and precision, trains slowly but detects quickly, and can reach real-time detection on a video stream.

Face image preprocessing. Based on the detection result, the image is processed to prepare for feature extraction. A captured face image is subject to all kinds of conditions and random interference, so it usually needs scaling, rotation, stretching, light compensation, grayscale conversion, histogram equalization, normalization, geometric correction, filtering, sharpening, and other preprocessing.

Face image feature extraction. The face image is digitized into a string of numbers (a feature vector). For example, from the positions of the eyes, the corners of the mouth, the nose, and the chin, feature components such as the Euclidean distances, curvatures, and angles between feature points are extracted and concatenated into a long feature vector. A toy sketch of this idea follows.
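A minimal sketch of the classical idea (not the FaceNet embedding): turn a handful of hypothetical landmark coordinates into a feature vector of pairwise Euclidean distances. The landmark values here are made up for illustration.

import itertools
import numpy as np

# Hypothetical 2-D landmarks (left eye, right eye, nose tip,
# left mouth corner, right mouth corner), e.g. from a landmark detector.
landmarks = np.array([[30., 40.], [70., 40.], [50., 60.], [35., 80.], [65., 80.]])

# Feature vector: Euclidean distance between every pair of landmarks.
feature_vector = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                           for i, j in itertools.combinations(range(len(landmarks)), 2)])
print(feature_vector.shape)  # (10,) pairwise distances for 5 landmarks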

Face image matching and recognition. The extracted feature data is matched against the face feature templates stored in a database, and identity is judged by degree of similarity: a threshold is set, and when the similarity exceeds it the match is output. Verification is one-to-one (1:1) comparison, proving that "you are you", used for identity checks in finance and information security. Identification is one-to-many (1:N) matching, "finding you among N people", e.g. on a video stream where a person is recognized the moment they walk into range, used in security and surveillance. A sketch of both modes follows.
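A minimal sketch of 1:1 verification and 1:N identification on top of precomputed embeddings. The threshold value and function names are illustrative; FaceNet-style embeddings are typically compared by Euclidean distance, with the threshold tuned on a validation set.

import numpy as np

THRESHOLD = 1.1  # illustrative; tuned on a validation set in practice

def verify(emb_a, emb_b, threshold=THRESHOLD):
    # 1:1 verification: same person iff embedding distance is below threshold
    return np.linalg.norm(emb_a - emb_b) < threshold

def identify(emb, gallery, threshold=THRESHOLD):
    # 1:N identification: gallery is {name: embedding};
    # return the closest identity, or None if nothing is close enough
    names = list(gallery)
    dists = [np.linalg.norm(emb - gallery[n]) for n in names]
    best = int(np.argmin(dists))
    return names[best] if dists[best] < threshold else None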

Categories of face recognition tasks.

Face detection. Detect and locate the faces in an image and return high-precision face bounding-box coordinates. This is the first step of any face analysis. The classic approach is the "sliding window": take a rectangular image region as a sliding window, extract features inside the window to describe that region, and decide from the feature description whether the window contains a face, iterating over every window that needs to be examined, as sketched below.
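A sketch of the sliding-window loop. The classifier is a stand-in: real detectors plug in a Haar/AdaBoost cascade or a CNN here, and also scan an image pyramid to handle multiple scales.

import numpy as np

def sliding_windows(image, window=(24, 24), stride=8):
    # Yield (x, y, patch) for every window position over a grayscale image
    h, w = image.shape
    wh, ww = window
    for y in range(0, h - wh + 1, stride):
        for x in range(0, w - ww + 1, stride):
            yield x, y, image[y:y + wh, x:x + ww]

def detect_faces(image, is_face):
    # is_face(patch) -> bool is any window classifier, e.g. a trained cascade
    return [(x, y) for x, y, patch in sliding_windows(image) if is_face(patch)]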

Facial landmark detection. Locate and return the coordinates of key points on the facial features and contour: the face outline, eyes, eyebrows, lips, and nose. Face++ provides up to 106 landmarks. A standard landmark localization technique is cascaded shape regression (CSR). The face recognition model here is based on the DeepID network, which resembles an ordinary convolutional neural network except that the second-to-last layer, the DeepID layer, is connected to both conv layer 4 and max-pool layer 3; since higher CNN layers have larger receptive fields, the network uses local and global features together. Layer sizes: input 31x39x1; conv1 28x36x20 (4x4x1 kernels); max-pool1 12x18x20 (2x2 filter); conv2 12x16x20 (3x3x20 kernels); max-pool2 6x8x40 (2x2 filter); conv3 4x6x60 (3x3x40 kernels); max-pool3 2x3x60 (2x2 filter); conv4 2x2x80 (2x2x60 kernels); DeepID layer 1x160; fully connected Softmax layer. 《Deep Learning Face Representation from Predicting 10,000 Classes》
http://mmlab.ie.cuhk.edu.hk/pdf/YiSun_CVPR14.pdf
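A sketch of a DeepID-style network with roughly the layer sizes listed above, written with tf.keras for brevity (the paper predates Keras, so this is an assumption-laden reconstruction, not the authors' code). The distinctive part is the DeepID layer, fully connected to both pool3 and conv4.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_deepid(n_classes, input_shape=(31, 39, 1)):
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(20, 4, activation='relu')(inp)        # conv1
    x = layers.MaxPooling2D(2)(x)                           # pool1
    x = layers.Conv2D(40, 3, activation='relu')(x)          # conv2
    x = layers.MaxPooling2D(2)(x)                           # pool2
    x = layers.Conv2D(60, 3, activation='relu')(x)          # conv3
    pool3 = layers.MaxPooling2D(2)(x)                       # pool3
    conv4 = layers.Conv2D(80, 2, activation='relu')(pool3)  # conv4
    # DeepID layer: fully connected to BOTH pool3 and conv4, so it sees
    # multi-scale (more local + more global) features at once.
    merged = layers.concatenate([layers.Flatten()(pool3), layers.Flatten()(conv4)])
    deepid = layers.Dense(160, activation='relu', name='deepid')(merged)
    out = layers.Dense(n_classes, activation='softmax')(deepid)
    return Model(inp, out)

model = build_deepid(n_classes=10000)  # the paper trains to predict 10,000 identities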

Face verification. Determine whether two face images show the same person. Given two faces, the model produces a classification confidence that is compared against a threshold to assess similarity.

Face attribute detection. Face attribute recognition and facial emotion analysis. https://www.betaface.com/wpa/
offers an online face recognition test that reports a person's age, whether they have a beard, emotion (happy, normal, angry, furious), gender, whether they wear glasses, and skin tone.

Applications of face recognition: beautification in Meitu's apps, Jiayuan.com's "face reading" similarity check for potential partners, "pay by face" in payments, and "face authentication" in security. Face++ and SenseTime both provide face recognition SDKs.

Face detection and recognition. https://github.com/davidsandberg/facenet

Florian Schroff, Dmitry Kalenichenko, and James Philbin's paper 《FaceNet: A
Unified Embedding for Face Recognition and Clustering》
https://arxiv.org/abs/1503.03832
https://github.com/davidsandberg/facenet/wiki/Validate-on-lfw

The LFW (Labeled Faces in the Wild) dataset. http://vis-www.cs.umass.edu/lfw/
Compiled by the computer vision lab at the University of Massachusetts Amherst: 13,233 images of 5,749 people, of whom 4,069 have only one image and 1,680 have more than one. Every image is 250x250. The face images sit in one folder per person, named after that person.
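Given that layout (one folder per person), a quick sketch to verify the counts after download; the path is illustrative, and it assumes the directory contains only per-person folders of images.

import os

lfw_dir = os.path.expanduser('~/datasets/lfw/raw')  # illustrative path
people = [d for d in os.listdir(lfw_dir) if os.path.isdir(os.path.join(lfw_dir, d))]
counts = {p: len(os.listdir(os.path.join(lfw_dir, p))) for p in people}
print('people:', len(counts))                                             # expect 5749
print('images:', sum(counts.values()))                                    # expect 13233
print('single-image people:', sum(1 for c in counts.values() if c == 1))  # expect 4069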

Data preprocessing. Alignment code:
https://github.com/davidsandberg/facenet/blob/master/src/align/align_dataset_mtcnn.py

Align the evaluation dataset to the same image size the pretrained model was trained with.
Set the environment variable:

export PYTHONPATH=[…]/facenet/src

Alignment command:

for N in {1..4}; do python src/align/align_dataset_mtcnn.py ~/datasets/lfw/raw ~/datasets/lfw/lfw_mtcnnpy_160 --image_size 160 --margin 32 --random_order --gpu_memory_fraction 0.25 & done

Pretrained model: 20170216-091149.zip
https://drive.google.com/file/d/0B5MzpY9kBtDVZ2RpVDYwWmxoSUk
Training set: the MS-Celeb-1M dataset
https://www.microsoft.com/en-us/research/project/ms-celeb-1m-challenge-recognizing-one-million-celebrities-real-world/
Microsoft's face recognition database: the top 1 million people from its celebrity list, with 100 face images per celebrity collected through search engines. The pretrained model's reported accuracy is 0.993+-0.004.

Evaluation: python src/validate_on_lfw.py datasets/lfw/lfw_mtcnnpy_160 models
The benchmark comparison uses facenet/data/pairs.txt, the officially provided randomly generated pairs: matched and mismatched person names with image numbers, in the format sketched below.
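In pairs.txt a matched pair is a 3-field line "Name n1 n2" (two images of the same person) and a mismatched pair is a 4-field line "Name1 n1 Name2 n2". A small reader sketch; lfw.read_pairs in the repo does essentially this, so treat the details here as an approximation.

def read_pairs(pairs_filename):
    # 3 fields = same person, 4 fields = different people
    pairs = []
    with open(pairs_filename, 'r') as f:
        for line in f.readlines()[1:]:  # the first line is a fold/count header
            pair = line.strip().split()
            if len(pair) in (3, 4):
                pairs.append(pair)
    return pairs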

10-fold cross validation (10-fold cross
validation), a method for testing accuracy. The dataset is split into 10 parts; each part in turn serves as the test set while the other 9 form the training set, and the mean of the 10 results estimates the algorithm's accuracy. Repeated 10-fold cross validation, averaging across repetitions, is usually required.
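The idea in a few lines with scikit-learn. Any classifier and dataset will do for illustration; this is not the lfw.evaluate implementation, which additionally picks the decision threshold per fold.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    # train on 9 folds, test on the held-out fold
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print('accuracy: %.3f +- %.3f' % (np.mean(scores), np.std(scores)))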

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
import argparse
import facenet
import lfw
import os
import sys
import math
from sklearn import metrics
from scipy.optimize import brentq
from scipy import interpolate

def main(args):
    with tf.Graph().as_default():
        with tf.Session() as sess:
            # 1. Read the file containing the pairs used for testing,
            # e.g. [['Abel_Pacheco', '1', '4']]
            pairs = lfw.read_pairs(os.path.expanduser(args.lfw_pairs))
            # Get the paths for the corresponding images and whether each pair matches
            paths, actual_issame = lfw.get_paths(os.path.expanduser(args.lfw_dir), pairs, args.lfw_file_ext)
            # 2. Load the model
            facenet.load_model(args.model)
            # Get input and output tensors
            images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
            embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
            phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0")
            #image_size = images_placeholder.get_shape()[1] # For some reason this doesn't work for frozen graphs
            image_size = args.image_size
            embedding_size = embeddings.get_shape()[1]
            # 3. Run forward pass to calculate embeddings
            print('Running forward pass on LFW images')
            batch_size = args.lfw_batch_size
            nrof_images = len(paths)
            nrof_batches = int(math.ceil(1.0*nrof_images / batch_size))  # total number of batches
            emb_array = np.zeros((nrof_images, embedding_size))
            for i in range(nrof_batches):
                start_index = i*batch_size
                end_index = min((i+1)*batch_size, nrof_images)
                paths_batch = paths[start_index:end_index]
                images = facenet.load_data(paths_batch, False, False, image_size)
                feed_dict = {images_placeholder: images, phase_train_placeholder: False}
                emb_array[start_index:end_index, :] = sess.run(embeddings, feed_dict=feed_dict)
            # 4. Compute accuracy and validation rate with 10-fold cross validation
            tpr, fpr, accuracy, val, val_std, far = lfw.evaluate(emb_array,
                actual_issame, nrof_folds=args.lfw_nrof_folds)
            print('Accuracy: %1.3f+-%1.3f' % (np.mean(accuracy), np.std(accuracy)))
            print('Validation rate: %2.5f+-%2.5f @ FAR=%2.5f' % (val, val_std, far))
            # Area under the ROC curve
            auc = metrics.auc(fpr, tpr)
            print('Area Under Curve (AUC): %1.3f' % auc)
            # Equal error rate (EER)
            eer = brentq(lambda x: 1. - x - interpolate.interp1d(fpr, tpr)(x), 0., 1.)
            print('Equal Error Rate (EER): %1.3f' % eer)

def parse_arguments(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument('lfw_dir', type=str,
        help='Path to the data directory containing aligned LFW face patches.')
    parser.add_argument('--lfw_batch_size', type=int,
        help='Number of images to process in a batch in the LFW test set.', default=100)
    parser.add_argument('model', type=str,
        help='Could be either a directory containing the meta_file and ckpt_file or a model protobuf (.pb) file')
    parser.add_argument('--image_size', type=int,
        help='Image size (height, width) in pixels.', default=160)
    parser.add_argument('--lfw_pairs', type=str,
        help='The file containing the pairs to use for validation.', default='data/pairs.txt')
    parser.add_argument('--lfw_file_ext', type=str,
        help='The file extension for the LFW dataset.', default='png', choices=['jpg', 'png'])
    parser.add_argument('--lfw_nrof_folds', type=int,
        help='Number of folds to use for cross validation. Mainly used for testing.', default=10)
    return parser.parse_args(argv)

if __name__ == '__main__':
    main(parse_arguments(sys.argv[1:]))

Gender and age recognition. https://github.com/dpressel/rude-carnie

The Adience dataset. http://www.openu.ac.il/home/hassner/Adience/data.html#agegender
26,580 images of 2,284 subjects, with ages in eight brackets (0~2, 4~6, 8~13, 15~20, 25~32, 38~43, 48~53, 60~) and plenty of noise, pose, and lighting variation. aligned # data cropped and aligned, faces # raw data. fold_0_data.txt through fold_4_data.txt carry the labels for all the data; fold_frontal_0_data.txt through fold_frontal_4_data.txt use only near-frontal faces. Record fields: user_id (the subject's Flickr account ID), original_image (image file name), face_id (person identifier), age, gender, x, y, dx, dy (face bounding box), tilt_ang (tilt angle), fiducial_yaw_angle (yaw angle), fiducial_score (fiducial score). https://www.flickr.com/

Data preprocessing. A script converts the data into TFRecords format: https://github.com/dpressel/rude-carnie/blob/master/preproc.py
The https://github.com/GilLevi/AgeGenderDeepLearning/tree/master/Folds folder already provides the training/test split and labels; the gender_train.txt and gender_val.txt image lists drive the conversion of the Adience data into TFRecords files. Images are processed into 256x256 JPEG-encoded RGB images, and tf.python_io.TFRecordWriter writes the TFRecords to the output file output_file, as sketched below.
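A minimal sketch of the TFRecords writing step. The feature key names are illustrative, not necessarily the ones preproc.py uses.

import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def write_examples(output_file, examples):
    # examples: iterable of (jpeg_bytes, label) pairs, e.g. 256x256 RGB JPEGs
    with tf.python_io.TFRecordWriter(output_file) as writer:
        for jpeg_bytes, label in examples:
            ex = tf.train.Example(features=tf.train.Features(feature={
                'image/encoded': _bytes_feature(jpeg_bytes),
                'image/class/label': _int64_feature(label),
            }))
            writer.write(ex.SerializeToString())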

Building the model. The age and gender models come from Gil Levi and Tal Hassner's paper 《Age and Gender
Classification Using Convolutional Neural
Networks》 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.722.9654&rank=1
Model code: https://github.com/dpressel/rude-carnie/blob/master/model.py , built on tensorflow.contrib.slim.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
import time
import os
import numpy as np
import tensorflow as tf
from data import distorted_inputs
import re
from tensorflow.contrib.layers import *
from tensorflow.contrib.slim.python.slim.nets.inception_v3 import inception_v3_base

TOWER_NAME = 'tower'

def select_model(name):
    if name.startswith('inception'):
        print('selected (fine-tuning) inception model')
        return inception_v3
    elif name == 'bn':
        print('selected batch norm model')
        return levi_hassner_bn
    print('selected default model')
    return levi_hassner

def get_checkpoint(checkpoint_path, requested_step=None, basename='checkpoint'):
    if requested_step is not None:
        model_checkpoint_path = '%s/%s-%s' % (checkpoint_path, basename, requested_step)
        if os.path.exists(model_checkpoint_path) is None:
            print('No checkpoint file found at [%s]' % checkpoint_path)
            exit(-1)
        print(model_checkpoint_path)
        return model_checkpoint_path, requested_step
    ckpt = tf.train.get_checkpoint_state(checkpoint_path)
    if ckpt and ckpt.model_checkpoint_path:
        # Restore checkpoint as described in top of this program
        print(ckpt.model_checkpoint_path)
        global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
        return ckpt.model_checkpoint_path, global_step
    else:
        print('No checkpoint file found at [%s]' % checkpoint_path)
        exit(-1)

def _activation_summary(x):
    # Strip the tower prefix so summaries from multiple towers share names
    tensor_name = re.sub('%s_[0-9]*/' % TOWER_NAME, '', x.op.name)
    tf.summary.histogram(tensor_name + '/activations', x)
    tf.summary.scalar(tensor_name + '/sparsity', tf.nn.zero_fraction(x))

def inception_v3(nlabels, images, pkeep, is_training):
    batch_norm_params = {
        "is_training": is_training,
        "trainable": True,
        # Decay for the moving averages.
        "decay": 0.9997,
        # Epsilon to prevent 0s in variance.
        "epsilon": 0.001,
        # Collection containing the moving mean and moving variance.
        "variables_collections": {
            "beta": None,
            "gamma": None,
            "moving_mean": ["moving_vars"],
            "moving_variance": ["moving_vars"],
        }
    }
    weight_decay = 0.00004
    stddev = 0.1
    weights_regularizer = tf.contrib.layers.l2_regularizer(weight_decay)
    with tf.variable_scope("InceptionV3", "InceptionV3", [images]) as scope:
        with tf.contrib.slim.arg_scope(
                [tf.contrib.slim.conv2d, tf.contrib.slim.fully_connected],
                weights_regularizer=weights_regularizer,
                trainable=True):
            with tf.contrib.slim.arg_scope(
                    [tf.contrib.slim.conv2d],
                    weights_initializer=tf.truncated_normal_initializer(stddev=stddev),
                    activation_fn=tf.nn.relu,
                    normalizer_fn=batch_norm,
                    normalizer_params=batch_norm_params):
                net, end_points = inception_v3_base(images, scope=scope)
                with tf.variable_scope("logits"):
                    shape = net.get_shape()
                    net = avg_pool2d(net, shape[1:3], padding="VALID", scope="pool")
                    net = tf.nn.dropout(net, pkeep, name='droplast')
                    net = flatten(net, scope="flatten")
    with tf.variable_scope('output') as scope:
        weights = tf.Variable(tf.truncated_normal([2048, nlabels], mean=0.0, stddev=0.01), name='weights')
        biases = tf.Variable(tf.constant(0.0, shape=[nlabels], dtype=tf.float32), name='biases')
        output = tf.add(tf.matmul(net, weights), biases, name=scope.name)
        _activation_summary(output)
    return output

def levi_hassner_bn(nlabels, images, pkeep, is_training):
    batch_norm_params = {
        "is_training": is_training,
        "trainable": True,
        # Decay for the moving averages.
        "decay": 0.9997,
        # Epsilon to prevent 0s in variance.
        "epsilon": 0.001,
        # Collection containing the moving mean and moving variance.
        "variables_collections": {
            "beta": None,
            "gamma": None,
            "moving_mean": ["moving_vars"],
            "moving_variance": ["moving_vars"],
        }
    }
    weight_decay = 0.0005
    weights_regularizer = tf.contrib.layers.l2_regularizer(weight_decay)
    with tf.variable_scope("LeviHassnerBN", "LeviHassnerBN", [images]) as scope:
        with tf.contrib.slim.arg_scope(
                [convolution2d, fully_connected],
                weights_regularizer=weights_regularizer,
                biases_initializer=tf.constant_initializer(1.),
                weights_initializer=tf.random_normal_initializer(stddev=0.005),
                trainable=True):
            with tf.contrib.slim.arg_scope(
                    [convolution2d],
                    weights_initializer=tf.random_normal_initializer(stddev=0.01),
                    normalizer_fn=batch_norm,
                    normalizer_params=batch_norm_params):
                conv1 = convolution2d(images, 96, [7, 7], [4, 4], padding='VALID',
                                      biases_initializer=tf.constant_initializer(0.), scope='conv1')
                pool1 = max_pool2d(conv1, 3, 2, padding='VALID', scope='pool1')
                conv2 = convolution2d(pool1, 256, [5, 5], [1, 1], padding='SAME', scope='conv2')
                pool2 = max_pool2d(conv2, 3, 2, padding='VALID', scope='pool2')
                conv3 = convolution2d(pool2, 384, [3, 3], [1, 1], padding='SAME',
                                      biases_initializer=tf.constant_initializer(0.), scope='conv3')
                pool3 = max_pool2d(conv3, 3, 2, padding='VALID', scope='pool3')
                # can use tf.contrib.layer.flatten
                flat = tf.reshape(pool3, [-1, 384*6*6], name='reshape')
                full1 = fully_connected(flat, 512, scope='full1')
                drop1 = tf.nn.dropout(full1, pkeep, name='drop1')
                full2 = fully_connected(drop1, 512, scope='full2')
                drop2 = tf.nn.dropout(full2, pkeep, name='drop2')
    with tf.variable_scope('output') as scope:
        weights = tf.Variable(tf.random_normal([512, nlabels], mean=0.0, stddev=0.01), name='weights')
        biases = tf.Variable(tf.constant(0.0, shape=[nlabels], dtype=tf.float32), name='biases')
        output = tf.add(tf.matmul(drop2, weights), biases, name=scope.name)
    return output

def levi_hassner(nlabels, images, pkeep, is_training):
    weight_decay = 0.0005
    weights_regularizer = tf.contrib.layers.l2_regularizer(weight_decay)
    with tf.variable_scope("LeviHassner", "LeviHassner", [images]) as scope:
        with tf.contrib.slim.arg_scope(
                [convolution2d, fully_connected],
                weights_regularizer=weights_regularizer,
                biases_initializer=tf.constant_initializer(1.),
                weights_initializer=tf.random_normal_initializer(stddev=0.005),
                trainable=True):
            with tf.contrib.slim.arg_scope(
                    [convolution2d],
                    weights_initializer=tf.random_normal_initializer(stddev=0.01)):
                conv1 = convolution2d(images, 96, [7, 7], [4, 4], padding='VALID',
                                      biases_initializer=tf.constant_initializer(0.), scope='conv1')
                pool1 = max_pool2d(conv1, 3, 2, padding='VALID', scope='pool1')
                norm1 = tf.nn.local_response_normalization(pool1, 5, alpha=0.0001, beta=0.75, name='norm1')
                conv2 = convolution2d(norm1, 256, [5, 5], [1, 1], padding='SAME', scope='conv2')
                pool2 = max_pool2d(conv2, 3, 2, padding='VALID', scope='pool2')
                norm2 = tf.nn.local_response_normalization(pool2, 5, alpha=0.0001, beta=0.75, name='norm2')
                conv3 = convolution2d(norm2, 384, [3, 3], [1, 1],
                                      biases_initializer=tf.constant_initializer(0.), padding='SAME', scope='conv3')
                pool3 = max_pool2d(conv3, 3, 2, padding='VALID', scope='pool3')
                flat = tf.reshape(pool3, [-1, 384*6*6], name='reshape')
                full1 = fully_connected(flat, 512, scope='full1')
                drop1 = tf.nn.dropout(full1, pkeep, name='drop1')
                full2 = fully_connected(drop1, 512, scope='full2')
                drop2 = tf.nn.dropout(full2, pkeep, name='drop2')
    with tf.variable_scope('output') as scope:
        weights = tf.Variable(tf.random_normal([512, nlabels], mean=0.0, stddev=0.01), name='weights')
        biases = tf.Variable(tf.constant(0.0, shape=[nlabels], dtype=tf.float32), name='biases')
        output = tf.add(tf.matmul(drop2, weights), biases, name=scope.name)
    return output

Training the model. https://github.com/dpressel/rude-carnie/blob/master/train.py

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from six.moves import xrange
from datetime import datetime
import time
import os
import numpy as np
import tensorflow as tf
from data import distorted_inputs
from model import select_model
import json
import re

LAMBDA = 0.01
MOM = 0.9
tf.app.flags.DEFINE_string('pre_checkpoint_path', '',
                           """If specified, restore this pretrained model """
                           """before beginning any training.""")
tf.app.flags.DEFINE_string('train_dir',
                           '/home/dpressel/dev/work/AgeGenderDeepLearning/Folds/tf/test_fold_is_0',
                           'Training directory')
tf.app.flags.DEFINE_boolean('log_device_placement', False,
                            """Whether to log device placement.""")
tf.app.flags.DEFINE_integer('num_preprocess_threads', 4,
                            'Number of preprocessing threads')
tf.app.flags.DEFINE_string('optim', 'Momentum', 'Optimizer')
tf.app.flags.DEFINE_integer('image_size', 227, 'Image size')
tf.app.flags.DEFINE_float('eta', 0.01, 'Learning rate')
tf.app.flags.DEFINE_float('pdrop', 0., 'Dropout probability')
tf.app.flags.DEFINE_integer('max_steps', 40000, 'Number of iterations')
tf.app.flags.DEFINE_integer('steps_per_decay', 10000,
                            'Number of steps before learning rate decay')
tf.app.flags.DEFINE_float('eta_decay_rate', 0.1, 'Learning rate decay')
tf.app.flags.DEFINE_integer('epochs', -1, 'Number of epochs')
tf.app.flags.DEFINE_integer('batch_size', 128, 'Batch size')
tf.app.flags.DEFINE_string('checkpoint', 'checkpoint', 'Checkpoint name')
tf.app.flags.DEFINE_string('model_type', 'default', 'Type of convnet')
tf.app.flags.DEFINE_string('pre_model', '',  # './inception_v3.ckpt',
                           'checkpoint file')
FLAGS = tf.app.flags.FLAGS

# Staircase-decay the learning rate every at_step steps
def exponential_staircase_decay(at_step=10000, decay_rate=0.1):
    print('decay [%f] every [%d] steps' % (decay_rate, at_step))
    def _decay(lr, global_step):
        return tf.train.exponential_decay(lr, global_step,
                                          at_step, decay_rate, staircase=True)
    return _decay

def optimizer(optim, eta, loss_fn, at_step, decay_rate):
    global_step = tf.Variable(0, trainable=False)
    optz = optim
    if optim == 'Adadelta':
        optz = lambda lr: tf.train.AdadeltaOptimizer(lr, 0.95, 1e-6)
        lr_decay_fn = None
    elif optim == 'Momentum':
        optz = lambda lr: tf.train.MomentumOptimizer(lr, MOM)
        lr_decay_fn = exponential_staircase_decay(at_step, decay_rate)
    return tf.contrib.layers.optimize_loss(loss_fn, global_step, eta, optz,
                                           clip_gradients=4., learning_rate_decay_fn=lr_decay_fn)

def loss(logits, labels):
    labels = tf.cast(labels, tf.int32)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits, labels=labels, name='cross_entropy_per_example')
    cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
    tf.add_to_collection('losses', cross_entropy_mean)
    losses = tf.get_collection('losses')
    regularization_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
    total_loss = cross_entropy_mean + LAMBDA * sum(regularization_losses)
    tf.summary.scalar('tl (raw)', total_loss)
    #total_loss = tf.add_n(losses + regularization_losses, name='total_loss')
    loss_averages = tf.train.ExponentialMovingAverage(0.9, name='avg')
    loss_averages_op = loss_averages.apply(losses + [total_loss])
    for l in losses + [total_loss]:
        tf.summary.scalar(l.op.name + ' (raw)', l)
        tf.summary.scalar(l.op.name, loss_averages.average(l))
    with tf.control_dependencies([loss_averages_op]):
        total_loss = tf.identity(total_loss)
    return total_loss

def main(argv=None):
    with tf.Graph().as_default():
        model_fn = select_model(FLAGS.model_type)
        # Open the metadata file md.json (generated during preprocessing)
        # and figure out nlabels and the size of an epoch
        input_file = os.path.join(FLAGS.train_dir, 'md.json')
        print(input_file)
        with open(input_file, 'r') as f:
            md = json.load(f)
        images, labels, _ = distorted_inputs(FLAGS.train_dir, FLAGS.batch_size,
                                             FLAGS.image_size, FLAGS.num_preprocess_threads)
        logits = model_fn(md['nlabels'], images, 1-FLAGS.pdrop, True)
        total_loss = loss(logits, labels)
        train_op = optimizer(FLAGS.optim, FLAGS.eta, total_loss,
                             FLAGS.steps_per_decay, FLAGS.eta_decay_rate)
        saver = tf.train.Saver(tf.global_variables())
        summary_op = tf.summary.merge_all()
        sess = tf.Session(config=tf.ConfigProto(
            log_device_placement=FLAGS.log_device_placement))
        tf.global_variables_initializer().run(session=sess)
        # This is total hackland, it only works to fine-tune iv3
        # (a pretrained Inception V3 checkpoint can be passed in for fine-tuning)
        if FLAGS.pre_model:
            inception_variables = tf.get_collection(
                tf.GraphKeys.VARIABLES, scope="InceptionV3")
            restorer = tf.train.Saver(inception_variables)
            restorer.restore(sess, FLAGS.pre_model)
        if FLAGS.pre_checkpoint_path:
            if tf.gfile.Exists(FLAGS.pre_checkpoint_path) is True:
                print('Trying to restore checkpoint from %s' % FLAGS.pre_checkpoint_path)
                restorer = tf.train.Saver()
                tf.train.latest_checkpoint(FLAGS.pre_checkpoint_path)
                print('%s: Pre-trained model restored from %s' %
                      (datetime.now(), FLAGS.pre_checkpoint_path))
        # Store checkpoint files in a run-(pid) directory
        run_dir = '%s/run-%d' % (FLAGS.train_dir, os.getpid())
        checkpoint_path = '%s/%s' % (run_dir, FLAGS.checkpoint)
        if tf.gfile.Exists(run_dir) is False:
            print('Creating %s' % run_dir)
            tf.gfile.MakeDirs(run_dir)
        tf.train.write_graph(sess.graph_def, run_dir, 'model.pb', as_text=True)
        tf.train.start_queue_runners(sess=sess)
        summary_writer = tf.summary.FileWriter(run_dir, sess.graph)
        steps_per_train_epoch = int(md['train_counts'] / FLAGS.batch_size)
        num_steps = FLAGS.max_steps if FLAGS.epochs < 1 else FLAGS.epochs * steps_per_train_epoch
        print('Requested number of steps [%d]' % num_steps)

        for step in xrange(num_steps):
            start_time = time.time()
            _, loss_value = sess.run([train_op, total_loss])
            duration = time.time() - start_time
            assert not np.isnan(loss_value), 'Model diverged with loss = NaN'
            # Print training statistics every 10 steps
            if step % 10 == 0:
                num_examples_per_step = FLAGS.batch_size
                examples_per_sec = num_examples_per_step / duration
                sec_per_batch = float(duration)
                format_str = ('%s: step %d, loss = %.3f (%.1f examples/sec; %.3f '
                              'sec/batch)')
                print(format_str % (datetime.now(), step, loss_value,
                                    examples_per_sec, sec_per_batch))
            # Write a summary file every 100 steps
            if step % 100 == 0:
                summary_str = sess.run(summary_op)
                summary_writer.add_summary(summary_str, step)
            # Save a checkpoint every 1000 steps and at the last step
            if step % 1000 == 0 or (step + 1) == num_steps:
                saver.save(sess, checkpoint_path, global_step=step)

if __name__ == '__main__':
    tf.app.run()

Running inference with the trained model. https://github.com/dpressel/rude-carnie/blob/master/guess.py

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
import math
import time
from data import inputs
import numpy as np
import tensorflow as tf
from model import select_model, get_checkpoint
from utils import *
import os
import json
import csv

RESIZE_FINAL = 227
GENDER_LIST = ['M', 'F']
AGE_LIST = ['(0, 2)', '(4, 6)', '(8, 12)', '(15, 20)', '(25, 32)',
            '(38, 43)', '(48, 53)', '(60, 100)']
MAX_BATCH_SZ = 128
tf.app.flags.DEFINE_string('model_dir', '',
                           'Model directory (where training data lives)')
tf.app.flags.DEFINE_string('class_type', 'age',
                           'Classification type (age|gender)')
tf.app.flags.DEFINE_string('device_id', '/cpu:0',
                           'What processing unit to execute inference on')
tf.app.flags.DEFINE_string('filename', '',
                           'File (Image) or File list (Text/No header TSV) to process')
tf.app.flags.DEFINE_string('target', '',
                           'CSV file containing the filename processed along with best guess and score')
tf.app.flags.DEFINE_string('checkpoint', 'checkpoint',
                           'Checkpoint basename')
tf.app.flags.DEFINE_string('model_type', 'default',
                           'Type of convnet')
tf.app.flags.DEFINE_string('requested_step', '',
                           'Within the model directory, a requested step to restore e.g., 9000')
tf.app.flags.DEFINE_boolean('single_look', False,
                            'single look at the image or multiple crops')
tf.app.flags.DEFINE_string('face_detection_model', '',
                           'Do frontal face detection with model specified')
tf.app.flags.DEFINE_string('face_detection_type', 'cascade',
                           'Face detection model type (yolo_tiny|cascade)')
FLAGS = tf.app.flags.FLAGS

def one_of(fname, types):
    return any([fname.endswith('.' + ty) for ty in types])

def resolve_file(fname):
    if os.path.exists(fname): return fname
    for suffix in ('.jpg', '.png', '.JPG', '.PNG', '.jpeg'):
        cand = fname + suffix
        if os.path.exists(cand):
            return cand
    return None

def classify_many_single_crop(sess, label_list, softmax_output, coder, images, image_files, writer):
    try:
        num_batches = math.ceil(len(image_files) / MAX_BATCH_SZ)
        pg = ProgressBar(num_batches)
        for j in range(num_batches):
            start_offset = j * MAX_BATCH_SZ
            end_offset = min((j + 1) * MAX_BATCH_SZ, len(image_files))
            batch_image_files = image_files[start_offset:end_offset]
            print(start_offset, end_offset, len(batch_image_files))
            image_batch = make_multi_image_batch(batch_image_files, coder)
            batch_results = sess.run(softmax_output, feed_dict={images: image_batch.eval()})
            batch_sz = batch_results.shape[0]
            for i in range(batch_sz):
                output_i = batch_results[i]
                best_i = np.argmax(output_i)
                best_choice = (label_list[best_i], output_i[best_i])
                print('Guess @ 1 %s, prob = %.2f' % best_choice)
                if writer is not None:
                    f = batch_image_files[i]
                    writer.writerow((f, best_choice[0], '%.2f' % best_choice[1]))
            pg.update()
        pg.done()
    except Exception as e:
        print(e)
        print('Failed to run all images')

def classify_one_multi_crop(sess, label_list, softmax_output, coder, images, image_file, writer):
    try:
        print('Running file %s' % image_file)
        image_batch = make_multi_crop_batch(image_file, coder)
        batch_results = sess.run(softmax_output, feed_dict={images: image_batch.eval()})
        output = batch_results[0]
        batch_sz = batch_results.shape[0]
        # Average the softmax outputs over all crops
        for i in range(1, batch_sz):
            output = output + batch_results[i]
        output /= batch_sz
        best = np.argmax(output)  # most likely class
        best_choice = (label_list[best], output[best])
        print('Guess @ 1 %s, prob = %.2f' % best_choice)
        nlabels = len(label_list)
        if nlabels > 2:
            output[best] = 0
            second_best = np.argmax(output)
            print('Guess @ 2 %s, prob = %.2f' % (label_list[second_best], output[second_best]))
        if writer is not None:
            writer.writerow((image_file, best_choice[0], '%.2f' % best_choice[1]))
    except Exception as e:
        print(e)
        print('Failed to run image %s ' % image_file)

def list_images(srcfile):
    with open(srcfile, 'r') as csvfile:
        delim = ',' if srcfile.endswith('.csv') else '\t'
        reader = csv.reader(csvfile, delimiter=delim)
        if srcfile.endswith('.csv') or srcfile.endswith('.tsv'):
            print('skipping header')
            _ = next(reader)
        return [row[0] for row in reader]

def main(argv=None):  # pylint: disable=unused-argument
    files = []
    if FLAGS.face_detection_model:
        print('Using face detector (%s) %s' % (FLAGS.face_detection_type, FLAGS.face_detection_model))
        face_detect = face_detection_model(FLAGS.face_detection_type, FLAGS.face_detection_model)
        face_files, rectangles = face_detect.run(FLAGS.filename)
        print(face_files)
        files += face_files
    config = tf.ConfigProto(allow_soft_placement=True)
    with tf.Session(config=config) as sess:
        label_list = AGE_LIST if FLAGS.class_type == 'age' else GENDER_LIST
        nlabels = len(label_list)
        print('Executing on %s' % FLAGS.device_id)
        model_fn = select_model(FLAGS.model_type)
        with tf.device(FLAGS.device_id):
            images = tf.placeholder(tf.float32, [None, RESIZE_FINAL, RESIZE_FINAL, 3])
            logits = model_fn(nlabels, images, 1, False)
            init = tf.global_variables_initializer()
            requested_step = FLAGS.requested_step if FLAGS.requested_step else None
            checkpoint_path = '%s' % (FLAGS.model_dir)
            model_checkpoint_path, global_step = get_checkpoint(checkpoint_path, requested_step, FLAGS.checkpoint)
            saver = tf.train.Saver()
            saver.restore(sess, model_checkpoint_path)
            softmax_output = tf.nn.softmax(logits)
            coder = ImageCoder()
            # Support a batch mode if no face detection model
            if len(files) == 0:
                if (os.path.isdir(FLAGS.filename)):
                    for relpath in os.listdir(FLAGS.filename):
                        abspath = os.path.join(FLAGS.filename, relpath)
                        if os.path.isfile(abspath) and any([abspath.endswith('.' + ty) for ty in ('jpg', 'png', 'JPG', 'PNG', 'jpeg')]):
                            print(abspath)
                            files.append(abspath)
                else:
                    files.append(FLAGS.filename)
                    # If it happens to be a list file, read the list and clobber the files
                    if any([FLAGS.filename.endswith('.' + ty) for ty in ('csv', 'tsv', 'txt')]):
                        files = list_images(FLAGS.filename)
            writer = None
            output = None
            if FLAGS.target:
                print('Creating output file %s' % FLAGS.target)
                output = open(FLAGS.target, 'w')
                writer = csv.writer(output)
                writer.writerow(('file', 'label', 'score'))
            image_files = list(filter(lambda x: x is not None, [resolve_file(f) for f in files]))
            print(image_files)
            if FLAGS.single_look:
                classify_many_single_crop(sess, label_list, softmax_output, coder,
                                          images, image_files, writer)
            else:
                for image_file in image_files:
                    classify_one_multi_crop(sess, label_list, softmax_output, coder,
                                            images, image_file, writer)
            if output is not None:
                output.close()

if __name__ == '__main__':
    tf.app.run()

Microsoft's face-based age and gender guessing site: http://how-old.net/
It estimates age and gender from a photo, and can find photos by topic.

References:
《TensorFlow技术解析与实战》

Referrals for machine learning jobs in Shanghai are welcome. My WeChat: qingxingfengzi
