This article belongs to Shen Qingyang (沈庆阳); please contact the author before reposting!
Before going further, let's take a look at the GitHub page of the Object Detection API in the Google TensorFlow Models repository.
Its main contents are: a quick start, environment setup, running the Object Detection API, and some extra material. Reading through these helps greatly in understanding and using TensorFlow's Object Detection API.
Also, applying a machine learning algorithm to a problem is in essence a search for a target function: we build a machine learning model (which defines a set of candidate functions), drive it with training data, and look for a function that fits the training data and also performs well on test data. Choosing a suitable model is therefore particularly important.
On the TensorFlow detection model zoo page we can find models pre-trained by the TensorFlow project maintainers on datasets such as COCO, Kitti, and Open Images.
COCO-trained models
Model name | Speed (ms) | COCO mAP | Outputs
---|---|---|---
ssd_mobilenet_v1_coco | 30 | 21 | Boxes
ssd_inception_v2_coco | 42 | 24 | Boxes
faster_rcnn_inception_v2_coco | 58 | 28 | Boxes
faster_rcnn_resnet50_coco | 89 | 30 | Boxes
faster_rcnn_resnet50_lowproposals_coco | 64 | | Boxes
rfcn_resnet101_coco | 92 | 30 | Boxes
faster_rcnn_resnet101_coco | 106 | 32 | Boxes
faster_rcnn_resnet101_lowproposals_coco | 82 | | Boxes
faster_rcnn_inception_resnet_v2_atrous_coco | 620 | 37 | Boxes
faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco | 241 | | Boxes
faster_rcnn_nas | 1833 | 43 | Boxes
faster_rcnn_nas_lowproposals_coco | 540 | | Boxes
mask_rcnn_inception_resnet_v2_atrous_coco | 771 | 36 | Masks
mask_rcnn_inception_v2_coco | 79 | 25 | Masks
mask_rcnn_resnet101_atrous_coco | 470 | 33 | Masks
mask_rcnn_resnet50_atrous_coco | 343 | 29 | Masks
Since we are aiming for real-time detection speed, we choose the fastest model here: ssd_mobilenet_v1_coco.
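The fine_tune_checkpoint setting later in this article points at a ssd_mobilenet_v1_coco_2017_11_17 folder, which holds the pre-trained checkpoint from the model zoo. A minimal sketch for fetching and unpacking it, assuming the tarball is still hosted at its usual download.tensorflow.org location:

# Download and unpack the pre-trained SSD MobileNet v1 checkpoint
# (URL taken from the detection model zoo; adjust if the file has moved)
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2017_11_17.tar.gz
tar -xzf ssd_mobilenet_v1_coco_2017_11_17.tar.gz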
Writing the Training Configuration File
Use wget or another downloader to download the corresponding config file. The ssd_mobilenet_v1_coco config file used here can be downloaded from:
https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v1_coco.config
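Note that the github.com address above serves an HTML page. To fetch the raw file with wget, use the matching raw.githubusercontent.com address (assumed here to mirror the blob URL's path):

# Download the raw config file rather than the GitHub HTML page
wget https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/samples/configs/ssd_mobilenet_v1_coco.config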
Open the ssd_mobilenet_v1_coco.config file:
# SSD with Mobilenet v1 configuration for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
  ssd {
    num_classes: 1
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
          batch_norm {
            train: true,
            scale: true,
            center: true,
            decay: 0.9997,
            epsilon: 0.001,
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v1'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.9997,
          epsilon: 0.001,
        }
      }
    }
    loss {
      classification_loss {
        weighted_sigmoid {
        }
      }
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.99
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 0
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
  }
}
train_config: {
  batch_size: 24
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco_2017_11_17/model.ckpt"
  from_detection_checkpoint: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 200000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "data/train.record"
  }
  label_map_path: "training/object-detection.pbtxt"
}
eval_config: {
  num_examples: 8000
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "data/test.record"
  }
  label_map_path: "training/object-detection.pbtxt"
  shuffle: false
  num_readers: 1
}
Modify line 9:
num_classes: 1
Modify line 175:
input_path: "data/train.record"
Modify lines 177 and 191:
label_map_path: "training/object-detection.pbtxt"
Modify line 189:
input_path: "data/test.record"
Line 9 sets the number of object classes to train; since we train only a single class here, the value is 1. Line 175 points to the train.record file used as training input. Lines 177 and 191 point to the label map, which we will create below. Line 189 points to the test data. The fine_tune_checkpoint path has likewise been set to the downloaded ssd_mobilenet_v1_coco_2017_11_17 checkpoint, as shown above. Adjust all of these settings to your own setup as needed.
Go to the training directory and create a blank file named object-detection.pbtxt. Open it, enter the following content, and save it.
item {
id: 1
name: 'pen'
}
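If you later train more than one class, the label map simply gains one item per class, with ids numbered consecutively from 1. A hypothetical two-class sketch (the 'pencil' class is invented purely for illustration):

item {
  id: 1
  name: 'pen'
}
item {
  id: 2
  name: 'pencil'
}

Keep num_classes in the config file in sync with the number of items here.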
Starting the Training
Copy the data folder, the images folder, the ssd_mobilenet_v1_coco_2017_11_17 folder, the training folder, and the ssd_mobilenet_v1_coco.config file from your project directory into the models/research/object_detection directory, choosing to merge when prompted. (The commands below reference the config file as training/ssd_mobilenet_v1_coco.config, so make sure a copy of it sits inside the training folder.)
Open a terminal and change into the models/research directory. Add the Object Detection libraries to the Python path:
# From tensorflow/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
Note: the export only remains in effect for the current login session.
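If you do not want to re-run the export in every new shell, you can append it to your shell startup file instead. A sketch assuming bash and that the repository was cloned to ~/tensorflowProject/models (substitute your own checkout path):

# Persist the PYTHONPATH change across shell sessions (checkout path is an assumption)
echo 'export PYTHONPATH=$PYTHONPATH:$HOME/tensorflowProject/models/research:$HOME/tensorflowProject/models/research/slim' >> ~/.bashrc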
In the terminal, change into the models/research/object_detection directory and run the training program with the following command:
python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_coco.config
When your terminal starts printing output like the following, the training program has started correctly:
INFO:tensorflow:Restoring parameters from training/model.ckpt-1
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Starting Session.
INFO:tensorflow:Saving checkpoint to path training/model.ckpt
INFO:tensorflow:Starting Queues.
INFO:tensorflow:global_step/sec: 0
INFO:tensorflow:Recording summary at step 1.
INFO:tensorflow:global step 2: loss = 2.2202 (2.397 sec/step)
INFO:tensorflow:global step 3: loss = 2.5926 (1.749 sec/step)
INFO:tensorflow:global step 4: loss = 1.7984 (1.980 sec/step)
INFO:tensorflow:global step 5: loss = 1.5214 (1.734 sec/step)
INFO:tensorflow:global step 6: loss = 1.7882 (1.147 sec/step)
Visualizing Training with TensorBoard
TensorBoard lets you visualize the learning process.
The computations involved are typically the complex, hard-to-follow computations that arise when training large deep neural networks.
To make TensorFlow programs easier to understand, debug, and optimize, Google released a visualization toolkit for TensorFlow called TensorBoard. You can use TensorBoard to display your TensorFlow graph, plot quantitative metrics from its execution, and show additional data.
(Figure: the TensorBoard interface.)
While the TensorFlow program is running, event files appear in your training folder. Open a new terminal and start TensorBoard with the following command:
jack@jack:~/tensorflowProject/object_detection/models/object_detection$ tensorboard --logdir='training'
Once it is running, you will see a message like this:
TensorBoard 1.7.0a20180302 at http://127.0.0.1:6006 (Press CTRL+C to quit)
Open http://127.0.0.1:6006 in a browser and the TensorBoard interface will load.
Once the TotalLoss curve in TensorBoard drops below roughly 1 and stays stable, you can stop training.
At this point your training directory will contain files along the following lines.
The training program saved model checkpoints at steps such as 375, 772, and 970. Make sure the checkpoint for the step you pick has files with the .index, .meta, and .data-00000-of-00001 suffixes.
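The export command below needs the checkpoint prefix including its step number. Rather than reading it off a directory listing, you can ask TensorFlow for the newest one; a minimal sketch using the TF 1.x API assumed throughout this article:

# Print the newest checkpoint prefix in training/, e.g. "training/model.ckpt-1167"
import tensorflow as tf

print(tf.train.latest_checkpoint('training'))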
Go back to the parent directory of training, open a terminal, and enter the following command.
python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/ssd_mobilenet_v1_coco.config \
    --trained_checkpoint_prefix training/model.ckpt-1167 \
    --output_directory pen_graph
If the program runs without errors, the graph has been exported to the pen_graph directory.
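As a quick sanity check that the export succeeded, you can try loading the frozen graph. This is a minimal sketch built from the same TF 1.x graph-loading calls that the tutorial notebook uses:

# Parse and import the exported frozen graph to confirm it is intact
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('pen_graph/frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
print('Loaded frozen graph with {} ops'.format(len(graph.get_operations())))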
Testing the Trained Model
Back in the object_detection directory, open the object_detection_tutorial.ipynb file with Jupyter Notebook.
Modify line 60 as follows:
# What model to download.
MODEL_NAME = 'pen_graph'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('training', 'object-detection.pbtxt')
NUM_CLASSES = 1
Modify line 64 as follows:
# We will test on five images: image3.jpg through image7.jpg.
# If you want to test the code with your images, just add the paths to the images to TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(3, 8) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
Go to the test_images folder under object_detection and rename five images containing pens to image3.jpg through image7.jpg (the range the code above iterates over). Back in Jupyter Notebook, click Cell -> Run All and scroll to the bottom of the page.
At the bottom of the page, detection runs on the images image3.jpg through image7.jpg in the test_images folder, and the pens in them are marked with boxes.
With that, we have successfully trained our model. Following the previous article, you can now modify the code to run the detection on a camera feed, as sketched below.
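For reference, here is a compressed sketch of that camera loop. It assumes OpenCV (cv2) is installed and reuses the detection_graph, category_index, and vis_util objects already defined in the tutorial notebook; the tensor names are the standard ones produced by export_inference_graph.py:

# Minimal webcam detection loop reusing the notebook's detection_graph and category_index
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # default camera
with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            # Note: OpenCV delivers BGR frames; for best accuracy convert to RGB first
            image_np_expanded = np.expand_dims(frame, axis=0)
            boxes, scores, classes, num = sess.run(
                [detection_graph.get_tensor_by_name('detection_boxes:0'),
                 detection_graph.get_tensor_by_name('detection_scores:0'),
                 detection_graph.get_tensor_by_name('detection_classes:0'),
                 detection_graph.get_tensor_by_name('num_detections:0')],
                feed_dict={detection_graph.get_tensor_by_name('image_tensor:0'): image_np_expanded})
            vis_util.visualize_boxes_and_labels_on_image_array(
                frame, np.squeeze(boxes), np.squeeze(classes).astype(np.int32),
                np.squeeze(scores), category_index,
                use_normalized_coordinates=True, line_thickness=8)
            cv2.imshow('object detection', frame)
            if cv2.waitKey(25) & 0xFF == ord('q'):  # press q to quit
                break
cap.release()
cv2.destroyAllWindows()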
If you found this article helpful, please give it a like ♥ ~
Thanks for your support!