High-level API
- tf
- gfile
- Exists(data_file)
- decode_csv(value, record_defaults=_CSV_COLUMN_DEFAULTS)
- equal(labels, '>50K')
- sigmoid
- data
- Dataset
A Dataset holds many elements; each element contains one or more tensors, possibly in a nested structure. Elements are pulled out of a Dataset with an Iterator.
- from_tensors()
- from_tensor_slices()
- zip((dataset1, dataset2))
- map()
Transforms a Dataset and returns the transformed Dataset.
- flat_map()
Transforms a Dataset and returns the transformed Dataset.
- filter()
Transforms a Dataset and returns the transformed Dataset.
- range(100)
- skip()
- make_one_shot_iterator()
- make_initializable_iterator()
- TFRecordDataset
- TextLineDataset(data_file)
- shuffle(buffer_size=_SHUFFLE_BUFFER)
- Iterator
Use an Iterator to fetch one element from a Dataset; an element contains one or more tensors.
- initializer
- get_next()
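The Dataset and Iterator entries above chain together into one input pipeline. A minimal sketch, assuming TensorFlow 2.x, where a Dataset is directly iterable under eager execution; in TF 1.x you would instead pull elements with `make_one_shot_iterator().get_next()` inside a Session:

```python
import tensorflow as tf

# Build a Dataset from an in-memory tensor: one element per slice.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])

# Chain transformations; each returns a new, transformed Dataset.
dataset = (dataset
           .map(lambda x: x * 2)     # 2, 4, 6, 8, 10, 12
           .filter(lambda x: x > 4)  # 6, 8, 10, 12
           .batch(2))                # [6, 8], [10, 12]

# Under eager execution the Dataset is directly iterable.
elements = [batch.numpy().tolist() for batch in dataset]
print(elements)  # [[6, 8], [10, 12]]
```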
- contrib
- layers
- real_valued_column()
- flatten(x)
- fully_connected(images_flat, 62, tf.nn.relu)
- learn
- datasets
- base
- load_csv_with_header(filename=IRIS_TRAINING,target_dtype=np.int,features_dtype=np.float32)
- data
- target
- load_csv_without_header(filename=abalone_train, target_dtype=np.int, features_dtype=np.float64)
- feature_column
Configures the neural network's inputs.
- numeric_column("x", shape=[4])
- numeric_column('age')
A continuous feature column.
- bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
A bucketized continuous column; the buckets let the model capture non-linear effects.
- categorical_column_with_vocabulary_list('relationship', ['Husband', 'Not-in-family', 'Wife', 'Own-child', 'Unmarried', 'Other-relative'])
A categorical (discrete) feature column.
- categorical_column_with_hash_bucket('occupation', hash_bucket_size=1000)
A categorical (discrete) feature column.
- crossed_column(['education', 'occupation'], hash_bucket_size=1000)
A crossed (combination) column.
- crossed_column([age_buckets, 'education', 'occupation'], hash_bucket_size=1000)
A nested crossed column.
- input_layer(features=features, feature_columns=[age, height, weight])
Creates the input layer of the neural network.
- inputs
Creates the model's input function.
- numpy_input_fn(x={"x": np.array(training_set.data)}, y=np.array(training_set.target), num_epochs=None, shuffle=True)
Feeds numpy arrays to the input function.
- pandas_input_fn(x=pd.DataFrame({"x": x_data}), y=pd.Series(y_data), ...)
Feeds pandas DataFrames to the input function.
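The feature-column calls above compose as follows. A minimal sketch assuming `tf.feature_column` is available (TF 1.x and most of TF 2.x); the `relationship` cross is illustrative, standing in for the `education`/`occupation` cross in the list:

```python
import tensorflow as tf

# Continuous (numeric) column.
age = tf.feature_column.numeric_column('age')

# Bucketize the continuous column so a linear model can learn a
# separate weight per age range (non-linear in the raw feature).
age_buckets = tf.feature_column.bucketized_column(
    age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])

# Categorical column backed by an explicit vocabulary.
relationship = tf.feature_column.categorical_column_with_vocabulary_list(
    'relationship',
    ['Husband', 'Not-in-family', 'Wife', 'Own-child', 'Unmarried', 'Other-relative'])

# Crossed column: feature interactions hashed into 1000 buckets.
age_x_rel = tf.feature_column.crossed_column(
    [age_buckets, 'relationship'], hash_bucket_size=1000)

print(age.key, age_buckets.boundaries, age_x_rel.hash_bucket_size)
```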
- layers
Configures the neural network's layers.
- dense(inputs=input_layer, units=10, activation=tf.nn.relu)
Creates a fully connected hidden layer of the neural network.
- dropout
- flatten
- conv1d
- conv2d
- conv3d
- separable_conv2d
- conv2d_transpose
- conv3d_transpose
- average_pooling1d
- max_pooling1d
- average_pooling2d
- max_pooling2d
- average_pooling3d
- max_pooling3d
- batch_normalization
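The `tf.layers` calls above stack into a small conv net. A sketch assuming the TF 1.x graph-mode API, accessed via `tf.compat.v1` so it also runs on TF 2.x; the filter counts and image size are illustrative:

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # tf.layers is a TF 1.x graph-mode API

# A 28x28 single-channel image batch; batch size left dynamic.
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 28, 28, 1])

# conv -> pool -> flatten -> dense -> dropout, mirroring the calls above.
conv = tf.compat.v1.layers.conv2d(x, filters=8, kernel_size=3,
                                  padding='same', activation=tf.nn.relu)
pool = tf.compat.v1.layers.max_pooling2d(conv, pool_size=2, strides=2)
flat = tf.compat.v1.layers.flatten(pool)
hidden = tf.compat.v1.layers.dense(flat, units=10, activation=tf.nn.relu)
dropped = tf.compat.v1.layers.dropout(hidden, rate=0.2, training=False)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    out = sess.run(dropped, feed_dict={x: np.zeros((4, 28, 28, 1), np.float32)})
print(out.shape)  # (4, 10)
```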
- estimator
The estimator module ships with several predefined models: LinearClassifier, LinearRegressor, DNNClassifier, DNNRegressor, DNNLinearCombinedClassifier, and DNNLinearCombinedRegressor. You can also build a custom model.
- BaselineClassifier
- BaselineRegressor
- LinearRegressor(feature_columns=feature_columns)
- train(input_fn=input_fn, steps=1000)
- evaluate(input_fn=train_input_fn)
- LinearClassifier(model_dir=model_dir, feature_columns=base_columns + crossed_columns, optimizer=tf.train.FtrlOptimizer(learning_rate=0.1, l1_regularization_strength=1.0, l2_regularization_strength=1.0))
The l1 and l2 regularization terms help control overfitting.
- train(input_fn=lambda: input_fn(train_data, num_epochs, True, batch_size))
- evaluate(input_fn=lambda: input_fn(test_data, 1, False, batch_size))
- DNNClassifier
- DNNRegressor(feature_columns=feature_cols,hidden_units=[10, 10], model_dir="/tmp/boston_model")
- predict(input_fn=get_input_fn(prediction_set, num_epochs=1, shuffle=False))
- DNNLinearCombinedClassifier
- DNNLinearCombinedRegressor
- DNNClassifier(feature_columns=feature_columns,hidden_units=[10, 20, 10],n_classes=3,model_dir="/tmp/iris_model")
- DNNClassifier(feature_columns=[age, height, weight], hidden_units=[10, 10, 10], activation_fn=tf.nn.relu, dropout=0.2, n_classes=3, optimizer="Adam")
- train(input_fn=train_input_fn, steps=2000)
- evaluate(input_fn=test_input_fn)["accuracy"]
- predict(input_fn=predict_input_fn)
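The train/evaluate/predict cycle for a canned estimator looks roughly like this. A sketch assuming `tf.estimator` is available (TF 1.x up to TF 2.15); the synthetic 4-feature, 3-class data stands in for the iris set referenced above:

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic dataset standing in for the iris data.
x_train = np.random.rand(30, 4).astype(np.float32)
y_train = np.random.randint(0, 3, size=(30,)).astype(np.int32)

feature_columns = [tf.feature_column.numeric_column('x', shape=[4])]

classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 10],
    n_classes=3)

def train_input_fn():
    ds = tf.data.Dataset.from_tensor_slices(({'x': x_train}, y_train))
    return ds.shuffle(30).repeat().batch(10)

def eval_input_fn():
    return tf.data.Dataset.from_tensor_slices(({'x': x_train}, y_train)).batch(10)

classifier.train(input_fn=train_input_fn, steps=5)
metrics = classifier.evaluate(input_fn=eval_input_fn)
print(sorted(metrics))  # metric names, including accuracy and loss
```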
- Estimator(model_fn=model_fn, model_dir=None, config=None,params=model_params)
Use this class to build a custom model.
- model_fn(features, labels, mode, config)
- EstimatorSpec(mode=mode,predictions={"ages": predictions})
- ModeKeys
- TRAIN
Training mode.
- EVAL
Evaluation mode.
- PREDICT
Prediction mode.
- reshape(output_layer, [-1])
Reshapes the output layer to a 1-D tensor to return predictions.
- metrics
- root_mean_squared_error(tf.cast(labels, tf.float64), predictions)
- cast(labels, tf.float64)
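The custom-model pieces above (model_fn, EstimatorSpec, ModeKeys, reshape, metrics) fit together roughly as follows. A minimal regression sketch assuming the TF 1.x API via `tf.compat.v1`; the layer sizes and the `'ages'` prediction key follow the entries above, everything else is illustrative:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # custom model_fn is a graph-mode workflow

def model_fn(features, labels, mode, params):
    # Small regression network on the raw feature tensor.
    hidden = tf.compat.v1.layers.dense(features['x'], units=10, activation=tf.nn.relu)
    output_layer = tf.compat.v1.layers.dense(hidden, units=1)
    predictions = tf.reshape(output_layer, [-1])  # 1-D tensor of predictions

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions={'ages': predictions})

    loss = tf.compat.v1.losses.mean_squared_error(labels, predictions)
    optimizer = tf.compat.v1.train.GradientDescentOptimizer(params['learning_rate'])
    train_op = optimizer.minimize(
        loss=loss, global_step=tf.compat.v1.train.get_global_step())
    eval_metric_ops = {
        'rmse': tf.compat.v1.metrics.root_mean_squared_error(
            tf.cast(labels, tf.float32), predictions)}
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op,
                                      eval_metric_ops=eval_metric_ops)

# Sanity-check the PREDICT branch by calling model_fn directly.
features = {'x': tf.compat.v1.placeholder(tf.float32, [None, 4])}
spec = model_fn(features, None, tf.estimator.ModeKeys.PREDICT,
                {'learning_rate': 0.01})
print(type(spec).__name__, list(spec.predictions))
```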
- SparseTensor(indices=[[0,1], [2,4]],values=[6, 0.5],dense_shape=[3, 5])
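The SparseTensor call above describes a 3x5 matrix with only two nonzero entries. A quick sketch, assuming TF 2.x eager execution for the densify-and-print step:

```python
import tensorflow as tf

# Value 6 at [0, 1] and 0.5 at [2, 4]; everything else is zero.
st = tf.SparseTensor(indices=[[0, 1], [2, 4]], values=[6.0, 0.5],
                     dense_shape=[3, 5])
arr = tf.sparse.to_dense(st).numpy()
print(arr)
```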
- logging
- set_verbosity(tf.logging.INFO)
- INFO
Low-level API
- tf
- float32
- constant(3.0, dtype=tf.float32)
- get_variable("W", [1], dtype=tf.float64)
- add(node1, node2)
- assign(W, [-1.])
- assign_add(global_step, 1)
- matmul(images, weights)
Matrix multiplication.
- placeholder(tf.float32, shape=(batch_size, mnist.IMAGE_PIXELS))
- name_scope('hidden1')
- Variable(tf.zeros([hidden1_units]),name='biases')
- zeros([hidden1_units])
Initializes a variable with zeros.
- truncated_normal([IMAGE_PIXELS, hidden1_units], stddev=1.0 / math.sqrt(float(IMAGE_PIXELS)))
Initializes a variable from a truncated normal (random) distribution.
- nn
- relu(tf.matmul(images, weights) + biases)
- relu6
- softmax(y)
- conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
- max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
- dropout(h_fc1, keep_prob)
- in_top_k(logits, labels, 1)
- softmax_cross_entropy_with_logits(labels=y_, logits=y)
- nce_loss(weights=nce_weights, biases=nce_biases, labels=train_labels, inputs=embed, num_sampled=num_sampled, num_classes=vocabulary_size)
- losses
- sparse_softmax_cross_entropy(labels=labels, logits=logits)
- mean_squared_error(labels, predictions)
- absolute_difference(labels, predictions)
- log_loss(labels, predictions)
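The `tf.losses` wrappers above are TF 1.x graph-mode conveniences; the same quantities can be computed with the `tf.nn` and `tf.reduce_*` ops listed elsewhere in this sheet. A sketch assuming TF 2.x eager execution, with made-up logits and labels:

```python
import tensorflow as tf

# Cross-entropy of 3-class logits against integer labels:
# tf.nn gives per-example losses, tf.reduce_mean aggregates them.
logits = tf.constant([[2.0, 1.0, 0.1], [0.5, 2.5, 0.3]])
labels = tf.constant([0, 1])
per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
loss = tf.reduce_mean(per_example)

# Mean squared error by hand with reduce_mean/square.
mse = tf.reduce_mean(tf.square(tf.constant([1.0, 2.0]) - tf.constant([1.5, 1.5])))
print(float(mse))  # ((-0.5)^2 + (0.5)^2) / 2 = 0.25
```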
- train
- get_global_step()
- AdamOptimizer(learning_rate=0.001).minimize(loss)
- GradientDescentOptimizer(learning_rate=params["learning_rate"])
- minimize(loss=loss, global_step=tf.train.get_global_step())
- FtrlOptimizer(learning_rate=0.1, l1_regularization_strength=1.0, l2_regularization_strength=1.0)
- Saver()
- save(sess, FLAGS.train_dir, global_step=step)
- restore(sess, FLAGS.train_dir)
- square(linear_model - y)
- reduce_sum(tf.square(y - labels))
Loss function for linear regression.
- reduce_mean(cross_entropy, name='xentropy_mean')
- group(optimizer.minimize(loss),tf.assign_add(global_step, 1))
- summary
- scalar('loss', loss)
- merge_all()
- FileWriter(FLAGS.train_dir, sess.graph)
- add_summary(summary_str, step)
- Graph()
- as_default()
- Session()
- run([train_op, loss],feed_dict=feed_dict)
- InteractiveSession()
- global_variables_initializer()
- ConfigProto(log_device_placement=True)
- argmax(logits, 1)
- rank(my3d)
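The low-level pieces above (get_variable, placeholder, square/reduce_sum, GradientDescentOptimizer, Session.run with feed_dict) combine into the classic linear-regression example. A sketch of the TF 1.x workflow, written against `tf.compat.v1` so it also runs on TF 2.x; the data points are the ones from the getting-started tutorial:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # Graph + Session is the TF 1.x workflow

# Linear model W*x + b trained with gradient descent.
W = tf.compat.v1.get_variable('W', [1], dtype=tf.float32)
b = tf.compat.v1.get_variable('b', [1], dtype=tf.float32)
x = tf.compat.v1.placeholder(tf.float32)
y = tf.compat.v1.placeholder(tf.float32)

linear_model = W * x + b
loss = tf.reduce_sum(tf.square(linear_model - y))  # squared-error loss
train_op = tf.compat.v1.train.GradientDescentOptimizer(0.01).minimize(loss)

data = {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}  # fits W = -1, b = 1
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(1000):
        sess.run(train_op, feed_dict=data)
    final_loss = sess.run(loss, feed_dict=data)
print(final_loss)  # near zero after training
```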
sennchi