Overview
The quadratic cost function is the easiest cost function to understand; readers coming from linear/nonlinear curve-fitting lessons will already be familiar with it. It also extends naturally to multivariate outputs: just treat the total cost as the accumulation of the individual per-component costs.
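Concretely, the loss computed in the code below is the mean of the squared error over every output component in the batch (n is the batch size, 10 the number of classes, y the network output, and y' the one-hot label):

$$C = \frac{1}{10n}\sum_{i=1}^{n}\sum_{j=1}^{10}\left(y_{ij} - y'_{ij}\right)^2$$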
I don't have the energy for a line-by-line walkthrough of the code; apologies.
Code
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data

# load MNIST with one-hot labels
data = input_data.read_data_sets('./', one_hot=True)

# placeholders for a batch of flattened 28x28 images and their labels
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])

# single-layer softmax model
w = tf.Variable(tf.random_normal([784, 10]))
b = tf.Variable(tf.random_normal([10]))
y = tf.nn.softmax(tf.matmul(x, w) + b)

# quadratic cost: mean squared error between prediction and label
loss = tf.reduce_mean(tf.square(y - y_))
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

# fraction of the batch where the predicted digit matches the label
correct_predict = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predict, tf.float32))

accuracies = []
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for times in range(21):
        for _ in range(10000):
            X, Y = data.train.next_batch(100)
            feed_dict = {x: X, y_: Y}
            sess.run(train_step, feed_dict=feed_dict)
        # accuracy is measured on the most recent training batch
        accuracies.append(sess.run(accuracy, feed_dict=feed_dict))
        print('round %s done!' % times)
plt.plot(accuracies)
plt.show()
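The accuracy curve above tracks training batches only. A minimal sketch of evaluating on the held-out test split instead (the data.test attribute ships with the same input_data helper), placed inside the with block after the training loop:

# evaluate on the 10,000-image test split rather than a training batch
test_feed = {x: data.test.images, y_: data.test.labels}
print('test accuracy:', sess.run(accuracy, feed_dict=test_feed))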
Figure
The recognition accuracy climbs slowly:
Improvement
Change the cost function to softmax cross-entropy. One catch: tf.nn.softmax_cross_entropy_with_logits expects raw logits and applies softmax internally, so pass in the pre-softmax value rather than the already-softmaxed y:

logits = tf.matmul(x, w) + b
y = tf.nn.softmax(logits)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
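For reference, the cross-entropy cost this line minimizes (with y' the one-hot label and y the softmax output) is:

$$C = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{10} y'_{ij}\,\log y_{ij}$$

Unlike the quadratic cost, it penalizes confident wrong predictions heavily, so the gradient stays large while the model is still wrong and learning does not slow down as much.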
Also change the batch size to 1000:
X,Y = data.train.next_batch(1000)
The accuracy goes up a little, but not by much, because this beginner-oriented network has only a single layer; there is no depth to it.
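As a sketch of what adding depth could look like (the hidden width of 300 and the ReLU activation are my assumptions, not part of the original post), the single-layer model could be replaced with:

# hypothetical deeper model: one hidden ReLU layer of 300 units (width chosen arbitrarily)
w1 = tf.Variable(tf.truncated_normal([784, 300], stddev=0.1))
b1 = tf.Variable(tf.zeros([300]))
h1 = tf.nn.relu(tf.matmul(x, w1) + b1)
w2 = tf.Variable(tf.truncated_normal([300, 10], stddev=0.1))
b2 = tf.Variable(tf.zeros([10]))
logits = tf.matmul(h1, w2) + b2
y = tf.nn.softmax(logits)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))

With one hidden layer plus cross-entropy, this setup typically reaches a noticeably higher accuracy than the single softmax layer, though the exact number depends on initialization and how long you train.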