Network Programming Course Summary

sa15226142 王振西

Preface

This year's Network Programming course project was OCR recognition of routine blood test reports and prediction of age and gender from the recognized data. Professor Meng Ning is well liked by students for his open teaching style, which emphasizes student autonomy and initiative. The whole class completed the project together on an online git server: Professor Meng provided the framework and milestone tasks, reviewed pull requests strictly, tracked progress, and gave guidance and requirements, closely simulating every aspect of real-world project development. Besides absorbing the course material itself, students strengthened their teamwork skills, learned proper coding conventions, developed the habit of writing documentation, and, drawing on their individual strengths, improved different parts of the project. Unlike courses that try to cover everything, Professor Meng prefers to let students think divergently and dig deep into a single topic, which makes it easier for each student to find their own "skill point" during development.

Project Introduction

The goal of this course project is to use deep learning techniques such as neural networks to perform OCR recognition on routine blood test reports and to predict the patient's gender and age.
The project is divided into two modules: OCR recognition of blood test report images, and prediction of gender and age from the recognized data using a trained neural network model.
OCR recognition of blood test reports
Module functions

1. Receive a blood test report image uploaded from the front end and send it to the server.
2. On the server, preprocess the uploaded image with a perspective transform to crop and rectify it, run OCR, and store the extracted data as a JSON document in MongoDB (a hedged sketch of such an endpoint follows this list).
3. Display the recognized data on the front end.
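
A minimal sketch of how such an upload-and-recognize endpoint could look with Flask, pytesseract, Pillow, and pymongo. The route name, form field, database and collection names are assumptions for illustration, and the preprocessing step is simplified; this is not the project's actual view.py.

# -*- coding: utf-8 -*-
# Illustrative sketch only: route name, form field, and the 'reports'
# collection are assumed, and the perspective-correction step is omitted.
import pytesseract
from PIL import Image
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient('localhost', 27017).blood_test      # assumed database name

@app.route('/upload', methods=['POST'])
def upload():
    f = request.files['imagefile']                    # assumed form field name
    image = Image.open(f.stream)
    # The real project first rectifies the photo with an OpenCV perspective
    # transform and crops each report item; here we OCR the whole image.
    text = pytesseract.image_to_string(image)
    report = {'raw_text': text}                       # parsed items would go here
    report_id = db.reports.insert_one(report).inserted_id
    return jsonify({'id': str(report_id)})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)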

Gender and age prediction from the data
Module functions

1. When the predict event is triggered on the front end, the trained model predicts gender and age from the recognized data (a hedged server-side sketch follows).
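
Continuing the upload sketch above, the prediction step could be wired up roughly as follows; the /predict route, the FEATURE_KEYS list, and the predict_sex_age helper are hypothetical placeholders standing in for the trained models described later in this summary.

# Illustrative continuation of the sketch above; /predict, FEATURE_KEYS and
# predict_sex_age are hypothetical names, not the project's real interface.
from bson.objectid import ObjectId

@app.route('/predict', methods=['POST'])
def predict():
    report = db.reports.find_one({'_id': ObjectId(request.form['id'])})
    features = [float(report[k]) for k in FEATURE_KEYS]   # the blood-test items
    sex, age = predict_sex_age(features)   # wraps the trained Theano models
    return jsonify({'sex': sex, 'age': age})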

Installation and Running

# Install numpy
sudo apt-get install python-numpy # http://www.numpy.org/
# Install OpenCV
sudo apt-get install python-opencv # http://opencv.org/

# Install OCR and preprocessing dependencies
sudo apt-get install tesseract-ocr
sudo pip install pytesseract
sudo apt-get install python-tk
sudo pip install pillow

# Install the Flask framework and MongoDB
sudo pip install Flask
sudo apt-get install mongodb # if the package cannot be found, run sudo apt-get update first
sudo service mongodb start
sudo pip install pymongo

Running

cd BloodTestReportOCR
python view.py # upload an image, then open http://yourip:8080 in a browser

Demo

After running view.py, open a browser and visit localhost:8080 to reach the page:
Upload an image (screenshot: 上传图片.jpg)
Generate the report (screenshot: 生成报告.jpg)
Predict the results (screenshot: 结果预测.jpg)

Implementation of Gender and Age Prediction

Different students adopted different approaches to predicting gender and age; here the author presents his own implementation, which builds the networks with Theano.
Data Processing

Because the training set contains invalid data, rows with empty fields or illegal characters have to be filtered out before prediction, and part of the valid data is set aside as the validation set and the prediction set.

import pandas as pd 
import numpy as np
import os
import csv

inputfilepath = "./train2.csv"

if not os.path.exists("./processed2"):
    os.makedirs("./processed2")
train_set_Path = "./processed2/trainset.csv"
validate_set_Path = "./processed2/validateset.csv"
predict_set_Path = "./processed2/predictset.csv"

dataf = pd.read_csv(inputfilepath)
# drop 5 completely empty columns
dataf = dataf.dropna(axis=1,how='all')
# drop all rows containing NaN, leaving about 6500 rows
dataf1 = dataf.dropna()

entity_num = 6500 
validate_num = 650
validate_indice = np.random.random_integers(0,entity_num - 1,validate_num)

train_set = []
validate_set = []
predict_set = []
indexp = 0
for index, row in dataf1.iterrows():
    temp = []
    for i in range(2,29):
        temp.append(row[i])
    temp.append(row[0])             # sex
    temp.append(row[1])             # age
    if indexp in validate_indice:
        validate_set.append(temp)
    else:
        train_set.append(temp)
    indexp += 1
predict_set = train_set[-201:-1]    # hold out 200 rows (dropping the final row) as the prediction set
train_set = train_set[0:-201]
print len(train_set), len(predict_set), len(validate_set)

f1 = open(train_set_Path, 'a')
writer1 = csv.writer(f1)
for item in train_set:
    writer1.writerow(item)
f1.close()

f2 = open(validate_set_Path, 'a')
writer2 = csv.writer(f2)
for item in validate_set:
    writer2.writerow(item)
f2.close()

f3 = open(predict_set_Path, 'a')
writer3 = csv.writer(f3)
for item in predict_set:
    writer3.writerow(item)
f3.close()
Building the Neural Networks

import csv
import numpy as np
import theano
import theano.tensor as T

class HiddenLayer(object):
    def __init__(self, rng, input, n_in, n_out, W=None, b=None,
                    activation=None):
        """
        A hidden layer of a multilayer perceptron.

        :type rng: numpy.random.RandomState
        :param rng: a random number generator used to initialize weights

        :type input: theano.tensor.dmatrix
        :param input: a symbolic tensor of shape (n_examples, n_in)

        :type n_in: int
        :param n_in: dimensionality of input

        :type n_out: int
        :param n_out: number of hidden units

        :type activation: theano.Op or function
        :param activation: Non linearity to be applied in the hidden
                           layer
        """

        self.input = input


        # An alternative initialization, kept commented out below: draw W
        # from a Gaussian prior.
        """
        if W is None:
            W_values = np.asarray(
                rng.normal(
                    loc = 0.0,
                    scale = 1.0,
                    size = (n_in, n_out)
                ),
                dtype = theano.config.floatX
            )
            W = theano.shared(value=W_values, name='W', borrow=True)
        """
        if W is None:
            W_values = np.asarray(
                rng.uniform(
                    low=-np.sqrt(6. / (n_in + n_out)),
                    high=np.sqrt(6. / (n_in + n_out)),
                    size=(n_in, n_out)
                ),
                dtype=theano.config.floatX
            )
            if activation == theano.tensor.nnet.sigmoid:
                W_values *= 4

            W = theano.shared(value=W_values, name='W', borrow=True)
        
        if b is None:
            b_values = np.zeros((n_out,), dtype=theano.config.floatX)
            b = theano.shared(value=b_values, name='b', borrow=True)

        self.W = W
        self.b=b

        _output = T.dot(input, self.W) + self.b
        self.output = (
            _output if activation is None
            else activation(_output)
        )

        self.parameters = [self.W, self.b]

class LogisticRegression(object):
    def __init__(self, input, n_in, n_out):
        """ Initialize the parameters of the logistic regression

        :type input: theano.tensor.TensorType
        :param input: symbolic variable that describes the input of the
                      architecture (one minibatch)

        :type n_in: int
        :param n_in: number of input units, the dimension of the space in
                     which the datapoints lie

        :type n_out: int
        :param n_out: number of output units, the dimension of the space in
                      which the labels lie

        """

        # initialize the weights W with 0 as a matrix of shape(n_in, n_out)
        self.W = theano.shared(
            value=np.zeros(
                (n_in, n_out),
                dtype=theano.config.floatX
            ),
            name='W',
            borrow=True
        )

        #initialize the biased b as vector of n_out 0s
        self.b = theano.shared(
            value=np.zeros(
                (n_out,),
                dtype=theano.config.floatX
            ),
            name='b',
            borrow=True
        )

        # symbolic expression for computing the matrix of class-membership
        # probabilities
        # Where:
        # W is a matrix where column-k represent the separation hyperplane for
        # class-k
        # x is a matrix where row-j  represents input training sample-j
        # b is a vector where element-k represent the free parameter of
        # hyperplane-k
        self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)

        # symbolic description of how to compute prediction as class whose
        # probability is maximal
        self.y_pred = T.argmax(self.p_y_given_x, axis=1)
        # end-snippet-1

        # parameters of the model
        self.parameters = [self.W, self.b]

        # keep track of model input
        self.input = input

    def negative_log_likelihood(self, y):
        """Return the mean of the negative log-likelihood of the prediction
        of this model under a given target distribution.

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label

        Note: we use the mean instead of the sum so that
              the learning rate is less dependent on the batch size
        """

        return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])

    def errorsforSex(self, y):
        """Return a float representing the number of errors in the minibatch
        over the total number of examples of the minibatch ; zero one
        loss over the size of the minibatch

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label
        """

        # check if y has same dimension of y_pred
        if y.ndim != self.y_pred.ndim:
            raise TypeError(
                'y should have the same shape as self.y_pred',
                ('y', y.type, 'y_pred', self.y_pred.type)
            )
        # check if y is of the correct datatype
        if y.dtype.startswith('int'):
            # the T.neq operator returns a vector of 0s and 1s, where 1
            # represents a mistake in prediction
            return T.mean(T.neq(self.y_pred, y)), self.y_pred, y
        else:
            raise NotImplementedError()
    

class MLPForSex(object):
    """
    The MLP model for predicting sex.
    """

    def __init__(self, rng, input, n_in, n_out):
        """Initialize the parameters for the multilayer perceptron

        :type rng: numpy.random.RandomState
        :param rng: a random number generator used to initialize weights

        :type input: theano.tensor.TensorType
        :param input: symbolic variable that describes the input of the
        architecture (one minibatch)

        :type n_in: int
        :param n_in: number of input units, the dimension of the space in
        which the datapoints lie

        :type n_out: int
        :param n_out: number of output units, the dimension of the space in
        which the labels lie

        """
        
        self.hiddenLayer1 = HiddenLayer(
            rng=rng,
            input=input,
            n_in=n_in,
            n_out=64,
            activation=T.tanh
        )
        self.hiddenLayer2 = HiddenLayer(
            rng=rng,
            input=self.hiddenLayer1.output,
            n_in=64,
            n_out=128,
            activation=T.tanh
        )
        self.hiddenLayer3 = HiddenLayer(
            rng=rng,
            input=self.hiddenLayer2.output,
            n_in=128,
            n_out=16,
            activation=T.tanh
        )
        self.outputLayer = LogisticRegression(
            input=self.hiddenLayer3.output,
            n_in=16,
            n_out=n_out
        )

        self.L1 = (
            abs(self.hiddenLayer1.W).sum()
            + abs(self.hiddenLayer2.W).sum()
            + abs(self.hiddenLayer3.W).sum()
            + abs(self.outputLayer.W).sum()
        )

        self.L2 = (
            (self.hiddenLayer1.W ** 2).sum()
            + (self.hiddenLayer2.W ** 2).sum()
            + (self.hiddenLayer3.W ** 2).sum()
            + (self.outputLayer.W ** 2).sum()
        )

        self.negative_log_likelihood = (
            self.outputLayer.negative_log_likelihood
        )

        self.errors = self.outputLayer.errorsforSex

        self.parameters = self.hiddenLayer1.parameters + self.hiddenLayer2.parameters +\
        self.hiddenLayer3.parameters + self.outputLayer.parameters

        self.input = input

class AgeOutput(object):
    def __init__(self, input, n_in, n_out, W=None, b=None, activation=None):
        """ Initialize the parameters of the logistic regression

        :type input: theano.tensor.TensorType
        :param input: symbolic variable that describes the input of the
                      architecture (one minibatch)

        :type n_in: int
        :param n_in: number of input units, the dimension of the space in
                     which the datapoints lie

        :type n_out: int
        :param n_out: number of output units, the dimension of the space in
                      which the labels lie

        """

        # initialize the weights W with the same uniform scheme as HiddenLayer
        rng = np.random.RandomState(1234)
        if W is None:
            W_values = np.asarray(
                rng.uniform(
                    low=-np.sqrt(6. / (n_in + n_out)),
                    high=np.sqrt(6. / (n_in + n_out)),
                    size=(n_in, n_out)
                ),
                dtype=theano.config.floatX
            )
            if activation == theano.tensor.nnet.sigmoid:
                W_values *= 4

            W = theano.shared(value=W_values, name='W', borrow=True)
        
        if b is None:
            b_values = np.zeros((n_out,), dtype=theano.config.floatX)
            b = theano.shared(value=b_values, name='b', borrow=True)

        self.W = W
        self.b=b

        # linear regression output: one predicted age per example
        gaussian = np.random.normal(0, 25, n_out)   # note: not used below
        self.output = T.dot(input, self.W) + self.b
        #self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)

        # symbolic description of how to compute prediction as class whose
        # probability is maximal
        #self.y_pred = T.argmax(self.p_y_given_x, axis=1)
        # end-snippet-1

        # parameters of the model
        self.parameters = [self.W, self.b]

        # keep track of model input
        self.input = input

    def cost(self, y):
        """Return the mean of the negative log-likelihood of the prediction
        of this model under a given target distribution.

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label

        Note: we use the mean instead of the sum so that
              the learning rate is less dependent on the batch size
        """
        return 0.5 * T.dot(T.transpose(self.output - y), self.output - y)
        return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])

    def errorsforAge(self, y):
        """Return a float representing the number of errors in the minibatch
        over the total number of examples of the minibatch ; zero one
        loss over the size of the minibatch

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label
        """

        # check if y has same dimension of y_pred
        
        if y.ndim != self.output.ndim:
            raise TypeError(
                'y should have the same shape as self.y_pred',
                ('y', y.type, 'y_pred', self.output.type)
            )
        # check if y is of the correct datatype
        if y.dtype.startswith('int'):
            # count a prediction as an error when it deviates from the true
            # age by 5 or more
            return T.mean(T.le(5, T.abs_(self.output - y))), self.output, y
        else:
            raise NotImplementedError()

class MLPForAge(object):
    """
    The MLP model for predicting age.
    """

    def __init__(self, rng, input, n_in):
        """Initialize the parameters for the multilayer perceptron

        :type rng: numpy.random.RandomState
        :param rng: a random number generator used to initialize weights

        :type input: theano.tensor.TensorType
        :param input: symbolic variable that describes the input of the
        architecture (one minibatch)

        :type n_in: int
        :param n_in: number of input units, the dimension of the space in
        which the datapoints lie

        """
        
        self.hiddenLayer1 = HiddenLayer(
            rng=rng,
            input=input,
            n_in=n_in,
            n_out=128,
            activation=None
        )
        self.hiddenLayer2 = HiddenLayer(
            rng=rng,
            input=self.hiddenLayer1.output,
            n_in=128,
            n_out=128,
            activation=None
        )
        self.hiddenLayer3 = HiddenLayer(
            rng=rng,
            input=self.hiddenLayer2.output,
            n_in=128,
            n_out=256,
            activation=None
        )
        self.hiddenLayer4 = HiddenLayer(
            rng=rng,
            input=self.hiddenLayer3.output,
            n_in=256,
            n_out=512,
            activation=None
        )
        self.hiddenLayer5 = HiddenLayer(
            rng=rng,
            input=self.hiddenLayer4.output,
            n_in=512,
            n_out=256,
            activation=None
        )
        self.hiddenLayer6 = HiddenLayer(
            rng=rng,
            input=self.hiddenLayer5.output,
            n_in=256,
            n_out=256,
            activation=None
        )
        self.outputLayer = AgeOutput(
            input=self.hiddenLayer6.output,
            n_in=256,
            n_out=1
        )

        self.L1 = (
            abs(self.hiddenLayer1.W).sum()
            + abs(self.hiddenLayer2.W).sum()
            + abs(self.hiddenLayer3.W).sum()
            + abs(self.hiddenLayer4.W).sum()
            + abs(self.hiddenLayer5.W).sum()
            + abs(self.hiddenLayer6.W).sum()
            + abs(self.outputLayer.W).sum()
        )

        self.L2 = (
            (self.hiddenLayer1.W ** 2).sum()
            + (self.hiddenLayer2.W ** 2).sum()
            + (self.hiddenLayer3.W ** 2).sum()
            + (self.hiddenLayer4.W ** 2).sum()
            + (self.hiddenLayer5.W ** 2).sum()
            + (self.hiddenLayer6.W ** 2).sum()
            + (self.outputLayer.W ** 2).sum()
        )

        self.cost = (
            self.outputLayer.cost
        )
        self.errors = self.errorsforAge

        self.parameters = self.hiddenLayer1.parameters + self.hiddenLayer2.parameters +\
        self.hiddenLayer3.parameters + self.hiddenLayer4.parameters + self.hiddenLayer5.parameters +\
        self.hiddenLayer6.parameters + self.outputLayer.parameters

        self.input = input

    def errorsforAge(self, y):
        """Return a float representing the number of errors in the minibatch
        over the total number of examples of the minibatch ; zero one
        loss over the size of the minibatch

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label
        """

        # check if y has same dimension of y_pred
        
        if y.ndim != self.outputLayer.output.ndim:
            raise TypeError(
                'y should have the same shape as self.y_pred',
                ('y', y.type, 'y_pred', self.outputLayer.output.type)
            )
        # check if y is of the correct datatype
        if y.dtype.startswith('float'):
            # count a prediction as an error when it deviates from the true
            # age by 5 or more
            return T.mean(T.le(5, T.abs_(self.outputLayer.output - y))), self.outputLayer.output, y
        else:
            raise NotImplementedError()

def load_data(dataset, id):
    """
    loads the dataset.

    :type dataset: string
    :param dataset: the path to the dataset

    :type id: string
    :param id: generate the dataset for predicting age or gender. 'age'
    for age, 'gender' for gender
    """

    gender = ['BAS', 'EOS%', 'HGB', 'LIC%', 'PDW', 'PLT', 'WBC']
    age = ['BAS%', 'HCT', 'LIC%', 'LYM', 'MCH', 'MCV', 'MPV']
    data_set = []
    def findIndice(original, info):
        indice = []
        for item in info:
            if item in original:
                indice.append(original.index(item))
        return indice
    with open(dataset, 'rb') as f:
        rows = csv.reader(f)
        index = 0
        indice = []
        for row in rows:
            temp = []
            if index == 0:
                index += 1
                if id == 'age':
                    indice = findIndice(row, age)
                elif id == 'gender':
                    indice = findIndice(row, gender)
                print indice
                continue
            if len(indice) == 7:
                for i in indice:
                    temp.append(row[i])
            else:
                for i in indice:
                    temp.append(row[i])
                for i in range(7 - len(indice)):
                    temp.append(0)
            if row[-2] == 'Ů':      # not know male or female
                temp.append(0)
            else:
                temp.append(1)
            temp.append(row[-1])
            #print temp
            data_set.append(temp)
    f.close()

    def convertStrToFloat(dataset):
        resultX = []
        resultY1 = []
        resultY2 = []
        for i in range(0,len(dataset)):
            tempX = []
            for j in range(0,7):
                #print dataset[i][j]
                tempX.append(float(dataset[i][j]))
            resultY1.append(float(dataset[i][-2]))
            resultY2.append(float(dataset[i][-1]))
            resultX.append(tempX)
        #for i in range(0,len(dataset)):
        #    print resultX[i], resultY1[i], resultY2[i]
        return (resultX, resultY1, resultY2)

    def shared_dataset(data_xy1y2, borrow=True):
        data_x, data_y1, data_y2 = data_xy1y2
        shared_x = theano.shared(np.asarray(data_x,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        shared_y1 = theano.shared(np.asarray(data_y1,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        shared_y2 = theano.shared(np.asarray(data_y2,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        return shared_x, T.cast(shared_y1, 'int32'), T.cast(shared_y2, 'int32')
    dataset_x, dataset_y1, dataset_y2 = shared_dataset(convertStrToFloat(data_set))
    
    result = (dataset_x, dataset_y1, dataset_y2)
    #print result
    return result

def load_data1(dataset, id):
    """
    loads the dataset.

    :type dataset: string
    :param dataset: the path to the dataset

    :type id: string
    :param id: generate the dataset for predicting age or gender. 'age'
    for age, 'gender' for gender
    """

    gender = ['BAS', 'EOS%', 'HGB', 'LIC%', 'PDW', 'PLT', 'WBC']
    age = ['BAS%', 'HCT', 'LIC%', 'LYM', 'MCH', 'MCV', 'MPV']
    data_set = []
    def findIndice(original, info):
        indice = []
        for item in info:
            if item in original:
                indice.append(original.index(item))
        return indice
    with open(dataset, 'rb') as f:
        rows = csv.reader(f)
        index = 0
        indice = []
        for row in rows:
            temp = []
            for i in range(0,29):
                temp.append(row[i])
            data_set.append(temp)
    f.close()

    def convertStrToFloat(dataset):
        resultX = []
        resultY1 = []
        resultY2 = []
        for i in range(0,len(dataset)):
            tempX = []
            for j in range(0,27):
                #print dataset[i][j]
                tempX.append(float(dataset[i][j]))
            resultY1.append(float(dataset[i][-2]) - 1)    # sex
            resultY2.append(float(dataset[i][-1]))    # age
            resultX.append(tempX)
        #for i in range(0,len(dataset)):
        #    print resultX[i], resultY1[i], resultY2[i]
        return (resultX, resultY1, resultY2)

    def shared_dataset(data_xy1y2, borrow=True):
        data_x, data_y1, data_y2 = data_xy1y2
        shared_x = theano.shared(np.asarray(data_x,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        shared_y1 = theano.shared(np.asarray(data_y1,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        shared_y2 = theano.shared(np.asarray(data_y2,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        return shared_x, T.cast(shared_y1, 'int32'), T.cast(shared_y2, 'float64')
    dataset_x, dataset_y1, dataset_y2 = shared_dataset(convertStrToFloat(data_set))
    
    result = (dataset_x, dataset_y1, dataset_y2)
    #print result
    return result
if __name__ == '__main__':
    # quick smoke test; load_data also needs an id ('age' or 'gender')
    load_data('../data/processed/validateset.csv', 'gender')
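
The classes above only define the networks; a training loop is still needed. Below is a minimal sketch of minibatch gradient descent for the gender model, assuming it lives in the same module as the classes above and uses the 29-column trainset.csv produced by the data-processing script via load_data1. The hyperparameters are illustrative, not the values actually used.

def train_sex_model(dataset='./processed2/trainset.csv',
                    learning_rate=0.01, l2_reg=0.0001,
                    n_epochs=100, batch_size=20):
    # Minimal training sketch for MLPForSex; hyperparameters are illustrative.
    train_x, train_sex, _train_age = load_data1(dataset, 'gender')
    n_batches = train_x.get_value(borrow=True).shape[0] // batch_size

    index = T.lscalar('index')      # minibatch index
    x = T.matrix('x')               # 27 blood-test features per example
    y = T.ivector('y')              # 0/1 gender labels

    rng = np.random.RandomState(1234)
    classifier = MLPForSex(rng=rng, input=x, n_in=27, n_out=2)

    # negative log-likelihood cost plus L2 weight decay
    cost = classifier.negative_log_likelihood(y) + l2_reg * classifier.L2

    # plain SGD: one update rule per parameter
    gparams = [T.grad(cost, p) for p in classifier.parameters]
    updates = [(p, p - learning_rate * g)
               for p, g in zip(classifier.parameters, gparams)]

    train_model = theano.function(
        inputs=[index],
        outputs=cost,
        updates=updates,
        givens={
            x: train_x[index * batch_size:(index + 1) * batch_size],
            y: train_sex[index * batch_size:(index + 1) * batch_size],
        }
    )

    for epoch in range(n_epochs):
        epoch_cost = np.mean([train_model(i) for i in range(n_batches)])
        print 'epoch %d, mean training cost %f' % (epoch, epoch_cost)
    return classifier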