Machine Learning Foundations: A Case Study Approach - Week 3

Basics:

(to be added)

Basic code for the assignment

import graphlab
# Limit the number of lambda workers to save memory and avoid crashes
graphlab.set_runtime_config("GRAPHLAB_DEFAULT_NUM_PYLAMBDA_WORKERS", 8)
# Load the data
products = graphlab.SFrame("amazon_baby.gl/")
# Count the words in each review
products["word_count"] = graphlab.text_analytics.count_words(products["review"])
# Select 11 words as input features for sentiment analysis
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
# For each selected word, count how often it appears in each review
def selected_word_count(word_counts, word):
    return word_counts.get(word, 0)

for word in selected_words:
    # Bind the current word via a default argument so each column counts the right word
    products[word] = products['word_count'].apply(lambda wc, w=word: selected_word_count(wc, w))
# Inspect the result
products.head()

The result: products.head() now shows one count column per selected word (screenshot omitted).
#Count how many times each of the 11 selected words appears across all reviews
def count_word(word):
    total = 0
    for word_counts in products["word_count"]:
        if word in word_counts:
            total += word_counts[word]
    return total

counts = [count_word(word) for word in selected_words]
word_totals = dict(zip(selected_words, counts))
print word_totals

The output is:
{'fantastic': 932, 'love': 42065, 'bad': 3724, 'awesome': 2090, 'great': 45206, 'terrible': 748, 'amazing': 1363, 'horrible': 734, 'awful': 383, 'hate': 1220, 'wow': 144}
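
Since each selected word already has its own column in products, the same totals can also be obtained by summing those columns; a minimal cross-check sketch using only the code above:

# Cross-check: sum the per-word count columns created earlier
for word in selected_words:
    print word, products[word].sum()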

#Ignore neutral reviews (rating == 3)
products = products[products['rating'] != 3]
#Add a new column 'sentiment': positive if the rating is 4 or higher
products['sentiment'] = products['rating'] >= 4

Training the sentiment classifier

#Split the data into training (80%) and test (20%) sets
train_data, test_data = products.random_split(.8, seed=0)
selected_words_model = graphlab.logistic_classifier.create(train_data,
                                                           target='sentiment',
                                                           features=selected_words,
                                                           validation_set=test_data)

The output is:
Logistic regression:


Number of examples : 133448
Number of classes : 2
Number of feature columns : 11
Number of unpacked features : 11
Number of coefficients : 12
Starting Newton Method


+-----------+----------+--------------+-------------------+---------------------+
| Iteration | Passes | Elapsed Time | Training-accuracy | Validation-accuracy |
+-----------+----------+--------------+-------------------+---------------------+
| 1 | 2 | 1.285266 | 0.844299 | 0.842842 |
| 2 | 3 | 1.485671 | 0.844186 | 0.842842 |
| 3 | 4 | 1.685881 | 0.844276 | 0.843142 |
| 4 | 5 | 1.896848 | 0.844269 | 0.843142 |
| 5 | 6 | 2.098513 | 0.844269 | 0.843142 |
| 6 | 7 | 2.300982 | 0.844269 | 0.843142 |
+-----------+----------+--------------+-------------------+---------------------+
SUCCESS: Optimal solution found.

Inspecting the model coefficients

selected_words_model['coefficients'].print_rows(num_rows=12)

The output is a 12-row table of coefficients (screenshot omitted).
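
To read off which word received the most positive and most negative weight (quiz questions 3 and 4), the coefficients SFrame can also be sorted by its 'value' column, which holds the learned weights in GraphLab Create; a minimal sketch:

# Sort the learned weights from most positive to most negative
coefficients = selected_words_model['coefficients']
coefficients.sort('value', ascending=False).print_rows(num_rows=12)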

Evaluating the model

graphlab.canvas.set_target('ipynb')
selected_words_model.show(view='Evaluation')

selected_words_model.evaluate(test_data)

The Canvas evaluation view is rendered inline (two screenshots omitted). The evaluate() output is:

{'accuracy': 0.8431419649291376,
'auc': 0.6648096413721418,
'confusion_matrix': Columns:
target_label int
predicted_label int
count int

Rows: 4

Data:
+--------------+-----------------+-------+
| target_label | predicted_label | count |
+--------------+-----------------+-------+
| 0 | 0 | 234 |
| 0 | 1 | 5094 |
| 1 | 1 | 27846 |
| 1 | 0 | 130 |
+--------------+-----------------+-------+
[4 rows x 3 columns],
'f1_score': 0.914242563530107,
'log_loss': 0.4054747110366022,
'precision': 0.8453551912568306,
'recall': 0.9953531598513011,
'roc_curve': Columns:
threshold float
fpr float
tpr float
p int
n int

Rows: 100001

Data:
+-----------+-----+-----+-------+------+
| threshold | fpr | tpr | p | n |
+-----------+-----+-----+-------+------+
| 0.0 | 1.0 | 1.0 | 27976 | 5328 |
| 1e-05 | 1.0 | 1.0 | 27976 | 5328 |
| 2e-05 | 1.0 | 1.0 | 27976 | 5328 |
| 3e-05 | 1.0 | 1.0 | 27976 | 5328 |
| 4e-05 | 1.0 | 1.0 | 27976 | 5328 |
| 5e-05 | 1.0 | 1.0 | 27976 | 5328 |
| 6e-05 | 1.0 | 1.0 | 27976 | 5328 |
| 7e-05 | 1.0 | 1.0 | 27976 | 5328 |
| 8e-05 | 1.0 | 1.0 | 27976 | 5328 |
| 9e-05 | 1.0 | 1.0 | 27976 | 5328 |
+-----------+-----+-----+-------+------+
[100001 rows x 5 columns]
Note: Only the head of the SFrame is printed.
You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.}
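
As a sanity check, the reported accuracy can be recomputed by hand from the confusion matrix above, since accuracy is just the fraction of rows where target_label equals predicted_label:

# Counts taken from the confusion matrix printed above
correct = 234 + 27846                  # true negatives + true positives
total = 234 + 5094 + 27846 + 130       # all test examples
print float(correct) / total           # ~0.8431, matching the reported accuracy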

Using the model to make predictions (quiz question 10)

diaper_champ_reviews = products[products['name'] == 'Baby Trend Diaper Champ']
selected_words_model.predict(diaper_champ_reviews[0:1], output_type='probability')

The output is:
dtype: float
Rows: 1
[0.796940851290673]

Training and evaluating the model that uses all words

sentiment_model = graphlab.logistic_classifier.create(train_data,
                                                     target='sentiment',
                                                     features=['word_count'],
                                                     validation_set=test_data)
sentiment_model.evaluate(test_data)

The output is:
{'accuracy': 0.916256305548883,
'auc': 0.9446492867438502,
'confusion_matrix': Columns:
target_label int
predicted_label int
count int

Rows: 4

Data:
+--------------+-----------------+-------+
| target_label | predicted_label | count |
+--------------+-----------------+-------+
| 0 | 1 | 1328 |
| 0 | 0 | 4000 |
| 1 | 1 | 26515 |
| 1 | 0 | 1461 |
+--------------+-----------------+-------+
[4 rows x 3 columns],
'f1_score': 0.9500349343413533,
'log_loss': 0.26106698432422365,
'precision': 0.9523039902309378,
'recall': 0.9477766657134686,
'roc_curve': Columns:
threshold float
fpr float
tpr float
p int
n int

Rows: 100001

Data:
+-----------+----------------+----------------+-------+------+
| threshold | fpr | tpr | p | n |
+-----------+----------------+----------------+-------+------+
| 0.0 | 1.0 | 1.0 | 27976 | 5328 |
| 1e-05 | 0.909346846847 | 0.998856162425 | 27976 | 5328 |
| 2e-05 | 0.896021021021 | 0.998748927652 | 27976 | 5328 |
| 3e-05 | 0.886448948949 | 0.998462968259 | 27976 | 5328 |
| 4e-05 | 0.879692192192 | 0.998284243637 | 27976 | 5328 |
| 5e-05 | 0.875187687688 | 0.998212753789 | 27976 | 5328 |
| 6e-05 | 0.872184684685 | 0.998177008865 | 27976 | 5328 |
| 7e-05 | 0.868618618619 | 0.998034029168 | 27976 | 5328 |
| 8e-05 | 0.864677177177 | 0.997998284244 | 27976 | 5328 |
| 9e-05 | 0.860735735736 | 0.997962539319 | 27976 | 5328 |
+-----------+----------------+----------------+-------+------+
[100001 rows x 5 columns]
Note: Only the head of the SFrame is printed.
You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.}

Quiz question 9

diaper_champ_reviews = products[products['name'] == 'Baby Trend Diaper Champ']
diaper_champ_reviews['predicted_sentiment'] = sentiment_model.predict(diaper_champ_reviews, output_type='probability')
diaper_champ_reviews = diaper_champ_reviews.sort('predicted_sentiment', ascending=False)
diaper_champ_reviews[0:1]

The output shows the single most positive review for this product (screenshot omitted).
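
To see why the selected_words_model scores this review much lower (quiz questions 10 and 11), it helps to check which of the selected words actually occur in it; a minimal sketch using the sorted diaper_champ_reviews from above:

# Word counts of the most positive review, restricted to the 11 selected words
top_counts = diaper_champ_reviews[0]['word_count']
print {word: top_counts.get(word, 0) for word in selected_words}
# If every count is 0, none of the selected words appear in this review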

Quiz Week 3

  1. Out of the 11 words in selected_words, which one is most used in the reviews in the dataset?
    -great

  2. Out of the 11 words in selected_words, which one is least used in the reviews in the dataset?
    -wow

  3. Out of the 11 words in selected_words, which one got the most positive weight in the selected_words_model?
    -love

  4. Out of the 11 words in selected_words, which one got the most negative weight in the selected_words_model?
    -terrible

  5. Which of the following ranges contains the accuracy of the selected_words_model on the test_data?
    -0.843

  6. Which of the following ranges contains the accuracy of the sentiment_model in the IPython Notebook from lecture on the test_data?
    -.916

  7. Which of the following ranges contains the accuracy of the majority class classifier, which simply predicts the majority class on the test_data?
    -.835 (see the baseline sketch after this list)

  8. How do you compare the different learned models with the baseline approach where we are just predicting the majority class?
    -The model using all words performs noticeably better than the majority-class baseline, while the selected_words_model performs about the same as the baseline.

  9. Which of the following ranges contains the "predicted_sentiment" for the most positive review for "Baby Trend Diaper Champ", according to the sentiment_model from the IPython Notebook from lecture?
    -0.999999937267

  10. Consider the most positive review for "Baby Trend Diaper Champ" according to the sentiment_model from the IPython Notebook from lecture. Which of the following ranges contains the predicted_sentiment for this review, if we use the selected_words_model to analyze it?
    -0.79694

  11. Why is the value of the predicted_sentiment for the most positive review found using the sentiment_model much more positive than the value predicted using the selected_words_model?
    -None of the selected_words appeared in the text of this review.
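
For questions 7 and 8, the majority-class baseline simply predicts the most common sentiment in test_data; a minimal sketch of its accuracy, reusing the train/test split from above:

# Accuracy of always predicting the majority class in the test set
num_positive = (test_data['sentiment'] == 1).sum()
num_negative = (test_data['sentiment'] == 0).sum()
print float(max(num_positive, num_negative)) / len(test_data)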
