As a level-ten internet surfer, the moment I saw that Andrew Ng had released a new course, I jumped on it without a second thought.
Link:
https://learn.deeplearning.ai/courses/ai-python-for-beginners/lesson/1/introduction
It opens without any trouble from mainland China, and it is completely free!
The new course is called AI Python for Beginners, and I spent some time yesterday going through the whole thing.
Back from the surfing trip, here is a summary.
The conclusion first:
This course really is for Beginners. If you have any Python background at all, it is not for you.
First, it is one of the Short Courses, so it is small in scope. It feels a lot like a lead-generation course fronted by a big name, mainly there to promote the DeepLearning.AI platform and let you experience its novel video + in-browser coding + AI-assisted model for learning a programming language.
Second, as far as China goes, the "everyone learn to code, everyone learn Python" wave of a few years back has already converted most of the potential new users, so the audience has shrunk considerably. A course that starts from Python data structures and variable assignment may not find many takers anymore.
Still, even this short course gave me a few interesting things.
An AI chat module built on Jupyter Notebook
In Lesson 9, the second-to-last lesson, Andrew Ng shows how to bring a large-language-model response module into a Jupyter Notebook.
The import is:
from helper_functions import print_llm_response
Then he asks a question:
print_llm_response("What is the capital of France?")
The AI's reply:
The capital of France is Paris.
You can also have the AI describe the lifestyle of a three-year-old dog based on a prompt you write.
From the examples alone you can tell the course really is aimed at pure Beginners. 😂
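A prompt along the lines of the course's dog example can be assembled with an f-string before being handed to print_llm_response. The wording below is my reconstruction, not the course's exact prompt:

```python
# Hypothetical reconstruction of the course's dog-lifestyle prompt
dog_age = 3
prompt = f"Describe the lifestyle of a {dog_age}-year-old dog in two sentences."
print(prompt)

# In the course notebook you would then run:
# print_llm_response(prompt)
```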
Also, that opening import, from helper_functions import print_llm_response, is clearly not a public package. It is almost certainly a private module, i.e. something they wrote themselves. I have attached the full module at the end of this post.
My feeling is that with a little configuration this module could be used in your own everyday Jupyter Notebooks.
That turned out to be my biggest takeaway from the course.
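Based on the module's imports, a plausible local setup looks like this. The package names are the usual PyPI ones and the key value is a placeholder, so treat this as a sketch rather than the course's official instructions:

```shell
# Install the two dependencies helper_functions.py imports
pip install openai python-dotenv

# Put your API key in a .env file next to your notebook
# (replace the placeholder with your real key)
echo 'OPENAI_API_KEY=sk-...' > .env
```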
Some details
A few side notes while we're at it:
1. Why are the answers so consistent?
After using the course's AI bot for a while, I noticed that its answers were quite stable from run to run.
A look at the module code revealed why: it sets temperature=0.0. This parameter controls how freely the model's replies are allowed to wander; it is commonly tuned between 0.0 and 1.0 (the OpenAI API itself accepts values up to 2.0). At 0.0 you get the most stable, least divergent output.
Of course, stable does not mean identical: rerun it a few times and you can still catch minor differences.
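Under the hood, temperature rescales the model's token probabilities before sampling. A toy illustration in pure Python (no API; the logits are made up):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores (logits) into sampling probabilities at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens

# At temperature 1.0 the distribution keeps some spread;
# as temperature approaches 0, almost all mass lands on the top token.
print(softmax_with_temperature(logits, 1.0))
print(softmax_with_temperature(logits, 0.01))
```

Note that a literal temperature of 0.0 would divide by zero here; APIs typically special-case it as greedy (pick-the-top-token) decoding.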
2. Which model is it calling?
The module still calls gpt-3.5-turbo-0125, the snapshot released on January 25, 2024 (that is what the 0125 suffix refers to). In practice the experience with it is fairly mediocre, and OpenAI has since moved to GPT-4o as the replacement for the 3.5 models, including the upgraded 3.5-turbo.
The AI chatbot's system prompt
The course also ships with an AI chatbot to help answer questions.
[Screenshot: the course chatbot answering a question and offering a follow-up]
I think it is tuned quite well: after answering your question, the bot suggests a next step, which is genuinely beginner-friendly.
In the screenshot above, for example, Ng asks what the traditional first program is when learning a new language; the AI answers that it is printing "Hello, World!", and then offers to write that code for you in Python.
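For completeness, that classic first program is a one-liner in Python:

```python
# The traditional first program: print a greeting to the screen
greeting = "Hello, World!"
print(greeting)
```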
Below is the chatbot's system prompt; if you ever need to write one for a similar scenario, it is worth studying.
In short, the prompt does the following:
- Casts the bot as a friendly AI teaching assistant helping beginners learn Python.
- Assumes the learner has little to no prior programming experience.
- Answers only in terms of Python; other languages (assembly, machine code) may be mentioned only when the question is about how computers work.
- Writes code only when the learner asks directly, keeping it as simple and readable as possible and avoiding Pythonic idioms like list comprehensions.
- Keeps answers short, offering only as much explanation as necessary, and lets the learner ask follow-up questions.
- If the learner asks unrelated questions, reminds them to focus on learning programming.
You are the friendly AI assistant for a beginner python programming class.
You are available to help learners with questions they might have about computer programming,
python, artificial intelligence, the internet, and other related topics.
You should assume zero to very little prior experience of coding when you reply to questions.
You should only use python and not mention other programming languages (unless the question is
about how computers work, where you may mention assembly or machine code if it is relevant to
the answer).
Only write code if you are asked directly by the learner. If you do write any code, it should
be as simple and easy to read as possible - name variables things that are easy to understand,
and avoid pythonic conventions like list comprehensions to help the learner stick to foundations
like for loops and if statements.
Keep your answers to questions short, offering as little explanation as is necessary to answer
the question. Let the learner ask follow up questions to dig deeper.
If the learner asks unrelated questions, respond with a brief reminder: "Please, focus on your programming for AI journey"
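A system prompt like this is typically wired into an OpenAI-style chat call as the first message. Here is a minimal sketch, mirroring the call shape in helper_functions.py; the SYSTEM_PROMPT is abbreviated and the helper name is my own:

```python
# Abbreviated version of the course's system prompt (hypothetical shortening)
SYSTEM_PROMPT = (
    "You are the friendly AI assistant for a beginner python programming class. "
    "Keep your answers to questions short."
)

def build_messages(user_question, system_prompt=SYSTEM_PROMPT):
    """Assemble the messages list expected by the chat completions API."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("What is a for loop?")
# The actual call would then be:
# client.chat.completions.create(model="gpt-3.5-turbo-0125", messages=messages)
```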
That's all for now. Thanks for reading, and have a great day.
helper_functions.py
The script is called helper_functions.py; if you know your way around Jupyter Notebook, getting its full source should be no trouble at all : )
# import gradio as gr
import os
import random

from dotenv import load_dotenv
from openai import OpenAI

# Get the OpenAI API key from the .env file
load_dotenv('.env', override=True)
openai_api_key = os.getenv('OPENAI_API_KEY')

# Set up the OpenAI client
client = OpenAI(api_key=openai_api_key)


def print_llm_response(prompt):
    """This function takes as input a prompt, which must be a string enclosed in quotation marks,
    and passes it to OpenAI's GPT3.5 model. The function then prints the response of the model.
    """
    llm_response = get_llm_response(prompt)
    print(llm_response)


def get_llm_response(prompt):
    """This function takes as input a prompt, which must be a string enclosed in quotation marks,
    and passes it to OpenAI's GPT3.5 model. The function then saves the response of the model as
    a string.
    """
    try:
        if not isinstance(prompt, str):
            raise ValueError("Input must be a string enclosed in quotes.")
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo-0125",
            messages=[
                {
                    "role": "system",
                    "content": "You are a helpful but terse AI assistant who gets straight to the point.",
                },
                {"role": "user", "content": prompt},
            ],
            temperature=0.0,
        )
        response = completion.choices[0].message.content
        return response
    except (TypeError, ValueError) as e:
        print("Error:", str(e))


def get_chat_completion(prompt, history):
    history_string = "\n\n".join(["\n".join(turn) for turn in history])
    prompt_with_history = f"{history_string}\n\n{prompt}"
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful but terse AI assistant who gets straight to the point.",
            },
            {"role": "user", "content": prompt_with_history},
        ],
        temperature=0.0,
    )
    response = completion.choices[0].message.content
    return response


# def open_chatbot():
#     """This function opens a Gradio chatbot window that is connected to OpenAI's GPT3.5 model."""
#     gr.close_all()
#     gr.ChatInterface(fn=get_chat_completion).launch(quiet=True)


def get_dog_age(human_age):
    """This function takes one parameter: a person's age as an integer and returns their age if
    they were a dog, which is their age divided by 7. """
    return human_age / 7


def get_goldfish_age(human_age):
    """This function takes one parameter: a person's age as an integer and returns their age if
    they were a goldfish, which is their age divided by 5. """
    return human_age / 5


def get_cat_age(human_age):
    """Convert a human age to the equivalent cat age."""
    if human_age <= 14:
        # For the first 14 human years, we consider the age as if it's within the first two cat years.
        cat_age = human_age / 7
    else:
        # For human ages beyond 14 years:
        cat_age = 2 + (human_age - 14) / 4
    return cat_age


def get_random_ingredient():
    """
    Returns a random ingredient from a list of 20 smoothie ingredients.

    The ingredients are a bit wacky but not gross, making for an interesting smoothie combination.

    Returns:
        str: A randomly selected smoothie ingredient.
    """
    ingredients = [
        "rainbow kale", "glitter berries", "unicorn tears", "coconut", "starlight honey",
        "lunar lemon", "blueberries", "mermaid mint", "dragon fruit", "pixie dust",
        "butterfly pea flower", "phoenix feather", "chocolate protein powder", "grapes", "hot peppers",
        "fairy floss", "avocado", "wizard's beard", "pineapple", "rosemary"
    ]
    return random.choice(ingredients)


def get_random_number(x, y):
    """
    Returns a random integer between x and y, inclusive.

    Args:
        x (int): The lower bound (inclusive) of the random number range.
        y (int): The upper bound (inclusive) of the random number range.

    Returns:
        int: A randomly generated integer between x and y, inclusive.
    """
    return random.randint(x, y)


def calculate_llm_cost(characters, price_per_1000_tokens=0.015):
    """Estimate the cost of an LLM call, assuming roughly 4 characters per token."""
    tokens = characters / 4
    cost = (tokens / 1000) * price_per_1000_tokens
    return f"${cost:.4f}"
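One detail worth noticing in the script: get_chat_completion flattens the chat history into a single plain string instead of passing structured messages. A self-contained sketch of that flattening step, with made-up sample turns:

```python
# Mimics the history flattening in get_chat_completion: each turn is a
# (user, assistant) pair of strings, joined into one plain-text prompt.
history = [
    ("What is a variable?", "A named place to store a value."),
    ("And a list?", "An ordered collection of values."),
]
prompt = "How do I loop over a list?"

history_string = "\n\n".join(["\n".join(turn) for turn in history])
prompt_with_history = f"{history_string}\n\n{prompt}"
print(prompt_with_history)
```

This is simple, but it loses the role information (who said what), which is why most production code passes the history as a list of role-tagged messages instead.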