Feature engineering: find any information relevant to the problem and convert it into numerical values for the feature matrix.
Categorical Features
Categorical data is a common type of non-numerical data.
One approach is one-hot encoding: add extra columns in which 1 and 0 indicate the presence or absence of each category value. The result can be stored as a sparse matrix.
data = [
    {'price': 850000, 'rooms': 4, 'neighborhood': 'Queen Anne'},
    {'price': 700000, 'rooms': 3, 'neighborhood': 'Fremont'},
    {'price': 650000, 'rooms': 3, 'neighborhood': 'Wallingford'},
    {'price': 600000, 'rooms': 2, 'neighborhood': 'Fremont'}
]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer(sparse=False, dtype=int)
vec.fit_transform(data)
vec.get_feature_names_out()
array(['neighborhood=Fremont', 'neighborhood=Queen Anne',
       'neighborhood=Wallingford', 'price', 'rooms'], dtype=object)
vec = DictVectorizer(sparse=True, dtype=int)
vec.fit_transform(data)
OneHotEncoder and FeatureHasher are two other tools for the same task.
from sklearn.preprocessing import OneHotEncoder
import pandas as pd
# First convert the list of dicts to a DataFrame, since OneHotEncoder handles that format more naturally
df = pd.DataFrame(data)
# Instantiate OneHotEncoder
encoder = OneHotEncoder(sparse_output=False)
# Encode the neighborhood column
encoded_data = encoder.fit_transform(df[['neighborhood']])
# Get the names of the encoded features
encoded_feature_names = encoder.get_feature_names_out(['neighborhood'])
# Convert the encoded data to a DataFrame for easy inspection
encoded_df = pd.DataFrame(encoded_data, columns=encoded_feature_names)
# Add the encoded features back to the original data (dropping the `neighborhood` column first)
df_without_neighborhood = df.drop('neighborhood', axis=1)
final_df = pd.concat([df_without_neighborhood, encoded_df], axis=1)
print(final_df)
from sklearn.feature_extraction import FeatureHasher
# Instantiate FeatureHasher, specifying the number of output features.
# n_features can be tuned: too small risks hash collisions, too large yields an unnecessarily wide sparse matrix
hasher = FeatureHasher(n_features=4, input_type='dict')
# Transform the data with FeatureHasher
hashed_features = hasher.transform(data)
# Convert to an array to inspect the result (FeatureHasher produces a sparse matrix by default)
hashed_features.toarray()
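One practical property of FeatureHasher is that `transform` is stateless: columns are chosen by a hash function rather than a learned vocabulary, so category values never seen before can be transformed without refitting. A minimal sketch ('Ballard' here is a hypothetical new neighborhood, not part of the data above):

```python
from sklearn.feature_extraction import FeatureHasher

# FeatureHasher needs no fit step: the column for each feature is chosen
# by hashing, so unseen category values can be transformed directly.
hasher = FeatureHasher(n_features=8, input_type='dict')
new_rows = [{'neighborhood': 'Ballard'},   # hypothetical unseen value
            {'neighborhood': 'Fremont'}]
hashed = hasher.transform(new_rows)
print(hashed.toarray().shape)  # (2, 8)
```

This is also why FeatureHasher suits streaming data, where the full set of categories is not known up front.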
Text Features
Text must be converted into a set of numerical values. CountVectorizer can be used for word counts:
sample = ['problem of evil',
          'evil queen',
          'horizon problem']
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer()
X = vec.fit_transform(sample)
import pandas as pd
pd.DataFrame(X.toarray(), columns=vec.get_feature_names_out())
A better approach is TF-IDF (term frequency-inverse document frequency), i.e. the product of a word's term frequency and its inverse document frequency.
from sklearn.feature_extraction.text import TfidfVectorizer
vec = TfidfVectorizer()
X = vec.fit_transform(sample)
pd.DataFrame(X.toarray(), columns=vec.get_feature_names_out())
Image Features
Image features can be extracted with scikit-image.
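As a sketch of what scikit-image offers, the snippet below computes HOG (histogram of oriented gradients) descriptors for a synthetic image; the parameter values are illustrative, not prescriptive:

```python
import numpy as np
from skimage.feature import hog

# A synthetic 64x64 grayscale "image"; any 2-D array works here.
rng = np.random.default_rng(0)
image = rng.random((64, 64))

# HOG summarizes local gradient orientations into a flat feature
# vector that can be placed directly into a feature matrix.
features = hog(image, orientations=9,
               pixels_per_cell=(16, 16),
               cells_per_block=(2, 2))
print(features.shape)  # a 1-D feature vector
```

Each image then contributes one row of fixed length to the feature matrix, just like the categorical and text examples above.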
Derived Features
New features derived from the input features through mathematical transformations.
For example, basis function regression:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.array([1, 2, 3, 4, 5])
y = np.array([4, 2, 1, 3, 7])
plt.scatter(x, y);
from sklearn.linear_model import LinearRegression
X = x[:, np.newaxis]
model = LinearRegression().fit(X, y)
yfit = model.predict(X)
plt.scatter(x, y)
plt.plot(x, yfit);
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=3, include_bias=False)
X2 = poly.fit_transform(X)
print(X2)
model = LinearRegression().fit(X2, y)
yfit = model.predict(X2)
plt.scatter(x, y)
plt.plot(x, yfit)
Imputation of Missing Values
Use SimpleImputer to fill missing values:
from numpy import nan
X = np.array([[ nan, 0,   3 ],
              [ 3,   7,   9 ],
              [ 3,   5,   2 ],
              [ 4,   nan, 6 ],
              [ 8,   8,   1 ]])
y = np.array([14, 16, -1, 8, -5])
from sklearn.impute import SimpleImputer
imp = SimpleImputer(strategy='mean')
X2 = imp.fit_transform(X)
X2
model = LinearRegression().fit(X2, y)
model.predict(X2)
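As a quick sanity check on what `strategy='mean'` did, the imputed entry in column 0 should equal the mean of that column's observed values:

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[np.nan, 0, 3],
              [3, 7, 9],
              [3, 5, 2],
              [4, np.nan, 6],
              [8, 8, 1]])

imp = SimpleImputer(strategy='mean')
X2 = imp.fit_transform(X)

# The NaN in column 0 is replaced by the mean of the observed
# entries in that column: (3 + 3 + 4 + 8) / 4 = 4.5
print(X2[0, 0])  # 4.5
```

The learned column means are also available afterwards as `imp.statistics_`.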
Feature Pipelines
Use a pipeline to chain multiple steps together.
from sklearn.pipeline import make_pipeline
model = make_pipeline(SimpleImputer(strategy='mean'),
                      PolynomialFeatures(degree=2),
                      LinearRegression())
model.fit(X, y)
print(y)
print(model.predict(X))
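A useful consequence of the pipeline is that statistics learned during fit (here, the imputer's column means) are reused at predict time, so new samples containing NaN can be passed in directly. A minimal sketch, with a made-up new sample:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

X = np.array([[np.nan, 0, 3],
              [3, 7, 9],
              [3, 5, 2],
              [4, np.nan, 6],
              [8, 8, 1]])
y = np.array([14, 16, -1, 8, -5])

model = make_pipeline(SimpleImputer(strategy='mean'),
                      PolynomialFeatures(degree=2),
                      LinearRegression())
model.fit(X, y)

# A hypothetical new sample with a missing value: the pipeline imputes
# it using the mean learned from the training data, expands polynomial
# features, then predicts -- no manual preprocessing needed.
X_new = np.array([[np.nan, 6, 4]])
print(model.predict(X_new).shape)  # (1,)
```

This keeps the preprocessing and the model in one object, which also prevents accidentally fitting the transformers on test data.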
References:
[1] VanderPlas, Jake. Python Data Science Handbook [M]. Posts & Telecom Press, 2018.