Contents
- The origin of the validation set
- The randomness problem and the origin of cross-validation
- K-fold cross-validation
- Leave-one-out (LOO-CV)
- Code implementation
- Validation and Cross Validation
- Testing train_test_split
- Using cross-validation
- Revisiting grid search
- cross_val_score parameters
The origin of the validation set
Splitting the data into only a training set and a test set creates a problem: since the test set is used to pick hyperparameters, the model ends up overfitting the test set.
The fix is to split the data into a training set, a validation set, and a test set; a common ratio is 8 : 1 : 1.
The validation set is the dataset used to tune hyperparameters.
The test set keeps its original role: it plays no part in building the model, stays completely unseen by it, and serves only to measure the performance of the final model. A minimal way to produce such a split is sketched below.
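As a rough sketch (not part of the original code), an 8 : 1 : 1 split can be produced with two calls to scikit-learn's train_test_split; the digits dataset and random_state=666 are borrowed from the code later in this post, while the variable names and the two-step split itself are illustrative choices:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split

digits = datasets.load_digits()
X, y = digits.data, digits.target

# First hold out 10% of the data as the test set (completely untouched afterwards).
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.1, random_state=666)
# Then take 1/9 of the remaining 90% as the validation set,
# which gives roughly an 8 : 1 : 1 train / validation / test split.
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=1/9, random_state=666)
```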
The randomness problem and the origin of cross-validation
The validation set is drawn at random from the original data, so the model can still overfit that particular validation set;
if there is only one validation set and it happens to contain extreme samples, the resulting model may be unreliable. This is what motivates cross-validation (Cross Validation). A small sketch of the randomness problem follows.
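This sketch is my own illustration, not from the original post: the "best" hyperparameter chosen against a single random split can change when the split changes, which is exactly the randomness that cross-validation averages out. The seed values and the restriction to searching only k are arbitrary.

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = datasets.load_digits()
X, y = digits.data, digits.target

# Repeat the same small search on differently seeded splits;
# the winning k may differ from seed to seed.
for seed in (1, 2, 3):
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=seed)
    best_k, best_score = 0, 0
    for k in range(2, 11):
        score = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train).score(X_val, y_val)
        if score > best_score:
            best_k, best_score = k, score
    print("seed =", seed, "best k =", best_k)
```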
K-fold cross-validation
K-fold cross-validation: K-folds Cross Validation.
After setting the test data aside, split the training data into k folds;
use k-1 folds for training and the remaining fold for validation. That held-out fold is the validation set and is used to tune hyperparameters.
Drawback: every search step now trains k models, so the whole process is roughly k times slower.
Example: suppose the training data is split into 5 folds; a KFold sketch follows.
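A minimal sketch of the k-fold mechanism using scikit-learn's KFold with k = 5. It is illustrative only: n_neighbors=3, shuffle=True, and random_state=666 are assumptions here, not choices made in the original post.

```python
import numpy as np
from sklearn import datasets
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier

digits = datasets.load_digits()
X, y = digits.data, digits.target

kf = KFold(n_splits=5, shuffle=True, random_state=666)
scores = []
for train_idx, val_idx in kf.split(X):
    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(X[train_idx], y[train_idx])                  # train on k-1 folds
    scores.append(knn.score(X[val_idx], y[val_idx]))     # validate on the held-out fold

print(np.mean(scores))  # average validation score over the 5 folds
```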
Leave-one-out (LOO-CV)
Leave-one-out: Leave-One-Out Cross Validation.
In the extreme case, k-fold CV turns into leave-one-out cross-validation:
if the training set has m samples, split it into m folds; train on m-1 of them and check whether the prediction for the single remaining sample is correct.
Advantage: k-fold CV is still affected by the randomness of how the k folds are assigned, whereas LOO-CV is completely free of that randomness and comes closest to the model's true performance.
Drawback: the computational cost is huge. (A sketch with sklearn's LeaveOneOut follows.)
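A minimal sketch of LOO-CV via scikit-learn's LeaveOneOut, again my own illustration rather than the original code; the classifier and n_neighbors=3 are assumptions.

```python
from sklearn import datasets
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

digits = datasets.load_digits()
X, y = digits.data, digits.target

# m samples -> m folds of size 1, so m models are fitted (this is the expensive part).
loo = LeaveOneOut()
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=loo)
print(scores.mean())  # fraction of samples predicted correctly when left out
```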
Code implementation
Validation and Cross Validation
import numpy as np
from sklearn import datasets

digits = datasets.load_digits()
X = digits.data
y = digits.target
Testing train_test_split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=666)

from sklearn.neighbors import KNeighborsClassifier

best_k, best_p, best_score = 0, 0, 0
for k in range(2, 11):       # number of neighbors k, searched over 2 to 10
    for p in range(1, 6):    # Minkowski distance parameter p, searched over 1 to 5
        knn_clf = KNeighborsClassifier(weights="distance", n_neighbors=k, p=p)
        knn_clf.fit(X_train, y_train)
        score = knn_clf.score(X_test, y_test)
        if score > best_score:
            best_k, best_p, best_score = k, p, score

print("Best K =", best_k)
print("Best P =", best_p)
print("Best Score =", best_score)
'''
Best K = 3
Best P = 4
Best Score = 0.986091794159
'''
Using cross-validation
from sklearn.model_selection import cross_val_score

knn_clf = KNeighborsClassifier()
cross_val_score(knn_clf, X_train, y_train)
# Defaults to 3-fold cross-validation: X_train is split into three folds,
# and the result is one score per fold.
# array([ 0.98895028, 0.97777778, 0.96629213])

best_k, best_p, best_score = 0, 0, 0
for k in range(2, 11):
    for p in range(1, 6):
        knn_clf = KNeighborsClassifier(weights="distance", n_neighbors=k, p=p)
        scores = cross_val_score(knn_clf, X_train, y_train)
        score = np.mean(scores)
        if score > best_score:
            best_k, best_p, best_score = k, p, score

print("Best K =", best_k)
print("Best P =", best_p)
print("Best Score =", best_score)  # lower than the train_test_split score, but more trustworthy
'''
Best K = 2
Best P = 2
Best Score = 0.982359987401
'''

best_knn_clf = KNeighborsClassifier(weights="distance", n_neighbors=2, p=2)
best_knn_clf.fit(X_train, y_train)
best_knn_clf.score(X_test, y_test)
# 0.98052851182197498
Revisiting grid search
from sklearn.model_selection import GridSearchCV

param_grid = [
    {
        'weights': ['distance'],
        'n_neighbors': [i for i in range(2, 11)],  # 9 values
        'p': [i for i in range(1, 6)]              # 5 values
    }
]

grid_search = GridSearchCV(knn_clf, param_grid, verbose=1)
grid_search.fit(X_train, y_train)
'''
Fitting 3 folds for each of 45 candidates, totalling 135 fits
[Parallel(n_jobs=1)]: Done 135 out of 135 | elapsed: 1.9min finished
GridSearchCV(cv=None, error_score='raise',
       estimator=KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=10, p=5,
           weights='distance'),
       fit_params={}, iid=True, n_jobs=1,
       param_grid=[{'weights': ['distance'], 'n_neighbors': [2, 3, 4, 5, 6, 7, 8, 9, 10], 'p': [1, 2, 3, 4, 5]}],
       pre_dispatch='2*n_jobs', refit=True, return_train_score=True,
       scoring=None, verbose=1)
'''
grid_search.best_score_
# 0.98237476808905377

grid_search.best_params_
# {'n_neighbors': 2, 'p': 2, 'weights': 'distance'}

best_knn_clf = grid_search.best_estimator_
best_knn_clf.score(X_test, y_test)
# 0.98052851182197498
cross_val_score parameters
The cv parameter sets the number of folds used for cross-validation; GridSearchCV accepts the same parameter.
cross_val_score(knn_clf, X_train, y_train, cv=5)  # use 5 folds instead of the default 3
# array([ 0.99543379, 0.96803653, 0.98148148, 0.96261682, 0.97619048])

grid_search = GridSearchCV(knn_clf, param_grid, verbose=1, cv=5)  # GridSearchCV with 5-fold CV