import xgboost as xgb
from sklearn.metrics import accuracy_score

xgb_clf = xgb.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
                            colsample_bynode=1, colsample_bytree=0.7, gamma=0.0, gpu_id=-1,
                            importance_type='gain', interaction_constraints='',
                            learning_rate=0.1, max_delta_step=0, max_depth=10,
                            min_child_weight=3, monotone_constraints='()',
                            n_estimators=100, n_jobs=0, num_parallel_tree=1,
                            objective='binary:logistic', random_state=50, reg_alpha=1.2,
                            reg_lambda=1.6, scale_pos_weight=1.0, subsample=0.9,
                            tree_method='exact', validate_parameters=1, verbosity=None)
xgb_clf.fit(X1, y1)
# predict on the train set
print('train set')
y2_pred = xgb_clf.predict(X1)
# print the accuracy
print('XGBoost Classifier model accuracy score for train set: {0:0.4f}'.format(accuracy_score(y1, y2_pred)))
I have built the model using XGBClassifier(). Then:
# pickle ships with the Python standard library; no "pip install pickle-mixin" is needed
import pickle

# saving the model to the local file system
filename = 'finalized_model.pickle'
pickle.dump(xgb_clf, open(filename, 'wb'))
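As an aside, a pickle file is tied to the XGBoost version that created it; XGBoost also has native save_model()/load_model() methods on the sklearn wrapper that are more version-stable. A minimal sketch, assuming XGBoost >= 1.0 for the JSON format (the filename is just an example):

# alternative: XGBoost's native serialization
xgb_clf.save_model('finalized_model.json')

loaded = xgb.XGBClassifier()
loaded.load_model('finalized_model.json')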
When I pass the train data set to check whether the pickled model is working:
# prediction using the saved model
loaded_model = pickle.load(open(filename, 'rb'))
prediction = loaded_model.predict(X1)  # X1 is the train set
print(prediction)

Output: [0 0 0 … 1 1 1]

It gives the expected output.
But when I give a single list as input, it raises an error:
# prediction using the saved model
loaded_model = pickle.load(open(filename, 'rb'))
prediction = loaded_model.predict([[62.0, 9.0, 16.0, 39.0, 35.0, 205.0]])
print(prediction)

TypeError: Input data can not be a list.
When I remove one pair of brackets, it gives the same error:
# prediction using the saved model
loaded_model = pickle.load(open(filename, 'rb'))
prediction = loaded_model.predict([62.0, 9.0, 16.0, 39.0, 35.0, 205.0])
print(prediction)

TypeError: Input data can not be a list.
And when I remove both brackets from the input:
# prediction using the saved model
loaded_model = pickle.load(open(filename, 'rb'))
prediction = loaded_model.predict(62.0, 9.0, 16.0, 39.0, 35.0, 205.0)
print(prediction)

TypeError: predict() takes from 2 to 6 positional arguments but 7 were given
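For reference, the sklearn-style predict() expects a 2-D array-like of shape (n_samples, n_features), and older XGBoost releases reject plain Python lists outright. A minimal sketch of what should presumably work, assuming the six values are the model's features in training order:

import pickle
import numpy as np

loaded_model = pickle.load(open('finalized_model.pickle', 'rb'))

# shape the single sample as a 2-D array: 1 row, 6 features
sample = np.array([[62.0, 9.0, 16.0, 39.0, 35.0, 205.0]])
prediction = loaded_model.predict(sample)
print(prediction)

If X1 was a pandas DataFrame, some XGBoost versions also validate feature names at predict time; in that case, wrapping the sample in a DataFrame with the same columns as X1 avoids a mismatch.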