@hackers wrote:
Hello,
I am trying to use xgboost for a binary classification problem.
```r
# xgboost parameters
param <- list("objective"        = "binary:logistic",  # binary classification
              "num_class"        = 2,    # number of classes
              "eval_metric"      = "error",  # evaluation metric
              "nthread"          = 8,    # number of threads to be used
              "max_depth"        = 16,   # maximum depth of tree
              "eta"              = 0.3,  # step size shrinkage
              "gamma"            = 0,    # minimum loss reduction
              "subsample"        = 1,    # part of data instances to grow tree
              "colsample_bytree" = 1,    # subsample ratio of columns when constructing each tree
              "min_child_weight" = 12)   # minimum sum of instance weight needed in a child
```
Since the classification is binary, I am using "error" as the eval_metric, as per:
However, when I run the xgboost model:
```r
system.time(
  bst.cv <- xgb.cv(param      = param,
                   data       = train.matrix,
                   label      = train$Survived,
                   nfold      = 10,
                   nrounds    = nround.cv,
                   prediction = TRUE,
                   verbose    = FALSE)
)
```
I am getting an error:
```
Error in xgb.iter.eval(fd$booster, fd$watchlist, i - 1, feval) :
  label and prediction size not match
hint: use merror or mlogloss for multi-class classification
```
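For reference, here is a minimal self-contained sketch that should reproduce the same behaviour; the bundled agaricus data and the trimmed-down parameter list are stand-ins for my actual Titanic data, not the exact code above:

```r
library(xgboost)

# Bundled binary-classification dataset, used here as a stand-in for my data
data(agaricus.train, package = "xgboost")

param <- list(objective   = "binary:logistic",
              num_class   = 2,        # number of classes, as in my parameter list above
              eval_metric = "error",
              max_depth   = 6,
              eta         = 0.3)

# For me this stops with "label and prediction size not match"
bst.cv <- xgb.cv(params     = param,
                 data       = agaricus.train$data,
                 label      = agaricus.train$label,
                 nfold      = 5,
                 nrounds    = 10,
                 prediction = TRUE,
                 verbose    = FALSE)
```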
However, if I change the eval_metric to "merror", it works fine.
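In other words, the only difference between the failing run and the working one is the metric; everything else in the parameter list stays the same:

```r
param$eval_metric <- "error"   # fails: "label and prediction size not match"
param$eval_metric <- "merror"  # runs fine, even though the problem is binary
```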
Why this discrepancy, and how can I resolve it?
Posts: 2
Participants: 2