scale_pos_weight using XGBoost's Learning API

I see that it is possible to add a weight for unbalanced classification problems in XGBoost's Scikit-Learn API through the scale_pos_weight parameter. Does it have an equivalent in the Learning (native) API? If not,

  • is there a reason behind this?
  • could this corrective factor/weight be implemented some other way using the Learning API?

Topic xgboost python machine-learning

Category Data Science


Yes, you can use scale_pos_weight in the native Python (Learning) API as well; it simply goes in the params dictionary passed to xgboost.train. E.g.,

import xgboost

params = {'objective': 'binary:logistic',
          'scale_pos_weight': 2.5}
model = xgboost.train(params, dmat)  # dmat is an xgboost.DMatrix of the training data
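A common heuristic is to set scale_pos_weight to the ratio of negative to positive examples in the training labels. A minimal sketch (the toy label counts below are illustrative, not from real data):

```python
# Sketch: derive scale_pos_weight from the class balance of the labels.
# The toy label list is illustrative; in practice use your training labels.
labels = [0] * 90 + [1] * 10  # 90 negatives, 10 positives

negatives = sum(1 for y in labels if y == 0)
positives = sum(1 for y in labels if y == 1)
scale_pos_weight = negatives / positives  # 90 / 10 = 9.0

params = {
    'objective': 'binary:logistic',
    'scale_pos_weight': scale_pos_weight,
}

# Training then proceeds exactly as above (needs an xgboost.DMatrix):
# import xgboost
# dtrain = xgboost.DMatrix(X, label=labels)
# model = xgboost.train(params, dtrain)
```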

https://xgboost.readthedocs.io/en/latest/parameter.html#parameters-for-tree-booster
https://github.com/dmlc/xgboost/blob/master/demo/kaggle-higgs/speedtest.py
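To the second bullet: even without scale_pos_weight, the same correction can be expressed with per-instance weights, since the Learning API's xgboost.DMatrix accepts a weight argument. Giving every positive example a weight equal to the negative/positive ratio has the same effect on the loss. A sketch (the toy labels are illustrative):

```python
# Sketch: emulate scale_pos_weight with per-instance weights.
labels = [0] * 90 + [1] * 10               # illustrative toy labels
ratio = labels.count(0) / labels.count(1)  # negative/positive ratio = 9.0

# Positives get the ratio as their weight, negatives keep weight 1.
weights = [ratio if y == 1 else 1.0 for y in labels]

# With real data, the weights are attached to the DMatrix:
# import xgboost
# dtrain = xgboost.DMatrix(X, label=labels, weight=weights)
# model = xgboost.train({'objective': 'binary:logistic'}, dtrain)
```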
