GBM: support hyperopt #2490
Conversation
for more information, see https://pre-commit.ci
…to gbm-hyperopt
force-pushed from 7feccf0 to 560aaae
Fixes a bug where a small, O(1) number of boosting rounds led to an infinite loop
force-pushed from 441580a to f25cc53
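The fix itself isn't visible in this excerpt. A minimal sketch of the likely failure mode and guard, with all names assumed rather than taken from the diff:

```python
def rounds_per_log_step(num_boost_round: int, log_frequency: int) -> int:
    """Boosting rounds to run between progress logs (illustrative only).

    Computed naively as num_boost_round // log_frequency, this is 0
    whenever num_boost_round < log_frequency; training a booster for 0
    rounds makes no progress, so a while-loop keyed on total rounds
    trained would never terminate. Clamping to at least 1 avoids that.
    """
    return max(1, num_boost_round // log_frequency)
```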
booster = self.train_step(
    params, lgb_train, eval_sets, eval_names, booster, self.boosting_round_log_frequency, evals_result
)

def check_progress_on_validation(
Is this function the same as in the ECD trainer? If so, we should factor it out, like you've done with append_metrics.
Not entirely; the ECD trainer has additional logic here for reducing the learning rate and increasing the batch size.
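For context, a guessed shape of the GBM-side helper; the real signature is truncated in the hunk above, so everything here is an assumption, not the PR's code:

```python
from dataclasses import dataclass

@dataclass
class ProgressTracker:
    best_eval_metric: float = float("-inf")
    rounds_since_improvement: int = 0

def check_progress_on_validation(
    progress: ProgressTracker,
    validation_metric: float,
    early_stopping_rounds: int,
) -> bool:
    """Return True to keep training, False to early-stop.

    Unlike the ECD trainer's version, this sketch only tracks
    improvement; it does not reduce the learning rate or increase the
    batch size on plateau. Assumes a higher metric is better.
    """
    if validation_metric > progress.best_eval_metric:
        progress.best_eval_metric = validation_metric
        progress.rounds_since_improvement = 0
    else:
        progress.rounds_since_improvement += 1
    return progress.rounds_since_improvement < early_stopping_rounds
```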
Runs evaluation after every boosting_rounds_per_checkpoint boosting rounds to populate training/validation metrics, as required by hyperopt. Mimics the ECD train loop logic as much as possible.
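That cadence might look roughly like the following; the function and parameter names are assumptions layered on the snippet above, not the PR's actual trainer code:

```python
import lightgbm as lgb

def train_with_checkpoints(
    params: dict,
    lgb_train: lgb.Dataset,
    eval_sets: list,
    eval_names: list,
    num_boost_round: int,
    boosting_rounds_per_checkpoint: int,
):
    """Train in chunks of boosting_rounds_per_checkpoint rounds,
    surfacing eval metrics after each chunk the way a hyperopt run
    needs them. Illustrative sketch only."""
    booster = None
    evals_result: dict = {}
    rounds_done = 0
    while rounds_done < num_boost_round:
        # never step by 0 rounds (see the infinite-loop fix above)
        step = max(1, min(boosting_rounds_per_checkpoint, num_boost_round - rounds_done))
        booster = lgb.train(
            params,
            lgb_train,
            num_boost_round=step,
            init_model=booster,          # resume from the last checkpoint
            keep_training_booster=True,  # keep the booster trainable across chunks
            valid_sets=eval_sets,
            valid_names=eval_names,
            callbacks=[lgb.record_evaluation(evals_result)],
        )
        rounds_done += step
        # a real trainer would read evals_result here, append the latest
        # metrics (append_metrics) and run check_progress_on_validation
    return booster, evals_result
```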