feat(deps): Update dependency kfp-server-api to v2.2.0 #563
This PR contains the following updates:

- kfp-server-api: `==2.0.5` -> `==2.2.0`
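For reference, a minimal sketch of what the updated pin could look like in a consuming project's requirements file; the actual file name and pin style used in this repository may differ:

```
# hypothetical requirements.txt entry after this update
kfp-server-api==2.2.0
```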
Release Notes
kubeflow/pipelines (kfp-server-api)
v2.2.0
Compare Source
Features

- … `preview.llm.rlhf_pipeline` (22a98d9)
- … `preview.llm.rlhf_pipeline` in real time (3d8069b)
- … `t5-xxl` with the `preview.llm.rlhf_pipeline` (ff7f660)
- … `text` and `chat` variants of `bison@001` with the `preview.llm.rlhf_pipeline` (ac39931)

Bug Fixes

- … `preview.llm.rlhf_pipeline` runs if no `tensorboard_id` is provided (ff0d0a7)

Other Pull Requests
v2.1.0
Compare Source
Features

- … `num_microbatches` to `_implementation.llm` training components (685634d)
- … `preview.llm.rlhf_pipeline` (3dbf3cf)
- … `preview.llm.infer_pipeline` (b7ea6e7)
- … `preview.llm.rlhf_pipeline` (361c16f)
- … `preview.llm` pipelines (9007fb0)
- … `text-bison@002` model by default (83cb88f)
- … `dsl.OutputPath` read logic #localexecution (#10334) (654bbde)

Bug Fixes

- … `preview.llm.bulk_inference` after tuning third-party models with RLHF (b9e08de)
- … `preview.llm.rlhf_pipeline` run instead of reusing cached result (075d58f)
- … `preview.llm.rlhf_pipeline` (2e2ba9e)
- … `large_model_reference` as `model_reference_name` when uploading models from `preview.llm.rlhf_pipeline` instead of hardcoding value as `text-bison@001` (f51a930)
- … `llama-2-7b` for the base reward model when tuning `llama-2-13` with the `preview.llm.rlhf_pipeline` (227eab1)
- … `dsl.OneOf` with multiple consumers cannot be compiled (#10452) (21c5ffe)
- … `DockerRunner` logs (#10354) (86b7e23)

Other Pull Requests
2.0.5 (2023-12-08)
Features
Bug Fixes
- … `tune-type` label when uploading models tuned by `preview.llm.rlhf_pipeline` (708b8bd)

Other Pull Requests
2.0.4 (2023-12-01)
Features
- … `preview.llm.rlhf_pipeline` (f67cbfa)
- … `preview.llm.infer_pipeline` (d8f2c14)
- … `create_custom_training_job_from_component` (91f50da)
- … `preview.llm.rlhf_pipeline` components for more readability (c23b720)
- … `preview.llm.rlhf_pipeline` components for more readability (bcd5922)
- … `preview.llm.rlhf_pipeline` components for more readability (a927984)
- … `.after()` referencing task in `ParallelFor` group (#10257) (11f60d8)

Bug Fixes
Other Pull Requests
2.0.3 (2023-10-27)
Features
- … `_implementation.llm.chat_dataset_preprocessor` (99fd201)
- … `model_checkpoint` optional for `preview.llm.infer_pipeline` (e8fb699)
- … `DataflowFlexTemplateJobOp` to GA namespace (now `v1.dataflow.DataflowFlexTemplateJobOp`) (faba922)
- … `dsl.OneOf` (#10067) (2d3171c)

Bug Fixes

- … `dsl.importer` argument is provided by loop variable (#10116) (73d51c8)

Other Pull Requests
2.0.2 (2023-10-11)
Features
- … `persistent_resource_id` to preview GCPC custom job components/utils (fc1f12b)
- … `v1` `custom_job` and `gcp_launcher` container code to `preview` (abf05f4)
- … `create_templated_custom_job` for Templated Custom Job Launcher (e307545)
- … `PipelineTaskFinalStatus` in tasks that use `.ignore_upstream_failure()` (#10010) (e99f270)

Bug Fixes
Configuration
📅 Schedule: Branch creation - "before 11pm" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
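For context, a minimal Renovate configuration sketch that would roughly produce the schedule, automerge, and rebasing behavior listed above; the repository's actual Renovate configuration may set these options differently:

```json5
// hypothetical renovate.json5 matching the behavior described above;
// the repository's real configuration may differ
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "schedule": ["before 11pm"],
  "automerge": true,
  "rebaseWhen": "conflicted"
}
```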
This PR has been generated by Renovate Bot.