IR YAML
The IR YAML is an intermediate representation of a compiled pipeline or component. It is an instance of the PipelineSpec protocol buffer message type, a platform-agnostic pipeline representation protocol, which makes it possible to submit pipelines to different backends. It is considered an intermediate representation because the KFP backend compiles PipelineSpec to Argo Workflow YAML as the final pipeline definition for execution.
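For concreteness, here is a minimal sketch of how IR YAML is produced with the KFP v2 SDK. The component, pipeline, and bucket names are hypothetical:

```python
from kfp import compiler, dsl

@dsl.component
def add(a: int, b: int) -> int:
    """A primitive component; its implementation is recorded under deployment_spec."""
    return a + b

# pipeline_root populates default_pipeline_root in the IR YAML;
# the MinIO URI here is a placeholder.
@dsl.pipeline(name="add-pipeline", pipeline_root="minio://my-bucket/pipeline-output")
def add_pipeline(x: int = 1, y: int = 2):
    add(a=x, b=y)

# Compile to IR YAML: a PipelineSpec instance serialized as YAML.
compiler.Compiler().compile(pipeline_func=add_pipeline, package_path="pipeline.yaml")
```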
Unlike the v1 component YAML, the IR YAML is not intended to be written by hand. And while it is not optimized for human readability, you can still inspect it if you know a bit about its contents:
| Section | Description | Example |
|---|---|---|
| `components` | A map from the name of each component used in the pipeline to its `ComponentSpec`. `ComponentSpec` defines the interface of a component, including its inputs and outputs. For primitive components, `ComponentSpec` contains a reference to the executor containing the component implementation. For pipelines used as components, `ComponentSpec` contains a `DagSpec` instance, which includes references to the underlying primitive components. | View on GitHub |
| `deployment_spec` | A map from executor name to `ExecutorSpec`. `ExecutorSpec` contains the implementation for a primitive component. | View on GitHub |
| `root` | The steps of the outermost pipeline definition, also called the pipeline root definition. The root definition is the workflow executed when you submit the IR YAML. It is an instance of `ComponentSpec`. | View on GitHub |
| `pipeline_info` | Pipeline metadata, including the `pipelineInfo.name` field, which holds the name of your pipeline template. When you upload your pipeline, a pipeline context name is created based on this template name. The pipeline context lets the backend and the dashboard associate artifacts and executions from pipeline runs that use the same pipeline template. For example, you can use a pipeline context to determine the best model by comparing metrics and artifacts from multiple runs of the same training pipeline. | View on GitHub |
| `sdk_version` | The version of the KFP SDK used to compile the pipeline. | View on GitHub |
| `schema_version` | The version of the `PipelineSpec` schema used for the IR YAML. | View on GitHub |
| `default_pipeline_root` | The remote storage root path, such as a MinIO URI or Google Cloud Storage URI, where the pipeline output is written. | View on GitHub |
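To see these sections for yourself, you can load a compiled IR YAML and list its top-level keys. A minimal sketch, assuming the `pipeline.yaml` produced above; note that in the serialized YAML the PipelineSpec field names appear in camelCase (for example, `deploymentSpec` and `defaultPipelineRoot`):

```python
import yaml

# Load the IR YAML emitted by the compiler (see the compile sketch above).
with open("pipeline.yaml") as f:
    spec = yaml.safe_load(f)

# The top-level keys correspond to the sections in the table above,
# serialized in camelCase, e.g.:
# components, defaultPipelineRoot, deploymentSpec, pipelineInfo,
# root, schemaVersion, sdkVersion
print(sorted(spec))

# pipelineInfo.name is the template name used to create the pipeline context.
print(spec["pipelineInfo"]["name"])
```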
Next steps
- Read an overview of Kubeflow Pipelines.
- Follow the pipelines quickstart guide to deploy Kubeflow and run a sample pipeline directly from the Kubeflow Pipelines UI.