PipelineRuns
- PipelineRuns
  - Overview
  - Configuring a PipelineRun
    - Specifying the target Pipeline
    - Specifying Resources
    - Specifying Task-level ComputeResources
    - Specifying Parameters
    - Specifying custom ServiceAccount credentials
    - Mapping ServiceAccount credentials to Tasks
    - Specifying a Pod template
    - Specifying taskRunSpecs
    - Specifying Workspaces
    - Specifying LimitRange values
    - Configuring a failure timeout
  - PipelineRun status
  - Cancelling a PipelineRun
  - Gracefully cancelling a PipelineRun
  - Gracefully stopping a PipelineRun
  - Pending PipelineRuns
Overview
A PipelineRun allows you to instantiate and execute a Pipeline on-cluster.
A Pipeline specifies one or more Tasks in the desired order of execution. A PipelineRun executes the Tasks in the Pipeline in the order they are specified until all Tasks have executed successfully or a failure occurs.
Note: A PipelineRun automatically creates corresponding TaskRuns for every Task in your Pipeline.
The Status field tracks the current state of a PipelineRun, and can be used to monitor progress.
This field contains the status of every TaskRun, as well as the full PipelineSpec used to instantiate this PipelineRun, for full auditability.
Configuring a PipelineRun
A PipelineRun definition supports the following fields (a minimal example follows the list):
- Required:
  - apiVersion - Specifies the API version. For example, tekton.dev/v1beta1.
  - kind - Indicates that this resource object is a PipelineRun object.
  - metadata - Specifies the metadata that uniquely identifies the PipelineRun object. For example, a name.
  - spec - Specifies the configuration information for this PipelineRun object.
  - pipelineRef or pipelineSpec - Specifies the target Pipeline.
- Optional:
  - resources - Specifies the PipelineResources to provision for executing the target Pipeline.
  - params - Specifies the desired execution parameters for the Pipeline.
  - serviceAccountName - Specifies a ServiceAccount object that supplies specific execution credentials for the Pipeline.
  - status - Specifies options for cancelling a PipelineRun.
  - taskRunSpecs - Specifies a list of PipelineRunTaskSpec which allows for setting ServiceAccountName, Pod template, and Metadata for each task. This overrides the Pod template set for the entire Pipeline.
  - timeout - Specifies the timeout before the PipelineRun fails. timeout is deprecated and will eventually be removed, so consider using timeouts instead.
  - timeouts - Specifies the timeout before the PipelineRun fails. timeouts allows more granular timeout configuration at the pipeline, tasks, and finally levels.
  - podTemplate - Specifies a Pod template to use as the basis for the configuration of the Pod that executes each Task.
  - workspaces - Specifies a set of workspace bindings which must match the names of workspaces declared in the pipeline being used.
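Putting the required fields together, a minimal PipelineRun that runs an existing Pipeline might look like the following sketch (the names mypipelinerun and mypipeline are placeholders):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: mypipelinerun
spec:
  pipelineRef:
    name: mypipeline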
Specifying the target Pipeline
You must specify the target Pipeline that you want the PipelineRun to execute, either by referencing an existing Pipeline definition, or embedding a Pipeline definition directly in the PipelineRun.
To specify the target Pipeline by reference, use the pipelineRef field:
spec:
pipelineRef:
name: mypipeline
To embed a Pipeline definition in the PipelineRun, use the pipelineSpec field:
spec:
pipelineSpec:
tasks:
- name: task1
taskRef:
name: mytask
The Pipeline in the pipelineSpec example displays morning and evening greetings. Once you create and execute it, you can check the logs for its Pods:
kubectl logs $(kubectl get pods -o name | grep pipelinerun-echo-greetings-echo-good-morning)
Good Morning, Bob!
kubectl logs $(kubectl get pods -o name | grep pipelinerun-echo-greetings-echo-good-night)
Good Night, Bob!
You can also embed a Task definition in the embedded Pipeline definition:
spec:
pipelineSpec:
tasks:
- name: task1
taskSpec:
steps: ...
In the taskSpec in pipelineSpec example, it's Tasks all the way down!
You can also specify labels and annotations with taskSpec which are propagated to each taskRun and then to the respective Pods. These labels can be used to identify and filter Pods for further actions (such as collecting Pod metrics or cleaning up completed Pods with certain labels), even though they are part of a single Pipeline.
spec:
pipelineSpec:
tasks:
- name: task1
taskSpec:
metadata:
labels:
pipeline-sdk-type: kfp
# ...
- name: task2
taskSpec:
metadata:
labels:
pipeline-sdk-type: tfx
# ...
Tekton Bundles
Note: This is only allowed if enable-tekton-oci-bundles is set to "true" in the feature-flags configmap, see install.md.
You may also use a Tekton Bundle to reference a pipeline defined remotely.
spec:
pipelineRef:
name: mypipeline
bundle: docker.io/myrepo/mycatalog:v1.0
The syntax and caveats are similar to using Tekton Bundles for Task references in Pipelines or TaskRuns.
Tekton Bundles may be constructed with any toolsets that produce valid OCI image artifacts so long as the artifact adheres to the contract.
Remote Pipelines
A pipelineRef field may specify a Pipeline in a remote location such as git. Support for specific types of remote will depend on the Resolvers your cluster's operator has installed. For more information please check the Tekton resolution repo. The below example demonstrates referencing a Pipeline in git:
spec:
pipelineRef:
resolver: git
params:
- name: url
value: https://github.com/tektoncd/catalog.git
- name: revision
value: abc123
- name: pathInRepo
value: /pipeline/buildpacks/0.1/buildpacks.yaml
Specifying Resources
⚠️ PipelineResources are deprecated. Consider using replacement features instead. Read more in the documentation and TEP-0074.
A Pipeline requires PipelineResources to provide inputs and store outputs for the Tasks that comprise it. You must provision those resources in the resources field in the spec section of the PipelineRun definition.
A Pipeline may require you to provision a number of different resources. For example:
- When executing a Pipeline against a pull request, the triggering system must specify the commit-ish of a git resource.
- When executing a Pipeline manually against your own environment, you must provision your GitHub fork using the git resource; your image registry using the image resource; and your Kubernetes cluster using the cluster resource.
You can reference a PipelineResource using the resourceRef field:
spec:
resources:
- name: source-repo
resourceRef:
name: skaffold-git
- name: web-image
resourceRef:
name: skaffold-image-leeroy-web
- name: app-image
resourceRef:
name: skaffold-image-leeroy-app
You can also embed a PipelineResource definition in the PipelineRun using the resourceSpec field:
spec:
resources:
- name: source-repo
resourceSpec:
type: git
params:
- name: revision
value: v0.32.0
- name: url
value: https://github.com/GoogleContainerTools/skaffold
- name: web-image
resourceSpec:
type: image
params:
- name: url
value: gcr.io/christiewilson-catfactory/leeroy-web
- name: app-image
resourceSpec:
type: image
params:
- name: url
value: gcr.io/christiewilson-catfactory/leeroy-app
Note: All persistentVolumeClaims specified within a PipelineRun are bound until their respective Pods or the entire PipelineRun are deleted. This also applies to all persistentVolumeClaims generated internally.
Specifying Task-level ComputeResources
(alpha only) (This feature is under development and not functional yet. Stay tuned!)
Task-level compute resources can be configured in PipelineRun.TaskRunSpecs.ComputeResources or TaskRun.ComputeResources.
e.g.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: pipeline
spec:
tasks:
- name: task
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pipelinerun
spec:
pipelineRef:
name: pipeline
taskRunSpecs:
- pipelineTaskName: task
computeResources:
requests:
cpu: 2
Further details and examples can be found in Compute Resources in Tekton.
Specifying Parameters
(See also Specifying Parameters in Tasks)
You can specify Parameters that you want to pass to the Pipeline during execution, including different values of the same parameter for different Tasks in the Pipeline.
Note: You must specify all the Parameters that the Pipeline expects. Parameters that have default values specified in the Pipeline are not required to be provided by the PipelineRun.
For example:
spec:
params:
- name: pl-param-x
value: "100"
- name: pl-param-y
value: "500"
You can pass in extra Parameters if needed depending on your use cases. An example use case is when your CI system autogenerates PipelineRuns and it has Parameters it wants to provide to all PipelineRuns. Because you can pass in extra Parameters, you don't have to go through the complexity of checking each Pipeline and providing only the required params.
Propagated Parameters
When using an inlined spec, parameters from the parent PipelineRun will be propagated to any inlined specs without needing to be explicitly defined. This allows authors to simplify specs by automatically propagating top-level parameters down to other inlined resources.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pr-echo-
spec:
params:
- name: HELLO
value: "Hello World!"
- name: BYE
value: "Bye World!"
pipelineSpec:
tasks:
- name: echo-hello
taskSpec:
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
- name: echo-bye
taskSpec:
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.BYE)"
On executing the pipeline run, the parameters will be interpolated during resolution. The specifications are not mutated before storage, so they remain the same. The status is updated.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pr-echo-szzs9
...
spec:
params:
- name: HELLO
value: Hello World!
- name: BYE
value: Bye World!
pipelineSpec:
tasks:
- name: echo-hello
taskSpec:
steps:
- image: ubuntu
name: echo
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
- name: echo-bye
taskSpec:
steps:
- image: ubuntu
name: echo
script: |
#!/usr/bin/env bash
echo "$(params.BYE)"
status:
conditions:
- lastTransitionTime: "2022-04-07T12:34:58Z"
message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
pipelineSpec:
...
taskRuns:
pr-echo-szzs9-echo-hello:
pipelineTaskName: echo-hello
status:
...
taskSpec:
steps:
- image: ubuntu
name: echo
resources: {}
script: |
#!/usr/bin/env bash
echo "Hello World!"
pr-echo-szzs9-echo-bye:
pipelineTaskName: echo-bye
status:
...
taskSpec:
steps:
- image: ubuntu
name: echo
resources: {}
script: |
#!/usr/bin/env bash
echo "Bye World!"
Scope and Precedence
When Parameter names conflict, the inner scope takes precedence, as shown in this example:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pr-echo-
spec:
params:
- name: HELLO
value: "Hello World!"
- name: BYE
value: "Bye World!"
pipelineSpec:
tasks:
- name: echo-hello
params:
- name: HELLO
value: "Sasa World!"
taskSpec:
params:
- name: HELLO
type: string
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
...
resolves to
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pr-echo-szzs9
...
spec:
...
status:
conditions:
- lastTransitionTime: "2022-04-07T12:34:58Z"
message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
...
taskRuns:
pr-echo-szzs9-echo-hello:
pipelineTaskName: echo-hello
status:
conditions:
- lastTransitionTime: "2022-04-07T12:34:57Z"
message: All Steps have completed executing
reason: Succeeded
status: "True"
type: Succeeded
taskSpec:
steps:
- image: ubuntu
name: echo
resources: {}
script: |
#!/usr/bin/env bash
echo "Sasa World!"
...
Default Values
When Parameter specifications have default values, the Parameter value provided at runtime takes precedence to give users control, as shown in this example:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pr-echo-
spec:
params:
- name: HELLO
value: "Hello World!"
- name: BYE
value: "Bye World!"
pipelineSpec:
tasks:
- name: echo-hello
taskSpec:
params:
- name: HELLO
type: string
default: "Sasa World!"
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
...
resolves to
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pr-echo-szzs9
...
spec:
...
status:
conditions:
- lastTransitionTime: "2022-04-07T12:34:58Z"
message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
...
taskRuns:
pr-echo-szzs9-echo-hello:
pipelineTaskName: echo-hello
status:
conditions:
- lastTransitionTime: "2022-04-07T12:34:57Z"
message: All Steps have completed executing
reason: Succeeded
status: "True"
type: Succeeded
taskSpec:
steps:
- image: ubuntu
name: echo
resources: {}
script: |
#!/usr/bin/env bash
echo "Hello World!"
...
Referenced Resources
When a PipelineRun definition has referenced specifications but does not explicitly pass Parameters, the PipelineRun will be created but the execution will fail because of missing Parameters.
# Invalid PipelineRun attempting to propagate Parameters to referenced Tasks
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pr-echo-
spec:
params:
- name: HELLO
value: "Hello World!"
- name: BYE
value: "Bye World!"
pipelineSpec:
tasks:
- name: echo-hello
taskRef:
name: echo-hello
- name: echo-bye
taskRef:
name: echo-bye
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: echo-hello
spec:
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: echo-bye
spec:
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.BYE)"
Fails as follows:
# Failed execution of the above PipelineRun
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pr-echo-24lmf
...
spec:
params:
- name: HELLO
value: Hello World!
- name: BYE
value: Bye World!
pipelineSpec:
tasks:
- name: echo-hello
taskRef:
kind: Task
name: echo-hello
- name: echo-bye
taskRef:
kind: Task
name: echo-bye
status:
conditions:
- lastTransitionTime: "2022-04-07T20:24:51Z"
message: 'invalid input params for task echo-hello: missing values for
these params which have no default values: [HELLO]'
reason: PipelineValidationFailed
status: "False"
type: Succeeded
...
Object Parameters
When using an inlined spec, object parameters from the parent PipelineRun will also be propagated to any inlined specs without needing to be explicitly defined. This allows authors to simplify specs by automatically propagating top-level parameters down to other inlined resources.
When propagating object parameters, scope and precedence also hold, as shown below.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pipelinerun-object-param-result
spec:
params:
- name: gitrepo
value:
url: abc.com
commit: sha123
pipelineSpec:
tasks:
- name: task1
params:
- name: gitrepo
value:
branch: main
url: xyz.com
taskSpec:
steps:
- name: write-result
image: bash
args: [
"echo",
"--url=$(params.gitrepo.url)",
"--commit=$(params.gitrepo.commit)",
"--branch=$(params.gitrepo.branch)",
]
resolves to
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pipelinerun-object-param-resultpxp59
...
spec:
params:
- name: gitrepo
value:
commit: sha123
url: abc.com
pipelineSpec:
tasks:
- name: task1
params:
- name: gitrepo
value:
branch: main
url: xyz.com
taskSpec:
metadata: {}
spec: null
steps:
- args:
- echo
- --url=$(params.gitrepo.url)
- --commit=$(params.gitrepo.commit)
- --branch=$(params.gitrepo.branch)
image: bash
name: write-result
resources: {}
status:
completionTime: "2022-09-08T17:22:01Z"
conditions:
- lastTransitionTime: "2022-09-08T17:22:01Z"
message: 'Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
pipelineSpec:
tasks:
- name: task1
params:
- name: gitrepo
value:
branch: main
url: xyz.com
taskSpec:
metadata: {}
spec: null
steps:
- args:
- echo
- --url=xyz.com
- --commit=sha123
- --branch=main
image: bash
name: write-result
resources: {}
startTime: "2022-09-08T17:21:57Z"
taskRuns:
pipelinerun-object-param-resultpxp59-task1:
pipelineTaskName: task1
status:
completionTime: "2022-09-08T17:22:01Z"
conditions:
- lastTransitionTime: "2022-09-08T17:22:01Z"
message: All Steps have completed executing
reason: Succeeded
status: "True"
type: Succeeded
podName: pipelinerun-object-param-resultpxp59-task1-pod
startTime: "2022-09-08T17:21:57Z"
steps:
- container: step-write-result
...
taskSpec:
steps:
- args:
- echo
- --url=xyz.com
- --commit=sha123
- --branch=main
image: bash
name: write-result
resources: {}
Specifying custom ServiceAccount credentials
You can execute the Pipeline in your PipelineRun with a specific set of credentials by specifying a ServiceAccount object name in the serviceAccountName field in your PipelineRun definition. If you do not explicitly specify this, the TaskRuns created by your PipelineRun will execute with the credentials specified in the configmap-defaults ConfigMap. If this default is not specified, the TaskRuns will execute with the default service account set for the target namespace.
For more information, see ServiceAccount.
Custom tasks may or may not use a service account name. Consult the documentation of the custom task that you are using to determine whether it supports a service account name.
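For example, a PipelineRun that runs with a specific ServiceAccount might look like the following sketch (pipelinerun-with-custom-sa and build-bot are placeholder names):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipelinerun-with-custom-sa
spec:
  pipelineRef:
    name: mypipeline
  serviceAccountName: build-bot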
Mapping ServiceAccount credentials to Tasks
If you require more granularity in specifying execution credentials, use the taskRunSpecs[].taskServiceAccountName field to map a specific serviceAccountName value to a specific Task in the Pipeline. This overrides the global serviceAccountName you may have set for the Pipeline as described in the previous section.
For example, if you specify these mappings:
spec:
serviceAccountName: sa-1
taskRunSpecs:
- pipelineTaskName: build-task
taskServiceAccountName: sa-for-build
for this Pipeline:
kind: Pipeline
spec:
tasks:
- name: build-task
taskRef:
name: build-push
- name: test-task
taskRef:
name: test
then test-task will execute using the sa-1 account while build-task will execute with sa-for-build.
Specifying a Pod template
You can specify a Pod template configuration that will serve as the configuration starting point for the Pod in which the container images specified in your Tasks will execute. This allows you to customize the Pod configuration specifically for each TaskRun.
In the following example, the Task defines a volumeMount object named my-cache. The PipelineRun provisions this object for the Task using a persistentVolumeClaim and executes it as user 1001.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: mytask
spec:
steps:
- name: writesomething
image: ubuntu
command: ["bash", "-c"]
args: ["echo 'foo' > /my-cache/bar"]
volumeMounts:
- name: my-cache
mountPath: /my-cache
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: mypipeline
spec:
tasks:
- name: task1
taskRef:
name: mytask
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: mypipelinerun
spec:
pipelineRef:
name: mypipeline
podTemplate:
securityContext:
runAsNonRoot: true
runAsUser: 1001
volumes:
- name: my-cache
persistentVolumeClaim:
claimName: my-volume-claim
Custom tasks may or may not use a pod template. Consult the documentation of the custom task that you are using to determine whether it supports a pod template.
Specifying taskRunSpecs
Specifies a list of PipelineTaskRunSpec which contains TaskServiceAccountName, TaskPodTemplate, and PipelineTaskName. The specs are mapped to the corresponding Task based on the PipelineTaskName; that PipelineTask will then run with the configured TaskServiceAccountName and TaskPodTemplate, overwriting the pipeline-wide ServiceAccountName and podTemplate configuration, for example:
spec:
podTemplate:
securityContext:
runAsUser: 1000
runAsGroup: 2000
fsGroup: 3000
taskRunSpecs:
- pipelineTaskName: build-task
taskServiceAccountName: sa-for-build
taskPodTemplate:
nodeSelector:
disktype: ssd
If used with this Pipeline, build-task will use the task-specific PodTemplate (where nodeSelector has disktype equal to ssd).
PipelineTaskRunSpec may also contain StepOverrides and SidecarOverrides; see Overriding Task Steps and Sidecars for more information.
The optional annotations and labels can be added under a Metadata field for a specific running context.
e.g.
Rendering needed secrets with Vault:
spec:
pipelineRef:
name: pipeline-name
taskRunSpecs:
- pipelineTaskName: task-name
metadata:
annotations:
vault.hashicorp.com/agent-inject-secret-foo: "/path/to/foo"
vault.hashicorp.com/role: role-name
Updating labels applied in a runtime context:
spec:
pipelineRef:
name: pipeline-name
taskRunSpecs:
- pipelineTaskName: task-name
metadata:
labels:
app: cloudevent
If a metadata key is present at different levels, the value that will be used in the PipelineRun is determined using this precedence order: PipelineRun.spec.taskRunSpec.metadata > PipelineRun.metadata > Pipeline.spec.tasks.taskSpec.metadata.
Specifying Workspaces
If your Pipeline specifies one or more Workspaces, you must map those Workspaces to the corresponding physical volumes in your PipelineRun definition. For example, you can map a PersistentVolumeClaim volume to a Workspace as follows:
workspaces:
- name: myworkspace # must match workspace name in Task
persistentVolumeClaim:
claimName: mypvc # this PVC must already exist
subPath: my-subdir
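Other volume sources can be bound the same way. As a sketch, an emptyDir binding avoids the need for a pre-existing PVC, though it is only suitable when the workspace does not need to persist or share data across Tasks:

workspaces:
  - name: myworkspace # must match workspace name in the Pipeline
    emptyDir: {}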
For more information, see the following topics:
- For information on mapping Workspaces to Volumes, see Specifying Workspaces in PipelineRuns.
- For a list of supported Volume types, see Specifying VolumeSources in Workspaces.
- For an end-to-end example, see Workspaces in a PipelineRun.
Custom tasks may or may not use workspaces. Consult the documentation of the custom task that you are using to determine whether it supports workspaces.
Propagated Workspaces
When using an embedded spec, workspaces from the parent PipelineRun will be propagated to any inlined specs without needing to be explicitly defined. This allows authors to simplify specs by automatically propagating top-level workspaces down to other inlined resources.
Workspace substitutions will only be made for the commands, args, and script fields of steps, stepTemplates, and sidecars.
# Inline specifications of a PipelineRun
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
spec:
workspaces:
- name: shared-data
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 16Mi
volumeMode: Filesystem
pipelineSpec:
#workspaces:
# - name: shared-data
tasks:
- name: fetch-secure-data
# workspaces:
# - name: shared-data
taskSpec:
# workspaces:
# - name: shared-data
steps:
- name: fetch-and-write-secure
image: ubuntu
script: |
echo hi >> $(workspaces.shared-data.path)/recipe.txt
- name: print-the-recipe
# workspaces:
# - name: shared-data
runAfter:
- fetch-secure-data
taskSpec:
# workspaces:
# - name: shared-data
steps:
- name: print-secrets
image: ubuntu
script: cat $(workspaces.shared-data.path)/recipe.txt
On executing the pipeline run, the workspaces will be interpolated during resolution.
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
...
spec:
pipelineSpec:
...
status:
completionTime: "2022-06-02T18:17:02Z"
conditions:
- lastTransitionTime: "2022-06-02T18:17:02Z"
message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
pipelineSpec:
...
taskRuns:
recipe-time-lslt9-fetch-secure-data:
pipelineTaskName: fetch-secure-data
status:
...
taskSpec:
steps:
- image: ubuntu
name: fetch-and-write-secure
resources: {}
script: |
echo hi >> cat /workspace/shared-data/recipe.txt
workspaces:
- name: shared-data
recipe-time-lslt9-print-the-recipe:
pipelineTaskName: print-the-recipe
status:
...
taskSpec:
steps:
- image: ubuntu
name: print-secrets
resources: {}
script: cat /workspace/shared-data/recipe.txt
workspaces:
- name: shared-data
Workspace Referenced Resources
Workspaces cannot be propagated to referenced specifications. For example, the following Pipeline will fail when executed because the workspaces defined in the PipelineRun cannot be propagated to the referenced Pipeline.
# PipelineRun attempting to propagate Workspaces to referenced Tasks
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: shared-task-storage
spec:
resources:
requests:
storage: 16Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: fetch-and-print-recipe
spec:
tasks:
- name: fetch-the-recipe
taskRef:
name: fetch-secure-data
- name: print-the-recipe
taskRef:
name: print-data
runAfter:
- fetch-the-recipe
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
spec:
pipelineRef:
name: fetch-and-print-recipe
workspaces:
- name: shared-data
persistentVolumeClaim:
claimName: shared-task-storage
Upon execution, this will cause failures:
# Failed execution of the above PipelineRun
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
...
spec:
pipelineRef:
name: fetch-and-print-recipe
workspaces:
- name: shared-data
persistentVolumeClaim:
claimName: shared-task-storage
status:
completionTime: "2022-06-02T19:02:58Z"
conditions:
- lastTransitionTime: "2022-06-02T19:02:58Z"
message: 'Tasks Completed: 1 (Failed: 1, Canceled 0), Skipped: 1'
reason: Failed
status: "False"
type: Succeeded
pipelineSpec:
...
taskRuns:
recipe-time-v5scg-fetch-the-recipe:
pipelineTaskName: fetch-the-recipe
status:
completionTime: "2022-06-02T19:02:58Z"
conditions:
- lastTransitionTime: "2022-06-02T19:02:58Z"
message: |
"step-fetch-and-write" exited with code 1 (image: "docker.io/library/ubuntu@sha256:26c68657ccce2cb0a31b330cb0be2b5e108d467f641c62e13ab40cbec258c68d"); for logs run: kubectl -n default logs recipe-time-v5scg-fetch-the-recipe-pod -c step-fetch-and-write
reason: Failed
status: "False"
type: Succeeded
...
taskSpec:
steps:
- image: ubuntu
name: fetch-and-write
resources: {}
script: | # See below: Replacements do not happen.
echo hi >> $(workspaces.shared-data.path)/recipe.txt
Referenced TaskRuns within Embedded PipelineRuns
As mentioned in the Workspace Referenced Resources, workspaces can only be propagated from PipelineRuns to embedded Pipeline specs, not Pipeline references. Similarly, workspaces can only be propagated from a Pipeline to embedded Task specs, not referenced Tasks. For example:
# PipelineRun attempting to propagate Workspaces to referenced Tasks
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: fetch-secure-data
spec:
workspaces: # If Referenced, Workspaces need to be explicitly declared
- name: shared-data
steps:
- name: fetch-and-write
image: ubuntu
script: |
echo $(workspaces.shared-data.path)
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
spec:
workspaces:
- name: shared-data
persistentVolumeClaim:
claimName: shared-task-storage
pipelineSpec:
# workspaces: # Since this is embedded specs, Workspaces don’t need to be declared
# ...
tasks:
- name: fetch-the-recipe
workspaces: # If referencing resources, Workspaces need to be explicitly declared
- name: shared-data
taskRef: # Referencing a resource
name: fetch-secure-data
- name: print-the-recipe
# workspaces: # Since this is embedded specs, Workspaces don’t need to be declared
# ...
taskSpec:
# workspaces: # Since this is embedded specs, Workspaces don’t need to be declared
# ...
steps:
- name: print-secrets
image: ubuntu
script: cat $(workspaces.shared-data.path)/recipe.txt
runAfter:
- fetch-the-recipe
The above pipelinerun successfully resolves to:
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
...
spec:
pipelineSpec:
...
workspaces:
- name: shared-data
persistentVolumeClaim:
claimName: shared-task-storage
status:
completionTime: "2022-06-09T18:42:14Z"
conditions:
- lastTransitionTime: "2022-06-09T18:42:14Z"
message: 'Tasks Completed: 2 (Failed: 0, Cancelled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
pipelineSpec:
...
taskRuns:
recipe-time-pj6l7-fetch-the-recipe:
pipelineTaskName: fetch-the-recipe
status:
...
taskSpec:
steps:
- image: ubuntu
name: fetch-and-write
resources: {}
script: |
echo /workspace/shared-data
workspaces:
- name: shared-data
recipe-time-pj6l7-print-the-recipe:
pipelineTaskName: print-the-recipe
status:
...
taskSpec:
steps:
- image: ubuntu
name: print-secrets
resources: {}
script: cat /workspace/shared-data/recipe.txt
workspaces:
- name: shared-data
Specifying LimitRange values
In order to only consume the bare minimum amount of resources needed to execute one Step at a time from the invoked Task, Tekton will request the compute values for CPU, memory, and ephemeral storage for each Step based on the LimitRange object(s), if present. Any Request or Limit specified by the user (on Task for example) will be left unchanged.
For more information, see the LimitRange support in Pipeline.
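As an illustration, a LimitRange like the following sketch in the PipelineRun's namespace supplies the default requests and limits that Tekton takes into account for each Step (the name and values here are placeholders):

apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-container
spec:
  limits:
    - type: Container
      default:
        cpu: 500m
        memory: 256Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi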
Configuring a failure timeout
You can use the timeouts field to set the PipelineRun's desired timeout value in minutes. There are three sub-fields that can be used to specify failure timeouts for the entire pipeline, for tasks, and for finally tasks.
timeouts:
pipeline: "0h0m60s"
tasks: "0h0m40s"
finally: "0h0m20s"
All three sub-fields are optional, and will be automatically processed according to the following constraint:
timeouts.pipeline >= timeouts.tasks + timeouts.finally
Example timeouts usages are as follows:
Combination 1: Set the timeout for the entire pipeline and reserve a portion of it for tasks.
kind: PipelineRun
spec:
timeouts:
pipeline: "0h4m0s"
tasks: "0h1m0s"
Combination 2: Set the timeout for the entire pipeline and reserve a portion of it for finally.
kind: PipelineRun
spec:
timeouts:
pipeline: "0h4m0s"
finally: "0h3m0s"
Combination 3: Set only a tasks timeout, with no timeout for the entire pipeline.
kind: PipelineRun
spec:
timeouts:
pipeline: "0" # No timeout
tasks: "0h3m0s"
Combination 4: Set only a finally timeout, with no timeout for the entire pipeline.
kind: PipelineRun
spec:
timeouts:
pipeline: "0" # No timeout
finally: "0h3m0s"
You can also use the deprecated timeout field to set the PipelineRun's desired timeout value in minutes.
If you do not specify this value in the PipelineRun, the global default timeout value applies.
If you set the timeout to 0, the PipelineRun fails immediately upon encountering an error.
⚠️ timeout is deprecated and will be removed in future versions. Consider using timeouts instead.
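For reference, a sketch of the deprecated field, which is set directly on the spec:

kind: PipelineRun
spec:
  timeout: "1h30m"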
If you do not specify the timeout value or timeouts.pipeline in the PipelineRun, the global default timeout value applies.
If you set the timeout value or timeouts.pipeline to 0, the PipelineRun fails immediately upon encountering an error.
If timeouts.tasks or timeouts.finally is set to 0, timeouts.pipeline must also be set to 0.
The global default timeout is set to 60 minutes when you first install Tekton. You can set a different global default timeout value using the default-timeout-minutes field in config/config-defaults.yaml.
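On a typical installation these defaults live in the config-defaults ConfigMap in the tekton-pipelines namespace; a sketch of overriding the global default looks like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  default-timeout-minutes: "60"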
The timeout value is a duration conforming to Go's ParseDuration format. For example, valid values are 1h30m, 1h, 1m, and 60s. If you set the global timeout to 0, all PipelineRuns that do not have an individual timeout set will fail immediately upon encountering an error.
PipelineRun status
The status field
Your PipelineRun's status field can contain the following fields:
- Required:
  - status - Most relevant, status.conditions, which contains the latest observations of the PipelineRun's state. See here for information on typical status properties.
  - startTime - The time at which the PipelineRun began executing, in RFC3339 format.
  - completionTime - The time at which the PipelineRun finished executing, in RFC3339 format.
  - pipelineSpec - The exact PipelineSpec used when starting the PipelineRun.
- Optional:
  - taskRuns - A map of TaskRun names to detailed information about the status of that TaskRun. This is deprecated and will be removed in favor of using childReferences.
  - runs - A map of custom task Run names to detailed information about the status of that Run. This is deprecated and will be removed in favor of using childReferences.
  - pipelineResults - Results emitted by this PipelineRun.
  - skippedTasks - A list of Tasks which were skipped when running this PipelineRun due to when expressions, including the when expressions applying to the skipped task.
  - childReferences - A list of references to each TaskRun or Run in this PipelineRun, which can be used to look up the status of the underlying TaskRun or Run. Each entry contains the following (see the sketch after this list):
    - kind - Generally either TaskRun or Run.
    - apiVersion - The API version for the underlying TaskRun or Run.
    - whenExpressions - The list of when expressions guarding the execution of this task.
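As a sketch, a childReferences entry for one of the TaskRuns from the earlier examples might look like the following (the exact fields can vary by version):

status:
  childReferences:
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: pr-echo-szzs9-echo-hello
      pipelineTaskName: echo-hello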
Configuring usage of TaskRun and Run embedded statuses
Currently, the default behavior is for the statuses of TaskRuns and Runs within this PipelineRun to be embedded in the status.taskRuns and status.runs fields. This will change in the future to instead default to status.childReferences being populated with references to the TaskRuns and Runs, which can be used to look up their statuses.
This behavior can be controlled by changing the embedded-status feature flag in the feature-flags config map. See install.md for more information on feature flags. The possible values for embedded-status are:
- full - The current default behavior of populating status.taskRuns and status.runs, without populating status.childReferences.
- minimal - Just populate status.childReferences, not status.taskRuns or status.runs.
- both - Populate status.childReferences as well as status.taskRuns and status.runs.
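Assuming a default installation in the tekton-pipelines namespace, setting the flag would look roughly like this sketch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  embedded-status: "minimal"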
Monitoring execution status
As your PipelineRun executes, its status field accumulates information on the execution of each TaskRun as well as the PipelineRun as a whole. This information includes the name of the pipeline Task associated to a TaskRun, the complete status of the TaskRun, and details about whenExpressions that may be associated to a TaskRun.
The following example shows an extract from the status field of a PipelineRun that has executed successfully:
completionTime: "2020-05-04T02:19:14Z"
conditions:
- lastTransitionTime: "2020-05-04T02:19:14Z"
message: "Tasks Completed: 4, Skipped: 0"
reason: Succeeded
status: "True"
type: Succeeded
startTime: "2020-05-04T02:00:11Z"
taskRuns:
triggers-release-nightly-frwmw-build:
pipelineTaskName: build
status:
completionTime: "2020-05-04T02:10:49Z"
conditions:
- lastTransitionTime: "2020-05-04T02:10:49Z"
message: All Steps have completed executing
reason: Succeeded
status: "True"
type: Succeeded
podName: triggers-release-nightly-frwmw-build-pod
resourcesResult:
- key: commit
resourceName: git-source-triggers-frwmw
value: 9ab5a1234166a89db352afa28f499d596ebb48db
startTime: "2020-05-04T02:05:07Z"
steps:
- container: step-build
imageID: docker-pullable://golang@sha256:a90f2671330831830e229c3554ce118009681ef88af659cd98bfafd13d5594f9
name: build
terminated:
containerID: docker://6b6471f501f59dbb7849f5cdde200f4eeb64302b862a27af68821a7fb2c25860
exitCode: 0
finishedAt: "2020-05-04T02:10:45Z"
reason: Completed
startedAt: "2020-05-04T02:06:24Z"
The following table shows how to read the overall status of a PipelineRun.
Completion time is set once a PipelineRun reaches status True or False:
status | reason | completionTime is set | Description
---|---|---|---
Unknown | Started | No | The PipelineRun has just been picked up by the controller.
Unknown | Running | No | The PipelineRun has been validated and started to perform its work.
Unknown | Cancelled | No | The user requested the PipelineRun to be cancelled. Cancellation has not been done yet.
True | Succeeded | Yes | The PipelineRun completed successfully.
True | Completed | Yes | The PipelineRun completed successfully, one or more Tasks were skipped.
False | Failed | Yes | The PipelineRun failed because one of the TaskRuns failed.
False | [Error message] | Yes | The PipelineRun failed with a permanent error (usually validation).
False | Cancelled | Yes | The PipelineRun was cancelled successfully.
False | PipelineRunTimeout | Yes | The PipelineRun timed out.
When a PipelineRun changes status, events are triggered accordingly.
When a PipelineRun has Tasks that were skipped, the reason for skipping the task will be listed in the Skipped Tasks section of the status of the PipelineRun.
When a PipelineRun has Tasks with when expressions:
- If the when expressions evaluate to true, the Task is executed, and the TaskRun and its resolved when expressions will be listed in the Task Runs section of the status of the PipelineRun.
- If the when expressions evaluate to false, the Task is skipped, and its name and its resolved when expressions will be listed in the Skipped Tasks section of the status of the PipelineRun.
Conditions:
Last Transition Time: 2020-08-27T15:07:34Z
Message: Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 1
Reason: Completed
Status: True
Type: Succeeded
Skipped Tasks:
Name: skip-this-task
Reason: When Expressions evaluated to false
When Expressions:
Input: foo
Operator: in
Values:
bar
Input: foo
Operator: notin
Values:
foo
Task Runs:
pipelinerun-to-skip-task-run-this-task:
Pipeline Task Name: run-this-task
Status:
...
When Expressions:
Input: foo
Operator: in
Values:
foo
The names of the TaskRuns and Runs owned by a PipelineRun are univocally associated to the owning resource.
If a PipelineRun resource is deleted and created with the same name, the child TaskRuns will be created with the same name as before. The base format of the name is <pipelinerun-name>-<pipelinetask-name>. If the PipelineTask has a Matrix, the name will have an int suffix with format <pipelinerun-name>-<pipelinetask-name>-<combination-id>.
The name may vary according to the logic of kmeta.ChildName.
Some examples:
PipelineRun Name | PipelineTask Name | TaskRun Names
---|---|---
pipeline-run | task1 | pipeline-run-task1
pipeline-run | task2-0123456789-0123456789-0123456789-0123456789-0123456789 | pipeline-runee4a397d6eab67777d4e6f9991cd19e6-task2-0123456789-0
pipeline-run-0123456789-0123456789-0123456789-0123456789 | task3 | pipeline-run-0123456789-0123456789-0123456789-0123456789-task3
pipeline-run-0123456789-0123456789-0123456789-0123456789 | task2-0123456789-0123456789-0123456789-0123456789-0123456789 | pipeline-run-0123456789-012345607ad8c7aac5873cdfabe472a68996b5c
pipeline-run | task4 (with 2x2 Matrix) | pipeline-run-task1-0, pipeline-run-task1-2, pipeline-run-task1-3, pipeline-run-task1-4
Cancelling a PipelineRun
To cancel a PipelineRun that's currently executing, update its definition to mark it as "Cancelled". When you do so, the spawned TaskRuns are also marked as cancelled, all associated Pods are deleted, and their Retries are not executed. Pending finally tasks are not scheduled.
For example:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: go-example-git
spec:
# […]
status: "Cancelled"
Gracefully cancelling a PipelineRun
To gracefully cancel a PipelineRun that's currently executing, update its definition to mark it as "CancelledRunFinally". When you do so, the spawned TaskRuns are also marked as cancelled, all associated Pods are deleted, and their Retries are not executed. finally tasks are scheduled normally.
For example:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: go-example-git
spec:
# […]
status: "CancelledRunFinally"
Gracefully stopping a PipelineRun
To gracefully stop a PipelineRun that's currently executing, update its definition to mark it as "StoppedRunFinally". When you do so, the spawned TaskRuns are completed normally, including executing their retries, but no new non-finally task is scheduled. finally tasks are executed afterwards.
For example:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: go-example-git
spec:
# […]
status: "StoppedRunFinally"
Pending PipelineRuns
A PipelineRun can be created as a "pending" PipelineRun, meaning that it will not actually be started until the pending status is cleared.
Note that a PipelineRun can only be marked "pending" before it has started; this setting is invalid after the PipelineRun has been started.
To mark a PipelineRun as pending, set .spec.status to PipelineRunPending when creating it:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: go-example-git
spec:
# […]
status: "PipelineRunPending"
To start the PipelineRun, clear the .spec.status field. Alternatively, update the value to Cancelled to cancel it.
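For example, assuming kubectl access, a merge patch that sets .spec.status to null removes the pending marker so the PipelineRun can start:

kubectl patch pipelinerun go-example-git --type merge -p '{"spec":{"status":null}}'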