
crd formatting for release notes and all assemblies

Commit 08d349294c by Preeti, 2021-05-10 23:31:12 +05:30
Committed by openshift-cherrypick-robot
Parent 47ecfe9314
10 changed files with 125 additions and 134 deletions

View File

@@ -29,7 +29,7 @@ This section uses the `pipelines-tutorial` example to demonstrate the preceding
* You have access to an {product-title} cluster.
* You have installed xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[OpenShift Pipelines] using the {pipelines-title} Operator listed in the OpenShift OperatorHub. Once installed, it is applicable to the entire cluster.
* You have installed xref:../../cli_reference/tkn_cli/installing-tkn.adoc#installing-tkn[OpenShift Pipelines CLI].
* You have forked the front-end link:https://github.com/openshift/pipelines-vote-ui/tree/{pipelines-ver}[`pipelines-vote-ui`] and back-end link:https://github.com/openshift/pipelines-vote-api/tree/{pipelines-ver}[`pipelines-vote-api`] Git repositories using your GitHub ID, and have Administrator access to these repositories.
* You have forked the front-end link:https://github.com/openshift/pipelines-vote-ui/tree/{pipelines-ver}[`pipelines-vote-ui`] and back-end link:https://github.com/openshift/pipelines-vote-api/tree/{pipelines-ver}[`pipelines-vote-api`] Git repositories using your GitHub ID, and have administrator access to these repositories.
* Optional: You have cloned the link:https://github.com/openshift/pipelines-tutorial/tree/{pipelines-ver}[`pipelines-tutorial`] Git repository.
@@ -60,7 +60,7 @@ include::modules/op-triggering-a-pipelinerun.adoc[leveloffset=+1]
[id="pipeline-addtl-resources"]
== Additional resources
* For more details on pipelines in the *Developer* perspective, see the xref:../../cicd/pipelines/working-with-pipelines-using-the-developer-perspective.adoc#working-with-pipelines-using-the-developer-perspective[working with Pipelines in the *Developer* perspective] section.
* For more details on pipelines in the *Developer* perspective, see the xref:../../cicd/pipelines/working-with-pipelines-using-the-developer-perspective.adoc#working-with-pipelines-using-the-developer-perspective[working with pipelines in the *Developer* perspective] section.
* To learn more about Security Context Constraints (SCCs), see the xref:../../authentication/managing-security-context-constraints.adoc#managing-pod-security-policies[Managing Security Context Constraints] section.
* For more examples of reusable tasks, see the link:https://github.com/openshift/pipelines-catalog[OpenShift Catalog] repository. Additionally, you can also see the Tekton Catalog in the Tekton project.
* For more details on re-encrypt TLS termination, see link:https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html#re-encryption-termination[Re-encryption Termination].

View File

@@ -24,11 +24,11 @@ If you have the pull secret, add the `redhat-operators` catalog to the OperatorH
endif::[]
//Installing Pipelines Operator using web console
//Installing pipelines Operator using web console
include::modules/op-installing-pipelines-operator-in-web-console.adoc[leveloffset=+1]
// Installing Pipelines Operator using CLI
// Installing pipelines Operator using CLI
include::modules/op-installing-pipelines-operator-using-the-cli.adoc[leveloffset=+1]

View File

@@ -14,7 +14,7 @@ toc::[]
* Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko.
* Portability across any Kubernetes distribution.
* Powerful CLI for interacting with pipelines.
* Integrated user experience with the Developer perspective of the {product-title} web console.
* Integrated user experience with the *Developer* perspective of the {product-title} web console.
For an overview of {pipelines-title}, see xref:../../cicd/pipelines/understanding-openshift-pipelines.adoc#understanding-openshift-pipelines[Understanding OpenShift Pipelines].

View File

@@ -8,41 +8,38 @@ toc::[]
:FeatureName: OpenShift Pipelines
{pipelines-title} is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard Custom Resource Definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
{pipelines-title} is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
////
{pipelines-title} is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses link:https://tekton.dev[Tekton] building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard Custom Resource Definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
////
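To make the CRD-based model above concrete, the following is a minimal sketch of one such custom resource, a Tekton `Task` — the task name, image, and script are illustrative and not taken from the product documentation:
----
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello # illustrative name
spec:
  steps:
    - name: echo
      image: registry.access.redhat.com/ubi8/ubi-minimal # any small shell image works
      script: |
        #!/usr/bin/env bash
        echo "Hello from a Tekton task"
----
A `Pipeline` resource sequences such tasks, and a `PipelineRun` resource instantiates and runs the pipeline, as described in the concept modules that follow.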
[id="op-key-features"]
== Key features
* {pipelines-title} is a serverless CI/CD system that runs Pipelines with all the required dependencies in isolated containers.
* {pipelines-title} is a serverless CI/CD system that runs pipelines with all the required dependencies in isolated containers.
* {pipelines-title} are designed for decentralized teams that work on microservice-based architecture.
* {pipelines-title} use standard CI/CD pipeline definitions that are easy to extend and integrate with the existing Kubernetes tools, enabling you to scale on-demand.
* You can use {pipelines-title} to build images with Kubernetes tools such as Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko that are portable across any Kubernetes platform.
* You can use the {product-title} Developer Console to create Tekton resources, view logs of Pipeline runs, and manage pipelines in your {product-title} namespaces.
* You can use the {product-title} Developer console to create Tekton resources, view logs of pipeline runs, and manage pipelines in your {product-title} namespaces.
[id="op-detailed-concepts"]
== OpenShift Pipeline Concepts
This guide provides a detailed view of the various Pipeline concepts.
This guide provides a detailed view of the various pipeline concepts.
//About Tasks
//About tasks
include::modules/op-about-tasks.adoc[leveloffset=+2]
//About TaskRun
//About task run
include::modules/op-about-taskrun.adoc[leveloffset=+2]
//About Pipelines
//About pipelines
include::modules/op-about-pipelines.adoc[leveloffset=+2]
//About PipelineRun
//About pipeline run
include::modules/op-about-pipelinerun.adoc[leveloffset=+2]
//About Workspace
//About workspace
include::modules/op-about-workspace.adoc[leveloffset=+2]
//About Triggers
//About triggers
include::modules/op-about-triggers.adoc[leveloffset=+2]
== Additional resources
* For information on installing Pipelines, see xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[Installing OpenShift Pipelines].
* For information on installing pipelines, see xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[Installing OpenShift Pipelines].
* For more details on creating custom CI/CD solutions, see xref:../../cicd/pipelines/creating-applications-with-cicd-pipelines.adoc#creating-applications-with-cicd-pipelines[Creating applications with CI/CD Pipelines].
* For more details on re-encrypt TLS termination, see link:https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html#re-encryption-termination[Re-encryption Termination].
* For more details on secured routes, see the xref:../../networking/routes/secured-routes.adoc#secured-routes[Secured routes] section.

View File

@@ -8,20 +8,20 @@ include::modules/pipelines-document-attributes.adoc[]
toc::[]
You can use the *Developer* perspective of the {product-title} web console to create CI/CD Pipelines for your software delivery process.
You can use the *Developer* perspective of the {product-title} web console to create CI/CD pipelines for your software delivery process.
In the *Developer* perspective:
* Use the *Add* -> *Pipeline* -> *Pipeline Builder* option to create customized Pipelines for your application.
* Use the *Add* -> *From Git* option to create Pipelines using operator-installed Pipeline templates and resources while creating an application on {product-title}.
* Use the *Add* -> *Pipeline* -> *Pipeline Builder* option to create customized pipelines for your application.
* Use the *Add* -> *From Git* option to create pipelines using operator-installed pipeline templates and resources while creating an application on {product-title}.
After you create the Pipelines for your application, you can view and visually interact with the deployed Pipelines in the *Pipelines* view. You can also use the *Topology* view to interact with the Pipelines created using the *From Git* option. You need to apply custom labels to a Pipeline created using the *Pipeline Builder* to see it in the *Topology* view.
After you create the pipelines for your application, you can view and visually interact with the deployed pipelines in the *Pipelines* view. You can also use the *Topology* view to interact with the pipelines created using the *From Git* option. You need to apply custom labels to a pipeline created using the *Pipeline Builder* to see it in the *Topology* view.
[discrete]
== Prerequisites
* You have access to an {product-title} cluster and have switched to the xref:../../web_console/odc-about-developer-perspective.adoc[Developer perspective] in the web console.
* You have access to an {product-title} cluster and have switched to the xref:../../web_console/odc-about-developer-perspective.adoc[*Developer* perspective] in the web console.
* You have the xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[OpenShift Pipelines Operator installed] in your cluster.
* You are a cluster administrator or a user with create and edit permissions.
* You have created a project.
@@ -31,7 +31,7 @@ include::modules/op-constructing-pipelines-using-pipeline-builder.adoc[leveloffs
== Creating applications with OpenShift Pipelines
To create Pipelines along with applications, use the *From Git* option in the *Add* view of the *Developer* perspective. For more information, see xref:../../applications/application_life_cycle_management/odc-creating-applications-using-developer-perspective.adoc#odc-importing-codebase-from-git-to-create-application_odc-creating-applications-using-developer-perspective[Creating applications using the Developer perspective].
To create pipelines along with applications, use the *From Git* option in the *Add* view of the *Developer* perspective. For more information, see xref:../../applications/application_life_cycle_management/odc-creating-applications-using-developer-perspective.adoc#odc-importing-codebase-from-git-to-create-application_odc-creating-applications-using-developer-perspective[Creating applications using the Developer perspective].
include::modules/op-interacting-with-pipelines-using-the-developer-perspective.adoc[leveloffset=+1]

View File

@@ -12,38 +12,34 @@
* Tekton Pipelines 0.11.3
* Tekton `tkn` CLI 0.9.0
* Tekton Triggers 0.4.0
* ClusterTasks based on Tekton Catalog 0.11
* Cluster tasks based on Tekton Catalog 0.11
In addition to the fixes and stability improvements, the following sections highlight what is new in {pipelines-title} 1.0.
[id="pipeline-new-features-1-0_{context}"]
=== Pipelines
* Support for v1beta1 API Version.
* Support for an improved LimitRange. Previously, LimitRange was specified exclusively for the TaskRun and the PipelineRun. Now there is no need to explicitly specify the LimitRange. The minimum LimitRange across the namespace is used.
* Support for sharing data between Tasks using TaskResults and TaskParams.
* Pipelines can now be configured to not overwrite the `HOME` environment variable and `workingDir` of Steps.
* Similar to Task Steps, `sidecars` now support script mode.
* You can now specify a different scheduler name in TaskRun `podTemplate`.
* Support for an improved limit range. Previously, limit range was specified exclusively for the task run and the pipeline run. Now there is no need to explicitly specify the limit range. The minimum limit range across the namespace is used.
* Support for sharing data between tasks using task results and task params.
* Pipelines can now be configured to not overwrite the `HOME` environment variable and the working directory of steps.
* Similar to task steps, `sidecars` now support script mode.
* You can now specify a different scheduler name in the task run `podTemplate` resource.
* Support for variable substitution using Star Array Notation.
* Tekton Controller can now be configured to monitor an individual namespace.
* A new description field is now added to the specification of Pipeline, Task, ClusterTask, Resource, and Condition.
* Addition of proxy parameters to Git PipelineResources.
* Tekton controller can now be configured to monitor an individual namespace.
* A new description field is now added to the specification of pipelines, tasks, cluster tasks, resources, and conditions.
* Addition of proxy parameters to Git pipeline resources.
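As a hedged sketch of the task results item above — resource names are illustrative — a task declares a result, writes it to `$(results.<name>.path)`, and a later pipeline task consumes it with `$(tasks.<task-name>.results.<result-name>)`:
----
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: generate-version # illustrative name
spec:
  results:
    - name: version
      description: Version string produced by this task
  steps:
    - name: write-version
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo -n "1.0.1" > $(results.version.path)
----
A subsequent pipeline task can then reference `$(tasks.generate-version.results.version)` as a parameter value.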
[id="cli-new-features-1-0_{context}"]
=== Pipelines CLI
* The `describe` subcommand is now added for the following `tkn` resources: `eventlistener`, `condition`, `triggertemplate`, `clustertask`, and `triggerbinding`.
* Support added for `v1beta1` to the following commands along with backward comptibility for `v1alpha1`: `clustertask`, `task`, `pipeline`, `pipelinerun`, and `taskrun`.
* The following commands can now list output from all namespaces using the `--all-namespaces` flag option:
** `tkn task list`
** `tkn pipeline list`
** `tkn taskrun list`
** `tkn pipelinerun list`
* The `describe` subcommand is now added for the following `tkn` resources: `EventListener`, `Condition`, `TriggerTemplate`, `ClusterTask`, and `TriggerBinding`.
* Support added for `v1beta1` to the following resources along with backward compatibility for `v1alpha1`: `ClusterTask`, `Task`, `Pipeline`, `PipelineRun`, and `TaskRun`.
* The following commands can now list output from all namespaces using the `--all-namespaces` flag option: `tkn task list`, `tkn pipeline list`, `tkn taskrun list`, `tkn pipelinerun list`
+
The output of these commands is also enhanced to display information without headers using the `--no-headers` flag option.
* You can now start a Pipeline using default parameter values by specifying `--use-param-defaults` flag in the `tkn pipelines start` command.
* Support for Workspace is now added to `tkn pipeline start` and `tkn task start` commands.
* You can now start a pipeline using default parameter values by specifying the `--use-param-defaults` flag in the `tkn pipelines start` command.
* Support for workspaces is now added to the `tkn pipeline start` and `tkn task start` commands.
* A new `clustertriggerbinding` command is now added with the following subcommands: `describe`, `delete`, and `list`.
* You can now directly start a pipeline run using a local or remote `yaml` file.
* The `describe` subcommand now displays an enhanced and detailed output. With the addition of new fields, such as `description`, `timeout`, `param description`, and `sidecar status`, the command output now provides more detailed information about a specific `tkn` resource.
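For example, a hedged sketch of starting a pipeline with default parameter values and a workspace — the pipeline name, workspace name, and claim name are illustrative:
----
$ tkn pipeline start build-and-deploy \
    --use-param-defaults \
    --workspace name=shared-workspace,claimName=source-pvc
----
Parameters that are not set on the command line fall back to the defaults declared in the pipeline definition.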
@@ -51,23 +47,21 @@ The output of these commands is also enhanced to display information without hea
[id="triggers-new-features-1-0_{context}"]
=== Triggers
* Triggers can now create both `v1alpha1` and `v1beta1` Pipeline resources.
* Triggers can now create both `v1alpha1` and `v1beta1` pipeline resources.
* Support for new Common Expression Language (CEL) interceptor function - `compareSecret`. This function securely compares strings to secrets in CEL expressions.
* Support for authentication and authorization at the EventListener Trigger level.
* Support for authentication and authorization at the event listener trigger level.
[id="deprecated-features-1-0_{context}"]
== Deprecated features
The following items are deprecated in this release:
* The environment variable `$HOME`, and variable `workingDir` in the Steps specification are deprecated and might be changed in a future release. Currently in a Step container, `HOME` and `workingDir` are overwritten to `/tekton/home` and `/workspace` respectively.
* The environment variable `$HOME`, and the variable `workingDir` in the `Steps` specification are deprecated and might be changed in a future release. Currently, in a `Step` container, the `HOME` and `workingDir` variables are overwritten to `/tekton/home` and `/workspace`, respectively.
+
In a later release, these two fields will not be modified, and will be set to values defined in the container image and Task YAML.
For this release, use flags `disable-home-env-overwrite` and `disable-working-directory-overwrite` to disable overwriting of the `HOME` and `workingDir` variables.
In a later release, these two fields will not be modified, and will be set to values defined in the container image and the `Task` YAML.
For this release, use the `disable-home-env-overwrite` and `disable-working-directory-overwrite` flags to disable overwriting of the `HOME` and `workingDir` variables.
* The following commands are deprecated and might be removed in the future release:
** `tkn pipeline create`
** `tkn task create`
* The following commands are deprecated and might be removed in a future release: `tkn pipeline create`, `tkn task create`.
* The `-f` flag with the `tkn resource create` command is now deprecated. It might be removed in the future release.
@@ -76,10 +70,10 @@ For this release, use flags `disable-home-env-overwrite` and `disable-working-di
[id="known-issues-1-4-0_{context}"]
== Known issues
* If you are upgrading from an older version of {pipelines-title}, you must delete your existing deployments before upgrading to {pipelines-title} version 1.0. To delete an existing deployment, you must first delete Custom Resources and then uninstall the {pipelines-title} Operator. For more details, see the uninstalling {pipelines-title} section.
* Submitting the same `v1alpha1` Tasks more than once results in an error. Use `oc replace` instead of `oc apply` when re-submitting a `v1alpha1` Task.
* The `buildah` ClusterTask does not work when a new user is added to a container.
* Submitting the same `v1alpha1` tasks more than once results in an error. Use the `oc replace` command instead of `oc apply` when re-submitting a `v1alpha1` task.
* The `buildah` cluster task does not work when a new user is added to a container.
+
When the Operator is installed, the `--storage-driver` flag for the `buildah` ClusterTask is not specified, therefore the flag is set to its default value. In some cases, this causes the storage driver to be set incorrectly. When a new user is added, the incorrect storage-driver results in the failure of the `buildah` ClusterTask with the following error:
When the Operator is installed, the `--storage-driver` flag for the `buildah` cluster task is not specified, therefore the flag is set to its default value. In some cases, this causes the storage driver to be set incorrectly. When a new user is added, the incorrect storage-driver results in the failure of the `buildah` cluster task with the following error:
+
----
useradd: /etc/passwd.8: lock file already used
@@ -93,14 +87,14 @@ As a workaround, manually set the `--storage-driver` flag value to `overlay` in
----
$ oc login -u <login> -p <password> https://openshift.example.com:6443
----
. Use the `oc edit` command to edit `buildah` ClusterTask:
. Use the `oc edit` command to edit the `buildah` cluster task:
+
----
$ oc edit clustertask buildah
----
+
The current version of the `buildah` clustertask YAML file opens in the editor set by your `EDITOR` environment variable.
. Under the `steps` field, locate the following `command` field:
. Under the `Steps` field, locate the following `command` field:
+
----
command: ['buildah', 'bud', '--format=$(params.FORMAT)', '--tls-verify=$(params.TLSVERIFY)', '--layers', '-f', '$(params.DOCKERFILE)', '-t', '$(resources.outputs.image.url)', '$(params.CONTEXT)']
@@ -115,19 +109,19 @@ The current version of the `buildah` clustertask YAML file opens in the editor s
+
Alternatively, you can also modify the `buildah` ClusterTask YAML file directly on the web console by navigating to *Pipelines* -> *Cluster Tasks* -> *buildah*. Select *Edit Cluster Task* from the *Actions* menu and replace the `command` field as shown in the previous procedure.
Alternatively, you can also modify the `buildah` cluster task YAML file directly on the web console by navigating to *Pipelines* -> *Cluster Tasks* -> *buildah*. Select *Edit Cluster Task* from the *Actions* menu and replace the `command` field as shown in the previous procedure.
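For reference, a sketch of the `command` field after applying the workaround described above, with the `--storage-driver` flag set to `overlay` — the flag position within the argument list is illustrative:
----
command: ['buildah', 'bud', '--storage-driver=overlay', '--format=$(params.FORMAT)', '--tls-verify=$(params.TLSVERIFY)', '--layers', '-f', '$(params.DOCKERFILE)', '-t', '$(resources.outputs.image.url)', '$(params.CONTEXT)']
----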
[id="fixed-issues-1-0_{context}"]
== Fixed issues
* Previously, the `DeploymentConfig` Task triggered a new deployment build even when an image build was already in progress. This caused the deployment of the Pipeline to fail. With this fix, the `deploy task` command is now replaced with the `oc rollout status` command which waits for the in-progress deployment to finish.
* Support for `APP_NAME` parameter is now added in Pipeline templates.
* Previously, the Pipeline template for Java S2I failed to look up the image in the registry. With this fix, the image is looked up using the existing image PipelineResources instead of the user provided `IMAGE_NAME` parameter.
* Previously, the `DeploymentConfig` task triggered a new deployment build even when an image build was already in progress. This caused the deployment of the pipeline to fail. With this fix, the `deploy task` command is now replaced with the `oc rollout status` command which waits for the in-progress deployment to finish.
* Support for `APP_NAME` parameter is now added in pipeline templates.
* Previously, the pipeline template for Java S2I failed to look up the image in the registry. With this fix, the image is looked up using the existing image pipeline resources instead of the user provided `IMAGE_NAME` parameter.
* All the OpenShift Pipelines images are now based on the Red Hat Universal Base Images (UBI).
* Previously, when the Pipeline was installed in a namespace other than `tekton-pipelines`, the `tkn version` command displayed the Pipeline version as `unknown`. With this fix, the `tkn version` command now displays the correct Pipeline version in any namespace.
* Previously, when the pipeline was installed in a namespace other than `tekton-pipelines`, the `tkn version` command displayed the pipeline version as `unknown`. With this fix, the `tkn version` command now displays the correct pipeline version in any namespace.
* The `-c` flag is no longer supported for the `tkn version` command.
* Non-admin users can now list the ClusterTriggerBindings.
* The EventListener CompareSecret function is now fixed for the CEL Interceptor.
* The `list`, `describe`, and `start` subcommands for `task` and `clustertask` now correctly display the output in case a Task and ClusterTask have the same name.
* Non-admin users can now list the cluster trigger bindings.
* The event listener `CompareSecret` function is now fixed for the CEL Interceptor.
* The `list`, `describe`, and `start` subcommands for tasks and cluster tasks now correctly display the output in case a task and cluster task have the same name.
* Previously, the OpenShift Pipelines Operator modified the privileged security context constraints (SCCs), which caused an error during cluster upgrade. This error is now fixed.
* In the `tekton-pipelines` namespace, the timeouts of all TaskRuns and PipelineRuns are now set to the value of `default-timeout-minutes` field using the ConfigMap.
* Previously, the Pipelines section in the web console was not displayed for non-admin users. This issue is now resolved.
* In the `tekton-pipelines` namespace, the timeouts of all task runs and pipeline runs are now set to the value of `default-timeout-minutes` field using the config map.
* Previously, the pipelines section in the web console was not displayed for non-admin users. This issue is now resolved.

View File

@@ -12,55 +12,55 @@
* Tekton Pipelines 0.14.3
* Tekton `tkn` CLI 0.11.0
* Tekton Triggers 0.6.1
* ClusterTasks based on Tekton Catalog 0.14
* Cluster tasks based on Tekton Catalog 0.14
In addition to the fixes and stability improvements, the following sections highlight what is new in {pipelines-title} 1.1.
[id="pipeline-new-features-1-1_{context}"]
=== Pipelines
* Workspaces can now be used instead of PipelineResources. It is recommended that you use Workspaces in OpenShift Pipelines, as PipelineResources are difficult to debug, limited in scope, and make Tasks less reusable. For more details on Workspaces, see Understanding OpenShift Pipelines.
* Workspace support for VolumeClaimTemplates has been added:
** The VolumeClaimTemplate for a PipelineRun and TaskRun can now be added as a volume source for Workspaces. The tekton-controller then creates a PersistentVolumeClaim (PVC) using the template that is seen as a PVC for all TaskRuns in the Pipeline. Thus you do not need to define the PVC configuration every time it binds a workspace that spans multiple tasks.
** Support to find the name of the PersistentVolumeClaim when a VolumeClaimTemplate is used as a volume source is now available using variable substitution.
* Workspaces can now be used instead of pipeline resources. It is recommended that you use workspaces in OpenShift Pipelines, as pipeline resources are difficult to debug, limited in scope, and make tasks less reusable. For more details on workspaces, see the Understanding OpenShift Pipelines section.
* Workspace support for volume claim templates has been added:
** The volume claim template for a pipeline run and task run can now be added as a volume source for workspaces. The tekton-controller then creates a persistent volume claim (PVC) using the template that is seen as a PVC for all task runs in the pipeline. Thus you do not need to define the PVC configuration every time it binds a workspace that spans multiple tasks.
** Support to find the name of the PVC when a volume claim template is used as a volume source is now available using variable substitution.
* Support for improving audits:
** The `PipelineRun.Status` field now contains the status of every TaskRun in the Pipeline and the Pipeline specification used to instantiate a PipelineRun to monitor the progress of the PipelineRun.
** The `PipelineRun.Status` field now contains the status of every task run in the pipeline and the pipeline specification used to instantiate a pipeline run to monitor the progress of the pipeline run.
** Pipeline results have been added to the pipeline specification and `PipelineRun` status.
** The `TaskRun.Status` field now contains the exact Task specification used to instantiate the `TaskRun`.
* Support to apply the default parameter to Conditions.
* A TaskRun created by referencing a ClusterTask now adds the `tekton.dev/clusterTask` label instead of the `tekton.dev/task` label.
* The `kubeconfigwriter` now adds the `ClientKeyData` and the `ClientCertificateData` configurations in the Resource structure to enable replacement of the pipeline resource type cluster with the kubeconfig-creator Task.
* The names of the `feature-flags` and the `config-defaults` ConfigMaps are now customizable.
* Support for HostNetwork in the PodTemplate used by TaskRun is now available.
* An Affinity Assistant is now available to support node affinity in TaskRuns that share workspace volume. By default, this is disabled on OpenShift Pipelines.
* The PodTemplate has been updated to specify `imagePullSecrets` to identify secrets that the container runtime should use to authorize container image pulls when starting a pod.
* Support for emitting warning events from the TaskRun controller if the controller fails to update the TaskRun.
** The `TaskRun.Status` field now contains the exact task specification used to instantiate the `TaskRun` resource.
* Support to apply the default parameter to conditions.
* A task run created by referencing a cluster task now adds the `tekton.dev/clusterTask` label instead of the `tekton.dev/task` label.
* The kube config writer now adds the `ClientKeyData` and the `ClientCertificateData` configurations in the resource structure to enable replacement of the pipeline resource type cluster with the kubeconfig-creator task.
* The names of the `feature-flags` and the `config-defaults` config maps are now customizable.
* Support for the host network in the pod template used by the task run is now available.
* An Affinity Assistant is now available to support node affinity in task runs that share workspace volume. By default, this is disabled on OpenShift Pipelines.
* The pod template has been updated to specify `imagePullSecrets` to identify secrets that the container runtime should use to authorize container image pulls when starting a pod.
* Support for emitting warning events from the task run controller if the controller fails to update the task run.
* Standard or recommended k8s labels have been added to all resources to identify resources belonging to an application or component.
* The Entrypoint process is now notified for signals and these signals are then propagated using a dedicated PID Group of the Entrypoint process.
* The PodTemplate can now be set on a Task level at runtime using `TaskRunSpecs`.
* The `Entrypoint` process is now notified for signals and these signals are then propagated using a dedicated PID Group of the `Entrypoint` process.
* The pod template can now be set on a task level at runtime using task run specs.
* Support for emitting Kubernetes events:
** The controller now emits events for additional TaskRun lifecycle events - `taskrun started` and `taskrun running`.
** The PipelineRun controller now emits an event every time a Pipeline starts.
* In addition to the default Kubernetes events, support for CloudEvents for TaskRuns is now available. The controller can be configured to send any TaskRun events, such as create, started, and failed, as cloud events.
* Support for using the `$context.<task|taskRun|pipeline|pipelineRun>.name` variable to reference the appropriate name when in PipelineRuns and TaskRuns.
* Validation for PipelineRun parameters is now available to ensure that all the parameters required by the Pipeline are provided by the PipelineRun. This also allows PipelineRuns to provide extra parameters in addition to the required parameters.
* You can now specify Tasks within a Pipeline that will always execute before the pipeline exits, either after finishing all tasks successfully or after a Task in the Pipeline failed, using the `finally` field in the Pipeline YAML file.
* The `git-clone` ClusterTask is now available.
** The controller now emits events for additional task run lifecycle events - `taskrun started` and `taskrun running`.
** The pipeline run controller now emits an event every time a pipeline starts.
* In addition to the default Kubernetes events, support for cloud events for task runs is now available. The controller can be configured to send any task run events, such as create, started, and failed, as cloud events.
* Support for using the `$context.<task|taskRun|pipeline|pipelineRun>.name` variable to reference the appropriate name when used in pipeline runs and task runs.
* Validation for pipeline run parameters is now available to ensure that all the parameters required by the pipeline are provided by the pipeline run. This also allows pipeline runs to provide extra parameters in addition to the required parameters.
* You can now specify tasks within a pipeline that will always execute before the pipeline exits, either after finishing all tasks successfully or after a task in the pipeline failed, using the `finally` field in the pipeline YAML file.
* The `git-clone` cluster task is now available.
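A hedged sketch of the `finally` field mentioned above — the pipeline and task names are illustrative; the `cleanup` task runs whether or not the `build` task succeeds:
----
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-with-cleanup # illustrative name
spec:
  tasks:
    - name: build
      taskRef:
        name: build-app # illustrative task
  finally:
    - name: cleanup
      taskRef:
        name: cleanup-resources # illustrative task
----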
[id="cli-new-features-1-1_{context}"]
=== Pipelines CLI
* Support for embedded Trigger binding is now available to the `tkn evenlistener describe` command.
* Support for embedded trigger binding is now available to the `tkn eventlistener describe` command.
* Support to recommend subcommands and make suggestions if an incorrect subcommand is used.
* The `tkn task describe` command now auto selects the task if only one task is present in the Pipeline.
* You can now start a Task using default parameter values by specifying the `--use-param-defaults` flag in the `tkn task start` command.
* You can now specify a volumeClaimTemplate for PipelineRuns or TaskRuns using the `--workspace` option with the `tkn pipeline start` or `tkn task start` commands.
* The `tkn task describe` command now auto selects the task if only one task is present in the pipeline.
* You can now start a task using default parameter values by specifying the `--use-param-defaults` flag in the `tkn task start` command.
* You can now specify a volume claim template for pipeline runs or task runs using the `--workspace` option with the `tkn pipeline start` or `tkn task start` commands.
* The `tkn pipelinerun logs` command now displays logs for the final tasks listed in the `finally` section.
* Interactive mode support has now been provided to the `tkn task start` command and the `describe` subcommand for the following tkn resources: `pipeline`, `pipelinerun`, `task`, `taskrun`, `clustertask`, and `pipelineresource`.
* The `tkn version` command now displays the version of the Triggers installed in the cluster.
* The `tkn pipeline describe` command now displays parameter values and timeouts specified for Tasks used in the Pipeline.
* Support added for the `--last` option for the `tkn pipelinerun describe` and the `tkn taskrun describe` commands to describe the most recent PipelineRun or TaskRun, respectively.
* The `tkn pipeline describe` command now displays the conditions applicable to the Tasks in the Pipeline.
* Interactive mode support has now been provided to the `tkn task start` command and the `describe` subcommand for the following `tkn` resources: `pipeline`, `pipelinerun`, `task`, `taskrun`, `clustertask`, and `pipelineresource`.
* The `tkn version` command now displays the version of the triggers installed in the cluster.
* The `tkn pipeline describe` command now displays parameter values and timeouts specified for tasks used in the pipeline.
* Support added for the `--last` option for the `tkn pipelinerun describe` and the `tkn taskrun describe` commands to describe the most recent pipeline run or task run, respectively.
* The `tkn pipeline describe` command now displays the conditions applicable to the tasks in the pipeline.
* You can now use the `--no-headers` and `--all-namespaces` flags with the `tkn resource list` command.
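As a hedged sketch of the `--workspace` option with a volume claim template — the pipeline name and file name are illustrative, and the `volumeClaimTemplateFile` key is an assumption about this `tkn` version:
----
$ tkn pipeline start build-and-deploy \
    --workspace name=shared-workspace,volumeClaimTemplateFile=pvc-template.yaml
----
Here, `pvc-template.yaml` is assumed to contain a standard `PersistentVolumeClaim` definition specifying the access mode and storage size.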
[id="triggers-new-features-1-1_{context}"]
@@ -69,11 +69,11 @@ In addition to the fixes and stability improvements, the following sections high
** `parseURL` to parse and extract portions of a URL
** `parseJSON` to parse JSON value types embedded in a string in the `payload` field of the `deployment` webhook
* A new interceptor for webhooks from Bitbucket has been added.
* EventListeners now display the `Address URL` and the `Available status` as additional fields when listed with the `kubectl get` command.
* TriggerTemplate params now use the `$(tt.params.<paramName>)` syntax instead of `$(params.<paramName>)` to reduce the confusion between TriggerTemplate and ResourceTemplates params.
* You can now add `tolerations` in the EventListener CRD to ensure that EventListeners are deployed with the same configuration even if all nodes are tainted due to security or management issues.
* You can now add a Readiness Probe for EventListener Deployment at `URL/live`.
* Support for embedding TriggerBinding specifications in EventListener Triggers.
* Event listeners now display the `Address URL` and the `Available status` as additional fields when listed with the `kubectl get` command.
* Trigger template params now use the `$(tt.params.<paramName>)` syntax instead of `$(params.<paramName>)` to reduce the confusion between trigger template and resource template params.
* You can now add `tolerations` in the `EventListener` CRD to ensure that event listeners are deployed with the same configuration even if all nodes are tainted due to security or management issues.
* You can now add a Readiness Probe for event listener Deployment at `URL/live`.
* Support for embedding `TriggerBinding` specifications in event listener triggers is now added.
* Trigger resources are now annotated with the recommended `app.kubernetes.io` labels.
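A hedged sketch of the `$(tt.params.<paramName>)` syntax noted above — all resource and parameter names are illustrative:
----
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: app-trigger-template # illustrative name
spec:
  params:
    - name: git-revision
      default: main
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: app-run- # illustrative prefix
      spec:
        pipelineRef:
          name: build-and-deploy # illustrative pipeline
        params:
          - name: git-revision
            value: $(tt.params.git-revision)
----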
@@ -82,27 +82,27 @@ In addition to the fixes and stability improvements, the following sections high
The following items are deprecated in this release:
* The `--namespace` or `-n` flags for all cluster-wide commands, including the `clustertask` and `clustertriggerbinding` commands, are deprecated. It will be removed in a future release.
* The `name` field in `triggers.bindings` within an EventListener has been deprecated in favor of the `ref` field and will be removed in a future release.
* Variable interpolation in TriggerTemplates using `$(params)` has been deprecated in favor of using `$(tt.params)` to reduce confusion with the Pipeline variable interpolation syntax. The `$(params.<paramName>)` syntax will be removed in a future release.
* The `tekton.dev/task` label is deprecated on ClusterTasks.
* The `name` field in `triggers.bindings` within an event listener has been deprecated in favor of the `ref` field and will be removed in a future release.
* Variable interpolation in trigger templates using `$(params)` has been deprecated in favor of using `$(tt.params)` to reduce confusion with the pipeline variable interpolation syntax. The `$(params.<paramName>)` syntax will be removed in a future release.
* The `tekton.dev/task` label is deprecated on cluster tasks.
* The `TaskRun.Status.ResourceResults.ResourceRef` field is deprecated and will be removed.
* The `tkn pipeline create`, `tkn task create`, and `tkn resource create -f` subcommands have been removed.
* Namespace validation has been removed from `tkn` commands.
* The default timeout of `1h` and the `-t` flag for the `tkn ct start` command have been removed.
* The `s2i` ClusterTask has been deprecated.
* The `s2i` cluster task has been deprecated.
[id="known-issues-1-1_{context}"]
== Known issues
* Conditions do not support Workspaces.
* Conditions do not support workspaces.
* The `--workspace` option and the interactive mode is not supported for the `tkn clustertask start` command.
* Support of backward compatibility for `$(params.<paramName>)` forces you to use TriggerTemplates with pipeline specific params as the Triggers webhook is unable to differentiate Trigger params from pipelines params.
* Support of backward compatibility for the `$(params.<paramName>)` syntax forces you to use trigger templates with pipeline-specific params, as the triggers webhook is unable to differentiate trigger params from pipeline params.
* Pipeline metrics report incorrect values when you run a promQL query for `tekton_taskrun_count` and `tekton_taskrun_duration_seconds_count`.
* PipelineRuns and TaskRuns continue to be in the `Running` and `Running(Pending)` states respectively even when a non existing PVC name is given to a Workspace.
* Pipeline runs and task runs continue to be in the `Running` and `Running(Pending)` states, respectively, even when a non-existent PVC name is given to a workspace.
[id="fixed-issues-1-1_{context}"]
== Fixed issues
* Previously, the `tkn task delete <name> --trs` command would delete both the Task and ClusterTask if the name of the Task and ClusterTask were the same. With this fix, the command deletes only the TaskRuns that are created by the Task `<name>`.
* Previously the `tkn pr delete -p <name> --keep 2` command would disregard the `-p` flag when used with the `--keep` flag and would delete all the PipelineRuns except the latest two. With this fix, the command deletes only the PipelineRuns that are created by the Pipeline `<name>`, except for the latest two.
* The `tkn triggertemplate describe` output now displays ResourceTemplates in a table format instead of YAML format.
* Previously the `buildah` ClusterTask failed when a new user was added to a container. With this fix, the issue has been resolved.
* Previously, the `tkn task delete <name> --trs` command would delete both the task and cluster task if the name of the task and cluster task were the same. With this fix, the command deletes only the task runs that are created by the task `<name>`.
* Previously the `tkn pr delete -p <name> --keep 2` command would disregard the `-p` flag when used with the `--keep` flag and would delete all the pipeline runs except the latest two. With this fix, the command deletes only the pipeline runs that are created by the pipeline `<name>`, except for the latest two.
* The `tkn triggertemplate describe` output now displays resource templates in a table format instead of YAML format.
* Previously the `buildah` cluster task failed when a new user was added to a container. With this fix, the issue has been resolved.

View File

@@ -12,7 +12,7 @@
* Tekton Pipelines 0.16.3
* Tekton `tkn` CLI 0.13.1
* Tekton Triggers 0.8.1
* ClusterTasks based on Tekton Catalog 0.16
* Cluster tasks based on Tekton Catalog 0.16
* IBM Power Systems on {product-title} 4.6
* IBM Z and LinuxONE on {product-title} 4.6
@@ -29,7 +29,7 @@ In addition to the fixes and stability improvements, the following sections high
Installations in restricted environments are currently not supported on IBM Power Systems, IBM Z, and LinuxONE.
====
* You can now use the `when` field, instead of `conditions`, to run a task only when certain criteria are met. The key components of `WhenExpressions` are `Input`, `Operator`, and `Values`. If all the `WhenExpressions` evaluate to `True`, then the task is run. If any of the `WhenExpressions` evaluate to `False`, the task is skipped.
* You can now use the `when` field, instead of the `conditions` resource, to run a task only when certain criteria are met. The key components of `WhenExpression` resources are `Input`, `Operator`, and `Values`. If all the when expressions evaluate to `True`, then the task is run. If any of the when expressions evaluate to `False`, the task is skipped.
* Step statuses are now updated if a task run is canceled or times out.
* Support for Git Large File Storage (LFS) is now available to build the base image used by `git-init`.
* You can now use the `taskSpec` field to specify metadata, such as labels and annotations, when a task is embedded in a pipeline.
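A hedged sketch of the `when` field described in the list above — the parameter, pipeline, and task names are illustrative:
----
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: conditional-deploy # illustrative name
spec:
  params:
    - name: environment
      type: string
  tasks:
    - name: deploy
      when:
        - input: "$(params.environment)"
          operator: in
          values: ["staging", "production"]
      taskRef:
        name: deploy-app # illustrative task
----
If the `environment` parameter is not `staging` or `production`, the `deploy` task is skipped.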
@@ -69,7 +69,7 @@ Installations in restricted environments are currently not supported on IBM Powe
[id="deprecated-features-1-2_{context}"]
== Deprecated features
* `$(params)` are now removed and replaced by `$(tt.params)` to avoid confusion between the `resourcetemplate` and `triggertemplate` parameters.
* `$(params)` parameters are now removed from the `triggertemplate` resource and replaced by `$(tt.params)` to avoid confusion between the `resourcetemplate` and `triggertemplate` resource parameters.
* The `ServiceAccount` reference of the optional `EventListenerTrigger`-based authentication level has changed from an object reference to a `ServiceAccountName` string. This ensures that the `ServiceAccount` reference is in the same namespace as the `EventListenerTrigger` object.
* The `Conditions` custom resource definition (CRD) is now deprecated; use the `WhenExpressions` CRD instead.
* The `PipelineRun.Spec.ServiceAccountNames` object is being deprecated and replaced by the `PipelineRun.Spec.TaskRunSpec[].ServiceAccountName` object.

View File

@@ -12,7 +12,7 @@
* Tekton Pipelines 0.19.0
* Tekton `tkn` CLI 0.15.0
* Tekton Triggers 0.10.2
* ClusterTasks based on Tekton Catalog 0.19.0
* Cluster tasks based on Tekton Catalog 0.19.0
* IBM Power Systems on {product-title} 4.7
* IBM Z and LinuxONE on {product-title} 4.7
@@ -23,20 +23,20 @@ In addition to the fixes and stability improvements, the following sections high
* Tasks that build images, such as S2I and Buildah tasks, now emit a URL of the image built that includes the image SHA.
* Conditions in pipeline tasks that reference custom tasks are disallowed because the `Conditions` custom resource definition (CRD) has been deprecated.
* Conditions in pipeline tasks that reference custom tasks are disallowed because the `Condition` custom resource definition (CRD) has been deprecated.
* Variable expansion is now added in the `Task` CRD for the following fields:
`spec.steps[].imagePullPolicy` and `spec.sidecar[].imagePullPolicy`.
* You can disable the built-in credential mechanism in Tekton by setting the `disable-creds-init` feature-flag to `true`.
* Resolved `When` expressions are now listed in the `Skipped Tasks` and the `Task Runs` sections in the `Status` field of the `PipelineRun` configuration.
* Resolved when expressions are now listed in the `Skipped Tasks` and the `Task Runs` sections in the `Status` field of the `PipelineRun` configuration.
* The `git init` command can now clone recursive submodules.
* A `Task` CR author can now specify a timeout for a step in the `Task` spec.
* You can now base the entry point image on `distroless/static:nonroot` and give it a mode to copy itself to the destination, without relying on the `cp` command being present in the base image.
* You can now base the entry point image on the `distroless/static:nonroot` image and give it a mode to copy itself to the destination, without relying on the `cp` command being present in the base image.
* You can now use the configuration flag `require-git-ssh-secret-known-hosts` to disallow omitting known hosts in the Git SSH secret. When the flag value is set to `true`, you must include the `known_host` field in the Git SSH secret. The default value for the flag is `false`.
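A hedged sketch of the per-step timeout noted above — the task name, image, and script are illustrative, and the step-level `timeout` field name is an assumption based on the release note:
----
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-tests # illustrative name
spec:
  steps:
    - name: unit-tests
      image: registry.access.redhat.com/ubi8/ubi-minimal
      timeout: 5m # this step fails if it runs longer than five minutes
      script: |
        echo "running tests"
----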
@@ -70,11 +70,11 @@ In addition to the fixes and stability improvements, the following sections high
* You can now specify your resource information in the `EventListener` template.
* It is now mandatory for `EventListener` service accounts to have the `list` and `watch` verbs, in addition to the `get` verb for all the triggers resources. This enables you to use `Listers` to fetch data from `EventListeners`, `Triggers`, `TriggerBindings`, `TriggerTemplates`, and `ClusterTriggerBindings` resources. You can use this feature to create a `Sink` object rather than specifying multiple informers, and directly make calls to the API server.
* It is now mandatory for `EventListener` service accounts to have the `list` and `watch` verbs, in addition to the `get` verb for all the triggers resources. This enables you to use `Listers` to fetch data from `EventListener`, `Trigger`, `TriggerBinding`, `TriggerTemplate`, and `ClusterTriggerBinding` resources. You can use this feature to create a `Sink` object rather than specifying multiple informers, and directly make calls to the API server.
* A new Interceptor interface is added to support immutable input event bodies. Interceptors can now add data or fields to a new `extensions` field, and cannot modify the input bodies making them immutable. The CEL interceptor uses this new Interceptor interface.
* A new `Interceptor` interface is added to support immutable input event bodies. Interceptors can now add data or fields to a new `extensions` field, and cannot modify the input bodies making them immutable. The CEL interceptor uses this new `Interceptor` interface.
* A `namespaceSelector` field is added to the `EventListener` resource. Use it to specify the namespaces from where the `EventListener` resource can fetch the `Trigger` object for processing events. To use the `namespaceSelector` field, the service account for the `EventListener` must have a cluster role.
* A `namespaceSelector` field is added to the `EventListener` resource. Use it to specify the namespaces from where the `EventListener` resource can fetch the `Trigger` object for processing events. To use the `namespaceSelector` field, the service account for the `EventListener` resource must have a cluster role.
* The triggers `EventListener` resource now supports end-to-end secure connection to the `eventlistener` pod.
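A hedged sketch of the `namespaceSelector` field described above — names are illustrative, and the service account is assumed to be bound to a suitable cluster role:
----
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: multi-namespace-listener # illustrative name
spec:
  serviceAccountName: el-cluster-sa # illustrative; requires a cluster role
  namespaceSelector:
    matchNames:
      - project-a
      - project-b
----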
@@ -104,19 +104,19 @@ In addition to the fixes and stability improvements, the following sections high
[id="known-issues-1-3_{context}"]
== Known issues
* CEL overlays add fields to a new top-level `extensions` function, instead of modifying the incoming event body. `TriggerBinding` resources can access values within this new `extensions` function using the `$(extensions.<key>)` syntax. Update your binding to use `$(extensions.<key>)` instead of `$(body.<overlay-key>)` .
* CEL overlays add fields to a new top-level `extensions` function, instead of modifying the incoming event body. `TriggerBinding` resources can access values within this new `extensions` function using the `$(extensions.<key>)` syntax. Update your binding to use the `$(extensions.<key>)` syntax instead of the `$(body.<overlay-key>)` syntax.
* The escaping parameters behavior of replacing `"` with `\"` is now removed. If you need to retain the old escaping parameters behavior, add the `tekton.dev/old-escape-quotes: "true"` annotation to your `TriggerTemplate` specification.
* You can embed `TriggerBinding` resources by using the `name` and `value` fields inside a trigger or an event listener. However, you cannot specify both `name` and `ref` fields for a single binding. Use the `ref` field to refer to a `TriggerBinding` resource and the `name` field for embedded bindings.
* An interceptor cannot attempt to reference a `Secret` outside the namespace of an `EventListener`. You must include secrets in the namespace of the `EventListener`.
* An interceptor cannot attempt to reference a `secret` outside the namespace of an `EventListener` resource. You must include secrets in the namespace of the `EventListener` resource.
* In Triggers 0.9.0 and later, if a body or header based `TriggerBinding` parameter is missing or malformed in an event payload, the default values are used instead of displaying an error.
* `Tasks` and `Pipelines` created with `WhenExpressions` using Tekton Pipelines 0.16.x must be reapplied to fix their JSON annotations.
* Tasks and pipelines created with `WhenExpression` objects using Tekton Pipelines 0.16.x must be reapplied to fix their JSON annotations.
* When a pipeline accepts an optional workspace and gives it to a `PipelineTask`, the `PipelineRun` stalls if the workspace is not provided.
* When a pipeline accepts an optional workspace and gives it to a task, the pipeline run stalls if the workspace is not provided.
* To use the Buildah cluster task in a disconnected environment, ensure that the Dockerfile uses an internal image stream as the base image, and then use it in the same manner as any S2I cluster task.
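A hedged sketch of the binding rules in the list above — use `ref` for an existing `TriggerBinding` resource and `name`/`value` for an embedded binding; all names are illustrative, and the `template` reference by `name` is assumed for this Triggers version:
----
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: app-listener # illustrative name
spec:
  serviceAccountName: pipeline # illustrative service account
  triggers:
    - name: github-push
      bindings:
        - ref: app-trigger-binding # existing TriggerBinding resource
        - name: git-revision # embedded binding
          value: $(body.head_commit.id)
      template:
        name: app-trigger-template # illustrative TriggerTemplate
----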
@@ -136,15 +136,15 @@ In addition to the fixes and stability improvements, the following sections high
* The `tkn pr desc` command is now enhanced to ensure that it does not fail in case of pipeline runs with conditions.
* When you delete a task run using `tkn tr delete` with the `--task` option, and a cluster task exists with the same name, the task runs for the cluster task also gets deleted. As a workaround, filter the task runs by using the `TaskRefKind` field.
* When you delete a task run using the `tkn tr delete` command with the `--task` option, and a cluster task exists with the same name, the task runs for the cluster task also get deleted. As a workaround, filter the task runs by using the `TaskRefKind` field.
* The `tkn triggertemplate describe` command would display only part of the `apiVersion` in the output. For example, only `triggers.tekton.dev` was displayed instead of `triggers.tekton.dev/v1alpha1`. This bug is now fixed.
* The `tkn triggertemplate describe` command would display only part of the `apiVersion` value in the output. For example, only `triggers.tekton.dev` was displayed instead of `triggers.tekton.dev/v1alpha1`. This bug is now fixed.
* The webhook, under certain conditions, would fail to acquire a lease and not function correctly. This bug is now fixed.
* Pipelines with `When` expressions created in v0.16.3 can now be run in v0.17.1 and later. After an upgrade, you do not need to reapply pipeline definitions created in previous versions because both the uppercase and lowercase first letters for the annotations are now supported.
* Pipelines with when expressions created in v0.16.3 can now be run in v0.17.1 and later. After an upgrade, you do not need to reapply pipeline definitions created in previous versions because both the uppercase and lowercase first letters for the annotations are now supported.
* By default, the `leader-election-ha` is now enabled for high availability. When the controller flag `disable-ha` is set to `true`, it disables high availability support.
* By default, the `leader-election-ha` field is now enabled for high availability. When the `disable-ha` controller flag is set to `true`, it disables high availability support.
* Issues with duplicate cloud events are now fixed. Cloud events are now sent only when a condition changes the state, reason, or message.

View File

@@ -120,7 +120,7 @@ fsGroup:
[id="known-issues-1-4_{context}"]
== Known issues
* In the **Developer** perspective, Pipeline Metrics and Triggers are available only on {product-title} 4.7.6 or later versions.
* In the *Developer* perspective, the pipeline metrics and triggers features are available only on {product-title} 4.7.6 or later versions.
* On IBM Power Systems, IBM Z, and LinuxONE, the `tkn hub` command is not supported.