
Serverless Workflows: an Automated Developer Experience

Great job on installing the Orchestrator plugin and the SonataFlow operator! But what comes next?

If you aim to understand the full development lifecycle of serverless workflows, from zero to production, then you’ve come to the right place.

Thanks to the Orchestrator functions and automations, developers can now focus solely on building their applications without being burdened by unnecessary cognitive load. Let’s delve into how to effectively manage the end-to-end software development lifecycle of serverless workflows, leveraging these built-in capabilities.

A Reference Architecture for Automated Deployments of Serverless Workflows

The reference architecture that we’re going to describe consists of the following components:

  • Orchestrator Helm chart: the installer of RHDH Orchestrator.
  • Red Hat Developer Hub (RHDH): the Red Hat product for Backstage.
  • Tekton/Red Hat OpenShift Pipelines: the Kubernetes-Native CI pipeline to build images and deployment configurations.
  • ArgoCD/Red Hat OpenShift GitOps: the CD pipeline to deploy the workflow on the RHDH instance.
  • the container registry service to store the software images.
  • SonataFlow platform: the SonataFlow implementation of the Serverless Workflow specifications, including a Kubernetes operator and the platform services (data index, jobs service).
  • SonataFlow: the custom resource representing the workflow.
  • GitHub workflow repo: the source code repository of the workflow.
  • GitHub gitops repo: the repository of the kustomize deployment configuration.
    • Includes the commands to bootstrap the ArgoCD applications on your selected environment.
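To make the SonataFlow custom resource in this list tangible, here is a minimal sketch of what such a resource may look like; the workflow name and the greeting flow are illustrative placeholders, and the exact schema depends on the operator version in use:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting                  # hypothetical workflow ID
  namespace: sonataflow-infra     # namespace where the SonataFlow platform runs
spec:
  flow:
    start: Greet
    states:
      - name: Greet
        type: inject              # inject static data and end the workflow
        data:
          message: Hello from the Orchestrator
        end: true
```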


Please note that all these components, with the exclusion of the container registry and the GitHub organizations, are either bundled with the Orchestrator plugin or managed by the software projects generated with the RHDH Software Templates.

Software Development with Git

Let’s assume your company follows the feature branches git workflow:

  • Developers work on individual feature branches.
  • The develop branch serves as the integration point where all features are merged to validate the application in the staging environment.
  • Once the software receives the green light, the code is released to the main branch and deployed to the production environment.

feature branches git workflow

Don’t be surprised, but the Orchestrator plugin automatically installs all the needed resources to handle these steps for you throughout the entire software development lifecycle.

The Software Development Lifecycle

Creating the Software Project

RHDH offers the software template functionality to create the foundational structure of software projects adhering to industry best practices in software development and deployment.

The Orchestrator plugin comes with its own templates designed to kickstart your workflow project. By selecting a template tagged with orchestrator, you gain access to the following benefits, all at no cost:

  • A fully operational software project to develop your serverless workflow, in a newly generated Git repository under the organization of your choice.
  • A ready-to-use configuration repository with a kustomize configuration to deploy the workflow on the designated RHDH instance.
  • (*) Automated CI tool deployment to build workflows on the selected cluster.
  • (*) Automated CD deployment to deploy the applications implementing your workflow.

(*): optional but highly recommended!

Sounds great, doesn’t it?

Developing the Serverless Workflow

This topic will soon be expanded in a dedicated post. However, we’d like to at least point out that several amazing toolkits and platforms are available at this stage.

Using these toolkits and platforms, you can develop and test(*) your application either on your local machine or as a containerized image, before moving to the next step.

(*): both unit and integration tests are supported
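As a concrete starting point for this stage, a minimal workflow definition conforming to the Serverless Workflow specification may look like the following sketch; the ID, name, and injected data are illustrative placeholders:

```yaml
id: demo
version: "1.0"
specVersion: "0.8"
name: Demo Workflow
description: A minimal single-state workflow
start: SayHello
states:
  - name: SayHello
    type: inject                       # inject static data into the workflow state
    data:
      message: Hello from the demo workflow
    end: true
```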

Testing the Staging Environment

And here comes the magic of automation.

Whenever a feature is merged in the staging branch, the CI/CD pipelines are triggered to build the container image, update the deployment configuration and deploy them to the staging instance of RHDH. You don’t have to do anything for this – the installed automation tools will handle the process for you.

That was a brief section, wasn’t it? This way, you can save reading time and focus on validating the workflow application in the staging environment.

Ready for Production

Get ready for another quick section.

Once the software has been validated and released, the CI/CD pipelines are triggered again to build and deploy the application in the production environment. Easy-peasy, and once again, making efficient use of the developer’s time.

Wrapping Up

What are you waiting for then? Design your first workflow and let the Orchestrator handle the tedious tasks for you.

Get customer-ready in just a minute with the power of the Automated Developer Experience for RHDH Orchestrator!

Serverless Workflows: an Automated Developer Experience Step-by-Step

In this blog, we’ll guide you through the journey from a software template to bootstrapping the workflow development, building, packaging, releasing, and deploying it on a cluster. If you need a high-level explanation or want to dive into the architecture of the solution, check out our previous blog. You can also watch a detailed demonstration of the content covered in this post in this recording.

Prerequisites and Assumptions

This blog assumes familiarity with specific tools, technologies, and methodologies. We’ll start with RHDH (Backstage) by launching a basic workflow template, working with GitHub for source control, pushing the workflow image to Quay, and using Kustomize to deploy the ArgoCD application for GitOps.

  • The target Quay repository for the workflow’s image should exist.
  • The target namespace for both the pipeline and the workflow is set to sonataflow-infra and is not configurable.

Creating a Workflow Repository

Let’s begin by creating a workflow named demo under the Quay organization orchestrator-testing. We’ll use the repository orchestrator-testing/serverless-workflow-demo to store the workflow image.

Creating a new workflow repository in Quay

Setting Robot Account Permissions

Next, add robot account permissions to the created repository.

Setting permissions

Creating a Secret for GitOps Cluster

Refer to the instructions here for creating and configuring the secret for the target cluster.
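As an illustration, the secret typically carries the Quay robot account credentials as a docker-registry secret in the GitOps namespace; the secret name and the encoded credentials below are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: quay-robot-secret            # hypothetical secret name
  namespace: orchestrator-gitops     # default GitOps namespace of the Orchestrator
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded docker config holding the Quay robot account token
  .dockerconfigjson: <base64-encoded Quay robot account credentials>
```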

Creating the Software Template

The Orchestrator plugin provides templates to kickstart your workflow project. By selecting a template tagged with orchestrator, you gain access to the following benefits:

  • A fully operational software project in a new Git repository under your chosen organization.
  • A configuration repository with kustomize configurations for deploying the workflow on RHDH.
  • Automated CI tool deployment using OpenShift Pipelines.
  • Automated CD deployment for applications using OpenShift GitOps.

Selecting and Launching the Template

Navigate to the Catalog and select the Basic workflow bootstrap project template. Click “Launch Template” to start filling in the input parameters for creating the workflow and its GitOps projects.

Selecting the software template

Input Parameters Overview

Review the parameters required for workflow creation, including organization name, repository name, workflow ID, workflow type, CI/CD method, namespaces, Quay details, persistence option, and database properties.

Input parameters

This section provides an overview of the parameters required for workflow creation:

  • Organization Name - The GitHub organization where workflow repositories will be created. Ensure that the GitHub token provided during Orchestrator chart installation includes repository creation permissions in this organization.
  • Repository Name - The name of the repository containing the workflow definition, spec and schema files, and application properties. Workflow development occurs in this repository. For example, if this repository is named onboarding, a second repository named onboarding-gitops is created for CD automated deployment of the workflow.
  • Description - This description is added to the generated project files and to the workflow definition shown in the Orchestrator plugin.
  • Workflow ID - A unique identifier for the workflow. This ID is used to generate project resources (appearing in file names) and acts as the name of the Sonataflow CR for that workflow. After deploying the CR to the cluster, the ID identifies the workflow in Sonataflow.

On the second screen, you’ll need to select the workflow type. You can learn more about different workflow types here.

Input parameters

  • Workflow Type - There are two supported types: infrastructure for operations returning output, and assessment for evaluation/assessment leading to potential infrastructure workflows.

On the final screen, you’ll be prompted to input the CI/CD parameters and persistence-related parameters.

  • Select a CI/CD method - Choosing None means no GitOps resources are created in target repositories, only the workflow source repository. Selecting Tekton with ArgoCD creates two repositories: one for the workflow and another for GitOps resources for deploying the built workflow on a cluster.
  • Workflow Namespace - The namespace for deploying the workflow in the target cluster, currently supporting sonataflow-infra where Sonataflow infrastructure is deployed.
  • GitOps Namespace - Namespace for GitOps secrets and ArgoCD application creation. The default orchestrator-gitops complies with the default installation steps of the Orchestrator deployment.
  • Quay Organization Name - Organization name in Quay for the published workflow. The Tekton pipeline pushes the workflow to this organization.
  • Quay Repository Name - Repository name in Quay for the published workflow, which must exist before deploying GitOps. The secret created in the GitOps Namespace needs permission to push to this repository.
  • Enable Persistence - Check this option to enable persistence for the workflow. It ensures each workflow persists its instances in a configured database schema, with the schema name matching the workflow ID. Persistence is recommended for long-running workflows and to support the Abort operation.
  • Database properties - Self-explanatory list of database properties.
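Put together, a filled-in set of parameters for the demo used in this post may look like the following sketch; the parameter keys are illustrative rather than the template’s exact field names, and the GitHub organization is a hypothetical placeholder (the Quay values match the repository created earlier):

```yaml
orgName: my-github-org                   # hypothetical GitHub organization
repoName: demo                           # also creates a demo-gitops repository
workflowId: demo
workflowType: infrastructure
cicdMethod: Tekton with ArgoCD
workflowNamespace: sonataflow-infra      # currently the only supported value
gitopsNamespace: orchestrator-gitops     # default of the Orchestrator installation
quayOrgName: orchestrator-testing
quayRepoName: serverless-workflow-demo
enablePersistence: true
```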

After providing all parameters, click Review, ensure correctness, and then click Create. Successful creation leads to:

Template created

This includes links to three resources:

  • Bootstrap the GitOps Resources - Directs to the workflow GitOps repository, enabling GitOps for ArgoCD deployment on the target cluster.
  • Open the Source Code Repository - Opens the Git repository for workflow development.
  • Open the Catalog Info Component - The RHDH Catalog Components view which should include the newly created components: the workflow source repository and the workflow GitOps repository.

Bootstrap the GitOps Resources

Navigate to the first link to enable GitOps automation on the cluster. Follow the steps provided, including setting up CI pipelines and viewing ArgoCD resources.

Exploring the Repositories

The source code repository is where the workflow development happens. Each commit triggers the CI workflow.

The GitOps resources repository contains deployment configurations for the workflow on the OCP cluster.

Viewing the Catalog Info Components

Both repositories are represented as components in RHDH:

Catalog Items

View the Source Code Repository Component

This component represents the Git repository where workflow development occurs. Navigating to the CI tab reveals the pipeline-run diagram:

Workflow CI

Once the pipeline-run is completed, the CD step starts, and the workflow is deployed on the cluster.

View the GitOps Resources Repository Component

This component represents the deployment of the workflow on the OCP cluster. Navigating to the CD tab shows the K8s resources representing the deployed workflow. When the items in this view are ready, the workflow should be ready to be executed from the Orchestrator plugin.

Running the workflow

After completing the CI/CD pipelines, navigate to the Orchestrator plugin, choose the workflow, and run it.


Streamlining workflow development and deployment empowers developers to focus on creating impactful workflows tailored to their needs.

Installing the Orchestrator on existing RHDH instance

When RHDH is already installed and in use, reinstalling it via the Helm chart is unnecessary. Instead, integrating the Orchestrator into such an environment involves a few key steps:

  1. Utilize the Helm chart to install the requisite components, such as the SonataFlow operator and the OpenShift Serverless Operator, while ensuring the RHDH installation is disabled.
  2. Manually update the existing RHDH ConfigMap resources with the necessary configuration for the Orchestrator plugin.
  3. Import the Orchestrator software templates into the Backstage catalog.

To install the required components without RHDH, use the --set rhdhOperator.enabled=false option. A comprehensive command would resemble the following:

helm upgrade -i orchestrator orchestrator --set rhdhOperator.enabled=false

This command installs the SonataFlow Operator and the OpenShift Serverless Operator. Alternatively, these operators can be installed directly from the operator catalog.
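If you choose the operator catalog route, the OpenShift Serverless Operator can be subscribed to with a manifest along these lines; the channel, namespace, and catalog source reflect common defaults and should be verified against your cluster:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-serverless       # assumed target namespace for the operator
spec:
  channel: stable                       # assumed channel; check your operator catalog
  name: serverless-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```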

In an RHDH installation, there are two primary ConfigMaps that require modification, typically found under the backstage-system or the rhdh-operator namespaces:

  • dynamic-plugins ConfigMap: This ConfigMap houses the configuration for enabling and configuring dynamic plugins. To incorporate the Orchestrator plugins, append the following configuration to the dynamic-plugins ConfigMap:

      - disabled: false
        package: "@janus-idp/backstage-plugin-orchestrator-backend-dynamic@1.8.0"
        integrity: sha512-wVZE7Dak10edxh1ZEckzYKrE13GrqhzSVelURhxjZcgXEHdGPWYUFHNMEpte7hzIBE85350Ka7fpy7C4BNPvEw==
        pluginConfig:
          orchestrator:
            dataIndexService:
              url: http://sonataflow-platform-data-index-service.sonataflow-infra
      - disabled: false
        package: "@janus-idp/backstage-plugin-orchestrator@1.10.6"
        integrity: sha512-qSXQ2O7/eLBEF186PzaRfzLfutFYUq9MdiiIZbHejz+KML9rVInPJkc1tine3R3JQVuw1QBIQ2vhPNbGbHXWZg==
        pluginConfig:
          dynamicPlugins:
            frontend:
              janus-idp.backstage-plugin-orchestrator:
                appIcons:
                  - importName: OrchestratorIcon
                    module: OrchestratorPlugin
                    name: orchestratorIcon
                dynamicRoutes:
                  - importName: OrchestratorPage
                    menuItem:
                      icon: orchestratorIcon
                      text: Orchestrator
                    module: OrchestratorPlugin
                    path: /orchestrator
The versions of the plugins may undergo updates, leading to changes in their integrity values. To ensure you are utilizing the latest versions, please consult the Helm chart values available here. It is imperative to set both the version and integrity values accordingly.

Additionally, ensure that the dataIndexService.url points to the service of the Data Index installed by the Chart/Operator. When installed by the Helm chart, it should point to http://sonataflow-platform-data-index-service.sonataflow-infra:

oc get svc -n sonataflow-infra sonataflow-platform-data-index-service -o jsonpath='http://{.metadata.name}.{.metadata.namespace}'

In the app-config ConfigMap add the following:

  backend:
    csp:
      script-src: ["'self'", "'unsafe-inline'", "'unsafe-eval'"]
      script-src-elem: ["'self'", "'unsafe-inline'", "'unsafe-eval'"]
      connect-src: ["'self'", 'http:', 'https:', 'data:']
    cors:
      origin: {{ URL to RHDH service or route }}

To enable the Notifications plugin, edit the same ConfigMaps. For the dynamic-plugins ConfigMap add:

      - disabled: false
        package: "@janus-idp/plugin-notifications@1.2.5"
        integrity: sha512-BQ7ujmrbv2MLelNGyleC4Z8fVVywYBMYZTwmRC534WCT38QHQ0cWJbebOgeIYszFA98STW4F5tdKbVot/2gWMg==
        pluginConfig:
          dynamicPlugins:
            frontend:
              janus-idp.backstage-plugin-notifications:
                appIcons:
                  - name: notificationsIcon
                    module: NotificationsPlugin
                    importName: NotificationsActiveIcon
                dynamicRoutes:
                  - path: /notifications
                    importName: NotificationsPage
                    module: NotificationsPlugin
                    menuItem:
                      icon: notificationsIcon
                      text: Notifications
                    config:
                      pollingIntervalMs: 5000
      - disabled: false
        package: "@janus-idp/plugin-notifications-backend-dynamic@1.4.11"
        integrity: sha512-5zluThJwFVKX0Wlh4E15vDKUFGu/qJ0UsxHYWoISJ+ing1R38gskvN3kukylNTgOp8B78OmUglPfNlydcYEHvA==

For the app-config ConfigMap, add the database configuration if it isn’t already provided. It is required for the notifications plugin:

    app:
      title: Red Hat Developer Hub
      baseUrl: {{ URL to RHDH service or route }}
    backend:
      database:
        client: pg
        connection:
          password: ${POSTGRESQL_ADMIN_PASSWORD}
          user: ${POSTGRES_USER}
          host: ${POSTGRES_HOST}
          port: ${POSTGRES_PORT}

If persistence is enabled (which should be the default setting), ensure that the PostgreSQL environment variables are accessible.
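For illustration, these variables can be wired into the RHDH container from a Kubernetes secret; the secret name and keys below are hypothetical placeholders:

```yaml
env:
  - name: POSTGRES_HOST
    valueFrom:
      secretKeyRef:
        name: postgres-credentials     # hypothetical secret name
        key: host
  - name: POSTGRES_PORT
    value: "5432"                      # assumed default PostgreSQL port
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-credentials
        key: user
  - name: POSTGRESQL_ADMIN_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-credentials
        key: password
```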

Once the ConfigMaps have been updated, it is necessary to restart the RHDH instance to implement the changes effectively.

To import the Orchestrator software templates into the catalog via the Backstage UI, follow the instructions outlined in this document to register the new templates into the catalog from the specified source.

Upgrade plugin versions

To perform an upgrade of the plugin versions, start by acquiring the new plugin version along with its associated integrity value. In the future, this section will be updated to reference the Red Hat NPM registry; at present, it directs to the @janus-idp NPM packages on the public npm registry. The following script is useful for obtaining the required information for updating a plugin version:



PLUGINS=(
    "@janus-idp/plugin-notifications"
    "@janus-idp/plugin-notifications-backend-dynamic"
    "@janus-idp/backstage-plugin-orchestrator"
    "@janus-idp/backstage-plugin-orchestrator-backend-dynamic"
)

for PLUGIN_NAME in "${PLUGINS[@]}"
do
    echo "Processing plugin: $PLUGIN_NAME"
    curl -s -q "https://registry.npmjs.org/${PLUGIN_NAME}" | \
    jq -r '.versions | keys_unsorted[-1] as $latest_version | .[$latest_version] | "\(.name)\n\(.version)\n\(.dist.integrity)"'
done
A sample output should look like:

Processing plugin: @janus-idp/plugin-notifications

Processing plugin: @janus-idp/plugin-notifications-backend-dynamic

Processing plugin: @janus-idp/backstage-plugin-orchestrator

Processing plugin: @janus-idp/backstage-plugin-orchestrator-backend-dynamic

After editing the version and integrity values in the dynamic-plugins ConfigMap, restart the Backstage instance for changes to take effect.

What is the SonataFlow Operator?


The SonataFlow Operator defines a set of Kubernetes Custom Resources to help users deploy SonataFlow projects on Kubernetes and OpenShift.

Please visit our official documentation to learn more.

Available modules for integrations

If you’re a developer, and you are interested in integrating your project or application with the SonataFlow Operator ecosystem, this repository provides a few Go Modules described below.

SonataFlow Operator Types (api)

Every custom resource managed by the operator is exported in the module api. You can use it to programmatically create any custom type managed by the operator. To use it, simply run:

go get

Then you can create any type programmatically, for example:

workflow := &v1alpha08.SonataFlow{
    ObjectMeta: metav1.ObjectMeta{Name: w.name, Namespace: w.namespace},
    Spec:       v1alpha08.SonataFlowSpec{Flow: *myWorkflowDef},
}

You can use the Kubernetes client-go library to manipulate these objects in the cluster.

You might need to register our schemes:

    s := scheme.Scheme
    utilruntime.Must(v1alpha08.AddToScheme(s))

Container Builder (container-builder)

Please see the module’s README file.

Workflow Project Handler (workflowproj)

Please see the module’s README file.

Development and Contributions

Contributing is easy, just take a look at our contributors’ guide.

Productization notes

To productize the Red Hat OpenShift Serverless Logic Operator, read the notes in the productization section.