
Blog

Production Mode vs. Dev Mode

When setting up workflow orchestration, it’s crucial to understand the differences between production mode and development (dev) mode, particularly in terms of infrastructure requirements. This distinction ensures that workflows are efficiently managed and executed based on their intended use case. Here, we’ll explore these differences, focusing on the infrastructure required to run workflows in each mode.

Production Mode

Production mode is tailored for environments where stability, reliability, and scalability are paramount. The Orchestrator Helm chart or Orchestrator operator is designed specifically to meet the demanding requirements of production environments. Key requirements include:

  • Long-running Workflows: Production mode supports workflows that may take several hours or even days to complete, ensuring that these processes run smoothly without interruption.
  • Persistence of Running Workflow Instances: Ensuring the persistence of workflow instances is critical. In production, all workflow data is stored and maintained even if the orchestrator restarts, preventing data loss.
  • Event Handling Reliability: Reliable handling of events is essential for maintaining workflow integrity and ensuring that all triggers and actions occur as expected.
  • Scalability: The system must be capable of scaling up to handle increasing workloads, allowing for the addition of resources as demand grows.
  • Updates Through Pipelines: Workflows can be updated and managed through continuous integration/continuous deployment (CI/CD) pipelines, facilitating smooth and efficient updates.
  • Externalizing Credentials: Credentials are managed outside the workflow configuration to enhance security and simplify updates (see the sketch after this list).
  • Runtime Isolation: Each workflow runs in its isolated environment, preventing any interference between workflows.
  • Authorization and Administration: Robust mechanisms are in place for authorizing and administering workflow deployments, ensuring that only authorized personnel can make changes.
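
To make the "Externalizing Credentials" item above concrete, here is a minimal sketch using standard Kubernetes objects (the Secret name, key, and namespace are illustrative, not part of the Orchestrator API): credentials live in a Secret that the workflow container consumes at runtime, rather than being embedded in the workflow configuration.

apiVersion: v1
kind: Secret
metadata:
  name: my-workflow-credentials        # illustrative name
  namespace: sonataflow-infra
type: Opaque
stringData:
  NOTIFICATIONS_TOKEN: <token-value>   # illustrative key; reference it from the workflow deployment (e.g. via envFrom)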

Development (Dev) Mode

Dev mode, as the name implies, is optimized for development purposes. It allows developers to experiment with the Orchestrator plugins without the need for a full deployment process or access to a Kubernetes (K8s) or OpenShift (OCP) cluster. Characteristics of dev mode include:

  • Ephemeral Workflows: Workflows run in an ephemeral mode, meaning they are temporary and do not persist after the container restarts. This is suitable for development and testing (a sketch follows this list).
  • No Persistence: In dev mode, there is no persistence for running workflow instances. All instance information is lost after a container restart, making it ideal for short-running or non-critical workflows.
  • Development Focus: Dev mode is designed for developers to gain experience and test workflows without the overhead of a full production environment.
  • Hot-deployment of Workflows: Developers can deploy new workflows by simply placing the workflow files in a designated folder, enabling rapid iteration and testing.
  • Simpler Deployment Model: A single container serves all workflows, simplifying the deployment process and reducing the need for extensive infrastructure setup.
  • Acceptable Data Loss: For short-running workflows or development scenarios, the occasional loss of workflow instance tracking is acceptable.
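
As a rough illustration of the ephemeral, single-container model described in this list, and assuming the upstream SonataFlow dev profile (verify the annotation value against your operator version), a dev-mode workflow CR could look like this:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: my-dev-workflow              # illustrative
  annotations:
    sonataflow.org/profile: dev      # ephemeral dev profile: no persistence, single container
spec:
  flow: {}                           # workflow definition omitted for brevity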

Summary

Understanding the infrastructure requirements for production and dev modes is essential for effective workflow orchestration. Production mode ensures reliability, scalability, and persistence, making it suitable for critical, long-running workflows. Dev mode, on the other hand, provides a lightweight, flexible environment for development and testing, where temporary workflows and occasional data loss are acceptable. By selecting the appropriate mode based on the use case, organizations can optimize their workflow management processes.

Serverless Workflows: an Automated Developer Experience

Great job on installing the Orchestrator plugin and the SonataFlow operator! But what comes next?

If you aim to understand the full development lifecycle of serverless workflows, from zero to production, then you’ve come to the right place.

Thanks to the Orchestrator functions and automations, developers can now focus solely on building their applications without being burdened by unnecessary cognitive load. Let’s delve into how to effectively manage the end-to-end software development lifecycle of serverless workflows, leveraging these built-in capabilities.

A Reference Architecture for Automated Deployments of Serverless Workflows

The reference architecture that we’re going to describe consists of the following components:

  • Orchestrator Helm chart: the installer of the RHDH Orchestrator.
  • Red Hat Developer Hub (RHDH): the Red Hat product for Backstage.
  • Tekton/Red Hat OpenShift Pipelines: the Kubernetes-Native CI pipeline to build images and deployment configurations.
  • ArgoCD/Red Hat OpenShift GitOps: the CD pipeline to deploy the workflow on the RHDH instance.
  • Quay.io: the container registry service to store the software images.
  • SonataFlow platform: the SonataFlow implementation of the Serverless Workflow specification, including a Kubernetes operator and the platform services (data index, jobs service).
  • SonataFlow: the custom resource representing the workflow.
  • GitHub workflow repo: the source code repository of the workflow.
  • GitHub gitops repo: the repository of the kustomize deployment configuration.
    • Includes the commands to bootstrap the ArgoCD applications on your selected environment (a sketch of the kustomize layout follows this list).
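
As a sketch only (file names and layout are illustrative; the generated repository defines the real structure), the kustomize configuration in the gitops repo could look like:

# kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: sonataflow-infra
resources:
  - sonataflow-demo.yaml         # the SonataFlow CR describing the workflow
  - configmap-demo-props.yaml    # the workflow's application properties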


Please note that all of these components, with the exception of Quay.io and the GitHub organization, are either bundled with the Orchestrator plugin or managed by the software projects generated from the RHDH Software Templates.

Software Development with Git

Let’s assume your company follows the feature branches git workflow:

  • Developers work on individual feature branches.
  • The develop branch serves as the integration point where all features are merged to validate the application in the staging environment.
  • Once the software receives the green light, the code is released to the main branch and deployed to the production environment.

feature branches git workflow

Don’t be surprised, but the Orchestrator plugin automatically installs all the needed resources to handle these steps for you throughout the entire software development lifecycle.

The Software Development Lifecycle

Creating the Software Project

RHDH offers the software template functionality to create the foundational structure of software projects adhering to industry best practices in software development and deployment.

The Orchestrator plugin comes with its own templates designed to kickstart your workflow project. By selecting a template tagged with orchestrator, you gain access to the following benefits, all at no cost:

  • A fully operational software project to develop your serverless workflow, in a newly generated Git repository under the organization of your choice.
  • A ready-to-use configuration repository with a kustomize configuration to deploy the workflow on the designated RHDH instance.
  • (*) Automated CI tool deployment to build workflows on the selected cluster.
  • (*) Automated CD deployment to deploy the applications implementing your workflow.

(*): optional but highly recommended!

Sounds great, doesn’t it?

Developing the Serverless Workflow

This topic will soon be expanded in a dedicated post. However, we’d like to at least mention that a few excellent tools are available to support this stage.

Using these toolkits and platforms, you can develop and test(*) your application either on your local machine or as a containerized image before moving to the next step.

(*): both unit and integration tests are supported
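
To give a feel for what you develop at this stage, here is a minimal workflow definition in the CNCF Serverless Workflow format (the id, name, and state are purely illustrative):

id: demo
version: "1.0"
specVersion: "0.8"
name: Demo greeting workflow
start: Greet
states:
  - name: Greet
    type: inject                 # injects static data and ends the workflow
    data:
      message: Hello from the Orchestrator
    end: true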

Testing the Staging Environment

And here comes the magic of automation.

Whenever a feature is merged into the develop branch, the CI/CD pipelines are triggered to build the container image, update the deployment configuration, and deploy them to the staging instance of RHDH. You don’t have to do anything for this: the installed automation tools handle the process for you.

That was a brief section, wasn’t it? This way, you can save reading time and focus on validating the workflow application in the staging environment.

Ready for Production

Get ready for another quick section.

Once the software has been validated and released, the CI/CD pipelines are triggered again to build and deploy the application in the production environment. Easy-peasy, and once again, making efficient use of the developer’s time.

Wrapping Up

What are you waiting for then? Design your first workflow and let the Orchestrator handle the tedious tasks for you.

Get customer-ready in just a minute with the power of the Automated Developer Experience for RHDH Orchestrator!

Serverless Workflows: an Automated Developer Experience Step-by-Step

In this blog, we’ll guide you through the journey from a software template to bootstrapping the workflow development, building, packaging, releasing, and deploying it on a cluster. If you need a high-level explanation or want to dive into the architecture of the solution, check out our previous blog. You can also watch a detailed demonstration of the content covered in this post in this recording.

Prerequisites and Assumptions

This blog assumes familiarity with specific tools, technologies, and methodologies. We’ll start with RHDH (Backstage) by launching a basic workflow template, working with GitHub for source control, pushing the workflow image to Quay, and using Kustomize to deploy the ArgoCD application for GitOps.

  • The target Quay repository for the workflow’s image should exist.
  • The target namespace for both the pipeline and the workflow is set to sonataflow-infra and is not configurable.

Creating a Workflow Repository

Let’s begin by creating a workflow named demo under the Quay organization orchestrator-testing. We’ll use the repository orchestrator-testing/serverless-workflow-demo to store the workflow image.

Creating a new workflow repository in Quay

Setting Robot Account Permissions

Next, add robot account permissions to the created repository.

Setting permissions

Creating a Secret for GitOps Cluster

Refer to the instructions here for creating and configuring the secret for the target cluster.
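
The linked instructions are authoritative; as a rough sketch only (the Secret name and keys are assumptions), the Quay robot account credentials typically end up in a registry secret in the GitOps namespace:

apiVersion: v1
kind: Secret
metadata:
  name: quay-push-secret               # illustrative; use the name expected by the instructions
  namespace: orchestrator-gitops
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded robot account credentials>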

Creating the Software Template

The Orchestrator plugin provides templates to kickstart your workflow project. By selecting a template tagged with orchestrator, you gain access to the following benefits:

  • A fully operational software project in a new Git repository under your chosen organization.
  • A configuration repository with kustomize configurations for deploying the workflow on RHDH.
  • Automated CI tool deployment using OpenShift Pipelines.
  • Automated CD deployment for applications using OpenShift GitOps.

Selecting and Launching the Template

Navigate to the Catalog and select the Basic workflow bootstrap project template. Click “Launch Template” to start filling in the input parameters for creating the workflow and its GitOps projects.

Selecting the software template

Input Parameters Overview

Review the parameters required for workflow creation, including organization name, repository name, workflow ID, workflow type, CI/CD method, namespaces, Quay details, persistence option, and database properties.

Input parameters

This section provides an overview of the parameters required for workflow creation:

  • Organization Name - The GitHub organization where workflow repositories will be created. Ensure that the GitHub token provided during Orchestrator chart installation includes repository creation permissions in this organization.
  • Repository Name - The name of the repository containing the workflow definition, spec and schema files, and application properties. Workflow development occurs in this repository. For example, if this repository is named onboarding, a second repository named onboarding-gitops is created for CD automated deployment of the workflow.
  • Description - This description will be added to the README.md file of the generated project and the workflow definition shown in the Orchestrator plugin.
  • Workflow ID - A unique identifier for the workflow. This ID is used to generate project resources (appearing in file names) and acts as the name of the Sonataflow CR for that workflow. After deploying the CR to the cluster, the ID identifies the workflow in Sonataflow.
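
For example (a sketch only; the template generates the real resource), a Workflow ID of demo becomes the name of the SonataFlow CR deployed to the cluster:

apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: demo                       # the Workflow ID
  namespace: sonataflow-infra
spec:
  flow: {}                         # generated from the workflow definition files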

On the second screen, you’ll need to select the workflow type. You can learn more about the different workflow types here.

Input parameters

  • Workflow Type - There are two supported types: infrastructure for operations returning output, and assessment for evaluation/assessment leading to potential infrastructure workflows.

On the final screen, you’ll be prompted to input the CI/CD parameters and persistence-related parameters.

  • Select a CI/CD method - Choosing None means no GitOps resources are created in target repositories, only the workflow source repository. Selecting Tekton with ArgoCD creates two repositories: one for the workflow and another for GitOps resources for deploying the built workflow on a cluster.
  • Workflow Namespace - The namespace for deploying the workflow in the target cluster, currently supporting sonataflow-infra where Sonataflow infrastructure is deployed.
  • GitOps Namespace - Namespace for GitOps secrets and ArgoCD application creation. The default orchestrator-gitops complies with the default installation steps of the Orchestrator deployment.
  • Quay Organization Name - Organization name in Quay for the published workflow. The Tekton pipeline pushes the workflow to this organization.
  • Quay Repository Name - Repository name in Quay for the published workflow, which must exist before deploying GitOps. The secret created in the GitOps Namespace needs permission to push to this repository (see the sketch after this list).
  • Enable Persistence - Check this option to enable persistence for the workflow. It ensures each workflow persists its instances in a configured database schema, with the schema name matching the workflow ID. Persistence is recommended for long-running workflows and to support the Abort operation.
  • Database properties - Self-explanatory list of database properties.
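
To tie the Quay parameters together, here is a hedged sketch (standard kustomize image override; the generated GitOps repository may use a different mechanism) of pointing the deployment configuration at the image the pipeline pushes, using the organization and repository from this example:

images:
  - name: serverless-workflow-demo                                 # illustrative placeholder image name
    newName: quay.io/orchestrator-testing/serverless-workflow-demo
    newTag: latest                                                 # illustrative; pipelines often tag with the commit SHA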

After providing all parameters, click Review, ensure correctness, and then click Create. Successful creation leads to:

Template created

This includes links to three resources:

  • Bootstrap the GitOps Resources - Directs to the workflow GitOps repository, enabling GitOps for ArgoCD deployment on the target cluster.
  • Open the Source Code Repository - Opens the Git repository for workflow development.
  • Open the Catalog Info Component - The RHDH Catalog Components view, which should include the newly created components: the workflow source repository and the workflow GitOps repository.

Bootstrap the GitOps Resources

Navigate to the first link to enable GitOps automation on the cluster. Follow the steps provided, including setting up CI pipelines and viewing ArgoCD resources.
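
Under the hood, the GitOps bootstrap amounts to creating ArgoCD applications that watch the GitOps repository. A minimal sketch (application name, repository URL, and path are illustrative; follow the steps in the repository for the real resources):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-gitops                     # illustrative
  namespace: orchestrator-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/<your-org>/demo-gitops.git   # illustrative
    targetRevision: main
    path: kustomize                     # illustrative path inside the GitOps repo
  destination:
    server: https://kubernetes.default.svc
    namespace: sonataflow-infra
  syncPolicy:
    automated: {}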

Exploring the Repositories

The source code repository is where the workflow development happens. Each commit triggers the CI workflow.

The GitOps resources repository contains deployment configurations for the workflow on the OCP cluster.

Viewing the Catalog Info Components

Both repositories are represented as components in RHDH:

Catalog Items
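
Each component is described by a catalog-info.yaml in its repository; a minimal sketch (all values are illustrative, the template generates the real file) looks like:

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: demo                                   # illustrative
  annotations:
    github.com/project-slug: <your-org>/demo   # illustrative
spec:
  type: service
  lifecycle: experimental
  owner: user:guest                            # illustrative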

View the Source Code Repository Component

This component represents the Git repository where workflow development occurs. Navigating to the CI tab reveals the pipeline-run diagram:

Workflow CI

Once the pipeline-run is completed, the CD step starts, and the workflow is deployed on the cluster.

View the GitOps Resources Repository Component

This component represents the deployment of the workflow on the OCP cluster. Navigating to the CD tab shows the K8s resources representing the deployed workflow. When the items in this view are ready, the workflow should be ready to be executed from the Orchestrator plugin.

Running the workflow

After completing the CI/CD pipelines, navigate to the Orchestrator plugin, choose the workflow, and run it.

Conclusion

Streamlining workflow development and deployment empowers developers to focus on creating impactful workflows tailored to their needs.

Installing the Orchestrator on existing RHDH instance

When RHDH is already installed and in use, reinstalling it via the Helm chart is unnecessary. Instead, integrating the Orchestrator into such an environment involves a few key steps:

  1. Utilize the Helm chart to install the requisite components, such as the SonataFlow operator and the OpenShift Serverless Operator, while ensuring the RHDH installation is disabled.
  2. Manually update the existing RHDH ConfigMap resources with the necessary configuration for the Orchestrator plugin.
  3. Import the Orchestrator software templates into the Backstage catalog.

To install the required components without RHDH, use the --set rhdhOperator.enabled=false option. A complete command would resemble the following:

helm upgrade -i orchestrator orchestrator/orchestrator --set rhdhOperator.enabled=false

This command installs the SonataFlow Operator and the OpenShift Serverless Operator. Alternatively, these operators can be installed directly from the operator catalog.
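
For reference, installing the OpenShift Serverless Operator from the operator catalog can also be expressed declaratively with an OLM Subscription; a sketch (channel and catalog source names may vary between cluster versions, and an OperatorGroup is assumed to exist in the namespace):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-serverless
spec:
  channel: stable
  name: serverless-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace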

In an RHDH installation, there are two primary ConfigMaps that require modification, typically found under the backstage-system or the rhdh-operator namespaces:

  • dynamic-plugins ConfigMap: This ConfigMap houses the configuration for enabling and configuring dynamic plugins. To incorporate the orchestrator plugins, append the following configuration to the dynamic-plugins ConfigMap:
    plugins:
      - disabled: false
        package: "@janus-idp/backstage-plugin-orchestrator-backend-dynamic@1.8.0"
        integrity: sha512-wVZE7Dak10edxh1ZEckzYKrE13GrqhzSVelURhxjZcgXEHdGPWYUFHNMEpte7hzIBE85350Ka7fpy7C4BNPvEw==
        pluginConfig:
          orchestrator:
            dataIndexService:
              url: http://sonataflow-platform-data-index-service.sonataflow-infra
      - disabled: false
        package: "@janus-idp/backstage-plugin-orchestrator@1.10.6"
        integrity: sha512-qSXQ2O7/eLBEF186PzaRfzLfutFYUq9MdiiIZbHejz+KML9rVInPJkc1tine3R3JQVuw1QBIQ2vhPNbGbHXWZg==
        pluginConfig:
          dynamicPlugins:
            frontend:
              janus-idp.backstage-plugin-orchestrator:
                appIcons:
                  - importName: OrchestratorIcon
                    module: OrchestratorPlugin
                    name: orchestratorIcon
                dynamicRoutes:
                  - importName: OrchestratorPage
                    menuItem:
                      icon: orchestratorIcon
                      text: Orchestrator
                    module: OrchestratorPlugin
                    path: /orchestrator

The versions of the plugins may undergo updates, leading to changes in their integrity values. To ensure you are using the latest versions, please consult the Helm chart values available here. It is imperative to set both the version and integrity values accordingly.

Additionally, ensure that the dataIndexService.url points to the service of the Data Index installed by the Chart/Operator. When installed by the Helm chart, it should point to http://sonataflow-platform-data-index-service.sonataflow-infra:

oc get svc -n sonataflow-infra sonataflow-platform-data-index-service -o jsonpath='http://{.metadata.name}.{.metadata.namespace}'

In the app-config ConfigMap add the following:

app:
  backend:
    csp:
      script-src: ["'self'", "'unsafe-inline'", "'unsafe-eval'"]
      script-src-elem: ["'self'", "'unsafe-inline'", "'unsafe-eval'"]
      connect-src: ["'self'", 'http:', 'https:', 'data:']
    cors:
      origin: {{ URL to RHDH service or route }}

To enable the Notifications plugin, edit the same ConfigMaps. For the dynamic-plugins ConfigMap add:

    plugins:
      - disabled: false
        package: "@janus-idp/plugin-notifications@1.2.5"
        integrity: sha512-BQ7ujmrbv2MLelNGyleC4Z8fVVywYBMYZTwmRC534WCT38QHQ0cWJbebOgeIYszFA98STW4F5tdKbVot/2gWMg==
        pluginConfig:
          dynamicPlugins:
            frontend:
              janus-idp.backstage-plugin-notifications:
                appIcons:
                  - name: notificationsIcon
                    module: NotificationsPlugin
                    importName: NotificationsActiveIcon
                dynamicRoutes:
                  - path: /notifications
                    importName: NotificationsPage
                    module: NotificationsPlugin
                    menuItem:
                      icon: notificationsIcon
                      text: Notifications
                    config:
                      pollingIntervalMs: 5000
      - disabled: false
        package: "@janus-idp/plugin-notifications-backend-dynamic@1.4.11"
        integrity: sha512-5zluThJwFVKX0Wlh4E15vDKUFGu/qJ0UsxHYWoISJ+ing1R38gskvN3kukylNTgOp8B78OmUglPfNlydcYEHvA==

For the app-config ConfigMap, add the database configuration if it isn’t already provided. It is required by the Notifications plugin:

    app:
      title: Red Hat Developer Hub
      baseUrl: {{ URL to RHDH service or route }}
    backend:
      database:
        client: pg
        connection:
          password: ${POSTGRESQL_ADMIN_PASSWORD}
          user: ${POSTGRES_USER}
          host: ${POSTGRES_HOST}
          port: ${POSTGRES_PORT}

If persistence is enabled (which should be the default setting), ensure that the PostgreSQL environment variables are accessible.
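
One possible way to expose those variables (a sketch only; adapt it to how your RHDH instance is deployed and where your PostgreSQL credentials live) is to inject them into the RHDH container from a Secret:

env:
  - name: POSTGRES_HOST
    value: postgres.example.svc.cluster.local   # illustrative
  - name: POSTGRES_PORT
    value: "5432"
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-credentials              # illustrative Secret name
        key: user
  - name: POSTGRESQL_ADMIN_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-credentials
        key: password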

Once the ConfigMaps have been updated, it is necessary to restart the RHDH instance to implement the changes effectively.

To import the Orchestrator software templates into the catalog via the Backstage UI, follow the instructions outlined in this document and register the new templates into the catalog from the specified source.
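
As an alternative to the UI, templates can also be registered statically through standard Backstage configuration (the target URL is illustrative; point it at the actual template location):

catalog:
  locations:
    - type: url
      target: https://github.com/<org>/<templates-repo>/blob/main/template.yaml   # illustrative
      rules:
        - allow: [Template]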

Upgrade plugin versions

To upgrade the plugin versions, start by acquiring the new plugin version along with its associated integrity value. In the future, this section will be updated to reference the Red Hat NPM registry; at present, it points to the @janus-idp NPM packages on https://registry.npmjs.com. The following script obtains the required information for updating a plugin version:

#!/bin/bash

PLUGINS=(
  "@janus-idp/plugin-notifications"
  "@janus-idp/plugin-notifications-backend-dynamic"
  "@janus-idp/backstage-plugin-orchestrator"
  "@janus-idp/backstage-plugin-orchestrator-backend-dynamic"
)

for PLUGIN_NAME in "${PLUGINS[@]}"
do
    echo "Processing plugin: $PLUGIN_NAME"
    # Fetch the package metadata from the registry, then print the name, latest version, and integrity hash
    curl -s -q "https://registry.npmjs.com/${PLUGIN_NAME}" | \
    jq -r '.versions | keys_unsorted[-1] as $latest_version | .[$latest_version] | "\(.name)\n\(.version)\n\(.dist.integrity)"'
    echo
done

A sample output should look like:

Processing plugin: @janus-idp/plugin-notifications
@janus-idp/plugin-notifications
1.1.12
sha512-GCdEuHRQek3ay428C8C4wWgxjNpNwCXgIdFbUUFGCLLkBFSaOEw+XaBvWaBGtQ5BLgE3jQEUxa+422uzSYC5oQ==

Processing plugin: @janus-idp/plugin-notifications-backend-dynamic
@janus-idp/plugin-notifications-backend-dynamic
1.3.6
sha512-Qd8pniy1yRx+x7LnwjzQ6k9zP+C1yex24MaCcx7dGDPT/XbTokwoSZr4baSSn8jUA6P45NUUevu1d629mG4JGQ==

Processing plugin: @janus-idp/backstage-plugin-orchestrator
@janus-idp/backstage-plugin-orchestrator
1.7.8
sha512-wJtu4Vhx3qjEiTe/i0Js2Jc0nz8B3ZIImJdul02KcyKmXNSKm3/rEiWo6AKaXUk/giRYscZQ1jTqlw/nz7xqeQ==

Processing plugin: @janus-idp/backstage-plugin-orchestrator-backend-dynamic
@janus-idp/backstage-plugin-orchestrator-backend-dynamic
1.5.3
sha512-l1MJIrZeXp9nOQpxFF5cw1ItOgA/p4xhGjKN12sg4Re8GC1qL+5hik+lA1BjMxAN6nKGWsLdFkgqLWa6jQuQFw==

After editing the version and integrity values in the dynamic-plugins ConfigMap, restart the Backstage instance for changes to take effect.

What is Sonataflow Operator?

SonataFlow Operator

The SonataFlow Operator defines a set of Kubernetes Custom Resources to help users deploy SonataFlow projects on Kubernetes and OpenShift.

Please visit our official documentation to learn more.

Available modules for integrations

If you’re a developer interested in integrating your project or application with the SonataFlow Operator ecosystem, this repository provides a few Go modules, described below.

SonataFlow Operator Types (api)

Every custom resource managed by the operator is exported in the module api. You can use it to programmatically create any custom type managed by the operator. To use it, simply run:

go get github.com/kiegroup/kogito-serverless-workflow/api

Then you can create any type programmatically, for example:

// Requires the v1alpha08 API package and metav1 ("k8s.io/apimachinery/pkg/apis/meta/v1").
workflow := &v1alpha08.SonataFlow{
    ObjectMeta: metav1.ObjectMeta{Name: w.name, Namespace: w.namespace},
    Spec:       v1alpha08.SonataFlowSpec{Flow: *myWorkflowDef},
}

You can use the Kubernetes client-go library to manipulate these objects in the cluster.

You might need to register our schemes:

s := scheme.Scheme
utilruntime.Must(v1alpha08.AddToScheme(s))

Container Builder (container-builder)

Please see the module’s README file.

Workflow Project Handler (workflowproj)

Please see the module’s README file.

Development and Contributions

Contributing is easy, just take a look at our contributors’ guide.

See origin of this content here.