This is the storyline you are going to follow:
This workshop is for intermediate OpenShift users. A good understanding of how OpenShift works along with hands-on experience is expected. For example, we will not tell you how to log in to your cluster with oc or explain what it is… ;)
We try to strike a balance between guided workshop steps and challenging you to use your knowledge to learn new skills. This means you’ll get detailed step-by-step instructions for each new chapter/task; later on the guide becomes less verbose and we’ll weave in some challenges.
-As part of the workshop you will be provided with freshly installed OpenShift 4.14 clusters. Depending on attendee numbers we might ask you to gather in teams. Some workshop tasks must be done only once for the cluster (e.g. installing Operators), others like deploying and securing the application can be done by every team member separately in their own Project. This will be mentioned in the guide.
-You’ll get all access details for your lab cluster from the facilitators. This includes the URL to the OpenShift console and information about how to SSH into your bastion host to run oc if asked to.
The easiest way to provide this environment is through the Red Hat Demo System. Provision the catalog item AWS with OpenShift Open Environment for the attendees. Make sure to provision 3 control plane nodes (similar to m6a.2xlarge) and 3 worker nodes (similar to m6a.4xlarge).
-While the workshop is designed to be run on Red Hat Demo System (RHDS) and the environment AWS with OpenShift Open Environment, you should be able to run the workshop on a 4.14 cluster of your own.
Just make sure:
This workshop was tested with these versions:
We’ll tackle the topics at hand step by step, with an introduction before each section covering what will be worked on.
-You’ll notice placeholders for cluster access details, mainly the part of the domain that is specific to your cluster. There are two options:
-<DOMAIN> replace it with the value for your environment
Use the part of the URL after apps.: e.g. for console-openshift-console.apps.cluster-t50z9.t50z9.sandbox4711.opentlc.com that is cluster-t50z9.t50z9.sandbox4711.opentlc.com
Enter your OpenShift URL part after apps (e.g. cluster-t50z9.t50z9.sandbox4711.opentlc.com) and click the button to generate a link that will customize your lab guide.
Click the generated link once to apply it to the current guide.
Generate URL
Check to see if replacement is active -> <DOMAIN>
During the workshop you went through the OpenShift developer experience starting from software development using Quarkus and odo, moving on to automating build and deployment using Tekton pipelines and finally using GitOps for production deployments.
-Now it’s time to add another extremely important piece to the setup: enhancing application security in a containerized world. Using Red Hat Advanced Cluster Security for Kubernetes, of course!
-Install the Advanced Cluster Security for Kubernetes operator from the OperatorHub:
Manual
Red Hat recommends installing the Red Hat Advanced Cluster Security for Kubernetes Operator in the rhacs-operator namespace. This will happen by default.
You must install the ACS Central instance in its own project, not in the rhacs-operator or openshift-operators projects, nor in any project in which you have installed the ACS Operator!
-apiVersion: platform.stackrox.io/v1alpha1
-kind: Central
-metadata:
- name: stackrox-central-services
- namespace: stackrox
-spec:
- monitoring:
- openshift:
- enabled: true
- central:
- notifierSecretsEncryption:
- enabled: false
- exposure:
- loadBalancer:
- enabled: false
- port: 443
- nodePort:
- enabled: false
- route:
- enabled: true
- telemetry:
- enabled: true
- db:
- isEnabled: Default
- persistence:
- persistentVolumeClaim:
- claimName: central-db
- resources:
- limits:
- cpu: 2
- memory: 6Gi
- requests:
- cpu: 500m
- memory: 1Gi
- persistence:
- persistentVolumeClaim:
- claimName: stackrox-db
- egress:
- connectivityPolicy: Online
- scannerV4:
- db:
- persistence:
- persistentVolumeClaim:
- claimName: scanner-v4-db
- indexer:
- scaling:
- autoScaling: Disabled
- maxReplicas: 2
- minReplicas: 1
- replicas: 1
- matcher:
- scaling:
- autoScaling: Disabled
- maxReplicas: 2
- minReplicas: 1
- replicas: 1
- scannerComponent: Default
- scanner:
- analyzer:
- scaling:
- autoScaling: Disabled
- maxReplicas: 2
- minReplicas: 1
- replicas: 1
-After the deployment has finished (Status Conditions: Deployed, Initialized in the Operator view on the Central tab), it can take some time until the application is completely up and running. One easy way to check the state, is to switch to the Developer console view on the upper left. Then make sure you are in the stackrox project and open the Topology map. You’ll see the three deployments of the Central instance:
Wait until all Pods have been scaled up properly.
-Verify the Installation
-Switch to the Administrator console view again. Now to check the installation of your Central instance, access the ACS Portal:
-If you access the details of your Central instance in the Operator page you’ll find the complete commandline using oc to retrieve the password from the secret under Admin Credentials Info. Just sayin… ;)
This will get you to the ACS Portal; accept the self-signed certificate and log in as user admin with the password from the secret.
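If you’d rather fetch the password on the command line: it is stored in the central-htpasswd secret the operator creates (a sketch, run from your Web Terminal):

# read the generated admin password from the central-htpasswd secret
oc -n stackrox get secret central-htpasswd -o jsonpath='{.data.password}' | base64 -d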
Now you have a Central instance that provides the following services in an RHACS setup:
-The application management interface and services. It handles data persistence, API interactions, and user interface access. You can use the same Central instance to secure multiple OpenShift or Kubernetes clusters.
-Scanner, which is a vulnerability scanner for scanning container images. It analyzes all image layers for known vulnerabilities from the Common Vulnerabilities and Exposures (CVEs) list. Scanner also identifies vulnerabilities in packages installed by package managers and in dependencies for multiple programming languages.
To actually do and see anything you need to add a SecuredCluster (be it the same or another OpenShift cluster). To see the effect, go to the ACS Portal: the Dashboard should be pretty empty, and if you click the Compliance link in the menu to the left you’ll find lots of zeros and empty panels, too.
-This is because you don’t have a monitored and secured OpenShift cluster yet.
-Now we’ll add your OpenShift cluster as Secured Cluster to ACS.
-First, you have to generate an init bundle which contains certificates and is used to authenticate a SecuredCluster to the Central instance, regardless if it’s the same cluster as the Central instance or a remote/other cluster.
We are using the API to create the init bundle in this workshop, because with the Web Terminal we can’t upload or download files. For the steps to create the init bundle in the ACS Portal see the appendix.
-Let’s create the init bundle using the ACS API on the commandline:
-Go to your Web Terminal (if it timed out just start it again), then paste, edit and execute the following lines:
Replace <central_url> with the base URL of your ACS portal (without ‘https://’, e.g. central-stackrox.apps.cluster-cqtsh.cqtsh.example.com):
export ROX_ENDPOINT=<central_url>:443
-export PASSWORD=<password>
-export DATA={\"name\":\"my-init-bundle\"}
Run this curl command against the API to create the init bundle, using the variables set above:
curl -k -o bundle.json -X POST -u "admin:$PASSWORD" -H "Content-Type: application/json" --data $DATA https://${ROX_ENDPOINT}/v1/cluster-init/init-bundles
-cat bundle.json | jq -r '.kubectlBundle' > bundle64
-base64 -d bundle64 > kube-secrets.bundle
-You should now have these two files in your Web Terminal session: bundle.json and kube-secrets.bundle.
The init bundle needs to be applied to all OpenShift clusters you want to secure and monitor.
- -As said, you can create an init bundle in the ACS Portal, download it and apply it from any terminal where you can run oc against your cluster. We used the API method to show you how to use it and to enable you to use the Web Terminal.
For this workshop we run Central and SecuredCluster on one OpenShift cluster, i.e. we monitor and secure the same cluster the central services live on.
-Apply the init bundle
-Again in the web terminal:
Run oc create -f kube-secrets.bundle -n stackrox, pointing to the init bundle you downloaded from the Central instance or created via the API as above. The output should look like this:
secret/collector-tls created
-secret/sensor-tls created
-secret/admission-control-tls created
You are ready to install the SecuredCluster instance, which will deploy the secured cluster services:
If your ACS Portal URL is https://central-stackrox.apps.<DOMAIN>, the endpoint is central-stackrox.apps.<DOMAIN>:443.
Now go to your ACS Portal again; after a couple of minutes you should see your secured cluster under Platform Configuration->Clusters. Wait until all Cluster Status indicators become green.
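For reference, the resource behind this form looks roughly like the following (a minimal sketch; name, clusterName and endpoint are assumptions you would adapt):

apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox
spec:
  # assumed values - use your own cluster name and Central endpoint
  clusterName: my-cluster
  centralEndpoint: central-stackrox.apps.<DOMAIN>:443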
-To enable scanning of images in your Quay registry, you’ll have to configure an Integration with valid credentials, so this is what you’ll do.
-Now, create a new Integration:
-https://quay-quay-quay.apps.<DOMAIN> (replace domain if required)
Before we start to integrate Red Hat Advanced Cluster Security in our setup, you should become familiar with the basic concepts.
-ACS delivers on these security use cases:
-
Dashboard:
The dashboard serves as the security overview, helping the security team understand what the sources of risk are, categories of violations, and gaps in compliance. All of the elements are clickable for more information and categories are customizable.
Top bar:
Near the top we see a condensed overview of the status. It provides insight into the status of clusters, nodes, violations and so on. The top bar provides links to Search, Command-line tools, Cluster Health, Documentation, API Reference, and the logged-in user account.
Left menus:
The left-hand side menus provide navigation into each of the security use cases, as well as product configuration to integrate with your existing tooling.
Global Search:
On every page throughout the UI, the global search allows you to search for any data that ACS tracks.
-Now start to explore the Security Use Cases ACS targets as provided in the left side menu.
-Network Graph:
-Violations:
-Compliance:
-Vulnerability Management:
-Configuration Management:
-Risk:
-Most UI pages have a filter section at the top that allows you to narrow the view to matching or non-matching criteria. Almost all of the attributes that ACS gathers can be filtered, try it out:
Type Process Name and select the Process Name key, then enter java and press Enter; click away to clear the filters dropdown.
As the system policies are the foundation of ACS, have a good look around:
-By default only some policies are enforced. If you want to get an overview which ones, you can use the filter view introduced above. Use Enforcement as filter key and FAIL_BUILD_ENFORCEMENT as value.
You should have one or more pipelines that build your application from the first workshop part; now we want to secure its build and deployment. For the sake of this workshop we’ll take a somewhat simplified use case:
-We want to scan our application image for the Red Hat Security Advisory RHSA-2021:4904 concerning openssl-lib.
-If this RHSA is found in an image we don’t want to deploy the application.
-These are the steps you will go through:
-First create a new policy category and the system policy. In the ACS Portal do the following:
Enter Workshop as Category name.
Enter RHSA-2021:4904 into the CVE identifier field.
Currently there is an issue with persisting the group change to the central instance. As a workaround run this in your Web Terminal to restart the central instance:
-oc delete pod -n stackrox -l app=central
-Start the pipeline with the affected image version:
Go to the pipeline in the workshop-int project, start it and set Version to java-old-image (remember how we set up this ImageStream tag to point to an old and vulnerable version of the image?).
To make it easier to spot the violations for this deployment you can filter the list by entering namespace and then workshop-int in the filter bar.
Quarkus-Build-Options-Git-Gsklhg-Build-...) come and go when they are finished.Workshop RHSA-2021:4904 (Check the Time of the violation)There will be other policy violations listed, triggered by default policies, have a look around. Note that none of the policies are enforced (so that the pipeline build would be stopped) yet!
-Now start the pipeline with the fixed image version that doesn’t contain the CVE anymore:
openjdk-11-el7).
The violation Workshop RHSA-2021:4904 for your deployment is gone because the image no longer contains it.
This shows how ACS automatically scans images against all enabled policies as soon as they become active. But we don’t want to just admire a violation after the image has been deployed, we want to prevent the deployment at build time! So the next step is to integrate the check into the build pipeline and enforce it (don’t deploy the application).
-
There are basically two ways to interface with ACS. The UI, which focuses on the needs of the security team, and a separate “interface” for developers to integrate into their existing toolset (CI/CD pipeline, consoles, ticketing systems etc): The roxctl commandline tool. This way ACS provides a familiar interface to understand and address issues that the security team considers important.
ACS policies can act during the CI/CD pipeline to identify security risks in container images before they are started.
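For example, a build-time check from a CI job boils down to something like this (a sketch; the endpoint and image are placeholders):

# authenticate with an API token and check an image against build policies
export ROX_API_TOKEN=<your API token>
roxctl image check --endpoint central-stackrox.apps.<DOMAIN>:443 --insecure-skip-tls-verify --image <registry>/<repository>:<tag>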
You should have created and built a custom policy in ACS and tested that it triggers violations. Now you will integrate it into the build pipeline.
roxctl cli
Build-time policies require the use of the roxctl command-line tool, which is available for download from the ACS Central UI, in the upper right corner of the dashboard. You don’t need to download it now as our Tekton task will do this automatically.
roxctl needs to authenticate to ACS Central to do anything. You can use either username and password or API tokens to authenticate against ACS Central. It’s good practice to use a token so that’s what we’ll do.
roxctl token
In the ACS portal:
Enter the name pipeline for the token and select the role Admin.
Change to the OpenShift Web Console and create a secret with the API token in the project your pipeline lives in:
In the workshop-int Project, create a Secret named roxsecrets with the keys rox_api_token (the token you just generated) and rox_central_endpoint. If the DOMAIN placeholder was automatically replaced the endpoint should be: central-stackrox.apps.<DOMAIN>:443
Even if the form says Drag and drop file with your value here… you can just paste the text.
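If you prefer the command line over the web form, an equivalent sketch (the key names match what the pipeline task expects; the values are placeholders):

oc -n workshop-int create secret generic roxsecrets \
  --from-literal=rox_api_token=<your API token> \
  --from-literal=rox_central_endpoint=central-stackrox.apps.<DOMAIN>:443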
-There is one more thing you have to do before integrating the image scanning into your build pipeline:
-When you created your deployment, a trigger was automatically added that deploys a new version when the image referenced by the ImageStream changes.
This is not what we want! This way a newly built image would be deployed immediately, even if the roxctl scan detects a policy violation and terminates the pipeline.
Have a look for yourself:
Open the workshop Deployment and look at the annotation image.openshift.io/triggers. Remove exactly these lines and click Save:
-image.openshift.io/triggers: >-
  [{"from":{"kind":"ImageStreamTag","name":"workshop2:latest","namespace":"workshop-int"},"fieldPath":"spec.template.spec.containers[?(@.name==\"workshop2\")].image","pause":"false"}]
This way we make sure that a new image won’t be deployed automatically right after the build task, which also updates the ImageStream.
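By the way, removing the annotation also works from the Web Terminal; the trailing dash tells oc to delete it (deployment name taken from the steps above):

oc -n workshop-int annotate deployment/workshop image.openshift.io/triggers-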
You are now ready to create a new pipeline task that will use roxctl to scan the image built in your pipeline before the deploy step:
roxsecretsapiVersion: tekton.dev/v1beta1
-kind: ClusterTask
-metadata:
- name: rox-image-check
-spec:
- params:
- - description: >-
- Secret containing the address:port tuple for StackRox Central (example -
- rox.stackrox.io:443)
- name: rox_central_endpoint
- type: string
- - description: Secret containing the StackRox API token with CI permissions
- name: rox_api_token
- type: string
- - description: "Full name of image to scan (example -- gcr.io/rox/sample:5.0-rc1)"
- name: image
- type: string
- - description: Use image digest result from s2i-java build task
- name: image_digest
- type: string
- results:
- - description: Output of `roxctl image check`
- name: check_output
- steps:
- - env:
- - name: ROX_API_TOKEN
- valueFrom:
- secretKeyRef:
- key: rox_api_token
- name: $(params.rox_api_token)
- - name: ROX_CENTRAL_ENDPOINT
- valueFrom:
- secretKeyRef:
- key: rox_central_endpoint
- name: $(params.rox_central_endpoint)
- image: registry.access.redhat.com/ubi8/ubi-minimal:latest
- name: rox-image-check
- resources: {}
- script: >
- #!/usr/bin/env bash
-
- set +x
-
- curl -k -L -H "Authorization: Bearer $ROX_API_TOKEN"
- https://$ROX_CENTRAL_ENDPOINT/api/cli/download/roxctl-linux --output
- ./roxctl > /dev/null; echo "Getting roxctl"
-
- chmod +x ./roxctl > /dev/null
-
- ./roxctl image check -c Workshop --insecure-skip-tls-verify -e $ROX_CENTRAL_ENDPOINT
- --image $(params.image)@$(params.image_digest)
-Take your time to understand the Tekton task definition:
-roxctl binary into the pipeline workspace so you’ll always have a version compatible with your ACS version.roxctl execution, of course:
-image check commandNow add the rox-image-check task to your pipeline between the build and deploy steps.
-Remember how we edited the pipeline directly in yaml before? OpenShift comes with a graphical Pipeline editor that we will use this time.
-build task and click the + at the right side of it, to add a task
roxsecretsroxsecretsquay-quay-quay.apps.<DOMAIN>/openshift_workshop-int/workshop (if the DOMAIN placeholder hasn’t been replaced automatically, do it manually)
As you remember we removed the trigger that updates the Deployment on ImageStream changes. Now the Deployment will never be updated and our new Image version will never be deployed to workshop-int.
To fix this we will add a new oc client Task that updates the Deployment, only after the Scan Task has run.
Add a new task after the deploy task: search for openshift and select the openshift-client task from Red Hat. In the openshift-client task set this command:
oc patch deploy/workshop -p '{"spec":{"template":{"spec":{"containers":[{"name":"workshop","image":"$(params.QUAY_URL)/openshift_workshop-int/workshop@$(tasks.build.results.IMAGE_DIGEST)"}]}}}}'
With our custom System Policy still not set to enforce, we are first going to test the pipeline integration. Go to Pipelines and next to your pipeline click on the three dots and then Start. In the pipeline start form enter java-old-image in the Version field.
rox-image-check task should succeed, but if you have a look at the output (click the task in the visual representation) you should see that the build violated our policy!The last step is to enforce the System Policy. If the policy is violated the pipeline should be stopped and the application should not be deployed.
Edit the policy Workshop RHSA-2021:4904 in the ACS Portal, set Response Method to Inform and enforce and then switch on Build and Deploy below.
Run the pipeline again, first with Version java-old-image and then with Version openjdk-11-el7 (default).
So far you’ve seen how ACS can handle security issues concerning Build and Deploy stages. But ACS is also able to detect and secure container runtime behaviour. Let’s have a look…
As a scenario, let’s assume you want to protect container workloads against attackers who are trying to install software. ACS comes with pre-configured policies for Ubuntu and Red Hat-based containers to detect if a package management tool is installed; these can be used in the Build and Deploy stages:
And, more importantly for this section about runtime security, a policy to detect the execution of a package manager as a runtime violation, using kernel instrumentation:
-In the ACS Portal, go to Platform Configuration->Policy Management, search for the policies by e.g. typing policy and then red hat into the filter. Open the policy detail view by clicking it and have a look at what they do.
You can use the included policies as they are but you can always e.g. clone and adapt them to your needs or write completely new ones.
As you can see, the Red Hat Package Manager Execution policy will alert as soon as an rpm, dnf or yum process is executed.
Like most included policies, it is not set to enforce!
To see what the alert looks like, we have to trigger the condition:
Run yum search test.
Execute some more yum commands in the terminal and check back with the Violations view:
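If you prefer the Web Terminal over the console’s pod terminal, a sketch that triggers the same violation (deployment and namespace assumed from the earlier chapters):

oc -n workshop-int exec deploy/workshop -- yum search test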
-But the real fun starts when you enforce the policy. Using the included policy, it’s easy to just “switch it on”:
-Now trigger the policy again by opening a terminal into the pod in the OpenShift Web Console and executing yum. See what happens:
Red Hat Advanced Cluster Management for Kubernetes (ACM) provides management, visibility and control for your OpenShift and Kubernetes environments. It provides management capabilities for:
-All across hybrid cloud environments.
-Clusters and applications are visible and managed from a single console, with built-in security policies. Run your operations from anywhere that Red Hat OpenShift runs, and manage any Kubernetes cluster in your fleet.
-Before you can start using ACM, you have to install it using an Operator on your OpenShift cluster.
-Advanced Cluster Management for Kubernetes operator.open-cluster-management by default.After the operator has been installed it will inform you to create a MultiClusterHub, the central component of ACM.
Click the Create MultiClusterHub button and have a look at the available installation parameters, but don’t change anything.
-Click Create.
-At some point you will be asked to refresh the web console. Do this, you’ll notice a new drop-down menu at the top of the left menu bar. If left set to local-cluster you get the standard console view, switching to All Clusters takes you to a view provided by ACM covering all your clusters.
Okay, right now you’ll only see one, your local-cluster listed here.
Now let’s change to the full ACM console:
-local-clusters viewmulticlusterhub instance you deployed should be in Status Running by now.All ClustersYou are now in your ACM dashboard!
-
Have a look around:
-One of the main features of Advanced Cluster Management is cluster lifecycle management. ACM can help to:
-Let’s give this a try!
Okay, to not overstress our cloud resources, and for the fun of it, we’ll deploy a Single Node OpenShift (SNO) cluster to the same AWS account your lab cluster is running in.
-The first step is to create credentials in ACM to deploy to the Amazon Web Services account.
- -You’ll get the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY needed to deploy to AWS from your facilitators.
snoAWSsno namespacesandbox<NNNN>.opentlc.com, replace <NNNN> with your id, you can find it e.g. in the URLAccess key ID and Secret access key as provided.openshift-config and copy the content of the secret pull-secret$HOME/.ssh/<LABID>key.pem) and public key ($HOME/.ssh/<LABID>key.pub).
-<LABID> can be found in the URL, e.g. multicloud-console.apps.cluster-z48z9.z48z9.sandbox910.opentlc.comYou have created a new set of credentials to deploy to the AWS account you are using.
-Now you’ll deploy a new OpenShift instance:
-sno credential you created.us-east-1m5.2xlarge.0 (we want a single node OCP…).Now click Next until you arrive at the Review. Do the following:
-YAML: OncontrolPlane section change the replicas field to 1.It’s time to deploy your cluster, click Create!
-ACM monitors the installation of the new cluster and finally imports it. Click View logs under Cluster install to follow the installation log.
- -Installation of a SNO takes around 30 minutes in our lab environment.
-After the installation has finished, access the Clusters section in the ACM portal again.
-
Explore the information ACM is providing, including the Console URL and the access credentials of your shiny new SNO instance. Use them to login to the SNO Web Console.
-In the previous lab, you explored the Cluster Lifecycle functionality of RHACM by deploying a new OpenShift single-node instance to AWS. Now let’s have a look at another capability, Application Lifecycle management.
-Application Lifecycle management is used to manage applications on your clusters. This allows you to define a single or multi-cluster application using Kubernetes specifications, but with additional automation of the deployment and lifecycle management of resources to individual clusters. An application designed to run on a single cluster is straightforward and something you ought to be familiar with from working with OpenShift fundamentals. A multi-cluster application allows you to orchestrate the deployment of these same resources to multiple clusters, based on a set of rules you define for which clusters run the application components.
-The naming convention of the different components of the Application Lifecycle model in RHACM is as follows:
-Start with adding labels to your two OpenShift clusters in your ACM console:
environment=prod
environment=dev
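The labels can also be set from the command line; ACM represents every imported cluster as a ManagedCluster resource (a sketch; cluster names are assumptions, check yours with oc get managedclusters):

oc label managedcluster local-cluster environment=prod
oc label managedcluster <your-sno-cluster> environment=dev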
Now it’s time to actually deploy the application. But first have a look at the manifest definitions ACM will use as deployables at https://github.com/devsecops-workshop/book-import/tree/master/book-import.
-Then in the ACM console navigate to Applications:
-GIT
Click Create and then the topology tab to view the application being deployed:
-
Set the placement label to environment=dev.
Now edit the application in the ACM console and change the label to environment=prod. What happens?
In this simple example you have seen how to deploy an application to an OpenShift cluster using ACM. All manifests defining the application were kept in a Git repo; ACM then used the manifests to deploy the required objects into the target cluster.
You can integrate Ansible Automation Platform and the Automation Controller (formerly known as Ansible Tower) with ACM to perform pre/post tasks within the application lifecycle engine. The prehook and posthook tasks allow you to trigger an Ansible playbook before and after the application is deployed, respectively.
-Notice that you will need a Red Hat Account with a valid Ansible subscription for this part.
-To give this a try you need an Automation Controller instance. So let’s deploy one on your cluster using the AAP Operator:
Search for the Ansible Automation Platform operator and install it using the default settings.
Create an instance named automationcontroller.
Look up the password in the automationcontroller-admin-password secret.
Find the automationcontroller route, access it and log in as user admin using the password from the secret.
You are now set with a shiny new Ansible Automation Platform Controller!
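The admin password can also be read from the secret on the command line (a sketch; replace the namespace with the project you installed the instance into):

oc -n <aap-project> get secret automationcontroller-admin-password -o jsonpath='{.data.password}' | base64 -d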
-In the Automation Controller web UI, generate a token for the admin user:
-admin and select TokensToken for use by ACMWriteSave the token value to a text file, you will need this token later!
-For Automation Controller to run something we must configure a Project and a Template first.
-Create an Ansible Project:
-Create an Ansible Job Template:
Verify that the Job ran by going to Jobs and looking for an acm-test job showing a successful Playbook run.
Set up the credential which is going to allow ACM to interact with your AAP instance in your ACM Portal:
-And now let’s configure the ACM integration with Ansible Automation Platform to kick off a job in Automation Controller. In this case the Ansible job will just run our simple playbook that will only output a message.
-In the ACM Portal:
-Give this a few minutes. The application will complete and in the application topology view you will see the Ansible prehook. In Automation Controller go to Jobs and verify the Automation Job run.
- - - - - - - - -During this workshop you’ll install and use a good number of software components. The first one is OpenShift Data Foundation for providing storage. We’ll start with it because the install takes a fair amount of time. Number two is Gitea for providing Git services in your cluster with more to follow in subsequent chapters.
But fear not, all are managed by Kubernetes Operators on OpenShift.
-Let’s install OpenShift Data Foundation which you might know under the old name OpenShift Container Storage. It is engineered as the data and storage services platform for OpenShift and provides software-defined storage for containers.
OpenShift Data Foundation operator
-
After the operator has been installed it will inform you to install a StorageSystem and to refresh the web console view. From the operator overview page click Create StorageSystem with the following settings:
Keep Deployment type Full deployment and for Backing storage type make sure gp2-csi is selected.
Leave Requested capacity as is (2 TiB) and select all nodes.
Keep the network Default (OVN).
You’ll see a review of your settings; hit Create StorageSystem. Don’t worry if you see a temporary 404 page. Just reload the browser page once and you will see the System Overview.
As mentioned already this takes some time, so go ahead and install the other prerequisites. We’ll come back later.
-You will be asked to run oc (the OpenShift commandline tool) commands a couple of times. We will do this by using the OpenShift Web Terminal. This is the easiest way because you don’t have to install oc or an SSH client.
To extend OpenShift with the Web Terminal option, install the Web Terminal operator:
-This will take some time and installs another operator as a dependency.
-After the operator has installed, reload the OCP Web Console browser window. You will now have a new button (>_) in the upper right. Click it to start a new web terminal. From here you can run the oc commands when the lab guide requests it (copy/paste might depend on your laptop OS and browser settings, e.g. try Ctrl-Shift-V for pasting).
The terminal is not persistent, so if it was closed for any reason anything you did in the terminal is gone after re-opening.
-If for any reason you can’t use the webterminal, your options are:
-oc on your laptopoc without login.TODO: Change yaml applies to direct git download
-We’ll need Git repository services to keep our app and infrastructure source code, so let’s just install trusted Gitea using an operator:
Gitea is an OpenSource Git Server similar to GitHub. A team at Red Hat was so nice to create an Operator for it. This is a good example of how you can integrate an operator into your catalog that is not part of the default OperatorHub already.
-To integrate the Gitea operator into your Operator catalog you need to access your cluster with the oc client. You can do this in two ways:
Use the oc command you copied above; you may need to add --insecure-skip-tls-verify at the end of the line.
As lab-user you will be able to run oc commands without additional login.
Now add the Gitea Operator to your OpenShift OperatorHub catalog using oc:
oc apply -f https://raw.githubusercontent.com/rhpds/gitea-operator/ded5474ee40515c07211a192f35fb32974a2adf9/catalog_source.yaml
-Gitea (You may need to disable search filters)Gitea Operator with default settingsgit with the Project selection menu at the top
TODO: Screenshot
Select the git project via the top Project selection menu (you must be in the git project!).
-
Set these spec values:
spec:
- giteaAdminUser: gitea
- giteaAdminPassword: "gitea"
- giteaAdminEmail: opentlc-mgr@redhat.com
After creation has finished:
Log in to Gitea with user gitea and password gitea.
-
Now we will clone a git repository of a sample application into our Gitea, so we have some code to work with.
-In the cloned repository you’ll find a devspaces_devfile.yml. We will need the URL to the file soon, so keep the tab open.
In later chapters we will need a second repository to hold your GitOps yaml resources. Let’s create this now as well
In Gitea create a New Migration and clone the Config GitOps Repo, which will be the repository that contains our GitOps infrastructure components and state.
Now it’s time to check if the StorageSystem deployment from ODF completed successfully. In the OpenShift web console:
Your container storage is ready to go, explore the information on the overview pages if you’d like.
The image that we have just deployed was pushed to the internal OpenShift Registry, which is a great starting point for your cloud native journey. But if you require more control over your image repos, a graphical UI, scalability, internal security scanning and the like, you may want to upgrade to Red Hat Quay. So as a next step we want to replace the internal registry with Quay.
-Quay installation is done through an operator, too:
-Quayquay at the top Project selection menuquay go to Administration->LimitRanges and delete the quay-core-resource-limits
-
quay projectquayTrue
-
Now that the Registry is installed you have to configure a superuser:
Make sure you are in the quay Project and open the Quay portal (Route quay-quay).
Create the superuser quayadmin with a (fake) email address and quayadmin as password.
Look for the quay-config-editor-credentials-... secret, open it and copy the values, you’ll need them in a second.
Open the quay-quay-config-editor route, log in with those credentials and add quayadmin as superuser.
Reconfiguring Quay takes some time. The easiest way to determine if it has finished is to open the Quay portal (using the quay-quay Route). At the upper right you’ll see the username (quayadmin); if you click the username the drop-down should show a link Super User Admin Panel. When it shows up you can proceed.
To synchronize the internal default OpenShift Registry with the Quay Registry, Quay Bridge is used.
Now we finally create a Quay Bridge instance:
-quay namespace)quaytokenhttps://) and paste it into the Quay Hostname fieldtrue
To synchronize the internal default OpenShift Registry with the Quay Registry, the Quay Bridge is used. Now we need to create a new Organization in Quay:
-quay Projectquay-quay)openshift_integrationWe need an OAuth Application in Quay for the integration:
Type openshift, press Enter and open the new openshift item by clicking it.
Now create a new secret for the Quay Bridge to access Quay. In the OpenShift web console make sure you are in the quay Project. Then:
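A command-line equivalent would be roughly (a sketch; the secret name quaytoken matches the one referenced earlier, the value is the OAuth application token you just generated):

oc -n quay create secret generic quaytoken --from-literal=token=<OAuth access token>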
And you are done with the installation and integration of Quay as your registry!
-Test if the integration works:
openshift_ (you might have to reload the browser).
-openshift_git Quay Organization.
Create the init bundle using the ACS Portal:
-If you are running oc on your laptop, you are set. If you are SSH-ing to another host (like the bastion host) to run oc, you have to scp the init bundle file over there. If you are using the OpenShift Web Terminal you have to use the API method.
The integrations to the internal registry were created automatically. But to enable scanning of images in the internal registry, you’ll have to configure valid credentials, so this is what you’ll do:
-But the first step is to disable the auto-generate mechanism, otherwise your updated credentials would be set back automatically:
-stackrox-central-servicesspec: add the following YAML snippet (one indent):customize:
- envVars:
- - name: ROX_DISABLE_AUTOGENERATED_REGISTRIES
- value: 'true'
-Create ServiceAccount to read images from Registry
-stackrox Projectacs-registry-reader and click Createacs-registry-reader-token-... secretsoc give the ServiceAccount the right to read images from all projects:oc adm policy add-cluster-role-to-user 'system:image-puller' system:serviceaccount:stackrox:acs-registry-reader -n stackrox
-Configure Registry Integrations in ACS
-Access the ACS Portal and configure the already existing integrations of type Generic Docker Registry. Go to Platform Configuration -> Integrations -> Generic Docker Registry. You should see a number of autogenerated (from existing pull-secrets) entries.
-You have to change four entries pointing to the internal registry, you can easily recognize them by the placeholder Username serviceaccount.
For each of the four local registry integrations click Edit integration using the three dots at the right:
-acs-registry-reader as UsernameACS is now able to scan images in the internal registry!
- - - - - - - - -In this part of the workshop you’ll experience how modern software development using the OpenShift tooling can be done in a fast, iterative way. Inner loop here means this is the way, sorry, process, for developers to try out new things and quickly change and test their code on OpenShift without having to build new images all the time or being a Kubernetes expert.
-
OpenShift Dev Spaces is a browser-based IDE for cloud native development. All the heavy lifting is done through a container running your workspace on OpenShift. All you really need is a laptop. You can easily set up a customized environment with plugins, build tools and runtimes, so switching from one project context to another is as easy as switching a website. No more endless installation and configuration marathons on your dev laptop. It is already part of your OpenShift subscription. If you want to find out more, have a look here.
Install the operator into openshift-operators. Once the instance is ready (status > chePhase: Active), look up the devspaces Route in the openshift-workspaces project (if you can’t see openshift-workspaces, you may need to toggle the Show default project button).
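You can also look up the route from the Web Terminal (a sketch; route name taken from the step above):

oc -n openshift-workspaces get route devspaces -o jsonpath='{.spec.host}'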
-You will now need to access the Gitea repository where your Quarkus app resides and specifically get the path to the devfile.
Go to the git project in OpenShift, then Networking > Routes and open the gitea route. In the gitea/quarkus-build-options repository navigate to devspaces_devfile.yml. It is important that you have the URL to the Raw version, otherwise Dev Spaces will receive a website that it cannot parse.
-Now back in your DevSpaces Workspace :
-
Yes, I trust the authors
When your workspace has finally started, have a good look around in the UI. It should look familiar if you have ever worked with VSCode or similar IDEs.
While working with Dev Spaces make sure you have AdBlockers disabled, you are not on a VPN and you have a good internet connection to ensure a stable setup. If you are facing any issues try to reload the browser window. If that doesn’t help, restart the workspace in the main Dev Spaces Web Console under Workspaces and then menu Restart Workspace.
-As an example you’ll create a new Java application. You don’t need to have prior experience programming in Java as this will be kept really simple.
- -We will use a Java application based on the Quarkus stack. Quarkus enables you to create much smaller and faster containerized Java applications than ever before. You can even transcompile these apps to native Linux binaries that start blazingly fast. The app that we will use is just a basic example created with the Quarkus Generator with a simple RESTful API that answers to http requests. But at the end of the day this setup will work with any Java application.
-Fun fact: Every OpenShift Subscription already provides a Quarkus Subscription.
Let’s clone our project into our workspace :
-OpenShift DevSpaces in your browsergit clone until you can select the Git: Clone item
-
Git URL to your Gitea Repository (You can copy the URL by clicking on the clipboard icon in Gitea) and press enter
-
/projects dir, then click the button OK
Yes, I trust the authors again. Last time, promise :)
-
Now we want to create a new OpenShift project for our app:
-terminal in your DevSpaces IDE
-oc OpenShift cli client is already installed and you are already logged into the clusterworkshop-devoc new-project workshop-dev
odo or ‘OpenShift do’ is a cli that enables developers to quickly get started with cloud native app development without being a Kubernetes expert. It offers support for multiple runtimes and you can easily set up microservice components, push code changes into running containers and debug remotely with just a few simple commands. To find out more, have a look here.
-First we need to make sure we are in the folder of the cloned project.
-Enter the following command in the terminal:
-pwd
If you are not in the /projects/quarkus-build-options folder, change into it with the cd command.
odo is smart enough to figure out what programming language and frameworks you are using. So let’s initialize our project:
-odo init
odo is now initialized for your app. Let’s deploy the app to OpenShift in odo dev mode:
-odo dev
-This will compile the app, start a pod in the OpenShift project and inject the app.
-There will be a couple of popups in the bottom right corner (Click on all of them as explained below)
-
New tabs will open, one with the DevFile editor and one showing the Quarkus webpage of your app. The pages may need a few seconds and a reload to show up.
-To test the app in the Quarkus App tab:
-Your app should be displayed as a simple web page. In the RESTEasy JAX-RS section click the @Path endpoint /hello to see the result.
Now for the fun part:
-Using odo you can dynamically change your code and push it again without the need to build a new container image! No dev magic involved:
In your DevWorkspace on the left, expand the file tree to open file src/main/java/org/acme/GreetingRessource.java and change the string “Hello RESTEasy” to “Hello Workshop” (DevSpaces auto saves every edit directly. No need to save the file manually.)
And reload the app webpage.
-Bam! The change should be there in a matter of seconds
-
Now that you have seen how a developer can quickly start to code using modern cloud native tooling, it’s time to learn how to push the application towards a production environment. The first step is to implement a CI/CD pipeline to automate new builds. Let’s call this stage int for integration.
To create and run the build pipeline you’ll use OpenShift Pipelines based on project Tekton. The first step is to install it:
-Red Hat OpenShift Pipelines Operator
-Red Hat OpenShift Pipelines Operator and install it with the default settingsSince the Piplines assets are installed asynchronously it is possible that the Pipeline Templates are not yet setup when proceeding immedately to the next step. So now is good time to grab a coffee.
After installing the Operator, create a new deployment of your game-changing application:
-workshop-int (e.g. using the Projects menu item at the top)workshop-int project by verifying in the top Project menuquarkus-build-options repo in your Gitea instance
-
othermasterBuilder ImageJava and openjdk-11-el7 / Red Hat OpenJDK 11 (RHEL 7)
Set Application name to workshop-app and Name to workshop.
If you don’t have the checkbox Add pipeline and get the message There are no pipeline templates available for Java and Deployment combination in the next step, just give it a few more minutes and reload the page.
The current Pipeline deploys to the Internal Registry by default. The image that was just created by the first run was pushed there.
-To leverage our brand new Quay registry we need to modify the Pipeline in order to push the images to the Quay registry. In addition the OpenShift ImageStream must be modified to point to the Quay registry, too.
s2i-java ClusterTask
The first thing is to create a new Source-To-Image Pipeline Task that automatically updates the ImageStream to point to Quay. You could of course copy and modify the default s2i-java task using the built-in YAML editor of the OpenShift Web Console. But to make this as painless as possible we have prepared the needed YAML object definition for you already.
oc commandshttps://github.com/devsecops-workshop/yaml.git, go there and review the YAML definition.oc create -f https://raw.githubusercontent.com/devsecops-workshop/yaml/main/s2i-java-workshop.yml
-
To make this lab pretty much self-contained, we run oc commands from the OCP Web Terminal. But of course you can do the above steps from any Linux system where you set up the oc command.
You should now have a new ClusterTask named s2i-java-workshop, go to the OpenShift Web Console and check:
workshop-int Projects2i-java-workshop ClusterTask and open itPlease take the time to review the additions to the default s2i-java task:
In the params section there are two new parameters, that will tell the pipeline which ImageStream and tag to update.
- default: ''
- description: The name of the ImageStream which should be updated
- name: IMAGESTREAM
- type: string
-- default: ''
- description: The Tag of the ImageStream which should be updated
- name: IMAGESTREAMTAG
- type: string
At the end of the steps section is a new step that takes care of actually creating the ImageStream tag that points to the image in Quay:
- env:
- - name: HOME
- value: /tekton/home
- image: 'image-registry.openshift-image-registry.svc:5000/openshift/cli:latest'
- name: update-image-stream
- resources: {}
- script: >
- #!/usr/bin/env bash
-
- oc tag --source=docker $(params.IMAGE)
- $(params.IMAGESTREAM):$(params.IMAGESTREAMTAG) --insecure
- securityContext:
- runAsNonRoot: true
- runAsUser: 65532
-Now that we have our new build tasks we need to modify the pipeline to:
-s2i-java-workshop taskTo make this easier we again provide you with a full YAML definition for the Pipeline.
-Do the following:
-If you use this lab guide with your domain as query parameter (see here), you are good to go with the command below because your domain was already inserted into the command. -If not, you have to replace <DOMAIN> manually.
-curl https://raw.githubusercontent.com/devsecops-workshop/yaml/main/workshop-pipeline-without-git-update.yml -o workshop-pipeline-without-git-update.yml
-REPLACEME placeholders in the YAML file with your lab domain.sed -i 's/REPLACEME/<DOMAIN>/g' workshop-pipeline-without-git-update.yml
-oc replace -f workshop-pipeline-without-git-update.yml
-Again take the time to review the changes in the web console:
-workshop Pipeline and switch to YAML- default: workshop
- name: IMAGESTREAM
- type: string
-- default: latest
- name: IMAGESTREAMTAG
- type: string
-The preexisting parameter IMAGE_NAME now points to your local Quay registry:
- - default: >-
- quay-quay-quay.apps.<DOMAIN>/openshift_workshop-int/workshop
- name: IMAGE_NAME
- type: string
-And finally the build task was modified to work with the two new parameters:
tasks:
- - name: build
- params:
- [...]
- - name: IMAGESTREAM
- value: $(params.IMAGESTREAM)
- - name: IMAGESTREAMTAG
- value: $(params.IMAGESTREAMTAG)
-taskRef was changed to s2i-java-workshop, in order to use our custom Pipeline Task:taskRef:
- kind: ClusterTask
- name: s2i-java-workshop
-You are done with adapting the Pipeline to use the Quay registry!
We are ready to give it a try, but first let’s have a quick look at our target Quay repository:
-openshift_workshop-int organization.openshift_workshop-int / workshop repository access the Tags in the menu to the left.
Now it’s time to configure and start the Pipeline.
-workshop PipelineIn the Start Pipeline window that opens, but before (!) starting the actual pipeline, we need to add a Secret so the pipeline can authenticate and push to the Quay repository:
Go to the openshift_workshop-int / workshop repository, open the openshift_workshop-int+builder Robot Account and copy the token.
-
quay-workshop-int-tokenImage RegistryBasic Authenticationquay-quay-quay.apps.<DOMAIN>/openshift_workshop-int (replace your cluster domain if necessary)openshift_workshop-int+builderIf the pipeline fails you may have to recheck the Secret quay-workshop-int-token directly if the username and password are set correctly.
Once the Pipeline run has finished, go to the Quay Portal and check the Repository openshift_workshop-int/workshop again. Under Tags you should now see a new workshop Image version that was just pushed by the pipeline.
Congratulations: Quay is now a first level citizen of your pipeline build strategy.
Now that your build pipeline is set up and ready, there is one more step in preparation of the security part of this workshop. We need a way to build and deploy from an older image with some security issues in it. For this we will add another ImageStream tag to the default Java ImageStream that points to an older version with a known CVE issue in it.
Switch to the openshift project and under Builds click on ImageStreams. Open the java ImageStream and find the spec > tags: section.
-- name: java-old-image
- annotations:
- description: Build and run Java applications using Maven and OpenJDK 8.
- iconClass: icon-rh-openjdk
- openshift.io/display-name: Red Hat OpenJDK 8 (UBI 8)
- sampleContextDir: undertow-servlet
- sampleRepo: "https://github.com/jboss-openshift/openshift-quickstarts"
- supports: "java:8,java"
- tags: "builder,java,openjdk"
- version: "8"
- from:
- kind: DockerImage
- name: "registry.redhat.io/openjdk/openjdk-11-rhel7:1.10-1"
- generation: 4
- importPolicy: {}
- referencePolicy:
- type: Local
-This will add a tag java-old-image that points to an older version of the RHEL Java image. The image and security vulnerabilities can be inspected in the Red Hat Software Catalog here
The affected image version is 1.10-1. We will use this tag to test our security setup in a later chapter.
-For the subsequent exercises we need a new project:
-workshop-prod
Now that our CI/CD build and integration stage is ready we could promote the app version directly to a production stage. But with the help of the GitOps approach, we can leverage our Git system to handle promotion that is tracked through commits and can deploy and configure the whole production environment. This stage is just too critical to configure manually and without an audit.
So let’s start by installing the OpenShift GitOps Operator, based on the project ArgoCD.
-The installation of the GitOps Operator will give you a clusterwide ArgoCD instance available at the link in the top right menu, but since we want to have an instance to manage just our prod project we will create another ArgoCD instance in that specific project.
-workshop-prodworkshop-prod selected in the top menu click on Installed Operators and then Red Hat OpenShift GitOps.workshop-prod project.
We already have a second repository, called openshift-gitops-getting-started, in Gitea that holds the required GitOps YAML resources. We will use this repo to push changes to our workshop-prod environment.
Have a quick look at the structure of this git project:
app - contains YAML files for the deployment, service and route resources needed by our application. These will be applied to the cluster. There is also a kustomization.yaml defining the Kustomize layers that will be applied to all YAML files.
environments/dev - contains the kustomization.yaml which will be modified by our builds with new Image versions. ArgoCD will pick up these changes and trigger new deployments.
Let’s setup the project that tells ArgoCD to watch our configuration repository and update resources in the workshop-prod project accordingly.
Create the instance in the project workshop-prod. The user is admin and the password will be in the Secret argocd-cluster in the Project workshop-prod.
ArgoCD works with the concept of Applications. We will create an Application and point it to the configuration Git repository. ArgoCD will look for Kubernetes YAML files in the repository and path and deploy them to the defined project. Additionally, ArgoCD will also react to changes to the repository and reflect these in the project. You can also enable self-healing to prevent configuration drift. If you want to find out more about OpenShift GitOps have a look here.
Point it to the openshift-gitops-getting-started repo from Gitea.
Watch the resources (Deployment, Service, Route) get rolled out to the project workshop-prod. Notice that we also scale our app to 2 pods in the production stage as we want some high availability. But the actual deployment will not succeed, as shown by the ‘broken heart’ icons!
Since we have not published our image to the Quay workshop-prod repository yet, the initial Deployment will try to roll out a nonexistent image from Quay. Once the first pipeline run is complete, our newly built image will be set in the Deployment and rolled out.
Our complete production stage is now configured and controlled through GitOps. But how do we tell ArgoCD that there is a new version of our app to deploy? Well, we will add a step to our build pipeline updating the configuration repository.
-As we do not want to modify our original repository file we will use a tool called Kustomize that can add incremental change layers to YAML files. Since ArgoCD permanently watches this repository, it will pick up these Kustomize changes.
- -It is also possible to update the repository with a Pull request. Then you have an approval process for your production deployment.
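To make the mechanics concrete: after a pipeline run, the kustomization.yaml in the environment folder ends up looking roughly like this (a sketch; paths, names and digest are illustrative):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../app
images:
  # the original image reference gets swapped for the freshly built one
  - name: quay.io/nexus6/hello-microshift
    newName: quay-quay-quay.apps.<DOMAIN>/openshift_workshop-prod/workshop
    digest: sha256:<digest from the build task>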
We will need to initialize the workshop repository in the openshift_workshop-prod organization in Quay so the robot user will be able to push images there later on:
openshift_workshop-prod on the right
openshift_workshop-prod as Organizationworkshop as repo name
Let’s add a new custom Tekton task to the workshop-int project that can update the Image tag via Kustomize after the build process completed and then push the change to our git configuration repository.
We could add this through the OpenShift Web Console as well but to save time we will apply the file directly via the oc command.
oc create -f https://raw.githubusercontent.com/devsecops-workshop/yaml/main/tekton-kustomize.yml
Switch to workshop-int and then go to Pipelines > Tasks > Tasks and have a look at the just imported task git-update-deployment. You should see the git commands showing how the configuration repository will be cloned, patched by Kustomize and then pushed again.
So now we have a new Tekton Task in our task catalog to update a GitOps Git repository, but we still need to promote the actual image from our workshop-int to the workshop-prod project. Otherwise the image will not be available for our deployment.
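The heart of that task boils down to a few commands like these (a simplified sketch using the task’s parameter names):

git clone "$(params.GIT_REPOSITORY)" repo
cd "repo/$(params.KUSTOMIZATION_PATH)"
# rewrite the image reference in kustomization.yaml to the new digest
kustomize edit set image "$(params.CURRENT_IMAGE)=$(params.NEW_IMAGE)@$(params.NEW_DIGEST)"
git commit -am "update image digest" && git push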
In the workshop-int project, go to Pipelines > Pipelines > workshop and then YAML.
-Add the new Task to your Pipeline by adding it to the YAML like this:
-spec > params section add the following (if the <DOMAIN> placeholder hasn’t been replaced automatically, do it manually):- default: >-
- https://repository-git.apps.<DOMAIN>/gitea/openshift-gitops-getting-started.git
- name: GIT_CONFIG_REPO
- type: string
-tasks level right after the deploy taskGIT_CONFIG_REPO to the Task parameter GIT_REPOSITORYIn the OpenShift YAML viewer/editor you can mark multiple lines and use tab to indent this lines for one step.
-- name: skopeo-copy
- params:
- - name: srcImageURL
- value: "docker://$(params.QUAY_URL)/openshift_workshop-int/workshop:latest"
- - name: destImageURL
- value: "docker://$(params.QUAY_URL)/openshift_workshop-prod/workshop:latest"
- - name: srcTLSverify
- value: "false"
- - name: destTLSverify
- value: "false"
- runAfter:
- - build
- taskRef:
- kind: ClusterTask
- name: skopeo-copy
- workspaces:
- - name: images-url
- workspace: workspace
-- name: git-update-deployment
- params:
- - name: GIT_REPOSITORY
- value: $(params.GIT_CONFIG_REPO)
- - name: CURRENT_IMAGE
- value: "quay.io/nexus6/hello-microshift:1.0.0-SNAPSHOT"
- - name: NEW_IMAGE
- value: $(params.QUAY_URL)/openshift_workshop-prod/workshop
- - name: NEW_DIGEST
- value: $(tasks.build.results.IMAGE_DIGEST)
- - name: KUSTOMIZATION_PATH
- value: environments/dev
- runAfter:
- - skopeo-copy
- taskRef:
- kind: Task
- name: git-update-deployment
- workspaces:
- - name: workspace
- workspace: workspace
-The Pipeline should now look like this. Notice that the new tasks runs in parallel to the deploy task
Now the pipeline is set. The last thing we need is authentication against the Gitea repository and the workshop-prod Quay org. We will add those from the start pipeline form next. Make sure to replace the <DOMAIN> placeholder manually if it wasn’t replaced automatically.
Click on Pipeline Start
Add a secret for the openshift_workshop-prod robot account (openshift_workshop-prod+builder, as before).
-
-
Notice that the deploy and the git-update steps now run in parallel. This is one of the strengths of Tekton: it can scale natively with pods on OpenShift.
This will tell ArgoCD to update the Deployment with this new image version
Check that the new image is rolled out successfully now (you may need to sync manually in ArgoCD to speed things up).
-
This workshop will introduce you to the application development cycle leveraging OpenShift’s tooling & features with a special focus on securing your environment using Advanced Cluster Security for Kubernetes (ACS). You will get a brief introduction to several OpenShift features like OpenShift Pipelines, OpenShift GitOps and OpenShift Dev Spaces. And all in a fun way.
-
This workshop was created by
with contributions from
Feel free to open an issue or create a pull request in GitHub.