This workshop is meant to introduce you to the application development cycle leveraging OpenShift's tooling and features, with a special focus on securing your environment using Advanced Cluster Security for Kubernetes (ACS). And all in a fun way, of course!
This is the storyline you'll follow today:
We try to balance guided workshop steps with challenges that make you use your knowledge to learn new skills. This means you'll get detailed step-by-step instructions for every new chapter/task; later on the guide becomes less verbose and we'll weave in some challenges.
This workshop is for intermediate OpenShift users. A good understanding of how OpenShift works, together with hands-on experience, is expected. For example, we will not tell you how to log in to your cluster with oc, or what oc is… ;)
As part of the workshop you will be provided with freshly installed OpenShift 4.10 clusters. Depending on attendee numbers we might ask you to gather in teams. Some workshop tasks must be done only once for the cluster (e.g. installing Operators), others like deploying and securing the application can be done by every team member separately in their own Project. This will be mentioned in the guide.
You'll get all access details for your lab cluster from the facilitators. This includes the URL to the OpenShift console and information about how to SSH into your bastion host to run oc if asked to.
The easiest way to provide this environment is through the Red Hat Demo System. Provision the catalog item Red Hat OpenShift Container Platform 4 Demo for the attendees.
While the workshop is designed to be run on the Red Hat Demo System (RHDS) with the AWS with OpenShift Open Environment, you should be able to run it on a 4.14 cluster of your own.
Just make sure:
This workshop was tested with these versions:
We’ll tackle the topics at hand step by step with an introduction covering the things worked on before each section.
You'll notice placeholders for cluster access details, mainly the part of the domain that is specific to your cluster. There are two options:
<DOMAIN>: replace it with the value for your environment
Use the URL generator: enter your cluster domain, i.e. everything after the apps. part (e.g. for console-openshift-console.apps.cluster-t50z9.t50z9.sandbox4711.opentlc.com enter cluster-t50z9.t50z9.sandbox4711.opentlc.com), and click the button to generate a link that will customize your lab guide. Click the generated link once to apply it to the current guide.
During the workshop you went through the OpenShift developer experience, starting from software development using Quarkus and odo, moving on to automating build and deployment using Tekton pipelines, and finally using GitOps for production deployments.
Now it's time to add another extremely important piece to the setup: enhancing application security in a containerized world. Using Red Hat Advanced Cluster Security for Kubernetes, of course!
Install the Advanced Cluster Security for Kubernetes operator from the OperatorHub:
Red Hat recommends installing the Red Hat Advanced Cluster Security for Kubernetes Operator in the rhacs-operator namespace. This will happen by default.
You must install the ACS Central instance in its own project, not in the rhacs-operator or openshift-operators projects, or in any project in which you have installed the ACS Operator!
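If you prefer the command line over the OperatorHub UI, the operator installation can be sketched with the following manifests. This is an illustration, not the workshop's official path; the channel name and catalog source should be checked against your cluster's OperatorHub before use:

```yaml
# Namespace recommended by Red Hat for the ACS operator
apiVersion: v1
kind: Namespace
metadata:
  name: rhacs-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: rhacs-operator-group
  namespace: rhacs-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhacs-operator
  namespace: rhacs-operator
spec:
  channel: stable            # assumption: check OperatorHub for the current channel
  name: rhacs-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Applying these with oc apply -f is equivalent to clicking through the OperatorHub install dialog.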
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
  name: stackrox-central-services
  namespace: stackrox
spec:
  monitoring:
    openshift:
      enabled: true
  central:
    notifierSecretsEncryption:
      enabled: false
    exposure:
      loadBalancer:
        enabled: false
        port: 443
      nodePort:
        enabled: false
      route:
        enabled: true
    telemetry:
      enabled: true
    db:
      isEnabled: Default
      persistence:
        persistentVolumeClaim:
          claimName: central-db
      resources:
        limits:
          cpu: 2
          memory: 6Gi
        requests:
          cpu: 500m
          memory: 1Gi
    persistence:
      persistentVolumeClaim:
        claimName: stackrox-db
  egress:
    connectivityPolicy: Online
  scannerV4:
    db:
      persistence:
        persistentVolumeClaim:
          claimName: scanner-v4-db
    indexer:
      scaling:
        autoScaling: Disabled
        maxReplicas: 2
        minReplicas: 1
        replicas: 1
    matcher:
      scaling:
        autoScaling: Disabled
        maxReplicas: 2
        minReplicas: 1
        replicas: 1
    scannerComponent: Default
  scanner:
    analyzer:
      scaling:
        autoScaling: Disabled
        maxReplicas: 2
        minReplicas: 1
        replicas: 1
After the deployment has finished (Status Conditions: Deployed, Initialized in the Operator view on the Central tab), it can take some time until the application is completely up and running. One easy way to check the state is to switch to the Developer console view on the upper left. Then make sure you are in the stackrox project and open the Topology map. You'll see the three deployments of the Central instance:
Wait until all Pods have been scaled up properly.
Verify the Installation
Switch to the Administrator console view again. Now, to check the installation of your Central instance, access the RHACS Portal:
If you access the details of your Central instance on the Operator page, you'll find under Admin Credentials Info the complete oc command line to retrieve the password from the secret. Just sayin… ;)
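For reference, that command boils down to reading the central-htpasswd secret that the operator creates alongside Central. A sketch, assuming the default deployment in the stackrox project and a logged-in oc session:

```shell
# Print the generated admin password of ACS Central
# (secret and key name as created by the operator-managed Central)
oc -n stackrox get secret central-htpasswd \
  -o go-template='{{index .data "password" | base64decode}}'
```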
This will get you to the RHACS Portal; accept the self-signed certificate and log in as user admin with the password from the secret.
Now you have a Central instance that provides the following services in an RHACS setup:
Scanner, which is a vulnerability scanner for scanning container images. It analyzes all image layers for known vulnerabilities from the Common Vulnerabilities and Exposures (CVEs) list. Scanner also identifies vulnerabilities in packages installed by package managers and in dependencies for multiple programming languages.
To actually do and see anything you need to add a SecuredCluster (be it the same or another OpenShift cluster). To see the effect, go to the RHACS Portal. The Dashboard should be pretty empty, and if you click the Compliance link in the menu on the left you'll find lots of zeros and empty panels, too.
This is because you don’t have a monitored and secured OpenShift cluster yet.
Now we’ll add your OpenShift cluster as Secured Cluster to ACS.
First, you have to generate an init bundle, which contains certificates and is used to authenticate a SecuredCluster to the Central instance, regardless of whether it's the same cluster as the Central instance or a remote/other cluster.
We are using the API to create the init bundle in this workshop because, when using the Web Terminal, we can't upload or download files. For the steps to create the init bundle in the ACS Portal, see the appendix.
Let's create the init bundle using the ACS API on the command line:
Go to your Web Terminal (if it timed out, just start it again), then paste, edit and execute the following lines:
Replace <central_url> with the base URL of your ACS Portal (without 'https://', e.g. central-stackrox.apps.cluster-cqtsh.cqtsh.example.com) and <password> with the admin password:
export ROX_ENDPOINT=<central_url>:443
export PASSWORD=<password>
export DATA={\"name\":\"my-init-bundle\"}
Run the curl command against the API to create the init bundle using the variables set above:
curl -k -o bundle.json -X POST -u "admin:$PASSWORD" -H "Content-Type: application/json" --data $DATA https://${ROX_ENDPOINT}/v1/cluster-init/init-bundles
Extract the base64-encoded secrets bundle from the response and decode it:
cat bundle.json | jq -r '.kubectlBundle' > bundle64
base64 -d bundle64 > kube-secrets.bundle
You should now have these two files in your Web Terminal session: bundle.json and kube-secrets.bundle.
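If jq is not available in your terminal, the extraction step can be done with grep and sed alone. Here is a self-contained sketch using a dummy response; the kubectlBundle field name matches the API response used above, while the base64 payload is just the string "apiVersion: v1" for demonstration:

```shell
# Dummy stand-in for the API response written to bundle.json by curl
cat > bundle.json <<'EOF'
{"meta":{"name":"my-init-bundle"},"kubectlBundle":"YXBpVmVyc2lvbjogdjEK"}
EOF

# Equivalent of: jq -r '.kubectlBundle' bundle.json
grep -o '"kubectlBundle":"[^"]*"' bundle.json | sed 's/.*:"\(.*\)"/\1/' > bundle64

# Decode into the file you later apply with oc
base64 -d bundle64 > kube-secrets.bundle
cat kube-secrets.bundle
```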
The init bundle needs to be applied to all OpenShift clusters you want to secure and monitor.
As said, you can create an init bundle in the ACS Portal, download it and apply it from any terminal where you can run oc against your cluster. We used the API method to show you how to use it and to enable you to use the Web Terminal.
For this workshop we run Central and SecuredCluster on one OpenShift cluster, i.e. we monitor and secure the same cluster the Central services live on.
Apply the init bundle
Again in the Web Terminal:
Use the oc command to log in to the OpenShift cluster as cluster-admin.
Run oc create -f kube-secrets.bundle -n stackrox, pointing to the init bundle you created via the API above (or downloaded from the Central instance).
The output should look like this:
secret/collector-tls created
secret/sensor-tls created
secret/admission-control-tls created
Now you are ready to install the SecuredCluster instance; this will deploy the secured cluster services:
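As with Central, the SecuredCluster can also be created as a custom resource instead of via the form view. A minimal sketch; the name stackrox-secured-cluster-services is the operator's suggested default, and you must replace centralEndpoint with your own portal endpoint:

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox
spec:
  # Name under which this cluster appears in the ACS Portal
  clusterName: production
  # Endpoint of your Central instance: host:port, without https://
  centralEndpoint: central-stackrox.apps.<DOMAIN>:443
```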
If your ACS Portal is at https://central-stackrox.apps.<DOMAIN>, the endpoint is central-stackrox.apps.<DOMAIN>:443. E.g. for https://central-stackrox.apps.cluster-65h4j.65h4j.sandbox1803.opentlc.com/ the endpoint is central-stackrox.apps.cluster-65h4j.65h4j.sandbox1803.opentlc.com:443.
Now go to your ACS Portal again; after a couple of minutes you should see your secured cluster under Platform Configuration->Clusters. Wait until all Cluster Status indicators become green.
The integrations to the internal registry were created automatically. To enable scanning of images in the internal registry, you'll have to configure credentials, so this is what you'll do:
Create ServiceAccount to read images from Registry
https://quay-quay-quay.apps.<DOMAIN> (replace domain if required)
Switch to the stackrox Project
Create a ServiceAccount named acs-registry-reader and click Create
Have a look at the acs-registry-reader-token-... secrets
Using oc, give the ServiceAccount the right to read images from all projects:
oc adm policy add-cluster-role-to-user 'system:image-puller' system:serviceaccount:stackrox:acs-registry-reader -n stackrox
Configure Registry Integrations in ACS
Access the RHACS Portal and configure the already existing integrations of type Generic Docker Registry. Go to Platform Configuration -> Integrations -> Generic Docker Registry. You should see a number of autogenerated (from existing pull secrets) entries.
Change the four entries pointing to the internal registry; you can easily recognize them by the placeholder Username serviceaccount.
For each, click Edit integration using the three dots at the right:
acs-registry-reader as Username
ACS is now able to scan images in the internal registry!
To synchronize the internal default OpenShift Registry with the Quay Registry, the Quay Bridge is used. Now we need to create a new Organization in Quay:
Switch to the quay Project (route quay-quay)
Create a new Organization named openshift_integration
We need an OAuth Application in Quay for the integration:
Type openshift, press Enter and click the new openshift item
Value of var "deployment" is: production
Module: yum
Now create a new secret for the Quay Bridge to access Quay. In the OpenShift web console make sure you are in the quay Project. Then:
Arguments: name=nano
And you are done with the installation and integration of Quay as your registry!
Test if the integration works:
openshift_ (you might have to reload the browser).
openshift_git Quay Organization.
Tick Enable Privilege Escalation
Dashboard:
Top bar: Near the top, we see an overview of our OpenShift clusters. It provides insight into the usage of images and secrets. The top bar provides links to Search, Command-line tools, Cluster Health, Documentation, API Reference, and the logged-in user account.
Left menus:
Network Graph:
Vulnerability Management:
Risk:
Most UI pages have a filter section at the top that allows you to narrow the view to matching or non-matching criteria. Almost all of the attributes that ACS gathers can be filtered, try it out:
Enter Process Name and select the Process Name key
Type java and press enter; click away to get the filters dropdown to clear
As system policies are the foundation of ACS, have a good look around:
By default only some policies are enforced. If you want an overview of which ones, you can use the filter view introduced above. Use Enforcement as filter key and FAIL_BUILD_ENFORCEMENT as value.
You should by now have one or more pipelines to build your application; now we want to secure its build and deployment. For the sake of this workshop we'll take a somewhat simplified use case:
We want to scan our application image for the Red Hat Security Advisory RHSA-2020:5566 concerning openssl-lib.
If this RHSA is found in an image, we don't want to deploy the application using it.
These are the steps you will go through:
First create the system policy. In the ACS Portal do the following:
Enter RHSA-2020:5566 into the CVE field
Currently there is an issue with persisting the group change to the Central instance. As a workaround, run this in your Web Terminal to restart the Central instance:
oc delete pod -n stackrox -l app=central
Start the pipeline with the affected image version:
Go to the pipeline in the workshop-int project, start it and set Version to java-old-image (remember how we set up this ImageStream tag to point to an old and vulnerable version of the image?).
To make it easier to spot the violations for this deployment, you can filter the list by entering namespace and then workshop-int in the filter bar.
While the pipeline runs with Version java-old-image, you'll see the build pods (Quarkus-Build-Options-Git-Gsklhg-Build-...) come and go when they are finished.
You should see a violation of policy Workshop RHSA-2020:5566 (check the Time of the violation).
There will be other policy violations listed, triggered by default policies, have a look around. Note that none of the policies are enforced (so that the pipeline build would be stopped) yet!
Now start the pipeline with the fixed image version that doesn't contain the CVE anymore:
Start the pipeline with the default Version (openjdk-11-el7).
The violation Workshop RHSA-2020:5566 for your deployment is gone because the image no longer contains it.
This shows how ACS automatically scans images against all enabled policies when they become active. But we don't want to just see a violation after the image has been deployed, we want to stop the deployment at build time! So the next step is to integrate the check into the build pipeline and enforce it (don't deploy the application).
There are basically two ways to interface with ACS: the UI, which focuses on the needs of the security team, and a separate "interface" for developers to integrate into their existing toolset (CI/CD pipelines, consoles, ticketing systems etc.), namely the roxctl command-line tool. This way ACS provides a familiar interface to understand and address issues that the security team considers important.
ACS policies can act during the CI/CD pipeline to identify security risks in images before they are run as containers.
You should have created and built a custom policy in ACS and tested that it triggers violations. Now you will integrate it into the build pipeline.
roxctl
Build-time policies require the use of the roxctl command-line tool, which is available for download from the ACS Central UI, in the upper right corner of the dashboard. roxctl needs to authenticate to ACS Central to do anything. It can use either username/password authentication or an API token. It's good practice to use a token, so that's what you'll do.
roxctl token
On the ACS portal:
Enter pipeline as the name for the token and select the role Admin.
In your OCP cluster, create a secret with the API token in the project your pipeline lives in:
Switch to the workshop-int Project
Create a Secret named roxsecrets
For the Central endpoint: if the DOMAIN placeholder was automatically replaced, it should be central-stackrox.apps.<DOMAIN>:443
Even if the form says Drag and drop file with your value here… you can just paste the text.
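Instead of clicking through the Secret form, the same secret can be sketched as YAML. The key names rox_central_endpoint and rox_api_token are assumptions following the convention used in common stackrox Tekton examples, so align them with whatever your pipeline task actually expects:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: roxsecrets
  namespace: workshop-int
type: Opaque
stringData:
  # Key names are assumptions; match them to your pipeline task
  rox_central_endpoint: central-stackrox.apps.<DOMAIN>:443
  rox_api_token: <token value copied from the ACS portal>
```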
There is one more thing you have to do before integrating the image scanning into your build pipeline:
When you created your deployment, a trigger was automatically added that deploys a new version when the image referenced by the ImageStream changes.
This is not what we want! This way a newly built image would be deployed immediately, even if the roxctl scan detects a policy violation and terminates the pipeline.
Have a look for yourself:
Open the workshop Deployment and find the annotation image.openshift.io/triggers. Remove exactly these lines and click Save:
image.openshift.io/triggers: >-
  [{"from":{"kind":"ImageStreamTag","name":"workshop2:latest","namespace":"workshop-int"},"fieldPath":"spec.template.spec.containers[?(@.name==\"workshop2\")].image","pause":"false"}]
This way we make sure that a new image won't be deployed automatically right after the build task, which also updates the ImageStream.
You are now ready to create a new pipeline task that will use roxctl to scan the image built in your pipeline before the deploy step:
Use roxsecrets for both secret parameters.
Take your time to understand the Tekton task definition:
First, a script downloads the roxctl binary into the pipeline workspace, so you'll always have a version compatible with your ACS version. Then comes the roxctl execution, of course.
It runs the roxctl image check command.
Now add the rox-image-check task to your pipeline between the build and deploy steps.
Remember how we edited the pipeline directly in YAML before? OpenShift comes with a graphical Pipeline editor that we will use this time.
Find the build task and click the + at the right side of it to add a task.
After you added it you have to fill in values for the parameters the task defines. Click the task; a form with the parameters will open. Fill it in:
roxsecrets
roxsecrets
quay-quay-quay.apps.<DOMAIN>/openshift_workshop-int/workshop (if the DOMAIN placeholder hasn't been replaced automatically, do it manually)
image-registry.openshift-image-registry.svc:5000/workshop-int/quarkus-workshop
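In the pipeline YAML, the result of the graphical editor looks roughly like the following task entry. This is a sketch: the parameter names follow the assumed secret-key convention described above, and the taskRef and runAfter details depend on your actual pipeline:

```yaml
# Excerpt from the Pipeline spec: run the image check after build, before deploy
- name: rox-image-check
  taskRef:
    kind: Task
    name: rox-image-check
  runAfter:
    - build
  params:
    # Both reference the secret created earlier (assumed key convention)
    - name: rox_central_endpoint
      value: roxsecrets
    - name: rox_api_token
      value: roxsecrets
    - name: image
      value: image-registry.openshift-image-registry.svc:5000/workshop-int/quarkus-workshop
```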
As you remember we removed the trigger that updates the Deployment on ImageStream changes. Now the Deployment will never be updated and our new Image version will never be deployed to workshop-int.
To fix this we will add a new oc client Task that updates the Deployment, only after the Scan Task has run.
Below the deploy Task, search for openshift and select the openshift-client task from Red Hat.
In the openshift-client Task, use this command:
oc patch deploy/workshop -p '{"spec":{"template":{"spec":{"containers":[{"name":"workshop","image":"$(params.QUAY_URL)/openshift_workshop-int/workshop@$(tasks.build.results.IMAGE_DIGEST)"}]}}}}'
With our custom System Policy still not set to enforce, we are first going to test the pipeline integration. Go to Pipelines and, next to your pipeline, click the three dots and then Start. In the pipeline start form, enter java-old-image in the Version field.
To test the fixed image, just start the pipeline with the default (latest) Java version again.
With java-old-image, the rox-image-check task should succeed, but if you have a look at the output (click the task in the visual representation) you should see that the build violated our policy!
With the fixed image, the rox-image-check task should succeed; if you have a look at the output you should see no policy violation!
The last step is to enforce the Security Policy. If the policy is violated, the pipeline should be stopped and the application should not be deployed.
Find the policy Workshop RHSA-2020:5566 in the ACS Portal, set Response Method to Inform and enforce, and switch on Build and Deploy below.
Run the pipeline again, first with Version java-old-image and then with the latest default version (openjdk-11-el7).
So far you’ve seen how ACS can handle security issues concerning Build and Deploy stages. But ACS is also able to detect and secure container runtime behaviour. Let’s have a look…
As a scenario let’s assume you want to protect container workloads against attackers who are trying to install software. ACS comes with pre-configured policies for Ubuntu and Red Hat-based containers to detect if a package management tool is installed, this can be used in the Build and Deploy stages:
In the ACS Portal, go to Platform Configuration->Policy Management, search for the policies by e.g. typing policy and then red hat into the filter. Open the policy detail view by clicking it and have a look at what they do.
You can use the included policies as they are but you can always e.g. clone and adapt them to your needs or write completely new ones.
To see what the alert looks like, we have to trigger the condition:
Open a terminal into the pod and run yum search test. Or whatever.
Run a few yum commands in the terminal and check back with the Violations view:
But the real fun starts when you enforce the policy. Using the included policy, it's easy to just "switch it on":
Now trigger the policy again by opening a terminal into the pod in the OpenShift Web Console and executing yum. See what happens:
Red Hat Advanced Cluster Management for Kubernetes (ACM) provides management, visibility and control for your OpenShift and Kubernetes environments. It provides management capabilities for:
All across hybrid cloud environments.
Clusters and applications are visible and managed from a single console, with built-in security policies. Run your operations from anywhere that Red Hat OpenShift runs, and manage any Kubernetes cluster in your fleet.
Before you can start using ACM, you have to install it using an Operator on your OpenShift cluster.
Install the Advanced Cluster Management for Kubernetes operator. It is installed into the open-cluster-management namespace by default.
After the operator has been installed it will inform you to create a MultiClusterHub, the central component of ACM.
Click the Create MultiClusterHub button and have a look at the available installation parameters, but don’t change anything.
Click Create.
At some point you will be asked to refresh the web console. Do this, and you'll notice a new drop-down menu at the top of the left menu bar. If left set to local-cluster you get the standard console view; switching to All Clusters takes you to a view provided by ACM covering all your clusters.
Okay, right now you’ll only see one, your local-cluster listed here.
Now let’s change to the full ACM console:
In the local-cluster view, the multiclusterhub instance you deployed should be in Status Running by now.
Switch to All Clusters.
You are now in your ACM dashboard!
Have a look around:
One of the main features of Advanced Cluster Management is cluster lifecycle management. ACM can help to:
Let's give this a try!
Okay, to not overstress our cloud resources, and just for the fun of it, we'll deploy a Single Node OpenShift (SNO) cluster to the same AWS account your lab cluster is running in.
The first step is to create credentials in ACM to deploy to the Amazon Web Services account.
You'll get the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY needed to deploy to AWS from your facilitators.
Name: sno
Cloud provider: AWS
Namespace: sno namespace
Base DNS domain: sandbox<NNNN>.opentlc.com, replace <NNNN> with your id; you can find it e.g. in the URL
Enter Access key ID and Secret access key as provided.
Pull secret: go to Project openshift-config and copy the content of the secret pull-secret
SSH keys: use the private key ($HOME/.ssh/<LABID>key.pem) and public key ($HOME/.ssh/<LABID>key.pub). <LABID> can be found in the URL, e.g. multicloud-console.apps.cluster-z48z9.z48z9.sandbox910.opentlc.com
You have created a new set of credentials to deploy to the AWS account you are using.
Now you'll deploy a new OpenShift instance:
Select the sno credential you created.
Region: us-east-1
Instance type: m5.2xlarge
Worker pool size: 0 (we want a single node OCP…)
Now click Next until you arrive at the Review. Do the following:
Switch YAML: On and in the controlPlane section change the replicas field to 1.
It's time to deploy your cluster, click Create!
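The relevant part of the generated install-config that you change in the YAML view looks like this. A sketch showing only the two replica settings for a single-node cluster; everything else in the file stays as generated:

```yaml
# install-config excerpt: one control plane node, no dedicated workers
controlPlane:
  name: master
  replicas: 1        # changed from 3 for SNO
compute:
  - name: worker
    replicas: 0      # the single control plane node also runs workloads
```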
ACM monitors the installation of the new cluster and finally imports it. Click View logs under Cluster install to follow the installation log.
Installation of a SNO takes around 30 minutes in our lab environment.
After the installation has finished, access the Clusters section in the ACM portal again.
Explore the information ACM is providing, including the Console URL and the access credentials of your shiny new SNO instance. Use them to login to the SNO Web Console.
In the previous lab, you explored the Cluster Lifecycle functionality of RHACM by deploying a new OpenShift single-node instance to AWS. Now let’s have a look at another capability, Application Lifecycle management.
Application Lifecycle management is used to manage applications on your clusters. This allows you to define a single or multi-cluster application using Kubernetes specifications, but with additional automation of the deployment and lifecycle management of resources to individual clusters. An application designed to run on a single cluster is straightforward and something you ought to be familiar with from working with OpenShift fundamentals. A multi-cluster application allows you to orchestrate the deployment of these same resources to multiple clusters, based on a set of rules you define for which clusters run the application components.
The naming convention of the different components of the Application Lifecycle model in RHACM is as follows:
Start with adding labels to your two OpenShift clusters in your ACM console:
- Label one cluster with environment=prod
- Label the other cluster with environment=dev
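The labels can also be applied from the hub cluster with oc; a sketch (the cluster names are placeholders, use the names from the ACM Clusters view):

```shell
# ManagedCluster resources live on the ACM hub and are
# cluster-scoped, so no namespace is needed.
oc label managedcluster <prod-cluster-name> environment=prod
oc label managedcluster <dev-cluster-name> environment=dev

# Verify the labels were applied.
oc get managedclusters --show-labels
```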
Now it’s time to actually deploy the application. But first have a look at the manifest definitions ACM will use as deployables at https://github.com/devsecops-workshop/book-import/tree/master/book-import.
Then in the ACM console navigate to Applications:
Create a new application and select GIT as the repository type.
Click Create and then the topology tab to view the application being deployed:
The application was placed on clusters matching the label environment=dev. Now edit the application in the ACM console and change the label to environment=prod. What happens?
In this simple example you have seen how to deploy an application to an OpenShift cluster using ACM. All manifests defining the application were kept in a Git repo; ACM then used the manifests to deploy the required objects into the target cluster.
You can integrate Ansible Automation Platform and the Automation Controller (formerly known as Ansible Tower) with ACM to perform pre / post tasks within the application lifecycle engine. The prehook and posthook tasks allow you to trigger an Ansible playbook before and after the application is deployed, respectively.
Note that you will need a Red Hat account with a valid Ansible subscription for this part.
To give this a try you need an Automation Controller instance. So let’s deploy one on your cluster using the AAP Operator:
- In OperatorHub, find the Ansible Automation Platform operator and install it using the default settings.
- Create an Automation Controller instance named automationcontroller
- Wait for the automationcontroller-admin-password secret and copy the password from it
- Open the automationcontroller route, access it and log in as user admin using the password from the secret
You are now set with a shiny new Ansible Automation Platform Controller!
In the Automation Controller web UI, generate a token for the admin user:
- Click the user admin and select Tokens
- Add a token with the description Token for use by ACM and scope Write
Save the token value to a text file, you will need this token later!
For Automation Controller to run something we must configure a Project and a Template first.
Create an Ansible Project:
Create an Ansible Job Template:
Verify that the Job ran by going to Jobs and looking for an acm-test job showing a successful Playbook run.
Set up the credential which is going to allow ACM to interact with your AAP instance in your ACM Portal:
And now let’s configure the ACM integration with Ansible Automation Platform to kick off a job in Automation Controller. In this case the Ansible job will just run our simple playbook that will only output a message.
In the ACM Portal:
Give this a few minutes. The application will complete and in the application topology view you will see the Ansible prehook. In Automation Controller go to Jobs and verify the Automation Job run.
During this workshop you’ll install and use a good number of software components. The first one is OpenShift Data Foundation for providing storage. We’ll start with it because the install takes a fair amount of time. Number two is Gitea for providing Git services in your cluster, with more to follow in subsequent chapters.
But fear not, all are managed by Kubernetes Operators on OpenShift.
Let’s install OpenShift Data Foundation, which you might know under the old name OpenShift Container Storage. It is engineered as the data and storage services platform for OpenShift and provides software-defined storage for containers.
In OperatorHub, search for and install the OpenShift Data Foundation operator.
After the operator has been installed it will inform you to install a StorageSystem. From the operator overview page click Create StorageSystem with the following settings:
- For Deployment type select Full deployment, and for Backing storage type make sure gp2 is selected.
- Leave Requested capacity as is (2 TiB) and select all nodes.
- Keep the network at Default (SDN)
You’ll see a review of your settings, hit Create StorageSystem. Don’t worry if you see a temporary 404 page. Just reload the browser page once and you will see the System Overview.
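While you wait, the rollout can also be observed from the command line; a sketch using standard oc commands:

```shell
# ODF creates its resources in the openshift-storage namespace.
# The StorageCluster is done once its phase reports Ready.
oc get storagecluster -n openshift-storage
oc get pods -n openshift-storage
```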
As mentioned already this takes some time, so go ahead and install the other prerequisites. We’ll come back later.
You will be asked to run oc (the OpenShift commandline tool) commands a couple of times. We will do this by using the OpenShift Web Terminal. This is the easiest way because you don’t have to install oc or an SSH client.
To extend OpenShift with the Web Terminal option, install the Web Terminal operator:
This will take some time and will install another operator as a dependency.
After the operator has installed, reload the OCP Web Console browser window. You will now have a new button (>_) in the upper right. Click it to start a new web terminal. From here you can run the oc commands when the lab guide requests it (copy/paste might depend on your laptop OS and browser settings, e.g. try Ctrl-Shift-V for pasting).
The terminal is not persistent, so if it was closed for any reason anything you did in the terminal is gone after re-opening.
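A quick sanity check after opening the terminal:

```shell
# The web terminal runs as a pre-authenticated session, so oc
# works without a separate login.
oc whoami
oc version
```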
If for any reason you can’t use the web terminal, your options are:
- Install oc on your laptop and log in to the cluster
- SSH into your bastion host, where you can run oc without login.
We’ll need Git repository services to keep our app and infrastructure source code, so let’s install trusty Gitea using an operator:
Gitea is an open source Git server similar to GitHub. A team at Red Hat was kind enough to create an Operator for it. This is a good example of how you can integrate an operator into your catalog that is not part of the default OperatorHub already.
To integrate the Gitea operator into your Operator catalog you need to access your cluster with the oc client. You can do this in two ways:
- Use the oc login command you copied above; you may need to add --insecure-skip-tls-verify at the end of the line
Or, if working on a Red Hat RHPDS environment:
- SSH into your bastion host as lab-user; you will be able to run oc commands without additional login.
Now using oc, add the Gitea Operator to your OpenShift OperatorHub catalog:
oc apply -f https://raw.githubusercontent.com/rhpds/gitea-operator/ded5474ee40515c07211a192f35fb32974a2adf9/catalog_source.yaml
- In OperatorHub, search for Gitea (you may need to disable search filters)
- Install the Gitea Operator with default settings
- Create a new project git with the Project selection menu at the top
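If the Gitea operator doesn’t show up in OperatorHub right away, you can check that the catalog source was registered; a sketch:

```shell
# Catalog sources live in openshift-marketplace; the Gitea entry
# appears in OperatorHub once its catalog pod is running.
oc get catalogsource -n openshift-marketplace
oc get pods -n openshift-marketplace
```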
Switch to the git project via the top Project selection menu (don’t skip this, the instance must go into project git!) and create a Gitea instance.
In the YAML view, set the following spec values:
spec:
  giteaAdminUser: gitea
  giteaAdminPassword: "gitea"
  giteaAdminEmail: opentlc-mgr@redhat.com
After creation has finished:
- Log in to Gitea with user gitea and password gitea
Now we will clone a git repository of a sample application into our Gitea, so we have some code to work with.
In the cloned repository you’ll find a devfile.yaml. We will need the URL to the file soon, so keep the tab open.
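If you want a local copy of the migrated repository, you can also clone it from your Gitea instance; a sketch (the hostname and repository path are placeholders, copy the real clone URL from the repository page in Gitea):

```shell
# Clone the migrated repository from the in-cluster Gitea.
# <gitea-route-host> and <repository> are placeholders; the owner
# is the gitea admin user created above.
git clone https://<gitea-route-host>/gitea/<repository>.git
```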
In later chapters we will need a second repository to hold your GitOps yaml resources. Let’s create it now as well.
In Gitea, create a New Migration and clone the Config GitOps Repo, which will be the repository that contains our GitOps infrastructure components and state.
Now it’s time to check if the StorageSystem deployment from ODF completed successfully. In the OpenShift web console:
Your container storage is ready to go, explore the information on the overview pages if you’d like.
The image that we have just deployed was pushed to the internal OpenShift Registry, which is a great starting point for your cloud native journey. But if you require more control over your image repos, a graphical UI, scalability, internal security scanning and the like, you may want to upgrade to Red Hat Quay. So as a next step we want to replace the internal registry with Quay.
Quay installation is done through an operator, too:
- In OperatorHub, search for Quay and install the operator
- Create a new project quay with the top Project selection menu
- In project quay go to Administration->LimitRanges and delete the quay-core-resource-limits
In the quay project, create a Quay Registry instance named quay and leave the managed components set to True.
Now that the Registry is installed you have to configure a superuser:
- Make sure you are in the quay Project and open the Quay portal (using the quay-quay route)
- Create a new user account quayadmin with a (fake) email address and quayadmin as password.
- Look for the secret quay-config-editor-credentials-..., open the secret and copy the values, you’ll need them in a second.
- Open the quay-quay-config-editor route, log in with the credentials from the secret and configure quayadmin as superuser
Reconfiguring Quay takes some time. The easiest way to determine if it’s finished is to open the Quay portal (using the quay-quay route). At the upper right you’ll see the username (quayadmin); if you click the username, the drop-down should show a link Super User Admin Panel. When it shows up you can proceed.
To synchronize the internal default OpenShift Registry with the Quay Registry, Quay Bridge is used.
Now we finally create a Quay Bridge instance:
- Create a secret containing the access token (in the quay namespace)
- Create a QuayIntegration instance named quay referencing the token secret
- Copy the Quay route URL (including https://) and paste it into the Quay Hostname field
- Set Insecure Registry to true
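The token secret can also be created on the command line; a sketch (secret name, key and namespace are assumptions, match them to whatever your QuayIntegration references; the token value is a placeholder):

```shell
# Store the Quay access token in a secret for Quay Bridge.
# Secret name "quay-integration", key "token" and namespace "quay"
# are assumptions; adjust to your QuayIntegration spec.
oc create secret generic quay-integration \
  --from-literal=token='<quay-access-token>' \
  -n quay
```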