From e002401ea16db59d97230a72651137349d3f3dc4 Mon Sep 17 00:00:00 2001 From: OpenStack Release Bot Date: Fri, 3 Mar 2023 18:15:06 +0000 Subject: [PATCH 01/11] Update .gitreview for stable/2023.1 Change-Id: I05fb34bb245b015517f141d944f78e3edbf5da26 --- .gitreview | 1 + 1 file changed, 1 insertion(+) diff --git a/.gitreview b/.gitreview index 64aebd2..1b8e2a9 100644 --- a/.gitreview +++ b/.gitreview @@ -2,3 +2,4 @@ host=review.opendev.org port=29418 project=openstack/devstack-plugin-ceph.git +defaultbranch=stable/2023.1 From 9afbbc2e7e12693d6cc8a71aa63cfc569345f4cc Mon Sep 17 00:00:00 2001 From: OpenStack Release Bot Date: Fri, 3 Mar 2023 18:15:07 +0000 Subject: [PATCH 02/11] Update TOX_CONSTRAINTS_FILE for stable/2023.1 Update the URL to the upper-constraints file to point to the redirect rule on releases.openstack.org so that anyone working on this branch will switch to the correct upper-constraints list automatically when the requirements repository branches. Until the requirements repository has a stable/2023.1 branch, tests will continue to use the upper-constraints list on master. 
Change-Id: Iad977e2f8734d743b4fa1c38e7a47696e5c0098c --- tox.ini | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tox.ini b/tox.ini index fe80eb7..271c0c2 100644 --- a/tox.ini +++ b/tox.ini @@ -26,7 +26,7 @@ commands = bash -c "find {toxinidir} \ [testenv:docs] deps = - -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} + -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/2023.1} -r{toxinidir}/doc/requirements.txt commands = rm -rf doc/build From 2d61c38586bfd25cc6cf858c52c739f44f96c191 Mon Sep 17 00:00:00 2001 From: Sean Mooney Date: Fri, 7 Jul 2023 14:20:48 +0100 Subject: [PATCH 03/11] [Partial Backport] Revert "Temporary pin the ceph jobs nodeset to Focal" Note(sean-k-mooney): This is a partial backport of I899822fec863f43cd6c58b25cf4688c6a3ac1e9b containing only the change to enable validations in the base job and the swap/concurrency/mysql changes to account for the high memory pressure in the job, which leads to instability. All changes outside of the .zuul.yaml change are dropped, as is the Depends-On for cinder-tempest-plugin. cinder-tempest-plugin is branchless, so we do not need to backport it, and it is already merged on master, so the dependency is fulfilled. This reverts commit 863a01b03286e6595d68ac7f2560c857bcf944c5. Partial revert only for the pin to focal; leaves the other broken jobs commented out. Update paste-deploy workaround to be used always. Add qemu-block-extra and podman deps to the debs list. Running on the newer ceph and distro causes some quite different performance characteristics that cause tests that used to pass to fail more often. This includes some performance optimizations to help reduce the memory footprint, as well as depending on changes to tempest tests to improve the reliability of those tests by enabling validation via SSH. 
This also moves the cephadm job to be the voting/gating job as that seems to be the clear consensus about "the future" of how we deploy ceph for testing. Co-Authored-By: Dan Smith Change-Id: I899822fec863f43cd6c58b25cf4688c6a3ac1e9b (cherry picked from commit 41b6a8c227190a9b52d29a078425321f96240d92) --- .zuul.yaml | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/.zuul.yaml b/.zuul.yaml index 115194a..6820c71 100644 --- a/.zuul.yaml +++ b/.zuul.yaml @@ -21,11 +21,14 @@ - ^tox.ini$ timeout: 7200 vars: + configure_swap_size: 8192 + tempest_concurrency: 3 devstack_localrc: ENABLE_FILE_INJECTION: false TEMPEST_PLUGINS: '/opt/stack/cinder-tempest-plugin' ENABLE_VOLUME_MULTIATTACH: false - TEMPEST_RUN_VALIDATION: false + TEMPEST_RUN_VALIDATION: True + MYSQL_REDUCE_MEMORY: True devstack_plugins: devstack-plugin-ceph: https://opendev.org/openstack/devstack-plugin-ceph devstack_services: From b8829a54f237ab721e48a9d5a33e80e6aa3f6303 Mon Sep 17 00:00:00 2001 From: Goutham Pacha Ravi Date: Fri, 10 Mar 2023 18:44:50 -0800 Subject: [PATCH 04/11] Cleanup installation and revive cephfs-nfs job The cephfs-nfs job was turned off [1] for perma-failing. This commit adds the original non-voting job back into the check queue and fixes some installation issues: 1) use ceph "quincy" release: Ceph Pacific's end of life is 2023-06-01 [2]. The manila community thinks deployers are more likely to use quincy with the 2023.2 (bobcat) release of OpenStack. 2) run the job with centos-stream-9: There are no packages currently available for Jammy Jellyfish on download.ceph.com [3]. The OS shouldn't really matter for this CI job that is meant to test feature functionality provided by manila. At this time, we'd like to stick with builds provided by the ceph community instead of the distro since it may take a while to get bugfixes into distro builds. 3) The install script uses "nfs-ganesha" builds for ubuntu and centos hosted by the nfs-ganesha community [4]. 
We will not rely on the ceph community to provide the latest builds for nfs-ganesha any longer. This commit also cleans up the unnecessary condition in the ceph script file pertaining to configuring ceph packages for Jammy Jellyfish. This step wasn't doing anything. Ubuntu packages don't work at the moment and that requires some more investigation. [1] Id2ae61979505de5efb47ce90a2bac8aac2fc5484 [2] https://docs.ceph.com/en/latest/releases/ [3] https://www.spinics.net/lists/ceph-users/msg74312.html [4] https://download.nfs-ganesha.org/ Change-Id: I40dfecfbbe21b2f4b3e4efd903980b5b146c4202 Signed-off-by: Goutham Pacha Ravi (cherry picked from commit 563cb5deeb21815ce0c62fa30249e85e886c783a) --- .zuul.yaml | 22 ++- devstack/files/debs/devstack-plugin-ceph | 1 + devstack/files/rpms/devstack-plugin-ceph | 2 + devstack/lib/ceph | 175 ++++++++--------------- 4 files changed, 82 insertions(+), 118 deletions(-) create mode 100644 devstack/files/debs/devstack-plugin-ceph create mode 100644 devstack/files/rpms/devstack-plugin-ceph diff --git a/.zuul.yaml b/.zuul.yaml index 6820c71..2ecc11c 100644 --- a/.zuul.yaml +++ b/.zuul.yaml @@ -96,6 +96,22 @@ Runs manila tempest plugin tests with CephFS via NFS-Ganesha as a manila back end (DHSS=False) parent: manila-tempest-plugin-cephfs-nfs + nodeset: devstack-single-node-centos-9-stream + vars: + # TODO(gouthamr): some tests are disabled due to bugs + # IPv6 Tests: https://bugs.launchpad.net/manila/+bug/1998489 + # snapshot clone fs sync: https://bugs.launchpad.net/manila/+bug/1989273 + tempest_exclude_regex: "\ + (^manila_tempest_tests.tests.scenario.*IPv6.*)|\ + (^manila_tempest_tests.tests.scenario.test_share_basic_ops.TestShareBasicOpsNFS.test_write_data_to_share_created_from_snapshot)" + devstack_localrc: + MANILA_OPTGROUP_cephfsnfs_cephfs_ganesha_server_ip: "{{ hostvars[inventory_hostname]['nodepool']['private_ipv4'] }}" + CEPH_RELEASE: "quincy" + MANILA_SETUP_IPV6: false + NEUTRON_CREATE_INITIAL_NETWORKS: true + IP_VERSION: 4 + 
+ - job: name: devstack-plugin-ceph-tempest-fedora-latest @@ -179,9 +195,9 @@ - devstack-plugin-ceph-cephfs-native: irrelevant-files: *irrelevant-files voting: false - # - devstack-plugin-ceph-cephfs-nfs: - # irrelevant-files: *irrelevant-files - # voting: false + - devstack-plugin-ceph-cephfs-nfs: + irrelevant-files: *irrelevant-files + voting: false # - devstack-plugin-ceph-tempest-fedora-latest # - devstack-plugin-ceph-multinode-tempest-py3 # - devstack-plugin-ceph-multinode-tempest-cephadm: diff --git a/devstack/files/debs/devstack-plugin-ceph b/devstack/files/debs/devstack-plugin-ceph new file mode 100644 index 0000000..0dee61c --- /dev/null +++ b/devstack/files/debs/devstack-plugin-ceph @@ -0,0 +1 @@ +xfsprogs \ No newline at end of file diff --git a/devstack/files/rpms/devstack-plugin-ceph b/devstack/files/rpms/devstack-plugin-ceph new file mode 100644 index 0000000..2d67a35 --- /dev/null +++ b/devstack/files/rpms/devstack-plugin-ceph @@ -0,0 +1,2 @@ +xfsprogs +dbus-tools \ No newline at end of file diff --git a/devstack/lib/ceph b/devstack/lib/ceph index 9b37b61..a054d9a 100755 --- a/devstack/lib/ceph +++ b/devstack/lib/ceph @@ -30,7 +30,19 @@ TEST_MASTER=$(trueorfalse False TEST_MASTER) CEPH_RELEASE=${CEPH_RELEASE:-pacific} -GANESHA_RELEASE=${GANESHA_RELEASE:-V3.5-stable} +GANESHA_RELEASE=${GANESHA_RELEASE:-'unspecified'} +# Remove "v" and "-stable" prefix/suffix tags +GANESHA_RELEASE=$(echo $GANESHA_RELEASE | sed -e "s/^v//" -e "s/-stable$//") + +if [[ "$MANILA_CEPH_DRIVER" == "cephfsnfs" && "$GANESHA_RELEASE" == "unspecified" ]]; then + # default ganesha release based on ceph release + case $CEPH_RELEASE in + pacific) + GANESHA_RELEASE='3.5' ;; + *) + GANESHA_RELEASE='4.0' ;; + esac +fi # Deploy a Ceph demo container instead of a non-containerized version CEPH_CONTAINERIZED=$(trueorfalse False CEPH_CONTAINERIZED) @@ -273,17 +285,13 @@ function _undefine_virsh_secret { # check_os_support_ceph() - Check if the OS provides a decent version of Ceph function 
check_os_support_ceph { - if [[ ! ${DISTRO} =~ (jammy|focal|bionic|xenial|f31|f32|f33|f34|rhel8) ]]; then - echo "WARNING: your distro $DISTRO does not provide \ - (at least) the Luminous release. \ - Please use Ubuntu Xenial, Ubuntu Bionic, Ubuntu Focal, Ubuntu Jammy, \ - Fedora 31-34 or CentOS Stream 8." + if [[ ! ${DISTRO} =~ (jammy|focal|bionic|xenial|f31|f32|f33|f34|rhel8|rhel9) ]]; then + echo "WARNING: devstack-plugin-ceph hasn't been tested with $DISTRO. \ + Set FORCE_CEPH_INSTALL=yes in your local.conf if you'd like to \ + attempt installation anyway." if [[ "$FORCE_CEPH_INSTALL" != "yes" ]]; then - die $LINENO "If you wish to install Ceph on this distribution \ - anyway run with FORCE_CEPH_INSTALL=yes, \ - this assumes that YOU will setup the proper repositories" + die $LINENO "Not proceeding with install." fi - NO_UPDATE_REPOS=False fi if [[ ! $INIT_SYSTEM == 'systemd' ]]; then @@ -893,7 +901,7 @@ function configure_ceph_cinder { sudo rbd pool init ${CINDER_CEPH_POOL} } -# install_ceph() - Collect source and prepare +# install_ceph_remote() - Collect source and prepare function install_ceph_remote { install_package ceph-common # ceph-common in Bionic (18.04) installs only the python2 variants of @@ -920,12 +928,12 @@ function install_ceph_remote { function dnf_add_repository_ceph { local ceph_release=$1 - local distro_release=$2 + local package_release=$2 cat > ceph.repo < nfs-ganesha.repo < \ -# [] -# - package_manager: apt-get or yum -# - ceph_release: luminous, ... 
-# - distro_release: 7, xenial, bionic +# Usage: configure_repo_ceph +# - package_release: to override the os_RELEASE variable function configure_repo_ceph { - local package_manager="$1" - local ceph_release="$2" - local distro_release="$3" + + package_release=${1:-$os_RELEASE} if is_ubuntu; then if [[ "${TEST_MASTER}" == "True" ]]; then repo_file_name="/etc/apt/sources.list.d/ceph-master.list" - sudo wget -c "https://shaman.ceph.com/api/repos/ceph/master/latest/ubuntu/${distro_release}/flavors/default/repo" -O ${repo_file_name} + sudo wget -c "https://shaman.ceph.com/api/repos/ceph/master/latest/ubuntu/${package_release}/flavors/default/repo" -O ${repo_file_name} else wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add - - sudo apt-add-repository -y "deb https://download.ceph.com/debian-${ceph_release}/ ${distro_release} main" + sudo apt-add-repository -y "deb https://download.ceph.com/debian-${CEPH_RELEASE}/ $package_release main" fi - sudo ${package_manager} -y update + sudo apt-get -y update elif is_fedora; then + package_release="el"${package_release} if [[ "${TEST_MASTER}" == "True" ]]; then repo_file_name="/etc/yum.repos.d/ceph-master.repo" - sudo wget -c "https://shaman.ceph.com/api/repos/ceph/master/latest/centos/${distro_release}/flavors/default/repo" -O ${repo_file_name} + sudo wget -c "https://shaman.ceph.com/api/repos/ceph/master/latest/centos/${package_release}/flavors/default/repo" -O ${repo_file_name} sudo dnf config-manager --add-repo ${repo_file_name} else - dnf_add_repository_ceph ${ceph_release} ${distro_release} + dnf_add_repository_ceph ${CEPH_RELEASE} ${package_release} fi fi } @@ -1024,37 +1010,22 @@ function cleanup_repo_ceph { } # configure_repo_nfsganesha() - Configure NFS Ganesha repositories -# Usage: configure_repo_nfsganesha \ -# [] -# - package_manager: apt-get or dnf -# - ganesha_release: 2.7, 2.8, 3.0 -# - ceph_release: ceph_luminous, ceph_nautilus function configure_repo_nfsganesha { - local 
package_manager="$1" - local ganesha_release="$2" - local ceph_release="$3" - if is_ubuntu; then - # FIXME(vkmc) We need to use community ppa's because there are no builds - # for ubuntu bionic and ubuntu focal available for nfs-ganesha 2.7 and above - # Remove this when they provide the build in download.ceph.com - # FIXME(vkmc) Community ppa's don't provide builds for nfs-ganesha-3 - # microversions (3.3, 3.5). Default to latest. - if [[ $GANESHA_RELEASE =~ V3.(3|5)-stable ]]; then + # NOTE(gouthamr): Ubuntu PPAs contain the latest build from each major + # version; we can't use a build microversion unlike el8/el9 builds + if [[ $GANESHA_RELEASE =~ 3 ]]; then sudo add-apt-repository -y ppa:nfs-ganesha/libntirpc-3.0 sudo add-apt-repository -y ppa:nfs-ganesha/nfs-ganesha-3.0 - elif [ $GANESHA_RELEASE == 'V2.8-stable' ]; then - sudo add-apt-repository -y ppa:nfs-ganesha/libntirpc-1.8 - sudo add-apt-repository -y ppa:nfs-ganesha/nfs-ganesha-2.8 - elif [ $GANESHA_RELEASE == 'V2.7-stable' ]; then - sudo add-apt-repository -y ppa:nfs-ganesha/libntirpc-1.7 - sudo add-apt-repository -y ppa:nfs-ganesha/nfs-ganesha-2.7 + elif [[ $GANESHA_RELEASE =~ 4 ]]; then + sudo add-apt-repository -y ppa:nfs-ganesha/libntirpc-4 + sudo add-apt-repository -y ppa:nfs-ganesha/nfs-ganesha-4 else - die $LINENO "GANESHA_RELEASE is not supported by the Ceph plugin for Devstack" + die $LINENO "NFS-Ganesha $GANESHA_RELEASE is not supported by the Ceph plugin for Devstack" fi - sudo ${package_manager} -y update + sudo apt-get -y update elif is_fedora; then - dnf_add_repository_nfsganesha rpm-${ganesha_release} ${ceph_release} + dnf_add_repository_nfsganesha fi } @@ -1062,23 +1033,7 @@ function configure_repo_nfsganesha { # Usage: cleanup_repo_nfsganesha function cleanup_repo_nfsganesha { if is_ubuntu; then - # FIXME(vkmc) We need to use community ppa's because there are no builds - # for ubuntu bionic and ubuntu focal available for nfs-ganesha 2.7 and above - # Remove this when they provide the 
builds in download.ceph.com - # FIXME(vkmc) Community ppa's don't provide builds for nfs-ganesha-3 - # microversions (3.3, 3.5). Default to latest. - if [[ $GANESHA_RELEASE =~ V3.(3|5)-stable ]]; then - sudo rm -rf /etc/apt/sources.list.d/nfs-ganesha-ubuntu-libntirpc-3_0-*.list* - sudo rm -rf /etc/apt/sources.list.d/nfs-ganesha-ubuntu-nfs-ganesha-3_0-*.list* - elif [ $GANESHA_RELEASE == 'V2.8-stable' ]; then - sudo rm -rf /etc/apt/sources.list.d/nfs-ganesha-ubuntu-libntirpc-1_8-*.list* - sudo rm -rf /etc/apt/sources.list.d/nfs-ganesha-ubuntu-nfs-ganesha-2_8-*.list* - elif [ $GANESHA_RELEASE == 'V2.7-stable' ]; then - sudo rm -rf /etc/apt/sources.list.d/nfs-ganesha-ubuntu-libntirpc-1_7-*.list* - sudo rm -rf /etc/apt/sources.list.d/nfs-ganesha-ubuntu-nfs-ganesha-2_7-*.list* - else - die $LINENO "GANESHA_RELEASE is not supported by the Ceph plugin for Devstack" - fi + sudo rm -rf "/etc/apt/sources.list.d/nfs-ganesha-ubuntu*" elif is_fedora; then sudo rm -rf /etc/yum.repos.d/nfs-ganesha.repo fi @@ -1088,9 +1043,10 @@ function setup_packages_for_manila_on_ubuntu { CEPH_PACKAGES="${CEPH_PACKAGES} ceph-mds libcephfs2" if [ $MANILA_CEPH_DRIVER == 'cephfsnfs' ]; then - configure_repo_nfsganesha "apt-get" "$GANESHA_RELEASE" "$CEPH_RELEASE" - CEPH_PACKAGES="${CEPH_PACKAGES} libntirpc3 nfs-ganesha nfs-ganesha-ceph \ - nfs-ganesha-rados-urls nfs-ganesha-vfs" + configure_repo_nfsganesha + LIBNTIRPC_PACKAGE="libntirpc${GANESHA_RELEASE:0:1}" + CEPH_PACKAGES="${CEPH_PACKAGES} $LIBNTIRPC_PACKAGE \ + nfs-ganesha nfs-ganesha-ceph nfs-ganesha-rados-urls nfs-ganesha-vfs" fi if python3_enabled; then @@ -1100,31 +1056,18 @@ function setup_packages_for_manila_on_ubuntu { function setup_packages_for_manila_on_fedora_family { if [ $MANILA_CEPH_DRIVER == 'cephfsnfs' ]; then - # NOTE(vkmc) el7 packages work on Fedora - configure_repo_nfsganesha "dnf" "$GANESHA_RELEASE" "$CEPH_RELEASE" + configure_repo_nfsganesha CEPH_PACKAGES="${CEPH_PACKAGES} nfs-ganesha nfs-ganesha-ceph \ - 
nfs-ganesha-rados-urls nfs-ganesha-vfs" + nfs-ganesha-rados-urls nfs-ganesha-vfs" fi } function install_ceph { if is_ubuntu; then - if ! [[ $os_CODENAME =~ (jammy|focal|xenial|bionic) ]]; then - die $LINENO "Supported for Ubuntu Xenial, Bionic, Focal or Jammy. \ - Not supported for other releases." - fi - # NOTE(vkmc) Dependencies for setting up repos install_package software-properties-common - # NOTE(noonedeadpunk): There're no community repos for Ubuntu Jammy yet - # but Ceph Quincy is provided from default - # Ubuntu 22.04 repos. - if ! [[ $os_CODENAME =~ (jammy) ]]; then - configure_repo_ceph "apt-get" "$CEPH_RELEASE" "$os_CODENAME" - fi - CEPH_PACKAGES="ceph libnss3-tools" if python3_enabled; then CEPH_PACKAGES="${CEPH_PACKAGES} python3-rados python3-rbd" @@ -1150,12 +1093,14 @@ function install_ceph { install_package ${CEPH_PACKAGES} elif is_fedora; then + override_os_release="" if ! [[ $os_VENDOR =~ Fedora ]] && [[ $os_RELEASE =~ (31|32) ]]; then die $LINENO "Supported for Fedora 31 and 32. Not supported for other releases." + override_os_release="8" fi # NOTE(lyarwood) Use the py3 el8 packages on Fedora - configure_repo_ceph "dnf" "$CEPH_RELEASE" "el8" + configure_repo_ceph ${override_os_release} CEPH_PACKAGES="ceph" From 8ec10d48c5344e639179578ea092fcf9be31e044 Mon Sep 17 00:00:00 2001 From: Goutham Pacha Ravi Date: Mon, 23 Oct 2023 12:26:01 -0700 Subject: [PATCH 05/11] Update default ceph image tag to remove patch version Ceph release tags adhere to a versioning scheme x.y.z [1], where: - x = major release number (e.g.: quincy is 17, reef is 18) - y = 1 or 2, where 1 is a release candidate, and 2 is a stable release - z = patch/updates We shouldn't hardcode a patch version in the default container image we're fetching in our jobs, unless absolutely necessary for some bugfix/feature that we rely on. 
[1] https://docs.ceph.com/en/latest/releases/general/ Related-Bug: #1989273 Change-Id: Iea541d2edefc871bcac2d965997c88462fcbe521 Signed-off-by: Goutham Pacha Ravi (cherry picked from commit 7b209845d508c0876d917a9db03157abebd627af) (cherry picked from commit 190be0de97e0a77256ec13e8c796a2db9846655a) --- devstack/lib/cephadm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/devstack/lib/cephadm b/devstack/lib/cephadm index 8b5e685..d600fbf 100755 --- a/devstack/lib/cephadm +++ b/devstack/lib/cephadm @@ -29,7 +29,7 @@ DISABLE_CEPHADM_POST_DEPLOY=${DISABLE_CEPHADM_POST_DEPLOY:-False} # DEFAULT OPTIONS ATTEMPTS=30 -CONTAINER_IMAGE=${CONTAINER_IMAGE:-'quay.io/ceph/ceph:v17.2.3'} +CONTAINER_IMAGE=${CONTAINER_IMAGE:-'quay.io/ceph/ceph:v17.2'} DEVICES=() FSID=$(uuidgen) KEY_EXPORT_DIR="/etc/ceph" From 98901bd27a07478a46f95b933d443a959215ff92 Mon Sep 17 00:00:00 2001 From: ashrod98 Date: Mon, 16 Oct 2023 19:53:44 +0000 Subject: [PATCH 06/11] Remote Ceph with cephadm Add podman, ceph-common, and jq as part of the preinstall dependencies. Add REMOTE_CEPH capabilities to the CEPHADM deployment. Removed the condition that ran set_min_client only if cinder is enabled; this should be set in any case. Get FSID from ceph.conf in /etc/ceph to avoid unnecessary override. Update the paste-deploy workaround to be used always. Part of an effort to test multinode deployments with cephadm. Pinned tempest-py3-base to single-node-jammy. Added a cephadm deploy to the tempest-py3 job. 
Needed-By: I5162815b66d3f3e8cf8c1e246b61b0ea06c1a270 Change-Id: I84249ae268dfe00a112c67e5170b679acb318a25 --- .zuul.yaml | 81 ++++++------------------ devstack/files/debs/devstack-plugin-ceph | 7 +- devstack/files/rpms/devstack-plugin-ceph | 5 +- devstack/lib/cephadm | 54 +++++++--------- devstack/override-defaults | 1 + devstack/plugin.sh | 11 ++-- 6 files changed, 60 insertions(+), 99 deletions(-) diff --git a/.zuul.yaml b/.zuul.yaml index 2ecc11c..9fe196d 100644 --- a/.zuul.yaml +++ b/.zuul.yaml @@ -2,9 +2,7 @@ name: devstack-plugin-ceph-tempest-py3-base abstract: true parent: tempest-full-py3 - # TODO: Remove the nodeset pinning to focal once below bug is fixed - # https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1996628 - nodeset: openstack-single-node-focal + nodeset: openstack-single-node-jammy description: | Base integration tests that runs with the ceph devstack plugin and py3. Former names for this job where: @@ -48,19 +46,8 @@ This job enable the multiattach feature enable from stein on. vars: devstack_localrc: - ENABLE_VOLUME_MULTIATTACH: true - CEPH_RELEASE: "pacific" - -- job: - name: devstack-plugin-ceph-tempest-cephadm - parent: devstack-plugin-ceph-tempest-py3-base - description: | - Integration tests that runs with the ceph devstack plugin and py3. 
- The ceph cluster is deployed using cephadm - vars: - tempest_concurrency: 1 - devstack_localrc: - CEPHADM_DEPLOY: true + DISABLE_CEPHADM_POST_DEPLOY: True + CEPHADM_DEPLOY: True - job: name: devstack-plugin-ceph-compute-local-ephemeral @@ -96,50 +83,29 @@ Runs manila tempest plugin tests with CephFS via NFS-Ganesha as a manila back end (DHSS=False) parent: manila-tempest-plugin-cephfs-nfs - nodeset: devstack-single-node-centos-9-stream - vars: - # TODO(gouthamr): some tests are disabled due to bugs - # IPv6 Tests: https://bugs.launchpad.net/manila/+bug/1998489 - # snapshot clone fs sync: https://bugs.launchpad.net/manila/+bug/1989273 - tempest_exclude_regex: "\ - (^manila_tempest_tests.tests.scenario.*IPv6.*)|\ - (^manila_tempest_tests.tests.scenario.test_share_basic_ops.TestShareBasicOpsNFS.test_write_data_to_share_created_from_snapshot)" - devstack_localrc: - MANILA_OPTGROUP_cephfsnfs_cephfs_ganesha_server_ip: "{{ hostvars[inventory_hostname]['nodepool']['private_ipv4'] }}" - CEPH_RELEASE: "quincy" - MANILA_SETUP_IPV6: false - NEUTRON_CREATE_INITIAL_NETWORKS: true - IP_VERSION: 4 - - - -- job: - name: devstack-plugin-ceph-tempest-fedora-latest - parent: devstack-plugin-ceph-tempest-py3 - description: | - Integration tests that runs with the ceph devstack plugin on Fedora. - nodeset: devstack-single-node-fedora-latest - voting: false - job: name: devstack-plugin-ceph-multinode-tempest-py3 parent: tempest-multinode-full-py3 description: | Integration tests that runs the ceph device plugin across multiple - nodes on py3. - # TODO: Remove the nodeset pinning to focal once below bug is fixed - # https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1996628 - nodeset: openstack-two-node-focal + nodes on py3. The Ceph deployment strategy used by this job is Cephadm. 
required-projects: - openstack/cinder-tempest-plugin - openstack/devstack-plugin-ceph timeout: 10800 - voting: false vars: + configure_swap_size: 8192 + tempest_concurrency: 3 devstack_localrc: ENABLE_FILE_INJECTION: false ENABLE_VOLUME_MULTIATTACH: true - TEMPEST_RUN_VALIDATION: false + TEMPEST_RUN_VALIDATION: true + USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION: false + CEPHADM_DEPLOY: True + DISABLE_CEPHADM_POST_DEPLOY: True + MYSQL_REDUCE_MEMORY: True + REMOTE_CEPH: False CINDER_CEPH_UUID: d531d2d4-3937-429c-b0c2-658fe41e82aa devstack_plugins: devstack-plugin-ceph: https://opendev.org/openstack/devstack-plugin-ceph @@ -155,7 +121,8 @@ group-vars: subnode: devstack_localrc: - REMOTE_CEPH: true + REMOTE_CEPH: True + CEPHADM_DEPLOY: True CINDER_CEPH_UUID: d531d2d4-3937-429c-b0c2-658fe41e82aa - job: @@ -169,19 +136,6 @@ devstack_localrc: TEST_MASTER: true -- job: - name: devstack-plugin-ceph-multinode-tempest-cephadm - parent: devstack-plugin-ceph-multinode-tempest-py3 - description: | - Integration tests that runs the ceph device plugin across multiple - nodes on py3. - The ceph deployment strategy used by this job is cephadm. 
- vars: - devstack_localrc: - USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION: false - CEPHADM_DEPLOY: true - tempest_concurrency: 1 - - project-template: name: devstack-plugin-ceph-tempest-jobs description: | @@ -192,6 +146,7 @@ voting: false - devstack-plugin-ceph-tempest-cephadm: voting: false + - devstack-plugin-ceph-multinode-tempest-py3 - devstack-plugin-ceph-cephfs-native: irrelevant-files: *irrelevant-files voting: false @@ -204,9 +159,9 @@ # voting: false # - devstack-plugin-ceph-master-tempest: # voting: false - # gate: - # jobs: - # - devstack-plugin-ceph-tempest-py3 + gate: + jobs: + - devstack-plugin-ceph-tempest-py3 - project: templates: diff --git a/devstack/files/debs/devstack-plugin-ceph b/devstack/files/debs/devstack-plugin-ceph index 0dee61c..73bc9b9 100644 --- a/devstack/files/debs/devstack-plugin-ceph +++ b/devstack/files/debs/devstack-plugin-ceph @@ -1 +1,6 @@ -xfsprogs \ No newline at end of file +xfsprogs +qemu-block-extra +catatonit +podman +jq +ceph-common diff --git a/devstack/files/rpms/devstack-plugin-ceph b/devstack/files/rpms/devstack-plugin-ceph index 2d67a35..1b806da 100644 --- a/devstack/files/rpms/devstack-plugin-ceph +++ b/devstack/files/rpms/devstack-plugin-ceph @@ -1,2 +1,5 @@ xfsprogs -dbus-tools \ No newline at end of file +dbus-tools +podman +jq +ceph-common diff --git a/devstack/lib/cephadm b/devstack/lib/cephadm index 8b5e685..35e72d8 100755 --- a/devstack/lib/cephadm +++ b/devstack/lib/cephadm @@ -31,7 +31,11 @@ DISABLE_CEPHADM_POST_DEPLOY=${DISABLE_CEPHADM_POST_DEPLOY:-False} ATTEMPTS=30 CONTAINER_IMAGE=${CONTAINER_IMAGE:-'quay.io/ceph/ceph:v17.2.3'} DEVICES=() -FSID=$(uuidgen) +if [[ "$REMOTE_CEPH" = "False" ]]; then + FSID=$(uuidgen) +else + FSID=$(cat ceph.conf | awk '/fsid/ { print $3 }') +fi KEY_EXPORT_DIR="/etc/ceph" KEYS=("client.openstack") # at least the client.openstack default key should be created MIN_OSDS=1 @@ -117,33 +121,19 @@ function export_spec { echo "Ceph cluster config exported: $EXPORT" } -# Pre-install 
ceph: install podman -function _install_podman { - # FIXME(vkmc) Check required for Ubuntu 20.04 LTS (current CI node) - # Remove when our CI is pushed to the next LTS version - if ! command -v podman &> /dev/null; then - if [[ $os_CODENAME =~ (focal) ]]; then - echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /" \ - | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list - curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/Release.key" \ - | sudo apt-key add - - sudo apt-get update - sudo apt-get -y upgrade - fi - install_package podman - fi -} - # Pre-install ceph: install required dependencies function install_deps { - install_package jq ceph-common - _install_podman - install_package python3-cephfs python3-prettytable python3-rados python3-rbd python3-requests + if [[ "$REMOTE_CEPH" == "False" ]]; then + install_package python3-cephfs python3-prettytable python3-rados python3-rbd python3-requests + fi } # Pre-install ceph: get cephadm binary function get_cephadm { - curl -O https://raw.githubusercontent.com/ceph/ceph/"$CEPH_RELEASE"/src/cephadm/cephadm + # NOTE(gouthamr): cephadm binary here is a python executable, and the + # $os_PACKAGE ("rpm") or $os_release (el9) doesn't really matter. There is + # no ubuntu/debian equivalent being published by the ceph community. 
+ curl -O https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm $SUDO mv cephadm $TARGET_BIN $SUDO chmod +x $TARGET_BIN/cephadm echo "[GET CEPHADM] cephadm is now available" @@ -176,7 +166,7 @@ EOF function start_ceph { cluster=$(sudo cephadm ls | jq '.[]' | jq 'select(.name | test("^mon*")).fsid') if [ -z "$cluster" ]; then - $SUDO $CEPHADM --image "$CONTAINER_IMAGE" \ + $SUDO "$CEPHADM" --image "$CONTAINER_IMAGE" \ bootstrap \ --fsid $FSID \ --config "$BOOTSTRAP_CONFIG" \ @@ -234,7 +224,7 @@ function add_osds { # let's add some osds if [ -z "$DEVICES" ]; then echo "Using ALL available devices" - $SUDO $CEPHADM shell ceph orch apply osd --all-available-devices + $SUDO "$CEPHADM" shell ceph orch apply osd --all-available-devices else for item in "${DEVICES[@]}"; do echo "Creating osd $item on node $HOSTNAME" @@ -244,7 +234,7 @@ function add_osds { fi while [ "$ATTEMPTS" -ne 0 ]; do - num_osds=$($SUDO $CEPHADM shell --fsid $FSID --config $CEPH_CONFIG \ + num_osds=$($SUDO "$CEPHADM" shell --fsid $FSID --config $CEPH_CONFIG \ --keyring $CEPH_KEYRING -- ceph -s -f json | jq '.osdmap | .num_up_osds') if [ "$num_osds" -ge "$MIN_OSDS" ]; then break; @@ -300,6 +290,7 @@ function _create_key { --keyring $CEPH_KEYRING -- ceph auth get-or-create "$name" mgr "allow rw" mon "allow r" osd "$osd_caps" \ -o "$KEY_EXPORT_DIR/ceph.$name.keyring" + $SUDO chown ${STACK_USER}:$(id -g -n $whoami) \ ${CEPH_CONF_DIR}/ceph.$name.keyring } @@ -642,7 +633,6 @@ function configure_ceph { if is_ceph_enabled_for_service cinder; then POOLS+=($CINDER_CEPH_POOL) KEYS+=("client.$CINDER_CEPH_USER") - set_min_client_version fi if is_ceph_enabled_for_service c-bak; then @@ -659,8 +649,10 @@ function configure_ceph { [ "$ENABLE_CEPH_RGW" == "True" ] && SERVICES+=('rgw') enable_services - add_pools - create_keys + if [[ "$REMOTE_CEPH" = "False" ]]; then + add_pools + create_keys + fi client_config import_libvirt_secret_ceph @@ -677,8 +669,10 @@ function configure_ceph_manila { function 
cleanup_ceph { # Cleanup the service. - stop_ceph - delete_osd_dev + if [[ "$REMOTE_CEPH" == "False" ]]; then + stop_ceph + delete_osds + fi # purge ceph config file and keys $SUDO rm -f ${CEPH_CONF_DIR}/* if is_ceph_enabled_for_service nova; then diff --git a/devstack/override-defaults b/devstack/override-defaults index 18afcd6..aa80ef1 100644 --- a/devstack/override-defaults +++ b/devstack/override-defaults @@ -20,3 +20,4 @@ if [[ $ENABLE_CEPH_CINDER == "True" ]]; then fi CEPHADM_DEPLOY=$(trueorfalse False CEPHADM_DEPLOY) +REMOTE_CEPH=$(trueorfalse False REMOTE_CEPH) diff --git a/devstack/plugin.sh b/devstack/plugin.sh index c6063f1..b415f6e 100644 --- a/devstack/plugin.sh +++ b/devstack/plugin.sh @@ -40,10 +40,14 @@ elif [[ "$1" == "stack" && "$2" == "pre-install" ]]; then fi fi elif [[ "$1" == "stack" && "$2" == "install" ]]; then - if [[ "$CEPHADM_DEPLOY" = "True" ]]; then + if [[ "$CEPHADM_DEPLOY" = "True" && "$REMOTE_CEPH" = "False" ]]; then # Perform installation of service source echo_summary "[cephadm] Installing ceph" install_ceph + set_min_client_version + elif [[ "$CEPHADM_DEPLOY" = "True" && "$REMOTE_CEPH" = "True" ]]; then + echo "[CEPHADM] Remote Ceph: Skipping install" + get_cephadm else # FIXME(melwitt): This is a hack to get around a namespacing issue with # Paste and PasteDeploy. For stable/queens, we use the Pike UCA packages @@ -53,9 +57,8 @@ elif [[ "$1" == "stack" && "$2" == "install" ]]; then # newer version of it, while python-pastedeploy remains. The mismatch # between the install path of paste and paste.deploy causes Keystone to # fail to start, with "ImportError: cannot import name deploy." 
- if [[ "$TARGET_BRANCH" == stable/queens || "$TARGET_BRANCH" == master ]]; then - pip_install -U --force PasteDeploy - fi + pip_install -U --force PasteDeploy + install_package python-is-python3 fi elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then if [[ "$CEPHADM_DEPLOY" = "True" ]]; then From b540b164b6f7e4a6f38abd5b78a32b2e949de524 Mon Sep 17 00:00:00 2001 From: Ashley Rodriguez Date: Fri, 2 Feb 2024 10:34:10 -0500 Subject: [PATCH 07/11] Bump to Reef Bumps ceph versions to Reef to enable ingress service deployments. Affects only cephadm based jobs Change-Id: I85ad659bf1ad36cb5340a53cd57603451fc77147 (cherry picked from commit c7fb07d47944b4337fcacaecc473634e01b35170) (cherry picked from commit 8195827bd9ce142bcb3cf4c699399189a1694238) --- devstack/lib/cephadm | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/devstack/lib/cephadm b/devstack/lib/cephadm index ef606e5..be1b71d 100755 --- a/devstack/lib/cephadm +++ b/devstack/lib/cephadm @@ -18,7 +18,7 @@ XTRACE=$(set +o | grep xtrace) set +o xtrace # GENERIC CEPHADM INTERNAL OPTIONS, DO NOT EDIT -CEPH_RELEASE=${CEPH_RELEASE:-quincy} +CEPH_RELEASE=${CEPH_RELEASE:-reef} CEPH_PUB_KEY="/etc/ceph/ceph.pub" CEPH_CONFIG="/etc/ceph/ceph.conf" BOOTSTRAP_CONFIG="$HOME/bootstrap_ceph.conf" @@ -29,7 +29,7 @@ DISABLE_CEPHADM_POST_DEPLOY=${DISABLE_CEPHADM_POST_DEPLOY:-False} # DEFAULT OPTIONS ATTEMPTS=30 -CONTAINER_IMAGE=${CONTAINER_IMAGE:-'quay.io/ceph/ceph:v17.2'} +CONTAINER_IMAGE=${CONTAINER_IMAGE:-'quay.io/ceph/ceph:v18.2'} DEVICES=() if [[ "$REMOTE_CEPH" = "False" ]]; then FSID=$(uuidgen) From 2fbf9375b82360d6b1955d36c183bdc333ca1854 Mon Sep 17 00:00:00 2001 From: ashrod98 Date: Wed, 14 Feb 2024 21:13:04 +0000 Subject: [PATCH 08/11] Fix manila jobs on stable/2023.1 branch Converts the manila cephfs-native job to use cephadm Adds a cephfs-nfs multinode job Change-Id: Ib4bbe4e9ab43513d91ba8fc7ddff70ffb8ae9d8f --- .zuul.yaml | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git 
a/.zuul.yaml b/.zuul.yaml index 9fe196d..fe1351e 100644 --- a/.zuul.yaml +++ b/.zuul.yaml @@ -75,7 +75,7 @@ description: | Runs manila tempest plugin tests with Native CephFS as a manila back end (DHSS=False) - parent: manila-tempest-plugin-cephfs-native + parent: manila-tempest-plugin-cephfs-native-cephadm - job: name: devstack-plugin-ceph-cephfs-nfs @@ -84,6 +84,11 @@ back end (DHSS=False) parent: manila-tempest-plugin-cephfs-nfs +- job: + name: devstack-plugin-ceph-multinode-cephfs-nfs-cephadm + parent: manila-tempest-plugin-multinode-cephfs-nfs-cephadm + description: Test CephFS NFS (DHSS=False) in a Multinode devstack env + - job: name: devstack-plugin-ceph-multinode-tempest-py3 parent: tempest-multinode-full-py3 @@ -147,6 +152,7 @@ - devstack-plugin-ceph-tempest-cephadm: voting: false - devstack-plugin-ceph-multinode-tempest-py3 + - devstack-plugin-ceph-multinode-cephfs-nfs-cephadm - devstack-plugin-ceph-cephfs-native: irrelevant-files: *irrelevant-files voting: false From 11260fcbc95a9b13a91fef5ff259fd998651254c Mon Sep 17 00:00:00 2001 From: Goutham Pacha Ravi Date: Mon, 1 Apr 2024 17:11:05 -0700 Subject: [PATCH 09/11] Standalone nfs-ganesha with cephadm deployment Manila supports using a standalone NFS-Ganesha server as well as a ceph orchestrator deployed NFS-Ganesha cluster ("ceph nfs service"). We've only ever allowed using ceph orch deployed NFS with ceph orch deployed clusters through this devstack plugin. With this change, the plugin can optionally deploy a standalone NFS-Ganesha service with a ceph orch deployed ceph cluster. This will greatly simplify testing when we sunset the package based installation/deployment of ceph. 
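In practice, the standalone-Ganesha mode this patch introduces would be opted into from a devstack `local.conf`. A hypothetical sketch — the `CEPHADM_DEPLOY_NFS` knob is added by this very patch, and the other variable names are taken from the plugin's settings, but the combination of values is illustrative only:

```ini
[[local|localrc]]
enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph
CEPHADM_DEPLOY=True
# Deploy NFS-Ganesha as a standalone packaged service instead of a
# cephadm-managed ("clustered") ceph nfs service:
CEPHADM_DEPLOY_NFS=False
ENABLE_CEPH_MANILA=True
MANILA_CEPH_DRIVER=cephfsnfs
```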
Depends-On: I2198eee3892b2bb0eb835ec66e21b708152b33a9 Change-Id: If983bb5d5a5fc0c16c1cead84b5fa30ea961d21b Implements: bp/cephadm-deploy Signed-off-by: Goutham Pacha Ravi (cherry picked from commit ca2486efb408094683848b3f4cd1e551ea266872) (cherry picked from commit af28bdab3919eb2dc714a87ef0625717b5bfa938) (cherry picked from commit 2a7fca87fea7b01cbc1f7161f68293ba99d475ea) --- .zuul.yaml | 7 +- devstack/lib/ceph | 130 ++----------------------------------- devstack/lib/cephadm | 80 +++++++++++++++++------ devstack/lib/common | 149 +++++++++++++++++++++++++++++++++++++++++++ devstack/settings | 16 ++--- 5 files changed, 225 insertions(+), 157 deletions(-) create mode 100755 devstack/lib/common diff --git a/.zuul.yaml b/.zuul.yaml index fe1351e..acdb2dd 100644 --- a/.zuul.yaml +++ b/.zuul.yaml @@ -78,10 +78,11 @@ parent: manila-tempest-plugin-cephfs-native-cephadm - job: - name: devstack-plugin-ceph-cephfs-nfs + name: devstack-plugin-ceph-cephfs-nfs-standalone description: | Runs manila tempest plugin tests with CephFS via NFS-Ganesha as a manila - back end (DHSS=False) + back end (DHSS=False). The Ceph cluster is created with cephadm + while nfs-ganesha is installed "standalone" via a package. 
parent: manila-tempest-plugin-cephfs-nfs - job: @@ -156,7 +157,7 @@ - devstack-plugin-ceph-cephfs-native: irrelevant-files: *irrelevant-files voting: false - - devstack-plugin-ceph-cephfs-nfs: + - devstack-plugin-ceph-cephfs-nfs-standalone: irrelevant-files: *irrelevant-files voting: false # - devstack-plugin-ceph-tempest-fedora-latest diff --git a/devstack/lib/ceph b/devstack/lib/ceph index a054d9a..7e15427 100755 --- a/devstack/lib/ceph +++ b/devstack/lib/ceph @@ -21,6 +21,7 @@ # Save trace setting XTRACE=$(set +o | grep xtrace) set +o xtrace +source $CEPH_PLUGIN_DIR/lib/common # Defaults @@ -30,20 +31,6 @@ TEST_MASTER=$(trueorfalse False TEST_MASTER) CEPH_RELEASE=${CEPH_RELEASE:-pacific} -GANESHA_RELEASE=${GANESHA_RELEASE:-'unspecified'} -# Remove "v" and "-stable" prefix/suffix tags -GANESHA_RELEASE=$(echo $GANESHA_RELEASE | sed -e "s/^v//" -e "s/-stable$//") - -if [[ "$MANILA_CEPH_DRIVER" == "cephfsnfs" && "$GANESHA_RELEASE" == "unspecified" ]]; then - # default ganesha release based on ceph release - case $CEPH_RELEASE in - pacific) - GANESHA_RELEASE='3.5' ;; - *) - GANESHA_RELEASE='4.0' ;; - esac -fi - # Deploy a Ceph demo container instead of a non-containerized version CEPH_CONTAINERIZED=$(trueorfalse False CEPH_CONTAINERIZED) @@ -111,10 +98,6 @@ CEPHFS_DATA_POOL=${CEPHFS_DATA_POOL:-cephfs_data} MANILA_CEPH_DRIVER=${MANILA_CEPH_DRIVER:-cephfsnative} MANILA_CEPH_USER=${MANILA_CEPH_USER:-manila} -# Allows driver to store NFS-Ganesha exports and export counter as -# RADOS objects in CephFS's data pool. This needs NFS-Ganesha v2.5.4 or later, -# Ceph v12.2.2 or later, and OpenStack Queens or later. -MANILA_CEPH_GANESHA_RADOS_STORE=${MANILA_CEPH_GANESHA_RADOS_STORE:-True} # Set ``CEPH_REPLICAS`` to configure how many replicas are to be # configured for your Ceph cluster. 
By default we are configuring @@ -757,67 +740,13 @@ function configure_ceph_manila { if [ $MANILA_CEPH_DRIVER == 'cephfsnfs' ]; then configure_nfs_ganesha - # NFS-Ganesha server cannot run alongwith with other kernel NFS server. - sudo systemctl stop nfs-server || true - sudo systemctl disable nfs-server || true - sudo systemctl enable nfs-ganesha - sudo systemctl start nfs-ganesha || ( - echo "Ganesha didn't start. Let's debug..." >&2 - sudo systemctl status nfs-ganesha || true - echo "**Ganesha conf file**" >&2 - sudo cat /etc/ganesha/ganesha.conf || true - echo "**Ganesha log file**" >&2 - sudo cat /var/log/ganesha/ganesha.log || true - echo "**Exiting**" >&2 - exit 1 - ) - echo "Ganesha started successfully!" >&2 + start_nfs_ganesha fi # RESTART DOCKER CONTAINER } -function configure_nfs_ganesha { - # Configure NFS-Ganesha to work with Manila's CephFS driver - sudo mkdir -p /etc/ganesha/export.d - if [ $MANILA_CEPH_GANESHA_RADOS_STORE == 'True' ]; then - # Create an empty placeholder ganesha export index object - echo | sudo rados -p ${CEPHFS_DATA_POOL} put ganesha-export-index - - cat </dev/null -RADOS_URLS { - ceph_conf = ${CEPH_CONF_FILE}; - userid = admin; -} - -CACHEINODE { - Dir_Max = 1; - Dir_Chunk = 0; - - Cache_FDs = false; - - NParts = 1; - Cache_Size = 1; -} - -EXPORT_DEFAULTS { - Attr_Expiration_Time = 0; -} - -%url rados://${CEPHFS_DATA_POOL}/ganesha-export-index -EOF - else - sudo touch /etc/ganesha/export.d/INDEX.conf - echo "%include /etc/ganesha/export.d/INDEX.conf" | sudo tee /etc/ganesha/ganesha.conf - fi -} - -function cleanup_nfs_ganesha { - sudo systemctl stop nfs-ganesha - sudo systemctl disable nfs-ganesha - sudo uninstall_package nfs-ganesha nfs-ganesha-ceph libntirpc3 nfs-ganesha-rados-urls nfs-ganesha-vfs -} - function configure_ceph_embedded_manila { if [[ $CEPH_REPLICAS -ne 1 ]]; then sudo $DOCKER_EXEC ceph -c ${CEPH_CONF_FILE} osd pool set ${CEPHFS_DATA_POOL} \ @@ -958,19 +887,6 @@ EOF sudo dnf config-manager --add-repo ceph.repo } 
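The ganesha-release handling being removed from `devstack/lib/ceph` here (and re-added to `devstack/lib/common` later in this patch) is easy to exercise standalone. A sketch using the same `sed` tag normalization and ceph-to-ganesha mapping as the pre-patch code — the function names are invented for illustration:

```shell
#!/usr/bin/env bash
# Standalone sketch of the GANESHA_RELEASE logic shown in the diff.

# Strip a leading "v" and a trailing "-stable" from a release tag,
# exactly as the plugin does with sed
normalize_ganesha_release() {
    echo "$1" | sed -e "s/^v//" -e "s/-stable$//"
}

# Default ganesha major release based on the ceph release name
# (3.5 for pacific, 4.0 otherwise, per the pre-patch lib/ceph)
default_ganesha_release() {
    case "$1" in
        pacific) echo '3.5' ;;
        *)       echo '4.0' ;;
    esac
}

normalize_ganesha_release "v4.0-stable"   # prints "4.0"
default_ganesha_release "quincy"          # prints "4.0"
```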
-function dnf_add_repository_nfsganesha { - local repo="" - - case $ganesha_release in - 3.*) - repo="centos-release-nfs-ganesha30" ;; - *) - repo="centos-release-nfs-ganesha4" ;; - esac - - sudo dnf -y install ${repo} -} - # configure_repo_ceph() - Configure Ceph repositories # Usage: configure_repo_ceph # - package_release: to override the os_RELEASE variable @@ -1009,44 +925,11 @@ function cleanup_repo_ceph { fi } -# configure_repo_nfsganesha() - Configure NFS Ganesha repositories -function configure_repo_nfsganesha { - if is_ubuntu; then - # NOTE(gouthamr): Ubuntu PPAs contain the latest build from each major - # version; we can't use a build microversion unlike el8/el9 builds - if [[ $GANESHA_RELEASE =~ 3 ]]; then - sudo add-apt-repository -y ppa:nfs-ganesha/libntirpc-3.0 - sudo add-apt-repository -y ppa:nfs-ganesha/nfs-ganesha-3.0 - elif [[ $GANESHA_RELEASE =~ 4 ]]; then - sudo add-apt-repository -y ppa:nfs-ganesha/libntirpc-4 - sudo add-apt-repository -y ppa:nfs-ganesha/nfs-ganesha-4 - else - die $LINENO "NFS-Ganesha $GANESHA_RELEASE is not supported by the Ceph plugin for Devstack" - fi - sudo apt-get -y update - elif is_fedora; then - dnf_add_repository_nfsganesha - fi -} - -# cleanup_repo_nfsganesha() - Remove NFS Ganesha repositories -# Usage: cleanup_repo_nfsganesha -function cleanup_repo_nfsganesha { - if is_ubuntu; then - sudo rm -rf "/etc/apt/sources.list.d/nfs-ganesha-ubuntu*" - elif is_fedora; then - sudo rm -rf /etc/yum.repos.d/nfs-ganesha.repo - fi -} - function setup_packages_for_manila_on_ubuntu { CEPH_PACKAGES="${CEPH_PACKAGES} ceph-mds libcephfs2" if [ $MANILA_CEPH_DRIVER == 'cephfsnfs' ]; then - configure_repo_nfsganesha - LIBNTIRPC_PACKAGE="libntirpc${GANESHA_RELEASE:0:1}" - CEPH_PACKAGES="${CEPH_PACKAGES} $LIBNTIRPC_PACKAGE \ - nfs-ganesha nfs-ganesha-ceph nfs-ganesha-rados-urls nfs-ganesha-vfs" + install_nfs_ganesha fi if python3_enabled; then @@ -1056,9 +939,7 @@ function setup_packages_for_manila_on_ubuntu { function 
setup_packages_for_manila_on_fedora_family { if [ $MANILA_CEPH_DRIVER == 'cephfsnfs' ]; then - configure_repo_nfsganesha - CEPH_PACKAGES="${CEPH_PACKAGES} nfs-ganesha nfs-ganesha-ceph \ - nfs-ganesha-rados-urls nfs-ganesha-vfs" + install_nfs_ganesha fi } @@ -1175,8 +1056,7 @@ function stop_ceph { fi if is_ceph_enabled_for_service manila; then if [ $MANILA_CEPH_DRIVER == 'cephfsnfs' ]; then - sudo systemctl stop nfs-ganesha - sudo systemctl disable nfs-ganesha + stop_nfs_ganesha fi sudo systemctl stop ceph-mds@${MDS_ID} sudo systemctl disable ceph-mds@${MDS_ID} diff --git a/devstack/lib/cephadm b/devstack/lib/cephadm index be1b71d..7f5c5e3 100755 --- a/devstack/lib/cephadm +++ b/devstack/lib/cephadm @@ -65,7 +65,11 @@ RBD_CLIENT_LOG=/var/log/ceph/qemu-guest-\$pid.log # MANILA DEFAULTS MANILA_CEPH_USER=${MANILA_CEPH_USER:-manila} -# NFS OPTIONS +# NFS OPTIONS: Only apply when ENABLE_CEPH_MANILA=True +# Whether or not cephadm should deploy/manage NFS-Ganesha? If set to False, +# we'll deploy a "standalone" NFS Ganesha instead, not managed by cephadm. 
+CEPHADM_DEPLOY_NFS=${CEPHADM_DEPLOY_NFS:-True} +# Clustered NFS Options FSNAME=${FSNAME:-'cephfs'} NFS_PORT=2049 CEPHFS_CLIENT=0 @@ -123,7 +127,7 @@ function export_spec { # Pre-install ceph: install required dependencies function install_deps { - if [[ "$REMOTE_CEPH" == "False" ]]; then + if [[ "$REMOTE_CEPH" = "False" ]]; then install_package python3-cephfs python3-prettytable python3-rados python3-rbd python3-requests fi } @@ -182,16 +186,16 @@ function start_ceph { --skip-mon-network \ --mon-ip "$HOST_IP" - test -e $CEPH_CONFIG - test -e $CEPH_KEYRING + test -e $CEPH_CONFIG + test -e $CEPH_KEYRING - if [ "$CEPHADM_DEV_OSD" == 'True' ]; then - create_osd_dev - fi - # Wait cephadm backend to be operational - # and add osds via drivegroups - sleep "$SLEEP" - add_osds + if [ "$CEPHADM_DEV_OSD" == 'True' ]; then + create_osd_dev + fi + # Wait cephadm backend to be operational + # and add osds via drivegroups + sleep "$SLEEP" + add_osds fi } @@ -235,7 +239,7 @@ function add_osds { while [ "$ATTEMPTS" -ne 0 ]; do num_osds=$($SUDO "$CEPHADM" shell --fsid $FSID --config $CEPH_CONFIG \ - --keyring $CEPH_KEYRING -- ceph -s -f json | jq '.osdmap | .num_up_osds') + --keyring $CEPH_KEYRING -- ceph -s -f json | jq '.osdmap | .num_up_osds') if [ "$num_osds" -ge "$MIN_OSDS" ]; then break; fi @@ -306,22 +310,50 @@ function create_keys { # Install ceph: add MDS function cephfs_config { # Two pools are generated by this action - # - $FSNAME.FSNAME.data - # - $FSNAME.FSNAME.meta + # - cephfs.$FSNAME.data + # - cephfs.$FSNAME.meta # and the mds daemon is deployed $SUDO "$CEPHADM" shell --fsid $FSID --config $CEPH_CONFIG \ --keyring $CEPH_KEYRING -- ceph fs volume create "$FSNAME" } -# Install ceph: add NFS -function ceph_nfs_config { - # (fpantano) TODO: Build an ingress daemon on top of this +# Get Ceph version +function _get_ceph_version { + local ceph_version_str + + ceph_version_str=$(sudo podman run --rm --entrypoint ceph $CONTAINER_IMAGE \ + --version | awk '{ print $3 }') + 
+ echo $ceph_version_str +} + +function _install_and_configure_clustered_nfs { + local ceph_version + ceph_version=$(_get_ceph_version) + echo "[CEPHADM] Deploy nfs.$FSNAME backend" $SUDO "$CEPHADM" shell --fsid $FSID --config $CEPH_CONFIG \ --keyring $CEPH_KEYRING -- ceph orch apply nfs \ "$FSNAME" --placement="$HOSTNAME" --port $NFS_PORT } +function _install_and_configure_standalone_nfs { + source $CEPH_PLUGIN_DIR/lib/common + install_nfs_ganesha + configure_nfs_ganesha + start_nfs_ganesha +} + +# Install ceph: add NFS +function ceph_nfs_config { + if [[ "$CEPHADM_DEPLOY_NFS" == "True" ]]; then + _install_and_configure_clustered_nfs + else + _install_and_configure_standalone_nfs + fi + +} + function _create_swift_endpoint { local swift_service @@ -425,17 +457,17 @@ function configure_ceph_manila { function enable_services { for item in "${SERVICES[@]}"; do case "$item" in - cephfs|CEPHFS) + cephfs|CEPHFS) echo "[CEPHADM] Config cephfs volume on node $HOSTNAME" cephfs_config CEPHFS_CLIENT=1 ;; - nfs|NFS) + nfs|NFS) echo "[CEPHADM] Deploying NFS on node $HOSTNAME" ceph_nfs_config CEPHFS_CLIENT=1 ;; - rgw|RGW) + rgw|RGW) echo "[CEPHADM] Deploying RGW on node $HOSTNAME" rgw ;; @@ -676,8 +708,14 @@ function cleanup_ceph { # purge ceph config file and keys $SUDO rm -f ${CEPH_CONF_DIR}/* if is_ceph_enabled_for_service nova; then - _undefine_virsh_secret + _undefine_virsh_secret fi + if [[ "$CEPHADM_DEPLOY_NFS" != "True" ]]; then + stop_nfs_ganesha + cleanup_nfs_ganesha + cleanup_repo_nfs_ganesha + fi + } function disable_cephadm { diff --git a/devstack/lib/common b/devstack/lib/common new file mode 100755 index 0000000..1b7b665 --- /dev/null +++ b/devstack/lib/common @@ -0,0 +1,149 @@ +#!/bin/bash + +# Allows driver to store NFS-Ganesha exports and export counter as +# RADOS objects in CephFS's data pool. This needs NFS-Ganesha v2.5.4 or later, +# Ceph v12.2.2 or later, and OpenStack Queens or later. 
+MANILA_CEPH_GANESHA_RADOS_STORE=${MANILA_CEPH_GANESHA_RADOS_STORE:-True} +GANESHA_RELEASE=${GANESHA_RELEASE:-'unspecified'} +# Remove "v" and "-stable" prefix/suffix tags +GANESHA_RELEASE=$(echo $GANESHA_RELEASE | sed -e "s/^v//" -e "s/-stable$//") +if [[ "$CEPHADM_DEPLOY" = "True" ]]; then + FSNAME=${FSNAME:-'cephfs'} + CEPHFS_DATA_POOL="cephfs.$FSNAME.data" +else + CEPHFS_DATA_POOL=${CEPHFS_DATA_POOL:-cephfs_data} +fi + +if [[ "$MANILA_CEPH_DRIVER" == "cephfsnfs" && "$GANESHA_RELEASE" == "unspecified" ]]; then + # default ganesha release based on ceph release + case $CEPH_RELEASE in + pacific) + GANESHA_RELEASE='3.5' + ;; + *) + GANESHA_RELEASE='5.0' + ;; + esac +fi + +# configure_repo_nfsganesha - Configure NFS Ganesha repositories +function configure_repo_nfsganesha { + if is_ubuntu; then + # NOTE(gouthamr): Ubuntu PPAs contain the latest build from each major + # version; we can't use a build microversion unlike el8/el9 builds + case $GANESHA_RELEASE in + 3.*) + sudo add-apt-repository -y ppa:nfs-ganesha/libntirpc-3.0 + sudo add-apt-repository -y ppa:nfs-ganesha/nfs-ganesha-3.0 + ;; + *) + GANESHA_PPA_VERSION="${GANESHA_RELEASE:0:1}" + sudo add-apt-repository -y ppa:nfs-ganesha/libntirpc-"$GANESHA_PPA_VERSION" + sudo add-apt-repository -y ppa:nfs-ganesha/nfs-ganesha-"$GANESHA_PPA_VERSION" + ;; + esac + sudo apt-get -y update + elif is_fedora; then + local repo="" + case $GANESHA_RELEASE in + 3.*) + repo="centos-release-nfs-ganesha30" + ;; + *) + repo="centos-release-nfs-ganesha5" + ;; + esac + sudo dnf -y install ${repo} + fi +} + +function install_nfs_ganesha { + configure_repo_nfsganesha + NFS_GANESHA_PACKAGES="nfs-ganesha nfs-ganesha-ceph \ + nfs-ganesha-rados-urls nfs-ganesha-vfs" + if is_ubuntu; then + LIBNTIRPC_PACKAGE="libntirpc${GANESHA_RELEASE:0:1}" + NFS_GANESHA_PACKAGES="${LIBNTIRPC_PACKAGE} ${NFS_GANESHA_PACKAGES}" + fi + install_package $NFS_GANESHA_PACKAGES +} + +function configure_nfs_ganesha { + # Configure NFS-Ganesha to work with Manila's 
CephFS driver + rados_cmd="sudo rados -p ${CEPHFS_DATA_POOL}" + if [[ "$CEPHADM_DEPLOY" = "True" ]]; then + CEPHADM=${TARGET_BIN}/cephadm + rados_cmd="sudo $CEPHADM shell rados -p ${CEPHFS_DATA_POOL}" + fi + + + sudo mkdir -p /etc/ganesha/export.d + if [ $MANILA_CEPH_GANESHA_RADOS_STORE == 'True' ]; then + # Create an empty placeholder ganesha export index object + echo | $rados_cmd put ganesha-export-index - + cat <<EOF | sudo tee /etc/ganesha/ganesha.conf &> /dev/null +RADOS_URLS { + ceph_conf = ${CEPH_CONF_FILE}; + userid = admin; +} + +CACHEINODE { + Dir_Max = 1; + Dir_Chunk = 0; + + Cache_FDs = false; + + NParts = 1; + Cache_Size = 1; +} + +EXPORT_DEFAULTS { + Attr_Expiration_Time = 0; +} + +%url rados://${CEPHFS_DATA_POOL}/ganesha-export-index +EOF + else + sudo touch /etc/ganesha/export.d/INDEX.conf + echo "%include /etc/ganesha/export.d/INDEX.conf" | sudo tee /etc/ganesha/ganesha.conf + fi +} + +function start_nfs_ganesha { + # NFS-Ganesha cannot run alongside any other kernel NFS server. + sudo systemctl stop nfs-server || true + sudo systemctl disable nfs-server || true + sudo systemctl enable nfs-ganesha + sudo systemctl start nfs-ganesha || ( + echo "Ganesha didn't start. Let's debug..." >&2 + sudo systemctl status nfs-ganesha || true + echo "**Ganesha conf file**" >&2 + sudo cat /etc/ganesha/ganesha.conf || true + echo "**Ganesha log file**" >&2 + sudo cat /var/log/ganesha/ganesha.log || true + echo "**Exiting**" >&2 + exit 1 + ) + echo "Standalone NFS-Ganesha started successfully!"
>&2 +} + +function stop_nfs_ganesha { + sudo systemctl stop nfs-ganesha + sudo systemctl disable nfs-ganesha +} + +function cleanup_nfs_ganesha { + sudo systemctl stop nfs-ganesha + sudo systemctl disable nfs-ganesha + sudo uninstall_package nfs-ganesha nfs-ganesha-ceph libntirpc3 nfs-ganesha-rados-urls nfs-ganesha-vfs +} + +# cleanup_repo_nfsganesha() - Remove NFS Ganesha repositories +# Usage: cleanup_repo_nfsganesha +function cleanup_repo_nfsganesha { + if is_ubuntu; then + sudo rm -rf "/etc/apt/sources.list.d/nfs-ganesha-ubuntu*" + elif is_fedora; then + sudo rm -rf /etc/yum.repos.d/nfs-ganesha.repo + fi +} diff --git a/devstack/settings b/devstack/settings index 01f74a7..ef9e4db 100644 --- a/devstack/settings +++ b/devstack/settings @@ -62,16 +62,16 @@ if (is_ceph_enabled_for_service manila); then MANILA_OPTGROUP_cephfsnfs1_cephfs_conf_path=${CEPH_CONF_FILE} MANILA_OPTGROUP_cephfsnfs1_cephfs_auth_id=${MANILA_CEPH_USER} MANILA_OPTGROUP_cephfsnfs1_cephfs_protocol_helper_type=NFS - MANILA_OPTGROUP_cephfsnfs1_cephfs_ganesha_server_ip=$HOST_IP - MANILA_CEPH_GANESHA_RADOS_STORE=$(trueorfalse False MANILA_CEPH_GANESHA_RADOS_STORE) - if [ "$MANILA_CEPH_GANESHA_RADOS_STORE" = "True" ]; then - MANILA_OPTGROUP_cephfsnfs1_ganesha_rados_store_enable=${MANILA_CEPH_GANESHA_RADOS_STORE} - MANILA_OPTGROUP_cephfsnfs1_ganesha_rados_store_pool_name=${CEPHFS_DATA_POOL} - fi - - if [ "$CEPHADM_DEPLOY" = "True" ]; then + if [[ $CEPHADM_DEPLOY_NFS == "True" ]]; then MANILA_OPTGROUP_cephfsnfs1_cephfs_nfs_cluster_id=${FSNAME} + else + MANILA_OPTGROUP_cephfsnfs1_cephfs_ganesha_server_ip=$HOST_IP + MANILA_CEPH_GANESHA_RADOS_STORE=$(trueorfalse False MANILA_CEPH_GANESHA_RADOS_STORE) + if [ "$MANILA_CEPH_GANESHA_RADOS_STORE" = "True" ]; then + MANILA_OPTGROUP_cephfsnfs1_ganesha_rados_store_enable=${MANILA_CEPH_GANESHA_RADOS_STORE} + MANILA_OPTGROUP_cephfsnfs1_ganesha_rados_store_pool_name=${CEPHFS_DATA_POOL} + fi fi fi fi From 68a18e1e1b5794ed7d86463d1189c727db4ec057 Mon Sep 17 00:00:00 
2001 From: OpenStack Release Bot Date: Tue, 12 Nov 2024 13:49:00 +0000 Subject: [PATCH 10/11] Update .gitreview for unmaintained/2023.1 Change-Id: Icb01b3adce79253b4d8ee1df142b9c86dfd14323 --- .gitreview | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.gitreview b/.gitreview index 1b8e2a9..6bc1d6e 100644 --- a/.gitreview +++ b/.gitreview @@ -2,4 +2,4 @@ host=review.opendev.org port=29418 project=openstack/devstack-plugin-ceph.git -defaultbranch=stable/2023.1 +defaultbranch=unmaintained/2023.1 From 8ad7b93f80212b7e39e2998b72c5a2f31dbb02b1 Mon Sep 17 00:00:00 2001 From: Elod Illes Date: Sat, 28 Dec 2024 15:57:28 +0100 Subject: [PATCH 11/11] [CI] Remove undefined tempest job Zuul drops an error [1] on unmaintained/2023.1. This patch removes the non-existing job from the check queue (which is non-voting anyway). [1] 'Job devstack-plugin-ceph-tempest-cephadm not defined' Change-Id: I587f00cbc9ba8d3eac90d9d0a867b10588aa98e4 --- .zuul.yaml | 2 -- 1 file changed, 2 deletions(-) diff --git a/.zuul.yaml b/.zuul.yaml index acdb2dd..85068ca 100644 --- a/.zuul.yaml +++ b/.zuul.yaml @@ -150,8 +150,6 @@ jobs: - devstack-plugin-ceph-tempest-py3: voting: false - - devstack-plugin-ceph-tempest-cephadm: - voting: false - devstack-plugin-ceph-multinode-tempest-py3 - devstack-plugin-ceph-multinode-cephfs-nfs-cephadm - devstack-plugin-ceph-cephfs-native:
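As a closing aside: the `_get_ceph_version` helper added in patch 09 extracts the version number by taking the third whitespace-separated field of `ceph --version` output. The parsing can be demonstrated without a ceph container by feeding `awk` a canned version string — the sample output below is illustrative, not captured from a real deployment:

```shell
#!/usr/bin/env bash
# Simulated `ceph --version` output; real output has the same field layout:
#   ceph version <x.y.z> (<sha>) <release> (stable)
version_output="ceph version 18.2.1 (e9fe820e7fffd1b7cde143a9f77653b73fcec748) reef (stable)"

# Same extraction as _get_ceph_version: field 3 is the numeric version
ceph_version=$(echo "$version_output" | awk '{ print $3 }')
echo "$ceph_version"    # prints "18.2.1"
```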