Cloud Blog: Empower your teams with self-service Kubernetes using GKE fleets and Argo CD

Source URL: https://cloud.google.com/blog/products/containers-kubernetes/empower-your-teams-with-self-service-kubernetes-using-gke-fleets-and-argo-cd/
Source: Cloud Blog
Title: Empower your teams with self-service Kubernetes using GKE fleets and Argo CD

Feedly Summary: Managing applications across multiple Kubernetes clusters is complex, especially when those clusters span different environments or even cloud providers. One powerful and secure solution combines Google Kubernetes Engine (GKE) fleets and Argo CD, a declarative, GitOps continuous delivery tool for Kubernetes. The solution is further enhanced with Connect Gateway and Workload Identity.
This blog post guides you through setting up a robust, team-centric multi-cluster infrastructure with these offerings. We use a sample GKE fleet with application clusters for your workloads and a control cluster to host Argo CD. To streamline authentication and enhance security, we leverage Connect Gateway and Workload Identity, enabling Argo CD to securely manage clusters without the need to maintain cumbersome Kubernetes service accounts.
On top of this, we incorporate GKE Enterprise Teams to manage access and resources, helping to ensure that each team has the right permissions and namespaces within this secure framework.

Finally, we introduce the fleet-argocd-plugin, a custom Argo CD generator designed to simplify cluster management within this sophisticated setup. This plugin automatically imports your GKE Fleet cluster list into Argo CD and maintains synchronized cluster information, making it easier for platform admins to manage resources and for application teams to focus on deployments.
Follow along as we:

Create a GKE fleet with application and control clusters

Deploy Argo CD on the control cluster, configured to use Connect Gateway and Workload Identity

Configure GKE Enterprise Teams for granular access control

Install and leverage the fleet-argocd-plugin to manage your secure, multi-cluster fleet with team awareness

By the end, you’ll have a powerful and automated multi-cluster system using GKE Fleets, Argo CD, Connect Gateway, Workload Identity, and Teams, ready to support your organization’s diverse needs and security requirements. Let’s dive in!
Set up multi-cluster infrastructure with GKE fleet and Argo CD
Setting up a sample GKE fleet is a straightforward process:

1. Enable the required APIs in the desired Google Cloud project. We use this project as the fleet host project.
a. The gcloud CLI must be installed, and you must be authenticated via gcloud auth login.

code_block
export FLEET_PROJECT_ID="<your-project-id>"   # replace with your fleet host project ID
# The project number can be looked up from the project ID:
export FLEET_PROJECT_NUMBER=$(gcloud projects describe $FLEET_PROJECT_ID --format='value(projectNumber)')

gcloud config set project $FLEET_PROJECT_ID
gcloud services enable container.googleapis.com \
    gkehub.googleapis.com \
    cloudresourcemanager.googleapis.com \
    iam.googleapis.com \
    connectgateway.googleapis.com \
    anthos.googleapis.com \
    cloudbuild.googleapis.com

2. Create application clusters and register them under your fleet host project.

code_block
gcloud container clusters create app-cluster-1 --enable-fleet --region=us-central1
gcloud container clusters create app-cluster-2 --enable-fleet --region=us-central1

3. Set up teams on your fleet. Let’s say you have one frontend team with a webserver namespace. 
a. With fleet teams and fleet namespaces, you can control which team has access to specific namespaces on specific clusters.

code_block
# Create a frontend team.
gcloud container fleet scopes create frontend

# Add your application clusters to the frontend team.
gcloud container fleet memberships bindings create app-cluster-1-b \
    --membership app-cluster-1 \
    --scope frontend \
    --location us-central1

gcloud container fleet memberships bindings create app-cluster-2-b \
    --membership app-cluster-2 \
    --scope frontend \
    --location us-central1

# Create a fleet namespace for webserver.
gcloud container fleet scopes namespaces create webserver --scope=frontend

# [Optional] Verify your fleet team setup.
# Check member clusters in your fleet.
gcloud container fleet memberships list
# Verify member clusters have been added to the right team (`scope`).
gcloud container fleet memberships bindings list --membership=app-cluster-1 --location=us-central1
gcloud container fleet memberships bindings list --membership=app-cluster-2 --location=us-central1

4. Now, set up Argo CD and deploy it to the control cluster. Create a new GKE cluster to serve as your control cluster, and enable Workload Identity on it.

code_block
gcloud container clusters create control-cluster --region=us-central1 \
    --workload-pool=$FLEET_PROJECT_ID.svc.id.goog
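
If you want to double-check that Workload Identity is enabled, you can inspect the cluster’s workload pool (a generic gcloud check, not part of the original walkthrough):

code_block
gcloud container clusters describe control-cluster --region=us-central1 \
    --format='value(workloadIdentityConfig.workloadPool)'
# Expected output: <your-project-id>.svc.id.goog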

5. Install the Argo CD CLI to interact with the Argo CD API server. Version 2.8.0 or higher is required. Detailed installation instructions can be found in the CLI installation documentation.
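
For example, on Linux you can install the CLI by downloading a release binary; the following is a minimal sketch based on the Argo CD installation docs (adjust the asset name for your OS and architecture):

code_block
# Download the latest stable Argo CD CLI and install it to /usr/local/bin.
VERSION=$(curl -L -s https://raw.githubusercontent.com/argoproj/argo-cd/stable/VERSION)
curl -sSL -o argocd "https://github.com/argoproj/argo-cd/releases/download/v${VERSION}/argocd-linux-amd64"
sudo install -m 555 argocd /usr/local/bin/argocd
argocd version --client   # verify the installed version is >= 2.8.0
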
6. Deploy Argo CD on the control cluster.

code_block
gcloud container clusters get-credentials control-cluster --region=us-central1
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
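
Before continuing, it’s worth waiting until the core Argo CD components are up; the deployment names below come from the stable install manifest:

code_block
kubectl -n argocd rollout status deployment/argocd-server --timeout=300s
kubectl -n argocd rollout status deployment/argocd-repo-server --timeout=300s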

Customize the Argo CD generator
Now you’ve got your GKE fleet up and running, and you’ve installed Argo CD on the control cluster. In Argo CD, application clusters are registered with the control cluster by storing their credentials (like API server address and authentication details) as Kubernetes Secrets within the Argo CD namespace. We’ve got a way to make this whole process a lot easier!
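
For reference, a manually registered cluster secret has roughly the shape below. This is a minimal sketch following Argo CD’s declarative cluster-registration format; the server address and config values are placeholders, and with the plugin you never create these by hand:

code_block
# Illustrative only -- the fleet-argocd-plugin generates these secrets for you.
cat <<'EOF' > example-cluster-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-cluster-1
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: app-cluster-1
  server: https://<api-server-or-connect-gateway-endpoint>   # placeholder
  config: |
    { "<auth>": "authentication details go here" }
EOF
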
The fleet-argocd-plugin is a customized Argo CD plugin generator that takes the hassle out of cluster management by: 

Automatically importing your GKE fleet cluster list into Argo CD and setting up the cluster secret objects for each application cluster 

Keeping an eye on your fleet’s status on Google Cloud, making sure your Argo CD cluster list is always in sync and up-to-date

Now, let’s see how to build and configure the Argo CD generator. 
7. Install fleet-argocd-plugin on your control cluster. 
a. In this demo, we use Cloud Build to build and deploy the fleet-argocd-plugin.

code_block
git clone https://github.com/GoogleCloudPlatform/gke-fleet-management.git
cd gke-fleet-management/fleet-argocd-plugin

gcloud artifacts repositories create argocd-fleet-sync \
    --repository-format=docker \
    --location=us-central1 \
    --description="Docker repository for argocd fleet sync"
gcloud builds submit --region=us-central1 --config=cloudbuild.yaml
export PATH_TO_IMAGE=us-central1-docker.pkg.dev/$FLEET_PROJECT_ID/argocd-fleet-sync/plugin:v1.0

8. To make sure the fleet-argocd-plugin works as it should, give it the right permissions for fleet management. 
a. Create an IAM service account in your fleet host project and grant it the appropriate permissions. The setup follows the official GKE Workload Identity Federation onboarding guide.

code_block
gcloud iam service-accounts create argocd-fleet-admin \
    --project=$FLEET_PROJECT_ID

gcloud projects add-iam-policy-binding $FLEET_PROJECT_ID \
    --member "serviceAccount:argocd-fleet-admin@$FLEET_PROJECT_ID.iam.gserviceaccount.com" \
    --role "roles/container.developer"

gcloud projects add-iam-policy-binding $FLEET_PROJECT_ID \
    --member "serviceAccount:argocd-fleet-admin@$FLEET_PROJECT_ID.iam.gserviceaccount.com" \
    --role "roles/gkehub.gatewayEditor"

gcloud projects add-iam-policy-binding $FLEET_PROJECT_ID \
    --member "serviceAccount:argocd-fleet-admin@$FLEET_PROJECT_ID.iam.gserviceaccount.com" \
    --role "roles/gkehub.viewer"

# Allow the Argo CD application controller and fleet-argocd-plugin to impersonate this IAM service account.
gcloud iam service-accounts add-iam-policy-binding argocd-fleet-admin@$FLEET_PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:$FLEET_PROJECT_ID.svc.id.goog[argocd/argocd-application-controller]"
gcloud iam service-accounts add-iam-policy-binding argocd-fleet-admin@$FLEET_PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:$FLEET_PROJECT_ID.svc.id.goog[argocd/argocd-fleet-sync]"

# Annotate the Kubernetes ServiceAccount so that GKE sees the link between the service accounts.
kubectl annotate serviceaccount argocd-application-controller \
    --namespace argocd \
    iam.gke.io/gcp-service-account=argocd-fleet-admin@$FLEET_PROJECT_ID.iam.gserviceaccount.com
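
To sanity-check the project-level bindings, you can list the roles granted to the new service account (a generic gcloud query, not from the original post):

code_block
gcloud projects get-iam-policy $FLEET_PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:argocd-fleet-admin@$FLEET_PROJECT_ID.iam.gserviceaccount.com" \
    --format="value(bindings.role)"
# Expected: roles/container.developer, roles/gkehub.gatewayEditor, roles/gkehub.viewer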

b. You also need to allow the Compute Engine default service account to access images from your Artifact Registry repository.

code_block
gcloud projects add-iam-policy-binding projects/$FLEET_PROJECT_ID \
    --role="roles/artifactregistry.reader" \
    --member="serviceAccount:$FLEET_PROJECT_NUMBER-compute@developer.gserviceaccount.com"

9. Run the fleet plugin on your Argo CD control cluster!

code_block
export FLEET_SA="argocd-fleet-admin@$FLEET_PROJECT_ID.iam.gserviceaccount.com"

envsubst '$FLEET_SA $FLEET_PROJECT_NUMBER $PATH_TO_IMAGE' < fleet-sync-install.yaml | kubectl apply -f - -n argocd
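
You can confirm the installation landed; for example, the manifest should have created the argocd-fleet-sync ServiceAccount referenced by the Workload Identity binding in step 8 (the exact workload names depend on fleet-sync-install.yaml):

code_block
kubectl get serviceaccount argocd-fleet-sync -n argocd
kubectl get pods -n argocd   # the fleet-sync workload should be running alongside Argo CD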

Demo time
Let’s do a quick check to make sure the GKE fleet and Argo CD are playing nicely together. You should see that the secrets for your application clusters have been automatically generated.

code_block
kubectl get secret -n argocd

# Example output:
# NAME                                     TYPE     DATA   AGE
# app-cluster-1.us-central1.141594892609   Opaque   3      64m
# app-cluster-2.us-central1.141594892609   Opaque   3      64m
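
To look a little closer, you can filter on the label Argo CD uses for cluster secrets and print each cluster’s registered endpoint. This assumes the plugin applies the standard argocd.argoproj.io/secret-type: cluster label, which is Argo CD’s convention for cluster registration:

code_block
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.data.server}{"\n"}{end}'
# The server field is base64-encoded; decode a single entry with:
# kubectl get secret <secret-name> -n argocd -o jsonpath='{.data.server}' | base64 -d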

Demo 1: Automatic fleet management in Argo CD
Okay, let’s see how this works! We’ll use the guestbook example app. First, we deploy it to the clusters that the frontend team uses. You should then see the guestbook app running on your application clusters, and you won’t have to deal with any cluster secrets manually!

code_block
<ListValue: [StructValue([(‘code’, "export TEAM_ID=frontend\r\nenvsubst ‘$FLEET_PROJECT_NUMBER $TEAM_ID’ < applicationset-demo.yaml | kubectl apply -f – -n argocd\r\n\r\nkubectl config set-context –current –namespace=argocd\r\nargocd app list -o name \r\n# Example Output:\r\n# argocd/app-cluster-1.us-central1.141594892609-webserver\r\n# argocd/app-cluster-2.us-central1.141594892609-webserver"), (‘language’, ”), (‘caption’, <wagtail.rich_text.RichText object at 0x3e27d3263c10>)])]>

Demo 2: Evolving your fleet is easy with fleet-argocd-plugin
Suppose you decide to add another cluster to the frontend team. Create a new GKE cluster and assign it to the frontend team. Then, check to see if your guestbook app has been deployed on the new cluster.

code_block
gcloud container clusters create app-cluster-3 --enable-fleet --region=us-central1
gcloud container fleet memberships bindings create app-cluster-3-b \
    --membership app-cluster-3 \
    --scope frontend \
    --location us-central1

argocd app list -o name
# Example output: a new app shows up!
# argocd/app-cluster-1.us-central1.141594892609-webserver
# argocd/app-cluster-2.us-central1.141594892609-webserver
# argocd/app-cluster-3.us-central1.141594892609-webserver

Closing thoughts
In this blog post, we’ve shown you how to combine the power of GKE fleets, Argo CD, Connect Gateway, Workload Identity, and GKE Enterprise Teams to create a robust and automated multi-cluster platform. By leveraging these tools, you can streamline your Kubernetes operations, enhance security, and empower your teams to efficiently manage and deploy applications across your fleet.
However, this is just the beginning! There’s much more to explore in the world of multi-cluster Kubernetes. Here are some next steps to further enhance your setup:

You can find the source code for the fleet-argocd-plugin on GitHub.

Deep dive into GKE Enterprise Teams: Explore the advanced features of GKE Enterprise Teams to fine-tune access control, resource allocation, and namespace management for your teams. Learn more in the official documentation.

Secure your clusters with Connect Gateway: Delve deeper into Connect Gateway and Workload Identity to understand how they simplify and secure authentication to your clusters, eliminating the need for VPNs or complex network configurations. Check out this blog post for a detailed guide.

Master advanced deployment strategies: Explore advanced deployment strategies with Argo CD, such as blue/green deployments, canary releases, and automated rollouts, to achieve zero-downtime deployments and minimize risk during updates. This blog post provides a great starting point.

As you continue your journey with multi-cluster Kubernetes, remember that GKE fleets and Argo CD provide a solid foundation for building a scalable, secure, and efficient platform. Embrace the power of automation, GitOps principles, and team-based management to unlock the full potential of your Kubernetes infrastructure.

AI Summary and Description: Yes

Summary: The text discusses a comprehensive approach to managing applications across multiple Kubernetes clusters using Google Kubernetes Engine (GKE) fleets and Argo CD. It outlines a secure and efficient multi-cluster infrastructure design, emphasizing team-centric access control and automated cluster management. Key features discussed include the Connect Gateway, Workload Identity, and the fleet-argocd-plugin.

Detailed Description:

- **Complex Kubernetes Management**: The text addresses the challenges of managing applications across multiple Kubernetes clusters, particularly when these clusters span different environments or cloud providers.

- **GKE Fleets and Argo CD**: It introduces a powerful and secure solution that combines GKE fleets with Argo CD for continuous delivery. This allows for streamlined deployments and easier management of applications.

- **Security Enhancements**:
  - **Connect Gateway and Workload Identity**: These tools facilitate secure management of identities and authentication, reducing the need for complex service account setups within Kubernetes.

- **Access Control**: The incorporation of GKE Enterprise Teams enables granular access control, ensuring that teams only have the necessary permissions within the secure framework.

- **Automated Cluster Management**: The addition of the fleet-argocd-plugin simplifies cluster management by:
  - Automatically importing the GKE fleet cluster list into Argo CD.
  - Maintaining synchronized cluster information, allowing platform admins to focus on resource management.

- **Step-by-Step Guidance**: The text provides a clear guide to setting up a GKE fleet with application clusters and a control cluster hosting Argo CD, along with detailed instructions on configuring teams and deploying applications.

- **Further Exploration**:
  - Emphasis is placed on continuing to explore advanced features and strategies within the Kubernetes ecosystem, suggesting deeper dives into GKE’s advanced capabilities and automated deployment strategies.

Overall, the proposal encourages the adoption of these tools to create a robust, automated multi-cluster platform that enhances security and streamlines operations. This insight is particularly relevant for security and compliance professionals in the cloud computing and infrastructure sectors, highlighting the importance of integrating strong security practices within Kubernetes management.