Welcome to this workshop on testing your Pull Request on Kubernetes. The workshop is designed to be self-driven, but if you happen to be in an instructor-driven session and you need help, please ask the instructor.
- 1. Prepare for the workshop
- 2. The application to test
- 3. Create the host cluster
- 4. Authorize the pipeline to use GKE
- 5. Start to configure the pipeline
- 6. Set up the Python environment
- 7. Build and push the image to the GitHub Container Registry
- 8. Prepare the virtual cluster
- 9. Deployment and testing
- 10. Clean up the virtual cluster
Overview
Testing Pull Requests is an engineering practice that ensures the quality of the codebase. You run the tests and prove that there’s no regression, and that the new code is fit for purpose. The testing code is created by the developers; in this workshop, you’re going to focus on the pipeline automation that runs the tests on the target environment, i.e., Kubernetes.
The goal is to create a GitHub workflow that:
- Builds a container from a Dockerfile
- Pushes the container to the GitHub Container Registry
- Creates a virtual cluster in an existing Google Kubernetes Engine cluster
- Deploys the container to the virtual cluster
- Runs tests on the deployed container
While the name of this workshop is "Testing your Pull Request", you can still make good use of it if you follow trunk-based development principles. It just happens that a majority of teams do branch-based development at the time of this writing.
1. Prepare for the workshop
While the principles behind this workshop are universal, we have to settle on specific tools to put them into practice. As mentioned above, we will use GitHub and Google Cloud. Thus, in order to follow the instructions, you'll need:
- A GitHub account
- A Google Cloud account
2. The application to test
The application to test is available on GitHub at https://github.com/loftlabs-experiments/workshop-test-pr-k8s.
The application manages product entities via 3 endpoints:
- GET /products: returns all products
- GET /products/{id}: returns a single product
- POST /products: creates a new product
The application uses a PostgreSQL database to store the products.
The database is postgres and the schema public.
A newly-initialized application should have a couple of existing products.
For information, products are stored in the product table, but this plays no role in the setup.
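To make this description concrete, here's a rough, in-memory sketch of the kind of API described above. It's only illustrative: the actual application in the repository persists products in PostgreSQL and differs in its details.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Product(BaseModel):
    id: int
    name: str


# In-memory stand-in for the PostgreSQL-backed product table
products: dict[int, Product] = {
    1: Product(id=1, name="Sample product"),
    2: Product(id=2, name="Another product"),
}


@app.get("/products")
def get_products() -> list[Product]:
    # Returns all products
    return list(products.values())


@app.get("/products/{product_id}")
def get_product(product_id: int) -> Product:
    # Returns a single product, or 404 if it doesn't exist
    if product_id not in products:
        raise HTTPException(status_code=404, detail="Product not found")
    return products[product_id]


@app.post("/products")
def create_product(product: Product) -> Product:
    # Creates a new product
    products[product.id] = product
    return product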
The single end-to-end test:
- Gets all products and counts them
- Creates a new product
- Gets all products and counts them again
- Checks that the count has increased by one
The application is built on Python with the FastAPI framework.
We configure the URL to test with the BASE_URL environment variable. For example, to test the application deployed on http://localhost:8000:
export BASE_URL=http://localhost:8000
The test uses PyTest; the dependency manager is Poetry. Thus, to launch the test against an already-deployed application, we run the following command:
poetry run pytest tests/e2e.py
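For reference, here's a minimal sketch of what such an end-to-end test could look like, assuming httpx as the HTTP client and a name field in the product payload; the actual tests/e2e.py in the repository may differ.
import os

import httpx

# BASE_URL points to the deployed application, e.g., http://localhost:8000
BASE_URL = os.environ["BASE_URL"]


def test_creating_a_product_increases_the_count():
    # Get all products and count them
    before = len(httpx.get(f"{BASE_URL}/products").json())
    # Create a new product (the payload fields are assumptions for this sketch)
    httpx.post(f"{BASE_URL}/products", json={"id": before + 1, "name": "workshop"}).raise_for_status()
    # Get all products again and check the count has increased by one
    after = len(httpx.get(f"{BASE_URL}/products").json())
    assert after == before + 1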
3. Create the host cluster
In this section, we are going to create the host GKE cluster.
We have a couple of options here:
- Create the cluster inside the pipeline: Creating a full-fledged GKE cluster, such as the one below, takes between 5 and 7 minutes. In the context of a CI/CD pipeline, it's a long time: we want to give the developers feedback on their change as fast as possible.
- Create the cluster in advance: It addresses the speed issue, but it has a huge downside of its own. If the team creates multiple PRs in parallel, one pipeline will spawn for each, and it's hard to guarantee complete isolation. Worse, some Kubernetes objects are cluster-wide, meaning you can't test the upgrade of such objects.
- Create the cluster in advance and create a virtual cluster in the pipeline: vCluster by Loft Labs is an Open Source project that allows creating a virtual cluster inside an existing host cluster. In contrast to the above, you can spin up a regular virtual cluster in less than a minute, and these clusters are fully isolated. vCluster gives your pipelines a significant speed boost, while allowing you to isolate the tests.
With this in mind, we chose the last option. Let's now create the host GKE cluster.
- Go to your Google Cloud console
- Create a new Google Cloud project:
export PROJECT_ID=vcluster-pipeline (1)(2)
gcloud projects create $PROJECT_ID
1 | Projects must have a globally unique ID, i.e., across all Google Cloud projects, and not only inside of your organization. |
2 | Set an environment variable so you can use any project ID you'd like and still copy-paste the following commands. |
Create in progress for [https://cloudresourcemanager.googleapis.com/v1/projects/vcluster-pipeline].
Waiting for [operations/create_project.global.4675904464572254295] to finish...done.
Enabling service [cloudapis.googleapis.com] on project [vcluster-pipeline]...
Operation "operations/acat.p2-49535911505-ad65c93b-c4e1-4c54-aa77-42df5c74170b" finished successfully.
- Switch to the newly created project:
gcloud config set project $PROJECT_ID
Updated property [core/project]
The prompt should display your project ID in brackets.
- Find an open billing account ID.
In order to continue, we need to associate the project with a billing account. The workshop assumes you already have at least one; if not, please create one.
gcloud billing accounts list
ACCOUNT_ID: REDACT-EDXXXX-XXXXXX
NAME: My Billing Account
OPEN: True
MASTER_ACCOUNT_ID:
- Associate the project with the billing account of your choice:
gcloud billing projects link $PROJECT_ID --billing-account=REDACT-EDXXXX-XXXXXX
billingAccountName: billingAccounts/REDACT-EDXXXX-XXXXXX
billingEnabled: true
name: projects/vcluster-pipeline/billingInfo
projectId: vcluster-pipeline
- Enable the Kubernetes Engine API:
gcloud services enable container.googleapis.com
Operation "operations/acf.p2-49535911505-3d2edbe6-1a0a-4e5b-8225-51a0f4950852" finished successfully.
- Create a GKE cluster:
time gcloud container clusters create "minimal-cluster" \
  --project "$PROJECT_ID" \
  --zone "europe-west9" --num-nodes "1" \
  --node-locations "europe-west9-a" --machine-type "e2-standard-4" \
  --network "projects/$PROJECT_ID/global/networks/default" \
  --subnetwork "projects/$PROJECT_ID/regions/europe-west9/subnetworks/default" \
  --cluster-ipv4-cidr "/17" --release-channel "regular" \
  --enable-ip-alias --no-enable-basic-auth --no-enable-google-cloud-access
The above creates a single node GKE cluster in Europe. Feel free to change it to your preferred zone.
Here’s an example output:
Creating cluster minimal-cluster in europe-west9...
Cluster is being health-checked (Kubernetes Control Plane is healthy)...done.
Created [https://container.googleapis.com/v1/projects/vcluster-pipeline/zones/europe-west9/clusters/minimal-cluster].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/europe-west9/minimal-cluster?project=vcluster-pipeline
kubeconfig entry generated for minimal-cluster.
NAME: minimal-cluster
LOCATION: europe-west9
MASTER_VERSION: 1.31.5-gke.1169000
MASTER_IP: 34.163.155.150
MACHINE_TYPE: e2-standard-4
NODE_VERSION: 1.31.5-gke.1169000
NUM_NODES: 1
STATUS: RUNNING

real 5m49.216s
user 0m1.423s
sys 0m0.175s
Note the time: it's within the range announced above.
4. Authorize the pipeline to use GKE
In this section, we are going to give the pipeline a way to use the newly-created cluster.
We must interact with the GKE instance to at least install our app.
Since we are inside a GitHub workflow, we will need to authenticate.
Fortunately, the google-github-actions/auth GitHub Action can help us:
This GitHub Action authenticates to Google Cloud. It supports authentication via a Google Cloud Service Account Key JSON and authentication via Workload Identity Federation.
We will use the latter, as it strikes the right balance between security and usability. Configuring the GKE instance with a Service Account and a Workload Identity Pool requires creating a couple of different objects.
- Enable the Credentials API:
gcloud services enable iam.googleapis.com iamcredentials.googleapis.com
Operation "operations/acat.p2-49535911505-796bcf79-a23b-48eb-8b1a-43209e57fe63" finished successfully.
- Create the Service Account (SA):
gcloud iam service-accounts create github-actions --display-name "GitHub Actions Service Account"
Created service account [github-actions].
- Give the SA the editor role. It's broad, but the host cluster is not critical, i.e., it's not production.
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member "serviceAccount:github-actions@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/editor"
Updated IAM policy for project [vcluster-pipeline].
bindings:
- members:
  - serviceAccount:service-49535911505@compute-system.iam.gserviceaccount.com
  role: roles/compute.serviceAgent
- members:
  - serviceAccount:service-49535911505@gcp-sa-gkenode.iam.gserviceaccount.com
  role: roles/container.defaultNodeServiceAgent
- members:
  - serviceAccount:service-49535911505@container-engine-robot.iam.gserviceaccount.com
  role: roles/container.serviceAgent
- members:
  - serviceAccount:service-49535911505@containerregistry.iam.gserviceaccount.com
  role: roles/containerregistry.ServiceAgent
- members:
  - serviceAccount:49535911505-compute@developer.gserviceaccount.com
  - serviceAccount:49535911505@cloudservices.gserviceaccount.com
  - serviceAccount:github-actions@vcluster-pipeline.iam.gserviceaccount.com
  role: roles/editor
- members:
  - serviceAccount:service-49535911505@gcp-sa-networkconnectivity.iam.gserviceaccount.com
  role: roles/networkconnectivity.serviceAgent
- members:
  - user:nicolas@frankel.ch
  role: roles/owner
- members:
  - serviceAccount:service-49535911505@gcp-sa-pubsub.iam.gserviceaccount.com
  role: roles/pubsub.serviceAgent
etag: BwYwsXB2qC4=
version: 1
- Give the SA the container.admin role, so it can create objects in the cluster:
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member "serviceAccount:github-actions@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/container.admin"
- Create a Workload Identity Pool (WIP) for the GitHub Action to use:
gcloud iam workload-identity-pools create "github-actions" \
  --project="$PROJECT_ID" \
  --display-name="GitHub Actions Pool" \
  --location="global"
Created workload identity pool [github-actions].
- Create the associated Workload Identity Pool Provider (WIPP):
gcloud iam workload-identity-pools providers create-oidc "github-provider" \
  --project="$PROJECT_ID" \
  --workload-identity-pool="github-actions" \
  --display-name="GitHub Provider" \
  --attribute-mapping="google.subject=assertion.sub,attribute.actor=assertion.actor,attribute.repository_owner=assertion.repository_owner,attribute.repository=assertion.repository" \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-condition="assertion.repository_owner == 'LoftLabs-Experiments'" \ (1)
  --location="global"
1 | Specify the GitHub repository owner, i.e., the account or the organization that hosts the repository; replace the value with your account/organization handle. The check is case-sensitive. In the context of this workshop, it's more than enough; for other conditions, please check the documentation. Note that Google Cloud now requires this option. |
Created workload identity pool provider [github-provider].
- Get your project number from your project ID and store it in an environment variable:
export PROJECT_NUMBER=`gcloud projects describe $PROJECT_ID --format="value(projectNumber)"`
- Finally, bind the previously created SA to the WIP:
gcloud iam service-accounts add-iam-policy-binding \
  "github-actions@$PROJECT_ID.iam.gserviceaccount.com" \
  --project="$PROJECT_ID" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/github-actions/*" (1)
1 | The value of the PROJECT_NUMBER environment variable was computed in the previous step |
Updated IAM policy for serviceAccount [github-actions@vcluster-pipeline.iam.gserviceaccount.com].
bindings:
- members:
  - principalSet://iam.googleapis.com/projects/49535911505/locations/global/workloadIdentityPools/github-actions/*
  role: roles/iam.workloadIdentityUser
etag: BwYwsdWtwns=
version: 1
We can now manage the vcluster-pipeline Cloud project from a GitHub workflow started by nfrankel by impersonating the github-actions SA.
5. Start to configure the pipeline
Creating the pipeline is the meat of this workshop.
Start by forking the repo at https://github.com/loftlabs-experiments/workshop-test-pr-k8s.
Open the .github/workflows/e2e.yml file. It's the one we are going to work on.
The beginning is already created for you:
name: Run the E2E test
on:
# For demo purposes only
workflow_dispatch: (1)
pull_request:
branches: [ "master" ] (2)
1 | Manually trigger the workflow. In a real-world scenario, you’d probably remove this when the workflow is stable. In the context of this workshop, we will trigger the workflow manually while updating it. |
2 | "Regular" trigger. Define the branches that trigger a workflow run when you submit a PR to it. |
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
id-token: write
env:
REGISTRY: ghcr.io
DOCKER_BUILD_RECORD_RETENTION_DAYS: 1 (1)
steps:
- name: Checkout repository (2)
uses: actions/checkout@v4
1 | Avoid accumulating too many images in the registry |
2 | Name says it all |
6. Set up the Python environment
Python is necessary to run the tests; thus, we need to set up the whole Python environment. This requires three steps:
- Install the Python runtime in the desired version
- Install Poetry, the dependency manager used by the app
- Install the dependencies themselves, including the ones for testing
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.13'
- name: Install Poetry
uses: snok/install-poetry@v1
with:
version: 2.1.1
virtualenvs-create: false
- name: Install dependencies
run: poetry install --with dev --no-interaction
7. Build and push the image to the GitHub Container Registry
This section aims to build and push the image to the GitHub Container Registry so we can get it later from GKE.
GitHub workflows are very flexible, so there are many ways to achieve the same result. In the context of this workshop, I opt to use existing GitHub Actions when I can. We will rely on three of them:
- docker/login-action: Logs in to the GitHub Container Registry to be able to push the image in subsequent steps.
- docker/metadata-action: Sets metadata, including the image tag.
- docker/build-push-action: Builds and pushes the image to the registry.
7.1. Login to the GitHub Container Registry
Your first task is to add a step to log in to the GitHub Container Registry.
See the first hint
Use the docker/login-action
See the second hint
The action requires three parameters:
- registry: the URL of the registry to log in to, i.e., the GitHub Container Registry. It's an environment variable defined above.
- username: the GitHub username. It's an implicit variable.
- password: a GitHub token for the username. It's an implicit secret.
See the solution
- name: Log into registry ${{ env.REGISTRY }} (1)(2)
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }} (1)
username: ${{ github.actor }} (3)
password: ${{ secrets.GITHUB_TOKEN }} (4)
1 | Use the environment variable declared at the beginning of the workflow |
2 | Not strictly necessary, but useful for debugging purpose |
3 | User who triggered the workflow |
4 | Secret automatically created by the GitHub workflow |
7.2. Generate the container image tag
In regular release cycles, you would tag the image with a version number, following some conventions; semantic versioning is a popular one. In the context of building an image for testing, we don’t need the benefits of semantic versioning. We need a unique tag for each run instead: we will reuse it later to deploy the image to the Kubernetes cluster.
Your task is to get a unique ID and tag the future image. The first hint is for free: use the docker/metadata-action GitHub Action.
See the second hint
The action requires two parameters:
- images: the image name, fully qualified by the registry.
- tags: the tag to set on the image. Check the docker/metadata-action documentation (https://github.com/marketplace/actions/docker-metadata-action#tags-input) to find options that work.
See the solution
- name: Compute Docker tag
id: meta (1)
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ github.repository }} (2)
tags: |
type=raw,value=${{github.run_id}} (3)
1 | An id is mandatory to reuse the output in later steps |
2 | I used the repository name as the image name, but anything would work. If you set a hardcoded name, I'd advise creating an environment variable for the job, so you can use it in later steps and change it in a single place. |
3 | There are a couple of options here; we use the github.run_id because it's unique for each workflow run, which is exactly what we need |
7.3. Build and push the image
The last task is to build the image and push it to the GitHub Container Registry.
It's pretty straightforward with the docker/build-push-action GitHub Action.
See the solution
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: . (1)
tags: ${{ steps.meta.outputs.tags }} (2)(3)
labels: ${{ steps.meta.outputs.labels }} (2)(4)
push: true
1 | The context is the directory where the Dockerfile is located |
2 | steps.meta references the step above that we annotated with id: meta |
3 | Use the tag generated in the previous step |
4 | Use the default labels generated in the previous step |
If you run the workflow at this point, it should result in a new package in your project:

Notice that the image tag is the same as the workflow run's ID. Every run creates a new image. We configured the job to keep the artifacts for only one day; it's a good default. If you need an even shorter retention range, you can create a cleanup job.
8. Prepare the virtual cluster
In this section, we are going to create the virtual cluster inside the host GKE cluster and prepare it for later deployment.
8.1. Authenticate on GKE
We are going to authenticate on the GKE cluster, with the help of two more GitHub Actions:
- google-github-actions/auth: Makes use of the Workload Identity Pool we created earlier to authenticate on Google Cloud.
- get-gke-credentials: Configures authentication to a GKE cluster using a kubeconfig file.
The task is to add a step to authenticate on Google Cloud.
See the first hint
Use the Workload Identity Federation through a Service Account approach of the google-github-actions/auth GitHub Action.
See the second hint
The action requires two parameters:
- workload_identity_provider: Check the Workload Identity Provider created earlier. The pattern is projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$WORKLOAD_ID_POOL_NAME/providers/$WORKLOAD_ID_POOL_PROVIDER_NAME
- service_account: Check the Service Account created earlier. It's an email address whose domain ends with iam.gserviceaccount.com.
See the solution
- name: Authenticate on Google Cloud
uses: google-github-actions/auth@v2
with:
workload_identity_provider: projects/49535911505/locations/global/workloadIdentityPools/github-actions/providers/github-provider
service_account: github-actions@vcluster-pipeline.iam.gserviceaccount.com
The next task is to use the Google Cloud authentication to authenticate on the GKE cluster created above. Use the other GitHub Action referenced in this section.
See the hint
The action requires two parameters: cluster_name and location (cloud zone).
You can find them above when we created the GKE cluster.
See the solution
- name: Set GKE credentials
uses: google-github-actions/get-gke-credentials@v2
with:
cluster_name: minimal-cluster
location: europe-west9
If you want, you can add a temporary step to check if everything works at this point.
- name: Check it works
run: kubectl get all --all-namespaces
Run the workflow to check you can access the GKE cluster and remove the step afterwards.
8.2. Install vCluster and create a virtual cluster
The task here is twofold: install the vCluster command-line tool and use it to create a virtual cluster. As mentioned above, virtual clusters spin up much faster than regular clusters and are fully isolated from one another.
vCluster offers a couple of installation alternatives, including curl. While that would work, there's a dedicated GitHub Action.
Add a step to install vCluster via the dedicated GitHub Action.
See the hint
Check the loft-sh/setup-vcluster GitHub Action.
See the solution
- name: Install vCluster
uses: loft-sh/setup-vcluster@main
with:
kubectl-install: false
The next step is to create a virtual cluster. Each virtual cluster needs a unique name to avoid conflicts with virtual clusters created by other parallel PRs. Add a step that creates such a uniquely-named virtual cluster.
See the hint
You can reuse the same unique ID we used for the image tag, i.e., the github.run_id.
See the solution
- name: Create a virtual cluster
id: vcluster (1)
run: time vcluster create vcluster-pipeline-${{github.run_id}}
1 | Not necessary until the very end of the workshop. Be patient! |
The time command is there to check how long it takes to create a virtual cluster compared to how long it took to create the host GKE cluster. It's definitely not necessary, but it helps you grasp the speed benefit of using virtual clusters.
At this stage, any kubectl command still executes on the host cluster.
Add a step to connect to the virtual cluster.
See the solution
- name: Connect to the virtual cluster
run: vcluster connect vcluster-pipeline-${{github.run_id}}
Any kubectl command now executes on the virtual cluster.
8.3. Configure the virtual cluster
To deploy our app on the virtual cluster, we will need to download the container from the cluster side. The GitHub Container Registry requires authentication to download artifacts, via a login/password pair. Your task is to create a secret on the virtual cluster, which we will use later in the pipeline.
See the first hint
Use the kubectl create secret docker-registry command. Check its syntax.
See the second hint
The command requires a name parameter and several options. Use any relevant name you want, e.g., github-docker-registry. We will reference it in the manifest so Kubernetes can pull the image with this Secret.
Options are:
- --docker-server: the URL of the registry
- --docker-email: whatever you want
- --docker-username: the GitHub username
- --docker-password: the GitHub token
See the third hint
To avoid errors if the secret already exists, pipe the output of the command to the kubectl apply command.
It shouldn't happen because each workflow run creates a new virtual cluster, but it's the usual approach and doesn't cost anything.
See the solution
- name: Create Docker Registry Secret
run: |
kubectl create secret docker-registry github-docker-registry \ (1)
--docker-server=${{ env.REGISTRY }} \ (2)
--docker-email="noreply@github.com" \ (3)
--docker-username="${{ github.actor }}" \ (4)
--docker-password="${{ secrets.GITHUB_TOKEN }}" \ (5)
--dry-run=client -o yaml | kubectl apply -f - (6)
1 | Name to use later |
2 | URL of the GitHub Container Registry |
3 | Whatever you want |
4 | GitHub username |
5 | GitHub token. Remember the token is provided by GitHub by default. |
6 | Make the command idempotent: it doesn't fail if the Secret already exists |
The next task is to provide the PostgreSQL database URL to the application.
We will deploy PostgreSQL using the Bitnami Helm Chart.
Look at the kube/values.yaml file in the repository. It already contains everything we need to deploy, including a user/password pair.
To keep things simple, we will create a regular ConfigMap, rather than a Secret, to hold the database URL:
- The database is temporary in nature, because we will delete the virtual cluster at the end of the workflow
- None of the data is sensitive
Your task is to create a postgres-config ConfigMap, which contains the database URL as an environment variable:
- The name is DATABASE_URL
- The value follows the pattern postgresql+asyncpg://$USER:$PASSWORD@$HOST:$PORT/$USER; in PostgreSQL, the default database has the same name as the user
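As an aside, here's a minimal sketch of how a Python application typically consumes such a URL; this snippet is an assumption for illustration, not the repository's actual code.
import os

from sqlalchemy.ext.asyncio import create_async_engine

# The asyncpg driver is implied by the postgresql+asyncpg:// scheme in the URL
engine = create_async_engine(os.environ["DATABASE_URL"])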
See the $USER hint
The values.yaml file references the user under auth.user.
See the $PASSWORD hint
The values.yaml file references the password under auth.password.
See the $HOST hint
The values.yaml file sets the fullnameOverride attribute, which the Chart uses as the name of the Service it creates. The Service name becomes the hostname. The application and the database will run in the same namespace, so you can use this value to access the Service from any application Pod.
See the $PORT hint
Every workflow run creates its own virtual cluster. Thus, there is a single PostgreSQL database per cluster. You can use PostgreSQL's default port.
See the solution
- name: Set config map from values.yaml
run: |
kubectl create configmap postgres-config \
  --from-literal="DATABASE_URL=postgresql+asyncpg://$(yq .auth.user kube/values.yaml):$(yq .auth.password kube/values.yaml)@$(yq .fullnameOverride kube/values.yaml):5432/$(yq .auth.user kube/values.yaml)"
We are leveraging the yq command-line tool to extract values from the YAML file to follow the DRY principle.
9. Deployment and testing
In this section, we will deploy the database and the application in the virtual cluster, and finally run the test in the target environment.
9.1. Deploy the PostgreSQL database
Many approaches are available to deploy a PostgreSQL database on Kubernetes.
Then, for each of these approaches, you have virtually countless configuration options.
In the context of this workshop, we will use the Bitnami Helm Chart with the simplest setup possible.
It’s not what you should do in production, but it’s more than enough for testing purposes.
We already used the values.yaml file to create the database URL. Now it's time to deploy the database. Add the step to the workflow:
- name: Install PostgreSQL
run: helm install postgresql oci://registry-1.docker.io/bitnamicharts/postgresql --values kube/values.yaml
9.2. Deploy the application
We need a Deployment and a Service to deploy the application. The Deployment is pretty standard, except for three items:
- Use the previously created Secret to pull the image from the GitHub Container Registry
- Use the previously created ConfigMap to set the database URL
- Last but not least, manage the fact that the container tag is different for each run
To focus on the goal of this workshop, here’s a starting manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
name: vcluster-pipeline
labels:
type: app
app: vcluster-pipeline
spec:
replicas: 1
selector:
matchLabels:
app: vcluster-pipeline
template:
metadata:
labels:
type: app
app: vcluster-pipeline
spec:
initContainers:
- name: wait-for-postgres
image: atkrad/wait4x:3.1
command: (1)
- wait4x
- postgresql
- postgres://$(DATABASE_URL)?sslmode=disable
containers:
- name: vcluster-pipeline
image: ghcr.io/loftlabs-experiments/workshop-test-pr-k8s:latest
1 | Wait until PostgreSQL is initialized to connect. It’s not strictly necessary: we could let the app crash and restart, but it avoids unnecessary restarts. |
Your first task is to allow Kubernetes to download from the GitHub Container Registry, using the Secret created earlier.
See the first hint
Read the documentation.
See the second hint
Use the imagePullSecrets field; it's an array of Secret names, but we only need one.
See the solution
spec:
containers:
- name: vcluster-pipeline
image: ghcr.io/loftlabs-experiments/workshop-test-pr-k8s:latest
imagePullSecrets:
- name: github-docker-registry (1)
1 | The name of the Secret created earlier |
Your next task is to set the database URL as an environment variable.
See the hint
Use the ConfigMap created earlier.
See the solution
spec:
initContainers:
- name: wait-for-postgres
image: atkrad/wait4x:3.1
command:
- wait4x
- postgresql
- postgres://$(DATABASE_URL)?sslmode=disable
envFrom:
- configMapRef:
name: postgres-config (1)
containers:
- name: vcluster-pipeline
image: ghcr.io/loftlabs-experiments/workshop-test-pr-k8s:latest
envFrom:
- configMapRef:
name: postgres-config (1)
imagePullSecrets:
- name: github-docker-registry
1 | The name of the ConfigMap created earlier containing the DATABASE_URL environment variable |
Your next task is to set the correct image tag.
You may have noticed the manifest uses the latest tag, but we use a different tag for each run. We will use sed to change the tag in place; in such a simple case, it's enough. In more complex scenarios, have a look at Kustomize.
See the first hint
- Use the -i flag to change the file in place.
- Use the | delimiter to avoid conflicts with the slashes in the image references.
See the second hint
Use the ${{github.run_id}} variable.
See the solution
- name: Update the manifest with the correct image tag
run: |
(cd kube && sed -i "s|image: ghcr.io/loftlabs-experiments/workshop-test-pr-k8s:latest|image: ghcr.io/loftlabs-experiments/workshop-test-pr-k8s:${{ github.run_id }}|" vcluster-pipeline.yaml)
It's time to add a LoadBalancer Service to the manifest. GKE creates a public IP address for the service, and we will use it to test the application from the GitHub workflow.
apiVersion: v1
kind: Service
metadata:
name: vcluster-pipeline
spec:
type: LoadBalancer
ports:
- port: 8000
targetPort: 8000
selector:
app: vcluster-pipeline
Your last task is to apply the manifest.
See the solution
- name: Apply updated manifest to the virtual cluster
run: kubectl apply -f kube/vcluster-pipeline.yaml
9.3. Get the public IP
GKE provides any Service of type LoadBalancer with a public IP address. Unfortunately, it takes some time. Add the following step to wait until the LoadBalancer has a public IP before continuing the workflow:
- name: Retrieve public IP inside the virtual cluster
run: |
for i in {1..10}; do (1)
EXTERNAL_IP=$(kubectl get service vcluster-pipeline -o jsonpath='{.status.loadBalancer.ingress[0].ip}') (2)
if [ -n "$EXTERNAL_IP" ]; then
break (1)
fi
echo "Waiting for external IP..."
sleep 10 (2)
done
if [ -z "$EXTERNAL_IP" ]; then
echo "Error: External IP not assigned to the service" >&2 (3)
exit 1
fi
BASE_URL="http://${EXTERNAL_IP}:8000"
echo "BASE_URL=$BASE_URL" >> $GITHUB_ENV (4)
echo "External IP is $BASE_URL" (5)
1 | Loop ten times, but break when the Service has the IP |
2 | Wait for 10 seconds between each iteration |
3 | Fail the GitHub job if we didn’t manage to get the IP after the last iteration |
4 | Store the URL as an environment variable; it's a practical way to pass information between steps. Every step after this one now has access to the BASE_URL environment variable. |
5 | Print the URL in the job log, for information |
9.4. Wait for the application to be ready
The GitHub workflow has instructed the virtual cluster to deploy the application pod. Yet, it takes some time, depending on your context. There's a chance that if we start the tests right away, the application won't be ready and the tests will fail, much to our chagrin.
To take no chances, we should wait until the application is ready.
- name: Wait until the application has started
uses: addnab/docker-run-action@v3
with:
image: atkrad/wait4x:latest (1)
run: wait4x http ${{ env.BASE_URL }}/health --expect-status-code 200 (2)(3)
1 | wait4x is a great generic command, available as a container, that waits until a remote service is ready |
2 | The application provides a /health endpoint |
3 | Wait until the endpoint returns a 200 HTTP status code |
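For reference, the /health endpoint being polled could be as simple as the following FastAPI sketch; the actual implementation in the repository may differ.
from fastapi import FastAPI

app = FastAPI()


@app.get("/health")
def health() -> dict:
    # Returning any 200 response is enough for the wait4x check above
    return {"status": "ok"}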
9.5. Run the end-to-end test
It's time to run the end-to-end test. Your task is to add a step to do it. All the necessary information is available above.
See the solution
- name: Run the test
run: poetry run pytest tests/e2e.py
Run the workflow and enjoy the magic!
10. Clean up the virtual cluster
At this point, we could call it a day and stop right there. For this workshop, it's indeed enough. But in the real world, you'd soon have a lot of virtual clusters and wouldn't have enough resources to spin up more. We definitely need a cleanup phase.
It's straightforward to delete a virtual cluster with the vcluster delete command.
However, if we add the step as is, it will be skipped whenever a previous step fails: by default, a GitHub workflow step only runs if all the previous steps succeeded. That's precisely the case where we'd leave a virtual cluster behind.
Your task is to add a condition to the cleanup step to run:
- Even if previous steps have failed
- But only if the virtual cluster has been created successfully; we can't delete a cluster that hasn't been created yet!
Remember that we set the vcluster id on the step that created the virtual cluster.
See the hint
Read the documentation related to steps conditions.
See the solution
- name: Delete the virtual cluster
if: always() && steps.vcluster.conclusion == 'success'
run: vcluster delete vcluster-pipeline-${{github.run_id}}