Configure Continuous Deployment Using Kustomize Components and Spinnaker Operator in Amazon EKS

May 22, 2023 By Mark Otto

Spinnaker is a cloud native continuous delivery platform that provides fast, safe, and repeatable deployments for every enterprise.

In the precursor to this blog post, we learned how to manage Spinnaker using the Apache-licensed open source Spinnaker Operator and how to deploy an application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster using a Spinnaker continuous delivery pipeline. We configured the different components through the Spinnaker YAML, as well as the kubeconfig using Spinnaker Tools.

In this blog, we will streamline the Spinnaker service configuration using Kustomize components, the Spinnaker Operator, and Amazon EKS Blueprints. We also presented this topic at the 2022 Spinnaker Summit.

Kustomize is an open source tool for customizing Kubernetes configurations: it can generate resources from other sources and compose and customize collections of resources. We will use Kustomize components, a kind of Kustomization that allows users to define reusable pieces of configuration, together with Kustomize patches for the Spinnaker resources. In this blog, the kustomization.yaml file pulls the base components from this repository and applies the patches from local files.
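To make this concrete, here is a minimal sketch of what such a kustomization.yaml might look like. The component path and patch file names are hypothetical and will differ from the actual repository layout:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: spinnaker

# Reusable base components pulled from a remote repository (hypothetical path)
components:
  - github.com/example-org/spinnaker-kustomize-base//components/persistence

# Local patch files that customize the SpinnakerService resource
patches:
  - path: s3-bucket.yml
  - path: ecr-registry.yml
```

Each entry under components must point at a directory containing a kustomization.yaml with kind: Component, which is what makes these pieces reusable across overlays.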

We will use two personas when talking about continuous deployment using Spinnaker: platform team and development team.

Platform team

In the diagram below, the platform team will set up the infrastructure for Spinnaker using the following steps:

  • Use Amazon EKS Blueprints to create the Amazon EKS cluster.
  • Install the Spinnaker Operator, a Kubernetes operator for managing Spinnaker that is built by Armory.
  • Set up the Amazon Elastic Container Registry (Amazon ECR) repository and the Amazon Simple Storage Service (Amazon S3) bucket in your Amazon Web Services (AWS) account. We will create these as part of the walkthrough.
  • Use Kustomize components to deploy the Spinnaker service on Amazon EKS. We will also use the Kustomize patch configuration to integrate with different AWS services. All the patch information to configure the components below will live in the kustomization.yaml file:
    • Amazon S3 to persist the Spinnaker metadata and pipeline data
    • Application Load Balancer to expose the Spinnaker UI
    • Amazon ECR for the Docker registry

Process diagram for platform teams

Development team

In this diagram, we document how Spinnaker is used as a CI/CD tool to help deploy the application using GitOps.

  • The DevOps team (who may or may not be part of the development team, depending on organizational structure) will be responsible for creating the Spinnaker pipeline. In our case, we have imported the pre-created pipeline.json, which you will see in the walkthrough section.
  • The developer will commit code changes, which trigger the build and upload the artifact to Amazon ECR.
  • The Spinnaker pipeline will detect the new artifact tag and start the deployment to the test environment using that environment's Kustomize configuration.
  • Once approved, the pipeline will complete the deployment to the production environment using the production Kustomize configuration.
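The test and production overlays referenced in the steps above can be sketched as follows. The directory names and base path here are hypothetical, assuming a standard Kustomize base/overlay layout:

```yaml
# overlays/test/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: test-environment
resources:
  - ../../base            # shared manifests for the sample app
replicas:
  - name: test-hello-world
    count: 1              # the production overlay would set count: 3
```

The production overlay would be identical apart from the namespace and the replica count, which matches the one-pod test / three-pod production behavior seen later in the walkthrough.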

how developers use Spinnaker as a CICD tool



Prerequisites

You will need the AWS Command Line Interface (AWS CLI), eksctl, kubectl, Terraform, jq, and yq. At the time of writing this blog, the latest version of yq had issues passing environment variables, so make sure to use version 4.25.1.

Step 4 – Create an Amazon ECR repository

Follow the steps from the Amazon EKS Blueprints for Terraform Git repository to create an Amazon EKS cluster. For this example, we have named the Amazon EKS cluster eksworkshop-eksctl and set the version to 1.24. Refer to Amazon EKS Blueprints for Terraform for more information.

Step 2 – Install Spinnaker CRDs

Pick a release from GitHub and export that version. We are using 1.3.0, the latest Spinnaker Operator that was available at the time of writing this blog. You can see the latest Spinnaker operator update on the Spinnaker blog.

The operator pattern allows us to extend the Kubernetes API to manage applications and their components through constructs such as the control loop. The Spinnaker Operator streamlines the following tasks:

  • Validate Spinnaker configurations to reduce the incidence of incorrect feature configuration
  • Create and monitor all Spinnaker microservices
  • Orchestrate upgrades between versions of Spinnaker
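The operator consumes a SpinnakerService custom resource that declares the desired state of the whole installation. A minimal sketch looks like this; the version and bucket values are illustrative, not the ones used later in this walkthrough:

```yaml
apiVersion: spinnaker.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
  namespace: spinnaker
spec:
  spinnakerConfig:
    config:
      version: 1.26.6                     # illustrative Spinnaker version
      persistentStorage:
        persistentStoreType: s3
        s3:
          bucket: my-spinnaker-bucket     # illustrative bucket name
          rootFolder: front50
```

The Kustomize patches applied in Step 8 all target paths under spec.spinnakerConfig of this resource.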

To install the Spinnaker CRDs, run these commands:

export VERSION=1.3.0
echo $VERSION
cd ~/environment
mkdir -p spinnaker-operator && cd spinnaker-operator
bash -c "curl -L https://github.com/armory/spinnaker-operator/releases/download/v${VERSION}/manifests.tgz | tar -xz"
kubectl apply -f deploy/crds/

When successful, you should get output confirming that the spinnakerservices.spinnaker.io and spinnakeraccounts.spinnaker.io custom resource definitions were created.

Step 3 – Install Spinnaker Operator

Next, we need to install the Spinnaker Operator in the namespace spinnaker-operator. We have used cluster mode for the operator, which works across namespaces and requires a cluster role to perform validation. Run these commands:

kubectl create ns spinnaker-operator
kubectl -n spinnaker-operator apply -f deploy/operator/cluster

Make sure the Spinnaker Operator pod is running. This may take a couple of minutes. To confirm, run this command:

kubectl get pod -n spinnaker-operator

When successful, you should get the following output:

spinnaker-operator-6d95f9b567-tcq4w 2/2 Running 0 82s

Step 4 – Create an Amazon ECR repository

Now, we need to create an Amazon ECR repository. Make sure you have your AWS Region and account ID ready. Run these commands:

export AWS_REGION=<your region>
export AWS_ACCOUNT_ID=<your aws account id>
echo "export AWS_REGION=${AWS_REGION}" >> ~/.bash_profile
echo "export AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID}" >> ~/.bash_profile
export ECR_REPOSITORY=spinnaker-summit-22
echo "export ECR_REPOSITORY=${ECR_REPOSITORY}" >> ~/.bash_profile
aws --region ${AWS_REGION} ecr get-login-password | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
aws ecr create-repository --repository-name ${ECR_REPOSITORY} --region ${AWS_REGION} >/dev/null

Next, push the sample NGINX image into your Amazon ECR repository:

docker pull nginx:latest
docker tag nginx:latest ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${ECR_REPOSITORY}:v1.1.0
docker push ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${ECR_REPOSITORY}:v1.1.0

When successful, you should get output like this:

latest: Pulling from library/nginx
Status: Downloaded newer image for nginx:latest
The push refers to repository [<your account>.dkr.ecr.<your region>.amazonaws.com/spinnaker-summit-22]
d6a3537fc36a: Pushed
v1.1.0: digest: sha256:bab399017a659799204147065aab53838ca6f5aeed88cf7d329bc4fda1d2bac7 size: 1570

Step 5 – Create an Amazon S3 bucket

Using these commands, create and configure an Amazon S3 bucket:

export S3_BUCKET=spinnaker-workshop-$(cat /dev/urandom | LC_ALL=C tr -dc "[:alpha:]" | tr '[:upper:]' '[:lower:]' | head -c 10)
aws s3 mb s3://$S3_BUCKET --region ${AWS_REGION}
aws s3api put-public-access-block \
--bucket $S3_BUCKET \
--public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
echo ${S3_BUCKET}
echo "export S3_BUCKET=${S3_BUCKET}" >> ~/.bash_profile 
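The bucket name above gets a random 10-character lowercase suffix to keep it globally unique (S3 bucket names share a global namespace). The suffix generation can be tried on its own:

```shell
# Generate the random lowercase suffix used in the bucket name above
SUFFIX=$(cat /dev/urandom | LC_ALL=C tr -dc "[:alpha:]" | tr '[:upper:]' '[:lower:]' | head -c 10)
S3_BUCKET="spinnaker-workshop-${SUFFIX}"
echo "${S3_BUCKET}"
```

LC_ALL=C is important here: without it, tr can choke on the raw bytes from /dev/urandom that are invalid in a UTF-8 locale.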

Step 6 – Create a service account

Run these commands to create a service account on your Amazon EKS cluster:

eksctl utils associate-iam-oidc-provider --cluster eksworkshop-eksctl --approve
kubectl create ns spinnaker
eksctl create iamserviceaccount \
 --name s3-access-sa \
 --namespace spinnaker \
 --cluster eksworkshop-eksctl \
 --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
 --approve

Step 7 – Create a secret

Make sure you have a GitHub token created using the instructions in the GitHub documentation. Your username and token, stored as a secret, will be used by the Spinnaker pipeline to clone the spinnaker-summit-22 Git repo. Create the secret with these commands:

cd ~/environment
kubectl -n spinnaker create secret generic spin-secrets --from-literal=http-password="spinsum22\!\?" --from-literal=github-token=<Your GitHub Token>

Step 8 – Install Spinnaker

  • Clone the Spinnaker repository:
cd ~/environment/
git clone <URL of the spinnaker-summit-22 repository>
cd spinnaker-summit-22

  • Change the ~/environment/spinnaker-summit-22/ecr-registry.yml configuration file by adding your account and region.
export ECR_URI=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
echo "export ECR_URI=${ECR_URI}" >> ~/.bash_profile
ECR_URI=${ECR_URI} yq -i '.[1].value = env(ECR_URI)' ~/environment/spinnaker-summit-22/ecr-registry.yml
sed -i 's|AWS_REGION|'${AWS_REGION}'|g' ~/environment/spinnaker-summit-22/ecr-registry.yml

  • Change the ~/environment/spinnaker-summit-22/s3-bucket.yml configuration file by adding your Amazon S3 bucket name.
S3_BUCKET=${S3_BUCKET} yq  -i '.spec.spinnakerConfig.config.persistentStorage.s3.bucket = env(S3_BUCKET)' ~/environment/spinnaker-summit-22/s3-bucket.yml
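After the yq edit, the s3-bucket.yml patch should resemble the following sketch; the bucket name shown is illustrative, and yours will carry your generated random suffix:

```yaml
# s3-bucket.yml — Kustomize patch targeting the SpinnakerService resource
apiVersion: spinnaker.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  spinnakerConfig:
    config:
      persistentStorage:
        persistentStoreType: s3
        s3:
          bucket: spinnaker-workshop-abcdefghij   # your generated bucket name
```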

  • Change the account name in the ~/environment/spinnaker-summit-22/gitrepo.yml configuration file by running the command below with your GitHub account.
yq -i '.spec.spinnakerConfig.config.artifacts.gitrepo.accounts[0].name = "<Your github user name>"' ~/environment/spinnaker-summit-22/gitrepo.yml

  • Delete the validation webhook. This is the current workaround for the Spinnaker Operator having a validation error in Kubernetes 1.22.
kubectl delete ValidatingWebhookConfiguration spinnakervalidatingwebhook 

  • Create Spinnaker service with these commands:
cd ~/environment/spinnaker-summit-22/
kubectl apply -k .

When successful, you should get output listing each created resource, beginning with:

serviceaccount/spin-sa created

Check that all pods and services are running with these kubectl commands:

kubectl get svc -n spinnaker
kubectl get pods -n spinnaker

Here is some example output:

NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
spin-clouddriver   ClusterIP      <cluster-ip>   <none>           7002/TCP       5d23h
spin-deck          LoadBalancer   <cluster-ip>   <elb-hostname>   80:30893/TCP   5d23h
spin-echo          ClusterIP      <cluster-ip>   <none>           8089/TCP       5d23h
spin-front50       ClusterIP      <cluster-ip>   <none>           8080/TCP       5d23h
spin-gate          LoadBalancer   <cluster-ip>   <elb-hostname>   80:30591/TCP   5d23h
spin-igor          ClusterIP      <cluster-ip>   <none>           8088/TCP       5d23h
spin-orca          ClusterIP      <cluster-ip>   <none>           8083/TCP       5d23h
spin-redis         ClusterIP      <cluster-ip>   <none>           6379/TCP       5d23h
spin-rosco         ClusterIP      <cluster-ip>   <none>           8087/TCP       5d23h

NAME                                READY   STATUS    RESTARTS   AGE
spin-clouddriver-865f7d77d5-lxfps   1/1     Running   0          19h
spin-deck-5d546d6f59-psmk8          1/1     Running   0          19h
spin-echo-6579d45865-dlxs2          1/1     Running   0          19h
spin-front50-74646b785d-jqxh5       1/1     Running   0          19h
spin-gate-7f6f86d75f-65rdm          1/1     Running   0          19h
spin-igor-868dbb6656-qqrgh          1/1     Running   0          19h
spin-orca-5458c9c4c4-s4r5x          1/1     Running   0          19h
spin-redis-5b685889fd-mjjjd         1/1     Running   0          19h
spin-rosco-6969544f6b-s4nc8         1/1     Running   0          19h

Step 9 – Configure Spinnaker pipeline

  • In this example, we will use a pre-created Spinnaker pipeline.json. However, we need to edit the file ~/environment/spinnaker-summit-22/pipeline.json with your Amazon ECR repository information by running the commands below. Replace the empty strings with your Amazon ECR endpoint:
cd ~/environment/spinnaker-summit-22/
cat <<< "$(jq '.parameterConfig[0].default = ""' pipeline.json)" > ~/environment/spinnaker-summit-22/pipeline.json
cat <<< "$(jq '.triggers[0].registry = ""' pipeline.json)" > ~/environment/spinnaker-summit-22/pipeline.json
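To see what these jq edits do, here is a self-contained sketch against a minimal, hypothetical pipeline.json. The real file from the repo has many more fields, and the registry value shown is a made-up ECR endpoint:

```shell
# Create a minimal stand-in for pipeline.json (hypothetical structure)
cat > /tmp/pipeline.json <<'EOF'
{"parameterConfig":[{"name":"repo","default":""}],"triggers":[{"type":"docker","registry":""}]}
EOF

# Fill in a made-up ECR endpoint, mirroring the jq edits above
ECR_URI="123456789012.dkr.ecr.us-west-2.amazonaws.com"
cat <<< "$(jq --arg uri "$ECR_URI" '.triggers[0].registry = $uri | .parameterConfig[0].default = $uri' /tmp/pipeline.json)" > /tmp/pipeline.json

# Show the value that was written
jq -r '.triggers[0].registry' /tmp/pipeline.json
```

The `cat <<< "$(jq …)" > file` pattern is used because jq cannot edit a file in place; the command substitution reads the whole file before the redirection truncates it.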

  • Open the Spinnaker UI by getting the load balancer URL for the Spinnaker service spin-deck from this kubectl command:
kubectl get svc -n spinnaker

The hostname will be listed in the EXTERNAL-IP column of the spin-deck LoadBalancer row in the output:

NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
spin-deck          LoadBalancer   <cluster-ip>   <elb-hostname>   80:30781/TCP   5m2s
spin-echo          ClusterIP      <cluster-ip>   <none>           8089/TCP       5m2s

  • Go to the browser and load the Load Balancer hostname as a URL. From the UI, create an application for continuous deployment.

This image demonstrates all of the information that needs to be entered into the UI to create a new application, specifically, Name, Owner Email, Repo Type, Description, Instance Health, Instance Port, and Pipeline Behavior.

UI information entry

  • Create a new pipeline for your application. The UI will prompt you to enter a type (select Pipeline) and provide a pipeline name (for this example, use NginxApp):

New pipeline window

  • Create the pipeline stages using the JSON file. In the UI, select “Pipeline Actions” then “Edit as JSON”:

pipeline actions screen

  • Replace the pipeline json file content with the content from your ~/environment/spinnaker-summit-22/pipeline.json file. Update your Pipeline by selecting “Save Changes.”
  • The Spinnaker UI does not auto-save the bake stages. Click the drop-down and select your account in “Bake Test” and “Bake Production”:

Bake production manifest screen

  • Verify your stages in the UI. Your pipeline should be NginxApp, the tag value should be v1.1.0, and the repo_name should reflect your specific repository:

select pipeline screen

Test each input before going back to the pipeline and manually completing it:

sampleapp screen

  • Your pipeline will run until the Promote to Production stage, then pause and wait for approval. Confirm that the application has been deployed to the test environment.
    promote to production status screen
  • Using this kubectl command, check if the application has been deployed to the test environment:
kubectl get pods -n test-environment

  • You should get output showing one pod running in the test-environment namespace:
NAME                                READY   STATUS    RESTARTS   AGE
test-hello-world-5b9c48d997-ksprc   1/1     Running   1          23h

  • Click “Continue” on the Spinnaker pipeline popup in the UI for Promote to Production, and the pipeline will deploy three pods to the production environment:

Spinnaker pipeline popup

Check if the application has been deployed to the prod environment. You should see three pods running in the production-environment namespace as the output from this kubectl command:

kubectl get pods -n production-environment

Step 10 – Activate GitOps based automated deployment

  • Run this bash script, which will create the application container image with a new tag and push the image to your Amazon ECR repository:
cd ~/environment/spinnaker-summit-22
./<build-script> 1.3.0

  • From here, the pipeline should trigger automatically. You can confirm this in the UI:

pipepline deployment status

  • Check the pods after test deployment with this kubectl command:
kubectl get pods -n test-environment

NAME                                READY   STATUS    RESTARTS   AGE
test-hello-world-777ddbb675-w7s9p   1/1     Running   0          5m55s

  • Check the image used by the pod with this kubectl command. It should match the tag used in the script:
kubectl get pod test-hello-world-777ddbb675-w7s9p -n test-environment -o json | jq ".spec.containers[0].image"
"<your ECR URI>/spinnaker-summit-22:1.3.0"

  • Check the pod after production deployment with this kubectl command. You should see three pods running in the production-environment namespace:

kubectl get pods -n production-environment

  • Check the image used by the pod with this kubectl command. It should match the tag used in the script:
kubectl get pod production-hello-world-66d9f986c9-45s8b -n production-environment -o json | jq ".spec.containers[0].image"
"<your ECR URI>/spinnaker-summit-22:1.3.0"


Cleanup

To clean up your environment, run the following commands, making sure the correct values for AWS_REGION, ECR_REPOSITORY, and S3_BUCKET are still set:

cd ~/environment/spinnaker-summit-22
kubectl delete -k .
eksctl delete iamserviceaccount \
--name s3-access-sa \
--namespace spinnaker \
--cluster eksworkshop-eksctl
aws ecr delete-repository --repository-name ${ECR_REPOSITORY} --region ${AWS_REGION} --force
aws s3 rb s3://$S3_BUCKET --region ${AWS_REGION} --force
cd ~/environment/terraform-aws-eks-blueprints/examples/ipv4-prefix-delegation/
terraform destroy --auto-approve
cd ~/environment/
rm -rf spinnaker-summit-22
rm -rf terraform-aws-eks-blueprints
rm -rf spinnaker-operator


Conclusion

In this post, we installed the Spinnaker service using the Spinnaker Operator and Kustomize, and walked through setting up a sample application in Spinnaker. Then we built a Spinnaker CD pipeline that used Kustomize to overlay the test and production environments during the deployment stage.

We observed how the Spinnaker pipeline got triggered when we pushed a new image into an Amazon ECR repository. Spinnaker then executed the pipeline deployment stage and deployed the sample application artifacts into an Amazon EKS cluster.

To learn more, we recommend you review these additional resources:

  • Spinnaker Concepts
  • Spinnaker Architecture Overview
  • GitHub Spinnaker Operator
  • Deploy Armory Continuous Deployment or Spinnaker Using Kubernetes Operators
  • Spinnaker Architecture
  • Kustomize patches for configuring Armory Continuous Deployment