- Create a hosted zone in Route53 using these steps if you don’t already have one available.
- Create a public certificate for the hosted zone (created in step 1) in Certificate Manager using these steps if you don’t have one already available. Make a note of the certificate ARN for use later.
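If you prefer the CLI, the two prerequisite steps above can be sketched with the AWS CLI. This is a hedged sketch, not the exact steps the links describe: `example.com` and the caller reference are placeholders, and the certificate's DNS validation record still has to be created in Route53 before the certificate is issued.

```shell
# Hypothetical CLI sketch of steps 1-2; example.com is a placeholder domain.
# Guarded so it is a no-op on machines without the AWS CLI configured.
DOMAIN="example.com"
if command -v aws >/dev/null 2>&1; then
  # Create the public hosted zone (the caller reference must be unique per request).
  aws route53 create-hosted-zone --name "$DOMAIN" --caller-reference "acs-$(date +%s)"
  # Request a public certificate for the domain; note the certificate ARN in the output.
  aws acm request-certificate --domain-name "$DOMAIN" --validation-method DNS
fi
```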
- Create a file called `external-dns.yaml` with the text below (replace `YOUR-DOMAIN-NAME` with the domain name you created in step 1). This manifest defines a service account and a cluster role for managing DNS:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        args:
        - --source=service
        - --domain-filter=YOUR-DOMAIN-NAME
        - --provider=aws
        - --policy=sync
        - --aws-zone-type=public
        - --registry=txt
        - --txt-owner-id=acs-deployment
        - --log-level=debug
```
- Use the `kubectl` command to deploy the external-dns service:

```bash
kubectl apply -f external-dns.yaml -n kube-system
```
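To confirm the deployment came up, a quick check might look like the sketch below (assuming `kubectl` is pointed at your cluster; the snippet is guarded so it is a no-op otherwise). The `app=external-dns` label comes from the manifest above.

```shell
# Sketch: verify the external-dns pod is running and inspect its recent logs.
# No-op if kubectl is not installed or configured for the cluster.
NAMESPACE="kube-system"
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n "$NAMESPACE" -l app=external-dns
  kubectl logs -n "$NAMESPACE" -l app=external-dns --tail=20
fi
```

With `--log-level=debug` set in the manifest, the logs should show external-dns evaluating services against your hosted zone.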
- List the node groups for your cluster and make a note of the node group name `YOUR-NODEGROUP` (replace `YOUR-CLUSTER-NAME` with the name you gave your cluster):

```bash
aws eks list-nodegroups --cluster-name YOUR-CLUSTER-NAME
```
- Find the name of the role used by the nodes by running the following command (replace `YOUR-CLUSTER-NAME` with the name you gave your cluster and `YOUR-NODEGROUP` with your node group name):

```bash
aws eks describe-nodegroup --cluster-name YOUR-CLUSTER-NAME --nodegroup-name YOUR-NODEGROUP --query "nodegroup.nodeRole" --output text
```
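The command above returns the node role as a full ARN (the ARN below is a made-up example). The role *name* needed in the next step is the part after the last slash, which plain shell parameter expansion can extract:

```shell
# Extract the role name from a node role ARN (this example ARN is hypothetical).
NODE_ROLE_ARN="arn:aws:iam::123456789012:role/my-node-role"
ROLE_NAME="${NODE_ROLE_ARN##*/}"   # strip everything up to and including the last '/'
echo "$ROLE_NAME"                  # prints: my-node-role
```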
- In the IAM console, find the role discovered in the previous step and attach the `AmazonRoute53FullAccess` managed policy as shown in the screenshot below:
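As an alternative to the console, the same managed policy can be attached from the CLI. In this sketch `YOUR-NODE-ROLE-NAME` is a placeholder for the role name found in the previous step, and the call is guarded so the snippet is a no-op without a configured AWS CLI:

```shell
# Sketch: attach the AmazonRoute53FullAccess managed policy via the AWS CLI.
# Replace YOUR-NODE-ROLE-NAME with the role name from the previous step.
POLICY_ARN="arn:aws:iam::aws:policy/AmazonRoute53FullAccess"
if command -v aws >/dev/null 2>&1; then
  aws iam attach-role-policy --role-name YOUR-NODE-ROLE-NAME --policy-arn "$POLICY_ARN"
fi
```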