Ansible Roles for launching Wordpress with MySQL in Kubernetes Cluster over AWS

Akurathi Sri Krishna Sagar
9 min read · Jul 11, 2022


In this article, we are going to launch a custom Kubernetes cluster on AWS instances and then launch the WordPress and MySQL pods inside the K8s cluster, all with ONE command, using Ansible Roles!

GitHub Link containing Ansible Roles :

Get Ansible Ready

The first step is to configure the /etc/ansible/ansible.cfg file. The following is my ansible.cfg file :

[defaults]
inventory = /home/sagar/inventory.txt
host_key_checking = false
command_warnings = false
deprecation_warnings = false
roles_path = /home/sagar/wordpress_k8s
ask_pass = false
remote_user = ec2-user
private_key_file = /home/sagar/Downloads/kubernetes.pem
[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false

You need to change a few things in the above file. Create an inventory file (.txt) and specify its location in the inventory field. My Ansible roles are created under the directory “/home/sagar/wordpress_k8s”; you can change it. My private key file for the AWS instances is at “/home/sagar/Downloads/kubernetes.pem”.
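
The inventory file can start out empty; the first play runs against localhost, and the launch_ec2_instances role then fills the file in with the public IPs of the new instances. After that it looks roughly like this (placeholders instead of real IPs) :

[master_node]
<<MASTER PUBLIC IP>>

[slave_nodes]
<<SLAVE-1 PUBLIC IP>>
<<SLAVE-2 PUBLIC IP>>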

In this project, I’m using 4 Ansible roles :

  • launch_ec2_instances : For launching AWS EC2 Instances
  • configure-master-slave : For setting up K8s Cluster over the above created AWS Instances
  • start-master : For starting cluster services on the master and making worker nodes join the cluster
  • wordpress-mysql : For launching the WordPress and MySQL pods with their respective services, PVs and PVCs inside the K8s cluster.

You can create the directory structure for the above Ansible roles with the following command (run inside the roles_path specified in the ansible.cfg file) :

ansible-galaxy init <<ROLE NAME>>
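
For example, ansible-galaxy init launch_ec2_instances creates a standard skeleton like the one below; in this project only tasks/, vars/, files/ and templates/ are actually filled in :

launch_ec2_instances/
├── README.md
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
├── tests/
└── vars/main.yml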

The tasks/main.yml files in the 4 roles are given below :

Role-1 : launch_ec2_instances

tasks/main.yml :

---
# tasks file for launch_ec2_instances
- name: "Installing boto python library"
  pip:
    name: boto

- name: "Launching K8s Master"
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ AMI_ID }}"
    wait: yes
    group: "{{ master_security_group }}"
    count: 1
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    region: "{{ region_name }}"
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    instance_tags:
      "Name": "Master"
  register: master_info

- name: "Launching K8s Slave Nodes"
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ AMI_ID }}"
    wait: yes
    group: "{{ worker_security_group }}"
    count: "{{ number_of_slave_nodes }}"
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    region: "{{ region_name }}"
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    instance_tags:
      "Name": "Slave"
  register: slave_info

- name: "Adding Master and Slave Nodes Public IP's to the Inventory"
  template:
    src: inventory.txt
    dest: "{{ inventory_location }}"

- name: "Waiting for Master SSH port 22"
  wait_for:
    port: 22
    host: "{{ item.public_dns_name }}"
    state: started
  loop: "{{ master_info['instances'] }}"

- name: "Waiting for Slave SSH port 22"
  wait_for:
    port: 22
    host: "{{ item.public_dns_name }}"
    state: started
  loop: "{{ slave_info['instances'] }}"

You need to specify the values for the variables used above inside the vars/main.yml file :

---
# vars file for launch_ec2_instances
key_name: "YOUR_KEY_NAME"
instance_type: "t2.micro"
AMI_ID: "ami-08df646e18b182346"
master_security_group: "YOUR_SG_NAME"
worker_security_group: "YOUR_SG_NAME"
subnet_id: "YOUR_SUBNET_ID"
region_name: "ap-south-1"
access_key: "YOUR_ACCESS_KEY"
secret_key: "YOUR_SECRET_KEY"
number_of_slave_nodes: "2"
inventory_location: "/home/sagar/inventory.txt"

I’m using Amazon Linux instances of type t2.micro. Make sure to provide your credentials, subnet ID, security group names and inventory_location in the above file.

IMP! One important step here is configuring proper inbound rules for your security group; otherwise you will run into connectivity issues between the nodes in your K8s cluster. (I’ve used the same SG for both the master and worker instances.)
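
If you prefer to script the security group instead of clicking through the AWS console, something along these lines is enough for a kubeadm + Flannel cluster: SSH for Ansible, the Kubernetes API server, the NodePort range, and all traffic between instances in the same SG. The group ID below is a placeholder; adjust the CIDRs to your own needs.

# Placeholder security group ID; replace with your own
SG_ID=sg-0123456789abcdef0

# SSH for Ansible
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
    --protocol tcp --port 22 --cidr 0.0.0.0/0

# Kubernetes API server
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
    --protocol tcp --port 6443 --cidr 0.0.0.0/0

# NodePort range (to reach the WordPress service from outside)
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
    --protocol tcp --port 30000-32767 --cidr 0.0.0.0/0

# All traffic between nodes in the same SG (etcd, kubelet, Flannel VXLAN, ...)
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
    --protocol all --source-group $SG_ID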

In this role, I’ve used a template for storing the public IPs of the launched instances, templates/inventory.txt :

[master_node]
{% for i in master_info['instances'] %}
{{ i.public_ip }}
{% endfor %}
[slave_nodes]
{% for i in slave_info['instances'] %}
{{ i.public_ip }}
{% endfor %}

The hosts rendered into the above inventory file will be used by the roles that follow…

Role-2 : configure-master-slave

tasks/main.yml :

---
# tasks file for configure-master-slave
- name: "Turning off Swap"
  command: "swapoff -a"

- name: "Configuring k8s.conf file"
  copy:
    content: |
      overlay
      br_netfilter
    dest: "/etc/modules-load.d/k8s.conf"

- name: "Configuring Kernel Modules"
  shell: |
    modprobe overlay
    modprobe br_netfilter

- name: "Configuring sysctl params"
  copy:
    content: |
      net.bridge.bridge-nf-call-iptables = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      net.ipv4.ip_forward = 1
    dest: "/etc/sysctl.d/k8s.conf"

- name: "Applying sysctl params"
  shell: sudo sysctl --system

- name: "Installing Docker"
  package:
    name: docker
    state: present

- name: "Creating docker directory"
  file:
    path: /etc/docker
    state: directory

- name: "Configuring cgroup driver for docker"
  copy:
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }
    dest: "/etc/docker/daemon.json"

- name: "Enabling Docker service"
  service:
    name: docker
    enabled: yes

- name: "Reloading systemd files"
  shell: "sudo systemctl daemon-reload"

- name: "Restarting Docker service"
  service:
    name: docker
    state: restarted

- name: "Configuring YUM repository for kubernetes"
  yum_repository:
    name: kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled: yes
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    exclude:
      - kubelet
      - kubeadm
      - kubectl
    description: yum repo for kubernetes

- name: "Disabling SELinux"
  shell: |
    sudo setenforce 0
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

- name: "Installing kubeadm, kubelet, kubectl in Master and Worker"
  yum:
    name: "{{ item }}"
    state: present
    disable_excludes: kubernetes
  loop:
    - kubelet
    - kubeadm
    - kubectl

- name: "Starting and Enabling kubelet"
  service:
    name: kubelet
    state: started
    enabled: yes

- name: "Downloading required Docker Images"
  command: kubeadm config images pull
  ignore_errors: yes

- name: "Installing iproute-tc in Master and Worker"
  package:
    name: iproute-tc
    state: present

I’ve used Docker as my container runtime (CRI) and systemd as the cgroup driver for Docker. The above steps run on both the master and worker EC2 instances, as they are common to the K8s master and worker nodes.
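
If a node misbehaves later, it’s worth sanity-checking this common setup with a few quick commands on the node itself :

docker info --format '{{ .CgroupDriver }}'    # should print: systemd
lsmod | grep br_netfilter                     # kernel module loaded
sysctl net.bridge.bridge-nf-call-iptables     # should be 1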

Next, we have to start the master node…

Role-3 : start-master

tasks/main.yml :

---
# tasks file for start-master
- name: "Starting the Master"
  command: "kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"

- name: "Kubeconfig"
  shell: |
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

- name: "Applying Flannel"
  shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

I’ve used Flannel as the overlay network add-on, as seen above.
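
A quick way to check that the network add-on came up (run on the master once kubeadm init and the apply have finished) :

kubectl get pods --all-namespaces | grep flannel   # one flannel pod per node
kubectl get nodes                                  # the master should report Ready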

Next, the worker nodes have to join the cluster, which I’ll show in the main.yml file.
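
For reference, the join command that kubeadm generates on the master (and that main.yml later fetches with kubeadm token create --print-join-command) looks roughly like this; the token and hash here are placeholders :

kubeadm join <<MASTER PRIVATE IP>>:6443 --token <<TOKEN>> \
    --discovery-token-ca-cert-hash sha256:<<CA CERT HASH>>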

Role-4 : wordpress-mysql

tasks/main.yml :

---
# tasks file for wordpress-mysql
- name: "Creating directory for storing kubernetes files"
  file:
    path: /home/ec2-user/k8s-files
    state: directory

- name: "Copying Kubernetes Files"
  copy:
    src: ./
    dest: /home/ec2-user/k8s-files

- name: "Applying Kubernetes Files"
  shell: kubectl apply -f /home/ec2-user/k8s-files/

- name: "Getting ClusterIP of MySQL"
  shell: "kubectl describe svc wordpress-mysql | grep IP: | tr -s ' '"
  register: ip

- name: "Extracting ClusterIP"
  set_fact:
    clusterIP: "{{ ip['stdout_lines'][0] | regex_replace('^IP: ', '') }}"

- name: "Copying wordpress yml file"
  template:
    src: wordpress-deployment.yaml
    dest: /home/ec2-user/k8s-files

- name: "Applying wordpress yml file"
  shell: kubectl apply -f /home/ec2-user/k8s-files/wordpress-deployment.yaml

- name: "Getting Port Number of Wordpress"
  shell: kubectl get svc wordpress
  register: NodePort

- name: "Displaying Port Number of Wordpress"
  debug:
    var: NodePort['stdout_lines']

The above role contains the following YAML files for launching the PVs, PVCs, secret, services and pods :

files/pv1.yaml : (for creating a 2GiB persistent volume)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  hostPath:
    path: "/pv1"

files/pv2.yaml : (for creating another 2GiB persistent volume)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv2
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  hostPath:
    path: "/pv2"

files/secret.yaml : (for storing the password of the MySQL database)

apiVersion: v1
data:
  password: c2FnYXI=
kind: Secret
metadata:
  name: mysql-pass

In the above file, the password is base64 encoded. Make sure you change it to your own value.
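
To generate your own value, base64-encode the password and paste the result into the data.password field; the password shown below is just an example :

# -n keeps a trailing newline out of the encoded value
echo -n 'YourStrongPassword' | base64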

files/mysql-deployment.yaml : (for creating a Service for MySQL, claiming a PV using a PVC, and launching the MySQL Deployment)

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    app: wordpress
    tier: mysql
  ports:
    - port: 3306
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            - name: MYSQL_DATABASE
              value: wordpress
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim

Next, I’ve used a template in which the ClusterIP of the MySQL service is loaded into the environment variable WORDPRESS_DB_HOST :

templates/wordpress-deployment.yaml : (for creating a NodePort Service for WordPress, claiming a PV using a PVC, and creating the WordPress Deployment)

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: wordpress
    tier: frontend
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:5.1.1-php7.3-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: "{{ clusterIP }}:3306"
            - name: WORDPRESS_DB_USER
              value: root
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
            - name: WORDPRESS_DB_NAME
              value: wordpress
            - name: WORDPRESS_TABLE_PREFIX
              value: wp_
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim

Note how the tasks/main.yml file extracts the ClusterIP of the MySQL service.
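
To make that extraction concrete: the describe | grep | tr pipeline leaves a single line like the one below (the address here is just a placeholder), and the regex_replace('^IP: ', '') filter strips the leading “IP: ” so only the address is stored in the clusterIP fact :

kubectl describe svc wordpress-mysql | grep IP: | tr -s ' '
IP: 10.101.45.12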

main.yml

Now we need a driver playbook which runs all of the above roles sequentially and passes the cluster joining command to the worker nodes. The following is that main.yml file :

- hosts: localhost
  gather_facts: False
  roles:
    - launch_ec2_instances
  tasks:
    - meta: refresh_inventory

- hosts: master_node:slave_nodes
  gather_facts: False
  roles:
    - configure-master-slave

- hosts: master_node
  gather_facts: False
  roles:
    - start-master
  tasks:
    - name: "Getting token from master"
      shell: "kubeadm token create --print-join-command"
      register: token
    - add_host:
        name: "token_for_worker"
        link: "{{ token['stdout'] }}"

- hosts: slave_nodes
  gather_facts: False
  tasks:
    - name: "Worker Nodes joining the cluster"
      shell: "{{ hostvars['token_for_worker']['link'] }}"

- hosts: master_node
  gather_facts: False
  roles:
    - wordpress-mysql

After launching the instances, the above playbook refreshes the inventory so that the subsequent plays can connect to the new EC2 instances. It then configures the master and worker nodes, joins the workers to the cluster, and finally launches the WordPress and MySQL deployments.

It’s time to test

Now run the main.yml driver playbook with sudo.
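
From the directory that contains main.yml (with ansible.cfg already pointing at the roles path, as configured earlier), the whole stack comes up with a single command :

sudo ansible-playbook main.yml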

That was long!

In the output of the last task, we can see the exposed NodePort (32243) of the wordpress service. Now, you can view your WordPress website by heading to the URL http://<<Any EC2 Instance Public IP>>:<<EXPOSED PORT>>. In my case, it is :

http://13.127.98.228:32243

In the AWS console, you can see that the instances have been launched.

The WordPress website can also be accessed using the worker nodes’ public IPs.

You can also log into the master node and inspect the cluster yourself.
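
A few typical commands for that (the key path and master IP below are placeholders; kubectl has to run as the privileged user the playbook used, since that is where the kubeconfig was written) :

ssh -i ~/Downloads/kubernetes.pem ec2-user@<<MASTER PUBLIC IP>>
sudo -i                      # the start-master role set up the kubeconfig under the become user's $HOME/.kube

kubectl get nodes -o wide    # master and worker nodes should all be Ready
kubectl get pods             # the wordpress and wordpress-mysql pods
kubectl get svc              # the wordpress NodePort and wordpress-mysql ClusterIP
kubectl get pv,pvc           # the two PVs and their claims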

After the initial WordPress setup, you can use your own hosted WordPress site.

Thanks for Reading 😊
