# 1. Environment Preparation
## 1.1 Post-installation Topology
![k8s install topology](./pics/install-k8s.png)
## 1.2 Hardware
|IP Address|Hostname|CPU|Memory|Disk|Role|
|-|-|-|-|-|-|
|172.16.1.20|k8smaster|2 vCPU|4 GB|50 GB|Kubernetes control-plane node|
|172.16.1.21|k8snode1|2 vCPU|4 GB|50 GB|Kubernetes worker node 1|
|172.16.1.22|k8snode2|2 vCPU|4 GB|50 GB|Kubernetes worker node 2|
## 1.3 Software
|Software|Version|
|-|-|
|CentOS|CentOS Linux release 7.8.2003 (Core)|
|Kubernetes|v1.18.5|
|Docker|19.03.12|
## 1.4 Sanity-Check the Environment
|Check|Command|Notes|
|-|-|-|
|All cluster nodes can reach each other|ping -c 3 \<ip>||
|MAC addresses are unique|ip link or ifconfig -a|Commands to change a MAC address:<br>ifconfig eth0 down<br>ifconfig eth0 hw ether 00:0C:18:EF:FF:ED<br>ifconfig eth0 up|
|Hostnames are unique within the cluster|Query: hostnamectl status<br>Set: hostnamectl set-hostname \<hostname>||
|System product UUIDs are unique|dmidecode -s system-uuid or<br>sudo cat /sys/class/dmi/id/product_uuid|If the product_uuid values are not unique, consider reinstalling CentOS|
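Rather than checking each node by hand, a small sketch like the one below can collect all three values in one pass. It assumes the hosts entries and root SSH trust from section 1.10 are already in place; adjust the node list to your environment.
```bash
#!/bin/bash
# Print hostname, MAC addresses, and product UUID for every node,
# so duplicates are easy to spot side by side.
for node in k8smaster k8snode1 k8snode2; do
  echo "=== $node ==="
  ssh root@$node "hostname; ip link show | grep link/ether; dmidecode -s system-uuid"
done
```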
## 1.5 Verify the Required Ports Are Available
### Port checks on the kube-master node:
|Protocol|Direction|Port Range|Purpose|
|-|-|-|-|
|TCP|Inbound|6443|kube-apiserver|
|TCP|Inbound|2379-2380|etcd API|
|TCP|Inbound|10250|Kubelet API|
|TCP|Inbound|10251|kube-scheduler|
|TCP|Inbound|10252|kube-controller-manager|
### Port checks on the kube-node* nodes:
|Protocol|Direction|Port Range|Purpose|
|-|-|-|-|
|TCP|Inbound|10250|Kubelet API|
|TCP|Inbound|30000-32767|NodePort Services|
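Before installing anything, it is worth confirming that nothing else is already listening on these ports. A minimal sketch (run the master port list below on k8smaster; on the workers check 10250 and the NodePort range instead):
```bash
#!/bin/bash
# Report whether each required port is already in use on this host.
for port in 6443 2379 2380 10250 10251 10252; do
  if ss -tln | grep -q ":$port "; then
    echo "port $port is already in use"
  else
    echo "port $port is free"
  fi
done
```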
#### Optionally, disable the firewall:
```bash
systemctl stop firewalld
systemctl disable firewalld
```
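If disabling firewalld outright is not acceptable in your environment, a sketch like the following opens only the required ports instead (the list below is for the master; on workers open 10250 and 30000-32767):
```bash
#!/bin/bash
# Keep firewalld running and open only the Kubernetes ports.
for port in 6443 2379-2380 10250 10251 10252; do
  firewall-cmd --permanent --add-port=${port}/tcp
done
firewall-cmd --reload
```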
## 1.6 Disable SELinux (otherwise kubelet may report Permission denied when mounting directories)
```bash
# switch to permissive mode immediately (lasts until reboot)
setenforce 0
# disable SELinux persistently across reboots
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
```
## 1.7 Disable Swap (kubeadm's preflight checks fail while swap is enabled)
```bash
# turn swap off immediately
swapoff -a
# back up fstab, then strip the swap entry so swap stays off after reboot
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab
```
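To confirm swap is really gone:
```bash
# "Swap:" should show 0 in free, and swapon -s should print nothing
free -h
swapon -s
```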
## 1.8 Set the System Time Zone and Synchronize Clocks
```bash
timedatectl set-timezone Asia/Shanghai
systemctl enable --now chronyd
```
|Check sync status|Output|Meaning|
|-|-|-|
|timedatectl status|System clock synchronized: yes<br>NTP service: active<br>RTC in local TZ: no|"System clock synchronized: yes" means the clock is in sync;<br>"NTP service: active" means the time-sync service is running|
```bash
# keep the hardware clock in UTC
timedatectl set-local-rtc 0
# restart services that depend on the system time
systemctl restart rsyslog && systemctl restart crond
```
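chronyc can confirm that synchronization is actually happening:
```bash
# list the configured time sources and the current tracking status
chronyc sources -v
chronyc tracking
```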
## 1.9 Edit /etc/sysctl.conf So Bridged Traffic Is Routed Correctly
```bash
# rewrite the settings if they are already present
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
# the keys may be absent, so also append them
# (if a key was already rewritten above, the appended duplicate is harmless: sysctl keeps the last value)
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
# load br_netfilter and apply the configuration
modprobe br_netfilter
sysctl -p
```
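A quick read-back shows whether the settings took effect:
```bash
# each key should print "= 1"
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# br_netfilter must be loaded, or the bridge keys above will not exist
lsmod | grep br_netfilter
```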
## 1.10 Set Up Trust Between Hosts
Configure the hosts mapping on every node:
```bash
cat >> /etc/hosts <<EOF
172.16.1.20 k8smaster
172.16.1.21 k8snode1
172.16.1.22 k8snode2
EOF
```
Generate an SSH key on kube-master and distribute the public key to all nodes:
```bash
# generate an ssh key pair; just press Enter through every prompt
ssh-keygen -t rsa
# copy the public key into each node's authorized list; enter each host's password when prompted
ssh-copy-id root@k8smaster
ssh-copy-id root@k8snode1
ssh-copy-id root@k8snode2
```
## 1.11 Configure the yum Repositories
```bash
rm -rf /etc/yum.repos.d/local.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
# install required dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# add the aliyun docker-ce yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# rebuild the yum cache
yum makecache fast
```
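Listing the enabled repositories confirms the base and docker-ce repos are both in place:
```bash
# docker-ce-stable and the aliyun base/extras/updates repos should all appear
yum repolist enabled
```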
# 2. Installing Docker
## 2.1 List the Available Docker Versions
```bash
yum list docker-ce.x86_64 --showduplicates | sort -r
```
## 2.2 Install a Specific Docker Version
```bash
#!/bin/bash
# install docker
# reference documentation:
# https://docs.docker.com/install/linux/docker-ce/centos/
# https://docs.docker.com/install/linux/linux-postinstall/
DOCKER_VERSION=19.03.12-3.el7
# remove any old docker versions
yum remove -y docker \
docker-client \
docker-client-latest \
docker-ce-cli \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
# install and start docker
yum install -y docker-ce-$DOCKER_VERSION docker-ce-cli-$DOCKER_VERSION containerd.io
systemctl enable docker
systemctl start docker
```
## 2.3 Make Sure the Required Kernel Modules Load at Boot
```bash
lsmod | grep overlay
lsmod | grep br_netfilter
```
If the commands above print nothing, or report that the module file does not exist, run:
```bash
cat > /etc/modules-load.d/docker.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
```
## 2.4 Configure Docker
```bash
# set the cgroup driver to systemd (recommended by Kubernetes), cap container log size, and set the storage driver; the docker data directory on the last line can be changed
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": [
    "https://7uuu3esz.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.ccs.tencentyun.com",
    "https://reg-mirror.qiniu.com",
    "https://hub-mirror.c.163.com",
    "https://dockerhub.azk8s.cn",
    "https://registry.docker-cn.com"
  ],
  "data-root": "/data/docker"
}
EOF
systemctl daemon-reload
systemctl restart docker
```
## 2.5 Verify That Docker Works
```bash
# check docker info and confirm it matches the configuration above
docker info
# hello-world smoke test
docker run --rm hello-world
# remove the test image
docker rmi hello-world
```
![docker-test](./pics/docker-test.png)
## 2.6 Add a User to the docker Group
```bash
# add the user to the docker group
usermod -aG docker <USERNAME>
# refresh the docker group membership in the current session
newgrp docker
```
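Afterwards, that user should be able to reach the daemon without sudo:
```bash
# should list containers (an empty table is fine) with no permission error
docker ps
```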
# 3. Deploying the Kubernetes Cluster
## 3.1 Add the Kubernetes yum Repository
```bash
# configure the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# rebuild the yum cache; answer y to accept the GPG keys
yum makecache fast
```
## 3.2 Install kubeadm, kubelet, and kubectl
### Install kubeadm, kubelet, and kubectl on the kube-master node
```bash
#!/bin/bash
KUBE_VERSION=1.18.5
# remove old versions
yum remove -y kubelet kubeadm kubectl
yum install -y kubelet-$KUBE_VERSION kubeadm-$KUBE_VERSION kubectl-$KUBE_VERSION --disableexcludes=kubernetes
# the mirror cannot sync the official GPG metadata, so the index signature check may fail; in that case install with:
# yum install -y --nogpgcheck kubelet-$KUBE_VERSION kubeadm-$KUBE_VERSION kubectl-$KUBE_VERSION
systemctl enable --now kubelet
```
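Confirm that the expected versions were installed:
```bash
# all three should report v1.18.5
kubeadm version -o short
kubelet --version
kubectl version --client --short
```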
## 3.3 Configure Command Completion
```bash
# install the bash-completion package
yum install bash-completion -y
# generate kubectl and kubeadm completion scripts; they take effect at the next login
kubectl completion bash >/etc/bash_completion.d/kubectl
kubeadm completion bash > /etc/bash_completion.d/kubeadm
```
## 3.4 Pre-pull the Kubernetes Images
### List the images a given Kubernetes version requires
```bash
kubeadm config images list --kubernetes-version v1.18.5
```
![k8s-version](./pics/k8s-version.png)
### On the master node, create the script get-k8s-images.sh under /root/k8s and run it to pull the images:
```bash
#!/bin/bash
# Script For Quick Pull K8S Docker Images
KUBE_VERSION=v1.18.5
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.6.7
ETCD_VERSION=3.4.3-0
# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION
# remove the original tags; the underlying images are not deleted
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
```
Back up the images for use on the other nodes:
```bash
docker save k8s.gcr.io/kube-proxy:v1.18.5 \
k8s.gcr.io/kube-apiserver:v1.18.5 \
k8s.gcr.io/kube-controller-manager:v1.18.5 \
k8s.gcr.io/kube-scheduler:v1.18.5 \
k8s.gcr.io/pause:3.2 \
k8s.gcr.io/coredns:1.6.7 \
k8s.gcr.io/etcd:3.4.3-0 > k8s-imagesV1.18.5.tar
```
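A short sketch for shipping the tarball to the workers and loading it there; it assumes the SSH trust from section 1.10 and that Docker is already installed on the workers:
```bash
#!/bin/bash
# Copy the image tarball to each worker node and load it into Docker there.
for node in k8snode1 k8snode2; do
  scp k8s-imagesV1.18.5.tar root@$node:/root/
  ssh root@$node "docker load < /root/k8s-imagesV1.18.5.tar"
done
```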
## 3.5 Initialize the kube-master Node
### Set the network parameters
```bash
#!/bin/bash
# run only on the master node
# replace x.x.x.x with the master node's internal IP
echo "export MASTER_IP=x.x.x.x" >> /etc/profile
# replace apiserver.demo with whatever dnsName you prefer
echo "export APISERVER_NAME=apiserver.demo" >> /etc/profile
# pod network CIDR: created by kubernetes during installation; it must not already exist in your physical network
echo "export POD_SUBNET=10.100.0.1/16" >> /etc/profile
# service network CIDR: created by kubernetes during installation; it must not already exist in your physical network
echo "export SERVICE_SUBNET=10.96.0.0/16" >> /etc/profile
source /etc/profile
echo "127.0.0.1 $(hostname)" >> /etc/hosts
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
systemctl restart network
```
### Set the kubelet default cgroup driver
```bash
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# note: kubelet will crash-loop until kubeadm init completes; that is expected at this point
systemctl restart kubelet
```
### Generate a kubeadm init configuration file
[Optional] Only needed if you want to customize the init configuration
```bash
kubeadm config print init-defaults > init.default.yaml
```
### Run the preflight checks
```bash
kubeadm init phase preflight [--config kubeadm-config.yaml]
```
### Initialize the master
```bash
#!/bin/bash
# run only on the master node
KUBE_VERSION=v1.18.5
# abort the script on any error
set -e
if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
echo -e "\033[31;1m请确保您已经设置了环境变量 POD_SUBNET 和 APISERVER_NAME \033[0m"
echo 当前POD_SUBNET=$POD_SUBNET
echo 当前APISERVER_NAME=$APISERVER_NAME
exit 1
fi
# full list of configuration options: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: $KUBE_VERSION
imageRepository: k8s.gcr.io
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "${SERVICE_SUBNET}"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF
# kubeadm init
# depending on your network speed, this takes roughly 3 to 10 minutes
kubeadm init --config=kubeadm-config.yaml --upload-certs
# configure kubectl for root
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config
```
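kubeadm init prints a kubeadm join command at the end of its output; the worker nodes will need it. If it gets lost, a fresh one can be generated at any time:
```bash
# run on the master to print a valid worker join command
kubeadm token create --print-join-command
```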
### Grant kubectl access to a regular (non-root) user
```bash
su lotusroot
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/admin.conf
sudo chown $(id -u):$(id -g) $HOME/.kube/admin.conf
echo "export KUBECONFIG=$HOME/.kube/admin.conf" >> ~/.bashrc
exit
```
### Configure kubectl credentials on the master
```bash
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
. /etc/profile
```
Without this step, kubectl fails with:<br>
The connection to the server localhost:8080 was refused - did you specify the right host or port?
## 3.6 Install a Pod Network Add-on
```bash
# reference: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "installing calico-3.13.1"
rm -f calico-3.13.1.yaml
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml
# OR install flannel instead -- apply exactly one network add-on, not both
echo "installing flannel"
# download the latest flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
```
## 3.7 Check the kube-master Node Status
```bash
# run the following and wait 3-10 minutes, until every pod is in the Running state
watch kubectl get pod -n kube-system -o wide
# check the result of the master initialization
kubectl get nodes -o wide
```
# 4. Installing an Ingress Controller
## Run on the master node
```bash
# run only on the master node
# install
kubectl apply -f nginx-ingress.yaml
# uninstall -- only if you decide to switch to a different Ingress Controller
kubectl delete -f nginx-ingress.yaml
```
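The controller runs as a DaemonSet, so one pod should come up per eligible node:
```bash
# wait until every nginx-ingress pod is Running
kubectl get pods -n nginx-ingress -o wide
```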
## Configure DNS resolution
Resolve the wildcard domain *.demo.yourdomain.com to the IP address of one of the worker nodes, e.g. k8snode2 (172.16.1.22); k8snode1 (172.16.1.21) works equally well
## Verify the setup
Browsing to a.demo.yourdomain.com should return a 404 Not Found page, since no Ingress resources have been created yet
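To see the controller route real traffic, a minimal Ingress can be applied. This is only a sketch: the Service name web-demo and port 80 are placeholders, so substitute a Service that actually exists in your cluster:
```bash
# a minimal, hypothetical Ingress; web-demo is a placeholder Service name
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: a.demo.yourdomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-demo
          servicePort: 80
EOF
```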
## nginx-ingress.yaml
```yaml
# for production use, see https://github.com/nginxinc/kubernetes-ingress/blob/v1.5.5/docs/installation.md and customize further for your environment
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
---
apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: nginx-ingress
type: Opaque
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN2akNDQWFZQ0NRREFPRjl0THNhWFhEQU5CZ2txaGtpRzl3MEJBUXNGQURBaE1SOHdIUVlEVlFRRERCWk8KUjBsT1dFbHVaM0psYzNORGIyNTBjbTlzYkdWeU1CNFhEVEU0TURreE1qRTRNRE16TlZvWERUSXpNRGt4TVRFNApNRE16TlZvd0lURWZNQjBHQTFVRUF3d1dUa2RKVGxoSmJtZHlaWE56UTI5dWRISnZiR3hsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUwvN2hIUEtFWGRMdjNyaUM3QlBrMTNpWkt5eTlyQ08KR2xZUXYyK2EzUDF0azIrS3YwVGF5aGRCbDRrcnNUcTZzZm8vWUk1Y2Vhbkw4WGM3U1pyQkVRYm9EN2REbWs1Qgo4eDZLS2xHWU5IWlg0Rm5UZ0VPaStlM2ptTFFxRlBSY1kzVnNPazFFeUZBL0JnWlJVbkNHZUtGeERSN0tQdGhyCmtqSXVuektURXUyaDU4Tlp0S21ScUJHdDEwcTNRYzhZT3ExM2FnbmovUWRjc0ZYYTJnMjB1K1lYZDdoZ3krZksKWk4vVUkxQUQ0YzZyM1lma1ZWUmVHd1lxQVp1WXN2V0RKbW1GNWRwdEMzN011cDBPRUxVTExSakZJOTZXNXIwSAo1TmdPc25NWFJNV1hYVlpiNWRxT3R0SmRtS3FhZ25TZ1JQQVpQN2MwQjFQU2FqYzZjNGZRVXpNQ0F3RUFBVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWpLb2tRdGRPcEsrTzhibWVPc3lySmdJSXJycVFVY2ZOUitjb0hZVUoKdGhrYnhITFMzR3VBTWI5dm15VExPY2xxeC9aYzJPblEwMEJCLzlTb0swcitFZ1U2UlVrRWtWcitTTFA3NTdUWgozZWI4dmdPdEduMS9ienM3bzNBaS9kclkrcUI5Q2k1S3lPc3FHTG1US2xFaUtOYkcyR1ZyTWxjS0ZYQU80YTY3Cklnc1hzYktNbTQwV1U3cG9mcGltU1ZmaXFSdkV5YmN3N0NYODF6cFErUyt1eHRYK2VBZ3V0NHh3VlI5d2IyVXYKelhuZk9HbWhWNThDd1dIQnNKa0kxNXhaa2VUWXdSN0diaEFMSkZUUkk3dkhvQXprTWIzbjAxQjQyWjNrN3RXNQpJUDFmTlpIOFUvOWxiUHNoT21FRFZkdjF5ZytVRVJxbStGSis2R0oxeFJGcGZnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdi91RWM4b1JkMHUvZXVJTHNFK1RYZUprckxMMnNJNGFWaEMvYjVyYy9XMlRiNHEvClJOcktGMEdYaVN1eE9ycXgrajlnamx4NXFjdnhkenRKbXNFUkJ1Z1B0ME9hVGtIekhvb3FVWmcwZGxmZ1dkT0EKUTZMNTdlT1l0Q29VOUZ4amRXdzZUVVRJVUQ4R0JsRlNjSVo0b1hFTkhzbysyR3VTTWk2Zk1wTVM3YUhudzFtMApxWkdvRWEzWFNyZEJ6eGc2clhkcUNlUDlCMXl3VmRyYURiUzc1aGQzdUdETDU4cGszOVFqVUFQaHpxdmRoK1JWClZGNGJCaW9CbTVpeTlZTW1hWVhsMm0wTGZzeTZuUTRRdFFzdEdNVWozcGJtdlFmazJBNnljeGRFeFpkZFZsdmwKMm82MjBsMllxcHFDZEtCRThCay90elFIVTlKcU56cHpoOUJUTXdJREFRQUJBb0lCQVFDZklHbXowOHhRVmorNwpLZnZJUXQwQ0YzR2MxNld6eDhVNml4MHg4Mm15d1kxUUNlL3BzWE9LZlRxT1h1SENyUlp5TnUvZ2IvUUQ4bUFOCmxOMjRZTWl0TWRJODg5TEZoTkp3QU5OODJDeTczckM5bzVvUDlkazAvYzRIbjAzSkVYNzZ5QjgzQm9rR1FvYksKMjhMNk0rdHUzUmFqNjd6Vmc2d2szaEhrU0pXSzBwV1YrSjdrUkRWYmhDYUZhNk5nMUZNRWxhTlozVDhhUUtyQgpDUDNDeEFTdjYxWTk5TEI4KzNXWVFIK3NYaTVGM01pYVNBZ1BkQUk3WEh1dXFET1lvMU5PL0JoSGt1aVg2QnRtCnorNTZud2pZMy8yUytSRmNBc3JMTnIwMDJZZi9oY0IraVlDNzVWYmcydVd6WTY3TWdOTGQ5VW9RU3BDRkYrVm4KM0cyUnhybnhBb0dCQU40U3M0ZVlPU2huMVpQQjdhTUZsY0k2RHR2S2ErTGZTTXFyY2pOZjJlSEpZNnhubmxKdgpGenpGL2RiVWVTbWxSekR0WkdlcXZXaHFISy9iTjIyeWJhOU1WMDlRQ0JFTk5jNmtWajJTVHpUWkJVbEx4QzYrCk93Z0wyZHhKendWelU0VC84ajdHalRUN05BZVpFS2FvRHFyRG5BYWkyaW5oZU1JVWZHRXFGKzJyQW9HQkFOMVAKK0tZL0lsS3RWRzRKSklQNzBjUis3RmpyeXJpY05iWCtQVzUvOXFHaWxnY2grZ3l4b25BWlBpd2NpeDN3QVpGdwpaZC96ZFB2aTBkWEppc1BSZjRMazg5b2pCUmpiRmRmc2l5UmJYbyt3TFU4NUhRU2NGMnN5aUFPaTVBRHdVU0FkCm45YWFweUNweEFkREtERHdObit3ZFhtaTZ0OHRpSFRkK3RoVDhkaVpBb0dCQUt6Wis1bG9OOTBtYlF4VVh5YUwKMjFSUm9tMGJjcndsTmVCaWNFSmlzaEhYa2xpSVVxZ3hSZklNM2hhUVRUcklKZENFaHFsV01aV0xPb2I2NTNyZgo3aFlMSXM1ZUtka3o0aFRVdnpldm9TMHVXcm9CV2xOVHlGanIrSWhKZnZUc0hpOGdsU3FkbXgySkJhZUFVWUNXCndNdlQ4NmNLclNyNkQrZG8wS05FZzFsL0FvR0FlMkFVdHVFbFNqLzBmRzgrV3hHc1RFV1JqclRNUzRSUjhRWXQKeXdjdFA4aDZxTGxKUTRCWGxQU05rMXZLTmtOUkxIb2pZT2pCQTViYjhibXNVU1BlV09NNENoaFJ4QnlHbmR2eAphYkJDRkFwY0IvbEg4d1R0alVZYlN5T294ZGt5OEp0ek90ajJhS0FiZHd6NlArWDZDODhjZmxYVFo5MWpYL3RMCjF3TmRKS2tDZ1lCbyt0UzB5TzJ2SWFmK2UwSkN5TGhzVDQ5cTN3Zis2QWVqWGx2WDJ1VnRYejN5QTZnbXo5aCsKcDNlK2JMRUxwb3B0WFhNdUFRR0xhUkcrYlNNcjR5dERYbE5ZSndUeThXczNKY3dlSTdqZVp2b0ZpbmNvVlVIMwphdmxoTUVCRGYxSjltSDB5cDBwWUNaS2ROdHNvZEZtQktzVEtQMjJhTmtsVVhCS3gyZzR6cFE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  server-names-hash-bucket-size: "1024"
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - update
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - k8s.nginx.org
  resources:
  - virtualservers
  - virtualserverroutes
  verbs:
  - list
  - watch
  - get
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: nginx-ingress
roleRef:
  kind: ClusterRole
  name: nginx-ingress
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9113"
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:1.5.5
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: prometheus
          containerPort: 9113
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        #- -v=3 # Enables extensive logging. Useful for troubleshooting.
        #- -report-ingress-status
        #- -external-service=nginx-ingress
        #- -enable-leader-election
        - -enable-prometheus-metrics
        #- -enable-custom-resources
```