Cloud-Native in Action

Cloud Platform

Buy servers on Alibaba Cloud; pay-as-you-go instances are fine for this walkthrough.

# Install nginx
yum install nginx
systemctl start nginx

# Enable nginx on boot
systemctl enable nginx

# After nginx is running, remember to open the matching port in the security group, otherwise it cannot be reached

Close all the ports (in the security group) when you are done testing.

Containerization

VPC Private Networks

Instances in the same VPC can reach each other; instances in different VPCs cannot.


Buy three servers directly.
Assign the servers to the private VPC: select the network you just created and remember to check the option to assign a private IPv4 address.

All three servers can reach each other over the private network.
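A quick way to confirm this (a minimal check; the private IPs 172.31.0.204/205/206 here are just placeholders in the style used later in these notes):

# On any one of the three servers, ping the private IPs of the other two
ping -c 3 172.31.0.205
ping -c 3 172.31.0.206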

Docker Setup

Remove any old Docker packages

sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine

Configure the yum repository

sudo yum install -y yum-utils
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker

sudo yum install -y docker-ce docker-ce-cli containerd.io


# Use the pinned versions below when the host will run Kubernetes
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

Start Docker

systemctl enable docker --now

Configure the Alibaba Cloud registry mirror

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ylmjgfp5.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Setting Up a Kubernetes Cluster

Create a VPC private network

Create a vSwitch

Create three server instances (instance password used here: 12345Xia)

Base Environment

Base environment setup; run this on every machine.

# Set a hostname on each machine
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
hostnamectl set-hostname k8s-master

# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

Install kubelet, kubeadm, and kubectl

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF


sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

sudo systemctl enable --now kubelet

kubelet will now restart every few seconds: it is stuck in a crash loop waiting for instructions from kubeadm.

Check it with systemctl status kubelet; you will see it repeatedly restarting and starting again.
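If you want more detail than systemctl status gives (a hedged suggestion, not part of the original steps), the kubelet unit logs show the crash loop directly:

# Follow the kubelet logs; expect repeated startup errors until kubeadm init/join writes its configuration
journalctl -xeu kubelet -f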

Initialize the Master Node

# Add the master host mapping on every machine; replace the IP with your own
echo "172.31.0.204 cluster-endpoint" >> /etc/hosts



# Initialize the control plane (run this on the master node only; do not run it on the workers)
kubeadm init \
--apiserver-advertise-address=172.31.0.204 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

# None of the network ranges may overlap

Once the images are pulled, the master node prints the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join cluster-endpoint:6443 --token 1zh4aw.zsett7epmg4ka8g6 \
--discovery-token-ca-cert-hash sha256:ac3d763eeb14a03d137c501f392346773b9a5e36d2b262789aa4c98d1c27dda6 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token 1zh4aw.zsett7epmg4ka8g6 \
--discovery-token-ca-cert-hash sha256:ac3d763eeb14a03d137c501f392346773b9a5e36d2b262789aa4c98d1c27dda6

Then run the following on the master node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Common Commands

# List all cluster nodes
kubectl get nodes

# Create resources in the cluster from a config file
kubectl apply -f xxxx.yaml

# What applications are deployed in the cluster?
# docker ps is roughly equivalent to: kubectl get pods -A
# A running application is a "container" in Docker and a "Pod" in Kubernetes
# List running pods
kubectl get pods -A

Install the network add-on (run on the master node)
Calico

curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O

kubectl apply -f calico.yaml

# Check the running pods
kubectl get pods -A

Join the master node (run on node1 and node2)

The join command below is valid for 24 hours.
If it has expired, generate a new token (and a fresh join command):
kubeadm token create --print-join-command

kubeadm join cluster-endpoint:6443 --token 1zh4aw.zsett7epmg4ka8g6 \
--discovery-token-ca-cert-hash sha256:ac3d763eeb14a03d137c501f392346773b9a5e36d2b262789aa4c98d1c27dda6

Then list all nodes from the master:

# List all nodes
kubectl get nodes

# Check application status
kubectl get pod -A

A handy Linux command: refresh the pod status every second.

watch -n 1 kubectl get pod -A

Testing the cluster's self-healing

# Reboot the cloud server
reboot

# Check the status again
kubectl get pod -A

Deploy the Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

Expose the access port

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort
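For reference, the edited Service spec ends up looking roughly like this (a sketch; only the type field is changed, the ports come from the dashboard's recommended.yaml):

spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort   # was: ClusterIP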

kubectl get svc -A |grep kubernetes-dashboard
## Find the NodePort and open it in the security group


Open port 30077 in the security group (the NodePort assigned in this example).

Access: https://<any-cluster-node-IP>:<port>, e.g. https://47.106.11.25:30077

If Chrome shows "Your connection is not private", you can type thisisunsafe anywhere on the warning page to bypass it.

Create an access account

# Create an access account; put the following into a yaml file: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
kubectl apply -f dash.yaml

Get the login token

# Retrieve the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

token

eyJhbGciOiJSUzI1NiIsImtpZCI6IjVTb0ItUllsSEU5cjZkbXg0RlVEMmlnQnRHRzJLcHRDOFBXVi1yX3VxZEkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWpocnd6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxNDc0YTFlZS03NGE5LTRjZWUtOWZjNS1lNjU2Y2I0ZmE4NjYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.cIVZrURSJXK9iTy9CM5lH5PbyqOihS1w61sgqiIF0bAfjpNrqBqlbniZXKWKcbtwEjiK5OVI7XWWgKgW7sCMkEVU_vpbOqwu62JuhLJDAXE1KBKruz-UW4PfTzYENCCTfdCozngLBok0VTEsyB9GRH_V3n1mhBKpsc3Ya0wstXH2u9nyDx4nOZkmy9PrWZXY9WIVFlN9OQ-1BouW_RtFE69sMxAcL2p2wpW-qlr76kgNOUD8oghS1BkAETKkRlChQEZKuWYlORpvy56JphZXIrGYPIVOfl0FALxb7baLkiahYS5UiZNvTMvmO0ii6OaSpZXY_blwMvBubU5B3s_g0A

dashboard


Namespaces

Creating a namespace with kubectl

# List namespaces
kubectl get ns
kubectl get namespace

# List all applications
kubectl get pods -A
# This lists only the applications in the default namespace
kubectl get pods

# List pods in a specific namespace
kubectl get pod -n kubernetes-dashboard

# Create a namespace
kubectl create ns hello
kubectl get ns

# Delete a namespace
kubectl delete ns hello

Creating a namespace from YAML

# Create a namespace from a yaml file
vi hello.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: hello

kubectl apply -f hello.yaml

# Delete the namespace created from the yaml file
kubectl delete -f hello.yaml

Pod

An application running in Kubernetes is called a Pod.
A single Pod can run multiple containers.

# Create an nginx pod
kubectl run mynginx --image=nginx

# List pods in the default namespace
kubectl get pod

# List all pods
kubectl get pod -A

# You can also name the default namespace explicitly; it is used when none is given
kubectl get pod -n default

# Describe the mynginx pod in detail
kubectl describe pod mynginx

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m29s default-scheduler Successfully assigned default/mynginx to k8s-node2
Normal Pulling 7m28s kubelet Pulling image "nginx"
Normal Pulled 7m20s kubelet Successfully pulled image "nginx" in 8.585906172s
Normal Created 7m19s kubelet Created container mynginx
Normal Started 7m19s kubelet Started container mynginx
# The Scheduled event shows that nginx was placed on k8s-node2

# A STATUS of Running means the application is up
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mynginx 1/1 Running 0 3m47s

# On k8s-node2, look at the nginx container
docker ps | grep mynginx

# Delete the pod
kubectl delete pod mynginx
# The full form can also specify the namespace
kubectl delete pod mynginx -n xxx

# View the logs
kubectl logs mynginx
# Follow the logs continuously
kubectl logs -f mynginx

# Show more detail, including the pod's IP
# Kubernetes assigns every pod its own IP
kubectl get pod -owide
kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mynginx 1/1 Running 0 3m38s 192.168.169.133 k8s-node2 <none> <none>

# Use the pod IP plus the port of the container running inside it
# Request the IP (works from any node in the cluster)
curl 192.168.169.133

# Enter the container
kubectl exec -it mynginx -- /bin/bash

# Change something
cd /usr/share/nginx/html/
echo "roudoukouya" > index.html
exit
[root@k8s-master ~]# curl 192.168.169.133
roudoukouya

Creating a Pod from YAML

vi pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mynginx
  name: mynginx
spec:
  containers:
  - image: nginx
    name: mynginx

# Create it
kubectl apply -f pod.yaml

# Describe it; this time the container was again scheduled onto node2
kubectl describe pod mynginx
Normal Scheduled 72s default-scheduler Successfully assigned default/mynginx to k8s-node2

# Delete everything defined in the yaml
kubectl delete -f pod.yaml

# Check again
kubectl get pod

Creating a Pod from the web UI

Paste the YAML into the dashboard form and click Upload.

Multiple containers in one Pod

vi multicontainer-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: tomcat:8.5.68
    name: tomcat

# Create it
kubectl apply -f multicontainer-pod.yaml

# Describe it
kubectl describe pod myapp

# Look up the pod IP
kubectl get pod -o wide
curl 192.168.36.67:80
curl 192.168.36.67:8080

# Now try putting two nginx containers in the same pod
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp02
  name: myapp02
spec:
  containers:
  - image: nginx
    name: nginx01
  - image: nginx
    name: nginx02

# Check the pod: it shows Error, and the two nginx containers never run at the same time
kubectl get pod
myapp02 1/2 Error 1 52s

# The logs make it obvious: the port nginx02 wants is already taken, so nginx02 cannot start
kubectl logs myapp02 nginx01
kubectl logs myapp02 nginx02
(98: Address already in use)

Deployment

# First delete the pods created earlier
kubectl delete pod myapp myapp02 mynginx

# Create an nginx pod
kubectl run mynginx --image=nginx

kubectl create deployment mytomcat --image=tomcat:8.5.68

# Watch the pods
watch -n 1 kubectl get pod

# Delete the two applications just created
kubectl delete pod mynginx
kubectl delete pod mytomcat-6f5f895f4f-6m68l
kubectl delete pod mytomcat-6f5f895f4f-bprxz

# Notice that every time the tomcat pod is deleted, Kubernetes creates a new one
# This is Kubernetes' self-healing

# To really remove an application created by a Deployment, delete the Deployment itself
kubectl get deploy
kubectl delete deploy mytomcat
kubectl delete deployment my-dep-01

Multiple Replicas

# Create an application with multiple replicas; --replicas sets the count
# The replicas are spread across the machines
kubectl create deploy my-dep --image=nginx --replicas=3

# Show the details
kubectl get pod -o wide

Creating multiple replicas from the web form

You can also create them from YAML in the web UI:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx

Scaling Up and Down

# Create
kubectl create deploy my-dep --image=nginx --replicas=3

# Scale up / down
kubectl scale deploy/my-dep --replicas=5
kubectl scale deploy/my-dep --replicas=2

# Watch
watch -n 1 kubectl get pod

# Get the deployments
kubectl get deploy
kubectl get deployment

# Alternatively, edit the yaml to change the replica count
kubectl edit deploy my-dep

spec:
  progressDeadlineSeconds: 600
  replicas: 2

# Save and quit
:wq

Self-Healing & Failover

# On the master node
kubectl get pod -o wide
my-dep-5b7868d854-8dbvz k8s-node1 ...

# On node1
docker ps | grep my-dep-5b7868d854-8dbvz
66ba6931e335 nginx ...

# Stop that container
docker stop 66ba6931e335
# Even though the container was stopped, a moment later it heals itself and is running again

# Watch
kubectl get pod -w

# Now shut node1 down; about five minutes later you can watch the pods on node1 being moved to node2

Rolling Updates

# Dump the deployment as yaml
kubectl get deploy my-dep -oyaml

# Watch
kubectl get pod -w

# Rolling update: a container with the new version is started, then the old one is stopped
kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record

# Check the yaml again: the nginx image has been replaced with the requested version
kubectl get deploy my-dep -oyaml
spec:
  containers:
  - image: nginx:1.16.1

Rolling Back

# Show the revision history
kubectl rollout history deployment/my-dep

REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record=true

# Roll back to the first revision
kubectl rollout undo deploy/my-dep --to-revision=1

# Check the nginx version now: it is back to the plain (latest) nginx image
kubectl get deploy/my-dep -oyaml | grep image

f:imagePullPolicy: {}
f:image: {}
- image: nginx
imagePullPolicy: Always


Workloads

Service

ClusterIP

# Show pod details
kubectl get pod -owide

# Hitting each pod IP returns a different nginx page
[root@k8s-master ~]# curl 192.168.36.95
3333
[root@k8s-master ~]# curl 192.168.169.153
1111
[root@k8s-master ~]# curl 192.168.36.94
2222

# Expose port 80 of the nginx pods as port 8000 of a Service
kubectl expose deploy my-dep --port=8000 --target-port=80

# Look up the CLUSTER-IP
kubectl get service

# Requests to the CLUSTER-IP from inside the cluster are load-balanced across the pods
[root@k8s-master ~]# curl 10.96.69.174:8000
1111
[root@k8s-master ~]# curl 10.96.69.174:8000
3333
[root@k8s-master ~]# curl 10.96.69.174:8000
2222

# The Service IP can also be reached from inside a container
root@my-dep-5b7868d854-74qft:/usr/share/nginx/html# curl 10.96.69.174:8000
3333
root@my-dep-5b7868d854-74qft:/usr/share/nginx/html# curl 10.96.69.174:8000
2222
root@my-dep-5b7868d854-74qft:/usr/share/nginx/html# curl 10.96.69.174:8000
1111

# Inside a container you can also use the DNS name (format: service-name.namespace.svc)
root@my-dep-5b7868d854-74qft:/usr/share/nginx/html# curl my-dep.default.svc:8000
3333
root@my-dep-5b7868d854-74qft:/usr/share/nginx/html# curl my-dep.default.svc:8000
2222
root@my-dep-5b7868d854-74qft:/usr/share/nginx/html# curl my-dep.default.svc:8000
1111

# The DNS name does not resolve from the node shell, though
[root@k8s-master ~]# curl my-dep.default.svc:8000
curl: (6) Could not resolve host: my-dep.default.svc; Unknown error

# Create a my-tomcat application
kubectl create deploy my-tomcat --image=tomcat

# Check the services
kubectl get service

# Test the Service's service discovery:
# scale the deployment down to 2 pods and curl again; now only 1111 and 2222 show up,
# never 3333
[root@k8s-master ~]# curl 10.96.69.174:8000
1111
[root@k8s-master ~]# curl 10.96.69.174:8000
2222

# Scale back up to 3 pods and curl again; the new pod's default nginx page also appears
[root@k8s-master ~]# curl 10.96.69.174:8000
1111
[root@k8s-master ~]# curl 10.96.69.174:8000
2222
[root@k8s-master ~]# curl 10.96.69.174:8000
...
<p><em>Thank you for using nginx.</em></p>
...


NodePort

# Check the services
kubectl get svc

# Delete the old service
kubectl delete svc my-dep

# Re-create the service with type NodePort
kubectl expose deploy my-dep --port=8000 --target-port=80 --type=NodePort

# Check that the pods are running
kubectl get pod

# Look up the service IP and node port
kubectl get service

NodePorts are allocated from the 30000-32767 range.


Different requests from the browser return different results because the Service load-balances the requests across the pods.

Ingress


Installation

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml

# Change the image
vi deploy.yaml
# Replace the controller image value with:
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0

# Apply the yaml
kubectl apply -f deploy.yaml

# Check the result
kubectl get pod,svc -n ingress-nginx

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.96.216.183 <none> 80:30391/TCP,443:30303/TCP 4s
service/ingress-nginx-controller-admission ClusterIP 10.96.139.106 <none> 443/TCP 4s

# Finally, remember to open the NodePorts exposed by the svc in the security group

Then access it. With 80:30391/TCP,443:30303/TCP, HTTP (port 80) is reachable on NodePort 30391 and HTTPS (port 443) on NodePort 30303:
http://47.107.101.255:30391/
https://47.107.101.255:30303/


You get a 404, but that already proves the request reached the ingress-nginx controller.

# Check the services
kubectl get svc -A

# Check the nodes
kubectl get nodes

# Write the yaml
vi test.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000

# Apply the yaml
kubectl apply -f test.yaml

# Look up the service IPs
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-server ClusterIP 10.96.185.158 <none> 8000/TCP 19s
nginx-demo ClusterIP 10.96.3.222 <none> 8000/TCP 19s

# Request the service IPs
curl 10.96.3.222:8000
curl 10.96.185.158:8000


Domain-Based Access

# Write another config file
vi ingress-rule.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"  # Requests are forwarded to the service below; the service must be able to handle this path, otherwise it returns 404
        backend:
          service:
            name: nginx-demo  # e.g. a Java backend; use path rewriting to strip the nginx prefix
            port:
              number: 8000

# Apply the config
kubectl apply -f ingress-rule.yaml

# Check the ingress and the bound host names
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-host-bar nginx hello.atguigu.com,demo.atguigu.com 80 33s

# Check the ports
kubectl get pod,svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.96.216.183 <none> 80:30391/TCP,443:30303/TCP 12h
service/ingress-nginx-controller-admission ClusterIP 10.96.139.106 <none> 443/TCP 12h

# Then edit the Windows hosts file
C:\Windows\System32\drivers\etc

47.107.101.255 hello.atguigu.com
47.107.101.255 demo.atguigu.com

When the request's host name is hello.atguigu.com, it is forwarded to hello-server.
When it is demo.atguigu.com, it is forwarded to nginx-demo.
Then open in the browser:
http://hello.atguigu.com:30391/
http://demo.atguigu.com:30391/
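If you would rather not edit the hosts file, you can set the Host header explicitly with curl from any machine that can reach the node (a quick sketch, not part of the original steps):

# Send the Host header so the Ingress rule matches without a hosts entry
curl -H "Host: hello.atguigu.com" http://47.107.101.255:30391/
curl -H "Host: demo.atguigu.com"  http://47.107.101.255:30391/nginx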

# Check the ingress
kubectl get ing

# Edit the ingress and change the path to /nginx
kubectl edit ing ingress-host-bar

- host: demo.atguigu.com
  http:
    paths:
    - backend:
        service:
          name: nginx-demo
          port:
            number: 8000
      path: /nginx
      pathType: Prefix

:wq

# Open the browser again

Request http://demo.atguigu.com:30391

Request http://demo.atguigu.com:30391/nginx

The second request returns a 404 page that includes the nginx version number.
The first request's 404 has no version number, because it is answered by the ingress controller's own nginx.
The second request is handled by nginx-demo.

# Enter the nginx container and make a change
cd /usr/share/nginx/html/

/usr/share/nginx/html# echo roudoukouya > nginx

# Request the path again; this time the nginx file is served (downloaded)

Path Rewriting

vi ingress-rule.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx(/|$)(.*)"  # Requests are forwarded to the service below; the rewrite strips the nginx prefix before the service sees the path
        backend:
          service:
            name: nginx-demo  # e.g. a Java backend that does not know about the /nginx prefix
            port:
              number: 8000

# Delete the resources created from the old config
kubectl delete -f ingress-rule.yaml
# Create them again
kubectl apply -f ingress-rule.yaml

# Check
kubectl get ing

Now a request to http://demo.atguigu.com:30391/nginx is rewritten to http://demo.atguigu.com:30391/ and you get the default welcome page from nginx's document root.

Rate Limiting

# Add another entry to the hosts file
47.107.101.255 hello.atguigu.com
47.107.101.255 demo.atguigu.com
47.107.101.255 haha.atguigu.com

# Create a new yaml file
vi ingress-rule-2.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "haha.atguigu.com"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

# Apply the yaml
kubectl apply -f ingress-rule-2.yaml

# Check that it was created
kubectl get ing

Open http://haha.atguigu.com:30391/ in the browser

and hammer F5: once the per-second limit is exceeded, requests start failing.
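A scripted way to watch the limit kick in (a rough sketch; with limit-rps: "1", ingress-nginx typically answers the excess requests with HTTP 503):

# Fire 20 quick requests and print only the status codes;
# expect a mix of 200s and 503s once the limit is exceeded
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" http://haha.atguigu.com:30391/
done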

Setting Up NFS

PV&PVC

########################################## Install on every node
yum install -y nfs-utils

########################################## Master node
# On the nfs master node
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

mkdir -p /nfs/data
systemctl enable rpcbind --now
systemctl enable nfs-server --now
# Reload the exports
exportfs -r

# Show the exported paths
exportfs

########################################## Worker nodes
# Use the master's private IP address
ip addr
# List the paths the master exports for mounting
showmount -e 172.31.0.204

# Mount the shared directory from the nfs server onto the local path /nfs/data
mkdir -p /nfs/data

# Again, use the master's private IP address
mount -t nfs 172.31.0.204:/nfs/data /nfs/data

# Write a test file
echo "hello nfs server" > /nfs/data/test.txt

# Check on every node: the test file is now visible everywhere
cat /nfs/data/test.txt

Mounting a Volume Directly (raw NFS)

vi deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-pv-demo
  name: nginx-pv-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv-demo
  template:
    metadata:
      labels:
        app: nginx-pv-demo
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        nfs:
          server: 172.31.0.204
          path: /nfs/data/nginx-pv

# Apply the config
kubectl apply -f deploy.yaml

# Describe
kubectl describe pod nginx-pv-demo-5f8d846496-fjph9
# The mount failed: /nfs/data/nginx-pv has to exist before it can be mounted
# Output: mount.nfs: mounting 172.31.0.204:/nfs/data/nginx-pv failed, reason given by server: No such file or directory

# Create nginx-pv
cd /nfs/data/
mkdir -p nginx-pv

# Delete
kubectl delete deploy nginx-pv-demo

# Create again
cd ~
kubectl apply -f deploy.yaml

# Get the pod names
kubectl get pod

# Write some text into the shared directory
echo roudoukouya > /nfs/data/nginx-pv/index.html

# Enter the containers
kubectl exec -it nginx-pv-demo-5f8d846496-cwjsd -- /bin/bash
cat /usr/share/nginx/html/index.html
# roudoukouya
kubectl exec -it nginx-pv-demo-5f8d846496-n795r -- /bin/bash
cat /usr/share/nginx/html/index.html
# roudoukouya

# Delete the deployment created from deploy.yaml
kubectl delete -f deploy.yaml

# On the nfs master node
mkdir -p /nfs/data/01
mkdir -p /nfs/data/02
mkdir -p /nfs/data/03

# Create the PV config file
vi pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01
    server: 172.31.0.204
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 172.31.0.204
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 172.31.0.204

# Apply
kubectl apply -f pv.yaml

# List the PVs
kubectl get pv
kubectl get persistentvolume
# NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
# pv01-10m 10M RWX Retain Available nfs 26s
# pv02-1gi 1Gi RWX Retain Available nfs 26s
# pv03-3gi 3Gi RWX Retain Available nfs 26s

# Create the PVC config file
vi pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs

# Apply
kubectl apply -f pvc.yaml

# List the PVs: the 1Gi volume is now Bound to the claim
kubectl get persistentvolume
# NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
# pv01-10m 10M RWX Retain Available nfs 3m25s
# pv02-1gi 1Gi RWX Retain Bound default/nginx-pvc nfs 3m25s
# pv03-3gi 3Gi RWX Retain Available nfs 3m25s

# Delete the resources bound by pvc.yaml
kubectl delete -f pvc.yaml

# Check again: the 1Gi volume is now Released
kubectl get persistentvolume
# NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
# pv01-10m 10M RWX Retain Available nfs 5m33s
# pv02-1gi 1Gi RWX Retain Released default/nginx-pvc nfs 5m33s
# pv03-3gi 3Gi RWX Retain Available nfs 5m33s

# Apply once more
kubectl apply -f pvc.yaml

# Check again: this time the 3Gi volume gets bound
kubectl get persistentvolume
# NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
# pv01-10m 10M RWX Retain Available nfs 6m27s
# pv02-1gi 1Gi RWX Retain Released default/nginx-pvc nfs 6m27s
# pv03-3gi 3Gi RWX Retain Bound default/nginx-pvc nfs 6m27s


# Create the deployment yaml
vi dep02.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: nginx-pvc

# Apply
kubectl apply -f dep02.yaml


kubectl get pv,pvc
# NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
# persistentvolume/pv01-10m 10M RWX Retain Available nfs 9m13s
# persistentvolume/pv02-1gi 1Gi RWX Retain Released default/nginx-pvc nfs 9m13s
# persistentvolume/pv03-3gi 3Gi RWX Retain Bound default/nginx-pvc nfs 9m13s

# NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
# persistentvolumeclaim/nginx-pvc Bound pv03-3gi 3Gi RWX nfs 3m30s

ConfigMap

# Create a config file
vi redis.conf

appendonly yes

kubectl create cm redis-conf --from-file=redis.conf

kubectl get cm

rm -rf redis.conf

kubectl get cm redis-conf -oyaml

apiVersion: v1
data:
  redis.conf: |
    appendonly yes
kind: ConfigMap
metadata:
  creationTimestamp: "2023-05-13T03:40:47Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:redis.conf: {}
    manager: kubectl-create
    operation: Update
    time: "2023-05-13T03:40:47Z"
  name: redis-conf
  namespace: default
  resourceVersion: "58168"
  uid: 7efa1eac-ad15-4b21-9322-7a2c92886e74


vi redis.yaml

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    command:
    - redis-server
    - "/redis-master/redis.conf"  # path inside the redis container
    ports:
    - containerPort: 6379
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
  - name: data
    emptyDir: {}
  - name: config
    configMap:
      name: redis-conf
      items:
      - key: redis.conf
        path: redis.conf

# Create the pod
kubectl apply -f redis.yaml

# Check the pod
kubectl get pod

# Enter the redis container
kubectl exec -it redis -- bash
cat /redis-master/redis.conf
# appendonly yes


kubectl get cm

# Edit the ConfigMap (add a requirepass line)
kubectl edit cm redis-conf

apiVersion: v1
data:
  redis.conf: |
    appendonly yes
    requirepass 123456

# Enter the redis container again: after a short delay the edited requirepass shows up in redis.conf
kubectl exec -it redis -- bash
cat /redis-master/redis.conf
# appendonly yes
# requirepass 123456

# Check the live configuration: requirepass has not taken effect; the pod has to be recreated for it to apply
kubectl exec -it redis -- redis-cli
CONFIG GET appendonly
# 1) "appendonly"
# 2) "yes"

CONFIG GET requirepass
# 1) "requirepass"
# 2) ""
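A minimal way to pick up the new setting, assuming the same redis.yaml from above (redis only reads redis.conf at startup, so the pod has to be recreated):

# Recreate the pod so redis-server re-reads the mounted config
kubectl delete -f redis.yaml
kubectl apply -f redis.yaml

# Verify (authenticate with the password we set in the ConfigMap)
kubectl exec -it redis -- redis-cli -a 123456 CONFIG GET requirepass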

Secret

A Secret object stores sensitive information such as passwords, OAuth tokens, and SSH keys. Putting this information in a Secret is safer and more flexible than baking it into a Pod definition or a container image.
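As a small aside (not part of the original walkthrough), creating and reading back a generic Secret looks like this; the name db-pass and its value are made up for illustration:

# Store a literal value as a Secret (base64-encoded, not encrypted)
kubectl create secret generic db-pass --from-literal=password=s3cr3t

# Read it back and decode it
kubectl get secret db-pass -o jsonpath='{.data.password}' | base64 -d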

# Log in to Docker Hub
docker login

# Log out
docker logout

# Then enter your username and password
Username:
Password:

# Commit a container as an image
docker commit b43e2104abd6 roudoukou/k8s-test
# The sha256 digest it returns
# sha256:ba6d93505b636cfc0746b0489c8f2c840b5ec1a66b2c667e8d4b165be02369b0

# Push the image to Docker Hub
docker push roudoukou/k8s-test

# Pull an image
docker pull leifengyang/guignginx:v1.0

# Create the config file
vi mypod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: private-nginx
spec:
  containers:
  - name: private-nginx
    image: leifengyang/guignginx:v1.0
  imagePullSecrets:
  - name: leifengyang-docker

# Apply
kubectl apply -f mypod.yaml

kubectl get pod

kubectl delete -f mypod.yaml


kubectl create secret docker-registry leifengyang-docker \
--docker-username=leifengyang \
--docker-password=Lfy123456 \
--docker-email=534096094@qq.com

kubectl get secret

kubectl get secret leifengyang-docker -oyaml

Once the secret exists, the cluster can use those credentials to pull private container images.

KubeSphere

1. Installing KubeSphere on Kubernetes

Install Docker

Install on all nodes

sudo yum remove docker*
sudo yum install -y yum-utils

# Configure the Docker yum repo
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


# Install a pinned version
sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

# Start Docker now and on boot
systemctl enable docker --now

# Configure the registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ylmjgfp5.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Install Kubernetes

The machines communicate over their private IPs.

Give every machine its own hostname; do not use localhost.

Set the hostname on every node, then disable SELinux and swap as shown below.

# Set each machine's hostname
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

Install kubelet, kubeadm, and kubectl

Run on every node

# Configure the Kubernetes yum repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


# Install kubelet, kubeadm, kubectl
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9

# Start kubelet
sudo systemctl enable --now kubelet

# Add the master host mapping on every machine
echo "172.31.0.204 k8s-master" >> /etc/hosts

Initialize the master node

Run this on the master node only (not on the workers); replace the IP with your master's IP.

kubeadm init \
--apiserver-advertise-address=172.31.0.204 \
--control-plane-endpoint=k8s-master \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

If you make a mistake during deployment, you can reset with the following command and start over.

kubeadm reset

Record the key output

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join k8s-master:6443 --token 5ybzw0.jtbnx7tbb0t83tp9 \
--discovery-token-ca-cert-hash sha256:f83a1db517489e43d02c836953d0cf680ea6a9a0c816567522635df8669c00c2 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token 5ybzw0.jtbnx7tbb0t83tp9 \
--discovery-token-ca-cert-hash sha256:f83a1db517489e43d02c836953d0cf680ea6a9a0c816567522635df8669c00c2

Run on the master node

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the Calico network plugin

Run this on the master node only

# curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
# Use the version below instead; I was stuck on this bug for a whole evening
curl https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O

kubectl apply -f calico.yaml

# Check the running pods
kubectl get pods -A

Run on all worker nodes

# Join the node to the master
kubeadm join k8s-master:6443 --token 5ybzw0.jtbnx7tbb0t83tp9 \
--discovery-token-ca-cert-hash sha256:f83a1db517489e43d02c836953d0cf680ea6a9a0c816567522635df8669c00c2
# List the nodes
kubectl get nodes

NFS filesystem - install nfs-server

Install on every node

# On every machine
yum install -y nfs-utils

Run on the master

# Run the following on the master
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports


# Create the shared directory and start the nfs service
mkdir -p /nfs/data


# On the master
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Reload the exports
exportfs -r


# Check that the export is active
exportfs

NFS filesystem - configure the nfs client (optional)

Run on all worker nodes (replace the IP with your master's IP)

showmount -e 172.31.0.204

mkdir -p /nfs/data

mount -t nfs 172.31.0.204:/nfs/data /nfs/data

NFS filesystem - configure the default storage class

Configure a default StorageClass with dynamic provisioning

Run on the master (replace 172.31.0.204 with your own master IP)

vi sc.yaml

sc.yaml (replace the IP with your master's IP)

## This creates a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV contents when the PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
        # resources:
        #   limits:
        #     cpu: 10m
        #   requests:
        #     cpu: 10m
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 172.31.0.204  ## your own nfs server address
        - name: NFS_PATH
          value: /nfs/data  ## the directory shared by the nfs server
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.31.0.204
          path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f sc.yaml 

# List the StorageClasses
# Confirm the configuration took effect
kubectl get sc

kubectl get sc

# Create a PVC ("claim")
vi pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi

# Apply
kubectl apply -f pvc.yaml

kubectl get pvc

kubectl get pv

metrics-server

vi metrics.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --kubelet-insecure-tls
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
periodSeconds: 10
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100
kubectl apply -f metrics.yaml

kubectl get pod -A

kubectl top nodes

# Show CPU and memory usage per pod
kubectl top pods

kubectl top pods -A

Install KubeSphere

yum install -y wget

yum install -y vim

# Download the config files, or use the two files provided below
wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

# If you paste the content in, create the file with vi first and then edit it with vim
vi kubesphere-installer.yaml

# Edit the configuration (change false to true for the components you want; the file provided below is already edited and can be pasted directly)
vi cluster-configuration.yaml

kubesphere-installer.yaml

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterconfigurations.installer.kubesphere.io
spec:
group: installer.kubesphere.io
versions:
- name: v1alpha1
served: true
storage: true
scope: Namespaced
names:
plural: clusterconfigurations
singular: clusterconfiguration
kind: ClusterConfiguration
shortNames:
- cc

---
apiVersion: v1
kind: Namespace
metadata:
name: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ks-installer
namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ks-installer
rules:
- apiGroups:
- ""
resources:
- '*'
verbs:
- '*'
- apiGroups:
- apps
resources:
- '*'
verbs:
- '*'
- apiGroups:
- extensions
resources:
- '*'
verbs:
- '*'
- apiGroups:
- batch
resources:
- '*'
verbs:
- '*'
- apiGroups:
- rbac.authorization.k8s.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- apiregistration.k8s.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- apiextensions.k8s.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- tenant.kubesphere.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- certificates.k8s.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- devops.kubesphere.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- monitoring.coreos.com
resources:
- '*'
verbs:
- '*'
- apiGroups:
- logging.kubesphere.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- jaegertracing.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- storage.k8s.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- admissionregistration.k8s.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- policy
resources:
- '*'
verbs:
- '*'
- apiGroups:
- autoscaling
resources:
- '*'
verbs:
- '*'
- apiGroups:
- networking.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- config.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- iam.kubesphere.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- notification.kubesphere.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- auditing.kubesphere.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- events.kubesphere.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- core.kubefed.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- installer.kubesphere.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- storage.kubesphere.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- security.istio.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- monitoring.kiali.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- kiali.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- networking.k8s.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- kubeedge.kubesphere.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- types.kubefed.io
resources:
- '*'
verbs:
- '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ks-installer
subjects:
- kind: ServiceAccount
name: ks-installer
namespace: kubesphere-system
roleRef:
kind: ClusterRole
name: ks-installer
apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ks-installer
namespace: kubesphere-system
labels:
app: ks-install
spec:
replicas: 1
selector:
matchLabels:
app: ks-install
template:
metadata:
labels:
app: ks-install
spec:
serviceAccountName: ks-installer
containers:
- name: installer
image: kubesphere/ks-installer:v3.1.1
imagePullPolicy: "Always"
resources:
limits:
cpu: "1"
memory: 1Gi
requests:
cpu: 20m
memory: 100Mi
volumeMounts:
- mountPath: /etc/localtime
name: host-time
volumes:
- hostPath:
path: /etc/localtime
type: ""
name: host-time

cluster-configuration.yaml (the relevant false values have already been changed to true; endpointIps must be changed to your master's IP)

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
name: ks-installer
namespace: kubesphere-system
labels:
version: v3.1.1
spec:
persistence:
storageClass: "" # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
authentication:
jwtSecret: "" # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
local_registry: "" # Add your private registry address if it is needed.
etcd:
monitoring: true # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
endpointIps: 172.31.0.204 # etcd cluster EndpointIps. It can be a bunch of IPs here.
port: 2379 # etcd port.
tlsEnable: true
common:
redis:
enabled: true
openldap:
enabled: true
minioVolumeSize: 20Gi # Minio PVC size.
openldapVolumeSize: 2Gi # openldap PVC size.
redisVolumSize: 2Gi # Redis PVC size.
monitoring:
# type: external # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
es: # Storage backend for logging, events and auditing.
# elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
# elasticsearchDataReplicas: 1 # The total number of data nodes.
elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention time in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
basicAuth:
enabled: false
username: ""
password: ""
externalElasticsearchUrl: ""
externalElasticsearchPort: ""
console:
enableMultiLogin: true # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
port: 30880
alerting: # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
enabled: true # Enable or disable the KubeSphere Alerting System.
# thanosruler:
# replicas: 1
# resources: {}
auditing: # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants.
enabled: true # Enable or disable the KubeSphere Auditing Log System.
devops: # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
enabled: true # Enable or disable the KubeSphere DevOps System.
jenkinsMemoryLim: 2Gi # Jenkins memory limit.
jenkinsMemoryReq: 1500Mi # Jenkins memory request.
jenkinsVolumeSize: 8Gi # Jenkins volume size.
jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters.
jenkinsJavaOpts_Xmx: 512m
jenkinsJavaOpts_MaxRAM: 2g
events: # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
enabled: true # Enable or disable the KubeSphere Events System.
ruler:
enabled: true
replicas: 2
logging: # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
enabled: true # Enable or disable the KubeSphere Logging System.
logsidecar:
enabled: true
replicas: 2
metrics_server: # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
enabled: false # Enable or disable metrics-server.
monitoring:
storageClass: "" # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
# prometheusReplicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
prometheusMemoryRequest: 400Mi # Prometheus request memory.
prometheusVolumeSize: 20Gi # Prometheus PVC size.
# alertmanagerReplicas: 1 # AlertManager Replicas.
multicluster:
clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the Host or Member Cluster.
network:
networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
# Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
enabled: true # Enable or disable network policies.
ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
type: calico # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
store:
enabled: true # Enable or disable the KubeSphere App Store.
servicemesh: # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
enabled: true # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
kubeedge: # Add edge nodes to your cluster and deploy workloads on edge nodes.
enabled: true # Enable or disable KubeEdge.
cloudCore:
nodeSelector: {"node-role.kubernetes.io/worker": ""}
tolerations: []
cloudhubPort: "10000"
cloudhubQuicPort: "10001"
cloudhubHttpsPort: "10002"
cloudstreamPort: "10003"
tunnelPort: "10004"
cloudHub:
advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
- "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
nodeLimit: "100"
service:
cloudhubNodePort: "30000"
cloudhubQuicNodePort: "30001"
cloudhubHttpsNodePort: "30002"
cloudstreamNodePort: "30003"
tunnelNodePort: "30004"
edgeWatcher:
nodeSelector: {"node-role.kubernetes.io/worker": ""}
tolerations: []
edgeWatcherAgent:
nodeSelector: {"node-role.kubernetes.io/worker": ""}
tolerations: []
# Apply both files, then wait patiently
kubectl apply -f kubesphere-installer.yaml

kubectl apply -f cluster-configuration.yaml

# Add the etcd client cert secret
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key

# Follow the installer logs
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

# Console: http://172.31.0.204:30880
# Account: admin
# Password: P@88w0rd

# Describe a pod
kubectl describe pod pod-name -n namespace

Cluster server password: 12345Xia
Open ports 30000-32767 in the security group; this is the default NodePort range.

2. Single-Node KubeSphere on Linux

CentOS 7.9, 4 vCPU / 8 GB RAM; set the hostname and open ports 30000-32767 in the security group.

Official documentation

Prepare KubeKey

# Set the hostname
hostnamectl set-hostname node1

export KKZONE=cn

curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -

chmod +x kk

Bootstrap the cluster with KubeKey

./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.3.2

# Type yes when prompted
yes

# Verify the result
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

If you see the following:

10:41:13 CST [ERRO] node1: conntrack is required.
10:41:13 CST [ERRO] node1: socat is required.

# Install conntrack and socat first (install whatever is reported missing), then re-run the kk script
yum install -y conntrack
yum install -y socat

When the installation finishes, you will see:

#####################################################
### Welcome to KubeSphere! ###
#####################################################

Console: http://172.31.0.2:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the
monitoring status of service components in
"Cluster Management". If any service is not
ready, please wait patiently until all components
are up and running.
2. Please change the default password after login.

#####################################################
https://kubesphere.io 2023-05-21 12:04:23
#####################################################
12:04:24 CST success: [node1]
12:04:24 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

这种方式安装是最小安装, 如果想开启其他功能可以在自定义资源进行开启

另外遇到的一个bug

# After running the kk script, the following error appeared:
error: unable to recognize "/etc/kubernetes/network-plugin.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1"

# I never found a clean fix for this; there is a related GitHub issue:
# https://github.com/kubesphere/kubesphere/issues/5272
# Uninstalling Kubernetes alone was not enough -- reinstalling the OS and then following the
# official all-in-one guide to install again is the safest route:
# https://www.kubesphere.io/zh/docs/v3.3/quick-start/all-in-one-on-linux/

3. Linux多节点部署KubeSphere

多节点安装

Two CentOS 7.9 machines, 2C4G (budget ran out); set the hostnames and open ports 30000-32767 in the security group

准备kubeKey

# 设置一下主机名
hostnamectl set-hostname master
hostnamectl set-hostname node1

# master节点执行
export KKZONE=cn

curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -

chmod +x kk

创建配置文件

# 指定kubernetes和kubesphere版本
./kk create config --with-kubernetes v1.22.12 --with-kubesphere v3.3.2

编辑配置文件

vi config-sample.yaml


apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
name: sample
spec:
hosts:
- {name: master, address: 172.31.0.2, internalAddress: 172.31.0.2, user: root, password: "12345@Xia"}
- {name: node1, address: 172.31.0.3, internalAddress: 172.31.0.3, user: root, password: "12345@Xia"}
roleGroups:
etcd:
- master
control-plane:
- master
worker:
- master
- node1
controlPlaneEndpoint:
## Internal loadbalancer for apiservers
# internalLoadbalancer: haproxy

domain: lb.kubesphere.local
address: ""
port: 6443
kubernetes:
version: v1.22.12
clusterName: cluster.local
autoRenewCerts: true
containerManager: docker
etcd:
type: kubekey
network:
plugin: calico
kubePodsCIDR: 10.233.64.0/18
kubeServiceCIDR: 10.233.0.0/18
## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
multusCNI:
enabled: false
registry:
privateRegistry: ""
namespaceOverride: ""
registryMirrors: []
insecureRegistries: []
addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
name: ks-installer
namespace: kubesphere-system
labels:
version: v3.3.2
spec:
persistence:
storageClass: ""
authentication:
jwtSecret: ""
zone: ""
local_registry: ""
namespace_override: ""
# dev_tag: ""
etcd:
monitoring: false
endpointIps: localhost
port: 2379
tlsEnable: true
common:
core:
console:
enableMultiLogin: true
port: 30880
type: NodePort
# apiserver:
# resources: {}
# controllerManager:
# resources: {}
redis:
enabled: false
volumeSize: 2Gi
openldap:
enabled: false
volumeSize: 2Gi
minio:
volumeSize: 20Gi
monitoring:
# type: external
endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
GPUMonitoring:
enabled: false
gpu:
kinds:
- resourceName: "nvidia.com/gpu"
resourceType: "GPU"
default: true
es:
# master:
# volumeSize: 4Gi
# replicas: 1
# resources: {}
# data:
# volumeSize: 20Gi
# replicas: 1
# resources: {}
logMaxAge: 7
elkPrefix: logstash
basicAuth:
enabled: false
username: ""
password: ""
externalElasticsearchHost: ""
externalElasticsearchPort: ""
alerting:
enabled: false
# thanosruler:
# replicas: 1
# resources: {}
auditing:
enabled: false
# operator:
# resources: {}
# webhook:
# resources: {}
devops:
enabled: false
# resources: {}
jenkinsMemoryLim: 8Gi
jenkinsMemoryReq: 4Gi
jenkinsVolumeSize: 8Gi
events:
enabled: false
# operator:
# resources: {}
# exporter:
# resources: {}
# ruler:
# enabled: true
# replicas: 2
# resources: {}
logging:
enabled: false
logsidecar:
enabled: true
replicas: 2
# resources: {}
metrics_server:
enabled: false
monitoring:
storageClass: ""
node_exporter:
port: 9100
# resources: {}
# kube_rbac_proxy:
# resources: {}
# kube_state_metrics:
# resources: {}
# prometheus:
# replicas: 1
# volumeSize: 20Gi
# resources: {}
# operator:
# resources: {}
# alertmanager:
# replicas: 1
# resources: {}
# notification_manager:
# resources: {}
# operator:
# resources: {}
# proxy:
# resources: {}
gpu:
nvidia_dcgm_exporter:
enabled: false
# resources: {}
multicluster:
clusterRole: none
network:
networkpolicy:
enabled: false
ippool:
type: none
topology:
type: none
openpitrix:
store:
enabled: false
servicemesh:
enabled: false
istio:
components:
ingressGateways:
- name: istio-ingressgateway
enabled: false
cni:
enabled: false
edgeruntime:
enabled: false
kubeedge:
enabled: false
cloudCore:
cloudHub:
advertiseAddress:
- ""
service:
cloudhubNodePort: "30000"
cloudhubQuicNodePort: "30001"
cloudhubHttpsNodePort: "30002"
cloudstreamNodePort: "30003"
tunnelNodePort: "30004"
# resources: {}
# hostNetWork: false
iptables-manager:
enabled: true
mode: "external"
# resources: {}
# edgeService:
# resources: {}
terminal:
timeout: 600



使用配置文件创建集群

# 这个只需要在master节点上安装就可以了
./kk create cluster -f config-sample.yaml

# 中途输入yes
yes

# 然后耐心等等, 大概十多分钟

# 缺什么安装什么
12:36:51 CST [ERRO] master: conntrack is required.
12:36:51 CST [ERRO] master: socat is required.
12:36:51 CST [ERRO] node1: conntrack is required.
12:36:51 CST [ERRO] node1: socat is required.

# 每个节点都安装
yum install -y conntrack
yum install -y socat

见到这一段就说明安装成功了

#####################################################
### Welcome to KubeSphere! ###
#####################################################

Console: http://172.31.0.2:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the
monitoring status of service components in
"Cluster Management". If any service is not
ready, please wait patiently until all components
are up and running.
2. Please change the default password after login.

#####################################################
https://kubesphere.io 2023-05-21 12:04:23
#####################################################
12:04:24 CST success: [node1]
12:04:24 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

访问一下30880端口

多租户系统实战

中间件部署实战

应用部署需要关注的信息【应用部署三要素】
1、应用的部署方式
2、应用的数据挂载(数据,配置文件)
3、应用的可访问性
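
To make the three elements concrete, here is a minimal sketch applied with kubectl (every name in it -- demo-app, demo-config, the nginx:1.21 image, the port numbers -- is illustrative rather than taken from this guide, and the ConfigMap demo-config is assumed to exist already):

# 1) deployment method, 2) data/config mounts, 3) accessibility, declared in one shot
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                      # 1. how the app is deployed: a Deployment with 1 replica
spec:
  replicas: 1
  selector:
    matchLabels: {app: demo-app}
  template:
    metadata:
      labels: {app: demo-app}
    spec:
      containers:
      - name: demo-app
        image: nginx:1.21
        volumeMounts:
        - {name: conf, mountPath: /etc/nginx/conf.d}   # 2. config mounted from a ConfigMap
      volumes:
      - name: conf
        configMap: {name: demo-config}
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: NodePort                      # 3. how the app is reached from outside the cluster
  selector: {app: demo-app}
  ports:
  - {port: 80, targetPort: 80}
EOF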

部署MySQL

mysql容器启动

docker run -p 3306:3306 --name mysql-01 \
-v /mydata/mysql/log:/var/log/mysql \
-v /mydata/mysql/data:/var/lib/mysql \
-v /mydata/mysql/conf:/etc/mysql/conf.d \
-e MYSQL_ROOT_PASSWORD=root \
--restart=always \
-d mysql:5.7

mysql配置示例

[client]
default-character-set=utf8mb4

[mysql]
default-character-set=utf8mb4

[mysqld]
init_connect='SET collation_connection = utf8mb4_unicode_ci'
init_connect='SET NAMES utf8mb4'
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
lower_case_table_names=1

mysql部署分析

1. Inside the cluster, other workloads can reach MySQL directly through the DNS name "service-name.project-name", for example:
mysql -uroot -hhis-mysql-glgf.his -p
2. Outside the cluster, expose the Service as a NodePort and connect through a node's public IP plus the exposed port (see the sketch below).
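
A sketch of the two access paths (the full DNS form below is standard cluster DNS; the node IP and NodePort are placeholders to fill in after the Service is exposed):

# Inside the cluster: the full DNS form also works
mysql -uroot -p -h his-mysql-glgf.his.svc.cluster.local

# Outside the cluster: any node's public IP plus the NodePort exposed by the Service
mysql -uroot -p -h <node-public-ip> -P <nodeport>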

部署

创建mysql容器
创建配置文件

键是配置文件的名称
值是配置文件的内容

持久卷的声明



工作负载

配置一个环境变量
MYSQL_ROOT_PASSWORD
123456


读写/var/lib/mysql


只读/etc/mysql/conf.d

登录mysql

查看命名空间

Because the auto-generated Service name ends with a random suffix (here "-suxm"), it is cleaner to create the Service manually with a fixed name.
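
If you prefer the command line over the UI, a manually named Service is simply a Service whose selector matches the workload's Pod labels; the name his-mysql, the namespace his and the label app: his-mysql below are assumptions to adapt to your own workload:

kubectl apply -n his -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: his-mysql            # fixed, human-friendly name instead of the random suffix
spec:
  selector:
    app: his-mysql           # must match the labels on the MySQL workload's Pods
  ports:
  - port: 3306
    targetPort: 3306
EOF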






刚刚是采用的内部域名创建的, 这次采用虚拟IP的方式创建




之后会暴露一个端口, 只用IP加端口就可以连接上mysql了
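
The exposed port can also be looked up from the command line (assuming the project/namespace is his):

# The PORT(S) column shows the mapping, e.g. 3306:3xxxx/TCP
kubectl get svc -n his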

部署Redis

redis容器启动

#创建配置文件
## 1、准备redis配置文件内容
mkdir -p /mydata/redis/conf && vim /mydata/redis/conf/redis.conf


##配置示例
appendonly yes
port 6379
bind 0.0.0.0


#docker启动redis
docker run -d -p 6379:6379 --restart=always \
-v /mydata/redis/conf/redis.conf:/etc/redis/redis.conf \
-v /mydata/redis-01/data:/data \
--name redis-01 redis:6.2.5 \
redis-server /etc/redis/redis.conf

redis部署分析

部署

创建配置


创建工作负载





命令redis-server
参数/etc/redis/redis.conf


然后创建存储卷






查看一下容器内的配置文件是否加载进去了

当然也可以自己手动创建服务, 因为自动生成会多几个字符

删除刚刚创建出来的redis服务

然后创建, 指定工作负载





刚刚创建的是内部网络, 这次来创建虚拟IP地址访问


记得勾选外部访问

之后会暴露出来一个端口, 可以使用该端口进行外部远程连接redis

Scale the Redis workload to 3 replicas and the number of persistent volumes also grows to 3, because each replica stamps out its own PVC from the volume claim template.

Scaling the MySQL workload, by contrast, still leaves a single volume: its PVC was created up front and merely mounted by the workload, so the volume count does not change with the replica count.
As a rule, create the volume via a storage template while defining the workload, rather than pre-creating a standalone PVC.
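
That per-replica behaviour comes from the StatefulSet's volume claim template. A trimmed sketch of what such a workload looks like if created manually (names, image tag and the 1Gi size are assumptions, not copied from KubeSphere's generated manifest):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: his-redis
  namespace: his
spec:
  replicas: 3
  serviceName: his-redis
  selector:
    matchLabels: {app: his-redis}
  template:
    metadata:
      labels: {app: his-redis}
    spec:
      containers:
      - name: redis
        image: redis:6.2.5
        volumeMounts:
        - {name: data, mountPath: /data}
  volumeClaimTemplates:          # one PVC per replica: data-his-redis-0, -1, -2
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests: {storage: 1Gi}
EOF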

部署ElasticSearch

es容器启动

# 创建数据目录
mkdir -p /mydata/es-01 && chmod 777 -R /mydata/es-01

# 容器启动
docker run --restart=always -d -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
-v es-config:/usr/share/elasticsearch/config \
-v /mydata/es-01/data:/usr/share/elasticsearch/data \
--name es-01 \
elasticsearch:7.13.4

es部署分析

注意: 子路径挂载,配置修改后,k8s不会对其Pod内的相关配置文件进行热更新,需要自己重启Pod
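
A sketch of how to pick up the new ConfigMap content after editing it (workload and namespace names are placeholders):

# Option 1: rolling restart of the workload that mounts the ConfigMap
kubectl rollout restart statefulset/<workload-name> -n <namespace>

# Option 2: delete the Pod and let the controller recreate it
kubectl delete pod <pod-name> -n <namespace>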

应用商店

可以使用 dev-zhao 登录,从应用商店部署

应用仓库

使用企业空间管理员(wuhan-boss)登录,设置应用仓库

学习Helm即可,去helm的应用市场添加一个仓库地址,比如:bitnami

部署

# 只需要在node1上执行一下就行了, 其实一会也会手动删除这个容器

# 创建数据目录
mkdir -p /mydata/es-01 && chmod 777 -R /mydata/es-01

# 容器启动
docker run --restart=always -d -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
-v es-config:/usr/share/elasticsearch/config \
-v /mydata/es-01/data:/usr/share/elasticsearch/data \
--name es-01 \
elasticsearch:7.13.4

# 查看
docker ps | grep es-01

# 再给次权限
chmod 777 -R /mydata/es-01

# 进入容器
docker exec -it es-01 /bin/bash

pwd
# /usr/share/elasticsearch

cd config/
# elasticsearch.keystore jvm.options log4j2.file.properties role_mapping.yml users
# elasticsearch.yml jvm.options.d log4j2.properties roles.yml users_roles

cat elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0

cat jvm.options
################################################################
##
## JVM configuration
##
################################################################
##
## WARNING: DO NOT EDIT THIS FILE. If you want to override the
## JVM options in this file, or set any additional options, you
## should create one or more files in the jvm.options.d
## directory containing your adjustments.
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html
## for more information.
##
################################################################



################################################################
## IMPORTANT: JVM heap size
################################################################
##
## The heap size is automatically configured by Elasticsearch
## based on the available memory in your system and the roles
## each node is configured to fulfill. If specifying heap is
## required, it should be done through a file in jvm.options.d,
## and the min and max should be set to the same value. For
## example, to set the heap to 4 GB, create a new file in the
## jvm.options.d directory containing these lines:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################


################################################################
## Expert settings
################################################################
##
## All settings below here are considered expert settings. Do
## not adjust them unless you understand what you are doing. Do
## not edit them in this file; instead, create a new file in the
## jvm.options.d directory containing your adjustments.
##
################################################################

## GC configuration
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly

## G1GC Configuration
# NOTE: G1 GC is only supported on JDK version 10 or later
# to use G1GC, uncomment the next two lines and update the version on the
# following three lines to your version of the JDK
# 10-13:-XX:-UseConcMarkSweepGC
# 10-13:-XX:-UseCMSInitiatingOccupancyOnly
14-:-XX:+UseG1GC

## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails; heap dumps
# are created in the working directory of the JVM unless an alternative path is
# specified
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log

## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m


创建一个管理员账户

然后将管理员拉进企业空间

创建es配置文件





版本填写elasticsearch:7.13.4


discovery.type single-node
ES_JAVA_OPTS -Xms512m -Xmx512m


/usr/share/elasticsearch/config/elasticsearch.yml

/usr/share/elasticsearch/config/jvm.options

For some reason the KubeSphere UI did not offer the "create PVC template" option here, so either pre-create a PVC in advance or fall back to a temporary volume.

如果有添加存储模板, 就选择添加存储模板
/usr/share/elasticsearch/data

# 服务创建完成之后就可以删除这个容器了
docker ps | grep es-01

docker rm -f es-01

# 之后kubesphere会自己创建一个



创建工作负载



创建外网访问的的NodeIP




打开浏览器访问http://47.106.11.25:32348/

应用商店的安装

可以通过开启应用商店的方式来安装应用, 简单粗暴, 不过应用商店的应用比较少



开启数据持久化
设置账号密码

还可以开启外网访问

Then open the port. Purely to save effort, this part does not use my own cluster but the 10 free hours of KubeSphere that QingCloud gives away.

访问https://rabbitmq-snpokhhn.c.kubesphere.cloud:30443

通过应用仓库部署

使用这种方式来部署也很方便

https://helm.sh/

搜索redis

# 仓库地址
https://charts.bitnami.com/bitnami

先切换到企业空间, 添加应用仓库bitnami

然后再创建应用




本地部署ruoyi项目

本地下载nacos
修改bin/application.properties

#*************** Config Module Related Configurations ***************#
### If use MySQL as datasource:
### Deprecated configuration property, it is recommended to use `spring.sql.init.platform` replaced.
# spring.datasource.platform=mysql
spring.sql.init.platform=mysql

### Count of DB:
db.num=1

### Connect URL of DB:
db.url.0=jdbc:mysql://127.0.0.1:3306/nacos1?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user.0=root
db.password.0=123456

开启mysql并且执行该sql文件

然后单机启动

startup.cmd -m standalone

修改localhost:8848中配置中心的mysql地址和redis地址就可以了

上云部署ruoyi项目

部署mysql

创建数据库ry-cloud并导入数据脚本ry_2021xxxx.sql(必须),quartz.sql(可选)
创建数据库ry-config并导入数据脚本ry_config_2021xxxx.sql(必须)
将如下的sql文件都执行到k8s部署的mysql容器上

部署nacos


创建nacos服务


创建镜像nacos/nacos-server:v2.0.3



补充, 添加一个环境变量, 让nacos单节点启动
MODE standalone

暂时不用挂载配置文件
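
What that environment variable does can be verified with plain Docker before wiring it into KubeSphere; a throwaway sketch (the image tag matches the one used above, the container name is arbitrary):

# Start a single-node Nacos just to confirm the MODE=standalone switch
docker run -d --name nacos-test -p 8848:8848 \
  -e MODE=standalone \
  nacos/nacos-server:v2.0.3

# The console should come up at http://<host>:8848/nacos; clean up afterwards
docker rm -f nacos-test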


This shows that each replica can be reached by its stable DNS name: <pod-name>.<service-name>.<project/namespace>.svc.cluster.local
ping his-nacos-v1-0.his-nacos.his.svc.cluster.local
ping his-nacos-v1-1.his-nacos.his.svc.cluster.local
In the Nacos cluster configuration you can therefore use these DNS names instead of hard-coded IPs.


然后删除这个nacos服务, 因为刚刚创建的并没有挂载配置文件

接下来正式部署nacos

Since MySQL can be connected to directly through the host name his-mysql-node.his, the JDBC URL in the Nacos configuration uses that host name as well.


一个踩坑, Nacos 2.0.3开启持久化 application.properties
得使用spring.datasource.platform, 不然单机模式下启动的并没有做到真正的持久化

spring.datasource.platform=mysql
# spring.sql.init.platform=mysql

application.properties

#
# Copyright 1999-2021 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#*************** Spring Boot Related Configurations ***************#
### Default web context path:
server.servlet.contextPath=/nacos
### Include message field
server.error.include-message=ALWAYS
### Default web server port:
server.port=8848

#*************** Network Related Configurations ***************#
### If prefer hostname over ip for Nacos server addresses in cluster.conf:
# nacos.inetutils.prefer-hostname-over-ip=false

### Specify local server's IP:
# nacos.inetutils.ip-address=


#*************** Config Module Related Configurations ***************#
### If use MySQL as datasource:
### Deprecated configuration property, it is recommended to use `spring.sql.init.platform` replaced.
spring.datasource.platform=mysql
# spring.sql.init.platform=mysql

### Count of DB:
db.num=1

### Connect URL of DB:
db.url.0=jdbc:mysql://his-mysql-node.his:3306/ry-config?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user.0=root
db.password.0=123456

### Connection pool configuration: hikariCP
db.pool.config.connectionTimeout=30000
db.pool.config.validationTimeout=10000
db.pool.config.maximumPoolSize=20
db.pool.config.minimumIdle=2

#*************** Naming Module Related Configurations ***************#

### If enable data warmup. If set to false, the server would accept request without local data preparation:
# nacos.naming.data.warmup=true

### If enable the instance auto expiration, kind like of health check of instance:
# nacos.naming.expireInstance=true

### Add in 2.0.0
### The interval to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.interval=60000

### The expired time to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.expired-time=60000

### The interval to clean expired metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.interval=5000

### The expired time to clean metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.expired-time=60000

### The delay time before push task to execute from service changed, unit: milliseconds.
# nacos.naming.push.pushTaskDelay=500

### The timeout for push task execute, unit: milliseconds.
# nacos.naming.push.pushTaskTimeout=5000

### The delay time for retrying failed push task, unit: milliseconds.
# nacos.naming.push.pushTaskRetryDelay=1000

### Since 2.0.3
### The expired time for inactive client, unit: milliseconds.
# nacos.naming.client.expired.time=180000

#*************** CMDB Module Related Configurations ***************#
### The interval to dump external CMDB in seconds:
# nacos.cmdb.dumpTaskInterval=3600

### The interval of polling data change event in seconds:
# nacos.cmdb.eventTaskInterval=10

### The interval of loading labels in seconds:
# nacos.cmdb.labelTaskInterval=300

### If turn on data loading task:
# nacos.cmdb.loadDataAtStart=false


#*************** Metrics Related Configurations ***************#
### Metrics for prometheus
#management.endpoints.web.exposure.include=*

### Metrics for elastic search
management.metrics.export.elastic.enabled=false
#management.metrics.export.elastic.host=http://localhost:9200

### Metrics for influx
management.metrics.export.influx.enabled=false
#management.metrics.export.influx.db=springboot
#management.metrics.export.influx.uri=http://localhost:8086
#management.metrics.export.influx.auto-create-db=true
#management.metrics.export.influx.consistency=one
#management.metrics.export.influx.compressed=true

#*************** Access Log Related Configurations ***************#
### If turn on the access log:
server.tomcat.accesslog.enabled=true

### The access log pattern:
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i

### The directory of access log:
server.tomcat.basedir=file:.

#*************** Access Control Related Configurations ***************#
### If enable spring security, this option is deprecated in 1.2.0:
#spring.security.enabled=false

### The ignore urls of auth
nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**

### The auth system to use, currently only 'nacos' and 'ldap' is supported:
nacos.core.auth.system.type=nacos

### If turn on auth system:
nacos.core.auth.enabled=false

### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay.
nacos.core.auth.caching.enabled=true

### Since 1.4.1, Turn on/off white auth for user-agent: nacos-server, only for upgrade from old version.
nacos.core.auth.enable.userAgentAuthWhite=false

### Since 1.4.1, worked when nacos.core.auth.enabled=true and nacos.core.auth.enable.userAgentAuthWhite=false.
### The two properties is the white list for auth and used by identity the request from other server.
nacos.core.auth.server.identity.key=
nacos.core.auth.server.identity.value=

### worked when nacos.core.auth.system.type=nacos
### The token expiration in seconds:
nacos.core.auth.plugin.nacos.token.cache.enable=false
nacos.core.auth.plugin.nacos.token.expire.seconds=18000
### The default token (Base64 String):
nacos.core.auth.plugin.nacos.token.secret.key=

### worked when nacos.core.auth.system.type=ldap,{0} is Placeholder,replace login username
#nacos.core.auth.ldap.url=ldap://localhost:389
#nacos.core.auth.ldap.basedc=dc=example,dc=org
#nacos.core.auth.ldap.userDn=cn=admin,${nacos.core.auth.ldap.basedc}
#nacos.core.auth.ldap.password=admin
#nacos.core.auth.ldap.userdn=cn={0},dc=example,dc=org
#nacos.core.auth.ldap.filter.prefix=uid
#nacos.core.auth.ldap.case.sensitive=true


#*************** Istio Related Configurations ***************#
### If turn on the MCP server:
nacos.istio.mcp.server.enabled=false

#*************** Core Related Configurations ***************#

### set the WorkerID manually
# nacos.core.snowflake.worker-id=

### Member-MetaData
# nacos.core.member.meta.site=
# nacos.core.member.meta.adweight=
# nacos.core.member.meta.weight=

### MemberLookup
### Addressing pattern category, If set, the priority is highest
# nacos.core.member.lookup.type=[file,address-server]
## Set the cluster list with a configuration file or command-line argument
# nacos.member.list=192.168.16.101:8847?raft_port=8807,192.168.16.101?raft_port=8808,192.168.16.101:8849?raft_port=8809
## for AddressServerMemberLookup
# Maximum number of retries to query the address server upon initialization
# nacos.core.address-server.retry=5
## Server domain name address of [address-server] mode
# address.server.domain=jmenv.tbsite.net
## Server port of [address-server] mode
# address.server.port=8080
## Request address of [address-server] mode
# address.server.url=/nacos/serverlist

#*************** JRaft Related Configurations ***************#

### Sets the Raft cluster election timeout, default value is 5 second
# nacos.core.protocol.raft.data.election_timeout_ms=5000
### Sets the amount of time the Raft snapshot will execute periodically, default is 30 minute
# nacos.core.protocol.raft.data.snapshot_interval_secs=30
### raft internal worker threads
# nacos.core.protocol.raft.data.core_thread_num=8
### Number of threads required for raft business request processing
# nacos.core.protocol.raft.data.cli_service_thread_num=4
### raft linear read strategy. Safe linear reads are used by default, that is, the Leader tenure is confirmed by heartbeat
# nacos.core.protocol.raft.data.read_index_type=ReadOnlySafe
### rpc request timeout, default 5 seconds
# nacos.core.protocol.raft.data.rpc_request_timeout_ms=5000

#*************** Distro Related Configurations ***************#

### Distro data sync delay time, when sync task delayed, task will be merged for same data key. Default 1 second.
# nacos.core.protocol.distro.data.sync.delayMs=1000

### Distro data sync timeout for one sync data, default 3 seconds.
# nacos.core.protocol.distro.data.sync.timeoutMs=3000

### Distro data sync retry delay time when sync data failed or timeout, same behavior with delayMs, default 3 seconds.
# nacos.core.protocol.distro.data.sync.retryDelayMs=3000

### Distro data verify interval time, verify synced data whether expired for a interval. Default 5 seconds.
# nacos.core.protocol.distro.data.verify.intervalMs=5000

### Distro data verify timeout for one verify, default 3 seconds.
# nacos.core.protocol.distro.data.verify.timeoutMs=3000

### Distro data load retry delay when load snapshot data failed, default 30 seconds.
# nacos.core.protocol.distro.data.load.retryDelayMs=30000

### enable to support prometheus service discovery
#nacos.prometheus.metrics.enabled=true

### Since 2.3
#*************** Grpc Configurations ***************#

## sdk grpc(between nacos server and client) configuration
## Sets the maximum message size allowed to be received on the server.
#nacos.remote.server.grpc.sdk.max-inbound-message-size=10485760

## Sets the time(milliseconds) without read activity before sending a keepalive ping. The typical default is two hours.
#nacos.remote.server.grpc.sdk.keep-alive-time=7200000

## Sets a time(milliseconds) waiting for read activity after sending a keepalive ping. Defaults to 20 seconds.
#nacos.remote.server.grpc.sdk.keep-alive-timeout=20000


## Sets a time(milliseconds) that specify the most aggressive keep-alive time clients are permitted to configure. The typical default is 5 minutes
#nacos.remote.server.grpc.sdk.permit-keep-alive-time=300000

## cluster grpc(inside the nacos server) configuration
#nacos.remote.server.grpc.cluster.max-inbound-message-size=10485760

## Sets the time(milliseconds) without read activity before sending a keepalive ping. The typical default is two hours.
#nacos.remote.server.grpc.cluster.keep-alive-time=7200000

## Sets a time(milliseconds) waiting for read activity after sending a keepalive ping. Defaults to 20 seconds.
#nacos.remote.server.grpc.cluster.keep-alive-timeout=20000

## Sets a time(milliseconds) that specify the most aggressive keep-alive time clients are permitted to configure. The typical default is 5 minutes
#nacos.remote.server.grpc.cluster.permit-keep-alive-time=300000

cluster.conf

#
# Copyright 1999-2021 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#it is ip
#example
his-nacos-v1-0.his-nacos.his.svc.cluster.local:8848
his-nacos-v1-1.his-nacos.his.svc.cluster.local:8848
his-nacos-v1-2.his-nacos.his.svc.cluster.local:8848

TIP: for a single-node deployment do not mount cluster.conf, otherwise later steps may run into bugs.

然后创建一个有状态服务



指定nacos的版本nacos/nacos-server:v2.0.3




只读/home/nacos/conf/application.properties子路径填写application.properties

只读/home/nacos/conf/cluster.conf子路径填写cluster.conf

接着创建一个对外暴露端口的nacos




配置完成之后可以访问http://139.198.115.124:31695/nacos

接着创建prod命名空间

maven打包

跳过测试然后打包
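
The packaging command itself, with tests skipped (the same flag is used again later in the DevOps pipeline):

mvn clean package -Dmaven.test.skip=true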

Run the packaged jar locally with java -jar ruoyi-auth.jar; it fails with a charset error:

Caused by: java.nio.charset.MalformedInputException: Input length = 1

需要指定一下编码集
java -Dfile.encoding=utf8 -jar ruoyi-auth.jar
将打包的jar整理成如下格式上传到服务器

.
├── ruoyi-auth
│ ├── Dockerfile
│ └── target
│ └── ruoyi-auth.jar
├── ruoyi-file
│ ├── Dockerfile
│ └── target
│ └── ruoyi-modules-file.jar
├── ruoyi-gateway
│ ├── Dockerfile
│ └── target
│ └── ruoyi-gateway.jar
├── ruoyi-gen
│ ├── Dockerfile
│ └── target
│ └── ruoyi-modules-gen.jar
├── ruoyi-job
│ ├── Dockerfile
│ └── target
│ └── ruoyi-modules-job.jar
├── ruoyi-monitor
│ ├── Dockerfile
│ └── target
│ └── ruoyi-visual-monitor.jar
└── ruoyi-system
├── Dockerfile
└── target
└── ruoyi-modules-system.jar

Dockerfile

FROM openjdk:8-jdk
LABEL maintainer=leifengyang


#docker run -e PARAMS="--server.port 9090"
ENV PARAMS="--server.port=8080 --spring.profiles.active=prod --spring.cloud.nacos.discovery.server-addr=his-nacos.his:8848 --spring.cloud.nacos.config.server-addr=his-nacos.his:8848 --spring.cloud.nacos.config.namespace=prod --spring.cloud.nacos.config.file-extension=yml"
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone

COPY target/*.jar /app.jar
EXPOSE 8080

#
ENTRYPOINT ["/bin/sh","-c","java -Dfile.encoding=utf8 -Djava.security.egd=file:/dev/./urandom -jar app.jar ${PARAMS}"]

开通阿里云容器镜像服务

# 登录阿里云容器镜像仓库
$ sudo docker login --username=肉豆蔻吖 registry.cn-shenzhen.aliyuncs.com

12345Xia

创建一个命名空间

# 拉取
$ docker pull registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi:[镜像版本号]

# 推送
$ docker login --username=肉豆蔻吖 registry.cn-shenzhen.aliyuncs.com
$ docker tag [ImageId] registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi:[镜像版本号]
$ docker push registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi:[镜像版本号]

docker打包, 推送

# 打包成镜像
cd ruoyi-file/
docker build -t ruoyi-file:v1 -f Dockerfile .

# 可以简写不写Dockerfile
cd ruoyi-gateway/
docker build -t ruoyi-gateway:v1 .

cd ruoyi-gen/
docker build -t ruoyi-gen:v1 .

cd ruoyi-job/
docker build -t ruoyi-job:v1 .

cd ruoyi-monitor/
docker build -t ruoyi-monitor:v1 .

cd ruoyi-system/
docker build -t ruoyi-system:v1 .

cd ruoyi-auth/
docker build -t ruoyi-auth:v1 .

# 登录
docker login --username=肉豆蔻吖 registry.cn-shenzhen.aliyuncs.com

# 查看镜像
[root@k8s-master ruoyi-system]# docker images | grep ruoyi
ruoyi-auth v1 0db642b44949 4 seconds ago 617MB
ruoyi-system v1 c0b63feca088 8 minutes ago 632MB
ruoyi-monitor v1 b0f479101e96 10 minutes ago 592MB
ruoyi-job v1 fd3bcefdb9d0 10 minutes ago 629MB
ruoyi-gen v1 df9dae1e1581 10 minutes ago 628MB
ruoyi-gateway v1 1a37fc67241f 13 minutes ago 625MB
ruoyi-file v1 85a54d4c38ea 15 minutes ago 622MB

# 命名
docker tag c0b63feca088 registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-system:v1
docker tag b0f479101e96 registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-monitor:v1
docker tag fd3bcefdb9d0 registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-job:v1
docker tag df9dae1e1581 registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-gen:v1
docker tag 1a37fc67241f registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-gateway:v1
docker tag 85a54d4c38ea registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-file:v1
docker tag 0db642b44949 registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-auth:v1

# 查看镜像
[root@k8s-master ruoyi-system]# docker images | grep ruoyi
ruoyi-auth v1 0db642b44949 49 seconds ago 617MB
registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-auth v1 0db642b44949 49 seconds ago 617MB
ruoyi-system v1 c0b63feca088 12 hours ago 632MB
registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-system v1 c0b63feca088 12 hours ago 632MB
ruoyi-monitor v1 b0f479101e96 12 hours ago 592MB
registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-monitor v1 b0f479101e96 12 hours ago 592MB
registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-job v1 fd3bcefdb9d0 12 hours ago 629MB
ruoyi-job v1 fd3bcefdb9d0 12 hours ago 629MB
ruoyi-gen v1 df9dae1e1581 12 hours ago 628MB
registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-gen v1 df9dae1e1581 12 hours ago 628MB
ruoyi-gateway v1 1a37fc67241f 12 hours ago 625MB
registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-gateway v1 1a37fc67241f 12 hours ago 625MB
ruoyi-file v1 85a54d4c38ea 12 hours ago 622MB
registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-file v1 85a54d4c38ea 12 hours ago 622MB

# 推送
docker push registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-auth:v1
docker push registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-system:v1
docker push registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-monitor:v1
docker push registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-job:v1
docker push registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-gen:v1
docker push registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-gateway:v1
docker push registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-file:v1

kubesphere部署微服务

# 拉取镜像
docker pull registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-auth:v1
docker pull registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-system:v1
docker pull registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-monitor:v1
docker pull registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-job:v1
docker pull registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-gen:v1
docker pull registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-gateway:v1
docker pull registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-file:v1

创建无状态服务


选择这个镜像registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-monitor:v1



部署完成之后查看日志,没有报错就ok

然后以相同的方式部署其他的微服务
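
The Pods and their logs can also be checked from the command line after each deployment (the namespace his and the workload name ruoyi-monitor are assumptions matching the names used in this guide):

# Confirm the microservice Pods are Running
kubectl get pods -n his

# Tail the log of one deployment
kubectl logs -f deployment/ruoyi-monitor -n his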

kubesphere部署前端

前端UI进行打包
将target的路径localhost修改成ruoyi-gateway.his

然后进行打包

npm run build:prod

Then drop the built dist directory into nginx's html directory.
In nginx.conf set server_name to "_" so nginx answers requests for any host: the Pod IP in Kubernetes can change at any time, so the server name must not be hard-coded; whatever Pod IP a request arrives on, nginx should handle it.
Point proxy_pass at the gateway: http://ruoyi-gateway.his:8080/;

worker_processes  1;

events {
worker_connections 1024;
}

http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;

server {
listen 80;
server_name _;

location / {
root /home/ruoyi/projects/ruoyi-ui;
try_files $uri $uri/ /index.html;
index index.html index.htm;
}

location /prod-api/{
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header REMOTE-HOST $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://ruoyi-gateway.his:8080/;
}

# 避免actuator暴露
if ($request_uri ~ "/actuator") {
return 403;
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}

# 下载解压工具
yum install -y unzip zip

# 解压nginx
unzip nginx

cd nginx

# docker打包
docker build -t registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-ui:v1 -f dockerfile .

# 推送
docker push registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-ui:v1

然后kubesphere创建服务
应用负载 - 服务 - 创建 - 无状态服务

registry.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/ruoyi-ui:v1


部署的效果图

一个补充

After a server reboot, Nacos depends on MySQL being up first.
If the Nacos Pod starts while the MySQL Pod is not ready yet, Nacos fails to start, the other microservices cannot reach Nacos, and several of them end up unable to start properly.
The fix is to add a liveness probe to Nacos so that it keeps being restarted until its dependency is available; a sketch follows.
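
A minimal sketch of such a probe, merged into the existing workload with kubectl patch (the StatefulSet and container names are assumptions; a TCP check on 8848 is the simplest option, an HTTP check against Nacos's health endpoint would work as well):

kubectl -n his patch statefulset his-nacos-v1 -p '
spec:
  template:
    spec:
      containers:
      - name: his-nacos-v1
        livenessProbe:
          tcpSocket:
            port: 8848
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 10
'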

上云部署尚医通

部署mongodb





部署sentinel



leifengyang/sentinel:1.8.2

中间件配置

中间件 集群内地址 外部访问地址
Nacos his-nacos.his:8848 http://139.198.127.220:31695/nacos
MySQL his-mysql-node.his:3306 139.198.127.220:32264
Redis his-redis-node.his:6379 139.198.127.220:32756
Sentinel his-sentinel.his:8080
MongoDB mongodb.his:27017
RabbitMQ his-rabbitmq.his:5672 139.198.127.220:30853
ElasticSearch his-es.his:9200 139.198.127.220:30820

配置nacos的配置文件

service-cmn-prod.yml

server:
port: 8080
mybatis-plus:
configuration:
log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
mapper-locations: classpath:mapper/*.xml
global-config:
db-config:
logic-delete-value: 1
logic-not-delete-value: 0
spring:
cloud:
sentinel:
transport:
dashboard: http://his-sentinel.his:8080
redis:
host: his-redis-node.his
port: 6379
database: 0
timeout: 1800000
password:
lettuce:
pool:
max-active: 20 #最大连接数
max-wait: -1 #最大阻塞等待时间(负数表示没限制)
max-idle: 5 #最大空闲
min-idle: 0 #最小空闲
datasource:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.jdbc.Driver
url: jdbc:mysql://his-mysql-node.his:3306/yygh_cmn?characterEncoding=utf-8&useSSL=false
username: root
password: 123456
hikari:
connection-test-query: SELECT 1
connection-timeout: 60000
idle-timeout: 500000
max-lifetime: 540000
maximum-pool-size: 12
minimum-idle: 10
pool-name: GuliHikariPool
jackson:
date-format: yyyy-MM-dd HH:mm:ss
time-zone: GMT+8

service-hosp-prod.yml

server:
port: 8080
mybatis-plus:
configuration:
log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
mapper-locations: classpath:mapper/*.xml
feign:
sentinel:
enabled: true
client:
config:
default: #配置全局的feign的调用超时时间 如果 有指定的服务配置 默认的配置不会生效
connectTimeout: 30000 # 指定的是 消费者 连接服务提供者的连接超时时间 是否能连接 单位是毫秒
readTimeout: 50000 # 指定的是调用服务提供者的 服务 的超时时间() 单位是毫秒
spring:
main:
allow-bean-definition-overriding: true #当遇到同样名字的时候,是否允许覆盖注册
cloud:
sentinel:
transport:
dashboard: his-sentinel.his:8080
data:
mongodb:
host: mongodb.his
port: 27017
database: yygh_hosps #指定操作的数据库
rabbitmq:
host: his-rabbitmq.his
port: 5672
username: admin
password: admin
datasource:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.jdbc.Driver
url: jdbc:mysql://his-mysql-node.his:3306/yygh_hosp?characterEncoding=utf-8&useSSL=false
username: root
password: 123456
hikari:
connection-test-query: SELECT 1
connection-timeout: 60000
idle-timeout: 500000
max-lifetime: 540000
maximum-pool-size: 12
minimum-idle: 10
pool-name: GuliHikariPool
jackson:
date-format: yyyy-MM-dd HH:mm:ss
time-zone: GMT+8
redis:
host: his-redis-node.his

service-order-prod.yml

server:
port: 8080
mybatis-plus:
configuration:
log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
mapper-locations: classpath:mapper/*.xml
feign:
sentinel:
enabled: true
client:
config:
default: #配置全局的feign的调用超时时间 如果 有指定的服务配置 默认的配置不会生效
connectTimeout: 30000 # 指定的是 消费者 连接服务提供者的连接超时时间 是否能连接 单位是毫秒
readTimeout: 50000 # 指定的是调用服务提供者的 服务 的超时时间() 单位是毫秒
spring:
main:
allow-bean-definition-overriding: true #当遇到同样名字的时候,是否允许覆盖注册
cloud:
sentinel:
transport:
dashboard: http://his-sentinel.his:8080
rabbitmq:
host: his-rabbitmq.his
port: 5672
username: admin
password: admin
datasource:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.jdbc.Driver
url: jdbc:mysql://his-mysql-node.his:3306/yygh_order?characterEncoding=utf-8&useSSL=false
username: root
password: 123456
hikari:
connection-test-query: SELECT 1
connection-timeout: 60000
idle-timeout: 500000
max-lifetime: 540000
maximum-pool-size: 12
minimum-idle: 10
pool-name: GuliHikariPool
jackson:
date-format: yyyy-MM-dd HH:mm:ss
time-zone: GMT+8
redis:
host: his-redis-node.his
# 微信
#weixin:
# appid: wx8397f8696b538317
# partner: 1473426802
# partnerkey: T6m9iK73b0kn9g5v426MKfHQH7X8rKwb
# notifyurl: http://a31ef7db.ngrok.io/WeChatPay/WeChatPayNotify
weixin:
appid: wx74862e0dfcf69954
partner: 1558950191
partnerkey: T6m9iK73b0kn9g5v426MKfHQH7X8rKwb
notifyurl: http://qyben.free.idcfengye.com/api/order/weixin/notify
cert: C:\Users\lfy\Desktop\yygh-parent\service\service-order\src\main\resources\apiclient_cert.p12

service-oss-prod.yml

server:
port: 8205

spring:
cloud:
sentinel:
transport:
dashboard: http://his-sentinel.his:8080
#阿里云 OSS
aliyun:
oss:
file:
endpoint: oss-cn-beijing.aliyuncs.com
keyid: LTAI4FhtGRtRGtPvmLBv8vxk
keysecret: sq8e8WLYoKwJoCNLbjRdlSTaOaFumD
#bucket可以在控制台创建,也可以使用java代码创建
bucketname: online-teach-file

service-sms-prod.yml

server:
port: 8080

spring:
cloud:
sentinel:
transport:
dashboard: http://his-sentinel.his:8080
rabbitmq:
host: his-rabbitmq.his
port: 5672
username: admin
password: admin
redis:
host: his-redis-node.his
#阿里云 短信
aliyun:
sms:
regionId: default
accessKeyId: LTAI0YbQf3pX8WqC
secret: jX8D04DmDI3gGKjW5kaFYSzugfqmmT

service-statistics-prod.yml

server:
port: 8208
feign:
sentinel:
enabled: true
client:
config:
default: #配置全局的feign的调用超时时间 如果 有指定的服务配置 默认的配置不会生效
connectTimeout: 30000 # 指定的是 消费者 连接服务提供者的连接超时时间 是否能连接 单位是毫秒
readTimeout: 50000 # 指定的是调用服务提供者的 服务 的超时时间() 单位是毫秒
spring:
main:
allow-bean-definition-overriding: true #当遇到同样名字的时候,是否允许覆盖注册
cloud:
sentinel:
transport:
dashboard: http://his-sentinel.his:8080

service-task-prod.yml

server:
port: 8080
spring:
cloud:
sentinel:
transport:
dashboard: his-sentinel.his:8080
rabbitmq:
host: his-rabbitmq.his
port: 5672
username: admin
password: admin

service-user-prod.yml

server:
port: 8080
mybatis-plus:
configuration:
log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
mapper-locations: classpath:mapper/*.xml
feign:
sentinel:
enabled: true
client:
config:
default: #配置全局的feign的调用超时时间 如果 有指定的服务配置 默认的配置不会生效
connectTimeout: 30000 # 指定的是 消费者 连接服务提供者的连接超时时间 是否能连接 单位是毫秒
readTimeout: 50000 # 指定的是调用服务提供者的 服务 的超时时间() 单位是毫秒
spring:
main:
allow-bean-definition-overriding: true #当遇到同样名字的时候,是否允许覆盖注册
cloud:
sentinel:
transport:
dashboard: http://his-sentinel.his:8080
datasource:
type: com.zaxxer.hikari.HikariDataSource
driver-class-name: com.mysql.jdbc.Driver
url: jdbc:mysql://his-mysql-node.his:3306/yygh_user?characterEncoding=utf-8&useSSL=false
username: root
password: 123456
hikari:
connection-test-query: SELECT 1
connection-timeout: 60000
idle-timeout: 500000
max-lifetime: 540000
maximum-pool-size: 12
minimum-idle: 10
pool-name: GuliHikariPool
jackson:
date-format: yyyy-MM-dd HH:mm:ss
time-zone: GMT+8
wx:
open:
# 微信开放平台 appid
app_id: wxc606fb748aedee7c
# 微信开放平台 appsecret
app_secret: 073e8e1117c1054b14586c8aa922bc9c
# 微信开放平台 重定向url(需要在微信开放平台配置)
redirect_url: http://qyben.free.idcfengye.com/api/user/weixin/callback
#redirect_url: http://qyben.free.idcfengye.com/api/user/weixin/callback
#wx:
# open:
# # 微信开放平台 appid
# app_id: wxed9954c01bb89b47
# # 微信开放平台 appsecret
# app_secret: 2cf9a4a81b6151d560e9bbc625c3297b
# # 微信开放平台 重定向url(需要在微信开放平台配置)
# redirect_url: http://guli.shop/api/ucenter/wx/callback
yygh:
#预约挂号平台baserul
baseUrl: http://localhost:3000

server-gateway-prod.yml

server:
port: 80
spring:
application:
name: server-gateway
cloud:
nacos:
discovery:
server-addr: his-nacos.his:8848
gateway:
discovery: #是否与服务发现组件进行结合,通过 serviceId(必须设置成大写) 转发到具体的服务实例。默认为false,设为true便开启通过服务中心的自动根据 serviceId 创建路由的功能。
locator: #路由访问方式:http://Gateway_HOST:Gateway_PORT/大写的serviceId/**,其中微服务应用名默认大写访问。
enabled: true
routes:
- id: service-user
uri: lb://service-user
predicates:
- Path=/*/user/**
- id: service-cmn
uri: lb://service-cmn
predicates:
- Path=/*/cmn/**
- id: service-sms
uri: lb://service-sms
predicates:
- Path=/*/sms/**
- id: service-hosp
uri: lb://service-hosp
predicates:
- Path=/*/hosp/**
- id: service-order
uri: lb://service-order
predicates:
- Path=/*/order/**
- id: service-statistics
uri: lb://service-statistics
predicates:
- Path=/*/statistics/**
- id: service-cms
uri: lb://service-cms
predicates:
- Path=/*/cms/**
- id: service-oss
uri: lb://service-oss
predicates:
- Path=/*/oss/**

最后修改hospital-manage

mysql导入sql文件

将项目中的data/sql的sql文件导入到k8s搭建的mysql数据库
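
One way to run the scripts against the in-cluster MySQL from the master node (the root password 123456 is the one configured earlier; node IP and NodePort are placeholders):

# Import every script under data/sql through the exposed NodePort
for f in data/sql/*.sql; do
  mysql -h <node-ip> -P <mysql-nodeport> -uroot -p123456 < "$f"
done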

开启devops

自定义资源 - 搜索”ClusterConfiguration”
ks-installer编辑, 将enabled的值改成true

devops:
enabled: true
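
The same change can be made from the command line, and the installer progress watched with the familiar log command:

# Edit the ClusterConfiguration and set devops.enabled to true
kubectl -n kubesphere-system edit clusterconfiguration ks-installer

# Watch the installer roll out the DevOps component
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f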

创建一个角色self-provisioner分配给账户
然后创建一个devops工程



使用xiamu普通账户创建devops工程, 然后切换到admin管理员进行编辑
点击一下空白处,然后设置label为maven

拉取代码


然后编写一个shell


然后点击运行



可以看见成功的把代码拉取下来了, 并且ls打印出了文件的目录结构

项目打包

编写两个shell命令

ls
mvn clean package -Dmaven.test.skip=true


配置阿里云镜像仓库ks-devops-agent


Then run the pipeline again and check the log: the jars are now downloaded from the Aliyun mirror, which confirms the Maven mirror repository is configured correctly.

构建镜像

For this part, simply copy the Jenkins pipeline definition below and overwrite the existing one.

pipeline {
agent {
node {
label 'maven'
}

}
stages {
stage('拉取代码') {
agent none
steps {
container('maven') {
git(url: 'https://gitee.com/leifengyang/yygh-parent', branch: 'master', changelog: true, poll: false)
sh 'ls -al'
}

}
}

stage('项目编译') {
agent none
steps {
container('maven') {
sh 'ls -al'
sh 'mvn clean package -Dmaven.test.skip=true'
}

}
}

stage('default-2') {
parallel {
stage('构建hospital-manage镜像') {
agent none
steps {
container('maven') {
sh '''ls hospital-manage/target

docker build -t hospital-manage:latest -f hospital-manage/Dockerfile ./hospital-manage/'''
}

}
}

stage('构建server-gateway镜像') {
agent none
steps {
container('maven') {
sh '''ls server-gateway/target

docker build -t server-gateway:latest -f server-gateway/Dockerfile ./server-gateway/'''
}

}
}

stage('构建service-cmn镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-cmn/target

docker build -t service-cmn:latest -f service/service-cmn/Dockerfile ./service/service-cmn/'''
}

}
}

stage('构建service-hosp镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-hosp/target

docker build -t service-hosp:latest -f service/service-hosp/Dockerfile ./service/service-hosp/'''
}

}
}

stage('构建service-order镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-order/target

docker build -t service-order:latest -f service/service-order/Dockerfile ./service/service-order/'''
}

}
}

stage('构建service-oss镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-oss/target

docker build -t service-oss:latest -f service/service-oss/Dockerfile ./service/service-oss/'''
}

}
}

stage('构建service-sms镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-sms/target

docker build -t service-sms:latest -f service/service-sms/Dockerfile ./service/service-sms/'''
}

}
}

stage('构建service-statistics镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-statistics/target

docker build -t service-statistics:latest -f service/service-statistics/Dockerfile ./service/service-statistics/'''
}

}
}

stage('构建service-task镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-task/target

docker build -t service-task:latest -f service/service-task/Dockerfile ./service/service-task/'''
}

}
}

stage('构建service-user镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-user/target

docker build -t service-user:latest -f service/service-user/Dockerfile ./service/service-user/'''
}

}
}


}
}

stage('push latest') {
when {
branch 'master'
}
steps {
container('maven') {
sh 'docker tag $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:latest '
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/$APP_NAME:latest '
}

}
}

stage('deploy to dev') {
steps {
input(id: 'deploy-to-dev', message: 'deploy to dev?')
kubernetesDeploy(configs: 'deploy/dev-ol/**', enableConfigSubstitution: true, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID")
}
}

stage('deploy to production') {
steps {
input(id: 'deploy-to-production', message: 'deploy to production?')
kubernetesDeploy(configs: 'deploy/prod-ol/**', enableConfigSubstitution: true, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID")
}
}

}
environment {
DOCKER_CREDENTIAL_ID = 'dockerhub-id'
GITHUB_CREDENTIAL_ID = 'github-id'
KUBECONFIG_CREDENTIAL_ID = 'demo-kubeconfig'
REGISTRY = 'docker.io'
DOCKERHUB_NAMESPACE = 'docker_username'
GITHUB_ACCOUNT = 'kubesphere'
APP_NAME = 'devops-java-sample'
}
parameters {
string(name: 'TAG_NAME', defaultValue: '', description: '')
}
}

运行测试:

发布镜像



推送镜像到阿里云远程仓库

pipeline {
agent {
node {
label 'maven'
}

}
stages {
stage('拉取代码') {
agent none
steps {
container('maven') {
git(url: 'https://gitee.com/leifengyang/yygh-parent', branch: 'master', changelog: true, poll: false)
sh 'ls -al'
}

}
}

stage('项目编译') {
agent none
steps {
container('maven') {
sh 'ls -al'
sh 'mvn clean package -Dmaven.test.skip=true'
}

}
}

stage('default-2') {
parallel {
stage('构建hospital-manage镜像') {
agent none
steps {
container('maven') {
sh '''ls hospital-manage/target

docker build -t hospital-manage:latest -f hospital-manage/Dockerfile ./hospital-manage/'''
}

}
}

stage('构建server-gateway镜像') {
agent none
steps {
container('maven') {
sh '''ls server-gateway/target

docker build -t server-gateway:latest -f server-gateway/Dockerfile ./server-gateway/'''
}

}
}

stage('构建service-cmn镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-cmn/target

docker build -t service-cmn:latest -f service/service-cmn/Dockerfile ./service/service-cmn/'''
}

}
}

stage('构建service-hosp镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-hosp/target

docker build -t service-hosp:latest -f service/service-hosp/Dockerfile ./service/service-hosp/'''
}

}
}

stage('构建service-order镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-order/target

docker build -t service-order:latest -f service/service-order/Dockerfile ./service/service-order/'''
}

}
}

stage('构建service-oss镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-oss/target

docker build -t service-oss:latest -f service/service-oss/Dockerfile ./service/service-oss/'''
}

}
}

stage('构建service-sms镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-sms/target

docker build -t service-sms:latest -f service/service-sms/Dockerfile ./service/service-sms/'''
}

}
}

stage('构建service-statistics镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-statistics/target

docker build -t service-statistics:latest -f service/service-statistics/Dockerfile ./service/service-statistics/'''
}

}
}

stage('构建service-task镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-task/target

docker build -t service-task:latest -f service/service-task/Dockerfile ./service/service-task/'''
}

}
}

stage('构建service-user镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-user/target

docker build -t service-user:latest -f service/service-user/Dockerfile ./service/service-user/'''
}

}
}

}
}

stage('default-3') {
parallel {
stage('推送hospital-manage镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag hospital-manage:latest $REGISTRY/$DOCKERHUB_NAMESPACE/hospital-manage:SNAPSHOP-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/hospital-manage:SNAPSHOP-$BUILD_NUMBER'
}

}

}
}

stage('推送server-gateway镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag server-gateway:latest $REGISTRY/$DOCKERHUB_NAMESPACE/server-gateway:SNAPSHOP-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/server-gateway:SNAPSHOP-$BUILD_NUMBER'
}

}

}
}

stage('推送service-cmn镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-cmn:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-cmn:SNAPSHOP-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-cmn:SNAPSHOP-$BUILD_NUMBER'
}

}

}
}

stage('推送service-hosp镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-hosp:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-hosp:SNAPSHOP-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-hosp:SNAPSHOP-$BUILD_NUMBER'
}

}

}
}

stage('推送service-order镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-order:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-order:SNAPSHOP-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-order:SNAPSHOP-$BUILD_NUMBER'
}

}

}
}

stage('推送service-oss镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-oss:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-oss:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-oss:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-sms镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-sms:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-sms:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-sms:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-statistics镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-statistics:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-statistics:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-statistics:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-task镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-task:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-task:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-task:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-user镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-user:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-user:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-user:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}
}
}

stage('deploy to dev') {
steps {
input(id: 'deploy-to-dev', message: 'deploy to dev?')
kubernetesDeploy(configs: 'deploy/dev-ol/**', enableConfigSubstitution: true, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID")
}
}

stage('deploy to production') {
steps {
input(id: 'deploy-to-production', message: 'deploy to production?')
kubernetesDeploy(configs: 'deploy/prod-ol/**', enableConfigSubstitution: true, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID")
}
}

}
environment {
DOCKER_CREDENTIAL_ID = 'dockerhub-id'
GITHUB_CREDENTIAL_ID = 'github-id'
KUBECONFIG_CREDENTIAL_ID = 'demo-kubeconfig'
REGISTRY = 'registry.cn-shenzhen.aliyuncs.com'
DOCKERHUB_NAMESPACE = 'roudoukou-ruoyi'
GITHUB_ACCOUNT = 'kubesphere'
APP_NAME = 'devops-java-sample'
}
parameters {
string(name: 'TAG_NAME', defaultValue: '', description: '')
}
}

Deploy to the dev environment

Edit the pipeline and create a credential.


For kubeconfig, keep the default $KUBECONFIG_CREDENTIAL_ID credential.
Change the config file path to hospital-manage/deploy/**


Create a secret named aliyun-docker-hub; the secret name must match the name referenced in deploy.yml.

The registry address is registry.cn-shenzhen.aliyuncs.com; fill in the username and password of your Aliyun Container Registry account, and click Verify while you are at it.
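If you prefer the command line over the KubeSphere UI, roughly the same image-pull secret can be created with kubectl. This is only a sketch; the placeholders and the his namespace are assumptions based on the rest of this guide:

# create the image-pull secret that deploy.yml references by name
kubectl create secret docker-registry aliyun-docker-hub \
  --docker-server=registry.cn-shenzhen.aliyuncs.com \
  --docker-username=<your-acr-username> \
  --docker-password=<your-acr-password> \
  -n his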

Replace '$KUBECONFIG_CREDENTIAL_ID' with "$KUBECONFIG_CREDENTIAL_ID" (single-quoted Groovy strings are not interpolated, so the credential ID would never be resolved).
Add an environment variable ALIYUNHUB_NAMESPACE set to your Aliyun registry namespace (in the complete pipeline below it is 'roudoukou-ruoyi', not the registry address).
Add the Redis configuration to every configuration file in Nacos (it typically goes under the existing spring: block):

redis:
  host: his-redis-node.his

If you are running on Aliyun servers, you can push images over the VPC (private network) endpoint of the registry; the push then travels over internal traffic, which is faster.
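A minimal sketch of what that looks like, assuming the VPC endpoint follows Aliyun's usual registry-vpc.<region>.aliyuncs.com naming (check the registry instance's details page for the exact address):

# log in and push through the VPC endpoint instead of the public one
docker login registry-vpc.cn-shenzhen.aliyuncs.com -u <your-acr-username>
docker tag service-user:latest registry-vpc.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/service-user:SNAPSHOT-1
docker push registry-vpc.cn-shenzhen.aliyuncs.com/roudoukou-ruoyi/service-user:SNAPSHOT-1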

Configure email notifications

There are two things to configure: the pipeline's email notification and the KubeSphere system's email notification.

Enable email notifications.


Remember to change the single quotes around the body to double quotes.

Configure the email address for ks-jenkins; this is the system-level notification, whereas what was configured above is the pipeline notification.

Then configure email under the platform settings.

Then create a DevOps project to test the email service.

Remember to change this variable's single quotes to double quotes: single-quoted Groovy strings are literal, so the variable would not be read, while double-quoted strings interpolate it.

The complete pipeline code:

pipeline {
agent {
node {
label 'maven'
}

}
stages {
stage('拉取代码') {
agent none
steps {
container('maven') {
git(url: 'https://gitee.com/leifengyang/yygh-parent', branch: 'master', changelog: true, poll: false)
sh 'ls -al'
}

}
}

stage('项目编译') {
agent none
steps {
container('maven') {
sh 'ls -al'
sh 'mvn clean package -Dmaven.test.skip=true'
}

}
}

stage('default-2') {
parallel {
stage('构建hospital-manage镜像') {
agent none
steps {
container('maven') {
sh '''ls hospital-manage/target

docker build -t hospital-manage:latest -f hospital-manage/Dockerfile ./hospital-manage/'''
}

}
}

stage('构建server-gateway镜像') {
agent none
steps {
container('maven') {
sh '''ls server-gateway/target

docker build -t server-gateway:latest -f server-gateway/Dockerfile ./server-gateway/'''
}

}
}

stage('构建service-cmn镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-cmn/target

docker build -t service-cmn:latest -f service/service-cmn/Dockerfile ./service/service-cmn/'''
}

}
}

stage('构建service-hosp镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-hosp/target

docker build -t service-hosp:latest -f service/service-hosp/Dockerfile ./service/service-hosp/'''
}

}
}

stage('构建service-order镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-order/target

docker build -t service-order:latest -f service/service-order/Dockerfile ./service/service-order/'''
}

}
}

stage('构建service-oss镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-oss/target

docker build -t service-oss:latest -f service/service-oss/Dockerfile ./service/service-oss/'''
}

}
}

stage('构建service-sms镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-sms/target

docker build -t service-sms:latest -f service/service-sms/Dockerfile ./service/service-sms/'''
}

}
}

stage('构建service-statistics镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-statistics/target

docker build -t service-statistics:latest -f service/service-statistics/Dockerfile ./service/service-statistics/'''
}

}
}

stage('构建service-task镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-task/target

docker build -t service-task:latest -f service/service-task/Dockerfile ./service/service-task/'''
}

}
}

stage('构建service-user镜像') {
agent none
steps {
container('maven') {
sh '''ls service/service-user/target

docker build -t service-user:latest -f service/service-user/Dockerfile ./service/service-user/'''
}

}
}

}
}

stage('default-3') {
parallel {
stage('推送hospital-manage镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag hospital-manage:latest $REGISTRY/$DOCKERHUB_NAMESPACE/hospital-manage:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/hospital-manage:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送server-gateway镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag server-gateway:latest $REGISTRY/$DOCKERHUB_NAMESPACE/server-gateway:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/server-gateway:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-cmn镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-cmn:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-cmn:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-cmn:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-hosp镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-hosp:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-hosp:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-hosp:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-order镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-order:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-order:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-order:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-oss镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-oss:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-oss:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-oss:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-sms镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-sms:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-sms:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-sms:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-statistics镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-statistics:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-statistics:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-statistics:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-task镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-task:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-task:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-task:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('推送service-user镜像') {
agent none
steps {
container('maven') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag service-user:latest $REGISTRY/$DOCKERHUB_NAMESPACE/service-user:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/service-user:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

}
}

stage('default-4') {
parallel {
stage('hospital-manage - 部署到dev环境') {
steps {
kubernetesDeploy(enableConfigSubstitution: true, deleteResource: false, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID", configs: 'hospital-manage/deploy/**')
}
}

stage('server-gateway - 部署到dev环境') {
steps {
kubernetesDeploy(enableConfigSubstitution: true, deleteResource: false, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID", configs: 'server-gateway/deploy/**')
}
}

stage('service-cmn - 部署到dev环境') {
steps {
kubernetesDeploy(enableConfigSubstitution: true, deleteResource: false, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID", configs: 'service/service-cmn/deploy/**')
}
}

stage('service-hosp - 部署到dev环境') {
steps {
kubernetesDeploy(enableConfigSubstitution: true, deleteResource: false, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID", configs: 'service/service-hosp/deploy/**')
}
}

stage('service-order - 部署到dev环境') {
steps {
kubernetesDeploy(enableConfigSubstitution: true, deleteResource: false, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID", configs: 'service/service-order/deploy/**')
}
}

stage('service-oss - 部署到dev环境') {
steps {
kubernetesDeploy(enableConfigSubstitution: true, deleteResource: false, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID", configs: 'service/service-oss/deploy/**')
}
}

stage('service-sms - 部署到dev环境') {
steps {
kubernetesDeploy(enableConfigSubstitution: true, deleteResource: false, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID", configs: 'service/service-sms/deploy/**')
}
}

stage('service-statistics - 部署到dev环境') {
steps {
kubernetesDeploy(enableConfigSubstitution: true, deleteResource: false, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID", configs: 'service/service-statistics/deploy/**')
}
}

stage('service-task - 部署到dev环境') {
steps {
kubernetesDeploy(enableConfigSubstitution: true, deleteResource: false, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID", configs: 'service/service-task/deploy/**')
}
}

stage('service-user - 部署到dev环境') {
steps {
kubernetesDeploy(enableConfigSubstitution: true, deleteResource: false, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID", configs: 'service/service-user/deploy/**')
}
}

}
}

stage('发送确认邮件') {
agent none
steps {
mail(to: '1023876294@qq.com', subject: '构建结果', body: "构建成功了 $BUILD_NUMBER")
}
}

}
environment {
DOCKER_CREDENTIAL_ID = 'dockerhub-id'
GITHUB_CREDENTIAL_ID = 'github-id'
KUBECONFIG_CREDENTIAL_ID = 'demo-kubeconfig'
REGISTRY = 'registry.cn-shenzhen.aliyuncs.com'
DOCKERHUB_NAMESPACE = 'roudoukou-ruoyi'
GITHUB_ACCOUNT = 'kubesphere'
APP_NAME = 'devops-java-sample'
ALIYUNHUB_NAMESPACE = 'roudoukou-ruoyi'
}
parameters {
string(name: 'TAG_NAME', defaultValue: '', description: '')
}
}

Import the MongoDB data

Configure external (public) access for MongoDB.

Connect to MongoDB.

Import the data (a sketch of the client-side commands follows).
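The exact commands depend on how the data is packaged; a hedged sketch with the standard MongoDB tools, where the host, database, and file names are placeholders rather than values from this project:

# restore a BSON dump produced by mongodump
mongorestore --host <mongodb-public-ip> --port 27017 --drop ./dump/

# or import a single JSON collection
mongoimport --host <mongodb-public-ip> --port 27017 --db <db-name> --collection <collection-name> --file <data.json>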

Run the yygh-admin (back office) project locally

Gitee repository addresses:

https://gitee.com/xialonggui/yygh-admin
https://gitee.com/xialonggui/yygh-site

Double-press Shift in IDEA and search for JwtHelper.
Record the generated token and update it in src\store\modules\user.js.

Then modify config/dev.env.js and config/prod.env.js,
pointing them at the exposed server-gateway address.
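To find the address to put there, the gateway's NodePort can be looked up with kubectl (assuming the services were deployed into the his namespace, as elsewhere in this guide):

# show the server-gateway Service and its NodePort
kubectl get svc -n his | grep gateway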

Install the dependencies

# It is recommended to downgrade Node to version 14, otherwise you will hit
# legacy issues such as a mismatched node-sass version.
# The simplest fix is to downgrade Node.js.
node -v
# v14.21.3

# install node-sass
npm i node-sass --sass_binary_site=https://npm.taobao.org/mirrors/node-sass/

# install the remaining dependencies
npm install --registry=https://registry.npm.taobao.org

# run the dev server
npm run dev
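To downgrade Node as the comments above recommend, nvm is one option; a minimal sketch that assumes nvm is already installed:

# install and switch to the latest Node 14 release
nvm install 14
nvm use 14
node -v    # should print a v14.x version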

At this point the project runs locally.
Next, create your own repository on Gitee. Why? Because the gateway address in the repository provided by the instructor (雷丰阳) has not been changed; it has to be changed to your own gateway address, which means you need your own copy of the code.

Deploy the yygh-admin project to the cloud



Wait a moment; the code will be pulled automatically.

webhook

http://139.198.117.67:30880/devops_webhook/git/?url=https://gitee.com/xialonggui/yygh-admin.git

Then modify the Jenkinsfile:

pipeline {
agent {
node {
label 'nodejs'
}

}
stages {
stage('拉取代码') {
agent none
steps {
container('nodejs') {
git(url: 'https://gitee.com/xialonggui/yygh-admin.git', credentialsId: 'gitee-id', branch: 'master', changelog: true, poll: false)
sh 'ls -al'
}

}
}

stage('项目编译') {
agent none
steps {
container('nodejs') {
sh 'npm i node-sass --sass_binary_site=https://npm.taobao.org/mirrors/node-sass/'
sh 'npm install --registry=https://registry.npm.taobao.org'
sh 'npm run build'
sh 'ls'
}

}
}

stage('构建镜像') {
agent none
steps {
container('nodejs') {
sh 'ls'
sh 'docker build -t yygh-admin:latest -f Dockerfile .'
}

}
}

stage('推送镜像') {
agent none
steps {
container('nodejs') {
withCredentials([usernamePassword(credentialsId : 'aliyun-registry' ,usernameVariable : 'DOCKER_USER_VAR' ,passwordVariable : 'DOCKER_PWD_VAR' ,)]) {
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
sh 'docker tag yygh-admin:latest $REGISTRY/$DOCKERHUB_NAMESPACE/yygh-admin:SNAPSHOT-$BUILD_NUMBER'
sh 'docker push $REGISTRY/$DOCKERHUB_NAMESPACE/yygh-admin:SNAPSHOT-$BUILD_NUMBER'
}

}

}
}

stage('部署到dev环境') {
agent none
steps {
kubernetesDeploy(configs: 'deploy/**', enableConfigSubstitution: true, kubeconfigId: "$KUBECONFIG_CREDENTIAL_ID")
}
}

// 1. Configure the platform-wide email: system-level monitoring notifications
// 2. Modify the email settings in the ks-jenkins configuration: this is what lets the pipeline send mail
stage('发送确认邮件') {
agent none
steps {
mail(to: '1023876294@qq.com', subject: 'yygh-admin构建结果', body: "构建成功了 $BUILD_NUMBER")
}
}

}
environment {
DOCKER_CREDENTIAL_ID = 'dockerhub-id'
GITHUB_CREDENTIAL_ID = 'github-id'
KUBECONFIG_CREDENTIAL_ID = 'demo-kubeconfig'
REGISTRY = 'registry.cn-shenzhen.aliyuncs.com'
DOCKERHUB_NAMESPACE = 'roudoukou-ruoyi'
GITHUB_ACCOUNT = 'kubesphere'
APP_NAME = 'devops-java-sample'
ALIYUNHUB_NAMESPACE = 'roudoukou-ruoyi'
}
}

Remember to push the code to the repository.
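A minimal sketch of pushing the modified code, assuming the remote of your working copy still points somewhere else (the repository URL is a placeholder for the one you created):

git remote set-url origin https://gitee.com/<your-account>/yygh-admin.git
git add .
git commit -m "use my own gateway address"
git push origin master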

Then access the exposed port; if the page loads, the admin back end has been deployed successfully.

Run the yygh-site (front end) project locally

Change the base URL in utils/request.js to the gateway address.

npm i

npm run build

npm run start

Then open http://127.0.0.1:3000/ in a browser.

Deploy the yygh-site project to the cloud


webhook

http://139.198.117.67:30880/devops_webhook/git/?url=https://gitee.com/xialonggui/yygh-site.git

Set up the webhook.


After code is pushed, Gitee sends a request to the DevOps pipeline through the webhook, which makes the pipeline pull the code and run.
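Before relying on Gitee, the webhook endpoint can be poked manually to confirm it is reachable; a sketch using the yygh-site URL above (whether a hand-made request actually triggers a run may depend on the KubeSphere version):

# manually send the request Gitee would send on push
curl -X POST "http://139.198.117.67:30880/devops_webhook/git/?url=https://gitee.com/xialonggui/yygh-site.git"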

Let the master also schedule workloads like a node (supplement)

https://www.hangge.com/blog/cache/detail_2431.html

kubectl taint node k8s-master node-role.kubernetes.io/master-
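To confirm the taint is gone, and to restore it later if the master should stop running ordinary pods again, the standard kubectl commands are:

# verify: Taints should now show <none> for the master
kubectl describe node k8s-master | grep Taints

# restore the taint later if needed
kubectl taint node k8s-master node-role.kubernetes.io/master=:NoSchedule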

Uninstall Kubernetes

https://www.cnblogs.com/wangzy-Zj/p/13273351.html

# reset the node and remove kubeadm-generated state
kubeadm reset -f
# unload the IP-in-IP tunnel module and check the loaded modules
modprobe -r ipip
lsmod
# remove kubeconfig, cluster configuration, and binaries
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
# remove CNI configuration and etcd data
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
# clean up the packages
yum clean all
yum remove kube*
