KubeSphere DevOps Setup

0-Install a K3s cluster offline

For K3s in a production environment, offline (air-gapped) installation is an unavoidable requirement. Prepare the following three components in your offline environment:

The K3s installation script

The K3s binary

The images K3s depends on

All three components can be downloaded from the K3s Releases page. If you are in mainland China, it is recommended to obtain them from http://mirror.cnrancher.com instead.

Containerd + manually deployed images
Assume you have downloaded the installation script (k3s-install.sh), the K3s binary (k3s), and the airgap image archive (k3s-airgap-images-amd64.tar) of the same K3s version to the /root directory.
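
If the machine preparing these artifacts has internet access, they can be fetched roughly like this (the version below is only an example; substitute the release you actually want, and note that "+" in the tag is URL-encoded as %2B):

# Example only: pick the K3s release you need
K3S_VERSION="v1.28.8%2Bk3s1"
cd /root
# The install script is what https://get.k3s.io serves
curl -Lo k3s-install.sh https://get.k3s.io
# Binary and airgap image archive from the GitHub release assets
curl -Lo k3s "https://github.com/k3s-io/k3s/releases/download/${K3S_VERSION}/k3s"
curl -Lo k3s-airgap-images-amd64.tar "https://github.com/k3s-io/k3s/releases/download/${K3S_VERSION}/k3s-airgap-images-amd64.tar"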

If your container runtime is containerd, K3s checks /var/lib/rancher/k3s/agent/images/ for an available image tarball when it starts and, if one exists, imports those images into the containerd image list. So we only need to place the images K3s depends on in /var/lib/rancher/k3s/agent/images/ and then start K3s.

1. Import the images into the containerd image list

sudo mkdir -p /var/lib/rancher/k3s/agent/images/
sudo cp /root/k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/

 

2. Move the K3s installation script and the K3s binary to the appropriate locations and make them executable

sudo chmod a+x /root/k3s /root/k3s-install.sh
sudo cp /root/k3s /usr/local/bin/

 

3. Install K3s

INSTALL_K3S_SKIP_DOWNLOAD=true /root/k3s-install.sh

After a short wait, you can confirm that K3s has started successfully.
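
For example, check the service and node status:

sudo systemctl status k3s       # the install script registers K3s as a systemd service
sudo k3s kubectl get nodes      # the node should eventually report STATUS Ready
sudo k3s crictl images          # the airgap images should now be in the containerd image list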

4. Configure a proxy for the K3s cluster to make it easier to pull software

On a server node, the environment file is:

/etc/systemd/system/k3s.service.env

On an agent node:

/etc/systemd/system/k3s-agent.service.env

Add the proxy settings to the corresponding file:

HTTP_PROXY=http://192.168.1.60:7897
HTTPS_PROXY=http://192.168.1.60:7897
NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
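
The environment file is only read when the service starts, so restart K3s after editing it:

sudo systemctl daemon-reload
sudo systemctl restart k3s        # on agent nodes: sudo systemctl restart k3s-agent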

1-Install KubeSphere

Run the following commands to start the installation:

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/kubesphere-installer.yaml

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/cluster-configuration.yaml

If you want to enable DevOps during installation, download cluster-configuration.yaml first, search for devops in it, and set enabled to true:

devops:
  enabled: true # Change "false" to "true".

Check the installation logs:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

Run kubectl get pod --all-namespaces to check whether all pods are running normally in the KubeSphere-related namespaces. If they are, check the console port (30880 by default) with the following command:

kubectl get svc/ks-console -n kubesphere-system

Make sure port 30880 is open in your security group, then access the web console via the NodePort (IP:30880) with the default account and password (admin/P@88w0rd).

After logging in to the console, you can check the status of the different components under System Components. Some services may only become usable after the corresponding components are up and running.

2-Enable the DevOps component

Log in to the console as admin, click Platform in the upper-left corner, and select Cluster Management.

Click CRDs, enter clusterconfiguration in the search bar, and click the search result to open its detail page.

Info

A Custom Resource Definition (CRD) allows users to create a new resource type without adding another API server; custom resources can then be used just like native Kubernetes objects.

In Custom Resources, click the three-dot icon on the right of ks-installer and select Edit YAML.

In the YAML file, search for devops and change enabled from false to true. When you are done, click OK in the lower-right corner to save the configuration.

devops:
  enabled: true # Change "false" to "true".

Run the following command with kubectl to watch the installation process:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

Verify the installation of the component with kubectl. Run the following command to check the pod status:

kubectl get pod -n kubesphere-devops-system

If the component is running successfully, the output looks like this:

NAME                              READY   STATUS    RESTARTS   AGE
devops-jenkins-5cbbfbb975-hjnll   1/1     Running   0          40m
s2ioperator-0                     1/1     Running   0          41m

3-Install GitLab

Note: GITLAB_HOST must not be set to an IP address, otherwise the pod fails to start. Example of an SSH remote: ssh://git@192.168.1.33:31022/zyj/zyj-cloud.git

1. Install PostgreSQL: postgresql-rc.yml

apiVersion: v1
kind: ReplicationController
metadata:
  name: postgresql
  namespace: gitlab
spec:
  replicas: 1
  selector:
    name: postgresql
  template:
    metadata:
      name: postgresql
      labels:
        name: postgresql
    spec:
      containers:
      - name: postgresql
        image: sameersbn/postgresql:14-20230628
        env:
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production
        - name: DB_EXTENSION
          value: pg_trgm
        ports:
        - name: postgres
          containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql
          name: data
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        emptyDir: {}

postgresql-svc.yml

apiVersion: v1
kind: Service
metadata:
  name: postgresql
  namespace: gitlab
  labels:
    name: postgresql
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    name: postgresql

2. Install Redis: redis-rc.yml

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis
  namespace: gitlab
spec:
  replicas: 1
  selector:
    name: redis
  template:
    metadata:
      name: redis
      labels:
        name: redis
    spec:
      containers:
      - name: redis
        image: redis:6.2.6
        ports:
        - name: redis
          containerPort: 6379
        volumeMounts:
        - mountPath: /var/lib/redis
          name: data
        livenessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        emptyDir: {}

redis-svc.yml

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: gitlab
  labels:
    name: redis
spec:
  ports:
    - name: redis
      port: 6379
      targetPort: redis
  selector:
    name: redis

3. Install GitLab: gitlab-rc.yml

apiVersion: v1
kind: ReplicationController
metadata:
  name: gitlab
  namespace: gitlab
spec:
  replicas: 1
  selector:
    name: gitlab
  template:
    metadata:
      name: gitlab
      labels:
        name: gitlab
    spec:
      containers:
      - name: gitlab
        image: sameersbn/gitlab:16.10.0
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: GITLAB_TIMEZONE
          value: Asia/Shanghai

        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_SECRETS_SECRET_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_SECRETS_OTP_KEY_BASE
          value: long-and-random-alpha-numeric-string

        - name: GITLAB_ROOT_PASSWORD
          value: jgj123456
        - name: GITLAB_ROOT_EMAIL
          value: zhanglei@zyj.com

        - name: GITLAB_HOST
          value: git.jgj.com
        - name: GITLAB_PORT
          value: "80"
        - name: GITLAB_SSH_PORT
          value: "31022"

        - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
          value: "true"
        - name: GITLAB_NOTIFY_PUSHER
          value: "false"

        - name: GITLAB_BACKUP_SCHEDULE
          value: daily
        - name: GITLAB_BACKUP_TIME
          value: "01:00"

        - name: DB_TYPE
          value: postgres
        - name: DB_HOST
          value: postgresql
        - name: DB_PORT
          value: "5432"
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production

        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"

        - name: SMTP_ENABLED
          value: "false"
        - name: SMTP_DOMAIN
          value: www.example.com
        - name: SMTP_HOST
          value: smtp.gmail.com
        - name: SMTP_PORT
          value: "587"
        - name: SMTP_USER
          value: mailer@example.com
        - name: SMTP_PASS
          value: password
        - name: SMTP_STARTTLS
          value: "true"
        - name: SMTP_AUTHENTICATION
          value: login

        - name: IMAP_ENABLED
          value: "false"
        - name: IMAP_HOST
          value: imap.gmail.com
        - name: IMAP_PORT
          value: "993"
        - name: IMAP_USER
          value: mailer@example.com
        - name: IMAP_PASS
          value: password
        - name: IMAP_SSL
          value: "true"
        - name: IMAP_STARTTLS
          value: "false"
        ports:
        - name: http
          containerPort: 80
        - name: ssh
          containerPort: 31022
        volumeMounts:
        - mountPath: /home/git/data
          name: data
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 180
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        emptyDir: {}


gitlab-svc.yml

apiVersion: v1
kind: Service
metadata:
  name: gitlab
  namespace: gitlab
  labels:
    name: gitlab
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 31080
    - name: ssh
      port: 22
      targetPort: ssh
      nodePort: 31022
  selector:
    name: gitlab
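
The manifests above all target the gitlab namespace, so it must exist before they are applied. A minimal sequence, assuming the file names used in this section:

kubectl create namespace gitlab
kubectl apply -f postgresql-rc.yml -f postgresql-svc.yml
kubectl apply -f redis-rc.yml -f redis-svc.yml
kubectl apply -f gitlab-rc.yml -f gitlab-svc.yml
# GitLab takes a few minutes to start; watch the pods until everything is Running
kubectl -n gitlab get pods -w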

4-Install Nexus3

helm install nexus3 bitnami/nexus3 -f values.yaml

If you specify the root-password secret, the nexus3-root-password secret has to be created manually (see the sketch after the values below). The values.yaml used here:

affinity: {}
caCerts:
  enabled: false
  secret: null
chownDataDir: true
commonLabels: {}
config:
  anonymous:
    enabled: false
    roles:
    - nx-anonymous
    - nx-metrics
  blobStores: []
  cleanup: []
  enabled: false
  ldap:
    authPassword:
      key: null
      secret: null
    authRealm: null
    authScheme: simple
    authUsername: null
    connectionRetryDelaySeconds: 300
    connectionTimeoutSeconds: 30
    enabled: false
    groupBaseDn: null
    groupIdAttribute: null
    groupMemberAttribute: null
    groupMemberFormat: null
    groupObjectClass: null
    groupSubtree: false
    groupType: dynamic
    host: null
    ldapGroupsAsRoles: false
    maxIncidentsCount: 3
    name: null
    port: 636
    protocol: ldaps
    searchBase: null
    useTrustStore: true
    userBaseDn: null
    userEmailAddressAttribute: email
    userIdAttribute: sAMAccountName
    userLdapFilter: null
    userMemberOfAttribute: memberOf
    userObjectClass: user
    userPasswordAttribute: null
    userRealNameAttribute: cn
    userSubtree: false
  realms:
    enabled: false
    values: []
  repoCredentials:
    enabled: false
    secret: null
  repos: []
  roles: []
  rootPassword:
    key: password
    secret: null
  tasks: []
  users: []
deployment: true
env: []
envVars:
  jvmAdditionalMemoryOptions: -XX:MaxDirectMemorySize=2048m
  jvmAdditionalOptions: ""
  jvmMaxHeapSize: 1024m
  jvmMinHeapSize: 1024m
extraInitContainers: []
extraVolumeMounts: []
extraVolumes: []
fullnameOverride: ""
highAvailability:
  enabled: false
  replicas: 3
image:
  pullPolicy: IfNotPresent
  pullSecrets: []
  repository: sonatype/nexus3
  tag: ""
imagePullSecrets: []
ingress:
  annotations: {}
  enabled: false
  hosts: []
  ingressClassName: ""
  tls: []
license:
  enabled: false
  key: nexus.license
  secret: null
livenessProbe:
  failureThreshold: 10
  httpGet:
    path: /service/rest/v1/status
    port: http
  initialDelaySeconds: 60
  periodSeconds: 30
  timeoutSeconds: 1
logback:
  maxHistory: 30
metrics:
  enabled: false
  serviceMonitor:
    additionalLabels: {}
    enabled: false
    endpointConfig: {}
    interval: null
nameOverride: ""
nodeSelector: {}
persistence:
  accessMode: ReadWriteOnce
  annotations: {}
  enabled: true
  size: 10Gi
  storageClass: local-path
plugins: []
podAnnotations: {}
podLabels: {}
podSecurityContext:
  fsGroup: 200
priorityClassName: ""
properties:
- nexus.scripts.allowCreation=true
readinessProbe:
  failureThreshold: 10
  httpGet:
    path: /service/rest/v1/status
    port: http
  initialDelaySeconds: 60
  periodSeconds: 30
  timeoutSeconds: 1
resources: {}
rootPassword:
  key: password
  secret: nexus3-root-password
securityContext:
  runAsGroup: 200
  runAsUser: 200
service:
  additionalPorts: []
  annotations: {}
  clusterIP: null
  nodePort: 30081
  port: 8081
  type: NodePort
serviceAccount:
  annotations: {}
  automountToken: false
  create: true
  labels: {}
  name: ""
storeProperties: []
terminationGracePeriodSeconds: 30
testResources: false
tolerations: []
topologySpreadConstraints: []
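
A minimal sketch of creating that secret before running helm install; the key must match rootPassword.key ("password") in the values above, and the password itself is only a placeholder:

# Create the secret referenced by rootPassword.secret
# (create it in the same namespace the chart will be installed into)
kubectl create secret generic nexus3-root-password \
  --from-literal=password='ChangeMe123!'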

5-Install Harbor

1. Install nginx-ingress-controller with Helm

Changes to values.yml (the values shown in this article are the modified ones; since this is a dev environment, no load balancer is used, which will not be repeated below):

kind: DaemonSet
...
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
service:
  type: NodePort

After modifying values.yml, simply run helm install to install the chart; a sketch follows.
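
For reference, assuming the Bitnami nginx-ingress-controller chart (which matches the value layout shown above); the release name and namespace are placeholders:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# values.yml holds the DaemonSet/hostNetwork/NodePort overrides shown above
helm install nginx-ingress-controller bitnami/nginx-ingress-controller \
  --namespace ingress-nginx --create-namespace -f values.yml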

2. Deploy Harbor from the KubeSphere app store

Preparation: you need to create a workspace, a project, and a user account (project-regular) for this tutorial. The account must be a regular platform user, invited into the project and granted the operator role as the project operator. In this tutorial, log in to the console as project-regular and work in the demo-project project inside the demo-workspace workspace. For more information, see "Create Workspaces, Projects, Users and Roles".

Step 1: Deploy Harbor from the App Store

This example uses HTTP mode and does not enable HTTPS. On the Overview page of the demo-project project, click App Store in the upper-left corner.

Find Harbor and click Install on the app information page.

Set a name and select an app version. Make sure Harbor is deployed in demo-project, then click Next.

On the App Settings page, edit Harbor's configuration:

service:
  type: nodePort
  tls:
    auto:
      commonName: 192.168.1.33
    enable: false
# This is the entry point exposed to the outside for the web console; if it is not set to http, logging in fails with a 403 error.
externalURL: http://192.168.1.33:30002

Step 2: Test the Harbor console and log in to the private registry with Docker

Visit http://192.168.1.33:30002 and log in to the web console with account admin and password Harbor12345 to check that it works.

The Docker client needs the registry declared as insecure in daemon.json:

{
  "insecure-registries": ["192.168.1.33:30002"]
}
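
Restart Docker afterwards so the insecure-registries setting takes effect:

sudo systemctl restart docker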

Log in to the private registry with Docker:

sudo docker login 192.168.1.33:30002 -u admin -p Harbor12345

Related links:

Deploying Harbor from the KubeSphere app store

[Deploy Harbor in KubeSphere]

6-Create a DevOps project

1.  Create a workspace in KubeSphere:
    Workbench -> Platform Resources -> Workspaces -> Create
2.  Enter the workspace, open DevOps Projects on the left, and click Create (this is the DevOps pipeline project, e.g. zyj-gateway).
3.  Enter the workspace, open Projects on the left, and click Create (this project is the namespace kept for the application deployments later on, e.g. zyj-cloud).

7-Deployment preparation

1. Create a Git access credential
   1.1 First create a GitLab account and grant it access to the repository; note down the username and password.
   1.2 Go into the DevOps project and add a credential under DevOps Project Settings. Choose the "username and password" type and fill in the username and password from above.

2. Create a Harbor credential
   2.1 Log in to Harbor and create a project. In the project's Robot Accounts menu, add a robot account and save its name and token.
   2.2 Add a credential of the "username and password" type and fill in the robot account name and token from above.

3. Create a cluster configuration: add a credential of the kubeconfig type; KubeSphere automatically generates a cluster configuration file for the currently logged-in user.

4. Configure the Maven private repository in the cluster: in the kubesphere-devops-worker namespace, find the ks-devops-agent ConfigMap and edit MavenSetting to configure the mirrors:

<mirrors>
  <mirror>
    <id>nexus</id>
    <mirrorOf>public</mirrorOf>
    <url>http://192.168.1.33:31722/repository/maven-public</url>
  </mirror>
  <!-- Fixes Spring Security artifacts not being found -->
  <mirror>
    <id>nexus-central</id>
    <mirrorOf>central</mirrorOf>
    <url>http://192.168.1.33:31722/repository/maven-central/</url>
  </mirror>
</mirrors>

5. Create an imagePullSecret. The -n flag specifies the namespace to deploy into; the imagePullSecret must be in the same namespace as the deployment:

kubectl create secret docker-registry zyjloginharbor --docker-username=<robot account name> --docker-password=<robot account token> --docker-server=192.168.1.33:30002 -n zyj-cloud

This generates a secret value similar to the one below. Because a robot account name may contain special characters, the username can end up wrong; if you see an incorrect username, edit the secret manually. The auth field is the base64 encoding of username:password (a sketch follows the example below).

{"auths":{"192.168.1.33:30002":{"username":"username","password":"password","auth":"dXNlcm5hbWU6cGFzc3dvcmQ="}}}
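
To fix it, the expected auth value can be computed and compared against what was actually stored (the robot account name and token below are placeholders):

# auth is base64("username:password")
echo -n 'robot$zyj+ci:ROBOT-TOKEN' | base64
# Inspect the generated .dockerconfigjson
kubectl -n zyj-cloud get secret zyjloginharbor \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d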

Modify the K3s registry configuration to allow access to the insecure registry:

vi /etc/rancher/k3s/registries.yaml

Add the following content:

mirrors:
  "192.168.1.33:30002":
    endpoint:
      - "http://192.168.1.33:30002"
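
registries.yaml is only read at startup, so restart K3s on every node where it was changed:

sudo systemctl restart k3s        # on agent nodes: sudo systemctl restart k3s-agent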

8-Create the Dockerfile and deployment.yml in the Git repository

Dockerfile

FROM 192.168.1.33:30002/zyj/openjdk:17-jdk-alpine
ADD target/zyj-gateway.jar /opt/zyj-gateway.jar
ENV PORT 80
ENV TIMEZONE "Asia/Shanghai"
ENV PROFILES_ACTIVE ""
RUN ln -sf /usr/share/zoneinfo/$TIMEZONE /etc/localtime
RUN echo $TIMEZONE > /etc/timezone
EXPOSE $PORT
ENTRYPOINT ["java", "-Xms256m","-Xmx512m", "-Dspring.profiles.active=${PROFILES_ACTIVE}", "-Djava.security.egd=file:/dev/./urandom", "-Dserver.port=${PORT}", "-Duser.timezone=${TIMEZONE}", "-jar", "/opt/zyj-gateway.jar"]
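
The Dockerfile expects target/zyj-gateway.jar to exist, so a quick local test of the image outside the pipeline (using Docker here instead of the podman commands the pipeline uses later) might look like this:

# Build the module first so target/zyj-gateway.jar exists
mvn clean package -Dmaven.test.skip=true -pl zyj-gateway -am
# Build and push the image with the same tag the pipeline uses
docker build -t 192.168.1.33:30002/zyj/zyj-gateway:latest zyj-gateway
docker push 192.168.1.33:30002/zyj/zyj-gateway:latest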

deployment.yml

The deployment file also defines a Service that maps port 80 to port 80 of the pod, so that the Ingress can route to it inside the cluster (see the note on ${DATETIME} after the manifest).

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zyj-gateway
  namespace: zyj-cloud
  labels:
    app: zyj-gateway
    "deployed-time": ${DATETIME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zyj-gateway
  template:
    metadata:
      labels:
        app: zyj-gateway
        "deployed-time": ${DATETIME}
    spec:
      imagePullSecrets:
        - name:  zyjloginharbor
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: zyj-gateway
                topologyKey: kubernetes.io/hostname
      containers:
        - name: zyj-gateway
          image: 192.168.1.33:30002/zyj/zyj-gateway:latest
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 512m
              memory: 512Mi
          ports:
            - name: http
              protocol: TCP
              containerPort: 80
          livenessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 5
      restartPolicy: Always

---
apiVersion: v1
kind: Service
metadata:
  name: zyj-gateway-svc
  namespace: zyj-cloud
  labels:
    app: zyj-gateway
spec:
  ports:
    - name: tcp-metrics
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: zyj-gateway
  type: ClusterIP
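
Note that ${DATETIME} is not valid on its own; the pipeline in section 9 substitutes it with envsubst before applying the manifest. Applying it by hand works the same way:

export DATETIME=$(date +%Y-%m-%d_%H-%M-%S)
envsubst < deployment.yml | kubectl apply -f -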

9-Create the pipeline

Create a pipeline in the KubeSphere DevOps project and edit its Jenkinsfile. Example pipeline:

pipeline {
  agent {
    label 'mavenjdk17'
  }
  stages {
    stage('checkout scm') {
      steps {
        git(url: 'http://gitlab.zyj.com/zyj/zyj-cloud.git', credentialsId: 'gitlab-user-password', branch: 'master', changelog: true, poll: false)
      }
    }

    stage('build & push') {
      agent none
      steps {
        container('maven') {
          sh 'mvn clean package -Dmaven.test.skip=true -pl zyj-gateway -am'
          sh '''
                echo '[[registry]]' |  tee -a /etc/containers/registries.conf
                echo 'prefix = "192.168.1.33:30002"' |  tee -a /etc/containers/registries.conf
                echo 'location = "192.168.1.33:30002"' |  tee -a /etc/containers/registries.conf
                echo 'insecure = true' |  tee -a /etc/containers/registries.conf
            '''
          withCredentials([usernamePassword(credentialsId: 'zyj-harbor-token', passwordVariable: 'DOCKER_PASSWORD', usernameVariable: 'DOCKER_USERNAME')]) {
            sh 'echo "$DOCKER_PASSWORD" | podman login $REGISTRY -u "$DOCKER_USERNAME" --password-stdin'
            sh 'cd zyj-gateway && podman build  -t $REGISTRY/$HARBOR_NAMESPACE/$APP_NAME:latest .'
            sh 'podman push  $REGISTRY/$HARBOR_NAMESPACE/$APP_NAME:latest'
          }

        }

      }
    }

    stage('deploy to dev') {
      agent none
      steps {
        container('maven') {
          withCredentials([kubeconfigFile(credentialsId: 'kubeconfig', variable: 'KUBECONFIG')]) {
            sh 'export DATETIME=$(date +%Y-%m-%d_%H-%M-%S) && envsubst < ./zyj-gateway/manifests/deployment.yaml | kubectl apply -f -'
          }

        }

      }
    }

  }
  environment {
    REGISTRY = '192.168.1.33:30002'
    HARBOR_NAMESPACE = 'zyj'
    APP_NAME = 'zyj-gateway'
    HARBOR_CREDENTIAL = credentials('zyj-harbor-token')
    BRANCH_NAME = 'dev'
    PROJECT_NAME = 'kubesphere-sample-dev'
  }
}

For the full configuration, see jenkins.yml in the jenkins-casc-config ConfigMap in the kubesphere-devops-system namespace.

This file defines all of the available agent containers.
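
It can be dumped directly with kubectl:

kubectl -n kubesphere-devops-system get configmap jenkins-casc-config -o yaml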

10-Create an application route

1. Go to the KubeSphere console and click Platform -> Cluster Management.

2. Click Cluster Settings -> Gateway Settings -> Create.

1) Set the access mode to NodePort.

2) Configure the number of worker processes (this determines how many processes handle incoming external requests; increasing it appropriately raises the capacity for handling concurrent requests): worker-processes 4

3. Click Application Workloads -> Routes -> Create.

1) Enter a name, e.g. zyj-gateway.

2) Select a project, e.g. zyj-cloud, and click Next.

3) Click Add Routing Rule.

4) Enter a domain name, e.g. api.dev.zyj.com.

5) Select HTTP as the protocol, keep the default path "/", select the service (e.g. zyj-gateway-svc) and the port (e.g. 80), then click Next.

6) Add metadata: add the annotation nginx.ingress.kubernetes.io/rewrite-target with the value / (this rewrites the /path prefix to the root path / that the backend service can recognize).

Once the above configuration is complete, an Ingress route configuration like the following is generated:

zyj-gateway-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zyj-gateway-ingress
  namespace: zyj-cloud
  annotations:
    kubesphere.io/creator: admin
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: api.dev.zyj.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: zyj-gateway-svc
                port:
                  number: 80

7) Edit the hosts file on the client machine and add an entry mapping api.dev.zyj.com to 192.168.1.33 (see the one-liner below).
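
On Linux or macOS this is a one-liner (on Windows, edit C:\Windows\System32\drivers\etc\hosts instead):

echo "192.168.1.33 api.dev.zyj.com" | sudo tee -a /etc/hosts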

8) Visit api.dev.zyj.com again. A JSON response is returned, which shows that the zyj-gateway service has been deployed and traffic is being forwarded through the route:

{"code":500,"msg":"404 NOT_FOUND"}
