Deploying ZooKeeper and Kafka on Kubernetes

[toc]

The deployment approach used here is as follows:

- Use a custom Debian image as the base image
  - Goal 1: plenty of troubleshooting tools can be baked into it
  - Goal 2: one unified base image is easier to maintain
  - Goal 3: CentOS is no longer maintained, which avoids an awkward situation later
- Persist data through GlusterFS
- Mount the binaries into the pods in the form of a PV and PVC
- The Kafka tarball already bundles ZooKeeper, so only the Kafka binaries are used here

Kafka binary download address

## Build the Debian base image

```dockerfile
FROM debian:11

ENV TZ=Asia/Shanghai
ENV LANG=en_US.UTF-8

RUN echo '' > /etc/apt/sources.list && \
    for i in stable stable-proposed-updates stable-updates; \
    do \
        echo "deb http://mirrors.cloud.aliyuncs.com/debian ${i} main contrib non-free" >> /etc/apt/sources.list; \
        echo "deb-src http://mirrors.cloud.aliyuncs.com/debian ${i} main contrib non-free" >> /etc/apt/sources.list; \
        echo "deb http://mirrors.aliyun.com/debian ${i} main contrib non-free" >> /etc/apt/sources.list; \
        echo "deb-src http://mirrors.aliyun.com/debian ${i} main contrib non-free" >> /etc/apt/sources.list; \
    done && \
    apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends vim \
        curl wget bind9-utils telnet unzip net-tools tree nmap ncat && \
    apt-get clean && apt-get autoclean
```

- `DEBIAN_FRONTEND=noninteractive`: non-interactive mode
- `--no-install-recommends`: tells `apt-get` not to install the packages recommended alongside the requested ones. Debian package dependencies come in two kinds, "Depends" and "Recommends"; with this option only the packages listed under "Depends" are installed and the "Recommends" section is skipped. If you do not need all the recommended packages, this helps keep the installed system leaner.

Build the image:

```shell
docker build -t debian11_amd64_base:v1.0 .
```
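The `RUN` layer above is mostly string templating: it rewrites `/etc/apt/sources.list` with one `deb`/`deb-src` pair per suite and per mirror. As a quick sanity check, the same loop can be replayed locally outside Docker; the temp file below is a stand-in for `/etc/apt/sources.list`, and only the first mirror is used:

```shell
#!/bin/bash
# Rebuild the sources.list content the way the Dockerfile's RUN layer does.
# The temp file stands in for /etc/apt/sources.list; one mirror only, for brevity.
tmp=$(mktemp)
for i in stable stable-proposed-updates stable-updates; do
    echo "deb http://mirrors.cloud.aliyuncs.com/debian ${i} main contrib non-free" >> "${tmp}"
    echo "deb-src http://mirrors.cloud.aliyuncs.com/debian ${i} main contrib non-free" >> "${tmp}"
done
cat "${tmp}"
rm -f "${tmp}"
```

With both mirrors, as in the Dockerfile itself, the loop emits twelve entries in total (2 mirrors x 2 line types x 3 suites).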
## Deploy ZooKeeper

### Configure the namespace

My environment has a symlink in place; the `k` that appears in the commands below stands for `kubectl`.

```shell
k create ns bigdata
```

### Configure the GlusterFS endpoints

As mentioned at the start, persistence here is handled by GlusterFS, which has to be exposed to the Kubernetes cluster through endpoints. For background, see my other posts:

- Deploying a GlusterFS distributed storage system on CentOS 7.6
- Using GlusterFS as persistent storage in a Kubernetes cluster

```yaml
---
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
  name: glusterfs-bigdata
  namespace: bigdata
subsets:
- addresses:
  - ip: 172.72.0.130
  - ip: 172.72.0.131
  ports:
  - port: 49152
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
  name: glusterfs-bigdata
  namespace: bigdata
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP
```

### Configure the PV and PVC

```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
  labels:
    software: bigdata
  name: bigdata-software-pv
spec:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 10Gi
  glusterfs:
    endpoints: glusterfs-bigdata
    path: online-share/kubernetes/software/
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
  labels:
    software: bigdata
  name: bigdata-software-pvc
  namespace: bigdata
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      software: bigdata
```

Check that the PVC is bound:

```shell
k get pvc -n bigdata
```

When created correctly, STATUS reads `Bound`:

```
NAME                   STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS   AGE
bigdata-software-pvc   Bound    bigdata-software-pv   10Gi       ROX                           87s
```

### Configure the ConfigMap

```yaml
---
apiVersion: v1
data:
  startZk.sh: |-
    #!/bin/bash
    set -x
    echo ${POD_NAME##*-} > ${ZK_DATA}/myid
    sed "s|{{ ZK_DATA }}|${ZK_DATA}|g" ${CM_DIR}/zookeeper.properties > ${ZK_CONF}/zookeeper.properties
    echo "" >> ${ZK_CONF}/zookeeper.properties
    n=0
    while (( n < ${REPLICAS} ))
    do
      (( n++ ))
      echo "server.$((n-1))=${APP_NAME}-$((n-1)).${APP_NAME}-svc.${NAMESPACE}.svc.cluster.local:2888:3888" >> ${ZK_CONF}/zookeeper.properties
    done
    cat ${ZK_CONF}/zookeeper.properties
    export KAFKA_HEAP_OPTS="-Xmx${JAVA_OPT_XMX} -Xms${JAVA_OPT_XMS} -Xss512k -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=45 -Djava.io.tmpdir=/tmp -Xloggc:${LOG_DIR}/gc.log -Dsun.net.inetaddr.ttl=10"
    ${ZK_HOME}/bin/zookeeper-server-start.sh ${ZK_CONF}/zookeeper.properties
  zookeeper.properties: |-
    dataDir={{ ZK_DATA }}
    clientPort=2181
    maxClientCnxns=0
    initLimit=1
    syncLimit=1
kind: ConfigMap
metadata:
  annotations:
  labels:
    app: zk
  name: zk-cm
  namespace: bigdata
```
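The interesting part of `startZk.sh` is how each replica derives its identity: the myid comes from the ordinal suffix of the StatefulSet pod name, and the `server.N` ensemble list is generated from `REPLICAS`. The snippet below replays that logic locally; the variable values are illustrative stand-ins for what the downward API and the env section inject into the pod:

```shell
#!/bin/bash
# Replay the identity logic from startZk.sh outside the cluster.
# POD_NAME/APP_NAME/NAMESPACE/REPLICAS are stand-ins for the pod's env vars.
POD_NAME=zk-2 APP_NAME=zk NAMESPACE=bigdata REPLICAS=3

echo "myid: ${POD_NAME##*-}"   # the ordinal parsed from the pod name
n=0
while (( n < REPLICAS )); do
  (( n++ ))
  echo "server.$((n-1))=${APP_NAME}-$((n-1)).${APP_NAME}-svc.${NAMESPACE}.svc.cluster.local:2888:3888"
done
```

Every replica generates the identical `server.N` list; the entry whose index matches its own myid is how each member recognizes itself in the ensemble.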
### Configure the Service

```yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    app: zk
  name: zk-svc
  namespace: bigdata
spec:
  clusterIP: None  # headless, so the per-pod names like zk-0.zk-svc resolve
  ports:
  - name: tcp
    port: 2181
  - name: server
    port: 2888
  - name: elect
    port: 3888
  selector:
    app: zk
```

### Configure the StatefulSet

Before launching the StatefulSet, the nodes need to be labeled, because the pods use both node affinity and pod anti-affinity. ZooKeeper's data is persisted via hostPath, so the pods have to be pinned to specific nodes, and pod anti-affinity restricts each node to at most one ZooKeeper pod so the hostPath data cannot collide.

```shell
k label node 172.72.0.129 zk=
k label node 172.72.0.130 zk=
k label node 172.72.0.131 zk=
```

Create the StatefulSet:

```yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
  labels:
    app: zk
  name: zk
  namespace: bigdata
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zk
  serviceName: zk-svc
  template:
    metadata:
      annotations:
      labels:
        app: zk
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: zk
                operator: Exists
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - zk
            topologyKey: kubernetes.io/hostname
      containers:
      - command:
        - bash
        - /app/zk/cm/startZk.sh
        env:
        - name: APP_NAME
          value: zk
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ZK_HOME
          value: /app/software/kafka_2.12-2.3.0
        - name: REPLICAS
          value: "3"
        - name: ZK_DATA
          value: /app/zk/data
        # LOG_DIR is the env var kafka-run-class.sh uses when it launches zk
        - name: LOG_DIR
          value: /app/zk/log
        - name: ZK_CONF
          value: /app/zk/conf
        - name: CM_DIR
          value: /app/zk/cm
        - name: JAVA_HOME
          value: /app/software/jdk1.8.0_231
        - name: JAVA_OPT_XMS
          value: 512m
        - name: JAVA_OPT_XMX
          value: 512m
        image: debian11_amd64_base:v1.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          tcpSocket:
            port: 2181
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          tcpSocket:
            port: 2181
          failureThreshold: 3
          initialDelaySeconds: 20
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        name: zk
        ports:
        - containerPort: 2181
          name: tcp
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: elect
        volumeMounts:
        - mountPath: /app/zk/data
          name: data
        - mountPath: /app/zk/log
          name: log
        - mountPath: /app/zk/cm
          name: cm
        - mountPath: /app/zk/conf
          name: conf
        - mountPath: /app/software
          name: software
          readOnly: true
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 0
      volumes:
      - emptyDir: {}
        name: log
      - emptyDir: {}
        name: conf
      - configMap:
          name: zk-cm
        name: cm
      - name: software
        persistentVolumeClaim:
          claimName: bigdata-software-pvc
      - hostPath:
          path: /data/k8s_data/zookeeper
          type: DirectoryOrCreate
        name: data
```

## Deploy Kafka

### Configure the ConfigMap

```yaml
---
apiVersion: v1
data:
  server.properties: |-
    broker.id={{ broker.id }}
    broker.rack={{ broker.rack }}
    log.dirs={{ DATA_DIR }}
    listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
    advertised.listeners=INTERNAL://{{ broker.name }}:9092,EXTERNAL://{{ broker.host }}:9093
    listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    inter.broker.listener.name=INTERNAL
    zookeeper.connect={{ ZOOKEEPER_CONNECT }}
    auto.create.topics.enable=false
    default.replication.factor=2
    num.partitions=3
    num.network.threads=3
    num.io.threads=6
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=104857600
    num.recovery.threads.per.data.dir=1
    offsets.topic.replication.factor=2
    transaction.state.log.replication.factor=2
    transaction.state.log.min.isr=2
    log.retention.hours=168
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000
    zookeeper.connection.timeout.ms=6000
    group.initial.rebalance.delay.ms=0
    delete.topic.enable=true
  startKafka.sh: |-
    #!/bin/bash
    set -x
    if [ -f ${DATA_DIR}/meta.properties ]; then
      KAFKA_BROKER_ID=$(awk -F '=' '/broker.id/ {print $NF}' ${DATA_DIR}/meta.properties)
    else
      KAFKA_BROKER_ID=${POD_NAME##*-}
    fi
    ZOOKEEPER_CONNECT="zk-0.zk-svc.bigdata.svc.cluster.local:2181,zk-1.zk-svc.bigdata.svc.cluster.local:2181,zk-2.zk-svc.bigdata.svc.cluster.local:2181"
    sed "s|{{ broker.id }}|${KAFKA_BROKER_ID}|g" ${CM_DIR}/server.properties > ${CONF_DIR}/server.properties
    sed -i "s|{{ broker.rack }}|${NODE_NAME}|g" ${CONF_DIR}/server.properties
    sed -i "s|{{ broker.host }}|${NODE_NAME}|g" ${CONF_DIR}/server.properties
    sed -i "s|{{ broker.name }}|${POD_NAME}.${APP_NAME}-svc.${NAMESPACE}.svc.cluster.local|g" ${CONF_DIR}/server.properties
    sed -i "s|{{ ZOOKEEPER_CONNECT }}|${ZOOKEEPER_CONNECT}|g" ${CONF_DIR}/server.properties
    sed -i "s|{{ DATA_DIR }}|${DATA_DIR}|g" ${CONF_DIR}/server.properties
    cat ${CONF_DIR}/server.properties
    export KAFKA_HEAP_OPTS="-Xmx${JAVA_OPT_XMX} -Xms${JAVA_OPT_XMS} -Xss512k -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=45 -Djava.io.tmpdir=/tmp -Xloggc:${LOG_DIR}/gc.log -Dsun.net.inetaddr.ttl=10"
    ${KAFKA_HOME}/bin/kafka-server-start.sh ${CONF_DIR}/server.properties
    sleep 3
kind: ConfigMap
metadata:
  annotations:
  labels:
    app: kafka
  name: kafka-cm
  namespace: bigdata
```
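`startKafka.sh` is essentially a small template renderer: the first `sed` materializes `server.properties` from the ConfigMap copy (mounted read-only, so it cannot be edited in place), and the `sed -i` calls then fill in the remaining placeholders. Here is a trimmed-down local replay, with temp directories standing in for `CM_DIR`/`CONF_DIR` and illustrative values for the pod's env vars:

```shell
#!/bin/bash
# Render a miniature server.properties template the same way startKafka.sh does.
# Temp dirs stand in for the ConfigMap mount (CM_DIR) and the emptyDir conf mount (CONF_DIR).
CM_DIR=$(mktemp -d); CONF_DIR=$(mktemp -d)
cat > ${CM_DIR}/server.properties <<'EOF'
broker.id={{ broker.id }}
broker.rack={{ broker.rack }}
advertised.listeners=INTERNAL://{{ broker.name }}:9092
EOF

# Illustrative stand-ins for the downward-API env vars.
POD_NAME=kafka-1 APP_NAME=kafka NAMESPACE=bigdata NODE_NAME=172.72.0.130
KAFKA_BROKER_ID=${POD_NAME##*-}

sed "s|{{ broker.id }}|${KAFKA_BROKER_ID}|g" ${CM_DIR}/server.properties > ${CONF_DIR}/server.properties
sed -i "s|{{ broker.rack }}|${NODE_NAME}|g" ${CONF_DIR}/server.properties
sed -i "s|{{ broker.name }}|${POD_NAME}.${APP_NAME}-svc.${NAMESPACE}.svc.cluster.local|g" ${CONF_DIR}/server.properties
cat ${CONF_DIR}/server.properties
```

For pod `kafka-1` on node `172.72.0.130` this yields `broker.id=1`, `broker.rack=172.72.0.130`, and an internal advertised listener pointing at the pod's per-pod DNS name.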
### Configure the Service

```yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    app: kafka
  name: kafka-svc
  namespace: bigdata
spec:
  clusterIP: None
  ports:
  - name: tcp
    port: 9092
    targetPort: 9092
  selector:
    app: kafka
```

### Configure the StatefulSet

```yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
  labels:
    app: kafka
  name: kafka
  namespace: bigdata
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-svc
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - kafka
            topologyKey: kubernetes.io/hostname
      containers:
      - command:
        - /bin/bash
        - -c
        - . ${CM_DIR}/startKafka.sh
        env:
        - name: APP_NAME
          value: kafka
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: KAFKA_HOME
          value: /app/software/kafka_2.12-2.3.0
        - name: DATA_DIR
          value: /app/kafka/data
        - name: LOG_DIR
          value: /app/kafka/log
        - name: CONF_DIR
          value: /app/kafka/conf
        - name: CM_DIR
          value: /app/kafka/configmap
        - name: JAVA_HOME
          value: /app/software/jdk1.8.0_231
        - name: JAVA_OPT_XMS
          value: 512m
        - name: JAVA_OPT_XMX
          value: 512m
        name: kafka
        image: debian11_amd64_base:v1.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 60
          periodSeconds: 20
          successThreshold: 1
          tcpSocket:
            port: kafka
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 20
          periodSeconds: 20
          successThreshold: 1
          tcpSocket:
            port: kafka
          timeoutSeconds: 1
        ports:
        - containerPort: 9092
          hostPort: 9092
          name: kafka
        - containerPort: 9093
          hostPort: 9093
          name: kafkaout
        volumeMounts:
        - mountPath: /app/kafka/data
          name: data
        - mountPath: /app/kafka/log
          name: log
        - mountPath: /app/kafka/configmap
          name: configmap
        - mountPath: /app/kafka/conf
          name: conf
        - mountPath: /app/software
          name: software
          readOnly: true
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 10
      volumes:
      - emptyDir: {}
        name: log
      - emptyDir: {}
        name: conf
      - configMap:
          name: kafka-cm
        name: configmap
      - name: software
        persistentVolumeClaim:
          claimName: bigdata-software-pvc
      - name: data
        hostPath:
          path: /data/k8s_data/kafka
          type: DirectoryOrCreate
```
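One detail in `startKafka.sh` is worth calling out: because the data directory is a hostPath, `meta.properties` survives pod restarts, and the script prefers the `broker.id` recorded there over the pod ordinal. This keeps a broker from coming back with a mismatched id (Kafka refuses to start when the configured id disagrees with `meta.properties`). A local sketch of that fallback; the temp directory and the id value `7` are illustrative:

```shell
#!/bin/bash
# Replay startKafka.sh's broker.id selection locally.
# DATA_DIR is a temp stand-in for the hostPath-backed /app/kafka/data.
DATA_DIR=$(mktemp -d)
POD_NAME=kafka-2

# First start: no meta.properties yet, so fall back to the pod ordinal.
if [ -f ${DATA_DIR}/meta.properties ]; then
  KAFKA_BROKER_ID=$(awk -F '=' '/broker.id/ {print $NF}' ${DATA_DIR}/meta.properties)
else
  KAFKA_BROKER_ID=${POD_NAME##*-}
fi
echo "first start: broker.id=${KAFKA_BROKER_ID}"

# Simulate the file Kafka writes after a successful start, then re-run the check.
printf 'version=0\nbroker.id=7\n' > ${DATA_DIR}/meta.properties
if [ -f ${DATA_DIR}/meta.properties ]; then
  KAFKA_BROKER_ID=$(awk -F '=' '/broker.id/ {print $NF}' ${DATA_DIR}/meta.properties)
fi
echo "restart: broker.id=${KAFKA_BROKER_ID}"
```

On the first run the id comes from the pod name (`kafka-2` gives `2`); once `meta.properties` exists, whatever it records wins.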