
An example of using SkyWalking images with a Spring Boot application

Contents

1. skywalking-ui fails to connect to the skywalking-oap service

2. Checking the skywalking-oap service status in a k8s environment

3. Starting the service locally from IDEA and connecting to the SkyWalking OAP service

4. Building a skywalking-agent image from apache-skywalking-java-agent-9.4.0.tgz

4.1 The Dockerfile

4.2 "AbstractBuilder.METER_SERVICE" is null error

4.3 The application's Dockerfile

5. YAML for spring-boot-admin using the skywalking-agent

6. MySQL database

7. Elasticsearch

8. Kibana

9. Nacos

10. skywalking-oap

11. skywalking-ui

12. Running result


1. skywalking-ui fails to connect to the skywalking-oap service

skywalking-ui.yml fails at startup with the error below. The skywalking-oap address must be prefixed with the http:// or https:// protocol, otherwise this error is thrown.

 2025-07-12 03:09:11,602 com.linecorp.armeria.common.util.SystemInfo 525 [main] INFO  [] - IPv6: disabled (no IPv6 network interface)

 Exception in thread "main" java.lang.NullPointerException: authority

     at java.base/java.util.Objects.requireNonNull(Unknown Source)

     at com.linecorp.armeria.client.Endpoint.parse(Endpoint.java:97)

     at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)

     at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)

     at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown Source)

     at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Unknown Source)

     at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)

     at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)

     at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)

     at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)

     at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)

     at org.apache.skywalking.oap.server.webapp.OapProxyService.<init>(OapProxyService.java:50)

     at org.apache.skywalking.oap.server.webapp.ApplicationStartUp.main(ApplicationStartUp.java:69)
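
With the protocol prefix added, the UI starts normally. A minimal sketch of the relevant part of skywalking-ui.yml, assuming the UI is configured through the SW_OAP_ADDRESS environment variable of the skywalking-ui image (the full manifest is in section 11):

        env:
          - name: SW_OAP_ADDRESS
            # must include the protocol, e.g. http:// or https://
            value: http://skywalking-oap.default.svc.cluster.local:12800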

2. Checking the skywalking-oap service status in a k8s environment

# Check whether the OAP service exists
kubectl get svc skywalking-oap -n default

# Test OAP connectivity (from inside the cluster)
kubectl run test-pod --rm -it --image=curlimages/curl -- sh
curl -X POST http://skywalking-oap:12800/graphql -H "Content-Type: application/json" -d '{"query": "query { status }"}'
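
The Java agent reports over gRPC rather than the REST port, so it is also worth confirming that the service exposes the gRPC port (11800 by default), for example:

# List all ports exposed by the OAP service (12800 = REST/GraphQL, 11800 = agent gRPC)
kubectl get svc skywalking-oap -n default -o jsonpath='{.spec.ports[*].port}'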

3. Starting the service locally from IDEA and connecting to the SkyWalking OAP service

"D:\Program Files\Java\jdk-17\bin\java" -javaagent:J:\my-example\nacos3.0.1-spring-boot-app\skywalking-agent\skywalking-agent.jar -DSW_AGENT_NAME=spring-boot-admin -DSW_AGENT_COLLECTOR_BACKEND_SERVICES=10.10.10.99:32662 -Dloader.path=config,lib -jar spring-boot-admin.jar

4. Building a skywalking-agent image from apache-skywalking-java-agent-9.4.0.tgz

4.1 The Dockerfile

FROM centos:7.9.2009
USER root

# Define the Arthas home environment variable
ENV ARTHAS_HOME=/opt/arthas

# Switch the YUM repository and clean caches
RUN mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo_bak && \
    rm -rf /etc/yum.repos.d/* && \
    curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo && \
    sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo && \
    yum clean all && yum makecache fast && yum update -y && \
    yum install -y \
      gcc gcc-c++ kernel-devel yum-utils device-mapper-persistent-data lvm2 \
      tcpdump vim git wget net-tools libpcap libpcap-devel automake make \
      pam-devel openldap-devel cyrus-sasl-devel openssl-devel telnet rsync \
      bzip2 iptables lsof curl su-exec expect net-tools \
      gcc-c++ make gd-devel libxml2-devel libcurl-devel libjpeg-devel \
      libpng-devel openssl-devel bison flex \
      glibc-devel libstdc++ && \
    yum clean all && rm -rf /var/cache/yum/*

# Set the time zone
RUN rm -f /etc/localtime && \
    ln -sv /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
    echo "Asia/Shanghai" > /etc/timezone && \
    echo "TZ=Asia/Shanghai" >> /etc/profile

# Install JDK 17
COPY jdk-17.0.14_linux-x64_bin.rpm /home/
RUN rpm -ivh --nodeps /home/jdk-17.0.14_linux-x64_bin.rpm && \
    rm -f /home/jdk-17.0.14_linux-x64_bin.rpm && \
    echo "export JAVA_HOME=/usr/java/jdk-17" >> /etc/profile && \
    echo "export CLASSPATH=.:\$JAVA_HOME/lib" >> /etc/profile && \
    echo "export PATH=\$PATH:\$JAVA_HOME/bin" >> /etc/profile && \
    source /etc/profile
    
COPY arthas-bin /opt/arthas/
ENV ARTHAS_HOME=/opt/arthas
# Make the Arthas files executable
RUN chmod +x $ARTHAS_HOME/*

# Install Arthas (corrected version and download link)
#RUN mkdir -p $ARTHAS_HOME && wget -O $ARTHAS_HOME/arthas-boot.jar https://repo1.maven.org/maven2/com/aliyun/arthas/arthas-boot/3.7.8/arthas-boot-3.7.8.jar && echo "alias arthas='java -jar $ARTHAS_HOME/arthas-boot.jar'" >> /etc/profile

# SkyWalking agent (fix for the missing target directory)
RUN mkdir -p /usr/skywalking/agent  # create the parent directory up front
ADD skywalking-agent /tmp/skywalking-agent
# Verify the extracted directory name; if it is correct, move it into place
RUN if [ -d "/tmp/skywalking-agent" ]; then \
        mv /tmp/skywalking-agent/* /usr/skywalking/agent && \
        ls /usr/skywalking/agent/ && \
        #cp -r /usr/skywalking/agent/optional-plugins/* /usr/skywalking/agent/plugins/ && \
        #cp -r /usr/skywalking/agent/optional-reporter-plugins/* /usr/skywalking/agent/plugins/ && \
        rm -rf /tmp/*; \
    else \
        echo "Error: /tmp/skywalking-agent not found"; \
        exit 1; \
    fi


# System configuration (final fix for the exit code:1 issue)
RUN set -ex && \
    # Install the required components: SELinux tools and the basic iptables command
    yum install -y selinux-policy-targeted policycoreutils iptables && \
    # Set SELinux to disabled in its config file
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config && \
    # Check the SELinux status (parentheses keep the precedence right)
    if (sestatus 2>/dev/null || true) | grep -q "SELinux status:.*enabled"; then \
        # Fault tolerance: do not fail the step even if setenforce fails (e.g. SELinux already disabled)
        setenforce 0 || true; \
    else \
        echo "SELinux is already disabled, skipping setenforce"; \
    fi && \
    # Only flush the rules if the iptables command exists; tolerate failures
    if [ -x "$(command -v iptables)" ]; then \
        iptables -F && iptables -X || true; \
    else \
        echo "iptables command not found, skipping cleanup"; \
    fi && \
    # Clean the yum cache
    yum clean all

# Environment variables
ENV LANG=C.UTF-8 \
    TZ=Asia/Shanghai \
    MYPATH=/ \
    JAVA_HOME=/usr/java/jdk-17 \
    PATH=$PATH:/usr/java/jdk-17/bin

# Entrypoint script
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh

WORKDIR $MYPATH

EXPOSE 22 8080 8888 8563 3568 6123 6122 6124 8081 443

MAINTAINER app

ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]

4.2 "AbstractBuilder.METER_SERVICE" is null error

The error message:

Exception in thread "Thread-1" java.lang.NullPointerException: Cannot invoke "org.apache.skywalking.apm.agent.core.meter.MeterService.register(org.apache.skywalking.apm.agent.core.meter.BaseMeter)" because "org.apache.skywalking.apm.agent.core.meter.AbstractBuilder.METER_SERVICE" is null

Description: the image is built as above; if the following two commands are included, this error occurs when running in the k8s environment:

cp -r /usr/skywalking/agent/optional-plugins/* /usr/skywalking/agent/plugins/ && \
cp -r /usr/skywalking/agent/optional-reporter-plugins/* /usr/skywalking/agent/plugins/ 

So these two lines were commented out when building the image.

The related issue on GitHub:

[Bug] skywalking swck inject bug · apache/skywalking · Discussion #13241
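
As an additional safeguard, the Deployment in section 5 also turns off meter reporting on the agent side via the SW_METER_ACTIVE environment variable (copied from that manifest):

            - name: SW_METER_ACTIVE
              value: "false"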

4.3 The application's Dockerfile

## AdoptOpenJDK stopped publishing OpenJDK binaries; Eclipse Temurin is its successor and offers better stability

FROM openjdk:17-jdk-oracle

## Create the directory and use it as the working directory
RUN mkdir -p /opt/spring-boot-admin/
WORKDIR /opt/spring-boot-admin/
COPY /target/lib /opt/spring-boot-admin/lib
COPY /target/config /opt/spring-boot-admin/config
COPY /target/spring-boot-admin.jar /opt/spring-boot-admin/spring-boot-admin.jar

## Set the TZ time zone
## Set the JAVA_OPTS environment variable; it can be overridden via docker run -e "JAVA_OPTS="
ENV TZ=Asia/Shanghai

## Expose port 8080 of the backend (and 9527 for remote debugging)

EXPOSE 8080 9527
ENV JAVA_OPT="-Xms1024m -Xmx1024m -Xss1m -Xshare:off -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5555 -XX:ReservedCodeCacheSize=50m -XX:+TieredCompilation -XX:TieredStopAtLevel=1 -XX:MaxDirectMemorySize=100m"
ENTRYPOINT exec java $JAVA_OPT -Dloader.path=/opt/spring-boot-admin/config,/opt/spring-boot-admin/lib -jar /opt/spring-boot-admin/spring-boot-admin.jar
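
The Deployment in section 5 references the image simply as spring-boot-admin with imagePullPolicy: IfNotPresent, so the application image should be built (or re-tagged) to match that name on the node, for example:

docker build -t spring-boot-admin:latest .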
 

5. YAML for spring-boot-admin using the skywalking-agent

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-admin
  namespace: default
  labels: 
    app: spring-boot-admin
spec: 
  replicas: 1
  selector:
    matchLabels:
      app: spring-boot-admin
  template:
    metadata:
      labels:
        app: spring-boot-admin
    spec:
      initContainers:
        - name: skywalking-agent
          image: skywalking-agent:2.0
          imagePullPolicy: IfNotPresent
          workingDir: /
          command: ["sh"]
          args:
            [
              "-c",
              "mkdir -p /skywalking/agent && cp -r /usr/skywalking/agent/* /skywalking/agent",
            ]
          volumeMounts:
            - name: skywalking-agent
              mountPath: /skywalking/agent
      containers:
        - name: spring-boot-admin
          image: spring-boot-admin
          imagePullPolicy: IfNotPresent
          env:
            - name: JAVA_TOOL_OPTIONS
              value: -javaagent:/usr/skywalking/agent/skywalking-agent.jar
            - name: SW_AGENT_NAME
              value: spring-boot-admin
            - name: SW_LOGGING_LEVEL
              value: DEBUG
            - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES
              value: skywalking-oap.default.svc.cluster.local:11800
            - name: SW_METER_ACTIVE
              value: "false"
            - name: SERVER_PORT
              value: "8080"
            - name: "JAVA_OPT"
              value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=9527"
          resources:
            limits:
              memory: "2Gi"
            requests:
              memory: "1Gi"
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: debug
              containerPort: 9527
              protocol: TCP
          volumeMounts:
            - name: date
              mountPath: /etc/localtime
            - name: skywalking-agent
              mountPath: /usr/skywalking/agent
      volumes:
        - name: date
          hostPath:
            path: /etc/localtime
        - name: skywalking-agent
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-admin
  namespace: default
  labels:
    app: spring-boot-admin
spec:
  type: NodePort
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      nodePort: 30315
    - name: debug
      port: 9527
      targetPort: 9527
      nodePort: 30316
  selector:
    app: spring-boot-admin
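
Assuming the manifest above is saved as spring-boot-admin.yaml, a quick way to apply it and confirm the agent was attached:

kubectl apply -f spring-boot-admin.yaml
kubectl rollout status deployment/spring-boot-admin -n default
# The JVM prints "Picked up JAVA_TOOL_OPTIONS: -javaagent:..." once the agent is attached
kubectl logs deployment/spring-boot-admin -n default | head -n 20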
 

6. MySQL database

# ======================
# MySQL init configuration (fixes public-key retrieval and user privileges)
# ======================
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-initdb-config
  namespace: default
data:
  01-change-auth.sql: |
    -- Make sure the root user exists and has the correct password
    ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '${MYSQL_ROOT_PASSWORD}';
    
    -- Create/update the root user so it can connect from any host
    CREATE USER IF NOT EXISTS 'root'@'%' IDENTIFIED WITH mysql_native_password BY '${MYSQL_ROOT_PASSWORD}';
    GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
    
    -- Flush privileges so the changes take effect
    FLUSH PRIVILEGES;
---
# ======================
# MySQL main configuration (fixes the wrong variable name)
# ======================
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
  namespace: default
data:
  my.cnf: |
    [mysqld]
    bind-address = 0.0.0.0
    port = 3306
    server-id = 1
    skip-host-cache
    skip-name-resolve
    skip_name_resolve = ON
    datadir = /var/lib/mysql
    socket = /var/run/mysqld/mysqld.sock
    pid-file = /var/run/mysqld/mysqld.pid
    secure-file-priv = /var/lib/mysql-files
    character-set-server = utf8mb4
    collation-server = utf8mb4_unicode_ci
    init_connect = 'SET NAMES utf8mb4, collation_connection = utf8mb4_unicode_ci'
    default-time-zone = '+8:00'
    skip-character-set-client-handshake
    log_bin = mysql-bin
    binlog_format = ROW
    expire_logs_days = 7
    max_binlog_size = 100M
    max_connections = 500
    max_connect_errors = 1000
    wait_timeout = 300
    innodb_buffer_pool_size = 4G
    innodb_log_file_size = 1G
    innodb_flush_log_at_trx_commit = 1
    local_infile = OFF
    
    # Key point: set the authentication plugin using the correct variable name
    default_authentication_plugin = mysql_native_password
    
    [client]
    socket = /var/run/mysqld/mysqld.sock
    default-character-set = utf8mb4
    
    [mysql]
    default-character-set = utf8mb4
---
# ======================
# MySQL Secret (make sure the password is correct)
# ======================
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: default
type: Opaque
data:
  # Confirm the password is the base64 encoding of "123456"
  root-password: MTIzNDU2  # echo -n '123456' | base64
---
# ======================
# MySQL PersistentVolumeClaim
# ======================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: default
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 160Gi
  storageClassName: nfs-client
---
# ======================
# MySQL Service
# ======================
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: default
spec:
  selector:
    app: mysql
  type: ClusterIP
  ports:
    - name: mysql
      protocol: TCP
      port: 3306
      targetPort: 3306
---
# ======================
# MySQL Deployment (fully fixed)
# ======================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: default
spec:
  selector:
    matchLabels:
      app: mysql
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      # Security context
      securityContext:
        runAsUser: 999
        fsGroup: 999
        runAsNonRoot: true
      
      # Volumes
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-pvc
        - name: mysql-config
          configMap:
            name: mysql-config
            items:
              - key: my.cnf
                path: my.cnf
        - name: initdb
          configMap:
            name: mysql-initdb-config
      
      # Main container
      containers:
        - name: mysql
          image: mysql:8.0.33
          imagePullPolicy: IfNotPresent
          args: 
            - "--defaults-file=/etc/mysql/my.cnf"
            - "--character-set-server=utf8mb4"
            - "--collation-server=utf8mb4_unicode_ci"
            - "--default-authentication-plugin=mysql_native_password"  # 强制使用旧版认证
          ports:
            - containerPort: 3306
              name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: root-password
            - name: TZ
              value: Asia/Shanghai
            - name: MYSQL_ROOT_HOST
              value: "%"  # 允许所有主机连接
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
            - name: mysql-config
              mountPath: /etc/mysql/my.cnf
              subPath: my.cnf
            - name: initdb
              mountPath: /docker-entrypoint-initdb.d
          # postStart hook to create the socket directory
          lifecycle:
            postStart:
              exec:
                command:
                  - "/bin/sh"
                  - "-c"
                  - |
                    mkdir -p /var/run/mysqld
                    chown -R mysql:mysql /var/run/mysqld
                    chmod 777 /var/run/mysqld
          livenessProbe:
            exec:
              command:
                - mysqladmin
                - ping
                - "-uroot"
                - "-p$(MYSQL_ROOT_PASSWORD)"
                - "--protocol=socket"
            initialDelaySeconds: 90  # longer delay so MySQL has fully started
            periodSeconds: 20
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
                - mysqladmin
                - ping
                - "-uroot"
                - "-p$(MYSQL_ROOT_PASSWORD)"
                - "--protocol=socket"
            initialDelaySeconds: 60
            periodSeconds: 15
            timeoutSeconds: 5
          resources:
            requests:
              memory: "4Gi"
              cpu: "1000m"
            limits:
              memory: "8Gi"
              cpu: "2000m"
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: false
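
The Nacos configuration in section 9 points db.url.0 at a nacos database on this MySQL instance, so that schema has to exist (and the mysql-schema.sql shipped with the Nacos distribution has to be imported into it). Assuming the 123456 password from the Secret above, the database can be created like this:

kubectl exec -it deployment/mysql -n default -- \
  mysql -uroot -p123456 -e "CREATE DATABASE IF NOT EXISTS nacos DEFAULT CHARACTER SET utf8mb4;"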

7. Elasticsearch

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es7-cluster
  namespace: default
spec:
  serviceName: elasticsearch7
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch7
  template:
    metadata:
      labels:
        app: elasticsearch7
    spec:
      containers:
      - name: elasticsearch7
        image: elasticsearch:7.16.2
        imagePullPolicy: IfNotPresent
        resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-es
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.zen.minimum_master_nodes
            value: "1"
          - name: discovery.seed_hosts
            value: "es7-cluster-0.elasticsearch7"
          - name: cluster.initial_master_nodes
            value: "es7-cluster-0"
          - name: ES_JAVA_OPTS
            value: "-Xms1g -Xmx1g"
      initContainers:
      - name: fix-permissions
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"]
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-client"
      resources:
        requests:
          storage: 200Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch7
  namespace: default
spec:
  selector:
    app: elasticsearch7
  type: NodePort
  ports:
  - port: 9200
    nodePort: 30002
    targetPort: 9200
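
Since the Service is exposed as a NodePort (30002), cluster health can be verified from outside the cluster; replace <node-ip> with the address of any node:

curl http://<node-ip>:30002/_cluster/health?pretty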

8. Kibana

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: default
  labels:
    app: kibana
    component: ui
spec:
  ports:
  - name: http
    port: 5601
    targetPort: 5601
    nodePort: 30001
  type: NodePort
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: default
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: kibana
      annotations:
        co.elastic.logs/enabled: "true"
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        runAsNonRoot: true
      containers:
      - name: kibana
        image: kibana:7.16.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: "500m"
            memory: "1Gi"
          requests:
            cpu: "200m"
            memory: "512Mi"
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch7.default.svc.cluster.local:9200
        - name: SERVER_HOST
          value: "0.0.0.0"
        - name: I18N_LOCALE
          value: "zh-CN"
        - name: SERVER_PUBLICBASEURL
          value: "http://localhost:30001"
        - name: NODE_OPTIONS
          value: "--max-old-space-size=700"
        ports: 
        - containerPort: 5601
          name: http
        livenessProbe:
          httpGet:
            path: /api/status
            port: 5601
          initialDelaySeconds: 120
          periodSeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /api/status
            port: 5601
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 3
        securityContext:
          readOnlyRootFilesystem: true  # keep the root filesystem read-only
          allowPrivilegeEscalation: false
          capabilities:
            drop:
              - ALL
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "sleep 15"]
        volumeMounts:
        - name: kibana-data
          mountPath: /usr/share/kibana/data  # mount the data directory
      volumes:
      - name: kibana-data
        emptyDir: {}  # ephemeral storage (use a persistent volume in production)
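
Kibana is likewise exposed on NodePort 30001 (matching SERVER_PUBLICBASEURL above); once the readiness probe passes, the UI should answer on /api/status:

kubectl get pods -l app=kibana -n default
curl http://<node-ip>:30001/api/status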

9. Nacos

# ======================
# ConfigMap
# ======================
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
  namespace: default
data:
  application.properties: |
    nacos.standalone=true
    nacos.core.member.lookup.type=file
    nacos.server.main.port=8848
    nacos.core.protocol.raft.data.enabled=false
    nacos.core.distro.enabled=false
    nacos.core.cluster.enabled=false
    nacos.core.raft.enabled=false
    nacos.naming.data.warmup=true
    nacos.naming.data.warmup.delay=0
    spring.sql.init.platform=mysql
    db.num=1
    db.url.0=jdbc:mysql://mysql:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
    db.user=${MYSQL_USER}
    db.password=${MYSQL_PASSWORD}
    nacos.core.auth.enabled=true
    nacos.core.auth.server.identity.key=serverIdentity
    nacos.core.auth.server.identity.value=security
    nacos.core.auth.plugin.nacos.token.secret.key=${NACOS_AUTH_TOKEN}
    nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**
    nacos.console.port=12306
    nacos.remote.server.grpc.port=9848
    nacos.core.protocol.raft.data.port=7848
    management.endpoints.web.exposure.include=prometheus
    nacos.server.contextPath=/nacos
    nacos.console.contextPath=
    nacos.console.ui.enabled=true
    server.tomcat.accesslog.enabled=true
    server.tomcat.accesslog.max-days=30
    server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i

  nacos-logback.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration scan="true" scanPeriod="10 seconds">
      <contextName>nacos</contextName>
      <property name="LOG_HOME" value="/home/nacos/logs"/>
      <property name="APP_NAME" value="nacos"/>
      <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
          <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
      </appender>
      <appender name="naming-server" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/naming-server.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
          <fileNamePattern>${LOG_HOME}/naming-server.log.%d{yyyy-MM-dd}.%i</fileNamePattern>
          <maxFileSize>1GB</maxFileSize>
          <maxHistory>7</maxHistory>
          <totalSizeCap>7GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
          <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
      </appender>
      <appender name="config-server" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/config-server.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
          <fileNamePattern>${LOG_HOME}/config-server.log.%d{yyyy-MM-dd}.%i</fileNamePattern>
          <maxFileSize>1GB</maxFileSize>
          <maxHistory>7</maxHistory>
          <totalSizeCap>7GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
          <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
      </appender>
      <appender name="config-dump" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/config-dump.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
          <fileNamePattern>${LOG_HOME}/config-dump.log.%d{yyyy-MM-dd}.%i</fileNamePattern>
          <maxFileSize>1GB</maxFileSize>
          <maxHistory>7</maxHistory>
          <totalSizeCap>7GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
          <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
      </appender>
      <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="naming-server"/>
        <appender-ref ref="config-server"/>
        <appender-ref ref="config-dump"/>
      </root>
      <logger name="com.alibaba.nacos" level="INFO" additivity="false">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="naming-server"/>
        <appender-ref ref="config-server"/>
        <appender-ref ref="config-dump"/>
      </logger>
    </configuration>

---
apiVersion: v1
kind: Secret
metadata:
  name: nacos-db-secret
  namespace: default
type: Opaque
data:
  mysql-user: "cm9vdA=="
  mysql-password: "MTIzNDU2"

---
apiVersion: v1
kind: Secret
metadata:
  name: nacos-token-secret
  namespace: default
type: Opaque
data:
  auth-token: "ZmYxMjM0NTY3ODkwYWFhYmJiY2NjZGRkZWVlZmZmZGRkY2JiYWFiYWExMjM0NTY3ODkwMTIzNDU2Nzg5MDEyMzQ1Njc4OTA="

---
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  namespace: default
  labels:
    app: nacos
spec:
  clusterIP: None
  ports:
    - name: http
      port: 8848
      targetPort: 8848
    - name: rpc
      port: 7848
      targetPort: 7848
    - name: grpc
      port: 9848
      targetPort: 9848
    - name: console
      port: 12306
      targetPort: 12306
  selector:
    app: nacos

---
apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: default
  labels:
    app: nacos
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 8848
      targetPort: 8848
    - name: rpc
      port: 7848
      targetPort: 7848
    - name: grpc
      port: 9848
      targetPort: 9848
    - name: console
      port: 12306
      targetPort: 12306
  selector:
    app: nacos

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
  namespace: default
  labels:
    app: nacos
spec:
  serviceName: nacos-headless
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app: nacos
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8848"
    spec:
      securityContext:
        fsGroup: 1000
      containers:
        - name: nacos
          image: nacos/nacos-server:v3.0.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8848
              name: http
            - containerPort: 7848
              name: rpc
            - containerPort: 9848
              name: grpc
            - containerPort: 12306
              name: console
          #resources:
            #limits:
              #cpu: "1"
              #memory: "1.5Gi"
            #requests:
              #cpu: "500m"
              #memory: "1Gi"
          env:
            - name: MODE
              value: "standalone"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: SPRING_DATASOURCE_PLATFORM
              value: "mysql"
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: NACOS_APPLICATION_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
            - name: TZ
              value: "Asia/Shanghai"
            - name: NACOS_REPLICAS
              value: "1"            
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: nacos-db-secret
                  key: mysql-user
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: nacos-db-secret
                  key: mysql-password
            - name: NACOS_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: nacos-token-secret
                  key: auth-token
            - name: NACOS_SERVER_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: LOG_HOME
              value: "/home/nacos/logs"
            - name: JAVA_OPT
              value: "-Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -Dnacos.standalone=true -Dnacos.core.auth.enabled=true -Djava.security.egd=file:/dev/./urandom -Drocksdb.tmp.path=/home/nacos/tmp -Dserver.max-http-header-size=524288 -Dnacos.core.cluster.enabled=false -Dnacos.core.distro.enabled=false -Dnacos.core.raft.enabled=false -Dnacos.core.protocol.raft.data.enabled=false -Dnacos.naming.data.warmup=true"
            - name: NACOS_TMP_DIR
              value: "/home/nacos/tmp"
          securityContext:
            runAsUser: 1000
            runAsNonRoot: true
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 30"]
          volumeMounts:
            - name: config
              mountPath: /home/nacos/conf/application.properties
              subPath: application.properties
            - name: config
              mountPath: /home/nacos/conf/nacos-logback.xml
              subPath: nacos-logback.xml
            - name: data
              mountPath: /home/nacos/data
            - name: logs
              mountPath: /home/nacos/logs
            - name: rocksdb-tmp
              mountPath: /home/nacos/tmp
          command: ["/bin/sh", "-c"]
          args:
            - |
              exec java $JAVA_OPT \
                -Xlog:gc*:file=/home/nacos/logs/nacos_gc.log:time,tags:filecount=10,filesize=102400 \
                -Dloader.path=/home/nacos/plugins,/home/nacos/plugins/health,/home/nacos/plugins/cmdb,/home/nacos/plugins/selector \
                -Dnacos.home=/home/nacos \
                -jar /home/nacos/target/nacos-server.jar \
                --spring.config.additional-location=file:/home/nacos/conf/ \
                --spring.config.name=application \
                --logging.config=/home/nacos/conf/nacos-logback.xml
      initContainers:
        - name: fix-permissions
          image: busybox:1.35
          imagePullPolicy: IfNotPresent
          command:
            - "/bin/sh"
            - "-c"
            - |
              mkdir -p /home/nacos/tmp
              chown -R 1000:1000 /home/nacos
              chmod 777 /home/nacos/tmp
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: data
              mountPath: /home/nacos/data
            - name: logs
              mountPath: /home/nacos/logs
            - name: rocksdb-tmp
              mountPath: /home/nacos/tmp
      volumes:
        - name: config
          configMap:
            name: nacos-cm
            items:
              - key: application.properties
                path: application.properties
              - key: nacos-logback.xml
                path: nacos-logback.xml
        - name: rocksdb-tmp
          emptyDir: {}
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          type: nacos-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "nfs-client"
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: logs
        labels:
          type: nacos-logs
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "nfs-client"
        resources:
          requests:
            storage: 5Gi
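
After applying the manifests, the console should be reachable through a port-forward; note that the ConfigMap above moves the console to port 12306 (with an empty console contextPath), while the classic API stays on 8848/nacos:

kubectl rollout status statefulset/nacos -n default
kubectl port-forward pod/nacos-0 8848:8848 12306:12306 -n default
# API:     http://localhost:8848/nacos
# Console: http://localhost:12306/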

10. skywalking-oap

apiVersion: v1
kind: ServiceAccount
metadata:
  name: skywalking-oap
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: skywalking-oap
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: skywalking-oap
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: skywalking-oap
subjects:
- kind: ServiceAccount
  name: skywalking-oap
  namespace: default
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: oap-config
  namespace: default
data:
  application.yml: |
    cluster:
      selector: ${SW_CLUSTER:standalone}
      standalone:
      zookeeper:
        namespace: ${SW_NAMESPACE:""}
        hostPort: ${SW_CLUSTER_ZK_HOST_PORT:localhost:2181}
        baseSleepTimeMs: ${SW_CLUSTER_ZK_SLEEP_TIME:1000} # initial amount of time to wait between retries
        maxRetries: ${SW_CLUSTER_ZK_MAX_RETRIES:3} # max number of times to retry
        enableACL: ${SW_ZK_ENABLE_ACL:false} # disable ACL in default
        schema: ${SW_ZK_SCHEMA:digest} # only support digest schema
        expression: ${SW_ZK_EXPRESSION:skywalking:skywalking}
        internalComHost: ${SW_CLUSTER_INTERNAL_COM_HOST:""}
        internalComPort: ${SW_CLUSTER_INTERNAL_COM_PORT:-1}
      kubernetes:
        namespace: ${SW_CLUSTER_K8S_NAMESPACE:default}
        labelSelector: ${SW_CLUSTER_K8S_LABEL:app=collector,release=skywalking}
        uidEnvName: ${SW_CLUSTER_K8S_UID:SKYWALKING_COLLECTOR_UID}
      consul:
        serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
        hostPort: ${SW_CLUSTER_CONSUL_HOST_PORT:localhost:8500}
        aclToken: ${SW_CLUSTER_CONSUL_ACLTOKEN:""}
        internalComHost: ${SW_CLUSTER_INTERNAL_COM_HOST:""}
        internalComPort: ${SW_CLUSTER_INTERNAL_COM_PORT:-1}
      etcd:
        endpoints: ${SW_CLUSTER_ETCD_ENDPOINTS:localhost:2379}
        namespace: ${SW_CLUSTER_ETCD_NAMESPACE:/skywalking}
        serviceName: ${SW_CLUSTER_ETCD_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
        authentication: ${SW_CLUSTER_ETCD_AUTHENTICATION:false}
        user: ${SW_CLUSTER_ETCD_USER:}
        password: ${SW_CLUSTER_ETCD_PASSWORD:}
        internalComHost: ${SW_CLUSTER_INTERNAL_COM_HOST:""}
        internalComPort: ${SW_CLUSTER_INTERNAL_COM_PORT:-1}
      nacos:
        serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
        hostPort: ${SW_CLUSTER_NACOS_HOST_PORT:localhost:8848}
        namespace: ${SW_CLUSTER_NACOS_NAMESPACE:"public"}
        contextPath: ${SW_CLUSTER_NACOS_CONTEXT_PATH:""}
        username: ${SW_CLUSTER_NACOS_USERNAME:""}
        password: ${SW_CLUSTER_NACOS_PASSWORD:""}
        # Nacos auth accessKey
        accessKey: ${SW_CLUSTER_NACOS_ACCESSKEY:""}
        secretKey: ${SW_CLUSTER_NACOS_SECRETKEY:""}
        internalComHost: ${SW_CLUSTER_INTERNAL_COM_HOST:""}
        internalComPort: ${SW_CLUSTER_INTERNAL_COM_PORT:-1}
    core:
      selector: ${SW_CORE:default}
      default:
        # Mixed: Receive agent data, Level 1 aggregate, Level 2 aggregate
        # Receiver: Receive agent data, Level 1 aggregate
        # Aggregator: Level 2 aggregate
        role: ${SW_CORE_ROLE:Mixed} # Mixed/Receiver/Aggregator
        restHost: ${SW_CORE_REST_HOST:0.0.0.0}
        restPort: ${SW_CORE_REST_PORT:12800}
        restContextPath: ${SW_CORE_REST_CONTEXT_PATH:/}
        restMaxThreads: ${SW_CORE_REST_MAX_THREADS:200}
        restIdleTimeOut: ${SW_CORE_REST_IDLE_TIMEOUT:30000}
        restAcceptQueueSize: ${SW_CORE_REST_QUEUE_SIZE:0}
        httpMaxRequestHeaderSize: ${SW_CORE_HTTP_MAX_REQUEST_HEADER_SIZE:8192}
        gRPCHost: ${SW_CORE_GRPC_HOST:0.0.0.0}
        gRPCPort: ${SW_CORE_GRPC_PORT:11800}
        maxConcurrentCallsPerConnection: ${SW_CORE_GRPC_MAX_CONCURRENT_CALL:0}
        maxMessageSize: ${SW_CORE_GRPC_MAX_MESSAGE_SIZE:52428800} #50MB
        gRPCThreadPoolSize: ${SW_CORE_GRPC_THREAD_POOL_SIZE:-1}
        gRPCSslEnabled: ${SW_CORE_GRPC_SSL_ENABLED:false}
        gRPCSslKeyPath: ${SW_CORE_GRPC_SSL_KEY_PATH:""}
        gRPCSslCertChainPath: ${SW_CORE_GRPC_SSL_CERT_CHAIN_PATH:""}
        gRPCSslTrustedCAPath: ${SW_CORE_GRPC_SSL_TRUSTED_CA_PATH:""}
        downsampling:
          - Hour
          - Day
        # Set a timeout on metrics data. After the timeout has expired, the metrics data will automatically be deleted.
        enableDataKeeperExecutor: ${SW_CORE_ENABLE_DATA_KEEPER_EXECUTOR:true} # Turn it off then automatically metrics data delete will be close.
        dataKeeperExecutePeriod: ${SW_CORE_DATA_KEEPER_EXECUTE_PERIOD:5} # How often the data keeper executor runs periodically, unit is minute
        recordDataTTL: ${SW_CORE_RECORD_DATA_TTL:3} # Unit is day
        metricsDataTTL: ${SW_CORE_METRICS_DATA_TTL:7} # Unit is day
        # The period of L1 aggregation flush to L2 aggregation. Unit is ms.
        l1FlushPeriod: ${SW_CORE_L1_AGGREGATION_FLUSH_PERIOD:500}
        # The threshold of session time. Unit is ms. Default value is 70s.
        storageSessionTimeout: ${SW_CORE_STORAGE_SESSION_TIMEOUT:70000}
        # The period of doing data persistence. Unit is second.Default value is 25s
        persistentPeriod: ${SW_CORE_PERSISTENT_PERIOD:25}
        topNReportPeriod: ${SW_CORE_TOPN_REPORT_PERIOD:10} # top_n record worker report cycle, unit is minute
        # Extra model column are the column defined by in the codes, These columns of model are not required logically in aggregation or further query,
        # and it will cause more load for memory, network of OAP and storage.
        # But, being activated, user could see the name in the storage entities, which make users easier to use 3rd party tool, such as Kibana->ES, to query the data by themselves.
        activeExtraModelColumns: ${SW_CORE_ACTIVE_EXTRA_MODEL_COLUMNS:false}
        # The max length of service + instance names should be less than 200
        serviceNameMaxLength: ${SW_SERVICE_NAME_MAX_LENGTH:70}
        # The period(in seconds) of refreshing the service cache. Default value is 10s.
        serviceCacheRefreshInterval: ${SW_SERVICE_CACHE_REFRESH_INTERVAL:10}
        instanceNameMaxLength: ${SW_INSTANCE_NAME_MAX_LENGTH:70}
        # The max length of service + endpoint names should be less than 240
        endpointNameMaxLength: ${SW_ENDPOINT_NAME_MAX_LENGTH:150}
        # Define the set of span tag keys, which should be searchable through the GraphQL.
        # The max length of key=value should be less than 256 or will be dropped.
        searchableTracesTags: ${SW_SEARCHABLE_TAG_KEYS:http.method,http.status_code,rpc.status_code,db.type,db.instance,mq.queue,mq.topic,mq.broker}
        # Define the set of log tag keys, which should be searchable through the GraphQL.
        # The max length of key=value should be less than 256 or will be dropped.
        searchableLogsTags: ${SW_SEARCHABLE_LOGS_TAG_KEYS:level,http.status_code}
        # Define the set of alarm tag keys, which should be searchable through the GraphQL.
        # The max length of key=value should be less than 256 or will be dropped.
        searchableAlarmTags: ${SW_SEARCHABLE_ALARM_TAG_KEYS:level}
        # The max size of tags keys for autocomplete select.
        autocompleteTagKeysQueryMaxSize: ${SW_AUTOCOMPLETE_TAG_KEYS_QUERY_MAX_SIZE:100}
        # The max size of tags values for autocomplete select.
        autocompleteTagValuesQueryMaxSize: ${SW_AUTOCOMPLETE_TAG_VALUES_QUERY_MAX_SIZE:100}
        # The number of threads used to prepare metrics data to the storage.
        prepareThreads: ${SW_CORE_PREPARE_THREADS:2}
        # Turn it on then automatically grouping endpoint by the given OpenAPI definitions.
        enableEndpointNameGroupingByOpenapi: ${SW_CORE_ENABLE_ENDPOINT_NAME_GROUPING_BY_OPENAPI:true}
        # The period of HTTP URI pattern recognition. Unit is second.
        syncPeriodHttpUriRecognitionPattern: ${SW_CORE_SYNC_PERIOD_HTTP_URI_RECOGNITION_PATTERN:10}
        # The training period of HTTP URI pattern recognition. Unit is second.
        trainingPeriodHttpUriRecognitionPattern: ${SW_CORE_TRAINING_PERIOD_HTTP_URI_RECOGNITION_PATTERN:60}
        # The max number of HTTP URIs per service for further URI pattern recognition.
        maxHttpUrisNumberPerService: ${SW_CORE_MAX_HTTP_URIS_NUMBER_PER_SVR:3000}
        # If disable the hierarchy, the service and instance hierarchy relation will not be built. And the query of hierarchy will return empty result.
        # All the hierarchy relations are defined in the `hierarchy-definition.yml`.
        # Notice: some of the configurations only available for kubernetes environments.
        enableHierarchy: ${SW_CORE_ENABLE_HIERARCHY:true}
        # The int value of the max heap memory usage percent. The default value is 96%.
        maxHeapMemoryUsagePercent: ${SW_CORE_MAX_HEAP_MEMORY_USAGE_PERCENT:96}
        # The long value of the max direct memory usage. The default max value is -1, representing no limit. The unit is in bytes.
        maxDirectMemoryUsage: ${SW_CORE_MAX_DIRECT_MEMORY_USAGE:-1}
    storage:
      selector: ${SW_STORAGE:banyandb}
      banyandb:
        # Since 10.2.0, the banyandb configuration is separated to an independent configuration file: `bydb.yaml`.
      elasticsearch:
        namespace: ${SW_NAMESPACE:""}
        clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
        protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
        connectTimeout: ${SW_STORAGE_ES_CONNECT_TIMEOUT:3000}
        socketTimeout: ${SW_STORAGE_ES_SOCKET_TIMEOUT:30000}
        responseTimeout: ${SW_STORAGE_ES_RESPONSE_TIMEOUT:15000}
        numHttpClientThread: ${SW_STORAGE_ES_NUM_HTTP_CLIENT_THREAD:0}
        user: ${SW_ES_USER:""}
        password: ${SW_ES_PASSWORD:""}
        trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
        trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
        secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""} # Secrets management file in the properties format includes the username, password, which are managed by 3rd party tool.
        dayStep: ${SW_STORAGE_DAY_STEP:1} # Represent the number of days in the one minute/hour/day index.
        indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1} # Shard number of new indexes
        indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1} # Replicas number of new indexes
        # Specify the settings for each index individually.
        # If configured, this setting has the highest priority and overrides the generic settings.
        specificIndexSettings: ${SW_STORAGE_ES_SPECIFIC_INDEX_SETTINGS:""}
        # Super data set has been defined in the codes, such as trace segments.The following 3 config would be improve es performance when storage super size data in es.
        superDatasetDayStep: ${SW_STORAGE_ES_SUPER_DATASET_DAY_STEP:-1} # Represent the number of days in the super size dataset record index, the default value is the same as dayStep when the value is less than 0
        superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5} #  This factor provides more shards for the super data set, shards number = indexShardsNumber * superDatasetIndexShardsFactor. Also, this factor effects Zipkin traces.
        superDatasetIndexReplicasNumber: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_REPLICAS_NUMBER:0} # Represent the replicas number in the super size dataset record index, the default value is 0.
        indexTemplateOrder: ${SW_STORAGE_ES_INDEX_TEMPLATE_ORDER:0} # the order of index template
        bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:5000} # Execute the async bulk record data every ${SW_STORAGE_ES_BULK_ACTIONS} requests
        batchOfBytes: ${SW_STORAGE_ES_BATCH_OF_BYTES:10485760} # A threshold to control the max body size of ElasticSearch Bulk flush.
        # flush the bulk every 5 seconds whatever the number of requests
        flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:5}
        concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
        resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
        metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:10000}
        scrollingBatchSize: ${SW_STORAGE_ES_SCROLLING_BATCH_SIZE:5000}
        segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
        profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
        asyncProfilerTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_ASYNC_PROFILER_TASK_SIZE:200}
        profileDataQueryBatchSize: ${SW_STORAGE_ES_QUERY_PROFILE_DATA_BATCH_SIZE:100}
        oapAnalyzer: ${SW_STORAGE_ES_OAP_ANALYZER:"{\"analyzer\":{\"oap_analyzer\":{\"type\":\"stop\"}}}"} # the oap analyzer.
        oapLogAnalyzer: ${SW_STORAGE_ES_OAP_LOG_ANALYZER:"{\"analyzer\":{\"oap_log_analyzer\":{\"type\":\"standard\"}}}"} # the oap log analyzer. It could be customized by the ES analyzer configuration to support more language log formats, such as Chinese log, Japanese log and etc.
        advanced: ${SW_STORAGE_ES_ADVANCED:""}
        # Enable shard metrics and records indices into multi-physical indices, one index template per metric/meter aggregation function or record.
        logicSharding: ${SW_STORAGE_ES_LOGIC_SHARDING:false}
        # Custom routing can reduce the impact of searches. Instead of having to fan out a search request to all the shards in an index, the request can be sent to just the shard that matches the specific routing value (or values).
        enableCustomRouting: ${SW_STORAGE_ES_ENABLE_CUSTOM_ROUTING:false}
      mysql:
        properties:
          jdbcUrl: ${SW_JDBC_URL:"jdbc:mysql://localhost:3306/swtest?rewriteBatchedStatements=true&allowMultiQueries=true"}
          dataSource.user: ${SW_DATA_SOURCE_USER:root}
          dataSource.password: ${SW_DATA_SOURCE_PASSWORD:root@1234}
          dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
          dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
          dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
          dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
        metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
        maxSizeOfBatchSql: ${SW_STORAGE_MAX_SIZE_OF_BATCH_SQL:2000}
        asyncBatchPersistentPoolSize: ${SW_STORAGE_ASYNC_BATCH_PERSISTENT_POOL_SIZE:4}
      postgresql:
        properties:
          jdbcUrl: ${SW_JDBC_URL:"jdbc:postgresql://localhost:5432/skywalking"}
          dataSource.user: ${SW_DATA_SOURCE_USER:postgres}
          dataSource.password: ${SW_DATA_SOURCE_PASSWORD:123456}
          dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
          dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
          dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
          dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
        metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
        maxSizeOfBatchSql: ${SW_STORAGE_MAX_SIZE_OF_BATCH_SQL:2000}
        asyncBatchPersistentPoolSize: ${SW_STORAGE_ASYNC_BATCH_PERSISTENT_POOL_SIZE:4}
    agent-analyzer:
      selector: ${SW_AGENT_ANALYZER:default}
      default:
        # The default sampling rate and the default trace latency time configured by the 'traceSamplingPolicySettingsFile' file.
        traceSamplingPolicySettingsFile: ${SW_TRACE_SAMPLING_POLICY_SETTINGS_FILE:trace-sampling-policy-settings.yml}
        slowDBAccessThreshold: ${SW_SLOW_DB_THRESHOLD:default:200,mongodb:100} # The slow database access thresholds. Unit ms.
        forceSampleErrorSegment: ${SW_FORCE_SAMPLE_ERROR_SEGMENT:true} # When sampling mechanism active, this config can open(true) force save some error segment. true is default.
        segmentStatusAnalysisStrategy: ${SW_SEGMENT_STATUS_ANALYSIS_STRATEGY:FROM_SPAN_STATUS} # Determine the final segment status from the status of spans. Available values are `FROM_SPAN_STATUS` , `FROM_ENTRY_SPAN` and `FROM_FIRST_SPAN`. `FROM_SPAN_STATUS` represents the segment status would be error if any span is in error status. `FROM_ENTRY_SPAN` means the segment status would be determined by the status of entry spans only. `FROM_FIRST_SPAN` means the segment status would be determined by the status of the first span only.
        # Nginx and Envoy agents can't get the real remote address.
        # Exit spans with the component in the list would not generate the client-side instance relation metrics.
        noUpstreamRealAddressAgents: ${SW_NO_UPSTREAM_REAL_ADDRESS:6000,9000}
        meterAnalyzerActiveFiles: ${SW_METER_ANALYZER_ACTIVE_FILES:datasource,threadpool,satellite,go-runtime,python-runtime,continuous-profiling,java-agent,go-agent} # Which files could be meter analyzed, files split by ","
        slowCacheReadThreshold: ${SW_SLOW_CACHE_SLOW_READ_THRESHOLD:default:20,redis:10} # The slow cache read operation thresholds. Unit ms.
        slowCacheWriteThreshold: ${SW_SLOW_CACHE_SLOW_WRITE_THRESHOLD:default:20,redis:10} # The slow cache write operation thresholds. Unit ms.
    log-analyzer:
      selector: ${SW_LOG_ANALYZER:default}
      default:
        lalFiles: ${SW_LOG_LAL_FILES:envoy-als,mesh-dp,mysql-slowsql,pgsql-slowsql,redis-slowsql,k8s-service,nginx,default}
        malFiles: ${SW_LOG_MAL_FILES:"nginx"}
    event-analyzer:
      selector: ${SW_EVENT_ANALYZER:default}
      default:
    receiver-sharing-server:
      selector: ${SW_RECEIVER_SHARING_SERVER:default}
      default:
        # For HTTP server
        restHost: ${SW_RECEIVER_SHARING_REST_HOST:0.0.0.0}
        restPort: ${SW_RECEIVER_SHARING_REST_PORT:0}
        restContextPath: ${SW_RECEIVER_SHARING_REST_CONTEXT_PATH:/}
        restMaxThreads: ${SW_RECEIVER_SHARING_REST_MAX_THREADS:200}
        restIdleTimeOut: ${SW_RECEIVER_SHARING_REST_IDLE_TIMEOUT:30000}
        restAcceptQueueSize: ${SW_RECEIVER_SHARING_REST_QUEUE_SIZE:0}
        httpMaxRequestHeaderSize: ${SW_RECEIVER_SHARING_HTTP_MAX_REQUEST_HEADER_SIZE:8192}
        # For gRPC server
        gRPCHost: ${SW_RECEIVER_GRPC_HOST:0.0.0.0}
        gRPCPort: ${SW_RECEIVER_GRPC_PORT:0}
        maxConcurrentCallsPerConnection: ${SW_RECEIVER_GRPC_MAX_CONCURRENT_CALL:0}
        maxMessageSize: ${SW_RECEIVER_GRPC_MAX_MESSAGE_SIZE:52428800} #50MB
        gRPCThreadPoolSize: ${SW_RECEIVER_GRPC_THREAD_POOL_SIZE:0}
        gRPCSslEnabled: ${SW_RECEIVER_GRPC_SSL_ENABLED:false}
        gRPCSslKeyPath: ${SW_RECEIVER_GRPC_SSL_KEY_PATH:""}
        gRPCSslCertChainPath: ${SW_RECEIVER_GRPC_SSL_CERT_CHAIN_PATH:""}
        gRPCSslTrustedCAsPath: ${SW_RECEIVER_GRPC_SSL_TRUSTED_CAS_PATH:""}
        authentication: ${SW_AUTHENTICATION:""}
    receiver-register:
      selector: ${SW_RECEIVER_REGISTER:default}
      default:
    receiver-trace:
      selector: ${SW_RECEIVER_TRACE:default}
      default:
    receiver-jvm:
      selector: ${SW_RECEIVER_JVM:default}
      default:
    receiver-clr:
      selector: ${SW_RECEIVER_CLR:default}
      default:
    receiver-profile:
      selector: ${SW_RECEIVER_PROFILE:default}
      default:
    receiver-async-profiler:
      selector: ${SW_RECEIVER_ASYNC_PROFILER:default}
      default:
        # Used to manage the maximum size of the jfr file that can be received, the unit is Byte, default is 30M
        jfrMaxSize: ${SW_RECEIVER_ASYNC_PROFILER_JFR_MAX_SIZE:31457280}
        # Used to determine whether to receive jfr in memory file or physical file mode
        #
        # The memory file mode have fewer local file system limitations, so they are by default. But it costs more memory.
        #
        # The physical file mode will use less memory when parsing and is more friendly to parsing large files.
        # However, if the storage of the tmp directory in the container is insufficient, the oap server instance may crash.
        # It is recommended to use physical file mode when volume mounting is used or the tmp directory has sufficient storage.
        memoryParserEnabled: ${SW_RECEIVER_ASYNC_PROFILER_MEMORY_PARSER_ENABLED:true}
    receiver-zabbix:
      selector: ${SW_RECEIVER_ZABBIX:-}
      default:
        port: ${SW_RECEIVER_ZABBIX_PORT:10051}
        host: ${SW_RECEIVER_ZABBIX_HOST:0.0.0.0}
        activeFiles: ${SW_RECEIVER_ZABBIX_ACTIVE_FILES:agent}
    service-mesh:
      selector: ${SW_SERVICE_MESH:default}
      default:
    envoy-metric:
      selector: ${SW_ENVOY_METRIC:default}
      default:
        acceptMetricsService: ${SW_ENVOY_METRIC_SERVICE:true}
        alsHTTPAnalysis: ${SW_ENVOY_METRIC_ALS_HTTP_ANALYSIS:""}
        alsTCPAnalysis: ${SW_ENVOY_METRIC_ALS_TCP_ANALYSIS:""}
        # `k8sServiceNameRule` allows you to customize the service name in ALS via Kubernetes metadata,
        # the available variables are `pod`, `service`, f.e., you can use `${service.metadata.name}-${pod.metadata.labels.version}`
        # to append the version number to the service name.
        # Be careful, when using environment variables to pass this configuration, use single quotes(`''`) to avoid it being evaluated by the shell.
        k8sServiceNameRule: ${K8S_SERVICE_NAME_RULE:"${pod.metadata.labels.(service.istio.io/canonical-name)}.${pod.metadata.namespace}"}
        istioServiceNameRule: ${ISTIO_SERVICE_NAME_RULE:"${serviceEntry.metadata.name}.${serviceEntry.metadata.namespace}"}
        # When looking up service informations from the Istio ServiceEntries, some
        # of the ServiceEntries might be created in several namespaces automatically
        # by some components, and OAP will randomly pick one of them to build the
        # service name, users can use this config to exclude ServiceEntries that
        # they don't want to be used. Comma separated.
        istioServiceEntryIgnoredNamespaces: ${SW_ISTIO_SERVICE_ENTRY_IGNORED_NAMESPACES:""}
        gRPCHost: ${SW_ALS_GRPC_HOST:0.0.0.0}
        gRPCPort: ${SW_ALS_GRPC_PORT:0}
        maxConcurrentCallsPerConnection: ${SW_ALS_GRPC_MAX_CONCURRENT_CALL:0}
        maxMessageSize: ${SW_ALS_GRPC_MAX_MESSAGE_SIZE:0}
        gRPCThreadPoolSize: ${SW_ALS_GRPC_THREAD_POOL_SIZE:0}
        gRPCSslEnabled: ${SW_ALS_GRPC_SSL_ENABLED:false}
        gRPCSslKeyPath: ${SW_ALS_GRPC_SSL_KEY_PATH:""}
        gRPCSslCertChainPath: ${SW_ALS_GRPC_SSL_CERT_CHAIN_PATH:""}
        gRPCSslTrustedCAsPath: ${SW_ALS_GRPC_SSL_TRUSTED_CAS_PATH:""}
    kafka-fetcher:
      selector: ${SW_KAFKA_FETCHER:-}
      default:
        bootstrapServers: ${SW_KAFKA_FETCHER_SERVERS:localhost:9092}
        namespace: ${SW_NAMESPACE:""}
        partitions: ${SW_KAFKA_FETCHER_PARTITIONS:3}
        replicationFactor: ${SW_KAFKA_FETCHER_PARTITIONS_FACTOR:2}
        enableNativeProtoLog: ${SW_KAFKA_FETCHER_ENABLE_NATIVE_PROTO_LOG:true}
        enableNativeJsonLog: ${SW_KAFKA_FETCHER_ENABLE_NATIVE_JSON_LOG:true}
        consumers: ${SW_KAFKA_FETCHER_CONSUMERS:1}
        kafkaHandlerThreadPoolSize: ${SW_KAFKA_HANDLER_THREAD_POOL_SIZE:-1}
        kafkaHandlerThreadPoolQueueSize: ${SW_KAFKA_HANDLER_THREAD_POOL_QUEUE_SIZE:-1}
    cilium-fetcher:
      selector: ${SW_CILIUM_FETCHER:-}
      default:
        peerHost: ${SW_CILIUM_FETCHER_PEER_HOST:hubble-peer.kube-system.svc.cluster.local}
        peerPort: ${SW_CILIUM_FETCHER_PEER_PORT:80}
        fetchFailureRetrySecond: ${SW_CILIUM_FETCHER_FETCH_FAILURE_RETRY_SECOND:10}
        sslConnection: ${SW_CILIUM_FETCHER_SSL_CONNECTION:false}
        sslPrivateKeyFile: ${SW_CILIUM_FETCHER_PRIVATE_KEY_FILE_PATH:}
        sslCertChainFile: ${SW_CILIUM_FETCHER_CERT_CHAIN_FILE_PATH:}
        sslCaFile: ${SW_CILIUM_FETCHER_CA_FILE_PATH:}
        convertClientAsServerTraffic: ${SW_CILIUM_FETCHER_CONVERT_CLIENT_AS_SERVER_TRAFFIC:true}
    receiver-meter:
      selector: ${SW_RECEIVER_METER:default}
      default:
    receiver-otel:
      selector: ${SW_OTEL_RECEIVER:default}
      default:
        enabledHandlers: ${SW_OTEL_RECEIVER_ENABLED_HANDLERS:"otlp-metrics,otlp-logs"}
        enabledOtelMetricsRules: ${SW_OTEL_RECEIVER_ENABLED_OTEL_METRICS_RULES:"apisix,nginx/*,k8s/*,istio-controlplane,vm,mysql/*,postgresql/*,oap,aws-eks/*,windows,aws-s3/*,aws-dynamodb/*,aws-gateway/*,redis/*,elasticsearch/*,rabbitmq/*,mongodb/*,kafka/*,pulsar/*,bookkeeper/*,rocketmq/*,clickhouse/*,activemq/*,kong/*"}
    receiver-zipkin:
      selector: ${SW_RECEIVER_ZIPKIN:-}
      default:
        # Defines a set of span tag keys which are searchable.
        # The max length of key=value should be less than 256 or will be dropped.
        searchableTracesTags: ${SW_ZIPKIN_SEARCHABLE_TAG_KEYS:http.method}
        # The sample rate precision is 1/10000, should be between 0 and 10000
        sampleRate: ${SW_ZIPKIN_SAMPLE_RATE:10000}
        ## The below configs are for OAP collect zipkin trace from HTTP
        enableHttpCollector: ${SW_ZIPKIN_HTTP_COLLECTOR_ENABLED:true}
        restHost: ${SW_RECEIVER_ZIPKIN_REST_HOST:0.0.0.0}
        restPort: ${SW_RECEIVER_ZIPKIN_REST_PORT:9411}
        restContextPath: ${SW_RECEIVER_ZIPKIN_REST_CONTEXT_PATH:/}
        restMaxThreads: ${SW_RECEIVER_ZIPKIN_REST_MAX_THREADS:200}
        restIdleTimeOut: ${SW_RECEIVER_ZIPKIN_REST_IDLE_TIMEOUT:30000}
        restAcceptQueueSize: ${SW_RECEIVER_ZIPKIN_REST_QUEUE_SIZE:0}
        ## The below configs are for OAP collect zipkin trace from kafka
        enableKafkaCollector: ${SW_ZIPKIN_KAFKA_COLLECTOR_ENABLED:false}
        kafkaBootstrapServers: ${SW_ZIPKIN_KAFKA_SERVERS:localhost:9092}
        kafkaGroupId: ${SW_ZIPKIN_KAFKA_GROUP_ID:zipkin}
        kafkaTopic: ${SW_ZIPKIN_KAFKA_TOPIC:zipkin}
        # Kafka consumer config, JSON format as Properties. If it contains the same key with above, would override.
        kafkaConsumerConfig: ${SW_ZIPKIN_KAFKA_CONSUMER_CONFIG:"{\"auto.offset.reset\":\"earliest\",\"enable.auto.commit\":true}"}
        # The Count of the topic consumers
        kafkaConsumers: ${SW_ZIPKIN_KAFKA_CONSUMERS:1}
        kafkaHandlerThreadPoolSize: ${SW_ZIPKIN_KAFKA_HANDLER_THREAD_POOL_SIZE:-1}
        kafkaHandlerThreadPoolQueueSize: ${SW_ZIPKIN_KAFKA_HANDLER_THREAD_POOL_QUEUE_SIZE:-1}
    receiver-browser:
      selector: ${SW_RECEIVER_BROWSER:default}
      default:
        # The sample rate precision is 1/10000. 10000 means 100% sample in default.
        sampleRate: ${SW_RECEIVER_BROWSER_SAMPLE_RATE:10000}
    receiver-log:
      selector: ${SW_RECEIVER_LOG:default}
      default:
    query:
      selector: ${SW_QUERY:graphql}
      graphql:
        # Enable the log testing API to test the LAL.
        # NOTE: This API evaluates untrusted code on the OAP server.
        # A malicious script can do significant damage (steal keys and secrets, remove files and directories, install malware, etc).
        # As such, please enable this API only when you completely trust your users.
        enableLogTestTool: ${SW_QUERY_GRAPHQL_ENABLE_LOG_TEST_TOOL:false}
        # Maximum complexity allowed for the GraphQL query that can be used to
        # abort a query if the total number of data fields queried exceeds the defined threshold.
        maxQueryComplexity: ${SW_QUERY_MAX_QUERY_COMPLEXITY:3000}
        # Allow user add, disable and update UI template
        enableUpdateUITemplate: ${SW_ENABLE_UPDATE_UI_TEMPLATE:false}
        # "On demand log" allows users to fetch Pod containers' log in real time,
        # because this might expose secrets in the logs (if any), users need
        # to enable this manually, and add permissions to OAP cluster role.
        enableOnDemandPodLog: ${SW_ENABLE_ON_DEMAND_POD_LOG:false}
    # This module is for Zipkin query API and support zipkin-lens UI
    query-zipkin:
      selector: ${SW_QUERY_ZIPKIN:-}
      default:
        # For HTTP server
        restHost: ${SW_QUERY_ZIPKIN_REST_HOST:0.0.0.0}
        restPort: ${SW_QUERY_ZIPKIN_REST_PORT:9412}
        restContextPath: ${SW_QUERY_ZIPKIN_REST_CONTEXT_PATH:/zipkin}
        restMaxThreads: ${SW_QUERY_ZIPKIN_REST_MAX_THREADS:200}
        restIdleTimeOut: ${SW_QUERY_ZIPKIN_REST_IDLE_TIMEOUT:30000}
        restAcceptQueueSize: ${SW_QUERY_ZIPKIN_REST_QUEUE_SIZE:0}
        # Default look back for traces and autocompleteTags, 1 day in millis
        lookback: ${SW_QUERY_ZIPKIN_LOOKBACK:86400000}
        # The Cache-Control max-age (seconds) for serviceNames, remoteServiceNames and spanNames
        namesMaxAge: ${SW_QUERY_ZIPKIN_NAMES_MAX_AGE:300}
        ## The below config are OAP support for zipkin-lens UI
        # Default traces query max size
        uiQueryLimit: ${SW_QUERY_ZIPKIN_UI_QUERY_LIMIT:10}
        # Default look back on the UI for search traces, 15 minutes in millis
        uiDefaultLookback: ${SW_QUERY_ZIPKIN_UI_DEFAULT_LOOKBACK:900000}
    #This module is for PromQL API.
    promql:
      selector: ${SW_PROMQL:default}
      default:
        # For HTTP server
        restHost: ${SW_PROMQL_REST_HOST:0.0.0.0}
        restPort: ${SW_PROMQL_REST_PORT:9090}
        restContextPath: ${SW_PROMQL_REST_CONTEXT_PATH:/}
        restMaxThreads: ${SW_PROMQL_REST_MAX_THREADS:200}
        restIdleTimeOut: ${SW_PROMQL_REST_IDLE_TIMEOUT:30000}
        restAcceptQueueSize: ${SW_PROMQL_REST_QUEUE_SIZE:0}
        # The below config is for the API buildInfo, set the value to mock the build info.
        buildInfoVersion: ${SW_PROMQL_BUILD_INFO_VERSION:"2.45.0"}
        buildInfoRevision: ${SW_PROMQL_BUILD_INFO_REVISION:""}
        buildInfoBranch: ${SW_PROMQL_BUILD_INFO_BRANCH:""}
        buildInfoBuildUser: ${SW_PROMQL_BUILD_INFO_BUILD_USER:""}
        buildInfoBuildDate: ${SW_PROMQL_BUILD_INFO_BUILD_DATE:""}
        buildInfoGoVersion: ${SW_PROMQL_BUILD_INFO_GO_VERSION:""}
    #This module is for LogQL API.
    logql:
      selector: ${SW_LOGQL:default}
      default:
        # For HTTP server
        restHost: ${SW_LOGQL_REST_HOST:0.0.0.0}
        restPort: ${SW_LOGQL_REST_PORT:3100}
        restContextPath: ${SW_LOGQL_REST_CONTEXT_PATH:/}
        restMaxThreads: ${SW_LOGQL_REST_MAX_THREADS:200}
        restIdleTimeOut: ${SW_LOGQL_REST_IDLE_TIMEOUT:30000}
        restAcceptQueueSize: ${SW_LOGQL_REST_QUEUE_SIZE:0}
    alarm:
      selector: ${SW_ALARM:default}
      default:
    telemetry:
      selector: ${SW_TELEMETRY:prometheus}
      none:
      prometheus:
        host: ${SW_TELEMETRY_PROMETHEUS_HOST:0.0.0.0}
        port: ${SW_TELEMETRY_PROMETHEUS_PORT:1234}
        sslEnabled: ${SW_TELEMETRY_PROMETHEUS_SSL_ENABLED:false}
        sslKeyPath: ${SW_TELEMETRY_PROMETHEUS_SSL_KEY_PATH:""}
        sslCertChainPath: ${SW_TELEMETRY_PROMETHEUS_SSL_CERT_CHAIN_PATH:""}
    configuration:
      selector: ${SW_CONFIGURATION:none}
      none:
      grpc:
        host: ${SW_DCS_SERVER_HOST:""}
        port: ${SW_DCS_SERVER_PORT:80}
        clusterName: ${SW_DCS_CLUSTER_NAME:SkyWalking}
        period: ${SW_DCS_PERIOD:20}
        maxInboundMessageSize: ${SW_DCS_MAX_INBOUND_MESSAGE_SIZE:4194304}
      apollo:
        apolloMeta: ${SW_CONFIG_APOLLO:http://localhost:8080}
        apolloCluster: ${SW_CONFIG_APOLLO_CLUSTER:default}
        apolloEnv: ${SW_CONFIG_APOLLO_ENV:""}
        appId: ${SW_CONFIG_APOLLO_APP_ID:skywalking}
      zookeeper:
        period: ${SW_CONFIG_ZK_PERIOD:60} # Unit seconds, sync period. Default fetch every 60 seconds.
        namespace: ${SW_CONFIG_ZK_NAMESPACE:/default}
        hostPort: ${SW_CONFIG_ZK_HOST_PORT:localhost:2181}
        # Retry Policy
        baseSleepTimeMs: ${SW_CONFIG_ZK_BASE_SLEEP_TIME_MS:1000} # initial amount of time to wait between retries
        maxRetries: ${SW_CONFIG_ZK_MAX_RETRIES:3} # max number of times to retry
      etcd:
        period: ${SW_CONFIG_ETCD_PERIOD:60} # Unit seconds, sync period. Default fetch every 60 seconds.
        endpoints: ${SW_CONFIG_ETCD_ENDPOINTS:http://localhost:2379}
        namespace: ${SW_CONFIG_ETCD_NAMESPACE:/skywalking}
        authentication: ${SW_CONFIG_ETCD_AUTHENTICATION:false}
        user: ${SW_CONFIG_ETCD_USER:}
        password: ${SW_CONFIG_ETCD_password:}
      consul:
        # Consul host and ports, separated by comma, e.g. 1.2.3.4:8500,2.3.4.5:8500
        hostAndPorts: ${SW_CONFIG_CONSUL_HOST_AND_PORTS:1.2.3.4:8500}
        # Sync period in seconds. Defaults to 60 seconds.
        period: ${SW_CONFIG_CONSUL_PERIOD:60}
        # Consul aclToken
        aclToken: ${SW_CONFIG_CONSUL_ACL_TOKEN:""}
      k8s-configmap:
        period: ${SW_CONFIG_CONFIGMAP_PERIOD:60}
        namespace: ${SW_CLUSTER_K8S_NAMESPACE:default}
        labelSelector: ${SW_CLUSTER_K8S_LABEL:app=collector,release=skywalking}
      nacos:
        # Nacos Server Host
        serverAddr: ${SW_CONFIG_NACOS_SERVER_ADDR:127.0.0.1}
        # Nacos Server Port
        port: ${SW_CONFIG_NACOS_SERVER_PORT:8848}
        # Nacos Configuration Group
        group: ${SW_CONFIG_NACOS_SERVER_GROUP:skywalking}
        # Nacos Configuration namespace
        namespace: ${SW_CONFIG_NACOS_SERVER_NAMESPACE:public}
        # Unit seconds, sync period. Default fetch every 60 seconds.
        period: ${SW_CONFIG_NACOS_PERIOD:60}
        # Nacos auth username
        username: ${SW_CONFIG_NACOS_USERNAME:nacos}
        password: ${SW_CONFIG_NACOS_PASSWORD:nacos}
        # Nacos auth accessKey
        accessKey: ${SW_CONFIG_NACOS_ACCESSKEY:""}
        secretKey: ${SW_CONFIG_NACOS_SECRETKEY:""}
    exporter:
      selector: ${SW_EXPORTER:-}
      default:
        # gRPC exporter
        enableGRPCMetrics: ${SW_EXPORTER_ENABLE_GRPC_METRICS:false}
        gRPCTargetHost: ${SW_EXPORTER_GRPC_HOST:127.0.0.1}
        gRPCTargetPort: ${SW_EXPORTER_GRPC_PORT:9870}
        # Kafka exporter
        enableKafkaTrace: ${SW_EXPORTER_ENABLE_KAFKA_TRACE:false}
        enableKafkaLog: ${SW_EXPORTER_ENABLE_KAFKA_LOG:false}
        kafkaBootstrapServers: ${SW_EXPORTER_KAFKA_SERVERS:localhost:9092}
        # Kafka producer config, JSON format as Properties.
        kafkaProducerConfig: ${SW_EXPORTER_KAFKA_PRODUCER_CONFIG:""}
        kafkaTopicTrace: ${SW_EXPORTER_KAFKA_TOPIC_TRACE:skywalking-export-trace}
        kafkaTopicLog: ${SW_EXPORTER_KAFKA_TOPIC_LOG:skywalking-export-log}
        exportErrorStatusTraceOnly: ${SW_EXPORTER_KAFKA_TRACE_FILTER_ERROR:false}
    health-checker:
      selector: ${SW_HEALTH_CHECKER:-}
      default:
        checkIntervalSeconds: ${SW_HEALTH_CHECKER_INTERVAL_SECONDS:5}
    status-query:
      selector: ${SW_STATUS_QUERY:default}
      default:
        # Include the list of keywords to filter configurations including secrets. Separate keywords by a comma.
        keywords4MaskingSecretsOfConfig: ${SW_DEBUGGING_QUERY_KEYWORDS_FOR_MASKING_SECRETS:user,password,token,accessKey,secretKey,authentication}
    configuration-discovery:
      selector: ${SW_CONFIGURATION_DISCOVERY:default}
      default:
        disableMessageDigest: ${SW_DISABLE_MESSAGE_DIGEST:false}
    receiver-event:
      selector: ${SW_RECEIVER_EVENT:default}
      default:
    receiver-ebpf:
      selector: ${SW_RECEIVER_EBPF:default}
      default:
        # The continuous profiling policy cache time, Unit is second.
        continuousPolicyCacheTimeout: ${SW_CONTINUOUS_POLICY_CACHE_TIMEOUT:60}
        gRPCHost: ${SW_EBPF_GRPC_HOST:0.0.0.0}
        gRPCPort: ${SW_EBPF_GRPC_PORT:0}
        maxConcurrentCallsPerConnection: ${SW_EBPF_GRPC_MAX_CONCURRENT_CALL:0}
        maxMessageSize: ${SW_EBPF_ALS_GRPC_MAX_MESSAGE_SIZE:0}
        gRPCThreadPoolSize: ${SW_EBPF_GRPC_THREAD_POOL_SIZE:0}
        gRPCSslEnabled: ${SW_EBPF_GRPC_SSL_ENABLED:false}
        gRPCSslKeyPath: ${SW_EBPF_GRPC_SSL_KEY_PATH:""}
        gRPCSslCertChainPath: ${SW_EBPF_GRPC_SSL_CERT_CHAIN_PATH:""}
        gRPCSslTrustedCAsPath: ${SW_EBPF_GRPC_SSL_TRUSTED_CAS_PATH:""}
    receiver-telegraf:
      selector: ${SW_RECEIVER_TELEGRAF:default}
      default:
        activeFiles: ${SW_RECEIVER_TELEGRAF_ACTIVE_FILES:vm}
    aws-firehose:
      selector: ${SW_RECEIVER_AWS_FIREHOSE:default}
      default:
        host: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_HOST:0.0.0.0}
        port: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_PORT:12801}
        contextPath: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_CONTEXT_PATH:/}
        maxThreads: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_MAX_THREADS:200}
        idleTimeOut: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_IDLE_TIME_OUT:30000}
        acceptQueueSize: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_ACCEPT_QUEUE_SIZE:0}
        maxRequestHeaderSize: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_MAX_REQUEST_HEADER_SIZE:8192}
        firehoseAccessKey: ${SW_RECEIVER_AWS_FIREHOSE_ACCESS_KEY:}
        enableTLS: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_ENABLE_TLS:false}
        tlsKeyPath: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_TLS_KEY_PATH:}
        tlsCertChainPath: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_TLS_CERT_CHAIN_PATH:}
    ai-pipeline:
      selector: ${SW_AI_PIPELINE:default}
      default:
        uriRecognitionServerAddr: ${SW_AI_PIPELINE_URI_RECOGNITION_SERVER_ADDR:}
        uriRecognitionServerPort: ${SW_AI_PIPELINE_URI_RECOGNITION_SERVER_PORT:17128}
        baselineServerAddr: ${SW_API_PIPELINE_BASELINE_SERVICE_HOST:}
        baselineServerPort: ${SW_API_PIPELINE_BASELINE_SERVICE_PORT:18080}
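    # NOTE: each ${ENV_NAME:default} placeholder above can be overridden at runtime
    # through container environment variables (see the Deployment further below,
    # e.g. SW_CONFIGURATION, SW_CONFIG_NACOS_SERVER_ADDR, SW_TELEMETRY).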
---
apiVersion: v1
kind: Secret
metadata:
  name: skywalking-secrets
  namespace: default
type: Opaque
data:
  nacos-username: "bmFjb3M="
  nacos-password: "bmFjb3M="
  es-username: ""
  es-password: ""
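  # The values above are base64-encoded; "bmFjb3M=" decodes to "nacos".
  # They can be produced with, for example: echo -n 'nacos' | base64
  # es-username/es-password are left empty because this example talks to
  # Elasticsearch over plain HTTP without authentication.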
---
apiVersion: v1
kind: Service
metadata:
  name: skywalking-oap
  namespace: default
  labels:
    app: skywalking-oap
    component: observability
spec:
  ports:
  - port: 12800
    name: rest
    targetPort: 12800
  - port: 11800
    name: grpc
    targetPort: 11800
  - port: 1234
    name: metrics
    targetPort: 1234
  selector:
    app: skywalking-oap
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skywalking-oap
  namespace: default
  labels:
    app: skywalking-oap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skywalking-oap
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: skywalking-oap
        release: skywalking
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "1234"
    spec:
      serviceAccountName: skywalking-oap
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
        runAsNonRoot: true
      containers:
      - name: skywalking-oap
        image: apache/skywalking-oap-server:10.2.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 11800
          name: grpc
        - containerPort: 12800
          name: rest
        - containerPort: 1234
          name: metrics
        resources:
          requests:
            memory: "1.5Gi"
            cpu: "500m"
          limits:
            cpu: "1"
            memory: "2.5Gi"
        env:
        # Nacos client logging fix
        - name: NACOS_CLIENT_LOG_PATH
          value: "/dev/null"
        - name: NACOS_CLIENT_SNAPSHOT_PATH
          value: "/tmp/nacos/snapshot"
        - name: JM_LOG_PATH
          value: "/dev/null"
        - name: LOGGING_LEVEL_COM_ALIBABA_NACOS
          value: "ERROR"
        - name: SW_STORAGE
          value: "elasticsearch"
          
        # JVM options
        - name: JAVA_OPTS
          value: >
            -Dnacos.logging.path=/dev/null
            -Dnacos.logging.default.config.enabled=false
            -Dcom.alibaba.nacos.config.log.dir=/dev/null
            -Dcom.alibaba.nacos.naming.log.dir=/dev/null
            -Dlogging.path=/dev/null
            -Dcom.linecorp.armeria.warnNettyVersions=false
            -XX:+UseContainerSupport
            -XX:MaxRAMPercentage=75.0
        - name: SW_CONFIGURATION
          value: "nacos"
        - name: SW_CLUSTER_NACOS_CONTEXT_PATH
          value: "/nacos"
        - name: SW_CLUSTER
          value: "nacos"
        - name: SW_CONFIG_NACOS_SERVER_ADDR
          value: "nacos-0.nacos-headless.default.svc.cluster.local"
        - name: SW_CONFIG_NACOS_SERVER_PORT
          value: "8848"
        - name: SW_CLUSTER_NACOS_HOST_PORT
          value: "nacos-0.nacos-headless.default.svc.cluster.local:8848"
        - name: SW_CLUSTER_NACOS_USERNAME
          valueFrom:
            secretKeyRef:
              name: skywalking-secrets
              key: nacos-username
        - name: SW_CLUSTER_NACOS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: skywalking-secrets
              key: nacos-password
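        # Elasticsearch storage settings; these override the elasticsearch defaults
        # declared in application.yml (SW_STORAGE=elasticsearch is set above).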
        - name: SW_STORAGE_ES_CLUSTER_NODES
          value: "es7-cluster-0.elasticsearch7.default.svc.cluster.local:9200"
        - name: SW_STORAGE_ES_HTTP_PROTOCOL
          value: "http"
        - name: SW_ES_USER
          valueFrom:
            secretKeyRef:
              name: skywalking-secrets
              key: es-username
        - name: SW_ES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: skywalking-secrets
              key: es-password
        - name: real_host
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: TZ
          value: Asia/Shanghai
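        # Keep trace records and metrics for 3 days only (unit: days) to limit ES disk usage.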
        - name: SW_CORE_RECORD_DATA_TTL
          value: "3"
        - name: SW_CORE_METRICS_DATA_TTL
          value: "3"
        - name: SW_STORAGE_ES_BULK_ACTIONS
          value: "2000"
        - name: SW_TELEMETRY
          value: "prometheus"
        - name: SW_TELEMETRY_PROMETHEUS_PORT
          value: "1234"
        securityContext:
          privileged: true
          #runAsUser: 0
          capabilities:
            add: ["ALL"]
          readOnlyRootFilesystem: false
          allowPrivilegeEscalation: true
        volumeMounts:
        - name: config
          mountPath: /skywalking/config/application.yml
          subPath: application.yml
        - name: temp-snapshot
          mountPath: /tmp/nacos
          subPath: nacos
      volumes:
      - name: config
        configMap:
          name: oap-config
      - name: temp-snapshot
        emptyDir: {}
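
The Deployment above references serviceAccountName: skywalking-oap. If that ServiceAccount is not created elsewhere in your manifests, a minimal definition is sketched below (an assumption of this example); since OAP cluster coordination here goes through Nacos rather than the Kubernetes API, no extra RBAC rules are strictly required, though features such as on-demand Pod logs would need additional permissions.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: skywalking-oap
  namespace: default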

11、skywalking-ui 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: skywalking-ui
  namespace: default
  labels:
    app: skywalking-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skywalking-ui
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: skywalking-ui
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        runAsNonRoot: true
      containers:
      - name: skywalking-ui
        image: apache/skywalking-ui:10.2.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: http
        env:
        - name: SW_OAP_ADDRESS
          value: http://skywalking-oap:12800
        - name: SW_AUTH
          value: "false"
        - name: TZ
          value: Asia/Shanghai
        - name: JAVA_OPTS
          value: "-Xmx512m -Xms256m"
        - name: SW_SERVER_HOST
          value: "0.0.0.0"
        - name: SW_TIMEOUT
          value: "10000"
        resources:
          limits:
            cpu: "500m"
            memory: "768Mi"
          requests:
            cpu: "100m"
            memory: "256Mi"
        securityContext:
          privileged: true
          #runAsUser: 0
          capabilities:
            add: ["ALL"]
          readOnlyRootFilesystem: false
          allowPrivilegeEscalation: true
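        # Give in-flight requests time to drain before the container is stopped.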
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "sleep 15"]
---
apiVersion: v1
kind: Service
metadata:
  name: skywalking-ui
  namespace: default
  labels:
    app: skywalking-ui
spec:
  type: NodePort
  selector:
    app: skywalking-ui
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: 8080
      nodePort: 30157
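
With the NodePort Service above, the UI can be opened on any cluster node at port 30157. A quick reachability check (replace <node-ip> with the address of one of your nodes):

curl -I http://<node-ip>:30157/

Or simply open http://<node-ip>:30157/ in a browser.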

12、Running results
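
Once the agent-enabled application is running, the service reported by the agent (spring-boot-admin in this example) should appear in the SkyWalking UI under General Service, and topology, traces and JVM metrics are populated after a few requests hit the application.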
