Deploying a Kong + Konga routing management system with Helm
Deploy the postgres service
Create the StorageClass
Create the provisioner Deployment for the StorageClass
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: eip-nfs-postgresql-storageclass
  name: eip-nfs-postgresql-storageclass
  namespace: kube-system
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: eip-nfs-postgresql-storageclass
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: eip-nfs-postgresql-storageclass
    spec:
      containers:
        - env:
            - name: PROVISIONER_NAME
              value: nfs-postgresql-storageclass
            - name: NFS_SERVER
              value: 172.16.0.20
            - name: NFS_PATH
              value: /data/nfs/
          image: 'eipwork/nfs-subdir-external-provisioner:v4.0.2'
          imagePullPolicy: IfNotPresent
          name: nfs-client-provisioner
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /persistentvolumes
              name: nfs-client-root
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: eip-nfs-client-provisioner
      serviceAccountName: eip-nfs-client-provisioner
      terminationGracePeriodSeconds: 30
      volumes:
        - name: nfs-client-root
          persistentVolumeClaim:
            claimName: nfs-pvc-postgresql-storageclass
```
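Note that this Deployment mounts a PVC named nfs-pvc-postgresql-storageclass that must already exist in kube-system (the eip-nfs-client-provisioner ServiceAccount and its RBAC are also prerequisites). A minimal sketch of what that static PV/PVC pair could look like, assuming the same NFS server and path as the env vars above — the names and size here are illustrative, so verify against your own cluster:

```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-postgresql-storageclass    # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi                         # assumption: size it to your NFS export
  nfs:
    server: 172.16.0.20                   # same NFS server as the provisioner env
    path: /data/nfs/
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-postgresql-storageclass   # name referenced by the Deployment above
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ''                    # bind to the static PV above, not a dynamic class
  volumeName: nfs-pv-postgresql-storageclass
```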
Create the YAML file for the StorageClass itself, postgres-storageclass.yaml:
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    k8s.kuboard.cn/storageNamespace: kong
    k8s.kuboard.cn/storageType: nfs_client_provisioner
  name: postgresql-storageclass
parameters:
  archiveOnDelete: 'false'
provisioner: nfs-postgresql-storageclass
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
Apply both files to create the StorageClass:

```bash
kubectl apply -f eip-nfs-postgresql-storageclass.yaml
kubectl apply -f postgres-storageclass.yaml
```
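Optionally confirm both objects before continuing (names match those used above):

```bash
# the provisioner should be Running in kube-system
kubectl -n kube-system get deploy eip-nfs-postgresql-storageclass
# the StorageClass should list provisioner nfs-postgresql-storageclass
kubectl get storageclass postgresql-storageclass
```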
Create PostgreSQL
root@iZj6c72dzbei17o2cuksmeZ:~/yaml# mkdir konga
root@iZj6c72dzbei17o2cuksmeZ:~/yaml# cd konga/
root@iZj6c72dzbei17o2cuksmeZ:~/yaml/konga# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" already exists with the same configuration, skipping
root@iZj6c72dzbei17o2cuksmeZ:~/yaml/konga# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
Alternatively, you can pull the Helm chart ahead of time:
helm pull oci://registry-1.docker.io/bitnamicharts/postgresql --untar
Pulled: registry-1.docker.io/bitnamicharts/postgresql:16.6.6
Digest: sha256:a8a0fd5ecbec861cc8462a417a8804c182caa2ee1666abc1a0f8a7f9126c2e40
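With the chart untarred, the install can reference the local directory instead of the repo; the flags stay the same. A sketch assuming the chart landed in ./postgresql:

```bash
# install from the locally pulled chart instead of bitnami/postgresql
helm install postgres ./postgresql \
  --namespace kong --create-namespace \
  --set auth.postgresPassword=kongaAa123456 \
  --set auth.database=konga \
  --set primary.persistence.size=100Gi \
  --set primary.persistence.storageClass=postgresql-storageclass
```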
Create the postgres database. The helm install below specifies:
- the database to log in to and the account/password for it;
- the size of the volume provisioned from the StorageClass created above;
- that StorageClass itself.
```
root@iZj6c72dzbei17o2cuksmeZ:~/yaml/konga/postgresql# helm install postgres bitnami/postgresql \
  --set auth.postgresPassword=kongaAa123456 \
  --set auth.database=konga \
  --namespace kong --create-namespace \
  --set primary.persistence.size=100Gi \
  --set primary.persistence.storageClass=postgresql-storageclass
NAME: postgres
LAST DEPLOYED: Wed Apr 30 11:51:32 2025
NAMESPACE: kong
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: postgresql
CHART VERSION: 16.6.6
APP VERSION: 17.4.0

Did you know there are enterprise versions of the Bitnami catalog? For enhanced
secure software supply chain features, unlimited pulls from Docker, LTS support,
or application customization, see Bitnami Premium or Tanzu Application Catalog.
See https://www.arrow.com/globalecs/na/vendors/bitnami for more information.

** Please be patient while the chart is being deployed **

PostgreSQL can be accessed via port 5432 on the following DNS names from within your cluster:

    postgres-postgresql.kong.svc.cluster.local - Read/Write connection

To get the password for "postgres" run:

    export POSTGRES_PASSWORD=$(kubectl get secret --namespace kong postgres-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)

To connect to your database run the following command:

    kubectl run postgres-postgresql-client --rm --tty -i --restart='Never' --namespace kong --image docker.io/bitnami/postgresql:17.4.0-debian-12-r17 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
      --command -- psql --host postgres-postgresql -U postgres -d konga -p 5432

    > NOTE: If you access the container using bash, make sure that you execute "/opt/bitnami/scripts/postgresql/entrypoint.sh /bin/bash" in order to avoid the error "psql: local user with ID 1001 does not exist"

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace kong svc/postgres-postgresql 5432:5432 &
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d konga -p 5432

WARNING: The configured password will be ignored on new installation in case when previous PostgreSQL release was deleted through the helm command. In that case, old PVC will have an old password, and setting it through helm won't take effect. Deleting persistent volumes (PVs) will solve the issue.

WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
  - primary.resources
  - readReplicas.resources

+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
```
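Before moving on, it is worth checking that the pod is Running and that the claim bound through the new StorageClass:

```bash
# bitnami charts label their pods app.kubernetes.io/name=postgresql
kubectl -n kong get pods -l app.kubernetes.io/name=postgresql
# the PVC should be Bound, with postgresql-storageclass in its STORAGECLASS column
kubectl -n kong get pvc
```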
Check the release that was created
root@iZj6c72dzbei17o2cuksmeZ:~/yaml/konga/postgresql# helm list -n kong
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
postgres kong 1 2025-04-30 12:49:21.853152013 +0800 CST deployed postgresql-16.6.6 17.4.0
List and delete an existing release
root@iZj6c72dzbei17o2cuksmeZ:~/yaml/konga# helm list --namespace kong
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
postgres kong 1 2025-04-30 11:30:10.790353413 +0800 CST deployed postgresql-16.6.6 17.4.0
root@iZj6c72dzbei17o2cuksmeZ:~/yaml/konga# helm uninstall postgres --namespace kong
release "postgres" uninstalled
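Note that helm uninstall does not remove the PVC created for the database; as the WARNING in the install NOTES says, a leftover claim keeps the old password on re-install. A cleanup sketch — the claim name assumed here follows the Bitnami StatefulSet convention, so confirm with kubectl get pvc first:

```bash
kubectl -n kong get pvc
# assumption: default Bitnami naming, data-<release>-postgresql-0
kubectl -n kong delete pvc data-postgres-postgresql-0
```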
Install Konga
root@iZj6c2vhsafoay7j7vyy89Z:~# cd konga/
root@iZj6c2vhsafoay7j7vyy89Z:~/konga# git clone https://github.com/dangtrinhnt/konga-helm-chart.git
Cloning into 'konga-helm-chart'...
remote: Enumerating objects: 25, done.
remote: Counting objects: 100% (25/25), done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 25 (delta 4), reused 25 (delta 4), pack-reused 0 (from 0)
Receiving objects: 100% (25/25), 8.36 KiB | 8.36 MiB/s, done.
Resolving deltas: 100% (4/4), done.
root@iZj6c2vhsafoay7j7vyy89Z:~/konga# cd konga-helm-chart/
root@iZj6c2vhsafoay7j7vyy89Z:~/konga/konga-helm-chart# vim values.yaml
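The values.yaml edit mainly points Konga at the PostgreSQL service created earlier. Key names vary between chart versions, so check the chart's own values.yaml; the pantsel/konga image itself is driven by DB_* environment variables along these lines (a sketch of the intent, not the chart's exact schema):

```yaml
# Sketch only - the konga image reads these environment variables; how the
# chart maps its values onto them depends on the chart version.
env:
  DB_ADAPTER: postgres
  DB_HOST: postgres-postgresql.kong.svc.cluster.local
  DB_PORT: "5432"
  DB_USER: postgres
  DB_PASSWORD: kongaAa123456   # matches auth.postgresPassword above
  DB_DATABASE: konga           # matches auth.database above
  NODE_ENV: production
```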
Create Kong
Pull the Helm chart:
helm repo add kong https://charts.konghq.com
helm repo update
helm pull kong/kong --untar
root@iZj6c72dzbei17o2cuksmeZ:~/yaml/konga# helm repo add kong https://charts.konghq.com
"kong" has been added to your repositories
root@iZj6c72dzbei17o2cuksmeZ:~/yaml/konga# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kong" chart repository
root@iZj6c72dzbei17o2cuksmeZ:~/yaml/konga# helm pull kong/kong --untar
Define our own values.yaml:
```yaml
ingressController:
  enabled: true
admin:
  type: NodePort
  http:
    enabled: true
  tls:
    enabled: false
proxy:
  type: NodePort
  http:
    enabled: true
  tls:
    enabled: false
env:
  database: "off"
```
Install Kong with the values.yaml we just defined (env.database: "off" runs Kong in DB-less mode, with configuration driven by the ingress controller):
```
root@iZj6c72dzbei17o2cuksmeZ:~/yaml/konga/kong# helm install kong kong/kong -n kong --create-namespace -f values.yaml
NAME: kong
LAST DEPLOYED: Wed Apr 30 14:54:03 2025
NAMESPACE: kong
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To connect to Kong, please execute the following commands:

HOST=$(kubectl get nodes --namespace kong -o jsonpath='{.items[0].status.addresses[0].address}')
PORT=$(kubectl get svc --namespace kong kong-kong-proxy -o jsonpath='{.spec.ports[0].nodePort}')
export PROXY_IP=${HOST}:${PORT}
curl $PROXY_IP

Once installed, please follow along the getting started guide to start using
Kong: https://docs.konghq.com/kubernetes-ingress-controller/latest/guides/getting-started/

WARNING: Kong Manager will not be functional because the Admin API is not
enabled. Setting both .admin.enabled and .admin.http.enabled and/or
.admin.tls.enabled to true to enable the Admin API over HTTP/TLS.
```
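Once deployed, list the services to see which NodePorts were allocated (names assume this release name and namespace):

```bash
kubectl -n kong get svc
# kong-kong-proxy should be a NodePort service; the port printed by the
# NOTES commands above is the one to use for ingress traffic
```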
⚠️ Konga failed on startup with this error:
```
/app/node_modules/sails-postgresql/node_modules/pg/lib/connection.js:426
        throw new Error("Unknown authenticationOk message type" + util.inspect(msg));
        ^

Error: Unknown authenticationOk message typeMessage { name: 'authenticationOk', length: 23 }
    at Connection.parseR (/app/node_modules/sails-postgresql/node_modules/pg/lib/connection.js:426:9)
    at Connection.parseMessage (/app/node_modules/sails-postgresql/node_modules/pg/lib/connection.js:345:17)
    at Socket.<anonymous> (/app/node_modules/sails-postgresql/node_modules/pg/lib/connection.js:105:22)
    at Socket.emit (events.js:310:20)
    at Socket.EventEmitter.emit (domain.js:482:12)
    at addChunk (_stream_readable.js:286:12)
    at readableAddChunk (_stream_readable.js:268:9)
    at Socket.Readable.push (_stream_readable.js:209:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:186:23)
```
Solution
This error occurs because the pg (PostgreSQL) client library bundled with Konga is too old to talk to the PostgreSQL server version deployed above. It typically appears when:
- the deployed PostgreSQL is version 14 or newer (here, 17.4.0-debian-12-r17), while
- the sails-postgresql and pg libraries inside the pantsel/konga image are old versions that do not support the newer authentication protocols (such as SCRAM-SHA-256).
So I worked around it by downgrading PostgreSQL:
```bash
helm upgrade postgres bitnami/postgresql \
  --set auth.postgresPassword=kongaAa123456 \
  --set auth.database=konga \
  --namespace kong --create-namespace \
  --set primary.persistence.size=100Gi \
  --set primary.persistence.storageClass=postgresql-storageclass \
  --set image.tag=11.20.0-debian-11-r4
```
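An alternative I did not take here (offered as a sketch only) is to keep the newer PostgreSQL and switch password hashing back to md5, which the old pg client understands. The Bitnami chart accepts extra server config via primary.extendedConfiguration; the setting must be in effect when the password is set, so it suits a fresh install:

```bash
# Sketch, not applied in this deployment: force md5 password hashing so the
# legacy pg client in the konga image can authenticate against PostgreSQL 14+.
helm install postgres bitnami/postgresql \
  --namespace kong --create-namespace \
  --set auth.postgresPassword=kongaAa123456 \
  --set auth.database=konga \
  --set primary.extendedConfiguration="password_encryption = md5"
```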
⚠️ Then Konga hit a different error:
```
A hook (`load-db`) failed to load!
Error (E_UNKNOWN) :: Encountered an unexpected error
error: relation "public.konga_users" does not exist
    at Connection.parseE (/app/node_modules/sails-postgresql/node_modules/pg/lib/connection.js:539:11)
    at Connection.parseMessage (/app/node_modules/sails-postgresql/node_modules/pg/lib/connection.js:366:17)
    at Socket.<anonymous> (/app/node_modules/sails-postgresql/node_modules/pg/lib/connection.js:105:22)
    at Socket.emit (events.js:310:20)
    at Socket.EventEmitter.emit (domain.js:482:12)
    at addChunk (_stream_readable.js:286:12)
    at readableAddChunk (_stream_readable.js:268:9)
    at Socket.Readable.push (_stream_readable.js:209:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:186:23) {
  length: 118,
  severity: 'ERROR',
  code: '42P01',
  position: '377',
  file: 'parse_relation.c',
  line: '1156',
  routine: 'parserOpenTable',
  ... (originalError, _e and rawStack repeat the same trace)
}
```
Solution: initialize the database by running the migrations
```
bash-5.0# node /app/bin/konga.js prepare \
> --adapter postgres \
> --uri postgres://postgres:kongaAa123456@postgres-postgresql:5432/konga
Preparing database...
debug: Hook:api_health_checks:process() called
debug: Hook:health_checks:process() called
debug: Hook:start-scheduled-snapshots:process() called
debug: Hook:upstream_health_checks:process() called
debug: Hook:user_events_hook:process() called
debug: Seeding User...
debug: User seed planted
debug: Seeding Kongnode...
debug: Kongnode seed planted
debug: Seeding Emailtransport...
debug: Emailtransport seed planted
debug: Database migrations completed!
```
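The bash-5.0# prompt above is a shell inside the Konga container. A sketch of how to get there with kubectl — the deployment name is whatever the konga chart created, so check kubectl get deploy -n kong first:

```bash
# assumption: the konga chart created a Deployment named "konga" in namespace kong
kubectl -n kong exec -it deploy/konga -- bash

# inside the container, run the migrations against the postgres service:
node /app/bin/konga.js prepare \
  --adapter postgres \
  --uri postgres://postgres:kongaAa123456@postgres-postgresql:5432/konga
```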
Log in to the web console
| What to configure | Where it is configured | Notes |
|---|---|---|
| 🧑💻 Konga login account | Created automatically on first initialization (can be seeded with `prepare`) | The user for logging in to the Konga web console |
| 🔐 Account/password or token for the Kong Admin API | Konga console ➝ Kong Nodes settings | Lets the Konga console reach your Kong instance |
http://node-ip:31337/#!/services
At this point the console may fail to connect to Kong. The cause is that the Admin API is not reachable; the fix is to change the value of KONG_ADMIN_LISTEN in the proxy container of the kong-kong Deployment to 0.0.0.0:8001, and to map port 8001 into the kong-kong-manager Service.
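Rather than hand-editing the Deployment, the same result can come from the chart values; the WARNING in the Kong install NOTES above points at the relevant keys. A sketch of the values.yaml change, followed by the upgrade:

```yaml
# values.yaml - enable the Admin API over plain HTTP (listens on 0.0.0.0:8001)
admin:
  enabled: true
  type: NodePort
  http:
    enabled: true
  tls:
    enabled: false
```

```bash
helm upgrade kong kong/kong -n kong -f values.yaml
```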