Cloud-Native LVS + Keepalived High Availability (Part 2)
LVS + Keepalived Load Balancing and High Availability in Cloud-Native Environments
Overview
In cloud-native architectures, highly available load balancing is the core technique for keeping critical services continuously reachable. Combining Keepalived with kube-proxy's IPVS mode provides high-performance, low-latency traffic distribution for both the Kubernetes control plane and the data plane. This article covers Keepalived DaemonSet deployment, kube-proxy IPVS configuration tuning, scheduling algorithm selection, VIP management guidelines, and production validation and troubleshooting, forming a complete cloud-native load balancing and high availability solution.
1. Keepalived DaemonSet Deployment
1.1 DaemonSet Configuration
In Kubernetes, Keepalived is typically deployed as a DaemonSet on the master (control-plane) nodes to provide a highly available VIP for the API Server. A tuned DaemonSet manifest looks like this:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: keepalived
  namespace: kube-system
  labels:
    app: keepalived
spec:
  selector:
    matchLabels:
      app: keepalived
  template:
    metadata:
      labels:
        app: keepalived
      annotations:
        container.apparmor.security.beta.kubernetes.io/keepalived: unconfined
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: keepalived
        image: bitnami/keepalived:2.2.4
        securityContext:
          privileged: true
        command:
        - /usr/sbin/keepalived
        - --dont-fork
        - --log-console
        - --use-file=/etc/keepalived/keepalived.conf
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: keepalived-conf
          mountPath: /etc/keepalived/keepalived.conf
          subPath: keepalived.conf
        - name: health-check
          mountPath: /usr/local/bin/health_check.sh
          subPath: health_check.sh
          readOnly: true
      volumes:
      - name: keepalived-conf
        configMap:
          name: keepalived-config
      - name: health-check
        configMap:
          name: health-check-config
          defaultMode: 0755
      nodeSelector:
        # kubeadm labels control-plane nodes with an empty value; adjust this
        # selector to match the label actually present on your nodes
        node-role.kubernetes.io/control-plane: "true"
```

1.2 Keepalived Master Node Configuration
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: keepalived-config
  namespace: kube-system
data:
  keepalived.conf: |
    global_defs {
        router_id kube-apiserver
        script_user root
        enable_script_security
    }

    vrrp_script chk_apiserver {
        script "/usr/local/bin/health_check.sh"
        interval 2
        weight -5
        fall 2
        rise 1
    }

    vrrp_instance VI_API {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass YourStrongPass123!
        }
        virtual_ipaddress {
            192.168.1.100/24 dev eth0
        }
        track_script {
            chk_apiserver
        }
        preempt_delay 30
    }
```

1.3 Health Check Script
```bash
#!/bin/bash
# Check the API Server health endpoint
curl -k -s --connect-timeout 2 https://127.0.0.1:6443/healthz > /dev/null
if [ $? -eq 0 ]; then
    exit 0
else
    exit 1
fi
```

1.4 Unicast Configuration for Cloud Environments
In cloud environments, VRRP multicast is often blocked by network policies, so unicast mode must be used instead:
```
vrrp_instance VI_API {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    # Unicast configuration for cloud environments
    unicast_src_ip 10.0.0.10
    unicast_peer {
        10.0.0.11
        10.0.0.12
    }
    authentication {
        auth_type PASS
        auth_pass YourStrongPass123!
    }
    virtual_ipaddress {
        192.168.1.100/24 dev eth0
    }
    track_script {
        chk_apiserver
    }
}
```

2. kube-proxy IPVS Mode Configuration Optimization
2.1 kube-proxy ConfigMap Configuration
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"
    ipvs:
      strictARP: true
      scheduler: "wrr"
      syncPeriod: 30s
      minSyncPeriod: 0s
      tcpTimeout: 0s
      tcpFinTimeout: 0s
      udpTimeout: 0s
    metricsBindAddress: "0.0.0.0:10249"
    bindAddress: "0.0.0.0"
    clusterCIDR: "10.244.0.0/16"
    healthzBindAddress: "0.0.0.0:10256"
```

2.2 Scheduling Algorithm Selection Strategy
| Scheduling algorithm | Applicable scenario |
|---|---|
| Round robin (rr) | Short-lived connections, evenly matched backend Pods |
| Weighted round robin (wrr) | Short-lived connections, backend Pods of differing capacity |
| Least connections (lc) | Long-lived connections, dynamic request distribution |
| Weighted least connections (wlc) | Long-lived connections, backend Pods of differing capacity |
| Source hashing (sh) | Applications that require session affinity |
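After switching the scheduler in the kube-proxy configuration, it is worth confirming which algorithm is actually in effect on a node. A minimal check, assuming ipvsadm is installed on the node:

```bash
# The scheduler name (rr/wrr/lc/wlc/sh) is printed after each IPVS virtual service
sudo ipvsadm -Ln | head -n 20

# Cross-check against the scheduler configured for kube-proxy
kubectl -n kube-system get cm kube-proxy -o yaml | grep -A 3 'ipvs:'
```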
3. IPVS Scheduling Algorithms in Kubernetes
3.1 How the Scheduling Is Implemented
- kube-proxy watches the API Server: it continuously watches for Service and Endpoint changes
- Rule updates: when a change is detected, kube-proxy updates the IPVS rules through the IPVS kernel module
- Traffic distribution: traffic sent to a Service's Cluster IP is forwarded to backend Pods according to the configured scheduling algorithm (the resulting rules can be inspected as shown in the sketch after this list)
- Health checks: Pods reported unhealthy via their livenessProbe/readinessProbe are removed from the IPVS rules automatically
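To see this mapping concretely, the sketch below (assuming ipvsadm is installed on the node, and using the default kubernetes Service as an example) compares a Service's Cluster IP and Endpoints with the IPVS virtual and real servers programmed by kube-proxy:

```bash
# Cluster IP of the Service (the IPVS virtual service address)
kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}{"\n"}'

# Endpoints of the Service (the expected IPVS real servers)
kubectl get endpoints kubernetes -n default

# The corresponding IPVS rule; the real servers should match the endpoint list
sudo ipvsadm -Ln -t 10.96.0.1:443   # substitute the Cluster IP printed above
```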
3.2 Scheduling Algorithm Comparison
| Scheduling algorithm | Rule update latency | Throughput | Resource overhead | Applicable scenario |
|---|---|---|---|---|
| Round robin (rr) | Microseconds | High | Low | Short-lived connections, evenly matched backend Pods |
| Weighted round robin (wrr) | Microseconds | High | Low | Short-lived connections, backend Pods of differing capacity |
| Least connections (lc) | Microseconds | High | Low | Long-lived connections, dynamic request distribution |
| Weighted least connections (wlc) | Microseconds | High | Low | Long-lived connections, backend Pods of differing capacity |
| Source hashing (sh) | Microseconds | Medium | Low | Applications that require session affinity |
4. VIP Management Guidelines
4.1 VIP and Cluster IP Range Planning
- Cluster IP: the default Kubernetes Service CIDR is `10.96.0.0/12` and is managed automatically by kube-proxy
- VIP: the Keepalived-managed VIP, typically used for the API Server and other critical services, should be chosen outside that range, e.g. from an external segment such as `192.168.0.0/16` (a quick collision check is sketched below)
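A quick collision check is sketched below; it assumes a kubeadm-managed control plane, where the Service CIDR is visible as a kube-apiserver flag:

```bash
# Read the configured Service CIDR from the kube-apiserver manifest
grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml

# Confirm the planned VIP (192.168.1.100 here) is outside that range; "False" means no collision
python3 -c "import ipaddress; print(ipaddress.ip_address('192.168.1.100') in ipaddress.ip_network('10.96.0.0/12'))"
```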
4.2 Network Policy Configuration
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: keepalived-egress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app: keepalived
  policyTypes:
  - Egress
  egress:
  # NetworkPolicy port rules only accept TCP/UDP/SCTP, so VRRP (IP protocol 112)
  # cannot be matched explicitly here; allow all egress to the peer subnet and
  # restrict protocol 112 at the security-group level instead.
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
```

5. Production Validation and Troubleshooting
5.1 VIP Failover Test
```bash
# Verify the initial VIP placement
ip addr show eth0 | grep 192.168.1.100
kubectl get pods -n kube-system -l app=keepalived

# Simulate a failure on the master node
# (when Keepalived runs as a DaemonSet, delete its pod on the master node instead)
systemctl stop keepalived

# Watch the VIP fail over
watch -n 1 'ip addr show eth0 | grep 192.168.1.100'
kubectl get pods -n kube-system -l app=keepalived

# Restore the master node
systemctl start keepalived
```

5.2 Load Balancing Test
```bash
# Verify the IPVS rules
ipvsadm -Ln | grep 192.168.1.100

# Test API Server connectivity through the VIP
curl -k https://192.168.1.100:6443/healthz

# Check connection statistics
ipvsadm -Ln --stats | grep 192.168.1.100
```

5.3 Troubleshooting
5.3.1 VIP Cannot Be Acquired
```bash
# Check the Keepalived configuration
cat /etc/keepalived/keepalived.conf

# Check the Keepalived process status
systemctl status keepalived
ps aux | grep keepalived

# Check whether the VIP is bound
ip addr show eth0 | grep 192.168.1.100

# Check VRRP traffic (multicast mode)
tcpdump -i eth0 -nn host 224.0.0.18
```

5.3.2 Requests Are Not Distributed Correctly
```bash
# Check the kube-proxy configuration
kubectl get cm kube-proxy -n kube-system -o yaml

# Check kube-proxy status
kubectl get pods -n kube-system -l k8s-app=kube-proxy

# Check the IPVS rules
ipvsadm -Ln

# Check backend Pod health
kubectl get pods -o wide
```

6. Kernel Parameter Tuning
```bash
# /etc/sysctl.d/k8s-ipvs.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1

# IPVS connection tracking
net.ipv4.vs.conntrack = 1
net.ipv4.vs.expire_nodest_conn = 1
net.ipv4.vs.expire_quiescent_template = 1

# Connection tracking table size
net.nf_conntrack_max = 1000000
net.netfilter.nf_conntrack_max = 1000000

# Apply the configuration
sysctl -p /etc/sysctl.d/k8s-ipvs.conf
```

7. Monitoring and Alerting Integration
7.1 Prometheus ServiceMonitor
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  endpoints:
  - port: metrics
    interval: 15s
    path: /metrics
  namespaceSelector:
    matchNames:
    - kube-system
```

7.2 Monitoring Metrics
- Connection count: `ipvs_connections_total`
- Bandwidth: `ipvs_bandwidth_bytes_total`
- VIP status: monitor VIP binding with a custom script (see the sketch after this list)
- Health checks: monitor the API Server health status
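A minimal sketch of such a custom VIP check is shown below. It assumes node-exporter's textfile collector reads from /var/lib/node_exporter/textfile_collector; the script and metric names are illustrative:

```bash
#!/bin/bash
# keepalived_vip_check.sh - export whether the VIP is bound on this node
VIP="192.168.1.100"
IFACE="eth0"
OUT="/var/lib/node_exporter/textfile_collector/keepalived_vip.prom"

if ip addr show "$IFACE" | grep -qw "$VIP"; then
  echo "keepalived_vip_bound{vip=\"$VIP\"} 1" > "$OUT"
else
  echo "keepalived_vip_bound{vip=\"$VIP\"} 0" > "$OUT"
fi
```

Run it from cron or a systemd timer on every control-plane node; an alert can then fire whenever the sum of keepalived_vip_bound across nodes differs from 1.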
8. Security Recommendations
8.1 Do Not Disable SELinux
```bash
# Keep SELinux enforcing and grant only the required permission
# (the haproxy_connect_any boolean applies when HAProxy fronts the API Server alongside Keepalived)
setsebool -P haproxy_connect_any=on
```

8.2 Use Strong Passwords
```
authentication {
    auth_type PASS
    # Note: VRRP PASS authentication only uses the first 8 characters of auth_pass
    auth_pass Your$tr0ngP@ss123!
}
```

8.3 Network Security
- In cloud environments, configure security groups to allow IP protocol 112 (an illustrative example follows this list)
- Use unicast mode to avoid multicast restrictions
- Apply appropriate network policies
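As an illustration only (AWS CLI syntax with placeholder security-group IDs; other clouds offer equivalent controls), allowing VRRP between control-plane nodes might look like this:

```bash
# Allow VRRP (IP protocol 112) between members of the control-plane security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=112,UserIdGroupPairs=[{GroupId=sg-0123456789abcdef0}]'
```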
9. Deployment Workflow
9.1 Environment Preparation
```bash
# Install the required tools
yum install -y ipvsadm keepalived

# Configure kernel parameters
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
```

9.2 Deployment Steps
```bash
# 1. Apply the kube-proxy configuration
kubectl apply -f kube-proxy-configmap.yaml

# 2. Restart kube-proxy
kubectl rollout restart daemonset kube-proxy -n kube-system

# 3. Apply the Keepalived configuration
kubectl apply -f keepalived-configmap.yaml
kubectl apply -f health-check-config.yaml

# 4. Deploy the Keepalived DaemonSet
kubectl apply -f keepalived-daemonset.yaml

# 5. Verify the deployment
kubectl get pods -n kube-system -l app=keepalived
ip addr show eth0 | grep 192.168.1.100
```

10. Summary and Best Practices
10.1 Advantages
- High availability: Keepalived provides a highly available VIP for the API Server
- High performance: kube-proxy's IPVS mode significantly outperforms iptables mode
- Flexibility: multiple scheduling algorithms are supported and can be chosen per workload
- Security: network policies and hardening settings protect the setup
- Observability: monitoring and alerting are integrated
10.2 Best Practice Recommendations
Scheduling algorithm selection:
- Short-lived connections: round robin (rr) or weighted round robin (wrr)
- Long-lived connections: least connections (lc) or weighted least connections (wlc)
- Session affinity: source hashing (sh)
VIP management:
- Keep the VIP range and the Cluster IP range strictly separate
- Use unicast mode in cloud environments
- Run VIP failover tests regularly
Network plugin:
- Ensure the network plugin is compatible with the VRRP protocol
- Configure appropriate security group rules
Production validation:
- Run failover drills regularly
- Monitor the key metrics
- Set sensible alert thresholds
