
Cilium Hands-On Lab: Journey to Mastery --- 13. Cilium LoadBalancer IPAM and L2 Service Announcement

  • 1. Lab Environment
  • 2. L2 Announcement Policy
    • 2.1 Deploy the Death Star
    • 2.2 Access the Service
    • 2.3 Deploy the L2 Announcement Policy
    • 2.4 Announce the Service
  • 3. Visualizing ARP Traffic
    • 3.1 Deploy a New Service
    • 3.2 Prepare for Visualization
    • 3.3 Request Again
  • 4. Automatic IPAM
    • 4.1 IPAM Pool
    • 4.2 Configure the L2 Policy
    • 4.3 Create a Service
  • 5. Resilient Load Balancing
    • 5.1 Monitor ARP
    • 5.2 Remove a Node
    • 5.3 Check the Failover
    • 5.4 Quiz
  • 6. Exam
    • 6.1 Requirements
    • 6.2 Solution

1. Lab Environment

Lab environment URL:

https://isovalent.com/labs/cilium-lb-ipam-l2-announcements/

Use kind to install a cluster with 1 control-plane node and 2 worker nodes:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  kubeProxyMode: "none"
nodes:
  - role: control-plane
    extraPortMappings:
      # Hubble relay
      - containerPort: 31234
        hostPort: 31234
      # Hubble UI
      - containerPort: 31235
        hostPort: 31235
  - role: worker
  - role: worker
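The cluster is then created from this file. A minimal sketch, assuming the file is saved as cluster.yaml (the name used by the yq output further below):

kind create cluster --config cluster.yaml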

Install Cilium:

cilium install \
  --version v1.17.1 \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost="kind-control-plane" \
  --set k8sServicePort=6443 \
  --set l2announcements.enabled=true \
  --set l2announcements.leaseDuration="3s" \
  --set l2announcements.leaseRenewDeadline="1s" \
  --set l2announcements.leaseRetryPeriod="500ms" \
  --set devices="{eth0,net0}" \
  --set externalIPs.enabled=true \
  --set operator.replicas=2

Enable Hubble for visualization:

cilium hubble enable --ui

Check that Cilium is running correctly:

cilium status --wait

Check the L2 announcement settings:

cilium config view | grep l2

Output:

root@server:~# yq cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  kubeProxyMode: "none"
nodes:
  - role: control-plane
    extraPortMappings:
      # Hubble relay
      - containerPort: 31234
        hostPort: 31234
      # Hubble UI
      - containerPort: 31235
        hostPort: 31235
  - role: worker
  - role: worker
root@server:~# cilium install \
  --version v1.17.1 \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost="kind-control-plane" \
  --set k8sServicePort=6443 \
  --set l2announcements.enabled=true \
  --set l2announcements.leaseDuration="3s" \
  --set l2announcements.leaseRenewDeadline="1s" \
  --set l2announcements.leaseRetryPeriod="500ms" \
  --set devices="{eth0,net0}" \
  --set externalIPs.enabled=true \
  --set operator.replicas=2
🔮 Auto-detected Kubernetes kind: kind
ℹ️  Using Cilium version 1.17.1
🔮 Auto-detected cluster name: kind-kind
ℹ️  Detecting real Kubernetes API server addr and port on Kind
🔮 Auto-detected kube-proxy has not been installed
ℹ️  Cilium will fully replace all functionalities of kube-proxy
root@server:~# cilium hubble enable --ui
root@server:~# cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet              cilium-envoy             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator          Desired: 2, Ready: 2/2, Available: 2/2
Deployment             hubble-relay             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui                Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 3
                       cilium-envoy             Running: 3
                       cilium-operator          Running: 2
                       clustermesh-apiserver
                       hubble-relay             Running: 1
                       hubble-ui                Running: 1
Cluster Pods:          5/5 managed by Cilium
Helm chart version:    1.17.1
Image versions         cilium             quay.io/cilium/cilium:v1.17.1@sha256:8969bfd9c87cbea91e40665f8ebe327268c99d844ca26d7d12165de07f702866: 3
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.31.5-1739264036-958bef243c6c66fcfd73ca319f2eb49fff1eb2ae@sha256:fc708bd36973d306412b2e50c924cd8333de67e0167802c9b48506f9d772f521: 3
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.1@sha256:628becaeb3e4742a1c36c4897721092375891b58bae2bfcae48bbf4420aaee97: 2
                       hubble-relay       quay.io/cilium/hubble-relay:v1.17.1@sha256:397e8fbb188157f744390a7b272a1dec31234e605bcbe22d8919a166d202a3dc: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.13.1@sha256:0e0eed917653441fded4e7cdb096b7be6a3bddded5a2dd10812a27b1fc6ed95b: 1
                       hubble-ui          quay.io/cilium/hubble-ui:v0.13.1@sha256:e2e9313eb7caf64b0061d9da0efbdad59c6c461f6ca1752768942bfeda0796c6: 1
root@server:~# cilium config view | grep l2
enable-l2-announcements                           true
enable-l2-neigh-discovery                         true
l2-announcements-lease-duration                   3s
l2-announcements-renew-deadline                   1s
l2-announcements-retry-period                     500ms

Everything matches what we expected.

2. L2 Announcement Policy

2.1 Deploy the Death Star

Deploy the Death Star workload and its corresponding service:

root@server:~# yq deathstar.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: deathstar
  labels:
    color: red
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    org: empire
    class: deathstar
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deathstar
spec:
  selector:
    matchLabels:
      org: empire
      class: deathstar
  replicas: 2
  template:
    metadata:
      labels:
        org: empire
        class: deathstar
      name: deathstar
    spec:
      containers:
        - name: deathstar
          image: docker.io/cilium/starwars
          imagePullPolicy: IfNotPresent
root@server:~# kubectl apply -f deathstar.yaml
service/deathstar created
deployment.apps/deathstar created

Wait for the Death Star deployment to be ready:

kubectl rollout status deployment deathstar

Check the service:

kubectl get svc deathstar --show-labels

We want to access the Death Star from outside the cluster. To do that, we can add an external IP to the service. For now, let's set the IP address on it manually. We will use 12.0.0.100 as the external IP address:

SVC_IP=12.0.0.100
kubectl patch service deathstar -p '{"spec":{"externalIPs":["'$SVC_IP'"]}}'

Verify that the service has the correct external IP:

kubectl get svc deathstar

The output is as follows:

root@server:~# kubectl apply -f deathstar.yaml
service/deathstar created
deployment.apps/deathstar created
root@server:~# kubectl rollout status deployment deathstar
deployment "deathstar" successfully rolled out
root@server:~# kubectl get svc deathstar --show-labels
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE     LABELS
deathstar   ClusterIP   10.96.28.40   <none>        80/TCP    2m21s   color=red
root@server:~# SVC_IP=12.0.0.100
kubectl patch service deathstar -p '{"spec":{"externalIPs":["'$SVC_IP'"]}}'
service/deathstar patched
root@server:~# kubectl get svc deathstar
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
deathstar   ClusterIP   10.96.28.40   12.0.0.100    80/TCP    2m52s

2.2 Access the Service

A Docker container named clab-garp-demo-neighbor has been deployed on the same network as the IP assigned to the service. Start a shell in it:

docker exec -e SVC_IP=$SVC_IP -ti clab-garp-demo-neighbor bash

Try to access the newly created service:

curl --connect-timeout 1 http://$SVC_IP/v1/

The connection times out because this service has not been announced via ARP yet, so the container does not know how to reach it.

root@neighbor:/# curl --connect-timeout 1 http://$SVC_IP/v1/
curl: (28) Connection timed out after 1000 milliseconds
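To confirm that the failure is ARP resolution rather than HTTP, you can inspect the ARP cache from the same neighbor shell. A minimal sketch, using the same arp tool the lab relies on later in section 5.2:

# The service IP should be absent or incomplete here, since no node
# answers ARP for 12.0.0.100 yet.
arp -n 12.0.0.100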

2.3 Deploy the L2 Announcement Policy

The policy below announces external IPs (but not load balancer IPs) on the net0 interface of the nodes, and it applies to services labeled color=blue.
A nodeSelector entry is added to avoid using the control-plane node as an entry point for the load balancer.
Apply the policy:

root@server:~# yq l2policy.yaml
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
  name: policy1
spec:
  externalIPs: true
  loadBalancerIPs: false
  interfaces:
    - net0
  serviceSelector:
    matchLabels:
      color: blue
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
root@server:~# kubectl apply -f l2policy.yaml
ciliuml2announcementpolicy.cilium.io/policy1 created

Try to access the service again:

root@neighbor:/# curl --connect-timeout 1 http://$SVC_IP/v1/
curl: (28) Connection timed out after 1000 milliseconds

The connection still times out, because the L2 policy applies to services labeled color=blue, while the Death Star service is currently labeled color=red.
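You can verify the mismatch by comparing the service labels against the policy's selector. A quick sketch (the CRD plural name ciliuml2announcementpolicies is assumed here):

kubectl get svc deathstar --show-labels
kubectl get ciliuml2announcementpolicies.cilium.io policy1 \
  -o jsonpath='{.spec.serviceSelector}'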

2.4 Announce the Service

Change the service to use the color=blue label:

kubectl label svc deathstar color=blue --overwrite

And try to connect again:

root@neighbor:/# curl --connect-timeout 1 http://$SVC_IP/v1/|jq
{
  "name": "Death Star",
  "hostname": "deathstar-65c8d4f687-cfzsz",
  "model": "DS-1 Orbital Battle Station",
  "manufacturer": "Imperial Department of Military Research, Sienar Fleet Systems",
  "cost_in_credits": "1000000000000",
  "length": "120000",
  "crew": "342953",
  "passengers": "843342",
  "cargo_capacity": "1000000000000",
  "hyperdrive_rating": "4.0",
  "starship_class": "Deep Space Mobile Battlestation",
  "api": [
    "GET   /v1",
    "GET   /v1/healthz",
    "POST  /v1/request-landing",
    "PUT   /v1/cargobay",
    "GET   /v1/hyper-matter-reactor/status",
    "PUT   /v1/exhaust-port"
  ]
}

The service can now be reached, because its IP is announced to the network via ARP!
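To see which node answers for the IP, you can arping the service address from the neighbor container, the same way section 5.1 does later. A short sketch:

docker exec -ti clab-garp-demo-neighbor arping -c 3 12.0.0.100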

3. Visualizing ARP Traffic

3.1 Deploy a New Service

Deploy a new service named deathstar-2 that points to the same Death Star pods:

root@server:~# yq deathstar-2.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: deathstar-2
  labels:
    color: blue
spec:
  type: ClusterIP
  externalIPs:
    - 12.0.0.101
  ports:
    - port: 80
  selector:
    org: empire
    class: deathstar
root@server:~# kubectl apply -f deathstar-2.yaml
service/deathstar-2 created
root@server:~# 

This service already has a predefined static external IP, 12.0.0.101, and is labeled color=blue, so it will be announced by the policy1 L2 announcement policy we deployed earlier.
Verify the service:

root@server:~# kubectl get svc deathstar-2
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
deathstar-2   ClusterIP   10.96.116.132   12.0.0.101    80/TCP    30s

3.2 Prepare for Visualization

Cilium creates a Lease resource in the kube-system namespace for each L2 lease associated with a service.

View the lease for the deathstar-2 service:

root@server:~# kubectl get leases -n kube-system cilium-l2announce-default-deathstar-2 -o yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  creationTimestamp: "2025-05-27T02:33:06Z"
  name: cilium-l2announce-default-deathstar-2
  namespace: kube-system
  resourceVersion: "3410"
  uid: 75fefb56-0627-44d2-94fe-0097c66b92fa
spec:
  acquireTime: "2025-05-27T02:33:06.655416Z"
  holderIdentity: kind-worker2
  leaseDurationSeconds: 3
  leaseTransitions: 0
  renewTime: "2025-05-27T02:34:15.290214Z"

The node holding the lease is given in spec.holderIdentity. Retrieve it:

LEASE_NODE=$(kubectl -n kube-system get leases cilium-l2announce-default-deathstar-2 -o jsonpath='{.spec.holderIdentity}')
echo $LEASE_NODE

Next, find the Cilium agent pod running on that node:

LEASE_CILIUM_POD=$(kubectl -n kube-system get pod -l k8s-app=cilium --field-selector spec.nodeName=$LEASE_NODE -o name)
echo $LEASE_CILIUM_POD

Now, open a shell in the Cilium agent pod:

kubectl -n kube-system exec -ti $LEASE_CILIUM_POD -- bash

Install tcpdump and termshark in the pod:

apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install tcpdump termshark

Start tcpdump in the background in the pod, filtering for ARP packets and writing the flow to an arp.pcap file:

tcpdump -i any arp -w arp.pcap &

3.3 Request Again

Make a request to the service again:

root@server:~# docker exec -ti clab-garp-demo-neighbor \
  curl --connect-timeout 1 http://12.0.0.101/v1/
{
  "name": "Death Star",
  "hostname": "deathstar-65c8d4f687-fq7xd",
  "model": "DS-1 Orbital Battle Station",
  "manufacturer": "Imperial Department of Military Research, Sienar Fleet Systems",
  "cost_in_credits": "1000000000000",
  "length": "120000",
  "crew": "342953",
  "passengers": "843342",
  "cargo_capacity": "1000000000000",
  "hyperdrive_rating": "4.0",
  "starship_class": "Deep Space Mobile Battlestation",
  "api": [
    "GET   /v1",
    "GET   /v1/healthz",
    "POST  /v1/request-landing",
    "PUT   /v1/cargobay",
    "GET   /v1/hyper-matter-reactor/status",
    "PUT   /v1/exhaust-port"
  ]
}

Start termshark:

mkdir -p /root/.config/termshark/
echo -e "[main]\ndark-mode = true" > /root/.config/termshark/termshark.toml
TERM=xterm-256color termshark -r arp.pcap

You can see the ARP requests and responses for 12.0.0.101.
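If you prefer a non-interactive view, the same capture can also be read with tshark, which is pulled in as a dependency of termshark. A minimal sketch:

# Print only the ARP frames from the capture file
tshark -r arp.pcap -Y arp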


4. Automatic IPAM

4.1 IPAM Pool

To have IPs assigned to services automatically, let's create an IPAM pool named pool-blue for color=blue services.
Once this resource is applied, IP addresses from the 12.0.0.128/25 range will be assigned to LoadBalancer services matching the color=blue label selector.

root@server:~# yq pool-blue.yaml
# # Second pool, label selector
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "pool-blue"
spec:
  blocks:
    - cidr: "12.0.0.128/25"
  serviceSelector:
    matchLabels:
      color: blue
root@server:~# kubectl apply -f pool-blue.yaml
ciliumloadbalancerippool.cilium.io/pool-blue created

4.2 Configure the L2 Policy

Update policy1 so that it also announces load balancer IPs, by setting loadBalancerIPs to true:

root@server:~# cat l2policy.yaml 
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
  name: policy1
spec:
  externalIPs: true
  # set loadBalancerIPs to true
  loadBalancerIPs: true
  interfaces:
    - net0
  serviceSelector:
    matchLabels:
      color: blue
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
root@server:~# kubectl apply -f l2policy.yaml
ciliuml2announcementpolicy.cilium.io/policy1 configured
root@server:~# 

4.3 Create a Service

Create a new service named deathstar-3 for the Death Star pods, this time without assigning a static IP to it:

kubectl expose deployment deathstar --name deathstar-3 --port 80 --type LoadBalancer

Check the service:

root@server:~# kubectl expose deployment deathstar --name deathstar-3 --port 80 --type LoadBalancer
service/deathstar-3 exposed
root@server:~# kubectl get svc deathstar-3 --show-labels
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   LABELS
deathstar-3   LoadBalancer   10.96.197.206   <pending>     80:31590/TCP   5s    <none>

There is no external IP for now, because the service does not yet carry a label matching the IPAM pool.
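You can also inspect the pool itself to confirm it has addresses available. A sketch (the exact columns printed depend on the CRD version):

kubectl get ciliumloadbalancerippools.cilium.io pool-blue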

Add the color=blue label to the service:

kubectl label svc deathstar-3 color=blue

Check the service again:

root@server:~# kubectl label svc deathstar-3 color=blue
service/deathstar-3 labeled
root@server:~# kubectl get svc deathstar-3 --show-labels
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   LABELS
deathstar-3   LoadBalancer   10.96.197.206   12.0.0.128    80:31590/TCP   86s   color=blue
root@server:~# 

It has now received an external IP from the range associated with the blue IPAM pool. Since color: blue also matches the L2 announcement policy we deployed earlier, this service should already be reachable via ARP. Let's check:

root@server:~# SVC2_IP=$(kubectl get svc deathstar-3 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $SVC2_IP
docker exec -ti clab-garp-demo-neighbor curl --connect-timeout 1 $SVC2_IP/v1/
12.0.0.128
{
  "name": "Death Star",
  "hostname": "deathstar-65c8d4f687-fq7xd",
  "model": "DS-1 Orbital Battle Station",
  "manufacturer": "Imperial Department of Military Research, Sienar Fleet Systems",
  "cost_in_credits": "1000000000000",
  "length": "120000",
  "crew": "342953",
  "passengers": "843342",
  "cargo_capacity": "1000000000000",
  "hyperdrive_rating": "4.0",
  "starship_class": "Deep Space Mobile Battlestation",
  "api": [
    "GET   /v1",
    "GET   /v1/healthz",
    "POST  /v1/request-landing",
    "PUT   /v1/cargobay",
    "GET   /v1/hyper-matter-reactor/status",
    "PUT   /v1/exhaust-port"
  ]
}

5. Resilient Load Balancing

5.1 Monitor ARP

Retrieve the service IP again, arping it from the Docker container, and watch its ARP responses:

docker exec -ti clab-garp-demo-neighbor arping 12.0.0.100

5.2 Remove a Node

Kubernetes provides a Leases resource for each announced service, which tells us which node currently announces it.
The resource lives in the kube-system namespace and its name follows the format cilium-l2announce-<namespace>-<service>.
Since our service is called deathstar and lives in the default namespace, we need to look up cilium-l2announce-default-deathstar.
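To see every L2 lease at once, you can filter the leases in kube-system. A one-line sketch:

kubectl -n kube-system get leases | grep cilium-l2announce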

View the spec of the lease resource:

kubectl -n kube-system get leases cilium-l2announce-default-deathstar -o yaml | yq .spec

The resource has a spec.holderIdentity field, which shows that the node currently holding the lease is kind-worker.
Since our nodes are Docker containers, deleting a node would not completely shut down the datapath: its veth pair would be left behind. To simulate removing the node, we therefore need to identify the veth pair so we can bring the interface down on the host.
First, retrieve the MAC address behind the lease. As we saw, we can get it by resolving the IP with ARP:

docker exec -ti clab-garp-demo-neighbor arp 12.0.0.100

Next, get the veth pair index from the node:

docker exec kind-worker ip a | grep -B1 aa:c1:ab:b7:2b:f7

Finally, retrieve the interface name of that veth pair on the VM:

ip a | grep if15

Now, simulate a problem on the node by killing the Docker container hosting it:

docker kill kind-worker

And bring down the veth interface:

ip link set net2 down

Check the lease again:

kubectl -n kube-system get leases cilium-l2announce-default-deathstar -o yaml | yq .spec.holderIdentity

The output:

root@server:~# kubectl -n kube-system get leases cilium-l2announce-default-deathstar -o yaml | yq .spec
acquireTime: "2025-05-27T02:30:39.025729Z"
holderIdentity: kind-worker
leaseDurationSeconds: 3
leaseTransitions: 0
renewTime: "2025-05-27T02:49:29.227768Z"
root@server:~# docker exec -ti clab-garp-demo-neighbor arp 12.0.0.100
Address                  HWtype  HWaddress           Flags Mask            Iface
12.0.0.100               ether   aa:c1:ab:b7:2b:f7   C                     net0
root@server:~# docker exec kind-worker ip a | grep -B1 aa:c1:ab:b7:2b:f7
15: net0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default
    link/ether aa:c1:ab:b7:2b:f7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
root@server:~# ip a | grep if15
16: net2@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue master br-garp-clab state UP group default 
root@server:~# docker kill kind-worker
kind-worker
root@server:~# ip link set net2 down
root@server:~# kubectl -n kube-system get leases cilium-l2announce-default-deathstar -o yaml | yq .spec.holderIdentity
kind-worker2
root@server:~# 

As expected, the holder identity has changed to kind-worker2.
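To observe such a handover live next time, you could leave a watch running on the lease; the HOLDER column changes when the lease moves. A sketch:

kubectl -n kube-system get leases cilium-l2announce-default-deathstar -w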

5.3 Check the Failover

At this point, after a few timeouts, the arping command should resolve to a different MAC address, showing that the load balancer lease has moved to another node:

58 bytes from aa:c1:ab:b7:2b:f7 (12.0.0.100): index=147 time=4.512 usec
58 bytes from aa:c1:ab:b7:2b:f7 (12.0.0.100): index=148 time=3.949 usec
58 bytes from aa:c1:ab:b7:2b:f7 (12.0.0.100): index=149 time=3.665 usec
58 bytes from aa:c1:ab:b7:2b:f7 (12.0.0.100): index=150 time=3.782 usec
58 bytes from aa:c1:ab:b7:2b:f7 (12.0.0.100): index=151 time=3.929 usec
Timeout
Timeout
Timeout
58 bytes from aa:c1:ab:86:92:38 (12.0.0.100): index=152 time=5.142 usec
58 bytes from aa:c1:ab:86:92:38 (12.0.0.100): index=153 time=4.234 usec
58 bytes from aa:c1:ab:86:92:38 (12.0.0.100): index=154 time=4.654 usec
58 bytes from aa:c1:ab:86:92:38 (12.0.0.100): index=155 time=4.272 usec
58 bytes from aa:c1:ab:86:92:38 (12.0.0.100): index=156 time=4.097 usec
58 bytes from aa:c1:ab:86:92:38 (12.0.0.100): index=157 time=3.932 usec
58 bytes from aa:c1:ab:86:92:38 (12.0.0.100): index=158 time=4.425 usec
58 bytes from aa:c1:ab:86:92:38 (12.0.0.100): index=159 time=3.929 usec
58 bytes from aa:c1:ab:86:92:38 (12.0.0.100): index=160 time=4.734 usec
58 bytes from aa:c1:ab:86:92:38 (12.0.0.100): index=161 time=4.047 usec

Now try to access the service again:

root@server:~# docker exec -ti clab-garp-demo-neighbor curl 12.0.0.100/v1/
{
  "name": "Death Star",
  "hostname": "deathstar-65c8d4f687-cfzsz",
  "model": "DS-1 Orbital Battle Station",
  "manufacturer": "Imperial Department of Military Research, Sienar Fleet Systems",
  "cost_in_credits": "1000000000000",
  "length": "120000",
  "crew": "342953",
  "passengers": "843342",
  "cargo_capacity": "1000000000000",
  "hyperdrive_rating": "4.0",
  "starship_class": "Deep Space Mobile Battlestation",
  "api": [
    "GET   /v1",
    "GET   /v1/healthz",
    "POST  /v1/request-landing",
    "PUT   /v1/cargobay",
    "GET   /v1/hyper-matter-reactor/status",
    "PUT   /v1/exhaust-port"
  ]
}

It works again.
Check the values in the Cilium configuration:

root@server:~# cilium config view | grep l2-announcements
enable-l2-announcements                           true
l2-announcements-lease-duration                   3s
l2-announcements-renew-deadline                   1s
l2-announcements-retry-period                     500ms

5.4 Quiz

×	L2 announcements is activated by default in Cilium
√	LB-IPAM is activated by default in Cilium
√	L2 announcements use ARP to announce service IPs outside of the cluster
√	L2 announcements can be used without an LB-IPAM ip pool

6. Exam

6.1 Requirements

  • Expose the Death Star with a new service of type LoadBalancer on port 80;
  • Create a new Cilium Load-Balancer IP Pool assigning IPs in the 172.18.42.0/29 range to services labeled org=empire;
  • Create a new L2 announcement policy to announce the service IPs of all services labeled announce=arp on interface eth0;
  • Check that the service can be reached from the VM via the load balancer service IP with curl <SVC_IP>/v1/.

6.2 Solution

root@server:~# kubectl expose deployment deathstar --name deathstar --port 80 --type LoadBalancer
service/deathstar exposed
root@server:~# k get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
deathstar    LoadBalancer   10.96.47.105   <pending>     80:31555/TCP   3m10s
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        9m10s
root@server:~# k label svc deathstar org=empire
service/deathstar labeled
root@server:~# k label svc deathstar announce=arp
service/deathstar labeled
root@server:~# k get svc --show-labels 
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     LABELS
deathstar    LoadBalancer   10.96.47.105   172.18.42.0   80:31555/TCP   3m38s   org=empire,announce=arp
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        9m38s   component=apiserver,provider=kubernetes
root@server:~# yq ippool.yaml
# # Second pool, label selector
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: empire
spec:
  blocks:
    - cidr: "172.18.42.0/29"
  serviceSelector:
    matchLabels:
      org: empire
root@server:~# yq l2policy.yaml
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
  name: empire
spec:
  externalIPs: false
  loadBalancerIPs: true
  interfaces:
    - eth0
  serviceSelector:
    matchLabels:
      announce: arp
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
root@server:~# k apply -f ippool.yaml
ciliumloadbalancerippool.cilium.io/empire created
root@server:~# k apply -f l2policy.yaml 
ciliuml2announcementpolicy.cilium.io/empire created
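To satisfy the last requirement, retrieve the assigned load balancer IP and curl it from the VM, mirroring the jsonpath query used earlier in the lab. A sketch (the output above shows the assigned IP is 172.18.42.0):

SVC_IP=$(kubectl get svc deathstar -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl --connect-timeout 2 http://$SVC_IP/v1/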

Done, let's submit.


New badge acquired!

