
Installing Kubernetes 1.33.4 on Ubuntu 24.04.2 and Configuring Cilium

Software versions:
Ubuntu 24.04.2
kubeadm v1.33.4
Kubernetes v1.33.4
containerd v2.0.2
Cilium v1.18.0

Server role assignment (in /etc/hosts and in the output below these hosts are named ops-test-021 through ops-test-026):
node1 192.168.2.21 Ubuntu 24.04.2 LTS master
node2 192.168.2.22 Ubuntu 24.04.2 LTS node
node3 192.168.2.23 Ubuntu 24.04.2 LTS node
node4 192.168.2.24 Ubuntu 24.04.2 LTS node
node5 192.168.2.25 Ubuntu 24.04.2 LTS node
node6 192.168.2.26 Ubuntu 24.04.2 LTS node

Step 1: Basic setup
Run these steps on all machines.

Disable swap

sed -ri 's/^([^#].*swap.*)$/#\1/' /etc/fstab && grep swap /etc/fstab && swapoff -a && free -h

# Disable the firewall
ufw disable

# Set the timezone
timedatectl set-timezone Asia/Shanghai
systemctl restart systemd-timesyncd.service

# Enable IPv4 forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
# Parameters required by Kubernetes & Cilium
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# Optional: kernel tuning (e.g. connection-tracking table size)
net.netfilter.nf_conntrack_max = 1048576
EOF

# Apply the configuration (equivalent to sysctl -p /etc/sysctl.d/k8s.conf)
sysctl --system
sysctl net.ipv4.ip_forward
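Note that the two bridge-nf-call settings only take effect when the br_netfilter kernel module is loaded, which a fresh Ubuntu 24.04 install does not guarantee. A minimal sketch to load and persist the modules that containerd and these sysctls rely on:

# Load the modules now
modprobe overlay
modprobe br_netfilter

# Load them automatically on boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Confirm, then re-apply the sysctls
lsmod | grep -E 'overlay|br_netfilter'
sysctl --system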

Configure the hostname and /etc/hosts

cat >> /etc/hosts << EOF
192.168.2.21 ops-test-021
192.168.2.22 ops-test-022
192.168.2.23 ops-test-023
192.168.2.24 ops-test-024
192.168.2.25 ops-test-025
192.168.2.26 ops-test-026
192.168.2.27 ops-test-027
192.168.2.28 ops-test-028
192.168.2.29 ops-test-029
192.168.2.30 ops-test-030
EOF
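To set the hostname itself (the hosts file above only handles name resolution), a minimal sketch, shown here for the master node; use the matching name on each machine:

hostnamectl set-hostname ops-test-021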

Step 2: Install containerd
Run on all nodes.
2.1 Download and configure containerd
Download from https://github.com/containerd/containerd/releases
The release binaries are built dynamically for glibc-based Linux distributions (such as Ubuntu and Rocky Linux). They may not work on musl-based distributions such as Alpine Linux; users of those distributions may have to install containerd from source or from a third-party package.
(The legacy cri-containerd-* archives did not work on older Linux distributions and have been removed as of containerd 2.0.)

## containerd 2.x docs: https://github.com/containerd/containerd/blob/main/docs/containerd-2.0.md
wget https://github.com/containerd/containerd/releases/download/v2.0.2/containerd-2.0.2-linux-amd64.tar.gz
apt install runc
tar Cxzvf /usr/local containerd-2.0.2-linux-amd64.tar.gz
bin/
bin/containerd-shim-runc-v2
bin/containerd-shim
bin/ctr
bin/containerd-shim-runc-v1
bin/containerd
bin/containerd-stress

# Generate the default config and point the pause image at the Aliyun mirror
mkdir -p /etc/containerd && containerd config default > /etc/containerd/config.toml
sed -i "s#registry.k8s.io/pause:3.10#registry.aliyuncs.com/google_containers/pause:3.10#g" /etc/containerd/config.toml

# Add the SystemdCgroup = true parameter
sed -i "/ShimCgroup = ''/a \            SystemdCgroup = true" /etc/containerd/config.toml

# Install the containerd.service unit provided upstream
wget -O /etc/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd.service
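Because the two sed edits depend on the exact layout of the generated config, a quick sanity check of the result and of the daemon socket is worthwhile:

grep -n 'SystemdCgroup' /etc/containerd/config.toml
grep -n 'pause:3.10' /etc/containerd/config.toml
# Confirms the daemon answers over its socket
ctr version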

2.2 Install crictl

CRICTL_VERSION=v1.33.0
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$CRICTL_VERSION/crictl-$CRICTL_VERSION-linux-amd64.tar.gz
tar zxvf crictl-$CRICTL_VERSION-linux-amd64.tar.gz -C /usr/local/bin
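crictl probes several runtime endpoints and prints warnings until it is pointed at containerd's CRI socket; a minimal /etc/crictl.yaml, assuming the default socket path used elsewhere in this article:

cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
crictl info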

2.3 Configure a private Harbor image registry

Edit the configuration
The relevant block starts around line 52:

vim +52 /etc/containerd/config.toml

51     [plugins.'io.containerd.cri.v1.images'.registry]
52       config_path = '/etc/containerd/certs.d'    # modify this line
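With config_path set, containerd looks for per-registry settings under /etc/containerd/certs.d/<registry>/hosts.toml. A minimal sketch for a hypothetical Harbor host (harbor.example.com is a placeholder; adjust the server URL, capabilities, and TLS handling to your environment):

# harbor.example.com is a placeholder registry hostname
mkdir -p /etc/containerd/certs.d/harbor.example.com
cat <<EOF | sudo tee /etc/containerd/certs.d/harbor.example.com/hosts.toml
server = "https://harbor.example.com"

[host."https://harbor.example.com"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true    # only if Harbor uses a self-signed certificate
EOF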

Restart containerd:

systemctl restart containerd

Step 3: Install the Kubernetes components

# Refresh the package lists and upgrade
sudo apt update && sudo apt upgrade -y
# Install prerequisite tools
apt install -y apt-transport-https ca-certificates curl gpg
# Create the keyring directory (already present on some releases; create as needed)
mkdir -p -m 755 /etc/apt/keyrings
# Download the repository signing key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg && \
sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add the v1.33 package repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update, install the packages, and hold them to prevent accidental upgrades
apt update && \
apt install -y kubelet kubectl kubeadm && \
apt-mark hold kubelet kubeadm kubectl
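Alternatively, to pin exactly v1.33.4 rather than whatever the newest 1.33.x patch is, the exact package revision can be installed; a sketch (the -1.1 revision suffix is an assumption, confirm it against the apt-cache output):

# List the versions available in the 1.33 repository
apt-cache madison kubeadm
# The packages were held above, so unhold before changing versions;
# 1.33.4-1.1 is an assumed revision string, use the one shown by apt-cache
apt-mark unhold kubelet kubeadm kubectl
apt install -y kubelet=1.33.4-1.1 kubeadm=1.33.4-1.1 kubectl=1.33.4-1.1
apt-mark hold kubelet kubeadm kubectl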
# Enable kubelet at boot
sudo systemctl enable --now kubelet
# Check the version
kubeadm version

Step 4: Initialize the cluster
4.1 Pull the required images
# Pull the images from the Aliyun mirror first; only needed on node1, i.e. the master node
sudo kubeadm config images pull \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.33.4 \
--cri-socket=unix:///run/containerd/containerd.sock

4.2 Generate the cluster-initialization config file on the master node

kubeadm config print init-defaults > kubeadm-config.yaml

4.3 Changes required in the config file
# Edit the kubeadm-config file
vim kubeadm-config.yaml

# IP address of the control-plane (management) node
advertiseAddress: 192.168.2.21

# Name this machine registers with in the cluster
name: ops-test-021

# Version
kubernetesVersion: 1.33.4

# Skip the kube-proxy installation (Cilium will replace it)
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: ops-test-021
  taints: null
skipPhases:          # add these two lines here
- addon/kube-proxy   # key: skips installing the kube-proxy addon
timeouts:
# (the two skipPhases lines above are the addition)

# In the networking section add podSubnet (it must match Cilium's ipv4NativeRoutingCIDR later)
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16   # new line

# Cluster image repository, changed to the Aliyun mirror
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

4.4 Initialize the cluster with the config file
kubeadm init --config kubeadm-config.yaml

--- output ---
root@ops-test-021:~# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.33.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ops-test-021] and IPs [10.96.0.1 192.168.2.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ops-test-021] and IPs [192.168.2.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ops-test-021] and IPs [192.168.2.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.002761381s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.2.21:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 2.402727849s
[control-plane-check] kube-scheduler is healthy after 3.021801856s
[control-plane-check] kube-apiserver is healthy after 4.502387192s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ops-test-021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ops-test-021 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.21:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:9394cee06110b4ad157125ef8b791466fe1f118840c33e0f987fb6c7bfd6b8c8

4.5 Run the commands suggested by the initialization output
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

4.6 Join the other nodes to the cluster
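Note that the bootstrap token from kubeadm init is valid for 24 hours by default; if it has expired by the time a node joins, a fresh join command can be printed on the master:

kubeadm token create --print-join-command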
kubeadm join 192.168.2.21:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:9394cee06110b4ad157125ef8b791466fe1f118840c33e0f987fb6c7bfd6b8c8

--- output ---
root@ops-test-022:~# kubeadm join 192.168.2.21:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:9394cee06110b4ad157125ef8b791466fe1f118840c33e0f987fb6c7bfd6b8c8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
W0828 11:14:09.307801    3443 configset.go:78] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" is forbidden: User "system:bootstrap:abcdef" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.502524921s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@ops-test-022:~#

Check the cluster state:
root@ops-test-021:~# kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
ops-test-021   NotReady   control-plane   8m19s   v1.33.4
ops-test-022   NotReady   <none>          3m45s   v1.33.4
ops-test-023   NotReady   <none>          3m34s   v1.33.4
ops-test-024   NotReady   <none>          3m20s   v1.33.4
ops-test-026   NotReady   <none>          9s      v1.33.4

(All nodes report NotReady at this point because no CNI plugin is installed yet; this resolves once Cilium is deployed.)

# Verify the PodCIDR allocation for each node
root@ops-test-021:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
ops-test-021    10.244.0.0/24
ops-test-022    10.244.1.0/24
ops-test-023    10.244.2.0/24
ops-test-024    10.244.3.0/24
ops-test-026    10.244.4.0/24

4.7 Deploy Cilium
4.7.1 Install the Cilium CLI tool
curl -L --remote-name https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
tar xzvf cilium-linux-amd64.tar.gz
sudo mv cilium /usr/local/bin
Verify:
cilium version

4.7.2 Using the default VXLAN tunnel mode (note: recent Cilium releases express this as routingMode=tunnel plus tunnelProtocol=vxlan; tunnel=vxlan is the older form)

cilium install \
  --set enableKubeProxyReplacement=true \
  --set kubeProxyReplacement=true \
  --set ipam.mode=kubernetes \
  --set tunnel=vxlan \
  --set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
  --set ipam.operator.clusterPoolIPv4MaskSize=24 \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

4.7.3 Install Cilium in native-routing mode (cloud-native routing) -- the mode used in this article

cilium install \
  --set enableKubeProxyReplacement=true \
  --set kubeProxyReplacement=true \
  --set ipam.mode=kubernetes \
  --set routingMode=native \
  --set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
  --set ipam.operator.clusterPoolIPv4MaskSize=24 \
  --set ipv4NativeRoutingCIDR=10.244.0.0/16 \
  --set autoDirectNodeRoutes=true \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

What the flags mean:
--set enableKubeProxyReplacement=true                         explicitly enable the kube-proxy replacement feature
--set kubeProxyReplacement=true                               fully replace kube-proxy with Cilium's eBPF datapath
--set ipam.mode=kubernetes                                    let the Kubernetes controller allocate Pod IPs
--set routingMode=native                                      enable native routing; nodes route Pod traffic directly
--set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16  Pod network address pool
--set ipam.operator.clusterPoolIPv4MaskSize=24                each node gets one /24 for Pod IPs
--set ipv4NativeRoutingCIDR=10.244.0.0/16                     Pod network range reachable by native routing
--set autoDirectNodeRoutes=true                               automatically add direct routes between nodes
--set hubble.enabled=true                                     enable Hubble network observability
--set hubble.relay.enabled=true                               enable Hubble Relay (the Hubble data aggregation service)
--set hubble.ui.enabled=true                                  enable the Hubble Web UI (visualization)

--- output ---
root@ops-test-021:~# cilium install \
  --set enableKubeProxyReplacement=true \
  --set kubeProxyReplacement=true \
  --set ipam.mode=kubernetes \
  --set routingMode=native \
  --set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
  --set ipam.operator.clusterPoolIPv4MaskSize=24 \
  --set ipv4NativeRoutingCIDR=10.244.0.0/16 \
  --set autoDirectNodeRoutes=true \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
ℹ️  Using Cilium version 1.18.0
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has not been installed
ℹ️  Cilium will fully replace all functionalities of kube-proxy

4.7.4 Native Routing + Cilium Ingress (cloud-native routing with the Ingress controller enabled)

# ingressController.enabled=true turns on the Cilium Ingress Controller;
# ingressController.loadbalancerMode=dedicated gives each Ingress a dedicated load-balancer service (recommended)
cilium install \
  --set kubeProxyReplacement=strict \
  --set enableKubeProxyReplacement=true \
  --set ipam.mode=kubernetes \
  --set routingMode=native \
  --set ipv4NativeRoutingCIDR=10.244.0.0/16 \
  --set autoDirectNodeRoutes=true \
  --set ingressController.enabled=true \
  --set ingressController.loadbalancerMode=dedicated \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

4.7.5 Verify the installation

cilium status

--- output ---
root@ops-test-021:~# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 5, Ready: 5/5, Available: 5/5
DaemonSet              cilium-envoy             Desired: 5, Ready: 5/5, Available: 5/5
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-relay             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui                Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 5
                       cilium-envoy             Running: 5
                       cilium-operator          Running: 1
                       clustermesh-apiserver
                       hubble-relay             Running: 1
                       hubble-ui                Running: 1
Cluster Pods:          4/4 managed by Cilium
Helm chart version:    1.18.0
Image versions         cilium             quay.io/cilium/cilium:v1.18.0@sha256:dfea023972d06ec183cfa3c9e7809716f85daaff042e573ef366e9ec6a0c0ab2: 5
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.34.4-1753677767-266d5a01d1d55bd1d60148f991b98dac0390d363@sha256:231b5bd9682dfc648ae97f33dcdc5225c5a526194dda08124f5eded833bf02bf: 5
                       cilium-operator    quay.io/cilium/operator-generic:v1.18.0@sha256:398378b4507b6e9db22be2f4455d8f8e509b189470061b0f813f0fabaf944f51: 1
                       hubble-relay       quay.io/cilium/hubble-relay:v1.18.0@sha256:c13679f22ed250457b7f3581189d97f035608fe13c87b51f57f8a755918e793a: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15: 1
                       hubble-ui          quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392: 1
root@ops-test-021:~#

4.7.6 Hubble UI access

# vim hubble-ui-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hubble-ui
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: hubble.cctbb.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hubble-ui
            port:
              number: 80

# kubectl apply -f hubble-ui-ingress.yaml

Browse to: http://hubble.cctbb.com
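The Ingress above assumes an NGINX ingress controller is already running in the cluster; without one, the Hubble UI can also be reached through the Cilium CLI's built-in port-forward, and the CLI ships an end-to-end connectivity test (it deploys temporary test pods and takes a few minutes):

# Port-forward and open the Hubble UI without an Ingress
cilium hubble ui

# End-to-end datapath check (deploys and then removes test workloads)
cilium status --wait
cilium connectivity test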
