
Spark ML: sharing one label encoder across multiple columns

Background

Suppose a dataset has two city columns, e.g. an origin ("from") city and a destination ("to") city.

The requirement is that the same city must be encoded to the same index in both columns after label encoding.
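To see why fitting each column independently breaks this requirement, here is a minimal pure-Python sketch (no Spark needed). The `fit_labels` helper is hypothetical; it mimics StringIndexer's default `frequencyDesc` ordering, with ties broken alphabetically:

```python
from collections import Counter

def fit_labels(values):
    # Order labels by descending frequency; break ties alphabetically,
    # mirroring StringIndexer's default "frequencyDesc" behavior.
    counts = Counter(values)
    return sorted(counts, key=lambda v: (-counts[v], v))

origin = ["Beijing", "Shanghai", "Beijing"]
dest = ["Shanghai", "Shanghai", "Guangzhou"]

# Fitting each column separately gives the same city different indices.
sep_origin = {v: i for i, v in enumerate(fit_labels(origin))}
sep_dest = {v: i for i, v in enumerate(fit_labels(dest))}
print(sep_origin["Shanghai"], sep_dest["Shanghai"])  # 1 vs 0: inconsistent

# Fitting on the stacked values yields one shared mapping instead.
shared = {v: i for i, v in enumerate(fit_labels(origin + dest))}
print(shared["Shanghai"])  # 0, wherever the city appears
```

Stacking the columns before fitting is exactly what the `fit` method below does with `unionByName`.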

Code


from __future__ import annotations

import json
from typing import Dict, Iterable, List, Optional

from pyspark.sql import SparkSession, functions as F, types as T
from pyspark.ml.feature import StringIndexer


class SharedLabelEncoder:
    """Shared label encoder: one label->index mapping applied to multiple columns.

    - handle_invalid: "keep" (unknown values get the unknown index),
      "skip" (returns None), "error" (raises).
    - The unknown index equals len(labels) and is only used when
      handle_invalid="keep".
    """

    def __init__(self, labels: Optional[List[str]] = None, handle_invalid: str = "keep"):
        self.labels: List[str] = labels or []
        self.label_to_index: Dict[str, int] = {v: i for i, v in enumerate(self.labels)}
        self.handle_invalid = handle_invalid

    def fit(self, df, cols: Iterable[str]) -> "SharedLabelEncoder":
        # Stack all columns into a single "value" column, then fit one
        # StringIndexer to obtain a unified label list.
        stacked = None
        for c in cols:
            col_df = df.select(F.col(c).cast(T.StringType()).alias("value")).na.fill({"value": ""})
            stacked = col_df if stacked is None else stacked.unionByName(col_df)
        indexer = StringIndexer(inputCol="value", outputCol="value_idx", handleInvalid="keep")
        model = indexer.fit(stacked)
        self.labels = list(model.labels)
        self.label_to_index = {v: i for i, v in enumerate(self.labels)}
        return self

    def _build_udf(self, spark: SparkSession):
        # Broadcast the mapping and close over plain locals so the UDF
        # does not need to serialize the whole encoder object.
        m_b = spark.sparkContext.broadcast(self.label_to_index)
        unknown_index = len(self.labels)
        handle_invalid = self.handle_invalid

        def map_value(v: Optional[str]) -> Optional[int]:
            if v is None:
                return unknown_index if handle_invalid == "keep" else None
            idx = m_b.value.get(v)
            if idx is not None:
                return idx
            if handle_invalid == "keep":
                return unknown_index
            if handle_invalid == "skip":
                return None
            raise ValueError(f"unknown label: {v}")

        return F.udf(map_value, T.IntegerType())

    def transform(self, df, input_cols: Iterable[str], suffix: str = "_idx"):
        udf_map = self._build_udf(df.sparkSession)
        out = df
        for c in input_cols:
            out = out.withColumn(c + suffix, udf_map(F.col(c).cast(T.StringType())))
        return out

    def save(self, path: str):
        obj = {"labels": self.labels, "handle_invalid": self.handle_invalid}
        with open(path, "w", encoding="utf-8") as f:
            json.dump(obj, f, ensure_ascii=False)

    @staticmethod
    def load(path: str) -> "SharedLabelEncoder":
        with open(path, "r", encoding="utf-8") as f:
            obj = json.load(f)
        return SharedLabelEncoder(
            labels=obj.get("labels", []),
            handle_invalid=obj.get("handle_invalid", "keep"),
        )


def main():
    spark = SparkSession.builder.appName("shared_label_encoder").getOrCreate()
    spark.sparkContext.setLogLevel("ERROR")
    data = [
        (1, "北京", "上海", 1),
        (2, "上海", "北京", 0),
        (3, "广州", "深圳", 1),
        (4, "深圳", "广州", 0),
        (5, "北京", "广州", 1),
        (6, "上海", "深圳", 0),
    ]
    columns = ["id", "origin_city", "dest_city", "label"]
    df = spark.createDataFrame(data, schema=columns)

    # Fit the shared encoder on both columns at once.
    encoder = SharedLabelEncoder(handle_invalid="keep").fit(df, ["origin_city", "dest_city"])

    # Transform both columns into the same index space.
    out_df = encoder.transform(df, ["origin_city", "dest_city"])
    print("Encoded result:")
    out_df.show(truncate=False)

    # Save, reload, and reuse the encoder.
    path = "./shared_label_encoder_city.json"
    encoder.save(path)
    encoder2 = SharedLabelEncoder.load(path)
    new_df = spark.createDataFrame([(7, "北京", "杭州", 1)], schema=columns)  # 杭州 is an unseen value
    out_new = encoder2.transform(new_df, ["origin_city", "dest_city"])
    print("Reusing the saved and reloaded encoder:")
    out_new.show(truncate=False)


if __name__ == "__main__":
    main()
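The per-value mapping inside `_build_udf` can be exercised without a Spark session. This standalone sketch reproduces the three `handle_invalid` policies; `map_value` here takes the mapping and policy as explicit parameters instead of closing over them:

```python
from typing import Dict, Optional

def map_value(v: Optional[str], label_to_index: Dict[str, int],
              handle_invalid: str = "keep") -> Optional[int]:
    # Known values map to their fitted index; None and unseen values map
    # to len(labels) under "keep", to None under "skip", and "error"
    # raises for unseen (non-None) values.
    unknown_index = len(label_to_index)
    if v is None:
        return unknown_index if handle_invalid == "keep" else None
    idx = label_to_index.get(v)
    if idx is not None:
        return idx
    if handle_invalid == "keep":
        return unknown_index
    if handle_invalid == "skip":
        return None
    raise ValueError(f"unknown label: {v}")

m = {"Beijing": 0, "Shanghai": 1}
print(map_value("Beijing", m))           # 0
print(map_value("Hangzhou", m, "keep"))  # 2 (= len(labels))
print(map_value("Hangzhou", m, "skip"))  # None
```

Reserving `len(labels)` for unknowns keeps the fitted indices stable when new values appear at inference time.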

Output

Encoded result:
+---+-----------+---------+-----+---------------+-------------+
|id |origin_city|dest_city|label|origin_city_idx|dest_city_idx|
+---+-----------+---------+-----+---------------+-------------+
|1  |北京       |上海     |1    |1              |0            |
|2  |上海       |北京     |0    |0              |1            |
|3  |广州       |深圳     |1    |2              |3            |
|4  |深圳       |广州     |0    |3              |2            |
|5  |北京       |广州     |1    |1              |2            |
|6  |上海       |深圳     |0    |0              |3            |
+---+-----------+---------+-----+---------------+-------------+

Reusing the saved and reloaded encoder:
+---+-----------+---------+-----+---------------+-------------+
|id |origin_city|dest_city|label|origin_city_idx|dest_city_idx|
+---+-----------+---------+-----+---------------+-------------+
|7  |北京       |杭州     |1    |1              |4            |
+---+-----------+---------+-----+---------------+-------------+
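The save/load round trip depends only on JSON, so it can be verified without Spark. A minimal sketch, assuming the label order fitted above, shows that the reloaded mapping reproduces both the known indices and the unknown index 4 assigned to 杭州:

```python
import json
import os
import tempfile

labels = ["上海", "北京", "广州", "深圳"]  # order as fitted above

# Serialize the same shape that SharedLabelEncoder.save writes.
path = os.path.join(tempfile.mkdtemp(), "shared_label_encoder_city.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump({"labels": labels, "handle_invalid": "keep"}, f, ensure_ascii=False)

# Reload and rebuild the mapping, as SharedLabelEncoder.load does.
with open(path, "r", encoding="utf-8") as f:
    obj = json.load(f)
label_to_index = {v: i for i, v in enumerate(obj["labels"])}
unknown_index = len(obj["labels"])

print(label_to_index["北京"])                     # 1
print(label_to_index.get("杭州", unknown_index))  # 4, matching the table above
```

Because only `labels` and `handle_invalid` are persisted, the JSON file can be shared between training and serving jobs without carrying any Spark state.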
