
Reading and Writing the Arrow Format with DuckDB and PyArrow

The Arrow format is widely adopted by analytical data engines such as DataFusion and Polars. DuckDB has an arrow extension; it was originally a core extension, but after version 1.3 it was deprecated and replaced by a community extension named nanoarrow, which keeps arrow as an alias.

Installation

D install arrow from community;
D copy (from 'foods.csv') to 'foods.arrow';
D load arrow;
D from 'foods.arrow';
IO Error:
Expected continuation token (0xFFFFFFFF) but got 1702125923
D from read_csv('foods.arrow');
┌────────────┬──────────┬────────┬──────────┐
│  category  │ calories │ fats_g │ sugars_g │
│  varchar   │  int64   │ double │  int64   │
├────────────┼──────────┼────────┼──────────┤
│ vegetables │       45 │    0.5 │        2 │
│ seafood    │      150 │    5.0 │        0 │
│     ·      │    ·     │   ·    │    ·     │
└────────────┴──────────┴────────┴──────────┘
D copy (from 'foods.csv') to 'foods2.arrow';
D from 'foods2.arrow' limit 4;
┌────────────┬──────────┬────────┬──────────┐
│  category  │ calories │ fats_g │ sugars_g │
│  varchar   │  int64   │ double │  int64   │
├────────────┼──────────┼────────┼──────────┤
│ vegetables │       45 │    0.5 │        2 │
│ seafood    │      150 │    5.0 │        0 │
│ meat       │      100 │    5.0 │        0 │
│ fruit      │       60 │    0.0 │       11 │
└────────────┴──────────┴────────┴──────────┘

Note that the arrow extension is not loaded automatically after installation. As a result, foods.arrow, generated before the extension was loaded, is actually in CSV format; only foods2.arrow is a real Arrow file.

Python's pyarrow module also supports reading and writing the Arrow format, but pa.ipc.open_file cannot recognize the Arrow file DuckDB generates (likely because DuckDB's nanoarrow extension writes the Arrow IPC stream format, which lacks the file footer that open_file expects). pyarrow can also produce files in other formats, such as Parquet and Feather. The following examples are adapted from the Arrow documentation.

>>> import pandas as pd
>>> import pyarrow as pa
>>> with pa.memory_map('foods2.arrow', 'r') as source:
...     loaded_arrays = pa.ipc.open_file(source).read_all()
...
Traceback (most recent call last):
  File "<python-input-11>", line 2, in <module>
    loaded_arrays = pa.ipc.open_file(source).read_all()
  File "C:\Users\lt\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyarrow\ipc.py", line 234, in open_file
    return RecordBatchFileReader(source, footer_offset=footer_offset,
                                 options=options, memory_pool=memory_pool)
  File "C:\Users\lt\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyarrow\ipc.py", line 110, in __init__
    self._open(source, footer_offset=footer_offset,
               options=options, memory_pool=memory_pool)
  File "pyarrow\\ipc.pxi", line 1090, in pyarrow.lib._RecordBatchFileReader._open
  File "pyarrow\\error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow\\error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Not an Arrow file
>>> import pyarrow.parquet as pq
>>> import pyarrow.feather as ft
>>> dir(pq)
['ColumnChunkMetaData', 'ColumnSchema', 'FileDecryptionProperties', 'FileEncryptionProperties', 'FileMetaData', 'ParquetDataset', 'ParquetFile', 'ParquetLogicalType', 'ParquetReader', 'ParquetSchema', 'ParquetWriter', 'RowGroupMetaData', 'SortingColumn', 'Statistics', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_filters_to_expression', 'core', 'filters_to_expression', 'read_metadata', 'read_pandas', 'read_schema', 'read_table', 'write_metadata', 'write_table', 'write_to_dataset']
>>> dir(ft)
['Codec', 'FeatherDataset', 'FeatherError', 'Table', '_FEATHER_SUPPORTED_CODECS', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_feather', '_pandas_api', 'check_chunked_overflow', 'concat_tables', 'ext', 'os', 'read_feather', 'read_table', 'schema', 'write_feather']
>>> import numpy as np
>>> arr = pa.array(np.arange(10))
>>> schema = pa.schema([
...     pa.field('nums', arr.type)
... ])
>>> with pa.OSFile('arraydata.arrow', 'wb') as sink:
...     with pa.ipc.new_file(sink, schema=schema) as writer:
...         batch = pa.record_batch([arr], schema=schema)
...         writer.write(batch)
...
>>> with pa.memory_map('arraydata.arrow', 'r') as source:
...     loaded_arrays = pa.ipc.open_file(source).read_all()
...
>>> arr2 = loaded_arrays[0]
>>> arr
<pyarrow.lib.Int64Array object at 0x000001A3D8FD9FC0>
[
  0,
  1,
  2,
  3,
  4,
  5,
  6,
  7,
  8,
  9
]
>>> arr2
<pyarrow.lib.ChunkedArray object at 0x000001A3D8FD9C00>
[
  [
    0,
    1,
    2,
    3,
    4,
    5,
    6,
    7,
    8,
    9
  ]
]
>>> table = pa.Table.from_arrays([arr], names=["col1"])
>>> ft.write_feather(table, 'example.feather')
>>> table
pyarrow.Table
col1: int64
----
col1: [[0,1,2,3,4,5,6,7,8,9]]
>>> table2 = ft.read_table("example.feather")
>>> table2
pyarrow.Table
col1: int64
----
col1: [[0,1,2,3,4,5,6,7,8,9]]

As the examples show, the structure read back from the Arrow file differs from what was written: the pyarrow.lib.Int64Array comes back as a pyarrow.lib.ChunkedArray with one extra level of nesting. This is because read_all() returns a Table, and a Table column is a ChunkedArray, since its data may be split across multiple record batches. The Feather round trip, by contrast, returns the same Table structure that was written.

An Arrow file generated by pyarrow can be read by DuckDB, as shown below.

D load arrow;
D from 'arraydata.arrow';
┌───────┐
│ nums  │
│ int64 │
├───────┤
│     0 │
│     1 │
│     2 │
│     3 │
│     4 │
│     5 │
│     6 │
│     7 │
│     8 │
│     9 │
└───────┘