How to read a large XML file in Python (using a 1.9 GB XML file as the example)
Recently I have been driven to despair by XML. In an earlier small assignment, parsing XML files with ElementTree went perfectly smoothly, but as soon as the file grew to about 2 GB, it simply went on strike!
The XML file format is shown in the figure below:
The original parsing code:
```python
def load_xml(self, file):
    '''function: use the file in XML format
    Args:
        file (str): path to the xml file
    Returns:
    '''
    # with open(file, 'r', encoding='utf-8') as f:
    #     xml = f.read()  # load the xml file
    # root = ET.fromstring(xml)  # root.tag is row
    tree = ET.parse(file)
    root = tree.getroot()
    for child in root:
        docID = child.find('DOCNO').text  # get docID
        # print(docID)
        content = (child.find('TITLE').text + child.find('ARTIST').text +
                   child.find('YEAR').text + child.find('LYRICS').text +
                   child.find('GENRE').text)  # get content
        # print("content", content)
        # use a dataframe to store each doc: docID, content (headline+text)
        self.docs_df.loc[docID] = content
```
The original error message:
```
Traceback (most recent call last):
  File "/Users/xiaoxudemacbook/edinburgh/TTDS/cw/cw3/code/index.py", line 115, in <module>
    index = Index('data.xml', 'englishST.txt')
  File "/Users/xiaoxudemacbook/edinburgh/TTDS/cw/cw3/code/index.py", line 21, in __init__
    self.load_xml(xml_path)
  File "/Users/xiaoxudemacbook/edinburgh/TTDS/cw/cw3/code/index.py", line 34, in load_xml
    root = ET.fromstring(xml) #root.tag is document
  File "/Users/xiaoxudemacbook/opt/anaconda3/lib/python3.9/xml/etree/ElementTree.py", line 1344, in XML
    parser.feed(text)
xml.etree.ElementTree.ParseError: out of memory: line 1, column 0
```
I searched for ways to fix this. Most people suggested lxml and iterparse, and although that is the top-ranked solution on Google, the answer dates back to 2014 and the change looked fairly involved. I also posted a question on CSDN, and then suddenly remembered our little darling ChatGPT. It gave four very clear solutions, which I have pasted below. (PS: the very simple first one solved my problem, but judging from the three methods listed in the runoob Python XML parsing tutorial, SAX (Simple API for XML) seems to be the remedy designed specifically for large XML files. I should come back to it when I have time; right now the deadline comes first.)
This error message suggests that the XML file being parsed by the load_xml() function is too large to fit in memory. When an XML file is parsed using the ET.fromstring() function, the entire contents of the file are loaded into memory as a string, which can be a problem for very large files.
To resolve this error, you can try one or more of the following solutions:
1. Use the ET.parse() function instead of ET.fromstring() to parse the XML file.
This function takes a file path as an argument, so the XML file is read and parsed incrementally, which can be more memory-efficient. Here’s an example of how you can modify the code:
Replace this line:
root = ET.fromstring(xml)
With this line:
tree = ET.parse(xml_path)
root = tree.getroot()
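To see the fix in isolation, here is a minimal, self-contained sketch. The `DOC`/`DOCNO`/`TITLE` tag names are assumptions based on the file format shown above, and the tiny document is made up; the point is that `ET.parse()` reads from the file itself, so the raw text never has to be held in a Python string alongside the tree.

```python
import xml.etree.ElementTree as ET
import tempfile, os

# A tiny stand-in for the real data file; the DOC/DOCNO/TITLE structure
# mirrors the format shown above but the values are made up.
xml_text = """<collection>
  <DOC><DOCNO>1</DOCNO><TITLE>First</TITLE></DOC>
  <DOC><DOCNO>2</DOCNO><TITLE>Second</TITLE></DOC>
</collection>"""

with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
    f.write(xml_text)
    path = f.name

# ET.parse() takes the file path directly, unlike ET.fromstring()
tree = ET.parse(path)
root = tree.getroot()
doc_ids = [child.find("DOCNO").text for child in root]
print(doc_ids)  # ['1', '2']
os.remove(path)
```

Note that `ET.parse()` still builds the whole tree in memory; what it avoids is keeping a second full copy of the file as a string at the same time.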
2. Split the XML file into smaller files before parsing them.
If the XML file contains a large number of documents, you can split it into smaller files that each contain a subset of the documents. This way, you can parse each file separately and avoid loading the entire XML file into memory at once.
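One way to do the splitting without ever loading the whole file is to stream it with `ET.iterparse` and collect a fixed number of documents per chunk. This is only a sketch: it assumes a flat `<collection>` of `<DOC>` elements, matching the format shown above.

```python
import xml.etree.ElementTree as ET
import tempfile, os

def split_docs(path, chunk_size=2):
    """Return valid-XML strings, each holding up to chunk_size <DOC>s."""
    chunks, current = [], []
    # iterparse streams the file, so the whole tree is never in memory
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == "DOC":
            current.append(ET.tostring(elem, encoding="unicode"))
            elem.clear()  # free the finished element
            if len(current) == chunk_size:
                chunks.append("<collection>" + "".join(current) + "</collection>")
                current = []
    if current:  # leftover documents form the final chunk
        chunks.append("<collection>" + "".join(current) + "</collection>")
    return chunks

# tiny demo with made-up data in the same shape as the file above
xml_text = ("<collection>"
            "<DOC><DOCNO>1</DOCNO></DOC>"
            "<DOC><DOCNO>2</DOCNO></DOC>"
            "<DOC><DOCNO>3</DOCNO></DOC>"
            "</collection>")
with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
    f.write(xml_text)
    path = f.name

parts = split_docs(path, chunk_size=2)
print(len(parts))  # 2
os.remove(path)
```

In practice each chunk would be written to its own file and parsed separately; re-wrapping the documents in a `<collection>` root keeps every chunk well-formed.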
3. Increase the amount of memory available to the Python process.
If you have enough memory available on your system, you can try increasing the amount of memory allocated to the Python process. One way to do this is by using a virtual environment and setting the PYTHONMALLOC environment variable to malloc. For example, you can try running the following command before running your Python script:
PYTHONMALLOC=malloc python index.py
This will tell Python to use the malloc memory allocator, which can increase the amount of memory available to the process.
4. Use a streaming XML parser.
If the XML file is too large to fit in memory even after trying the above solutions, you can use a streaming XML parser such as xml.sax or lxml.etree. These parsers allow you to process the XML file incrementally, without loading the entire file into memory at once. However, using a streaming parser can be more complex and require more code changes than the other solutions.
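For the standard library route, the streaming idea can be sketched with `ET.iterparse`: each `<DOC>` is handled as soon as its closing tag is seen, then cleared, so memory use stays roughly constant regardless of file size. The tag names are again assumptions based on the format shown above.

```python
import xml.etree.ElementTree as ET
import tempfile, os

# made-up miniature of the data file, in the same assumed shape
xml_text = """<collection>
  <DOC><DOCNO>1</DOCNO><TITLE>First</TITLE></DOC>
  <DOC><DOCNO>2</DOCNO><TITLE>Second</TITLE></DOC>
</collection>"""

with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
    f.write(xml_text)
    path = f.name

titles = {}
for event, elem in ET.iterparse(path, events=("end",)):
    if elem.tag == "DOC":
        # process the finished document immediately...
        titles[elem.find("DOCNO").text] = elem.find("TITLE").text
        # ...then drop its children so the tree never grows large
        elem.clear()

print(titles)  # {'1': 'First', '2': 'Second'}
os.remove(path)
```

The `elem.clear()` call is the important part: without it, `iterparse` still accumulates every parsed element under the root, and the memory saving disappears.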
Appendix: runoob tutorial on Python XML parsing: https://www.runoob.com/python/python-xml.html