XML to pandas dataframe for multiple time periods


Problem Description

I am attempting to query an API that returns data in XML format and then put that data into a pandas dataframe.

I have put together pieces of code from other posters on Stack Overflow, which work when the fromUtc and untilUtc in the URL below are only one hour apart. However, I want to be able to query several days or weeks of data rather than just a one-hour period each time.

Below is the one-hour timeframe (which my code works with):

url = "https://platform.aggm.at/mgm/api/timeseriesList.do?key=b73a4778a543fadd3f72bc9ebfe42d4c&fromUtc=2018-01-01T06&untilUtc=2018-01-01T07&group=904"

However, I can't work out how to pull all the data into a dataframe if the URL covers multiple days, like the one below:

url = "https://platform.aggm.at/mgm/api/timeseriesList.do?key=b73a4778a543fadd3f72bc9ebfe42d4c&fromUtc=2018-01-01T06&untilUtc=2018-04-01T06&group=904"
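Nothing below appears in the original post, but one common workaround is to keep the short queries that already work and generate one URL per window across the full range, e.g. with `pandas.date_range` (a sketch reusing the key and group from the URLs above, split into one-day windows):

```python
import pandas as pd

# Build one URL per one-day window between the two UTC stamps above.
base = ("https://platform.aggm.at/mgm/api/timeseriesList.do?"
        "key=b73a4778a543fadd3f72bc9ebfe42d4c&"
        "fromUtc={0}&untilUtc={1}&group=904")

edges = pd.date_range("2018-01-01 06:00", "2018-04-01 06:00", freq="D")
urls = [base.format(start.strftime("%Y-%m-%dT%H"), end.strftime("%Y-%m-%dT%H"))
        for start, end in zip(edges[:-1], edges[1:])]

print(len(urls))   # 90 one-day windows
print(urls[0])
```

Each URL could then be fetched and parsed with the single-period code, and the per-window frames concatenated with `pd.concat`.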

Here is the working code for a one-hour time frame:

import xml.etree.ElementTree as et
import requests
import pandas as pd

#for elem in root.findall(".//Value"):
    #print(elem.tag, elem.attrib, elem.text)

#from xml.etree import cElementTree as ElementTree

class XmlListConfig(list):
    def __init__(self, aList):
        for element in aList:
            if element:
                # treat like dict
                if len(element) == 1 or element[0].tag != element[1].tag:
                    self.append(XmlDictConfig(element))
                # treat like list
                elif element[0].tag == element[1].tag:
                    self.append(XmlListConfig(element))
            elif element.text:
                text = element.text.strip()
                if text:
                    self.append(text)


class XmlDictConfig(dict):
    def __init__(self, parent_element):
        if parent_element.items():
            self.update(dict(parent_element.items()))
        for element in parent_element:
            if element:
                # treat like dict - we assume that if the first two tags
                # in a series are different, then they are all different.
                if len(element) == 1 or element[0].tag != element[1].tag:
                    aDict = XmlDictConfig(element)
                # treat like list - we assume that if the first two tags
                # in a series are the same, then the rest are the same.
                else:
                    # here, we put the list in dictionary; the key is the
                    # tag name the list elements all share in common, and
                    # the value is the list itself 
                    aDict = {element[0].tag: XmlListConfig(element)}
                # if the tag has attributes, add those to the dict
                if element.items():
                    aDict.update(dict(element.items()))
                self.update({element.tag: aDict})
            # this assumes that if you've got an attribute in a tag,
            # you won't be having any text. This may or may not be a 
            # good idea -- time will tell. It works for the way we are
            # currently doing XML configuration files...
            elif element.items():
                self.update({element.tag: dict(element.items())})
                # when there is one child to an element with attributes AND text
                #The line just below this was added.
                self[element.tag].update({"TSO-Value":element.text})
            # finally, if there are no child tags and no attributes, extract
            # the text
            else:
                self.update({element.tag: element.text})

url = "https://platform.aggm.at/mgm/api/timeseriesList.do?key=b73a4778a543fadd3f72bc9ebfe42d4c&fromUtc=2018-01-01T06&untilUtc=2018-01-01T07&group=904"
response = requests.get(url)
response.content
root = et.fromstring(response.content)
xmldict = XmlDictConfig(root)

#https://stackoverflow.com/questions/32855045/splitting-nested-dictionary
#retrieve one of the values inside the dictionary
inner = xmldict['TimeseriesList']
df = pd.DataFrame.from_dict(inner)

new_inner = inner['Timeseries']
print(new_inner)
df2 = pd.DataFrame.from_dict(new_inner)


values = new_inner # initial data


def getValueOrDefault(v):
    if v is None:
        return {'FromUTC': None, 'UntilUTC': None, 'TSO-Value': None}
    return v['Value']

values = [{**value['Header'], **getValueOrDefault(value['Values'])} for value in values]
print(values)
df3 = pd.DataFrame(values)

When I query one hour of data, I get the following two dictionaries in my df2. Header:

{'TimeserieId': '1501', 'ObjectID': 'NominierterEKVOst', 'Unit': 'kWh/h', 'Granularity': 'HOUR', 'Name': 'Nominated Consumption East', 'LastUpdate': '2019-11-19T15:25:00.000Z'}

Values:

{'Value': {'FromUTC': '2018-01-01T06:00:00.000Z', 'UntilUTC': '2018-01-01T07:00:00.000Z', 'TSO-Value': '10128309'}}

I use the following function to put these into the dataframe below:

def getValueOrDefault(v):
    if v is None:
        return {'FromUTC': None, 'UntilUTC': None, 'TSO-Value': None}
    return v['Value']

values = [{**value['Header'], **getValueOrDefault(value['Values'])} for value in values]
print(values)
df3 = pd.DataFrame(values)

This returns a dataframe as follows:

But when I increase the time period of data I am querying, the code is unable to handle it.

This time my df2 contains:

{'TimeserieId': '1501', 'ObjectID': 'NominierterEKVOst', 'Unit': 'kWh/h', 'Granularity': 'HOUR', 'Name': 'Nominated Consumption East', 'LastUpdate': '2019-11-19T15:25:00.000Z'}

and then the below, which doesn't include the from and until dates:

{'Value': ['10128309', '10090691', '9991207.0', '10025856', '10030502', '10158945', '10158071', '10302802', '10838279', '10853112', '11108562', '11046172', '11216328', '11278472', '11288031', '11241307', '11164816', '11017874', '10808995', '10664421', '10498511', '10648369', '11028336', '12492439', '12492750', '12447412', '12365682', '12250841', '12225688', '12207470', '12321979', '12349964', '12303415', '12198112', '12237306', '12242819', '12216428', '12250504', '12265349', '11978096', '11936941', '11876989', '11298411', '11067736', '11134122', '11064653', '11351798', '12602242', '12910271', '12874984', '12790243', '12896733', '12871346', '12800547', '13204986', '13050597', '13225956', '13388547', '13510211', '13519767', '13262630', '12817374', '12323831', '12137506', '11946898', '11625450', '11540814', '11521041', '11586489', '12000038', '12391238', '12601717', '13231766', '13210762', '12947699', '13028445', '13555487', '12936937', '13038339', '13033435', '13078160', '13330834', '13441336', '13205542', '13142700', '13115554', '12055131', '11601545', '11415094', '11323713', '11282856', '11256287', '11244198', '11984312', '12134719', '13009439', '14598346', '14885711', '14849889', '14490393', '14312574', '13654674', '13051538', '12533006', '12614777', '12618908', '12594414', '12603372', '12639542', '12583482', '12523456', '12379896', '11692829', '11149465', '11120051', '11135499', '11130259', '11080760', '11271191', '10909230', '10962510', '11520114', '12022168', '12079581', '12077174', '11948640', '11895253', '11917234', '11946389', '12056458', '11995725', '11985354', '12008127', '11924274', '11783698', '11548238', '11135481', '10679563', '10750011', '10076521', '10470355', '10709176', '10756600', '10320698', '10491483', '10538155', '10650800', '10899565', '10890840', '10881940', '10856757', '10686689', '10798309', '10830784', '10953838', '10960305', '10959465', '11078191', '11001972', '10868302', '10550175', '10373976', '10470765', '10463628', '10651108', 
'10688276', '11069214', '12540496', '12974473']}

I am aiming to get the above values into the dataframe with their corresponding time periods next to them. Currently these aren't included and I cannot work out why.

If there is an easier way of pulling the XML into a dataframe, any help or suggestions would be appreciated.
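One such easier route (a sketch, not from the original thread) is to skip the dict classes entirely and iterate the Value elements with ElementTree, reading the timestamps from their attributes. The sample XML below is an assumption about the response shape, inferred from the dictionaries shown above:

```python
import xml.etree.ElementTree as et
import pandas as pd

# Stand-in for response.content; the real document would come from
# et.fromstring(requests.get(url).content).
sample = """
<Data>
  <TimeseriesList>
    <Timeseries>
      <Header><TimeserieId>1501</TimeserieId></Header>
      <Values>
        <Value FromUTC="2018-01-01T06:00:00.000Z"
               UntilUTC="2018-01-01T07:00:00.000Z">10128309</Value>
        <Value FromUTC="2018-01-01T07:00:00.000Z"
               UntilUTC="2018-01-01T08:00:00.000Z">10090691</Value>
      </Values>
    </Timeseries>
  </TimeseriesList>
</Data>
"""

root = et.fromstring(sample)
rows = [{"FromUTC": v.get("FromUTC"),
         "UntilUTC": v.get("UntilUTC"),
         "TSO-Value": v.text}
        for v in root.iter("Value")]
df = pd.DataFrame(rows)  # one row per period, however long the range
print(df.shape)
```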

Recommended Answer

Consider XSLT, the special-purpose language designed to transform XML files into other formats, including tabular CSV. Python can run XSLT 1.0 with the third-party, feature-rich, and easy-to-use library lxml, which extends the built-in ElementTree API. Alternatively, Python can call an external XSLT processor to run the script.

From there, Pandas can read the result tree directly with StringIO or from a file with read_csv. With this approach, either URL version works.

XSLT (save as a .xsl file or embed as a string)

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="text" omit-xml-declaration="yes" indent="yes"/>
    <xsl:strip-space elements="*"/>

    <xsl:template match="/Data">
       <!-- HEADERS -->         
       <xsl:text>TimeserieId,ObjectID,Unit,Granularity,Name,LastUpdate,</xsl:text>
       <xsl:text>FromUTC,UntilUTC,TSO_Value&#xa;</xsl:text>
       <xsl:apply-templates select="descendant::Value"/>
    </xsl:template>

    <xsl:template match="Value">
       <!-- DATA -->
       <xsl:value-of select="concat(ancestor::Timeseries/Header/TimeserieId, ',',
                                    ancestor::Timeseries/Header/ObjectID, ',',
                                    ancestor::Timeseries/Header/Unit, ',',
                                    ancestor::Timeseries/Header/Granularity, ',',
                                    ancestor::Timeseries/Header/Name, ',',
                                    ancestor::Timeseries/Header/LastUpdate, ',',
                                    @FromUTC, ',',
                                    @UntilUTC, ',',
                                    text())" />
       <xsl:text>&#xa;</xsl:text>
    </xsl:template>    

</xsl:stylesheet>

Python (no dict, list, for loop, or if logic)

from io import StringIO
import requests as rq
import lxml.etree as et
import pandas as pd

# RETRIEVE WEB CONTENT
url = ("https://platform.aggm.at/mgm/api/timeseriesList.do?"
       "key=b73a4778a543fadd3f72bc9ebfe42d4c&"
       "fromUtc=2018-01-01T06&untilUtc=2018-04-01T06&group=904")
response = rq.get(url)

# LOAD XML AND XSL
doc = et.fromstring(response.content)    
style = et.fromstring("""xslt string""")
# style = et.parse("/path/to/Script.xsl")

# TRANSFORM
transform = et.XSLT(style)
result = transform(doc)

Output

# STRING READ
time_series_df = pd.read_csv(StringIO(str(result)))

time_series_df.head(10)    
#    TimeserieId           ObjectID   Unit Granularity                        Name                LastUpdate                   FromUTC                  UntilUTC   TSO_Value
# 0         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T06:00:00.000Z  2018-01-01T07:00:00.000Z  10128309.0
# 1         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T07:00:00.000Z  2018-01-01T08:00:00.000Z  10090691.0
# 2         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T08:00:00.000Z  2018-01-01T09:00:00.000Z   9991207.0
# 3         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T09:00:00.000Z  2018-01-01T10:00:00.000Z  10025856.0
# 4         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T10:00:00.000Z  2018-01-01T11:00:00.000Z  10030502.0
# 5         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T11:00:00.000Z  2018-01-01T12:00:00.000Z  10158945.0
# 6         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T12:00:00.000Z  2018-01-01T13:00:00.000Z  10158071.0
# 7         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T13:00:00.000Z  2018-01-01T14:00:00.000Z  10302802.0
# 8         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T14:00:00.000Z  2018-01-01T15:00:00.000Z  10838279.0
# 9         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T15:00:00.000Z  2018-01-01T16:00:00.000Z  10853112.0

# IO FILE WRITE / READ
with open('Output.csv', 'wb') as f:
    f.write(bytes(result))

time_series_df = pd.read_csv('Output.csv')

time_series_df.head(10)        
#   TimeserieId           ObjectID   Unit Granularity                        Name                LastUpdate                   FromUTC                  UntilUTC   TSO_Value
# 0         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T06:00:00.000Z  2018-01-01T07:00:00.000Z  10128309.0
# 1         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T07:00:00.000Z  2018-01-01T08:00:00.000Z  10090691.0
# 2         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T08:00:00.000Z  2018-01-01T09:00:00.000Z   9991207.0
# 3         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T09:00:00.000Z  2018-01-01T10:00:00.000Z  10025856.0
# 4         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T10:00:00.000Z  2018-01-01T11:00:00.000Z  10030502.0
# 5         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T11:00:00.000Z  2018-01-01T12:00:00.000Z  10158945.0
# 6         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T12:00:00.000Z  2018-01-01T13:00:00.000Z  10158071.0
# 7         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T13:00:00.000Z  2018-01-01T14:00:00.000Z  10302802.0
# 8         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T14:00:00.000Z  2018-01-01T15:00:00.000Z  10838279.0
# 9         1501  NominierterEKVOst  kWh/h        HOUR  Nominated Consumption East  2019-11-19T15:25:00.000Z  2018-01-01T15:00:00.000Z  2018-01-01T16:00:00.000Z  10853112.0
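As a small follow-on (not part of the original answer), the UTC columns in the resulting frame can be parsed into real datetimes so the series can be indexed and resampled by time; the two-row frame below is a hypothetical stand-in for the CSV loaded above:

```python
import pandas as pd

# Hypothetical sample mirroring the FromUTC / TSO_Value columns above.
df = pd.DataFrame({
    "FromUTC": ["2018-01-01T06:00:00.000Z", "2018-01-01T07:00:00.000Z"],
    "TSO_Value": [10128309.0, 10090691.0],
})
df["FromUTC"] = pd.to_datetime(df["FromUTC"])  # tz-aware UTC timestamps
ts = df.set_index("FromUTC")["TSO_Value"]      # hourly time series
print(ts.iloc[0])
```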
