Fetching content from HTML and writing the fetched content in a specific format to CSV


Problem Description

I have HTML code like:

<!-- Snippet snippets/search_result_text.html end -->
</h2>





      <p class="filter-list">


          <span class="facet">Organisations:</span>

            <span class="filtered pill">**Reserve Bank of Australia**
              <a href="/dataset?groups=business" class="remove" title="Remove"><i class="icon-remove"></i></a>
            </span>



          <span class="facet">Groups:</span>

            <span class="filtered pill">**Business Support and Regulation**
              <a href="/dataset?organization=reservebankofaustralia" class="remove" title="Remove"><i class="icon-remove"></i></a>
            </span>


      </p>



</form>




<!-- Snippet snippets/search_form.html end -->




<!-- Snippet snippets/search_package_list.html start -->



        <ul class="dataset-list unstyled">






<!-- Snippet snippets/package_item.html start -->






<li class="dataset-item">

    <div class="dataset-content">
      <h3 class="dataset-heading">



        <a href="/dataset/banks-assets">**Banks – Assets**</a>




      </h3>


        <div>These data are derived from returns submitted to the Australian Prudential Regulation Authority (APRA) by banks authorised under the Banking Act 1959. APRA assumed...</div>

    </div>

      <ul class="dataset-resources unstyled">

          <li>

            <a href="/dataset/banks-assets" class="label" data-format="xls">XLS</a>

          </li>

      </ul>


</li>
<!-- Snippet snippets/package_item.html end -->





<!-- Snippet snippets/package_item.html start -->






<li class="dataset-item">

    <div class="dataset-content">
      <h3 class="dataset-heading">



        <a href="/dataset/consolidated-exposures-immediate-and-ultimate-risk-basis">**Consolidated Exposures – Immediate and Ultimate Risk Basis**</a>




      </h3>


        <div>In March 2003, banks and selected Registered Financial Corporations (RFCs) began reporting their international assets, liabilities and country exposures to APRA in ARF/RRF 231...</div>

    </div>

      <ul class="dataset-resources unstyled">

          <li>

            <a href="/dataset/consolidated-exposures-immediate-and-ultimate-risk-basis" class="label" data-format="xls">XLS</a>

          </li>

      </ul>


</li>
<!-- Snippet snippets/package_item.html end -->

I want to extract the data shown in bold above and write it to a CSV in a specific format, like:

Group                              Organisation                 Title
Business Support and Regulation    Reserve Bank of Australia    Banks-Assets
Business Support and Regulation    Reserve Bank of Australia    Consolidated Exposures – Immediate and Ultimate Risk Basis

and so on. I have Python code, but it gives two different files.

import urllib.request

import pandas as pd
from bs4 import BeautifulSoup

webpage_urls = ["https://data.gov.au/dataset?q=&groups=business&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&organization=reservebankofaustralia&_groups_limit=0",
                "https://data.gov.au/dataset?q=&groups=business&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&organization=department-of-finance&_groups_limit=0",
                "https://data.gov.au/dataset?q=&groups=business&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&organization=departmentofagriculturefisheriesandforestry&_groups_limit=0",
                "https://data.gov.au/dataset?organization=department-of-communications&q=&groups=business&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&_groups_limit=0",
                "https://data.gov.au/dataset?organization=ip-australia&q=&groups=business&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&_groups_limit=0",
                "https://data.gov.au/dataset?q=&organization=australiancommunicationsandmediaauthority&groups=business&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&_groups_limit=0",
                "https://data.gov.au/dataset?q=&organization=www-mitchellshirecouncil-vic-gov-au&groups=business&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&_groups_limit=0",
                "https://data.gov.au/dataset?q=&groups=business&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&organization=digital-transformation-agency&_groups_limit=0"]
# fetching data from all urls
data = []
dfs = []

for i in webpage_urls:
    wiki2 = i
    page = urllib.request.urlopen(wiki2)
    soup = BeautifulSoup(page, "lxml")

    lobbying = {}
    # collect every dataset heading on the page
    data2 = soup.find_all('h3', class_="dataset-heading")
    for element in data2:
        lobbying[element.a.get_text()] = {}
    prefix = "https://data.gov.au"
    for element in data2:
        lobbying[element.a.get_text()]["link"] = prefix + element.a["href"]
        df = pd.DataFrame.from_dict(lobbying, orient='index').rename_axis('Titles').reset_index()
        dfs.append(df)
df = pd.concat(dfs, ignore_index=True)
df1 = df.drop_duplicates(subset='Titles')
print(df1)
df1.to_csv('D:/output2.csv')

for i in webpage_urls:
    wiki2 = i
    page = urllib.request.urlopen(wiki2)
    soup = BeautifulSoup(page, "lxml")

    # fetching the organisation and group facets (the active li items)
    data3 = soup.find_all('li', class_="nav-item active")
    lobbying1 = []
    for element in data3:
        lobbying1.append(element.span.get_text())
    data.append(lobbying1)



df_ = pd.DataFrame(data, columns=['Organisations', 'Groups'])
df2 = df_.drop_duplicates(subset='Organisations')
with pd.option_context('display.max_rows', 999):
    print(df2)
df2.to_csv('D:/output_new.csv')

The first script above also gives the link. Please help me get the desired format in a single CSV with three columns.

Solution

I tried modifying the original solution a bit. It is best to loop only once and create one big DataFrame with all the data, then select columns with a subset like [['col1', 'col2']] for the new DataFrames.

Also, to remove the numbers in parentheses, you can use str.replace:

dfs = []  # imports and webpage_urls as defined in the question's code
for i in webpage_urls:
    wiki2 = i
    page = urllib.request.urlopen(wiki2)
    soup = BeautifulSoup(page, "lxml")

    lobbying = {}
    # there are always exactly two active li elements: the organisation is [0], the group is [1]
    org = soup.find_all('li', class_="nav-item active")[0].span.get_text()
    groups = soup.find_all('li', class_="nav-item active")[1].span.get_text()

    data2 = soup.find_all('h3', class_="dataset-heading")
    for element in data2:
        lobbying[element.a.get_text()] = {}
    prefix = "https://data.gov.au"
    for element in data2:
        lobbying[element.a.get_text()]["link"] = prefix + element.a["href"]
        lobbying[element.a.get_text()]["Organisation"] = org
        lobbying[element.a.get_text()]["Group"] = groups
        df = pd.DataFrame.from_dict(lobbying, orient='index').rename_axis('Titles').reset_index()
        dfs.append(df)
df = pd.concat(dfs, ignore_index=True)
df1 = df.drop_duplicates(subset='Titles').reset_index(drop=True)



df1['Organisation'] = df1['Organisation'].str.replace(r'\(\d+\)', '', regex=True)
df1['Group'] = df1['Group'].str.replace(r'\(\d+\)', '', regex=True)
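A quick sanity check of what that pattern strips (the "(62)" below is a made-up facet count, purely for illustration; the replacement leaves a trailing space behind, so chain .str.strip() if that matters):

import pandas as pd

s = pd.Series(['Reserve Bank of Australia (62)'])
print(s.str.replace(r'\(\d+\)', '', regex=True).str.strip())
# 0    Reserve Bank of Australia
# dtype: object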


print(df1.head())
                                              Titles             Organisation  \
0                                     Banks – Assets  Reserve Bank of Aus...    
1  Consolidated Exposures – Immediate and Ultimat...  Reserve Bank of Aus...    
2  Foreign Exchange Transactions and Holdings of ...  Reserve Bank of Aus...    
3  Finance Companies and General Financiers – Sel...  Reserve Bank of Aus...    
4                   Liabilities and Assets – Monthly  Reserve Bank of Aus...    

                                                link                    Group  
0           https://data.gov.au/dataset/banks-assets  Business Support an...   
1  https://data.gov.au/dataset/consolidated-expos...  Business Support an...   
2  https://data.gov.au/dataset/foreign-exchange-t...  Business Support an...   
3  https://data.gov.au/dataset/finance-companies-...  Business Support an...   
4  https://data.gov.au/dataset/liabilities-and-as...  Business Support an...   


df2 = df1[['Titles', 'link']]
print(df2.head())
                                              Titles  \
0                                     Banks – Assets   
1  Consolidated Exposures – Immediate and Ultimat...   
2  Foreign Exchange Transactions and Holdings of ...   
3  Finance Companies and General Financiers – Sel...   
4                   Liabilities and Assets – Monthly   

                                                link  
0           https://data.gov.au/dataset/banks-assets  
1  https://data.gov.au/dataset/consolidated-expos...  
2  https://data.gov.au/dataset/foreign-exchange-t...  
3  https://data.gov.au/dataset/finance-companies-...  
4  https://data.gov.au/dataset/liabilities-and-as...  


df3 = df1[['Group', 'Organisation', 'Titles']]
print(df3.head())
                     Group             Organisation  \
0  Business Support an...   Reserve Bank of Aus...    
1  Business Support an...   Reserve Bank of Aus...    
2  Business Support an...   Reserve Bank of Aus...    
3  Business Support an...   Reserve Bank of Aus...    
4  Business Support an...   Reserve Bank of Aus...    

                                              Titles  
0                                     Banks – Assets  
1  Consolidated Exposures – Immediate and Ultimat...  
2  Foreign Exchange Transactions and Holdings of ...  
3  Finance Companies and General Financiers – Sel...  
4                   Liabilities and Assets – Monthly  
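Finally, to get the single three-column CSV the question asks for, df3 from above can simply be written out with to_csv; the path below is only an example, and index=False keeps pandas' row index out of the file:

# write the combined Group/Organisation/Titles frame to one CSV (example path)
df3.to_csv('D:/output_combined.csv', index=False)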
