Reading Notes on *Web Scraping with Python* (Part 6)
Posted by 阿新 on 2018-04-29
1. Downloading files

urllib.request.urlretrieve downloads a file given its URL:
# -*- coding: utf-8 -*-
from urllib.request import urlretrieve
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://www.pythonscraping.com/")
bsObj = BeautifulSoup(html, "lxml")
imageLocation = bsObj.find("a", {"id": "logo"}).find("img")["src"]
urlretrieve(imageLocation, "logo.jpg")
This program downloads the logo image from http://pythonscraping.com and saves it as logo.jpg in the directory the program is run from.
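Here the local filename "logo.jpg" is hard-coded. When the name should follow the URL instead, it can be derived from the URL's path. A minimal sketch, using only the standard library (filename_from_url is a hypothetical helper, not from the book):

```python
import os
from urllib.parse import urlparse


def filename_from_url(url, default="download.bin"):
    """Return the last path segment of a URL, or a default if the path is empty."""
    path = urlparse(url).path          # e.g. "/sites/default/logo.jpg"
    name = os.path.basename(path)      # e.g. "logo.jpg"
    return name or default


print(filename_from_url("http://pythonscraping.com/sites/default/logo.jpg"))
```

The derived name could then be passed as the second argument to urlretrieve.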
The following program downloads every file referenced by a src attribute on the http://pythonscraping.com home page:
# -*- coding: utf-8 -*-
import os
from urllib.request import urlretrieve
from urllib.request import urlopen
from bs4 import BeautifulSoup

downloadDirectory = "downloaded"
baseUrl = "http://pythonscraping.com"

def getAbsoluteURL(baseUrl, source):
    # Normalize the src value into an absolute URL on the base domain
    if source.startswith("http://www."):
        url = "http://" + source[11:]
    elif source.startswith("http://"):
        url = source
    elif source.startswith("www."):
        url = "http://" + source[4:]
    else:
        url = baseUrl + "/" + source
    if baseUrl not in url:
        return None  # skip files hosted on other domains
    return url

def getDownloadPath(baseUrl, absoluteUrl, downloadDirectory):
    # Map the remote URL onto a local path under downloadDirectory
    path = absoluteUrl.replace("www.", "")
    path = path.replace(baseUrl, "")
    path = downloadDirectory + path
    directory = os.path.dirname(path)
    if not os.path.exists(directory):
        os.makedirs(directory)
    return path

html = urlopen("http://www.pythonscraping.com")
bsObj = BeautifulSoup(html, "lxml")
downloadList = bsObj.findAll(src=True)  # every tag with a src attribute

for download in downloadList:
    fileUrl = getAbsoluteURL(baseUrl, download["src"])
    if fileUrl is not None:
        print(fileUrl)
        urlretrieve(fileUrl, getDownloadPath(baseUrl, fileUrl, downloadDirectory))
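getAbsoluteURL hand-rolls the URL normalization with string prefixes. The standard library's urllib.parse.urljoin covers the same cases (relative paths, absolute URLs) more robustly; a sketch of the same filtering idea (absolute_url is a hypothetical replacement, not the book's function):

```python
from urllib.parse import urljoin, urlparse


def absolute_url(base, source, allowed_host="pythonscraping.com"):
    """Resolve source against base and keep only same-site URLs."""
    url = urljoin(base, source)
    host = urlparse(url).netloc
    # Treat www.pythonscraping.com and pythonscraping.com as the same site
    if host.replace("www.", "", 1) != allowed_host:
        return None
    return url


print(absolute_url("http://pythonscraping.com", "img/logo.jpg"))
```

urljoin also handles cases the prefix checks miss, such as protocol-relative URLs ("//cdn.example.com/x.png").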
2. Writing CSV files
# -*- coding: utf-8 -*-
import csv

# newline="" prevents the csv module from writing blank rows on Windows
csvFile = open("test.csv", 'w+', newline="")
try:
    writer = csv.writer(csvFile)
    writer.writerow(('number', 'number plus 2', 'number times 2'))
    for i in range(10):
        writer.writerow((i, i + 2, i * 2))
finally:
    csvFile.close()
After running this code, you will see a CSV file with the following contents:
number | number plus 2 | number times 2
0      | 2             | 0
1      | 3             | 2
2      | 4             | 4
3      | 5             | 6
4      | 6             | 8
5      | 7             | 10
6      | 8             | 12
7      | 9             | 14
8      | 10            | 16
9      | 11            | 18
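The same csv module reads the rows back with csv.reader. A self-contained sketch of the round trip, using an in-memory buffer in place of test.csv:

```python
import csv
import io

# Write the same rows as above, but into a StringIO buffer
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(('number', 'number plus 2', 'number times 2'))
for i in range(10):
    writer.writerow((i, i + 2, i * 2))

# Rewind and parse the CSV text back into rows
buf.seek(0)
rows = list(csv.reader(buf))
print(rows[0])  # the header row
print(rows[1])  # note: csv.reader returns every field as a string
```

One thing worth noticing: csv.reader gives back strings ('0', '2', '0'), not integers, so numeric fields need explicit conversion.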
Get an HTML table from a Wikipedia article and write it to a CSV file:
# -*- coding: utf-8 -*-
import csv
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://en.wikipedia.org/wiki/Comparison_of_text_editors")
bsObj = BeautifulSoup(html, "lxml")
# The main comparison table is the first table on the page
table = bsObj.findAll("table", {"class": "wikitable"})[0]
rows = table.findAll("tr")

csvFile = open("editors.csv", 'wt', newline="", encoding='utf-8')
writer = csv.writer(csvFile)
try:
    for row in rows:
        csvRow = []
        for cell in row.findAll(['td', 'th']):
            csvRow.append(cell.get_text())
        writer.writerow(csvRow)
finally:
    csvFile.close()
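get_text() on Wikipedia table cells usually keeps trailing newlines and footnote markers such as "[1]". A sketch of a cleaning step that could be applied to each cell before writerow (clean_cell is a hypothetical helper, not part of the book's example):

```python
import re


def clean_cell(text):
    """Collapse internal whitespace and strip footnote markers like [1]."""
    text = re.sub(r"\[\d+\]", "", text)  # drop [1], [23] reference markers
    text = re.sub(r"\s+", " ", text)     # collapse newlines/tabs into spaces
    return text.strip()


print(clean_cell("GNU Emacs[1]\n"))
```

In the scraper above, `csvRow.append(cell.get_text())` would become `csvRow.append(clean_cell(cell.get_text()))`.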