This article demonstrates a simple way to parse HTML tables in Python. It is shared here for your reference; the details are as follows:
The code depends on libxml2dom, so make sure it is installed first! Import it into your script and call the parse_tables() function, which takes three arguments:
1. source = a string containing the HTML source code. You can pass in just the table or the entire page.
2. headers = a list of ints OR a list of strings.
If the headers are ints, this is for tables with no header row: list the 0-based indexes of the columns from which you want to extract data.
If the headers are strings, this is for tables with header columns (marked with th tags): the information will be pulled from the columns whose headers match.
3. table_index = the 0-based index of the table in the source code. If there are multiple tables and the one you want to parse is the third table in the code, pass in the number 2 here.
It will return a list of lists; each inner list contains the parsed information from one row.
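To make the "list of lists" return shape concrete, here is a minimal standard-library sketch (it does not use the article's libxml2dom code; the class and sample table are hypothetical) that extracts the same kind of result for two named columns:

```python
# Minimal table extraction using only the standard library, to illustrate
# the list-of-lists output shape described above.
from html.parser import HTMLParser

class SimpleTableParser(HTMLParser):
    """Collects every <table> as a list of rows; each row is a list of
    th/td text contents."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.tables = []   # all parsed tables
        self.row = None    # cells of the row being built
        self.cell = None   # text fragments of the cell being built

    def handle_starttag(self, tag, attrs):
        if tag == 'table':
            self.tables.append([])
        elif tag == 'tr':
            self.row = []
        elif tag in ('td', 'th'):
            self.cell = []

    def handle_data(self, data):
        if self.cell is not None:
            self.cell.append(data)

    def handle_endtag(self, tag):
        if tag in ('td', 'th') and self.cell is not None:
            self.row.append(''.join(self.cell).strip())
            self.cell = None
        elif tag == 'tr' and self.row is not None:
            self.tables[-1].append(self.row)
            self.row = None

page = """<table>
<tr><th>Name</th><th>Age</th><th>City</th></tr>
<tr><td>Ann</td><td>34</td><td>Oslo</td></tr>
<tr><td>Bo</td><td>27</td><td>Kyiv</td></tr>
</table>"""

p = SimpleTableParser()
p.feed(page)
table = p.tables[0]                                   # table_index = 0
wanted = [table[0].index(h) for h in ['Name', 'City']]
rows = [[r[i] for i in wanted] for r in table[1:]]    # skip header row
print(rows)  # [['Ann', 'Oslo'], ['Bo', 'Kyiv']]
```

Each inner list holds one row's extracted cells, in the order the headers were requested.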
The code is as follows:
#The goal of the table parser is to get specific information from specific
#columns in a table.
#Input: source code from a typical website
#Arguments: a list of headers the user wants to return
#Output: a list of lists of the data in each row
import libxml2dom

def parse_tables(source, headers, table_index):
    """parse_tables(string source, list headers, int table_index)
    headers may be a list of strings if the table has headers defined,
    or a list of ints if no headers are defined; in that case the data
    is taken from the cells at those column indexes.
    This method returns a list of lists.
    """
    #Determine whether the headers list holds ints or strings and route
    #to the correct function
    print 'Printing headers: ', headers
    if isinstance(headers[0], int):
        #no header row: extract cells by column index
        return no_header(source, headers, table_index)
    elif isinstance(headers[0], str):
        #header row present: extract cells by header name
        return header_given(source, headers, table_index)
    else:
        #return None if the headers aren't of a supported type
        return None

#This function takes in the source code of the whole page, a string list
#of headers, and the index number of the table on the page. It returns a
#list of lists with the scraped information.
def header_given(source, headers, table_index):
    #the list of rows to return
    return_list = []
    #the column indexes of the requested headers
    header_index = []
    #get a document object out of the source code
    doc = libxml2dom.parseString(source, html=1)
    #get the tables from the document
    tables = doc.getElementsByTagName('table')
    try:
        #try to get a handle on the desired table
        main_table = tables[table_index]
    except IndexError:
        #if the table doesn't exist then return an error
        return ['The table index was not found']
    #get the list of headers in the table
    table_headers = main_table.getElementsByTagName('th')
    #record the position of each header that matches a requested one
    for position, header in enumerate(table_headers):
        if header.textContent in headers:
            header_index.append(position)
    #get the rows from the table, skipping the header row
    rows = main_table.getElementsByTagName('tr')
    for row in rows[1:]:
        #get all cells from the current row
        cells = row.getElementsByTagName('td')
        #collect the text content of the requested columns
        cell_list = [cells[i].textContent for i in header_index]
        return_list.append(cell_list)
    return return_list

#This function takes in the source code of the whole page, an int list of
#column indexes, and the index number of the table on the page. It returns
#a list of lists with the scraped info.
def no_header(source, headers, table_index):
    #the list of rows to return
    return_list = []
    #get a document object out of the source code
    doc = libxml2dom.parseString(source, html=1)
    #get the tables from the document
    tables = doc.getElementsByTagName('table')
    try:
        #try to get a handle on the desired table
        main_table = tables[table_index]
    except IndexError:
        #if the table doesn't exist then return an error
        return ['The table index was not found']
    #get all of the rows out of the main_table
    rows = main_table.getElementsByTagName('tr')
    for row in rows:
        #get all cells from the current row
        cells = row.getElementsByTagName('td')
        cell_list = []
        #pull the text of each requested column, skipping rows that are
        #too short (usually an index error on a header or spacer row)
        for i in headers:
            try:
                cell_list.append(cells[i].textContent)
            except IndexError:
                continue
        return_list.append(cell_list)
    return return_list
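The defensive indexing inside no_header() is worth noting: rows with fewer cells than a requested index are silently skipped rather than raising. A small self-contained illustration of that pattern (the sample data is hypothetical, not from the article):

```python
# Mirror of the try/except around cells[i] in no_header(): pick columns
# 0 and 2 from each row, tolerating rows that are too short.
rows = [['a', 'b', 'c'], ['d'], ['e', 'f', 'g']]
wanted = [0, 2]
out = []
for row in rows:
    picked = []
    for i in wanted:
        try:
            picked.append(row[i])
        except IndexError:
            #this row has no cell at index i; skip just that cell
            continue
    out.append(picked)
print(out)  # [['a', 'c'], ['d'], ['e', 'g']]
```

The short row still contributes a (partial) inner list, so callers should not assume every inner list has the same length.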
I hope this article helps with your Python programming.