Sample code for crawling Xici proxies with Python multithreading
Xici Proxy was a Chinese IP proxy site. Since the service has shut down, I am releasing my original code here for everyone to learn from.
Mirror address: https://www.blib.cn/url/xcdl.html
First, find all the tr tags, along with the tags that have class='odd', and extract them.

Then, within each tr tag, find all the td tags and extract only the cells at positions [1, 2, 5, 9]; skip the rest.
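The cell-picking step above can be sketched on a single sample row. This is a minimal illustration; the row below is made-up HTML mimicking the Xici table layout, not real data:

```python
from bs4 import BeautifulSoup

# A fabricated row in the Xici table layout: country flag, IP, port,
# location, anonymity, protocol, speed, connect time, alive, verify time
html = ('<tr class="odd">'
        '<td><img src="cn.png"/></td>'
        '<td>119.101.112.31</td>'
        '<td>9999</td>'
        '<td>湖北</td>'
        '<td>高匿</td>'
        '<td>HTTP</td>'
        '<td>0.5秒</td>'
        '<td>0.1秒</td>'
        '<td>1天</td>'
        '<td>20-02-15 12:00</td>'
        '</tr>')

soup = BeautifulSoup(html, 'html.parser')
tds = soup.find_all('td')
# Indices 1, 2, 5, 9 hold the IP, port, protocol, and verify time
record = [tds[i].get_text() for i in [1, 2, 5, 9]]
print(record)
```

Only these four columns are needed later, which is why the crawler ignores the rest of the row.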

With that, we can write the code to scrape a single page, saving the extracted records to a file.
```python
import re
import requests
from bs4 import BeautifulSoup

head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.100 Safari/537.36'}

if __name__ == '__main__':
    ip_list = []
    fp = open('SpiderAddr.json', 'a+', encoding='utf-8')
    url = 'https://www.blib.cn/url/xcdl.html'
    request = requests.get(url=url, headers=head)
    soup = BeautifulSoup(request.content, 'lxml')
    # Match both the class="odd" rows and the unclassed even rows
    data = soup.find_all(name='tr', attrs={'class': re.compile('|[^odd]')})
    for item in data:
        soup_proxy = BeautifulSoup(str(item), 'lxml')
        proxy_list = soup_proxy.find_all(name='td')
        # Columns 1, 2, 5, 9 hold the IP, port, protocol, and verify time
        for i in [1, 2, 5, 9]:
            ip_list.append(proxy_list[i].string)
        print('[+] Crawled record: {} saved'.format(ip_list))
        fp.write(str(ip_list) + '\n')
        ip_list.clear()
```
After crawling, the results are saved to the file SpiderAddr.json.
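Each line of SpiderAddr.json is a Python list literal, so it can be parsed back with `ast.literal_eval`, which only accepts literals and is therefore a safer alternative to the bare `eval()` used below. A minimal sketch with an illustrative line:

```python
import ast

# One line in the format the crawler writes (values are illustrative)
line = "['119.101.112.31', '9999', 'HTTP', '20-02-15 12:00']\n"

# ast.literal_eval parses only Python literals, unlike eval(), which
# would execute arbitrary expressions found in the file
record = ast.literal_eval(line.strip())
print(record)
```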

Finally, another piece of code converts the records into a format an SSR proxy tool can use directly, e.g. {'http': 'http://119.101.112.31:9999'}
```python
if __name__ == '__main__':
    result = []
    fp = open('SpiderAddr.json', 'r')
    data = fp.readlines()
    for item in data:
        dic = {}
        read_line = eval(item.replace('\n', ''))
        Protocol = read_line[2].lower()
        if Protocol == 'http':
            dic[Protocol] = 'http://' + read_line[0] + ':' + read_line[1]
        else:
            dic[Protocol] = 'https://' + read_line[0] + ':' + read_line[1]
        result.append(dic)
    print(result)
```
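A dictionary in this format can be passed straight to requests through its `proxies` parameter. A minimal sketch; the address is the placeholder from the example above, so the actual request is left commented out:

```python
# Hypothetical entry produced by the conversion step above
proxy = {'http': 'http://119.101.112.31:9999'}

# requests selects the proxy whose key matches the URL scheme; uncomment
# to route a real request through a working proxy:
# import requests
# resp = requests.get('http://httpbin.org/ip', proxies=proxy, timeout=5)

print(proxy)
```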

The complete multithreaded version is shown below.
```python
import re
import threading
import argparse
import requests
from queue import Queue
from bs4 import BeautifulSoup

head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.100 Safari/537.36'}

class AgentSpider(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self._queue = queue

    def run(self):
        ip_list = []
        fp = open('SpiderAddr.json', 'a+', encoding='utf-8')
        while not self._queue.empty():
            url = self._queue.get()
            try:
                request = requests.get(url=url, headers=head)
                soup = BeautifulSoup(request.content, 'lxml')
                data = soup.find_all(name='tr', attrs={'class': re.compile('|[^odd]')})
                for item in data:
                    soup_proxy = BeautifulSoup(str(item), 'lxml')
                    proxy_list = soup_proxy.find_all(name='td')
                    for i in [1, 2, 5, 9]:
                        ip_list.append(proxy_list[i].string)
                    print('[+] Crawled record: {} saved'.format(ip_list))
                    fp.write(str(ip_list) + '\n')
                    ip_list.clear()
            except Exception:
                pass

def StartThread(count):
    queue = Queue()
    threads = []
    for item in range(1, int(count) + 1):
        url = 'https://www.xicidaili.com/nn/{}'.format(item)
        queue.put(url)
        print('[+] Generated crawl URL {}'.format(url))
    for item in range(count):
        threads.append(AgentSpider(queue))
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Conversion function
def ConversionAgentIP(FileName):
    result = []
    fp = open(FileName, 'r')
    data = fp.readlines()
    for item in data:
        dic = {}
        read_line = eval(item.replace('\n', ''))
        Protocol = read_line[2].lower()
        if Protocol == 'http':
            dic[Protocol] = 'http://' + read_line[0] + ':' + read_line[1]
        else:
            dic[Protocol] = 'https://' + read_line[0] + ':' + read_line[1]
        result.append(dic)
    return result

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-p', '--page', dest='page', help='number of pages to crawl')
    parser.add_argument('-f', '--file', dest='file', help='convert crawl results (e.g. SpiderAddr.json) to proxy format')
    args = parser.parse_args()
    if args.page:
        StartThread(int(args.page))
    elif args.file:
        dic = ConversionAgentIP(args.file)
        for item in dic:
            print(item)
    else:
        parser.print_help()
```
That concludes the sample code for crawling Xici proxies with Python multithreading.
