How to Achieve 1,000 Web Requests per Second with aiohttp
Introduction
In today's big-data era, an efficient web scraper is a key tool for data collection. Traditional synchronous scrapers (built on the **requests** library, for example) are limited by blocking I/O and struggle to sustain high request concurrency. Python's **aiohttp** library, combined with **asyncio**, makes it easy to build an asynchronous, high-concurrency scraper that reaches a thousand or more requests per second.
This article walks through building a high-performance scraper with **aiohttp**, covering:
- How aiohttp works and why it is fast
- Building an asynchronous scraper framework
- Optimizing concurrent requests (connection pooling, timeout control)
- Rotating proxy IPs and User-Agents (to cope with anti-scraping measures)
- Performance testing and tuning (reaching 1000+ QPS)
Finally, we provide a complete code example and benchmark it to show how to genuinely reach a thousand requests per second.
1. How aiohttp Works and Its Advantages
1.1 Synchronous vs. Asynchronous Scrapers
- Synchronous scrapers (e.g. **requests**): each request must wait for the server's response before the next one can start; blocking I/O keeps throughput low.
- Asynchronous scrapers (**aiohttp** + **asyncio**): an event loop provides non-blocking I/O, so many requests are in flight at once, dramatically raising concurrency.
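To make the contrast concrete, here is a minimal sketch (the URL list and request count are placeholders, not benchmark data) that runs the same workload both ways:

```python
import time
import asyncio
import aiohttp
import requests

URLS = ["https://example.com"] * 20  # placeholder workload

def fetch_sync():
    # Synchronous: each request blocks until the previous one finishes.
    return [requests.get(url).text for url in URLS]

async def fetch_async():
    # Asynchronous: all requests are in flight at once on a single event loop.
    async with aiohttp.ClientSession() as session:
        async def one(url):
            async with session.get(url) as resp:
                return await resp.text()
        return await asyncio.gather(*(one(url) for url in URLS))

start = time.time()
fetch_sync()
print(f"sync:  {time.time() - start:.2f}s")

start = time.time()
asyncio.run(fetch_async())
print(f"async: {time.time() - start:.2f}s")
```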
1.2 Core Components of aiohttp
- **ClientSession**: manages the HTTP connection pool and reuses TCP connections, cutting handshake overhead.
- **async/await**: Python 3.5+ syntax for asynchronous code, keeping it concise.
- **asyncio.gather()**: runs multiple coroutine tasks concurrently.
2. Building an Asynchronous Scraper Framework
2.1 Installing Dependencies
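The examples below use **aiohttp** plus two helper libraries that appear later (**beautifulsoup4** for HTML parsing and **fake-useragent** for User-Agent rotation); install them with pip:

```bash
pip install aiohttp beautifulsoup4 fake-useragent
```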
2.2 A Basic Scraper Example
```python
import aiohttp
import asyncio
from bs4 import BeautifulSoup

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def parse(url):
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, url)
        soup = BeautifulSoup(html, 'html.parser')
        title = soup.title.string
        print(f"URL: {url} | Title: {title}")

async def main(urls):
    tasks = [parse(url) for url in urls]
    await asyncio.gather(*tasks)

if __name__ == "__main__":
    urls = [
        "https://example.com",
        "https://python.org",
        "https://aiohttp.readthedocs.io",
    ]
    asyncio.run(main(urls))
```
Code walkthrough:
- **fetch()**: sends the HTTP request and returns the HTML.
- **parse()**: parses the HTML and extracts the page title.
- **main()**: uses **asyncio.gather()** to run all tasks concurrently.
3. Optimizing Concurrent Requests (Reaching 1000+ QPS)
3.1 Using a Connection Pool (TCP Keep-Alive)
By default, **aiohttp** reuses TCP connections automatically, but we can tune the pool by hand:
```python
conn = aiohttp.TCPConnector(limit=100, force_close=False)  # at most 100 connections
async with aiohttp.ClientSession(connector=conn) as session:
    ...  # issue requests here
```
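Beyond the global `limit`, `TCPConnector` exposes a few more knobs worth tuning when most traffic targets a handful of hosts. A sketch with illustrative values (not benchmarked defaults):

```python
conn = aiohttp.TCPConnector(
    limit=100,          # total connections across all hosts
    limit_per_host=50,  # cap per single host, so one site can't starve the pool
    ttl_dns_cache=300,  # cache DNS lookups for 5 minutes to skip repeated resolution
    force_close=False,  # keep connections alive for reuse
)
```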
3.2 Limiting Concurrency (Semaphore)
Keep the request volume in check so the target site doesn't ban you for flooding it:
```python
semaphore = asyncio.Semaphore(100)  # cap concurrency at 100

async def fetch(session, url):
    async with semaphore:
        async with session.get(url) as response:
            return await response.text()
```
3.3 Setting Timeouts
Prevent a few stalled requests from hanging the whole scraper:
```python
timeout = aiohttp.ClientTimeout(total=10)  # 10-second overall timeout
async with session.get(url, timeout=timeout) as response:
    ...  # handle the response
```
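When a timeout fires, aiohttp raises `asyncio.TimeoutError`, so in practice you wrap the request and decide whether to skip or retry; the timeout can also be set once on the session instead of per request. A minimal sketch:

```python
import asyncio
import aiohttp

timeout = aiohttp.ClientTimeout(total=10)

async def fetch_safe(session, url):
    try:
        async with session.get(url) as response:
            return await response.text()
    except asyncio.TimeoutError:
        print(f"timeout: {url}")  # skip (or requeue) slow URLs instead of hanging
        return None

async def main(urls):
    # A session-level timeout applies to every request made through the session.
    async with aiohttp.ClientSession(timeout=timeout) as session:
        return await asyncio.gather(*(fetch_safe(session, url) for url in urls))
```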
4. Rotating Proxy IPs and User-Agents (Coping with Anti-Scraping)
4.1 Random User-Agent
```python
from fake_useragent import UserAgent

ua = UserAgent()

async def fetch(session, url):
    headers = {"User-Agent": ua.random}  # pick a fresh random User-Agent per request
    async with session.get(url, headers=headers) as response:
        return await response.text()
```
4.2 Proxy IP Pool
```python
import aiohttp
import asyncio
from fake_useragent import UserAgent

# Proxy configuration
proxyHost = "www.16yun.cn"
proxyPort = "5445"
proxyUser = "16QMSOML"
proxyPass = "280651"

# Build the authenticated proxy URL
proxy_auth = aiohttp.BasicAuth(proxyUser, proxyPass)
proxy_url = f"http://{proxyHost}:{proxyPort}"

ua = UserAgent()
semaphore = asyncio.Semaphore(100)  # cap concurrency

async def fetch(session, url):
    headers = {"User-Agent": ua.random}
    timeout = aiohttp.ClientTimeout(total=10)
    async with semaphore:
        async with session.get(
            url,
            headers=headers,
            timeout=timeout,
            proxy=proxy_url,
            proxy_auth=proxy_auth,
        ) as response:
            return await response.text()

async def main(urls):
    conn = aiohttp.TCPConnector(limit=100, force_close=False)
    async with aiohttp.ClientSession(connector=conn) as session:
        tasks = [fetch(session, url) for url in urls]
        await asyncio.gather(*tasks)

if __name__ == "__main__":
    urls = ["https://example.com"] * 1000
    asyncio.run(main(urls))
```
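The code above routes everything through a single authenticated gateway. To rotate across a true pool of proxies, you can pick one per request; a minimal sketch, assuming a hypothetical `PROXIES` list of (url, auth) pairs that you maintain yourself:

```python
import random
import aiohttp

# Hypothetical pool: fill in your own proxy endpoints and credentials.
PROXIES = [
    ("http://proxy1.example.com:8080", aiohttp.BasicAuth("user1", "pass1")),
    ("http://proxy2.example.com:8080", aiohttp.BasicAuth("user2", "pass2")),
]

async def fetch(session, url):
    proxy_url, proxy_auth = random.choice(PROXIES)  # rotate per request
    async with session.get(url, proxy=proxy_url, proxy_auth=proxy_auth) as response:
        return await response.text()
```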
5. Performance Testing (Reaching 1000+ QPS)
5.1 Benchmark Code
```python
import time

async def benchmark():
    urls = ["https://example.com"] * 1000  # 1000 test requests
    start = time.time()
    await main(urls)  # main() from the complete example in 5.2
    end = time.time()
    qps = len(urls) / (end - start)
    print(f"QPS: {qps:.2f}")

asyncio.run(benchmark())
```
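One caveat: with `asyncio.gather()` a single failed request raises and aborts the whole batch, skewing the measurement. Passing `return_exceptions=True` lets the benchmark finish and report how many requests actually succeeded; a sketch reusing the `fetch` coroutine and connector setup from the complete example:

```python
import time
import asyncio
import aiohttp

async def benchmark_safe(urls):
    conn = aiohttp.TCPConnector(limit=100, force_close=False)
    async with aiohttp.ClientSession(connector=conn) as session:
        start = time.time()
        results = await asyncio.gather(
            *(fetch(session, url) for url in urls),
            return_exceptions=True,  # collect errors instead of aborting
        )
        elapsed = time.time() - start
    ok = sum(1 for r in results if not isinstance(r, Exception))
    print(f"QPS: {len(urls) / elapsed:.2f} ({ok}/{len(urls)} succeeded)")
```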
5.2 Complete Optimized Code
```python
import aiohttp
import asyncio
from fake_useragent import UserAgent

ua = UserAgent()
semaphore = asyncio.Semaphore(100)  # cap concurrency

async def fetch(session, url):
    headers = {"User-Agent": ua.random}
    timeout = aiohttp.ClientTimeout(total=10)
    async with semaphore:
        async with session.get(url, headers=headers, timeout=timeout) as response:
            return await response.text()

async def main(urls):
    conn = aiohttp.TCPConnector(limit=100, force_close=False)
    async with aiohttp.ClientSession(connector=conn) as session:
        tasks = [fetch(session, url) for url in urls]
        await asyncio.gather(*tasks)

if __name__ == "__main__":
    urls = ["https://example.com"] * 1000
    asyncio.run(main(urls))
```
5.3 Test Results
- Unoptimized (single-threaded **requests**): ~10 QPS
- Optimized (**aiohttp** + 100-way concurrency): ~1200 QPS
Conclusion
With **aiohttp** and **asyncio**, we can easily build a high-concurrency asynchronous scraper that fetches more than a thousand pages per second. The key optimizations are:
✅ Manage the connection pool with **ClientSession**
✅ Cap concurrency with a **Semaphore**
✅ Rotate proxy IPs and User-Agents to avoid bans
✅ Set timeouts so stalled requests can't hang the scraper