Implementing a web crawler with Scrapy in Python 3
Environment
Windows 10 (64-bit), Python 3.5, Cygwin, Scrapy 1.1
Steps
All work is done in Cygwin.
pip-windows is an alias for the Windows-side Python's pip.
Install Scrapy
$ pip-windows install scrapy
Create the project skeleton
Create a project named crawl_test.
$ scrapy startproject crawl_test
The following files are created.
$ find crawl_test/
crawl_test/
crawl_test/crawl_test
crawl_test/crawl_test/items.py
crawl_test/crawl_test/pipelines.py
crawl_test/crawl_test/settings.py
crawl_test/crawl_test/spiders
crawl_test/crawl_test/spiders/__init__.py
crawl_test/crawl_test/spiders/__pycache__
crawl_test/crawl_test/__init__.py
crawl_test/crawl_test/__pycache__
crawl_test/scrapy.cfg
Create a spider
The first argument to scrapy genspider is the Python file name (which becomes the spider name), and the second is the domain that serves as the crawl starting point.
$ cd crawl_test/
$ scrapy genspider yahoo_news_spider news.yahoo.co.jp
The generated Python file:
$ cat crawl_test/spiders/yahoo_news_spider.py
# -*- coding: utf-8 -*-
import scrapy


class YahooNewsSpiderSpider(scrapy.Spider):
    name = "yahoo_news_spider"
    allowed_domains = ["news.yahoo.co.jp/"]
    start_urls = (
        'http://www.news.yahoo.co.jp//',
    )

    def parse(self, response):
        pass
Apparently it won't run as generated, so edit it.
Implement the parsing logic inside the parse method as appropriate.
# -*- coding: utf-8 -*-
import scrapy


class YahooNewsSpiderSpider(scrapy.Spider):
    name = "yahoo_news_spider"
    allowed_domains = ["news.yahoo.co.jp/"]
    start_urls = (
        'http://www.news.yahoo.co.jp//',
    )
    custom_settings = {
        "DOWNLOAD_DELAY": 0.5,  # crawl at 0.5-second intervals
    }

    def parse(self, response):
        title = response.xpath("//title/text()").extract()
        body = response.xpath("//body/text()").extract()
        print("--------------------------------")
        print(title, body)
        print("--------------------------------")
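Incidentally, before hard-coding XPath expressions into parse, it is convenient to try them out in Scrapy's interactive shell (a standard Scrapy command; the shell's output is omitted here):

$ scrapy shell 'http://news.yahoo.co.jp/'
>>> response.xpath("//title/text()").extract()  # test a selector against the live response
>>> response.xpath("//a/@href").extract()       # e.g. list every link URL on the page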
Running it raised an error.
$ scrapy crawl yahoo_news_spider
2016-08-27 06:31:07 [scrapy] INFO: Scrapy 1.1.2 started (bot: crawl_test)
2016-08-27 06:31:07 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'crawl_test.spiders', 'SPIDER_MODULES': ['crawl_test.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'crawl_test'}
2016-08-27 06:31:07 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.logstats.LogStats']
Unhandled error in Deferred:
2016-08-27 06:31:07 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
  File "c:\python3.5\lib\site-packages\scrapy\commands\crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "c:\python3.5\lib\site-packages\scrapy\crawler.py", line 163, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "c:\python3.5\lib\site-packages\scrapy\crawler.py", line 167, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "c:\python3.5\lib\site-packages\twisted\internet\defer.py", line 1274, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "c:\python3.5\lib\site-packages\twisted\internet\defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "c:\python3.5\lib\site-packages\scrapy\crawler.py", line 72, in crawl
    self.engine = self._create_engine()
  File "c:\python3.5\lib\site-packages\scrapy\crawler.py", line 97, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "c:\python3.5\lib\site-packages\scrapy\core\engine.py", line 68, in __init__
    self.downloader = downloader_cls(crawler)
  File "c:\python3.5\lib\site-packages\scrapy\core\downloader\__init__.py", line 88, in __init__
    self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
  File "c:\python3.5\lib\site-packages\scrapy\middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "c:\python3.5\lib\site-packages\scrapy\middleware.py", line 34, in from_settings
    mwcls = load_object(clspath)
  File "c:\python3.5\lib\site-packages\scrapy\utils\misc.py", line 44, in load_object
    mod = import_module(module)
  File "c:\python3.5\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 986, in _gcd_import
  File "<frozen importlib._bootstrap>", line 969, in _find_and_load
  File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 665, in exec_module
  File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
  File "c:\python3.5\lib\site-packages\scrapy\downloadermiddlewares\retry.py", line 23, in <module>
    from scrapy.xlib.tx import ResponseFailed
  File "c:\python3.5\lib\site-packages\scrapy\xlib\tx\__init__.py", line 3, in <module>
    from twisted.web import client
  File "c:\python3.5\lib\site-packages\twisted\web\client.py", line 42, in <module>
    from twisted.internet.endpoints import TCP4ClientEndpoint, SSL4ClientEndpoint
  File "c:\python3.5\lib\site-packages\twisted\internet\endpoints.py", line 34, in <module>
    from twisted.internet.stdio import StandardIO, PipeAddress
  File "c:\python3.5\lib\site-packages\twisted\internet\stdio.py", line 30, in <module>
    from twisted.internet import _win32stdio
builtins.ImportError: cannot import name '_win32stdio'
Maybe this is what's needed?
$ pip-windows install pypiwin32
That wasn't it...
Download pypiwin32-219-cp35-none-win_amd64.whl from https://pypi.python.org/pypi/pypiwin32.
It turned out to be already installed.
$ pip-windows install pypiwin32-219-cp35-none-win_amd64.whl
Requirement already satisfied (use --upgrade to upgrade): pypiwin32==219 from file:///C:/cygwin/tmp/pypiwin32-219-cp35-none-win_amd64.whl in c:\python3.5\lib\site-packages
Maybe this one?
$ pip-windows install twisted-win
It worked!
$ scrapy crawl yahoo_news_spider
But then a different error occurred.
The domain www.news.yahoo.co.jp cannot be resolved...
2016-08-27 06:45:11 [scrapy] ERROR: Error downloading <GET http://www.news.yahoo.co.jp/robots.txt>: DNS lookup failed: address 'www.news.yahoo.co.jp' not found: [Errno 11001] getaddrinfo failed.
Traceback (most recent call last):
  File "c:\python3.5\lib\site-packages\twisted\internet\defer.py", line 1126, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "c:\python3.5\lib\site-packages\twisted\python\failure.py", line 389, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "c:\python3.5\lib\site-packages\scrapy\core\downloader\middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
ping fails too.
$ ping www.news.yahoo.co.jp
ping: unknown host www.news.yahoo.co.jp
It seems the host www.news.yahoo.co.jp doesn't exist in the first place.
Checking the source, the start URL contained www, so remove it.
yahoo_news_spider.py
start_urls = (
    'http://news.yahoo.co.jp//',
)
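As an aside, the generated allowed_domains also carries a stray trailing slash; allowed_domains is supposed to hold bare domain names, not URLs, so a cleaned-up spider header would presumably look like this (for this single-page crawl the behavior is the same either way):

allowed_domains = ["news.yahoo.co.jp"]
start_urls = (
    'http://news.yahoo.co.jp/',
)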
Run it again.
This time it seems to have run without errors.
$ scrapy crawl yahoo_news_spider
2016-08-27 07:06:06 [scrapy] INFO: Scrapy 1.1.2 started (bot: crawl_test)
2016-08-27 07:06:06 [scrapy] INFO: Overridden settings: {'SPIDER_MODULES': ['crawl_test.spiders'], 'NEWSPIDER_MODULE': 'crawl_test.spiders', 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'crawl_test'}
2016-08-27 07:06:06 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.logstats.LogStats']
2016-08-27 07:06:06 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-08-27 07:06:06 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-08-27 07:06:06 [scrapy] INFO: Enabled item pipelines:
[]
2016-08-27 07:06:06 [scrapy] INFO: Spider opened
2016-08-27 07:06:06 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-08-27 07:06:06 [scrapy] DEBUG: Crawled (200) <GET http://news.yahoo.co.jp/robots.txt> (referer: None)
2016-08-27 07:06:07 [scrapy] DEBUG: Crawled (200) <GET http://news.yahoo.co.jp//> (referer: None)
2016-08-27 07:06:07 [scrapy] INFO: Closing spider (finished)
2016-08-27 07:06:07 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 439,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 10625,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 8, 27, 6, 6, 7, 591372),
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 8, 27, 6, 6, 6, 565903)}
2016-08-27 07:06:07 [scrapy] INFO: Spider closed (finished)
--------------------------------
['Yahoo!'] ['\n\n', '\n\n\n', '\n', '\n', '\n', '\n\n\n', '\n', '\n', '\n', '\n', '\n\n\n']
--------------------------------
Apparently you customize the parse method to write results out to files and so on.
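Note that //body/text() only matches text nodes that are direct children of <body>, which is why the output above is almost all newlines. As a minimal sketch of yielding items instead of printing (the //a/@href selector is an illustrative assumption, not taken from the actual page structure):

    def parse(self, response):
        # Yield one item per link; Scrapy treats yielded dicts as scraped items.
        for url in response.xpath("//a/@href").extract():
            yield {"url": url}

The built-in feed export can then write the items to a file:

$ scrapy crawl yahoo_news_spider -o result.json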
For details, the official site is probably the best place to look:
http://doc.scrapy.org/en/latest/topics/spiders.html