

Collecting spider statistics with a Scrapy extension middleware


Using Scrapy extension middleware (extensions)

1.爬蟲統(tǒng)計擴展中間件

In the middle of one crawl, the requirements side suddenly came up with a new request: they wanted to know how many items the spider for a given source collected each day.

A first look shows that Scrapy's log output already contains the information we want:

item_scraped_count: the total number of items scraped over the whole run

finish_time: the time the spider finished

elapsed_time_seconds: how long the spider ran, in seconds
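
These fields appear in the stats summary Scrapy dumps when a run finishes; an excerpt looks roughly like this (the numbers are made up for illustration):

    [scrapy.statscollectors] INFO: Dumping Scrapy stats:
    {'elapsed_time_seconds': 123.333333,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2023, 1, 1, 8, 2, 3, 456789),
     'item_scraped_count': 1000,
     'response_received_count': 1050,
     'start_time': datetime.datetime(2023, 1, 1, 8, 0, 0, 123456)}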

In the Scrapy source code we can find the logstats.py file:

import logging

from twisted.internet import task

from scrapy.exceptions import NotConfigured
from scrapy import signals

logger = logging.getLogger(__name__)


class LogStats:
    """Log basic scraping stats periodically"""

    def __init__(self, stats, interval=60.0):
        self.stats = stats
        self.interval = interval
        self.multiplier = 60.0 / self.interval
        self.task = None

    @classmethod
    def from_crawler(cls, crawler):
        interval = crawler.settings.getfloat('LOGSTATS_INTERVAL')
        if not interval:
            raise NotConfigured
        o = cls(crawler.stats, interval)
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
        return o

    def spider_opened(self, spider):
        self.pagesprev = 0
        self.itemsprev = 0

        self.task = task.LoopingCall(self.log, spider)
        self.task.start(self.interval)

    def log(self, spider):
        items = self.stats.get_value('item_scraped_count', 0)
        pages = self.stats.get_value('response_received_count', 0)
        irate = (items - self.itemsprev) * self.multiplier
        prate = (pages - self.pagesprev) * self.multiplier
        self.pagesprev, self.itemsprev = pages, items

        msg = ("Crawled %(pages)d pages (at %(pagerate)d pages/min), "
               "scraped %(items)d items (at %(itemrate)d items/min)")
        log_args = {'pages': pages, 'pagerate': prate,
                    'items': items, 'itemrate': irate}
        logger.info(msg, log_args, extra={'spider': spider})

    def spider_closed(self, spider, reason):
        if self.task and self.task.running:
            self.task.stop()

This is the code responsible for the spider's periodic log output.
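
With the default LOGSTATS_INTERVAL of 60 seconds, that log() call produces one line per minute of the form (numbers illustrative):

    [scrapy.extensions.logstats] INFO: Crawled 120 pages (at 60 pages/min), scraped 100 items (at 50 items/min)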

Note also that corestats.py is where fields such as elapsed_time_seconds, finish_time and finish_reason get set. So if we want statistics about a spider run, all we need is a new class that inherits from CoreStats.
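
For reference, the relevant part of corestats.py looks roughly like this (condensed from the Scrapy source; exact details vary between versions):

    from datetime import datetime

    from scrapy import signals


    class CoreStats:

        def __init__(self, stats):
            self.stats = stats
            self.start_time = None

        @classmethod
        def from_crawler(cls, crawler):
            o = cls(crawler.stats)
            crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
            crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
            crawler.signals.connect(o.item_scraped, signal=signals.item_scraped)
            return o

        def spider_opened(self, spider):
            self.start_time = datetime.utcnow()
            self.stats.set_value('start_time', self.start_time, spider=spider)

        def spider_closed(self, spider, reason):
            finish_time = datetime.utcnow()
            elapsed = (finish_time - self.start_time).total_seconds()
            self.stats.set_value('elapsed_time_seconds', elapsed, spider=spider)
            self.stats.set_value('finish_time', finish_time, spider=spider)
            self.stats.set_value('finish_reason', reason, spider=spider)

        def item_scraped(self, item, spider):
            self.stats.inc_value('item_scraped_count', spider=spider)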

2. Implementing our own component to collect the statistics

I define a custom extension component named SpiderStats to collect the spider statistics and write them to MongoDB. The code is as follows:

"""

author:tyj

"""

import datetime

import os

from pymongo import MongoClient, ReadPreference

from scrapy import crawler

from scrapy.utils.conf import get_config

from scrapy.extensions.corestats import CoreStats

from scrapy.extensions.logstats import LogStats

import logging

logger = logging.getLogger(__name__)

class SpiderStats(CoreStats):

batch = None

sources = None

def item_scraped(self, item, spider):

batch = item.get("batch")

if batch:

self.batch = batch

if item.get("sources"):

self.sources = item.get("sources")

def spider_closed(self, spider):

items = self.stats.get_value('item_scraped_count', 0)

finish_time = self.stats.get_value('finish_time') + datetime.timedelta(hours=8)

finish_time = finish_time.strftime('%Y-%m-%d %H:%M:%S')

start_time = self.stats.get_value('start_time') + datetime.timedelta(hours=8)

start_time = start_time.strftime('%Y-%m-%d %H:%M:%S')

result_ = {}

result_["total_items"] = items

result_["start_time"] = start_time

result_["finish_time"] = finish_time

result_["batch"] = self.batch

result_["sources"] = self.sources

print("items:", items, "start_time:", start_time, "finish_time:", finish_time, self.batch,self.sources)

section = "mongo_cfg_prod"

MONGO_HOST = get_config().get(section=section,option='MONGO_HOST',fallback='')

MONGO_DB = get_config().get(section=section,option='MONGO_DB',fallback='')

MONGO_USER = get_config().get(section=section,option='MONGO_USER',fallback='')

MONGO_PSW = get_config().get(section=section,option='MONGO_PSW',fallback='')

AUTH_SOURCE = get_config().get(section=section,option='AUTH_SOURCE',fallback='')

mongo_url = 'mongodb://{0}:{1}@{2}/?authSource={3}&replicaSet=rs01'.format(MONGO_USER, MONGO_PSW,

MONGO_HOST,

AUTH_SOURCE)

client = MongoClient(mongo_url)

db = client.get_database(MONGO_DB, read_preference=ReadPreference.SECONDARY_PREFERRED)

coll = db["ware_detail_price_statistic"]

coll.insert(result_)

client.close()
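
Two pieces of wiring are needed before this runs: the extension must be enabled in settings.py (the module path below is hypothetical, adjust it to your project), and scrapy.cfg needs the [mongo_cfg_prod] section the code reads via get_config() (values illustrative):

    # settings.py -- module path is hypothetical
    EXTENSIONS = {
        "myproject.extensions.SpiderStats": 500,
    }

    # scrapy.cfg -- option names match what the code reads; values illustrative
    [mongo_cfg_prod]
    MONGO_HOST = mongo1.example.com:27017,mongo2.example.com:27017
    MONGO_DB = crawler_stats
    MONGO_USER = stats_writer
    MONGO_PSW = secret
    AUTH_SOURCE = admin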

You can add extension middleware along these lines to fit your own business scenario.
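
As a closing sketch, the original requirement (daily item counts per source) can then be answered with an aggregation over the documents SpiderStats writes; the connection details below are illustrative:

    from pymongo import MongoClient

    # Connection string and database name are illustrative.
    client = MongoClient("mongodb://localhost:27017")
    coll = client["crawler_stats"]["ware_detail_price_statistic"]

    # finish_time is stored as 'YYYY-MM-DD HH:MM:SS', so its first ten
    # characters are the date; group by (source, day) and sum the items.
    pipeline = [
        {"$group": {
            "_id": {"sources": "$sources", "day": {"$substr": ["$finish_time", 0, 10]}},
            "items": {"$sum": "$total_items"},
        }},
        {"$sort": {"_id.day": 1}},
    ]
    for row in coll.aggregate(pipeline):
        print(row["_id"]["day"], row["_id"]["sources"], row["items"])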
