
Scrapy 1.1.0rc1 Released, with Python 3 Support

Published: 2016-02-05 09:21:06 | Source: 红联 | Author: baihuo
Scrapy 1.1.0rc1 has been released. The changes in 1.1.0 are as follows:

Scrapy 1.1 has beta Python 3 support (requires Twisted >= 15.5). See :ref:`news_betapy3` for more details and some limitations.

Hot new features:

Item loaders now support nested loaders (:issue:`1467`).

FormRequest.from_response improvements (:issue:`1382`, :issue:`1137`).

Added setting :setting:`AUTOTHROTTLE_TARGET_CONCURRENCY` and improved AutoThrottle docs (:issue:`1324`).
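Enabling the new setting might look like the following in a project's settings.py (the values shown are illustrative, not recommendations):

```python
# settings.py -- example values for illustration only
AUTOTHROTTLE_ENABLED = True
# Aim for an average of roughly 2 requests being processed in
# parallel against each remote site; AutoThrottle adjusts delays
# dynamically to approach this target.
AUTOTHROTTLE_TARGET_CONCURRENCY = 2.0
```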

Added response.text to get body as unicode (:issue:`1730`).
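Conceptually, response.text is the raw response.body bytes decoded with the response's encoding. A rough plain-Python illustration of the difference (not Scrapy's actual implementation):

```python
# What a Scrapy response roughly holds:
body = "<p>café</p>".encode("utf-8")  # response.body is bytes
encoding = "utf-8"                    # response.encoding (detected/declared)

# response.text gives you the decoded, unicode version:
text = body.decode(encoding)

print(type(body).__name__)   # bytes
print(type(text).__name__)   # str
print(text)                  # <p>café</p>
```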

Anonymous S3 connections (:issue:`1358`).

Deferreds in downloader middlewares (:issue:`1473`). This enables better robots.txt handling (:issue:`1471`).

HTTP caching now follows RFC2616 more closely, added settings :setting:`HTTPCACHE_ALWAYS_STORE` and :setting:`HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS` (:issue:`1151`).
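In settings.py, the two new cache settings might be used like this (values shown are examples, not defaults):

```python
# settings.py -- example cache configuration for illustration
HTTPCACHE_ENABLED = True
# Store responses in the cache even when response headers
# (e.g. Cache-Control: no-store) say not to.
HTTPCACHE_ALWAYS_STORE = True
# Ignore the listed Cache-Control directives in responses.
HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS = ["no-cache"]
```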

Selectors were extracted to the parsel library (:issue:`1409`). This means you can use Scrapy Selectors without Scrapy and also upgrade the selectors engine without needing to upgrade Scrapy.

These bug fixes may require your attention:

Don't retry bad requests (HTTP 400) by default (:issue:`1289`). If you need the old behavior, add 400 to :setting:`RETRY_HTTP_CODES`.
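Restoring the old behavior is a one-line settings change; a sketch (the list shown assumes the default retry codes of this release):

```python
# settings.py -- opt back in to retrying HTTP 400 responses.
# The first five codes are assumed to be the defaults; 400 is
# the code being added back.
RETRY_ENABLED = True
RETRY_HTTP_CODES = [500, 502, 503, 504, 408, 400]
```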

Fix shell files argument handling (:issue:`1710`, :issue:`1550`). If you try scrapy shell index.html, it will try to load the URL http://index.html; use scrapy shell ./index.html to load a local file.

Robots.txt compliance is now enabled by default for newly-created projects (:issue:`1724`). Scrapy will also wait for robots.txt to be downloaded before proceeding with the crawl (:issue:`1735`). If you want to disable this behavior, update :setting:`ROBOTSTXT_OBEY` in settings.py file after creating a new project.
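Opting out is a single setting in the newly generated project's settings.py:

```python
# settings.py -- disable robots.txt compliance (enabled by
# default in projects created with Scrapy 1.1).
ROBOTSTXT_OBEY = False
```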

Exporters now work on unicode, instead of bytes by default (:issue:`1080`). If you use PythonItemExporter, you may want to update your code to disable binary mode which is now deprecated.

Accept XML node names containing dots as valid (:issue:`1533`).

Scrapy is an asynchronous crawling framework built on Twisted and implemented in pure Python. Users only need to implement a few custom modules to easily build a crawler that scrapes web pages and images, which makes it very convenient.

Release notes: https://github.com/scrapy/scrapy/blob/master/docs/news.rst

Download: https://github.com/scrapy/scrapy/tree/1.1.0rc1

From: 开源中国社区 (OSChina)