
Using wget to fetch a web page into a local file, then analyzing the file with grep

Published: 2016-12-09 09:46:19  Source: linux website  Author: iw1210
1. Fetch a web page to a local file
Pick a page, for example http://www.oschina.net/code/snippet_1391852_26067, and fetch it with wget:
$ wget http://www.oschina.net/code/snippet_1391852_26067
--2016-12-09 9:20:48--  http://www.oschina.net/code/snippet_1391852_26067
Resolving www.oschina.net (www.oschina.net)... 60.174.156.100
Connecting to www.oschina.net (www.oschina.net)|60.174.156.100|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘snippet_1391852_26067’
snippet_1391852_260     [ <=>                  ]  44.19K  --.-KB/s   in 0.1s   
2016-12-09 9:20:49 (314 KB/s) - ‘snippet_1391852_26067’ saved [45251]
Check the result:
$ ls
snippet_1391852_26067
The page has been saved locally.
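The fetch step can be sketched in a self-contained way by serving a throwaway file with Python's http.server instead of hitting the real site; the port 8123 and the file names below are arbitrary placeholders. The -O flag names the output file explicitly and -nv keeps the progress output short.

```shell
# Create a stand-in page and serve it locally (simulates the remote site).
printf '<html><body>return return</body></html>\n' > index.html
python3 -m http.server 8123 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# Fetch it with an explicit output name (-O) and terse logging (-nv).
wget -nv -O page.html http://127.0.0.1:8123/index.html

# Stop the throwaway server.
kill "$SERVER_PID"

ls page.html
```

Without -O, wget derives the file name from the URL, which is how the transcript above ended up with a file called snippet_1391852_26067.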
 
2. Search for a word in the file
For example, search for return:
$ grep -o return snippet_1391852_26067 
return
return
return
return
return
return
return
return
return
return
return
return
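Because grep -o prints each match on its own line, piping it into wc -l counts the total number of occurrences rather than the number of matching lines. A minimal sketch, using a small stand-in file (sample.html is a placeholder for the downloaded snippet_1391852_26067):

```shell
# Stand-in for the downloaded page: two matches on line 1, one on line 3.
printf 'return x; return y;\nno match here\nreturn z;\n' > sample.html

# -o prints every occurrence of the pattern on its own line.
grep -o return sample.html

# Counting those lines gives the total number of occurrences.
grep -o return sample.html | wc -l   # → 3
```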
 
3. Count a word in the file
For example, count return:
$ grep -c return snippet_1391852_26067 
12
Done.
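One caveat worth knowing: grep -c counts *lines* that contain the word, not occurrences of the word. The two numbers agree for the file above, but they diverge as soon as one line holds several matches. A small sketch of the difference (demo.html is a placeholder file):

```shell
# Line 1 contains "return" twice, line 2 once.
printf 'return a; return b;\nreturn c;\n' > demo.html

grep -c return demo.html             # matching lines  → 2
grep -o return demo.html | wc -l     # occurrences    → 3
```

So for a true occurrence count, prefer the grep -o | wc -l pipeline from step 2.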
 
Permanent link to this article: http://www.linuxdiyf.com/linux/26728.html