Error 1. Caused by the Linux limit on the maximum number of open files
	Error message:
	java.io.IOException: background merge hit exception: _0:C500->_0 _1:C500->_0 _2:C500->_..... [optimize]
	at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2310)
	at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2249)
	at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2219)
	at org.apache.nutch.indexer.lucene.LuceneWriter.close(LuceneWriter.java:237)
	at org.apache.nutch.indexer.IndexerOutputFormat$1.close(IndexerOutputFormat.java:48)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:474)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
	at org.apache.hadoop.mapred.Child.main(Child.java:170)
	Caused by: java.io.FileNotFoundException: /var/lib/crawlzilla/nutch-crawler/mapred/local/index/_682243155/_6a.frq (Too many open files)
	at java.io.RandomAccessFile.open(Native Method)
	at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
	at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:76)
	at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:97)
	at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:87)
	at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:67)
	at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:129)
	at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:576)
	at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:609)
	at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4239)
	at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3917)
	at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:231)
	at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:288)
	Cause and solution:
	This exception occurs when a Java program runs under Unix/Linux and performs a large number of file operations.
	Unix/Linux limits the number of file handles a process may hold open; you can check the current limit with ulimit -n, and the default is 1024. Because our Java program opens close to (or more than) 1024 files concurrently while reading and writing heavily, the exception above is thrown. The fix is to raise the file-handle limit to match actual demand.
	Command:
	ulimit -n 32768
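	Note that ulimit -n only raises the limit for the current shell session. To make it persistent across logins, a common approach on PAM-based distributions is to add nofile entries to /etc/security/limits.conf for the account that runs the Hadoop/Nutch processes (the user name "hadoop" below is a placeholder; substitute your own):
	# /etc/security/limits.conf
	hadoop  soft  nofile  32768
	hadoop  hard  nofile  32768
	After logging in again, verify with: ulimit -n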
	
Error 2. Insufficient disk space
	Error message:
	Error: java.io.IOException: No space left on device
	at java.io.FileOutputStream.writeBytes(Native Method)
	at java.io.FileOutputStream.write(FileOutputStream.java:260)
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:190)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:49)
	at java.io.DataOutputStream.write(DataOutputStream.java:90)
	at org.apache.hadoop.mapred.IFileOutputStream.write(IFileOutputStream.java:84)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:49)
	at java.io.DataOutputStream.write(DataOutputStream.java:90)
	at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:218)
	at org.apache.hadoop.mapred.Merger.writeFile(Merger.java:157)
	at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$LocalFSMerger.run(ReduceTask.java:2454)
	Cause and solution:
	When the disk runs out of space, Nutch waits for space to become available; in a distributed deployment you can add another worker node.
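	To confirm that disk space really is the cause, check the partitions backing Hadoop's local directories (the path below is only an example; check the hadoop.tmp.dir and mapred.local.dir values in your own configuration):
	df -h
	du -sh /tmp/hadoop-*/mapred/local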
	
Error 3. HDFS fails to start due to a namenode ID (namespaceID) conflict
	Error message:
	2010-07-21 10:12:11,987 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in
	/home/admin/joe.wangh/hadoop/data/dfs.data.dir: namenode namespaceID = 898136669; datanode namespaceID = 2127444065
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:288)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:206)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1239)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1194)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1202)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1324)
	......
	Cause and solution:
	Each namenode format creates a new namespaceID, while tmp/dfs/data still holds the ID from the previous format. Formatting clears the namenode's data but does not clear the datanode's, which makes startup fail. What you need to do is clear all directories under tmp before each format.
	Solution 1:
	Stop the Hadoop cluster, then delete the datanode's local data directory. Its location is configured in conf/hdfs-site.xml; by default it is /usr/local/hadoop-datastore/hadoop-hadoop (this is the machine's login name)/dfs/data, but on my machine it defaults to /tmp/hadoop-dev (the user name that started Hadoop)/dfs/data.
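	As commands, Solution 1 looks roughly like this (a sketch; substitute the dfs.data.dir value from your own conf/hdfs-site.xml for the example path):
	bin/stop-all.sh
	rm -rf /tmp/hadoop-dev/dfs/data
	bin/start-all.sh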
	Solution 2 (recommended):
	Stop the datanode, find <dfs.data.dir>/current/VERSION, and change the namespaceID inside it to the one reported in the error message, which in my example is 898136669.
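	As commands, Solution 2 looks roughly like this (a sketch; <dfs.data.dir> stands for the data directory from your own configuration, and the namespaceID is the one the namenode reported above):
	bin/hadoop-daemon.sh stop datanode
	sed -i 's/^namespaceID=.*/namespaceID=898136669/' <dfs.data.dir>/current/VERSION
	bin/hadoop-daemon.sh start datanode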
	
Error 4. Configuration file error
	Error message:
	[Fatal Error] hadoop-site.xml:15:7: The content of elements must consist of well-formed character data or markup.
	Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException:
	The content of elements must consist of well-formed character data or markup.
	Cause and solution:
	A stray angle bracket appears just before a </property> closing tag in one of the configuration files such as nutch-site.xml.
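	A minimal illustration of the kind of malformed XML that triggers this error (the property name and value here are made up):
	<!-- Broken: the stray '<' before </property> makes the element content ill-formed -->
	<property>
	  <name>http.agent.name</name>
	  <value>my-crawler</value><
	</property>
	<!-- Fixed -->
	<property>
	  <name>http.agent.name</name>
	  <value>my-crawler</value>
	</property>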

