<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>
	Comments on "[Original] The problem of DistributedCache not working in Hadoop 2.x"	</title>
	<atom:link href="https://www.codelast.com/%E5%8E%9F%E5%88%9B-hadoop-2-x%E7%9A%84distributedcache%E6%97%A0%E6%B3%95%E5%B7%A5%E4%BD%9C%E7%9A%84%E9%97%AE%E9%A2%98/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.codelast.com/%e5%8e%9f%e5%88%9b-hadoop-2-x%e7%9a%84distributedcache%e6%97%a0%e6%b3%95%e5%b7%a5%e4%bd%9c%e7%9a%84%e9%97%ae%e9%a2%98/</link>
	<description>The road to optimization</description>
	<lastBuildDate>Tue, 28 Apr 2020 02:09:18 +0000</lastBuildDate>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>
		Commenter: 胡军涛		</title>
		<link>https://www.codelast.com/%e5%8e%9f%e5%88%9b-hadoop-2-x%e7%9a%84distributedcache%e6%97%a0%e6%b3%95%e5%b7%a5%e4%bd%9c%e7%9a%84%e9%97%ae%e9%a2%98/comment-page-1/#comment-4096</link>

		<dc:creator><![CDATA[胡军涛]]></dc:creator>
		<pubDate>Mon, 30 Jul 2018 02:38:39 +0000</pubDate>
		<guid isPermaLink="false">http://www.codelast.com/?p=8131#comment-4096</guid>

					<description><![CDATA[Huh, why did my comment disappear?]]></description>
			<content:encoded><![CDATA[<p>Huh, why did my comment disappear?</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		Commenter: 胡军涛		</title>
		<link>https://www.codelast.com/%e5%8e%9f%e5%88%9b-hadoop-2-x%e7%9a%84distributedcache%e6%97%a0%e6%b3%95%e5%b7%a5%e4%bd%9c%e7%9a%84%e9%97%ae%e9%a2%98/comment-page-1/#comment-4095</link>

		<dc:creator><![CDATA[胡军涛]]></dc:creator>
		<pubDate>Mon, 30 Jul 2018 02:37:02 +0000</pubDate>
		<guid isPermaLink="false">http://www.codelast.com/?p=8131#comment-4095</guid>

					<description><![CDATA[Hi, blogger. I ran into some problems when running the program by following your method.
The code is as follows:
1. job.addCacheFile(new Path(INPUT_PATH2).toUri());  // where INPUT_PATH2 = "hdfs://172.16.136.187:9000/user/root/input/tb.csv", and 172.16.136.187 is the Hadoop cluster master address

2. URI[] caches = context.getCacheFiles();

3. BufferedReader reader = new BufferedReader(new FileReader(caches[0].getPath()));

The above is the part of the code that I believe is causing the problem.

The runtime error is as follows:
2018-07-30 10:31:30,505 WARN [org.apache.hadoop.fs.FileUtil] - Command 'D:\Coding\Java\hadoop-2.6.0\bin\winutils.exe symlink D:\Coding\Java\WorkSpace\MapReduce_01\tb.csv \tmp\hadoop-hujuntaolucky\mapred\local\1532917890006\tb.csv' failed 1 with: CreateSymbolicLink error (1314): ???????????

2018-07-30 10:31:30,505 WARN [org.apache.hadoop.mapred.LocalDistributedCacheManager] - Failed to create symlink: \tmp\hadoop-hujuntaolucky\mapred\local\1532917890006\tb.csv <- D:\Coding\Java\WorkSpace\MapReduce_01/tb.csv
2018-07-30 10:31:30,830 WARN [org.apache.hadoop.mapred.LocalJobRunner] - job_local697191123_0001
java.lang.Exception: java.io.FileNotFoundException: \user\root\input\tb.csv (The system cannot find the path specified.)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.io.FileNotFoundException: \user\root\input\tb.csv (The system cannot find the path specified.)
	at java.io.FileInputStream.open0(Native Method)
	at java.io.FileInputStream.open(Unknown Source)
	at java.io.FileInputStream.<init>(Unknown Source)
	at java.io.FileInputStream.<init>(Unknown Source)
	at java.io.FileReader.<init>(Unknown Source)
	at WordCount.Book3$BookJoinMapper.setup(Book3.java:110)

Book3.java:110 points to this line: BufferedReader reader = new BufferedReader(new FileReader(caches[0].getPath()));

I hope you can reply when you see this. Thanks!]]></description>
			<content:encoded><![CDATA[<p>Hi, blogger. I ran into some problems when running the program by following your method.<br />
The code is as follows:<br />
1. job.addCacheFile(new Path(INPUT_PATH2).toUri());  // where INPUT_PATH2 = "hdfs://172.16.136.187:9000/user/root/input/tb.csv", and 172.16.136.187 is the Hadoop cluster master address</p>
<p>2. URI[] caches = context.getCacheFiles();</p>
<p>3. BufferedReader reader = new BufferedReader(new FileReader(caches[0].getPath()));</p>
<p>The above is the part of the code that I believe is causing the problem.</p>
<p>The runtime error is as follows:<br />
2018-07-30 10:31:30,505 WARN [org.apache.hadoop.fs.FileUtil] - Command 'D:\Coding\Java\hadoop-2.6.0\bin\winutils.exe symlink D:\Coding\Java\WorkSpace\MapReduce_01\tb.csv \tmp\hadoop-hujuntaolucky\mapred\local\1532917890006\tb.csv' failed 1 with: CreateSymbolicLink error (1314): ???????????</p>
<p>2018-07-30 10:31:30,505 WARN [org.apache.hadoop.mapred.LocalDistributedCacheManager] - Failed to create symlink: \tmp\hadoop-hujuntaolucky\mapred\local\1532917890006\tb.csv &lt;- D:\Coding\Java\WorkSpace\MapReduce_01/tb.csv<br />
2018-07-30 10:31:30,830 WARN [org.apache.hadoop.mapred.LocalJobRunner] - job_local697191123_0001<br />
java.lang.Exception: java.io.FileNotFoundException: \user\root\input\tb.csv (The system cannot find the path specified.)<br />
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)<br />
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)<br />
Caused by: java.io.FileNotFoundException: \user\root\input\tb.csv (The system cannot find the path specified.)<br />
	at java.io.FileInputStream.open0(Native Method)<br />
	at java.io.FileInputStream.open(Unknown Source)<br />
	at java.io.FileInputStream.&lt;init&gt;(Unknown Source)<br />
	at java.io.FileInputStream.&lt;init&gt;(Unknown Source)<br />
	at java.io.FileReader.&lt;init&gt;(Unknown Source)<br />
	at WordCount.Book3$BookJoinMapper.setup(Book3.java:110)</p>
<p>Book3.java:110 points to this line: BufferedReader reader = new BufferedReader(new FileReader(caches[0].getPath()));</p>
<p>I hope you can reply when you see this. Thanks!</p>
]]></content:encoded>
		
			</item>
	</channel>
</rss>
