
Notes on configuring single-node Hadoop on Linux

First, my environment:
Windows 7 (host)
VirtualBox 4.2.10
ubuntu-12.04.2-desktop-i386.iso
Hadoop 0.20.2
JDK 1.6.10

My configuration files

/etc/hosts
10.13.19.55 master

Profile
export HADOOP_HOME=/usr/local/hadoop  
export JAVA_HOME=/usr/local/java
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$HADOOP_HOME:$HADOOP_HOME/lib 
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
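
After editing, reload the file so the variables take effect in the current shell (a minimal sketch, assuming the exports live in /etc/profile; adjust if you put them in ~/.profile or ~/.bashrc instead):
source /etc/profile
echo $HADOOP_HOME    # should print /usr/local/hadoop
echo $JAVA_HOME      # should print /usr/local/java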


hadoop-env.sh
# The java implementation to use.  Required.
export JAVA_HOME=/usr/local/java

hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/usr/local/hadoop/data1,/usr/local/hadoop/data2</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>


core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
</configuration>


mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Put site-specific property overrides in this file. -->
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
    </property>
</configuration>




Masters
master

slaves
master


Problem 1: Retrying connect to server
13/03/28 12:33:41 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).



Causes:
1. Hadoop has not actually started; use jps to check whether the relevant processes exist.
2. Check whether the value of fs.default.name in core-site.xml matches the address being connected to (hdfs://localhost:9000 here).
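
A quick way to check both causes on the master (a sketch; the port should match fs.default.name):
jps                          # NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker should all be listed
netstat -tln | grep 9000     # the NameNode should be listening on the fs.default.name port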


Create four directories under the Hadoop installation directory (the paths referenced by dfs.name.dir and dfs.data.dir above):
data1, data2, datalog1, datalog2
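
For example (a minimal sketch, assuming Hadoop lives in /usr/local/hadoop as in the configs above):
cd /usr/local/hadoop
mkdir -p data1 data2 datalog1 datalog2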


Sometimes jps shows that the NameNode process is missing. In that case, shut everything down with bin/stop-all.sh, format the NameNode, and then start Hadoop again:
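
Roughly this sequence (the output of the format step is shown below):
cd /usr/local/hadoop
bin/stop-all.sh              # stop any half-started daemons
bin/hadoop namenode -format  # re-format HDFS; note this wipes existing HDFS data
bin/start-all.sh             # start NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker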

hadoop@hadoop-VirtualBox:/usr/local/hadoop$ bin/hadoop namenode -format
13/03/28 14:08:10 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop-VirtualBox/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /usr/local/hadoop/datalog1 ? (Y or N) Y
Re-format filesystem in /usr/local/hadoop/datalog2 ? (Y or N) Y
13/03/28 14:08:20 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop,adm,cdrom,sudo,dip,plugdev,lpadmin,sambashare
13/03/28 14:08:20 INFO namenode.FSNamesystem: supergroup=supergroup
13/03/28 14:08:20 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/03/28 14:08:20 INFO common.Storage: Image file of size 96 saved in 0 seconds.
13/03/28 14:08:20 INFO common.Storage: Storage directory /usr/local/hadoop/datalog1 has been successfully formatted.
13/03/28 14:08:20 INFO common.Storage: Image file of size 96 saved in 0 seconds.
13/03/28 14:08:20 INFO common.Storage: Storage directory /usr/local/hadoop/datalog2 has been successfully formatted.
13/03/28 14:08:20 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-VirtualBox/127.0.1.1
************************************************************/


The first time you format, you get prompts like the ones above; that means the format succeeded.

If jps shows output like the following, Hadoop has fully started:
hadoop@hadoop-VirtualBox:/usr/local/hadoop$ jps
3491 SecondaryNameNode
3778 TaskTracker
3293 DataNode
3816 Jps
3093 NameNode
3562 JobTracker


Check Hadoop's status through the web pages:
http://master:50030 (MapReduce web UI)
http://master:50070 (HDFS web UI)


Missing the core jar
Some people compile the WordCount class with a command like the one below, but my 0.21.0 Hadoop had no core jar in the root of the extracted directory (I still haven't figured out why), so I installed 0.20.2 instead:
javac -classpath /home/admin/hadoop/hadoop-0.19.1-core.jar WordCount.java -d /home/admin/WordCount


Compile:
hadoop@hadoop-VirtualBox:~/wordcount$ javac -classpath /usr/local/hadoop/hadoop-0.20.2-core.jar WordCount.java -d /home/hadoop/wordcount
Note: WordCount.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Package:
hadoop@hadoop-VirtualBox:~/wordcount$ jar cvf WordCount.jar *.class
added manifest
adding: WordCount.class(in = 3286) (out= 1666)(deflated 49%)
adding: WordCount$MapClass.class(in = 1928) (out= 796)(deflated 58%)
adding: WordCount$Reduce.class(in = 1591) (out= 643)(deflated 59%)
Prepare the data files
hadoop@hadoop-VirtualBox:~/wordcount$ hadoop fs -mkdir input
Create two files, input1.txt and input2.txt, under /home/hadoop.
Their contents (a quick way to create them is sketched after the listings):
input1.txt:
hello, i love word
You are ok

input2.txt:
Hello, i love china
are you ok?
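
A quick way to create them from the shell (a sketch; any editor works just as well):
cd /home/hadoop
printf 'hello, i love word\nYou are ok\n' > input1.txt
printf 'Hello, i love china\nare you ok?\n' > input2.txt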

Put the files into the Hadoop file system:
hadoop@hadoop-VirtualBox:~$ hadoop fs -put input*.txt input
List the files in the input directory:
hadoop@hadoop-VirtualBox:~$ hadoop fs -ls input
Found 2 items
-rw-r--r--   2 hadoop supergroup         32 2013-03-28 14:26 /user/hadoop/input/input1.txt
-rw-r--r--   2 hadoop supergroup         29 2013-03-28 14:27 /user/hadoop/input/input2.txt




Run WordCount:

hadoop@hadoop-VirtualBox:~/wordcount$ hadoop jar WordCount.jar WordCount input output
13/03/28 14:32:55 INFO mapred.FileInputFormat: Total input paths to process : 2
13/03/28 14:32:55 INFO mapred.JobClient: Running job: job_201303281409_0001
13/03/28 14:32:56 INFO mapred.JobClient:  map 0% reduce 0%
13/03/28 14:33:10 INFO mapred.JobClient:  map 100% reduce 0%
13/03/28 14:33:25 INFO mapred.JobClient:  map 100% reduce 100%
13/03/28 14:33:27 INFO mapred.JobClient: Job complete: job_201303281409_0001
13/03/28 14:33:27 INFO mapred.JobClient: Counters: 18
13/03/28 14:33:27 INFO mapred.JobClient:   Job Counters 
13/03/28 14:33:27 INFO mapred.JobClient:     Launched reduce tasks=1
13/03/28 14:33:27 INFO mapred.JobClient:     Launched map tasks=2
13/03/28 14:33:27 INFO mapred.JobClient:     Data-local map tasks=2
13/03/28 14:33:27 INFO mapred.JobClient:   FileSystemCounters
13/03/28 14:33:27 INFO mapred.JobClient:     FILE_BYTES_READ=152
13/03/28 14:33:27 INFO mapred.JobClient:     HDFS_BYTES_READ=61
13/03/28 14:33:27 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=374
13/03/28 14:33:27 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=73
13/03/28 14:33:27 INFO mapred.JobClient:   Map-Reduce Framework
13/03/28 14:33:27 INFO mapred.JobClient:     Reduce input groups=11
13/03/28 14:33:27 INFO mapred.JobClient:     Combine output records=14
13/03/28 14:33:27 INFO mapred.JobClient:     Map input records=4
13/03/28 14:33:27 INFO mapred.JobClient:     Reduce shuffle bytes=158
13/03/28 14:33:27 INFO mapred.JobClient:     Reduce output records=11
13/03/28 14:33:27 INFO mapred.JobClient:     Spilled Records=28
13/03/28 14:33:27 INFO mapred.JobClient:     Map output bytes=118
13/03/28 14:33:27 INFO mapred.JobClient:     Map input bytes=61
13/03/28 14:33:27 INFO mapred.JobClient:     Combine input records=14
13/03/28 14:33:27 INFO mapred.JobClient:     Map output records=14
13/03/28 14:33:27 INFO mapred.JobClient:     Reduce input records=14


List the output folder:
hadoop@hadoop-VirtualBox:~/wordcount$ hadoop fs -ls output
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2013-03-28 14:32 /user/hadoop/output/_logs
-rw-r--r--   2 hadoop supergroup         73 2013-03-28 14:33 /user/hadoop/output/part-00000

View the MapReduce result:
hadoop@hadoop-VirtualBox:~/wordcount$ hadoop fs -cat output/part-00000
Hello,  1
You     1
are     2
china   1
hello,  1
i       2
love    2
ok      1
ok?     1
word    1
you     1



Problem 2: the output directory already exists
If you run wordcount again directly without having reformatted, the exception shown below is thrown, because an output directory can only be used once: either delete it or switch to a different output directory.
To delete it:
hadoop@hadoop-VirtualBox:~/wordcount$ hadoop fs -rmr  /user/hadoop/output
Deleted hdfs://localhost:9000/user/hadoop/output


hadoop@hadoop-VirtualBox:~/wordcount$ hadoop jar WordCount.jar WordCount input output
Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://localhost:9000/user/hadoop/output already exists
        at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:111)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:772)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1249)
        at WordCount.run(WordCount.java:119)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at WordCount.main(WordCount.java:124)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)



You can also run the WordCount program that ships with Hadoop; the other steps are the same, you just don't need to build WordCount.jar yourself. The command is below. One thing that puzzled me: decompiling hadoop-0.20.2-examples.jar shows the class name WordCount starting with a capital letter, yet the command only works with the lowercase name (the examples jar's main class is a driver that registers each example under a lowercase program name such as "wordcount", so the argument is a program name rather than the class name):
hadoop@hadoop-VirtualBox:~/wordcount$ hadoop  jar /usr/local/hadoop/hadoop-0.20.2-examples.jar wordcount input output4
13/03/28 14:59:41 INFO input.FileInputFormat: Total input paths to process : 2
13/03/28 14:59:41 INFO mapred.JobClient: Running job: job_201303281409_0009
13/03/28 14:59:42 INFO mapred.JobClient:  map 0% reduce 0%
13/03/28 14:59:52 INFO mapred.JobClient:  map 100% reduce 0%
13/03/28 15:00:04 INFO mapred.JobClient:  map 100% reduce 100%
13/03/28 15:00:06 INFO mapred.JobClient: Job complete: job_201303281409_0009
13/03/28 15:00:06 INFO mapred.JobClient: Counters: 17
13/03/28 15:00:06 INFO mapred.JobClient:   Job Counters 
13/03/28 15:00:06 INFO mapred.JobClient:     Launched reduce tasks=1
13/03/28 15:00:06 INFO mapred.JobClient:     Launched map tasks=2
13/03/28 15:00:06 INFO mapred.JobClient:     Data-local map tasks=2
13/03/28 15:00:06 INFO mapred.JobClient:   FileSystemCounters
13/03/28 15:00:06 INFO mapred.JobClient:     FILE_BYTES_READ=152
13/03/28 15:00:06 INFO mapred.JobClient:     HDFS_BYTES_READ=61
13/03/28 15:00:06 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=374
13/03/28 15:00:06 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=73
13/03/28 15:00:06 INFO mapred.JobClient:   Map-Reduce Framework
13/03/28 15:00:06 INFO mapred.JobClient:     Reduce input groups=11
13/03/28 15:00:06 INFO mapred.JobClient:     Combine output records=14
13/03/28 15:00:06 INFO mapred.JobClient:     Map input records=4
13/03/28 15:00:06 INFO mapred.JobClient:     Reduce shuffle bytes=80
13/03/28 15:00:06 INFO mapred.JobClient:     Reduce output records=11
13/03/28 15:00:06 INFO mapred.JobClient:     Spilled Records=28
13/03/28 15:00:06 INFO mapred.JobClient:     Map output bytes=118
13/03/28 15:00:06 INFO mapred.JobClient:     Combine input records=14
13/03/28 15:00:06 INFO mapred.JobClient:     Map output records=14
13/03/28 15:00:06 INFO mapred.JobClient:     Reduce input records=14
hadoop@hadoop-VirtualBox:~/wordcount$ hadoop fs -ls output4
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2013-03-28 14:59 /user/hadoop/output4/_logs
-rw-r--r--   2 hadoop supergroup         73 2013-03-28 14:59 /user/hadoop/output4/part-r-00000
