Monday 26 March 2012

Installing Hadoop 0.21.0 on Windows with Cygwin

1. Install the JDK (with the JRE)

The JRE alone is not enough, because compiling MapReduce programs and Hadoop itself depends on the JDK. (Note: the JDK must be version 1.6 or later; this tutorial uses jdk1.6.0_24.)

2. Install Cygwin

Download it from http://www.cygwin.com (this tutorial uses 1.7.9). In the installer, the following packages must be selected:

Net category: openssh and openssl
Editors category: vim (optional, but convenient for editing configuration files)
Devel category: subversion

3. Configure the Cygwin environment variables

Click the Cygwin icon on the desktop to start Cygwin and run:

$ vim /etc/profile

Add the following lines at the end:

export JAVA_HOME=/cygdrive/D/Java/jdk1.6.0_24
export HADOOP_HOME=/cygdrive/D/hadoop-0.21.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

4. Install sshd in Cygwin

Click the Cygwin icon on the desktop to start Cygwin and run:

$ ssh-host-config

When asked yes/no questions, answer no.
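The exports in step 3 can also be added without opening vim. The sketch below writes them to a local demo file (profile.demo) so it can be run anywhere; on a real Cygwin install the target would be /etc/profile and the redirection would append (>>) instead of overwrite.

```shell
# Sketch: write the step-3 environment variables to a demo file.
# On a real Cygwin install, target /etc/profile and append with >>.
PROFILE="./profile.demo"   # demo target, not the real /etc/profile
cat > "$PROFILE" <<'EOF'
export JAVA_HOME=/cygdrive/D/Java/jdk1.6.0_24
export HADOOP_HOME=/cygdrive/D/hadoop-0.21.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
EOF
grep -c '^export' "$PROFILE"   # 4
```

After editing the real /etc/profile, restart Cygwin (or `source /etc/profile`) so the new variables take effect.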
When you see "Have fun", the sshd service has in general been installed successfully.

5. Start the sshd service

On the desktop, right-click the My Computer icon and choose Manage. In the window that opens, select Services and Applications in the left menu, find sshd in the list on the right, right-click it, and select Start.

6. Configure SSH login

Click the Cygwin icon on the desktop to restart Cygwin and run:

# generate a key file
$ ssh-keygen
(press Enter through every prompt)
# trust the certificate
$ cd ~/.ssh
$ cp id_rsa.pub authorized_keys

Exit Cygwin, then click the desktop icon to start it again and run:

$ ssh localhost

If you are not prompted for a password, the setup succeeded.

7. Download the hadoop-0.21.0 release (hadoop-0.21.0.tar.gz).
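The key setup in step 6 can also be done in one non-interactive sequence. This is a sketch: it assumes ssh-keygen is on the PATH (it is once Cygwin's openssh package is installed), and it uses demo_* file names so the real ~/.ssh is left untouched.

```shell
# Sketch of step 6 without prompts: demo_* names stand in for
# ~/.ssh/id_rsa and ~/.ssh/authorized_keys so nothing real is touched.
rm -f ./demo_id_rsa ./demo_id_rsa.pub ./demo_authorized_keys
ssh-keygen -q -t rsa -N '' -f ./demo_id_rsa   # empty passphrase, no prompts
cat ./demo_id_rsa.pub >> ./demo_authorized_keys
chmod 600 ./demo_authorized_keys
```

On the real system, with the public key copied into ~/.ssh/authorized_keys, `ssh localhost` should log in without asking for a password.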
8. Install Hadoop

Extract hadoop-0.21.0.tar.gz, for example to D:\hadoop-0.21.0. In conf/hadoop-env.sh, only JAVA_HOME needs to be changed to the JDK installation path. (Note that this is not the Windows-style path D:\Java\jdk1.6.0_24 but the Linux-style path /cygdrive/D/Java/jdk1.6.0_24.) Then, in the mapred-site.xml file under conf, add the following:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

9. Format a new distributed filesystem

$ bin/hadoop namenode -format

10. Start the Hadoop daemons

$ bin/start-all.sh

11. Browse the web interfaces for the NameNode and the JobTracker; by default they are available at:

NameNode - http://localhost:50070
JobTracker - http://localhost:50030

12. Common problems
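The Windows-style versus Linux-style path distinction in step 8 is easy to get wrong, so here it is as a tiny helper function. This is illustrative only; Cygwin ships a real tool, `cygpath -u`, that does this conversion properly.

```shell
# Convert a Windows path ('D:\Java\jdk1.6.0_24') to the /cygdrive form
# that hadoop-env.sh and /etc/profile need. Illustrative sketch only;
# the real tool for this on Cygwin is `cygpath -u`.
win2cyg() {
    drive="${1%%:*}"                              # drive letter before ':'
    rest="$(printf '%s' "${1#*:}" | tr '\\' '/')" # flip backslashes to slashes
    printf '/cygdrive/%s%s\n' "$drive" "$rest"
}
win2cyg 'D:\Java\jdk1.6.0_24'   # /cygdrive/D/Java/jdk1.6.0_24
```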
12.1 Hadoop 0.21.0 fails to start on Windows

When starting the 0.21.0 release on Windows, an error appears at startup. After repeatedly searching for the cause and experimenting, the fix turned out to require only a change to the content of line 190 of ${HADOOP_HOME}/bin/hadoop-config.sh.

12.2 Hadoop does not start normally: invalid NameNode URI

After running $ bin/start-all.sh, the following exception is thrown:

Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:214)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:135)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:119)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:481)

Cause: mapred-site.xml must use hdfs://localhost:9001, not plain localhost:9001. With the plain form, the startup log also shows deprecation warnings:

11/04/20 23:33:25 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
11/04/20 23:33:25 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.

Solution: in mapred-site.xml, configure hdfs://localhost:9000 and do not use plain localhost:9000:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://localhost:9001</value>
</property>

12.3 Hadoop does not start normally: no namenode to stop

The abnormal log output looks like this:

11/04/20 21:48:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
11/04/20 21:48:51 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
11/04/20 21:48:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
11/04/20 21:48:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
11/04/20 21:48:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
11/04/20 21:48:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
11/04/20 21:48:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
11/04/20 21:48:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
11/04/20 21:48:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).

Solution: the real problem is that the NameNode never started, which is also why stopping reports "no namenode to stop". Data left over from earlier runs may have corrupted its state, so reformat it and start again:

$ bin/hadoop namenode -format
$ bin/start-all.sh

12.4 Hadoop does not start normally: no datanode to stop

Sometimes a damaged data structure prevents the DataNode from starting, and even reformatting with hadoop namenode -format does not help, because the files under /tmp are not cleared. The /tmp/hadoop* files must be removed as well. Steps:

1. Remove the /tmp directory inside HDFS: $ bin/hadoop fs -rmr /tmp
2. Stop all daemons: $ bin/stop-all.sh
3. Delete the local leftovers: $ rm -rf /tmp/hadoop*
4. Reformat: $ bin/hadoop namenode -format
5. Start the daemons again: $ bin/start-all.sh
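The five recovery steps above can be collected into one small script. This sketch only writes the script out and syntax-checks it, since actually running it needs a live Hadoop install and assumes the Hadoop installation directory is the working directory.

```shell
# Write the no-datanode-to-stop recovery steps to a script and check its
# syntax. Running it for real requires a working Hadoop install and the
# Hadoop installation directory as the working directory.
cat > ./reset-hdfs.sh <<'EOF'
#!/bin/sh
bin/hadoop fs -rmr /tmp        # 1. remove the /tmp directory inside HDFS
bin/stop-all.sh                # 2. stop all daemons
rm -rf /tmp/hadoop*            # 3. delete local /tmp/hadoop* leftovers
bin/hadoop namenode -format    # 4. reformat the NameNode
bin/start-all.sh               # 5. start the daemons again
EOF
sh -n ./reset-hdfs.sh && echo "syntax OK"
```

Note that reformatting the NameNode destroys all data in HDFS, so this is a last resort for a broken single-node test setup, not something to run on a cluster holding real data.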