Hadoop requires Java to run. We will first update our package list, then install Java, and then install Hadoop.
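The update and installation commands themselves are missing from this transcript; a typical sequence on Ubuntu, assuming the OpenJDK 8 package (which matches the JAVA_HOME set later), is:
manhadoop@bigdata:~$ sudo apt-get update
manhadoop@bigdata:~$ sudo apt-get install openjdk-8-jdk
Hadoop's start-up scripts control the daemons over SSH, so we also generate an RSA key pair to allow passwordless logins to localhost: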
manhadoop@bigdata:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/manhadoop/.ssh/id_rsa):
Created directory '/home/manhadoop/.ssh'.
Your identification has been saved in /home/manhadoop/.ssh/id_rsa.
Your public key has been saved in /home/manhadoop/.ssh/id_rsa.pub.
The key fingerprint is:
50:6b:f3:fc:0f:32:bf:30:79:c2:41:71:26:cc:7d:e3 manhadoop@bigdata
The key's randomart image is:
+--[ RSA 2048]----+
| .oo.o |
| . .o=. o |
| . + . o . |
| o = E |
| S + |
| . + |
| O + |
| O o |
| o.. |
+-----------------+
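To make the login passwordless, the new public key is appended to authorized_keys and the connection is tested once (these two commands are assumed; they do not appear in the original transcript):
manhadoop@bigdata:~$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
manhadoop@bigdata:~$ ssh localhost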
Hadoop Installation
Download Hadoop. I use hadoop-2.7.3.tar.gz, but this should work with any other version.
Step 7: Change the ownership and permissions of the /usr/local/hadoop directory. Here, 'manhadoop' is the Ubuntu username.
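If the target directory does not exist yet, create it first (an assumed prerequisite, not shown in the original):
manhadoop@bigdata:~$ sudo mkdir -p /usr/local/hadoop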
manhadoop@bigdata:~$ sudo chown -R manhadoop:hadoop /usr/local/hadoop
manhadoop@bigdata:~$ sudo chmod -R 777 /usr/local/hadoop
Step 8: Unpack the hadoop-2.7.3.tar.gz archive:
manhadoop@bigdata:~/Bureau$ tar xvzf hadoop-2.7.3.tar.gz
Step 9: Move the contents of the hadoop-2.7.3 folder to /usr/local/hadoop:
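The mv command itself is missing from the transcript; a plausible form, assuming the archive was unpacked on the Desktop, is:
manhadoop@bigdata:~/Bureau$ sudo mv hadoop-2.7.3/* /usr/local/hadoop
The export lines below configure the Hadoop environment; they normally go in ~/.bashrc or /usr/local/hadoop/etc/hadoop/hadoop-env.sh (at minimum, JAVA_HOME must be set in hadoop-env.sh):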
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_HOME_WARN_SUPPRESS="TRUE"
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
10-3 /usr/local/hadoop/etc/hadoop/core-site.xml:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
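The hadoop.tmp.dir directory declared above must exist and be writable by the Hadoop user; a typical preparation (assumed, not shown in the original transcript) is:
manhadoop@bigdata:~$ sudo mkdir -p /app/hadoop/tmp
manhadoop@bigdata:~$ sudo chown manhadoop:hadoop /app/hadoop/tmp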
10-4 /usr/local/hadoop/etc/hadoop/mapred-site.xml:
manhadoop@bigdata:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
manhadoop@bigdata:~$ gedit /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
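Note that mapred.job.tracker is the legacy MRv1 setting, while mapreduce.framework.name set to yarn is what actually tells Hadoop 2.x to run MapReduce jobs on YARN.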
10-5 /usr/local/hadoop/etc/hadoop/yarn-site.xml:
manhadoop@bigdata:~$ gedit /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
10-6 /usr/local/hadoop/etc/hadoop/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is
created.
The default is used if replication is not specified in create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>
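The NameNode and DataNode directories referenced above must exist before formatting; a typical preparation (assumed, not shown in the original transcript) is:
manhadoop@bigdata:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
manhadoop@bigdata:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
manhadoop@bigdata:~$ sudo chown -R manhadoop:hadoop /usr/local/hadoop_store
We can now format the HDFS filesystem through the NameNode: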
manhadoop@bigdata:~$ hadoop namenode -format
manhadoop@bigdata:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh start-all.cmd stop-balancer.sh
hadoop-daemon.sh start-all.sh stop-dfs.cmd
hadoop-daemons.sh start-balancer.sh stop-dfs.sh
hdfs-config.cmd start-dfs.cmd stop-secure-dns.sh
hdfs-config.sh start-dfs.sh stop-yarn.cmd
httpfs.sh start-secure-dns.sh stop-yarn.sh
kms.sh start-yarn.cmd yarn-daemon.sh
mr-jobhistory-daemon.sh start-yarn.sh yarn-daemons.sh
refresh-namenodes.sh stop-all.cmd
slaves.sh stop-all.sh
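Note that start-all.sh and stop-all.sh are deprecated in Hadoop 2.x; the per-subsystem scripts used below (start-dfs.sh and start-yarn.sh) are the recommended way to bring the cluster up.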
manhadoop@bigdata:/usr/local/hadoop/sbin$ start-dfs.sh
manhadoop@bigdata:/usr/local/hadoop/sbin$ start-yarn.sh
manhadoop@bigdata:/usr/local/hadoop/sbin$ jps
2248 DataNode
2682 ResourceManager
2416 SecondaryNameNode
2128 NameNode
2800 NodeManager
26106 Jps
manhadoop@bigdata:/usr/local/hadoop/sbin$ stop-dfs.sh
manhadoop@bigdata:/usr/local/hadoop/sbin$ stop-yarn.sh
While the daemons are running, the NameNode web interface is available at http://localhost:50070/
HBase Installation
Step 1: Download hbase-1.2.2.tar.gz and save it to manhadoop/Bureau.
Step 3: Change the ownership and permissions of the /usr/local/hbase directory. Here, 'manhadoop' is the Ubuntu username.
manhadoop@bigdata:~$ sudo chown -R manhadoop:hadoop /usr/local/hbase
manhadoop@bigdata:~$ sudo chmod -R 777 /usr/local/hbase
Step 4: Unpack the hbase-1.2.2.tar.gz archive:
manhadoop@bigdata:~$ source ~/.bashrc
manhadoop@bigdata:~/Bureau$ tar xvzf hbase-1.2.2.tar.gz
Step 5: Move the contents of the hbase-1.2.2 folder to /usr/local/hbase:
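As with Hadoop, the mv command is missing from the transcript; a plausible form is:
manhadoop@bigdata:~/Bureau$ sudo mv hbase-1.2.2/* /usr/local/hbase
The export lines below go in /usr/local/hbase/conf/hbase-env.sh; HBASE_MANAGES_ZK=true tells HBase to start and stop its own ZooKeeper instance: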
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers
export HBASE_MANAGES_ZK=true
6-3 /usr/local/hbase/conf/hbase-site.xml:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2222</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/manhadoop/zookeeper</value>
</property>
</configuration>
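Note that hbase.rootdir points at the HDFS address configured earlier in core-site.xml (hdfs://localhost:54310), so HBase stores its data in HDFS. The ZooKeeper data directory should also exist and be writable (an assumed preparation step, not shown in the original):
manhadoop@bigdata:~$ mkdir -p /home/manhadoop/zookeeper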
Before starting HBase, make sure the Hadoop daemons are still running; jps should show something like:
2248 DataNode
2682 ResourceManager
2416 SecondaryNameNode
2128 NameNode
2800 NodeManager
26106 Jps
manhadoop@bigdata:/usr/local/hbase/bin$ start-hbase.sh
localhost: starting zookeeper, logging to
/usr/local/hbase/bin/../logs/hbase-manhadoop-zookeeper-laptop.out
starting master, logging to /usr/local/hbase/logs/hbase-manhadoop-master-
laptop.out
starting regionserver, logging to /usr/local/hbase/logs/hbase-manhadoop-1-
regionserver-laptop.out
manhadoop@bigdata:/usr/local/hbase/bin$ jps
2274 DataNode
2158 NameNode
2769 NodeManager
3310 HQuorumPeer
3490 HRegionServer
2439 SecondaryNameNode
3373 HMaster
3615 Jps
2650 ResourceManager
You can now open the HBase shell:
manhadoop@bigdata:/usr/local/hbase/bin$ hbase shell
hbase(main):001:0>
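As a quick smoke test, you can create and scan a table from the shell ('test' and 'cf' are arbitrary example names, not from the original):
hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'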
The HBase Master web interface is available at http://localhost:16010