Resource Person
Mr. D. Kesavaraja, M.E., MBA, (Ph.D.), MISTE
Assistant Professor,
Department of Computer Science and Engineering,
Dr.Sivanthi Aditanar College of Engineering
Tiruchendur - 628215
Day 02
Cloud Computing: OpenNebula IaaS
Cloud Computing
Virtualization: VMware Demo
IaaS, PaaS, SaaS
XaaS
What is OpenNebula?
Ecosystem
OpenNebula Setup
OpenNebula Installation
Procedure to configure IaaS
Virtual Machine Creation
Virtual Block
Storage Controller
Virtual Machine migration
Introduction:
Apache Hadoop is an open-source framework built for distributed Big Data storage and processing across computer clusters. The project is based on the following components:
1. Hadoop Common: the Java libraries and utilities needed by the other Hadoop modules.
2. HDFS (Hadoop Distributed File System): a Java-based scalable file system distributed across multiple nodes.
3. MapReduce: the YARN-based framework for parallel big data processing.
4. Hadoop YARN: a framework for cluster resource management.
Procedure:
# su - hadoop
$ vi .bash_profile
Append the following lines at the end of the file:
## JAVA env variables
export JAVA_HOME=/usr/java/default
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
## HADOOP env variables
export HADOOP_HOME=/opt/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
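After saving the file, reload the environment and confirm that the variables are picked up (a quick sanity check; the version string printed by hadoop depends on your installation):
$ source ~/.bash_profile
$ echo $HADOOP_HOME
/opt/hadoop
$ hadoop version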
$ vi etc/hadoop/core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://master.hadoop.lan:9000/</value>
</property>
$ vi etc/hadoop/hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>file:///opt/volume/namenode</value>
</property>
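If the directory backing dfs.name.dir was not created in an earlier step of the manual, create it with ownership for the hadoop user and format the NameNode before the first start (formatting erases any existing HDFS metadata):
# mkdir -p /opt/volume/namenode
# chown -R hadoop:hadoop /opt/volume
$ hdfs namenode -format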
$ vi etc/hadoop/hadoop-env.sh
Edit the following line to point to your Java system path.
export JAVA_HOME=/usr/java/default/
Hadoop NodeManager
Step 7: Manage Hadoop Services
To stop all Hadoop instances, run the commands below:
$ stop-yarn.sh
$ stop-dfs.sh
$ su - root
# vi /etc/rc.local
Add this excerpt to the rc.local file.
su - hadoop -c "/opt/hadoop/sbin/start-dfs.sh"
su - hadoop -c "/opt/hadoop/sbin/start-yarn.sh"
exit 0
Then, add executable permissions to the rc.local file, and enable, start, and check the service status by issuing the commands below:
$ chmod +x /etc/rc.d/rc.local
$ systemctl enable rc-local
$ systemctl start rc-local
$ systemctl status rc-local
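Once the services are up, you can confirm that the daemons are running with jps from the JDK; on a single-node setup you would typically expect to see NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager listed (a sanity check, not part of the original procedure):
$ su - hadoop
$ jps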
What is FUSE?
FUSE (Filesystem in Userspace) lets you write a normal userland application as a bridge to a conventional file system interface.
The hadoop-hdfs-fuse package lets you use your HDFS cluster as if it were a conventional file system on Linux.
It is assumed that you have a working HDFS cluster and know the hostname and port that your NameNode exposes.
The Hadoop FUSE installation and configuration, including mounting HDFS through FUSE, is done by following the steps below.
Step 1 : Required Dependencies
Step 2 : Download and Install FUSE
Step 3 : Install RPM Packages
Step 4 : Modify HDFS FUSE
Step 5 : Check HADOOP Services
Step 6 : Create a Directory to Mount HADOOP
Extract hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz
[hadoop@hadoop ~]#tar -zxvf hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz
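As an illustration of the mount step, a typical HDFS mount through FUSE looks like the following. The mount point /mnt/hdfs is an assumption, the NameNode address is the one configured in core-site.xml earlier, and hadoop-fuse-dfs is the helper shipped with the hadoop-hdfs-fuse package mentioned above (if you use the hdfs-fuse tarball extracted above instead, substitute the mount command it provides):
$ mkdir -p /mnt/hdfs
$ hadoop-fuse-dfs dfs://master.hadoop.lan:9000 /mnt/hdfs
$ ls /mnt/hdfs
To unmount it later:
$ umount /mnt/hdfs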
The following excerpt is from HdfsWriter.java; it validates that a local input path and an HDFS output path are supplied on the command line:
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.ToolRunner;
if (args.length < 2) {
System.err.println("HdfsWriter [local input path] [hdfs output path]");
return 1;
}
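Assuming the complete HdfsWriter class has been compiled and packaged (the jar name HdfsWriter.jar and the sample paths are illustrative, not from the original listing), it is launched through the hadoop command:
$ hadoop jar HdfsWriter.jar HdfsWriter /home/hadoop/sample.txt /user/hadoop/sample.txt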
Step 3: Verify whether the file is written into HDFS and check the contents of the
file.
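A minimal way to verify, using the same illustrative output path as above:
$ hdfs dfs -ls /user/hadoop
$ hdfs dfs -cat /user/hadoop/sample.txt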
MAP REDUCE
[student@localhost ~]$ su
Password:
[root@localhost student]# su - hadoop
Last login: Wed Aug 31 10:14:26 IST 2016 on pts/1
[hadoop@localhost ~]$ mkdir mapreduce
[hadoop@localhost ~]$ cd mapreduce
[hadoop@localhost mapreduce]$ vi WordCountMapper.java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>
{
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    // Emit (word, 1) for every token in the input line
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException
    {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens())
        {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
}
// Imports for the reducer class (a separate source file, written the same way as the mapper above)
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Reducer;

// Imports for the driver class, followed by its job-submission excerpt
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Accept the HDFS input and output directories at run time
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
return job.waitForCompletion(true) ? 0 : 1;
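Once the mapper, reducer, and driver classes are complete, a typical compile-and-run sequence looks like the one below. The driver class name WordCount, the reducer file name WordCountReducer.java, and the HDFS paths are assumptions for illustration; hadoop com.sun.tools.javac.Main compiles against the Hadoop classpath:
[hadoop@localhost mapreduce]$ export HADOOP_CLASSPATH=$JAVA_HOME/lib/tools.jar
[hadoop@localhost mapreduce]$ hadoop com.sun.tools.javac.Main WordCountMapper.java WordCountReducer.java WordCount.java
[hadoop@localhost mapreduce]$ jar cf wc.jar WordCount*.class
[hadoop@localhost mapreduce]$ hdfs dfs -mkdir -p /user/hadoop/wordcount/input
[hadoop@localhost mapreduce]$ hdfs dfs -put input.txt /user/hadoop/wordcount/input
[hadoop@localhost mapreduce]$ hadoop jar wc.jar WordCount /user/hadoop/wordcount/input /user/hadoop/wordcount/output
[hadoop@localhost mapreduce]$ hdfs dfs -cat /user/hadoop/wordcount/output/part-r-00000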
Download CentOS
As you download and use CentOS Linux, the CentOS Project invites you to be a part
of the community as a contributor. There are many ways to contribute to the
project, from documentation, QA, and testing to coding changes for SIGs, providing
mirroring or hosting, and helping other users.
Front-end Installation
This page shows you how to install OpenNebula from the binary packages.
Using the packages provided on the OpenNebula site is the recommended method; it ensures installation of the latest version and avoids possible package divergences between distributions. There are two alternatives: you can add the OpenNebula package repositories to your system, or visit the software menu to download the latest package for your Linux distribution.
If there are no packages for your distribution, head to the Building from Source
Code guide.
Step 1. Disable SElinux in CentOS/RHEL 7
SELinux can cause some problems, such as not trusting the oneadmin user's SSH credentials. You can disable it by changing this line in the file /etc/selinux/config:
SELINUX=disabled
After this file is changed, reboot the machine.
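A quick way to apply the same change and reboot from the shell (equivalent to editing the file by hand):
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# reboot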
Step 2. Add OpenNebula Repositories
CentOS/RHEL 7
To add OpenNebula repository execute the following as root:
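The repository definition for CentOS/RHEL 7 typically looks like the file below; the release number in the baseurl (5.4 here) is an assumption and should match the OpenNebula version you intend to install:
# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=https://downloads.opennebula.org/repo/5.4/CentOS/7/x86_64
enabled=1
gpgcheck=0
EOT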
Debian/Ubuntu
To add OpenNebula repository on Debian/Ubuntu execute as root:
Debian 8
Ubuntu 14.04
Ubuntu 16.04
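For Debian/Ubuntu, adding the repository generally means importing the repository key and creating an apt source entry. The URLs below follow the downloads.opennebula.org layout, and the Ubuntu 16.04 / 5.4 path is an assumption, so check the OpenNebula download page for your release:
# wget -q -O- https://downloads.opennebula.org/repo/repo.key | apt-key add -
# echo "deb https://downloads.opennebula.org/repo/5.4/Ubuntu/16.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list
# apt-get update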
Step 3. Installing the Software
There are packages for the Front-end, distributed in the various components that make up OpenNebula, and packages for the virtualization host.
To install an OpenNebula Front-end with packages from the repository, execute the following as root (the commands below are for Debian/Ubuntu; on CentOS/RHEL use yum with the corresponding opennebula-server and opennebula-sunstone packages):
apt-get update
apt-get install opennebula opennebula-sunstone opennebula-gate opennebula-flow
Step 4. Ruby Runtime Installation
/usr/share/one/install_gems
The previous script is prepared to detect common Linux distributions and install the required libraries. If it fails to find the packages needed on your system, manually install these packages:
sqlite3 development library
mysql client development library
curl development library
libxml2 and libxslt development libraries
ruby development library
gcc and g++
make
If you want to install only the set of gems for a specific component, read the Building from Source Code guide, where this is explained in more depth.
Step 5. Enabling MySQL/MariaDB (Optional)
You can skip this step if you just want to deploy OpenNebula as quickly as possible.
However if you are deploying this for production or in a more serious environment,
make sure you read the MySQL Setup section.
Note that it is possible to switch from SQLite to MySQL later, but since it is more cumbersome to migrate databases, we suggest that, if in doubt, you use MySQL from the start.
Step 6. Starting OpenNebula
Log in as the oneadmin user and follow these steps:
The /var/lib/one/.one/one_auth file will have been created with a randomly generated password. It should contain the following: oneadmin:<password>. Feel free to change the password before starting OpenNebula. For example:
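A minimal example, following the one_auth format described above ("mypassword" is a placeholder for a value of your own):
$ echo "oneadmin:mypassword" > /var/lib/one/.one/one_auth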
Warning
This will set the oneadmin password on the first boot. From that point, you must use the oneuser passwd command to change oneadmin's password.
You are ready to start the OpenNebula daemons:
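On a systemd-based Front-end (CentOS 7, Ubuntu 16.04), starting the core daemon and Sunstone typically looks like this; enable the units as well if they should start on boot. You can then verify the installation by running oneuser show as oneadmin, which should produce output like the listing below:
# systemctl start opennebula opennebula-sunstone
# systemctl enable opennebula opennebula-sunstone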
oneuser show
USER 0 INFORMATION
ID :0
NAME : oneadmin
GROUP : oneadmin
PASSWORD : 3bc15c8aae3e4124dd409035f32ea2fd6835efc9
AUTH_DRIVER : core
ENABLED : Yes
USER TEMPLATE
TOKEN_PASSWORD="ec21d27e2fe4f9ed08a396cbd47b08b8e0a4ca3c"
If you get an error message, then the OpenNebula daemon could not be started
properly:
oneuser show
Failed to open TCP connection to localhost:2633 (Connection refused - connect(2)
for "localhost" port 2633)
The OpenNebula logs are located in /var/log/one; you should have at least the files oned.log and sched.log, the core and scheduler logs. Check oned.log for any error messages, marked with [E].
Sunstone
Now you can try to log in to the Sunstone web interface. To do this, point your browser to http://<frontend_address>:9869. If everything is OK you will be greeted with a login page. The user is oneadmin and the password is the one in the file /var/lib/one/.one/one_auth on your Front-end.
If the page does not load, check /var/log/one/sunstone.log and /var/log/one/sunstone.error. Also, make sure TCP port 9869 is allowed through the firewall.
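On CentOS/RHEL 7 with firewalld, opening the Sunstone port can be done as follows (adjust to whatever firewall tooling your system uses):
# firewall-cmd --permanent --add-port=9869/tcp
# firewall-cmd --reload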
Directory Structure
The following table lists some notable paths that are available in your Front-end
after the installation:
Path                               Description
/var/log/one/                      Log files, notably oned.log, sched.log, sunstone.log and <vmid>.log
/var/lib/one/datastores/<dsid>/    Storage for the datastores
/var/lib/one/.one/one_auth         oneadmin credentials
/var/lib/one/remotes/hooks/        Hook scripts
/var/lib/one/remotes/vmm/          Virtual Machine Manager Driver scripts
/var/lib/one/remotes/auth/         Authentication Driver scripts
/var/lib/one/remotes/market/       MarketPlace Driver scripts
/var/lib/one/remotes/datastore/    Datastore Driver scripts
/var/lib/one/remotes/vnm/          Networking Driver scripts
Node Installation
On each virtualization node, add the same OpenNebula repositories as in Step 2 and install the virtualization host package for your distribution.
When the package was installed in the Front-end, an SSH key was generated and
the authorized_keys populated. We will sync
the id_rsa , id_rsa.pub and authorized_keys from the Front-end to the nodes.
Additionally we need to create a known_hosts file and sync it as well to the nodes.
To create the known_hosts file, we have to execute this command as
user oneadmin in the Front-end with all the node names as parameters:
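The command is typically ssh-keyscan, run as oneadmin on the Front-end with your actual node hostnames in place of the placeholders:
$ ssh-keyscan <node1> <node2> <node3> <frontend> >> /var/lib/one/.ssh/known_hosts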
Now we need to copy the directory /var/lib/one/.ssh to all the nodes. The easiest
way is to set a temporary password to oneadmin in all the hosts and copy the
directory from the Front-end:
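One way to do this, assuming a temporary oneadmin password has been set on each node, is to copy the directory with scp from the Front-end:
$ scp -rp /var/lib/one/.ssh <node1>:/var/lib/one/
$ scp -rp /var/lib/one/.ssh <node2>:/var/lib/one/
$ scp -rp /var/lib/one/.ssh <node3>:/var/lib/one/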
You should verify that connecting from the Front-end, as user oneadmin, to the nodes, and from the nodes to the Front-end, does not ask for a password:
ssh <node1>
ssh <frontend>
exit
exit
ssh <node2>
ssh <frontend>
exit
exit
ssh <node3>
ssh <frontend>
exit
exit
The network bridges configured on each Host can be listed with brctl show:
brctl show
bridge name bridge id STP enabled interfaces
br0 8000.001e682f02ac no eth0
br1 8000.001e682f02ad no eth1
Note
Remember that this is only required on the Hosts, not on the Front-end. Also remember that the exact names of the resources (br0, br1, etc.) are not important; what matters is that the bridges and NICs have the same names on all the Hosts.
Finally, return to the Hosts list and check that the Host switches to ON status. It should take somewhere between 20s and 1m. Try clicking the refresh button to check the status more frequently.
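The same check can be made from the command line on the Front-end with the onehost CLI; a correctly monitored Host is listed with status on:
$ onehost list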
Hands on Demo
Step:1
Step:2
Click Infrastructure and select Clusters to create a new cluster.
Step:3
Step:5
Step:8
Step:9
Select Scheduling and choose your hosts and clusters.
Step:10
Select the VM and click Instantiate.
Step:11
Click Instances and select the VMs option; the VM status is shown below.
Step:12
Step:13
Now the output will be as follows