In a relational (SQL) database, data is placed in tables, and the data schema is carefully designed before the database is
built. NoSQL databases are especially useful for working with large sets of distributed data.
What is NoSQL?
NoSQL encompasses a wide variety of different database technologies that were
developed in response to the demands presented in building modern applications:
Developers are working with applications that create massive volumes of new,
rapidly changing data types — structured, semi-structured, unstructured and
polymorphic data.
Applications that once served a finite audience are now delivered as services
that must be always-on, accessible from many different devices and scaled
globally to millions of users.
Relational databases were not designed to cope with the scale and agility challenges
that face modern applications, nor were they built to take advantage of the commodity
storage and processing power available today.
Graph stores are used to store information about networks of data, such as
social connections. Graph stores include Neo4J and Giraph.
Key-value stores are the simplest NoSQL databases. Every single item in the
database is stored as an attribute name (or 'key'), together with its value.
Examples of key-value stores are Riak and Berkeley DB. Some key-value stores,
such as Redis, allow each value to have a type, such as 'integer', which adds
functionality.
Wide-column stores such as Cassandra and HBase are optimized for queries
over large datasets, and store columns of data together, instead of rows.
NoSQL databases are built to allow the insertion of data without a predefined
schema. That makes it easy to make significant application changes in real-time,
without worrying about service interruptions – which means development is
faster, code integration is more reliable, and less database administrator time is
needed. Developers have typically had to add application-side code to enforce
data quality controls, such as mandating the presence of specific fields, data
types or permissible values. More sophisticated NoSQL databases allow
validation rules to be applied within the database, allowing users to enforce
governance across data, while maintaining the agility benefits of a dynamic
schema.
Auto-sharding
Because of the way they are structured, relational databases usually scale
vertically – a single server has to host the entire database to ensure acceptable
performance for cross-table joins and transactions. This gets expensive quickly,
places limits on scale, and creates a relatively small number of failure points for
database infrastructure. The solution to support rapidly growing applications is to
scale horizontally, by adding servers instead of concentrating more capacity in a
single server.
'Sharding' a database across many server instances can be achieved with SQL
databases, but usually is accomplished through SANs and other complex
arrangements for making hardware act as a single server. Because the database
does not provide this ability natively, development teams take on the work of
deploying multiple relational databases across a number of machines. Data is
stored in each database instance autonomously. Application code is developed
to distribute the data, distribute queries, and aggregate the results of data across
all of the database instances. Additional code must be developed to handle
resource failures, to perform joins across the different databases, for data
rebalancing, replication, and other requirements. Furthermore, many benefits of
the relational database, such as transactional integrity, are compromised or
eliminated when employing manual sharding.
Cloud computing makes this significantly easier, with providers such as Amazon
Web Services providing virtually unlimited capacity on demand, and taking care
of all the necessary infrastructure administration tasks. Developers no longer
need to construct complex, expensive platforms to support their applications, and
can concentrate on writing application code. Commodity servers can provide the
same processing and storage capabilities as a single high-end server for a
fraction of the price.
Replication
Most NoSQL databases also support automatic database replication to maintain
availability in the event of outages or planned maintenance events. More
sophisticated NoSQL databases are fully self-healing, offering automated failover
and recovery, as well as the ability to distribute the database across multiple
geographic regions to withstand regional failures and enable data localization.
Unlike relational databases, NoSQL databases generally have no requirement for
separate applications or expensive add-ons to implement replication.
Integrated Caching
A number of products provide a caching tier for SQL database systems. These
systems can improve read performance substantially, but they do not improve
write performance, and they add operational complexity to system deployments.
If your application is dominated by reads, then a distributed cache could be
considered, but if your application has more than a modest write volume, a
distributed cache may not improve the overall experience of your end users, and
will add complexity in managing cache invalidation.
SQL vs. NoSQL at a glance:

Types
SQL: One type (SQL database) with minor variations.
NoSQL: Many different types, including key-value stores, document databases, wide-column stores, and graph databases.

Development History
SQL: Developed in the 1970s to deal with the first wave of data storage applications.
NoSQL: Developed in the late 2000s to deal with limitations of SQL databases, especially scalability, multi-structured data, geo-distribution, and agile development sprints.

Examples
SQL: MySQL, Postgres, Microsoft SQL Server, Oracle Database.
NoSQL: MongoDB, Cassandra, HBase, Neo4j.

Data Storage Model
SQL: Individual records (e.g., 'employees') are stored as rows in tables, with each column storing a specific piece of data about that record (e.g., 'manager,' 'date hired,' etc.), much like a spreadsheet. Related data is stored in separate tables and joined together when more complex queries are executed. For example, 'offices' might be stored in one table, and 'employees' in another. When a user wants to find the work address of an employee, the database engine joins the 'employee' and 'office' tables together to get all the information necessary.
NoSQL: Varies based on database type. For example, key-value stores function similarly to SQL databases, but have only two columns ('key' and 'value'), with more complex information sometimes stored as BLOBs within the 'value' columns. Document databases do away with the table-and-row model altogether, storing all relevant data together in a single 'document' in JSON, XML, or another format, which can nest values hierarchically.

Schemas
SQL: Structure and data types are fixed in advance. To store information about a new data item, the entire database must be altered, during which time the database must be taken offline.
NoSQL: Typically dynamic, with some enforcing data validation rules. Applications can add new fields on the fly, and unlike SQL table rows, dissimilar data can be stored together as necessary. For some databases (e.g., wide-column stores), it is somewhat more challenging to add new fields dynamically.

Scaling
SQL: Vertically, meaning a single server must be made increasingly powerful in order to deal with increased demand. It is possible to spread SQL databases over many servers, but significant additional engineering is generally required, and core relational features such as JOINs, referential integrity, and transactions are typically lost.
NoSQL: Horizontally, meaning that to add capacity, a database administrator can simply add more commodity servers or cloud instances. The database automatically spreads data across servers as necessary.

Consistency
SQL: Can be configured for strong consistency.
NoSQL: Depends on product. Some provide strong consistency (e.g., MongoDB, with tunable consistency for reads), whereas others offer eventual consistency (e.g., Cassandra).
The Java collections framework (JCF) is a set of classes and interfaces that implement
commonly reusable collection data structures. Although referred to as a framework, it works in
the manner of a library. The JCF provides both interfaces that define various collections and
classes that implement them.
The Java collections framework gives the programmer access to prepackaged data structures
as well as to algorithms for manipulating them. A collection is an object that can hold
references to other objects. The collection interfaces declare the operations that can be
performed on each type of collection.
Basically, the default implementation of hashCode() provided by Object is derived by mapping
the object's memory address to an integer value. However, it is possible to override the hashCode()
method in your implementation class. The equals() method is used to test two objects for
equality.
The contract between them: 1) If two objects are equal according to equals(), then calling the
hashCode() method on each of those two objects must produce the same hash code. 2) It is not
required that if two objects are unequal according to equals(), then calling the hashCode()
method on each of the two objects must produce distinct values.
HashMap maintains an array of buckets; this array of buckets is called the table. Each bucket is
a linked list of key-value pairs encapsulated as Entry objects, and each node of the linked list
is an instance of a private class called Entry.
One thing to know before going into the internals of how HashMap works: all keys with the
same hash value are put in the same bucket, and keys with different hash values can also end
up in the same bucket, because the bucket index is derived from the hash.
Map<String, String> countries = new HashMap<String, String>();

countries.put("India", "New Delhi");
countries.put("US", "Washington DC");
countries.put("Russia", "Moscow");
countries.put("China", "Beijing");

String capital = countries.get("India");
Now in the above code, when you call put(K key, V value) or get(Object key), the function
computes the index of the bucket where the Entry should be. It then iterates through that
bucket's list looking for the Entry that has the same key, using the equals() method. In the
case of put(K, V), if there is already an existing entry it is replaced; otherwise a new Entry is
created.
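The bucket-index computation described above can be sketched as follows. This is a simplified, illustrative model of the classic chaining HashMap (the method name indexFor is an assumption, not the real java.util.HashMap source):

```java
// Simplified sketch of how a HashMap-like table picks a bucket.
public class BucketIndexDemo {
    // With a power-of-two table length, (hash & (length - 1)) is
    // equivalent to (hash % length) but cheaper to compute.
    static int indexFor(int hash, int tableLength) {
        return hash & (tableLength - 1);
    }

    public static void main(String[] args) {
        int capacity = 16; // default initial capacity
        System.out.println(indexFor("India".hashCode(), capacity));
        // Keys whose hashes share the same low bits land in the same
        // bucket and are chained in that bucket's linked list.
    }
}
```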
Since each of these functions (get, put, or remove) has to keep iterating a linked list to see if
there is an associated entry, this can become a performance issue. Imagine having to store
around 1000 values: each linked list would then hold around 63 entries
(1000/16), which means up to 63 iterations for every put, get, or remove operation. To
overcome this performance bottleneck, HashMap can increase the size of its inner array
so the linked lists stay short.
When we create a HashMap, its default initial capacity is 16 and its load factor is 0.75. The
initial capacity here is the size of the inner array of linked lists. The map tracks two values:
size, which is updated every time you add or remove an Entry, and threshold, which is the
capacity of the inner array multiplied by the load factor, refreshed on every resize. When a new
Entry is added, the map checks whether size > threshold, and if so, creates a new array with
double the size. So resizing creates twice the number of existing buckets and redistributes all
the existing entries.
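The resize-trigger arithmetic above can be sketched in a few lines (illustrative only, not the real HashMap internals):

```java
// Sketch of the resize trigger: resize (doubling) happens once the
// number of entries exceeds capacity * loadFactor.
public class ThresholdDemo {
    static int threshold(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    public static void main(String[] args) {
        int capacity = 16;        // default initial capacity
        float loadFactor = 0.75f; // default load factor
        System.out.println(threshold(capacity, loadFactor)); // 12
        // After a resize to 32 buckets the new threshold is 24, and so on.
    }
}
```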
The purpose of a map is to store items based on a key that can be used to
retrieve/delete the item at a later point. Similar functionality can only be achieved with
a list in the limited case where the key happens to be the position in the list.
When you add items to a HashMap, you are not guaranteed to retrieve the items in
the same order you put them in.
hashCode() - HashMap provides put(key, value) for storing and get(key) method for
retrieving values from HashMap. When put() method is used to store (Key, Value)
pair, HashMap implementation calls hashcode on Key object to calculate a hash that is
used to find a bucket where Entry object will be stored.
The Map Interface. A Map is an object that maps keys to values. A map cannot contain
duplicate keys: each key can map to at most one value. The Java platform contains
three general-purpose Map implementations: HashMap, TreeMap, and
LinkedHashMap.
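The three implementations differ mainly in iteration order, which a small sketch makes visible (keys and values here are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Same insertions into each general-purpose Map implementation;
// only the key iteration order differs.
public class MapOrderDemo {
    static List<String> keysOf(Map<String, Integer> map) {
        map.put("banana", 2);
        map.put("apple", 1);
        map.put("cherry", 3);
        return new ArrayList<>(map.keySet());
    }

    public static void main(String[] args) {
        System.out.println(keysOf(new HashMap<>()));       // no guaranteed order
        System.out.println(keysOf(new TreeMap<>()));       // sorted key order
        System.out.println(keysOf(new LinkedHashMap<>())); // insertion order
    }
}
```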
Threads in Java. The base means for concurrency is the java.lang.Thread class.
A Thread executes an object of type java.lang.Runnable. Runnable is an interface which
defines the run() method. This method is called by the Thread object and contains the
work which should be done.
The Java programming language and the Java virtual machine (JVM) have been
designed to support concurrent programming, and all execution takes place in the
context of threads. The programmer must ensure that read and write access to objects is
properly coordinated (or "synchronized") between threads.
The Executor framework is an abstraction layer over the actual implementation of java
multithreading. It is the first concurrent utility framework in java and used for
standardizing invocation, scheduling, execution and control of asynchronous tasks in
parallel threads.
The Callable interface is similar to Runnable , in that both are designed for classes
whose instances are potentially executed by another thread. A Runnable , however,
does not return a result and cannot throw a checked exception.
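A minimal sketch of the difference, submitting a Callable to an ExecutorService (the task body is illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Callable returns a result (and may throw a checked exception),
// which a Runnable cannot do.
public class CallableDemo {
    static int runOnce() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Callable<Integer> task = () -> 6 * 7;     // runs on a pool thread
            Future<Integer> future = pool.submit(task);
            return future.get();                      // blocks until the result is ready
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runOnce()); // 42
    }
}
```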
If you just invoke run() directly, it's executed on the calling thread, just like any
other method call. Thread.start() is required to actually create a new thread so that the
runnable's run() method is executed in parallel. The difference is that Thread.start()
starts a thread, while Runnable.run() just calls a method.
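Thread names make the difference visible; a small sketch (class and field names are illustrative):

```java
// Calling the task directly runs it on the current thread;
// Thread.start() runs it on a new thread.
public class StartVsRunDemo {
    static volatile String lastThreadName;

    static void task() {
        lastThreadName = Thread.currentThread().getName();
    }

    public static void main(String[] args) throws InterruptedException {
        task();                           // direct call: executes on the calling thread
        System.out.println(lastThreadName);

        Thread t = new Thread(StartVsRunDemo::task, "worker");
        t.start();                        // new thread: task() executes on "worker"
        t.join();
        System.out.println(lastThreadName); // worker
    }
}
```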
The java.lang.Thread class provides the join() method, which allows one thread to wait until
another thread completes its execution. If t is a Thread object whose thread is currently
executing, then t.join() causes the current thread to pause its execution until the thread it
joins completes its execution.
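A minimal join() sketch: main waits for a worker to finish before reading its result (the field name is illustrative):

```java
// join() makes the calling thread wait until the target thread finishes.
public class JoinDemo {
    static int result; // written by the worker, read by main after join()

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> result = 10 + 32);
        worker.start();
        worker.join();               // main pauses here until the worker completes
        System.out.println(result);  // safe to read after join(): 42
    }
}
```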
To protect your code against this, create a private "lock" object, instance or static,
and synchronize on that object instead. At run time every class has an instance of a
Class object. That is the object that is locked on by static synchronized methods.
(Any synchronized method or block has to lock on some object.)
However, since the synchronization is at the object level, two threads running different
instances of the object will not be synchronized. If we have a static variable in
a Java class that is accessed by the method, we would like access to it to be synchronized across
instances of the class.
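A sketch of the advice above: synchronizing on a private static lock object coordinates access to shared static state across all instances (class and field names are illustrative):

```java
// A private static lock makes every instance synchronize on the same
// object, protecting the shared static counter.
public class Counter {
    private static final Object LOCK = new Object(); // private lock object
    private static int count = 0;

    void increment() {
        synchronized (LOCK) { // one lock shared by every instance
            count++;
        }
    }

    static int value() {
        synchronized (LOCK) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter a = new Counter();
        Counter b = new Counter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) a.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) b.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Updates through two different instances are still synchronized.
        System.out.println(value()); // 2000
    }
}
```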
Static variables are indeed shared between threads, but the changes made in
one thread may not be visible to another thread immediately, making it seem like there
are two copies of the variable. Memory writes that happen in one thread can "leak
through" and be seen by another thread, but this is by no means guaranteed.
t1.setUncaughtExceptionHandler(new UncaughtExceptionHandler() {
    @Override
    public void uncaughtException(Thread t, Throwable e) {
        System.out.println("exception occurred: " + e.getMessage());
    }
});
22. What is a Java Thread Dump, and how can we get
a Java Thread dump of a Program?
A thread dump is a list of all the threads active in the JVM. Thread dumps are very
helpful in analyzing bottlenecks in the application and in analyzing deadlock
situations. There are many ways to generate a thread dump:
using a profiler, the kill -3 command, the jstack tool, etc. I prefer the jstack tool to generate
a thread dump of a program because it's easy to use and comes with the JDK
installation. Since it's a terminal-based tool, we can create a script to generate
thread dumps at regular intervals to analyze later on. Read this post to know
more about generating thread dumps in Java.
To analyze a deadlock, we need to look at the Java thread dump of the application.
We need to look out for the threads with state BLOCKED, and then the
resources each is waiting to lock; every resource has a unique ID with which we can
find which thread is already holding the lock on the object.
Avoiding nested locks, locking only what is required, and avoiding waiting indefinitely
are common ways to avoid deadlock situations; read this post to learn how
to analyze a deadlock in Java with a sample program.
A thread pool manages a collection of Runnable tasks, and worker threads
execute Runnables from the queue.
1. Java Stream API for collection classes, supporting sequential as well as parallel
processing.
2. The Iterable interface is extended with a forEach() default method that we can use to
iterate over a collection. It is very helpful when used with lambda
expressions, because its argument Consumer is a functional interface.
3. Miscellaneous Collection API improvements, such
as the forEachRemaining(Consumer action) method
in the Iterator interface, and the Map replaceAll(), compute(), and merge() methods.
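The forEach() + lambda combination from item 2 can be sketched as follows (the summing task is illustrative):

```java
import java.util.Arrays;
import java.util.List;

// forEach() takes a Consumer, which is a functional interface,
// so a lambda expression works directly.
public class ForEachDemo {
    static int sum(List<Integer> nums) {
        int[] total = {0};                // effectively-final holder for the lambda
        nums.forEach(n -> total[0] += n); // Consumer<Integer> as a lambda
        return total[0];
    }

    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4);
        System.out.println(sum(nums)); // 10
    }
}
```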
2. What is Java Collections Framework? List out
some benefits of Collections framework?
Collections are used in every programming language, and the initial Java release
contained a few classes for collections: Vector, Stack, Hashtable, and arrays. But
looking at the larger scope and usage, Java 1.2 came up with the Collections
Framework, which groups all the collection interfaces, implementations, and
algorithms.
Java Collections have come a long way with the use of Generics and
Concurrent Collection classes for thread-safe operations. They also include blocking
interfaces and their implementations in the java.util.concurrent package.
Some of the benefits of collections framework are;
Reduced development effort by using core collection classes rather than
implementing our own collection classes.
Code quality is enhanced with the use of well tested collections framework classes.
Reduced effort for code maintenance by using collection classes shipped with JDK.
Reusability and Interoperability
What is the benefit of Generics in Collections
Framework?
Java 1.5 came with Generics, and all collection interfaces and implementations
use them heavily. Generics allow us to specify the type of object that a collection can
contain, so if you try to add an element of another type, you get a compile-time error.
This avoids ClassCastException at runtime, because you get the error at
compilation. Generics also make code cleaner, since we don't need to use casting
or the instanceof operator. I would highly recommend going through the Java Generics
Tutorial to understand generics in a better way.
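A minimal sketch of the compile-time safety described above (the list contents are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// With generics the element type is checked at compile time, so no cast
// is needed on retrieval and ClassCastException cannot occur here.
public class GenericsDemo {
    static String first(List<String> names) {
        return names.get(0); // no cast: the compiler knows the element type
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("Pankaj");
        // names.add(123);   // would not compile: incompatible types
        System.out.println(first(names)); // Pankaj
    }
}
```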
Set is a collection that cannot contain duplicate elements. This interface models
the mathematical set abstraction and is used to represent sets, such as a deck
of cards.
List is an ordered collection and can contain duplicate elements. You can access
any element by its index. A List is more like an array with dynamic length.
A Map is an object that maps keys to values. A map cannot contain duplicate
keys: Each key can map to at most one value.
What is an Iterator?
Iterator interface provides methods to iterate over any Collection. We can get
iterator instance from a Collection using iterator() method. Iterator takes the place
of Enumeration in the Java Collections Framework. Iterators allow the caller to
remove elements from the underlying collection during the iteration. The Java
Collection iterator provides a generic way to traverse the elements of a
collection and implements the Iterator design pattern.
// using an iterator
Iterator<String> it = strList.iterator();
while (it.hasNext()) {
    String obj = it.next();
    System.out.println(obj);
}
Using an iterator is safer because it makes sure that if the underlying list
is structurally modified during iteration, a ConcurrentModificationException is thrown
rather than the iteration silently misbehaving. Note that this fail-fast behavior is a
debugging aid, not a thread-safety guarantee.
What do you understand by iterator fail-fast
property?
The iterator fail-fast property checks for any modification in the structure of the
underlying collection every time we try to get the next element. If any
modification is found, it throws ConcurrentModificationException. All the
implementations of Iterator in the collection classes are fail-fast by design, except the
concurrent collection classes like ConcurrentHashMap and
CopyOnWriteArrayList.
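The fail-fast behavior can be demonstrated with a short sketch (the list contents are illustrative; the enhanced-for loop uses an Iterator under the hood):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.List;

// Structurally modifying a list while iterating over it makes the
// fail-fast iterator throw ConcurrentModificationException.
public class FailFastDemo {
    static boolean modifyWhileIterating() {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
        try {
            for (String s : list) {
                if (s.equals("a")) {
                    list.remove(s); // structural modification during iteration
                }
            }
            return false;           // not reached for this input
        } catch (ConcurrentModificationException e) {
            return true;            // the iterator detected the modification
        }
    }

    public static void main(String[] args) {
        System.out.println(modifyWhileIterating()); // true
    }
}
```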
What is UnsupportedOperationException?
UnsupportedOperationException is the exception used to indicate that an
operation is not supported. It's used extensively in JDK classes; in the collections
framework, java.util.Collections.UnmodifiableCollection throws this
exception for all add and remove operations.
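A minimal sketch of that behavior using Collections.unmodifiableList (the list contents are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// An unmodifiable view rejects mutation with UnsupportedOperationException.
public class UnmodifiableDemo {
    static boolean addRejected() {
        List<String> fixed = Collections.unmodifiableList(
                new ArrayList<>(Arrays.asList("a", "b")));
        try {
            fixed.add("c"); // mutation attempt on the view
            return false;
        } catch (UnsupportedOperationException e) {
            return true;    // the view does not support add
        }
    }

    public static void main(String[] args) {
        System.out.println(addRejected()); // true
    }
}
```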
When we call the put method, passing a key-value pair, HashMap uses the key's
hashCode() with hashing to find the index at which to store the key-value pair. The
Entry is stored in the LinkedList, so if there is already an existing entry, it uses the
equals() method to check whether the passed key already exists; if yes, it overwrites the
value, else it creates a new Entry and stores this key-value pair.
When we call the get method, passing a key, it again uses hashCode() to find
the index in the array and then uses the equals() method to find the correct Entry and
return its value.
The other important things to know about HashMap are capacity, load factor, and
threshold resizing. HashMap's initial default capacity is 16, and its load factor is 0.75.
The threshold is the capacity multiplied by the load factor, and whenever we try to add an
entry, if the map size is greater than the threshold, HashMap rehashes the contents of the
map into a new array with a larger capacity. The capacity is always a power of 2, so
if you know that you need to store a large number of key-value pairs, for example
when caching data from a database, it's a good idea to initialize the HashMap with an appropriate
capacity and load factor.
What is the importance of hashCode() and equals()
methods?
HashMap uses the key object's hashCode() and equals() methods to determine the
index at which to put the key-value pair. These methods are also used when we try to get a
value from the HashMap. If these methods are not implemented correctly, two
different keys might produce the same hashCode() and equals() output, and in that
case, rather than storing them at different locations, HashMap will consider them the same
and overwrite them.
Similarly, all the collection classes that don't store duplicate data use
hashCode() and equals() to find duplicates, so it's very important to implement
them correctly. The implementations of equals() and hashCode() should follow
these rules:
If the class overrides the equals() method, it should also override the hashCode() method.
The class should follow the rules associated with equals() and hashCode() for all
instances. Please refer to the earlier question for these rules.
If a class field is not used in equals(), you should not use it in the hashCode() method.
Best practice for a user-defined key class is to make it immutable, so that the hashCode()
value can be cached for fast performance. Immutable classes also make sure that
hashCode() and equals() will not change in the future, which avoids any issue with
mutability.
For example, let’s say I have a class MyKey that I am using for HashMap key.
// The MyKey name argument passed is used for equals() and hashCode()
MyKey key = new MyKey("Pankaj"); // assume hashCode=1234
myHashMap.put(key, "Value");

// The code below changes the key's hashCode() and equals() result,
// but its bucket location is not changed.
key.setName("Amit"); // assume new hashCode=7890

// The lookup below returns null, because HashMap will look for the key
// in the same bucket index as it was stored at, but since the stored key
// was mutated, equals() finds no match and it returns null.
myHashMap.get(new MyKey("Pankaj"));
This is the reason why String and Integer are mostly used as HashMap
keys.
0. Set<K> keySet(): Returns a Set view of the keys contained in this map. The set is
backed by the map, so changes to the map are reflected in the set, and vice-versa.
If the map is modified while an iteration over the set is in progress (except through
the iterator’s own remove operation), the results of the iteration are undefined. The
set supports element removal, which removes the corresponding mapping from the
map, via the Iterator.remove, Set.remove, removeAll, retainAll, and clear
operations. It does not support the add or addAll operations.
1. Collection<V> values(): Returns a Collection view of the values contained in this
map. The collection is backed by the map, so changes to the map are reflected in
the collection, and vice-versa. If the map is modified while an iteration over the
collection is in progress (except through the iterator’s own remove operation), the
results of the iteration are undefined. The collection supports element removal,
which removes the corresponding mapping from the map, via the Iterator.remove,
Collection.remove, removeAll, retainAll and clear operations. It does not support the
add or addAll operations.
2. Set<Map.Entry<K, V>> entrySet(): Returns a Set view of the mappings contained
in this map. The set is backed by the map, so changes to the map are reflected in
the set, and vice-versa. If the map is modified while an iteration over the set is in
progress (except through the iterator’s own remove operation, or through the
setValue operation on a map entry returned by the iterator) the results of the
iteration are undefined. The set supports element removal, which removes the
corresponding mapping from the map, via the Iterator.remove, Set.remove,
removeAll, retainAll and clear operations. It does not support the add or addAll
operations.
What is difference between HashMap and
Hashtable?
HashMap and Hashtable both implement the Map interface and look similar;
however, there are the following differences between HashMap and Hashtable.
0. HashMap allows null keys and values, whereas Hashtable doesn't allow null keys or
values.
1. Hashtable is synchronized but HashMap is not. So HashMap is better
for a single-threaded environment, while Hashtable is suitable for a multi-threaded
environment.
2. LinkedHashMap was introduced in Java 1.4 as a subclass of HashMap, so
in case you want a predictable iteration order, you can easily switch from HashMap to
LinkedHashMap; that is not the case with Hashtable, whose iteration order is
unpredictable.
3. HashMap provides a Set of keys to iterate over and hence is fail-fast, but Hashtable
provides an Enumeration of keys that doesn't support this feature.
4. Hashtable is considered a legacy class, and if you are looking for modification
of a Map while iterating, you should use ConcurrentHashMap.
How to decide between HashMap and TreeMap?
For inserting, deleting, and locating elements in a Map, the HashMap offers the
best alternative. If, however, you need to traverse the keys in a sorted order, then
TreeMap is your better alternative. Depending upon the size of your collection, it
may be faster to add elements to a HashMap, then convert the map to a TreeMap
for sorted key traversal.
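The load-then-convert approach described above can be sketched as follows (keys and values are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Bulk-load into a HashMap for fast inserts/lookups, then copy into a
// TreeMap when sorted-key traversal is needed.
public class MapConversionDemo {
    static List<String> sortedKeys(Map<String, String> unsorted) {
        return new ArrayList<>(new TreeMap<>(unsorted).keySet());
    }

    public static void main(String[] args) {
        Map<String, String> capitals = new HashMap<>();
        capitals.put("Russia", "Moscow");
        capitals.put("India", "New Delhi");
        capitals.put("China", "Beijing");
        System.out.println(sortedKeys(capitals)); // [China, India, Russia]
    }
}
```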
Although ArrayList is the obvious choice when we work with lists, there are a few cases
when arrays are good to use:
If the size of the list is fixed and it is mostly used for storage and traversal.
For lists of primitive data types: although collections use autoboxing to reduce the
coding effort, it still makes them slower when working with fixed-size primitive data
types.
If you are working with fixed multi-dimensional data, using [][] is far easier
than List<List<>>.
What is difference between ArrayList and
LinkedList?
ArrayList and LinkedList both implement the List interface, but there are some
differences between them.
What is EnumSet?
java.util.EnumSet is Set implementation to use with enum types. All of the
elements in an enum set must come from a single enum type that is specified,
explicitly or implicitly, when the set is created. EnumSet is not synchronized and
null elements are not allowed. It also provides some useful methods like
copyOf(Collection c), of(E first, E… rest) and complementOf(EnumSet s).
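A minimal EnumSet sketch (the Day enum here is a hypothetical example, not from the source):

```java
import java.util.EnumSet;

// All elements of an EnumSet come from one enum type; complementOf()
// yields the remaining constants of that type.
public class EnumSetDemo {
    enum Day { MON, TUE, WED, THU, FRI, SAT, SUN }

    static boolean isWeekday(Day d) {
        EnumSet<Day> weekend = EnumSet.of(Day.SAT, Day.SUN);
        return EnumSet.complementOf(weekend).contains(d); // MON..FRI
    }

    public static void main(String[] args) {
        System.out.println(isWeekday(Day.MON)); // true
        System.out.println(isWeekday(Day.SAT)); // false
    }
}
```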
What is BlockingQueue?
java.util.concurrent.BlockingQueue is a Queue that supports
operations that wait for the queue to become non-empty when retrieving and
removing an element, and wait for space to become available in the queue when
adding an element.
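A minimal producer/consumer sketch of that blocking behavior (queue capacity and contents are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// take() blocks until the producer thread has put an element;
// put() would likewise block if the bounded queue were full.
public class BlockingQueueDemo {
    static String produceAndTake() throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
        Thread producer = new Thread(() -> {
            try {
                queue.put("hello"); // blocks if the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        String item = queue.take(); // blocks until an element is available
        producer.join();
        return item;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(produceAndTake()); // hello
    }
}
```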
The Collections class contains methods for collection framework algorithms, such as binary
search, sorting, shuffling, reversing, etc.
But in most real-life scenarios, we want sorting based on different parameters.
For example, as a CEO, I would like to sort the employees based on salary; an
HR manager would like to sort them based on age. This is the situation where we need
to use the Comparator interface, because the Comparable.compareTo(Object
o) method implementation can sort based on one field only, and we can't choose
the field on which we want to sort the object.
The Comparable interface is used to provide the natural ordering of objects, and we can
use it to provide sorting based on a single logic.
The Comparator interface is used to provide different algorithms for sorting, and we can
choose the comparator we want to use to sort the given collection of objects.
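The salary-vs-age example above can be sketched with two Comparators over the same list (the Employee class and its fields are hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// The same list sorted by different fields via different Comparators.
public class ComparatorDemo {
    static class Employee {
        final String name;
        final int salary;
        final int age;
        Employee(String name, int salary, int age) {
            this.name = name; this.salary = salary; this.age = age;
        }
    }

    static List<String> namesSortedBy(List<Employee> staff, Comparator<Employee> order) {
        List<Employee> copy = new ArrayList<>(staff);
        copy.sort(order);
        List<String> names = new ArrayList<>();
        for (Employee e : copy) names.add(e.name);
        return names;
    }

    public static void main(String[] args) {
        List<Employee> staff = Arrays.asList(
                new Employee("Amit", 90000, 30),
                new Employee("Pankaj", 60000, 45));
        // CEO's view: by salary.  HR's view: by age.
        System.out.println(namesSortedBy(staff, Comparator.comparingInt(e -> e.salary)));
        System.out.println(namesSortedBy(staff, Comparator.comparingInt(e -> e.age)));
    }
}
```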
1. Singleton Pattern
The Singleton pattern restricts the instantiation of a class and ensures that only one
instance of the class exists in the Java virtual machine. It seems to be a very
simple design pattern, but when it comes to implementation, it comes with a lot of
implementation concerns. The implementation of the Singleton pattern has always
been a controversial topic among developers. Check out Singleton Design
Pattern to learn about different ways to implement the Singleton pattern and the pros and
cons of each method. This is one of the most discussed Java design
patterns.
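One of the implementation variants mentioned above can be sketched as follows; the holder idiom shown here is one common thread-safe approach (the AppConfig class name is illustrative):

```java
// Lazy-initialization Singleton via the initialization-on-demand holder
// idiom: the JVM's class loading guarantees thread safety without locks.
public class AppConfig {
    private AppConfig() {} // no outside instantiation

    private static class Holder { // loaded on first getInstance() call
        static final AppConfig INSTANCE = new AppConfig();
    }

    public static AppConfig getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Every call returns the same instance.
        System.out.println(AppConfig.getInstance() == AppConfig.getInstance()); // true
    }
}
```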
2. Factory Pattern
The Factory design pattern is used when we have a superclass with multiple sub-
classes and, based on input, we need to return one of the subclasses. This pattern
takes the responsibility of instantiating a class out of the client program and into the
factory class. We can apply the Singleton pattern to the factory class or make the factory
method static. Check out Factory Design Pattern for an example program
and factory pattern benefits. This is one of the most widely used Java design
patterns.
3. Abstract Factory Pattern
In the Abstract Factory pattern, we get rid of the if-else block and have a factory class for
each subclass, and then an Abstract Factory class that will return the subclass
based on the input factory class. Check out Abstract Factory Pattern to know
how to implement this pattern with an example program.
4. Builder Pattern
This pattern was introduced to solve some of the problems with the Factory and
Abstract Factory design patterns when the object contains a lot of attributes.
The Builder pattern solves the issues of a large number of optional parameters and
inconsistent state by providing a way to build the object step by step, with
a method that actually returns the final object. Check out Builder Pattern for an
example program and the classes used in the JDK.
5. Prototype Pattern
The Prototype pattern is used when object creation is a costly affair, requiring a
lot of time and resources, and you have a similar object already existing. This
pattern provides a mechanism to copy the original object to a new object and then
modify it according to our needs. This pattern uses Java cloning to copy the object.
The Prototype design pattern mandates that the object you are copying should
provide the copying feature; it should not be done by any other class. However,
whether to use a shallow or deep copy of the object's properties depends on the
requirements and is a design decision. Check out Prototype Pattern for a sample
program.
1. Adapter Pattern
The Adapter pattern is one of the structural design patterns, used so that
two unrelated interfaces can work together. The object that joins these unrelated
interfaces is called an adapter. As a real-life example, think of a mobile
charger: the phone's battery needs 3 volts to charge, but the wall socket
supplies either 120V (US) or 240V (India), so the mobile charger
works as an adapter between the phone's charging socket and the wall socket. Check
out Adapter Pattern for an example program and its usage in Java.
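The charger analogy above can be sketched in Java; the class names and the 120V-to-3V conversion are illustrative assumptions:

```java
// Adapter sketch: Socket is the existing "adaptee", MobileCharger is the
// target interface the client expects, and ChargerAdapter joins them.
class Socket { int volts() { return 120; } }

interface MobileCharger { int output(); }   // client wants 3 volts

class ChargerAdapter implements MobileCharger {
    private final Socket socket;
    ChargerAdapter(Socket socket) { this.socket = socket; }
    @Override
    public int output() { return socket.volts() / 40; } // step 120V down to 3V
}
```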
2. Composite Pattern
The Composite pattern is one of the structural design patterns, used when we
have to represent a part-whole hierarchy. When we need a structure in which
the objects have to be treated the same way, we can apply the
Composite pattern.
Let's understand it with a real-life example: a drawing is a structure that consists
of objects such as circles, lines and triangles, and when we fill the drawing with a
color (say red), the same color is applied to the objects in the drawing.
The drawing is made up of different parts, and they all support the same operations.
Check out the Composite Pattern article for the different components of the
pattern and an example program.
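The drawing example can be sketched as follows; all names are hypothetical. The key point is that the composite (Drawing) and the leaf (Circle) share one interface, so clients treat them uniformly:

```java
import java.util.ArrayList;
import java.util.List;

// Composite sketch: Drawing is itself a Shape, so filling the drawing
// with a color fills every contained shape with that color.
interface Shape { String draw(String color); }

class Circle implements Shape {
    public String draw(String color) { return "circle:" + color; }
}

class Drawing implements Shape {
    private final List<Shape> shapes = new ArrayList<>();
    void add(Shape s) { shapes.add(s); }
    public String draw(String color) {       // delegates to every child
        StringBuilder sb = new StringBuilder();
        for (Shape s : shapes) sb.append(s.draw(color)).append(";");
        return sb.toString();
    }
}
```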
3. Proxy Pattern
The Proxy pattern's intent is to "provide a surrogate or placeholder for another
object to control access to it". The definition itself is very clear: the pattern is used
when we want to provide controlled access to a functionality.
Let's say we have a class that can run commands on the system. Used by us
directly, that's fine, but handing this program to a client application can cause
severe issues, because the client could issue a command to delete system files
or change settings that you don't want changed. Check out the Proxy
Pattern post for the example program with implementation details.
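A minimal sketch of that command-runner scenario (class names hypothetical; the "execution" is simulated with a string so the example stays self-contained):

```java
// Proxy sketch: ExecutorProxy exposes the same interface as the real
// executor but blocks dangerous commands for non-admin callers.
interface CommandExecutor { String runCommand(String cmd); }

class RealExecutor implements CommandExecutor {
    public String runCommand(String cmd) { return "executed: " + cmd; }
}

class ExecutorProxy implements CommandExecutor {
    private final CommandExecutor real = new RealExecutor();
    private final boolean isAdmin;
    ExecutorProxy(boolean isAdmin) { this.isAdmin = isAdmin; }
    public String runCommand(String cmd) {
        if (!isAdmin && cmd.trim().startsWith("rm"))
            return "denied: " + cmd;         // controlled access
        return real.runCommand(cmd);
    }
}
```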
4. Flyweight Pattern
The Flyweight pattern is used when we need to create a lot of objects of a
class. Since every object consumes memory, which can be crucial on low-memory
devices such as mobile or embedded systems, the Flyweight pattern
can be applied to reduce the memory load by sharing objects. The String
pool in Java is one of the best examples of a Flyweight
implementation. Check out the Flyweight Pattern article for a sample program and
the implementation process.
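A minimal sketch of the sharing idea (hypothetical names): a factory caches instances by their intrinsic state and hands the same object back to every caller that asks for it:

```java
import java.util.HashMap;
import java.util.Map;

// Flyweight sketch: Pen objects are keyed by color (intrinsic state) and
// shared, so a million "red" requests produce one object, not a million.
class Pen {
    private final String color;
    Pen(String color) { this.color = color; }
    String getColor() { return color; }
}

class PenFactory {
    private static final Map<String, Pen> cache = new HashMap<>();
    static Pen get(String color) {
        return cache.computeIfAbsent(color, Pen::new);  // reuse if cached
    }
}
```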
5. Facade Pattern
The Facade pattern is used to help client applications interact easily with a
system. Suppose we have an application with a set of interfaces for using a
MySQL or Oracle database and for generating different types of reports, such as
HTML and PDF reports; that means a different set of interfaces for each
type of database. A client application can use these interfaces to
get the required database connection and generate reports, but as the
complexity increases or the interface names become confusing, the client
application will find it difficult to manage. We can apply the Facade pattern here
and provide a wrapper interface on top of the existing interfaces to help the client
application. Check out the Facade Pattern post for implementation details and a
sample program.
6. Bridge Pattern
When we have hierarchies in both the interfaces and their implementations,
the Bridge pattern is used to decouple the interfaces from the implementation
and hide the implementation details from client programs. Like the Adapter
pattern, it is one of the structural design patterns.
7. Decorator Pattern
The Decorator pattern is used to modify the functionality of an object at runtime.
Other instances of the same class are not affected, so only the
individual object gets the modified behavior. It is one of the
structural design patterns (like the Adapter, Bridge and Composite
patterns) and is implemented with abstract classes or interfaces plus composition.
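A minimal sketch of composition-based decoration (hypothetical names): the decorator wraps a single instance, so only that instance gains the new behavior:

```java
// Decorator sketch: SportsCar wraps a Car at runtime and extends its
// behavior without touching the BasicCar class itself.
interface Car { String assemble(); }

class BasicCar implements Car {
    public String assemble() { return "basic car"; }
}

abstract class CarDecorator implements Car {
    protected final Car car;                 // composition, not inheritance
    CarDecorator(Car car) { this.car = car; }
}

class SportsCar extends CarDecorator {
    SportsCar(Car car) { super(car); }
    public String assemble() { return car.assemble() + " + sports features"; }
}
```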
2. Mediator Pattern
The Mediator pattern provides a centralized communication medium
between different objects in a system. It is very helpful in
enterprise applications where many objects interact with each other: if the
objects interact directly, the system components become tightly
coupled, which raises the maintenance cost and makes the system hard to
extend. The Mediator pattern instead places a mediator between the objects
and thereby implements loose coupling between them.
Air traffic control is a great example of the Mediator pattern: the airport
control room works as a mediator for communication between different flights.
The mediator works as a router between objects and can have its own logic for
routing the communication. Check out the Mediator Pattern post for
implementation details with an example program.
3. Chain of Responsibility Pattern
We know that we can have multiple catch blocks in a try-catch construct, and
every catch block is a kind of processor for one particular exception. When
an exception occurs in the try block, it is offered to the first catch block; if that
block cannot process it, the request is forwarded to the next object in the chain,
i.e. the next catch block. If even the last catch block cannot process it, the
exception is thrown out of the chain to the calling program.
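The multi-catch chain just described is directly runnable; the class name and the division example are illustrative:

```java
// Each catch block is one handler in the chain: the first whose exception
// type matches processes the exception, otherwise the next one is tried.
class CatchChainDemo {
    static String handle(int denominator) {
        try {
            int result = 10 / denominator;       // may throw ArithmeticException
            return "result=" + result;
        } catch (ArithmeticException e) {
            return "handled by first catch";     // first handler in the chain
        } catch (RuntimeException e) {
            return "handled by second catch";    // fallback handler
        }
    }
}
```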
4. Observer Pattern
The Observer pattern is useful when you are interested in the state of an object
and want to be notified whenever there is a change. The objects that
watch the state of another object are called observers, and the
object being watched is called the subject.
Java provides built-in support for the Observer pattern through the
java.util.Observable class and the java.util.Observer interface (both deprecated
since Java 9). They are not widely used, because the implementation is really
simple and most of the time we don't want to end up extending a class just to
implement the pattern, since Java doesn't support multiple inheritance of classes.
Java Message Service (JMS) uses the Observer pattern along with the Mediator
pattern to let applications subscribe to and publish data to other applications. Check
out the Observer Pattern post for implementation details and an example program.
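A hand-rolled subject/observer pair, avoiding the deprecated java.util.Observable, can be sketched like this (names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Observer sketch: the Subject keeps a list of observers and notifies
// each of them whenever its state changes.
interface Observer { void update(String state); }

class Subject {
    private final List<Observer> observers = new ArrayList<>();
    void register(Observer o) { observers.add(o); }
    void setState(String state) {
        for (Observer o : observers) o.update(state);   // push the change
    }
}
```

Because Observer is a single-method interface, observers can be registered as lambdas without extending any class.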
5. Strategy Pattern
The Strategy pattern is used when we have multiple algorithms for a specific task
and the client decides the actual implementation to be used at runtime.
Check out the Strategy Pattern post for implementation details and an example
program.
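A minimal sketch of client-chosen algorithms (the payment example and all names are hypothetical):

```java
// Strategy sketch: Cart.checkout() runs whichever algorithm the caller
// passes in; swapping strategies needs no change to Cart.
interface PaymentStrategy { String pay(int amount); }

class CardPayment implements PaymentStrategy {
    public String pay(int amount) { return "card:" + amount; }
}

class PaypalPayment implements PaymentStrategy {
    public String pay(int amount) { return "paypal:" + amount; }
}

class Cart {
    String checkout(int amount, PaymentStrategy strategy) {
        return strategy.pay(amount);   // the client decided at runtime
    }
}
```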
6. Command Pattern
The Command pattern is used to implement loose coupling in a request-response
model. In the Command pattern, the request is sent to the invoker, which passes it
to the encapsulated command object; the command object then passes the request
to the appropriate method of the receiver to perform the specific action.
Let's say we want to provide a file-system utility with methods to open, write and
close a file, and it should support multiple operating systems such as Windows and
Unix.
To implement our utility, we first need the receiver classes that actually do the
work. Since we code against Java interfaces, we can have a FileSystemReceiver
interface and implementation classes for different operating-system flavors such
as Windows, Unix, Solaris, etc. Check out the Command Pattern post for the
implementation details with an example program.
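The invoker/command/receiver roles described above can be sketched as follows; FileSystemReceiver comes from the text, while the other names and the simulated return strings are assumptions:

```java
// Command sketch: the invoker only knows the Command interface; the
// command encapsulates the receiver and the action to run on it.
interface FileSystemReceiver { String open(); }

class UnixFileSystem implements FileSystemReceiver {
    public String open() { return "opening file in unix"; }
}

interface Command { String execute(); }

class OpenFileCommand implements Command {
    private final FileSystemReceiver fs;
    OpenFileCommand(FileSystemReceiver fs) { this.fs = fs; }
    public String execute() { return fs.open(); }   // delegate to receiver
}

class FileInvoker {
    String run(Command cmd) { return cmd.execute(); } // no receiver knowledge
}
```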
7. State Pattern
The State pattern is used when an object changes its behavior based on its
internal state.
To change an object's behavior based on its state, we could keep a state variable
in the object and use an if-else block to perform different actions depending on it.
The State pattern instead provides a systematic, loosely coupled way to achieve
this through Context and State implementations.
Check out the State Pattern post for implementation details with an example program.
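The Context/State split can be sketched in Java, replacing the if-else approach; the TV example and all names are hypothetical:

```java
// State sketch: the context delegates to its current State object, so
// behavior changes by swapping the state, not by branching on a flag.
interface State { String press(); }

class OnState implements State { public String press() { return "TV on"; } }
class OffState implements State { public String press() { return "TV off"; } }

class TvContext {
    private State state = new OffState();    // initial state
    void setState(State s) { this.state = s; }
    String pressButton() { return state.press(); }
}
```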
8. Visitor Pattern
The Visitor pattern is used when we have to perform an operation on a group of
similar kinds of objects. With its help, we can move the operational logic from the
objects into another class.
For example, think of a shopping cart to which we can add different types of items
(elements); when we click the checkout button, it calculates the total amount to be
paid. We could keep the calculation logic in the item classes, or we can move it out
into another class using the Visitor pattern. Check out the Visitor Pattern post for
implementation details.
9. Interpreter Pattern
The Interpreter pattern is used to define a grammatical representation for a
language and provide an interpreter to deal with that grammar.
A classic example of this pattern is the Java compiler, which interprets Java
source code into bytecode understandable by the JVM. Google Translate is also an
example of the Interpreter pattern: the input can be in any language, and we get
the output interpreted into another language.
Spring Boot, on the other hand, is built on a totally different mantra. It is basically a
pre-configured, pre-sugared suite of frameworks and technologies that reduces
boilerplate configuration, giving you the shortest path to a running Spring web
application with minimal code and configuration out of the box. As the
Spring Boot page shows, it takes fewer than 20 lines of code to have a simple
RESTful application up and running with almost zero configuration. It still offers
plenty of ways to configure the application to match your needs; see the Spring
Boot Reference Guide for details.
Spring is a lightweight, open-source framework created by Rod Johnson in
2003. Spring is a complete and modular framework: it can be used to implement
all the layers of a real-time application, or just a particular layer, unlike
Struts (front end only) or Hibernate (database only); with Spring we can
develop every layer.
Spring is said to be non-invasive, meaning it doesn't force a programmer to
extend or implement their classes from any predefined class or interface of the
Spring API. In Struts we used to extend the Action class, which is why Struts
is said to be invasive: the Struts framework forces the programmer's class to
extend a base class provided by the Struts API.
Spring is a lightweight framework because of its POJO model, which made J2EE
application development easier.
Spring is in such demand for three main reasons:
Simplicity
Testability
Loose coupling
Spring Boot:
First of all, Spring Boot is not a framework; it is a way to ease the creation of
stand-alone applications with minimal or zero configuration. It is an approach to
developing Spring-based applications with very little configuration: it provides
defaults for code and annotation configuration so you can start new Spring
projects in no time. Spring Boot leverages existing Spring projects as well as
third-party projects to develop production-ready applications. It provides a set of
starter POMs (or Gradle build files) that you can use to add the required
dependencies, and it also facilitates auto-configuration.
Spring Boot automatically configures the required classes depending on the
libraries on its classpath. Suppose your application wants to interact with a
database: if Spring Data libraries are on the classpath, Spring Boot automatically
sets up the connection to the database along with a DataSource.
The impact of this design principle is profound; it brings good
architecture, maintainability, uniform and standard product creation, fewer
decisions for the user, higher overall productivity, faster development, and more.
The idea is that the system or framework provides sensible defaults for its
users (by convention), and only when one deviates from these defaults are
configuration changes needed. Let's take some examples.
Example 1: (Deployment Simplified)
Suppose a user is creating a web application with Spring MVC; obviously they
will need a container such as Tomcat to deploy it. If the framework can provide
an embedded Tomcat, users don't have to waste time and effort installing and
configuring their own Tomcat instance. And if one doesn't want this default
behavior, the framework should be flexible enough to allow that too.
Work for you: look at the dependent JARs of the spring-boot-starter-web artifact:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
Let me know what you found. :-)
Spring Boot enables building production-ready applications quickly and provides non-
functional features:
While developing a number of smaller microservices might look easy, there are a
number of inherent complexities associated with microservices architectures.
Let's look at some of the challenges:
The first thing to observe is that we are using dependencies named spring-boot-
starter-*. Remember I said "95% of the time I use the same configuration." So when
you add the spring-boot-starter-web dependency, by default it pulls in all the
libraries commonly used when developing Spring MVC applications, such as spring-
webmvc, jackson-json, validation-api and tomcat.
We have also added the spring-boot-starter-data-jpa dependency. This pulls in all
the spring-data-jpa dependencies and also adds the Hibernate libraries, because
the majority of applications use Hibernate as the JPA implementation.
2. Auto Configuration
spring-boot-starter-web not only adds all these libraries but also configures
the commonly registered beans, such as the DispatcherServlet, resource handlers
and MessageSource, with sensible defaults.
We also added spring-boot-starter-thymeleaf, which not only adds the Thymeleaf
library dependencies but also configures the ThymeleafViewResolver bean
automatically.
We haven't defined any DataSource, EntityManagerFactory or
TransactionManager beans, but they get created automatically. How? If we
have an in-memory database driver such as H2 or HSQL on the classpath,
Spring Boot automatically creates an in-memory DataSource and then
registers the EntityManagerFactory and TransactionManager beans
with sensible defaults. But we are using MySQL, so we need to provide the
MySQL connection details explicitly. We have configured those connection details
in the application.properties file, and Spring Boot creates a DataSource from these
properties.
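Such an application.properties might look like the following sketch; the database name and credentials are invented for illustration, while the property keys themselves are standard Spring Boot settings:

```properties
# Hypothetical MySQL connection details; Spring Boot builds a DataSource from these
spring.datasource.url=jdbc:mysql://localhost:3306/mydb
spring.datasource.username=root
spring.datasource.password=secret
spring.jpa.hibernate.ddl-auto=update
```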
3. Embedded Servlet Container Support
The most important and surprising thing is that we created a simple Java class
annotated with the seemingly magical @SpringBootApplication annotation and a
main method, and just by running that main method we can run the application and
access it at http://localhost:8080/.
What is the core problem that Spring Framework solves? Think long and
hard. What’s the problem Spring Framework solves?
Most important feature of Spring Framework is Dependency Injection. At the core of all
Spring Modules is Dependency Injection or IOC Inversion of Control.
Because, when DI or IOC is used properly, we can develop loosely coupled applications.
And loosely coupled applications can be easily unit tested.
Spring MVC Framework provides decoupled way of developing web applications. With
simple concepts like Dispatcher Servlet, ModelAndView and View Resolver, it makes it
easy to develop web applications.
Spring-based applications have a lot of configuration. For example, when we use
Spring MVC, we need to configure component scanning, the dispatcher servlet, a
view resolver and WebJars (for delivering static content), among other things.
<bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix">
        <value>/WEB-INF/views/</value>
    </property>
    <property name="suffix">
        <value>.jsp</value>
    </property>
</bean>

<mvc:resources mapping="/webjars/**" location="/webjars/"/>
Below code snippet shows typical configuration of a dispatcher servlet in a web
application.
<servlet>
<servlet-name>dispatcher</servlet-name>
<servlet-class>
org.springframework.web.servlet.DispatcherServlet
</servlet-class>
<init-param>
<param-name>contextConfigLocation</param-name>
<param-value>/WEB-INF/todo-servlet.xml</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>dispatcher</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>
Spring Boot brings a new thought process around this.
Can we bring more intelligence into this? When a Spring MVC JAR is added to an
application, can we auto-configure some beans automatically?
How about auto-configuring a DataSource if the Hibernate JAR is on the classpath?
How about auto-configuring a DispatcherServlet if the Spring MVC JAR is on the
classpath?
Spring Boot looks at a) Frameworks available on the CLASSPATH b) Existing
configuration for the application. Based on these, Spring Boot provides basic
configuration needed to configure the application with these frameworks. This is called
Auto Configuration.
Spring MVC is associated with controller operations, i.e., serving requests and
responses in the Spring Framework. It eases and reduces the development effort
required on the front end. It provides a tag library for displaying content in
the UI, and you can also customize the tags. You can define multiple content-
negotiation handlers for the same request controller; for example, the same
controller can return a JSP or JSON. And the list goes on; you can check the
documentation for further details.
Spring Boot lets you create whole Spring-Framework-based web apps quickly.
It is all based on Java class configuration, and you can add different Spring
modules as your needs dictate.
Project Execution
organizational effectiveness
team building
performance management
motivation skills
conflict resolution
diversity appreciation
staff development
problem solving
adaptability
change management
consultative skills
sense of urgency
judgment
decision making
customer relations management
negotiation skills
Project Closure
presentation skills
data management
evaluation skills
efficiently synthesize project information and accurately establish project scope
set project costs and productivity benchmarks
successfully manage and control budgets up to $X
develop good working relationships with stakeholders at all levels to build consensus
effectively lead and coordinate project teams of up to X members
solve critical issues in a time-sensitive environment
proven quality assurance, risk management and change management expertise
Elasticsearch is a highly scalable open-source full-text search and analytics engine. It allows you
to store, search, and analyze big volumes of data quickly and in near real time. It is generally
used as the underlying engine/technology that powers applications that have complex search
features and requirements.
Here are a few sample use-cases that Elasticsearch could be used for:
You run an online web store where you allow your customers to search for products that you sell.
In this case, you can use Elasticsearch to store your entire product catalog and inventory and
provide search and autocomplete suggestions for them.
You want to collect log or transaction data and you want to analyze and mine this data to look
for trends, statistics, summarizations, or anomalies. In this case, you can use Logstash (part of
the Elasticsearch/Logstash/Kibana stack) to collect, aggregate, and parse your data, and then
have Logstash feed this data into Elasticsearch. Once the data is in Elasticsearch, you can run
searches and aggregations to mine any information that is of interest to you.
You run a price alerting platform which allows price-savvy customers to specify a rule like "I am
interested in buying a specific electronic gadget and I want to be notified if the price of gadget
falls below $X from any vendor within the next month". In this case you can scrape vendor
prices, push them into Elasticsearch and use its reverse-search (Percolator) capability to match
price movements against customer queries and eventually push the alerts out to the customer
once matches are found.
You have analytics/business-intelligence needs and want to quickly investigate, analyze,
visualize, and ask ad-hoc questions on a lot of data (think millions or billions of records). In this
case, you can use Elasticsearch to store your data and then use Kibana (part of the
Elasticsearch/Logstash/Kibana stack) to build custom dashboards that can visualize aspects of
your data that are important to you. Additionally, you can use the Elasticsearch aggregations
functionality to perform complex business intelligence queries against your data.
For the rest of this tutorial, you will be guided through the process of getting Elasticsearch up
and running, taking a peek inside it, and performing basic operations like indexing, searching,
and modifying your data. At the end of this tutorial, you should have a good idea of what
Elasticsearch is, how it works, and hopefully be inspired to see how you can use it to either build
sophisticated search applications or to mine intelligence from your data.
Basic Concepts
There are a few concepts that are core to Elasticsearch. Understanding these concepts from the
outset will tremendously help ease the learning process.
Cluster
A cluster is a collection of one or more nodes (servers) that together holds your entire data and
provides federated indexing and search capabilities across all nodes. A cluster is identified by a
unique name which by default is "elasticsearch". This name is important because a node can only
be part of a cluster if the node is set up to join the cluster by its name.
Make sure that you don’t reuse the same cluster names in different environments, otherwise you
might end up with nodes joining the wrong cluster. For instance you could use logging-
dev, logging-stage, and logging-prod for the development, staging, and production clusters.
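Per-environment cluster names like those can be set in each node's elasticsearch.yml; cluster.name and node.name are standard settings, and the values below are illustrative:

```yaml
# elasticsearch.yml for the production logging cluster (example values)
cluster.name: logging-prod
node.name: node-1
```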
Note that it is valid and perfectly fine to have a cluster with only a single node in it. Furthermore,
you may also have multiple independent clusters each with its own unique cluster name.
Node
A node is a single server that is part of your cluster, stores your data, and participates in the
cluster’s indexing and search capabilities. Just like a cluster, a node is identified by a name
which by default is a random Universally Unique IDentifier (UUID) that is assigned to the node
at startup. You can define any node name you want if you do not want the default. This name is
important for administration purposes where you want to identify which servers in your network
correspond to which nodes in your Elasticsearch cluster.
A node can be configured to join a specific cluster by the cluster name. By default, each node is
set up to join a cluster named elasticsearch which means that if you start up a number of
nodes on your network and—assuming they can discover each other—they will all automatically
form and join a single cluster named elasticsearch.
In a single cluster, you can have as many nodes as you want. Furthermore, if there are no other
Elasticsearch nodes currently running on your network, starting a single node will by default
form a new single-node cluster named elasticsearch.
Index
An index is a collection of documents that have somewhat similar characteristics. For example,
you can have an index for customer data, another index for a product catalog, and yet another
index for order data. An index is identified by a name (that must be all lowercase) and this name
is used to refer to the index when performing indexing, search, update, and delete operations
against the documents in it.
In a single cluster, you can define as many indexes as you want.
A type used to be a logical category/partition of your index, allowing you to store
different types of documents in the same index, e.g. one type for users and another
type for blog posts. It is no longer possible to create multiple types in an index, and
the whole concept of types will be removed in a later version.
See Removal of mapping types for more.
Document
A document is a basic unit of information that can be indexed. For example, you can have a
document for a single customer, another document for a single product, and yet another for a
single order. This document is expressed in JSON (JavaScript Object Notation) which is a
ubiquitous internet data interchange format.
Within an index/type, you can store as many documents as you want. Note that although a
document physically resides in an index, a document actually must be indexed/assigned to a type
inside an index.
cd %PROGRAMFILES%\Elastic\Elasticsearch\bin
with PowerShell:
cd $env:PROGRAMFILES\Elastic\Elasticsearch\bin
And now we are ready to start our node and single cluster:
.\elasticsearch.exe
[2016-09-16T14:17:51,251][INFO ][o.e.n.Node ] []
initializing ...
[2016-09-16T14:17:51,329][INFO ][o.e.e.NodeEnvironment ] [6-bjhwl]
using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space
[317.7gb], net total_space [453.6gb], spins? [no], types [ext4]
[2016-09-16T14:17:51,330][INFO ][o.e.e.NodeEnvironment ] [6-bjhwl]
heap size [1.9gb], compressed ordinary object pointers [true]
[2016-09-16T14:17:51,333][INFO ][o.e.n.Node ] [6-bjhwl]
node name [6-bjhwl] derived from node ID; set [node.name] to override
[2016-09-16T14:17:51,334][INFO ][o.e.n.Node ] [6-bjhwl]
version[6.3.0], pid[21261], build[f5daa16/2016-09-16T09:12:24.346Z],
OS[Linux/4.4.0-36-generic/amd64], JVM[Oracle Corporation/Java
HotSpot(TM) 64-Bit Server VM/1.8.0_60/25.60-b23]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService ] [6-bjhwl]
loaded module [aggs-matrix-stats]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService ] [6-bjhwl]
loaded module [ingest-common]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService ] [6-bjhwl]
loaded module [lang-expression]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService ] [6-bjhwl]
loaded module [lang-mustache]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService ] [6-bjhwl]
loaded module [lang-painless]
[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService ] [6-bjhwl]
loaded module [percolator]
[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService ] [6-bjhwl]
loaded module [reindex]
[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService ] [6-bjhwl]
loaded module [transport-netty3]
[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService ] [6-bjhwl]
loaded module [transport-netty4]
[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService ] [6-bjhwl]
loaded plugin [mapper-murmur3]
[2016-09-16T14:17:53,521][INFO ][o.e.n.Node ] [6-bjhwl]
initialized
[2016-09-16T14:17:53,521][INFO ][o.e.n.Node ] [6-bjhwl]
starting ...
[2016-09-16T14:17:53,671][INFO ][o.e.t.TransportService ] [6-bjhwl]
publish_address {192.168.8.112:9300}, bound_addresses
{{192.168.8.112:9300}
[2016-09-16T14:17:53,676][WARN ][o.e.b.BootstrapCheck ] [6-bjhwl]
max virtual memory areas vm.max_map_count [65530] likely too low,
increase to at least [262144]
[2016-09-16T14:17:56,731][INFO ][o.e.h.HttpServer ] [6-bjhwl]
publish_address {192.168.8.112:9200}, bound_addresses {[::1]:9200},
{192.168.8.112:9200}
[2016-09-16T14:17:56,732][INFO ][o.e.g.GatewayService ] [6-bjhwl]
recovered [0] indices into cluster_state
[2016-09-16T14:17:56,748][INFO ][o.e.n.Node ] [6-bjhwl]
started
Without going into too much detail, we can see that our node named "6-bjhwl" (which
will be a different set of characters in your case) has started and elected itself as
master in a single cluster. Don't worry for the moment about what master means;
the main thing is that we have started one node within one cluster.
As mentioned previously, we can override either the cluster or node name. This can be done
from the command line when starting Elasticsearch as follows:
./elasticsearch -Ecluster.name=my_cluster_name -Enode.name=my_node_name
Also note the line marked http with information about the HTTP address (192.168.8.112) and
port (9200) that our node is reachable from. By default, Elasticsearch uses port 9200 to provide
access to its REST API. This port is configurable if necessary.
Elasticsearch uses Apache Lucene to create and manage this inverted index.
Schema
Whilst you are not required to specify a schema before indexing documents,
it is necessary to add mapping declarations if you require anything but the
most basic fields and operations.
Query DSL
The Query DSL is Elasticsearch's way of making Lucene's query syntax
accessible to users, allowing complex queries to be composed using a JSON
syntax.
Like Lucene, there are basic queries such as term or prefix queries and also
compound queries like the bool query.
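A small illustration of that composition (field names and values are invented): a bool query combining a term query with a prefix query in the standard Query DSL JSON shape:

```json
{
  "query": {
    "bool": {
      "must":   [ { "term":   { "status": "published" } } ],
      "should": [ { "prefix": { "title": "elast" } } ]
    }
  }
}
```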
have no idea what SaaS, IaaS, and PaaS mean or which of these cloud solutions
We’ve written this article to explain what cloud services are available and help
going into details, let’s compare IaaS, PaaS, and SaaS to transportation:
car, you’re responsible for its maintenance, and upgrading means buying a
new car.
IaaS is like leasing a car. When you lease a car, you choose the car you
want and drive it wherever you wish, but the car isn’t yours. Want an
PaaS is like taking a taxi. You don’t drive a taxi yourself, but simply tell
the driver where you need to go and relax in the back seat.
SaaS is like going by bus. Buses have assigned routes, and you share the ride with other passengers.
These analogies will help you better understand our more detailed explanations.
Now that you know what SaaS, PaaS, and IaaS mean, let's be more specific
about when each should be used and what their advantages and disadvantages
are.
general understanding of when they’re used. Let’s provide some more details.
Personal purposes. Millions of individuals all over the world use email
Docs), and so on. People may not realize it, but all of these cloud services
which can be accessed only from a computer (or a network) it’s installed
on, SaaS solutions are cloud-based. Thus, you can access them from
room.
accessed from any computer. You only need to sign in. Many SaaS
solutions have mobile apps, so they can be accessed from mobile devices
as well.
there are any bugs or technical troubles, the vendor will fix them while you
affordable. There’s no need to pay for the whole IT infrastructure; you pay
only for the service at the scale you need. If you need extra functionality,
them is a piece of cake. We’ve already mentioned what you need to do:
just sign up. It’s as simple as that. There’s no need to install anything.
couple of them:
You have no control over the hardware that handles your data.
Only a vendor can manage the parameters of the software you’re using.
No wonder that software developers use PaaS services such as Heroku, Elastic
of the world. PaaS services allow them to access the same software
You have no control over the virtual machine that’s processing your data.
PaaS solutions are less flexible than IaaS. For example, you can’t create
provides hardware infrastructure that you can use in a variety of ways. It’s like
having a set of tools that you can use for constructing the item you need.
with the help of IaaS (for example, using Elastic Compute Cloud from
Virtual data centers. IaaS is the best solution for building virtual data
computing power, and IaaS is the most economical way to get it.
businesses:
is rather pricey.
data.
infrastructure.
responsibility.
depends on your business goals, so first of all consider what your company
needs. Here are some common business needs that can easily be met with the
Platform as a Service.
Service.
If you feel that you can’t make the right choice on your own, we can help you
As you can see, each cloud service (IaaS, PaaS, and SaaS) is tailored to the
business needs of its target audience. From the technical point of view, IaaS
gives you the most control but requires extensive expertise to manage the
In fact, email services such as Gmail and Hotmail are examples of cloud-based
SaaS services. Other examples of SaaS services are office tools (Office 365 and
subscription) pricing model. All software and hardware are provided and
Hosted applications
Operating system
–
Servers and storage
Networking resources
Data center
go pricing model.
Operating system
Networking resources
Data center
storage, and networking resources. In other words, IaaS is a virtual data center.
IaaS services can be used for a variety of purposes, from hosting websites to
analyzing big data. Clients can install and use whatever operating systems and
tools they like on the infrastructure they get. Major IaaS providers include
model.