
SMU

ASSIGNMENT
SEMESTER – 1
MB0024

Statistics For Management

SUBMITTED BY:
MUSHTAQ AHMAD PARA
MBA
ROLL NO.- 520950361
ASSIGNMENTS
MB 0024
STATISTICS FOR MANAGEMENT
Q1. What do you mean by sample survey? What are the different sampling methods? Briefly describe them.

Ans. A sample is a finite subset of a population drawn from it to estimate the characteristics of the
population. Sampling is a tool that enables us to draw conclusions about the characteristics of the
population.

Survey sampling describes the process of selecting a sample of elements from a target population in
order to conduct a survey.
A survey may refer to many different types or techniques of observation, but in the context of survey
sampling it most often refers to a questionnaire used to measure the characteristics and/or attitudes
of people. The purpose of sampling is to reduce the cost and/or the amount of work that it would take
to survey the entire target population. A survey that measures the entire target population is called a
census.

A sample survey can also be described as the technique used to study a population with the
help of a sample. The population is the totality of all objects about which the study is proposed. The sample is
only a portion of this population, selected using certain statistical principles called sampling
designs (which help guarantee that a representative sample is obtained for the study). Once the
sample has been decided, information is collected from it; this process is called a sample survey.

It is incumbent on the researcher to clearly define the target population. There are no strict rules to
follow, and the researcher must rely on logic and judgment. The population is defined in keeping with
the objectives of the study.

Sometimes, the entire population will be sufficiently small, and the researcher can include the entire
population in the study. This type of research is called a census study because data is gathered on
every member of the population.

Usually, the population is too large for the researcher to attempt to survey all of its members. A small,
but carefully chosen sample can be used to represent the population. The sample reflects the
characteristics of the population from which it is drawn.

Sampling methods are classified as either probability or non-probability. In probability samples,
each member of the population has a known, non-zero probability of being selected. Probability
methods include random sampling, systematic sampling, and stratified sampling. In non-probability
sampling, members are selected from the population in some non-random manner. These methods
include convenience sampling, judgment sampling, quota sampling, and snowball sampling.
The advantage of probability sampling is that sampling error can be calculated. Sampling error is the
degree to which a sample might differ from the population. When inferring to the population, results
are reported plus or minus the sampling error. In non-probability sampling, the degree to which the
sample differs from the population remains unknown.

Probability Sampling Methods

1. Random sampling is the purest form of probability sampling. Each member of the population
has an equal and known chance of being selected. When there are very large populations, it is
often difficult or impossible to identify every member of the population, so the pool of available
subjects becomes biased.
2. Systematic sampling is often used instead of random sampling. It is also called an Nth name
selection technique. After the required sample size has been calculated, every Nth record is
selected from a list of population members. As long as the list does not contain any hidden
order, this sampling method is as good as the random sampling method. Its only advantage
over the random sampling technique is simplicity. Systematic sampling is frequently used to
select a specified number of records from a computer file.
3. Stratified sampling is a commonly used probability method that is superior to random sampling
because it reduces sampling error. A stratum is a subset of the population that shares at least
one common characteristic. Examples of strata might be males and females, or managers
and non-managers. The researcher first identifies the relevant strata and their actual
representation in the population. Random sampling is then used to select a sufficient number of
subjects from each stratum. "Sufficient" refers to a sample size large enough for us to be
reasonably confident that the stratum represents the population.

Stratified sampling is often used when one or more of the strata in the population have a low
incidence relative to the other strata.
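
To make the mechanics concrete, here is a minimal Python sketch of the three probability methods; the employee list, the gender stratum and the sample size of 50 are purely hypothetical and are only used for illustration.

import random

# Hypothetical sampling frame: 1,000 employees with an id and a gender attribute.
population = [{"id": i, "gender": "F" if i % 3 == 0 else "M"} for i in range(1, 1001)]
n = 50  # assumed required sample size

# 1. Simple random sampling: every member has an equal chance of selection.
random_sample = random.sample(population, n)

# 2. Systematic sampling: every Nth record from the list, with a random start.
step = len(population) // n
start = random.randrange(step)
systematic_sample = population[start::step][:n]

# 3. Stratified sampling: random sampling within each stratum (here, gender),
#    in proportion to the stratum's share of the population.
strata = {}
for unit in population:
    strata.setdefault(unit["gender"], []).append(unit)
stratified_sample = []
for units in strata.values():
    k = round(n * len(units) / len(population))
    stratified_sample.extend(random.sample(units, k))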

Non-Probability Methods

1. Convenience sampling is used in exploratory research where the researcher is interested in
getting an inexpensive approximation of the truth. As the name implies, the sample is selected
because it is convenient. This non-probability method is often used during preliminary
research efforts to get a gross estimate of the results, without incurring the cost or time
required to select a random sample.

2. Judgment sampling is a common non-probability method. The researcher selects the sample
based on judgment. This is usually an extension of convenience sampling. For example, a
researcher may decide to draw the entire sample from one "representative" city, even though
the population includes all cities. When using this method, the researcher must be confident
that the chosen sample is truly representative of the entire population.

3. Quota sampling is the non-probability equivalent of stratified sampling. Like stratified
sampling, the researcher first identifies the strata and their proportions as they are
represented in the population. Then convenience or judgment sampling is used to select the
required number of subjects from each stratum. This differs from stratified sampling, where the
strata are filled by random sampling.

4. Snowball sampling is a special non-probability method used when the desired sample
characteristic is rare. It may be extremely difficult or cost prohibitive to locate respondents in
these situations. Snowball sampling relies on referrals from initial subjects to generate
additional subjects. While this technique can dramatically lower search costs, it comes at the
expense of introducing bias because the technique itself reduces the likelihood that the sample
will represent a good cross section from the population.
Q2. What is the difference between correlation and regression? What do you understand by rank correlation? When do we
use rank correlation and when do we use the Pearsonian correlation coefficient? Fit a linear regression line to the following
data –

X :  12   15   18   20   27   34   28   48
Y : 123  150  158  170  180  184  176  130
Ans. Correlation

When two or more variables move in sympathy with each other, they are said to be correlated. If both
variables move in the same direction then they are said to be positively correlated. If the variables
move in opposite direction then they are said to be negatively correlated. If they move haphazardly
then there is no correlation between them.
Correlation analysis deals with
1) Measuring the relationship between variables.
2) Testing the relationship for its significance.
3) Giving confidence interval for population correlation measure.

Regression
Regression is defined as "the measure of the average relationship between two or more variables in
terms of the original units of the data." Correlation analysis studies the relationship
between the two variables x and y, whereas regression analysis attempts to predict the average y for a given
x. In regression we attempt to quantify the dependence of one variable on the other, and this
dependence is expressed in the form of an equation.

Difference between correlation and regression

Correlation and linear regression are not the same. Consider these differences:
• Correlation quantifies the degree to which two variables are related. Correlation does not find a
best-fit line (that is regression). You simply are computing a correlation coefficient (r) that tells
you how much one variable tends to change when the other one does.

• With correlation you don't have to think about cause and effect. You simply quantify how well two
variables relate to each other. With regression, you do have to think about cause and effect as
the regression line is determined as the best way to predict Y from X.

• With correlation, it doesn't matter which of the two variables you call "X" and which you call "Y".
You'll get the same correlation coefficient if you swap the two. With linear regression, the
decision of which variable you call "X" and which you call "Y" matters a lot, as you'll get a
different best-fit line if you swap the two. The line that best predicts Y from X is not the same as
the line that predicts X from Y.

• Correlation is almost always used when you measure both variables. It rarely is appropriate when
one variable is something you experimentally manipulate. With linear regression, the X variable
is often something you experimentally manipulate (time, concentration...) and the Y variable is
something you measure.

• The correlation answers the STRENGTH of linear association between paired variables, say X and
Y. On the other hand, the regression tells us the FORM of linear association that best predicts Y
from the values of X.

(2a) Correlation is calculated whenever:

– Both X and Y are measured in each subject, and we want to quantify how strongly they are linearly
associated.
– In particular, Pearson's product-moment correlation coefficient is used when the assumption
that both X and Y are sampled from normally distributed populations is satisfied (a short code
sketch of this coefficient appears after these points).
– Spearman's rank-order correlation coefficient is used if the assumption of normality is not
satisfied.
– Correlation is not used when the variables are manipulated, for example, in experiments.

(2b) Linear regression is used whenever:

– At least one independent variable (Xi) is used to predict the dependent variable Y. Note:
some of the Xi's may be dummy variables, i.e. Xi = 0 or 1, used to code nominal
variables.
– One of the X variables is manipulated, e.g. in an experiment.

• Linear regression is not symmetric in terms of X and Y. That is, interchanging X and Y gives a
different regression model (X in terms of Y) from the original (Y in terms of X).
On the other hand, if you interchange the variables X and Y in the calculation of the correlation
coefficient, you will get the same value of the correlation coefficient.
• The "best" linear regression model is obtained by selecting the variables (X's) with a
strong correlation to Y, i.e. r >= 0.80 or r <= -0.80.

• The same underlying distribution is assumed for all variables in linear regression. Thus, linear
regression will underestimate the correlation between the independent and dependent variables when they (the X's
and Y) come from different underlying distributions.
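
As a reference for the points above, here is a minimal sketch of Pearson's product-moment coefficient r, computed directly from its definition (covariance divided by the product of the standard deviations) on hypothetical paired measurements.

from math import sqrt

def pearson_r(x, y):
    """Pearson's r = cov(X, Y) / (sd(X) * sd(Y)) for paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired measurements of two variables on five subjects.
print(pearson_r([2, 4, 6, 8, 10], [3, 7, 5, 9, 11]))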

Spearman's rank correlation coefficient or Spearman's rho, named after Charles Spearman and
often denoted by the Greek letter ρ (rho) or as rs, is a nonparametric measure of correlation – that is,
it assesses how well an arbitrary monotonic function could describe the relationship between two
variables, without making any other assumptions about the particular nature of the relationship
between the variables. Certain other measures of correlation are parametric in the sense of being
based on possible relationships of a parameterized form, such as a linear relationship.

In principle, ρ is simply a special case of the Pearson product-moment coefficient in which two sets of
data Xi and Yi are converted to rankings xi and yi before calculating the coefficient. In practice,
however, a simpler procedure is normally used to calculate ρ. The raw scores are converted to ranks,
and the differences di, between the ranks of each observation on the two variables are calculated.

If there are no tied ranks, then ρ is given by:

ρ = 1 − (6 Σ di²) / (n (n² − 1))

Where:
di = xi − yi = the difference between the ranks of corresponding values Xi and Yi, and
n = the number of values in each data set (the same for both sets).
If tied ranks exist, the classic Pearson correlation coefficient between the ranks has to be used instead of
this formula.

Each of the equal values has to be assigned the same rank, namely the average of their positions in the
ascending order of the values.
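
A short Python sketch of this rank-correlation calculation, using the no-ties formula above on two hypothetical score lists, is shown below.

def ranks(values):
    """Rank the values in ascending order (1 = smallest); assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), valid when there are no ties."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical marks of five students in two subjects.
print(spearman_rho([56, 75, 45, 71, 61], [66, 70, 40, 60, 65]))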
Conditions under which the probable error (P.E.) of r can be used:

1. Samples should be drawn from a normal population.
2. The value of “r” must be determined from sample values.
3. Samples must have been selected at random.
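
The question also asks for a linear regression line fitted to the given data. A minimal least-squares sketch in Python, using the usual normal equations for the regression of Y on X, is given below; running it prints the fitted line (roughly Y = 154.66 + 0.167 X).

# Fit the least-squares line Y = a + bX to the data given in the question.
X = [12, 15, 18, 20, 27, 34, 28, 48]
Y = [123, 150, 158, 170, 180, 184, 176, 130]

n = len(X)
sum_x, sum_y = sum(X), sum(Y)
sum_xy = sum(x * y for x, y in zip(X, Y))
sum_x2 = sum(x * x for x in X)

# Normal equations: b = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2), a = (Sy - b*Sx) / n
b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
a = (sum_y - b * sum_x) / n

print("Y = %.2f + %.3f X" % (a, b))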
Q3. What do you mean by business forecasting? What are the different methods of business forecasting? Describe the
effectiveness of time-series analysis as a mode of business forecasting. Describe the method of moving averages.

Ans. Business forecasting refers to the analysis of past and present economic conditions with the
object of drawing inferences about probable future business conditions. To forecast the future, various
data, information and facts concerning the past and present economic condition of the business are
analyzed. The process of forecasting includes the use of statistical and mathematical methods for the long
term, short term, medium term or any specific term.

The following are the main methods of business forecasting:

1. Business Barometers

Business indices are constructed to study and analyze business activities, on the basis of which
future conditions are forecast. As business indices are indicators of future conditions, they
are also known as "Business Barometers" or "Economic Barometers". With the help of these
business barometers the trend of fluctuations in business conditions is made known, and on the basis of
the forecast a decision can be taken relating to the problem. The construction of a business barometer
uses series such as gross national product, wholesale prices, consumer prices, industrial production, stock
prices, bank deposits, etc. These quantities may be converted into relatives on a certain base. The
relatives so obtained may be weighted and their average computed. The index thus arrived at is
the business barometer.
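
A tiny sketch of this "weighted average of relatives" construction is given below; the series values, base-year values and weights are all made up purely for illustration.

# Illustrative composite business barometer: a weighted average of relatives.
series = {
    "industrial_production": {"current": 132.0, "base": 120.0, "weight": 3},
    "wholesale_prices":      {"current": 110.0, "base": 100.0, "weight": 2},
    "bank_deposits":         {"current": 575.0, "base": 500.0, "weight": 1},
}

weighted_sum = 0.0
total_weight = 0
for s in series.values():
    relative = 100 * s["current"] / s["base"]   # convert each series to a relative on its base
    weighted_sum += s["weight"] * relative
    total_weight += s["weight"]

barometer_index = weighted_sum / total_weight
print(round(barometer_index, 1))   # an index above 100 suggests activity above the base period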

The business barometers are of three types:

i. Barometers relating to general business activities: Also known as the general index of
business activity, this refers to weighted or composite indices of the indices of individual business
activities. With the help of the general index of business activity, long-term trends and cyclical
fluctuations in the economic activities of a country are measured, but in some specific cases
the long-term trends can differ from the general trends. These types of indices help in the
formation of a country's economic policies.
ii. Business barometers for a specific business or industry: These barometers are used as a
supplement to the general index of business activity and are constructed to measure
future variations in a specific business or industry.
iii. Business barometers concerning an individual business firm: This type of barometer is
constructed to measure the expected variations in a specific individual firm of an industry.

2. Time Series Analysis is also used for the purpose of making business forecasts. Forecasting
through time series analysis is possible only when business data for several years
are available, reflecting a definite trend and seasonal variation.

3. Extrapolation is the simplest method of business forecasting. By extrapolation, a businessman
finds out the possible trend of demand for his goods and also their likely future price trends. The
accuracy of extrapolation depends on two factors:
i) Knowledge about the fluctuations of the figures,
ii) Knowledge about the course of events relating to the problem under consideration.

4. Regression Analysis
The regression approach offers many valuable contributions to the solution of the forecasting problem.
It is the means by which we select from among the many possible relationships between variables in a
complex economy those which will be useful for forecasting. A regression relationship may involve one
predicted (dependent) variable and one independent variable (simple regression), or it may involve
relationships between the variable to be forecast and several independent variables (multiple
regression). Statistical techniques to estimate the regression equations are often fairly complex and
time-consuming, but there are many computer programs now available that estimate simple and
multiple regressions quickly.

5. Modern Econometric Methods

Econometric techniques have recently gained in popularity for forecasting. The term econometrics
refers to the application of mathematical economic theory and statistical procedures to economic data
in order to verify economic theorems. Models take the form of a set of simultaneous equations. The
values of the constants in such equations are supplied by a study of statistical time series.

6. Exponential Smoothing Method

This method is regarded as the best method of business forecasting as compared to other methods.
Exponential smoothing is a special kind of weighted average and is found extremely useful in short-
term forecasting of inventories and sales.
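
A minimal sketch of simple exponential smoothing, in its usual form S_t = α·y_t + (1 − α)·S_(t−1), is shown below; the monthly sales figures and the smoothing constant α = 0.3 are assumed values used only for illustration.

def exponential_smoothing(series, alpha=0.3):
    """Return the exponentially smoothed series; the last value serves as the next-period forecast."""
    smoothed = [series[0]]                     # initialise with the first observation
    for y in series[1:]:
        smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
    return smoothed

sales = [120, 132, 128, 140, 151, 149, 160]    # hypothetical monthly sales
print(exponential_smoothing(sales, alpha=0.3)[-1])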

7. Choice of a Method of Forecasting


The selection of an appropriate method depends on many factors – the context of the forecast, the
relevance and availability of historical data, the degree of accuracy desired, the time period for which
forecasts are required, the cost benefit of the forecast to the company, and the time available for
making the analysis.

Effectiveness of Time Series Analysis:

Time series analysis is also used for the purpose of making business forecasts. Forecasting
through time series analysis is possible only when business data for several years are available,
reflecting a definite trend and seasonal variation. By time series analysis, the secular (long-term)
trend and the seasonal and cyclical variations are ascertained, analyzed and separated from the data
of the various years.

Merits:

i) It is an easy method of forecasting.
ii) By this method a comparative study of variations can be made.
iii) Reliable results of forecasting are obtained, as this method is based on a mathematical model.

Method of Moving Averages

One of the simplest and most popular technical analysis indicators is the moving averages method. This
method is known for its flexibility and user-friendliness. It calculates the average price of
the currency or stock over a period of time.

The term "moving average" means that the average moves, or follows, a certain trend. The aim of this
tool is to indicate to the trader whether there is the beginning of a new trend or a signal of the end of
the old trend. Traders use this method as it is relatively easy to understand the direction of
trends with the help of moving averages.

The moving average method is supposed to be the simplest one, as it helps to understand chart
patterns in an easier way. Since the average price is considered, volatile price
movements are evened out. This method rules out the daily fluctuations in prices and helps the trader
to go with the right trend, thus ensuring that the trader trades to his own advantage.

We come across different types of moving averages, which are based on the way these averages are
computed. Still, the basis of interpretation is similar across all the types. The types differ from one
another in the weightage they give to the prices of the currencies; the more recent prices are generally
given a higher weightage. The three basic types of moving averages are the simple, linear and
exponential moving averages.

A simple moving average is the simplest way to calculate moving price averages. The historical
closing prices over a certain time period are added, and this sum is divided by the number of instances used
in the summation. For example, if the moving average is calculated for 15 days, the past 15 historical
closing prices are summed up and then divided by 15. This method is effective when a larger number of
prices is considered, enabling the trader to understand the trend and its future direction
more effectively.
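
A minimal sketch of the 15-day simple moving average described above, over a hypothetical list of daily closing prices, is:

def simple_moving_average(prices, window=15):
    """Average of the last `window` closing prices for each day with enough history."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

# Hypothetical closing prices; each output value averages the preceding 15 days.
closing_prices = [100 + 0.5 * d for d in range(30)]
print(simple_moving_average(closing_prices)[:3])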

A linear moving average is the least used of all, but it solves the problem of equal weightage.
The difference between the simple average and the linear average method is the weightage given
to the position of the prices in the latter. Consider the above example: in the linear average method,
the closing price on the 15th day is multiplied by 15, the 14th day closing price by 14, and so on until
the 1st day closing price is multiplied by 1. These products are totalled and then divided by the sum
of the weights (1 + 2 + … + 15 = 120).
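
A corresponding sketch of the 15-day linearly weighted moving average (most recent price weighted 15, oldest weighted 1, divided by the weight total of 120) is:

def linear_moving_average(prices, window=15):
    """Weighted average where the most recent price in the window carries the largest weight."""
    weights = list(range(1, window + 1))       # 1 for the oldest price ... window for the newest
    total_weight = sum(weights)                # 1 + 2 + ... + window
    result = []
    for i in range(window - 1, len(prices)):
        chunk = prices[i - window + 1:i + 1]
        result.append(sum(w * p for w, p in zip(weights, chunk)) / total_weight)
    return result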

The exponential moving average method shares some similarity with the linear moving average
method. This method lays emphasis on a smoothing factor, thereby weighting recent data more
heavily than older data. This method is more receptive to market news than the
simple average method, which makes the exponential method more popular among traders.

Moving average methods help to identify the correct trends and their respective levels of resistance.
Q4. What is definition of Statistics? What are the different characteristics of statistics? What are the
different functions of Statistics? What are the limitations of Statistics?
Ans. According to Croxton and Cowden, ‘Statistics is the science of collection, presentation,
analysis and interpretation of numerical data.’ Thus, Statistics contains the tools and
techniques required for the collection, presentation, analysis and interpretation of data. This
definition is precise and comprehensive.

Characteristic of Statistics

a. Statistics deals with aggregates of facts: A single figure cannot be analyzed.
b. Statistics are affected to a marked extent by a multiplicity of causes: The statistics of the yield
of paddy, for example, are the result of factors such as fertility of the soil, amount of rainfall,
quality of seed used, and quality and quantity of fertilizer used.
c. Statistics are numerically expressed: Only numerical facts can be statistically analyzed.
Therefore, facts such as ‘price decreases with increasing production’ cannot be called statistics.
d. Statistics are enumerated or estimated according to reasonable standards of accuracy:
The facts should be enumerated (collected from the field) or estimated (computed) with
required degree of accuracy. The degree of accuracy differs from purpose to purpose. In
measuring the length of screws, an accuracy upto a millimetre may be required, whereas,
while measuring the heights of students in a class, accuracy upto a centimetre is enough.
e. Statistics are collected in a systematic manner: The facts should be collected according to
planned and scientific methods. Otherwise, they are likely to be wrong and misleading.
f. Statistics are collected for a pre-determined purpose: There must be a definite purpose
for collecting facts.
E.g. the movement of the wholesale price of a commodity.
g. Statistics are placed in relation to each other: The facts must be placed in such a way
that a comparative and analytical study becomes possible.
Thus, only related facts which are arranged in logical order can be called statistics.

Functions of Statistics

1. It simplifies mass data


2. It makes comparison easier
3. It brings out trends and tendencies in the data
4. It brings out hidden relations between variables.
5. Decision making process becomes easier.
Major limitations of Statistics are:
1. Statistics does not deal with qualitative data. It deals only with quantitative data.
2. Statistics does not deal with individual facts: Statistical methods can be applied only to
aggregates of facts.
3. Statistical inferences (conclusions) are not exact: Statistical inferences are true only on
an average. They are probabilistic statements.
4. Statistics can be misused and misinterpreted: Increasing misuse of Statistics has led to
increasing distrust in statistics.
5. Common men cannot handle Statistics properly: Only statisticians can handle statistics
properly.
Q5. What are the different stages of planning a statistical survey? Describe the various
methods for collecting data in a statistical survey.

Ans. The planning stage consists of the following sequence of activities.

1. The nature of the problem to be investigated should be clearly defined in an
unambiguous manner.
2. The objectives of the investigation should be stated at the outset. Objectives
could be to obtain certain estimates, to establish a theory, to verify
an existing statement, or to find the relationship between characteristics, etc.
3. The scope of the investigation has to be made clear. It refers to the area to be
covered, identification of the units to be studied, the nature of the characteristics to
be observed, accuracy of measurements, analytical methods, time, cost
and other resources required.
4. Whether to use data collected from a primary or a secondary source should
be determined in advance.
5. The organization of the investigation is the final step in the process. It
encompasses the determination of the number of investigators required,
their training, the supervision needed, the funds required, etc.

Collection of primary data can be done by any one of the following methods:

i. Direct personal observation
ii. Indirect oral interview
iii. Information through agencies
iv. Information through mailed questionnaires
v. Information through schedules filled in by investigators

Q6. What are the functions of classification? What are the requisites of a good classification?
What is a table? Describe the usefulness of a table as a mode of presentation of data.
Ans. The functions of classification are:

a. It reduces the bulk of the data.
b. It simplifies the data and makes it more comprehensible.
c. It facilitates comparison of characteristics.
d. It renders the data ready for any statistical analysis.

Requisites of a good classification are:

i. Unambiguous: It should not lead to any confusion.
ii. Exhaustive: Every unit should be allotted to one of the classes; no unit should be left out.
iii. Mutually exclusive: There should not be any overlapping between classes.
iv. Flexibility: It should be capable of being adjusted to changing situations.
v. Suitability: It should be suitable to the objectives of the survey.
vi. Stability: It should remain stable throughout the investigation.
vii. Homogeneity: Similar units are placed in the same class.
viii. Revealing: It should bring out the essential features of the collected data.

A table is nothing but a logical listing of related data in rows and columns.
The objectives of tabulation are:

i. To simplify complex data


ii. To highlight important characteristics
iii. To present data in minimum space
iv. To facilitate comparison
v. To bring out trends and tendencies
vi. To facilitate further analysis
