4)
5) What are Recommender Systems?
A subclass of information filtering systems that are meant to predict the preferences
or ratings that a user would give to a product. Recommender systems are widely
used in movies, news, research articles, products, social tags, music, etc.
6) Why does data cleaning play a vital role in analysis?
Cleaning data from multiple sources to transform it into a format that data analysts
or data scientists can work with is a cumbersome process because, as the number
of data sources increases, the time taken to clean the data increases exponentially
with the number of sources and the volume of data they generate. Cleaning data
alone can take up to 80% of the time, making it a critical part of the analysis task.
7) Differentiate between univariate, bivariate and multivariate analysis.
These are descriptive statistical analysis techniques which can be differentiated
based on the number of variables involved at a given point of time. For example,
the pie charts of sales based on territory involve only one variable and can be
referred to as univariate analysis.
Analysis that deals with the study of more than two variables to understand the
effect of variables on the responses is referred to as multivariate analysis.
8) What do you understand by the term Normal Distribution?
Data is usually distributed in different ways, with a bias to the left or to the right, or it
can all be jumbled up. However, there are chances that data is distributed around a
central value without any bias to the left or right, reaching a normal distribution in
the form of a bell-shaped curve: the random variable is distributed as a symmetrical
bell-shaped curve.
9)
10)
11) What is power analysis?
An experimental design technique for determining the effect of a given sample size.
12)
13) What is Collaborative Filtering?
The process of filtering used by most recommender systems to find patterns
or information by collaborating viewpoints, various data sources and multiple
agents.
14)
Cluster sampling is a technique used when it becomes difficult to study a target
population spread across a wide area and simple random sampling cannot be
applied. A cluster sample is a probability sample in which each sampling unit is a
collection, or cluster, of elements. Systematic sampling is a statistical technique
where elements are selected from an ordered sampling frame. In systematic
sampling, the list is progressed in a circular manner, so once you reach the end of
the list, you progress from the top again. The best example of systematic
sampling is the equal-probability method.
15) Are expected value and mean value different?
They are not different, but the terms are used in different contexts. Mean is
generally referred to when talking about a probability distribution or sample
population, whereas expected value is generally referred to in the context of a
random variable.
Expected value is the mean of all the means, i.e., the value built from multiple
samples; expected value is the population mean.
For distributions, the mean value and the expected value are the same irrespective
of the distribution, under the condition that the distribution arises from the same
population.
16) What does a p-value signify about the statistical data?
A p-value > 0.05 denotes weak evidence against the null hypothesis, which means
the null hypothesis cannot be rejected.
A p-value <= 0.05 denotes strong evidence against the null hypothesis, which means
the null hypothesis can be rejected.
A test has a true positive rate of 100% and false positive rate of
5%. There is a population with a 1/1000 rate of having the condition the
test identifies. Considering a positive test, what is the probability of
having that condition?
Let's suppose you are being tested for a disease. If you have the illness, the test will
end up saying you have the illness. However, if you don't have the illness, 5% of the
time the test will end up saying you have the illness, and 95% of the time it will give
the accurate result that you don't have the illness. Thus there is a 5% error in case
you do not have the illness.
Out of 1000 people, the 1 person who has the disease will get a true positive result.
Out of the remaining 999 people, 5% will get a false positive result, so close to 50
people will wrongly test positive for the disease.
This means that out of 1000 people, 51 people will test positive for the disease
even though only one person has the illness. There is only about a 2% probability of
you having the disease even if your report says that you have it.
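As a quick sanity check, the same calculation can be expressed directly with Bayes' theorem. The sketch below uses the rates stated in the question and is illustrative only:

```python
# Bayes' theorem for the disease-test question above.
sensitivity = 1.0        # P(positive test | disease), the true positive rate
false_pos_rate = 0.05    # P(positive test | no disease), the false positive rate
prevalence = 1 / 1000    # P(disease) in the population

# Total probability of testing positive
p_positive = sensitivity * prevalence + false_pos_rate * (1 - prevalence)

# Posterior probability of having the disease given a positive test
p_disease = sensitivity * prevalence / p_positive
print(f"{p_disease:.4f}")   # ~0.0196, i.e. roughly 2%
```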
20)
21) What is the difference between Supervised Learning and Unsupervised Learning?
If an algorithm learns something from the training data so that the knowledge can
be applied to the test data, it is referred to as Supervised Learning.
Classification is an example of Supervised Learning. If the algorithm does not learn
anything beforehand because there is no response variable and no training data,
it is referred to as Unsupervised Learning. Clustering is an example of
Unsupervised Learning.
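A minimal sketch of the contrast, assuming scikit-learn is available (any library with a classifier and a clustering algorithm would do):

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=200, centers=3, random_state=0)  # toy data

# Supervised: the response variable y is used during training.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("classification:", clf.predict(X[:5]))

# Unsupervised: no response variable; structure is inferred from X alone.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("clustering:", km.labels_[:5])
```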
25)
26)
27) How can outlier values be treated?
Outlier values can be identified by using univariate or other graphical analysis
methods. If there are only a few outliers, they can be assessed individually, but a
large number of outliers can be substituted with either the 99th or the 1st
percentile values. Not all extreme values are outlier values. The most common ways
to treat outlier values are to cap them within such a range or to remove them
altogether.
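As an illustration, substituting values beyond the 1st and 99th percentiles can be done in a few lines; the data here is made up:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.append(rng.normal(50, 5, 1000), [500, -400])  # two extreme values added

# Cap (winsorize) anything outside the 1st-99th percentile range.
low, high = np.percentile(x, [1, 99])
x_capped = np.clip(x, low, high)
print(x.min(), x.max(), "->", x_capped.min(), x_capped.max())
```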
28) How can you assess a good logistic model?
There are various methods to assess the results of a logistic regression analysis:
Using the classification matrix to look at the true negatives and false positives.
Lift, which helps assess the logistic model by comparing it with random selection.
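A small sketch of reading true negatives and false positives off a classification matrix, assuming scikit-learn and made-up labels:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1, 0, 1, 0]   # actual outcomes (hypothetical)
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]   # model predictions (hypothetical)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TN={tn} FP={fp} FN={fn} TP={tp}")
```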
29) What are the various steps involved in an analytics project?
After data preparation, start running the model, analyse the results and
tweak the approach. This is an iterative step until the best possible outcome is
achieved.
Finally, implement the model and track the results to analyse the performance of
the model over a period of time.
30) How can you iterate over a list and also retrieve element indices at the
same time?
This can be done using the enumerate function, which takes every element in a
sequence (such as a list) and pairs it with its index.
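For example:

```python
prices = [9.99, 14.50, 3.25]

# enumerate yields (index, element) pairs as it walks the list
for index, price in enumerate(prices):
    print(index, price)

# An optional start argument shifts the numbering, e.g. enumerate(prices, start=1)
```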
31) During analysis, how do you treat missing values?
The extent of the missing values is identified after identifying the variables with
missing values. If any patterns are identified, the analyst has to concentrate on them,
as they could lead to interesting and meaningful business insights. If no patterns are
identified, the missing values can be substituted with mean or median values
(imputation) or simply ignored. There are various factors to be considered when
answering this question:
Understand the problem statement and understand the data before choosing, for
example, the mean value.
Whether we should even treat missing values is another important point to consider.
If 80% of the values for a variable are missing, you can answer that you would drop
the variable instead of treating the missing values.
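A small pandas sketch of this decision process; the column name is a made-up example, and the 80% threshold comes from the answer above:

```python
import pandas as pd

df = pd.DataFrame({"sales": [100.0, None, 120.0, None, 95.0]})  # toy data

missing_share = df["sales"].isna().mean()   # extent of missing values
if missing_share > 0.8:
    df = df.drop(columns=["sales"])         # too sparse: drop the variable
else:
    df["sales"] = df["sales"].fillna(df["sales"].median())  # median imputation
print(df)
```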
32)
33)
36) What kind of problem does regularization solve?
37)
38)
39) How do you decide whether your linear regression model fits the data?
40)
41)
The simplest way to answer this is that we give the data and an equation to the
machine, and ask the machine to look at the data and identify the coefficient values
in the equation.
For example, for the linear regression y = mx + c, we give the data for the variables
x and y, and the machine learns the values of m and c from the data.
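A sketch of exactly this with NumPy; the data is simulated from assumed true values m = 2, c = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.5, x.size)   # noisy data from y = 2x + 1

m, c = np.polyfit(x, y, deg=1)               # the machine "learns" m and c
print(f"m = {m:.2f}, c = {c:.2f}")           # close to 2 and 1
```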
42) How are confidence intervals constructed and how will you interpret
them?
43) How will you explain logistic regression to an economist, physical
scientist and biologist?
44) How can you overcome Overfitting?
45) Differentiate between wide and tall data formats?
46) Is Naïve Bayes bad? If yes, under what aspects?
47) How would you develop a model to identify plagiarism?
48) How will you define the number of clusters in a clustering algorithm?
49) Is it better to have too many false negatives or too many false
positives?
50) Is it possible to perform logistic regression with Microsoft Excel?
51) What do you understand by fuzzy merging? Which language will you
use to handle it?
52) What is the difference between skewed and uniform distribution?
53) You created a predictive model of a quantitative outcome variable
using multiple regressions. What are the steps you would follow to
validate the model?
54) What do you understand by Hypothesis in the context of Machine
Learning?
The test set is used to assess the performance of the model, i.e., to evaluate its
predictive power and generalization.
1) How many piano tuners are there in Chicago?
We need to build estimates to solve this kind of problem. Let's assume Chicago has
close to 10 million people and that, on average, there are 2 people in a house. For
every 20 households there is 1 piano. Now the question of how many pianos there
are can be answered: 1 in 20 households has a piano, so there are approximately
250,000 pianos in Chicago.
The next question is: how often would a piano require tuning? There is no exact
answer to this question. It could be once a year or twice a year. You need to make
an assumption, say once a year.
Let's suppose that a piano tuner works for 50 weeks in a year with a 5-day week.
Thus a piano tuner works for 250 days in a year. Let's suppose tuning a piano takes
2 hours; then in an 8-hour workday the piano tuner would be able to tune only
4 pianos. At this rate, a piano tuner can tune 1000 pianos a year.
Thus, 250 piano tuners are required in Chicago, given the above estimates.
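The arithmetic behind the estimate, spelled out (every number is an assumption, as above):

```python
population = 10_000_000
people_per_household = 2
households_per_piano = 20
pianos = population / people_per_household / households_per_piano  # 250,000

tunings_per_piano_per_year = 1                 # assumed: once a year
pianos_tuned_per_tuner_per_year = 50 * 5 * 4   # 50 weeks * 5 days * 4 pianos/day

tuners = pianos * tunings_per_piano_per_year / pianos_tuned_per_tuner_per_year
print(tuners)  # 250.0
```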
2) There is a race track with five lanes. There are 25 horses of which you
want to find out the three fastest horses. What is the minimal number of
races needed to identify the 3 fastest horses of those 25?
Divide the 25 horses into 5 groups of 5 and race each group (5 races) to determine
the order within each group. A sixth race between the 5 group winners determines
the fastest horse overall. The only candidates left for second and third fastest are
the 2nd and 3rd place horses from the overall winner's group, the runner-up of the
winners' race and the 2nd place horse from that runner-up's group, and the 3rd
place horse from the winners' race. A seventh and final race between these 5 horses
determines the second and third fastest of the 25, so 7 races are needed in total.
3) Estimate the number of French fries sold by McDonald's every day.
4) How many times in a day do a clock's hands overlap?
5) You have two beakers. The first beaker contains 4 litres of water and the
second one contains 5 litres of water. How can you pour exactly 7 litres of
water into a bucket?
6) A coin is flipped 1000 times and 560 times heads show up. Do you think
the coin is biased?
7) Estimate the number of tennis balls that can fit into a plane.
8) How many haircuts do you think happen in the US every year?
9) In a city where residents prefer only boys, every family in the city
continues to give birth to children until a boy is born. If a girl is born, they
plan for another child. If a boy is born, they stop. Find out the proportion
of boys to girls in the city.
Probability and Statistics Interview Questions for Data Science
1.
2.
Suppose that you now get a pack of 2 electronic chips coming from the same
company, either A or B. When you test the first electronic chip, it appears to be good.
What is the probability that the second electronic chip you received is also good?
3.
A coin is tossed 10 times and the results are 2 tails and 8 heads. How will you
analyse whether the coin is fair or not? What is the p-value for the same?
4.
5.
An ant is placed on an infinitely long twig. The ant can move one step
backward or one step forward with the same probability during discrete time steps.
Find out the probability with which the ant will return to the starting point.
2.
In which libraries for Data Science in Python and R does your strength lie?
3.
What kind of data is important for specific business requirements, and how, as
a data scientist, will you go about collecting that data?
4.
Tell us about the biggest data set you have processed till date and for what
kind of analysis.
5.
6.
Suppose you are given a data set. What will you do with it to find out whether
or not it suits the business needs of your project?
7.
What were the business outcomes or decisions for the projects you worked
on?
8.
What unique skills do you think you can add to our data science team?
9.
10.
11.
12.
What has been the most useful business insight or development you have
found?
13.
How will you explain an A/B test to an engineer who does not know statistics?
14.
When does parallelism help your algorithms run faster, and when does it
make them run slower?
15.
How can you ensure that you don't analyse something that ends up
producing meaningless results?
16.
17.
18.
19.
20.
Another way is to train and test data sets by sampling them multiple times.
Predict on all those datasets to find out whether or not the resultant models are
similar and are performing well.
These are some of the more general questions around data, statistics and data
science that can be asked in interviews. We will come up with more questions
specific to a language, Python or R, in subsequent articles, and fulfil our goal of
providing a set of 100 data science interview questions and answers.
Ans. Odds are determined from probabilities and range between 0 and infinity. Odds are defined as the
ratio of the probability of success to the probability of failure.
E.g. Suppose that seven out of 10 males are admitted to an engineering school while three of 10 females
are admitted. The probabilities for admitting a male are,
p = 7/10 = .7
q = 1 - .7 = .3
If you are male, the probability of being admitted is 0.7 and the probability of not being admitted is 0.3.
Here are the same probabilities for females,
p = 3/10 = .3
q = 1 - .3 = .7
If you are female it is just the opposite, the probability of being admitted is 0.3 and the probability of not
being admitted is 0.7.
Now we can use the probabilities to compute the odds of admission for both males and females,
odds(male) = .7/.3 = 2.33333
odds(female) = .3/.7 = .42857
Next, we compute the odds ratio for admission,
OR = 2.3333/.42857 = 5.44
Thus, for a male, the odds of being admitted are 5.44 times larger than the odds for a female being
admitted.
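The same computation in a few lines of Python, using the admission probabilities above:

```python
p_male, p_female = 0.7, 0.3

odds_male = p_male / (1 - p_male)         # 2.3333
odds_female = p_female / (1 - p_female)   # 0.4286

odds_ratio = odds_male / odds_female
print(round(odds_ratio, 2))               # 5.44
```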
2 What is the variance of a t-distribution?
Ans.
The variance is equal to v / (v - 2), where v is the degrees of freedom (see last section)
and v >2.
The variance is always greater than 1, although it is close to 1 when there are many degrees of
freedom. With infinite degrees of freedom, the t distribution is the same as the standard
normal distribution.
Ans. The Z-distribution is a normal distribution with mean zero and standard deviation 1.
5 What is the standard error?
Ans. The standard error is an estimate of the standard deviation of a statistic. The standard
error is computed from known sample statistics.
Ans. Smoothing is usually done to help us better see patterns and trends in, for example, time series.
Generally, we smooth out the irregular roughness to see a clearer signal. For seasonal data, we might
smooth out the seasonality so that we can identify the trend. Smoothing doesn't provide us with a model,
but it can be a good first step in describing various components of the series.
A time series is a sequence of observations which are ordered in time. Inherent in the collection of data
taken over time is some form of random variation. There exist methods for reducing or canceling the
effect due to random variation. A widely used technique is "smoothing". This technique, when properly
applied, reveals more clearly the underlying trend, seasonal and cyclic components.
Smoothing techniques are used to reduce irregularities (random fluctuations) in time series data. They
provide a clearer view of the true underlying behavior of the series. Moving averages rank among the
most popular techniques for the preprocessing of time series. They are used to filter random "white noise"
from the data, to make the time series smoother or even to emphasize certain informational components
contained in the time series.
Exponential smoothing is a very popular scheme to produce a smoothed time series. Whereas in moving
averages the past observations are weighted equally, exponential smoothing assigns exponentially
decreasing weights as the observations get older. In other words, recent observations are given relatively
more weight in forecasting than older observations. Double exponential smoothing is better at
handling trends, and triple exponential smoothing is better at handling parabolic trends.
Exponential smoothing is a widely used method of forecasting based on the time series itself. Unlike
regression models, exponential smoothing does not impose any deterministic model to fit the series other
than what is inherent in the time series itself.
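A minimal sketch of simple exponential smoothing; the series and the smoothing weight alpha are assumptions for illustration:

```python
def exponential_smoothing(series, alpha=0.3):
    """Exponentially weighted smoothing: recent observations count more."""
    smoothed = [series[0]]                  # seed with the first observation
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

print(exponential_smoothing([3, 5, 4, 6, 8, 7, 9]))
```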
Ans. An autoregressive process is a stochastic process used in statistical calculations in which future
values are estimated based on a weighted sum of past values. An autoregressive process operates under
the premise that past values have an effect on current values. A process considered AR(1) is a first-order
process, meaning that the current value is based on the immediately preceding value. An AR(2) process
has the current value based on the previous two values.
An autoregressive model is when a value from a time series is regressed on previous values from that
same time series, for example y_t on y_{t-1}:
y_t = β_0 + β_1*y_{t-1} + ε_t
In this regression model, the response variable in the previous time period has become the predictor and
the errors have our usual assumptions about errors in a simple linear regression model. The order of an
autoregression is the number of immediately preceding values in the series that are used to predict the
value at the present time. So, the preceding model is a first-order autoregression, written as AR(1).
If we want to predict y this year (y_t) using measurements of global temperature in the previous two
years (y_{t-1}, y_{t-2}), then the autoregressive model for doing so would be:
y_t = β_0 + β_1*y_{t-1} + β_2*y_{t-2} + ε_t
This model is a second-order autoregression, written as AR(2), since the value at time t is predicted from
the values at times t-1 and t-2. More generally, a kth-order autoregression, written as AR(k), is
a multiple linear regression in which the value of the series at any time t is a (linear) function of the
values at times t-1, t-2, ..., t-k.
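As a sketch, an AR(2) model can be fit by ordinary least squares on lagged values; the simulated series and its coefficients (0.5 and 0.3) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(2, 500):                       # simulate an AR(2) process
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.normal()

# Regress y_t on an intercept, y_{t-1} and y_{t-2}
X = np.column_stack([np.ones(498), y[1:-1], y[:-2]])
beta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(beta)   # roughly [0, 0.5, 0.3]
```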
8 What is lag?
Ans. Consider the autocorrelation Corr(y_t, y_{t-k}).
This value of k is the time gap being considered and is called the lag. A lag 1 autocorrelation (i.e., k = 1 in
the above) is the correlation between values that are one time period apart. More generally,
a lag k autocorrelation is the correlation between values that are k time periods apart.
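A lag-k autocorrelation can be computed directly as Corr(y_t, y_{t-k}); the series here is made up:

```python
import numpy as np

def lag_autocorr(y, k):
    y = np.asarray(y, dtype=float)
    return np.corrcoef(y[k:], y[:-k])[0, 1]   # Corr(y_t, y_{t-k})

print(lag_autocorr([1, 2, 3, 2, 1, 2, 3, 2, 1], k=1))
```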
9
Differencing
Transforming
11 What is homoscedasticity?
Ans. Homoscedasticity describes a situation in which the error term (that is, the noise or
random disturbance in the relationship between the independent variables and the
dependent variable) is the same across all values of the independent variables.
12 What is the formula for calculating sample variance?
Ans. s^2 = Σ(x_i - x̄)^2 / (n - 1), where x̄ is the sample mean and n is the sample size.
The coefficient ρ_1 is called the first-order autocorrelation coefficient and takes values from -1 to +1.
The process is second-order when:
u_t = ρ_1*u_{t-1} + ρ_2*u_{t-2} + e_t
16 What is multi-collinearity?
Ans. Multicollinearity refers to a situation in which two or more explanatory variables in a multiple
regression model are highly correlated. We have perfect multicollinearity if the correlation between two
independent variables is equal to 1 or -1. In practice, we rarely face perfect multicollinearity in a data set.
More commonly, the issue of multicollinearity arises when there is a high degree of correlation (either
positive or negative) between two or more independent variables.
Multicollinearity is a statistical phenomenon in which two or more predictor variables in a multiple
regression model are highly correlated. In this situation the coefficient estimates may change erratically in
response to small changes in the model or the data. Multicollinearity does not reduce the predictive power
or reliability of the model as a whole; it only affects calculations regarding individual predictors. That is,
a multiple regression model with correlated predictors can indicate how well the entire bundle of
predictors predicts the outcome variable, but it may not give valid results about any individual predictor,
or about which predictors are redundant with others.
17 Which command can we use in R for exponential smoothing with trend and seasonal
parameters?
Ans. HoltWinters(data)
In logistic regression analysis, deviance is used in lieu of sum of squares calculations. Deviance is
analogous to the sum of squares calculations in linear regression and is a measure of the lack of fit to the
data in a logistic regression model. When a "saturated" model is available (a model with a theoretically
perfect fit), deviance is calculated by comparing a given model with the saturated model. This
computation gives the likelihood-ratio test:
D = -2 ln(likelihood of the fitted model / likelihood of the saturated model)
In the above equation D represents the deviance and ln represents the natural logarithm. The log of this
likelihood ratio (the ratio of the fitted model to the saturated model) will produce a negative value, hence
the need for a negative sign.
An R2 of 0 means that the dependent variable cannot be predicted from the independent variable.
An R2 of 1 means the dependent variable can be predicted without error from the independent
variable.
An R2 between 0 and 1 indicates the extent to which the dependent variable is predictable. An
R2 of 0.10 means that 10 percent of the variance in Y is predictable from X; an R2 of 0.20 means
that 20 percent is predictable; and so on.
Ans.
23 What is a Type I error rate?
Ans. The Type I error rate (α) is the probability of rejecting the null hypothesis when it is actually true.
24 What is a Type II error rate?
Ans. The Type II error rate (β) is the probability of failing to reject the null hypothesis when it is actually false.
Ans. The Akaike Information Criterion (AIC) is a way of selecting a model from a set of models. The
chosen model is the one that minimizes the Kullback-Leibler distance between the model and the truth.
It's based on information theory, but a heuristic way to think about it is as a criterion that seeks a model
that has a good fit to the truth but few parameters. It is defined as:
AIC = -2 ln(likelihood) + 2K
where K is the number of estimated parameters in the model.
In studies involving human subjects, we often use gender and age classes as the blocking factors. We
could simply divide our subjects into age classes, however this does not consider gender. Therefore we
partition our subjects by gender and from there into age classes. Thus we have a block of subjects that is
defined by the combination of factors, gender and age class.
λ > 0 (real): the average number of events in an interval (the rate parameter of the Poisson distribution).
49 What is an ROC?
Ans. The ROC (Receiver Operating Characteristic) curve is used to visualize the
performance of a binary classifier. It is the plot of sensitivity against
(1 - specificity), where (1 - specificity) is also known as the false positive rate and
sensitivity is also known as the true positive rate. For a good model the curve
should rise steeply: the closer the curve is to the sensitivity (true positive) axis, the
better the model. The maximum possible area under the curve is 1, and the greater
the area, the better the predictive ability of the model.
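A sketch with scikit-learn, using made-up labels and scores:

```python
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1, 0, 1]                    # hypothetical labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7]   # hypothetical scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)     # points on the ROC curve
print("AUC =", roc_auc_score(y_true, y_score))
```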
Ans. 50%
54 If X and Y are two variables having a coefficient of correlation 0.3, and X and Z are two variables
having a coefficient of correlation -0.7, which correlation is higher?
Ans. X and Z. The relation is inverse, but since |-0.7| > |0.3|, the correlation between X and Z is stronger than that between X and Y.
55 What kind of distribution would you expect the residuals in a linear regression model to have?
Ans. Normal distribution
56 What is VIF? Explain how it is calculated.
Ans. Each of the predictor variables is regressed on the other predictor variables. If that R-squared is high,
then the variable is collinear with the others. The variance inflation factor is calculated as
VIF = 1 / (1 - R^2). A VIF of 1 means no collinearity; the higher the VIF, the stronger the
multicollinearity in the data (common rules of thumb treat values above 5 or 10 as problematic).
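A sketch of the calculation with statsmodels; the toy data deliberately makes x2 nearly a copy of x1:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = pd.DataFrame({
    "x1": x1,
    "x2": 0.9 * x1 + rng.normal(0, 0.1, 100),  # highly collinear with x1
    "x3": rng.normal(size=100),
})

vifs = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
print(dict(zip(X.columns, np.round(vifs, 2))))   # x1 and x2 show inflated VIFs
```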
57 What is the major difference between Stepwise Regression and Forward Regression?
Ans. Forward regression starts with no predictors and adds variables one at a time; once a variable enters
the model, it stays. Stepwise regression also adds variables one at a time, but at each step it can
additionally remove variables that have become insignificant.
Ans. A regression equation is a polynomial regression equation if the power of independent variable is
more than 1. The equation below represents a polynomial equation:
y=a+b*x^2
In this regression technique, the best fit line is not a straight line. It is rather a curve that fits into the data
points.
60 What are the minimum and maximum values of r, the coefficient of correlation?
Ans. -1 to + 1
61 What do you mean by Se, the Standard Error of Estimate?
Ans. The standard error of the estimate is a measure of the accuracy of predictions. Recall that the
regression line is the line that minimizes the sum of squared deviations of prediction (also called the sum
of squares error). The standard error of the estimate is closely related to this quantity and is defined
below:
σ_est = sqrt( Σ(Y - Y')^2 / N )
where σ_est is the standard error of the estimate, Y is an actual score, Y' is a predicted score, and N is the
number of pairs of scores. The numerator is the sum of squared differences between the actual scores and
the predicted scores.
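A direct computation of the quantity defined above, with made-up scores:

```python
import numpy as np

y_actual    = np.array([2.0, 4.0, 5.0, 4.0, 5.0])   # hypothetical Y
y_predicted = np.array([2.8, 3.4, 4.0, 4.6, 5.2])   # hypothetical Y'

s_est = np.sqrt(np.sum((y_actual - y_predicted) ** 2) / len(y_actual))
print(round(s_est, 3))
```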
65 How do you test the overall significance of a Binary Logistic Regression model?
Ans. AIC (low value is better)
SC (Schwarz Criterion)
Hosmer-Lemeshow statistic
ROC
66 What is Power of a hypothesis test?
Ans. Power is the probability of rejecting the null hypothesis when it is in fact false, i.e., the probability
of making a correct decision (rejecting the null hypothesis) when the null hypothesis is false.
Power = 1 - β, where β is the probability of a Type II error.
67 When do you use Durbin-Watson test?
Ans. The Durbin-Watson statistic is a test statistic used to detect the presence of autocorrelation (a
relationship between values separated from each other by a given time lag) in the residuals (prediction
errors) from a regression analysis.
68 Under what condition the mean of sample proportions will be equal to population
proportion if you do repeated sampling
Ans. By the Central Limit Theorem for proportions. Let p be the probability of success and q = 1 - p the
probability of failure. The sampling distribution for samples of size n is approximately normal with mean
p and standard deviation sqrt(pq/n), so with repeated sampling the mean of the sample proportions
equals the population proportion.
Upper Edge = Q3
Ans.
Ans. P = .5
85 What are the parameters of an ARIMA model?
Ans. ARIMA(p, d, q), where p is the order of the autoregressive (AR) terms, d is the degree of differencing, and q is the order of the moving average (MA) terms.
87 What are the components of a Time Series data? Explain them in terms of time periods.
Ans. Trend
A trend exists when there is a long-term increase or decrease in the data. It does not have to be
linear. Sometimes we will refer to a trend changing direction when it might go from an
increasing trend to a decreasing trend.
Seasonal
A seasonal pattern exists when a series is influenced by seasonal factors (e.g., the quarter of the
year, the month, or day of the week). Seasonality is always of a fixed and known period.
Cyclic
A cyclic pattern exists when data exhibit rises and falls that are not of fixed period. The duration
of these fluctuations is usually of at least 2 years.
Irregular
These are sudden changes occurring in a time series which are unlikely to be repeated. The irregular
component is the part of a time series which cannot be explained by trend, seasonal or cyclic movements;
because of this, these variations are sometimes called the residual or random component. Though
accidental in nature, these variations can cause a continual change in the trend, seasonal and cyclical
oscillations during the forthcoming period. Floods, fires, earthquakes, revolutions, epidemics, strikes,
etc. are the root causes of such irregularities.
88 What is the basic purpose of ANOVA?
Ans. The purpose of analysis of variance (ANOVA) is to test for significant differences between means.
89 Is Sampling Distribution a probability distribution? Explain.
Ans. Yes. Suppose that we draw all possible samples of size n from a given population. Suppose further that
we compute a statistic (e.g., a mean, proportion, standard deviation) for each sample. The probability
distribution of this statistic is called a sampling distribution. And the standard deviation of this
statistic is called the standard error.
90 For two mutually exclusive events A and B, what is the probability of A or B?
Ans. P(A or B) = P(A) + P(B)
91 Which one is more severe? Type I or Type IIwhy?
Ans. In most fields of science, Type II errors are not seen to be as problematic as a Type I error. With
a Type II error, a chance to reject the null hypothesis was lost, and no conclusion is inferred from a
non-rejected null. The Type I error is more serious, because you have wrongly rejected the null
hypothesis.
To me, both of these situations are dangerous. In one case the patient receives unnecessary treatment; in
the other, the patient doesn't receive needed treatment.
Here's another example. Suppose our hypothesis is "This material is not radioactive." A type 1 error
would be to conclude that the material is radioactive, when it really isn't. A type 2 error would be to
conclude that the material is not radioactive, when it really is.
In this case, the type 2 error is more dangerous.
Finally, here is a really vivid example. Suppose our hypothesis is "This gun isn't loaded." A type 1 error
would be to conclude that it is loaded, when it really isn't. A type 2 error would be to conclude it isn't
loaded, when it really is. The type 2 error is clearly more dangerous!
In general, a type 1 error is not more dangerous than a type 2 error. It depends on the nature of your
hypothesis.
Ans. The sampling method for each sample is simple random sampling.
The test is conducted on paired data. (As a result, the data sets are not independent.)
The sampling distribution is approximately normal, which is generally true if any of the following
conditions apply.
The population data are symmetric, unimodal, without outliers, and the sample
size is 15 or less.
The population data are slightly skewed, unimodal, without outliers, and the sample
size is 16 to 40.
The sample size is greater than 40, without outliers.
95 What should be the nature of Time Series data with which you can develop an ARIMA
model?
Ans. Stationary
96 What is the basic purpose of regression?
Ans. Regression analysis is a form of predictive modelling technique which investigates the relationship
between a dependent (target) variable and independent variable(s) (predictors). This technique is used for
forecasting, time series modelling and finding the causal-effect relationship between variables.
You can even use this regression curve to predict values of y for x's that are outside of the range of the
current dataset (extrapolation). For example, relationship between rash driving and number of road
accidents by a driver is best studied through regression.
97 Why would you use Tukey's 4-quadrant approach for model building?
Ans.
98 What is an outlier in the context of regression?
Ans. Data points that diverge in a big way from the overall pattern are called outliers. There are four ways
that a data point might be considered an outlier.
It might be distant from the rest of the data, even without extreme X or Y values