
MBA SEMESTER III MB0050 Research Methodology- 4 Credits (Book ID: B1206) Assignment Set- 1 (60 Marks) Note:

Each question carries 10 Marks. Answer all the questions


1. a. Differentiate between nominal, ordinal, interval and ratio scales, with an example of each. [Answer]

Nominal Scale
The nominal scale (whose categories are often represented with dummy coding) simply places people, events, perceptions, etc. into categories based on some common trait. Some data are naturally suited to the nominal scale, such as males vs. females, redheads vs. blondes vs. brunettes, and African American vs. Asian. The nominal scale forms the basis for analyses such as Analysis of Variance (ANOVA), because those analyses require that some category be compared to at least one other category. The nominal scale is the lowest form of measurement because it captures no information about the focal object other than whether the object belongs to a category or not: either you are a smoker or a non-smoker; you attended college or you didn't; a subject has some experience with computers, an average amount of experience, or extensive experience. No data are captured that can place the measured object on any kind of scale, say, on a continuum from one to ten. Nominal data can be coded using numbers, letters, labels, or any symbol that represents a category to which an object either belongs or does not belong.

Ordinal Scale
The ordinal scale has one major advantage over the nominal scale: it contains all of the information captured by the nominal scale, but it also ranks data from lowest to highest. Rather than simply placing an object into or out of a category, ordinal data give some idea of where data lie in relation to each other. For example, suppose you are conducting a study on cigarette smoking and you record how many packs of cigarettes three smokers consume in a day. The first subject smokes one pack a day, the second smokes two packs a day, and the third smokes ten packs a day. Using an ordinal scale, your data would look like this:
Ten-packs-a-day smoker
Two-packs-a-day smoker
One-pack-a-day smoker

The ordinal scale rank-orders the subjects by how many packs of cigarettes they smoke in one day. Notice, however, that although you can use the ordinal scale to rank the subjects, some important information is missing: the first smoker is one rank from the second, and the second one rank from the third, even though the underlying quantities (ten, two and one packs) are far from equally spaced. No information exists in the ordinal scale to indicate the distance between smokers beyond the ranking itself. Richer than nominal scaling, ordinal scaling still suffers from some information loss in the data.

Interval Scale
Unlike the nominal scale, which simply places objects into or out of a category, or the ordinal scale, which rank-orders objects, the interval scale indicates the distance one object is from another. In the social sciences there is a famous example often taught to students on this distinction. Suppose you are near the shore of a lake and you see three tree stumps sticking out of the water. Using the water as a reference point, it would be easy to measure which stump rises highest out of the water. In this way, you can create a relative measure of the height of the stumps from the surface of the water. For example, the first stump may breach the water by twenty-four centimeters, the second by twenty-six centimeters, and the third by twenty-eight centimeters. Unlike the nominal and ordinal scales, the interval scale lets you make relative distance measurements among objects. However, the distance the stumps extend out of the water gives you no indication of how long the stumps actually are. It's possible that the bottom of the lake is irregular, making the tallest stump look tallest only in relation to the water. Using interval scaling, you have no indication of the absolute length of the stumps. Still, the interval scale contains richer information than the two lower levels of scaling.
Ratio Scale
The scale that contains the richest information about an object is ratio scaling. The ratio scale contains all of the information of the previous three levels, plus an absolute zero point. To continue the example above, the ratio scale allows you to measure the stumps from the bottom of the lake; the bottom of the lake represents the absolute zero point. The distinction between interval and ratio scales is an important one in the social sciences. Although both can capture continuous data, you have to be careful not to assume that the lowest possible score in your data collection automatically represents an absolute zero point. Take extraversion, captured using a psychometrically sound survey instrument. The items that capture this construct may range from zero to ten on the survey, but there is no guarantee that a score of zero on the survey places a subject at the absolute zero point of the extraversion construct. You know that a subject with a score of eight is more extraverted than someone with a score of seven, but those numbers exist only for comparison with each other, not in comparison to some absolute score of zero extraversion.
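As an illustration (not from the original text), the hierarchy of permissible operations across the four scales can be sketched in Python; the variable names and values below are hypothetical:

```python
# Each scale level supports the operations of the levels below it,
# plus one more: counts -> ranks -> differences -> ratios.
from statistics import mode, median

smoker_status = ["smoker", "non-smoker", "smoker"]   # nominal: categories only
packs_rank    = [3, 2, 1]                            # ordinal: rank order only
stump_rise_cm = [24.0, 26.0, 28.0]                   # interval: differences meaningful
stump_len_cm  = [110.0, 120.0, 135.0]                # ratio: true zero, ratios meaningful

print(mode(smoker_status))                 # nominal supports mode / counts
print(median(packs_rank))                  # ordinal adds median / rank statistics
print(stump_rise_cm[2] - stump_rise_cm[0])  # interval adds meaningful differences
print(stump_len_cm[2] / stump_len_cm[0])    # ratio adds meaningful ratios
```

Note that the ratio on the last line is meaningful only because the length measurement starts from a true zero (the lake bottom), whereas a ratio of the interval values (heights above the waterline) would not be.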

1. b. What are the purposes of measurement in social science research? [Answer] Measurement in social research is not an easy affair: deciding what to measure and how to measure it is itself part of the research task. Two main methods of getting input from people are the survey and the interview, and a critical skill in each is asking the right questions.

No discussion of scientific method is complete without an argument for the importance of fundamental measurement - measurement of the kind characterizing length and weight. Yet few social scientists attempt to construct fundamental measures. This is not because social scientists disapprove of fundamental measurement; it is because they despair of obtaining it. The conviction that fundamental measurement is unobtainable in social science and education has such a grip that we do not see that our despair is unnecessary. Fundamental measurement is not only obtainable in social science but, in an unaware and hence incomplete form, is widely relied on. Social scientists are practicing fundamental measurement without knowing it, and hence without enjoying its benefits or building on its strengths. The realization that fundamental measurements can be made in social science research is usually traced to Luce and Tukey (1964), who show that fundamental measurement can be constructed from an axiomatization of comparisons among responses to arbitrary pairs of quantities of two specified kinds. But Thurstone's 1927 Law of Comparative Judgement contains an equivalent idea, and his empirical work (e.g., 1928a, 1928b, 1929) contains results which are rough examples of fundamental measurement. Fundamental measurement also occurs in Bradley and Terry (1952) and Rasch (1958, 1960, 1966). The fundamental measurement which follows from Rasch's 'specific objectivity' is developed in Rasch (1960, 1961, 1967, 1977). Rasch's specific objectivity and R. A. Fisher's estimation sufficiency are two sides of the same approach to inference. Andersen (1977) shows that the only measuring processes which support specific objectivity, and hence fundamental measurement, are those which have sufficient statistics for their parameters. It follows that sufficient statistics lead to, and are necessary for, fundamental measurement.
In spite of this considerable literature advancing, explaining and illustrating the successful application of fundamental measurement in social science research, most current psychometric practice is either unaware of the opportunity or considers it impractical.

2. a. What are the sources from which one may be able to identify research problems? [Answer] Research can be defined as the search for knowledge, or as any systematic investigation undertaken to establish novel facts, solve new or existing problems, support new ideas, or develop new theories, usually using a scientific method. The primary purpose of basic research (as opposed to applied research) is discovering, interpreting, and developing methods and systems for the advancement of human knowledge on a wide variety of scientific matters of our world and the universe. Scientific research relies on the application of the scientific method, a harnessing of curiosity. This research provides scientific information and theories for the explanation of the nature and the properties of the world around us, and it makes practical applications possible. Scientific research is funded by public authorities, by charitable organizations and by private groups, including many companies. Scientific research can be subdivided into different classifications according to academic and application disciplines. Generally, research is understood to follow a certain structural process. Though the order of steps may vary depending on the subject matter and researcher, the following steps are usually part of most formal research, both basic and applied:

Observation and formation of the topic: Consists of choosing a subject area of one's interest and following that subject area to conduct subject-related research. The subject area should not be randomly chosen, since it requires reading a vast amount of literature on the topic to determine the gap in the literature the researcher intends to narrow. A keen interest in the chosen subject area is advisable. The research will have to be justified by linking its importance to already existing knowledge about the topic.

Hypothesis: A testable prediction which designates the relationship between two or more variables.
Conceptual definition: Description of a concept by relating it to other concepts.

Operational definition: Details regarding how the variables will be defined and measured/assessed in the study.

Gathering of data: Consists of identifying a population and selecting samples, then gathering information from and/or about these samples using specific research instruments. The instruments used for data collection must be valid and reliable.

Analysis of data: Involves breaking down the individual pieces of data in order to draw conclusions about it.

Data Interpretation: This can be represented through tables, figures and pictures, and then described in words.

A common misconception is that a hypothesis will be proven. Generally a hypothesis is used to make predictions that can be tested by observing the outcome of an experiment. If the outcome is inconsistent with the hypothesis, the hypothesis is rejected. However, if the outcome is consistent with the hypothesis, the experiment is said to support the hypothesis. This careful language is used because researchers recognize that alternative hypotheses may also be consistent with the observations. In this sense, a hypothesis can never be proven, but rather only supported by surviving rounds of scientific testing and, eventually, becoming widely thought of as true. A useful hypothesis allows prediction, and, within the accuracy of observation of the time, the prediction will be verified. As the accuracy of observation improves with time, the hypothesis may no longer provide an accurate prediction. In this case a new hypothesis will arise to challenge the old, and to the extent that the new hypothesis makes more accurate predictions than the old, the new will supplant it. Researchers can also use a null hypothesis, which states that there is no relationship or difference between the independent and dependent variables. A null hypothesis uses a sample drawn from the population of interest to make a conclusion about that population.

Artistic research, also seen as 'practice-based research', can take form when creative works are considered both the research and the object of research itself. It is a debatable body of thought which offers an alternative to purely scientific methods in research in its search for knowledge and truth. Historical research is embodied in the historical method. Historians use primary sources and other evidence to systematically investigate a topic, and then write histories in the form of accounts of the past. The historical method comprises the techniques and guidelines by which historians use historical sources and other evidence to research and then to write history.
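The reject / fail-to-reject logic for a null hypothesis described above can be sketched with a simple randomization (permutation) test; the group data, the 0.05 threshold, and the function name below are illustrative assumptions, not taken from the text:

```python
# Hedged sketch: testing H0 ("no difference in means between two groups")
# by shuffling group labels and seeing how often chance alone produces a
# difference at least as large as the one observed.
import random

def permutation_p_value(a, b, n_perm=5000, seed=0):
    """Approximate two-sided p-value for H0: no difference in means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

group_a = [5.1, 4.9, 5.3, 5.0, 5.2]   # hypothetical measurements
group_b = [6.0, 6.2, 5.9, 6.1, 6.3]
p = permutation_p_value(group_a, group_b)
print("reject H0" if p < 0.05 else "fail to reject H0")
```

Note the guarded wording in the final line: the test either rejects H0 or fails to reject it; in keeping with the passage above, it never "proves" a hypothesis.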
There are various guidelines commonly used by historians in their work, under the headings of external criticism, internal criticism, and synthesis. This includes lower criticism and sensual criticism. Though items may vary depending on the subject matter and researcher, the following concepts are usually part of most formal historical research:

Identification of origin date
Evidence of localization
Recognition of authorship
Analysis of data
Identification of integrity
Attribution of credibility

b. Why is a literature survey important in research? [Answer] A literature survey is the documentation of a comprehensive review of the published and unpublished work from secondary sources of data in the areas of specific interest to the researcher. The library is a rich storage base for secondary data, and researchers used to spend several weeks, and sometimes months, going through books, journals, newspapers, magazines, conference proceedings, doctoral dissertations, master's theses, government publications and financial reports to find information on their research topic. With computerized databases now readily available and accessible, the literature search is much speedier and easier and can be done without entering the portals of a library building. The researcher can start the literature survey even as information from the unstructured and structured interviews is being gathered. Reviewing the literature on the topic area at this time helps the researcher focus further interviews more meaningfully on certain aspects found to be important in the published studies, even if these had not surfaced during the earlier questioning. So the literature survey is important for gathering the secondary data for the research, which may prove very helpful in the research. The literature survey can be conducted for several reasons, and the literature review can be in any area of the business. For your literature survey you are to select a relevant topic (see below), review information from various sources (books, journals, conference proceedings, web sites, product literature, etc.), and formulate a coherent report about that topic. Write the report from the viewpoint that you are going to form a dot-com startup company dealing with that topic, and want you and your colleagues to know what the current state-of-the-art and state-of-practice are.
Or if you feel more academic than entrepreneurial, write it as if it is a survey you want to submit to a prestigious journal for publication (and maybe we can after the semester is done). Candidate topics include the following (with a focus on their design and engineering applications). The majority of your survey should cover recent work (within the last 4 years). You should cover recent existing systems and/or emerging research. You can propose other current topics relevant to design and engineering information technology if you wish.

1) STEP implementations in industrial practice (or advanced research). Of particular interest are STEP-based repositories (environments where applications interact with the database(s) via fine-grained dynamic data sharing, e.g., via APIs like ODBC and JDBC, rather than batch-oriented file exchange).
2) Information systems' role in integrating design and engineering supply chains.
3) OO information modeling techniques (e.g., UML)
4) Data mining

5) XML
6) Middleware (CORBA, DCOM, JINI, SOAP)
7) ERP systems
8) PDM systems
9) Engineering & design related ASPs
10) Toolkits for creating web-based engineering services (e.g., EAI)
11) Design & analysis integration (e.g., see www.eislab.gatech.edu/research/dai/ as a starting point)
12) Collaborative engineering environments and research initiatives (e.g., NASA ISE, Boeing PSI)
13) IT initiatives in engineering research and education (at NSF, DARPA, at Georgia Tech COE and other universities, etc.)
14) Usage of <IT area x> in <product domain y> (e.g., usage of STEP in the design and manufacture of machined parts)

3. a. What are the characteristics of a good research design? [Answer] The type of experimental design used for an experiment depends on many factors. However, there are good experimental designs and bad experimental designs. There are many basic designs used for experiments, depending on the objective. These designs all take important design features into account; many of them minimize error and are based on statistical sampling methods.

Sampling
A sample is the number of items, objects or people used as the measurement of a specific population. An important part of experimental design is the type of sampling used. Proper sampling means that the measurements are representative of the population. Measuring an entire population is very difficult: many populations are simply too large, some are inaccessible, and some observations are destructive. Sampling can actually be a more accurate way to take measurements than measuring the population as a whole. It is important to design the sampling method to minimize error and response bias. You will need to determine the size and type of sample needed for the study.

Treatments
An important part of the design is determining the treatments being tested. The treatments are the feature that is being tested, that is, what differs between the groups. An example of a treatment is the amount of light each plant is given per day in a study of plant growth habits. It is best to change only one factor: problems arise if you change both the light and the water supply, because if a difference does occur, you do not know whether it is a result of the light source or the quantity of water.

Control
All experiments must have a control. The control is the sample against which all the treatments are compared. For example, the control for plant growth would be a plant that receives light for a normal day, about eight hours.
All the other light treatments are then compared to this control. The results will show which treatments produce a different result: some treatments may not differ from the control while others will.

Randomization
Randomizing the sampling is the main way to eliminate bias within the design. Individuals and objects are randomly assigned to the experimental groups, creating homogeneous groups. Designs can be completely randomized or use a randomized block design. A block design first splits the subjects into homogeneous blocks and then randomly assigns the treatments within each block, so that the randomization itself controls for known sources of variation.

Replication
Replication is necessary to ensure that the result of the experiment is actually true. If a treatment is completely effective, the result will be the same across all replications; if it is not, the replication results will differ. Most research uses triplicate samples or treatments: if one result differs from the first, the third sample indicates which result is reliable and which may be a fluke. Replication increases the significance of results, and these significant results can then be used to compose conclusions based on the experiment.

A good design is a series of guide posts to keep one going in the right direction. It reduces wastage of time and cost. It encourages co-ordination and effective organization. It is a tentative plan which undergoes modifications, as circumstances demand, as the study progresses; as new aspects, new conditions and new relationships come to light, insight into the study deepens. It has to be geared to the availability of data and the cooperation of the informants. It also has to be kept within manageable limits.
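The randomization and blocking steps described above can be sketched as follows; the plant subjects, treatment names, and group sizes are hypothetical, chosen only to illustrate the two assignment schemes:

```python
# Sketch: a completely randomized design (CRD) and a randomized block
# design (RBD), each with three replicates per treatment.
import random

random.seed(42)
subjects = [f"plant_{i}" for i in range(12)]
treatments = ["8h_light", "12h_light", "16h_light", "control"]

# CRD: shuffle all subjects, then deal them round-robin to treatments.
shuffled = subjects[:]
random.shuffle(shuffled)
crd = {t: shuffled[i::len(treatments)] for i, t in enumerate(treatments)}

# RBD: split subjects into homogeneous blocks first, then randomize
# the order of treatments within each block.
blocks = [subjects[i:i + 4] for i in range(0, 12, 4)]
rbd = []
for block in blocks:
    order = treatments[:]
    random.shuffle(order)
    rbd.append(list(zip(block, order)))

print(len(crd["control"]))  # 3 replicates per treatment
```

Each treatment ends up with three replicates, and in the block design every block contains every treatment exactly once.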

Characteristics of good research design
1. Analytical & critical --> going deeper into the depth of the idea.
2. Systematic --> employing valid procedures and principles.
3. Controlled --> keeping the variables constant.
4. Accurate --> conducting a careful investigation.
5. Replicable --> having a research design & procedures that enable the researcher to arrive at valid & conclusive results.
6. Cyclical --> having a succession of procedures, a cycle that starts with a problem and ends with a problem.
7. Empirical --> basing data on direct observation & general truth.
8. Requires courage --> calling on the researcher's will to continue the work in spite of problems.
9. Original work --> producing a work of your own by making use of the scientific process.
10. Patient and unhurried activity --> requiring sustained, unhurried effort.
11. Hypothetical --> giving an intelligent guess before presenting the conclusion.
12. Done by an expert --> making the research more reliable and tested.

3. b. What are the components of a research design?

4. a. Distinguish between double sampling and multiphase sampling. [Answer] Sample size calculations for a continuous outcome require specification of the anticipated variance; inaccurate specification can result in an underpowered or overpowered study. For this reason, adaptive methods whereby sample size is recalculated using the variance of a subsample have become increasingly popular. The first proposal of this type (Stein, 1945, Annals of Mathematical Statistics 16, 243-258) used all of the data to estimate the mean difference but only the first-stage data to estimate the variance. Stein's procedure is not commonly used because many people perceive it as ignoring relevant data. This is especially problematic when the first-stage sample size is small, as would be the case if the anticipated total sample size were small. A more naive approach uses, in the denominator of the final test statistic, the variance estimate based on all of the data. Applying the Helmert transformation, it can be shown why this naive approach underestimates the true variance and how to construct an unbiased estimate that uses all of the data; the type I error rate of such a procedure cannot exceed alpha.

Multiphase sampling is an extension of two-phase sampling, also known as double sampling. Multiphase sampling must be distinguished from multistage sampling since, in multiphase sampling, the different phases of observation relate to sample units of the same type, while in multistage sampling, the sample units are of different types at different stages. Double and multiple sampling plans were invented to give a questionable lot another chance. For example, if in double sampling the results of the first sample are not conclusive with regard to accepting or rejecting, a second sample is taken. Application of double sampling requires that a first sample of size n1 is taken at random from the (large) lot.
The number of defectives is then counted and compared to the first sample's acceptance number a1 and rejection number r1. Denote the number of defectives in sample 1 by d1 and in sample 2 by d2. Then:

If d1 <= a1, the lot is accepted.
If d1 >= r1, the lot is rejected.
If a1 < d1 < r1, a second sample is taken.

If a second sample of size n2 is taken, the number of defectives, d2, is counted. The total number of defectives is D2 = d1 + d2. This is compared to the acceptance number a2 and the rejection number r2 of sample 2. In double sampling, r2 = a2 + 1 to ensure a decision on the second sample.

If D2 <= a2, the lot is accepted.
If D2 >= r2, the lot is rejected.
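The decision rules above translate directly into a small function; the plan numbers in the example (n1 = 50, a1 = 1, r1 = 4, a2 = 4) are illustrative, not from the text:

```python
# Sketch of the double-sampling decision rules: accept, reject, or take
# a second sample based on the counts of defectives d1 and d2.
def double_sampling_decision(d1, a1, r1, d2=None, a2=None):
    """Return 'accept', 'reject', or 'second sample' per the rules above."""
    if d1 <= a1:
        return "accept"
    if d1 >= r1:
        return "reject"
    if d2 is None:
        return "second sample"      # inconclusive: a1 < d1 < r1
    D2 = d1 + d2                    # total defectives over both samples
    return "accept" if D2 <= a2 else "reject"   # since r2 = a2 + 1

# Example plan: a1 = 1, r1 = 4 for sample 1; a2 = 4 (so r2 = 5) for sample 2.
print(double_sampling_decision(1, 1, 4))              # accept on first sample
print(double_sampling_decision(2, 1, 4))              # second sample needed
print(double_sampling_decision(2, 1, 4, d2=2, a2=4))  # D2 = 4 <= a2: accept
```

Because r2 = a2 + 1, the second stage always yields a decision: any D2 not accepted is necessarily at or above r2 and hence rejected.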

4. b. What is replicated or interpenetrating sampling? [Answer]

There are k interviewers, each somewhat different in manner of interviewing, and hence each may obtain slightly different responses. To keep the notation simple, we assume that each interviewer conducts the same number of interviews. Let n denote the total sample size, with n = k*m: there are k subsamples, and each interviewer is assigned m subjects. The objective is to use simple random sampling to estimate the population mean.

Interviewer 1 - y11, y12, y13, ... , y1m
Interviewer 2 - y21, y22, y23, ... , y2m
Interviewer 3 - y31, y32, y33, ... , y3m
...
Interviewer k - yk1, yk2, yk3, ... , ykm

The average for the ith interviewer is denoted:

ybar_i = (1/m) * (yi1 + yi2 + ... + yim)

The grand average is denoted:

ybar = (1/k) * (ybar_1 + ybar_2 + ... + ybar_k)

The grand average ybar is unbiased for the population mean, and the estimated variance of ybar is:

v(ybar) = [ (ybar_1 - ybar)^2 + (ybar_2 - ybar)^2 + ... + (ybar_k - ybar)^2 ] / [ k(k - 1) ]

The technique of interpenetrating subsamples gives an estimate of the variance of ybar that accounts for interviewer biases. In practice, the estimated variance given in the above formula is usually larger than the standard estimate of the variance obtained under simple random sampling.

Example of an interpenetrating sample

A researcher has 10 research assistants, each with his/her own equipment, which they use to measure the time (in seconds) it takes for people to respond to a command. A simple random sample of 80 people is taken. Since the researcher believes the assistants will produce slightly biased measurements, he decides to randomly divide the 80 people into 10 subsamples of 8 persons each. Each assistant is then assigned to one subsample. The measurements are given in the following table.

Assistant:             1   2   3   4   5   6   7   8   9  10
Time to respond (s):  52  62  43  73  88  55  72  55  62  77
                      73  65  54  64  76  71  65  43  52  65
                      62  73  52  63  69  63  77  58  59  79
                      75  67  48  59  83  75  69  62  63  69
                      71  78  56  71  85  68  74  42  69  72
                      68  71  51  78  66  72  82  61  72  68
                      55  67  62  67  74  69  73  53  64  71
                      65  59  57  76  73  60  67  61  58  67

Minitab output:
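Since the Minitab output itself did not survive reproduction here, a minimal sketch of the computation is given below, following the formulas above; the demo numbers (k = 3 subsamples of m = 4) are illustrative and are not the study's full data set:

```python
# Interpenetrating-subsample estimator: per-interviewer means, the grand
# mean, and its estimated variance v(ybar) = sum (ybar_i - ybar)^2 / (k(k-1)).
def interpenetrating_estimate(subsamples):
    """subsamples: a list of k lists (one per interviewer), each of size m.
    Returns (ybar, v): the grand mean and its estimated variance."""
    k = len(subsamples)
    means = [sum(s) / len(s) for s in subsamples]             # ybar_i
    ybar = sum(means) / k                                     # grand mean
    v = sum((mi - ybar) ** 2 for mi in means) / (k * (k - 1))
    return ybar, v

# Small illustrative subsamples (k = 3 interviewers, m = 4 subjects each).
demo = [[52, 73, 62, 75], [62, 65, 73, 67], [43, 54, 52, 48]]
ybar, v = interpenetrating_estimate(demo)
print(round(ybar, 2), round(v, 2))   # prints: 60.5 31.77
```

Because v(ybar) is computed from the spread of the interviewer means, it absorbs any systematic per-interviewer bias, which is why it tends to exceed the usual simple-random-sampling variance estimate.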

5. a. How is secondary data useful to a researcher?

[Answer] Secondary data is information gathered for purposes other than the completion of a research project. A variety of secondary information sources is available to the researcher gathering data on an industry, potential product applications and the marketplace. Secondary data is also used to gain initial insight into the research problem. Secondary data is classified in terms of its source as either internal or external. Internal, or in-house, data is secondary information acquired within the organization where the research is being carried out. External secondary data is obtained from outside sources. The two major advantages of using secondary data in market research are time and cost savings.

The secondary research process can be completed rapidly, generally in 2 to 3 weeks. Substantial useful secondary data can be collected in a matter of days by a skilful analyst. When secondary data is available, the researcher need only locate the source of the data and extract the required information. Secondary research is generally less expensive than primary research: the bulk of secondary data gathering does not require the use of expensive, specialized, highly trained personnel, and secondary research expenses are incurred by the originator of the information.

There are also a number of disadvantages of using secondary data. These include:

Secondary information pertinent to the research topic may not be available, or may be available only in insufficient quantities. Some secondary data may be of questionable accuracy and reliability; even government publications and trade-magazine statistics can be misleading. For example, many trade magazines survey their members to derive estimates of market size, market growth rate and purchasing patterns, then average out these results. Often these statistics are merely average opinions based on less than 10% of their members. Data may be in a different format or units than is required by the researcher. Much secondary data is several years old and may not reflect current market conditions. Trade journals and other publications often accept articles six months before they appear in print, and the underlying research may have been done months or even years earlier.

As a general rule, a thorough research of the secondary data should be undertaken prior to conducting primary research. The secondary information will provide a useful background and will identify key questions and issues that will need to be addressed by the primary research.

b. What are the criteria used for evaluation of secondary data?

6. What are the differences between observation and interviewing as methods of data collection? Give two specific examples of situations where either observation or interviewing would be more appropriate.

MBA SEMESTER III MB0050 Research Methodology- 4 Credits (Book ID: B1206) Assignment Set- 1 (60 Marks) Note: Each question carries 10 Marks. Answer all the questions

1. a. Explain the general characteristics of observation. [5 marks] b. What is the utility of observation in business research? [5 marks]
2. a. Briefly explain interviewing techniques in business research. [5 marks] b. What are the problems encountered in interviews? [5 marks]
3. a. What are the various steps in processing of data? [5 marks] b. How is data editing done at the time of recording of data? [5 marks]
4. a. What are the fundamentals of frequency distribution? [5 marks] b. What are the types and general rules for graphical representation of data? [5 marks]
5. Strictly speaking, would case studies be considered as scientific research? Why or why not? [10 marks]
6. a. Analyse the case study and descriptive approaches to research. [5 marks] b. Distinguish between research methods & research methodology. [5 marks]
