
Assignment of BSRM


BANSAL

Ques.1. Discuss the various Research Designs employed in business management projects with suitable illustrations.
Ans.1. Research Design is considered as a "blueprint" for research, dealing with at least four problems: which questions to study, which data are relevant, what data to collect, and how to analyze the results. The best design depends on the research question as well as the orientation of the researcher. Every design has its positive and negative sides. In sociology, there are three basic designs which are considered to generate reliable data; these are cross-sectional, longitudinal, and cross-sequential. Research design can be divided into fixed and flexible research designs. Others have referred to this distinction as quantitative research designs and qualitative research designs, respectively. However, fixed designs need not be quantitative, and flexible designs need not be qualitative. In fixed designs, the design of the study is fixed before the main stage of data collection takes place. Fixed designs are normally theory-driven; otherwise it is impossible to know in advance which variables need to be controlled and measured. Often, these variables are measured quantitatively. Flexible designs allow for more freedom during the data collection process. One reason for using a flexible research design can be that the variable of interest is not quantitatively measurable, such as culture. In other cases, theory might not be available before one starts the research.

Types of Research Design:

1. Philosophical This may cover a variety of approaches, but will draw primarily on existing literature, rather than new empirical data. A discursive study could examine a particular issue, perhaps from an alternative perspective (e.g. feminist). Alternatively, it might put forward a particular argument or examine a methodological issue. Examples: Davies, P. (1999) What is Evidence-Based Education? British Journal of Educational Studies, 47, 2, 108-121. [A discussion of the meaning of evidence-based education and its relevance to research and policy]. Pring, R. (2000) The False Dualism of Educational Research. Journal of Philosophy of Education, 34, 2, 247-260. [An argument against the idea that qualitative and quantitative research are from rigidly distinct paradigms].

2. Literature Review This may be an attempt to summarise or comment on what is already known about a particular topic. By collecting different sources together, synthesising and analysing critically, it essentially creates new knowledge or perspectives. There are a number of different forms a literature review might take. A systematic review will generally go to great lengths to ensure that all relevant sources (whether published or not) have been included. Details of the search strategies used and the criteria for inclusion must be made clear. A systematic review will often make a quantitative synthesis of the results of all the studies, for example by meta-analysis. Where a literature field is not sufficiently well conceptualised to allow this kind of synthesis, or where findings are largely qualitative (or inadequately quantified), it may not be appropriate to attempt a systematic review. In this case a literature review may help to clarify the key
concepts without attempting to be systematic. It may also offer critical or alternative perspectives to those previously put forward. Examples: Adair, J.G., Sharpe, D. and Huynh, C-L. (1990) Hawthorne Control Procedures in Educational Experiments: A reconsideration of their use and effectiveness. Review of Educational Research, 59, 2, 215-228. [A systematic review and meta-analysis of studies that have tried to measure the Hawthorne Effect]. Black, P. and Wiliam, D. (1998) Assessment and classroom learning. Assessment in Education, 5, 1, 7-74. [Quite a long article, but it includes an excellent summary of a large field of research]. Brown, M., Askew, M., Baker, D., Denver, H. and Millett, A. (1998) Is the National Numeracy Strategy Research-Based? British Journal of Educational Studies, 46, 4, 362-385. [A review of the evidence for and against the numeracy strategy].

3. Case Study This will involve collecting empirical data, generally from only one or a small number of cases. It usually provides rich detail about those cases, of a predominantly qualitative nature. There are a number of different approaches to case study work (e.g. ethnographic, hermeneutic, ethogenic, etc.) and the principles and methods followed should be made clear. A case study generally aims to provide insight into a particular situation and often stresses the experiences and interpretations of those involved. It may generate new understandings, explanations or hypotheses. However, it does not usually claim representativeness and should be careful not to over-generalise. Examples: Jimenez, R.T. and Gersten, R. (1999) Lessons and Dilemmas derived from the Literacy Instruction of two Latina/o Teachers. American Educational Research Journal, 36, 2, 265-302. [A detailed study of the behaviour and experiences of two teachers of English to minority students]. Ball, S. (1981) Beachside Comprehensive: a case study of secondary schooling. Cambridge, CUP. [This is a book, but a classic case study].

4. Survey Where an empirical study involves collecting information from a larger number of cases, perhaps using questionnaires, it is usually described as a survey. Alternatively, a survey might make use of already available data, collected for another purpose. A survey may be cross-sectional (data collected at one time) or longitudinal (collected over a period). Because of the larger number of cases, a survey will generally involve some quantitative analysis. Issues of generalisability are usually important in presenting survey results, so it is vital to report how samples were chosen, what response rates were achieved and to comment on the validity and reliability of any instruments used. Examples: Francis, B. (2000) The Gendered Subject: students' subject preferences and discussions of gender and subject ability. Oxford Review of Education, 26, 1, 35-48. Denscombe, M. (2000) Social Conditions for Stress: young people's experience of doing GCSEs. British Educational Research Journal, 26, 3, 359-374.

5. Evaluation This might be an evaluation of a curriculum innovation or organisational change. An evaluation can be formative (designed to inform the process of development) or summative (to judge the effects). Often an evaluation will have elements of both. If an evaluation relates to a situation in which the researcher is also a participant it may be described as action research. Evaluations will often make use of case study and survey methods, and a summative evaluation will ideally also use experimental methods. Examples: Burden, R. and Nichols, L. (2000) Evaluating the process of introducing a thinking skills programme into the secondary school curriculum. Research Papers in Education, 15, 3, 259-74. Ruddock, J., Berry, M., Brown, N. and Frost, D. (2000) Schools learning from other schools: cooperation in a climate of competition. Research Papers in Education, 15, 3, 293-306.

6. Experiment This involves the deliberate manipulation of an intervention in order to determine its effects. The intervention might involve individual pupils, teachers, schools or some other unit. Again, if the researcher is also a participant (e.g. a teacher) this could be described as action research. An experiment may compare a number of interventions with each other, or may compare one (or more) to a control group. If allocation to these different treatment groups is decided at random it may be called a true experiment; if allocation is on any other basis (e.g. using naturally arising or self-selected groups) it is usually called a quasi-experiment. Issues of generalisability (often called external validity) are usually important in an experiment, so the same attention must be given to sampling, response rates and instrumentation as in a survey (see above). It is also important to establish causality (internal validity) by demonstrating the initial equivalence of the groups (or attempting to make suitable allowances), presenting evidence about how the different interventions were actually implemented and attempting to rule out any other factors that might have influenced the result. Examples: Finn, J.D. and Achilles, C.M. (1990) Answers and questions about class size: A statewide experiment. American Educational Research Journal, 27, 557-577. [A large-scale classic experiment to determine the effects of small classes on achievement].
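As a brief, hypothetical illustration of the random allocation that distinguishes a true experiment from a quasi-experiment, the following sketch (in Python, with invented pupil identifiers) randomly assigns participants to a treatment or a control group:

import random

# Hypothetical list of participating pupils (identifiers are invented for illustration).
pupils = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.seed(42)          # fixed seed so the allocation can be reproduced
random.shuffle(pupils)   # a random order removes any systematic allocation bias

half = len(pupils) // 2
treatment_group = pupils[:half]   # receive the intervention
control_group = pupils[half:]     # do not receive the intervention

print("Treatment:", treatment_group)
print("Control:  ", control_group)

Because allocation depends only on the random shuffle, any initial differences between the two groups are due to chance alone, which is what supports claims about internal validity.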

Ques 2. Explain the different sampling techniques available in business statistics, give examples.
Ans.2. Sampling is concerned with the selection of a subset of individuals from within a population to estimate characteristics of the whole population. Researchers rarely survey the entire population because the cost of a census is too high. The three main advantages of sampling are that the cost is lower, data collection is faster, and since the data set is smaller it is possible to ensure homogeneity and to improve the accuracy and quality of the data. Probability Sampling

A probability sampling scheme is one in which every unit in the population has a chance (greater than zero) of being selected in the sample, and this probability can be accurately determined. The combination of these traits makes it possible to produce unbiased estimates of population totals, by weighting sampled units according to their probability of selection. Example:- We want to estimate the total income of adults living in a given street. We visit each household in that street, identify all adults living there, and randomly select one adult from each household. (For example, we can allocate each person a random number, generated from a uniform distribution between 0 and 1, and select the person with the highest number in each household). We then interview the selected person and find their income. People living on their own are certain to be selected, so we simply add their income to our estimate of the total. But a person living in a household of two adults has only a one-in-two chance of selection. To reflect this, when we come to such a household, we would count the selected person's income twice towards the total.
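A minimal sketch of the street-income example above, written in Python with invented household incomes: one adult is selected at random per household, and the selected income is weighted by the number of adults in that household (the inverse of the selection probability):

import random

# Hypothetical street: each inner list holds the adult incomes of one household (invented figures).
households = [
    [25000],                  # single adult: selected with probability 1
    [30000, 42000],           # two adults: each selected with probability 1/2
    [18000, 22000, 35000],    # three adults: each selected with probability 1/3
]

random.seed(1)
estimated_total = 0
for adults in households:
    chosen_income = random.choice(adults)      # randomly select one adult per household
    weight = len(adults)                       # 1 / selection probability
    estimated_total += chosen_income * weight  # weight the income accordingly

print("Estimated total income:", estimated_total)
print("Actual total income:   ", sum(sum(h) for h in households))

On average over repeated draws the weighted estimate equals the true total, which is what makes the design an unbiased probability sample.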

Methods of Probability Sampling:

Simple Random Sampling:- In a simple random sample ('SRS') of a given size, all such subsets of the frame are given an equal probability. Each element of the frame thus has an equal probability of selection: the frame is not subdivided or partitioned. Furthermore, any given pair of elements has the same chance of selection as any other such pair (and similarly for triples, and so on). This minimises bias and simplifies analysis of results. In particular, the variance between individual results within the sample is a good indicator of variance in the overall population, which makes it relatively easy to estimate the accuracy of results. SRS may also be cumbersome and tedious when sampling from an unusually large target population. In some cases, investigators are interested in research questions specific to subgroups of the population. For example, researchers might be interested in examining whether cognitive ability as a predictor of job performance is equally applicable across racial groups. SRS cannot accommodate the needs of researchers in this situation because it does not provide subsamples of the population. Stratified sampling, which is discussed below, addresses this weakness of SRS.

Systematic Sampling:- Systematic sampling relies on arranging the target population according to some ordering scheme and then selecting elements at regular intervals through that ordered list. Systematic sampling involves a random start and then proceeds with the selection of every kth element from then onwards. In this case, k = (population size/sample size). It is important that the starting point is not automatically the first in the list, but is instead randomly chosen from within the first to the kth element in the list. A simple example would be to select every 10th name from the telephone directory (an 'every 10th' sample, also referred to as 'sampling with a skip of 10'). As long as the starting point is randomized, systematic sampling is a type of probability sampling. It is easy to implement and the stratification induced can make it efficient, if the variable by which the list is ordered is correlated with the variable of interest. 'Every 10th' sampling is especially useful for efficient sampling from databases. Example:- Suppose we wish to sample people from a long street that starts in a poor district (house #1) and ends in an expensive district (house #1000). A simple random selection of addresses from this street could easily end up with too many from the high end and too few from the low end (or vice versa), leading to an unrepresentative sample. Selecting (e.g.) every 10th street number along the street ensures that the sample is spread evenly along the length of the street, representing all of these districts.

Stratified Sampling:- Where the population embraces a number of distinct categories, the frame can be organized by these categories into separate "strata." Each stratum is then sampled as an independent subpopulation, out of which individual elements can be randomly selected. There are several potential benefits to stratified sampling. First, dividing the population into distinct, independent strata can enable researchers to draw inferences about specific subgroups that may be lost in a more generalized random sample. Second, utilizing a stratified sampling method can lead to more efficient statistical estimates (provided that strata are selected based upon relevance to the criterion in question, instead of availability of the samples). Even if a stratified sampling approach does not lead to increased statistical efficiency, such a tactic will not result in less efficiency than would simple random sampling, provided that each stratum is proportional to the group's size in the population. Third, it is sometimes the case that data are more readily available for individual, pre-existing strata within a population than for the overall population; in such cases, using a stratified sampling approach may be more convenient than aggregating data across groups (though this may potentially be at odds with the previously noted importance of utilizing criterion-relevant strata). Finally, since each stratum is treated as an independent population, different sampling approaches can be applied to different strata, potentially enabling researchers to use the approach best suited (or most cost-effective) for each identified subgroup within the population.
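A minimal sketch of proportional stratified sampling, assuming an invented frame in which every element carries a stratum label; the sample allocated to each stratum is proportional to that stratum's share of the population, and elements are then drawn at random within each stratum:

import random
from collections import defaultdict

random.seed(7)

# Hypothetical frame: (element id, stratum label); the labels and sizes are invented.
frame = [(i, "urban" if i < 600 else "rural") for i in range(1000)]
sample_size = 100

# Group the frame by stratum.
strata = defaultdict(list)
for element, stratum in frame:
    strata[stratum].append(element)

sample = []
for stratum, members in strata.items():
    # Allocation proportional to the stratum's share of the population.
    n_stratum = round(sample_size * len(members) / len(frame))
    sample.extend(random.sample(members, n_stratum))    # SRS within the stratum

print({s: len(m) for s, m in strata.items()}, "->", len(sample), "elements sampled")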

Probability Proportional to Size Sampling:- In some cases the sample designer has access to an "auxiliary variable" or "size measure", believed to be correlated to the variable of interest, for each element in the population. These data can be used to improve accuracy in sample design. One option is to use the auxiliary variable as a basis for stratification, as discussed above. Another option is probability-proportional-to-size ('PPS') sampling, in which the selection probability for each element is set to be proportional to its size measure, up to a maximum of 1. In a simple PPS design, these selection probabilities can then be used as the basis for Poisson sampling. However, this has the drawback of variable sample size, and different portions of the population may still be over- or under-represented due to chance variation in selections. To address this problem, PPS may be combined with a systematic approach. Example: Suppose we have six schools with populations of 150, 180, 200, 220, 260, and 490 students respectively (total 1500 students), and we want to use student population as the basis for a PPS sample of size three. To do this, we could allocate the first school numbers 1 to 150, the second school 151 to 330 (= 150 + 180), the third school 331 to 530, and so on to the last school (1011 to 1500). We then generate a random start between 1 and 500 (equal to 1500/3) and count through the school populations by multiples of 500. If our random start was 137, we would select the schools which have been allocated numbers 137, 637, and 1137, i.e. the first, fourth, and sixth schools (a short sketch of this selection follows the cluster sampling paragraph below).

Cluster Sampling:- Sometimes it is more cost-effective to select respondents in groups ('clusters'). Sampling is often clustered by geography, or by time periods. (Nearly all samples are in some sense 'clustered' in time - although this is rarely taken into account in the analysis.) For instance, if surveying households within a city, we might choose to select 100 city blocks and then interview every household within the selected blocks. Clustering can reduce travel and administrative costs. In the example above, an interviewer can make a single trip to visit several households in one block, rather than having to drive to a different block for each household.
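Returning to the probability-proportional-to-size school example above, a minimal sketch in Python of the cumulative-total, random-start selection (the school populations and the start value 137 are taken from the example):

import itertools

school_sizes = [150, 180, 200, 220, 260, 490]   # total 1500 students
sample_size = 3
interval = sum(school_sizes) // sample_size     # 1500 / 3 = 500
random_start = 137                              # in practice drawn at random from 1..interval

# Cumulative upper bounds: 150, 330, 530, 750, 1010, 1500.
cumulative = list(itertools.accumulate(school_sizes))

selected = []
for target in (random_start + k * interval for k in range(sample_size)):
    # Find the first school whose allocated number range contains the target.
    for index, upper in enumerate(cumulative):
        if target <= upper:
            selected.append(index + 1)   # 1-based school number
            break

print(selected)   # [1, 4, 6] -> the first, fourth and sixth schools, as in the example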

Non-Probability Sampling Non-probability sampling is any sampling method where some elements of the population have no chance of selection (these are sometimes referred to as 'out of coverage'/'under covered'), or where the probability of selection can't be accurately determined. It involves the selection of elements based on assumptions regarding the population of interest, which forms the criteria for selection. Hence, because the selection of elements is non-random, non-probability sampling does not allow the estimation of sampling errors. These conditions give rise to exclusion bias, placing limits on how much information a sample can provide about the population. Information about the relationship between sample and population is limited, making it difficult to extrapolate from the sample to the population. Example: We visit every household in a given street, and interview the first person to answer the door. In any household with more than one occupant, this is a non-probability sample, because some people are more likely to answer the door (e.g. an unemployed person who spends most of their time at home is more likely to answer than an employed housemate who might be at work when the interviewer calls) and it's not practical to calculate these probabilities.

Types of Non-Probability Sampling Quota Sampling:- In this sampling, the population is first segmented into mutually exclusive sub-groups, just as in stratified sampling. Then judgement is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the age of 45 and 60. It is this second step which makes the technique one of non-probability sampling. In quota sampling the selection of the sample is non-random. For example, interviewers might be tempted to interview those who look most helpful. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is its greatest weakness, and quota versus probability sampling has been a matter of controversy for many years. Accidental Sampling:- Accidental sampling (sometimes known as grab, convenience or opportunity sampling) is a type of non-probability sampling which involves the sample being drawn from that part of the population which is close to hand. That is, a population is selected because it is readily available and convenient. Respondents may be included because one happens to meet them, or they may be found through technological means such as the internet or the telephone. The researcher using such a sample cannot scientifically make generalizations about the total population from this sample because it would not be representative enough. For example, if the interviewer were to conduct such a survey at a shopping center early in the morning on a given day, the people that he/she could interview would be limited to those present there at that time, and would not represent the views of other members of society that would be captured if the survey were conducted at different times of day and several times per week. This type of sampling is most useful for pilot testing.
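A minimal sketch of the quota-filling step described above, using an invented stream of respondents; interviewing continues until each segment's quota (200 females and 300 males aged 45 to 60, as in the example) is filled, and the choice of who is approached remains judgemental rather than random:

import random

random.seed(3)
quotas = {"female": 200, "male": 300}       # target counts taken from the example
counts = {"female": 0, "male": 0}
sample = []

def next_respondent(i):
    # Hypothetical person encountered by the interviewer.
    return {"id": i, "sex": random.choice(["female", "male"]), "age": random.randint(45, 60)}

i = 0
while any(counts[s] < quotas[s] for s in quotas):
    person = next_respondent(i)
    i += 1
    if counts[person["sex"]] < quotas[person["sex"]]:
        sample.append(person)               # selection is by judgement, not by chance
        counts[person["sex"]] += 1

print(counts)   # {'female': 200, 'male': 300}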

Line-Intercept Sampling:- Line-intercept sampling is a method of sampling elements in a region whereby an element is sampled if a chosen line segment, called a "transect", intersects the element. Panel Sampling:- Panel sampling is the method of first selecting a group of participants through a random sampling method and then asking that group for the same information again several times over a period of time. Therefore, each participant is given the same survey or interview at two or more time points; each period of data collection is called a "wave". This longitudinal sampling method allows estimates of changes in the population, for example with regard to anything from chronic illness to job stress to weekly food expenditures. Panel sampling can also be used to inform researchers about within-person health changes due to age or to help explain changes in continuous dependent variables such as spousal interaction. There have been several proposed methods of analyzing panel data, including MANOVA, growth curves, and structural equation modelling with lagged effects.

Ques.3. List and explain the various Primary and Secondary Sources of Data collection, give suitable illustrations.
Ans.3. Primary Sources:- Primary sources are original sources from which the researcher directly collects data that have not been previously collected, e.g., collection of data directly by the researcher on brand awareness, brand preference, brand loyalty and other aspects of consumer behaviour from a sample of consumers by interviewing them. Primary data are first-hand information collected through various methods such as observation, interviewing, mailing etc. Secondary Sources:- These are sources containing data that have been collected and compiled for another purpose. The secondary sources consist of readily available compendia and already compiled statistical statements and reports whose data may be used by researchers for their studies, e.g., census reports, annual reports and financial statements of companies, Statistical statements, Reports of Government Departments, Annual Reports on currency and finance published by the National Bank for Ethiopia, Statistical Statements relating to Cooperatives, Federal Cooperative Commission, Commercial Banks and Micro Finance Credit Institutions published by the National Bank for Ethiopia, Reports of the National Sample Survey Organisation, Reports of trade associations, publications of international organisations such as UNO, IMF, World Bank, ILO, WHO, etc., Trade and Financial Journals, newspapers, etc. Secondary sources consist of not only published records and reports, but also unpublished records. The latter category includes various records and registers maintained by firms and organisations, e.g., accounting and financial records, personnel records, register of members, minutes of meetings, inventory records, etc. Features of Secondary Sources:- Though secondary sources are diverse and consist of all sorts of materials, they have certain common characteristics: First, they are ready-made and readily available, and do not require the trouble of constructing tools and administering them. Second, they consist of data over which a researcher has no original control over collection and classification. Others shape both the form and the content of secondary sources. Clearly, this is a feature which can limit the research value of secondary sources. Finally, secondary sources are not limited in time and space. That is, the researcher using them need not have been present when and where they were gathered. Use of Secondary Data

The secondary data may be used in three ways by a researcher: 1. Some specific information from secondary sources may be used for reference purposes. 2. Secondary data may be used as benchmarks against which the findings of a research may be tested.

3. Secondary data may be used as the sole source of information for a research project. Such studies as Securities Market Behaviour, Financial Analysis of Companies, and Trends in credit allocation in commercial banks, Sociological Studies on crimes, historical studies, and the like depend primarily on secondary data. Year books, Statistical reports of government departments, reports of public organisations like Bureau of Public Enterprises, Census Reports etc. serve as major data sources for such research studies. Advantages 1. Secondary data, if available, can be secured quickly and cheaply.

2. Wider geographical area and longer reference period may be covered without much cost. Thus the use of secondary data extends the researcher's space and time reach. 3. The use of secondary data broadens the database from which scientific generalizations can be made. 4. The use of secondary data enables a researcher to verify the findings based on primary data.

Disadvantages 1. The most important limitation is that the available data may not meet our specific research needs. 2. The available data may not be as accurate as desired.

3. The secondary data are not up-to-date and become obsolete when they appear in print, because of the time lag in producing them. 4. Finally, information about the whereabouts of sources may not be available to all social scientists.

Methods of Collecting Primary Data The researcher directly collects primary data from their original sources. In this case, the researcher can collect the required data precisely according to his research needs; he can collect them when he wants them and in the form he needs them. But the collection of primary data is costly and time-consuming. Yet, for several types of social science research such as socio-economic surveys, social anthropological studies of rural communities and tribal communities, sociological studies of social problems and social institutions, marketing research, leadership studies, opinion polls, attitudinal surveys, readership, radio listening and T.V. viewing surveys, knowledge-awareness-practice (KAP) studies, farm management studies, business management studies, etc., the required data are not available from secondary sources and have to be gathered directly from primary sources. There are various methods of data collection. A method is different from a tool. While a method refers to the way or mode of gathering data, a tool is an instrument used for the method. For example, a
schedule is used for interviewing. The important methods are (a) observation, (b) interviewing, (c) mail survey, (d) experimentation, (e) simulation, and (f) projective technique. Observation involves the gathering of data relating to the selected research by viewing and/or listening. Interviewing involves face-to-face conversation between the investigator and the respondent. Mailing is used for collecting data by getting questionnaires completed by respondents. Experimentation involves a study of independent variables under controlled conditions. An experiment may be conducted in a laboratory or in the field in a natural setting. Simulation involves the creation of an artificial situation similar to the actual life situation. Projective methods aim at drawing inferences about the characteristics of respondents by presenting stimuli to them. Each method has its advantages and disadvantages.

Choice of Methods of Data Collection Which of the above methods of data collection should be selected for a proposed research project? This is one of the questions to be considered while designing the research plan. One or more methods have to be chosen. No method is universal. Each method's unique features should be compared with the needs and conditions of the study and thus the choice of the methods should be decided.

OBSERVATION Meaning and Importance Observation means viewing or seeing. We go on observing something or other while we are awake. Most of such observations are just casual and have no specific purpose. But observation as a method of data collection is different from such casual viewing. Observation may be defined as a systematic viewing of a specific phenomenon in its proper setting for the specific purpose of gathering data for a particular study. Observation as a method includes both 'seeing' and 'hearing.' It is accompanied by perceiving as well. Observation also plays a major role in formulating and testing hypotheses in the social sciences. Behavioural scientists observe interactions in small groups; anthropologists observe simple societies and small communities; political scientists observe the behaviour of political leaders and political institutions. Types of Observation Observation may be classified in different ways. With reference to the investigator's role, it may be classified into (a) participant observation, and (b) non-participant observation. In terms of mode of observation, it may be classified into (c) direct observation, and (d) indirect observation. With reference to the rigour of the system adopted, observation is classified into (e) controlled observation, and (f) uncontrolled observation.

EXPERIMENTATION Experimentation is a research process used to study the causal relationships between variables. It aims at studying the effect of an independent variable on a dependent variable, by keeping the other independent variables constant through some type of control. For example, a social scientist may use experimentation for studying the effect of a method of family planning publicity on people's awareness of family planning techniques. Experimentation requires special efforts. It is often extremely difficult to design, and it is also a time-consuming process. Why then should one take such trouble? Why not simply observe/survey the phenomenon? The fundamental weakness of any non-experimental study is its inability to specify causes
and effect. It can show only correlations between variables, but correlations alone never prove causation. The experiment is the only method which can show the effect of an independent variable on a dependent variable. In experimentation, the researcher can manipulate the independent variable and measure its effect on the dependent variable. For example, the effect of various types of promotional strategies on the sale of a given product can be studied by using different advertising media such as T.V., radio and newspapers. Moreover, an experiment provides the opportunity to vary the treatment (experimental variable) in a systematic manner, thus allowing for the isolation and precise specification of important differences. The applications of the experimental method are the laboratory experiment and the field experiment.

SIMULATION Simulation is one of the forms of observational methods. It is a process of conducting experiments on a symbolic model representing a phenomenon. Abelson defines simulation as the exercise of a flexible imitation of processes and outcomes for the purpose of clarifying or explaining the underlying mechanisms involved. It is a symbolic abstraction, simplification and substitution for some referent system. In other words, simulation is a theoretical model of the elements, relations and processes which symbolize some referent system, e.g., the flow of money in the economic system may be simulated in an operating model consisting of a set of pipes through which liquid moves. Simulation is thus a technique of performing sampling experiments on a model of the system. The experiments are done on the model instead of on the real system, because the latter would be too inconvenient and expensive. Simulation is a recent research technique, but it has deep roots in history. Chess has often been considered a simulation of medieval warfare.

INTERVIEWING Interviewing is one of the major methods of data collection. It may be defined as a two-way systematic conversation between an investigator and an informant, initiated for obtaining information relevant to a specific study. It involves not only conversation, but also learning from the respondent's gestures, facial expressions and pauses, and his environment. Interviewing requires face-to-face contact or contact over telephone and calls for interviewing skills. It is done by using a structured schedule or an unstructured guide. Interviewing may be used either as a main method or as a supplementary one in studies of persons. Interviewing is the only suitable method for gathering information from illiterate or less educated respondents. It is useful for collecting a wide range of data, from factual demographic data to highly personal and intimate information relating to a person's opinions, attitudes, values, beliefs, past experience and future intentions. When qualitative information is required, or probing is necessary to draw out responses fully, interviewing is required. Where the area covered by the survey is compact, or when a sufficient number of qualified interviewers are available, personal interview is feasible. Interview is often superior to other data-gathering methods. People are usually more willing to talk than to write. Once rapport is established, even confidential information may be obtained. It permits probing into the context and reasons for answers to questions. Interview can add flesh to statistical information. It enables the investigator to grasp the behavioural context of the data furnished by the respondents. It permits the investigator to seek clarifications and brings to the forefront those questions that, for one reason or another, respondents do not want to answer.

Types of Interviews Telephone Interviewing Telephone interviewing is a non-personal method of data collection.

Group Interviews Group interview may be defined as a method of collecting primary data in which a number of individuals with a common interest interact with each other. Unlike in a personal interview, the flow of information in a group interview is multidimensional. Interviewing Process The interviewing process consists of the following stages: preparation, introduction, developing rapport, carrying the interview forward, recording the interview, and closing the interview.

PANEL METHOD The panel method is a method of data collection by which data is collected from the same sample of respondents at intervals, either by mail or by personal interview. This is used for longitudinal studies on economic conditions, expenditure pattern, consumer behaviour, recreational pattern, effectiveness of advertising, voting behaviour, and so on. The period over which the panel members are contacted for information may spread over several months or years. The time interval at which they are contacted repeatedly may be 10 or 15 days, or one or two months, depending on the nature of the study and the memory span of the respondents. The panel may be static or dynamic. A static or continuous panel is one in which the membership remains the same throughout the life of the panel, except for the members who drop out. The dropouts are not replaced.

MAIL SURVEY The mail survey is another method of collecting primary data. This method involves sending questionnaires to the respondents with a request to complete them and return them by post. This can be used in the case of educated respondents only. The mail questionnaire should be simple so that the respondents can easily understand the questions and answer them. It should preferably contain mostly closed-ended and multiple-choice questions so that it could be completed within a few minutes. The distinctive feature of the mail survey is that the questionnaire is self-administered by the respondents themselves and the responses are recorded by them, and not by the investigator as in the case of the personal interview method. It does not involve face-to-face conversation between the investigator and the respondent. Communication is carried out only in writing and this requires more cooperation from the respondents than does verbal
communication. There are some alternative methods of distributing questionnaires to the respondents. They are: (1) personal delivery, (2) attaching the questionnaire to a product, (3) advertising the questionnaire in a newspaper or magazine, and (4) newsstand inserts.

PROJECTIVE TECHNIQUES The direct methods of data collection, viz., personal interview, telephone interview and mail survey, rely on respondents' own reports of their behaviour, beliefs, attitudes, etc. But respondents may be unwilling to discuss controversial issues or to reveal intimate information about themselves, or may be reluctant to express their true views fearing that those views are generally disapproved of. In order to overcome these limitations, indirect methods have been developed. Projective techniques are such indirect methods. They became popular during the 1950s as a part of motivation research. Projective techniques involve the presentation of ambiguous stimuli to the respondents for interpretation. In doing so, the respondents reveal their inner characteristics. The stimuli may be a picture, a photograph, an inkblot or an incomplete sentence. The basic assumption of projective techniques is that a person projects his own thoughts, ideas and attributes when he perceives and responds to ambiguous or unstructured stimulus materials. Thus a person's unconscious operations of the mind are brought to a conscious level in a disguised and projected form, and the person projects his inner characteristics. Projective techniques may be divided into three broad categories: (a) visual projective techniques, (b) verbal projective techniques, and (c) expressive techniques.

SOCIOMETRY Sociometry is a method for discovering, describing and evaluating social status, structure, and development through measuring the extent of acceptance or rejection between individuals in groups. Franz defines sociometry as a method used for the discovery and manipulation of social configurations by measuring the attractions and repulsions between individuals in a group. It is a means for studying the choice, communication and interaction patterns of individuals in a group. It is concerned with attractions and repulsions between individuals in a group. In this method, a person is asked to choose one or more persons according to specified criteria, in order to find out the person or persons with whom he would like to associate.

Ques.4. Explain the scaling techniques used in Management Research quoting relevant cases.
Ans.4. Scaling is the process of measuring or ordering entities with respect to quantitative attributes or traits. For example, a scaling technique might involve estimating individuals' levels of extraversion, or the perceived quality of products. Certain methods of scaling permit estimation of magnitudes on a continuum, while other methods provide only for relative ordering of the entities. Comparative and Non-Comparative Scaling Pairwise Comparison:- Pairwise comparison generally refers to any process of comparing entities in pairs to judge which of each pair is preferred, or has a greater amount of some quantitative property. The method of pairwise comparison is used in the scientific study of preferences, attitudes, voting systems, social choice, public choice, and multiagent AI systems. In psychology literature, it is often referred to as paired comparison.

If an individual or organization expresses a preference between two mutually distinct alternatives, this preference can be expressed as a pairwise comparison. If the two alternatives are x and y, the following are the possible pairwise comparisons: the agent prefers x over y ("x > y" or "xPy"); the agent prefers y over x ("y > x" or "yPx"); the agent is indifferent between both alternatives ("x = y" or "xIy"). Probabilistic Models for Pairwise Comparison In terms of modern psychometric theory, Thurstone's approach, called the law of comparative judgment, is more aptly regarded as a measurement model. The Bradley-Terry-Luce (BTL) model (Bradley & Terry, 1952; Luce, 1959) is often applied to pairwise comparison data to scale preferences. The BTL model is identical to Thurstone's model if the simple logistic function is used. Thurstone used the normal distribution in applications of the model. The simple logistic function varies by less than 0.01 from the cumulative normal ogive across the range, given an arbitrary scale factor. In the BTL model, the probability that object j is judged to have more of an attribute than object i is:
P(Xij = 1) = σ(δj − δi) = exp(δj − δi) / (1 + exp(δj − δi))
where δj is the scale location of object j, δi that of object i, and σ is the inverse logit (logistic) function. For example, the scale location might represent the perceived quality of a product, or the perceived weight of an object. The BTL model is very closely related to the Rasch model for measurement. Thurstone used the method of pairwise comparisons as an approach to measuring perceived intensity of physical stimuli, attitudes, preferences, choices, and values. He also studied implications of the theory he developed for opinion polls and political voting (Thurstone, 1959). Transitivity of Pairwise Comparisons For a given decision agent, if the information, objective, and alternatives used by the agent remain constant, then it is generally assumed that pairwise comparisons over those alternatives by the decision agent are transitive. Most agree upon what transitivity is, though there is debate about the transitivity of indifference. The rules of transitivity are as follows for a given decision agent: if xPy and yPz, then xPz; if xPy and yIz, then xPz; if xIy and yPz, then xPz; if xIy and yIz, then xIz. This corresponds to (xPy or xIy) being a total preorder, P being the corresponding strict weak order, and I being the corresponding equivalence relation. Probabilistic models require transitivity only within the bounds of errors of estimates of scale locations of entities. Thus, decisions need not be deterministically transitive in order to apply probabilistic models. However, transitivity will generally hold for a large number of comparisons if models such as the BTL can be effectively applied.
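A minimal sketch of the BTL probability given above; the scale locations are invented purely to show how the logistic difference is computed:

import math

def btl_probability(delta_j, delta_i):
    # Probability that object j is judged to have more of the attribute than object i.
    return math.exp(delta_j - delta_i) / (1 + math.exp(delta_j - delta_i))

# Hypothetical scale locations (e.g. perceived quality) for two products.
delta_a, delta_b = 1.2, 0.4
print(round(btl_probability(delta_a, delta_b), 3))   # ~0.69: A is preferred to B about 69% of the time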

Rasch Scaling Model:- Rasch models are used for analysing data from assessments to measure variables such as abilities, attitudes, and personality traits. For example, they may be used to estimate a student's reading ability from answers to questions on a reading assessment, or the extremity of a person's attitude to capital punishment from responses on a questionnaire. Rasch models are particularly used in psychometrics, the field concerned with the theory and technique of psychological and educational measurement. In addition, they are increasingly being used in other areas, including the health professions and market research, because of their general applicability. The mathematical theory underlying Rasch models is in some respects the same as item response theory. However, proponents of Rasch models argue that they have a specific property that provides a criterion for successful measurement. Application of the models provides diagnostic information regarding how well the criterion is met. Application of the models can also provide information about how well items or questions on assessments work to measure the ability or trait. In the Rasch model, the probability of a specified response (e.g. right/wrong answer) is modelled as a function of person and item parameters. Specifically, in the simple Rasch model, the probability of a correct response is modelled as a logistic function of the difference between the person parameter and the item parameter. In most contexts, the parameters of the model pertain to the level of a quantitative trait possessed by a person or item. For example, in educational tests, item parameters pertain to the difficulty of items while person parameters pertain to the ability or attainment level of people who are assessed. The higher a person's ability relative to the difficulty of an item, the higher the probability of a correct response on that item.
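For reference, the simple (dichotomous) Rasch model is usually written as P(correct) = exp(B − D) / (1 + exp(B − D)), where B is the person's ability and D is the item's difficulty. A minimal sketch with invented parameter values:

import math

def rasch_probability(ability, difficulty):
    # Probability of a correct response under the simple dichotomous Rasch model.
    return math.exp(ability - difficulty) / (1 + math.exp(ability - difficulty))

# Hypothetical abilities and difficulties, expressed in logits.
print(round(rasch_probability(ability=1.0, difficulty=0.0), 3))   # able person, easy item -> ~0.731
print(round(rasch_probability(ability=0.0, difficulty=1.0), 3))   # same gap reversed -> ~0.269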

Rank-Ordering:- A ranking is a relationship between a set of items such that, for any two items, the first is either 'ranked higher than', 'ranked lower than' or 'ranked equal to' the second. In mathematics, this is known as a weak order or total preorder of objects. It is not necessarily a total order of objects because two different objects can have the same ranking. The rankings themselves are totally ordered. For example, materials are totally preordered by hardness, while degrees of hardness are totally ordered. By reducing detailed measures to a sequence of ordinal numbers, rankings make it possible to evaluate complex information according to certain criteria. Thus, for example, an Internet search engine may rank the pages it finds according to an estimation of their relevance, making it possible for the user quickly to select the pages they are likely to want to see. Analysis of data obtained by ranking commonly requires nonparametric statistics.
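A minimal sketch of reducing detailed measures to ranks, using invented relevance scores; tied scores share a rank, which is exactly why the result is a total preorder (a weak order) rather than a total order:

# Hypothetical search results with relevance scores (higher = more relevant).
scores = {"page_a": 0.92, "page_b": 0.75, "page_c": 0.92, "page_d": 0.40}

ranks = {}
for page, score in scores.items():
    # Competition ranking: an item's rank is one plus the number of strictly better items.
    ranks[page] = 1 + sum(1 for other in scores.values() if other > score)

print(ranks)   # {'page_a': 1, 'page_b': 3, 'page_c': 1, 'page_d': 4}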

Bogardus Social Distance Scale:- The Bogardus social distance scale is a psychological testing scale created by Emory S. Bogardus to empirically measure people's willingness to participate in social contacts of varying degrees of closeness with members of diverse social groups, such as racial and ethnic groups. The scale asks people the extent to which they would be accepting of each group (a score of 1.00 for a group is taken to indicate no social distance):

As close relatives by marriage (score 1.00)
As my close personal friends (2.00)
As neighbours on the same street (3.00)
As co-workers in the same occupation (4.00)
As citizens in my country (5.00)
As only visitors in my country (6.00)
Would exclude from my country (7.00)
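Given the item scores listed above, a group's social distance towards a target group is commonly summarised as the mean of the individual responses; a minimal sketch with invented data (values closer to 1.00 indicate less social distance):

# Hypothetical responses: each value is the closest degree of contact a respondent would accept,
# using the Bogardus scores listed above (1.00 = close kinship by marriage ... 7.00 = exclusion).
responses = [1.0, 2.0, 1.0, 3.0, 2.0, 5.0, 1.0, 4.0]

mean_distance = sum(responses) / len(responses)
print(round(mean_distance, 2))   # 2.38 -> acceptance, on average, at roughly the "close friend" level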

The Bogardus social distance scale is a cumulative scale (a Guttman scale), because agreement with any item implies agreement with all preceding items. The scale has been criticized as too simple because the social interactions and attitudes in close familial or friendship-type relationships may be qualitatively different from social interactions with and attitudes toward relationships with far-away contacts such as citizens or visitors in one's country. Research by Bogardus, first in 1925 and then repeated in 1946, 1956, and 1966, shows that the extent of social distancing in the US is decreasing slightly and that fewer distinctions are being made among groups. A web-based version of the questionnaire has been running since late 1993, and its maintainer has posted at least two papers that update research on social distance. For Bogardus, social distance is a function of affective distance between the members of two groups: in social distance studies the center of attention is on the feeling reactions of persons toward other persons and toward groups of people. Thus, for him, social distance is essentially a measure of how much or little sympathy the members of a group feel for another group. It might be important to note that Bogardus's conceptualization is not the only one in the sociological literature. Several sociologists have pointed out that social distance can also be conceptualized on the basis of other parameters such as the frequency of interaction between different groups or the normative distinctions in a society about who should be considered an insider or outsider.

Guttman Scale:- In statistical surveys conducted by means of structured interviews or questionnaires, a subset of the survey items having binary (e.g., YES or NO) answers forms a Guttman scale (named after Louis Guttman) if they can be ranked in some order so that, for a rational respondent, the response pattern can be captured by a single index on that ordered scale. In other words, on a Guttman scale, items are arranged in an order such that an individual who agrees with a particular item also agrees with items of lower rank-order. For example, a series of items could be (1) "I am willing to be near ice cream"; (2) "I am willing to smell ice cream"; (3) "I am willing to eat ice cream"; and (4) "I love to eat ice cream". Agreement with any one item implies agreement with the lower-order items. This contrasts with topics studied using a Likert scale or a Thurstone scale. The concept of the Guttman scale likewise applies to series of items in other kinds of tests, such as achievement tests, that have binary outcomes. For example, a test of math achievement might order questions based on their difficulty and instruct the examinee to begin in the middle. The assumption is that if the examinee can successfully answer items of that difficulty (e.g., summing two 3-digit numbers), s/he would be able to answer the earlier questions (e.g., summing two 2-digit numbers). Some achievement tests are organized in a Guttman scale to reduce the duration of the test. By designing surveys and tests such that they contain Guttman scales, researchers can simplify the analysis of the outcome of surveys and increase the robustness of that analysis. Guttman scales also make it possible to detect and discard randomized answer patterns, as may be given by uncooperative respondents.

A hypothetical, perfect Guttman scale consists of a uni-dimensional set of items that are ranked in order of difficulty from least extreme to most extreme position. For example, a person scoring a "7" on a ten-item Guttman scale will agree with items 1-7 and disagree with items 8, 9 and 10. An important property of Guttman's model is that a person's entire set of responses to all items can be predicted from their cumulative score because the model is deterministic. A well-known example of a Guttman scale is the Bogardus Social Distance Scale.
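A minimal sketch of the deterministic property just described: on a perfect ten-item Guttman scale, a respondent's whole response pattern can be reconstructed from the cumulative score alone (the score of 7 is taken from the example above):

def pattern_from_score(score, n_items=10):
    # Predicted responses on a perfect Guttman scale: agree with the first `score` items only.
    return [1 if item <= score else 0 for item in range(1, n_items + 1)]

print(pattern_from_score(7))   # [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]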

Non-Comparative Scaling Techniques Likert Scale:- A psychometric scale commonly involved in research that employs questionnaires. It is the most widely used approach to scaling responses in survey research, such that the term is often used interchangeably with rating scale, or more accurately the Likert-type scale, even though the two are not synonymous. The scale is named after its inventor, psychologist Rensis Likert. Likert distinguished between a scale proper, which emerges from collective responses to a set of items (usually eight or more), and the format in which responses are scored along a range. Technically speaking, a Likert scale refers only to the former. The difference between these two concepts has to do with the distinction Likert made between the underlying phenomenon being investigated and the means of capturing variation that points to the underlying phenomenon. When responding to a Likert questionnaire item, respondents specify their level of agreement or disagreement on a symmetric agree-disagree scale for a series of statements. Thus, the range captures the intensity of their feelings for a given item, while the results of analysis of multiple items (if the items are developed appropriately) reveal a pattern that has scaled properties of the kind Likert identified.

Phrase Completion Scales:- This is a type of psychometric scale used in questionnaires. Developed in response to the problems associated with Likert scales, phrase completions are concise, uni-dimensional measures that tap ordinal-level data in a manner that approximates interval-level data.

Semantic Differential:- This is a type of rating scale designed to measure the connotative meaning of objects, events, and concepts. The connotations are used to derive the attitude towards the given object, event or concept. The respondent is asked to choose where his or her position lies on a scale between two bipolar adjectives (for example: "Adequate-Inadequate", "Good-Evil" or "Valuable-Worthless"). Semantic differentials can be used to describe not only persons, but also the connotative meaning of abstract concepts, a capacity used extensively in affect control theory.

Thurstone Scale:- In psychology, the Thurstone scale was the first formal technique for measuring an attitude. It was developed by Louis Leon Thurstone in 1928, as a means of measuring attitudes towards religion. It is made up of statements about a particular issue, and each statement has a numerical value indicating how favorable or unfavorable it is judged to be. People check each of the statements to which they agree, and a mean score is computed, indicating their attitude. Thurstone's method of pair comparisons can be considered a prototype of a normal distribution-based method for scaling dominance matrices. Even though the theory behind this method is quite complex, the algorithm itself is straightforward. For the basic Case V, the frequency dominance matrix is translated into proportions and interfaced with the standard scores. The scale is then obtained as a left-adjusted column marginal average of this standard score matrix. The underlying rationale for the method and the basis for the measurement of the "psychological scale separation between any two stimuli" derives from Thurstone's law of comparative judgment. The principal difficulty with this algorithm is its indeterminacy with respect
to one-zero proportions, which return z values of plus or minus infinity, respectively. The inability of the pair comparisons algorithm to handle these cases imposes considerable limits on the applicability of the method. The most frequent recourse when the 1.00-0.00 frequencies are encountered is their omission. Thus, e.g., Guilford (1954, p. 163) has recommended not using proportions more extreme than .977 or .023, and Edwards (1957, pp. 41-42) has suggested that "if the number of judges is large, say 200 or more, then we might use pij values of .99 and .01, but with less than 200 judges, it is probably better to disregard all comparative judgments for which pij is greater than .98 or less than .02." Since the omission of such extreme values leaves empty cells in the Z matrix, the averaging procedure for arriving at the scale values cannot be applied, and an elaborate procedure for the estimation of unknown parameters is usually employed. An alternative solution of this problem was suggested by Krus and Kennedy. With later developments in psychometric theory, it has become possible to employ direct methods of scaling such as application of the Rasch model or unfolding models such as the Hyperbolic Cosine Model (HCM). The Rasch model has a close conceptual relationship to Thurstone's law of comparative judgment, the principal difference being that it directly incorporates a person parameter. Also, the Rasch model takes the form of a logistic function rather than a cumulative normal function.
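A minimal sketch of the basic Case V algorithm described above, assuming an invented proportion matrix for three stimuli; each proportion is converted to a standard (z) score via the inverse cumulative normal, the scale values are the column averages, and the result is left-adjusted so the lowest value is zero:

from statistics import NormalDist

# Hypothetical proportion matrix: p[i][j] = proportion of judges preferring stimulus j to stimulus i.
p = [
    [0.50, 0.70, 0.85],
    [0.30, 0.50, 0.65],
    [0.15, 0.35, 0.50],
]

# Translate proportions into standard scores.
z = [[NormalDist().inv_cdf(value) for value in row] for row in p]

# Scale value of each stimulus = column average of the z matrix, left-adjusted to zero.
n = len(p)
scale = [sum(z[i][j] for i in range(n)) / n for j in range(n)]
origin = min(scale)
print([round(value - origin, 3) for value in scale])   # larger value = stimulus dominates more often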
