
BRM 524 Assignment




Q.1. Suppose you are invited to deliver a speech on the topic “What is needed to make business research more scientific”. What specific points would you talk about?

Ans: Scientific Method in Business Research

The scientific method is a systematic method of investigation, evaluation, experimentation, interpretation and theorizing. According to Wolfe, it is characterized by critical discrimination, generality, system, and empirical verification. Generally speaking, the scientific method is characterized by systematic study based on theory and facts, universality or generality, objectivity of observation, predictability of results, and verifiability of the phenomenon. It consists of a number of formalities and procedures, which are time consuming.

Time management is a basic requirement for the success of managerial decisions. Many management problems require timely solutions and decisions. In such situations, the management may not have adequate time at its disposal to make use of scientific studies before arriving at decisions.

Laboratory experiments occupy a prominent place in the scientific method, but they may not be useful in many situations of managerial decision-making. Physical-science phenomena may be subjected to laboratory tests and physical control. Most managerial policies and decisions, however, affect human beings. An individual's behavior differs substantially from that of another from time to time, place to place and environment to environment, and it cannot be placed under absolute control; for this reason it is very difficult to employ the scientific method in the practice of management.

Despite the development of dynamic methods of management information systems, many decision areas, such as labor productivity, materials handling, product planning and consumer behavior, require complex data to be analyzed as part of the scientific method, which may not be appropriate for quick managerial decisions. The greater the complexity of the data, the lower the possibility of accuracy and the lower its utility in the management process.

Consumer behavior, a trade union's behavior, workers' behavior, the tastes, temperaments and fashions of the population at large, the demonstration effect in the market, technological development, political change, social change, geographic change, and other such forces influence business policy a great deal.


Many of these forces, especially the human factor, are unpredictable. Despite a thorough scanning of the environment, a perfect prediction on the basis of these variables is just not possible, as it is in the physical sciences. The scientific method of research, therefore, has limited applicability in such managerial decision areas.

The scientific method is effective in the physical sciences because physical phenomena can be verified and evaluated by the senses; but many managerial factors, like the behavioral aspects of an organization, cannot be absolutely tested or verified physically. As a result, the scope of the scientific method in management is profoundly affected. Many management problems cannot be empirically tested, in spite of the extensive use of quantitative techniques in the latter half of this century. Though such techniques are applied to servicing, decision-making, marketing and promotional effectiveness, and production planning, their complexity makes them unpopular with many practitioners. At the same time, the scientific method does not find favour with many organizations and functional executives because of the heavy demand it makes on their time, exposure, resources and manpower. Even in the sciences, where quantitative, empirical and scientific methods are extensively employed, a qualitative approach is used simultaneously, thus limiting the importance of the scientific method. The performance evaluation of a sales force, for example, is usually made by combining both quantitative and qualitative assessments, even though a purely quantitative analysis is possible.

The experimental method is seldom used in managerial analysis, unlike in the physical sciences, and the cause-effect relationship cannot be established beyond doubt in many cases. For example, there is a relationship between sales revenue and the advertising budget, but it is not easy to establish which is the cause and which the effect, because the two are interlinked. The exact magnitude of the effect of each on the other cannot be easily determined, for various other factors (economic variables, market forces, changes in fashion, tastes and temperaments, and competitors' policies) make a substantial impact on the sales volume. Similarly, business policies, marketing opportunities and product specifications attain dynamic dimensions in a dynamic economic, social and business environment.


Evidently, research, scientific methods and their results have very little policy implication in such situations, and that is why the resources invested in research do not yield any considerable returns. However, the scientific method that empirically tests a hypothesis has far-reaching utility value, not only for theoretical purposes but also for practical applications and policy decisions.

Question # 2 (A)

Explain the following terms with the help of examples:
i) Hypothesis
ii) Reliability
iii) Theory
iv) Practicality
v) Dependent Variable
vi) Parameter
vii) Controlled Variable
viii) Standard Error
ix) Mapping
x) Validity

Ans: i) Hypothesis

A hypothesis is a tentative, testable statement about the relationship between two or more variables, formulated so that data can support or refute it. For example: "Employees who receive sales training achieve higher average monthly sales than employees who do not."

ii) Reliability

Reliability has to do with the quality of measurement. In its everyday sense, reliability is the "consistency" or "repeatability" of your measures. Before we can define reliability precisely we have to lay the groundwork. First, you have to learn about the foundation of reliability, the true score theory of measurement. Along with that, you need to understand the different types of measurement error, because errors in measures play a key role in degrading reliability. With this foundation, you can consider the basic theory of reliability, including a precise definition of reliability. There you will find out that we cannot calculate reliability; we can only estimate it. Because of this, there are a variety of different types of reliability, each of which has multiple ways to estimate it. In the end, it is important to integrate the idea of reliability with the other major criterion for the quality of measurement, validity, and develop an understanding of the relationships between reliability and validity in measurement.


http://www.socialresearchmethods.net/kb/reliable.php

iii) Theory

The word theory, when used by scientists, refers to an explanation of reality that has been thoroughly tested, so that most scientists agree on it. It can be changed if new information is found. A theory is different from a working hypothesis, which is a theory that has not yet been fully tested; that is, a hypothesis is an unproven theory.
http://en.wikipedia.org/wiki/Theory

iv) Practicality

'Practicality' means the quality of something being possible to do, or being workable. For example, a plan to prevent piracy that involves military escorts for every single cargo ship lacks practicality. 'Practicality' can also mean the quality of something being usable. The 'practicality' of a person means the attribute of being sensible and pragmatic in thinking and doing, and of getting things done. The 'practicalities' of something are the concrete circumstances and detailed characteristics of a situation, e.g. the practicalities of life as a seaman.

v) Dependent Variable

A dependent variable is what you measure in the experiment and what is affected during the experiment. The dependent variable responds to the independent variable. It is called dependent because it "depends" on the independent variable. In a scientific experiment, you cannot have a dependent variable without an independent variable. Example: you are interested in how stress affects heart rate in humans. Your independent variable would be the stress and the dependent variable would be the heart rate. You can directly manipulate stress levels in your human subjects and measure how those stress levels change heart rate.

vi) Parameter


Definition: In research, a parameter is a summary value that describes some characteristic of an entire population, such as the population mean or the population proportion. Parameters are usually unknown and are estimated from sample statistics. For example, the average monthly income of all households in a city is a parameter; the average income computed from a sample of 500 of those households is the statistic used to estimate it.

vii) Controlled VariableA control variable is a variable that is held constant or whose impact is removed in order to analyze the relationship between other variables without interference. For example, suppose the relationship between age and frequency of delinquent activity is first investigated for male students, then separately for female students. The variable sex has been treated as a control variable, its effect being removed. In short, it is a variable whose effect must be neutralized or controlled.

viii) Standard Error

The standard error is the standard deviation of the sampling distribution of a statistic; it indicates how much a sample estimate (such as the sample mean) is expected to vary from one sample to another. For the mean, it is estimated as the sample standard deviation divided by the square root of the sample size. A short numerical sketch appears after item (x) below.
http://www.linfo.org/standard_error.html

ix) Mapping

In measurement, mapping refers to the rule by which numbers or symbols are assigned to objects, persons or events so that the relations among the numbers reflect the relations among the things being measured. Measurement itself can be viewed as a mapping from an empirical set (for example, respondents' attitudes) to a numerical set (for example, scores on a five-point scale).

x) Validity

Validity is the extent to which a test measures what it claims to measure. It is vital for a test to be valid in order for the results to be accurately applied and interpreted. Validity is not determined by a single statistic, but by a body of research that demonstrates the relationship between the test and the behavior it is intended to measure.
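To make the standard error defined in item (viii) concrete, here is a minimal Python sketch; the satisfaction scores are invented purely for illustration.

```python
import math

# Hypothetical sample of customer-satisfaction scores (illustrative data only).
scores = [7, 8, 6, 9, 7, 8, 5, 9, 6, 7]

n = len(scores)
mean = sum(scores) / n

# Sample standard deviation (n - 1 in the denominator).
variance = sum((x - mean) ** 2 for x in scores) / (n - 1)
std_dev = math.sqrt(variance)

# Standard error of the mean: expected sample-to-sample variation of the mean.
standard_error = std_dev / math.sqrt(n)

print(f"mean = {mean:.2f}, s = {std_dev:.2f}, SE = {standard_error:.2f}")
```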

Question # 2 (B)

Distinguish among the following and suggest the significance of each in a research context:
a) Concept and Construct
b) Deduction and Induction
c) Operational definition and Descriptive definition
d) Decision making under certainty and uncertainty

Ans: a) Concept and Construct

Concepts and constructs are both abstractions. Man, table, tree, insect, etc. are abstractions, and so are ideas of motivation, beauty, attitude, etc. The difference lies in the nature of the "objects" the abstractions are made from: the existence of the objects from which the abstractions are made may be factual or hypothetical. Where it is factual, the abstraction is a concept (e.g., table); where the object is hypothetical or inferential, the abstraction is a construct (e.g., motivation).
Read more: http://wiki.answers.com/Q/What_is_the_difference_between_concepts_and_construct#ixzz19cP0Xhnq

b) Deduction and Induction

Every person uses reasoning to draw conclusions. In Western European cultures, two major forms of reasoning are used: induction and deduction. Culturally, both forms are used regularly. Formally, however, induction does not guarantee its conclusions: even when the premises are true, an inductive conclusion can at best be made probable, not certain.

1. Deduction defined: Deduction is reasoning from a general principle or statement to a specific example. A famous example of this would be: all men are mortal and Socrates is a man; therefore, Socrates is mortal.

2. Induction defined: Induction is reasoning from a specific example to a general principle or statement. An example of this would be: Socrates was a man and Socrates died; therefore, men can die.

3. Significance: Deduction serves as the basis for testing hypotheses in the scientific method, while induction is used to move from particular observations to general statements and new hypotheses.
Read more: Difference Between Deduction & Induction | eHow.com http://www.ehow.com/facts_4781689_difference-between-deduction-induction.html#ixzz19cPu7xZJ

c) Operational definition and Descriptive definition

A descriptive (conceptual) definition explains a concept in terms of other concepts and its essential characteristics, much as a dictionary does; for example, "intelligence is the capacity to learn and to reason". An operational definition specifies the concrete operations or measurements by which the concept will be identified or measured in a particular study; for example, "intelligence is the score obtained on a specified IQ test". Operational definitions are significant in research because they make abstract concepts measurable and allow a study to be replicated.


d) Decision making under certainty and uncertainty

The conditions for making decisions can be divided into two types: certainty and uncertainty. Decisions made under certainty or uncertainty are based on our feelings and our experiences.

1. Certainty. We experience certainty about a specific question when we have a feeling of complete belief or complete confidence in a single answer to the question. Decisions such as deciding on a new carpet for the office, installing a new piece of equipment or promoting an employee to a supervisory position are made with a high level of certainty. While there is always some degree of uncertainty about the eventual outcome of such decisions, there is enough clarity about the problem, the situation and the alternatives to consider the conditions to be certain.

2. Uncertainty. A decision under uncertainty is one where there are many unknowns and no possibility of knowing what could occur in the future to alter the outcome of a decision. We feel uncertainty about a situation when we can't predict with complete confidence what the outcomes of our actions will be. We experience uncertainty about a specific question when we can't give a single answer with complete confidence. Launching a new product, a major change in marketing strategy or opening your first branch could be influenced by such factors as the reaction of competitors, new competitors, technological changes, changes in customer demand, economic shifts, government legislation and a host of conditions beyond your control. These are the type of decisions facing the senior executives of large corporations who must commit huge resources. The small business manager faces, relatively, the same type of conditions, which could cause decisions that result in a disaster from which he or she may not be able to recover.

e) Decision rule and decision variable

A decision variable is a quantity whose value the decision maker controls and must choose, for example the number of units to produce or the price to charge. A decision rule is a predetermined criterion that states which course of action to take once the values of the relevant variables are known, for example "reorder stock whenever inventory falls below 500 units". In research, decision variables define what is being chosen or optimized, while decision rules translate research findings into consistent managerial action.

Question # 3

Explain the four major sources of measurement error? Illustrate by example how each of these might affect measurement results in a face-to-face interview situation.

Ans.. Measurement Error

Measurement error is caused by the difference between the information desired by the researcher and the information provided by the measurement process.


Experimental Error

Experiments are designed to measure the impact of one or more independent variables on a dependent variable. Experimental error occurs when the effect of the experimental situation itself is measured, rather than the effect of the independent variable. For example, a retail chain may charge different prices for selected items in four similar outlets, in an attempt to discover the best pricing strategy. However, unique weather patterns, traffic conditions, or competitors' activities may affect sales at one set of stores and not the others. The experimental result will then reflect the impact of variables other than price.

Population Specification Error

Population specification error is caused by selecting an inappropriate universe or population from which to collect data. This is a potentially serious problem in both industrial and consumer research. A firm wishing to learn the criteria that are considered most important in the purchase of certain machine tools might conduct a survey among purchasing agents. Yet, in many firms the purchasing agents do not determine, or even necessarily know, the criteria behind brand selection. These decisions may be made by the machine operators, by a committee, or by high-level executives. A study that focuses on the purchasing agent as the person who decides which brands to order may be subject to population specification error.

Frame Error

The sampling frame is the list of population members from which the sample units are selected. An ideal frame identifies each member of the population once and only once. Frame error is caused by using an inaccurate or incomplete sampling frame. For example, using the telephone directory as a sampling frame for the population of a community contains a potential for frame error. Those families who do not have listed numbers, whether voluntarily or involuntarily, are likely to differ from those with listed numbers in such respects as income, gender and mobility.

Sampling Error

Sampling error is caused by the generation of a nonrepresentative sample by means of a probability sampling method. For example, a random sample of 100 university students could, purely by chance, consist almost entirely of students from a single subgroup of the student body. Such a sample would not be representative of the overall student body, yet it could occur under a classic sampling technique. Sampling error is the focal point of concern in classical statistics.

Selection Error

Selection error occurs when a nonrepresentative sample is obtained by non-probability sampling methods. For example, one of the authors talked with an interviewer who was afraid of dogs. In surveys that allowed any freedom of choice, this interviewer avoided homes with dogs present. Obviously, such a practice may introduce error into the survey results. Selection error is a major problem in nonprobability samples.

Nonresponse Error

Nonresponse error is caused by (1) failure to contact all members of a sample, and/or (2) the failure of some contacted members of the sample to respond to all or specific parts of the measurement instrument. Individuals who are difficult to contact or who are reluctant to cooperate will differ, on at least some characteristics, from those who are relatively easy to contact or who readily cooperate. If these differences include the variables of interest, nonresponse error has occurred.

For example, people are more likely to respond to a survey on a topic that interests them. If a firm were to conduct a mail survey to estimate the incidence of athlete's foot among adults, nonresponse error would be of major concern. Why? Those most interested in athlete's foot, and thus most likely to respond to the survey, are current or recent sufferers of the problem. If the firm were to take the percentage of respondents who report having athlete's foot as an estimate of the proportion of the total population having athlete's foot, the company would probably overestimate the extent of the problem.
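This overestimation can be illustrated with a small simulation. The sketch below is only illustrative: the true prevalence and the two response rates are invented assumptions, not figures from the text.

```python
import random

random.seed(1)

POPULATION = 100_000
TRUE_PREVALENCE = 0.15          # assumed share of adults with athlete's foot
RESPONSE_RATE_SUFFERER = 0.40   # assumed: sufferers find the topic relevant
RESPONSE_RATE_OTHER = 0.10      # assumed: others mostly ignore the mail survey

responders = 0
responders_with_condition = 0

for _ in range(POPULATION):
    has_condition = random.random() < TRUE_PREVALENCE
    rate = RESPONSE_RATE_SUFFERER if has_condition else RESPONSE_RATE_OTHER
    if random.random() < rate:
        responders += 1
        responders_with_condition += has_condition

estimate = responders_with_condition / responders
print(f"true prevalence: {TRUE_PREVALENCE:.0%}, survey estimate: {estimate:.0%}")
# The estimate lands near 40% because sufferers respond at four times the rate
# of non-sufferers, even though only 15% of the population has the condition.
```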

Question # 4 (A)

Critically explain the following topics in your own words with examples from current situations. Be as specific as possible

1. Research in fundamental areas of business

2. Ethics of research

We are going through a time of profound change in our understanding of the ethics of applied social research. From the time immediately after World War II until the early 1990s, there was a gradually developing consensus about the key ethical principles that should underlie the research endeavor.


Two marker events stand out (among many others) as symbolic of this consensus. The Nuremberg War Crimes Trial following World War II brought to public view the ways German scientists had used captive human subjects in oftentimes gruesome experiments. In the 1950s and 1960s, the Tuskegee Syphilis Study involved the withholding of known effective treatment for syphilis from African-American participants who were infected. Events like these forced the reexamination of ethical standards and the gradual development of a consensus that potential human subjects needed to be protected from being used as 'guinea pigs' in scientific research.

By the 1990s, the dynamics of the situation changed. Cancer patients and persons with AIDS fought publicly with the medical research establishment about the long time needed to get approval for and complete research into potential cures for fatal diseases. In many cases, it is the ethical assumptions of the previous thirty years that drive this 'go-slow' mentality. After all, we would rather risk denying treatment for a while until we achieve enough confidence in a treatment, rather than run the risk of harming innocent people (as in the Nuremberg and Tuskegee events). But now, those who were threatened with fatal illness were saying to the research establishment that they wanted to be test subjects, even under experimental conditions of considerable risk. You had several very vocal and articulate patient groups who wanted to be experimented on coming up against an ethical review system that was designed to protect them from being experimented on.

Although the last few years in the ethics of research have been tumultuous ones, it is beginning to appear that a new consensus is evolving that involves the stakeholder groups most affected by a problem participating more actively in the formulation of guidelines for research. While it's not entirely clear, at present, what the new consensus will be, it is almost certain that it will not fall at either extreme: protecting against human experimentation at all costs vs. allowing anyone who is willing to be experimented on.

Ethical Issues

There are a number of key phrases that describe the system of ethical protections that the contemporary social and medical research establishment has created to better protect the rights of research participants. The principle of voluntary participation requires that people not be coerced into participating in research. This is especially relevant where researchers had previously relied on 'captive audiences' for their subjects -- prisons, universities, and places like that. Closely related to the notion of voluntary participation is the requirement of informed consent. Essentially, this means that prospective research participants must be fully informed about the procedures and risks involved in research and must give their consent to participate. Ethical standards also require that researchers not put participants in a situation where they might be at risk of harm as a result of their participation. Harm can be defined as both physical and psychological. There are two standards that are applied in order to help protect the privacy of research participants. Almost all research guarantees the participants confidentiality -- they are assured that identifying information will not be made available to anyone who is not directly involved in the study. The stricter standard is the principle of anonymity, which essentially means that the participant will remain anonymous throughout the study -- even to the researchers themselves.


Clearly, the anonymity standard is a stronger guarantee of privacy, but it is sometimes difficult to accomplish, especially in situations where participants have to be measured at multiple time points (e.g., a pre-post study). Increasingly, researchers have had to deal with the ethical issue of a person's right to service. Good research practice often requires the use of a no-treatment control group -- a group of participants who do not get the treatment or program that is being studied. But when that treatment or program may have beneficial effects, persons assigned to the no-treatment control may feel their rights to equal access to services are being curtailed.

Even when clear ethical standards and principles exist, there will be times when the need to do accurate research runs up against the rights of potential participants. No set of standards can possibly anticipate every ethical circumstance. Furthermore, there needs to be a procedure that assures that researchers will consider all relevant ethical issues in formulating research plans. To address such needs, most institutions and organizations have formulated an Institutional Review Board (IRB), a panel of persons who reviews grant proposals with respect to ethical implications and decides whether additional actions need to be taken to assure the safety and rights of participants. By reviewing proposals for research, IRBs also help to protect both the organization and the researcher against the potential legal implications of neglecting to address important ethical issues of participants.

3. Exploratory versus formalized research

Exploratory research is conducted into an issue or problem where there are few or no earlier studies to which one can refer for information; its purpose is to gain familiarity with the problem and to develop ideas and working hypotheses rather than to test them. A formalized study, by contrast, begins with a clearly stated research question or hypothesis and follows precise procedures and data-source specifications in order to test it.

4. Complex probability sampling

 

 

A probability sampling method is any method of sampling that utilizes some form of random selection. In order to have a random selection method, you must set up some process or procedure that assures that the different units in your population have equal probabilities of being chosen. Humans have long practiced various forms of random selection, such as picking a name out of a hat, or choosing the short straw. These days, we tend to use computers as the mechanism for generating random numbers as the basis for random selection.

Some Definitions

Before I can explain the various probability methods we have to define some basic terms. These are:

N = the number of cases in the sampling frame
n = the number of cases in the sample
NCn = the number of combinations (subsets) of n from N
f = n/N = the sampling fraction

That's it. With those terms defined we can begin to define the different probability sampling methods.

Simple Random Sampling

The simplest form of random sampling is called simple random sampling. Pretty tricky, huh? Here's the quick description of simple random sampling:

Objective: To select n units out of N such that each NCn has an equal chance of being selected.

Procedure: Use a table of random numbers, a computer random number generator, or a mechanical device to select the sample.

A somewhat stilted, if accurate, definition. Let's see if we can make it a little more real. How do we select a simple random sample? Let's assume that we are doing some research with a small service agency that wishes to assess clients' views of quality of service over the past year.

First, we have to get the sampling frame organized. To accomplish this, we'll go through agency records to identify every client over the past 12 months. If we're lucky, the agency has good accurate computerized records and can quickly produce such a list. Then, we have to actually draw the sample. Decide on the number of clients you would like to have in the final sample. For the sake of the example, let's say you want to select 100 clients to survey and that there were 1000 clients over the past 12 months. Then, the sampling fraction is f = n/N = 100/1000 = .10 or 10%. Now, to actually draw the sample, you have several options. You could print off the list of 1000 clients, tear them into separate strips, put the strips in a hat, mix them up real good, close your eyes and pull out the first 100. But this mechanical procedure would be tedious and the quality of the sample would depend on how thoroughly you mixed them up and how randomly you reached in. Perhaps a better procedure would be to use the kind of ball machine that is popular with many of the state lotteries. You would need three sets of balls numbered 0 to 9, one set for each of the digits from 000 to 999 (if we select 000 we'll call that 1000). Number the list of names from 1 to 1000 and then use the ball machine to select the three digits that select each person. The obvious disadvantage here is that you need to get the ball machines. (Where do they make those things, anyway? Is there a ball machine industry?)

Neither of these mechanical procedures is very feasible and, with the development of inexpensive computers, there is a much easier way. Here's a simple procedure that's especially useful if you have the names of the clients already on the computer. Many computer programs can generate a series of random numbers. Let's assume you can copy and paste the list of client names into a column in an EXCEL spreadsheet.


Then, in the column right next to it, paste the function =RAND(), which is EXCEL's way of putting a random number between 0 and 1 in the cells. Then, sort both columns -- the list of names and the random numbers -- by the random numbers. This rearranges the list in random order from the lowest to the highest random number. Then, all you have to do is take the first hundred names in this sorted list. Pretty simple. You could probably accomplish the whole thing in under a minute.
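The spreadsheet procedure above translates almost line for line into a short Python sketch. The client names below are placeholders; the logic is the same: attach a random key to each unit, sort by the key, and keep the first n.

```python
import random

# Hypothetical sampling frame: one entry per client served in the past 12 months.
clients = [f"client_{i:04d}" for i in range(1, 1001)]   # N = 1000
n = 100                                                 # desired sample size

# Same idea as pasting =RAND() next to each name and sorting by it:
# pair each client with a random key, sort by the key, keep the first n names.
keyed = [(random.random(), client) for client in clients]
keyed.sort()
sample = [client for _, client in keyed[:n]]

sampling_fraction = n / len(clients)                    # f = n/N = 0.10
print(sample[:5], f"f = {sampling_fraction:.0%}")
```

In practice, Python's built-in random.sample(clients, n) does the same job in a single call.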

Simple random sampling is simple to accomplish and is easy to explain to others. Because simple random sampling is a fair way to select a sample, it is reasonable to generalize the results from the sample back to the population. Simple random sampling is not the most statistically efficient method of sampling and you may, just because of the luck of the draw, not get good representation of subgroups in a population. To deal with these issues, we have to turn to other sampling methods.

Stratified Random Sampling

Stratified Random Sampling, also sometimes called proportional or quota random sampling, involves dividing your population into homogeneous subgroups and then taking a simple random sample in each subgroup. In more formal terms:

Objective: Divide the population into non-overlapping groups (i.e., strata) N1, N2, N3, ... Ni, such that N1 + N2 + N3 + ... + Ni = N. Then do a simple random sample of f = n/N in each stratum.

There are several major reasons why you might prefer stratified sampling over simple random sampling. First, it assures that you will be able to represent not only the overall population, but also key subgroups of the population, especially small minority groups. If you want to be able to talk about subgroups, this may be the only way to effectively assure you'll be able to. If the subgroup is extremely small, you can use different sampling fractions (f) within the different strata to randomly over-sample the small group (although you'll then have to weight the within-group estimates using the sampling fraction whenever you want overall population estimates). When we use the same sampling fraction within strata we are conducting proportionate stratified random sampling. When we use different sampling fractions in the strata, we call this disproportionate stratified random sampling. Second, stratified random sampling will generally have more statistical precision than simple random sampling. This will only be true if the strata or groups are homogeneous. If they are, we expect that the variability within-groups is lower than the variability for the population as a whole. Stratified sampling capitalizes on that fact.

For example, let's say that the population of clients for our agency can be divided into three groups: Caucasian, African-American and Hispanic-American.


Furthermore, let's assume that both the African-Americans and Hispanic-Americans are relatively small minorities of the clientele (10% and 5% respectively). If we just did a simple random sample of n=100 with a sampling fraction of 10%, we would expect by chance alone that we would only get 10 and 5 persons from each of our two smaller groups. And, by chance, we could get fewer than that! If we stratify, we can do better. First, let's determine how many people we want to have in each group. Let's say we still want to take a sample of 100 from the population of 1000 clients over the past year. But we think that in order to say anything about subgroups we will need at least 25 cases in each group. So, let's sample 50 Caucasians, 25 African-Americans, and 25 Hispanic-Americans. We know that 10% of the population, or 100 clients, are African-American. If we randomly sample 25 of these, we have a within-stratum sampling fraction of 25/100 = 25%. Similarly, we know that 5% or 50 clients are Hispanic-American. So our within-stratum sampling fraction will be 25/50 = 50%. Finally, by subtraction we know that there are 850 Caucasian clients. Our within-stratum sampling fraction for them is 50/850 = about 5.88%. Because the groups are more homogeneous within-group than across the population as a whole, we can expect greater statistical precision (less variance). And, because we stratified, we know we will have enough cases from each group to make meaningful subgroup inferences.
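The worked example above (850 Caucasian, 100 African-American and 50 Hispanic-American clients, sampled 50/25/25) can be sketched in a few lines of Python; the client identifiers are placeholders.

```python
import random

# Hypothetical strata from the example: stratum name -> list of clients.
strata = {
    "Caucasian":         [f"cau_{i}" for i in range(850)],
    "African-American":  [f"afr_{i}" for i in range(100)],
    "Hispanic-American": [f"his_{i}" for i in range(50)],
}

# Disproportionate allocation from the example: 50 / 25 / 25.
allocation = {"Caucasian": 50, "African-American": 25, "Hispanic-American": 25}

sample = {}
for name, members in strata.items():
    size = allocation[name]
    # Simple random sample taken *within* each stratum.
    sample[name] = random.sample(members, size)
    print(f"{name}: n = {size}, within-stratum f = {size / len(members):.1%}")

# Because the sampling fractions differ across strata, overall population
# estimates would weight each stratum by its population share (e.g., 850/1000).
```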

Systematic Random Sampling

Here are the steps you need to follow in order to achieve a systematic random sample:

1. Number the units in the population from 1 to N.
2. Decide on the n (sample size) that you want or need.
3. k = N/n = the interval size.
4. Randomly select an integer between 1 and k.
5. Then take every kth unit.

All of this will be much clearer with an example. Let's assume that we have a population that only has N=100 people in it and that you want to take a sample of n=20. To use systematic sampling, the population must be listed in a random order. The sampling fraction would be f = 20/100 = 20%.


In this case, the interval size, k, is equal to N/n = 100/20 = 5. Now, select a random integer from 1 to 5. In our example, imagine that you chose 4. Now, to select the sample, start with the 4th unit in the list and take every k-th unit (every 5th, because k=5). You would be sampling units 4, 9, 14, 19, and so on up to 99, and you would wind up with 20 units in your sample.
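The five steps listed above map directly onto a short sketch; the unit labels are placeholders and the frame is assumed to already be in random order.

```python
import random

# Hypothetical frame of N = 100 units, assumed to be listed in random order.
units = [f"unit_{i}" for i in range(1, 101)]
N = len(units)
n = 20                        # desired sample size
k = N // n                    # interval size: 100 / 20 = 5

start = random.randint(1, k)  # random integer between 1 and k
sample = units[start - 1::k]  # take every k-th unit from the random start

print(f"start = {start}, {len(sample)} units sampled:", sample[:4], "...")
```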

For this to work, it is essential that the units in the population are randomly ordered, at least with respect to the characteristics you are measuring. Why would you ever want to use systematic random sampling? For one thing, it is fairly easy to do. You only have to select a single random number to start things off. It may also be more precise than simple random sampling. Finally, in some situations there is simply no easier way to do random sampling. For instance, I once had to do a study that involved sampling from all the books in a library. Once selected, I would have to go to the shelf, locate the book, and record when it last circulated. I knew that I had a fairly good sampling frame in the form of the shelf list (which is a card catalog where the entries are arranged in the order they occur on the shelf). To do a simple random sample, I could have estimated the total number of books and generated random numbers to draw the sample; but how would I find book #74,329 easily if that is the number I selected? I couldn't very well count the cards until I came to 74,329! Stratifying wouldn't solve that problem either. For instance, I could have stratified by card catalog drawer and drawn a simple random sample within each drawer. But I'd still be stuck counting cards. Instead, I did a systematic random sample. I estimated the number of books in the entire collection. Let's imagine it was 100,000. I decided that I wanted to take a sample of 1000 for a sampling fraction of 1000/100,000 = 1%. To get the sampling interval k, I divided N/n = 100,000/1000 = 100. Then I selected a random integer between 1 and 100. Let's say I got 57. Next I did a little side study to determine how thick a thousand cards are in the card catalog (taking into account the varying ages of the cards). Let's say that on average I found that two cards that were separated by 100 cards were about .75 inches apart in the catalog drawer. That information gave me everything I needed to draw the sample. I counted to the 57th by hand and recorded the book information. Then, I took a compass. (Remember those from your high-school math class? They're the funny little metal instruments with a sharp pin on one end and a pencil on the other that you used to draw circles in geometry class.) Then I set the compass at .75", stuck the pin end in at the 57th card and pointed with the pencil end to the next card (approximately 100 books away). In this way, I approximated selecting the 157th, 257th, 357th, and so on. I was able to accomplish the entire selection procedure in very little time using this systematic random sampling approach. I'd probably still be there counting cards if I'd tried another random sampling method. (Okay, so I have no life. I got compensated nicely, I don't mind saying, for coming up with this scheme.)

Cluster (Area) Random Sampling

The problem with random sampling methods when we have to sample a population that is dispersed across a wide geographic region is that you will have to cover a lot of ground geographically in order to get to each of the units you sampled. Imagine taking a simple random sample of all the residents of New York State in order to conduct personal interviews. By the luck of the draw you will wind up with respondents who come from all over the state. Your interviewers are going to have a lot of traveling to do. It is for precisely this problem that cluster or area random sampling was invented.


In cluster sampling, we follow these steps:

1. Divide the population into clusters (usually along geographic boundaries).
2. Randomly sample clusters.
3. Measure all units within sampled clusters.

For instance, in the figure we see a map of the counties in New York State. Let's say that we have to do a survey of town governments that will require us going to the towns personally. If we do a simple random sample state-wide we'll have to cover the entire state geographically. Instead, we decide to do a cluster sampling of five counties (marked in red in the figure).

Once these are selected, we go to every town government in the five areas. Clearly this strategy will help us to economize on our mileage. Cluster or area sampling, then, is useful in situations like this, and is done primarily for efficiency of administration. Note also that we probably don't have to worry about using this approach if we are conducting a mail or telephone survey, because it doesn't matter as much (or cost more or raise inefficiency) where we call or send letters to.
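The three cluster-sampling steps can be sketched as follows; the county and town names are placeholders standing in for the New York counties in the example.

```python
import random

# Hypothetical clusters: county -> list of town governments in that county.
counties = {f"county_{c}": [f"county_{c}_town_{t}" for t in range(1, 21)]
            for c in range(1, 63)}              # 62 counties, 20 towns each

# Steps 1-2: clusters are defined geographically; randomly sample five of them.
sampled_counties = random.sample(list(counties), 5)

# Step 3: measure *every* unit (town government) within the sampled clusters.
towns_to_visit = [town
                  for county in sampled_counties
                  for town in counties[county]]

print(sampled_counties)
print(f"{len(towns_to_visit)} town governments to visit")
```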

Multi-Stage Sampling

The four methods we've covered so far -- simple, stratified, systematic and cluster -- are the simplest random sampling strategies. In most real applied social research, we would use sampling methods that are considerably more complex than these simple variations. The most important principle here is that we can combine the simple methods described earlier in a variety of useful ways that help us address our sampling needs in the most efficient and effective manner possible. When we combine sampling methods, we call this multi-stage sampling.

For example, consider the idea of sampling New York State residents for face-to-face interviews. Clearly we would want to do some type of cluster sampling as the first stage of the process. We might sample townships or census tracts throughout the state.


But in cluster sampling we would then go on to measure everyone in the clusters we select. Even if we are sampling census tracts we may not be able to measure everyone who is in the census tract. So, we might set up a stratified sampling process within the clusters. In this case, we would have a two-stage sampling process with stratified samples within cluster samples. Or, consider the problem of sampling students in grade schools. We might begin with a national sample of school districts stratified by economics and educational level. Within selected districts, we might do a simple random sample of schools. Within schools, we might do a simple random sample of classes or grades. And, within classes, we might even do a simple random sample of students. In this case, we have three or four stages in the sampling process and we use both stratified and simple random sampling. By combining different sampling methods we are able to achieve a rich variety of probabilistic sampling methods that can be used in a wide range of social research contexts.
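A minimal sketch of the grade-school example: districts are sampled first, then schools within districts, then students within schools. All names and sizes are invented, and the stratification of districts described in the text is simplified here to plain random selection.

```python
import random

# Hypothetical three-level frame: district -> school -> list of students.
frame = {
    f"district_{d}": {
        f"d{d}_school_{s}": [f"d{d}_s{s}_student_{i}" for i in range(200)]
        for s in range(1, 11)
    }
    for d in range(1, 51)
}

sampled_students = []

# Stage 1: sample 10 districts (the text would stratify this stage by
# economics and educational level; plain random sampling is used here).
for district in random.sample(list(frame), 10):
    # Stage 2: simple random sample of 3 schools within the district.
    for school in random.sample(list(frame[district]), 3):
        # Stage 3: simple random sample of 20 students within the school.
        sampled_students.extend(random.sample(frame[district][school], 20))

print(f"{len(sampled_students)} students sampled in total")   # 10 * 3 * 20 = 600
```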

Question # 5 (A)

Being Manager Planning & Development, under what conditions would you recommend:
1. Probability sample & non-probability sample
2. Simple random sample and cluster sample
3. Using the finite population adjustment factor
4. A disproportionate stratified probability sample
Explain in detail with examples where possible.

Ans: 1. Probability sample & non-probability sample

A probability sample is recommended when a complete sampling frame is available, when the results must be generalized to the population with a known margin of error, and when time and budget allow formal random selection (for example, estimating market share from a customer database). A non-probability sample (convenience, judgment, quota or snowball sampling) is appropriate for exploratory work, pilot studies and pretests, or situations where no sampling frame exists or resources are very limited (for example, quick interviews with mall visitors to pretest a questionnaire).

http://www.tardis.ed.ac.uk/~kate/qmcweb/s2.htm

2. Simple random sample and cluster sample

Basically, in a stratified sampling procedure the population is first partitioned into disjoint classes (the strata) which together are exhaustive, so each population element falls within one and only one stratum. Then a simple random sample is taken from each stratum; the sampling effort may follow either proportional allocation (each stratum's sample is proportional to the size of that stratum) or optimal allocation, where the target is a final sample with the minimum variability possible. The main difference between stratified and cluster sampling is that in stratified sampling all the strata need to be sampled, whereas in cluster sampling one first selects a number of clusters at random and then either samples each selected cluster or conducts a census of each of them; usually not all clusters are included.
Read more: http://wiki.answers.com/Q/What_is_the_difference_between_stratified_random_sampling_and_cluster_sampling#ixzz19fNYYYqT


3. Using the finite population adjustment factor

The finite population adjustment (correction) factor is recommended when sampling without replacement from a finite population and the sample makes up a sizeable share of it (a common rule of thumb is more than about 5 percent); it shrinks the standard error by the factor sqrt((N - n)/(N - 1)). A small numerical sketch follows this list.
http://www.usm.maine.edu/~smax/course_material/FinitePopulationCorrectionFactor.pdf

4. A disproportionate stratified probability sample

A disproportionate stratified probability sample is recommended when some strata are very small but must still be analyzed separately (so they are over-sampled relative to their population share), or when variability differs greatly across strata; within-stratum results are then weighted back by the population shares when making overall estimates.
http://free-books-online.org/mathematics/business-mathematics-statistics/research-methods-business-mathematics-statistics/types-of-probability-sampling-2/

5. Explain in detail with examples where possible.
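To make the finite population adjustment in item 3 concrete, here is a minimal sketch; the population size, sample size and standard deviation are invented for illustration.

```python
import math

N = 1200    # hypothetical finite population (e.g., all employees of a firm)
n = 300     # hypothetical sample size: 25% of the population
s = 4.0     # hypothetical sample standard deviation of the measured variable

se_uncorrected = s / math.sqrt(n)
fpc = math.sqrt((N - n) / (N - 1))    # finite population correction factor
se_corrected = se_uncorrected * fpc

print(f"FPC = {fpc:.3f}")
print(f"SE without correction = {se_uncorrected:.3f}, "
      f"with correction = {se_corrected:.3f}")
# With n/N = 25%, ignoring the correction overstates the standard error.
```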

Question # 5 (B)

(b) Discuss the planning of research keeping in view some industrial project, taking an example from a real market situation.

Ans….

A common, and costly, mistake is to leap in and begin working without designing what it is you will do and planning how to carry it out. Spend time on working this out and discussing it with your supervisor and any available experts.

After this, you will most likely need to submit for approval a brief proposal providing details of your intended studies. You should provide information about: topic; aims of the research, or the problem to be investigated; relevance to your discipline; and the central concepts of the study.

Identifying the central concepts of your study often means considering possible attitudes, experiences, behaviours, social processes, reactions - whatever is interesting and possible to examine. Take into account such factors as your own personality, age, gender, interests and work load; the participants, their available roles, the activities that will take place, the times at which things happen and the places in which they happen; the accessibility and availability of material, of members of a group or an organisation, of individuals to be interviewed or documentary sources.

Methods to be used

Many of these remarks apply across the board, and not to just one method.


Indeed, it is difficult, and usually fruitless, to compartmentalise research methods. As with most aspects of PhD work, you arrive at the best method to use by asking questions.

Experimental and quasi-experimental studies.

What design is most appropriate?
What factors are to be controlled by you in the experimental design?
What factors are to be knowingly ignored, or dealt with by randomisation?
How is random allocation of treatments to experimental units to be achieved?
Where are the experimental units coming from?

Survey research

What form of survey is most appropriate for the task at hand: mail-out, face-to-face interview, telephone?
Who are the intended respondents?
How will the respondents be chosen: by a randomisation process, by a quota, by purposive selection, from administrative records, from an organisational hierarchy?
If randomisation has a role, how will it be introduced: by you, by a private survey organisation, by a governmental survey organisation?

Interview based studies

How many members of a group would you attempt to interview?
How would you approach them initially?
What form will the interviews take: semi-structured, unstructured?
What would be the approximate duration of the interviews?
Would you use a single interview per person or a series of interviews?
What topics would you attempt to cover in the interviews?

Participant observation

What role or roles would you adopt in the setting?
What level of participation would you employ: complete, participant as observer, observer as participant?
What would be the best activities in the setting in which to participate?
What would be the best physical locations from which to observe?
What use, if any, should be made of informants? How would you go about selecting them and establishing a relationship with them?

Textual analysis

What sort of textual material will you use: printed, graphical, audio or visual?
How will you obtain the texts to be analysed?
What other aspects of the text will you examine, e.g., origins of the material?
How will you select a sample of texts to be analysed from the range of available material?

Conversation analysis

Will you record conversations for the analysis or use conversations recorded for other purposes?
If you record conversations for the analysis, how will you manage your role as observer? How will you gain the consent of the participants?
If you use conversations recorded for other purposes, how will you obtain the existing recordings?

For each methodology, ask yourself: how should the data be recorded? How should it be analysed?

These may not be simple decisions to make. But, you need to clarify the steps involved in each part of the study, and establish a timetable for each part.

Of course, it is also important to recognise that you may well bring personal biases to the study. Examine how these might affect the data collection process and your interpretation of the results of your research. There are invariably ethical implications to proposed research programs.

If possible, seek professional advice before setting about data collection, especially on statistical matters that you may be unfamiliar with. Most universities employ professional statisticians within their mathematics, economics, agricultural, biological and social science departments. These people have to be experienced in the real-life issues of data collection and data analysis relevant to their discipline. Use them.

Finally, look at the fit between your aims and your research design, and plan how you will deal with your data.
