Gender Inequality in Education in Saudi Arabia
A THESIS
SUBMITTED TO THE GRADUATE EDUCATIONAL COUNCIL
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
For the degree
MASTER OF SCIENCE
By
Adel Hassan Gadhi
Advisor Dr. Rahmatullah Imon
Ball State University
Muncie, Indiana
December 2016
Gender inequality in education in Saudi Arabia
A THESIS
SUBMITTED TO THE GRADUATE EDUCATIONAL COUNCIL
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE
MASTER OF SCIENCE
By
Adel Hassan Gadhi
Committee Approval:
…………………………………………………………………………………………….
Committee Chairman Date
……………………………………………………………………………………………
Committee Member Date
…………………………………………………………………………………………….
Committee Member Date
Department Head Approval:
……………………………………………………………………………………………
Head of Department Date
Graduate Office Check:
……………………………………………………………………………………………
Dean of Graduate School
Date
Ball State University
Muncie, Indiana
Nov. 2016
ACKNOWLEDGEMENTS
I would like to gratefully thank my supervisor, Professor Dr. Rahmatullah Imon, for his continuous
support of my thesis study and for the patient guidance, motivation, encouragement, and advice he
has provided throughout my time as his student. I have been extremely lucky to have a supervisor
who cared so much about my work, and who responded to my questions and queries so promptly. His
mentorship was paramount in providing a well-rounded experience consistent with my long-term
career goals. For everything you have done for me, Dr. Imon, thank you so much. I would also like
to thank the rest of my thesis committee, Dr. Munni Begum and Dr. Yayuan Xiao, for their
encouragement, insightful comments, and patience. Last but not least, I would like to thank my
family: my parents, my aunt, my wife, and my brothers and sisters, for supporting me throughout
my life.
Adel Hassan Gadhi
November 4, 2016
ABSTRACT
In our research the prime objective was to investigate how female students feel about gender
inequality in Saudi Arabia. We could not find any relevant data in the literature, and for this
reason we had to collect primary data. After designing a questionnaire and determining a desired
sample size, we started collecting data. The biggest challenge was collecting data under very
tight time and resource constraints. We had no alternative but to adopt an internet survey and
encounter sampling as our methods of data collection. It is well known that neither internet
surveys nor encounter sampling is entirely random, and both can generate bias. This means we need
a bias correction when we use sample statistics to estimate population parameters. Most of the
variables in our study are qualitative in nature, with sample proportions acting as estimates of
population proportions, but we did not find any bias correction technique for the sample
proportion when the data come from an internet survey and/or encounter sampling. We believe the
most significant contribution of our research is the development of a new bias correction
technique for the sample proportion, which is derived as Theorem 3.2 in Chapter 3. For further
analysis of the data we employ logistic regression models, together with diagnostics for the
validity of our results. Most of our respondents were married and have children. Most have
education beyond high school but below a university bachelor's degree, and most are unemployed.
Most feel that they are treated equally in their own family and at the workplace, although they
feel severe discrimination in wages. Almost all of the women who have a job prefer to work under
a female boss. Although Saudi Arabia is governed by very strict religious law, according to our
research 28.62% of women face sexual harassment. Most of the women feel that more jobs for women
will improve gender inequality.
Logistic regression analyses reveal that there is evidence of gender inequality in the family, at
the workplace, and most severely in wages. The variables which influence these inequalities are
whether a woman has a job, her age, marital status, number of children, and knowledge regarding
gender inequality. We also observe that influential outliers sometimes have a huge impact on the
fit of the model and the resulting conclusions.
Table of Contents

CHAPTER 1
INTRODUCTION
1.1 Objective of the Study
1.2 Sources of Data
1.3 Methodology

CHAPTER 2
SAMPLING DESIGN AND COLLECTION OF PRIMARY DATA
2.1 Sampling Techniques
2.2 Which Sampling Technique?
2.3 Sample Size Determination
2.4 Questionnaire
2.5 Internet Survey and Encounter Sampling
2.6 Data Collected for Our Study

CHAPTER 3
BIAS CORRECTION METHODS, TESTS FOR NORMALITY, AND LOGISTIC REGRESSION ANALYSIS
3.1 Bias Correction
3.2 Tests for Normality
3.3 Logistic Regression Diagnostics

CHAPTER 4
ANALYSIS OF RESULTS
4.1 Summary Statistics
4.2 Bias-Corrected Statistics
4.3 Logistic Regression Analysis

CHAPTER 5
CONCLUSIONS AND AREAS OF FURTHER RESEARCH
5.1 Conclusions
5.2 Areas of Further Research

References
APPENDIX A
APPENDIX B
List of Tables

Chapter 4
Table 4.1: Original and Bias-corrected Statistics for Different Variables
List of Figures
Chapter 3
Figure 3.1: Encounter sampling for selecting fibers
Figure 3.2: Original and biased probability distributions

Chapter 4
Figure 4.1: Histogram of Age
Figure 4.2: Normal Probability Plot of Age
Figure 4.3: Pie-chart of Level of Education
Figure 4.4: Pie-chart of Marital Status
Figure 4.5: Pie-chart of Women Having Children or Not
Figure 4.6: Number of Children of the Respondents
Figure 4.7: Number of Children of the Married Respondents
Figure 4.8: Pie-chart of Job Status
Figure 4.9: Pie-chart of Equality in the Family
Figure 4.10: Pie-chart of Equality in the Workplace
Figure 4.11: Pie-chart of Equality in Wages
Figure 4.12: Pie-chart of Preference for Boss
Figure 4.13: Pie-chart of Education Regarding Gender Inequality
Figure 4.14: Pie-chart of Forms of Gender Inequality
Figure 4.15: Pie-chart for Ways of Improvement of Gender Inequality
INTRODUCTION
Female education in Saudi Arabia has changed considerably over the years, according to education
specialists [see Jackson (1998), Calvert and Al-Shetaiwi (2002), Hooks (2003)]. Saudi Arabia is
the largest country in the Middle East. It was established in 1932 by King Abdul Aziz and extends
over roughly 2,250,000 square kilometers (868,730 square miles). According to the General
Authority for Statistics, the total population of Saudi Arabia is 31,015,999.
In Saudi Arabia it is difficult to trace the beginnings of national education for women before the
unification of the Kingdom in 1930. Prior to that date, most education for women took place at
home with the help of a female teacher and concerned the Holy Quran and an understanding of
Islamic Law. Modern education for women started formally in 1960, thirty years after it did for
men. The government schools also needed female students, but there was a need for proper
administration of education for women to ensure it was carried out according to Islamic Law. The
government therefore established the General Presidency for Girls' Education (GPGE) in 1959 to
take this responsibility, and it opened 15 girls' schools in the following year, 1960.
Education in Saudi Arabia is not compulsory, but it is open to anyone who wishes to join the
official government schools; hence student preferences play a part in the output of the
educational system. The government does, however, provide free general, technical, vocational and
higher education, with financial incentives for students (male and female) in some areas of
general education and in all vocational, technical, technological and higher education, and with
free transportation for all females. The number of women in education in Saudi Arabia has
increased sharply, from 5,200 in 1960 to 2,121,893 in 2004-05 (GPGE, 2000).
Unfortunately, a large number of females in higher education join education or humanities
courses. This has led to a surplus of humanities graduates in particular, many of them
unemployed, and to a serious shortage of graduates coming out of technical and vocational
education. This could be due to student preference, to the structure of technical and vocational
education (TEVT), or to the jobs available in education in particular. Paradoxically, improving
technical and vocational education for women will require more female teachers before there can
be more graduates going into employment.
It is needless to say that for the development of Saudi Arabia we need more educated women. But
the main challenge is the culture: many families do not want their daughters to be highly
educated, and women face similar problems when they get married. In addition, we believe the
gender inequality that prevails in the society is one of the largest obstacles discouraging women
from studying. Even those who are students may feel gender inequality in every walk of life: in
their own family, in the workplace, and in their wages. This motivated us to design a study of
how female students in Saudi Arabia feel about gender inequality.
1.1 Objective of the Study
In this study our prime objective was to investigate gender inequality in education in Saudi Arabia.
Initially we wanted to address the following research questions:
• How do Saudi women, who study and work, view gender inequality?
• Do they believe that there are adequate programs to educate the people of Saudi Arabia
about gender inequality?
• Do they believe that Islam is the major cause of inequality between genders?
During the pilot survey it became clear that the third question is quite sensitive in nature and
that women are not really comfortable answering it, so we removed it from the questionnaire.
1.2 Sources of Data
The data we need for our research are not available in the literature. For this reason we had to
collect primary data. After beginning our research we realized that the biggest challenge would
be collecting the data, and we had to frame its design. After preparing the questionnaire we had
to decide how many samples we needed for our research and how we could get those observations.
Since it was not possible to obtain a simple random sample in such a short period of time, we
planned an internet survey. But that was not easy either. At first we created a Facebook page and
requested all of our relatives, friends, classmates and others to join the group and encourage
female students to take the survey. We also sent e-mails to a large number of female students and
requested that they circulate the survey among other women. But even after waiting for three
months there was little progress; we could not achieve even half of our target. We then started
interviewing female students ourselves: through WhatsApp we invited female students to meet us in
the library or some other convenient place. That was not enough either. Finally, we requested our
family members and friends to visit schools, colleges, universities, and malls to conduct the
survey. It took us about six months to finish sampling.
1.3 Methodology
In this study the biggest challenge was the collection of primary data, so at the very beginning
we had to settle the methodology for survey sampling and sample size determination. For
convenience we used an internet survey and encounter sampling to collect the data. Both are
nonprobability sampling methods, which often generate bias, so the next methodology used in our
research was bias correction. The most significant contribution of our research is the
development of a new bias correction technique for the sample proportion, which is presented in
Theorem 3.2. For further analysis of the data we use some modern statistical techniques. A good
number of the variables in our study are qualitative in nature and warrant categorical data
analysis techniques. To understand the relationships between variables we employ logistic
regression models, along with diagnostics for the validity of our results.
SAMPLING DESIGN AND COLLECTION OF PRIMARY DATA
In this chapter we describe how we have collected the primary data required for our research.
2.1 Sampling Techniques
Sampling techniques are a collection of methods by which the researcher can draw a sample from a
population. Naturally, if the aim of a study is to learn about a certain population, the optimum
approach is to examine all members of that population. As a rule, however, we simply cannot do
this, since a census can be prohibitively time-consuming and expensive, and sometimes ultimately
pointless. If we are interested in studying the lifetimes of light bulbs, we cannot test all the
light bulbs in a batch to destruction, because then we would have none left to sell; likewise, we
cannot test every device in a batch to assess its quality. So we often collect a portion of the
population and study its characteristics. The main advantages of the sampling method are
• Reduced Cost
• Greater Speed
• Greater Scope
• Greater Accuracy
In a similar way the basic principles of a sample survey are
§ Validity
§ Regularity
§ Optimization
§ Efficiency
2.1.1 Steps in a Sample Survey
There are several steps involved in a sample survey. They are briefly summarized as follows.
• Objectives of the Survey
• Target Population to be Sampled
• Data to be Collected
• Degree of Precision Desired
• Method of Measurement
• The Frame (List of the members of a population)
2.1.2 Selection of the Sample
After designing a sample survey as described in the previous subsection, we move forward to the
collection of data. This process requires several steps as well, which are as follows.
• Definition of the Population
• Nature of the Data
• Method of Collecting Data
• Sampling Plan
• Design of Questionnaire
• The Pretest
• Organization of the Field Work
• Summary and Analysis of the Data
• Information Gained for Future Surveys
• Publication of the Report
2.1.3 Problems in a Sample Survey
We may face problems at every step just mentioned in Subsection 2.1.2. In addition to those, we
might face some further problems, as follows.
• Specification of the Desired Technique
• Non-availability of the Frame
• Non-response
2.2 Which Sampling Technique?
The way to determine who comprises the sample depends on a number of factors, such as the
availability of and access to the individuals in the representative group, the availability of
resources to use in the selection of the sample, and the technical expertise of those involved in
the data collection. Several basic sampling techniques are possibilities. Of course we prefer
random sampling, because a lack of randomness causes biased estimates of parameters [see Cochran
(1977)]. Several random sampling procedures which are frequently used in practice are discussed
briefly below.
2.2.1 Simple Random Sampling
This sampling involves using a random number table (available in most advanced math books,
statistics textbooks, and standard statistical software packages).
• A list is compiled of the eligible participants in each group.
• Each name is assigned a sequential number, beginning with 0.
• The first name is selected by pointing to a number on the random number table and
matching the digits to the appropriate name on the list.
• Beginning with that chosen number, names are included in the sample, following the
sequential listing of numbers on the random number table, until the desired sample size is
reached.
The main advantage of this sampling method is that it is simple, but it takes more time and effort.
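The thesis itself contains no code, but the table-lookup procedure above can be sketched in a few lines of Python; the frame of "students" below is purely hypothetical, and `random.sample` plays the role of the random number table.

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Draw a simple random sample without replacement: every member
    of the frame has the same chance of selection, which is what the
    random number table achieves by hand."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

# Hypothetical frame of eligible participants, numbered sequentially.
frame = ["student_%03d" % i for i in range(200)]
sample = simple_random_sample(frame, 20, seed=42)
print(len(sample))  # 20 distinct participants
```

Fixing the seed only makes the illustration reproducible; in a real survey the selection is, of course, left random.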
2.2.2 Stratified Random Sampling
This is used when representation of particular subgroups within the sample is necessary, for
example when the population under study is nonhomogeneous. Proportional random sampling allows
the sample to be taken using simple random sampling within each of the subgroups (often called
strata), in proportion to their representation in the entire group. The main advantage of this
method is that the sample represents each subgroup. However, it may be difficult to determine the
characteristics of individuals needed to classify them appropriately into specific strata.
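As a sketch of proportional allocation (the strata below are hypothetical, not survey data), each stratum contributes a simple random sample sized in proportion to the stratum's share of the population:

```python
import random

def stratified_sample(strata, n_total, seed=None):
    """Proportional stratified sampling: a simple random sample is
    drawn within each stratum, sized in proportion to the stratum's
    share of the whole population."""
    rng = random.Random(seed)
    N = sum(len(members) for members in strata.values())
    sample = {}
    for name, members in strata.items():
        n_h = round(n_total * len(members) / N)  # proportional allocation
        sample[name] = rng.sample(members, n_h)
    return sample

# Hypothetical population: 300 urban and 100 rural respondents.
strata = {"urban": list(range(300)), "rural": list(range(300, 400))}
s = stratified_sample(strata, 40, seed=0)
print(len(s["urban"]), len(s["rural"]))  # 30 10
```

Note that rounding each stratum size can make the realized total differ from the target by a unit or two.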
2.2.3 Systematic Sampling
Here observations are collected at regular intervals, as follows.
• A list of the members of the target population is compiled.
• A name on the list is chosen as a starting point.
• Every k-th name, depending on the desired sample size, is selected for inclusion in the
sample.
The main advantage of this technique is that it is very simple; we do not even need a complete
frame before conducting the survey, and it is much faster than the previous two methods. The main
disadvantage is that the sample may not be representative, and not everyone has an equal chance
of being selected.
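The three bullet points above can be sketched as follows (the frame and the 1-in-k interval are illustrative only):

```python
import random

def systematic_sample(frame, n, seed=None):
    """1-in-k systematic sampling: choose a random starting point in
    the first interval, then take every k-th member, where k = N // n.
    If N is not a multiple of k the realized size may differ by one."""
    rng = random.Random(seed)
    k = max(1, len(frame) // n)
    start = rng.randrange(k)  # random start within the first interval
    return frame[start::k]

frame = list(range(1000))
s = systematic_sample(frame, 100, seed=7)
print(len(s))  # 100
```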
10
2.2.4 Cluster Sampling
This sampling may be especially useful for school districts with multiple buildings, multiple
grade levels, or specific purposes. It is done in the following way.
• Entire groups, not individuals, are selected to participate in the data collection.
• Simple random sampling is applied to the representative “clusters” to select the clusters in
which all members will participate.
The main advantages of this technique are that it is efficient for large numbers and that it does
not require the names of individuals. The disadvantage is an increased likelihood, relative to
the other sampling techniques, of obtaining a less representative sample than desired.
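A one-stage version of the procedure can be sketched as below; the "schools" are hypothetical, and whole clusters are selected by simple random sampling:

```python
import random

def cluster_sample(clusters, m, seed=None):
    """One-stage cluster sampling: m whole clusters are selected by
    simple random sampling, and every member of a selected cluster
    is included in the sample."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), m)  # sort keys for reproducibility
    return [member for c in chosen for member in clusters[c]]

# Hypothetical school district: 12 schools of 25 students each.
clusters = {"school_%d" % i: ["s%d_%d" % (i, j) for j in range(25)]
            for i in range(12)}
s = cluster_sample(clusters, 3, seed=3)
print(len(s))  # 75 = 3 clusters x 25 students
```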
2.3 Sample Size Determination
The size of the group to be surveyed generally determines the size of the sample. Standard
sampling practice is to include all members of a particular group if the number in the group is
100 or fewer, so there is no sampling; we can then use only descriptive statistics, not
inferential statistics. If the group size is 400-600, about 50% should be chosen through the
application of a sampling technique. For larger groups, 20% of the total number in the group is
an appropriate size. With 1,500 or more in the group, a sample size of 300 is considered
adequate. For a nationwide survey, even a very small percentage of the population (say 0.001%)
can produce a large sample size. A serious consideration in determining the sample size is the
number of non-respondents: selecting 300 students from a group of 1,500 may not be effective if
only 50 of them return the survey forms.
The determinations of sample size discussed so far are not backed by any standard theory; they
come from the experience of experimenters who are often involved in sample surveys. However,
there are some more rigorous techniques for estimating the sample size (Cochran, 1977). These
take into account several points:
• What sampling technique is being used?
• How much precision the experimenter wants?
• How much margin of error one would allow in the inferential procedure?
For a very large population (a nationwide survey) a sample size between 1,200 and 1,300 could be
enough in simple random sampling to infer within a 3% margin of error (e.g., Gallup polls use
about 1,000 samples for a country like the USA; for a 1% margin of error the required sample size
for the USA is about 10,000), provided the sampling is done very carefully and efficiently
(Newport, Saad and Moore, 1997).
2.3.1 The Principal Steps Involved in the Choice of a Sample Size
Here are the major steps that we need to remember for the determination of sample size.
• There must be some statement concerning what is expected of the sample
• Some equation that connects n with the desired precision of the sample must be found
• This equation will contain, as parameters, certain unknown properties of the population
• For subpopulations a separate calculation is made for the n in each subpopulation and the
total n is found by addition
• When more than one characteristic is measured, the calculations may lead to a series of
conflicting values of n. Some method must be found for reconciling these values
• The chosen value of n must be appraised to see whether it is consistent with the resources
available to take the sample
2.3.2 Estimation with Some Precision
It is desirable that the parameters estimated from samples be precise. In statistics, when we
have categorical data we estimate parameters by proportions; for continuous data, the sample
average is a very popular estimator of the population mean.
Estimation of n Based on P

Suppose we want V(p) ≤ V_0. Under simple random sampling,

    V(p) = [(N − n)/(N − 1)] (PQ/n) ≤ V_0,

which gives

    n ≥ NPQ / [(N − 1)V_0 + PQ].                                  (2.1)

If P is unknown, then we can take P = 0.5, which yields the sample size

    n = N(1/4) / [(N − 1)V_0 + 1/4] = N / [4(N − 1)V_0 + 1].      (2.2)

This is the worst possible case, in the sense that it yields the maximum n. For example, if
N = 1000, P = 0.1, and we choose V_0 = 0.0025, then n ≈ 35; but when P is unknown, n ≈ 91.

For large N,

    n ≈ PQ/V_0      if P is known,
    n = 1/(4V_0)    if P is unknown.                              (2.3)

For the above example, when N is large, n ≈ 36 for P = 0.1 and n = 100 for unknown P (= 0.5).

To choose V_0, one possibility is to control the length of the 100(1 − d)% confidence interval
p ± [z_{d/2} e.s.e.(p) + 1/(2n)], whose length is

    2 z_{d/2} e.s.e.(p) + 1/n ≈ 2 z_{d/2} e.s.e.(p).

Requiring 2 z_{d/2} e.s.e.(p) ≤ L implies

    Estimated V(p) ≤ [L / (2 z_{d/2})]^2 = V_0.                   (2.4)

Sometimes we instead choose n so that the expected length of the 100(1 − d)% C.I. equals L. If we
ignore (N − n)/(N − 1), the length of our desired confidence interval becomes

    L = 2z √(PQ/n) + 1/n.

Since L − 1/n = 2z √(PQ/n) > 0, we must have L > 1/n, i.e., n > 1/L. Squaring both sides,

    (L − 1/n)^2 = 4z^2 PQ/n
    L^2 − 2L/n + 1/n^2 = 4z^2 PQ/n
    n^2 L^2 − 2Ln + 1 = 4z^2 PQ n
    n^2 L^2 − (2L + 4z^2 PQ)n + 1 = 0,

so that

    n = [2L + 4z^2 PQ ± √((2L + 4z^2 PQ)^2 − 4L^2)] / (2L^2)
      = [L + 2z^2 PQ ± 2√(z^2 LPQ + z^4 P^2 Q^2)] / L^2
      = 1/L + [2z^2 PQ ± 2z √(LPQ + z^2 P^2 Q^2)] / L^2.

We consider only the + sign, because with the − sign n goes out of bounds: since
z √(LPQ + z^2 P^2 Q^2) > z^2 PQ, the − sign would give

    n < 1/L + [2z^2 PQ − 2z^2 PQ] / L^2 = 1/L,

which contradicts our previous result n > 1/L. Thus

    n = 1/L + [2z^2 PQ + 2z √(LPQ + z^2 P^2 Q^2)] / L^2.          (2.5)
Estimation of n for Continuous Data

For the sample mean ȳ we have

    V(ȳ) = [(N − n)/N] (S^2/n).

Then requiring

    V(ȳ) ≤ V_0, i.e., (1 − n/N) S^2/n ≤ V_0,

gives

    n ≥ n_0 / (1 + n_0/N),                                        (2.6)

where n_0 = S^2/V_0. For large N, the sample size can be taken as n ≈ n_0.

We can also estimate the sample size by controlling the relative error r in the estimated
population total or mean, i.e., we require

    P(|ȳ − Ȳ| > rȲ) = P(|Nȳ − NȲ| > rNȲ) = α.

Therefore rȲ / s.e.(ȳ) = z, where z is the normal deviate corresponding to α, and solving for n
gives

    n = n′ / (1 + n′/N),                                          (2.7)

where n′ = [zS / (rȲ)]^2.

Sometimes we select the sample size in two steps.

Step 1. Take a simple random sample of size n_1, from which we estimate S^2 by s_1^2.

Step 2. Take additional units from the same or a similar population so that the final estimate of
ȳ will have a preassigned variance V_0; the total sample size becomes

    n = (s_1^2 / V_0)(1 + 2/n_1).                                 (2.8)
For our survey most of the data are qualitative in nature, and hence we employ rules (2.2)-(2.4).
Since ours is a nationwide survey, we obtain the sample size assuming large N. With the precision
chosen at 0.05 and d = 0.05, we obtain the desired sample size as 291.
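Rules (2.1)-(2.3) are simple enough to verify against the worked example given earlier in this section (N = 1000, V_0 = 0.0025); the sketch below reproduces the values n ≈ 35, 91, 36, and 100 quoted there:

```python
import math

def n_for_proportion(V0, P=0.5, N=None):
    """Sample size needed so that Var(p) <= V0 under simple random
    sampling: rule (2.1)/(2.2) for a finite population of size N,
    rule (2.3) when N is large (N=None)."""
    Q = 1.0 - P
    n = P * Q / V0 if N is None else N * P * Q / ((N - 1) * V0 + P * Q)
    return math.ceil(n - 1e-9)  # guard against floating-point overshoot

print(n_for_proportion(0.0025, P=0.1, N=1000))  # 35
print(n_for_proportion(0.0025, N=1000))         # 91  (worst case P = 0.5)
print(n_for_proportion(0.0025, P=0.1))          # 36  (large N)
print(n_for_proportion(0.0025))                 # 100 (large N, worst case)
```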
2.4 Questionnaire
We prepared a questionnaire containing 20 questions. After a pilot survey we found two questions
a bit sensitive, and we dropped them from the questionnaire. The detailed questionnaire used in
our study is presented in Appendix A. Of the remaining 18 questions, only two are numerical; the
other 16 yield qualitative data.
2.5 Internet Survey and Encounter Sampling
In a sample survey the optimum methodology is to obtain a random sample which serves as a proper
representation of the population. In practice it is really difficult to maintain the requirements
of probability sampling, so for convenience researchers often collect data which are not entirely
random; for this reason nonrandom data are prevalent in practice. In our research we collected
our data using a couple of methods which are not entirely random in nature.
2.5.1 Internet Survey
The number of surveys being conducted over the internet has increased dramatically in the last 10
years, driven by a dramatic rise in internet penetration and the relatively low cost of conducting
web surveys in comparison with other methods. Web surveys have a number of advantages over
other modes of interview. They are convenient for respondents to take on their own time and at
their own pace. The lack of an interviewer means web surveys suffer from less social desirability
bias than interviewer-administered modes. Web surveys also allow researchers to use a host of
multimedia elements, such as having respondents view videos or listen to audio clips, which are
not available to other survey modes.
Although more surveys are being conducted via the Web, internet surveys are not without their
drawbacks. Surveys of the general population that rely only on the internet can be subject to
significant biases resulting from undercoverage and nonresponse. Not everyone in the U.S. or
Saudi Arabia has access to the internet and there are significant demographic differences between
those who do have access and those who do not. People with lower incomes, less education, living
in rural areas or age 65 and older are underrepresented among internet users and those with high-
speed internet access. There is no national list of email addresses from which people could be
sampled, and there is no standard convention for email addresses, as there is for phone numbers,
that would allow random sampling. Internet surveys of the general public must thus first contact
people by another method, such as through the mail or by phone, and ask them to complete the
survey online.
2.5.2 Encounter Sampling
A data set is known as encountered data when the investigator goes into the field and records
what he or she observes or encounters. Long-established data collection techniques require
observations to be taken at random and under prescribed circumstances. This is often not possible
with real-life problems: we have instead to make do with whatever forms and limited numbers of
observations can be obtained, on the occasions and at the places they happen to arise. This
process of data collection is known as encounter sampling. From this definition it is obvious
that encountered data are not random, and they can show significant bias.
2.6 Data Collected for Our Study
In our research we employed both an internet survey and encounter sampling to collect the data.
After the preparation of the questionnaire we had to decide how many samples we needed for our
research and how we could get those observations. In Section 2.3 we found that the required
sample size for this study is 291. We had at best six months for the collection of data. Since it
was not possible to obtain a simple random sample in such a short period of time, we planned an
internet survey. But that was not easy either. At first we created a Facebook page and requested
all of our relatives, friends, classmates and others to join the group and encourage female
students to take the survey. We also sent e-mails to a large number of female students and
requested that they circulate the survey among other women. We initially invited over one
thousand women to participate in this survey, but the number of non-respondents was huge, and
even after waiting for three months there was little progress; we could not achieve even half of
our target. We then started interviewing female students ourselves: through WhatsApp we invited
female students to meet us in the library or some other convenient place. That was still not
enough. Finally, we requested our family members and friends to visit schools, colleges,
universities and malls to conduct the survey. It took us about six months to finish sampling.
After careful scrutiny we found 2 faulty samples, which we discarded from the data set; our final
sample size is thus 289. However, we had some missing observations in every variable studied in
our research.
BIAS CORRECTION METHODS, TESTS FOR NORMALITY,
AND LOGISTIC REGRESSION ANALYSIS
We have already mentioned that the data collected in our study are subject to bias and warrant bias correction. In this chapter we will discuss some methods of bias correction. Bias correction
methods are available for continuous data but most of the variables that we have in our study
contain qualitative data. In this chapter we develop a new method of bias correction for qualitative
data. We also discuss different aspects of logistic regression in this chapter that will be applied for
finding the relationship between different variables.
3.1 Bias Correction
In this section we will discuss some bias correction techniques. At first we will introduce a bias
correction technique for continuous data and later we will develop a new bias correction method
for binary data which should be useful for correcting bias in the estimate of proportions.
3.1.1 Bias and Weighted Distributions
If we sample fish in a pond by catching them in a net, there will be encounter bias (more usually called size bias). This is because the mesh size will have the effect of lowering the incidence of the smaller fish in the catch – some will slip through the net. If we were to sample harmful industrial fibers (in monitoring adverse health effects) by examining fibers on a plane sticky surface using the line-intercept method (one type of encounter sampling), a similar problem would arise. In this case our data would consist of the lengths of the fibers crossed by the intercept line, as shown in Figure 3.1.
Figure 3.1: Encounter sampling for selecting fibers
Figure 3.2: Original and biased probability distributions
3.1.2 Weighted Distribution Methods for Continuous Data
Our interest will be in the distribution of sizes, but the sampling methods just described are clearly likely to produce seriously biased results. Here we are bound to obtain what are known as length-biased or size-biased samples, and statistical inference drawn from such samples will be seriously flawed, because it relates to the distribution of measured sizes, not to the population at large (as shown in Figure 3.2), which is our real interest. Thus we will typically overestimate the mean, both in the fish and in the fiber examples, possibly to a serious extent.
The following theorem, proposed by Barnett (2004), gives the amount of bias arising from the contaminated distribution.
Theorem 3.1: Suppose X is a nonnegative continuous random variable with mean \mu and variance \sigma^2, but what we actually sample is a random variable X^*. A special but popular case of the size-biased distribution has the p.d.f.
f^*(x) = x f(x) / \mu    (3.1)
The variable actually sampled has expected value
E(X^*) = \int x^2 f(x) \, dx / \mu = \mu ( 1 + \sigma^2 / \mu^2 )    (3.2)
Corollary 3.1: If we take a random sample of size n, then the sample mean of the observed data \bar{x}^* is biased upward by a factor 1 + \sigma^2 / \mu^2.
The amount
BF = 1 + \sigma^2 / \mu^2    (3.3)
is known as the bias factor. Here the problem is that we do not know the true values of \mu and \sigma^2. However, Barnett (2004) proposed that the statistic
\widehat{BF} = \bar{x}^* \cdot \frac{1}{n} \sum_{i=1}^{n} \frac{1}{x_i^*} = \bar{x}^* \, \widehat{E}(1/x^*)    (3.4)
provides an intuitively appealing estimate of the bias factor 1 + \sigma^2 / \mu^2, which leads to Corollary 3.2.
Corollary 3.2: The bias corrected estimate of \mu is given by
\bar{x} = \bar{x}^* / \widehat{BF} = 1 / \widehat{E}(1/x^*)    (3.5)
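The correction in (3.4)-(3.5) can be illustrated with a small simulation. The sketch below is illustrative only and not part of our survey data: it assumes a Gamma(a, s) population and uses the standard fact that its size-biased version is Gamma(a + 1, s).

```python
import numpy as np

# Simulated illustration of (3.4)-(3.5), assuming a Gamma(2, 3) population:
# true mean mu = 6, variance sigma^2 = 18, so BF = 1 + 18/36 = 1.5.
rng = np.random.default_rng(1)
a, s = 2.0, 3.0
x_star = rng.gamma(a + 1.0, s, 5000)            # size-biased sample: Gamma(a+1, s)

naive_mean = x_star.mean()                      # biased upward by BF = 1.5
bf_hat = x_star.mean() * np.mean(1.0 / x_star)  # estimated bias factor, (3.4)
corrected = x_star.mean() / bf_hat              # bias corrected mean, (3.5)

print(round(naive_mean, 2))   # close to mu * BF = 9
print(round(corrected, 2))    # close to the true mean 6
```

Note that (3.5) reduces to the harmonic mean of the observed sample, which is why the correction needs every observation to be strictly positive.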
3.1.3 Bias Correction for Sample Proportion
In the previous subsection we learnt how to correct bias in a sample mean. But so far as we know, there exists no bias correction method for the case where the sample proportion is used as an estimate of the population proportion. One may argue that we can treat a sample proportion as a sample mean of a variable that takes only the values 0 and 1. We can certainly do that, but we cannot carry out the bias correction in the same way, because for the values 0 the estimator in (3.4) would require computing 1/0, which is undefined. So we need a different bias correction technique for a sample proportion. Here we propose a new method for bias correction, which is described in Theorem 3.2.
Theorem 3.2: Suppose X^* is a biased Bernoulli random variable with probability of success p^*, while X is the bias corrected Bernoulli random variable with probability of success p, and the bias is as defined in (3.1). If \hat{p}^* = \bar{x}^* is an estimate of p^*, then the bias corrected estimate of p is
\hat{p} = ( n \bar{x}^* - 1 ) / ( n - 1 )    (3.6)
Proof: For the Bernoulli random variable X, \sum_{i=1}^{n} X_i is binomial with \mu = np and \sigma^2 = np(1-p).
Thus we obtain from (3.2)
\mu^* = \mu ( 1 + \sigma^2 / \mu^2 ) = np \left( 1 + \frac{np(1-p)}{(np)^2} \right) = np + 1 - p    (3.7)
so that
n p^* = (n - 1) p + 1    (3.8)
and hence
p = ( n p^* - 1 ) / ( n - 1 )    (3.9)
If we estimate p^* by \hat{p}^* = \bar{x}^*, then the bias corrected estimate of p is
\hat{p} = ( n \bar{x}^* - 1 ) / ( n - 1 )
and that completes the proof of Theorem 3.2.
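As an illustration, the following hedged sketch (simulated numbers, not survey data) implements the size-bias mechanism of (3.1) for the binomial total T – a sample is observed with probability proportional to T – and shows that (3.6) recovers the true p while the raw proportion overshoots.

```python
import numpy as np

# Simulated illustration of Theorem 3.2. A binomial total T is accepted with
# probability T/n, i.e. proportional to its size, so by (3.7) the observed
# mean is np + 1 - p rather than np; (3.6) undoes this.
rng = np.random.default_rng(7)
n, p = 50, 0.3
totals = rng.binomial(n, p, 200000)
keep = rng.random(200000) * n < totals      # accept T with probability T/n
t_star = totals[keep]

x_bar_star = t_star.mean() / n              # biased sample proportion
p_hat = (n * x_bar_star - 1.0) / (n - 1.0)  # bias corrected estimate, (3.6)

print(round(x_bar_star, 3))   # noticeably above the true p = 0.3
print(round(p_hat, 3))        # close to the true p = 0.3
```

Here E(\bar{x}^*) = (np + 1 - p)/n = 0.314 for n = 50 and p = 0.3, so the raw proportion carries a visible upward bias that (3.6) removes.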
3.2 Tests for Normality
In all branches of knowledge it is necessary to apply statistical methods in a sensible way. In the literature, statistical misconceptions are common. The most commonly used statistical methods are correlation, regression and experimental design. But all of them are based on one basic assumption: that the observations follow a normal (Gaussian) distribution. So it is assumed that the populations from which the samples are collected are normally distributed, and for this reason these inferential methods require checking the normality assumption. An excellent review of different aspects is available in Das and Imon (2016).
Violation of the normality assumption may lead to the use of suboptimal estimators, invalid inferential statements and inaccurate predictions. So for the validity of our conclusions we must test the normality assumption.
3.2.1 Graphical Tests for Normality
Any statistical analysis is enriched by including appropriate graphical checking of the observations. To quote Chambers et al. (1983), ‘Graphical methods provide powerful diagnostic tools for confirming assumptions, or, when the assumptions are not met, for suggesting corrective actions. Without such tools, confirmation of assumptions can be replaced only by hope.’ A good number of graphical plots such as histograms, stem-and-leaf plots, box plots, percent-percent (P-P) plots, quantile-quantile (Q-Q) plots, plots of the empirical cumulative distribution function and other variants of probability plots are available in the literature for testing the normality assumption.
The simplest graphical display for checking normality is the normal probability plot. This method is based on the fact that if the ordered observations are plotted against their cumulative probabilities on normal probability paper, the resulting points should lie approximately on a straight line.
3.2.2 Analytical Tests
The following analytical tests are very popular among the practitioners.
Shapiro – Wilk Test: A test based on the square of correlation of true observations and the
expectation of normalized order statistics.
Anderson – Darling Test: A test based on empirical distribution function.
Bowman – Shenton (Jarque – Bera) Test: A test based on the coefficients of skewness and kurtosis. In our research we use the Jarque-Bera test. It is often very useful to test whether a given data set approximates a normal distribution. This can be evaluated informally by checking whether the mean and the median are nearly equal, whether the skewness is approximately zero, and whether the kurtosis is close to 3. A more formal test for normality is given by the Jarque-Bera statistic [see Jarque and Bera (1980)]
JB = (n/6) [ S^2 + (K - 3)^2 / 4 ]    (3.10)
where S is the sample skewness and K is the sample kurtosis. The JB statistic follows a chi-square distribution with 2 degrees of freedom. If the value of this statistic is greater than the critical value of the chi-square, which is 5.99 at the 5% level, we reject the null hypothesis of normality. Some recent and advanced tests for normality are available in Imon (2002), and Rana, Habshah, and Imon (2009).
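The JB computation can be sketched directly from (3.10). As a check, the Age variable analysed later (Section 4.1.1) has reported skewness 0.66, kurtosis 2.53 and n = 278, which reproduces the JB value quoted there.

```python
import numpy as np

def jarque_bera(x):
    # JB = (n/6) * [S^2 + (K - 3)^2 / 4], following (3.10)
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()
    s2 = np.mean(z**2)
    S = np.mean(z**3) / s2**1.5      # coefficient of skewness
    K = np.mean(z**4) / s2**2        # coefficient of kurtosis
    return (n / 6.0) * (S**2 + (K - 3.0)**2 / 4.0)

# Checking (3.10) against the Age summary values of Section 4.1.1
n, S, K = 278, 0.66, 2.53
jb_age = (n / 6.0) * (S**2 + (K - 3.0)**2 / 4.0)
print(round(jb_age, 2))   # 22.74 > 5.99, so normality is rejected at the 5% level
```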
3.3 Logistic Regression Diagnostics
Logistic regression is useful for situations in which we want to be able to predict the presence or absence of a characteristic or outcome based on the values of a set of predictor variables. It is similar to a linear regression model but is suited to models where the dependent variable is dichotomous. Logistic regression procedures use one of three types of categorical response variables: binary, ordinal, or nominal. A binary response has two categories with no natural order (for example, success-failure or yes-no). An ordinal response has three or more categories with a natural ordering (for example, none, mild, and severe; or extra fine, fine, medium, coarse, and extra coarse). A nominal response has three or more categories with no natural ordering (for example, blue, black, red, yellow; or sunny, rainy, and cloudy). We use logistic regression when our response variable is categorical. Here we will cover only binary logistic regression.
Logistic Regression Model
Consider a k variable regression model
Y = \beta_0 + \beta_1 X_1 + \ldots + \beta_p X_p + \varepsilon
where k = p + 1. We would logically let
y_i = 0 if the i-th unit does not have the characteristic
y_i = 1 if the i-th unit does possess that characteristic    (3.11)
Logistic Response Function
Generally, where the response variable is binary, there is considerable empirical evidence indicating that the shape of the response function should be nonlinear (in the variables). A monotonically increasing (or decreasing) S-shaped (or reversed S-shaped) curve could be a better choice. We can obtain this kind of curve if we choose the specific form of the function as
\pi(X) = \exp(Z) / [ 1 + \exp(Z) ]
where \pi = E(Y). This function is called the logistic response function. Here Z is called the linear predictor, defined by
Z = \ln [ \pi / (1 - \pi) ] = X\beta
The model in terms of Y would be written as
Y = \pi(X) + \varepsilon
3.3.1 Estimation of Regression Parameters
The binary response model violates a number of ordinary least squares (OLS) assumptions. Hence we will use the maximum likelihood (ML) method to estimate the parameters of a logistic regression model, assuming that Y is a Bernoulli random variable. The solution for the parameters of a logistic regression model is obtained by iteration.
We have already assumed that Y is a Bernoulli random variable with the pdf of Y given by
f(Y_i) = \pi_i^{Y_i} ( 1 - \pi_i )^{1 - Y_i},  i = 1, 2, \ldots, n
Since the Y’s are assumed to be independent, the likelihood function is
L = \prod_{i=1}^{n} f(Y_i) = \prod_{i=1}^{n} \pi_i^{Y_i} ( 1 - \pi_i )^{1 - Y_i}
Since the logarithm is a monotonic function, taking logarithms of the above expression we get
\ln L = \sum_{i=1}^{n} Y_i \ln [ \pi_i / (1 - \pi_i) ] + \sum_{i=1}^{n} \ln ( 1 - \pi_i )
We can also rewrite the previous expression as
\ln L = \sum_{i=1}^{n} Y_i x_i^T \beta - \sum_{i=1}^{n} \ln [ 1 + \exp( x_i^T \beta ) ]
Now differentiating this with respect to the parameter vector, we obtain
\partial \ln L / \partial \beta = X^T Y - X^T \pi
It is easy to show that
- \partial^2 \ln L / \partial \beta \, \partial \beta^T = X^T W X    (3.12)
where
W = diag\{ \pi_1 (1 - \pi_1), \pi_2 (1 - \pi_2), \ldots, \pi_n (1 - \pi_n) \}    (3.13)
It is not possible to get an analytic solution to the above system of equations, and hence we generally use the Newton-Raphson method to get a numerical solution.
For a two variable logistic regression model a popular choice of starting values is
\hat{\beta}_1 = ( \hat{\mu}_1 - \hat{\mu}_0 ) / \hat{\sigma}^2 ,  \hat{\beta}_0 = \ln ( n_1 / n_0 ) - 0.5 ( \hat{\mu}_1^2 - \hat{\mu}_0^2 ) / \hat{\sigma}^2
Here \hat{\mu}_0 and \hat{\mu}_1 are the averages of the x values when Y = 0 and Y = 1 respectively. When s_0^2 and s_1^2 are the usual sample variances computed from the observations with Y = 0 and Y = 1 respectively, and n_0 and n_1 are the corresponding sample sizes, we obtain
\hat{\sigma}^2 = ( n_0 s_0^2 + n_1 s_1^2 ) / ( n_0 + n_1 - 2 )
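The Newton-Raphson iteration just described can be sketched in a few lines. This is an illustrative implementation on simulated data (the variable names and the simulated coefficients are our own for illustration, not survey quantities); each step solves (X'WX) delta = X'(y - pi), with W as in (3.13).

```python
import numpy as np

def fit_logistic(x, y, n_iter=25):
    # Newton-Raphson for the binary logistic model: each step solves
    # (X'WX) delta = X'(y - pi), with W = diag(pi_i * (1 - pi_i)).
    X = np.column_stack([np.ones(len(x)), x])   # intercept plus one regressor
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-X @ beta))
        W = pi * (1.0 - pi)
        score = X.T @ (y - pi)                  # gradient of the log-likelihood
        info = X.T @ (X * W[:, None])           # X' W X
        beta = beta + np.linalg.solve(info, score)
    return beta

# Simulated data (illustrative only): true beta0 = -0.5, beta1 = 1.2
rng = np.random.default_rng(3)
x = rng.normal(size=4000)
pr = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x)))
y = (rng.random(4000) < pr).astype(float)
b0, b1 = fit_logistic(x, y)
print(round(b0, 1), round(b1, 1))   # roughly -0.5 and 1.2
```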
3.3.2 Tests for the Goodness-of-Fit
R^2 in Logistic Regression
The worth of a linear regression model is determined by R^2, which gives the proportion or percentage of the total variation in the dependent variable Y explained by the explanatory variables X. But R^2 computed as in linear regression should not be used in logistic regression, because it usually gives a very low value. Standard regression theory tells us that
R^2 = 1 - l^{2/n}
where l = L(0) / L(\hat{\beta}) is the likelihood ratio statistic. We can rewrite the above expression as
R^2 = 1 - [ L(0) / L(\hat{\beta}) ]^{2/n}
Since the likelihood function is a product of probabilities, the value of the function L(\hat{\beta}) must be less than or equal to 1. Thus the maximum possible value of R^2 is given by
Max(R^2) = 1 - [ L(0) ]^{2/n}
In linear regression \hat{Y} = \bar{Y} for the null model. Similarly, in logistic regression we would have \hat{\pi} = \bar{\pi} for the null model, with \bar{\pi} denoting the proportion of 1’s in the data set. It follows that
L(0) = \prod_{i=1}^{n} \bar{\pi}^{y_i} ( 1 - \bar{\pi} )^{1 - y_i} = \bar{\pi}^{n_1} ( 1 - \bar{\pi} )^{n - n_1}
So we can write
Max(R^2) = 1 - [ \bar{\pi}^{\bar{\pi}} ( 1 - \bar{\pi} )^{1 - \bar{\pi}} ]^2    (3.14)
When the data are quite sparse, the maximum possible value will be close to zero. Therefore Nagelkerke (1991) suggests that \bar{R}^2 be used, with
\bar{R}^2 = R^2 / Max(R^2)    (3.15)
which is known as the adjusted R^2.
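A small numeric sketch of (3.14)-(3.15), working on the log scale to avoid underflow; the y values and fitted probabilities below are made up for illustration only.

```python
import numpy as np

def logistic_r2(loglik_null, loglik_model, n, pi_bar):
    # Cox-Snell R^2 = 1 - (L(0)/L(beta))^{2/n} and Nagelkerke's adjustment,
    # computed from log-likelihoods as in (3.14)-(3.15).
    r2 = 1.0 - np.exp((2.0 / n) * (loglik_null - loglik_model))
    max_r2 = 1.0 - (pi_bar**pi_bar * (1.0 - pi_bar)**(1.0 - pi_bar))**2
    return r2, r2 / max_r2

# Hypothetical responses and fitted probabilities (illustrative only)
y = np.array([1, 1, 1, 0, 1, 0, 0, 1, 1, 0], dtype=float)
pi_hat = np.array([0.9, 0.8, 0.7, 0.2, 0.6, 0.3, 0.2, 0.8, 0.6, 0.4])
n, pi_bar = len(y), y.mean()
ll_model = np.sum(y * np.log(pi_hat) + (1 - y) * np.log(1 - pi_hat))
ll_null = n * (pi_bar * np.log(pi_bar) + (1 - pi_bar) * np.log(1 - pi_bar))
r2, r2_adj = logistic_r2(ll_null, ll_model, n, pi_bar)
print(round(r2, 2), round(r2_adj, 2))   # 0.5 0.68
```

As expected, the adjusted value is larger, since Max(R^2) is below 1 whenever \bar{\pi} is not 0 or 1.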
Deviance
The deviance in logistic regression corresponds to the error sum of squares (SSE) in linear regression. Likelihood displacement is another name for the deviance. The deviance measures the discrepancy between the observed likelihood and the likelihood expected under the fitted model, and it is used in logistic regression for statistical inference.
We know the asymptotic property of the likelihood ratio statistic l is given by
-2 \ln l \sim \chi^2_{n-p}
So we can write
D = -2 \ln l = 2 \sum_{i=1}^{n} \left[ y_i \ln \frac{y_i}{\hat{\pi}_i} + ( 1 - y_i ) \ln \frac{1 - y_i}{1 - \hat{\pi}_i} \right]    (3.16)
The value of D is generally compared with the value of the chi-square distribution with n - p degrees of freedom.
Hosmer – Lemeshow Test
For grouped data we use the Hosmer-Lemeshow goodness-of-fit test. The Hosmer-Lemeshow [see Hosmer et al. (2013)] statistic is given by
\hat{C} = \sum_{k=1}^{g} \frac{ ( O_k - n_k \bar{\pi}_k )^2 }{ n_k \bar{\pi}_k ( 1 - \bar{\pi}_k ) }    (3.17)
where g denotes the number of groups, n_k is the number of observations in the k-th group, O_k is the sum of the Y values for the k-th group, and \bar{\pi}_k is the average of the estimated probabilities \hat{\pi}_i for the k-th group. Notice that this differs slightly from the usual chi-squared goodness-of-fit test, as the denominator in the above expression is not the expected frequency. Rather, it is the expected frequency for the k-th group multiplied by one minus the average of the estimated probabilities for the k-th group. Thus each of the g denominators will be less than the corresponding expected frequency, and there will be a considerable difference when \bar{\pi}_k is close to 1.
3.3.3 Interpretation of Parameters
Let us consider the linear predictor that has only one regressor. Then we have
\hat{z}(x_i) = \hat{\beta}_0 + \hat{\beta}_1 x_i  and  \hat{z}(x_i + 1) = \hat{\beta}_0 + \hat{\beta}_1 ( x_i + 1 )
That implies
\hat{z}(x_i + 1) - \hat{z}(x_i) = \ln Odds_{x_i + 1} - \ln Odds_{x_i} = \hat{\beta}_1
If we take antilogs, we obtain the odds ratio
\hat{O}_R = Odds_{x_i + 1} / Odds_{x_i} = \exp( \hat{\beta}_1 )    (3.18)
3.3.4 Hypothesis Tests
Wald Test
In linear regression t-statistics are used in assessing the value of individual regressors when other regressors are in the model. In logistic regression we generally use the statistic
W = \hat{\beta}_j / s.e.( \hat{\beta}_j )    (3.19)
which is called the Wald statistic.
It should also be noted that there is no agreement as to the general form of what is being called the Wald statistic. We often see another version of the Wald statistic in the form
W = \hat{\beta}_j^2 / \hat{V}( \hat{\beta}_j )    (3.20)
which follows a chi-square distribution with 1 degree of freedom.
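For reference, a hedged sketch of the Hosmer-Lemeshow computation in (3.17), grouping on deciles of the fitted probabilities; the data are simulated from a correctly specified model, so the statistic should be unexceptional relative to its chi-square reference distribution on g - 2 degrees of freedom.

```python
import numpy as np

def hosmer_lemeshow(y, pi_hat, g=10):
    # Group observations by deciles of the fitted probabilities and
    # accumulate the grouped chi-square of equation (3.17).
    order = np.argsort(pi_hat)
    c_hat = 0.0
    for idx in np.array_split(order, g):
        n_k = len(idx)
        o_k = y[idx].sum()              # observed successes in group k
        pi_bar_k = pi_hat[idx].mean()   # average fitted probability in group k
        c_hat += (o_k - n_k * pi_bar_k)**2 / (n_k * pi_bar_k * (1 - pi_bar_k))
    return c_hat

# Simulated, correctly specified model (illustrative only)
rng = np.random.default_rng(5)
pi_hat = rng.uniform(0.05, 0.95, 2000)
y = (rng.random(2000) < pi_hat).astype(float)
c_hat = hosmer_lemeshow(y, pi_hat)
print(round(c_hat, 1))   # should be unexceptional for chi-square with 8 df
```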
Likelihood Ratio Test
An alternative to the Wald test is the likelihood ratio test for each regressor. This likelihood ratio
test statistic for a particular regressor is the difference between the two deviance statistics: the
deviance without the regressor in the model minus the deviance with the regressor in the model.
3.3.5 Unusual Observations in Logistic Regression
In Statistics we often observe that the values of descriptive measures are often much influenced
by few extreme observations which are commonly known as outliers. According to Barnett and
Lewis (1993), ‘Observations which stand apart from the bulk of the data are called outliers.’
Different aspects of outliers with its consequences are discussed by Hadi, Imon and Werner (2009).
Hampel et al. (1986) claim that a routine data set typically contains about 1-10% outliers, and even
the highest quality data set cannot be guaranteed free of outliers. Usually outliers are extreme
values in the sample, but this statement is not always true, especially in regression analysis.
In a regression problem, observations are judged as outliers on the basis of how unsuccessful the
fitted regression equation is in accommodating them and that is why observations corresponding
to excessively large residuals are treated as outliers. In logistic regression we use the maximum
likelihood (ML) method for estimating the parameters and fitting the model.
Types of Outliers
Regression Outlier: A regression outlier is a point that deviates from the linear relationship
determined from the other points, or at least from the majority of those points.
X – Outlier: This is a point that is outlying in regard to the x–coordinate. In the literature an X–
outlier is more popularly known as a high leverage point.
Y – Outlier: This is a point that is outlying only because its Y–coordinate is extreme.
X – and Y – Outlier: A point that is outlying in both X and Y coordinates is known as X – and Y –
outlier.
High Leverage Points and Influential Observations
According to Hocking and Pendleton (1983), high leverage points are those for which the input vector x_i is, in some sense, far from the rest of the data. Such points (basically X-outliers) lie at an enormous distance from the centre of the data, and they are called high leverage points. According to Belsley, Kuh and Welsch (1980), ‘An influential observation is one which, either individually or together with several other observations, has a demonstrably larger impact on the calculated values of various estimates ... than is the case for most of the other observations.’ In this situation parameter estimates or predictions may depend more on the influential observations than on the majority of the data, and their omission from the data may result in substantial changes to important features of an analysis.
33
Interrelationships among IO, HLP and Outlier
In a regression problem the issue of influential observations is often discussed together with the issues of two other types of unusual observations: outliers and high leverage points. Chatterjee and Hadi
(1986) discussed the interrelationships among these three types of cases. Here we only note that
influential observations need not be outliers in the sense of having large residuals. It is generally
believed that outliers would be highly influential but that is not always true. Andrews and Pregibon
(1978) have presented some examples where outlying observations have little influence on the
results. Their examples illustrate the existence of outliers that do not matter. However, high
leverage points are likely to be influential, but it has been also observed (Chatterjee and Hadi,
1986) that ‘As with outliers, high leverage points need not be influential, and influential
observations are not necessarily high leverage points.’
Detection of Outliers
Standardized Pearson residuals are popularly used for the detection of outliers. Here the i-th standardized Pearson residual is defined by
t_i = ( y_i - x_i^T \hat{\beta} ) / [ \hat{\sigma}_{(i)} \sqrt{ 1 - w_{ii} } ],  i = 1, 2, \ldots, n    (3.21)
where \hat{\sigma}^2_{(i)} is the OLS estimate of the mean squared error (MSE) based on a data set with the i-th observation deleted. As a rule of thumb, we call an observation an outlier when its corresponding residual value exceeds 3 in absolute value. A good review of recent outlier detection techniques in logistic regression is available in Imon (2008), and Hadi, Imon and Werner (2009).
Detection of High Leverage Points
The diagnostic approach for the identification of HLPs is to inspect the observations which do not match the average leverage structure of the data. The diagonal element w_ii of the weight or leverage matrix W, as defined in (3.13), is commonly used to identify X-outliers in linear regression. Since W is an idempotent matrix of rank k + 1, it is easy to show that the average value of w_ii is (k+1)/n.
Rules of Thumb
Twice-the-mean-rule – Hoaglin and Welsch (1978): iiw > 2 (k+1)/n (3.21)
Thrice-the-mean-rule – Vellman and Welsch (1981): iiw > 3 (k+1)/n (3.22)
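These rules of thumb are easy to apply once the leverages are computed. The sketch below (simulated data with one planted X-outlier, illustrative only) verifies that the average w_ii equals (k+1)/n and that the extreme point is flagged by the twice-the-mean rule.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 1
x = rng.normal(size=n)
x[0] = 8.0                                    # a planted X-outlier
X = np.column_stack([np.ones(n), x])
w = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)  # leverages w_ii

print(round(w.mean(), 3))                     # 0.04, i.e. (k+1)/n = 2/50
flagged = np.where(w > 2.0 * (k + 1) / n)[0]  # twice-the-mean rule
print(0 in flagged)                           # True: the extreme point is caught
```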
Some advanced techniques of detection of high leverage points in logistic regression are available
in Imon and Hadi (2013).
Detection of Influential Observations
Cook’s Distance: Cook (1977) proposed the use of a distance measure which is known in the statistical literature as Cook’s distance. We define the i-th Cook’s distance as
CD_i = ( \hat{\beta} - \hat{\beta}_{(i)} )^T ( X^T X ) ( \hat{\beta} - \hat{\beta}_{(i)} ) / [ (k+1) \hat{\sigma}^2 ],  i = 1, 2, \ldots, n    (3.23)
where \hat{\beta}_{(i)} is the estimated parameter vector with the i-th observation deleted. Cook’s distance can also be re-expressed as
CD_i = \frac{ r_i^2 }{ k + 1 } \cdot \frac{ w_{ii} }{ 1 - w_{ii} },  i = 1, 2, \ldots, n    (3.24)
Cook and Weisberg (1982) suggested considering points to be influential for which CD_i > 1.
Difference in Fits: Another very popular diagnostic used in measuring influence is the difference in fits (DFFITS). Belsley, Kuh and Welsch (1980) introduce DFFITS defined as
DFFITS_i = ( \hat{y}_i - \hat{y}_{i(i)} ) / ( \hat{\sigma}_{(i)} \sqrt{ w_{ii} } ),  i = 1, 2, \ldots, n    (3.25)
where \hat{y}_{i(i)} and \hat{\sigma}_{(i)} are respectively the i-th fitted response and the estimated standard error with the i-th observation deleted. It can be shown that DFFITS can be written as
DFFITS_i = t_i \sqrt{ w_{ii} / ( 1 - w_{ii} ) },  i = 1, 2, \ldots, n    (3.26)
The cut-off value for DFFITS is C \sqrt{ (k+1)/n }, where C is a constant such as 2 or 3 or more. Some advanced techniques for the detection of influential observations in logistic regression are available in Nurunnabi, Imon, and Nasser (2011).
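Cook's distance and DFFITS in (3.24) and (3.26) can be computed directly from the residuals and leverages. The sketch below uses simulated linear-regression data with one planted outlier (illustrative only; the OLS versions of the formulas given above are used).

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 60, 1
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
y[10] += 10.0                                  # plant a gross Y-outlier

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat (leverage) matrix
w = np.diag(H)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta
s2 = e @ e / (n - k - 1)                       # MSE
r = e / np.sqrt(s2 * (1.0 - w))                # standardized residuals
cook = (r**2 / (k + 1)) * (w / (1.0 - w))      # Cook's distance, (3.24)

# deletion MSE, externally studentized residuals t_i, then DFFITS via (3.26)
s2_del = ((n - k - 1) * s2 - e**2 / (1.0 - w)) / (n - k - 2)
t = e / np.sqrt(s2_del * (1.0 - w))
dffits = t * np.sqrt(w / (1.0 - w))
cut = 2.0 * np.sqrt((k + 1) / n)               # cut-off with C = 2
print(int(np.argmax(cook)), int(np.argmax(np.abs(dffits))))  # both should be 10
```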
ANALYSIS OF RESULTS
In this chapter we report the results obtained from the primary data that we collected for our research. At first we present summary statistics of our data. Since our data are collected through an internet survey and encounter sampling, we suspect the summary statistics may be somewhat biased, and we correct those biases. Finally we employ logistic regression to find out which variables contribute significantly to gender discrimination.
4.1 Summary Statistics
At first we present summary statistics of all 13 variables that we investigate in our study. The detailed data are presented in Table B1 of Appendix B.
Descriptive Statistics: Age, Education, Marital status, Children, ...

Variable                 N    N*   Mean     SE Mean  StDev    Minimum  Q1      Median  Q3      Maximum
Age                      278  11   32.252   0.533    8.881    18.000   25.000  30.000  38.000  55.000
Education                285   4   2.7719   0.0286   0.4827   1.0000   3.0000  3.0000  3.0000  3.0000
Marital status           274  15   0.6861   0.0486   0.8052   0.0000   0.0000  1.0000  1.0000  3.0000
Children                 203  86   0.5714   0.0348   0.4961   0.0000   0.0000  1.0000  1.0000  1.0000
No of children           289   0   1.173    0.100    1.703    0.000    0.000   0.000   2.000   7.000
Job                      289   0   0.4014   0.0289   0.4910   0.0000   0.0000  0.0000  1.0000  1.0000
Equality in family       289   0   0.5952   0.0289   0.4917   0.0000   0.0000  1.0000  1.0000  1.0000
Equality in workplace    288   1   0.6493   0.0282   0.4780   0.0000   0.0000  1.0000  1.0000  1.0000
Equality in wage         281   8   0.2527   0.0260   0.4353   0.0000   0.0000  0.0000  1.0000  1.0000
Preference on boss       288   1   0.8993   0.0178   0.3014   0.0000   1.0000  1.0000  1.0000  1.0000
Adequate knowledge       283   6   0.6502   0.0284   0.4778   0.0000   0.0000  1.0000  1.0000  1.0000
Form of gender inequal   283   6   2.6184   0.0685   1.1528   1.0000   1.0000  3.0000  4.0000  4.0000
Improvement              284   5   2.4366   0.0691   1.1646   1.0000   1.0000  3.0000  3.0000  5.0000
The MINITAB output gives us a rough and ready idea about the variables, but we would like to look at them in a bit more detail.
4.1.1 Age
According to our survey the average age of the respondents is 32.25 years, with a median age of 30 years. The minimum and maximum ages are 18 years and 55 years respectively. Figure 4.1 gives a histogram of the ages of the respondents. Usually age is a textbook case of normal data, but the histogram shows a somewhat non-normal pattern. In Figure 4.2 we present a normal probability plot of age, and that looks nonnormal as well.
[Histogram of Age with fitted normal curve: Mean 32.25, StDev 8.881, N = 278]
Figure 4.1: Histogram of Age
[Normal probability plot of Age with 95% CI: Mean 32.25, StDev 8.881, N = 278, AD = 5.446, P-Value < 0.005]
Figure 4.2: Normal Probability Plot of Age
For confirmation we run the Jarque-Bera test here. We get the measure of skewness as 0.66 and the kurtosis as 2.53. These two values yield a JB statistic of 22.74 with a p-value of 0.000, which means we must reject the null hypothesis of normality here. This result also reinforces our view that the data could be biased.
4.1.2 Education
The variable education has four labels: 1- Illiterate, 2- Under Grade 10, 3- Under Bachelor, and 4- Bachelor and above. We observe from the pie-chart of the education level of respondents [Figure 4.3] that most of the respondents have education level 3, which means they have an education level higher than grade 10 but lower than a bachelor's degree.
Figure 4.3: Pie-chart of Level of Education
4.1.3 Marital Status
The variable marital status has four levels too. They are 0- Unmarried, 1- Married, 2- Widow, and
3- Divorced. Our data show almost the same proportion of married and unmarried women.
Figure 4.4: Pie-chart of Marital Status
4.1.4 How many respondents have Children?
This variable has only two levels: 0- no children, and 1- have children. But Figure 4.5 shows three slices, because there are many missing values for this variable. The graph shows that the majority of women have children.
Figure 4.5: Pie-chart of Women Having Children or Not
4.1.5 Number of Children
The next variable is numerical and gives the number of children of the respondents. Here the average number of children is 1.173. When we look at the histogram of these data we see that most of the respondents do not have any children. Is this a natural phenomenon? In Saudi Arabia no unmarried woman can have a child, due to very strict religious and cultural norms. For this reason we look at the same data for only the married respondents and get the histogram presented in Figure 4.7.
Figure 4.6: Number of Children of the Respondents
Figure 4.7: Number of Children of the Married Respondents
When we look at the second graph we see that the number of respondents with no children drops sharply once unmarried women are taken out. The average number of children then goes up from 1.173 to 2.067.
4.1.6 Job Status
This variable has only two levels: 0- no job, and 1- has a job. Figure 4.8 shows that most of the respondents (about 60%) do not have jobs.
Figure 4.8: Pie-chart of Job Status
4.1.7 Equality in Family
This variable has two levels: 0- women are not treated equally in the family, and 1- women are treated equally in the family. Figure 4.9 clearly shows that in most of the families women are treated equally.
Figure 4.9: Pie-chart of Equality in the Family
4.1.8 Equality at Workplace
This variable has two levels: 0- women are not treated equally at the workplace, and 1- women are treated equally at the workplace. Figure 4.10 clearly shows that at most workplaces women are treated equally.
Figure 4.10: Pie-chart of Equality in the Workplace
4.1.9 Equality in Wage
This variable has two levels: 0- women are not getting equal wages, and 1- women are getting equal wages. Figure 4.11 clearly shows that there exists discrimination in salary: women are not getting equal salaries in comparison to men.
Figure 4.11: Pie-chart of Equality in Wages
4.1.10 Preference for Boss
Our next variable concerns women's preference for a boss. This variable has two levels: 0- preference for a male boss, and 1- preference for a female boss. Figure 4.12 clearly shows that there is overwhelming support for a female boss: 89.93% of women prefer a female boss in their workplaces.
Figure 4.12: Pie-chart of Preference for Boss
4.1.11 Adequate Education on Gender Inequality
This variable has two levels regarding the adequacy of education on gender inequality, where 0- no, and 1- yes. Figure 4.13 shows that most of the respondents are aware of gender inequality, and 65.02% have adequate education on the issue of gender inequality.
Figure 4.13: Pie-chart of Education Regarding Gender Inequality
4.1.12 Forms of Gender Inequality
This variable has four levels regarding different forms of gender inequality. They are 1- Gender
discrimination in getting a job, 2- Lower chance of promotion, 3- Unequal workplace treatment,
and 4- Harassment. The MINITAB output below and Figure 4.14 show in which forms women suffer most.

Tally for Discrete Variables: Form of gender inequality

Form of gender inequality   Percent
1                             25.80
2                             15.19
3                             30.39
4                             28.62
Figure 4.14: Pie-chart of Forms of Gender Inequality
4.1.13 Ways of Improvement
Our last variable has five levels regarding ways of reducing gender inequality. They are 1- Equalize the number of female and male staff at the workplace, 2- Better maternity benefits, 3- Paternity leave for husbands, 4- Equalize salary, and 5- Daycare for children.
Figure 4.15: Pie-chart for Ways of Improvement of Gender Inequality
Tally for Discrete Variables: Improvement

Improvement   Percent
1               32.75
2               12.68
3               33.45
4               20.42
5                0.70
The above MINITAB output and the graph presented in Figure 4.15 show that women feel that equalizing the number of female and male staff at the workplace and paternity leave for husbands would contribute most to reducing gender inequality in Saudi Arabia.
4.2 Bias-Corrected Statistics
As we mentioned before, our data are not purely random, and for this reason all statistics computed from these data may be subject to bias as estimates of their corresponding population parameters. For this reason we need to correct for bias.
Table 4.1: Original and Bias-corrected Statistics for Different Variables
Variable Measures Original Bias-Corrected
Age of Women Mean 32.25 years 30.04 years
Married Proportion 0.6861 0.6849
Women Having Children Proportion 0.5714 0.5692
Job Proportion 0.4014 0.3993
Equality in the Family Proportion 0.5952 0.5937
Equality at the Workplace Proportion 0.6493 0.6481
Equality in Wage Proportion 0.2527 0.2465
Preference for Female Boss Proportion 0.8993 0.8989
Adequate Knowledge Proportion 0.6502 0.6489
We compute two types of statistics here: the sample mean and the sample proportion. For the sample mean we use the bias correction formula of Barnett (2004) as given in (3.3) to (3.5). For the sample proportion we use our proposed bias correction formula (3.6) as outlined in Theorem 3.2. The original and bias corrected statistics for the different variables under study are presented in Table 4.1.
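Two entries of Table 4.1 can be reproduced directly from the estimator in Theorem 3.2, using the per-variable sample sizes N from the descriptive statistics (274 for Marital status, 289 for Job); small last-digit differences are rounding.

```python
# Bias-corrected proportion (3.6): p_hat = (n * p_star - 1) / (n - 1)
def corrected(p_star, n):
    return (n * p_star - 1.0) / (n - 1.0)

married = corrected(0.6861, 274)   # ~0.6849, as reported in Table 4.1
job = corrected(0.4014, 289)       # ~0.3993, as reported in Table 4.1
print(round(married, 3), round(job, 4))   # 0.685 0.3993
```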
4.3 Logistic Regression Analysis
In this section we employ logistic regression to see which variables contribute significantly to whether women feel there is gender inequality.
4.3.1 Equality in the Family
At first we would like to see which variables make a woman feel that she is treated equally in the family. Here the response variable is Equality in the family, with 0 = no and 1 = yes. The predictors are Age, Number of children, Job, Education, Marital status, and Adequate knowledge. It is worth mentioning that although the variable Education has four labels, we classified it into two groups, high and low: respondents with education below grade 10 are classified as low and those above it as high, denoted by 0 and 1 respectively. In a similar fashion we rearranged the variable Marital status as 0 = unmarried and 1 = ever married (married, widow, divorced). Results of this fit are presented in the following MINITAB output.
The Hosmer-Lemeshow and other goodness-of-fit statistics show that the fit is good. The predictors
'Adequate knowledge' and 'No. of children' are significant at the 5% level. If we raise the level
to 10%, two more variables become significant: 'Job' and 'Marital status'. Women who have adequate
knowledge, fewer children, a job, and who are married feel that they are treated equally in the
family.
Binary Logistic Regression: Equality in the family versus Age, No of children, Job, Education, Marital status, Adequate knowledge

Link Function: Logit

Logistic Regression Table
                                                            Odds    95% CI
Predictor             Coef        SE Coef      Z      P    Ratio   Lower  Upper
Constant             -2.88489     1.29762   -2.22  0.026
Age                   0.0282739   0.0222389  1.27  0.204    1.03    0.98   1.07
No of children       -0.287172    0.138802  -2.07  0.039    0.75    0.57   0.98
Job                   0.670635    0.350238   1.91  0.056    1.96    0.98   3.88
Education             0.320976    1.06753    0.30  0.764    1.38    0.17  11.17
Marital status        0.831147    0.433721   1.92  0.055    2.30    0.98   5.37
Adequate knowledge    2.78900     0.341257   8.17  0.000   16.26    8.33  31.75

Log-Likelihood = -124.272
Test that all slopes are zero: G = 110.755, DF = 6, P-Value = 0.000

Goodness-of-Fit Tests
Method           Chi-Square   DF      P
Pearson             192.871  173  0.143
Deviance            189.081  173  0.191
Hosmer-Lemeshow       7.883    8  0.445
The standardized Pearson residuals (see Table B2 in Appendix B) identify observation 238 as an
outlier and an influential observation. However, the following MINITAB output shows that its
omission does not make much difference to our conclusions.
Binary Logistic Regression: Equality in the family versus Age, No of children, Job, Education, Marital status, Adequate knowledge (without influential outlier)

Link Function: Logit

Logistic Regression Table
                                                            Odds    95% CI
Predictor             Coef        SE Coef      Z      P    Ratio   Lower  Upper
Constant             -2.70764     0.779504  -3.47  0.001
Age                   0.0236633   0.0228122  1.04  0.300    1.02    0.98   1.07
Marital status        0.859670    0.437032   1.97  0.049    2.36    1.00   5.56
No of children       -0.279518    0.139889  -2.00  0.046    0.76    0.57   0.99
Job                   0.681179    0.356786   1.91  0.056    1.98    0.98   3.98
Adequate knowledge    2.84751     0.345141   8.25  0.000   17.24    8.77  33.92
Education             0.249282    0.408857   0.61  0.542    1.28    0.58   2.86

Log-Likelihood = -122.321
Test that all slopes are zero: G = 113.613, DF = 6, P-Value = 0.000

Goodness-of-Fit Tests
Method           Chi-Square   DF      P
Pearson             200.753  188  0.249
Deviance            194.230  188  0.362
Hosmer-Lemeshow      10.971    8  0.203
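MINITAB obtains these coefficient estimates by maximum likelihood. The sketch below, on small hypothetical 0/1 data rather than the survey data, shows the underlying Newton-Raphson (IRLS) iteration that any such routine performs; the toy model has a single binary predictor, so the fitted coefficients can be checked against the group log-odds.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for the Newton step H d = g."""
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(m):
        piv = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(m):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * v for x, v in zip(M[r], M[c])]
    return [M[r][m] / M[r][r] for r in range(m)]

def logit_fit(X, y, iters=25):
    """Maximum-likelihood binary logistic regression via Newton-Raphson."""
    n = len(X)
    Z = [[1.0] + list(row) for row in X]          # prepend intercept column
    k = len(Z[0])
    beta = [0.0] * k
    for _ in range(iters):
        # fitted probabilities under the current coefficients
        p = [1.0 / (1.0 + math.exp(-sum(bj * z for bj, z in zip(beta, row))))
             for row in Z]
        # score vector Z'(y - p) and information matrix Z' W Z
        grad = [sum((y[i] - p[i]) * Z[i][j] for i in range(n)) for j in range(k)]
        H = [[sum(p[i] * (1 - p[i]) * Z[i][a] * Z[i][c] for i in range(n))
              for c in range(k)] for a in range(k)]
        beta = [bj + dj for bj, dj in zip(beta, solve(H, grad))]
    return beta

# Toy data: a single 0/1 predictor; success rates are 1/5 (x = 0) and 4/5 (x = 1),
# so the MLE is beta0 = log(1/4) and beta1 = log(16)
X = [[0], [0], [0], [0], [0], [1], [1], [1], [1], [1]]
y = [0, 0, 0, 1, 0, 1, 1, 1, 0, 1]
b = logit_fit(X, y)
print([round(v, 3) for v in b])   # → [-1.386, 2.773]
```

The same iteration, applied to the survey data with all six predictors, would reproduce the coefficient columns of the MINITAB tables.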
4.3.2 Equality at the Workplace
Next we would like to see which variables make a woman feel that she is treated equally at the
workplace. Here the response variable is Equality at the workplace with 0 = no and 1 = yes. The
predictors are Age, Number of children, Job, Education, Marital status, and Adequate knowledge.
Results of this fit are presented in the following MINITAB output.
Binary Logistic Regression: Equality at the workplace versus Age, No of children, Job, Education, Marital status, Adequate knowledge

Link Function: Logit

Logistic Regression Table
                                                            Odds    95% CI
Predictor             Coef        SE Coef      Z      P    Ratio   Lower   Upper
Constant             -0.426817    1.59510   -0.27  0.789
Age                  -0.0316665   0.0300424 -1.05  0.292    0.97    0.91    1.03
No of children       -0.0848301   0.186698  -0.45  0.650    0.92    0.64    1.32
Job                   1.78540     0.488902   3.65  0.000    5.96    2.29   15.54
Adequate knowledge    4.35300     0.456641   9.53  0.000   77.71   31.75  190.18
Education            -1.05598     1.28244   -0.82  0.410    0.35    0.03    4.30
Marital status        0.433582    0.581089   0.75  0.456    1.54    0.49    4.82

Log-Likelihood = -74.351
Test that all slopes are zero: G = 195.615, DF = 6, P-Value = 0.000

Goodness-of-Fit Tests
Method           Chi-Square   DF      P
Pearson             227.161  173  0.004
Deviance            118.378  173  0.999
Hosmer-Lemeshow      13.642    8  0.092
The Hosmer-Lemeshow and other goodness-of-fit statistics show that this fit is not great, though
probably just acceptable, as the p-value of the Hosmer-Lemeshow statistic is only 0.092. The
predictors 'Adequate knowledge' and 'Job' are significant at the 5% level: women who have
adequate knowledge and a job feel that they are treated equally at the workplace. When we employ
diagnostics we find 8 outliers, 7 of which are also influential. Cases 24, 35, 36, 55, 92, 93, and
279 are influential outliers; case 6 is an outlier only. We fit the model again without these cases
and obtain the following output.
Binary Logistic Regression: Equality at the workplace versus Age, No of children, Job, Education, Marital status, Adequate knowledge (without outliers)

Link Function: Logit

Logistic Regression Table
                                                            Odds    95% CI
Predictor             Coef        SE Coef      Z      P    Ratio   Lower    Upper
Constant             -0.408167    1.26888   -0.32  0.748
Age                  -0.0810854   0.0398721 -2.03  0.042    0.92    0.85     1.00
Marital status        0.971058    0.766544   1.27  0.205    2.64    0.59    11.86
No of children       -0.0389607   0.246623  -0.16  0.874    0.96    0.59     1.56
Job                   3.21944     0.725925   4.43  0.000   25.01    6.03   103.78
Adequate knowledge    5.89109     0.727647   8.10  0.000  361.80   86.91  1506.06
Education            -1.09781     0.754061  -1.46  0.145    0.33    0.08     1.46

Log-Likelihood = -47.352
Test that all slopes are zero: G = 237.272, DF = 6, P-Value = 0.000

Goodness-of-Fit Tests
Method           Chi-Square   DF      P
Pearson             109.759  182  1.000
Deviance             74.250  182  1.000
Hosmer-Lemeshow       4.276    8  0.831
We observe from this output that omitting the outliers improves the fit considerably: the p-value
of the Hosmer-Lemeshow statistic jumps from 0.092 to 0.831. In addition to 'Adequate knowledge'
and 'Job', 'Age' also emerges as significant at the 5% level; older women face more inequality at
the workplace.
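The outliers above were flagged through standardized Pearson residuals. A minimal sketch of that diagnostic follows, using hypothetical fitted probabilities and leverages rather than values from our model; a cutoff of 3 on the absolute residual is used here purely for illustration.

```python
import math

def std_pearson_residuals(y, p, h):
    """Standardized Pearson residuals for a fitted logistic regression:
    r*_i = (y_i - p_i) / sqrt(p_i (1 - p_i) (1 - h_ii))."""
    return [(yi - pi) / math.sqrt(pi * (1 - pi) * (1 - hi))
            for yi, pi, hi in zip(y, p, h)]

# Hypothetical fitted values; observation 2 responds 0 despite p = 0.95
y = [1, 0, 0, 0, 1]
p = [0.80, 0.30, 0.95, 0.20, 0.60]
h = [0.05, 0.04, 0.02, 0.03, 0.07]   # leverages h_ii from the hat matrix
res = std_pearson_residuals(y, p, h)
flagged = [i for i, r in enumerate(res) if abs(r) > 3]
print(flagged)   # → [2]
```

Combining a large residual with a large leverage (e.g. through DFFITS, as in Table B2) is what separates influential outliers from ordinary ones.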
4.3.3 Equality in Wage
Now we would like to see which variables make a woman feel that she receives an equal wage.
Here the response variable is Equality in wage with 0 = no and 1 = yes. The predictors are Age,
Number of children, Job, Education, Marital status, and Adequate knowledge. Results of this fit
are presented in the following MINITAB output.
Binary Logistic Regression: Equality in wage versus Age, No of children, Job, Education, Marital status, Adequate knowledge

Link Function: Logit

Logistic Regression Table
                                                            Odds    95% CI
Predictor             Coef        SE Coef      Z      P    Ratio   Lower  Upper
Constant             -2.59925     1.48278   -1.75  0.080
Age                   0.0489999   0.0235065  2.08  0.037    1.05    1.00   1.10
No of children       -0.0300005   0.146834  -0.20  0.838    0.97    0.73   1.29
Job                  -1.49290     0.422404  -3.53  0.000    0.22    0.10   0.51
Education             1.72032     1.24447    1.38  0.167    5.59    0.49  64.04
Marital status       -0.302289    0.461604  -0.65  0.513    0.74    0.30   1.83
Adequate knowledge   -2.17825     0.358074  -6.08  0.000    0.11    0.06   0.23

Log-Likelihood = -105.905
Test that all slopes are zero: G = 78.386, DF = 6, P-Value = 0.000

Goodness-of-Fit Tests
Method           Chi-Square   DF      P
Pearson             187.462  169  0.157
Deviance            168.670  169  0.493
Hosmer-Lemeshow       8.494    8  0.387
The Hosmer-Lemeshow and other goodness-of-fit statistics show that this fit is good. The predictors
'Adequate knowledge', 'Job', and 'Age' are significant at the 5% level. Women who have adequate
knowledge, have a job, and are older feel that they receive a lower wage than their male
co-workers. Although observations 17 and 128 are outliers, their omission did not change our
conclusions; the refitted output is omitted for brevity.
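The odds ratios and confidence limits in the MINITAB tables are obtained by exponentiating each coefficient and its Wald interval endpoints. For example, for the predictor 'Job' in the model above:

```python
import math

def odds_ratio_ci(coef, se, z=1.96):
    """Odds ratio and approximate 95% Wald CI from a logistic coefficient."""
    return math.exp(coef), math.exp(coef - z * se), math.exp(coef + z * se)

# 'Job' in the equality-in-wage model: Coef = -1.49290, SE Coef = 0.422404
orr, lo, hi = odds_ratio_ci(-1.49290, 0.422404)
print(round(orr, 2), round(lo, 2), round(hi, 2))   # → 0.22 0.1 0.51
```

This reproduces the MINITAB entries for 'Job' (odds ratio 0.22, 95% CI 0.10 to 0.51): having a job multiplies the odds of feeling wage equality by about 0.22.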
4.3.4 Preference for Female Boss
Finally we would like to see which factors lead a woman to prefer a female boss at the workplace.
Here the response variable is Preference for boss, with 0 = male and 1 = female. The predictors are
Age, Number of children, Job, Education, Marital status, and Adequate knowledge. Results of this fit
are presented in the following MINITAB output.
Binary Logistic Regression: Preference for female boss versus Age, No of children, Job, Education, Marital status, Adequate knowledge

Link Function: Logit

Logistic Regression Table
                                                            Odds    95% CI
Predictor             Coef        SE Coef      Z      P    Ratio   Lower  Upper
Constant             -1.50482     1.95644   -0.77  0.442
Age                   0.187715    0.0757454  2.48  0.013    1.21    1.04   1.40
Marital status       -1.16105     0.739696  -1.57  0.117    0.31    0.07   1.33
No of children        0.198515    0.309404   0.64  0.521    1.22    0.67   2.24
Job                  -1.47080     0.690995  -2.13  0.033    0.23    0.06   0.89
Adequate knowledge    1.58248     0.632776   2.50  0.012    4.87    1.41  16.82
Education            -0.671760    0.882038  -0.76  0.446    0.51    0.09   2.88

Log-Likelihood = -41.040
Test that all slopes are zero: G = 21.756, DF = 6, P-Value = 0.001

Goodness-of-Fit Tests
Method           Chi-Square   DF      P
Pearson             129.319  188  1.000
Deviance             67.170  188  1.000
Hosmer-Lemeshow       3.165    8  0.924
Here the fit is excellent, as the Hosmer-Lemeshow and other goodness-of-fit statistics have p-values
very close to 1.000. The predictors 'Adequate knowledge', 'Job', and 'Age' are significant at the
5% level. Judging from the signs of the coefficients, older women, women without a job, and women
who believe people have adequate knowledge regarding gender inequality are more likely to prefer a
female boss. When we employ diagnostics we find 4 influential outliers (cases 6, 12, 24, and 40);
their omission did not change our conclusions, and the refitted output is omitted for brevity.
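The Hosmer-Lemeshow statistic used throughout this chapter bins observations by fitted probability and compares observed with expected counts of successes and failures in each bin. The sketch below shows the standard construction (MINITAB's exact binning rules may differ); the data are hypothetical and perfectly calibrated by design, so the statistic is numerically zero.

```python
def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow chi-square: sort by fitted probability, split into
    `groups` bins, compare observed and expected counts of 1s and 0s.
    The statistic is referred to a chi-square with groups - 2 df."""
    pairs = sorted(zip(p, y))
    n = len(pairs)
    chi2 = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        obs1 = sum(yi for _, yi in chunk)
        exp1 = sum(pi for pi, _ in chunk)
        obs0, exp0 = len(chunk) - obs1, len(chunk) - exp1
        chi2 += (obs1 - exp1) ** 2 / exp1 + (obs0 - exp0) ** 2 / exp0
    return chi2, groups - 2

# Perfectly calibrated toy fit: in each group the observed number of 1s
# equals the expected number, so the statistic is (numerically) zero
p = [0.1] * 50 + [0.9] * 50
y = [1] * 5 + [0] * 45 + [1] * 45 + [0] * 5
stat, df = hosmer_lemeshow(y, p, groups=2)
print(round(stat, 6), df)   # → 0.0 0
```

With the usual ten groups, as in the MINITAB output, the statistic would be referred to a chi-square distribution on 8 degrees of freedom; a small statistic (large p-value) indicates no detectable lack of fit.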
Conclusions and Areas of Further Research
In this chapter we summarize the findings of our research, draw some conclusions, and outline
ideas for future research.
5.1 Conclusions
We collected primary data mainly through an internet survey and an encounter (face-to-face)
survey. Data of this kind generate bias when they are used for estimating parameters. Although a
bias correction formula is available for the sample mean, it cannot be readily applied to the
sample proportion. Yet most of our data are qualitative in nature, and for such data the sample
proportion is the most commonly used statistic. To overcome this issue we developed a new method
of bias correction for the sample proportion, and we then employed these bias correction methods
in summarizing the data collected for our research.
According to our survey, the average age of the respondents is 30.04 years. Among them 68.49% are
married, 56.92% have children, and the average number of children is 2.067 for married women. Most
respondents have more than a high school education but less than a bachelor's degree. 39.93% of
the respondents have a job. 59.37% of the women feel that they are treated equally in the family,
and 64.81% feel that they are treated well at the workplace. However, only 24.65% believe that
they receive a wage equal to what men receive for the same job. 89.89% of the respondents prefer
to work under a female boss. 64.89% believe that people have adequate knowledge regarding gender
inequality. 30.39% feel that their workplace environment is not friendly, and 28.62% face sexual
harassment. 32.75% feel that more jobs for women would help reduce gender inequality, while
33.45% feel that paternity leave for fathers would help.
Since most of our data are qualitative in nature, we employed logistic regression to determine the
variables that may be responsible for gender inequality. We found evidence of gender inequality in
the family, at the workplace, and most severely in wages. The variables that influence these
inequalities are whether a woman has a job, her age, her marital status, her number of children,
and knowledge regarding gender inequality. We also observed that influential outliers can have a
huge impact on the fit of the model and the resulting conclusions.
5.2 Areas of Further Research
In our current research our subjects were female students only, but gender inequality is a major
issue for all women, so we would like to extend this research to all women in Saudi Arabia. Because
of time and resource constraints our sampling procedure was not entirely random; we would like to
draw random samples in our next study. Finally, most of our data are categorical in nature and a
few variables are ordinal, so we would like to employ more sophisticated categorical data analysis
techniques in future work.
References
1 Andrews, D. F. and Pregibon, D. (1978). Finding the outliers that matter, Journal of the Royal Statistical Society, Series B, 40, 85-93.
2 Barnett, V. (2004). Environmental Statistics: Methods and Applications, Wiley, New York.
3 Barnett, V. and Lewis, T. (1993). Outliers in Statistical Data, 2nd ed., Wiley, New York.
4 Belsley, D. A., Kuh, E. and Welsch, R. E. (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, Wiley, New York.
5 Calvert, J. R. and Al-Shetaiwi, A. S. (2002). Exploring the mismatch between skills and jobs for women in Saudi Arabia in technical and vocational areas: The views of Saudi Arabian private sector business managers, International Journal of Training and Development, 6, 112-124.
6 Chambers, J. M., Cleveland, W. S., Kleiner, B. and Tukey, P. A. (1983). Graphical Methods for Data Analysis, Wiley, New York.
7 Chatterjee, S. and Hadi, A. S. (1986). Influential observations, high leverage points, and outliers in linear regression, Statistical Science, 1, 379-393.
8 Cochran, W. G. (1977). Sampling Techniques, 3rd ed., Wiley, New York.
9 Cook, R. D. (1977). Detection of influential observations in linear regression, Technometrics, 19, 15-18.
10 Cook, R. D. and Weisberg, S. (1982). Residuals and Influence in Regression, Chapman and Hall, New York.
11 Das, K. R. and Imon, A. H. M. R. (2016). A brief review of tests for normality, American Journal of Theoretical and Applied Statistics, 5, 5-12.
12 Hadi, A. S., Imon, A. H. M. R. and Werner, M. (2009). Detection of outliers, Wiley Interdisciplinary Reviews: Computational Statistics, 1, 57-70.
13 Hampel, F. R., Ronchetti, E. M., Rousseeuw, P. J. and Stahel, W. A. (1986). Robust Statistics: The Approach Based on Influence Functions, Wiley, New York.
14 Hoaglin, D. C. and Welsch, R. E. (1978). The hat matrix in regression and ANOVA, The American Statistician, 32, 17-22.
15 Hocking, R. R. and Pendleton, O. J. (1983). The regression dilemma, Communications in Statistics - Theory and Methods, 12, 497-527.
16 Hooks, B. (2003). Feminism: A movement to end sexist oppression. In: C. R. McCann and S. Kim (eds.), Feminist Theory Reader: Local and Global Perspectives, Routledge, U.K., 52-53.
17 Hosmer, D. W., Lemeshow, S. and Sturdivant, R. X. (2013). Applied Logistic Regression, 3rd ed., Wiley, New York.
18 Imon, A. H. M. R. (2002). On deletion residuals, Calcutta Statistical Association Bulletin, 52, 65-79.
19 Imon, A. H. M. R. and Hadi, A. S. (2013). Identification of multiple high leverage points in logistic regression, Journal of Applied Statistics, 40, 2601-2616.
20 Jackson, R. M. (1998). Destined for Equality: The Inevitable Rise of Women's Status, Harvard University Press, Cambridge, MA.
21 Jarque, C. M. and Bera, A. K. (1980). Efficient tests for normality, homoscedasticity and serial independence of regression residuals, Economics Letters, 6, 255-259.
22 Nagelkerke, N. (1991). A note on a general definition of the coefficient of determination, Biometrika, 78, 691-692.
23 Newport, F., Saad, L. and Moore, D. (1997). Where America Stands, Wiley, New York.
24 Nurunnabi, A. A. M., Imon, A. H. M. R. and Nasser, M. (2010). Identification of multiple influential observations in logistic regression, Journal of Applied Statistics, 37, 1605-1624.
25 Velleman, P. F. and Welsch, R. E. (1981). Efficient computing of regression diagnostics, The American Statistician, 35, 234-242.
Appendix A
Questionnaire
1- What is your age?
2- Educational Status
(a) Illiterate (b) Up to 10th Grade
(c) Under Bachelor (d) Bachelor and above
3- Marital Status (a) Married (b) Unmarried (c) Divorced/Widow
4- Do you have children? (a) Yes (b) No
5- If ‘yes’ how many?
6- Current Occupation (a) No job (b) Teaching (c) Bank
(d) Hospital (e) Company (f) Other
7- Why do you not work? (a) Taking care of family (b) Bad economy (c) Health problem
(d) Still in school (e) Family can support you
8- Why do you work?
9- Your family is mainly supported by: (a) You (b)Your spouse
(c) Equally supported (d) Someone else
10- Do you feel that, in general, men and women are treated equally in the house? (a) Yes (b) No
11- In your family, are you treated equally like other male members in the family? (a) Yes (b) No
12- Do you feel that, in general, men and women are treated equally in the workplace? (a) Yes (b) No
13- In your current workplace, do you feel that men and women are treated equally? (a) Yes (b) No
14- Do you think women get less money compared to men in the same work position? (a) Yes (b) No
15- Do you prefer a male/female boss? (a) Male (b) Female
16- Do you feel employees at your work place have adequate knowledge regarding gender inequality? (a) Yes (b) No
17- What form of gender inequality do you see at work? (a) Gender discrimination (b) Lower chance of promotion
(c) Unequal workplace treatment (d) Harassment
18- Gender equality at work can be improved by: (a) Equalize the number of female and male staff (b) Better maternity benefits
(c) Paternity leave for husbands (d) Equalize salary (e) Daycare for children
Appendix B
Table B1. Primary Data Collected for the Study
X1  Age: years
X2  Education: 1 = Illiterate, 2 = Under Grade 10, 3 = Under Bachelor, 4 = Bachelor and above
X3  Marital Status: 0 = Unmarried, 1 = Married, 2 = Widow, 3 = Divorced
X4  Have Children: 0 = No, 1 = Yes
X5  No of Children: number
X6  Job: 0 = No, 1 = Yes
X7  Equality in Family: 0 = No, 1 = Yes
X8  Equality at Workplace: 0 = No, 1 = Yes
X9  Equality in Wage: 0 = No, 1 = Yes
X10 Preference of Boss: 0 = Male, 1 = Female
X11 Adequate Knowledge Regarding Gender Inequality: 0 = No, 1 = Yes
X12 Forms of Gender Inequality: 1 = Gender discrimination, 2 = Lower chance of promotion, 3 = Unequal workplace treatment, 4 = Harassment
X13 Ways of Improvement: 1 = Equalize the number of female and male staff, 2 = Better maternity benefits, 3 = Paternity leave for husbands, 4 = Equalize salary, 5 = Daycare for children
X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 25 3 1 0 0 0 0 0 0 1 0 1 3 20 2 1 0 0 0 1 1 1 1 1 2 3 18 2 1 0 0 0 0 0 * 1 0 2 1 27 3 * 1 2 0 1 0 * 0 * * * * 1 * 1 2 0 0 0 1 0 1 1 1
26 3 3 1 1 0 0 0 0 0 1 2 2 * 3 * 0 0 0 1 1 1 0 1 2 3
25 3 0 0 0 0 1 0 0 0 * 2 3 26 3 * 1 1 0 0 0 0 0 1 2 2 27 3 * 1 2 1 1 0 0 0 1 2 4 25 3 1 1 1 1 1 1 0 0 0 2 1 26 3 0 0 0 0 1 0 0 0 1 2 1 25 3 1 0 0 1 1 1 0 1 0 3 1 26 3 1 1 2 0 0 0 0 0 0 4 1 * 3 * 0 0 0 0 0 0 0 1 2 3
29 3 0 0 0 1 0 0 0 0 0 3 3 28 3 1 1 2 0 0 1 1 1 1 2 3 24 3 * 1 2 0 0 1 0 0 * 2 3 27 3 * 1 2 0 1 1 0 0 0 3 1 25 3 1 0 0 1 1 1 0 0 1 2 4 27 3 * 1 2 0 0 * * * * * * * 2 0 * 0 0 0 1 0 1 1 1 4
27 3 0 0 0 1 1 1 0 1 0 * 1
27 3 1 1 3 1 1 0 0 0 1 4 4 * 1 * 0 0 1 1 1 1 0 1 1 3
30 1 1 1 2 1 0 1 0 0 0 3 1 * 3 * 1 1 0 1 1 1 0 1 3 3 * 3 * 0 0 0 0 1 0 0 1 2 4
24 3 1 0 0 1 1 0 0 0 0 2 1 27 3 1 1 1 0 0 1 0 0 0 1 2 28 3 1 0 0 1 1 1 0 1 1 3 4 * 3 * 1 1 0 1 1 1 0 1 3 3
20 3 0 0 0 0 1 1 0 1 1 2 2 25 3 0 0 0 1 1 1 0 1 1 2 3 22 3 0 0 0 0 1 1 1 1 0 2 1 28 3 1 1 3 1 0 0 0 1 1 4 4 29 3 0 0 0 1 0 1 0 1 0 1 1 27 3 1 1 2 0 0 0 0 1 0 3 2 25 3 1 0 0 0 0 0 0 1 0 1 3 23 3 0 0 0 0 0 1 0 0 1 3 1 35 3 0 0 0 0 0 0 0 1 1 3 1 40 3 0 0 0 1 1 1 0 1 1 1 3 44 3 1 1 4 1 1 1 1 1 0 1 2 30 3 0 * 0 1 0 1 0 1 0 1 1 33 3 1 1 3 0 0 0 0 1 1 4 2 28 3 1 1 3 1 1 1 0 1 1 4 2 28 3 0 0 0 1 1 1 0 1 1 2 1 36 3 1 1 4 0 0 0 0 1 0 3 5 23 3 1 0 0 0 1 1 0 1 1 4 1 23 3 1 0 0 0 1 1 0 1 1 4 1 32 3 1 1 5 0 1 1 0 1 1 4 2 22 3 1 1 1 0 1 1 1 1 1 1 4 24 3 1 1 1 0 1 1 0 1 1 4 2 22 3 1 0 0 0 0 0 0 1 0 3 3 45 3 1 1 6 0 0 1 0 1 0 1 2 20 3 0 0 0 0 0 1 0 1 1 4 1 29 3 0 0 0 0 0 0 0 1 0 3 3 47 3 0 0 0 1 0 1 0 1 1 1 3 40 3 0 0 0 1 0 0 1 1 0 1 3 32 3 0 0 0 0 0 0 0 1 0 1 3 24 3 0 0 0 0 0 0 1 1 0 3 3 45 1 1 1 6 0 0 0 0 1 0 2 1 45 1 1 1 6 0 0 0 0 1 0 2 1 32 3 0 0 0 0 0 0 0 1 0 1 3 25 3 3 1 4 0 0 0 0 1 0 3 2 50 3 3 1 3 1 0 0 1 1 0 3 4 35 3 0 0 0 1 1 1 0 1 1 1 3 19 2 1 1 1 0 1 1 0 1 1 1 4 28 3 3 0 0 0 1 1 0 1 1 1 3 28 3 2 1 3 1 0 0 1 1 0 3 4 45 3 2 1 6 1 1 1 0 1 1 1 2 33 3 3 0 0 1 1 1 0 1 1 1 3
25 3 2 1 0 0 1 1 0 1 1 2 2 53 3 1 1 5 1 1 1 0 1 1 1 3 37 3 1 1 4 0 1 1 0 1 1 1 3 47 3 2 1 3 0 1 1 0 1 1 4 4 50 3 3 1 3 0 1 1 0 1 1 4 3 24 3 0 0 0 0 1 1 0 1 1 1 1 33 3 1 1 2 0 1 1 0 1 1 4 4 32 3 3 1 2 0 1 1 0 1 1 2 1 24 3 2 1 2 0 1 1 0 1 1 4 4 27 3 0 * 0 0 0 0 1 1 0 3 3 36 3 3 0 0 0 1 1 1 1 1 4 3 47 3 2 0 0 1 1 1 0 1 1 2 1 44 3 2 0 0 1 1 1 0 1 1 4 1 30 3 2 0 0 0 0 0 1 1 0 3 3 32 3 3 0 0 1 1 1 0 1 1 1 1 20 2 0 0 0 0 1 1 0 1 1 2 3 22 3 1 1 1 0 1 1 1 1 1 4 4 22 3 3 1 2 1 0 1 0 1 1 1 1 30 3 0 0 0 0 0 0 1 1 1 3 3 33 3 1 1 4 1 0 0 1 1 1 3 5 24 2 3 1 3 0 0 0 1 1 1 3 4 50 * 0 0 0 1 1 1 0 1 1 1 1 45 3 1 1 0 0 0 0 1 1 0 3 3 44 3 3 * 0 0 0 0 1 1 0 3 3 32 3 0 0 0 0 1 1 * 1 1 1 3 36 2 1 1 3 0 0 0 1 1 0 3 4 33 3 3 0 0 1 1 1 0 1 1 4 1 28 3 0 0 0 1 1 1 0 1 1 2 1 45 2 1 1 5 1 1 1 0 1 1 4 1 49 2 3 0 0 0 1 1 0 1 1 4 3 19 2 0 0 0 0 1 1 0 1 1 4 1 54 3 1 1 6 1 1 1 0 1 1 1 2 38 3 3 1 4 0 1 1 0 1 1 4 4 27 3 0 0 0 1 1 1 0 1 1 4 3 36 3 0 0 0 1 1 1 0 1 1 1 3 22 3 0 0 0 1 1 1 0 1 1 4 3 21 2 0 0 0 0 1 1 0 1 1 4 1 35 1 0 0 0 0 1 1 0 1 1 4 3 34 3 1 1 3 0 1 1 0 1 1 1 2 44 3 3 1 5 0 1 1 0 1 1 1 4 22 2 1 1 2 0 1 1 0 1 1 2 4 36 3 3 0 0 1 1 1 0 1 1 1 1 27 2 1 0 0 1 1 1 0 1 1 4 2 21 3 0 0 0 0 0 0 1 1 0 3 3 52 3 1 1 5 1 0 0 0 1 0 3 4 27 3 1 0 0 0 1 1 1 1 1 4 3 42 3 0 0 0 1 1 1 0 1 1 2 1 40 3 1 1 5 1 1 1 0 1 1 2 2 30 3 1 1 3 1 1 1 0 1 1 4 4
30 3 1 0 0 0 0 0 0 1 0 2 4 32 3 1 0 0 1 0 1 0 0 0 * * * 2 * 0 0 0 1 1 0 0 1 2 2 * 3 0 0 0 0 0 1 0 0 * * * * 3 * 0 0 1 0 0 1 0 1 4 1
32 3 1 1 2 1 0 0 0 1 0 1 2 33 3 1 1 5 0 1 1 1 1 1 4 1 33 3 1 1 5 0 1 1 1 1 1 4 1 43 3 0 0 0 0 0 0 0 1 0 3 3 19 2 0 0 0 0 0 0 1 1 0 3 3 50 3 1 1 7 1 1 1 0 1 1 4 2 20 2 1 1 1 0 1 1 0 1 1 4 4 25 3 0 0 0 0 1 1 0 1 1 2 3 32 3 0 0 0 1 1 1 0 1 1 1 1 45 2 1 1 4 0 0 0 1 1 0 4 4 35 3 1 0 0 1 1 1 0 1 1 2 1 27 3 0 0 0 1 1 0 1 1 0 3 1 55 3 1 1 4 1 1 1 0 1 1 4 2 55 3 1 1 4 1 1 1 0 1 1 4 2 22 2 1 0 0 0 1 1 0 1 1 4 3 36 3 1 1 2 1 1 1 0 1 1 2 2 41 3 0 0 0 1 1 1 0 1 1 1 4 34 3 0 * 0 1 0 0 1 1 0 3 1 49 2 0 * 0 1 0 0 1 1 0 3 3 25 2 1 * 0 1 1 1 0 1 1 1 1 37 3 1 1 4 1 1 1 0 1 1 2 1 23 3 1 0 0 0 1 1 0 1 1 1 2 20 3 0 * 0 0 1 1 0 1 1 4 1 32 3 0 * 0 0 0 0 1 1 0 3 3 45 1 1 1 2 0 0 0 1 1 0 3 4 26 3 0 * 0 0 1 1 0 1 1 4 1 54 3 1 1 4 1 1 1 0 1 1 1 2 27 3 1 1 2 1 1 1 0 1 1 1 4 32 3 0 * 0 0 1 1 0 1 1 4 3 45 3 0 * 0 1 1 1 0 1 1 2 3 26 3 0 * 0 0 1 1 0 1 1 1 3 36 3 1 1 3 0 0 0 0 1 0 3 3 51 3 1 1 4 0 1 1 0 1 1 4 4 33 3 1 1 3 0 0 0 0 1 0 3 4 26 3 0 * 0 0 0 0 0 1 0 3 3 25 3 0 * 0 0 0 0 0 1 0 3 3 39 2 0 * 0 1 1 1 0 1 1 1 3 31 3 0 * 0 1 0 1 0 1 1 4 1 28 3 1 0 0 0 1 1 0 1 1 4 3 28 3 1 0 0 0 1 1 0 1 1 4 3 23 3 0 * 0 0 0 0 0 1 0 3 3 21 3 0 * 0 0 0 0 0 1 0 3 1 19 2 0 * 0 0 0 1 0 1 1 4 1 27 3 1 1 1 0 1 0 0 1 0 3 4
33 3 0 * 0 1 0 1 0 1 1 2 1 30 3 0 * 0 0 0 0 0 1 0 3 3 29 3 1 1 1 1 1 1 0 1 1 4 1 35 3 0 * 0 1 1 0 1 1 0 3 1 26 3 1 1 2 0 1 1 0 1 1 4 4 24 3 0 * 0 0 0 0 0 1 1 3 3 34 3 1 1 3 1 0 1 0 1 1 1 4 25 3 0 * 0 1 1 1 0 1 1 1 3 25 3 1 1 2 1 1 1 0 1 1 1 2 29 3 1 0 0 0 1 1 0 1 1 4 3 40 3 1 1 2 1 1 1 0 1 1 1 1 40 3 1 1 2 1 1 1 0 1 1 1 1 40 3 1 1 2 1 1 1 0 1 1 1 1 30 3 1 1 3 1 1 1 0 1 1 1 2 31 3 0 * 0 1 0 1 0 1 1 1 3 42 3 1 1 4 0 1 1 0 1 1 4 2 22 3 0 * 0 0 1 1 1 1 1 4 1 37 3 0 * 0 1 1 1 0 1 1 1 3 48 2 1 1 5 0 0 0 1 1 0 3 4 24 3 1 1 1 1 1 1 0 1 1 1 2 33 2 0 * 0 0 0 0 1 1 0 3 3 29 3 0 * 0 0 0 0 1 1 0 3 3 26 3 0 * 0 0 1 0 0 1 0 3 3 39 3 0 * 0 0 0 0 * 1 0 3 3 35 3 0 * 0 1 0 1 0 1 1 4 1 33 2 1 1 4 1 0 1 0 1 1 1 1 40 3 0 * 0 1 1 1 0 1 1 4 1 21 2 0 * 0 0 1 1 1 1 1 4 1 48 3 1 1 6 0 1 1 0 1 1 1 4 25 2 1 1 2 0 0 0 0 1 0 3 4 43 3 0 * 0 1 0 1 0 1 1 1 1 22 1 1 1 2 0 1 1 * 1 1 4 4 32 3 0 * 0 1 0 1 0 1 1 1 3 27 3 0 * 0 0 0 0 1 1 0 3 3 26 3 0 * 0 0 0 0 1 1 0 3 3 24 2 0 * 0 0 0 1 0 1 1 4 3 22 2 0 * 0 0 0 1 0 1 1 2 1 25 2 0 * 0 0 1 1 0 1 1 4 1 36 3 1 0 0 1 1 1 0 1 1 2 1 26 3 0 * 0 1 1 1 0 1 1 4 1 28 3 1 0 0 1 1 1 0 1 1 4 4 30 3 1 1 3 1 1 1 0 1 1 1 2 30 3 1 1 1 1 1 1 0 1 1 1 2 25 2 0 * 0 0 0 0 1 1 0 3 1 28 3 0 * 0 0 0 1 1 1 1 4 1 38 3 0 * 0 0 0 0 1 1 0 3 3 34 2 1 1 4 0 0 0 1 1 0 3 4 20 2 0 * 0 0 1 1 0 1 1 4 1 47 3 0 * 0 0 1 0 1 1 0 3 3
46 3 0 * 0 0 0 0 1 1 0 3 3 30 2 0 * 0 1 1 1 0 1 1 1 1 21 2 0 * 0 0 1 1 0 1 1 1 1 32 3 1 1 3 0 1 1 0 1 1 1 4 32 3 1 0 0 1 1 1 0 1 1 2 1 40 3 0 0 0 1 1 1 0 1 1 1 3 27 3 1 1 2 1 1 1 0 1 1 1 4 30 3 1 1 2 0 1 1 0 1 1 4 4 22 3 0 * 0 0 0 1 0 1 1 4 1 28 3 0 * 0 0 1 0 1 1 0 3 3 36 3 1 1 2 0 0 0 1 1 0 3 4 44 3 1 0 0 1 1 1 0 1 1 4 3 45 2 0 * 0 0 1 0 1 1 0 3 3 24 3 0 * 0 0 1 1 0 1 1 2 1 31 2 0 * 0 0 1 1 1 1 1 4 * 39 3 0 * 0 1 1 1 0 1 1 1 3 23 2 1 0 0 0 1 1 0 1 1 4 1 49 3 1 1 5 0 0 0 1 1 0 3 2 35 3 0 * 0 0 1 0 1 1 0 3 3 35 3 0 * 0 0 1 0 1 1 0 3 3 33 3 0 * 0 1 1 1 0 1 1 2 1 26 2 1 1 2 0 0 0 1 1 0 3 4 54 3 0 * 0 1 1 1 0 1 1 1 1 27 3 1 1 3 1 1 0 0 1 0 3 1 21 2 0 * 0 0 0 1 0 1 1 4 1 26 2 1 0 0 0 0 0 1 1 0 3 3 20 3 0 * 0 0 1 1 0 1 1 4 1 35 3 1 1 2 0 0 0 1 1 0 3 4 47 3 1 1 4 0 0 0 1 1 0 3 2 26 3 1 1 1 0 0 0 1 1 0 3 4 40 3 1 1 4 0 0 0 1 1 0 3 4 32 2 0 * 0 1 1 1 0 1 1 1 3 45 3 0 * 0 1 1 1 0 1 1 1 3 36 3 1 1 2 0 0 0 0 1 0 3 4 23 3 0 * 0 0 0 1 0 1 1 4 1 25 2 1 1 3 0 0 1 0 1 1 4 4 30 3 0 * 0 1 1 1 0 1 1 4 1 45 3 1 1 5 1 1 1 0 1 1 1 2 32 3 1 1 3 1 1 1 0 1 0 3 3 25 3 0 * 0 0 1 1 0 1 1 4 1 49 3 0 * 0 1 1 1 0 1 1 * 3 45 3 1 1 3 1 0 1 0 1 1 4 1 35 3 1 * 2 1 1 1 0 1 1 4 3 47 3 0 * 0 0 1 0 1 1 0 3 3 36 * 1 1 2 1 1 1 0 1 1 2 4 44 * 0 * 0 0 1 0 1 1 0 3 3 39 3 0 * 0 1 1 1 0 1 1 4 1 41 3 1 1 3 0 0 0 * 1 0 3 4 23 2 0 * 0 0 0 1 0 1 0 4 1
37 3 1 1 3 1 1 1 0 1 1 1 4 32 3 0 * 0 0 0 0 0 1 1 3 1 42 3 0 * 0 1 0 0 1 1 0 3 1 29 3 0 * 0 0 0 0 1 1 0 3 3 23 2 0 * 0 0 1 1 0 0 0 3 1 48 3 0 * 0 1 1 1 0 1 1 1 1 33 2 1 1 4 1 0 0 1 1 0 3 4 43 3 0 * 0 0 1 0 * 1 0 3 3 28 * 0 * 0 0 1 1 0 1 1 4 1 25 3 0 * 0 1 1 1 0 1 1 1 1 32 3 1 1 3 0 0 1 0 1 0 3 4 45 3 0 0 0 0 0 0 1 1 0 3 3 21 2 0 * 0 0 1 1 0 1 1 4 1 33 3 1 1 2 1 1 1 0 1 1 1 2 32 3 0 0 0 0 0 0 0 1 0 3 3 28 3 0 * 0 1 1 1 0 1 1 1 3 43 2 1 1 4 0 1 1 0 1 * 4 4 31 3 0 * 0 1 1 1 0 1 1 4 4 24 3 0 * 0 0 1 1 0 1 1 4 1 22 2 1 1 2 0 0 0 1 1 0 3 4 35 3 0 1 2 0 0 0 1 1 0 3 4
Table B2. Logistic Regression Diagnostics for Four Models
              Model 4.3.1                                Model 4.3.2
Index  St. Pearson Res      Lev     DFFITS     Index  St. Pearson Res      Lev     DFFITS
1 -0.90459 0.069517 -0.247255 1 -0.54973 0.070172 -0.15102 2 0.49824 0.037106 0.097808 2 0.21783 0.021725 0.03246 3 -0.50300 0.043067 -0.106707 3 -0.56388 0.080477 -0.16682 4 * * * 4 * * * 5 * * * 5 * * * 6 -2.22013 0.017906 -0.299780 6 -3.15245 0.018580 -0.43375 7 * * * 7 * * * 8 * * * 8 * * * 9 * * * 9 * * *
10 * * * 10 * * * 11 1.38724 0.047724 0.310556 11 1.14763 0.071139 0.31760 12 1.05820 0.058232 0.263133 12 -1.08499 0.063036 -0.28142 13 1.21153 0.057810 0.300102 13 1.10833 0.086448 0.34094 14 -0.47636 0.020198 -0.068394 14 -0.34305 0.021866 -0.05129 15 * * * 15 * * * 16 -0.87225 0.067627 -0.234912 16 0.52003 0.106112 0.17917 17 -1.97679 0.016864 -0.258900 17 0.34633 0.017008 0.04556 18 * * * 18 * * * 19 * * * 19 * * * 20 0.29231 0.014880 0.035925 20 0.12055 0.006396 0.00967 21 * * * 21 * * * 22 * * * 22 * * * 23 2.52767 0.071807 0.703050 23 0.48062 0.116237 0.17430 24 0.43620 0.021213 0.064216 24 -7.14614 0.007590 -0.62494 25 * * * 25 * * * 26 -0.61280 0.056409 -0.149831 26 0.99329 0.104876 0.34000 27 * * * 27 * * * 28 * * * 28 * * * 29 1.22789 0.059330 0.308372 29 -1.00217 0.088728 -0.31271 30 0.91935 0.046485 0.202988 30 1.81892 0.046824 0.40314 31 0.40059 0.027301 0.067111 31 0.17808 0.013436 0.02078 32 * * * 32 * * * 33 0.18766 0.107729 0.065207 33 0.75095 0.099928 0.25022 34 0.77696 0.056750 0.190575 34 0.26494 0.022672 0.04035 35 2.55157 0.018799 0.353180 35 3.26205 0.022247 0.49206 36 -1.38538 0.039178 -0.279751 36 -4.90279 0.014795 -0.60080 37 * * * 37 * * * 38 -0.48221 0.019434 -0.067886 38 -0.33827 0.020548 -0.04899 39 * * * 39 * * * 40 -2.34945 0.044328 -0.506001 40 0.53647 0.044568 0.11587 41 -1.90970 0.021405 -0.282437 41 -2.33144 0.028980 -0.40277 42 0.63949 0.045899 0.140261 42 0.32545 0.032950 0.06008 43 1.65513 0.035608 0.318037 43 1.67051 0.056355 0.40824 44 -0.61331 0.033010 -0.113317 44 1.46322 0.051177 0.33983
45 -1.83081 0.018904 -0.254132 45 -2.63560 0.020082 -0.37730 46 * * * 46 * * * 47 0.74450 0.047519 0.166292 47 0.27569 0.022072 0.04142 48 -0.40772 0.020062 -0.058338 48 -0.27501 0.019636 -0.03892 49 0.73514 0.066650 0.196449 49 0.52924 0.070237 0.14546 50 * * * 50 * * * 51 0.76003 0.052005 0.178012 51 0.42355 0.052461 0.09966 52 0.69060 0.041828 0.144291 52 0.43783 0.038734 0.08789 53 0.47062 0.019078 0.065633 53 0.31472 0.018712 0.04346 54 -0.60510 0.035959 -0.116866 54 -0.39788 0.038390 -0.07950 55 -0.34656 0.030952 -0.061938 55 4.57776 0.027221 0.76577 56 * * * 56 * * * 57 -0.76664 0.046834 -0.169938 57 -0.50109 0.047538 -0.11195 58 -3.06034 0.020687 -0.444798 58 0.20480 0.018488 0.02811 59 -0.69655 0.035483 -0.133600 59 -0.63106 0.053759 -0.15042 60 -0.92775 0.064033 -0.242662 60 -0.55991 0.061242 -0.14301 61 -0.40932 0.017388 -0.054450 61 -0.30484 0.019556 -0.04305 62 -0.43457 0.064940 -0.114525 62 -0.43421 0.101861 -0.14623 63 * * * 63 * * * 64 * * * 64 * * * 65 -0.35608 0.026222 -0.058431 65 -0.32151 0.037584 -0.06354 66 -0.78124 0.044056 -0.167714 66 -0.61386 0.064150 -0.16072 67 -1.59123 0.027238 -0.266268 67 0.24651 0.016785 0.03221 68 0.58062 0.036434 0.112902 68 0.22422 0.019946 0.03199 69 0.68911 0.063536 0.179495 69 0.56873 0.079775 0.16745 70 -0.59155 0.042215 -0.124190 70 -0.82844 0.074990 -0.23588 71 0.53766 0.047558 0.120144 71 0.20575 0.023158 0.03168 72 0.37589 0.026048 0.061472 72 0.19062 0.015838 0.02418 73 0.40420 0.021445 0.059837 73 0.30634 0.024215 0.04826 74 0.41856 0.030553 0.074307 74 0.21914 0.021787 0.03270 75 0.61206 0.027482 0.102889 75 0.42879 0.030676 0.07628 76 0.46939 0.030117 0.082715 76 0.47349 0.047390 0.10561 77 0.45319 0.035273 0.086656 77 0.49666 0.061935 0.12762 78 0.09374 0.084124 0.028409 78 -0.80251 0.086747 -0.24733 79 0.48294 0.016084 0.061746 79 0.37048 0.018159 0.05038 80 0.48902 0.015928 0.062216 80 0.36545 0.017547 0.04884 81 0.54218 0.020640 0.078710 81 0.32864 0.018993 0.04573 82 -0.60568 0.031979 -0.110088 82 
-0.41720 0.033724 -0.07794 83 0.35245 0.024538 0.055899 83 0.35809 0.041831 0.07482 84 0.22180 0.017104 0.029258 84 0.16274 0.018555 0.02238 85 0.32817 0.031553 0.059235 85 0.22242 0.030108 0.03919 86 -0.96534 0.073223 -0.271342 86 -0.51343 0.067360 -0.13798 87 0.38065 0.026054 0.062259 87 0.18803 0.015202 0.02336 88 1.07975 0.075369 0.308273 88 0.39373 0.051493 0.09174 89 * * * 89 * * * 90 -2.53249 0.021896 -0.378909 90 0.12627 0.006787 0.01044 91 -1.79051 0.018563 -0.246248 91 -2.48370 0.022075 -0.37316 92 -2.19687 0.023367 -0.339815 92 -6.32353 0.009626 -0.62343 93 -1.44044 0.041601 -0.300106 93 -3.91388 0.024534 -0.62071
94 * * * 94 * * *
95 -0.82650 0.077495 -0.239550 95 -0.29412 0.048300 -0.06626
96 -0.81415 0.072946 -0.228378 96 -0.29785 0.046828 -0.06602
97 -0.42800 0.057670 -0.105880 97 -0.93200 0.072043 -0.25968
98 -0.41005 0.026041 -0.067049 98 -0.38053 0.036597 -0.07417
99 * * * 99 * * *
100 * * * 100 * * *
101 0.53885 0.056741 0.132159 101 0.14940 0.015360 0.01866
102 0.35005 0.060613 0.088917 102 0.33388 0.096171 0.10891
103 -0.44305 0.078239 -0.129080 103 0.38859 0.051911 0.09093
104 0.47972 0.046342 0.105748 104 0.23289 0.030288 0.04116
105 0.60440 0.027477 0.101592 105 0.43468 0.031246 0.07807
106 0.42841 0.016705 0.055840 106 0.15589 0.007390 0.01345
107 0.38192 0.013770 0.045129 107 0.17593 0.008765 0.01654
108 0.45787 0.023544 0.071098 108 0.14586 0.008026 0.01312
109 0.75206 0.182445 0.355272 109 0.65808 0.128162 0.25231
110 0.62090 0.041947 0.129920 110 0.33795 0.038815 0.06791
111 0.54972 0.018758 0.076006 111 0.39245 0.020298 0.05649
112 0.65079 0.044438 0.140343 112 0.49809 0.055801 0.12109
113 0.92705 0.071207 0.256687 113 0.34844 0.041357 0.07237
114 0.36206 0.026642 0.059900 114 0.19866 0.018300 0.02712
115 0.32939 0.027642 0.055538 115 0.09424 0.006449 0.00759
116 -0.56371 0.039261 -0.113954 116 -0.45535 0.047787 -0.10201
117 -0.60635 0.049562 -0.138464 117 -0.55181 0.076188 -0.15847
118 0.39408 0.021157 0.057938 118 0.31487 0.025630 0.05107
119 0.35461 0.016539 0.045986 119 0.19099 0.012601 0.02158
120 0.49308 0.030646 0.087671 120 0.18350 0.014316 0.02211
121 0.73867 0.050603 0.170536 121 0.25599 0.021242 0.03771
122 * * * 122 * * *
123 -0.95464 0.053509 -0.226983 123 1.21453 0.082721 0.36472
124 * * * 124 * * *
125 * * * 125 * * *
126 * * * 126 * * *
127 -0.71311 0.034022 -0.133830 127 -0.80885 0.049413 -0.18441
128 1.08948 0.100266 0.363695 128 0.62394 0.103067 0.21151
129 * * * 129 * * *
130 1.03916 0.060561 0.263843 130 -0.33813 0.040726 -0.06967
131 -0.33543 0.021376 -0.049575 131 -0.43519 0.050973 -0.10086
132 0.58908 0.073080 0.165407 132 0.23142 0.037916 0.04594
133 0.57299 0.035221 0.109480 133 0.22723 0.019901 0.03238
134 0.86673 0.040231 0.177451 134 0.55041 0.042518 0.11598
135 -1.51169 0.027632 -0.254833 135 0.23664 0.015288 0.02949
136 -0.40070 0.034382 -0.075611 136 -0.32375 0.040056 -0.06613
137 0.25748 0.013175 0.029751 137 0.13794 0.008690 0.01292
138 * * * 138 * * *
139 0.50629 0.049096 0.115041 139 0.30811 0.042282 0.06474
140 * * * 140 * * *
141 0.48560 0.036121 0.094004 141 0.22383 0.022542 0.03399
142 0.33644 0.010293 0.034309 142 0.15220 0.006232 0.01205
143 0.35899 0.015884 0.045607 143 0.18837 0.011745 0.02054
144 -0.64453 0.031574 -0.116379 144 -0.68158 0.047778 -0.15267
145 -0.70090 0.093382 -0.224945 145 -0.78474 0.169654 -0.35471
146 0.33795 0.028515 0.057899 146 0.09174 0.006133 0.00721
147 0.44222 0.019124 0.061748 147 0.16842 0.009388 0.01640
148 * * * 148 * * *
149 * * * 149 * * *
150 * * * 150 * * *
151 -0.53595 0.052201 -0.125779 151 -0.35511 0.051821 -0.08302
152 * * * 152 * * *
153 0.35777 0.023562 0.055575 153 0.21247 0.019638 0.03007
154 0.53815 0.029434 0.093717 154 0.19142 0.011997 0.02109
155 * * * 155 * * *
156 0.48818 0.037744 0.096686 156 0.28389 0.031572 0.05126
157 * * * 157 * * *
158 -0.46870 0.017365 -0.062307 158 -0.28659 0.015894 -0.03642
159 0.51687 0.041671 0.107780 159 0.52688 0.066720 0.14087
160 -0.45132 0.017386 -0.060034 160 -0.29850 0.017099 -0.03937
161 0.93014 0.049055 0.211257 161 -0.52302 0.052782 -0.12346
162 -0.41439 0.016820 -0.054201 162 -0.30063 0.018489 -0.04126
163 0.42713 0.036309 0.082908 163 0.13964 0.012250 0.01555
164 -2.68281 0.042360 -0.564244 164 0.28703 0.022510 0.04356
165 * * * 165 * * *
166 * * * 166 * * *
167 -0.40433 0.018050 -0.054819 167 -0.30914 0.020805 -0.04506
168 * * * 168 * * *
169 * * * 169 * * *
170 * * * 170 * * *
171 -1.53778 0.027271 -0.257484 171 0.23987 0.015673 0.03027
172 -0.44107 0.015608 -0.055539 172 -0.28079 0.015544 -0.03528
173 0.31949 0.012037 0.035265 173 0.13273 0.005819 0.01015
174 1.58214 0.031693 0.286233 174 -0.67258 0.047903 -0.15086
175 0.52808 0.018368 0.072236 175 0.33733 0.017727 0.04532
176 * * * 176 * * *
177 -2.54792 0.013325 -0.296092 177 0.15477 0.006787 0.01279
178 * * * 178 * * *
179 0.38779 0.017110 0.051165 179 0.13140 0.006259 0.01043
180 0.38431 0.021302 0.056698 180 0.32374 0.027737 0.05468
181 0.56044 0.032621 0.102915 181 0.28037 0.022287 0.04233
182 * * * 182 * * *
183 * * * 183 * * *
184 * * * 184 * * *
185 * * * 185 * * *
186 0.57521 0.029242 0.099833 186 0.45972 0.036194 0.08909
187 -0.71000 0.047042 -0.157750 187 0.52976 0.046079 0.11643
188 0.37718 0.014020 0.044977 188 0.17834 0.009205 0.01719
189 -0.36194 0.036744 -0.070691 189 -0.29845 0.043619 -0.06374
190 0.34085 0.015428 0.042667 190 0.12416 0.005867 0.00954
191 -0.40080 0.024594 -0.063642 191 -0.35828 0.036837 -0.07007
192 * * * 192 * * *
193 * * * 193 * * *
194 -0.49575 0.022421 -0.075077 194 -0.24923 0.017564 -0.03332
195 * * * 195 * * *
196 -1.93568 0.045658 -0.423392 196 0.12152 0.009299 0.01177
197 * * * 197 * * *
198 * * * 198 * * *
199 0.72180 0.069343 0.197026 199 0.55953 0.092336 0.17846
200 -0.41081 0.025426 -0.066355 200 -0.46253 0.046187 -0.10178
201 -2.90485 0.017258 -0.384948 201 0.19365 0.013555 0.02270
202 * * * 202 * * *
203 * * * 203 * * *
204 * * * 204 * * *
205 * * * 205 * * *
206 -1.45771 0.034297 -0.274711 206 0.28982 0.025940 0.04729
207 -1.42234 0.035529 -0.272994 207 0.28213 0.025621 0.04575
208 0.70137 0.034014 0.131611 208 0.29377 0.026285 0.04827
209 * * * 209 * * *
210 0.43407 0.017728 0.058315 210 0.15383 0.007457 0.01333
211 * * * 211 * * *
212 * * * 212 * * *
213 0.31543 0.011669 0.034274 213 0.13452 0.005892 0.01036
214 -0.36158 0.020343 -0.052105 214 -0.39923 0.039380 -0.08083
215 -1.74599 0.018607 -0.240411 215 0.40066 0.021109 0.05884
216 -0.48918 0.020969 -0.071591 216 -0.25251 0.017006 -0.03321
217 -0.34702 0.024196 -0.054644 217 -0.37539 0.041992 -0.07859
218 * * * 218 * * *
219 2.72491 0.083550 0.822754 219 -0.32169 0.047637 -0.07195
220 -0.54597 0.038536 -0.109304 220 -0.22759 0.022909 -0.03485
221 0.47785 0.034031 0.089692 221 0.12359 0.008654 0.01155
222 * * * 222 * * *
223 0.56388 0.019231 0.078960 223 0.38205 0.020067 0.05467
224 * * * 224 * * *
225 * * * 225 * * *
226 * * * 226 * * *
227 0.50154 0.016070 0.064096 227 0.35570 0.016919 0.04666
228 * * * 228 * * *
229 2.36211 0.015741 0.298716 229 -0.28850 0.016282 -0.03712
230 -0.77117 0.037996 -0.153260 230 -0.42702 0.033559 -0.07957
231 * * * 231 * * *
232 2.22793 0.049759 0.509824 232 -0.30718 0.049204 -0.06988
233 * * * 233 * * *
234 0.65118 0.036609 0.126939 234 0.31915 0.031542 0.05760
235 0.52430 0.029581 0.091538 235 0.26055 0.020619 0.03780
236 0.47945 0.035856 0.092460 236 0.22690 0.023086 0.03488
237 -0.41967 0.031464 -0.075642 237 -0.22160 0.021397 -0.03277
238 3.08963 0.035459 0.592390 238 -0.37448 0.031527 -0.06757
239 * * * 239 * * *
240 * * * 240 * * *
241 -0.41599 0.025308 -0.067032 241 -0.45596 0.044411 -0.09830
242 0.30667 0.028139 0.052182 242 0.22650 0.032484 0.04150
243 1.78920 0.044168 0.384611 243 -0.84203 0.080365 -0.24892
244 * * * 244 * * *
245 -0.55700 0.045342 -0.121390 245 -0.50355 0.068864 -0.13694
246 * * * 246 * * *
247 -0.53307 0.018387 -0.072957 247 -0.30336 0.016602 -0.03942
248 -0.47013 0.027521 -0.079087 248 -0.23733 0.018863 -0.03291
249 -0.54933 0.023655 -0.085505 249 -0.35861 0.024261 -0.05655
250 -0.42895 0.020745 -0.062433 250 -0.26048 0.017953 -0.03522
251 0.46588 0.033686 0.086984 251 0.12697 0.009159 0.01221
252 * * * 252 * * *
253 * * * 253 * * *
254 * * * 254 * * *
255 -1.45758 0.040130 -0.298029 255 0.26541 0.024268 0.04186
256 0.41207 0.014554 0.050078 256 0.16227 0.007411 0.01402
257 0.46234 0.027972 0.078429 257 0.19630 0.015549 0.02467
258 1.67262 0.035761 0.322117 258 1.36427 0.058213 0.33918
259 * * * 259 * * *
260 0.32569 0.022669 0.049602 260 0.21069 0.021715 0.03139
261 -2.92697 0.013521 -0.342668 261 0.17954 0.009743 0.01781
262 0.34071 0.010354 0.034851 262 0.15017 0.006058 0.01172
263 * * * 263 * * *
264 * * * 264 * * *
265 * * * 265 * * *
266 * * * 266 * * *
267 -0.49995 0.020395 -0.072137 267 -0.26813 0.016344 -0.03456
268 1.81682 0.040724 0.374337 268 3.67735 0.084307 1.11582
269 0.38279 0.012112 0.042384 269 0.16111 0.006951 0.01348
270 * * * 270 * * *
271 -0.71548 0.038596 -0.143357 271 -0.61589 0.058295 -0.15324
272 * * * 272 * * *
273 * * * 273 * * *
274 0.32965 0.021660 0.049049 274 0.20772 0.020032 0.02970
275 -0.47916 0.052075 -0.112307 275 -1.00535 0.129950 -0.38854
276 * * * 276 * * *
277 * * * 277 * * *
278 * * * 278 * * *
279 -0.44573 0.017658 -0.059760 279 3.36436 0.017800 0.45291
280 -0.53830 0.035545 -0.103341 280 -0.23055 0.022027 -0.03460
281 * * * 281 * * *
282 0.34948 0.010765 0.036457 282 0.14619 0.005839 0.01120
283 * * * 283 * * *
284 * * * 284 * * *
285 * * * 285 * * *
286 * * * 286 * * *
287 * * * 287 * * *
288 -0.39577 0.026391 -0.065159 288 -0.48328 0.053414 -0.11480
289 -0.35500 0.018456 -0.048680 289 -0.24126 0.018256 -0.03290
Model 4.3.3                                Model 4.3.4
Index St. Pearson Res   Lev      DFFITS    Index St. Pearson Res   Lev      DFFITS
1 -1.40983 0.093325 -0.145115 1 0.77903 0.150142 0.32744
2 2.54411 0.034446 0.090760 2 0.27019 0.052253 0.06344
3 * * * 3 0.78132 0.197051 0.38706
4 * * * 4 * * *
5 * * * 5 * * *
6 -0.31665 0.013934 -0.004474 6 -5.31415 0.016635 -0.69118
7 * * * 7 * * *
8 * * * 8 * * *
9 * * * 9 * * *
10 * * * 10 * * *
11 -0.46647 0.038636 -0.018747 11 -1.10396 0.108483 -0.38510
12 -0.66021 0.046924 -0.032505 12 -4.84528 0.023618 -0.75358
13 -0.49122 0.046791 -0.024113 13 1.14147 0.138505 0.45769
14 -0.89942 0.030330 -0.028132 14 -2.70311 0.047631 -0.60451
15 * * * 15 * * *
16 -0.88266 0.077197 -0.073838 16 -1.54487 0.121430 -0.57434
17 3.18891 0.012691 0.040989 17 0.14327 0.011837 0.01568
18 * * * 18 * * *
19 * * * 19 * * *
20 -0.16299 0.007502 -0.001232 20 -2.16228 0.072609 -0.60503
21 * * * 21 * * *
22 * * * 22 * * *
23 0.87370 0.079123 0.075070 23 0.75825 0.159966 0.33089
24 -0.14870 0.006327 -0.000947 24 -3.46570 0.046782 -0.76778
25 * * * 25 * * *
26 -0.72870 0.074530 -0.058684 26 -2.74473 0.119739 -1.01230
27 * * * 27 * * *
28 * * * 28 * * *
29 -0.47878 0.046603 -0.023403 29 -0.92942 0.145189 -0.38304
30 -1.39270 0.064007 -0.095239 30 -1.65320 0.079920 -0.48724
31 -0.24999 0.016297 -0.004142 31 0.53848 0.094192 0.17364
32 * * * 32 * * *
33 -0.65947 0.062844 -0.044223 33 0.43704 0.111694 0.15497
34 -0.32100 0.022160 -0.007274 34 0.49113 0.101731 0.16528
35 1.02452 0.032361 0.034263 35 0.38699 0.052977 0.09153
36 -0.21642 0.012648 -0.002772 36 0.39586 0.076052 0.11357
37 * * * 37 * * *
38 -0.92194 0.028725 -0.027266 38 0.35237 0.040728 0.07261
39 * * * 39 * * *
40 -0.49507 0.030709 -0.015685 40 -4.50267 0.027641 -0.75916
41 -0.47403 0.023659 -0.011487 41 0.05046 0.002961 0.00275
42 -0.47498 0.038420 -0.018978 42 0.11489 0.017471 0.01532
43 1.58767 0.045695 0.076022 43 0.12125 0.024959 0.01940
44 -0.62722 0.038317 -0.024990 44 0.38126 0.054005 0.09109
45 -0.34462 0.015088 -0.005279 45 0.08094 0.006807 0.00670
46 * * * 46 * * *
47 -0.34668 0.022696 -0.008051 47 0.36163 0.056571 0.08855
48 -1.06030 0.037676 -0.041511 48 0.12283 0.020066 0.01758
49 -0.54344 0.052734 -0.030253 49 0.51165 0.116143 0.18547
50 * * * 50 * * *
51 -0.30721 0.025950 -0.008185 51 0.07303 0.010447 0.00750
52 5.05019 0.026876 0.139477 52 0.40554 0.072142 0.11308
53 -0.30081 0.013558 -0.004134 53 0.23170 0.023606 0.03603
54 -0.90156 0.049052 -0.046504 54 0.71314 0.109492 0.25006
55 -1.24246 0.081598 -0.110390 55 0.04298 0.006665 0.00352
56 * * * 56 * * *
57 0.29866 0.070696 0.022720 57 0.34697 0.050222 0.07979
58 -0.32542 0.023341 -0.007777 58 0.03414 0.003042 0.00189
59 1.28896 0.045840 0.061925 59 0.14692 0.025156 0.02360
60 -1.60915 0.093976 -0.166907 60 0.30280 0.053191 0.07177
61 0.97157 0.028603 0.028608 61 0.31766 0.034442 0.06000
62 -2.59647 0.158658 -0.489635 62 0.04348 0.008328 0.00398
63 * * * 63 * * *
64 * * * 64 * * *
65 -0.80951 0.059210 -0.050947 65 0.36246 0.112750 0.12921
66 1.30424 0.053628 0.073907 66 0.07586 0.014958 0.00935
67 -0.33784 0.018705 -0.006440 67 0.14998 0.017274 0.01988
68 -0.37699 0.027047 -0.010480 68 0.26836 0.049619 0.06132
69 -0.62091 0.063006 -0.041752 69 0.30972 0.056423 0.07574
70 2.27098 0.038753 0.091556 70 0.63231 0.118734 0.23209
71 -0.20544 0.017077 -0.003569 71 0.04062 0.005138 0.00292
72 -0.28462 0.020108 -0.005841 72 0.33083 0.061239 0.08450
73 -0.32442 0.018619 -0.006155 73 0.23344 0.027699 0.03940
74 -0.26483 0.020948 -0.005666 74 0.02113 0.001817 0.00090
75 -0.36518 0.021264 -0.007934 75 0.05029 0.004429 0.00335
76 -0.49839 0.036617 -0.018943 76 0.02169 0.001467 0.00083
77 -0.54121 0.047630 -0.027067 77 0.01637 0.001032 0.00053
78 -0.72994 0.061458 -0.047798 78 0.28957 0.045004 0.06286
79 -0.36125 0.014499 -0.005315 79 0.08938 0.006699 0.00734
80 -0.35203 0.013921 -0.004970 80 0.09821 0.007371 0.00846
81 -0.28674 0.012672 -0.003680 81 0.20987 0.024251 0.03309
82 1.28659 0.049651 0.067218 82 0.34030 0.041820 0.07109
83 2.39058 0.034644 0.085791 83 0.08236 0.009184 0.00793
84 -0.28899 0.027783 -0.008258 84 0.06122 0.010108 0.00619
85 -0.38156 0.043170 -0.017215 85 0.11574 0.027120 0.01932
86 -0.11261 0.093905 -0.011671 86 0.47376 0.101089 0.15887
87 -0.27730 0.019132 -0.005409 87 0.36421 0.065500 0.09642
88 -0.65904 0.061441 -0.043143 88 0.21250 0.040352 0.04357
89 * * * 89 * * *
90 -0.13718 0.005866 -0.000809 90 0.55252 0.108007 0.19226
91 2.44768 0.017561 0.043752 91 0.08076 0.004697 0.00555
92 6.09221 0.007819 0.048009 92 0.15411 0.022258 0.02325
93 2.63837 0.027302 0.074054 93 0.13550 0.019585 0.01915
94 * * * 94 * * *
95 0.65644 0.073633 0.052177 95 0.07836 0.016517 0.01015
96 0.67255 0.071211 0.051566 96 0.08615 0.018267 0.01175
97 * * * 97 0.11634 0.011593 0.01260
98 0.66186 0.040640 0.028037 98 0.09671 0.015170 0.01200
99 * * * 99 * * *
100 * * * 100 * * *
101 -0.30697 0.031701 -0.010050 101 0.03203 0.003622 0.00193
102 -0.91348 0.155366 -0.168030 102 0.01731 0.001572 0.00069
103 -0.64238 0.061374 -0.042004 103 0.23455 0.049695 0.05364
104 -0.25978 0.026058 -0.006950 104 0.01742 0.001465 0.00067
105 -0.37479 0.022038 -0.008446 105 0.04577 0.003897 0.00286
106 -0.19360 0.007480 -0.001459 106 0.22530 0.022510 0.03419
107 -0.24398 0.009855 -0.002428 107 0.09609 0.007986 0.00862
108 -0.17037 0.007384 -0.001267 108 0.36901 0.068508 0.10007
109 0.14903 0.154294 0.027189 109 0.31287 0.082714 0.09395
110 -0.68012 0.051262 -0.036748 110 0.03606 0.002478 0.00180
111 -0.35362 0.015517 -0.005574 111 0.07366 0.006044 0.00574
112 -0.41985 0.036808 -0.016045 112 0.02358 0.001783 0.00100
113 -0.38783 0.025293 -0.010064 113 0.25965 0.051970 0.06079
114 -0.30790 0.023880 -0.007532 114 0.24824 0.050612 0.05732
115 -0.24421 0.020451 -0.005099 115 0.29581 0.075099 0.08429
116 0.04888 0.069190 0.003633 116 0.62843 0.133459 0.24663
117 -0.78278 0.071169 -0.059978 117 0.05141 0.009385 0.00500
118 2.98655 0.020082 0.061206 118 0.19284 0.021105 0.02831
119 -0.28516 0.014995 -0.004341 119 0.05463 0.004910 0.00384
120 -0.18903 0.011364 -0.002173 120 0.07188 0.009749 0.00713
121 -0.27987 0.019040 -0.005432 121 0.40239 0.078547 0.11748
122 * * * 122 * * *
123 -0.58896 0.051585 -0.032034 123 -1.92821 0.108385 -0.67228
124 * * * 124 * * *
125 * * * 125 * * *
126 * * * 126 * * *
127 -0.53068 0.033729 -0.018524 127 0.46338 0.055474 0.11230
128 4.67112 0.052623 0.259464 128 0.09437 0.017682 0.01266
129 * * * 129 * * *
130 -1.73005 0.037480 -0.067367 130 0.07464 0.011677 0.00811
131 0.79025 0.052190 0.043514 131 0.37432 0.091985 0.11914
132 -0.22366 0.026044 -0.005981 132 0.02297 0.002482 0.00115
133 -0.38679 0.027242 -0.010832 133 0.24315 0.040411 0.04990
134 -0.52109 0.030913 -0.016622 134 0.18389 0.018650 0.02535
135 -0.31256 0.016546 -0.005259 135 0.19927 0.022437 0.03019
136 0.55352 0.048064 0.027948 136 0.03743 0.004821 0.00261
137 -0.21086 0.011233 -0.002396 137 0.19045 0.026971 0.03171
138 * * * 138 * * *
139 -0.41858 0.045418 -0.019916 139 0.02738 0.003189 0.00155
140 * * * 140 * * *
141 -0.42885 0.036297 -0.016152 141 0.22218 0.037107 0.04362
142 -0.19630 0.006767 -0.001337 142 0.14109 0.012038 0.01557
143 -0.27780 0.013832 -0.003896 143 0.06002 0.005355 0.00440
144 1.49714 0.038841 0.060500 144 0.25975 0.038100 0.05169
145 0.74905 0.108486 0.091150 145 0.04474 0.008562 0.00416
146 -0.23187 0.018970 -0.004483 146 0.36146 0.098305 0.11935
147 -0.18329 0.008314 -0.001537 147 0.10536 0.012704 0.01195
148 * * * 148 * * *
149 * * * 149 * * *
150 * * * 150 * * *
151 0.50370 0.049108 0.026013 151 0.04569 0.006714 0.00376
152 * * * 152 * * *
153 -0.28488 0.020900 -0.006081 153 0.02125 0.001802 0.00090
154 -0.22109 0.011617 -0.002599 154 0.47771 0.066445 0.12745
155 -0.62577 0.038838 -0.025285 155 * * *
156 -0.44080 0.038879 -0.017831 156 0.05838 0.007434 0.00505
157 * * * 157 * * *
158 -1.10507 0.025898 -0.029380 158 0.13552 0.018294 0.01850
159 -0.53013 0.049888 -0.027836 159 0.01349 0.000784 0.00038
160 -1.02416 0.027174 -0.028608 160 0.18018 0.024598 0.02861
161 -0.77443 0.077498 -0.065058 161 0.46527 0.072387 0.12997
162 -1.08616 0.027092 -0.030246 162 0.28832 0.028506 0.04939
163 -0.37736 0.034818 -0.013613 163 0.05178 0.006121 0.00406
164 -0.37458 0.024091 -0.009247 164 0.27015 0.037395 0.05325
165 * * * 165 * * *
166 * * * 166 * * *
167 -1.03376 0.030361 -0.032369 167 0.35037 0.042414 0.07374
168 * * * 168 * * *
169 * * * 169 * * *
170 * * * 170 * * *
171 -0.32075 0.017139 -0.005593 171 0.18123 0.020420 0.02617
172 -1.23185 0.023305 -0.029394 172 0.17912 0.015356 0.02237
173 -0.17208 0.006452 -0.001117 173 0.30242 0.024172 0.04760
174 1.45981 0.039448 0.059952 174 0.23616 0.035546 0.04534
175 -0.30177 0.012568 -0.003841 175 0.17326 0.016371 0.02235
176 * * * 176 * * *
177 -0.17786 0.006579 -0.001178 177 0.15437 0.015011 0.01906
178 * * * 178 * * *
179 -0.14811 0.005810 -0.000866 179 0.40453 0.052476 0.09520
180 -0.35998 0.022063 -0.008122 180 0.15949 0.016943 0.02094
181 -0.38005 0.024701 -0.009625 181 0.16931 0.028645 0.02908
182 * * * 182 * * *
183 * * * 183 * * *
184 * * * 184 * * *
185 * * * 185 * * *
186 -0.41612 0.026651 -0.011394 186 0.03142 0.002377 0.00153
187 1.89670 0.030831 0.060337 187 0.24569 0.034507 0.04645
188 -0.25037 0.010441 -0.002642 188 0.08746 0.007392 0.00755
189 0.54065 0.058880 0.033825 189 0.02555 0.002944 0.00139
190 -0.15141 0.006065 -0.000924 190 0.49315 0.061847 0.12662
191 0.54817 0.037134 0.021141 191 0.09648 0.012864 0.01101
192 * * * 192 * * *
193 * * * 193 * * *
194 * * * 194 0.07669 0.008239 0.00699
195 * * * 195 * * *
196 -0.23520 0.018011 -0.004314 196 0.10998 0.019438 0.01549
197 * * * 197 * * *
198 * * * 198 * * *
199 -0.44805 0.056230 -0.026695 199 0.01466 0.000986 0.00046
200 -1.25154 0.048163 -0.063328 200 0.30639 0.056600 0.07505
201 -0.29274 0.016308 -0.004853 201 0.04973 0.004489 0.00334
202 * * * 202 * * *
203 * * * 203 * * *
204 * * * 204 * * *
205 * * * 205 * * *
206 -0.50828 0.032033 -0.016820 206 0.10162 0.009731 0.01007
207 -0.48272 0.031114 -0.015502 207 0.12285 0.013709 0.01448
208 -0.52163 0.032723 -0.017647 208 0.09245 0.008331 0.00847
209 * * * 209 * * *
210 -0.18871 0.007422 -0.001411 210 0.24809 0.027395 0.04164
211 * * * 211 * * *
212 * * * 212 * * *
213 -0.17655 0.006598 -0.001173 213 0.27498 0.021694 0.04095
214 0.67380 0.040662 0.028559 214 0.20651 0.032746 0.03800
215 2.57447 0.016339 0.042763 215 0.09749 0.005913 0.00752
216 0.67941 0.028972 0.020271 216 0.08426 0.008895 0.00798
217 0.73440 0.051025 0.039488 217 0.10587 0.019175 0.01480
218 * * * 218 * * *
219 0.78891 0.090859 0.078843 219 0.05117 0.007827 0.00455
220 0.55791 0.043409 0.025318 220 0.03968 0.004345 0.00262
221 -0.29780 0.021767 -0.006626 221 0.12126 0.018509 0.01665
222 * * * 222 * * *
223 -0.33587 0.014763 -0.005033 223 0.08894 0.007717 0.00784
224 * * * 224 * * *
225 * * * 225 * * *
226 * * * 226 * * *
227 -0.33435 0.013113 -0.004443 227 0.11859 0.009124 0.01138
228 * * * 228 * * *
229 0.87510 0.024071 0.021584 229 0.21645 0.018530 0.02974
230 -0.19533 0.049921 -0.010264 230 0.21379 0.037763 0.04235
231 * * * 231 * * *
232 0.40487 0.044312 0.018773 232 0.03114 0.003520 0.00185
233 * * * 233 * * *
234 1.70739 0.040961 0.072924 234 0.05252 0.003858 0.00327
235 -0.37519 0.023815 -0.009153 235 0.10279 0.012634 0.01163
236 -0.44022 0.037437 -0.017122 236 0.20171 0.031744 0.03652
237 0.74484 0.057559 0.045491 237 0.03257 0.004007 0.00207
238 1.04928 0.050811 0.056169 238 0.15897 0.021925 0.02380
239 * * * 239 * * *
240 * * * 240 * * *
241 0.81754 0.046480 0.039852 241 0.27786 0.049245 0.06324
242 -0.39357 0.044958 -0.018527 242 0.01768 0.001389 0.00066
243 -0.44670 0.039484 -0.018363 243 0.70188 0.137089 0.27976
244 * * * 244 * * *
245 0.75091 0.065126 0.052311 245 0.34417 0.078269 0.10029
246 * * * 246 * * *
247 0.90819 0.024408 0.022721 247 0.16455 0.020113 0.02357
248 0.74148 0.041953 0.032469 248 0.04342 0.005563 0.00325
249 1.09493 0.032889 0.037236 249 0.42839 0.044986 0.09298
250 0.88415 0.036261 0.033266 250 0.08406 0.012466 0.00944
251 -0.31372 0.023636 -0.007595 251 0.10029 0.014187 0.01203
252 * * * 252 * * *
253 * * * 253 * * *
254 * * * 254 * * *
255 -0.39976 0.027347 -0.011239 255 0.12316 0.016361 0.01588
256 -0.20907 0.007835 -0.001651 256 0.16928 0.014052 0.02021
257 -0.21503 0.013515 -0.002946 257 0.04485 0.005031 0.00319
258 -0.50676 0.036285 -0.019080 258 0.42296 0.070467 0.11646
259 * * * 259 * * *
260 -0.34334 0.028139 -0.009941 260 0.02829 0.002462 0.00141
261 -0.23606 0.010416 -0.002485 261 0.05471 0.005515 0.00407
262 -0.19132 0.006535 -0.001258 262 0.15503 0.012729 0.01760
263 * * * 263 * * *
264 * * * 264 * * *
265 * * * 265 * * *
266 * * * 266 * * *
267 * * * 267 0.08447 0.011514 0.00912
268 -2.13095 0.087059 -0.203211 268 -2.84198 0.089142 -0.88907
269 -0.19207 0.007022 -0.001358 269 0.11626 0.011163 0.01235
270 * * * 270 * * *
271 1.22744 0.050074 0.064702 271 0.12155 0.021515 0.01802
272 * * * 272 * * *
273 * * * 273 * * *
274 -0.33424 0.025620 -0.008788 274 0.03108 0.002740 0.00163
275 1.51821 0.084462 0.140060 275 0.25127 0.085627 0.07689
276 * * * 276 * * *
277 * * * 277 * * *
278 * * * 278 * * *
279 -0.99878 0.028088 -0.028864 279 0.19820 0.027463 0.03331
280 0.57176 0.041401 0.024694 280 0.04359 0.004811 0.00303
281 * * * 281 * * *
282 -0.18175 0.006190 -0.001132 282 0.18720 0.014467 0.02268
283 * * * 283 * * *
284 * * * 284 * * *
285 * * * 285 * * *
286 * * * 286 * * *
287 * * * 287 * * *
288 0.90945 0.054638 0.052563 288 0.41359 0.090760 0.13067
289 0.81072 0.038881 0.032797 289 0.09156 0.009027 0.00874