
Hector Mendoza
December 16, 2009
Public Policy Analysis and Evaluation Final

Workforce Development

Economic Development and Human Capital

Throughout the world, governments and public officials speak about job creation and a

better quality of life through economic development. Economic development is “the increase in

the standard of living of a nation's population with sustained growth from a simple, low-income

economy to a modern, high-income economy” (Lewis-Ambrose, 2009, para. 2). Its scope

“includes the process and policies by which a nation improves the economic, political, and social

well-being of its people” (O’Sullivan and Sheffrin, 2003, p. 471). Through economic

development, a region typically experiences improvements in its literacy rates, life

expectancy, and poverty rates. For economic development to flourish, it is important that governments invest their resources in their own populations.

Economic development is linked to a nation’s human development and social infrastructure

such as health and education. Adam Smith considered human capital (the accumulation of

training, education, and knowledge of workers) as an important foundation for the development

of a strong national labor force. Smith argues that

“Fourthly, of the acquired and useful abilities of all the inhabitants or members of society…The acquisition of such talents, by the maintenance of the acquirer during his education, study, or apprenticeship, always costs a real expense, which is a capital fixed and realized, as it were, in his person. Those talents, as they make a part of his fortune, so do they likewise of that of the society to which he belongs. The improved dexterity of a workman may be considered in the same light as a machine or instrument of trade which facilitates and abridges labor, and which, though it costs a certain expense, repays that expense with a profit.” (Smith, 1776, para.17)

For that reason, education plays a critical role in helping the population to obtain the vital

cognitive skills needed to adapt and create an innovative environment.

By having strong cognitive skills, an individual enhances their “ability to perform

standard tasks and efficiently learn new tasks, receive and process new information,

communicate and coordinate activities with one another, evaluate and adjust to changed

circumstances, adopt new technologies, and bring new innovations in the production of

technology” (Lau, Jamison, and Louat, 1991, p. 2). With a highly skilled population, nations can

venture into research and development (R&D) to create new innovations and to improve the

living conditions within their environment. The relationship between education and R&D is that

“Education relates to the development and adoption of new technology in several ways. A substantial proportion of both basic and applied research is carried out within educational institutions. Educated people are more likely to become innovators than people with less schooling. The non-monetary benefits of technological change include the availability of new materials, processes, products, and services, in particular information services, which, in turn, improve living conditions for all members of society” (Vila, 2000, p. 26).

With a skilled workforce in place, many companies (i.e. financial services, biotechnology, software engineering, etc.) will see the value of investing in the region by outsourcing their operations to that location, thus creating jobs and economic growth.

The Role of Public Workforce Development Programs

Workforce development plays an integral part in a region’s economic development by

assisting the population to obtain new job skills that potential employers demand. Since the end

of World War II, many manufacturing cities, such as Jersey City and Cleveland, experienced substantial job losses due to the effects of globalization. Jersey City experienced tremendous job losses between 1950 and 1975, “when the city had 74,790 private sector jobs but

declined to 59,506 jobs, a 20% decline due to manufacturing jobs being outsourced” (JCEDC

UEZ, 2005, p. 26). As for Cleveland, the city lost

“200,000 jobs between 1950 and 2000…manufacturing jobs were particularly hard hit as manufacturing as a proportion of Ohio’s gross state product dropped from 37 percent in 1977 to 22 percent in 2001. Unemployment in the city typically ran three to four times that in surrounding suburbs. Average incomes in Cleveland between 1986 and 2001 fell from $29,935 to $27,681, a percent drop” (Krumholz and Berry, 2007, p. 134).

With substantial changes in the labor market (i.e. from a manufacturing economy to a

knowledge-based economy), many low-skill workers suffered a reduction in earnings, thus increasing poverty throughout the United States. As a result, Congress took an interest in enhancing the job skills of disadvantaged individuals through public workforce development

programs.

In an effort to reduce poverty, the U.S. government implemented job skill development

programs such as the Comprehensive Employment and Training Act (CETA), which is the

predecessor of the 1980s Job Training Partnership Act (JTPA). These acts allowed the federal government to fiscally assist states and their localities in training low-income unemployed and economically disadvantaged people via classroom training, on-the-job training,

job-search assistance, and other services. JTPA’s six basic service categories are described in full detail below:

Job Training Partnership Act (JTPA)

Classroom training in occupation (CTOS) – in-class instruction in specific job skills such as word processing, electronics repair, and home health care;

On-the-job training (OJT) – subsidized training that takes place as part of a paying job, often in a private-sector firm (JTPA usually pays half of the wages for up to six months, but the jobs are supposed to be permanent);

Job search assistance (JSA) – assessment of participants’ job skills and interests, along with training in job-finding techniques and help in locating openings;

Basic education – including adult basic education (ABE), high school, or General Education Development (GED, or high school equivalency) preparation, and English as a Second Language (ESL);

Work Experience – temporary entry-level jobs designed to provide basic employment skills and instill effective work habits (the jobs may be subsidized by JTPA if they are in the public sector); and

Miscellaneous services – including assessment, job-readiness training, customized training, vocational exploration, job shadowing, and tryout employment, among a variety of other services.

(Orr, Bloom, Bell, Doolittle, Lin, and Cave, 1996, p. 4)

The government hoped that “public expenditures on these programs will enhance participants’

productive skills and, in turn, increase their future earnings and tax payments and reduce their

dependence on social welfare benefits” (LaLonde, 1995, p. 149).

Workforce Development Policies and Evaluation Objective

At the domestic level, numerous evaluations have examined U.S. job-training policies

such as the Comprehensive Employment and Training Act (CETA) and Job Training Partnership

Act (JTPA). In the international spectrum, a United Kingdom evaluation examined the impact of

the European Social Fund (ESF) Objective 4 (O4) program which “aimed to alleviate the threat

of social exclusion through long-term unemployment by developing the skills of the workforce

who were employed but potentially at risk of losing their jobs” (Devins and Johnson, 2003, p.

214). In addition, international organizations such as the United Nations (UN) or the

Organization for Economic Co-operation and Development (OECD) passed resolutions to

promote job-training programs throughout the industrialized and developing world to increase

both employment opportunities and wages, and reduce poverty.

Many of the evaluations seek to answer whether publicly funded job-training programs have

a positive impact on trainee earnings, job placement and retention, and job advancement after

they graduate from the program. Other evaluations use cost-benefit analysis to determine whether public funding for these programs is beneficial to society. Furthermore, some evaluations intensively look at government policies to determine if such programs create bias, such as creaming1, in trainee eligibility and enrollment within a local job-training center. Other evaluations analyze whether classroom training is more effective than job-search assistance or vice versa. Some evaluate whether

1 The terms “creaming” and “cream-skimming” refer to serving individuals who are most employable at the expense of those who are most in need.


small and medium enterprises (SMEs) can assist in developing newly hired low-skill workers by training

them with monetary government aid. Overall, the objective of these evaluations is to measure the efficiency of government programs and to recommend improvements to a program’s processes or its elimination for cost-saving purposes.

Evaluation Methodologies

Research Design

Numerous evaluations use randomized experiments, quasi-experiments, and, to a lesser extent, qualitative methods2 (i.e. follow-up interviews) to conduct their studies. Many researchers prefer randomized experiments as their main research method to prevent the subject selection bias3 that can skew study results. For this reason, a growing body of research indicates “the importance

of randomized experiments to overcome the selection bias that plagued previous quasi-

experimental studies of employment and training programs” (Bloom, Orr, Bell, Cave, Doolittle,

Lin, and Bos, 1997, p. 550). In a random experiment, “eligible program applicants are randomly

assigned to either a treatment group, which was allowed access to the program, or to a control

group, which was not” (Bloom, Orr, Bell, Cave, Doolittle, Lin, and Bos, 1997, p. 550). Random

assignment assures that the treatment group and control group do not differ systematically in any

way except access to the program (Bloom, Orr, Bell, Cave, Doolittle, Lin, and Bos, 1997, p.

550). The objective in using random experiments is not only to prevent bias but also to generate

valid and reliable estimates regarding a program’s impacts on a trainee’s earnings, employment,

educational attainment, and welfare receipt upon completing a work-assistance program. In addition, random experiments do not require the complex statistical techniques that quasi-experiments do.

2 For the qualitative aspect, the majority of researchers use follow-up surveys or interviews to monitor enrollees’ progress and program impact within a 3- to 44-month period after program completion.
3 Selection bias arises when program impacts are measured by comparing labor market outcomes of program participants with those of nonparticipants who differ in systematic ways (Bloom, Orr, Bell, Cave, Doolittle, Lin, and Bos, 1997, p. 550).

The techniques used to adjust for differences in observable attributes (i.e. sex, age, education, and

region of residence) in quasi-experiments are relatively “straightforward but subject to

specification errors; correcting for unobservable characteristics (i.e. motivation, family

connections) requires a convoluted procedure that can yield wildly different results” (Dar and

Gill, 1998, p. 81).
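To make the contrast concrete, here is a minimal sketch of the random-assignment logic in Python; the applicant pool, earnings levels, and gain sizes are invented for illustration and are not drawn from any of the cited studies.

    import random
    import statistics

    random.seed(42)

    # Hypothetical applicant pool: (baseline annual earnings, true gain if trained).
    applicants = [(random.gauss(9000, 2500), random.gauss(900, 400)) for _ in range(2000)]

    treatment_earnings = []
    control_earnings = []
    for baseline, gain in applicants:
        # Random assignment: program access is decided by a coin flip, so the two
        # groups do not differ systematically in anything except access.
        if random.random() < 0.5:
            treatment_earnings.append(baseline + gain)  # earnings after training
        else:
            control_earnings.append(baseline)           # earnings without the program

    # Under random assignment the impact estimate is the simple difference in
    # means; no adjustment for observables or unobservables is required.
    impact = statistics.mean(treatment_earnings) - statistics.mean(control_earnings)
    print(f"Estimated program impact on earnings: ${impact:,.0f}")

Because assignment, not self-selection, determines who is trained, the difference in means recovers the average gain (about $900 in this toy setup) without any further statistical machinery.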

On the other hand, some researchers feel that random experiments alter the behavior of

individuals. They view the technique dimly because random assignment can change applicants’ behavior (i.e. control-group members enrolling in private programs or intensifying their job search), because costs run high given the number of participants in the sample, and because of the ethical questions raised by excluding a group of people from the intervention (Dar and Gill, 1998, p. 80-81). Quasi-experiments may lack randomization, but they provide comparison groups and baselines that yield important information to help

researchers evaluate the effectiveness of programs and policies. Quasi-experiments are

“[C]laimed to [better] overcome threats to internal validity – and thus enhance their credibility – than studies that impose no controls on treatments and experimental subjects. Also, quasi-experimental designs have been shown to have better external validity than true experiments because the latter often impose controls that would be hard to impose elsewhere” (Stufflebeam and Shinkfield, 2007, p. 301).

Furthermore, quasi-experiments are cheaper to conduct than random experiments because

information is available “for a considerable number of observable individual and labor-market

characteristics such as education, age, sex, household wealth, and region of residence” (Dar and

Gill, 1998, p. 82-83).

Within quasi-experiments, some researchers strongly prefer using matched pairs because

this method accounts for the observed characteristics of individuals enrolled in workforce development programs. Matched pairs control, to a certain extent, for the observed characteristics of individuals in the control and treatment groups, which are otherwise likely to have different success rates in the area of

focus. To control for these differences, “synthetic control groups are constructed using a

matched pairs approach...the synthetic control group, which is a subset of the entire control

group, is composed of individuals whose observable characteristics most closely match those of

the treatment group” (Dar and Gill, 1998, p. 82). Researchers favor this method because the

procedure is less arbitrary and the results are easier to interpret for non-statisticians.

Furthermore, the method measures the program’s impact as the “simple difference in the variables that policymakers want answered such as reemployment probabilities and wages between the control and treatment groups” (Dar and Gill, 1998, p. 83). In addition to matched pairing, researchers use cost-benefit analysis to measure a program’s impact on the studied subjects and to determine whether a reallocation of resources is needed.
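The matched-pairs procedure itself can be sketched in a few lines; the records, fields, and distance measure below are invented for illustration and are not Dar and Gill’s actual data or matching algorithm.

    # Pair each trainee with the nonparticipant whose observable characteristics
    # are closest; the matches form the synthetic control group.
    trainees = [
        {"id": "T1", "age": 24, "educ": 11, "earnings": 10400},
        {"id": "T2", "age": 37, "educ": 12, "earnings": 12900},
    ]
    nonparticipants = [
        {"id": "C1", "age": 23, "educ": 11, "earnings": 9800},
        {"id": "C2", "age": 39, "educ": 12, "earnings": 12100},
        {"id": "C3", "age": 52, "educ": 8, "earnings": 8700},
    ]

    def distance(a, b):
        # Squared distance over observables; real studies standardize and weight
        # characteristics (sex, region, household wealth, etc.) before matching.
        return (a["age"] - b["age"]) ** 2 + (a["educ"] - b["educ"]) ** 2

    synthetic_control = [min(nonparticipants, key=lambda c: distance(t, c)) for t in trainees]

    def mean_earnings(rows):
        return sum(r["earnings"] for r in rows) / len(rows)

    # The impact estimate is the simple difference policymakers want answered:
    # mean earnings of trainees minus mean earnings of their matches.
    print(f"Matched-pairs earnings difference: ${mean_earnings(trainees) - mean_earnings(synthetic_control):,.0f}")

The appeal is exactly what the paragraph above describes: the procedure is transparent, and the final estimate is a plain difference that non-statisticians can read directly.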

Nevertheless, there is no perfect research model to obtain vital empirical data from

program outcomes. Even when program evaluators study the same topic using the same

data, they often arrive at different estimates. For example, “the six evaluations of CETA’s

impact on the 1976 cohort’s earnings range from a decline of $1,210 to a gain of $1,350 for male

participants, and gains of $20 to $2,200 for female participants” (LaLonde, 1995, p. 158).

Subsequent analyses demonstrate that “the variability in these estimates results not from

sampling variability but from the very subtle differences among evaluators’ statistical models

and choices of who they put into their comparison groups” (LaLonde, 1995, p. 158). As a result,

it is important to remember that there is no single best methodology in program evaluation. The appropriate standard, no matter which experimental technique is used, is to choose the research tool that produces the most sound and useful information on program effectiveness.
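That sensitivity to evaluator discretion can be made concrete with a toy simulation (all numbers invented): two evaluators apply different, equally plausible-looking comparison-group rules to the same simulated data and reach very different impact estimates.

    import random
    import statistics

    random.seed(7)

    # Toy population in which unobserved motivation raises both earnings and the
    # chance of enrolling, which is exactly the selection problem described above.
    TRUE_EFFECT = 800
    people = []
    for _ in range(5000):
        motivation = random.random()
        enrolled = random.random() < motivation  # self-selection into training
        earnings = 8000 + 6000 * motivation + (TRUE_EFFECT if enrolled else 0) + random.gauss(0, 1500)
        people.append((enrolled, motivation, earnings))

    participants = [p for p in people if p[0]]
    nonparticipants = [p for p in people if not p[0]]

    # Evaluator A compares participants with all nonparticipants.
    est_a = (statistics.mean(e for _, _, e in participants)
             - statistics.mean(e for _, _, e in nonparticipants))

    # Evaluator B keeps only nonparticipants with above-median motivation
    # (observable here, but standing in for a different comparison-group rule).
    motivated = [p for p in nonparticipants if p[1] > 0.5]
    est_b = (statistics.mean(e for _, _, e in participants)
             - statistics.mean(e for _, _, e in motivated))

    print(f"True effect: ${TRUE_EFFECT}; Evaluator A: ${est_a:,.0f}; Evaluator B: ${est_b:,.0f}")

With these numbers Evaluator A’s estimate lands near $2,800 while Evaluator B’s lands near the true $800: the same data yield wildly different answers depending on who is placed in the comparison group, just as LaLonde reports for the CETA evaluations.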


Data Collection and Use

Much of the data collected in evaluations is either primary or secondary. Many

quasi-experimental evaluations tend to use secondary data. Studies such as “Evaluating

Retraining Programs in OECD Countries: Lessons Learned”, written by Amit Dar and Indermit

S. Gill, examined the effectiveness of 11 retraining programs4 within OECD countries by using

data provided by member governments. In another evaluation, “Performance Incentives in the

Public Sector: Evidence from the Job Training Partnership Act” written by Michael Cragg, uses

JTPA enrollment data from the program’s introductory period (1983-1987) to test whether “enrollment probabilities for able individuals are high in high-incentive states”, i.e., whether cream-skimming practices were fueled by federal government performance incentives (Cragg, 1997, p.

152). Other studies such as “The Impact of CETA Programs on Components of Participants’

Earnings” (written by Katherine P. Dickinson, Terry R. Johnson, and Richard W. West) used “a

random sample of participants in CETA who enrolled in 1975, which was taken from the

Continuous Longitudinal Manpower Surveys (CLMS)” (Dickinson, Johnson, and West, 1987, p.

431). Within this study, the comparison groups for CETA participants were drawn from the

Current Population Survey (CPS). Matching was used to “reduce pre-enrollment differences

between the CPS and CLMS samples so that regression estimates will be less sensitive to

incorrect specification of the estimation model” (Dickinson, Johnson, and West, 1987, p. 431).

Most random experiments produce primary data due to the use of random assignment.

As mentioned before, the use of a random experiment is meant to overcome the subject selection

bias that weighed down on previous quasi-experimental studies of employment and training

4 The study used data obtained from retraining centers in the United States, Germany, the Netherlands, Britain, Sweden, Australia, Canada, Denmark, and France.


programs. The objectives of the evaluation “The Benefits and Costs of JTPA Title II-A Programs: Key Findings from the National Job Training Partnership Act Study” (written by Howard S. Bloom, Larry L. Orr, Stephen H. Bell, George Cave, Fred Doolittle, Winston Lin, and Johannes M. Bos) were to generate valid and reliable estimates of the program’s impacts on the targeted JTPA Title II-A population. The evaluation’s sample drew on 16 local JTPA programs, also referred to as Service Delivery Areas (SDAs), from across the United States. The sample consists of applicants recruited by local SDA site staff and screened to determine whether they met JTPA eligibility5.

A total of 20,601 sample members were randomly assigned to the treatment group or

control group. The analysis of program impacts on earnings reported was based on data for

15,981 sample members (Bloom, Orr, Bell, Cave, Doolittle, Lin, and Bos, 1997, p. 553). Data for the

analysis were obtained from numerous sources such as a Background Information Form (BIF)

completed by sample members when they applied to JTPA; JTPA enrollment, tracking, and

expenditure records from the 16 SDAs that served as study sites; two waves of follow-up surveys

conducted by telephone, with personal interviews where necessary; state Unemployment

Insurance (UI) wage records for 12 of the study sites; state AFDC and food stamps records for

four of the study sites; a telephone survey of vocational/technical schools in the study sites to

determine the costs of their programs; and published data on the instructional costs of high

schools and colleges (Bloom, Orr, Bell, Cave, Doolittle, Lin, and Bos, 1997, p. 553-554).

5 Eligible applicants were assessed to determine their training needs with: (1) classroom training, (2) a mix of on-the-job training (OJT) and/or job-search assistance (JSA), and (3) other services (Bloom, Orr, Bell, Cave, Doolittle, Lin, and Bos, 1997, p. 553).

Evaluation Results


Studies indicate that youths and adult males, compared to women, do not experience a positive impact on earnings during and after the in-program period. For adult CETA participants, Dickinson, Johnson, and West report:

“Estimated impact of CETA is -$817, indicating that adult men who enrolled in CETA in 1975 are estimated to have earned significantly less in 1977 than comparable men in the matched comparison group. In contrast, the estimated impact of CETA participation for adult women is significantly positive: adult female participants are estimated to have earned $905 more than comparable women in the matched comparison group” (Dickinson, Johnson, and West, 1987, p. 435).

All CETA program activities are estimated to have had “a negative impact on the earnings of

adult men, ranging from -$283 for classroom training to -$1,051 for work experience programs

while all program activities are estimated to have had a positive impact on the earnings of adult

women” (Dickinson, Johnson, and West, 1987, p. 436). Orley Ashenfelter and David Card’s

study, “Using the Longitudinal Structure of Earnings to Estimate the Effect of Training

Programs”, supports this notion, demonstrating small gains in adult males’ earnings for 1976 CETA enrollees of “300 current dollars per year”, compared to adult females, who experienced “an unambiguously positive earnings in the order of 800-1500 current dollars per

year” (Ashenfelter and Card, 1985, p. 660). As for youths, they experienced a negative net

benefit from being enrolled in JTPA: female youths experienced “a negligible net cost of -$121 per enrollee, reflecting a very small earnings impact for this group, while male youth non-arrestees experienced a net cost of -$530, reflecting an insignificant negative estimated earnings

impact” (Bloom, Orr, Bell, Cave, Doolittle, Lin, and Bos, 1997, p. 571).

On the other hand, the majority of evaluations support the finding that workforce development has a positive impact on an individual’s earnings after graduation. In addition, some studies argue

that an additional year of schooling “is associated with approximately an 8 percent increase in

the average worker’s earnings – about $1800 per year” (LaLonde, 1995, p. 156). Within “The Promise of Public Sector-Sponsored Training Programs” evaluation, non-experimental

evaluations of MDTA and CETA programs “indicate that when training is most effective it

raised the post-program earnings of its participants by perhaps $1,000 to $2,000 per year”

(LaLonde, 1995, p. 156). “The Benefits and Costs of JTPA Title II-A Programs: Key Findings from the National Job Training Partnership Act Study” revealed that JTPA produced positive net benefits for adult enrollees, with a “$1,422 increase for women and $1,822 for men” between the treatment and control groups (Bloom, Orr, Bell, Cave,

Doolittle, Lin, and Bos, 1997, p. 571).

In the “New Evidence on the Long-Term Effects of Employment Training Programs”

evaluation, the researcher (Kenneth A. Couch) uses a cost-benefit analysis that supports job-training programs, reporting that

“The average cost from a social perspective for an Aid to Families with Dependent Children (AFDC) recipient in the NSW6 to be $2,674 in 1978 dollars. The cumulative increases in real earnings in the post-training period for the average AFDC trainee are $2,728. Without discounting, the observed increases in earnings for the AFDC trainees over the first 8 years following more than cover the social costs of training” (Couch, 1992, p. 385).
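In undiscounted terms, the comparison in this passage reduces to simple arithmetic; the sketch below merely restates Couch’s quoted figures.

    # Couch's undiscounted social cost-benefit comparison for AFDC trainees,
    # using only the 1978-dollar figures quoted above.
    social_cost = 2674     # average social cost of NSW training per trainee
    earnings_gain = 2728   # cumulative post-training increase in real earnings

    net_benefit = earnings_gain - social_cost
    print(f"Undiscounted net social benefit per AFDC trainee: ${net_benefit}")  # $54

Discounting would shrink the benefit side, since the earnings gains accrue over the eight years after training while the costs are paid up front, which is why the no-discounting assumption matters to Couch’s conclusion.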

Furthermore, the same author finds that “NSW’s effect on earnings of AFDC

recipients is sizable and statistically significant during the years from 1982 through 1986,

ranging from $375 to $525 in 1978 dollars” (Couch, 1992, p. 386). Overall, the workforce

development programs have been shown to have a positive effect on enrollees’ earnings once they complete the program.

6 National Supported Work experiment (NSW) – the program was designed to provide immediate subsidized employment opportunities to trainees. This was seen as an alternative to classroom education as a method of providing individuals with experiences and skills which would later facilitate private-sector employment. The NSW provided work experience primarily in service occupations for females and construction for males (Couch, 1992, p. 381).


References

Ashenfelter, O., & Card, D. (1985). Using the Longitudinal Structure of Earnings to Estimate the Effect of Training Programs. The Review of Economics and Statistics, 67(4), 648-660.

Bloom, H. S., Orr, L. L., Bell, S. H., Cave, G., Doolittle, F., Lin, W., & Bos, J. M. (1997). The Benefits and Costs of JTPA Title II-A Programs: Key Findings from the National Job Training Partnership Act Study. The Journal of Human Resources, 32(3), 549-576.

Couch, K. A. (1992). New Evidence on the Long-Term Effects of Employment Training Programs. Journal of Labor Economics, 10(4), 380-388.

Cragg, M. (1997). Performance Incentives in the Public Sector: Evidence from the Job Training Partnership Act. The Journal of Law, Economics, & Organization, 13(1), 147-168.


Dar, A., & Gill, I. S. (1998). Evaluating Retraining Programs in OECD Countries: Lessons Learned. The World Bank Research Observer, 13(1), 79-101.

Devins, D., & Johnson, S. (2003). Training and Development Activities in SMEs: Some Findings from an Evaluation of the ESF Objective 4 Programme in Britain. International Small Business Journal, 21(2), 213-228.

Dickinson, K. P., Johnson, T. R., & West, R. W. (1987). The Impact of CETA Programs on Components of Participants' Earnings. Industrial and Labor Relations Review, 40(3), 430-441.

Jersey City Economic Development Corporation. (2005). An Economic Resource Guide. Jersey City, NJ: Lambo, Ann-Margaret. Web site: http://www.jcedc.org/JCNJ05.pdf.

Krumholz, N., & Berry, D. E. (2007). The Cleveland Experience: Municipal-led Economic and Workforce Initiatives During the 1990s. In M. I. Bennett & R. P. Giloth (Eds.), Economic Development in American Cities (pp. 133-158). Albany, NY: State University of New York Press.

LaLonde, R. J. (1995). The Promise of Public Sector-Sponsored Training Programs. The Journal of Economic Perspectives, 9(2), 149-168.

Lau, L. J., Jamison, D. T., & Louat, F. F. (1991). Education and Productivity in Developing Countries. Washington, D.C.: World Bank.

Lewis-Ambrose, D. (2009). Economic Development Vs. Sustainable Development in a Small Island Context. Retrieved Dec. 10, 2009, from The Virgin Islands StandPoint, Tortola, British Virgin Islands. Web site: http://www.vistandpoint.com/news/opinions/3799-economic-development-vs-sustainable-development-in-a-small-island-context.

O'Sullivan, A., & Sheffrin, S. M. (2003). Economics: Principles in action. Upper Saddle River, NJ: Pearson Prentice Hall.

Orr, L. L., Bloom, H. S., Bell, S. H., Doolittle, F., Lin, W., & Cave, G. (1996). Does Training for the Disadvantaged Work? Washington, D.C.: The Urban Institute Press.

Smith, A. (1776). An Inquiry into the Nature and Causes of the Wealth of Nations. Retrieved Dec. 4, 2009, from The Adam Smith Institute, London, UK. Web site: http://www.adamsmith.org/smith/won-b2-c1.htm.

Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation Theory, Models, & Applications. San Francisco, CA: Jossey-Bass.

Vila, L. E. (2000). The Non-monetary Benefits of Education. European Journal of Education, 35, 21-32.
