
Columbia Law School Public Law & Legal Theory Working Paper Group

Paper Number 10-238

MINIMALISM AND EXPERIMENTALISM IN THE ADMINISTRATIVE STATE

(version of February 2011)

By: Professor Charles F. Sabel & Professor William H. Simon

Columbia Law School

Electronic copy available at: http://ssrn.com/abstract=1600898



MINIMALISM AND EXPERIMENTALISM IN THE ADMINISTRATIVE STATE

Charles F. Sabel* and William H. Simon†

This essay identifies and appraises the two most promising alternatives to the “command-and-control” style of public administration that was dominant from the New Deal to the 1980s but is now in disfavor. The first – Minimalism – emphasizes public interventions that incorporate market concepts and practices and that centralize and minimize administrative discretion. The second – Experimentalism – emphasizes interventions in which central government affords broad discretion to local administrative units but measures and assesses their performance in ways designed to induce continuous learning and revision of standards. Minimalism has been prominent in legal scholarship and in the policy discourse of recent Presidential administrations, but its practical impact has been surprisingly limited. By contrast, Experimentalism, which has had a lower profile in academic and public discussion, has visibly influenced a broad range of critical policy initiatives in the U.S. and abroad. Indeed, key initiatives of the Obama administration, including the Food Safety Modernization Act and the “Race to the Top” education program, are virtually unintelligible from any other perspective. We argue that, in practice, Minimalism suffers from an excessive preoccupation with static efficiency norms and price signals and insufficient attention to learning and to “weak signals” of risk and opportunity. Experimentalist intervention is a more promising approach in the growing realm of policy challenges characterized by uncertainty about both the definition of the relevant problem and its solution.

* Maurice T. Moore Professor of Law, Columbia University.

† Arthur Levitt Professor of Law, Columbia University.



I. Introduction

The administrative style that characterized American public law from the New Deal to the 1980s has been out of favor in recent years. This style, pejoratively called “command and control”, is identified with rule-bound bureaucracy and deference to ineffable expertise. The efforts of recent years to escape the limits of command-and-control administration have taken diverse forms, and most remain controversial. In this essay, we suggest that these efforts can be understood in terms of two broad models of public intervention, and we assess their advantages relative to each other and to command-and-control administration.

The first model can be called Minimalism. The term is associated with Cass Sunstein,1 but it usefully connotes the understanding of administration in a broad range of legal scholarship, including especially the influential work of Bruce Ackerman and Jerry Mashaw.2 This model seeks to ground policy design in economic concepts and market practices and to minimize frontline administrative discretion and popular participation in administration. Its key normative reference points are efficiency and consistency. It favors cost-benefit analysis for elite bureaucrats and restrictive rules for lower-tier ones. Instead of directive regulations, it prescribes regimes of marketable duties, such as emissions permits, or marketable rights, such as welfare vouchers. Where people are prone to make bad choices, it urges “nudges” – providing information or making provisional choices from which the beneficiary can opt out – rather than mandatory rules.

We call the second model Experimentalism, though it bears a strong resemblance to what others call “new governance” or “responsive regulation.” Experimentalism has been influenced by both management theory and democratic ideals, especially as the two converge in John Dewey’s idea of “democratic experimentalism.”3

1 Robert Kuttner, “The Radical Minimalist”, The American Prospect (April 14, 2009).

2 E.g., Jerry Mashaw, Bureaucratic Justice: Managing Social Security Disability Claims (1985); Bruce Ackerman and Anne Alstott, The Stakeholder Society (1999). We focus on the parallels among Sunstein, Ackerman, and Mashaw in their treatment of administrative policy implementation. In the sphere of constitutional adjudication, Ackerman rejects Sunstein’s Minimalism. See Bruce Ackerman, “The Living Constitution”, 120 Harvard Law Review 1737, 1805-06 (2007).

3 John Dewey, The Public and Its Problems (1927); Michael Dorf and Charles Sabel, “A Constitution of Democratic Experimentalism”, 98 Columbia Law Review 267 (1998). See also Ian Ayres and John Braithwaite, Responsive Regulation: Transcending the Deregulation Debate (1992); New Governance in the EU and the US (Grainne de Burca and Joanne Scott eds. 2005).


Its governing norm in institutional design is reliability – the capacity for learning and adaptation. In Experimentalist regimes, central institutions give autonomy to local ones to pursue generally declared goals. The center then monitors local performance, pools information in disciplined comparisons, and creates pressures and opportunities for continuous improvement at all levels. The regimes’ distinctive mechanisms for achieving both learning and coordination emphasize deliberative engagement among officials and stakeholders.

Our analysis highlights and explains a fundamental policy re-orientation along Experimentalist lines in the U.S., the EU, and elsewhere since the 1990s. Minimalism has been influential in legal scholarship and popular policy discourse, but its impact on actual policy design has been surprisingly limited. President Obama frequently invoked Minimalist ideas in his campaign, and he appointed Cass Sunstein, a leading Minimalist, as administrator of the Office of Information and Regulatory Affairs. Yet, the Administration’s most important initiatives – in fiscal policy, banking, healthcare, food safety, education, and offshore oil drilling – have reflected these ideas only occasionally or dimly. By contrast, Experimentalism has had a lower profile in legal scholarship and popular policy discourse, but its premises are pervasively and centrally (if often imperfectly) manifest in recent regulatory and social welfare initiatives here and abroad. Some of the Obama administration’s most important initiatives, including the Food Safety Modernization Act4 and the “Race to the Top” education program5, can only be understood in Experimentalist terms.

We argue that these developments reflect the fact that Experimentalist regimes are especially well suited for circumstances in which effective public intervention requires adaptation to changing circumstances and local variation. The central characteristic of these circumstances is “uncertainty” in Frank Knight’s sense – contingency that cannot be known or calculated actuarially or with formal rigor but can only be estimated impressionistically.6 In the realm of uncertainty, policy aims cannot be extensively defined in advance of implementation; they have to be discovered in the course of problem-solving.

4 Food Safety Modernization Act of 2010, Pub. L. No. 111-353.

5 U.S. Dept. of Education, Race to the Top Program: Executive Summary (Nov. 2009), at 6, available at www2.ed.gov/programs/racetothetop/executive-summary.pdf.

6 Frank Knight, Risk, Uncertainty, and Profit (1921).


Yet, the characteristic Minimalist interventions presuppose a strong distinction between the enactment or elaboration of public goals and their administrative implementation. On the other hand, Experimentalist regimes depend on the controversial premise that public administration can integrate frontline discretion and stakeholder participation in a disciplined, accountable manner. Distrust of decentralized discretion and participation leads some Minimalists to reject Experimentalist reforms even in situations where Experimentalists regard them as most promising.7 Although this difference can only be resolved in practice, we identify some of the ways in which Experimentalism responds to it.

In Part II, we defend our notion of Minimalism as a heuristic that usefully identifies influential premises of recent public law scholarship, and we elaborate its limitations as a guide to policy implementation in regulation and social welfare. In Part III, we elaborate the Experimentalist alternative and argue for its comparative advantages in key policy spheres. We do not deny the value of Minimalist interventions in some contexts. However, we argue that key changes in the configuration of the problems addressed by both regulatory and social welfare policy favor Experimentalist intervention in many areas.

II. Minimalism and Its Limits

We begin by describing the convergent themes of Minimalism and then proceed to analyze the limitations of its treatment of, first, regulation, and then, social welfare policy.

A. General Themes

Minimalism expresses disillusionment with the New Deal practice of vast and vague delegation of authority to expert bureaucratic agencies. At the top, the problem is that, because legislative specification generates excessively rigid instructions and invites rent seeking by special interests, statutes must afford discretion to administrators.

7 David Super, “Laboratories of Destitution: Democratic Experimentalism and the Failure of Antipoverty Law”, 157 University of Pennsylvania Law Review 541 (2008).


At the bottom, the problem is the difficulty of controlling the inevitable discretion of the “street-level bureaucrats” – welfare case workers, police officers on the beat, classroom teachers – who make on-the-spot, face-to-face decisions that often determine the life chances of citizens.8

Minimalism also reflects disillusionment with the more ambitious claims for participatory democracy among liberals in the 1960s. Ackerman emphasizes that people have limited time and energy for politics except in times of crisis.9 Moreover, Sunstein adds, popular deliberation is not all it’s cracked up to be by Utopian democrats. Serious disadvantages arise from tendencies, on the one hand, toward unreflective conformity (“groupthink”) and, on the other, toward unreflective tendentiousness (“group polarization”). Both phenomena are mitigated in decision-making processes that, instead of striving for deliberative agreement, impersonally aggregate individual views.10 The Minimalists believe in the possibility of meaningful public-regarding deliberation by officials and citizens, but democracy for them is mostly indirect. It takes place through electoral representation rather than popular participation.

The Minimalists look to economics for norms and practices that obviate official discretion. The core economic norm is efficiency, which prescribes that resources be invested so that at the margin their return is equal and that duties be assigned to those who can perform them most cheaply. Basic economics suggests that standard markets will achieve both tasks in the presence of certain pre-conditions. The Minimalists insist that there is nothing inherently conservative about such ideas so long as we take account of the ways in which the preconditions to market efficiency are absent (and so long as efficiency is not permitted to displace distributive and dignitary values).11

The Minimalists would constrain policy-makers at the top through rigorous cost-benefit analysis.

8 Bruce Ackerman and William Hassler, Clean Coal/Dirty Air (1981); Jerry Mashaw, Bureaucratic Justice, cited in note .

9 Ackerman makes an exception for “constitutional moments” of heightened popular mobilization and deliberation. “The Living Constitution,” cited in note . He is also open to the idea of occasional referenda on specific issues, and he likes the idea of quadrennial “deliberation days” in which citizens can discuss and make recommendations on key issues to elected leaders. Bruce Ackerman and James Fishkin, Deliberation Day (2004). However, he uses the “constitutional moment” idea more to explain the authority of past events than to inspire mobilization over current issues. And he sees direct deliberation as rare and as engaging issues only selectively or at a high level of abstraction.

10 Cass Sunstein, “Group Judgments: Statistical Means, Deliberation, and Information Markets”, 75 New York University Law Review 696, 732-38 (2005).

11 Bruce Ackerman, “Law in the Activist State,” 92 Yale Law Journal 1083 (1983).


They support the presidential orders that have required agencies to engage in cost-benefit analysis before promulgating rules (though they deplore the tendentious misuse of such analysis in the service of political goals). They have drawn on and contributed to a large literature that tries to give structure to such analysis, in particular, by showing how regulatory benefits such as greater health, longer life, and lower mortality can be taken into account systematically.12 In welfare programs, they propose a “social cost accounting” that balances the benefits of individualized treatment and adjudicatory process against its costs. The cost of providing trial-type hearings should be weighed against the magnitude of the cost of wrongful deprivation of the benefit in question. In setting substantive standards, administrators should choose between rigid rules and contextual standards so as to maximize the social benefits of “transparency”, “accessibility” (ease of application), and “congruence” (with underlying purposes).13

At the level of implementation, the Minimalists have a preference, where feasible, that regulatory programs take a form that complements or induces markets. The core examples are tradeable emissions permits and school or housing vouchers. Reforms of this kind promise two benefits. First, they incorporate the optimizing tendencies of markets. Second, they reduce the number and complexity of the program’s norms, thus cabining discretion.

“Cap-and-trade” regimes respond to defects in the more “command-and-control” approach of the Clean Air Act. Under the Act, polluters have been obligated to reduce emissions to the level of the “best available” mitigation technology. Such abatement is inefficient, however, because it imposes uniform duties that disregard large variations in the compliance costs of emitters. Moreover, determining the “best available” mitigation technology for myriad highly technical functions is just the sort of task to which bureaucracies are unsuited, especially since the information needed for sound decisions is held by private firms with many incentives to withhold it from regulators.

12 Cass Sunstein, Risk and Reason: Safety, Law, and the Environment (2002); Richard Revesz and Michael A. Livermore, Retaking Rationality 131-43 (2008).

13 Jerry Mashaw, “Administrative Due Process as Social Cost Accounting”, 9 Hofstra L. Rev. 1423, 1436 (1981). The parallels between Mashaw on the one hand and Sunstein and Ackerman on the other are clearer in Mashaw’s earlier work we cite below than in his more ambivalent recent work. See Jerry Mashaw, “Accountability and Institutional Design: Some Thoughts on the Grammar of Governance”, in Public Accountability: Designs, Dilemmas, Experiences (Michael Dowdle ed. 2006).


Instead of attempting to define feasible technical solutions, the Minimalists propose that the regulator issue permits authorizing emissions of specified pollutants in socially acceptable aggregate amounts and then compel emitters to buy such permits in a market where they can be freely traded. Firms that can abate emissions cheaply do so, and those with high costs buy the permits. The market thus allocates the burden of a prescribed degree of abatement to the cheapest cost avoiders with a precision that no bureaucracy could achieve directly.14

In social welfare, the Minimalists support school and housing vouchers in which beneficiaries choose for themselves from among a range of providers certified as meeting basic general standards. The beneficiary receives a voucher that she can exchange for housing or schooling, and the chosen provider, on surrendering the voucher, receives a specified payment from the government. As emissions permits contribute to optimization on the cost side by allocating duties to the cheapest cost avoider, vouchers do so on the benefit side by providing the beneficiary with the housing or schooling she deems most valuable. Moreover, in principle, vouchers put pressure on providers, making them more accountable to beneficiaries. If the beneficiaries are not satisfied, they go elsewhere and the providers lose the payments.15

At their most ambitious, Minimalists would replace much of the existing welfare state with what Ackerman and Anne Alstott call a “stakeholder” regime in which each citizen, on turning eighteen, receives a substantial one-time endowment that can be used in whatever ways she decides – “to start a business or pay for more education, to buy a house or raise a family or save for the future.” The goal of this universal voucher is to provide “a fair starting point for all” while protecting individuals from bureaucratic interference and eliminating the work disincentive effects of means-testing. “Stakeholders are free to make their own decisions… [but their] triumphs and blunders are their own.”16

Where markets already exist and the problem is people’s limited capacity for effective choice, Sunstein has urged that interventions should often be limited to “nudges” designed narrowly to compensate for the relevant cognitive impairments.

14 Bruce Ackerman and Richard Stewart, “Reforming Environmental Law”, 37 Stan. L. Rev. 1333 (1985).

15 Goodwin Liu, “Education, Equality, and National Citizenship”, 116 Yale L. J. 330, 403 (2006).

16 Ackerman and Alstott, cited in note . See also Michael J. Graetz and Jerry L. Mashaw, True Security 149-51 (1999) (proposing national programs of “cash income replacements or supplements, mandated individual savings, and certain forms of stop-loss insurance” that avoid “highly individualized discretion and decision making”).


These micro-interventions involve altering a default rule or attending to “choice architecture” by providing information to consumers. So, for example, if people appear to be saving too little (as indicated by post-retirement poverty or expressions of regret), the solution is not to mandate savings, but to change the default rule for tax-subsidized employer-run savings plans. The original default rule provided that the employer direct a portion of compensation to savings if the employee requested; the new rule would require the employer to contribute to savings unless the employee requests otherwise. By influencing without pre-empting choice, “nudges” protect impaired choosers from their mistakes, while maximizing autonomy and avoiding error costs by leaving a significant range for choice.17

Where administrative judgments are unavoidable, the Minimalists prefer to limit discretion with clear and easily monitored rules. A key example here is the “vocational” determination in the Social Security Disability programs. Eligibility requires a finding that the applicant is incapable of performing any jobs that exist in significant numbers. For decades, the decisions were made on the basis of all-things-considered judgments. But “one disability examiner's ‘broken down manual laborer’ is another's ‘malingerer’.”18 The system of appeals to administrative law judges simply reproduced the problem at a second level. The Minimalist solution is a grid that dictates a conclusion on the basis of a series of straightforward findings about age, education, physical capacity, and work experience. The problem of “street level bureaucracy” is avoided by keeping the bureaucrats off the streets.19

B. Limitations: Regulation

We turn now to our reservations about Minimalism, beginning with the Minimalist treatment of regulation.

1. The Efficiency Perspective. The main preoccupation of Minimalist regulatory scholarship is efficiency or optimization. Yet, efficiency is not the only and often not the most important regulatory value.

17 Richard Thaler and Cass Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness (2008). Influenced by Sunstein, President Obama proposed to mandate that large employers automatically enroll employees in retirement savings programs, and that requirement was incorporated in the Patient Protection and Affordable Care Act of 2010, Pub. L. No. 111-148, sec. 1511.

18 Jerry Mashaw, cited in note , at 1436.

19 Summarizing Mashaw’s view, A.I. Ogus writes: “Bureaucratic rationality involves accurate and cost-effective implementation of centrally determined goals; it avoids direct confrontations with the claimant, concentrating instead on the application of detailed rules and guidelines with internal systems of management and control to insure consistency.” Review, 7 Oxford Journal of Legal Studies 389 (1987).


Management theorists often espouse a different value, which they suggest is sometimes in tension with efficiency. “Reliability” is one name for this norm.20 It connotes a capacity for learning and innovation, or more specifically, for prompt recognition of and adaptation to changing threats and opportunities. We do not doubt that the Minimalists would acknowledge the importance of reliability or that reliability could be subsumed under a thick definition of efficiency. However, as the efficiency idea is thickened to embrace reliability concerns, it loses some of the precision and rigor that Minimalists see as its core virtues. In practice, the Minimalists seem to deploy a static conception of efficiency that risks slighting reliability concerns.

Reliability entails responsiveness, not just to strong signals like prices, but to “weak signals”, such as small anomalies or deviances. Consider an incident at the Davis-Besse nuclear power plant near Toledo in the early 2000s. Maintenance personnel found rust particles “mysteriously clogging” the air conditioning and water filters. They had to replace the filters every two days. The industry standard for replacing such filters was two months, although the workers involved did not know this. After every-other-day replacement went on for two years, different personnel discovered that a metal liner built to contain radioactive material had corroded from its original 6 ½-inch thickness to about a half inch. The corrosion had been generating the rust particles. Karl Weick and Kathleen Sutcliffe assert:

The rust accumulation was a weak signal of plantwide problems that would have been detected sooner had information about industry wide experience been disseminated more thoroughly, had local personnel compared this experience with filter replacement in other parts of the facility, had the purchasing department questioned the large orders for filter replacements, or simply if people had started asking around whether replacement every forty-eight hours seemed out of line.21

Weak signals may not be visible in market prices. They differ from prices and other strong signals in being more numerous and diverse, and in requiring deliberative consideration and complex judgment.

20 Martin Landau and Donald Chisholm, “The Arrogance of Optimism: Notes on Failure-Avoidance Management”, 3 J. Contingencies & Crisis Mgmt. 67 (1995).

21 Kathleen Sutcliffe and Karl Weick, Managing the Unexpected: Resilient Performance in an Age of Uncertainty (2007).


The best-known examples of organizations focused on detecting and assessing “weak signals” are “high reliability” organizations in areas such as aviation or energy production, where operational breakdown threatens catastrophe, and “lean production” manufacturing firms, where competitive pressures arising from short product cycles and customer demand for specialized features call for the kind of discipline that the threat of catastrophe imposes on “high reliability” organizations. Perhaps the most influential examples have been the U.S. Navy’s nuclear submarine program and the product development and manufacturing model associated with Toyota.22 The Navy’s Nuclear Propulsion program developed an ethos and a set of practices that treated even minor unexpected observations as symptoms of potential systemic malfunction calling for inquiry and possibly reform. The Toyota Production System deliberately stresses its processes (for example, by “just in time” parts delivery that eliminates buffer inventory stocks) so that errors will surface quickly. It eliminates end-of-the-line re-work departments in favor of practices that stop the line when defects are discovered and require that the “root cause” of the problem be diagnosed and remedied before production resumes.

While there is no tension in principle between efficiency and reliability, many have observed that efficiency rhetoric can in practice inhibit the pursuit of reliability. First, efficiency is associated with a preference for simple and few norms and hence crude categories. Such preferences encourage a tendency to “normalize the unexpected,” to re-frame deviant observations in ways that assimilate them to previous understanding, rather than to treat them as opportunities to revise such understanding. Burn marks on the Challenger space shuttle and the rust particles at the Davis-Besse nuclear plant were initially perceived as anomalies, but in the absence of pressure to focus on them, were reconceived as normal. Arguing against “simplification”, Weick and Sutcliffe assert, “the diagnostic value of weak signals is lost when details are lumped into crude general categories.” They continue, “Categories may improve coordination [i.e., static efficiency], but they harm detection of events not seen before.”23

Second, the efficiency view is associated with a preoccupation with short-term cost minimization.

22 Steven J. Spear, Chasing the Rabbit: How Market Leaders Outdistance the Competition and How Great Companies Can Catch Up and Win (2009); Charles F. Sabel, “A Real Time Revolution in Routines”, in The Firm as a Collaborative Community (Paul Adler and Charles Heckscher eds. 2006).

23 Sutcliffe and Weick, cited in note , at ; Diane Vaughan, The Challenger Launch Decision (1996).


The pursuit of reliability requires incurring immediate and measurable cost in the hope of speculative future benefits. Those who have been taught to focus on efficiency sometimes find this hard to do. Reporting on the transformation of nuclear power safety in the 1990s, Joseph Rees writes that, under the old regime, “maximizing the output of electricity meant fixing the reactor after it breaks.” Conventionally trained managers resisted supporting “operations review” engineers who sought to prevent all breakdowns because the benefits of their work could not be measured. The managers were uncomfortable with committing resources to “solve problems that haven’t even occurred yet.”24 Books on Toyota-style organization assert incessantly that, while optimization and reliability ultimately go together, managers need to defer or sublimate optimization concerns and focus on reliability to achieve success.25

Where markets fail, it is sometimes more important for regulation to strive to induce reliability than efficiency (in the static cost-minimization sense that the term often connotes). The regulators need to develop capacities for learning and adaptation in their own systems, and they need to encourage the firms they oversee to cultivate such capacities in a manner that gives appropriate weight to relevant public values.26

24 Joseph Rees, Hostages of Each Other: The Transformation of Nuclear Safety Since Three Mile Island (1994).

25 “Japanese managers have found that seeking improvement for improvement’s sake is the surest way to strengthen the company’s overall competitiveness. If you take care of quality, the profits will take care of themselves.” Masaaki Imai, Kaizen: The Key to Japan’s Competitive Success, 49-50 (1986). “[T]he manufacturing world learned [from firms like Toyota that] [i]f one focused on improving quality, then cost competitiveness would follow.” Rajan Suri, Quick Response Manufacturing, 55 (1998). Neoclassical economics encourages the hope that, if the administrator gets the prices right, the regulated firms will spontaneously develop whatever skills are necessary to succeed in the re-optimized market. But other economic learning shows that we cannot assume that this will happen. Key actors may be able to externalize the costs imposed by the regulator. Notably, corporate agents can shift costs to shareholders, and shareholders, with limited liability, can shift them back to the public. Moreover, as we will illustrate shortly, the learning that occurs through the analysis of weak signals has a public good quality. It is valuable to others, but it is usually not feasible for the discoverer to charge for it. (Information comes in small pieces of initially uncertain value, and it becomes obsolete quickly.) Under the circumstances, there is a substantial probability that, without further public intervention, private investment in the capacity to detect and interpret weak signals will be sub-optimal.

26 Some analyses of the financial crisis that began in 2007 associate regulatory failure with a preoccupation with static efficiency at the expense of reliability. E.g., U.K. Financial Services Authority, The Turner Review: A Regulatory Response to the Global Banking Crisis 41 (2009); William H. Simon, “Optimality and Its Discontents in Regulatory Design: The Example of Bank Regulation”, 4 Regulation and Governance 3 (2010).


When goals are framed in terms of catastrophe-avoidance (as with “high reliability” organizations), the need for reliability is obvious, but reliability concerns are also potentially relevant where the relevant harms are more diffuse and frequent. Criminal justice is an important borderline case. Minimalism resonates with the law-and-economics perspective that directs attention to “getting prices right” by correctly calculating penalties. This view emphasizes cost-benefit analysis to approximate penalties to the social value of the relevant harm (adjusted to reflect probability of apprehension and conviction). A competing perspective argues for more attention to and resources for prevention. In this latter view, we should have fewer prisons and more police, and we should allocate policing strategically. We can do this by generating visible police presence in areas where crime is likely, or by strictly enforcing against small offenses (e.g., “quality of life” offenses) that lead to larger ones, or by targeting especially dangerous individuals or groups. The focus here is not on prices or penalties, but on “weak signals” from conditions that are not grave in themselves but are symptomatic of danger. The approach calls for collecting information and coordinating the efforts of public agencies and private organizations to interpret it and to rapidly adjust responses to it.

The cost-benefit approach dominated discussion of criminal justice in the 1980s and 1990s. It was, however, challenged in the 1990s by proposals for “community policing” regimes that were adopted with occasional success in many cities. 9/11 made this “weak signal” reliability perspective more salient. Because it involved catastrophic harm, it shifted attention to prevention, and because the critical performance failures involved sharing and analysis of ambiguous information, initiatives emphasized the importance of the capacity for adaptive preventive response rather than for efficient penalty calibration.27

2. Minimalist Intervention. Both of the Minimalists’ favorite regulatory tools – cost-benefit analysis and tradeable emissions permits – reflect a preoccupation with static efficiency. Both tools are valuable but not as valuable as the Minimalists suggest, and their efforts may have obscured or distracted attention from some important issues.

a. Cost-benefit analysis. Any sensible regulatory proposal should consider costs.

27 Daniel Richman, “The Right Fight”, Boston Review (Winter 2004); William Stuntz, “The Political Constitution of Criminal Justice”, 119 Harv. L. Rev. 780 (2006).


And it is often a valuable heuristic exercise to compare the costs with the benefits of a particular proposal, or the relative return on public expenditures from different proposals, even if the exercise requires controversial and speculative estimations. It is useful to know that the cost per life saved of the EPA’s Trihalomethane drinking water standard is $200,000, while the cost per life saved of its 1,2-Dichloropropane standard is $653 million. The comparison, even if plausibly calculated, does not establish that either standard is mistaken, but it gives a reason to revisit both.

Nevertheless, the potential value of cost-benefit analysis is limited by its focus, at least in practice, on static factors, not the capacity for learning and adaptation. More specifically:

First, the proponents have been preoccupied with methodological issues concerning the estimation of benefits (for example, the value of a life or of different kinds of lives or the right discount rate for future lives), whereas, in practice, conclusions will often be dominated by estimates of future costs that will necessarily be highly speculative. Richard Revesz and Michael Livermore note a persistent tendency to under-estimate cost-saving innovation and thus to overestimate costs.28 They give examples of estimates that proved to be wrong by factors as high as 10. Methodological rigor makes only a slight contribution when the inputs are so soft.

Second, the proponents tend to defend cost-benefit analysis in the abstract without much attention to how it might be institutionalized. The value of cost-benefit analysis as intellectual discipline does not imply any particular conclusions about institutional structure. To the extent that analysis involves comparison of costs and benefits across different regulations and agencies, it suggests a central role in collecting and disseminating data, but this central role could be a subordinate one performed without any loss of authority to the agencies. Or it could involve a coordination and oversight role played with varying degrees of strictness. In practice, however, the principal effect of cost-benefit analysis in the federal government in recent years seems to have been to rationalize a shift in authority from the agencies to the White House. This shift has at least occasionally involved heavy-handed ad hoc central intervention, sometimes rationalized by tendentious and ideologically-driven estimations of costs and benefits. It also appears to have involved significant diminution in the transparency of the regulatory process, since the White House regulatory offices are less transparent than the agencies.29

28 Richard Revesz and Michael A. Livermore, cited in note , at 131-43.

29 Peter Strauss, “Overseer, or ‘The Decider’?: The President in Administrative Law”, 75 Geo. Wash. L. Rev. 696 (2007); Amy Sinden, Douglas A. Kysar, and David Driesen, “Cost-Benefit Analysis: New Foundations on Shifting Sand”, 3 Reg. & Govern. 48 (2009).


No doubt the Minimalists regret both of these developments, which are not entailed by cost-benefit analysis in principle. But the developments show the limitations of a literature preoccupied with analytical methodology rather than institutionalization.

b. Cap-and-trade. The most developed regulatory proposal in the Minimalist literature is the cap-and-trade emissions permit system, which epitomizes the Minimalist virtues of static efficiency (allocating burdens to the cheapest cost avoider) and of minimizing discretion.30 Although experience to date is mixed, the emissions trading idea seems likely to prove an important contribution. But its range and its value are more limited than the Minimalists suggest. In particular, four limitations should be noted:

First, the cap-and-trade approach only works with harms that can be effectively measured.31 Although the environmental harms of air pollution cannot be directly measured, they correlate sufficiently with the amount of emissions of specific pollutants, and for some sources and pollutants, we can measure that. However, for most regulatory goals, we have no comparably powerful proxies. Regulatory regimes are usually prophylactic; they are designed to minimize risk. But with, for example, food safety, occupational health and safety, or financial institution soundness, there is no single powerful indicator of risk. In such situations, we are forced to rely on “weak signals” – a multifarious range of indicators that require interpretation and complex judgment and do not lend themselves to pricing.32

30 Ackerman and Stewart, cited in note .

31 Cary Coglianese and David Lazer, “Management-Based Regulation: Prescribing Private Management to Achieve Public Goals,” 37 Law & Society Review 691 (2003).

32 Of course, measurement needs to be economically as well as technically feasible. Moreover, trading requires an infrastructure (for example, to identify owners and record trades) and entails transaction costs. These pre-requisites will limit the application of the model to actors of more than minimal size or abatements of more than minimal social value. The thresholds are met with respect to the kind of large power-generating and manufacturing facilities covered by current regimes. However, to the extent that the relevant problems involve smaller, more diffuse activities, they may not be. For example, the model seems unpromising for the increasingly salient problems of emissions from “non-point sources,” such as farm water pesticide run-off. See Bradley Karkkainen, “Information-Forcing Regulation and Environmental Governance,” in de Burca and Scott, cited in note .


Second, even where adequate proxies are available, the information demands of cap-and-trade remain large and sometimes disqualifying. The regulator still has to set an aggregate quantity limit (or, in the tax alternative, a price) for each pollutant. Since the efficacy of the regime depends on this single number, the stakes are extremely high. Yet both the methodology and the data for this determination are likely to be incomplete and disputed. The regulator needs to estimate, not only the immediate effects of abatement of the target pollutant, but also the collateral costs and benefits of behavior induced by the abatement incentives: costs, for example, of unregulated but still harmful pollutants that might be substituted for the regulated ones; benefits, for example, of unregulated but harmful pollutants that might be abated by the same efforts required to abate the regulated ones. Moreover, while the market regime obviates determinations of relative abatement costs among emitters, it requires knowledge of aggregate costs. To determine the efficient price, we need to know the point at which marginal aggregate costs equal marginal aggregate benefits. Yet, such determinations are notoriously difficult.
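The point can be stated compactly; the notation below is our own gloss on the standard economic account and does not appear in the sources under discussion. Let $c_i(a_i)$ denote firm $i$'s cost of abating $a_i$ units of a pollutant. Given a cap requiring aggregate abatement $A$, a competitive permit market clears at a price $p$ at which each firm abates exactly to the point where its marginal cost equals the price:

$$c_i'(a_i^{*}) = p \quad \text{for every firm } i, \qquad \sum_i a_i^{*} = A.$$

This is the sense in which trading allocates burdens to the cheapest cost avoiders without the regulator knowing any individual cost function. But choosing the cap itself requires locating the aggregate optimum

$$MB(A^{*}) = MC(A^{*}), \qquad MC(A) = \min_{a_1,\dots,a_n} \Big\{ \sum_i c_i(a_i) : \sum_i a_i = A \Big\},$$

so the regulator escapes estimating relative abatement costs only by assuming the harder task of estimating aggregate marginal costs and benefits – the “single number” on which the whole regime depends.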

It is no surprise, then, that the few examples to date of operating cap-and-trade systems include some spectacular examples of getting quantities, and hence prices, wrong. In 2000, quantities proved too low to accommodate an unanticipated surge in demand in the South Coast Air Quality Management District trading regime in California. Prices for nitrogen oxide emissions soared from $4,300 per ton to $45,000 per ton.33 The District had to suspend trading and issue new permits to prevent exacerbation of a statewide energy crisis. Another crisis emerged with the inauguration of the European Union’s greenhouse gas emissions trading regime. Permit prices collapsed in 2006, when it became apparent that the regulators had over-estimated current “business as usual” emissions, and thus set an aggregate target that was too high, resulting in the distribution of too many permits.34

It is true that market-simulating reforms like cap-and-trade will often require less information and complex judgment than comparable command-and-control regimes. However, this advantage comes at a cost. There is a risk that market simulation will involve a sacrifice in flexibility, in particular, the capacity to revise in response to new information.

33 Rei Hirose and Raul Lejano, “Testing the Assumptions Behind Emissions Trading in Non-Market Goods: The RECLAIM Program in Southern California”, 8 Environ. Sci. & Policy 367 (2005).

34 Heather Timmons, “Data Leaks Shake Up Carbon Prices”, New York Times (May 16, 2006).


Part of the market-simulation idea is to induce private investment by creating marketable entitlements. Investors will claim that they need and deserve some measure of stability with respect to the key parameter of price or quantity. Thus, permits may be good for several years – for example, five in the EU – and can sometimes be “banked” for use beyond that period. The regimes typically place limits on the ability of regulators to dilute the value of the permits by new issuances before the end of a period. Regulatory tightening is apt to trigger protest under any regime, but by encouraging emitters to view the current limits as an entitlement for a fixed period, cap-and-trade may exacerbate the problem.

The third limitation on market-simulating regimes is that, while they are sensitive to variations in abatement cost among emitters, they are insensitive to variations in abatement benefits across the localities where emissions occur. At least in the paradigmatic model, there is a single price for emissions throughout the system. Yet, the harm emissions cause usually varies locally with such contingencies as population density, aggregate emissions levels, and land use patterns. The marginal harm in a heavily polluted urban area is often much higher than in a sparsely populated area with low aggregate emission levels. Moreover, market regimes may encourage the creation of “hot spots.” These occur where polluters minimize compliance costs by clustering in low-cost areas (areas with low land costs and limited local taxes and regulation). When this happens, pollution burdens concentrate in particular localities. It is possible that these localities will disproportionately include low-income and otherwise disadvantaged people. The “hot spot” problem does not apply to greenhouse gases because their effect operates at the global level, and emissions from any point contribute equally. However, for most pollutants, location matters.35

35 While the global nature of greenhouse gas effects on temperatures exempts them from the “hot spot” problem, it creates a “tragedy of the commons” problem. Only global efforts, or at least efforts including all major emitting nations, are likely to be strongly effective. Yet, it has proven difficult to induce major emitters, especially India and China, to participate substantially. Two responses show the limits of the market approach: 1. The Kyoto Protocol and other regimes allow emitters within the regime to earn credits by subsidizing projects in non-participating countries that reduce emissions (or increase gas absorption) beyond what would have happened in the absence of the project. A major fraction of the reductions credited to Kyoto have involved such efforts. 2. Some observers propose a focus on negotiated deals for large infrastructure projects in which rich countries trade economic benefits for developing-country commitments that favor low-emission energy production (natural gas and nuclear power rather than coal and oil). A key point about both approaches is that they cannot operate primarily through self-adjusting price signals and self-interested trading. They require complex administrative capacity. Failure to appreciate this point accounts for the early disappointments of the Kyoto Clean Development Mechanism. It appears that a large fraction of the credits certified under the regime involve abatements that would have occurred without the regime or could have been induced at a much lower cost than the regime paid. A core problem is that the regime has encouraged private institutional development that has outpaced the development of public institutional capacity to restrain abuses. See Michael Wara, “Is the Global Carbon Market Working?”, 445 Nature 595 (2007); and, more generally, J.B. Ruhl and James Salzman, “Currencies and the Commodification of Environmental Law”, 53 Stanford Law Review 607 (2000).


Trading regimes can address the localization problem by sub-dividing markets. Thus, in the South Coast Air Quality Management District, permits sell at different prices in coastal and inland zones. Yet such sub-division multiplies the number of judgments and the amount of information required.36

Fourth, it is not clear how powerful the incentives for the production and dissemination of innovative technology created by trading systems will be. To the extent that innovations take the form of major capital goods, especially with patentable features, they will be marketed and adopted where likely to be profitable. But many innovations take the form of incremental improvements or involve adaptation of technologies to particular local conditions. Without public efforts, small incremental improvements may not be disseminated because of transaction costs. Moreover, technology may not be usable without significant customization, and small and medium-size producers may lack access to necessary technical assistance.37 It is possible that trade associations or business consultants can fill this gap, but it is not inevitable.

36 In their early proposal, Ackerman and Stewart, cited in note , proposed extensive sub-division. Jonathan Nash and Richard Revesz speculated on the possibility of a massive computer program aggregating information about local climate effects that would continuously generate local prices for emissions of controlled pollutants. “Markets and Geography: Designing Marketable Permit Schemes to Control Local and Regional Pollutants”, 28 Ecology Law Quarterly 569 (2000). It seems ironic that a discussion that started out with the goal of minimizing the information demands on regulators should lead to a proposal that resembles nothing so much as Soviet central planning.

37 Eric von Hippel, Democratizing Innovation 67 (2008) (explaining that the costs of transferring even “fully developed” process technology to new settings can be high).


The example of the U.S. Agricultural Extension Service shows that publicly-subsidized technical assistance to small and medium producers may be needed to induce adoption of socially desirable practices and to protect such producers from the competitive disadvantage that they might suffer vis-à-vis larger producers under regulations that require technologically complex responses.38

C. Limitations: Social Welfare

The Minimalist welfare program of nudges (manipulation of default rules and choice architecture), norm simplification, and vouchers is likely to prove helpful with respect to some problems. But the approach seems implausible or question-begging with respect to many of the most important problems. It is unresponsive to some salient trends in welfare policy, and the most promising recent innovations have taken a much different approach.

1. The Changing Policy Context. The New Deal welfare system was focused on cash transfers under categorical rules. Its central programs were modeled on private insurance, and they reflected an actuarial conception of risk. The basic reference point was a typical worker: a male breadwinner working for 40 years, after an initial period of searching, in the same job or a succession of similar jobs in the same industry. His career would be interrupted by spells of unemployment, but these spells would correlate with systematically calculable phases of the business cycle. The Social Security Act created programs of wage-related benefits for this worker and his dependents during bouts of unemployment, after retirement, in case of disability, and for his surviving dependents after his death. A parallel program of means-tested public assistance benefits for people who could not work and lacked sufficient employment history to qualify for social insurance was expected to wither away as the economy recovered from the depression and breadwinners built up their social insurance entitlements.

Of course, the public assistance programs did not wither away. The problems that they addressed persisted and grew even in periods that approached full employment.

38 On the extension service, see note below. Atul Gawande has argued that in health care, “[n]ew laboratory science is not the key to saving lives. The infant science of improving performance – of implementing our existing know-how – is.” A new drug with a fraction of the effectiveness of his simple but counter-intuitive surgery protocol would have attracted a vast amount of private capital, but developing and demonstrating the “checklist” required public support. The Checklist Manifesto 242-43 (2010).


This basic development has required extensive rethinking of these programs and their relation to the social insurance ones. An important body of Minimalist scholarship has struggled with this task. As we’ve noted, this scholarship has sought to preserve the New Deal emphasis on standardized, rule-defined cash benefits while broadening the scope of both the social insurance and public assistance programs. These efforts, however, have not taken full account of changes in the social context that underlay the New Deal commitment to standardized, rule-defined benefits, and they have largely ignored the most distinctive policy responses to them.

The key social changes concern increased diversity and volatility. The distinction between labor market participants and nonparticipants has eroded. People whom the New Deal model considered normally outside the labor market – the elderly, disabled, housewives, single mothers – are now thought to have a right and/or a duty to participate effectively. Long-term careers in the same job or industry are less common; the skills that jobs demand vary more widely and change more rapidly than before. Long-term marriage and two-parent child rearing, which the New Deal took as normal, have diminished, and family structures have become more diverse. The social backgrounds and linguistic capacities of citizens vary increasingly. Some social problems, including many involving mental health and substance abuse, that were not prominent in the New Deal discussions are highly salient today. And the welfare system has gradually assumed responsibility for health care under circumstances where both relevant technology and needs are changing rapidly.

The most distinctive policy innovations induced by these developments focus on capacitation rather than maintenance or compensation. Where actuarial risk-pooling breaks down in an uncertain world, the welfare state strives to equip people with skills to deal with the disruptions they face, whether in the market or the family. The services of the new programs are typically customized and bundled across disciplines and problems. For example, employment-related services may be combined with family support services, or family support services may combine education, mental health, and medical services. At the same time, persistent and labile problems call for continuing and adaptive intervention. Responding to unemployment will often require, not just temporary income support, but reskilling to respond to changing configurations of job opportunities. Chronic educational and psychological problems – for example, substance abuse, family dysfunction, and learning disabilities – require recurring intervention that takes account of the likelihood of relapse.


The design of services reflects changes that have occurred in professional disciplines. One is the greater capacity to diagnose and respond to idiosyncrasy. For example, in medicine, genetic indicators permit the customization of drugs to particular patients. More mundanely but at the moment more importantly, effective treatment for many behavior-dependent diseases such as diabetes is now understood to include the construction and monitoring of an individualized plan that covers medication, diet, and activity.39 Or to take another example: educators have recently come to understand that the debate between “phonics” and “whole language” approaches to reading was misconceived. Each approach has techniques that are effective with some students. The teacher should not choose wholesale between them but should assess each student to determine the combination of interventions most helpful to him or her.40

Reforms in most key areas of social welfare, including education, child welfare, disability rights, and mental health, have reflected these trends in professional practice. They typically require individual assessment and customization of services; complex, heavily diagnostic judgment; and ongoing re-assessment and adaptation.41 The Minimalists have not taken account of these developments.

2. Minimalist Intervention. We can assess the limitations of welfare Minimalism in terms of the two most characteristic Minimalist projects in this area: norm simplification and vouchers or stakeholder funds.

a. Norm simplification. There is a plausible role for programs based on simple norms that obviate pervasive complex judgment.

39 Clayton Christensen et al., The Innovator’s Prescription: A Disruptive Solution for Health Care 56-61 (2008).

40 Diane Schemo, “Federal-Local Clash Over Teaching of Reading”, New York Times (March 9, 2007).

41 Statutes requiring either individual service plans or “reasonable accommodation” include: Education for All Handicapped Children Act of 1975 (“individualized education program”); Individuals with Disabilities Education Act of 1990 (“individualized education program”); Adoption Assistance and Child Welfare Act of 1980, P.L. 96-272, sec. 101(a)(1), 42 U.S.C. 675(1) (“case plan” for children in state custody); Developmental Disabilities Act of 1984, P.L. 98-527, sec. 123(a) (“individualized habilitation plan”) [since removed]; Personal Responsibility and Work Opportunity Act of 1996, P.L. 104-193 (“individual responsibility plan” for public assistance recipients); Americans with Disabilities Act of 1990 (“reasonable accommodation” for disabled workers); Fair Housing Act Amendments of 1988, Pub. L. 100-430, sec. 6(a), 42 U.S.C. 3604(f)(3)(B) (“reasonable accommodation” of handicapped residential housing purchasers).


In his Minimalist critique of Experimentalism, David Super mentions as models of the kinds of programs he favors Food Stamps and Supplemental Security Income (SSI).42 These programs provide easily defined benefits (food vouchers or cash) to broadly defined categories of indigent people – “households” (virtually any person or group that lives outside an institution and prepares its own meals) for Food Stamps, aged and disabled people for SSI. If we put aside the complicated issues associated with disability determination in SSI, they are based on relatively straightforward rules, and as far as they go, they are successful.

The problem is that these programs do not go very far. They can get by with uniform benefits and simple rules because each addresses only a single and simple need – food or money – and they avoid making judgments at the margin by under-satisfying the need. The programs provide benefits only to the most disadvantaged, and the benefits they provide are meager. Relative to the broader purposes of poverty relief, the programs are spectacularly under-inclusive. If Food Stamps and SSI constituted the whole of America’s response to poverty and social need, they would be reviled as abominations. They can only be considered successful as a component of a broader array of programs that provide benefits related to health, education, job training, housing, and other matters. These benefits typically take the form of services, and their providers increasingly aspire to the complex customization that we noted above.

The simplification project in the welfare area to which scholars have devoted the most attention is the vocational “grid” regulation in the Social Security Disability programs. These programs condition eligibility on “permanent and total disability”, which entails a finding that there is no substantial number of jobs that a person in the applicant’s medical condition could perform. For decades this “vocational assessment” was made through an all-things-considered judgment, first by a vocational expert, and second, if there was an appeal, by an administrative law judge. The process was costly, and the resulting judgments were wildly inconsistent. The response was to replace expert judgment in a broad range of cases with a “grid” consisting of a set of rules that dictate eligibility conclusions on the basis of a small number of easily determinable facts about age, physical capacity, education, and work experience. Minimalists have supported the grid as an efficient trade-off of “congruence” (fit with substantive purposes) for “accessibility” (less expensive process and more formal consistency).
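The mechanical character of such a grid can be sketched in a few lines of code. This is a hypothetical illustration only: the rules, categories, and cutoffs below are invented and do not reproduce the actual medical-vocational guidelines (codified at 20 C.F.R. pt. 404, subpt. P, app. 2). The point is simply that the outcome follows from a handful of coded findings by lookup, with no residual judgment.

from typing import NamedTuple

class Findings(NamedTuple):
    age: int                   # claimant's age in years
    literate: bool             # crude proxy for education
    max_exertion: str          # residual capacity: "sedentary", "light", or "medium"
    transferable_skills: bool  # whether past work yields skills usable in other jobs

def grid_decision(f: Findings) -> str:
    """Decide eligibility by table lookup rather than all-things-considered judgment."""
    # Hypothetical rule: advanced age, sedentary capacity, no transferable skills.
    if f.age >= 55 and f.max_exertion == "sedentary" and not f.transferable_skills:
        return "disabled"
    # Hypothetical rule: approaching advanced age, illiterate, limited to sedentary work.
    if 50 <= f.age < 55 and not f.literate and f.max_exertion == "sedentary":
        return "disabled"
    # Every remaining combination of findings is deemed not disabled.
    return "not disabled"

# A claimant one year short of a cutoff is denied however compelling the
# individual circumstances; the hard-coded boundaries are where the
# under-inclusion discussed below becomes visible.
print(grid_decision(Findings(54, True, "sedentary", False)))  # not disabled
print(grid_decision(Findings(55, True, "sedentary", False)))  # disabled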

42 Super, cited in note , at 609.


the grid is that its simplicity makes for under-inclusion. There are some cases that would be deemed eligible under an all-things-considered judgment for which the grid denies eligibility. The Minimalists respond that perfect justice is impossible and that the quest for it must yield to resource constraints. Moreover, they suggest that the costs of under- and over-inclusion will be relatively small if, as appears, the relevant cases are borderline or marginal. To the extent that incorrectly denied cases resemble correctly denied ones and incorrectly accepted claims resemble correctly accepted ones, the social costs of error are likely to be relatively low.43

Two limitations of this line of analysis are becoming salient. First, the grid approach is out of touch with changes in the social understanding of disability. When disability programs were added to the Social Security regime in 1950, it was generally assumed that severely incapacitated people were necessarily and appropriately excluded from the workforce. Moreover, the idea of "permanent and total disability" was considered an observable, self-defining category. But these notions have been discredited as both normative and empirical matters. Normatively, the inclusion of disabled people in the broadest feasible range of social activities has become a central goal, one that is sometimes given precedence over compensation. Empirically, we have discovered that a combination of rehabilitation services and "reasonable accommodation" at the workplace can enable a large fraction of those once considered incapable of significant work to perform a broad range of jobs. Thus, a variety of programs for disabled people provide or mandate customized services or workplace accommodation.44 It is thus no surprise that, as Jerry Mashaw reports, many urge that the Social Security programs move away from their compensation focus and their categorical norms toward "a community-based and multidisciplinary approach that would deploy financial assistance, medical care, rehabilitation, and transportation services, among other things, to promote the overall well-being and highest possible functioning of disability beneficiaries." This revised approach, Mashaw emphasizes, "would demand highly discretionary judgments."45

43 Jerry Mashaw, "How Much of What Quality?: A Comment on Conscientious Procedural Design", 55 Cornell L. Rev. 823, 824-28 (1980); Colin Diver, "The Optimal Precision of Administrative Rules," 93 Yale L. J. 65, 90-92 (1983). 44 Institute of Medicine, The Dynamics of Disability: Measuring and Monitoring Disability for Social Security Programs (2002). 45 Cited in note , at 824-28.


Second, the most commonly asserted arguments for the grid are static cost-benefit arguments. By contrast, modern management theory emphasizes that the detection and correction of individual errors involve opportunities for systemic improvement.46 Errors can be corrected in a specialized department that addresses only the particular defective product. Alternatively, they can be addressed diagnostically; the causes of a particular defect can be mapped, and the systemic flaws can be remedied along with the individual problem. As a purely logical matter, either approach might be efficient, but a full assessment has to consider, not only accuracy and decision costs, but also the opportunity costs and benefits of learning and systemic improvement. Companies that feel pressures and opportunities to innovate often decide that they do better to sacrifice static optimization in order to strive for continuous improvement. When they do so, they tend to adopt explicitly super-optimal norms such as "total quality" or "zero defects", refusing to consider (at least rhetorically) that any level of error could be optimal.

In Social Security disability, the fact that cases under- and over-included by the "grid" regulation tend to be borderline has different implications from static and dynamic perspectives. In a static sense, the proponents are right to suggest that the social cost of incongruence is relatively low. But from a dynamic perspective, the opportunity costs of formal rules in these cases may be relatively high because these may be the cases in which the potential for learning and systemic gain from contextual examination is highest. Even in the current system, which is largely preoccupied with sorting people into categories of disabled or non-disabled, contextual examination in marginal cases might have exceptional diagnostic value if the characteristics ignored or over-emphasized by the rules are shared by relatively large numbers of applicants. For example, close examination might lead to the discovery of powerful indicators of the incapacitating effects of a particular medical condition that would be useful in other cases involving the same condition. Moreover, if we imagine a program reconfigured along the lines of recent criticisms to combine income support with rehabilitation, the learning potential from contextual inquiry may be highest in borderline cases because these may turn out to be the cases in which rehabilitation has most promise.

b. Vouchers and Choice Architecture.

Housing and school vouchers are a natural extension of Minimalist logic. Super expresses particular enthusiasm for housing vouchers. Aside from SSI and Food

46 See James Womack and Daniel Jones, Lean Thinking (2003).


Stamps, he points to the Section 8 housing voucher program as a model.47 Under this program, eligible recipients get certificates they can use to induce landlords to rent to them. The tenant pays a percentage of her income, and the government pays the difference between that amount and the officially determined "fair rental value" of the apartment. The idea is to give the tenant a broader range of choice and more leverage than traditional public housing, where there are few choices and, once a tenant moves in, she usually loses her subsidy if she moves.

The evidence on housing and school vouchers remains limited, and we are open-minded on the subject. However, we note some basic limitations of the model. Some of these limitations resemble those observed with emissions permits. Foremost among these are information demands. Perhaps it is easier to design a voucher than to run a school or housing project directly, but voucher programs require administrators to set a price. If the housing voucher price is too low, landlords will refuse to participate. If the price is too high, they get a windfall. The Section 8 program Super admires has suffered from both problems. During some periods, a large fraction of certificates awarded to applicants were returned unused because the holders could not find a suitable apartment for which the landlord would accept the certificate. At the same time, it has been asserted that in some localities, the certificates gave landlords more than the market value of their apartments.48

There are further problems with vouchers in the welfare context. Beneficiaries may lack either adequate information or adequate analytical capacity to make good choices with their vouchers (as some Minimalists themselves emphasize). In addition, providers have strong incentives to "cream": Instead of competing in the voucher market by trying to provide better services, they can compete by better identifying and attracting applicants who can be served more easily. Landlords will look for the most stable families; schools for students with the most promising pre-existing ability and behavioral traits; and medical facilities for the healthiest people.49
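The price-setting arithmetic described above can be made concrete. The following sketch is purely illustrative: the 30 percent income contribution and all dollar figures are assumptions of ours for exposition, not the program's actual parameters. It simply shows how a payment standard set below market rent leaves the certificate unusable, while one set above market rent hands the landlord a windfall.

```python
# Illustrative sketch of the Section 8 payment arithmetic described above.
# The 30% income share and all dollar figures are assumptions for
# exposition, not the program's actual parameters.

def section8_outcome(tenant_income, fair_rental_value, market_rent,
                     income_share=0.30):
    """Return who pays what, and whether the certificate is usable."""
    tenant_payment = income_share * tenant_income
    subsidy = fair_rental_value - tenant_payment
    if fair_rental_value < market_rent:
        # The landlord would collect less than the market rent and so
        # refuses to rent: the certificate goes unused.
        return {"usable": False}
    return {
        "usable": True,
        "tenant_payment": tenant_payment,
        "government_subsidy": subsidy,
        # Anything above the market rent is a pure transfer to the landlord.
        "landlord_windfall": fair_rental_value - market_rent,
    }

# Price set too low: certificates are returned unused.
print(section8_outcome(tenant_income=1200, fair_rental_value=800, market_rent=900))
# Price set too high: the landlord captures the excess.
print(section8_outcome(tenant_income=1200, fair_rental_value=1100, market_rent=900))
```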

47 Super, cited in note , at 48 Jennifer J. Curhan, "The HUD Reinvention: A Critical Analysis", 5 B.U. Pub. Int. L. J. 239, 243-45, 250-51 (1996). The danger of setting the price too low could be avoided by giving the voucher a flat base value and allowing the tenant to add any amount she wants to it. However, some worry that this approach, if widely used, might contribute to rent inflation. 49 Participants in voucher and other school choice programs tend to have disproportionately high socio-economic status even though the programs usually require schools to choose students by lottery. Advantaged students are more likely to apply and more likely to remain in these schools. This appears partly due to parental motivation and information, but it is also likely a function of the schools' outreach and recruiting and the degree of support they provide to students who struggle. See Diane Ravitch, The Death and Life of the Great American School System 129-47 (2010).


Moreover, the effectiveness of schools and housing is widely thought to be influenced by the social mix of the local population. Average achievement may be higher when classrooms have a mix of ability levels than when they are segregated by ability. Disadvantaged people may do better in racially and economically integrated neighborhoods. Integration may require a complex balance of groups, achieving a "critical mass" needed for minority group members to feel comfortable on the one hand and avoiding the "tipping point" that triggers the flight of the privileged on the other. If integration or social balance is a goal, then it seems unlikely that it can be achieved through unaided individual choice. Aside from discrimination by providers and local governments, there is a coordination problem.50 Some combination of information and coordination problems seems to account for the finding that "if given vouchers, poor people choose … segregated areas similar to the ones they left."51 The most admired effort to achieve housing integration with vouchers – the Gautreaux program in Chicago – depends heavily on professional services to match applicants with apartments and counsel them about adjusting to life in their new neighborhoods.

Health care cost control is the major social welfare challenge of the day. We doubt that the Minimalists' approach can play more than a secondary role in effective reform, and we worry that it will discourage critical reforms that depend on the kind of administrative capacity they are skeptical of. The major American public health care programs, and even private insurance, already have voucher-like characteristics. The main difference between Medicare or Medicaid entitlement and a housing or school voucher is that there is no cap on the amount of benefits that the beneficiary of the health programs can demand. The system thus gives providers an incentive to overtreat and to use relatively expensive treatment that may not even serve the medical interests of the patient, much less social efficiency. The economists' favored approach to the problem involves making consumers more selective by tightening their budget constraints

50 James S. Liebman, "Voice, Not Choice", 101 Yale L. J. 259 (1991). 51 James E. Rosenbaum and Stefanie DeLuca, "Does Changing Neighborhoods Change Lives?: The Chicago Gautreaux Housing Program and Recent Mobility Programs" (Institute for Policy Research, Northwestern University, Working Paper Series, WP-09-01) (2009), at 5.


(for example, by cutting back tax subsidies and encouraging healthcare savings accounts and employer self-insurance).52 A competing response involves administrative review of practice decisions under continuously updated standards.53 The Minimalists are unlikely to be comfortable with either perspective. The economistic vision relies on individual choice to an extent the Minimalists know is implausible. The alternative perspective calls for exactly the kind of complex administrative capacity that the Minimalists despair of creating.

The difficulties for Minimalism in this area can be seen in the chapter on health insurance in Sunstein and Richard Thaler's Nudge.54 It focuses on the Medicare Part D prescription drug plan inaugurated in 2003. The program requires most beneficiaries to choose among a variety of private plans. Sunstein and Thaler summarize evidence that shows two crippling problems. First, left to themselves, many people will not choose at all, and if the default is no coverage, they will lose coverage. Second, most people do not seem to choose the plan that is most beneficial to them. We know this because researchers can get an idea of what people's drug needs are from the records of their plans and then compare the coverage they got (deductible, co-payments, ceilings) with the coverage they could have gotten under alternative plans.

Sunstein and Thaler propose a classic Minimalist response: manipulation of default rules and "choice architecture." Instead of no coverage, the default for non-choosers should be a plan chosen by the government. And the government should make these default choices, not randomly (as is now done for some beneficiaries) but by "intelligent matching." The government can use available data to calculate which plan is most likely to be the best for non-choosers and assign them to it provisionally. For choosers, the key is to present the data at the time of decision in a way that helps the beneficiary choose effectively. For example, the choice document can contain a list of the beneficiary's drug use in the past year and the net cost to the patient of such drugs under alternative plans.

Note that, for their only discussion of health insurance, the authors choose a relatively small and manageable portion of the picture. Even if we saw the kind of assisted choice they propose as effective here, it would be harder to see it working with respect to the more complicated

52 John F. Cogan, Glenn Hubbard, and Daniel Kessler, Healthy, Wealthy, and Wise: Five Steps to a Better Health Care System (2005). 53 Clayton Christensen, et al., cited in note , at 56-65. 54 Sunstein and Thaler, cited in note .


choices involved in, say, choosing a doctor or a hospital or a comprehensive insurance plan.55 And most strikingly, even in this relatively simple case, choice ends up doing very little work. Sunstein and Thaler argue convincingly that, by using sophisticated data aggregation and analysis, the government can make plausible decisions about the interests of the large fraction of people disinclined to choose. They suggest further that the government can use the same tools to present data to choosers in a way that will make their choices effective. It seems quite likely that, for many choosers, the presentation of the data will more or less determine the choice. To the extent that the government can accurately determine the best choice and can present the data that underlies that determination clearly, choosers will simply ratify the government's decision. To the extent that choosers depart from the government prediction, it could mean that they understand their situation better than the government, or it could mean that they are confused. In the former case, choice would serve as a valuable check on the government's analytical apparatus (which presumably could be continuously improved as the government investigates the reasons for the discrepancies). In the latter case, choice is just noise and nuisance. But in either case, the key intervention involves much more than a "nudge"; it is a kind of complex administration that Minimalism tends to acknowledge only as an afterthought.56
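A minimal sketch of the "intelligent matching" logic described above may be useful. The plan terms, cost figures, and the reduction of coverage to a premium, a deductible, and a co-payment rate are all simplifying assumptions of ours; an actual matcher would have to model formularies, coverage gaps, and ceilings.

```python
# Minimal sketch of "intelligent matching" for prescription-drug plans.
# Plan terms and cost figures are hypothetical; a real matcher would model
# formularies, coverage gaps, and ceilings case by case.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    premium: float      # annual premium
    deductible: float   # paid entirely by the beneficiary
    copay_rate: float   # beneficiary's share of spending above the deductible

def expected_cost(plan: Plan, predicted_drug_spend: float) -> float:
    """Predicted annual out-of-pocket cost for one beneficiary under a plan."""
    below = min(predicted_drug_spend, plan.deductible)
    above = max(predicted_drug_spend - plan.deductible, 0.0)
    return plan.premium + below + plan.copay_rate * above

def intelligent_match(plans, predicted_drug_spend):
    """Provisional default assignment: the plan minimizing predicted cost."""
    return min(plans, key=lambda p: expected_cost(p, predicted_drug_spend))

plans = [
    Plan("Low premium", premium=240, deductible=500, copay_rate=0.40),
    Plan("High premium", premium=720, deductible=0, copay_rate=0.20),
]
# A non-chooser predicted (from last year's records) to spend $3,000 on drugs
# is provisionally assigned the cheaper-for-her plan; she can always opt out.
best = intelligent_match(plans, predicted_drug_spend=3000)
print(best.name, expected_cost(best, 3000))  # High premium 1320.0
```

The same comparison, presented to a chooser at the time of decision, is what the "choice architecture" proposal expects to make active choices effective.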

55 In a 2010 speech, Sunstein, as administrator of the Office of Information and Regulatory Affairs, described the Obama administration's "approach to regulation" as focused on, in addition to cost-benefit analysis, "default rules and simplification." Aside from encouragement of automatic employer enrollment in retirement savings plans, the only examples he gave involved improved disclosure to consumers or workers of the risks of products such as chemicals, tobacco, or bank loans. All the interventions seem sound, but in comparison to the Administration's proposals to restructure food safety, finance, education, health care, and offshore oil drilling, they seem minor. These latter initiatives are not mentioned. Cass R. Sunstein, "Humanizing Cost-Benefit Analysis," (Feb. 17, 2010), available at www.whitehouse.gov/omb/oira_02172010/. 56 You would never know from Sunstein and Thaler's breezy discussion of the potential contribution of Minimalism that Medicare Part D involves a policy fiasco on a scale of the SO2 scrubber legislation that Ackerman and William Hassler famously denounced in Clean Coal/Dirty Air (1981). In larding unnecessary subsidies on the drug industry and precluding government price negotiation or foreign imports, the bill, like the scrubber legislation, represents the victory of special interest rent-seeking and regulatory dogmatism over sensible public policy. See, e.g., Louise M. Slaughter, "Medicare Part D: The Product of a Broken Process", 354 New England Journal of Medicine 2314 (2006). There is one difference: the relevant dogmatism in the case of the scrubber legislation was New Deal command-and-control. With Part D, it is market fetishism. To be sure, the Minimalists are not to blame for the bad faith and unreflective uses of their ideas, but the fact that Sunstein and Thaler can discuss their market-mimicking reform without any indication of how dependent it might be on a deeply dysfunctional infrastructure suggests that even well-considered discussion can have distorting effects.


II. Experimentalism

Experimentalism takes its name from John Dewey's political philosophy, which aims precisely to accommodate the continuous change and variation that we see as the most pervasive challenge of current public problems. "Policies [should be] experimental in the sense that they will be entertained subject to constant and well-equipped observation of the consequences they entail when acted upon, and subject to ready and flexible revision in the light of observed consequences," he wrote.57 At the same time he rejected standardized bureaucratic solutions and urged responses that combined respect for local context with centralized structure and discipline. Aside from education, where he was an active reformer, Dewey offered little by way of concrete illustrations of his proposals, but postwar developments in business management and recent reforms in many public policy areas connect fruitfully with Dewey's theories.

In the United States, some Experimentalist institutions have emerged in sectors, such as nuclear power or food safety, where technological and economic change has outstripped the capacities of established market and bureaucratic safeguards to protect key public interests. Others have emerged in settings, such as public education or child protective services, where decades of apparently fruitless and interminable debate have created strategic uncertainty among the contending parties and opened them to interventions aimed at learning from promising local experience. In the European Union and in many international regimes, Experimentalist institutions have arisen from the realization that mutual evaluation and learning from diverse national practices is sometimes the only feasible means of coordination in the absence of a conventional national sovereign with presumptive authority to fix common goals. Not all these regimes have complete or well-configured architectures, but we consider them Experimentalist to the extent that they are designed to achieve local adaptation and aggregate learning by

57 John Dewey, cited in note , at 203.


combining discretion with duties to report and explain and by pooling information. We do not suggest that any of these regimes has proven its efficacy over alternative approaches to the problems it addresses. But each has functional and jurisprudential properties that make it a distinctively promising response to the dimension of uncertainty in these problems. Our optimism about Experimentalism is based most generally on this fit between structures and problems.

A. Basic Architecture

We start with a relatively abstract model of Experimentalism in which the basic constituents are a "center" and a set of "local units." In practice, the "center" is sometimes the national government and the local units, its federated states or municipalities. Or the center could be a government agency and the local units, the private actors it regulates or the public or private service providers with which it contracts. Or the center might be a single public or private organization, with the local units its (territorial) sub-divisions: a state department of child welfare services and its regional districts, for one example, or a school district and its individual schools for another. These relations are often nested, with an entity, such as a school district, at once the local unit of a broader (state) jurisdiction and the center of a territorial unit of its own; but the relation between contiguous "higher" and "lower" units is the same, regardless of where they are located in the overall system.

Together the center and local units set and revise goals, and the means of pursuing them, in an iterative process with four basic elements. First, framework goals (such as an "adequate education" or "good water status") and provisional measures for gauging their achievement are established by legislation, administrative action, or court order through consultation among the center and local units and relevant outside stakeholders. Second, local units are explicitly given broad discretion to pursue these ends as they see fit. But, third, as a condition of this autonomy, the local units must report regularly on their performance and participate in a peer review in which their results are compared with those of peers employing other means to the same general ends. These reviews require the local units to describe and explain their efforts to peers and superiors; to show that they have considered alternatives; and to show that they are making progress by some jointly acknowledged measure of success, or are making plausible adjustments if not. The center provides services and inducements that


facilitate this disciplined comparison of local performances and mutual learning among local units. Finally, the framework goals, performance measures, and decision-making procedures themselves are periodically revised on the basis of alternatives reported and evaluated in peer reviews, and the cycle repeats.

This model involves the following key elements:

1. Decentralization. Like Minimalism but in contrast to command-and-control, Experimentalism combines decentralization of operative control with central coordination of the evaluation of results. John Braithwaite's observation on the most successful private mine safety efforts exemplifies this feature of the Experimentalist architecture:

[W]hile all these companies have decentralized control over safety, they also have centralized assessment of the safety performance of line managers. All carefully monitor each mine and each district to ascertain whether their accident and fatality rates are improving or worsening in relation to performance of previous years and to the performance of other mines and districts. [For all the local managers,] the sense that the head office is watching their safety performance is pervasive. 58

2. Signals and Norms. Experimentalism has been influenced by 20th century innovations in economic organization. In particular, it reflects the practices of "learning by monitoring" and "continuous improvement" exemplified by the Toyota Production System.59 This system breaks strongly with organizational premises shared by both command-and-control and Minimalist administration. Toyota-style organization rejects any strong distinction between making rules on the one hand and applying them on the other. It contemplates that rules be continuously revised in the course of application. It treats rule departures diagnostically as symptoms of systemic problems and opportunities for systemic improvement. Sometimes, instead of seeking to minimize or isolate them, it induces and expands them (for example, by eliminating "buffer inventory stocks" and imposing super-

58 John Braithwaite, To Punish or Persuade: Enforcement of Coal Mine Safety ___ (1985). 59 See William H. Simon, "Toyota Jurisprudence: Legal Theory and Rolling Rule Regimes," in de Burca and Scott, cited in note .


optimal standards like "zero defects") in order to make the system's weaknesses transparent. An Experimentalist regime typically combines performance or outcome measures, such as total emissions from a factory or average test scores at a school, with practice standards that mandate particular equipment or conduct, such as wearing protective eyewear or making a lesson plan. Some performance norms are mandatory and some are diagnostic. Mandatory performance norms indicate how well an actor is doing overall but often say little about the causes of his success or failure. Diagnostic performance norms are more fine-grained. For example, they might track the scrap rate of a manufacturing process or particular student skills in a school. They aspire to provide a rich array of "weak signals" that guide adaptation. They assist in self-assessment and comparison with peers, but the only further obligation associated with them is typically to attempt to improve.
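The division of labor between the two kinds of norms can be rendered schematically. The sketch below is our own illustration rather than a depiction of any actual regime: crossing the mandatory floor triggers remedial consequences, while the diagnostic indicators merely flag "weak signals" that call for investigation and an attempt to improve.

```python
# Schematic illustration (ours, not drawn from any actual regime) of the
# difference between mandatory and diagnostic performance norms.

MANDATORY_FLOOR = 60.0  # an overall score below this triggers sanction

# Diagnostic baselines carry no sanction; exceeding one is a "weak signal."
DIAGNOSTIC_BASELINES = {"scrap_rate": 0.05, "rework_hours": 120.0}

def review(unit, overall_score, diagnostics):
    """Compare a local unit's results against both kinds of norms."""
    if overall_score < MANDATORY_FLOOR:
        print(f"{unit}: below mandatory floor; remedial action required")
    for indicator, baseline in DIAGNOSTIC_BASELINES.items():
        if diagnostics.get(indicator, 0.0) > baseline:
            # The only obligation a diagnostic norm imposes is to
            # investigate and attempt to improve.
            print(f"{unit}: weak signal on {indicator}; investigate and explain")

review("Plant A", overall_score=72.0,
       diagnostics={"scrap_rate": 0.08, "rework_hours": 90.0})
# -> Plant A: weak signal on scrap_rate; investigate and explain
```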

Experimentalist regimes differ from Minimalist ones in having extensive practice norms; they differ from command-and-control regimes in making most of these norms presumptive rather than mandatory. Typically a small body of mandatory norms will express requirements or prohibitions that are susceptible to stable specification. They frequently concern matters of process, such as reporting, and fairness constraints, such as prohibitions of discrimination. The remaining practice norms are presumptive: They must be followed unless doing so would frustrate their underlying purpose. In that case, the agent can depart from them, but only when she signals her departure and explains her reasons to peers or superiors. In the phrase used in the EU, her duty is to "comply or explain."60 If the explanation is accepted, the rules are promptly re-written to reflect the new understanding. The function of presumptive norms is not to control discretion but to produce transparency. They achieve this goal, not by forcing practice to conform mechanically in situations the rules did not anticipate, but by enlisting frontline workers in the continuous updating of the rules. When the system functions well, an outsider can get a good idea of what is going on by studying the rules. Part of the job of controlling discretion that rules perform in other types of regimes is accomplished in Experimentalism by monitoring practices in which agents are required to signal and justify their rule departures. Thus, the role of collaborative explanation and deliberation -

60 See Christopher Hogg, "The 'Comply or Explain' Approach to Corporate Governance," available at www.qfinance.com.


- confined in Minimalism to elite administrators – is extended through all levels of administration.

3. Incentive Design. A distinctive set of central norms is designed to induce local actors to participate in the regime. They can involve either coercive sticks or enticing carrots. Coercive norms characteristically take the form of "penalty defaults" that specify fairly harsh consequences if the parties fail to come up with an acceptable alternative. For example, the Endangered Species Act precludes a developer from improving land where there are endangered species unless the developer concludes a Habitat Conservation Plan with local stakeholders.61 Enticements might take the form of conditional grants. For example, the Obama administration's "Race to the Top" awards large grants for educational reform in a competitive process where state applications are judged on the extent to which they demonstrate capacities to plan and self-assess, to share and make use of information about their own and peer performances, and to coordinate with "critical stakeholders" in both the public and private sectors.62

Experimentalism is sometimes associated with "soft law" and reliance on informal social pressures rather than material sanctions, but this view is misleading.63 Some Experimentalist sanctions are harsh, such as the no-development rule under the Endangered Species Act. Schools, nuclear power plants, and food processors can all suffer the harsh sanction of shut-down under Experimentalist regimes when they deliberately or persistently fail to meet mandatory performance norms. The distinctive characteristic of Experimentalist incentive design concerns the conduct it is calculated to induce. In command-and-control regimes, incentives are calculated to induce compliance with specific conduct the regulator has determined to be optimal. In Minimalist regimes, sanctions are designed to induce a particular result measured in terms of a few clear indicators, such as CO2 reduction. The distinctive goal of Experimentalist incentive design is to induce actors to engage in investigation, information sharing, and deliberative engagement. This aim sometimes leads regulators away from sanctions that induce actors to hide information or to take defensive adversarial postures in dealings

61 Karkkainen, cited in note . 62 U.S. Dept. of Education, Race to the Top Program: Executive Summary (Nov. 2009), at 6, available at www2.ed.gov/programs/racetothetop/executive-summary.pdf. 63 In the EU and international regimes, resort to "soft law" is encouraged by the limited capacities for conventional coercive enforcement. Patrick Cottrell, David M. Trubek, and Mark Nance, "Soft Law, Hard Law, and European Integration", in De Burca and Scott, cited in note .


with regulators. Thus, an Experimentalist regime is less likely to impose harsh monetary or criminal penalties for unintentionally harmful conduct under vaguely defined standards and more likely to impose them for failing to make or execute plans or to report performance.64

4. Stakeholder Participation. Experimentalism has a more sanguine attitude toward participation than Minimalism, but the type of participation it relies on is not the universalistic and voluntarist kind that Minimalists disparage. Experimentalism emphasizes stakeholder participation that seeks to elicit and reconcile the diverse views and interests of people distinctively affected by and knowledgeable about the matters in issue. The problems of apathy and ignorance that constrain participation in generalist processes are less severe here. Moreover, some cognitive biases may be mitigated when people have strong incentives to win the cooperation of people with whom they disagree. We expect people stranded in lifeboats to figure out a mode of collaboration even when they have severe differences, and there may be a more general "lifeboat effect" from local interdependence in some contexts addressed by Experimentalism.65 Moreover, we've seen that designers of Experimentalist regimes do not need to rely on common interests or cooperative dispositions; they can alter default rules or allocate rewards in ways that motivate responsible participation. Finally,

64 Harsh sanctions deter disclosure, first by raising its cost when disclosure increases the likelihood of sanction, and second, by "crowding out" voluntary cooperation when they induce actors to take a more calculating attitude toward the regime. See Reinier Kraakman and Jennifer Arlen, "Controlling Corporate Misconduct: An Analysis of Corporate Liability Regimes," 72 NYU Law Review 687 (1997); Uri Gneezy and Aldo Rustichini, "A Fine is a Price," 29 Journal of Legal Studies 1 (2000).

Since we are trying to portray both Minimalism and Experimentalism in their most promising forms, our model of Experimentalism excludes the superficially similar approach known as New Public Management. New Public Management emphasizes performance measures and competition among local units. However, its performance measures tend to be strong signals intended to allocate rewards and punishments more than weak signals intended to diagnose problems and indicate directions for improvement. And its idea of competition tends to be harshly Darwinian and to exclude the emphasis on collaborative inquiry and peer review that is central to Experimentalism. See Carol Harlow, "Accountability, New Public Management, and the Problems of the Child Support Agency," 26 Journal of Law and Society 150 (1999); Simon Head, "The Grim Threat to British Universities," New York Review of Books (Jan. 13, 2011). 65 Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action 211 (1990) (concluding that a shared sense among stakeholders that all "will be harmed if they do not adopt an alternative rule" is favorable to agreement on solutions to common pool resource problems).


the nature of the issues and circumstances that prompt Experimentalist regimes may improve the prospects of effective participation. These regimes are responses to uncertainty. They arise in situations where public officials are uncertain about both the definition of the problem and its solution. In these situations, stakeholders may find it difficult to map proposed solutions onto their selfish interests. To this extent, they may be more open to good faith deliberations focused on public interests.

B. Regulation

A variety of regulatory regimes in the United States and abroad are best understood in Experimentalist terms. One is "management-based regulation" in which the regulator requires each regulated actor to develop a plan to mitigate specified harms; assesses the adequacy of the plans; monitors their implementation; and, through a combination of tangible penalties, technical assistance, and public shaming, induces the laggards to comply with minimum standards and the front runners to improve continuously. Examples discussed in recent scholarship include the Hazard Analysis and Critical Control Point (HACCP) program for meat and poultry of the Department of Agriculture, the Massachusetts Toxics Use Reduction Act, the power plant safety program of the Nuclear Regulatory Commission, and the mining safety programs of Queensland and New South Wales, Australia.66 Another structure involves public-private contracting in which an agency enters into a series of bilateral contracts with large regulated actors. The contracts commit the actor to observable performance on one or more regulatory dimensions, and the regulator may in return agree to waive otherwise applicable requirements or to use less frequent or costly monitoring. The EPA's Project XL and OSHA's Voluntary Protection and Strategic Partnership Programs are examples.67 Or in

66 Coglianese and Lazer, cited in note ; Karkkainen, cited in note ; Rees, cited in note ; Neil Gunningham, Mine Safety: Law Regulation Policy (2007); Jody Freeman, "Collaborative Governance in the Administrative State," 45 UCLA Law Review 1 (1997). 67 Orly Lobel, "Occupational Safety in the US," in De Burca and Scott, cited in note ; Environmental Governance: Comparative Approaches to Regulatory Innovation in the United States and Europe (Eric Orts and Kurt Deketelaere eds. 2001). Environmental regimes have generated the largest body of self-consciously Experimentalist legal scholarship. For discussion and references see Cameron Holley, "Facilitating Monitoring, Subverting Self-Interest, and Limiting Discretion: Learning From the 'New' Forms of Accountability in Practice", 35 Columbia Journal of Environmental Law 127, 139-49 (2010).


another variation, the government oversees and/or ratifies negotiations between regulated actors, NGOs, and local stakeholders, as occurs in the Habitat Conservation Plan program under the Department of the Interior.68 A third structure involves "multi-level governance" in which regulatory authority is decentralized to lower-tier governments in ways designed to permit adaptation to local conditions and/or regulatory competition. The Clean Air and Clean Water Acts and OSHA all give states the option of assuming important regulatory responsibilities, subject to meeting minimum national standards. Some international trade regimes are pushing participating nations in Experimentalist directions. Principles of mutual recognition or "equivalence" found in many trade treaties require importing countries to accept products produced in compliance with the regulatory regimes of the exporting countries where these regimes provide protection substantially equivalent to the importing country's regime.69 Such provisions require trading partners to engage in a kind of peer review of each other's regimes. International organizations within and outside the World Trade Organization promote information exchange and discussion of regulatory standards among member states.70

The specific configuration of Experimentalist regulation depends in part on whether the risks it addresses take more the form of concentrated, large-scale catastrophe, such as a nuclear reactor meltdown, or of relatively diffuse danger, such as tainted food. Regimes focused on catastrophe-avoidance emphasize event notification – the duty and capacity of regulated actors to identify and report "weak signals" of systemic problems and to respond to such reports.

In U.S. nuclear power safety, for instance, plants must initially satisfy a demanding set of structural requirements and then must report all potentially dangerous "significant operating events," ranging from interruptions in power generation to unexpected deterioration of equipment, to the Nuclear Regulatory Commission. The NRC evaluates the report and notifies all operators of possible hazards. Thus the failure of a non-critical valve in one facility can lead to a notice alerting all the others to inspect similar valves, some of which may indeed be performing critical functions. Peer reviews are conducted frequently to

68 Karkkainen, cited in note . 69 Kalypso Nicolaidis and Gregory Shaffer, “Transnational Mutual Recognition Regimes: Governance without Global Government”, 68 Law & Contemp. Probs. 251 (2005). 70 Joanne Scott, The SPS Agreement (2008).


evaluate the facility's responses to such notices. In addition, plants are ranked annually on their overall performance with regard to operating safety. These rankings, along with other safety information, are provided directly to the highest level managers and the operator's board of directors to ensure that criticism is not dulled by passage through a managerial hierarchy.71

By contrast, in food processing, where the range of acceptably safe arrangements is much wider than in nuclear power generation and the risk of catastrophe smaller, responsibility for identifying and mitigating hazards rests more with the operators. In Hazard Analysis and Critical Control Points (HACCP) programs, processors specify the points in their own production processes where precise control is necessary to prevent contamination by pathogens, develop hazard mitigation plans for each plant describing how that control should be exercised, and design testing regimes to confirm the efficacy of the measures they propose. The regulator reviews and approves the plan and monitors the firm's compliance with it. Finally, plans increasingly include provisions for tracing products as they are processed, so that when failures occur it is possible to identify the entity and the procedures that failed.72

The Food Safety Modernization Act of 2010 generalizes the HACCP approach, which was previously confined in the U.S. to processors of meat, poultry, and a few other products, to virtually all food processors above a minimum size. The core of the bill is a mandate that the Food and Drug Administration (FDA) require processors to formulate, validate, and continuously update HACCP plans (now called Hazard Analysis and Preventive Controls). The Act requires the FDA to allocate its inspection resources in accordance with an assessment of the relative riskiness of regulated facilities. Among the specified risk factors are the adequacy of a facility's plan and its implementation and whether the facility has been certified by a private auditor that is in good standing under an accreditation regime.73
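The HACCP structure just described lends itself to a schematic rendering. In the sketch below, the control points, critical limits, and readings are invented for exposition; the point is only to show the sequence of identifying critical control points, attaching a testing discipline to each, and tracing lots so that failures can be localized.

```python
# Schematic sketch of a HACCP-style plan: critical control points, a testing
# discipline at each, and lot tracing. Control points, limits, and readings
# are invented for exposition.

from dataclasses import dataclass, field

@dataclass
class ControlPoint:
    name: str              # step where precise control prevents contamination
    critical_limit: float  # e.g., a cooking temperature in degrees Celsius
    direction: str         # "min": reading must reach the limit; "max": must stay below

    def passes(self, reading: float) -> bool:
        if self.direction == "min":
            return reading >= self.critical_limit
        return reading <= self.critical_limit

@dataclass
class HACCPPlan:
    facility: str
    control_points: list
    log: dict = field(default_factory=dict)  # lot_id -> {point name: reading}

    def record(self, lot_id: str, point: str, reading: float) -> None:
        self.log.setdefault(lot_id, {})[point] = reading

    def failures(self, lot_id: str) -> list:
        """Trace a lot back to the control points that failed (or went untested)."""
        readings = self.log.get(lot_id, {})
        return [cp.name for cp in self.control_points
                if cp.name not in readings or not cp.passes(readings[cp.name])]

plan = HACCPPlan("Plant 7", [
    ControlPoint("cooking", 71.0, "min"),
    ControlPoint("chilling", 4.0, "max"),
])
plan.record("lot-001", "cooking", 68.5)   # below the critical limit
plan.record("lot-001", "chilling", 3.0)
print(plan.failures("lot-001"))           # -> ['cooking']
```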

71 Rees, cited in note . 72 Committee on the Review of the Use of Scientific Standards for Safe Food, Scientific Criteria to Ensure Safe Food (2008). 73 Food Safety Modernization Act of 2010, Pub. L. 111-353, 111th Cong., 2d Sess., secs. 103, 201.


In practice, the distinctions between the event-reporting and risk-planning approaches blur, as systems based on risk-mitigation planning increasingly focus on event notification and vice versa. Thus, in food safety, the Centers for Disease Control and Prevention coordinate a network that continuously analyzes data from healthcare providers about foodborne illness that federal and state regulators use to target their efforts.74 At the same time, the ability to respond effectively to the information disseminated by event-notification systems of the type familiar from the nuclear power-generating industry is seen as depending on the kind of careful and continuing review of facility operations associated with the implementation of HACCP plans. The convergence of the reporting and planning approaches can be seen in the 2010 report of the National Commission on the Deepwater Horizon Oil Spill and Offshore Drilling. The Commission's recommendations, modeled in part on the nuclear safety regime, call for both government-orchestrated and enforced planning at the facility level and a system of peer information exchange and review.75

An especially interesting area where Experimentalist regulatory practices are converging is drug regulation. The system of drug regulation established during the New Deal requires "approval" by the Food and Drug Administration (FDA) prior to "marketing." Approval typically involves an expensive and rigorous testing process focused around clinical trials. The FDA then makes a decision whether to permit marketing on the basis of a general assessment as to whether likely benefits to people within the prescription criteria outweigh likely costs (inefficacy, side effects). This framework initially had little to say about what happened post-approval, and until 2007, post-approval duties of both the manufacturers and the FDA were much more limited and ambiguous than pre-approval ones.76 This system has been subject to two salient criticisms. First, critics asserted that the approval process was too long and expensive, and in particular, that it unnecessarily delayed access to potentially beneficial drugs. Second, they decried the scantiness of post-approval regulation,

74 Richard L. Arkin, "PulseNet, FoodNet, NARMS: Tools to Fight Disease, Protect Public Health", FDA Veterinarian Newsletter 22 (2007), available at www.fda.gov/AnimalVeterinary/NewsEvents/FDA/VeterinarianNewsletter/um110048.htm. The Food Safety Modernization Act requires FDA efforts to strengthen "foodborne illness surveillance systems." Sec. 205. 75 National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, Deepwater: The Gulf Oil Spill and the Future of Offshore Drilling 217-91 (May 22, 2010). See especially the discussion of the nuclear safety regime at pp. 235-41. 76 Clayton Christensen et al., cited in note , at 56-61; Government Accountability Office, Improvement Needed in FDA's Postmarket Decisionmaking and Oversight Process (GAO-06-402) (2006).


pointing to a series of scandals in which approved drugs continued to be marketed despite evidence that they were doing unanticipated harm.

Pressure on the New Deal structure was also increased by trends in medicine toward greater individuation of treatment. These include "personalized medicine," which uses genetics to customize treatments and drugs to the characteristics of the patient, and "behavioral medicine", which focuses on patient education, communication, and monitoring. Another important set of developments involves the aggregate analysis of patient treatment data. A major current health policy priority is to develop information systems for standardized recording and transmission of individual treatment data (while protecting the patient's privacy). Aside from facilitating coherent patient care by making past history transparent to current caregivers, such systems could contribute to the continuous assessment of the efficacy of treatments, including drugs. Drug efficacy has been studied in a controlled environment because that has been the only way to get sufficiently precise data on the baseline condition of the patient and the nature and effect of the intervention. But with sufficiently precise and standardized routine treatment reporting, the line between clinical trials and routine treatment would blur.

If current trends continue, the key judgment for drug regulators will cease to be a binary call about whether or not to permit general marketing of a drug. Instead, regulation will focus on assuring that the drugs are properly customized to the patient and properly integrated with other medical and behavioral interventions. As the line between research and treatment erodes, a one-time global cost-benefit judgment will be far less important than an ongoing capacity to monitor data for signals of problems and opportunities. These trends are reflected in the Food and Drug Administration Amendments Act of 2007. Among other things, the Amendments increase the FDA's authority to require specified ongoing research as a condition of approval, require that the agency enhance its Adverse Event Reporting System, and mandate that it establish a system of proactive monitoring of drug efficacy through "targeted inquiries" of healthcare records.77 The advent of "personalized" and "behavioral" medicine suggests the likelihood of a trend toward regulatory regimes configured to particular drugs. Steps in this direction are evident in the Risk Evaluation and Mitigation Strategies (REMS) encouraged by the 2007 amendments. As a condition of approval, the FDA can require the

77 Food and Drug Administration Amendments Act of 2007, Pub. L. 110-85, secs.


promoter to formulate and implement a REMS. A relatively elaborate example is the highly regarded System for Thalidomide Education and Prescribing Safety (STEPS). Thalidomide was once prescribed widely for a variety of common conditions like insomnia and anxiety, but it was banned on the discovery that it severely increased the risk of birth defects. Since then, research has indicated that the drug is uniquely effective with certain conditions associated with leprosy. It can now be prescribed to patients with these conditions but only in accordance with STEPS protocols that require registration and training of the prescribing physician, the dispensing pharmacist, and the patient; birth control and pregnancy testing for women of child-bearing age and a variety of practices for all patients; and regular reporting of treatment and patient condition. The program is administered by a multidisciplinary committee within Celgene, the manufacturer, and another within the Slone Epidemiological Unit of the Boston University School of Medicine. The latter committee has patient representatives, including delegates from the March of Dimes and the Thalidomide Victims Association of Canada. The committees enforce compliance in part through audits and "make… recommendations to the FDA for changes in the program based on real time data."78

Drug regulation has long been experimental in the sense that it has based key judgments on disciplined empirical assessment. But as long as experimentation was confined to a specialized realm and associated with one-time binary judgments, it lacked the features associated with Experimentalism. As it moves to emphasize and integrate event notification with contextual risk-planning, it increasingly resembles Experimentalist regimes such as nuclear power and HACCP. It is not easy to account for such regimes in Minimalist terms. Static notions of optimization capture very little of the central thrust of the system, which is to facilitate learning and continuous revision. Moreover, by blurring the line between rule-making and implementation, the system precludes the kind of accountability that binds local actors to centrally-promulgated rules. In comparison to the Minimalist approach, the Experimentalist approach to regulation promises the following advantages:

78 Jerome B. Zeldis, Bruce A. Williams, Steve D. Thomas, and Marc E. Elsayed, “S.T.E.P.S.: A Comprehensive Program for Controlling and Monitoring Access to Thalidomide”, 21 Clinical Therapeutics 319 (1999); Institute for the Future, Optimal Futures for Risk Evaluation and Mitigation Strategies (REMS) (2010).


First, these regimes create pressures to respond to "weak signals" with self-diagnosis and corrective action. Moreover, they have the potential to induce continuous improvement by all actors, not just those who are below the minimum standards. They can provide a range of incentives, including rewards and subsidies, reputational effects, and conventional penalties.

Second, Experimentalist regulation reduces the information burden on the regulator in determining initial levels of acceptable performance. To the extent that performance standards are based on questions of technical or economic feasibility, they can be set from observed experience across the full population of regulated actors. The regulator can require that plants that fail to perform at some fraction of average industry performance be subject to remedial and/or punitive consequences. Under the HACCP regime for meat and poultry safety in the US, only the poorest performing two percent of plants — those whose products most frequently test positive for conveniently measured proxies of pathogens — are at risk of closure. But as laggards improve or exit the industry, the average performance rises, and the minimum standards can "ratchet up" without the need for special investigations or proceedings of the kind necessary, for example, to adjust permit allocations in a cap-and-trade regime.

Third, Experimentalist regimes accommodate diversity by leaving extensive discretion to local actors while making their activities mutually transparent. The key mechanisms involve metrics that measure performance. Experimentalism co-opts the paradox of standardization: By imposing uniformity along a few dimensions, we permit more variation along others.79 Microbiological testing for both serious toxins and specified proxy organisms permits us to compare food processing plants and identify the leaders and laggards, while leaving them substantial discretion over their operations. Minimalist market simulation regimes aspire to achieve the same goal, but their logic focuses effort around a single indicator – price. A regulatory mistake in setting prices may undermine the whole regime. Moreover, price signals often have no diagnostic value. They tell us who the high cost producers are, but they do not tell us what they are doing wrong. By contrast, because Experimentalist regimes rely on a range of indicators, they are

79 “Uniformity and standardization may provide an underlying basis for differentiation and liberation of individual potentialities.” Dewey, cited in note , at 213.


less vulnerable to the failure of one. And their indicators typically have diagnostic weight.80

Fourth, an Experimentalist regime readily propagates relevant technical and organizational advances that may not circulate freely under bureaucratic or Minimalist regulation. Event notification and risk planning tend to make technical knowledge accessible both to regulators and peers. The agency can provide technical assistance to firms in developing risk-management plans or can disseminate quick fixes to problems registered in event-notification systems. It can mandate publication of risk management plans and can encourage or mandate information exchange through peer review. It can subsidize demonstration projects in which progressive firms get assistance in customizing new technology to local conditions so that its potential can be publicized to the firm's peers.81

C. Social Welfare

In the welfare area, promising programs with key Experimentalist traits can be found in at least two broad areas. The first includes programs that provide individualized services. These include child protective services, health care, both "special" and general education, job training, mental health services, and disability capacitation. They typically require highly individuated planning, pervasive performance measurement, and efforts to aggregate and disseminate information about effective practices.82 The second area involves public participation in the design of local public goods. Notable examples are community policing and community economic development. In community policing, police consult with neighborhood groups in order to set priorities among local law enforcement objectives, to obtain local knowledge to configure

80 Robert Kaplan and David Norton, The Balanced Scorecard 150 (1996), propose use of a "balanced scorecard" consisting of parallel sets of "outcome" and "performance" measures: "Outcome measures without performance [measures] do not communicate how the outcomes are achieved…. Performance [measures may] fail to reveal whether operational improvements have been translated into … enhanced financial performance." 81 Eric von Hippel, cited in note , at 67; Daniel Carpenter, The Forging of Bureaucratic Autonomy: Reputations, Networks, and Policy Innovation in Executive Agencies 238-54 (2001); Atul Gawande, "Testing, Testing," The New Yorker (Dec. 14, 2009); Charles F. Sabel, "Self-Discovery as a Coordination Problem: Lessons from a Study of New Exports in Latin America" (Unpub. 2009). 82 Kathleen Noonan, Charles F. Sabel, and William H. Simon, "Legal Accountability in the Service Based Welfare System", 34 Law & Soc. Inquiry 523 (2009).


strategies, and to coordinate with private activities in implementing enforcement plans. Success is monitored in terms of various indicators of safety. In community economic development, government agencies and private charities make financial and technical assistance available to community-based organizations to implement locally-produced plans for housing, job, and business development. Again, projects are evaluated in terms of an array of criteria, and the more successful organizations receive priority in later rounds of funding.83

These programs redefine the conventional relation between center and frontline. The center's role is no longer merely to monitor frontline compliance with promulgated standards. It is responsible for providing the infrastructure and services that support frontline efforts. Thus, the role of the principal in the Experimentalist school is not just to verify that the teacher's class is studiously at work, but also to organize the specialized services and framework conditions -- remedial reading, testing to diagnose learning difficulties, coaching in team building -- on which the teacher's team must rely in formulating and implementing individual learning plans.

In child welfare, caseworkers rely on a center that trains and otherwise qualifies foster parents, facilitates contracting with outside specialists, and marshals resources that respond to the unexpected needs of particular families or sudden community-wide problems (for example, a tornado or an epidemic of methamphetamine abuse). In community policing, the precinct and its subunits count on a department that provides timely information about the migration of crime and criminals from one neighborhood to another (perhaps in response to local success in policing), helps orchestrate cooperation with community groups that work with youth recently released from detention and at risk of re-incarceration, or provides mediation and consultation services in the aftermath of racial incidents.

The key influence in the move toward Experimentalism is the view that services need to be tailored to the needs of beneficiaries. Such tailoring requires the active participation of beneficiaries because effective intervention depends on their cooperation or because they have information essential to diagnosis and planning. As Dewey wrote, "[t]he man who wears the shoe knows best where it pinches."84

83 William H. Simon, The Community Economic Development Movement: Law, Business, and the New Social Policy (2001); Archon Fung, Empowered Participation: Reinventing Urban Democracy (2004). 84 Cited in note , at .


Tailoring also requires understanding of local context. A child welfare worker putting together a plan for an obese child may be able to include a bicycle in the plan if she knows that the family's church can be persuaded to come up with one if credibly assured that it will fill an important need. Police dealing with a high crime neighborhood can be more effective if they learn from local residents that a poorly maintained house from which drugs are sold is a magnet for non-resident deviants.

The solitary "street level bureaucrat", who was discovered in the organizational literature of the 1970s and has since haunted the Minimalists, does not appear in the emerging Experimentalist regimes. The street level bureaucrat exercised tacit discretion under the radar of her superiors in the broad interstices of poorly enforced rules. Experimentalist design departs from the organizational features that gave rise to the street level bureaucrat in four important ways.

First, difficult frontline issues are more likely to be decided by a team than by a single worker. The social professions increasingly see individual problems as functions of multiple and diverse causes that call for interdisciplinary diagnosis and intervention. In the most highly regarded child protective service programs, the case worker's chief responsibility is to form and periodically convene a team that typically includes key family members, a health professional, lawyers for the child and the state, a therapist, and perhaps a teacher.85 In schools, analogous interdisciplinary teams — the classroom teacher, the reading specialist, the behavioral therapist — formulate plans for students with learning difficulties. In community policing, the precinct officers meet regularly with community members and their representatives, school officials, landlord and tenant associations, among others, to determine which crimes are currently most disruptive and threatening, and to explore ways of reconfiguring public services to reduce them — perhaps rerouting a bus line to pre-empt encounters between rival gangs or demolishing an abandoned house that has become a magnet for drug dealers. Group decision-making promotes accountability in two ways. Team members act under the gaze of a shifting array of peers. Thus, the informal pressures of pride and shame are triggered. Furthermore, collaborative decision-making requires articulation, and the fact that the team members are diverse in background means that matters that might be taken for granted in a more homogeneous setting have to be explained and subjected to examination.

85 Noonan et al., cited in note .


The second feature of Experimentalist service provision that distinguishes it from street-level and other bureaucracies is a distinctive form of monitoring. Like event notification practices in Experimentalist risk regulation, social services monitoring engages in intensive scrutiny of individual cases to reveal systemic problems. But where event notification is triggered by unexpected disruptions, core monitoring in Experimentalist service provision is part of the organizational routine.

A particularly well-developed example is the Quality Service Review (QSR) used in child welfare programs in Utah and several other states. The QSR begins with the selection of a stratified random sample of cases. A two-person team, including an agency official and an outside reviewer, examines each case over two days, beginning with a file review and proceeding to interviews with the child, family members, non-family caregivers, professional team members, and others with pertinent information. The reviewers then score the case numerically on two sets of indicators, the first concerning "child and family status" — the well-being of the child and his or her family — and the second concerning "system performance" — the capacity to build teams, make assessments, formulate and update plans, and execute those plans. The initial scoring is then refined in meetings among the reviewers as a group, and then between the reviewing teams and the caseworkers and supervisors whose decisions they have reviewed. Following these clarifications, there is a second meeting to draw general conclusions and a final meeting between reviewers and personnel from the region to discuss the systemic significance of the findings. The final report sets out the aggregate scoring, identifies recurring problems, and illustrates them with specific cases.

The QSR is as much a process of norm elaboration as of compliance enforcement. Agency goals like child safety and family stability ("permanence") are, in the abstract, indeterminate. The QSR helps establish paradigmatic instances of what the goals mean and how the processes should be carried out. Participation by officials from the child welfare department's central administration promotes consistency across regions. The integration of outsiders from other states, or of consultants with national practices, promotes consistency across states. Similarly, QSR data functions both as a measure of performance and as a diagnostic tool for systemic reform. The scores can be compared over time and (in principle, though not yet in practice) across states, giving rough but serviceable indications of where attention and remedial effort should be focused.
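The mechanics of such scoring can be illustrated schematically. The following sketch in Python is purely hypothetical: the regions, indicator names, scales, and figures are invented for illustration and are not drawn from any actual QSR instrument. It shows only how case-level scores from a stratified sample might be aggregated into the regional comparisons on which a final report draws.

from statistics import mean

# Hypothetical data: each reviewed case is scored on two sets of
# indicators, "status" (child and family well-being) and "system"
# (team-building, assessment, planning). A 1-6 scale is assumed here.
cases = [
    {"region": "A", "status": {"safety": 5, "permanence": 4},
     "system": {"teaming": 3, "planning": 2}},
    {"region": "A", "status": {"safety": 4, "permanence": 3},
     "system": {"teaming": 4, "planning": 3}},
    {"region": "B", "status": {"safety": 6, "permanence": 5},
     "system": {"teaming": 5, "planning": 5}},
]

def aggregate(sample, dimension):
    # Compute the mean score per indicator, grouped by region, for one
    # dimension ("status" or "system").
    by_region = {}
    for case in sample:
        by_region.setdefault(case["region"], []).append(case[dimension])
    return {
        region: {ind: mean(scores[ind] for scores in scored)
                 for ind in scored[0]}
        for region, scored in by_region.items()
    }

# Low aggregate "planning" scores in region A, for example, would suggest
# where remedial attention should be focused.
print(aggregate(cases, "system"))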


Third, rules bear a different relation to accountability in Experimentalist administration than in conventional legal thought, including Minimalism. Workers often have discretion to depart from rules where they believe it would be counter-productive to follow them. This discretion, however, is limited by the requirement that the worker depart transparently, in a manner that triggers review and, if her judgment is sustained, prompt rewriting of the rule to reflect the new understanding. These regimes challenge the premise of the street-level bureaucracy literature that the only escape from the rigidity of mechanical rule-following is low-visibility, ad hoc frontline discretion.

Fourth, another important accountability mechanism in many Experimentalist regimes is co-investment among different governmental and private actors. Child welfare case planning in Utah-type regimes includes an assessment of available "private supports." Private supports may come from relatives, friends, churches, or employers connected to the child's family; they may also come from institutions in the community with charitable missions. Community-based housing development grants are structured so that projects typically require the support of multiple governmental and private institutions. Job training programs aspire to enlist private employers in supporting and designing their curricula. From one perspective, the co-investment practice is an extension of the team concept, further marshaling diverse perspectives and expertise. At the same time, it serves as a check on agency discretion: the willingness of independent institutions to invest in the plans and projects provides some reassurance of their soundness.

Co-investment can also occur between programs and their beneficiaries, and this kind of co-investment potentially responds to concerns about beneficiary responsibility. Given his skepticism about complex ongoing administrative judgments, Ackerman favors a scheme of stakeholder grants that is generous upfront in a standardized way but necessarily stingy at the back end for individuals whose choices do not work out. Experimentalism, by contrast, makes possible an approach that responds to need throughout a person's life. The moral hazard problems are muted because the key benefits are services that a public provider has determined, in collaboration with the beneficiary, to be useful, and because the individual co-invests her efforts and perhaps also her resources. The Utah child welfare model, in which parents co-invest in service plans, is an example. A larger-scale example is the Nordic "flexicurity" model, which provides intensive job training and placement services, as well as generous income support, to workers at whatever stage of their careers they need it. The model is controversial, but it is widely credited with neutralizing the employment disincentive effects of high benefits.86

IV. Conclusion

Much current public law scholarship seems to reflect the Minimalist preoccupations with static optimization and administrative simplification at the expense of the Experimentalist concerns with reliability and learning. At the very least, this bias amounts to a major failure of description. The number of major recent policy initiatives that reflect distinctively Minimalist concerns is small compared to the number that employ unmistakably Experimentalist rhetoric and architecture. And the Experimentalist initiatives have been central in a broad array of the most pressing policy areas, including pollution, occupational safety, food safety, policing, and nearly every major sphere of social service.

A common feature of these problems is that they arise in circumstances of fluidity and diversity and that they call for interventions capable of adaptation and contextualization. Interventions of the sort that Minimalism tends to favor – "nudges," market simulation, rule simplification – have scant capacity for adaptation and contextualization. By contrast, Experimentalist regimes have developed forms of non-bureaucratic administration that try to combine accountability with local initiative in ways that facilitate learning and individuation. They strive for accountability less through simple rules than through peer review of local discretion. The aspiration is that pooled learning will discipline local autonomy while generalizing its successes. Perhaps Minimalism could accept such regimes as appropriate to categories of problems other than those on which it has focused, but to do so it would have to soften its resistance to frontline discretion.

86 Torben M. Andersen and Michael Svarer, "Flexicurity – Labour Market Performance in Denmark," 53 CESifo Econ. Stud. 389 (2007).