Adam Garfinkle is Editor of The American Interest.
A DANISH television series called “Borgen” features a politically correct and highly earnest, yet somehow still likable, female politician named Birgitte Nyborg, who ends up improbably becoming prime minister. In one episode she travels to Greenland for what she thinks will be a pro forma, pass-through meeting with Greenland’s premier—who is, of course, an Inuit. The meeting goes badly. Nyborg tries to be friendly, but vaguely so. The premier, who is even more earnest than she, tongue-lashes her after she blurts out in a defensive tic that 40 percent of Greenland’s youngsters drop out of high school, and that, given Greenland’s wide local autonomy from Denmark, the fault cannot lie with the motherland or its government. The gist of his ensuing criticism is that the Prime Minister cares so little about Greenland that she isn’t even willing to stay long enough to meet any ordinary citizens and learn about their real problems.
Stung by this truth-telling, the Prime Minister extends her trip and tries to make amends. She soon learns, for example, that as many as 20 percent of all young Greenlanders attempt suicide, and nearly the whole adult population is devastated by alcoholism. Why? Because, she is told, colonization, and then an unbidden but insistent modernization pressing in around them, have destroyed their culture. The people have “lost their stories,” the ones that tie the present to the past, lean toward the future, and thus create meaning.
In between episodes of “Borgen,” I happened to be reading Marc Lewis’s The Biology of Desire (2015). Lewis, a neurophysiologist, shows why the lately popular belief in “addiction as a brain disease,” however tactically useful for addiction counselors and insurance companies, is bad science. But he also writes about how addicts conquer their problems, which he calls not “recovery,” but rather “personality development beyond addiction.” Lewis argues that afflicted people need to be able to project a story line outward that gives them a vision of themselves in a better future. Trying to merely suppress cravings doesn’t work; it leads to ego fatigue, inner demobilization, and relapse. He reasons that entire cultures, or parts of them, can become incubators of addiction if they lose the stories that connect families, and through them individuals, to tradition and social rituals that nurture collective pride and personal worth. He could be talking about families in affluent American zip codes, in which Alpha parents fail to pay sufficient emotional attention to their over-privileged teenagers, or about whole shards of less-affluent America that Robert Putnam discusses in Our Kids (2015). Or he could be talking about Greenland.
The point is that development does not happen in a socio-cultural vacuum any more than addiction is a hermetically sealed “medical” problem. Those societies that have managed to find ways to affluence through broadly shared productivity are ones that work as societies endowed by their stories, not just as economies. In other words, certain groups of people more than others have been able to build the sorts of institutions conducive to generating prosperity. This means that societies get rich because they function effectively; they don’t function effectively because they somehow got rich.
Indeed, the whole idea of an economy as such is an artificial construction of early twentieth-century Western social science and, as such, corresponds ontologically to nothing sociologically real. This means that the perduring dichotomy between economic development and political development is not really a dichotomy at all; it’s simply a widespread category error. When Auguste Comte wrote that “intellectual confusion is at the bottom of every historical crisis,” this is what he meant (despite being afflicted by confusions of his own). And this is so even if one does not rank the failure of wealthy governments to generate development in poorer nations as a top-shelf historical crisis.
As is well known, in September the many heads of state who gathered for the annual UN General Assembly extravaganza adopted the Sustainable Development Goals agenda, the keystone to the UN’s follow-on to its earlier Millennium Development Goals. Many of these so-called Agenda 2030 goals will not be met; others may be met only by accident or by dint of reasons never considered. To understand why, it is useful to review the long skein of intellectual confusions that compose the desultory “development” efforts of the past seven decades. Doing so will not only reaffirm the humbling limits of human understanding, but may also help us discover, through an improved epistemology of development, a more fruitful way of pursuing a goal that is simultaneously of significant moral and pragmatic value.
Harold Lasswell famously described political science as being about who gets what, when, and how. Ironically, perhaps, he did so in 1936, just at the time when the traditional concept of political economy—the term that, say, Hume and J.S. Mill would have known it by—was falling apart. The development of the social sciences in the West is a fascinating subject, but also a confusing one, given that it is an inherently recursive and, hence, awkward enterprise: social science trying to understand itself is a bit like a frog trying to interpret its own reflection off the surface of a pond. Suffice it to say that, for a variety of reasons, in the years immediately after World War II economics was hived off from the other social sciences, given a quantitative suit of armor, and posed as a “positive” science—one capable of generating objective empirical truths without paying heed to values or policy implications. One cannot begin to understand the early trajectory of “economic development” efforts unless one grasps the modern origins of economics as a discipline.
With economics removed from its quiver—and with a similar mitosis of what used to be called political philosophy into two lonely constituent parts—post-Lasswellian political science became an impoverished shadow of its former self as political economy. Before long, American political scientists began whoring after the prestige and Federal grant money that attended the quantifying disciplines, with economics, and soon econometrics, leading the way. The will to quantification desiccated economics itself, with essential microeconomic analysis falling by the wayside in the face of that era’s equivalent of macroeconomic “big data.” But it all but destroyed the integrity of an orphaned political science, which moved in directions conducive to the application of quantitative methods—such as voting behavior, opinion analysis, and things like that. The result defined away the most important dimensions of politics itself, namely the “why” as opposed to the “how” questions that together compose the true problem set of the discipline.
One result was that when, in the early post-colonial age, Western elites turned their attention to how to make newly emancipated poor countries grow economically, political science—and its bastard infant, international relations, the offspring of a promiscuous union with history, sociology, and Lord-knows-what-else—had little of use to say, at least until Samuel Huntington’s Political Order in Changing Societies (1968) came along. Huntington’s argument twinned political and economic development, and by so doing ran more or less parallel to Edward Banfield’s analysis of social and economic development both abroad and in the United States—in this context, consider his The Moral Basis of a Backward Society (1958). Both scholars were vilified on the altar of a then-youthful incubus of political correctness, although, of course, time has proven both to be correct.
Actually, it was worse than that. Western social science at the time was fully entombed by the philosophical biases of the day—Skinnerian positivism and the social engineering fantasies that flowed, or rather oozed, from them. Want to generate economic growth in Ghana, or Burma? Simple: generate aggregate demand through import substitution; educate the necessary modernizing cadres in the West, while building up local educational institutions to mimic Western ones; transfer technology, shake well, add an olive, and voilà: economic development. Robert McNamara’s years as President of the World Bank epitomized this mechanistic, culture- and institution-free approach. It remains unclear whether McNamara did more harm and wasted more money at the World Bank or as U.S. Secretary of Defense during the Vietnam War.
Essentially, the early practitioners of economic development thought that there was only one path to the goal, a path that by nature was measurable. The West, being ahead on this single-lane highway to the abundant future, could simply deconstruct its own experience and cause it to be replicated and accelerated in other countries. Unfortunately, their understanding of their own experience was deficient. Western experts wrote about the passing of traditional society (Daniel Lerner), about culture-sanitized models of “take-off” (W.W. Rostow), and nearly all agreed that religion (read: superstition) and other forms of arational behavior would fade as economic and social rationalization/modernization proceeded along basically Weberian lines. They were basically Whigs at heart, believers in the inevitable material and moral progress of humankind. The underlying metaphor was captured around 1752 in Giovanni Battista Tiepolo’s famous ceiling fresco above the staircase of the Würzburg residence of the Schönborn prince-bishops, in which all of humanity is ranked according to level of civilization, with the epicenter of superior splendor located, of course, in Franconia itself, in the very building in which the beholder stood.
Needless to say, this method of generating economic development did not work so well. It worked about as well as Lyndon Baines Johnson’s Great Society programs succeeded in winning the War on Poverty in the United States. Just as the original neoconservatives who gathered around The Public Interest magazine critiqued social engineering approaches to building the Great Society, a few observers, Peter Bauer being the best and bravest, critiqued the World Bank/London School of Economics consensus about how to generate economic growth in what soon became known as the Third World. Bauer pointed out—first in The Economics of Under-developed Countries (1957), then in Dissent on Development (1972), and also in Equality, the Third World, and Economic Delusion (1981)—that most recently emancipated countries lacked most of the necessary preconditions for modern market economics to form, and for growth to occur, and that these preconditions were affected by both social structure and culture (related, but not the same, as any Anthro 101 graduate knows). No serious person disputes this today. But to Western development experts then—overwhelmingly economists by profession, liberals by faith, and meliorists by nature—overtures to the relevance of culture belied their lowest-common-denominator Enlightenment universalist credo and, worse, smacked of implicit racism. Besides, one could not quantify culture. So, while Bauer was prescient, he too was vilified by the new cottage industry that had arisen to generate economic growth in the Third World.
Donor governments and pluricratic institutions like the World Bank wasted trillions of dollars before significant dents appeared in their armor of pseudo-scientific smugness. These dents were administered, as only they could have been, by renegade economists and development experts whose number grew as early development efforts failed to measure up to expectations. W. Paul Strassmann’s Technological Change and Economic Development (1968), written in the shadow of Robert Solow’s path-breaking work on the sources of economic growth, had the temerity to point out that technology transfer could not work as planned when the relative factor endowments of the supplying economy did not align with those of the receiving society. A machine born under conditions of labor scarcity but capital abundance (say in the United States) that got transferred to a place where capital was scarce but labor was abundant (say India) would result in vast new slums, like those in Calcutta, filled by people forced off the land by mechanized agriculture.
Better known than Strassmann was Albert O. Hirschman. His Development Projects Observed (1967) emphasized the roles played not only by uncertainty, side effects, and cultural limits, but also by the centrality of corruption. Most of his associates at first did not take him seriously, at least as evidenced by their behavior. Bureaucracies are conservative institutions, resistant to change even when individuals within them realize the futility—or the counterproductivity—of what they are doing.
Strassmann, Hirschman, and others essentially questioned the core positivist assumptions of modern macroeconomics: the proper level of analysis is the individual; said individuals are rational actors; and rational actors are value maximizers. Thanks to Mancur Olson’s reintroduction of the idea of the logic of collective action in his 1965 book of the same name, and his focus on administrative capacity in his The Rise and Decline of Nations (1982), the first of these assumptions was blown out of the sky. The rise of behavioral economics has long since put paid to the second. As to the third, Herbert Simon vanquished it with his notion of “satisficing” or “bounded rationality” as early as 1956; but, as usual, it took most economists a while to catch on. Some, apparently, have yet to do so; rational choice theory is still alive and well, at least in academia.
Many Moving Parts
During the Cold War, none of this foundering about mattered as much as might be supposed. Despite a widespread functionalist view that economic growth was the best protection against an opportunistic communist surge, the real aim of American foreign aid was not to generate economic growth in poor countries. It was to influence the elites of recently decolonized countries either to do things we Americans wanted them to do, or at least not do things we didn’t want them to do. The monies paid were either outright bribes or insurance premiums, depending on how blunt one cares to be. Many critics, including the aforementioned Edward Banfield, were skeptical that foreign aid could deliver political benefits, as made evident in American Foreign Aid Doctrines (1963). But since politics trumped economics as the motive for such programs, the money was not necessarily wasted, even if it did not work well to generate economic development.
Arguments have been made that the money not only failed to work to produce economic development, but that it was counterproductive because it allowed post-colonial authoritarian cliques to better cling to their rentier, gatekeeper functions and thus, by enabling such cliques to extend their tenures, delayed reform for longer than would otherwise have been the case. A case in point is a 2002 essay in The National Interest magazine by Bruce Bueno de Mesquita and Hilton L. Root, entitled “The Political Roots of Poverty.” To some extent, this argument is based on a counterfactual assumption that shorter tenures for bad leaders would have enabled better leaders to become empowered sooner. The evidence for this expectation is mixed. For example, what has followed the Mobutu regime in Congo/Zaire doesn’t seem a great deal more enlightened or public-minded than what went before, just less competent at holding on to power. Even so, the argument is at least plausible.
So, now that the Cold War is a quarter century in the grave, and development functionaries have presumably learned from their many early errors, what are the prospects? The field is vast and hard to summarize neatly, but it is reasonable to say that the Skinnerian assumptions of the early period have given way to a more nuanced understanding—one that takes to heart the socio-cultural factors that shape political institutions, and, hence, the prospects for economic development, and hence, too, the ways that development assistance can most effectively be applied. Douglass North was particularly responsible for introducing the centrality of institutional development into the mental set of development specialists in the 1980s, with two influential works bracketing that decade: Structure and Change in Economic History (1981) and Institutions and Economic Growth: An Historical Introduction (1989).
But a more nuanced understanding has only complicated matters by showing how many moving parts there really are to poverty eradication and economic growth (not exactly the same things), leading in turn to new arguments about sequencing and thresholds for self-sustainment. Moreover, it has become obvious that most societies are resistant to both rapid internal change and external manipulation, at least short of extreme social disruption of the kind caused by wars. In short, the old mechanistic, social engineering dispensation was simple, but just happened to be wrong. The new understanding has a better grip on reality, but its complexity yields actionable policy insights only with difficulty. One result is that the naïve optimism of the early period has bowed before a more realistic sense of limits.
This is probably a generous depiction of current reality. Most World Bank economists have not changed their stripes. They are not especially interested in cultural aspects of development, and instead maintain their drive-by, model-driven prognostications when they take to the field, however briefly, in their line of work. The U.S. government, by contrast, has made some intellectual progress, as evidenced by the Millennium Challenge Corporation, which gives the recipient country a stake in both the planning and implementation of development programs. It proffers a conditionalized partnership model, not a charity model, and that is to the good. Its smarter officials realize, as did Huntington, that good government—meaning a competent state capable of exercising effective executive authority—is more important to development than democratic government, at least at earlier stages. Some will even say privately that democratic forms introduced prematurely in ethnically heterogeneous societies will more likely produce competitive kleptocratic elites, political instability—and even violence (read: Kenya in 2007–8, or Burundi much more recently)—than they will produce prosperity.
Yet even the wiser officials, and most bureaucrats, remain behind the curve of best intellectual practice. Francis Fukuyama’s recent volumes on this subject represent that best practice: The Origins of Political Order (2011) and Political Order and Political Decay (2014). They are being read in the development trenches, as are other high-profile attempts to advance this particular ball down the field, such as Daron Acemoglu and James A. Robinson’s Why Nations Fail: The Origins of Power, Prosperity, and Poverty (2012). But it takes time for sophisticated ideas to trickle down into a translation suitable for bureaucratic tradecraft.
This is not the place to review 1,243 pages of text and notes, except to say that Fukuyama’s argument is based, loosely at least, on Alexander Gerschenkron’s notion of path dependency. Fukuyama identifies three essential components of an effective political order, without which self-sustaining economic growth is unlikely to occur: executive competence, accountability (whether substantive or procedural), and rule of law. It is the sequence in which these three elements develop and intertwine that defines regime type. The question for developing countries that lack one or more of these three basic elements regards the proper, deliberate sequencing and intertwining required in their particular cases to achieve growth—and what, if anything, the outside world can do to help.
Beyond this general question, however, is another: How, more specifically, does one prioritize institutional elements like education, health, banking and finance, labor and patent law (and, of course, one can go on listing relevant variables)? Even after 1,243 pages, the reader does not receive specific do-it-yourself assembly instructions.
Current intellectual best practice may also include two twists. One, flowing from the pen of James C. Scott, questions whether government-directed development programs are even a good idea, or a practical one, in the first place. In one book, Seeing Like a State (1998), Scott doubts whether anything resembling the London School of Economics’ top-down policy is likely to produce positive results for most people in most countries. In a sense, he is tacitly at one with Hayek on the point that command economies and industrial policies can never be as smart as dispersed actors drawing on what Scott calls metis—local, practical knowledge.
But in The Art of Not Being Governed (2009) he goes even further: What, after all, is so inherently superior about Western materialistic urbanized living over traditional village life? What is inherently superior about highly governed modular-role Weberian models compared to less-governed social structures still based on segmentary lineages? Again, this is not the place to detail these arguments, only to note that these are serious questions; but the development cottage industry takes little heed of them.
Related in a way to Scott’s broader argument is the second twist: a trend toward the biologization of social and historical explanation. There are now several books arguing that genetic endowments account for slight differences in social behavior that lead, over time, to distinct institutional set-ups, some of them conducive to development as conventionally understood, and some not. The first of these to make a splash was Gregory Clark’s A Farewell to Alms (2007), which argued that the trickle-down of upper-class traits conducive to economic success into a new middle class is what really drove the Industrial Revolution.
More recently and controversially, Nicholas Wade’s A Troublesome Inheritance (2014) argues that racial differences exist, that evolution is continuous and recent, and, most importantly, that differential adaptation stresses have selected for forms of social behavior such that the main races have developed different distributions of them. They have, therefore, evolved differing institutional arrangements, some of which are friendlier to economic dynamism than others.
Wade does not claim, of course, that the human genome itself differs from place to place and race to race, but that changes in the distribution of alleles in a population over time are the mechanism of adaptation, giving rise to discernable behavioral and institutional differences. He acknowledges the huge role of culture, but argues that the most stable aspects of culture themselves have genetic bases.
It is not clear that Wade has his causal arrows pointed in all the right directions, and he admits that empirical evidence linking stable cultural apparata to evolution remains sparse. Nevertheless, despite the obvious potential for abuse, the argument is plausible and its implications are clear: very deeply rooted differences among societies explain why some abound in human capital, social trust, and institutional coherence—the triadic formula for success in the modern world—while others do not. He is pessimistic, therefore, that effective institutions can be transmitted from one society to another. Nation-building anywhere but home, in his view, is a mug’s game.
Development and the SDGs
Now, how does the best available thinking on development align with the thinking associated with the UN’s 2015 Sustainable Development Goals?
The goals are highly numeric, both in discrete objectives and in the price tags associated with them. The flavor comes across in an essay by Arun S. Nair entitled “The Summer of Sustainable Development” in the magazine Diplomatic Courier, dated July 28th, 2015. There were eight Millennium Development Goals and 21 anti-poverty targets associated with the year 2000 conclave; now there are 17 goals and 169 targets. Government spending, Nair claims, is falling short of achieving these goals by $22.5 trillion. Uganda’s Sam Kutesa, President of the 69th Session of the UN General Assembly, is quoted as saying that additional funding to eradicate extreme poverty will require between $135 and $195 billion every two years, while new infrastructure projects related to transport, water, energy, and sanitation will require investments of five to seven trillion dollars annually.
But, writes Nair, “a major concern in this regard is the insufficient ‘net ODA’ from the OECD”—in other words, foreign aid from rich-country governments to the central governments of poor ones. Most of the 28 OECD members are accused of giving too little, and the United States is singled out for special criticism. Nair quotes George Soros’s book On Globalization (2002) on this point:
It is not by accident that international resource transfers are running so far below the 0.7 percent of GDP target or that the U.S. stands lowest among the developed countries. . . . There is a strongly held belief, particularly in the U.S., that foreign aid is ineffective and sometimes even counterproductive. What is worse, this concern is without foundation.
Actually, it is not without foundation, and one would not know from this statement that the U.S. government is the single largest source of ODA: about $26.5 billion of a global total of $150 billion. Soros himself adds that the effectiveness and impact of foreign aid could be greatly improved by a new paradigm “built around giving recipients a greater sense of ownership and participation in the programs that are supposed to benefit them.” That describes exactly the essence of the Millennium Challenge Corporation, but that was a Bush Administration initiative, so Soros has not a kind word to say about it. But if he admits that the effectiveness of foreign aid isn’t what it could be, then does he not contradict himself by claiming that there is no foundation for questioning the effectiveness of foreign aid?
It is clear that the September gathering in New York was really about leveraging heightened sensitivity to inequality, the data for which is mostly misunderstood or deliberately distorted, to shake down wealthy countries into donating more money to poor ones. The new goals and targets encompass a multiplicity of relevant factors, so we are well beyond the primitive technology-transfer model of the 1960s; but the associated UN or OECD materials still lack any explicit epistemology of development. In its absence, one finds an all-but-the-kitchen-sink inventory of discrete meliorist projects without context: little investment in institution building, and barely any mention of ethno-cultural factors or corruption. Political correctness reigns.
In short, while our understanding has advanced, the bureaucratic routines of the foreign aid industry have changed little. This does not mean that ODA isn’t useful. It is particularly effective, for example, in public health. The Bush Administration’s May 2003 President’s Emergency Plan for AIDS Relief (PEPFAR) initiative did far more short-term good than the MCC, for example; it saved tens of millions of lives, mostly of people of working age or younger. Since there is a clear negative correlation between disease and development, this program must be considered as perhaps the greatest success in the history of development assistance. ODA has also been put to good use in water/sanitation programs, in reducing under-five mortality, for female health, and in fighting polio, malaria, and other diseases. All of this improves living conditions; what it does not do, in the absence of a more mature institutional array, is lead to sustained economic growth.
The Future of Development
So, measured against what we at least think we know about political economy in the twenty-first century, what does the future of development look like? Here are the relevant trends and facts in a nutshell; they coincide only partially with UN-speak on the subject from the September 2015 gathering in New York.
The most important factor to grasp is that global economic growth over recent decades has resulted in several countries attaining middle-income status, and in a greater eradication of abject poverty than during any comparable period in recorded history. That status has not come about because of foreign aid or World Bank projects, however. It has come about because of sounder macroeconomic and fiscal policies, more trade and remittances, more liberal capital flows, and better governance. It is also related to rapid urbanization in the developing world; population density has historically been the best indicator of the pace of economic transactions.
All this sounds like great news, and it is. But wait.
The global middle class now numbers about two billion people, and will probably double by 2030. As people get richer, they demand higher-quality food and better functioning governments, and, as technology pushes against existing arrangements—what Schumpeter called “creative destruction”—social stability and political calm usually take a dive. The common idea that rapid economic advances will make politics less roiled is perhaps the most ahistorically foolish one around. Relatedly, while the growth of cities can stimulate development if cities are equipped with the right leaders, strategies, and infrastructures, cities with dysfunctional governments will more likely become breeding grounds for criminal nodes, violence, disease, and overall state decay.
Then there is the global youth bulge, which is partly a result of economic growth. Like the growth of cities, this trend could cut both ways. The bulge could be a major source of talent if educational access and quality keep up, or a big-time problem if youth lack opportunities for inclusion in the global economy. Similarly, communications technology in developing countries is being revolutionized and will have a major impact on economic life. Africa alone is expected to have one billion mobile phone users by 2020, with most able to access banking and other services. That could be a formula for growth, or a disintermediation formula for the decay of state executive capacities.
There will also likely be more frequent global pandemics in the future, due to increased travel, urbanization, and diets with increased protein that raise the risk of “zoonotic” diseases (those that go from animals to people). If these pandemics are not effectively managed, they can—like the AIDS epidemic—wipe out many development gains almost overnight.
Also, as societies continue to develop and populations continue to grow, energy demand will rise—and so will the associated economic costs and environmental problems. The same may be said for infrastructure. More people, with rising expectations for their standard of living, will require huge infrastructure investments. If these investments are to be made efficiently, governance must be non-corrupt and financial markets in particular need to be free of cronyization. Good luck with that one, for corruption remains an endemic issue.
While corruption is a culturally variable concept, there is no doubt that kleptocratic elites in the developing world (and elsewhere, too)—enabled by a system of global finance centered in the West—drain many state treasuries of money that belongs to the people. For every dollar of ODA that comes in the front door of many developing countries, between eight and ten dollars go quietly out the backdoor into an offshore shell account. Demands for more ODA from some sub-Saharan African autocrats acquire a different hue in light of such facts. The United States and other Western governments can do more for the treasuries of many poor countries by putting a stop to the behavior of shyster Western bankers, lawyers, and accountants than they can by increasing ODA by a few tenths of a percent.
There is also now more to steal. With increased wealth in developing countries, domestic tax bases and government revenues have expanded to the point that developing country governments need less external assistance if they harness local capital markets, savings, and taxes and fees collected. In 2012, developing and emerging economies mobilized around $7.7 trillion in domestic resources—some fifty times the roughly $150 billion available in ODA.
On the other hand, a growing plague of regional security threats endangers basic order in many countries, and without basic order development doesn’t happen. Think Syria, or South Sudan. Radical Islamist groups in the Middle East and Africa, gangs and criminal networks in Central America, and civil conflict in Africa are vastly greater threats to development than a shortfall of ODA from the OECD.
Worse, perhaps, the future may bring significant social, and therefore economic, discontinuities in wealthier countries. The recent swoon of the Chinese stock market, as it tried to swallow and digest a huge real estate bubble, is just one sign that a global recession (or worse) is likely to come. A world economy that is more tightly interconnected tends to crash together after it rises together. Global economic reality since the end of the Cold War has therefore come to resemble a mini-max game with a single integrated business cycle, which is frankly a little scary.
Even more basic, the future of the Westphalian territorial state itself, as an agent of effective governance, is in doubt. There is a growing misfit between the ambit of national political authority and the spatial characteristics of economic and cultural transactions. It will take time to work out what this means as effective sovereignty migrates both upward toward transnational arrangements and downward toward devolved subsidiarity.
For many years the world’s societies will coexist on a multi-speed and rapidly changing planet. As this process ensues, the world’s poorer countries should not expect much help from rich but roiled societies in the throes of disorienting and multi-level political ricochet. If they are to develop the human capital, social trust, and institutional coherence necessary to succeed, they will probably need to muster their own cultural resources to do it for themselves.