Book Reviews

March 30, 2008

180) Protectionism vs. Free Trade: the eternal debate

How Rich Countries Got Rich ... and Why Poor Countries Stay Poor
Erik Reinert
Hardcover: 320 pages
PublicAffairs; 1st Carroll & Graf Ed edition, 2007
ISBN-10: 0786718420
ISBN-13: 978-0786718429

Book Description

In this refreshingly revisionist history, Erik Reinert shows how rich countries developed through a combination of government intervention, protectionism, and strategic investment, rather than through free trade. Reinert suggests that this set of policies in various combinations has driven successful development from Renaissance Italy to the modern Far East. Yet despite its demonstrable success, orthodox development economists have largely ignored this approach and insisted instead on the importance of free trade. Reinert shows how the history of economics has long been torn between the continental Renaissance tradition on one hand and the free market theories of English and later American economists on the other. Our economies were founded on protectionism and state activism—look at China today—and could only later afford the luxury of free trade. When our leaders come to lecture poor countries on the right road to riches, they do so in almost perfect ignorance of the real history of national affluence.

About the Author
Erik S. Reinert, author of Globalization, Economic Development and Inequality: An Alternative Perspective (2004), is Professor of Technology, Governance and Development Strategies at Tallinn University of Technology, Estonia, and President of The Other Canon Foundation, Norway. He is one of the world's leading heterodox development economists.

Two opposing reviews:

To Become a Rich Country: Industrialization Policy First, Free Trade Second
November 12, 2007
By Serge J. Van Steenkiste (Atlanta, GA)

Erik Reinert masterfully uses experience-based economics to demonstrate how rich countries got rich. Economic growth and welfare in rich countries originated not in unrestrained international free trade, but in conscious and deliberate industrialization policy that progressively shaped a particular form of economic structure (pp. xx, xxiv, 9-10, 47-48, 65, 79-83, 88, 98-100, 115-20, 177, 198, 246-49, 288-89).

Reinert's greatest merit is to clearly show how economic development really works (pp. 39, 52, 305-08):

1) A country first industrializes behind a wall of tariffs, direct subsidies, and/or patents and is then slowly and systematically integrated economically with nations at the same level of development (pp. 17, 22-24, 56, 84, 88, 134, 171, 210, 235, 268-69, 273). The United States followed the example of England to industrialize behind a protectionist wall for about 150 years based on Adam Smith's Wealth of Nations (pp. 23-25, 31, 58, 212). The Marshall Plan to reindustrialize Europe after WWII was built on the same logic (pp. 63, 89-90, 179-81, 241, 265-66).

Countries already wealthy need very different economic policies from those of countries still poor (p. 81). An aspiring poor country needs to tax "bad" trade, i.e., exports of raw materials (read agricultural or mining products) and imports of industrial products (pp. 17, 21, 62, 78). Perfect or commodity competition is for the poor, resulting in price-driven diminishing returns, no industrialization, and emigration to the rich world (pp. 8, 18, 62, 71, 133, 149-201, 245, 280-81). Unlike development economics, palliative economics does not radically change the productive structures of poor countries but instead focuses on easing the pains of economic misery (pp. 63, 179, 211, 239-70, 282, 296-97).

Paradoxically, being poor in natural resources is one of the keys to becoming rich (pp. 7, 77). A poor country has to encourage "good" trade, i.e., imports of raw products and exports of industrial products. The existence of an (inefficient) manufacturing sector establishes a national wage level which prevents countries from moving too far into diminishing returns (pp. 109, 124, 183, 251, 265, 295). Reinert observes that "good" trade also results from the export of industrial goods in exchange for other industrial goods (p. 89).

Initial protection is essential to achieve increasing returns and to access new technologies (p. 67). Once these goals are achieved, further protectionism is counterproductive. Reinert contrasts the "good" protectionism of East Asia with the "bad" protectionism of Latin America (pp. 224, 285, 311-12). Solidly industrialized countries require bigger and more international markets to further develop and prosper (p. 81).

The timing of the opening up of an economy to international competition is therefore critical. Opening up too late undermines growth (p. 205). In contrast, opening up prematurely will result in deindustrialization, falling wages, and increasing social problems (p. 252). Mongolia and Peru are two examples that come to mind (pp. 110, 164, 173-79, 251-52). That insight about the timing of free trade is absent in the Washington Consensus as applied to most of the developing world (pp. 19, 55, 68-69, 81, 84, 107, 204, 216-37, 244, 278, 295).

2) The preconditions for wealth, democracy, and political freedom are diversified manufacturing and knowledge-intensive services subject to increasing returns (pp. 268-69). In some economic activities (read manufacturing and advanced services), costs fall as the volume of production increases (pp. 36, 38, 108-09).
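Reinert's increasing-returns point can be made concrete with a toy cost model (the numbers and function names below are illustrative assumptions, not figures from the book): an activity with a large fixed cost, such as manufacturing, sees unit costs fall as volume grows, while an extraction activity subject to depletion sees them rise.

```python
# Illustrative unit-cost curves (hypothetical numbers, not from the book).

def manufacturing_unit_cost(q, fixed=1000.0, marginal=2.0):
    """Increasing returns: the fixed cost is spread over more units."""
    return fixed / q + marginal

def mining_unit_cost(q, base=2.0, depletion=0.01):
    """Diminishing returns: each extra unit is costlier to extract."""
    return base + depletion * q

for q in (10, 100, 1000):
    print(q, manufacturing_unit_cost(q), mining_unit_cost(q))
# Manufacturing unit cost falls (102.0 -> 12.0 -> 3.0) while
# mining unit cost rises (2.1 -> 3.0 -> 12.0) as volume grows.
```

The asymmetry is the whole point of the argument: a country specialized in the first kind of activity gets richer the more it produces, while one specialized in the second runs into a cost ceiling.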

David Ricardo's theory of comparative advantage in international trade relies on a number of simplistic, abstract assumptions that too often lock poor countries into specializing in being poor (pp. 15, 23, 42, 59, 75, 106, 277, 301-04, 309-10). One of the key assumptions is that there are no qualitative differences between economic activities. If left alone, the market will even out the differences between, say, Microsoft in the U.S. and goat-herders in Mongolia (p. 177). Ricardo's theory is also built on the rigid assumption that there can be no change in specialization, which unsurprisingly results in factor-price polarization (pp. 19, 117, 213-14).
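The Ricardian logic Reinert criticizes can be sketched with a toy two-country, two-good example (hypothetical numbers, not from the book): each country specializes where its opportunity cost is lowest, even when one country is absolutely better at everything, and the model is silent on whether the assigned activity is subject to increasing or diminishing returns.

```python
# Toy Ricardian comparative advantage: labor hours per unit of output.
# Hypothetical numbers for illustration only (not from Reinert's book).
hours = {
    "Rich": {"software": 1.0, "wool": 2.0},  # better at both goods
    "Poor": {"software": 6.0, "wool": 3.0},
}

def opportunity_cost(country, good, other):
    """Units of `other` forgone per unit of `good` produced."""
    return hours[country][good] / hours[country][other]

for country in hours:
    oc = opportunity_cost(country, "software", "wool")
    print(f"{country}: 1 unit of software costs {oc} units of wool")

# Rich: 1 software costs 0.5 wool; Poor: 1 software costs 2.0 wool.
# The model concludes Rich should specialize in software and Poor in
# wool -- treating the two activities as qualitatively identical,
# which is precisely the assumption Reinert attacks.
```

Nothing in the calculation distinguishes a high-tech sector from a raw-material sector; that indifference is what, on Reinert's account, locks poor countries into "specializing in being poor."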

3) Economic wealth results from synergies, i.e., people of many different trades and professions sharing a community (pp. 73, 93-95, 102, 136, 268-69, 275). The diversity of economic activities based on the extent of division of labor makes it possible for new knowledge to be transferred from one sector to the other (pp. 94-95, 256-63, 276). Ricardo's theory completely ignores synergies (p. 214).

Reinert clearly explains to his audience that this path for a country to join the club of rich countries is much more difficult today than in the past for the following reasons (pp. 290, 292-93):

1) Information Technology: Unlike process innovations, product innovation tends to create imperfect competition and higher wages. For example, at Google, search technology as a product innovation results in high wages and high profits. When the same technology is employed in, say, the hotel and airline industries, the results are falling margins for the travel industry and lower wages for many persons employed in that sector of activity (pp. 188, 229).

2) Intellectual Property: The increasing percentage of copyrighted, trademarked, and patented products widens the gap between rich and poor countries (pp. 111-14). For example, the pharmaceutical industry based in rich countries works hard to legally protect the output of its substantial investment in research and development.

3) Workers Distribution: There is a transition from single-plant economies of scale towards multi-location economies of scope. For example, the integrated American automakers are evolving toward modular architectures for their mainstream models to compete on speed and flexibility.

4) Workforce Mix: Manufacturing increasingly gets automated while services occupy an increasing percentage of the total workforce.

5) Workers Substitution: Service workers are often more easily substituted than specialized industrial workers, resulting in diminished workers' bargaining power.

6) Employers Fragmentation: Decentralized franchising instead of centralized ownership also reduces workers' power at the negotiation table.

To summarize, Reinert recommends that poor countries study the policies of those who created American and European prosperity, and ignore the advice of their forgetful successors (pp. xxix, 13). "Do not do as the Americans tell you to do, do as the Americans did," concludes Reinert (pp. 23, 168).

==========

Nice History, Bad Logic
January 28, 2008
By J. Hanley "Christian Anarchist" (Michigan)

Reinert gets 3 stars for a well-argued case for protectionism by developing countries. He accurately shows that most now-developed countries used protectionist policies during their developmental periods.
He fails, however, to prove that those countries' protectionist policies were causal factors in their development, and he never disconfirms the unavoidable counter-argument that protectionism actually retarded their development.
This may sound silly: after all, the U.S., England, France, etc., are all developed now, so how could their development have been retarded? Well, the fact is, we don't have alternate histories to compare for these countries. Reinert only assumes, he cannot demonstrate, that they could not have developed faster had they eliminated their protectionist policies.
In short, protectionism may just be a historical fact of now-developed countries. But of course correlation does not equal causation.

Counter-arguments:
England actually developed faster after it eliminated its "corn laws" (restrictions on import of grains).
Government agencies, not having to worry about losing money and going out of business, have little incentive to make good investment choices, and do not have a good track record of doing so. Reinert doesn't really deal with the fact that protectionist policies only protect those businesses with the political influence to get government to favor them. That, of course, doesn't automatically equal good economic choices by government. And he doesn't satisfactorily answer Adam Smith's claim that free markets are for the benefit of consumers, not businesses. If protectionism drives up prices, that harms consumers. And of course consumers cannot support as many different local/national businesses if too much of their income is directed by the government toward just a handful of companies.

But the book is worth reading to see the free-market skeptic's side of the argument. Just don't forget that all these arguments have been answered before, by a variety of economists.

March 28, 2008

179) The battle for Japan: especially savage

Death in the Pacific
By EVAN THOMAS
The New York Times Book Review, March 30, 2008

RETRIBUTION: The Battle for Japan, 1944-45
By Max Hastings.
Alfred A. Knopf, 615 pp. $35.

In the war against Japan, American naval commanders faced what might be called the prison ship problem. Submarines had little way of knowing which Japanese transport ships were carrying prisoners of war. In any case, “the U.S. Navy adopted a ruthless view,” Max Hastings writes. “Destruction of the enemy must take priority over any attempt to safeguard P.O.W. lives.” As a result, some 10,000 Allied prisoners were doomed (including more than twice as many Americans as have perished in Iraq). And if the Americans didn’t kill the P.O.W.’s, then the Japanese did. Aboard the tramp steamer Shinyo Maru, the Japanese guard commander told the prisoners that if the ship was attacked, he would slaughter them all. As promised, when it was torpedoed in September 1944, the Japanese guards machine-gunned the prisoners trying to abandon ship. About 20 men escaped into the sea and were rescued by another Japanese ship. When their identities were learned, they were executed.

War, while sometimes necessary, is rarely ennobling. The war in the Pacific seems to have been particularly degrading. The Japanese massacred and tortured American soldiers and horribly abused civilians; the United States bombed and burned their cities. In his masterly account of the climax of the conflict against Japan, “Retribution: The Battle for Japan, 1944-45,” Hastings suggests a kind of perversion of the golden rule. Combatants who brutalize their enemy will be brutalized in return, or, in Hastings’s rather delicate phrasing, “It has been suggested ... that few belligerents in any conflict are so high-minded as to offer to an enemy higher standards of treatment than that enemy extends to them.”

Other authors, especially John Dower, have shown that race added a nasty edge to the Pacific war. Hastings memorably quotes the way the British general Sir William Slim captured the mood of the time: the Japanese soldier, Slim said, “is the most formidable fighting insect in history.” The Americans’ racial hatred was no less harsh. Returning to Hawaii from combat on Iwo Jima, some marines paraded in front of Japanese-Americans, waving a Japanese skull and taunting, “There’s your uncle on the pole.”

But Hastings rejects moral equivalency. Defending the use of the atom bomb, he essentially argues that by the cruel logic of war, the Japanese beckoned fate. “War is inherently inhumane,” he writes, “but the Japanese practiced extraordinary refinements of inhumanity in the treatment of those thrown upon their mercy.” Sadism by the Japanese was not occasional but institutional. Prisoners of war and civilian internees were starved, bayoneted, beheaded, raped and, in some cases, vivisected.

The Japanese were careless even with the lives of their own troops. The Japanese Navy, unlike the American Navy, had no search and rescue for downed fliers, and so lost hundreds of experienced aviators. Militarists twisted the ancient samurai code of Bushido into a sick cult of death. The Japanese were supposed to wish for death over surrender, and as the war went on, the Americans accommodated them. After Japanese prisoners tried to sabotage American submarines, the subs stopped picking them up, and soon most United States ships refused to rescue Japanese in the water, except to pick up an occasional “intelligence sample.” Since surrender was considered shameful, any Americans who had given themselves up were deemed to have lost their honor and thus “forfeited fundamental human respect.”

In 1942, the Japanese prime minister, Gen. Hideki Tojo, was told nothing of the Japanese Navy’s defeat at Midway until weeks after the event. “Faced with embarrassment,” Hastings writes, “Japanese often resort to silence — mokusatsu.” By the summer of 1944, the Japanese were beaten but refused to accept defeat. When the first kamikazes flew that October, Japanese pilots vied for the honor of killing themselves (though their enthusiasm waned over time). Such ritualized suicide chilled the Americans. “I could imagine myself in the heat of battle where I would perhaps instinctively take some sudden action that would almost surely result in death,” wrote Ben Bradlee, a young officer on a destroyer (and later the executive editor of The Washington Post). “I could not imagine waking up some morning at 5 a.m., going to some church to pray and knowing that in a few hours I would crash my plane into a ship on purpose.”

Japan’s madness brought out American ruthlessness. At Dugway Proving Ground in Utah in 1943, the Americans built a small Japanese village, complete with straw tatami mats, to prove how easily and quickly it could be burned. “The panic side of the Japanese is amazing,” William McGovern, an intelligence analyst in the Office of Strategic Services, told a Washington planning meeting in September 1944. Fire “is one of the great things they are terrified at from childhood.” The policy of Gen. Curtis LeMay of the 20th Bomber Command was simple: “Bomb and burn ’em till they quit.” On the night of March 9, 1945, the B-29 pilot Robert Ramer recorded in his diary: “Suddenly, way off at about 2 o’clock, I saw a glow on the horizon like the sun rising or maybe the moon. The whole city of Tokyo was below us stretching from wingtip to wingtip, ablaze in one enormous fire with yet more fountains of flame pouring down from the B-29s. The black smoke billowed up thousands of feet, causing powerful thermal currents that buffeted our plane severely, bringing with it the horrible smell of burning flesh.” Around 100,000 people died; a million were rendered homeless.

LeMay has gone down in history as a Dr. Strangelove figure who advocated bombing North Vietnam “back into the Stone Age.” But Hastings notes that the responsibility for methodically incinerating Japan more properly lies with civilian commanders from Roosevelt and Churchill on down, including the genteel secretary of war Henry Stimson, who fretted over slaughtering civilians but did not stop it. “The material damage inflicted ... by LeMay’s offensive was almost irrelevant,” Hastings notes, “because blockade and raw-material starvation had already brought the economy to the brink of collapse.” The fire bombings’ real purpose was terror, to break the Japanese will to resist. That took some doing; two atom bombs in August were barely enough. Die-hard militarists tried to stage a coup rather than permit the emperor to surrender. Hastings convincingly argues that the atom bombs were necessary, though he regrets that the Americans did not first offer warning.

Hastings is a military historian in the grand tradition, belonging on the shelf alongside John Keegan, Alistair Horne and Rick Atkinson. He is equally adept at analyzing the broad sweep of strategy and creating thrilling set pieces that put the reader in the cockpit of a fighter plane or the conning tower of a submarine. But he is best on the human cost of war. He describes an American soldier’s bewilderment on reading the diary found on a dead Japanese soldier during the bloody battle of Manila. The Japanese soldier “wrote of his love for his family, eulogized the beauty of a sunset — then described how he participated in a massacre of Filipinos during which he clubbed a baby against a tree.” Americans were shocked by the Japanese massacre of civilians in Manila. After a month of constant bombardment, the United States Army left much of the city in rubble.

Evan Thomas, an editor at large at Newsweek, is the author of “Sea of Thunder: Four Commanders and the Last Great Naval Campaign, 1941-1945.”

178) Empire (to be partitioned): Parag Khanna

Guess Who’s Coming to Power
By RAYMOND BONNER
The New York Times Book Review, March 30, 2008

THE SECOND WORLD: Empires and Influence in the New Global Order
By Parag Khanna.
Random House, 2008, 466 pp. $29.

After the collapse of Communism, the “new world order” quickly disintegrated into the new world disorder, as pent-up nationalism erupted, most dramatically in the Balkans. The nationalistic volcano subsided, replaced by fears about the “clash of civilizations,” which meant the West against the rest, and primarily Islam. After 9/11, the “ism” of concern became “terrorism,” and our book shelves groan under the weight of policy prescriptions from public officials, academics, journalists and even former spies.

Now, a young, well-traveled, multilingual foreign-policy scholar, Parag Khanna, suggests in “The Second World” that we are on the cusp of a new new world order — “a multipolar and multicivilizational world of three distinct superpowers competing on a planet of shrinking resources.” The three are the United States, the European Union and China. The contest now is primarily for the world’s limited resources, and it will be waged in Khanna’s second world. “From Eastern Europe to Central Asia, from South America across the Arab world and into Southeast Asia, the race to win the second world is on.”

His is not an apocalyptic scenario. Indeed, he agrees with the view that increasing globalization leads to decreasing chances of war. And since each of the new empires has nuclear weapons, “economic power is more important than military power.” Thus, the competition will be won through “soft power,” effective diplomacy and attractive social models (the liberal democracy of the United States and European Union versus China’s mixed structures).

But is the European Union a superpower? Yes, its economy and population are larger than those of the United States, and more countries are trying to get in. But it cannot be said yet whether such a diverse gathering of nations will be able to agree on a common foreign policy — Spain and other union members disagree on how to handle Hugo Chavez — or whether Paris and London are ready to surrender foreign policy to Brussels.

There can be little doubt, however, that China will be a superpower. It does not have to conquer the world militarily, just buy it. In Venezuela, China is now the largest source of foreign investment, and has offered to build homes and a fiber-optic network. Argentina’s economic recovery is heavily dependent on agricultural exports — to China. In Egypt, China is investing in everything from the Suez Canal and cement factories to electronics companies and convention centers. In Jordan, it has built four of the country’s five new dams, “with remarkable efficiency,” Khanna writes.

China has some advantages when it comes to competing with the United States and the European Union, which are not at all laudable, but which Khanna glosses over. It has no law prohibiting its companies from paying bribes in order to get contracts; anecdotal stories abound about the amounts of money handed out in Indonesia, Thailand and Vietnam, as well as the Middle East and Africa. (The same is true for Japanese and Korean companies.) Nor is there a human rights lobby in China, or a free press, to take the country's leaders to task for supporting corrupt, dictatorial regimes — Zimbabwe, Sudan, Syria and Uzbekistan among them.

Still, if the United States is going to compete successfully, the next administration must undertake some deep-seated fixes at the State Department. In the Arab world, Khanna notes, Chinese diplomats “show deference to local culture by learning Arabic and even taking Arabic names.” America will not become more diplomatically competitive by cutting the State Department further, as many conservatives would like. Already, America’s image and standing in the world have been weakened immensely by closing American libraries and consulates, or putting them behind forbidding security barriers (and also, of course, by the administration’s rendition and torture policies). The diplomatic ranks need to grow; there are more musicians in America’s military bands than there are foreign service officers, and the generals and admirals who head the various commands, like the Central Command or Centcom in Florida, have more aides and advisers than the country has ambassadors and assistant secretaries of state.

The notion that the United States will not be the world’s only superpower, that it will have to share power with Europe and China, will horrify many Americans. Conservatives believe the nation’s self-interest is best served by using its military power to remain on top; liberals are just as committed to keeping America No. 1 in the name of “humanitarian intervention.” But there may be real benefits to the United States, as well as to other democracies, from a tripartite world order.

Khanna is full of praise for the European Union’s development aid programs, which he says are directed to building public institutions like courts. But the Europeans, like the Americans, often put as much emphasis on getting credit for their help as they do on actually making a difference; it remains to be seen whether European programs will become overly bureaucratic and, like American projects, spend too much on consultants and economic studies. American taxpayers should be delighted, in any case, to let Europe bear more of the development burden.

Khanna is something of a foreign policy whiz kid. He has a degree from the School of Foreign Service at Georgetown, and is pursuing a doctorate at the London School of Economics. Only 30 years old, he has already been a fellow at several research institutes and has served as an adviser to the United States Special Operations Forces in Afghanistan and Iraq. Yet he tries to do too much in this book. A more accurate title might be “Around the World in 400 Pages.”

Khanna mentions about 100 countries, some in only a sentence or a few paragraphs, as if to prove that he has indeed visited that many places. The section on Latin America is overly long; a look at Mexico, Brazil and, briefly, Venezuela would have sufficed. And it is hard to understand why he devotes so much attention to Malaysia, except when one notes that he was a visiting fellow at a public policy institute in neighboring Singapore. On the other hand, his chapters on Kazakhstan and on Egypt, which he describes as “a country ripe for revolution,” both make the book worth buying.

By trying to cover so many bases, Khanna dilutes his most important arguments. Russia, he observes for example, “has no divine right to continue in its present form.” This says considerably more than it seems to at first. The country’s vast eastern section is being gobbled up by China through investment and immigration.

Khanna is obviously not shy about making bold statements. He disputes the popular view that India will emerge as a check to China. "India is big but not yet important," he writes. "It could also be argued that China is a freer country than democratic India." By that, Khanna means that literacy is higher and the poverty rate lower in China; it has more Internet connections and cellphones; and it is easier to start a business in China than in India.

In similar grand fashion, he states that Iran is at once “an authoritarian regime and perhaps the most democratic country in the region.” And “Islam and democracy are certainly more compatible than authoritarianism and democracy.” Iraq will cease to exist, he declares flatly (though he offers no prescription for what the next administration should do there). A Kurdish state, meanwhile, is inevitable. Closer to home, Khanna has this provocative thought: The United States should offer Mexico the same deal it wants the Europeans to offer Turkey: inclusion, citizenship, open migration, enormous subsidies and language rights. One wonders what the presidential candidates might say about that.

Raymond Bonner, a foreign correspondent for The Times, has lived on every continent but Antarctica and has reported from more than 80 countries.

March 10, 2008

177) Challengers of the idea of God...

Diplomatic-career candidate Luciana Nery selected and introduces the foreword below:

God: The Failed Hypothesis by Victor J. Stenger

Luciana Nery wrote (on March 10, 2008):
This is Hitchens's prologue to Stenger's book, which will only be released in April. Besides being informative, it is quite intriguing, and perhaps a preview of what is to come: where scientists once excused themselves from proving or disproving the existence of God, claiming that these were distinct questions with no point of contact, they now feel free to demonstrate the implausibility of the existence of a divine being who cares about our fate. Proving a negative is impossible. So the argument seems to be: look at these facts... where would the interference of some higher intelligence be? I still don't know whether I agree with this point of view, but it is something interesting to think about.
The book's title is quite courageous... atheism is so often confused with fundamentalism for asserting that there is no evidence of God's existence... but if so many pastors and religious people vehemently affirm that God exists, that He is everywhere, in our minds and hearts, etc., and hold the inner conviction of this... then are they fundamentalists too? In other words, believing in God is good and natural, but not believing in God is fundamentalism? Why the double standard? It is a common mistake. The recent questioning of the plausibility of God's existence wanders through these new possibilities of doubt.
Luciana Nery

Hitchens foreword to God: The Failed Hypothesis by Victor J. Stenger
Christopher Hitchens
Until relatively recently, the argument between theists and atheists or (to adopt my own self-description) between theists and anti-theists, was largely based on two implicitly shared assumptions. The first was that science and religion belonged, in the famous words of Stephen Jay Gould, to "non-overlapping magisteria." The second was that science and reason could not actually disprove the existence of a deity or a creator: they could no more than show that there was no good or sufficient evidence to justify such a belief.
One sometimes suspects that the acceptance of the "non-overlapping" verdict was a cause of some relief to many non-scientists such as myself, who prefer to argue with religion from different premises. But with the arrival on the scene of Victor Stenger's book, the already-revived and extended argument for unbelief has undergone a sort of quantitative and qualitative acceleration. One side in this dispute is going to have to yield.
Before I say more about how important I think this contribution is, I'd like to say a word for the lay or non-scientist infidel community, who are now so much in Victor Stenger's debt. Until 1834 the very word "scientist" was not in common circulation. Men like Sir Isaac Newton were considered, and considered themselves, to be "natural philosophers": men of scientific bent to be sure, but men of a wider and deeper learning as well. Arguments about greater cosmic purposes were all of a piece with calculations and experiments, and the tyranny of specialization had not imposed itself on us. As a result, by the way, many scientists held completely "unscientific" views. Newton himself was a secret alchemist who believed that the Pope was anti-Christ and that the true dimensions of the Temple of Solomon might yield crucial findings. Joseph Priestley, the Unitarian discoverer of oxygen, was a devotee of the phlogiston theory. Alfred Russel Wallace liked nothing better than a good spiritualist séance.
It is not really until the figure of Albert Einstein (and perhaps Bertrand Russell also) that we start to find that very powerful synthesis between scientific method and a more general "humanism"; a synthesis basing itself upon reason and daring to make the connection between physical and natural evidence and the conclusion that an ethical life, as well as a rational one, is best lived on the assumption that there is no supernatural dimension.
In recent years, a number of scientists – physicists, biologists, neurologists, and others – have become, in effect, "public intellectuals" for the cause of atheism. They have transcended the bounds of their respective disciplines in order to defend the general proposition that free scientific inquiry, and the sort of society that can both support it and benefit from it, is worth defending from the assaults of ignorance and bigotry and terrorism. Thanks to these volunteers, from the brilliant Richard Dawkins in Oxford to the truly exceptional and brave Pervez Hoodbhoy in Islamabad, there is now a wide cultural resistance to those who would force stultifying creationist nonsense into the schoolroom, or those whose only interest in science is the plagiarism of technology for the purposes of criminal "faith-based" violence.
Attending a recent conference that included many such figures, I was interested to find that, when their experience of debating with the faithful was "pooled," there was really only one argument from the other side that was considered to have any interest or bear any weight. This was the question of "why is there something rather than nothing?" with its attendant suggestion that the laws of physics and the universe have been somehow "fine-tuned" in order to create the conditions most optimal for life.
I first came across this "argument" in a book published in 1993, Credible Christianity: The Gospel in Contemporary Culture, written by a man named Hugh Montefiore whom I slightly knew and rather liked. A senior bishop of the Church of England, he had been converted from Judaism as a schoolboy by the appearance of a white-robed figure who commanded him to "Follow Me." Here is how the bishop phrased the matter:
"For example, if the strong force which keeps the nucleus of an atom together had been only 2 per cent stronger, the universe would have blown up: if it were slightly weaker, nuclear fusion, which keeps the stars burning, would not have happened. There are many such coincidences, signal examples to the eyes of faith of the wisdom and providence of the Creator."
If you turn to pages 137 to 164 of Victor Stenger's book, you will find a fairly comprehensive rebuttal of this attempt to update the old argument from design, which was originally cast in more purely terrestrial terms by William Paley in his Natural Theology. It becomes ever clearer that the scientific and the supernatural explanations of matters are not so much "non-overlapping" as doomed to overlap, and to contradict one another, or perhaps better say, to be incompatible or irreconcilable with one another.
Let me adduce a couple more examples of my own – or rather, adaptations of my own from the work of others – to support Victor Stenger's case that the god hypothesis has actually been conclusively discredited. Suppose we take the hypothesis at face value for a moment. Edwin Hubble long ago demonstrated that the universe is exploding away from its "big bang" starting point. Persuaded by the "red shift" evidence that this was indeed true, the scientific community nonetheless thought, for what might be called Newtonian reasons, that this rate of expansion would slow down over time. To the contrary, and as Lawrence Krauss had predicted, it has now been found that the universe is exploding away from itself at a rapidly increasing rate. Among the non-trivial consequences of this will be that we shall one day be unable to observe anything in the whirling galaxies that will any longer confirm that the "big bang" ever took place. Meanwhile, the Andromeda galaxy, already visible to the naked eye in the night sky, is headed directly towards our own and will collide with it in five billion years. What sort of "fine tuning" is this? (Perhaps the same tuning that has made all the other planets just in the tiny suburb of our own solar system either too hot or too cold to support life.) At least, however, it provides a good demonstration of how a great deal of "nothing" is all set to come out of our brief "something."
Or take a quite different order of instance, again from the sort of scientific knowledge and discovery that has not been available to us for more than a few years. Now that we have mapped the human genome, we know that all our common ancestors left Africa about 60,000 years ago, and that we all share the genetic markers to prove it. Allow me to quote from an essay by Spencer Wells, the director of the Genographic Project at the National Geographic:
What set these migrations in motion? Climate change – today's big threat – seems to have had a long history of tormenting our species. Around 70,000 years ago it was getting very nippy in the northern part of the globe, with ice sheets bearing down on Seattle and New York; this was the last Ice Age. At that time, though, our species, Homo sapiens, was still limited to Africa; we were very much homebodies. But the encroaching Ice Age, perhaps coupled with the eruption of a super-volcano named Toba, in Sumatra, dried out the tropics and nearly decimated the early human population. While Homo sapiens can be traced to around 200,000 years ago in the fossil record, it is remarkably difficult to find an archaeological record of our species between 80,000 and 50,000 years ago, and genetic data suggest that the population eventually dwindled to as few as 2,000 individuals. Yes, 2000 – fewer than fit into many symphony halls. We were on the brink of extinction.
Ponder this arresting finding, even with its misuse of the word "decimate" (which means "reduce by one tenth" rather than "eradicate"). There are, really, only two ways of assimilating and analysing it. The first way is to see the survival and escape and later population spread of the endangered 2,000 as a miracle: a form of the Exodus story that alas never managed to get written on any tablets or papyri. The second way is to remember something else that we didn't know until recently: that almost 99 per cent of all species ever recorded as having lived on this planet DID become extinct. If you bear that in mind, then any author of any miracle must also have been the deliberate author of the ice-sheets and the Sumatran explosion – "the wisdom and providence of the Creator," as Bishop Montefiore put it so fulsomely – and then have stayed his hand until just the point when the population of his preferred creatures dipped below the 2,000 mark. That could, I suppose, be called "fine tuning." It could also be thought of as a very laborious and roundabout and inefficient and incompetent (and somewhat cruel and capricious) method of ensuring human survival.
In other words, none of these god-centred "hypotheses" can do any more than replace, or attempt to restate, the original fallacy of the "design" arguments. Meanwhile, our advances in knowledge and technique simply place these efforts under an ever more pitiless and skeptical gaze. Now we know roughly the age of our species. Richard Dawkins has put it as high as a quarter of a million years, while Francis Collins (the extremely genial and decent C.S. Lewis fan who oversaw the Human Genome Project) once in my hearing said that it could be as little as a hundred thousand. No matter. Let us take the lower figure, and use it to illustrate the truth of revelation. On this model, our species emerged and for tens of thousands of years cowered in the few climatic refuges of the globe that were hospitable to it. Life expectancy? Perhaps a couple of decades. Infant mortality? Extremely high. Death from tooth decay or diarrhea? Commonplace. Terror of micro-organisms in general? Intense. Fear of death from earthquake, tsunami, volcano and flood? Extreme, and again compounded by ignorance. Wars between tribes and clans, for food and territory? Grim and frequent. Religion? Not known to us, but probably involving human and animal sacrifice to propitiate weird idols.
And for a minimum of ninety-five thousand years, heaven watches this with folded arms! Stony, lofty indifference attends the striving and the suffering and the agonising deaths of infants and innocents, to say nothing of the sadistic and genocidal violence and the worship of bogus shrines and false gods. And then, at long last, after nine thousand and five hundred decades or so (an instant in evolutionary time, to be sure, but quite a long time for frightened mammals), it is decided that heaven must intervene. By direct revelation. But only in certain illiterate and backward parts of the Middle East. As I say, you may choose to believe this if you so desire, but that is what you must now believe. Until an amazingly recent date, science would not have compelled you to face the absurd consequences of your faith in quite this way.
In any case, and as Victor Stenger points out quite early on in his marvellous book, there is a big difference between being a deist and a theist. You may still, to your own satisfaction, decide that none of nature's observable processes could have got under way without a prime mover. But alas, all your real work as a religious person is still ahead of you. How can you get from this prime mover or first cause to a deity who cares who you sleep with, what you eat, what holy day you observe or how you mutilate your own (or your children's) genitalia? From the big bang of the great beginning to the small and sordid bang of the virgin-hunting suicide bomber is still quite a step. Nobody has even come close to showing how this step could ever be taken. And it is highly unlikely, now, that anybody ever will. The simple reason for this is that we have better and clearer and more impressive explanations for things, as well as explanations that are more beautiful, elegant and harmonious. To look the facts in the face is not to surrender to despair and nihilism: we know that the world will come to an end and we even know how, but it is only the religious who look forward to this event with relish and relief.
The challenge of our age is the same that confronted all previous ages. How shall we live the good life and how shall we know virtue? In the past millennia of primeval ignorance, pattern-seeking primates proposed a totalitarian solution to this question and threw all the responsibility onto a supreme dictator who demanded to be loved and feared at the same time. The story of human emancipation is the narrative of our liberation from this evil myth, and from the greedy, ambitious primates who sought (as they still seek) to rule in its name. Many forces have contributed to this emancipation, from philosophers to satirists, but it is perhaps to the natural and human sciences that we have come to owe the most, and Victor Stenger is prominent among those whom we must acknowledge.