Monday, January 19, 2015
Bill Hudson. Birmingham, Alabama, May 3, 1963.
"... Associated Press photographer Bill Hudson is perhaps best known for capturing this galvanizing image of Parker High School student Walter Gadsden being attacked by police dogs in Birmingham, Alabama on May 3, 1963; a three column-spanning version of the shocking photo ran above the fold in The New York Times the following day."
"When a group of young women in rural Georgia were placed under lock and key after protesting segregation at the local library, photos like the one above, which was snapped through the bars by new journalism pioneer Danny Lyon, helped secure their release."
by Gautam Pemmaraju
Auron par hasney ka anjaam jo hoga so hoga;
Lekin voh qaum nahin miththi jo apney aap par hansti hai.
The consequences of laughing at others will be what they are.
But the people who laugh at themselves will never be erased.
Last week, at a screening of my documentary film (a work in progress) on the humour-satire performance poetry traditions of Dakhani, the spoken vernacular Urdu of the Deccan region, one of the first to arrive was the eighty-six-year-old, bright-eyed, warm and charming Ghouse Mohiuddin 'Khamakha'. The above couplet of his has remained with me over the years, and its current relevance is obvious as we watch several disturbing events unfold.
As fleeting relief I offer some fine examples of Dakhani Mizahiya Shayri (humour-satire poetry) here. The richness of the vernacular, drawing largely from folk traditions and situated as it is further down the interrupted path of the glorious rise of the language till 1700 CE, is expressed amply in the humour-satire poetry of the Deccan. Stricken though it may be by the vicissitudes of time, the triumph of conquests, and the contempt of the elite, the tongue is still spoken today across the Deccan plateau.
by Hari Balasubramanian
It's hard to spot new birds during Massachusetts winters (I don't own a house with a yard or a bird feeder, which makes it doubly hard). The hundreds of species that make their home or pass through here are more easily observed in spring, summer and early fall. But last Tuesday – a bone-chillingly cold but sunny day in Amherst – I ran into four species all at once. I had come out for a walk in a quiet part of town, a dead-end street where an unpaved hiking trail leads to a pond. The unusually high levels of noise in the trees suggested that a lot of birds were active. The repeated deep thuds I was hearing indicated that woodpeckers were around, hammering on tree trunks.
So here are the species that I spotted, from left to right (picture assembled from Wikipedia images): the eastern bluebird; the black-capped chickadee; the female downy woodpecker (the male has slight red marks on the head); and the misleadingly named red-bellied woodpecker, whose prominent red or orange patch is actually on its head. The chickadee is the smallest of the four, and the red-bellied woodpecker the largest. Overall, nothing really surprising here – these are all common winter birds. But as an amateur bird watcher, I felt a special joy stumbling upon them; it felt, at least in those few moments, as if some special secret of nature had been unexpectedly revealed.
Some other things I've noticed this winter: (1) starlings, dozens of them somersaulting gracefully in the air in unison, literally a dance to avoid death, an attempt to disorient hawks that are hunting them (something similar to what's happening in this video. On a different note, the 150 million starlings in North America today are descended from the 60-odd European starlings that were deliberately introduced to New York's Central Park in 1890 by "a small group of people with a passion to introduce all of the animals mentioned in the works of William Shakespeare" -- talk about literature influencing ecology!); (2) young wild turkey, moving black specks from a distance, foraging in a snow-covered meadow (here's a previous piece on wild turkey); and (3) a few weeks ago, at twilight, the mysterious, round-faced barred owl, the only owl I've ever seen, well camouflaged against the bark of a tree, very similar to this picture.
That will be it – a short post this time. A very happy new year to all at 3QD! My ten essays from last year are all collected here.
by Emrys Westacott
Option A: You live 34,748 days. Your final four weeks are spent in and out of hospital, alternating between discomfort and semi-consciousness, entirely dependent on family members and health care providers for assistance with every basic function.
You die in hospital or in a nursing home. The cost of home care, hospital services, and medications over this period depletes your estate by thousands of dollars.
Option B: You live 34,720 days – that is, 28 days fewer. The 28 days you give up are those last four weeks just described. You die at home. The money you save helps put a grandchild (or great-grandchild) through college.
To my mind, this is a no-brainer. Option B is clearly preferable. In both cases you live until you are 95, a good long life. Everything significant that you were able to enjoy or accomplish will have happened. All you miss out on if you choose Option B is a few days of humiliation, discomfort (occasionally rising to out-and-out pain), guilt about the burden you are imposing on others, and anxiety about how your final pitiable condition might affect the way you are remembered. I assume most people will share my view that B is the better option. So the question arises: Why do the final days of so many people resemble Option A rather than Option B?
This question was prompted by two very good bestselling books that I read during the recent holidays: Atul Gawande's Being Mortal, and Roz Chast's Can't We Talk About Something More Pleasant? Gawande, a physician, addresses an increasingly important problem. Due to the tremendous progress made in medicine over the last century, dying is often a much more complex and protracted process than it used to be. Doctors today have the know-how and the technology to keep us alive a lot longer after we are stricken with illness or old age. Unfortunately, says Gawande, doctors, other care-providers, and family members often unthinkingly opt for whatever will prolong life without considering sufficiently whether what is being prolonged is really worth living from the perspective of the person who has to live it.
Our worst nursing homes are luxury hotels compared to the old workhouses and almshouses where people used to spend their final days, but they are nevertheless dreaded. Innovative assisted living arrangements make an honest attempt to eliminate some of the most objectionable aspects of nursing homes, particularly the lack of independence granted to the residents. But all the same, loss of autonomy, and the blighting of even small pleasures by continual discomfort, seem to be the fate that awaits many of us if we take our time shuffling off our mortal coil.
by Mara Naselli
Rembrandt in America, an exhibition shown at the Minneapolis Institute of Arts a couple of years ago, displayed several portraits by Rembrandt as well as works painted by Rembrandt's students and contemporaries. Curators had posted labels that highlighted the provenance of the paintings, many of which have been collected in the United States over the last century or so by the super rich. One painting, Man with Arms Akimbo, is still for sale, for $45M by Otto Naumann, Ltd., though it isn't one of the better ones. When it comes to the art market, questions of authenticity dominate, and with Rembrandt, whose style was so wide-ranging, it is hard to tell what was Rembrandt's and what was painted in his studio. Early in his career he mastered what is called the smooth style. Later he painted in a rough style, more impressionistic, long before Impressionism became a movement. But the style of technique is not always an obvious indicator. Was the painting by Rembrandt's hand? Was the painting painted in his workshop? If so, by whom? Was it supervised or corrected by Rembrandt? Was the painting painted by Rembrandt and overpainted by his students? Was the face painted by Rembrandt, the ruff painted by someone who specialized in collars, and the black cloak painted by someone who specialized in black fabric? These are the questions that occupy an appraiser or the auction house or the billionaire looking for a place to park $45M. The art economy is fascinating in its own way; in fact it was so preoccupying that I had to come back, on the last afternoon of the exhibit, to get a good look at the paintings themselves.
I scanned the galleries. Each room was full of people and I could see the tops of some of the larger pictures—all portraits, their heads gazing out from their frames just above the crowd. They seemed to look over us, we mere viewers. As if the sitters, the subjects of these portraits, were fixed with some higher purpose. How had I not seen this the first time? Some seemed almost alive. I don't mean to be facile about this—people spend entire careers assessing what was done by Rembrandt and what wasn't, using sophisticated instruments and technology—but certain portraits were simply arresting. Their faces glowed. The expression, the depth of field, the particular countenance of each portrait. The details were neither muted nor exaggerated. They expressed the distinctiveness of the sitter: creases around the eyes, the ridge in the brow, the gaze fixed or far off, the position of the shoulders, the shape of the mouth, the curve of the spine, the turn of the head, the color in the skin. These were traces of lives lived.
by Brooks Riley
by Carl Pierer
After presenting Clark and Chalmers' extended cognition hypothesis as well as two lines of argument against it, the previous article in this space ended with an intuitive bad gut feeling and a promise to develop that feeling into a full-blown argument. Before making good on that promise, this article will start with a brief recap of the arguments presented so far.
Clark and Chalmers argue in their famous "The Extended Mind" paper that when a person uses tools or the environment to facilitate a particular cognitive process, this person and her tool constitute a coupled system. Indeed, Clark and Chalmers suggest that in such a coupled system the cognition extends, i.e. it is not confined to the brain/skull-boundary. The argument works as follows: suppose the cognitive process in question is to decide whether a certain shape that appears on the screen will fit into a given slot (as in the classic Tetris game). The person can use a computer to rotate the shape and decide whether it will fit or not. Now, this is clearly an external process. But imagine that in the not so far future, a person will have a neural implant with exactly the same functional structure as the computer and she can use the implant to rotate the shape and check whether it will fit (or she can use the traditional method of rotating it mentally). Clark and Chalmers think that there is no relevant difference between the computer and the neural implant. Further, whether the person in the near future chooses the implant or the traditional method does not matter for the process to count as cognitive. Therefore, the only thing that distinguishes the computer-scenario from the neural-implant-one is that the former involves the use of a tool external to the brain/skull-boundary. But since precisely this is at question, this difference cannot be invoked to support the claim that using the computer is non-cognitive. Thus, using the computer is cognitive and so cognition extends.
Clark and Chalmers' argument relies on the parity principle:
If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.
This seems to follow directly from the basic functionalist idea that what it takes for a process to count as cognitive is its functional structure, rather than its physical instantiation.
In the previous article, two lines of argument against this view were presented. The first is taken by Adams and Aizawa. They suggest that any process that is to count as a cognitive process has to bear the "mark of the cognitive". They think that it is not theoretically impossible for cognition to extend, but that as a contingent matter of fact there is no process involving the external world that bears the mark of the cognitive. It was mentioned in passing that their suggested "mark" is closely modelled on human cognition. The second line is taken by Sprevak, who argues that the hypothesis of extended cognition provides a counterargument to the view from which it is derived, i.e. functionalism. He attacks Adams and Aizawa's argument on the grounds that their "mark of the cognitive" is too closely modelled on human cognition and denies processes cognitive status merely because they are instantiated differently – a violation of the basic functionalist idea. At the same time, he suggests that functionalism entails extended cognition and further that a moderate (Clark and Chalmers') version of extended cognition is impossible. Instead, if functionalism is accepted, the conclusion follows that any process whatsoever is cognitive.
by Leanne Ogasawara
I never really understood the expression "drank the Kool-Aid" -- until I went to Jerusalem. It happened at the Western Wall, where I found myself standing in a very long line to the ladies' restroom. The young woman ahead of me turned around to look intently into my eyes as she spoke of her love of Jesus Christ. Talking blissfully of her savior, she told me a bit about the evangelical church tour she was on. Those tours don't spend all that much time in Jerusalem, she explained, for their focus is up in the north, where Jesus had his ministry along the Sea of Galilee. They rarely stop in churches either; they don't acknowledge their Orthodox and Catholic counterparts as co-brethren.
I was not so surprised by what she said, since the Via Dolorosa had been filled that week with Orthodox pilgrims from Russia, along with Catholics from Africa and southern India and Indonesia. The Christianity along the stations and in the Sepulchre was a more Eastern one. It was, in fact, an unfamiliar Christianity for an American in many ways.
What disturbed me was not what she was saying but the strange look in her shining eyes. So deeply committed was she, to the point of tearing up as she spoke, that she appeared almost angelic in her religious certainty. It scared the hell out of me...
It’s been three days and our eight teams are already up, pitching for their lives. Watching them from the front row is a panel of mentors we’ve curated, from areas like branding, user interface design, product development, technology, business and investing. There’s a tug between the mentors and the startups underway -- criticism and backtalk, kicking the tires and trash-talking the car, defending its value and selling its golden possibilities.
Startup mentoring is a lot like teaching, supervising, consulting, parenting -- plus maybe running a cult retreat. It can’t happen without a deep and personal bond between the mentor and mentee. That relationship usually arises accidentally, through life circumstances, working relationships and chance meetings. Here we were engineering that relationship into existence, several entities and multiple individuals at a time.
In the run up to our first day, my main goal was to ensure that I made a personal connection with each cofounder. Without this central relationship gelling, the whole thing would fall apart, fall away. In the weeks leading up to the launch of Startup Tunnel I’d been taking long winter walks, doing yoga and actively working on clearing my thoughts to make space for this set of startups and their many needs. I also designed a series of exercises that would allow startup founders to see in one another and in our mentor group a useful set of resources that they could draw from as they developed their business. I scripted every aspect of our initial interactions in detail. There would be a ball to play with, a registration desk, thirty chairs set up against the demodeck, startup names posted along their workstations. There would be self-introductions, peer-feedback sessions, a seminar and workshop on understanding end users.
This way of working is not very old. It brings together three distinct kinds of expertise: entrepreneurial insight, technological capacity and financial investing. It was Y-Combinator, beginning in the summer of 2005, that began putting batches of young entrepreneurs through a common program of enrichment, trying to learn through that process what would work and what wouldn’t, thereby iteratively improving their program and reinforcing observed insights. Y-Combinator has enjoyed extraordinary success over the past nine years, having seeded numerous successful startups, in which the group’s equity holdings now exceed a billion US dollars. But the scope of their success is even more remarkable when one considers that they have also brought into existence a significant new business model, one that inverts everything most people thought they knew about business: that entrepreneurial success cannot be predicted, that the charisma of the entrepreneur cannot be taught or improved, that entrepreneurship cannot be any better organized or routinized.
Sunday, January 18, 2015
The responses to Edge.org's Annual Question for 2015 have been published. Here is my answer:
The rumors of the enslavement or death of the human species at the hands of an Artificial Intelligence are highly exaggerated because they assume that an AI will have a teleological autonomy akin to our own. I don't think anything less than a fully Darwinian process of evolution can give any creature that.
There are basically two ways in which we could produce an AI: the first is by trying to write a comprehensive set of programs which can perform specific tasks that human minds can perform, perhaps even faster and better than we can, without worrying about exactly how humans perform those tasks, and then bringing those modules together into an integrated intelligence. We have already started this project and succeeded in some areas. For example, computers can play chess better than humans. One can imagine that with some effort it may well be possible to program computers to also perform even more creative tasks such as writing beautiful (to us) music or poetry with some clever heuristics and built-in knowledge.
But here's the problem with this approach: we deploy our capabilities according to values and constraints programmed into us by billions of years of evolution (and some learned during our lifetimes as well) and we share some of these values with the earliest life-forms including, most importantly, the need to survive and reproduce. Without these values, we would not be here, and we would not have the very finely tuned (to our environment) emotions that allow us not only to survive but to cooperate with others in a purposive manner. The importance of this value-laden emotional side of our minds is made obvious by, among other things, the many examples of individuals who are perfectly "rational" but unable to function in society because of damage to the emotional centers of their brains. So what values and emotions will an AI have?
Robert Pinsky in Slate:
The poem’s intensity and misgivings are epitomized by the invented word at the end of its first stanza. “Wordshed,” on the model of “bloodshed,” generates associations of violent conflict; from another associated word, “woodshed,” gush other associations: drudgery, storage, punishment, and (maybe anachronistically) the jazz musician’s verb for practicing one’s art, woodshedding. And opposite to that practice-time in art, the simple meaning of shedding words: falling silent.
The poem’s erratic, doubling progress follows those conflicted energies as it oscillates, I think frantically, between the two magnetic attractions of abundance and of silence. The traditional lover’s uncertainty or agony has, in this poem, a rhetorical counterpart in the struggle between embracing traditional eloquence and rejecting it. For instance, “the grapples clawing blindly the bed of want” is a line of iambic pentameter as regular as anything in Shakespeare. The reckless, hyperbolic eloquence of the images—those eye-sockets and the “black want splashing their faces”—collides with the flatly corrosive, meaning-dispersing, adverbial “all always is it better too soon than never.”
For me, that hovering, back-and-forth movement between passion and reservations, need and doubt, images and disavowals, creates a strong emotion. The feeling gathers force from the poem’s argument with itself.
Carl Zimmer in the New York Times:
A team of scientists, in a groundbreaking analysis of data from hundreds of sources, has concluded that humans are on the verge of causing unprecedented damage to the oceans and the animals living in them.
“We may be sitting on a precipice of a major extinction event,” said Douglas J. McCauley, an ecologist at the University of California, Santa Barbara, and an author of the new research, which was published on Thursday in the journal Science.
But there is still time to avert catastrophe, Dr. McCauley and his colleagues also found. Compared with the continents, the oceans are mostly intact, still wild enough to bounce back to ecological health.
“We’re lucky in many ways,” said Malin L. Pinsky, a marine biologist at Rutgers University and another author of the new report. “The impacts are accelerating, but they’re not so bad we can’t reverse them.”
Scientific assessments of the oceans’ health are dogged by uncertainty: It’s much harder for researchers to judge the well-being of a species living underwater, over thousands of miles, than to track the health of a species on land. And changes that scientists observe in particular ocean ecosystems may not reflect trends across the planet.
Dr. Pinsky, Dr. McCauley and their colleagues sought a clearer picture of the oceans’ health by pulling together data from an enormous range of sources, from discoveries in the fossil record to statistics on modern container shipping, fish catches and seabed mining.
Jeffrey D. Sachs in Project Syndicate:
French Prime Minister Manuel Valls was not speaking metaphorically when he said that France is at war with radical Islam. There is, indeed, a full-fledged war underway, and the heinous terrorist attacks in Paris were part of it. Yet, like most wars, this one is about more than religion, fanaticism, and ideology. It is also about geopolitics, and its ultimate solution lies in geopolitics as well.
Crimes like those in Paris, New York, London, and Madrid – attacks on countless cafes, malls, buses, trains, and nightclubs – affront our most basic human values, because they involve the deliberate murder of innocents and seek to spread fear throughout society. We are wont to declare them the work of lunatics and sociopaths, and we feel repulsed by the very idea that they may have an explanation beyond the insanity of their perpetrators.
Yet, in most cases, terrorism is not rooted in insanity. It is more often an act of war, albeit war by the weak rather than by organized states and their armies. Islamist terrorism is a reflection, indeed an extension, of today’s wars in the Middle East. And with the meddling of outside powers, those wars are becoming a single regional war – one that is continually morphing, expanding, and becoming increasingly violent.
From the jihadist perspective – the one that American or French Muslims, for example, may pick up in training camps in Afghanistan, Syria, and Yemen – daily life is ultra-violent.
More here. [Thanks to Syed Tasnim Raza.]
S. Abu Rizvi in Education Week:
Fifteen years ago, my colleagues and I observed that most economics undergraduates we taught quickly lost a third to half of their knowledge. "A" students turned into "C" students in a matter of weeks, right after final exams. For those of us who wanted disciplinary understanding to be useful to students well after they left college, this and similar findings were sobering. They spurred us to revamp how and what we teach while keeping an eye on why: to prepare students to use their understanding of the disciplines in other times and places.
Let's begin where we want to end up, with an example of the successful and flexible use of disciplinary understanding. As we consider the activities of two professional economists, Atif Mian and Amir Sufi, we should keep in mind that the concepts they employ are taught in introductory economics classes.
Mian and Sufi's intervention arose from the Great Recession at the end of the last decade. Economic turmoil left many homeowners "underwater," with homes worth less than what was owed on their mortgages. Federal debt relief was one policy under consideration. But Timothy Geithner, the Secretary of the Treasury at the time, claimed that the impact of relief on the economy would be tiny. By freeing overburdened homeowners to spend, even a large program of $700 billion "would have increased annual personal consumption by just 0.1 to 0.2 percent." Mian and Sufi thought this figure was too low. They used the concept of the marginal propensity to consume (MPC), "a very well-researched question," to show that relief this big would have had an impact six to thirteen times higher than Geithner claimed. His figure for the policy's economic impact was far too small. Their argument, made at the right time, could have carried the day against Geithner's objection.
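The disagreement comes down to simple arithmetic: the consumption boost from relief is roughly the MPC times the size of the program. A rough back-of-the-envelope sketch in Python makes the gap concrete; all figures below (the ~$10 trillion baseline for annual US personal consumption, and the 0.1 to 0.3 range for the MPC) are illustrative assumptions for this sketch, not numbers taken from the article.

```python
# Back-of-the-envelope comparison of Geithner's claim with an MPC-based estimate.
# All inputs are illustrative assumptions.

relief = 700e9        # hypothetical debt-relief program, $700 billion
consumption = 10e12   # assumed annual US personal consumption, ~$10 trillion

# Geithner's claim: relief raises annual consumption by just 0.1-0.2 percent.
geithner_low = 0.001 * consumption    # $10 billion
geithner_high = 0.002 * consumption   # $20 billion

# The MPC his claim implicitly assumes (boost / relief):
implied_mpc_low = geithner_low / relief    # roughly 0.014
implied_mpc_high = geithner_high / relief  # roughly 0.029

# An MPC in a more commonly estimated range (assumed here) implies a
# consumption boost several times larger than Geithner's figure:
for mpc in (0.1, 0.2, 0.3):
    boost = mpc * relief
    print(f"MPC {mpc:.1f}: boost ${boost / 1e9:.0f}B "
          f"= {100 * boost / consumption:.1f}% of consumption")
```

Under these assumptions an MPC of 0.1 to 0.3 yields a boost of 0.7 to 2.1 percent of consumption, which sits inside the "six to thirteen times higher" range the article attributes to Mian and Sufi.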
Ross Perlin in Dissent:
Every language has a complex grammar—an almost invisible glue between words that enables meaning-making—and new vocabulary can always be borrowed or coined. Some languages may specialize in melancholy, or seaweed, or atomic structure, or religious ritual; some grammars may glory in conjugating verbs while others bristle with syntactic invention. Hawaiian has just thirteen phonemes (meaningful sounds) while the Caucasian language Ubykh, extinct as of 1992, had eighty-four. “English” (with all its technical varieties) is said to be adding up to 8,500 words per year, more than many Australian aboriginal languages have to begin with. But these are surface inequalities—questions of personality.
Perceptions of linguistic superiority or inferiority are instead based on power, class, and social status. Historically, it was languages that were swept in with strong political, economic, or religious backing—Latin, Greek, Sanskrit, Hebrew, Arabic, Persian, and Chinese in the Eurasian core—that were held to be the oldest, the holiest, and the most perfect in structure, their “classical” status cemented by the received weight of canonical tradition. By the nineteenth century, the imperial nation-states of Europe were politely shunting them off to the museum and imposing their own equivalents: newly standardized “modern” languages like English and French. Johann Gottfried Herder’s Treatise on the Origin of Language (1772) inspired would-be nation-builders to document, restore, and develop their own neglected vernaculars. One by one, the nationalists of Central and Eastern Europe adopted Herder’s program, as has virtually every modern nation-state sooner or later: warding off imperial languages from without by establishing a dominant standardized language within, at the expense of minority languages and local varieties.
The quietly pacifist peaceful
to make room for men
who shout. Who tell lies to
children, and crush the corners
off of old men's dreams.
And now I find your name,
scrawled large in someone's
blood, on this survival list.
by Alice Walker
from Her Blue Body Everything We Know
Harvest Books, 1991
Saturday, January 17, 2015
From the introduction of The History Manifesto, "a call to arms to historians and everyone interested in the role of history in contemporary society. Leading historians David Armitage and Jo Guldi identify a recent shift back to longer-term narratives, following many decades of increasing specialization, which they argue is vital for the future of historical scholarship and how it is communicated."
A spectre is haunting our time: the spectre of the short term.
Jeffrey Aaron Snyder reviews Lani Guinier's The Tyranny of Meritocracy: Democratizing Higher Education in America in Boston Review (Image: zaveqna):
“The world . . . provides us with more than one correct answer to most questions,” Guinier says, and nods to Bard College President Leon Botstein who tells us that no professional “pursues her vocation” by choosing the “right” answer from “a set of prescribed alternatives that trivialize complexity and ambiguity.” Incisive points, to be sure, but there are alternatives to the multiple-choice format. Many standardized tests, for instance, now include “open-response” items, which require students to fashion their own answers rather than simply choosing the one “correct” answer from a ready-made list. In my view, however, the limitations of standardized testing with respect to prefabricated questions are far more important than the shortcomings associated with prefabricated answers. The ability to formulate a significant question is a hugely important skill, especially for college-level work, and one that no standardized test even attempts to measure. Standardized tests, then, too often reinforce the dreary lesson taught by many schools that it is the job of students to answer rather than to ask questions.
I never thought I would feel compelled to defend the integrity of the College Board or the number-crunchers at U.S. News and World Report, but a few corrections of the kind of fanciful exaggerations favored by anti-testing crusaders are in order. It has been over twenty years since the SAT ceased to be an acronym but it seems the SAT will always be known as the Scholastic Aptitude Test in the popular imagination, forever associated with the attempt to measure native intellectual ability. Guinier only reinforces this common misconception, stating that the SAT “doesn’t even pretend to measure achievement.” But as the College Board website explains, the SAT “doesn’t test logic or abstract reasoning.” Rather, “it tests the skills you’re learning in school: reading, writing and math.” In other words, today’s SAT is meant to be an achievement rather than an aptitude test.
Guinier, like many critics of the SAT, is dismissive of the test’s predictive power, claiming that the correlation between SAT scores and first-year college grade point average is “very, very slight.” In fact, most studies put the figure in the neighborhood of .45, which is a shade higher than the correlation between rates of smoking and incidences of lung cancer. It is also only a tad lower than the correlation between cumulative high school GPA and first-year college GPA.
Finally, according to Guinier, the U.S. News annual college rankings “rely heavily on SAT scores for their calculations.” Admissions test scores actually account for just over 8 percent of a school’s ranking.
Marilynne Robinson in the NYRB (photo from Enoch Pratt Free Library, Baltimore):
In the last year of his life he wrote a prose poem, Eureka, which would have established this fact beyond doubt—if it had not been so full of intuitive insight that neither his contemporaries nor subsequent generations, at least until the late twentieth century, could make any sense of it. Its very brilliance made it an object of ridicule, an instance of affectation and delusion, and so it is regarded to this day among readers and critics who are not at all abreast of contemporary physics. Eureka describes the origins of the universe in a single particle, from which “radiated” the atoms of which all matter is made. Minute dissimilarities of size and distribution among these atoms meant that the effects of gravity caused them to accumulate as matter, forming the physical universe.
This by itself would be a startling anticipation of modern cosmology, if Poe had not also drawn striking conclusions from it, for example that space and “duration” are one thing, that there might be stars that emit no light, that there is a repulsive force that in some degree counteracts the force of gravity, that there could be any number of universes with different laws simultaneous with ours, that our universe might collapse to its original state and another universe erupt from the particle it would have become, that our present universe may be one in a series.
All this is perfectly sound as observation, hypothesis, or speculation by the lights of science in the twenty-first century. And of course Poe had neither evidence nor authority for any of it. It was the product, he said, of a kind of aesthetic reasoning—therefore, he insisted, a poem. He was absolutely sincere about the truth of the account he had made of cosmic origins, and he was ridiculed for his sincerity. Eureka is important because it indicates the scale and the seriousness of Poe’s thinking, and its remarkable integrity. It demonstrates his use of his aesthetic sense as a particularly rigorous method of inquiry.
Francis Beckett in New Humanist:
Betrayed: The English Catholic Church and the Sex Abuse Crisis (Biteback) by Richard Scorer.
The Devil’s Advocate: Child Abuse and the Men in Black (Devil’s Advocate Library) by Graham Wilmer
Priestly sex abuse has done far more harm to the Catholic Church in the USA, Canada, Ireland and Australia than it has in Britain, which leads some British Catholics to the comforting conclusion that there is less of it here. But at least 61 Catholic priests have been convicted of sexual offences in the criminal courts of England and Wales since 1990, and there may well be more, for the church still has no single centralised record of known offenders. However, American courts award much higher sums in compensation to victims, which is why American dioceses have been ruined. And the English Catholic Church has been ruthless in its efforts to keep the lid on the scandal, to silence victims, and to protect priests who use young children for their own sexual gratification.
Over and over again, the princes of the church have silently and cynically moved a priest from one school or parish where he was discovered to be abusing children, to another where he was unknown and could find more children to abuse. Of course children were abused in many institutions, not just Catholic ones, but the fact, though Catholics refuse to face it, is that the church had a culture of abuse like no other organisation. If there was ever any doubt about that, two new books have dispelled it. Richard Scorer is a lawyer who has represented many victims of priestly sexual abuse. He has written Betrayed in clear, luminous prose, telling us only what he has heard and seen. He avoids conjecture, does not seem to be anti-Catholic and does not editorialise. The result is compulsive reading.