Monday, May 02, 2016
by Leanne Ogasawara
Last summer, marooned with a large group of astronomers in a remote 11th century abbey in the Tuscan countryside, I found myself growing increasingly antsy. Hatching a plan to break out, I dragged my astronomer off on what should have been one of the great pilgrimages of our lifetime--for as luck would have it, just down the road lay what Aldous Huxley considered to be the greatest picture in the world.
I am referring to one of the paintings on the famous Piero della Francesca trail. To see those masterpieces in situ is astonishing, and I consider the Piero Pilgrimage to be one of the great art historical experiences in the world.
Like all pilgrimages, however, this one was not without its mishaps.... Flushing my phone accidentally down the toilet after seeing the astonishingly beautiful and transportive fresco cycles in Arezzo was bad enough; but then to finally arrive at the climax of the pilgrimage where Aldous' "best picture on earth" stood, only to find it unavailable for viewing (and not just that but veiled in such a way as to tantalize us about what glorious beauty we were missing)-- was close to unbearable.
Our biggest blunder, however, came when we willfully decided to skip driving an extra half hour to go see the Madonna del Parto. Yes, I want to kick myself! Located in Monterchi, the Madonna del Parto is an extremely rare (perhaps the only?) treatment in Christian art of the Virgin pregnant. "Del Parto" can mean labor or childbirth--and in the picture, Piero depicts a very pregnant Mary.
by Matt McKenna
Zootopia’s target audience may be a tad younger than Bernie Sanders’ target audience, but youthful Sanders supporters should nonetheless consider watching the film in order to see a dark vision of their potential future. Like many animated Disney films, Zootopia includes talking animals working together to solve a problem. Also like many animated Disney films, the audience is bludgeoned with allusions comparing the cartoon animals’ society to our own (in Zootopia, institutions are specist like real world institutions are racist). There’s nothing wrong with talking animals or ham-fisted moralizing--after all, the film is for kids. What differentiates this Disney film from previous Disney films is that a young voter--pro-Sanders or not--may well see their dreary, hopeless future in Officer Judy Hopps’ transition from plucky bunny to establishment stooge.
The hero of Zootopia is Judy Hopps who, like young voters in reality, starts out as an ardent advocate for the downtrodden. Though she is but a humble rabbit, the child of carrot farmers, Judy dreams of becoming a police officer in the big city of Zootopia, which is an interesting choice for the name of a city built by animals since (at least for me) the name conjures up images of caged creatures on display for human amusement. Anyway, young and full of hope, Hopps enrolls in the police academy, lands a job as the city’s first rabbit cop, and quickly thereafter becomes disillusioned by her role in the force. You can probably guess the challenges she faces: the chief is a jerk, the sleazy Mayor Lionheart (he’s a lion) cares about his image and not about the city’s crime wave, and the people Officer Hopps attempts to protect eventually take advantage of her naïveté. At the film’s emotional nadir, Hopps falls into a depression and heads home to farm carrots with her parents. It's the classic tale of a kid rebelling at twenty only to go mainstream at thirty. Admittedly, Hopps speeds through this transition much faster than a decade, but that shortened time period may be narratively justified by converting the film’s timeline into rabbit-years or something.
Duke Ellington was one of the great composers and bandleaders of the last century, and his band was one of the great bands. Touring, however, is unforgiving. Long hours sitting in a bus, meals if and when you can grab them, and gigs every night. And when you’ve played the same tunes with the same cats for decades, well, it can be rough to get up for a gig. Fact is there were times when Ellington’s musicians looked like they were asleep on the stage.
That’s how they appeared the one time I saw Ellington live. It was at one of those sessions held by the Left Bank Jazz Society in Baltimore’s Famous Ballroom on Sunday afternoons. This was probably in 1970, 71, or 72, long after Ellington’s prime years in the second quarter of the century. The Famous Ballroom was on North Charles Street, not too far from the train station, and up three flights of fairly wide stairs. It too was past its prime years, but the patrons of the Left Bank, they were always primed for good music. Some were dressed to the nines in their church Sunday best, the men in sharp suits, the women in elaborate hats; and some were dressed casually in jeans and sneakers.
That’s generally how it was, but I only specifically remember three things from that concert. Ellington dressed well and had a line of patter smooth as silk and brittle as glass. He’d been doing this a long time. That’s one. The guys slumped in their chairs like they’d just gotten off an all-night flight from Timbuktu. Perhaps they had. That’s two.
And three: Paul Gonsalves burned the place down with his tenor sax. I forget what the number was. All I remember is that Gonsalves strode out on stage to play a solo, but he didn’t position himself in front of the microphone. He stood to one side. A helpful member of the audience moved the mike directly in front of him as he started to blow. He stopped playing for a second, grabbed the mike angrily and shoved it aside. Not for him the brittle reverberations of amplified sound. Then he started blowing again. The pure juice of the natural human essence flowed from his sax to embrace us in its majesty and urgency.
Sunday, May 01, 2016
Patrick Sauer in Signature:
In his works, Vonnegut’s fondness for the Bard can be traced from Kurt Sr.’s 1949 woodworking through a 2005 essay in A Man Without A Country, the last work published in his lifetime. In that piece, Vonnegut compares Hamlet to Cinderella and Kafka’s cockroach, expounds on how apparitions are not to be trusted, compares Polonius to Rush Limbaugh, and commends Shakespeare for doing what so few people do: Telling the truth, admitting we know so little about life. It’s a theme mirrored throughout Vonnegut’s career, even if the Bard’s technique didn’t require as many authorial surrogates. Tomato, Tomahto, so it goes…
The paths of Shakespeare and Vonnegut crossed multiple times, once through dimensions only known by the Tralfamadorians. In God Bless You, Dr. Kevorkian, which originated as a series of 90-second public radio pieces, Vonnegut interviews people about the afterlife. The “tongue-tied, humiliated, self-loathing semi-literate Hoosier hack” is fisked by a feisty William Shakespeare who starts out by mocking Vonnegut’s dialect, calling it the “ugliest English he had ever heard, ‘fit to split the ears of groundlings.’” The Bard is salty throughout, responding to Kurt’s congratulations on all the Oscars Shakespeare in Love won by retorting the movie is “a tale told by an idiot, full of sound and fury, signifying nothing.” Both novelists and playwrights love a good callback.
David P. Barash at the History News Network:
For now, I want to focus on why monogamy has become so popular, at least in the modern Western world, and at least in theory, if not always in practice. Although monogamy is exceedingly rare in the animal world, it is found in a few cases, and nearly always, the payoff seems to be associated with the adaptive benefit of biparental childcare, something that Homo sapiens finds especially beneficial, given that we are unusual in that our offspring are profoundly helpless at birth, remaining needy for an extraordinarily long time. Nor is it absolutely necessary that the cooperating adults be man and woman; we know from abundant sociological data that two women or two men can do an excellent job, and that when it comes to child rearing, two – of any sex – are better than one. But we also know that prior to the cultural homogenization that followed European colonialism, more than 83% of human societies were preferentially polygamous, and that polygamy was also prominent in the ancient Near East from which that presumed Western move to monogamy originated.
So my question for now is: why did such a large segment of human society switch from polygamy to monogamy? And my first answer is: at present, we don’t know. My second answer is a guess, which goes as follows. (I propose it simply as a hypothesis, in the hope that readers will not only find it interesting but also useful in generating informed discussion and, if possible, meaningful research.)
Imagine a polygynous society with an average harem size of, say, ten. This means that for every male harem-keeper, there are nine unsuccessful, sexually and reproductively frustrated, resentful bachelors. The simple reality is that polygyny is disadvantageous not only for women – for complex reasons – but even more so for men, since with a 50/50 sex ratio, there are unmated men in proportion as polygyny obtains. This, btw, runs counter to the lascivious imaginings of many men, who, when I describe the evidence for primitive human polygyny, often express regret that they weren’t alive in those days, imagining that they would be a happy harem-holder.
Francine Prose in The Guardian:
Late in Alain de Botton’s engaging novel, a married couple, Rabih and Kirsten, find that the demands and stresses of ordinary life – work, domestic chores, financial worries, the harrowing expenditure of energy required to raise their two adored children – have made them irritable and contentious. In part, the narrator concludes, they are at odds “because they have so seldom seen their struggles sympathetically reflected in the art they know … Were Rabih and Kirsten able to read about themselves as characters in a novel, they might ... experience a brief but helpful burst of pity at their not at all unworthy plight, and thereby perhaps learn to dissolve some of the tension that arises on those evenings when, once the children are in bed, the apparently demoralising and yet in truth deeply grand and significant topic of ironing comes up.”
Presumably, the novel that Rabih and Kirsten need to read is the one De Botton has written: a sympathetic account of the relationship that begins only after the besotted courtship has ended. Having fallen deeply in love, the couple “will marry, they will suffer, they will frequently worry about money, they will have a girl first, then a boy, one of them will have an affair, there will be passages of boredom, they’ll sometimes want to murder one another and on a few occasions to kill themselves. This will be the real love story.”
Rabih and Kirsten are well-drawn, individualised characters, with distinct and separate backgrounds (he’s half-Lebanese, half-German; she’s Scottish), careers (he’s an architect working in an urban design studio; she’s a surveyor employed by Edinburgh City Council) and personalities (she’s confident and feisty; he’s dreamy and insecure). But what’s interesting is De Botton’s decision to make their experience so thoroughly ordinary that their lives seem emblematic, their stories interchangeable with those of countless couples.
Jefferson Morley in The Intercept:
Last summer I paid a visit to Georgetown University’s Lauinger Library as part of my research on legendary CIA counterspy James Jesus Angleton. I went there to investigate Angleton’s famous mole hunt, one of the least flattering episodes of his eventful career. By the early 1960s, Angleton was convinced the KGB had managed to insert a penetration agent high in the ranks of the CIA.
In researching and writing a biography of Angleton, I constantly confront a conundrum: Was the man utterly brilliant? Or completely nuts?
Angleton is one of America’s archetypal spies. He was the model for Harlot in Harlot’s Ghost, Norman Mailer’s epic of the CIA, a brooding Cold War spirit hovering over a story of corrupted idealism. In Robert De Niro’s cinematic telling of the tale, The Good Shepherd, the Angletonian character was a promising product of the system who loses his way in the moral labyrinth of secret intelligence operations.
In real life, Jim Angleton was a formidable intellectual and canny bureaucrat who helped shape the ethos of the Central Intelligence Agency we have today.
Jeff Guo in the Washington Post:
Scholars have long puzzled over the different fates of the world’s peoples. Why, on the eve of the modern world, were some societies so technologically and politically complex? For centuries, leading intellectuals from Adam Smith to Karl Marx believed that agricultural abundance had propelled the rise of advanced civilizations. The Assyrians and Babylonians of ancient Mesopotamia, for instance, flourished thanks to their fertile farms, which fed an upper class that devoted itself to religion and empire.
In his 1997 bestseller “Guns, Germs and Steel,” historian Jared Diamond argued that the availability of nutritious and easily domesticated plants and animals gave some societies a head start. In the Middle East there was barley and wheat; in Asia there was millet and rice. “People around the world who had access to the most productive crops became the most productive farmers,” Diamond later said on his PBS show. And more productivity led to more advanced civilizations.
But the staple crops associated with less-advanced peoples — like manioc, the white potato, the sweet potato and taro — weren’t necessarily less productive. In fact, manioc and the potato are superstar crops, less demanding of the soil and less thirsty for water. These plants still feed billions of people today.
Kathleen Downes in Women's Media Center:
When I was a little girl, I held hands with my friends. It was a sign of companionship and togetherness, one that wordlessly affirmed the strong force that is female friendship. As I grew from a girl into a woman, I started to get a lot of cultural messages, implicit and explicit, that holding hands was no longer acceptable between friends because it was now assumed to be romantic, reserved for those who are “more than friends.” Suddenly, this way to be close to those I love was sexualized. Hand holding between any two people is beautiful when used as a romantic gesture. But it grieves me, as it should grieve us all, that our culture is so hypersexualized that just about anything we do stands the possibility of being perceived as sexual. This is especially true for women. A simple gesture that in my childhood served as a means of human connection is now treated as sexual, and all its other meanings—like unity, strength, and togetherness—seem to fade away in the eyes of the world.
Not wanting my friends or those around us to misinterpret a gesture of friendship as something more, I stopped holding their hands. I more or less stopped connecting with my friends through touch altogether after childhood because I didn’t want to “give the wrong idea.” When we lose social permission to hold hands as an expression of sisterhood, all women lose something. As a disabled woman, I have felt this loss uniquely and profoundly. I was born with cerebral palsy, and I spend most of my time in a power wheelchair. I view my wheelchair as a tool of freedom, as natural to me as a leg or an arm. I do not resent my wheelchair or see it as confining. Any metaphors likening my chair to a metal prison will be swiftly rejected. However, it cannot be denied that being seated on an electronic throne of metal, plastic, and overpriced foam affects my relationship with physical touch. I live in a world that does not even know how to look at me, much less touch me.
Patrick Hennessey in The Telegraph:
You wouldn’t think that Rudyard Kipling would be particularly esteemed in modern India. Now notorious, rather than celebrated, as the “Bard of Empire,” you might imagine that, if Kipling were remembered in India at all, it would be with understandable awkwardness at best and, at worst, disdain. Memories are long in the Punjab, and few have forgiven Kipling for his public support of General Dyer, the Butcher of Amritsar. Yet as I followed in the young writer’s footsteps through modern Pakistan and India to make the documentary Kipling’s Indian Adventure, from Lahore across the hot Punjabi plain and up into the fresh foothills of the Himalayas to the Raj’s summer capital of Shimla, I discovered not only that Kipling was well known, but that many of his works are well regarded and even taught in schools — more so, I dare say, than in Britain. And no matter where I went or to whom I spoke, one particular set of stories was loved above all others: the Jungle Books. Of course, their modern reach owes a lot to the 1967 Disney animation, the very last production overseen by Walt Disney himself. On the Mall in Shimla, in the shadow of the arch-Gothic Gaiety Theatre — surely the symbolic apotheosis of the Raj — I discussed Kipling’s legacy in India with a group of young students. As soon as the Jungle Books were mentioned, someone started humming The Bare Necessities.
It was probably for the best that Kipling did not live to see the liberties Disney took with his work. But despite straying far from the original texts, the film gleefully and stubbornly kept one of Kipling’s finest creations in the hearts of successive generations — for that alone it should be applauded. No matter what one thinks of Kipling’s politics, The Jungle Book (1894) and The Second Jungle Book (1895) represent one of the great pieces of imaginative writing in English, allegorical tales as timeless as Aesop’s fables and flights of masterfully realized fancy on a par with Lewis Carroll’s Alice stories and Kenneth Grahame’s The Wind in the Willows (1908).
Saturday, April 30, 2016
Steven Nadler in Aeon:
In July 1656, the 23-year-old Bento de Spinoza was excommunicated from the Portuguese-Jewish congregation of Amsterdam. It was the harshest punishment of herem (ban) ever issued by that community. The extant document, a lengthy and vitriolic diatribe, refers to the young man’s ‘abominable heresies’ and ‘monstrous deeds’. The leaders of the community, having consulted with the rabbis and using Spinoza’s Hebrew name, proclaim that they hereby ‘expel, excommunicate, curse, and damn Baruch de Spinoza’. He is to be ‘cast out from all the tribes of Israel’ and his name is to be ‘blotted out from under heaven’.
Over the centuries, there have been periodic calls for the herem against Spinoza to be lifted. Even David Ben-Gurion, when he was prime minister of Israel, issued a public plea for ‘amending the injustice’ done to Spinoza by the Amsterdam Portuguese community. It was not until early 2012, however, that the Amsterdam congregation, at the insistence of one of its members, formally took up the question of whether it was time to rehabilitate Spinoza and welcome him back into the congregation that had expelled him with such prejudice. There was, though, one thing that they needed to know: should we still regard Spinoza as a heretic?
Unfortunately, the herem document fails to mention specifically what Spinoza’s offences were – at the time he had not yet written anything – and so there is a mystery surrounding this seminal event in the future philosopher’s life. And yet, for anyone who is familiar with Spinoza’s mature philosophical ideas, which he began putting in writing a few years after the excommunication, there really is no such mystery. By the standards of early modern rabbinic Judaism – and especially among the Sephardic Jews of Amsterdam, many of whom were descendants of converso refugees from the Iberian Inquisitions and who were still struggling to build a proper Jewish community on the banks of the Amstel River – Spinoza was a heretic, and a dangerous one at that.
The American public sphere is blessed with many religious experts. In the midst of the Syrian refugee crisis, pundits reminded us that Christianity enjoins the welcoming of refugees. Many of the same people, it turns out, are also deeply familiar with Islam, allowing them to piously intone that it is a “religion of peace.” These claims often come from people who are not themselves affiliated with those faiths or any other: they are political interventions masquerading, sometimes insultingly, as exegesis. They serve an important function, however, as a form of wish fulfillment. If these pat, nervous descriptions of long and complex religious traditions were true, the age-old problem of religion in the public square could vanish into a puff of banalities. Peace and refugee assistance are perfectly good secular, progressive goals, and it would be convenient if Christianity and Islam, which long antedate secular progressivism, happened to enjoin the same things. Alas, the world is not so simple. But what, then, are we to do? What should we expect from religion in a secular society?
The conservative position on religiosity has the virtue of coherence: America, from this perspective, is a Christian nation. Even if other religions should be tolerated in the name of Christian charity, they should cede pride of place to America’s exceptional Christian heritage. Progressives have a much more difficult time, and we ricochet between contradictory and unsustainable positions. On the one hand, religion is transparently absurd, but on the other the triumphant atheism of Richard Dawkins is embarrassing, too. When someone such as Kim Davis forces us to confront difficult issues of law and faith, we often have recourse to uncomfortable mockery, unsure why it is wrong to disobey political authority in the name of individual conscience. The old Marxist account of religion as an “opiate of the people” survives, too, in the conventional wisdom that evangelical voters cling to guns and religion because they are distracted from their true economic interests. These attempts to sidestep the question of religion’s role are dangerous but understandable. The great philosopher Richard Rorty once sighed that religion was a conversation-stopper: If someone claims to be acting for religious reasons, what is there to say? If he were alive today, he would know that if we cease talking about religion, we start shouting about it.
Over the past hundred years, philosophical interest in language has become, as Charles Taylor puts it, “close to obsessional”. The obsession goes back to a remark made by Ludwig Wittgenstein in 1915: “The limits of my language mean the limits of my world.” If Wittgenstein was right, then language is not so much a device for recording and communicating information, as the framework of all our knowledge and experience.
But the philosophers who drew inspiration from Wittgenstein’s remark could not agree about what it implied. The positivists among them thought of language as a strict map of impersonal facts, dismissing everything else as rhetoric, emotion or superstition. The humanists, on the other hand, saw it as a creative force that gives wings to our perceptions and opens us to the unknown. For the positivists, you might say, language aspires to the condition of natural science, but for the humanists it is essentially a poem.
Taylor is on the side of the poets, and in his latest book he makes the case with eloquence, force and broad historical sweep. He starts with Étienne de Condillac, the 18th-century proto-positivist who suggested that language came into existence when our ancestors got bored with instinctive grunts and gestures, and decided to share their ideas by means of artificial vocal sounds.
The Swedish writer Therese Bohman seems to have an affinity for aimless young women vulnerable to the attentions of older men. In two of her novels, Drowned and the newly translated The Other Woman, she channels the psyches of twenty-something university students engaged in liaisons with men already involved with other women.
The books share so much in common that they might be the same novel: both explore almost identical situations, share many of the same structural and plot devices, and the author’s and translator Marlaine Delargy’s prose styles remain the same from book to book. What differences there are prove to be relatively superficial. Drowned and The Other Woman are conveyances for Bohman’s thoughts on feminism, sisterhood, and perhaps even the socio-economic status of women in modern society. Regardless of the ambiguous morality of her female characters’ decisions, Bohman’s treatment of them is inarguably sympathetic. Their affairs with men may be the impetus for coming-of-age journeys, but they do not represent a final destination.
Drowned is a psychological thriller—dark, gothic, and fraught with eroticized violence—and technically the better, more innovative novel. It is the story of two sisters. Stella, the elder, lives in a beautiful “yellow wooden house” with a garden; she has the perfect job at the local parks and gardens department; her boyfriend, Gabriel, is devastatingly attractive and a successful novelist.
On retirement from the Natural History Museum, where he was senior palaeontologist, Richard Fortey used the proceeds of a television series to purchase a small beech wood in the Chilterns. It’s clearly kept him busy since then, for in The Wood For the Trees he presents not only an account of the wood’s long history but a year-long study of its biodiversity. For this he has called on the expertise of a lifetime’s worth of friends and colleagues, who arrive with pooters, cherry-pickers and high-tech gear to help him understand absolutely everything about it. The wood may be only four acres, but it’s quite an undertaking.
Fortey is an award-winning science writer whose previous books include Trilobite! (2000), The Earth: An Intimate History (2004) and The Hidden Landscape: A Journey into the Geological Past (1993). He’s a regular on TV, too, recently exploring Hawaii, Madagascar and Madeira in stripy braces and Panama hat for Nature’s Wonderlands: Islands of Evolution on BBC4. His style on the page mirrors that on the small screen: deeply knowledgeable, enthusiastic, avuncular and a little bit old-fashioned. Words such as “thrice”, “pace” and even “fain” dot his prose like relict trees among the newer growth — and are just as pleasing.
The Wood for the Trees opens in April as the bluebells are coming out and concludes at the end of March, taking in a year’s cycle in the wood. Fortey’s nature notes form the basis of each chapter, the larger story of the wood — its geological past and human history — told piecemeal as the book unfolds.
Nina Martyris in Lapham's Quarterly:
On the evening of April 5, 1815, Mount Tambora, in the Indonesian archipelago, lost its head. So furious was the volcanic eruption that the top third of the 4,300-meter mountain disappeared. More than 10,000 people were incinerated, while an additional 30,000 across the world perished from the crop failures, famine, and disease that resulted from extreme weather triggered by the explosion. Volcanic ash blotted out much of the sun for more than a year, seeding wild rumors that the sun was dying. In Europe and North America, there were snowfalls in June, dry fogs, streaky sunsets, and unseasonal storms. The average global temperature dropped by a whole degree. The climate changed overnight. Unexpectedly, however, this catastrophe spurred two remarkable works of apocalyptic literature in distant Europe. As we mark their bicentenary, these works can be viewed as forerunners to the literature of climate change. The more famous of the two is Mary Wollstonecraft Shelley’s Frankenstein, the mesmerizing and moving story of a hubristic scientist, Victor Frankenstein, who creates a yellow-skinned and watery-eyed monster in his laboratory—and then loses control of it. It has become the classic cautionary tale against what Shelley’s vainglorious scientist upholds as “the unquestioned belief that the products of science and technology are an unqualified blessing for mankind.” The other is the lesser known but equally haunting poem “Darkness,” by the romantic poet George Gordon Byron. It imagines the horrific end days of human life on an earth that has become “a lump of death—a chaos of hard clay.” These two works share a unique kinship: not only were they goaded into being by the gloomy Tambora weather, but they were conceived in the same month, July 1816, and in the same place—on the shores of a storm-lashed Lake Geneva, where Byron and the Shelleys had rented neighboring villas.
The story of how Frankenstein was born has passed into literary legend. The year 1816 was known by the clammy epithet of the Year Without a Summer; thunderstorms and what Mary described as “an almost perpetual rain” kept them indoors. The group of friends—which included Byron’s personal physician Dr. John Polidori and Claire Clairmont, Mary’s eighteen-year-old stepsister, who was madly in love with Byron and pregnant with his child—decided to pass the time by inventing ghost stories. Eighteen-year-old Mary Shelley came up with Frankenstein, whose opening page, shivery with icy winds, manifests a deep longing for a place where “the sun is forever visible.”
Jennifer Schuessler in The New York Times:
In 1858, when Walt Whitman sat down to write a manifesto on healthy living, he came up with advice that might not seem out of place in an infomercial today. “Let the main part of the diet be meat, to the exclusion of all else,” Whitman wrote, sounding more than a little paleo. As for the feet, he recommended that the comfortable shoes “now specially worn by base-ball players” — sneakers, if you will — be “introduced for general use,” and he offered warnings about the dangers of inactivity that could have been issued from a 19th-century standing desk. “To you, clerk, literary man, sedentary person, man of fortune, idler, the same advice,” he declared. “Up!”
Whitman’s words, part of a nearly 47,000-word journalistic series called “Manly Health and Training,” were lost for more than 150 years, buried in an obscure newspaper that survived only in a handful of libraries. The series was uncovered last summer by a graduate student, who came across a fleeting reference to it in a digitized newspaper database and then tracked down the full text on microfilm. Now, Whitman’s self-help-guide-meets-democratic-manifesto is being published online in its entirety by a scholarly journal, in what some experts are calling the biggest new Whitman discovery in decades. “This is really a complete new work by Whitman,” said David S. Reynolds, the author of “Walt Whitman’s America” and a professor of English at the Graduate Center of the City University of New York, who was not involved with the find.
33 Years Old
Upon my arrival in Stuttgart, sad news:
a strong young man, supermarket worker
killed himself going to buy things at a supermarket.
I’m 33 and I know that he was too
and will be, eternally . . .
33 years old was the Mexican poet who killed himself
on the road from Bari to Brindisi, going to board
a boat bound for Greece.
We all die a little at 33 . . .
While the funeral procession of the wind strikes
the windows, the nights, the days
making us remember our childhood, something tells us,
that one day He will come.
The wind opens the windows with gloves of dead leaves.
The young man died and now I occupy his room
and I’m afraid because I’m the same age he was.
In this room I have two windows:
one looks out on a strange castle full of tourists and the other
on a forest. Beautiful at dawn and fearsome at night.
I am so close to both windows! One on the old world,
the other on the wild.
Both worlds call to me, they strike at my window at every moment
and they will keep going until the end of days.
The young man died headed for the supermarket . . .
At 33 . . .
I open the window to listen to the sound of the forest, the colors
threading into the dark sky. Smell of a kerosene heater
going in the depths of chest. It’s the forest’s heart!
I never lived close to a forest.
And I think I haven’t even ever seen one.
This is a beautiful, strong, tall, friendly forest . . .
That is 33 years old . . .
by Washington Cucurto
from Poetry International
translation: Jordan Lee Schnee
Friday, April 29, 2016
Nomy Arpaly in Pea Soup:
I’ll never forget the old guy who asked me, at an APA interview: “suppose I wanted to slap you, and suppose I wanted to slap you because I thought you were giving us really bad answers, and I mistakenly believed that by slapping you I’ll bring out the best in you. Am I blameworthy?”
When he said “suppose I wanted to slap you”, his butt actually left his chair for a moment and his hand was mimicking a slap in the air.
Since that event - which happened back when I was a frightened youngster with all the social skills of a large rock - I have thought many times about the connection between philosophy and rudeness - especially the connection between philosophical debating and rudeness. It seems to me that the connection between philosophical argument and rudeness is similar to the connection between fighting a war and immorality. Surprisingly precise analogies can be drawn between the soldier in a just war and the philosophical arguer in pursuit of the truth. Let me explain.
It is a big part of moral behavior in ordinary situations not to kill people. Yet the morally healthy inhibition against killing people has to be lost, of necessity, in war - even in a morally justified war.
It is a big part of politeness - not in the sense of using the right fork, but in the sense of civility - in ordinary situations not to tell another person that she is wrong and misguided about something she cares a lot about, or that she cares about being right about. For brevity’s sake, let’s just say it’s a big part of politeness or civility not to correct people. Yet the civilized inhibition against correcting people has to be lost, of necessity, in a philosophical argument.
A soldier who is fighting, even for a just cause, is in a precarious situation, with regard to morality, because he has lost, of necessity, the basic moral inhibition against killing people.
A philosopher who is arguing with another, even in pursuit of truth, is in a precarious situation with regard to politeness, because she has lost, of necessity, the basic civil inhibition against correcting people.
Daniel Engber in FiveThirtyEight:
In 2012, network scientist and data theorist Samuel Arbesman published a disturbing thesis: What we think of as established knowledge decays over time. According to his book “The Half-Life of Facts,” certain kinds of propositions that may seem bulletproof today will be forgotten by next Tuesday; one’s reality can end up out of date. Take, for example, the story of Popeye and his spinach.
Popeye loved his leafy greens and used them to obtain his super strength, Arbesman’s book explained, because the cartoon’s creators knew that spinach has a lot of iron. Indeed, the character would be a major evangelist for spinach in the 1930s, and it’s said he helped increase the green’s consumption in the U.S. by one-third. But this “fact” about the iron content of spinach was already on the verge of being obsolete, Arbesman said: In 1937, scientists realized that the original measurement of the iron in 100 grams of spinach — 35 milligrams — was off by a factor of 10. That’s because a German chemist named Erich von Wolff had misplaced a decimal point in his notebook back in 1870, and the goof persisted in the literature for more than half a century.