Friday, October 21, 2016
Video length: 3:15
Ian Leslie in The Economist:
In 1930, a psychologist at Harvard University called B.F. Skinner made a box and placed a hungry rat inside it. The box had a lever on one side. As the rat moved about it would accidentally knock the lever and, when it did so, a food pellet would drop into the box. After a rat had been put in the box a few times, it learned to go straight to the lever and press it: the reward reinforced the behaviour. Skinner proposed that the same principle applied to any “operant”, rat or man. He called his device the “operant conditioning chamber”. It became known as the Skinner box.

Skinner was the most prominent exponent of a school of psychology called behaviourism, the premise of which was that human behaviour is best understood as a function of incentives and rewards. Let’s not get distracted by the nebulous and impossible to observe stuff of thoughts and feelings, said the behaviourists, but focus simply on how the operant’s environment shapes what it does. Understand the box and you understand the behaviour. Design the right box and you can control behaviour.

Skinner turned out to be the last of the pure behaviourists. From the late 1950s onwards, a new generation of scholars redirected the field of psychology back towards internal mental processes, like memory and emotion. But behaviourism never went away completely, and in recent years it has re-emerged in a new form, as an applied discipline deployed by businesses and governments to influence the choices you make every day: what you buy, who you talk to, what you do at work. Its practitioners are particularly interested in how the digital interface – the box in which we spend most of our time today – can shape human decisions. The name of this young discipline is “behaviour design”. Its founding father is B.J. Fogg.
...In a phone conversation prior to the workshop, Fogg told me that he read the classics in the course of a master’s degree in the humanities. He never found much in Plato, but strongly identified with Aristotle’s drive to organise and catalogue the world, to see systems and patterns behind the confusion of phenomena. He says that when he read Aristotle’s “Rhetoric”, a treatise on the art of persuasion, “It just struck me, oh my gosh, this stuff is going to be rolled out in tech one day!”
Jim Davies in Nature:
Some researchers argue that consciousness is an important part of human cognition (although they don’t agree on what its functions are), and some counter that it serves no function at all. But even if consciousness is vitally important for human intelligence, it is unclear whether it’s also important for any conceivable intelligence, such as one programmed into computers. We just don’t know enough about the role of consciousness — be it in humans, animals or software — to know whether it’s necessary for complex thought. It might be that consciousness, or our perception of it, would naturally come with superintelligence. That is, the way we would judge something as conscious or not would be based on our interactions with it. A superintelligent AI would be able to talk to us, create computer-generated faces that react with emotional expressions just like somebody you’re talking to on Skype, and so on. It could easily have all of the outward signs of consciousness. It might also be that development of a general AI would be impossible without consciousness. (It’s worth noting that a conscious superintelligent AI might actually be less dangerous than a non-conscious one, because, at least in humans, one process that puts the brakes on immoral behaviour is ‘affective empathy’: the emotional contagion that makes a person feel what they perceive another to be feeling. Maybe conscious AIs would care about us more than unconscious ones would.)
Either way, we must remember that AI could be smart enough to pose a real threat even without consciousness. Our world already has plenty of examples of dangerous processes that are completely unconscious. Viruses do not have any consciousness, nor do they have intelligence. And some would argue that they aren’t even alive. In his book Superintelligence (Oxford University Press, 2014), the Oxford researcher Nick Bostrom describes many examples of how an AI could be dangerous. One is an AI whose main ambition is to create more and more paper clips. With advanced intelligence and no other values, it might proceed to seek control of world resources in pursuit of this goal, and humanity be damned. Another scenario is an AI asked to calculate the infinite digits of pi that uses up all of Earth’s matter as computing resources. Perhaps an AI built with more laudable goals, such as decreasing suffering, would try to eliminate humanity for the good of the rest of life on Earth. These hypothetical runaway processes are dangerous not because they are conscious, but because they are built without subtle and complex ethics.
Thursday, October 20, 2016
Eli Saslow in the Washington Post:
Their public conference had been interrupted by a demonstration march and a bomb threat, so the white nationalists decided to meet secretly instead. They slipped past police officers and protesters into a hotel in downtown Memphis. The country had elected its first black president just a few days earlier, and now in November 2008, dozens of the world’s most prominent racists wanted to strategize for the years ahead.
“The fight to restore White America begins now,” their agenda read.
The room was filled in part by former heads of the Ku Klux Klan and prominent neo-Nazis, but one of the keynote speeches had been reserved for a Florida community college student who had just turned 19. Derek Black was already hosting his own radio show. He had launched a white nationalist website for children and won a local political election in Florida. “The leading light of our movement,” was how the conference organizer introduced him, and then Derek stepped to the lectern.
“The way ahead is through politics,” he said. “We can infiltrate. We can take the country back.”
Years before Donald Trump launched a presidential campaign based in part on the politics of race and division, a group of avowed white nationalists was working to make his rise possible by pushing its ideology from the radical fringes ever closer to the far conservative right. Many attendees in Memphis had transformed over their careers from Klansmen to white supremacists to self-described “racial realists,” and Derek Black represented another step in that evolution.
He never used racial slurs. He didn’t advocate violence or lawbreaking. He had won a Republican committee seat in Palm Beach County, Fla., where Trump also had a home, without ever mentioning white nationalism, talking instead about the ravages of political correctness, affirmative action and unchecked Hispanic immigration.
Bill Gates in his own blog:
A few years ago, I pulled off a purposeful prank. While I was giving a TED Talk on malaria to a room full of influential people, I opened a canister and let loose a small swarm of mosquitoes. “There’s no reason that only poor people should have the experience,” I said. I let the audience squirm in their seats for about half a minute before I let on that the mosquitoes were not infected with malaria. My gimmick worked. A distant problem suddenly got very close to home.
Today, gimmicks are no longer necessary for convincing Americans of the danger of mosquito-borne diseases. The spread of Zika virus in south Florida, Puerto Rico, and other parts of the U.S. has given millions of Americans a direct understanding of what it’s like to live with the fear of mosquitoes and the harm they can do, especially to pregnant women and children.
The world must focus serious attention and resources on ending the Zika epidemic. At the same time, we should keep in mind that the overwhelming toll of mosquito-related illness and death comes from malaria. Malaria is the key reason mosquitoes are the deadliest animal in the world.
Murray Shanahan in Aeon:
In 1984, the philosopher Aaron Sloman invited scholars to describe ‘the space of possible minds’. Sloman’s phrase alludes to the fact that human minds, in all their variety, are not the only sorts of minds. There are, for example, the minds of other animals, such as chimpanzees, crows and octopuses. But the space of possibilities must also include the minds of life-forms that have evolved elsewhere in the Universe, minds that could be very different from any product of terrestrial biology. The map of possibilities includes such theoretical creatures even if we are alone in the Cosmos, just as it also includes life-forms that could have evolved on Earth under different conditions.
We must also consider the possibility of artificial intelligence (AI). Let’s say that intelligence ‘measures an agent’s general ability to achieve goals in a wide range of environments’, following the definition adopted by the computer scientists Shane Legg and Marcus Hutter. By this definition, no artefact exists today that has anything approaching human-level intelligence. While there are computer programs that can out-perform humans in highly demanding yet specialised intellectual domains, such as playing the game of Go, no computer or robot today can match the generality of human intelligence.
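The Legg–Hutter definition quoted above also has a formal counterpart in their work, which may be worth sketching here (the notation below is my own summary of their formalisation, not part of Shanahan’s essay). Their “universal intelligence” measure averages an agent’s expected reward over all computable environments, weighting simpler environments more heavily:

```latex
% Universal intelligence of an agent \pi, following Legg & Hutter.
% Notation (sketched for illustration):
%   E         -- a class of computable, reward-bounded environments
%   K(\mu)    -- the Kolmogorov complexity of environment \mu
%   V^{\pi}_{\mu} -- the expected total reward agent \pi achieves in \mu
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

On this reading, “general ability to achieve goals in a wide range of environments” means doing well across many environments at once, which is exactly why a specialised Go program scores poorly: it earns reward in only one environment.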
But it is artefacts possessing general intelligence – whether rat-level, human-level or beyond – that we are most interested in, because they are candidates for membership of the space of possible minds. Indeed, because the potential for variation in such artefacts far outstrips the potential for variation in naturally evolved intelligence, the non-natural variants might occupy the majority of that space. Some of these artefacts are likely to be very strange, examples of what we might call ‘conscious exotica’.
In what follows I attempt to meet Sloman’s challenge by describing the structure of the space of possible minds, in two dimensions: the capacity for consciousness and the human-likeness of behaviour.
This talk was presented at Harvard-Epworth Church, Cambridge, MA on May 12, 2016. Video length: 1:26:30
Martin Heidegger never apologized for his support of the Nazis. He joined the party in 1933 and remained a member until the bitter end, in 1945. First, he spoke out enthusiastically in favor of a conservative revolution with Hitler at its helm. From about 1935, he found his own ambitions disappointed, and grew more silent. Yet, when he called his dalliance with National Socialism his greatest mistake after the war, he was upset not at his crime, but at the fact that he got caught.
Not that Heidegger has had to apologize, either. For the past seventy years, his many apologists and acolytes have gone to astounding lengths in trying to prove that his philosophical oeuvre exists independent of what was, they avowed, a mere weakness of character, an instance of momentary opportunism. In 2014, a group of French philosophers even tried to halt the publication of Heidegger’s Black Notebooks, his philosophical diaries. But if antisemitic references in his philosophy are oblique and, as some would have it, coincidental to his critique of modernity, the Notebooks leave little room for such charitable reading. Even after the war he would bemoan the Jewish “drive for revenge,” with their aim consisting in “obliterating the Germans in spirit and history.”
In his book Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety, Eric Schlosser reveals that worst-case scenarios have come harrowingly close to coming true on a number of occasions—yet the American public has never been adequately informed.
So the question that continues to haunt me is, Why would a generation of presidents, supported by responsible men like William Perry, engage in a nuclear poker game that no sane gambler would in good conscience play? Why on earth wouldn’t both sides calculate the worst-case scenario and elect not to play the game?
On some nights during the Cold War, I lay awake turning over that question. The only plausible answer I was able to imagine is that they, the two governments, couldn’t help it. They had no choice, or thought they had no choice: the nuclear genie was out of the bottle and both sides seized on deterrence as an existential necessity. But was it?
‘The world has never seen anything like this picture,’ Thackeray said. Commenting on the writer’s reaction to the painting, John Barrell wrote (in the LRB of 18 December 2014) that Thackeray ‘won’t have to wait for the tide of modern art to flood in to appreciate what Turner has done. It’s 1844, and he’s got it. Turner is not out of his time; he and Turner are contemporaries.’ The tide of modern art wasn’t long in coming: in the first Impressionist salon in 1874, Félix Bracquemond showed an etching of Manet’s Olympia alongside an intriguing version of Rain, Steam and Speed. He captured some of the elements of Turner’s title – the wind-driven rain slashes across the bridge – but his train appears as static as a Monet locomotive idling at the Gare St Lazare. He also left out the hare.
Kenneth Clark described Rain, Steam and Speed as the ‘most extraordinary’ of Turner’s paintings. ‘I suppose that everybody today would accept it as one of the cardinal pictures of the 19th century on account of its subject as well as its treatment.’ That subject is often seen as the ascendancy of man-made industrial society and the obliteration of the old natural order. Andrew Wilton, the author of Turner in His Time, considered the painting’s perspective as indicative of the triumph of the new: ‘The plunging diagonal line that cuts across the familiar location here is an emphatic demonstration of how the new technologies of the age imposed a precise geometric order on the pastoral scene.’
Richard Brody in The New Yorker:
“Michael Moore in TrumpLand” isn’t quite the film that I expected it to be, and that’s all to the good. Moore is, of course, a genius of political satire, deploying his persona—as a populist socialist skeptic with a superb sense of humor and a chess player’s skill at media positioning—to deeply humane ends that are mainly detached from practicality, policy, and practical politics. The very idea of the new film—a recording of Moore’s one-man show from the stage of a theatre in a small, predominantly Republican town in Ohio—runs the risk of self-parody, being a feature-length lampooning of Trump, laid out with meticulously researched facts set forth with the sublime derision of which Moore is a master. It would have been a highly saleable version of preaching to the converted.
...Moore’s final rhetorical stroke is to add that the lifetime of struggle that Hillary has faced (and he cites the struggles of Pope Francis as a comparison) has left her bitterly resentful of the status quo, profoundly progressive in temperament, deeply intent on making decisive changes when, finally, she realizes her lifelong goal of being in a position to make them. In effect, Moore presents a Hillary Clinton whose progressivism arises from no mere butterfly idealism but embodies the hard-won experience of the best American tradition. Then he can’t help but ice the cake: he dreams of her flurry of executive orders (a conservative’s nightmare); he envisions that she’ll replace old enemies (“Iran and North Korea”) with new ones (“Monsanto and Wells Fargo”); and he puts his own enthusiasm for Clinton on the line with a celebrity-fuelled vow—that if, in two years, she doesn’t deliver on the progressive vision that she promises, he himself will run for President in 2020. (He quickly piles the comedy onto this notion—his first promise is that all electronic devices will use the same charger cord.)
This masculinity "script," still embraced by older men, was outlined as the four-part Blueprint of Manhood, first published by sociologist Robert Brannon when the men in the studies were entering adulthood in the 1970s. The blueprint included:
No Sissy Stuff - men are to avoid being feminine, show no weaknesses and hide intimate aspects of their lives.
The Big Wheel - men must gain and retain respect and power and are expected to seek success in all they do.
The Sturdy Oak - men are to be ''the strong, silent type" by projecting an air of confidence and remaining calm no matter what.
Give 'em Hell - men are to be tough, adventurous, never give up and live life on the edge.
"We're all aging; it's a fact of life. But as men age, they're unable to be who they were, and that creates a dissonance that is hard to reconcile," said Langendoerfer, who studies aging in men. "We need to better understand how older men adapt to their stressors—high suicide rates, emotions they stifle, avoiding the doctor—to hopefully help them build better lives in older age," she said. The review, published in the journal Men and Masculinities, was co-written by Edward Thompson Jr., an emeritus professor of sociology and anthropology at the College of the Holy Cross and now an affiliate of the Department of Sociology at Case Western Reserve.
Until now, now that I’ve reached my thirties:
All my Muse’s poetry has been harmless:
American and diplomatic: a learned helplessness
Is what psychologists call it: my docile, desired state.
I’ve been largely well-behaved and gracious.
I’ve learned the doctors learned of learned helplessness
By shocking dogs. Eventually we things give up.
Am I grateful to be here? Someone eventually asks
If I love this country. In between the helplessness,
The agents, the nation must administer
A bit of hope: must meet basic dietary needs:
Ensure by tube, by nose, by throat, by other
Orifice. Must fistbump a janitor. Must muss up
Some kid’s hair and let him loose
Around the Oval Office. click click could be cameras
Or the teeth of handcuffs closing to fix
The arms overhead. There must be a doctor on hand
To ensure the shoulders do not dislocate
And there must be Prince’s “Raspberry Beret.”
click click could be Morse code tapped out
Against a coffin wall to the neighboring coffin.
Outside my window, the snow lights cobalt
For a bit at dusk and I’m surprised
Every second of it. I had never seen the country
Like this. Somehow I can’t say yes. This is a beautiful country.
I have not cast my eyes over it before, that is,
In this direction, is how John Brown put it
When he was put on the scaffold.
I feel like I must muzzle myself,
I told my psychologist.
“So you feel dangerous?” she said.
“So you feel like a threat?”
Why was I so surprised to hear it?
by Solmaz Sharif
Graywolf Press, 2016
Wednesday, October 19, 2016
Lorraine Berry in Literary Hub:
If your Facebook feed looks anything like mine, the comparisons between Donald Trump and Adolf Hitler appear like surreal dreams, with Trump’s face Photoshopped so he’s standing in front of a rally at Nuremberg. It doesn’t take too many comments before someone invokes Godwin’s Law and the conversation shuts down. Donald Trump is many things; Adolf Hitler, he is not.
On February 19th, the public intellectual, novelist, essayist, and semiotician, Umberto Eco died in Milan. While the rest of the world has mourned the loss of rock star David Bowie, Eco’s death meant the loss of one of our intellectual rock stars, a man who was as comfortable discussing Barbie as he was explaining the aesthetics of Thomas Aquinas. It was Eco who insisted that a “fundamental” reading of a text—an approach espoused by Antonin Scalia, for example—was of little use when trying to understand books. “Books are not made to be believed, but to be subjected to inquiry. When we consider a book, we mustn’t ask ourselves what it says but what it means.” (How different Italy’s intellectual giant from the man who insisted the Constitution means exactly what it meant when it was first written—by rich, white slave-owners).
Computer scientists have come up with an algorithm that can fairly divide a cake among any number of people
Erica Klarreich in Quanta:
Two young computer scientists have figured out how to fairly divide cake among any number of people, setting to rest a problem mathematicians have struggled with for decades. Their work has startled many researchers who believed that such a fair-division protocol was probably impossible.
Cake-cutting is a metaphor for a wide range of real-world problems that involve dividing some continuous object, whether it’s cake or, say, a tract of land, among people who value its features differently — one person yearning for chocolate frosting, for example, while another has his eye on the buttercream flowers. People have known at least since biblical times that there’s a way to divide such an object between two people so that neither person envies the other: one person cuts the cake into two slices that she values equally, and the other person gets to choose her favorite slice. In the book of Genesis, Abraham (then known as Abram) and Lot used this “I cut, you choose” procedure to divide land, with Abraham deciding where to divide and Lot choosing between Jordan and Canaan.
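The two-player “I cut, you choose” procedure described above is simple enough to sketch in code. The following is a minimal illustration, not from Klarreich’s article: the valuation functions, function names, and the discrete approximation of the cake are my own. Player A cuts at the point that splits the cake into two pieces she values equally; player B then takes whichever piece he values more, so neither player envies the other’s share by their own measure.

```python
def cut_and_choose(value_a, value_b, n=10_000):
    """Two-player envy-free division of a cake modeled as the interval [0, 1].

    value_a, value_b: each player's valuation density over [0, 1]
    (hypothetical functions supplied by the caller).
    Returns a dict mapping player -> (start, end) of their piece.
    """
    # Player A's total value of the whole cake (Riemann-sum approximation).
    total_a = sum(value_a(i / n) for i in range(n)) / n

    # Player A cuts at the first point where the left piece is worth
    # half the cake to her, so she is indifferent between the pieces.
    acc, cut = 0.0, 1.0
    for i in range(n):
        acc += value_a(i / n) / n
        if acc >= total_a / 2:
            cut = (i + 1) / n
            break

    # Player B values each piece and chooses the one he prefers.
    left_b = sum(value_b(i / n) for i in range(int(cut * n))) / n
    total_b = sum(value_b(i / n) for i in range(n)) / n
    right_b = total_b - left_b

    if left_b >= right_b:
        return {"A": (cut, 1.0), "B": (0.0, cut)}
    return {"A": (0.0, cut), "B": (cut, 1.0)}
```

For example, if A values the cake uniformly while B cares only about the right-hand (say, chocolate-frosted) end, A cuts at the midpoint and B happily takes the right half. The hard open problem the article describes was extending this envy-free guarantee, with a bounded number of steps, to any number of players.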
Around 1960, mathematicians devised an algorithm that can produce a similarly “envy-free” cake division for three players. But until now, the best they had come up with for more than three players was a procedure created in 1995 by political scientist Steven Brams of New York University and mathematician Alan Taylor of Union College in Schenectady, New York, which is guaranteed to produce an envy-free division, but it is “unbounded,” meaning that it might need to run for a million steps, or a billion, or any large number, depending on the players’ cake preferences.
Prashant Keshavmurthy in The Wire:
For over a thousand years, since around the ninth century, the imagination of the Indian in Arabic and Persian literature coalesced in the figure of a non-Islamic religious specialist, the Brahman. Not that of the Kayastha Hindu, the men of whose caste, from the mid-16th century onward, increasingly staffed the bureaucracies of the Afghan and Mughal states of North India, nor that of the occasional Brahman who, by familial and personal circumstance, received a traditional madrasa education in Arabic and Persian. For both these types of men were so steeped in Persian-Islamic learning and comportment as to be Muslim, in an elite cultural sense.
Rather, the Brahman of the Persian literary imagination was continuous with the Brahman of the earliest texts of kalām, or rational theology, in Arabic, whether Muslim or Jewish. This Brahman was purely a debate opponent invoked by Muslim and Jewish theologians to defend the necessity of prophecy. These heresiographers presented him as a proponent of the sufficiency of human reason and thus of the redundancy of prophets. Sarah Stroumsa, a scholar of early Islamic-Jewish theology, has argued that early Muslim-Jewish theological debates were shaped by encounters with Brahmans and that these debates were conducted solely on the shared ground of logic, avoiding reference to theological doctrines specific to each side. The polemically simplified picture of the Brahman this left behind in the archive of early Muslim-Jewish heresiography was perhaps what allowed him to pass from theology into literature, where he congealed into a stock character.
Barack Obama spoke to Wired Editor in Chief Scott Dadich and MIT Media Lab Director Joi Ito last week. You can see all eight excellent videos here. (Is there anything BHO doesn't know a lot about? I was amazed by how he has time to keep up with things like issues surrounding recent developments in AI. I highly recommend watching all eight Wired videos.)
Here is the 6th video in the series about how AI will affect jobs. Video length: 9:12
And here are two bonus BHO videos:
Anthony Powell said that John Betjeman had ‘a whim of iron’. To judge by these compulsive letters, Patrick Leigh Fermor had a pleasure-loving streak of purest titanium. From the first letter, written in 1940, soon after he joined the Irish Guards, until the last in 2010, sent when he was ninety-four, he was on a lifelong search for erotic, alcoholic, intellectual and courageous diversion. One moment he’s in Crete, meeting the partisans who helped him kidnap the Nazi general Heinrich Kreipe, his most dashing escapade. The next he’s at Chatsworth, sitting next to Camilla Parker Bowles – ‘immensely nice, non-show-off, full of charm and very funny’.
In between, it’s back to the Mani peninsula and the enchanting seaside home he and his wife, Joan, built in the mid-1960s. It was only there, in Greece, and then, in his fifties, that Leigh Fermor had a real adult home and reined in the wanderlust – and the lust. Until then, he’d continued the manic travels that began with his walk as a teenager across Europe in the 1930s. In the letters we follow him as he flits from borrowed Italian castello to French abbey to Irish castle, taking the edge off his ‘high-level cadging’ by making jokes about it. In 1949, he wrote to Joan: ‘Darling, look out for some hospitable Duca or Marchesa with a vast castle, and try and get off with him, so that he could have us both to stay.’
The most passionately discussed New York City gallery exhibition of last season might have been Philip Guston at Hauser & Wirth, but the most talked-about one by a living artist was undoubtedly “David Hammons: Five Decades” at Mnuchin Gallery. Each of the two shows cast its own spell, one very different from the other, but both seemed to offer one emphatic if understated lesson to young artists: Keep your distance from the art world. Guston sought solitude by “painting a lot of other people out of the canvas,” as Harold Rosenberg put it in a conversation with him. Guston concurred: “People represent ideas…. But you have to paint them out. You know, ‘Get out.’” He told Morton Feldman that “by art I don’t mean the art world, I don’t mean lovers of art.” Lovers of art—people like me—might love it to death; what we love in art may not be what the artist needs from it. Guston once compared the art world to a country occupied by a foreign power.
Hammons is even more vehement. For him, not just the art world but art itself is suspect. “I can’t stand art actually. I’ve never, ever liked art,” he told the art historian and curator Kellie Jones in a 1986 interview that remains the most complete exposition we have of this notoriously unforthcoming artist’s philosophy.
Walking through Reading on a recent afternoon, I passed by all the things you can’t see from the pagoda on top of Mt. Penn. There were the bas relief depictions of the town’s railroad past on the stone walls of a building, posters for cheap collect calls to Central America, and the bustling scene and pawn shops of Penn Square not far from a newly opened luxury Double Tree hotel, a nod to downtown revitalization. There was also, unexpectedly, a pair of Trump signs.
I spotted them outside a dingy building attached to Tommy’s Auto Repair, a garage on North 8th Street, and wandered in.
They belonged, according to Tommy Acevedo, 39, to “the old man who owns the building.” Acevedo, originally from the Dominican Republic, owns the autobody business and a grocery store a few blocks away.
Sitting in the garage office, Acevedo and a few customers argued animatedly about the presidential race as soon as the topic of the Trump signs came up, the election being 2016’s one sure conversational accelerant. They talked Trump’s businessman appeal, Clinton’s emails and the threat of terrorism. Ultimately, though, Acevedo said, “I’m definitely trying to get Hillary in there.”
Danny Heitman in The Christian Science Monitor:
In one of the most hotly contested political seasons in American history, a new biography by Larry Tye revisits the life of Robert F. Kennedy, a campaign warrior who helped define national life in the 1960s. It was also a life, as Tye points out, that was deeply shaped by reading. In “Bobby Kennedy: The Making of a Liberal Icon,” Tye chronicles RFK’s intellectual evolution, a change influenced in large part by Kennedy’s deepening dependence on books for inspiration. After his brother, President John F. Kennedy, was assassinated on November 22, 1963, Robert increasingly turned to literature to make sense of his grief. At the suggestion of his sister-in-law, widowed First Lady Jackie Kennedy, RFK began reading the ancient Greeks, especially the work of Aeschylus, a playwright who offered special insights on loss. Aeschylus, writes Tye, “seemed to be speaking directly to Bobby when he wrote, ‘Take heart. Suffering, when it climbs highest, lasts but a little time.’”
Kennedy and his late brother “had kept a daybook of quotes that moved them for use in speeches,” Tye notes. “Now Bobby did it on his own from readings that had progressed beyond his old war and adventure tales to biography and history. There was more poetry now and less football. For the rest of his life he would habitually stuff a paperback in his coat pocket or briefcase, some new to him and others that he liked enough to reread repeatedly, his lips moving as he did. Aides thought he was staring into his lap until they looked closer and saw the essays of Emerson and Thoreau, or poetry by Shakespeare or Tennyson.” Reading wasn’t a retreat from the world for Robert F. Kennedy, Tye suggests, but a way to engage it. A favorite quote from Francis Bacon affirmed life as active rather than passive: “In this theater of man’s life, it is reserved only for God and for angels to be lookers-on.”
Nic Fleming in Nature:
Two US researchers have doubled their 16-year-old wager on whether anyone born before 2001 will reach the age of 150. The scientists have now staked US$600 on the question — but, if the fund in which the cash is deposited keeps growing at its current rate, the descendants of the victor could net hundreds of millions of dollars in 2150. The friendly rivalry began in 2000, when Steven Austad, a biologist who studies ageing, was quoted in a Scientific American article with the provocative statement: "The first 150-year-old person is probably alive right now." Jay Olshansky, another expert on ageing, didn't think so — and the scientists agreed to stake cash on the debate. On 15 September 2000, the two put $150 each into an investment fund, and signed a contract stating that the money and any returns would be paid to the winner (or his descendants) in 2150. The bet also stipulates that Austad will only win if the 150-year-old is of sound mind.
Then last week, a paper in Nature suggested — from an analysis of global demographic data — that there may be a natural limit to human lifespan of about 115 years. Olshansky, at the University of Illinois at Chicago, wrote an accompanying commentary which argues that fixed genetic programs stand in the way of significant human life extension. He says he believes a major breakthrough that will significantly extend human lifespan will occur within his lifetime, but that it will come too late to help those born before 2001 to reach their 150th birthday. But Austad, at the University of Alabama, Birmingham, disagrees. “I’m more convinced than ever that I was correct in our original bet,” he says. He cites recent studies showing that a number of drugs, such as the immune-system suppressor rapamycin, can significantly extend lifespan in animals. And he points to the imminent start of a clinical trial called Targeting Aging with Metformin, or TAME, which hopes to show that a well-known diabetes drug can slow ageing.
Tuesday, October 18, 2016
Philosopher Kwame Anthony Appiah says race and nationality are social inventions being used to cause deadly divisions
Hannah Ellis-Petersen in The Guardian:
Regarded as one of the world’s greatest thinkers on African and African American cultural studies, Appiah has taught at Yale, Harvard, Princeton and now NYU. He follows in the notable footsteps of previous Reith lecturers Stephen Hawking, Aung San Suu Kyi, Richard Rogers, Grayson Perry and Robert Oppenheimer.
The “Mistaken Identities” lectures cover ground already well trodden by the philosopher. His mixed race background, lapsed religious beliefs and even sexual orientation have, in his own words, put him on the “periphery of every accepted identity”.
But in the face of religious fundamentalism, Brexit and the need to reiterate in parts of the US that black lives matter, Appiah argues it is time we stopped making dangerous assumptions about how we define ourselves and each other.
Appiah’s lecture on nationality draws heavily on the “nonsense misconceptions” he saw emerge prominently in the Brexit and Donald Trump campaigns – that to preserve our national identity we have to oppose globalisation.
“My father went to prison three times as a political prisoner, was nearly shot once, served in parliament, represented his country at the United Nations and believed that he should die for his country,” Appiah says. “There wasn’t a more patriotic man than my father, and this Ghanaian patriot was the person who explicitly taught me that I was a citizen of the world. In fact, it mattered so much to him that he wrote it in a letter for us when he died.”
Nicole Im in Literary Hub:
Joanna Kavenna is a philosopher dressed down in the sensory details of the novel. Kavenna, who took her last name from the Norwegian name for woman, seems to be reassembling the world from its basic questions—what are we and why are we here?
In her latest, A Field Guide to Reality, protagonist Eliade Jencks is always one scent or thought-carom away from the void. In one moment she notes the smell of laundered handkerchiefs and the sound of clattering plates while pondering questions like “does perception create the world, or is it there before us, preset and perpetual?” A Field Guide to Reality circles these questions through the point of loss. After learning that her friend Professor Solete has died, Eliade embarks on a journey to find his mysterious “Field Guide.” She is pulled into the strange worlds of Solete’s various colleagues, and as her journey progresses, finds it harder and harder to “determine what [is] real and what [is] not.”
Nicole Im: Nature plays a big role in your recent story in Freeman’s, “If There Was No Moon,” and in your novel, A Field Guide to Reality. Cold, dark rivers, shadow-casting trees, circling birds, and throughout A Field Guide, a swirling, smudging mist. How do you view the relationship between the physical world and philosophical ideas, and how do they connect in your writing?
Joanna Kavenna: For many years I had this idea about an impossible book, which would supply cogent, succinct answers to all those ambiguous and perplexing questions about the meaning of life and death, i.e. a field guide to reality: a sober, helpful, lucid manual for fixing existential angst, like a manual for fixing a car. So that was the idea behind A Field Guide to Reality—this idea of an impossible book.
I’m very interested in philosophical questions about reality and truth and the meaning of things. I don’t think there should be an esoteric elite that gets to think deeply about life, and surrenders its hallowed revelations to the rest of us. I think we all have the right to speculate about what the hell is going on. Because it’s all very weird but it’s actually happening to us—just this once, just for now. Perhaps because of all this, my narrators observe things in quite a detailed and even at times frenetic way—whether they’re in the countryside or in a city or town. To me, also, philosophical thought and the surrounding environment are allied, partly because I walk long distances, whenever possible, to work out my ideas.
David Wescott in the Chronicle of Higher Education:
It’s the year 2120. You feel no hunger, no cold, no heat, no pain. There’s no need to eat or to take medicine, though you can if you like. You are beautiful, intelligent, and charismatic, as are your friends, co-workers, lovers. Though the economy is fiercely competitive, retirement is not far off. You do not fear death. Look out your office window and you see sunlit spires towering over tree-lined boulevards.
At least this is what you think you see. In fact, you live and work in virtual reality. Your city amounts to racks of computer hardware and the pipes that cool them. And you are not "you" in the traditional sense: You are an "em," a robotic brain emulation created by scanning a particular human brain and uploading it to a computer. On the upside, you process information 1,000 times faster than a human. On the downside, you inhabit a robotic body, and you stand roughly two millimeters tall.
This is the world Robin Hanson is sketching out to a room of baffled undergraduates at George Mason University on a bright April morning. To illustrate his point, he projects an image of an enormous futuristic city alongside clip art of a human castaway cowering on a tiny desert island. His message is clear: The future belongs to "ems."
This may sound more like science fiction than scholarship, but that’s part of the point. Hanson is an economist with a background in physics and engineering; a Silicon Valley veteran determined to promote his theories in an academy he finds deeply flawed; a doggedly rational thinker prone to intentionally provocative ideas that test the limits of what typically passes as scholarship. Those ideas have been mocked, memed, and marveled at — often all at once.