Friday, October 21, 2016
Max Holleran in Boston Review:
This summer’s holiday season in the Mediterranean began with the startling announcement, from the International Organization for Migration, that more than 3,000 migrants had already died in 2016 attempting to cross into Europe over the Mediterranean Sea. While Germany resettled nearly a million people in 2015, other EU nations have been far more reluctant. Since last year, the European public has resolutely told their national leaders to begin deportations and reform border security, often in urgently nationalistic language of the kind found in Brexit’s “Breaking Point” ad. The EU has begun to tighten entry for those immigrating from outside of the continent, and securing the southern border has become an existential test of whether the political federation can survive. Mediterranean countries are on the frontline of this effort despite their limited economic resources compared to their wealthier Northern neighbors. They have been tasked with the role of sentry, patrolling the walls of fortress Europe. Yet a backdoor to the castle seems to have been left open.
Since the 2008 financial crisis, many Mediterranean countries have begun to offer citizenship-for-sale to non-European nationals. These countries include places hit hard by austerity like Cyprus, Portugal, and Spain (where the program is called “golden visa” in a nod to the optimism about the value of an EU passport as well as excitement for the wealth that citizenship investors could potentially bring). Often connected to the purchasing of property, these programs offer residency, a passport, and—after several years—full citizenship to those able to pay several hundred thousand euros. Selling citizenship is a contentious idea that disrupts some of our basic notions about what it means to belong to a national community. Mediterranean states support it partly as a way to raise revenues after the global financial crisis, which brought budget slashing and pushed unemployment over 20 percent in many countries.
David Kaiser in Nautilus:
Of all the bizarre facets of quantum theory, few seem stranger than those captured by Erwin Schrödinger’s famous fable about the cat that is neither alive nor dead. It describes a cat locked inside a windowless box, along with some radioactive material. If the radioactive material happens to decay, then a device releases a hammer, which smashes a vial of poison, which kills the cat. If no radioactivity is detected, the cat lives. Schrödinger dreamt up this gruesome scenario to mock what he considered a ludicrous feature of quantum theory. According to proponents of the theory, before anyone opened the box to check on the cat, the cat was neither alive nor dead; it existed in a strange, quintessentially quantum state of alive-and-dead.
Today, in our LOLcats-saturated world, Schrödinger’s strange little tale is often played for laughs, with a tone more zany than somber. It has also become the standard bearer for a host of quandaries in philosophy and physics. In Schrödinger’s own time, Niels Bohr and Werner Heisenberg proclaimed that hybrid states like the one the cat was supposed to be in were a fundamental feature of nature. Others, like Einstein, insisted that nature must choose: alive or dead, but not both.
Although Schrödinger’s cat flourishes as a meme to this day, discussions tend to overlook one key dimension of the fable: the environment in which Schrödinger conceived it in the first place. It’s no coincidence that, in the face of a looming World War, genocide, and the dismantling of German intellectual life, Schrödinger’s thoughts turned to poison, death, and destruction. Schrödinger’s cat, then, should remind us of more than the beguiling strangeness of quantum mechanics. It also reminds us that scientists are, like the rest of us, humans who feel—and fear.
Liza Batkin in Broadly:
In All the Single Ladies, her recent book about the growing population of single women in America, Rebecca Traister relates her experience of going off to college knowing that, "by most accounts, marriage was coming to swallow [her] up in just a few short years," but simultaneously feeling that nothing was less likely. A gap, resulting from a sizable sociological shift, had yawned between the expectations of her parents' generation and her own. The median age of first marriage—which hovered between 20 and 22 years old during the 20th century—today is approximately 27, and whereas 60 percent of Americans between the ages of 18 and 29 were married in 1960, the percentage now falls around 20. Today it is more common to be unmarried than married in your 20s, and Traister concludes from this that young women will "no longer have to wonder," as she did when she graduated high school, "what unmarried adult life for women might look like, surrounded as we are by examples of this kind of existence."
But figuring out "what unmarried adult life for women might look like" still seems to require a good deal of wondering. In Spinster, published last year, Kate Bolick recounts her realization at the age of 23—which stands out for her as the age at which Sylvia Plath married Ted Hughes—that "marriage was the last thing on [her] mind." With a husband far from her vision of her future, Bolick experienced a "failure of imagination." "How do you embark on your adulthood," she asks, "when you don't know where you're headed?" In Labor of Love, another recent book that examines modes of dating as they reflect and are produced by historical economic conditions, Moira Weigel describes being broken up with by a boyfriend and finding herself asking him what she should want.
"Why was I always asking some man?" she wonders. When she realizes that she "had learned to do it by dating," she sets out to understand why she "was struggling to follow desires that did not seem to be [her] own."
In the introduction to Future Sex, another hyped nonfiction book about modern relationships, out from FSG this week, Emily Witt narrates her own moment of reckoning with a failure of imagination. It arrives after she sleeps with a man who is seeing another woman; she is chastised for "pantomiming thrills" and fears that she may have contracted chlamydia. Researching methods for preventing STDs, Witt finds that the CDC recommends being in "a long-term mutually monogamous relationship with a partner who has been tested and is known to be uninfected."
Richard Marshall interviews Samuel Scheffler in 3:AM Magazine:
3:AM: A criticism of some moral philosophy – and perhaps of the position that you’ve just been discussing where the scope is about small-scale personal relationships and avoiding harm – is that it doesn’t accommodate big-scale issues like justice. These are deeply felt values, so how do you propose we accommodate them within your non-consequentialist ethical position?
SS: Your question seems to suggest that the issue of how to accommodate justice within one’s overall moral outlook is a problem for non-consequentialists alone. And in a way that’s right, but only because justice is not a concept that plays a fundamental role in consequentialist thought at all. We can, if we like, treat utilitarianism (for example) as a candidate theory of justice, as Rawls did in A Theory of Justice, but this is in one respect misleading. Utilitarianism offers us a theory of right action, but it is not a theory that even mentions, let alone uses, the concept of justice. At no point in their theory do utilitarians rely on an independent notion of justice or fairness. They are concerned solely with the maximization of value. Non-consequentialists are the only people who treat justice as a fundamental moral concept.
Since justice is a fundamental moral concept, the question should be: how do we (any of us) accommodate ideas of justice, and especially ideas about the justice of basic social, political, and economic institutions, within an overall outlook that is also sensitive to a variety of other moral values and principles, including values and principles that apply to small-scale personal relationships? That is a pressing and difficult question. One of the attractions of Rawls’s theory is that it suggests a kind of division-of-labor answer to the question. The idea is that there are sui generis principles of justice that apply to the basic institutional structure of society. If a society’s basic structure satisfies those principles, then individuals in the society may appropriately and without qualms be guided by the many different values and principles that apply to them, including principles governing the conduct of their personal relationships. Of course, individuals have duties to support and sustain just institutions, according to this view, but they have duties of other kinds as well.
Frank Furedi in Aeon:
It is Saturday, 1 November 2014. I am book-browsing at Barnes and Noble on Fifth Avenue in New York City when my attention is caught by a collection of beautifully produced volumes. I look closer and realise that these books are part of what’s called the Leatherbound Classic series. An assistant informs me that these fine specimens help to ‘embellish your book collection’. Since this exchange, I am reminded time and again that, as symbols of cultural refinement, books really matter. And, though we are meant to be living in a digital age, the symbolic significance of the book continues to enjoy cultural valuation. That is why, often when I do a television interview at home or in my university office, I am asked to stand in front of my bookshelf and pretend to be reading one of the texts.
Since the invention of the cuneiform system of writing in Mesopotamia around 3500 BCE and of hieroglyphics in Egypt around 3150 BCE, the serious reader of texts has enjoyed cultural acclamation. The clay tablets on which marks and signs were inscribed were regarded as precious and sometimes sacred artefacts. The ability to decipher and interpret the symbols and signs was seen as an extraordinary accomplishment. Egyptian hieroglyphics were thought to possess magical powers and, to this day, many readers regard books as a medium for gaining a spiritual experience. Since text possesses so much symbolic significance, how people read and what they read is widely perceived as an important feature of their identity. Reading has always been a marker of character, which is why people throughout history have invested considerable cultural and emotional resources in cultivating identities as lovers of books.
In ancient Mesopotamia, where only a small group of scribes could decipher the cuneiform tablets, the interpreter of signs enjoyed tremendous prestige. It is at this point in time that we have one of the earliest hints of the symbolic power and privilege enjoyed by the reader.
Christopher Ingraham in the Washington Post:
Gerrymandering -- drawing political boundaries to give your party a numeric advantage over an opposing party -- is a difficult process to explain. If you find the notion confusing, check out the chart above -- adapted from one posted to Reddit this weekend -- and wonder no more.
Suppose we have a very tiny state of fifty people. Thirty of them belong to the Blue Party, and 20 belong to the Red Party. And just our luck, they all live in a nice even grid with the Blues on one side of the state and the Reds on the other.
Now, let's say we need to divide this state into five districts. Each district will send one representative to the House to represent the people. Ideally, we want the representation to be proportional: if 60 percent of our residents are Blue and 40 percent are Red, those five seats should be divvied up the same way.
Fortunately, because our citizens live in a neatly ordered grid, it's easy to draw five lengthy districts -- two for the Reds, and three for the Blues. Voila! Perfectly proportional representation, just as the Founders intended. That's grid 1 above, "perfect representation."
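Since the chart itself isn't reproduced here, the grid-state example can be sketched as a toy model. The 5×10 layout, with Blues in the left six columns and Reds in the right four, is an assumption about how the article's grid is arranged:

```python
# A toy model of the article's 50-person state: a 5x10 grid holding
# 30 Blues (left six columns) and 20 Reds (right four columns).
# A district is a list of (row, col) cells; each district's single
# seat goes to whichever party has the majority of its cells.

ROWS, COLS = 5, 10
grid = [['B' if c < 6 else 'R' for c in range(COLS)] for r in range(ROWS)]

def seats(districts):
    """Count House seats won per party under a given district map."""
    tally = {'B': 0, 'R': 0}
    for district in districts:
        votes = [grid[r][c] for r, c in district]
        tally[max('BR', key=votes.count)] += 1
    return tally

# Proportional map: five vertical districts of two columns each.
# Three districts fall entirely in Blue territory, two in Red.
vertical = [[(r, c) for r in range(ROWS) for c in (2 * i, 2 * i + 1)]
            for i in range(5)]

# Gerrymandered map: five horizontal districts, one per row. Every
# district is 6 Blue to 4 Red, so Blue sweeps all five seats.
horizontal = [[(r, c) for c in range(COLS)] for r in range(ROWS)]

print(seats(vertical))    # {'B': 3, 'R': 2} -- matches the 60/40 split
print(seats(horizontal))  # {'B': 5, 'R': 0} -- Reds get no seat at all
```

The same voters, under two different maps, yield either proportional representation or a clean sweep; that gap is the whole trick of gerrymandering.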
Video length: 3:15
Ian Leslie in The Economist:
In 1930, a psychologist at Harvard University called B.F. Skinner made a box and placed a hungry rat inside it. The box had a lever on one side. As the rat moved about it would accidentally knock the lever and, when it did so, a food pellet would drop into the box. After a rat had been put in the box a few times, it learned to go straight to the lever and press it: the reward reinforced the behaviour. Skinner proposed that the same principle applied to any “operant”, rat or man. He called his device the “operant conditioning chamber”. It became known as the Skinner box.
Skinner was the most prominent exponent of a school of psychology called behaviourism, the premise of which was that human behaviour is best understood as a function of incentives and rewards. Let’s not get distracted by the nebulous and impossible-to-observe stuff of thoughts and feelings, said the behaviourists, but focus simply on how the operant’s environment shapes what it does. Understand the box and you understand the behaviour. Design the right box and you can control behaviour.
Skinner turned out to be the last of the pure behaviourists. From the late 1950s onwards, a new generation of scholars redirected the field of psychology back towards internal mental processes, like memory and emotion. But behaviourism never went away completely, and in recent years it has re-emerged in a new form, as an applied discipline deployed by businesses and governments to influence the choices you make every day: what you buy, who you talk to, what you do at work. Its practitioners are particularly interested in how the digital interface – the box in which we spend most of our time today – can shape human decisions. The name of this young discipline is “behaviour design”. Its founding father is B.J. Fogg.
...In a phone conversation prior to the workshop, Fogg told me that he read the classics in the course of a master’s degree in the humanities. He never found much in Plato, but strongly identified with Aristotle’s drive to organise and catalogue the world, to see systems and patterns behind the confusion of phenomena. He says that when he read Aristotle’s “Rhetoric”, a treatise on the art of persuasion, “It just struck me, oh my gosh, this stuff is going to be rolled out in tech one day!”
Jim Davies in Nature:
Some researchers argue that consciousness is an important part of human cognition (although they don’t agree on what its functions are), and some counter that it serves no function at all. But even if consciousness is vitally important for human intelligence, it is unclear whether it’s also important for any conceivable intelligence, such as one programmed into computers. We just don’t know enough about the role of consciousness — be it in humans, animals or software — to know whether it’s necessary for complex thought. It might be that consciousness, or our perception of it, would naturally come with superintelligence. That is, the way we would judge something as conscious or not would be based on our interactions with it. A superintelligent AI would be able to talk to us, create computer-generated faces that react with emotional expressions just like somebody you’re talking to on Skype, and so on. It could easily have all of the outward signs of consciousness. It might also be that development of a general AI would be impossible without consciousness. (It’s worth noting that a conscious superintelligent AI might actually be less dangerous than a non-conscious one, because, at least in humans, one process that puts the brakes on immoral behaviour is ‘affective empathy’: the emotional contagion that makes a person feel what they perceive another to be feeling. Maybe conscious AIs would care about us more than unconscious ones would.)
Either way, we must remember that AI could be smart enough to pose a real threat even without consciousness. Our world already has plenty of examples of dangerous processes that are completely unconscious. Viruses do not have any consciousness, nor do they have intelligence. And some would argue that they aren’t even alive. In his book Superintelligence (Oxford University Press, 2014), the Oxford researcher Nick Bostrom describes many examples of how an AI could be dangerous. One is an AI whose main ambition is to create more and more paper clips. With advanced intelligence and no other values, it might proceed to seek control of world resources in pursuit of this goal, and humanity be damned. Another scenario is an AI asked to calculate the infinite digits of pi that uses up all of Earth’s matter as computing resources. Perhaps an AI built with more laudable goals, such as decreasing suffering, would try to eliminate humanity for the good of the rest of life on Earth. These hypothetical runaway processes are dangerous not because they are conscious, but because they are built without subtle and complex ethics.
Thursday, October 20, 2016
Eli Saslow in the Washington Post:
Their public conference had been interrupted by a demonstration march and a bomb threat, so the white nationalists decided to meet secretly instead. They slipped past police officers and protesters into a hotel in downtown Memphis. The country had elected its first black president just a few days earlier, and now in November 2008, dozens of the world’s most prominent racists wanted to strategize for the years ahead.
“The fight to restore White America begins now,” their agenda read.
The room was filled in part by former heads of the Ku Klux Klan and prominent neo-Nazis, but one of the keynote speeches had been reserved for a Florida community college student who had just turned 19. Derek Black was already hosting his own radio show. He had launched a white nationalist website for children and won a local political election in Florida. “The leading light of our movement,” was how the conference organizer introduced him, and then Derek stepped to the lectern.
“The way ahead is through politics,” he said. “We can infiltrate. We can take the country back.”
Years before Donald Trump launched a presidential campaign based in part on the politics of race and division, a group of avowed white nationalists was working to make his rise possible by pushing its ideology from the radical fringes ever closer to the far conservative right. Many attendees in Memphis had transformed over their careers from Klansmen to white supremacists to self-described “racial realists,” and Derek Black represented another step in that evolution.
He never used racial slurs. He didn’t advocate violence or lawbreaking. He had won a Republican committee seat in Palm Beach County, Fla., where Trump also had a home, without ever mentioning white nationalism, talking instead about the ravages of political correctness, affirmative action and unchecked Hispanic immigration.
Bill Gates in his own blog:
A few years ago, I pulled off a purposeful prank. While I was giving a TED Talk on malaria to a room full of influential people, I opened a canister and let loose a small swarm of mosquitoes. “There’s no reason that only poor people should have the experience,” I said. I let the audience squirm in their seats for about half a minute before I let on that the mosquitoes were not infected with malaria. My gimmick worked. A distant problem suddenly got very close to home.
Today, gimmicks are no longer necessary for convincing Americans of the danger of mosquito-borne diseases. The spread of Zika virus in south Florida, Puerto Rico, and other parts of the U.S. has given millions of Americans a direct understanding of what it’s like to live with the fear of mosquitoes and the harm they can do, especially to pregnant women and children.
The world must focus serious attention and resources on ending the Zika epidemic. At the same time, we should keep in mind that the overwhelming toll of mosquito-related illness and death comes from malaria. Malaria is the key reason mosquitoes are the deadliest animal in the world.
Murray Shanahan in Aeon:
In 1984, the philosopher Aaron Sloman invited scholars to describe ‘the space of possible minds’. Sloman’s phrase alludes to the fact that human minds, in all their variety, are not the only sorts of minds. There are, for example, the minds of other animals, such as chimpanzees, crows and octopuses. But the space of possibilities must also include the minds of life-forms that have evolved elsewhere in the Universe, minds that could be very different from any product of terrestrial biology. The map of possibilities includes such theoretical creatures even if we are alone in the Cosmos, just as it also includes life-forms that could have evolved on Earth under different conditions.
We must also consider the possibility of artificial intelligence (AI). Let’s say that intelligence ‘measures an agent’s general ability to achieve goals in a wide range of environments’, following the definition adopted by the computer scientists Shane Legg and Marcus Hutter. By this definition, no artefact exists today that has anything approaching human-level intelligence. While there are computer programs that can out-perform humans in highly demanding yet specialised intellectual domains, such as playing the game of Go, no computer or robot today can match the generality of human intelligence.
But it is artefacts possessing general intelligence – whether rat-level, human-level or beyond – that we are most interested in, because they are candidates for membership of the space of possible minds. Indeed, because the potential for variation in such artefacts far outstrips the potential for variation in naturally evolved intelligence, the non-natural variants might occupy the majority of that space. Some of these artefacts are likely to be very strange, examples of what we might call ‘conscious exotica’.
In what follows I attempt to meet Sloman’s challenge by describing the structure of the space of possible minds, in two dimensions: the capacity for consciousness and the human-likeness of behaviour.
This talk was presented at Harvard-Epworth Church, Cambridge, MA on May 12, 2016. Video length: 1:26:30
Martin Heidegger never apologized for his support of the Nazis. He joined the party in 1933 and remained a member until the bitter end, in 1945. First, he spoke out enthusiastically in favor of a conservative revolution with Hitler at its helm. From about 1935, he found his own ambitions disappointed, and grew more silent. Yet, when he called his dalliance with National Socialism his greatest mistake after the war, he was upset not at his crime, but at the fact that he got caught.
Not that Heidegger has had to apologize, either. For the past seventy years, his many apologists and acolytes have gone to astounding lengths in trying to prove that his philosophical oeuvre exists independent of what was, they avowed, a mere weakness of character, an instance of momentary opportunism. In 2014, a group of French philosophers even tried to halt the publication of Heidegger’s Black Notebooks, his philosophical diaries. But if antisemitic references in his philosophy are oblique and, as some would have it, coincidental to his critique of modernity, the Notebooks leave little room for such charitable reading. Even after the war he would bemoan the Jewish “drive for revenge,” with their aim consisting in “obliterating the Germans in spirit and history.”
In his book Command and Control: Nuclear Weapons, the Damascus Accident and the Illusion of Safety, Eric Schlosser reveals that worst-case scenarios have come harrowingly close to coming true on a number of occasions—yet the American public has never been adequately informed.
So the question that continues to haunt me is, Why would a generation of presidents, supported by responsible men like William Perry, engage in a nuclear poker game that no sane gambler would in good conscience play? Why on earth wouldn’t both sides calculate the worst-case scenario and elect not to play the game?
On some nights during the Cold War, I lay awake turning over that question. The only plausible answer I was able to imagine is that they, the two governments, couldn’t help it. They had no choice, or thought they had no choice: the nuclear genie was out of the bottle and both sides seized on deterrence as an existential necessity. But was it?
‘The world has never seen anything like this picture,’ Thackeray said. Commenting on the writer’s reaction to the painting, John Barrell wrote (in the LRB of 18 December 2014) that Thackeray ‘won’t have to wait for the tide of modern art to flood in to appreciate what Turner has done. It’s 1844, and he’s got it. Turner is not out of his time; he and Turner are contemporaries.’ The tide of modern art wasn’t long in coming: in the first Impressionist salon in 1874, George Braquemond showed an etching of Manet’s Olympia alongside an intriguing version of Rain, Steam and Speed. He captured some of the elements of Turner’s title – the wind-driven rain slashes across the bridge – but his train appears as static as a Monet locomotive idling at the Gare St Lazare. He also left out the hare.
Kenneth Clark described Rain, Steam and Speed as the ‘most extraordinary’ of Turner’s paintings. ‘I suppose that everybody today would accept it as one of the cardinal pictures of the 19th century on account of its subject as well as its treatment.’ That subject is often seen as the ascendancy of man-made industrial society and the obliteration of the old natural order. Andrew Wilton, the author of Turner in His Time, considered the painting’s perspective as indicative of the triumph of the new: ‘The plunging diagonal line that cuts across the familiar location here is an emphatic demonstration of how the new technologies of the age imposed a precise geometric order on the pastoral scene.’
Richard Brody in The New Yorker:
“Michael Moore in TrumpLand” isn’t quite the film that I expected it to be, and that’s all to the good. Moore is, of course, a genius of political satire, deploying his persona—as a populist socialist skeptic with a superb sense of humor and a chess player’s skill at media positioning—to deeply humane ends that are mainly detached from practicality, policy, and practical politics. The very idea of the new film—a recording of Moore’s one-man show from the stage of a theatre in a small, predominantly Republican town in Ohio—runs the risk of self-parody, being a feature-length lampooning of Trump, laid out with meticulously researched facts set forth with the sublime derision of which Moore is a master. It would have been a highly saleable version of preaching to the converted.
...Moore’s final rhetorical stroke is to add that the lifetime of struggle that Hillary has faced (and he cites the struggles of Pope Francis as a comparison) has left her bitterly resentful of the status quo, profoundly progressive in temperament, deeply intent on making decisive changes when, finally, she realizes her lifelong goal of being in a position to make them. In effect, Moore presents a Hillary Clinton whose progressivism arises from no mere butterfly idealism but embodies the hard-won experience of the best American tradition. Then he can’t help but ice the cake: he dreams of her flurry of executive orders (a conservative’s nightmare); he envisions that she’ll replace old enemies (“Iran and North Korea”) with new ones (“Monsanto and Wells Fargo”); and he puts his own enthusiasm for Clinton on the line with a celebrity-fuelled vow—that if, in two years, she doesn’t deliver on the progressive vision that she promises, he himself will run for President in 2020. (He quickly piles the comedy onto this notion—his first promise is that all electronic devices will use the same charger cord.)
This masculinity "script" still embraced by older men was outlined as the four-part Blueprint of Manhood, first published by sociologist Robert Brannon when the men in the studies were entering adulthood in the 1970s. The blueprint included:
No Sissy Stuff - men are to avoid being feminine, show no weaknesses and hide intimate aspects of their lives.
The Big Wheel - men must gain and retain respect and power and are expected to seek success in all they do.
The Sturdy Oak - men are to be "the strong, silent type" by projecting an air of confidence and remaining calm no matter what.
Give 'em Hell - men are to be tough, adventurous, never give up and live life on the edge.
"We're all aging; it's a fact of life. But as men age, they're unable to be who they were, and that creates a dissonance that is hard to reconcile," said Langendoerfer, who studies aging in men. "We need to better understand how older men adapt to their stressors—high suicide rates, emotions they stifle, avoiding the doctor—to hopefully help them build better lives in older age," she said. The review, published in the journal Men and Masculinities, was co-written by Edward Thompson Jr., an emeritus professor of sociology and anthropology at the College of the Holy Cross and now an affiliate of the Department of Sociology at Case Western Reserve.
Until now, now that I’ve reached my thirties:
All my Muse’s poetry has been harmless:
American and diplomatic: a learned helplessness
Is what psychologists call it: my docile, desired state.
I’ve been largely well-behaved and gracious.
I’ve learned the doctors learned of learned helplessness
By shocking dogs. Eventually we things give up.
Am I grateful to be here? Someone eventually asks
If I love this country. In between the helplessness,
The agents, the nation must administer
A bit of hope: must meet basic dietary needs:
Ensure by tube, by nose, by throat, by other
Orifice. Must fistbump a janitor. Must muss up
Some kid’s hair and let him loose
Around the Oval Office. click click could be cameras
Or the teeth of handcuffs closing to fix
The arms overhead. There must be a doctor on hand
To ensure the shoulders do not dislocate
And there must be Prince’s “Raspberry Beret.”
click click could be Morse code tapped out
Against a coffin wall to the neighboring coffin.
Outside my window, the snow lights cobalt
For a bit at dusk and I’m surprised
Every second of it. I had never seen the country
Like this. Somehow I can’t say yes. This is a beautiful country.
I have not cast my eyes over it before, that is,
In this direction, is how John Brown put it
When he was put on the scaffold.
I feel like I must muzzle myself,
I told my psychologist.
“So you feel dangerous?” she said.
“So you feel like a threat?”
Why was I so surprised to hear it?
by Solmaz Sharif
Graywolf Press, 2016
Wednesday, October 19, 2016
Lorraine Berry in Literary Hub:
If your Facebook feed looks anything like mine, the comparisons between Donald Trump and Adolf Hitler appear like surreal dreams, with Trump’s face Photoshopped so he’s standing in front of a rally at Nuremberg. It doesn’t take too many comments before someone invokes Godwin’s Law and the conversation shuts down. Donald Trump is many things; Adolf Hitler, he is not.
On February 19th, the public intellectual, novelist, essayist, and semiotician Umberto Eco died in Milan. While the rest of the world has mourned the loss of rock star David Bowie, Eco’s death meant the loss of one of our intellectual rock stars, a man who was as comfortable discussing Barbie as he was explaining the aesthetics of Thomas Aquinas. It was Eco who insisted that a “fundamental” reading of a text—an approach espoused by Antonin Scalia, for example—was of little use when trying to understand books. “Books are not made to be believed, but to be subjected to inquiry. When we consider a book, we mustn’t ask ourselves what it says but what it means.” (How different Italy’s intellectual giant from the man who insisted the Constitution means exactly what it meant when it was first written—by rich, white slave-owners).
Computer scientists have come up with an algorithm that can fairly divide a cake among any number of people
Erica Klarreich in Quanta:
Two young computer scientists have figured out how to fairly divide cake among any number of people, setting to rest a problem mathematicians have struggled with for decades. Their work has startled many researchers who believed that such a fair-division protocol was probably impossible.
Cake-cutting is a metaphor for a wide range of real-world problems that involve dividing some continuous object, whether it’s cake or, say, a tract of land, among people who value its features differently — one person yearning for chocolate frosting, for example, while another has his eye on the buttercream flowers. People have known at least since biblical times that there’s a way to divide such an object between two people so that neither person envies the other: one person cuts the cake into two slices that she values equally, and the other person gets to choose her favorite slice. In the book of Genesis, Abraham (then known as Abram) and Lot used this “I cut, you choose” procedure to divide land, with Abraham deciding where to divide and Lot choosing between Jordan and Canaan.
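The two-player "I cut, you choose" procedure described above can be sketched in code. The sketch below is illustrative, not from the article: it models the cake as the interval [0, 1] and each player's preferences as an assumed piecewise-constant valuation density. The cutter bisects the cake by her own valuation, so she values either piece at exactly one half; the chooser takes the piece she prefers, so she gets at least one half by her own valuation. Neither envies the other.

```python
# Hedged sketch of the two-player "I cut, you choose" protocol.
# The cake is the interval [0, 1]; valuations are piecewise-constant
# densities (an assumption made for illustration).

def make_valuation(densities):
    """densities: list of (start, end, weight) segments covering [0, 1].
    Returns value(a, b), normalised so the whole cake is worth 1.0."""
    total = sum((e - s) * w for s, e, w in densities)
    def value(a, b):
        v = 0.0
        for s, e, w in densities:
            lo, hi = max(a, s), min(b, e)
            if hi > lo:
                v += (hi - lo) * w
        return v / total
    return value

def cut_and_choose(cutter_value, chooser_value):
    # Cutter bisects: binary-search for x with cutter_value(0, x) == 0.5.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if cutter_value(0.0, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    left, right = (0.0, x), (x, 1.0)
    # Chooser takes whichever piece she values more; cutter gets the rest.
    if chooser_value(*left) >= chooser_value(*right):
        return {"chooser": left, "cutter": right}
    return {"chooser": right, "cutter": left}

# One player prizes the left half ("chocolate frosting"),
# the other the right half ("buttercream flowers").
alice = make_valuation([(0.0, 0.5, 3.0), (0.5, 1.0, 1.0)])
bob = make_valuation([(0.0, 0.5, 1.0), (0.5, 1.0, 3.0)])
split = cut_and_choose(alice, bob)  # Alice cuts, Bob chooses

# Envy-freeness: each values her own piece at >= half the whole cake.
assert alice(*split["cutter"]) >= 0.5 - 1e-9
assert bob(*split["chooser"]) >= 0.5 - 1e-9
```

With these mismatched preferences both players end up valuing their own piece at well over a fair share by the other's lights, which is exactly why extending envy-freeness beyond two or three players proved so hard.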
Around 1960, mathematicians devised an algorithm that can produce a similarly “envy-free” cake division for three players. But until now, the best they had come up with for more than three players was a procedure created in 1995 by political scientist Steven Brams of New York University and mathematician Alan Taylor of Union College in Schenectady, New York, which is guaranteed to produce an envy-free division, but it is “unbounded,” meaning that it might need to run for a million steps, or a billion, or any large number, depending on the players’ cake preferences.
Prashant Keshavmurthy in The Wire:
For over a thousand years, since around the ninth century, the imagination of the Indian in Arabic and Persian literature coalesced in the figure of a non-Islamic religious specialist, the Brahman. Not that of the Kayastha Hindu, the men of whose caste, from the mid-16th century onward, increasingly staffed the bureaucracies of the Afghan and Mughal states of North India, nor that of the occasional Brahman who, by familial and personal circumstance, received a traditional madrasa education in Arabic and Persian. For both these types of men were so steeped in Persian-Islamic learning and comportment as to be Muslim, in an elite cultural sense.
Rather, the Brahman of the Persian literary imagination was continuous with the Brahman of the earliest texts of kalām, or rational theology, in Arabic, whether Muslim or Jewish. This Brahman was purely a debate opponent invoked by Muslim and Jewish theologians to defend the necessity of prophecy. These heresiographers presented him as a proponent of the sufficiency of human reason and thus of the redundancy of prophets. Sarah Stroumsa, a scholar of early Islamic-Jewish theology, has argued that early Muslim-Jewish theological debates were shaped by encounters with Brahmans and that these debates were conducted solely on the shared ground of logic, avoiding reference to theological doctrines specific to each side. The polemically simplified picture of the Brahman this left behind in the archive of early Muslim-Jewish heresiography was perhaps what allowed him to pass from theology into literature, where he congealed into a stock character.
Barack Obama spoke to Wired Editor in Chief Scott Dadich and MIT Media Lab Director Joi Ito last week. You can see all eight excellent videos here. (Is there anything BHO doesn't know a lot about? I was amazed by how he has time to keep up with things like issues surrounding recent developments in AI. I highly recommend watching all eight Wired videos.)
Here is the 6th video in the series about how AI will affect jobs. Video length: 9:12
And here are two bonus BHO videos:
Anthony Powell said that John Betjeman had ‘a whim of iron’. To judge by these compulsive letters, Patrick Leigh Fermor had a pleasure-loving streak of purest titanium. From the first letter, written in 1940, soon after he joined the Irish Guards, until the last in 2010, sent when he was ninety-four, he was on a lifelong search for erotic, alcoholic, intellectual and courageous diversion. One moment he’s in Crete, meeting the partisans who helped him kidnap the Nazi general Heinrich Kreipe, his most dashing escapade. The next he’s at Chatsworth, sitting next to Camilla Parker Bowles – ‘immensely nice, non-show-off, full of charm and very funny’.
In between, it’s back to the Mani peninsula and the enchanting seaside home he and his wife, Joan, built in the mid-1960s. It was only there, in Greece, and then, in his fifties, that Leigh Fermor had a real adult home and reined in the wanderlust – and the lust. Until then, he’d continued the manic travels that began with his walk as a teenager across Europe in the 1930s. In the letters we follow him as he flits from borrowed Italian castello to French abbey to Irish castle, taking the edge off his ‘high-level cadging’ by making jokes about it. In 1949, he wrote to Joan: ‘Darling, look out for some hospitable Duca or Marchesa with a vast castle, and try and get off with him, so that he could have us both to stay.’
The most passionately discussed New York City gallery exhibition of last season might have been Philip Guston at Hauser & Wirth, but the most talked-about one by a living artist was undoubtedly “David Hammons: Five Decades” at Mnuchin Gallery. Each of the two shows cast its own spell, one very different from the other, but both seemed to offer one emphatic if understated lesson to young artists: Keep your distance from the art world. Guston sought solitude by “painting a lot of other people out of the canvas,” as Harold Rosenberg put it in a conversation with him. Guston concurred: “People represent ideas…. But you have to paint them out. You know, ‘Get out.’” He told Morton Feldman that “by art I don’t mean the art world, I don’t mean lovers of art.” Lovers of art—people like me—might love it to death; what we love in art may not be what the artist needs from it. Guston once compared the art world to a country occupied by a foreign power.
Hammons is even more vehement. For him, not just the art world but art itself is suspect. “I can’t stand art actually. I’ve never, ever liked art,” he told the art historian and curator Kellie Jones in a 1986 interview that remains the most complete exposition we have of this notoriously unforthcoming artist’s philosophy.