Thursday, October 30, 2014
Michael Collins in In These Times:
Set in the present day, the film follows the lives of five black people at the fictitious Ivy League college Winchester as they navigate race, love and ever-shifting personal identities. Broken into a series of blithely titled chapters, the film is billed as “a satire about being a black face in a white place.”
The film, however, is less a satire in the sense of using “wit to expose stupidity” than a mockumentary whose humor comes from its earnestness, in the vein of films like Best in Show. Perhaps this is because, as the title suggests, the work is narrowly pointed at white America. Or, more specifically, the type of liberal white America that prefaces racist statements with “I’m not racist, but…” and, when challenged, responds, “But my best friend is a black!” For those who already know that all black people aren’t the same (we have different names for a reason!), and that race, class and sexuality are complex parts of a greater whole, the film will have little critical edge. But for those who haven’t taken Race in America 101, the film may yet be productive.
Through a series of occasionally disjointed chapters, we are presented with a host of college archetypes: the charismatic jock played by the astonishingly beautiful Brandon Bell; the black militant played by Tessa Thompson; the pushover nerd played convincingly by Tyler Williams of Everybody Hates Chris fame; the society queen with a terrible secret (and an amazing wardrobe of pearl necklaces and backless dresses) played by Teyonah Parris; and the incorrigible dean played by Dennis Haysbert. Throughout, the film adds various layers to these one-dimensional caricatures by highlighting their “performance of blackness.”
For those who slept through critical race theory, it’s now taken for granted that there is no essential black experience. Rather, blackness is a social, political and economic construct that individuals engage with as society, the economy or our personal desires dictate. The film revels in the multiplicity of identity, internal contradictions and the general sense of confusion and misidentification that characterize public discussions of race.
Wednesday, October 29, 2014
This was a very low period for Waugh. There was an urgent necessity for him to find a way of making a living, and eventually, with deep foreboding, he took a post as a teacher at Arnold House Preparatory School on the north coast of Wales. This grotesque establishment was the model for the hilariously awful Llanabba Castle in Decline and Fall. He did not stay there for long, and found another teaching job at a more nearly normal school in Buckinghamshire, from which eventually he was sacked, apparently for drunkenness. Waugh was not cut out to be a teacher.
He did not really know what he was cut out to be. He had started to write, and some short stories had been published, but he had not yet given up hope of being a painter. He also spent a brief, happy few months taking carpentry lessons with a view to embarking on a career as a cabinetmaker. He did some journalistic work, and began his first book, a life of the Pre-Raphaelite painter Dante Gabriel Rossetti, but the most important event of these years was his meeting Evelyn Gardner on April 7, 1927. (They would come to be known to their friends as He-Evelyn and She-Evelyn.)
It is not guaranteed, they say, that a successful vaccine against Ebola can be “developed, produced, and distributed” in time, and in large enough amounts, to throw a fence of containment around the disease.
If not, they warn, it is possible that the rest of the world’s reaction could trigger the next global financial crisis.
As someone whose professional specialty is covering epidemics (HIV, the anthrax attacks, SARS, H5N1, H1N1, lots of smaller outbreaks), I reluctantly have to conclude: Lanard and Sandman are not being alarmist here. Imagine that Ebola cannot be contained; think back to the events of this weekend; and then imagine that reaction multiplied thousands of times. It isn’t a big leap to the suspicion, disruption and expense that will then be triggered in response to any travelers from the region. From there, it isn’t much of a further leap to closed borders, curbs on international movement, disruption in global trade, cuts in productivity, even civil unrest and the opportunities that unrest offers to extremist movements. None of that is far-fetched, if Ebola is not controlled.
The protest failed because it relied on falsehoods: the opera is not anti-Semitic, nor does it glorify terrorism. Granted, Adams and his librettist, Alice Goodman, do not advertise their intentions in neon. The story of the Achille Lauro hijacking is told in oblique, circuitous monologues, delivered by a variety of self-involved narrators, with interpolated choruses in rich, dense poetic language. The terrorists are allowed ecstatic flights, private musings, self-justifications. But none of this should surprise a public accustomed to dark, ambiguous TV shows like “Homeland.” The most specious arguments against “Klinghoffer” elide the terrorists’ bigotry with the attitudes of the creators. By the same logic, one could call Steven Spielberg an anti-Semite because the commandant in “Schindler’s List” compares Jewish women to a virus.
In the opera, the opposed groups follow divergent trajectories. The terrorists tend to lapse from poetry into brutality, whereas Leon Klinghoffer and his wife, Marilyn, remain robustly earthbound, caught up in the pleasures and pains of daily life, hopeful even as death hovers. Those trajectories are already implicit in the paired opening numbers, the Chorus of Exiled Palestinians and the Chorus of Exiled Jews. The former splinters into polyrhythmic violence, ending on the words “break his teeth”; the latter keeps shifting from plaintive minor to sumptuous major, ending on the words “stories of our love.”
Howard Jacobson in New Statesman:
If I were to give this essay a title, it would be “Waiting for Calvin”. Not John Calvin the theologian, nor Calvin Klein the fashion designer, but Calvin, a Navajo baby whose first laugh I travelled to Arizona in 1995 to film as part of a series of television programmes I was making about comedy. It’s a nerve-racking business waiting for a baby to laugh, particularly if you have a camera crew standing by in another state, but Calvin’s laugh was as important to my film as it was to his family and community. The Navajo celebrate a baby’s first laugh as a rite of passage, a moment in which the baby laughs himself, as it were, out of inchoate babydom and into conscious humanity. It’s a wonderful concept and grants a primacy to laughter that we, who probably laugh too automatically and certainly far too much, would do well to think about. If it’s laughter that makes us human, or at least kick-starts the process of our becoming human, what does that say about what being human is?
It is sometimes argued that laughter is what distinguishes us from animals, but not everyone would agree that we have laughter to ourselves. Thomas Mann, for example, wrote an essay about his dog Bashan in which he made a claim for Bashan’s demonstrating many of the signs of mirth. And that’s before we get on to the tricky question of internal laughter – that appreciation of ironical mishap or absurd situation that even in human beings doesn’t always issue in a smile, never mind a laugh. Laughter, we can say, is an act of comprehension – whether immediate or arising out of rumination – but which of us can know for sure how much animals comprehend of what they see and how long they go on thinking about it?
From recognizing speech to identifying unusual stars, new discoveries often begin with comparison of data streams to find connections and spot outliers. But simply feeding raw data into a data-analysis algorithm is unlikely to produce meaningful results, say the authors of a new Cornell study. That’s because most data comparison algorithms today have one major weakness: somewhere, they rely on a human expert to specify what aspects of the data are relevant for comparison, and what aspects aren’t. But these experts can’t keep up with the growing amounts and complexities of big data. So the Cornell computing researchers have come up with a new principle they call “data smashing” for estimating the similarities between streams of arbitrary data without human intervention, and even without access to the data sources.
How ‘data smashing’ works
Data smashing is based on a new way to compare data streams. The process involves two steps.
- The data streams are algorithmically “smashed” to “annihilate” the information in each other.
- The process measures what information remains after the collision. The more information remains, the less likely the streams originated in the same source.
Data-smashing principles could open the door to understanding increasingly complex observations, especially when experts don’t know what to look for, according to the researchers.
A Ball Rolls on a Point
The whole ball
of who we are
the green baize
of a single tiny
spot. An aural
track of crackle
betrays our passage
it's hot and
spring out of it.
The pressure is
intense and the
sense that we've
As though bringing
too much to bear
too locally were
Victoria Law in Jacobin (image “Prison Blueprints.” Remeike Forbes/Jacobin):
Casting policing and prisons as the solution to domestic violence both justifies increases to police and prison budgets and diverts attention from the cuts to programs that enable survivors to escape, such as shelters, public housing, and welfare. And finally, positioning police and prisons as the principal antidote discourages seeking other responses, including community interventions and long-term organizing.
How did we get to this point? In previous decades, police frequently responded to domestic violence calls by telling the abuser to cool off, then leaving. In the 1970s and 1980s, feminist activists filed lawsuits against police departments for their lack of response. In New York, Oakland, and Connecticut, lawsuits resulted in substantial changes to how the police handled domestic violence calls, including curbing officers' discretion not to make an arrest.
Included in the Violent Crime Control and Law Enforcement Act, the largest crime bill in US history, VAWA was an extension of these previous efforts. The $30 billion legislation provided funding for one hundred thousand new police officers and $9.7 billion for prisons. When second-wave feminists proclaimed “the personal is the political,” they redefined private spheres like the household as legitimate objects of political debate. But VAWA signaled that this potentially radical proposition had taken on a carceral hue.
At the same time, politicians and many others who pushed for VAWA ignored the economic limitations that prevented scores of women from leaving violent relationships.
Keith Doubt on Eric Gordy's Guilt, Responsibility, and Denial: The Past at Stake in Post-Milošević Serbia, in Berfrois (Belgrade, Serbia. Photograph by Jamie Silva):
The intellectual integrity of cultural anthropology is based largely on its commitment to cultural relativism as a principled notion. Cultural relativism is the principle from which the discipline achieves its sense of empirical objectivity. Cultural differences are cherished as just that, cultural differences. No difference is stipulated as superior or inferior, better or worse. The commitment guards against ethnocentric judgments, colonizing prejudices, and, worst of all, grand theorizing with metaphysical pretense. This ethos in the discipline of cultural anthropology guides the recent book by Eric Gordy titled, Guilt, Responsibility, and Denial: The Past at Stake in Post-Milošević Serbia.
While cultural initiatives rarely investigate and never sentence, they offer some of the keys to understanding that have been missing from political legal projects: the ability to hear and identify with the lived experiences of individuals, a route to engagement that participants in the public can understand, and openness to interpretation that constitutes an invitation to dialogue. (p. 179)
There is a contrasting notion in the social sciences to the principle of cultural relativism, namely, the assumption that social science has a valid knowledge-base and ethical responsibility from which to demonstrate how some societies are healthier than others and how some social structures are better for community life. Social science depicts certain normative orientations and collective sentiments as more functional for the vitality of human life and sociability. For example, human rights scholars assume that a genuine respect for the principle of human rights is good: good for people in society, good for their communities, and good for their governments. Gordy understands this perspective but recognizes its unintended consequences, given his political knowledge of what Max Weber, in his famous lecture “Politics as a Vocation,” calls the ethical irrationality of the world. In politics, it is necessary to employ force in realizing one’s values; yet when force is employed, no matter how good the intentions behind it, bad results follow or evil consequences occur. This ethical irrationality of the world is, for Weber, the reason for the sense of disenchantment that characterizes the spirit of the modern world. In politics, actions whose motives are seemingly good can lead to bad results. The reverse is also true: actions whose motives are seemingly bad can lead to good results. Weber calls this the paradox of consequences, an ever-repeating empirical and historical pattern, and Gordy understands this matter well. There is a hubris that informs the forceful use of law and legal process at both the national and the international level, and Gordy wants to debunk the hubris that guides international interventions in societies experiencing conflict and social violence.
To introduce the structure of his book, Gordy writes, “the ordering of the chapters is meant to lead readers through the logic that brought the study from apparently clear and relatively simple moral questions to greater complexity and uncertainty, and to an insistence on the importance of the cultural and social context” (p. xv). After relatively simple moral questions implode upon themselves when confronted with empirical scrutiny and historical accounts, the significance of cultural variables within their own milieu and within their own historical context assumes its rightful place.
Over at The Physics arXiv Blog:
Taleb and co begin by making a clear distinction between risks with consequences that are local and those with consequences that have the potential to cause global ruin. When global harm is possible, an action must be avoided unless there is scientific near-certainty that it is safe. This approach is known as the precautionary principle.
The question, of course, is when the precautionary principle should be applied. Taleb and co begin by saying that their aim is to place the precautionary principle within a formal statistical structure that is grounded in probability theory and the properties of complex systems. “Our aim is to allow decision-makers to discern which circumstances require the use of the precautionary principle and in which cases evoking the precautionary principle is inappropriate.”
Their argument begins by dividing potential harm into two types. The first is localised and non-spreading. The second is propagating harm that results in irreversible and widespread damage. Taleb and co say that traditional decision-making strategies focus on the first type of risk where the harm is localised and the risk is easy to calculate from past data.
In this case, it is always possible to make a mistake when decision-making about risk. The crucial point is that when the harm is localised, the potential danger from a miscalculation is bounded.
By contrast, harm that is able to propagate on a global scale is entirely different. “The possibility of irreversible and widespread damage raises different questions about the nature of decision-making and what risks can be reasonably taken,” say Taleb and co. In this case, the potential danger from a miscalculation can be essentially infinite. It is in this category of total ruin problems that the precautionary principle comes into play, they say.
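The distinction can be made concrete with a toy simulation, a hypothetical illustration rather than anything from Taleb and co's paper, with all parameters invented. Bounded, localised losses average out over many trials; a rare propagating shock is an absorbing catastrophe that no average redeems:

```python
import random

def ruin_fraction(propagating: bool, trials: int = 5000, steps: int = 100) -> float:
    """Fraction of simulated trajectories that end in ruin (wealth < 1)."""
    rng = random.Random(42)  # seeded for reproducibility
    ruined = 0
    for _ in range(trials):
        wealth = 100.0
        for _ in range(steps):
            if propagating:
                # Rare multiplicative shock: one bad draw is catastrophic.
                if rng.random() < 0.01:
                    wealth *= 0.005
            else:
                # Bounded additive shock: losses stay local and average out.
                wealth += rng.uniform(-1.0, 1.05)
            if wealth < 1.0:
                ruined += 1
                break
    return ruined / trials

print(ruin_fraction(False))  # essentially zero: miscalculation is bounded
print(ruin_fraction(True))   # a majority ruined: one shock suffices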
Tuesday, October 28, 2014
Jonathan Rée in Prospect:
Every October for the past 13 years, the Oxford Lieder Festival has been bringing classy performances of classical art songs to what used to be a rather unmusical town. I love it: great art taken seriously, mostly in the glorious intimacy of the Holywell Music Room, and without any pomp, artifice or unnecessary formality. But I must say I was rather dismayed when I heard what was planned for this year’s festival, which started a week ago: a complete survey of all the songs that Schubert ever wrote.
Schubert was, of course, the inventor of the classical Liederabend or song recital: the extraordinary musical institution that features nothing but a singer and a pianist, achieving, when all goes well, a thrillingly direct communication with their audience. And apart from inventing the institution, Schubert wrote the classics against which all subsequent efforts are measured—notably Winterreiseand Schöne Müllerin, whose depth, variety, drama, animation and melancholy place them amongst the bare necessities of any possible desert island. But given that Schubert died at the age of 31 (in 1828) and that, apart from inventing the Liederabend, he composed in practically every other genre of classical music, you might think that he could not have written terribly many songs.
Actually he wrote more than 600, so the idea of a three-week festival featuring every single one is perhaps even crazier than you might have thought. A suitable event for anoraks, pub-quizzers and musical train-spotters, perhaps, but why should anyone who cares for the art of singing want to scrape the barrel for hundreds of minor works, rather than remaining with the tried and tested pre-loved masterpieces?
More here. [Thanks to Brooks Riley.]
Morgan Meis in The Smart Set:
In the year 1905, Henri Matisse painted a portrait of his wife wearing a rather extraordinary hat. The painting was displayed at the Salon d’Automne in Paris that same year. Much shock and controversy followed. To many, the hat looked like a giant lump of randomly chosen colors sitting atop the poor woman’s head. What, also, was the point of all the green on the woman’s face? People and hats don’t look like that. The world doesn’t look like that.
By 1905, this game of looking at contemporary painting and expressing shock and dismay had been going on for some time. A generation had already passed since Impressionism first scandalized right-thinking art aficionados. In the years just after Impressionism, artists like Gauguin and Van Gogh fully dispatched the idea that color in painting had to correspond to color as we see it in the real world. In 1905, the public should have been ready for Matisse. But something about that portrait by Matisse was extra upsetting, even to a public that was now used to being scandalized by art. The color wasn’t just unexpected; it was jarring, verging on ugly. Critics dubbed Matisse and other painters in the show Fauvists. The word means, literally, wild beasts.
You’d expect the wild beast who painted "Woman with a Hat" (1905) to go even further into brutality and ugliness. Shocking the bourgeoisie is hard work. You’re constantly forced to up the ante.
That’s not what happened with Matisse. He continued to experiment radically with color. But his pictures became gentler as time went on. Matisse brought into his paintings a sense of balance, poise, beauty. By the end of his life, Matisse was making art that was downright pretty. There is no more damning adjective to an avant-garde artist than “pretty.”
John Donoghue and Sheila Nirenberg, computer scientist Michel Maharbiz, and psychologist Gary Marcus discuss the cutting edge of brain-machine interactions
The matter of the legacy of Dietrich Bonhoeffer is at once straightforward and immensely complicated. About the man there is no question. Whatever Bonhoeffer’s flaws—and Charles Marsh’s masterly and comprehensive new biography Strange Glory reveals that there were more than is commonly supposed—the witness of his breathtakingly courageous opposition to Adolf Hitler’s Third Reich leaves criticism disarmed.1 In the one great challenge of his life, he was magnificent. He behaved the way that the rest of us, in our most hopeful moments, like to imagine we would.
But Bonhoeffer is known to history not simply as a victim of Nazi horror but as a theologian of note. His appeal is startlingly ecumenical: He finds adherents across the Christian spectrum from conservative evangelicals to Lutherans (of various stripes) to liberal Protestants to celebrants of the death of God. Bonhoeffer himself was sympathetic to Catholicism—Karl Barth worried about his “nostalgia for Rome”—and he even came to insist, in Marsh’s words, on “equivalence before God of the church and the synagogue, between the body of Christ and the chosen people of Israel.”
But from such extravagant pluralism, can there be any coherence?
The first photo: a fireman, a woman, and a child wait on the top-floor landing of a fire escape. Smoke purls from the windows behind them. As the gallant-eyed fireman reaches for the approaching rescue ladder, the woman and girl hug one another, their faces wounded by fear. In the second photograph, the fire escape has buckled and detached from the building. The fireman dangles from the ladder while the woman clutches onto his legs, in the postural arrangement of trapeze swingers. The little girl is not anywhere in view. We see in the third photo that the fireman is safely on the ladder, but the woman and little girl are floating, halfway into their fall, arms and legs gravity-splayed. The woman’s expression is eerily serene, as if already resigned to her fate. The final photograph (above) shows the woman and girl suspended in mid-air, like Degas ballerinas; the girl faces the camera with her arms outstretched, her pajama bottoms inflated with the wind of her fall. The woman plummets headfirst, a hideous, limb-tangled descent into oblivion. The woman, a nineteen-year-old named Diana Bryant, died on impact, but her two-year-old goddaughter, Tiare Jones, landed on Bryant’s body and lived.
Originally published in 1975 in the Boston Herald and taken by Stanley Forman, who thought he was merely documenting some gawk-worthy scenes from a heroic rescue, the photographs are so expertly composed and nakedly harrowing that they resemble film stills from a Hollywood blockbuster. And despite its disquieting content, Forman’s work, known simply as “Fire Escape Collapse,” was reprinted in over four hundred U.S. newspapers.
When William Makepeace Thackeray died, near the end of 1863, he left behind a formidable library in a mansion he’d only recently designed, erected, and occupied. A few months later, his home was dismantled and his books were put to auction. On the flyleaves and margins, their new owners discovered a wealth of Thackeray’s sketches, some in pencil and others in pen and ink.
Thackeray’s talents as an artist were no secret—he’d contributed illustrations to many of his own novels, including Vanity Fair—but few were aware of the extent of his doodling habit. More than ten years later, in 1875, the art collector Joseph Grego published Thackerayana, an assemblage of more than six hundred of Thackeray’s drawings with extracts of the books in which he’d drawn them. (Grego, perhaps fearing the consequences of his blatant copyright infringement, presented the collection anonymously.)
What surprises most about the sketches in Thackerayana is their range—Thackeray was an adept caricaturist, but these drawings find him equally at home in more high-flown styles.
Jeremyy Seal in The Telegraph:
It’s no surprise that imperial splendour should so often have been the keynote in published histories of Istanbul, for more than 1,500 years the glittering capital of the Byzantines and latterly that of the Ottomans. But this book tells of the unfamiliar interwar period, perhaps the most humbling era for the world’s one-time greatest metropolis, which necessarily makes it shorter on gilt and glory than it is on pawnshops, penury and the stench of stale wine. It’s a counter-intuitive approach, but one that succeeds brilliantly in portraying the eclipsed city and its often exotic cast of the destitute, the dispossessed and the defiant – prostituted Russian princesses, scheming spies, out-of-work artists, impresarios and arms dealers as well as familiar figures like Leon Trotsky, Kemal Ataturk and Turkey’s national poet Nazim Hikmet – at a time of unparalleled social upheaval. With the Turks’ defeat in the First World War, Allied forces occupied Istanbul, intent on divvying up the lands of the Ottoman Empire; and though Ataturk’s nationalists were to drive the Greeks out of Anatolia and recover the city on the Bosphorus from the Allies in 1923, they wasted no time in instead making Ankara the capital of the newly proclaimed Turkish Republic.
But with the backwaters beckoning, belittled Istanbul had already embarked on perhaps the most compelling and affecting period in all its long history, one that was especially shaped by the flight to Istanbul of some 185,000 White Russian soldiers, aristocrats and assorted camp followers. In this vivid narrative’s many tangled threads – war and occupation, displacement, espionage, radical social reform, the nationalists’ persecution of the city’s minorities, the women’s movement, the remarkable blossoming of the city’s jazz age – it’s the human detail that always impresses.
C. Nathan DeWall in The New York Times:
We do grow up. We get jobs. We have children of our own. Along the way, we lose our tendencies toward magical thinking. Or at least we think we do. Several streams of research in psychology, neuroscience and philosophy are converging on an uncomfortable truth: We’re more susceptible to magical thinking than we’d like to admit. Consider the quandary facing college students in a clever demonstration of magical thinking. An experimenter hands you several darts and instructs you to throw them at different pictures. Some depict likable objects (for example, a baby), others are neutral (for example, a face-shaped circle). Would your performance differ if you lobbed darts at a baby? It would. Performance plummeted when people threw the darts at the baby. Laura A. King, the psychologist at the University of Missouri who led this investigation, notes that research participants have a “baseless concern that a picture of an object shares an essential relationship with the object itself.”
Paul Rozin, a psychology professor at the University of Pennsylvania, argues that these studies demonstrate the magical law of similarity. Our minds subconsciously associate an image with an object. When something happens to the image, we experience a gut-level intuition that the object has changed as well. Put yourself in the place of those poor college students. What would it feel like to take aim at the baby, seeking to impale it through its bright blue eye? We can skewer a picture of a baby face. We can stab a voodoo doll. Even as our conscious minds know we caused no harm, our primitive reaction thinks we tempted fate.
Hattie MacDaniels Arrives at the Coconut Grove
late, in aqua and ermine, gardenias
scaling her left sleeve in a spasm of scent,
her gloves white, her smile chastened, purse giddy
with stars and rhinestones clipped to her brilliantined hair
on her free arm that fine Negro,
Mr. Wonderful Smith.
It’s the day that isn’t, February 29th,
at the end of the shortest month of the year-
and the sh*****st, too, everywhere
except Hollywood, California,
where the maid can wear mink and still be a maid,
bobbing her bandaged head and cursing
the white folks under her breath as she smiles
and shoos their silly daughters
in from the night dew … what can she be
thinking of, striding into the ballroom
where no black face has ever showed itself
except above a serving tray?
Hi-Hat-Hattie, Mama Mac, Her Haughtiness,
The “little lady” from Showboat whose name
Bing forgot, Beulah & Bertha & Malena
& Carrie & Violet & Cynthia & Fidelia,
one half of the Dark Barrymores —
dear Mammy we can’t help but hug you crawl into
your generous lap tease you
with arch innuendo so we can feel that
much more wicked and youthful
and sleek but oh what
we forgot: the four husbands, the phantom
pregnancy, your famous parties, your celebrated
ice box cake. Your giggle above the red petticoat’s rustle
black girl and white girl walking hand in hand
down the railroad tracks
in Kansas City, six years old.
The man advised you, now
that you were famous, to “begin eliminating”
your more “common” acquaintances
and your reply (catching him square
in the eye): “That’s a good idea.
I’ll start right now by eliminating you.”
Is she or isn’t she? Three million dishes,
a truckload of aprons and headrags later, and here
you are: poised, between husbands
and factions, no corset wide enough
to hold you in, your huge face a dark moon split
by that spontaneous smile — your trademark,
your curse. No matter, Hattie: It’s a long beautiful walk
into that flower-smothered standing ovation,
so go on
and make them wait.
by Rita Dove
from The Poetry Archive
Hear Rita Dove read this poem here.
Public Radio International interviews the novelist M. NourbeSe Philip, and Dohra Ahmad, author of Rotten English, on literatures in the vernacular. Also at the website, you listen to readings of David Copperfield in Jamaican patois, Spanglish, Hawaiian pidgin, Trinadadian and Tobgan vernacular, and standard English. (image Credit: id-iom):
There are probably as many terms for different kinds of English vernacular as there are vernaculars themselves: pidgin, patois, slang, creole dialect and so on.
But while we usually think of the vernaculars as oral versions of the English language, they're making their way into the written word as well.
“There's a really interesting paradox going on, where you're taking something that's constantly changing — and that people don't expect to see written down — and you're making it codified and setting it down for a wider audience," says Dohra Ahmad, editor of an anthology of vernacular literature called "Rotten English."
M. NourbeSe Philip, one of the authors included in the anthology, speaks and writes Trinidadian Creole but points out that the process of getting the language on the page is much the same as writing in Standard English.
“You can’t write it exactly as the person speaks it," she says. "You have to put it through a certain process that conveys the impression that it is being said in the dialect."
She writes both in dialect and in standard English, with her characters switching back and forth between the Englishes.
“As people from the Caribbean, we inhabit a spectrum of language, and you actually hear it when you go into the cultures," Philp says. "You can hear somebody code-switching. You might start off saying something in Standard English and midway switch into the dialect or the vernacular."
Monday, October 27, 2014
by Emrys Westacott
According to a number of studies done over several years, cheating is rife in US high schools and colleges. More than 60% of students report having cheated at least once, and it is quite likely that findings based on self-reporting understate rather than overstate the incidence of cheating. Understandably, most educators view this as a serious problem. At the college where I work, the issue has been discussed at length in faculty meetings, and policies have been carefully crafted to try to discourage academic dishonesty. But in my experience these discussions are overly self-righteous and insufficiently self-critical. We hear the phrase "academic dishonesty" and we immediately whistle for our moral high horse. But too much moralistic tongue-clicking can blind us to the ways in which we who constitute the system contribute to the very malady we lament. For if academic dishonesty is like a disease—and we repeatedly hear it described as an "epidemic"—we may all be carriers, even cultivators, of the virus that causes it. Let me explain.
Socrates sought to understand the essence of a thing by asking what all instances of it have in common. This approach is open to well-known objections, but it can have its uses. In the present case, for example, I think it leads to the following important observation: all instances of academic dishonesty are attempts to appear cleverer, more knowledgeable, more skillful, or more industrious than one really is. Buying or copying a term paper, plagiarizing from the Internet, using a crib sheet on an exam, accessing external assistance from beyond the exam room by means of a cell phone, fabricating a lab report, having another student sign one's name on an attendance sheet—all such practices serve this same purpose. The goal is to produce an appearance that is more impressive than the reality.
So far, so obvious, you might say. But what is not so obvious—and this is a key point in the argument I am making—is that this same prioritizing of appearance over reality permeates much of our education system. It is endorsed by parents, teachers, and administrators, and it is encouraged by many of our well-intentioned pedagogical practices. Students absorb this ordering of values over many years, especially in high school; so by the time they reach college they have been marinating in the toxin for a long time. Here are some examples of what I mean.
by Charlie Huenemann
"It is therefore worth noting," Schopenhauer writes, "and indeed wonderful to see, how man, besides his life in the concrete, always lives a second life in the abstract." I suppose you might say that some of us (especially college professors) tend to live more in the abstract than not. But in fact we all have dual citizenship in the concrete and abstract worlds. One world is at our fingertips, at the tips of our tongues, and folded into our fields of vision. The concrete world is just the world; and the more we try to describe it, the more we fail, as the here and now is immeasurably more vivid than the words "here" and "now" could ever suggest - even in italics.
The second world is the one we encounter just as soon as we begin thinking and talking about the here and now. It is such stuff as dreams are made on; its substance is concept, theory, relation. We make models of the concrete world, and think about those models and imagine what the consequences would be if we tried this or that. Sometimes our models are wrong and we make mistakes. Other times our models work pretty well and we manage to figure out some portion of the concrete world and manipulate it to our advantage. But in any case, we all shuttle between the two worlds as we live and think.
Right now, of course, you and I are deep into an abstract world, forming a model of how we move back and forth between our two worlds. We are modeling our own modeling. But I'll drop that line of thought now, since it leaves me dizzy and confused. My fundamental point is that the abstract world isn't reserved only for college professors. We all engage with it all the time, except perhaps when we sleep or are lost blessedly in the vivacity of sensual experience, and it is in some ways just as close to us as whatever is here and now. To be a human, as Schopenhauer suggests, is to live in two worlds.
Brooks Riley. Viennese Office Chair, circa 1910.
Thanks to Brooks.
by Brooks Riley
I'm standing at the window looking north over a small garden with several different kinds of trees and bushes. If I refine my intake of visual information, I am, in fact, gazing at many different shades of green at once, perhaps even all of them (at least 57, like Heinz). There's the middle green of leaves on a thorny bush in the sunlight, and on the same bush, a darker green tweaked by shade. Add to these variations of light the variety of flora in my view, and I come away with a whole alphabet of green—the common green of a lawn, the brown green of dying leaves, the gray-green highlights of a fir tree, the black green of certain waxy leaves, the lime green of new leaves on a late bloomer, the Schweinfurt green of certain succulents. Green in nature is a chlorophyll-induced industry all its own—a Pantone paradise. . .
. . . for those who love green.
I do not love green. Separated from nature, green is a travesty. I was born with green eyes, and I do love them, but I wouldn't want their hue on my sofa or my walls or my bedspread or my person. Removed from nature, decorative green is a shabby attempt to remember nature or worse, to try to recreate its effect on us. As a child I was attracted to green olives, acquiring a taste for them that had as much to do with their color as with their shape. But olive green is not that far from baby-couldn't-help-it green, or drab Polizei green (slowly being phased out in favor of blue), and removed from its smooth round humble origins in an olive, loathsome. So too the so-called institutional green, once thought to soothe the troubled souls of those coerced to spend time in schools, hospitals, or insane asylums.
I'm not here to condemn another's love of the color green. And from a Pantone point of view, I confess to appreciating certain shades of green (artichoke green, celadon green), as long as I don't have to apply them to anything.