The world gets a tutorial on how to create wall-to-wall media coverage of the death of a celebrity

The recent deaths of two well-known actors, Robin Williams and Lauren Bacall, dominated the news media this week, but in very predictable ways. The news media has got celebrating the life of a famous person down to a science. If the feeding frenzy on the dead bones of a troubled comic or a classy New York personality has been so thorough, it’s only because the media has done it many, many times before.

No reporter assigned to write a story about a celebrity death should have to scratch his or her head in frustration or confusion, wondering where to begin. There are so many models from which to select that most of the stories about dead celebrities seem to write themselves. Besides the basic obituary of the star, the media churns out story after story on the following topics:

    1. Analysis and appreciations of the celebrity’s body of work
    2. Reaction of the public
    3. Reaction of the star’s family
    4. Reaction of other celebrities
    5. Anecdotes and memories, primarily by other celebrities
    6. The funeral
    7. In-depth coverage of the reason the star died—e.g., suicide in middle age for Robin Williams
    8. The last moments or days in the star’s life
    9. The star’s significance in his or her field and to the larger society
    10. The lessons we can all learn from the star’s life or death
    11. Past scandals or high moments in the life/career of the star, e.g. Bacall & Bogie supporting the blacklisted actors, directors and technicians
    12. Unfinished work that the public may be able to see after the star’s death
    13. The star’s financial state
    14. The star’s will and who gets what
    15. The dispensation of the star’s real estate
    16. Any special tributes that cities or organizations are making, from moments of silence to all-star concerts for charity
    17. The star’s past sex life

Eventually, the backlash starts. We’ve already started seeing it with Robin Williams. Suddenly there are stories questioning how the news media covered the death;  whether the celebrities who commented were self-serving or in good/bad taste; and  whether the celebrity’s significance really warranted all the coverage. The media like nothing better than to flagellate themselves—or should I say, other media.

Type Robin Williams into Google News and you will find several versions of all of these generic story ideas; search for Lauren Bacall and you’ll find at least one example of most of these concepts.

These media frenzies can go on for days, or in the case of someone of the stature of Michael Jackson, who died under suspicious circumstances, for weeks or months.

Some justify this intensive coverage of the death of a celebrity as part of the national mourning: the news media channels what everyone is feeling into a barrage of stories that give us all a good catharsis.

But the therapeutic value of mass media’s mass mourning raises a question: who is being glorified and beatified, and why? Why does the media go on for days about Robin Williams or Philip Seymour Hoffman and give cursory attention to the deaths of Maya Angelou or Gabriel García Márquez? What about scientists like Jacinto Convit or Andrés Carrasco? Or Bill Dana, who flew the X-15 and other experimental aircraft, or NASA engineer John Houbolt? Or how about Howard Baker, once the voice of conscience of the Republican Party? Why don’t we find out about their children, finances, real estate, deep secrets, life history, fears and significance?

If Robin Williams touched the lives of more people, it is not just because he starred in a few TV shows and movies. It’s also because the news media focuses much more on actors, singers, athletes and celebrities (people who are famous for being famous or for being rich) than it does on scientists, engineers, classical composers, elected officials (except presidents), scholars, jazz musicians and other high achievers.

The more significant question, though, is not who is being glorified, it’s why there is so much of it. I would be just as disappointed to see newspapers and the Internet stuffed with meaningless stories about a recently deceased great historian or scientist. In either case, the coverage is excessive because it drives out coverage of other, more important news. We get woefully inadequate coverage of local political campaigns and issues, much less than the news media gave us twenty or even ten years ago. Neither the New York Times nor the Wall Street Journal seems to have enough space to do any stories on Democratic candidates this year, although I suspect a bias in favor of the Republicans is part of the reason for ignoring Democratic primary races. We are painfully unaware of what is happening in many parts of the world. The mass media has practically ignored studies that show that charter schools are ineffective, that immigrants raise the wages of other workers, that we could supply the entire world’s electricity needs with windmills right now, that inequality of wealth is growing and that raising taxes on the wealthy leads to economic growth.

In short, the coverage of important economic, social and political issues is sparse, and often one-sided. Instead of news, we get dead celebrity worship.

Have people in America & Great Britain gotten meaner, & if so, why?

People have gotten meaner because they have no vested interest in worrying about their fellow human beings. That’s the conclusion of Tom Clark (with Anthony Heath) in Hard Times: The Divisive Toll of the Economic Slump, a recent book that sifts through a slew of recent research and impressionistic interviews related to the effect of the Great Recession on the economy and the fabric of society in the United States and Great Britain.

Clark makes his argument through a series of assertions, each of which he supports with research and illustrates with a handful of conversations with people who suffered during the recession, which ended a few years ago for the upper 1% in income and wealth but continues for everyone else:

  1. This last “great recession” directly affected only a small part of the population, although everyone outside the 1% has suffered from stagnant wages over the past 30 years.
  2. Those who suffered from the recession the most have tended not to recover.
  3. Unlike in other recessions, it was easy to predict who would be hit by the Great Recession and who would not recover from it: the poor, the underemployed, the undereducated, primarily minorities and the young.
  4. Compared to previous recessions since the Great Depression of the 1930’s, Anglo-Saxon governments did much less for those who suffered the worst effects of the Great Recession.
  5. The attitudes of the wealthy, middle class and working poor towards victims of the Great Recession were much less generous than they had been towards victims of previous recessions. A blame-it-on-the-victim mentality replaced the generosity that surveys during former recessions found when people were asked whether they supported government help for victims of economic dislocation.

Clark establishes these facts and then uses them to develop a grand synthesis which he thinks explains what he sees as a hard turn right in both the United States and Great Britain over the past 10 years: In former recessions, the impact was widespread and random, so people supported government intervention and support of victims out of self-interest: maybe they would need the help someday. But we could predict who the long-term and permanent victims of the Great Recession would be. The result: even though—or perhaps because—most everyone else has been struggling, people did not think they would need the government benefits and so did not support expanding them. Additionally, more of the middle class and working poor grew to believe that large portions of those receiving benefits were “undeserving.” In a sense, 30 years of static wages and a slow erosion of buying power made everyone hunker down and get more selfish.

Clark’s argument resonates with a careful student of the history of healthcare reform. In Remedy and Reaction: The Peculiar Struggle over Health Care Reform, Paul Starr points out that because most Americans already had health insurance through their employers, Medicare or Medicaid, they had no vested interest in seeing the healthcare law now called Obamacare pass, and in fact recognized that it would mean that they would pay more without getting more, to help fund those getting coverage under the proposed new law. Republican Scott Brown, U.S. Senator from Massachusetts for what baseball people used to call “a cup of coffee,” expressed this attitude best when he said he liked the recently enacted healthcare law in Massachusetts but did not want the citizens he represented to pay for extending the Massachusetts model to the rest of the country, which Obamacare essentially did.

But although Clark makes a compelling case, I think he discounts the impact of the constant barrage of propaganda we have endured since the rise of Reaganism. We’ve had more than 30 years of the right using code words to demonize the poor and downtrodden, such as “welfare queens,” “those people,” “the 47% who think they’re victims” and “urban culture problems.” We’ve had more than 30 years of the glorification of the free market and the nonsense that government always produces inferior solutions. For more than 30 years, we’ve been told that the ultra-rich worked hard for their money and deserve what they get, whereas those who fail have only themselves to blame. More than 30 years of media bashing of unions, teachers and public school workers. More than 30 years of hearing and reading the lie that giving food stamps, medical care and other aid to the poor makes them dependent on handouts and saps their self-reliance so that they prefer to sit on their duffs and do nothing all day. We’ve been told the lie that giving money to poor people, who in fact spend every penny and thereby create jobs, hurts the economy, and that nothing helps it more than cutting taxes on the wealthy. The news media has drummed into our minds that we have to pay down the debt, even if it means gutting social welfare benefits.

In short, some 30 years of brainwashing has made Americans—and evidently Brits, too—inured to the suffering of their neighbors and has atomized our communities into millions of selfish individuals.

I am reluctant to recommend Hard Times as a read, because it’s written in an irritating combination of styles, taking the worst from both a jargon-laden academic style and the slang-and-case-history approach of pop sociology. What’s worse, it’s not even U.S. slang, but that of the foreign tongue known as British. The ideas are certainly worth assimilating and the book is relatively short, but still, if you’re a stickler for good writing, its style will infuriate even as its ideas captivate.

Neither Israel nor United States can justify current bombing campaigns, but Hamas & ISIS are also wrong

Someone on Facebook recently wondered why it’s okay for the United States to bomb the ISIS positions in Iraq but not okay for Israel to bomb the Gaza strip. By “okay,” I’m pretty sure she was asking why the mainstream news media and our political leaders applauded one and not the other. She was correct to observe that while there has been almost universal approval of Obama bombing Iraq (except for those who think he should be doing more!), the press and politicos have expressed mixed feelings about Israel’s actions.

In my mind, both the United States and Israel are pursuing the worst possible courses from both a moral and a political standpoint. Neither country will achieve the stated goals of its acts of violence.

The Iraq situation is much easier to analyze, for the simple reason that no U.S. lives are in harm’s way and no one has attacked our country.

We hear two main reasons to bomb: 1) ISIS is becoming a destabilizing force in the region after having carved out major territory for itself in both Syria and Iraq; 2) We owe it to Iraq, which is a kind of “we broke it so we have to fix it” argument.

This second argument often comes from Republicans and their supporters as part of their program of blaming the President for the situation since he authorized final withdrawal of American troops from Iraq a few years ago. It’s as short-sighted and self-serving as the argument that Obama caused the Great Recession. Iraq has always been a glued-together country. Even in ancient times, the territory that is now Iraq consisted of two and sometimes three national entities. Just as Yugoslavia fell apart as soon as strongman Tito died, so did Iraq splinter when the United States destroyed the strongman government of Saddam Hussein. The violent fractionalization of Iraq was predictable, and many people predicted it. It has also been painfully obvious to anyone willing to look the facts straight in the face that the country would remain a seething pit of terrorism as long as United States troops remained in the country and that it would break apart soon after we left. That’s exactly what has happened.

All the U.S. bombing can do now is shore up a corrupt and weak regime that does not represent all its citizens.  It does not offer a permanent solution.  Instead, U.S. bombing slows down the inevitable process of the various factions in Iraq coming to terms with one another, either in a unified country or in a number of smaller countries. It’s not likely to be pretty and will probably be violent, but with the United States bombing, it is definitely going to be violent and will take a lot longer to achieve. It’s time for us to leave bad enough alone by not bombing or committing any military action in Iraq, while increasing our non-military support for a newly elected government of Iraq that would be willing not to play ethnic or religious favorites.  I’m not saying that ISIS is not a grave threat; what I’m saying is the U.S. position is too compromised from past actions in Iraq to help in the fight against ISIS. We should stay on the sidelines of the military battle, and instead increase humanitarian aid, call for and uphold an arms embargo in Iraq and Syria and coordinate with the United Nations on evacuation and diplomatic efforts.

Like the United States in Iraq, to a large degree Israel made its own untenable situation through years of harsh treatment of the Palestinians, brutal execution of wars and unwillingness to be flexible at the negotiating table. To be sure, Israel has not been alone in its unwillingness to confront the other side peaceably. Moreover, Hamas and its predecessors have conducted terrorist campaigns against Israeli citizens.

But Israel’s past harsh ways have never worked, unless the country’s real goal is to keep a population that it believes to be inherently inferior in a political and social structure akin to apartheid, no matter how much violence it takes. I do not believe this patently anti-Semitic characterization of Israel’s actions, which is why I can’t understand why Israel’s political and military leaders keep answering violence with an escalation of brutality. The numbers speak for themselves: 1,800 Palestinians dead since the latest conflict began, 70% of them civilians; fewer than 70 Israelis killed, of whom only three were civilians. No wonder the mainstream media is giving the Israeli attacks a mixed review. And it’s no wonder anti-Semitic acts have increased in Europe. The contrast between 1,800 and 70 feeds the imaginations of anti-Semites everywhere. It makes Hamas even more recalcitrant and it encourages the funders of terrorism to give more money to their violent clients.

In short, the Israeli way of meeting a slap with a sledgehammer has never worked and never will. It would have been much better if Israel had reacted to the act that started the latest wave of violence—the kidnapping and killing of three boys—with a more studied, more nuanced approach. First and foremost, it should have insisted on due process to find and punish the killers of the boys. By not bombing civilian targets, it would have won the admiration of many in the West for its restraint and perhaps convinced the other side that it was willing to consider a peaceful solution to the Palestinian problem. It might have considered using drones to target known terrorists in civilian areas or tried surgical operations similar to the one in which it dismantled a Syrian nuclear reactor years ago.

But instead of trying to think of new approaches, Israel and the United States have both decided to default to the unworkable. And so “business as usual” continues in Israel and the occupied lands and returns to Iraq. It’s a bloody status quo that shows absolutely no signs of transforming into something better.

S&P identifies problem—income inequality—but gives same absurd solution that most economists do

Standard & Poor’s, the agency that rates financial instruments, has released a report that demonstrates that inequality of income is a drag on the American economy. S&P predicts that the economy will grow at an annual rate of 2.5% over the next 10 years, which is 0.3 percentage points lower than it predicted five years ago. S&P says that this drop of more than 10% in the growth rate derives from the great inequality of income and wealth that exists today, which has made the economy more prone to boom-and-bust cycles.
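A 0.3-point cut and a “more than 10%” drop sound like different numbers but describe the same forecast. The arithmetic, with the earlier 2.8% forecast implied by S&P’s two figures:

  • The forecast of five years ago: 2.5% + 0.3 points = 2.8% annual growth
  • The relative drop: 0.3 ÷ 2.8 ≈ 0.107, so the new forecast is roughly 10.7% lower than the old one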

S&P joins the growing list of economists of all political persuasions to recognize that income and wealth inequality is growing and that it is a bad omen for the economy.

And just like many of the other distinguished boffins who have chimed in on the subject, S&P thinks that the way to reduce inequality of income is through education. It’s amazing how many economists and pundits—conservatives and progressives—think that education is the answer.

No one except Thomas Piketty has cared to dismantle the argument that greater equality of wealth is connected to education, and Piketty approaches the issue from the causal end, taking on the argument that differences in education have caused the inequality of wealth. Piketty postulates that it’s not the fact that some people are much more highly skilled than others that has led to growing inequality of income, but rather social custom and low income tax rates, which give executives more incentive to line their own pockets.

But I haven’t read anyone try to refute the assertion that education will reduce inequality of wealth because it will enable the newly educated employees to become more productive.

Yet this ridiculous argument is child’s play to dismantle: Even if everyone gets a PhD, someone has to flip the burgers, dig the ditches, sweep the floors, clean the bed pans, check out people at the grocery store and work the drive-through car wash. As it turns out, most of the new jobs created since the Great Recession started are at the low end of the skill and wage ladder.

It doesn’t matter how educated one is if one works in a job that’s poorly compensated. Moreover, if enough people get the education and training needed for a job with a higher wage, the wage for that job will fall because of the increase in the supply of qualified candidates.

There are only four ways to make wages more equal:

  1. Cut what we pay the highest compensated professions, such as hedge fund managers and executives of large corporations.
  2. Raise taxes on the income of these highly compensated workers.
  3. Raise salaries of lower paid workers.
  4. Provide lower paid workers with government benefits, which in a sense subsidizes their employers by paying for some of the living costs of low-paid workers.

I like the combination of two and three: Tax the wealthy more and pay workers more. In fact, if we taxed the rich what we taxed them in 1950 and paid workers the purchasing power they had in 1950, we would see inequality of wealth return to what it was in the golden age of American equality, from about 1946 to sometime in the mid or late 1970’s.

S&P gives the standard rightwing reasons for warning against raising taxes to reduce inequality of wealth: that it reduces the incentives to work and that businesses will hire fewer workers. Why anyone ever believed that businesses hire workers just because they’re cheaper is beyond me; only good businesses survive, and good businesses hire only when they need someone, whatever the salary. I suppose that those who believe that increasing taxes reduces the incentive to work would all turn down a job because the take-home pay was $4 million instead of $5.4 million.

Like the argument that education will diminish wage inequality, the idea that raising taxes reduces the incentive to make money is so absurd that the true wonder is why anyone would still try to slip these logical absurdities by the public. And yet the mass media continues to lap it up like the lapdogs they are when it comes to reporting macroeconomic news.

If the S&P really wants to reduce inequality of income, it should call for the following:

  • Raise the minimum wage to $15 an hour
  • End all right-to-work laws and shorten the time before employees vote to affiliate with a union
  • Raise income tax rates on incomes over $150,000 and institute an annual wealth tax.

I’m basically talking about returning things to the 1950’s. The reason the rich have gotten so much richer while the rest of us have stagnated is policy changes. All I’m proposing is changing those policies back to the way they used to be.

NY Times reveals its conservative bias in its reporting of primary elections

The New York Times management probably thinks the paper got a lot of progressive street cred by coming out in favor of the total legalization of marijuana. The Times has published a series of very long editorials that take apart every aspect of the subject and conclude that legalization is the right thing to do. I concur with the Times and am pleased it has come out so aggressively on this relatively minor social issue.

But that doesn’t change my mind about the electoral politics the Times news department subtly favors in its news coverage. First of all, just because you support legalization of pot doesn’t mean you’re a progressive. Progressives do not have ownership of the legalize-pot issue—libertarians are also in favor of allowing recreational use of the devil weed.

More to the point, while the Times editorial page has been smoking, the news room has a chronic case of Tea Party conservatism when it comes to election coverage. The coverage of the primary results in Kansas and Michigan provides an excellent example of the way the Times has been reporting primaries since 2010: The article titled “Senator Beats Tea Party Challenger in Kansas” reports the results of three Kansas and two Michigan primaries—but only on the Republican side. The story follows the Times’ overarching narrative of the 2014 election, which is the same narrative the newspaper—and the rest of the mass media—foisted on the American public in 2010: the bitter and dramatic battle for the soul of the Republican Party between ultra-rightwing Tea Partiers and the merely conservative traditional Republicans.

But what about the Democrats?

There is no national narrative about the Democratic Party, except an occasional mention of a candidate running away from Obamacare. No coverage of the races in which progressive candidates are facing centrist Democrats. In fact, no coverage of Democratic primaries at all!

It’s not just the New York Times, of course. A Google search comparing coverage of the Republican and Democratic primaries in Kansas and Michigan shows a decided bias towards covering the doctrinal disputes between factions of the Republican Party, while ignoring anything that has to do with Democratic primaries or the Democrats’ process of selecting candidates:

  • Inputting “Chad Taylor,” who won the Democratic primary for U.S. Senator from Kansas, reveals 16,000 stories on Google News; do the same for the Republican nominee Pat Roberts (the incumbent) and it’s 30,300 stories.
  • In Michigan’s 11th district, a Google News search for Democratic nominee Bobby McKenzie yields 1,570 stories; a search for the Republican nominee Dave Trott yields 3,790, more than twice as many.
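Do the division and the tilt is similar in both races:

  • Kansas Senate: 30,300 ÷ 16,000 ≈ 1.9 times as many stories about the Republican
  • Michigan’s 11th: 3,790 ÷ 1,570 ≈ 2.4 times as many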

I searched Google News for all five races covered in the Times article and in each case there were many more stories in the national news about the Republican winning than about the Democrat (although in two of the cases, the Democrat ran unopposed).  Even more revealing is the fact that a majority of the stories I read focus on the Republican race, only mentioning the Democrat as the candidate whom the Republican will have to face in November.

Just as in 2010, casual perusers of newspapers and the Internet might come to the conclusion that no Democrats are running for any office come November. They certainly will learn a lot about the nuances that distinguish the hard right from the very hard right while gleaning almost nothing about what issues divide and unite the Democrats—who, BTW, are the larger party in terms of membership and total votes cast in both the presidential and Congressional races in 2012.

It’s as if the mass media are collectively writing the story of the election from the point of view of the Republican Party. Even though the New York Times and the rest of the national mainstream media will endorse Democrats, their news coverage in fact endorses the Republican Party by focusing primary election coverage almost exclusively on Republican races; providing extensive coverage of the Republicans’ extreme element while ignoring the far left of the Democratic Party; and framing most national and international issues from the Republican playbook.

Marketers are discovering a rapidly growing group of consumers: adults who want to remain children

The latest marketer to cash in on the trend of adults wanting to remain children is a museum.

The American Museum of Natural History (AMNH), that venerable icon of the natural sciences, is now offering special sleepover parties—for adults only. That’s right, for a mere $375 a person ($325 for members), you can snuggle up in jammies in your sleeping bag on a cot provided by the museum under the enormous blue whale in the Hall of Ocean Life with 149 people you’ve never met before. I don’t know if they’re serving cookies and milk or s’mores and hot chocolate, but I understand that lights out is about 1:42 am. BTW, the museum has offered sleepovers for families for about eight years.

Now, the adult meaning of sleepover is much different from when the term is applied to children. For adults, a sleepover means having sex, usually for the first time or early in a relationship. For kids through their late teens, by contrast, it means making popcorn, watching movies, talking through the night and having mom make pancakes or French toast in the morning.

Which do you think the AMNH sleepover resembles? There is no way the museum trustees, the insurance companies or the police are going to condone sex on the premises, nor do I think many adult couples who attend the sleepover are going to want to engage in conjugal relations in full sight and earshot of everyone else trying to sleep on a cot. There may be some hidden hanky-panky among the mastodons or in a bathroom stall, but the point of the AMNH sleepover is not sex. It is therefore not an adult sleepover, at least not in conventional or traditional terms.

What is it, then? Well, you get a chance to see the exhibits—just like a regular visit or a special event such as a singles night or members day. You get to hear guest lecturers—just like a regular visit or a special event. You get the run of the place pretty much to yourself, which is not like the wall-to-wall masses of chattering humanity of a regular visit, but very much like a special museum event.

The only thing that differentiates the sleepover from other museum events, then, is the sleepover itself. The big selling point for an adult event is something for children.

In other words, a major American museum is appealing to those adults who want to do something from their childhood—have a sleepover. The museum’s marketing department is trying to cash in on the growing number of adults who collect My Little Pony dolls, play with Legos, like to go to Disney theme parks, read comic books and juvenile fiction like Harry Potter or spend a lot of time playing shoot-’em-up video games. And judging from the stories we see in the mass media, their number is growing by leaps and bounds. You can see just how far the infantilization of American adults has progressed when you peruse the growing number of movies dedicated to adults preserving the life they led as children: the “Harold & Kumar” movies, “Neighbors,” “The Internship,” “Old School,” “Big,” “Grandma’s Boy,” “Ted,” “The Wedding Crashers,” “Billy Madison,” “You, Me and Dupree,” “Dodgeball,” “Step Brothers,” “The 40-Year-Old Virgin,” “Knocked Up,” all three “Hangovers,” the “Jackass” movies, “Bridesmaids,” “Hall Pass” and “Identity Thief” start the long list of movies that glorify not growing up.

Going to the museum is a pretty adult thing to do, unless it’s a children’s museum or the museum has decided to focus an exhibit on a child’s level of discourse. And keep in mind that the purpose of children’s museums and children’s exhibits is to guide children in learning how to appreciate the adult experience that is museum-going. So how does a palace dedicated to the scientific education of all ages attract the fast-growing segment of adults who don’t want to grow up? AMNH has come up with a brilliant solution: combine the very adult pleasure of looking at scientific specimens and analyzing information about the natural world with the child’s treat of having a sleepover.

While we should all retain a child’s sense of wonder and curiosity, I believe that at a certain point, it’s time, as Saul of Tarsus said, to put away childish things. His full quote, according to the King James version of the Christian Bible is “When I was a child, I spake as a child, I understood as a child, I thought as a child: but when I became a man, I put away childish things.”

The infantilization of American adults is a clear and present danger for representational democracy because adults who constantly participate in child-like activities are not practicing their adult thinking and emotional skills. I believe that mass marketers like infantilized adults because they make more docile and credulous consumers. But I for one would much rather have those who think like adults make decisions in the real world.

American political situation begins to resemble Alice’s Adventures in Wonderland

Have we fallen down a rabbit hole and entered a surrealistic world as Alice did when she fell into Wonderland?  Have we walked through a looking glass to a world that looks like ours but operates on a weird kind of logic?

I can’t be the only one who looks at the political scene in Washington, D.C. and concludes it looks a lot like Lewis Carroll’s 19th-century fantasies, Alice’s Adventures in Wonderland and Through the Looking-Glass, and What Alice Found There.

What could be more bizarre than the lawsuit of the Republicans against President Obama because he has failed to implement parts of a law that they vehemently opposed and then spent four years trying to repeal?

Or how about this bit of logic from John Boehner, Speaker of the House of Representatives? In the morning he says that Congress will come up with a plan to solve the current border crisis and in the afternoon we find out the plan is to tell the president to do something. Of course, if the president does do something about the border crisis, he will be expanding the powers of the presidency, the same sin of which the lawsuit accuses him.

Or how about this bit of lunacy? Dick Cheney, John McCain and others want to send more troops to Iraq to support a government known for repressing segments of its population. Or think of a world turned upside down in which the United States supports a coup d’état and Russia supports the constitutionally elected government. Believe it or not, that’s what has happened in the Ukraine—although I have to add that this observation by no means serves as an endorsement of Russia’s support of Ukrainian rebels.

As our political landscape begins to resemble the Wonderland into which Alice fell, I find myself assigning characters to an imaginary Washington Wonderland: Our President would have to be Alice. John Boehner as the unctuous but ineffectual White Rabbit and Ted Cruz as the completely bonkers Mad Hatter are easy calls, and if Cruz is the Mad Hatter, then Jeff Sessions is the March Hare and Mitch McConnell is the Dormouse who sits squeezed tight between these two crazies. The Mock Turtle, a creature that doesn’t exist outside of a mediocre pun, is Marco Rubio. Paul Ryan is the mean-spirited and hypocritical Duchess.

And who is the self-satisfied Cheshire Cat, sitting in a tree above the action—or should I say above the rhetoric and posturing masquerading as action—with a sly, knowing smile? It has to be America’s ultra-wealthy as represented by those fat cats, the Koch brothers. While our elected officials, especially our legislators, continue to create chaos out of order, the ultra-rich enjoy a regime of low taxes, enormous tax loopholes for corporations, insufficient environmental regulations and a foreign policy driven solely by the needs of the 1%. The Cheshire Cats smile, too, because they know they are sitting pretty, what with current campaign finance laws that allow them to buy elections and the Republican campaign to restrict voting rights (which, BTW, takes a page from the wealthy northerners of the 1870’s and the South from the end of Reconstruction to the Civil Rights Act of 1964, both of which moved to dramatically restrict the pool of eligible voters).

The more things don’t change, the more the ultra-wealthy smile like Cheshire Cats.

Greedy, self-serving billionaire thinks buying yacht is act of charity

Dennis M. Jones is a billionaire who claims that the $34 million he paid for his new yacht was a form of charity since the yacht creates jobs in the manufacturing, maintenance, cleaning, furnishing, decorating, cooking, and serving industries.

Funny, that’s the very same self-serving rationale that society matron Cornelia Martin gave to justify an egregiously ostentatious masked ball in New York in 1897—in the middle of a depression—at which one wealthy woman came wearing today’s equivalent of $6.4 million in jewelry sewn into her dress. Thus reports Sven Beckert on the very first page of The Monied Metropolis, his history of the concentration of wealth in the hands of New York financiers, manufacturers and merchants during the Gilded Age (1850-1896). The Gilded Age ranks second among American epochs in the flow of wealth upwards to a handful of very wealthy people. First place, of course, goes to the current era, which started in about 1980 and which I like to call the Age of Reagan.

If you wondered why rich folk can so glibly come up with convoluted excuses for why it’s great for the government to follow policies that take money from the middle class and poor and give it to them, now you know: They’ve been laying down the same line for decades.

Let’s take a look at two scenarios for an alternative use of Mr. Jones’ $34 million and the untold millions he pays every year for upkeep of his metal-and-fiberglass leviathan.

The first scenario is a fantasy socialist utopia in which hundreds of families enjoy a weekend or week owning the yacht. The government pays the cost of maintaining the yacht for the succession of temporary owners. Living on the yacht is open to everyone. At least everyone who likes that sort of thing. I’ll take the three-week trip with luxury accommodations across the old Silk Road instead!

Now a second, realistic scenario: The government spends the additional $34 million it collects from Jones on teacher salaries, thereby shrinking class sizes and providing an environment that’s more conducive to learning. Or maybe the government could fund $34 million in research into Alzheimer’s disease or cancer. $34 million would repair a lot of highways and feed a lot of hungry children. What about spending $34 million to subsidize consumer purchases of solar heating equipment?

Every single one of these uses for $34 million creates jobs. More importantly, they all improve our society and future economy more than does buying and operating a yacht for the gratification of a single individual.

“But it’s Mr. Jones’ money!” will shout those like George Will and the Roberts court who place property rights above human rights and social needs.

Yes, but in any other decade since we enacted the 16th Amendment in 1913, Mr. Jones would be paying a higher rate on the taxes from both his income and investments.

Moreover, no matter how hard Mr. Jones worked, he did not really earn that money by himself. The pharmaceutical company he sold 14 years ago for $3.4 billion used roads, bridges, sewers, airports and pipelines built with government money, all protected from both local and foreign threats by government organizations. His employees were mostly educated in public schools. They all made a ton less money than he did—and does—and none benefited the way he did from the company sale.

I couldn’t find a biography of Jones, but it doesn’t matter for my argument. He earned a billion in one of only two ways: Either he was already rich and connected or he was born with a talent which he may have burnished but which he did nothing to create. Either way, luck had at least as much to do with it as hard work. I’m sure that there are servants, boat engine maintenance specialists and cooks who work just as hard.

In other words, Mr. Jones owes a lot to society and to luck. I’m not saying he shouldn’t earn—and spend—more than other people. Nor am I proposing a ceiling on wages and wealth.

What I am saying is that we have to return taxes to their levels before 1980 as long as there is still widespread poverty in the world, schools are overcrowded, our infrastructure is rapidly deteriorating, global warming is exacting severe penalties in lost lives and wealth, and people still suffer from debilitating ailments.

If that means Mr. Jones couldn’t afford to spend $34 million to buy a yacht and then untold millions to operate it, so be it. Life can be tough for billionaires.

Fill in the blank: Americans are living in the land of the____. My answer: guns

What comes first to mind when you think of the United States?

High standard of living? Beacon of representational democracy? The melting pot? Consumer society?  Fast food and blockbuster movies?

Land of the free? Home of the brave?

Not me.

When I think of the United States, the first image that comes to my mind is a gun.

We are a society awash in weaponry with an economy in large part based on weaponry.

Let’s start with the fact that we sell three quarters of all the arms exported around the world. That means that of every dollar’s worth of bombs, tanks, jet fighters, ammo and machine guns sold around the world, 75 cents goes to a U.S. company.

Saudi Arabia, Iraq, Egypt, the Congo and Israel are among the many countries receiving arms from the United States, often purchased with funds borrowed from the U.S. government.

It’s a good thing the mainstream media doesn’t give much attention to the harm done by the guns we sell abroad. Based on the space the media recently gave to condemning Russia for giving or selling Ukrainian rebels the rocket used to down a commercial jetliner, coverage of the people killed with U.S. weaponry would crowd out all other news every day of the week.

At more than $660 billion a year, we dedicate more money to military spending than the next nine largest military spenders combined. We spend more than three times as much as number two on the list, China, even though the population of China is more than four times ours. Let’s do the math: The autocratic Chinese spend about $139 per person per year on their military. The United States spends about $2,032 per person. (I’m using 2013 figures from the Stockholm International Peace Research Institute, which I first found in a Wikipedia article.)
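Those three comparisons hang together arithmetically. Taking the population ratio to be a bit more than four (call it 4.3, my rounding, not a figure from the report):

  • Per person: $2,032 ÷ $139 ≈ 14.6, so we outspend the Chinese almost 15 to 1 per capita
  • Population: China’s is roughly 4.3 times ours
  • Total spending: 14.6 ÷ 4.3 ≈ 3.4, which squares with spending “more than three times as much” as China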

The United States thus bears the major responsibility for the flood of weapons that help national and regional problems turn violent all over the world.

The violence doesn’t stop at our borders. Our militarism abroad runs parallel to our dedication to guns at home. The United States has the largest number of privately held guns per capita of any nation, almost one per person. Just as with military spending and weapons exports, our private ownership of guns far surpasses that of any other country in the world. We have 97 private guns per 100 people; no other nation has as many as 60 guns per 100 people.

More guns lead to more deaths and injuries from gunfire in the United States than in any other industrialized country. Only countries at war see more of their people killed and injured by guns than the United States does.

It seems as if we worship guns and gun ownership. State legislatures and dubious court decisions have loosened gun control laws over the past two decades. After every bloody mass murder, more states pass laws to make it easier to own and carry a gun than toughen gun laws. Every week, the media covers protests of gun owners who think their rights have been squeezed or want to assert new rights to tote guns: sometimes they march into a fast food joint, sometimes on a university campus. Very few politicians—and virtually none in the South—will come out in favor of gun control for fear that the gun lobby will pour money into their opponents’ campaigns.

The funny thing is, our reverence for the weapon both inside and outside the boundaries of our country plays to a stridently vocal minority. Only about 40% of the population has a gun in the home. Many surveys show that the number of gun owners is falling—but that each owner has more guns in his or her possession.  Our gun sales abroad primarily benefit the gun-makers, who are delighted to get the subsidies that U.S. loans to support arms sales represent.

What we have then is a society dedicated to guns and an economy in which making and selling guns play an outsized role.   Instead of singing “the land of the free and the home of the brave,” we might more accurately end the “Star Spangled Banner” with “the land of the gun…and the home of the gun sale.”

A TV commercial subtly suggests cannibalism, another makes fun of those with disabilities

Two commercials currently on TV are making me—and probably most other viewers—squirm with discomfort. Both are meant to be funny, but once explained, the logic behind the humor may turn stomachs.

The first is a spot for Lay’s potato chips that opens with an animated version of the classic Mr. Potato Head toy getting home from work. He can’t find his wife anywhere. He hears a strange crackle and then another. He follows the sounds until he sees his wife hiding in a room with a bag of Lay’s potato chips, munching away. She is suitably embarrassed at what amounts to an act of cannibalism, but the commercial explains that the chips are so delicious that they are irresistible. The last shot shows Mr. & Mrs. Potato Head snacking on the chips, both with a look of mischievous glee on their faces—they know they are doing a naughty thing, but it just doesn’t matter.

The scene is reminiscent of Jean-Luc Godard’s masterpiece, “Weekend,” at the end of which the main female character sucks on a bone from a stew prepared by the revolutionary who has forcibly made her his concubine. “What is it we’re eating?” she asks, to which the punky gangster answers, “Your husband.” She has the last line of the movie: “Not bad…” and then keeps gnawing on the bone.

Eating another being of your own species is generally considered to be an abomination. Although the Potato Heads are not humans, they are stand-ins for humans with human emotions and aspirations, just like the various mice, ducks, rabbits, dogs, foxes, lions and other animals we have anthropomorphized since the beginning of recorded history. From Aesop and Wu Cheng’en to Orwell and Disney, authors have frequently used animals as stand-ins for humans in fairy tales, satires and children’s literature.

So when Mrs. Potato Head eats a potato, it’s an overt representation of cannibalism—humans eating other humans.

The advertiser is trying to make fun of transgression, to diminish the guilt that many on a diet or watching their weight might feel in eating potato chips, which, after all, are nutritionally worthless. But behind the jokiness of a potato eating a potato chip stands more than the idea that it’s okay for humans to eat them. The implication of having a potato play at being a human eating other potatoes is that we are allowed to do anything transgressive, even cannibalism—everything is okay, as long as it leads to our own pleasure. The end-game of such thinking is that our sole moral compass should be our own desires.

Thus the Lay’s Potato Head commercial expresses an extreme form of the politics of selfishness, the Reaganistic dictate that everyone should be allowed to pursue his or her own best interests without the constraint of society. Like the image of the vampire living on the blood of humans or of the “Purge” series of movies in which people are allowed any violent action one night a year, the Potato Head family eating other potatoes that have first been dried, processed, bathed in chemicals, extruded and baked symbolizes and justifies what the 1% continues to do to the rest of the population.

And it’s a happy message, too! We don’t get the sense that it’s a “dog-eat-dog world in which you have to eat or be eaten.” No, Lay’s presents the gentle Reagan version: you can do anything you like to satisfy your selfish desires (no matter whom it hurts).

The kooky image of potatoes as cannibals may be funny, but I can’t imagine anyone is laughing at the Direct TV series of commercials that present human beings as string puppets who trip over furniture and get caught in ceiling fans.

To sell the fact that Direct TV—a satellite television service—can operate without wires, these commercials start by depicting a normal-looking character complaining about wires in the entertainment system or expressing delight that he has Direct TV and therefore can go wireless. At this point in the several versions of the spot I have seen, we are introduced to another member of the family who is a string puppet. As the normal character stammers about how wireless is okay for people but not when it comes to TV, the string puppet bounces around, hands and fingers flapping, shoulders hunching together and legs and knees dangling, until it trips or gets hung up in the fan or something that is supposed to be funny happens. But it’s only funny if one enjoys the cruel humor of slapstick and if one forgets that the stringed puppet is supposed to be part of the family—in other words a real human being with a challenging disability.

Direct TV has a long history of commercials that make fun of its audience, such as the idiot who fails to inherit a mansion, yacht and major stock portfolio but cries with glee because his rich deceased relative has willed him the Direct TV package. But the string people in these new Direct TV spots are not buffoons, not stupid, not venal, not pompous or supercilious. No, the trait that the spot exploits for humor is that they are disabled.

The commercial tries to extract humor out of mocking people with disabilities. No wonder everyone with whom I have watched this spot has turned away with a disgusted expression.

Nothing connects these two commercials except the bad taste which led to their conception and broadcast.  The Direct TV commercial has no political or social subtext to it—it’s a juvenile effort to make a joke at the expense of people with physical challenges. The Mr. Potato Head cannibalism commercial, however, seems to offer a fable about the relationship between the haves and the have-nots, or in this case—those who eat and those who are eaten. The fabulist is interested in selling products and making consumers feel good about the process of consumption, even when it is transgressive.  Some may call it an overturning of traditional morality. I call it business as usual in a post-industrial consumer society.