Saturday, September 29, 2012

Josh Cohen's Memory Books Reading List

by Josh Cohen, November 24, 2010


Below are some books related to memory that I’ve read so far, in no particular order (except for the first three, which are good places to start). My favorites are in bold. This page is regularly updated as I find new books.
  • Moonwalking with Einstein by Joshua Foer
  • Quantum Memory Power by Dominic O’Brien
  • You Can Have an Amazing Memory: Learn Life-Changing Techniques and Tips from the Memory Maestro by Dominic O’Brien
  • How to Pass Exams by Dominic O’Brien
  • How to Develop a Brilliant Memory Week by Week by Dominic O’Brien
  • The Amazing Memory Kit by Dominic O’Brien
  • Learn to Remember by Dominic O’Brien
  • The Art of Memory by Frances Yates
  • The Mind Sport of Memory 2008 Yearbook by Chambers and Day
  • Remember, Remember by Ed Cooke (2 more chapters to go)
  • Maximize Your Memory by Ramón Campayo
  • Memory Pack by Andi Bell
  • Use Your Perfect Memory by Tony Buzan
  • The Mind Map Book by Tony Buzan
  • Improve Your Memory by Robert Allen
  • Maximize Your Memory by Jonathan Hancock
  • The Mind of a Mnemonist by A. R. Luria
  • Mind Performance Hacks by Ron Hale-Evans
  • Rhetorica ad Herennium (only the section on memory)
  • Your Brain, the Missing Manual by Matthew MacDonald (1/4 to go)
  • How to Be Clever by Ben Pridmore
  • On Memory and Reminiscence by Aristotle
  • By Heart: 101 Poems to Remember by Ted Hughes
Books I have but haven’t read yet:
  • Mindhacker: 60 Tips, Tricks, and Games to Take Your Mind to the Next Level by Ron Hale-Evans and Marty Hale-Evans
  • The Medieval Craft of Memory by Carruthers and Ziolkowski
  • Giordano Bruno and the Hermetic Tradition by Frances Yates
  • On the Composition of Images, Signs and Ideas by Giordano Bruno
  • Memory, a Very Short Introduction by Jonathan Foster
  • Brain Boot Camp by Tony Buzan
  • Use Both Sides of Your Brain by Tony Buzan
  • Super Memory, Super Student by Harry Lorayne
  • How to Develop a Perfect Memory by Dominic O’Brien
Books that I’ve seen online that I’m looking at next:
  • The Brilliant Memory Tool Kit: Tips, Tricks and Techniques to Boost Your Memory Power by Dominic O’Brien
  • The Book of Memory: A Study of Memory in Medieval Culture by Mary Carruthers
  • The Craft of Thought: Meditation, Rhetoric, and the Making of Images, 400-1200 by Mary Carruthers
  • De Umbris Idearum by Giordano Bruno, though I haven’t seen it in English yet.
  • De Oratore by Cicero
  • The Student Survival Guide by Chambers & Colliar
  • Wax Tablets of the Mind: Cognitive Studies of Memory and Literacy in Classical Antiquity by Jocelyn Penny Small
  • The Gallery of Memory: Literary and Iconographic Models in the Age of the Printing Press by Lina Bolzoni
  • The Web of Images: Vernacular Preaching from Its Origins to Saint Bernardino Da Siena by Lina Bolzoni
  • Logic and the Art of Memory: The Quest for a Universal Language by Paolo Rossi
  • The Memory Palace of Matteo Ricci by Jonathan Spence
  • In the Palaces of Memory: How We Build the Worlds Inside Our Heads by George Johnson
  • Memory from A to Z: Keywords, Concepts, and Beyond by Yadin Dudai
  • Eros and Magic in the Renaissance by Ioan Culianu
  • Metaphors of Memory: A History of Ideas about the Mind by Douwe Draaisma
  • Theories of Memory: A Reader by Rossington and Whitehead
  • A Sheep Falls From the Tree by Christiane Stenger
  • Memory Power: You Can Develop a Great Memory by Scott Hagwood
  • How to Remember Anything: The Proven Total Memory Retention System by Dean Vaughn
  • How to Master the Art of Remembering Names by Dean Vaughn
  • Remember Every Name Every Time by Benjamin Levy
  • The Memory Book by Harry Lorayne and Jerry Lucas
  • Cartographies of Time by Daniel Rosenberg and Anthony Grafton
  • Your Memory: How It Works and How to Improve It by Kenneth Higbee
  • Memory in Oral Traditions: The Cognitive Psychology of Epic, Ballads, and Counting-out Rhymes by David C. Rubin
  • Medical Terminology 350: Learning Guide by Dean Vaughn
  • Basic Human Anatomy by Dean Vaughn
The following books are about speed-reading, mental speed math and other related topics. I don’t have them yet, but they’re on my reading list because I’ve read reviews of them or someone has recommended them to me:
  • How to Calculate Quickly: Full Course in Speed Arithmetic by Henry Sticker
  • Breakthrough Rapid Reading by Peter Kump
  • The Art of Learning by Josh Waitzkin
Books on music and the brain on my to-read list:
  • Musicophilia: Tales of Music and the Brain by Oliver Sacks
  • This Is Your Brain on Music: The Science of a Human Obsession by Daniel Levitin
  • Music, Language, and the Brain by Aniruddh Patel
  • Music, The Brain, And Ecstasy: How Music Captures Our Imagination by Robert Jourdain
  • The Tao of Music: Sound Psychology by John Ortiz
  • Music and the Mind by Anthony Storr
For more, go to:

http://mnemotechnics.org/

Jonah Lehrer: How To Raise A Superstar


By Jonah Lehrer, Wired Science Blogs, August 24, 2010


(Note from Don: After his fall from grace, Jonah Lehrer's old blog posts now carry this disclaimer:)

Editor's Note: Some work by this author has been found to fall outside our Editorial standards. Not all posts have been checked. If you have any comments about this post, please write to research@wired.com.

The 10,000 hour rule has become a cliché. This is the idea, first espoused by K. Anders Ericsson, a psychologist at Florida State University, that it takes about 10,000 hours of practice before any individual can become an expert. The corollary of this rule is that differences in talent reflect differences in the amount and style of practice, and not differences in innate ability. As Ericsson wrote in his influential review article “The Role of Deliberate Practice in the Acquisition of Expert Performance”: “The differences between expert performers and normal adults are not immutable, that is, due to genetically prescribed talent. Instead, these differences reflect a life-long period of deliberate effort to improve performance.”

On the one hand, this is a deeply counter-intuitive idea. (It’s best articulated in Gladwell’s excellent Outliers and Daniel Coyle’s The Talent Code.) Although we pretend to be egalitarians, we really believe that the talented are naturally “gifted”. You and I can’t become chess grandmasters, or NBA superstars, or concert pianists, simply because we don’t have the necessary anatomy. Endless hours of hard work won’t compensate for our biological limitations. When fate was handing out skill, we got screwed.

And yet, the 10,000 hour rule also echoes a long-standing belief about how talent happens. Let’s call this the parable of Tiger Woods. The story goes something like this: When Tiger Woods was an infant, his dad, Earl, moved his high chair into the garage. This was where Earl practiced his golf swing, hitting balls into a soccer net after work. Tiger was captivated by the swift movement. For hours on end, he would watch his father smack hundreds of balls. When Tiger was nine months old, Earl sawed off the top of an old golf club. Tiger could barely walk – and he had yet to utter a single word – but he quickly began teeing off on the Astroturf next to his father. When Tiger was 18 months old, Earl started taking him to the driving range. By the age of three, Tiger was playing nine hole courses, and shooting a 48. That same year, he began identifying the swing flaws of players on the PGA tour. (“Look Daddy,” Tiger would say, “that man has a reverse pivot!”) He finally beat his father – by a single stroke, with a score of 71 – when he was eleven. At fifteen, he became the youngest player to ever win the United States Junior Amateur championship. At eighteen, he became the youngest player to ever win the United States Amateur championship, a title he kept for the next three years. In 1997, when he was only 21, Tiger won the Masters at Augusta by the largest margin in a major championship in the 20th century. Two months later he became the number one golfer in the world.
The lesson of Tiger Woods is that the best way to become a superstar is to start young and get in those 10,000 hours as quickly as possible. That’s why Earl put a club in the hands of a toddler, and why Mozart was composing music before most of us can do arithmetic.

However, a series of recent studies by psychologists at Queen’s University adds an important wrinkle to the Tiger Woods parable. The scientists began by analyzing the birthplaces of more than 2,000 athletes in a variety of professional sports, such as the NHL, NBA, and the PGA. This is when they discovered something peculiar: the percentage of professional athletes who came from cities of fewer than half a million people was far higher than expected. While approximately 52 percent of the United States population resides in metropolitan areas with more than 500,000 people, such cities produce only 13 percent of the players in the NHL, 29 percent of the players in the NBA, 15 percent of the players in MLB, and 13 percent of players in the PGA.*
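The over- and under-representation in those figures can be made concrete with a quick back-of-the-envelope calculation. This is just a sketch using the percentages quoted above; the variable names and ratio framing are my own:

```python
# Share of the U.S. population living in metro areas of 500,000+ people,
# versus the share of pro athletes who come from such areas (figures
# quoted in the article above).
big_city_pop_share = 0.52
athlete_big_city_share = {"NHL": 0.13, "NBA": 0.29, "MLB": 0.15, "PGA": 0.13}

for league, share in athlete_big_city_share.items():
    # Representation ratio: 1.0 means an area produces athletes in
    # proportion to its population; below 1.0 means under-representation.
    big_ratio = share / big_city_pop_share
    small_ratio = (1 - share) / (1 - big_city_pop_share)
    print(f"{league}: big-city ratio {big_ratio:.2f}, "
          f"small-town ratio {small_ratio:.2f}")
```

By this arithmetic, smaller communities produce athletes at roughly 1.5 to 1.8 times their population share, while big cities produce them at only a quarter to a half of theirs.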
I can think of several different explanations for this effect, none of which are mutually exclusive. Perhaps kids in small towns are less likely to get distracted by gangs, drugs, etc.

Perhaps athletes outside of big cities go to better schools, and thus receive more attention from their high school coaches. Perhaps they have more access to playing fields. Perhaps they have a better peer group. The scientists summarize this line of reasoning in a recent paper: “These small communities may offer more psychosocially supportive environments that are more intimate. In particular, sport programs in smaller communities may offer more opportunities for relationship development with coaches, parents, and peers, a greater sense of belonging, and a better integration of the program within the community.”

But there’s another possible explanation for this effect, which was nicely summarized by Sian Beilock, a psychologist at the University of Chicago and author of the forthcoming Choke. She proposes that an important advantage of small towns is that they’re actually less competitive, thus allowing kids to sample and explore many different sports. (I grew up in a big city, and my sports career basically ended when I was 13. I could no longer compete with the other kids in my age group.) While conventional wisdom assumes that it’s best to focus on a single sport as soon as possible, and to compete in the most rigorous arena – this is the essential lesson of Tiger Woods – Beilock argues that that’s probably a mistake, both for psychological and physical reasons:
Sampling a variety of activities lowers the likelihood of burnout in one sport and increases children’s feelings of confidence because they get to see the results of their hard work in different settings. In addition, playing different sports lessens the occurrence of sports-related injuries that may end an athletic career. It’s common today for a 10-year-old baseball pitcher to need the tendon replacement surgeries for an injured elbow that were previously restricted to college and major league pitchers. This is the type of injury that sports medicine doctors argue is the direct result of arm overuse and sport specialization at too young an age.
Findings like the birthplace effect suggest that we need to rethink the idea that kids should receive year-round training in one sport early on. Although this early specialization certainly worked for Woods, for most kids, less sport-specific training seems to be the key to athletic success. Of course, this doesn’t mean limiting practice overall. Indeed, smaller cities offer more opportunities for unstructured play than larger cities, which results in more opportunities to hone general coordination, power, and athletic skills. These longer hours of play also allow kids to experience successes (and failures) in different settings, which likely toughens their attitudes in general.
This is a nice addendum to the 10,000 hour rule. While deliberate practice remains absolutely crucial, it’s important to remember that the most important skills we develop at an early age are not domain specific. (In other words, Tiger Woods is not using the same golf swing he relied on as a 5 year old.) Instead, the real importance of early childhood has to do with the development of general cognitive and non-cognitive traits, such as self-control, patience, grit, and the willingness to practice. This is also the lesson of a recent study on Australian football players:
The developmental histories of 32 players in the Australian Football League (AFL), independently classified as either expert or less skilled in their perceptual and decision-making skills, were collected through a structured interview process and their year-on-year involvement in structured and deliberate play activities retrospectively determined. Despite being drawn from the same elite level of competition, the expert decision-makers differed from the less skilled in having accrued, during their developing years, more hours of experience in structured activities other than Australian football.
What Beilock suggests is that the most important skills for success – the domain general traits that allow us to persist in the face of challenges and perform under pressure – are more likely to emerge when we pursue a variety of athletic activities at a young age, which tends to happen in smaller communities. (Big cities, in contrast, encourage a more single-minded focus, since any particular sport is more competitive.) We won’t be good at all of these sports, but that’s probably a good thing. The struggle will make us stronger.

*According to the researchers, the location of our birth matters much more than several other celebrated correlations, such as the “January effect” in which kids born in the first months of the year are more likely to excel in sports.

Stephen J. Dubner and Steven D. Levitt: A Star Is Made

New York Times Magazine, May 7, 2006

The Birth-Month Soccer Anomaly

If you were to examine the birth certificates of every soccer player in next month's World Cup tournament, you would most likely find a noteworthy quirk: elite soccer players are more likely to have been born in the earlier months of the year than in the later months. If you then examined the European national youth teams that feed the World Cup and professional ranks, you would find this quirk to be even more pronounced. On recent English teams, for instance, half of the elite teenage soccer players were born in January, February or March, with the other half spread out over the remaining 9 months. In Germany, 52 elite youth players were born in the first three months of the year, with just 4 players born in the last three.

What might account for this anomaly? Here are a few guesses: a) certain astrological signs confer superior soccer skills; b) winter-born babies tend to have higher oxygen capacity, which increases soccer stamina; c) soccer-mad parents are more likely to conceive children in springtime, at the annual peak of soccer mania; d) none of the above.

Anders Ericsson, a 58-year-old psychology professor at Florida State University, says he believes strongly in "none of the above." He is the ringleader of what might be called the Expert Performance Movement, a loose coalition of scholars trying to answer an important and seemingly primordial question: When someone is very good at a given thing, what is it that actually makes him good?

Ericsson, who grew up in Sweden, studied nuclear engineering until he realized he would have more opportunity to conduct his own research if he switched to psychology. His first experiment, nearly 30 years ago, involved memory: training a person to hear and then repeat a random series of numbers. "With the first subject, after about 20 hours of training, his digit span had risen from 7 to 20," Ericsson recalls. "He kept improving, and after about 200 hours of training he had risen to over 80 numbers."

This success, coupled with later research showing that memory itself is not genetically determined, led Ericsson to conclude that the act of memorizing is more of a cognitive exercise than an intuitive one. In other words, whatever innate differences two people may exhibit in their abilities to memorize, those differences are swamped by how well each person "encodes" the information. And the best way to learn how to encode information meaningfully, Ericsson determined, was a process known as deliberate practice.

Deliberate practice entails more than simply repeating a task — playing a C-minor scale 100 times, for instance, or hitting tennis serves until your shoulder pops out of its socket. Rather, it involves setting specific goals, obtaining immediate feedback and concentrating as much on technique as on outcome.

Ericsson and his colleagues have thus taken to studying expert performers in a wide range of pursuits, including soccer, golf, surgery, piano playing, Scrabble, writing, chess, software design, stock picking and darts. They gather all the data they can, not just performance statistics and biographical details but also the results of their own laboratory experiments with high achievers.

Their work, compiled in the "Cambridge Handbook of Expertise and Expert Performance," a 900-page academic book that will be published next month, makes a rather startling assertion: the trait we commonly call talent is highly overrated. Or, put another way, expert performers — whether in memory or surgery, ballet or computer programming — are nearly always made, not born. And yes, practice does make perfect. These may be the sort of clichés that parents are fond of whispering to their children. But these particular clichés just happen to be true.

Ericsson's research suggests a third cliché as well: when it comes to choosing a life path, you should do what you love — because if you don't love it, you are unlikely to work hard enough to get very good. Most people naturally don't like to do things they aren't "good" at. So they often give up, telling themselves they simply don't possess the talent for math or skiing or the violin. But what they really lack is the desire to be good and to undertake the deliberate practice that would make them better.

"I think the most general claim here," Ericsson says of his work, "is that a lot of people believe there are some inherent limits they were born with. But there is surprisingly little hard evidence that anyone could attain any kind of exceptional performance without spending a lot of time perfecting it." This is not to say that all people have equal potential. Michael Jordan, even if he hadn't spent countless hours in the gym, would still have been a better basketball player than most of us. But without those hours in the gym, he would never have become the player he was.

Ericsson's conclusions, if accurate, would seem to have broad applications. Students should be taught to follow their interests earlier in their schooling, the better to build up their skills and acquire meaningful feedback. Senior citizens should be encouraged to acquire new skills, especially those thought to require "talents" they previously believed they didn't possess.
And it would probably pay to rethink a great deal of medical training. Ericsson has noted that most doctors actually perform worse the longer they are out of medical school. Surgeons, however, are an exception. That's because they are constantly exposed to two key elements of deliberate practice: immediate feedback and specific goal-setting.

The same is not true for, say, a mammographer. When a doctor reads a mammogram, she doesn't know for certain if there is breast cancer or not. She will be able to know only weeks later, from a biopsy, or years later, when no cancer develops. Without meaningful feedback, a doctor's ability actually deteriorates over time. Ericsson suggests a new mode of training. "Imagine a situation where a doctor could diagnose mammograms from old cases and immediately get feedback of the correct diagnosis for each case," he says. "Working in such a learning environment, a doctor might see more different cancers in one day than in a couple of years of normal practice."

If nothing else, the insights of Ericsson and his Expert Performance compatriots can explain the riddle of why so many elite soccer players are born early in the year.

Since youth sports are organized by age bracket, teams inevitably have a cutoff birth date. In the European youth soccer leagues, the cutoff date is Dec. 31. So when a coach is assessing two players in the same age bracket, one who happened to have been born in January and the other in December, the player born in January is likely to be bigger, stronger, more mature. Guess which player the coach is more likely to pick? He may be mistaking maturity for ability, but he is making his selection nonetheless. And once chosen, those January-born players are the ones who, year after year, receive the training, the deliberate practice and the feedback — to say nothing of the accompanying self-esteem — that will turn them into elites.
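The selection dynamic described above can be sketched as a toy simulation. The maturity bonus and the "pick the top quarter" rule are made-up assumptions for illustration, not parameters from any of the research discussed here:

```python
import random

random.seed(42)

# Toy model of the Dec. 31 cutoff effect. Each child gets a random "true
# ability" score plus a small maturity bonus for being older within the
# age bracket (a January-born child is up to 11 months older than a
# December-born one). A coach then selects the top 25% by apparent skill.
N = 100_000
players = []
for _ in range(N):
    month = random.randint(1, 12)             # 1 = January (oldest in bracket)
    ability = random.gauss(0, 1)              # innate skill, month-independent
    maturity_bonus = (12 - month) / 12        # older kids simply look better
    players.append((month, ability + maturity_bonus))

players.sort(key=lambda p: p[1], reverse=True)
selected = players[: N // 4]
q1_share = sum(1 for m, _ in selected if m <= 3) / len(selected)
print(f"Born Jan-Mar among selected: {q1_share:.0%} (25% expected by chance)")
```

Even though innate ability is distributed identically across birth months, the January-to-March cohort ends up well over its 25% baseline among the "selected" players, because the coach is (as the article puts it) mistaking maturity for ability.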

This may be bad news if you are a rabid soccer mom or dad whose child was born in the wrong month. But keep practicing: a child conceived on this Sunday in early May would probably be born by next February, giving you a considerably better chance of watching the 2030 World Cup from the family section.

Stephen J. Dubner and Steven D. Levitt are the authors of "Freakonomics: A Rogue Economist Explores the Hidden Side of Everything." More information on the research behind this column is at www.freakonomics.com.


And from Wiki:

Dr. K. Anders Ericsson is a Swedish psychologist and Conradi Eminent Scholar and Professor of Psychology at Florida State University who is widely recognized as one of the world's leading theoretical and experimental researchers on expertise.

He is the co-editor of The Cambridge Handbook of Expertise and Expert Performance, a volume released in 2006 (Ericsson et al. 2006).

Dr. Ericsson's research with Herbert A. Simon on verbal reports of thinking is summarized in a book Protocol Analysis: Verbal Reports as Data, which was revised in 1993. With Bill Chase he developed the Theory of Skilled Memory based on detailed analyses of acquired exceptional memory performance (Chase, W. G., & Ericsson, K. A. (1982). Skill and working memory. In G. H. Bower (Ed.), The psychology of learning and motivation, (Vol. 16). New York: Academic Press). With Walter Kintsch he extended this theory into long-term memory to account also for the superior working memory of expert performers and memory experts (Ericsson & Kintsch 1995).

Currently he studies the cognitive structure of expert performance in domains such as music, chess and sports, and how expert performers acquire their superior performance by extended deliberate practice. He published an edited book with Jacqui Smith Toward a General Theory of Expertise in 1991 and edited a book The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports and Games that appeared in 1996, as well as a collection edited with Janet Starkes Expert Performance in Sports: Recent Advances in Research on Sport Expertise in 2003.

For Jonah Lehrer

I really enjoyed reading Jonah Lehrer's popular science books. I even ordered a collectible copy of the recently recalled and scandal-ridden book, Imagine. Lehrer enjoyed quoting artists in his first book, Proust Was a Neuroscientist, so I have a quote in support of him.

"Art is a lie that makes us realize truth.” -- artist Pablo Picasso


Science writer Jonah Lehrer was caught fabricating Bob Dylan quotations in his new book. 

by Jeff Bercovici, Forbes Staff



It’s painful to read about the self-plagiarism and fabrication that have become the downfall of ex-Wired science writer Jonah Lehrer.
After a brief stint at The New Yorker, Lehrer has resigned. His new book, Imagine: How Creativity Works, has been pulled by its publisher and refunds are being offered to anyone who has purchased it.

This is because Lehrer fabricated Bob Dylan quotes in the book and was caught by journalist Michael C. Moynihan – a serious Bob Dylan fan and, apparently, an even more serious reporter.

He lied about the fabrications when confronted by Moynihan before finally coming clean.

Lehrer is 31 – my age, actually, though I write mostly about video games which makes me feel, at times, much younger.

Of course, Lehrer was far more successful than I am. He had a long-running and highly read blog at Wired and two successful books. He’s been on a lucrative public speaking circuit for some time now. And he’d just taken a high-profile writing gig at The New Yorker - the Holy Grail of writing jobs.

All told, his career was following closely in the footsteps of bestseller Malcolm Gladwell, as Slate writer Josh Levin points out. Both were New Yorker writers, both “idea men” and both well-remunerated public speakers; the big difference between the two has now become appallingly apparent.

Earlier this summer Lehrer was caught copying and pasting large sections of his older work into new posts at The New Yorker, without disclosing that this was recycled material. He was given a slap on the wrist and the controversy passed.

Secretly recycling your own work is problematic, though not nearly so much as putting words into the mouths of American cultural icons. It’s also a red flag.

But both acts are strange and somewhat baffling, to say the least. I’ve quoted myself in the past – usually to point out the prescience of some earlier insight (ha!) or, as is so often the case, to note how badly wrong I got some prediction or other. But I’ve always noted when this self-quoting has occurred.

I’ve also probably repeated myself on various subjects (okay, I know I’ve been repetitive many times – it’s the nature of the blogging beast, I’m afraid) and likely even unwittingly paraphrased something I’ve written in the past.

And I’ve made my share of mistakes – whether getting some fact wrong or not taking the extra step required to go to the original source, and quoting secondary sources instead – and have updated and corrected those mistakes as quickly and thoroughly as possible. This can be a tough business, and it’s easy to err. It’s also embarrassing. The sting of having to correct yourself makes you more cautious in the future. As long as those mistakes aren’t intentional, and as long as writers own up to them and include corrections where due, this can be chalked up to human error. We learn from our mistakes, just like anyone. Or at least we ought to.

Still, I can’t for the life of me wrap my brain around the conscious decision that went into fabricating Bob Dylan quotes – not just in a blog post but in a book. It makes you question every single thing Lehrer has written. And I imagine the fact-checkers at publications he’s written for are poring over his work already, as are the legions of DIY grassroots fact-checkers who inhabit internet-land.

Bob Dylan is an odd choice.

For one thing, Dylan has said so many things over the course of his career that there’s really no shortage of actual quotations to draw from. For another, Dylan is still alive and has a devoted following. If you’re going to put words into someone’s mouth, Dylan must be one of the worst choices of all time.

This is the age of the internet, after all. As Howard Kurtz reports Moynihan saying: “We’re in a technological culture where it’s much easier to catch [...] Had one tried to expose Jonah Lehrer’s quotes in 1925, good luck.”

This is a tragic story. Lehrer’s career was enviable. I’ve never read his books, but I’ve enjoyed many of his blog posts in the past (admittedly, it’s been some time since I read his work, though). He was a fine writer, with a knack for creating narrative, and turning questions of science and the brain into interesting stories. What he did he appeared to do quite effortlessly.
Writing isn’t easy, especially if you make it your career. It’s a struggle. There’s a lot of competition. Blogging is hard work simply because you need to constantly come up with new ideas, new insights, better and more thoughtful analysis. You need to be unique and stand out somehow.

Many of us would love to have the sort of success Lehrer enjoyed, especially so young.
Why throw it all away? Not just the success but the reputation, the trust of readers and friends and, I imagine, family? It’s baffling. I can’t pretend to understand it.

Thursday, September 27, 2012

Mark Stokes' Zombie Boy Web-Comic


My pal, cartoonist Mark Stokes, has immortalized me in his Zombie Boy web-comic strip. Mark's character has been around for 20 years now! The poor guy shares a jumbo-cubicle with me and has suffered through many of my proto-Aspergian quirks and OCD-fueled analyses.

Mark writes, "Zombie Boy’s science teacher, Doctor Mangus Provocovitch (Doc Provoc) is based on a good friend of mine who really is a comic genius. His antics have inspired more than a few of my strips, so the least I could do is pay him back somehow. When I showed him the first strip that I had put him in, a slightly raised bushy eyebrow was the only reaction I got. But, I saw no signs of resistance either. He could have his own strip honestly, the man is a living, breathing sitcom!"

To see Mark's Zombie Boy comic strips, go to:

http://www.zombieboycomics.com/

Ramon Llull: The Tree of Logical Relations




Born in Majorca, Ramon Llull lived from 1232 to 1316 and was a prolific and multi-faceted author who expressed his thoughts in Latin, Catalan and Arabic. The heart of Llull’s contribution resided in what he called the Art: a general system for the interpretation of visible and invisible reality, which made use of semi-mechanical techniques, symbolic notation and combinatorial diagrams. The Art was the foundation of his apologetics and provided a single methodological basis for all fields of knowledge in the 13th century, from theology to the natural and human sciences.

His intellectual profile is complex and atypical: as a Christian philosopher, he developed Neoplatonic and Aristotelian themes in a creative manner; as a mystic, he has been considered to be the founding father of the great Iberian tradition; as a novelist, he was one of the first to propound contemporary themes; as an apologist for Christianity, he promoted missionary schools and conceived of a new method for bringing about conversions. Llull was also one of the first writers to use the vernacular, in his case Catalan, to discuss theological, philosophical and scientific subjects normally reserved for the language of learning, that is to say, Latin.

Opinion: Big Pharma Has Defrauded $30B from States, Federal Government: Report




By Morgan Korn | Daily Ticker, September 27, 2012

States and the federal government are taking a tougher stance against the pharmaceutical industry and have collected a record amount of money from drug companies this year, according to a new report issued by the consumer lobbying group Public Citizen.

Big Pharma has swindled $30 billion from states and the federal government over nearly two decades, says Public Citizen's Dr. Sammy Almashat. States are fighting the pharmaceutical industry over a range of allegations, including overcharging taxpayer programs like Medicare and Medicaid and illegally marketing their drugs to patients. Drug companies sometimes inflate the cost of their drugs to as much as 60 or 70 times the real value of the drug, according to Almashat.

Many states are facing severe budgetary constraints and have been increasing their enforcement efforts against drug companies to reap additional revenue. More than $6.6 billion has been recovered through mid-July by both the federal government and states.

"Since 2009 state governments have finalized more than twice as many settlements, for more than six times as much money, as they had from the previous 18 years combined," the report said. GlaxoSmithKline (GSK), Johnson & Johnson (JNJ) and Abbott Labs (ABT) accounted for two-thirds of the financial penalties paid to state governments and Washington, according to the report.

In July GlaxoSmithKline, the UK's largest drug manufacturer, pleaded guilty to criminal charges and agreed to a $3 billion settlement with states and the U.S. government over accusations that it improperly promoted its drugs for unapproved uses and failed to report safety data. Almashat says 50,000 to 100,000 patients died from using Glaxo's blockbuster diabetes drug Avandia because the company did not report studies that showed an increased risk of heart attack and other fatal side effects.

"In these cases it's not only defrauding taxpayer programs at billions of dollars it's also putting patients' lives in danger," Almashat says in an interview with The Daily Ticker.
GlaxoSmithKline CEO Sir Andrew Witty said of the settlement:

"Whilst these originate in a different era for the company, they cannot and will not be ignored. On behalf of GSK, I want to express our regret and reiterate that we have learnt from the mistakes that were made. We are deeply committed to doing everything we can to live up to and exceed the expectations of those we work with and serve."

Glaxo's settlement was the largest in U.S. history, eclipsing the $2.3 billion fine Pfizer paid in 2009 for over-marketing its drugs including the painkiller Bextra.

"We are seeing systematic fraud," Almashat says. "Almost every drug company has been involved in at least one settlement with the federal or state governments."

GlaxoSmithKline has been involved in as many as 20 settlements and could be regarded as a "repeat offender," Almashat notes. The federal government and states may be announcing a record number of settlements with drug companies, but the financial penalties are not stopping the unlawful conduct.

"It has been going on for years and ultimately only results in slaps on the wrist," he says. "Every year there is a new billion dollar settlement. Payouts are only a fraction of the profits that are generated by the fraudulent activity. Crime does pay in these cases."

To end this continual cycle of wrongdoing, Almashat says the government needs to adopt a new tactic with drug makers: criminally prosecuting the executives involved in the fraud. Financial penalties paid by drug companies also need to align with the profits companies reap from improper drug marketing and pricing fraud.

In the News: Cellphones Are Eating the Family Budget



....And to think I just bought an iPad. At least I'm just using free wi-fi for now.

by Anton Troianovski | The Wall Street Journal – Wed, Sep 26, 2012

Heidi Steffen and her husband used to treat themselves most weeks to steak at Sodak Shores, a restaurant overlooking a lake near their hometown of Milbank, S.D. Then they each got an iPhone, and the rib-eyes started making fewer appearances.

"Every weekend, we'd do something," said Ms. Steffen, a registered nurse whose husband works at a tire shop. "Now maybe once every month or two, we get out."

More than half of all U.S. cellphone owners carry a device like the iPhone, a shift that has unsettled household budgets across the country. Government data show people have spent more on phone bills over the past four years, even as they have dialed back on dining out, clothes and entertainment—cutbacks that have been keenly felt in the restaurant, apparel and film industries.
The tug of war is only going to get more intense. Wireless carriers are betting they can pull bills even higher by offering faster speeds on expensive new networks and new usage-based data plans. The effort will test the limits of consumer spending as the draw of new technology competes with cellphone owners' more rudimentary needs and desires.

So far, telecom is winning. Labor Department data released Tuesday show spending on phone services rose more than 4% last year, the fastest rate since 2005. During and after the recession, consumers cut back broadly on their spending.
 
But as more people paid up for $200 smartphones and bills that run around $100 a month, the average household's annual spending on telephone services rose to $1,226 in 2011 from $1,110 in 2007, when Apple Inc.'s iPhone first appeared.

Families with more than one smartphone are already paying much more than the average—sometimes more than $4,000 a year—easily eclipsing what they pay for cable TV and home Internet.

The trend has been a boon for companies like Verizon Wireless and AT&T Inc. (T). U.S. wireless carriers brought in $22 billion in revenue selling services such as mobile email and Web browsing in 2007, according to analysts at UBS AG. By 2011, data revenue had jumped to $59 billion. By 2017, UBS expects carriers to be pulling in an additional $50 billion a year.

But the question for the industry is how much bigger bills can get before the cuts in other parts of the family budget grow too painful.

Melinda Tuers, an accounting clerk at a high school in Redlands, Calif., said she already pays close to $300 a month for her family's four smartphones. She and her husband have cut back on dining out, special events and concerts to make room for the bigger phone bill.

Her household may soon have an even bigger hole to fill. Two of the Tuers's smartphones are on unlimited data plans, meaning she pays the same price no matter how much she surfs the Web. She has taken advantage of that freedom to watch TV shows such as "Covert Affairs" and "Grey's Anatomy" on her phone almost every day.

Ms. Tuers now wants to replace those three-year-old smartphones. But her carrier, Verizon, announced this summer that customers would have to give up unlimited data plans if they want to upgrade their phones at the subsidized price.

Ms. Tuers figures that she and her husband would need to scrape together more than $1,000 to pay full price for two new high-end phones or settle for one of Verizon's tiered-data plans, which she fears would cost a lot more given her video habit.
Streaming 30 minutes of video per day over a 4G connection and doing nothing else on her phone would cost Ms. Tuers roughly $120 a month on one of Verizon's new data plans, according to the carrier's website.

Carriers fully expect people to use more data and pay more for it. "Speed entices more usage," Verizon Chief Financial Officer Fran Shammo said at an investor conference last week, according to a transcript. "The more data they consume, the more they will have to buy."
But some question where the money for that data will come from. Americans spent $116 more a year on telephone services in 2011 than they did in 2007, according to the Labor Department, even as total household expenditures increased by just $67.

Meanwhile, spending on food away from home fell by $48, apparel spending declined by $141, and entertainment spending dropped by $126. The figures aren't adjusted for inflation.
The increase in telephone-services spending masks an even higher rise in cellphone bills, because people have been paying less for landline service.

Much of the revenue growth that industry executives and investors are hoping for is likely to come from higher-income households that do have the money to spend more on wireless data. But the wireless industry also generates a lot of revenue from lower-income users.
Almost nine in 10 of all U.S. adults have a cellphone, according to a Pew Research Center survey. Middle-income consumers increased their telephone spending in 2011 by $59, almost as much as the $64 in additional telephone spending by the 20% of consumers with the highest incomes, according to the Labor Department data.

As wireless service gets more expensive, the trade-offs become more painful. That could threaten to further crimp consumer spending elsewhere—or slow the upward swing in consumer spending on wireless.

That trend is evident in the home of 40-year-old Scott Boedie, a neighborhood service representative for a cable company.

Mr. Boedie said he and his wife now pay $200 a month for cellphone service, up by about $50 from early last year, even as they have managed to cut spending on groceries by shopping at discount chain Aldi and on "fun stuff" by going out to dinner and movies less often.
Looking over the family budget on Sunday night, Mr. Boedie said, his wife marveled at how much of it was going to the phone company.

"It stinks," Mr. Boedie said. "I guess it's the cost of modern-day America now."

Thanks for the Memory Graphic

This chart gives a detailed overview of the term "memory" as used in various branches of academia.
I'm enjoying reading Joshua Foer's Moonwalking with Einstein: The Art and Science of Remembering Everything. It's a winner!

Charles Burchfield Watercolor

Wednesday, September 26, 2012

Does This Card Ever Get Played Any More?



Jon Corzine: Criminal Or Just Plain Old-Fashioned Stupid?
by Richard Finger, Forbes contributor, August 27, 2012

It is a story so common today that our society is practically anesthetized to it: securities laws repeatedly failing to protect trusting investors from unscrupulous money managers.

MF Global offers a slightly different twist. Unsuspecting trading clients were bilked to pay for the highly leveraged “cowboy” trades of their very own clearing house, which turned out to be nothing but a hedge fund in disguise.

At its bankruptcy filing date MF Global had a $40 billion balance sheet and a paltry $1.4 billion in equity. Its annual revenues were only $2.2 billion. When you include up to $16 billion in off-balance-sheet liabilities, you get to a leverage ratio of about 40 to 1 -- not a lot of room for error.

The Trade Structure

The trades that “brought down the airplane” were quite prosaic in the arcane world of hedge fund trades. It was a simple, highly leveraged “carry trade”: Corzine bought $6.3 billion of the sovereign debt of the Southern European “PIIGS” countries and financed it through repurchase agreements, or in trade jargon, “repos”.

The purchased bonds had a much higher coupon rate than the loan rate MF Global would pay to the “repo” lender; hence MF Global would be making a guaranteed spread, or “carry”.

When (if) the bonds ultimately matured and repaid 100% of their face amount, then MF Global contractually would use the bond proceeds to buy back or “repurchase” the bonds from the lender, thus repaying the “repo” loan. 

For example, JP Morgan (JPM) or Bank of America (BAC) took in Spanish bonds as collateral that MF Global had just purchased and made a loan that matured concurrently with the bond maturity date. If the bonds were $1 billion maybe JPM or BAC would loan on the order of $980 million and MF Global would come up with $20 million.

The $20 million, or in this case 2% of the purchase price, is the “haircut” that JPM wanted from the purchaser. The “haircut,” or margin required, is a negotiated amount between borrower and lender. For a highly creditworthy borrower the “haircut” may even be zero; that is, the lender would lend the entire purchase price.

The interest rate environment of 2011, when these trades occurred, was not significantly different from today's rate landscape. JPM probably lent on a floating or fixed LIBOR-based formula, something like LIBOR + 40 basis points, or just “Libor +40” as it would be quoted. As short-term LIBOR was less than 25 basis points (or .25 of 1%), MF Global's all-in borrowing rate was most likely something less than 1% (.25 + .40 = .65 of 1%).

Say the rate on the Spanish bond was 5% and Corzine was able to purchase at a 10% discount to par (100), or 90% of face value. Then the “carry” would be the 5.55% current yield (remember the discount) less the .65% loan rate, or 4.90%. At maturity, if the bond paid in full as planned, then with extreme leverage the return potential quickly gets into triple digits.
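The carry arithmetic above can be sketched in a few lines of Python. The $1 billion face value, 5% coupon, 10% discount, 2% haircut, and LIBOR + 40 funding are the article's illustrative figures, not MF Global's actual books; the computed yields differ from the article's 5.55%/4.90% only by rounding.

```python
# Back-of-the-envelope sketch of the repo "carry trade" described above.
face_value = 1_000_000_000        # $1 billion face of Spanish bonds
discount_to_par = 0.10            # bought at 90% of face value
coupon_rate = 0.05                # 5% coupon, paid on face value
libor = 0.0025                    # short-term LIBOR, ~25 bps
repo_spread = 0.0040              # lender's spread: LIBOR + 40 bps
haircut = 0.02                    # 2% margin posted by the buyer

purchase_price = face_value * (1 - discount_to_par)   # $900 million
equity = purchase_price * haircut                     # buyer's own cash
borrowed = purchase_price - equity                    # the repo loan

borrow_rate = libor + repo_spread                     # 0.65%
current_yield = coupon_rate / (1 - discount_to_par)   # ~5.56% on cash invested
carry_spread = current_yield - borrow_rate            # ~4.91%

# Annual net income: coupon on the whole position minus repo interest,
# earned against only the small equity slice actually posted.
annual_income = coupon_rate * face_value - borrow_rate * borrowed
return_on_equity = annual_income / equity

print(f"borrow rate:      {borrow_rate:.2%}")
print(f"current yield:    {current_yield:.2%}")
print(f"carry spread:     {carry_spread:.2%}")
print(f"return on equity: {return_on_equity:.0%}")
```

With a 2% haircut, a roughly 4.9% carry spread on the full position works out to an annual return of well over 200% on the equity actually posted -- the “triple digits” the article refers to, and the same leverage that turns small price moves into ruinous margin calls.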

The game plan that Corzine had designed was conceptually sound. While he was admittedly purchasing the “junk” credits of Spain, Portugal, Italy, and Ireland, the reality was that, at the time, there had never been a Eurozone sovereign default, and the zeitgeist was to preserve the “union” at all costs.

Further, only short-duration bonds, presumably with maturities of two years or less, were bought, which greatly reduced the risk of the trade. As bonds matured, presumably Spain and the other countries would simply “roll over” the debt -- sell new bonds to retire the maturing ones.

Worst case, the assumption was that the ECB would step in and purchase bonds through the EFSF, the ESM, or the direct bond-purchase program already in place.

All was copacetic until the plane hit some severe, unanticipated thunderstorms, with lots of lightning. As part of the “repo” agreement, the amount of margin or “haircut” was subject to increase as the price of the collateral (the bonds) fluctuated and was “marked to market” daily, or if the credit of the underlying borrower (MF Global) deteriorated.

Headlines swirled as Greece caused turmoil for the entire Southern European bond market.

On October 24, 2011, Moody’s, citing the European exposure, downgraded MF Global’s debt to one level above “junk”. One day later MF Global reported a $191 million quarterly loss. The following day, the 26th, Moody’s hammered Corzine’s firm with a further two-notch downgrade, placing the firm at Ba2, solidly in “junk” territory.

Net-net, MF Global was now repeatedly being called for more and more margin as both its own credit and the prices of the European bonds deteriorated rapidly. It was a liquidity crisis. With credit lines completely used up, where was there left to turn?

Crossing The Line

On October 26th and over the ensuing five days before the bankruptcy, who did what, when, why, and on whose instructions is the main crux of the issue.

On the 26th, $615 million of segregated customer funds were approved for transfer by assistant treasurer Edith O’Brien from accounts at JP Morgan. This transfer was supposed to be a loan, to be repaid by the end of the same business day, which would have made it legal.

Needless to say, the funds were not returned. On the 28th, per e-mail trails, Corzine ordered a $175 million transfer to cover an overdraft at JP Morgan. Ms. O’Brien tapped another $200 million of customer funds to meet this obligation. We know there were many more transactions wiring out customer funds that were supposed to be statutorily segregated at the broker-dealer level.

Hundreds of millions of dollars of customer money were funneled to an MF Global UK subsidiary. Ultimately, MF Global Inc. bankruptcy trustee James Giddens found as much as $1.6 billion in misappropriated customer funds.

Nobody Knows Anything

“I simply do not know where the money is,” Corzine droned at a Congressional hearing. OK, if you don’t, then who does? There is some evidence now that this money-filching scheme began as early as August 2011, more than two months prior to the October 31 bankruptcy filing. Mid-level employee Edith O’Brien pleaded her Fifth Amendment rights when she was called to testify about her role in the scam.

In what were surely dozens of illegal transactions, there is miraculously a dearth of memory cells professing any knowledge of the purloined funds. Between the CFTC, the CME, the SEC, and the Justice Department investigations, I find it astonishing that none of these agencies could piece together enough forensic accounting evidence to levy even a few indictments against Corzine and his henchmen.

Lurking conspiracy theorists note that MF Global was a client of Covington and Burling, the former law firm of Eric Holder and his deputy AG Lanny Breuer, and that Corzine is a huge Obama fundraiser; therefore, they say, any probes will be superficial and inconclusive.

Well, that has certainly been the headlines in recent days -- no criminal charges are expected. I guess you’re not paranoid if they’re out to get you.

From all accounts Corzine was a very hands-on manager. He relentlessly walked the trading floor and was chief architect for all the sovereign debt positions.

He is an expert on the asset class; he used to trade sovereigns back in his Goldman days. Consequently, because of the extreme leverage employed even small moves in interest rates could mean big margin calls. Any person with any attention to detail would be constantly monitoring all trades and calculating exactly what margin may be due and where the money to fund them was going to come from.

Corzine did not get to be head of Goldman Sachs, a senator, and governor of New Jersey without being a very bright, detail-oriented person.

Anytime a big margin call came in, who do you think was notified first? Ultimately whose decision was it as to where the money to meet the call was going to come from?

It’s perplexing why regulators can’t trace over a billion dollars in wire transfers. Judicial Watch got so tired of asking that it filed suit under FOIA, requesting all documents relating to the missing customer funds.

Trustee Giddens is so angry about desultory regulators that he vows to seek state venues for criminal action. So unless Mr. Corzine received some undisclosed blow to the head resulting in a rapid cognitive decline, the odds are, to me, thin indeed that he remained ignorant of MF Global’s machinations.

The sad thing is that Corzine just ran out of time. Most of his European bets have already been refinanced and paid off in full. As they say in accounting, it was just a “timing difference”.
Except in this case someone misappropriated $1.6 billion of money that didn’t belong to them.

Will Mr. Corzine realize his next dream of starting his own hedge fund, or will he spend the next decade or two doing what I think he should be doing: time, and preferably not at some “club fed”?

In the News: Richest Americans' net worth jumps to $1.7 Trillion - Forbes



Art by the great Warren Kremer.



By Dan Burns, Reuters

New York (Reuters) - The net worth of the richest Americans grew by 13 percent in the past year to $1.7 trillion, Forbes magazine said on Wednesday, and a familiar cast populated the top of the annual list, including Bill Gates, Warren Buffett, Larry Ellison and the Koch brothers.

The average net worth of the 400 wealthiest Americans rose to a record $4.2 billion, up more than 10 percent from a year ago, while the lowest net worth came in at $1.1 billion versus $1.05 billion last year, the magazine said. Seven in ten of the list's members made their fortunes from scratch.

It was a bad year, however, for social media moguls, whose net worth fell by a combined $11 billion. On the heels of Facebook Inc's rocky IPO in May, the No. 1 social network's chief executive, Mark Zuckerberg, was the year's biggest dollar loser: his net worth fell by nearly half to $9.4 billion from $17.5 billion. He also slid to the No. 36 slot from No. 14 a year ago, Forbes said.

Facebook shares have fallen 40 percent from their IPO price of $38 a share in May.
Dismal performances by other social media stocks dropped some executives from the list altogether, including Groupon Inc Chairman Eric Lefkofsky, No. 293 on last year's list, and Zynga Inc Chairman and CEO Mark Pincus, No. 212 on the 2011 list.

"The gap between the very rich and merely rich increased and helped drive up the average net worth of The Forbes 400 members to an all-time record $4.2 billion," said Forbes Senior Wealth Editor Luisa Kroll.

Collectively, this group's net worth is the equivalent of one-eighth of the entire U.S. economy, which stood at $13.56 trillion in real terms according to the latest government data.

But the 13 percent growth in the wealth of the richest Americans far outpaced that of the economy overall, helping widen the chasm between rich and poor.

Forbes attributed the growth in net worth in part to the performance of the stock market and a recovering real estate market.

But while their wealth grew faster than the economy as a whole, which expanded at an anemic 1.7 percent annual rate in the second quarter of 2012, the super rich generally failed to keep pace with the stock market. The benchmark Standard & Poor's 500 index rose nearly 20 percent over the 12 months ended August 24, the last date of market performance measured for this year's list.

Familiar Names at the Top

Gates, the chairman of Microsoft Corp., topped the list for the 19th year in a row, with $66 billion, up $7 billion from a year earlier.

Buffett, chairman and chief executive of insurance conglomerate Berkshire Hathaway Inc, stood second with $46 billion, followed by Ellison, head of software maker Oracle Corp, with $41 billion. Brothers Charles and David Koch, who run the energy and chemicals conglomerate Koch Industries Inc and who are active in conservative politics, were tied for fourth with $31 billion, Forbes said.

The ranks of the top five were unchanged from a year earlier.

Two notable names dropped from the top 10, however. Casino magnate Sheldon Adelson, also active in conservative political causes, fell to the No. 12 spot from No. 8 last year, and financier and liberal philanthropist George Soros dropped five spots to No. 12.

Michael Bloomberg, the billionaire founder of Bloomberg LP who is now in his third term as New York City mayor, rose to the No. 10 slot.

Newcomers to the elite club of 400 include Laurene Powell Jobs, the widow of Apple Inc (AAPL) cofounder Steve Jobs who is now the wealthiest woman in Silicon Valley, and Jack Dorsey, the co-founder of Twitter.

Just 45 women made the cut, up from 42 last year, Forbes said.

California has the largest share of Forbes 400 members, with 87, followed by New York, Texas, Florida and Illinois. Among cities, New York City topped the list, with 53. San Francisco, Dallas, Los Angeles and Houston rounded out the top-five cities.

One quarter of the Forbes 400 come from the finance and investment sector while another quarter come from either the technology, media or energy industries.

The complete list can be found at: www.forbes.com/forbes400 .

(Additional reporting by Edith Honan in New York; editing by Matthew Lewis)

More Omega-3 Fish Oil Syrup for my Waffles, Please





Fish Oil Supplements: Do They Do The Body Any Good?, Scientific American 

by Melinda Wenner Moyer

If you've been following the media trail on fish oil lately, you've probably been tempted to forgo the smelly capsules. A systematic review of 20 studies published last week in JAMA, The Journal of the American Medical Association, reported that neither eating extra helpings of fish nor taking fish oil supplements reduces the risk of stroke, heart attack or death. In June a review of studies published on behalf of the Cochrane Collaboration, an independent, not-for-profit organization that promotes evidence-based decision-making, concluded that fish oil pills fail to prevent or treat cognitive decline. And a 2011 meta-analysis by Yale University researchers debunked the idea that omega-3s alleviate depression. These proclamations run counter to what we have been told about fish and fish oil for decades. So why is the consensus changing? Is it time for us to toss out our pills for good?

Not necessarily. Although it's true that early research on fish oil seemed far more promising—one 1999 trial, for instance, reported that people who took omega-3 pills were 10 percent less likely to have a heart attack, stroke or die from cardiac disease than people who did not—some researchers think that recent negative findings reveal more about us than they do about fish oil. Omega-3 pills may be beneficial for certain people but not for others, they say, and existing studies may not account for individual differences.

There's no question that polyunsaturated omega-3 fatty acids—the technical name for the good fats found in fish and fish oil—are important parts of a healthy diet. Our bodies can't make them, yet we need them to survive, as they form part of our cell membranes. Although the mechanism by which they might prevent heart disease, cognitive decline and depression isn't well understood, research suggests that they reduce blood pressure and inflammation and that they increase brain blood flow and give neurons structural strength.

And no one questions the World Health Organization's recommendation that pregnant and nursing women should consume at least 300 milligrams of omega-3s daily to boost fetal brain development. "That [benefit] has been clearly demonstrated in trials," says Dariush Mozaffarian, a cardiologist and epidemiologist at Harvard University who studies fish oil.

But for other adults, the health benefits of supplementing have become much harder to gauge. That's in part because many of us get lots of these good fats from our diet anyway: According to the United Nations Food and Agriculture Organization, per capita fish consumption has doubled (pdf) since 1961, and "more consumption doesn't really add much bang for your buck," Mozaffarian says. In other words, adding more omega-3s to an already omega-3–rich diet does not do much good, a fact that could help explain why recent studies have been more equivocal than studies from several decades ago, when fish was less popular. "We have no evidence from populations whose dietary intake of omega-3 fatty acids may be low and who may therefore benefit from supplementation," says Alan Dangour, head of the nutrition group at the London School of Hygiene and Tropical Medicine and co-author of the recent Cochrane review. In addition, preliminary research suggests that certain ethnic groups—such as Japanese and Italians—may benefit more from omega-3 supplements than others, perhaps in part because of how well their bodies absorb the fats.

Another potential problem is that most of the research on fish oil and heart health—including all of the trials included in the recent JAMA analysis—have involved subjects who already have heart disease or established risk factors. Whereas this isn't necessarily a problem in itself, it means that very little research has addressed whether fish oil supplements benefit healthy people. "The jury is still out on whether omega-3 supplements can prevent a first cardiovascular event in people at usual risk," says JoAnn Manson, an epidemiologist at Brigham and Women's Hospital in Boston who is conducting a trial to answer this question, which she estimates will be finished in 2016.

Moreover, because so many trials have involved subjects with heart problems, subjects in recent years have been "taking multiple medications, such as aspirin and statins, which can obscure the effects of supplements," Manson says. (Half of the studies included in the JAMA analysis were conducted after statins became commonplace.) This fact could also help explain the outcome discrepancies between recent trials and older ones carried out during the pre-statin era. Indeed, a February 2012 analysis of a large European clinical trial reported that fish oil supplements do not prevent second heart attacks among people taking statins, but that they cut the risk by half among people who don't take the medications. (Because there were so few non-statin users enrolled in the trial, this finding did not quite reach statistical significance.)

People who enroll in and complete omega-3 trials may differ from typical Americans in important ways, too. "Members of the public who volunteer to join randomized controlled studies are frequently healthier and more active than the average for the population," Dangour says, which could affect outcomes in unknown ways. In addition, he adds, people who drop out of trials are often the sickest, and they might be the ones who would most benefit from supplementation.

There's also more than one type of fish oil. Typically, in omega-3 intervention studies, subjects take pills containing a near-equal mixture of two fats, docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA)—but some research suggests that in certain situations and for certain outcomes, one may be better than the other. For instance, a 2011 meta-analysis concluded that fish oil capsules don't help treat depression, but a group of British and Norwegian researchers challenged these findings, citing evidence that pills containing at least 60 percent EPA do seem to provide mood benefits. Another controversial question is whether the omega-3 dose is important in and of itself or whether the ratio of omega-3 to omega-6 fats one consumes is more important. (Although omega-6 fats are important for survival, Americans tend to consume far more of these fats than they need.)

Finally, whereas the recent JAMA analysis concluded that fish oil has no effect on cardiovascular outcomes, the researchers did find that omega-3s reduced the risk of cardiac death by 10 percent, an effect that was statistically significant (having a "p value" of 0.01).

The researchers did not report the finding in their conclusions because they subsequently modified their statistical calculations to account for the fact that they had used the same data set to ask a number of different "exploratory" questions: In this case, does fish oil prevent heart attacks? Strokes? What about sudden cardiac death? The team wanted to tighten their definition of statistical significance to account for the fact that the more questions one asks, the more likely one is to get a positive result by chance. Still, Mozaffarian says, "if you combine all the data and look only at cardiac death, there is a statistically significant benefit. A 10 percent reduction in the number-one cause of death in both men and women in the U.S. is a big deal."

So, should you take fish oil pills or not? Most researchers agree that oily fish such as salmon and mackerel are a far better way to get your fill of good fats. Many of the early, promising studies on omega-3s involved fish rather than fish oil pills (in one, men who were advised to eat fish were 29 percent less likely to die in the two years following a heart attack than men who were not), and a 2010 study by Columbia University scientists reported that following a Mediterranean diet—which, among other things, is rich in fatty fish—reduces the risk of Alzheimer's disease by 34 percent. "Not only may people be benefiting from the omega-3 fatty acids in fish, but the fish may be displacing such foods as hamburgers and quiche from the diet, both high in saturated fat," explains Alice Lichtenstein, director of the Cardiovascular Nutrition Laboratory at Tufts University.

If you don't eat fish, then the question of what to do becomes more difficult. Researchers argue that, ultimately, we need more randomized, controlled clinical trials on omega-3 supplements—particularly in healthy people who don't take medications—but these are becoming rare, in part because the supplements will never be big moneymakers. "Generic fish oil is available, so drug companies don't have any strong motivation to fund trials," Mozaffarian says.

But despite the murky science, if you don't get omega-3s from other sources—or if you have heart disease risk factors but aren't taking medications—omega-3 pills may still be a good idea. "There's no harm in taking a fish oil supplement, and there could be benefits," Mozaffarian says. "The bottom line is that getting some [omega-3] is better than getting none."

Tuesday, September 25, 2012

"Cool of the Evening" Tenor Sax Giants Al Cohn/Zoot Sims



http://www.youtube.com/watch?v=ZR4ZfCh601k&feature=related

Icons From the Age of Anxiety: Max Beckmann "The King"

Max Beckmann, The King, 1934-37, oil on canvas, 53.25 x 39.25 in., St. Louis Art Museum


Stephan Lackner writes:

"The crowned king sits in his palace, in Oriental splendor, proudly erect, surrounded by two women. The young, beautiful lady on the left seems utterly trustful and loving as she puts her right arm over his thigh and fondles his left arm. The older, dark woman whispers conspiratorial advice into his ear; her cowl gives her an air of intrigue and secrecy, and her left hand is pushed forward in a gesture of warning or rejection, apparently contradicting the naive, friendly creature on the other side of the monarch. The young blonde has his love, no doubt, but the older woman "has the king's ear." The king weighs the two influences silently. There is a strange, portentous atmosphere in the palace chamber. When will he arise and proclaim his decision?

"The king's features are akin to Beckmann's own, although no formal self-portrait may have been intended. The collar with its triangular flaps has the shape the artist usually assigned to clown and harlequin costumes, so we may suspect that the ominous scene is really just part of a play.

"Beckmann worked on The King for a long time. He must have considered it already finished in 1934, for he had it photographed in Berlin, three years before his emigration. He submitted it to the Carnegie International, where it was exhibited in the European section, in San Francisco, in 1934-35, and illustrated in the Carnegie catalogue. The painting did not win a prize. Disappointed, Beckmann changed the first version considerably and finally signed it in Amsterdam in 1937. This history of the painting is important because some commentators have seen allusions to the "despot" of the day and claim that this was the first painting that Beckmann created in exile. But the resemblance to Beckmann himself precludes any reference to the actual tyrant. No-this is the inner drama of a proud, powerful, benign individual.

"In the first version, the base of the column at the right edge of the painting resembled the bases of the columns of Persepolis. Beckmann at the time was immersed in studies of Tel Halaf, and Assyrian and Babylonian lore. This localization of the scene gave way, in the final version, to a more general, luxurious background. Also, the profile of the warning, plotting confidante is more expressive, and the texture of the final canvas is more varied and decisive. On the whole, we can thank the Carnegie judges of 1934 for awarding the prize to Karl Hofer and not to Beckmann. Their action caused Beckmann to dig even deeper into his subconscious, to explore his own myth."

David K. Randall: Rethinking Sleep


Vincent van Gogh, The Siesta (after Millet), December 1889-January 1890, oil on canvas, H. 73; W. 91 cm, Musée d'Orsay.

Sometime in the dark stretch of the night it happens. Perhaps it’s the chime of an incoming text message. Or your iPhone screen lights up to alert you to a new e-mail. Or you find yourself staring at the ceiling, replaying the day in your head. Next thing you know, you’re out of bed and engaged with the world, once again ignoring the often quoted fact that eight straight hours of sleep is essential.

Sound familiar? You’re not alone. Thanks in part to technology and its constant pinging and chiming, roughly 41 million people in the United States — nearly a third of all working adults — get six hours or fewer of sleep a night, according to a recent report from the Centers for Disease Control and Prevention. And sleep deprivation is an affliction that crosses economic lines. About 42 percent of workers in the mining industry are sleep-deprived, while about 27 percent of financial or insurance industry workers share the same complaint.

Typically, mention of our ever increasing sleeplessness is followed by calls for earlier bedtimes and a longer night’s sleep. But this directive may be part of the problem. Rather than helping us to get more rest, the tyranny of the eight-hour block reinforces a narrow conception of sleep and how we should approach it. Some of the time we spend tossing and turning may even result from misconceptions about sleep and our bodily needs: in fact neither our bodies nor our brains are built for the roughly one-third of our lives that we spend in bed.

The idea that we should sleep in eight-hour chunks is relatively recent. The world’s population sleeps in various and surprising ways. Millions of Chinese workers continue to put their heads on their desks for a nap of an hour or so after lunch, for example, and daytime napping is common from India to Spain.

One of the first signs that the emphasis on a straight eight-hour sleep had outlived its usefulness arose in the early 1990s, thanks to a history professor at Virginia Tech named A. Roger Ekirch, who spent hours investigating the history of the night and began to notice strange references to sleep. A character in the "Canterbury Tales," for instance, decides to go back to bed after her "firste sleep." A doctor in England wrote that the time between the "first sleep" and the "second sleep" was the best time for study and reflection. And one 16th-century French physician concluded that laborers were able to conceive more children because they waited until after their "first sleep" to make love.

Professor Ekirch soon learned that he wasn't the only one who was on to the historical existence of alternate sleep cycles. In a fluke of history, Thomas A. Wehr, a psychiatrist then working at the National Institute of Mental Health in Bethesda, Md., was conducting an experiment in which subjects were deprived of artificial light. Without the illumination and distraction from light bulbs, televisions or computers, the subjects slept through the night, at least at first. But, after a while, Dr. Wehr noticed that subjects began to wake up a little after midnight, lie awake for a couple of hours, and then drift back to sleep again, in the same pattern of segmented sleep that Professor Ekirch saw referenced in historical records and early works of literature.

It seemed that, given a chance to be free of modern life, the body would naturally settle into a split sleep schedule. Subjects grew to like experiencing nighttime in a new way. Once they broke their conception of what form sleep should come in, they looked forward to the time in the middle of the night as a chance for deep thinking of all kinds, whether in the form of self-reflection, getting a jump on the next day or amorous activity. Most of us, however, do not treat middle-of-the-night awakenings as a sign of a normal, functioning brain.

Doctors who peddle sleep aid products and call for more sleep may unintentionally reinforce the idea that there is something wrong or off-kilter about interrupted sleep cycles. Sleep anxiety is a common result: we know we should be getting a good night’s rest but imagine we are doing something wrong if we awaken in the middle of the night. Related worries turn many of us into insomniacs and incite many to reach for sleeping pills or sleep aids, which reinforces a cycle that the Harvard psychologist Daniel M. Wegner has called “the ironic processes of mental control.”

As we lie in our beds thinking about the sleep we’re not getting, we diminish the chances of enjoying a peaceful night’s rest.

This, despite the fact that a number of recent studies suggest that any deep sleep — whether in an eight-hour block or a 30-minute nap — primes our brains to function at a higher level, letting us come up with better ideas, find solutions to puzzles more quickly, identify patterns faster and recall information more accurately. In a NASA-financed study, for example, a team of researchers led by David F. Dinges, a professor at the University of Pennsylvania, found that letting subjects nap for as little as 24 minutes improved their cognitive performance.

In another study conducted by Simon Durrant, a professor at the University of Lincoln, in England, the amount of time a subject spent in deep sleep during a nap predicted his or her later performance at recalling a short burst of melodic tones. And researchers at the City University of New York found that short naps helped subjects identify more literal and figurative connections between objects than those who simply stayed awake.

Robert Stickgold, a professor of psychiatry at Harvard Medical School, proposes that sleep — including short naps that include deep sleep — offers our brains the chance to decide what new information to keep and what to toss. That could be one reason our dreams are laden with strange plots and characters, a result of the brain’s trying to find connections between what it’s recently learned and what is stored in our long-term memory. Rapid eye movement sleep — so named because researchers who discovered this sleep stage were astonished to see the fluttering eyelids of sleeping subjects — is the only phase of sleep during which the brain is as active as it is when we are fully conscious, and seems to offer our brains the best chance to come up with new ideas and hone recently acquired skills. When we awaken, our minds are often better able to make connections that were hidden in the jumble of information.

Gradual acceptance of the notion that sequential sleep hours are not essential for high-level job performance has led to increased workplace tolerance for napping and other alternate daily schedules.

Employees at Google, for instance, are offered the chance to nap at work because the company believes it may increase productivity. Thomas Balkin, the head of the department of behavioral biology at the Walter Reed Army Institute of Research, imagines a near future in which military commanders can know how much total sleep an individual soldier has had over a 24-hour time frame thanks to wristwatch-size sleep monitors. After consulting computer models that predict how decision-making abilities decline with fatigue, a soldier could then be ordered to take a nap to prepare for an approaching mission. The cognitive benefit of a nap could last anywhere from one to three hours, depending on what stage of sleep a person reaches before awakening.

Most of us are not fortunate enough to work in office environments that permit, much less smile upon, on-the-job napping. But there are increasing suggestions that greater tolerance for altered sleep schedules might be in our collective interest. Researchers have observed, for example, that long-haul pilots who sleep during flights perform better when maneuvering aircraft through the critical stages of descent and landing.

Several Major League Baseball teams have adapted to the demands of a long season by changing their sleep patterns. Fernando Montes, the former strength and conditioning coach for the Texas Rangers, counseled his players to fall asleep with the curtains in their hotel rooms open so that they would naturally wake up at sunrise no matter what time zone they were in — even if it meant cutting into an eight-hour sleeping block. Once they arrived at the ballpark, Montes would set up a quiet area where they could sleep before the game. Players said that, thanks to this schedule, they felt great both physically and mentally over the long haul.

Strategic napping in the Rangers style could benefit us all. No one argues that sleep is not essential. But freeing ourselves from needlessly rigid and quite possibly outdated ideas about what constitutes a good night’s sleep might help put many of us to rest, in a healthy and productive, if not eight-hour long, block.

David K. Randall is a senior reporter at Reuters and the author of “Dreamland: Adventures in the Strange Science of Sleep.”

The Epigenetics Revolution

This epigenetics book looks interesting. I'll order a copy today.

The Epigenetics Revolution: How Modern Biology Is Rewriting Our Understanding of Genetics, Disease, and Inheritance by Nessa Carey

Epigenetics can potentially revolutionize our understanding of the structure and behavior of biological life on Earth. It explains why mapping an organism's genetic code is not enough to determine how it develops or acts and shows how nurture combines with nature to engineer biological diversity. Surveying the twenty-year history of the field while also highlighting its latest findings and innovations, this volume provides a readily understandable introduction to the foundations of epigenetics.

Nessa Carey, a leading epigenetics researcher, connects the field's arguments to such diverse phenomena as how ants and queen bees control their colonies; why tortoiseshell cats are always female; why some plants need cold weather before they can flower; and how our bodies age and develop disease. Reaching beyond biology, epigenetics now informs work on drug addiction, the long-term effects of famine, and the physical and psychological consequences of childhood trauma. Carey concludes with a discussion of the future directions for this research and its ability to improve human health and well-being.

Biography
Here's the official version...
Nessa Carey has a virology PhD from the University of Edinburgh and is a former Senior Lecturer in Molecular Biology at Imperial College, London. She has worked in the biotech and pharmaceutical industry for ten years. She lives in Bedfordshire and this is her first book.
And what else?
After leaving school I went to the University of Edinburgh to become a vet. This didn't last because I was allergic to fur, unable to think in 3D (not good for anatomy), quite bored and really rubbish at the course. So I dropped out and at Catford Job Centre, in amongst the ads for short order chefs (I couldn't cook) and van drivers (I couldn't drive), was one for a forensic scientist. And oddly enough I had always wanted to work at this end of crime - I must have been the only kid in the UK who had read a biography of Bernard Spilsbury by the age of 11.
So for five years I worked at the Metropolitan Police Forensic Science Lab in London and studied part-time. I then realised that I loved academic science and went off to do a PhD. At the University of Edinburgh. In the veterinary faculty.
After that, it was the academic route of post-doc, Lecturer and Senior Lecturer. But I had a tendency to wander off on routes that intrigued me - degree in Immunology, PhD in Virology, post-doc in Human Genetics, academic position in Molecular Biology. Such wandering isn't necessarily the best idea in academia but the breadth of experience is really valued in industry. I've spent 10 years in biotech and have recently moved to the pharmaceutical sector.
And outside of work? I love birdwatching (no, I don't have a life-list), cycling, scavenging stuff from skips, and growing vegetables. I have a fantasy about one day having a smallholding (where I will starve to death if I really have to be self-sufficient) and I can't wait to write my next book. And I can now cook. And drive.