Tuesday, April 30, 2013

Heather Roff Blog: How Automated Wars Rob Us Of Humanity





Jack Kirby's The Destroyer, from the Mighty Thor.
Hannah Arendt once used the phrase "the banality of evil" to describe the character of Adolf Eichmann's acquiescence in committing atrocities for the Nazi regime.

What this phrase means, in Eichmann's case, is that it was his "sheer thoughtlessness -- something by no means identical with stupidity -- that predisposed him to become one of the greatest criminals of that period."

Indeed, Arendt observed "that such remoteness from reality and such thoughtlessness can wreak more havoc than all the evil instincts taken together." To say that evil is in this sense banal means that there is no thought -- no decision -- to be (or to act) evil. It is utterly commonplace, and it is a lack of thinking that produces the most horrific of actions.

Thus Eichmann's most dangerous quality was that he threw away what it meant to be human -- he threw away his capacity for rational thought and reflection on right and wrong, good and evil.

We are at a similar juncture with regard to a "lack of thinking." In our case, however, it concerns the delegation of thinking to a machine, and a lethal machine in particular. What I mean here is that militaries, and the U.S. military especially, envision a future where weapons do the thinking -- that is, the planning, target selection and engagement.

Already the U.S. military services have capabilities that enable weapons to seek out and cue targets, such as the F-35 Joint Strike Fighter and some targeting software platforms on tanks like the M1 Abrams, as well as systems that seek out targets and automatically engage them, like the Phalanx or Counter Rocket, Artillery and Mortar (C-RAM) systems.

The U.S. decision to rely on unmanned aerial vehicles, or "drones," attests to the appeal of fighting at a distance with automated technology. The drones currently in combat operations, such as the Predator and Reaper, show the ease with which killing by remote control can be accomplished.

While drones are certainly problematic from a legal and moral standpoint with regard to targeted killings, human beings still ultimately control this type of technology. Human pilots are in the "cockpit," and for better (or worse) there are human beings making targeting decisions.

The worry, however, is that militaries are planning to push autonomy further than the F-35 Joint Strike Fighter (which is far more autonomous than the Predator or Reaper) to "fully autonomous" weapons.

Moreover, while we might try to push this worry aside and claim that it is a long way off, or too futuristic, we cannot deny the middle term between now and "fully autonomous" weapons. In this middle term, the warfighter will become increasingly dependent upon such technologies to fight.

Indeed, we already see this in "automation bias" (or the over-reliance on information generated by an automated process as a replacement for vigilant information seeking and processing). With increased dependence on the technology, this automation bias will only increase and thus will lead to a degeneration of not only strategic thinking in the services, but like the case of Eichmann, a lack of thinking more generally.

The evil here is that through the banality of autonomy, we risk not only creating a class of unthinking warfighters, but that the entire business of making war becomes so removed from human judgment and critical thinking that it too becomes commonplace.

In fact, it might become so banal, so removed from human agency, that even the word "war" starts to lose meaning. For what would we call a conflict where one side, or both, hands over the "thinking" to a machine, doesn't risk its soldiers' lives, and perhaps doesn't even place human beings outside of its own borders to fight? "War" does not really seem to capture what is going on here.

The danger, of course, is that conflicts of this type might not only perpetuate asymmetric violence but also further erode the very foundations of humanity. In other words, if we are not careful about the increasing push towards autonomous weapons, we risk vitiating the thinking, judging and thus rational capacity of humanity.

What was once merely automation bias becomes the banality of autonomy, and in an ironic twist, humans lose their own ability to be "autonomous."

The human warfighter is now the drone.



No one could draw mind-blowing technology quite like Jack Kirby.

"The release of atom power has changed everything except our way of thinking. The solution to this problem lies in the heart of mankind. If only I had known, I would have become a watchmaker." -- Albert Einstein






Hypno-Don: Cognitive Dissonance



The Fox and the Grapes by Aesop. When the fox fails to reach the grapes, he decides he does not want them after all. Rationalization (making excuses) is often involved in reducing anxiety about conflicting cognitions, according to cognitive dissonance theory.

From ye Wiki:

In modern psychology, cognitive dissonance is the discomfort experienced when simultaneously holding two or more conflicting cognitions: ideas, beliefs, values or emotional reactions. In a state of dissonance, people may sometimes feel "disequilibrium": frustration, hunger, dread, guilt, anger, embarrassment, anxiety, etc.

The phrase was coined by Leon Festinger in his 1956 book When Prophecy Fails, which chronicled the followers of a UFO cult as reality clashed with their fervent belief in an impending apocalypse.

Festinger subsequently (1957) published a book called A Theory of Cognitive Dissonance in which he outlines the theory.

Cognitive dissonance is one of the most influential and extensively studied theories in social psychology.

The theory of cognitive dissonance in social psychology proposes that people have a motivational drive to reduce dissonance by altering existing cognitions, adding new ones to create a consistent belief system, or alternatively by reducing the importance of any one of the dissonant elements. It is the distressing mental state that people feel when they "find themselves doing things that don't fit with what they know, or having opinions that do not fit with other opinions they hold."

A key assumption is that people want their expectations to meet reality, creating a sense of equilibrium. Likewise, another assumption is that a person will avoid situations or information sources that give rise to feelings of uneasiness, or dissonance.

Cognitive dissonance theory explains human behavior by positing that people have a bias to seek consonance between their expectations and reality.

According to Festinger, people engage in a process he termed "dissonance reduction", which can be achieved in one of three ways: lowering the importance of one of the discordant factors, adding consonant elements, or changing one of the dissonant factors.

This bias sheds light on otherwise puzzling, irrational, and even destructive behavior.


It would be most irrational not to believe Hypno-Don.

Coridea Out to Prove Devices can Replace Drugs -- and Succeeding



by Catherine Arnst

The burning question of the biopharma industry over the last several years is “where are the new drugs?” With blockbusters going off patent and investigational drugs failing in clinical trial after clinical trial, there has been a dearth of important new medications, and venture capital financing in life sciences startups has dropped like a stone. A cardiologist and an engineer believe they have found a better way, however, to move life sciences forward—with innovations that treat or cure diseases without new drugs.

Mark Gelfand and Howard Levin, co-founders of New York-based technology incubator Coridea, believe that now is a golden age, not for drugs, but for medical devices. “Drugs are shotguns that hit a lot of targets, and create a lot of side effects as a result,” says Levin, the cardiologist. “Medical devices tend to be snipers. A [heart] valve replacement fixes one specific problem, and that’s it. No worries about side effects.” Plus, he says, unlike many drugs that must be taken for life, devices can often cure diseases, making them much more appealing to patients, and to payers.

So far, Levin’s and Gelfand’s emphasis on devices is playing out according to plan. Working out of a small suite of offices on a gritty block of Manhattan’s Flatiron district, the two have launched six medical device companies since founding Coridea in 2003. Just last month they raised $10 million in Series A financing for their newest creation, Cibiem.

As my colleague Luke Timmerman writes in this week’s BioBeat column, Cibiem is in an elite group of only 28 companies in North America and Britain to pull in at least $5 million in first-time financing so far this year. The startup is developing a novel catheter-based system that targets the carotid body, a chemosensor at the base of the neck, where the carotid artery forks, that may be implicated in a broad range of diseases, including heart failure, hypertension, and diabetes.

Coridea’s most successful launch to date is Ardian, a Mountain View, CA-based developer of a catheter-based system to treat hypertension. It was acquired by Medtronic for $800 million in January 2011, with the potential for as much as $500 million in additional milestone payments. Ardian grabbed the pharma industry’s attention in 2010 when it reported that its device was able to bring down high blood pressure in hypertensive patients who were taking up to five drugs a day without success.

“Ten years ago people laughed at us when we tried to sell them on the idea of a device to treat hypertension,” says Levin. “It was just brutal to try and raise money.” That all changed, he says, when the Ardian clinical trial data was presented. “All of a sudden people realized you could replace drugs with a device, and it would be much more cost effective.”

Levin started his career as a cardiologist at Johns Hopkins, treating heart disease with surgery or medicines that to him seemed far too crude a solution. As an investigator on clinical trials for one of the first left-ventricular assist devices, he began working with Gelfand, a senior research engineer in the cardiology division at Johns Hopkins Medical School.

The two decided to go out on their own, and spent about two years trying, and failing, to raise money, says Levin. In 2004 they were finally able to obtain about $1.5 million in financing from The Foundry, a medical device incubator near San Francisco—“money they had set aside for some other crazy project,” says Gelfand. The pair used the money to develop the Ardian device, but both men say they never had any interest in actually running a company.

“We know what we do well, and that is generating ideas. There are a lot of people who are much better than us at executing,” says Gelfand, who was born and trained in Russia before emigrating to the U.S. in 1987. “We know how to take these brilliant ideas and decide which can become a viable commercial opportunity.”

They have generated a lot of ideas. Levin and Gelfand can lay claim to 67 issued U.S. patents and 95 pending applications between them. Among the companies they have created are CHF Solutions, to develop a device they invented for removing fluid overload from patients, acquired by Sweden’s Gambro in 2010, and Respicardia (formerly Cardiac Concepts), of Minnetonka, MN, to develop their sleep apnea device. Respicardia was spun out of Coridea in 2010, after raising $30 million to take their device into clinical trials.

The two men believe their success rate rests on their completely different backgrounds in medicine and engineering. “We challenge each other all the time, and we each know that we don’t know everything,” says Levin. “We’ve been working together since 1987. At this point, it’s like we’re an old married couple.” Next up for this marriage—the two will continue to search for devices for diseases that they believe are not well served by drugs, such as asthma and diabetes.

Monday, April 29, 2013

Anatomy Lessons for Geeks: Godzilla

Marvelmania T-Shirts



I never ordered any of these cool Marvelmania offerings, back in the day. I guess I spent all my quarters on the actual comic books. I now buy the Graphitti "recreation t-shirts" drawn by Dave Stevens, whenever they turn up on eBay.

Ben Thomas Blog: Web Portals and Mouse Mazes: How Your Brain Sorts the World




We're drowning in information. Every day, a whopping 2.5 quintillion bytes of new data appear across the Internet -- the day's tweets alone contain more textual data than your hard drive can hold. So we rely on search engines and RSS aggregators to track down and organize the data that's most useful to us, in much the same way as our ancestors relied on encyclopedias and almanacs.

This need to organize data isn't even a new phenomenon on the Internet. From the earliest days of the World Wide Web, the homepages of Yahoo and AOL (among others) provided "portals" to useful pages on other sites -- vouching, in effect, for these other pages' relevance and legitimacy.

Perhaps more surprising is the fact that, even in the age of Google, portal pages continue to proliferate.

From BuzzFeed's up-to-the-minute meme roundups to Reddit's constantly updated rankings of popular links, curated content continues to shape our Internet voyages.

In fact, these websites about websites reflect a central principle in human psychology: our instinct for generalizing -- for grouping our experiences under headings, then grouping those headings under bigger headings.

Not only is this ability a sign of intelligence; it's also crucial to the organization of our brains -- and to how we interact with the world around us.

The principle is known as "invariant representation," and it's as easy to explain as it's baffling to study.

Take vision, for example; your ability to see. The simplest visual areas of your brain -- the areas where signals from your eyes arrive earliest -- respond only to basic components of shapes, such as edges, corners and contrasts.

The neural signals triggered by those shape-components then get passed up to more complex areas, which respond to larger groups of visual signals, like those encoding whole shapes or movements.

This process continues all the way up through the brain's chain of communication, until eventually a signal arrives at a group of nerve cells that responds only to faces, or to views of scenes, or even -- believe it or not -- to cute things.

As a general rule, cell groups that sit higher in the hierarchy tend to respond to more complex or specific concepts -- so a group of cells that responds to cats might respond to a cat you see strutting by the window, to the sound of a cat's claws on your countertop, or even -- in some cases -- to the memory of the cat you had in high school.

This is what scientists mean when they talk about "invariant representation" -- the more sensory signals your brain learns to associate with cats (or with a particular cat), the more neural pathways can lead to the activation of that "cat" cell group, and the more persistent the concept of "cat" becomes in your memory.

This is a super-simplified version of the process, as I'm sure you can tell. There's probably no group of cells in your brain that responds only to cats and to nothing else; nor is it likely that any one particular brain cell, or group of cells, responds only to memories of your childhood cat.

Still, the process of generalized category representation is one that scientists have discovered again and again, all throughout the cerebrum, from the brain's relatively simple visual layers all the way up to abstract-planning areas like the prefrontal cortex. From the cellular level up, our brains are evolved to categorize.
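To picture the hierarchy in miniature, here's a toy sketch -- my own illustration, not the article's, with every feature name and threshold invented -- of how low-level detectors can feed higher ones, and how several different pathways can converge on the same top-level "cat" group:

# Toy model of "invariant representation": small detectors feed
# bigger ones, and many pathways converge on one top-level group.
# All names and thresholds here are invented for illustration.

def detect_edges(pixels):
    """Lowest level: respond only to strong contrasts between neighbors."""
    return [abs(a - b) > 0.5 for a, b in zip(pixels, pixels[1:])]

def detect_shape(edge_flags):
    """Middle level: respond when enough edges group together."""
    return sum(edge_flags) >= 3

def cat_group(saw_cat_shape=False, heard_meow=False, remembered_cat=False):
    """Top level: any associated pathway can light up the 'cat' group."""
    return saw_cat_shape or heard_meow or remembered_cat

pixels = [0.0, 1.0, 0.1, 0.9, 0.0, 1.0]
print(cat_group(saw_cat_shape=detect_shape(detect_edges(pixels))))  # True, via vision
print(cat_group(heard_meow=True))                                   # True, via sound alone

The point of the sketch is the convergence: once many routes lead to the same group, the "cat" representation fires whether the trigger is a sight, a sound, or a memory.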

Why are we so good at forming categories? The short answer is, complex categories help us make long-term, large-scale predictions about the world.

A mouse can learn the layout of a maze, but -- as far as we know -- it doesn't suddenly pause in the middle of maze #317 and wonder why it has to keep running mazes every day. A mouse's brain probably isn't physically able to build categories that big, so its understanding of the past -- and its predictions about the future -- just can't reach that far.

Unlike a mouse's brain, yours is amazingly talented at building bigger, more complex, more precise categories. Just about every category in your mind was forged by your brain's trial-and-error attempts to predict the future based on your past.

Even your brain's "invariant" representations aren't necessarily immune to modification -- they can change quite a bit as you learn new details about the world. What's more, every representation in your brain is deeply intertwined with others -- which is why it's hard to picture the ocean without imagining the sound of crashing waves, or to say the words to your favorite song without thinking of its melody too.

It wasn't too long ago that it seemed we'd always need human brains to categorize the information around us. More than ever before, though, search engines are learning algorithmically from our behavior, tailoring their results to our preferred corners of reality.

Advertisers are starting to predict what we're likely to buy before we even know we want it. It's plausible that within our lifetimes, software programs will learn to pick up on data patterns whose existence we'd never imagined. And what happens beyond that could be hard for even the smartest human brains to predict.

Ben Thomas is an author, journalist, inventor and independent researcher who studies consciousness and the brain. A lifelong lover of all things mysterious and unexplained, he weaves tales from the frontiers of science into videos, podcasts and unique multimedia events. Lots more of his work is available at http://the-connectome.com.


Source: http://www.huffingtonpost.com/ben-thomas/

Friday, April 26, 2013

Sterankophile: Steranko's Car in 1962: A White 1960 MGA 1600 Convertible

1960 MGA 1600 Roadster





"There's a sensuality in the motion of a sports car" -- "Steranko: Legend" Genii November 1962

Kelly Bulkeley Ph.D. Blog: What Does Science Know About Dreams?



El sueño de la razón produce monstruos (The Sleep of Reason Produces Monsters), Francisco Goya, aquatint etching, 1797-99. This is plate 43 of the 80 etchings making up the Los Caprichos series. In this image, which might have served as the frontispiece to his suite of satires, Goya imagined himself asleep amid his drawing tools, his reason dulled by sleep and bedeviled by the creatures that prowl in the dark: owls (symbols of folly) and bats (symbols of ignorance) assail the artist as he buries his head in his arms. The nightmare reflected Goya's view of Spanish society, which he portrayed throughout Los Caprichos as demented, corrupt, and ripe for ridicule. The full epigraph for capricho #43 reads: "Fantasy abandoned by reason produces impossible monsters: united with her, she is the mother of the arts and the origin of their marvels."

by Kelly Bulkeley Ph.D.

I recently attended a public lecture by a famous neuroscientist, with several hundred other people in the audience. The topic of his talk was the unconscious brain processes that shape our conscious mental lives. The lecture was a good introduction to mainstream neuroscience, and the audience clearly enjoyed it.

At the end of his prepared remarks, the neuroscientist took written questions from the audience. The second question he was asked (which the moderator said came from several people) had to do with dreaming. What does science know about dreams?

From where I sat, the neuroscientist looked distinctly uncomfortable trying to answer this question. He started by saying, "Dream content is not well understood."

He brought up hypnosis and meditation as other types of anomalous brain functioning that may have something to do with creativity, but he quickly dismissed them as "parlor tricks" unworthy of serious investigation.

Returning to the question of dreams, he said they were probably just a "mish-mash" of the brain's "machine code" that we can't and shouldn't try to understand. This brought him back to a point he'd made earlier, namely discouraging people from spending too much time delving into their own unconscious minds.

Consciousness, he said, is like a CEO who shouldn't try to micromanage all the lower-level activities of his company because that would be disruptive and distract him from his proper focus on the big picture.

The moderator asked him if he paid attention to his own dreams, and the neuroscientist seemed relieved at finally being able to admit the truth: "You know, I can't stand dreaming. It makes no sense to me." The moderator said something to the effect of, so you're an expert on the unconscious who can't stand his dreams? The neuroscientist shrugged. There was a bit of nervous laughter from the audience, and the moderator went on to the next question.

I'm not using the neuroscientist's name because the problem I want to discuss isn't just about him, but about an unfortunate combination of bias and ignorance I suspect is more widespread than it should be in the neuroscience community.

The problem starts with the assertion, "Dream content is not well understood." That's simply wrong. Several decades of scientific research on dreaming have identified a number of basic patterns in dream content, giving us a reasonably clear view of the meaningful features of dreaming experience.

Many questions still remain and more research needs to be done, but it's misleading at best and intellectually lazy at worst to suggest there is no good scientific evidence about the nature of dream content.

What, then, does science actually know about dreams?

1. Most dreams turn out to be rather mundane. Contrary to the common misconception that dreams are nothing but a "mish-mash" of random nonsense and bizarre fantasy, research shows that dreams usually include fairly realistic portrayals of familiar people, places, and activities. (David Foulkes was one of the first researchers to make this point.)

2. Many aspects of dream content are accurate, honest reflections of people's emotional concerns in waking life. Whatever you care most about in waking life, it's likely to appear with special frequency in the content of your dreams. (See, for example, G. William Domhoff's work on the "continuity hypothesis.")

3. The graphic, repetitive nightmares suffered by victims of post-traumatic stress disorder (PTSD) often change, in the course of healing, to more ordinary nightmares with a more diffuse range of imagery.

As Deirdre Barrett says in her 1996 edited book Trauma and Dreams: "[A]s time passes, and especially for those whose PTSD is gradually improving, the dream content begins to make the trauma more symbolic and to interweave it with concerns from the dreamer's daily life." (p. 3)

These are three straightforward findings of contemporary dream research. They are not esoteric or obscure. They have been produced by well-trained psychologists, published in reputable books and journals, and cited frequently by other scholars. Combined with other solid scientific findings (see the additional references below), they suggest that dreaming is creatively structured by a network of emotional and cognitive processes that also operate in waking consciousness and have meaningful connections to our health and development.

It's okay if neuroscientists aren't interested in their own personal dreams. But it's not okay to shift from "I don't understand my dreams" to "Dream content is not well understood." The shift to a passive voice makes this claim sound like a fact-based conclusion of mainstream science, whereas it really indicates a failure to engage with the abundant evidence generated by more than half a century of empirical dream research.

Note: All quotes unless otherwise indicated come from my hand-written notes during the lecture.

Other good sources on the science of dreams include works by Patrick McNamara and Deirdre Barrett, Edward Pace-Schott et al., Ernest Hartmann, Milton Kramer, Rosalind Cartwright, and the journal Dreaming.

Thursday, April 25, 2013

In the News: Autism Risk Detected at Birth in Abnormal Placentas




by Julia Haskins  

The placenta may be the key to determining autism risk in very young children.

The cause of autism has long been a source of mystery and debate within the medical community, but we're now one step closer to detecting the disorder early. Autism diagnosis is a touchy subject, but one that must be addressed, as nearly one in every 88 children in the U.S. has the developmental disorder.

We may not know precisely what causes it, but thanks to researchers at Yale University, we now have greater insight into how the disorder first appears. For clues to autism risk, the researchers suggest, look no further than the placenta, an organ that provides nourishment to a growing fetus in the womb.

The latest research, published this week in Biological Psychiatry, provides more evidence of a biological link between abnormal formation of the placenta and autism, as Dr. Harvey Kliman explains.

“[The placenta] is like a check engine light,” Kliman said in an interview with Healthline. “It doesn’t say exactly what’s wrong, but it does say, hey, here’s what’s going on.”

In conjunction with the MIND Institute at the University of California, Davis, Kliman and colleagues compared 117 placentas from families at risk of having children with autism to a control group of 100 normal placentas. They found that the at-risk placentas had many more trophoblast inclusions, or abnormal placental folds and cell growths, than those in the control group. These placental markers are excellent indicators of autism risk.

Placentas with trophoblast inclusions lack the symmetry found in the blood vessel "trees" of normal placentas. The abnormal folding pattern is similar to a real tree’s branches, but instead of growing out evenly, they can grow backward, into the placenta. Kliman compares this phenomenon to a pit in the skin, much like a belly button.

“There’s something about these [at-risk] families biologically that is different, and the biological basis for this is very clear,” Kliman said. “What’s causing it is very clearly associated with a whole host of genetic abnormalities, not just autism.” He hopes that routinely examining the placenta will be one of the first steps doctors take to make a determination about childhood developmental disorders going forward.

What Does This Research Mean for My Family?

Most autism diagnoses can only be made when the child begins exhibiting autistic behaviors, typically between 3 and 5 years of age. By this time, parents are struggling to help their children grow and develop. But armed with the knowledge of their child’s condition at birth, parents can be better prepared for the challenges ahead and can intervene early.

“Those families could be more vigilant,” Kliman said. Being aware of autism sooner gives parents more time to engage in developmental exercises with their children, he says. This could provide families, pediatricians, and psychologists with valuable time to conduct social and occupational therapy.

And suppose your child does not have autism. The research is still valuable, Kliman says. “Even if we say to a family your kid [might have] autism but it turns out they don’t, the attentive parenting is fantastic for kids. No kid has ever been hurt by more attention from their parents.”


Perhaps Jack Kirby's wildest creation, Paranex, the Fighting Fetus, from Captain Victory #7 (Pacific Comics, 1982). Basically, he's a fetus inside an Iron Man-type suit.

A Healing Blog: Mend the Network




Mend the Network.








Art by Saul Steinberg.


Wednesday, April 24, 2013

Sterankophile: Hail HYDRA


"Hail, HYDRA. Immortal HYDRA. We shall never be destroyed. Cut off a limb, and two more shall take its place. We serve none but the Master --as the world shall soon serve us. Hail HYDRA." -- The HYDRA oath from Strange Tales #135 (Aug. 1965).

From Wiki:

HYDRA is a fictional terrorist organization in Marvel Comics.

Despite the name's capitalization per Marvel's official spelling, the name is not an acronym but rather a reference to the mythical Lernaean Hydra. The organization's motto references the myth of the Hydra, stating that "if a head is cut off, two more will take its place," proclaiming their resilience and growing strength in the face of resistance. HYDRA agents often wear distinctive green garb featuring a serpent motif.

HYDRA was first mentioned in Menace #10. In that issue, a plainclothes HYDRA agent paid off a scientist named Dr. Nostrum for information about a cobalt bomb that turned people into monsters. Dr. Nostrum shot all the other scientists on his team after they were turned into monsters, then shot himself after his son put an image from a monster magazine on his mirror.

The organization first appeared in Strange Tales #135. In its original continuity, it was headed by nondescript businessman Arnold Brown, who was killed as S.H.I.E.L.D. apparently crushed the organization. It soon returned, however, headed by Baron Wolfgang von Strucker, with the support of the Nazi Red Skull; HYDRA's changing origin was one of Marvel's earliest retcons. After its initial defeat, several of its branches, such as its scientific branch A.I.M. (Advanced Idea Mechanics) and the Secret Empire, became independent.

HYDRA is a criminal organization dedicated to the achievement of world domination through terrorist and subversive activities on various fronts, resulting in a fascist New World Order. Its extent of operations is worldwide, always attempting to elude the ongoing counter-espionage operations by S.H.I.E.L.D. HYDRA is funded by Baron Strucker's personal fortune, based on his recovered hoard of Nazi plunder from World War II, and funds established by the original leaders of the Japanese secret society that became HYDRA.

The organization is run with behind-the-scenes direction by Baron Strucker, alias Supreme Hydra. Under him is a central ruling committee; under them are individual division chiefs, and under them are the rank and file members and special agents.





Brain's Asymmetrical Shape Reflects Human Adaptability, MRI Study Suggests





by Tanya Lewis, LiveScience Staff Writer

The two halves of the human brain are not symmetrical. This lopsidedness, which arises during brain development, may be a stamp of the adaptability of the human brain, a new study suggests.

Researchers compared geometric differences between brain scans of humans and chimpanzees. They observed structural asymmetries in both human and chimpanzee brains, but human brains were especially asymmetric. The findings, published online today (April 23) in the journal Proceedings of the Royal Society B, suggest human and chimp brains evolved a high degree of flexibility during development.

The human brain is known to be asymmetric — the "left brain" is involved in language processing, for example, while the "right brain" is where spatial reasoning takes place. "It's very common that there are some areas that are bigger in the left hemisphere than in the right hemisphere," said lead study author Aida Gómez-Robles, an anthropologist at The George Washington University in Washington, D.C.

Asymmetry and specialization of the brain's hemispheres were once thought to be distinctly human traits, but primates and other animals possess them as well. The asymmetries take several forms: a population may have brains in which one half is consistently larger than the other, known as directional asymmetry; a population may consist of some individuals with one brain half larger and some with the other half larger, known as anti-symmetry; or individuals in a population may show random deviations from the average shape in both halves, known as fluctuating asymmetry.

Genetics is thought to play a role in the first two asymmetries. But scientists believe fluctuating asymmetry, in which individuals in a population possess a variety of differences in brain shape, may result when environmental factors affect the brain's development.
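For the quantitatively inclined, here's a minimal sketch of how the three patterns show up in simple left-right size comparisons. This is my own illustration, not the study's method, and all measurements are invented:

import statistics

# Invented (left, right) size measurements for a few individuals.
pairs = [(10.2, 9.8), (10.5, 9.9), (10.1, 10.0), (10.4, 9.7)]
diffs = [left - right for left, right in pairs]

# Directional asymmetry: the population as a whole leans one way,
# so the mean left-right difference sits far from zero.
directional = statistics.mean(diffs)

# Fluctuating asymmetry: random individual scatter around that mean,
# captured here by the standard deviation of the differences.
fluctuating = statistics.stdev(diffs)

# Anti-symmetry would appear as a bimodal spread of diffs (some
# individuals left-larger, others right-larger); spotting it requires
# looking at the distribution's shape, not a single summary number.

print(f"directional: {directional:.2f}, fluctuating: {fluctuating:.2f}")

A mean far from zero signals directional asymmetry, a large scatter signals fluctuating asymmetry, and a two-humped distribution of differences would signal anti-symmetry.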

In their study, Gómez-Robles and her colleagues compared the differences between live human brains and chimpanzee brains using magnetic resonance imaging (MRI) scans. They processed the scans to obtain a 3D reconstruction of just the brain hemispheres. Then they used statistical techniques to map and compare the brain structures between individual humans and chimpanzees, as well as between the two species.

Both human and chimpanzee brains had asymmetries that varied across each population, the analysis showed. Compared with the chimpanzee brain, human brains showed even more variation in structure size between individuals in the population.

Overall, human brains had enlarged frontal and parietal lobes compared with chimp brains, as expected.

Generally, chimps had relatively short and broad brain proportions, whereas humans had longer and narrower proportions.

The pattern of brain variation seen in both humans and chimpanzees suggests this structural variation evolved in a common ancestor, enabling them to adapt to selective pressures in their environment.

The lack of symmetry in the brains of both animals, but especially humans, may be a sign of the flexibility, or plasticity, of their brains.

"We know that plasticity is an important trait in the function of the brain," which is critical for human cognitive evolution, Gómez-Robles said. Being flexible allows the brain to adapt to the conditions of its environment, and this adaptation results in less symmetric brains.

It would be interesting to compare the results with the brains of other primates besides humans and chimpanzees, Gómez-Robles said, but this would require having skull MRIs (brain images) from those animals.




Check out the big brain on Jack Kirby's MODOK -- Mental/Mobile/Mechanized Organism Designed Only for Killing.


A Cosmic Healing Blog for You






Galactus re-energizes his cosmic herald, the Silver Surfer, in this John Buscema and Joe Sinnott drawing.

In most Native American cultures, Bear is considered a medicine being with impressive magical powers, and plays a major role in many religious ceremonies. Bears are symbols of strength and wisdom to many Native Americans, and are often associated with healing and medicine (since bears continue fighting after being seriously injured, Native Americans often believed they were capable of healing their wounds).

In the News: Scientists Find Way to Turn Stem Cells Into Brain Cells

Scientists at The Scripps Research Institute have found a simple way to turn bone marrow stem cells directly into brain precursor cells, such as those shown here. (Credit: Image courtesy of the Lerner lab, The Scripps Research Institute)

A California lab has discovered an antibody that can turn bone marrow stem cells into brain cells.

by Jason Koebler

Scientists have discovered an antibody that can turn stem cells from a patient's bone marrow directly into brain cells, a potential breakthrough in the treatment of neurological diseases and injuries.

Richard Lerner, of the Scripps Research Institute in California, says that when a specific antibody is injected into stem cells from bone marrow—which normally turn into white blood cells—the cells can be triggered to turn into brain cells.

"There's been a lot of research activity where people would like to repair brain and spinal cord injuries," Lerner says. "With this method, you can go to a person's own stem cells and turn them into brain cells that can repair nerve injuries."

Antibodies are Y-shaped proteins that the immune system uses to help identify foreign threats to the body. They bind to foreign invaders in the body in order to alert white blood cells to attack harmful bacteria and viruses. There are millions of known antibodies.

Lerner and his team were working to find an antibody that would activate what is known as the GCSF receptor in bone marrow stem cells, in order to stimulate their growth. When they found one that worked, the researchers were surprised: Instead of inducing the stem cells to grow, they began to form into neural cells.

"The cells proliferated, but also started becoming long and thin and attaching to the bottom of the dish," which is reminiscent of behavior of neural cells, Jia Xie, a research associate on Lerner's team, said in a released statement. Further tests confirmed that they were neural progenitor cells, which are very similar to mature brain cells.

Lerner says that scientists have "an awful lot of experience injecting antibodies" into stem cells and that the process is not "inherently dangerous." The team plans to start animal tests of the technology soon.

"We're going to collaborate with people who are trying to regenerate nerves in the eye," Lerner says. "We will team up with a couple people strong in that area of research."



From: http://marswillsendnomore.wordpress.com/


Tuesday, April 23, 2013

More Anxiety, Please: Fake AP Tweet Sends Stocks Briefly Plunging





Tweet this!
by Alain Sherter / MoneyWatch / April 23

Stocks tumbled briefly Tuesday after hackers hijacked the main Twitter feed of The Associated Press and sent out a false tweet about a terror attack at the White House.

The Dow Jones industrial average plunged more than 130 points, or roughly 1 percent, after the fake Twitter posting before quickly rebounding.

As of 1:50 p.m., it was up 130 points, or 0.9 percent, to 14,698. The S&P 500 and Nasdaq also fell sharply, but recovered after AP confirmed that the tweets were false.

The tweet, which came shortly after 1 p.m. Eastern Time, claimed there had been two explosions at the White House and that President Barack Obama had been injured. The AP's mobile application also was compromised.

"The AP twitter account has been hacked," the wire service said in a statement. "The tweet about an attack at the White House is false. We will advise more as soon as possible."

The AP said that unidentified hackers have "made repeated attempts to steal the passwords of AP journalists." The media company suspended its Twitter account and said it was trying to correct the issue.

Other media organizations have seen their Twitter accounts hacked in recent months. Over the weekend, CBS News confirmed that its "60 Minutes" and "48 Hours" Twitter accounts were compromised. Both accounts remain suspended at the present time.


eBidiot: Marvel FOOM Poster by Jim Steranko (Marvel, 1973)



I couldn't resist -- I just had to own a Steranko FOOM poster before the modern world as we know it collapses, or a super-volcano erupts and eradicates all multicellular life on Earth -- (actually, don't worry, folks, this is just my way of rationalizing the purchase of another extravagant piece of Marvelmania).

Monday, April 22, 2013

On the Reading Table: "The Secret Life of the Grown-up Brain" by Barbara Strauch


Here's my latest selection from the reading nightstand. I had originally intended to just skim this hardback and then "prune it" from the pile of "I just have too many HPB clearance books to read" candidates -- but then I got hooked into reading the whole book -- so it's a quick and fun read. Recommended for neuroscience buffs.




The Secret Life of the Grown-up Brain: The Surprising Talents of the Middle-Aged Mind by Barbara Strauch

A leading science writer examines how our brains improve in middle age.

Pulitzer Prize-winning science writer Barbara Strauch explores the latest findings that demonstrate how the middle-aged brain is more flexible and capable than previously thought. In fact, new research from neuroscientists and psychologists suggests that the brain reorganizes, improves in important functions, and even helps us adopt a more optimistic outlook in middle age. We recognize patterns faster, make better judgments, and find unique solutions to problems. Part scientific survey, part how-to guide, The Secret Life of the Grown-up Brain is a fascinating glimpse at our surprisingly talented middle-aged minds.

From Booklist

Along with bulging waistlines and graying hair, declining mental faculties have long been seen as an inevitable drawback of middle age. When New York Times science editor Strauch first began research for this follow-up to The Primal Teen (2004), her book on adolescent intelligence, faltering midlife brain fitness was considered a given. To her pleasant surprise, her forays into contemporary neuroscience revealed a reassuring discovery.

Aside from the usual short-term memory lapses, such as forgetting names and mislaying keys, the middle-aged brain is more vigorous, organized, and flexible than previously believed.

In eleven easily digested chapters, Strauch overviews the latest findings of high-tech brain scans and psychological testing that demonstrate cognitive expertise reaching its peak in middle age.

Although distractions and oversights may more easily prey on the mind, the continued growth of myelin (or white matter) increases problem-solving skills, pattern recognition, and even wisdom.

Supplemented by a section on keeping one’s brain in top shape, Strauch’s work proffers a welcome dose of optimism to every aging baby boomer. --Carl Hays


Sunday, April 21, 2013

Busy Times




Time marches on and so do I. I've been quite, quite busy this year.

Over the last few weeks, I've: finished another major comic art catalog for HA, been called to criminal court jury duty, filed a last-minute extension on my 2012 taxes, and most importantly, finally succumbed and bought wifi and an iMac for home use.

I have big plans for new creative endeavors ahead, and once I learn to navigate the new Mac commands and options, I hope to be very productive in my research, writing, and art. So please stay tuned for new insights and projects from yours truly -- super-sophisticate cartoonist, Don Mangus.


Friday, April 19, 2013

Adrenaline, Cortisol, Norepinephrine: The Three Major Stress Hormones, Explained






by Sarah Klein

Thanks to the work of our sympathetic nervous system -- the "fight or flight" system that takes over when we're stressed -- when you see your boss's name in your inbox late at night, your body reacts as if there's a lion on the loose.

Behind the wide range of both physical and mental reactions to stress are a number of hormones that are in charge of adding fuel to the fire.

Adrenaline

What It Is: Commonly known as the fight or flight hormone, it is produced by the adrenal glands after receiving a message from the brain that a stressful situation has presented itself.

What It Does: Adrenaline, along with norepinephrine (more on that below), is largely responsible for the immediate reactions we feel when stressed. Imagine you're trying to change lanes in your car, says Amit Sood, M.D., director of research in the Complementary and Integrative Medicine Program and chair of the Mayo Mind Body Initiative at the Mayo Clinic.

Suddenly, from your blind spot, comes a car racing at 100 miles per hour. You return to your original lane and your heart is pounding. Your muscles are tense, you're breathing faster, you may start sweating. That's adrenaline.

Along with the increase in heart rate, adrenaline also gives you a surge of energy -- which you might need to run away from a dangerous situation -- and also focuses your attention.

Norepinephrine

What It Is: A hormone similar to adrenaline, released from the adrenal glands and also from the brain, says Sood.

What It Does: The primary role of norepinephrine, like adrenaline, is arousal, says Sood. "When you are stressed, you become more aware, awake, focused," he says. "You are just generally more responsive." It also helps to shift blood flow away from areas where it might not be so crucial, like the skin, and toward more essential areas at the time, like the muscles, so you can flee the stressful scene.

Although norepinephrine might seem redundant given adrenaline (which is also sometimes called epinephrine), Sood imagines we have both hormones as a type of backup system. "Say your adrenal glands are not working well," he says. "I still want something to save me from acute catastrophe."

Depending on the long-term impact of whatever's stressing you out -- and how you personally handle stress -- it could take anywhere from half an hour to a couple of days to return to your normal resting state, says Sood.

Cortisol

What It Is: A steroid hormone, commonly known as the stress hormone, produced by the adrenal glands.

What It Does: It takes a little more time -- minutes, rather than seconds -- for you to feel the effects of cortisol in the face of stress, says Sood, because the release of this hormone is a multi-step process involving two additional minor hormones.

First, the part of the brain called the amygdala has to recognize a threat. It then sends a message to the part of the brain called the hypothalamus, which releases corticotropin-releasing hormone (CRH). CRH then tells the pituitary gland to release adrenocorticotropic hormone (ACTH), which tells the adrenal glands to produce cortisol. Whew!
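Restating that relay as plain code may help keep the steps straight. This is purely illustrative -- the function names are mine, and real physiology includes feedback loops the sketch leaves out:

# The cortisol cascade described above, as a simple four-step relay.
# Names are invented for illustration; feedback loops are omitted.

def amygdala(stimulus):
    """Step 1: recognize whether the stimulus is a threat."""
    return stimulus == "threat"

def hypothalamus(threat_detected):
    """Step 2: release CRH when the amygdala flags a threat."""
    return "CRH" if threat_detected else None

def pituitary(signal):
    """Step 3: CRH tells the pituitary to release ACTH."""
    return "ACTH" if signal == "CRH" else None

def adrenal_glands(signal):
    """Step 4: ACTH tells the adrenals to produce cortisol."""
    return "cortisol" if signal == "ACTH" else None

print(adrenal_glands(pituitary(hypothalamus(amygdala("threat")))))  # cortisol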

In survival mode, the optimal amounts of cortisol can be life saving. It helps to maintain fluid balance and blood pressure, says Sood, while regulating some body functions that aren't crucial in the moment, like reproductive drive, immunity, digestion and growth.

But when you stew on a problem, the body continuously releases cortisol, and chronic elevated levels can lead to serious issues. Too much cortisol can suppress the immune system, increase blood pressure and sugar, decrease libido, produce acne, contribute to obesity and more.

"Ducks walk out of a lake, flap their wings and they fly off," says Sood. "When you face something stressful, particularly if it's not likely to repeat or doesn't have a huge long-term impact, you want to be able to shake it off and move on with life."

Of course, he adds, estrogen and testosterone are also hormones that affect how we react to stress, as are the neurotransmitters dopamine and serotonin. But the classic fight-or-flight reaction is mostly due to the three major players mentioned above.



Thursday, April 18, 2013

Lloyd Sederer, M.D.: Huffington Post Blog: Things You Want to Know About Psychiatric Medications But Didn't Know Who (or How) to Ask




Psychiatric medications are among the most frequently prescribed medications in this country and throughout the world. One in 10 Americans takes an antidepressant. Yet despite the incessant barrage of multi-media drug promotions, you may not have the answers to the questions you most want answered.

I asked more than a dozen expert psychiatric colleagues, and myself, the questions they most frequently receive about psychiatric medications from people who take them or their families. Here are a dozen of those many questions; the responses are mine.

1. What are the chances that my medication will (or will not) work?

This could be the most frequently asked question -- if all who wondered dared to ask it. For a medication to be approved by the Food and Drug Administration (FDA), it must be shown to be safe and effective. "Effective" means that it outperforms the placebo effect, which can result in improvement in more than 30 percent of individuals.

A common example is antidepressant medications for depression, which can improve symptoms for approximately 75 percent of people suffering from moderate to severe illness (mild to moderate depression often responds to non-drug treatments). But the effectiveness of antidepressants depends on finding the right medication for an individual, at an adequate dose (not too little and not too much), taking it reliably for enough time and without barriers to its effectiveness like alcohol or drug abuse.

Similar effectiveness is found for anti-manic and anti-psychotic medications, particularly for acute symptoms.

Medications have limited effectiveness in eating and personality disorders, except when used for another co-occurring mental disorder.

But medications do not cure mental disorders; they treat symptoms. Adequate sleep, good nutrition, moderate (or no) use of alcohol and non-prescribed drugs, psychotherapy, the hard work of recovery or rehabilitation, and support are what everyone needs to manage an illness, be it a mental or a physical illness.

2. How soon will it work?

Some medications work in hours, like tranquilizing medications. Some can take up to six weeks or longer, like antidepressants. Some are meant to prevent relapse or recurrence, so they work over time. This is a good question to ask when a doctor is writing the prescription.

3. How will I know if it's working?

This is a crucial question. I urge patients and their families to set specific goals for treatment early on. Medications work on specific symptoms like sleep problems or anxiety or feeling very blue or agitated; speak with the doctor about what symptoms the medication is meant to improve. Improved functioning at school and work can take more time to achieve than symptom relief, but if you are clear about your goals and monitor them (with others you trust) you will know if the medication is working.

4. Will this medication change my personality?

No. Your personality is you; it is who you have been since you were young. The effects of a medication can change how you feel (more focused, more energetic, more clear-thinking -- or more restless, sleepier, or without sexual desire) but that is not a change in your personality.

5. Will this medication change my brain?

The goal of a medication for a mental illness is to change how part of the brain is working in order to improve how a person feels and thinks, and enable them to behave more like they want to. There are also unwanted side effects that can result from how a medication can affect other parts of the brain, like regions involved in appetite or alertness or muscle tone. There is no such thing as a perfect medication with only benefits and no side effects, so you will need to weigh benefits and side effects and decide if a medication is right for you.

6. If I take this medication, does it mean I'm crazy?

Talk of craziness remains an unfortunate residue of the stigma that continues to pervade how people think of mental illness (see my recent HuffPost blog "The Painted Bird: Stigma and Mental Illness").

Mental illnesses are diseases of the brain and mind, not inexplicable happenings or failures of will and character. We need to think of taking medication for a mental illness as part of a comprehensive plan by which a person manages an illness -- any illness -- and rebuilds functioning at school, home, and work as well as with family and friends.

7. Will I have to be on this medication for the rest of my life?

Not necessarily, but you should not stop a medication suddenly or without discussing it with your doctor. Over time, dosage can be reduced, and with planning a trial off a medication can be tried. People whose illness has gone on for years or who have had repeated episodes of acute problems usually require taking medications for a long period of time in the same way that individuals with diabetes, high blood pressure and asthma do.

8. Will I become addicted to this medication?

None of the antidepressant, mood-stabilizing, or antipsychotic medications are addicting. However, with benzodiazepine tranquilizers and some sleep medications our bodies can develop tolerance (where a higher dose is needed to achieve the same effect) or dependence (where a person experiences withdrawal symptoms if the drug is quickly stopped). This can also happen with many pain medications.

9. Will the medication make me fat?

Many psychiatric medications bring the unwanted side effect of weight gain. But some medications used for similar conditions (for example, in the class of antidepressant or antipsychotic agents) are associated with greater weight gain than others. Carefully considering which medication to use, coupled with attention to diet and exercise, can prevent or minimize weight gain.

10. Can I drink while I'm on this medication?

Excessive drinking (or use of non-prescribed medications or street drugs) is not a good idea for a person whose brain is affected by a mental illness. For those who have a co-occurring alcohol or drug-use disorder, abstinence will be a necessary aim. In some instances, alcohol may interfere or alter blood levels of the medication, so it is important to check with your doctor. For people with no substance-use problem and mild to moderate or stable illness, I usually say that they may be able to enjoy a drink. Try one and see how it feels. For people on psychiatric medications, keep in mind that one drink can feel like two or three.

11. Will this medication affect my sex life?

Selective serotonin reuptake inhibitors (SSRIs) and serotonin-norepinephrine reuptake inhibitors (SNRIs), used especially for depression and anxiety disorders, can produce problems with sexual arousal as well as in achieving orgasm -- in men and women. In some cases, a person's libido, or sex drive, is reduced. For people who lost their drive from depression, however, it can improve with treatment. There is one antidepressant, bupropion, which has not shown these problems. For other antidepressants, there are ways to help by adjusting the dose or timing of the drug or using a medication to enhance sexual functioning.

Other types of medication (for high blood pressure, heart disease and other general medical conditions) may affect sexual desire and performance, so if that is happening to you, speak with your doctor. Losing the pleasure of sex -- for you or your partner -- may not need to be a consequence of taking medication.

12. My child refuses to take a medication. Can I hide it in his or her food?

Trust is the bedrock of every relationship. A family can feel very desperate and want to do something like this when a loved one refuses treatment and is suffering and failing. But the cost in loss of trust usually outweighs any (temporary) benefits.
---
More answers about medications, other treatments, and more broadly about mental health care are in my book, The Family Guide to Mental Health Care.

The opinions expressed here are solely mine as a psychiatrist and public health advocate. I receive no support from any pharmaceutical or device company.

www.askdrlloyd.com

For more by Lloyd I. Sederer, M.D., click here.




Timmie Rogers: Forgotten Pioneer of Comedy




http://www.youtube.com/watch?v=UxP30lNJKD0

http://www.youtube.com/watch?v=rlm53CiComo

Doctor Doom Says: Buy Marvel Hedge Funds





With prices in the gold market crashing like a lead zeppelin, I've converted my life savings completely to Marvel currency.

From the HA Comics Newsletter: The Ten Best Comic Book Covers of All Time (According to Me): Don Mangus

The Amazing Spider-Man #33 (Steve Ditko)
Stan Lee and Steve Ditko's Spider-Man was an oddball superhero — an alienated, gangly teenager, racked by adolescent guilt, angst, and self-doubt. This yarn found him tested to his breaking point — wedged beneath a colossal machine, drowning, even as his beloved Aunt May faced imminent death, in sore need of a rare medical isotope that Spidey must snatch back from the clutches of Doctor Octopus. Ditko's masterfully staged sequence — where the wall-crawler digs deep into his last ounce of resolve to hurl off his burden with a superhuman effort — remains the ultimate climactic moment of superhero storytelling.
Captain America Comics #1 (Jack Kirby and Joe Simon)
Few American comic books can claim the importance of this 1941 Timely Publications title. Created by Joe Simon and Jack Kirby, Captain America became the most successful of the many patriotic heroes to arise as war raged across Europe against Nazi Germany; within the year, Japan's attack on Pearl Harbor would draw America fully into World War II. The savvy Joe Simon once explained the inspiration behind this famous cover as follows: "There had never been a truly believable villain in comics. But Adolf was live, hated by more than half the world — I could smell a winner."
Detective Comics #69 (Jerry Robinson)
Bob Kane profited enormously from the major contributions of his talented associates. Indeed, Jerry Robinson and Bill Finger developed Batman's greatest arch-nemesis, that Harlequin of Hate — the Joker. Robinson's superior draftsmanship shines ever so brightly in this magical, Mort Meskin-influenced cover design.
Fantastic Four #1 (Jack Kirby)
Marvel Comics was going down for the count when Jack Kirby and Stan Lee melded the appealingly monstrous with the anti-heroic, and sparked the Marvel Age of Heroes. The accomplished George Klein was a first-rate inker over both Jack Kirby at Marvel and Curt Swan (on Superman) at DC. While other Marvel characters later became more popular, Kirby and Lee's FF laid the foundation for the revolution that followed. Some fans have noted striking compositional similarities between the covers of FF #1 and The Brave and the Bold #28 (the first JLA cover, by Mike Sekowsky and Bernard Sachs, 1960) — well, it's hard to go wrong with a towering monster.

The Incredible Hulk #1 (Jack Kirby)
Monsters and heroes — every kid loves them. But what if the line between good and evil was blurred, and the monster was the hero? The Incredible Hulk shares many motifs with other bipolar pop culture "prototypes" such as Robert Louis Stevenson's "The Strange Case of Dr. Jekyll and Mr. Hyde" and Boris Karloff's portrayal of the Frankenstein Monster — and the gamma bomb-blast origin of the Incredible Hulk owes more than a little to the AIP fantasy film "The Amazing Colossal Man" (1957). Kirby and Ditko's work on Marvel's many pre-hero "monster books" prepared them perfectly for the upcoming Marvel Age.
Marvel Mystery Comics #9 (Bill Everett and Alex Schomburg)
Even in the Golden Age, Timely/Marvel's superstars were half-monster, half-antihero — case in point — those two hellraisers, Namor, the Sub-Mariner and Dr. Phineas Horton's android, the Human Torch. Two opposing elements at war — fire vs. water — as depicted by two giants of Golden Age comic art — Bill Everett and Alex Schomburg — what could be more mythic?
Nick Fury, Agent of SHIELD #1 (Jim Steranko)
Magician, escape artist, and graphic designer Jim Steranko reinvigorated the kinesthetic "eyeball kicks" of four-color comics with his outrageous brand of "Zap Art" — a combination of hallucinatory surrealism, eye-popping op art, visual misdirections and puzzles, and existential, film-noir storytelling. Combining such disparate artistic influences as Jack Kirby, Will Eisner, Bernie Krigstein, Richard Powers, Salvador Dali, Wally Wood, and others, Steranko blew comic readers' minds in the late sixties. Nick Fury was transformed from an anachronistic, cigar-chomping ex-Howler into a James Bond-Diabolik-styled, ultra-cool, super-super-sexy Cold Warrior.
Showcase #4, the Flash (Carmine Infantino)
Carmine Infantino's jet-age costume concept was perfect for the Silver Age Flash. It left Jay Garrick's old-fashioned Mercury-inspired uniform behind in its vapor trails. Joe Kubert "gilded the lily," adding his expressive inks over Infantino's streamlined pencils. Although DC editor Julie Schwartz is celebrated for launching the Silver Age, I hasten to point out that it was super-scribe Robert Kanigher who actually wrote the first few critical scripts and concepts. Few fans realize he also conceived and designed the famous filmstrip cover concept. This project was DC's all-star team pulling together for one rare occasion on one seminal creation.
Showcase #57 (Joe Kubert)
Leave it to controversial DC writer-editor Robert Kanigher to create one of the most outrageous concepts to hit the Silver Age war comics scene: a continuing feature spotlighting Hans von Hammer, Enemy Ace, the autocratic and merciless Hammer of Hell who reigned over the killer skies of WWI. Joe Kubert's rimlit cover presents a chilling portrait of this cold-blooded Kanigher creation, who had only a feral wolf as his confederate. This cover showcases the legendary "K-K team" at its finest.
Superman #14 (Fred Ray)
The young Fred Ray was inspired by comic strip artist and illustrator Noel Sickles, and as a result, Ray created the most iconic and beautifully designed image ever of America's first superhero.
 
Don Mangus brings his experience as a published writer and former college-level Design, Drawing, and Painting instructor to his catalog descriptions in Comics and Illustration Art. He is an artist/cartoonist, with both a BFA and an MFA from Southern Methodist University. His articles on comic art have been published in Comic Book Artist, Robin Snyder's The Comics, and The Charlton Spotlight, as well as on numerous comics-related Web sites. If you like Don's list, you can drop him an email at DonM@HA.com.