Monday, October 29, 2007

Brain Acts Differently for Creative and Noncreative Thinkers


How does the brain act during creativity?


Why do some people solve problems more creatively than others? Are people who think creatively different from those who tend to think in a more methodical fashion?


These questions are part of a long-standing debate, with some researchers arguing that what we call "creative thought" and "noncreative thought" are not basically different. If this is the case, then people who are thought of as creative do not really think in a fundamentally different way from those who are thought of as noncreative. On the other side of this debate, some researchers have argued that creative thought is fundamentally different from other forms of thought. If this is true, then those who tend to think creatively really are somehow different.


A new study led by John Kounios, professor of psychology at Drexel University, and Mark Jung-Beeman of Northwestern University answers these questions by comparing the brain activity of creative and noncreative problem solvers. The study, published in the journal Neuropsychologia, reveals a distinct pattern of brain activity, even at rest, in people who tend to solve problems with a sudden creative insight -- an "Aha! moment" -- compared to people who tend to solve problems more methodically.


At the beginning of the study, participants relaxed quietly for seven minutes while their electroencephalograms (EEGs) were recorded to show their brain activity. The participants were not given any task to perform and were told they could think about whatever they wanted to think about. Later, they were asked to solve a series of anagrams - scrambled letters that can be rearranged to form words [MPXAELE = EXAMPLE]. These can be solved by deliberately and methodically trying out different letter combinations, or they can be solved with a sudden insight or "Aha!" in which the solution pops into awareness. After each successful solution, participants indicated in which way the solution had come to them.
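To illustrate the methodical strategy the study contrasts with sudden insight, a brute-force anagram solver simply tries letter orderings against a word list until one matches. The sketch below is purely illustrative and is not part of the study; the word list is a tiny stand-in.

```python
from itertools import permutations

# tiny stand-in dictionary; a real solver would load a full word list
WORDS = {"example", "insight", "creative", "problem"}

def solve_anagram(scrambled):
    """Methodical strategy: try letter orderings until one matches a known word."""
    for perm in permutations(scrambled.lower()):
        candidate = "".join(perm)
        if candidate in WORDS:
            return candidate
    return None  # no dictionary word uses exactly these letters

print(solve_anagram("MPXAELE"))  # -> "example"
```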


The participants were then divided into two groups - those who reported solving the problems mostly by sudden insight, and those who reported solving the problems more methodically - and resting-state brain activity for these groups was compared. As predicted, the two groups displayed strikingly different patterns of brain activity during the resting period at the beginning of the experiment - before they knew that they would have to solve problems or even knew what the study was about.


One difference was that the creative solvers exhibited greater activity in several regions of the right hemisphere. Previous research has suggested that the right hemisphere of the brain plays a special role in solving problems with creative insight, likely due to right-hemisphere involvement in the processing of loose or "remote" associations between the elements of a problem, which is understood to be an important component of creative thought. The current study shows that greater right-hemisphere activity occurs even during a "resting" state in those with a tendency to solve problems by creative insight. This finding suggests that even the spontaneous thought of creative individuals, such as in their daydreams, contains more remote associations.


Second, creative and methodical solvers exhibited different activity in areas of the brain that process visual information. The pattern of "alpha" and "beta" brainwaves in creative solvers was consistent with diffuse rather than focused visual attention. This may allow creative individuals to broadly sample the environment for experiences that can trigger remote associations to produce an Aha! Moment.


For example, a glimpse of an advertisement on a billboard or a word spoken in an overheard conversation could spark an association that leads to a solution. In contrast, the more focused attention of methodical solvers reduces their distractibility, allowing them to effectively solve problems for which the solution strategy is already known, as would be the case for balancing a checkbook or baking a cake using a known recipe.


Thus, the new study shows that basic differences in brain activity between creative and methodical problem solvers exist and are evident even when these individuals are not working on a problem. According to Kounios, "Problem solving, whether creative or methodical, doesn't begin from scratch when a person starts to work on a problem. His or her pre-existing brain-state biases a person to use a creative or a methodical strategy."


In addition to contributing to current knowledge about the neural basis of creativity, this study suggests the possible development of new brain imaging techniques for assessing potential for creative thought, and for assessing the effectiveness of methods for training individuals to think creatively.


Journal reference: Kounios, J., Fleck, J.I., Green, D.L., Payne, L., Stevenson, J.L., Bowden, E.M., & Jung-Beeman, M. The origins of insight in resting-state brain activity, Neuropsychologia (2007), doi:10.1016/j.neuropsychologia.2007.07.013


Source: Drexel University.





Sunday, October 28, 2007

Study: Organic really is better for health



The biggest study into organic food has found that it is more nutritious than ordinary produce and may help to lengthen people's lives.


The evidence from the £12m four-year project will end years of debate and is likely to overturn government advice that eating organic food is no more than a lifestyle choice.


The study found that organic fruit and vegetables contained as much as 40% more antioxidants, which scientists believe can cut the risk of cancer and heart disease, Britain's biggest killers. They also had higher levels of beneficial minerals such as iron and zinc.


Professor Carlo Leifert, the co-ordinator of the European Union-funded project, said the differences were so marked that organic produce would help to increase the nutrient intake of people not eating the recommended five portions a day of fruit and vegetables. "If you have just 20% more antioxidants and you can't get your kids to do five a day, then you might just be okay with four a day," he said.


This weekend the Food Standards Agency confirmed that it was reviewing the evidence before deciding whether to change its advice. Ministers and the agency have said there are no significant differences between organic and ordinary produce.


Researchers grew fruit and vegetables and reared cattle on adjacent organic and nonorganic sites on a 725-acre farm attached to Newcastle University, and at other sites in Europe. They found that levels of antioxidants in milk from organic herds were up to 90% higher than in milk from conventional herds.


As well as finding up to 40% more antioxidants in organic vegetables, they also found that organic tomatoes had significantly higher levels of antioxidants, including flavonoids thought to reduce coronary heart disease.






Thursday, October 25, 2007

Adult stem cells lack key regulator


The protein Oct4 plays a major role in embryonic stem cells, acting as a master regulator of the genes that keep the cells in an undifferentiated state. Unsurprisingly, researchers studying adult stem cells have long suspected that Oct4 also is critical in allowing these cells to remain undifferentiated. Indeed, more than 50 studies have reported finding Oct4 activity in adult stem cells.


But those findings are misleading, according to research in the lab of Whitehead member and MIT biology professor Rudolf Jaenisch.


In a paper published online in Cell Stem Cell on Oct. 10, postdoctoral fellow Christopher Lengner has shown that Oct4 is not required to maintain mouse adult stem cells in their undifferentiated state, and that adult tissues function normally in the absence of Oct4. Furthermore, using three independent detection methods in several tissue types in which Oct4-positive adult stem cells had been reported, Lengner found either no trace of Oct4, or so little Oct4 as to be indistinguishable from background readings.


This means that pluripotency, the ability of stem cells to change into any kind of cell, is regulated differently in adult and embryonic stem cells.


"This is the definitive survey of Oct4," said Jaenisch. "It puts all those claims of pluripotent adult stem cells into perspective."


Oct4 is essential in maintaining the pluripotency of embryonic stem cells, but only for a short time before the embryo implants in the uterine wall. After implantation, Oct4 is turned off and the cells differentiate into all of the 200-plus cell types in the body.


"We have convincingly shown that Oct4 has no role in adult stem cells," said Lengner.


He initially set out to determine how tissues previously shown to express Oct4 (the intestinal lining, brain, bone marrow and hair follicle) functioned without the protein. To do so, he bred mice in which the Oct4 gene had been deleted from a given tissue type.


Image caption -- Top panels: Cells of the intestinal lining of mice lacking the embryonic pluripotency regulator Oct4 stop dividing and die after radiation exposure. Middle panels: Intestinal stem cells then become activated and begin dividing rapidly. Bottom panels: The intestinal lining is completely regenerated, with stem cells relocating to the bottom.


Next, Lengner stressed the tissue in several ways, forcing the adult stem cells within to regenerate the tissue. All regenerated normally. Lengner and his fellow researchers then tested to confirm that Oct4 had indeed been deleted from these cells. Finally, the researchers set out to validate the previously published reports claiming Oct4 was expressed in these adult stem cell types. Using highly sensitive tests that could detect Oct4 at the single-cell level, they were unable to confirm the earlier reports.


"This is a cautionary tale of believing what you read in the literature," said Lengner, who suggests that earlier studies may have misapplied tricky analytical techniques or worked with cell cultures that had spent too much time in an incubator.


"We now know that adult stem cells regulate their pluripotency, or 'stemness,' using different mechanisms from embryonic stem cells, and we're studying these mechanisms," he said. "Is there a common pathway that governs stemness in different adult stem cells, or does each stem cell have its own pathway? We don't yet know."


Other authors of this paper are from Massachusetts General Hospital, the Max Planck Institute for Molecular Biomedicine and the Russian Academy of Sciences.





Sunday, October 21, 2007

Iran's chief nuclear negotiator resigns


24hoursnews


Ali Larijani played a key role this year in defusing a crisis that erupted when Iranians seized a group of British sailors and Marines in disputed Persian Gulf waters off southern Iraq.


TEHRAN -- Iran's chief nuclear negotiator, a relative moderate who struggled against the uncompromising agenda of President Mahmoud Ahmadinejad, has resigned his high-profile post, government officials announced Saturday.

The resignation of Ali Larijani dealt a major setback to Iranian moderates trying to forge a compromise over Iran's pursuit of nuclear technology, which is strongly opposed by the West.


For two years, Larijani had served as secretary of the powerful Supreme National Security Council, which advises the highest levels of the government. His withdrawal "may make negotiations even more problematic than in recent months," said Patrick Cronin, a nuclear nonproliferation expert at the International Institute for Strategic Studies, a British think tank.

Larijani, a confidant to supreme leader Ayatollah Ali Khamenei, is said to oppose Iran's isolation over its insistence on continuing its uranium enrichment program. Insiders said he advocated cutting a deal with the West to end the dispute, which has led to two sets of economic sanctions against Iran.

Within the inner leadership circle, Larijani was often at odds with Ahmadinejad, who refused to tone down his rhetoric or steer a more moderate course on the nation's nuclear ambitions.

"The difference between Ali Larijani and President Ahmadinejad was on the cost of the nuclear issue," said a Larijani advisor, who spoke on condition of anonymity. "Ahmadinejad insists on not any inch of compromise."

Word of the resignation came a few days after Russian President Vladimir V. Putin visited Tehran and proposed a deal to end the stalemate, and just before Larijani was to have discussed the issue with the European Union's foreign policy chief, Javier Solana.

"We will consider what you said and your proposal," Khamenei told Putin, according to the official IRNA news agency. "We are determined to satisfy the needs of the country in nuclear energy, and it is for this that we take seriously the question of enrichment."

Analysts said the resignation probably meant that Iran's leadership had opted to reject Putin's proposal, which most observers say was a deal in which Iran would halt its enrichment program in exchange for concessions from the West.

"Mr. Ali Larijani believed in a sort of compromise on uranium enrichment, but President Ahmadinejad thinks that Iran should go ahead with the current uranium enrichment and current nuclear policy," said Sadegh Zibakalam, a professor of political science at Tehran University. "Therefore, Mr. Ali Larijani had no option but to resign."

Enriched uranium can be used to power electricity plants or, if highly concentrated, become explosive material for an atomic bomb.

President Bush has said Iran should not have the know-how to create such a weapon. The United Nations Security Council has demanded that Iran halt enrichment until questions about its past nuclear activities are cleared up.

The West, led by the U.S., accuses Iran of using a legal nuclear energy program to mask an illegal pursuit of nuclear weapons technologies. Iran says its program is only for generating electricity.

Reacting to the resignation, White House spokeswoman Eryn Witcher said, "We seek a diplomatic solution to the issue of Iran's nuclear program and hope that whomever has this position will help lead Iran down a path of compliance with their U.N. Security Council obligations."

Larijani had tried before to tender his resignation.

"Larijani had resigned several times, and President Mahmoud Ahmadinejad finally accepted his resignation," government spokesman Gholamhossein Elham said Saturday, according to IRNA .

Elham downplayed the resignation, saying that Iran's policies would not change and that Larijani resigned for personal reasons to pursue other political activities. But a former advisor to the Iranian government on the nuclear issue said that "the gap between him and Ahmadinejad had reached a point that he simply had to resign."

Larijani has long been considered a relatively moderate voice. In 2005 he pushed for a two-year suspension of Iran's enrichment program, and this year he played a key role in defusing the crisis that erupted when Iranians seized a group of British sailors and marines in disputed Persian Gulf waters.

Diplomats say Larijani had a fruitful line of communication with Solana, the EU point person on Iran's nuclear issue.

The Fars News Agency reported that Saeed Jalili, deputy foreign minister for European and U.S. affairs, would fill the post for now and attend the Tuesday meeting with Solana in Italy.

"I think this is a very risky move that lightens Iran's diplomatic clout -- because for one thing, Jalili is too young and inexperienced to handle this big job, which means he will be at the president's beckoning," the former government advisor said, describing Ahmadinejad as "equally a novice on nuclear diplomacy."

Elham said Larijani might join the delegation. A Supreme National Security Council official said the post would be permanently filled within days.

Some analysts pointed out that style rather than substance characterized the differences between the two camps on the nuclear issue.

"Larijani was not advocating making major nuclear compromises, but he appreciated the need to retain constructive dialogue with the EU and felt Ahmadinejad needlessly undermined Iran's case with his blusterous rhetoric," said Karim Sadjadpour, an Iran expert at the Carnegie Endowment for International Peace, a Washington think tank.

Replacing Larijani with Jalili buys Iran more time to pursue its ultimate goal of becoming a nuclear power, said Saeed Leylaz, an Iranian analyst and economist.

"The Islamic Republic of Iran is in a race against time with the West," Leylaz said. "All in all, Iran is going toward more radicalization and full nuclear power."


Source: http://www.latimes.com





Saturday, October 20, 2007

Errors blamed for nuclear arms going undetected


Air Force weapons officers assigned to secure nuclear warheads failed on five occasions to examine a bundle of cruise missiles headed to a B-52 bomber in North Dakota, leading the plane's crew to unknowingly fly six nuclear-armed missiles across the country.


That August flight, the first known incident in which the military lost track of its nuclear weapons since the dawn of the atomic age, lasted nearly three hours, until the bomber landed at Barksdale Air Force Base in northern Louisiana.


But according to an Air Force investigation presented to Defense Secretary Robert M. Gates on Friday, the nuclear weapons sat on a plane on the runway at Minot Air Force Base in North Dakota for nearly 24 hours without ground crews noticing the warheads had been moved out of a secured shelter.


"This was an unacceptable mistake," said Air Force Secretary Michael W. Wynne at a Pentagon news conference. "We would really like to ensure it never happens again."


For decades, it has been military policy to never discuss the movement or deployment of the nuclear arsenal. But Wynne said the accident was so serious that he ordered an exception so the mistakes could be made public.


On Aug. 29, North Dakota crew members were supposed to load 12 unarmed cruise missiles in two bundles under the B-52's wings to be taken to Louisiana to be decommissioned. But as a result of what the Air Force has ruled were five separate mistakes, six of the missiles loaded contained nuclear warheads.


According to the investigation, the chain of errors began the day before the flight when Air Force officers failed to inspect five bundles of cruise missiles inside a secure nuclear weapons hangar at Minot. Some missiles in the hangar have nuclear warheads, some have dummy warheads, and others have neither, officials said.


An inspection would have revealed that one of the bundles contained six missiles with nuclear warheads, investigators said.


"They grabbed the wrong ones," said Maj. Gen. Richard Newton, the Air Force's deputy chief of staff in charge of operations.


After that, four other checks built into procedures for checking the weapons were overlooked, allowing the plane to take off Aug. 30 with crew members unaware that they were carrying enough destructive power to wipe out several cities.


Newton said that even though the nuclear missiles were hanging on the B-52's wings overnight without anyone knowing they were missing, the investigation found that Minot's tarmac was secure enough that the military was never at risk of losing control of the warheads.


The cruise missiles were supposed to be transported to Barksdale without warheads as part of a treaty that requires the missiles to be mothballed. Newton said the warheads are normally removed in the Minot hangar before the missiles are assigned to a B-52 for transport.


The Air Force did not realize the warheads had been moved until airmen began taking them off the plane at Barksdale. The B-52 had been sitting on the runway there for more than nine hours, however, before they were offloaded.


Newton did not say what explanation the Minot airmen gave investigators for their repeated failure to check the warheads once they left the secured hangar, saying only that there was inattention and "an erosion of adherence to weapons-handling standards."


Air Force officials who were briefed on the findings said investigators found that personnel lacked neither the time nor the resources to perform the inspections, indicating that the weapons officers had become lackadaisical in their duties.


One official noted that until the Air Force was given the task of decommissioning the cruise missiles this year, it had not handled airborne nuclear weapons for more than a decade, implying that most of the airmen lacked experience with the procedures.


The Air Force has fired four colonels who oversaw aircraft and weapons operations at Minot and Barksdale, and some junior personnel have also been disciplined, Newton said. The case has been handed to a three-star general who will review the findings and determine whether anyone involved should face court-martial proceedings.


Despite the series of failures, Newton said, the investigation found that human error, rather than inadequate procedures, was at fault. Gates has ordered an outside panel headed by retired Gen. Larry D. Welch, a former Air Force chief of staff, to review the Pentagon's handling of nuclear weapons.




From CNN International: Air Force officers relieved of duty over loose nukes


A six-week probe into the mistaken flight of nuclear warheads across the country uncovered a "lackadaisical" attention to detail in day-to-day operations at the air bases involved in the incident, an Air Force official said Friday.


Four officers -- including three colonels -- have been relieved of duty in connection with the August 29 incident in which a B-52 bomber flew from Minot Air Force Base in North Dakota to Barksdale Air Force Base in Louisiana.


The plane unknowingly carried a payload of nuclear-tipped cruise missiles.


"Nothing like this has ever occurred," Newton said.


"Our extensive, six-week investigation found that this was an isolated incident and that the weapons never left the custody of airmen -- were never unsecured -- but clearly this incident is unacceptable to the people of the United States and to the United States Air Force."


The probe also found there was "an erosion of adherence to weapons-handling standards at Minot Air Force Base and at Barksdale Air Force Base," Newton said.


"We have acted quickly and decisively to rectify this," he added.





Relieved of duty were the Minot wing commander and maintenance crew commander, and the Barksdale operational group commander.


Minot's munitions squadron commander was relieved of duty shortly after the incident.


Newton didn't name any of the officers, but Col. Bruce Emig had been the commander of the 5th Bomb Wing at Minot.


A number of other personnel -- "under 100," Newton said, including the entire 5th Bomb Wing at Minot -- have lost their certification to handle sensitive weaponry.


The matter will be referred to an Air Force convening authority to find out whether there's enough evidence to bring charges or any other disciplinary action against any personnel, Newton said.


Air Force Secretary Michael Wynne called the incident "an unacceptable mistake and a clear deviation from our exacting standards."


"We are making all appropriate changes to ensure that this has a minimal chance of happening again, but we would really like to ensure that it never happens again," he said.


Wynne has convened a blue-ribbon panel to review all of the Air Force's security procedures and adherence to them. That panel is to report back on January 15.


The probe into the incident, which ended this week, lists five errors -- all of them procedural failures to check, verify and inspect, Newton said.


The investigation found that nuclear warheads were improperly handled and procedures were not followed as the missiles were moved from their storage facility, transferred to the bomber and loaded onto it, Newton said.


The bomber carried six nuclear warheads on air-launched cruise missiles, but the warheads should have been removed from the missiles before they were attached to the B-52.


A munitions crew at Barksdale followed proper procedure when the plane landed, discovering the error and reporting it up the chain of command, Newton said.


The weapons were secured in the hands of airmen at all times and had been stored properly at Minot, Newton said.






Friday, October 19, 2007

Is Mars alive, or is it only sleeping?



This is a shaded relief image derived from data collected by the Mars Orbiter Laser Altimeter, which flew onboard the Mars Global Surveyor. The image shows Olympus Mons and the three Tharsis Montes volcanoes: Arsia Mons, Pavonis Mons, and Ascraeus Mons from southwest to northeast. Credit: NASA



The surface of Mars is completely hostile to life as we know it. Martian deserts are blasted by radiation from the sun and space. The air is so thin, cold, and dry that if liquid water were present on the surface, it would freeze and boil at the same time. But there is evidence, like vast, dried-up riverbeds, that Mars once was a warm and wet world that could have supported life. Are the best times over, at least for life, on Mars?

New research raises the possibility that Mars could awaken from within -- three large Martian volcanoes may only be dormant, not extinct. Volcanic eruptions release lots of greenhouse gases, like carbon dioxide, into the atmosphere. If the eruptions are not complete, and future eruptions are large enough, they could warm the Martian climate from its present extremely cold and dry state.


NASA-funded researchers traced the flow of molten rock (magma) beneath the three large Martian volcanoes by comparing their surface features to those found on Hawaiian volcanoes.


"On Earth, the Hawaiian islands were built from volcanoes that erupted as the Earth's crust slid over a hot spot -- a plume of rising magma," said Dr. Jacob Bleacher of Arizona State University and NASA's Goddard Space Flight Center in Greenbelt, Md. "Our research raises the possibility that the opposite happens on Mars - a plume might move beneath stationary crust." The observations could also indicate that the three Martian volcanoes might not be extinct. Bleacher is lead author of a paper on these results that appeared in the Journal of Geophysical Research, Planets, September 19.


The three volcanoes are in the Tharsis region of Mars. They are huge compared to terrestrial volcanoes, with each about 300 kilometers (186 miles) across. They form a chain heading northeast called the Tharsis Montes, from Arsia Mons just south of the Martian equator, to Pavonis Mons at the equator, to Ascraeus Mons slightly more than ten degrees north of the equator.


No volcanic activity has been observed at the Tharsis Montes, but the scarcity of large impact craters in the region indicates that they erupted relatively recently in Martian history. Features in lava flows around the Tharsis Montes reveal that later eruptions from large cracks, or rift zones, on the sides of these volcanoes might have started at Arsia Mons and moved northeast up the chain, according to the new research.


The researchers first studied lava flow features that are related to the eruptive history of Hawaiian volcanoes. On Hawaii (the Big Island), the youngest volcanoes are on the southeastern end, directly over the hot spot. As the Pacific crustal plate slowly moves to the northwest, the volcanoes are carried away from the hotspot. Over time, the movement has created a chain of islands made from extinct volcanoes.


Volcanoes over the hot spot have the hottest lava. Its high temperature allows it to flow freely. A steady supply of magma from the hot spot means the eruptions last longer. Lengthy eruptions form lava tubes as the surface of the lava flow cools and crusts over, while lava continues to flow beneath. After the eruption, the tube empties and the surface collapses, revealing the hidden tube.


As the volcano is carried away from the hot spot, magma has to travel farther to reach it, and the magma cools. Cooler magma makes the lava flow more slowly compared to lava at the younger volcanoes, like the way molasses flows more slowly than water. The supply of magma is not as steady, and the eruptions are shorter. Brief eruptions of slowly flowing lava form channels instead of tubes. Flows with channels partially or completely cover the earlier flows with tubes.


As the volcano moves even further from the hot spot, only isolated pockets of rising magma remain. As the magma cools, it releases trapped gas. This creates short, explosive eruptions of cinders (gas bubbles out of the lava, forming sponge-like cinder stones). Earlier flows become covered with piles of cinders, called cinder cones, which form around these eruptions.

"We thought we could take what we learned about lava flow features on Hawaiian volcanoes and apply it to Martian volcanoes to reveal their history," said Bleacher. "The problem was that until recently, there were no photos with sufficient detail over large surface areas to reveal these features on Martian volcanoes. We finally have pictures with enough detail from the latest missions to Mars, including NASA's Mars Odyssey and Mars Global Surveyor, and the European Space Agency's Mars Express missions."

Using images and data from these missions, the team discovered that the main flanks of the Tharsis Montes volcanoes were all alike, with lava channels covering the few visible lava tubes. However, each volcano experienced a later eruption that behaved differently. Lava issued from cracks (rifts) on the sides of the volcanoes, forming large lava aprons, called rift aprons by the team.

The new observations show that the rift apron on the northernmost volcano, Ascraeus Mons, has the most tubes, many of which are not buried by lava channels. Since tube flows are the first to form over a hot spot, this indicates that Ascraeus was likely active more recently. The flow on the southernmost volcano, Arsia Mons, has the fewest tubes, indicating that its rift aprons are older. Also, the team saw more channel flows partially burying tube flows at Arsia. These trends across the volcanic chain indicate that the rift aprons might have shared a common source like the Hawaiian volcanoes, and that apron eruptions started at Arsia, then moved northward, burying the earlier tube flows at Arsia with channel flows.

Since there is no evidence for widespread crustal plate movement on Mars, one explanation is that the magma plume could have moved beneath the Tharsis Montes volcanoes, according to the team. This is opposite to the situation at Hawaii, where volcanoes move over a plume that is either stationary or moving much more slowly. Another scenario that could explain the features is a stationary plume that spreads out as it nears the surface, like smoke hitting a ceiling. The plume could have remained under Arsia and spread northward toward Ascraeus. "Our evidence doesn't favor either scenario, but one way to explain the trends we see is for a plume to move under the stationary Martian crust," said Bleacher.

The team also did not see any cinder cone features on any of the Tharsis Montes rift apron flows. Since cinder cone eruptions are the final stage of hot spot volcanoes, the rift apron eruptions might only be dormant, not extinct, according to the team. If the eruptions are not complete, and future eruptions are large enough, they could contribute significant amounts of water and carbon dioxide to the Martian atmosphere.





Brain Images Make Cognitive Research more Believable :neuroscience


The brain is among the most complex objects we know; how it works, how it thinks, and what it accepts or rejects remain vital research questions. People are more likely to believe findings from a neuroscience study when the report is paired with a colored image of a brain as opposed to other representational images of data such as bar graphs, according to a new Colorado State University study. (Credit: iStockphoto/Aaron Kondziela)




People are more likely to believe findings from a neuroscience study when the report is paired with a colored image of a brain as opposed to other representational images of data such as bar graphs, according to a new Colorado State University study.


Persuasive influence on public perception.


Scientists and journalists have recently suggested that brain images have a persuasive influence on the public perception of research on cognition. This idea was tested directly in a series of experiments reported by David McCabe, an assistant professor in the Department of Psychology at Colorado State, and his colleague Alan Castel, an assistant professor at University of California-Los Angeles. The forthcoming paper, to be published in the journal Cognition, was recently published online.


"We found the use of brain images to represent the level of brain activity associated with cognitive processes clearly influenced ratings of scientific merit," McCabe said. "This sort of visual evidence of physical systems at work is typical in areas of science like chemistry and physics, but has not traditionally been associated with research on cognition.


"We think this is the reason people find brain images compelling. The images provide a physical basis for thinking."


Brain images compelling


In a series of three experiments, undergraduate students were either asked to read brief articles that made fictitious and unsubstantiated claims such as "watching television increases math skills," or they read a real article describing research showing that brain imaging can be used as a lie detector.


When the research participants were asked to rate their agreement with the conclusions reached in the article, ratings were higher when a brain image had accompanied the article, compared to when it did not include a brain image or included a bar graph representing the data.


This effect occurred regardless of whether the article described a fictitious, implausible finding or realistic research.


Conclusions often oversimplified and misrepresented


"Cognitive neuroscience studies which appear in mainstream media are often oversimplified and conclusions can be misrepresented," McCabe said. "We hope that our findings get people thinking more before making sensational claims based on brain imaging data, such as when they claim there is a 'God spot' in the brain."


Article: "Seeing is believing: The effect of brain images on judgments and scientific reasoning."






Thursday, October 18, 2007

All the Energy We Could Ever Need? Space-Based Solar Power Looking Better



Published by the Pentagon's National Security Space Office, the report says the US should demonstrate the technology by building a pilot "space-based solar power" station, big enough to continuously beam up to 10 megawatts of power to the ground, in the next decade.


The good news? Beaming all the solar energy we could ever need down to Earth from space appears more feasible than ever before. The bad news? It's going to take a lot of money and political will to get there.


While the idea of sending giant solar panels into orbit around the Earth is nothing new - the idea has been kicked around with varying degrees of seriousness since the '60s and 70s - changing times have made the concept a lot more feasible today, according to a study released Oct. 10 by the National Security Space Office (NSSO). Fossil fuels are a lot more expensive, and getting harder to access, than they were in past decades. And technology advances are making possible today projects that were all but inconceivable in years past.


"The magnitude of the looming energy and environmental problems is significant enough to warrant consideration of all options, to include revisiting a concept called Space-Based Solar Power (SBSP) first invented in the United States almost 40 years ago," the report's executive summary states.


Oil prices have jumped from $15 per barrel to $80 per barrel in less than a decade. In addition to the emergence of global concerns over climate change, American and allied energy security is now under threat from actors that seek to destabilize or control global energy markets, as well as from increased competition for energy from emerging global economies.


By collecting solar energy before it passes through the Earth's atmosphere and loses much of its power, a space-based solar power system could provide the planet with all the energy it needs and then some, the NSSO report said. The output of a single one-kilometer-wide band of solar panels at geosynchronous orbit would equal the energy in all the world's remaining recoverable oil: an estimated 1.28 trillion barrels.
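That comparison is easy to sanity-check with rough numbers. The back-of-envelope sketch below is my own arithmetic, not taken from the NSSO report; it assumes approximate values for the solar constant, the geosynchronous orbit radius, and the energy content of a barrel of oil, and it suggests the comparison corresponds to roughly one year of collection by such a band, before any conversion and transmission losses.

```python
import math

# approximate constants (assumptions, not figures from the NSSO report)
SOLAR_CONSTANT = 1366.0          # W per m^2 above the atmosphere
GEO_RADIUS_KM = 42_164           # geosynchronous orbit radius
BARREL_OF_OIL_J = 6.1e9          # approx. energy content of one barrel of oil
REMAINING_OIL_BARRELS = 1.28e12  # figure quoted in the report

band_area_m2 = 2 * math.pi * GEO_RADIUS_KM * 1.0 * 1e6   # 1 km wide band, in m^2
collected_power_w = SOLAR_CONSTANT * band_area_m2
energy_per_year_j = collected_power_w * 365.25 * 24 * 3600

oil_energy_j = REMAINING_OIL_BARRELS * BARREL_OF_OIL_J
print(f"band power: {collected_power_w:.2e} W")
print(f"one year of collection vs. remaining oil: {energy_per_year_j / oil_energy_j:.1f}x")
# roughly 1.5x: about a year of collection matches the energy in the
# estimated remaining recoverable oil
```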


Because it didn't have the time or funds to study the feasibility of space-based solar power the traditional way, the NSSO's Advanced Concepts Office (known as "Dreamworks") developed its report through a unique strategy: an open-source, Internet-based forum inviting worldwide experts in the field to collaborate online. More than 170 contributors joined into the discussion, with the mission to answer one question:


Can the United States and partners enable the development and deployment of a space-based solar power system within the first half of the 21st Century such that if constructed could provide affordable, clean, safe, reliable, sustainable, and expandable energy for its consumers?


Their answer, delivered in the form of the Oct. 10 report: it's possible, but a lot remains to be done.


The study group ended up making four major recommendations. First, it said, the U.S. government should move to resolve the remaining unknowns regarding space-based solar power and act effectively to allow for the technology's development. Second, the government should also reduce as much as possible the technical risks faced by businesses working on the technology. Third, the government should set up the environment - policy, regulatory and legal - needed to develop space-based solar power. And, fourth, the U.S. should commit to becoming an early demonstrator, adopter and customer of space-based solar power and set up incentives for the technology's development.


"Considering the development timescales that are involved, and the exponential growth of population and resource pressures within that same strategic period, it is imperative that this work for 'drilling up' vs. drilling down for energy security begins immediately," the NSSO report stated.


If it could be done, space-based solar power would have incredible potential, the NSSO said: It could solve our energy problems, deliver "energy on demand" for troops in the field, provide a fast and sustainable source of energy during humanitarian disasters, and reduce the risk of future conflict over dwindling or risky energy supplies.


Considering that, over the past 30 years, both NASA and the Department of Energy have invested a meager $80 million in space-based solar power research (compared to $21 billion over the last half-century for nuclear fusion - which still remains out of reach as a feasible power source), maybe it's time to direct our research energies - and dollars - upward.





VIVACE R&T project delivers major improvements for future Aeronautical


The "Value Improvement through a Virtual Aeronautical Collaborative Enterprise" (VIVACE) Research and Technology project focuses on simulation and modelling techniques for aeronautical products during their design and development phases, with the objective of reducing development time and costs.


The final results of VIVACE are presented at a public Forum held in Toulouse from 17th to 19th October.


VIVACE is a very large European Commission co-funded R&T project, grouping 63 companies and research institutions from the aeronautic sector such as Airbus, Rolls Royce, Snecma, Thales… It was launched in January 2004 and will be fully completed at the end of 2007.


Major innovation and progress has been developed within the scope of the project in seven key areas of the product development process, providing solutions in "Design Simulation", "Virtual Testing", "Design Optimisation", "Business and Supply Chain Modelling", "Knowledge Management", "Decision Support" and "Collaboration in the Extended and Virtual Enterprise".


Through industrial simulations of a part of the aircraft, of the engine or of a development process, reflecting both the Virtual Product and the Virtual Extended Enterprise, major improvements have been obtained in terms of processes, methods and tools.


VIVACE contributes to answering the Advisory Council for Aeronautics Research in Europe (ACARE) Vision of halving the time to market for new products, increasing the integration of the supply chain and maintaining a steady and continuous fall in travel costs. By using the latest innovations in advanced simulation and modelling techniques, it will provide the means to get the best possible knowledge about the product prior to its physical development, thus reducing the development cost, shortening time to market and further improving product quality.


More information on the VIVACE project can be found at: www.vivaceproject.com





Wednesday, October 17, 2007

FalconStor Unveils A Virtual, Virtual Tape Appliance


FalconStor Software is adding to the slowly growing number of virtual storage appliances with the introduction this week of its new virtual virtual tape library -- a virtual tape library (VTL) that itself runs as a virtual machine.
It is one of two new virtual tape library offerings the company is introducing this week aimed at bringing down the cost of the technology.


Virtual tape libraries, or VTLs, are disk arrays configured to look to the host server and the backup software as if they are physical tape libraries. Data is streamed to and recovered from the VTL as if it were tape, so no changes are needed to the backup process. However, because they use hard drives, the backup and recovery speed is much higher than when using tape drives. Data backed up to a VTL can also be backed up to a physical tape for archiving or off-site storage.


The FalconStor VTL Virtual Appliance is a pre-configured, ready-to-run software application with an operating system that can be downloaded into a virtual machine using VMware, said John Lallier, vice president of product management for the vendor.


FalconStor this week also introduced a new family of low-cost physical VTLs. The primary differences between the virtual and the physical appliances are price and the fact that the virtual VTL's performance is limited compared to the hardware versions.


Both the virtual and the physical VTL appliances include FalconStor's Single Instance Repository de-duplication technology.


De-duplication, also called "de-dupe," removes duplicate information as data is backed up or archived. It can be done on the file level, where duplicate files are replaced with a marker pointing to one copy of the file, and/or at the sub-file or byte level, where duplicate bytes of data are removed, resulting in a significant decrease in storage capacity requirements.
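To make the idea concrete, below is a minimal sketch of block-level de-duplication using fixed-size chunks and content hashes. It is purely illustrative and is not FalconStor's Single Instance Repository implementation; real products typically use variable-size chunking, collision handling and persistent indexes.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; real products often use variable-size chunking

def dedupe_store(data, store):
    """Split data into chunks, keep one copy of each unique chunk,
    and return the list of chunk hashes (the 'recipe' for this data)."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # store each unique chunk only once
            store[digest] = chunk
        recipe.append(digest)
    return recipe

def restore(recipe, store):
    """Rebuild the original data from its recipe of chunk hashes."""
    return b"".join(store[d] for d in recipe)

# usage: two backups that share most of their content consume little extra space
store = {}
backup1 = b"A" * 10000 + b"B" * 10000
backup2 = b"A" * 10000 + b"C" * 10000   # half of this duplicates backup1
r1 = dedupe_store(backup1, store)
r2 = dedupe_store(backup2, store)
assert restore(r1, store) == backup1 and restore(r2, store) == backup2
print(len(store), "unique chunks stored for", len(r1) + len(r2), "chunk references")
```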


It is only the latest in a handful of virtual storage appliances which perform the same function as hardware-based appliances but run on a virtual machine built using VMware.


FalconStor last month unveiled its first virtual storage appliance, one which does continuous data protection between physical and/or virtual servers. It is aimed at helping customers do LAN-less data backups and archiving as well as build disaster recovery architectures which rely on virtual servers at the remote site.


Last month also saw EMC introduce a virtual data de-duplication appliance using technology it received from its Avamar acquisition.


Steve Bishop, CTO of VeriStor Systems, an Atlanta-based storage solution provider, said he is seeing a number of vendors starting to move to offer virtual storage appliances.


"Customers are asking, can their storage applications be virtualized?" Bishop said. "We're seeing a lot of interest."


Greg Knieriemen, vice president of marketing at Chi, a Cleveland, Ohio-based FalconStor partner which has already had good success with the vendor's virtual CDP appliance, said a virtual VTL could help open the market for replacing tape with disk-based storage.


"We're selling VTLs to SMBs and enterprises, across the board," Knieriemen said. "It's a 50-50 split. But there's a much larger base of SMB customers. The SMB adoption of VTLs is still marginalized. This could really open the door for VTLs in the SMB market."


Both Bishop and Knieriemen said the $8,000 list price for the FalconStor virtual VTL is a good price, when compared to physical VTLs.


However, because of the slower performance of the virtual VTL appliance compared to hardware appliances, the right choice for customers depends on a number of factors, including backup performance requirements, customer size, what virtualization environment is available, how many virtual machines are in use, and what the customer's backup window looks like, Knieriemen said.


"You have to really develop a complete profile of the customer," he said.


Wendy Petty, vice president of sales at FalconStor, said the virtual VTL appliance makes it easy for customers or solution providers to test the vendor's VTL software.


"Just download the VTL appliance, and you can test the software as a proof-of-concept," Petty said. "It's very, very simple. You don't need to send a hardware out to test it."


With their de-dupe capability, the virtual VTL appliances are also good for small remote offices, Petty said. "Partners can offer a solution that saves customers money," she said. "They can take the management from the remote offices, where backups are not normally done anyway. And they can do global de-dupe with our patented replication."


For customers looking for higher performance, FalconStor also unveiled three new VTL hardware appliances.


The VTL-S6 can be configured for up to four different tape libraries with a total of 16 virtual tape drives and 1,024 tapes, for a maximum pre-de-dupe capacity of up to 50 Tbytes. It has a backup speed of 200 Mbytes per second.


The VTL-S12 can be configured for up to eight different tape libraries with a total of 32 virtual tape drives and 2,048 tapes, for a maximum pre-de-dupe capacity of up to 100 Tbytes. It has a backup speed of 250 Mbytes per second.


The VTL-S24 can be configured for up to 16 different tape libraries with a total of 64 virtual tape drives and 4,096 tapes, for a maximum pre-de-dupe capacity of up to 200 Tbytes. It has a backup speed of 300 Mbytes per second.


The virtual VTL appliance, model VTL-V3, can be configured for up to 4 different tape libraries with a total of 16 virtual tape drives and 1,024 tapes, for a maximum pre-de-dupe capacity of up to 40 Tbytes. It has a backup speed of 60 Mbytes per second.
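One way to read those rated speeds is against a backup window. The sketch below is rough arithmetic based only on the figures above, not vendor sizing guidance: it computes how long a single stream at the rated speed would take to move each model's maximum pre-de-dupe capacity. Real jobs move far less data per night, and actual throughput depends on the de-dupe ratio, network and disk layout.

```python
# (max pre-de-dupe capacity in TB, rated backup speed in MB/s), from the figures above
models = {
    "VTL-V3": (40, 60),
    "VTL-S6": (50, 200),
    "VTL-S12": (100, 250),
    "VTL-S24": (200, 300),
}

for name, (capacity_tb, speed_mbps) in models.items():
    hours = capacity_tb * 1_000_000 / speed_mbps / 3600   # using 1 TB = 1,000,000 MB
    print(f"{name}: ~{hours:.0f} hours to stream {capacity_tb} TB at {speed_mbps} MB/s")
```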


All four VTLs are available. The VTL-V3 is priced at $8,000, while the VTL-S6 is priced at about $20,000. Replication software is available as an option for $3,000 to $8,000, depending on capacity. A Fibre Channel connectivity option is available for the three hardware appliances with a price of $3,000 to $8,000.





MySpace IM voice chat will begin testing on the social network site.


Communication is becoming easier and more economical, which will help people build social networks more easily.


MySpace will give its millions of members the ability to engage in free voice chats via the MySpace instant messaging service, thanks to a partnership with VOIP provider Skype.


News Corp.'s MySpace and eBay Inc. subsidiary Skype will announce the beta version of the service, called MySpaceIM with Skype, on Wednesday at the Web 2.0 Summit in San Francisco.


MySpace, the world's largest social network, has about 110 million monthly active users, while Skype has about 220 million registered users, the companies said.


MySpaceIM with Skype will mesh MySpace's IM service, which has an installed base of 25 million users, with Skype's Internet voice communications services, the companies said.


MySpaceIM with Skype will be released generally in November, along with the ability to let people also link their MySpace profiles with their Skype accounts.


The voice chat service will let MySpace users call others in the social network as well as Skype users. MySpaceIM with Skype will not require users to download any additional Skype software.


MySpace will launch the voice chat service in 20 countries where it has "localized" communities. Meanwhile, Skype will allow its users to link their accounts to their MySpace profile worldwide except in Japan, China and Taiwan.


Beyond the free voice chat service, MySpace users will also get the option of buying other premium Skype products, such as SkypeOut for generating calls from Skype to outside lines, as well as SkypeIn for receiving calls from outside lines.


MySpace Says: Skype Me


MySpace and Skype are teaming up to create the world's largest online voice network. MySpace members will be able to make Internet phone calls using Skype's telephony network and MySpace's instant message program.


MySpace says 110 million unique users come to its site each month, and that 8 million people actively use its IM service. Some 220 million people have downloaded Skype's software. The two companies have been talking about a partnership since the days when they were independent companies--more than two years ago. They began coding in July and plan to launch a test version of the joint service next month.


The news comes on the heels of eBay's (Nasdaq: EBAY) $1.4 billion write-off related to its $2.6 billion acquisition of Skype in 2005. That impairment will be reflected in eBay's third-quarter earnings announcement tomorrow. EBay executives are vexed about the way the service has worked out for the online auction company. "Skype has not performed as expected. We are disappointed that we had to take the impairment, but it is a more accurate reflection of the value of Skype as an asset," says eBay spokesperson Hani Durzy.


This new deal will no doubt score new users for Skype, but whether the company can turn them into paying customers remains questionable. EBay's "Skype Me" button allows buyers and sellers to call each other about auctions, but cross-pollination has been sluggish, and not very lucrative. So far, Skype has brought in a mere $285 million in revenue for eBay in the past 12 months--a piddly $1.30 per user. (This does not include the latest financial results, which eBay is slated to release on Wednesday.)


Similar to Skype's traditional service, calls between MySpace and Skype members will be free. Users will pay a fee, however, to place calls to mobile phones and land lines, as well as use voice mail and call forwarding services. Skype will handle the billing. Revenue from the fees will be split between MySpace and Skype, though specific terms were not disclosed.


The real value of this deal seems to be exposure. Skype, which is widely used in Europe, has had a hard time building its customer base in the U.S. MySpace has had great success here but faces increased competition from players such as Bebo, Friendster and Hi5 abroad. Skype and MySpace users hardly overlap. Only 6.7% of Skype users also use MySpace IM, and only 2.6% of MySpace IM users also use Skype.


Widgets by Ingenio, Jaxtr, Jangl and Jajah have been available for some time on MySpace (as well as other social networks). None of them, however, have been sanctioned by MySpace, let alone built into its technology foundation. As a result, such voice widgets have so far been rather clunky to use and not very reliable.


In contrast, Skype will be built right into MySpace. Users will be able to call their friends by clicking a link beneath a member's profile picture, or in the IM menu. The partnership also works in reverse. Skype is adding a MySpace window into its application so that when users sign on, they can also sign into, or join, MySpace.


To cut back on spam and security risks, MySpace will provide a caller ID, which lets users click to a person's profile before accepting a call, as well as prevent non-friends from calling.


One big question remains: Will this deal help MySpace win its fight with Facebook--or is Skype working on a similar partnership with the social networking site? Skype executives are coy.


"Our vision is to enable the world's conversations. We want to be available wherever people want those conversations to take place," says Don Albert, vice president and head of Skype North America.


Looks like the social networking phone call wars have begun.


Can MySpace And Skype Work For Business?


Social networking megasite MySpace is teaming with communications company Skype to link their networks, a partnership that potentially connects millions of users together for free VoIP telephone calls. The move brings more low-cost communications tools to Internet users, but solution providers say MySpace's poor reputation among business users makes it unlikely the new offering will gain any enterprise traction.


"MySpace is looked at as a very poor man's Facebook. It's unstructured, chaotic and more known for predatory precautions with underage users than real application infrastructure," said Narinder Singh, co-founder of Appirio, a San Francisco software development and services firm that focuses on emerging, on-demand technologies. "The combination is less appealing than Skype on its own. I'd see WebEx or other collaboration technologies as more natural and synergistic partners for Skype."


Dubbed "MySpaceIM with Skype," the new alliance was announced today at the Web 2.0 Summit in San Francisco, a gathering focused on technologies and business models that exploit new Web frontiers. At this evening's keynote session, MySpace CEO Chris DeWolfe and MySpace owner Rupert Murdoch, CEO of News Corp., will address the conference audience.


Skype took a beating earlier this month as its CEO resigned and auction site eBay wrote off an impairment charge of $1.4 billion related to the rocky acquisition, which hasn't resulted in the business synergies eBay initially envisioned. Analysts have suggested that a social networking site like MySpace would be a more natural parent for Skype, but today's alliance falls well short of such a sweeping restructuring. Both Skype and MySpace already offer their services for free; Wednesday's alliance means MySpace users will be able to use the service natively from within MySpace, without downloading any additional software.


Skype's VoIP service, available for free when users make Skype-to-Skype voice calls, could save money for businesses willing to adopt it as a communications infrastructure, but the MySpace integration is unlikely to be an enterprise selling point. Entrepreneur Jason Calacanis, who is quick to adopt low-cost "Web 2.0" tools and networking services when he sees a business use in them, said he has a MySpace profile but rarely makes business connections though it -- for that, he relies on business-friendlier services like LinkedIn and Twitter.


"The problem is that when people use social networking sites, they're in a totally different mindset. They're in a socializing mindset," said Calacanis, who is CEO of search engine start-up Mahalo.


Appirio's Singh doubts MySpace's utility for professionals, but he does see enterprise potential in Skype.


"We do a substantial amount of work with Salesforce.com Service and Support and many call centers that have CTI (computer-telephony) integration," Singh said. "With Skype we see the potential for call centers with integrated voice/chat with very little or no actual infrastructure cost."


Whether or not MySpace gains enterprise ground, solution providers do see a growing role for "Web 2.0" technologies and networking tools.


"Any tool that can enhance the way people communicate with each other is a bonus," said Stuart Crawford, director of business development for Canadian IT services firm IT Matters. "Solution providers must be able to inform their clients that this technology may not be something that they want to have as their main communication strategy, but as a tool in the tool chest of business, it is one of those additional items that can bring a competitive advantage."





Novel Gate Dielectric Materials: Perfection Is Not Enough


On the left is an illustration of the displacement of hafnium atoms (white) in the structure of hafnium oxide to accommodate the presence of the self-trapped hole in the oxygen atom (red). On the right is the quantum mechanics view of the probability of finding a hole near certain atoms (larger blue structures represent higher probability). (Credit: London Centre for Nanotechnology)



For the first time, theoretical modeling has provided a glimpse into how promising dielectric materials trap charges, something that may affect the performance of advanced electronic devices. The finding is reported in a paper published in Physical Review Letters by researchers at the London Centre for Nanotechnology and SEMATECH, a semiconductor research consortium in Austin, Texas.


Through the constant quest for miniaturization, transistors and all their components continue to decrease in size. The thickness of a component material known as the gate dielectric -- typically a thin layer of silicon dioxide, which has now been in use for decades -- has shrunk along with them. Unfortunately, as the gate dielectric gets thinner, silicon dioxide begins to leak current, leading to excessive power consumption and reduced reliability. Scientists hope to replace this material with others, known as high-dielectric-constant (or high-k) dielectrics, which mitigate the leakage effects at these tiny scales.


High-k metal oxides have attracted tremendous interest as novel materials for the latest generation of devices. The case for their practical introduction would be further strengthened if their ability to capture and trap charges, and the resulting impact on the stability of device performance, were better understood. It has long been believed that these charge-trapping properties originate from structural imperfections in the materials themselves.


However, as is theoretically demonstrated in this publication, even if the structure of the high-k dielectric material is perfect, the charges (either electrons or the absence of electrons -- known as holes) may experience 'self-trapping'. They do so by forming polarons -- a polarizing interaction of an electron or hole with the perfect surrounding lattice. Professor Alexander Shluger of the London Centre for Nanotechnology and the Department of Physics & Astronomy at UCL says: "This creates an energy well which traps the charge, just like a deformation of a thin rubber film traps a billiard ball."


The resulting prediction is that at low temperatures electrons and holes in these materials can move by hopping between trapping sites rather than propagating more conventionally as a wave. This can have important practical implications for the materials' electrical properties. In summary, this new understanding of the polaron formation properties of the transition metal oxides may open the way to suppressing undesirable characteristics in these materials.
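
As a rough illustration of what "hopping between trapping sites" means in practice, the snippet below evaluates a generic thermally activated (Arrhenius-type) hop rate. This is only a sketch of the textbook small-polaron picture, not the model used in the paper; the activation energy and attempt frequency are assumed numbers chosen purely for illustration.

```python
# Illustrative sketch only: a generic thermally activated ("small polaron")
# hopping rate, NOT the model from the Physical Review Letters paper.
# The activation energy and attempt frequency are assumed values.
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def hop_rate(temperature_k, activation_energy_ev=0.2, attempt_frequency_hz=1e13):
    """Arrhenius-type rate for a carrier hopping between neighbouring trap sites."""
    return attempt_frequency_hz * math.exp(-activation_energy_ev / (K_B_EV * temperature_k))

for t_k in (100, 200, 300, 400):
    print(f"T = {t_k:3d} K  ->  hop rate ~ {hop_rate(t_k):.2e} per second")
```

The strong temperature dependence of such a rate is one signature that distinguishes hopping transport from ordinary band-like (wave) propagation.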


The article "Theoretical Prediction of Intrinsic Self-Trapping of Electrons and Holes in Monoclinic HfO2", authored by D. Muñoz Ramo, A. L. Shluger, J. L. Gavartin, and G. Bersuker was published in Physical Review Letters volume 99 issue 15, page 155504, on the 12 October 2007


The work at the London Centre for Nanotechnology and UCL Department of Physics & Astronomy was funded by the EPSRC. Access to computer time on the HPCx facility was awarded to the Materials Chemistry Consortium with funding from the EPSRC.


Contacts:

Dave Weston
UCL Media Relations Manager
Tel: +44 (0) 20 7679 7678
Mobile: +44 (0)77333 075 96
Out-of-hours: +44 (0)7917 271 364


d.weston@ucl.ac.uk




Danielle Reeves

Press Officer,
Imperial College London
Tel: (0)20 7594 2198
Mobile: 07803 886248
danielle.reeves@imperial.ac.uk




Source: Science Daily






Global Warming and the Future of Coal


Ever-rising industrial and consumer demand for more power, in tandem with cheap and abundant coal reserves across the globe, is expected to result in the construction of new coal-fired power plants producing 1,400 gigawatts of electricity by 2030, according to the International Energy Agency. In the absence of emission controls, these new plants will increase worldwide annual emissions of carbon dioxide by approximately 7.6 billion metric tons by 2030. These emissions would equal roughly 50 percent of all fossil fuel emissions over the past 250 years.


In the United States alone, about 145 gigawatts of new power from coal-fired plants are projected to be built by 2030, resulting in CO2 emissions of 790 million metric tons per year in the absence of emission controls. By comparison, annual U.S. emissions of CO2 from all sources in 2005 were about 6 billion metric tons.
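
As a rough cross-check of these figures, the short calculation below reproduces the 790-million-ton estimate from the 145 gigawatts of projected capacity. The capacity factor and CO2 intensity used here are assumptions, not numbers taken from the report, so treat this as a back-of-the-envelope sketch.

```python
# Back-of-the-envelope check of the ~790 million metric tons per year figure.
# Capacity factor and CO2 intensity below are assumed values, not from the report.
new_capacity_gw = 145        # projected new US coal capacity by 2030 (from the article)
capacity_factor = 0.75       # assumed average utilization of the new plants
co2_tonnes_per_mwh = 0.83    # assumed emission intensity of uncontrolled coal generation

annual_generation_mwh = new_capacity_gw * 1000 * 8760 * capacity_factor
annual_co2_million_tonnes = annual_generation_mwh * co2_tonnes_per_mwh / 1e6
print(round(annual_co2_million_tonnes))  # ~790 million metric tons of CO2 per year
```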


Policymakers and scientists now recognize that the current growth of greenhouse gas emissions must be reversed and that emissions must be reduced substantially in order to combat the risk of climate change. Yet a dramatic increase in coal-fired power generation threatens to overwhelm all other efforts to lower emissions and virtually guarantees that these emissions will continue to climb. This would preclude any possibility of stabilizing greenhouse gas concentrations in the atmosphere at levels that would acceptably moderate the predicted rise in global temperatures.


In China and other developing countries experiencing strong economic growth, demand for power is surging dramatically, with low-cost coal the fuel of choice for new power plants. Emissions in these countries are now rising faster than in developed economies in North America and Europe: China will soon overtake the United States as the world's number one greenhouse gas emitter. With the power sector expanding rapidly, China and India will fall further behind in controlling greenhouse gas emissions unless new coal plants adopt emission controls. Lack of progress in these countries would doom to failure global efforts to combat global warming.


The Promise of Carbon Capture and Storage


Fortunately, there is a potential pathway that would allow continued use of coal as an energy source without magnifying the risk of global warming. Technology currently exists to capture CO2 emissions from coal-fired plants before they are released into the environment and to sequester that CO2 in underground geologic formations. Energy companies boast extensive experience sequestering CO2 by injecting it into oil fields to enhance oil recovery. Although additional testing is needed, experts are optimistic this practice can be replicated in saline aquifers and other geologic formations that are likely to constitute the main storage reservoirs for CO2 emitted from power plants.


However, these so-called carbon capture and storage, or CCS, systems require modifications to existing power plant technologies. Today the prevailing coal-based generation technology in the United States is pulverized coal, with high-temperature (supercritical and ultrasupercritical) designs available to improve efficiency. It is possible to capture CO2 emissions at these pulverized coal units, but the CO2 capture technology currently has performance and cost drawbacks.


But there's a new coal-based power generation technology, Integrated Gasification Combined Cycle, or IGCC, which allows CCS systems in new plants to more efficiently capture and store CO2 because the CO2 can be removed before combustion. Motivated by this advantage, some power plant developers have announced plans to use IGCC technology but very few have committed to installing and operating CCS systems.


The great challenge is ensuring that widespread deployment of CCS systems at new IGCC and pulverized coal plants occurs on a timely basis. Despite growing recognition of the promise of carbon capture and storage, we are so far failing in that effort. The consequences of delay will be far-reaching: a new generation of coal plants could well be built without CO2 emission controls.


Barriers to the Adoption of Carbon Capture and Storage Systems


Industry experts today are projecting that only a small percentage of new coal-fired plants built during the next 25 years will use IGCC technology. IGCC plants currently cost about 20 percent to 25 percent more to build than conventional state-of-the-art coal plants using supercritical pulverized coal, or SCPC, technology. What's more, because experience with IGCC technology is limited, IGCC plants are still perceived to have reliability and efficiency drawbacks.


More importantly, IGCC plants are not likely to capture and sequester their CO2 emissions in the current regulatory environment since add-on capture technology will reduce efficiency and lower electricity output. This will increase the cost of producing electricity by 25 percent to 40 percent over plants without CCS capability.


These barriers can be partially overcome by tax credits and other financial incentives and by performance guarantees from IGCC technology vendors. Even with these measures, however, it is unlikely that IGCC plants will replace conventional coal plants in large numbers or that those plants which are built will capture and store CO2. There are two reasons for this.


First, even cost-competitive new technologies are usually not adopted rapidly, particularly in a conservative industry such as the utility sector, where the new technology is different from the conventional technology. This is the case with IGCC plants, which are indeed more like chemical plants than traditional coal-fired plants.


Second, there is now no business motivation to bear the cost of CCS systems when selecting new generation technologies even though the cost of electricity from IGCC plants is in fact lower than from SCPC plants once CCS costs are taken into account. This is because plant owners are not required to control greenhouse gas emissions and CCS systems are unnecessary for the production of power. The upshot: IGCC units (with and even without CCS capability) will lack a competitive edge over SCPC units unless all plant developers are responsible for cost-effectively abating their CO2 emissions. No such requirement exists today.


A New Policy Framework to Stimulate the Adoption of CCS Systems


This paper considers how best to change the economic calculus of power plant developers so they internalize CCS costs when selecting new generation technologies. Five policy tools are analyzed:


Establishing a greenhouse gas cap-and-trade program
Imposing carbon taxes
Defining CCS systems as a so-called Best Available Control Technology for new power plants under the Clean Air Act's New Source Review program
Developing a "low carbon portfolio" standard that requires utilities to provide an increasing proportion of power from low-carbon generation sources over time
Requiring all new coal power plants to meet an "emission performance" standard that limits CO2 emissions to levels achievable with CCS systems.
Each of these tools has advantages and drawbacks but an emission performance standard for new power plants is likely to be most effective in spurring broad-scale adoption of CCS systems.


In the current U.S. political environment, a cap-and-trade system is unlikely to result in a sufficiently high market price for CO2 (around $30 per ton) in the early years of a carbon control regime to assure that all coal plant developers adopt CCS systems. At lower carbon prices, plant developers could well conclude that it is more economical to build uncontrolled SCPC plants and then purchase credits to offset their emissions. A carbon tax that is not set at a sufficiently high level likely would have the same consequences.


A low carbon portfolio standard would be complex and difficult to implement because of the wide variations in generation mix between different regions. Moreover, unless the standard sets stringent targets for low carbon generation, it would not preclude construction of uncontrolled coal plants.


Although the recent Supreme Court decision defining CO2 as a "pollutant" has opened the door to controlling new power plant emissions under the New Source Review program, legal uncertainties may prevent the Environmental Protection Agency from defining CCS systems as the Best Available Control Technology under current law. Individual states could also reject CCS systems during permitting reviews. Moreover, the New Source Review program would not allow flexible compliance schedules for installing and operating CCS systems, nor would it provide financial incentives to offset the increased cost of electricity.


How Emission Performance Standards for New Coal Plants Would Work


In contrast to other approaches, an emission performance standard that limits new plant emissions to levels achievable with CCS systems would provide certainty that new coal plants in fact capture and store CO2. To provide a clear market signal to plant developers, this standard would apply to all new plants built after a date certain, although some flexibility would be allowed in the timing of CCS installation so that the power generation industry can gain more experience with various types of capture technology and underground CO2 storage. For example, all plants that begin construction after 2008 could be subject to the standard and would be required to implement carbon capture technology by 2013, and then to meet all sequestration requirements by 2016.


To provide additional flexibility while CCS technology is being perfected, plant developers during the first three years in which the new performance standard is in effect could have the option to construct traditional coal plants that do not capture and sequester CO2 if they offset on a one-to-one basis their CO2 emissions by taking one or more of the following steps:


Improving efficiencies and lowering CO2 emissions at existing plants
Retiring existing coal or natural gas units that generate CO2 emissions
Constructing previously unplanned renewable fuel power plants representing up to 25 percent of the generation capacity of the new coal plant.
In 2011, this alternate compliance option would sunset and all new plants subsequently entering construction would need to capture and sequester their emissions.


An emission performance standard for new coal plants should be accompanied by a cap-and-trade program for existing power plants, with the cap starting at 100 percent of emissions and progressively declining over time. A declining cap would encourage greater efficiencies in operating existing plants and incentivize the retirement of higher emitting existing plants. This would assure that an emission performance standard for new plants does not simply prolong the useful life of older plants. In addition, as the cap declines, retrofitting existing plants with CCS systems could become a viable option.


Mitigating Electricity Price Hikes


If legislation requiring an emission performance standard for new coal plants is enacted, then Congress should simultaneously take steps to offset the additional costs of installing CCS systems and provide relief from electricity price increases. This would prevent disproportionate costs from falling upon consumers who live in regions heavily dependent on coal for power generation. By reducing the financial risks and uncertainties of building power plants with CCS systems, it would also encourage investments in such plants by developers and their financial backers.


One approach would be to create a fund to "credit" utilities for all or part of the price increase that consumers would otherwise bear if they receive power from plants with CCS systems. Alternatively, financial incentives could be offered to plant developers which, in combination, offset a significant portion of the incremental costs of installing a CCS system as opposed to operating a coal-fired plant that does not control CO2 emissions. This new incentive program would replace current incentive programs for IGCC plants and other coal technologies that do not include CCS systems.


Assuming that government incentives cover 10 percent to 20 percent of total plant construction costs and that they apply to the first 80 gigawatts of new coal capacity with CCS systems built by 2030, these incentives could cost in the range of $36 billion over 18 years. Although $36 billion is a large sum, it is only a fraction of the $1.61 trillion that the International Energy Agency predicts will be invested in new power plants in the United States between now and 2030.
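
To see where a figure like $36 billion can come from, the sketch below backs out the implied plant construction cost from the article's own numbers (80 gigawatts, a 10 to 20 percent incentive share, and the $36 billion total). The 15 percent midpoint and the resulting per-kilowatt cost are derived assumptions, not figures stated in the report.

```python
# Backing out the implied numbers behind the $36 billion estimate.
# The 15 percent incentive share is an assumed midpoint of the 10-20 percent range.
ccs_capacity_gw = 80             # first 80 GW of new coal capacity with CCS (from the article)
incentive_share = 0.15           # assumed midpoint of the stated 10-20 percent range
total_incentives_dollars = 36e9  # the article's cost estimate

subsidy_per_kw = total_incentives_dollars / (ccs_capacity_gw * 1e6)
implied_plant_cost_per_kw = subsidy_per_kw / incentive_share
print(subsidy_per_kw)             # 450.0  -> about $450 of subsidy per kW
print(implied_plant_cost_per_kw)  # 3000.0 -> implies roughly $3,000/kW of construction cost
```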


Building a Technical and Regulatory Foundation for CCS Systems


Once the nation commits to a rapid timetable for requiring CCS systems at all new coal plants under an emission performance standard, then all of our regulatory and research and development efforts should be focused on implementing CCS technology as effectively as possible. This would require:


An enhanced R&D program for capture technologies at both SCPC and IGCC facilities to reduce the costs of capture as quickly as possible
An accelerated program to gain large-scale experience with sequestration for a range of geologic formations
A comprehensive national inventory of potential storage reservoirs
A new regulatory framework for evaluating, permitting, monitoring, and remediating sequestration sites and allocating liability for long-term CO2 storage.


Maintaining the Viability of Coal in a Carbon-Constrained World


Although an emission performance standard that requires CCS systems for all new coal plants would pose a daunting technological and economic challenge, it will ultimately assure coal a secure and important role in the future U.S. energy mix. Such a standard would establish a clear technological path forward for coal, preserving its viability in a carbon-constrained world and giving the utility industry confidence to invest substantial sums in new coal-fired power generation. In contrast, continued public opposition and legal uncertainties may cause investors to withhold financing for new coal plants, placing the future of coal in jeopardy.


If the United States is successful in maintaining the viability of coal as a cost-competitive power source while addressing climate concerns, our leadership position would enable U.S. industries to capture critical export opportunities to the very nations facing the largest challenges from global warming. Once our domestic marketplace adopts CCS systems as power industry standards, the opportunities to export this best-of-breed technology will grow exponentially.


This will be critical to combating the massive rise of coal-derived greenhouse gas emissions in the developing world. Boosting exports while also helping China, India, and other developing nations reduce emissions and sustain economic growth would be a win-win-win for our economy, their economies, and the global climate.






Read the full report (PDF)





YouTube Copyright Enforcement System: If you don't own me, don't use me!


It is really good to hear of action being taken against piracy.


The technology is designed to let content owners prevent YouTube users from uploading copies of their videos, or, if they prefer, to monetize unauthorized uploads with ads.


Seven months ago, Viacom filed a copyright infringement lawsuit demanding $1 billion from Google and YouTube and charging the companies with "brazen disregard" for intellectual property laws and threatening "the economic underpinnings of one of the most important sectors of the United States economy." On Tuesday, YouTube finally launched a content identification system, YouTube Video Identification, to give copyright owners some measure of control over the presence of their content on the site.


The new service requires that content owners upload videos they wish to protect so that a "hash" -- a numeric fingerprint of sorts -- can be created. That done, content owners will be able to prevent YouTube users from uploading copies of their videos; they will also have the choice of monetizing unauthorized uploads with ads.
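
YouTube has not published the details of its fingerprinting technology, so the snippet below is only a toy illustration of the general idea: reduce each video to a set of compact frame signatures and flag uploads whose signatures overlap heavily with a protected reference. A real system would use perceptual features robust to re-encoding, cropping and noise; the exact hashing shown here would not survive those changes.

```python
# Toy sketch of content fingerprinting in general.
# This is NOT YouTube's Video Identification algorithm: frame decoding and the
# robust perceptual features a real system needs are deliberately omitted.
import hashlib

def frame_signature(frame_bytes):
    """Reduce one video frame to a short signature (here, a plain SHA-1 digest)."""
    return hashlib.sha1(frame_bytes).hexdigest()[:16]

def fingerprint(frames):
    """A video's fingerprint: the set of its frame signatures."""
    return {frame_signature(f) for f in frames}

def looks_like_copy(reference_frames, uploaded_frames, threshold=0.8):
    """Flag an upload whose frame signatures overlap heavily with a protected reference."""
    ref_fp = fingerprint(reference_frames)
    up_fp = fingerprint(uploaded_frames)
    overlap = len(ref_fp & up_fp) / max(len(up_fp), 1)
    return overlap >= threshold

# Toy usage with fake "frames" standing in for decoded video frames
protected = [b"frame-%d" % i for i in range(100)]
upload = protected[10:95]  # an upload that copies most of the protected video
print(looks_like_copy(protected, upload))  # True
```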


"Video Identification goes above and beyond our legal responsibilities," said David King, YouTube Product Manager, in a blog post. "It will help copyright holders identify their works on YouTube, and choose what they want done with their videos: whether to block, promote, or even -- if a copyright holder chooses to license their content to appear on the site -- monetize their videos."


YouTube's and Google's legal responsibilities are at issue in Viacom's copyright lawsuit.


Under the Digital Millennium Copyright Act, Google, as an Internet service provider, escapes liability for copyright infringement by its users if it responds quickly to notifications of copyright infringement.


Viacom claims that Google and YouTube "actively engage in, promote and induce this infringement," and thus shouldn't qualify for safe harbor protection.


It's not yet clear whether YouTube's new technology will prompt Viacom to drop its copyright claim. When the lawsuit was first filed, pundits observed that the lawsuit was a negotiating tactic to force concessions of some sort from Google.


In April, at the Web 2.0 Expo in San Francisco, Calif., Google CEO Eric Schmidt predicted that as Google rolls out its content protection system, "the issues in Viacom become moot."


Yet, 64 legal filings later, the case chugs along, with Viacom still apparently set on a $1 billion payday.


In his post, King pointed out that Google already has a number of content policies and tools in place to help copyright owners. These include account terminations for repeat infringers, technical measures to prevent videos that have been removed from being re-uploaded, a 10-minute limit on the length of uploaded content, an electronic notice and takedown tool, and prominent copyright compliance tips for users.





Global warming: Finding the clean tech money


The effects of global warming will not be easy to face, so the question of what kind of clean tech product will thrive over the long term is one worth evaluating seriously.


"Something that doesn't defy laws of physics, and there are plenty of those," said Rodrigo Prudencio, a partner with Nth Power LLC. The venture capital firm helped Evergreen Solar and Imperium Renewables to get off the ground.


Nobody at the AlwaysOn Going Green conference was making bold predictions about what might become the Google of green tech, but the sector is expected to continue expanding at a rapid clip.


Clean tech companies receive the third largest amount of venture capital, a staggering increase to $2.4 billion last year from $917 million in 2005, according to research by Clean Edge. Ninety percent of venture-backed, green tech companies that made initial public offerings last year are listed on the Nasdaq market.


"There will be new ways to squeeze that last bit out of a kilowatt and new ways to create that kilowatt," said Steve Eichenlaub, managing director of Cleantech Investments at Intel Capital. He and other investment experts offered these tips:


Don't burn out by shooting for every initial public offering. "You still have to be careful," said JonCarlo Mark, senior portfolio manager at CalPERS. "There will be money lost in certain technologies and investments, but there's a need to diversify from fossil fuels."
Although unglamorous, technologies that improve energy efficiency, from manufacturing plants to workplaces to homes, will be in high demand as businesses and consumers seek to reduce expenses and carbon emissions. "All companies making incremental improvements in the energy economy are gonna move the needle," said Prudencio.
Renewable sources of energy that don't lean on government subsidies or tax incentives look promising.
Think globally, far into the future. For instance, the need for water filtration and treatment will balloon as the world's population exceeds 8 billion within the next decade, and more people migrate to coastal regions.
"Climate change aside, anything that takes hazardous waste out of the market is gonna be a huge market for investment," said Keith Casto, a partner of Sedgwick, Detert, Moran & Arnold who heads the law firm's international climate change practice. Companies that use recycled components in manufacturing can save money they might otherwise spend on a dwindling supply of raw materials.





Tuesday, October 16, 2007

Because vascular health impacts many different diseases


Vascular health impacts many different diseases..........


The finding not only offers an important insight into how the vascular system forms during embryonic development but also suggests a potential target for inhibiting the blood vessels that fuel cancers, diabetic eye complications and atherosclerosis, the researchers say.


The study was conducted in the zebrafish, the tiny, blue-and-silver striped denizen of India's Ganges River and many an aquarium. A "News and Views" commentary on the paper will run in the same issue.


"We expect this finding will offer important insights into blood vessel formation in humans," says lead author Massimo Santoro, PhD, UCSF visiting postdoctoral fellow in the lab of senior author Didier Stainier, PhD, UCSF professor of biochemistry and biophysics. "The zebrafish has proven to be an important model for discovering molecules relevant to human disease."


Angiogenesis, or the growth of blood vessels, is active not only during embryonic development but throughout the life of the body, providing a source of oxygenated blood to tissues damaged by wounds.


However, it is also active in a number of disease processes, including cancer. Without a blood supply, tumors cannot grow beyond the size of a small pea. Cancerous tumors release chemical signals into their environment that stimulate healthy blood vessels to sprout new vessels that then extend into the tumors. During the last decade, scientists have identified several molecules that promote angiogenesis. A drug that inhibits these molecules is now commercially available and others are being studied in clinical trials.


Scientists are also exploring strategies for stimulating the growth of new blood vessels in patients whose clogged arteries prevent a sufficient blood supply to the heart muscle.


In the current study, the UCSF team determined that two well known signaling molecules, birc2 and TNF, are crucial to the survival of endothelial cells -- which line the blood vessels and maintain the integrity of the blood vessel wall during vascular development -- in zebrafish embryos.


"The pathway these molecules make up during vascular development has not been looked at before," says Stainier. "It offers a new target for therapeutic strategies."


The birc2 gene belongs to a family of proteins that control the balance between cell survival and cell death (apoptosis). A cell induces apoptosis when it detects that it is irreparably damaged. The integrity of the blood vessel wall is determined by a dynamic balance between endothelial cell survival and apoptosis.


The scientists started the investigation by examining zebrafish with unusual physical characteristics and working to identify the mutated genes that were responsible for the traits.


"We began with a genetic mutant that displayed vascular hemorrhage associated with vascular defects, and soon proved that the mutant had a defective birc2 gene," says Santoro. "Without the birc2 gene, hemorrhage and blood pooling occurred, resulting in vascular regression and cell death."


Next, through a series of genomic analyses and biochemical studies, the team discovered the critical role of birc2 and TNF in blood vessel health in the zebrafish embryo. They showed that birc2 is needed for the formation of tumor necrosis factor receptor complex 1, a group of proteins and peptides that initiates signals promoting cell survival. Tumor necrosis factor promotes activation of NF-kB, a protein complex that acts as a transcription factor, controlling the transcription of genetic information. Further tests demonstrated a genetic link between birc2 and the NF-kB pathway and showed that this link is critical for vascular health and endothelial cell survival.


"Studies on vascular development are important so that we can better understand the molecular basis of some endothelial cell-related pathologies, such as cancer and [diabetic eye complications, known as] retinopathies," Santoro said. "It can also help us design new therapeutic strategies for these diseases."


The team hopes that future researchers will investigate other avenues and alternative pathways. "Because vascular health impacts many different diseases, understanding how to genetically control endothelial cell survival and apoptosis is critical to future work in these areas."





Monday, October 15, 2007

Nanotechnology Milestone for Quadrupling Terabyte Hard Drive by Hitachi




Hitachi Global Storage Technologies, the hard drive arm of the Japanese conglomerate, has made what it says is the world's smallest read head for hard drives.


And, if it comes out in 2011 or so as expected, the head will allow Hitachi to continue to increase the density of drives, said John Best, Hitachi's CTO. Current top-of-the-line desktop drives hold a terabyte.


With the new, elegantly named current perpendicular-to-the-plane giant magneto-resistive heads (CPP-GMR heads to you laypeople), drive makers will be able to come out with 4 terabyte drives in 2011 and/or 1 terabyte notebook drives.


The CPP-GMR drive essentially changes the structure of drive heads. Current drives come with a tunnel magnetoresistance head. In these, an insulating layer sits between two magnetic layers. Electrons can tunnel through the layer. Precisely controlling the tunneling ultimately results in the 1s and 0s of data.


Unfortunately, drive heads need to be shrunk as areal density, the measure of the amount of data that can be squeezed onto a square inch of media, increases. Shrinking the heads increases electrical resistance, which in turn creates electrical noise and potential degradation in performance. Past 500 gigabits per square inch of areal density, TMR heads may not work reliably. (Current top-end drives exhibit an areal density of around 200 gigabits per square inch.)


In a CPP-GMR head, the insulator is eliminated and replaced by a conductor, usually copper. Instead of running parallel with the middle layer, the current runs at a perpendicular angle. The structure reduces resistance and thus allows the head to be shrunk.


Put another way, current drive heads can read media where the tracks are 70 nanometers apart. The CPP-GMR heads will be capable of reading media where the tracks are 50 nanometers apart or smaller. Fifty-nanometer tracks are expected to arrive in 2009, and 30-nanometer tracks in 2011.
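
For a sense of how track pitch relates to the densities quoted above, the rough sketch below multiplies tracks per inch by bits per inch along the track. Only the track pitches and the approximate 200 and 500 gigabit-per-square-inch figures come from the article; the bit lengths are assumptions chosen so that the 70-nanometer case lands near today's density.

```python
# Rough sketch relating track pitch to areal density.
# Bit lengths are assumed for illustration; the article gives only the track
# pitches (70 nm today, then 50 nm and 30 nm) and the ~200 and ~500 Gbit/in^2 figures.
NM_PER_INCH = 25.4e6

def areal_density_gbit_per_sq_inch(track_pitch_nm, bit_length_nm):
    """Areal density = (tracks per inch) * (bits per inch along the track)."""
    tracks_per_inch = NM_PER_INCH / track_pitch_nm
    bits_per_inch = NM_PER_INCH / bit_length_nm
    return tracks_per_inch * bits_per_inch / 1e9

print(areal_density_gbit_per_sq_inch(70, 46))  # ~200 Gbit/in^2, roughly today's drives
print(areal_density_gbit_per_sq_inch(50, 25))  # ~520 Gbit/in^2, past the assumed TMR limit
print(areal_density_gbit_per_sq_inch(30, 20))  # ~1075 Gbit/in^2, the 2011-era target
```

The point of the sketch is simply that shrinking the track pitch alone is not enough; the bits along each track must shrink too, which is why read heads that stay low-noise at smaller sizes matter.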


Before TMR heads, the industry used more conventional GMR heads, but the current in the older versions ran parallel with the insulating layer.


"In a sense, it (GMR) is making a comeback in a different form," said Best.


Earlier this month, France's Albert Fert and Germany's Peter Gruenberg won the Nobel Prize in Physics for their 1988 discoveries of giant magnetoresistance.


The first commercial drives with CPP-GMR heads will likely come in 2009 or 2010.


Hitachi will present these achievements at the Perpendicular Magnetic Recording Conference next week in Tokyo.



