Food in the sky? Highrise farming idea gains ground



Belgian architect Vincent Callebaut poses as he presents a picture of one of his futuristic projects on January 10, 2014 in Paris

Imagine stepping out of your highrise apartment into a sunny, plant-lined corridor, biting into an apple grown in the orchard on the fourth floor as you bid "good morning" to the farmer off to milk his cows on the fifth.

You take the lift to your office, passing the rice paddy and one of the many gardens housed in the glass edifice that not only heats and cools itself, but also captures rainwater and recirculates domestic waste as plant food.

No, this is not the setting for a futuristic movie about humans colonising a new planet.

It is the design of Belgian architect Vincent Callebaut for a 132-floor "urban farm" - the answer, he believes, to a healthier, happier future for the estimated six billion people who will live in cities by 2050.

With food, water and energy sources dwindling, the city of the future will have to be a self-sufficient "living organism", said the 36-year-old designer of avant-garde buildings some critics have dismissed as daft or a blight on the landscape.

"We need to invent new ways of living in the future," Callebaut told AFP at the Paris studio where he plies his trade.

"The city of tomorrow will be dense, green and connected. The goal is to bring agriculture and nature back into the urban core so that by 2050. we have green, sustainable cities where humans live in balance with their environment."

Each building, he said, must ultimately be a "self-sufficient, mini-power station."

The quest for sustainable urban living has never been more urgent as people continue flocking to cities which encroach ever more onto valuable rural land, gobbling up scarce natural resources and making a disproportionate contribution to pollution and Earth-warming carbon emissions.

Enter Callebaut with his project "Dragonfly", a design for a massive, twin-towered "vertical farm" on New York's Roosevelt Island.


Belgian architect Vincent Callebaut poses in front of a picture of one of his futuristic projects on January 10, 2014 in Paris

From each tower springs a large, glass-and-steel wing, so that the edifice resembles the insect after which it was named.

The draft structure includes areas for meat, dairy and egg production, orchards, meadows and rice fields along with offices and flats, gardens and public recreation spaces.

Energy is harvested from the Sun and wind, and hot air is trapped between the building "wings" to provide heating in winter. In summer, cooling is achieved through natural ventilation and transpiration from the abundant plant growth.

Plants grow on the exterior shell to filter rain water, which is captured and mixed with liquid waste from the towers, treated organically and used as fertiliser.

And at the base of the colossus: A floating market on the East River for the inhabitants to sell their organic produce.

"They made fun of me. They said I created a piece of science fiction," Callebaut says of his detractors.

But as awareness has grown of the plight of our planet, overpopulation and climate change, his ideas have gained traction, and the Dragonfly design has been exhibited at an international fair in China.

No buyers, but rising interest

Callebaut has also drafted a concept for a floating city resembling a lily pad that will house refugees forced from their homes by climate change.

And he hopes to sell a design for a "farmscraper" in Shenzhen, China that will include housing, offices, leisure space and food gardens.

As yet, Callebaut has found no buyers for these big projects.

"With the recent economic recession, politicians and government may... have been reluctant to venture into such new, large-scale endeavours that have not been tested before," Emilia Plotka, a sustainability expert at the Royal Institute of Royal Architects, told AFP of Dragonfly and similar projects.

But she pointed out the concept has inspired other, smaller projects.

"Instead of majestically tall bionic towers plonked in riverbeds, vertical farms have been rather more modestly integrated into existing buildings, derelict industrial sites and floating barges," said Plotka.

One example is the Pasona Urban Farm, a nine-storey office building in Tokyo that allows employees to grow their own food in specially reserved green spaces at work.

"Whilst the buy-in may not be as noticeable at the moment, it certainly is widespread and growing," said Plotka of the "vertical farm" movement.

"I suspect most other new vertical farms will remain hidden in disused urban spaces or existing business and domestic blocks, which is not bad at all as they will use fewer resources to be set up and enhance their surrounding environments and communities."

Phys.org


Now read: Grass the new biofuel

New 'Bioengineered Skin' Gets Closer to the Real Thing


Successfully tested on rats, the lab-grown product has blood and lymph vessels, scientists say.

People who need skin grafts because of burns or other injuries might someday get lab-grown, bioengineered skin that works much like real human skin, Swiss researchers report.

This new skin not only has its own blood vessels but also - and just as important - its own lymphatic vessels. The lymph vessels are needed to prevent the accumulation of fluids that can kill the graft before it has time to become part of the patient's own skin, the researchers said.

The discovery that lymph vessels can be grown in a laboratory also opens up "a broad spectrum of possibilities in the field of tissue engineering, since all organs in the human body, with the exception of the brain and inner ear, contain lymph vessels," said lead researcher Daniela Marino, from the Tissue Biology Research Unit at University Children's Hospital Zurich.

"These data strongly suggest that if an engineered skin graft containing both blood and lymph vessels would be transplanted on human patients, fluid formation would be hindered, wound healing would be improved and regeneration of a near natural skin would be greatly promoted," Marino said.

The researchers said that until now, bioengineered skin grafts have lacked many of the components of real skin, including blood and lymphatic vessels, pigmentation, sweat glands, nerves and hair follicles.

Blood vessels transport nutrients, oxygen and other essential factors that keep organs alive and functioning. Lymph vessels remove fluid from the tissue and return it to the bloodstream.

"When skin is wounded, fluid builds up in the damaged tissue," Marino said. "If not efficiently removed, it accumulates to form so-called seromas, which may impair wound closure and skin regeneration."

To create the new skin, Marino's group used human cells from blood and lymph vessels, placing them in a solution that scattered the cells onto a skin-like gel. After time in an incubator, the mixture grew into skin grafts.

The researchers then tested the grafts on rats and found that the bioengineered skin turned into near-normal skin. Once connected to the rats' own lymph system, the graft collected and drew fluid away from the tissue just as normal skin does.

Skin grafts grown this way might find their best use in patients with severe burns who do not have enough of their own healthy skin available for grafts, the researchers said.

Experts note, however, that results in animals don't always hold up when tested in people. But Marino said she is hopeful that trials in humans are not too far away.

Not everyone is sure there will be a big role for these types of grafts, however.

Dr. Alfred Culliford, director of plastic, reconstructive and hand surgery at Staten Island University Hospital in New York City, called the bioengineered tissue "a technology in search of a purpose."

"I don't think it will be broadly applicable to many people who need skin grafts," Culliford said. "It may be helpful in burn patients who have had a large portion of their body surface burned and don't have enough healthy skin to transplant."

Culliford said the best grafts for most patients still come from the patient's own skin. In addition, he said he doesn't believe adding lymph vessels to a graft is a great advance, since fluid drainage is now done by methods such as compressing the graft.

But Dr. Robert Glatter, an emergency physician and burn expert at Lenox Hill Hospital in New York City, saw more promise in the technology.

"Although we are still in animal models, in the near term there is a significant possibility this could remarkably change the way we deal with non-healing wounds," Glatter said.

Non-healing wounds are generally found among people with diabetes or vascular disease whose own skin doesn't function normally. "They don't heal well with standard skin grafts," Glatter said.

For her part, Marino said the newer tissue is a real advance.

"Taken together, it is most important to have both blood and lymph vessels in a bioengineered skin to initiate nutrition soon after transplantation and to maintain the balance of tissue fluids," she said. "This long-awaited step in regenerative medicine is now in reach."

The report was published Jan. 29 in the journal Science Translational Medicine.

Daniela Marino, Ph.D., Tissue Biology Research Unit, University Children's Hospital Zurich; Alfred Culliford, M.D., director, plastic, reconstructive and hand surgery, Staten Island University Hospital, New York City; Robert Glatter, M.D., emergency physician, Lenox Hill Hospital, New York City; Jan. 29, 2014, Science Translational Medicine

"Bioengineering Dermo-Epidermal Skin Grafts with Blood and Lymphatic Capillaries," by D. Marino et al. Science Translational Medicine, 2014.

Healthday.com


Now read: Gold nanoparticle artificial skin could sense touch, humidity, temperature all at the same time

Grey is the new black hole: is Stephen Hawking right?

Stephen Hawking stirs the debate on black holes. But is he right? Credit: Flickr/NASA HQ PHOTO

Over the past few days, the media has trumpeted the recent proclamation from Stephen Hawking that black holes, a mystery of both science and science fiction, do not exist.

Such statements send social media into conniptions, and comments quickly degenerate into satirical discussions of how you should never believe anything scientists say, as they just make it up anyway.

Science, it is often suggested, is little different to religion, with the current clergy awaiting the latest proclamation from the giants in the field. And, in modern physics, you do not get much more of a giant than Stephen Hawking. But what does this new pronouncement mean? Are textbooks to be rewritten, something that would put an immense smile on the faces of textbook publishers?

To answer, we need to take a step back and look at what we mean by black holes, and work out where Hawking's problems begin.

A classical black hole

In 1915, Einstein derived the equations of general relativity, revolutionising our view of gravity. While Einstein struggled with his equations, the German physicist Karl Schwarzschild was able to use them to determine the gravitational field outside of a spherical distribution of mass.

But the conclusions of Schwarzschild were rather frightening, predicting that objects could completely collapse, with mass crashing down to a central "singularity", surrounded by a gravitational field from which even light cannot escape. For any black hole, the delineation between light escaping and being trapped is a well-defined surface - the event horizon - separating our universe from the mysteries close to the black hole.
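
For a non-rotating mass M, Schwarzschild's solution places that surface at a definite radius, the Schwarzschild radius (a standard textbook result quoted here for context, not taken from the article):

$$ r_s = \frac{2GM}{c^2} $$

Plugging in the mass of the Sun gives roughly 3 kilometres: the Sun would have to be squeezed inside a ball that small before light could no longer escape it.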

With this, the notion of the "classical" black hole was born, governed purely by the equations of general relativity. But while we know general relativity governs the force of gravity, the early 20th century saw a revolution in the understanding of the other fundamental forces, describing them in exquisite detail in terms of quantum mechanics.

A quantum leap

But the problem is that general relativity and quantum mechanics just don't play well together. Simply put, the equations of quantum mechanics can't describe gravity, whereas general relativity can only handle gravity.

To talk about them both in situations where gravity is strong and quantum mechanics cannot be ignored, the best we can do at the moment is sticky-tape the equations together; until we have a unified theory of gravity and the other forces, that rough join will have to suffice.

Stephen Hawking undertook one of the most famous attempts at this in the early 1970s. He wondered about what was happening at the event horizon in terms of quantum mechanics, where empty space is a seething mass of particles popping in and out of existence. At the horizon, this process separates particles, with some sucked into the central singularity, while their partners escape into space.

What Hawking showed, through this jerry-rigged version of gravity and quantum mechanics, is that black holes leak radiation into space, slowly sucking energy from their gravitational core, and that, given enough time, black holes evaporate completely into radiation. When quantum mechanics is thrown into the mix, the notion of a "classical black hole" is dead.
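
In Hawking's semiclassical calculation the temperature of that leaked radiation is inversely proportional to the black hole's mass (again a standard result, not spelled out in this article):

$$ T_H = \frac{\hbar c^{3}}{8\pi G M k_B} $$

For a black hole of one solar mass this works out to around 60 nanokelvin, far colder than the cosmic microwave background, which is why the evaporation of astrophysical black holes would take unimaginably long.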



A composite image showing jets and radio-emitting lobes emanating from Centaurus A’s central black hole. Credit: NASA/ESO/WFI

Teapots and black holes

There is, however, a bigger problem in including quantum mechanics into the study of gravity, and that problem is information.

Quantum mechanics cares intensely about information, and worries about the detailed make-up of an object like a teapot: how many protons and electrons are there, and where are they? It cares about the fact that a teapot is a teapot, a particular arrangement of electrons and protons which is different to something else, like a light beam or a sofa.

When the teapot is thrown into a black hole, it is completely destroyed, first smashed into a million pieces, then atomised, and then the atoms ripped into their constituent parts, before being absorbed into the central singularity.

But the radiation that Hawking predicted being emitted from black holes doesn't contain any information about what fell in; no matter how well you examine the radiation, you can't tell whether it was a teapot, a fridge or a small iguana called Colin that met its demise.

To many, this seems like a trivial matter. But in reality, quantum mechanics is the study of information, tracing the flow and interaction of fundamental bits of information in the Universe.

Erasing information, therefore, is a very big deal, and in recent years researchers have examined various ways in which the information swallowed by a black hole is somehow preserved.

Pushing boundaries

It must be remembered that we are now pushing the boundaries of modern physics and, as we do not have a single mathematical framework where gravity and quantum mechanics play nicely together, we have to worry a little about how we have glued the two pieces together.

In 2012, the problem was revisited by US physicist Joseph Polchinski. He examined the production of Hawking radiation near the event horizon of a black hole, watching how pairs of particles born from the quantum vacuum separate, with one lost irretrievably into the hole, while the other flies off into free space.

With a little mathematical trickery, Polchinski asked the question: "What if the information of the infalling particle is not lost into the hole, but is somehow imprinted on the escaping radiation?"

Like the breaking of atomic bonds, this reassignment of information proves to be very energetic, surrounding a black hole with a "firewall", through which infalling particles have to pass. As the name suggests, such a firewall will roast Colin the iguana to a crisp. But at least information is not lost.

While this presents a possible solution, many are bothered by its consequences: the existence of a firewall means that Colin will notice a rapid increase in temperature, so he will know he is at the event horizon. This goes against one of the key tenets of general relativity, namely that an infalling observer should happily sail through the event horizon without noticing that it is there.

Back to Hawking

This is where Hawking's recent paper comes in, suggesting that when you further stir quantum mechanics into general relativity, the seething mass of the vacuum prevents the formation of a crisp, well-defined event horizon, replacing it with a more ephemeral "apparent horizon".

This apparent horizon does the job of an event horizon, trapping matter and radiation within the black hole, but this trapping is only temporary, and eventually the matter and radiation are released carrying their stored information with them.

As black holes no longer need to leak information back into space, but can now release it in a final burst when they have fully evaporated, there is no need to have a firewall and an infalling observer will again have a roast-free ride into the black hole.

Are black holes no more?

To astronomers, the mess of fundamental physics at the event horizon has little to do with the immense gravitational fields produced by these mass sinks at the cores of galaxies, powering some of the most energetic processes in the universe. Astrophysical black holes still happily exist.

What Hawking is saying is that, with quantum mechanics included, the notion of a black hole as governed purely by the equations of general relativity, the "classical black hole", does not exist, and the event horizon, the boundary between escape and no-escape, is more complex than we previously thought. But we've had inklings of this for more than 40 years since his original work on the issue.

In reality, the headlines should not be "black holes don't exist" but "black holes are more complicated than we thought, but we are not going to really know how complicated until gravity and quantum mechanics try to get along".

But one last vexing question: is Hawking right? I started this article by noting that science is often compared to religion, with practitioners awaiting pronouncements from on high, all falling into line with the latest dogma.

But that's not the way science works, and it is important to remember that, while Hawking is clearly very smart, to quote the immortal Tammy Wynette in Stand By Your Man, "after all, he's just a man", and just because he says something does not make it so.

Hawking's proposed solution is clever, but the debate on the true nature of black holes will continue to rage. I'm sure they will continuously change their spots, and their properties will become more and more head-scratchingly weird, but this is the way that science works, and that's what makes it wonderful.

courtesy The Conversation



Now read: Stephen Hawking’s new research: ‘There are no black holes’

Engineer brings new twist to sodium-ion battery technology with discovery of flexible molybdenum disulfide electrodes



A Kansas State University engineer has made a breakthrough in rechargeable battery applications. The bottom image shows a self-standing molybdenum disulfide/graphene composite paper electrode and the top image highlights its layered structure. Credit: Gurpreet Singh, Kansas State University

A Kansas State University engineer has made a breakthrough in rechargeable battery applications.

Gurpreet Singh, assistant professor of mechanical and nuclear engineering, and his student researchers are the first to demonstrate that a composite paper made of interleaved molybdenum disulfide and graphene nanosheets can be both an active material to efficiently store sodium atoms and a flexible current collector. The newly developed composite paper can be used as a negative electrode in sodium-ion batteries.

"Most negative electrodes for sodium-ion batteries use materials that undergo an 'alloying' reaction with sodium," Singh said. "These materials can swell as much as 400 to 500 percent as the battery is charged and discharged, which may result in mechanical damage and loss of electrical contact with the current collector."

"Molybdenum disulfide, the major constituent of the paper electrode, offers a new kind of chemistry with sodium ions, which is a combination of intercalation and a conversion-type reaction," Singh said. "The paper electrode offers stable charge capacity of 230 mAh.g-1, with respect to total electrode weight. Further, the interleaved and porous structure of the paper electrode offers smooth channels for sodium to diffuse in and out as the cell is charged and discharged quickly. This design also eliminates the polymeric binders and copper current collector foil used in a traditional battery electrode."

The research appears in the latest issue of the journal ACS Nano in the article "MoS2/graphene composite paper for sodium-ion battery electrodes."

For the last two years the researchers have been developing new methods for quick and cost-effective synthesis of atomically thin two-dimensional materials - graphene, molybdenum and tungsten disulfide - in gram quantities, particularly for rechargeable battery applications.

For the latest research, the engineers created a large-area composite paper that consisted of acid-treated layered molybdenum disulfide and chemically modified graphene in an interleaved structure. The research marks the first time that such a flexible paper electrode was used in a sodium-ion battery as an anode that operates at room temperature. Most commercial sodium-sulfur batteries operate close to 300 degrees Celsius, Singh said.

Singh said the research is important for two reasons:

1. Synthesis of large quantities of single or few-layer-thick 2-D materials is crucial to understanding the true commercial potential of materials such as transition metal dichalcogenides, or TMD, and graphene.

2. Fundamental understanding of how sodium is stored in a layered material through mechanisms other than the conventional intercalation and alloying reaction. In addition, using graphene as the flexible support and current collector is crucial for eliminating the copper foil and making lighter and bendable rechargeable batteries. In contrast to lithium, sodium supplies are essentially unlimited and the batteries are expected to be a lot cheaper.

"From the synthesis point of view, we have shown that certain transition metal dichalcogenides can be exfoliated in strong acids," Singh said. "This method should allow synthesis of gram quantities of few-layer-thick molybdenum disulfide sheets, which is very crucial for applications such as flexible batteries, supercapacitors, and polymer composites. For such applications, TMD flakes that are a few atoms thick are sufficient. Very high-quality single-layer flakes are not a necessity."

The researchers are working to commercialize the technology, with assistance from the university's Institute of Commercialization. They also are exploring lithium and sodium storage in other nanomaterials.

Kansas State University

phys.org


Now read: Folded paper lithium-ion battery increases energy density by 14 times

Physicists create synthetic magnetic monopole predicted more than 80 years ago

Artistic illustration of the synthetic magnetic monopole. Credit: Heikka Valja.

Nearly 85 years after pioneering theoretical physicist Paul Dirac predicted the possibility of their existence, an international collaboration led by Amherst College Physics Professor David S. Hall '91 and Aalto University (Finland) Academy Research Fellow Mikko Möttönen has created, identified and photographed synthetic magnetic monopoles in Hall's laboratory on the Amherst campus. The groundbreaking accomplishment paves the way for the detection of the particles in nature, which would be a revolutionary development comparable to the discovery of the electron.

A paper about this work co-authored by Hall, Möttönen, Amherst postdoctoral research associate Michael Ray, Saugat Kandel '12 and Finnish graduate student Emmi Ruokokoski was published today in the journal Nature.

"The creation of a synthetic magnetic monopole should provide us with unprecedented insight into aspects of the natural magnetic monopole if indeed it exists," said Hall, explaining the implications of his work.

Ray, the paper's lead author and first to sight the monopoles in the laboratory, agreed, noting: "This is an incredible discovery. To be able to confirm the work of one of the most famous physicists is probably a once-in-a-lifetime opportunity. I am proud and honored to have been part of this great collaborative effort."

Ordinarily, magnetic poles come in pairs: they have both a north pole and a south pole. As the name suggests, however, a magnetic monopole is a magnetic particle possessing only a single, isolated pole - a north pole without a south pole, or vice versa. In 1931, Dirac published a paper that explored the nature of these monopoles in the context of quantum mechanics. Despite extensive experimental searches since then, in everything from lunar samples (moon rock) to ancient fossilized minerals, no observation of a naturally-occurring magnetic monopole has yet been confirmed.
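
The heart of Dirac's 1931 argument is a quantization condition (quoted here in its standard Gaussian-units form for context; the article itself does not reproduce it): if even one monopole of magnetic charge g exists anywhere, consistency of quantum mechanics demands

$$ \frac{e\,g}{\hbar c} = \frac{n}{2}, \qquad n = 0, \pm 1, \pm 2, \ldots $$

for every electric charge e, which would neatly explain why electric charge is only ever observed in integer multiples of a basic unit.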



Hall's team adopted an innovative approach to investigating Dirac's theory, creating and identifying synthetic magnetic monopoles in an artificial magnetic field generated by a Bose-Einstein condensate, an extremely cold atomic gas tens of billionths of a degree warmer than absolute zero. The team relied upon theoretical work published by Möttönen and his student Ville Pietilä that suggested a particular sequence of changing external magnetic fields could lead to the creation of the synthetic monopole. Their experiments subsequently took place in the atomic refrigerator built by Hall and his students in his basement laboratory in the Merrill Science Center.

After resolving many technical challenges, the team was rewarded with photographs that confirmed the monopoles' presence at the ends of tiny quantum whirlpools within the ultracold gas. The result proves experimentally that Dirac's envisioned structures do exist in nature, explained Hall, even if the naturally occurring magnetic monopoles remain at large.

Finally seeing the synthetic monopole, said Hall, was one of the most exciting moments in his career. "It's not every day that you get to poke and prod the analog of an elusive fundamental particle under highly controlled conditions in the lab." He added that creation of synthetic electric and magnetic fields is a new and rapidly expanding branch of physics that may lead to the development and understanding of entirely new materials, such as higher-temperature superconductors for the lossless transmission of electricity. He also said that the team's discovery of the synthetic monopole provides a stronger foundation for current searches for magnetic monopoles that have even involved the famous Large Hadron Collider at CERN, the European Organization for Nuclear Research. (Older theoretical models that described the post-Big Bang period predicted that they should be quite common, but a special model for the expansion of the universe that was later developed explained the extreme rarity of these particles.)

Added Aalto's Möttönen: "Our achievement opens up amazing avenues for quantum research. In the future, we want to get an even more complete correspondence with the natural magnetic monopole."

Hall, who was recently named a Fellow of the American Physical Society, said his team's experimental work arose out of interest from Amherst summer student researchers at a group meeting in 2011, well after Pietilä and Möttönen's 2009 paper had appeared in Physical Review Letters. "It felt as though Pietilä and Möttönen had written their letter with our apparatus in mind," he said, "so it was natural to write them with our questions. Were it not for the initial curiosity on the part of the students we would never have embarked on this project."

Observation of Dirac Monopoles in a Synthetic Magnetic Field, M. W. Ray, E. Ruokokoski, S. Kandel, M. Möttönen, and D. S. Hall, Nature, 2014: dx.doi.org/10.1038/nature12954

The method used in the monopole creation was developed in: Creation of Dirac Monopoles in Spinor Bose-Einstein Condensates, V. Pietilä and M. Möttönen, Phys. Rev. Lett. 103, 030401 (2009): link.aps.org/abstract/PRL/v103/e030401

phys.org

Amherst College



Now read: Magnetic vortices could form the basis for future high-density, low-power magnetic data storage

Sony’s new genome analysis company is not playing catch-up with Calico



In a move that is becoming increasingly common for international technology giants, Sony will make a startling new push into an area of research that doesn’t seem to overlap at all with its current projects. Diversification is the corporate buzzword of the day, a frantic attempt to spread out a company’s weight as it tries to keep from falling through ever-thinning economic ice. The internet is threatening established media, display technology is advancing down all sorts of new paths, and the cloud is even bringing the necessity of physical processing hardware into question; even a company as diverse as Sony must look at its frankly incredible array of products and wonder whether it might still be just a few Kickstarter success stories away from total irrelevance.

That being the case, the company has decided to enter into a partnership with Japanese medical giant M3 and genetics pioneer Illumina to create a new company named P5. Though details are scarce right now, the new company will focus on creating a “genetic information platform,” which we can safely assume means a proprietary stab at making salable products out of the promise of personalized medicine. Almost certainly the first planned device is a cheap genetic sequencing and annotation tool meant for quick identification of genetic problems or warning signs.


This announcement immediately brings Google’s Calico to mind, but the comparison is not perfect.


The short-term target of the company is the research and testing industry, but sales direct to the public are an openly stated goal, as well. The biomedical industry is already one of the largest economic sectors there is, and that’s while working entirely through third parties (insurance companies and medical professionals). If technology could allow companies like Sony to safely cut out such inherent inefficiencies, the market could explode even further.

This differs from Google’s announcement of Calico in a few key ways. First, Sony has a robust history as a hardware company, while Google still primarily makes software. Second, Sony’s goals seem much less lofty than Google’s; all modern hospitals already house equipment with at least a few recognizable electronics logos, and Sony just wants a (bigger) piece of that action. That genetic analysis will undoubtedly have the effect of lengthening lifespans doesn’t mean the two projects have the same goal. Sony wants to fill a very specific niche, and has a direct plan to recoup expenses; Google seems to have a much more general understanding of the relationship between health and profit, and trusts that a more wide-based approach will work out in the end.



Camera-maker Olympus controlled fully 70% of the endoscope market when Sony bought a major stake in the company in 2012.

In late 2012, Sony announced a general plan to enter the health industry with the explicit aim of cornering an emerging market. They see medicine as a new frontier in technology that could work its way into the average home. Could they sell you a hardware-software health kit that tracks and advises on health the way a financial suite might do for your budget? Could they sell you a testing platform at a loss, then recoup that loss through monthly health monitoring fees? Will you have to pay for version upgrades capable of testing more accurately, or for a wider array of problems?

This sort of research need not confine itself to hypotheticals, however. The $664 million investment in camera-maker Olympus was reportedly made to gain control of its medical endoscope business, but the research necessary to improve those devices could also drive the development of Sony’s 4K TVs. There are multiple possible applications for virtually every sort of research, and there are few if any companies better positioned to take advantage of the full spectrum of applications for a new technology. With many of Sony’s traditional businesses struggling in recent years, last year saw the company take in a majority of its profit from the financial services branch – this is a company that knows full well the importance of embracing new and seemingly uncharacteristic ideas.

Genome analysis is one of the fastest-emerging fields in the world, recently passing the $1,000-per-genome milestone and continuing to advance with no sign of slowing. Quite literally, the $100 genome might not be far away, and any company not poised to exploit that market the instant it appears could easily find itself frozen out for good. Sony and Google have big plans to make sure that doesn’t happen. All that remains to be seen is who else will throw their hat into the ring.
Extremetech


Now read: Google’s next big challenge: death.

What makes us human? Unique brain area linked to higher cognitive powers

(A) The right vlFC ROI. Dorsally it included the inferior frontal sulcus and, more posteriorly, it included PMv; anteriorly it was bound by the paracingulate sulcus and ventrally by the lateral orbital sulcus and the border between the dorsal insula and the opercular cortex. (B) A schematic depiction of the result of the 12 cluster parcellation solution using an iterative parcellation approach. We subdivided PMv into ventral and dorsal regions (6v and 6r, purple and black). We delineated the IFJ area (blue) and areas 44d (gray) and 44v (red) in lateral pars opercularis. More anteriorly, we delineated areas 45 (orange) in the pars triangularis and adjacent operculum and IFS (green) in the inferior frontal sulcus and dorsal pars triangularis. We found area 12/47 in the pars orbitalis (light blue) and area Op (bright yellow) in the deep frontal operculum. We also identified area 46 (yellow), and lateral and medial frontal pole regions (FPl and FPm, ruby colored and pink). Credit: Neuron, Neubert et al.

Oxford University researchers have identified an area of the human brain that appears unlike anything in the brains of some of our closest relatives.

The brain area pinpointed is known to be intimately involved in some of the most advanced planning and decision-making processes that we think of as being especially human.

'We tend to think that being able to plan into the future, be flexible in our approach and learn from others are things that are particularly impressive about humans. We've identified an area of the brain that appears to be uniquely human and is likely to have something to do with these cognitive powers,' says senior researcher Professor Matthew Rushworth of Oxford University's Department of Experimental Psychology.

MRI imaging of 25 adult volunteers was used to identify key components in the ventrolateral frontal cortex area of the human brain, and how these components were connected up with other brain areas. The results were then compared to equivalent MRI data from 25 macaque monkeys.

This ventrolateral frontal cortex area of the brain is involved in many of the highest aspects of cognition and language, and is only present in humans and other primates. Some parts are implicated in psychiatric conditions like ADHD, drug addiction or compulsive behaviour disorders. Language is affected when other parts are damaged after stroke or neurodegenerative disease. A better understanding of the neural connections and networks involved should help the understanding of changes in the brain that go along with these conditions.

The Oxford University researchers report their findings in the science journal Neuron. They were funded by the UK Medical Research Council.

Professor Rushworth explains: 'The brain is a mosaic of interlinked areas. We wanted to look at this very important region of the frontal part of the brain and see how many tiles there are and where they are placed.

'We also looked at the connections of each tile – how they are wired up to the rest of the brain – as it is these connections that determine the information that can reach that component part and the influence that part can have on other brain regions.'

From the MRI data, the researchers were able to divide the human ventrolateral frontal cortex into 12 areas that were consistent across all the individuals.

'Each of these 12 areas has its own pattern of connections with the rest of the brain, a sort of "neural fingerprint", telling us it is doing something unique,' says Professor Rushworth.
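
As a rough illustration of what parcellating by "connectivity fingerprint" means in practice, here is a minimal sketch (synthetic random data and a generic k-means clustering; an assumed simplification for illustration, not the authors' iterative parcellation pipeline):

# Minimal sketch of connectivity-fingerprint parcellation on synthetic data.
# Array sizes are invented; real fingerprints would come from tractography.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_voxels, n_targets = 500, 80                      # vlFC voxels x whole-brain target regions (toy sizes)
fingerprints = rng.random((n_voxels, n_targets))   # stand-in for connection profiles per voxel

# Voxels whose connection profiles correlate strongly should land in the same "tile"
similarity = np.corrcoef(fingerprints)             # voxel-by-voxel similarity matrix
labels = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(similarity)

print("voxels assigned to each of the 12 clusters:", np.bincount(labels))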

The researchers were then able to compare the 12 areas in the human brain region with the organisation of the monkey prefrontal cortex.

Overall, they were very similar, with 11 of the 12 areas being found in both species and being connected up to other brain areas in very similar ways.

However, one area of the human ventrolateral frontal cortex had no equivalent in the macaque – an area called the lateral frontal pole prefrontal cortex.

'We have established an area in human frontal cortex which does not seem to have an equivalent in the monkey at all,' says first author Franz-Xaver Neubert of Oxford University. 'This area has been identified with strategic planning and decision making as well as "multi-tasking".'

The Oxford research group also found that the auditory parts of the brain were very well connected with the human prefrontal cortex, but much less so in the macaque. The researchers suggest this may be critical for our ability to understand and generate speech.

Comparison of human ventral frontal cortex areas for cognitive control and language with areas in monkey frontal cortex, Neuron, 2014. dx.doi.org/10.1016/j.neuron.2013.11.012

phys.org

Oxford University


Now read: Human brain development is a symphony in three movements


This carbon nanotube heatsink is six times more thermally conductive, could trigger a revolution in CPU clock speeds

One of the most significant problems facing modern CPUs is the efficient transmission of heat between the CPU cores themselves and the heatsinks that cool them. The problem is twofold: Conventional thermal interface materials (TIMs) are terrible at conducting heat, while processors are terrible at spreading heat laterally (across the surface of the chip). The first problem means that a great deal of thermal energy gets “stuck” at the top of the CPU core, while the latter creates hot spots on the die. Now, a new approach to cooling via the use of carbon nanotubes could aggressively improve the first half of the problem.

Carbon nanotubes have long been known to have amazing thermal conductivity, but bonding them to thermal interfaces has been problematic. A new paper published in Nature Communications claims to have solved this problem by using organic compounds to form strong covalent bonds between the carbon nanotubes themselves and the metal layer at the top of a chip. Once formed, this thermal interface material can conduct heat 6x more effectively off the top of a chip. Even better, the bonding technique can work with aluminum, silicon, gold, and copper. Old methods of bonding carbon nanotubes to cooling surfaces added roughly 40 microns of material on each side of the CNT layer; this new approach adds just seven microns of additional material. (Research paper: doi:10.1038/ncomms4082 - “Enhanced thermal transport at covalently functionalized carbon nanotube array interfaces”.)
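
To see why a thinner, more conductive bond layer matters, a back-of-the-envelope conduction estimate helps. The sketch below uses the simple one-dimensional relation R = t / (k x A); the layer thicknesses echo the article, but the conductivities and die area are assumptions, and contact resistance (which dominates real interfaces) is ignored:

# Back-of-the-envelope comparison of two bond layers, R = t / (k * A).
# All material properties here are illustrative assumptions, not values from the paper.

def layer_resistance(thickness_m: float, conductivity_w_per_mk: float, area_m2: float) -> float:
    """Conductive thermal resistance of a flat layer, in kelvin per watt."""
    return thickness_m / (conductivity_w_per_mk * area_m2)

die_area = 1.0e-4                                     # assumed 1 cm^2 die
grease = layer_resistance(50e-6, 5.0, die_area)       # ~50 um of thermal grease, k ~ 5 W/m-K (assumed)
bonded_cnt = layer_resistance(7e-6, 30.0, die_area)   # ~7 um bonded layer, k assumed much higher

print(f"grease layer:     {grease:.3f} K/W")
print(f"bonded CNT layer: {bonded_cnt:.4f} K/W")

With these made-up numbers the thin bonded layer contributes well under a tenth of the grease layer's resistance, which is the kind of gap that translates into lower core temperatures at the same power draw.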


The Lawrence Berkeley National Lab (Berkeley Lab) team is working on a method that would ensure more of the nanotubes come into contact with the actual metal layer, but the six-fold improvement is already extraordinary. As for how important the advance is, consider the fact that scientists are experimenting with TIMs made of wax to allow for high-speed burst computing, precisely because metal caps, paste, and solder layers are such an inefficient way of dealing with the problem. The one downside, as with many of the cooling methods we considered last year, is that there would be no way for the end-user to service this kind of layer. Intel could theoretically deploy a nanotube layer in between the CPU and its heat spreader, but you’d also need to connect a layer of material between the heat spreader and the actual heat sink in order to see lasting benefits.

Those benefits, however, could be significant. Moving heat more efficiently into the heatsink would reduce CPU core temps and allow for higher frequency operation or longer periods at Turbo Boost clocks as opposed to being stuck at base clock. As it becomes more difficult to push CPU advances through silicon technology improvements, ancillary methods of improving the thermal conductivity of the entire system will become increasingly important.

Extremetech


Now read: Carbon nanotube logic device operates on subnanowatt power

Best laptops for engineers: When work requires a real workstation

Along with gamers, engineers pose one of the toughest design challenges for laptop makers. Engineering applications crave memory, graphics horsepower, and large screens - all hurdles in designing stylish, lightweight laptops. The result is a necessary tradeoff between performance and convenience. While not every engineer will make the same compromises, there are a few laptops that stand out for use by engineers, depending on their specific needs.

So what is the best laptop for an engineer? Here are a few great options, one of which will get the job done for you.

Lenovo ThinkPad W540


For those used to lugging a typical portable workstation “brick”, the new Lenovo ThinkPad W540 may be a breath of fresh air. While not lightweight compared to a business laptop, at just under 5.5 pounds and just over an inch thick, it is not much larger than a MacBook Pro. Under the hood it can be configured with a variety of 4th generation (Haswell) Core i7 processors ranging up to the 4930MX, capable of 3GHz (3.9GHz Turbo). It can also be stuffed full with up to 32GB of RAM and a 2880×1620 high resolution display. As befits a laptop designed for heavy-duty graphics, it features an Nvidia Quadro K1100 or K2100 discrete GPU.

Oddly for a Windows machine, the W540 doesn’t offer an HDMI or DisplayPort video output, opting instead for VGA and Thunderbolt ports. That may be a turnoff for those who don’t want to invest the extra time and effort in Thunderbolt peripherals. However, for those who need the ultimate in expansion capability, and want Thunderbolt’s high-speed 10Gbps transfer rate, it may be just the thing. For everyone else, fortunately, the W540 offers plenty of USB ports (2x USB 3.0 and 2x USB 2.0). Automatic switching between integrated and discrete graphics helps the W540 claim a more-than-respectable 6+ hours of battery life (users are reporting around five hours).

As befits a ThinkPad, it comes with Intel’s vPro and a fingerprint reader. The system ships with Windows 7 Pro, so for Windows 8 haters, there’s no need to fret. However, if you want the latest version of Windows you’ll need to upgrade it on your own. W540 pricing starts at $1600 for an entry-level model with a 2.4GHz (3.4GHz Turbo) i7-4700MQ CPU, 8GB of RAM, and a 500GB hard drive.

HP ZBook 15 and 17 Mobile Workstations


Having made its reputation selling to engineers, HP is a natural when it comes to shopping for a portable workstation-class laptop. If accurate color and rugged construction are high on your list of requirements, the HP ZBook 15 with 15-inch display (or the larger and heavier 17-inch version, the ZBook 17) will fit the bill. The tradeoff is a slightly heavier and larger machine (6.2 pounds for the 15-inch version) than the ThinkPad W540, even though specs are similar. The ZBooks also don’t offer the ultra-high-resolution display of the ThinkPad W540, but their 1080p displays are top notch and feature HP’s DreamColor technology.

Thunderbolt is also a feature on these models, allowing the connection of up to four displays. Unlike the ThinkPad, the ZBook also has a DisplayPort in addition to a VGA port. It also includes vPro and a fingerprint reader, along with a docking station connector on its underside. CPU options range up to the Core i7-4900MQ, clocked at up to 2.8GHz (3.8GHz Turbo), and it can be ordered with either Windows 7 Pro or Windows 8 Pro. Reviewers loved almost everything about the machine, except for its sub-four-hour battery life. The HP ZBook 15 is priced starting at $1650.

Dell Precision M3800 or XPS 15 Touch


While Dell’s flagship Precision M4800 goes head to head with beefy models like the ThinkPad W540 and ZBook 15, I’m focusing here on the new, lighter Precision M3800. The M3800 promises workstation power in a svelte 0.71-inch, 4.15-pound package. It has many of the newest bells and whistles, including an option for a QHD+ display, fourth-generation Core i7, Nvidia Quadro discrete graphics, and an mSATA slot to go along with its spinning hard drive.

Gorilla Glass covers the multi-touch screen - it’s still a bit of a novelty in the mobile workstation category, but valuable if you order the machine with Windows 8.1 pre-installed. The option for dual SSDs allows for maximum performance if you put them in a RAID0 configuration or allocate your swap and temp drives carefully. Like the other machines we’ve looked at here, Nvidia’s Optimus technology provides automatic switching between integrated and the discrete Quadro graphics. The M3800 is priced starting at around $2000. For those who don’t need the Quadro graphics and dual drives, the new Haswell-equipped Dell XPS 15 has almost identical specs otherwise.



Sager NP9570


Often the words “portable” and “mobile” are used interchangeably. Not with the Sager NP9570. This beast of a machine is essentially a portable desktop, but not what you’d call a mobile computing device. Not as well-known as the big brand names, Sager has a reputation for creating machines with amazing performance. For those who want maximum power, the Sager NP9570 is an amazing laptop. Available with CPUs up to an Intel Core i7-4960X Extreme Edition (typically a desktop processor) running at 3.60GHz (4.0GHz Turbo), this laptop’s raw performance is its defining characteristic.


The 1080p display isn’t as sexy as the ultra-high-resolution versions available on other machines, but it has an unmatched three hard drive bays, a ton of ports, 7.1 channel sound, and the choice of two powerhouse 4GB or 5GB Nvidia GeForce 770M or 680M video cards in SLI. This selection makes it the top graphics performer in anything short of a full-on desktop.

The downside of this machine, not surprisingly, is the size and weight. At 12 pounds, it is a beast in more ways than one. Part of the weight is the 17-inch display that covers 90% of the NTSC color gamut, but the high-powered, near-desktop components drive most of the rest.

The battery on a machine like this is more for moving from one outlet to another than for doing a lot of work, with reviewers getting just over an hour on battery for those occasions when the need demands. The Sager NP9570 is priced starting at $2000.

What, no MacBook Pro?


For those who have gotten tired of seeing the MacBook Pro appear in nearly every “best of” list of laptops, you won’t find it here. For starters, many of the top engineering applications do not run on OS X. That includes industry-standard design tools Solidworks and Creo, although MATLAB and Mathematica do have native Mac OS X versions. Other software like LabVIEW runs on both, but offers more purchasing options for Windows.

Running Windows on a MacBook is certainly an option, but from talking to engineers who have tried to make a go of it, they’ve lost the MacBook Pro’s natural advantages in battery life and driver support, resulting in an unhappy compromise. So obviously, if you don’t need any of the Windows-only applications the MacBook Pro is an excellent choice, but you definitely limit your software options.
Extremetech

Samsung will unveil major Tizen changes at MWC, to combat Google’s Android lockdown

Say you’re Samsung. You own one of the world’s most popular mobile phone franchises. You’ve got a history of driving enormous revenues in the smartphone market - for many people, Samsung and Android are nearly synonymous terms when talking about the mobile phone industry. Headed into Mobile World Congress, easily one of the largest smartphone events of the year, do you invite members of the press to attend the debut of your next-generation operating system?

Apparently, yes. According to multiple reports, Samsung’s major unveil at MWC in February will focus on the operating system it’s been building for the past few years rather than a smartphone launch. Presumably, Samsung will announce the Galaxy S5 at its own event, as it did last year with the S4. The fact that the Korean manufacturer wants to put such an emphasis on Tizen, however, is still surprising given that the OS has only shipped on a handful of camera SKUs to date.


Google’s Android lockdown


There are multiple alternative mobile phone operating systems in various stages of development, from Mozilla’s Firefox OS to Ubuntu Touch. Tizen is one of the only projects backed by a company as massive as Samsung - but having made so much money on Android, why is Samsung looking to leave it in the first place? It’s all about control, but the story there is more complicated than you might think.

Ars Technica wrote a major piece on how Google has used the Android ecosystem (ostensibly open-source) to tie its own services to the platform. Over the past six years, Google introduced open-source applications that provided basic functionality, then replaced them with its own closed-source apps in later versions. Once it’s created a closed-source version, the open-source flavor is effectively orphaned. Updates for the open, Android versions of the keyboard, calendar, photo app, or music player have been few and far between since the “Google” version of each application debuted.



Tizen’s UI environment, captured from devices in December 2013

Here’s the kicker: If device makers reject one closed-source version of an application, they don’t get any of them. Google can’t stop a manufacturer like Amazon from using Android, but it controls all of the licensing terms for Google apps. Those licensing terms are reportedly much simpler if you’re a member of the Open Handset Alliance and the contractual terms of the OHA license prohibit device manufacturers from forking Android.

Samsung’s work on Tizen illustrates that the company doesn’t much like the way Android has been turned into a Google-only show. The terms and agreements surrounding the Google applications that govern the Android experience (and that users want) are as much a prison as the ecosystem that Android was ostensibly supposed to combat. Faced with the difficulty of building its own competing applications at the heart of Android or targeting a new OS that isn’t encumbered by the same license terms, Samsung has decided to pour effort into both camps. Samsung’s own version of Google apps and its TouchWiz UI skin aren’t just annoyances (though they’re certainly annoying) - they’re the manufacturer’s attempt to ensure it has acceptable alternatives if its arrangement with Google breaks down. The Google Play ecosystem only exacerbates the trend - apps that use Google APIs can’t run properly on devices like the Kindle Fire.

Tizen is the “OS B” to that “Plan B.” Ideally (from Samsung’s perspective) it can build an app store based on its own open environment. After all, Tizen is based on Linux, with its own coalition of developers and contributors, and it could absolutely help free the industry from the tyranny of... hang on. Déjà vu. Wasn’t Android meant to do exactly that and release us from the shackles of Apple’s iOS?

We don’t know what devices (if any) Samsung will actually be demonstrating. Japanese manufacturer DoCoMo recently announced that it would not bring a Tizen phone to market, dashing Samsung’s plans. The company has only stated that devices will be on hand to demonstrate just how far the operating system has come in the past year. It’s not impossible to think we might see a few devices debut in 2014, but a wider launch seems likely to wait for 2015.

Extremetech


Now read: Samsung makes Tizen OS penetration for new mobiles

Stephen Hawking’s new research: ‘There are no black holes’

Exactly 40 years after famed theoretical physicist Stephen Hawking brought event horizons and black holes into the public eye, he is now claiming that black holes don’t actually exist. Instead of all-consuming event horizons and black holes which nothing can escape from, Hawking now proposes that there are “apparent horizons” which suck in matter and energy but only temporarily, before eventually releasing them again.

To be clear, Hawking isn’t proposing that black holes don’t exist - just that black holes, as we’ve understood them for the last 40 years or so (thanks to work done by Hawking and others), don’t exist. The current understanding is that black holes are surrounded by an event horizon - a boundary in spacetime which only allows matter and energy to pass through one way: towards the black hole. It is, in other words, the point of no return. This is why black holes appear black - energy can’t escape, and so they produce no light and no heat. In thermodynamics terms, a black hole is a perfect black body - an object that absorbs all energy and radiation.

The problem with this theory, though, is that it’s based on general relativity. In recent years, as our understanding of quantum theory has improved, numerous conflicts have arisen, especially in places where both theories apply - such as black holes and event horizons. Basically, quantum mechanics has a big issue with the idea that event horizons completely and utterly destroy information - a big no-no in the world of quantum. Hawking’s new proposal tries to ameliorate this conflict between the two theories.


The Event Horizon’s “gravity drive.” I wonder if the film will have to be renamed Apparent Horizon

In a short research paper (arXiv:1401.5761) called “Information Preservation and Weather Forecasting for Black Holes,” Hawking proposes that black holes are instead enveloped by an apparent horizon. Basically, instead of an event horizon that blocks everything absolutely, an apparent horizon only temporarily suspends matter and energy that is trying to escape - and when it does escape, due to the wild fluctuations within a black hole and its apparent horizon, the energy would be released in a garbled form. Hawking likens these fluctuations to weather on Earth: “It will be like weather forecasting on Earth. That is unitary, but chaotic, so there is effective information loss. One can’t predict the weather more than a few days in advance.” (Unitarity is the part of quantum theory that strongly disapproves of event horizons being a point of no return.)

The research paper concludes: “The absence of event horizons mean that there are no black holes in the sense of regimes from which light can’t escape to infinity. There are however apparent horizons which persist for a period of time.”

It’s worth noting that Hawking’s new paper is just two pages long, contains no calculations, and hasn’t yet passed peer review. It does seem to do what it set out to achieve, though. Complex problems don’t necessarily have complex solutions. Speaking to Nature, Hawking had a little more to say about the matter, too: “There is no escape from a black hole in classical theory,” Hawking said. “[Quantum theory, however] enables energy and information to escape from a black hole.” To fully explain the process, though, the theoretical physicist admits that we’re still looking for a theory that ties up gravity with the other fundamental forces - a theory that, Hawking says, “remains a mystery.”
Extremetech

Now read: Gravitational Waves Help Us Understand Black-Hole Weight Gain


We the Internet: Bitcoin developers seed idea for Bitcloud


Partial map of the Internet based on the January 15, 2005 data found on opte.org. Each line is drawn between two nodes, representing two IP addresses. Credit: Wikimedia Commons

A developer group is seeding a project that would behave as a decentralized Internet, in a departure from the present model. Posting their intentions recently on Reddit, the developers said "We will have to start by decentralizing the current Internet, and then we can create a new Internet to replace it." Called Bitcloud, the project proposes a peer-to-peer system for sharing bandwidth: individual users would complete computing tasks such as routing, storing, or computing in exchange for cash. As the BBC explained, "Just as Bitcoin miners provide computing power and are rewarded for solving complex mathematical equations with the virtual currency, so individual net users would be rewarded based on how much bandwidth they contribute to the Bitcloud network."

Elaborating on Reddit about this cash model, the developers said this about payments: "One of the many problems of certain free and open source projects in the past has been the lack of a profit incentive. With Bitcloud, nodes on a mesh network can be rewarded financially for routing traffic in a brand new mesh network. This removes the need for Internet Service Providers."

Bitcloud draws on ideas from Bitcoin, MediaGoblin and Tor. The team members for this project are searching for more developers. "We have a basic idea of how everything will work, but we need assistance from programmers and thinkers from around the world who want to help," they said. They noted the project requires a "massive amount of thought and development in many different parts of the protocol, so we need as many people helping as possible."

The team made this appeal on Reddit: "If you're interested in privacy, security, ending internet censorship, decentralising the internet and creating a new mesh network to replace the internet, then you should join or support this project."

The system's currency would be called "cloudcoins." These, they said, would play the same role for Bitcloud that bitcoins play for the Bitcoin protocol. "You need bitcoins to use the Bitcoin payment system, and you need cloudcoins to use Bitcloud in certain ways," they said. "For example, someone who wants to advertise on a public video that is streamed from a Bitcloud node will have to pay for that advertisement in cloudcoins. Another example would be someone who wants to pay for personal cloud storage on the Bitcloud network. By monetizing the system, nodes can get paid for their willingness to share bandwidth, provide cloud storage, and allow for direct streaming to stored content. Adding the profit motive to the equation gives this project a chance to succeed where many others have failed in the past." They said donations can only take you so far in trying to create something of this magnitude.
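Bitcloud was only a proposal at this stage, with no reference implementation, so the snippet below is a purely hypothetical sketch (all names and numbers invented) of the accounting idea described above: nodes accrue cloudcoins in proportion to the bandwidth they serve, and spend them on services such as storage.

```python
# Hypothetical sketch of Bitcloud-style accounting: nodes earn "cloudcoins"
# for bandwidth they serve and spend them on services. Illustrative only;
# this is not the actual Bitcloud protocol.

class Ledger:
    def __init__(self, reward_per_gb=0.1):
        self.balances = {}          # node_id -> cloudcoin balance
        self.reward_per_gb = reward_per_gb

    def record_bandwidth(self, node_id, gigabytes_served):
        """Credit a node for bandwidth it contributed to the mesh."""
        earned = gigabytes_served * self.reward_per_gb
        self.balances[node_id] = self.balances.get(node_id, 0.0) + earned
        return earned

    def pay_for_service(self, node_id, cost):
        """Debit a node that buys a service (e.g. cloud storage, an ad slot)."""
        if self.balances.get(node_id, 0.0) < cost:
            raise ValueError("insufficient cloudcoins")
        self.balances[node_id] -= cost

ledger = Ledger()
ledger.record_bandwidth("node-A", gigabytes_served=120)  # node-A routes traffic
ledger.pay_for_service("node-A", cost=5.0)               # buys storage with earnings
print(ledger.balances)                                   # {'node-A': 7.0}
```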

WeTube, meanwhile, according to the project idea, could act as a replacement for YouTube, Netflix, Hulu, Soundcloud, Spotify, and other streaming systems. "The decentralized nature of WeTube will allow users to share content with the world without having to worry about censorship or privacy concerns."

github.com

phys.org



Now read: Bitcoin breaks $1000, but how far can it go?

How dust changed the face of the Earth

In spring 2010, the research icebreaker Polarstern returned from the South Pacific with a scientific treasure: ocean sediments from a previously almost unexplored part of the South Polar Sea. What looks like an inconspicuous sample of mud to a layman is, to geological history researchers, a valuable archive from which, over many years of analysis, they can reconstruct the climatic history of the polar regions. This, in turn, is of fundamental importance for understanding global climatic development. With the help of the unique sediment cores from the Southern Ocean, it is now possible to provide complete evidence of how dust has had a major influence on the natural alternation between cold and warm periods in the southern hemisphere. An international research team led by the Alfred Wegener Institute in Bremerhaven was able to show that dust input there was two to three times higher during all the ice ages of the last million years than in the warm phases of climatic history.

"High large-area dust supply can have an effect on the climate for two major reasons", explained Dr. Frank Lamy, geoscientist at the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, about the findings. "Trace substances such as iron, which are essential for life, can be incorporated into the ocean through dust. This stimulates biological production and increases the sea's capacity to bind carbon. The result is that the greenhouse gas carbon dioxide is taken out of the atmosphere. In the atmosphere itself, dust reflects the sun's radiation and purely due to this it reduces the heat input into the Earth's system. Both effects lead to the fact that the Earth cools down." Lamy is the main author of the study which will be published in the renowned Science journal on 24th January 2014. Other participants included geochemist Gisela Winckler from the US Lamont-Doherty Earth Observatory and the Bremen Centre for Marine Environmental Sciences MARUM.

The influence of dust supply on the climate changes between ice ages and warm periods has long been suspected. Climatic researchers always found particularly high dust content containing iron when the earth was going through an ice age, both in Antarctic ice cores and in sediment cores from the Atlantic part of the Southern Ocean. However, up to now there was no data available for the Pacific section, which covers 50% of the Southern Ocean. "We can now close this central gap" is how Lamy underlines the importance of the new study. "The result is that we are now finding the same patterns in the South Pacific that we found in cores from the South Atlantic and the Antarctic ice. Therefore, the increased dust input was a phenomenon affecting the southern hemisphere during colder periods. This means that they now have to be considered differently when assessing the complex mechanisms which control natural climate changes."

What sounds almost incidental in Lamy's words is of considerable relevance for research, because until now many scientists were convinced that dust supply to the Pacific area could not have been higher during the ice ages than during warmer periods of the Earth's climate history. Where could larger dust quantities in this area of the Earth's oceans come from? South Patagonia had been suspected as a geological dust source, since it is the only landmass intruding into the Southern Ocean, like a huge finger. However, since the prevailing wind in this part of the world blows from the west, any airborne dust originating from South America mostly drifts towards the Atlantic. For this reason, data from the South Pacific has been on scientists' wish lists for a long time.

However, the Pacific section of the Southern Ocean has remained something of a "terra incognita" for researchers despite modern technology. It is considered to be one of the most remote parts of the world's oceans. "The region is influenced by extreme storms and swells in which wave heights of 10 m or more are not uncommon. The area is also complicated from a logistical point of view due to the huge distance between larger harbours," explains AWI scientist Dr. Rainer Gersonde, co-author and at the time leader of the Polarstern expedition, of the extraordinary challenges faced by the research voyage. The Polarstern covered 10,000 nautical miles, or 18,500 km, through this particularly inhospitable part of the Antarctic Ocean in order to obtain high-quality and sufficiently long sediment cores.

The question remains, however: where did the historic dust load carried towards the South Pacific come from, and why did the phases of increased input occur at all? Frank Lamy believes that one of the causes is the relocation or extension towards the Equator of the exceptionally strong wind belts prevalent in this region. The entire Southern Ocean is notorious amongst sailors for its powerful westerly winds, the "Roaring Forties" and the "Furious Fifties", and it is considered to be one of the windiest regions in the world. The scientists' theory is that a relocation or extension of this powerful westerly wind belt towards the north could have exposed the extended dry areas of the Australian continent to stronger wind erosion. The result was higher dust input into the Pacific Ocean, with the consequences described above. On top of this, New Zealand was an additional dust source. The extended glaciation of its mountains during the ice ages provided considerable quantities of fine-grained material, which was then blown far out into the South Pacific by the winds.

"Our investigations have now proved without a doubt that colder periods in the southern hemisphere over a period of 1 million years always and almost everywhere coincided, , with lower carbon dioxide content in the atmosphere and higher dust supply from the air. The climatic history of the Earth was, therefore, written in dust."

"Increased Dust Deposition in the Pacific Southern Ocean During Glacial Periods" F. Lamy et al., Science, 24 January 2014 DOI: 10.1126/science.1245424

Journal reference: Science 
 
Helmholtz Association of German Research Centres


Now read: Paleontologist presents origin of life theory

Are these the final days of the stethoscope?


Image Credit: Thinkstock.com


An editorial in this month's edition of Global Heart (the journal of the World Heart Federation) suggests the world of medicine may be witnessing the final days of the stethoscope, owing to the rapid advent of point-of-care ultrasound devices that are becoming increasingly accurate, small enough to be hand-held and less expensive with each passing year. The editorial is by Professor Jagat Narula, Editor-in-Chief of Global Heart, and Associate Professor Bret Nelson, both of the Mount Sinai School of Medicine, New York, USA.

Looking at the stethoscope (invented in 1816) and ultrasound (invented in the 1950s), the authors suggest that the stethoscope could soon be exiled to the archives of medical history. They say: "At the time of this writing several manufacturers offer hand-held ultrasound machines slightly larger than a deck of cards, with technology and screens modelled after modern smartphones." As the minimum size of ultrasound machines decreased, concerns that smaller machines offered inferior image quality compared with devices many times larger and more expensive were, over time, outweighed by evidence that rapid diagnostic decisions could be made with portable machines. Today, more than 20 medical specialties include the use of point-of-care ultrasound as a core skill, and mounting evidence suggests that, compared with the stethoscope, ultrasound technology can reduce complications, assist in emergency procedures and improve diagnostic accuracy.

The authors say: “Thus, many experts have argued that ultrasound has become the stethoscope of the 21st century. Why then, do we not see ultrasound machines in the coat pocket of every clinician? Several factors play a role. The ultrasound machines are expensive, and even clinicians enamored with the promise of point-of-care ultrasound must make a financial decision weighing the increased diagnostic accuracy against increased cost. In addition, point-of-care ultrasound is still a new field relative to traditional imaging. Many older clinicians completed training long before ultrasound use was part of standard practice for their specialty.”

Additionally, while the cheapest available stethoscopes are literally disposable (though many can cost hundreds of dollars), the cost of the cheapest ultrasound devices is still several thousand dollars, making roll-out, especially in developing nations, much more difficult. Yet the authors believe all the evidence shows that ultrasound can diagnose heart, lung, and other problems with much more accuracy than the 200-year-old stethoscope.

The authors conclude: “Certainly the stage is set for disruption; as LPs were replaced by cassettes, then CDs and .mp3s, so too might the stethoscope yield to ultrasound. Medical students will train with portable devices during their preclinical years, and witness living anatomy and physiology previously only available through simulation. Their mentors will increasingly use point-of-care ultrasound in clinical environments to diagnose illness and guide procedures. They will see more efficient use of comprehensive, consultative ultrasound as well- guided by focused sonography and not limited by physical examination alone. And as they take on leadership roles themselves they may realise an even broader potential of a technology we are only beginning to fully utilize. At that point will the “modern” stethoscope earn a careful cleaning, tagging, and white-glove placement in the vault next to the artifacts of Laënnec, Golding Bird, George Cammann, and David Littmann? Or, as some audiophiles still maintain the phonograph provides the truest sound, will some clinicians yet cling to the analog acoustics of the stethoscope?”

World Heart Federation


Now read: World's most powerful MRI gets set to come online

Formula 1 racing focus turns to energy management

Credit: Renault

Videos and preview briefs are surfacing on news sites about what we can expect in this year's Formula 1 World Championship. The consistent message is technical change: the use of hybrid technology and a focus on efficiency. Call it hybrid tech, or any fitting term for the balance the 2014 machines will have to strike between output and efficiency. Eyes are on the engines. As the BBC stated, the sport is going further than ever before to embrace green technology, and the major turn is the move to 1.6-liter V6 turbo engines, a departure from the 2.4-liter V8 engines.

Rob White, Deputy Managing Director (technical) at Renault, said that the word "engine" just won't fully describe a 2014 car's propulsive power: "It is more relevant to refer to the complete system as a 'Power Unit.'"

F1 cars will be powered by the turbocharged 1.6-litre V6 internal combustion engine, replacing the 2.4-litre V8s in use since 2006. "V6" refers to an internal combustion engine with two banks of three cylinders arranged in a V configuration over a common crankshaft.

Then there are the limits: engines must consume fuel at a rate of no more than 100kg per hour, down from the roughly 150kg per hour that teams used last year.
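As a rough illustration of what that cap implies over a race distance (the flow figures are those quoted above; the race duration is an assumed example value, not a regulation figure), consider:

```python
# Back-of-the-envelope comparison of the 2014 fuel-flow cap with the old rate.
# Race duration is an assumed example value, not a regulation figure.

MAX_FLOW_2014 = 100.0   # kg of fuel per hour (2014 limit)
FLOW_2013     = 150.0   # kg per hour, the approximate rate teams used before

race_hours = 1.5        # assumed ~90-minute race, for illustration

fuel_2014 = MAX_FLOW_2014 * race_hours
fuel_2013 = FLOW_2013 * race_hours

print(f"At the 2014 cap: at most {fuel_2014:.0f} kg burned")   # 150 kg
print(f"At 2013-style flow: about {fuel_2013:.0f} kg burned")  # 225 kg
print(f"Reduction: {(1 - fuel_2014 / fuel_2013):.0%}")         # 33%
```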

The F1 talk of technical shake-ups also rests in ERS, which stands for energy recovery system. These cutting-edge systems represent a change from the previous KERS, or Kinetic Energy Recovery Systems. The latter was introduced to the sport in 2009; it worked by harnessing waste energy created under braking and transforming it into electrical energy. ERS, said the BBC, is made up of the KERS system and a second electric motor fitted to the turbo. "As for the second electric motor, that will harness energy from the turbo that would otherwise be wasted as heat."

Quoted in a Wired report on F1 highlights for 2014, Naoki Tokunaga, Renault F1's technical director for Power Units, said, "In essence, engine manufacturers used to compete on reaching record levels of power." This year is another matter: now, he said, they will compete "in the intelligence of energy management."

What does this 2014 focus mean for the Formula 1 drivers? They are bracing themselves for a new type of car behavior. This year, said AUTOSPORT, the demands of fuel economy mean that drivers will need to adjust. "This different way of driving will need practice - to learn how best to be fast but not to use up too much fuel," said James Allison, Ferrari technical director.

According to the BBC, "Formula 1 is introducing arguably the biggest set of rule changes in its history this season." Renault would not disagree. According to Renault, "In 2014 Formula 1 will enter a new era. After three years of planning and development, the most significant technical change to hit the sport in more than two decades is introduced. Engine regulations form the major part of the coming revolution, with the introduction of a new generation of Power Units that combine a 1.6 liter V6 turbocharged engine with energy recovery systems that will dramatically increase efficiency by harvesting energy dissipated as heat in the exhaust or brakes."

phys.org


Now read: Ford’s new solar-powered hybrid car can charge up without plugging in

Quantum physics in 1-D: New experiment supports long-predicted 'Luttinger liquid' model

Design of the vertically-integrated quantum wire device. Scanning electron microscope photograph of the device. Credit: Science Express

How would electrons behave if confined to a wire so slender they could pass through it only in single-file?

The question has intrigued scientists for more than half a century. In 1950, Japanese Nobel Prize winner Sin-Itiro Tomonaga, followed by American physicist Joaquin Mazdak Luttinger in 1963, came up with a mathematical model showing that the effects of one particle on all others in a one-dimensional line would be much greater than in two- or three-dimensional spaces. Among quantum physicists, this model came to be known as the "Luttinger liquid" state.

Until very recently, however, there had been only a few successful attempts to test the model in devices similar to those in computers, because of the engineering complexity involved. Now, scientists from McGill University and Sandia National Laboratories have succeeded in conducting a new experiment that supports the existence of the long-sought-after Luttinger liquid state. Their findings, published in the Jan. 23 issue of Science Express, validate important predictions of the Luttinger liquid model.

The experiment was led by McGill PhD student Dominique Laroche under the supervision of Professor Guillaume Gervais of McGill's Department of Physics and Dr. Michael Lilly of Sandia National Laboratories in Albuquerque, N.M. The new study follows on the team's discovery in 2011 of a way to engineer one of the world's smallest electronic circuits, formed by two wires separated by only about 15 nanometers, or roughly 150 atoms.

What does one-dimensional quantum physics involve? Gervais explains it this way: "Imagine that you are driving on a highway and the traffic is not too dense. If a car stops in front of you, you can get around it by passing to the left or right. That's two-dimensional physics. But if you enter a tunnel with a single lane and a car stops, all the other cars behind it must slam on the brakes. That's the essence of the Luttinger liquid effect. The way electrons behave in the Luttinger state is entirely different because they all become coupled to one another."

To scientists, "what is so fascinating and elegant about quantum physics in one dimension is that the solutions are mathematically exact," Gervais adds. "In most other cases, the solutions are only approximate."

Making a device with the correct parameters to conduct the experiment was no simple task, however, despite the team's 2011 discovery of a way to do so. It took years of trial, and more than 250 faulty devices – each of which required 29 processing steps – before Laroche's painstaking efforts succeeded in producing functional devices yielding reliable data. "So many things could go wrong during the fabrication process that troubleshooting the failed devices felt like educated guesswork at times," explains Laroche. "Adding in the inherent failure rate compounded at each processing step made the fabrication of these devices extremely challenging."

In particular, the experiment measures the effect that a very small electrical current in one of the wires has on a nearby wire. This can be viewed as the "friction" between the two circuits, and the experiment shows that this friction increases as the circuits are cooled to extremely low temperatures. This effect is a strong prediction of Luttinger liquid theory.
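In Coulomb drag measurements of this kind, the usual way to quantify that "friction" is a drag resistance: the voltage induced in the unbiased wire divided by the current driven through its neighbour. The sketch below uses invented numbers purely to illustrate that bookkeeping and the qualitative trend described above; it is not the paper's data or analysis.

```python
# Illustrative bookkeeping for a Coulomb drag measurement between two nanowires:
# drive a small current through one wire, measure the voltage induced in the
# other, and report the drag resistance R_drag = V_drag / I_drive.
# The numbers below are invented for illustration; they are not the paper's data.

def drag_resistance(v_drag_volts, i_drive_amps):
    """Drag resistance in ohms."""
    return v_drag_volts / i_drive_amps

i_drive = 1e-9  # 1 nA drive current in the first wire

# Hypothetical induced voltages at a few temperatures (kelvin)
measurements = {
    4.0: 5e-9,   # 5 nV
    1.0: 2e-8,   # 20 nV
    0.1: 8e-8,   # 80 nV
}

for temperature, v_drag in sorted(measurements.items(), reverse=True):
    print(f"T = {temperature:>4} K  ->  R_drag = {drag_resistance(v_drag, i_drive):.1f} ohm")
# The drag resistance grows as the wires are cooled, which is the qualitative
# behaviour the Luttinger-liquid picture predicts for coupled 1-D wires.
```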

The experiments were conducted both at McGill University and at the Center for Integrated Nanotechnologies, a U.S. Department of Energy, Office of Basic Energy Sciences user facility operated by Sandia National Laboratories.

"It took a very long time to make these devices," said Lilly. "It's not impossible to do in other labs, but Sandia has crystal-growing capabilities, a microfabrication facility, and support for fundamental research from DOE's office of Basic Energy Sciences (BES), and we're very interested in understanding the fundamental ideas that drive the behavior of very small systems."

The findings could lead to practical applications in electronics and other fields. While it's difficult at this stage to predict what those might be, "the same was true in the case of the laser when it was invented," Gervais notes. "Nanotechnologies are already helping us in medicine, electronics and engineering – and this work shows that they can help us get to the bottom of a long-standing question in quantum physics."

"1D-1D Coulomb Drag Signature of a Luttinger Liquid", D. Laroche, G. Gervais, M. P. Lilly, J. L. Reno, Jan. 23, 2014.Science Express.

McGill University



Now read: Quantum world record smashed

Researchers find vitamin D supplements have no impact on healthy individuals

Image Credit: Thinkstock.com

Healthy people taking vitamin D supplements are unlikely to see any significant impact when it comes to preventing broken bones or cardiovascular conditions, claims new research appearing in the latest edition of The Lancet Diabetes & Endocrinology.

According to the AFP news agency, the study authors reviewed more than 40 previous trials in order to determine whether or not use of these vitamin supplements achieved a benchmark of reducing the risk of heart attacks, strokes, cancer or bone fractures by at least 15 percent.
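To make that 15 percent benchmark concrete, the sketch below shows the kind of relative-risk arithmetic such a review turns on; the counts are invented for illustration and are not taken from the trials reviewed.

```python
# Hypothetical example of the relative-risk arithmetic behind a "15% risk
# reduction" benchmark. The counts are invented for illustration only.

def relative_risk(events_treated, n_treated, events_control, n_control):
    risk_treated = events_treated / n_treated
    risk_control = events_control / n_control
    return risk_treated / risk_control

# Invented counts: fractures among supplement users vs. controls
rr = relative_risk(events_treated=95, n_treated=5000,
                   events_control=100, n_control=5000)

risk_reduction = 1 - rr
print(f"Relative risk: {rr:.2f}")               # 0.95
print(f"Risk reduction: {risk_reduction:.0%}")  # 5%
print("Meets 15% benchmark" if risk_reduction >= 0.15 else "Falls short of 15% benchmark")
```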

“Previous research had seen a strong link between vitamin D deficiency and poor health in these areas,” the news agency said. However, the new study “strengthens arguments that vitamin D deficiency is usually the result of ill health – not the cause of it,” and the authors report that “there is ‘little justification’ for doctors to prescribe vitamin D supplements as a preventive measure for these disorders.”

All told, the investigators reported that the use of vitamin D supplements failed to significantly reduce a person’s risk of death, heart disease, cancer or stroke among the study participants. Likewise, in both healthy and hospitalized men and women, it also failed to result in a noticeable reduction of hip fracture risk, according to FoxNews.com.

In their study, the researchers reviewed randomized controlled trials of vitamin D supplement use both with and without calcium, BBC News explained. The research was led by University of Auckland senior research fellow Dr. Mark Bolland and funded by the Health Research Council of New Zealand.

“Previous research has shown that vitamin D deficiency is associated with poor health and early death,” but the new evidence suggests “that low levels of vitamin D are a result, not a cause, of poor health,” HealthDay News explained. Likewise, in an editorial accompanying the paper, one university professor said that there is legitimate concern that using the supplements could cause harm in healthy men and women.

“The impression that vitamin D is a sunshine vitamin and that increasing doses lead to improved health is far from clear,” Karl Michaelsson, of the department of surgical sciences at Uppsala University, told BBC News. He urged caution when it came to taking vitamin D supplements until scientists can glean more information about the effects of doing so.

redorbit


Now read: Most clinical studies on vitamins flawed by poor methodology