» Medical Specialties «
There is a time when it is necessary to abandon the used clothes, which already have the shape of our body, and to forget our paths, which take us always to the same places. This is the time to cross the river: and if we don’t dare to do it, we will have stayed, forever, beneath ourselves.
Fernando Pessoa
If you’ve ever watched a rocket launch, you’ve probably noticed the billowing clouds around the launch pad during lift-off. What you’re seeing is not actually the rocket’s exhaust but the result of a launch pad and vehicle protection system known in NASA parlance as the Sound Suppression Water System. Exhaust gases from a rocket typically exit at a pressure higher than the ambient atmosphere, which generates shock waves and lots of turbulent mixing between the exhaust and the air. Put differently, launch ignition is incredibly loud, loud enough to cause structural damage to the launchpad and, via reflection, the vehicle and its contents.
To mitigate this problem, launch operators use a massive water injection system that delivers roughly 3.5 times as much water per second as the rocket burns propellant. This significantly reduces noise levels at the launch pad and on the vehicle and also helps protect the infrastructure from heat damage. The exact physical processes involved – the details of how acoustic noise and turbulence interact with water droplets – are still murky, because the problem is incredibly difficult to study experimentally or in simulation. But at these high water flow rates, there is enough water to significantly affect the temperature and size of the rocket’s exhaust jet. Effectively, energy that would have gone into gas motion and acoustic vibration is instead expended on moving and heating water droplets. In the case of the Space Shuttle, this reduced noise levels in the payload bay to 142 dB – about as loud as standing on the deck of an aircraft carrier. (Image credits: NASA, 1, 2; research credit: M. Kandula; original question from Megan H.)
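For a sense of scale, here is a minimal Python sketch of those two numbers – only the 3.5× ratio and the 142 dB figure come from the text above; the propellant flow rate is a made-up example value:

```python
# Back-of-envelope only: the 3.5x ratio and 142 dB come from the post above;
# the propellant mass flow rate is a hypothetical example value.
P_REF = 20e-6  # reference sound pressure in air, Pa (0 dB SPL)

def spl_to_pressure(spl_db: float) -> float:
    """Convert a sound pressure level in dB SPL to a pressure amplitude in Pa."""
    return P_REF * 10 ** (spl_db / 20)

propellant_flow_kg_s = 10_000                  # hypothetical propellant mass flow
water_flow_kg_s = 3.5 * propellant_flow_kg_s   # deluge sized per the quoted ratio

print(f"water flow : {water_flow_kg_s:,.0f} kg/s")
print(f"142 dB SPL : {spl_to_pressure(142):.0f} Pa pressure amplitude")
# Roughly: every -6 dB halves the pressure amplitude; every -3 dB halves acoustic power.
```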
A pioneering new technique that encourages the wonder material graphene to “talk” could revolutionise the global audio and telecommunications industries.
Researchers from the University of Exeter have devised a ground-breaking method to use graphene to generate complex and controllable sound signals. In essence, it combines speaker, amplifier and graphic equaliser into a chip the size of a thumbnail.
Traditional speakers mechanically vibrate to produce sound, with a moving coil or membrane pushing the air around it back and forth. It is a bulky technology that has hardly changed in more than a century.
This innovative new technique involves no moving parts. A layer of the atomically thin material graphene is rapidly heated and cooled by an alternating electric current; as this thermal variation is transferred to the surrounding air, the air expands and contracts, generating sound waves.
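That rapid heating and cooling has a well-known consequence for thermoacoustic (“thermophone”) sources in general: because Joule heating scales with the square of the current, a pure AC drive at frequency f produces heat – and therefore sound – at 2f. A minimal NumPy sketch of that general principle (illustrative drive values, not parameters of the Exeter chip):

```python
# Minimal sketch of the generic thermophone principle (not the Exeter device):
# Joule heating goes as I^2, so an AC drive at f heats, and emits sound, at 2f.
import numpy as np

f_drive = 1_000.0                      # drive frequency, Hz (illustrative)
R_sheet = 100.0                        # sheet resistance, ohms (illustrative)
t = np.linspace(0, 0.01, 10_000, endpoint=False)

i_ac = 0.01 * np.sin(2 * np.pi * f_drive * t)   # AC drive current, A
p_heat = i_ac**2 * R_sheet                      # instantaneous heating power, W

# The heating-power spectrum peaks at 2*f_drive, not f_drive.
spectrum = np.abs(np.fft.rfft(p_heat - p_heat.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(f"dominant heating frequency: {freqs[spectrum.argmax()]:.0f} Hz")  # ~2000 Hz

# Adding a DC bias on top of the AC signal restores a component at f_drive itself,
# which is one common way thermoacoustic emitters avoid frequency doubling.
```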
Irving Langmuir, who won the 1932 Nobel Prize in Chemistry for his work on surface chemistry, demonstrates how dipping an oil-covered finger into water creates a film of oil, pushing floating particles of powder to the edge.
The same phenomenon can be used to power a paper boat with a little ‘fuel’ applied to the back: as the film expands over the water, the boat is propelled forward:
With experiments like this he revealed that these films are just one molecule thick – a remarkable finding that gave a direct handle on the size of molecules.
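The estimate itself is just a division: spread a known volume of oil into a film of measurable area and the thickness falls out. A minimal sketch with illustrative classroom-scale numbers (not Langmuir’s own measurements):

```python
# Langmuir-style monolayer estimate: thickness = volume / area.
# The numbers below are illustrative, not Langmuir's actual measurements.
drop_volume_cm3 = 1e-4      # a very small droplet of oil
film_area_cm2 = 500.0       # area the oil film spreads over on the water

thickness_cm = drop_volume_cm3 / film_area_cm2
thickness_nm = thickness_cm * 1e7          # 1 cm = 1e7 nm

print(f"film thickness ≈ {thickness_nm:.1f} nm")  # ~2 nm: about one molecule long
```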
In the full archive film, Langmuir goes on to demonstrate proteins spreading in the same way, revealing the importance of molecular layering for structure.
First, he drops protein solution onto the surface, and it spreads out in a clear circle, with a jagged edge:
Add a little more oil on top, and a star shape appears:
By breaking it up further, he makes chunks of the film which behave like icebergs on water:
You can watch the full demonstrations, along with hours more classic science footage, in our archive.
People can intuitively recognise small numbers up to four; for anything beyond that, calculation depends on the support of language. This raises a fascinating research question: how do multilingual people solve arithmetic tasks presented in the different languages they speak well? That situation is the rule for students with Luxembourgish as their mother tongue, who are first educated in German and then continue their schooling with French as the language of instruction.
This question was investigated by a research team led by Dr Amandine Van Rinsveld and Professor Christine Schiltz from the Cognitive Science and Assessment Institute (COSA) at the University of Luxembourg. For the study, the researchers recruited subjects with Luxembourgish as their mother tongue who had completed their schooling in the Grand Duchy of Luxembourg and gone on to study at francophone universities in Belgium. The participants therefore had an excellent command of both German and French: as students in Luxembourg, they had taken maths classes in German in primary school and in French in secondary school.
In two separate test situations, the participants had to solve both very simple and somewhat more complex addition tasks, in German and in French. The tests showed that they solved the simple additions equally well in both languages. For the complex additions in French, however, they needed more time than for identical tasks in German, and they also made more errors.
The bilingual brain calculates differently depending on the language used
During the tests, functional magnetic resonance imaging (fMRI) was used to measure the brain activity of the subjects. This demonstrated that, depending on the language used, different brain regions were activated.
With addition tasks in German, a small language region in the left temporal lobe was activated. When solving complex calculation tasks in French, additional parts of the brain responsible for processing visual information were recruited: during the complex calculations in French, the subjects also fell back on visual thinking. The experiments provided no evidence that the subjects translated the French tasks into German in order to solve them.
While the test subjects could solve the German tasks using the classic, familiar number-and-language brain areas, this system proved insufficient in the second language of instruction, in this case French. To solve the arithmetic tasks in French, they systematically had to fall back on other thought processes that had not previously been observed in monolingual subjects.
The study documents for the first time, using measurements and imaging of brain activity, the demonstrable cognitive “extra effort” required to solve arithmetic tasks in a second language of instruction. The results clearly show that calculation processes are directly affected by language.
For the Luxembourg school system these findings are particularly significant, since the language of instruction for maths changes from German in primary school to French in secondary school. They matter all the more because a much smaller proportion of today’s student population in the Grand Duchy has a German-speaking background than in previous generations, so it can be assumed that many pupils already have to perform comparable translation work in German-language maths classes in primary school.
In a proof-of-concept study published in Nature Physics, researchers drew magnetic squares in a nonmagnetic material with an electrified pen and then “read” this magnetic doodle with X-rays.
The experiment demonstrated that magnetic properties can be created and annihilated in a nonmagnetic material with precise application of an electric field – something long sought by scientists looking for a better way to store and retrieve information on hard drives and other magnetic memory devices. The research took place at the Department of Energy’s SLAC National Accelerator Laboratory and the Korea Advanced Institute of Science and Technology.
“The important thing is that it’s reversible. Changing the voltage of the applied electric field demagnetizes the material again,” said Hendrik Ohldag, a co-author on the paper and scientist at the lab’s Stanford Synchrotron Radiation Lightsource (SSRL), a DOE Office of Science User Facility.
“That means this technique could be used to design new types of memory storage devices with additional layers of information that can be turned on and off with an electric field, rather than the magnetic fields used today,” Ohldag said. “This would allow more targeted control, and would be less likely to cause unwanted effects in surrounding magnetic areas.”
Flowers
My Neighbor Totoro | Tonari no Totoro (1988, Japan)
Director: Hayao Miyazaki
Cinematographer: Mark Henley
(Image caption: A new technique called magnified analysis of proteome (MAP), developed at MIT, allows researchers to peer at molecules within cells or take a wider view of the long-range connections between neurons. Credit: Courtesy of the researchers)
Imaging the brain at multiple size scales
MIT researchers have developed a new technique for imaging brain tissue at multiple scales, allowing them to peer at molecules within cells or take a wider view of the long-range connections between neurons.
This technique, known as magnified analysis of proteome (MAP), should help scientists in their ongoing efforts to chart the connectivity and functions of neurons in the human brain, says Kwanghun Chung, the Samuel A. Goldblith Assistant Professor in the Departments of Chemical Engineering and Brain and Cognitive Sciences, and a member of MIT’s Institute for Medical Engineering and Science (IMES) and Picower Institute for Learning and Memory.
“We use a chemical process to make the whole brain size-adjustable, while preserving pretty much everything. We preserve the proteome (the collection of proteins found in a biological sample), we preserve nanoscopic details, and we also preserve brain-wide connectivity,” says Chung, the senior author of a paper describing the method in the July 25 issue of Nature Biotechnology.
The researchers also showed that the technique is applicable to other organs such as the heart, lungs, liver, and kidneys.
The paper’s lead authors are postdoc Taeyun Ku, graduate student Justin Swaney, and visiting scholar Jeong-Yoon Park.
Multiscale imaging
The new MAP technique builds on a tissue transformation method known as CLARITY, which Chung developed as a postdoc at Stanford University. CLARITY preserves cells and molecules in brain tissue and makes them transparent so the molecules inside the cell can be imaged in 3-D. In the new study, Chung sought a way to image the brain at multiple scales, within the same tissue sample.
“There is no effective technology that allows you to obtain this multilevel detail, from brain region connectivity all the way down to subcellular details, plus molecular information,” he says.
To achieve that, the researchers developed a method to reversibly expand tissue samples in a way that preserves nearly all of the proteins within the cells. Those proteins can then be labeled with fluorescent molecules and imaged.
The technique relies on flooding the brain tissue with acrylamide polymers, which can form a dense gel. In this case, the gel is 10 times denser than the one used for the CLARITY technique, which gives the sample much more stability. This stability allows the researchers to denature and dissociate the proteins inside the cells without destroying the structural integrity of the tissue sample.
Before denaturing the proteins, the researchers attach them to the gel using formaldehyde, as Chung did in the CLARITY method. Once the proteins are attached and denatured, the gel expands the tissue sample to four or five times its original size.
“It is reversible and you can do it many times,” Chung says. “You can then use off-the-shelf molecular markers like antibodies to label and visualize the distribution of all these preserved biomolecules.”
There are hundreds of thousands of commercially available antibodies that can be used to fluorescently tag specific proteins. In this study, the researchers imaged neuronal structures such as axons and synapses by labeling proteins found in those structures, and they also labeled proteins that allow them to distinguish neurons from glial cells.
“We can use these antibodies to visualize any target structures or molecules,” Chung says. “We can visualize different neuron types and their projections to see their connectivity. We can also visualize signaling molecules or functionally important proteins.”
High resolution
Once the tissue is expanded, the researchers can use any of several common microscopes to obtain images with a resolution as high as 60 nanometers — much better than the usual 200 to 250-nanometer limit of light microscopes, which are constrained by the wavelength of visible light. The researchers also demonstrated that this approach works with relatively large tissue samples, up to 2 millimeters thick.
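The quoted figures hang together through a simple relation: in expansion-based imaging, the effective resolution is roughly the optical diffraction limit divided by the linear expansion factor. A minimal back-of-envelope check (the division is an illustration; the input figures are the ones quoted above):

```python
# Back-of-envelope check of the resolution figures quoted above.
diffraction_limit_nm = 250   # ~200-250 nm limit of conventional light microscopy
expansion_factor = 4         # MAP expands tissue ~4-5x linearly

effective_resolution_nm = diffraction_limit_nm / expansion_factor
print(f"effective resolution ≈ {effective_resolution_nm:.0f} nm")  # ~60 nm, as quoted
```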
“This is, as far as I know, the first demonstration of super-resolution proteomic imaging of millimeter-scale samples,” Chung says.
“This is an exciting advance for brain mapping, a technique that reveals the molecular and connectional architecture of the brain with unprecedented detail,” says Sebastian Seung, a professor of computer science at the Princeton Neuroscience Institute, who was not involved in the research.
Currently, efforts to map the connections of the human brain rely on electron microscopy, but Chung and colleagues demonstrated that the higher-resolution MAP imaging technique can trace those connections more accurately.
Chung’s lab is now working on speeding up the imaging and the image processing, which is challenging because there is so much data generated from imaging the expanded tissue samples.
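One way to see why the data load is heavy: the imaged volume, and with it the number of voxels at a fixed voxel size, grows with the cube of the expansion factor. A minimal illustration with made-up sample and voxel sizes (nothing here is from the paper):

```python
# Illustrative only: expanding a sample N-fold in each linear dimension multiplies
# the imaged volume -- and the voxel count at fixed voxel size -- by N**3.
sample_volume_mm3 = 8.0      # e.g. a 2 mm x 2 mm x 2 mm block
voxel_size_um = 0.5          # hypothetical imaging voxel edge length

for expansion in (1, 4, 5):
    volume_um3 = sample_volume_mm3 * 1e9 * expansion**3   # 1 mm^3 = 1e9 um^3
    voxels = volume_um3 / voxel_size_um**3
    print(f"{expansion}x expansion: ~{voxels:.1e} voxels")
```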
“It’s already easier than other techniques because the process is really simple and you can use off-the-shelf molecular markers, but we are trying to make it even simpler,” Chung says.