The word egg was a borrowing from Old Norse egg, replacing the native word ey (plural eyren) from Old English ǣġ, plural ǣġru. Like “children” and “kine” (obsolete plural of cow), the plural ending -en was added redundantly to the plural form in Middle English. As with most borrowings from Old Norse, this showed up first in northern dialects of English, and gradually moved southwards, so that for a while, ey and egg were used in different parts of England.
In 1490, William Caxton, who had printed the first books in the English language, wrote a prologue to his edition of Eneydos (the Aeneid, as we would call it in modern English) in which he discussed the problem of choosing a dialect to publish in, given the wide variety of English dialects that existed at the time. This word was a specific example he gave. He told a story about some merchants from London travelling down the Thames and stopping in a village in Kent:
And one of theym… cam in to an hows and axed for mete and specyally he axyd after eggys, and the goode wyf answerde that she could speke no Frenshe. And the marchaunt was angry, for he also coude speke no Frenshe, but wolde have hadde egges; and she understode hym not. And thenne at laste a-nother sayd that he wolde have eyren. Then the good wyf sayd that she understod hym wel. Loo, what sholde a man in thyse dayes now wryte, egges, or eyren? Certaynly it is hard to playse every man, by-cause of dyversite and chaunge of langage.
The merchant in this story was only familiar with the word egg, while the woman only knew ey, and the confusion was only resolved by someone who knew both words. Indeed, the woman in the story was so confused by this unfamiliar word egg that she assumed it must be a French word! The word “meat” (or “mete” as Caxton spelled it) was a generic word for “food” at the time.
The word ey may also survive in the term Cockney, thought to derive from the Middle English cocken ey (“cock’s egg”), a term given to a small misshapen egg, and applied by rural people to townspeople.
Both egg and ey derived from the same Proto-Germanic root, *ajją, which apparently had a variant *ajjaz in West Germanic. This Proto-Germanic form in turn derived from Proto-Indo-European *h2ōwyóm. In Latin, this root became ōvum, from which the adjective ōvalis, meaning “egg-shaped”, was derived. Ōvum itself was borrowed into English in the biological sense of the larger gamete in animals, while ōvalis is the source of oval.
The PIE root is generally thought to derive from the root *h2éwis, “bird”, which is the source of Latin avis “bird”, itself the source of English terms such as aviation. This word may also be related to *h2ówis “sheep”, which survived in English as ewe. One theory is that they were both derived from a root meaning something like “to dress” or “to clothe”, with bird meaning “one who is clothed [in feathers]” and sheep meaning “one who clothes [by producing wool]”.
On March 2, 1903, the Hotel Martha Washington became New York City’s first women-only hotel. Located at 30 East 30th Street, it served the growing population of professional women who otherwise struggled to find safe and socially acceptable lodging in the city. A far cry from the crowded boarding houses, this was a thoroughly modern operation housed in a twelve-story Renaissance Revival building that featured all the amenities, from a ladies’ tailor to electric lights. Upon opening, it was immediately popular, both with the women it served and with the curious onlookers who had a hard time coming to terms with the whole idea of the place.
George P. Hall & Son. Manhattan: Hotel Martha Washington. Undated. Photographic print. New-York Historical Society.
Robert L. Bracklow. The Hotel Martha Washington. February 23, 1903. Glass negative. New-York Historical Society.
How Tattooing Really Works
1. Tattooing causes a wound that alerts the body to begin the inflammatory process, calling immune system cells to the wound site to begin repairing the skin. Specialized cells called macrophages eat the invading material (ink) in an attempt to clean up the inflammatory mess.
2. As these cells travel through the lymphatic system, some of them are carried back with a belly full of dye into the lymph nodes while others remain in the dermis. With no way to dispose of the pigment, the dyes inside them remain visible through the skin.
3. Some of the ink particles are also suspended in the gel-like matrix of the dermis, while others are engulfed by dermal cells called fibroblasts. Initially, ink is deposited into the epidermis as well, but as the skin heals, the damaged epidermal cells are shed and replaced by new, dye-free cells with the topmost layer peeling off like a healing sunburn.
4. Dermal cells, however, remain in place until they die. When they do, they are taken up, ink and all, by younger cells nearby so the ink stays where it is.
5. So a single tattoo may not truly last forever, but tattoos have been around longer than any existing culture. And their continuing popularity means that the art of tattooing is here to stay.
From the TED-Ed Lesson What makes tattoos permanent? - Claudia Aguirre
Animation by TOGETHER
“Brainprint” Biometric ID Hits 100% Accuracy
Psychologists and engineers at Binghamton University in New York say they’ve hit a milestone in the quest to use the unassailable inner workings of the mind as a form of biometric identification. They came up with an electroencephalograph system that proved 100 percent accurate at identifying individuals by the way their brains responded to a series of images. But EEG as a practical means of authentication is still far off.
Many earlier attempts had come close to 100 percent accuracy but couldn’t completely close the gap. “It’s a big deal going from 97 to 100 percent because we imagine the applications for this technology being for high-security situations,” says Sarah Laszlo, the assistant professor of psychology at Binghamton who led the research with electrical engineering professor Zhanpeng Jin.
Perhaps as important as perfect accuracy is that this new form of ID can do something fingerprints and retinal scans have a hard time achieving: It can be “canceled.”
Fingerprint authentication can be reset if the associated data is stolen, because that data can be stored as a mathematically transformed version of itself, points out Clarkson University biometrics expert Stephanie Schuckers. However, that trick doesn’t work if it’s the fingerprint (or the finger) itself that’s stolen. And the theft part, at least, is easier than ever. In 2014 hackers claimed to have cloned German defense minister Ursula von der Leyen’s fingerprints just by taking a high-definition photo of her hands at a public event.
Several early attempts at EEG-based identification sought the equivalent of a fingerprint in the electrical activity of a brain at rest. But this new brain biometric, which its inventors call CEREBRE, dodges the cancelability problem because it’s based on the brain’s responses to a sequence of particular types of images. To keep that ID from being permanently hijacked, those images can be changed or re-sorted to essentially make a new biometric passkey, should the original one somehow be hacked.
CEREBRE, which Laszlo, Jin, and colleagues described in IEEE Transactions on Information Forensics and Security, involves presenting a person wearing an EEG system with images that fall into several categories: foods people feel strongly about, celebrities who also evoke emotions, simple sine waves of different frequencies, and uncommon words. The words and images are usually black and white, but occasionally one is presented in color because that produces its own kind of response.
Each image causes a recognizable change in voltage at the scalp called an event-related potential, or ERP. The different categories of images involve somewhat different combinations of parts of your brain, and they were already known to produce slight differences in the shapes of ERPs in different people. Laszlo’s hypothesis was that using all of them—several more than any other system—would create enough different ERPs to accurately distinguish one person from another.
The EEG responses were fed to software called a classifier. After testing several schemes, including a variety of neural networks and other machine-learning tricks, the engineers found that what actually worked best was a system based on simple cross correlation.
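The article doesn’t give implementation details, but a template-matching classifier built on simple cross correlation can be sketched roughly as follows; the function names, signal shapes, and toy data are illustrative assumptions, not the authors’ code.

```python
import numpy as np

def identify(erp, templates):
    """Match one measured ERP trace against stored per-person templates.

    erp       -- 1-D voltage trace recorded for one stimulus (toy data here)
    templates -- dict mapping person name -> stored template trace
    Returns the name whose template cross-correlates most strongly.
    """
    best_name, best_score = None, -np.inf
    for name, template in templates.items():
        # Normalise both traces so overall amplitude doesn't dominate.
        a = (erp - erp.mean()) / (erp.std() + 1e-12)
        b = (template - template.mean()) / (template.std() + 1e-12)
        # Use the peak of the cross-correlation as a similarity score.
        score = np.max(np.correlate(a, b, mode="full")) / len(a)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy usage: two fake "people" whose ERPs differ slightly in shape.
t = np.linspace(0, 1, 250)
templates = {
    "alice": np.sin(2 * np.pi * 10 * t) * np.exp(-3 * t),
    "bob":   np.sin(2 * np.pi * 12 * t) * np.exp(-4 * t),
}
measured = templates["alice"] + 0.1 * np.random.randn(len(t))
print(identify(measured, templates))  # expected: alice
```

The appeal of this kind of scheme is its simplicity: a new person is enrolled just by recording their template responses, with no retraining of a large model.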
In the experiments, each of the 50 test subjects saw a sequence of 500 images, each flashed for 1 second. “We collected 500, knowing it was overkill,” Laszlo says. Once the researchers crunched the data they found that just 27 images would have been enough to hit the 100 percent mark.
The experiments were done with a high-quality research-grade EEG, which used 30 electrodes attached to the skull with conductive goop. However, the data showed that the system needs only three electrodes for 100 percent identification, and Laszlo says her group is working on simplifying the setup. They’re testing consumer EEG gear from Emotiv and NeuroSky, and they’ve even tried to replicate the work with electrodes embedded in a Google Glass, though the results weren’t spectacular, she says.
For EEG to really be taken seriously as a biometric ID, brain interfaces will need to be pretty commonplace, says Schuckers. That might yet happen. “As we go more and more into wearables as a standard part of our lives, [EEGs] might be more suitable,” she says.
But like any security system, even an EEG biometric will attract hackers. How can you hack something that depends on your thought patterns? One way, explains Laszlo, is to train a hacker’s brain to mimic the right responses. That would involve flashing light into a hacker’s eye at precise times while the person is observing the images. These flashes are known to alter the shape of the ERP.
(Image caption: Measurement of brain activity in a patient with phantom limb pain. Credit: Osaka University)
Cause of phantom limb pain in amputees, and potential treatment, identified
Researchers have discovered that a ‘reorganisation’ of the wiring of the brain is the underlying cause of phantom limb pain, which occurs in the vast majority of individuals who have had limbs amputated, and have identified a potential method of treating it that uses artificial intelligence techniques.
The researchers, led by a group from Osaka University in Japan in collaboration with the University of Cambridge, used a brain-machine interface to train a group of ten individuals to control a robotic arm with their brains. They found that if a patient tried to control the prosthetic by associating the movement with their missing arm, it increased their pain, but training them to associate the movement of the prosthetic with the unaffected hand decreased their pain.
Their results, reported in the journal Nature Communications, demonstrate that in patients with chronic pain associated with amputation or nerve injury, there are ‘crossed wires’ in the part of the brain associated with sensation and movement, and that by mending that disruption, the pain can be treated. The findings could also be applied to those with other forms of chronic pain, including pain due to arthritis.
Approximately 5,000 amputations are carried out in the UK every year, and those with type 1 or type 2 diabetes are at particular risk of needing an amputation. In most cases, individuals who have had a hand or arm amputated, or who have had severe nerve injuries which result in a loss of sensation in their hand, continue to feel the existence of the affected hand as if it were still there. Between 50 and 80 percent of these patients suffer with chronic pain in the ‘phantom’ hand, known as phantom limb pain.
“Even though the hand is gone, people with phantom limb pain still feel like there’s a hand there – it basically feels painful, like a burning or hypersensitive type of pain, and conventional painkillers are ineffective in treating it,” said study co-author Dr Ben Seymour, a neuroscientist based in Cambridge’s Department of Engineering. “We wanted to see if we could come up with an engineering-based treatment as opposed to a drug-based treatment.”
A popular theory of the cause of phantom limb pain is faulty ‘wiring’ of the sensorimotor cortex, the part of the brain that is responsible for processing sensory inputs and executing movements. In other words, there is a mismatch between a movement and the perception of that movement.
In the study, Seymour and his colleagues, led by Takufumi Yanagisawa from Osaka University, used a brain-machine interface to decode the neural activity of the mental action needed for a patient to move their ‘phantom’ hand, and then converted the decoded phantom hand movement into that of a robotic neuroprosthetic using artificial intelligence techniques.
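The study’s actual decoding pipeline isn’t described in this article, but the general shape of a movement-intent decoder is easy to sketch. Everything below (the use of scikit-learn, the feature layout, the labels) is an illustrative assumption, not the authors’ method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy stand-in for neural features recorded while a patient imagines moving
# the phantom hand: 200 trials x 64 sensor features, with fake labels
# (0 = open hand, 1 = grasp). Real data would come from brain recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

# Train a simple linear decoder on the labelled trials.
decoder = LinearDiscriminantAnalysis().fit(X, y)

def prosthetic_command(features):
    """Map one trial's neural features to a robotic-hand command."""
    return "grasp" if decoder.predict(features.reshape(1, -1))[0] == 1 else "open"

print(prosthetic_command(X[0]))
```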
“We found that the better their affected side of the brain got at using the robotic arm, the worse their pain got,” said Yanagisawa. “The movement part of the brain is working fine, but they are not getting sensory feedback – there’s a discrepancy there.”
The researchers then altered their technique to train the ‘wrong’ side of the brain: for example, a patient who was missing their left arm was trained to move the prosthetic arm by decoding movements associated with their right arm, or vice versa. When they were trained in this counter-intuitive technique, the patients found that their pain decreased significantly. Learning to control the arm in this way takes advantage of the plasticity of the sensorimotor cortex (the ability of the brain to restructure and learn new things), showing a clear link between plasticity and pain.
Although the results are promising, Seymour warns that the effects are temporary, and require a large, expensive piece of medical equipment to be effective. However, he believes that a treatment based on their technique could be available within five to ten years. “Ideally, we’d like to see something that people could have at home, or that they could incorporate with physio treatments,” he said. “But the results demonstrate that combining AI techniques with new technologies is a promising avenue for treating pain, and an important area for future UK-Japan research collaboration.”
At this time in 1962, the U.S. was in the thick of the Cuban Missile Crisis. Here’s a brief recap of what exactly happened during those thirteen days.
It’s not hard to imagine a world where at any given moment, you and everyone you know could be wiped out without warning at the push of a button. This was the reality for millions of people during the 45-year period after World War II, now known as the Cold War. As the United States and Soviet Union faced off across the globe, each knew that the other had nuclear weapons capable of destroying it. And destruction never loomed closer than during the 13 days of the Cuban Missile Crisis.
In 1961, the U.S. unsuccessfully tried to overthrow Cuba’s new communist government. That failed attempt was known as the Bay of Pigs, and it convinced Cuba to seek help from the U.S.S.R. Soviet premier Nikita Khrushchev was happy to comply by secretly deploying nuclear missiles to Cuba, not only to protect the island, but to counteract the threat from U.S. missiles in Italy and Turkey. By the time U.S. intelligence discovered the plan, the materials to create the missiles were already in place.
At an emergency meeting on October 16, 1962, military advisors urged an airstrike on missile sites and an invasion of the island. But President John F. Kennedy chose a more careful approach. On October 22, he announced that the U.S. Navy would intercept all shipments to Cuba. A naval blockade, however, was considered an act of war; although the President called it a quarantine that did not block basic necessities, the Soviets didn’t appreciate the distinction.
Thus ensued the most intense six days of the Cold War. As the weapons continued to be armed, the U.S. prepared for a possible invasion. For the first time in history, the U.S. Military set itself to DEFCON 2, the defense readiness one step away from nuclear war. With hundreds of nuclear missiles ready to launch, the metaphorical Doomsday Clock stood at one minute to midnight.
But diplomacy carried on. In Washington, D.C., Attorney General Robert Kennedy secretly met with Soviet Ambassador Anatoly Dobrynin. After intense negotiation, they reached the following proposal: the U.S. would remove its missiles from Turkey and Italy and promise never to invade Cuba, in exchange for the Soviet withdrawal from Cuba under U.N. inspection. The crisis was now over.
While criticized at the time by their respective governments for bargaining with the enemy, contemporary historical analysis shows great admiration for Kennedy’s and Khrushchev’s ability to diplomatically solve the crisis. Overall, the Cuban Missile Crisis revealed just how fragile human politics are compared to the terrifying power they can unleash.
For a deeper dive into the circumstances of the Cuban Missile Crisis, be sure to watch The history of the Cuban Missile Crisis - Matthew A. Jordan
Animation by Patrick Smith
Idempotence.
A term I’d always found intriguing, mostly because it’s such an unusual word. It’s a concept from mathematics and computer science but can be applied more generally—not that it often is. Basically, an operation is idempotent if, no matter how many times you do it, you get the same result, at least without doing other operations in between. A classic example would be view_your_bank_balance being idempotent, and withdraw_1000 not being idempotent.
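In code the distinction is easy to see. A minimal sketch (the function names echo the examples above and are purely illustrative):

```python
balance = 5000

def view_your_bank_balance():
    # Idempotent: calling it once or a hundred times leaves the same state
    # and returns the same result (assuming nothing else touches `balance`).
    return balance

def withdraw_1000():
    # Not idempotent: every call changes the state the next call will see.
    global balance
    balance -= 1000
    return balance

print(view_your_bank_balance(), view_your_bank_balance())  # 5000 5000
print(withdraw_1000(), withdraw_1000())                    # 4000 3000
```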
HTs: @aidmcg and Ewan Silver who kept saying it
(Fig.1. Neuron connections in biological neural networks. Source: MIPT press office)
Physicists build “electronic synapses” for neural networks
A team of scientists from the Moscow Institute of Physics and Technology (MIPT) has created prototypes of “electronic synapses” based on ultra-thin films of hafnium oxide (HfO2). These prototypes could potentially be used in fundamentally new computing systems. The paper has been published in the journal Nanoscale Research Letters.
The group of researchers from MIPT made HfO2-based memristors measuring just 40×40 nm². The nanostructures they built exhibit properties similar to biological synapses. Using the newly developed technology, the memristors were integrated into matrices; in the future, this technology may be used to design computers that function similarly to biological neural networks.
Memristors (resistors with memory) are devices that are able to change their state (conductivity) depending on the charge passing through them, and they therefore have a memory of their “history”. In this study, the scientists used devices based on thin-film hafnium oxide, a material that is already used in the production of modern processors. This means that this new lab technology could, if required, easily be used in industrial processes.
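As a rough mental model (a generic textbook-style sketch, not the physics of the MIPT HfO2 devices), a memristor can be simulated as a resistor whose conductance is driven by the charge that has flowed through it:

```python
class ToyMemristor:
    """Toy memristor: conductance depends on the accumulated charge."""

    def __init__(self, g_min=1e-6, g_max=1e-3, q_scale=1e-5):
        self.g_min, self.g_max = g_min, g_max  # conductance limits, in siemens
        self.q_scale = q_scale                 # charge needed to sweep the range
        self.state = 0.0                       # 0 = low conductance, 1 = high

    def conductance(self):
        return self.g_min + self.state * (self.g_max - self.g_min)

    def apply_voltage(self, v, dt):
        """Pass current for dt seconds; the charge moves the internal state."""
        i = self.conductance() * v
        self.state = min(1.0, max(0.0, self.state + (i * dt) / self.q_scale))
        return i

m = ToyMemristor()
for _ in range(1000):  # a train of positive pulses drives it toward high conductance
    m.apply_voltage(1.0, 1e-3)
print(f"conductance after pulses: {m.conductance():.2e} S")
```

The key point the analogy captures is that the device “remembers” its history: the same voltage pulse has a different effect depending on everything that came before.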
“In a simpler version, memristors are promising binary non-volatile memory cells, in which information is written by switching the electric resistance – from high to low and back again. What we are trying to demonstrate are much more complex functions of memristors – that they behave similar to biological synapses,” said Yury Matveyev, the corresponding author of the paper, and senior researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, commenting on the study.
Synapses – the key to learning and memory
A synapse is a point of connection between neurons, the main function of which is to transmit a signal (a spike – a particular type of signal, see fig. 2) from one neuron to another. Each neuron may have thousands of synapses, i.e. connect with a large number of other neurons. This means that information can be processed in parallel, rather than sequentially (as in modern computers). This is the reason why “living” neural networks are so immensely effective, both in terms of speed and energy consumption, in solving a large range of tasks, such as image or voice recognition.
(Fig.2 The type of electrical signal transmitted by neurons (a “spike”). The red lines are various other biological signals, the black line is the averaged signal. Source: MIPT press office)
Over time, synapses may change their “weight”, i.e. their ability to transmit a signal. This property is believed to be the key to understanding the learning and memory functions of the brain.
From the physical point of view, synaptic “memory” and “learning” in the brain can be interpreted as follows: the neural connection possesses a certain “conductivity”, which is determined by the previous “history” of signals that have passed through the connection. If a synapse transmits a signal from one neuron to another, we can say that it has high “conductivity”, and if it does not, we say it has low “conductivity”. However, synapses do not simply function in on/off mode; they can have any intermediate “weight” (intermediate conductivity value). Accordingly, if we want to simulate them using certain devices, these devices will also have to have analogous characteristics.
The memristor as an analogue of the synapse
As in a biological synapse, the value of the electrical conductivity of a memristor is the result of its previous “life” – from the moment it was made.
There are a number of physical effects that can be exploited to design memristors. In this study, the authors used devices based on ultrathin-film hafnium oxide, which exhibit the effect of soft (reversible) electrical breakdown under an applied external electric field. Most often, these devices use only two different states, encoding logic zero and one. However, in order to simulate biological synapses, a continuous spectrum of conductivities had to be used in the devices.
“The detailed physical mechanism behind the function of the memristors in question is still debated. However, the qualitative model is as follows: in the metal–ultrathin oxide–metal structure, charged point defects, such as vacancies of oxygen atoms, are formed and move around in the oxide layer when exposed to an electric field. It is these defects that are responsible for the reversible change in the conductivity of the oxide layer,” says the co-author of the paper and researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Sergey Zakharchenko.
The authors used the newly developed “analogue” memristors to model various learning mechanisms (“plasticity”) of biological synapses. In particular, this involved functions such as long-term potentiation (LTP) or long-term depression (LTD) of a connection between two neurons. It is generally accepted that these functions are the underlying mechanisms of memory in the brain.
The authors also succeeded in demonstrating a more complex mechanism – spike-timing-dependent plasticity, i.e. the dependence of the strength of the connection between two neurons on the relative timing of their being “triggered”. It had previously been shown that this mechanism is responsible for associative learning – the ability of the brain to find connections between different events.
To demonstrate this function in their memristor devices, the authors purposefully used an electric signal which reproduced, as far as possible, the signals in living neurons, and they obtained a dependency very similar to those observed in living synapses (see fig. 3).
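The spike-timing dependence they reproduce has a well-known textbook form: the change in weight falls off exponentially with the time difference between the pre- and post-synaptic spikes, with the sign set by their order. A minimal sketch of that rule (the amplitudes and time constants below are illustrative, not the values measured for these memristors):

```python
import math

def stdp_delta_w(dt_ms, a_plus=0.10, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Textbook pairwise STDP rule.

    dt_ms = t_post - t_pre. A pre-synaptic spike arriving just before the
    post-synaptic one (dt > 0) strengthens the connection (LTP); arriving
    just after (dt < 0) weakens it (LTD).
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+3d} ms  ->  dw = {stdp_delta_w(dt):+.4f}")
```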
(Fig.3. The change in conductivity of memristors depending on the temporal separation between “spikes” (right) and the change in potential of the neuron connections in biological neural networks. Source: MIPT press office)
These results allowed the authors to confirm that the elements that they had developed could be considered a prototype of the “electronic synapse”, which could be used as a basis for the hardware implementation of artificial neural networks.
“We have created a baseline matrix of nanoscale memristors demonstrating the properties of biological synapses. Thanks to this research, we are now one step closer to building an artificial neural network. It may only be the very simplest of networks, but it is nevertheless a hardware prototype,” said the head of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Andrey Zenkevich.
In general I am a casual observer and usually do not make comments, especially since I am here to learn and have no background in linguistics. But in this case I feel strongly compelled to put my 2 cents' worth of thoughts in.
Although I cannot say that I am anything like fluent, I do have a reasonable amount of Mandarin Chinese and Japanese, and I have to say the first thing I thought when I saw this article was "ah". Because although I can see how katakana is derived from Chinese, using the rather restricted stroke combinations that are the basis of all Chinese characters, the same cannot be said for hiragana, because at the very least, squiggles do not exist in Chinese, at least not by the time it was exported to Japan. What you might think are squiggles in Chinese are in fact just our possibly lazy, or perhaps more elegant, way of writing, the way cursive looks compared to printed letters. Hiragana bears only a superficial resemblance to Chinese and always feels like it must have had another source of inspiration.
Also keep in mind that Chinese was basically an imported language in Japan, and an attempt to shoehorn Japanese sounds into Chinese characters (which I think I can safely say did not sound the same) must have been unwieldy at best. In fact, today, Japanese pronunciations of kanji differ so much from the Chinese, and often their usage too, that I would use my knowledge of the characters only as a rough starting point as to what they might mean in Japanese.
Also, I looked up Kūkai, and, to cut a long story short, he was a Japanese Buddhist monk who went to China to study the sutras, and, to quote from the Wikipedia page directly:
Kūkai arrived back in Japan in 806 as the eighth Patriarch of Esoteric Buddhism, having learnt Sanskrit and its Siddhaṃ script, studied Indian Buddhism, as well as having studied the arts of Chinese calligraphy and poetry, all with recognized masters. He also arrived with a large number of texts, many of which were new to Japan and were esoteric in character, as well as several texts on the Sanskrit language and the Siddhaṃ script.
And a quick look at the Siddham script shows that it has its roots in the Aramaic alphabet.
This is the man to whom the invention of the kana system is attributed, and if that is the case, I see a possible connection that is not as far-fetched as it seems.
In the Japanese language, we have three types of letters: Kanji, Hiragana, and Katakana.
Hiragana’s roots are in old Ivrit (Hebrew) and Palmyra letters.
First column: Phoenician alphabet; second column: Ostracon; third column: Old Aramaic; fourth column: Imperial Aramaic; fifth column: Dead Sea Scrolls; sixth column: Palmyrene script; seventh column: Palmyra.
First column: Hiragana; second column: consonants; third column: vowels; fourth column: the consonant and the vowel combined; fifth column: Sousho-tai (a handwriting style); sixth column: Kanji.
This work features 100 images highlighting Cassini’s 13-year tour at the ringed giant.
Explore our beautiful home world as seen from space.
Emblems of Exploration showcases the rich history of space and aeronautic logos.
Hubble Focus: Our Amazing Solar System showcases the wonders of our galactic neighborhood.
This book dives into the role aeronautics plays in our mission of engineering and exploration.
Making the Invisible Visible outlines the rich history of infrared astronomy.
The NASA Systems Engineering Handbook describes how we get the job done.
The space race really heats up in the third volume of famed Russian spacecraft designer Boris Chertok’s memoirs. Chertok, who worked under the legendary Sergey Korolev, continues his fascinating narrative on the early history of the Soviet space program, from 1961 to 1967, in Rockets and People III.
The second volume of Walking to Olympus explores the 21st century evolution of spacewalks.
Find your own great read in NASA’s free e-book library.
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com.