
Study Reveals Brain Activity Patterns Underlying Fluent Speech

Research Could Pave the Way for a Brain Prosthetic to Translate Thoughts into Speech

When we speak, we engage nearly 100 muscles, continuously moving our lips, jaw, tongue, and throat to shape our breath into the fluent sequences of sounds that form our words and sentences. A new study by UC San Francisco scientists reveals how these complex articulatory movements are coordinated in the brain.

The new research reveals that the brain’s speech centers are organized more according to the physical needs of the vocal tract as it produces speech than by how the speech sounds (its “phonetics”). Linguists divide speech into abstract units of sound called “phonemes” and consider the /k/ sound in “keep” the same as the /k/ in “coop.” But in reality, your mouth forms the sound differently in these two words to prepare for the different vowels that follow, and this physical distinction now appears to be more important to the brain regions responsible for producing speech than the theoretical sameness of the phoneme.

The findings, which extend previous studies on how the brain interprets the sounds of spoken language, could help guide the creation of a new generation of prosthetic devices for those who are unable to speak: brain implants could monitor neural activity related to speech production and rapidly and directly translate those signals into synthetic spoken language.

The new study, published on May 17, 2018, in Neuron, was conducted by Josh Chartier and Gopala K. Anumanchipalli, PhD, both researchers in the laboratory of senior author Edward Chang, MD, professor of neurological surgery, Bowes Biomedical Investigator, and member of the UCSF Weill Institute for Neurosciences. They were joined by Keith Johnson, PhD, professor of linguistics at UC Berkeley.

A Neural Code for Vocal Tract Movements

Chang, a neurosurgeon at the UCSF Epilepsy Center, specializes in surgeries to remove brain tissue that causes seizures in patients with epilepsy. In some cases, to prepare for these operations, he places high-density arrays of tiny electrodes onto the surface of the patients’ brains, both to help identify the location triggering the patients’ seizures and to map out other important areas, such as those involved in language, to make sure the surgery avoids damaging them.
In addition to its clinical importance, this method, known as electrocorticography, or ECoG, is a powerful tool for research. “It’s a unique means of looking at thousands of neurons activating in unison,” Chartier said. 
In the new study, Chartier and Anumanchipalli asked five volunteers awaiting surgery, with ECoG electrodes placed over a region of ventral sensorimotor cortex that is a key center of speech production, to read aloud a collection of 460 natural sentences. The sentences were expressly constructed to encapsulate nearly all the possible articulatory contexts in American English. This comprehensiveness was crucial to capture the complete range of “coarticulation,” the blending of phonemes that is essential to natural speech.
“Without coarticulation, our speech would be blocky and segmented to the point where we couldn’t really understand it,” said Chartier.
The research team was not able to simultaneously record the volunteers’ neural activity and their tongue, mouth and larynx movements. Instead, they recorded only audio of the volunteers speaking and developed a novel deep learning algorithm to estimate which movements were made during specific speaking tasks. 
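For readers curious how such an estimate can work, below is a minimal sketch, in Python with PyTorch, of an acoustic-to-articulatory inversion model in the general spirit of the one described. Every dimension, layer choice, and name here is an illustrative assumption, not the authors' actual architecture.

```python
# A minimal sketch (NOT the study's actual model) of acoustic-to-articulatory
# inversion: a recurrent network maps a sequence of acoustic feature frames
# to estimated vocal tract articulator trajectories (lips, jaw, tongue,
# larynx). All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class ArticulatoryInverter(nn.Module):
    def __init__(self, n_acoustic=80, n_articulators=12, hidden=128):
        super().__init__()
        # Bidirectional, because coarticulation means the mouth's position
        # at any frame depends on both past and upcoming sounds.
        self.rnn = nn.LSTM(n_acoustic, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, acoustic):          # (batch, frames, n_acoustic)
        h, _ = self.rnn(acoustic)
        return self.out(h)                # (batch, frames, n_articulators)

model = ArticulatoryInverter()
audio_features = torch.randn(1, 200, 80)  # one utterance of 200 dummy frames
trajectories = model(audio_features)
print(trajectories.shape)                 # torch.Size([1, 200, 12])
```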
This approach allowed the researchers to identify distinct populations of neurons responsible for the specific vocal tract movement patterns needed to produce fluent speech sounds, a level of complexity that had not been seen in previous experiments that used simpler syllable-by-syllable speech tasks.
The experiments revealed that a remarkable diversity of movements was encoded by the neurons surrounding individual electrodes. The researchers found four emergent groups of neurons that appeared to be responsible for coordinating movements of muscles of the lips, tongue, and throat into the four main configurations of the vocal tract used in American English. The researchers also identified neural populations associated with specific classes of phonetic phenomena, including separate clusters for consonants and vowels of different types, but their analysis suggested that these phonetic groupings were a byproduct of more natural groupings based on different types of muscle movement.
Regarding coarticulation, the researchers discovered that our brains’ speech centers coordinate different muscle movement patterns based on the context of what’s being said, and the order in which different sounds occur. For example, the jaw opens more to say the word “tap” than to say the word “has” — despite having the same vowel sound (/ae/), the mouth has to get ready to close to make the /z/ sound in “has.” The researchers found that neurons in the ventral sensorimotor cortex were highly attuned to this and other co-articulatory features of English, suggesting that the brain cells are tuned to produce fluid, context-dependent speech as opposed to reading out discrete speech segments in serial order.
“During speech production, there is clearly another layer of neural processing that happens, which enables the speaker to merge phonemes together into something the listener can understand,” said Anumanchipalli. 

Path to a Speech Prosthetic

“This study highlights why we need to take into account vocal tract movements and not just linguistic features like phonemes when studying speech production,” Chartier said. He thinks this work not only paves the way for additional studies that tackle the sensorimotor aspect of speech production, but could also pay practical dividends.
“We know now that the sensorimotor cortex encodes vocal tract movements, so we can use that knowledge to decode cortical activity and translate that via a speech prosthetic,” said Chartier. “This would give voice to people who can’t speak but have intact neural functions.”
Ultimately, the study could represent a new research avenue for Chartier and Anumanchipalli’s team at UCSF. “It’s really made me think twice about how phonemes fit in—in a sense, these units of speech that we pin so much of our research on are just byproducts of a sensorimotor signal,” Anumanchipalli said.
This work was supported by grants from the NIH (DP2 OD008627 and U01 NS098971-01). E.F.C. is also a New York Stem Cell Foundation-Robertson Investigator. This research was also supported by the New York Stem Cell Foundation, the Howard Hughes Medical Institute, the McKnight Foundation, the Shurl and Kay Curci Foundation, and the William K. Bowes Foundation. 
UC San Francisco (UCSF) is a leading university dedicated to promoting health worldwide through advanced biomedical research, graduate-level education in the life sciences and health professions, and excellence in patient care. It includes top-ranked graduate schools of dentistry, medicine, nursing and pharmacy; a graduate division with nationally renowned programs in basic, biomedical, translational and population sciences; and a preeminent biomedical research enterprise. It also includes UCSF Health, which comprises three top-ranked hospitals, UCSF Medical Center and UCSF Benioff Children's Hospitals in San Francisco and Oakland, and other partner and affiliated hospitals and healthcare providers throughout the Bay Area.
___________________
https://www.ucsf.edu/news/2018/05/410606/study-reveals-brain-activity-patterns-underlying-fluent-speech

Resistance to changes in grammar is futile, say researchers

Linguists say that random chance plays a bigger role than previously thought in the evolution of language – but also that ‘English is weird’.

When it comes to changes in language, there’s no point crying over spilt milk: researchers charting fluctuations in English grammar say the rise of certain words, such as spilled, is probably down to chance, and that resistance is futile.
Comparisons have long been drawn between evolution and changes in language, with experts noting that preferences such as a desire for emphasis can act as a type of “natural selection”, affecting which words or forms of grammar are passed on between generations. 
But a new study shows that another evolutionary mechanism might play a key role: random chance.
The authors of the study say that the work adds to our understanding of how language changes over centuries.
“Whether it is by random chance or selection, one of the things that is true about English – and indeed other languages – is that the language changes,” said Joshua Plotkin, co-author of the research from the University of Pennsylvania. “The grammarians might [win the battle] for a decade, but certainly over a century they are going to be on the losing side.”
Writing in the journal Nature, Plotkin and colleagues describe how they tracked different types of grammatical changes across the ages.
Among them, the team looked at changes in American English across more than one hundred thousand texts from 1810 onwards, focusing on the use of “ed” in the past tense of verbs compared with irregular forms – for example, “spilled” versus “spilt”. 
The hunt threw up 36 verbs which had at least two different forms of the past tense, including quit/quitted and leaped/leapt. However, for the majority, including spilled v spilt, the team said that which form was waxing or waning was not clearly down to selection, meaning it probably came down to chance: which word individuals happened to hear and copy.
“Chance can play an important role even in language evolution – as we know it does in biological evolution,” said Plotkin, adding that the impact of random chance on language had not been fully appreciated before.
For just six of the 36 verbs, the rise of one form over another was clearly not only down to chance, but was largely a result of active preference – akin to natural selection.

The study also revealed that a flower today is more likely to be “smelled” than “smelt” and that the neighbour’s cat probably “dove” behind the sofa – although, as Plotkin notes, British felines remain more likely to have dived. Specifically, “woke” is increasingly preferred over “waked” and “lit” is more popular than “lighted”, while “weaved” and “snuck” are on track to eventually overtake “wove” and “sneaked”, respectively.

But there was a puzzle. “The prevailing view is that if language is changing it should in general change towards the regular form, because the regular form is easier to remember,” said Plotkin. However, four of the six verbs show a rise in the irregular form of the past tense.
That, the team note, might at least in part be down to whether the word sounds similar to other commonly used words of the age. For example the increasing popularity of “dove” rather than “dived” in American English coincides with the development of cars, and hence the rise of soundalike “drive” and past tense “drove” in describing journeys. The team add that they suspect similar effects might be at work in a number of the verbs that currently look like they might be changing by chance alone.
The authors add that the research suggests rare words are more likely to vary over time and be subject to random chance.
The study also explores the use of negation in sentences, such as “I say not”, across English texts dating from the 12th to the 16th centuries, revealing that the placement of the negative word has changed more than once due to selection, possibly because of a desire for emphasis.
“There was a period of time where double negation ... was the way to negate things, just as it is in French today,” said Plotkin.
Dr Christine Cuskley, from the Centre for Language Evolution at the University of Edinburgh, agreed that similarities to commonly used irregular verbs could affect which form of past tense is on the rise.
But she said that it was likely that there were other pressures affecting which form of a past tense is favoured. What’s more, Cuskley added, it is not clear if the conclusions from the latest research could be applied to other languages.
“English is weird,” she said.
_____________________
https://www.theguardian.com/science/2017/nov/01/resistance-to-changes-in-grammar-is-futile-say-researchers?CMP=share_btn_tw

Neuroscientists find that trying harder makes it more difficult to learn some aspects of language

When it comes to learning languages, adults and children have different strengths. Adults excel at absorbing the vocabulary needed to navigate a grocery store or order food in a restaurant, but children have an uncanny ability to pick up on subtle nuances of language that often elude adults. Within months of living in a foreign country, a young child may speak a second language like a native speaker.

Brain structure plays an important role in this "sensitive period" for learning language, which is believed to end around adolescence. The young brain is equipped with neural circuits that can analyze sounds and build a coherent set of rules for constructing words and sentences out of those sounds. Once these language structures are established, it's difficult to build another one for a new language.
In a new study, a team of neuroscientists and psychologists led by Amy Finn, a postdoc at MIT's McGovern Institute for Brain Research, has found evidence for another factor that contributes to adults' language difficulties: When learning certain elements of language, adults' more highly developed cognitive skills actually get in the way. The researchers discovered that the harder adults tried to learn an artificial language, the worse they were at deciphering the language's morphology—the structure and deployment of linguistic units such as root words, suffixes, and prefixes.
"We found that effort helps you in most situations, for things like figuring out what the units of language that you need to know are, and basic ordering of elements. But when trying to learn morphology, at least in this artificial language we created, it's actually worse when you try," Finn says.
Finn and colleagues from the University of California at Santa Barbara, Stanford University, and the University of British Columbia describe their findings in the July 21 issue of PLOS ONE. Carla Hudson Kam, an associate professor of linguistics at British Columbia, is the paper's senior author.
Too much brainpower
Linguists have known for decades that children are skilled at absorbing certain tricky elements of language, such as irregular past participles (examples of which, in English, include "gone" and "been") or complicated verb tenses like the subjunctive.
"Children will ultimately perform better than adults in terms of their command of the grammar and the structural components of language—some of the more idiosyncratic, difficult-to-articulate aspects of language that even most native speakers don't have of," Finn says.
In 1990, linguist Elissa Newport hypothesized that adults have trouble learning those nuances because they try to analyze too much information at once. Adults have a much more highly developed prefrontal cortex than children, and they tend to throw all of that brainpower at learning a second language. This high-powered processing may actually interfere with certain elements of learning language.
"It's an idea that's been around for a long time, but there hasn't been any data that experimentally show that it's true," Finn says.
Finn and her colleagues designed an experiment to test whether exerting more effort would help or hinder success. First, they created nine nonsense words, each with two syllables. Each word fell into one of three categories (A, B, and C), defined by the order of consonant and vowel sounds.
Study subjects listened to the artificial language for about 10 minutes. One group of subjects was told not to overanalyze what they heard, but not to tune it out either. To help them not overthink the language, they were given the option of completing a puzzle or coloring while they listened. The other group was told to try to identify the words they were hearing.
Each group heard the same recording, which was a series of three-word sequences—first a word from category A, then one from category B, then category C—with no pauses between words. Previous studies have shown that adults, babies, and even monkeys can parse this kind of information into word units, a task known as word segmentation.
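The structure of those stimuli is simple to reproduce. Here is a toy Python sketch that builds such a pauseless A-B-C word stream; the nonsense vocabulary is invented for illustration, since the article doesn't list the study's actual words.

```python
# Toy reconstruction of the stimulus stream: nine invented two-syllable
# nonsense words in three categories (A, B, C), concatenated as A-B-C
# triplets with no pauses, so listeners must segment words statistically.
import random

CATEGORIES = {
    "A": ["bupa", "dita", "gomu"],
    "B": ["kelo", "pani", "tibu"],
    "C": ["roga", "suli", "meko"],
}

def make_stream(n_triplets):
    """Return a continuous letter stream of n A-B-C word triplets."""
    return "".join(
        random.choice(CATEGORIES[c]) for _ in range(n_triplets) for c in "ABC"
    )

print(make_stream(3))  # e.g. 'bupakelorogaditapanisulibupatibumeko'
```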
Subjects from both groups were successful at word segmentation, although the group that tried harder performed a little better. Both groups also performed well in a task called word ordering, which required subjects to choose between a correct word sequence (ABC) and an incorrect sequence (such as ACB) of words they had previously heard.
The final test measured skill in identifying the language's morphology. The researchers played a three-word sequence that included a word the subjects had not heard before, but which fit into one of the three categories. When asked to judge whether this new word was in the correct location, the subjects who had been asked to pay closer attention to the original word stream performed much worse than those who had listened more passively.
Turning off effort
The findings support a theory of language acquisition that suggests that some parts of language are learned through procedural memory, while others are learned through declarative memory. Under this theory, declarative memory, which stores knowledge and facts, would be more useful for learning vocabulary and certain rules of grammar. Procedural memory, which guides tasks we perform without conscious awareness of how we learned them, would be more useful for learning subtle rules related to language morphology.
"It's likely to be the procedural memory system that's really important for learning these difficult morphological aspects of language. In fact, when you use the  system, it doesn't help you, it harms you," Finn says.
Still unresolved is the question of whether adults can overcome this language-learning obstacle. Finn says she does not have a good answer yet, but she is now testing the effects of "turning off" the adult prefrontal cortex using a technique called transcranial magnetic stimulation. Other interventions she plans to study include distracting the prefrontal cortex by forcing it to perform other tasks while the artificial language is heard, and treating subjects with drugs that impair activity in that brain region.

English May Have Retained Words From an Ice Age Language

If you’ve ever cringed when your parents said “groovy,” you’ll know that spoken language can have a brief shelf life. But frequently used words can persist for generations, even millennia, and similar sounds and meanings often turn up in very different languages. The existence of these shared words, or cognates, has led some linguists to suggest that seemingly unrelated language families can be traced back to a common ancestor. Now, a new statistical approach suggests that peoples from Alaska to Europe may share a linguistic forebear dating as far back as the end of the Ice Age, about 15,000 years ago.

“Historical linguists study language evolution using cognates the way biologists use genes,” explains Mark Pagel, an evolutionary theorist at the University of Reading in the United Kingdom. For example, although about 50% of French and English words derive from a common ancestor (like “mère” and “mother”), with English and German the rate is closer to 70%—indicating that while all three languages are related, English and German have a more recent common ancestor. In the same vein, while humans, chimpanzees, and gorillas have common genes, the fact that humans share almost 99% of their DNA with chimps suggests that these two primate lineages split apart more recently.
Because words don’t have DNA, researchers use cognates found in different languages today to reconstruct the ancestral “protowords.” Historical linguists have observed that over time, the sounds of words tend to change in regular patterns. For example, the p sound frequently changes to f, and the t sound to th—suggesting that the Latin word pater is, well, the father of the English word father. Linguists use these known rules to work backward in time, making a best guess at how the protoword sounded. They also track the rate at which words change. Using these phylogenetic principles, some researchers have dated many common words as far back as 9000 years ago. The ancestral language known as Proto-Indo-European, for example, gave rise to languages including Hindi, Russian, French, English, and Gaelic.
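As a toy illustration of how such correspondences run (real sound laws are context-sensitive, so naive character substitution is only a sketch), the two rules just mentioned already carry the Latin word pater all the way to English “father”:

```python
# Toy sound-correspondence demo: apply p -> f and t -> th to a Latin word.
# Real historical reconstruction uses context-sensitive rules; this naive
# character-level substitution is only an illustration.
CORRESPONDENCES = [("p", "f"), ("t", "th")]

def apply_sound_laws(word):
    for old, new in CORRESPONDENCES:
        word = word.replace(old, new)
    return word

print(apply_sound_laws("pater"))  # -> 'father'
```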
Some researchers, including Pagel, believe that the world’s languages are united by even older superfamilies, but this view is hotly contested. Skeptics feel that even if language families were related, words suffer from too much erosion, both in terms of sound and meaning, to be reliably traced back further than 9000 or 10,000 years, and that the similarities of many cognates may be pure chance. What was missing, Pagel says, was an objective method of analysis.
Pagel and his co-workers took a first step by building a statistical model based on Indo-European cognates. Incorporating only the frequency of a word’s use and its part of speech (noun, verb, numeral, etc.)—and ignoring its sound—the model could predict how long the word persisted through time. Reporting in Nature in 2007, they found that most words have about a 50% chance of being replaced by a completely different word every 2000 to 4000 years. Thus the Proto-Indo-European wata, winding its way through wasser in German, water in English, and voda in Russian, became eau in French. But some words, including I, you, here, how, not, and two, are replaced only once every 10,000 or even 20,000 years.
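Those replacement rates behave like half-lives, which makes it easy to see why only the most stable words could survive from the end of the Ice Age. The quick Python calculation below assumes a word has a 50% chance of replacement per half-life; the exponential-decay framing is this sketch's simplification, not the paper's exact model.

```python
# Back-of-the-envelope survival odds: if a word has a 50% chance of being
# replaced every H years (its "half-life"), the chance it survives T years
# unchanged is 0.5 ** (T / H). This exponential framing is a simplification.
def survival_probability(years, half_life):
    return 0.5 ** (years / half_life)

T = 15_000  # roughly the end of the Ice Age
for half_life in (2_000, 10_000, 20_000):
    p = survival_probability(T, half_life)
    print(f"half-life {half_life:>6} years: P(survive {T} years) = {p:.3f}")
# Ordinary words (half-life ~2,000 years) almost never last (P ~ 0.006);
# ultra-stable words like "I" and "two" plausibly do (P ~ 0.6).
```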
The new study, appearing today in the Proceedings of the National Academy of Sciences, makes an even bolder statement. The researchers broadened the hunt to cognates from seven major language families, including Indo-European, Eskimo, Altaic (comprising many Oriental languages), and Chukchi-Kamchatkan (a group of non-Russian languages around Siberia), which have been proposed to form an ancient superfamily dubbed Eurasiatic. Again, using only the word’s frequency and part of speech, the model successfully predicted that a core group of about 23 very common words, used about once per 1000 words in everyday speech, not only persists within each language group, but also sounds similar to the corresponding words in other families. The word thou, for example, has similar sound and meaning among all seven language families. Cognates include te or tu in Indo-European languages, t`i in proto-Altaic, and turi in proto-Chukchi-Kamchatkan. The words not, that, we, who, and give were cognates in five families, and nouns and verbs including mother, hand, fire, ashes, worm, hear, and pull were shared by four. Going by the rate of change of these cognates, the model suggests that these words have remained in a similar form since about 14,500 years ago, thus supporting the existence of an ancient Eurasiatic language and its now far-flung descendants.
“The model hints at a group of people living somewhere in Southern Europe as the glaciers were receding, speaking a language that might resemble those spoken today,” Pagel says. “It’s astonishing that spoken language can be transmitted through millennia with enough fidelity to give us information about our early history.”
Whether the findings will sway the skeptics is another question, according to William Croft, a linguist at the University of New Mexico, Albuquerque. The use of methods from evolutionary biology makes the Eurasiatic superfamily more plausible, says Croft, who is more sympathetic than many to the idea. “It probably won’t convince most historical linguists to accept the Eurasiatic hypothesis, but their resistance may soften somewhat.”

_______________________________________________________
This story provided by ScienceNOW, the daily online news service of the journal Science.

Are Gender-Neutral Pronouns Actually Doomed?

Some linguists say English, flexible as it is, isn't built for gender-neutral pronouns. Others say there's a good chance "they" could take flight, given the right visibility and support.

Baron, a professor of linguistics at the University of Illinois, has been monitoring the development of epicene—that is, gender-neutral, third-person singular pronouns—since the 1986 publication of his book Grammar and Gender. He keeps a list tracking the introduction of new epicene pronouns in English and has counted dozens, with the first documented in 1850—most of those being proposed by writers who took grammatical issue with, say, the singular “they.”
“They were the ones I found from the 19th century, when a rationale was given for them, it was a grammatical one rather than an issue of social equality or social justice,” Baron says.
I got in touch with Baron this summer after a heated argument with my friend Eric. It was right after “ougate”—a minor flap in which writer s.e. smith, who identifies as genderqueer and prefers the pronoun “ou,” was misgendered by Gawker writer Hamilton Nolan; the ensuing correction spurred a minor tizzy after which smith and Nolan both moved on.
Eric's a grad student in linguistics, wrapping up his dissertation on Balkan languages as I write this, and his argument was a little more specific: Prescribing a new pronoun for speakers of a language to adopt is an effort that's not all that likely to succeed. Some languages don't gender personal pronouns, but English does, and has for so long that reversing the trend seems completely impracticable.
I grant that as an English speaker, gender is inextricably tied to how I want to talk about people and even animals (why do so many people, by the way, refer to cats as “she”?). For years, I despised the numeric inconsistency of the singular “they” in writing (though I used it all the time in conversation), and usually struck it when editing others' work.
But I'm also deeply skeptical of claims that humans or speakers of a given language will inevitably think about gender in a certain way—or what languages are intrinsically built to do. They strike me nearly the same way as arguments using evolutionary psychology to bolster rigid gender roles—though I wonder if the latter flies because most people don't know enough about primitive humans to argue that primitive men probably didn't use the pre-agricultural equivalent of sports cars and expensive briefcases to lure primitive ladies into their primitive caves. Even non-linguists—say, every high school kid trying to figure out how French nouns are gendered—know there's remarkable diversity in the way living languages handle gender.
And anyway, there's the theoretical notion of how pronouns ought to work in languages, and then there's the practice, which is more a matter of etiquette than argument about what a language will “naturally” do.
Clouds Haberberg, a social worker in the United Kingdom who identifies as gender neutral, came out asking to be referred to as “they,” and while most friends were supportive and some merely confused, a few argued that singular “they” is ungrammatical and refused to use it.
“This stings, because my gender identity is not semantic to me,” Haberberg writes, adding that they are not out at work, because explaining gender-neutral identity would get too complicated, so they get misgendered all the time. “There's not much I can do about it until non-binary genders gain more recognition in mainstream society—which, I feel confident saying, is not imminent in the U.K. right now—so I am learning to tune it out. It hurts, but what else can I do?”
“I think the bulk of the population is just so locked into gender binarism that they can't get their heads around anything else. It just discombobulates them,” says Sally McConnell-Ginet, a professor emerita at Cornell, whose research has focused on the intersection of gender and language. “They're not at all at ease around this.”
BARON ARGUES THAT PRONOUNS are the most conservative part of speech in English, and that speakers are incredibly slow to adopt new ones broadly. The most recent one he counts is “its,” which appears in some of Shakespeare's work (though Shakespeare uses other words, including “his,” to mean precisely the same thing), but not in the King James Bible.
Baron's essay, “The Epicene Pronoun: The Word That Failed,” first appeared in Grammar and Gender and appears in a truncated form online, where he also collects news items related to gender-neutral pronouns. If the first attempts at creating an epicene pronoun came from nitpicky writers and grammarians, it's only been fairly recently—say, the 1950s and ‘60s—that writers started proposing epicene pronouns as an argument for greater inclusiveness. Feminists trying to shift away from the generic “he” were among the first; transgender and genderqueer writers came later. Baron finds broad use of the Spivak pronouns—“e,” “eim” and “eir,” coined by mathematician Michael Spivak—in transgender forums online, and in science fiction, but hasn't found an instance of a new epicene pronoun gaining traction in English.
Lal Zimman, a visiting linguistics professor at Reed College who focuses on patterns in the speech of LGBT people, sees it differently. Zimman tracks regional variants on the plural “you” (the Southern “y'all” and the Pittsburgher “yinz,” for instance) as examples of newer pronouns created to serve a purpose other English words don't, and notes that English used to have more pronouns than it currently does. There are even regionally specific instances of gender-neutral pronouns: in Baltimore, at least a few years ago, “yo” was gaining traction.
Zimman agrees that language doesn't really work as a top-down system, with authoritative sources dictating how speakers should use it. But social and political change often does have a major effect on at least formal writing and, more slowly, speech. For instance, Zimman says, the use of the generic “he” wasn't an accident, but the result of an act by the British parliament that ordered that official documents be edited to use it. Prior to that, use of the singular “they” was common in formal writing.
“Their reasoning was explicitly that men are better than women,” Zimman says. “Later on, we kind of viewed it as a natural part of the language, but actually it's something where a major change took place.”
So, Zimman argues, it may not be fair to say pronouns are intrinsically more conservative. “If pronouns are slow to change, I would attribute that to social processes rather than linguistic ones,” he says.
And of course, even binary transgender identities aren't typically handled well in mainstream formal writing: Most media style guides specify that people should be referred to by the name and pronoun with which they publicly identify, but major media outlets balked nonetheless at referring to Chelsea Manning as “she” after her coming-out statement was released—at least until enough people yelled at them on the Internet.
Zimman notes that in the first days after Manning's statement was released, different reporters for the same media outlet referred to Manning differently, sometimes minutes apart within the same newscast, likely because reporters on the military desk simply have less experience reporting on LGBT issues.
Changing one's informal language can be more challenging, even when the situation is cut and dried: My best friend died in May, and I still catch myself referring to him in the present tense, for reasons no more advanced or defensible than habit. A genderqueer friend who came out this summer admits that they still sometimes slip up when signing emails. And about 10 years ago, my mom was a long-term substitute at a rural Idaho high school that had a lot of Asian exchange students who went by Anglophone names because school administrators had told them Americans would never be able to pronounce those given to them. “We can try,” my mom said, and asked how they said their names. The administrators had a point—many Asian languages have sounds native English speakers can't hear or just have a hard time pronouncing—but their rightness mattered less to the kids than mom's kindness, however clumsy. Her students cried when they heard their real names for the first time in months.
EARLIER THIS YEAR, THE Swedish national encyclopedia added the gender-neutral “hen”, which has been used in various contexts there since the 1990s after being coined by a linguist in the late ‘60s. McConnell-Ginet says she's not sure how that will play out in terms of broad, informal adoption—and notes Swedish is spoken by a smaller, more homogeneous group of speakers than English, which is spoken by multiple cultures and multiple groups within cultures. McConnell-Ginet and Zimman both point out that other forms of gender-based language planning have been somewhat successful, such as the use of “Ms.,” which was scoffed at when it was reintroduced in the 1970s. While it hasn't been adopted across the board, it's still more often than not the default courtesy title given to women (or junk-mail recipients assumed to be women). McConnell-Ginet says that her dissertation is riddled with the generic “he,” and example dialogues using male names only.
“I look back and I think, 'Oh my god, how could I have done that?,'” she says. “At the time I would certainly have embraced gender egalitarianism. I didn't see it as connected, that they weren't just these dead examples.”
Now McConnell-Ginet has not only dropped the generic “he,” but is more likely than not to use the singular “they” in writing. She thinks the singular “they” is the epicene pronoun most likely to take off, since it's already in the language, and it would be easier to stretch its use than try to get a new word to take off.
“It's sort of the default, certainly in speech, and has been for centuries. It's perfectly acceptable. You see it more and more in writing,” Baron says. “But now you see it, I think people are a little more lax about it in writing. I use it all the time. I use it fully aware of what I'm doing. It just sounds better.” While there are still people who object—including students in a classroom setting—more and more academics have begun to accept the singular “they” in formal writing, he says.
McConnell-Ginet is also optimistic that as trans people become more visible worldwide, speech will become more inclusive, at least in receptive communities: “The increased visibility of trans people is going to change the practice. It just is.”

Linguistics Researcher Develops New System to Help Computers ‘Learn’ Natural Language

Linguists, computer scientists use supercomputers to improve natural language processing


Newswise — AUSTIN, Texas - For more than 50 years, linguists and computer scientists have tried to get computers to understand human language by programming semantics as software. Now, a University of Texas at Austin linguistics researcher, Katrin Erk, is using supercomputers to develop a new method for helping computers learn natural language.
Instead of hard-coding human logic or deciphering dictionaries to try to teach computers language, Erk decided to try a different tactic: feed computers a vast body of texts (which are a reflection of human knowledge) and use the implicit connections between the words to create a map of relationships.
“An intuition for me was that you could visualize the different meanings of a word as points in space,” says Erk, a professor of linguistics who is conducting her research at the Texas Advanced Computing Center. “You could think of them as sometimes far apart, like a battery charge and criminal charges, and sometimes close together, like criminal charges and accusations (“the newspaper published charges…”). The meaning of a word in a particular context is a point in this space. Then we don’t have to say how many senses a word has. Instead we say: ‘This use of the word is close to this usage in another sentence, but far away from the third use.’ ”
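Erk's “points in space” intuition is straightforward to make concrete: represent each use of a word as a numeric vector and measure closeness with cosine similarity. In the Python sketch below, the tiny hand-made vectors are purely illustrative; real systems derive them from the vast text collections she describes.

```python
# Minimal demo of word uses as points in space, with cosine similarity as
# the closeness measure. These 4-dimensional vectors are hand-made toys;
# real models learn them from enormous text corpora.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical context vectors for three uses of "charge".
battery_charge  = np.array([0.9, 0.1, 0.0, 0.2])  # electrical context
criminal_charge = np.array([0.1, 0.9, 0.8, 0.1])  # legal context
accusation      = np.array([0.2, 0.8, 0.9, 0.0])  # legal context

print(cosine(criminal_charge, accusation))     # ~0.99: close together
print(cosine(criminal_charge, battery_charge)) # ~0.18: far apart
```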
To create a model that can accurately recreate the intuitive ability to distinguish word meaning requires a lot of text and a lot of analytical horsepower.
“The lower end for this kind of research is a text collection of 100 million words,” she explains. “If you can give me a few billion words, I’d be much happier. But how can we process all of that information? That’s where supercomputers come in.”