psycholinguistics | Linguistics

Telegram channel psycholinguistics - Psycholinguistics

Here we share reviews and articles about Psycholinguistics and other relevant fields.

Psycholinguistics

#brain #hand #participants_brains #participant #object #cognitive #cognition #word #result #symbol #symbol_grounding #measuring #measure #measured #measurement #acrylic #broom #size #processed #process #processing #real #verbally #verbal #meaning #onishi_worked
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2022/09/220915104800.htm


Psycholinguistics

👉🏽 Exposure to accents helps children learn words

• University of Freiburg study on vocabulary acquisition uses novel game-based design
• Study results: Children of primary school age can benefit from long-term experience with multiple accents when learning words in unfamiliar accents from other children
• Bilingualism, on the other hand, did not lead to corresponding effects in vocabulary learning
@psycholinguistics
Card game "Spot It!" as the foundation
"Until now, there was a lack of studies on the influence of regional and foreign accents on children's learning of new words," says Hanulíková. To fill this gap, the researchers had 88 Freiburg children aged seven to eleven play a computer game based on the popular card game "Spot It!," which is known as "Dobble" in Germany. In the game, two identical objects on different playing cards have to be discovered and named as quickly as possible. For the study, the children played the game on the computer with virtual peers. They spoke either standard German or German with a Swiss or Hebrew accent. The game included six terms that are usually unknown to children of elementary school age.
Regional accents help
All 88 children who participated in the study were German speakers, some of them bilingual or multilingual. The researchers also asked how often per week each child hears regional and foreign accents. The evaluation of the experiment showed that the children benefited from long-term experience with different accents: children with this experience found it easier to learn unfamiliar words from other children who spoke unfamiliar accents in this virtual game situation.
This effect occurred especially when children heard both regional and foreign accents in their daily lives. Experience with regional accents alone also predicted learning, and children who had experience with foreign accents showed similar effects, at least as a trend. Bilingualism had no corresponding effect.
Experiment resembles natural learning
Further studies are thus needed to investigate in more detail which types of experience lead to which effects in children's vocabulary acquisition -- and how these might differ from the learning of new words by adults, says Hanulíková. The study's newly developed, game-based design is a particularly suitable tool for this purpose, she says. "The children learn from other children while playing, not from adults, the latter being the focus of almost all studies to date. In addition, children are required to say and use these words in interaction, not just to passively recognize them. In this way, the experimental design resembles natural learning in everyday life."
Notes:
#accent #game #children_learning #effect #life #aged #age #experiment #experience #learn_unfamiliar #words_says #acquisition #suitable #german
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2022/09/220929132458.htm

Psycholinguistics

📚 Karin van Nispen, Kazuki Sekine, Ineke van der Meulen, Basil C. Preisig. Gesture in the eye of the beholder: An eye-tracking study on factors determining the attention for gestures produced by people with aphasia. Neuropsychologia, 2022; 108315 DOI: 10.1016/j.neuropsychologia.2022.108315
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽


Psycholinguistics

👉🏽 Connectivity of language areas unique in the human brain

Neuroscientists have gained new insight into how our brain evolved into a language-ready brain. Compared to chimpanzee brains, the pattern of connections of language areas in our brain has expanded more than previously thought.
@psycholinguistics
"At first glance, the brains of humans and chimpanzees look very much alike. The perplexing difference between them and us is that we humans communicate using language, whereas non-human primates do not," says co-first author Joanna Sierpowska. Understanding what in the brain could have enabled this unique ability has inspired researchers for years. However, up to now, their attention was mainly drawn towards a particular nerve tract connecting frontal and temporal lobes called arcuate fasciculus, which besides showing significant differences between species, is well-known to be involved in language function.
"We wanted to shift our focus towards the connectivity of two cortical areas located in the temporal lobe, which are equally important for our ability to use language," says Sierpowska.
Imaging white matter
To study the differences between the human and chimpanzee brain, the researchers used scans of 50 human brains and of 29 chimpanzee brains, the latter acquired in a similar way to the human scans but under well-controlled anesthesia and as part of the animals' routine veterinary check-ups. More specifically, they used a technique called diffusion-weighted imaging (DWI), which images white matter, the nerve pathways that connect brain areas.
Using these images, they explored the connectivity of two language-related brain hubs (the anterior and posterior middle areas of the temporal lobe), comparing them between the species. "In humans, both of these areas are considered crucial for learning, using and understanding language and harbor numerous white matter pathways," says Sierpowska. "It is also known that damage to these brain areas has detrimental consequences for language function. However, until now, the question of whether their pattern of connections is unique to humans remained unanswered."
New connections in human brain
The researchers found that while the connectivity of the posterior middle temporal areas in chimpanzees is confined mainly to the temporal lobe, in humans a new connection towards the frontal and parietal lobes emerged using the arcuate fasciculus as an anatomical avenue. In fact, changes to both human language areas include a suite of expansions to connectivity within the temporal lobes. "The results of our study imply that the arcuate fasciculus surely is not the only driver of evolutionary changes preparing the brain for a full-fledged language capacity," says co-author Vitoria Piai.
"Our findings are purely anatomical, so it is hard to say anything about brain function in this context," says Piai. "But the fact that this pattern of connections is so unique for us humans suggests that it may be a crucial aspect of brain organization enabling our distinctive language abilities."
#language #brain #human #area #lobe #say #lobes_called #change #connecting #connectivity #connect #connection #imaging #image #sierpowska #anatomical #mainly #crucial #veterinary #check #significant_differences #difference #nerve
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2022/07/220704180914.htm

Psycholinguistics

📚 Irene de la Cruz-Pavía, Gesche Westphal-Fitch, W. Tecumseh Fitch, Judit Gervain. Seven-month-old infants detect symmetrical structures in multi-featured abstract visual patterns. PLOS ONE, 2022; 17 (5): e0266938 DOI: 10.1371/journal.pone.0266938
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽


Psycholinguistics

👉🏽 Our brain is a prediction machine that is always active

Our brain works a bit like the autocomplete function on your phone -- it is constantly trying to guess the next word when we are listening to a book, reading or holding a conversation. Unlike speech recognition computers, our brain constantly makes predictions at many different levels, from meaning and grammar to specific speech sounds.
@psycholinguistics
This is in line with a recent theory on how our brain works: it is a prediction machine, which continuously compares sensory information that we pick up (such as images, sounds and language) with internal predictions. "This theoretical idea is extremely popular in neuroscience, but the existing evidence for it is often indirect and restricted to artificial situations," says lead author Micha Heilbron. "I would really like to understand precisely how this works and test it in different situations."
Brain research into this phenomenon is usually done in an artificial setting, Heilbron notes. To evoke predictions, participants are asked to stare at a single pattern of moving dots for half an hour, or to listen to simple sound patterns like 'beep beep boop, beep beep boop, ...'. "Studies of this kind do in fact reveal that our brain can make predictions, but not that this always happens in the complexity of everyday life as well. We are trying to take it out of the lab setting. We are studying the same type of phenomenon, how the brain deals with unexpected information, but then in natural situations that are much less predictable."
Hemingway and Holmes
The researchers analysed the brain activity of people listening to stories by Hemingway or about Sherlock Holmes. At the same time, they analysed the texts of the books using computer models, so-called deep neural networks. This way, they were able to calculate for each word how unpredictable it was.
For each word or sound, the brain makes detailed statistical expectations and turns out to be extremely sensitive to the degree of unpredictability: the brain response is stronger whenever a word is unexpected in the context. "By itself, this is not very surprising: after all, everyone knows that you can sometimes predict upcoming language. For example, your brain sometimes automatically 'fills in the blank' and mentally finishes someone else's sentences, for instance if they start to speak very slowly, stutter or are unable to think of a word. But what we have shown here is that this happens continuously. Our brain is constantly guessing at words; the predictive machinery is always turned on."
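To make "how unpredictable each word is" concrete: in language-model terms this is the word's surprisal, its negative log probability given the preceding context. A minimal sketch, assuming the Hugging Face transformers library and the small GPT-2 model as a stand-in for the study's networks (the example sentence is invented):

```python
# Minimal sketch: per-word surprisal (-log2 probability) from a pretrained
# language model. GPT-2 is only a stand-in for the deep neural networks
# mentioned in the text, and the sentence is an arbitrary example.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The old man pulled the fish slowly toward the boat"
enc = tokenizer(text, return_tensors="pt")
ids = enc["input_ids"][0]

with torch.no_grad():
    logits = model(**enc).logits[0]            # (sequence length, vocabulary)
log_probs = torch.log_softmax(logits, dim=-1)

# Probability of each token given everything that came before it.
for pos in range(1, len(ids)):
    token = tokenizer.decode([ids[pos].item()])
    surprisal = -log_probs[pos - 1, ids[pos]].item() / math.log(2)  # in bits
    print(f"{token!r:>10}  surprisal = {surprisal:5.2f} bits")
```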
More than software
"In fact, our brain does something comparable to speech recognition software. Speech recognisers using artificial intelligence are also constantly making predictions and are allowing themselves to be guided by their expectations, just like the autocomplete function on your phone. Nevertheless, we observed a big difference: brains predict not only words, but make predictions on many different levels, from abstract meaning and grammar to specific sounds."
There is good reason for the ongoing interest from tech companies who would like to use new insights of this kind to build better language and image recognition software, for example. But these sorts of applications are not the main aim for Heilbron. "I would really like to understand how our predictive machinery works at a fundamental level. I'm now working with the same research setup, but for visual and auditive perceptions, like music."
#prediction #predictable #predict #predictive #brain #difference_brains #situation #like #different #make #making #pattern #new #extremely #research #researcher #sound #heilbron #setting #beep #speech_recognition #artificial #deep_neural #information #continuously
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2022/08/220804102557.htm

Psycholinguistics

📚 Brandon G Jacques, Zoran Tiganj, Aakash Sarkar, Marc Howard, Per Sederberg. A deep convolutional neural network that is invariant to time rescaling. Proceedings of the 39th International Conference on Machine Learning, PMLR, 2022; 162: 9729-9738 [abstract]
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽

Psycholinguistics

Essentially, programmers input a multitude of different voices using different words at different speeds and train the large networks through a process called backpropagation. The programmers know the responses they want to achieve, so they keep feeding the continuously refined information back in a loop. The AI then begins to give appropriate weight to aspects of the input that will result in accurate responses. The sounds become usable characters of text.
"You do this many millions of times," Sederberg said.
While the training data sets that serve as the inputs have improved, as have computational speeds, the process is still less than ideal as programmers add more layers to detect greater nuances and complexity -- so-called "deep" or "convolutional" learning.
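As a rough illustration of that training loop (not any company's actual pipeline), here is a minimal PyTorch sketch with made-up toy data standing in for recorded voices; the point is only the feed-forward / compare / backpropagate / update cycle described above:

```python
# Minimal sketch of the training loop described above: feed inputs through a
# network, compare outputs with the desired responses, and push the error
# back through the weights (backpropagation). The tiny classifier and random
# data are placeholders, not a real speech recognizer.
import torch
import torch.nn as nn

torch.manual_seed(0)
features, n_classes = 40, 10             # e.g. 40 spectral features, 10 words
X = torch.randn(512, features)           # stand-in for many recorded voices
y = torch.randint(0, n_classes, (512,))  # the responses programmers want

model = nn.Sequential(nn.Linear(features, 64), nn.ReLU(),
                      nn.Linear(64, n_classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):                 # "you do this many millions of times"
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)          # how far off the network is
    loss.backward()                      # backpropagate the error
    optimizer.step()                     # nudge the weights toward better answers
```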
More than 7,000 languages are spoken in the world today. Variations arise with accents and dialects, deeper or higher voices -- and of course faster or slower speech. As competitors create better products, at every step, a computer has to process the information.
That has real-world consequences for the environment. In 2019, a study found that the carbon dioxide emissions from the energy required in the training of a single large deep-learning model equated to the lifetime footprint of five cars.
Three years later, the data sets and neural networks have only continued to grow.
How the Brain Really Hears Speech
The late Howard Eichenbaum of Boston University coined the term "time cells," the phenomenon upon which this new AI research is constructed. Neuroscientists studying time cells in mice, and then humans, demonstrated that there are spikes in neural activity when the brain interprets time-based input, such as sound. Residing in the hippocampus and other parts of the brain, these individual neurons capture specific intervals -- data points that the brain reviews and interprets in relationship. The cells reside alongside so-called "place cells" that help us form mental maps.
Time cells help the brain create a unified understanding of sound, no matter how fast or slow the information arrives.
"If I say 'oooooooc-toooooo-pussssssss,' you've probably never heard someone say 'octopus' at that speed before, and yet you can understand it because the way your brain is processing that information is called 'scale invariant,' Sederberg said. "What it basically means is if you've heard that and learned to decode that information at one scale, if that information now comes in a little faster or a little slower, or even a lot slower, you'll still get it."
The main exception to the rule, he said, is information that comes in hyper-fast. That data will not always translate. "You lose bits of information," he said.
Cognitive researcher Marc Howard's lab at Boston University continues to build on the time cell discovery. A collaborator with Sederberg for over 20 years, Howard studies how human beings understand the events of their lives. He then converts that understanding to math.
Howard's equation describing auditory memory involves a timeline. The timeline is built using time cells firing in sequence. Critically, the equation predicts that the timeline blurs -- and in a particular way -- as sound moves toward the past. That's because the brain's memory of an event grows less precise with time.
"So there's a specific pattern of firing that codes for what happened for a specific time in the past, and information gets fuzzier and fuzzier the farther in the past it goes," Sederberg said. "The cool thing is Marc and a post-doc going through Marc's lab figured out mathematically how this should look. Then neuroscientists started finding evidence for it in the brain."
Time adds context to sounds, and that's part of what gives what's spoken to us meaning. Howard said the math neatly boils down.
"Time cells in the brain seem to obey that equation," Howard said.
UVA Codes the Voice Decoder


Psycholinguistics

Perhaps this will help us to understand why people voice misinformation, and how we might better identify and respond to it.
#voice #self #different #social #experiment #experience #people #studied #study #researcher #research #volunteer #hiroshi #illness #connection #embody #pitch #pandemic #changed #change #agency #strong #person
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2022/07/220705090707.htm


Psycholinguistics

📚 Sae Onishi, Kunihito Tobita, Shogo Makioka. Hand constraint reduces brain activity and affects the speed of verbal responses on semantic tasks. Scientific Reports, 2022; 12 (1) DOI: 10.1038/s41598-022-17702-1
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽

Psycholinguistics

👉🏽 Talk with your hands? You might think with them too!

Scientists observed how the brain responds to words representing hand-manipulable objects, when a participant's hands were either free to move or restrained. They showed that brain activity in response to hand-manipulable words was significantly reduced by hand restraints. Verbal responses were also affected by hand constraints. Their results support the idea of embodied cognition, which proposes that the meaning of words is represented through interactions between the body and the environment. Understanding how words are processed through embodied cognition could also be useful for artificial intelligence to learn the meaning of objects.
@psycholinguistics
Words are expressed in relation to other words; a "cup," for example, can be a "container, made of glass, used for drinking." However, you can only use a cup if you understand that to drink from a cup of water, you hold it in your hand and bring it to your mouth, or that if you drop the cup, it will smash on the floor. Without understanding this, it would be difficult to create a robot that can handle a real cup. In artificial intelligence research, this issue is known as the symbol grounding problem: how to map symbols onto the real world.
How do humans achieve symbol grounding? Cognitive psychology and cognitive science propose the concept of embodied cognition, where objects are given meaning through interactions with the body and the environment.
To test embodied cognition, the researchers conducted experiments to see how the participants' brains responded to words that describe objects that can be manipulated by hand, when the participants' hands could move freely compared to when they were restrained.
"It was very difficult to establish a method for measuring and analyzing brain activity. The first author, Ms. Sae Onishi, worked persistently to come up with a task, in a way that we were able to measure brain activity with sufficient accuracy," Professor Makioka explained.
In the experiment, two words such as "cup" and "broom" were presented to participants on a screen. They were asked to compare the relative sizes of the objects those words represented and to verbally answer which object was larger -- in this case, "broom." Comparisons were made between the words, describing two types of objects, hand-manipulable objects, such as "cup" or "broom" and nonmanipulable objects, such as "building" or "lamppost," to observe how each type was processed.
During the tests, the participants placed their hands on a desk, where they were either free or restrained by a transparent acrylic plate. When the two words were presented on the screen, to answer which one represented a larger object, the participants needed to think of both objects and compare their sizes, forcing them to process each word's meaning.
Brain activity was measured with functional near-infrared spectroscopy (fNIRS), which has the advantage of taking measurements without imposing further physical constraints. The measurements focused on the interparietal sulcus and the inferior parietal lobule (supramarginal gyrus and angular gyrus) of the left brain, which are responsible for semantic processing related to tools. The speed of the verbal response was measured to determine how quickly the participant answered after the words appeared on the screen.
The results showed that the activity of the left brain in response to hand-manipulable objects was significantly reduced by hand restraints. Verbal responses were also affected by hand constraints. These results indicate that constraining hand movement affects the processing of object-meaning, which supports the idea of embodied cognition. These results suggest that the idea of embodied cognition could also be effective for artificial intelligence to learn the meaning of objects. The paper was published in Scientific Reports.
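Behaviorally, a design like this comes down to a 2 x 2 comparison of verbal response times: word type (hand-manipulable vs. non-manipulable) crossed with hand state (free vs. restrained). The sketch below uses fabricated numbers purely to show the shape of that comparison; it is not the study's data or analysis code.

```python
# Illustrative analysis sketch for a 2 x 2 design like the one described:
# word type (manipulable vs. non-manipulable) x hands (free vs. restrained).
# The response times below are fabricated placeholders, not study data.
import numpy as np

rng = np.random.default_rng(0)
conditions = {
    ("manipulable", "free"):           rng.normal(1.20, 0.15, 40),
    ("manipulable", "restrained"):     rng.normal(1.30, 0.15, 40),
    ("non-manipulable", "free"):       rng.normal(1.25, 0.15, 40),
    ("non-manipulable", "restrained"): rng.normal(1.26, 0.15, 40),
}

for (word_type, hands), rts in conditions.items():
    print(f"{word_type:>16} / {hands:<10} mean RT = {rts.mean():.3f} s")

# The comparison of interest: restraint cost for manipulable vs. other words.
cost = lambda w: conditions[(w, "restrained")].mean() - conditions[(w, "free")].mean()
print(f"restraint cost, manipulable: {cost('manipulable'):+.3f} s; "
      f"non-manipulable: {cost('non-manipulable'):+.3f} s")
```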

Psycholinguistics

📚 Helena Levy, Adriana Hanulíková. Spot It and Learn It! Word Learning in Virtual Peer‐Group Interactions Using a Novel Paradigm for School‐Aged Children. Language Learning, 2022; DOI: 10.1111/lang.12520
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽


Psycholinguistics

👉🏽 Gestures can improve understanding in language disorders

When words fail, gestures can help to get the message across -- especially for people who have a language disorder. An international research team has now shown that listeners attend the gestures of people with aphasia more often and for much longer than previously thought. This has implications for the use of gestures in speech therapy.
@psycholinguistics
People who suffer from an acquired language disorder due to a brain injury -- for example after a stroke, traumatic brain injury or brain tumor -- often have difficulties communicating with others. Previous research on aphasia indicates that these patients often try to express their needs using hand gestures. It was previously assumed that conversation partners pay relatively little attention to such non-verbal forms of communication -- but this assumption was based on research involving participants without language disorders.
Communicating with gestures
A new study from the University of Zurich, carried out together with researchers from the Netherlands and Japan, looked at whether gestures receive more attention if the verbal communication is impeded by aphasia. The researchers showed healthy volunteers video clips in which people with and without speech disorders described an accident and a shopping experience. As the participants watched the video clips, their eye movements were recorded.
Focus of attention shifts
"Our results show that when people have very severe speaking difficulties and produce less informative speech, their conversation partner is more likely to pay attention to their hand movements and to look longer at their gestures," says Basil Preisig of the Department of Comparative Language Science at UZH. In people who have no limitations in verbal production, hand gestures are granted less attention. Thus, it seems that listeners shift their attention when the speaker has a speech impediment and focus more on the speaker's nonverbal information provided through gestures. "For people with aphasia, it may be worth using gestures more in order to be better understood by the other person," says Preisig.
Using gestures as a specific tool in therapy
The present study not only illustrates the importance of gestures in communication, but also reinforces their relevance in speech rehabilitation. "Individuals with aphasia should be encouraged in therapy to use all available forms of communication. This includes increased use of gestures. In addition, their family and friends need to learn about hand gestures to improve communication," Preisig believes.
#communicating #communication #speech #language_disorder #brain #verbal #disorder #say #preisig #informative #information #research #researcher #increased #looked #look #study #attention #pay #hand
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2022/07/220707141908.htm

Psycholinguistics

📚 Joanna Sierpowska, Katherine L. Bryant, Nikki Janssen, Guilherme Blazquez Freches, Manon Römkens, Margot Mangnus, Rogier B. Mars, Vitoria Piai. Comparing human and chimpanzee temporal lobe neuroanatomy reveals modifications to human language hubs beyond the frontotemporal arcuate fasciculus. Proceedings of the National Academy of Sciences, 2022; 119 (28) DOI: 10.1073/pnas.2118295119
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽


Psycholinguistics

👉🏽 Seven-month-old babies already have a sense of symmetry

A collaborative study examined the spontaneous looking patterns of 7-month-old babies when presented with mosaic-like sequences with a symmetrical and asymmetrical structure. The results show that these babies quickly detect whether a mosaic has a symmetrical structure, suggesting a robust, automatic ability to extract structure from complex images.
@psycholinguistics
The group's Ikerbasque research fellow Irene de la Cruz-Pavía conducted a study, in collaboration with University of Padua researcher Judit Gervain, that was recently published in the journal PLOS ONE; it explores the ability of 7-month-old infants to perceive structural symmetry in abstract, mosaic-like visual patterns. The research was carried out at the University of Paris. "We examined the spontaneous looking patterns of almost 100 infants when presented with mosaic-like sequences displaying symmetrical and asymmetrical structures," the researchers explained.
These mosaics comprised two categories of square tiles (A and B) that differed in terms of their colour scheme and internal shape. These tiles were arranged to create mosaics with symmetrical (e.g. ABA, ABABA) or asymmetrical (e.g. AAB, AABBA) structures. The study found that the infants "discriminated between structurally symmetrical and asymmetrical mosaics, and that the length of the sequence (3 or 5 tiles) or the level of symmetry did not significantly modulate their behaviour." These results suggest that infants quickly detect structural symmetry in complex visual patterns: "Babies as young as 7 months have a robust, automatic ability to detect that a structure is symmetrical. This ability coincides with those found in studies we conducted using other stimuli, such as sign language or speech, demonstrating that babies are simply very good at detecting structures and regularities," said the researcher in the UPV/EHU's Department of Linguistics and Basque Studies.
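The structural manipulation itself is simple to state in code: a mosaic counts as structurally symmetrical here when its sequence of tile categories reads the same forwards and backwards. A small illustrative check using the A/B category labels from the text:

```python
# Tiny illustration of the structural manipulation: a mosaic's tile-category
# sequence (A/B) is structurally symmetrical when it is a palindrome.
def is_symmetrical(tiles: str) -> bool:
    return tiles == tiles[::-1]

for mosaic in ["ABA", "ABABA", "AAB", "AABBA"]:
    label = "symmetrical" if is_symmetrical(mosaic) else "asymmetrical"
    print(f"{mosaic:>5}: {label}")
# ABA, ABABA -> symmetrical; AAB, AABBA -> asymmetrical
```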
Ability of babies to extract structure and rules from various media
As the Ikerbasque research fellow pointed out, "the grammar of a language consists of the set of structures and rules of a language. I want to understand to what extent infants' abilities to extract structures, detect regularities and learn rules are specific to language or whether they are found in other areas." "We conducted this study using information that is visual but which is not language. With these mosaics, we were able to see how babies were capable of extracting structure from different media."
The researchers stress that this study allows them to better understand "these infants' fundamental skills, which will enable them to start initially with some of the more accessible parts of grammar and gradually build up to something as complex as the grammar of a language. What we want to understand is this: what are the fundamental abilities of babies when it comes to detecting structure?"
"We have many more questions to answer," they concluded. "In this study we were able to determine that babies are able to detect structures spontaneously and quickly. Now we want to understand when this ability begins, and the degree of detail with which they analyse that structure and what aspects of the mosaics allow them to detect its structure (the shape, the colour, both...)."
Additional information
This study was carried out in collaboration with the Integrative Neuroscience and Cognition Center (CNRS, University of Paris, France), the University of Vienna (Austria) and the University of Padua (Italy).
#structural #structure #structurally #researcher #mosaic #ability #infant #research_fellow #padua_researcher #mosaic_visual #language #pattern #study #fundamental #detect #detecting #center #cnrs #judit #regularity #automatic
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2022/08/220808162226.htm

Psycholinguistics

📚 Micha Heilbron, Kristijan Armeni, Jan-Mathijs Schoffelen, Peter Hagoort, Floris P. de Lange. A hierarchy of linguistic predictions during natural language comprehension. Proceedings of the National Academy of Sciences, 2022; 119 (32) DOI: 10.1073/pnas.2201968119
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽


Psycholinguistics

About five years ago, Sederberg and Howard identified that the AI field could benefit from such representations inspired by the brain. Working with Howard's lab and in consultation with Zoran Tiganj and colleagues at Indiana University, Sederberg's Computational Memory Lab began building and testing models.
Jacques made the big breakthrough about three years ago that helped him do the coding for the resulting proof of concept. The algorithm features a form of compression that can be unpacked as needed -- much the way a zip file on a computer works to compress and store large-size files. The machine only stores the "memory" of a sound at a resolution that will be useful later, saving storage space.
"Because the information is logarithmically compressed, it doesn't completely change the pattern when the input is scaled, it just shifts over," Sederberg said.
The AI training for SITHCon was compared to a pre-existing resource available free to researchers called a "temporal convolutional network." The goal was to take a network trained only to hear at specific speeds and have it generalize to speeds it had not been trained on.
The process started with a basic language -- Morse code, which uses long and short bursts of sound to represent dots and dashes -- and progressed to an open-source set of English speakers saying the numbers 1 through 9 for the input.
In the end, no further training was needed. Once the AI recognized the communication at one speed, it couldn't be fooled if a speaker strung out the words.
"We showed that SITHCon could generalize to speech scaled up or down in speed, whereas other models failed to decode information at speeds they didn't see at training," Jacques said.
Now UVA has decided to make its code available for free, in order to advance the knowledge. The team says the information should adapt for any neural network that translates voice.
"We're going to publish and release all the code because we believe in open science," Sederberg said. "The hope is that companies will see this, get really excited and say they would like to fund our continuing work. We've tapped into a fundamental way the brain processes information, combining power and efficiency, and we've only scratched the surface of what these AI models can do."
But knowing that they've built a better mousetrap, are the researchers worried at all about how the new technology might be used?
Sederberg said he's optimistic that AI that hears better will be approached ethically, as all technology should be in theory.
"Right now, these companies have been running into computational bottlenecks while trying to build more powerful and useful tools," he said. "You have to hope the positives outweigh the negatives. If you can offload more of your thought processes to computers, it will make us a more productive world, for better or for worse."
Jacques, a new father, said, "It's exciting to think our work may be giving birth to a new direction in AI."
#sederberg #howard #time #process_information #processing #process #network #research #researcher #cell #large #programmer #computational #computer #world #human_brain #auditory #data #model #called #speech #university #said #basic #basically #inspiration #inspired #new #neural_networks #deep #specific #problem #like #better #jacques #learning #learned #computing_power #speaker #voice #memory #different #current_artificial #human #science #working #work #equated #equation #hearing #hear #hears #existing #bit #constantly #response #security #professor #carbon #year #scaled #scale #residing #reside #lab #code #coding #uva #breakthrough #understanding #understand_words #slower #footprint #international #powerful
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2022/07/220720150551.htm

Psycholinguistics

👉🏽 Alexa and Siri, listen up! Teaching machines to really hear us

The implications of new AI voice research go beyond user experience to making AI more efficient, which could change the industry and significantly reduce carbon footprints.
@psycholinguistics
Your device will struggle to reiterate what you just said. It might supply a nonsensical response, or it might give you something close but still off -- like "toe pus." Gross!
The point is, Sederberg said, when it comes to receiving auditory signals like humans and other animals do -- despite all of the computing power dedicated to the task by such heavyweights as Google, Deep Mind, IBM and Microsoft -- current artificial intelligence remains a bit hard of hearing.
The outcomes can range from comical and mildly frustrating to downright alienating for those who have speech problems.
But using recent breakthroughs in neuroscience as a model, UVA collaborative research has made it possible to convert existing AI neural networks into technology that can truly hear us, no matter at what pace we speak.
The deep learning tool is called SITHCon, and by generalizing input, it can understand words spoken at different speeds than a network was trained on.
This new ability won't just change the end-user's experience; it has the potential to alter how artificial neural networks "think" -- allowing them to process information more efficiently. And that could change everything in an industry constantly looking to boost processing capability, minimize data storage and reduce AI's massive carbon footprint.
Sederberg, an associate professor of psychology who serves as the director of the Cognitive Science Program at UVA, collaborated with graduate student Brandon Jacques to program a working demo of the technology, in association with researchers at Boston University and Indiana University.
"We've demonstrated that we can decode speech, in particular scaled speech, better than any model we know of," said Jacques, who is first author on the paper.
Sederberg added, "We kind of view ourselves as a ragtag band of misfits. We solved this problem that the big crews at Google and Deep Mind and Apple didn't."
The breakthrough research was presented Tuesday at the high-profile International Conference on Machine Learning, or ICML, in Baltimore.
Current AI Training: Auditory Overload
For decades, but more so in the last 20 years, companies have built complex artificial neural networks into machines to try to mimic how the human brain recognizes a changing world. These programs don't just facilitate basic information retrieval and consumerism; they also specialize to predict the stock market, diagnose medical conditions and surveil for national security threats, among many other applications.
"At its core, we are trying to detect meaningful patterns in the world around us," Sederberg said. "Those patterns will help us make decisions on how to behave and how to align ourselves with our environment, so we can get as many rewards as possible."
Programmers used the brain as their initial inspiration for the technology, thus the name "neural networks."
"Early AI researchers took the basic properties of neurons and how they're connected to one another and recreated those with computer code," Sederberg said.
For complex problems like teaching machines to "hear" language, however, programmers unwittingly took a different path than how the brain actually works, he said. They failed to pivot based on developments in the understanding of neuroscience.
"The way these large companies deal with the problem is to throw computational resources at it," the professor explained. "So they make the neural networks bigger. A field that was originally inspired by the brain has turned into an engineering problem."

Psycholinguistics

📚 Ryu Ohata, Tomohisa Asai, Shu Imaizumi, Hiroshi Imamizu. I Hear My Voice; Therefore I Spoke: The Sense of Agency Over Speech Is Enhanced by Hearing One’s Own Voice. Psychological Science, 2022; 095679762110688 DOI: 10.1177/09567976211068880
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽

Psycholinguistics

👉🏽 Link between recognizing our voice and feeling in control

Being able to recognize our own voice is a critical factor for our sense of control over our speech, according to a new study. If people think they hear someone else's voice when they speak, they do not strongly feel that they caused the sound. This could be a clue to understanding the experience of people who live with auditory hallucinations and could help to improve online communication and virtual reality experiences.
@psycholinguistics
Have you ever heard a playback of your voice and been surprised by how you sounded? Part of our self-image is based on how we think we sound when we speak. This contributes to our sense of agency, or sense of control, over our actions, i.e., "I saw a result, therefore I felt that I did the action; my voice was heard, therefore I felt that I spoke."
A dysfunctional sense of agency over speech may be a cause of auditory hallucinations. This is a common symptom of schizophrenia, a mental illness characterized by distortions of reality which affects how a person thinks, feels and acts. People with auditory hallucinations may hear voices when they are alone or not speaking, and may find it difficult to identify their own voice when they do. Although some experiments have looked into people's sense of agency over their movements, until now sense of agency over speech has not been extensively studied.
Then-Project Researcher Ryu Ohata and Professor Hiroshi Imamizu, from the Graduate School of Humanities and Sociology at the University of Tokyo, and their team, recruited healthy volunteers in Japan to help them investigate this through two linked psychological experiments. The volunteers spoke simple sounds and then reacted to hearing a playback of their voices under different conditions -- normal, with a raised pitch, with a lowered pitch, and after various time delays.
"This research investigated the significance of self-voice in the sense of agency, which previous studies have never sought out," said Ohata. "Our results demonstrate that hearing one's own voice is a critical factor to increased self-agency over speech. In other words, we do not strongly feel that 'I' am generating the speech if we hear someone else's voice as an outcome of the speech. Our study provides empirical evidence of the tight link between the sense of agency and self-voice identity."
In previous studies, the longer the delay in the outcome of a person's action -- such as pushing a button and seeing a result -- the less likely the person felt they had caused the action. However, in this study, the team found that the volunteers' sense of agency remained strong when they heard their normal voice played back, regardless of the time delay. The strength of the connection started to vary when the pitch of the voice was changed.
Understanding this close connection between recognizing our own voice and feeling a sense of agency may help to better understand and support people with auditory hallucinations who experience a disconnect in this link. It could also help to improve our experiences online, where the voice we hear when speaking might be different to usual.
"Recently, social interaction in the virtual environment is becoming more popular, especially after the COVID-19 pandemic broke out" said Ohata. "If users embody avatars with a totally different appearance, it might be possible that these users cannot experience strong self-agency over speech if they hear a voice that doesn't match their image of the character. This could result in a less comfortable experience for them and less effective communication online with others."
The team's next step will be to look at how different social situations change people's sense of agency. According to Ohata, "One idea is to measure the strength of people's sense of agency when they tell a lie. We expect that people might feel less agency in telling a lie than a truth, because they want to avoid the responsibility of this action."

Psycholinguistics

📚 Elizabeth M. Clerkin, Linda B. Smith. Real-world statistics at two timescales and a mechanism for infant learning of object names. Proceedings of the National Academy of Sciences, 2022; 119 (18) DOI: 10.1073/pnas.2123239119
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
