How a Memory Quirk of the Human Brain Can Galvanize AI

Even as toddlers we’re good at making inferences. Take a two-year-old who first learns to recognize a dog and a cat at home, then a horse and a sheep at a petting zoo. The kid will then also be able to tell apart a dog and a sheep, even if he can’t yet articulate their differences.

This ability comes so naturally to us that it belies the complexity of the brain’s data-crunching processes under the hood. To make the logical leap, the child first needs to remember the distinctions between his family pets. When confronted with new categories (farm animals), his neural circuits call up those past memories and seamlessly combine them with new learning to update his mental model of the world.

Not so simple, eh?

It’s perhaps not surprising that even state-of-the-art machine learning algorithms struggle with this type of continuous learning. Part of the reason is how these algorithms are set up and trained. An artificial neural network learns by adjusting its synaptic weights (how strongly one artificial neuron connects to another), so a sort of “memory” of what it has learned ends up embedded in those weights. Because retraining the network on another task disrupts those weights, the AI is essentially forced to “forget” its previous knowledge as a prerequisite to learning something new. Imagine gluing together a bridge out of toothpicks, only to rip apart the glue so you can build a skyscraper with the same material. The hardware is the same, but the memory of the bridge is now lost.

This Achilles’ heel is so detrimental it’s dubbed “catastrophic forgetting.” An algorithm that isn’t capable of retaining its previous memories is severely kneecapped in its ability to infer or generalize. It’s hardly what we consider intelligent.
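
To make the failure mode concrete, here is a minimal sketch in PyTorch (the toy tasks, network size, and training settings are hypothetical choices for illustration, not anything from the study). A small classifier learns one rule, is then retrained on a conflicting rule, and its accuracy on the first collapses toward chance:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny classifier; its "knowledge" lives entirely in these weights.
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(500, 2)
y_task_a = (x[:, 0] > 0).long()  # task A: classify by the first coordinate
y_task_b = (x[:, 1] > 0).long()  # task B: classify by the second coordinate

def train(y, steps=300):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

def accuracy(y):
    return (net(x).argmax(dim=1) == y).float().mean().item()

train(y_task_a)
print("task A accuracy after learning A:", accuracy(y_task_a))  # near 1.0

train(y_task_b)  # retraining on B rewrites the weights that encoded A
print("task A accuracy after learning B:", accuracy(y_task_a))  # near chance
```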

But here’s the thing: if the human brain can do it, nature has already figured out a solution. Why not try it on AI?

A recent study by researchers at the University of Massachusetts Amherst and the Baylor College of Medicine did just that. Drawing inspiration from the mechanics of human memory, the team turbo-charged their algorithm with a powerful capability called “memory replay”—a sort of “rehearsal” of experiences in the brain that cements new learnings into long-lived memories.

What came as a surprise to the authors wasn’t that adding replay to an algorithm boosted its ability to retain what it had previously learned. Rather, it was that replay didn’t require exact memories to be stored and revisited. A bastardized version of the memory, generated by the network itself based on past experiences, was sufficient to give the algorithm a hefty memory boost.

Playing With Replay

In the 1990s, while listening in on the brain’s electrical chatter in sleeping mice, memory researchers stumbled across a perplexing finding. The hippocampus, a brain region critical for spatial navigation and memory, sparked with ripples of electrical waves during sleep. The ripples weren’t random; rather, they recapitulated, in time and space, the same neural activity the team had observed earlier, while the mice were learning to navigate a new maze.

Somehow, the brain was revisiting the electrical pattern encoding the mice’s new experiences during sleep—but compressed and distorted, as if rewinding and playing a fraying tape in fast-forward.

Scientists subsequently found that memory replay is fundamental to strengthening memories in mice and men. In a way, replay provides additional simulated learning trials: chances to rehearse what we’ve learned and stabilize it into a library of memories that new experiences can build on rather than destroy.

It’s perhaps not surprising, then, that deep neural networks equipped with replay stabilize their memories, with the caveat that the algorithm needs to perfectly “remember” all previous experiences as input for replay. The problem with this approach, the team said, is that it isn’t scalable. The need to store and revisit prior experiences quickly drives data storage demands to untenable levels.
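
In code, this “exact” replay amounts to keeping every example ever seen and rehearsing a few of them with each new update. A minimal sketch, where the buffer, the sampling scheme, and the `train_step(x, y)` interface are illustrative assumptions rather than anything from the paper:

```python
import random

buffer = []  # holds every past example; grows without bound across tasks

def train_task_with_exact_replay(train_step, task_data, replay_per_example=8):
    """train_step(x, y) is assumed to apply one gradient update."""
    for x, y in task_data:
        train_step(x, y)  # learn from the new experience
        # Rehearse stored examples so old knowledge isn't overwritten.
        k = min(replay_per_example, len(buffer))
        for x_old, y_old in random.sample(buffer, k):
            train_step(x_old, y_old)
    buffer.extend(task_data)  # store everything, forever: the scaling problem
```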

But what if, just like the brain, we don’t actually need a perfect, total recall of memories during replay?

Memory Remix

The team’s lightbulb moment came when digging into the weeds of replay: rather than playing a perfectly accurate video tape of memories, perhaps the brain is “reimagining,” or generating its past experiences for playback. Here, replay doesn’t rely on faithfully stored memories. Instead, it’s more similar to our actual experience of memory: something reconstructed from reality, but tainted by our previous history and worldviews.

To test out their idea, the team coded an algorithm that implements “brain-inspired replay.” It doesn’t store past experiences per se for playback. Instead, it uses data from learned experiences to automatically reconstruct memories for replay.

As an analogy to the brain, say you’re learning a visual task, such as recognizing different animals. Your main processor is the cortex, which begins to parse out patterns that correspond to a dog or cat or sheep. In previous “replay” algorithms, these data are then transferred to the hippocampus, which stores the memory and uses it for playback. The electrical activity from the hippocampus washes across the cortex, strengthening what you just learned.

The new setup melds the two components, the artificial cortex and hippocampus, in a more biologically plausible way. Here, the hippocampus uses data from the cortex, the processor, to basically “dream up” or “imagine” its replay patterns. These patterns lack pixel-by-pixel fidelity, much as our own memories aren’t photographic. But they capture something more abstract about the memory (what makes a sheep a sheep versus a dog) even when the two animals are learned in separate sessions.
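
As a rough sketch of the general scheme (known in the machine learning literature as generative replay): a generator “imagines” inputs like those from earlier tasks, and a frozen copy of the old model reconstructs their labels, so nothing raw is ever stored. The `sample`, `predict`, and `fit_*` interfaces below are assumptions for illustration, and this simplified version omits the feedback connections that make the authors’ brain-inspired variant distinctive:

```python
import copy

def train_task_with_generative_replay(classifier, generator, task_data,
                                      fit_classifier, fit_generator,
                                      n_replay=32):
    # Frozen snapshots of both models act as the "memory" of past tasks.
    old_classifier = copy.deepcopy(classifier)
    old_generator = copy.deepcopy(generator)

    for x, y in task_data:
        fit_classifier(classifier, x, y)             # learn the new data

        x_replay = old_generator.sample(n_replay)    # imagined past inputs
        y_replay = old_classifier.predict(x_replay)  # reconstructed labels
        fit_classifier(classifier, x_replay, y_replay)

        # Train the generator on new and imagined data alike, so it
        # doesn't forget how to imagine earlier tasks either.
        fit_generator(generator, x)
        fit_generator(generator, x_replay)
```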

When pitted against other deep learning algorithms for continuous learning, the newbie trounced its competitors at preventing catastrophic forgetting. In a visual memory challenge spanning 100 tasks, the algorithm was able to keep its previous memories while deciphering new images. Impressively, the harder and more “real world” the problem, the more the algorithm outperformed its peers.

“If our network with generative replay first learns to separate cats from dogs, and then to separate bears from foxes, it will also tell cats from foxes without specifically being trained to do so. And notably, the more the system learns, the better it becomes at learning new tasks,” said study author Dr. Gido van de Ven.

Meeting of Minds

These results are hardly the first to tap into the brain’s memory prowess.

Previously, AI researchers have also drawn on a separate memory process called metaplasticity, which adjusts how susceptible a synapse is to change. Because memories are stored in a network’s weights, the more easily those weights change, the more easily a memory can be altered or forgotten. Google DeepMind, for example, has used an artificial version of this brain quirk to help “protect” the artificial synapses that are key to preserving a previous memory while the network encodes the next.
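
The DeepMind work alluded to here is likely elastic weight consolidation (EWC), which adds a quadratic penalty that makes important weights “stiffer.” A minimal sketch, assuming dictionaries `old_params` and `fisher` (per-parameter importance estimates) were saved after the previous task; the names and the penalty strength `lam` are illustrative:

```python
import torch

def ewc_penalty(model, old_params, fisher, lam=1000.0):
    """Anchor each weight to its old value, scaled by how much that
    weight mattered for the previous task."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return (lam / 2.0) * penalty

# During training on a new task, the total loss becomes:
#   loss = task_loss + ewc_penalty(model, old_params, fisher)
# Important weights resist change; unimportant ones stay free to learn.
```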

That’s not to say one approach bests the other. What’s likely, the authors said, is that these strategies go hand in hand to protect the brain’s memories. An algorithm that incorporates both may be even more flexible and resilient to catastrophic forgetting, operating like a toddler untangling a complex world one memory at a time.

Clearly the brain has a lot more inspiration for AI up its sleeve. Although the new algorithm is closer to biological plausibility, it can’t yet capture a fundamental component of our own memories in its replay mechanism: the experience of time. On the other hand, machine learning also has more to give back to neuroscience. The results here could help unravel the neural processes behind replay, which could explain why some of our memories degrade while others last a lifetime.

“Our method makes several interesting predictions about the way replay might contribute to memory consolidation in the brain. We are already running an experiment to test some of these predictions,” said van de Ven.

Image Credit: Karolina Grabowska from Pixabay
