Neuronal memory for processing sequences with non-adjacent dependencies

This is part of ongoing work on building biologically plausible networks of spiking neurons that perform cognitive tasks. Here, we build on our previously developed model and the idea that intrinsic plasticity of neuronal membrane potentials can implement memory for assigning semantic roles to words in language. In this project, we extend the same approach and show that it also holds in tasks where related items (think words) do not always occur close together in a sequence (think sentences) – a hallmark of natural languages.
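
For illustration, here is a minimal, hypothetical sketch of what a sequence with a non-adjacent dependency could look like: an opening item predicts a matching closing item across a variable number of intervening fillers. The token names, pairings, and gap sizes below are illustrative assumptions, not the stimuli used in this project.

```python
# Toy generator of sequences with a non-adjacent dependency:
# the opening item (e.g. "A2") predicts its matching closer ("B2")
# across a variable number of intervening filler items.
# All token names and parameters here are hypothetical illustrations.
import random

PAIRS = {"A1": "B1", "A2": "B2", "A3": "B3"}   # dependent item pairs
FILLERS = ["x", "y", "z"]                      # intervening filler items

def make_sequence(n_fillers: int) -> list[str]:
    """Return e.g. ['A2', 'x', 'z', 'y', 'B2'] with n_fillers items in between."""
    opener = random.choice(list(PAIRS))
    middle = random.choices(FILLERS, k=n_fillers)
    return [opener, *middle, PAIRS[opener]]

if __name__ == "__main__":
    random.seed(0)
    for gap in (1, 3, 5):
        print(make_sequence(gap))
```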

First results of this work were presented at the Bernstein Conference for Computational Neuroscience in September 2019.

More to follow…

Kristijan Armeni
postdoctoral fellow