Biologically Plausible Learning of Text Representation with Spiking Neural Networks

Introduction

In the world of natural language processing (NLP), finding efficient and effective ways to represent text data is a constant challenge. Traditional methods often rely on high-dimensional vector representations, which can lead to computational inefficiencies and overfitting. In our latest research, we explore a novel approach to text representation using Spiking Neural Networks (SNNs), a biologically inspired model that mimics the way neurons in the brain communicate through spikes.

Our paper, “Biologically Plausible Learning of Text Representation with Spiking Neural Networks,” introduces a new method for generating low-dimensional, spike-based text representations that can be used for text classification tasks. By leveraging the power of SNNs and the Spike-Timing-Dependent Plasticity (STDP) learning rule, we achieve competitive results on the well-known 20 newsgroups dataset, with an accuracy of 80.19%. In this blog post, I’ll walk you through the key insights from our research and explain how SNNs can revolutionize text representation.


What Are Spiking Neural Networks (SNNs)?

Spiking Neural Networks (SNNs) are a type of artificial neural network that closely mimics the behavior of biological neurons. Unlike traditional neural networks that operate on continuous values, SNNs process information through discrete events called spikes, similar to the electrical impulses exchanged by neurons in the brain. This makes SNNs more biologically plausible and energy-efficient, especially when implemented on neuromorphic hardware.

SNNs have been successfully applied to various tasks, such as image and audio processing, but their application to text processing has been limited. Our research aims to bridge this gap by developing a novel method for transforming text into spike-based representations that can be used for classification tasks.


The Spike Encoder for Text (SET): A Novel Approach

Our proposed method, called the Spike Encoder for Text (SET), consists of two main phases:

  1. Text to Spike Transformation: In this phase, text documents are transformed into spike trains, which are sequences of spikes representing the words in the document. This transformation is based on the TF-IDF weighting scheme, which assigns higher importance to words that are more relevant to a document.
  2. Spiking Neural Network Training: The spike trains are then used as input to a two-layer SNN, which is trained using the STDP learning rule. After training, the SNN generates a low-dimensional, spike-based representation of the text, which can be used for classification tasks.

Key Components of the SET Method

1. Text Vectorization and Spike Transformation

The first step in our method is to transform raw text documents into spike trains. This is done by first building a dictionary of unique words from the corpus and then representing each document as a vector of weights using the TF-IDF scheme. Each weight corresponds to the relevance of a word to the document.
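As a rough sketch of this vectorization step, here is how the dictionary and TF-IDF weights could be built with scikit-learn's TfidfVectorizer; the paper's exact tokenization and preprocessing may differ, so treat this as illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus; in our experiments the corpus is the 20 newsgroups dataset.
docs = [
    "the pitcher threw a fastball in the baseball game",
    "the graphics card renders the image on the screen",
]

# Build the dictionary of unique words and weight each document with TF-IDF.
vectorizer = TfidfVectorizer()
weights = vectorizer.fit_transform(docs)      # shape: (n_documents, n_words)

print(vectorizer.get_feature_names_out())     # the learned dictionary
print(weights.toarray()[0])                   # TF-IDF weights of the first document
```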

Once the text is vectorized, it is transformed into spike trains by generating spikes with a probability proportional to the word’s weight. For example, if the word “baseball” has a weight of 0.1, the probability of generating a spike for that word in each millisecond is 0.15 (with a proportionality coefficient of 1.5). This process results in a spike-based representation of the text, which is then fed into the SNN.
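Below is a minimal sketch of this stochastic transformation, assuming a 1 ms simulation step; the presentation time is an illustrative parameter, and the proportionality coefficient is set to 1.5 to match the example above:

```python
import numpy as np

def to_spike_train(weights, duration_ms=1000, coeff=1.5, rng=None):
    """Turn a vector of TF-IDF weights into a binary spike train.

    Each word neuron emits a spike in a given millisecond with probability
    proportional to the word's weight (clipped to 1.0).
    """
    rng = np.random.default_rng() if rng is None else rng
    probs = np.clip(coeff * np.asarray(weights, dtype=float), 0.0, 1.0)
    # One Bernoulli draw per word per millisecond.
    return rng.random((duration_ms, probs.size)) < probs

# Example: "baseball" with weight 0.1 spikes with probability 0.15 each millisecond.
spikes = to_spike_train([0.1, 0.4, 0.0])
print(spikes.mean(axis=0))   # empirical spike rates, roughly [0.15, 0.6, 0.0]
```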

2. Spiking Encoder Architecture

The core of our method is the spiking encoder, a two-layer SNN with an additional inhibitory neuron. The first layer consists of neurons that represent words from the dictionary, and the second layer generates the low-dimensional spike-based representation of the text.

The SNN uses excitatory and inhibitory synaptic connections to simulate the behavior of biological neurons. When a neuron in the second layer fires, it activates the inhibitory neuron, which suppresses the remaining output neurons and creates a winner-takes-all (WTA) competition. This ensures that only the most strongly driven neurons respond to a given document, leading to a sparse and efficient representation of the text.
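To illustrate the winner-takes-all dynamic, here is a toy leaky integrate-and-fire simulation in which any output spike triggers a shared inhibitory signal that suppresses the other output neurons. The neuron model and all constants are simplified assumptions for illustration, not the exact equations used in the paper:

```python
import numpy as np

def run_wta_layer(input_spikes, W, threshold=1.0, leak=0.95, inhibition=0.5):
    """Toy LIF output layer with a shared inhibitory feedback signal.

    input_spikes: (T, n_inputs) binary array of input spike trains
    W:            (n_inputs, n_outputs) excitatory weight matrix
    Returns a (T, n_outputs) binary array of output spikes.
    """
    T = input_spikes.shape[0]
    n_out = W.shape[1]
    v = np.zeros(n_out)                                     # membrane potentials
    out = np.zeros((T, n_out), dtype=bool)
    for t in range(T):
        v = leak * v + input_spikes[t].astype(float) @ W    # leaky integration
        fired = v >= threshold
        if fired.any():
            out[t] = fired
            v[fired] = 0.0              # reset the winners
            v[~fired] -= inhibition     # inhibitory feedback suppresses the rest
        v = np.maximum(v, 0.0)
    return out

rng = np.random.default_rng(0)
x = rng.random((200, 50)) < 0.1          # random input spike trains
W = rng.random((50, 8)) * 0.2
print(run_wta_layer(x, W).sum(axis=0))   # spike counts per output neuron
```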

3. Hebbian Learning with STDP

The SNN is trained using a modified version of the Spike-Timing-Dependent Plasticity (STDP) learning rule, a biologically plausible form of unsupervised learning. In STDP, the strength of a synaptic connection is adjusted based on the relative timing of spikes: if the presynaptic neuron fires shortly before the postsynaptic neuron, the connection is strengthened; if it fires shortly after, the connection is weakened.
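A minimal pair-based STDP update can be written against exponentially decaying pre- and postsynaptic spike traces, as sketched below; the learning rates and trace time constant are illustrative assumptions rather than the values used in our experiments:

```python
import numpy as np

def stdp_step(W, pre_spikes, post_spikes, pre_trace, post_trace,
              a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """One simulation step of pair-based STDP.

    pre_spikes:  (n_pre,)  binary presynaptic spikes at this time step
    post_spikes: (n_post,) binary postsynaptic spikes at this time step
    pre_trace, post_trace: exponentially decaying spike traces
    W:           (n_pre, n_post) weight matrix, updated in place
    """
    # Decay the traces and add the current spikes.
    pre_trace *= np.exp(-1.0 / tau)
    post_trace *= np.exp(-1.0 / tau)
    pre_trace += pre_spikes
    post_trace += post_spikes

    # Pre before post -> potentiation; post before pre -> depression.
    W += a_plus * np.outer(pre_trace, post_spikes)
    W -= a_minus * np.outer(pre_spikes, post_trace)
    np.clip(W, 0.0, w_max, out=W)
    return W, pre_trace, post_trace
```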

We also introduce a synaptic scaling mechanism to prevent uncontrolled growth of synaptic weights. This ensures that the SNN learns to focus on the most relevant words for each document, leading to a more accurate and interpretable text representation.
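A common way to realize synaptic scaling is to renormalize each output neuron's incoming weights after every update so that their sum stays constant; the sketch below assumes that scheme, which may differ in detail from the mechanism used in the paper:

```python
import numpy as np

def synaptic_scaling(W, target_sum=10.0, eps=1e-12):
    """Rescale each output neuron's fan-in so its weights sum to a fixed value.

    W: (n_pre, n_post) weight matrix; each column holds one output neuron's
       incoming weights.
    """
    col_sums = W.sum(axis=0, keepdims=True)
    return W * (target_sum / (col_sums + eps))
```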


Experimental Results and Key Findings

We evaluated our method on the 20 newsgroups dataset, a well-known benchmark for text classification. The dataset contains 18,846 documents from 20 different newsgroups, covering topics such as computers, recreation, science, and religion.
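For reference, the dataset is available directly from scikit-learn; whether headers, footers, and quotes were stripped in our preprocessing is not stated here, so treat that detail as an assumption:

```python
from sklearn.datasets import fetch_20newsgroups

# Download the full corpus: 18,846 documents across 20 newsgroups.
data = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))
print(len(data.data), len(data.target_names))   # 18846 20
```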

1. Influence of Inhibition on Classification Accuracy

One of the key findings of our research is the impact of inhibition on the quality of the text representation. We found that disabling inhibition during the evaluation phase gave the best classification accuracy (78%). We attribute this to the fact that inhibition produces a highly sparse representation, which may not capture subtle differences between documents belonging to different classes.

2. Connection Pruning and Encoder Size

We also explored the relationship between the size of the SNN encoder and the quality of the text representation. Our experiments showed that larger encoders (with more neurons) generally lead to better classification accuracy. However, we found that pruning up to 90% of the weakest synaptic connections did not significantly affect the accuracy, while greatly reducing the computational complexity of the network.
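One straightforward way to implement this pruning is to zero out the smallest-magnitude weights; the sketch below removes the weakest 90% of connections, and the matrix shape is purely illustrative:

```python
import numpy as np

def prune_weakest(W, fraction=0.9):
    """Zero out the given fraction of smallest-magnitude synaptic weights."""
    threshold = np.quantile(np.abs(W), fraction)
    W_pruned = W.copy()
    W_pruned[np.abs(W_pruned) < threshold] = 0.0
    return W_pruned

W = np.random.default_rng(0).random((1000, 2200))   # (dictionary size, encoder size), illustrative
print((prune_weakest(W) == 0).mean())               # roughly 0.9 of the weights removed
```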

The best classification accuracy (80.19%) was achieved with an encoder size of 2,200 neurons and 90% pruning. This demonstrates that our method can effectively reduce the dimensionality of the text representation while maintaining high classification accuracy.


Comparison with Other Methods

To the best of our knowledge, our work is the first to apply SNNs to text classification using low-dimensional representations. When compared to other shallow approaches, such as K-competitive Autoencoder for Text (KATE) and Class Preserving Restricted Boltzmann Machine (CPr-RBM), our method achieved the highest accuracy (80.19%). This highlights the potential of SNNs for text processing tasks.


Conclusion and Future Directions

Our research demonstrates that Spiking Neural Networks can be effectively used for text representation and classification tasks. By leveraging biologically plausible learning rules like STDP, we were able to generate low-dimensional, spike-based representations that achieve competitive results on the 20 newsgroups dataset.

In the future, we plan to explore several directions for improving our method, including:

  • Deep Spiking Neural Networks (DSNNs): Adding more layers to the SNN encoder to capture more detailed features of the text.
  • Semantic Relevance Learning: Incorporating mechanisms to learn semantic relationships between words and documents.
  • Neuromorphic Hardware: Implementing our method on neuromorphic hardware to further improve energy efficiency and computational speed.
