Techniques

Neural Networks

Artificial neural networks (ANNs) are computing systems loosely inspired by biological brains. They consist of interconnected nodes (neurons) organised in layers that process and transform data. Neural networks learn by adjusting the strength (weights) of connections between neurons based on training examples, enabling them to approximate complex functions.
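The weighted connections described above can be sketched for a single neuron. This is a minimal illustration, not a library implementation; the weights, bias, and input values are made up for the example:

```python
import math

# A single artificial neuron: output = activation(weighted sum of inputs + bias)
def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
    return 1.0 / (1.0 + math.exp(-z))                       # sigmoid activation

# Two inputs with illustrative (arbitrary) weights
out = neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
print(round(out, 3))  # sigmoid(0.3) ≈ 0.574
```

Learning amounts to adjusting `weights` and `bias` so that outputs like this one move closer to the desired targets.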

How Neural Networks Work

A neural network passes inputs through layers of neurons, each applying a weighted sum followed by a non-linear activation function (ReLU, sigmoid, tanh). The network is trained by showing it labelled examples, calculating prediction errors, and using backpropagation to update weights via gradient descent. Different architectures — CNNs for images, RNNs for sequences, Transformers for language — are suited to different data types.
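The forward pass, loss calculation, backpropagation, and gradient-descent update described above can be sketched end to end in NumPy. The network size, learning rate, and step count below are arbitrary choices for illustration; XOR is used because a single linear layer cannot represent it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labelled dataset: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Illustrative 2-4-1 network (sizes chosen arbitrarily)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Loss before training, for comparison
loss0 = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)

lr = 1.0
for step in range(2000):
    # Forward pass: weighted sums followed by non-linear activations
    h = sigmoid(X @ W1 + b1)        # hidden layer
    p = sigmoid(h @ W2 + b2)        # output layer (prediction)
    loss = np.mean((p - y) ** 2)    # mean squared error

    # Backpropagation: chain rule from the loss back to each weight
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)          # derivative through output sigmoid
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * h * (1 - h)          # derivative through hidden sigmoid
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)

    # Gradient descent: step each weight against its gradient
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss before: {loss0:.3f}, after: {loss:.3f}")
```

Running this shows the loss shrinking as the weights adjust, which is the whole learning process in miniature; real frameworks automate the backward pass.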

Key Use Cases

  • Image classification and object detection
  • Language translation and generation
  • Speech recognition
  • Time series forecasting
  • Game playing (AlphaGo) and protein structure prediction (AlphaFold)
  • Anomaly detection in cybersecurity
  • Medical diagnosis

Frequently Asked Questions

What is an artificial neural network?
An artificial neural network is a mathematical model composed of interconnected units (neurons) arranged in layers, designed to recognise patterns in data by adjusting connection weights through a learning process.
How do neural networks learn?
Neural networks most commonly learn through supervised learning: they receive labelled training examples, make predictions, measure the error (loss), and use backpropagation with gradient descent to adjust weights to reduce future errors. Unsupervised and reinforcement learning variants follow the same gradient-based principle with different error signals.
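A single gradient-descent step on one weight makes the update rule concrete. All the numbers here are toy values chosen for the arithmetic to be easy to follow:

```python
# One explicit gradient-descent step for a one-weight linear "network"
w, lr = 0.5, 0.1        # weight and learning rate (arbitrary toy values)
x, target = 2.0, 3.0    # one training example

pred = w * x                    # prediction: 0.5 * 2.0 = 1.0
loss = (pred - target) ** 2     # squared error: (1.0 - 3.0)^2 = 4.0
grad = 2 * (pred - target) * x  # d(loss)/dw = 2 * (-2.0) * 2.0 = -8.0
w = w - lr * grad               # update: 0.5 - 0.1 * (-8.0) = 1.3
print(w)
```

After the step, `w * x = 2.6` is closer to the target `3.0` than the original prediction `1.0`; repeating this over many examples and weights is exactly what training does.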
What are the main types of neural networks?
Key types include: Feedforward Networks (basic classification), CNNs (images/video), RNNs/LSTMs (sequences/time series), Transformers (language/multimodal), GANs (generative models), and Autoencoders (representation learning).