The ADALINE neuron computes its output by applying the formula below:
y = w_0 + w_1 * x_1 + w_2 * x_2 + ... + w_n * x_n
where w_0, w_1, w_2, ..., w_n are the weights assigned to the respective inputs (w_0 being the bias weight, applied to a constant input of 1), x_1, x_2, ..., x_n are the input values, and y is the output produced by the neuron.
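For instance, with purely illustrative weights w_0 = 0.5, w_1 = -0.3, w_2 = 0.8 and inputs x_1 = 1, x_2 = 0, the neuron outputs y = 0.5 + (-0.3)(1) + (0.8)(0) = 0.2.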
After each output is computed, the weights are adjusted according to the rule below:
w_j = w_j + η * (d_j - y) * x_j
where η represents the learning rate, d_j stands for the desired output for the current training example, y is the actual output, and x_j is the input linked with weight w_j. This update, known as the delta rule (or least mean squares rule), is applied for each training instance until the error between the predicted and desired outputs is minimized.

One notable advantage of ADALINE compared with other artificial neural network models is its simplicity and computational efficiency, which make it well suited to scenarios with scant data or restricted computational resources. Furthermore, because the ADALINE model consists of a single layer of artificial neurons, its input weights are relatively easy to understand and interpret.
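For example, with η = 0.1, a desired output d_j = 1, an actual output y = 0.2, and an input x_j = 1, the weight w_j is increased by 0.1 × (1 - 0.2) × 1 = 0.08 (these numbers are purely illustrative). The C program below applies this training loop to the four XOR input patterns; note the leading 1 in each input row, which feeds the bias weight.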
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NUM_INPUTS 2       /* inputs per sample, excluding the bias term */
#define NUM_SAMPLES 4
#define MAX_EPOCHS 100
#define LEARNING_RATE 0.1

/* Each row is {bias, x1, x2}; the leading 1 feeds the bias weight w_0. */
double inputs[NUM_SAMPLES][NUM_INPUTS + 1] = {{1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}};
/* Desired outputs: the XOR truth table for the four input patterns. */
double expected_outputs[NUM_SAMPLES] = {0, 1, 1, 0};
/* Weighted sum of the inputs: w_0*1 + w_1*x_1 + ... + w_n*x_n. */
double dot_product(double weights[], double inputs[], int num_inputs) {
    double sum = 0.0;
    int i;
    for (i = 0; i < num_inputs; i++) {
        sum += weights[i] * inputs[i];
    }
    return sum;
}

/* ADALINE trains on the raw linear output, so the activation is the identity. */
double activation_function(double dot_product) {
    return dot_product;
}

/* Error term (d - y) used by the delta rule. */
double calculate_error(double expected, double actual) {
    return expected - actual;
}
void train(double inputs[][NUM_INPUTS + 1], double expected_outputs[], double weights[], int num_inputs, int num_samples) {
    int i, j, epoch;
    double dot_product_value, actual_output, error;
    for (epoch = 0; epoch < MAX_EPOCHS; epoch++) {
        for (i = 0; i < num_samples; i++) {
            dot_product_value = dot_product(weights, inputs[i], num_inputs);
            actual_output = activation_function(dot_product_value);
            error = calculate_error(expected_outputs[i], actual_output);
            /* Delta rule: w_j = w_j + eta * (d - y) * x_j, applied to every
               weight, including the bias weight (its input is the constant 1). */
            for (j = 0; j < num_inputs; j++) {
                weights[j] = weights[j] + LEARNING_RATE * error * inputs[i][j];
            }
        }
    }
}
int main() {
    /* One weight per input plus the bias weight, all initialized to zero. */
    double weights[NUM_INPUTS + 1] = {0.0, 0.0, 0.0};
    int i, j;
    /* Pass NUM_INPUTS + 1 so the bias column and bias weight are included. */
    train(inputs, expected_outputs, weights, NUM_INPUTS + 1, NUM_SAMPLES);
    for (i = 0; i < NUM_SAMPLES; i++) {
        double dot_product_value = dot_product(weights, inputs[i], NUM_INPUTS + 1);
        double actual_output = activation_function(dot_product_value);
        printf("Inputs: ");
        for (j = 0; j < NUM_INPUTS + 1; j++) {
            printf("%f ", inputs[i][j]);
        }
        printf("\n");
        printf("Expected output: %f\n", expected_outputs[i]);
        printf("Actual output: %f\n", actual_output);
        printf("\n");
    }
    return 0;
}
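Assuming the file is saved as adaline.c (an illustrative name), it can be compiled and run with a standard C compiler, for example:
gcc adaline.c -o adaline -lm
./adaline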
Output:
Inputs: 1.000000 0.000000 0.000000
Expected output: 0.000000
Explanation:
Nevertheless, a significant drawback of the ADALINE model is that it can only represent linear decision boundaries: it cannot solve problems that are not linearly separable, which limits its usefulness in many real-world scenarios. The XOR targets used in the program above are the classic example; no choice of weights maps all four patterns to their targets, so the trained outputs settle near 0.5 for every pattern instead of matching the desired values. This limitation was overcome by stacking multiple layers of artificial neurons, which led to multi-layer perceptron (MLP) networks. In summary, ADALINE is a simple and computationally efficient artificial neural network suited to linearly separable binary classification tasks. Despite its constraints, it holds an important place in the evolution of artificial neural networks and has shaped the development of more complex models.
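For contrast with the XOR failure above, the following is a minimal standalone sketch (the names N_IN and N_SAMP and the constants 0.1 and 1000 are illustrative choices, not part of the original program) that applies the same delta rule to the AND truth table, which is linearly separable. The linear outputs approach the least-squares fit, and thresholding at 0.5 then classifies all four patterns correctly:
#include <stdio.h>

#define N_IN 3     /* bias input plus two data inputs */
#define N_SAMP 4

int main(void) {
    double x[N_SAMP][N_IN] = {{1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}};
    double d[N_SAMP] = {0, 0, 0, 1};   /* AND truth table: linearly separable */
    double w[N_IN] = {0.0, 0.0, 0.0};
    int epoch, i, j;

    /* Same delta rule as above; 1000 epochs at learning rate 0.1 (illustrative). */
    for (epoch = 0; epoch < 1000; epoch++) {
        for (i = 0; i < N_SAMP; i++) {
            double y = 0.0, err;
            for (j = 0; j < N_IN; j++) y += w[j] * x[i][j];
            err = d[i] - y;
            for (j = 0; j < N_IN; j++) w[j] += 0.1 * err * x[i][j];
        }
    }
    /* The linear outputs approach the least-squares values -0.25, 0.25, 0.25, 0.75;
       thresholding at 0.5 classifies all four AND patterns correctly. */
    for (i = 0; i < N_SAMP; i++) {
        double y = 0.0;
        for (j = 0; j < N_IN; j++) y += w[j] * x[i][j];
        printf("x1=%.0f x2=%.0f  linear=%.3f  class=%d  target=%.0f\n",
               x[i][1], x[i][2], y, y >= 0.5 ? 1 : 0, d[i]);
    }
    return 0;
}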