Here are the four updates, worked out step by step:
1. Error‐correction learning (single‐layer perceptron)
- Given:
  w_old = [0.5, −0.2], x = [1, 2], d = 1, η = 0.1.
- Compute actual output:
  y = w_old · x = 0.5·1 + (−0.2)·2 = 0.5 − 0.4 = 0.1.
- Error term:
  e = d − y = 1 − 0.1 = 0.9.
- Weight change:
  Δw = η(d − y)x = 0.1 × 0.9 × [1, 2] = [0.09, 0.18].
- Updated weight vector:
  w_new = w_old + Δw = [0.5, −0.2] + [0.09, 0.18] = [0.59, −0.02].
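The steps above can be sketched in plain Python (variable names are illustrative, not from any particular library):

```python
# Error-correction (delta rule) update for a single-layer perceptron,
# using the values from the worked example above.

w_old = [0.5, -0.2]   # initial weight vector
x = [1, 2]            # input vector
d = 1                 # desired output
eta = 0.1             # learning rate

# Actual (linear) output: y = w_old . x
y = sum(wi * xi for wi, xi in zip(w_old, x))   # ~0.1

# Error term: e = d - y
error = d - y                                  # ~0.9

# Weight change, applied componentwise: dw = eta * (d - y) * x
w_new = [wi + eta * error * xi for wi, xi in zip(w_old, x)]
```

Running this reproduces the hand calculation: `w_new` comes out to roughly `[0.59, -0.02]` (up to floating-point rounding).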
2. Hebbian learning (memory‐based system)
- Given:
  w_old = 0.3, x = 0.8, y = 0.5, η = 0.2.
- Hebbian update rule:
  Δw = η x y = 0.2 × 0.8 × 0.5 = 0.08.
- Updated weight:
  w_new = w_old + Δw = 0.3 + 0.08 = 0.38.
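The same arithmetic as a minimal Python sketch (names are illustrative):

```python
# Hebbian update: the weight grows in proportion to the product of
# presynaptic and postsynaptic activity. Values match the worked
# example above.

w_old = 0.3   # current weight
x = 0.8       # presynaptic activity (input)
y = 0.5       # postsynaptic activity (output)
eta = 0.2     # learning rate

delta_w = eta * x * y      # 0.2 * 0.8 * 0.5 = ~0.08
w_new = w_old + delta_w    # ~0.38
```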
3. Competitive (winner‐takes‐all) learning
- Given:
  x = [0.6, 0.8], η = 0.15,
  with initial weights
  w_1 = [0.4, 0.2], w_2 = [0.7, 0.5], w_3 = [0.3, 0.9].
- Compute Euclidean distances ∥x − w_i∥:
  - d_1 = √((0.6−0.4)² + (0.8−0.2)²) = √(0.04 + 0.36) = √0.40 ≈ 0.632.
  - d_2 = √((0.6−0.7)² + (0.8−0.5)²) = √(0.01 + 0.09) = √0.10 ≈ 0.316.
  - d_3 = √((0.6−0.3)² + (0.8−0.9)²) = √(0.09 + 0.01) = √0.10 ≈ 0.316.
Neurons 2 and 3 tie for the smallest distance (≈ 0.316). Breaking the tie arbitrarily (e.g., by lowest index), we choose neuron 2 as the winner.
- Update rule for the winner (w_2):
  w_2,new = w_2,old + η(x − w_2,old).
  First compute the difference:
  x − w_2,old = [0.6, 0.8] − [0.7, 0.5] = [−0.1, 0.3].
  Then scale by η = 0.15:
  0.15 × [−0.1, 0.3] = [−0.015, 0.045].
  Finally, add to the old w_2:
  w_2,new = [0.7, 0.5] + [−0.015, 0.045] = [0.685, 0.545].
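The whole winner-takes-all step can be sketched in Python (names are illustrative; the rounding in the tie-break is an assumption added to guard against floating-point noise when two distances are analytically equal):

```python
# Competitive (winner-takes-all) learning: only the neuron whose
# weight vector is closest to the input is moved toward it.
# Values match the worked example above.
import math

x = [0.6, 0.8]
eta = 0.15
weights = [[0.4, 0.2], [0.7, 0.5], [0.3, 0.9]]

# Euclidean distances ||x - w_i||
dists = [math.dist(x, w) for w in weights]

# Pick the winner; round before comparing so the exact tie between
# neurons 2 and 3 is broken by lowest index, as in the text.
winner = min(range(len(weights)), key=lambda i: (round(dists[i], 9), i))

# Update only the winner: w_new = w_old + eta * (x - w_old)
weights[winner] = [wi + eta * (xi - wi)
                   for wi, xi in zip(weights[winner], x)]
```

With these inputs, `winner` is index 1 (neuron 2) and its weights move to roughly `[0.685, 0.545]`; the losing neurons are left unchanged.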
4. Boltzmann learning
- Given:
  w_old = 0.4, ρ⁺ = 0.7, ρ⁻ = 0.3, η = 0.1
  (ρ⁺ is the correlation in the clamped phase, ρ⁻ the correlation in the free-running phase).
- Update rule:
  Δw = η(ρ⁺ − ρ⁻) = 0.1 × (0.7 − 0.3) = 0.1 × 0.4 = 0.04.
- New weight:
  w_new = w_old + Δw = 0.4 + 0.04 = 0.44.
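The Boltzmann update is the same one-liner in Python (names are illustrative):

```python
# Boltzmann learning update: the weight changes in proportion to the
# difference between the clamped-phase and free-phase correlations.
# Values match the worked example above.

w_old = 0.4
rho_plus = 0.7    # correlation with visible units clamped
rho_minus = 0.3   # correlation in the free-running phase
eta = 0.1

delta_w = eta * (rho_plus - rho_minus)   # ~0.04
w_new = w_old + delta_w                  # ~0.44
```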