Session 12 – Support Vector Machine
SVM seeks the separator with the maximum margin in order to improve generalization. When the data are not perfectly separable, the soft margin tolerates some errors, with the trade-off controlled by C. With the kernel trick, SVM can draw non-linear decision boundaries (e.g., RBF) without explicitly mapping the data into a high-dimensional feature space.
Material Review: Definitions, Intuition, and Examples
Margin & Soft-Margin
Intuition: choose the line that maximizes the distance to the nearest points (the support vectors). C balances a wide margin against classification errors: a large C chases minimal error (prone to overfitting), while a small C allows a wider margin (more robust to noise). The sketch below makes this trade-off concrete.
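A minimal sketch of the effect (the blob data here is an assumption for illustration, not the lab dataset): as C shrinks, the margin widens and more points become support vectors.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two Gaussian blobs; vary C and watch the margin width and the support set.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.5, random_state=0)
for C in [0.01, 1, 100]:
    clf = SVC(kernel='linear', C=C).fit(X, y)
    w = clf.coef_[0]
    print(f"C={C:>6}: support vectors={clf.n_support_.sum()}, "
          f"margin width={2/np.linalg.norm(w):.3f}")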
RBF Kernel & γ
RBF enables non-linear boundaries. γ controls how far the influence of a single sample reaches: large γ → narrow influence → wiggly boundary; small γ → broad influence → smooth boundary. The sketch below shows the resulting overfitting pattern.
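A quick sketch (make_moons is an assumption for illustration): with a very large γ the model memorizes the training set while test accuracy lags behind, exactly the overfitting pattern described above.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
for gamma in [0.01, 1, 100]:
    clf = SVC(kernel='rbf', C=1, gamma=gamma).fit(Xtr, ytr)
    print(f"gamma={gamma:>6}: train acc={clf.score(Xtr, ytr):.3f}, "
          f"test acc={clf.score(Xte, yte):.3f}")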
Summary of Intuition & Formulas:
1) Linear SVM (Hard/Soft Margin)
• Separating hyperplane: w·x + b = 0.
• Margin: 2/||w||, to be maximized.
• Hard margin (separable data): min (1/2)||w||² s.t. y_i(w·x_i + b) ≥ 1.
• Soft margin (noisy data): min (1/2)||w||² + C ∑ξ_i s.t. y_i(w·x_i + b) ≥ 1 − ξ_i, ξ_i ≥ 0.
• C (regularization): large → high error penalty (variance ↑); small → more error-tolerant (bias ↑).
2) Kernel Trick
• Project the data into a higher-dimensional feature space without computing φ(x) explicitly.
• RBF/Gaussian: K(x, z) = exp(−γ ||x − z||²).
• Large γ → wiggly/narrow decision boundary (variance ↑); small γ → smooth (bias ↑).
3) Multiclass
• One-vs-Rest (OvR) / One-vs-One (OvO) extend SVM to many classes.
4) Scaling Matters
• Standardize features (mean = 0, std = 1) so margins and distances are meaningful, especially with the RBF kernel.
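The RBF formula above can be checked directly against scikit-learn's implementation; a small sketch (gamma=0.5 and the random vectors are arbitrary choices):
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3)); z = rng.normal(size=(1, 3))
gamma = 0.5
manual = np.exp(-gamma * np.sum((x - z) ** 2))   # K(x, z) by hand
library = rbf_kernel(x, z, gamma=gamma)[0, 0]    # scikit-learn's value
print(manual, library)  # should agree to floating-point precision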
Case Studies
- Moons dataset: tuning an RBF SVM (C, γ) + boundary visualization.
- Iris: multiclass SVM (OvR).
- (Optional) SVR: regression on a noisy sine signal.
Lab: Linear SVM, RBF, Scaling, Multiclass, SVR
# ====== Linear SVM (Nearly Separable Data) ======
# If needed: !pip install scikit-learn
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, classification_report
np.random.seed(2)
N=200
X1 = np.random.multivariate_normal([0,0], [[1,0.3],[0.3,1]], N)
X2 = np.random.multivariate_normal([3,3], [[1,-0.2],[-0.2,1]], N)
X = np.vstack([X1,X2])
y = np.array([0]*N + [1]*N)
Xtr, Xte, ytr, yte = train_test_split(X,y,test_size=0.25,random_state=42)
sc = StandardScaler(); Xtr_ = sc.fit_transform(Xtr); Xte_ = sc.transform(Xte)
clf = SVC(kernel='linear', C=1.0)
clf.fit(Xtr_, ytr)
yp = clf.predict(Xte_)
print('Accuracy:', accuracy_score(yte, yp))
print(classification_report(yte, yp))
# Visualize the margin
w = clf.coef_[0]; b = clf.intercept_[0]
# line: w0*x + w1*y + b = 0 → y = -(w0/w1)*x - b/w1
xx = np.linspace(Xtr_[:,0].min()-1, Xtr_[:,0].max()+1, 100)
y_dec = -(w[0]/w[1])*xx - b/w[1]
# margin lines (±1): vertical offset = perpendicular margin * sqrt(1 + slope^2)
margin = 1/np.sqrt(np.sum(w*w))
y_dec_up = y_dec + np.sqrt(1 + (w[0]/w[1])**2)*margin
y_dec_dn = y_dec - np.sqrt(1 + (w[0]/w[1])**2)*margin
plt.figure(figsize=(5,4))
plt.scatter(Xtr_[:,0], Xtr_[:,1], c=ytr, s=16, alpha=0.5, label='Train')
plt.plot(xx, y_dec, label='Decision boundary')
plt.plot(xx, y_dec_up, '--', label='Margin +1')
plt.plot(xx, y_dec_dn, '--', label='Margin -1')
plt.title('SVM Linear + Margin')
plt.legend(); plt.show()
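# ====== (Sketch) Highlight the Support Vectors ======
# Continues the linear-SVM block above (reuses clf, Xtr_, ytr, xx, y_dec,
# y_dec_up, y_dec_dn). The margin lines should pass through the circled points.
sv = clf.support_vectors_
print('Support vectors per class:', clf.n_support_)
plt.figure(figsize=(5,4))
plt.scatter(Xtr_[:,0], Xtr_[:,1], c=ytr, s=16, alpha=0.3)
plt.scatter(sv[:,0], sv[:,1], s=80, facecolors='none', edgecolors='r',
            label='Support vectors')
plt.plot(xx, y_dec, 'k-')
plt.plot(xx, y_dec_up, 'k--')
plt.plot(xx, y_dec_dn, 'k--')
plt.title('Support vectors lie on the margin')
plt.legend(); plt.show()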
# ====== RBF SVM + Grid Search over C & gamma ======
# If needed: !pip install scikit-learn
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
X,y = make_moons(n_samples=600, noise=0.25, random_state=0)
Xtr,Xte,ytr,yte = train_test_split(X,y,test_size=0.25,random_state=42)
pipe = Pipeline([
('sc', StandardScaler()),
('svm', SVC(kernel='rbf'))
])
param = {
'svm__C':[0.1, 1, 10, 100],
'svm__gamma':[0.01, 0.1, 1, 10]
}
cv = GridSearchCV(pipe, param, cv=5, n_jobs=-1)
cv.fit(Xtr, ytr)
print('Best params:', cv.best_params_)
print('Val score:', cv.best_score_)
print('Test acc:', accuracy_score(yte, cv.best_estimator_.predict(Xte)))
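# ====== (Sketch) Inspect the Grid as a Table ======
# Continues the grid-search block above (reuses cv); assumes pandas is available.
import pandas as pd
res = pd.DataFrame(cv.cv_results_)
table = res.pivot_table(index='param_svm__C', columns='param_svm__gamma',
                        values='mean_test_score')
print(table.round(3))  # rows: C, columns: gamma, cells: mean CV accuracy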
# ====== RBF Decision Boundary Visualization (best model) ======
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
X,y = make_moons(n_samples=400, noise=0.25, random_state=0)
Xtr,Xte,ytr,yte = train_test_split(X,y,test_size=0.25,random_state=42)
model = Pipeline([
('sc', StandardScaler()),
    ('svm', SVC(kernel='rbf', C=10, gamma=1))
])
model.fit(Xtr, ytr)
# grid for plotting
x_min, x_max = X[:,0].min()-0.5, X[:,0].max()+0.5
y_min, y_max = X[:,1].min()-0.5, X[:,1].max()+0.5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 250), np.linspace(y_min, y_max, 250))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.figure(figsize=(6,4))
plt.contourf(xx, yy, Z, alpha=0.25)
plt.scatter(Xte[:,0], Xte[:,1], c=yte, s=16, edgecolor='k')
plt.title('SVM RBF Decision Boundary (example)')
plt.show()
# ====== Impact of Scaling on SVM ======
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
np.random.seed(1)
N=500
x1 = np.random.normal(0,1,N)
x2 = 100*np.random.normal(0,1,N) # feature on a much larger scale
X = np.vstack([x1,x2]).T
y = (x1 + 0.02*x2 > 0).astype(int)
Xtr,Xte,ytr,yte = train_test_split(X,y,test_size=0.25,random_state=42)
# Without scaling
m1 = SVC(kernel='rbf', C=1, gamma=0.1).fit(Xtr,ytr)
acc1 = accuracy_score(yte, m1.predict(Xte))
# With scaling
sc = StandardScaler(); Xtr_ = sc.fit_transform(Xtr); Xte_ = sc.transform(Xte)
m2 = SVC(kernel='rbf', C=1, gamma=0.1).fit(Xtr_,ytr)
acc2 = accuracy_score(yte, m2.predict(Xte_))
print('Accuracy without scaling:', acc1)
print('Accuracy with scaling   :', acc2)
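# ====== (Sketch) Default gamma='scale' ======
# Continues the scaling block above (reuses Xtr, Xte, ytr, yte).
# scikit-learn's default gamma='scale' (= 1/(n_features * X.var())) adapts to
# feature variance, which softens (but does not replace) proper scaling.
m3 = SVC(kernel='rbf', C=1, gamma='scale').fit(Xtr, ytr)
print("Accuracy, no scaling, gamma='scale':", accuracy_score(yte, m3.predict(Xte)))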
# ====== Multiclass SVM (OvR) ======
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report
X,y = load_iris(return_X_y=True)
Xtr,Xte,ytr,yte = train_test_split(X,y,test_size=0.25,random_state=0,stratify=y)
model = Pipeline([
('sc', StandardScaler()),
('svm', SVC(kernel='rbf', C=1, gamma=0.5, decision_function_shape='ovr'))
])
model.fit(Xtr,ytr)
print(classification_report(yte, model.predict(Xte)))
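# ====== (Sketch) Explicit One-vs-Rest Wrapper ======
# Note: SVC trains one-vs-one internally; decision_function_shape='ovr' only
# reshapes the returned scores. For a true OvR fit, wrap SVC explicitly.
# Reuses Xtr, Xte, ytr, yte from the iris block above.
from sklearn.multiclass import OneVsRestClassifier
ovr = Pipeline([
    ('sc', StandardScaler()),
    ('svm', OneVsRestClassifier(SVC(kernel='rbf', C=1, gamma=0.5)))
])
ovr.fit(Xtr, ytr)
print('Explicit OvR accuracy:', ovr.score(Xte, yte))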
# ====== (Optional) Support Vector Regression (SVR) ======
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
np.random.seed(4)
X = np.linspace(-3,3,200).reshape(-1,1)
y = np.sin(X[:,0]) + np.random.randn(200)*0.2
Xtr,Xte,ytr,yte = train_test_split(X,y,test_size=0.25,random_state=0)
svr = SVR(kernel='rbf', C=10, gamma=1, epsilon=0.1)
svr.fit(Xtr,ytr)
xx = np.linspace(-3,3,300).reshape(-1,1)
plt.figure(figsize=(6,4))
plt.scatter(Xte, yte, s=12, label='Test')
plt.plot(xx, svr.predict(xx), label='SVR RBF')
plt.title('SVR with an RBF Kernel')
plt.legend(); plt.show()
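# ====== (Sketch) Effect of epsilon in SVR ======
# Continues the SVR block above (reuses Xtr, ytr). A wider epsilon-tube lets
# more points sit inside it penalty-free, so fewer support vectors remain.
for eps in [0.01, 0.1, 0.5]:
    m = SVR(kernel='rbf', C=10, gamma=1, epsilon=eps).fit(Xtr, ytr)
    print(f'epsilon={eps}: support vectors={len(m.support_)}')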
# ====== Quiz 12 (self-check) ======
qs = [
    ("The role of the parameter C in SVM:", {"a": "Controls the margin vs. error trade-off (regularization)", "b": "Sets the number of support vectors", "c": "Selects the kernel", "d": "Sets the feature dimension"}, "a"),
    ("A large γ in RBF tends to give...", {"a": "A smooth boundary", "b": "A very wiggly boundary (overfit)", "c": "No effect", "d": "A linear boundary"}, "b"),
    ("Why is scaling important for SVM?", {"a": "To make training slower", "b": "It changes the labels", "c": "It keeps distances/margins meaningful across features", "d": "No reason"}, "c"),
    ("Multiclass SVM commonly uses...", {"a": "OvR/OvO", "b": "Mini-batch", "c": "Dropout", "d": "Bagging"}, "a"),
]
print('Answer key:')
for i, (_, __, ans) in enumerate(qs, 1):
    print(f'Q{i}: {ans}')
Assignment & References
Coding Assignment 8: Use one real classification dataset (≥ 500 samples). Compare a linear SVM against an RBF SVM. Apply scaling and tune C and γ via grid search (CV=5). Report metrics (accuracy, F1, ROC-AUC) and show a decision-boundary plot (if 2D) or a ROC curve. Write a summary (≤ 1 page) on the influence of C & γ. A starter sketch for the ROC-AUC part follows below.
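A minimal starter sketch for the ROC-AUC metric, assuming a binary problem; model, Xte, and yte are placeholder names for your fitted pipeline and test split:
from sklearn.metrics import roc_auc_score
scores = model.decision_function(Xte)  # signed distances to the hyperplane
print('ROC-AUC:', roc_auc_score(yte, scores))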
- Vapnik, V. — Statistical Learning Theory (the maximum-margin concept).
- Géron, A. — Hands-On Machine Learning, SVM chapter.