AI
Artificial Intelligence (IF1706)
FSTT • ISTN Jakarta • Odd Semester 2025/2026
Session 10 – Evaluation & Regularization

Evaluation determines how well your model predicts. Regularization helps prevent overfitting by controlling model complexity. We will study metrics for regression & classification, k-fold cross-validation for stable performance estimates, and Ridge (L2) and Lasso (L1) for constraining model weights.

Objectives: choose the right metric, perform validation, and apply regularization
Material Review: Definitions, Intuition, and Examples

The Right Metric for the Right Task

For regression, use MSE/RMSE/MAE/R². For classification, do not rely on accuracy alone: also examine precision, recall, F1, and ROC-AUC, especially on imbalanced data.
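A minimal sketch of the imbalance pitfall (the counts below are hypothetical): a classifier that always predicts the majority class scores 99% accuracy yet has zero recall.

# ====== Why accuracy misleads on imbalanced data (hypothetical counts) ======
# 990 negatives, 10 positives; the model predicts "negative" for everything.
TP, FP, FN, TN = 0, 0, 10, 990
accuracy = (TP + TN) / (TP + FP + FN + TN)   # 0.99 -- looks excellent
recall = TP / (TP + FN)                      # 0.00 -- misses every positive
print(f'accuracy={accuracy:.2f}, recall={recall:.2f}')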

K-Fold Cross-Validation

Splitting the data into K folds reduces dependence on a single train/test split. Averaging the fold scores gives a more reliable estimate of generalization.
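As a sketch, the same idea with scikit-learn's built-in helpers (Lab B below implements the loop manually; the data here mirrors that lab):

# ====== K-Fold via scikit-learn (sketch; compare with the manual loop in Lab B) ======
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

np.random.seed(0)
X = np.linspace(0, 10, 100).reshape(-1, 1)
y = 2.5 * X[:, 0] + 10 + np.random.randn(100) * 2

cv = KFold(n_splits=5, shuffle=True, random_state=0)
# scoring='neg_mean_squared_error' returns negated MSE (higher is better)
scores = cross_val_score(LinearRegression(), X, y, cv=cv,
                         scoring='neg_mean_squared_error')
print('Mean MSE across 5 folds:', -scores.mean())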

L1/L2 Regularization

Add a penalty on the weights to keep the model from chasing noise. Ridge shrinks weights (continuously), while Lasso can drive some weights to exactly zero (feature selection).
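A minimal sketch of that contrast (synthetic data; the alpha grid is arbitrary): as the penalty grows, Ridge weights shrink smoothly while Lasso zeroes more of them out.

# ====== Effect of penalty strength on weights (illustrative sketch) ======
import numpy as np
from sklearn.linear_model import Ridge, Lasso

np.random.seed(0)
X = np.random.randn(100, 5)
y = X @ np.array([3.0, 0.0, -2.0, 0.0, 1.0]) + np.random.randn(100) * 0.5

for alpha in [0.01, 1.0, 10.0]:
    r = Ridge(alpha=alpha).fit(X, y)
    l = Lasso(alpha=alpha).fit(X, y)
    print(f'alpha={alpha:5}: Ridge sum|w|={np.abs(r.coef_).sum():.2f}, '
          f'Lasso nonzero={int((np.abs(l.coef_) > 1e-6).sum())}')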

Summary of Concepts & Formulas:

1) Regression (checked numerically in the sketch after this list):
   • MSE = (1/m)∑(y_i − ŷ_i)^2    • RMSE = √MSE
   • MAE = (1/m)∑|y_i − ŷ_i|       • R² = 1 − (SS_res/SS_tot)

2) Classification:
   • Confusion Matrix: TP, FP, FN, TN
   • Accuracy = (TP+TN)/(TP+FP+FN+TN)
   • Precision = TP/(TP+FP)   • Recall = TP/(TP+FN)
   • F1 = 2·(prec·rec)/(prec+rec)   • ROC-AUC: area under the TPR-vs-FPR curve

3) K-Fold Cross-Validation:
   • Split the data into K folds; train on K−1 folds, test on the remaining one; average the scores for a stable estimate.

4) Regularization:
   • Ridge (L2): adds λ∑w_j² → shrinks weights, reduces variance, rarely exactly zero.
   • Lasso (L1): adds λ∑|w_j| → encourages sparsity (feature selection).
   • Goal: control complexity → prevent overfitting.

5) Bias–Variance Trade-off:
   • Simple model → high bias, low variance.
   • Complex model → low bias, high variance. Find the balance point (via validation).
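To ground the regression formulas in item 1, a quick numeric check (toy numbers, chosen for illustration) against scikit-learn:

# ====== Checking the regression formulas by hand (toy numbers) ======
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

y  = np.array([3.0, 5.0, 7.0, 9.0])
yh = np.array([2.5, 5.5, 6.0, 9.5])

mse = np.mean((y - yh) ** 2)                  # (1/m)∑(y_i − ŷ_i)^2 -> 0.4375
mae = np.mean(np.abs(y - yh))                 # (1/m)∑|y_i − ŷ_i|   -> 0.625
r2  = 1 - np.sum((y - yh) ** 2) / np.sum((y - y.mean()) ** 2)  # 1 − SS_res/SS_tot
print(mse, mean_squared_error(y, yh))         # each pair should match
print(mae, mean_absolute_error(y, yh))
print(r2,  r2_score(y, yh))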

Case Studies

  • Price Prediction: compare OLS vs Ridge vs Lasso on multi-feature data.
  • Balanced/Imbalanced Class Detection: analyze precision/recall/F1 rather than accuracy alone (a sketch follows this list).
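A sketch of the second case study on synthetic data (the make_classification settings here, a roughly 95/5 class split with label noise, are assumptions rather than a prescribed dataset):

# ====== Imbalanced-class sketch (synthetic ~95/5 split) ======
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95], flip_y=0.05,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0,
                                      stratify=y)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
pred = clf.predict(Xte)
# Accuracy looks strong on its own; recall/F1 reveal the minority-class cost
print(f'acc={accuracy_score(yte, pred):.3f}  rec={recall_score(yte, pred):.3f}  '
      f'f1={f1_score(yte, pred):.3f}')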
Lab: Evaluation, K-Fold, Regularization, Bias–Variance
A. Regression & Classification Evaluation
# ====== Regression & Classification Evaluation (sklearn) ======
# If needed: !pip install scikit-learn
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix, roc_auc_score
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

np.random.seed(1)
# Synthetic regression data
Xr = np.linspace(0, 10, 140).reshape(-1,1)
yr = 4.2*Xr[:,0] + 15 + np.random.randn(140)*3
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, test_size=0.25, random_state=42)
reg = LinearRegression().fit(Xr_tr, yr_tr)
yr_pred = reg.predict(Xr_te)
print('Regression: MSE=%.3f MAE=%.3f R2=%.3f' % (
    mean_squared_error(yr_te, yr_pred),
    mean_absolute_error(yr_te, yr_pred),
    r2_score(yr_te, yr_pred)))

# Synthetic classification data
N=300
x1 = np.random.normal(0,1,N)
x2 = np.random.normal(0,1,N)
logit = -0.5 + 1.2*x1 + 0.8*x2
prob = 1/(1+np.exp(-logit))
yc = (prob > 0.5).astype(int)  # deterministic threshold labels: the classes are linearly separable
Xc = np.vstack([x1,x2]).T
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, test_size=0.25, random_state=0)
sc = StandardScaler(); Xc_tr=sc.fit_transform(Xc_tr); Xc_te=sc.transform(Xc_te)
clf = LogisticRegression().fit(Xc_tr, yc_tr)
yc_pred = clf.predict(Xc_te)
yc_prob = clf.predict_proba(Xc_te)[:,1]
print('Classification: acc=%.3f prec=%.3f rec=%.3f f1=%.3f auc=%.3f' % (
    accuracy_score(yc_te, yc_pred),
    precision_score(yc_te, yc_pred),
    recall_score(yc_te, yc_pred),
    f1_score(yc_te, yc_pred),
    roc_auc_score(yc_te, yc_prob)))
print('Confusion Matrix:\n', confusion_matrix(yc_te, yc_pred))
B. K-Fold Cross-Validation (Manual)
# ====== K-Fold Cross-Validation (Manual) ======
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

np.random.seed(0)
X = np.linspace(0, 10, 100).reshape(-1,1)
y = 2.5*X[:,0] + 10 + np.random.randn(100)*2
K = 5
idx = np.arange(len(X))
np.random.shuffle(idx)
folds = np.array_split(idx, K)

scores=[]
for k in range(K):
    te = folds[k]
    tr = np.concatenate([folds[i] for i in range(K) if i!=k])
    model = LinearRegression().fit(X[tr], y[tr])
    pred = model.predict(X[te])
    mse = mean_squared_error(y[te], pred)
    scores.append(mse)
    print(f'Fold {k+1}: MSE={mse:.3f}')
print('Mean MSE:', np.mean(scores))
C. Ridge vs Lasso
# ====== Ridge vs Lasso Comparison ======
# If needed: !pip install scikit-learn
import numpy as np
from sklearn.linear_model import Ridge, Lasso, LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

np.random.seed(42)
# Sparse true weights (only 3 of 20 features matter) plus noise
n=300; p=20
X = np.random.randn(n,p)
true_w = np.array([5, -3, 0, 0, 2] + [0]*(p-5))
y = X.dot(true_w) + np.random.randn(n)*2

Xtr, Xte, ytr, yte = train_test_split(X,y,test_size=0.3,random_state=0)

for name, Model, lam in [
    ('OLS', LinearRegression, None),
    ('Ridge(λ=1.0)', Ridge, 1.0),
    ('Lasso(λ=0.05)', Lasso, 0.05)
]:
    if lam is None:
        m = Model().fit(Xtr,ytr)
    else:
        m = Model(alpha=lam).fit(Xtr,ytr)
    pred = m.predict(Xte)
    mse = mean_squared_error(yte, pred)
    print(f'{name:12s} MSE={mse:.3f}, ||w||_0 ≈ {(np.abs(m.coef_) > 1e-6).sum()} nonzero')
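The λ values above were fixed by hand; in practice you would tune them with cross-validation. A minimal sketch using scikit-learn's CV estimators, reusing Xtr/ytr from the block above (the candidate grids are arbitrary):

# ====== Choosing lambda by cross-validation (sketch) ======
from sklearn.linear_model import RidgeCV, LassoCV

ridge_cv = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0]).fit(Xtr, ytr)
lasso_cv = LassoCV(alphas=[0.001, 0.01, 0.05, 0.1], cv=5).fit(Xtr, ytr)
print('Ridge: best alpha =', ridge_cv.alpha_)
print('Lasso: best alpha =', lasso_cv.alpha_)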
D. Bias–Variance (Polynomial)
# ====== Bias–Variance Trade-off (Polynomial) ======
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

np.random.seed(3)
X = np.linspace(-3,3,120).reshape(-1,1)
y_true = np.sin(X[:,0])
y = y_true + np.random.randn(len(X))*0.2

Xtr, Xte, ytr, yte = train_test_split(X,y,test_size=0.3,random_state=1)

for deg in [1,3,9]:
    model = Pipeline([
        ('poly', PolynomialFeatures(degree=deg, include_bias=False)),
        ('lin', LinearRegression())
    ])
    model.fit(Xtr,ytr)
    ytr_pred = model.predict(Xtr)
    yte_pred = model.predict(Xte)
    print(f'deg={deg}: train MSE={mean_squared_error(ytr,ytr_pred):.3f}, test MSE={mean_squared_error(yte,yte_pred):.3f}')

# Visualization for deg = 1, 3, 9
plt.figure(figsize=(6,4))
plt.scatter(Xtr, ytr, s=12, label='Train')
xx = np.linspace(-3,3,200).reshape(-1,1)
for deg in [1,3,9]:
    model = Pipeline([
        ('poly', PolynomialFeatures(degree=deg, include_bias=False)),
        ('lin', LinearRegression())
    ])
    model.fit(Xtr,ytr)
    plt.plot(xx, model.predict(xx), label=f'deg={deg}')
plt.plot(X[:,0], y_true, label='True f(x)=sin(x)')
plt.legend(); plt.title('Bias–Variance Trade-off'); plt.xlabel('x'); plt.ylabel('y')
plt.show()
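Item 5 of the summary says to find the balance point via validation; a sketch that does exactly that, scoring each degree with 5-fold CV on the same X, y (the degree grid is arbitrary):

# ====== Picking the polynomial degree by cross-validation (sketch) ======
from sklearn.model_selection import cross_val_score

for deg in range(1, 10):
    model = Pipeline([
        ('poly', PolynomialFeatures(degree=deg, include_bias=False)),
        ('lin', LinearRegression())
    ])
    # Negated-MSE scoring: flip the sign back to get a CV MSE per degree
    cv_mse = -cross_val_score(model, X, y, cv=5,
                              scoring='neg_mean_squared_error').mean()
    print(f'deg={deg}: CV MSE={cv_mse:.3f}')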
E. Quiz 10 (Self-Check)
# ====== Quiz 10 (self-check) ======
qs=[
  ("The goal of regularization is...",{"a":"Shrinking the data","b":"Increasing complexity","c":"Controlling complexity to prevent overfitting","d":"Raising the learning rate"},"c"),
  ("The difference between Ridge and Lasso:",{"a":"Ridge is L2, Lasso is L1","b":"Ridge is L1, Lasso is L2","c":"Both are L2","d":"Both are L1"},"a"),
  ("K-fold CV is useful for...",{"a":"Enlarging the test set","b":"A more stable performance estimate","c":"Removing outliers","d":"Guaranteeing perfect generalization"},"b"),
  ("High precision means...",{"a":"Low FP","b":"Low FN","c":"High TPR","d":"High FPR"},"a"),
]
print('Answer key:')
for i,(_,__,ans) in enumerate(qs,1):
    print(f'Q{i}: {ans}')
Assignment & References

Coding Assignment 6: Take a small-scale regression or classification dataset (≥ 500 samples). Do the following: (i) run K-Fold CV (K=5) to estimate the metrics; (ii) compare a baseline model against Ridge/Lasso (regression) or C-regularized logistic regression (classification); (iii) visualize the over/underfitting trend; (iv) write a conclusion of at most one page. A starter sketch for part (ii) follows.
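A starter skeleton for part (ii) on a classification dataset (the dataset loader and C grid are placeholders, not requirements; smaller C means stronger regularization in scikit-learn's LogisticRegression):

# ====== Starter sketch for Coding Assignment 6, part (ii) (placeholder dataset) ======
from sklearn.datasets import load_breast_cancer   # placeholder: 569 samples
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
for C in [0.01, 0.1, 1.0, 10.0]:   # smaller C = stronger penalty
    model = make_pipeline(StandardScaler(), LogisticRegression(C=C, max_iter=5000))
    f1 = cross_val_score(model, X, y, cv=5, scoring='f1').mean()
    print(f'C={C}: mean CV F1={f1:.3f}')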

  • James et al. — An Introduction to Statistical Learning, Ch. 5 (Resampling) & Ch. 6 (Regularization).
  • Géron — Hands-On Machine Learning, chapters on evaluation & regularization.