AI
Artificial Intelligence (IF1706)
FSTT • ISTN Jakarta • Odd Semester 2025/2026
Session 11 – KNN & Decision Tree

Two classic but effective models: KNN (based on similarity to stored examples) and Decision Tree (based on structured if-else rules). KNN is simple but sensitive to feature scale and the choice of k; Decision Trees are powerful and interpretable but overfit easily if left unconstrained.

Objectives: understand the intuition, tune the hyperparameters, and compare the performance of KNN vs Decision Tree.
Material Review: Definitions, Intuition, and Examples

K-Nearest Neighbors (KNN)

The core idea: compare similarity. For a new sample, look at its K nearest neighbors and take a vote (classification) or an average (regression). Pay attention to feature scaling and the choice of k so the model is not overly sensitive to noise.

Decision Tree

A decision tree splits the feature space with a sequence of questions that maximize purity (Gini/Entropy). The result is easy to explain: every path from the root to a leaf is an if-then rule.

Summary of Intuition & Concepts:

1) K-Nearest Neighbors (KNN)
   • Idea: for a new example x*, find its K nearest neighbors in the training data using a distance metric (e.g. Euclidean).
   • Classification: majority vote over the neighbors' labels. Regression: average of the neighbors' values.
   • Hyperparameter k: small → jagged decision boundary (high variance, prone to overfitting); large → smooth boundary (high bias).
   • Features must be on comparable scales (normalization/standardization); see the scaling sketch after this list.

2) Decision Tree (DT)
   • Idea: partition the feature space with yes/no questions that maximize class purity at each node.
   • Impurity: Gini = 1 − ∑ p_c^2; Entropy = − ∑ p_c log2 p_c (both computed in a sketch after this list).
   • Depth & number of leaves drive complexity → control them with max_depth, min_samples_split, and pruning.

3) Evaluation & Overfitting
   • Use train/val/test splits or k-fold cross-validation.
   • For KNN: pick the best k via grid search (section B below).
   • For DT: limit the depth or use cost-complexity pruning (see the sketch after section D).
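To make point 1 concrete, a minimal sketch (with made-up numbers) of how an unscaled feature dominates the Euclidean distance, and how standardization evens out the contributions:

# ====== (Sketch) Why feature scaling matters for KNN ======
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on very different scales: x1 in [0, 1], x2 in the thousands
A = np.array([[0.1, 1000.0],
              [0.9, 1005.0],
              [0.2, 4000.0]])

# Raw distances: x2 dominates, x1 barely matters
print(np.linalg.norm(A[0] - A[1]))   # ~5.06, driven almost entirely by x2
print(np.linalg.norm(A[0] - A[2]))   # ~3000, the x2 gap

# After standardization both features contribute comparably
A_ = StandardScaler().fit_transform(A)
print(np.linalg.norm(A_[0] - A_[1]))
print(np.linalg.norm(A_[0] - A_[2]))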
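And a quick sketch of the impurity formulas from point 2, evaluated on hypothetical class proportions:

# ====== (Sketch) Gini & Entropy for a class distribution ======
import numpy as np

def gini(p):
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p**2)

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # skip zero proportions to avoid log2(0)
    return -np.sum(p * np.log2(p))

# A pure node vs. a 50/50 node (proportions p_c)
print(gini([1.0, 0.0]), entropy([1.0, 0.0]))   # 0.0, 0.0 -> pure
print(gini([0.5, 0.5]), entropy([0.5, 0.5]))   # 0.5, 1.0 -> maximally impure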

Case Studies

  • 2D classification: KNN decision boundary on two-cluster data.
  • Real dataset (breast_cancer): Decision Tree parameter tuning and a metrics report.
Lab: KNN & Decision Tree
A. KNN from Scratch (Boundary Visualization)
# ====== KNN from Scratch (2D Classification) ======
import numpy as np
from collections import Counter
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
import matplotlib.pyplot as plt

np.random.seed(0)
# Synthetic data: two classes, 2 features
N = 200
X1 = np.random.multivariate_normal([0,0], [[1,0.2],[0.2,1]], N)
X2 = np.random.multivariate_normal([3,3], [[1,-0.2],[-0.2,1]], N)
X = np.vstack([X1,X2])
y = np.array([0]*N + [1]*N)

Xtr, Xte, ytr, yte = train_test_split(X,y,test_size=0.25,random_state=42)
sc = StandardScaler()
Xtr_ = sc.fit_transform(Xtr)
Xte_ = sc.transform(Xte)

# Euclidean distance (reference implementation; knn_predict below uses the
# equivalent vectorized np.linalg.norm instead)

def dist(a, b):
    return np.sqrt(np.sum((a - b)**2))

# KNN prediction

def knn_predict(Xtrain, ytrain, x, k=5):
    d = np.linalg.norm(Xtrain - x, axis=1)            # distances to all training points
    idx = np.argsort(d)[:k]                           # indices of the k nearest
    vote = Counter(ytrain[idx]).most_common(1)[0][0]  # majority label
    return vote

# evaluation on the test set
pred = np.array([knn_predict(Xtr_, ytr, x, k=5) for x in Xte_])
print('Accuracy (k=5):', accuracy_score(yte, pred))
print('Confusion Matrix:\n', confusion_matrix(yte, pred))
print('Report:\n', classification_report(yte, pred))

# boundary visualization: predict on a grid (transformed with the same scaler)
xx, yy = np.meshgrid(np.linspace(X[:,0].min()-1, X[:,0].max()+1, 120),
                     np.linspace(X[:,1].min()-1, X[:,1].max()+1, 120))
XY = np.c_[xx.ravel(), yy.ravel()]
XY_ = sc.transform(XY)
Z = np.array([knn_predict(Xtr_, ytr, x, k=5) for x in XY_]).reshape(xx.shape)
plt.figure(figsize=(5,4))
plt.contourf(xx, yy, Z, alpha=0.25)
plt.scatter(Xte[:,0], Xte[:,1], c=yte, s=16, edgecolor='k')
plt.title('KNN (k=5) Decision Boundary')
plt.show()
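As a quick extension (reusing Xtr_, ytr, Xte_, yte, and knn_predict from the block above), a sketch sweeping k to observe the small-k/large-k trade-off from the summary; exact accuracies depend on the random split:

# ====== (Sketch) Effect of k on test accuracy (reuses section A variables) ======
for k in [1, 3, 5, 15, 51]:
    pred_k = np.array([knn_predict(Xtr_, ytr, x, k=k) for x in Xte_])
    print(f'k={k}: acc={accuracy_score(yte, pred_k):.3f}')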
B. KNN with scikit-learn + Grid Search
# ====== KNN with scikit-learn + Grid Search over k ======
# If needed: !pip install scikit-learn
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score

X,y = make_classification(n_samples=600, n_features=6, n_informative=4, n_redundant=0,
                          n_clusters_per_class=1, random_state=7)
Xtr, Xte, ytr, yte = train_test_split(X,y,test_size=0.25,random_state=0)

pipe = Pipeline([
    ('sc', StandardScaler()),
    ('knn', KNeighborsClassifier())
])
param = {
    'knn__n_neighbors':[1,3,5,7,9,11],
    'knn__weights':['uniform','distance'],
    'knn__metric':['euclidean','manhattan']
}
cv = GridSearchCV(pipe, param_grid=param, cv=5, n_jobs=-1)
cv.fit(Xtr, ytr)
print('Best params:', cv.best_params_)
print('Val score:', cv.best_score_)
print('Test acc:', accuracy_score(yte, cv.best_estimator_.predict(Xte)))
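Optionally, inspect the full grid: cv_results_ holds the mean CV score for every parameter combination. A short sketch, assuming pandas is available:

# ====== (Sketch) Inspect the grid (reuses cv from above) ======
import pandas as pd
res = pd.DataFrame(cv.cv_results_)
cols = ['param_knn__n_neighbors', 'param_knn__weights', 'param_knn__metric', 'mean_test_score']
print(res[cols].sort_values('mean_test_score', ascending=False).head())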
C. Decision Tree (Classification)
# ====== Decision Tree (Classification) ======
# If needed: !pip install scikit-learn
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.metrics import accuracy_score, classification_report
import matplotlib.pyplot as plt

X,y = make_moons(n_samples=500, noise=0.25, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X,y,test_size=0.25,random_state=42)

# Try several depths
for d in [2,4,8,None]:
    clf = DecisionTreeClassifier(max_depth=d, random_state=0)
    clf.fit(Xtr, ytr)
    acc = accuracy_score(yte, clf.predict(Xte))
    print(f'max_depth={d}: acc={acc:.3f}')

# Visualize the tree (for one of the models)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr,ytr)
plt.figure(figsize=(8,5))
plot_tree(clf, filled=True, feature_names=['x1','x2'], class_names=['0','1'])
plt.title('Decision Tree depth=4')
plt.show()
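For comparison with the KNN boundary in section A, a sketch (reusing X, y, and the depth-4 clf above) that plots the tree's decision regions; the axis-aligned rectangles are characteristic of a single tree:

# ====== (Sketch) Decision boundary of the depth-4 tree (reuses X, y, clf) ======
import numpy as np
xx, yy = np.meshgrid(np.linspace(X[:,0].min()-0.5, X[:,0].max()+0.5, 200),
                     np.linspace(X[:,1].min()-0.5, X[:,1].max()+0.5, 200))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.figure(figsize=(5,4))
plt.contourf(xx, yy, Z, alpha=0.25)
plt.scatter(X[:,0], X[:,1], c=y, s=12, edgecolor='k')
plt.title('Decision Tree (depth=4) Boundary')
plt.show()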
D. Decision Tree Grid Search
# ====== Decision Tree Grid Search + Evaluation ======
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

X,y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X,y,test_size=0.25,random_state=1,stratify=y)

param = {
  'max_depth':[2,4,6,8,10,None],
  'min_samples_split':[2,5,10,20],
  'min_samples_leaf':[1,2,4,8],
  'criterion':['gini','entropy']
}
clf = GridSearchCV(DecisionTreeClassifier(random_state=0), param, cv=5, n_jobs=-1)
clf.fit(Xtr, ytr)
print('Best params:', clf.best_params_)
print(classification_report(yte, clf.best_estimator_.predict(Xte)))
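The summary mentions cost-complexity pruning; scikit-learn exposes it through ccp_alpha and cost_complexity_pruning_path. A minimal sketch on the same split as above:

# ====== (Sketch) Cost-complexity pruning via ccp_alpha (reuses the split above) ======
from sklearn.metrics import accuracy_score

path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(Xtr, ytr)
# Sample a few alphas along the path; larger alpha -> smaller, more pruned tree
for a in path.ccp_alphas[::max(1, len(path.ccp_alphas)//5)]:
    t = DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(Xtr, ytr)
    print(f'alpha={a:.4f}: leaves={t.get_n_leaves()}, '
          f'test acc={accuracy_score(yte, t.predict(Xte)):.3f}')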
E. (Optional) Decision Tree Regressor
# ====== (Optional) Decision Tree Regressor ======
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt

np.random.seed(2)
X = np.linspace(0, 6, 200).reshape(-1,1)
y = np.sin(X[:,0]) + np.random.randn(200)*0.15
Xtr, Xte, ytr, yte = train_test_split(X,y,test_size=0.25,random_state=0)

for d in [1,3,5,8]:
    reg = DecisionTreeRegressor(max_depth=d, random_state=0).fit(Xtr,ytr)
    mse = mean_squared_error(yte, reg.predict(Xte))
    print(f'depth={d}: test MSE={mse:.3f}')

# Plot the fit for one depth
reg = DecisionTreeRegressor(max_depth=5, random_state=0).fit(Xtr,ytr)
xx = np.linspace(0,6,300).reshape(-1,1)
plt.figure(figsize=(6,4))
plt.scatter(Xte, yte, s=12, label='Test')
plt.plot(xx, reg.predict(xx), label='DT fit')
plt.legend()
plt.title('Decision Tree Regressor (depth=5)')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
F. Quiz 11 (Self-Check)
# ====== Quiz 11 (self-check) ======
qs = [
  ("Why does KNN need feature standardization?", {"a": "So distances are not dominated by large-scale features", "b": "To speed up backprop", "c": "To avoid class imbalance", "d": "So k can be larger"}, "a"),
  ("The effect of a small k in KNN:", {"a": "High bias", "b": "High variance, prone to overfitting", "c": "No effect", "d": "Always optimal"}, "b"),
  ("Impurity measures commonly used in Decision Trees:", {"a": "MSE and MAE", "b": "Gini/Entropy", "c": "MAPE", "d": "KL Divergence"}, "b"),
  ("How to prevent overfitting in a Decision Tree:", {"a": "Increase the depth", "b": "Decrease min_samples_leaf", "c": "Use max_depth/pruning", "d": "Drop validation"}, "c"),
]
print('Answer key:')
for i, (_, __, ans) in enumerate(qs, 1):
    print(f'Q{i}: {ans}')
Assignments & References

Coding Assignment 7: Compare KNN and Decision Tree on a single classification dataset (≥ 500 samples). Apply scaling for KNN, grid-search the hyperparameters (k for KNN; max_depth/min_samples_* for DT), evaluate with k-fold cross-validation (K=5), and discuss the strengths and weaknesses of each model. Include a confusion matrix & summary metrics.

  • Géron — Hands-On Machine Learning (chapters on k-NN & Trees).
  • James et al. — An Introduction to Statistical Learning (coverage of decision trees & k-NN).