
CE717-2: Machine Learning
Problem Set 4
Instructor: Soleymani
Fall 2012
Due date: 19th Day
Instructions
- If you have any question about the problems, mail to mlta40717@gmail.com with the subject in the format QHW(homework number)-P(question number) (e.g., QHW(4)-P(2)).
- Send your assignments with subject HW(homework number)-(Student-ID) to mlta40717@gmail.com.
Problem 1: EM
1.1. (12 Pts) EM & Bayesian Networks
Consider the Bayesian network shown in Figure 1 that contains five Boolean variables (below, we use the first letter of the variable names). Suppose that the variables $W$, $X$, $Y$, and $Z$ are observed and $H$ is unobserved, where $X$ and $Y$ are the parents of $H$ in the network. Given a set of training examples $\left(w^{(i)}, x^{(i)}, y^{(i)}, z^{(i)}\right)$, $i = 1, \dots, N$, show that in the M-step of the EM algorithm, denoting $P(H = 1 \mid X = x, Y = y)$ as $\theta_{1 \mid x, y}$, this parameter is updated as
$$\theta_{1 \mid x, y} \;\leftarrow\; \frac{\sum_{i \,:\, x^{(i)} = x,\; y^{(i)} = y} P\!\left(H^{(i)} = 1 \,\middle|\, w^{(i)}, x^{(i)}, y^{(i)}, z^{(i)}\right)}{\sum_{i \,:\, x^{(i)} = x,\; y^{(i)} = y} 1}\,.$$
Indeed, this update rule corresponds to the M-step
$$\theta \;\leftarrow\; \arg\max_{\theta'} \sum_{i=1}^{N} \sum_{h} P\!\left(h \,\middle|\, w^{(i)}, x^{(i)}, y^{(i)}, z^{(i)}, \theta\right) \log P\!\left(w^{(i)}, x^{(i)}, y^{(i)}, z^{(i)}, h \,\middle|\, \theta'\right).$$
Figure 1
1.2. Practical EM & GMM
Please download "EM.zip" and use this implementation of the EM algorithm for estimating the parameters of a Gaussian mixture. Download the "clust_data.m" data set (containing the first two features of the iris data set from the UCI repository). In the parts below, you will cluster this data set using the EM algorithm.
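For orientation, these are the standard EM updates for a $K$-component Gaussian mixture that such an implementation iterates (generic notation, not identifiers taken from EM.zip): the E-step computes responsibilities, and the M-step re-estimates the parameters from them:
$$\gamma_{ik} \;=\; \frac{\pi_k\, \mathcal{N}\!\left(x^{(i)} \mid \mu_k, \Sigma_k\right)}{\sum_{j=1}^{K} \pi_j\, \mathcal{N}\!\left(x^{(i)} \mid \mu_j, \Sigma_j\right)}, \qquad N_k = \sum_{i=1}^{N} \gamma_{ik},$$
$$\mu_k = \frac{1}{N_k}\sum_{i=1}^{N} \gamma_{ik}\, x^{(i)}, \qquad \Sigma_k = \frac{1}{N_k}\sum_{i=1}^{N} \gamma_{ik}\left(x^{(i)} - \mu_k\right)\left(x^{(i)} - \mu_k\right)^{\!\top}, \qquad \pi_k = \frac{N_k}{N}.$$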
a. [5 Pts] The implementation of GMMEM uses the k-means clustering algorithm for parameter initialization. Find the clustering results (for two clusters) on the above data set. Plot the cluster centroids, and around each centroid draw an ellipsoid showing the respective covariance (a plotting sketch is given after this list).
b. [4 Pts] Also plot the clustering result of the k-means algorithm (used in the initialization phase of EM) and compare it with the result obtained in Part (a).
c. [4 Pts] Change the GMMEMInitialize code to use the following parameter initialization (instead of using k-means for parameter initialization):
$$\mu_1 = \begin{bmatrix} 6 \\ 3 \end{bmatrix}, \qquad \mu_2 = \begin{bmatrix} 8 \\ 4 \end{bmatrix}, \qquad \Sigma_1 = \Sigma_2 = I, \qquad \pi_1 = \pi_2 = 0.5.$$
Again plot the results as explained above.
d. [5 Pts] In general, for the EM algorithm (with the GMM assumption), which properties of data sets can cause it to get stuck in local minima more severely?
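A minimal sketch of the covariance plot requested in part (a), assuming 2-D data; plot_gaussian_ellipse is a hypothetical helper name, not a function from EM.zip, and it draws the 1-sigma contour (scale sqrt(D) by a chi-square quantile for other confidence levels):

```matlab
% Hypothetical helper: draw a centroid and its 1-sigma covariance ellipse.
% mu is a 2x1 mean vector, Sigma a 2x2 covariance matrix.
function plot_gaussian_ellipse(mu, Sigma)
    t = linspace(0, 2*pi, 200);        % angles parameterizing the unit circle
    circle = [cos(t); sin(t)];         % 2x200 points on the unit circle
    [V, D] = eig(Sigma);               % principal axes (V) and variances (D)
    ellipse = V * sqrt(D) * circle;    % map the unit circle onto the ellipse
    hold on;
    plot(mu(1) + ellipse(1, :), mu(2) + ellipse(2, :), 'r-', 'LineWidth', 1.5);
    plot(mu(1), mu(2), 'kx', 'MarkerSize', 12, 'LineWidth', 2);  % centroid
end
```

Called once per mixture component after EM converges, e.g. plot_gaussian_ellipse(mu1, Sigma1) and plot_gaussian_ellipse(mu2, Sigma2) on top of a scatter plot of the data.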
Problem 2: k-NN
2.1. [10 Pts] Implement a k-NN classifier as a Matlab function foundY = kNN(k, trainX, trainY, X). That is, kNN must find the labels of the data points in X according to the training data (a minimal sketch is given at the end of this problem).
2.2. In this part, the k-NN classifier is used for text classification. Download the document classification data set "doc_data.zip" introduced in Homework 2. The features for classifying a document are the number of occurrences of each word (of the vocabulary) in the given document.
a. [7 Pts] Find the misclassification error of k-NN (k = 1, k = 3, k = 5, k = 15) on the training and test sets. Discuss the obtained results.
b. [3 Pts] Compare the results of k-NN with those of the Naïve Bayes classifier that you found in Homework 2 and discuss them.
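A minimal sketch of the function asked for in Part 2.1, assuming dense numeric matrices (trainX is n-by-d, X is m-by-d, trainY is n-by-1) and Euclidean distance; ties in the vote are resolved by MATLAB's mode:

```matlab
% k-NN classifier: label each row of X by majority vote among the k
% training points nearest in Euclidean distance.
function foundY = kNN(k, trainX, trainY, X)
    m = size(X, 1);
    foundY = zeros(m, 1);
    for i = 1:m
        % squared Euclidean distance from query i to every training point
        diffs = bsxfun(@minus, trainX, X(i, :));
        d2 = sum(diffs .^ 2, 2);
        [~, idx] = sort(d2);                  % nearest training points first
        foundY(i) = mode(trainY(idx(1:k)));   % majority vote over k neighbors
    end
end
```

For the text-classification data in Part 2.2, the rows of trainX and X would be the word-count feature vectors of the documents.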
Problem 3: Boosting
The AdaBoost algorithm (that we explained in the class) tries to minimize the exponential loss
$$E \;=\; \sum_{i} \exp\!\left(-y^{(i)} f\!\left(x^{(i)}\right)\right)$$
sequentially, where $f(x) = \alpha_1 h_1(x) + \dots + \alpha_M h_M(x)$. Indeed, at the $m$-th iteration ($1 \le m \le M$), it finds the weak classifier $h_m(x)$ and the corresponding coefficient $\alpha_m$ so that the loss function (up to the $m$-th iteration) is minimized.
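To make "sequentially" concrete, at iteration $m$ the contributions of the earlier iterations can be folded into per-example weights (a standard rewriting of the stagewise objective):
$$(\alpha_m, h_m) \;=\; \arg\min_{\alpha,\, h}\; \sum_{i} \exp\!\Big(-y^{(i)}\big[f_{m-1}(x^{(i)}) + \alpha\, h(x^{(i)})\big]\Big) \;=\; \arg\min_{\alpha,\, h}\; \sum_{i} w_i^{(m)}\, e^{-y^{(i)} \alpha\, h(x^{(i)})},$$
where $f_{m-1}(x) = \sum_{j=1}^{m-1} \alpha_j h_j(x)$ and $w_i^{(m)} = \exp\!\big(-y^{(i)} f_{m-1}(x^{(i)})\big)$.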
3.1. [15 Pts] Suppose that we consider the sum-of-squares error
$$E \;=\; \sum_{i} \left(y^{(i)} - f\!\left(x^{(i)}\right)\right)^{2}$$
as the objective function (instead of the exponential loss) and want to optimize it sequentially. What are the update rules for the coefficient $\alpha_m$ and the classifier $h_m(x)$?
3.2. Consider the XOR dataset shown in the figure below ('X' corresponds to $y = -1$ and 'O' corresponds to $y = 1$).
a. [5 pts] Assume the base classifiers $h_m(x)$ are linear classifiers. Will AdaBoost that minimizes the exponential loss achieve zero training error after sufficiently many iterations? What is the minimum number of iterations before it can reach zero training error?
b. [5 pts] Will AdaBoost with Decision Stumps as the weak classifiers ever achieve better than 50% classification accuracy on this dataset? (Justify your answer.) What about the boosting algorithm explained in Problem 3.1 (with Decision Stumps as the base classifier)?
[Figure: the XOR dataset, with points on the four corners $(\pm 1, \pm 1)$.]
3.3. [5 Pts] How can we prevent Boosting from overfitting? How can we find the proper number of
boosting iterations?
Problem 4: MDPs & Reinforcement Learning
4.1. Consider the grid world shown in the figure below. In each of the nine states of this world, four actions 'Up', 'Down', 'Left', and 'Right' are available. Each action achieves the intended effect with probability 0.7; the rest of the time, the action moves the agent in one of the three unintended directions, chosen at random (with probability 0.1 for each of these directions). If the agent bumps into a wall, it stays in the same square. When the agent reaches state 8 (a terminal state), it receives a reward of +10. The discount factor is $\gamma = 0.9$.
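For reference, the update that part (a) below asks you to iterate is the standard Bellman backup (written here with the reward attached to the transition, matching the +10 collected on reaching state 8):
$$V_{k+1}(s) \;=\; \max_{a} \sum_{s'} P(s' \mid s, a)\,\Big[R(s, a, s') + \gamma\, V_k(s')\Big], \qquad V_0(s) = 0.$$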
a. [8 Pts] Write down the numerical state values (of all states) after the first and second iterations of the Value Iteration algorithm. Initial values of the states are set to zero.

9 4 1
8 5 2
7 6 3

b. [2 Pts] After these two iterations, what is the currently selected policy? Is this sensible?
4.2. [10 Pts] Suppose that we don't know the transition probabilities and rewards, and we want to use the Q-learning algorithm. During this algorithm, we observe the following two episodes (above each transition, the action and the reward value have been shown):

[Figure: two episode diagrams, each a chain of grid states with the action taken and the reward received written above every transition.]

What would be the resulting Q values (if the initial values of Q are zero) after the above episodes?
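For reference, the tabular Q-learning update applied at each observed transition $(s, a, r, s')$ is the standard one (the learning rate $\alpha$ is not restated in the problem, so it is left symbolic here):
$$Q(s, a) \;\leftarrow\; Q(s, a) + \alpha\Big[r + \gamma \max_{a'} Q(s', a') - Q(s, a)\Big].$$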