0
Machine Learning
What’s the best language to learn if I want to get involved with machine learning?
13 Answers
+ 13
Machine learning algorithms are most commonly implemented in Python, R, MATLAB, and Julia. Of those, R and MATLAB are fairly closed, domain-specific ecosystems, while Julia is relatively new but growing fast.
I would (and did ;) focus on Python 🐍
+ 7
PHP
VBScript
SMX – dedicated to web pages
Tcl – server-side in NaviServer and an essential component in electronics industry systems
WebDNA – dedicated to database-driven websites
AngelScript
Ch
EEL
Io
Julia
Lua
MiniD
Python
Ruby (via mruby)
Squirrel
Tcl
+ 5
PHP is the best, I think.
+ 4
Python is the current favourite.
+ 2
Let’s say we have a feature matrix X. Drag and drop in order to correctly define X_train and X_test for the second fold.
Answer:
kf = KFold(n_splits=5, shuffle=True)
splits = list(kf.split(X))
a, b = splits[1]
X_train = X[a]
X_test = X[b]
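For context, a minimal runnable version of that answer looks like this (a sketch: NumPy, scikit-learn, and a made-up X are assumed, since the quiz only says X is a feature matrix):

import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # hypothetical feature matrix with 10 rows
kf = KFold(n_splits=5, shuffle=True)
splits = list(kf.split(X))  # list of (train_indices, test_indices) pairs, one per fold
a, b = splits[1]  # index 1 is the second fold
X_train = X[a]
X_test = X[b]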
+ 1
Which of the following could be the output of this code assuming X has 3 datapoints?
kf = KFold(n_splits=3, shuffle=True)
splits = list(kf.split(X))
print(splits[0])
Answer:
([0, 2], [1])
([0, 1], [2])
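You can check this yourself with a quick sketch (the toy X below is made up). Note that kf.split yields NumPy arrays, so the actual printout looks like (array([0, 2]), array([1])) rather than plain lists:

import numpy as np
from sklearn.model_selection import KFold

X = np.arange(6).reshape(3, 2)  # 3 datapoints, 2 features
kf = KFold(n_splits=3, shuffle=True)
splits = list(kf.split(X))
print(splits[0])  # e.g. (array([0, 2]), array([1])): two train indices, one test index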
+ 1
Complete the code to do a k-fold cross validation where k=5 and calculate the accuracy. X is the feature matrix and y is the target array.
scores = []
kf = KFold(n_splits=5, shuffle=True)
for train_index, test_index in kf.split(X):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    model = LogisticRegression()
    model.fit(X_train, y_train)
    scores.append(model.score(X_test, y_test))  # accuracy on the held-out fold
print(np.mean(scores))  # mean accuracy over the 5 folds
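For what it's worth, scikit-learn can run this loop for you with cross_val_score; a sketch of the equivalent call, assuming the same X, y, and kf as above:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

scores = cross_val_score(LogisticRegression(), X, y, cv=kf)  # one accuracy per fold
print(np.mean(scores))

Writing the loop by hand, as in the quiz answer, is still handy when you need per-fold control (e.g. inspecting individual predictions).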
0
Complete the code to create a fourth feature matrix that has just the Pclass and Sex features and uses the score_model function to print the scores. Assume we’ve defined y to be the target values and kf to be the KFold object.
X4 = df[['Pclass', 'male']].values
score_model(X4, y, kf)
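score_model isn't defined in this thread; it's a helper from the course (and 'male' is presumably the numerically encoded Sex column). One possible sketch of such a helper, purely as an assumption that it reports mean k-fold accuracy for a logistic regression:

import numpy as np
from sklearn.linear_model import LogisticRegression

def score_model(X, y, kf):
    # hypothetical helper: mean accuracy of a logistic regression over the folds in kf
    scores = []
    for train_index, test_index in kf.split(X):
        X_train, X_test = X[train_index], X[test_index]
        y_train, y_test = y[train_index], y[test_index]
        model = LogisticRegression()
        model.fit(X_train, y_train)
        scores.append(model.score(X_test, y_test))
    print(np.mean(scores))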
0
Select all that apply for the version of Decision Trees we are using.
A feature can only be used once
Every leaf node has a prediction for the target value
Each internal node has exactly two children
Every feature must be used
Every path to a leaf node will be the same length
Answer:
Every leaf node has a prediction for the target value
Each internal node has exactly two children
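Both properties are easy to see with scikit-learn's DecisionTreeClassifier, assuming that is the version of decision trees in question (the toy data below is made up):

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

X = np.array([[22, 1], [38, 0], [26, 0], [35, 0], [54, 1], [2, 1]])  # toy [Age, male] rows
y = np.array([0, 1, 1, 1, 0, 0])

model = DecisionTreeClassifier()
model.fit(X, y)
print(export_text(model, feature_names=['Age', 'male']))
# Each internal node is a binary test with exactly two children,
# and every leaf line ends in a class prediction.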
0
many thankssss