scikit-learn - sklearn not following n_iter param: giving more iterations than asked
I have had the following doubt for a while now. If it resonates with you, I hope this helps.
I have the following simple code:
from sklearn.linear_model import Perceptron
with_model_analysis = Perceptron(n_iter=2, warm_start=True, verbose=1)
When the following code is run:
with_model_analysis.fit(x_train, y_train)
I get the following verbose output:
-- Epoch 1
Norm: 2117.10, NNZs: 151491, Bias: -0.200000, T: 2438128, Avg. loss: 0.136197
Total training time: 1.57 seconds.
-- Epoch 2
Norm: 2152.62, NNZs: 152310, Bias: -0.210000, T: 4876256, Avg. loss: 0.138114
Total training time: 3.14 seconds.
-- Epoch 1
Norm: 2864.00, NNZs: 144626, Bias: -0.250000, T: 2438128, Avg. loss: 0.140278
Total training time: 1.57 seconds.
-- Epoch 2
Norm: 2908.83, NNZs: 145051, Bias: -0.240000, T: 4876256, Avg. loss: 0.141844
Total training time: 3.13 seconds.
-- Epoch 1
Norm: 996.64, NNZs: 55420, Bias: -0.160000, T: 2438128, Avg. loss: 0.012540
Total training time: 1.59 seconds.
-- Epoch 2
Norm: 1013.77, NNZs: 56011, Bias: -0.150000, T: 4876256, Avg. loss: 0.012728
Total training time: 3.18 seconds.
-- Epoch 1
Norm: 2850.54, NNZs: 176581, Bias: -0.270000, T: 2438128, Avg. loss: 0.209191
Total training time: 1.58 seconds.
-- Epoch 2
Norm: 2895.90, NNZs: 177293, Bias: -0.260000, T: 4876256, Avg. loss: 0.211221
Total training time: 3.18 seconds.
-- Epoch 1
Norm: 1489.41, NNZs: 80787, Bias: -0.270000, T: 2438128, Avg. loss: 0.029082
Total training time: 1.54 seconds.
-- Epoch 2
Norm: 1516.51, NNZs: 81432, Bias: -0.290000, T: 4876256, Avg. loss: 0.029050
Total training time: 3.06 seconds.
-- Epoch 1
Norm: 2718.56, NNZs: 191107, Bias: 0.190000, T: 2438128, Avg. loss: 0.178792
Total training time: 1.48 seconds.
-- Epoch 2
Norm: 2762.41, NNZs: 191638, Bias: 0.220000, T: 4876256, Avg. loss: 0.181443
Total training time: 2.99 seconds.
[Parallel(n_jobs=1)]: Done 6 out of 6 | elapsed: 28.5s finished
What does the last line, Done 6 out of 6, mean? When the number of requested iterations is 2, why is it doing 6 * 2 iterations?
The 6 represents the number of output classes. In multi-class classification, Perceptron trains one-versus-all decision boundaries, i.e. it fits one binary classifier per class, separately. Each of those 6 per-class classifiers runs for the requested 2 epochs, which is why the log shows 6 * 2 epoch blocks before reporting Done 6 out of 6.
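A minimal sketch of this, on a hypothetical 6-class toy dataset (note this uses the newer max_iter name for the epoch count; older scikit-learn releases spell it n_iter): after fitting, the model holds one weight vector and one bias per class, confirming that one one-vs-all binary classifier is trained for each of the 6 classes, and with verbose=1 each of them prints its own set of epoch lines.
import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.RandomState(0)
X = rng.randn(300, 20)            # toy feature matrix (made-up data)
y = rng.randint(0, 6, size=300)   # 6 classes, mirroring the 6 fits in the log above

clf = Perceptron(max_iter=2, verbose=1)   # 2 epochs per per-class binary classifier
clf.fit(X, y)

print(clf.coef_.shape)       # (6, 20): one weight vector per class
print(clf.intercept_.shape)  # (6,): one bias per class
So the number of epochs per classifier matches what you asked for; the apparent extra work is just the same 2 epochs repeated once per class.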