Different results in a function approximation problem using MLPRegressor and Keras
I have different results in a function approximation problem: I am trying to approximate a sine wave using MLPRegressor and Keras (one dense hidden layer).
Here is the code for the MLPRegressor:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
# Create a dataset
X_train = np.arange(0.0, 1, 0.01).reshape(-1, 1)
noise = np.random.normal(0, 0.1, 100).reshape(-1, 1)
y_train = np.sin(2*np.pi*X_train)
y_train = y_train + noise
y_train = y_train.ravel()  # flatten to a 1D array
#X_train = np.arange(0.0, 1, 0.01).reshape(-1, 1)
#y_train = np.sin(2 * np.pi * X_train).ravel()
# Experiments
#hidden_layer_sizes: 1, 3, 100
#max_iter = 10, 100, 1000
#
nn = MLPRegressor(
    hidden_layer_sizes=(3,), activation='tanh', solver='lbfgs', alpha=0.000, batch_size='auto',
    learning_rate='constant', learning_rate_init=0.01, power_t=0.5, max_iter=80, shuffle=True,
    random_state=0, tol=0.0001, verbose=True, warm_start=False, momentum=0.0, nesterovs_momentum=False,
    early_stopping=False, validation_fraction=0.0, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
# Train the network
n = nn.fit(X_train, y_train)
# Predictions on the training set
predict_train = nn.predict(X_train)
# Plot the training fit
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.scatter(X_train, y_train, s=5, c='b', marker="o", label='real')
ax1.plot(X_train, predict_train, c='r', label='NN Prediction')
# Test set
X_test = np.arange(0.0, 1, 0.01).reshape(-1, 1)
y_test = np.sin(2*np.pi*X_test) + np.random.normal(0, 0.2, 100).reshape(-1, 1)
y_test = y_test.ravel()
# Compute predictions on the test set
predict_test = nn.predict(X_test)
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.scatter(X_test, y_test, s=5, c='b', marker="o", label='real')
ax1.plot(X_test, predict_test, c='r', label='NN Prediction')
plt.legend()
plt.show()
print('MSE training: {:.3f}'.format(mean_squared_error(y_train, predict_train)))
print('MSE testing: {:.3f}'.format(mean_squared_error(y_test, predict_test)))
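One aside on the comparison itself: the Gaussian noise above is drawn without a fixed seed, so every run, and each of the two scripts, trains on slightly different data. A minimal sketch of pinning this down (the seed value 0 is an arbitrary choice):
import numpy as np
np.random.seed(0)  # arbitrary seed: makes the noise draw, and hence the dataset, reproducible
noise = np.random.normal(0, 0.1, 100).reshape(-1, 1)
This only removes run-to-run variation; it does not by itself explain a systematic gap between the two libraries.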
Using MLPRegressor, I found satisfactory results with just 3 neurons. However, when I try to use Keras, I cannot get reasonable results. The code is very similar, with the exception of the optimizer and the activation function. Here is the code for Keras:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import keras
from keras.models import Sequential
from keras.layers import Dense
from sklearn.metrics import mean_squared_error
#
# Create a dataset
X_train = np.arange(0.0, 1, 0.01).reshape(-1, 1)
noise = np.random.normal(0, 0.1, 100).reshape(-1, 1)
y_train = np.sin(2*np.pi*X_train)
y_train = y_train + noise
y_train = y_train.ravel()  # flatten to a 1D array
# Build the network
nn = Sequential()  # a sequence of layers
# activation:
#   sigmoid, tanh, relu, linear
# units: number of neurons in the layer
# the first hidden layer takes input_dim
nn.add(Dense(units=100, activation='relu',
             kernel_initializer='random_uniform', input_dim=1))
nn.add(Dense(units=1, activation='linear'))
# Learning algorithm
#sgd = keras.optimizers.SGD(lr=0.1, decay=0, momentum=0, nesterov=False)
adam = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
# Set the loss function and the metric used
nn.compile(loss='mean_squared_error', optimizer=adam,
           metrics=['mean_squared_error'])
history = nn.fit(X_train, y_train, batch_size=1, epochs=1000)
# Predictions on the training set
predict_train = nn.predict(X_train)
# Plot the training fit
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.scatter(X_train, y_train, s=5, c='b', marker="o", label='real')
ax1.plot(X_train, predict_train, c='r', label='NN Prediction')
# Test set
X_test = np.arange(0.0, 1, 0.01).reshape(-1, 1)
y_test = np.sin(2*np.pi*X_test) + np.random.normal(0, 0.2, 100).reshape(-1, 1)
y_test = y_test.ravel()
# Compute predictions on the test set
predict_test = nn.predict(X_test)
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.scatter(X_test, y_test, s=5, c='b', marker="o", label='real')
ax1.plot(X_test, predict_test, c='r', label='NN Prediction')
plt.legend()
plt.show()
print('MSE training: {:.3f}'.format(mean_squared_error(y_train, predict_train)))
print('MSE testing: {:.3f}'.format(mean_squared_error(y_test, predict_test)))
I have already tried SGD as the optimizer and also tanh as the activation function. I do not understand what I am missing, and as a result I cannot make the function-approximation code work in Keras.
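For reference, one structural difference is worth keeping in mind: scikit-learn's solver='lbfgs' is a full-batch quasi-Newton method, whereas Keras ships only first-order optimizers (SGD, Adam, and friends), so the two runs are not training with the same algorithm even when the architectures match. Below is a sketch of a Keras configuration that at least mirrors the MLPRegressor hyperparameters above (tanh activation, three hidden units, full-batch updates, the same 0.01 learning rate); the epoch count is an illustrative guess, not a verified setting:
# Sketch: a Keras model mirroring the MLPRegressor setup above.
# Assumptions: the epoch count and optimizer choice are illustrative;
# Keras has no L-BFGS solver, so exact parity is not expected.
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense

X_train = np.arange(0.0, 1, 0.01).reshape(-1, 1)
y_train = (np.sin(2*np.pi*X_train) + np.random.normal(0, 0.1, 100).reshape(-1, 1)).ravel()

matched = Sequential()
matched.add(Dense(units=3, activation='tanh', input_dim=1))  # 3 hidden units, as in MLPRegressor
matched.add(Dense(units=1, activation='linear'))             # linear output for regression
matched.compile(loss='mean_squared_error',
                optimizer=keras.optimizers.Adam(lr=0.01))    # matches learning_rate_init=0.01
matched.fit(X_train, y_train,
            batch_size=len(X_train),   # full-batch updates, closer to lbfgs
            epochs=5000, verbose=0)    # first-order methods need far more iterations than max_iter=80
With batch_size=1, as in the original script, each of the 1000 epochs performs 100 noisy single-sample updates, which behaves very differently from the 80 full-batch L-BFGS iterations given to MLPRegressor.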
keras mlp
asked 5 hours ago by Jorge Amaral