Is “adding the predictions to the real data for new training and prediction” a good idea for LSTM?



Suppose we have trained our model on a lot of data for "many-to-one" prediction, and we now want to forecast the next 10 days. We use the last 60 existing data points to predict the single next day. From here there are two approaches:



  1. We can call model.predict() in a for loop 10 times, appending each prediction to the end of our real data.


  2. We can put the whole pipeline (including the training step, not just prediction) in a for loop, meaning we retrain the model 10 times, once after each new prediction is appended to the real data.


EDIT:
Suppose you have an X_train array of shape (100, 60, 1), i.e. 100 examples, 60 time steps, and 1 feature per example, and a y_train array of shape (100, 1, 1), i.e. 100 labels with 1 time step and 1 feature. You train your network to read 60 inputs and predict the next single output. You then build a test window X_test = X_train[len(X_train) - 60:], i.e. you use the last 60 numbers of your series to predict the next one. Calling new_number = model.predict(X_test) gives you time step 61, which is not a real observation but your prediction. To continue predicting, you append the 61st predicted number to the end of the window with X_test = np.append(X_test, new_number) and call model.predict(X_test) again; the difference is that the last number in the new X_test is your previous prediction. You keep going this way 10 times to predict the next 10 numbers. (This is the first approach.)
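
For concreteness, here is a minimal sketch of this rolling loop, assuming a trained Keras model and a flat one-feature series (the names series and predictions are illustrative, not from the original code):

import numpy as np

# Rolling forecast (approach 1): feed each prediction back into the input window.
# `model` is the trained many-to-one LSTM; `series` is the raw 1-D series.
window = series[-60:].reshape(1, 60, 1)        # last 60 real observations
predictions = []
for _ in range(10):
    new_number = model.predict(window)         # shape (1, 1): the next step
    predictions.append(float(new_number[0, 0]))
    # slide the window: drop the oldest value, append the new prediction
    window = np.append(window[:, 1:, :], new_number.reshape(1, 1, 1), axis=1)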



The second approach differs in one step: after calling new_number = model.predict(X_test) the first time, you append the predicted number to X_train instead of X_test, like X_train = np.append(X_train, new_number), and retrain the model with model.fit(X_train, y_train), now including the predicted number. You then call model.predict(X_test) again, append that prediction to X_train as well, retrain (this time with two predicted numbers at the end of X_train), and so on for 10 iterations!
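
A sketch of this retrain-as-you-go loop might look like the following; the window rebuilding and the brief one-epoch refit are illustrative assumptions, not code from the question:

import numpy as np

# Approach 2: append each prediction to the training data and refit the model
# before predicting the next step. `model` and `series` are as in the sketch above.
series_ext = list(series)                      # real data, grows by one per step
for _ in range(10):
    window = np.array(series_ext[-60:]).reshape(1, 60, 1)
    new_number = float(model.predict(window)[0, 0])
    series_ext.append(new_number)
    # rebuild (60-step window, next value) training pairs over the extended series
    X = np.array([series_ext[i:i + 60] for i in range(len(series_ext) - 60)])
    y = np.array(series_ext[60:])
    model.fit(X.reshape(-1, 60, 1), y, epochs=1, batch_size=32, verbose=0)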










lstm training prediction

asked Feb 26 at 15:41, edited Feb 26 at 18:06 – user145959

  • Using the TimeDistributed() wrapper in Keras, you will be able to predict the upcoming 10 at once. Have a look at machinelearningmastery.com/… and quora.com/What-is-time-distributed-dense-layer-in-Keras
    – Ugur MULUK, Feb 26 at 17:24
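
(For context: one common way to use TimeDistributed() for multi-step output is an encoder-decoder layout. The following is a minimal sketch under assumed layer sizes, not code taken from the linked pages:)

from keras.models import Sequential
from keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

# Encoder-decoder sketch: encode a 60-step window, then emit 10 future steps,
# applying the same Dense(1) head to every decoder step via TimeDistributed.
model = Sequential()
model.add(LSTM(32, input_shape=(60, 1)))       # encoder over the input window
model.add(RepeatVector(10))                    # one decoder step per future day
model.add(LSTM(32, return_sequences=True))     # decoder
model.add(TimeDistributed(Dense(1)))           # targets shaped (n_examples, 10, 1)
model.compile(loss='mean_squared_error', optimizer='adam')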










  • @UgurMULUK: I have edited my question and tried to explain more in the EDIT section. Please read it; maybe it will help clarify what I am asking.
    – user145959, Feb 26 at 18:03










  • Is many-to-one prediction mandatory for you? The reason I ask is that an LSTM passes the output of each recurrent step to the next one. If you can train your network as many-to-10, using the next 10 values as labels for the previous 60, your LSTM will predict the next 10 steps automatically, with each day's prediction feeding into the next. Have a look at the first image here: karpathy.github.io/2015/05/21/rnn-effectiveness. The fourth RNN structure in that image is the one I think would work for your case.
    – Ugur MULUK, Feb 27 at 19:41
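
(A minimal sketch of the many-to-10 setup described here, assuming Keras and illustrative layer sizes: the network maps a 60-step window directly to the next 10 values, so no feedback loop is needed at prediction time:)

from keras.models import Sequential
from keras.layers import Dense, LSTM

# Many-to-10: read 60 past steps, emit the next 10 values in one forward pass.
model = Sequential()
model.add(LSTM(32, input_shape=(60, 1)))
model.add(Dense(10))                           # one output unit per future step
model.compile(loss='mean_squared_error', optimizer='adam')
# expected shapes: X_train (n_examples, 60, 1), y_train (n_examples, 10)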










  • @UgurMULUK: You mean the LSTM automatically does what I explained in approach 2?
    – user145959, Feb 27 at 19:57










  • Yes, that is what I mean: LSTMs (and RNNs in general) use the previous prediction in the next recurrent step by nature; if you structure your network properly (which can be done in Keras), you can get what you expect here. Have a look at this video, especially the end, where different RNN structures are shown: youtube.com/…
    – Ugur MULUK, Feb 27 at 23:41
















1 Answer






Are you saying you would like to predict 10 days ahead in this instance?



If this is the case, the LSTM model is able to do this and iterating in the way you are suggesting is unnecessary and could give you unreliable results.



For instance, consider a dataset where we are attempting to predict one step ahead:



import math
import numpy as np
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import Dense, LSTM

# Assumptions (not shown in the original snippet): `dataset` is a scaled
# (n_samples, 1) array and `scaler` is the fitted scaler used to produce it.

def create_dataset(dataset, previous=1):
    # Sliding-window helper (assumed): X = values at [t, ..., t+previous-1],
    # Y = the value at t+previous
    dataX, dataY = [], []
    for i in range(len(dataset) - previous - 1):
        dataX.append(dataset[i:(i + previous), 0])
        dataY.append(dataset[i + previous, 0])
    return np.array(dataX), np.array(dataY)

# Training and Test data partition
train_size = int(len(dataset) * 0.8)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]

# reshape into X=t and Y=t+1
previous = 1
X_train, Y_train = create_dataset(train, previous)
X_test, Y_test = create_dataset(test, previous)

# reshape input to be [samples, time steps, features]
X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))

# Generate LSTM network
model = Sequential()
model.add(LSTM(4, input_shape=(1, previous)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train, Y_train, epochs=100, batch_size=1, verbose=2)

# Generate predictions
trainpred = model.predict(X_train)
testpred = model.predict(X_test)

# Convert predictions back to normal values
trainpred = scaler.inverse_transform(trainpred)
Y_train = scaler.inverse_transform([Y_train])
testpred = scaler.inverse_transform(testpred)
Y_test = scaler.inverse_transform([Y_test])

# calculate RMSE
trainScore = math.sqrt(mean_squared_error(Y_train[0], trainpred[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(Y_test[0], testpred[:,0]))
print('Test Score: %.2f RMSE' % (testScore))


In this batch of code, you can see that we have set the previous parameter equal to 1, meaning that the time step being considered by the model is t-1.



In this particular instance, here are the training and test predictions compared to the actual series:



[Figure: training and test predictions vs. the actual series]



Now the same model is run, but this time with the previous parameter set to 10. In other words, the previous 10 days are treated as one time step, and the model forecasts for time t+10. Here is another sample prediction comparing the test set with the actual series (a fuller example of this is provided here):



[Figure: test predictions vs. the actual series with previous = 10]
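
(The only change relative to the code block above is the window parameter; a sketch reusing the same variables:)

previous = 10
X_train, Y_train = create_dataset(train, previous)
X_test, Y_test = create_dataset(test, previous)
# same [samples, time steps, features] reshape and model as before,
# now with input_shape=(1, previous)
X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))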



In this regard, my advice would be to define the time series you wish to forecast and then work off that basis. Using iterations only complicates the situation, and could even cause issues with prediction.






answered Feb 26 at 16:13 – Michael Grogan

  • I have edited my question and tried to explain more in the EDIT section. Please read it; maybe it will help clarify what I am asking.
    – user145959, Feb 26 at 18:02










