LSTM Prediction Algorithm

LSTM networks are well suited to tasks such as unsegmented, connected handwriting recognition, speech recognition, and anomaly detection in network traffic or intrusion detection systems (IDSs). Their relative insensitivity to gap length is an advantage over plain RNNs, hidden Markov models, and other sequence learning methods in numerous applications. In 2016, Google released the Google Neural Machine Translation system for Google Translate, which used LSTMs to reduce translation errors by 60%, and Apple announced at its Worldwide Developers Conference that it would begin using LSTMs for QuickType on the iPhone and for Siri. Google also started using an LSTM for speech recognition on Google Voice. In 1999, Felix Gers, his adviser Jürgen Schmidhuber, and Fred Cummins introduced the forget gate (also called the "keep gate") into the LSTM architecture, enabling the LSTM to reset its own state; in 2000, Gers, Schmidhuber, and Cummins added peephole connections (connections from the cell to the gates) to the architecture.
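To make the role of the gates concrete, here is a minimal NumPy sketch of a single LSTM cell step in its standard form (without peephole connections). The weight names and shapes are illustrative assumptions, not taken from the prediction script below or from any particular library.

# --- Standalone sketch (separate from the prediction script below) ---
import numpy as np


def lstm_cell_step(x_t, h_prev, c_prev, weights):
    """One LSTM time step: the forget gate decides how much of the previous
    cell state to keep, which is what lets the cell reset its own state."""

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Each gate has its own weight matrix acting on [x_t, h_prev] plus a bias.
    z = np.concatenate([x_t, h_prev])
    f = sigmoid(weights["W_f"] @ z + weights["b_f"])  # forget gate
    i = sigmoid(weights["W_i"] @ z + weights["b_i"])  # input gate
    o = sigmoid(weights["W_o"] @ z + weights["b_o"])  # output gate
    c_hat = np.tanh(weights["W_c"] @ z + weights["b_c"])  # candidate cell state
    c_t = f * c_prev + i * c_hat  # new cell state
    h_t = o * np.tanh(c_t)  # new hidden state (the cell's output)
    return h_t, c_t


# Tiny usage example with random weights (3 inputs, 4 hidden units).
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
weights = {f"W_{g}": rng.standard_normal((n_hid, n_in + n_hid)) for g in "fioc"}
weights.update({f"b_{g}": np.zeros(n_hid) for g in "fioc"})
h_t, c_t = lstm_cell_step(
    rng.standard_normal(n_in), np.zeros(n_hid), np.zeros(n_hid), weights
)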
"""
    Create a Long Short-Term Memory (LSTM) network model
    An LSTM is a type of Recurrent Neural Network (RNN) as discussed at:
    * http://colah.github.io/posts/2015-08-Understanding-LSTMs
    * https://en.wikipedia.org/wiki/Long_short-term_memory
"""

from keras.layers import Dense, LSTM
from keras.models import Sequential
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler


if __name__ == "__main__":
    """
    First part of building the model is to get the data and prepare it
    for the network. You can use any dataset for stock prediction; just
    make sure the target (price) column is selected in the `iloc` call
    below. This script assumes the price is in the column at index 1.
    """
    df = pd.read_csv("sample_data.csv", header=None)
    len_data = df.shape[0]  # number of rows in the dataset
    # Select the target (price) column; change the index if your dataset differs.
    actual_data = df.iloc[:, 1:2]
    actual_data = actual_data.values.reshape(len_data, 1)
    scaler = MinMaxScaler()  # scale prices into [0, 1] for training stability
    actual_data = scaler.fit_transform(actual_data)
    look_back = 10  # number of past time steps fed to the network
    forward_days = 5  # number of future time steps to predict
    periods = 20  # number of look_back-sized blocks reserved for testing
    division = len_data - periods * look_back  # index where the test split starts
    train_data = actual_data[:division]
    test_data = actual_data[division - look_back :]
    train_x, train_y = [], []
    test_x, test_y = [], []

    # Build sliding windows: each sample pairs look_back consecutive inputs
    # with the forward_days values that follow them (same for the test split).
    for i in range(0, len(train_data) - forward_days - look_back + 1):
        train_x.append(train_data[i : i + look_back])
        train_y.append(train_data[i + look_back : i + look_back + forward_days])
    for i in range(0, len(test_data) - forward_days - look_back + 1):
        test_x.append(test_data[i : i + look_back])
        test_y.append(test_data[i + look_back : i + look_back + forward_days])
    # x_* have shape (samples, look_back, 1); each y_* window is flattened
    # into a vector of forward_days target values.
    x_train = np.array(train_x)
    x_test = np.array(test_x)
    y_train = np.array([list(i.ravel()) for i in train_y])
    y_test = np.array([list(i.ravel()) for i in test_y])

    # Stacked LSTM: the first layer returns the full sequence so the second
    # LSTM layer can consume it; the Dense head outputs forward_days values.
    model = Sequential()
    model.add(LSTM(128, input_shape=(look_back, 1), return_sequences=True))
    model.add(LSTM(64))
    model.add(Dense(forward_days))
    model.compile(loss="mean_squared_error", optimizer="adam")
    history = model.fit(
        x_train, y_train, epochs=150, verbose=1, shuffle=True, batch_size=4
    )
    pred = model.predict(x_test)
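    # Optional follow-up (a sketch, assuming the MinMaxScaler fitted above as
    # `scaler`): `pred` and `y_test` are still in the scaled [0, 1] range, so
    # map them back to price units before reporting a simple error metric.
    pred_prices = scaler.inverse_transform(pred.reshape(-1, 1)).reshape(pred.shape)
    true_prices = scaler.inverse_transform(y_test.reshape(-1, 1)).reshape(y_test.shape)
    rmse = float(np.sqrt(np.mean((pred_prices - true_prices) ** 2)))
    print(f"Test RMSE in original price units: {rmse:.4f}")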
