{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Recurrent Neural Networks"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"%matplotlib inline\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Time series forecasting"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pandas.tseries.offsets import MonthEnd\n",
"\n",
"df = pd.read_csv('../data/cansim-0800020-eng-6674700030567901031.csv',\n",
" skiprows=6, skipfooter=9,\n",
" engine='python')\n",
"\n",
"df['Adjustments'] = pd.to_datetime(df['Adjustments']) + MonthEnd(1)\n",
"df = df.set_index('Adjustments')\n",
"df.head()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"split_date = pd.Timestamp('01-01-2011')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train = df.loc[:split_date, ['Unadjusted']]\n",
"test = df.loc[split_date:, ['Unadjusted']]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.preprocessing import MinMaxScaler\n",
"\n",
"sc = MinMaxScaler()\n",
"\n",
"train_sc = sc.fit_transform(train)\n",
"test_sc = sc.transform(test)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_sc_df = pd.DataFrame(train_sc, columns=['Scaled'], index=train.index)\n",
"test_sc_df = pd.DataFrame(test_sc, columns=['Scaled'], index=test.index)\n",
"\n",
"for s in range(1, 13):\n",
" train_sc_df['shift_{}'.format(s)] = train_sc_df['Scaled'].shift(s)\n",
" test_sc_df['shift_{}'.format(s)] = test_sc_df['Scaled'].shift(s)\n",
"\n",
"X_train = train_sc_df.dropna().drop('Scaled', axis=1)\n",
"y_train = train_sc_df.dropna()[['Scaled']]\n",
"\n",
"X_test = test_sc_df.dropna().drop('Scaled', axis=1)\n",
"y_test = test_sc_df.dropna()[['Scaled']]\n",
"\n",
"X_train = X_train.values\n",
"X_test= X_test.values\n",
"\n",
"y_train = y_train.values\n",
"y_test = y_test.values"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 1\n",
"\n",
"In the model above we reshaped the input shape to: `(num_samples, 1, 12)`, i.e. we treated a window of 12 months as a vector of 12 coordinates that we simultaneously passed to all the LSTM nodes. An alternative way to look at the problem is to reshape the input to `(num_samples, 12, 1)`. This means we consider each input window as a sequence of 12 values that we will pass in sequence to the LSTM. In principle this looks like a more accurate description of our situation. But does it yield better predictions? Let's check it.\n",
"\n",
"- Reshape `X_train` and `X_test` so that they represent a set of univariate sequences\n",
"- retrain the same LSTM(6) model, you'll have to adapt the `input_shape`\n",
"- check the performance of this new model, is it better at predicting the test data?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train_t = X_train.reshape(X_train.shape[0], 12, 1)\n",
"X_test_t = X_test.reshape(X_test.shape[0], 12, 1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train_t.shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from tensorflow.keras.models import Sequential\n",
"from tensorflow.keras.layers import LSTM, Dense\n",
"import tensorflow.keras.backend as K\n",
"from tensorflow.keras.callbacks import EarlyStopping"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"K.clear_session()\n",
"model = Sequential()\n",
"\n",
"model.add(LSTM(6, input_shape=(12, 1)))\n",
"\n",
"model.add(Dense(1))\n",
"\n",
"model.compile(loss='mean_squared_error', optimizer='adam')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.summary()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"early_stop = EarlyStopping(monitor='loss', patience=1, verbose=1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.fit(X_train_t, y_train, epochs=600,\n",
" batch_size=32, verbose=0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_pred = model.predict(X_test_t)\n",
"plt.plot(y_test)\n",
"plt.plot(y_pred)"
]
},
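{
"cell_type": "markdown",
"metadata": {},
"source": [
"To go beyond the visual check, here is a minimal sketch that quantifies the fit (assuming `y_test` and `y_pred` from the cells above; `mean_squared_error` comes from scikit-learn, which this notebook already uses):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import mean_squared_error\n",
"\n",
"# MSE on the scaled test set; compare this value against the same\n",
"# metric computed for the (num_samples, 1, 12) model to answer the exercise\n",
"mse = mean_squared_error(y_test, y_pred)\n",
"print('Test MSE: {:.5f}'.format(mse))"
]
},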
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"## Exercise 2\n",
"\n",
"RNN models can be applied to images too. In general we can apply them to any data where there's a connnection between nearby units. Let's see how we can easily build a model that works with images.\n",
"\n",
"- Load the MNIST data, by now you should be able to do it blindfolded :)\n",
"- reshape it so that an image looks like a long sequence of pixels\n",
"- create a recurrent model and train it on the training data\n",
"- how does it perform compared to a fully connected? How does it compare to Convolutional Neural Networks?\n",
"\n",
"(feel free to run this exercise on a cloud GPU if it's too slow on your laptop)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from tensorflow.keras.datasets import mnist\n",
"from tensorflow.keras.utils import to_categorical"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"(X_train, y_train), (X_test, y_test) = mnist.load_data()\n",
"X_train = X_train.astype('float32') / 255.0\n",
"X_test = X_test.astype('float32') / 255.0\n",
"y_train_cat = to_categorical(y_train, 10)\n",
"y_test_cat = to_categorical(y_test, 10)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train = X_train.reshape(X_train.shape[0], -1, 1)\n",
"X_test = X_test.reshape(X_test.shape[0], -1, 1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(X_train.shape)\n",
"print(X_test.shape)\n",
"print(y_train_cat.shape)\n",
"print(y_test_cat.shape)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# define the model\n",
"K.clear_session()\n",
"model = Sequential()\n",
"model.add(LSTM(32, input_shape=X_train.shape[1:]))\n",
"model.add(Dense(10, activation='softmax'))\n",
"\n",
"# compile the model\n",
"model.compile(loss='categorical_crossentropy',\n",
" optimizer='rmsprop',\n",
" metrics=['accuracy'])\n",
"\n",
"model.fit(X_train, y_train_cat,\n",
" batch_size=32,\n",
" epochs=100,\n",
" validation_split=0.3,\n",
" shuffle=True,\n",
" verbose=2,\n",
" )\n",
"\n",
"model.evaluate(X_test, y_test_cat)"
]
},
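{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the comparison asked in the exercise, here is a minimal sketch of a fully connected baseline on the same data (assuming `X_train`, `X_test`, `y_train_cat` and `y_test_cat` from the cells above; the layer size and epoch count are illustrative, not tuned):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"K.clear_session()\n",
"ffn = Sequential()\n",
"\n",
"# the dense network sees each image as a flat 784-dimensional vector,\n",
"# so we drop the trailing (steps, 1) sequence axes with a reshape\n",
"ffn.add(Dense(128, activation='relu', input_shape=(784,)))\n",
"ffn.add(Dense(10, activation='softmax'))\n",
"\n",
"ffn.compile(loss='categorical_crossentropy',\n",
"            optimizer='rmsprop',\n",
"            metrics=['accuracy'])\n",
"\n",
"ffn.fit(X_train.reshape(-1, 784), y_train_cat,\n",
"        batch_size=32, epochs=10,\n",
"        validation_split=0.3, verbose=2)\n",
"\n",
"ffn.evaluate(X_test.reshape(-1, 784), y_test_cat)"
]
},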
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
}
},
"nbformat": 4,
"nbformat_minor": 2
}