Python AI Images

We perform machine learning (neural networks) with Keras. Here we get started with Keras using the Sequential model.

Keras was developed at Google and is an API wrapper that uses Google's TensorFlow as its backend, which makes coding simpler. With Anaconda, install the required packages with commands such as conda install numpy, conda install matplotlib, conda install pandas, and conda install keras-gpu. If they cannot be installed with conda, install them with pip instead, e.g. pip install tensorflow keras
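As a quick sanity check after installation (a minimal sketch; the exact version string depends on your environment), confirm that Keras imports and reports TensorFlow as its backend:

import keras
print(keras.__version__)         # installed Keras version
print(keras.backend.backend())   # should print 'tensorflow'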

In [1]:
#%matplotlib inline
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
import matplotlib.pyplot as plt 
Using TensorFlow backend.

1. Creating the data

In [2]:
x = data = np.linspace(1, 2, 200) # evenly spaced sequence: start, stop, number of points
y = x*4 + np.random.randn(*x.shape) * 0.3 # add normally distributed random noise
print(x[0:6]) # show the first few x values
print(y[0:6]) # show the first few y values
[1.         1.00502513 1.01005025 1.01507538 1.0201005  1.02512563]
[4.41526818 4.25831481 4.21266542 3.38633868 4.63971954 4.25130364]
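For reference (a sketch added here, not part of the original notebook), an ordinary least-squares fit with np.polyfit gives the slope and intercept the network should roughly converge toward; since the data were generated as y = 4x plus zero-mean noise, the fitted slope should be close to 4:

w_ls, b_ls = np.polyfit(x, y, 1)  # degree-1 fit returns [slope, intercept]
print('least-squares fit w: %.2f, b: %.2f' % (w_ls, b_ls))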

2. Building the model (a simple Sequential model)

In [3]:
model = Sequential()
model.add(Dense(1,  # add a dense layer to the NN; output dimension is 1
                input_dim = 1, # input: 1 dimension (one node)
                activation = 'linear')) # linear: simply multiply by a weight and add a bias
# Stochastic Gradient Descent (SGD):
# the larger the error from the target, the larger the adjustment in the opposite direction
# the training data are fed in randomly shuffled order
model.compile(optimizer = 'sgd', # configure the training process
              loss = 'mse', # use mean squared error (mean_squared_error) as the loss function
              metrics = ['mse'])

weights = model.layers[0].get_weights()
w_init = weights[0][0][0]
b_init = weights[1][0]
print('Linear regression model is initialized with weights w: %.2f, b: %.2f' % (w_init, b_init)) 
WARNING:tensorflow:From d:\ProgramData\Anaconda3\envs\keras-gpu\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Linear regression model is initialized with weights w: -1.60, b: 0.00
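To make the stochastic gradient descent comment concrete, here is a minimal NumPy sketch (an illustration only, not Keras internals) of one epoch of per-sample SGD updates on the same linear model, minimizing the squared error:

lr = 0.01                     # learning rate
w, b = 0.0, 0.0               # start from arbitrary parameters
for xi, yi in zip(x, y):      # one sample at a time, as with batch_size = 1
    err = (w * xi + b) - yi   # signed prediction error
    w -= lr * 2 * err * xi    # d/dw of (w*xi + b - yi)^2 is 2*err*xi
    b -= lr * 2 * err         # d/db of the squared error is 2*err
print('after one epoch of manual SGD: w = %.2f, b = %.2f' % (w, b))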

3. Training the model on the data

In [4]:
model.fit(x, y, batch_size = 1, epochs = 30, shuffle = False) # train for 30 epochs, one sample per update, without shuffling
WARNING:tensorflow:From d:\ProgramData\Anaconda3\envs\keras-gpu\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/30
200/200 [==============================] - 3s 13ms/step - loss: 2.3402 - mean_squared_error: 2.3402
Epoch 2/30
200/200 [==============================] - 0s 2ms/step - loss: 0.2094 - mean_squared_error: 0.2094
Epoch 3/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1987 - mean_squared_error: 0.1987
Epoch 4/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1889 - mean_squared_error: 0.1889
Epoch 5/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1799 - mean_squared_error: 0.1799
Epoch 6/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1716 - mean_squared_error: 0.1716
Epoch 7/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1640 - mean_squared_error: 0.1640
Epoch 8/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1570 - mean_squared_error: 0.1570
Epoch 9/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1506 - mean_squared_error: 0.1506
Epoch 10/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1447 - mean_squared_error: 0.1447
Epoch 11/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1393 - mean_squared_error: 0.1393
Epoch 12/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1343 - mean_squared_error: 0.1343
Epoch 13/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1297 - mean_squared_error: 0.1297
Epoch 14/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1255 - mean_squared_error: 0.1255
Epoch 15/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1216 - mean_squared_error: 0.1216
Epoch 16/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1180 - mean_squared_error: 0.1180
Epoch 17/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1148 - mean_squared_error: 0.1148
Epoch 18/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1118 - mean_squared_error: 0.1118
Epoch 19/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1090 - mean_squared_error: 0.1090
Epoch 20/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1065 - mean_squared_error: 0.1065
Epoch 21/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1041 - mean_squared_error: 0.1041
Epoch 22/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1020 - mean_squared_error: 0.1020
Epoch 23/30
200/200 [==============================] - 0s 1ms/step - loss: 0.1000 - mean_squared_error: 0.1000
Epoch 24/30
200/200 [==============================] - 0s 1ms/step - loss: 0.0982 - mean_squared_error: 0.0982
Epoch 25/30
200/200 [==============================] - 0s 1ms/step - loss: 0.0965 - mean_squared_error: 0.0965
Epoch 26/30
200/200 [==============================] - 0s 1ms/step - loss: 0.0950 - mean_squared_error: 0.0950
Epoch 27/30
200/200 [==============================] - 0s 1ms/step - loss: 0.0936 - mean_squared_error: 0.0936
Epoch 28/30
200/200 [==============================] - 0s 1ms/step - loss: 0.0923 - mean_squared_error: 0.0923
Epoch 29/30
200/200 [==============================] - 0s 1ms/step - loss: 0.0911 - mean_squared_error: 0.0911
Epoch 30/30
200/200 [==============================] - 0s 1ms/step - loss: 0.0901 - mean_squared_error: 0.0901
Out[4]:
<keras.callbacks.History at 0x1a6e0252160>
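model.fit returns a History object (the Out[4] value above). As a minimal sketch (assuming the return value is captured in a variable such as history, which the original cell does not do; note that re-running fit continues training from the current weights), the recorded loss can be plotted per epoch:

history = model.fit(x, y, batch_size = 1, epochs = 30, shuffle = False)
plt.plot(history.history['loss']) # loss recorded at the end of each epoch
plt.xlabel('epoch')
plt.ylabel('mse loss')
plt.show()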

4. Evaluating the model

In [11]:
weights = model.layers[0].get_weights()
w_final = weights[0][0][0]
b_final = weights[1][0]
print('Linear regression model is trained to have weight w: %.2f, b: %.2f' % (w_final, b_final))

predict = model.predict(data)

plt.plot(data, predict, 'r', # predictions as a red line
         data, y, 'g.') # the y data as green dots
plt.show()
Linear regression model is trained to have weight w: 3.44, b: 1.07
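Because x only spans the interval [1, 2], many (w, b) pairs fit the data almost equally well, so the trained weights need not match the generating values w = 4, b = 0. For a single numeric score rather than a plot, model.evaluate returns the loss and the metrics specified at compile time; a minimal sketch:

loss, mse = model.evaluate(x, y, verbose = 0) # returns [loss, mse] because metrics = ['mse']
print('final mean squared error: %.4f' % mse)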