python - How exactly does Keras take dimension arguments for LSTM / time series problems?
I can't seem to find a concrete answer to the question of how to feed data into Keras. Most examples seem to work off image/text data and have clearly defined data points.
I'm trying to feed music into an LSTM neural network. I want the network to take ~3 seconds of music and nominate the next 2 seconds. I have my music prepared as .wav files, partitioned into 5 second intervals that I've decomposed into X (the first 3 seconds) and Y (the last 2 seconds). I've sampled the music at 44,100 Hz, so X is 132,300 observations 'long' and Y is 88,200 observations long.
But I can't figure out how to bridge Keras to my data structure. I'm using the TensorFlow backend.
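For reference, each 5-second clip is carved up roughly like this (a simplified sketch - scipy.io.wavfile and the file name are just stand-ins for my actual preprocessing):

import numpy as np
from scipy.io import wavfile  # placeholder; any wav reader works

SAMPLE_RATE = 44100

# Read one 5-second stereo clip: data has shape (n_samples, 2)
rate, data = wavfile.read('clip_0001.wav')

# First 3 seconds become the input, last 2 seconds become the target
x_clip = data[:3 * SAMPLE_RATE]                  # shape (132300, 2)
y_clip = data[3 * SAMPLE_RATE:5 * SAMPLE_RATE]   # shape (88200, 2)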
In the interest of generalizing the problem and the answer, I'll use A, B, C to denote the dimensions. The only difference between this example data and my real data is that these are random values distributed from 0 to 1, whereas my data is an array of integers.
import numpy as np

# Using variables to make it easy to generalize the answer
# A = number of observations I have
A = 411
# B = duration of a sample; 44.1k observations per second of music
B_train = 132300
B_test = 88200
# C = number of channels in the music; 2-channel stereo
C = 2

# Now create sample data with the dimensionality given above:
X = np.random.rand(A, B_train, C)
Y = np.random.rand(A, B_test, C)

# Split the data
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.20, random_state=42)
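To make the dimensionality explicit, here are the shapes that fall out of the code above (just a sanity check on the variables already defined):

print(X.shape)        # (411, 132300, 2)  -> (A, B_train, C)
print(Y.shape)        # (411, 88200, 2)   -> (A, B_test, C)
print(X_train.shape)  # (328, 132300, 2)  -> ~80% of the 411 observations
print(X_test.shape)   # (83, 132300, 2)   -> the remaining ~20%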
However, I don't know how to configure the model to understand that the 'first' (A) dimension contains the observations, and that I want to more or less break out the music (B) by channel (C).
I know it'd be easier to convert the music to mono (and make this a 2D problem), but I'm curious to see whether or not this has a 'simple' solution - whether it takes the shape of what I have below or whether I should think of the model in some other way.
The primary question is this: how can I construct a model that will allow me to transform my X data into my Y data?
Ideally, an answer would show how to modify the model below to fit the data structure above.
import keras
import math, time
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.recurrent import LSTM
from keras.models import load_model

def build_model(layers):
    d = 0.3
    model = Sequential()

    model.add(LSTM(256, input_shape=(layers), return_sequences=True))
    model.add(Dropout(d))

    model.add(LSTM(256, input_shape=(layers), return_sequences=False))
    model.add(Dropout(d))

    model.add(Dense(32, kernel_initializer="uniform", activation='relu'))
    model.add(Dense(1, kernel_initializer="uniform", activation='linear'))

    start = time.time()
    model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
    print("Compilation Time : ", time.time() - start)
    return model

# Build the model...
model = build_model([328, 132300, 2])

model.fit(X_train, Y_train, batch_size=512, epochs=30, validation_split=0.1, verbose=1)
However, this yields an error (at the model = ... step):
ValueError: Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=4
I can't figure out where Keras gets the expectation to see ndim=4 data. Also, I don't know how to ensure that I feed the data into the model in such a way that the model 'understands' that the observations are distributed along the A-axis and that the data itself lives on the B- and C-axes.
If anything is unclear, please leave a comment. I'll watch this diligently until Sept '17 or so, and I'll be sure to update this question to reflect any advice/comments left.
Thanks!
The Keras convention is that the batch dimension is typically omitted in the input_shape argument. From the guide:
Pass an input_shape argument to the first layer. This is a shape tuple (a tuple of integers or None entries, where None indicates that any positive integer may be expected). In input_shape, the batch dimension is not included.
So changing the call to

model = build_model([132300, 2])

should solve the problem.
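Concretely, a minimal sketch against the model above (only the input_shape changes; the rest of your model stays as it is):

# input_shape describes ONE observation: (timesteps, channels) = (B_train, C).
# The A/batch dimension is inferred from the data you pass to fit().
model = build_model([132300, 2])
print(model.input_shape)   # (None, 132300, 2) -- None is the batch dimension

With input_shape set this way, the per-sample shape matches X_train's last two axes, which is exactly what the LSTM layer expects.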