RNN/LSTM timeseries, with fixed attributes per run

I have a multivariate time series of weather data: temperature, humidity and wind strength ($x_{c,t},y_{c,t},z_{c,t}$ respectively). I have this data for a dozen different cities ($c\in \{c_1,c_2,...,c_{12}\}$).

I also know the values of certain fixed attributes for each city. For example, altitude ($A$), latitude $(L)$ and distance from ocean ($D$) are fixed for each city (i.e. they are time independent). Let $p_c=(A_c,L_c,D_c)$ be this fixed parameter vector for city $c$.

I have built an LSTM in Keras (based on this post) to predict the time series from some initial starting point, but it does not make use of $p_c$ (it only looks at the time series values). My question is:

Can the fixed parameter vector $p_c$ be taken into account when designing/training my network?

The purpose of this is essentially: (1) train a LSTM on all data from all cities, then (2) forecast the weather time series for a new city, with known $A_{new},L_{new},D_{new}$ values (but no other data - i.e. no weather history for this city).

(A structure different from LSTM is fine, if that's more suited.)

Topic lstm keras rnn neural-network time-series

Category Data Science


You can create a sort of encoder-decoder network with two different inputs.

import tensorflow as tf

# Placeholder sizes - adjust these to your data
latent_dim = 16       # size of the LSTM hidden state
window_len_1 = 30     # length of the weather-history window
n_1_features = 3      # temperature, humidity, wind strength
window_len_2 = 30     # length of the forecast window
n_2_features = 3      # altitude, latitude, distance from ocean (repeated per timestep)

# First branch of the net is an LSTM which finds an embedding for the (x, y, z) inputs
xyz_inputs = tf.keras.Input(shape=(window_len_1, n_1_features), name='xyz_inputs')
# Encode xyz_inputs; return_state=True also returns the final hidden and cell states
encoder = tf.keras.layers.LSTM(latent_dim, return_state=True, name='Encoder')
encoder_outputs, state_h, state_c = encoder(xyz_inputs)

# Second branch takes the fixed city attributes, tiled along the time axis
city_inputs = tf.keras.Input(shape=(window_len_2, n_2_features), name='city_inputs')
# Combine the city inputs with the recurrent branch by initialising the
# decoder with the encoder's final states
decoder_lstm = tf.keras.layers.LSTM(latent_dim, return_sequences=True, name='Decoder')
x = decoder_lstm(city_inputs, initial_state=[state_h, state_c])

x = tf.keras.layers.Dense(16, activation='relu')(x)
x = tf.keras.layers.Dense(16, activation='relu')(x)
# Linear output, since targets such as temperature can be negative
output = tf.keras.layers.Dense(1)(x)

model = tf.keras.models.Model(inputs=[xyz_inputs,city_inputs], outputs=output)

optimizer = tf.keras.optimizers.Adam()
loss = tf.keras.losses.Huber()
model.compile(loss=loss, optimizer=optimizer, metrics=["mae"])

model.summary()

Here you are. Of course, I inserted arbitrary numbers for the layer sizes, latent dimension, etc. - tune them to your data.

With this code you can feed different features through the xyz and city branches; both have to be passed as arrays. To make a forecast, you give the model the "xyz_inputs" (weather history) and the city features of the city you want to predict.
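For completeness, here is a minimal sketch of how such a model could be trained and queried, assuming made-up shapes (a single 30-step window per city, 12 cities) and random dummy data; the fixed attribute vector $p_c$ is tiled along the time axis so the decoder sees it at every step. The model is a condensed version of the one above, and all the array sizes are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

# --- build the encoder-decoder model (condensed version of the code above) ---
latent_dim, window_len, n_weather, n_city = 16, 30, 3, 3
xyz_inputs = tf.keras.Input(shape=(window_len, n_weather), name='xyz_inputs')
_, state_h, state_c = tf.keras.layers.LSTM(latent_dim, return_state=True)(xyz_inputs)
city_inputs = tf.keras.Input(shape=(window_len, n_city), name='city_inputs')
x = tf.keras.layers.LSTM(latent_dim, return_sequences=True)(
    city_inputs, initial_state=[state_h, state_c])
x = tf.keras.layers.Dense(16, activation='relu')(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.models.Model([xyz_inputs, city_inputs], output)
model.compile(loss='huber', optimizer='adam')

# --- dummy training data: one window per city ---
n_samples = 12
weather = np.random.rand(n_samples, window_len, n_weather)    # (x, y, z) histories
p_c = np.random.rand(n_samples, n_city)                       # fixed (A, L, D) per city
city = np.repeat(p_c[:, None, :], window_len, axis=1)         # tile along the time axis
targets = np.random.rand(n_samples, window_len, 1)
model.fit({'xyz_inputs': weather, 'city_inputs': city}, targets, epochs=1, verbose=0)

# --- forecast: a weather history plus the target city's tiled attributes ---
pred = model.predict([weather[:1], city[:1]], verbose=0)
print(pred.shape)  # one sequence of 30 one-dimensional predictions
```

Note that this design still needs some weather history on the encoder side at prediction time; for a brand-new city with no history, one option is to feed a neutral placeholder (e.g. zeros, or a climatological average) as `xyz_inputs`.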
