Motivation:
I have a system composed of three probabilistic Boolean time series, and I want to simulate how two of them would react if a certain intervention took place (let's say time series 1 can be influenced and will definitely be "0" for the next 6 observations).
(The managerial question behind this is: Would it be good to influence the system so that one of the time series involved can only produce zeros in the near future? "Good" here means that this would affect the remaining two time series such that they tend to become zero as well.)
Reasoning:
In my understanding, this is similar to a pre-trained logistic regression being applied to classify an unseen observation.
The logistic regression would classify it based on its pre-trained weights, and so does each of the trained LSTMs.
So when I make up data points, the LSTM applies its pre-trained weights to classify the outcome: 0 or 1.
Hence, it should react to my "fake" data as if it were classifying actual data, and give me a simulation of what would likely have happened.
More details and examples:
Let's assume I have three Boolean time series (from t0 to T).
I train an LSTM for each of them, using the other time series as covariates (series 1 is predicted from series 2 and 3; series 2 from 1 and 3; series 3 from 1 and 2). Training is now complete, and from here on I will not train any further, only predict.
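As a sketch of this setup, the training pairs for each model could be built as follows. This is a minimal illustration under assumptions: the `make_training_pairs` helper, the window length, and the toy `history` array are all made up, and the real models would be LSTMs trained on these pairs.

```python
import numpy as np

# Toy observed history: three Boolean series from t0 to T
# (columns = series 1..3). The values are made up for illustration.
history = np.array([
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 1],
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 0],
])  # shape (T, 3)

def make_training_pairs(series, target_idx, window=3):
    """Build (input, target) pairs for one series' model: each input is a
    window over the OTHER two series, the target is the next value of
    series[:, target_idx]."""
    covariates = [i for i in range(series.shape[1]) if i != target_idx]
    X, y = [], []
    for t in range(window, series.shape[0]):
        X.append(series[t - window:t, covariates])
        y.append(series[t, target_idx])
    return np.array(X), np.array(y)

# Training pairs for the model of series 1 (predicted from series 2 and 3)
X1, y1 = make_training_pairs(history, target_idx=0)
print(X1.shape, y1.shape)  # (3, 3, 2) (3,)
```

Each input sample is a `(window, 2)` slice of the two covariate series, which matches the "seq to 1" shape an LSTM expects.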
The usual prediction task
The LSTMs predict "seq to 1", i.e. the input is a sequence of Boolean values and the output is a predicted value for t+1 (Figure 1).
I then take the predicted t+1 values and append them to my observed data, as if I had actually observed t+1.
Now I repeat the prediction using this extended data and get a new one-step-ahead value, which is effectively my "t+2" prediction (Figure 2).
Figure 1:
observed data 0 to T | prediction for T+1
LSTM1: 000111|1
LSTM2: 100110|0
LSTM3: 011000|0
Figure 2:
observed data from 1 to T | predicted values for T+1, treated as if observed | predictions for T+2
LSTM1: 00111|1|1
LSTM2: 00110|0|0
LSTM3: 11000|0|0
I repeat this, for example, 6 times to get a 6-step-ahead forecast (Figure 3).
Figure 3:
LSTM1: 1|1|1|0|0|0
LSTM2: 0|0|1|1|1|0
LSTM3: 0|0|0|1|1|1
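The closed-loop rollout above can be sketched like this. Each trained LSTM is abstracted as a function from the data so far to a 0/1 prediction; the `make_toy_model` stand-ins below are invented placeholders (a simple rule over the covariates' last values), not real trained models.

```python
import numpy as np

# Each trained LSTM is abstracted as a function mapping the current data
# (all three series so far) to a 0/1 prediction for its own series.
# These are toy stand-ins, NOT real trained LSTMs.
def make_toy_model(target_idx):
    covariates = [i for i in range(3) if i != target_idx]
    def predict(data):
        return int(data[-1, covariates].sum() >= 1)  # placeholder rule
    return predict

models = [make_toy_model(i) for i in range(3)]

def roll_forward(history, models, horizon=6):
    """Closed-loop forecast: predict t+1 for all three series, append the
    predictions as if they were observed, and repeat `horizon` times."""
    data = history.copy()
    for _ in range(horizon):
        next_step = [m(data) for m in models]
        data = np.vstack([data, next_step])
    return data[len(history):]  # only the forecasted rows

history = np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0],
                    [1, 1, 0], [1, 0, 0], [1, 0, 0]])
forecast = roll_forward(history, models, horizon=6)
print(forecast.shape)  # (6, 3): six steps ahead for all three series
```

The key point is that each iteration feeds all three predictions back in as if they were observations, exactly as in Figures 2 and 3.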
The potential simulation setting
What happens if I inject fake data instead of the predicted values? Let's say LSTM1 predicts "1", but I decide to set the value to "0". When I now use this fake zero as the last "observed" value of sequence 1, I potentially get a different prediction for t+2 from LSTM2 and LSTM3 (Figure 4).
Figure 4 (compare with Figure 2 and notice the "0" as the "fake" t+1 value for LSTM1, plugged in instead of the predicted "1"):
LSTM1: 00111|0|0
LSTM2: 00110|0|1 -> "simulated outcome", conditional on the fake "0" injected instead of the predicted "1" at LSTM1's t+1
LSTM3: 11000|0|1 -> "simulated outcome", conditional on the fake "0" injected instead of the predicted "1" at LSTM1's t+1
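This intervention is just the same rollout loop with one prediction overwritten at each step. A sketch under the same assumptions as before (the toy models are invented placeholders, not real trained LSTMs):

```python
import numpy as np

# Intervention rollout: at every forecast step, the clamped series'
# prediction is overwritten with the "fake" 0 before being appended.
# The toy models are illustrative stand-ins, not real trained LSTMs.
def make_toy_model(target_idx):
    covariates = [i for i in range(3) if i != target_idx]
    def predict(data):
        return int(data[-1, covariates].sum() >= 1)
    return predict

models = [make_toy_model(i) for i in range(3)]

def roll_with_intervention(history, models, clamp_idx, horizon=6):
    data = history.copy()
    for _ in range(horizon):
        next_step = [m(data) for m in models]
        next_step[clamp_idx] = 0   # inject the fake "0" instead of the prediction
        data = np.vstack([data, next_step])
    return data[len(history):]

history = np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0],
                    [1, 1, 0], [1, 0, 0], [1, 0, 0]])
sim = roll_with_intervention(history, models, clamp_idx=0)
print(sim[:, 0])  # series 1 is 0 at every step, by construction
```

The other two models still see the clamped series as input, so their forecasts respond to the injected zeros, which is the simulated outcome of the intervention.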
One could test three interventions this way: keeping time series 1 equal to 0, or doing the same with time series 2 or 3.
If I record all of these quasi-experiments over the course of, let's say, a month, I could compare the logged data and see which intervention (forcing a series to be 0) would be most likely to affect the system in the short term (6 observations ahead).
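Comparing the three candidate interventions could then look like the sketch below: clamp each series to 0 in turn and score the rollout by how often the other two series come out as 0 over the horizon. Both the toy models and the zero-rate score are illustrative assumptions, not a recommendation of a specific metric.

```python
import numpy as np

# Toy stand-ins for the trained LSTMs, as in the earlier sketches.
def make_toy_model(target_idx):
    covariates = [i for i in range(3) if i != target_idx]
    def predict(data):
        return int(data[-1, covariates].sum() >= 1)
    return predict

models = [make_toy_model(i) for i in range(3)]

def roll_with_intervention(history, models, clamp_idx, horizon=6):
    data = history.copy()
    for _ in range(horizon):
        next_step = [m(data) for m in models]
        next_step[clamp_idx] = 0   # force the clamped series to 0
        data = np.vstack([data, next_step])
    return data[len(history):]

history = np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0],
                    [1, 1, 0], [1, 0, 0], [1, 0, 0]])
scores = {}
for clamp_idx in range(3):
    sim = roll_with_intervention(history, models, clamp_idx)
    others = [i for i in range(3) if i != clamp_idx]
    scores[clamp_idx] = (sim[:, others] == 0).mean()  # zero-rate of the others
print(scores)
```

Logging these scores for each quasi-experiment over the month would give the comparison data described above.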
I am just a little concerned because I haven't come across anyone who has done this before. What do you think?