To All:
First, I am not sure that "backpropagation" is the right term for what I am looking at, but it seems like the most concise way to describe it.
Process Description:
Forward:
So, I have a few different sets of distributions, where each one is produced from the previous one plus some set of parameters. In the final distribution, a specific field is defined as the desired goal.
Process:
Forward Propagation: start --> dist(1) --> dist(2) --> ... ---> dist(end)
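To make sure I follow, here is a minimal sketch of what I understand the forward pass to be. The transition rule and parameter values are my own placeholders, not anything from your description; each step just produces a new population of samples (standing in for dist(k)) from the previous one plus some parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(samples, params):
    # Hypothetical transition rule: each new state is the old state plus a
    # parameterized random perturbation (a stand-in for the real update).
    return samples + rng.normal(params["mean"], params["std"], size=samples.shape)

# start --> dist(1) --> dist(2) --> ... --> dist(end)
samples = np.zeros(1000)            # start: all mass at a single point
params = {"mean": 0.5, "std": 1.0}  # assumed parameters, for illustration only
history = [samples]
for _ in range(5):
    samples = step(samples, params)
    history.append(samples)

# dist(end) is summarized by its samples; some field of it (say, its mean)
# would play the role of the "desired goal".
goal = history[-1].mean()
```

If this matches what you are doing, the backward question becomes: given a value of `goal`, which regions of each earlier `history[k]` lead to it?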
Back:
The question is: is there a way to know where in the previous distributions the specific desired goal came from? If not, is there an alternative route for this kind of problem?
Process:
Back Propagation: goal = f(dist(end)) --> g(dist(end-1)) --> ... --> g(dist(1))
The main reason I have gone to distributions is that the algorithms use more data than it is physically possible to keep up with, so I have taken the alternative route of representing the effects of the algorithm at discrete points.
Thanks for all your time, and I am sorry for putting a question up as my first post, but this has been boggling my mind.
Do you have a specific example you can share? I'm not really sure what you're doing and a concrete example might clear some things up.
I can't really give a specific example, but I can try to define the process a little.
I have a function 'h' that is affected by previous information 'x', as well as possible commands 'u' and some constraints 'c' to add some spice to the picture.
h(x,u,c) = ?
Let's assume that we start with a single point of previous information and two possible commands.
Therefore, if we want to keep all of the information, and if we ignore possible constraint effects, the amount of information at each step grows like this (a search tree):
1, 2, 4, 8, 16, ...
Because this reaches a point that is computationally impossible, or at least impractical, a set of distributions is instead fitted to the state data 'x' at each discrete step; I am using a mixture of Gaussians at the moment.
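The fit-and-reseed idea can be sketched in a few lines. Everything concrete here (the two commands as +/-1 on a scalar state, the sample budget, fitting a single Gaussian rather than your mixture) is my assumption, just to show the mechanism of collapsing the doubling tree back to a fixed-size distribution each step:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 500                            # fixed sample budget per step (instead of 2**k leaves)
commands = np.array([-1.0, 1.0])   # two possible commands 'u' (assumed scalar effects)

samples = np.zeros(N)              # single starting point, replicated N times
for _ in range(10):
    # Each kept sample follows one randomly chosen command; the full tree
    # would double in size here: 1, 2, 4, 8, 16, ...
    u = rng.choice(commands, size=N)
    samples = samples + u
    # Collapse the branched states back to a distribution. A single Gaussian
    # fit is used here for brevity; the post uses a mixture of Gaussians,
    # which would preserve multimodal structure that this throws away.
    mu, sigma = samples.mean(), samples.std()
    samples = rng.normal(mu, sigma, size=N)
```

The cost of the reseeding step is that per-sample history is lost, which is exactly why the backward question is hard: after a few fits, a sample no longer "remembers" which branch of the tree it came from.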
However, the main desire of this process was not only to find what happens while progressing forward, but also to be able to pick a specific set of values at the end of the progression and find the distributions at each step that define how to get there from the starting point.
I guess an example would be a car driving on a flat plane at a constant velocity but with a multitude of possible directions, each defined as a delta from the previous velocity vector. Assume there are four directions. The first iteration gives 4 different possible places for the car to be, the next gives 16, and so forth. Because the number of points becomes much larger than is desired, the states of the car are fitted to a distribution at each step, and the next step is seeded by draws from that distribution. After the process has run for long enough, it is decided that the run is finished. Now we know the possible capability of the car, but we want to know all possible paths to some end point (this is where I am stuck).
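The car example above can be sketched directly. The specific heading deltas, speed, step count, and the choice of a single 2D Gaussian fit per step are all my assumptions for illustration; the point is just to reproduce the branch / fit / reseed loop on positions:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 1000                          # sample budget per step
speed, dt = 1.0, 1.0              # constant speed, unit time step (assumed)
deltas = np.deg2rad([-30.0, -10.0, 10.0, 30.0])  # 4 heading changes (assumed)

pos = np.zeros((N, 2))            # all samples start at the origin
heading = np.zeros(N)             # all pointing along +x
for _ in range(8):
    # Each state branches into 4; here one branch is picked per sample so
    # the population stays at N instead of growing 4, 16, 64, ...
    heading = heading + rng.choice(deltas, size=N)
    pos = pos + speed * dt * np.column_stack([np.cos(heading), np.sin(heading)])
    # Fit a 2D Gaussian to the positions and reseed the next step from it.
    # Note this discards the pairing between a position and its heading,
    # which is part of what makes recovering paths backward so difficult.
    mu = pos.mean(axis=0)
    cov = np.cov(pos.T)
    pos = rng.multivariate_normal(mu, cov, size=N)
```

After the loop, `pos` describes where the car *could* be, but nothing in it records *which* command sequence reached any particular end point, which seems to be exactly the gap between the forward process and the backward question.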