Time Series Dynamics (TSD)

TSD characterizes the dynamics of nonlinear dynamical systems from time series data for the purposes of description, classification, or prediction. Development started in 2012 with a simple idea: extract a wide range of features from synchrony- and chaos-based models, then fit them to known current or future states by any convenient means. Generally, we think of this as two steps: "phase space feature extraction" and "solution space fitting".
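To make the two steps concrete, here is a minimal sketch in Python, assuming delay-coordinate (Takens) embedding as one example of phase space feature extraction and an off-the-shelf classifier as the "convenient means" of solution space fitting. The function names, feature choices, and synthetic data are our illustrative assumptions, not TSD's actual implementation.

```python
# Sketch of the two-step pipeline: phase space feature extraction,
# then solution space fitting. Illustrative assumptions throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

def phase_space_features(x, dim=3, lag=1):
    """Embed a 1-D series in delay coordinates and summarize its geometry."""
    n = len(x) - (dim - 1) * lag
    embedded = np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])
    # Simple geometric summaries of the reconstructed attractor.
    centroid_dist = np.linalg.norm(embedded - embedded.mean(axis=0), axis=1)
    return np.array([
        centroid_dist.mean(),                       # average attractor radius
        centroid_dist.std(),                        # spread of trajectories
        np.abs(np.diff(embedded, axis=0)).mean(),   # mean step size (local divergence proxy)
    ])

def fit_solution_space(feature_rows, labels):
    """Fit extracted features to known states by any convenient means."""
    return LogisticRegression().fit(np.array(feature_rows), labels)

# Usage: one feature vector per series, fitted to known state labels.
# Random walks stand in for real measurements here.
rng = np.random.default_rng(0)
series = [rng.standard_normal(500).cumsum() for _ in range(20)]
labels = [i % 2 for i in range(20)]
model = fit_solution_space([phase_space_features(s) for s in series], labels)
```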


The introduction of TensorFlow in 2015 made it possible to explore several hundred natural systems fairly easily and to improve our feature extraction. Ultimately, it allowed us to develop a novel field construct for feature extraction that functions as our "theory of everything". Not surprisingly, it treats turbulence, the mechanism by which systems dissipate energy, as a central issue. As our ability to extract features improved, the demands on solution space fitting declined. By 2018, neural networks were no longer necessary; they were easily replaced by simple, rigorous statistical procedures.


In working with other researchers, we have noted that neural networks are most useful in time series studies when the extracted features are poorly aligned with the system's dynamics. This typically occurs when generic mathematical approaches, such as the FFT or the autocorrelation function (ACF), are applied to chaotic time series. The resulting suboptimal description forces researchers to employ neural networks as a kind of "band-aid". While neural networks can find nonlinear relationships among features, they cannot convert a feature extracted, for example, from the spectral envelope into one that should have been extracted from phase space, because the information needed to do so has been destroyed. Many big tech companies have profited handsomely from the perception that only neural networks can solve certain problems and have shifted the public's focus to architecture innovations. In our view, it is better to start with the right math in the first place; but, of course, then we wouldn't need big tech enabling solutions. We encourage researchers to consider opportunities in more domain-relevant feature extraction. However, we are well aware that the influence of big tech is pervasive, including, but not limited to, Federal granting agencies.
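A small numerical demonstration of this point, written under our own assumptions rather than drawn from TSD's code: a phase-randomized surrogate series preserves the FFT amplitude spectrum exactly, so any spectral-envelope feature is unchanged, yet the deterministic structure that a phase space feature would capture is destroyed. The chaotic logistic map serves as the test series, and the nearest-state predictability statistic is our own choice of phase-space-sensitive measure.

```python
# Two series with identical amplitude spectra but very different
# phase space structure: spectral features cannot tell them apart.
import numpy as np

rng = np.random.default_rng(1)

# Chaotic logistic map, a standard test series for nonlinear methods.
x = np.empty(4096)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])
x -= x.mean()

# Surrogate: keep FFT amplitudes, randomize phases.
spec = np.fft.rfft(x)
phases = rng.uniform(0, 2 * np.pi, len(spec))
phases[0] = 0.0    # keep DC real
phases[-1] = 0.0   # keep Nyquist real so the inverse transform is exact
surrogate = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

# Spectral-envelope features match to numerical precision...
print(np.allclose(np.abs(np.fft.rfft(surrogate)), np.abs(spec)))  # True

# ...but a phase space statistic does not: in the logistic map,
# x[t+1] is a deterministic function of x[t], so similar states have
# similar successors; phase randomization destroys this structure.
def one_step_predictability(s):
    order = np.argsort(s[:-1])
    successor_spread = np.abs(np.diff(s[1:][order]))
    return successor_spread.mean()

# Small for the chaotic map, roughly 100x larger for the surrogate.
print(one_step_predictability(x), one_step_predictability(surrogate))
```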


By 2021, we had developed a combinatorial approach to solution space that uses only one degree of freedom. Essentially, the approach places primary emphasis on deducing the most relevant physics, which are then combined, without fitting, into a single marker. This approach is similar to neural networks in that it can accommodate a large number of candidate features, but it is the opposite of neural networks in the sense that it does not actually weight them. The advantage of this approach is that it preserves the physical integrity of the modeling, which is immensely helpful to researchers and regulators, and it drastically reduces the drop-off between training and test performance. It also dramatically reduces computational requirements. In re-simulating all of our studies, we find that it has outperformed all approaches based on neural networks and statistical modeling, but only because we can extract highly descriptive domain-relevant features.
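One plausible reading of this scheme, sketched below entirely under our own assumptions: each candidate feature is oriented by the deduced physics (a sign, not a fitted weight), the oriented features are pooled by equal-weight rank averaging into a single marker, and the lone degree of freedom is a single cutoff on that marker. Nothing here is TSD's actual code.

```python
# A one-degree-of-freedom combination: no per-feature weights are
# fitted; the only parameter estimated from data is one cutoff.
import numpy as np

def composite_marker(features, orientations):
    """Equal-weight, rank-based combination of candidate features.

    features:     (n_samples, n_features) array of candidate features
    orientations: +1/-1 per feature, chosen from the physics, not fitted
    """
    ranks = np.argsort(np.argsort(features * orientations, axis=0), axis=0)
    return ranks.mean(axis=1)  # single marker per sample

def fit_threshold(marker, labels):
    """The single degree of freedom: one cutoff on the composite marker."""
    candidates = np.unique(marker)
    accuracy = [np.mean((marker >= c) == labels) for c in candidates]
    return candidates[int(np.argmax(accuracy))]

# Usage with synthetic data standing in for extracted features.
rng = np.random.default_rng(2)
labels = rng.integers(0, 2, 200).astype(bool)
features = rng.standard_normal((200, 5)) + labels[:, None] * 0.8
marker = composite_marker(features, orientations=np.ones(5))
cutoff = fit_threshold(marker, labels)
print(f"accuracy at cutoff {cutoff:.1f}:",
      np.mean((marker >= cutoff) == labels))
```

Because the per-feature weights are fixed in advance, the gap between training and test performance in this sketch comes only from the single fitted cutoff, which is one way to read the small train-to-test drop-off claimed above.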