Abstract:
Owing to advances in sensor technology and cloud computing, multichannel time series data is readily acquired and analyzed in many real-world problems. However, because multichannel data is high-dimensional and often acquired over long durations, its analysis is visually cumbersome and computationally challenging despite the available resources. In this thesis, we propose a model-free framework for multichannel data analysis that provides comprehensive insight into the time-varying governing dynamics using both a window-based approach and an incremental approach. We evaluate the framework and demonstrate its utility on real magnetoencephalogram (MEG) data acquired during cognitive tasks and on clinical electroencephalogram (EEG) data recorded during epileptic seizure events.

The complexity of interactions among multichannel data is exemplified in the analysis of brain dynamics, where recordings capture information not only from local sources but also from other regions, so that the signals are multiplexed and contaminated with noise. To understand such dynamics, many researchers have employed biologically inspired models to identify key biological events and generate hypotheses that can be tested experimentally. Such physical models are often computationally expensive, and because they can only encompass activity that is explicitly modeled, they cannot guarantee that all events will be captured. Therefore, we employ the model-free dynamic mode decomposition (DMD) method to extract spectral, spatial, temporal, phase, and instability characteristics from the data and to quantify the dynamics.

The first objective examines a framework to extract dominant spatial patterns and their temporal evolution. The methodology is then adapted to long multichannel time series data to extract spatial, temporal, and phase relationships between channels in the form of color-coded dictionary maps that we denote DMDgrams. Next, we study oscillatory events as instabilities of the system; since DMD extracts an underlying linear subsystem, the fragility of the modes can be quantified and subsequently mapped onto fragility in the original signal space (channel fragility). We then evaluate the proposed measure using receiver operating characteristic curve analysis and patient-specific sequence learning models.

The second objective uses the DMDgram and the fragility features to train deep neural networks for seizure event detection. Attention mechanisms are added to the network to identify the channels that are most active during seizures.

The analysis in the previous objectives relies on a window-based approach that cannot localize events accurately in time, and its results may vary with the chosen window size. Therefore, in the third objective, incremental dynamic mode decomposition is used to generate instantaneous features that accommodate time-varying characteristics and are suitable for online analysis. We also investigate smoothing and the transformation of the modes from the complex domain to the real domain, from which the instantaneous frequency, amplitude, and phase are extracted. Channel interactions are then analyzed incrementally. The framework was tested on synthetic data and validated using (1) multi-trial, multichannel MEG data obtained during three cognitive tasks, (2) real epileptic EEG data, and (3) intracranial EEG recordings aimed at locating the seizure onset zone.
Our analysis shows that the proposed methodology is a useful tool for summarizing long-duration EEG data, studying the underlying dynamics in a linearized domain from spectral, phase, and instability perspectives, and localizing key hidden spatiotemporal events such as dominant brain network configurations and seizure onset zones.
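To make the window-based step concrete, the following Python sketch illustrates standard exact DMD applied to a single multichannel window: it computes the DMD modes and eigenvalues, converts the eigenvalues to frequencies and growth rates, and forms a simple per-channel "fragility" proxy from the least-stable modes. The placeholder data, sampling rate, truncation rank, and fragility definition are assumptions for illustration only and do not reproduce the thesis implementation.

```python
import numpy as np

def exact_dmd(X, rank=None):
    """Exact DMD of a multichannel window X (channels x samples).

    Returns the discrete-time DMD eigenvalues and the exact DMD modes.
    Generic textbook formulation, not the thesis-specific pipeline.
    """
    X1, X2 = X[:, :-1], X[:, 1:]                     # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if rank is not None:                             # optional rank truncation
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank, :]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)               # reduced operator eigendecomposition
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W  # exact DMD modes (channels x rank)
    return eigvals, modes

# Illustrative usage on a placeholder 64-channel window sampled at 256 Hz
fs = 256.0
X = np.random.randn(64, 512)                         # stand-in for an EEG/MEG window
eigvals, modes = exact_dmd(X, rank=20)

freqs = np.abs(np.angle(eigvals)) * fs / (2 * np.pi)  # mode frequencies in Hz
growth = np.log(np.abs(eigvals)) * fs                  # continuous-time growth rates

# Assumed channel-fragility proxy (not the thesis definition): weight each
# channel by the magnitude of the three least-stable (fastest-growing) modes.
unstable = np.argsort(growth)[-3:]
channel_fragility = np.sum(np.abs(modes[:, unstable]), axis=1)
```

In a windowed analysis of this kind, such quantities would be recomputed for each sliding window and stacked over time, which is the general idea behind summaries like the DMDgram; the incremental approach in the third objective instead updates the decomposition sample by sample to avoid the window-size dependence.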