Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I'm training an autoencoder on a time series that consists of repeating patterns (the same process is repeated over and over). If I then use this autoencoder to reconstruct another one of these patterns, I expect the reconstruction to be worse if the pattern differs from the ones it was trained on.

Is the fact that the time series consists of repeating patterns something that needs to be considered for training or data preprocessing? I am currently training on the raw channels.

Thank you.
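For concreteness, the reconstruction-error check the question describes could look roughly like this (the window array, the trained Keras model `autoencoder`, and the threshold rule are illustrative assumptions, not part of the original post):

```python
import numpy as np

# Assumes `windows` has shape (n_windows, window_len, n_channels), cut from
# the time series so that each window covers one repetition of the pattern,
# and that `autoencoder` is an already-trained Keras model.
def reconstruction_errors(autoencoder, windows):
    recon = autoencoder.predict(windows)
    # Mean squared error per window, averaged over time steps and channels.
    return np.mean((windows - recon) ** 2, axis=(1, 2))

# Illustrative decision rule (train_windows / new_windows are placeholders):
#   train_err = reconstruction_errors(autoencoder, train_windows)
#   threshold = train_err.mean() + 3 * train_err.std()
#   is_unusual = reconstruction_errors(autoencoder, new_windows) > threshold
```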

  • @ShadowAether (OP)

    Original answer:

    Short answer: use a convolutional autoencoder (add convolution layers at the outer ends of the autoencoder, i.e. the first encoder layers and the last decoder layers).
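
    As a rough sketch (not the exact architecture from this answer), a 1-D convolutional autoencoder in Keras could look like the following; the window length, channel count, and filter sizes are placeholders:

    ```python
    from tensorflow import keras
    from tensorflow.keras import layers

    window_len, n_channels = 128, 1  # placeholder input shape

    inputs = keras.Input(shape=(window_len, n_channels))
    # Encoder: convolutions + pooling compress each window.
    x = layers.Conv1D(16, 7, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(8, 7, padding="same", activation="relu")(x)
    encoded = layers.MaxPooling1D(2)(x)

    # Decoder: transposed convolutions expand back to the original length.
    x = layers.Conv1DTranspose(8, 7, strides=2, padding="same", activation="relu")(encoded)
    x = layers.Conv1DTranspose(16, 7, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv1D(n_channels, 7, padding="same")(x)

    autoencoder = keras.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")
    # autoencoder.fit(train_windows, train_windows, epochs=50, batch_size=32)
    ```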

    Long answer: From my experience with time series and autoencoders, it is best to do as much feature extraction as possible outside the autoencoder, since it is harder to train it to do the feature extraction and the dimensionality reduction at the same time. Consider applying an FFT or a wavelet transform to your data first. Even if it doesn't isolate your pattern exactly, it often helps. After transforming the data, train the convolutional autoencoder on the features; then, to evaluate the model, invert the transformation and compare the reconstruction with the original signal.
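
    A sketch of that workflow with an FFT front end, using NumPy's rfft/irfft; splitting the spectrum into real and imaginary parts and the single-channel window shape are assumptions for illustration:

    ```python
    import numpy as np

    def to_features(windows):
        # windows: (n_windows, window_len) for one channel.
        spec = np.fft.rfft(windows, axis=1)
        # Stack real and imaginary parts so the autoencoder sees real-valued features.
        return np.concatenate([spec.real, spec.imag], axis=1)

    def from_features(features, window_len):
        # Undo the stacking and invert the FFT back to the time domain.
        half = features.shape[1] // 2
        spec = features[:, :half] + 1j * features[:, half:]
        return np.fft.irfft(spec, n=window_len, axis=1)

    # Train the convolutional autoencoder on to_features(train_windows), then:
    #   recon = from_features(autoencoder.predict(to_features(test_windows)), window_len)
    #   err = np.mean((test_windows - recon) ** 2, axis=1)  # compare in the time domain
    ```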