Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.
Original question:
I have a dataset that contains vectors of shape 1xN, where N is the number of features, and each value is a float between -4 and 5. For my project I need to build an autoencoder, but common activation functions constrain the outputs: ReLU only passes non-negative values, and tanh is bounded to (-1, 1). My concern is that when decoding from the latent space, the data will not be represented on the original scale: I will get vectors with only non-negative values, or values squashed into a narrow range, while I want the reconstruction to be close to the original.
Should I apply some kind of transformation, such as adding a positive constant, applying exp(), or squaring the data, train the VAE on the transformed data, and then apply the inverse (e.g., log() or log2()) to the output when I want the original representation? Or am I missing some configuration of the activation functions that can give me an output on the same scale as the original input?
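One answer, since the repost invites discussion: the range constraint only matters at the decoder's output layer. The hidden layers can keep ReLU or tanh; if the final layer is linear (i.e., no activation), the reconstruction is unbounded and can cover [-4, 5] directly, trained with plain MSE. Below is a minimal PyTorch sketch under that assumption; the layer sizes, latent_dim, and the Autoencoder class name are illustrative, not from the original question.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch (assumed architecture): hidden layers use ReLU, but the
# decoder's last layer has NO activation, so outputs are unbounded and can
# reconstruct values anywhere in [-4, 5].
class Autoencoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),                      # hidden activations are fine
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_features),      # linear output: no range constraint
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = Autoencoder(n_features=20)
x = torch.empty(32, 20).uniform_(-4.0, 5.0)  # toy batch in the stated range
recon = model(x)
loss = F.mse_loss(recon, x)                  # MSE handles real-valued targets
```

If you do want a bounded output, a matched affine rescale works too: map x' = (x + 4) / 9 so that [-4, 5] lands in [0, 1], put a sigmoid on the decoder output, and invert with x = 9 * x' - 4 at inference. Also note that log() is the inverse of exp(), not of squaring (that would be a signed square root), so whatever transform you pick, its exact inverse is what you apply to the output.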