It’s definitely possible, since all the code for generators on Perchance is openly available and downloadable, but unfortunately there’s no “one-click” way to do this yet - and it still requires a bit of coding knowledge at this point.
I think I wrote a comment related to this a few months back - basically you’d need to use something like ollama or llama.cpp or tabbyAPI or Aphrodite or vLLM or TGI (…etc) to run the AI text gen model (and for image gen, ComfyUI or Forge WebUI). Unfortunately even a top-of-the-line gaming GPU like a 4090 is not enough to run 70B text gen models fully in VRAM, so it may be slow. And then you’d need to swap out some code in perchance.org/ai-text-plugin and perchance.org/text-to-image-plugin so that it references your localhost API instead of Perchance’s server. You’d just fork the plugins, make the changes, then swap out the imports of the ai plugin for your new copies in the gens you want to self-host.
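To give a rough idea of the "swap out some code" step: most local inference servers (ollama, vLLM, tabbyAPI, TGI, etc.) expose an OpenAI-compatible `/v1/chat/completions` route, so the forked plugin would mostly just need to build its request against that instead of Perchance's server. Here's a hedged sketch - the port is ollama's default, and the model name and helper function are just placeholders, not anything from the actual plugin code:

```javascript
// Sketch only: endpoint, model name, and function name are assumptions.
// http://localhost:11434 is ollama's default port; vLLM/TGI/etc. differ.
const LOCAL_API = "http://localhost:11434/v1/chat/completions";

function buildLocalRequest(prompt, model = "llama3") {
  return {
    url: LOCAL_API,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
        stream: true, // stream tokens so the UI can render them as they arrive
      }),
    },
  };
}

// In the forked plugin, the request to Perchance's server would become
// something like:
//   const req = buildLocalRequest(prompt);
//   const resp = await fetch(req.url, req.options);
```

The OpenAI-compatible shape is handy here because switching between local backends then only means changing the URL and model name, not the request-building code.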
Someone in the community with some coding experience could do the work to make this easier for non-coders, and hopefully they share it in this forum if they do. I’ll likely get around to implementing something eventually, but probably won’t have time in the near future.
Very cool!! Nice job on this. One thing I was just imagining was an optional “high quality” mode where it screenshots the page using `getDisplayMedia`, which will be more accurate (e.g. iframes, modern CSS, etc.), but has the downside that it requires a browser permission popup: https://perchance.org/getdisplaymedia-screenshot-example
If the mode is set to high quality, and the user denies the permission (or their device doesn’t support it - e.g. mobile devices don’t currently support getDisplayMedia), then it could fall back to the normal approach.
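The feature-detect-then-fall-back logic could be sketched like this - `"fallback"` here just stands in for whatever the plugin's normal screenshot approach is, and the frame-to-canvas step is elided since it's the permission/fallback flow that matters:

```javascript
// Hedged sketch: try getDisplayMedia, fall back if unsupported or denied.
async function captureHighQuality() {
  // Feature-detect first: mobile browsers generally don't expose getDisplayMedia.
  const supported =
    typeof navigator !== "undefined" &&
    navigator.mediaDevices &&
    typeof navigator.mediaDevices.getDisplayMedia === "function";
  if (!supported) return { method: "fallback" };

  try {
    // This triggers the browser's screen-share permission popup.
    const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
    // ...draw a frame of the stream to a canvas here, then release the capture:
    stream.getTracks().forEach((t) => t.stop());
    return { method: "getDisplayMedia" };
  } catch (err) {
    // User denied the popup (NotAllowedError) or capture failed - fall back.
    return { method: "fallback" };
  }
}
```

One nice property of this shape is that the caller never sees an error: denial, lack of support, and success all resolve to a result it can act on.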
In any case, well done with this plugin!