This video was sponsored by PCBWay! (FREE $5 of new-user credit): https://pcbway.com/g/ea9jl9
GitHub: https://github.com/chromalock/CGBLinkVideo
If you're willing to use a static set of tiles and do non-real-time encoding, you can probably use photomosaic software. Metapixel is apparently packaged in Debian. Precomputing an optimal set of tiles for a given video would be a different matter.

EDIT: This text was written about twenty years ago:

Metapixel is fast. It takes about 75 seconds to generate a classical photomosaic for a 2048x2432 image with constituent images of size 64x64 and a database of slightly more than 11000 images on my not-so-fast Alpha. Most of this time is spent loading and saving images.

It's not intended for encoding video, and I suspect it isn't multithreaded, so it should be possible to parallelize metapixel processes at the frame level and significantly speed things up on modern processors with many cores.

EDIT2: I don't have an out-of-the-box approach for computing an optimal set of tiles for a given video segment, but if you had one, and you have spare bandwidth (which you should), you could send over a new tile set for each section of video while streaming tilemaps, keeping the next section's set buffered on the Game Boy side ahead of time.
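Frame-level parallelization of that sort is embarrassingly parallel, since every frame is encoded independently. A minimal sketch, with encode_frame as a stub standing in for the real per-frame work (in practice it would shell out to a single-threaded encoder like metapixel on one frame's image; that invocation is assumed, not shown):

```python
from concurrent.futures import ThreadPoolExecutor

def encode_frame(frame_path):
    # Stub: in practice this would launch one single-threaded encoder
    # per frame, e.g. subprocess.run(["metapixel", ...], check=True)
    # (exact command line hypothetical). It just returns the output
    # name here so the sketch runs on its own.
    return frame_path.replace(".png", ".tilemap")

def encode_all(frame_paths, workers=8):
    # Threads are enough here: each worker would spend its time waiting
    # on an external encoder process, so the real parallelism comes from
    # having `workers` encoder processes running at once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_frame, frame_paths))
```

This scales roughly linearly with cores until loading and saving images becomes the bottleneck, which, per the quote above, is where metapixel spends most of its time anyway.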
The first edit gets you something like 8088 Corruption, which naively compared every 8x8 block of input video to every code page 437 character in every color combination, then played from floppy to screen as quickly as possible. As an O(n²) algorithm it's very easy to bloat beyond any hope of real-time use - especially as you make things more flexible with tile flipping and so on. With a fixed-ish tileset you can at least speed up the search by averaging colors or building some kind of tree.
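A mean-color prefilter along those lines might look like this - a minimal sketch in pure Python, assuming tiles and blocks are flat lists of grayscale pixel values (a real encoder would use RGB or the Game Boy's 2-bit pixels). It's approximate: the prefilter can occasionally skip the true best tile, which is the usual trade for dodging the full O(n²) comparison:

```python
import bisect

def mean(pixels):
    return sum(pixels) / len(pixels)

def sum_sq_error(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_index(tiles):
    # Sort tile indices by average brightness, computed once up front.
    order = sorted(range(len(tiles)), key=lambda i: mean(tiles[i]))
    return order, [mean(tiles[i]) for i in order]

def best_tile(block, tiles, index, candidates=8):
    # Binary-search for tiles whose average brightness is close to the
    # block's, then run the expensive pixel-wise comparison on only
    # that small window instead of the whole tileset.
    order, means = index
    pos = bisect.bisect_left(means, mean(block))
    lo = max(0, pos - candidates // 2)
    window = order[lo:lo + candidates] or order
    return min(window, key=lambda i: sum_sq_error(block, tiles[i]))
```

The same shape generalizes to "some kind of tree": replace the sorted-by-mean list with a k-d tree over a few per-tile features and the window becomes a nearest-neighbor query.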
The second edit gets you that time I tried shoving Dragon's Lair onto the NES, and every clever tweak made it mushier.
Actually - I think the first Command & Conquer homebrewed its own video format, using big chunky tiles. (MPEG-1 decoding was bizarrely expensive in terms of both dollars and compute. And it still looked awful.) "The Bitter Lesson" tells us that's a search problem we should attack with speed instead of complexity.
So probably just K-means over all the tiles in your group-of-pictures.
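That could be sketched as a toy Lloyd's-iteration K-means in pure Python, again over flat grayscale tiles (an assumption): the centroids become the tile set for that group-of-pictures, and each block's cluster id becomes its tilemap entry.

```python
import random

def kmeans_tiles(tiles, k, iters=20, seed=0):
    # Cluster 8x8 blocks (flattened to lists of pixel values) into k
    # representative tiles. Returns (centroids, assignments): the
    # centroids are the derived tileset, the assignments the tilemap.
    rng = random.Random(seed)
    centroids = [list(t) for t in rng.sample(tiles, k)]
    assign = [0] * len(tiles)
    for _ in range(iters):
        # Assignment step: each block goes to its nearest centroid
        # by squared error.
        for i, t in enumerate(tiles):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(t, centroids[c])),
            )
        # Update step: each centroid becomes the mean of its members.
        for c in range(k):
            members = [tiles[i] for i in range(len(tiles)) if assign[i] == c]
            if members:
                centroids[c] = [sum(px) / len(members) for px in zip(*members)]
    return centroids, assign
```

For real use you'd quantize the centroids back to the Game Boy's 2-bit palette and re-run per group-of-pictures, so each section of video gets its own tile set - exactly the thing the second edit proposes streaming ahead of time.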