Time Crisis is bringing back the nostalgia with a new AI-powered light gun compatible with modern TVs. This kit from Achievement Electric will let you use light guns without needing to hunt down an old CRT or tinker with complicated setups.
The AI Gun Con system comes pre-loaded with the original Time Crisis game and is designed to work with LCD displays. Pricing starts at $89.99 for a basic unit and goes up to $119.99 for an arcade-mode setup, which includes a pedal controller.
Expect an official release date announcement as the Tokyo Game Show approaches.
Nice to see we’ve progressed from putting blockchain in everything to AI.
Computer vision stuff was being labelled AI long before LLMs hit the scene, and that'll be what's going on under the hood of this thing too. Sinden's light gun required a big white box/border to be drawn around the edge of the screen so that it could track that box to work out where the gun was pointed. That's pretty ugly, and you still had to mess around setting up emulators to use it (so the product stayed pretty niche).
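To make the border-tracking idea concrete, here's a toy numpy sketch (not Sinden's actual code, and the threshold/geometry are simplifying assumptions): threshold the camera frame, take the bounding box of the bright pixels as the screen, and report where the camera's center pixel falls inside it.

```python
import numpy as np

def track_border(frame, center, threshold=200):
    """Toy version of the white-border trick (not Sinden's actual code).

    frame: 2D grayscale camera image containing a bright screen border.
    center: (row, col) pixel the gun's camera is centered on.
    Returns the aim point as (x, y) fractions of the detected screen.
    """
    rows, cols = np.where(frame >= threshold)   # bright border pixels
    top, bottom = rows.min(), rows.max()        # bounding box of the
    left, right = cols.min(), cols.max()        # detected border
    y = (center[0] - top) / (bottom - top)      # 0..1 down the screen
    x = (center[1] - left) / (right - left)     # 0..1 across it
    return x, y

# Fake 480x640 camera frame: dark room, one bright rectangle for the screen.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:400, 150:550] = 255                   # the lit screen area
x, y = track_border(frame, center=(240, 320))
```

Real trackers also have to cope with perspective skew, reflections, and lamps in frame, which is where it stops being a ten-line function.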
If Dashine have got a model that can detect displays without the need for a border, that alone is epic. But also, by shipping a mini games console with the gun so you can just play a game with no messing about, it'll get better sales numbers and possibly reignite interest in light gun games outside of the emulator space. Naturally there are no light gun games on the Steam, PS5, or Xbox stores right now, so you need the gun on the market first before you can energize developers to ship games for it. So Dashine have been smart with this and might "trigger" a return of the genre to living rooms. Fingers crossed!
The new part is using AI as a marketing buzzword. It was previously mostly used as a descriptor to put complex systems in simple terms, but now it's being used in marketing to pretend things have something like Data from Star Trek's brain running them 😅
Weirdly I'd say it was the other way around. Late-90s marketing used "AI" to inflate basic decision trees, whereas "AI" in the context of this gun running an ANN model is a better application of the term. I'm old though; AI has been a buzzword since the 80s, when every org wanted their own expert system (all pitched/marketed as "AI"). There was so much groundwork for these "AIs" that never really came to be, like the whole Semantic Web movement with RDF in the late 90s and ontologists in every major org. And now you can practically replicate that with few-shot prompting.
I'm not suggesting we're near Data, far from it, but AI has been a buzzword for a lot longer than people have noticed. It's just that a lot of the technology is now commodity and in consumer products, so the average person gets marketed to as well. I remember pitches about the C128, and big orgs like GM swinging "AI" around for things like "it's got more RAM" and "we're using a database".
Oh I see, I guess the hype had died down by the time I was old enough to be aware of any.
… Does anyone know what the AI actually does?
Honestly, the fact that the article writer is shilling an AI product without actually explaining what the AI does is kinda making me doubt their journalistic integrity.
So I've thought about this a bit more. Games like this flash the screen black with a white square on the target, then detect whether the light gun is pointing at white or black. I guess they could take a picture of the TV, combine that with sensor data, feed it into a model, and figure out where on the screen the gun is pointed? I guess that would count as "AI"?
I’m sure the diehard lightgun fans won’t find it accurate enough though.
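For reference, the flash technique described above reduces to a couple of brightness comparisons. Here's a toy sketch (the function name, threshold, and two-frame protocol are all assumptions, not the actual hardware protocol):

```python
def flash_frame_hit(sensor_samples):
    """Toy model of the classic light-gun flash trick.

    sensor_samples: brightness readings (0.0-1.0) from the gun's light
    sensor on two consecutive frames: the all-black calibration frame,
    then the frame with a white box drawn on the target.
    Returns True if the gun saw dark then bright, i.e. it was pointed
    at the white target box rather than at a lamp or off-screen.
    """
    black_frame, target_frame = sensor_samples
    THRESHOLD = 0.5  # assumed cutoff between "dark" and "bright"
    return black_frame < THRESHOLD and target_frame >= THRESHOLD

# Pointed at the target: dark during the black frame, bright on the box.
flash_frame_hit([0.05, 0.9])   # True
# Pointed at a lamp: bright on both frames, so no hit registered.
flash_frame_hit([0.9, 0.9])    # False
```

The black calibration frame is what stops ambient light sources from registering as hits, which is also why the technique needs the display to respond within a frame.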
I'm assuming it's based on image recognition like the Sinden gun. That's a branch of AI different from the LLM systems that power things like ChatGPT.
Ha ha, hell yes. Retro Dodo is garbage; they're a site that posts whatever silly thoughts they happen to have about old video games. Announcements of upcoming releases like this are about the best they're good for, and you still have to read through the filler to see that they don't know anything more than the Japanese Twitter post that actually announced the thing.
Just to add some information: what's innovative here is that they're likely using a traditional machine learning model (e.g. a neural network) to identify the corners of the screen and infer the position the gun is aiming at from that.
Sinden aims to do the same thing, but with older techniques from classic computer vision: it adds a white border around the screen and uses CV algorithms to find that rectangle. It is not AI at all. The reason Sinden does it this way is that it's much easier (and therefore fast to compute, and very accurate).
Whatever AI they use, it will likely be less accurate and/or much slower (imagine a situation with low ambient light and the screen turning black). I've seen a review in Japanese from journalists who tried it, and the response time was not great (the team wants to halve it before release, which will still be worse than Sinden).
Another possibility is that there is no AI at all and they exploit specifics of Time Crisis: when you shoot, the screen goes white for one or two frames. You don't need AI to spot that frame and do something very similar to Sinden without using any border.
At that point it might be too late to move the "cursor" to the right location, but emulators nowadays are able to apply inputs in the past, internally "replaying" the last frames in the background so that you cancel the native input lag of some games (which can make them more responsive than the games running on real hardware). They could use this option and be done. You'd have a system that only works on games that, like Time Crisis, flash white frames when you shoot, with no white borders nor machine learning model.
TL;DR: if they use AI (i.e. machine learning) as they claim, there will be none of the constraints of existing alternatives (sensors / white borders), but it will likely be less accurate and/or less responsive. For Time Crisis specifically, a solution without those constraints is possible nowadays, so it's possible they have no AI at all and use the term for marketing purposes.
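Whichever way the corners are found (a border-tracking CV pipeline or a corner-detecting model), the second half of the problem is the same: map the four detected corners to an aim point. A minimal sketch, assuming the usual homography approach (all names and numbers here are illustrative, not from the product):

```python
import numpy as np

def screen_homography(corners_px):
    """Homography mapping camera pixels onto the unit square.

    corners_px: the screen's four corners as detected in the camera
    image, ordered top-left, top-right, bottom-right, bottom-left.
    Solves the standard 8-equation DLT system for the 3x3 matrix H.
    """
    dst = [(0, 0), (1, 0), (1, 1), (0, 1)]
    A, b = [], []
    for (x, y), (u, v) in zip(corners_px, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def aim_point(H, center_px):
    """Where the camera's optical center lands on the screen, in 0..1."""
    p = H @ np.array([center_px[0], center_px[1], 1.0])
    return (p[0] / p[2], p[1] / p[2])

# Skewed screen as seen in a 640x480 camera frame; the gun "points at"
# the camera's center pixel (320, 240).
H = screen_homography([(100, 80), (560, 120), (540, 420), (90, 380)])
x, y = aim_point(H, (320, 240))
```

The homography part is cheap and exact; the accuracy and latency arguments above are all about how fast and how reliably you can find those four corners in the first place.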
Thanks, this is helpful. Got any links to the Japanese review you read? Sounds way more useful than the retrododo.com article.
No problem, I found it in my history :) https://game.watch.impress.co.jp/docs/preview/1626343.html
So, what do they (emulators) call that feature (input lag hack)?
It's called "Run-Ahead" in RetroArch: https://docs.libretro.com/guides/runahead/
It's using AI image recognition, and I doubt it's the world's first to do so.
Sinden also works on image recognition, so this sounds fine, but I need to read more. Light gun games were some of my favorites in the arcades; I'll buy this or something like it.
Congratulations on inventing the Wii remote
Yeah… This won’t work well if it’s even real
You think it won’t work? Because it is “AI”?
Kneejerk “it’s a camera, they’re spying on you” gave way to “wait no this is sensible.”
Neural networks have been getting super good at any complex task with a simple approximate output. It's only all this LLM and generator nonsense that got the money-addicts horny and soured any mention of "AI." A webcam... sorry, a smartphone camera module that takes a photo and goes "yep, that's a bright rectangle, you're pointing at <0.23, 0.71>" would be a pain to code from scratch, but relatively straightforward to train.
Of course they might still be using that to spy on you, because we’re trapped in the belly of this horrible machine and the machine is bleeding to death.
It’s gonna be a camera on a stick. Physically, that’s about all it can be.
The hard part of bringing back light guns has been going from "here is an image of somebody's living room" to "there's a screen in this image, and the camera is centered on this specific pixel." Complex input, simple output. Neural networks are great at that. (Especially when being a little bit off is perfectly fine.) So namedropping "AI" in this product isn't just trend-chasing nonsense or a pointless unrelated feature.
But it is still taking pictures of your living room.
The old technology worked flawlessly with just a point on the screen and light detection. Why would we need something more complicated now?
Because the old technology had a point on the screen: a CRT draws its image with a scanning electron beam, so at any instant only one spot is actually lit, and the gun's sensor could catch the exact moment the beam passed the spot it was aimed at. LCDs light up all at once. All the pixels change color at roughly the same time, the whole image is there the whole time, and there's nothing to detect besides the entire bright rectangle.
Why do you think these peripherals have been gone for twenty years?
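For anyone curious why the CRT version was so simple: once the sensor fires, the time elapsed since vertical sync tells you which scanline the beam was on and how far along it, which is the whole position calculation. A toy sketch with NTSC-ish timing numbers (the exact constants here are illustrative assumptions):

```python
def beam_position(t_since_vsync_us, line_period_us=63.6,
                  visible_lines=240, active_start_us=10.9,
                  active_width_us=52.7):
    """Toy model of CRT light-gun timing (NTSC-ish numbers, all assumed).

    A CRT paints one scanline at a time; when the gun's photodiode sees
    the beam flash past, the microseconds elapsed since vertical sync
    tell you which line (y) and how far along it (x) the beam was.
    """
    line = int(t_since_vsync_us // line_period_us)       # which scanline
    t_in_line = t_since_vsync_us % line_period_us        # offset into it
    x = (t_in_line - active_start_us) / active_width_us  # 0..1 across
    y = line / visible_lines                             # 0..1 down
    return x, y

# Sensor fires halfway through scanline 120: dead center of the screen.
x, y = beam_position(120 * 63.6 + 10.9 + 52.7 / 2)
```

That's a subtraction and two divisions, no camera needed, which is exactly what an LCD's all-at-once refresh takes away from you.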
OLEDs can light up one spot at a time.
But they don’t.
Nevermind that I don’t own an OLED TV, and I’m betting you don’t either. LCDs won. And if you’ve got a simple way to make LCDs and lightguns “just work,” there’s a niche retro community waiting to throw money at you.
oh ok