That’s fundamentally not what this technology does.
It’s not impossible - neural networks are all about mapping complex inputs to sparse outputs. ‘Is this bullshit?’ is achievable, given a zillion labeled examples. But the way we train these models, we can’t know where a detector would be hilariously wrong, and you couldn’t just feed it the news every day to update what’s real. It’d be stuck with whatever was true at training time.
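As a rough sketch of what that approach looks like, here’s a toy supervised classifier. Everything in it is invented for illustration - the two examples, the labels, the feature choice - and the point is only the shape of the thing: it learns whatever its labeled examples said, and stays frozen there.

```python
# A minimal sketch of the classifier described above: supervised
# "is this bullshit?" detection from labeled examples. The training
# data here is made up; a real version lives or dies on its labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (text, 1 = bullshit, 0 = not).
texts = ["miracle cure doctors hate", "study finds modest effect in mice"]
labels = [1, 0]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# Frozen at whatever the training data said was real. To "update
# what's real" you retrain; you can't just feed it today's news.
print(detector.predict(["new miracle cure announced"]))
```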
It’s not evaluating information. It’s doing math on letters.
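To be literal about ‘math on letters’: here’s a toy version of the inner loop, with made-up numbers throughout, just to show that every step is arithmetic on stand-in integers and vectors - nothing that touches truth.

```python
# "Math on letters," literally: the model never sees meaning, just
# token IDs run through linear algebra. Toy numbers throughout.
import numpy as np

vocab = {"the": 0, "moon": 1, "is": 2, "cheese": 3}     # toy vocabulary
ids = [vocab[w] for w in "the moon is cheese".split()]  # text -> integers

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 8))  # one vector per token
weights = rng.normal(size=(8, 8))              # one "layer" of the network

hidden = embeddings[ids] @ weights  # the whole operation: matrix products
# Nothing here checks whether the moon is cheese. It's arithmetic on
# whichever numbers happen to stand in for the letters.
```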
The kind of bullshit detector we want requires going even further back, into AI concepts that didn’t pan out, and brute-forcing them with modern data sets. We tried being clever, and it wasn’t enough. But we tried doing a zillion times more linear algebra, and now it’s fucking everywhere.