The main use case for LLMs is writing text nobody wanted to read. The other use case is summarizing text nobody wanted to read. Except they don’t do that either. The Australian Securities and…
I had GPT 3.5 break down 6x 45-minute verbatim interviews into bulleted summaries and it did great. I even asked it to anonymize people’s names and it did that too. I did re-read the summaries to make sure no duplicate info or hallucinations existed and it only needed a couple of corrections.
Beats manually summarizing that info myself.
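(Not the commenter's actual setup — just a minimal sketch of that kind of summarize-and-anonymize pass, assuming the OpenAI Python SDK; the `transcripts/` folder, model name, and prompt wording are all illustrative.)

```python
# Rough sketch of a summarize-and-anonymize pass like the one described above.
# Assumes the OpenAI Python SDK; folder, model, and prompt are illustrative only.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Summarize the following interview transcript as concise bullet points. "
    "Replace every personal name with a neutral label such as 'Participant 1'. "
    "Do not add anything that is not in the transcript."
)

for path in sorted(Path("transcripts").glob("*.txt")):
    transcript = path.read_text()
    # Very long verbatim transcripts may need to be split into chunks
    # to fit the model's context window.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    Path(f"{path.stem}_summary.md").write_text(response.choices[0].message.content)
    # The re-read step mentioned above is still the important part: the model
    # can merge speakers or invent details, so check each summary against the source.
```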
Maybe their prompt sucks?
“Are you sure you’re holding it correctly?”
christ, every damn time
That is how tools tend to work, yes.
“tools” doesn’t mean “good”
good tools are designed well enough so it’s clear how they are used, held, or what-fucking-ever.
fuck, these simpleton takes are a pain in the arse. They’re always pushed by these idiots who have based their whole worldview on fortune-cookie aphorisms.
Lmfao yes, the internet was so easy for people to understand when it first was brought about.
I bet no one has ever held a hammer the wrong way, it’s just intuitive right?
Wrong, common sense isn’t actually that common, and many tools we use in our lives aren’t understood by even a small fraction of the population. The reality is that our entire civilization works using a lot of tools many people don’t understand, let alone use to the best of a given tool’s ability.
Regardless, pigeonholing me into a specific stance based on a single sentence is peak redmy.
we find they tend to post here, though not for long
it makes me feel fucking ancient to find that this dipshit didn’t seem to get the remark, and it wasn’t even that long ago
Jobs is Tech Jesus, but Antennagate is only recorded in one of the apocryphal books
I don’t know what you’re talking about in the slightest, but I do know LLMs can be helpful when used properly. Seemingly 99% of the time they aren’t being used properly, though, and grandiose claims get made about what they can do, which then triggers a reactionary response about them being useless when they don’t do the exaggerated or downright impossible thing they were said to accomplish.
Edit: Ah lol, seems like it’s a jab at the old iPhone Antennagate, obviously just a shit take in that regard.
For example, the company I work for is rolling out an AI assistant, fed by internal knowledge base pages that are… edited by AI, in a highly regulated industry where correct information is very important. I do not foresee it going well.
Said like a person who wouldn’t be able to correctly hold a hammer on first try
I got AcausalRobotGPT to summarise your post and it said “I’m not saying it’s always programming.dev, but”
@RagnarokOnline @dgerard “They failed to say the magic spells correctly”
Did you conduct or read all the interviews in full in order to verify no hallucinations?
How did you make sure no hallucinations existed without reading the source material; and if you read the source material, what did using an LLM save you?
I also use it for that pretty often. I always double check and usually it’s pretty good. Once in a great while it turns the summary into a complete shitshow but I always catch it on a reread, ask a second time, and it fixes things up. My biggest problem is that I’m dragged into too many useless meetings every week and this saves a ton of time over rereading entire transcripts and doing a poor job of summarizing because I have real work to get back to.
I also use it as a rubber duck. It works pretty well if you tell it what it’s doing and tell it to ask questions.
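(For what it’s worth, the “tell it what it’s doing and tell it to ask questions” part can be pinned in a system prompt. A minimal sketch, again assuming the OpenAI Python SDK; the prompt wording and model name are purely illustrative.)

```python
# Minimal "rubber duck" loop -- a sketch of the setup described above, not a
# known-good recipe. Assumes the OpenAI Python SDK; prompt and model are illustrative.
from openai import OpenAI

client = OpenAI()

DUCK_PROMPT = (
    "You are a rubber duck for debugging. I will explain a problem step by step. "
    "Do not propose solutions. Ask one short clarifying question at a time so I "
    "have to spell out my own assumptions."
)

history = [{"role": "system", "content": DUCK_PROMPT}]

while True:
    line = input("you> ")
    if line.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": line})
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"duck> {answer}")
```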
Isn’t the whole point of rubber duck debugging that the method works when talking to a literal rubber duck?
what if your rubber duck released just an entire fuckton of CO2 into the environment constantly, even when you weren’t talking to it? surely that means it’s better
deleted by creator