Gogo Sempai@programming.dev to Comic Strips@lemmy.world · 1 year ago
Didn't have ChatGPT back in the day to cook up professional-sounding paragraphs from bullet points
SpaceNoodle@lemmy.world · 1 year ago
What a terrible way to waste everyone's time.
Pechente@feddit.de · 1 year ago
Yeah, but this seems like more of a cultural issue than a ChatGPT issue if businesses expect emails to have a certain form.
oldGregg@lemm.ee · 1 year ago
The recipient just copies the message into ChatGPT and asks it for the summary. It's like a shitty cipher.
Gogo Sempai@programming.dev (OP) · 1 year ago
What's becoming mainstream these days:
The sender uses ChatGPT/Copilot/Bard to turn a summary of the content into a big, professional email.
The receiver uses ChatGPT/Copilot/Bard to break the big, professional email back down into a summary.
Time is saved, but what a waste of electricity (LLMs need GPU compute for fast output)!
jscummy · 1 year ago
Is time saved, though? It sounds like two useless steps have been added, with an extra layer of translation that could cause misunderstandings.