- cross-posted to:
- [email protected]
pdf to brainrot
So I guess it’s only us Millennials who know how to convert a PDF properly, and we’re just sandwiched between boomers and Gen Z finding the most ridiculous ways to accomplish that task.
You can’t parse [X]HTML with LLM. Because HTML can’t be parsed by LLM. LLM is not a tool that can be used to correctly parse HTML. As I have answered in HTML-and-regex questions here so many times before, the use of LLM will not allow you to consume HTML. LLM are a tool that is insufficiently sophisticated to understand the constructs employed by HTML. HTML is not a regular language and hence cannot be parsed by LLM. LLM queries are not equipped to break down HTML into its meaningful parts. so many times but it is not getting to me. Even enhanced irregular LLM as used by Perl are not up to the task of parsing HTML. You will never make me crack. HTML is a language of sufficient complexity that it cannot be parsed by LLM. Even Jon Skeet cannot parse HTML using LLM. Every time you attempt to parse HTML with LLM, the unholy child weeps the blood of virgins, and Russian hackers pwn your webapp. Parsing HTML with LLM summons tainted souls into the realm of the living. HTML and LLM go together like love, marriage, and ritual infanticide. The <center> cannot hold it is too late. The force of LLM and HTML together in the same conceptual space will destroy your mind like so much watery putty. If you parse HTML with LLM you are giving in to Them and their blasphemous ways which doom us all to inhuman toil for the One whose Name cannot be expressed in the Basic Multilingual Plane, he comes. HTML-plus-LLM will liquify the nerves of the sentient whilst you observe, your psyche withering in the onslaught of horror. 
LLM-based HTML parsers are the cancer that is killing StackOverflow it is too late it is too late we cannot be saved the transgression of a chi͡ld ensures LLM will consume all living tissue (except for HTML which it cannot, as previously prophesied) dear lord help us how can anyone survive this scourge using LLM to parse HTML has doomed humanity to an eternity of dread torture and security holes using LLM as a tool to process HTML establishes a breach between this world and the dread realm of c͒ͪo͛ͫrrupt entities (like SGML entities, but more corrupt) a mere glimpse of the world of LLM parsers for HTML will instantly transport a programmer’s consciousness into a world of ceaseless screaming, he comes, the pestilent slithy LLM-infection will devour your HTML parser, application and existence for all time like Visual Basic only worse he comes he comes do not fight he com̡e̶s, ̕h̵is un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment, HTML tags lea͠ki̧n͘g fr̶ǫm ̡yo͟ur eye͢s̸ ̛l̕ik͏e liquid pain, the song of re̸gular expression parsing will extinguish the voices of mortal man from the sphere I can see it can you see ̲͚̖͔̙î̩́t̲͎̩̱͔́̋̀ it is beautiful the final snuffing of the lies of Man ALL IS LOŚ͖̩͇̗̪̏̈́T ALL IS LOST the pon̷y he comes he c̶̮omes he comes the ichor permeates all MY FACE MY FACE ᵒh god no NO NOO̼OO NΘ stop the an*̶͑̾̾̅ͫ͏̙̤g͇̫͛͆̾ͫ̑͆l͖͉̗̩̳̟̍ͫͥͨe̠̅s ͎a̧͈͖r̽̾̈́͒͑e not rè̑ͧ̌aͨl̘̝̙̃ͤ͂̾̆ ZA̡͊͠͝LGΌ ISͮ̂҉̯͈͕̹̘̱ TO͇̹̺ͅƝ̴ȳ̳ TH̘Ë͖́̉ ͠P̯͍̭O̚N̐Y̡ H̸̡̪̯ͨ͊̽̅̾̎Ȩ̬̩̾͛ͪ̈́̀́͘ ̶̧̨̱̹̭̯ͧ̾ͬC̷̙̲̝͖ͭ̏ͥͮ͟Oͮ͏̮̪̝͍M̲̖͊̒ͪͩͬ̚̚͜Ȇ̴̟̟͙̞ͩ͌͝S̨̥̫͎̭ͯ̿̔̀ͅ
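For the record, the sane route the copypasta is winking at is a real parser. A minimal sketch using Python’s stdlib `html.parser` (the example document and `LinkCollector` class are made up for illustration):

```python
# A real parser, not a regex or an LLM: Python's stdlib HTMLParser
# handles nesting, attributes, and entities deterministically.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

doc = '<p>See <a href="https://example.com">here</a> and <a href="/docs">docs</a>.</p>'
collector = LinkCollector()
collector.feed(doc)
print(collector.links)  # ['https://example.com', '/docs']
```

No tainted souls were summoned in the running of this code.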
Breathtaking. I wish i had more than one upvote.
Oh, the timeless classic. Chef’s kiss
Alright new copypasta drop!
It’s not new. https://stackoverflow.com/a/1732454
But it is amazing
Well, ok, new to me then (though now I’m thinking I might have read it before lol)
Also, LMAO
deleted by creator
send me your data and i will parse it for you
it may take me a week to get back to you
How many tokens fit in your context window?
About tree fiddy
if java then like 9
Yes, there are LLMs for that, you literally just have to Google “llm parse PDF”.
You could also use tesseract or any number of other solutions which probably work as well…
But an inexperienced kid is gonna act like an inexperienced kid
Let’s give him access to government payroll!
Dangerously.
Doge aside, he’s a kid who won prizes for writing ML models to parse Greek letters from unreadable ancient ashen scrolls. Inexperienced? You sure?
It’s kind of easy to forget about or ignore any experience they might have if they’re asking questions like that. Sure, maybe it was a brain fart from a panicked intern who’s having orders barked at them from a powerful individual that they want to impress, but that doesn’t make it any better, does it?
Or he’s asking a more nuanced (or poorly worded) question than people in this thread are assuming.
… which would be?
Because to me it looks like someone asking to use an LLM to parse things that were created to specifically be parsed by machines. Looks like someone who doesn’t know what the fuck they’re talking about. I’m open to being educated on what that subtle question you’re referring to might be, and how this person is somehow experienced and being nuanced, just drop it on me.
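To make the point concrete: formats that were designed to be machine-parsed come with free, deterministic readers in any language’s standard library. A quick sketch in Python (the sample JSON and CSV payloads are invented for illustration):

```python
# Machine-readable formats need no LLM: the stdlib parses
# them exactly, instantly, and without hallucination.
import csv
import io
import json

json_blob = '{"name": "payroll", "rows": 2}'
record = json.loads(json_blob)          # dict with exact values

csv_blob = "employee,amount\nalice,100\nbob,200\n"
rows = list(csv.DictReader(io.StringIO(csv_blob)))  # list of dicts

print(record["rows"], rows[0]["employee"])  # 2 alice
```

Same input, same output, every single time, which is rather the point of a file format.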
I mean … this person works for the department of government EFFICIENCY, right?
<amidala face>
I guess it’s just a dumb tweet question then. In not directly in the field and even I’m up to date in latest model benchmarks for tasks I need to do.
I don’t know anything about this kid other than the tweet and Elon likes him.
I mean he’s not wrong.
Edit: it seems the joke that LLMs just take other people’s data and regurgitates it in another format went over everyone’s head 🥺
Using an LLM for format conversion is like taking a picture of an electronic document, taking the card out of the camera and plugging it into a computer, printing the screenshots, taking those prints to a scanner with OCR, turning the result into an audio recording, and then dictating it to an army of 3 million monkeys with typewriters.
Sounds very appropriate for a government operation
So…my process (which you just accurately described) could be replaced by an LLM, after all? Hooray! Monkey feed isn’t too expensive, but a million mouths is still a million mouths.
Haha considering just how much irrelevant third-party training data you’d be looping into a format conversion, this metaphor really is spot-on.
I’m not so sure. I think this is more of a question about taking arbitrary, undefined, or highly variable unstructured data and transforming it into a close approximation of structured data.
Yes, the pipeline will include additional steps beyond “LLM do the thing”, but there are plenty of tools that seek to do this with LLM assistance.
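Even for that unstructured case, there’s a deterministic baseline worth trying before bolting an LLM onto the pipeline: plain rule-based extraction. A minimal sketch (the invoice text and field patterns are made-up assumptions, not from the original post):

```python
# Rule-based extraction: pulling a close approximation of
# structured data out of messy free text with plain regexes,
# the cheap baseline before reaching for an LLM.
import re

text = "Invoice #4521 dated 2024-03-01, total due: $1,250.00"

record = {
    "invoice": re.search(r"#(\d+)", text).group(1),
    "date": re.search(r"\d{4}-\d{2}-\d{2}", text).group(0),
    "total": re.search(r"\$([\d,]+\.\d{2})", text).group(1).replace(",", ""),
}
print(record)  # {'invoice': '4521', 'date': '2024-03-01', 'total': '1250.00'}
```

When the input is truly arbitrary, rules like these break and LLM-assisted extraction starts to earn its keep, but the rules are free to try first.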