Over the last year I’ve been learning Swift and starting to put together some iOS apps. I’d definitely class myself as a Swift beginner.

I’m currently building an app and today I used ChatGPT to help with a function I needed to write. I found myself wondering if somehow I was “cheating”. In the past I would have used YouTube videos, online tutorials and Stack Overflow, and adapted what I found to work for my particular use case.

Is using ChatGPT different? ChatGPT explains the code it writes, and the code often still needs fettling to get it working, which makes me think it’s a useful learning tool. As long as I take the time to read the explanations given and make sure I understand what the code is doing, it’s probably a good thing on balance.

I was just wondering what other people’s thoughts are?

Also, as a side note, I found that chucking code I had written into ChatGPT and asking it to comment every line was pretty successful and a big time saver :D

  • mrkite@programming.dev · 2 years ago

    My favorite test for ChatGPT is to ask it to write a function to divide two numbers in 6502 assembly. Not only is there no DIV instruction to rely on, but the 6502 is very register starved, so you get a lot of screwups. Here’s one example of what ChatGPT generated:

    ; Inputs:
    ; - A: dividend
    ; - X: divisor
    
    DIVISION:
        LDX #0       ; initialize quotient to 0
        CPX #0       ; check for divide-by-zero error
        BEQ DIVIDE_BY_ZERO
        CLC          ; clear carry flag
    DIV_LOOP:
        CMP X,A      ; compare divisor with dividend
        BCC DIV_DONE ; if divisor is greater, division is complete
        SEC          ; set carry flag
        ROL A        ; shift dividend left
        INX          ; increment quotient
        JMP DIV_LOOP ; continue division
    DIV_DONE:
        RTS          ; return from subroutine
    DIVIDE_BY_ZERO:
        ; handle divide-by-zero error here
        RTS
    

    You can see it immediately overwrites the divisor with the quotient (the LDX #0), so this thing will always take the divide-by-zero branch. But even if it didn’t do that, CMP X,A is an invalid instruction. And even if that weren’t invalid, multiplying the dividend by two (and adding one) each iteration is nonsense.
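    For contrast, the standard answer on a DIV-less CPU like the 6502 is binary long division (shift-and-subtract). Here’s a sketch of that algorithm in Python rather than assembly, just to show what the routine is supposed to compute:

```python
def divide(dividend: int, divisor: int) -> tuple[int, int]:
    """Binary long division (shift-and-subtract) for 8-bit unsigned
    operands -- the usual technique on CPUs with no DIV instruction.
    Returns (quotient, remainder)."""
    if divisor == 0:
        raise ZeroDivisionError("divide by zero")
    quotient = 0
    remainder = 0
    for bit in range(7, -1, -1):              # walk dividend bits, MSB first
        remainder = (remainder << 1) | ((dividend >> bit) & 1)
        quotient <<= 1
        if remainder >= divisor:              # on the 6502: CMP + SBC
            remainder -= divisor
            quotient |= 1                     # set the current quotient bit
    return quotient, remainder

print(divide(100, 7))   # (14, 2)
```

    Each loop iteration maps onto a handful of 6502 instructions (ASL/ROL for the shifts, CMP/SBC for the conditional subtract), which is exactly the structure the generated code was vaguely imitating without ever actually subtracting.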

    • Deely@programming.dev · 2 years ago

      Honestly I still don’t get it. Every dialog with ChatGPT where I tried to do something meaningful always ends with ChatGPT hallucinations. It answers general questions, but it imagines something every time. I ask for a list of command-line renderers, it returns a list with a few renderers that don’t have a CLI interface. I ask about a library that does something, it returns 5 libraries, including one that definitely can’t do it. And so on, and so on. ChatGPT is good at trivial tasks, but I don’t need help with trivial tasks, I can do trivial tasks myself… Sorry for the rant.

      • auv_guy@programming.dev · 2 years ago

        That’s what (most) people don’t understand. It’s a language model. It’s not an expert system and it’s not a magical know-it-all oracle. It’s supposed to give you an answer like a random human would. But people trust it much more than they would trust a random stranger, because “it is an AI”…

      • Dazawassa@programming.dev · 2 years ago

        No, you aren’t the only one. I’ve prompted ChatGPT before for SFML library commands, and every time it’s given me commands that either don’t work anymore or just never existed.

      • axo10tl@sopuli.xyz · 1 year ago (edited)

        That’s because ChatGPT and LLMs are not oracles. They don’t take into account whether the text they generate is factually correct, because that’s not the task they’re trained for. They’re only trained to generate the next statistically most likely word, then the next word, and then the next one…
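        That word-by-word loop can be illustrated with a toy model. The vocabulary and probabilities below are invented purely for illustration; a real LLM conditions on the whole context with a neural network, but the sampling loop has the same shape:

```python
import random

# Toy "language model": maps the previous word to a probability
# distribution over next words. Words and weights are made up.
TOY_MODEL = {
    "the":  {"cat": 0.5, "dog": 0.3, "code": 0.2},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "dog":  {"ran": 0.6, "sat": 0.4},
    "code": {"ran": 0.9, "sat": 0.1},
    "sat":  {"down": 1.0},
    "ran":  {"away": 1.0},
}

def generate(start: str, steps: int, rng: random.Random) -> list[str]:
    """Generate text one word at a time by sampling a statistically
    likely continuation. Note that truth never enters the loop."""
    words = [start]
    for _ in range(steps):
        dist = TOY_MODEL.get(words[-1])
        if dist is None:                      # no known continuation
            break
        nxt = rng.choices(list(dist), weights=list(dist.values()))[0]
        words.append(nxt)
    return words

print(" ".join(generate("the", 3, random.Random(42))))
```

        Every output is grammatical-looking and locally plausible, because plausibility is the only thing being optimized; whether “the code ran away” is true is simply not part of the objective.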

        You can take a parrot to a math class, have it listen to lessons for a few months and then you can “have a conversation” about math with it. The parrot won’t have a deep (or any) understanding of math, but it will gladly replicate phrases it has heard. Many of those phrases could be mathematical facts, but just because the parrot can recite the phrases, doesn’t mean it understands their meaning, or that it could even count 3+3.

        LLMs are the same. They’re excellent at reciting known phrases, even combining popular phrases into novel ones, but even then the model lacks any understanding behind the words and sentences it produces.

        If you give an LLM a task in which your objective is to receive factually correct information, you might as well be asking a parrot: the answer may well be factually correct, but it might just as well be a hallucination. In both cases the responsibility of fact-checking falls 100% on your shoulders.

        So even though LLMs aren’t good for information retrieval, they’re exceptionally good at text generation. The ideal use cases for LLMs thus lie in the domain of text generation, not information retrieval or facts. If you recognize and understand this, you’re all set to use ChatGPT effectively, because you know what kind of questions it’s good for, and what kind of questions it’s absolutely useless for.

    • Dazawassa@programming.dev · 2 years ago

      I’ve only ever done x86 assembly. But oh lord, that does not look like it can really do much, yet it still somehow has like 20 lines.