Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’

Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

  • @Scubus
    1 • 11 months ago

    You seem like you’re familiar with back-propagation. From my understanding, tokens are basically just bits of information that are assigned a predicted fitness, and the token with the highest fitness is then used for back-propagation.

    Eli5: I’m making a recipe. At step 1, I decide on a base ingredient. At step 2, based on my starting ingredient, I speculate what would go well with it. Step 3 is to implement that ingredient. Step 4 is to start over at step 2. Each “step” here would be a token.

    I am also not a professional, but I do do a lot of hobby work that involves coding AIs. So if I am incorrect or phrased that poorly, feel free to correct me.
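The recipe loop described above can be sketched as greedy next-token generation. Everything here is made up for illustration: `score_next` and its tiny scoring table stand in for a trained model, which would instead score every token in its vocabulary at each step.

```python
# Hypothetical sketch of the "recipe" loop as greedy next-token selection.
# score_next and its scoring table are made up; a real model would assign
# a fitness score to every token in its vocabulary at each step.
def score_next(recipe):
    table = {
        (): {"rice": 0.9, "beef": 0.4},
        ("rice",): {"chicken": 0.8, "beef": 0.3},
        ("rice", "chicken"): {"soy sauce": 0.7, "<end>": 0.2},
        ("rice", "chicken", "soy sauce"): {"<end>": 0.9},
    }
    return table[tuple(recipe)]

recipe = []                             # step 1 onward: build up the recipe
while True:
    scores = score_next(recipe)         # step 2: speculate each token's fitness
    best = max(scores, key=scores.get)  # pick the highest-fitness token
    if best == "<end>":
        break
    recipe.append(best)                 # step 3: implement it, then repeat

print(recipe)  # → ['rice', 'chicken', 'soy sauce']
```

Each pass through the loop plays the role of one “step” in the analogy: the sequence so far is the context, and the chosen token becomes part of the new base.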

    • @[email protected]
      2 • 11 months ago

      I did manage to write a back-propagation algorithm, but at this point I don’t fully understand the math behind it. Generally, back-propagation algorithms take the activation and calculate the delta(?) from the activation and the target output (only on the last layer). I don’t know where tokens come in. From your comment it sounds like they have something to do with an unsupervised learning network. I am also not a professional. Sorry if I didn’t really understand your comment.
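The last-layer delta mentioned above can be sketched as follows, assuming a sigmoid activation and a squared-error loss (other loss/activation choices change the formula). `output_delta` and the numbers below are hypothetical.

```python
# A sketch of the output-layer delta, assuming sigmoid activation and
# squared-error loss; other choices of loss or activation change this.
def output_delta(activation, target):
    # delta_j = (a_j - t_j) * a_j * (1 - a_j), where a_j * (1 - a_j) is
    # the sigmoid derivative written in terms of the activation itself
    return [(a - t) * a * (1 - a) for a, t in zip(activation, target)]

# Two output units: activations 0.8 and 0.3, targets 1.0 and 0.0.
deltas = output_delta([0.8, 0.3], [1.0, 0.0])
print(deltas)
```

These per-unit deltas are what get propagated backward to compute the deltas of earlier layers and, from those, the weight gradients.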

      • @Scubus
        2 • 11 months ago

        Mathematically, I have no idea where the tokens come in exactly. My studies have been more conceptual than actually getting down to the nitty-gritty, for the most part.

        But conceptually, from my understanding, tokens are just variables that are assigned a speculated fitness and then used as the new “base” data set.

        I think chicken would go well in this, but beef wouldn’t be as good. My token is the next ingredient I am deciding to put in.
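The chicken-versus-beef choice can be sketched as raw fitness scores turned into probabilities with a softmax, then picking the most probable token. The ingredient names and scores below are hypothetical, not from any real model.

```python
import math

# Made-up fitness scores for two candidate next tokens.
scores = {"chicken": 2.0, "beef": 0.5}

# Softmax: exponentiate each score and normalize so they sum to 1.
total = sum(math.exp(s) for s in scores.values())
probs = {tok: math.exp(s) / total for tok, s in scores.items()}

# Greedy choice: take the highest-probability token as the next ingredient.
choice = max(probs, key=probs.get)
print(choice)  # → chicken
```

Sampling from `probs` instead of always taking the maximum is one common alternative, which is why the same prompt can yield different outputs.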