• Artyom@lemm.ee
    5 days ago

    This is a solvable problem, but it requires humans to write the algorithms. For example, AIs can’t add, but there are ways to hook in external software that can, and the AI learns when to call it. Similarly, we can train an AI to solve logic puzzles if we give it an algorithm, but it can’t solve a puzzle that no algorithm can.
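    A minimal sketch of that hook-in idea (the `CALC(...)` marker and function names are hypothetical, not any particular framework's API): the model emits a structured tool call in its text, and plain external software does the exact arithmetic.

```python
import re

def run_with_calculator(model_output: str) -> str:
    """Replace CALC(expr) markers, emitted by a hypothetical model,
    with results computed by ordinary software."""
    def evaluate(match: re.Match) -> str:
        # External, exact arithmetic -- no neural network involved.
        a, b = int(match.group(1)), int(match.group(3))
        op = match.group(2)
        return str({"+": a + b, "-": a - b, "*": a * b}[op])
    return re.sub(r"CALC\((\d+)([+\-*])(\d+)\)", evaluate, model_output)

# The "model" produced text containing a tool call:
print(run_with_calculator("The sum is CALC(123456789+987654321)."))
# → The sum is 1111111110.
```

    Real tool-use setups work the same way in spirit: the model only has to learn *when* to emit the call, not how to do the arithmetic.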

    • mindbleach
      4 days ago

      This is adorably misguided. We tried letting humans write the rules. It didn’t work.

      The recent explosion of neural-network stuff is exciting specifically because it doesn’t rely on us understanding how the fuck it works.

    • jacksilver@lemmy.world
      5 days ago

      Some AI can do math, but LLMs and neural networks aren’t designed for complex math, as all of their operations are (typically) based around addition. With enough pathways a network can learn multiplication through sheer brute force, but only within the input space (hence the article’s comments about 3- or 4-digit numbers).
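      A toy way to see the “only within the input space” point: a model that has effectively memorized the products it saw during training has nothing to fall back on for larger numbers. Here a plain lookup table stands in for the learned pathways (an illustration of the failure mode, not of how a network actually stores anything):

```python
# Stand-in for "learned" multiplication: memorize every product
# of operands seen in training (here, up to 2 digits each).
trained = {(a, b): a * b for a in range(100) for b in range(100)}

def model_multiply(a: int, b: int):
    # Returns the memorized answer, or None outside the input space.
    return trained.get((a, b))

print(model_multiply(12, 34))      # in-distribution: 408
print(model_multiply(1234, 5678))  # out-of-distribution: None
```

      A real network fails less cleanly than returning None, of course: it produces a confident-looking wrong answer instead.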

      At the end of the day, LLMs are for processing text. Multimodal models generally convert audio/images into text so there is a common means of evaluation. But none of that is concerned with logic, which is fundamental to math.