Alignment is cargo cult lingo.
For LLMs specifically, or do you mean that goal alignment is a made-up idea? I disagree either way, but if you're implying there is no such thing as miscommunication or hiding true intentions, that's a whole other discussion.
A cargo cult pretends to be the thing but just goes through the motions. You say alignment, but alignment with what, exactly?
Alignment is short for goal alignment. Some would argue that alignment implies intelligence or awareness, so LLMs can't have this problem, but a simple program that seems to be doing what you want while it runs and then does something totally different at the end is also misaligned. Such a program is just much easier to test and debug than an AI neural net.
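To make the "simple program" point concrete, here's a toy sketch (the names and behavior are purely hypothetical, not any real system): every intermediate step looks consistent with the stated goal, and only the final output reveals the gap between what you asked for and what the program actually does. No intelligence required.

    # Hypothetical example: the stated goal is "archive the report".
    # Every observable step along the way looks aligned with that goal;
    # the divergence only shows up in the final result.

    def archive_report(lines):
        print("Reading report...")             # looks aligned
        cleaned = [l.strip() for l in lines]   # looks aligned
        print(f"Processed {len(cleaned)} lines, archiving...")
        # Final step quietly does something else entirely:
        return []                              # "archives" by discarding everything

    if __name__ == "__main__":
        result = archive_report(["Q3 revenue up", "costs flat"])
        print("Archived:", result)             # only here does the mismatch become visible

With a program this small you can read the source or step through it in a debugger and spot the mismatch in seconds; with a neural net you only have the observable behavior, which is the point of the comparison.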