The general consensus seems to be that Llama 4 was a flop, and the head of Meta's AI research division was let go.

Do you think it was a bad fp32 conversion, or just underwhelming models all around?

2T parameters was a big jump without much gain. If throwing compute and parameters at the problem isn't enough to stay competitive anymore, how do you think the next big performance gains will be made? Better CoT reasoning patterns? Omnimodality? Something entirely new?