• VirtualOdour

    Custom hardware designed with AI pipelines in mind, similar to how GPU architecture solved a lot of rendering issues through how memory can be accessed and which operations are prioritized. The idea people have been talking about is basically an LLM on one part of the chip with other NNs beside it that can modify its biases, effectively setting the 'mood' and steering focus as the answer is generated. That should help enable creativity in some areas while locking it out in others. Coding, for example, benefits from creativity in structure or variable names but needs to be strictly factual about function names and mathematical operations.
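
    To make the idea a bit more concrete, here's a rough software sketch of what that "mood" mechanism could look like: a small side network reads the main model's hidden state and nudges its next-token distribution, loosening it for creative spans and tightening it for factual ones. This is just an illustration in PyTorch with made-up names and shapes, not anyone's actual chip design or API.

    ```python
    import torch
    import torch.nn as nn

    class MoodController(nn.Module):
        """Hypothetical side network: maps the LLM's current hidden state
        to a sampling temperature and a per-token logit bias."""
        def __init__(self, hidden_dim: int, vocab_size: int):
            super().__init__()
            self.temperature_head = nn.Sequential(
                nn.Linear(hidden_dim, 1), nn.Softplus()  # keep temperature positive
            )
            self.bias_head = nn.Linear(hidden_dim, vocab_size)

        def forward(self, hidden: torch.Tensor):
            temperature = self.temperature_head(hidden) + 0.1  # avoid dividing by ~0
            logit_bias = self.bias_head(hidden)
            return temperature, logit_bias

    def modulated_next_token(logits, hidden, controller):
        """Apply the controller's 'mood' to the base model's raw logits before sampling.
        High temperature = looser, more creative; low = tighter, more factual."""
        temperature, logit_bias = controller(hidden)
        adjusted = (logits + logit_bias) / temperature
        probs = torch.softmax(adjusted, dim=-1)
        return torch.multinomial(probs, num_samples=1)

    # Toy usage with random tensors standing in for a real LLM's outputs.
    hidden_dim, vocab_size = 64, 1000
    controller = MoodController(hidden_dim, vocab_size)
    hidden = torch.randn(1, hidden_dim)   # last hidden state from the LLM
    logits = torch.randn(1, vocab_size)   # LLM's raw next-token logits
    token = modulated_next_token(logits, hidden, controller)
    ```

    On dedicated hardware the appeal would be that the controller sits right next to the main model's weights and adjusts them (or the activations) in-flight, rather than bolting a second forward pass onto a GPU like this sketch does.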

    I think it's very unlikely to be the way things go, based on the progress being made with pure LLMs and LLM architectures, but maybe in the future it'll turn out to be a more efficient way of solving the problem, especially with AI-designed chips.