If you were running an LLM locally on Android through llama.cpp for use as a private personal assistant, what model would you use?

Thanks in advance for any recommendations.

  • throwawayacc0430
    5 days ago

    Not sure if a mobile device has that type of processing power lol