Try exo, an open-source framework that runs large models distributed across ordinary computers that individually lack the compute. Start exo on each machine and the nodes automatically pool their compute over p2p, enough to run a model like Llama 3.1 405B without an NVIDIA 4090:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3-8b",
    "messages": [{"role": "user", "content": "What is the meaning of exo?"}],
    "temperature": 0.7
  }'
https://github.com/exo-explore/exo
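The same request can also be made from Python. A minimal sketch using only the standard library, assuming a local exo node exposes the OpenAI-compatible endpoint shown in the curl example above (the `chat` helper name is mine, not part of exo):

```python
import json
import urllib.request

def chat(prompt, model="llama-3-8b", host="http://localhost:8000"):
    """Send a chat completion request to a local exo node via its
    OpenAI-compatible /v1/chat/completions endpoint (per the curl example)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    req = urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # Parse the standard OpenAI-style response shape.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires a running exo node on localhost:8000):
# print(chat("What is the meaning of exo?"))
```

Because the API is OpenAI-compatible, existing OpenAI client libraries should also work if pointed at the local node's base URL.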