  * [[https://ai.facebook.com/blog/large-language-model-llama-meta-ai/|Blog]] {{ :projets:ia:llama:introducing_llama_a_foundational_65-billion-parameter_language_model_10_03_2023_11_35_29_.html |Archive of 24/02/2023, captured 10/03/2023}}
  * Paper: [[https://arxiv.org/pdf/2302.13971.pdf|LLaMA: Open and Efficient Foundation Language Models]] {{ :projets:ia:llama:2302.13971.pdf |Archive of 27/02/2023, captured 10/03/2023}}
  * [[https://github.com/facebookresearch/llama|GitHub]] {{ :projets:ia:llama:llama-main-2023-03-10.zip |Archive of 07/03/2023, captured 10/03/2023}}. Requires substantial GPU resources. Weights download: ''magnet:?xt=urn:btih:ZXXDAUWYLRUXXBHUYEMS6Q5CE5WA3LVA&dn=LLaMA''
  * [[https://github.com/juncongmoo/pyllama|pyllama]] {{ :projets:ia:llama:pyllama-main-2023-03-10.zip |Archive of 10/03/2023, captured 10/03/2023}}. Single-GPU version.