
web.archive.org/web/20230306130301/https:/ai.facebook.com/blog/large-language-model-llama-meta-ai
Preview meta tags from the web.archive.org website.
Linked Hostnames: 1

Thumbnail

Search Engine Appearance
Google
https://web.archive.org/web/20230306130301/https:/ai.facebook.com/blog/large-language-model-llama-meta-ai
Introducing LLaMA: A foundational, 65-billion-parameter language model
Today, we’re releasing our LLaMA (Large Language Model Meta AI) foundational model with a gated release. LLaMA is more efficient and competitive with...
Bing
Introducing LLaMA: A foundational, 65-billion-parameter language model
https://web.archive.org/web/20230306130301/https:/ai.facebook.com/blog/large-language-model-llama-meta-ai
Today, we’re releasing our LLaMA (Large Language Model Meta AI) foundational model with a gated release. LLaMA is more efficient and competitive with...
DuckDuckGo

Introducing LLaMA: A foundational, 65-billion-parameter language model
Today, we’re releasing our LLaMA (Large Language Model Meta AI) foundational model with a gated release. LLaMA is more efficient and competitive with...
General Meta Tags (14)
- title: Introducing LLaMA: A foundational, 65-billion-parameter language model
- charset: utf-8
- referrer: default
- description: Today, we’re releasing our LLaMA (Large Language Model Meta AI) foundational model with a gated release. LLaMA is more efficient and competitive with...
- viewport: width=device-width, initial-scale=1
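Tags like these can be pulled out of a page's markup with nothing beyond the standard library. A minimal sketch, using Python's `html.parser`; the sample HTML snippet below is an illustrative reconstruction of the tags listed above, not the archived page itself:

```python
# Minimal sketch: collecting <meta> and <title> tags from HTML with only
# the Python standard library. The sample markup is a reconstruction for
# illustration, not the actual archived page source.
from html.parser import HTMLParser


class MetaTagParser(HTMLParser):
    """Collects <meta> attributes and the <title> text into a dict."""

    def __init__(self):
        super().__init__()
        self.tags = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            if "charset" in a:
                self.tags["charset"] = a["charset"]
            elif "name" in a and "content" in a:
                self.tags[a["name"]] = a["content"]

    def handle_data(self, data):
        if self._in_title:
            self.tags["title"] = data

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False


sample = """
<head>
  <meta charset="utf-8">
  <title>Introducing LLaMA: A foundational, 65-billion-parameter language model</title>
  <meta name="referrer" content="default">
  <meta name="viewport" content="width=device-width, initial-scale=1">
</head>
"""

parser = MetaTagParser()
parser.feed(sample)
print(parser.tags["charset"])   # utf-8
print(parser.tags["referrer"])  # default
```

The same parser would also pick up the `description`, `og:*` (via the `property` attribute, which this sketch omits), and `twitter:*` tags on a full page.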
Open Graph Meta Tags (1)
- og:image: https://web.archive.org/web/20230306130406im_/https://scontent-sjc3-1.xx.fbcdn.net/v/t39.2365-6/333095137_1286826058904423_4144395724304288774_n.png?_nc_cat=110&ccb=1-7&_nc_sid=ad8a9d&_nc_ohc=_rj4PKlpHhoAX_iQaGy&_nc_ht=scontent-sjc3-1.xx&oh=00_AfBgMOwYD4n203xR_3bqBnZVlN2IjZhLp98KovOBFSTqNg&oe=640B85AB
Twitter Meta Tags (1)
- twitter:card: summary
Link Tags (38)
- canonical: https://web.archive.org/web/20230306130406/https://ai.facebook.com/blog/large-language-model-llama-meta-ai/
- preload: https://web.archive.org/web/20230306130406/https://static.xx.fbcdn.net/rsrc.php/v3/yN/l/0,cross/Xgza7fSbTtb.css?_nc_x=Ij3Wp8lg5Kz
- preload: https://web.archive.org/web/20230306130406/https://static.xx.fbcdn.net/rsrc.php/v3/ye/l/0,cross/zHLDqluIxRK.css?_nc_x=Ij3Wp8lg5Kz
- preload: https://web.archive.org/web/20230306130406/https://static.xx.fbcdn.net/rsrc.php/v3/yt/l/0,cross/rc5XmnTvr2D.css?_nc_x=Ij3Wp8lg5Kz
- preload: https://web.archive.org/web/20230306130406/https://static.xx.fbcdn.net/rsrc.php/v3/yV/l/0,cross/KxGJ10xTR_J.css?_nc_x=Ij3Wp8lg5Kz
Links (42)
- https://web.archive.org/web/20230306130406/https://ai.facebook.com
- https://web.archive.org/web/20230306130406/https://ai.facebook.com/blog
- https://web.archive.org/web/20230306130406/https://ai.facebook.com/blog/ai-math-theorem-proving
- https://web.archive.org/web/20230306130406/https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b
- https://web.archive.org/web/20230306130406/https://ai.facebook.com/blog/dino-paws-computer-vision-with-self-supervised-transformers-and-10x-more-efficient-training