github.com/anthropics/hh-rlhf
Preview meta tags from the github.com website.
Linked Hostnames (10)
- 76 links to github.com
- 4 links to docs.github.com
- 2 links to arxiv.org
- 2 links to resources.github.com
- 1 link to github.blog
- 1 link to huggingface.co
- 1 link to partner.github.com
- 1 link to skills.github.com
Thumbnail
Search Engine Appearance
https://github.com/anthropics/hh-rlhf
GitHub - anthropics/hh-rlhf: Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" - anthropics/hh-rlhf
Bing
GitHub - anthropics/hh-rlhf: Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
https://github.com/anthropics/hh-rlhf
Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" - anthropics/hh-rlhf
DuckDuckGo
GitHub - anthropics/hh-rlhf: Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" - anthropics/hh-rlhf
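These snippet previews are built from the page's <title> element and description text. A minimal sketch of the description tag implied by the text above (the tag itself is not listed in this report, so treat it as an assumption):

```html
<!-- Assumed description tag; the report lists only the rendered text -->
<meta name="description" content="Human preference data for &quot;Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback&quot; - anthropics/hh-rlhf">
```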
General Meta Tags (46)
- title: GitHub - anthropics/hh-rlhf: Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
- charset: utf-8
- route-pattern: /:user_id/:repository
- route-controller: files
- route-action: disambiguate
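A minimal sketch of how these entries would appear in the page's <head>, reconstructed from the values above (the title is the <title> element; attribute names and quoting are assumptions, and GitHub's actual markup may differ):

```html
<!-- Sketch reconstructed from the values listed above; not GitHub's verbatim source -->
<meta charset="utf-8">
<title>GitHub - anthropics/hh-rlhf: Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"</title>
<meta name="route-pattern" content="/:user_id/:repository">
<meta name="route-controller" content="files">
<meta name="route-action" content="disambiguate">
```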
Open Graph Meta Tags (9)
- og:image: https://opengraph.githubassets.com/0b8399a942e9b3ff7dc40c34fcf6f7bc57fe357f2b864dbf006e1cd1a06fb2a0/anthropics/hh-rlhf
- og:image:alt: Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" - anthropics/hh-rlhf
- og:image:width: 1200
- og:image:height: 600
- og:site_name: GitHub
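Expressed as markup, these Open Graph entries correspond to <meta property="og:..."> tags roughly as sketched below (a reconstruction from the listing; the escaping of the embedded double quotes is an assumption):

```html
<!-- Sketch of the Open Graph tags implied by the listing above -->
<meta property="og:image" content="https://opengraph.githubassets.com/0b8399a942e9b3ff7dc40c34fcf6f7bc57fe357f2b864dbf006e1cd1a06fb2a0/anthropics/hh-rlhf">
<meta property="og:image:alt" content="Human preference data for &quot;Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback&quot; - anthropics/hh-rlhf">
<meta property="og:image:width" content="1200">
<meta property="og:image:height" content="600">
<meta property="og:site_name" content="GitHub">
```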
Twitter Meta Tags (5)
- twitter:image: https://opengraph.githubassets.com/0b8399a942e9b3ff7dc40c34fcf6f7bc57fe357f2b864dbf006e1cd1a06fb2a0/anthropics/hh-rlhf
- twitter:site: @github
- twitter:card: summary_large_image
- twitter:title: GitHub - anthropics/hh-rlhf: Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
- twitter:description: Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" - anthropics/hh-rlhf
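The Twitter card entries map onto <meta name="twitter:..."> tags; a sketch based on the values above (whether GitHub emits name= or property= attributes here is an assumption):

```html
<!-- Sketch of the Twitter card tags implied by the listing above -->
<meta name="twitter:image" content="https://opengraph.githubassets.com/0b8399a942e9b3ff7dc40c34fcf6f7bc57fe357f2b864dbf006e1cd1a06fb2a0/anthropics/hh-rlhf">
<meta name="twitter:site" content="@github">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="GitHub - anthropics/hh-rlhf: Human preference data for &quot;Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback&quot;">
<meta name="twitter:description" content="Human preference data for &quot;Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback&quot; - anthropics/hh-rlhf">
```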
Link Tags (47)
- alternate icon: https://github.githubassets.com/favicons/favicon.png
- assets: https://github.githubassets.com/
- canonical: https://github.com/anthropics/hh-rlhf
- dns-prefetch: https://github.githubassets.com
- dns-prefetch: https://avatars.githubusercontent.com
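As <link> elements, these entries would look roughly like the sketch below (the rel/href mapping is assumed from the listing; the real page may carry additional attributes such as type or crossorigin):

```html
<!-- Sketch of the link tags implied by the listing above -->
<link rel="alternate icon" href="https://github.githubassets.com/favicons/favicon.png">
<link rel="assets" href="https://github.githubassets.com/">
<link rel="canonical" href="https://github.com/anthropics/hh-rlhf">
<link rel="dns-prefetch" href="https://github.githubassets.com">
<link rel="dns-prefetch" href="https://avatars.githubusercontent.com">
```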
Emails (1)
Links (90)
- https://arxiv.org/abs/2204.05862
- https://arxiv.org/abs/2209.07858
- https://docs.github.com
- https://docs.github.com/search-github/github-code-search/understanding-github-code-search-syntax
- https://docs.github.com/site-policy/github-terms/github-terms-of-service