ai.stanford.edu/~kzliu/blog/unlearning
A preview of the meta tags served by this page on the ai.stanford.edu website.
Linked Hostnames
33 distinct hostnames, including:
- 53 links to arxiv.org
- 7 links to en.wikipedia.org
- 4 links to proceedings.neurips.cc
- 2 links to ai.stanford.edu
- 2 links to gdpr.eu
- 2 links to huggingface.co
- 2 links to www.nytimes.com
- 2 links to x.com
Search Engine Appearance
Machine Unlearning in 2024
As our ML models today become larger and their (pre-)training sets grow to inscrutable sizes, people are increasingly interested in the concept of machine unlearning to edit away undesired things like private data, stale knowledge, copyrighted materials, toxic/unsafe content, dangerous capabilities, and misinformation, without retraining models from scratch.
Bing and DuckDuckGo display the same title and description.
General Meta Tags
10 tags, including:
- title: Machine Unlearning in 2024 - Ken Ziyu Liu - Stanford Computer Science
- charset: utf-8
- HandheldFriendly: True
- MobileOptimized: 320
- viewport: width=device-width, initial-scale=1.0
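Rendered in the page's <head>, these entries would look roughly like the following. This is a sketch reconstructed from the values above, not the page's verbatim source; attribute order and quoting may differ on the live page.

```html
<!-- Reconstruction from the listed values; not copied from the live page -->
<title>Machine Unlearning in 2024 - Ken Ziyu Liu - Stanford Computer Science</title>
<meta charset="utf-8">
<meta name="HandheldFriendly" content="True">
<meta name="MobileOptimized" content="320">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
```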
Open Graph Meta Tags
4 tags:
- og:locale: en-US
- og:site_name: Ken Ziyu Liu - Stanford Computer Science
- og:title: Machine Unlearning in 2024
- og:url: https://ai.stanford.edu/~kzliu/blog/unlearning
Twitter Meta Tags
4 tags:
- twitter:site: @kenziyuliu
- twitter:card: summary
- twitter:title: Machine Unlearning in 2024
- twitter:description: As our ML models today become larger and their (pre-)training sets grow to inscrutable sizes, people are increasingly interested in the concept of machine unlearning to edit away undesired things like private data, stale knowledge, copyrighted materials, toxic/unsafe content, dangerous capabilities, and misinformation, without retraining models from scratch.
Item Prop Meta Tags
1 tag:
- headline: Machine Unlearning in 2024
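Together, the Open Graph, Twitter, and item prop entries above map to social-card markup along these lines. Again a sketch from the listed values (the long description is abbreviated here); the property/name attribute split follows the usual convention for these tag families and is an assumption about the page's exact markup.

```html
<!-- Open Graph (property attribute by convention) -->
<meta property="og:locale" content="en-US">
<meta property="og:site_name" content="Ken Ziyu Liu - Stanford Computer Science">
<meta property="og:title" content="Machine Unlearning in 2024">
<meta property="og:url" content="https://ai.stanford.edu/~kzliu/blog/unlearning">
<!-- Twitter cards (name attribute by convention) -->
<meta name="twitter:site" content="@kenziyuliu">
<meta name="twitter:card" content="summary">
<meta name="twitter:title" content="Machine Unlearning in 2024">
<meta name="twitter:description" content="As our ML models today become larger...">
<!-- schema.org microdata -->
<meta itemprop="headline" content="Machine Unlearning in 2024">
```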
Link Tags
17 tags, including:
- alternate: https://ai.stanford.edu/~kzliu/feed.xml
- apple-touch-icon: https://ai.stanford.edu/~kzliu/images/apple-touch-icon-57x57.png?v=M44lzPylqQ
- apple-touch-icon: https://ai.stanford.edu/~kzliu/images/apple-touch-icon-60x60.png?v=M44lzPylqQ
- apple-touch-icon: https://ai.stanford.edu/~kzliu/images/apple-touch-icon-72x72.png?v=M44lzPylqQ
- apple-touch-icon: https://ai.stanford.edu/~kzliu/images/apple-touch-icon-76x76.png?v=M44lzPylqQ
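As <link> elements these would read approximately as follows. Only the listed subset of the 17 tags is shown, and the sizes attributes are an assumption inferred from the icon filenames.

```html
<link rel="alternate" href="https://ai.stanford.edu/~kzliu/feed.xml">
<!-- sizes inferred from the filenames, not confirmed against the live page -->
<link rel="apple-touch-icon" sizes="57x57" href="https://ai.stanford.edu/~kzliu/images/apple-touch-icon-57x57.png?v=M44lzPylqQ">
<link rel="apple-touch-icon" sizes="60x60" href="https://ai.stanford.edu/~kzliu/images/apple-touch-icon-60x60.png?v=M44lzPylqQ">
<link rel="apple-touch-icon" sizes="72x72" href="https://ai.stanford.edu/~kzliu/images/apple-touch-icon-72x72.png?v=M44lzPylqQ">
<link rel="apple-touch-icon" sizes="76x76" href="https://ai.stanford.edu/~kzliu/images/apple-touch-icon-76x76.png?v=M44lzPylqQ">
```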
Links
99 links, including:
- http://proceedings.mlr.press/v132/neel21a.html
- https://ai.stanford.edu/~kzliu
- https://ai.stanford.edu/~kzliu/blog/unlearning
- https://arxiv.org/abs/1607.00133
- https://arxiv.org/abs/1905.12101