aiguide.substack.com/p/ai-now-beats-humans-at-basic-tasks/comment/55406850
Preview meta tags from the aiguide.substack.com website.
Linked Hostnames: 3

Search Engine Appearance
Lucas Wiman on AI: A Guide for Thinking Humans
The "competition level mathematics" one is a bit off. I'm guessing that's referring to AlphaGeometry (https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/), which is extremely impressive and cool. But it hooks a language model up to a theorem proving and symbolic algebra engine. Many problems in Euclidean geometry, trigonometry and calculus (as in math Olympiad problems) have mechanistically determinable answers once translated into algebraic formulations. Presumably if competitors got to use Mathematica, their scores would improve on the competitions as well. Still, it is extremely encouraging that language models can be hooked up to existing symbolic math systems like this. It should dramatically expand the capabilities of those systems, making them much more powerful tools. A better test for "human level ability" would be the Putnam exam, where getting a nonzero score is above the ability of most math-major undergrads, and there is a pretty good correlation between top scorers and a brilliant career in math (e.g. several Putnam fellows went on to win fields medals).
General Meta Tags (16)
- title: Comments - "AI now beats humans at basic tasks": Really?
Open Graph Meta Tags (7)
- og:url: https://aiguide.substack.com/p/ai-now-beats-humans-at-basic-tasks/comment/55406850
- og:image: https://substackcdn.com/image/fetch/$s_!kf_D!,f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Faiguide.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D1960279249%26version%3D9
- og:type: article
- og:title: Lucas Wiman on AI: A Guide for Thinking Humans
- og:description: (duplicate of the comment quoted above)
Twitter Meta Tags (8)
- twitter:image: https://substackcdn.com/image/fetch/$s_!kf_D!,f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Faiguide.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D1960279249%26version%3D9
- twitter:card: summary_large_image
- twitter:label1: Likes
- twitter:data1: 6
- twitter:label2: Replies
Link Tags (31)
- alternate: /feed
- apple-touch-icon: https://substackcdn.com/image/fetch/$s_!aKXr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8e69ccd-6cce-4d7c-a75c-eed9e2e39779%2Fapple-touch-icon-57x57.png
- apple-touch-icon: https://substackcdn.com/image/fetch/$s_!y375!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8e69ccd-6cce-4d7c-a75c-eed9e2e39779%2Fapple-touch-icon-60x60.png
- apple-touch-icon: https://substackcdn.com/image/fetch/$s_!4xaN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8e69ccd-6cce-4d7c-a75c-eed9e2e39779%2Fapple-touch-icon-72x72.png
- apple-touch-icon: https://substackcdn.com/image/fetch/$s_!JWtP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8e69ccd-6cce-4d7c-a75c-eed9e2e39779%2Fapple-touch-icon-76x76.png
Links (14)
- https://aiguide.substack.com
- https://aiguide.substack.com/p/ai-now-beats-humans-at-basic-tasks/comment/55406850
- https://aiguide.substack.com/p/ai-now-beats-humans-at-basic-tasks/comments#comment-55406850
- https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry
- https://substack.com