defenderofthebasic.substack.com/p/feynmans-razor/comment/134482302

Preview meta tags from the defenderofthebasic.substack.com website.

Linked Hostnames (2)

Search Engine Appearance

Google

https://defenderofthebasic.substack.com/p/feynmans-razor/comment/134482302

Ljubomir Josifovski on Defender’s Corner

Very good - thanks for sharing this, I like it. It recalls to my mind the principle "everyone should be treated the same, even if they are not the same." People are different - yet we should treat them all the same, as if they were the same. Likewise in your case: we should explain to people as if they can understand, even if maybe they can't. I subscribe to this principle. For one thing, we may be surprised - one never knows. For another, how are we to learn new things if we are only told as much as we already know, but no more? Yes, the teller runs the risk of being overly detailed and ultimately boring, for if the interlocutor doesn't understand, they may get bored and even frustrated. That's fine. My ego can take a hit; I'm fine risking it. When I notice, I wrap up quickly in a sentence and shut up. Not a biggie.

Amusingly, people find something similar when teaching computers things they have never seen before (see Jeff Clune's lectures, talks, and podcast interviews). Teaching them things that are too easy, that they already know how to solve, is a waste of time - they already know, so they learn nothing new. Teaching them things that are too hard is a waste of time too, because they fail to get to the solution. But we want them to learn to discover a solution on their own, independently - not just memorise an answer and pattern-match it back in the future. The aim of that research is to teach the model how to learn on its own: not just what the right answer is, but the process by which we humans find the answer. There is a Goldilocks zone where the system is at A and we want it to get to B on its own. If B is about 20% away - 20% more difficult, but no more than that - then the model stands a non-trivial chance of discovering, on its own, the stepping stones that let it get from A to B. And discovering that on its own is the crucial part. They are training the model how to learn on its own, so laying the stepping stones for it is no good - it is counterproductive. The aim of the exercise is for the model to learn how to go about discovering the stepping stones on its own.
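
The Goldilocks-zone idea in the comment lends itself to a small illustration. The sketch below is a toy Python rendition of that curriculum principle, not Jeff Clune's actual method: the task pool, the difficulty scores, and the hard 20% cutoff are all illustrative assumptions taken from the comment's own description.

```python
import random

# Hypothetical task pool: names and difficulty scores are made up for
# illustration; nothing here comes from Clune's actual experiments.
TASKS = [{"name": f"task-{i}", "difficulty": d}
         for i, d in enumerate([0.5, 0.8, 1.1, 1.25, 1.5, 2.0, 3.0])]

def goldilocks_tasks(ability, tasks, stretch=0.20):
    """Tasks harder than the learner's current ability, but no more than
    `stretch` (the comment's ~20%) harder - the Goldilocks zone."""
    lo, hi = ability, ability * (1 + stretch)
    return [t for t in tasks if lo < t["difficulty"] <= hi]

def train(ability, tasks, steps=10):
    """Toy loop: keep picking a task from the zone; on (simulated) success the
    learner's ability rises to match it. Too-easy and too-hard tasks are never
    selected, mirroring the two 'waste of time' cases described above."""
    for _ in range(steps):
        candidates = goldilocks_tasks(ability, tasks)
        if not candidates:
            break  # nothing left in the zone
        task = random.choice(candidates)
        ability = task["difficulty"]  # stand-in for discovering the stepping stone
        print(f"solved {task['name']} -> ability {ability:.2f}")
    return ability

train(ability=1.0, tasks=TASKS)
```

Run it and the learner climbs from 1.0 to 1.5 via the stepping stones 1.1 and 1.25, then stops once no task sits inside the zone; the too-easy tasks (0.5, 0.8) and the too-hard ones (2.0, 3.0) are never touched at all.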



  • General Meta Tags (17)
    • title
      Comments - Feynman's Razor - Defender’s Corner
  • Open Graph Meta Tags (7)
    • og:url
      https://defenderofthebasic.substack.com/p/feynmans-razor/comment/134482302
    • og:image
      https://substackcdn.com/image/fetch/$s_!O67J!,f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Fdefenderofthebasic.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D-817717601%26version%3D9
    • og:type
      article
    • og:title
      Ljubomir Josifovski on Defender’s Corner
    • og:description
      (the same comment text shown in the Google preview above)
  • Twitter Meta Tags (8)
    • twitter:image
      https://substackcdn.com/image/fetch/$s_!O67J!,f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Fdefenderofthebasic.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D-817717601%26version%3D9
    • twitter:card
      summary_large_image
    • twitter:label1
      Likes
    • twitter:data1
      2
    • twitter:label2
      Replies
  • Link Tags (31)
    • alternate
      /feed
    • apple-touch-icon
      https://substackcdn.com/image/fetch/$s_!lov4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05a0dca1-32ef-46b8-87f7-9f35c8f49922%2Fapple-touch-icon-57x57.png
    • apple-touch-icon
      https://substackcdn.com/image/fetch/$s_!ZeEl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05a0dca1-32ef-46b8-87f7-9f35c8f49922%2Fapple-touch-icon-60x60.png
    • apple-touch-icon
      https://substackcdn.com/image/fetch/$s_!Azr-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05a0dca1-32ef-46b8-87f7-9f35c8f49922%2Fapple-touch-icon-72x72.png
    • apple-touch-icon
      https://substackcdn.com/image/fetch/$s_!guE4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05a0dca1-32ef-46b8-87f7-9f35c8f49922%2Fapple-touch-icon-76x76.png
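
An aside on the image URLs above: Substack's CDN "fetch" URLs embed the percent-encoded origin URL after the transformation parameters (f_auto, q_auto, fl_progressive:steep, ...). Here is a minimal sketch of recovering that origin; the split heuristic is an assumption read off the URLs listed here, not a documented Substack interface.

```python
from urllib.parse import unquote

def cdn_origin(cdn_url: str) -> str:
    """Recover the origin image URL embedded in a substackcdn.com fetch URL.
    Assumes the encoded origin always starts at '/https%3A' (true for the
    URLs in this listing, but not guaranteed in general)."""
    prefix, sep, encoded = cdn_url.partition("/https%3A")
    if not sep:
        raise ValueError("no encoded origin URL found")
    return unquote(sep.lstrip("/") + encoded)

og_image = ("https://substackcdn.com/image/fetch/$s_!O67J!,f_auto,q_auto:best,"
            "fl_progressive:steep/https%3A%2F%2Fdefenderofthebasic.substack.com"
            "%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D-817717601%26version%3D9")
print(cdn_origin(og_image))
# -> https://defenderofthebasic.substack.com/twitter/subscribe-card.jpg?v=-817717601&version=9
```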

Links (13)
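
For reference, the grouped listing above is the kind of output a meta-tag preview tool produces by parsing the page's head element. Below is a minimal sketch of doing the same, assuming the third-party requests and beautifulsoup4 packages; it is a guess at the general approach, not the implementation behind this report.

```python
import requests
from bs4 import BeautifulSoup

URL = "https://defenderofthebasic.substack.com/p/feynmans-razor/comment/134482302"

def fetch_meta_tags(url):
    """Fetch a page and group its meta tags the way the listing above does:
    Open Graph (property="og:*"), Twitter (name="twitter:*"), and the rest."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    groups = {"og": {}, "twitter": {}, "general": {}}
    for tag in soup.find_all("meta"):
        key = tag.get("property") or tag.get("name")
        value = tag.get("content")
        if not key or value is None:
            continue
        if key.startswith("og:"):
            groups["og"][key] = value
        elif key.startswith("twitter:"):
            groups["twitter"][key] = value
        else:
            groups["general"][key] = value
    return groups

if __name__ == "__main__":
    for group, tags in fetch_meta_tags(URL).items():
        print(f"{group}: {len(tags)} tags")
        for key, value in tags.items():
            print(f"  {key} = {value[:80]}")  # truncate long values like og:description
```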