substack.com/@ljubomirjosifovski/note/c-134482302

Preview meta tags from the substack.com website.

Linked Hostnames: 2

Thumbnail

Search Engine Appearance

Google

https://substack.com/@ljubomirjosifovski/note/c-134482302

Ljubomir Josifovski (@ljubomirjosifovski)

Very good - thanks for sharing this, I like it. It recalls to my mind "everyone should be treated the same, even if they are not the same." People are different - yet we should treat them all the same, as if they were the same. Likewise in your case - we should explain to people as if they can understand, even if maybe they can't. I subscribe to this principle. For one thing, we may be surprised - one never knows. For another, how are we to learn new things if we are only ever told as much as we already know, but no more? Yes, the teller runs the risk of being overly detailed and ultimately boring. For if the interlocutor doesn't understand, they may get bored and even frustrated. That's fine. My ego can take a hit; I'm fine risking it. When I notice, I wrap up quickly in a sentence and shut up. Not a biggie.

Amusingly, people find something similar when teaching computers things they have never seen before. (Check Jeff Clune's lectures, talks, and podcast interviews.) Teaching them things that are too easy, that they already know how to solve, is a waste of time - they already know, so they learn nothing new. Teaching them things that are too hard is a waste of time too, because they fail to get to the solution. But we want them to learn to discover a solution on their own, independently - not just memorise an answer and pattern-match it back in the future. The aim of that research is to teach the model how to learn on its own: not just what the right answer is, but the process by which we humans find the answer.

There is a Goldilocks zone, where the system is at A and we want it to get to B on its own. If B is about 20% away - 20% more difficult, but no more than that - then the model stands a non-trivial chance of discovering, on its own, the stepping stones that allow it to get from A to B successfully. And discovering that on its own is the crucial part. They are training the model how to learn on its own, so laying the stepping stones for it is no good - it's counterproductive. The aim of the exercise is for the model to learn how to go about discovering the stepping stones on its own.
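The Goldilocks-zone idea can be sketched as a simple task filter: keep only tasks harder than the system's current ability, but no more than ~20% harder. This is a minimal illustration, not anything from Clune's actual work - the task names, difficulty scale, and the exact 20% margin are assumptions for the sketch.

```python
def pick_frontier_tasks(tasks, ability, margin=0.20):
    """Keep only tasks just beyond current ability: harder than what the
    system already solves (it is "at A"), but no more than `margin`
    (here 20%) harder, so it can plausibly reach B on its own."""
    lo, hi = ability, ability * (1.0 + margin)
    return {name: d for name, d in tasks.items() if lo < d <= hi}

# Hypothetical task pool; difficulty is on the same scale as `ability`.
tasks = {"add": 0.5, "multiply": 0.9, "factor": 1.1, "integrate": 1.6}

frontier = pick_frontier_tasks(tasks, ability=1.0)
# Too-easy tasks teach nothing; too-hard tasks mostly fail outright.
print(frontier)  # only "factor" (1.1) falls in the (1.0, 1.2] band
```

In a real curriculum loop one would re-estimate `ability` after each round of training, so the frontier band slides upward as the model improves.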


  • General Meta Tags (14)
    • title
      Ljubomir Josifovski (@ljubomirjosifovski): "Very good - thanks for sharing this, like it. This recalls to my mind "everyone should be treated the same, even if they are not the same." People are different - yet we should treat them all the same, as if they are the same. Likewise in your case - we should explain to people …"
  • Open Graph Meta Tags (9)
    • og:url
      https://substack.com/@ljubomirjosifovski/note/c-134482302
    • og:image
      https://substackcdn.com/image/fetch/$s_!XCt4!,w_400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Freader%2Fnotes-thumbnail.jpg
    • og:image:width
      400
    • og:image:height
      400
    • og:type
      article
  • Twitter Meta Tags (8)
    • twitter:image
      https://substackcdn.com/image/fetch/$s_!XCt4!,w_400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Freader%2Fnotes-thumbnail.jpg
    • twitter:card
      summary
    • twitter:label1
      Likes
    • twitter:data1
      2
    • twitter:label2
      Replies
  • Link Tags (18)
    • alternate
      https://substack.com/@ljubomirjosifovski/note/c-134482302
    • apple-touch-icon
      https://substackcdn.com/icons/substack/apple-touch-icon.png
    • canonical
      https://substack.com/@ljubomirjosifovski/note/c-134482302
    • icon
      https://substackcdn.com/icons/substack/icon.svg
    • manifest
      /manifest.json

Links: 5