web.archive.org/web/20210222082737/https:/www.alignmentforum.org/posts/voLHQgNncnjjgAPH7/utility-maximization-description-length-minimization

Preview meta tags from the web.archive.org website.

Linked Hostnames: 1


Search Engine Appearance

Google

https://web.archive.org/web/20210222082737/https:/www.alignmentforum.org/posts/voLHQgNncnjjgAPH7/utility-maximization-description-length-minimization

Utility Maximization = Description Length Minimization - AI Alignment Forum

There’s a useful intuitive notion of “optimization” as pushing the world into a small set of states, starting from any of a large number of states. Visually: Yudkowsky and Flint both have notable formalizations of this “optimization as compression” idea. This post presents a formalization of optimization-as-compression grounded in information theory. Specifically: to “optimize” a system is to reduce the number of bits required to represent the system state using a particular encoding. In other words, “optimizing” a system means making it compressible (in the information-theoretic sense) by a particular model. This formalization turns out to be equivalent to expected utility maximization, and allows us to interpret any expected utility maximizer as “trying to make the world look like a particular model”. CONCEPTUAL EXAMPLE: BUILDING A HOUSE Before diving into the formalism, we’ll walk through a conceptual example, taken directly from Flint’s Ground of Optimization: building a house. Here’s Flint’s diagram: The key idea here is that there’s a wide variety of initial states (piles of lumber, etc) which all end up in the same target configuration set (finished house). The “perturbation” indicates that the initial state could change to some other state - e.g. someone could move all the lumber ten feet to the left - and we’d still end up with the house. In terms of information-theoretic compression: we could imagine a model which says there is probably a house. Efficiently encoding samples from this model will mean using shorter bit-strings for world-states with a house, and longer bit-strings for world-states without a house. World-states with piles of lumber will therefore generally require more bits than world-states with a house. By turning the piles of lumber into a house, we reduce the number of bits required to represent the world-state using this particular encoding/model. If that seems kind of trivial and obvious, then you’ve probably understood the idea;



Bing

Same title, URL, and description as the Google preview above.

DuckDuckGo

Same title, URL, and description as the Google preview above.

  • General Meta Tags

    5
    • title
      Utility Maximization = Description Length Minimization - AI Alignment Forum
    • Accept-CH
      DPR, Viewport-Width, Width
    • charset
      utf-8
    • description
      (identical to the description shown under Search Engine Appearance above)
    • viewport
      width=device-width, initial-scale=1
  • Open Graph Meta Tags

    5
    • og:title
      Utility Maximization = Description Length Minimization - AI Alignment Forum
    • og:type
      article
    • og:url
      https://web.archive.org/web/20210222082737/https://www.alignmentforum.org/posts/voLHQgNncnjjgAPH7/utility-maximization-description-length-minimization
    • og:image
      https://web.archive.org/web/20210222082737im_/https://docs.google.com/drawings/u/1/d/sChFitrEztHYLMM-mDC_iFw/image?w=449&h=338&rev=70&ac=1&parent=1LTI0DmtfQG39sBb13zFsyGahX41h-yGBy22lfmYqGIU
    • og:description
      (identical to the description shown under Search Engine Appearance above)
  • Twitter Meta Tags

    3
    • twitter:card
      summary
    • twitter:image:src
      https://docs.google.com/drawings/u/1/d/sChFitrEztHYLMM-mDC_iFw/image?w=449&h=338&rev=70&ac=1&parent=1LTI0DmtfQG39sBb13zFsyGahX41h-yGBy22lfmYqGIU
    • twitter:description
      (identical to the description shown under Search Engine Appearance above)
  • Link Tags

    10
    • alternate
      https://web.archive.org/web/20210222082737/https://www.alignmentforum.org/feed.xml
    • canonical
      https://web.archive.org/web/20210222082737/https://www.alignmentforum.org/posts/voLHQgNncnjjgAPH7/utility-maximization-description-length-minimization
    • shortcut icon
      https://web.archive.org/web/20210222082737im_/https://res.cloudinary.com/dq3pms5lt/image/upload/v1531267596/alignmentForum_favicon_o9bjnl.png
    • stylesheet
      https://web-static.archive.org/_static/css/banner-styles.css?v=p7PEIJWi
    • stylesheet
      https://web-static.archive.org/_static/css/iconochive.css?v=3PDvdIFv

Links: 26