cmp.felk.cvut.cz/~flachbor/research/psa

Preview meta tags from the cmp.felk.cvut.cz website.

Linked Hostnames: 3

Search Engine Appearance

Google

https://cmp.felk.cvut.cz/~flachbor/research/psa

PSA Gradient Estimators for Stochastic Binary Networks

In neural networks with binary activations and/or binary weights, training by gradient descent is complicated because the model has a piecewise constant response. We consider stochastic binary networks, obtained by adding noise in front of the activations. The expected model response becomes a smooth function of the parameters; its gradient is well defined but challenging to estimate accurately. We propose a new method for this estimation problem that combines sampling and analytic approximation steps. The method has significantly reduced variance at the price of a small bias, which gives a very practical tradeoff compared with existing unbiased and biased estimators. We further show that one extra linearization step leads to a deep straight-through estimator previously known only as an ad-hoc heuristic. We experimentally show higher accuracy in gradient estimation and demonstrate more stable and better-performing training in deep convolutional models with both proposed methods (Shekhovtsov et al., 2020).
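
To make the setup concrete, consider a single unit (the notation below is ours, with a generic noise model; it is not taken from the paper). With pre-activation a and injected noise z whose CDF is F and density f:

    % Illustrative single-unit case; notation is ours, not the paper's.
    \[
      x = \operatorname{sign}(a + z), \qquad
      \mathbb{E}[x] = 1 - 2F(-a), \qquad
      \frac{\partial}{\partial a}\,\mathbb{E}[x] = 2f(-a).
    \]

The expectation is smooth even though sign() is piecewise constant; the difficulty the abstract points at is estimating this gradient accurately through many stacked stochastic layers. The straight-through heuristic mentioned at the end is commonly written as below; this is a minimal PyTorch sketch of the classical heuristic, not of the paper's derived deep ST estimator:

    import torch

    class SignST(torch.autograd.Function):
        """Classical straight-through heuristic: sign() forward, identity backward."""

        @staticmethod
        def forward(ctx, a):
            return torch.sign(a)

        @staticmethod
        def backward(ctx, grad_output):
            # Pretend the forward pass was the identity and pass the gradient through.
            return grad_output

    # Usage: x = SignST.apply(a) behaves like sign(a) but remains trainable.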



Bing

PSA Gradient Estimators for Stochastic Binary Networks

https://cmp.felk.cvut.cz/~flachbor/research/psa

(description identical to the Google preview above)



DuckDuckGo

https://cmp.felk.cvut.cz/~flachbor/research/psa

PSA Gradient Estimators for Stochastic Binary Networks

(description identical to the Google preview above)

  • General Meta Tags (13)
    • title
      PSA Gradient Estimators for Stochastic Binary Networks –
    • title
      PSA Gradient Estimators for Stochastic Binary Networks
    • charset
      utf-8
    • description
      (identical to the abstract in the Google preview above)
    • keywords
  • Open Graph Meta Tags (12)
    • og:locale
      en_US
    • og:type
      article
    • og:title
      PSA Gradient Estimators for Stochastic Binary Networks
    • og:description
      (identical to the abstract in the Google preview above)
    • og:url
      https://cmp.felk.cvut.cz/~flachbor/research/psa/
  • Twitter Meta Tags (5)
    • twitter:title
      PSA Gradient Estimators for Stochastic Binary Networks
    • twitter:description
      (identical to the abstract in the Google preview above)
    • twitter:card
      summary
    • twitter:image
      https://cmp.felk.cvut.cz/~flachbor/images/default-thumb.png
    • twitter:card
      summary
  • Link Tags (12)
    • alternate
      https://cmp.felk.cvut.cz/~flachbor/feed.xml
    • apple-touch-icon-precomposed
      https://cmp.felk.cvut.cz/~flachbor/images/apple-touch-icon-precomposed.png
    • apple-touch-icon-precomposed
      https://cmp.felk.cvut.cz/~flachbor/images/apple-touch-icon-72x72-precomposed.png
    • apple-touch-icon-precomposed
      https://cmp.felk.cvut.cz/~flachbor/images/apple-touch-icon-114x114-precomposed.png
    • apple-touch-icon-precomposed
      https://cmp.felk.cvut.cz/~flachbor/images/apple-touch-icon-144x144-precomposed.png
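
A listing like the one above can be reproduced with a few lines of scraping code. This is a minimal sketch under stated assumptions (the third-party requests and beautifulsoup4 packages; the output formatting is illustrative, not the preview tool's actual implementation):

    import requests
    from bs4 import BeautifulSoup

    url = "https://cmp.felk.cvut.cz/~flachbor/research/psa"
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    # <meta> tags: Open Graph uses `property`, Twitter and description use `name`,
    # and the charset declaration is its own attribute.
    for tag in soup.find_all("meta"):
        if tag.get("charset"):
            print("charset ->", tag["charset"])
            continue
        key = tag.get("property") or tag.get("name")
        if key:
            print(key, "->", tag.get("content", ""))

    # <link> tags, e.g. the alternate feed and the apple-touch icons listed above.
    for tag in soup.find_all("link"):
        print(tag.get("rel"), "->", tag.get("href"))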

Emails: 1

Links: 6