blog.chewxy.com/2015/08/04/algorithms-are-chaotic-neutral

Preview meta tags from the blog.chewxy.com website.

Linked Hostnames

15

Search Engine Appearance

Google

https://blog.chewxy.com/2015/08/04/algorithms-are-chaotic-neutral

Algorithms Are Chaotic Neutral

Carina Zona gave the Sunday keynote for PyConAU 2015. It was a very interesting talk about the ethics of insight mining from data and algorithms. She gave examples of data-mining fails – situations where Target discovered a teenage girl was pregnant before her parents even knew, or where machine-learned Google search matches implied that black people were more likely to be arrested. It was her last few points, about the ethical dilemmas that may occur, that caught my interest, and it is these last few points that I want to focus the discussion on. One of the key points that I took away* (not necessarily the key point she was trying to communicate – it could just be that I have shitty comprehension, which would render this entire blog post moot) was that the newer and more powerful machine learning algorithms out there inadvertently discriminate along the various power axes (think race, socioeconomic background, gender, sexual orientation, etc.). There was an implicit notion that we should be designing better algorithms to deal with these sorts of biases. I have experience designing these things, and I quite disagree with that notion. I noted on Twitter that the examples basically showed machine learning algorithms exposing/mirroring what they had learned from the data. "The Google example is merely algorithms exposing the inherent bias in the genpop #pyconau" — Chewxy (@chewxy) August 1, 2015. Carina did point out that the data itself is biased – for example, film stock in the 1950s was tuned for fairer skin, and therefore photographic data for darker-skinned people was lacking* (this NPR article seems to be the closest reference I have, which, by the way, is fascinating as hell). But before we dive in deeper, I would like to bring up some caveats: I very much agree with Carina that we have a problem; the point I’m disagreeing on is how we should go about fixing it. I’m not a professional ethicist, nor am I a philosopher – I’m really more of an armchair expert. I’m not an academic dealing with these topics – I consider myself fairly well read, but I am by no means an expert. I am moderately interested in inequality, inequity and injustice, but I am absolutely disinterested in the squabbles of identity politics, and I only have a passing familiarity with the field. I like to think of myself as fairly rational, and it is from this point of view that I’m making my arguments; however, in my experience I have been told that this can come across as quite alienating/uncaring/insensitive. I will bring my biases to this argument, and I will disclose my known biases wherever possible. However, it is possible that I have missed some, so please tell me.



Bing

Algorithms Are Chaotic Neutral

https://blog.chewxy.com/2015/08/04/algorithms-are-chaotic-neutral

Carina Zona gave the Sunday keynote for PyConAU 2015. It was a very interesting talk about the ethics of insight mining from data, and algorithms. She gave examples of data mining fails – situations …



DuckDuckGo

https://blog.chewxy.com/2015/08/04/algorithms-are-chaotic-neutral

Algorithms Are Chaotic Neutral

Carina Zona gave the Sunday keynote for PyConAU 2015. It was a very interesting talk about the ethics of insight mining from data, and algorithms. She gave examples of data mining fails – situations …

  • General Meta Tags

    7
    • title
      Algorithms Are Chaotic Neutral - Bigger on the Inside
    • charset
      utf-8
    • X-UA-Compatible
      IE=edge
    • viewport
      width=device-width, initial-scale=1.0, maximum-scale=1.0
    • description
      Carina Zona gave the Sunday keynote for PyConAU 2015. It was a very interesting talk about the ethics of insight mining from data, and algorithms. She gave examples of data mining fails – situations …
  • Open Graph Meta Tags

    5
    • og:title
      Algorithms Are Chaotic Neutral
    • og:description
      Carina Zona gave the Sunday keynote for PyConAU 2015. It was a very interesting talk about the ethics of insight mining from data, and algorithms. She gave examples of data mining fails – situations …
    • og:url
      https://blog.chewxy.com/2015/08/04/algorithms-are-chaotic-neutral/
    • og:type
      website
    • og:site_name
      Bigger on the Inside
  • Twitter Meta Tags

    5
    • twitter:title
      Algorithms Are Chaotic Neutral
    • twitter:description
      Carina Zona gave the Sunday keynote for PyConAU 2015. It was a very interesting talk about the ethics of insight mining from data, and algorithms. She gave examples of data mining fails – situations …
    • twitter:card
      summary
    • twitter:site
      @chewxy
    • twitter:creator
      @chewxy
  • Link Tags

    13
    • alternate
      https://blog.chewxy.com/index.xml
    • stylesheet
      https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.10.0/katex.min.css
    • stylesheet
      https://use.fontawesome.com/releases/v5.5.0/css/all.css
    • stylesheet
      https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css
    • stylesheet
      https://blog.chewxy.com/css/main.css

Links

29