Statement:16
By Aaron Halfaker

Tags | Artificial Intelligence
---|---
Primary Session | Research, Analytics, and Machine Learning
Secondary Sessions |
The future of responsible AI design is auditing systems. When we deploy AIs that make inherently subjective judgments that affect people and their work, we must also provide a means for those people to audit and critique the AI. Did the AI mark the wrong thing as vandalism? Then it can silence a contribution. Did the AI fail to note a high-quality article? Then we might direct traffic away from good content. Did the AI recommend the wrong type of thing? Then we might keep people in a filter bubble rather than helping them broaden their knowledge.
There's a lot of conversation in the public sphere about how AIs cause ethical and social problems. Google's image search suggests that all CEOs are men. Facebook's feed filter reduces the visibility of conflicting opinions. The general call among researchers and ethicists is for transparency. At Wikimedia, transparency is an old idea; we've always developed our technologies transparently. But this hasn't made us immune to the problems that AIs create. Auditing systems are the future. They are a means of giving users power over the AIs that govern their experience. We should be talking about how to build them.
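To make the auditing idea concrete, here is a minimal sketch of the data side of such a system: a log where users can dispute individual AI judgments, from which a disagreement rate can be computed and surfaced. All of the names here (AuditRecord, AuditLog, report, disagreement_rate) are hypothetical illustrations for this statement, not an existing Wikimedia API.

```python
from dataclasses import dataclass, field

@dataclass
class AuditRecord:
    """One user judgment about a single AI prediction."""
    item_id: str       # e.g. a revision or article identifier
    prediction: str    # what the model said ("vandalism", "good", ...)
    user_verdict: str  # what the auditing user says it should have been

@dataclass
class AuditLog:
    """Collects user critiques so disagreement with the model is measurable."""
    records: list[AuditRecord] = field(default_factory=list)

    def report(self, item_id: str, prediction: str, user_verdict: str) -> None:
        """Record a user's critique of one prediction."""
        self.records.append(AuditRecord(item_id, prediction, user_verdict))

    def disagreement_rate(self) -> float:
        """Fraction of audited predictions that users overturned."""
        if not self.records:
            return 0.0
        overturned = sum(1 for r in self.records if r.prediction != r.user_verdict)
        return overturned / len(self.records)

# Example: a user flags a false vandalism hit on revision "12345"
log = AuditLog()
log.report("12345", prediction="vandalism", user_verdict="good-faith")
print(f"Disagreement rate: {log.disagreement_rate():.0%}")
```

A real deployment would tie records to accountable identities and route high-disagreement cases back to model maintainers, but even this much gives auditors a measurable voice against the model.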