{"id":6000,"date":"2018-09-28T01:05:09","date_gmt":"2018-09-28T09:05:09","guid":{"rendered":"https:\/\/www.chriscorrigan.com\/parkinglot\/?p=6000"},"modified":"2018-09-28T01:05:11","modified_gmt":"2018-09-28T09:05:11","slug":"the-limits-of-certainty","status":"publish","type":"post","link":"https:\/\/www.chriscorrigan.com\/parkinglot\/the-limits-of-certainty\/","title":{"rendered":"The limits of certainty"},"content":{"rendered":"\n<p><a href=\"https:\/\/insidestory.org.au\/will-a-robot-take-your-job\/\">An interesting review essay<\/a> by John Quiggin looks at a new book by Ellen Broad called <a href=\"https:\/\/www.mup.com.au\/books\/made-by-humans-paperback-softback\">Made by Humans: The AI Condition<\/a>. Quiggin is intrigued by Broad&#8217;s documentation of the way algorithms have changed over the years, from originating as &#8220;a well-defined formal procedure for deriving a verifiable solution to a mathematical problem&#8221; to becoming a formula for predicting unknown and unknowable futures.\u00a0 Math problems that benefit from algorithms fall firmly in the Ordered domains of Cynefin. But the problems that AI is now being deployed upon are complex and emergent in nature, and therefore instead of producing certainty and replicability, AI is being asked to provide probabilistic forecasts of the future.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>For the last thousand years or so, an algorithm (derived from the name of an Arab mathematician, al-Khwarizmi) has had a pretty clear meaning \u2014 namely, it is a well-defined formal procedure for deriving a verifiable solution to a mathematical problem. The standard example, Euclid\u2019s algorithm for finding the greatest common divisor of two numbers, goes back to 300 BCE. There are algorithms for sorting lists, for maximising the value of a function, and so on.<\/p><p><br\/>As their long history indicates, algorithms can be applied by humans. 
But humans can only handle algorithmic processes up to a certain scale. The invention of computers made human limits irrelevant; indeed, the mechanical nature of the task made solving algorithms an ideal task for computers. On the other hand, the hope of many early AI researchers that computers would be able to develop and improve their own algorithms has so far proved almost entirely illusory.<\/p><p><br\/>Why, then, are we suddenly hearing so much about \u201cAI algorithms\u201d? The answer is that the meaning of the term \u201calgorithm\u201d has changed. A typical example, says Broad, is the use of an \u201calgorithm\u201d to predict the chance that someone convicted of a crime will reoffend, drawing on data about their characteristics and those of the previous crime. The \u201calgorithm\u201d turns out to over-predict reoffending by blacks relative to whites.<\/p><p><br\/>Social scientists have been working on problems like these for decades, with varying degrees of success. Until very recently, though, predictive systems of this kind would have been called \u201cmodels.\u201d The archetypal examples \u2014 the first econometric models used in Keynesian macroeconomics in the 1960s, and \u201cglobal systems\u201d models like that of the\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Club_of_Rome\" target=\"_blank\" rel=\"noreferrer noopener\">Club of Rome<\/a>\u00a0in the 1970s \u2014 illustrate many of the pitfalls.<br\/>A vast body of statistical work has developed around models like these, probing the validity or otherwise of the predictions they yield, and a great many sources of error have been found. 
Model estimation can go wrong because causal relationships are misspecified (as every budding statistician learns, correlation does not imply causation), because crucial variables are omitted, or because models are \u201cover-fitted\u201d to a limited set of data.<\/p><p><br\/>Broad\u2019s book suggests that the developers of AI \u201calgorithms\u201d have made all of these errors anew. Asthmatic patients are classified as being at low risk for pneumonia when in fact their good outcomes on that measure are due to more intensive treatment. Models that are supposed to predict sexual orientation from a photograph work by finding non-causative correlations, such as the angle from which the shot is taken. Designers fail to consider elementary distinctions, such as those between \u201cfalse positives\u201d and \u201cfalse negatives.\u201d\u00a0As with autonomous weapons, moral choices are made in the design and use of computer models. The more these choices are hidden behind a veneer of objectivity, the more likely they are to reinforce existing social structures and inequalities.<\/p><p><br\/>The superstitious reverence with which computer \u201cmodels\u201d were regarded when they first appeared has been replaced by (sometimes excessive) scepticism. Practitioners now understand that models provide a useful way of clarifying our assumptions and deriving their implications, but not a guaranteed path to truth. These lessons will need to be relearned as we deal with AI.<\/p><p><br\/>Broad makes a compelling case that AI techniques can obscure human agency but not replace it. Decisions nominally made by AI algorithms inevitably reflect the choices made by their designers. 
Whether those choices are the result of careful reflection, or of unthinking prejudice, is up to us.<\/p><\/blockquote>\n\n\n\n<p>In general I think that scientists understand the limits of this approach to modelling, and that was borne out in several discussions that I had with ecologists last week in Quebec. We do have to define what we mean by &#8220;prediction&#8221; though. Potential futures can be predicted with some probability if you understand the nature of the system, but exact outcomes cannot be predicted. However, we (by which I mean the electorate and policy makers who work to make single decisions out of forecasts) do tend to venerate predictive technologies because we cling to the original definition of an algorithm, and we can come to believe that the model&#8217;s robustness is enough to guarantee the accuracy of a prediction.\u00a0 We end up trusting forecasts without understanding probability, and when things don&#8217;t go according to plan, we blame the forecasters rather than our own complexity illiteracy.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"<p>An interesting review essay by John Quiggin looks at a new book by Ellen Broad called Made by Humans: The AI Condition. Quiggin is intrigued by Broad&#8217;s documentation of the way algorithms have changed over the years, from originating as &#8220;a well-defined formal procedure for deriving a verifiable solution to a mathematical problem&#8221; to becoming a formula for predicting unknown and unknowable futures.\u00a0 Math problems that benefit from algorithms fall firmly in the Ordered domains of Cynefin. 
But the problems that AI is now being deployed upon are complex and emergent in nature, and therefore instead of producing certainty and &#8230;<\/p>\n","protected":false},"author":1,"featured_media":6001,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2},"_wpas_customize_per_network":false},"categories":[53,54,56],"tags":[],"class_list":["post-6000","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-complexity","category-evaluation","category-featured"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/www.chriscorrigan.com\/parkinglot\/wp-content\/uploads\/2018\/09\/Predicting.jpg?fit=800%2C600&ssl=1","jetpack_shortlink":"https:\/\/wp.me\/piBp1-1yM","jetpack-related-posts":[],"jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/www.chriscorrigan.com\/parkinglot\/wp-json\/wp\/v2\/posts\/6000","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.chriscorrigan.com\/parkinglot\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.chriscorrigan.com\/parkinglot\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.chriscorrigan.com\/parkinglot\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.chriscorrigan.com\/parkinglot\/wp-json\/wp\/v2\/comments?post=6000"}],"version-history":[{"count":1,"href":"https:\/\/www.chriscorrigan.com\/parkinglot\/wp-json\/wp\/v2\/posts\/6000\/revisions"}],"predecessor-version":[{"id":6002,"href":"https:\/\/www.chriscorrigan.com\/parkinglot\/wp-json\/wp\/v2\/posts\/6000\/revisio
ns\/6002"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.chriscorrigan.com\/parkinglot\/wp-json\/wp\/v2\/media\/6001"}],"wp:attachment":[{"href":"https:\/\/www.chriscorrigan.com\/parkinglot\/wp-json\/wp\/v2\/media?parent=6000"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.chriscorrigan.com\/parkinglot\/wp-json\/wp\/v2\/categories?post=6000"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.chriscorrigan.com\/parkinglot\/wp-json\/wp\/v2\/tags?post=6000"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}