Reading


Jul 12, 2019   |   Book
Posthuman Glossary
Rosi Braidotti, Maria Hlavajova
ISBN:
978-1-350-03025-1
Library catalog:
Gemeinsamer Bibliotheksverbund ISBN
Jul 5, 2019   |   Web Page
Henry Farrell, Bruce Schneier
A new paper explains why disinformation campaigns that act as a stabilizing influence in Russia are destabilizing in the United States.
Jun 27, 2019   |   Newspaper Article
Charlie Warzel
ISSN:
0362-4331
Library catalog:
NYTimes.com
At its heart, privacy is about how data is used to take away our control.
Jun 27, 2019   |   Magazine Article
Lilly Irani, Rumman Chowdhury
ISSN:
1059-1028
Library catalog:
www.wired.com
Opinion: Silicon Valley culture often reveals the optimism of organized ignorance. Rather than lauding "new" experts, we need to respect, sustain, and strengthen the ones we already have.
Jun 26, 2019   |   Web Page
Eric Johnson
On the latest Recode Decode, MIT Media Lab director Joi Ito says we need to resist the urge to oversimplify the problems we’re solving.
Jun 20, 2019   |   Book
Zucked: Waking Up to the Facebook Catastrophe
Roger McNamee
ISBN:
978-0-525-56135-4
Library catalog:
Library of Congress ISBN
"If you had told Roger McNamee even three years ago that he would soon be devoting himself to stopping Facebook from destroying our democracy, he would have howled with laughter. He had mentored many tech leaders in his illustrious career as an investor, but few things had made him prouder, or been better for his fund's bottom line, than his early service to Mark Zuckerberg. Still a large shareholder in Facebook, he had every good reason to stay on the bright side. Until he simply couldn't. ZUCKED is McNamee's intimate reckoning with the catastrophic failure of the head of one of the world's most powerful companies to face up to the damage he is doing. It's a story that begins with a series of rude awakenings. First there is the author's dawning realization that the platform is being manipulated by some very bad actors. Then there is the even more unsettling realization that Zuckerberg and Sheryl Sandberg are unable or unwilling to share his concerns, polite as they may be to his face"--
Jun 17, 2019   |   Journal Article
Vahid Montazerhodjat, Shomesh E. Chaudhuri, Daniel J. Sargent, Andrew W. Lo
ISSN:
2374-2437
DOI:
10.1001/jamaoncol.2017.0123
Library catalog:
jamanetwork.com
Importance: Randomized clinical trials (RCTs) currently apply the same statistical threshold of alpha = 2.5% for controlling for false-positive results or type 1 error, regardless of the burden of disease or patient preferences. Is there an objective and systematic framework for designing RCTs that incorporates these considerations on a case-by-case basis? Objective: To apply Bayesian decision analysis (BDA) to cancer therapeutics to choose an alpha and sample size that minimize the potential harm to current and future patients under both null and alternative hypotheses. Data Sources: We used the National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) database and data from the 10 clinical trials of the Alliance for Clinical Trials in Oncology. Study Selection: The NCI SEER database was used because it is the most comprehensive cancer database in the United States. The Alliance trial data was used owing to the quality and breadth of data, and because of the expertise in these trials of one of us (D.J.S.). Data Extraction and Synthesis: The NCI SEER and Alliance data have already been thoroughly vetted. Computations were replicated independently by 2 coauthors and reviewed by all coauthors. Main Outcomes and Measures: Our prior hypothesis was that an alpha of 2.5% would not minimize the overall expected harm to current and future patients for the most deadly cancers, and that a less conservative alpha may be necessary. Our primary study outcomes involve measuring the potential harm to patients under both null and alternative hypotheses using NCI and Alliance data, and then computing BDA-optimal type 1 error rates and sample sizes for oncology RCTs. Results: We computed BDA-optimal parameters for the 23 most common cancer sites using NCI data, and for the 10 Alliance clinical trials. For RCTs involving therapies for cancers with short survival times, no existing treatments, and low prevalence, the BDA-optimal type 1 error rates were much higher than the traditional 2.5%. For cancers with longer survival times, existing treatments, and high prevalence, the corresponding BDA-optimal error rates were much lower, in some cases even lower than 2.5%. Conclusions and Relevance: Bayesian decision analysis is a systematic, objective, transparent, and repeatable process for deciding the outcomes of RCTs that explicitly incorporates burden of disease and patient preferences.
Jun 16, 2019   |   Web Page
Anna Patton
Jed Emerson has never been afraid to tell it like it is. Now, after 30-odd years of exploring impact and purpose-driven capital – and eight books later – he believes the field has become ‘lazy’, offering superficial answers to humanity’s most difficult questions as we skip straight to the ‘how’ of tactics and strategy. The advisor, author and (former) keynote speaker tells us why he’s ready to stop talking – and why he hopes the rest of us will do the same
Jun 12, 2019   |   Web Page
The Impact Classes of Investment
PGGM
What are the different kinds of impact that investments have on people and the planet?
Jun 10, 2019   |   Journal Article
Shawn Cole, V. Kasturi Rangan, Alnoor Ebrahim, Caitlin Reimers Brumme
Library catalog:
www.hbs.edu
Jun 10, 2019   |   Journal Article
Vikram S. Gandhi, Caitlin Reimers Brumme, Sarah Mehta
Library catalog:
www.hbs.edu
It is March 2017, and TPG, a global alternative investment firm with $74 billion in assets under management, has recently launched its inaugural impact-investing fund—the $2 billion Rise Fund. In an effort to “take the religion out of impact investing,” Bill McGlashan, founder and managing partner of TPG Growth, an arm of TPG focused on growth equity investments and middle-market buyouts, and co-founder and CEO of the Rise Fund, has partnered with The Bridgespan Group, a nonprofit consultancy, to develop an evidence-based methodology for quantifying the impact of prospective Rise investments. Together, they have come up with a framework that ultimately generates an impact multiple of money (IMM), a measure of the social value created by a company per equity dollar invested. If a company fails to meet the IMM threshold, Rise will not invest in it. The case finds McGlashan and Maya Chorengel (HBS MBA ’97), Rise’s senior partner for impact, debating whether to make Rise’s first investment in EverFi, an educational technology company that offers a range of online educational programming to its K-12 school, university, and corporate clients.
Jun 10, 2019   |   Web Page
Matt Bannick, Mike Kubzansky, Robynn Steffen
As impact investing grows, investors are beginning to better understand the relationship between risk, return, and impact. Beyond Trade-offs includes research from the Economist Intelligence Unit on the factors driving this growth, and perspectives from leading investors who have moved beyond the trade-off debate to invest across the returns continuum.
Jun 10, 2019   |   Report
Matt Bannick, Paula Goldman, Michael Kubzansky, Yasemin Saltuk
Jun 10, 2019   |   Report
Abhilash Mudaliar, Hannah Dithrich
Jun 9, 2019   |   Journal Article
Daniel Nettle, Rebecca Saxe
DOI:
10.31234/osf.io/kupqv
Library catalog:
psyarxiv.com
Many human societies feature institutions for redistributing resources from some individuals to others, but preferred levels of redistribution vary greatly within and between populations. We postulate that support for redistribution is the output of moral computations that are sensitive to perceived features of the social situation. We develop a within-subjects experimental approach in which participants prescribe appropriate redistribution for hypothetical villages whose features vary. Over six experiments involving 600 adults from the UK, we show that participants shift their preferences markedly from village to village. Support for redistribution is better predicted by the social features of the village than by individual differences in participants' political orientations. Higher levels of redistribution are systematically favoured when luck is more important in the initial distribution of resources; when social groups are more homogeneous; when the group is at war; and when resources are abundant rather than scarce. Participants have systematic intuitions about when the implementation of redistribution will prove problematic, and these are distinct from their intuitions about when redistribution is desirable. We argue that the operation of flexible, context-sensitive moral computations may explain variation and change in support for, and hence existence of, redistributive institutions across societies and over time. The reasons different people come to different conclusions about redistribution may lie mostly in different appraisals of what their social group is like, rather than in differences in values.
Jun 1, 2019   |   Book
They Don't Represent Us: Reclaiming Our Democracy
Lawrence Lessig
ISBN:
978-0-06-294571-6
Library catalog:
Open WorldCat
My blurb: In classic Lessig fashion, They Don’t Represent Us connects one of society’s biggest challenges - the impact of technology on our society and democracy - to the evolution of our constitution to show how we’ve lost our voice in our system of government. But just as the reader descends into a spiral of despair, Lessig pulls them back up with the hope of potential interventions that could successfully enact positive change.
May 14, 2019   |   Book
David Weinberger
ISBN:
978-1-63369-395-1
Library catalog:
Library of Congress ISBN
Modern science, the Internet, big data, and AI are each saying the same thing to us: the world is -- and always has been -- far more complex and unpredictable than we've allowed ourselves to see. As a result we're undergoing a sea change in our understanding of how things happen, and in our deepest strategies for predicting, preparing for, and managing our lives and our businesses. For example, machine learning allows us to make better predictions (think the weather, stock performance, online clicks) but we know less about why those predictions are right--and we need to get used to that. And in fact, over the past twenty years we've been unintentionally developing strategies that avoid anticipating what will happen so we don't have to depend on unreliable revenue forecasts, assumptions about customer needs, and hypotheses about how a product will be used. By embracing these strategies, we're flourishing by creating yet more possibilities and yet more unpredictability. In wide-ranging stories and characteristically all-encompassing syntheses, technology researcher, internet expert, and philosopher David Weinberger reveals the trends that hide in so many aspects of our lives--and shows us how they matter.--
May 14, 2019   |   Journal Article
Alison Gopnik
ISSN:
1572-8641
DOI:
10.1023/A:1008290415597
Library catalog:
Springer Link
I argue that explanation should be thought of as the phenomenological mark of the operation of a particular kind of cognitive system, the theory-formation system. The theory-formation system operates most clearly in children and scientists but is also part of our everyday cognition. The system is devoted to uncovering the underlying causal structure of the world. Since this process often involves active intervention in the world, in the case of systematic experiment in scientists, and play in children, the cognitive system is accompanied by a ‘theory drive’, a motivational system that impels us to interpret new evidence in terms of existing theories and change our theories in the light of new evidence. What we usually think of as explanation is the phenomenological state that accompanies the satisfaction of this drive. However, the relation between the phenomenology and the cognitive system is contingent, as in similar cases of sexual and visual phenomenology. Distinctive explanatory phenomenology may also help us to identify when the theory-formation system is operating.
May 9, 2019   |   Newspaper Article
Rachel Kushner
ISSN:
0362-4331
Library catalog:
NYTimes.com
In three decades of advocating for prison abolition, the activist and scholar has helped transform how people think about criminal justice.
May 7, 2019   |   Book
John Sharp, Colleen Macklin
ISBN:
978-0-262-03963-5
Library catalog:
Library of Congress ISBN
Apr 17, 2019   |   Book
Shoshana Zuboff
ISBN:
978-1-61039-570-0
Library catalog:
Library of Congress ISBN
"Shoshana Zuboff, named "the true prophet of the information age" by the Financial Times, has always been ahead of her time. Her seminal book In the Age of the Smart Machine foresaw the consequences of a then-unfolding era of computer technology. Now, three decades later she asks why the once-celebrated miracle of digital is turning into a nightmare. Zuboff tackles the social, political, business, personal, and technological meaning of "surveillance capitalism" as an unprecedented new market form. It is not simply about tracking us and selling ads, it is the business model for an ominous new marketplace that aims at nothing less than predicting and modifying our everyday behavior--where we go, what we do, what we say, how we feel, who we're with. The consequences of surveillance capitalism for us as individuals and as a society vividly come to life in The Age of Surveillance Capitalism's pathbreaking analysis of power. The threat has shifted from a totalitarian "big brother" state to a universal global architecture of automatic sensors and smart capabilities: A "big other" that imposes a fundamentally new form of power and unprecedented concentrations of knowledge in private companies--free from democratic oversight and control"--
Apr 16, 2019   |   Conference Paper
David Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, Dan Dennison
Mar 30, 2019   |   Document
The Illusion of Algorithmic Fairness
Rodrigo Ochigame
Mar 30, 2019   |   Journal Article
Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, Cass R. Sunstein
Library catalog:
arXiv.org
The law forbids discrimination. But the ambiguity of human decision-making often makes it extraordinarily hard for the legal system to know whether anyone has actually discriminated. To understand how algorithms affect discrimination, we must therefore also understand how they affect the problem of detecting discrimination. By one measure, algorithms are fundamentally opaque, not just cognitively but even mathematically. Yet for the task of proving discrimination, processes involving algorithms can provide crucial forms of transparency that are otherwise unavailable. These benefits do not happen automatically. But with appropriate requirements in place, the use of algorithms will make it possible to more easily examine and interrogate the entire decision process, thereby making it far easier to know whether discrimination has occurred. By forcing a new level of specificity, the use of algorithms also highlights, and makes transparent, central tradeoffs among competing values. Algorithms are not only a threat to be regulated; with the right safeguards in place, they have the potential to be a positive force for equity.
Mar 22, 2019   |   Journal Article
Samuel G. Finlayson, John D. Bowers, Joichi Ito, Jonathan L. Zittrain, Andrew L. Beam, Isaac S. Kohane
DOI:
10.1126/science.aaw4399
Mar 12, 2019   |   Book
B. Alan Wallace, Zara Houshmand
ISBN:
978-1-55939-353-9
Library catalog:
Library of Congress ISBN
Mar 7, 2019   |   Journal Article
Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan
Library catalog:
arXiv.org
Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
Mar 7, 2019   |   Journal Article
The AI spring of 2018
Sofia Olhede, Patrick Wolfe
Mar 6, 2019   |   Book
Albert-László Barabási
ISBN:
978-0-316-50549-9
Library catalog:
Library of Congress ISBN
Mar 4, 2019   |   Book
Nina Montgomery, Joi Ito
ISBN:
978-0-429-45279-6
Library catalog:
Open WorldCat
Perspectives on Impact brings together leaders from across sectors to reflect on our approaches to social change. Sharing diverse examples from their work, these authors show how we must think more systemically and work more collaboratively to move the needle on the biggest social, humanitarian, and environmental challenges facing our world. Chapters by: Niko Canner, Shanti Nayak, and Cynthia Warner (Incandescent); Duncan Green (Oxfam); Farah Ramzan Golant (Girl Effect, kyu); Sara Holoubek (Luminary Labs); Joi Ito (MIT Media Lab); Leila Janah (Samasource, LXMI, Samaschool); Amirah Jiwa; George Kronnisanyon Werner (Republic of Liberia); Chris Larkin (IDEO.org); Eric Maltzer (Medora Ventures, Middlebury College); Jane Nelson (Harvard Kennedy School); Craig Nevill-Manning and Prem Ramaswami (Sidewalk Labs); Jacqueline Novogratz (Acumen); Deena Shakir (GV, formerly Google Ventures); Jose Miguel Sokoloff (MullenLowe Group); Lara Stein (TEDx, Women's March Global); Piyush Tantia (ideas42); Fay Twersky (William & Flora Hewlett Foundation); Sherrie Rollins Westin and Shari Rosenfeld (Sesame Workshop). Perspectives on Impact and its sister book, Perspectives on Purpose, bring together leading voices from across sectors to discuss how we must adapt our organizations for the twenty-first century world. Perspectives on Impact focuses on the recalibration of social impact approaches to tackle complex humanitarian, social, and environmental challenges; Perspectives on Purpose looks at the shifting role of the corporation in society through the lens of purpose. You can find Perspectives on Purpose: Leading Voices on Building Brands and Businesses for the Twenty-First Century here: https://www.amazon.com/Perspectives-Purpose-Building-Businesses-Twenty-First/dp/036711237X
Feb 19, 2019   |   Journal Article
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus
Library catalog:
arXiv.org
Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis. This suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, trained on a different subset of the dataset, to misclassify the same input.
Feb 19, 2019   |   Journal Article
Justin Gilmer, Ryan P. Adams, Ian Goodfellow, David Andersen, George E. Dahl
Library catalog:
arXiv.org
Advances in machine learning have led to broad deployment of systems with impressive performance on important problems. Nonetheless, these systems can be induced to make errors on data that are surprisingly similar to examples the learned system handles correctly. The existence of these errors raises a variety of questions about out-of-sample generalization and whether bad actors might use such examples to abuse deployed systems. As a result of these security concerns, there has been a flurry of recent papers proposing algorithms to defend against such malicious perturbations of correctly handled examples. It is unclear how such misclassifications represent a different kind of security problem than other errors, or even other attacker-produced examples that have no specific relationship to an uncorrupted input. In this paper, we argue that adversarial example defense papers have, to date, mostly considered abstract, toy games that do not relate to any specific security concern. Furthermore, defense papers have not yet precisely described all the abilities and limitations of attackers that would be relevant in practical security. Towards this end, we establish a taxonomy of motivations, constraints, and abilities for more plausible adversaries. Finally, we provide a series of recommendations outlining a path forward for future work to more clearly articulate the threat model and perform more meaningful evaluation.
Feb 12, 2019   |   Book
Martin Buber, Ronald Gregor Smith
ISBN:
978-1-57898-997-3
Library catalog:
Open WorldCat
"Today considered a landmark of twentieth-century intellectual history, I and Thou is also one of the most important books of Western theology. In it, Martin Buber, heavily influenced by the writings of Frederich Nietzsche, united the proto-Existentialists currents of modern German thought with the Judeo-Christian tradition, powerfully updating faith for modern times. Since its first appearance in German in 1923, this slender volume has become one of the epoch-making works of our time. Not only does it present the best thinking of one of the greatest Jewish minds in centuries, but has helped to mold approaches to reconciling God with the workings of the modern world and the consciousness of its inhabitants. This work is the centerpiece of Buber's groundbreaking philosophy. It lays out a view of the world in which human beings can enter into relationships using their innermost and whole being to form true partnerships. These deep forms of rapport contrast with those that spring from the Industrial Revolution, namely the common, but basically unethical, treatment of others as objects for our use and the incorrect view of the universe as merely the object of our senses, experiences. Buber goes on to demonstrate how these interhuman meetings are a reflection of the human meeting with God. For Buber, the essence of biblical religion consists in the fact that -- regardless of the infinite abyss between them -- a dialogue between man and God is possible. Ecumenical in its appeal, I and Thou nevertheless reflects the profound Talmudic tradition from which it has emerged. For Judaism, Buber's writings have been of revolutionary importance. No other writer has so shaken Judaism from parochialism and applied it so relevantly to the problems and concerns of contemporary men. On the other hand, the fundamentalist Protestant movement in this country has appropriated Buber's "I and Thou encounter" as the implicit basis of its doctrine of immediate faith-based salvation. In this light, Martin Buber has been viewed as the Jewish counterpart to Paul Tillich."--Publisher description.
Feb 12, 2019   |   Journal Article
Finale Doshi-Velez, Been Kim
Library catalog:
arXiv.org
As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe when interpretability is needed (and when it is not). Next, we suggest a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning.
Feb 12, 2019   |   Journal Article
Zachary C. Lipton
Library catalog:
arXiv.org
Supervised machine learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world? We want models to be not only good, but interpretable. And yet the task of interpretation appears underspecified. Papers provide diverse and sometimes non-overlapping motivations for interpretability, and offer myriad notions of what attributes render models interpretable. Despite this ambiguity, many papers proclaim interpretability axiomatically, absent further explanation. In this paper, we seek to refine the discourse on interpretability. First, we examine the motivations underlying interest in interpretability, finding them to be diverse and occasionally discordant. Then, we address model properties and techniques thought to confer interpretability, identifying transparency to humans and post-hoc explanations as competing notions. Throughout, we discuss the feasibility and desirability of different notions, and question the oft-made assertions that linear models are interpretable and that deep neural networks are not.
Feb 2, 2019   |   Book
Eric A. Posner, E. Glen Weyl
ISBN:
978-0-691-17750-2
Library catalog:
Library of Congress ISBN
"Many blame today's economic inequality, stagnation, and political instability on the free market. The solution is to rein in the market, right? [This book] turns this thinking--and pretty much all conventional thinking about markets, both for and against--on its head. The book reveals...new ways to organize markets for the good of everyone. It shows how the emancipatory force of genuinely open, free, and competitive markets can reawaken the dormant nineteenth-century spirit of liberal reform and lead to greater equality, prosperity, and cooperation. [The authors] demonstrate why private property is inherently monopolistic, and how we would all be better off if private ownership were converted into a public auction for public benefit. They show how the principle of one person, one vote inhibits democracy, suggesting instead an ingenious way for voters to effectively influence the issues that matter most to them. They argue that every citizen of a host country should benefit from immigration--not just migrants and their capitalist employers. They propose leveraging antitrust laws to liberate markets from the grip of institutional investors and creating a data labor movement to force digital monopolies to compensate people for their electronic data. Only by radically expanding the scope of markets can we reduce inequality, restore robust economic growth, and resolve political conflicts. But to do that, we must replace our most sacred institutions with truly free and open competition--[this book] shows how."--
Jan 29, 2019   |   Journal Article
Zachary C. Lipton, Jacob Steinhardt
Library catalog:
arXiv.org
Collectively, machine learning (ML) researchers are engaged in the creation and dissemination of knowledge about data-driven algorithms. In a given paper, researchers might aspire to any subset of the following goals, among others: to theoretically characterize what is learnable, to obtain understanding through empirically rigorous experiments, or to build a working system that has high predictive accuracy. While determining which knowledge warrants inquiry may be subjective, once the topic is fixed, papers are most valuable to the community when they act in service of the reader, creating foundational knowledge and communicating as clearly as possible. Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship: (i) failure to distinguish between explanation and speculation; (ii) failure to identify the sources of empirical gains, e.g., emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning; (iii) mathiness: the use of mathematics that obfuscates or impresses rather than clarifies, e.g., by confusing technical and non-technical concepts; and (iv) misuse of language, e.g., by choosing terms of art with colloquial connotations or by overloading established technical terms. While the causes behind these patterns are uncertain, possibilities include the rapid expansion of the community, the consequent thinness of the reviewer pool, and the often-misaligned incentives between scholarship and short-term measures of success (e.g., bibliometrics, attention, and entrepreneurial opportunity). While each pattern offers a corresponding remedy (don't do it), we also discuss some speculative suggestions for how the community might combat these trends.
Jan 29, 2019   |   Blog Post
Michael Jordan
Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and…
Nov 23, 2018   |   Book
Anand Giridharadas
ISBN:
978-0-451-49324-8
Library catalog:
Library of Congress ISBN
May 29, 2018   |   Book
Justine Larbalestier, Sarah Rees Brennan
ISBN:
978-0-06-208964-9
Library catalog:
Library of Congress ISBN
Residing in New Whitby, Maine, a town founded by vampires trying to escape persecution, Mel finds her negative attitudes challenged when her best friend falls in love with one, another friend's father runs off with one, and she herself is attracted to someone who tries to pass himself off as one
May 20, 2018   |   Book
Judea Pearl, Dana Mackenzie
ISBN:
978-0-465-09760-9
Library catalog:
Library of Congress ISBN
"Everyone has heard the claim, "Correlation does not imply causation." What might sound like a reasonable dictum metastasized in the twentieth century into one of science's biggest obstacles, as a legion of researchers became unwilling to make the claim that one thing could cause another. Even two decades ago, asking a statistician a question like "Was it the aspirin that stopped my headache?" would have been like asking if he believed in voodoo, or at best a topic for conversation at a cocktail party rather than a legitimate target of scientific inquiry. Scientists were allowed to posit only that the probability that one thing was associated with another. This all changed with Judea Pearl, whose work on causality was not just a victory for common sense, but a revolution in the study of the world"--
Dec 21, 2017   |   Journal Article
Chelsea Barabas, Karthik Dinakar, Joichi Ito, Madars Virza, Jonathan Zittrain
Library catalog:
arXiv.org
Actuarial risk assessments might be unduly perceived as a neutral way to counteract implicit bias and increase the fairness of decisions made at almost every juncture of the criminal justice system, from pretrial release to sentencing, parole and probation. In recent times these assessments have come under increased scrutiny, as critics claim that the statistical techniques underlying them might reproduce existing patterns of discrimination and historical biases that are reflected in the data. Much of this debate is centered around competing notions of fairness and predictive accuracy, resting on the contested use of variables that act as "proxies" for characteristics legally protected against discrimination, such as race and gender. We argue that a core ethical debate surrounding the use of regression in risk assessments is not simply one of bias or accuracy. Rather, it's one of purpose. If machine learning is operationalized merely in the service of predicting individual future crime, then it becomes difficult to break cycles of criminalization that are driven by the iatrogenic effects of the criminal justice system itself. We posit that machine learning should not be used for prediction, but rather to surface covariates that are fed into a causal model for understanding the social, structural and psychological drivers of crime. We propose an alternative application of machine learning and causal inference away from predicting risk scores to risk mitigation.
Mar 2, 2017   |   Book
Gandhi, Mahadev H. Desai
ISBN:
978-0-8070-5909-8
Library catalog:
Library of Congress ISBN
Dec 16, 2016   |   Book
Virginia Heffernan
ISBN:
978-1-4391-9170-5
Library catalog:
Library of Congress ISBN