Lecture by Joanna Bryson (Princeton University)

   4 October 2016

Dear Colleagues,

We warmly invite everyone to the next session of the MTA TK "Lendület" RECENS networks seminar series on Tuesday, 4 October 2016, at which Joanna Bryson (Princeton University) will give a talk entitled "The computational utility of human identity".

The talk will be held in the meeting room of the MTA TK Institute for Political Science (1014 Budapest, Országház utca 30.), starting at 3 p.m.

Abstract:

Information technology pervades our lives, and not only the lives of those of us in the United States. Ericsson has predicted that by 2020, 90% of humanity will have a mobile phone (in 2015 there were an estimated 2.7 billion phone contracts worldwide, roughly 800 million of which were added in 2014). With that technology has come artificial intelligence, able to read and categorise our words and images, and with it a commensurate loss of privacy through anonymity. Well-qualified experts, from Harvard's Margo Seltzer to Facebook's Mark Zuckerberg, have declared privacy a thing of the past.

Should we care? Does identity matter? Starting from the premise that privacy facilitates variation between individuals, I will begin by looking at the ecological role of cognition, then at that role's twinned consequences: our drives for social conformity and for individual differentiation. I will then review a number of recent modelling-based results from my group concerning identity, communication and cooperation, indicating that:
- Both being cooperative and being selfish can be viewed as strategies, a mix of which contributes to the greater good. Identity appears to be key to the implicit mechanisms that help societies strike the right balance between these.
- The imperative to honestly communicate in-group status as a cooperator can be so great that it motivates even dishonest conformity, such as transmitting known false beliefs about others.
- Humans' cooperative behaviour includes both blind trust in strangers and a rejection of profitable but unfair offers to cooperate. We can explain this behaviour in a context of partial information, but these fundamental cooperative behaviours are no longer supported by evolution (or learning) if we have too much information, or choice, within our society.
I will frame the above in a discussion of recent claims and concerns about artificial intelligence, which assume that "true" AI would have something like human identity. I will conclude that this is neither necessary nor desirable, nor is it likely to be possible.

Joanna Bryson is a Reader (tenured Associate Professor) at the University of Bath, currently on sabbatical as a Visiting Fellow at Princeton's Center for Information Technology Policy (CITP). She has broad academic interests in the structure and utility of intelligence, both natural and artificial. She has been publishing on AI ethics since 1998, and has been engaged in AI policy in the UK since coauthoring the 2011 "EPSRC Principles of Robotics". Her sabbatical project, "Public Goods and Artificial Intelligence", includes both basic research in human sociality and experiments in technological interventions. She holds degrees in Psychology from Chicago and Edinburgh, and in Artificial Intelligence from Edinburgh and MIT. At Bath she founded the Intelligent Systems research group (one of four in the Department of Computer Science). She also heads Artificial Models of Natural Intelligence, where she and her colleagues publish in biology, anthropology, ethics, cognitive science and systems AI.