University of Sussex

Beyond human: deep learning, explainability and representation

journal contribution
posted on 2023-06-09, 22:03, authored by M. Beatrice Fazi
This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle the possibility of ‘re-presenting’ the algorithmic procedures of feature extraction and feature learning to the human mind. The article thus mobilises the notion of incommensurability (originally developed in the philosophy of science) to address explainability as a communicational and representational issue, which challenges phenomenological and existential modes of comparison between human and algorithmic ‘thinking’ operations.

History

Publication status

  • Published

File Version

  • Published version

Journal

Theory, Culture & Society

ISSN

0263-2764

Publisher

SAGE Publications

Issue

7-8

Volume

38

Page range

55-77

Department affiliated with

  • Media and Film Publications

Research groups affiliated with

  • Sussex Humanities Lab Publications

Full text available

  • Yes

Peer reviewed?

  • Yes

Legacy Posted Date

2020-11-03

First Open Access (FOA) Date

2020-11-03

First Compliant Deposit (FCD) Date

2020-11-03
