Fazi, M Beatrice (2021) Beyond human: deep learning, explainability and representation. Theory, Culture and Society, 38 (7-8). pp. 55-77. ISSN 0263-2764
PDF - Published Version. Available under License Creative Commons Attribution (726kB).
PDF - Accepted Version. Restricted to SRO admin only. Available under License Creative Commons Attribution-NonCommercial No Derivatives (400kB).
Abstract
This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle the possibility of ‘re-presenting’ the algorithmic procedures of feature extraction and feature learning to the human mind. The article thus mobilises the notion of incommensurability (originally developed in the philosophy of science) to address explainability as a communicational and representational issue, which challenges phenomenological and existential modes of comparison between human and algorithmic ‘thinking’ operations.
| Item Type | Article |
|---|---|
| Keywords | algorithmic thought, deep neural networks, explanation, incommensurability, interpretability, philosophy, XAI |
| Schools and Departments | School of Media, Arts and Humanities > Media and Film |
| Research Centres and Groups | Sussex Humanities Lab |
| SWORD Depositor | Mx Elements Account |
| Depositing User | Mx Elements Account |
| Date Deposited | 03 Nov 2020 11:15 |
| Last Modified | 07 Apr 2022 12:45 |
| URI | http://sro.sussex.ac.uk/id/eprint/94780 |