Computer science seeks to build "transparent" human-machine interfaces through a series of natural language processing techniques, powered by machine learning, that generate probabilistic models of human-to-human conversation. The relative success of such a formalist, behaviorist approach in more or less controlled usage contexts is acknowledged by both computer scientists and the general public, who employ anthropomorphic metaphors that point to a certain (language) ideology which applied linguistics needs to problematize.
While a subject-object stance on these metaphors would easily dismiss them as playful anthropomorphism, a more symmetrical approach to artificial intelligence (henceforth AI) language-processing technologies can take the human-machine relation they imply further and thus open important issues for research in a posthumanist Applied Linguistics (Pennycook, 2018). I argue that postphenomenology has an important contribution to make to this agenda because it seeks to describe the human-machine relation from within the internal structure of the experience and, consequently, outside the subject/object divide.
Postphenomenology analyzes the character of the bodily-perceptual relations human beings have with technology and the ways technologies affect human relations with the world (Rosenberger & Verbeek, 2015). Among these are "alterity relations", in which the device becomes a presence with a "quasi-other" quality (Ihde, 1990). To what extent users' language choices and language performances are affected by this quasi-alterity is one example of the research questions this opens up for a posthumanist applied linguist. Wittkower (2022, p. 357), for example, argues that interfaces based on "encoded pseudo-mental contents" require the user to attribute intentionality and knowledge (a theory of mind) to the device for it to work properly. In other words, the allegedly intelligent device imposes on the user certain uses of natural language which, in turn, require a "second-order understanding" of the machine's pseudo-understanding (Wittkower, 2022, p. 261).
This example alone supports the need for certain precautionary methodological principles in the study of natural language used in quasi-alterity relations with AI. Namely, one needs to investigate not only the utterances of the machine and of the human user, but also how the user's second-order understanding restricts her (first-order) use of language towards the device. This argument is part of the preliminary findings of an ongoing research project. These findings suggest that a postphenomenological approach to human-AI relations helps a posthumanist Applied Linguistics protect itself from anthropomorphisms that obscure, rather than erode, the subject/object dichotomy.
References:
Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press.
Pennycook, A. (2018). Posthumanist applied linguistics. Routledge.
Rosenberger, R., & Verbeek, P.-P. (Eds.). (2015). Postphenomenological investigations: Essays on human-technology relations. Lexington Books.
Wittkower, D. E. (2022). What is it like to be a bot? In S. Vallor (Ed.), The Oxford handbook of philosophy of technology (pp. 357–373). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190851187.013.23