Bots and Thoughts: Exploring Language Use, Pragmatic Competence, and Ethical Challenges in AI-Based Communication Systems
DOI:
https://doi.org/10.70670/sra.v4i1.1850

Abstract
How does language shape thought, and what happens when language itself is generated by machines? This question has become increasingly urgent in the age of artificial intelligence–mediated communication. Recent reports suggest that generative AI tools are now used by hundreds of millions of people globally, reshaping how individuals write emails, produce academic texts, and interact in digital spaces (Dwivedi et al., 2023). The rapid adoption of systems such as ChatGPT and other large language models shows that machine-generated discourse is no longer peripheral but central to everyday communication. As Noam Chomsky argues, the human capacity for language is closely tied to the human capacity for thought, which raises a critical question: does AI-generated language reflect genuine pragmatic understanding, or merely statistical prediction (Chomsky et al., 2023)? Consequently, the growing reliance on AI-generated discourse has intensified scholarly interest in how such systems influence language use, pragmatic competence, and ethical communication practices.
