Research
On this page: publications, research in progress or under review, and conference presentations.
Publications
Linguistic Corpora and Ordinary Language: On the Dispute Between Ryle and Austin About the Use of ‘Voluntary’, ‘Involuntary’, ‘Voluntarily’, and ‘Involuntarily’
Michael Zahorec, Robert Bishop, Nat Hansen, John Schwenkler, and Justin Sytsma. 2023. In Experimental Philosophy of Language: Perspectives, Methods and Prospects, ed. David Bordonaba-Plou. Springer: Logic, Argumentation and Reasoning series. [preprint] [published version]
Abstract: The fact that Gilbert Ryle and J.L. Austin seem to disagree about the ordinary use of words such as ‘voluntary’, ‘involuntary’, ‘voluntarily’, and ‘involuntarily’ has been taken to cast doubt on the methods of ordinary language philosophy. As Benson Mates puts the worry, ‘if agreement about usage cannot be reached within so restricted a sample as the class of Oxford Professors of Philosophy, what are the prospects when the sample is enlarged?’ (Mates, Inquiry 1:161–171, 1958, p. 165). In this chapter, we evaluate Mates’s criticism alongside Ryle’s and Austin’s specific claims about the ordinary use of these words, assessing these claims against actual examples of ordinary use drawn from the British National Corpus (BNC). Our evaluation consists in applying a combination of methods: first aggregating judgments about a large set of samples drawn from the corpus, and then using a clustering algorithm to uncover connections between different types of use. In applying these methods, we show where and to what extent Ryle’s and Austin’s accounts of the use of the target terms are accurate as well as where they miss important aspects of ordinary use, and we demonstrate the usefulness of this new combination of methods. At the heart of our approach is a commitment to the idea that systematically looking at actual uses of expressions is an essential component of any approach to ordinary language philosophy.
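For readers curious what this combination of methods can look like in practice, here is a minimal sketch in Python. The file name, column names, judgment categories, and the choice of k-means are illustrative assumptions; the chapter does not publish code, and its actual pipeline may differ.

    # Illustrative sketch: aggregate annotator judgments about BNC samples,
    # then cluster samples by their judgment profiles to surface types of use.
    # File name, column names, category labels, and k-means are assumptions.
    import pandas as pd
    from sklearn.cluster import KMeans

    # Each row: one annotator's judgment of one corpus sample.
    judgments = pd.read_csv("bnc_voluntary_judgments.csv")  # hypothetical file

    # Aggregate: for each sample, the proportion of judgments in each use-category.
    profiles = (
        judgments.groupby(["sample_id", "category"]).size()
        .unstack(fill_value=0)
    )
    profiles = profiles.div(profiles.sum(axis=1), axis=0)

    # Cluster samples with similar judgment profiles.
    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
    profiles["cluster"] = kmeans.fit_predict(profiles.values)

    # Average judgment profile of each cluster: a rough picture of each type of use.
    print(profiles.groupby("cluster").mean())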
Research in Progress or Under Review
Understanding AI
Accepted, pending review of final draft, for Philosophy of Artificial Intelligence, eds. D. Černý, V. Mařík, and J. Wiedermann. Springer. [long abstract]
Natural Kinds: Towards A Revised Picture
In review since July 2025. [draft]
Abstract: In the past few decades, attention has shifted from Kripke-Putnam essentialism (Kripke 1972/2001; Putnam 1973, 1975) to property cluster theories of natural kinds (Boyd 1988, 1991, 1999, 2019; Slater 2015; Crane 2021; Samuels and Ferreira 2010; Machery 2004). After characterizing these two popular, competing accounts, I offer a handful of reasons to doubt that either is adequate. First and perhaps most straightforwardly, Kripke-Putnam essentialism suggests that there is, at least roughly and in general, a one-to-one, highly explanatory correspondence between natural kinds and unique compositions or internal structures, but there are convincing arguments against the existence of such a correspondence (e.g., Crane 2021, Weisberg 2003). Cluster theories, for their part, face a dilemma about grounding and explanation. On the one hand, property clusters might be thought not to require grounding, in which case the explanatory demands that a theory of natural kinds is supposed to meet, even by cluster theorists’ own lights (e.g., Slater 2015 and Boyd 1999), go unmet. On the other hand, if property cluster kinds are thought to include grounds which explain the presence of the properties in the clusters, the theory either generates a vicious regress or depends quite centrally on a restricted version of essentialism. This last option is not as promising as it might seem, since the nail in the coffin of both theories, I claim, is that they take too much for granted. A theory of natural kinds should, I argue, provide a picture which clearly indicates what it is to be of a natural kind, but Kripke-Putnam essentialism and cluster theories both presuppose such pictures and rely on them in surprisingly crucial ways. To end, I propose a revised approach whose central aim is to grasp, in a distinctively philosophical way, the practices and thought of those who have sufficient competence with particular kind concepts and terms. The approach points towards a “piecemeal” (Magnus 2015) theory of natural kinds, though of a novel variety.
Kinds Are What There Is—And How We Think of It
In review since July 2025. [draft]
Abstract: By the second paragraph of “On What There Is”, Quine frames the ontological issue in terms of the kinds of entities there are. Indeed, when we wonder about reality and our knowledge of it, we reflect on the kinds of things and stuff we think there are. I develop this thought from the perspective of debates about natural kinds, arguing that the core issues about natural kinds are—and must remain—central to theoretical philosophy. My discussion is structured around two questions: (i) whether natural kinds are genuinely natural and (ii) what natural kinds are. These lead directly to issues foundational to metaphysics, philosophy of science, and epistemology. What is reality like? How do we conceptualize it? Can we know it? Natural kinds offer a uniquely integrative, case-focused perspective on these issues—and even those who reject the topic end up reintroducing it under new names.
A Paradigm for Responsible AI Use
In review since June 2025. [draft]
Abstract: As AI becomes embedded in our professional, creative, and everyday lives, the question of how to use it responsibly gains importance. This paper proposes a paradigm for responsible AI use: we should use AI systems as idea generators. The term ‘idea’ suggests modesty, as when we say that something is “just an idea” or ask someone for ideas. I defend this paradigm on epistemic grounds, drawing on a distinction between black-box and non-black-box explanations. Despite appearances, language model self-explanations are black-box explanations: they are not constrained to accurately reflect internal processes. Other paradigms, such as AI-as-expert or AI-as-neutral-information-processor, overlook this and invite unwarranted trust. The idea-generator paradigm reminds us that, as with inspiration from a poem or a landscape, getting an idea from AI does not justify belief or endorsement. This approach allows us to benefit from AI while preserving our own claims to competence and expertise, both individually and collectively.
Getting Clear About Opacity: A Variety of Terminologies and Their Significance in the Era of Generative AI
Invited to revise and resubmit to the journal Philosophy and Technology. Revisions in progress. [draft]
Abstract: Terms like ‘transparent’, ‘opaque’, ‘black box’, ‘interpretable’, and ‘explainable’ are used and defined differently in different places. Sometimes authors simply stipulate how they will use them. Others attempt to analyze or define them, settling once and for all what they mean or should be used to mean. For the most part, though, their meanings are left unsettled and vague. Here, I find some order by categorizing the ways these terms are used. I begin with a couple of varieties of what I call ‘strong opacity’, which variously concern the possibility of experts following, with understanding, the operations of the systems at issue. Against these, I map some other popular ways of using these terms. After this theoretical work, I turn to practical concerns, highlighting the enduring importance of strong opacity. Though it is appropriate and often quite useful to use these terms in the various ways just surveyed, we should remain concerned with strong opacity, especially given the novel difficulties which generative AI raises with respect to governance and evaluation. In short, evaluations which do not reduce strong opacity rely on representative performance assessments with clear success conditions, but these are much harder to come by when the AI at issue is generative. It is therefore important to continue pursuing, and to maintain a conceptual and terminological grasp on, computational methods which reduce or avoid strong opacity.
LLM Explainability: Conceptual Foundations and an Empirical Study of Natural Language Explanations
This research was accepted as my Computer Science Master’s Thesis at Florida State University. I am revising it for conference/journal submission. [draft]
Abstract: This is a conceptual and empirical investigation into the explainability of large language models (LLMs). Building on recent literature, I propose a flexible, use-oriented definition of explainability: the provision of information that is useful (or in some cases necessary) for guiding, justifying, critiquing, or improving model behavior. In Part 2, I taxonomize explanation methods for LLMs, emphasizing the diversity of purposes and strategies across methods. In Part 3, I introduce a novel approach to evaluating LLM explainability, focused on rigorous, fine-grained analysis of natural language explanations. I apply this method to assess GPT-4o-mini’s chain-of-thought explanations in multi-hop reasoning tasks. The experiments reveal that (1) two-hop reasoning is largely reliable, while three-hop reasoning remains inconsistent; (2) few-shot chain-of-thought prompting improves accuracy but increases the rate of illogical explanations; and (3) accuracy and explanation quality are both highly sensitive to the structure or form of extraneous information in the prompt. While the experiments focus on a single model, the results highlight important possibilities, patterns, and risks for LLM evaluation and usage more generally. I conclude by drawing out the broader implications for LLM evaluation.
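For concreteness, the core of such an evaluation loop might be sketched as follows. The example item, prompt wording, and string-match accuracy check are illustrative assumptions rather than the thesis’s actual harness, and the fine-grained analysis of the explanation text itself is carried out separately.

    # Minimal sketch: elicit chain-of-thought answers to multi-hop items and
    # score answer accuracy. Items, prompt wording, and the accuracy heuristic
    # are assumptions for illustration only.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    items = [
        # (question, gold answer): a hypothetical two-hop item
        ("Alice's manager is Bob. Bob's office is in Denver. "
         "In which city is the office of Alice's manager?", "Denver"),
    ]

    def ask_with_cot(question: str) -> str:
        """Ask for a step-by-step (chain-of-thought) answer to one item."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": question + "\nThink step by step, then state the final answer.",
            }],
        )
        return response.choices[0].message.content

    correct = 0
    for question, gold in items:
        reply = ask_with_cot(question)
        # Crude accuracy check; the explanation itself would be analyzed
        # separately and in a more fine-grained way.
        correct += int(gold.lower() in reply.lower())

    print(f"accuracy: {correct}/{len(items)}")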
Conference Presentations
The Limits of AI Understanding and What They Mean for Users
[Upcoming] 2025. Illinois Institute of Technology, 2nd Sawyier Conference on Ethics of Technology.
AI as Idea Generator
[Upcoming] 2025. Illinois Philosophical Association.
Black Boxes, Rules, and Reasons: On Understanding and AI Ethics
2025. Alabama Philosophical Society.
Massive Complexity: The Heart of Concerns and Disagreements about Artificial Intelligence
2025. American Philosophical Association Eastern, Society for the Metaphysics of Science.
Semantic, Technical, and Normative Tangles: Three Levels of Disagreement about Artificial Intelligence
2025. Utah State University.
Nature and Natural Kinds
2024. Alabama Philosophical Society.
Natural Kinds: Towards a Revised Approach
2024. Duke/UNC Graduate Conference.
Natural Kinds: Towards a Revised Approach
2024. Mississippi Academy of Sciences.