Michael (Mike) Zahorec
Graduate Student in
Philosophy and
Computer Science
at Florida State University
I'm currently working on a PhD in philosophy and an MS in computer science.
As a philosopher, I have two main research directions. The first is in philosophy of science, but it also touches on topics in philosophy of language, metaphysics, and epistemology. In a word, it concerns the rich variety of ways that we understand, discuss, and interact with what there is: especially in our scientific practices, but also in ordinary life. My second core research direction is in the philosophy and ethics of AI. I work on topics related to computational opacity and transparency, especially explainability (XAI) and interpretability, with a focus on how these apply to generative AI such as large language models (LLMs). This work sits at the intersection of the theory and ethics of AI: it concerns both the nature of AI and policies and procedures for responsible AI use.
As a computer scientist, I especially enjoy theory of computation, algorithms, and other abstract, formal topics. My core efforts in CS, though, relate directly to my work in AI ethics: they concern explainability, interpretability, and, ultimately, responsible generative AI.
In more ways than one, I work at the intersection of philosophy and computer science. First, as mentioned, I work in Philosophy of AI and AI ethics, and also develop related empirical work in computer science. Second, I have multiple active projects in digital humanities. Third, I have used data science techniques to help resolve philosophical issues.
In the summer of 2024, I worked with the Responsible AI team at Humana. In the summer of 2023, I got married, and we spent some time in Hawaii. During the three summers before that, I worked with a handful of data science education teams at MathWorks, the company rightly famous for its products, MATLAB and Simulink.
My dissertation is about kinds. I engage with contemporary philosophy of science on 'scientific kinds' and 'natural kinds', defending a pluralist realism about kinds in science. In brief, there are many kinds of kinds (pluralism), and we should generally believe that what scientists talk, think, and theorize about exists (realism). The pluralism flows, in part, from my dissatisfaction with both Kripke-Putnam essentialism and the more recently popular theory of kinds, property cluster theory. I develop the realism along pragmatic lines, influenced by Kant and Dewey. I consider a range of scientific kinds in my dissertation, focusing especially on biology, chemistry, and computer and data science. I demonstrate that, though kind concepts are often influenced in messy ways by pragmatic factors, we should retain a modest, pluralist realism.
My thesis concerns the explainability and interpretability of artificial intelligence (AI). I focus on generative AI, specifically large language models (LLMs). First, I contribute to theoretical discussions about the nature of explainability and interpretability, demonstrating how to think clearly about these notions in the era of generative AI. Second, I develop and deploy methods for empirically evaluating these properties. Finally, I propose empirically informed and theoretically grounded strategies for improving the explainability of LLM outputs.
Updated November 2024