On Wednesday, January 28, 2026, Ian Hamilton announced on Bluesky that "I've been fired from UploadVR." Hamilton was the editor-in-chief at UploadVR, and on January 30th he wrote a Substack post titled "Ian is Typing" detailing how his co-workers were pushing to run a test of a "clearly disclosed AI author for UploadVR," and that he had three specific concerns: that the test be brief, that readers have the ability to turn off and hide all AI-authored posts, and that human freelancers have the right of first refusal. Hamilton claims to have tried to raise these concerns in Slack, but that the experiment was going to proceed regardless. He writes, "Unable to shift the direction of my colleagues and out of options to affect what was coming, I stepped out of Slack and sent a final email to them on Wednesday morning with a number of my contacts in the industry copied, raising some of these concerns. Not long after, I was called by my boss and fired."
I spoke with Hamilton last Friday, after his Substack post, to get more context on what led to his departure. Hamilton claims that UploadVR Editor & Developer David Heaney and UploadVR's Operations Manager Kyle Riesenbeck were behind the push to test this clearly disclosed AI author on UploadVR, and that the proposed test was ultimately a business decision made by Riesenbeck. Hamilton disagreed with that decision, and he cites it as the primary catalyst for the behavior that led to his firing. (UPDATE Feb 5, 2026: It is worth noting here that UploadVR has yet to run this AI bot author test, but the proposed test was the catalyst for Hamilton’s behavior.)
The specific reasons and circumstances around Hamilton's firing are publicly disputed by Heaney, who reacted on Twitter after Hamilton's Substack post went live by saying, "It is indeed only one side of the story. And an incomplete telling of it, with key omissions and wording choices that serve to paint a misleading picture." In another post Heaney says, "I can't get into it more at this point for obvious reasons, but don't believe everything you read, especially a single side of a complex story." During our interview, I asked Hamilton for his reaction to Heaney's claims that he's being misleading, and he did provide more context on the events that led up to his firing. Ultimately, it does sound like the proposed AI bot author test was the primary catalyst for Hamilton, and that this disagreement may have led to other behaviors and reactions that could also be reasonably cited as reasons he was fired. UploadVR may have a differing opinion as to what happened, but no one from UploadVR has made public comments beyond what Heaney has said on Twitter. I have extended invitations to both Riesenbeck and Heaney to come onto the podcast for a broader discussion about AI, but nothing had been confirmed by the time of publication.
My Personal Take on AI: Technically, Philosophically, Legally, and Culturally
Public discourse around AI has split into a binary of Pro-AI vs Anti-AI, and while my personal views cannot be easily collapsed into one side or the other, I'd usually take the Anti-AI side of a debate if given the opportunity. I do think some form of AI is here to stay and will be around for a long time, but right now there is a lot of hype and deluded thinking on the topic. I see AI as a technology that consolidates wealth and power, and so a primary question worth asking is “Whose power and wealth is being consolidated?” Karen Hao's Empire of AI elaborates on how past patterns of colonialism are playing out again within the context of data and the field of AI, as well as how scaling with more compute power has been the primary mode of innovation in AI. Gary Marcus, for his part, has been pushing back against the "Scale is All You Need" theory for many years now.
Technically speaking, I'm more of a short-term skeptic around LLMs, along the lines of the Stochastic Parrots critique that Emily M. Bender and Alex Hanna elaborate upon in their book The AI Con, but also Yann LeCun's call for more sensory grounding, as well as Gary Marcus' calls for more neurosymbolic cognitive architectures. AI has always been a marketing term, as elaborated in Dr. Jonnie Penn’s Ph.D. thesis "Inventing Intelligence: On the History of Complex Information Processing and Artificial Intelligence in the United States in the Mid-Twentieth Century." My perspective on AI has been informed by 122 unpublished interviews with AI researchers, many of whom also cite how empirical results often outpace theoretical results (i.e. there are often benchmark improvements without a full understanding of the theoretical foundations behind them, resulting in plateaus rather than monotonic progress). I've also spoken to over 100 XR artists, storytellers, and engineers about AI on the Voices of VR podcast over the past decade. When the context is bounded and the data are gathered in right relationship, then there can be some real utility. But there are also many gaps and ways that LLMs cause harm to marginalized communities. See the film Coded Bias for more details on that front.
Philosophically speaking, Process Philosophy has had a big influence on me, so check out my conversation with Whitehead scholar Matt Segall on AI. Timnit Gebru and Émile P. Torres' paper on the TESCREAL bundle has also been a key influence, deconstructing the impact of philosophies like Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism on AI research. I don't think AI is conscious, but I lean towards Whitehead's panexperientialism, which sees experience as going all the way down. This perspective also helps to differentiate humans from machines by looking at things like emotions, meaning, value, intention, context, and relationships, all of which can easily get collapsed if only looking through the lens of “intelligence.” I'm curious about Data Science as Neoplatonism ideas, and Michael Levin’s work on ingressing minds (influenced by Platonic forms and Whitehead's eternal objects) and his general calls for SUTI: the Search for Unconventional Terrestrial Intelligence. I also love Timothy E. Eastman’s Logoi Framework as elaborated in his book Untying the Gordian Knot: Process, Reality, Context. He highlights the triadic nature of reality as input-output-context, with the logic of actualizations being Boolean logic and the logic of potential being non-Boolean logic, something that Hans Primas elaborates on in Knowledge and Time. So AI needs to account for the pluralism of non-Boolean realities, but it often flattens them into a singular formal system that collapses situated knowledges. Also see James Bradley’s “Beyond Hermeneutics: Peirce’s Semiology as a Trinitarian Metaphysics of Communication,” which elaborates on Charles Sanders Peirce’s semiotics as a triadic system that includes a sign, object, and interpretant, whereas LLMs take a nominalist, dyadic approach that collapses the deeper meaning or interpretation (see computational linguist Bender’s elaboration of this argument in The AI Con). Also see Michèle Friend’s Pluralism in Mathematics: A New Position in Philosophy of Mathematics, as it applies Gödel's Incompleteness to the foundations of mathematics itself, points out the limits of Boolean logic, and argues for an overall paraconsistent logic. AI researcher Ben Goertzel wrote a paper on "Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation." Here's a talk I gave with some of my preliminary thoughts on AI. I also have a lot more thoughts and resources in my write-up from when I argued against AI in a Socratic debate at AWE 2025. Also check out this recent philosophical talk that digs into some of the philosophical foundations of my experiential design framework and Whitehead's panexperientialism.
Legally speaking, I generally advocate for a relational approach as well as open-source, decentralized approaches, but I also see a need for some legal checks and balances around privacy. I elaborate on these in a paper titled "Privacy Pitfalls of Contextually-Aware AI: Sensemaking Frameworks for Context and XR Data Qualities," written for Stanford's Cyber Policy Center's "Existing Law and Extended Reality" Symposium. But there is no sign of any new comprehensive federal privacy law in the US, which is where these major Big Tech companies are located. So the privacy implications of contextually-aware AI remain extremely fraught, especially with the trend of democratic backsliding in the US and beyond.
Culturally speaking, I find the forced integration of AI into so many layers of UX/UI to be largely non-consensual; it leaves me feeling like AI is being shoved down my throat when I didn't ask for it, and I usually avoid using it whenever I can. I don't want AI to write for me, because writing is the process of thinking for me, and I'd rather think for myself (see the “thinking as craft” argument from Hanna in The AI Con). I find the experience of AI slop videos, photos, and text to be profoundly dehumanizing, and it makes me want to retreat from any social media space where AI slop is flooding the feeds. I hate having to question the provenance and legitimacy of everything I see and hear, and the AI-driven misinformation campaigns are a blight on democracy. I really resonate with the view that AI is the Aesthetics of Fascism, considering the extent to which authoritarian leaders are using AI slop to push their democratic backsliding agendas.
So my perspectives on AI don't fit neatly into a single category, but I do resonate with some of the Anti-AI, Neo-Luddite sentiment. I'd point to Emily M. Bender and Alex Hanna’s The AI Con, Karen Hao’s Empire of AI, Shoshana Zuboff’s The Age of Surveillance Capitalism,...