Voices of VR

Kent Bye

349 episodes
#1713: CIIIC’s €200 Million in Public Funding: The Creative Industries Immersive Impact Coalition

    2026/05/01 | 42 mins.
The CIIIC is the Creative Industries Immersive Impact Coalition, based in the Netherlands, which will be spending about €200 million in public funding over the next five years. It is a really exciting development in Europe that is promoting the development of immersive experiences (which they abbreviate as IX). They will be cultivating knowledge and methods of experiential design, developing immersive talent and human capital, cultivating immersive ecosystems and facilities, catalyzing innovation via various projects, and creating an overall synergy across all of their efforts.

For a comprehensive recap of CIIIC and what they're doing, be sure to check out the CIIIC section starting on page 62 of the extensive 121-page IDFA DocLab Think Tank Report that I wrote, which was published on April 21, 2026. I provide a bit more context on this report in the intro and outro of this episode, which is an oral history interview with CIIIC Program Director Heleen Rouw recorded at UnitedXR in December. This conversation forms the basis for that section, but it also has some additional updates on their various efforts, including:

    Artistic & Design Research for Immersive Experiences (ADRIE) (5 projects)

    Phase I of Innovation Impact Challenge: IX in Urban Development (17 projects)

    Phase II Innovation Impact Challenge: IX in Urban Development (10 projects)

The "Shared Realities" consortium is part of the initial ADRIE cohort, and includes a collaboration between IDFA DocLab, Amsterdam University of Applied Sciences, MIT Open Documentary Lab, PHI, ARTIS Planetarium, and a number of XR studios based in the Netherlands, including POPKRAFT, Polymorf, Studio Biarritz, WeMakeVR, ALLLESSS (Ali Eslami), Ado Ato Pictures (Tamara Shogaolu), and Cassette (Nu:Reality). Be sure to check out episode #1697 to hear more about how the Shared Realities initiative will facilitate collaborations between experiential designers, artists, and researchers to see if immersive art can help revitalize civic society.

    This interview with Rouw provides an overview of the CIIIC, how they're defining "immersive" to be much broader than any single technology, and why they think immersive will be the next big wave of innovation that can help promote public interest values.

    This is a listener-supported podcast through the Voices of VR Patreon.

    Music: Fatality

    #1712: Preview of SXSW XR Experience 2026 with Blake Kammerdiener

    2026/03/13 | 1h 4 mins.
I interviewed SXSW XR Experience 2026 curator Blake Kammerdiener about this year's selection, and how immersive artists are using generative AI across a series of different projects. Below is the selection (ordered from longest to shortest). This year's program runs from 11am to 6pm CDT, Sunday, March 15 through Tuesday, March 17, 2026.

    XR Experience Competition

    Escape The Internet (Part 1) (50 min)

    Inter(mediate) Spaces (45 min)

    Winterover (45 min)

    Fabula Rasa: Dead Man Talking (30 min)

    Frustrain: Trainman (30 min)

    The Forgotten War (30 min)

    Watsonville (30 min)

    Fillos do Vento: A Rapa (28 min)

    Crafting Crimes: The Mona Lisa Heist (20 min)

    Love Bird (20 min)

    The Baby Factory is Closed (20 min)

    Lionia Is Leaving (18 min)

    Body Proxy (15 min)

    Cycle (15 min)

    The Great Dictator: A participatory AI installation about power, rhetoric, and memory (15 min)

    XR Experience Spotlight

    The Clouds Are Two Thousand Meters Up (62 min)

    The Great Orator (50 min)

    Lesbian Simulator (40 min)

    A Long Goodbye (35 min)

    Dark Rooms (35 min)

    Lacuna (34 min)

    The Dollhouse (24 min)

    Reality Looks Back (21 min)

    Insider Outsider (12 min)

    loss·y (10 min)

    Lost Love Hotline (10 min)

    Out of Nowhere (10 min)

    Spectacular: The Art of Jonathan Yeo in Augmented Reality (10 min)

    Ascended Intelligence (9 min)

    MIT Open Documentary Lab’s AR and Public Space Artist Collective

    Layers of Place: Austin [90 min total]

    ORYZA: Healing Ground (15 min)

    The Founders Pillars (15 min)

    Open Access Memorial (15 min)

    Paper Boat (15 min)

    Humble Monuments (15 min)

    Moving Memory (15 min)

    This is a listener-supported podcast through the Voices of VR Patreon.

    Music: Fatality

    #1711: Mission Responsible 3: Discussion on AI Ethics with 6 Winners of Polys Ombudsperson of the Year

    2026/03/13 | 52 mins.
This is the panel discussion from Mission Responsible 3, featuring the winners of the Polys Ombudsperson of the Year award: Kent Bye (2020), Avi Bar-Zeev (2021), Brittan Heller (2022), Micaela Mantegna (2023), Ingrid Kopp (2024), and Nonny de la Pena (2025). The panel was introduced by Renard T. Jenkins. The big topic this year was AI, but there was lots to say about XR as well.

    Here are some links that I mentioned in the introduction that were referenced within the show:

    "Freedom of Expression in Next-Generation Computing" by Brittan Heller

    XR Guild's Principles

    US sanctioning individual ICC judges for decisions they don't like.

    The Polys 6th Annual Immersive Awards takes place next weekend on Sunday, March 22, 2026 at SVA Theatre in New York City.

    This is a listener-supported podcast through the Voices of VR Patreon.

    Music: Fatality

    #1710: When Integration Becomes Subordination: Big Tech Parallels in Carney’s Davos Speech & Untethering from the AI Big Brother

    2026/02/14 | 53 mins.
    Canada’s Prime Minister Mark Carney gave a rousing speech at the World Economic Forum on January 20, 2026 about the rupture of the rules-based order of the globalized economy, and he emphasized the need to build new coalitions to sustain the pressure coming from the United States' emerging authoritarianism. Carney said, “Great powers have begun using economic integration as weapons, tariffs as leverage, financial infrastructure as coercion, supply chains as vulnerabilities to be exploited. You cannot live within the lie of mutual benefit through integration, when integration becomes the source of your subordination.”

Just as globalized economic integrations are being weaponized by the United States, Big Tech's integrations woven throughout our lives will continue to become the source of our own subordination, especially as surveillance capitalism heads towards its logical conclusion of an all-pervasive AI Big Brother, perhaps eventually explicitly tied into authoritarian governments.

The AI Big Brother has already started within the context of private companies, but under the outdated third-party doctrine of the Fourth Amendment, any data given to a third party carries "no legitimate 'expectation of privacy'." From UNITED STATES v. MILLER (1976): "The Fourth Amendment does not prohibit the obtaining of information revealed to a third party and conveyed by him to Government authorities." So the US government can request almost any data shared with a third party without a warrant, and given Big Tech's cozy relationship with a democratically-backsliding US government, who knows what kinds of backroom deals are being made to automate data sharing.

We're already in an era where almost all data given to a third party is not considered private, and you can see some early indications of how this can go wrong in Taylor Lorenz's interview with 404 Media's Joe Cox about ICE's surveillance technologies. It seems likely that we are entering the very early phases of Orwell's worst nightmare: a 1984 surveillance state powered by Big Tech's AI.

In this op-ed podcast episode, I connect some dots between Carney’s Davos speech about hegemonic forces in the geopolitical sphere and the parallels with Big Tech's push towards "contextually-aware AI," which is just an always-on AI that amounts to surveillance capitalism on steroids. Carney's speech provides a lot of insight into how Canada is navigating this new reality where the rules-based order on the international stage seems to be dissolving. One of his deepest insights is to simply name the truth, and to describe precisely what is happening. He refers to a powerful story from Vaclav Havel's The Power of the Powerless, in which shopkeepers under communist rule eventually "took their [propaganda] signs down" after they were no longer willing to live within a lie.

    Carney says: "The system's power comes not from its truth, but from everyone's willingness to perform as if it were true, and its fragility comes from the same source. When even one person stops performing, when the greengrocer removes his sign, the illusion begins to crack. Friends, it is time for companies and countries to take their signs down."

Taking down metaphoric signs breaks the spell of the collective performative ritual that sustains the power of an authoritarian regime. Taking a sign down is also the embodiment of the first lesson of Timothy Snyder's On Tyranny: "Do Not Obey in Advance." This lesson is certainly easier said than done, and I've been surprised how pervasive and powerful the chilling effects to remain silent can be. I find myself self-censoring, going dark on social media, and generally not speaking the full truth as I see it. So this episode is a step towards naming things as I see them, while also drawing parallels between these broader political contexts and how they're collapsing into the technological contexts.

As a society, one sign we've been holding up is our collective willingness to mortgage our privacy by giving our data to Big Tech in exchange for free access to software and services. But as the line between Big Tech and authoritarian governments continues to blur, I expect to see more people start "taking down their signs" of tolerating surveillance capitalism by tapering down or cutting off their relationship completely.

I'm already seeing some signs of this resistance to Big Tech starting to happen: the resurgence of dumb phones to counter smartphone addiction, quitting social media to escape the algorithmic filter bubbles that curate our realities, and implementing digital detoxes to unplug from the Internet in favor of more embodied, immersive, and experiential entertainment. We're starving for authenticity as social media networks are flooded with AI slop because it makes the numbers go up, yet it is a profoundly dehumanizing experience that feels like the logical extreme of novelty-optimized AI dopamine machines leading us toward an Idiocracy-style dystopian future.

With the democratic backsliding in the US, the Trump Administration has been following the "seven basic tactics in the pursuit of power" detailed in The Authoritarian Playbook (2024): politicizing independent institutions, spreading disinformation, pursuing the unitary executive theory at the expense of checks and balances, quashing criticism and dissent, scapegoating vulnerable and marginalized communities, working to corrupt elections, and stoking violence with Operation Metro Surge.

I'm seeing the abandonment of due process, and I've lost all faith in the enforcement of the rule of law as the Department of Justice has been weaponized. This abandonment of the rules-based order has a profoundly destabilizing psychological impact, and other countries have also been reckoning with it. In response, Canadian Prime Minister Mark Carney has called for new coalitions of the middle powers, given that the United States has chosen to abandon the rules-based order in favor of coercive negotiating techniques. The US is leveraging its asymmetry of power to turn every relationship into a transaction that can be won or lost. Canada is unwilling to bend the knee to these authoritarian ways, and is issuing a call to arms for all middle powers to unite in order to resist these hegemonic forces. There is real strength in collective resistance, and so Canada is taking a hybrid approach to coalition building: primarily collaborating with countries that have shared values, while also recognizing the need for more pragmatic, ad-hoc, "variable geometry" coalitions based upon mutual benefit or interest.

    Just as countries are thinking about how to maintain their sovereignty, we are all entering into a new era that has moved beyond a rules-based order. So people around the world are also thinking about how they can maintain their own sovereignty in the context of Big Tech's push towards an all-pervasive, AI surveillance machine.

    One recent example of Big Tech's surveillance aspirations comes from an internal Meta memo shared with the New York Times arguing that the political chaos in the world right now makes it the perfect time to push out controversial tech that would normally get a lot of blowback. They're considering launching facial recognition features for their RayBan-Meta AI glasses as they callously characterize this moment as a "dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” This type of realpolitik moral reasoning follows the logic of surveillance capitalism, which completely ignores the broader potential cultural and legal impact of their technologies in favor of their short-term gain. I've previously written about how Meta's dream of contextually-aware AI is a dystopian privacy nightmare within the Proceedings of Stanford's Existing Law and Extended Reality Symposium.

The always-on, persistent sensing from face-mounted cameras embedded into glasses is the next frontier for Meta, but this persistent capture from wearable technology across all contextual domains will start to change the legal definition of our "reasonable expectation of privacy." This is because part of the legal test laid out in Harlan's concurring opinion in KATZ v. UNITED STATES (1967) is what "society is prepared to recognize as reasonable." In other words, whatever the culture accepts as the boundary between public and private contexts becomes part of the legal test for what the government considers protected by the Fourth Amendment. So always-on AI surveillance from wearable face cameras will inevitably change these legal definitions and weaken everyone's Fourth Amendment protections.

Even if all of the raw data remained on these devices, inferences made from that data would not be protected once shared with a third party. Imagine that a noisy raw camera feed from Meta's AI glasses is processed and produces incorrect inferences from computer vision algorithms or hallucinations from a large language model; these incorrect inferences could end up in the hands of a government and be used as evidence against you in a court of law. The film Coded Bias does a great job of elaborating on how marginalized communities have been harmed by biased algorithms integrated into automated decision-making in the context of policing, housing, employment, etc.

Carney's roadmap has many lessons that we can apply to our own encounters with this new reality. He named the truth of this situation and is taking Canada's metaphoric sign down, signaling that they are no longer willing to live within a lie. Canada is untethering itself from its relationship with the United States as the US takes an authoritarian turn into dem

    #1709: Ian Hamilton on Getting Fired from UploadVR & Concerns on AI Authorship in News

    2026/02/05 | 1h 35 mins.
On Wednesday, January 28, 2026, Ian Hamilton announced on Bluesky that "I've been fired from UploadVR." He was the editor-in-chief at UploadVR, and on January 30th he wrote a Substack post titled "Ian is Typing" detailing how his co-workers were pushing to run a test of a "clearly disclosed AI author for UploadVR," and that he had three specific concerns: that the test be brief, that readers be able to turn off and hide all AI-authored posts, and that human freelancers have the right of first refusal. Hamilton claims to have tried to raise these concerns in Slack, but the experiment was going to proceed regardless. He writes, "Unable to shift the direction of my colleagues and out of options to affect what was coming, I stepped out of Slack and sent a final email to them on Wednesday morning with a number of my contacts in the industry copied, raising some of these concerns. Not long after, I was called by my boss and fired."

I spoke with Hamilton last Friday, after his Substack post, to get more context on what led to his departure. Hamilton claims that UploadVR Editor & Developer David Heaney and UploadVR Operations Manager Kyle Riesenbeck were behind the push to test this clearly disclosed AI author on UploadVR, and that ultimately the proposed test was a business decision made by Riesenbeck. It was a decision that Hamilton disagreed with, and he cites it as the primary catalyst for the behavior that ultimately led to his firing. (UPDATE Feb 5, 2026: It is worth noting that UploadVR has yet to run this AI bot author test, but the proposed test was the catalyst for Hamilton’s behavior.)

The specific reasons and circumstances around Hamilton's firing are publicly disputed by Heaney, who reacted on Twitter after Hamilton's Substack post went live by saying, "It is indeed only one side of the story. And an incomplete telling of it, with key omissions and wording choices that serve to paint a misleading picture." In another post Heaney says, "I can't get into it more at this point for obvious reasons, but don't believe everything you read, especially a single side of a complex story." During our interview I asked Hamilton for his reaction to Heaney's claims that he's being misleading, and he did provide more context on the events that led up to his firing. Ultimately, it does sound like the proposed AI bot author test was the primary catalyst for Hamilton, and this disagreement may have led to other behaviors and reactions that could also reasonably be cited as grounds for his firing. UploadVR may have differing opinions about what happened, but no one from UploadVR has made public comments beyond what Heaney has said on Twitter. I have extended invitations to both Riesenbeck and Heaney to come onto the podcast for a broader discussion about AI, but nothing had been confirmed as of the time of publication.

    My Personal Take on AI: Technically, Philosophically, Legally, and Culturally

Public discourse around AI has split into a binary of Pro-AI vs. Anti-AI, and while my personal views cannot be easily collapsed into one side or the other, I'd usually take the Anti-AI side of a debate if given the opportunity. I do think some form of AI is here to stay and will be around for a long time, but right now there is a lot of hype and deluded thinking on the topic. I see AI as a technology that consolidates wealth and power, and so a primary question worth asking is “Whose power and wealth is being consolidated?” Karen Hao's Empire of AI elaborates on how past patterns of colonialism are playing out again within the context of data and the field of AI, as well as how scaling with more compute power has been the primary mode of innovation in AI; Gary Marcus has been pushing against the "Scale is All You Need" theory for many years now.

Technically speaking, I'm more of a skeptic in the short term around LLMs, along the lines of the Stochastic Parrots critique elaborated upon by Emily M. Bender and Alex Hanna in The AI Con, but also Yann LeCun's call for more sensory grounding and Gary Marcus' calls for more neurosymbolic cognitive architectures. AI has always been a marketing term, as elaborated in Dr. Jonnie Penn’s Ph.D. thesis on "Inventing Intelligence: On the History of Complex Information Processing and Artificial Intelligence in the United States in the Mid-Twentieth Century." My perspective on AI has been informed by 122 unpublished interviews with AI researchers, many of whom also note how empirical results often outpace theoretical understanding (i.e. there are often benchmark improvements without full knowledge of the theoretical foundations behind them, resulting in plateaus rather than monotonic progress). I've also spoken to over 100 XR artists, storytellers, and engineers about AI on the Voices of VR podcast over the past decade. When the context is bounded, and the data are gathered while being in right relationship, there can be some real utility. But there are also many gaps and ways that LLMs cause harm to marginalized communities. See the film Coded Bias for more details on that front.

    Philosophically speaking, Process Philosophy has had a big influence on me, and so check out my conversation with Whitehead scholar Matt Segall on AI. Timnit Gebru and Émile P. Torres' paper on the TESCREAL bundle has also been a key influence that deconstructs the influence of philosophies like Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism on AI research. I don't think AI is conscious, but I lean towards Whitehead's panexperientialism, which sees experience as going all the way down. This perspective also helps to differentiate humans from machines by looking at things like emotions, meaning, value, intention, context, relationships, all of which can easily get collapsed if only looking through the lens of “intelligence.” I'm curious about Data Science as Neoplatonism ideas, and Michael Levin’s work on ingressing minds (influenced by Platonic forms and Whitehead's eternal objects) and his general calls for SUTI: the Search for Unconventional Terrestrial Intelligence. I also love Timothy E. Eastman’s Logoi Framework as elaborated in his Untying the Gordian Knot: Process, Reality, Context book. He highlights the triadic nature of reality being input-output-context, and the logic of actualizations being Boolean logic and the logic of potential being non-Boolean logic, which is something that Hans Primas elaborates on in Knowledge and Time. So AI needs to account for the pluralism of non-Boolean realities, but often collapses them into a singular formal system that collapses situated knowledges. 
Also see James Bradley’s “Beyond Hermeneutics: Peirce’s Semiology as a Trinitarian Metaphysics of Communication,” which elaborates on Charles Sanders Peirce’s semiotics as a triadic system of sign, object, and interpretant; LLMs take a nominalist, dyadic approach that collapses the deeper meaning or interpretation (see computational linguist Bender’s elaboration of this argument in The AI Con). Also see Michèle Friend’s Pluralism in Mathematics: A New Position in Philosophy of Mathematics, which applies Gödel's incompleteness to the foundations of mathematics itself, pointing out the limits of Boolean logic and the need for an overall paraconsistent logic. AI researcher Ben Goertzel wrote a paper on "Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation." Here's a talk I gave with some of my preliminary thoughts on AI. I also have a lot more thoughts and resources in my write-up from when I argued against AI in a Socratic debate at AWE 2025. Also check out this recent philosophical talk that digs into some of the philosophical foundations of my experiential design framework and Whitehead's panexperientialism.

Legally speaking, I generally advocate for a relational approach as well as open source, decentralized approaches, but I also see the need for some legal checks and balances around privacy. I elaborate on these in a paper titled "Privacy Pitfalls of Contextually-Aware AI: Sensemaking Frameworks for Context and XR Data Qualities" that was written for Stanford Cyber Policy Center's "Existing Law and Extended Reality" Symposium. But there is no sign of any new comprehensive federal privacy law in the US, which is where these major Big Tech companies are located. So the privacy implications of contextually-aware AI remain extremely fraught, especially with the trend of democratic backsliding in the US and beyond.

Culturally speaking, I find the forced integration of AI into many layers of UX/UI to be largely non-consensual; it leaves me feeling that AI is being shoved down my throat when I didn't ask for it, and I usually avoid using it whenever I can. I don't want AI to write for me, because writing is the process of thinking for me, and I'd rather think for myself (see the “thinking as craft” argument from Hanna in The AI Con). I find the experience of AI slop videos, photos, and text to be profoundly dehumanizing, and it makes me want to retreat from any social media space where AI slop is flooding the feeds. I hate having to question the provenance and legitimacy of everything I see and hear, and the AI-driven misinformation campaigns are a blight on democracy. I really resonate with the view that AI is the aesthetics of fascism, considering the extent to which authoritarian leaders are using AI slop to push their democratic-backsliding agendas.

So my perspectives on AI don't fit neatly into a single category, but I do resonate with some of the Anti-AI, Neo-Luddite sentiment. I'd point to Emily M. Bender and Alex Hanna’s The AI Con, Karen Hao’s Empire of AI, Shoshana Zuboff’s The Age of Surveillance Capitalism,...


About Voices of VR

Designing for Virtual Reality. Oral history podcast featuring the pioneering artists, storytellers, and technologists driving the resurgence of virtual & augmented reality. Learn about the patterns of immersive storytelling, experiential design, ethical frameworks, & the ultimate potential of XR.