
Oracle University Podcast

Oracle Corporation

166 episodes

  • Encore: Cloud Data Centers - Core Concepts Part 3

    2026/05/12 | 15 mins.
    Have you ever considered how a single server can support countless applications and workloads at once?
    In this episode, hosts Lois Houston and Nikita Abraham explore the sophisticated technologies that make this possible in modern cloud data centers.
    They discuss the roles of hypervisors, virtual machines, and containers, explaining how these innovations enable efficient resource sharing, robust security, and greater flexibility for organizations.
     
    Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Radhika Banka, and the OU Studio Team for helping us create this episode.
     
    ----------------------------------------------------
     
    Episode Transcript:
     
    00:00
    Hi there! We're hitting rewind for the next few weeks and bringing back some of our most popular episodes. So, sit back and enjoy these highlights from our archive.
    00:12
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:38
    Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.
    Nikita: Hi everyone! For the last two weeks, we've been talking about different aspects of cloud data centers. In this episode, Orlando Gentil, Principal OCI Instructor at Oracle University, joins us once again to discuss how virtualization, through hypervisors, virtual machines, and containers, has transformed data centers.
    01:11
    Lois: That's right, Niki. We'll begin with a quick look at the history of virtualization and why it became so widely adopted. Orlando, what can you tell us about that? 
    Orlando: To truly grasp the power of virtualization, it's helpful to understand its journey from its humble beginnings with mainframes to its pivotal role in today's cloud computing landscape. It might surprise you, but virtualization isn't a new concept. Its roots go back to the 1960s with mainframes.
    In those early days, the primary goal was to isolate workloads on a single powerful mainframe, allowing different applications to run without interfering with each other. As we moved into the 1990s, the challenge shifted to underutilized physical servers.
    Organizations often had numerous dedicated servers, each running a single application, leading to significant waste of computing resources. This led to the emergence of virtualization as we know it today, primarily from the 1990s to the 2000s.
    The core idea here was to run multiple isolated operating systems on a single physical server. This innovation dramatically improved the resource utilization and laid the technical foundation for cloud computing, enabling the scalable and flexible environments we rely on today.
    02:39
    Nikita: Interesting. So, from an economic standpoint, what pushed traditional data centers to change and opened the door to virtualization?
    Orlando: In the past, running applications often meant running them on dedicated physical servers. This led to a few significant challenges.
    First, more hardware purchases. Every new application, every new project often required its own dedicated server. This meant constantly buying new physical hardware, which quickly escalated capital expenditure.
    Secondly, and hand-in-hand with more servers came higher power and cooling costs. Each physical server consumed power and generated heat, necessitating significant investment in electricity and cooling infrastructure. The more servers, the higher these operational expenses became.
    And finally, a major problem was unused capacity. Despite investing heavily in these physical servers, it was common for them to run well below their full capacity. Applications typically didn't need 100% of a server's resources all the time.
    This meant we were wasting valuable compute power, memory, and storage, diminishing the return on investment from those expensive hardware purchases. These economic pressures became a powerful incentive to find more efficient ways to utilize data center resources, setting the stage for technologies like virtualization.
    04:18
    Lois: I guess we can assume virtualization emerged as a financial game-changer. So, what kind of economic efficiencies did virtualization bring to the table?
    Orlando: From a CapEx or capital expenditure perspective, companies spent less on servers and data center expansion. From an OpEx or operational expenditure perspective, fewer machines meant lower electricity, cooling, and maintenance costs.
    It also sped up provisioning. Spinning up a new VM took minutes, not days or weeks. That improved agility and reduced the operational workload on IT teams. It also created a more scalable, cost-efficient foundation, which made virtualization not just a technical improvement, but a financial turning point for data centers.
    This economic efficiency is exactly what cloud providers like Oracle Cloud Infrastructure are built on, using virtualization to deliver scalable, pay-as-you-go infrastructure. 
    05:22
    Nikita: Ok, Orlando. Let's get into the core components of virtualization. To start, what exactly is a hypervisor?
    Orlando: A hypervisor is a piece of software, firmware, or hardware that creates and runs virtual machines, also known as VMs.
    Its core function is to allow multiple virtual machines to run concurrently on a single physical host server. It acts as a virtualization layer, abstracting physical hardware resources like CPU, memory, and storage, and allocating them to each virtual machine as needed, ensuring they can operate independently and securely.
    06:02
    Lois: And are there types of hypervisors?
    Orlando: There are two primary types of hypervisors. Type 1 hypervisors, often called bare metal hypervisors, run directly on the host server's hardware.
    This means they interact directly with the physical resources offering high performance and security. Examples include VMware ESXi, Oracle VM Server, and KVM on Linux. They are commonly used in enterprise data centers and cloud environments.
    In contrast, type 2 hypervisors, also known as hosted hypervisors, run on top of an existing operating system like Windows or macOS. They act as an application within that operating system. Popular examples include VirtualBox, VMware Workstation, and Parallels. These are typically used for personal computing or development purposes, where you might run multiple operating systems on your laptop or desktop.
    07:08
    Nikita: We've spoken about the foundation provided by hypervisors. So, can we now talk about the virtual entities they manage: virtual machines? What exactly is a virtual machine and what are its fundamental characteristics?
    Orlando: A virtual machine is essentially a software-based virtual computer system that runs on a physical host computer. The magic happens with the hypervisor. The hypervisor's job is to create and manage these virtual environments, abstracting the physical hardware so that multiple VMs can share the same underlying resources without interfering with each other.
    Each VM operates like a completely independent computer with its own operating system and applications. 
    07:53
    Lois: What are the benefits of this?
    Orlando: Each VM is isolated from the others. If one VM crashes or encounters an issue, it doesn't affect the other VMs running on the same physical host. This greatly enhances stability and security.
    A powerful feature is the ability to run different operating systems side-by-side on the very same physical host. You could have a Windows VM, a Linux VM, and even other specialized OS, all operating simultaneously.
    Consolidating workloads directly addresses the unused capacity problem. Instead of one application per physical server, you can now run multiple workloads, each in its own VM, on a single powerful physical server. This dramatically improves hardware utilization, reducing the need for constant new hardware purchases and lowering power and cooling costs.
    And by consolidating workloads, virtualization makes it possible for cloud providers to dynamically create and manage vast pools of computing resources. This allows users to quickly provision and scale virtual servers on demand, tapping into these shared pools of CPU, memory, and storage as needed, rather than being tied to a single physical machine.
    09:25
    Do you want to boost your data management skills for free? The Oracle Data Platform Foundations Associate Learning Path covers everything from Autonomous Database to modern data architectures like lakehouse and mesh—and prepares you for the certification. Get started today by visiting mylearn.oracle.com.
    09:50
    Nikita: Welcome back! Orlando, let's move on to containers. Many see them as a lighter, more agile way to build and run applications. What's your take?
    Orlando: A container packages an application and all its dependencies, like libraries and other binaries, into a single, lightweight executable unit. Unlike a VM, a container shares the host operating system's kernel, running on top of a container runtime process.
    This architectural difference provides several key advantages. Containers are incredibly portable. They can be taken virtually anywhere, from a developer's laptop to a cloud environment, and run consistently, eliminating "it works on my machine" issues. Because containers share the host OS kernel, they don't need to bundle a full operating system themselves. This results in significantly smaller footprints and less administration overhead compared to VMs.
    They are faster to start. Without the need to boot a full operating system, containers can start up in seconds, or even milliseconds, providing rapid deployment and scaling capabilities.
    11:08
    Nikita: Ok. Throughout our conversation, you've spoken about the various advantages of virtualization but let's consolidate them now. 
    Orlando: From a security standpoint, virtualization offers several crucial benefits. Each VM operates in its own isolated sandbox. This means if one VM experiences a security breach, the impact is generally contained to that single virtual machine, significantly limiting the spread of potential threats across your infrastructure. Containers also provide some isolation.
    Virtualization allows for rapid recovery. This is invaluable for disaster recovery or undoing changes after a security incident. You can implement separate firewalls, access rules, and network configuration for each VM. This granular control reduces the overall exposure and attack surface across your virtualized environments, making it harder for malicious actors to move laterally.
    Beyond security, virtualization also brings significant advantages in terms of operational and agility benefits for IT management. Virtualization dramatically improves operational efficiency and agility. Things are faster. With virtualization, you can provision new servers or containers in minutes rather than days or weeks. This speed allows for quicker deployment of applications and services.
    It becomes much simpler to deploy consistent environments using templates and preconfigured VM images or containers. This reduces errors and ensures uniformity across your infrastructure. Virtualization also makes your infrastructure far more scalable. You can reshape VMs and containers to meet changing demands, ensuring your resources align precisely with your needs.
    These operational benefits directly contribute to the power of cloud computing, especially when we consider virtualization's role in enabling cloud and scalability. Virtualization is the very backbone of modern cloud computing, fundamentally enabling its scalability. It allows multiple virtual machines to run on a single physical server, maximizing hardware utilization, which is essential for cloud providers.
    This capability is the core of infrastructure-as-a-service offerings, where users can provision virtualized compute resources on demand. Virtualization makes services globally scalable. Resources can be easily deployed and managed across different geographic regions to meet worldwide demand. Finally, it provides elasticity, meaning resources can be automatically scaled up or down in response to fluctuating workloads, ensuring optimal performance and cost efficiency.
    14:18
    Lois: That's amazing. Thank you, Orlando, for joining us once again. 
    Nikita: Yeah, and remember, if you want to learn more about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. 
    Lois: Well, that's all we have for today. Until next time, this is Lois Houston…
    Nikita: And Nikita Abraham, signing off!
    14:37
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Encore: Cloud Data Centers - Core Concepts Part 2

    2026/05/05 | 14 mins.
    Have you ever wondered where all your digital memories, work projects, or favorite photos actually live in the cloud?
    In this episode, Lois Houston and Nikita Abraham discuss cloud storage.
    They explore how data is carefully organized, the different ways it can be stored—whether right next to the server or across the network—and what keeps it safe and easy to find.
     
    Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Radhika Banka, and the OU Studio Team for helping us create this episode.


    ------------------------------------------------------
     
    Episode Transcript: 
     
    00:00
    Hi there! We're hitting rewind for the next few weeks and bringing back some of our most popular episodes. So, sit back and enjoy these highlights from our archive.
    00:12
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:38
    Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs.
    Lois: Hey there! Last week, we spoke about the differences between traditional and cloud data centers, and covered components like CPU, RAM, and operating systems. If you haven't listened to the episode yet, I'd suggest going back and listening to it before you dive into this one. 
    Nikita: Joining us again is Orlando Gentil, Principal OCI Instructor at Oracle University, and we're going to ask him about another fundamental concept: storage.
    01:16
    Lois: That's right, Niki. Hi Orlando! Thanks for being with us again today. You introduced cloud data centers last week, but tell us, how is data stored and accessed in these centers? 
    Orlando: At a fundamental level, storage is where your data resides persistently. Data stored on a storage device is accessed by the CPU and, for specialized tasks, the GPU. The RAM acts as a high-speed intermediary, temporarily holding data that the CPU and the GPU are actively working on. This cyclical flow ensures that applications can effectively retrieve, process, and store information, forming the backbone for our computing operations in the data center.
    02:05
    Nikita: But how is data organized and controlled on disks?
    Orlando: To effectively store and manage data on physical disks, a structured approach is required, which is defined by file systems and permissions. The process begins with disks. These are the raw physical storage devices.
    Before data can be written to them, disks are typically divided into partitions. A partition is a logical division of a physical disk that acts as if it were a separate physical disk. This allows you to organize your storage space and even install multiple operating systems on a single drive.
    Once partitions are created, they are formatted with a file system.
    02:53
    Nikita: Ok, sorry but I have to stop you there. Can you explain what a file system is? And how is data organized using a file system? 
    Orlando: The file system is the method and data structure that an operating system uses to organize and manage files on storage devices. It dictates how data is named, stored, retrieved, and managed on the disk, essentially providing the roadmap for the data. Common file systems include NTFS for Windows and ext4 or XFS for Linux.
    Within this file system, data is organized hierarchically into directories, also known as folders. These containers help to logically group related files, which are the individual units of data, whether they are documents, images, videos, or applications. Finally, overseeing this entire organization are permissions. 
    03:55
    Lois: And what are permissions?
    Orlando: Permissions define who can access specific files and directories and what actions they are allowed to perform: for example, read, write, or execute.
    This access control, often managed by user, group, and other permissions, is fundamental for security, data integrity, and multi-user environments within a data center. 
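    To make this concrete, here is a minimal Python sketch that decodes the user, group, and other read-write-execute bits Orlando describes; the file path is a hypothetical placeholder:
    ```python
    import os
    import stat

    # Inspect the permission bits of a file; the path is a placeholder.
    info = os.stat("/data/reports/summary.txt")
    mode = info.st_mode

    # Decode the user/group/other read-write-execute bits.
    print("owner can read:    ", bool(mode & stat.S_IRUSR))
    print("owner can write:   ", bool(mode & stat.S_IWUSR))
    print("group can read:    ", bool(mode & stat.S_IRGRP))
    print("others can execute:", bool(mode & stat.S_IXOTH))

    # Render the familiar "rwxr-xr--" style permission string.
    print(stat.filemode(mode))
    ```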
    04:21
    Lois: Ok, now that we have a good understanding of how data is organized logically, can we talk about how data is stored locally within a server?  
    Orlando: Local storage refers to storage devices directly attached to a server or computer. The three common types are hard disk drives, solid state drives, and NVMe drives. Hard disk drives are traditional storage devices using spinning platters to store data. They offer large capacity at a lower cost per gigabyte, making them suitable for bulk data storage when high performance isn't the top priority.
    Unlike hard disks, solid state drives use flash memory to store data, similar to USB drives but on a larger scale. They provide significantly faster read and write speeds, better durability, and lower power consumption than hard disks, making them ideal for operating systems, applications, and frequently accessed data.
    Non-Volatile Memory Express, or NVMe, is a communication interface specifically designed for solid state drives that connects directly to the PCI Express bus. NVMe offers even faster performance than traditional SATA-based solid state drives by reducing latency and increasing bandwidth, making it the top choice for demanding workloads that require extreme speed, such as high-performance databases and AI applications. Each type serves different performance and cost requirements within a data center. While local storage is essential for immediate access, data centers also rely heavily on storage that isn't directly attached to a single server. 
    06:11
    Lois: I'm guessing you're hinting at remote storage. Can you tell us more about that, Orlando?
    Orlando: Remote storage refers to data storage solutions that are not physically connected to the server or client accessing them. Instead, they are accessed over the network. This setup allows multiple clients or servers to share access to the same storage resources, centralizing data management and improving data availability. This architecture is fundamental to cloud computing, enabling vast pools of shared storage that can be dynamically provisioned to various users and applications.
    06:48
    Lois: Let's talk about the common forms of remote storage. Can you run us through them?
    Orlando: One of the most common and accessible forms of remote storage is Network Attached Storage or NAS. NAS is a dedicated file storage device connected to a network that allows multiple users and client devices to retrieve data from a centralized disk capacity. It's essentially a server dedicated to serving files.
    A client connects to the NAS over the network. And the NAS then provides access to files and folders. NAS devices are ideal for scenarios requiring shared file access, such as document collaboration, centralized backups, or serving media files, making them very popular in both home and enterprise environments. While NAS provides file-level access over a network, some applications, especially those requiring high performance and direct block level access to storage, need a different approach. 
    07:50
    Nikita: And what might this approach be? 
    Orlando: Internet Small Computer System Interface, or iSCSI, which provides block-level storage over an IP network.
    iSCSI is a standard that allows the SCSI protocol, traditionally used for local storage, to be sent over IP networks. Essentially, it enables servers to access storage devices as if they were directly attached, even though they are located remotely on the network. 
    This means it can leverage standard ethernet infrastructure, making it a cost-effective solution for creating high performance, centralized storage accessible over an existing network. It's particularly useful for server virtualization and database environments where block-level access is preferred. While iSCSI provides block-level access over standard IP, for environments demanding even higher performance, lower latency, and greater dedicated throughput, a specialized network is often deployed. 
    08:59
    Nikita: And what's this specialized network called?
    Orlando: Storage Area Network, or SAN. A Storage Area Network is a high-speed network specifically designed to provide block-level access to consolidated shared storage. Unlike NAS, which provides file-level access, a SAN presents storage volumes to servers as if they were local disks, allowing for very high performance for applications like databases and virtualized environments. While iSCSI SANs use Ethernet, many high-performance SANs utilize Fibre Channel for even faster and more reliable data transfer, making them a cornerstone of enterprise data centers where performance and availability are paramount.
    09:56
    Do you want to master Oracle Database on AWS? Check out the Oracle Database@AWS course, where you'll learn provisioning, migration, security, and high availability. Validate your new skills with a certification and stand out in the multicloud space. Visit mylearn.oracle.com to learn more! 
    10:23
    Nikita: Welcome back! Orlando, are there any other popular storage paradigms we should know about?
    Orlando: Beyond file level and block level storage, cloud environments have popularized another flexible and highly scalable storage paradigm, object storage. 
    Object storage is a modern approach to storing data, treating each piece of data as a distinct, self-contained unit called an object. Unlike file systems that organize data in a hierarchy or block storage that breaks data into fixed size blocks, object storage manages data as flat, unstructured objects. Each object is stored with unique identifiers and rich metadata, making it highly scalable and flexible for massive amounts of data.
    This service handles the complexity of storage, providing access to vast repositories of data. Object storage is ideal for use cases like cloud-native applications, big data analytics, content distribution, and large-scale backups thanks to its immense scalability, durability, and cost effectiveness. While object storage is excellent for frequently accessed data in rapidly growing data sets, sometimes data needs to be retained for very long periods but is accessed infrequently. For these scenarios, a specialized low-cost storage tier, known as archive storage, comes into play.
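    As a toy illustration of the model Orlando describes (each object is data plus a unique identifier plus metadata, in a flat namespace), here is a small in-memory object store in Python; it is a teaching sketch, not a real storage service:
    ```python
    import uuid

    class ToyObjectStore:
        """Flat namespace: no directory hierarchy, just objects keyed by ID."""

        def __init__(self):
            self._objects = {}

        def put(self, data: bytes, **metadata) -> str:
            object_id = str(uuid.uuid4())  # unique identifier for the object
            self._objects[object_id] = {"data": data, "metadata": metadata}
            return object_id

        def get(self, object_id: str) -> bytes:
            return self._objects[object_id]["data"]

    store = ToyObjectStore()
    oid = store.put(b"...video bytes...", content_type="video/mp4", owner="lois")
    print(oid, len(store.get(oid)))
    ```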
    11:59
    Lois: And what's that exactly?
    Orlando: Archive storage is specifically designed for long-term backup and retention of data that you rarely, if ever, access. This includes critical information, like old records, compliance data that needs to be kept for regulatory reasons, or disaster recovery backups. The key characteristic of archive storage is an extremely low cost per gigabyte, achieved by optimizing for infrequent access rather than speed. Historically, tape backup systems were the common solution for archiving, where data from a data center is moved to tape. In modern cloud environments, this has evolved into cloud backup solutions. Cloud-based archiving leverages cost-effective, durable cloud storage tiers that are purpose-built for long-term retention, providing a scalable and often more reliable alternative to physical tapes.
    13:01
    Lois: Thank you, Orlando, for taking the time to talk to us about the hardware and software layers of cloud data centers. This information will surely help our listeners to make informed decisions about cloud infrastructure to meet their workload needs in terms of performance, scalability, cost, and management. 
    Nikita: That's right, Lois. And if you want to learn more about what we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. 
    Lois: In our next episode, we'll take a look at more of the fundamental concepts within modern cloud environments, such as Hypervisors, Virtualization, and more. I can't wait to learn more about it. Until then, this is Lois Houston…
    Nikita: And Nikita Abraham, signing off!
    13:44
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Encore: Cloud Data Centers - Core Concepts Part 1

    2026/04/28 | 16 mins.
    Curious about what really goes on inside a cloud data center?
     
    In this episode, Lois Houston and Nikita Abraham dive into how cloud data centers are transforming the way organizations manage technology.
    They explore the differences between traditional and cloud data centers, the roles of CPUs, GPUs, and RAM, and why operating systems and remote access matter more than ever.
     
    Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Radhika Banka, and the OU Studio Team for helping us create this episode.
     
    --------------------------------------------------------
     
    Episode Transcript:
     
    00:00
    Hi there! We're hitting rewind for the next few weeks and bringing back some of our most popular episodes. So, sit back and enjoy these highlights from our archive.
    00:12
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:37
    Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.  
    Nikita: Hi everyone! Today, we're covering the fundamentals you need to be successful in a cloud environment. If you're new to cloud, coming from a SaaS environment, or planning to move from on-premises to the cloud, you won't want to miss this. With us today is Orlando Gentil, Principal OCI Instructor at Oracle University. Hi Orlando! Thanks for joining us.  
    01:13
    Lois: So Orlando, we know that Oracle has been a pioneer of cloud technologies and has been pivotal in shaping modern cloud data centers, which are different from traditional data centers. For our listeners who might be new to this, could you tell us what a traditional data center is? 
    Orlando: A traditional data center is a physical facility that houses an organization's mission critical IT infrastructure, including servers, storage systems, and networking equipment, all managed on site.  
    01:44
    Nikita: So why would anyone want to use a cloud data center? 
    Orlando: The traditional model requires significant upfront investment in physical hardware, which you are then responsible for maintaining along with the underlying infrastructure like physical security, HVAC, backup power, and communication links. 
    In contrast, cloud data centers offer a more agile approach. You essentially rent the infrastructure you need, paying only for what you use. In the traditional data center, scaling resources up and down can be a slow and complex process. 
    In cloud data centers, scaling is automated and elastic, allowing resources to adjust dynamically based on demand. This shift allows businesses to move their focus from the constant upkeep of infrastructure to innovation and growth. 
    The move represents a shift from maintenance to momentum, enabling optimized costs and efficient scaling. This is a fundamental shift in how IT infrastructure is managed and consumed, and precisely what we mean by moving to the cloud. 
    02:52
    Lois: So, when we talk about moving to the cloud, what does it really mean for businesses today? 
    Orlando: Moving to the cloud represents the strategic transition from managing your own on-premises hardware and software to leveraging internet-based computing services provided by a third party. 
    This involves migrating your applications, data, and IT operations to a cloud environment. This transition typically aims to reduce operational overhead, increase flexibility, and enhance scalability, allowing organizations to focus more on their core business functions.   
    03:29
    Nikita: Orlando, what's the "brain" behind all this technology? 
    Orlando: A CPU, or Central Processing Unit, is the primary component that performs most of the processing inside a computer or server. It performs calculations, handling the complex mathematics and logic that drive all applications and software. 
    It processes instructions, running tasks and operations in the background that are essential for any application. A CPU is critical for performance, as it directly impacts the overall speed and efficiency of the data center. 
    It also manages system activities, coordinating user input, various application tasks, and the flow of data throughout the system. Ultimately, the CPU drives data center workloads from basic server operations to powering cutting edge AI applications. 
    04:23
    Lois: To better understand how a CPU achieves these functions and processes information so efficiently, I think it's important for us to grasp its fundamental architecture. Can you briefly explain the fundamental architecture of a CPU, Orlando? 
    Orlando: When discussing CPUs, you will often hear about sockets, cores, and threads. A socket refers to the physical connection on the motherboard where a CPU chip is installed. 
    A single server motherboard can have one or more sockets, each holding a CPU. A core is an independent processing unit within a CPU. Modern CPUs often have multiple cores, enabling them to handle several instructions simultaneously, thus increasing processing power. 
    Think of it as having multiple mini CPUs on a single chip. Threads are virtual components that allow a single CPU core to handle multiple sequences of instructions, or threads, concurrently. This technology, often called hyperthreading, makes a single core appear as two logical processors to the operating system, further enhancing efficiency. 
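    As a quick illustration, this Python sketch reports the logical processor count the operating system sees and, on Linux only, approximates the physical core count from /proc/cpuinfo:
    ```python
    import os

    # Logical processors visible to the OS; with hyperthreading (SMT) enabled,
    # this is typically twice the number of physical cores.
    print("logical processors:", os.cpu_count())

    # On Linux, distinct (physical id, core id) pairs in /proc/cpuinfo
    # approximate the number of physical cores.
    try:
        cores = set()
        phys = None
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("physical id"):
                    phys = line.split(":")[1].strip()
                elif line.startswith("core id"):
                    cores.add((phys, line.split(":")[1].strip()))
        print("physical cores:", len(cores))
    except FileNotFoundError:
        print("physical core count: not available on this platform")
    ```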
    05:39
    Lois: Ok. And how do CPUs process commands? 
    Orlando: Beyond these internal components, CPUs are also designed based on different instruction set architectures which dictate how they process commands.  
    CPU architectures are primarily categorized into two designs: Complex Instruction Set Computer, or CISC, and Reduced Instruction Set Computer, or RISC. CISC processors are designed to execute complex instructions in a single step, which can reduce the number of instructions needed for a task but often leads to higher power consumption. These are commonly found in traditional Intel and AMD CPUs. 
    In contrast, RISC processors use a simpler, more streamlined set of instructions. While this might require more steps for a complex task, each step is faster and more energy efficient. This architecture is prevalent in ARM-based CPUs. 
    06:47
    Are you looking to boost your expertise in enterprise AI? Check out the Oracle AI Agent Studio for Fusion Applications Developers course and professional certification, now available through Oracle University. This course helps you build, customize, and deploy AI Agents for Fusion HCM, SCM, and CX, with hands-on labs and real-world case studies. Ready to set yourself apart with in-demand skills and a professional credential? Learn more and get started today! Visit mylearn.oracle.com for more details.  
     
    07:22
    Nikita: Welcome back! We were discussing CISC and RISC processors. So Orlando, where are they typically deployed? Are there any specific computing environments and use cases where they excel? 
    Orlando: On the CISC side, you will find them powering enterprise virtualization and server workloads, such as bare metal hypervisors and large databases, where complex instructions can be efficiently processed. Also high-performance computing, which includes demanding simulations, intricate analysis, and many traditional machine learning systems. 
    And enterprise software suites and business applications, like ERP, CRM, and other complex enterprise systems, that benefit from fewer steps per instruction. Conversely, RISC architectures are often preferred for cloud-native workloads such as Kubernetes clusters, where simpler, faster instructions and energy efficiency are paramount for distributed computing. 
    Also mobile device management and edge computing, including cell phones and IoT devices, where power efficiency and compact design are critical. And cost-optimized cloud hosting supporting distributed workloads, where the cumulative energy savings and simpler design lead to more economical operations. 
    The choice between CISC and RISC depends heavily on the specific workload and performance requirements. While CPUs are versatile generalists, handling a broad range of tasks, modern data centers also heavily rely on another crucial processing unit for specialized workloads. 
    09:07
    Lois: We've spoken a lot about CPUs, but our conversation would be incomplete without understanding what a Graphics Processing Unit is and why it's important. What can you tell us about GPUs, Orlando? 
    Orlando: A GPU or Graphics Processing Unit is distinct from a CPU. While the CPU is a generalist excelling at sequential processing and managing a wide variety of tasks, the GPU is a specialist. 
    It is designed specifically for parallel compute heavy tasks. This means it can perform many calculations simultaneously, making it incredibly efficient for workloads like rendering graphics, scientific simulations, and especially in areas like machine learning and artificial intelligence, where massive parallel computation is required. 
    In the modern data center, GPUs are increasingly vital for accelerating these specialized, data intensive workloads.  
    10:11
    Nikita: Besides the CPU and GPU, there's another key component that collaborates with these processors to facilitate efficient data access. What role does Random Access Memory play in all of this? 
    Orlando: The core function of RAM is to provide faster access to information in use. Imagine your computer or server needing to retrieve data from a long-term storage device, like a hard drive. This process can be relatively slow. 
    RAM acts as a temporary high-speed buffer. When your CPU or GPU needs data, it first checks RAM. If the data is there, it can be accessed almost instantaneously, significantly speeding up operations. 
    This rapid access to frequently used data and programming instructions is what allows applications to run smoothly and systems to respond quickly, making RAM a critical factor in overall data center performance. 
    While RAM provides quick access to active data, it's volatile, meaning data is lost when power is off. That's where persistent data storage comes in, holding the information that needs to remain available even after a system shuts down.  
    11:26
    Nikita: Let's now talk about operating systems in cloud data centers and how they help everything run smoothly. Orlando, can you give us a quick refresher on what an operating system is, and why it is important for computing devices? 
    Orlando: At its core, an operating system, or OS, is the fundamental software that manages all the hardware and software resources on a computer. Think of it as a central nervous system that allows everything else to function. 
    It performs several critical tasks: managing memory, deciding which programs get access to memory and when; managing processes, allocating CPU time to different tasks and applications; managing files, organizing data on storage devices; and handling input and output, facilitating communication between the computer and its peripherals, like keyboards, mice, and displays. And perhaps most importantly, it provides the user interface that allows us to interact with the computer. 
    12:31
    Lois: Can you give us a few examples of common operating systems? 
    Orlando: Common operating system examples you are likely familiar with include Microsoft Windows and macOS for personal computers, iOS and Android for mobile devices, and various distributions of Linux, which are incredibly prevalent in servers and increasingly in cloud environments. 
    12:54
    Lois: And how are these operating systems specifically utilized within the demanding environment of cloud data centers? 
    Orlando: The two dominant operating systems in data centers are Linux and Windows. Linux is further categorized into enterprise distributions, such as Oracle Linux or SUSE Linux Enterprise Server, which offer commercial support and stability, and community distributions, like Ubuntu and CentOS, which are developed and maintained by communities and are often free to use. 
    On the other side, we have Windows, primarily represented by Windows Server, which is Microsoft's server operating system known for its robust features and integration with other Microsoft products. While both Linux and Windows are powerful operating systems, their licensing models can differ significantly, which is a crucial factor to consider when deploying them in a data center environment. 
    13:55
    Nikita: In what way do the licensing models differ? 
    Orlando: When we talk about licensing, the differences between Linux and Windows become quite apparent. For Linux, enterprise distributions come with associated support fees, which can be bundled into the initial cost or priced separately. These fees provide access to professional support and updates. On the other hand, community distributions are typically free of charge, with some providers offering basic community-driven support. 
    Windows Server, in contrast, is a commercial product. Its license cost is generally included in the instance cost when using cloud providers or purchased directly for on-premises deployments. It's also worth noting that some cloud providers offer a bring-your-own-license, or BYOL, program, allowing organizations to use their existing Windows licenses in the cloud, which can sometimes provide cost efficiencies. 
    14:58
    Nikita: Beyond choosing an operating system, are there any other important aspects of data center management? 
    Orlando: Another critical aspect of data center management is how you remotely access and interact with your servers. Remote access is fundamental for managing servers in a data center, as you are rarely physically sitting in front of them. The two primary methods that we use are SSH, or Secure Shell, and RDP, or Remote Desktop Protocol. 
    Secure Shell is widely used for secure command-line access to Linux servers. It provides an encrypted connection, allowing you to execute commands, transfer files, and manage your servers securely from a remote location. The Remote Desktop Protocol is predominantly used for graphical remote access to Windows servers. RDP allows you to see and interact with the server's desktop interface, just as if you were sitting directly in front of it, making it ideal for tasks that require a graphical user interface. 
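    For example, here is a minimal Python sketch of scripted remote administration over SSH. The host, user, and command are placeholders, and key-based authentication is assumed to be configured already:
    ```python
    import subprocess

    # Run a single command on a remote Linux server over SSH and capture
    # its output; "opc" and 203.0.113.10 are placeholder values.
    result = subprocess.run(
        ["ssh", "opc@203.0.113.10", "uptime"],
        capture_output=True,
        text=True,
        timeout=30,
    )
    print(result.stdout.strip())
    ```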
    16:06
    Lois: Thank you so much, Orlando, for shedding light on this topic.   
    Nikita: Yeah, that's a wrap for today! To learn more about what we discussed, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. In our next episode, we'll take a close look at how data is stored and managed. Until then, this is Nikita Abraham…  
    Lois: And Lois Houston, signing off!  
    16:28
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Vector AI Supporting Features: What's New in Oracle Exadata and GoldenGate

    2026/04/22 | 13 mins.
    Hosts Lois Houston and Nikita Abraham are joined by Brent Dayley, Senior Principal APEX and Apps Dev Instructor, to explore the latest vector AI supporting features in Oracle Exadata and GoldenGate 23ai. The conversation begins with an overview of Exadata's capabilities and then shifts to how GoldenGate is powering distributed AI, real-time data streaming, and analytics with advanced microservices architecture. Brent highlights recent GoldenGate enhancements, including distributed vector support, robust monitoring, OCI IAM integration, and support for next-generation AI workloads via real-time vector hubs.
     
    Oracle AI Vector Search Deep Dive: https://mylearn.oracle.com/ou/course/oracle-ai-vector-search-deep-dive/144706/
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, and the OU Studio Team for helping us create this episode.
     
    Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release.
     
    -------------------------------------------------------
     
    Episode Transcript:
     
    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Lois: Hello and welcome to another episode of the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead of Editorial Services with Oracle University. 
    Nikita: Hi everyone! Thanks for joining us! In our previous episode of this series, we took a deep dive into Oracle AI Vector Search and Retrieval Augmented Generation, or RAG, showing how unstructured data can be transformed into embeddings to power smarter, more context-aware AI with Oracle Database 23ai.
    Lois: That's right, Niki. We also explored how the OCI Generative AI service can be used with both Python and PL/SQL, and how AI Vector Search enables relevant information retrieval for large language model prompts.
    01:21
    Nikita: Today, we're focusing on the latest supporting features for Oracle AI Vector Search. Joining us once again is Brent Dayley, Senior Principal APEX and Apps Dev Instructor. Welcome back, Brent! To kick things off, could you outline what's new in Exadata with the 24ai release, particularly for AI storage?
    Brent: So Exadata has ushered in a new era of AI capabilities with the 24ai release. Key features of Exadata System Software 24ai include AI Smart Scan, Exadata RDMA Memory, known as XRMEM, Exadata Smart Flash Cache, and on-storage processing. 
    They also include In-Memory Columnar Speed JSON Queries, Transparent Cross-Tier Scans, and caching enhancements, including Columnar Smart Scan at Memory Speed, Exadata Cache Observability, and Automatic KEEP Object Load into Exadata Flash Cache. 
    Now, Exadata system software 24ai is a significant release. It ushers in a new era of AI capabilities for Oracle Database users. 
    Now there have been some infrastructure improvements, including the ability to increase the number of virtual machines on X10M and Secure Boot for KVM Virtual Machines. 
    We have also improved and enhanced high availability and network resilience, including improved RoCE Network Resilience and enhanced RoCE Network Discovery. There have been some enhancements for monitoring and management, including AWR and SQL Monitor Enhancements and JSON API for Management Server. 
    Additionally, there are security enhancements, including SNMP security. Now, Exadata System Software 24ai is supported on Exadata database machines and storage expansion racks from X6 and newer. 
    03:40
    Lois: Those are some fantastic advancements for Exadata users. Now, let's pivot to distributed AI. Brent, can you walk us through how GoldenGate enables distributed AI?
    Brent: Let's take a look at some common GoldenGate use cases as a refresher. The first use case is multi-active, high availability, and cross-region deployments, spanning on-premises and cloud environments. 
    Another use case is data offloading and data hub creation to support multiple downstream applications, along with real-time data stores for downstream marts and analytics, micro- and mini-services architectures, and an audit history of transactions. 
    Other use cases include migrations and upgrades of databases, including OCI-hosted databases. Another use case would be creating analytic data feeds for various applications, including SaaS and on-premises apps. And finally, stream analytics using application and transaction events captured by GoldenGate Stream Analytics. 
    05:03
    Nikita: We know GoldenGate has long been a staple for enterprise data integration. So Brent, what makes GoldenGate the best choice today, and how has its architecture evolved?
    Brent: GoldenGate remains the top choice for enterprise-standard, real-time data streaming, and it also offers DIY stream analytics. It supports Oracle and third-party databases, vector sources, messaging systems, and NoSQL databases. 
    OCI offers a fully managed pipeline builder for Stream Analytics. This pipeline leverages various OCI services, such as OCI Streaming for real-time event ingestion, OCI Dataflow for stream processing, OCI Big Data for data storage and processing, and OCI Stream Analytics for real-time event processing and analysis. 
    GoldenGate Microservices, available since 2017 in Oracle GoldenGate 12.3, is used in over 4,000 deployments in OCI. Benefits of GoldenGate Microservices include the ability to employ the same trusted Extract and Replicat processes as the classic architecture. 
    It provides flexible and secure remote administration through a user-friendly web interface or CLI. It is deployable on-premises, in OCI as a service, and in third-party cloud environments. And it offers a simplified patching and upgrading process. 
    Now, on the GoldenGate architecture evolution: the classic architecture was deprecated in version 19c and desupported in 23ai. The Microservices Architecture, introduced in version 12.3, is the recommended architecture, and a migration utility is available to upgrade from classic to microservices. 
    07:12
    Are you ready to create and manage AI Agents in Fusion Applications? Check out the Oracle AI Agent Studio for Fusion Applications courses! Start with the Foundations course to build, customize, and deploy AI Agents, and then advance to the Developer Professional certification. Explore hands-on labs and real-world case studies. Visit mylearn.oracle.com for all the details. 
    07:39
    Nikita: Welcome back! It sounds like the latest GoldenGate updates offer new features and integrations. Could you share more about these enhancements?
    Brent: There are many new features and enhancements in GoldenGate, along with microservices, including a redesigned GUI for enhanced usability. Integration with StatsD and Telegraf for monitoring and metrics. OCI IAM integration for secure access control. 
    JSON Relational Duality for flexible data handling. Next-generation AI with distributed vector support. PDB Extract Capture for efficient data extraction from Oracle Pluggable Databases. DDL notification on Target Tables for schema evolution management. 
    Support for non-Oracle and Big Data technologies. Online DDL and EBR enhancement for improved performance. Data Streams Pub-Sub for asynchronous data dissemination. Async API support for standardized event communication. High-availability clusters for increased resilience. Trail Files Management for efficient data storage. And support for new features in 23ai database. 
    It also includes integrated diagnostics for improved troubleshooting of Integrated Extract and Integrated Replicat (IE and IR) processes. And 30 or more OS and database certifications for wider platform support. @Dbfunction Mapping for custom data transformations. And lastly, GoldenGate Free recipes for pre-built solutions and best practices. 
    New in GoldenGate, distributed AI processing with vector replication. 
    09:37
    Lois: And what type of use cases does this enable?
    Brent: Migrating vectors into Oracle Vector Database. Replicating and consolidating vector changes. Implementing multi-cloud, multi-active Oracle vector databases. Streaming text and vector changes to search engines. 
    Key considerations include that embedding models must be consistent across all vector stores for effective similarity searches. 
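    A tiny Python example shows why: similarity search ranks stored vectors by scores such as cosine similarity, which are only meaningful when the query vector and the stored vectors come from the same embedding model. The three-dimensional vectors below are made up purely for illustration:
    ```python
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Toy 3-dimensional "embeddings". Vectors produced by different models
    # live in different spaces, so comparing them yields meaningless scores.
    query = [0.9, 0.1, 0.3]
    doc_a = [0.8, 0.2, 0.4]   # semantically close to the query
    doc_b = [-0.5, 0.9, 0.1]  # semantically distant

    print(cosine_similarity(query, doc_a))  # high score
    print(cosine_similarity(query, doc_b))  # low score
    ```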
    10:09
    Lois: Now, many organizations wonder if they can use generative AI with their own business data. Brent, how do enterprises typically approach this?
    Brent: Organizations are typically using generative AI in one of three ways. 
    Building LLMs from scratch, training models on proprietary data for specific tasks. Fine-tuning LLMs, adapting pre-trained models to a specific domain using private data. And prompt engineering with retrieval augmented generation, or RAG, augmenting prompts with relevant information retrieved from a knowledge base to improve the accuracy and relevance of LLM responses. 
    Now it's possible to create a real-time vector hub for GenAI. This hub can ingest real-time data from various sources, including Oracle and third-party relational databases, vector databases, third-party messaging systems, and NoSQL databases, as well as business updates, documents, events, and alerts. 
    11:11
    Nikita: And how does the vector hub work? 
    Brent: DML and DDL changes, vector changes, and prompt or chat history are used to enrich prompts. An embedding model generates embeddings from the text data. 
    Similarity search is performed on these embeddings to retrieve relevant information from the vector hub. The retrieved information is used to augment the prompt, leading to more accurate and trustworthy answers from the LLM. The benefits of combining real-time data with generative AI include ensuring answers are based on fresh business data, which helps reduce hallucinations in generative AI responses. 
    Actionable AI and machine learning from streaming pipelines allows data from ERP and SaaS applications, databases, event messaging systems, and NoSQL databases to be ingested into streaming pipelines. This data can then be used for AI and machine learning model training, similarity searches, machine learning tasks, external AI and machine learning integrations, alerts, and data product creation. 
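    To sketch how such a vector hub enriches a prompt, here is a hypothetical Python outline; `embed` and `vector_hub.search` stand in for whatever embedding model and vector store you use, and are not real APIs:
    ```python
    def build_augmented_prompt(question, vector_hub, embed, k=3):
        """Retrieve the k most similar chunks and fold them into the prompt."""
        query_vector = embed(question)                     # hypothetical embedder
        chunks = vector_hub.search(query_vector, top_k=k)  # hypothetical store
        context = "\n\n".join(chunk.text for chunk in chunks)
        return (
            "Answer using only the business context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}\n"
        )
    ```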
    12:25
    Lois: So if you had to summarize, Brent, why does GoldenGate 23ai stand out for artificial intelligence workloads?
    Brent: Well, first up, it improves data quality for AI model training and fine-tuning. And secondly, it enhances retrieval augmented generation by providing real-time access to relevant business data, leading to more accurate and trustworthy generative AI responses. 
    Nikita: Thank you, Brent, for sharing your insights and detailing these exciting new features across Oracle's AI stack. If you'd like to dive deeper into these topics, don't forget to visit mylearn.oracle.com and look for Oracle AI Vector Search Deep Dive course. Until next time, this is Nikita Abraham…
    Lois: And Lois Houston, signing off!
    13:16
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • RAG with Oracle AI Vector Search and OCI Generative AI: Python and PL/SQL Approaches

    2026/04/14 | 11 mins.
    In this episode of the Oracle University Podcast, hosts Lois Houston and Nikita Abraham are joined by Brent Dayley, Senior Principal APEX & Apps Dev Instructor. Together, they explore how to implement Retrieval Augmented Generation (RAG) using Oracle AI Vector Search and OCI Generative AI. Brent walks listeners through the similarities and differences between building RAG workflows with Python and PL/SQL, offering practical insights into embedding creation, semantic search, and prompt engineering within Oracle's technology stack.
     
    Oracle AI Vector Search Deep Dive: https://mylearn.oracle.com/ou/course/oracle-ai-vector-search-deep-dive/144706/
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode.
     
    Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release.
     
    --------------------------------------------
     
    Episode Transcript:

    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Lois: Hello and welcome to another episode of the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead for Editorial Services with Oracle University. 
    Nikita: Hi everyone! If you joined us last week, you'll remember we explored AI Vector Search and how Retrieval Augmented Generation, or RAG, empowers large language models by surfacing relevant business content for smarter, more context-aware answers.
    Lois: That's right, Niki. We also looked at how unstructured data gets transformed into embeddings, how these vectors power semantic search, and how Oracle Database 23ai is uniquely designed to support these advanced AI workflows.
    Nikita: Today, we're building on that foundation with an exciting double feature. We'll start with an introduction to OCI Generative AI Service and how you can use it with Python, and then dive into Retrieval Augmented Generation with Oracle AI Vector Search and the OCI Gen AI service using PL/SQL.
    01:32
    Lois: And to walk us through these topics, we're delighted to welcome back Brent Dayley, Senior Principal APEX & Apps Dev Instructor. Brent, it's great to have you. So, tell us, how does the OCI Generative AI service use Oracle AI Vector Search?
    Brent: So the OCI Generative AI service allows us to take user questions and augment them with external data from outside the large language model, so that we can return augmented, grounded content. 
    We would leverage Oracle AI Vector Search to retrieve contextually relevant information. And we would craft meaningful prompts that help guide users to ask the appropriate types of questions. This combination allows us to retrieve the data and generate answers using a large language model. 
    02:27
    Nikita: What are the typical steps for implementing a RAG workflow using the OCI Generative AI service in Python?
    Brent: We would load the document. Transform the document to text. And then split the text into chunks. 
    So if you're talking about maybe a PDF that contains chapters, we might split the different chapters into individual chunks. We would then set up Oracle AI Vector Search and insert the embedding vectors. We would build the prompt to query the document. And then we would invoke the chain. 
    So first, you would load the text sources from a file. Open a terminal window, connect to your compute instance, and launch IPython for interactive work. 
    IPython lets you run the workflow step by step, with each command in its own step. You might load a source file called FAQs.
    Next, you would load the FAQ chunks into the vector database. You would create a connection, connect to your database, and create the table. You would then encode the text chunks into embedding vectors and insert the chunks and their vectors into the database. 
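    A minimal Python sketch of this step, assuming the python-oracledb driver and a local sentence-transformers embedding model; the connection details, table name, and chunk text are illustrative placeholders, not the course's exact code.

        # Load FAQ chunks into the vector database (illustrative names).
        import array

        import oracledb
        from sentence_transformers import SentenceTransformer

        # Connect to the database -- replace user/password/DSN with your own.
        conn = oracledb.connect(user="vector_user", password="YourPassword",
                                dsn="localhost/freepdb1")
        cur = conn.cursor()

        # Create a table with a VECTOR column (Oracle Database 23ai and later).
        cur.execute("""
            CREATE TABLE IF NOT EXISTS faqs (
                id      NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
                payload VARCHAR2(4000),
                vec     VECTOR)""")

        # Encode each text chunk into an embedding vector.
        model = SentenceTransformer("all-MiniLM-L6-v2")
        chunks = ["First FAQ chunk...", "Second FAQ chunk..."]
        vectors = model.encode(chunks)

        # Insert each chunk together with its vector.
        for chunk, vec in zip(chunks, vectors):
            cur.execute("INSERT INTO faqs (payload, vec) VALUES (:1, :2)",
                        [chunk, array.array("f", vec)])
        conn.commit()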
    Next, you would define the question and vectorize it, write the retrieval code with a SQL query that orders the results by the calculated similarity score, and execute the code. Finally, you would print the results.
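    Continuing the sketch, the retrieval step might look like this; VECTOR_DISTANCE and FETCH FIRST are standard Oracle Database 23ai SQL, while the table and column names carry over from the illustrative example above.

        # Vectorize the question and retrieve the closest chunks.
        question = "How do I reset my password?"
        qvec = array.array("f", model.encode([question])[0])

        # Order by vector distance: the smallest score is the best match.
        retrieval_sql = """
            SELECT payload, VECTOR_DISTANCE(vec, :qv, COSINE) AS score
            FROM faqs
            ORDER BY score
            FETCH FIRST 3 ROWS ONLY"""

        cur.execute(retrieval_sql, {"qv": qvec})
        top_chunks = cur.fetchall()
        for payload, score in top_chunks:
            print(f"{score:.4f}  {payload}")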
    Then we would create the large language model prompt and call the OCI Generative AI LLM, ensuring that our prompt does not exceed the maximum context length of the model, and define the prompt content. 
    We would then initialize the OCI client and make the call. 
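    A sketch of that final Python step using the OCI SDK's Generative AI inference client; the endpoint, model ID, and compartment OCID are placeholders you would replace with your own, and the crude length check stands in for proper context-length handling.

        import oci

        # Build the prompt from the retrieved chunks and the question.
        context = "\n".join(payload for payload, _ in top_chunks)
        prompt = (f"Answer the question using only this context:\n{context}\n\n"
                  f"Question: {question}")
        # Crude guard so the prompt stays within the model's context length.
        prompt = prompt[:4000]

        # Initialize the OCI client and make the call (placeholders below).
        config = oci.config.from_file()  # reads ~/.oci/config
        client = oci.generative_ai_inference.GenerativeAiInferenceClient(
            config,
            service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com")

        chat_details = oci.generative_ai_inference.models.ChatDetails(
            compartment_id="ocid1.compartment.oc1..example",
            serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
                model_id="cohere.command-r-16k"),
            chat_request=oci.generative_ai_inference.models.CohereChatRequest(
                message=prompt, max_tokens=500))

        response = client.chat(chat_details)
        print(response.data.chat_response.text)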
    04:47
    Here's some exciting news! Oracle University has training to help your teams unlock Redwood—the next-gen design system for Fusion Cloud Applications. Learn how Redwood improves your user experience and discover how to personalize your Fusion investment using Visual Builder Studio. Whatever your role, visit mylearn.oracle.com and check out these courses today! 
    05:12
    Nikita: Thanks, Brent. That gives us a nice overview of how Python can be leveraged with OCI Generative AI. Now, how would you compare working with Python for building RAG applications to using PL/SQL? Can you walk us through the high-level process for building a RAG solution in this environment?
    Brent: First, we would want to load the document. Next, we would transform the document into plain text. After that, we would take that text and split it into meaningful chunks. Next, we would go ahead and set up Oracle AI Vector Search and insert the embedding vectors. We would then build the prompt so that we can query the document. And then we would invoke all of those previous steps as our chain. 
    06:04
    Lois: OK, and can we take a closer look at each of these steps? 
    Brent: Step 1, text extraction and preparation. So, let's imagine we have some sort of document that we want to use as the augmented information. We would load that document. Next, we would transform the document to text. And we have a function in the DBMS_VECTOR_CHAIN package called UTL_TO_TEXT, which is used to extract plain text from the loaded documents. 
    Next, we would want to split the text into meaningful chunks. The DBMS_VECTOR_CHAIN package has another function, UTL_TO_CHUNKS, that allows us to divide the extracted text into smaller, more manageable pieces, which we call chunks. 
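    In SQL, those two calls can be combined; in this sketch the DOCS table, its FILE_DATA BLOB column, and the chunking parameters are illustrative, so check the DBMS_VECTOR_CHAIN documentation for the options in your release.

        -- Extract plain text from a stored document, then split it into
        -- chunks of roughly 100 words with a 10-word overlap.
        SELECT c.*
        FROM   docs d,
               dbms_vector_chain.utl_to_chunks(
                   dbms_vector_chain.utl_to_text(d.file_data),
                   json('{"by":"words", "max":"100", "overlap":"10"}')) c
        WHERE  d.id = 1;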
    07:02
    Nikita: Once we have our text chunks ready, what's the next step to make our data searchable and useful for the large language model?
    Brent: Step 2, we would use embedding models to create our vectors. We would load multiple ONNX models into the database, because models with a greater number of dimensions usually produce higher quality vector embeddings. 
    So loading several different ONNX models lets you generate embeddings with each model and compare the resulting vectors against one another. You would create the vector embeddings using PL/SQL packages. 
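    As a sketch, loading one ONNX model and embedding the chunks with it might look like this; the directory object, file name, model name, and staging table are all illustrative placeholders.

        -- Load an ONNX embedding model staged in a database directory.
        BEGIN
          dbms_vector.load_onnx_model(
              directory  => 'MODEL_DIR',
              file_name  => 'all_minilm_l6_v2.onnx',
              model_name => 'doc_model');
        END;
        /

        -- Embed each chunk with the loaded model and store text and
        -- vector side by side.
        INSERT INTO doc_chunks (chunk_text, chunk_vector)
        SELECT c.chunk_text,
               vector_embedding(doc_model USING c.chunk_text AS data)
        FROM   chunk_stage c;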
    07:55
    Lois: After embeddings are created, how does the solution find the most relevant content in response to a user's question?
    Brent: Step 3, we would run a similarity search so that we can return a response. We select the text chunks that contain the information relevant to the user's question, based on vector search, and integrate with Oracle's Generative AI large language model service to generate responses. This process ensures that the large language model produces contextually appropriate and relevant answers to users' queries. 
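    The similarity search itself can be a single query; the table, column, and model names continue the illustrative example above, and COSINE is one of several distance metrics VECTOR_DISTANCE supports.

        -- Return the chunks closest to the embedded user question.
        SELECT chunk_text
        FROM   doc_chunks
        ORDER  BY vector_distance(
                   chunk_vector,
                   vector_embedding(doc_model USING :user_question AS data),
                   COSINE)
        FETCH FIRST 4 ROWS ONLY;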
    Now, step 4 is to build the prompt, and I want to stress the importance of large language model prompt engineering. Prompt engineering means carefully crafting input queries or instructions so that we get more accurate and desirable outputs from the large language model. 
    This allows developers to guide the LLM's behavior and tailor its responses to specific requirements. 
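    A hedged PL/SQL sketch of the prompt-building step; it assumes the DBMS_VECTOR_CHAIN.UTL_TO_GENERATE_TEXT helper and a pre-created OCI credential, and the credential name, endpoint URL, and model are placeholders.

        DECLARE
          l_question VARCHAR2(200) := 'What does the document say about returns?';
          l_context  CLOB;
          l_prompt   CLOB;
          l_answer   CLOB;
        BEGIN
          -- Gather the top chunks (retrieval query as shown earlier).
          SELECT listagg(chunk_text, chr(10)) WITHIN GROUP (ORDER BY chunk_text)
          INTO   l_context
          FROM   (SELECT chunk_text
                  FROM   doc_chunks
                  ORDER  BY vector_distance(
                             chunk_vector,
                             vector_embedding(doc_model USING l_question AS data),
                             COSINE)
                  FETCH FIRST 4 ROWS ONLY);

          -- The engineered prompt: constrain the model to the retrieved context.
          l_prompt := 'Answer using only the context below.' || chr(10)
                   || 'Context: ' || l_context || chr(10)
                   || 'Question: ' || l_question;

          -- Call the LLM through the database (provider settings are placeholders).
          l_answer := dbms_vector_chain.utl_to_generate_text(
              l_prompt,
              json('{"provider":"ocigenai",'
                || '"credential_name":"OCI_CRED",'
                || '"url":"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com/20231130/actions/chat",'
                || '"model":"cohere.command-r-16k"}'));
          dbms_output.put_line(l_answer);
        END;
        /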
    Next, we would use an example interactive RAG application built with the Streamlit framework to create a user-friendly interface. This interface allows us to upload documents, pose questions, and receive relevant answers generated by the underlying RAG pipeline within the database. 
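    A toy Streamlit sketch of that interface; answer_question here is a stub standing in for the real in-database RAG pipeline, so treat every name as illustrative.

        # Minimal Streamlit front end (run with: streamlit run app.py).
        import streamlit as st

        def answer_question(pdf_file, question: str) -> str:
            # Stub: the real pipeline would chunk the PDF, embed it, run the
            # similarity search, and call the LLM as sketched earlier.
            return f"(answer to: {question})"

        st.title("Ask a question about the PDF")
        uploaded = st.file_uploader("Upload a PDF", type="pdf")
        question = st.text_input("Your question")

        if uploaded and question:
            st.write(answer_question(uploaded, question))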
    In the final step, the application prompts us to ask a question about the PDF. We type in a question related to the PDF content, and the application retrieves and returns the answer based on that input question. 
    10:11
    Nikita: Brent, thank you for walking us through both the Python and PL/SQL approaches for building RAG solutions with Oracle Generative AI. If you'd like to dive deeper into these topics, don't forget to visit mylearn.oracle.com and look for the Oracle AI Vector Search Deep Dive course. Until next time, this is Nikita Abraham…
    Lois: And Lois Houston, signing off!
    10:33
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

About Oracle University Podcast

Oracle University Podcast delivers convenient, foundational training on popular Oracle technologies such as Oracle Cloud Infrastructure, Java, Autonomous Database, and more to help you jump-start or advance your career in the cloud.