
Oracle University Podcast

Oracle Corporation

Available Episodes

5 of 139
  • Cloud Data Centers: Core Concepts - Part 3
    Have you ever considered how a single server can support countless applications and workloads at once?   In this episode, hosts Lois Houston and Nikita Abraham, together with Principal OCI Instructor Orlando Gentil, explore the sophisticated technologies that make this possible in modern cloud data centers.   They discuss the roles of hypervisors, virtual machines, and containers, explaining how these innovations enable efficient resource sharing, robust security, and greater flexibility for organizations.   Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------- Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! For the last two weeks, we’ve been talking about different aspects of cloud data centers. In this episode, Orlando Gentil, Principal OCI Instructor at Oracle University, joins us once again to discuss how virtualization, through hypervisors, virtual machines, and containers, has transformed data centers. 00:58 Lois: That’s right, Niki. We’ll begin with a quick look at the history of virtualization and why it became so widely adopted. Orlando, what can you tell us about that?  Orlando: To truly grasp the power of virtualization, it's helpful to understand its journey from its humble beginnings with mainframes to its pivotal role in today's cloud computing landscape. It might surprise you, but virtualization isn't a new concept. Its roots go back to the 1960s with mainframes. In those early days, the primary goal was to isolate workloads on a single powerful mainframe, allowing different applications to run without interfering with each other. As we moved into the 1990s, the challenge shifted to underutilized physical servers. Organizations often had numerous dedicated servers, each running a single application, leading to significant waste of computing resources. This led to the emergence of virtualization as we know it today, primarily from the 1990s to the 2000s. The core idea here was to run multiple isolated operating systems on a single physical server. This innovation dramatically improved the resource utilization and laid the technical foundation for cloud computing, enabling the scalable and flexible environments we rely on today. 02:26 Nikita: Interesting. So, from an economic standpoint, what pushed traditional data centers to change and opened the door to virtualization? Orlando: In the past, running applications often meant running them on dedicated physical servers. This led to a few significant challenges. First, more hardware purchases. Every new application, every new project often required its own dedicated server. This meant constantly buying new physical hardware, which quickly escalated capital expenditure. 
Secondly, and hand-in-hand with more servers, came higher power and cooling costs. Each physical server consumed power and generated heat, necessitating significant investment in electricity and cooling infrastructure. The more servers, the higher these operational expenses became. And finally, a major problem was unused capacity. Despite investing heavily in these physical servers, it was common for them to run well below their full capacity. Applications typically didn’t need 100% of a server’s resources all the time. This meant we were wasting valuable compute power, memory, and storage, effectively wasting resources and diminishing the return on investment from those expensive hardware purchases. These economic pressures became a powerful incentive to find more efficient ways to utilize data center resources, setting the stage for technologies like virtualization. 04:05 Lois: I guess we can assume virtualization emerged as a financial game-changer. So, what kind of economic efficiencies did virtualization bring to the table? Orlando: From a CapEx or capital expenditure perspective, companies spent less on servers and data center expansion. From an OpEx or operational expenditure perspective, fewer machines meant lower electricity, cooling, and maintenance costs. It also sped up provisioning. Spinning up a new VM took minutes, not days or weeks. That improved agility and reduced the operational workload on IT teams. It also created a more scalable, cost-efficient foundation, which made virtualization not just a technical improvement, but a financial turning point for data centers. This economic efficiency is exactly what cloud providers like Oracle Cloud Infrastructure are built on, using virtualization to deliver scalable, pay-as-you-go infrastructure.  05:09 Nikita: Ok, Orlando. Let’s get into the core components of virtualization. To start, what exactly is a hypervisor? Orlando: A hypervisor is a piece of software, firmware, or hardware that creates and runs virtual machines, also known as VMs. Its core function is to allow multiple virtual machines to run concurrently on a single physical host server. It acts as a virtualization layer, abstracting the physical hardware resources like CPU, memory, and storage, and allocating them to each virtual machine as needed, ensuring they can operate independently and securely. 05:49 Lois: And are there types of hypervisors? Orlando: There are two primary types of hypervisors. Type 1 hypervisors, often called bare-metal hypervisors, run directly on the host server's hardware. This means they interact directly with the physical resources, offering high performance and security. Examples include VMware ESXi, Oracle VM Server, and KVM on Linux. They are commonly used in enterprise data centers and cloud environments. In contrast, Type 2 hypervisors, also known as hosted hypervisors, run on top of an existing operating system like Windows or macOS. They act as an application within that operating system. Popular examples include VirtualBox, VMware Workstation, and Parallels. These are typically used for personal computing or development purposes, where you might run multiple operating systems on your laptop or desktop. 06:55 Nikita: We’ve spoken about the foundation provided by hypervisors. So, can we now talk about the virtual entities they manage: virtual machines? What exactly is a virtual machine and what are its fundamental characteristics?
Orlando: A virtual machine is essentially a software-based virtual computer system that runs on a physical host computer. The magic happens with the hypervisor. The hypervisor's job is to create and manage these virtual environments, abstracting the physical hardware so that multiple VMs can share the same underlying resources without interfering with each other. Each VM operates like a completely independent computer with its own operating system and applications.  07:40 Lois: What are the benefits of this? Orlando: Each VM is isolated from the others. If one VM crashes or encounters an issue, it doesn't affect the other VMs running on the same physical host. This greatly enhances stability and security. A powerful feature is the ability to run different operating systems side-by-side on the very same physical host. You could have a Windows VM, a Linux VM, and even other specialized operating systems, all operating simultaneously. Consolidating workloads directly addresses the unused capacity problem. Instead of one application per physical server, you can now run multiple workloads, each in its own VM on a single powerful physical server. This dramatically improves hardware utilization, reducing the need for constant new hardware purchases and lowering power and cooling costs. And by consolidating workloads, virtualization makes it possible for cloud providers to dynamically create and manage vast pools of computing resources. This allows users to quickly provision and scale virtual servers on demand, tapping into these shared pools of CPU, memory, and storage as needed, rather than being tied to a single physical machine. 09:10 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest technology. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 09:54 Nikita: Welcome back! Orlando, let’s move on to containers. Many see them as a lighter, more agile way to build and run applications. What’s your take? Orlando: A container packages an application and all its dependencies, like libraries and other binaries, into a single, lightweight executable unit. Unlike a VM, a container shares the host operating system's kernel, running on top of the container runtime process. This architectural difference provides several key advantages. Containers are incredibly portable. They can be taken virtually anywhere, from a developer's laptop to a cloud environment, and run consistently, eliminating "it works on my machine" issues. Because containers share the host OS kernel, they don't need to bundle a full operating system themselves. This results in significantly smaller footprints and less administration overhead compared to VMs. They are faster to start. Without the need to boot a full operating system, containers can start up in seconds, or even milliseconds, providing rapid deployment and scaling capabilities. 11:12 Nikita: Ok. Throughout our conversation, you’ve spoken about the various advantages of virtualization but let’s consolidate them now.  Orlando: From a security standpoint, virtualization offers several crucial benefits. Each VM operates in its own isolated sandbox.
This means if one VM experiences a security breach, the impact is generally contained to that single virtual machine, significantly limiting the spread of potential threats across your infrastructure. Containers also provide some isolation. Virtualization allows for rapid recovery. This is invaluable for disaster recovery or undoing changes after a security incident. You can implement separate firewalls, access rules, and network configurations for each VM. This granular control reduces the overall exposure and attack surface across your virtualized environments, making it harder for malicious actors to move laterally. Beyond security, virtualization also brings significant advantages in terms of operational and agility benefits for IT management. Virtualization dramatically improves operational efficiency and agility. Things are faster. With virtualization, you can provision new servers or containers in minutes rather than days or weeks. This speed allows for quicker deployment of applications and services. It becomes much simpler to deploy consistent environments using templates and preconfigured VM images or containers. This reduces errors and ensures uniformity across your infrastructure. It's more scalable. Virtualization makes your infrastructure far more scalable. You can reshape VMs and containers to meet changing demands, ensuring your resources align precisely with your needs. These operational benefits directly contribute to the power of cloud computing, especially when we consider virtualization's role in enabling cloud and scalability. Virtualization is the very backbone of modern cloud computing, fundamentally enabling its scalability. It allows multiple virtual machines to run on a single physical server, maximizing hardware utilization, which is essential for cloud providers. This capability is the core of infrastructure-as-a-service offerings, where users can provision virtualized compute resources on demand. Virtualization makes services globally scalable. Resources can be easily deployed and managed across different geographic regions to meet worldwide demand. Finally, it provides elasticity, meaning resources can be automatically scaled up or down in response to fluctuating workloads, ensuring optimal performance and cost efficiency. 14:21 Lois: That’s amazing. Thank you, Orlando, for joining us once again.  Nikita: Yeah, and remember, if you want to learn more about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course.  Lois: Well, that’s all we have for today. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 14:40 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
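A note for readers following along at a terminal: the episode contrasts VMs, which sit on a hypervisor, with containers, which share the host kernel, and both leave fingerprints a program can check. Below is a minimal Python sketch, not part of the episode, that uses common Linux markers -- the /.dockerenv file, container names in /proc/1/cgroup, and the DMI sys_vendor field -- to guess its own runtime. It assumes a Linux host and is a heuristic only; cgroup v2 systems, for instance, may not expose the container hints it looks for.

```python
from pathlib import Path

def detect_runtime() -> str:
    """Rough guess: container, virtual machine, or bare metal (Linux only)."""
    # Container runtimes leave telltale markers on the filesystem.
    if Path("/.dockerenv").exists():
        return "container (Docker)"
    cgroup = Path("/proc/1/cgroup")
    if cgroup.exists() and any(
        marker in cgroup.read_text() for marker in ("docker", "kubepods", "lxc")
    ):
        return "container"
    # VMs expose the hypervisor vendor through DMI/SMBIOS fields.
    vendor_file = Path("/sys/class/dmi/id/sys_vendor")
    if vendor_file.exists():
        vendor = vendor_file.read_text().strip()
        for hint in ("QEMU", "VMware", "VirtualBox", "KVM", "Xen",
                     "Microsoft Corporation", "innotek"):
            if hint in vendor:
                return f"virtual machine ({vendor})"
    return "bare metal (or nothing detected)"

if __name__ == "__main__":
    print(detect_runtime())
```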
    --------  
    15:09
  • Cloud Data Centers: Core Concepts - Part 2
Have you ever wondered where all your digital memories, work projects, or favorite photos actually live in the cloud?   In this episode, Lois Houston and Nikita Abraham are joined by Principal OCI Instructor Orlando Gentil to discuss cloud storage.   They explore how data is carefully organized, the different ways it can be stored, and what keeps it safe and easy to find.   Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------   Episode Transcript:    00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about the differences between traditional and cloud data centers, and covered components like CPU, RAM, and operating systems. If you haven’t listened to the episode yet, I’d suggest going back and listening to it before you dive into this one.  Nikita: Joining us again is Orlando Gentil, Principal OCI Instructor at Oracle University, and we’re going to ask him about another fundamental concept: storage. 01:04 Lois: That’s right, Niki. Hi Orlando! Thanks for being with us again today. You introduced cloud data centers last week, but tell us, how is data stored and accessed in these centers?  Orlando: At a fundamental level, storage is where your data resides persistently. Data stored on a storage device is accessed by the CPU and, for specialized tasks, the GPU. The RAM acts as a high-speed intermediary, temporarily holding data that the CPU and the GPU are actively working on. This cyclical flow ensures that applications can effectively retrieve, process, and store information, forming the backbone for our computing operations in the data center. 01:52 Nikita: But how is data organized and controlled on disks? Orlando: To effectively store and manage data on physical disks, a structured approach is required, which is defined by file systems and permissions. The process begins with disks. These are the raw physical storage devices. Before data can be written to them, disks are typically divided into partitions. A partition is a logical division of a physical disk that acts as if it were a separate physical disk. This allows you to organize your storage space and even install multiple operating systems on a single drive. Once partitions are created, they are formatted with a file system. 02:40 Nikita: Ok, sorry but I have to stop you there. Can you explain what a file system is? And how is data organized using a file system?  Orlando: The file system is the method and the data structure that an operating system uses to organize and manage files on storage devices. It dictates how data is named, stored, retrieved, and managed on the disk, essentially providing the roadmap for data. Common file systems include NTFS for Windows and ext4 or XFS for Linux.
Within this file system, data is organized hierarchically into directories, also known as folders. These containers help to logically group related files, which are the individual units of data, whether they are documents, images, videos, or applications. Finally, overseeing this entire organization are permissions.  03:42 Lois: And what are permissions? Orlando: Permissions define who can access specific files and directories and what actions they are allowed to perform-- for example, read, write, or execute. This access control, often managed by user, group, and other permissions, is fundamental for security, data integrity, and multi-user environments within a data center.  04:09 Lois: Ok, now that we have a good understanding of how data is organized logically, can we talk about how data is stored locally within a server?   Orlando: Local storage refers to storage devices directly attached to a server or computer. The three common types are hard disk drives, solid state drives, and NVMe drives. Hard disk drives are traditional storage devices using spinning platters to store data. They offer large capacity at a lower cost per gigabyte, making them suitable for bulk data storage when high performance isn't the top priority. Unlike hard disks, solid state drives use flash memory to store data, similar to USB drives but on a larger scale. They provide significantly faster read and write speeds, better durability, and lower power consumption than hard disks, making them ideal for operating systems, applications, and frequently accessed data. Non-Volatile Memory Express is a communication interface specifically designed for solid state drives that connects directly to the PCI Express bus. NVMe offers even faster performance than traditional SATA-based solid state drives by reducing latency and increasing bandwidth, making it the top choice for demanding workloads that require extreme speed, such as high-performance databases and AI applications. Each type serves different performance and cost requirements within a data center. While local storage is essential for immediate access, data centers also heavily rely on storage that isn't directly attached to a single server.  05:59 Lois: I’m guessing you’re hinting at remote storage. Can you tell us more about that, Orlando? Orlando: Remote storage refers to data storage solutions that are not physically connected to the server or client accessing them. Instead, they are accessed over the network. This setup allows multiple clients or servers to share access to the same storage resources, centralizing data management and improving data availability. This architecture is fundamental to cloud computing, enabling vast pools of shared storage that can be dynamically provisioned to various users and applications. 06:35 Lois: Let’s talk about the common forms of remote storage. Can you run us through them? Orlando: One of the most common and accessible forms of remote storage is Network Attached Storage or NAS. NAS is a dedicated file storage device connected to a network that allows multiple users and client devices to retrieve data from a centralized disk capacity. It's essentially a server dedicated to serving files. A client connects to the NAS over the network. And the NAS then provides access to files and folders. NAS devices are ideal for scenarios requiring shared file access, such as document collaboration, centralized backups, or serving media files, making them very popular in both home and enterprise environments.
While NAS provides file-level access over a network, some applications, especially those requiring high performance and direct block-level access to storage, need a different approach.  07:38 Nikita: And what might this approach be?  Orlando: iSCSI, or Internet Small Computer System Interface, which provides block-level storage over an IP network. iSCSI is a standard that allows the SCSI protocol, traditionally used for local storage, to be sent over IP networks. Essentially, it enables servers to access storage devices as if they were directly attached, even though they are located remotely on the network.  This means it can leverage standard Ethernet infrastructure, making it a cost-effective solution for creating high performance, centralized storage accessible over an existing network. It's particularly useful for server virtualization and database environments where block-level access is preferred. While iSCSI provides block-level access over standard IP, for environments demanding even higher performance, lower latency, and greater dedicated throughput, a specialized network is often deployed.  08:47 Nikita: And what’s this specialized network called? Orlando: Storage Area Network or SAN. A Storage Area Network or SAN is a high-speed network specifically designed to provide block-level access to consolidated shared storage. Unlike NAS, which provides file-level access, a SAN presents storage volumes to servers as if they were local disks, allowing for very high performance for applications like databases and virtualized environments. While iSCSI SANs use Ethernet, many high-performance SANs utilize Fibre Channel for even faster and more reliable data transfer, making them a cornerstone of enterprise data centers where performance and availability are paramount. 09:42 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest technology. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 10:26 Nikita: Welcome back! Orlando, are there any other popular storage paradigms we should know about? Orlando: Beyond file level and block level storage, cloud environments have popularized another flexible and highly scalable storage paradigm, object storage.  Object storage is a modern approach to storing data, treating each piece of data as a distinct, self-contained unit called an object. Unlike file systems that organize data in a hierarchy or block storage that breaks data into fixed size blocks, object storage manages data as flat, unstructured objects. Each object is stored with a unique identifier and rich metadata, making it highly scalable and flexible for massive amounts of data. This service handles the complexity of storage, providing access to vast repositories of data. Object storage is ideal for use cases like cloud-native applications, big data analytics, content distribution, and large-scale backups thanks to its immense scalability, durability, and cost effectiveness. While object storage is excellent for frequently accessed data in rapidly growing data sets, sometimes data needs to be retained for very long periods but is accessed infrequently.
For these scenarios, a specialized low-cost storage tier, known as archive storage, comes into play. 12:02 Lois: And what’s that exactly? Orlando: Archive storage is specifically designed for long-term backup and retention of data that you rarely, if ever, access. This includes critical information, like old records, compliance data that needs to be kept for regulatory reasons, or disaster recovery backups. The key characteristic of archive storage is an extremely low cost per gigabyte, achieved by optimizing for infrequent access rather than speed. Historically, tape backup systems were the common solution for archiving, where data from a data center is moved to tape. In modern cloud environments, this has evolved into cloud backup solutions. Cloud-based archiving leverages highly cost-effective, durable cloud storage tiers that are purpose-built for long-term retention, providing a scalable and often more reliable alternative to physical tapes. 13:05 Lois: Thank you, Orlando, for taking the time to talk to us about the hardware and software layers of cloud data centers. This information will surely help our listeners to make informed decisions about cloud infrastructure to meet their workload needs in terms of performance, scalability, cost, and management.  Nikita: That’s right, Lois. And if you want to learn more about what we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course.  Lois: In our next episode, we’ll take a look at more of the fundamental concepts within modern cloud environments, such as Hypervisors, Virtualization, and more. I can’t wait to learn more about it. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 13:47 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
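To make the permissions model Orlando describes concrete -- user, group, and other, each with read, write, and execute bits -- here is a small Python sketch using only the standard library. It is an illustration to accompany the episode, not course material; /etc/hosts is just an example path that exists on most Unix systems.

```python
import stat
from pathlib import Path

def describe_permissions(path: str) -> None:
    """Print the user/group/other read-write-execute bits for a file."""
    mode = Path(path).stat().st_mode
    # stat.filemode() renders the classic 'ls -l' string, e.g. '-rw-r--r--'
    print(f"{path}: {stat.filemode(mode)} (octal {stat.S_IMODE(mode):o})")
    for who, r, w, x in (
        ("user", stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR),
        ("group", stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP),
        ("other", stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH),
    ):
        perms = "".join(
            flag if mode & bit else "-"
            for flag, bit in (("r", r), ("w", w), ("x", x))
        )
        print(f"  {who:5s}: {perms}")

describe_permissions("/etc/hosts")  # example file present on most Unix systems
```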
    --------  
    14:16
  • Cloud Data Centers: Core Concepts - Part 1
Curious about what really goes on inside a cloud data center?   In this episode, Lois Houston and Nikita Abraham chat with Principal OCI Instructor Orlando Gentil about how cloud data centers are transforming the way organizations manage technology.   They explore the differences between traditional and cloud data centers, the roles of CPUs, GPUs, and RAM, and why operating systems and remote access matter more than ever.   Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.   Nikita: Hi everyone! Today, we’re covering the fundamentals you need to be successful in a cloud environment. If you’re new to cloud, coming from a SaaS environment, or planning to move from on-premises to the cloud, you won’t want to miss this. With us today is Orlando Gentil, Principal OCI Instructor at Oracle University. Hi Orlando! Thanks for joining us.   01:01 Lois: So Orlando, we know that Oracle has been a pioneer of cloud technologies and has been pivotal in shaping modern cloud data centers, which are different from traditional data centers. For our listeners who might be new to this, could you tell us what a traditional data center is?  Orlando: A traditional data center is a physical facility that houses an organization's mission critical IT infrastructure, including servers, storage systems, and networking equipment, all managed on site.   01:32 Nikita: So why would anyone want to use a cloud data center?  Orlando: The traditional model requires significant upfront investment in physical hardware, which you are then responsible for maintaining along with the underlying infrastructure like physical security, HVAC, backup power, and communication links.  In contrast, cloud data centers offer a more agile approach. You essentially rent the infrastructure you need, paying only for what you use. In the traditional data center, scaling resources up and down can be a slow and complex process.  In cloud data centers, scaling is automated and elastic, allowing resources to adjust dynamically based on demand. This shift allows businesses to move their focus from the constant upkeep of infrastructure to innovation and growth.  The move represents a shift from maintenance to momentum, enabling optimized costs and efficient scaling. This is a fundamental shift in how IT infrastructure is managed and consumed, and precisely what we mean by moving to the cloud.  02:39 Lois: So, when we talk about moving to the cloud, what does it really mean for businesses today?  Orlando: Moving to the cloud represents the strategic transition from managing your own on-premise hardware and software to leveraging internet-based computing services provided by a third party.  
This involves migrating your applications, data, and IT operations to a cloud environment. This transition typically aims to reduce operational overhead, increase flexibility, and enhance scalability, allowing organizations to focus more on their core business functions.    03:17 Nikita: Orlando, what’s the “brain” behind all this technology?  Orlando: A CPU, or Central Processing Unit, is the primary component that performs most of the processing inside the computer or server. It performs calculations, handling the complex mathematics and logic that drive all applications and software.  It processes instructions, running tasks and operations in the background that are essential for any application. A CPU is critical for performance, as it directly impacts the overall speed and efficiency of the data center.  It also manages system activities, coordinating user input, various application tasks, and the flow of data throughout the system. Ultimately, the CPU drives data center workloads from basic server operations to powering cutting-edge AI applications.  04:10 Lois: To better understand how a CPU achieves these functions and processes information so efficiently, I think it’s important for us to grasp its fundamental architecture. Can you briefly explain the fundamental architecture of a CPU, Orlando?  Orlando: When discussing CPUs, you will often hear about sockets, cores, and threads. A socket refers to the physical connection on the motherboard where a CPU chip is installed.  A single server motherboard can have one or more sockets, each holding a CPU. A core is an independent processing unit within a CPU. Modern CPUs often have multiple cores, enabling them to handle several instructions simultaneously, thus increasing processing power.  Think of it as having multiple mini CPUs on a single chip. Threads are virtual components that allow a single CPU core to handle multiple sequences of instructions, or threads, concurrently. This technology, often called hyperthreading, makes a single core appear as two logical processors to the operating system, further enhancing efficiency.  05:27 Lois: Ok. And how do CPUs process commands?  Orlando: Beyond these internal components, CPUs are also designed based on different instruction set architectures, which dictate how they process commands.   CPU architectures are primarily categorized into two designs-- Complex Instruction Set Computer or CISC and Reduced Instruction Set Computer or RISC. CISC processors are designed to execute complex instructions in a single step, which can reduce the number of instructions needed for a task, but often leads to higher power consumption.  These are commonly found in traditional Intel and AMD CPUs. In contrast, RISC processors use a simpler, more streamlined set of instructions. While this might require more steps for a complex task, each step is faster and more energy efficient. This architecture is prevalent in ARM-based CPUs.  06:34 Are you looking to boost your expertise in enterprise AI? Check out the Oracle AI Agent Studio for Fusion Applications Developers course and professional certification—now available through Oracle University. This course helps you build, customize, and deploy AI Agents for Fusion HCM, SCM, and CX, with hands-on labs and real-world case studies. Ready to set yourself apart with in-demand skills and a professional credential? Learn more and get started today! Visit mylearn.oracle.com for more details.     07:09 Nikita: Welcome back! We were discussing CISC and RISC processors.
So Orlando, where are they typically deployed? Are there any specific computing environments and use cases where they excel?  Orlando: On the CISC side, you will find them powering enterprise virtualization and server workloads, such as bare metal hypervisors and large databases where complex instructions can be efficiently processed. High performance computing that includes demanding simulations, intricate analysis, and many traditional machine learning systems.  Enterprise software suites and business applications like ERP, CRM, and other complex enterprise systems that benefit from fewer steps per instruction. Conversely, RISC architectures are often preferred for cloud-native workloads such as Kubernetes clusters, where simpler, faster instructions and energy efficiency are paramount for distributed computing.  Mobile device management and edge computing, including cell phones and IoT devices where power efficiency and compact design are critical. Cost-optimized cloud hosting supporting distributed workloads where the cumulative energy savings and simpler design lead to more economical operations.  The choice between CISC and RISC depends heavily on the specific workload and performance requirements. While CPUs are versatile generalists, handling a broad range of tasks, modern data centers also heavily rely on another crucial processing unit for specialized workloads.  08:54 Lois: We’ve spoken a lot about CPUs, but our conversation would be incomplete without understanding what a Graphics Processing Unit is and why it’s important. What can you tell us about GPUs, Orlando?  Orlando: A GPU or Graphics Processing Unit is distinct from a CPU. While the CPU is a generalist excelling at sequential processing and managing a wide variety of tasks, the GPU is a specialist.  It is designed specifically for parallel, compute-heavy tasks. This means it can perform many calculations simultaneously, making it incredibly efficient for workloads like rendering graphics, scientific simulations, and especially in areas like machine learning and artificial intelligence, where massive parallel computation is required.  In the modern data center, GPUs are increasingly vital for accelerating these specialized, data-intensive workloads.   09:58 Nikita: Besides the CPU and GPU, there’s another key component that collaborates with these processors to facilitate efficient data access. What role does Random Access Memory play in all of this?  Orlando: The core function of RAM is to provide faster access to information in use. Imagine your computer or server needing to retrieve data from a long-term storage device, like a hard drive. This process can be relatively slow.  RAM acts as a temporary high-speed buffer. When your CPU or GPU needs data, it first checks RAM. If the data is there, it can be accessed almost instantaneously, significantly speeding up operations.  This rapid access to frequently used data and programming instructions is what allows applications to run smoothly and systems to respond quickly, making RAM a critical factor in overall data center performance.  While RAM provides quick access to active data, it's volatile, meaning data is lost when power is off. Persistent data storage, by contrast, holds the information that needs to remain available even after a system shuts down.   11:14 Nikita: Let’s now talk about operating systems in cloud data centers and how they help everything run smoothly.
Orlando, can you give us a quick refresher on what an operating system is, and why it is important for computing devices?  Orlando: At its core, an operating system, or OS, is the fundamental software that manages all the hardware and software resources on a computer. Think of it as a central nervous system that allows everything else to function.  It performs several critical tasks, including managing memory, deciding which programs get access to memory and when; managing processes, allocating CPU time to different tasks and applications; managing files, organizing data on storage devices; and handling input and output, facilitating communication between the computer and its peripherals, like keyboards, mice, and displays. And perhaps, most importantly, it provides the user interface that allows us to interact with the computer.  12:19 Lois: Can you give us a few examples of common operating systems?  Orlando: Common operating system examples you are likely familiar with include Microsoft Windows and macOS for personal computers, iOS and Android for mobile devices, and various distributions of Linux, which are incredibly prevalent in servers and increasingly in cloud environments.  12:41 Lois: And how are these operating systems specifically utilized within the demanding environment of cloud data centers?  Orlando: The two dominant operating systems in data centers are Linux and Windows. Linux is further categorized into enterprise distributions, such as Oracle Linux or SUSE Linux Enterprise Server, which offer commercial support and stability, and community distributions, like Ubuntu and CentOS, which are developed and maintained by communities and are often free to use.  On the other side, we have Windows, primarily represented by Windows Server, which is Microsoft's server operating system known for its robust features and integration with other Microsoft products. While both Linux and Windows are powerful operating systems, their licensing models can differ significantly, which is a crucial factor to consider when deploying them in a data center environment.  13:43 Nikita: In what way do the licensing models differ?  Orlando: When we talk about licensing, the differences between Linux and Windows become quite apparent. For Linux, Enterprise Distributions come with associated support fees, which can be bundled into the initial cost or priced separately. These fees provide access to professional support and updates. On the other hand, Community Distributions are typically free of charge, with some providers offering basic community-driven support.  Windows Server, in contrast, is a commercial product. Its license cost is generally included in the instance cost when using cloud providers or purchased directly for on-premise deployments. It's also worth noting that some cloud providers offer a bring your own license, or BYOL, program, allowing organizations to use their existing Windows licenses in the cloud, which can sometimes provide cost efficiencies.  14:46 Nikita: Beyond choosing an operating system, are there any other important aspects of data center management?  Orlando: Another critical aspect of data center management is how you remotely access and interact with your servers. Remote access is fundamental for managing servers in a data center, as you are rarely physically sitting in front of them. The two primary methods that we use are SSH, or secure shell, and RDP, or remote desktop protocol.  Secure shell is widely used for secure command-line access to Linux servers.
It provides an encrypted connection, allowing you to execute commands, transfer files, and manage your servers securely from a remote location. The remote desktop protocol is predominantly used for graphical remote access to Windows servers. RDP allows you to see and interact with the server's desktop interface, just as if you were sitting directly in front of it, making it ideal for tasks that require a graphical user interface.  15:54 Lois: Thank you so much, Orlando, for shedding light on this topic.    Nikita: Yeah, that's a wrap for today! To learn more about what we discussed, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. In our next episode, we’ll take a close look at how data is stored and managed. Until then, this is Nikita Abraham…   Lois: And Lois Houston, signing off!   16:16 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
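If you want to see the socket, core, and thread distinction from this episode on a real machine, here is a small Python sketch that parses /proc/cpuinfo. It assumes a Linux system and the x86-style "physical id" and "core id" fields; on a hyperthreaded CPU, the logical thread count should come out at roughly twice the physical core count.

```python
def cpu_topology(path: str = "/proc/cpuinfo") -> None:
    """Summarize sockets, physical cores, and logical threads (Linux, x86 fields)."""
    blocks, current = [], {}
    with open(path) as f:
        for line in f:
            if ":" in line:
                key, _, value = line.partition(":")
                current[key.strip()] = value.strip()
            elif current:  # a blank line closes one logical CPU's block
                blocks.append(current)
                current = {}
    if current:  # the file may not end with a blank line
        blocks.append(current)

    sockets = {b.get("physical id", "0") for b in blocks}
    # A physical core is identified by its (socket, core id) pair; fall back to
    # the per-block "processor" number on CPUs that don't report "core id".
    cores = {(b.get("physical id", "0"), b.get("core id", b.get("processor", "?")))
             for b in blocks}
    print(f"sockets={len(sockets)} physical_cores={len(cores)} "
          f"logical_threads={len(blocks)}")

cpu_topology()
```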
    --------  
    16:45
  • AI Across Industries and the Importance of Responsible AI
    AI is reshaping industries at a rapid pace, but as its influence grows, so do the ethical concerns that come with it.   This episode examines how AI is being applied across sectors such as healthcare, finance, and retail, while also exploring the crucial issue of ensuring that these technologies align with human values.   In this conversation, Lois Houston and Nikita Abraham are joined by Hemant Gahankari, Senior Principal OCI Instructor, who emphasizes the importance of fairness, inclusivity, transparency, and accountability in AI systems.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------- Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we spoke about how Oracle integrates AI capabilities into its Fusion Applications to enhance business workflows, and we focused on Predictive, Generative, and Agentic AI. Lois: Today, we’ll discuss the various applications of AI. This is the final episode in our AI series, and before we close, we’ll also touch upon ethical and responsible AI.  01:01 Nikita: Taking us through all of this is Senior Principal OCI Instructor Hemant Gahankari. Hi Hemant! AI is pretty much everywhere today. So, can you explain how it is being used in industries like retail, hospitality, health care, and so on?  Hemant: AI isn't just for sci-fi movies anymore. It's helping doctors spot diseases earlier and even discover new drugs faster. Imagine an AI that can look at an X-ray and say, hey, there is something sketchy here before a human even notices. Wild, right? Banks and fintech companies are all over AI. Fraud detection. AI has got it covered. Those robo advisors managing your investments? That's AI too. Ever noticed how e-commerce companies always seem to know what you want? That's AI studying your habits and nudging you towards that next purchase or binge watch. Factories are getting smarter. AI predicts when machines will fail so they can fix them before everything grinds to a halt. Less downtime, more efficiency. Everyone wins. Farming has gone high tech. Drones and AI analyze crops, optimize water use, and even help with harvesting. Self-driving cars get all the hype, but even your everyday GPS uses AI to dodge traffic jams. And if AI can save me from sitting in bumper-to-bumper traffic, I'm all for it. 02:40 Nikita: Agreed! Thanks for that overview, but let’s get into specific scenarios within each industry.  Hemant: Let us take a scenario in the retail industry-- a retail clothing line with dozens of brick-and-mortar stores. Maintaining proper inventory levels in stores and regional warehouses is critical for retailers. In this low-margin business, being out of a popular product is especially challenging during sales and promotions. 
Managers want to delight shoppers and increase sales but without overbuying. That's where AI steps in. The retailer has multiple information sources, ranging from point-of-sale terminals to warehouse inventory systems. This data can be used to train a forecasting model that can make predictions, such as a demand increase due to a holiday or planned marketing promotion, and determine the time required to acquire and distribute the extra inventory. Most ERP-based forecasting systems can produce sophisticated reports. A generative AI report writer goes further, creating custom plain-language summaries of these reports tailored for each store, instructing managers about how to maximize sales of well-stocked items while mitigating possible shortages. 04:11 Lois: Ok. How is AI being used in the hospitality sector, Hemant? Hemant: Let us take an example of a hotel chain that depends on positive ratings on social media and review websites. One common challenge they face is keeping track of online reviews, leading to missed opportunities to engage unhappy customers complaining on social media. Hotel managers don't know what's being said fast enough to address problems in real-time. Here, AI can be used to create a large data set from the tens of thousands of previously published online reviews. A textual language AI system can perform a sentiment analysis across the data to determine a baseline that can be periodically re-evaluated to spot trends. Data scientists could also build a model that correlates these textual messages and their sentiments against specific hotel locations and other factors, such as weather. Generative AI can extract valuable suggestions and insights from both positive and negative comments. 05:27 Nikita: That’s great. And what about Financial Services? I know banks use AI quite often to detect fraud. Hemant: Unfortunately, fraud can creep into any part of a bank's retail operations. Fraud can happen with online transactions, from a phone or browser, and offsite ATMs too. Without trust, banks won't have customers or shareholders. Excessive fraud and delays in detecting it can violate financial industry regulations. Fraud detection combines AI technologies, such as computer vision to interpret scanned documents, document verification to authenticate IDs like driver's licenses, and machine learning to analyze patterns. These tools work together to assess the risk of fraud in each transaction within seconds. When the system detects a high risk, it triggers automated responses, such as placing holds on withdrawals or requesting additional identification from customers, to prevent fraudulent activity and protect both the business and its clients. 06:42 Nikita: Wow, interesting. And how is AI being used in the health industry, especially when it comes to improving patient care? Hemant: Medical appointments can be frustrating for everyone involved—patients, receptionists, nurses, and physicians. There are many time-consuming steps, including scheduling, checking in, interactions with the doctors, checking out, and follow-ups. AI can fix this problem by using electronic health records to analyze lab results, paper forms, scans, and structured data, summarizing insights for doctors with the latest research and patient history. This helps practices reduce costs, boost earnings, and deliver faster, more personalized care. 07:32 Lois: Let’s take a look at one more industry. How is manufacturing using AI?
Hemant: A factory that makes metal parts and other products uses both visual inspections and electronic means to monitor product quality. A part that fails to meet the requirements may be reworked or repurposed, or it may need to be scrapped. The factory seeks to maximize profits and throughput by shipping as much good material as possible, while minimizing waste by detecting and handling defects early. The way AI can help here is with the quality assurance process, which creates X-ray images. This data can be interpreted by computer vision, which can learn to identify cracks and other weak spots, after being trained on a large data set. In addition, problematic or ambiguous data can be highlighted for human inspectors. 08:36 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest tech. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 09:20 Nikita: Welcome back! AI can be used effectively to automate a variety of tasks to improve productivity, efficiency, and cost savings. But I’m sure AI has its constraints too, right? Can you talk about what happens if AI isn’t able to echo human ethics?  Hemant: AI can fail due to a lack of ethics.  AI can spot patterns, not make moral calls. It doesn't feel guilt, understand context, or take responsibility. That is still up to us.  Decisions are only as good as the data behind them. For example, health care AI underdiagnosing women because research data was mostly male. Artificial narrow intelligence tends to automate discrimination at scale. Recruiting AI downgraded resumes just because they contained the word "women's" (for example, women's chess club). Who is responsible when AI fails? For example, if a self-driving car hits someone, we cannot blame the car. Then who owns the failure? The programmer? The CEO? Can we really trust corporations or governments to have correctly programmed AI not to be evil? So, it's clear that AI needs oversight to function smoothly. 10:48 Lois: So, Hemant, how can we design AI in ways that respect and reflect human values? Hemant: Think of ethics like a tree. It needs all parts working together. Roots represent intent. That is our values and principles. The trunk stands for safeguards, our systems, and structures. And the branches are the outcomes we aim for. If the roots are shallow, the tree falls. If the trunk is weak, damage seeps through. The health of roots and trunk shapes the strength of our ethical outcomes. Fairness means nothing without ethical intent behind it. For example, a bank promotes its loan algorithm as fair. But it uses zip codes in decision-making, effectively penalizing people based on race. That's not fairness. That's harm disguised as data. Inclusivity depends on the intent: sustainability. Inclusive design isn't just a check box. It needs a long-term commitment. For example, controllers for gamers with disabilities are only possible because of sustained R&D and intentional design choices. Without investment in inclusion, accessibility is left behind. Transparency depends on the safeguard: robustness. Transparency is only useful if the system is secure and resilient.
For example, a medical AI may be explainable, but if it is vulnerable to hacking, transparency won't matter. Accountability depends on the safeguards: privacy and traceability. You can't hold people accountable if there is no trail to follow. For example, after a fatal self-driving car crash, deleted system logs meant no one could be held responsible. Without auditability, accountability collapses. So remember, outcomes are what we see, but they rely on intent to guide priorities and safeguards to support execution. That's why humans must have a final say. AI has no grasp of ethics, but we do. 13:16 Nikita: So, what you’re saying is ethical intent and robust AI safeguards need to go hand in hand if we are to truly leverage AI we can trust. Hemant: When it comes to AI, preventing harm is a must. Take self-driving cars, for example. Keeping pedestrians safe is absolutely critical, which means the technology has to be rock solid and reliable. At the same time, fairness and inclusivity can't be overlooked. If an AI system used for hiring learns from biased past data, say, mostly male candidates being hired, it can end up repeating those biases, shutting out qualified candidates unfairly. Transparency and accountability go hand in hand. Imagine a loan rejection: if the AI's decision isn't clear or explainable, it becomes impossible for someone to challenge or understand why they were turned down. And of course, robustness supports fairness too. Loan approval systems need strong security to prevent attacks that could manipulate decisions and undermine trust.  We must build AI that reflects human values and has safeguards. This makes sure that AI is fair, inclusive, transparent, and accountable.  14:44 Lois: Before we wrap, can you talk about why AI can fail? Let’s continue with your analogy of the tree. Can you explain how AI failures occur and how we can address them? Hemant: Root elements like "do not harm" and sustainability are fundamental to ethical AI development. When these roots fail, the consequences can be serious. For example, a clear failure of "do not harm" is AI-powered surveillance tools misused by authoritarian regimes. This happens because there were no ethical constraints guiding how the technology was deployed. The solution is clear-- implement strong ethical use policies and conduct human rights impact assessments to prevent such misuse. On the sustainability front, training AI models can consume massive amounts of energy. This failure occurs because environmental costs are not considered. To fix this, organizations are adopting carbon-aware computing practices to minimize AI's environmental footprint. By addressing these root failures, we can ensure AI is developed and used responsibly with respect for human rights and the planet. An example of a robustness failure can be a chatbot hallucinating nonexistent legal precedents used in court filings. This could be due to training on unverified internet data and no fact-checking layer. This can be fixed by grounding in authoritative databases. An example of a privacy failure can be an AI facial recognition database created without user consent. The reason being no consent was taken for data collection. This can be fixed by adopting privacy-preserving techniques. An example of a fairness failure can be generated images of CEOs as white men and nurses as women or minorities. The reason being training on imbalanced internet images reflecting societal stereotypes. And the fix is to use a diverse set of images.
17:18 Lois: I think this would be incomplete if we don’t talk about inclusivity, transparency, and accountability failures. How can they be addressed, Hemant? Hemant: An example of an inclusivity failure can be a voice assistant not understanding accents. The reason being training data lacked diversity. And the fix is to use inclusive data. An example of a transparency and accountability failure can be teachers being unable to challenge AI-generated performance scores due to opaque calculations. The reason being no explainability tools were used. The fix: high-impact AI needs human review pathways and explainability built in. 18:04 Lois: Thank you, Hemant, for a fantastic conversation. We got some great insights into responsible and ethical AI. Nikita: Thank you, Hemant! If you’re interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the AI for You course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 18:26 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
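As a deliberately tiny illustration of the fairness checks Hemant alludes to: one common screen compares selection rates across groups and flags ratios below 0.8, the so-called four-fifths rule used in US adverse-impact analysis. The Python sketch below runs that check on made-up hiring data; it is not how any particular product implements fairness auditing.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs; returns rate per group."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        hits[group] += bool(selected)
    return {g: hits[g] / totals[g] for g in totals}

# Toy hiring data: (applicant group, was the resume shortlisted?)
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = selection_rates(data)
best = max(rates.values())
for group, rate in rates.items():
    # The four-fifths rule flags ratios below 0.8 as potential adverse impact.
    ratio = rate / best
    flag = "  <-- review for bias" if ratio < 0.8 else ""
    print(f"group {group}: selection rate {rate:.2f}, ratio {ratio:.2f}{flag}")
```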
    --------  
    18:55
  • Oracle AI for Fusion Apps
Want to make AI work for your business? In today’s episode, Lois Houston and Nikita Abraham continue their discussion of AI in Oracle Fusion Applications by focusing on three key AI capabilities: predictive, generative, and agentic.   Joining them is Principal Instructor Yunus Mohammed, who explains how predictive, generative, and agentic AI can optimize efficiency, support decision-making, and automate tasks—all without requiring technical expertise.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   ------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! In our last episode, we explored the essential components of the Oracle AI stack and spoke about Oracle’s suite of AI services.  Nikita: Yeah, and in today’s episode, we’re going to go down a similar path and take a closer look at the AI functionalities within Oracle Fusion Applications. 00:53 Lois: With us today is Principal Instructor Yunus Mohammed. Hi Yunus! It’s lovely to have you back with us. For anyone who doesn’t already know, what are Oracle Fusion Cloud Applications?  Yunus: Oracle Fusion Applications are a suite of cloud-based enterprise applications designed to run your business across finance, HR, supply chain, sales, services, and more, all on a unified platform. They are designed to help enterprises operate smarter and faster by embedding AI directly into business processes. That means better forecasts in finance, faster hiring decisions in HR, optimized supply chains, and more personalized customer experiences.  01:42 Nikita: And we know they’ve been built for today's fast-paced, AI-driven business environment. So, what are the different functional pillars within Oracle Fusion Apps? Yunus: The first one is ERP, Enterprise Resource Planning, which supports financials, procurement, and project management. It's the backbone of many organizations' day-to-day operations. HCM, or Human Capital Management, handles workforce-related processes such as hiring, payroll, performance, and talent development, helping HR teams operate more efficiently. SCM, or Supply Chain Management, enables businesses to manage their logistics, inventory, suppliers, and manufacturers. It's particularly critical in industries with complex operations like retail and manufacturing. CX, or Customer Experience, covers the full customer life cycle, which includes sales, marketing, and service. These modules help businesses connect with their customers more personally and proactively, whether through targeted campaigns or responsive support.  03:02 Lois: Yunus, what sets Fusion apart? Yunus: What sets Fusion apart is how these applications work seamlessly together.
They share data natively and continuously improve with AI and automation, giving you not just tools, but intelligence at scale.  Oracle applications are built to be AI-first, with a complete suite of finance, supply chain, manufacturing, HR, sales, service, and marketing applications that is tightly coupled with our industry and data intelligence applications. The easiest and most effective way to start building your organization’s AI muscle is with the AI embedded in Fusion Applications. For example, if a customer needs to return a defective product, the service representative simply clicks on Ask Oracle for the answers. Since the AI agent is embedded in the application, it has contextual information about the customer, the order, and any special service contract or other detail required for this process. The AI agent automatically figures out the return policy, including the options to send a replacement product immediately or offer a discount for the inconvenience, and to expedite shipping. Another AI agent sends a personalized email confirming the details of the return, and a different AI agent creates the replacement order for fulfillment and shipping. Our AI-embedded Fusion Applications can automate an end-to-end business process, from service request to return order to fulfillment, shipping, and then accounting.  These are pre-built and tested, so all the worry and hard work of implementation is removed. They cover the core workflows, addressing tasks that form part of the organization's core workflow, and users require no technical knowledge in these scenarios.  05:16 Lois: That’s great! So, you don’t need to be an AI expert or a data scientist to get going. Yunus: The outcomes are super fast in business software, and context is everything. Just having the right information isn't enough. This is about having the information in the right place at the right time for it to be instantly actionable. They are ready from day one and can be optimized over time. They are powerful out of the box and only get better with day-to-day processes and performance.
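To picture how such an embedded agent flow hangs together, here is a minimal Python sketch of the return scenario described above. It is an illustration only: every function name here is hypothetical, and this is not Oracle’s actual implementation or API.

    from dataclasses import dataclass

    @dataclass
    class ReturnRequest:
        customer_id: str
        order_id: str
        reason: str

    def lookup_return_policy(order_id):
        # Hypothetical stand-in for the contextual lookup the embedded agent
        # performs against customer, order, and service-contract data.
        return {"replace_immediately": True, "discount_pct": 10, "expedite": True}

    def create_replacement_order(order_id, expedite):
        # Hypothetical agent action: creates the replacement order for fulfillment.
        print(f"Replacement order created for {order_id}, expedited={expedite}")

    def send_confirmation_email(customer_id, policy):
        # Hypothetical agent action: personalized email confirming return details.
        print(f"Confirmation email sent to {customer_id}: discount {policy['discount_pct']}%")

    def handle_return(request):
        # The agent wiring: look up policy, act, and confirm, end to end.
        policy = lookup_return_policy(request.order_id)
        if policy["replace_immediately"]:
            create_replacement_order(request.order_id, policy["expedite"])
        send_confirmation_email(request.customer_id, policy)

    handle_return(ReturnRequest("C-1001", "ORD-42", "defective on arrival"))

The point of the sketch is the orchestration pattern: one contextual lookup feeding several coordinated actions, with no manual steps in between.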
05:55 Are you working towards an Oracle Certification this year? Join us at one of our certification prep live events in the Oracle University Learning Community. Get insider tips from seasoned experts and learn from others who have already taken their certifications. Go to community.oracle.com/ou to jump-start your journey towards certification today!  06:20 Nikita: Welcome back! So, when we talk about the AI capabilities in Fusion apps, I know we have different types. Can you tell us more about them?  Yunus: Predictive AI is where it all started. These models analyze historical patterns and data to anticipate what might happen next. For example, predicting employee attrition, forecasting demand in the supply chain, or flagging potential late payments in finance workflows. These are embedded into business processes to surface insights before action is needed. Then we have generative AI, which takes this a step further. Instead of just providing insights, it creates content, such as auto-generating job descriptions, summarizing performance reviews, or even crafting draft responses to supplier queries. This saves time and boosts productivity across functions like HR, CX, and procurement. Last but not least, we have agentic AI, which is the most advanced layer. These agents don't just provide suggestions, they take actions on behalf of the users. Think of an agent that not only recommends actions in a workflow, but also executes them, creating tasks, filing tickets, updating systems, and communicating with stakeholders, all autonomously but under user control. And importantly, many business scenarios today benefit from a blend of these types. For example, an AI assistant in Fusion HCM might predict employee turnover (predictive AI), generate tailored retention plans (generative AI), and initiate outreach or next steps (agentic AI). So, Oracle integrates these capabilities in a harmonious way, enabling users to act faster, personalize at scale, and drive better business outcomes.
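That predict-generate-act blend can be pictured as a short pipeline. The following Python sketch is purely illustrative: the scoring heuristic, the plan text, and all function names are hypothetical stand-ins for a trained model, a language model, and a workflow engine, not how Fusion HCM is actually built.

    def predict_turnover_risk(employee):
        # Predictive step (hypothetical): a real system would use a trained
        # model over historical patterns, not this toy heuristic.
        return 0.8 if employee["engagement"] < 0.4 else 0.2

    def draft_retention_plan(employee):
        # Generative step (hypothetical): in practice an LLM would draft this.
        return f"Retention plan for {employee['name']}: mentorship, growth path, compensation review."

    def initiate_outreach(employee, plan):
        # Agentic step (hypothetical): creates a task and notifies the manager.
        print(f"Task created for {employee['manager']}: discuss retention with {employee['name']}")
        print(plan)

    def retention_pipeline(employee):
        if predict_turnover_risk(employee) > 0.7:   # predictive
            plan = draft_retention_plan(employee)   # generative
            initiate_outreach(employee, plan)       # agentic

    retention_pipeline({"name": "Ana", "manager": "Raj", "engagement": 0.3})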
08:39 Lois: Ok, let’s get into the specifics. How does Oracle use predictive AI across its Fusion apps, helping businesses anticipate what’s coming and act proactively?  Yunus: So in HCM, take something like recommended jobs: candidates visiting a potential employer’s website encounter an improved online experience whereby, if they have uploaded their resumes, they are shown job opportunities that match their mix of skills and experience. This helps candidates who are unsure what to search for by showing them roles and titles they may not have considered. Time to hire provides an estimate of how long it will take an HR team to fill an open role. This is really useful not only for planning recruitment, but also for understanding whether you might need some temporary cover and how long the process will actually take to complete. In supply chain management, predictive AI is leveraged to predict transit times and estimated times of arrival, enhancing efficiency and optimizing operations. It can flag abnormal patterns in supply or inventory, for example, if a batch of parts is behaving differently on the production line, and it can predict future demand, helping avoid overstocking or stockouts. In ERP, you can audit your expenses, plan for future expenses, and offer dynamic discounting to vendors who are likely to accept early-payment discounts. It can also speed up reimbursements through automated expense entries. In CX, you have adaptive intelligence for sales, which helps representatives prioritize leads by the likelihood that a specific lead will close, helping them focus their time and effort. And predictive scheduling and routing in service delivery, which includes capabilities like fatigue analysis, ensures that the right resource is assigned to the right customer at the right time, boosting operational efficiency and customer satisfaction. 11:23 Lois: Now let’s shift our focus to generative AI. How does Oracle implement generative AI across HCM, ERP, Supply Chain, and CX? Yunus: So, in HCM, generative AI can automatically generate performance review summaries from raw data, saving time for HR teams, and can provide candidates with summaries of their interview process, feedback, and next steps, all auto-generated. With AI assistance, goal creation for employees can be automated: the system analyzes performance data and trends to propose meaningful and attainable goals, aligning them with organizational objectives and employee capabilities. In SCM, similarly, generative AI helps with drafting summaries of purchase orders: it can automatically create clear, readable synopses, and complex negotiations and discussions can be summarized, making it easier for supply chain managers to analyze supplier proposals, track negotiation processes, and understand key takeaways. It can also generate item master descriptions, producing descriptions for items based on their specifications and helping product teams automatically generate catalog content. In ERP, you can automate the creation of business reports, offering insights and actionable narratives rather than just raw data; the AI can provide context, interpretation, and recommendations. AI can also take raw project data and generate a comprehensive, easy-to-read project status report that stakeholders can quickly review. In CX, we have service request summarization, which provides concise summaries of long customer service tickets, allowing support teams to understand the key points in a fraction of the time, and generative AI can create knowledge base articles directly from common service requests or inquiries, which not only improves internal knowledge management but also empowers customers by enabling self-service. It can even automatically generate success stories or case studies from successful opportunities or sales, which can be used as marketing content or for internal knowledge sharing.
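As a concrete illustration of that summarization pattern, here is a minimal Python sketch. The call_llm function is a hypothetical placeholder for whatever large language model a real system would invoke; the fake response below just trims the ticket text so the example runs on its own.

    def call_llm(prompt):
        # Hypothetical placeholder for a real LLM call. Here we fake a
        # response by returning the first two sentences of the ticket.
        body = prompt.split("---", 1)[1]
        sentences = [s.strip() for s in body.split(".") if s.strip()]
        return ". ".join(sentences[:2]) + "."

    def summarize_service_request(ticket_text):
        # Builds a summarization prompt and delegates to the (stubbed) model.
        prompt = ("Summarize the key points of this service request "
                  "in two sentences.\n---" + ticket_text)
        return call_llm(prompt)

    ticket = ("Customer reports that the replacement device arrived damaged. "
              "The original return was approved last week. "
              "Customer requests expedited shipping on a second replacement.")
    print(summarize_service_request(ticket))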
14:20 Nikita: And what about Oracle's Agentic AI? What are its capabilities across the different pillars? Yunus: In HCM, agentic AI handles the end-to-end onboarding experience, from explaining policies to guiding document submissions, even booking orientation sessions, allowing HR staff to focus on human engagement. It can further support HR teams during performance review cycles by surfacing high-potential employees, pulling in performance data, and recommending next actions like promotions or learning paths. It helps manage time-off requests by checking eligibility and policy constraints and suggesting appropriate substitutes, reducing administrative friction and errors. In SCM, agentic AI in Fusion Applications acts as a real-time assistant to ensure buyers follow procurement policies, reducing compliance risk and manual errors. It can also support sales representatives with real-time insights and next best actions during the quoting or ordering process, improving customer satisfaction and sales performance. In ERP, it can handle document intake, extraction, and routing, saving significant time on manual document management across financial functions. AI automates reconciliation tasks by matching transactions, flagging anomalies, and suggesting resolutions, helping reduce close-cycle timelines. It also continuously analyzes profit margins and recommends pricing adjustments in your ERP. In CX, agentic AI in Fusion Applications supports staff by instantly compiling full customer histories, orders, service requests, and interactions. It can act like a real-time assistant, summarizing open tickets and resolutions, helping agents take over or escalate without needing to dig through the notes, and dynamically adjusting technician schedules and routes based on traffic, priority, or cancellations, increasing field efficiency and customer satisfaction. 17:04 Lois: Thank you so much, Yunus. To learn more about the topics covered today, visit mylearn.oracle.com and search for the AI for You course. Nikita: Join us next week as we cover how AI is being applied across sectors like healthcare, finance, and retail, and tackle the big question: how do we keep these technologies aligned with human values? Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 17:30 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
    --------  
    18:00

About Oracle University Podcast

Oracle University Podcast delivers convenient, foundational training on popular Oracle technologies such as Oracle Cloud Infrastructure, Java, Autonomous Database, and more to help you jump-start or advance your career in the cloud.