
Data Science Decoded

Mike E

Available Episodes

5 of 33
  • Data Science #34 - The deep learning original paper review, Hinton, Rumelhart & Williams (1985)
    On the 34th episode, we review the 1986 paper "Learning representations by back-propagating errors," which was pivotal because it provided a clear, generalized framework for training neural networks with internal "hidden" units. The core of the procedure, back-propagation, repeatedly adjusts the weights of connections in the network to minimize the error between the actual and desired output vectors. Crucially, this process forces the hidden units, whose desired states aren't specified, to develop distributed internal representations of the important features of the task domain. This capability to construct useful new features distinguishes back-propagation from earlier, simpler methods like the perceptron-convergence procedure. The authors demonstrate its power on non-trivial problems, such as detecting mirror symmetry in an input vector and storing information about isomorphic family trees. By showing how the network generalizes correctly from one family tree to its Italian equivalent, the paper illustrated the algorithm's ability to capture the underlying structure of the task domain. Despite recognizing that the procedure was not guaranteed to find a global minimum due to local minima in the error surface, the paper's clear formulation (using equations 1-9) and its successful demonstration of learning complex, non-linear representations served as a powerful catalyst. It fundamentally advanced the field of connectionism and became the standard, foundational algorithm used today to train multi-layered networks, or deep learning models, despite the earlier, lesser-known work by Werbos. A minimal code sketch of the weight-update loop appears below the episode list.
    --------  
    46:37
  • Data Science #33 - The Backpropagation method, Paul Werbos (1980)
    On the 33rd episode we review Paul Werbos's "Applications of Advances in Nonlinear Sensitivity Analysis," which presents efficient methods for computing derivatives in nonlinear systems, drastically reducing computational costs for large-scale models (Werbos, Paul J. "Applications of advances in nonlinear sensitivity analysis." System Modeling and Optimization: Proceedings of the 10th IFIP Conference, New York City, USA, August 31–September 4, 1981). These methods, especially the backward differentiation technique, enable better sensitivity analysis, optimization, and stochastic modeling across economics, engineering, and artificial intelligence. The paper also introduces Generalized Dynamic Heuristic Programming (GDHP) for adaptive decision-making in uncertain environments. Its importance to modern data science lies in laying the foundation for backpropagation, the core algorithm behind training neural networks. Werbos's work bridged traditional optimization and today's AI, influencing machine learning, reinforcement learning, and data-driven modeling. A sketch of the backward differentiation idea appears below the episode list.
    --------  
    57:45
  • Data Science #32 - A Markovian Decision Process, Richard Bellman (1957)
    We reviewed Richard Bellman's "A Markovian Decision Process" (1957), which introduced a mathematical framework for sequential decision-making under uncertainty. By connecting recurrence relations to Markov processes, Bellman showed how current choices shape future outcomes and formalized the principle of optimality, laying the groundwork for dynamic programming and the Bellman equation. This paper is directly relevant to reinforcement learning and modern AI: it defines the structure of Markov Decision Processes (MDPs), which underpin algorithms like value iteration, policy iteration, and Q-learning. From robotics to large-scale systems like AlphaGo, nearly all of RL traces back to the foundations Bellman set in 1957. A value-iteration sketch built on the Bellman equation appears below the episode list.
    --------  
    46:05
  • Data Science #31 - Correlation and causation (1921), Sewall Wright
    On the 31st episode of the podcast we welcome Liron to the team and review a gem from 1921, in which Sewall Wright introduced path analysis, mapping hypothesized causal arrows into simple diagrams and proving that any sample correlation can be written as the sum of products of "path coefficients." By treating each arrow as a standardised regression weight, he showed how to split the variance of an outcome into direct, indirect, and joint pieces, then solve for unknown paths from an ordinary correlation matrix—turning the slogan "correlation ≠ causation" into a workable calculus for observational data. Wright's algebra and diagrams became the blueprint for modern graphical causal models, structural-equation modelling, and DAG-based inference that power libraries such as DoWhy, Pyro and CausalNex. The same logic underlies feature-importance decompositions, counterfactual A/B testing, fairness audits, and explainable-AI tooling, making a century-old livestock-breeding study a foundation stone of present-day data-science and AI practice. A small numerical sketch of the path-tracing rule appears below the episode list.
    --------  
    48:11
  • Data Science #30 - The Bootstrap Method (1977)
    In the 30th episode we review the bootstrap method, introduced by Bradley Efron in 1979: a non-parametric resampling technique that approximates a statistic's sampling distribution by repeatedly drawing with replacement from the observed data, allowing estimation of standard errors, confidence intervals, and bias without relying on strong distributional assumptions. Its ability to quantify uncertainty cheaply and flexibly underlies many staples of modern data science and AI, powering model evaluation and feature stability analysis, inspiring ensemble methods like bagging and random forests, and informing uncertainty calibration for deep-learning predictions—thereby making contemporary models more reliable and robust. Efron, B. "Bootstrap methods: Another look at the jackknife." The Annals of Statistics 7.1 (1979): 1-26. A minimal resampling sketch appears below the episode list.
    --------  
    41:05
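
Illustrative code sketches for the episodes above

For episode #34, a minimal NumPy sketch of the weight-update loop described in the summary: a forward pass computes activations, the output error is propagated backwards through the layers, and the connection weights are nudged downhill. The XOR task, layer sizes, learning rate, and epoch count are illustrative choices, not details taken from the paper.

# Minimal backpropagation sketch: a 2-layer sigmoid network learning XOR.
# Architecture, learning rate, and epoch count are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input vectors
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired output vectors

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random initial weights: input -> hidden and hidden -> output connections.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))
lr = 2.0

for epoch in range(10000):
    # Forward pass: compute hidden and output activations.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the output error back through the network.
    err = out - y                       # derivative of squared error w.r.t. output (up to a factor)
    d_out = err * out * (1 - out)       # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)  # delta at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

# Typically converges to approximately [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))

Nothing in the data specifies desired values for the hidden layer; the updates alone drive it toward an internal encoding that makes XOR separable for the output unit, which is the "distributed internal representation" point in the summary.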
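For episode #33, a sketch of the backward differentiation idea that makes Werbos's approach cheap: one forward sweep evaluates and stores intermediate quantities, then one backward sweep accumulates derivatives of the output with respect to every input at once. The composite function below is invented for illustration and is not an example from the paper.

# Reverse-mode (backward) differentiation on a small made-up composite function:
#   f(x1, x2) = sin(x1 * x2) + x1**2
# One forward sweep records intermediates; one backward sweep accumulates adjoints,
# giving df/dx1 and df/dx2 together, however many inputs there are.
import math

def f_and_grad(x1, x2):
    # Forward sweep: evaluate and store intermediate quantities.
    a = x1 * x2          # a = x1 * x2
    b = math.sin(a)      # b = sin(a)
    c = x1 ** 2          # c = x1^2
    f = b + c            # output

    # Backward sweep: start from df/df = 1 and apply the chain rule in reverse.
    f_bar = 1.0
    b_bar = f_bar * 1.0                   # df/db
    c_bar = f_bar * 1.0                   # df/dc
    a_bar = b_bar * math.cos(a)           # df/da = df/db * db/da
    x1_bar = a_bar * x2 + c_bar * 2 * x1  # df/dx1 via both paths (through a and through c)
    x2_bar = a_bar * x1                   # df/dx2
    return f, (x1_bar, x2_bar)

# Check against finite differences (an independent brute-force estimate).
val, grad = f_and_grad(1.3, 0.7)
eps = 1e-6
fd1 = (f_and_grad(1.3 + eps, 0.7)[0] - val) / eps
fd2 = (f_and_grad(1.3, 0.7 + eps)[0] - val) / eps
print(grad, (fd1, fd2))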
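For episode #32, a minimal value-iteration sketch that applies the Bellman optimality recursion, V(s) = max_a [R(s,a) + γ Σ_{s'} P(s'|s,a) V(s')], until it reaches a fixed point. The two-state, two-action MDP (its transition matrix and rewards) is invented for illustration; only the recursion itself follows Bellman's framework.

# Value iteration on a tiny made-up MDP (2 states, 2 actions), illustrating the
# Bellman recursion: V(s) = max_a [ R[s,a] + gamma * sum_s' P[s,a,s'] * V(s') ].
import numpy as np

gamma = 0.9

# P[s, a, s'] = transition probability; R[s, a] = expected immediate reward.
P = np.array([
    [[0.8, 0.2], [0.1, 0.9]],   # from state 0 under actions 0 and 1
    [[0.5, 0.5], [0.0, 1.0]],   # from state 1 under actions 0 and 1
])
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])

V = np.zeros(2)
for _ in range(1000):
    # Q[s, a] = one-step reward plus discounted expected value of the successor state.
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:   # stop once the fixed point is reached
        break
    V = V_new

policy = Q.argmax(axis=1)
print("V* =", np.round(V, 3), "optimal actions:", policy)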
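For episode #31, a small numerical sketch of Wright's path-tracing rule on an invented diagram Z → X → Y with an additional direct arrow Z → Y: with standardized variables, the implied correlation between Z and Y is the sum, over paths connecting them, of the products of the path coefficients along each path. The coefficient values are arbitrary.

# Wright's path-tracing rule on an invented diagram:  Z -> X -> Y  plus a direct  Z -> Y.
# With standardized variables, the implied correlation is the sum over paths of the
# products of path coefficients:  r_ZY = c + a*b  (direct path + indirect path through X).
import numpy as np

a = 0.6   # path coefficient Z -> X
b = 0.5   # path coefficient X -> Y
c = 0.2   # path coefficient Z -> Y (direct)

r_zy_implied = c + a * b
print("implied r_ZY:", r_zy_implied)

# Simulation check: generate standardized data consistent with the diagram.
rng = np.random.default_rng(1)
n = 200_000
z = rng.normal(size=n)
x = a * z + np.sqrt(1 - a**2) * rng.normal(size=n)        # keeps Var(X) = 1
ey_var = 1 - (b**2 + c**2 + 2 * a * b * c)                # residual variance keeping Var(Y) = 1
y = b * x + c * z + np.sqrt(ey_var) * rng.normal(size=n)
print("simulated r_ZY:", np.corrcoef(z, y)[0, 1])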
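For episode #30, a minimal bootstrap sketch: draw resamples of the observed data with replacement, recompute the statistic on each resample, and use the spread of those replicates to estimate the statistic's standard error and a percentile confidence interval. The data, the statistic (the median), and the replicate count are all arbitrary illustrative choices.

# Bootstrap estimate of the standard error and a 95% percentile confidence interval
# for the sample median. Data, statistic, and replicate count are illustrative.
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=50)   # a small, skewed made-up sample

n_boot = 10_000
replicates = np.empty(n_boot)
for i in range(n_boot):
    resample = rng.choice(data, size=data.size, replace=True)  # draw with replacement
    replicates[i] = np.median(resample)                        # recompute the statistic

se = replicates.std(ddof=1)                                # bootstrap standard error
ci_low, ci_high = np.percentile(replicates, [2.5, 97.5])   # percentile interval
print(f"median = {np.median(data):.3f}, SE ≈ {se:.3f}, 95% CI ≈ [{ci_low:.3f}, {ci_high:.3f}]")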


About Data Science Decoded

We discuss seminal mathematical papers (sometimes really old 😎) that have shaped and established the fields of machine learning and data science as we know them today. The goal of the podcast is to introduce you to the evolution of these fields from a mathematical and slightly philosophical perspective. We will discuss the contribution of these papers, not just from a pure math aspect but also how they influenced the discourse in the field, which areas were opened up as a result, and so on. Our podcast episodes are also available on our YouTube channel: https://youtu.be/wThcXx_vXjQ?si=vnMfs
Podcast website


