Codects

Posts on Python

  1. Stress-Testing a Tiny Neural Network

    We often treat machine learning models as mathematical abstractions: pure functions that map inputs to outputs. We assume that if the math is correct, the system is secure. But models don’t exist in a vacuum; they run on imperfect hardware, rely on approximate floating-point arithmetic, and execute within physical constraints. I wanted to understand exactly how fragile these implicit assumptions are. My goal wasn’t to build a high-performance classifier or to learn the basics of deep learning; it was to see what happens to a tiny network when those assumptions break. (A minimal perturbation sketch follows this list.)
  2. Estimating π with the Monte Carlo Method

    Exploring Monte Carlo simulations has always intrigued me because of their real-world applications in areas like physics, finance, and artificial intelligence. And what better place to start than with estimating \(\pi\)? This seemingly abstract number, 3.14159…, holds a special place in mathematics and everyday life, and Monte Carlo simulations give us an intuitive, probability-based approach to approximating it. Here it is: the journey from the math to the code, with everything in between! (The core estimator is sketched after this list.)
  3. Bayesian Inference: A Modern Approach to Uncertainty

    In the world of data science, dealing with uncertainty is a constant challenge. Predicting outcomes, modeling trends, or making decisions based on incomplete data all boil down to one thing: how confident are we in the predictions we make? This is where Bayesian Inference comes into play. It provides a framework for updating our beliefs about a hypothesis as we gather new evidence, allowing us to model uncertainty in a probabilistic and flexible manner. (A tiny worked update follows this list.)
  4. Exploring Bias and Fairness in Machine Learning Models

    Machine learning models are shaping our world, from recommending what movies to watch next, to determining credit scores, to screening job candidates. At first glance, these algorithms seem like objective decision-makers, built to remove human error and bias. But as we’ve seen time and again, algorithms are only as unbiased as the data they’re trained on. And if there’s one thing we know about historical data, it’s that it often carries the weight of societal prejudices and inequalities. (A quick demographic-parity check follows this list.)
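
For the stress-testing post, here is a minimal sketch, assuming a tiny hand-wired NumPy network; the layer sizes, weights, and perturbation size are all illustrative, not the post's actual setup. It nudges a single weight, as a stand-in for a hardware-level fault, and measures how far the output drifts.

```python
import numpy as np

def tiny_net(x, W1, W2):
    """Two-layer network: ReLU hidden layer, linear output."""
    h = np.maximum(0.0, W1 @ x)  # hidden activations
    return W2 @ h                # raw output score

rng = np.random.default_rng(0)
x = rng.normal(size=4)           # one random input vector
W1 = rng.normal(size=(8, 4))     # illustrative weights, not trained
W2 = rng.normal(size=(1, 8))

baseline = tiny_net(x, W1, W2)

# Stand-in for a hardware fault: nudge a single weight by a tiny amount.
W1_faulty = W1.copy()
W1_faulty[0, 0] += 1e-3

perturbed = tiny_net(x, W1_faulty, W2)
print("baseline :", baseline)
print("perturbed:", perturbed)
print("drift    :", np.abs(perturbed - baseline))
```

Even a nudge this small, well above machine epsilon but invisible to the eye, already shifts the score; real fault-injection experiments go further and flip individual bits.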
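For the \(\pi\) post, the classic estimator fits in a few lines: sample points uniformly in the unit square, and the fraction landing inside the quarter circle approximates \(\pi/4\).

```python
import random

def estimate_pi(n_samples: int) -> float:
    """Estimate pi by sampling points in the unit square."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:  # point lies inside the quarter circle
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(1_000_000))  # usually lands near 3.141
```

Monte Carlo error shrinks like \(1/\sqrt{n}\), so each extra digit of precision costs roughly a hundredfold more samples.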
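For the Bayesian inference post, a tiny worked update, using an illustrative beta-binomial coin example rather than necessarily the post's own model: because the Beta prior is conjugate to the binomial likelihood, updating on new evidence is just adding counts.

```python
# Beta(a, b) prior over a coin's heads-probability; observing h heads
# and t tails yields a Beta(a + h, b + t) posterior.
a, b = 1.0, 1.0   # Beta(1, 1) = uniform prior: no initial opinion
h, t = 7, 3       # hypothetical data: 7 heads, 3 tails out of 10 flips

a_post, b_post = a + h, b + t
posterior_mean = a_post / (a_post + b_post)
print(f"posterior: Beta({a_post:g}, {b_post:g}), mean = {posterior_mean:.3f}")
```

Conjugacy is what keeps this a two-line update; non-conjugate models need sampling or variational methods, which is where the heavier machinery comes in.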
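For the bias and fairness post, a sketch of one common audit, demographic parity: compare the rate of positive outcomes across groups. The decisions and group labels below are made up for illustration.

```python
# Hypothetical decisions (1 = positive outcome) and group labels.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def positive_rate(group: str) -> float:
    """Share of positive outcomes among members of the given group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = positive_rate("A"), positive_rate("B")
print(f"group A: {rate_a:.2f}  group B: {rate_b:.2f}  gap: {abs(rate_a - rate_b):.2f}")
```

A gap on its own doesn't prove discrimination, but it is exactly the kind of disparity such audits are designed to surface.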