
Posts on Machine Learning

  1. Stress-Testing a Tiny Neural Network

    We often treat machine learning models as mathematical abstractions: pure functions that map inputs to outputs. We assume that if the math is correct, the system is secure. But models don’t exist in a vacuum; they run on imperfect hardware, rely on approximate floating-point arithmetic, and execute within physical constraints. I wanted to understand exactly how fragile these implicit assumptions are. My goal wasn’t to build a high-performance classifier or to learn the basics of deep learning.
  2. Exploring Bias and Fairness in Machine Learning Models

    Machine learning models are shaping our world, from recommending what movies to watch next, to determining credit scores, to screening job candidates. At first glance, these algorithms seem like objective decision-makers, built to remove human error and bias. But as we’ve seen time and again, algorithms are only as unbiased as the data they’re trained on. And if there’s one thing we know about historical data, it’s that it often carries the weight of societal prejudices and inequalities.
  3. Understanding Convolutional Neural Networks (CNNs)

    If you’ve ever wondered how machines are able to “see” things in images or videos, then Convolutional Neural Networks (CNNs) are one of the key technologies behind that magic. CNNs have become the backbone of many modern computer vision tasks, from identifying objects in images to powering facial recognition software. In this Codects installment, I’ll explain the inner workings of CNNs, their structure, how they process images, and the mathematics behind them.
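As a small taste of the operation that installment unpacks, here is a minimal sketch of the sliding-window convolution a single CNN layer performs on a grayscale image. This is an illustrative assumption on my part, not code from the post: the `conv2d` helper and the 3×3 edge-detection kernel are hypothetical, and, like most CNN libraries, it computes the unflipped ("cross-correlation") form of convolution.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a grayscale image with a kernel.

    Note: like most deep learning libraries, this does not flip the kernel,
    so strictly speaking it computes cross-correlation.
    """
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Multiply the kernel element-wise with the patch under it, then sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic 3x3 vertical-edge kernel (Sobel-style), applied to a random "image".
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])
image = np.random.rand(8, 8)
print(conv2d(image, kernel).shape)  # (6, 6): output shrinks by kernel size - 1
```

A CNN is essentially many such kernels, learned from data rather than hand-written, stacked in layers with nonlinearities and pooling in between.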