Codects

Posts on Computer Security

  1. Stress-Testing a Tiny Neural Network

    We often treat machine learning models as mathematical abstractions, pure functions that map inputs to outputs. We assume that if the math is correct, the system is secure. But models don’t exist in a vacuum; they run on imperfect hardware, rely on approximate floating-point arithmetic, and execute within physical constraints. I wanted to understand exactly how fragile these implicit assumptions are. My goal wasn’t to build a high-performance classifier or to learn the basics of deep learning.
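The point about floating-point arithmetic being only approximate is easy to see directly: addition of floats is not associative, so the same sum computed in a different order can yield a different bit pattern. A minimal Python sketch (my own illustration, not from the post):

```python
# Floating-point addition is not associative: grouping the same three
# terms differently produces two different results.
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6

print(a == b)           # False
print(abs(a - b))       # a tiny but nonzero discrepancy
```

The discrepancy here is on the order of one unit in the last place, but in a neural network the same effect accumulates across thousands of multiply-accumulate operations, which is exactly the gap between the mathematical abstraction and the hardware that runs it.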