From DSP Hardware to Data Science: Why the Winding Road Was the Point
EEG signal processing, swarm robotics, medical imaging, then data science. Every detour added a tool I use daily — and non-linear careers produce the best engineers.
My career path reads like a random walk: EEG signal processing at Analog Devices, swarm robotics at IRIDIA Brussels, medical imaging at IIT Madras, then a hard pivot into data science at Georgia Tech and Asurion.
"How did you plan this?" is the question I get most. I didn't. But every detour left me with tools and intuitions I reach for constantly.
At Analog Devices, I verified 4G LTE transceivers using SystemC and UVM. The mental model of signal processing — inputs, transforms, outputs, noise, feedback loops — maps directly to how I think about ML pipelines today. When I design a feature engineering pipeline, I'm unconsciously applying the same first-principles decomposition I learned debugging transceiver behavior.
The project that shaped me most wasn't technical at all. I built a virtual support platform for the firmware team that cut six months off their product cycle. Not through a clever algorithm, but through a well-designed interface. That was my first lesson in a truth I keep relearning: the most impactful technical work is often the work that unblocks other people.
At IRIDIA in Brussels, I trained neural networks to control robot swarms using the NEAT algorithm — NeuroEvolution of Augmenting Topologies. Instead of hand-designing a network architecture, you evolve one through selection pressure. My NEAT controller outperformed hand-designed MLP controllers by 31%. The lesson: your intuition about the right architecture is often wrong. Let the data structure guide you.
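The evolve-under-selection-pressure idea can be sketched in a few dozen lines. This is a minimal toy, not full NEAT (no speciation, crossover, or innovation numbers): a genome is a list of hidden tanh units, mutation usually perturbs weights but occasionally "augments the topology" by adding a unit, and selection keeps the fittest genomes each generation. The XOR task, mutation rates, and population sizes here are illustrative assumptions, not values from the original project.

```python
import math
import random

# Toy neuroevolution sketch (NOT full NEAT: no speciation or crossover).
# A genome is a list of hidden units; each unit is a tuple
# (w1, w2, b, wo): input weights, bias, and output weight.

def forward(genome, x1, x2):
    # One hidden layer of tanh units, summed at the output.
    return sum(math.tanh(w1 * x1 + w2 * x2 + b) * wo
               for (w1, w2, b, wo) in genome)

def fitness(genome):
    # Task: approximate XOR. Higher (less negative) is better.
    cases = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    return -sum((forward(genome, a, b) - t) ** 2 for a, b, t in cases)

def mutate(genome):
    # Usual mutation: jitter every weight a little.
    child = [tuple(w + random.gauss(0, 0.3) for w in unit) for unit in genome]
    # Occasional structural mutation: grow the topology by one unit.
    if random.random() < 0.1:
        child.append(tuple(random.gauss(0, 1) for _ in range(4)))
    return child

def evolve(generations=200, pop_size=30, seed=0):
    random.seed(seed)
    # Start every genome with a single random hidden unit.
    pop = [[tuple(random.gauss(0, 1) for _ in range(4))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 3]  # selection pressure + elitism
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

The point the toy makes is the same one NEAT makes at scale: the network's size is an output of the search, not a design decision you commit to up front.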
At IIT Madras, working on biomedical signal processing under Prof. Mohanasankar Sivaprakasam, I learned something different. Medical data is messy, small, and the cost of being wrong is a misdiagnosis. You learn to respect uncertainty and demand rigorous validation — habits that serve you well in any applied ML role.
Being a GTA for Prof. Charles Isbell's Machine Learning course and Prof. Judy Hoffman's Computer Vision course was transformative. Teaching forces understanding at a depth that merely using techniques never does.
Prof. Isbell's approach — always grounded in decision theory, always questioning whether the assumptions hold — shaped how I evaluate every model I build. The computer vision course with Prof. Hoffman and Prof. Devi Parikh gave me deep intuition about representation learning that I apply to NLP problems regularly. Representations are representations, regardless of modality.
If your own path feels like a random walk too: good. You're exploring the state space. The exploitation phase comes naturally once you find the right objective function.
Mine turned out to be: build AI systems that make real decisions in real businesses. Every zigzag was a feature, not a bug.