NAN120: How Network Engineers Can Thrive in an AI-Driven World
Ashwin Chosi, a senior solution engineer at Keysight Technologies, joins the Network Automation Nerds podcast to discuss how AI is reshaping networking careers. He distinguishes between 'AI for networking' and 'networking for AI' as two distinct disciplines, and shares his philosophy on continuous learning, community contribution, and using AI as an empowering tool rather than a replacement for domain expertise.
Summary
In this episode of the Network Automation Nerds podcast, host Eric Cho interviews Ashwin Chosi, a senior solution engineer at Keysight Technologies, about the intersection of AI and networking. The conversation covers two primary domains: how AI can be applied to improve network operations, and how networks must evolve to support AI infrastructure.
Ashwin begins by addressing the common narrative that AI threatens junior engineers' jobs. He argues that while AI will handle grunt work like log retrieval and basic troubleshooting summaries, human domain expertise remains essential for complex, context-rich scenarios. He uses the example of an on-call rotation receiving 100 tickets, suggesting AI agents can handle intermittent or trivial issues, freeing senior engineers to focus on genuinely difficult problems. He frames this 'intelligent automation' model as empowering engineers rather than replacing them.
A significant portion of the conversation focuses on the conceptual distinction between 'AI for networking' and 'networking for AI.' AI for networking refers to using AI tools—agents, anomaly detection, telemetry correlation—to make existing networks operate better. Networking for AI refers to the infrastructure demands that training and inferencing large AI models place on data centers and networks, requiring high throughput, low latency, and specialized protocols. Ashwin notes that AI infrastructure, unlike applications, is foundational and more durable, comparing the network to airport runways that AI applications fly on.
Ashwin shares his personal learning philosophy extensively. He advocates for working backwards from a customer or community problem, using tools like Notebook LM to generate learning roadmaps, and structuring learning in easy-medium-hard progressions. He emphasizes 'doing over reading,' building in public, and leaving tangible artifacts like GitHub repositories, blog posts, and demos. He created a '100 Days of Generative AI' series on LinkedIn that he converted into a searchable web portal, and a curated PyATS learning path to help the community navigate fragmented resources.
The episode also touches on time management and work-life balance. Ashwin describes his routine of early morning 1-hour focused learning blocks and 30-minute end-of-day planning sessions. He participates in a peer accountability group that meets weekly, using presentation commitments as learning milestones. He acknowledges the challenge of balancing learning with family and personal health, and agrees with Eric's point about the importance of sleep for cognitive performance.
Toward the end, Ashwin discusses a personal project using Gemini CLI to convert GitHub course repositories into interactive AI tutors, effectively enabling personalized, on-demand instruction. He suggests this model could transform how instructor-led content is consumed. He also hints at work-related projects involving converting customer intent into end-to-end products using AI sub-agents. Host and guest close by encouraging network engineers to build visible, clickable portfolios as a career differentiator in the age of AI.
Key Insights
- Ashwin argues that AI will handle network troubleshooting grunt work—log retrieval, basic triage, intermittent ticket resolution—but that human domain expertise remains irreplaceable for complex issues whose historical context AI lacks.
- Ashwin distinguishes 'AI for networking' (using AI agents and ML to improve existing networks) from 'networking for AI' (building high-throughput, low-latency infrastructure to support model training and inferencing), calling these two fundamentally different career and technology paths.
- Ashwin claims that AI infrastructure—the physical and protocol layer supporting GPU clusters—is more durable than applications, comparing it to airport runways: you need great runways before you can think about what planes fly on them.
- Ashwin argues that AI training clusters require networks with fundamentally different properties than traditional data centers, specifically ultra-low latency and high bandwidth, because distributed training is only as fast as the slowest GPU node—a tail latency problem.
- Ashwin contends that the business case for AI in network operations is specifically about redirecting senior engineers away from trivial on-call tickets toward genuinely complex problems, framing this as 'putting your best brains on problems that actually matter.'
- Ashwin describes using Notebook LM to generate structured learning roadmaps from raw course material, then validating those roadmaps with a separate LLM like Gemini to avoid bias—treating AI itself as a curriculum design tool.
- Ashwin claims that in the current AI era, ideas have become more valuable than execution because execution has become cheap—tools like Claude Code can clone a product from its source code—shifting the competitive moat to creativity and problem framing.
- Ashwin argues that AI literacy, not necessarily AI coding skill, is now a baseline requirement across all professional fields, and that his '100 Days of Generative AI' series is designed to give people the vocabulary to participate in AI conversations even without writing code.
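The tail-latency point above can be illustrated with a small simulation (not from the episode; node counts and delay figures are invented for illustration). In synchronous data-parallel training, every node must finish its gradient computation and exchange before the next step begins, so each step takes as long as the slowest node—and a single straggling link per step drags down the whole cluster:

```python
import random

def sync_step_time(node_times):
    """Synchronous training advances only when every node has finished,
    so the step duration is the maximum of the per-node times."""
    return max(node_times)

random.seed(0)
nodes = 64
steps = 1000

# Well-behaved cluster: every node finishes its step in roughly 10 ms.
fast = [sync_step_time([random.uniform(9, 11) for _ in range(nodes)])
        for _ in range(steps)]

# Same cluster, but each step one random node straggles, e.g. a
# congested link adds up to 40 ms of extra network delay.
straggler = []
for _ in range(steps):
    times = [random.uniform(9, 11) for _ in range(nodes)]
    times[random.randrange(nodes)] += random.uniform(0, 40)
    straggler.append(sync_step_time(times))

print(f"mean step, uniform cluster:    {sum(fast) / steps:.1f} ms")
print(f"mean step, one straggler/step: {sum(straggler) / steps:.1f} ms")
```

Even though 63 of the 64 nodes are unaffected, the mean step time roughly triples in this sketch—which is why AI fabrics prioritize consistent low latency and lossless high bandwidth, not just aggregate throughput.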