Key Takeaways

- Platform engineering teams must recognize AI agents as a new execution model, on par with microservices, that requires shared, production-grade infrastructure for security, compliance, reliability, and observability in order to move beyond scattered proofs of concept.
- A comprehensive AI strategy distinguishes between "foreground agents," which assist developers directly in the IDE, and "background agents," which autonomously handle repetitive toil such as large-scale code migrations, upgrades, and test coverage improvements.
- To maintain high engineering standards, autonomous agents should operate within a secure sandboxed environment to execute specific developer intent, ultimately producing standard pull requests that must pass existing CI/CD pipelines and human review.
- MCP (Model Context Protocol) provides a unified, vendor-neutral way for LinkedIn's foreground and background agents to use the same tools and enterprise context (powered by RAG, PR history, and semantic code indexes), while evals, sandboxing, and auditing ensure safe, compliant, production-grade agentic workflows.
- Organizations should avoid reinventing generic coding tools and instead focus on solving specific domain problems by feeding agents deep organizational context, such as historical PR data, and implementing rigorous evaluations to track performance.

Transcript

Wes Reisz: On today's episode of the InfoQ Podcast, we're diving into a topic that is rapidly transforming how engineering organizations build software today. How platform
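The shared-tooling idea in the MCP takeaway above can be sketched as a single tool registry that every agent, foreground or background, calls through the same audited entry point. This is a minimal toy illustration only: the `ToolServer` class, tool names, and the in-memory "code index" below are hypothetical, not LinkedIn's platform or the real MCP SDK's API.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Tool:
    """One registered capability, discoverable by any agent."""
    name: str
    description: str
    handler: Callable[..., Any]

class ToolServer:
    """Toy vendor-neutral tool server: one registry, one call path."""

    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, name: str, description: str):
        # Decorator that adds a function to the shared registry.
        def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = Tool(name, description, fn)
            return fn
        return decorator

    def list_tools(self) -> List[str]:
        # Agents discover tools instead of hard-coding vendor APIs.
        return sorted(self._tools)

    def call(self, name: str, **kwargs: Any) -> Any:
        # Single entry point: a real server would authenticate,
        # sandbox, and audit every invocation here.
        return self._tools[name].handler(**kwargs)

server = ToolServer()

@server.register("search_code", "Semantic search over the code index")
def search_code(query: str) -> List[str]:
    # Stand-in for a semantic code index backed by RAG / PR history.
    index = {"retry logic": ["libs/http/retry.py"]}
    return index.get(query, [])

# A foreground IDE assistant and a background migration bot would
# both use exactly this call path:
print(server.list_tools())                          # -> ['search_code']
print(server.call("search_code", query="retry logic"))  # -> ['libs/http/retry.py']
```

The point of the sketch is the shape, not the code: because discovery (`list_tools`) and invocation (`call`) are uniform, security, auditing, and evals can be enforced once at the server rather than per agent.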
Platform Engineering for AI: Scaling Agents and MCP at LinkedIn – InfoQ