Ai | A hidden prompt in a LinkedIn bio turned recruiter outreach into Olde English, and the viral prank exposed a real weakness in AI hiring tools. A viral LinkedIn stunt turned recruiter outreach into Olde English, but the joke lands because the weakness is real. A LinkedIn user has turned a hidden line in a profile bio into a working prompt injection attack, and the result was exactly the kind of absurdity that makes the point stick. Recruiter bots reportedly began writing outreach in Olde English and addressing the target as “My Lord,” a prank that spread quickly on Reddit’s r/technology and other social feeds over the weekend. The comedy matters less than the mechanism. What the stunt exposed is a basic failure mode in agentic hiring systems, where text from a public profile is copied into an AI workflow and treated as something closer to instructions than data. That is the core problem with prompt injection, and OWASP still lists it as LLM01, the top risk in its 2025 security guidance for large language model applications. The attack works because many AI tools do not really distinguish between trusted instructions and untrusted content once everything is inside the
Read More











