Agent Skills for Large Language Models: Architecture, Acquisition, Security, and the Path Forward
arXiv:2602.12430v1 Announce Type: new Abstract: The transition from monolithic language models to modular, skill-equipped agents marks a defining shift in how large language models (LLMs) are deployed in practice. Rather than encoding all procedural knowledge within model weights, agent skills — composable packages of instructions, code, and resources that agents load on demand — enable dynamic capability extension without retraining. The paradigm is formalized through progressive disclosure, portable skill definitions, and integration with the Model Context […]