In the previous post, I shared my update about building collaborative AI agents with the CrewAI framework. We explored the power of orchestrating AI agents to tackle complex tasks. However, that setup relied heavily on powerful Large Language Models (LLMs) accessed through expensive API tokens. Since then, I've discovered a transformative approach: running CrewAI agents entirely locally. This shift leverages tools like Ollama and open-source models such as DeepSeek, and was brilliantly demonstrated by Tyler AI in his YouTube video "How to Build CrewAI Agents with Deepseek R1 100% Locally".

The Economic and Practical Advantages of Local LLMs

As Tyler AI emphasizes, the financial contrast between API-driven models like OpenAI's gpt-3.5-turbo-016 (around $60 per million output tokens) and open-source alternatives like DeepSeek R1 (roughly $2.19 per million output tokens) is substantial. Impressively, DeepSeek R1 achieves performance comparable to or ...
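For readers who want to try the local setup described above, here is a minimal sketch of pointing a CrewAI agent at a local Ollama server. It assumes CrewAI's `LLM` class with a LiteLLM-style `ollama/` model string, that you have run `ollama pull deepseek-r1`, and that the server is on its default port; the agent role and task wording are illustrative, not taken from Tyler AI's video.

```python
# Sketch: one CrewAI agent backed by a local DeepSeek R1 model via Ollama.
# Assumes `pip install crewai` and an Ollama server running on localhost:11434.
from crewai import Agent, Task, Crew, LLM

# Route inference to the local Ollama endpoint instead of a paid API.
local_llm = LLM(
    model="ollama/deepseek-r1",         # LiteLLM-style provider/model string
    base_url="http://localhost:11434",  # default Ollama address
)

researcher = Agent(
    role="Research Analyst",            # illustrative role, not from the video
    goal="Summarize the trade-offs of running LLMs locally",
    backstory="An analyst who prefers reproducible, offline tooling.",
    llm=local_llm,
)

task = Task(
    description="List the cost and privacy benefits of local LLM inference.",
    expected_output="Three concise bullet points.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])

if __name__ == "__main__":
    result = crew.kickoff()  # runs entirely against the local model
    print(result)
```

Because every call stays on localhost, the per-token cost discussed above drops to zero; the trade-off is that you pay in local GPU/CPU time instead.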
The Talent Gap Myth

Generative AI has created a perception that an entirely new set of roles is emerging: "prompt engineers," "AI whisperers," "autonomous-agent designers." That narrative, while exciting, is incomplete. In reality, the foundations of GenAI systems are still deeply rooted in software engineering discipline. The roles are evolving, not disappearing. What changes is the interface: engineers now work with language and context instead of code and schema. The future won't be built by replacing traditional engineers; it will be built by those who learn to translate their existing skills into this new paradigm of adaptive, context-aware systems.

This article kicks off a series exploring how traditional software roles are evolving in the GenAI era. Over the next few posts, we'll deep-dive into each transition: backend to context architecture, frontend to AI interaction design, QA to hallucination testing, and more.

Backend Engineer → Context Architect / LLM Integrator

Th...
There's a lot of buzz around Generative AI, with many touting it as a new computing paradigm, but that misses the point. GenAI and Agentic AI are still software systems, built on the same engineering discipline that's powered the IT industry for decades. Both follow principles of modularity, clear responsibility, orchestration, and observability. What's changed isn't the foundation; it's the interface. Intelligence is now expressed through natural language instead of code, and coordination happens through context instead of strict APIs.

The Core Hypothesis

Both Traditional and Generative AI share the same architectural backbone: modular components with clear boundaries and predictable interactions. What's evolved is the language of communication, from code contracts to context contracts. Deterministic logic has given way to probabilistic reasoning, but the need for disciplined engineering remains unchanged.

Parallels Between Traditional Systems and Agentic AI

a. Single...