Podcasts that helped me build smarter agents, not just bigger ones
r/aiagents
8/15/2025
Content Summary
The post shares a list of podcasts that helped the author build smarter AI agents rather than just bigger ones. The podcasts cover topics such as MLOps, RAG systems, retrieval design, hallucination reduction, and responsible AI. The author recommends them to anyone building or scaling LLM systems, with particular focus on practical implementation, model evaluation, and ethical AI deployment.
Opinion Analysis
Mainstream opinion appears to be that practical, community-driven knowledge sharing is highly valuable for AI developers. Many commenters agree that focusing on smart agent design, not just scale, is crucial for effective AI systems. There is also consensus on the importance of addressing issues like hallucinations, model drift, and ethical compliance. Some debate exists about whether large models are inherently better, but most seem to favor quality over quantity in AI agent development.
SAAS TOOLS
| SaaS | URL | Category | Features/Notes |
|---|---|---|---|
| MLOps Community Podcast | https://home.mlops.community/public/collections/mlops-community-podcast | AI/ML Development | Engineers and researchers share how they ship ML and LLM systems |
| YAAP (Yet Another AI Podcast) | https://yaap.podbean.com | AI/ML Enterprise Solutions | Focuses on enterprise-grade RAG systems, structured chunking, and evaluation strategy |
| Unstructured Data | https://open.spotify.com/show/1yVTFF4yCkmrKS12gbGkYS | AI/ML Use Cases | Covers customer support and e-commerce use cases, includes developer interviews |
| RAG and Beyond | https://open.spotify.com/show/7BLWLhXPqpmazpt4pSNv1Q | AI/ML Retrieval Systems | Explores retrieval system design from a vector database perspective, hybrid search insights |
| Gradient Dissent | https://wandb.ai/site/resources/podcast | AI/ML Research | Discusses LLM evaluation, hallucination reduction, combines theory and practice |
| Responsible AI Podcast | https://podcasts.apple.com/us/podcast/responsible-ai-podcast/id1780564172 | AI Ethics & Compliance | Focuses on model behavior evaluation in regulated environments, compliance, auditability |
USER NEEDS
Pain Points:
- Dealing with hallucinations in LLMs
- Managing model drift
- Building scalable LLM systems
- Ensuring model compliance and auditability
Problems to Solve:
- Prioritizing agent intelligence over raw model size
- Implementing effective retrieval and generation systems
- Evaluating and reducing hallucinations
- Ensuring ethical and compliant AI deployment
Potential Solutions:
- Using RAG (Retrieval-Augmented Generation) systems (see the sketch after this list)
- Implementing MLOps practices for ML/LLM deployment
- Focusing on model evaluation and auditing
- Leveraging community-driven knowledge sharing through podcasts
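To make the RAG item above concrete, here is a minimal, illustrative sketch of the retrieve-then-generate flow. Everything in it is hypothetical: the corpus is made up, retrieval uses a toy bag-of-words cosine similarity instead of a real embedding model or vector database, and the generation step is stubbed rather than calling an actual LLM.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a vector model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical document store; in practice this would be chunked source content.
corpus = [
    "Hybrid search combines keyword and vector retrieval for better recall.",
    "Structured chunking keeps document sections intact before indexing.",
    "Evaluation sets help measure hallucination rates over time.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def answer(query: str) -> str:
    # Ground the prompt on retrieved context; a real system would send this to an LLM.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(answer("How do I reduce hallucinations?"))
```

The point of the pattern, as the podcasts above discuss, is that grounding generation on retrieved context (and evaluating that grounding) tends to reduce hallucinations more reliably than simply using a larger model.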
GROWTH FACTORS
Effective Strategies:
- Focusing on practical, real-world applications of AI/ML
- Providing educational content that helps users solve specific problems
- Highlighting community and collaboration in the AI space
Marketing & Acquisition:
- Sharing curated resources like podcasts that provide value to developers and engineers
- Engaging with niche communities like r/aiagents for targeted outreach
Monetization & Product:
- Offering tools and platforms that address pain points in AI development (e.g., RAG, MLOps)
- Emphasizing product-market fit by solving real-world issues faced by LLM builders
User Engagement:
- Encouraging knowledge sharing through podcasts and discussions
- Building communities around specific AI challenges and solutions