RAG vs Agentic RAG — When to Use Which

A practical comparison of what problems standard RAG and Agentic RAG solve in HazelJS, and when to choose each approach.

Overview

Both RAG and Agentic RAG in HazelJS use the same foundation: vector stores, embeddings, and document retrieval. The difference is how they handle queries and improve over time. Standard RAG is a fixed pipeline; Agentic RAG adds autonomous, adaptive behavior on top.


What RAG (Standard) Solves

Standard RAG in @hazeljs/rag focuses on core retrieval and generation:

  • LLM hallucination / outdated knowledge: retrieves relevant documents from a vector store and augments the LLM prompt with them
  • Document ingestion: 11 document loaders (PDF, Markdown, web, GitHub, etc.) and text splitters for chunking
  • Semantic search: vector similarity search over embeddings (Pinecone, Qdrant, Weaviate, ChromaDB, Memory)
  • Basic retrieval strategies: similarity, MMR (diversity), hybrid (vector + keyword)
  • Context-aware Q&A: RAGPipelineWithMemory for conversation history and entity memory
  • Knowledge graph retrieval: GraphRAG for entity/relationship and thematic search across documents
  • Answer generation: retrieve → build context → generate answer via LLM

Typical use cases: FAQ bots, simple Q&A over docs, help centers, product catalogs.
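
To make the retrieval strategies concrete, here is a toy, self-contained sketch (not the @hazeljs/rag API — all names here are illustrative) contrasting plain cosine-similarity top-k with MMR re-ranking, which trades relevance against redundancy:

```typescript
type Doc = { id: string; vec: number[] };

const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);
const norm = (a: number[]) => Math.sqrt(dot(a, a));
const cosine = (a: number[], b: number[]) => dot(a, b) / (norm(a) * norm(b));

// Plain similarity: the k closest documents, near-duplicates and all.
function similaritySearch(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((a, b) => cosine(query, b.vec) - cosine(query, a.vec))
    .slice(0, k);
}

// MMR: score = lambda * relevance - (1 - lambda) * redundancy,
// where redundancy is the max similarity to anything already selected.
function mmrSearch(query: number[], docs: Doc[], k: number, lambda = 0.5): Doc[] {
  const selected: Doc[] = [];
  const pool = [...docs];
  while (selected.length < k && pool.length > 0) {
    let bestIdx = 0;
    let bestScore = -Infinity;
    pool.forEach((d, i) => {
      const relevance = cosine(query, d.vec);
      const redundancy = selected.length
        ? Math.max(...selected.map((s) => cosine(d.vec, s.vec)))
        : 0;
      const score = lambda * relevance - (1 - lambda) * redundancy;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    });
    selected.push(pool.splice(bestIdx, 1)[0]);
  }
  return selected;
}

const docs: Doc[] = [
  { id: "a1", vec: [1, 0] },
  { id: "a2", vec: [1, 0.1] }, // near-duplicate of a1
  { id: "b", vec: [0, 1] },    // diverse document
];
```

With these vectors, similarity search returns the two near-duplicates, while MMR swaps the second one for the diverse document — the behavior the "diversity" label refers to.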


What Agentic RAG Solves (Beyond Standard RAG)

Agentic RAG adds autonomous, adaptive capabilities on top of the same retrieval primitives:

  • Complex multi-part questions: @QueryPlanner decomposes them into sub-queries and runs them (optionally in parallel)
  • Low retrieval quality: @SelfReflective evaluates results and iteratively improves them (up to N iterations)
  • Choosing the right retrieval strategy: @AdaptiveRetrieval picks similarity, hybrid, or MMR based on query/context
  • Abstract or vague queries: @HyDE generates hypothetical answers and uses them to improve retrieval
  • Bad or irrelevant results: @CorrectiveRAG detects low relevance and can fall back (e.g. to web search)
  • Multi-step reasoning across docs: @MultiHop chains multiple retrieval steps (e.g. breadth-first)
  • Conversational context: @ContextAware uses a conversation window, entity tracking, and topic modeling
  • Query phrasing: @QueryRewriter applies expansion, synonyms, and clarification for better coverage
  • Trust and citations: @SourceVerification checks freshness and authority, and requires citations
  • Improvement over time: @ActiveLearning + @Feedback learn from user feedback and adapt ranking
  • Performance: @Cached provides an LRU cache with TTL for repeated queries

Typical use cases: Research assistants, legal/medical Q&A, customer support with context, knowledge management that improves with use.
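
The self-reflective pattern above can be sketched in a few lines. This is a conceptual, self-contained illustration — not the @SelfReflective internals; the function names, options, and threshold are assumptions. The loop retrieves, judges the results, and rewrites the query until quality is acceptable or the iteration budget runs out:

```typescript
type Retrieved = { text: string; score: number };

interface ReflectiveDeps {
  retrieve: (query: string) => Retrieved[];            // e.g. a vector search
  rewrite: (query: string, attempt: number) => string; // e.g. an LLM rephrasing step
}

function reflectiveRetrieve(
  query: string,
  deps: ReflectiveDeps,
  { threshold = 0.7, maxIterations = 3 } = {}
): { results: Retrieved[]; iterations: number } {
  let current = query;
  let best: Retrieved[] = [];
  let bestAvg = -Infinity;
  for (let i = 1; i <= maxIterations; i++) {
    const results = deps.retrieve(current);
    const avg = results.length
      ? results.reduce((s, r) => s + r.score, 0) / results.length
      : 0;
    if (avg > bestAvg) {
      best = results;
      bestAvg = avg;
    }
    if (avg >= threshold) return { results, iterations: i }; // good enough: stop early
    current = deps.rewrite(current, i); // reflect: rephrase and try again
  }
  return { results: best, iterations: maxIterations }; // budget exhausted: best effort
}
```

The key design point is the bounded loop: each iteration costs a retrieval (and possibly an LLM call), so a cap like "up to N iterations" keeps latency predictable.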


Side-by-Side Comparison

  • Query handling: RAG runs a single query → single retrieval; Agentic RAG plans, rewrites, decomposes, and adapts
  • Quality: RAG is a fixed pipeline; Agentic RAG adds self-reflection, correction, and verification
  • Strategy: with RAG you choose (similarity, hybrid, MMR); Agentic RAG chooses a strategy per query
  • Reasoning: RAG performs one retrieval step; Agentic RAG can multi-hop across documents
  • Learning: RAG is static; Agentic RAG learns from feedback
  • Complexity: RAG is simpler and predictable; Agentic RAG is more capable, with more moving parts
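
The "one retrieval step vs. multi-hop" distinction can be sketched as a small chain, where the documents found in one hop seed the query for the next. This is a conceptual illustration, not the @MultiHop implementation; the `retrieve` and `nextQuery` callbacks stand in for vector search and an LLM-extracted follow-up question:

```typescript
interface Hop {
  query: string;
  docs: string[];
}

function multiHopRetrieve(
  question: string,
  retrieve: (q: string) => string[],
  nextQuery: (q: string, docs: string[]) => string | null, // null → no further hop
  maxHops = 3
): Hop[] {
  const hops: Hop[] = [];
  let query: string | null = question;
  while (query && hops.length < maxHops) {
    const docs = retrieve(query);
    hops.push({ query, docs });
    query = nextQuery(query, docs); // derive the follow-up from what was just found
  }
  return hops;
}
```

A single-step RAG pipeline is the degenerate case `maxHops = 1`; the agentic version keeps hopping while a follow-up question remains and the hop budget allows.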

Decision Guide

Use standard RAG when:

  • You have straightforward Q&A or document search
  • Queries are simple and well-formed
  • You want predictable latency and cost
  • You don't need multi-step reasoning or self-correction

Use Agentic RAG when:

  • Questions are complex or multi-part
  • You need better handling of vague or abstract queries
  • Multi-hop reasoning across documents is required
  • You want self-correction and quality verification
  • The system should improve from user feedback over time

Quick Start Examples

Standard RAG

import { RAGPipeline, OpenAIEmbeddings, MemoryVectorStore } from '@hazeljs/rag';

const embeddings = new OpenAIEmbeddings({ apiKey: process.env.OPENAI_API_KEY });
const vectorStore = new MemoryVectorStore(embeddings);
const rag = new RAGPipeline({
  vectorStore,
  embeddingProvider: embeddings,
  topK: 5,
});

await rag.initialize();
// `documents` stands for an array of Document objects you have already
// prepared, e.g. with the package's loaders and text splitters.
await rag.addDocuments(documents);

const results = await rag.query('What is HazelJS?');

Agentic RAG

import { AgenticRAGService } from '@hazeljs/rag/agentic';
import { MemoryVectorStore } from '@hazeljs/rag';
import { OpenAIEmbeddings } from '@hazeljs/ai';

const embeddings = new OpenAIEmbeddings({ apiKey: process.env.OPENAI_API_KEY });
const vectorStore = new MemoryVectorStore(embeddings);
const agenticRAG = new AgenticRAGService({ vectorStore });

// Uses query planning, self-reflection, adaptive retrieval, caching
const results = await agenticRAG.retrieve('Compare machine learning approaches for NLP');
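
The caching mentioned above (the @Cached behavior) amounts to an LRU cache whose entries also expire after a TTL. Here is a toy, self-contained sketch of that combination — illustrative only; the class, `capacity`, and `ttlMs` names are assumptions, not the decorator's real options:

```typescript
class LruTtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();
  constructor(private capacity: number, private ttlMs: number) {}

  get(key: string, now = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expires <= now) {
      this.store.delete(key); // expired: drop it
      return undefined;
    }
    // Re-insert to mark the entry as most recently used.
    this.store.delete(key);
    this.store.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V, now = Date.now()): void {
    if (this.store.has(key)) {
      this.store.delete(key);
    } else if (this.store.size >= this.capacity) {
      // Map preserves insertion order, so the first key is least recently used.
      const lru = this.store.keys().next().value as string;
      this.store.delete(lru);
    }
    this.store.set(key, { value, expires: now + this.ttlMs });
  }
}
```

Keying such a cache on the normalized query text is what makes repeated queries cheap; the TTL bounds how stale a cached answer can get as the underlying documents change.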

Learn More

  • RAG Package — Vector stores, loaders, GraphRAG, and core pipeline
  • RAG Patterns — Advanced patterns and best practices
  • Agentic RAG — Full Agentic RAG feature reference