AI Package
The @hazeljs/ai package provides comprehensive AI integration capabilities for your HazelJS applications. It supports multiple AI providers including OpenAI, Anthropic, Gemini, Cohere, and Ollama, with features like streaming, function calling, embeddings, and vector search.
Purpose
Building AI-powered applications typically requires integrating with multiple AI providers, managing API keys, handling streaming responses, and implementing complex logic for embeddings and vector search. The @hazeljs/ai package solves these challenges by providing:
- Unified API: A single, consistent interface for all AI providers, eliminating the need to learn different SDKs
- Provider Abstraction: Switch between OpenAI, Anthropic, Gemini, and other providers without changing your code
- Decorator-Based API: Use simple decorators to add AI capabilities to your methods
- Built-in Streaming: Native support for streaming responses for real-time user experiences
- Vector Search: Integrated semantic search capabilities with embeddings
Architecture
The package is built on a provider-based architecture that abstracts away provider-specific implementations:
Key Components
- AIService: Core service that manages AI task execution and provider routing
- Provider Implementations: Pluggable providers for different AI services
- Decorators: @AITask, @AIFunction, and @AIValidate for declarative AI integration
- VectorService: Specialized service for embeddings and semantic search
Advantages
1. Provider Flexibility
Switch between AI providers without code changes. Start with OpenAI, switch to Anthropic for cost savings, or use Ollama for local development—all with the same API.
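Because the provider is just configuration, the switch can even be driven by environment variables. A minimal sketch using the AIModule.register call from the Quick Start below (the AI_PROVIDER, AI_MODEL, and AI_API_KEY variable names are made up for illustration):
import { HazelModule } from '@hazeljs/core';
import { AIModule } from '@hazeljs/ai';

// Swap providers by changing environment variables, not code.
@HazelModule({
  imports: [AIModule.register({
    provider: process.env.AI_PROVIDER ?? 'openai', // e.g. 'anthropic', 'ollama'
    model: process.env.AI_MODEL ?? 'gpt-4-turbo-preview',
    apiKey: process.env.AI_API_KEY,
  })],
})
export class AppModule {}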
2. Developer Experience
Decorator-based API means you can add AI capabilities with a single decorator. No need to manage API clients, handle errors, or implement streaming logic manually.
3. Type Safety
Full TypeScript support with proper types for all providers, responses, and configurations.
4. Performance
Built-in streaming support for real-time responses, token tracking for cost optimization, and efficient embedding management.
5. Extensibility
Easy to add custom providers or extend existing ones. The provider interface is simple and well-defined.
6. Production Ready
Includes error handling, retry logic, rate limiting support, and comprehensive logging.
Installation
npm install @hazeljs/ai
Quick Start
Basic Setup
Import and register the AI module in your application:
import { HazelModule } from '@hazeljs/core';
import { AIModule } from '@hazeljs/ai';
@HazelModule({
imports: [AIModule.register({
provider: 'openai',
model: 'gpt-4-turbo-preview',
apiKey: process.env.OPENAI_API_KEY,
})],
})
export class AppModule {}
Using AIService
Inject the AIService into your controllers or services:
import { Injectable } from '@hazeljs/core';
import { AIService } from '@hazeljs/ai';
@Injectable()
export class ChatService {
constructor(private readonly aiService: AIService) {}
async generateResponse(prompt: string) {
const result = await this.aiService.executeTask({
name: 'chat',
provider: 'openai',
model: 'gpt-4-turbo-preview',
prompt: 'You are a helpful assistant. Respond to: {{input}}',
outputType: 'string',
stream: false,
}, { input: prompt });
return result.data;
}
}
Decorators and Annotations
HazelJS uses decorators (also called annotations) to add metadata and behavior to your code. Decorators are functions that modify classes, methods, or parameters when a class is declared. The AI package provides several decorators that make it easy to integrate AI capabilities into your application.
Understanding Decorators
Decorators in TypeScript are special declarations that can be attached to classes, methods, properties, or parameters. They use the @ symbol and are evaluated at runtime. HazelJS decorators store metadata using reflection, which the framework then uses to configure behavior.
How Decorators Work:
- Metadata Storage: Decorators store configuration in metadata using Reflect.defineMetadata() (sketched after this list)
- Runtime Processing: The framework reads this metadata when processing your code
- Behavior Injection: Based on the metadata, the framework modifies or wraps your code
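To make the metadata flow concrete, here is a minimal sketch of the pattern using the reflect-metadata package directly (the AI_TASK_KEY metadata key and SimpleAITask decorator are invented for illustration; they are not part of @hazeljs/ai):
import 'reflect-metadata';

const AI_TASK_KEY = 'hazeljs:ai-task'; // hypothetical metadata key

// A simplified method decorator that only stores configuration
function SimpleAITask(config: { name: string; prompt: string }): MethodDecorator {
  return (target, propertyKey) => {
    // 1. Metadata storage: attach the config to the decorated method
    Reflect.defineMetadata(AI_TASK_KEY, config, target, propertyKey);
  };
}

// 2. Runtime processing: the framework can read the config back later
function readTaskConfig(instance: object, method: string) {
  return Reflect.getMetadata(AI_TASK_KEY, Object.getPrototypeOf(instance), method);
}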
@AITask Decorator
The @AITask decorator is a method decorator that transforms a method into an AI-powered function. When you call the method, it automatically executes an AI task using the configured provider.
How it works:
- The decorator replaces the method's implementation with AI execution logic
- It reads the task configuration from the decorator options
- It picks up the injected AIService automatically (the service must be available on the class)
- It handles streaming, error handling, and response parsing
Configuration Options:
interface AITaskConfig {
name: string; // Unique name for the task
provider: string; // AI provider: 'openai', 'anthropic', 'ollama', etc.
model: string; // Model to use (e.g., 'gpt-4-turbo-preview')
prompt: string; // Prompt template with {{variable}} placeholders
outputType: string; // Expected output: 'string', 'json', 'number', 'boolean'
temperature?: number; // Creativity level (0-1, default: 0.7)
maxTokens?: number; // Maximum tokens in response
stream?: boolean; // Enable streaming responses
}
Example with Detailed Explanation:
import { Injectable } from '@hazeljs/core';
import { AIService, AITask } from '@hazeljs/ai';
@Injectable()
export class ContentService {
// AIService must be injected for @AITask to work
constructor(public aiService: AIService) {}
// @AITask is a method decorator - it modifies the summarize method
@AITask({
name: 'summarize', // Task identifier
provider: 'openai', // Use OpenAI provider
model: 'gpt-4-turbo-preview', // Specific model
prompt: 'Summarize the following text in 3 sentences: {{input}}',
// The {{input}} placeholder will be replaced with the method's first argument
outputType: 'string', // Expect a string response
temperature: 0.7, // Moderate creativity
maxTokens: 500, // Limit response length
stream: false, // Return complete response, not stream
})
async summarize(text: string) {
// This method body is replaced by the decorator
// When called: summarize("Long article text...")
// The decorator will:
// 1. Replace {{input}} with "Long article text..."
// 2. Call OpenAI API with the formatted prompt
// 3. Parse and return the response
// 4. Handle errors automatically
}
}
Important Notes:
- The method must have AIService injected in the constructor (as public aiService)
- The first parameter to the method becomes the {{input}} in the prompt template
- The method body is replaced, so any code you write here won't execute
- For streaming, the method returns an async generator instead of a value (see the sketch below)
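When stream: true is set, calling the decorated method yields an async generator. A sketch of consuming it (this assumes chunks arrive as plain strings, as in the Complete Example later in this section):
// contentService is an instance of the ContentService above,
// configured with stream: true instead of false
const stream = await contentService.summarize('Long article text...');

for await (const chunk of stream) {
  process.stdout.write(chunk); // render each piece as it arrives
}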
AI Providers
OpenAI Provider
The OpenAI provider supports GPT models, embeddings, and streaming:
import { OpenAIProvider } from '@hazeljs/ai';
const provider = new OpenAIProvider(process.env.OPENAI_API_KEY, {
defaultModel: 'gpt-4-turbo-preview',
});
// Generate completion
const response = await provider.complete({
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Hello!' },
],
model: 'gpt-4-turbo-preview',
temperature: 0.7,
});
// Streaming completion
for await (const chunk of provider.streamComplete({
messages: [{ role: 'user', content: 'Tell me a story' }],
})) {
process.stdout.write(chunk.delta);
}
// Generate embeddings
const embeddings = await provider.embed({
input: 'Hello, world!',
model: 'text-embedding-3-small',
});
Anthropic Provider
import { AnthropicProvider } from '@hazeljs/ai';
const provider = new AnthropicProvider(process.env.ANTHROPIC_API_KEY);
const response = await provider.complete({
messages: [
{ role: 'user', content: 'Explain quantum computing' },
],
model: 'claude-3-opus-20240229',
});
Gemini Provider
import { GeminiProvider } from '@hazeljs/ai';
const provider = new GeminiProvider(process.env.GEMINI_API_KEY);
const response = await provider.complete({
messages: [
{ role: 'user', content: 'What is machine learning?' },
],
model: 'gemini-pro',
});
Ollama Provider
For local AI models:
import { OllamaProvider } from '@hazeljs/ai';
const provider = new OllamaProvider({
baseURL: 'http://localhost:11434',
});
const response = await provider.complete({
messages: [
{ role: 'user', content: 'Hello!' },
],
model: 'llama2',
});
AI Function Decorator
The @AIFunction decorator is a more flexible alternative to @AITask. It marks a method as AI-powered and allows you to use parameter decorators to specify which parameters should be processed by AI.
@AIFunction Decorator
Purpose: Marks a method as an AI function that will process its parameters through an AI provider.
How it works:
- Stores AI configuration in method metadata
- Works with the @AIPrompt parameter decorator to identify AI inputs
- The framework intercepts method calls and processes them through AI
- More flexible than @AITask because it doesn't replace the method body
Configuration Options:
interface AIFunctionOptions {
provider: string; // AI provider to use
model: string; // Model identifier
streaming?: boolean; // Enable streaming (default: false)
temperature?: number; // Response creativity (default: 0.7)
maxTokens?: number; // Maximum response tokens
}
Example with Detailed Explanation:
import { Injectable } from '@hazeljs/core';
import { AIFunction, AIPrompt } from '@hazeljs/ai';
@Injectable()
export class TextService {
// @AIFunction is a method decorator that marks this method for AI processing
@AIFunction({
provider: 'openai', // Use OpenAI
model: 'gpt-4-turbo-preview', // Model selection
streaming: true, // Enable streaming responses
temperature: 0.7, // Creativity level
})
// @AIPrompt is a parameter decorator that marks this parameter as the AI prompt
async generateText(@AIPrompt() prompt: string) {
// When this method is called:
// 1. The framework intercepts the call
// 2. Extracts the 'prompt' parameter (marked with @AIPrompt)
// 3. Sends it to OpenAI with the configured settings
// 4. Returns a streaming response (async generator)
// 5. Your method body can process the result if needed
// For streaming, you'll receive an async generator
// For non-streaming, you'll receive the complete response
}
}
@AIPrompt Parameter Decorator
Purpose: Marks a method parameter as the input prompt for AI processing.
How it works:
- Stores metadata about which parameter index contains the prompt
- The framework uses this to extract the correct parameter value
- Works in conjunction with @AIFunction
Usage:
// Single prompt parameter
async generateText(@AIPrompt() prompt: string) { }
// Multiple parameters (only one marked as prompt)
async processData(
@AIPrompt() userInput: string, // This will be sent to AI
context: any // This won't be processed by AI
) { }
Key Differences from @AITask:
- @AIFunction doesn't replace the method body, so you can add custom logic
- More flexible for complex scenarios
- Better for cases where you need to process AI results further
- Supports multiple parameters with selective AI processing
AI Validation Decorators
AI-powered validation allows you to use AI models to validate data beyond traditional rule-based validation. This is useful for content moderation, quality checks, and complex validation logic.
@AIValidate Decorator
Purpose: Marks a property or class for AI-powered validation. The AI model evaluates the data against your validation criteria.
How it works:
- When validation runs, the property value is sent to the AI provider
- The AI evaluates it against the validation prompt
- Returns a boolean or detailed validation result
- Integrates with HazelJS's validation pipeline
Configuration Options:
interface AIValidationOptions {
provider: string; // AI provider to use
prompt: string; // Validation criteria prompt
model?: string; // Optional model override
strict?: boolean; // Require explicit approval (default: true)
}
Example with Detailed Explanation:
import { AIValidate, AIValidateProperty } from '@hazeljs/ai';
class CreatePostDto {
// @AIValidateProperty is a property decorator
// It marks this property for AI validation
@AIValidateProperty({
provider: 'openai',
prompt: 'Validate that this title is appropriate and engaging. ' +
'Check for: 1) Appropriate language, 2) Engaging content, ' +
'3) No spam or misleading information. ' +
'Respond with "VALID" or "INVALID" followed by reason.',
})
title: string;
// When this DTO is validated:
// 1. The title value is extracted
// 2. Sent to OpenAI with the validation prompt
// 3. AI responds with validation result
// 4. Framework throws validation error if invalid
// @AIValidate can be used on the class level or property level
@AIValidate({
provider: 'openai',
prompt: 'Check if this content follows our content guidelines. ' +
'Ensure it is: professional, accurate, and free of harmful content.',
})
content: string;
// Similar process - content is validated by AI before processing
}
@AIValidateProperty Decorator
Purpose: Specifically marks a class property for AI validation. More explicit than @AIValidate for property-level validation.
When to use:
- When you need different validation rules for different properties
- When you want explicit property-level validation
- When combining with other validation decorators
Example:
import { IsString, MinLength } from 'class-validator';
import { AIValidateProperty } from '@hazeljs/ai';
class ArticleDto {
// Combine traditional validation with AI validation
@IsString()
@MinLength(10)
@AIValidateProperty({
provider: 'openai',
prompt: 'Check if title is SEO-friendly and engaging',
})
title: string;
@AIValidateProperty({
provider: 'openai',
prompt: 'Validate content quality and readability',
})
content: string;
}
// Validation runs in order:
// 1. Traditional validators (@IsString, @MinLength)
// 2. AI validators (@AIValidateProperty)
// All must pass for validation to succeed
Best Practices for AI Validation:
- Be specific in prompts: Clear validation criteria produce better results
- Combine with traditional validation: Use AI for complex checks, traditional validators for simple rules
- Handle costs: AI validation has API costs - use selectively
- Cache results: Consider caching validation results for repeated content
- Error handling: Provide fallback validation for AI failures, as in the sketch below
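The last two points can be combined in a small wrapper. A sketch (the validateWithAI helper and its fallback rule are hypothetical, not part of @hazeljs/ai):
import { createHash } from 'node:crypto';

// Cache AI verdicts by content hash; fall back to a cheap rule-based
// check if the AI call fails.
const cache = new Map<string, boolean>();

async function validateWithAI(
  content: string,
  aiCheck: (content: string) => Promise<boolean>, // wraps the provider call
): Promise<boolean> {
  const key = createHash('sha256').update(content).digest('hex');
  const cached = cache.get(key);
  if (cached !== undefined) return cached; // reuse the previous verdict

  try {
    const valid = await aiCheck(content);
    cache.set(key, valid);
    return valid;
  } catch {
    // Fallback: a simple length check when the AI provider is unavailable
    return content.trim().length >= 10;
  }
}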
Vector Search
The package includes vector search capabilities for semantic search:
import { VectorService } from '@hazeljs/ai';
const vectorService = new VectorService({
provider: 'openai',
embeddingModel: 'text-embedding-3-small',
});
// Store documents
await vectorService.addDocument({
id: 'doc1',
content: 'HazelJS is a modern Node.js framework',
metadata: { category: 'framework' },
});
// Search
const results = await vectorService.search({
query: 'What is HazelJS?',
limit: 5,
});
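Vector search pairs naturally with completion tasks for retrieval-augmented generation. A sketch combining the search and executeTask APIs shown above (it assumes VectorService can be injected like AIService, that each search hit exposes a content field, and that prompt templates accept multiple {{variable}} placeholders):
import { Injectable } from '@hazeljs/core';
import { AIService, VectorService } from '@hazeljs/ai';

@Injectable()
export class DocsQaService {
  constructor(
    private readonly aiService: AIService,
    private readonly vectorService: VectorService,
  ) {}

  async answer(question: string) {
    // 1. Retrieve the most relevant documents for the question
    const results = await this.vectorService.search({ query: question, limit: 3 });

    // 2. Join their content into a context block (assumes each hit exposes `content`)
    const context = results.map((r) => r.content).join('\n---\n');

    // 3. Ask the model to answer using only the retrieved context
    const result = await this.aiService.executeTask({
      name: 'docs-qa',
      provider: 'openai',
      model: 'gpt-4-turbo-preview',
      prompt: 'Answer using only this context:\n{{context}}\n\nQuestion: {{question}}',
      outputType: 'string',
      stream: false,
    }, { context, question });

    return result.data;
  }
}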
Complete Example
Here's a complete example of an AI-powered chat service:
import { Injectable, Controller, Post, Body } from '@hazeljs/core';
import { AIService, AITask } from '@hazeljs/ai';
@Injectable()
export class ChatService {
constructor(public aiService: AIService) {}
@AITask({
name: 'chat',
provider: 'openai',
model: 'gpt-4-turbo-preview',
prompt: 'You are a helpful assistant. User says: {{input}}',
outputType: 'string',
stream: true,
})
async chat(message: string) {
// Returns a stream of AI responses
}
}
@Controller('chat')
export class ChatController {
constructor(private readonly chatService: ChatService) {}
@Post()
async sendMessage(@Body() body: { message: string }) {
const stream = await this.chatService.chat(body.message);
// Handle streaming response
const chunks: string[] = [];
for await (const chunk of stream) {
chunks.push(chunk);
}
return { response: chunks.join('') };
}
}
Configuration
Configure the AI module with custom options:
AIModule.register({
provider: 'openai',
model: 'gpt-4-turbo-preview',
apiKey: process.env.OPENAI_API_KEY,
// Additional provider-specific options
})
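For local development, the same registration can point at Ollama. A sketch (forwarding baseURL through register is an assumption based on the OllamaProvider constructor shown earlier):
AIModule.register({
  provider: 'ollama',
  model: 'llama2',
  // Assumed to be passed through to the provider as a provider-specific option
  baseURL: 'http://localhost:11434',
})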
Best Practices
- Use streaming for long responses: Enable streaming for better user experience with long AI-generated content.
- Cache embeddings: Store embeddings for frequently searched content to reduce API costs.
- Error handling: Always wrap AI calls in try-catch blocks and provide fallback responses (see the sketch below).
- Token limits: Be mindful of token limits and set appropriate maxTokens values.
- Rate limiting: Implement rate limiting for AI endpoints to prevent abuse.
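For the error-handling point, a minimal sketch wrapping the executeTask call from the Quick Start (the fallback message is illustrative):
import { AIService } from '@hazeljs/ai';

async function safeGenerate(aiService: AIService, prompt: string): Promise<string> {
  try {
    const result = await aiService.executeTask({
      name: 'chat',
      provider: 'openai',
      model: 'gpt-4-turbo-preview',
      prompt: 'You are a helpful assistant. Respond to: {{input}}',
      outputType: 'string',
      stream: false,
    }, { input: prompt });
    return result.data;
  } catch (error) {
    // Log and degrade gracefully instead of surfacing a raw provider error
    console.error('AI request failed', error);
    return 'Sorry, I could not generate a response right now.';
  }
}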