ML Package
The @hazeljs/ml package provides machine learning and model management capabilities for HazelJS applications. It includes a model registry, decorator-based training and prediction APIs, batch inference, and metrics tracking.
Purpose
Building ML-powered applications requires model registration, training pipelines, inference services, and evaluation metrics. The @hazeljs/ml package simplifies this by providing:
- Model Registry – Register and discover models by name and version
- Decorator-Based API – @Model, @Train, and @Predict for declarative ML classes
- Training Pipeline – PipelineService for data preprocessing (normalize, filter)
- Inference – PredictorService for single and batch predictions
- Metrics – MetricsService for evaluation, A/B testing, and monitoring
- Framework-Agnostic – Works with TensorFlow.js, ONNX, Transformers.js, or custom backends
Architecture
The package uses a registry-based architecture with decorator-driven model registration:
graph TD
    A["MLModule.forRoot()<br/>(Model Registration)"] --> B["MLModelBootstrap<br/>(Discovers @Train, @Predict)"]
    B --> C["ModelRegistry<br/>(Name/Version Lookup)"]
    D["@Model Decorator<br/>(Metadata)"] --> E["@Train / @Predict<br/>(Method Discovery)"]
    E --> B
    C --> F["TrainerService<br/>(Training)"]
    C --> G["PredictorService<br/>(Inference)"]
    C --> H["BatchService<br/>(Batch Predictions)"]
    C --> I["MetricsService<br/>(Evaluation)"]
    G --> J["Single / Batch Prediction"]
    F --> K["Training Pipeline"]
    style A fill:#3b82f6,stroke:#60a5fa,stroke-width:2px,color:#fff
    style B fill:#8b5cf6,stroke:#a78bfa,stroke-width:2px,color:#fff
    style C fill:#10b981,stroke:#34d399,stroke-width:2px,color:#fff
    style D fill:#8b5cf6,stroke:#a78bfa,stroke-width:2px,color:#fff
Key Components
- MLModule – Registers ModelRegistry, TrainerService, PredictorService, BatchService, MetricsService
- ModelRegistry – Stores and retrieves models by name and version
- TrainerService – Discovers and invokes
@Trainmethods - PredictorService – Discovers and invokes
@Predictmethods - PipelineService – Data preprocessing for training
- MetricsService – Model evaluation and metrics tracking
- Decorators – @Model, @Train, and @Predict for declarative ML
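To make the name/version lookup concrete, here is a minimal self-contained sketch of how such a registry could behave. MiniRegistry and its methods are illustrative assumptions, not the actual ModelRegistry implementation:

```typescript
// Sketch of a name/version-keyed model registry (illustrative only).
type ModelEntry = { name: string; version: string; instance: unknown };

class MiniRegistry {
  private models = new Map<string, ModelEntry>();

  private key(name: string, version: string): string {
    return `${name}@${version}`;
  }

  register(entry: ModelEntry): void {
    this.models.set(this.key(entry.name, entry.version), entry);
  }

  // Look up a specific version, or fall back to the highest registered one.
  get(name: string, version?: string): ModelEntry | undefined {
    if (version) return this.models.get(this.key(name, version));
    const candidates = [...this.models.values()].filter((m) => m.name === name);
    return candidates.sort((a, b) => a.version.localeCompare(b.version)).pop();
  }
}

const reg = new MiniRegistry();
reg.register({ name: 'sentiment-classifier', version: '1.0.0', instance: {} });
reg.register({ name: 'sentiment-classifier', version: '1.1.0', instance: {} });
```

The key design point is that name and version together form the lookup key, which is what lets multiple versions of one model coexist.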
Advantages
1. Declarative ML
Define models with decorators—training and prediction methods are discovered automatically.
2. Model Versioning
Register multiple versions of a model; the registry supports lookup by name and version.
3. Framework Flexibility
Use TensorFlow.js, ONNX, Transformers.js, or custom implementations—the package is backend-agnostic.
4. Batch Inference
BatchService performs efficient batch predictions with a configurable batch size.
5. Evaluation Built-In
MetricsService computes accuracy, F1, precision, recall, and custom metrics.
Installation
npm install @hazeljs/ml @hazeljs/core
Optional Peer Dependencies
# TensorFlow.js
npm install @tensorflow/tfjs-node
# ONNX Runtime
npm install onnxruntime-node
# Hugging Face Transformers (embeddings, sentiment)
npm install @huggingface/transformers
Quick Start
1. Import MLModule
import { HazelApp } from '@hazeljs/core';
import { MLModule } from '@hazeljs/ml';

const app = new HazelApp({
  imports: [
    MLModule.forRoot({
      models: [SentimentClassifier, SpamClassifier], // classes decorated with @Model
    }),
  ],
});

app.listen(3000);
2. Define a Model
import { Injectable } from '@hazeljs/core';
import { Model, Train, Predict, ModelRegistry } from '@hazeljs/ml';

@Model({ name: 'sentiment-classifier', version: '1.0.0', framework: 'custom' })
@Injectable()
export class SentimentClassifier {
  private labels = ['positive', 'negative', 'neutral'];
  private weights: Record<string, number[]> = {};

  constructor(private registry: ModelRegistry) {}

  @Train()
  async train(data: { text: string; label: string }[]): Promise<void> {
    // Your training logic – e.g. bag-of-words, embeddings
    const vocab = this.buildVocabulary(data);
    this.weights = this.computeWeights(data, vocab);
  }

  @Predict()
  async predict(input: { text: string }): Promise<{ sentiment: string; confidence: number }> {
    const scores = this.score(input.text);
    const idx = scores.indexOf(Math.max(...scores));
    return {
      sentiment: this.labels[idx],
      confidence: scores[idx],
    };
  }

  // buildVocabulary, computeWeights, and score are private helpers (omitted here)
}
3. Predict from a Controller
import { Controller, Post, Body } from '@hazeljs/core';
import { PredictorService } from '@hazeljs/ml';

@Controller('ml')
export class MLController {
  constructor(private predictor: PredictorService) {}

  @Post('predict')
  async predict(@Body() body: { text: string; model?: string }) {
    const result = await this.predictor.predict(
      body.model ?? 'sentiment-classifier',
      body
    );
    return { result };
  }
}
Training Pipeline
Preprocess data before training with PipelineService:
import { PipelineService } from '@hazeljs/ml';
const pipeline = new PipelineService();
const steps = [
{ name: 'normalize', fn: (d: { text: string }) => ({ ...d, text: d.text.toLowerCase() }) },
{ name: 'filter', fn: (d: { text: string }) => d.text.length > 0 },
];
const processed = await pipeline.run(data, steps);
await model.train(processed);
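Conceptually, a step whose function returns a transformed item acts as a map, while a step returning a boolean acts as a filter. A minimal self-contained sketch of a runner with those semantics (runPipeline is illustrative; PipelineService's actual behavior may differ):

```typescript
// Sketch of a pipeline runner: boolean-returning steps filter items,
// object-returning steps transform them. Not the real PipelineService.
type Step<T> = { name: string; fn: (item: T) => T | boolean };

function runPipeline<T>(data: T[], steps: Step<T>[]): T[] {
  return steps.reduce((items, step) => {
    const out: T[] = [];
    for (const item of items) {
      const result = step.fn(item);
      if (result === false) continue;              // filtered out
      out.push(result === true ? item : result);   // kept as-is or transformed
    }
    return out;
  }, data);
}

const processed = runPipeline([{ text: 'Hello' }, { text: '' }], [
  { name: 'normalize', fn: (d) => ({ ...d, text: d.text.toLowerCase() }) },
  { name: 'filter', fn: (d) => d.text.length > 0 },
]);
```

Steps run in order over the whole dataset, so a filter placed after a normalize step sees the normalized records.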
Batch Predictions
import { BatchService } from '@hazeljs/ml';
const batchService = new BatchService(predictorService);
const results = await batchService.predictBatch('sentiment-classifier', items, {
batchSize: 32,
});
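Under the hood, batch prediction amounts to slicing the input into fixed-size chunks before dispatching each chunk to the model. A minimal sketch of that chunking (chunkBatches is a hypothetical helper, not part of the package):

```typescript
// Split items into chunks of at most batchSize, the way a batch
// prediction service might before invoking the model per chunk.
function chunkBatches<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

const batches = chunkBatches([1, 2, 3, 4, 5, 6, 7], 3);
// 7 items with batchSize 3 yield chunks of 3, 3, and 1
```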
Metrics and Evaluation
import { MetricsService } from '@hazeljs/ml';
const metricsService = new MetricsService();
const evaluation = await metricsService.evaluate(modelName, testData, {
metrics: ['accuracy', 'f1', 'precision', 'recall'],
});
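For reference, the standard definitions behind those metric names, sketched for binary labels (binaryMetrics is illustrative only and does not show MetricsService internals):

```typescript
// Accuracy, precision, recall, and F1 for binary predictions,
// treating `positive` as the positive class. Illustrative sketch.
function binaryMetrics(predicted: string[], actual: string[], positive = 'positive') {
  let tp = 0, fp = 0, fn = 0, correct = 0;
  predicted.forEach((p, i) => {
    const a = actual[i];
    if (p === a) correct++;
    if (p === positive && a === positive) tp++;
    if (p === positive && a !== positive) fp++;
    if (p !== positive && a === positive) fn++;
  });
  const precision = tp / ((tp + fp) || 1); // guard against division by zero
  const recall = tp / ((tp + fn) || 1);
  const f1 = precision + recall === 0 ? 0 : (2 * precision * recall) / (precision + recall);
  return { accuracy: correct / predicted.length, precision, recall, f1 };
}

const m = binaryMetrics(
  ['positive', 'positive', 'negative', 'negative'],
  ['positive', 'negative', 'negative', 'positive']
);
// one true positive, one false positive, one false negative, two correct
```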
Manual Model Registration
When not using MLModule.forRoot({ models: [...] }):
import { registerMLModel, ModelRegistry, TrainerService, PredictorService } from '@hazeljs/ml';
registerMLModel(
sentimentInstance,
modelRegistry,
trainerService,
predictorService
);
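Conceptually, manual registration records the instance's train and predict entry points under the model's name/version key. A simplified self-contained sketch (registerModel and the Map-based registry are hypothetical stand-ins for the real helpers):

```typescript
// Sketch: what a registration helper conceptually does — bind the
// instance's train/predict methods and file them under name@version.
type Handlers = {
  train: (data: unknown) => unknown;
  predict: (input: unknown) => unknown;
};

const registry = new Map<string, Handlers>();

function registerModel(name: string, version: string, instance: Handlers): void {
  registry.set(`${name}@${version}`, {
    train: instance.train.bind(instance),
    predict: instance.predict.bind(instance),
  });
}

class DummyModel {
  train(_data: unknown) { return 'trained'; }
  predict(_input: unknown) { return 'ok'; }
}

registerModel('spam-classifier', '1.0.0', new DummyModel());
```

Binding the methods to the instance preserves `this` when services later invoke the handlers through the registry.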
Service Summary
| Service | Purpose |
|---|---|
| ModelRegistry | Register and look up models by name/version |
| TrainerService | Discover and invoke @Train methods |
| PredictorService | Discover and invoke @Predict methods |
| PipelineService | Data preprocessing pipeline for training |
| BatchService | Batch prediction with configurable batch size |
| MetricsService | Model evaluation and metrics tracking |
Related Resources
- AI Package – LLM integration for hybrid AI/ML workflows
- Cache Package – Cache model outputs and embeddings
- Config Package – Model paths and API keys
- hazeljs-ml-starter – Full example with sentiment, spam, intent classifiers