Formatters
Output log formats for different destinations and platforms
Cenglu supports multiple output formats optimized for different logging platforms and use cases. Choose the format that best matches your log aggregation system or operational requirements.
Quick Start
Set the output format using the structured option:
import { createLogger } from "cenglu";
const logger = createLogger({
structured: {
type: "json", // json | ecs | datadog | splunk | logfmt
},
});
logger.info("User logged in", { userId: 123 });
Available Formats
JSON (Default)
Standard JSON format with all log record fields:
const logger = createLogger({
structured: {
type: "json",
},
});
logger.info("User logged in", { userId: 123, email: "user@example.com" });
Output:
{
"level": "info",
"msg": "User logged in",
"time": 1705334400000,
"context": {
"userId": 123,
"email": "user@example.com"
},
"service": "my-app",
"env": "production",
"version": "1.0.0",
"hostname": "server-01",
"pid": 12345
}
Use when:
- Generic JSON logging
- Custom log processing pipelines
- Maximum flexibility
- Default choice for most applications
Characteristics:
- Simple, flat structure
- All fields included
- Easy to parse
- Compatible with any JSON-based system
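Because each line is standalone JSON, any downstream pipeline step can consume it with plain JSON.parse. A minimal sketch of such a filter, reading log lines from stdin (the error-level filter is just an example):
import * as readline from "node:readline";

const rl = readline.createInterface({ input: process.stdin });
rl.on("line", (line) => {
  const record = JSON.parse(line);
  // Forward only error-level records, as an example
  if (record.level === "error") {
    process.stdout.write(line + "\n");
  }
});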
ECS (Elastic Common Schema)
Elasticsearch-compatible format following the ECS specification:
const logger = createLogger({
structured: {
type: "ecs",
},
});
logger.info("User logged in", { userId: 123 });
Output:
{
"@timestamp": "2024-01-15T12:00:00.000Z",
"message": "User logged in",
"log.level": "info",
"service.name": "my-app",
"service.version": "1.0.0",
"service.environment": "production",
"event.dataset": "my-app.production",
"trace.id": "abc123",
"span.id": "def456",
"labels": {
"userId": 123
}
}
Use when:
- Logging to Elasticsearch
- Using Elastic Stack (ELK/Elastic Cloud)
- Following ECS standards
- Need structured field mapping
Characteristics:
- Dotted field notation (`service.name`, `log.level`)
- `@timestamp` in ISO 8601 format
- Context mapped to `labels`
- Error details in `error.*` fields
- Trace context in `trace.id` and `span.id`
ECS field mapping:
- `@timestamp` - ISO timestamp
- `message` - Log message
- `log.level` - Log level
- `service.name` - Service name
- `service.version` - Service version
- `service.environment` - Environment
- `trace.id` - Trace ID (distributed tracing)
- `span.id` - Span ID (distributed tracing)
- `labels.*` - Custom context fields
- `error.type` - Error name
- `error.message` - Error message
- `error.stack_trace` - Stack trace
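To see this mapping applied without wiring up a logger, you can run a record through the standalone formatEcs helper (covered under Custom Formatter Function below); the traceId and spanId record fields mirror the Google Cloud Logging example later in this page:
import { formatEcs } from "cenglu";
import type { LogRecord } from "cenglu";

const record: LogRecord = {
  level: "info",
  msg: "User logged in",
  time: Date.now(),
  context: { userId: 123 },
  service: "my-app",
  traceId: "abc123",
  spanId: "def456",
};

console.log(formatEcs(record)); // ECS output with labels.userId, trace.id, span.id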
Datadog
Datadog-optimized format with proper field names and structure:
const logger = createLogger({
structured: {
type: "datadog",
},
});
logger.info("User logged in", { userId: 123, action: "login" });
Output:
{
"message": "User logged in",
"status": "info",
"level": "info",
"service": "my-app",
"env": "production",
"version": "1.0.0",
"timestamp": 1705334400000,
"dd": {
"trace_id": "abc123",
"span_id": "def456"
},
"userId": 123,
"action": "login"
}
Use when:
- Logging to Datadog
- Using Datadog APM
- Need Datadog-specific features (facets, trace correlation)
Characteristics:
- `status` field for log level
- Context fields spread at top level (for Datadog facets)
- Trace context in `dd.trace_id` and `dd.span_id`
- Timestamp as milliseconds since epoch
- Error details in `error.*` fields
Datadog field mapping:
- `message` - Log message
- `status` - Log level (for UI display)
- `level` - Log level (for filtering)
- `service` - Service name
- `env` - Environment
- `version` - Service version
- `timestamp` - Unix timestamp (milliseconds)
- `dd.trace_id` - Trace ID (APM correlation)
- `dd.span_id` - Span ID (APM correlation)
- `error.kind` - Error type
- `error.message` - Error message
- `error.stack` - Stack trace
- `*` - Context fields at top level
Splunk
Splunk HEC (HTTP Event Collector) format:
const logger = createLogger({
structured: {
type: "splunk",
},
});
logger.info("User logged in", { userId: 123 });
Output:
{
"time": 1705334400,
"host": "server-01",
"source": "my-app",
"sourcetype": "_json",
"event": {
"message": "User logged in",
"level": "info",
"service": "my-app",
"env": "production",
"version": "1.0.0",
"traceId": "abc123",
"spanId": "def456",
"context": {
"userId": 123
}
}
}
Use when:
- Logging to Splunk
- Using Splunk HEC endpoint
- Need Splunk indexing metadata
Characteristics:
- `time` in seconds (not milliseconds)
- Event data nested in `event` field
- Metadata at top level (`host`, `source`, `sourcetype`)
- Compatible with Splunk HEC
Configure Splunk options:
const logger = createLogger({
structured: {
type: "splunk",
},
// Pass options via adapter or custom formatter (see the sketch below)
});
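Since the structured config above has no Splunk-specific fields, one hedged approach to controlling HEC metadata such as the target index is to assemble the envelope yourself with the json formatter and the transform hook (see Custom Transform below); the index name here is illustrative, not a cenglu option:
const logger = createLogger({
  structured: {
    type: "json",
    transform: (record) => ({
      time: Math.floor(record.time / 1000), // HEC expects seconds
      host: record.hostname,
      source: record.service,
      sourcetype: "_json",
      index: "app-logs", // illustrative index name
      event: {
        message: record.msg,
        level: record.level,
        context: record.context,
      },
    }),
  },
});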
Splunk field mapping:
- `time` - Unix timestamp (seconds)
- `host` - Hostname
- `source` - Source name (service name)
- `sourcetype` - Source type (`_json`)
- `index` - Splunk index (optional)
- `event.message` - Log message
- `event.level` - Log level
- `event.context` - Custom context fields
- `event.error` - Error details
Logfmt
Human-readable key-value format popularized by Heroku:
const logger = createLogger({
structured: {
type: "logfmt",
},
});
logger.info("User logged in", { userId: 123, email: "user@example.com" });
Output:
time=2024-01-15T12:00:00.000Z level=info msg="User logged in" service=my-app env=production userId=123 email=user@example.com
Use when:
- Human-readable structured logs
- Command-line tools
- Simple parsing requirements
- Heroku or 12-factor app style logging
Characteristics:
- Space-separated key-value pairs
- Values quoted if they contain spaces or special characters
- Flat structure (no nesting)
- Easy to read and grep
- Compact format
Logfmt field mapping:
- `time` - ISO 8601 timestamp
- `level` - Log level
- `msg` - Log message (quoted if it contains spaces)
- `service` - Service name
- `env` - Environment
- `version` - Version
- `trace_id` - Trace ID
- `span_id` - Span ID
- Context fields at top level
- `error_name` - Error name
- `error_message` - Error message (quoted)
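To check the quoting rules directly, run a record through the standalone formatLogfmt helper (see Custom Formatter Function below); the output comment is approximate:
import { formatLogfmt } from "cenglu";

console.log(
  formatLogfmt({
    level: "info",
    msg: "User logged in", // contains spaces, so it will be quoted
    time: Date.now(),
    service: "my-app",
    context: { plan: "free tier" }, // quoted for the same reason
  })
);
// e.g. time=... level=info msg="User logged in" service=my-app plan="free tier"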
Custom Transform
Transform the log record before formatting:
const logger = createLogger({
structured: {
type: "json",
transform: (record) => ({
// Custom structure
timestamp: new Date(record.time).toISOString(),
severity: record.level.toUpperCase(),
message: record.msg,
metadata: {
service: record.service,
env: record.env,
...record.context,
},
}),
},
});
logger.info("User logged in", { userId: 123 });
Output:
{
"timestamp": "2024-01-15T12:00:00.000Z",
"severity": "INFO",
"message": "User logged in",
"metadata": {
"service": "my-app",
"env": "production",
"userId": 123
}
}
Pretty Format (Development)
Pretty, colored format for development:
const logger = createLogger({
pretty: {
enabled: process.env.NODE_ENV !== "production",
colors: true,
timestamp: "local",
},
});
logger.info("User logged in", { userId: 123 });
Output:
[2024-01-15 12:00:00] INFO User logged in
userId: 123
See Pretty Formatting for details.
Format Comparison
| Format | Size | Human Readable | Parse Speed | Use Case |
|---|---|---|---|---|
| JSON | Medium | ⚠️ | Fast | General purpose |
| ECS | Medium | ❌ | Fast | Elasticsearch |
| Datadog | Medium | ❌ | Fast | Datadog |
| Splunk | Large | ❌ | Fast | Splunk HEC |
| Logfmt | Small | ✅ | Medium | CLI tools, Heroku |
| Pretty | Large | ✅ | N/A | Development |
Platform-Specific Examples
Elasticsearch / Kibana
const logger = createLogger({
service: "user-api",
env: process.env.NODE_ENV,
structured: {
type: "ecs",
},
});
logger.info("User registered", {
userId: 123,
email: "user@example.com",
country: "US",
});
Kibana query:
service.name:"user-api" AND labels.country:"US"
Datadog
const logger = createLogger({
service: "user-api",
env: process.env.DD_ENV || "production",
version: process.env.DD_VERSION,
structured: {
type: "datadog",
},
});
logger.info("User registered", {
userId: 123,
action: "registration",
plan: "premium",
});
Datadog query:
service:user-api @action:registration @plan:premium
Splunk
const logger = createLogger({
service: "user-api",
structured: {
type: "splunk",
},
});
logger.info("User registered", { userId: 123 });
Splunk query:
source="user-api" event.level="info" event.context.userId=123
Google Cloud Logging
Use JSON with custom transform:
const logger = createLogger({
structured: {
type: "json",
transform: (record) => ({
severity: record.level.toUpperCase(),
message: record.msg,
timestamp: new Date(record.time).toISOString(),
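// Cloud Logging correlates traces only when this is the full resource name,
// i.e. "projects/<PROJECT_ID>/traces/<TRACE_ID>", not the bare trace ID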
"logging.googleapis.com/trace": record.traceId,
"logging.googleapis.com/spanId": record.spanId,
...record.context,
}),
},
});
AWS CloudWatch
Use JSON format (default):
const logger = createLogger({
service: process.env.AWS_LAMBDA_FUNCTION_NAME,
structured: {
type: "json",
},
});
CloudWatch Insights query:
fields @timestamp, msg, context.userId
| filter level = "error"
| sort @timestamp desc
Environment-Based Format
Switch format based on environment:
const logger = createLogger({
service: "my-app",
env: process.env.NODE_ENV,
// Pretty in development, structured in production
pretty: {
enabled: process.env.NODE_ENV !== "production",
},
structured: {
type: process.env.NODE_ENV === "production"
? (process.env.LOG_FORMAT as "json" | "ecs" | "datadog") || "json"
: "json",
},
});
Usage:
# Development (pretty)
npm run dev
# Production (JSON)
NODE_ENV=production node app.js
# Production (ECS format for Elasticsearch)
NODE_ENV=production LOG_FORMAT=ecs node app.js
# Production (Datadog format)
NODE_ENV=production LOG_FORMAT=datadog node app.js
Multiple Formats (Multiple Transports)
Output different formats to different destinations:
import { createLogger, createConsoleTransport, createFileTransport } from "cenglu";
const logger = createLogger({
service: "my-app",
transports: [
// Console: Pretty format for development
createConsoleTransport({
enabled: process.env.NODE_ENV !== "production",
}),
// File: JSON format for production
createFileTransport({
enabled: process.env.NODE_ENV === "production",
dir: "./logs",
}),
],
// Default format
structured: {
type: "json",
},
pretty: {
enabled: process.env.NODE_ENV !== "production",
},
});
Note: The format applies to all transports. For transport-specific formatting, use custom transports or adapters, as sketched below.
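For example, a hedged sketch of an adapter that re-formats each record for a second destination using the standalone formatters described in the next section (the shipper function is a hypothetical stub):
import { createLogger, formatDatadog } from "cenglu";

// Hypothetical shipper stub; replace with your HTTP forwarder
function sendToDatadog(payload: unknown) {
  console.log(payload);
}

const logger = createLogger({
  structured: { type: "json" }, // default output stays JSON
  adapters: [
    {
      name: "datadog-shipper",
      handle: (record, _formatted) => {
        // Ignore the default formatting and re-format for Datadog
        sendToDatadog(formatDatadog(record));
      },
    },
  ],
});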
Custom Formatter Function
Use standalone formatter functions:
import { formatEcs, formatDatadog, formatLogfmt } from "cenglu";
import type { LogRecord } from "cenglu";
const record: LogRecord = {
level: "info",
msg: "User logged in",
time: Date.now(),
context: { userId: 123 },
service: "my-app",
};
const ecsLog = formatEcs(record);
const datadogLog = formatDatadog(record);
const logfmtLog = formatLogfmt(record);
console.log(ecsLog); // ECS JSON
console.log(datadogLog); // Datadog JSON
console.log(logfmtLog); // Logfmt string
Performance
Best Practices
- Production: Use JSON, ECS, or Datadog (platform-specific)
- Development: Use pretty format
- CLI tools: Use logfmt
- High-throughput: Use JSON or logfmt (fastest)
- Avoid: Pretty format in production (slow, large output)
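If throughput matters, measure on your own records rather than relying on the comparison table above. A minimal timing sketch using the standalone formatters (plain JSON.stringify stands in for the json formatter as a baseline; the iteration count is arbitrary):
import { formatEcs, formatLogfmt } from "cenglu";
import type { LogRecord } from "cenglu";

const record: LogRecord = {
  level: "info",
  msg: "User logged in",
  time: Date.now(),
  context: { userId: 123 },
  service: "my-app",
};

const candidates: Array<[string, (r: LogRecord) => unknown]> = [
  ["json", (r) => JSON.stringify(r)],
  ["ecs", formatEcs],
  ["logfmt", formatLogfmt],
];

for (const [name, format] of candidates) {
  const start = performance.now();
  for (let i = 0; i < 100_000; i++) format(record);
  console.log(`${name}: ${(performance.now() - start).toFixed(1)}ms`);
}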
Testing
Test Formatter Output
import { createLogger } from "cenglu";
import { test, expect } from "vitest";
test("formats logs as JSON", () => {
const logs: string[] = [];
const logger = createLogger({
structured: { type: "json" },
adapters: [
{
name: "test",
handle: (record, formatted) => {
logs.push(formatted);
},
},
],
});
logger.info("Test message", { userId: 123 });
const parsed = JSON.parse(logs[0]);
expect(parsed.msg).toBe("Test message");
expect(parsed.context.userId).toBe(123);
});
Test Custom Transform
test("applies custom transform", () => {
const logs: string[] = [];
const logger = createLogger({
structured: {
type: "json",
transform: (record) => ({
severity: record.level.toUpperCase(),
text: record.msg,
}),
},
adapters: [
{
name: "test",
handle: (record, formatted) => {
logs.push(formatted);
},
},
],
});
logger.info("Test");
const parsed = JSON.parse(logs[0]);
expect(parsed.severity).toBe("INFO");
expect(parsed.text).toBe("Test");
});
Troubleshooting
Format Not Applied
Problem: Logs still in default format
Solutions:
- Check the `structured` config:
  structured: {
    type: "ecs", // Not "ECS" (case-sensitive)
  }
- Verify the format is supported:
  // Valid: json, ecs, datadog, splunk, logfmt
  // Invalid: xml, yaml, etc.
- Check that pretty mode isn't overriding it:
  pretty: {
    enabled: false, // Disable for structured format
  }
Fields Missing in Output
Problem: Expected fields not in formatted output
Solutions:
- Check the field names for the format:
  // ECS: uses labels.userId (not context.userId)
  // Datadog: uses userId at top level (not context.userId)
- Verify that context is provided:
  logger.info("Message", { userId: 123 }); // ✅ Included
  logger.info("Message"); // ❌ No context
- Check the transform function:
  transform: (record) => ({
    message: record.msg, // Must include all desired fields
  })
Parse Errors
Problem: Log aggregation system can't parse logs
Solutions:
- Use the correct format for the platform:
  // Elasticsearch → use "ecs"
  // Datadog → use "datadog"
  // Generic → use "json"
- Check for invalid JSON:
  // Circular references handled automatically
  // BigInt converted to string
  // Errors serialized properly