Simple API
Use log.set to accumulate context, and throw structured errors with a why and a fix. One wide event captures everything, whether the request succeeds or fails.
export default defineEventHandler(async (event) => {
  const log = useLogger(event)

  log.set({ user: { id: user.id, plan: user.plan } })
  log.set({ cart: { items: 3, total: 9999 } })

  if (!charge.success) {
    throw createError({
      status: 402,
      why: 'Card declined by issuer',
      fix: 'Try a different card',
    })
  }

  return { orderId: charge.id }
})

✓ One log with full context
✓ Actionable error with context
Agent-ready
Structured fields, machine-readable context, and actionable metadata give AI agents everything they need to diagnose and fix problems. Enable the file-system drain to write NDJSON logs locally and let agents read them directly.
Card declined by issuer — insufficient funds
Pro plan user (#1842) blocked on payment
Prompt for alternate payment method
stripe.com/docs/declines/codes
✓ Auto-created issue PAY-4521
Non-blocking
Pipeline runs in the background. Your response ships immediately.
Guaranteed delivery
Exponential backoff with jitter ensures logs reach every destination.
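The retry schedule can be sketched in a few lines; the helper names below are illustrative, not part of evlog's API:

```typescript
// Illustrative sketch of exponential backoff with "full jitter".
// `attempt` is 0-based; the exponential window is capped at `maxDelayMs`.
function backoffDelay(attempt: number, baseMs = 250, maxDelayMs = 30_000): number {
  const windowMs = Math.min(maxDelayMs, baseMs * 2 ** attempt)
  // Full jitter: pick a uniform delay in [0, windowMs) so many clients
  // retrying at once don't hammer the destination in lockstep.
  return Math.random() * windowMs
}

async function deliverWithRetry(send: () => Promise<void>, maxAttempts = 5): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await send()
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err // out of retries: give up
      await new Promise(resolve => setTimeout(resolve, backoffDelay(attempt)))
    }
  }
}
```

The jitter trades a predictable schedule for herd avoidance: each retry lands somewhere inside the exponential window rather than exactly at its edge.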
Bring your own drain
Write a simple function to send logs anywhere.
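For illustration, a custom drain might look like the following; the WideEvent shape and webhook helper here are hypothetical, not evlog's exact interface:

```typescript
// Hypothetical wide-event shape: a flat object plus arbitrary context fields.
interface WideEvent {
  level: string
  timestamp: number
  [key: string]: unknown
}

// One event per line, newline-delimited (NDJSON).
function toNDJSON(events: WideEvent[]): string {
  return events.map(e => JSON.stringify(e)).join('\n')
}

// A drain is just a function that receives a batch and ships it somewhere —
// here, POSTed as NDJSON to any HTTP endpoint.
function createWebhookDrain(url: string) {
  return async (events: WideEvent[]): Promise<void> => {
    await fetch(url, {
      method: 'POST',
      headers: { 'content-type': 'application/x-ndjson' },
      body: toNDJSON(events),
    })
  }
}
```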
import { createDrainPipeline } from 'evlog/pipeline'
import { createAxiomDrain } from 'evlog/axiom'
import { createSentryDrain } from 'evlog/sentry'

const pipeline = createDrainPipeline({
  drains: [
    createAxiomDrain(),
    createSentryDrain(),
  ],
  batchSize: 50,
  flushInterval: 5000,
})

Client-side logging
Capture browser events and drain them to the server. Automatic batching, retries, and page-aware flushing, on the same pipeline the server uses.
Automatic batching
Events are batched by size and time interval, reducing network overhead.
Page-aware delivery
Switches to sendBeacon when the page is hidden. No event left behind.
Server-side validation
Origin check, payload sanitization, and source tagging on every ingest.
import { createHttpLogDrain } from 'evlog/http'

const drain = createHttpLogDrain({
  drain: {
    endpoint: '/api/_evlog/ingest',
  },
  pipeline: {
    batch: { size: 25, intervalMs: 2000 },
    retry: { maxAttempts: 2 },
  },
})

initLogger({ drain })
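Under the hood, page-aware delivery amounts to switching transports when the tab goes to the background. A minimal browser sketch, with a hypothetical pendingEvents queue (not evlog's internals):

```typescript
// Beacons survive page unload, while in-flight fetches may be cancelled,
// so switch transports when the page is hidden.
type Transport = 'fetch' | 'beacon'

function chooseTransport(pageHidden: boolean, beaconAvailable: boolean): Transport {
  return pageHidden && beaconAvailable ? 'beacon' : 'fetch'
}

const pendingEvents: object[] = []

function flush(endpoint: string): void {
  if (pendingEvents.length === 0) return
  const payload = JSON.stringify(pendingEvents.splice(0, pendingEvents.length))
  const transport = chooseTransport(
    typeof document !== 'undefined' && document.visibilityState === 'hidden',
    typeof navigator !== 'undefined' && 'sendBeacon' in navigator,
  )
  if (transport === 'beacon') {
    navigator.sendBeacon(endpoint, payload)
  } else {
    // keepalive lets the request outlive the page in most browsers
    fetch(endpoint, { method: 'POST', body: payload, keepalive: true })
  }
}

// Flush with a beacon whenever the tab is hidden.
if (typeof document !== 'undefined') {
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') flush('/api/_evlog/ingest')
  })
}
```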
Sampling
Two-stage filtering: head sampling drops noise per level, tail sampling rescues critical events. Never miss an error, a slow request, or a critical path.
initLogger({
  sampling: {
    // Head: per-level rates
    rates: {
      info: 10,   // keep 10%
      warn: 50,   // keep 50%
      error: 100, // always
    },
    // Tail: force keep if match
    keep: [
      { status: 400 },
      { duration: 1000 },
      { path: '/api/critical/**' },
    ],
  },
})

5 kept · 3 dropped · noise reduced without data loss
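Conceptually the two stages combine as below; this is a simplified model of the decision (exact-match keep rules only, no glob or threshold matching), not evlog's source:

```typescript
// Simplified two-stage sampling: tail "keep" rules override head per-level
// rates, so matching events always survive.
interface Event { level: 'info' | 'warn' | 'error'; [k: string]: unknown }

const rates: Record<string, number> = { info: 10, warn: 50, error: 100 }

// Hypothetical tail rule; `matches` does a naive exact field comparison.
const keepRules: Record<string, unknown>[] = [{ status: 400 }]

function matches(event: Event, rule: Record<string, unknown>): boolean {
  return Object.entries(rule).every(([key, value]) => event[key] === value)
}

function shouldKeep(event: Event, random = Math.random()): boolean {
  // Tail sampling: force-keep critical events regardless of level rate.
  if (keepRules.some(rule => matches(event, rule))) return true
  // Head sampling: keep `rates[level]` percent of the rest.
  return random * 100 < (rates[event.level] ?? 100)
}
```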
AI observability
Your AI endpoints are a black box. You don't know how many tokens each request consumes, which tools the model called, or how fast the stream was. Wrap your model with one line of code and every call is captured into the wide event: cost tracking, tool visibility, streaming performance, cache hits, reasoning tokens.
Zero boilerplate
Wrap the model, done. No manual token tracking needed.
Works with everything
generateText, streamText, ToolLoopAgent, generateObject.
Cost and performance
Token usage, cache hits, time to first chunk, tokens per second.
const ai = createAILogger(log)

const result = streamText({
  model: ai.wrap('anthropic/claude-sonnet-4.6'),
  messages,
})

Performance
Zero dependencies, 5.2 kB gzipped, roughly 3µs per request. Benchmarked against pino, consola, and winston: 8x faster than pino in the wide-event scenario, while producing richer, more useful output.
ops/sec · higher is better · silent mode (no I/O)
1 event, not N log lines
Accumulate context, emit once. 75% less data downstream.
In-place mutations
No object spreads, no copies. Direct recursive merge.
Lazy allocation
Timestamps, sampling context — created only when needed.
No serialization until drain
Plain objects throughout. JSON.stringify runs once at the end.
Zero dependencies
No transitive deps. Nothing to audit, nothing to break.
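The in-place merge point can be illustrated with a minimal recursive merge; this is a sketch of the idea, not evlog's implementation:

```typescript
// Merge `patch` into `target` without spreads or copies: nested plain
// objects are merged recursively, everything else overwrites in place.
type Obj = Record<string, unknown>

function isPlainObject(value: unknown): value is Obj {
  return typeof value === 'object' && value !== null && !Array.isArray(value)
}

function mergeInPlace(target: Obj, patch: Obj): Obj {
  for (const key of Object.keys(patch)) {
    const next = patch[key]
    if (isPlainObject(next) && isPlainObject(target[key])) {
      mergeInPlace(target[key] as Obj, next) // descend, reusing the existing object
    } else {
      target[key] = next // scalars, arrays, new keys: direct overwrite
    }
  }
  return target
}

// Successive log.set calls accumulate into one wide event:
const event: Obj = { user: { id: 1842 } }
mergeInPlace(event, { user: { plan: 'pro' }, cart: { items: 3 } })
// event is now { user: { id: 1842, plan: 'pro' }, cart: { items: 3 } }
```

Because the merge mutates the existing object, repeated set calls allocate nothing beyond the patches themselves.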
Total overhead per request
create + 3x set + emit + sampling + enrichers
~3µs
0.003ms
export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const { cartId } = await readBody(event)

  const cart = await db.findCart(cartId)
  log.set({ cart: { items: cart.items.length, total: cart.total } })

  const charge = await stripe.charge(cart.total)
  log.set({ stripe: { chargeId: charge.id } })

  if (!charge.success) {
    throw createError({
      status: 402,
      message: 'Payment failed',
      why: charge.decline_reason,
      fix: 'Try a different payment method',
    })
  }

  return { orderId: charge.id }
})