1.10.x Release Notes

This page summarizes notable changes in the 1.10 minor series. Patch-level releases are listed below.

New admin-visible features in 1.10 focus on LDAP robustness, efficiency, and observability. Most changes are opt-in and ship with conservative defaults.

Highlights

  • LDAP Backpressure and Queueing
    • Per-pool queue limits with drop policy and metrics.
  • Context- and Deadline-awareness
    • Respect client deadlines when acquiring pool capacity or connecting.
  • Timeouts, Retries, Circuit Breaker
    • Per-op timeouts; jittered retries for transient network errors; per-target circuit breaker.
  • Target Selection & Health
    • Least-outstanding selection, passive/active health integration, breaker-aware picking.
  • Request Optimization & Caching
    • Negative cache with singleflight; optional LRU cache; raw result shaping.
  • Robustness & Security
    • Lame-duck connection handling; RFC4515 filter escaping; optional per-pool auth rate limits.
  • Observability
    • New metrics for errors, retries, breaker state, target health/inflight, and queue depth/wait/drops.

Configuration changes (New in v1.10.0)

All keys are documented on the LDAP backend page and in the Full Configuration Example. Quick overview:

  • Queue limits per pool:
    • lookup_queue_length, auth_queue_length (0 = unlimited)
  • Operation timeouts:
    • search_timeout, bind_timeout, modify_timeout
  • Search guardrails:
    • search_size_limit, search_time_limit
  • Retry/backoff and breaker:
    • retry_max, retry_base, retry_max_backoff
    • cb_failure_threshold, cb_cooldown, cb_half_open_max
  • Health checks:
    • health_check_interval, health_check_timeout
  • Caching and shaping:
    • dn_cache_ttl, membership_cache_ttl, negative_cache_ttl
    • cache_max_entries, cache_impl (ttl|lru), include_raw_result
  • Optional per-pool rate limiting:
    • auth_rate_limit_per_second, auth_rate_limit_burst
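
For orientation, a sketch combining several of these keys (the key names are from this release; the exact nesting follows the LDAP backend page, and the values shown are illustrative, not recommendations):

```yaml
ldap:
  config:
    lookup_queue_length: 256   # 0 = unlimited
    auth_queue_length: 256
    search_timeout: 3s
    bind_timeout: 3s
    retry_max: 2
    retry_base: 100ms
    retry_max_backoff: 2s
    cb_failure_threshold: 5
    cb_cooldown: 30s
    health_check_interval: 15s
    negative_cache_ttl: 30s
    cache_impl: ttl            # ttl|lru
```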

Server (New in v1.10.0)

  • Global operation timeouts under server.timeouts:
    • redis_read — Default: 1s
    • redis_write — Default: 2s
    • ldap_search — Default: 3s
    • ldap_bind — Default: 3s
    • ldap_modify — Default: 5s
    • singleflight_work — Default: 3s
    • lua_backend — Default: 5s

Notes

  • Values use Go duration syntax (e.g., 250ms, 2s, 1m30s).
  • Omitted or non-positive values fall back to the defaults above.
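
Expressed in configuration, the defaults above look like this (every key is optional and is shown here with its default value):

```yaml
server:
  timeouts:
    redis_read: 1s
    redis_write: 2s
    ldap_search: 3s
    ldap_bind: 3s
    ldap_modify: 5s
    singleflight_work: 3s
    lua_backend: 5s
```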

See:

  • Configuration → Server Configuration → Timeouts
  • Configuration → Full Configuration Example

Lua (New in v1.10.0)

  • Introduced backend_number_of_workers for Lua backend workers.
  • Deprecated number_of_workers in favor of backend_number_of_workers (still supported for compatibility).
  • Action workers: the corresponding environment variable has been removed; configure lua.config.action_number_of_workers instead.
  • New VM pool size keys for non-worker categories: feature_vm_pool_size, filter_vm_pool_size, hook_vm_pool_size (these fall back to the backend worker count if unset).
  • Action VM pool size is now automatically coupled 1:1 to action_number_of_workers (no separate key).
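
A sketch of the corresponding lua.config keys (the values are illustrative; as noted above, the VM pool sizes fall back to the backend worker count when unset):

```yaml
lua:
  config:
    backend_number_of_workers: 10   # replaces the deprecated number_of_workers
    action_number_of_workers: 4     # the action VM pool is sized 1:1 from this
    feature_vm_pool_size: 10        # optional; falls back to backend workers
    filter_vm_pool_size: 10
    hook_vm_pool_size: 10
```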

Lua filter execution flags (New in v1.10.0)

Nauthilus now lets you control when individual Lua filters run, based on the authentication outcome of the current request:

  • when_authenticated (bool): run the filter when request.authenticated == true.
  • when_unauthenticated (bool): run the filter when request.authenticated == false.
  • when_no_auth (bool): run the filter when request.no_auth == true (passwordless flows, e.g. certain HTTP/OIDC endpoints).

Defaults and selection logic:

  • If all three flags are omitted, or all are explicitly set to false, Nauthilus applies safe defaults: when_authenticated=true, when_unauthenticated=true, when_no_auth=false.
  • The local in-memory cache sets authenticated=true on cache hits, so filters configured for authenticated requests also run for cache hits.
  • The selected mode is logged as filter_mode=authenticated|unauthenticated|no_auth in the request’s main log record.
  • When the default fallback is applied because all flags were false, an informational log entry is emitted to make this explicit.
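
For orientation, a minimal sketch of a filter entry using these flags (the filter name and script path are placeholders; see the Full Configuration Example for the authoritative structure):

```yaml
lua:
  filters:
    - name: geoip_check             # placeholder name
      script_path: ./filters/geoip_check.lua
      when_authenticated: true      # run on successful auth (including cache hits)
      when_unauthenticated: false
      when_no_auth: false
```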

Configuration examples are included on the Full Configuration Example page.

See:

  • Configuration → Full Configuration Example
  • Configuration → Database Backends → LDAP Backend
  • Configuration → Database Backends → Lua Backend

Security and protection (New in v1.10.0)

  • Account Protection dry‑run mode by default:

    • New environment variable PROTECT_ENFORCE_REJECT controls enforcement.
    • Default behavior (unset or "false"): do not block in the Account Protection filter; instead apply progressive delay and set Step‑Up hints.
    • When set to true: restore the previous behavior and actively reject unauthenticated requests while protection is active.
    • Exposed headers for HTTP/OIDC frontends:
      • X-Nauthilus-Protection: stepup
      • X-Nauthilus-Protection-Reason: <reasons>
      • X-Nauthilus-Protection-Mode: dry-run (only when enforcement is disabled)
    • Redis keys for coordination: ntc:acct:<username>:protection, ntc:acct:<username>:stepup, ntc:acct:protection_active.
  • Scoped IP normalization and cluster‑wide dedup for metrics:
    • See also Configuration → Deduplication for in-process dedup. Note: since v1.10.3, distributed (Redis-based) deduplication for auth has been removed.
    • New Lua configuration options under lua.config:
      • ip_scoping_v6_cidr (e.g., 64 for IPv6 /64 privacy normalization)
      • ip_scoping_v4_cidr (e.g., 24 for IPv4 /24 NAT aggregation)
    • New Lua API function nauthilus_misc.scoped_ip(ctx, ip) that wraps the Go scoper for consistent results across components.
      • Contexts: "lua_generic" (default), "rwp", "tolerations".
    • Long‑window per‑account metrics (uniq_ips_24h, uniq_ips_7d) can now be based on scoped IPs, reducing false positives from IPv6 privacy rotation and multi‑node duplication.
  • Guidance: Start with ip_scoping_v6_cidr: 64 and ip_scoping_v4_cidr: 24 and adjust thresholds in Account Protection accordingly. See the Account Protection docs for details.
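
Following that guidance, the suggested starting point expressed in configuration:

```yaml
lua:
  config:
    ip_scoping_v6_cidr: 64   # aggregate IPv6 privacy addresses to their /64
    ip_scoping_v4_cidr: 24   # aggregate IPv4 NAT pools to a /24
```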

Logging (migration to slog) — New in v1.10.0

Nauthilus switched its internal logging from go-kit/log (with its level package) to Go's standard library log/slog. This affects field names and formatting. Admins should be aware of the following changes:

  • Timestamps: the field was previously named ts (go-kit). With slog it is named time.
  • Caller/source information: previously caller; with slog it is source when enabled and it renders differently:
    • JSON: a structured object, e.g. {"source":{"file":"server/core/http.go","line":123}}.
    • Text: a compact suffix like source=server/core/http.go:123.
  • Message: go-kit patterns often logged a key msg="...". With slog the message is a dedicated field; our compatibility wrapper extracts msg and places it into the message column. The msg key is not duplicated in attributes.
  • Level: no separate level key needs to be added by callers; slog renders the level automatically.

New configuration:

  • server.log.add_source (bool): controls whether slog includes source (file:line). Default: false. See configuration docs for details and performance notes.
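
For example, to enable source information:

```yaml
server:
  log:
    add_source: true   # include file:line in log records; default: false
```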

Examples

  • Previous JSON (go-kit/log):
    • { "ts":"2024-11-30T12:34:56Z", "level":"info", "caller":"core/http.go:42", "msg":"HTTP request", "path":"/ping" }
  • New JSON (slog):
    • { "time":"2024-11-30T12:34:56Z", "level":"INFO", "msg":"HTTP request", "path":"/ping", "source":{"file":"server/core/http.go","line":42} }

Notes

  • The exact text rendering differs between Text and JSON handlers; JSON is recommended for machine processing.
  • The migration is backward compatible for most log call sites due to an internal wrapper, but field names in outputs change as described above.

Metrics (Prometheus)

New and extended metrics added in 1.10:

  • ldap_errors_total{pool,op,code}
  • ldap_retries_total{pool,op}
  • ldap_breaker_state{pool,target}
  • ldap_target_health{pool,target}
  • ldap_target_inflight{pool,target}
  • ldap_queue_depth{pool,type}
  • ldap_queue_wait_seconds{pool,type}
  • ldap_queue_dropped_total{pool,type}
  • Cache metrics:
    • ldap_cache_hits_total{pool,type}
    • ldap_cache_misses_total{pool,type}
    • ldap_cache_entries{pool,type}
    • ldap_cache_evictions_total{pool,type}
      • Note: With the shared TTL cache implementation, evictions are reported with labels pool="shared", type="ttl"; per-pool eviction metrics are available when using the LRU cache implementation.
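
As an illustration, a Prometheus alerting rule built on the new queue metrics might look like this (the rule name and thresholds are hypothetical; only the metric ldap_queue_dropped_total{pool,type} comes from this release):

```yaml
groups:
  - name: nauthilus-ldap
    rules:
      - alert: LdapQueueDropsDetected
        expr: increase(ldap_queue_dropped_total[5m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "LDAP queue dropping requests (pool {{ $labels.pool }}, type {{ $labels.type }})"
```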

Upgrade and compatibility notes

  • Defaults are conservative; most new behaviors are disabled unless configured (e.g., queue limits and rate limits are off by default, and include_raw_result=false remains the default).
  • Account Protection: enforcement is now opt‑in. Default is dry‑run (no blocking). Set PROTECT_ENFORCE_REJECT=true to enforce rejections while protection is active.
  • If you rely on unbounded queues, consider setting lookup_queue_length/auth_queue_length explicitly to 0 (unlimited) to preserve behavior.
  • When enabling search guardrails, validate protocol filters and attribute projections to avoid server-side size/time limit hits.

1.10.3 — 2025-11-07

Distributed deduplication (Redis-based cross-instance coordination for the password flow) has been removed due to persistent reliability and complexity issues. Authentication now only uses in-process deduplication (singleflight) within one instance.

Changes

  • server.dedup.distributed_enabled: Deprecated in 1.10.3 and ignored. It no longer has any effect.
  • server.dedup.in_process_enabled: Still supported. Controls local dedup within a single instance (default: true).
  • server.timeouts.singleflight_work: Leader work budget for the in-process dedup path (default: 3s).
  • HandlePassword path simplified accordingly; no Redis locks/PubSub/result envelopes.
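
After removing the deprecated key, a minimal deduplication-related configuration looks like this (both values shown are the defaults):

```yaml
server:
  dedup:
    in_process_enabled: true    # local singleflight dedup within one instance
  timeouts:
    singleflight_work: 3s       # leader work budget for in-process dedup
```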

Admin guidance

  • You can remove distributed_enabled from your configuration; leaving it in place is safe, but it has no effect and logs a deprecation warning.
  • Keep in_process_enabled at its default (true) for best performance under bursts.
  • See Configuration → Deduplication, which now documents only in-process dedup.

1.10.2 — 2025-11-06

This release adds account-centric Global Pattern Monitoring heuristics to reduce false positives in distributed brute-force detection. It introduces new environment variables (all with conservative defaults):

  • GPM_THRESH_UNIQ_1H (default 12)
  • GPM_THRESH_UNIQ_24H (default 25)
  • GPM_THRESH_UNIQ_7D (default 60)
  • GPM_MIN_FAILS_24H (default 8)
  • GPM_THRESH_IP_TO_FAIL_RATIO (default 1.2)
  • GPM_ATTACK_TTL_SEC (default 43200 = 12h)

Behavioral notes:

  • Detection now requires a short-term unique-IP threshold (1h OR 24h) AND the 7d long-term threshold, plus a minimum number of failed attempts within 24h and a unique-IPs-to-failures ratio check in the 1h OR 24h window.
  • The ZSET flag for attacked accounts now uses GPM_ATTACK_TTL_SEC as a sliding horizon, reducing long-tail effects from old peaks.

Admin guidance:

  • Environments with carrier-grade NAT, mobile networks, or Tor may want to start with higher thresholds (e.g., 24h: 30–45, 7d: 80–110) and adjust after observing traffic for 48–72 hours.
  • See Configuration → Reference → Global Pattern Monitoring (GPM_*) for details.
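
As an illustration of that guidance, a hypothetical container deployment raising the thresholds for a carrier-NAT-heavy user base could set (docker-compose excerpt; the deployment mechanism and the chosen values are assumptions):

```yaml
environment:
  GPM_THRESH_UNIQ_24H: "35"    # default 25; raised for carrier-NAT traffic
  GPM_THRESH_UNIQ_7D: "90"     # default 60
  GPM_ATTACK_TTL_SEC: "43200"  # default; 12h sliding horizon
```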

1.10.0

Initial 1.10 release bringing all features listed above.