Their CFO Asked Why Splunk Cost More Than Their Core Banking Platform
The Splunk renewal came in at $3.7M. For context, that was more than they spent on their core banking middleware. The problem wasn't Splunk - it was that 73% of what they were ingesting was debug logs and health checks.
- Client
- Regional Bank (Top 25 US)
- Industry
- Financial Services - Enterprise IT
- Use Case
- Observability Cost Optimization & Security Event Processing
- Timeline
- Pilot in 4 weeks, full rollout in 9 weeks
- ROI
- $2.3M annual savings (7-month payback)
The Challenge
Every December, the Splunk renewal triggered the same conversation: why is this so expensive? The infrastructure team knew the answer - 73% of what they ingested was noise. But they didn't have a way to filter it without building custom solutions for each of their 247 log sources.
- 01 Splunk bill at $3.7M annually - 38% increase from prior year
- 02 73% of ingested logs were debug, health checks, or duplicates
- 03 Security team couldn't find real alerts in 14.3TB/day of noise
- 04 247 different log sources, no consistent classification
- 05 GDPR requiring PII masking before EU data left the region
- 06 Procurement gave them 90 days to show a cost reduction path
The Solution
We deployed Expanso collectors at each major log source. The collectors classify logs on arrival - debug gets dropped, security events get priority, PII gets masked. Splunk only ingests what someone might actually look at.
Source-Side Classification
Each log line gets classified on arrival. DEBUG and TRACE drop immediately. INFO aggregates into hourly summaries. WARN, ERROR, and security events forward in real-time. Simple rules, massive reduction.
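Those rules are simple enough to sketch in a few lines. This is a minimal illustration in Python, not Expanso's actual API - the `classify` function, `Action` enum, and the assumed `TIMESTAMP LEVEL message` line shape are all assumptions for the example:

```python
import re
from enum import Enum

class Action(Enum):
    DROP = "drop"            # discarded at the source, never shipped
    AGGREGATE = "aggregate"  # rolled into an hourly summary
    FORWARD = "forward"      # sent to Splunk in real time

# Hypothetical pattern for lines that count as security events.
SECURITY_PATTERN = re.compile(
    r"auth(entication)?\s+fail|privilege|denied|login",
    re.IGNORECASE,
)

def classify(line: str) -> Action:
    """Decide what happens to a raw log line on arrival."""
    if SECURITY_PATTERN.search(line):
        return Action.FORWARD          # security events always forward
    upper = line.upper()
    if " DEBUG " in upper or " TRACE " in upper:
        return Action.DROP             # the 73% that never gets read
    if " WARN " in upper or " ERROR " in upper:
        return Action.FORWARD
    if " INFO " in upper:
        return Action.AGGREGATE        # hourly counts instead of raw lines
    return Action.FORWARD              # unknown levels fail open
```

Checking the security pattern before the level keywords matters: an authentication failure logged at INFO should still forward in real time rather than disappear into an hourly summary.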
PII Masking at Source
Credit card numbers, SSNs, and account IDs get masked before logs leave the source server. No PII ever reaches Splunk. GDPR and PCI auditors stopped asking questions.
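Source-side masking can be as plain as a list of regex substitutions applied before a line leaves the host. A minimal sketch - the patterns and the `ACCT-` account-ID format are illustrative assumptions, not the bank's actual configuration:

```python
import re

# Illustrative masking rules. Order matters: more specific or
# longer-match patterns should run before broader ones.
MASKS = [
    # 13-16 digit card numbers, optionally space- or dash-separated
    (re.compile(r"\b\d(?:[ -]?\d){12,15}\b"), "[CARD]"),
    # US Social Security numbers: 123-45-6789
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    # Hypothetical internal account IDs: ACCT- plus 8-12 digits
    (re.compile(r"\bACCT-\d{8,12}\b"), "[ACCT]"),
]

def mask_pii(line: str) -> str:
    """Mask PII in a log line before it leaves the source server."""
    for pattern, replacement in MASKS:
        line = pattern.sub(replacement, line)
    return line
```

Because this runs on the source host, nothing downstream - forwarders, indexers, Splunk itself - ever sees the raw values, which is what makes the GDPR and PCI story simple.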
Security Event Extraction
Authentication failures, privilege escalations, and anomaly patterns get extracted and enriched before forwarding. Security team gets structured events, not grep sessions.
The Results
The next Splunk renewal came in at $1.4M. Security team response time improved because they weren't searching through terabytes of health checks. The CFO stopped asking about observability costs.
- Splunk ingestion dropped from 14.3TB/day to 5.2TB/day
- Annual cost reduced from $3.7M to $1.4M
- Security alert triage time dropped from 23 minutes to 5.6 minutes
- Zero PII incidents since deployment - previous year had 3
- Pilot proved value in 4 weeks with 12 sources, remaining 235 in 9 weeks
- Same log retention policy - just stopped storing garbage
- Splunk search performance improved 2.3x with less noise

Is your observability bill climbing faster than your infrastructure spend?
If you're paying to store logs no one looks at, we should talk. We've cut Splunk, Datadog, and Elastic bills by filtering noise at the source.
