Immersion Analytics for Security & Observability
Effective Security & Observability each depend on many columns of data at once, in service of goals such as:
- Cybersecurity: prioritize threats and cut dwell time.
- Observability: resolve faster and decide rollback vs. ride-through.
- Corporate Security: reduce risk and accelerate response.
- FinOps: cut cloud waste and align cost with value.
Cybersecurity (SIEM/SOAR/XDR)
Here are 10 numeric, per-alert/per-asset variables that are high-value in cybersecurity (e.g., Splunk, Microsoft Sentinel, IBM QRadar, Google Chronicle, CrowdStrike Falcon, Palo Alto Cortex) and are well suited to visualization with Immersion Analytics:
| # | Variable | What it is (numeric) | Why it matters | Good IA mapping (suggestion) |
|---|---|---|---|---|
| 1 | Severity Score (0–100) | Engine or analyst-assigned risk | Drives triage order | Y-axis (higher → up) |
| 2 | Asset Criticality (0–100) | Business impact weight of asset | Puts risk in business context | X-axis (higher → right) |
| 3 | Dwell Time (hours) | Time from first malicious activity to detection | Longer dwell = higher urgency | Z-depth (closer = longer dwell) |
| 4 | Lateral Movement Hops (#) | Distinct hosts/users touched | Indicates spread and complexity | Size (bigger = more hops) |
| 5 | Privilege Level (0–5) | Highest effective privilege attained | Higher privilege = higher blast potential | Satellites (more satellites = higher privilege) |
| 6 | Exploitability Probability (EPSS %, 0–100) | Likelihood a related CVE is exploited | Prioritizes exploitable paths | Shimmer (stronger = higher) |
| 7 | Data Exfil Volume (MB/GB) | Outbound to untrusted destinations | Direct loss indicator | Color (hotter = more) |
| 8 | Detection Confidence (0–1) | Model/analytic confidence score | Reduces false positives | Transparency (solid = higher confidence) |
| 9 | SLA Time Remaining (min) | Minutes until triage/containment SLA breach | Prevents missed obligations | Pulsation (faster = nearer breach) |
| 10 | Blast Radius (# affected endpoints/users) | Scope of impact so far | Focuses containment effort | Glow (brighter = larger radius) |
What risk could you retire—and how much response time could you save—by seeing all ten, simultaneously, across every alert, asset, and user?
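As a rough sketch of how two of these variables can be derived from raw alert data, the snippet below computes Dwell Time (#3) from timestamps and blends Severity (#1), Asset Criticality (#2), and EPSS (#6) into a single triage score. The equal weighting and field names are illustrative assumptions, not part of any vendor's schema:

```python
from datetime import datetime

def dwell_time_hours(first_activity_iso: str, detected_iso: str) -> float:
    """Hours from first malicious activity to detection (Dwell Time, #3)."""
    first = datetime.fromisoformat(first_activity_iso)
    detected = datetime.fromisoformat(detected_iso)
    return (detected - first).total_seconds() / 3600.0

def triage_score(severity: float, asset_criticality: float, epss_pct: float) -> float:
    """Equal-weight blend of Severity (#1, 0-100), Asset Criticality (#2, 0-100),
    and EPSS (#6, %) into a 0-100 triage score; the weighting is an assumption."""
    return round((severity + asset_criticality + epss_pct) / 3.0, 1)

# Example: an alert detected two days after first activity, on a critical asset
hours = dwell_time_hours("2024-05-01T02:00:00", "2024-05-03T02:00:00")  # 48.0
score = triage_score(severity=90, asset_criticality=80, epss_pct=70)     # 80.0
```

In an IA scene, `hours` would drive Z-depth and `score` could feed the Y-axis directly.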
Observability
Here are 10 numeric, per-service/per-endpoint variables that are high-value in observability (e.g., Datadog, Dynatrace, New Relic, Grafana, Prometheus) and are well suited to visualization with Immersion Analytics:
| # | Variable | What it is (numeric) | Why it matters | Good IA mapping (suggestion) |
|---|---|---|---|---|
| 1 | p95 Latency (ms) | 95th-percentile end-to-end latency | Captures tail pain users feel | X-axis (right = slower) |
| 2 | Error Rate (%) | Errors ÷ requests over window | Primary indicator of broken experience | Y-axis (up = worse) |
| 3 | Throughput (req/s) | Requests per second | Shows load/traffic pressure | Size (bigger = busier) |
| 4 | Saturation (%) | CPU/mem/IO utilization | Predicts throttling and brownouts | Color (hotter = higher) |
| 5 | SLO Burn Rate (×) | Error budget consumption rate | Prioritizes “page now” vs. “watch” | Glow (brighter = burning faster) |
| 6 | Deployment Frequency (#/day) | Successful deploys per day | Change velocity to correlate with incidents | Pulsation (faster = more deploys) |
| 7 | Change Failure Rate (%) | % of deploys causing incidents/rollback | Quality of releases | Transparency (higher CFR = more hollow) |
| 8 | Time to Restore (MTTR, min) | Mean minutes to recover service | Core reliability outcome | Outline thickness (thicker = slower) |
| 9 | Dependency Impact (#) | Downstream services currently affected | Reveals blast radius | Satellites (# satellites = # impacted deps) |
| 10 | Retry/Timeout Rate (#/min) | Client/server retries & timeouts | Signals instability/jitter | Shimmer (stronger = more instability) |
What reliability and mean-time-to-recover could you unlock by gaining perspective on all ten—simultaneously—across your entire stack?
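Two of these metrics reward a concrete definition. The sketch below computes p95 Latency (#1) with the nearest-rank method and SLO Burn Rate (#5) as the observed error rate divided by the error budget the SLO target implies; both are standard formulations, though your observability platform may use interpolated percentiles instead:

```python
import math

def p95_latency_ms(samples_ms: list) -> float:
    """95th-percentile latency (p95, #1) via the nearest-rank method."""
    ordered = sorted(samples_ms)
    rank = max(0, math.ceil(0.95 * len(ordered)) - 1)  # 0-based index
    return ordered[rank]

def slo_burn_rate(error_rate_pct: float, slo_target_pct: float) -> float:
    """Error-budget burn rate (#5): observed error rate divided by the
    budgeted error rate implied by the SLO. 1.0 = exactly on budget."""
    budget_pct = 100.0 - slo_target_pct
    return error_rate_pct / budget_pct

# A 99.9% SLO budgets 0.1% errors; a 0.5% observed error rate burns 5x budget
burn = slo_burn_rate(error_rate_pct=0.5, slo_target_pct=99.9)
```

Mapped per the table, `burn` above 1.0 would brighten the Glow channel.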
Corporate Security
Here are 10 numeric, per-site/per-incident variables that are high-value in corporate security (e.g., LenelS2 OnGuard, Genetec Security Center, Brivo/Avigilon/Verkada, Everbridge, Dataminr, Resolver) and are well suited to visualization with Immersion Analytics:
| # | Variable | What it is (numeric) | Why it matters | Good IA mapping (suggestion) |
|---|---|---|---|---|
| 1 | Severity Score (0–100) | Risk rating for the incident | Drives triage order and executive attention | Y-axis (higher → up) |
| 2 | Mean Time to Acknowledge (MTTA, min) | Minutes to first operator action | Reveals alert fatigue and workflow friction | X-axis (left = faster) |
| 3 | Mean Time to Resolve (MTTR, min) | Minutes to containment/close | Core measure of response effectiveness | Z-depth (closer = faster) |
| 4 | Incident Rate (per 1,000 staff / month) | Normalized incidents at a site | Identifies hotspots and trend shifts | Size (bigger = more) |
| 5 | Access Anomaly Rate (%) | Forced doors, tailgates, denials (normalized) | Signals insider/physical access risk | Transparency (higher = more hollow) |
| 6 | Alarm False-Positive Rate (%) | Nuisance alarms ÷ total alarms | Wastes time; masks real threats | Satellites (more satellites = higher rate) |
| 7 | System Uptime (%) | Camera/ACS/VMS availability | Ensures coverage and evidence quality | Color (cooler/greener = higher) |
| 8 | Threat Chatter Volume (#/day) | External signals mentioning sites/execs | Early warning for protests/crime | Pulsation (faster = more) |
| 9 | Proximity to Critical Assets (m) | Distance from incident to key assets | Prioritizes high-impact locations | Glow (brighter = closer) |
| 10 | SLA Time Remaining (min) | Minutes until triage/containment SLA breach | Prevents missed obligations/fines | Shimmer (stronger = nearer breach) |
What risk could you retire—and how many response minutes could you save—by seeing all ten, simultaneously, across every site and incident?
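As an illustrative sketch (the timestamp pairs and staff counts are hypothetical, not tied to any listed platform's export format), MTTA (#2) and Incident Rate (#4) reduce to simple arithmetic over incident records:

```python
from datetime import datetime
from statistics import mean

def minutes_between(start_iso: str, end_iso: str) -> float:
    """Elapsed minutes between two ISO-8601 timestamps."""
    delta = datetime.fromisoformat(end_iso) - datetime.fromisoformat(start_iso)
    return delta.total_seconds() / 60.0

def mtta_minutes(ack_pairs) -> float:
    """Mean Time to Acknowledge (#2): average minutes across
    (reported, acknowledged) timestamp pairs."""
    return mean(minutes_between(start, end) for start, end in ack_pairs)

def incident_rate_per_1000(incidents: int, staff: int) -> float:
    """Incident Rate (#4): incidents normalized per 1,000 staff."""
    return incidents / staff * 1000.0

# Two incidents acknowledged in 4 and 6 minutes -> MTTA of 5.0
mtta = mtta_minutes([
    ("2024-05-01T08:00:00", "2024-05-01T08:04:00"),
    ("2024-05-01T09:00:00", "2024-05-01T09:06:00"),
])
rate = incident_rate_per_1000(incidents=12, staff=4000)  # 3.0 per 1,000/month
```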
FinOps
Here are 10 numeric, per-service/per-workload variables that are high-value in FinOps (e.g., Apptio Cloudability, VMware CloudHealth, AWS Cost Explorer, Azure Cost Management, Google Cloud Billing, Kubecost) and are well suited to visualization with Immersion Analytics:
| # | Variable | What it is (numeric) | Why it matters | Good IA mapping (suggestion) |
|---|---|---|---|---|
| 1 | Total Cloud Cost ($) | Spend by service/app/env in the period | Baseline impact and prioritization | Size (bigger = higher spend) |
| 2 | Unit Cost ($/req, $/vCPU-hr, $/GB) | Cost normalized to business output | Reveals efficiency vs. value | Y-axis (higher → up) |
| 3 | Utilization (%) | Avg resource usage (CPU/Memory/IO) | Exposes overprovisioning | X-axis (higher → right) |
| 4 | Rightsizing Potential (%) or ($) | Estimated waste from oversized assets | Direct, low-risk savings | Z-depth (closer = more to save) |
| 5 | Commitment Coverage (%) | RI/SP coverage of eligible usage | Protects from on-demand premiums | Color (cooler/greener = higher) |
| 6 | Commitment Utilization (%) | Portion of purchased commitments actually used | Avoids under/over-buying | Transparency (solid = well-utilized) |
| 7 | MoM Cost Change (%) | Month-over-month spend delta | Spots anomalies and trend shifts | Pulsation (faster = rising) |
| 8 | Data Egress Cost ($ or % of total) | Outbound transfer to other regions/internet | Hidden driver of runaway bills | Shimmer (stronger = higher) |
| 9 | Idle/Orphaned Spend (%) | Spend on stopped/idle unattached resources | Quick, low-controversy cuts | Glow (brighter = more waste) |
| 10 | Untagged / Unallocated Spend ($) | Cost not mapped to team/product | Blocks showback/chargeback | Satellites (more satellites = more untagged items) |
What savings could you unlock—and what reliability could you safeguard—by seeing all ten, simultaneously, across every account, service, and environment?
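Three of the FinOps variables are ratios that any billing export can feed. The sketch below shows the standard formulations of MoM Cost Change (#7), Commitment Coverage (#5), and Commitment Utilization (#6); the input figures are hypothetical:

```python
def mom_cost_change_pct(prev_month: float, this_month: float) -> float:
    """Month-over-month spend delta (#7), as a percentage of last month."""
    return (this_month - prev_month) / prev_month * 100.0

def commitment_coverage_pct(committed_usage: float, eligible_usage: float) -> float:
    """Commitment Coverage (#5): share of eligible usage covered by RIs/SPs."""
    return committed_usage / eligible_usage * 100.0

def commitment_utilization_pct(used: float, purchased: float) -> float:
    """Commitment Utilization (#6): share of purchased commitments actually used."""
    return used / purchased * 100.0

# Spend rose from $100k to $120k month over month -> +20%
change = mom_cost_change_pct(prev_month=100_000, this_month=120_000)  # 20.0
coverage = commitment_coverage_pct(committed_usage=80, eligible_usage=100)  # 80.0
utilization = commitment_utilization_pct(used=45, purchased=50)  # 90.0
```

In the mappings above, a rising `change` would quicken Pulsation, while high `coverage` shifts Color toward green.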