BIPI

Threat Hunting Beyond IOC Lists: What Mature Teams Actually Do.

Cybersecurity

If your threat hunting is grepping logs for known-bad indicators, you are not hunting. You are running a delayed signature engine. Real hunts target behaviours adversaries cannot avoid, not artefacts they can change.

By Arjun Raghavan, Security & Systems Lead, BIPI · April 4, 2026 · 7 min read

#threat-hunting #soc #detection-engineering #mitre-att&ck

Most 'threat hunting' programs we audit are IOC-matching with a different name. The hunter loads a feed of known-bad domains, IPs, file hashes. They search the SIEM for matches. They write a report. The report says zero findings. The team feels productive. They have hunted nothing.

IOCs are the artefacts of past attacks. Adversaries change them between operations: new infrastructure, new malware hashes, new C2 domains. The hunt that searches for last quarter's IOCs is searching for a campaign that has already moved on.

What real hunting targets

The principle is: hunt for behaviours adversaries cannot easily avoid, not artefacts they easily change. The MITRE ATT&CK framework organises this well: tactics and techniques are durable; specific tools and IPs are not.

  • Lateral movement patterns: a single account authenticating from multiple endpoints in a short window.
  • Persistence creation: new scheduled tasks, services, registry run keys, especially on servers that rarely change.
  • Living-off-the-land binary execution: PowerShell, certutil, mshta, rundll32 with command-line flags consistent with abuse.
  • Credential access: LSASS handle opens by non-Microsoft processes, NTDS.dit reads, comsvcs.dll loads.
  • Defense evasion: event log clearing, security service stops, AMSI bypass attempts.
  • Discovery: rapid net.exe / net1.exe / nltest / dsquery sequences indicating reconnaissance scripts.

These behaviours are not specific to any threat actor. They are patterns common to most intrusions. A hunt that finds them surfaces something every time.
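The first behaviour on that list can be expressed as a concrete check. The sketch below is illustrative, not a production detection: the event tuples, account names, window size, and threshold are all assumptions standing in for whatever your SIEM or EDR actually emits.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical auth events as (timestamp, account, endpoint). In practice
# these would come from a SIEM query; the records here are illustrative.
events = [
    ("2026-04-01T09:00:00", "svc-backup", "HOST-01"),
    ("2026-04-01T09:04:00", "svc-backup", "HOST-02"),
    ("2026-04-01T09:07:00", "svc-backup", "HOST-03"),
    ("2026-04-01T09:09:00", "svc-backup", "HOST-04"),
    ("2026-04-01T10:00:00", "j.doe", "HOST-01"),
]

WINDOW = timedelta(minutes=15)  # "short window" is a tuning choice
THRESHOLD = 3                   # distinct endpoints worth a second look

def lateral_movement_candidates(events):
    """Flag accounts authenticating to >= THRESHOLD distinct endpoints
    within any sliding WINDOW -- a lateral-movement-shaped pattern."""
    by_account = defaultdict(list)
    for ts, account, endpoint in events:
        by_account[account].append((datetime.fromisoformat(ts), endpoint))
    flagged = {}
    for account, rows in by_account.items():
        rows.sort()
        for i, (start, _) in enumerate(rows):
            hosts = {h for t, h in rows[i:] if t - start <= WINDOW}
            if len(hosts) >= THRESHOLD:
                flagged[account] = sorted(hosts)
                break
    return flagged

print(lateral_movement_candidates(events))
# flags svc-backup across four hosts; j.doe's single auth is ignored
```

Note what the logic keys on: the pattern of authentication, not any indicator. The same check fires whether the attacker's tooling is Cobalt Strike, Impacket, or something never seen before.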

The hunting hypothesis

Mature hunts start with a hypothesis: 'If an attacker had compromised an admin account, we would expect to see authentication from a new ASN, followed by a domain controller LDAP query, within a 30-minute window.' The hunt translates the hypothesis into a query, runs it, examines the results.
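That hypothesis translates into a query mechanically: filter one event stream, join it to a second, constrain the time delta. A minimal sketch, assuming two illustrative event streams and an assumed baseline of known ASNs:

```python
from datetime import datetime, timedelta

# Assumed baseline: ASNs this admin normally authenticates from.
known_asns = {"AS4755", "AS9498"}

# Illustrative records; in a real hunt these are two SIEM result sets.
auth_events = [
    {"ts": "2026-04-01T02:10:00", "account": "admin.k", "asn": "AS14061"},
]
ldap_events = [
    {"ts": "2026-04-01T02:25:00", "account": "admin.k", "target": "DC-01"},
]

WINDOW = timedelta(minutes=30)

def hypothesis_hits(auth_events, ldap_events):
    """Admin auth from a new ASN followed by a DC LDAP query within WINDOW."""
    hits = []
    for a in auth_events:
        if a["asn"] in known_asns:
            continue  # familiar source network: baseline, not a finding
        t0 = datetime.fromisoformat(a["ts"])
        for q in ldap_events:
            t1 = datetime.fromisoformat(q["ts"])
            if q["account"] == a["account"] and timedelta(0) <= t1 - t0 <= WINDOW:
                hits.append((a["account"], a["asn"], q["target"]))
    return hits

print(hypothesis_hits(auth_events, ldap_events))
```

Each hit is a lead to investigate, not an alert. The examination step is where the hunting actually happens.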

Most results are benign and explainable. The team learns the baseline. The few that are not explainable get escalated. Over time, repeated benign results are tuned out, and repeated unexplained results become detection rules.

Hunts that find nothing are not failed hunts. Hunts that learn baselines are how you build durable detection capability.

Cadence and ownership

We recommend a weekly hunt cadence with rotating ownership. Each hunter takes one hypothesis per week, develops the query, runs it, and presents findings to the team. The cycle builds institutional knowledge of what normal looks like in your environment and what does not belong.

Over six months, the team has covered 25 hypotheses and produced 5-10 new detection rules. The detection rules are higher-quality than vendor defaults because they reflect your actual environment.

Tooling

You do not need a dedicated hunting platform. Most mature teams hunt directly in their SIEM (Splunk, Sentinel, Elastic) using saved searches. The tooling that matters is data quality: are EDR events normalised, are auth logs collected from every system, and are network flow records retained long enough to look back. Hunting on incomplete data finds incomplete things.
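A coverage check is worth running before any hunt, because a silent host looks identical to a clean one in the results. A minimal sketch, assuming a hypothetical asset inventory and the set of hosts actually seen in auth logs:

```python
# Host names are illustrative. In practice, inventory comes from your CMDB
# or AD, and hosts_seen from a distinct-hosts query over the retention window.
inventory = {"HOST-01", "HOST-02", "HOST-03", "DC-01"}
hosts_seen_in_auth_logs = {"HOST-01", "HOST-03", "DC-01"}

# Hosts in inventory that shipped no auth logs: fix ingestion before hunting,
# or every hunt quietly excludes them.
silent_hosts = inventory - hosts_seen_in_auth_logs
if silent_hosts:
    print(f"No auth logs from: {sorted(silent_hosts)}")
```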

Specialised tools (e.g., Hunters, Stairwell, Vectra) accelerate specific hunt types. They do not replace the hypothesis-driven discipline.

What it produces over a year

Twelve months of disciplined hunting at one client produced: 3 confirmed compromises that no detection caught initially (recovered before exfil); 14 new high-confidence detection rules (reducing SOC alert noise by 30 percent because they replaced broader noisier rules); a documented baseline of 40+ behaviours that allowed a tier-1 analyst to triage anomalies without escalation. The investment was approximately one FTE.

Common failure modes

  1. Hunters who are also incident responders. Hunting requires uninterrupted focus; on-call disrupts it. Separate the rotations.
  2. No follow-through on findings. A hunt that finds an anomaly but never produces a detection rule is throwaway work.
  3. Hunting only when there is a known active campaign. The point is to find unknown ones.
  4. Treating hunts as a checkbox: '4 hunts per quarter' regardless of quality. Two deep hunts beat eight superficial ones.

Closing

Threat hunting is one of the highest-leverage investments a security program can make once basics are in place. It produces specific, durable detection capability, surfaces real intrusions, and builds the kind of operational knowledge that makes a SOC actually good. The teams getting value from it are not the ones with the biggest tooling budgets. They are the ones with the discipline to run hypothesis-driven hunts every week and act on what they find.

Read more field notes, explore our services, or get in touch at info@bipi.in.