This first part examines the sample’s execution flow and analysis-resistance model: Petite packing, reflective staging, self-modifying code, SIDT-based inspection, user-mode port I/O, interrupt traps, FPU validation, and segment-register abuse. The focus is on why the malware cannot be understood reliably from ordinary static triage alone.
Key Judgments
· The sample uses the host execution environment as part of its decryption logic, which means debugger interference, altered stack layout, or sandbox-specific runtime conditions can corrupt payload recovery instead of merely revealing the malware.
· The implant combines Petite packing, reflective PE loading, and runtime API resolution to keep both its outer wrapper and its inner payload chain out of ordinary static and loader-based visibility.
· The C2 protocol exposes a stable network fingerprint: every generated session tag ends with the fixed low byte 0x99, creating a durable hunting pivot even though the upper 24 bits vary by session.
· The malware executes unusually aggressive anti-analysis logic in user mode, including SIDT, port I/O against 0x4F, interrupt-based traps, FPU state validation, and segment register abuse. That mix is consistent with bespoke tooling rather than commodity malware.
· The sample carries at least five embedded PE images and dispatches them through a reflective loader, indicating a modular post-compromise design rather than a single-stage backdoor.
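The session-tag judgment above translates directly into a detection check: only the low byte is stable, so matching must ignore the upper 24 bits. A minimal sketch (Python; the example tag values are illustrative, not real captures):

```python
def matches_tag_fingerprint(tag: int) -> bool:
    """Return True if a 32-bit session tag carries the fixed 0x99 low byte.

    The upper 24 bits vary per session; only the low byte is stable.
    """
    return (tag & 0xFF) == 0x99

# Illustrative values only -- not observed traffic
print(matches_tag_fingerprint(0x1A2B3C99))  # stable low byte present
print(matches_tag_fingerprint(0x1A2B3C00))  # no match
```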
Overview
This report documents a static analysis of a custom Windows RAT/backdoor wrapped in the Petite packer and linked against the Visual Basic 6 runtime via msvbvm60.dll. Zenyard Agent produced this analysis from a raw PE32 binary using Ghidra decompilation and disassembly, with no prior context or labeling provided. The sample uses a narrow bootstrap import set, decompresses its working sections at runtime, derives parts of its execution state from the host environment, and establishes a raw TCP command channel protected by three custom transformation layers.
The outer binary carries at least five additional PE images within its body. It does not rely on a conventional LoadLibrary path to stage them. Instead, it allocates memory, rewrites code pages, resolves APIs dynamically, and pivots into a reflective loading path. That design leaves the unpacked payload chain only partially visible to the Windows loader and makes memory acquisition more important than on-disk inspection.
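Because the inner chain is only partially visible to the loader, carving candidate embedded images out of a memory dump is usually faster than on-disk triage. A minimal sketch of that carving step, assuming nothing about this sample's specific offsets (Python):

```python
import struct

def find_embedded_pes(buf: bytes) -> list[int]:
    """Return offsets of plausible embedded PE images in a raw buffer.

    Looks for an 'MZ' header whose e_lfanew field (offset 0x3C) points at
    a 'PE\\0\\0' signature inside the buffer. Hits are candidates for
    extraction, not guarantees of a valid image.
    """
    hits = []
    pos = buf.find(b"MZ")
    while pos != -1:
        if pos + 0x40 <= len(buf):
            (e_lfanew,) = struct.unpack_from("<I", buf, pos + 0x3C)
            pe = pos + e_lfanew
            if 0 < e_lfanew < 0x1000 and buf[pe:pe + 4] == b"PE\x00\x00":
                hits.append(pos)
        pos = buf.find(b"MZ", pos + 1)
    return hits
```

Running this over a post-unpack memory dump rather than the disk image is the point: the zeroed on-disk sections never contain the inner headers.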
The sample treats the host environment not just as a target, but as a source of entropy for its cryptographic operations.
Technical Findings
Unpacking and execution flow
The sample starts execution at 0x0040b00f and presents the expected Petite footprint. Its primary PE sections are effectively zeroed on disk and reconstructed in memory at runtime. The repeated @petite marker appears nine times across the image, which aligns with the presence of multiple embedded inner PEs rather than a single packed payload. The outer import table is intentionally sparse and contains only the functions required to unpack, allocate memory, change page permissions, resolve APIs, and terminate.
Once unpacked, execution flows into a self-decoding path that rewrites instructions in place, performs a series of debugger and emulator checks, derives session-specific state from runtime conditions, decrypts C2 material into DAT_00430280, and then enters a finite-state parser for newline-delimited TCP traffic. From there, command content passes through a three-stage transformation pipeline before control reaches a reflective loader that maps one of several embedded PE images and transfers execution into the selected payload.
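The exact transforms were not recovered, so the pipeline below is a purely hypothetical stand-in (XOR, bit-rotate, substitution) meant only to show the shape of the dispatch path: a newline-delimited framer feeding a fixed three-stage decode chain (Python; every transform and key here is an assumption):

```python
def frames(stream: bytes):
    """Split a raw TCP byte stream into newline-delimited frames."""
    for frame in stream.split(b"\n"):
        if frame:
            yield frame

def stage1_xor(data: bytes, key: int = 0x5A) -> bytes:
    # Hypothetical layer 1: single-byte XOR.
    return bytes(b ^ key for b in data)

def stage2_rol(data: bytes, n: int = 3) -> bytes:
    # Hypothetical layer 2: rotate each byte left by n bits.
    return bytes(((b << n) | (b >> (8 - n))) & 0xFF for b in data)

def stage3_sub(data: bytes) -> bytes:
    # Hypothetical layer 3: fixed substitution (here: bytewise NOT).
    return bytes(b ^ 0xFF for b in data)

def decode(frame: bytes) -> bytes:
    """Run one frame through the three-stage chain before dispatch."""
    return stage3_sub(stage2_rol(stage1_xor(frame)))

for frame in frames(b"cmd-one\ncmd-two\n"):
    decoded = decode(frame)   # decoded bytes would select a payload
```

The structural takeaway is the layering itself: an analyst who reverses only one layer still sees noise, which is why full-pipeline recovery precedes any protocol work in Part 2.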
The VirtualProtect import is operationally significant. The malware uses it to make its own code writable, patch instructions, and continue execution from the modified stream. This is not incidental obfuscation but part of the normal execution path, which means static disassembly alone will always underrepresent the real code layout seen at runtime.
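The effect can be reproduced benignly: map a writable and executable page, execute it, patch one byte in place, and execute the modified stream from the same address. A minimal Linux/x86 analogue of the VirtualProtect pattern (Python; assumes an x86 host that permits anonymous RWX mappings):

```python
import ctypes
import mmap

# mov eax, 42 ; ret  -- encoding valid in both 32- and 64-bit mode
code = b"\xb8\x2a\x00\x00\x00\xc3"

# Writable + executable page: the analogue of
# VirtualProtect(..., PAGE_EXECUTE_READWRITE, ...)
page = mmap.mmap(-1, mmap.PAGESIZE,
                 prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
page.write(code)
addr = ctypes.addressof(ctypes.c_char.from_buffer(page))
func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)

first = func()     # executes the original immediate (42)
page[1] = 0x63     # patch the immediate in place: mov eax, 99
second = func()    # same address, different behavior (99)
```

A disassembler that only sees the on-disk bytes reports the first variant forever; the running process executes the second. That gap is exactly what makes static-only triage of this sample unreliable.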
Anti-analysis techniques
This sample layers classic debugger checks with lower-level environment probes in a way that forces analysis to account for exact runtime conditions.
The debugger-facing logic includes INT 3 traps, custom single-step handling, FPU state capture and validation, spin-wait loops, and integrity checks around a canary value of 0x8B8F9397. The FPU handling matters because the malware does not simply save floating-point state. It validates fields such as the last instruction pointer and uses mismatches as evidence that exception handling or stepping behavior has altered execution. Several routines also inflate execution cost deliberately, including per-byte loops that route bad data into halt paths, making single-stepping and timing-neutral emulation impractical.
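The canary pattern can be illustrated with a small self-check: fold the protected bytes into a 32-bit value anchored on a constant and compare against the expected result, so that a single debugger-planted 0xCC (INT 3) byte breaks the check. This is a sketch of the idea only; the sample's actual algorithm around 0x8B8F9397 was not recovered (Python):

```python
CANARY = 0x8B8F9397  # constant observed in the sample's integrity checks

def checksum(region: bytes) -> int:
    """Fold a code region into a 32-bit value seeded with the canary.

    Hypothetical algorithm: rolling rotate-and-add over each byte.
    """
    h = CANARY
    for b in region:
        h = ((h << 5) | (h >> 27)) & 0xFFFFFFFF
        h = (h + b) & 0xFFFFFFFF
    return h

def region_intact(region: bytes, expected: int) -> bool:
    """Any modified byte in the region changes the folded value."""
    return checksum(region) == expected

code = bytes.fromhex("8bff558bec")   # arbitrary illustrative code bytes
good = checksum(code)
patched = b"\xcc" + code[1:]         # software breakpoint on the first byte
```

Because the expected value doubles as key material in designs like this, a tripped check can silently corrupt later decryption instead of branching to a visible bail-out.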
The sample also performs SIDT-based inspection of the Interrupt Descriptor Table Register. In practice, this lets it compare expected execution conditions against debugger, sandbox, or hypervisor-specific layouts. That check pairs with stack-derived key material.
This matters because the anti-analysis behavior feeds directly into decryption. A modified stack layout, instrumentation shim, or debugger-induced control-flow change can cause later decoding stages to produce invalid output rather than simply tripping a visible anti-debug branch.
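SIDT is one of the few instructions that leaks supervisor-level layout to user mode: on 32-bit x86 it stores a 6-byte IDTR (2-byte limit, 4-byte base) into a caller-supplied buffer, and the base address historically differed between bare metal and virtual machines (the classic "red pill" heuristic). A sketch of the comparison step only, operating on an already-captured IDTR blob (Python; the threshold is illustrative, not the sample's):

```python
import struct

def parse_idtr32(blob: bytes) -> tuple[int, int]:
    """Decode the 6-byte result of a 32-bit SIDT into (limit, base)."""
    limit, base = struct.unpack("<HI", blob[:6])
    return limit, base

def looks_virtualized(idtr_base: int, threshold: int = 0xD0000000) -> bool:
    """Classic heuristic: very high IDT bases were typical of early VMMs.

    Unreliable against modern hypervisors; shown only to explain the
    shape of the check, not as a dependable detection.
    """
    return idtr_base >= threshold
```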
The anti-emulation logic is stronger than routine sleep timing or API-level environment checks. The malware executes port I/O against 0x4F, applies the mask 0x478CCC4B, writes the result back, and branches based on the returned behavior. That sequence should not occur in normal user-mode application code. The malware also uses software interrupts as timing and control-flow primitives, including int 0x21, int 0x40, and an interrupt-mediated dispatch path inside tag generation. Combined with segment register abuse involving FS and CS, these checks complicate tracing and can destabilize analysis environments that assume conventional selector usage and sanitized I/O behavior.
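The port sequence can be modeled with the I/O stubbed out. The exact mask operation was not fully recovered, so the sketch below assumes an XOR-style application of 0x478CCC4B to the byte read from port 0x4F; `inb`/`outb` are injected placeholders for the real privileged IN/OUT instructions (Python):

```python
MASK = 0x478CCC4B  # constant recovered from the sample

def probe_port_4f(inb, outb) -> bool:
    """Model of the check: read port 0x4F, apply the mask, write the
    result back, and branch on whether the environment echoes it.

    inb/outb stand in for real IN/OUT instructions, which raise #GP
    from ring 3 on genuine hardware unless an emulator allows them.
    """
    value = inb(0x4F)
    transformed = (value ^ MASK) & 0xFF   # assumed XOR-style masking
    outb(0x4F, transformed)
    return inb(0x4F) == transformed       # a naive emulator echoes the write
```

On real hardware the first `inb` faults, so merely reaching the comparison already tells the malware it is running under something that sanitizes port I/O.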
Control-flow obfuscation reinforces these checks. The binary contains overlapping instruction regions, opaque predicates, intentional infinite loops, and recursive error paths that end in stack exhaustion. Those constructs do not merely slow analysis. They make incorrect path recovery expensive and punish analysts who bypass one gate without understanding the next.
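Opaque predicates deserve a concrete illustration: they are conditions that look data-dependent but have a fixed outcome. A classic number-theoretic example is that 7y² − 1 is never a perfect square (squares mod 7 fall in {0, 1, 2, 4}, while 7y² − 1 ≡ 6), so a branch guarded by it always goes one way while appearing to depend on runtime input (Python sketch, not code recovered from the sample):

```python
def opaque_always_true(x: int, y: int) -> bool:
    """7*y*y - 1 == x*x has no integer solutions, so this is always True."""
    return 7 * y * y - 1 != x * x

def dispatch(x: int, y: int) -> str:
    # Looks like a data-dependent branch; the decoy arm is dead code.
    if opaque_always_true(x, y):
        return "real-path"
    return "decoy-path"   # never taken, but inflates the CFG for analysts
```

Every such predicate forces the analyst either to prove the invariant or to explore a path that does not exist, which is exactly the cost asymmetry the author wants.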
Part 1 Conclusion
The first half of the sample’s design is built around denying analysts a stable execution view. Petite packing, self-modifying code, runtime API resolution, SIDT, port 0x4F I/O, interrupt traps, and FPU validation all converge on the same outcome: analysis conditions become part of the malware’s behavior.
Part 2 examines how this environment-sensitive execution model feeds into the implant’s custom cryptography, newline-delimited C2 protocol, stable 0x99 session-tag fingerprint, attribution limits, detection opportunities, IOCs, and YARA hunting logic.
Annex A: Indicators of Compromise
Annex B: YARA Hunting Rules
The following rule targets the packed outer loader. It relies on the repeated Petite marker, the VB6 runtime string, and multiple stable constants recovered during analysis. It is intentionally narrow to keep false positives down.
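The rule below is a reconstruction built strictly from values quoted in this report (the `@petite` marker, the VB6 runtime name, and the 0x8B8F9397 and 0x478CCC4B constants in their little-endian code form); the rule name and the marker-count threshold are placeholders and should be validated against the sample before deployment:

```yara
rule suspected_petite_vb6_rat_loader
{
    meta:
        description = "Packed outer loader: repeated Petite marker, VB6 runtime, constants from analysis"
        confidence  = "reconstruction - validate against sample before deployment"

    strings:
        $marker = "@petite" ascii
        $vb6    = "MSVBVM60.DLL" ascii nocase
        $canary = { 97 93 8F 8B }   // 0x8B8F9397, little-endian immediate
        $mask   = { 4B CC 8C 47 }   // 0x478CCC4B, little-endian immediate

    condition:
        uint16(0) == 0x5A4D and
        #marker >= 9 and
        $vb6 and $canary and $mask
}
```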



