
File-based workflows are an essential part of today's payment processing. It is normal to land transaction data in local storage as DCF imports/exports, black/white list loads, and BIN lists (ingress); to produce Mastercard/Visa clearing files such as IPM, Base2, and MBF (egress); and to generate operational extracts and reports along the way.
Historically, the pattern has been: write clear-text files locally → rely on DMZ / host controls / full-disk encryption → encrypt/archive later → distribute/consume.
That approach is now increasingly hard to defend under PCI DSS v4.x, not because file workflows are ‘wrong’, but because any local clear-text persistence creates an avoidable exposure window. In today's cloud-native deployments that window spreads across even more surfaces (ephemeral volumes, node re-scheduling, snapshots, backups, and log pipelines), making it harder to control and harder to prove the exposure never happened.
PCI’s core framing: if it’s stored, it must be protected, and ‘stored’ includes files, backups, and logs
PCI DSS Requirement 3 explicitly targets stored account data. It calls out that PAN must be secured wherever it is stored and defines approved approaches to render it unreadable (tokenization, truncation, strong cryptography, etc.).
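To make one of those approaches concrete, here is a minimal sketch of truncation, assuming the classic "first six, last four" rule; the helper name is invented, and the asterisks are display placeholders (true truncation discards the middle digits entirely):

```python
import re

def truncate_pan(pan: str) -> str:
    """Illustrative truncation: keep at most the first six and last
    four digits; asterisks stand in for the discarded middle digits."""
    digits = re.sub(r"\D", "", pan)       # drop spaces/dashes
    if not 13 <= len(digits) <= 19:       # not a plausible PAN length
        raise ValueError("unexpected PAN length")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(truncate_pan("5454 5454 5454 5454"))   # -> 545454******5454
```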
1/ Storage includes flat files and non-primary locations
PCI DSS makes it explicit that this applies not only to databases, but to all flat files (including text files/spreadsheets), and to non-primary storage such as backups and logs (audit, exception, troubleshooting).
2/ Full-disk encryption is not a get-out-of-jail card
PCI DSS guidance explains why disk/partition encryption is often not appropriate as the sole protection for stored PAN on systems that transparently decrypt after user authentication, and it restricts when disk-level encryption can be used on non-removable media: it typically must be paired with a mechanism that meets Requirement 3.5.1, such as data-level encryption.
This is a big deal for ‘drop it to disk and deal with it later’ architectures: it’s no longer enough to say ‘the box is encrypted’ if the system's behavior effectively makes PAN readable to anyone who obtains legitimate access to the host/session.
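To illustrate what "paired with data-level encryption" means in practice: the application encrypts the bytes before they ever reach the file system, so full-disk encryption becomes defense in depth rather than the sole control. A minimal sketch using AES-GCM from the Python cryptography package (file name and key handling are illustrative; a real deployment would source the key from an HSM or KMS):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def write_encrypted(path: str, plaintext: bytes, key: bytes) -> None:
    """Data-level encryption: only ciphertext ever touches the disk.
    The 12-byte random nonce is stored in front of the ciphertext."""
    nonce = os.urandom(12)
    with open(path, "wb") as f:
        f.write(nonce + AESGCM(key).encrypt(nonce, plaintext, None))

def read_encrypted(path: str, key: bytes) -> bytes:
    """Decrypt in memory; the plaintext never gets a file of its own."""
    with open(path, "rb") as f:
        blob = f.read()
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

key = AESGCM.generate_key(bit_length=256)    # in reality: HSM/KMS-managed
write_encrypted("clearing.ipm.enc", b"...clearing records...", key)
assert read_encrypted("clearing.ipm.enc", key) == b"...clearing records..."
```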
PCI DSS clearly distinguishes between Primary Account Number (PAN) - which may be stored provided it is rendered unreadable - and Sensitive Authentication Data (SAD), which generally must not be stored after authorization. The DSS requirements and guidance are explicit: SAD must not be retained post-authorization, even if it is encrypted.
PCI’s testing and assessment guidance further reinforces this position by explicitly listing where assessors will look for evidence of improper SAD retention. These locations include all incoming transaction data, logs (transaction, history, debug, and error logs), history and trace files, database schemas, data stores (both on-premises and cloud), and even memory and crash dumps.
This provides a practical insight into why temporary files as an operational design pattern are inherently risky. Regardless of intent or assumed short lifespan, the compliance assessment perspective is always: where could the data end up?
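A practical corollary is that teams sweep the same locations an assessor will, before the assessor does. A rough sketch of a log scanner that flags PAN-like digit runs and confirms them with a Luhn check (the regex and thresholds are illustrative; real scanners handle far more formats):

```python
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum; filters out random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def scan_line(line: str):
    """Yield PAN-like substrings found in a log line."""
    for m in PAN_CANDIDATE.finditer(line):
        digits = re.sub(r"\D", "", m.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            yield m.group()

# example: a debug log line that should never have existed
for hit in scan_line("2024-05-01 DEBUG auth req pan=4111111111111111 amt=10.00"):
    print("possible PAN in log:", hit)
```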
Temporary files are not prohibited - but that’s not the same as business-as-usual spooling
PCI DSS does include an important nuance: the requirement does not preclude the use of temporary files containing clear text PAN while encrypting and decrypting PAN. The practical problem is what happens after temporary. Even if a clear text file exists only for the duration of an encrypt/decrypt step, you’re now relying on correct deletion behavior and on the underlying storage stack to honor that intent.
In modern environments that’s hard to guarantee: file systems journal, SSDs remap blocks, storage is layered and replicated, and backup/snapshot systems can capture data mid-flight. The result is that a temporary clear text file can become a durable artifact in places the application team can’t easily enumerate, control, or prove were cleaned up.
PCI DSS acknowledges that cryptographic operations may involve transient handling; it does not bless clear-text file spools as an acceptable long-running integration pattern. The difference is intent and control: transient handling is bounded and tied to a single cryptographic operation, while a spool directory is an open-ended storage location whose cleanup you must continuously prove.
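One way to get that control by construction is to never materialize the clear text as a file at all: consume records from an in-memory or network stream and emit only ciphertext. A sketch of chunked stream encryption, assuming per-chunk AES-GCM with counter-derived nonces (the framing format is invented for illustration):

```python
import io, os, struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK = 64 * 1024   # encrypt in 64 KiB pieces; never buffer the whole file

def encrypt_stream(src, dst, key: bytes) -> None:
    """Read clear text from a stream and write only ciphertext to dst.
    Each chunk gets a unique 12-byte nonce: a random 4-byte prefix plus
    an 8-byte counter. The framing here is invented for illustration."""
    aes = AESGCM(key)
    prefix = os.urandom(4)
    dst.write(prefix)
    counter = 0
    while chunk := src.read(CHUNK):
        nonce = prefix + struct.pack(">Q", counter)
        ct = aes.encrypt(nonce, chunk, None)
        dst.write(struct.pack(">I", len(ct)) + ct)   # length-prefixed frame
        counter += 1

# the clear text exists only in this buffer, never as a temporary file
key = AESGCM.generate_key(bit_length=256)
with open("batch.enc", "wb") as out:
    encrypt_stream(io.BytesIO(b"record-1\nrecord-2\n"), out, key)
```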
PCI SSF pushes in the same direction, as a software responsibility. The PCI Secure Software Standard (part of the Software Security Framework, SSF) frames this as a software design obligation:
Sensitive data capable of being stored must be protected during storage, and where cryptography is used/required, it must be strong cryptography.
SSF also emphasizes retention and deletion: sensitive data is to be stored in accordance with defined retention policies and then securely deleted (or rendered unrecoverable) once no longer necessary. In other words: even when files on disk are convenient, PCI’s direction is to reduce the number of places sensitive data can persist unprotected, and to make retention enforceable and auditable.
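Because only ciphertext persists, retention becomes mechanically enforceable, and "rendered unrecoverable" can mean destroying the key rather than scrubbing every replica (crypto-shredding). A sketch of a retention sweep under those assumptions (directory layout, retention period, and the KMS call are all illustrative):

```python
import time
from pathlib import Path

RETENTION_DAYS = 90   # illustrative; driven by your documented retention policy

def retention_sweep(spool_dir: str) -> None:
    """Delete ciphertext files older than the retention window. If each
    batch has its own key, destroying that key in the KMS also renders
    copies in backups and snapshots unrecoverable (crypto-shredding)."""
    root = Path(spool_dir)
    if not root.is_dir():
        return
    cutoff = time.time() - RETENTION_DAYS * 86400
    for path in root.glob("*.enc"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            # key_store.destroy(path.name)  # hypothetical KMS call: crypto-shred

retention_sweep("/var/spool/clearing")   # illustrative path
```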
Real payment platforms still need file-like artifacts: clearing files, BIN and black/white list loads, DCF imports/exports, and operational extracts and reports.
The pragmatic answer is to avoid the clear-text window completely: encrypt on ingress, persist only ciphertext, decrypt only on authorized egress. That gives you the operational benefits of file workflows without expanding your PCI scope to every host that happens to store or touch those files.
This is exactly the gap EFThub’s Virtual File System (VFS) is designed to close.
At a high level: files are encrypted the moment they are written (ingress), only ciphertext ever reaches persistent storage, and decryption happens only on authorized egress, with every access auditable.
This aligns cleanly with PCI’s posture that PAN must be secured anywhere it is stored, including flat files and log-like repositories, and avoids leaning solely on external encryption mechanisms that PCI explicitly treats as insufficient on typical servers/storage arrays when they decrypt transparently in normal operation.
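To make the shape of that pattern concrete, here is a minimal sketch of an encrypt-on-ingress / decrypt-on-egress store. This illustrates the pattern only; it is not EFThub's VFS implementation, and the class, method names, and audit format are invented:

```python
import os, json, time
from pathlib import Path
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class CiphertextStore:
    """Illustrative pattern: plaintext is accepted and returned only
    through these two methods; the backing directory holds ciphertext."""

    def __init__(self, root: str, key: bytes, audit_path: str):
        self.root, self.aes = Path(root), AESGCM(key)
        self.root.mkdir(parents=True, exist_ok=True)
        self.audit = open(audit_path, "a")

    def _log(self, op: str, name: str, who: str) -> None:
        self.audit.write(json.dumps(
            {"ts": time.time(), "op": op, "file": name, "principal": who}) + "\n")
        self.audit.flush()

    def put(self, name: str, plaintext: bytes, who: str) -> None:
        """Ingress: encrypt before anything reaches persistent storage."""
        nonce = os.urandom(12)
        (self.root / name).write_bytes(nonce + self.aes.encrypt(nonce, plaintext, None))
        self._log("put", name, who)

    def get(self, name: str, who: str) -> bytes:
        """Egress: decrypt in memory for an authorized caller only.
        (Authorization check omitted; it belongs here.)"""
        blob = (self.root / name).read_bytes()
        self._log("get", name, who)
        return self.aes.decrypt(blob[:12], blob[12:], None)

# usage sketch
store = CiphertextStore("vfs-data", AESGCM.generate_key(bit_length=256), "vfs-audit.log")
store.put("bin-table.dat", b"431940,VISA,...", who="bin-loader")
records = store.get("bin-table.dat", who="clearing-job")
```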
Legacy platforms normalized "write clear text now, encrypt later" because nothing else existed to solve the problem.
PCI DSS v4.x and SSF v2.0 increasingly push toward the opposite: never let clear text persist at all.
Encrypt-on-ingress/egress plus controlled, auditable storage is one of the few patterns that keeps file-driven payment operations practical and keeps the compliance argument straightforward.