Having gained a solid overview of the entire ecosystem of secrets engines in the first part, we now turn to the daily life of every Vault cluster. The Key/Value (KV) Secrets Engine is the workhorse for all scenarios where secrets need to be securely stored, versioned, and later retrieved in a targeted way.
Although Vault is especially known for its dynamic capabilities, every real-world environment contains a substantial number of static values - ranging from API keys for external SaaS services to service account passwords with no expiration date, and even certificate bundles for legacy systems. Without a robust key/value store, this data would continue to drift uncontrolled through text files, Git repos, CI systems, or even Excel spreadsheets. The Key/Value Secrets Engine prevents exactly that.
In this subchapter, we take a practical look at
- how such a Key/Value Secrets Engine is enabled,
- what considerations go into choosing the mount path, and
- how to design a hierarchical structure that will still be understandable five years from now.
In the following subchapters starting with part 2b, we will explore various aspects of the Key/Value Secrets Engine - including the differences between the two variants of the KV store, how to perform upgrades from the old to the new version, how to write and read data, how to handle metadata, and how to version, delete, and restore stored secrets.
What is the Key/Value Secrets Engine?
The Key/Value Secrets Engine (often simply referred to as KV Engine) answers a simple yet omnipresent question:
Where do we store immutable secrets in a way that protects them from prying eyes, ensures audit-proof logging, and at the same time keeps them easily accessible for integration pipelines, human users, and applications?
Technically, it is an encrypted, transactional key-value store that transparently encrypts all data with 256-bit AES before it is written to the storage backend.
Every read or write operation is logged in detail by the Vault audit subsystem, so it can be traced precisely afterwards who accessed which path and when.
For teams that previously relied on cloud KMS solutions or parameter stores, the KV Engine is often the first point of contact with Vault, as it offers a familiar data model while significantly expanding security and automation capabilities.
Two Versions, Two Philosophies
One detail that repeatedly causes confusion: there are two variants of this engine.
- Version 1 (kv): The classic, non-versioned variant
- Version 2 (kv-v2): The modern, versioned variant
Version 1, internally simply called “kv”, stores exactly one current version per entry. Overwriting it irreversibly deletes the previous data. This behavior was perfectly sufficient in the early days of Vault, but it no longer meets today's compliance requirements.
Version 2, configurable under the name kv-v2 or via the parameter -version=2, keeps a complete history for each entry. Every change to a key/value pair creates a new revision that can later be selectively retrieved or restored. Soft deletes with a defined retention period are also possible before a value is permanently destroyed.
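To illustrate the difference, here is a minimal sketch, assuming a KV v2 engine is already mounted at secrets/ and using a hypothetical path demo/app (writing, reading, deleting, and restoring data are covered in detail in the later subchapters):
# Each write creates a new revision of the secret
$ vault kv put secrets/demo/app token=first-value
$ vault kv put secrets/demo/app token=second-value
# Older revisions remain retrievable by version number
$ vault kv get -version=1 secrets/demo/app
# A delete is only a soft delete; the version can be restored
$ vault kv delete secrets/demo/app
$ vault kv undelete -versions=2 secrets/demo/app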
In modern installations, only the v2 variant should be used. v1 is only appropriate when legacy workloads rely on a stable API and cannot be migrated in the short term.
Beyond pure versioning, v2 offers additional features: each entry can be protected with an optional check-and-set index, which prevents so-called blind writes. It is also possible to limit the maximum number of retained revisions on the server side to keep storage usage and replication traffic predictable.
For certification exams, it is important to know that an upgrade from v1 to v2 does not happen automatically. The migration process must be initiated manually and requires appropriate ACLs, as well as possibly a multi-step plan if dozens of applications are involved. API paths also change, which means that any scripts and applications must be adapted when migrating from v1 to v2.
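As a rough sketch, the upgrade itself is a single CLI call (here assuming an existing v1 mount at kv/); the real effort lies in adapting clients to the changed API paths:
# One-time, manual upgrade of a v1 mount to v2
$ vault kv enable-versioning kv/
# After the upgrade, data is addressed via kv/data/<path> instead of kv/<path>,
# so scripts and applications that call the API directly must be adjusted.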
Important for practical use: In new implementations, Version 2 should be used almost exclusively. Version 1 is considered outdated and is only used in specific legacy scenarios.
Access Options and Integration
As with all components of Vault, there are three primary ways to interact with the system.
- UI: For interactive use and quick insights, ideal for ad-hoc work in development and demo environments. Since the UI alone is not sufficient for production environments, we will not cover it further in this article series.
- CLI: For administrators and scripting. The CLI is the production-grade equivalent of mouse clicks in the UI. Our code examples use the CLI.
- API: For automation and integration into pipelines and applications.
Using the API via Infrastructure-as-Code is the standard for production environments. For example, a build job can request a fresh token just before deployment and then retrieve the desired secret from the KV path apps/payment/prod/api_key using a simple GET request.
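A hedged sketch of such a pipeline call, assuming the secret lives in a KV v2 engine mounted at apps/ and holds the field api_key (note the additional data/ segment in the v2 API path):
$ curl --silent --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/apps/data/payment/prod" \
  | jq -r '.data.data.api_key'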
For application workloads, the Vault Agent is ideal: it provides the data in temporary files and automatically renews the application's access tokens. Thanks to this flexibility, the KV Engine adapts to virtually any workflow without compromising central security objectives.
Security by Design
All data in the KV Engine is encrypted before being stored. As a result, it does not matter to auditors or operations teams whether the physical backend is a local filesystem, a Consul cluster (which was the standard until a few years ago), or the integrated Raft backend (today’s standard). Even a compromised storage backend has no way of reconstructing the plaintext.
Access permissions are controlled by Vault's policy system. Individual paths can be secured with policy rules based on permissions such as read, create, update, or delete. A typical production scenario might involve developers having read access to a path like apps/*/prod, while only the operations team is allowed to write to it.
By assigning specific capabilities such as list, you can prevent curious users from even discovering what subpaths exist.
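As a hypothetical sketch (policies are covered in detail in a later article), a read-only developer policy for a KV v2 engine assumed at apps/ could look like this; note that in v2 reads go through data/ and listing goes through metadata/:
$ vault policy write dev-prod-read - <<EOF
# Read production secrets of all applications
path "apps/data/+/prod" {
  capabilities = ["read"]
}
# Allow listing keys; omit this block to hide the existing subpaths
path "apps/metadata/*" {
  capabilities = ["list"]
}
EOF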
As an additional security layer, token tiers or control groups can be used - for example, to require multi-factor approval for particularly sensitive actions.
Practical Activation
As shown in Part 1, the KV Engine is activated via the command vault secrets enable <engine>.
Once the mount point is defined, it acts as a namespace - completely isolated from other engines. Keep in mind that renaming it afterward is only possible via vault secrets move, which may require extensive policy adjustments.
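For completeness, a short sketch with hypothetical paths:
# Move the engine mounted at kv/ to the new path legacy-kv/
$ vault secrets move kv/ legacy-kv/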
Activating Version 1
# Standard activation of KV v1
$ vault secrets enable kv
Success! Enabled the kv secrets engine at: kv/
# Activation at a custom path
$ vault secrets enable -path=legacy-secrets kv
Success! Enabled the kv secrets engine at: legacy-secrets/
Activating Version 2
# Method 1: Explicitly specify kv-v2
$ vault secrets enable kv-v2
Success! Enabled the kv-v2 secrets engine at: kv-v2/
# Method 2: Set version as parameter
$ vault secrets enable -path=secrets -version=2 kv
Success! Enabled the kv-v2 secrets engine at: secrets/
Pro Tips from the Field
1. Every mount should immediately be given a description so that colleagues will still know six months later what it was meant for.
2. For production-grade v2 mounts, it is worthwhile to set the max_versions option as soon as the required number of revisions is clear. In regulated industries where ten-year retention is required, the value must be calculated accordingly.
3. I recommend setting the parameter cas_required to true to activate the Check-and-Set mechanism and prevent accidental parallel overwrites. Every write access to a secret (i.e., create, update, or patch) must then include the cas parameter, which acts as a version check: a write operation only succeeds if the cas value matches the current version of the secret. This gives you control over parallel writes to secrets; a short example follows after this list.
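A minimal sketch of tips 2 and 3, assuming a v2 mount at secrets/ and a hypothetical path apps/payment:
# Limit the number of retained revisions and enforce Check-and-Set for the mount
$ vault write secrets/config max_versions=10 cas_required=true
# With cas_required, every write must state the version it expects to replace
$ vault kv put -cas=0 secrets/apps/payment api_key=initial-value
$ vault kv put -cas=1 secrets/apps/payment api_key=rotated-value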
Version Detection
Especially in large environments, the question often arises which mount points are already running on the KV Engine v2. The following command shows the answer in the “Options” column.
$ vault secrets list --detailed
Path         Type       Accessor           Description    Options
----         ----       --------           -----------    -------
cubbyhole/   cubbyhole  cubbyhole_abc123   [...]          map[]
kv/          kv         kv_def456          [...]          map[]
secrets/     kv         kv_ghi789          [...]          map[version:2]
Admittedly, this is one of those details you simply have to know.
If the entry version:2 is missing inside map[], it is an old v1 instance.
In automation scripts, it is recommended to parse this output in a machine-readable way and check for version:2 instead of relying on the path name alone. After all, not only is the mount path of a KV Engine freely configurable, a user could even mount a v1 engine at a path named kv-v2/.
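A possible approach is the JSON output of the CLI, filtered with jq (a sketch, not the only way to do it):
# Print all mount paths that run the KV Engine in version 2
$ vault secrets list -format=json \
  | jq -r 'to_entries[] | select(.value.type == "kv" and .value.options.version == "2") | .key'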
Path Concept and Isolation
Each enabled KV Engine forms its own namespace. This means:
- Case-Sensitivity: The paths kv/ and KV/ are different and completely independent because Vault distinguishes between uppercase and lowercase letters.
- No Overlaps: A mount at secrets/ prevents additional mounts at subpaths like secrets/sub/. This ensures that two teams do not accidentally write to the same directories, which could happen if there are misunderstandings about the assigned namespaces.
- Complete Isolation: Behind the scenes, each engine receives a randomly generated UUID path that functions like a chroot. Even if one mount is compromised, it cannot access data from other engines.
Structuring Secrets
A clean naming convention ensures short access paths, clear responsibilities, and ultimately fewer support requests. A frequently used pattern is "<area>/<application>/<secret-group>/". In practice, it might look like this:
apps/
└── aws/
    ├── prod/
    │   ├── user: dbadmin
    │   ├── password: P@ssw0rd
    │   └── api_key: b93md83mdmapw
    └── dev/
        ├── cert: -----BEGIN CERTIFICATE-----
        └── key: -----BEGIN PRIVATE KEY-----
The directory apps/aws/ created here serves only as a container; it contains no secret itself. Only the lowest nodes hold the actual key-value pairs.
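Assuming a KV v2 engine mounted at apps/, the production entry from the tree above would be written like this:
$ vault kv put apps/aws/prod user=dbadmin password='P@ssw0rd' api_key=b93md83mdmapw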
The advantage of this hierarchy lies in the ease of delegating permissions. A dev team could receive read permission for all paths containing dev, while the ops team retains global write permissions for production within the assigned namespace (namespaces are virtual and mutually isolated Vault instances within Vault Enterprise).
Permission Model
Working with the KV Engine requires five different permissions, known as capabilities:
- create: Create new secrets (overwrites existing ones in v1)
- update: Modify existing secrets (writes to a path that already exists)
- read: Read secrets (returned in plaintext)
- delete: Delete secrets, either soft or hard
- list: List available secrets in a path (keys only, not values)
These capabilities are assigned via policies - a topic we will cover in more detail in a later article. In brief: policies combine these rights and link them to token roles. This allows, for example, a CI system to write to paths like apps/build during the build phase, but only read from apps/*/prod during the release job.
Outlook
In this article, we have covered the fundamentals of the Key/Value Secrets Engine and understood the key differences between Version 1 and Version 2.
In the next part, we will take a deeper dive into practical usage: how do we store, read, and manage secrets? What are best practices for path structures? And how do we make the most of KV v2 versioning?
Stay tuned - it’s going to get hands-on!