
In the previous part of this series, we showed how seemingly harmless data sources in Terraform modules can turn into a serious performance problem: multi-minute terraform plan runtimes, unstable pipelines and uncontrollable API throttling.
But how can you avoid this scalability trap in an elegant and sustainable way?
In this part, we present proven architectural patterns that allow you to centralize data sources, inject them efficiently and thereby achieve fast, stable and predictable Terraform executions even with hundreds of module instances.
Included: three scalable solution strategies, a practical step-by-step guide and a best practices checklist for production-ready infrastructure modules.
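As a small preview of the injection pattern discussed in the article, here is a minimal sketch (module path, variable names and inputs are illustrative, not taken from the article): the availability domains are queried once in the root module and handed to every module instance as an ordinary input variable, so no instance performs its own lookup.

    # Root module: query the cloud API exactly once.
    data "oci_identity_availability_domains" "ads" {
      compartment_id = var.tenancy_ocid
    }

    # Inject the result into each instance as a plain value.
    module "app" {
      source   = "./modules/app"    # illustrative module path
      for_each = var.app_configs    # e.g. 100 entries

      availability_domains = data.oci_identity_availability_domains.ads.availability_domains
    }

    # Inside modules/app: an input variable instead of a data source.
    variable "availability_domains" {
      type = list(any)
    }

However many instances you create, terraform plan issues a single API call for the availability domains.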
Read more: Terraform @ Scale - Part 4b: Best Practices for Scaling Data Sources

Terraform's Data Sources are a popular way to dynamically populate configurations with live values from the cloud environment at hand. Using them in dynamic infrastructures, however, requires some foresight. A single, seemingly harmless data.oci_identity_availability_domains in a module is enough - and suddenly every terraform plan takes minutes instead of seconds, because 100 module instances mean 100 API calls and your cloud provider starts throttling. Welcome to the world of unintended API amplification through Data Sources.
In this article, I will show you why Data Sources in Terraform modules can pose a scaling problem.
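To make the amplification concrete, here is a hedged sketch of the anti-pattern (module path and inputs are illustrative): the lookup is embedded in the module, so every instance repeats it on every plan.

    # modules/app/main.tf: the data source lives inside the module.
    data "oci_identity_availability_domains" "ads" {
      compartment_id = var.compartment_id
    }

    # Root module: 100 instances of the module trigger 100 identical
    # API calls on every terraform plan.
    module "app" {
      source   = "./modules/app"       # illustrative module path
      for_each = toset(var.app_names)  # e.g. 100 entries
      name     = each.value
    }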
Read more: Terraform @ Scale - Part 4a: Data Sources are Dangerous!

Even the most sophisticated infrastructure architecture cannot prevent every error. That is why it is essential to monitor Terraform operations proactively - especially those with potentially destructive impact. The goal is to detect critical changes early and trigger automated alerts before a change escalates into an uncontrolled blast radius.
Sure - your system engineer will undoubtedly point out that Terraform displays the full plan before executing an apply, and that execution must be confirmed by entering "yes".
What your engineer does not mention: they do not actually read the plan before allowing it to proceed.
“It'll be fine.”
Read more: Terraform @ Scale - Part 3c: Monitoring and Alerting for Blast Radius Events

The Key/Value Secrets Engine is an integral part of almost every Vault implementation. It forms the foundation for securely storing static secrets and is used far more frequently in practice than many dynamic engines.
Following the theoretical introduction in part 2a, this article turns to the practical work with the KV Engine. We demonstrate how to write, read, update and delete secrets, and provide a practical analysis of the differences between KV Version 1 and Version 2. The focus is on production-relevant commands, realistic pitfalls and concrete recommendations for day-to-day operations, presented as a mix of tutorial and cheat sheet.
Read more: HashiCorp Vault Deep Dive – Part 2b: Practical Work with the Key/Value Secrets Engine

Despite careful blast radius minimisation, segmented states and lifecycle guardrails, sooner or later it can still happen: a terraform apply accidentally deletes production resources, or a terraform destroy affects more than intended.
What to do once the damage is already done?
In the previous article of this series, I explained how to minimise the blast radius. In this follow-up, I will show proven techniques for restoring damaged Terraform states and limiting the impact after an incident.
Read more: Terraform @ Scale - Part 3b: Blast Radius Recovery Strategies