Cloud Infrastructure SAST: Scanning Terraform for security vulnerabilities and non-compliance using Checkov

In this article I explore Checkov, a static code analysis tool for Terraform.

Automatically scanning Terraform code for security vulnerabilities has been missing from my toolbelt for a long time. So far, I’ve relied on a hodge-podge of tools, peer reviews, and Scout2 to make sure my GRC team wouldn’t call me in the middle of the night to yell at me.

So when I came across an excellent article comparing IaC SAST tools, I learnt that getting SAST working for Terraform was easier than I had imagined. Since Checkov seems to be leading the pack right now, I sat down with a nice cup of tea to figure out what it was and how it worked.

Checkov (Anton?)

Checkov is a SAST tool for Terraform, CloudFormation, Kubernetes, and more, which checks 1000+ best practices and security configurations across the three major cloud providers. It can even detect AWS credentials stuffed in the usual places, and it supports the creation of custom checks (like checking tagging policies! yay!).

Installing Checkov is as simple as you would expect. On macOS, I just ran pip3 install checkov, and that was it. You can find detailed installation instructions here.

To get things started, I have a bad.tf file with a simple S3 bucket:

resource "aws_s3_bucket" "my-kfc-bucket" {

  bucket = "kluck-kluck-bucket"
  acl    = "public-read"

  tags = {
    Name       = "kluck-kluck-bucket"
    CostCenter = "Poultry"
    Owner      = "Tester"
  }
}

(Note: You can find all the files I’ve used in this article in my GitLab repo).

Checkov’s --file argument tests individual files. Running checkov --file bad.tf outputs the following:

    ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
By bridgecrew.io | version: 2.0.267 

terraform scan results:

Passed checks: 3, Failed checks: 7, Skipped checks: 0

Check: CKV_AWS_70: "Ensure S3 bucket does not allow an action with any Principal"
        PASSED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11
        Guide: https://docs.bridgecrew.io/docs/bc_aws_s3_23

Check: CKV_AWS_57: "S3 Bucket has an ACL defined which allows public WRITE access."
        PASSED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11
        Guide: https://docs.bridgecrew.io/docs/s3_2-acl-write-permissions-everyone

Check: CKV_AWS_93: "Ensure S3 bucket policy does not lockout all but root user. (Prevent lockouts needing root account fixes)"
        PASSED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11
        Guide: https://docs.bridgecrew.io/docs/bc_aws_iam_53

Check: CKV_AWS_19: "Ensure all data stored in the S3 bucket is securely encrypted at rest"
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11
        Guide: https://docs.bridgecrew.io/docs/s3_14-data-encrypted-at-rest

                1  | resource "aws_s3_bucket" "my-kfc-bucket" {
                2  | 
                3  |   bucket = "kluck-kluck-bucket"
                4  |   acl    = "public-read"
                5  | 
                6  |   tags = {
                7  |     Name       = "kluck-kluck-bucket"
                8  |     CostCenter = "Poultry"
                9  |     Owner      = "Tester"
                10 |   }
                11 | }

Check: CKV_AWS_20: "S3 Bucket has an ACL defined which allows public READ access."
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11
        Guide: https://docs.bridgecrew.io/docs/s3_1-acl-read-permissions-everyone

                1  | resource "aws_s3_bucket" "my-kfc-bucket" {
                2  | 
                3  |   bucket = "kluck-kluck-bucket"
                4  |   acl    = "public-read"
                5  | 
                6  |   tags = {
                7  |     Name       = "kluck-kluck-bucket"
                8  |     CostCenter = "Poultry"
                9  |     Owner      = "Tester"
                10 |   }
                11 | }

Check: CKV_AWS_18: "Ensure the S3 bucket has access logging enabled"
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11
        Guide: https://docs.bridgecrew.io/docs/s3_13-enable-logging

                1  | resource "aws_s3_bucket" "my-kfc-bucket" {
                2  | 
                3  |   bucket = "kluck-kluck-bucket"
                4  |   acl    = "public-read"
                5  | 
                6  |   tags = {
                7  |     Name       = "kluck-kluck-bucket"
                8  |     CostCenter = "Poultry"
                9  |     Owner      = "Tester"
                10 |   }
                11 | }

Check: CKV_AWS_144: "Ensure that S3 bucket has cross-region replication enabled"
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11
        Guide: https://docs.bridgecrew.io/docs/ensure-that-s3-bucket-has-cross-region-replication-enabled

                1  | resource "aws_s3_bucket" "my-kfc-bucket" {
                2  | 
                3  |   bucket = "kluck-kluck-bucket"
                4  |   acl    = "public-read"
                5  | 
                6  |   tags = {
                7  |     Name       = "kluck-kluck-bucket"
                8  |     CostCenter = "Poultry"
                9  |     Owner      = "Tester"
                10 |   }
                11 | }

Check: CKV_AWS_145: "Ensure that S3 buckets are encrypted with KMS by default"
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11
        Guide: https://docs.bridgecrew.io/docs/ensure-that-s3-buckets-are-encrypted-with-kms-by-default

                1  | resource "aws_s3_bucket" "my-kfc-bucket" {
                2  | 
                3  |   bucket = "kluck-kluck-bucket"
                4  |   acl    = "public-read"
                5  | 
                6  |   tags = {
                7  |     Name       = "kluck-kluck-bucket"
                8  |     CostCenter = "Poultry"
                9  |     Owner      = "Tester"
                10 |   }
                11 | }

Check: CKV_AWS_21: "Ensure all data stored in the S3 bucket have versioning enabled"
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11
        Guide: https://docs.bridgecrew.io/docs/s3_16-enable-versioning

                1  | resource "aws_s3_bucket" "my-kfc-bucket" {
                2  | 
                3  |   bucket = "kluck-kluck-bucket"
                4  |   acl    = "public-read"
                5  | 
                6  |   tags = {
                7  |     Name       = "kluck-kluck-bucket"
                8  |     CostCenter = "Poultry"
                9  |     Owner      = "Tester"
                10 |   }
                11 | }

Check: CKV2_AWS_6: "Ensure that S3 bucket has a Public Access block"
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11
        Guide: https://docs.bridgecrew.io/docs/s3-bucket-should-have-public-access-blocks-defaults-to-false-if-the-public-access-block-is-not-attached

                1  | resource "aws_s3_bucket" "my-kfc-bucket" {
                2  | 
                3  |   bucket = "kluck-kluck-bucket"
                4  |   acl    = "public-read"
                5  | 
                6  |   tags = {
                7  |     Name       = "kluck-kluck-bucket"
                8  |     CostCenter = "Poultry"
                9  |     Owner      = "Tester"
                10 |   }
                11 | }

My file passed 3 checks but failed the following 7:

  1. Check: CKV_AWS_19: “Ensure all data stored in the S3 bucket is securely encrypted at rest”
  2. Check: CKV_AWS_20: “S3 Bucket has an ACL defined which allows public READ access.”
  3. Check: CKV_AWS_18: “Ensure the S3 bucket has access logging enabled”
  4. Check: CKV_AWS_144: “Ensure that S3 bucket has cross-region replication enabled”
  5. Check: CKV_AWS_145: “Ensure that S3 buckets are encrypted with KMS by default”
  6. Check: CKV_AWS_21: “Ensure all data stored in the S3 bucket have versioning enabled”
  7. Check: CKV2_AWS_6: “Ensure that S3 bucket has a Public Access block”

Not all my buckets will need cross-region replication or access logging, so I’ll skip those two checks and run Checkov again (this time with --compact, --quiet, and --no-guide):

checkov --compact --quiet --no-guide --skip-check CKV_AWS_144,CKV_AWS_18 --file bad.tf:

   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
By bridgecrew.io | version: 2.0.267 

terraform scan results:

Passed checks: 3, Failed checks: 5, Skipped checks: 0

Check: CKV_AWS_19: "Ensure all data stored in the S3 bucket is securely encrypted at rest"
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11

Check: CKV_AWS_20: "S3 Bucket has an ACL defined which allows public READ access."
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11

Check: CKV_AWS_145: "Ensure that S3 buckets are encrypted with KMS by default"
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11

Check: CKV_AWS_21: "Ensure all data stored in the S3 bucket have versioning enabled"
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11

Check: CKV2_AWS_6: "Ensure that S3 bucket has a Public Access block"
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /bad.tf:1-11

This time, 3 checks passed and 5 failed, exactly as expected. The output lists the file name and line numbers for each failed check. I’ve suppressed the Guide links with --no-guide, but when they are shown, each Guide page contains clear instructions for fixing the issue using either the AWS Console or Terraform.
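
Passing --skip-check works, but it has to be repeated on every invocation. Checkov also supports inline suppressions: a #checkov:skip=<check_id>:<comment> comment placed inside a resource block skips that check for that resource only and reports it under the Skipped checks count. As a sketch (the suppression reasons below are my own wording), bad.tf could carry the skips itself:

resource "aws_s3_bucket" "my-kfc-bucket" {

  # Inline suppressions: Checkov skips these two checks for this resource only
  # and lists them as skipped in the scan output.
  #checkov:skip=CKV_AWS_144:Cross-region replication is not needed for this bucket
  #checkov:skip=CKV_AWS_18:Access logging is not needed for this bucket

  bucket = "kluck-kluck-bucket"
  acl    = "public-read"

  tags = {
    Name       = "kluck-kluck-bucket"
    CostCenter = "Poultry"
    Owner      = "Tester"
  }
}

With these comments in place, the skip list no longer has to live on the command line, and the suppression reasons are versioned along with the code.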

After fixing the above issues, my good.tf looks like this:

resource "aws_s3_bucket" "my-kfc-bucket" {

  bucket = "kluck-kluck-bucket"
  # Public access is blocked via the aws_s3_bucket_public_access_block resource below
  # (block_public_acls / block_public_policy aren't arguments of aws_s3_bucket itself).

  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.mykey.arn
        sse_algorithm     = "aws:kms"

      }
    }
  }

  tags = {
    Name       = "kluck-kluck-bucket"
    CostCenter = "Poultry"
    Owner      = "Tester"
  }
}

resource "aws_s3_bucket_public_access_block" "my-kfc-bucket-policy" {
  bucket = aws_s3_bucket.my-kfc-bucket.id

  block_public_acls       = true
  block_public_policy     = true
  restrict_public_buckets = true
  ignore_public_acls      = true
}
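
The server_side_encryption_configuration block above references aws_kms_key.mykey, which isn't shown here and is presumably defined elsewhere in the repo. For completeness, a minimal sketch of such a key (the description and settings below are my own assumptions, not the exact resource from the repo) could look like this:

resource "aws_kms_key" "mykey" {
  # Customer-managed key used for the bucket's default encryption.
  description             = "KMS key for kluck-kluck-bucket"
  deletion_window_in_days = 10
  enable_key_rotation     = true
}

Checkov also has a check for customer-managed key rotation, so enable_key_rotation = true avoids yet another finding.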

Re-running Checkov on good.tf shows:

   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
By bridgecrew.io | version: 2.0.267 

terraform scan results:

Passed checks: 12, Failed checks: 0, Skipped checks: 0

12 checks passed, 0 failed!

Checking Tagging rules with custom policies!

So far, I’ve figured out how to use Checkov to test against the built-in policies.

But organisation standards, conventions, and GRC policies require custom logic. And consistent resource tagging is one of the most versatile tools available to us, whether we’re driving tag-based automation and deployments or establishing cost controls for products and business units. It’s also one of the hardest to implement and maintain.

Checkov provides the framework for defining custom policies in YAML or Python.

Custom policies in Checkov contain two sections:

  1. The metadata section has the name, ID, and Category of the policy. The ID and Category have a defined syntax and list of values which can be used.
  2. The definition section contains the policy logic, expressed as a combination of conditions, resource types, keys/values, and operators.

My code for custom_checks/check_tagging.yaml is as follows:

---
metadata:
  name: "Check that S3 buckets have Owner = Testing Team tag."
  id: "CKV2_AWS_TAGGING_1"
  category: "CONVENTION"
definition:
  cond_type: "attribute"
  resource_types:
    - "aws_s3_bucket"
  attribute: "tags.Owner"
  operator: "equals"
  value: "Testing Team"

My custom policy will check all aws_s3_bucket resources and ensure they have an Owner tag with value Testing Team. If not, the policy check will fail.

My good.tf above has an Owner tag with the value Tester, so it should fail this policy.

So running checkov --compact --quiet --no-guide --skip-check CKV_AWS_144,CKV_AWS_18 --external-checks-dir custom_checks --file good.tf will show:

   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
By bridgecrew.io | version: 2.0.267 

terraform scan results:

Passed checks: 12, Failed checks: 1, Skipped checks: 0

Check: CKV2_AWS_TAGGING_1: "Check that S3 buckets have Owner = Testing Team tag."
        FAILED for resource: aws_s3_bucket.my-kfc-bucket
        File: /good.tf:1-26

I will now fix the Owner tag in my good.tf:

  tags = {
    Name       = "kluck-kluck-bucket"
    CostCenter = "Poultry"
    Owner      = "Testing Team"
  }

… and rerun the test:

   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
By bridgecrew.io | version: 2.0.267 

terraform scan results:

Passed checks: 13, Failed checks: 0, Skipped checks: 0

Success! The policy works as expected :)

Conclusion

The 1000+ built-in checks and the ability to define custom policies make Checkov a vital tool in my DevOps toolbelt. Its integration with Git pre-commit hooks and several CI/CD tools means infrastructure can be unit tested along with code. I’m especially fond of the simple syntax for defining custom policies. The option to choose YAML over Python will make policy reviews with non-technical stakeholders more productive.

After this little POC, I’m eager to start using this in production projects, and I hope you are too.

Happy coding :)

