madhuakula / cloudquery

cloudquery transforms your cloud infrastructure into a SQL database for easy monitoring, governance, and security.

Home Page:https://cloudquery.io

Repository from Github: https://github.com/madhuakula/cloudquery

cloudquery logo


cloudquery transforms your cloud infrastructure into queryable SQL or Graphs for easy monitoring, governance, and security.

cloudquery overview

What is cloudquery and why use it?

cloudquery pulls, normalizes, exposes, and monitors your cloud infrastructure and SaaS apps as a SQL database. This abstracts away the various scattered APIs, enabling you to define security, governance, cost, and compliance policies with SQL.

  • cloudquery can be easily extended to more resources and SaaS providers (open an Issue).
  • cloudquery comes with built-in policy packs such as AWS CIS (more is coming!).

Think of cloudquery as a compliance-as-code tool, inspired by tools like osquery and terraform. Cool, right?

Supported providers (Actively expanding)

Currently, cloudquery supports multiple providers, including AWS, Azure, GCP, and Okta.

Check out https://hub.cloudquery.io for more details about the providers.

If you want us to add a new provider or resource please open an Issue.

Installing cloudquery

Binary

  • You can download the precompiled binary from releases, or via the CLI
export OS=Darwin # Possible values: Linux,Windows,Darwin
curl -L https://github.com/cloudquery/cloudquery/releases/latest/download/cloudquery_${OS}_x86_64 -o cloudquery
chmod a+x cloudquery
./cloudquery --help

# If you want to download a specific version and not the latest use the following endpoint
export VERSION=v0.13.6 # specify a version, refer to releases https://github.com/cloudquery/cloudquery/releases
curl -L https://github.com/cloudquery/cloudquery/releases/download/${VERSION}/cloudquery_${OS}_x86_64 -o cloudquery

Homebrew

  • You can use homebrew on macOS to install cloudquery
brew install cloudquery/tap/cloudquery
# After initial install you can upgrade the version via:
brew upgrade cloudquery

Docker

docker pull ghcr.io/cloudquery/cloudquery:latest

# If you want to download a specific version and not the latest use the following format (refer to releases for tags)
docker pull ghcr.io/cloudquery/cloudquery:0.13.6
docker pull ghcr.io/cloudquery/cloudquery:0.13
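The image can then be run with the same subcommands shown in the Quick Start below. A minimal sketch, assuming the image's entrypoint is the cloudquery binary and that it picks up a config.yml mounted at /config.yml (both are assumptions, not verified against the published image):

# Hypothetical invocation (entrypoint and config handling are assumptions):
# mount a local config.yml and fetch into a Postgres instance that is
# reachable from inside the container (adjust the DSN host accordingly)
docker run --rm -v $(pwd)/config.yml:/config.yml \
  ghcr.io/cloudquery/cloudquery:latest \
  fetch --dsn "host=host.docker.internal user=postgres password=pass dbname=postgres port=5432"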

Compile from source

git clone https://github.com/cloudquery/cloudquery.git
cd cloudquery
# Make sure you have installed go and its required dependencies
go build .
./cloudquery # --help to see all options

Quick Start

Running cloudquery

  • First, generate a config.yml file that describes which resources you want cloudquery to pull, normalize, and transform into the specified SQL database, by running the following command
cloudquery init aws # choose one or more from [aws azure gcp okta]
# cloudquery init gcp azure # This will generate a config containing gcp and azure providers
# cloudquery init --help # Show all possible auto-generated configs and flags
  • Once your config.yml is generated, run the following command to fetch the resources from the provider
# you can spawn a local postgresql with docker
# docker run -p 5432:5432 -e POSTGRES_PASSWORD=pass -d postgres
cloudquery fetch --dsn "host=localhost user=postgres password=pass dbname=postgres port=5432"
# cloudquery fetch --help # Show all possible fetch flags
  • Log in to the Postgres database using psql -h localhost -p 5432 -U postgres -d postgres
postgres=# \dt
                                    List of relations
 Schema |                            Name                             | Type  |  Owner   
--------+-------------------------------------------------------------+-------+----------
 public | aws_autoscaling_launch_configuration_block_device_mapping   | table | postgres
 public | aws_autoscaling_launch_configurations                       | table | postgres

Querying the data collected by cloudquery using the psql shell

  • List AWS EC2 images
SELECT * FROM aws_ec2_images;
  • Find all public facing AWS load balancers
SELECT * FROM aws_elbv2_load_balancers WHERE scheme = 'internet-facing';

Running policy packs

cloudquery comes with ready-to-use compliance policy packs (curated lists of queries for checking compliance) which you can use as-is or modify to fit your use case.

  • Currently, cloudquery supports the AWS CIS policy pack (it is under active development, so it doesn't cover the whole spec yet).

  • To run the AWS CIS pack, enter the following command (make sure you have fetched all the resources beforehand with the fetch command)

./cloudquery policy --path=<PATH_TO_POLICY_FILE> --output=<PATH_TO_OUTPUT_POLICY_RESULT> --dsn "host=localhost user=postgres password=pass dbname=postgres port=5432"

AWS CIS Benchmarks

  • You can also create your own policy file, for example
views:
  - name: "my_custom_view"
    query: >
        CREATE VIEW my_custom_view AS ...
queries:
  - name: "Find thing that violates policy"
    query: >
        SELECT account_id, arn FROM ...

The policy command uses the policy file path ./policy.yml by default, but this can be overridden via the --path flag or the CQ_POLICY_PATH environment variable.
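For example, either of the following invocations points the policy command at a custom policy file (the file name and DSN values are placeholders; the --output flag is omitted for brevity and can be added as in the command above to write results to a file):

# Via the --path flag
./cloudquery policy --path=./my-policy.yml --dsn "host=localhost user=postgres password=pass dbname=postgres port=5432"
# Or via the environment variable
CQ_POLICY_PATH=./my-policy.yml ./cloudquery policy --dsn "host=localhost user=postgres password=pass dbname=postgres port=5432"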

  • Full documentation, resources, and SQL schema definitions are available here

cloudquery schema

Provider Authentication

AWS

  • You should be authenticated with an AWS account that has the correct permissions, using any of the following options (see full documentation)

    • You can specify an access key and secret key via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables (see the sketch at the end of this section)
    • Alternatively, you can use the awscli credentials in ~/.aws/credentials created via aws configure
    • If you have multiple profiles, set the AWS_PROFILE environment variable to the profile you want cloudquery to use
  • Multi-account AWS support is available by using an account which can AssumeRole to other accounts

  • In your config.hcl you need to specify a role_arn for each additional account you want to query, in the following way

accounts "<YOUR ACCOUNT ID>"{
 // Optional. Role ARN we want to assume when accessing this account
 role_arn = "<YOUR_ROLE_ARN>"
}
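As noted above, the quickest option is environment-variable authentication. A minimal sketch (all values below are placeholders, not real credentials):

# Static credentials (placeholder values)
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Or, if you already configured profiles with `aws configure`, pick one (placeholder name)
export AWS_PROFILE=my-cloudquery-profile
./cloudquery fetch --dsn "host=localhost user=postgres password=pass dbname=postgres port=5432"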

Azure

You should set the following environment variables: AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID, which you can generate via az ad sp create-for-rbac --sdk-auth. See the full details at environment based authentication for the SDK.
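A minimal sketch of that flow (the exported values are copied from the JSON that az prints; treat them as placeholders):

# Create a service principal; the command prints a JSON document with credentials
az ad sp create-for-rbac --sdk-auth
# Export the values cloudquery expects, taken from that JSON output
export AZURE_CLIENT_ID=<clientId from the JSON>
export AZURE_CLIENT_SECRET=<clientSecret from the JSON>
export AZURE_TENANT_ID=<tenantId from the JSON>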

GCP

You should be authenticated with a GCP service account that has the correct permissions for the data you want to pull. Set GOOGLE_APPLICATION_CREDENTIALS to point to your downloaded credentials file.
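For example (the key file path is a placeholder for wherever you saved the downloaded credentials):

# Point cloudquery at the downloaded service account key file (placeholder path)
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/gcp-credentials.json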

Okta

You need to set the OKTA_TOKEN environment variable.

Bring your own

You can leverage the cloudquery provider SDK, which enables building providers to query any service or custom in-house solution with SQL.

Common example queries

Below are some example queries for basic operations such as finding insecure buckets, identifying publicly exposed load balancers, and spotting misconfigurations.

Find GCP buckets with public-facing read permissions

SELECT gcp_storage_buckets.name
FROM gcp_storage_buckets
         JOIN gcp_storage_bucket_policy_bindings ON gcp_storage_bucket_policy_bindings.bucket_id = gcp_storage_buckets.id
         JOIN gcp_storage_bucket_policy_binding_members ON gcp_storage_bucket_policy_binding_members.bucket_policy_binding_id = gcp_storage_bucket_policy_bindings.id
WHERE gcp_storage_bucket_policy_binding_members.name = 'allUsers' AND gcp_storage_bucket_policy_bindings.role = 'roles/storage.objectViewer';

Find all public facing AWS load balancers

SELECT * FROM aws_elbv2_load_balancers WHERE scheme = 'internet-facing';

Find all unencrypted RDS instances

SELECT * FROM aws_rds_clusters WHERE storage_encrypted = false;

Find all unencrypted AWS buckets

SELECT aws_s3_buckets.*
FROM aws_s3_buckets
    LEFT JOIN aws_s3_bucket_encryption_rules ON aws_s3_buckets.id = aws_s3_bucket_encryption_rules.bucket_id
WHERE aws_s3_bucket_encryption_rules.bucket_id IS NULL;

More examples and information are available here

Running cloudquery on AWS (Lambda, Terraform)

You can use the Makefile to build, deploy, and destroy the entire terraform infrastructure. The default execution configuration file can be found at ./deploy/aws/terraform/tasks/us-east-1

  • You can define more tasks by adding cloudwatch periodic events

For example

  • The default configuration will execute cloudquery once a day:
resource "aws_cloudwatch_event_rule" "scan_schedule" {
  name = "Cloudquery-us-east-1-scan"
  description = "Run cloudquery everyday on us-east-1 resources"

  schedule_expression = "rate(1 day)"
}

resource "aws_cloudwatch_event_target" "sns" {
  rule      = aws_cloudwatch_event_rule.scan_schedule.name
  arn       = aws_lambda_function.cloudquery.arn
  input     = file("tasks/us-east-1/input.json")
}
  • Build the cloudquery deployment package
make build
  • Deploy cloudquery infrastructure to AWS using terraform
make apply

You can also use init, plan, and destroy to perform different operations with terraform; refer to the Terraform docs for more information.
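If you prefer to run Terraform directly rather than through the Makefile, a minimal sketch (the working directory is inferred from the task path above and may differ):

# Run terraform directly against the bundled configuration (inferred path)
cd deploy/aws/terraform
terraform init      # download providers and initialize state
terraform plan      # preview the changes before applying
terraform destroy   # tear the infrastructure down when finished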

License

By contributing to cloudquery you agree that your contributions will be licensed under the Mozilla Public License 2.0, as defined in the LICENSE file.

Contribution

Feel free to open a Pull Request for small fixes and changes. For bigger changes and new providers, please open an issue first to prevent duplicated work and to discuss the relevant details.
