Amazon CloudWatch Logs Integration Guide
Taegis™ XDR can ingest data produced by CloudWatch Logs, but it does not collect directly from CloudWatch's read API because of Amazon's CloudWatch Logs quota limitations. Instead, Taegis™ XDR follows Amazon's recommended practice for streaming data: a subscription filter forwards log data to Amazon S3, a source better suited to continuous reading. This ensures that data arrives in Taegis™ XDR in a timely and consistent manner.
This document guides you through applying a subscription filter to Amazon CloudWatch Logs data sources in a way that can then be ingested by Taegis™ XDR.
This guide does not cover configuring log collection in Taegis™ XDR itself; it covers forwarding data from CloudWatch Logs to Amazon S3, a service that Taegis™ XDR can ingest from directly. For data produced by CloudWatch Logs, this guide is therefore a prerequisite to our S3 integration guides.
Process Summary
This guide summarizes the process documented in the Amazon user guide Using CloudWatch Logs subscription filters. For detailed command examples or further support, reference the Amazon documentation directly.
Create an S3 Bucket
If a bucket already exists, proceed to the next step. Amazon recommends creating a bucket specifically for CloudWatch Logs subscription forwarding.
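As a sketch, the bucket can be created with the AWS CLI. The bucket name and region below are placeholders; substitute your own:

```shell
# Bucket names are globally unique -- replace with your own.
# For us-east-1, omit --create-bucket-configuration entirely.
aws s3api create-bucket \
  --bucket my-cloudwatch-logs-bucket \
  --create-bucket-configuration LocationConstraint=us-west-2
```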
Create an IAM Role To Allow Firehose To Write to S3
An IAM role is needed to grant Amazon Kinesis Data Firehose permission to put data into the preferred Amazon S3 bucket. Amazon's documentation provides an example policy statement and corresponding IAM create-role command to assist with this task.
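A sketch of this step, following the shape of Amazon's example: a trust policy lets the Firehose service assume the role, and create-role creates it. The role name and file name are illustrative:

```shell
# Trust policy allowing Kinesis Data Firehose to assume the role.
cat > TrustPolicyForFirehose.json <<'EOF'
{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "firehose.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}
EOF

aws iam create-role \
  --role-name FirehosetoS3Role \
  --assume-role-policy-document file://TrustPolicyForFirehose.json
```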
Create a Permissions Policy To Allow Kinesis Firehose To Access S3
A policy is needed to allow Kinesis Firehose to perform the S3 operations it requires, such as s3:PutObject and s3:GetBucketLocation, on the Amazon S3 bucket of choice. Amazon's documentation provides a sample statement and corresponding put-role-policy command to enable this.
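A minimal sketch of attaching such a policy to the role created above; the bucket name, role name, and policy name are placeholders:

```shell
# Grant the Firehose role the S3 actions it needs on the target bucket.
cat > PermissionsForFirehose.json <<'EOF'
{
  "Statement": {
    "Effect": "Allow",
    "Action": [
      "s3:AbortMultipartUpload", "s3:GetBucketLocation",
      "s3:GetObject", "s3:ListBucket",
      "s3:ListBucketMultipartUploads", "s3:PutObject"
    ],
    "Resource": [
      "arn:aws:s3:::my-cloudwatch-logs-bucket",
      "arn:aws:s3:::my-cloudwatch-logs-bucket/*"
    ]
  }
}
EOF

aws iam put-role-policy \
  --role-name FirehosetoS3Role \
  --policy-name Permissions-Policy-For-Firehose \
  --policy-document file://PermissionsForFirehose.json
```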
Create a Destination Kinesis Data Firehose Delivery Stream
A delivery stream is used to put data from CloudWatch Logs into your Amazon S3 bucket. Amazon provides a sample command to create the delivery stream. Note that delivery streams can take several seconds to minutes to become active after being created. You can view the status of the stream either from the Kinesis Firehose console or by using Amazon's provided aws-cli command to describe the created delivery stream.
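The step above can be sketched as follows; the stream name, account ID, and ARNs are placeholders, and the role and bucket are the ones created earlier:

```shell
# Create a delivery stream that writes to the S3 bucket via the Firehose role.
aws firehose create-delivery-stream \
  --delivery-stream-name my-delivery-stream \
  --s3-destination-configuration \
  '{"RoleARN": "arn:aws:iam::123456789012:role/FirehosetoS3Role",
    "BucketARN": "arn:aws:s3:::my-cloudwatch-logs-bucket"}'

# Poll until DeliveryStreamStatus reports ACTIVE before proceeding.
aws firehose describe-delivery-stream \
  --delivery-stream-name my-delivery-stream
```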
Create an IAM Role To Allow CloudWatch To Write to the Kinesis Firehose Delivery Stream
CloudWatch Logs needs permission to put data into the created Kinesis Firehose delivery stream. Amazon's documentation provides a sample IAM statement and corresponding create-role command to enable this role.
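A sketch of creating that role, assuming the regional CloudWatch Logs service principal used in Amazon's example; adjust the region to match your environment:

```shell
# Trust policy allowing the CloudWatch Logs service to assume the role.
# The principal must match your region, e.g. logs.us-west-2.amazonaws.com.
cat > TrustPolicyForCWL.json <<'EOF'
{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.us-west-2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}
EOF

aws iam create-role \
  --role-name CWLtoKinesisFirehoseRole \
  --assume-role-policy-document file://TrustPolicyForCWL.json
```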
Create a Permissions Policy To Allow CloudWatch To Access Kinesis Firehose
A policy is needed to allow CloudWatch to put records into the Kinesis Firehose delivery stream of choice. Amazon's documentation provides a sample statement and corresponding put-role-policy command to enable this.
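A minimal sketch of that policy attachment; the stream ARN, account ID, and names are placeholders:

```shell
# Allow the CloudWatch Logs role to put records into the delivery stream.
cat > PermissionsForCWL.json <<'EOF'
{
  "Statement": [{
    "Effect": "Allow",
    "Action": ["firehose:PutRecord"],
    "Resource": ["arn:aws:firehose:us-west-2:123456789012:deliverystream/my-delivery-stream"]
  }]
}
EOF

aws iam put-role-policy \
  --role-name CWLtoKinesisFirehoseRole \
  --policy-name Permissions-Policy-For-CWL \
  --policy-document file://PermissionsForCWL.json
```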
Add a CloudWatch Logs Subscription Filter
Adding the subscription filter enables the flow of logs from CloudWatch Logs to Amazon S3. Amazon's documentation provides a sample put-subscription-filter command to enable this.
Amazon's example command enables flow from a CloudWatch log group named CloudTrail and uses a filter pattern that forwards only logs whose user identity is root. In most cases, you must change the log group name to fit your environment and supply a broader filter pattern, or none at all, to forward all logs from the group.
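A sketch of the broader case, with an empty filter pattern so every event in the group is forwarded; the log group name, filter name, and ARNs are placeholders:

```shell
# An empty --filter-pattern forwards every event in the log group.
aws logs put-subscription-filter \
  --log-group-name "my-log-group" \
  --filter-name "AllLogsToFirehose" \
  --filter-pattern "" \
  --destination-arn "arn:aws:firehose:us-west-2:123456789012:deliverystream/my-delivery-stream" \
  --role-arn "arn:aws:iam::123456789012:role/CWLtoKinesisFirehoseRole"
```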
Enable Ingest from Amazon S3
Once data is flowing to an Amazon S3 bucket, proceed with setting up the Taegis™ XDR integration with that bucket.
References for S3 integrations: