Database Activity Monitoring on AWS Redshift

Auditing and monitoring are essential for security and compliance in any data warehouse, and Amazon Redshift is no exception. This guide provides an overview of how to leverage Redshift's native capabilities and the broader AWS ecosystem to effectively track user activity, investigate events, and maintain a secure data environment in 2025.

Refresher: what's in a database audit log?

An audit log captures a detailed trail of user activity within a database. Security teams hold onto audit logs so they can perform investigations into events. Furthermore, audit logs are analyzed continuously to find incidents that could create risk of data loss, such as compliance breaches or over-privileged users. The more relevant information and context you have about the events logged, the better. In addition to their security and operational usefulness, audit logs are also an important part of meeting compliance requirements.

Database audit logs are separated into two parts:

  1. An access log, detailing the history of successful and failed logins to the database.
  2. A query log, detailing the history of successful and failed queries made on the database.
A database session contains one successful login event, but may contain a large number of queries sent to the database. The query log is therefore where most of the volume accumulates, as a query can be anything from a SELECT 1 heartbeat to hundreds of lines of code.

Why are audit logs especially important in Amazon Redshift?

Amazon Redshift is widely used by companies as a data warehouse or as part of a data lake solution with Redshift Spectrum. Over time, companies naturally onboard more data stores, develop more projects, and have more users interacting with data. The more complex their data analysis environments become, the more important it is to understand what’s happening within them, both from a security and operational efficiency standpoint.

Where are Amazon Redshift logs kept?

There are two levels of auditing in Redshift. The logs are natively kept in system tables, but for long-term storage, you can enable audit logging to external AWS services, such as S3 buckets. Let’s explore these two options:

Using Amazon Redshift System Tables

With Redshift’s built-in system tables, you can investigate events quickly using SQL from within the database itself. The retention period for these logs is generally less than a week, so don’t expect to use them for the long term. However, they are still very useful for:
  1. Building live operational dashboards.
  2. Debugging and investigating recent or ongoing incidents. Note that it takes time for logs to reach S3 from the system tables, so the most recent events will only be available in the system tables (see the section below).
  3. Providing a fallback if you haven’t enabled external logging, or if it was temporarily unavailable.
The following are the key system tables (STLs) used for logging in Redshift (note that if you need to log queries, there is an extra step before these are logged, see below):
  • stl_connection_log: Holds information about connections, disconnections, and logins to the cluster.
  • stl_userlog: Lists changes to user definitions (creation, modification, deletion).
  • stl_query: Stores information about query execution. Note that the query text here may be truncated, so for the full query texts you should reconstruct the queries using stl_querytext.
  • stl_querytext: Contains the full text of executed queries. The rows in this table are split into chunks of 200 characters, so any query longer than that requires reassembly.
  • stl_ddltext: Holds Data Definition Language (DDL) commands such as CREATE, ALTER, or DROP. You will not find these in stl_querytext (unlike databases such as Snowflake, which keep all queries and commands in the same place).
  • stl_utilitytext: Logs other SQL commands, among them important ones to audit like GRANT and REVOKE.
Here are some examples of queries using these system tables:

Generating a query log from stl_querytext

This query reconstructs and displays the last 10 queries executed on the cluster:
SELECT query,
       LISTAGG(CASE WHEN LEN(RTRIM(text)) = 0 THEN text ELSE RTRIM(text) END)
         WITHIN GROUP (ORDER BY sequence) AS query_statement,
       COUNT(*) AS row_count
FROM stl_querytext
GROUP BY query
ORDER BY query DESC
LIMIT 10;
The following query returns the top 10 longest queries:
WITH queries AS (
  SELECT query,
         LISTAGG(CASE WHEN LEN(RTRIM(text)) = 0 THEN text ELSE RTRIM(text) END)
           WITHIN GROUP (ORDER BY sequence) AS query_statement,
         COUNT(*) AS row_count
  FROM stl_querytext
  GROUP BY query
)
SELECT *
FROM queries
WHERE query_statement ILIKE 'select%'
ORDER BY LEN(query_statement) DESC
LIMIT 10;

Amazon Redshift Permissions Changes Log

The following query retrieves all GRANT and REVOKE permission commands:  
WITH util_cmds AS (
  SELECT userid,
         LISTAGG(CASE WHEN LEN(RTRIM(text)) = 0 THEN text ELSE RTRIM(text) END)
           WITHIN GROUP (ORDER BY sequence) AS query_statement
  FROM stl_utilitytext
  GROUP BY userid, xid
  ORDER BY xid
)
SELECT util_cmds.userid, stl_userlog.username, query_statement
FROM util_cmds
LEFT JOIN stl_userlog ON (util_cmds.userid = stl_userlog.userid)
WHERE query_statement ILIKE '%GRANT%'
   OR query_statement ILIKE '%REVOKE%';

This one shows the last 10 failed logins:
SELECT *
FROM stl_connection_log
WHERE event = 'authentication failure'
ORDER BY recordtime DESC
LIMIT 10;
You can find additional system table queries in our Amazon Redshift security guide.
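If you want to pull these audit rows programmatically, for example in a scheduled report, you can run the same SQL through any Postgres-compatible driver. Here is a minimal sketch; the connection details are hypothetical placeholders, and the actual connection (shown with the redshift_connector package) is commented out so the snippet only builds the query:

```python
# Sketch: fetch recent failed logins from stl_connection_log programmatically.
# All connection details below are hypothetical placeholders.

def failed_logins_sql(limit: int = 10) -> str:
    """Build the failed-login audit query against stl_connection_log."""
    return (
        "SELECT * FROM stl_connection_log "
        "WHERE event = 'authentication failure' "
        f"ORDER BY recordtime DESC LIMIT {int(limit)};"
    )

# With the redshift_connector package installed and real credentials:
#   import redshift_connector
#   conn = redshift_connector.connect(
#       host="my-cluster.example.redshift.amazonaws.com",
#       database="dev", user="audit_reader", password="...",
#   )
#   with conn.cursor() as cur:
#       cur.execute(failed_logins_sql())
#       rows = cur.fetchall()
print(failed_logins_sql(5))
```

The int() cast on limit is a small guard against string interpolation of untrusted input; for anything user-supplied, prefer driver-level query parameters.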

Enabling Amazon Redshift Query Logs

This section describes how to enable automatic export of log data from Redshift to an external AWS service, such as CloudWatch or an S3 bucket, for long-term storage. Note that this may incur additional storage costs. Redshift's automated audit logging can export three types of log files:
  • Connection log (access log): Logs the authentication attempts to the cluster, as well as connections and disconnections.
  • User log: Logs changes in user database definitions.
  • User activity log (query log): Records each query submitted to the database before it's executed.
Here's how to enable this via the AWS console:
  1. Access your cluster: Log in to the AWS Management Console, navigate to Redshift, and select your specific cluster or Serverless workgroup.
  2. Configure audit logging: On your cluster's detail page, locate the "Properties" or "Configuration" tab.
    3. Within the "Database configurations" or "Monitoring" section, find and select the option to "Edit audit logging."
    4. Choose your log destination:
  • Amazon S3: Select an existing S3 bucket or create a new one. You can optionally specify a key prefix (e.g., audit-logs/my-cluster/) to organize logs within your bucket, which is highly recommended if the bucket serves multiple purposes.
  • Amazon CloudWatch: Choose an existing CloudWatch Log Group or create a new one.
    5. Save changes: Confirm your selections to apply the audit logging configuration. Be aware that by default, even if you set up logging to be automatically exported to S3, only the connection log and user log will be recorded. If you want to enable query logging as well, follow the instructions in the next section.
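If you prefer to script this step instead of clicking through the console, the same configuration maps to the Redshift EnableLogging API. Below is a minimal sketch; the cluster identifier, bucket name, and prefix are hypothetical placeholders, and the boto3 call itself is commented out so the snippet only assembles the request parameters:

```python
# Sketch: enable Redshift audit logging to S3 programmatically.
# The cluster name, bucket, and prefix below are hypothetical examples.

def build_enable_logging_params(cluster_id: str, bucket: str, prefix: str) -> dict:
    """Assemble the parameters for the Redshift EnableLogging API call."""
    return {
        "ClusterIdentifier": cluster_id,
        "BucketName": bucket,
        # A key prefix is recommended if the bucket serves multiple purposes.
        "S3KeyPrefix": prefix,
    }

params = build_enable_logging_params(
    "my-cluster", "my-audit-bucket", "audit-logs/my-cluster/"
)

# With boto3 installed and AWS credentials configured, you would then run:
#   import boto3
#   boto3.client("redshift").enable_logging(**params)
print(params)
```

Keeping the parameter assembly separate from the API call makes it easy to review (or unit-test) the configuration before it touches a live cluster.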

Enabling Query Logging in Amazon Redshift

Enabling the user activity log can significantly increase your storage consumption, as every query (from short strings to hundreds of lines long) is logged. This will directly impact your costs for S3 or CloudWatch. Therefore, this change should be implemented carefully, first in a staging environment, while continuously monitoring its performance and cost implications before deploying to production. User activity logging is disabled by default and requires changing the cluster's associated parameter group.
  1. From your Redshift cluster's details page in the AWS console, navigate to the "Configuration" or "Properties" tab. Click on the "Parameter group" linked to your cluster.
  2. On the parameter group management screen, select the "Parameters" tab.
  3. Set enable_user_activity_logging: Search for the enable_user_activity_logging parameter and change its value to true.
  4. After saving the parameter group modification, the changes must be applied to your cluster. In some cases, a cluster reboot (ouch!) may be required for the new setting to take full effect. Always consult the latest AWS documentation for specific guidance on parameter group updates and their impact on cluster availability.
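For automation, the same parameter change maps to the Redshift ModifyClusterParameterGroup API. A minimal sketch follows; the parameter group name is a hypothetical placeholder, and the boto3 call is commented out so the snippet only builds the request:

```python
# Sketch: turn on user activity logging in a cluster parameter group.
# "my-parameter-group" is a hypothetical placeholder name.

def build_user_activity_logging_params(parameter_group: str) -> dict:
    """Assemble parameters for the Redshift ModifyClusterParameterGroup call."""
    return {
        "ParameterGroupName": parameter_group,
        "Parameters": [
            {
                "ParameterName": "enable_user_activity_logging",
                "ParameterValue": "true",
                # Static parameters take effect only after a cluster reboot.
                "ApplyType": "static",
            }
        ],
    }

params = build_user_activity_logging_params("my-parameter-group")
# With boto3 installed and AWS credentials configured:
#   import boto3
#   boto3.client("redshift").modify_cluster_parameter_group(**params)
print(params)
```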
     

Sensitive Data in Logs

It's critical to understand that queries logged in Redshift system tables and exported audit logs are recorded verbatim and are not redacted by default. This means they can contain highly sensitive information. For example, a query like the following may contain PII:  
SELECT * FROM users WHERE firstname = 'Vladimir' AND lastname='Humphrey';
The query will appear in the logs in the same non-redacted form. This means your database logs are a place where you might find sensitive data such as PII and PHI, even though they contain only the queries rather than the results returned from the database. For compliance and security reasons, this will likely require additional controls on access to these logs.

Analyzing the Audit Logs

Having audit logs is one thing, but analyzing them is a different story. For that you’ll need to use one of the following solutions:
  • Amazon CloudWatch: Stream logs here for real-time monitoring, use Logs Insights for SQL-like querying, and set up alarms to detect anomalies.
  • Amazon Athena: For logs stored in S3, use this serverless query engine for ad-hoc and historical analysis. 
  • AWS Lake Formation: This service provides centralized governance and auditing for your data lake, complementing Redshift's logs with a broader view of data access.
  • Third-party platforms: Many SIEMs and observability platforms can ingest and enrich Redshift logs.
Regardless of your chosen analysis method, you may want to anonymize the queries (mask sensitive data). This is not a trivial task, but it ensures the final audit logs are free of sensitive data and can be shared with more teams. When building this process, you also want to make sure the logs are deleted after a defined retention period. This varies with your compliance requirements, but in most cases you will want to store the logs for at least a year.
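As an illustration of the masking step, here is a deliberately naive sketch that replaces string and numeric literals in a logged SQL statement with placeholders. A production anonymizer should use a real SQL parser; regexes miss edge cases such as comments, dollar-quoted strings, and identifiers containing digits:

```python
import re

# Naive sketch: mask literal values in a logged SQL statement.
# This is a starting point only; a real implementation needs a SQL parser.

STRING_LITERAL = re.compile(r"'(?:[^']|'')*'")      # single-quoted strings
NUMBER_LITERAL = re.compile(r"\b\d+(?:\.\d+)?\b")   # bare integers / decimals

def mask_literals(sql: str) -> str:
    """Replace string and numeric literals in a SQL string with placeholders."""
    sql = STRING_LITERAL.sub("'?'", sql)
    return NUMBER_LITERAL.sub("?", sql)

query = "SELECT * FROM users WHERE firstname = 'Vladimir' AND lastname='Humphrey';"
print(mask_literals(query))
# → SELECT * FROM users WHERE firstname = '?' AND lastname='?';
```

Running masking before logs leave your controlled environment means downstream analytics tools never see the raw literal values at all.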

Amazon Redshift Audit Logs in Satori

If you are a Satori customer using Amazon Redshift, you can use our database activity monitoring (DAM) features, which work out of the box and log all activities from all your data platforms (Amazon Redshift and others) in the same place. You can access the audit data in the following ways:
  • Using the Satori user interface
  • Using API calls
  • By connecting our logs so that they’re pushed to your data platform
Some key advantages of using Satori’s DAM over the built-in capabilities of Redshift are:

Works out of the box

The logs are all in one place (there is no need to look both in system tables and in S3) and require no bucket configuration, ETLs, or similar housekeeping. Logs are also automatically redacted to remove any sensitive data from the queries.

Context-rich information

In addition to the queries, the logs contain a lot of other information, such as the usernames, directory groups, number of records pulled, and even metadata about the result set (column names and data classification).

Ability to investigate and create pre-built and custom reports

The audit logs in Satori allow you to filter, analyze and investigate events. You can easily look for activities involving sensitive data, done by specific users, in specific time periods, etc. Then, create custom or out-of-the-box compliance reports in the platform.  

 

Access to all data platforms from one single pane

In addition to your Amazon Redshift audit logs, you can instantly analyze and investigate logs from multiple data platforms. For example, you can see what a user did across your Redshift, Snowflake, and SQL Server environments in one place. Audit logs are only a small subset of Satori's data security platform. To learn more about our full product capabilities, schedule a demo with our team.
About the author

Ben is an experienced tech leader and book author with a background in endpoint security, analytics, and application and data security. Ben has held roles including CTO of Cynet and Director of Threat Research at Imperva. Ben is the Chief Scientist for Satori, the DataSecOps platform.
