Amazon Redshift Workload Management (WLM) allows users to manage workload priorities flexibly, so that short, fast-running queries do not get stuck behind long-running queries.
In the next video, let’s hear from our expert, Kamlesh, as he explains how workload management helps Redshift in terms of query performance.
Amazon Redshift Workload Management (WLM) creates query queues at runtime according to service classes, which define the configuration parameters for the various queues, including internal system queues and user-accessible queues.
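To make the service-class-to-queue mapping concrete, here is a small sketch of the ID ranges commonly documented for manual WLM. The ranges are an assumption for illustration; on a live cluster you would verify them against the stv_wlm_service_class_config system table.

```python
# Illustrative map of WLM service-class ID ranges to queue types.
# These ranges are assumptions based on manual WLM conventions; check
# the stv_wlm_service_class_config system table on your own cluster.
service_classes = {
    range(1, 5): "reserved for system use",
    range(5, 6): "superuser queue",
    range(6, 14): "user-defined (manual WLM) queues",
    range(14, 15): "short query acceleration (SQA) queue",
}

def queue_type(service_class_id):
    """Return the queue type for a given service-class ID."""
    for ids, kind in service_classes.items():
        if service_class_id in ids:
            return kind
    return "unknown"

print(queue_type(6))
```

A query routed to service class 6, for example, is running in the first user-defined queue.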
Note:
We have used the word ‘queue’ to describe both a user-accessible service class and a runtime queue for consistency.
Now, suppose you are at a grocery shop that has two types of billing counters.
Case 1: Single Billing Counter
In this case, as illustrated in the image given above, there is only one billing counter, and you are standing in a queue to get your items billed. You are third or fourth in line with only two items, but the person ahead of you has 20 items, and the person ahead of them has 10. This increases your waiting time, as you have to wait until all of their items are billed.
Case 2: Multiple Billing Counters
Unlike Case 1, there are multiple billing counters here, and you can join a queue based on your item count. The image given above illustrates how the billing counters are divided according to the customers’ item counts. In this case, you get your items billed quickly, as the waiting time is shorter than in Case 1.
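The two cases can be sketched as a toy model, where the time before your billing starts is the sum of the items of everyone ahead of you (one time unit per item). The cart sizes are the illustrative numbers from the analogy above.

```python
# Toy model of the billing-counter analogy: your wait is the total
# number of items held by the customers ahead of you.
def wait_time(items_ahead):
    return sum(items_ahead)

# Case 1: a single counter -- a 2-item customer queues behind
# customers with 10 and 20 items.
single_counter_wait = wait_time([10, 20])

# Case 2: counters split by cart size -- the same customer joins an
# express lane holding only other small carts (sizes assumed here).
express_lane_wait = wait_time([3, 2])

print(single_counter_wait, express_lane_wait)  # 30 5
```

The short "query" finishes far sooner once it no longer shares a queue with the large ones, which is exactly what WLM queues achieve for Redshift.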
In a similar manner, you can assign workloads to a Redshift cluster through WLM queues and assign a portion of a compute node’s memory to each queue.
Let’s understand how to configure Workload Management for your Redshift Cluster.
In the video above, our expert has explained the creation of a custom parameter group. You have also learnt how to create a custom queue so that each user who belongs to your group is routed through the custom queue, while users belonging to other groups are routed through the default queue. In the next video, you will learn how to configure the memory percentage and concurrency scaling of the custom queue and the default queue.
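The steps above can be sketched in code: build a WLM configuration with a custom queue keyed on a user group and a default queue, then attach it to a custom parameter group as the wlm_json_configuration parameter. The group name, memory percentages, and concurrency values are assumptions for illustration; the boto3 call requires AWS credentials and a real parameter group.

```python
import json

# A custom queue for members of the "analysts" user group (an assumed
# group name), plus the default queue for everyone else.
custom_queue = {
    "user_group": ["analysts"],
    "query_concurrency": 5,        # slots in the custom queue
    "memory_percent_to_use": 40,   # share of compute-node memory
}
default_queue = {
    "query_concurrency": 5,
    "memory_percent_to_use": 60,   # remainder goes to the default queue
}
wlm_value = json.dumps([custom_queue, default_queue])

def apply_wlm(parameter_group, wlm_json):
    """Attach the WLM JSON to a custom parameter group via boto3
    (needs AWS credentials; parameter_group must already exist)."""
    import boto3
    redshift = boto3.client("redshift")
    redshift.modify_cluster_parameter_group(
        ParameterGroupName=parameter_group,
        Parameters=[{
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": wlm_json,
        }],
    )

print(wlm_value)
```

After associating the parameter group with the cluster and rebooting, queries from "analysts" users run in the custom queue with its 40 percent memory share.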
Finally, you have understood the concept of workload management and how to configure workloads for the default queue and the custom queue. In the next segment, you will learn about fault tolerance and resizing.
Additional Recommended Reading
Workload Management Classification
Short Query Acceleration
Note:
It is highly recommended that you go through these additional reading links carefully, as questions may be asked on these concepts.