Apache Druid 33.0.0 contains over 190 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 44 contributors.
Review the upgrade notes before you upgrade to Druid 33.0.0.
If you are upgrading across multiple versions, see the Upgrade notes page, which lists upgrade notes for the most recent Druid versions.
You can now speed up segment loading on Historicals by listing those servers in the Coordinator dynamic config turboLoadingNodes. For these servers, the Coordinator ignores druid.coordinator.loadqueuepeon.http.batchSize and uses the value of the respective numLoadingThreads instead. Note that putting a Historical in turbo-loading mode might affect query performance, since the segment loading threads consume more resources.
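For example, here's a minimal sketch of enabling turbo loading for a single Historical through the Coordinator dynamic config API; the hostname, port, and Router address are placeholders for your own deployment:

```bash
curl -X POST --header "Content-Type: application/json" \
  -d '{"turboLoadingNodes": ["historical-01.example.com:8083"]}' \
  http://localhost:8888/druid/coordinator/v1/config
```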
You can use the following Overlord compaction APIs to manage compaction status and configs. These APIs work the same regardless of whether compaction supervisors are enabled.
You can now schedule batch ingestion with the MSQ task engine by using the scheduled batch supervisor. Specify the schedule in either standard Unix cron syntax or Quartz cron syntax by setting the type field to unix or quartz, respectively. The Unix syntax also supports macro expressions such as @daily.
Submit your supervisor spec to the /druid/indexer/v1/supervisor endpoint.
The following example scheduled batch supervisor spec submits a REPLACE query every 5 minutes:
```json
{
  "type": "scheduled_batch",
  "schedulerConfig": {
    "type": "unix",
    "schedule": "*/5 * * * *"
  },
  "spec": {
    "query": "REPLACE INTO foo OVERWRITE ALL SELECT * FROM bar PARTITIONED BY DAY"
  },
  "suspended": false
}
```
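Assuming the spec above is saved as scheduled-batch.json (a placeholder name) and a Router is listening on localhost:8888, as in the curl example later in these notes, the submission could look like this:

```bash
curl -X POST --header "Content-Type: application/json" \
  -d @scheduled-batch.json \
  http://localhost:8888/druid/indexer/v1/supervisor
```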
Druid can now use AWS S3 Transfer Manager for S3 uploads, which can significantly reduce segment upload time. This feature is on by default and controlled with the following configs in common.runtime.properties:
You can now use an optional query parameter, skipRestartIfUnmodified, with the /druid/indexer/v1/supervisor endpoint. Set skipRestartIfUnmodified=true to skip restarting the supervisor when the submitted spec is unchanged.
For example:
```bash
curl -X POST --header "Content-Type: application/json" \
  -d @supervisor.json \
  "localhost:8888/druid/indexer/v1/supervisor?skipRestartIfUnmodified=true"
```
Improved the efficiency of streaming ingestion by fetching active tasks from memory. This reduces the number of calls to the metadata store for active datasource task payloads. #16098
The query results API (GET /druid/v2/sql/statements/{queryId}/results) now supports an optional filename parameter. When provided, the response includes a Content-Disposition header that instructs web browsers to save the results as a file instead of displaying them inline.
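For example, a sketch that downloads the results of a completed query as a CSV file; the query ID and filename are placeholders:

```bash
curl -o results.csv \
  "http://localhost:8888/druid/v2/sql/statements/query-abc-123/results?filename=results.csv"
```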
You can now control how many task slots are available for MSQ task engine controller tasks by using the following configs:
| Property | Description | Default value |
|-------|--------------|--------|
| druid.indexer.queue.controllerTaskSlotRatio | (Optional) The proportion of available task slots that can be allocated to MSQ task engine controller tasks. This is a floating-point value between 0 and 1. | null |
| druid.indexer.queue.maxControllerTaskSlots | (Optional) The maximum number of task slots that can be allocated to controller tasks. This is an integer value that defines a hard limit on the number of task slots available for MSQ task engine controller tasks. | null |
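For example, a sketch of Overlord runtime properties that applies both limits; the values are illustrative, not recommendations:

```properties
# Allow MSQ controller tasks to use at most 25% of available task slots
druid.indexer.queue.controllerTaskSlotRatio=0.25
# Hard cap of 10 task slots for MSQ controller tasks
druid.indexer.queue.maxControllerTaskSlots=10
```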
You now configure compaction supervisors with the following Coordinator compaction config:
- useSupervisors: Enable compaction to run as a supervisor on the Overlord instead of as a Coordinator duty.
- engine: Choose between native and msq for running compaction tasks. The msq setting uses the MSQ task engine and can be used only when useSupervisors is true.
Previously, you configured these settings with Overlord runtime properties. Support for those properties has been removed.
You can use the following Overlord APIs to manage compaction:
|Method|Path|Description|Required Permission|
|--------|--------------------------------------------|------------|--------------------|
|GET|/druid/indexer/v1/compaction/config/cluster|Get the cluster-level compaction config|Read configs|
|POST|/druid/indexer/v1/compaction/config/cluster|Update the cluster-level compaction config|Write configs|
|GET|/druid/indexer/v1/compaction/config/datasources|Get the compaction configs for all datasources|Read datasource|
|GET|/druid/indexer/v1/compaction/config/datasources/{dataSource}|Get the compaction config of a single datasource|Read datasource|
|POST|/druid/indexer/v1/compaction/config/datasources/{dataSource}|Update the compaction config of a single datasource|Write datasource|
|GET|/druid/indexer/v1/compaction/config/datasources/{dataSource}/history|Get the compaction config history of a single datasource|Read datasource|
|GET|/druid/indexer/v1/compaction/status/datasources|Get the compaction status of all datasources|Read datasource|
|GET|/druid/indexer/v1/compaction/status/datasources/{dataSource}|Get the compaction status of a single datasource|Read datasource|
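For example, a minimal sketch of switching compaction to supervisors running on the MSQ task engine via the cluster-level config API; the payload shows only the two fields described above, and the Router address is a placeholder:

```bash
curl -X POST --header "Content-Type: application/json" \
  -d '{"useSupervisors": true, "engine": "msq"}' \
  http://localhost:8888/druid/indexer/v1/compaction/config/cluster
```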
Enable segment metadata caching on the Overlord with the runtime property druid.manager.segments.useCache. This feature is off by default.
You can set the property to the following values:
- never: Cache is disabled. This is the default.
- always: Reads are always done from the cache. Service start-up is blocked until the cache has synced with the metadata store at least once, and transactions are blocked until the cache has synced at least once after the service becomes leader.
- ifSynced: Reads are done from the cache only if it has already synced with the metadata store. Unlike always, this mode does not block service start-up or transactions.
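For example, to enable the cache without blocking start-up or transactions:

```properties
# Serve reads from the segment metadata cache once it has synced with the metadata store
druid.manager.segments.useCache=ifSynced
```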
As part of this change, additional metrics have been introduced. For more information about these metrics, see Segment metadata cache metrics.
The Coordinator can optionally issue kill tasks to clean up unused segments. Starting with this release, each kill task processes at most 30 days' worth of segments by default, which improves the performance of individual kill tasks.
The previous behavior (no limit on interval per kill task) can be restored by setting druid.coordinator.kill.maxInterval = P0D.
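For example, in the Coordinator runtime properties:

```properties
# P0D removes the 30-day per-task limit on kill task intervals
druid.coordinator.kill.maxInterval=P0D
```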
Metadata queries now return maxIngestedEventTime, the timestamp of the latest ingested event for the datasource. For realtime datasources, this may be later than MAX(__time) if queryGranularity is being used. For non-realtime datasources, it is equivalent to MAX(__time). #17686
Metadata kill queries are now more efficient: they consider a maximum end time since the last segment was killed. #17770
Newly added segments are loaded more quickly. #17732
You can now configure custom histogram buckets for timer metrics from the Prometheus emitter using the histogramBuckets parameter.
If you don't provide custom buckets, the following defaults are used: [0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0, 30.0, 60.0, 120.0, 300.0]. Similarly, if you don't specify your own JSON mapping file, a default mapping is used.
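For example, a sketch using the Prometheus emitter's druid.emitter.prometheus property prefix; the bucket values here are illustrative:

```properties
# Custom histogram buckets (in seconds) for timer metrics
druid.emitter.prometheus.histogramBuckets=[0.1, 0.5, 1.0, 5.0, 30.0, 120.0]
```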
The Kafka supervisor now includes additional lag metrics that report how much time (in milliseconds) Druid is behind the latest data in the stream, along with a related offset-update timing metric for Kinesis:
|Metric|Description|Dimensions|Normal value|
|------|-----------|----------|------------|
|ingest/kafka/updateOffsets/time|Total time (in milliseconds) taken to fetch the latest offsets from the Kafka stream and the ingestion tasks.|dataSource, taskId, taskType, groupId, tags|Generally a few seconds at most.|
|ingest/kafka/lag/time|Total lag time in milliseconds between the current message sequence number consumed by the Kafka indexing tasks and the latest sequence number in Kafka across all shards. The minimum emission period for this metric is one minute. Emitted only when publishLagTime is set to true in the supervisor config.|dataSource, stream, tags|Greater than 0, up to the maximum Kafka retention period in milliseconds.|
|ingest/kafka/maxLag/time|Max lag time in milliseconds between the current message sequence number consumed by the Kafka indexing tasks and the latest sequence number in Kafka across all shards. The minimum emission period for this metric is one minute. Emitted only when publishLagTime is set to true in the supervisor config.|dataSource, stream, tags|Greater than 0, up to the maximum Kafka retention period in milliseconds.|
|ingest/kafka/avgLag/time|Average lag time in milliseconds between the current message sequence number consumed by the Kafka indexing tasks and the latest sequence number in Kafka across all shards. The minimum emission period for this metric is one minute. Emitted only when publishLagTime is set to true in the supervisor config.|dataSource, stream, tags|Greater than 0, up to the maximum Kafka retention period in milliseconds.|
|ingest/kinesis/updateOffsets/time|Total time (in milliseconds) taken to fetch the latest offsets from the Kinesis stream and the ingestion tasks.|dataSource, taskId, taskType, groupId, tags|Generally a few seconds at most.|
Added the ingest/processed/bytes metric, which tracks the total number of bytes processed by JSON-based batch, SQL-based batch, and streaming ingestion tasks. #17581
useMaxMemoryEstimates is now set to false for MSQ task engine tasks. Additionally, the property has been deprecated and will be removed in a future release. Setting this to false allows for better on-heap memory estimation.
If you're running Druid in Kubernetes, the Docker image now uses the canonical hostname by default to register services in ZooKeeper for internal communication; otherwise, it uses the IP address. #17697
You can set the environment variable DRUID_SET_HOST_IP to 1 to restore the old behavior.
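For example, a hypothetical invocation that restores IP-based registration; the image tag and service name are placeholders:

```bash
docker run -e DRUID_SET_HOST_IP=1 apache/druid:33.0.0 broker
```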
If you need to downgrade to a version where Druid doesn't support the segment metadata cache, you must set druid.manager.segments.useCache to never or remove the config prior to the downgrade.