
Extensions

Druid implements an extension system that allows for adding functionality at runtime. Extensions are commonly used to add support for deep storages (like HDFS and S3), metadata stores (like MySQL and PostgreSQL), new aggregators, new input formats, and so on.

Production clusters will generally use at least two extensions: one for deep storage and one for a metadata store. Many clusters will also use additional extensions.

Core extensions

Core extensions are maintained by Druid committers.

| Name | Description | Docs |
|------|-------------|------|
| druid-avro-extensions | Support for data in Apache Avro data format. | link |
| druid-azure-extensions | Microsoft Azure deep storage. | link |
| druid-basic-security | Support for Basic HTTP authentication and role-based access control. | link |
| druid-bloom-filter | Support for providing Bloom filters in Druid queries. | link |
| druid-datasketches | Support for approximate counts and set operations with Apache DataSketches. | link |
| druid-google-extensions | Google Cloud Storage deep storage. | link |
| druid-hdfs-storage | HDFS deep storage. | link |
| druid-histogram | Approximate histograms and quantiles aggregator. Deprecated; use the DataSketches quantiles aggregator from the druid-datasketches extension instead. | link |
| druid-kafka-extraction-namespace | Apache Kafka-based namespaced lookup. Requires the namespace lookup extension. | link |
| druid-kafka-indexing-service | Supervised exactly-once Apache Kafka ingestion for the indexing service. | link |
| druid-kinesis-indexing-service | Supervised exactly-once Kinesis ingestion for the indexing service. | link |
| druid-kerberos | Kerberos authentication for Druid processes. | link |
| druid-lookups-cached-global | A module providing JVM-global eager caching for lookups. It provides JDBC and URI implementations for fetching lookup data. | link |
| druid-lookups-cached-single | Per-lookup caching module to support use cases where a lookup needs to be isolated from the global pool of lookups. | link |
| druid-multi-stage-query | Support for the multi-stage query architecture for Apache Druid and the multi-stage query task engine. | link |
| druid-orc-extensions | Support for data in Apache ORC data format. | link |
| druid-parquet-extensions | Support for data in Apache Parquet data format. Requires druid-avro-extensions to be loaded. | link |
| druid-protobuf-extensions | Support for data in Protobuf data format. | link |
| druid-ranger-security | Support for access control through Apache Ranger. | link |
| druid-s3-extensions | Interfacing with data in Amazon S3, and using S3 as deep storage. | link |
| druid-ec2-extensions | Interfacing with AWS EC2 for autoscaling middle managers. | UNDOCUMENTED |
| druid-aws-rds-extensions | Support for AWS token-based access to AWS RDS DB clusters. | link |
| druid-stats | Statistics-related module including variance and standard deviation. | link |
| mysql-metadata-storage | MySQL metadata store. | link |
| postgresql-metadata-storage | PostgreSQL metadata store. | link |
| simple-client-sslcontext | Simple SSLContext provider module used by Druid's internal HttpClient when talking to other Druid processes over HTTPS. | link |
| druid-pac4j | OpenID Connect authentication for Druid processes. | link |
| druid-kubernetes-extensions | Druid cluster deployment on Kubernetes without ZooKeeper. | link |

Community extensions

info

Community extensions are not maintained by Druid committers, although we accept patches from community members using these extensions. They may not have been as extensively tested as the core extensions.

A number of community members have contributed their own extensions to Druid that are not packaged with the default Druid tarball. If you'd like to take on maintenance for a community extension, please post on dev@druid.apache.org to let us know!

All of these community extensions can be downloaded using pull-deps with the -c coordinate option set to org.apache.druid.extensions.contrib:{EXTENSION_NAME}:{DRUID_VERSION}.
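For example, to pull one of the extensions in the table below (druid-redis-cache is used here purely as an illustration; substitute the extension name and your Druid version), the invocation mirrors the pull-deps command shown under Loading community extensions:

java \
-cp "lib/*" \
-Ddruid.extensions.directory="extensions" \
-Ddruid.extensions.hadoopDependenciesDir="hadoop-dependencies" \
org.apache.druid.cli.Main tools pull-deps \
--no-default-hadoop \
-c "org.apache.druid.extensions.contrib:druid-redis-cache:{DRUID_VERSION}"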

| Name | Description | Docs |
|------|-------------|------|
| aliyun-oss-extensions | Aliyun OSS deep storage. | link |
| ambari-metrics-emitter | Ambari Metrics emitter. | link |
| druid-cassandra-storage | Apache Cassandra deep storage. | link |
| druid-cloudfiles-extensions | Rackspace Cloudfiles deep storage. | link |
| druid-compressed-bigdecimal | Compressed Big Decimal type. | link |
| druid-ddsketch | Support for approximate quantiles based on DDSketch. | link |
| druid-deltalake-extensions | Support for ingesting Delta Lake tables. | link |
| druid-distinctcount | DistinctCount aggregator. | link |
| druid-iceberg-extensions | Support for ingesting Iceberg tables. | link |
| druid-redis-cache | A cache implementation for Druid based on Redis. | link |
| druid-time-min-max | Min/Max aggregator for timestamp. | link |
| sqlserver-metadata-storage | Microsoft SQL Server metadata store. | link |
| graphite-emitter | Graphite metrics emitter. | link |
| statsd-emitter | StatsD metrics emitter. | link |
| kafka-emitter | Kafka metrics emitter. | link |
| druid-thrift-extensions | Support for Thrift ingestion. | link |
| druid-opentsdb-emitter | OpenTSDB metrics emitter. | link |
| materialized-view-selection, materialized-view-maintenance | Materialized views. | link |
| druid-moving-average-query | Support for Moving Average and other aggregate window functions in Druid queries. | link |
| druid-influxdb-emitter | InfluxDB metrics emitter. | link |
| druid-momentsketch | Support for approximate quantile queries using the momentsketch library. | link |
| druid-tdigestsketch | Support for approximate sketch aggregators based on T-Digest. | link |
| gce-extensions | GCE extensions. | link |
| prometheus-emitter | Exposes Druid metrics for Prometheus server collection (https://prometheus.io/). | link |
| druid-kubernetes-overlord-extensions | Support for launching tasks in Kubernetes without Middle Managers. | link |
| druid-spectator-histogram | Support for efficient approximate percentile queries. | link |
| druid-rabbit-indexing-service | Support for creating and managing RabbitMQ indexing tasks. | link |

Promoting community extensions to core extensions

Please post on dev@druid.apache.org if you'd like an extension to be promoted to core. If we see that a community extension is actively supported, we can promote it to core based on community feedback.

For information on how to create your own extension, see here.

Loading extensions

Loading core extensions

Apache Druid bundles all core extensions out of the box. See the list of extensions for your options. You can load bundled extensions by adding their names to the druid.extensions.loadList property in your common.runtime.properties file. For example, to load the postgresql-metadata-storage and druid-hdfs-storage extensions, use the configuration:

druid.extensions.loadList=["postgresql-metadata-storage", "druid-hdfs-storage"]

These extensions are located in the extensions directory of the distribution.

info

Druid bundles two sets of configurations: one for the quickstart and one for a clustered configuration. Make sure you are updating the correct common.runtime.properties for your setup.
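If you are unsure which copies of common.runtime.properties your distribution contains (the exact directory layout under conf/ varies by release), you can list them from the distribution root:

find conf -name common.runtime.properties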

info

Because of licensing, the mysql-metadata-storage extension does not include the required MySQL JDBC driver. For instructions on how to install this library, see the MySQL extension page.
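As a rough sketch of the install step (the jar file name below is illustrative; see the MySQL extension page for the supported Connector/J version and download location), you copy the driver into the extension's directory under the distribution root:

cp /path/to/mysql-connector-j-<version>.jar extensions/mysql-metadata-storage/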

Loading community extensions

You can also load community and third-party extensions that are not bundled with Druid. To do this, first download the extension, then install it into your extensions directory. You can download extensions directly from their distributors, or, if they are available from Maven, use the included pull-deps tool to download them. To use pull-deps, specify the full Maven coordinate of the extension in the form groupId:artifactId:version. For example, for the (hypothetical) extension com.example:druid-example-extension:1.0.0, run:

java \
-cp "lib/*" \
-Ddruid.extensions.directory="extensions" \
-Ddruid.extensions.hadoopDependenciesDir="hadoop-dependencies" \
org.apache.druid.cli.Main tools pull-deps \
--no-default-hadoop \
-c "com.example:druid-example-extension:1.0.0"

You only have to install the extension once. Then, add "druid-example-extension" to druid.extensions.loadList in common.runtime.properties to instruct Druid to load the extension.
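Continuing the earlier loadList example, the updated property would look like this (with the hypothetical druid-example-extension appended to the extensions you already load):

druid.extensions.loadList=["postgresql-metadata-storage", "druid-hdfs-storage", "druid-example-extension"]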

info

Please make sure all the extensions-related configuration properties listed here are set correctly.

info

The Maven groupId for almost every community extension is org.apache.druid.extensions.contrib. The artifactId is the name of the extension, and the version is the latest Druid stable version.

Loading extensions from the classpath

If you add your extension jar to the classpath at runtime, Druid will also load it into the system. This mechanism is relatively easy to reason about, but Druid makes no provision for class loader isolation when extensions are loaded this way, so you must ensure that all jars on your classpath are mutually compatible.
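As a sketch of what this means in practice (the jar path is hypothetical, reusing the example extension from above, and it omits the JVM flags and configuration directories a real deployment would also put on the classpath), the extension jar is simply appended to the -cp value used to launch a Druid process:

java \
-cp "lib/*:/path/to/druid-example-extension-1.0.0.jar" \
org.apache.druid.cli.Main server historical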