
confluent-platform 3.2.0 (18 versions available)

Install and configure Confluent platform (Kafka)

cookbook 'confluent-platform', '= 3.2.0'
cookbook 'confluent-platform', '= 3.2.0', :supermarket
knife supermarket install confluent-platform
knife supermarket download confluent-platform
Quality 63%

Confluent Platform


Apache Kafka, an open source technology created and maintained by the founders of Confluent, acts as a real-time, fault-tolerant, highly scalable messaging system. It is widely adopted for use cases such as collecting user activity data, logs, application metrics, stock ticker data, and device instrumentation. Its key strength is its ability to make high-volume data available as a real-time stream for consumption in systems with very different requirements: from batch systems like Hadoop, to real-time systems that require low-latency access, to stream processing engines that transform the data streams as they arrive.

This infrastructure lets you build around a single central nervous system transmitting messages to all the different systems and applications within your company. Learn more on the Confluent website.

This cookbook focuses on deploying Confluent Platform elements on your clusters via Chef on systemd managed distributions. At the moment, this includes Kafka, Kafka Connect, Schema Registry and Kafka Rest.


Cookbooks and gems

Declared in [metadata.rb](metadata.rb) and in [Gemfile](Gemfile).


Platforms

A systemd-managed distribution:

  • RHEL family 7, tested on CentOS

Note: it should work fine on Debian 8+ with some attribute tuning.


Easy Setup

The default recipe does nothing. Each service (Kafka, Kafka Connect, Schema Registry, or Kafka Rest) is installed by calling its respective recipe: [install_kafka](recipes/install_kafka.rb), [install_connect](recipes/install_connect.rb), [install_registry](recipes/install_registry.rb), and [install_rest](recipes/install_rest.rb).

By default, this cookbook installs OpenJDK from the official repositories (OpenJDK 8 on CentOS 7) in the service recipes, just before launching the service. You can deactivate this behavior by setting node['confluent-platform']['java'] to "", or choose your own package by setting the package name in node['confluent-platform']['java'][node[:platform]].
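As a sketch of the behavior described above (illustrative only: resolve_java_package is a hypothetical helper, not cookbook code; the attribute keys mirror this README):

```ruby
# Illustrative model of the java-package selection described above.
# resolve_java_package is a hypothetical helper, not cookbook code.
def resolve_java_package(node)
  java = node['confluent-platform']['java']
  return nil if java == '' # "" deactivates java installation entirely
  java[node['platform']]   # platform-specific package name
end

node = {
  'confluent-platform' => {
    # per-platform package names; OpenJDK 8 headless on CentOS 7
    'java' => { 'centos' => 'java-1.8.0-openjdk-headless' }
  },
  'platform' => 'centos'
}

puts resolve_java_package(node)         # java-1.8.0-openjdk-headless

node['confluent-platform']['java'] = '' # deactivate installation
puts resolve_java_package(node).inspect # nil
```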

The recommended way to use this cookbook is to create a different role per cluster, that is, a role for Kafka (probably including Kafka Connect), one for Schema Registry, and one for Kafka Rest. This enables the search-by-role feature, allowing simple service discovery.

In fact, there are two ways to configure the search:

  1. with a static configuration, through a list of hostnames (hosts attributes, as in ['confluent-platform']['kafka']['hosts'])
  2. with a real search, performed on a role (role and size attributes, as in ['confluent-platform']['kafka']['role']). The role should be in the run-list of all nodes of the cluster. The size is a safety check and should be the number of nodes in the cluster.

If hosts is configured, role and size are ignored.
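The precedence between the two modes can be sketched as follows (a minimal illustration; discover_hosts and its search_result argument are hypothetical stand-ins for the cluster-search mechanics):

```ruby
# Hypothetical sketch of the discovery precedence described above:
# a non-empty static hosts list wins; otherwise a role-based search
# is used, with size acting as a safety check.
def discover_hosts(conf, search_result)
  hosts = conf['hosts']
  return hosts if hosts && !hosts.empty? # static config: role/size ignored

  # search_result stands in for the nodes found by searching on conf['role']
  if conf['size'] && search_result.size != conf['size']
    raise "expected #{conf['size']} nodes, found #{search_result.size}"
  end
  search_result
end

static = { 'hosts' => %w[kafka-01 kafka-02], 'role' => 'kafka', 'size' => 3 }
p discover_hosts(static, [])  # static list returned, role/size ignored

dynamic = { 'hosts' => [], 'role' => 'kafka', 'size' => 2 }
p discover_hosts(dynamic, %w[kafka-01 kafka-02])  # search result, size checked
```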

See [roles](test/integration/roles) for some examples and Cluster Search documentation for more information.

Zookeeper Cluster

To properly install a Kafka cluster, you need a Zookeeper cluster. This is outside the scope of this cookbook, but if you need one, you should consider using Zookeeper Platform.

The configuration of Zookeeper hosts uses search and is done in the same way as for Kafka, Kafka Connect, Schema Registry, and Kafka Rest hosts, i.e. with a static list of hostnames or with a search on a role.


Tests

This cookbook is fully tested through the installation of the full platform on Docker hosts. This uses kitchen, docker, and some monkey-patching.

If you run kitchen list, you will see 5 suites:

  • zookeeper-centos-7
  • kafka-01-centos-7
  • kafka-02-centos-7
  • registry-01-centos-7
  • rest-01-centos-7

Each corresponds to a different node in the cluster. They are connected through a bridge network named kitchen, which is created if necessary.

For more information, see [.kitchen.yml](.kitchen.yml) and [test](test) directory.

Local cluster

Of course, the cluster you install by running kitchen converge is fully working, so you can use it as a local cluster to test your developments (like a new Kafka client). Moreover, compared to the single-node cluster usually installed on workstations, it lets you detect partition/timing/fault-tolerance issues that the simplicity of a single-node system would hide.

You can access it by using internal DNS of the docker network named kitchen or by declaring each node in your hosts file. You can get each IP by running:

docker inspect --format \
  '{{ .NetworkSettings.Networks.kitchen.IPAddress }}' container_name

Then to produce some messages (broker_ip and zookeeper_ip below are placeholders for the container IPs obtained above):

kafka-console-producer \
  --broker-list broker_ip:9092 \
  --topic my_topic

And to read them:

kafka-console-consumer \
  --zookeeper zookeeper_ip:2181 \
  --topic my_topic \
  --from-beginning

Or you can use the REST API, with full Schema Registry support.


Configuration

Configuration is done by overriding default attributes. All configuration keys have a default defined in [attributes/default.rb](attributes/default.rb). Please read it to get a comprehensive view of what you can configure and how.
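For example, a role overriding the defaults might look like this (illustrative only: the role name is hypothetical, and only the ['confluent-platform']['kafka']['hosts'] key and the install_kafka recipe name come from this README; check attributes/default.rb for the real keys):

```ruby
require 'json'

# Hypothetical role for a Kafka cluster, expressed as the hash a Chef
# role JSON file would contain. Attribute keys are illustrative.
role = {
  'name' => 'kafka-cluster',
  'run_list' => ['recipe[confluent-platform::install_kafka]'],
  'default_attributes' => {
    'confluent-platform' => {
      'kafka' => {
        'hosts' => %w[kafka-01 kafka-02] # static discovery, no search
      }
    }
  }
}

puts JSON.pretty_generate(role)
```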



Recipes

default

Does nothing.


repository

Configure the Confluent repository.


install_kafka, install_connect, install_registry, install_rest

Install and fully configure a given service by running repository, java, and then its four dedicated recipes in order: package, user, config, and service.

install_kafka also includes the kafka_topics recipe, which calls the topic resource based on attributes; install_connect includes connect_connectors, which manages connectors via the connector resource and attribute definitions.


package

Install the given service from the Confluent repository.


user

Create the given service's system user and group.


config

Generate the service configuration. May search for dependencies (like Zookeeper or other nodes of the same cluster) with the help of the cluster-search cookbook.


service

Install the systemd unit for the given service, then enable and start it.

Note: these recipes install a java package (OpenJDK 8 on CentOS 7) by default; this can be disabled by setting node['confluent-platform']['java'] to "". A platform-specific package name can also be configured.



Resources

connector

Create, update, delete, pause, resume, or restart a Kafka Connect connector.


topic

Create or delete a Kafka topic. Currently, it cannot update an existing topic's configuration (such as its replication).
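The create-or-delete (but never update) behavior can be sketched as a simple reconciliation (illustrative only; topic_actions is not part of the cookbook):

```ruby
# Hypothetical sketch of the topic reconciliation described above:
# topics are created or deleted, but existing topics are left
# untouched (their configuration is never updated).
def topic_actions(existing, desired)
  { create: desired - existing, delete: existing - desired }
end

p topic_actions(%w[logs metrics], %w[metrics events])
```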




Contributing

Please read the contribution guide carefully before making a merge request.

License and Author

Copyright (c) 2015-2016 Sam4Mobile, 2017-2018

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Dependent cookbooks

cluster-search >= 0.0.0

Contingent cookbooks

There are no cookbooks that are contingent upon this one.




Changelog

  • feat: can manage package upgrade with a version
    • Each component can have a version specified (rpm version). By using "latest" (the default), it will always upgrade to the latest available version.
  • feat: support Kafka Connect & connector resource
    • Add support for Kafka Connect running in distributed mode (not stand-alone). Connectors are installed via Confluent Hub.
    • Add connector resource which can create/update/delete/pause/resume/restart a connector.
  • feat: add topic resource to create/delete topics (but not update)
  • fix: transform custom service resource to library
    • This fixes the "auto-restart" feature. To stay consistent with the previous (buggy) behavior, "auto_restart" configurations are set to false by default.


  • test: show unit logs when waiting for a service


  • fix: restart services only if previously started


Breaking changes: (mostly a fix for 3.0.0)

  • default users and groups are different (e.g. cp-kafka instead of kafka)
  • change log directory to /var/log/confluent as in the package


  • set default users/groups to the ones from the packages


Breaking changes:

  • attributes 'rest'/'brokers_protocol' and 'rest'/'brokers_port' are replaced by 'kafka'/'protocol' and 'kafka'/'port'
  • some default configuration values were updated reflecting the latest defaults in confluent 5.0
  • all schema-registry must be shutdown when switching on 'kafkastore.bootstrap.servers' configuration


  • feat: use 5.0 & adapt all config by default
  • feat: lazy evaluation of configuration variables
  • feat: use brokers for bootstrap in schema-registry


  • feat: use sensitive in templates
  • style: fix rubocop EmptyLineAfterGuardClause
  • chore: add supermarket category in .category



  • fix: configure ZK port for rest & schema config
  • doc: set a better cookbook description



  • feat: set confluent default version to 4.1.0
  • fix(debian): update apt repository config
  • feat: add bootstrap.servers support in rest proxy (#6)
  • feat: make Zookeeper port configurable (#7)
  • feat: sort config files by keys


  • test: include .gitlab-ci.yml from test-cookbook
  • test: replace deprecated require_chef_omnibus
  • test: increase timeouts for shared runners


  • doc: use doc in git message instead of docs
  • style(rubocop): fix Lint/MissingCopEnableDirective
  • style(rubocop): remove useless deactivation
  • chore: add 2018 to copyright notice
  • chore: set generic maintainer & helpdesk email


Breaking changes:

  • drop support for Kafka < 0.9.0


  • feat: use auto-generated broker.id by default
    • Stop using search to set broker.id. Set it to -1 by default to let Kafka generate it itself.
    • You can override this behavior by setting kafka/config/broker.id to the ID you want for each node.
    • Note: setting the broker ID to -1 should not affect an existing cluster that is already running. Auto broker ID generation is only used when there is no known broker ID; if a node had a previously assigned ID, it will keep it.


  • docs: minor fix on kitchen suites description
  • test: revert condition on molinillo to be < 0.6.0



  • fix: do not try to create a nil directory (i.e. the kafka.logs.dir key is no longer required in the kafka/log4j configuration)


  • style(rubocop): fix latest offences, mostly heredoc delimiters



  • set confluent default version to 3.3.0
  • fix(Chef 13): do not set retries if package_retries is nil
  • fix #2: setting java to "" works as expected
  • fix #3: nil error when search return to wait nodes


  • force molinillo to be < 0.6.0 to fix tests
  • fix condition to restart a service in tests
  • use .gitlab-ci.yml template [20170731]
  • strengthen rest test by parsing JSON


  • change default size for search
  • set new contributing guide with karma style



  • Hand over maintenance to
  • Use confluent 3.2.1 by default, fix repository
  • Fix metadata: license and set correct chef_version (12.14)
  • Remove yum cookbook dependency
  • Refactoring services, use systemd_unit resource
    • Factorize code by using a custom resource


  • Set build_pull & always_update in tests config
  • Fix destroy in tests, stop converge in verify
  • Use latest template for .gitlab-ci.yml [20170405]
  • Fix #1: Fix kitchen tests (nondeterministic)
  • Reduce memory usage for tests
  • Make tests work in Gitlab CI shared runners


  • Fix misc rubocop offenses
  • Use cookbook_name alias everywhere



  • Default confluent version to install is set to 3.0
    • Scala version to install is set to 2.11
    • Mandatory option ssl.client.auth is added to registry config
  • Make Systemd unit path configurable


  • Start Continuous Integration with gitlab-ci
  • Add security opts for docker, add package retries
  • Remove sleep in recipes, wait to strengthen tests



  • Switch to confluent 2.0
  • Rename recipes to respect rubocop rules (breaking change)


  • Switch to docker_cli, use prepared docker image
    • Switch kitchen driver from docker to docker_cli
    • Use sbernard/centos-systemd-kitchen image instead of bare centos
    • Remove privileged mode :)
    • Remove some now useless monkey patching
    • Remove dnsdock, use docker DNS (docker >= 1.10)
    • Use "kitchen" network, create it if needed


  • Fix all rubocop offenses
  • Use specific name for resources to avoid cloning
  • Add more details on configuration in README



  • Clarify and fix JVM options for services
  • Use to_hash instead of dup to work on node values
  • Improve readability of default system user names


  • Fix and clean the creation of Kafka work directories
  • Fix zookeeper.connect chroot path


  • Rationalize docker provision to limit images
  • Fix typo in roles/rest-kitchen.json name
  • Wait 15s after registry start to strengthen tests


  • Reorganize README:
    • Move changelog from README to CHANGELOG
    • Move contribution guide to
    • Reorder README, fix Gemfile missing
  • Add Apache 2 license file
  • Add missing chefignore
  • Fix long lines in rest and registry templates


  • Cleaning, use only dependencies from supermarket


  • Set java-1.8.0-openjdk-headless as default java package


  • Initial version with Centos 7 support

Collaborator Number Metric

3.2.0 passed this metric

Contributing File Metric

3.2.0 failed this metric

Failure: To pass this metric, your cookbook metadata must include a source url, the source url must be in the form of, and your repo must contain a file

Foodcritic Metric

3.2.0 passed this metric

No Binaries Metric

3.2.0 passed this metric

Publish Metric

3.2.0 passed this metric

Supported Platforms Metric

3.2.0 passed this metric

Testing File Metric

3.2.0 failed this metric

Failure: To pass this metric, your cookbook metadata must include a source url, the source url must be in the form of, and your repo must contain a file

Version Tag Metric

3.2.0 failed this metric

Failure: To pass this metric, your cookbook metadata must include a source url, the source url must be in the form of, and your repo must include a tag that matches this cookbook version number