kafka-cluster (1.1.1)
Application cookbook which installs and configures Apache Kafka.
Berksfile:
cookbook 'kafka-cluster', '= 1.1.1'
cookbook 'kafka-cluster', '= 1.1.1', :supermarket
Knife:
knife supermarket install kafka-cluster
knife supermarket download kafka-cluster
kafka-cluster-cookbook
Apache Kafka is publish-subscribe messaging rethought as a distributed
commit log. This cookbook takes a simplified approach towards
configuring and installing Apache Kafka.
Note that Apache Zookeeper is a required component of any Apache Kafka
cluster deployment. We have developed a Zookeeper cluster cookbook which
takes the same simplified approach and works seamlessly alongside this
one.
Basic Usage
This cookbook was designed from the ground up to make it dead simple
to install and configure an Apache Kafka cluster using Chef. It also
highlights several of our best practices for developing reusable Chef
cookbooks at Bloomberg.
This cookbook provides [node attributes](attributes/default.rb) which
can be used to fine-tune the default recipe that installs and
configures Kafka. The values from the node attributes are passed
directly into the configuration and service resources.
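For example, a wrapper cookbook might override a handful of these attributes
before including the default recipe. The sketch below is illustrative only:
the attribute path follows the `['kafka-cluster']['config']['properties']`
convention used in the example further down, and the specific broker
properties (log.dirs, num.partitions, log.retention.hours) are ordinary
Kafka settings chosen here as placeholders.

```ruby
# Illustrative attribute overrides in a wrapper cookbook; the property names
# are standard Kafka broker settings shown only as examples.
node.default['kafka-cluster']['config']['properties']['log.dirs'] = '/data/kafka'
node.default['kafka-cluster']['config']['properties']['num.partitions'] = 3
node.default['kafka-cluster']['config']['properties']['log.retention.hours'] = 72

include_recipe 'kafka-cluster::default'
```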
Out of the box the following platforms are certified to work and
are tested using our Test Kitchen configuration. Additional platforms
may work, but your mileage may vary.
- CentOS (RHEL) 6.6, 7.1
- Ubuntu 12.04, 14.04
The correct way to use this cookbook is to create a wrapper cookbook
which configures all of the members of the Apache Kafka cluster. This
includes reading the Zookeeper ensemble (cluster) configuration and passing
that into Kafka as a parameter. In this example we use our Zookeeper Cluster
cookbook to configure the ensemble on the same nodes.
```ruby
# Look up the Zookeeper ensemble configuration for this environment.
bag = data_bag_item('config', 'zookeeper-cluster')[node.chef_environment]

# Configure and converge the Zookeeper ensemble on the same nodes.
node.default['zookeeper-cluster']['config']['instance_name'] = node['ipaddress']
node.default['zookeeper-cluster']['config']['ensemble'] = bag['ensemble']
include_recipe 'zookeeper-cluster::default'

# Derive a unique broker.id from the last octet of the node's IP address and
# build the zookeeper.connect string (host:port pairs plus a /kafka chroot).
node.default['kafka-cluster']['config']['properties']['broker.id'] = node['ipaddress'].rpartition('.').last
node.default['kafka-cluster']['config']['properties']['zookeeper.connect'] = bag['ensemble'].map { |m| "#{m}:2181" }.join(',').concat('/kafka')
include_recipe 'kafka-cluster::default'
```
In the above example the Zookeeper ensemble configuration is read in
from a data bag. This is our suggested method when deploying with our
Zookeeper Cluster cookbook. If you already have a Zookeeper ensemble,
simply format the zookeeper.connect string appropriately.
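For instance, a minimal sketch of pointing the default recipe at a
pre-existing ensemble might look like the following; the hostnames, the
client port, and the /kafka chroot are placeholders for illustration, not
values this cookbook requires.

```ruby
# Hypothetical, pre-existing Zookeeper hosts; replace with your own ensemble.
zookeepers = %w(zk1.example.com zk2.example.com zk3.example.com)

# Join the hosts into a zookeeper.connect string, optionally with a chroot path.
node.default['kafka-cluster']['config']['properties']['zookeeper.connect'] =
  zookeepers.map { |zk| "#{zk}:2181" }.join(',') + '/kafka'

include_recipe 'kafka-cluster::default'
```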
Dependent cookbooks
- java >= 0.0.0
- libartifact ~> 1.3
- poise ~> 2.2
- poise-service ~> 1.0
- selinux >= 0.0.0
- sysctl >= 0.0.0
- ulimit >= 0.0.0
Contingent cookbooks
There are no cookbooks that are contingent upon this one.
Foodcritic Metric
1.1.1 failed this metric
FC031: Cookbook without metadata file: /tmp/cook/def9774ca1e550c8b6d4f2e8/kafka-cluster/metadata.rb:1
FC045: Consider setting cookbook name in metadata: /tmp/cook/def9774ca1e550c8b6d4f2e8/kafka-cluster/metadata.rb:1