hadoop 0.8.0 (one of 72 published versions)

Installs/Configures Hadoop (HDFS/YARN/MRv2), HBase, Hive, Flume, Oozie, Pig, Spark, Storm, Tez, and ZooKeeper

Policyfile:

  cookbook 'hadoop', '= 0.8.0', :supermarket

Berkshelf:

  cookbook 'hadoop', '= 0.8.0'

Knife:

  knife supermarket install hadoop
  knife supermarket download hadoop
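
For context, the Berkshelf line above would normally sit in a Berksfile that also declares a cookbook source. A minimal sketch follows; the source URL shown is the public Supermarket, assumed here for illustration rather than taken from this page:

  # Berksfile -- minimal sketch; assumes the public Supermarket as the cookbook source
  source 'https://supermarket.chef.io'

  # pin the hadoop cookbook to the version shown on this page
  cookbook 'hadoop', '= 0.8.0'

Running berks install against a Berksfile like this would resolve the hadoop cookbook together with its java dependency (see the README below) into the local Berkshelf.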
README
= DESCRIPTION:

Installs Apache Hadoop and sets up a basic distributed cluster per the quick start documentation.

= REQUIREMENTS:

== Platform:

Tested on Ubuntu 8.10, though it should work on most Linux distributions; see hadoop[:java_home].

== Cookbooks:

Opscode cookbooks, http://github.com/opscode/cookbooks/tree/master:

* java

= ATTRIBUTES:

* hadoop[:mirror_url] - Get a mirror from http://www.apache.org/dyn/closer.cgi/hadoop/core/.
* hadoop[:version] - Specify the version of Hadoop to install.
* hadoop[:uid] - Default user ID of the hadoop user.
* hadoop[:gid] - Default group for the hadoop user.
* hadoop[:java_home] - You will probably want to change this to match where Java is installed on your platform.

You may wish to add more attributes for tuning the configuration file templates.

= USAGE:

This cookbook performs the tasks described in the Hadoop Quick Start[1] to get the software installed. You should copy this to a site-cookbook and modify the templates to meet your requirements.

Once the recipe is run, the distributed filesystem can be formatted using the script /usr/bin/hadoop:

  sudo -u hadoop /usr/bin/hadoop namenode -format

You may need to set up SSH keys for hadoop management commands. Note that this is not the 'default' config per se, so using the start-all.sh script won't start the processes, because the config files live elsewhere.

For running the various Hadoop processes as services, we suggest runit. A sample 'run' script is provided. The HADOOP_LOG_DIR in the run script must exist for each process. These could be wrapped in a define.

* datanode
* jobtracker
* namenode
* tasktracker

[1] http://hadoop.apache.org/core/docs/current/quickstart.html

= LICENSE and AUTHOR:

Author:: Joshua Timberman (<joshua@opscode.com>)
Copyright:: 2009, Opscode, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
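
As a hedged sketch of how the ATTRIBUTES listed in the README might be set, they could be overridden from a Chef role file such as the following. The role name, run list, and every value here are illustrative assumptions, not defaults shipped by the cookbook:

  # roles/hadoop.rb -- illustrative sketch only; names and values are assumptions
  name "hadoop"
  description "Nodes that run the hadoop cookbook"
  run_list "recipe[hadoop]"
  override_attributes(
    "hadoop" => {
      "version"    => "0.20.2",                                       # assumed Hadoop release
      "mirror_url" => "http://archive.apache.org/dist/hadoop/core/",  # assumed mirror
      "java_home"  => "/usr/lib/jvm/java-6-openjdk",                  # adjust to the platform's JVM path
      "uid"        => 300,                                            # assumed uid for the hadoop user
      "gid"        => 300                                             # assumed gid for the hadoop group
    }
  )

Uploading the role (for example with knife role from file roles/hadoop.rb) and adding it to a node's run list would apply these overrides before the recipe runs.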
