veritas_infoscale cookbook (version 0.1.0)

Provides the infoscale_deploy and infoscale_configure resources for Veritas InfoScale deployment.


You can now deploy and configure the Veritas InfoScale product suite in your environment using Chef. Chef's powerful automation platform transforms infrastructure into code and allows you to automate various deployment and configuration operations performed in Veritas InfoScale.

This readme provides information about using the veritas_infoscale cookbook. To perform any operation in Veritas InfoScale using Chef, review the prerequisites and then complete the following high-level steps:

  • Prerequisites
  • Step 1: Downloading the Veritas InfoScale cookbook from the supermarket
  • Step 2: Specifying operations in a recipe file
  • Step 3: Creating roles and defining attributes
  • Step 4: Running the Chef-client on the Chef node

Note that basic knowledge of Chef and the JavaScript Object Notation (JSON) format is required to perform the procedures in this readme.

Supported Chef version

The procedure in this readme is supported with Chef Client 12.5.1 and later versions.

Supported platforms

You can use the veritas_infoscale cookbook to perform operations on Veritas InfoScale 7.0 and later versions. The following platforms are supported by the veritas_infoscale cookbook:

  • Linux
  • Solaris
  • AIX

Prerequisites

Ensure that the following prerequisites are met in your environment:

  • Passwordless communication is established between all servers in a cluster. Tip: You can use the pl utility to set up the SSH and RSH connections automatically.
  • Veritas InfoScale installation files are downloaded on the chef node where chef-client is run.
  • (Only for Veritas InfoScale version 7.4 and later) All required Veritas InfoScale license files (with an .slf extension) are downloaded on a local server. Contact the customer care center for your region to procure an applicable .slf license key file. Refer to the following link for contact information of the customer care center for your region:
  • The veritas_infoscale cookbook uses response files to perform all operations such as installation, upgrade, patch upgrade, and so on. Before using the veritas_infoscale cookbook, ensure that you have read and understood all prerequisites that are applicable while performing those operations with response files. For more information about using response files, refer to the following guides from the Veritas Documentation Library:
    • Veritas InfoScale Installation Guide - Linux
    • Storage Foundation Configuration and Upgrade Guide - Linux
    • Cluster Server Configuration and Upgrade Guide - Linux
    • Storage Foundation and High Availability Configuration and Upgrade Guide - Linux
    • Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Linux
    • Storage Foundation for Oracle RAC Configuration and Upgrade Guide - Linux
    • Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide - Linux
  • Ensure that you read through and agree with the End User License Agreement in the PDF file (located in the installation media) before proceeding with any operation in Veritas InfoScale using Chef.

Step 1: Downloading the Veritas InfoScale cookbook from the supermarket

You can download the veritas_infoscale cookbook from the Chef supermarket to use its resources.

To download the Veritas InfoScale cookbook from the public Supermarket

  1. To see a list of all community cookbooks available from the Supermarket, run the following command: knife cookbook site list
  2. Search for the veritas_infoscale cookbook using the following command: knife cookbook site search veritas_infoscale
  3. Download the veritas_infoscale cookbook using the following command: knife cookbook site download veritas_infoscale
  4. The cookbook is downloaded as a tar.gz package. Extract the package using the following command: tar xvzf veritas_infoscale.tar.gz

If you are in an air-gapped environment, you can download the cookbook from the Chef Supermarket website and physically transfer it to your local network using removable storage. Alternatively, refer to the Chef documentation for steps on setting up a private supermarket in your local network.

Important: After you have downloaded and extracted the veritas_infoscale cookbook, ensure that you upload the cookbook from the current working directory in the chef repository to the Chef server. Use the following command:

knife cookbook upload veritas_infoscale

For more information about uploading cookbooks, see the Chef documentation.

Step 2: Specifying operations in a recipe file

Before you run the Chef client on a Chef node, you must specify the operations you want to perform in the recipe file you are using. The following resources are shipped in the veritas_infoscale cookbook:

  • infoscale_deploy
  • infoscale_configure

Depending on which operations you want to perform, enter the required values in the action block within the infoscale_deploy or infoscale_configure resource.

  • install (infoscale_deploy): Installs Veritas InfoScale products in your environment.
  • configure (infoscale_deploy): Configures the various components used by Veritas InfoScale products.
  • fullupgrade (infoscale_deploy): Upgrades Veritas InfoScale products.
  • patch_upgrade (infoscale_deploy): Upgrades Veritas InfoScale products to a particular patch.
  • uninstall (infoscale_deploy): Uninstalls Veritas InfoScale products.
  • rollingupgrade (infoscale_deploy): Performs a rolling upgrade for highly available clusters.
  • license (infoscale_deploy): Manages Veritas InfoScale licenses.
  • security (infoscale_configure): Configures security in a Veritas InfoScale environment.
  • fencing (infoscale_configure): Configures I/O fencing to prevent data corruption in the event of a communication failure. Note that only disk-based, majority-based, and disabled fencing can be configured using Chef.

Syntax:

<resource_name> '<instance_name>' do
  action [ :<value_1>, :<value_2>, ... :<value_n> ]
end

Replace the following variables in the above syntax:

  • <resource_name> is the name of the resource used by the operation.

  • <value_1>, <value_2>, ... <value_n> are n number of values corresponding to the operations you want to perform. The operations will be performed in the sequence in which the values are entered.

  • <instance_name> is the name of the resource instance. You can provide any instance name for the resource.

Example 1:

infoscale_deploy 'Instance1' do
  action [ :install, :configure ]
end

Example 2:

infoscale_configure 'Instance' do
  action [ :security ]
end
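A single recipe can also chain operations across both resources. The following is a minimal sketch (the instance names and action order are illustrative, not prescribed by the cookbook):

```ruby
# Sketch of a recipe that installs and configures the product, then
# applies security and fencing settings. Actions run in the order listed.
infoscale_deploy 'Instance1' do
  action [ :install, :configure ]
end

infoscale_configure 'Instance1' do
  action [ :security, :fencing ]
end
```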

Step 3: Creating roles and defining attributes

Chef roles can be used to perform operations to manage cluster configurations in a Veritas InfoScale environment. A separate role (specifying its run list and override attributes) must be created for each cluster configuration in your environment. When a role is run against a node, the configuration details of that node are compared against the attributes of the role, and then the contents of that role's run-list are applied to the node's configuration details.

To create a role using the Chef Manage interface

  1. Log on to the Chef server with Administrator privileges.
  2. Click on the Policy tab at the top of the page.
  3. Expand Role in the left pane and click Create. The Create Role dialog box appears.
  4. Enter a name and description to help you easily identify the role.
  5. Click Next.
  6. Search for the recipe you are using in the Available Recipes list.
  7. Drag and drop the recipe you are using to the Create Run List box so that the recipe is run whenever the Chef client is invoked using this role. Ensure that you have customized the default recipe file to perform the desired operations in your environment (see Specifying operations in a recipe file).
  8. Click Next.
  9. You do not need to enter default attributes in the role, because the default values for Veritas InfoScale operations are pre-defined in the attribute file of the default.rb recipe.
  10. Click Next.
  11. Enter the override attributes for the role in JSON format. Refer to the Defining attributes in a role section for a list of the attributes used in a Veritas InfoScale operation.
  12. Click Create Role.

Defining attributes in a role

When creating a role, you must specify values for attributes that are used in the operations you want to perform. When a chef-client runs, it merges its own attributes and run-lists with those contained within each assigned role.

The attributes can be of the following two types:

  • Optional: Optional attributes need not be defined by the user. Their default values are pre-defined in the attribute file of the default.rb recipe. However, you can override the default values by defining override-type attributes at the role level.
  • Mandatory: All mandatory attributes required in an operation must be defined at the role level as override-type attributes.

Important: The veritas_infoscale cookbook uses response files to perform all operations such as installation, upgrade, patch upgrade, and so on. Before using the veritas_infoscale cookbook, ensure that you have read and understood all prerequisites that are applicable while performing those operations with response files. For more information about using response files, refer to the following guides:

  • Installation Guide - Linux
  • Storage Foundation Configuration and Upgrade Guide - Linux
  • Cluster Server Configuration and Upgrade Guide - Linux
  • Storage Foundation and High Availability Configuration and Upgrade Guide - Linux
  • Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Linux
  • Storage Foundation for Oracle RAC Configuration and Upgrade Guide - Linux
  • Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide - Linux

See the following sections for a list of attributes used in each operation. Each section lists the attribute name, dimension, description, and type (Optional or Mandatory).

Install

The following attributes are used while performing an installation of Veritas InfoScale products. Ensure that you specify values for all mandatory attributes in the role you are creating for installation. Optional attributes that are not specified use their pre-defined default values.

  • install_script

Dimension: Scalar

Type: Mandatory

Description: Path to where the Veritas InfoScale installation files are stored. For example, <temporary_location>/installer.

  • responsefile_path

Dimension: Scalar

Type: Optional

Description: Temporary location where you want to store the response file created during installation. By default, the response file is stored in the /tmp directory. You can override the default path by specifying a different path for this attribute. Note that after the installation completes successfully, the response file is moved to /opt/VRTS/install/chef.

  • prod

Dimension: Scalar

Type: Mandatory

Description: Specifies which Veritas InfoScale product and version you want to install.

Syntax:

     <product_name><version_number>

<product_name> can be ENTERPRISE, AVAILABILITY, STORAGE, or FOUNDATION.

<version_number> is the version number of the Veritas InfoScale product.

Examples: ENTERPRISE74, AVAILABILITY731, STORAGE70, or FOUNDATION70

  • systems

Dimension: List

Type: Mandatory

Description: List of the systems on which the product is to be installed.

Syntax:

  "systems": ["<server_1>","<server_2>", ... , "<server_n>"]

<server_1>, <server_2>, ...<server_n> are the host names or IP addresses of n number of systems on which you want to perform the installation.

Example:

"systems": ["vmrac008","vmrac009"]

  • keyless

Dimension: List

Type: Mandatory (Even though keyless is a mandatory attribute, you must leave the keyless attribute empty if you are providing values for either the license or licensefile attribute.)

Description: List of the product keys for keyless installation that are to be registered on the systems specified in the systems attribute. The value can be ENTERPRISE, AVAILABILITY, STORAGE, or FOUNDATION.

  • license

Dimension: List

Type: Mandatory (Even though license is a mandatory attribute, you must leave the license attribute empty if you are providing values for either the keyless or licensefile attribute.)

Description: List of license keys to be registered on the systems specified in the systems attribute.

  • licensefile (only for versions 7.4 and later)

Dimension: List

Type: Mandatory (Even though licensefile is a mandatory attribute, you must leave the licensefile attribute empty if you are providing values for either the keyless or license attribute.)

Description: List of paths to the slf license file to be registered on the systems specified in the systems attribute. Ensure that the license files are stored on the same server where the installer is saved.

  • logpath

Dimension: Scalar

Type: Optional

Description: Specifies the location where the log files are to be stored. If no value is defined the default location is /opt/VRTS/install/logs.

  • tmppath

Dimension: Scalar

Type: Optional

Description: Specifies the location where temporary files and dependent RPMs are stored during the install. The default location is /var/tmp.

  • rsh

Dimension: Scalar

Type: Optional

Description: Specifies that RSH protocol must be used instead of SSH protocol while communicating between systems. A boolean value of 1 indicates that RSH will be used and a boolean value of 0 indicates that SSH will be used. This is an optional attribute and if no value is provided SSH protocol is used by default.

  • uploadlogs

Dimension: Scalar

Type: Optional

Description: Specifies whether you want to upload logs to the Veritas website. The value 1 indicates that the installation logs are uploaded to the Veritas website. The value 0 indicates that the installation logs are not uploaded to the Veritas website. By default this value is set to 1.

  • keyfile

Dimension: Scalar

Type: Optional

Description: Location of the SSH key file that is used by the Chef node to communicate with other nodes in the cluster.

  • noipc

Dimension: Scalar

Type: Optional

Description: Disables the installer from making outbound networking calls to Veritas Services and Operations Readiness Tool (SORT) in order to automatically obtain patch and release information updates. (0: sends outbound networking calls to SORT, 1: does not send outbound networking calls to SORT)
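Putting the install attributes together, a role's override attributes might look like the following. This is a pure-Ruby sketch that builds the role as a hash and prints it as JSON; the role name, paths, product string, and host names are illustrative placeholders, not values defined by the cookbook:

```ruby
require 'json'

# Illustrative override attributes for a role that runs infoscale_deploy
# with the :install action. All paths, host names, and the product string
# are placeholders -- substitute values from your own environment.
role = {
  'name'                => 'infoscale_install',
  'run_list'            => ['recipe[veritas_infoscale::default]'],
  'override_attributes' => {
    'install_script' => '/tmp/installer',      # mandatory: installer location
    'prod'           => 'ENTERPRISE74',        # mandatory: product and version
    'systems'        => %w[vmrac008 vmrac009], # mandatory: target systems
    'keyless'        => ['ENTERPRISE'],        # mandatory; leave empty if license/licensefile is used
    'license'        => [],                    # empty because keyless is provided
    'licensefile'    => [],                    # empty because keyless is provided
    'rsh'            => 0                      # optional: 0 = use SSH (the default)
  }
}

puts JSON.pretty_generate(role)
```

The printed JSON matches the shape expected in the Override Attributes box of the Create Role dialog.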

Full upgrade

The following attributes are used while performing a full upgrade of Veritas InfoScale products.

Note: Ensure that the Veritas InfoScale version you are upgrading to is later than 7.0. The veritas_infoscale cookbook only supports versions released after Veritas InfoScale 7.0. Refer to the following guides for a list of supported upgrade paths for Veritas InfoScale products:

  • Storage Foundation Configuration and Upgrade Guide - Linux
  • Cluster Server Configuration and Upgrade Guide - Linux
  • Storage Foundation and High Availability Configuration and Upgrade Guide - Linux
  • Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Linux
  • Storage Foundation for Oracle RAC Configuration and Upgrade Guide - Linux
  • Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide - Linux

Ensure that you specify values for all mandatory attributes in the role you are creating for an upgrade. Optional attributes which are not specified by the user will use their pre-defined default values.

  • install_script

Dimension: Scalar

Type: Mandatory

Description: Path to where the Veritas InfoScale installation files are stored. For example, <temporary_location>/installer.

  • responsefile_path

Dimension: Scalar

Type: Optional

Description: Location where you want to store the response file created during the upgrade. By default, the response file is stored in the /opt/VRTS/install/chef directory. You can override the default path by specifying a different path for this attribute.

  • systems

Dimension: List

Type: Mandatory

Description: List of the systems on which the product is to be upgraded.

Syntax:

  "systems": ["<server_1>","<server_2>", ... , "<server_n>"]

<server_1>, <server_2>, ...<server_n> are the host names or IP addresses of n number of systems on which you want to perform the installation.

Example:

"systems": ["vmrac008","vmrac009"]

  • licensefile (only for versions 7.4 and later)

Dimension: List

Type: Mandatory (Even though licensefile is a mandatory attribute, you can leave the licensefile attribute empty if you wish to upgrade using the existing keyless license of the source version.)

Description: List of paths to the slf license file to be registered on the systems specified in the systems attribute. Ensure that the license files are stored on the same server where the installer is saved.

  • logpath

Dimension: Scalar

Type: Optional

Description: Specifies the location where the log files are to be stored. If no value is defined the default location is /opt/VRTS/install/logs.

  • tmppath

Dimension: Scalar

Type: Optional

Description: Specifies the location where temporary files and dependent RPMs are stored during the install. The default location is /var/tmp.

  • keyfile

Dimension: Scalar

Type: Optional

Description: Location of the SSH key file that is used by the Chef node to communicate with other nodes in the cluster.

  • noipc

Dimension: Scalar

Type: Optional

Description: Disables the installer from making outbound networking calls to Veritas Services and Operations Readiness Tool (SORT) in order to automatically obtain patch and release information updates. (0: sends outbound networking calls to SORT, 1: does not send outbound networking calls to SORT).

  • rsh

Dimension: Scalar

Type: Optional

Description: Specifies that RSH protocol must be used instead of SSH protocol while communicating between systems. A boolean value of 1 indicates that RSH will be used and a boolean value of 0 indicates that SSH will be used. This is an optional attribute and if no value is provided SSH protocol is used by default.

  • allowcomms

Dimension: Scalar

Type: Optional

Description: Indicates whether to start LLT and GAB when you set up a single-node cluster. The value can be 0 (do not start) or 1 (start). By default, this value is set to 1.

  • eat_recreate_creds

Dimension: Scalar

Type: Optional

Description: Specifies whether to upgrade the certificates from 1024 bit key/SHA1 signature to 2048 bit key/SHA256 signature. By default, this value is set to 1, which means that 2048 bit key/SHA256 signature certificates will be used.

  • client_vxfen_warning

Dimension: Scalar

Type: Mandatory

Description: Specifies whether to prompt users to upgrade CP servers before upgrading the client cluster. By default, this value is set to 1, which means that users will be prompted to upgrade CP servers.

  • disable_dmp_native_support

Dimension: Scalar

Type: Optional

Description: Specifies whether to disable dynamic multi-pathing support for the native LVM volume groups or ZFS pools during an upgrade. By default this attribute is set to 0. Note that retaining Dynamic multi-pathing support for the native LVM volume groups/ZFS pools during upgrade will increase package upgrade time depending on the number of luns and native LVM volume groups or ZFS pools configured on the system.

  • vm_no_open_vols

Dimension: Scalar

Type: Optional

Description: Specifies whether to ask the user if there are any open volumes (when vxconfigd is not enabled). Such prompts are asked during uninstallations. (1: affirms there are no open volumes on the system).

  • patch_path

Dimension: List

Type: Optional

Description: Defines the path of a patch level release to be integrated with a base or a maintenance level release in order for multiple releases to be simultaneously installed.

Patch upgrade

The following attributes are used while upgrading Veritas InfoScale products to the latest patch. Ensure that you specify values for all mandatory attributes in the role you are creating for the patch upgrade. Optional attributes that are not specified use their pre-defined default values.

  • install_script

Dimension: Scalar

Type: Mandatory

Description: Path to where the Veritas InfoScale installation files are stored. For example, <temporary_location>/installer.

  • responsefile_path

Dimension: Scalar

Type: Optional

Description: Location where you want to store the response file created during the patch upgrade. By default, the response file is stored in the /opt/VRTS/install/chef directory. You can override the default path by specifying a different path for this attribute.

  • systems

Dimension: List

Type: Mandatory

Description: List of the systems on which the product is to be upgraded.

Syntax:

  "systems": ["<server_1>","<server_2>", ... , "<server_n>"]

<server_1>, <server_2>, ...<server_n> are the host names or IP addresses of n number of systems on which you want to perform the installation.

Example:

"systems": ["vmrac008","vmrac009"]

  • logpath

Dimension: Scalar

Type: Optional

Description: Specifies the location where the log files are to be stored. If no value is defined the default location is /opt/VRTS/install/logs.

  • tmppath

Dimension: Scalar

Type: Optional

Description: Specifies the location where temporary files and dependent RPMs are stored during the install. The default location is /var/tmp.

  • keyfile

Dimension: Scalar

Type: Optional

Description: Location of the SSH key file that is used by the Chef node to communicate with other nodes in the cluster.

  • noipc

Dimension: Scalar

Type: Optional

Description: Disables the installer from making outbound networking calls to Veritas Services and Operations Readiness Tool (SORT) in order to automatically obtain patch and release information updates. (0: sends outbound networking calls to SORT, 1: does not send outbound networking calls to SORT).

  • rsh

Dimension: Scalar

Type: Optional

Description: Specifies that RSH protocol must be used instead of SSH protocol while communicating between systems. A boolean value of 1 indicates that RSH will be used and a boolean value of 0 indicates that SSH will be used. This is an optional attribute and if no value is provided SSH protocol is used by default.

  • allowcomms

Dimension: Scalar

Type: Optional

Description: Indicates whether to start LLT and GAB when you set up a single-node cluster. The value can be 0 (do not start) or 1 (start). By default, this value is set to 1.

  • eat_recreate_creds

Dimension: Scalar

Type: Optional

Description: Specifies whether to upgrade the certificates from 1024 bit key/SHA1 signature to 2048 bit key/SHA256 signature. By default, this value is set to 1, which means that 2048 bit key/SHA256 signature certificates will be used.

  • client_vxfen_warning

Dimension: Scalar

Type: Mandatory

Description: Specifies whether to prompt users to upgrade CP servers before upgrading the client cluster. By default, this value is set to 1, which means that users will be prompted to upgrade CP servers.

Rolling upgrade

While performing a rolling upgrade, it is recommended that you manually switch the service groups. Automatic switching of service groups does not resolve dependency issues if any dependent resource is not under VCS control. The following attributes are used while performing a rolling upgrade of Veritas InfoScale products. Ensure that you specify values for all mandatory attributes in the role you are creating for the rolling upgrade. Optional attributes that are not specified use their pre-defined default values.

  • systems

Dimension: List

Type: Mandatory

Description: List of the systems on which the product is to be upgraded.

Syntax:

  "systems": ["<server_1>","<server_2>", ... , "<server_n>"]

<server_1>, <server_2>, ...<server_n> are the host names or IP addresses of n number of systems on which you want to perform the installation.

Example:

"systems": ["vmrac008","vmrac009"]

  • licensefile (only for versions 7.4 and later)

Dimension: List

Type: Mandatory (Even though licensefile is a mandatory attribute, you can leave the licensefile attribute empty if you wish to upgrade using the existing keyless license of the source version.)

Description: List of paths to the slf license file to be registered on the systems specified in the systems attribute. Ensure that the license files are stored on the same server where the installer is saved.

  • logpath

Dimension: Scalar

Type: Optional

Description: Specifies the location where the log files are to be stored. If no value is defined the default location is /opt/VRTS/install/logs.

  • tmppath

Dimension: Scalar

Type: Optional

Description: Specifies the location where temporary files and dependent RPMs are stored during the install. The default location is /var/tmp.

  • uploadlogs

Dimension: Scalar

Type: Optional

Description: Specifies whether you want to upload logs to the Veritas website. The value 1 indicates that the installation logs are uploaded to the Veritas website. The value 0 indicates that the installation logs are not uploaded to the Veritas website. By default this value is set to 1.

  • keyfile

Dimension: Scalar

Type: Optional

Description: Location of the SSH key file that is used by the Chef node to communicate with other nodes in the cluster.

  • noipc

Dimension: Scalar

Type: Optional

Description: Disables the installer from making outbound networking calls to Veritas Services and Operations Readiness Tool (SORT) in order to automatically obtain patch and release information updates. (0: sends outbound networking calls to SORT, 1: does not send outbound networking calls to SORT).

  • rsh

Dimension: Scalar

Type: Optional

Description: Specifies that RSH protocol must be used instead of SSH protocol while communicating between systems. A boolean value of 1 indicates that RSH will be used and a boolean value of 0 indicates that SSH will be used. This is an optional attribute and if no value is provided SSH protocol is used by default.

  • allowcomms

Dimension: Scalar

Type: Optional

Description: Indicates whether to start LLT and GAB when you set up a single-node cluster. The value can be 0 (do not start) or 1 (start). By default, this value is set to 1.

  • eat_recreate_creds

Dimension: Scalar

Type: Optional

Description: Specifies whether to upgrade the certificates from 1024 bit key/SHA1 signature to 2048 bit key/SHA256 signature. By default, this value is set to 1, which means that 2048 bit key/SHA256 signature certificates will be used.

  • client_vxfen_warning

Dimension: Scalar

Type: Mandatory

Description: Specifies whether to prompt users to upgrade CP servers before upgrading the client cluster. By default, this value is set to 1, which means that users will be prompted to upgrade CP servers.

  • patch_path

Dimension: List

Type: Optional

Description: Defines the path of a patch level release to be integrated with a base or a maintenance level release in order for multiple releases to be simultaneously installed.

  • rollingupgrade_phase2

Dimension: Scalar

Type: Optional

Description: Specifies whether you want to perform the phase 2 operation or not. By default, this value is set to 1, which means that the phase 2 operation will be performed.

  • phase1_order

Dimension: Hash

Type: Mandatory

Description: This attribute defines the order in which the Phase 1 upgrade will be performed. The systems will be upgraded in the sequence of the key number (n) they are listed under. Systems listed under a particular key number will be upgraded simultaneously.

Syntax:

  "phase1_order": {
     "0": [
       "<server_1>",
       "<server_2>"
     ],
     "1": [
       "<server_3>"
     ],
     "2": [
       "<server_4>"
     ],
     "n": [
       "<server_5>",
       "<server_6>"
     ]
  }

<server_1>, <server_2>, <server_3>, <server_4>, <server_5>, <server_6> are the host names or IP addresses of systems in the cluster.
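As a concrete sketch of the ordering semantics, the phase1_order hash can be modeled in plain Ruby: keys are processed in ascending numeric order, and all systems under the same key are upgraded together. Host names and the number of batches are illustrative:

```ruby
# Model of the phase1_order hash: systems listed under the same key are
# upgraded simultaneously; keys are processed in ascending numeric order.
phase1_order = {
  '0' => %w[vmrac008 vmrac009],
  '1' => %w[vmrac010],
  '2' => %w[vmrac011]
}

# Compute the upgrade sequence: one batch per key, in numeric key order.
batches = phase1_order.sort_by { |key, _| key.to_i }.map { |_, hosts| hosts }

batches.each_with_index do |hosts, phase|
  puts "Phase 1, batch #{phase}: upgrading #{hosts.join(', ')}"
end
```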

Configure

A role can be used to define the configuration of a Chef node. Each cluster in the Chef-InfoScale environment includes a Chef node. Veritas InfoScale replicates the configuration of the Chef node to the other nodes in the cluster. In this way, role attributes can be used to define the configuration of the Chef node and, by extension, the configuration of the cluster.

The following attributes are used while configuring a cluster in Veritas InfoScale. Ensure that you specify values for all mandatory attributes in the role you are creating for the particular cluster. Optional attributes that are not specified use their pre-defined default values.

  • prod

Dimension: Scalar

Type: Mandatory

Description: Specifies which Veritas InfoScale product and version you want to configure.

Syntax:

     <product_name><version_number>

<product_name> can be ENTERPRISE, AVAILABILITY, STORAGE, or FOUNDATION.

<version_number> is the version number of the Veritas InfoScale product.

Examples: ENTERPRISE74, AVAILABILITY731, STORAGE70, or FOUNDATION70

  • systems

Dimension: List

Type: Mandatory

Description: List of the systems on which the product is to be configured.

Syntax:

  "systems": ["<server_1>","<server_2>", ... , "<server_n>"]

<server_1>, <server_2>, ...<server_n> are the host names or IP addresses of n number of systems on which you want to perform the installation.

Example:

"systems": ["vmrac008","vmrac009"]

  • keyless

Dimension: List

Type: Mandatory (Even though keyless is a mandatory attribute, you must leave the keyless attribute empty if you are providing values for either the license or licensefile attribute.)

Description: List of the product keys for keyless installation that are to be registered on the systems specified in the systems attribute. The value can be ENTERPRISE, AVAILABILITY, STORAGE, or FOUNDATION.

  • license

Dimension: List

Type: Mandatory (Even though license is a mandatory attribute, you must leave the license attribute empty if you are providing values for either the keyless or licensefile attribute.)

Description: List of license keys to be registered on the systems specified in the systems attribute.

  • licensefile (only for versions 7.4 and later)

Dimension: List

Type: Mandatory (Even though licensefile is a mandatory attribute, you must leave the licensefile attribute empty if you are providing values for either the keyless or license attribute.)

Description: List of paths to the slf license file to be registered on the systems specified in the systems attribute. Ensure that the license files are stored on the same server where the installer is saved.

  • logpath

Dimension: Scalar

Type: Optional

Description: Specifies the location where the log files are to be stored. If no value is defined, the default location is /opt/VRTS/install/logs.

  • tmppath

Dimension: Scalar

Type: Optional

Description: Specifies the location where temporary files and dependent RPMs are stored during the install. The default location is /var/tmp.

  • uploadlogs

Dimension: Scalar

Type: Optional

Description: Specifies whether to upload the installation logs to the Veritas website (1: upload, 0: do not upload). By default, this value is set to 1.

  • keyfile

Dimension: Scalar

Type: Optional

Description: Location of the SSH key file that is used by the Chef node to communicate with other nodes in the cluster.

  • rsh

Dimension: Scalar

Type: Optional

Description: Specifies whether the RSH protocol is used instead of the SSH protocol for communication between systems. The value 1 indicates that RSH is used; the value 0 indicates that SSH is used. If no value is provided, SSH is used by default.

  • allowcomms

Dimension: Scalar

Type: Optional

Description: Indicates whether to start LLT and GAB when you set up a single-node cluster. The value can be 0 (do not start) or 1 (start). By default, this value is set to 1.

  • vm_no_open_vols

Dimension: Scalar

Type: Optional

Description: Specifies whether the installer prompts the user about open volumes when vxconfigd is not enabled. Such prompts appear during uninstallation. (1: affirms that there are no open volumes on the system.)

  • secusrgrps

Dimension: List

Type: Optional

Description: Specifies the user groups which will receive read access to the cluster.

  • defaultaccess

Dimension: Scalar

Type: Optional

Description: Specifies whether to grant read access to all users. By default, this value is set to 0.

  • eat_security_fips

Dimension: Scalar

Type: Optional

Description: Specifies whether to enable or disable security with FIPS mode on a running VCS cluster. By default, this value is set to 0, which means that security with FIPS mode will be disabled in the cluster.

  • activecomponent

Dimension: List

Type: Mandatory

Description: Specifies the component for operations such as precheck, configure, addnode, and install and configure (together). The value can be one of the following:
  • VCS<VersionNumber> for the VCS component
  • SF<VersionNumber> for the SF component
  • SVS<VersionNumber> for the SVS component
  • SFRAC<VersionNumber> for the SF Oracle RAC component
  • SFSYBASECE<VersionNumber> for the SFSYBASECE component
  • SFCFSHA<VersionNumber> for the SFCFSHA component
  • SFHA<VersionNumber> for the SFHA component

  • heartbeat_links_interface

Dimension: List

Type: Mandatory

Description: Lists the NIC interfaces to be configured for heartbeat links. You can configure a minimum of 2 and a maximum of 4 heartbeat links per system. You must enclose the system name within double quotes.

  • lopri_link_interface

Dimension: List

Type: Mandatory

Description: Lists the NIC interfaces to be configured for low priority heartbeat links. You can enter up to 4 entries in the list, one for each low priority heartbeat link. Typically, lopri_link_interface is used on a public network link to provide an additional layer of communication. If you use different media speeds for the private NICs, you can configure the NICs with lower speeds as low-priority links to enhance LLT performance. For example, lltlinklowpri1, lltlinklowpri2, and so on. You must enclose the system name within double quotes.
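Because the system name must appear in double quotes, these two attributes are typically written as per-system mappings. A sketch, assuming hypothetical host names (vmrac008, vmrac009) and NIC names (eth1, eth2, eth0); confirm the exact shape against your installer response file:

```json
{
  "heartbeat_links_interface": {
    "vmrac008": ["eth1", "eth2"],
    "vmrac009": ["eth1", "eth2"]
  },
  "lopri_link_interface": {
    "vmrac008": ["eth0"],
    "vmrac009": ["eth0"]
  }
}
```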

  • clustername

Dimension: Scalar

Type: Mandatory

Description: Defines the name of the cluster.

  • clusterid

Dimension: Scalar

Type: Optional (If no value is provided, a random number is generated and assigned.)

Description: Defines an integer value between 0 and 65535 that uniquely identifies the cluster.

  • username

Dimension: List

Type: Optional

Description: Lists the names of users.

  • userenpw

Dimension: List

Type: Optional

Description: List of encoded passwords for the users.

Note: The order of the values for the userenpw list must match the order of the values in the username list.

  • userpriv

Dimension: List

Type: Optional

Description: Lists the privileges to be granted to the users. The value can be Administrators, Operators, or Guests.

Note: The order of the values for the userpriv list must match the order of the values in the username list.
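The three user attributes are positional: index i of userenpw and userpriv applies to index i of username. A sketch with placeholder user names and encoded passwords:

```json
{
  "username": ["admin", "oper"],
  "userenpw": ["<encoded_password_1>", "<encoded_password_2>"],
  "userpriv": ["Administrators", "Operators"]
}
```

Here admin is granted Administrators privileges and oper is granted Operators privileges.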

  • donotreconfigurevcs

Dimension: Scalar

Type: Optional

Description: Specifies whether to skip re-configuring VCS (1: do not re-configure, 0: re-configure). Note that if VCS is re-configured, any customizations made to the configuration are deleted and the default configuration is re-installed. By default, this value is set to 1.

  • donotreconfigurefencing

Dimension: Scalar

Type: Optional

Description: Specifies whether to skip re-configuring fencing (1: do not re-configure, 0: re-configure). Note that if fencing is re-configured, any customizations made to the configuration are deleted and the default configuration is re-installed. By default, this value is set to 1.

  • fencingenabled

Dimension: Scalar

Type: Mandatory

Description: Specifies whether to enable fencing in your Veritas product. Valid values are 0 or 1.

  • eat_security

Dimension: Scalar

Type: Optional

Description: Specifies whether the cluster should be enabled with the secure mode or not. By default, this value is set to 1, which means that security will be enabled in the cluster.

  • csgnic

Dimension: Scalar

Type: Optional (This attribute must be specified if the Cluster Service Group (CSG) is used.)

Description: Defines the NIC device to use on a system. You can enter all as a system value if the same NIC is used on all systems.

  • csgvip

Dimension: Scalar

Type: Optional (This attribute must be specified if the Cluster Service Group (CSG) is used.)

Description: Defines the virtual IP address for the cluster.

  • csgnetmask

Dimension: Scalar

Type: Optional (This attribute must be specified if the Cluster Service Group (CSG) is used.)

Description: Defines the netmask of the virtual IP address for the cluster.

  • smtpserver

Dimension: Scalar

Type: Optional (This attribute must be specified if SMTP protocol is used.)

Description: Defines the domain-based host name (for example smtp.example.com) of the SMTP server to be used for web notification.

  • smtprecp

Dimension: List

Type: Optional (This attribute must be specified if SMTP protocol is used.)

Description: List of full email addresses (example: user@example.com) of SMTP recipients.

  • smtprsev

Dimension: List

Type: Optional (This attribute must be specified if SMTP protocol is used.)

Description: Defines the minimum severity level of messages (Information, Warning, Error, SevereError) that listed SMTP recipients are to receive. Note that the ordering of severity levels must match that of the addresses of SMTP recipients.

  • snmpcons

Dimension: List

Type: Optional (This attribute must be specified if SNMP protocol is used.)

Description: List of SNMP console system names.

  • snmpport

Dimension: Scalar

Type: Optional (This attribute must be specified if SNMP protocol is used.)

Description: Defines the SNMP trap daemon port (default is 162).

  • snmpcsev

Dimension: List

Type: Optional (This attribute must be specified if SNMP protocol is used.)

Description: Defines the minimum severity level of messages (Information, Warning, Error, SevereError) that listed SNMP consoles are to receive. Note that the ordering of severity levels must match that of the SNMP console system names.
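The notification attributes also pair up positionally: each SMTP recipient (or SNMP console) receives messages at or above the severity listed at the same index. A sketch with placeholder addresses and console names:

```json
{
  "smtpserver": "smtp.example.com",
  "smtprecp": ["admin@example.com", "ops@example.com"],
  "smtprsev": ["SevereError", "Warning"],
  "snmpcons": ["console1", "console2"],
  "snmpport": 162,
  "snmpcsev": ["Error", "Information"]
}
```

In this sketch, admin@example.com receives only SevereError messages, while ops@example.com receives Warning and above.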

  • gconic

Dimension: Scalar

Type: Optional (This attribute must be specified if GCO is used.)

Description: Specifies the NIC for the virtual IP that the Global Cluster Option uses. You can enter all as a system value if the same NIC is used on all systems.

  • gcovip

Dimension: Scalar

Type: Optional (This attribute must be specified if GCO is used.)

Description: Defines the virtual IP address that the Global Cluster Option uses.

  • gconetmask

Dimension: Scalar

Type: Optional (This attribute must be specified if GCO is used.)

Description: Defines the Netmask of the virtual IP address that the Global Cluster Option uses.

  • lltoverudp

Dimension: Scalar

Type: Mandatory

Description: Specifies whether to configure heartbeat links using LLT over UDP.

  • heartbeat_udplink_address

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over UDP.)

Description: Lists the IP address (IPv4 or IPv6) that each heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the IP addresses for each of the heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_udplink_address": {
     "<server_1>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_2>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_3>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_n>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster.

<IP_address_1>, <IP_address_2>, ... <IP_address_4> are the IP addresses that each heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each heartbeat link.
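For instance, a filled-in version of the syntax above for a two-node cluster with two heartbeat links per node might look like this (host names and addresses are placeholders):

```json
{
  "heartbeat_udplink_address": {
    "vmrac008": ["192.168.10.1", "192.168.11.1"],
    "vmrac009": ["192.168.10.2", "192.168.11.2"]
  }
}
```

Each column of addresses (192.168.10.x, 192.168.11.x) corresponds to one heartbeat link running on its own subnet.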

  • heartbeat_udplinklowpri_address

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over UDP.)

Description: Lists the IP address (IPv4 or IPv6) that each low priority heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the IP addresses for each of the low priority heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_udplinklowpri_address": {
     "<server_1>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_2>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_3>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_n>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster. <IP_address_1>, <IP_address_2>, ... <IP_address_4> are the IP addresses that each low priority heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each low priority heartbeat link.

  • heartbeat_udplink_port

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over UDP.)

Description: Lists the UDP port number (16-bit integer value) that each heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the UDP port number for each of the heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_udplink_port": {
     "<server_1>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_2>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_3>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_n>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster. <port_1>, <port_2>, ... <port_4> are the port numbers that each heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each heartbeat link.

  • heartbeat_udplinklowpri_port

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over UDP.)

Description: Lists the UDP port number (16-bit integer value) that each low priority heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the UDP port number for each of the low priority heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_udplinklowpri_port": {
     "<server_1>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_2>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_3>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_n>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster. <port_1>, <port_2>, ... <port_4> are the port numbers that each low priority heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each low priority heartbeat link.

  • heartbeat_udplink_netmask

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over UDP.)

Description: Lists the netmask (prefix for IPv6) that each heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the netmask for each of the heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_udplink_netmask": {
     "<server_1>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_2>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_3>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_n>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster. <netmask_1>, <netmask_2>, ... <netmask_4> are the netmasks that each heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each heartbeat link.

  • heartbeat_udplinklowpri_netmask

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over UDP.)

Description: Lists the netmask (prefix for IPv6) that each low priority heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the netmask for each of the low priority heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_udplinklowpri_netmask": {
     "<server_1>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_2>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_3>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_n>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster. <netmask_1>, <netmask_2>, ... <netmask_4> are the netmasks that each low priority heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each low priority heartbeat link.

  • lltoverrdma

Dimension: Scalar

Type: Mandatory

Description: Indicates whether to configure heartbeat links using LLT over Remote Direct Memory Access (RDMA).

  • heartbeat_rdmalink_address

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over RDMA.)

Description: Lists the IP address (IPv4 or IPv6) that each heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the IP addresses for each of the heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_rdmalink_address": {
     "<server_1>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_2>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_3>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_n>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster.

<IP_address_1>, <IP_address_2>, ... <IP_address_4> are the IP addresses that each heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each heartbeat link.

  • heartbeat_rdmalinklowpri_address

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over RDMA.)

Description: Lists the IP address (IPv4 or IPv6) that each low priority heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the IP addresses for each of the low priority heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_rdmalinklowpri_address": {
     "<server_1>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_2>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_3>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ],
     "<server_n>": [
        "<IP_address_1>",
        "<IP_address_2>",
        "<IP_address_3>",
        "<IP_address_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster. <IP_address_1>, <IP_address_2>, ... <IP_address_4> are the IP addresses that each low priority heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each low priority heartbeat link.

  • heartbeat_rdmalink_port

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over RDMA.)

Description: Lists the RDMA port number (16-bit integer value) that each heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the RDMA port number for each of the heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_rdmalink_port": {
     "<server_1>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_2>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_3>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_n>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster. <port_1>, <port_2>, ... <port_4> are the port numbers that each heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each heartbeat link.

  • heartbeat_rdmalinklowpri_port

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over RDMA.)

Description: Lists the RDMA port number (16-bit integer value) that each low priority heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the RDMA port number for each of the low priority heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_rdmalinklowpri_port": {
     "<server_1>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_2>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_3>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ],
     "<server_n>": [
        "<port_1>",
        "<port_2>",
        "<port_3>",
        "<port_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster. <port_1>, <port_2>, ... <port_4> are the port numbers that each low priority heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each low priority heartbeat link.

  • heartbeat_rdmalink_netmask

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over RDMA.)

Description: Lists the netmask (prefix for IPv6) that each heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the netmask for each of the heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_rdmalink_netmask": {
     "<server_1>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_2>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_3>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_n>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster. <netmask_1>, <netmask_2>, ... <netmask_4> are the netmasks that each heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each heartbeat link.

  • heartbeat_rdmalinklowpri_netmask

Dimension: Hash (two-dimensional array)

Type: Optional (This attribute is mandatory if LLT is configured over RDMA.)

Description: Lists the netmask (prefix for IPv6) that each low priority heartbeat link uses on each of the nodes in the cluster. The system name of each of the nodes and the netmask for each of the low priority heartbeat links are specified in a hash (two dimensional array) in JSON.

Syntax:

  "heartbeat_rdmalinklowpri_netmask": {
     "<server_1>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_2>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_3>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ],
     "<server_n>": [
        "<netmask_1>",
        "<netmask_2>",
        "<netmask_3>",
        "<netmask_4>"
      ]
  }

<server_1>, <server_2>, ... <server_n> are the host names or IP addresses of n number of systems in the cluster. <netmask_1>, <netmask_2>, ... <netmask_4> are the netmasks that each low priority heartbeat link uses on that particular system. You can enter up to 4 entries per system, one for each low priority heartbeat link.

  • fencing_option

Dimension: Scalar

Type: Mandatory

Description: Specifies the I/O fencing configuration mode.
  • 1: Disk-based fencing
  • 2: Majority-based fencing
  • 3: Disabled I/O fencing

  • fencing_newdg_disks

Dimension: List

Type: Optional

Description: Specifies the disks to use to create a new disk group for I/O fencing. Note: You must define the fencing_dgname variable to use an existing disk group. If you want to create a new disk group, you must use both the fencing_dgname variable and the fencing_newdg_disks variable.

  • fencing_dgname

Dimension: Scalar

Type: Optional

Description: Specifies the disk group for I/O fencing. Note: You must define the fencing_dgname variable to use an existing disk group. If you want to create a new disk group, you must use both the fencing_dgname variable and the fencing_newdg_disks variable.
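Putting the fencing attributes together, a sketch of disk-based fencing that creates a new coordinator disk group (the disk group and disk names are placeholders; coordinator disk groups conventionally use an odd number of disks, typically three):

```json
{
  "fencingenabled": 1,
  "fencing_option": 1,
  "fencing_dgname": "fencedg",
  "fencing_newdg_disks": ["disk_0", "disk_1", "disk_2"]
}
```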

  • fencing_config_cpagent

Dimension: Scalar

Type: Optional

Description: Specifies whether to configure the Coordination Point agent using the installer. Enter 1 to configure the Coordination Point agent using the installer; enter 0 if you do not want to configure it.

  • fencing_cpagentgrp

Dimension: Scalar

Type: Optional

Description: Specifies the name of the service group that contains the Coordination Point agent resource. Note: This field is ignored if the fencing_config_cpagent field is set to 0.

  • fencing_cpagent_monitor_freq

Dimension: Scalar

Type: Optional

Description: Specifies the frequency at which the Coordination Point agent monitors for changes to the Coordinator Disk Group constitution, such as a disk being accidentally deleted from the group. The frequency of this detailed monitoring is tuned with the LevelTwoMonitorFreq attribute. For example, if you set this attribute to 5, the agent monitors the Coordinator Disk Group constitution every five monitor cycles. If the attribute is not set, or is set to 0, the agent does not monitor changes to the Coordinator Disk Group.

  • fencing_auto_refresh_reg

Dimension: Scalar

Type: Optional

Description: Specifies whether to automatically refresh the registration of coordination points if registration keys are missing on any of the CP servers.

Uninstall

The attributes listed below are used while uninstalling Veritas InfoScale products. Ensure that you specify values for all mandatory attributes in the role you are creating for uninstallation. Optional attributes that are not specified use pre-defined default values.

  • install_script

Dimension: Scalar

Type: Mandatory

Description: Path to where the Veritas InfoScale installation files are stored. For example, <temporary_location>/installer.

  • responsefile_path

Dimension: Scalar

Type: Optional

Description: Location where you want to store the response file created during the uninstallation. By default, the response file is stored in the /opt/VRTS/install/chef directory. You can override the default path by specifying a different path for this attribute.

  • prod

Dimension: Scalar

Type: Mandatory

Description: Specifies which Veritas InfoScale product and version you want to uninstall.

Syntax:

     <product_name><version_number>

<product_name> can be either ENTERPRISE, AVAILABILITY, STORAGE, or FOUNDATION.

<version_number> is the version number of the Veritas InfoScale product.

Examples: ENTERPRISE74, AVAILABILITY731, STORAGE70, or FOUNDATION70

  • systems

Dimension: List

Type: Mandatory

Description: List of the systems on which the product is to be uninstalled.

Syntax:

  "systems": ["<server_1>","<server_2>", ... , "<server_n>"]

<server_1>, <server_2>, ...<server_n> are the host names or IP addresses of n number of systems on which you want to perform the uninstallation.

Example:

"systems": ["vmrac008","vmrac009"]
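Taken together, a minimal uninstallation role needs only the mandatory attributes above. A sketch (the installer path and host names are placeholders; the exact nesting of these attributes inside the role follows the cookbook's configuration conventions):

```json
{
  "install_script": "/tmp/installer",
  "prod": "ENTERPRISE74",
  "systems": ["vmrac008", "vmrac009"]
}
```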

License

The attributes listed below are used while managing Veritas InfoScale licenses. Ensure that you specify values for all mandatory attributes in the role you are creating for managing licenses. Optional attributes that are not specified use pre-defined default values.

  • prod

Dimension: Scalar

Type: Mandatory

Description: Specifies which Veritas InfoScale product's license you want to manage.

Syntax:

     <product_name><version_number>

<product_name> can be either ENTERPRISE, AVAILABILITY, STORAGE, or FOUNDATION.

<version_number> is the version number of the Veritas InfoScale product.

Examples: ENTERPRISE74, AVAILABILITY731, STORAGE70, or FOUNDATION70

  • systems

Dimension: List

Type: Mandatory

Description: List of the systems on which the product license is to be managed.

Syntax:

  "systems": ["<server_1>","<server_2>", ... , "<server_n>"]

<server_1>, <server_2>, ...<server_n> are the host names or IP addresses of n number of systems on which you want to manage the product license.

Example:

"systems": ["vmrac008","vmrac009"]

  • keyless

Dimension: List

Type: Mandatory (Even though keyless is a mandatory attribute, you must leave the keyless attribute empty if you are providing values for either the license or licensefile attribute.)

Description: List of the product keys for keyless installation that are to be registered on the systems specified in the systems attribute. The value can be either ENTERPRISE, AVAILABILITY, STORAGE, or FOUNDATION.

  • license

Dimension: List

Type: Mandatory (Even though license is a mandatory attribute, you must leave the license attribute empty if you are providing values for either the keyless or licensefile attribute.)

Description: List of license keys to be registered on the systems specified in the systems attribute.

  • licensefile (only for versions 7.4 and later)

Dimension: List

Type: Mandatory (Even though licensefile is a mandatory attribute, you must leave the licensefile attribute empty if you are providing values for either the keyless or license attribute.)

Description: List of paths to the slf license file to be registered on the systems specified in the systems attribute. Ensure that the license files are stored on the same server where the installer is saved.

  • logpath

Dimension: Scalar

Type: Optional

Description: Specifies the location where the log files are to be stored. If no value is defined, the default location is /opt/VRTS/install/logs.

  • tmppath

Dimension: Scalar

Type: Optional

Description: Specifies the location where temporary files and dependent RPMs are stored during the install. The default location is /var/tmp.

  • uploadlogs

Dimension: Scalar

Type: Optional

Description: Specifies whether to upload the installation logs to the Veritas website (1: upload, 0: do not upload). By default, this value is set to 1.

  • keyfile

Dimension: Scalar

Type: Optional

Description: Location of the SSH key file that is used by the Chef node to communicate with other nodes in the cluster.

  • noipc

Dimension: Scalar

Type: Optional

Description: Prevents the installer from making outbound networking calls to the Veritas Services and Operations Readiness Tools (SORT) site to automatically obtain patch and release information updates. (0: allows outbound networking calls to SORT, 1: blocks outbound networking calls to SORT.)

  • rsh

Dimension: Scalar

Type: Optional

Description: Specifies whether the RSH protocol is used instead of the SSH protocol for communication between systems (1: use RSH; 0: use SSH). If no value is provided, SSH is used by default.
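
As a hedged illustration, a role supplying these deployment attributes might look like the following JSON. The role name, the run_list entry, and the flat attribute layout are assumptions made for this sketch; check the cookbook's recipes and attribute files for the exact keys it reads and how they are nested.

```json
{
  "name": "infoscale_deploy_role",
  "description": "Illustrative role for an InfoScale deployment (names and nesting are assumptions)",
  "run_list": ["recipe[my_wrapper_cookbook::deploy]"],
  "default_attributes": {
    "keyless": ["ENTERPRISE"],
    "license": [],
    "licensefile": [],
    "logpath": "/opt/VRTS/install/logs",
    "tmppath": "/var/tmp",
    "uploadlogs": "0",
    "keyfile": "/root/.ssh/id_rsa",
    "noipc": "1",
    "rsh": "0"
  }
}
```

Note that license and licensefile are left empty here because a keyless value is supplied; per the attribute descriptions above, only one of the three may carry values.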

Security

The following attributes are used while configuring security in Veritas InfoScale. Ensure that you specify values for all mandatory attributes in the role you create for configuring security. Optional attributes that are not specified use pre-defined default values.

Note: Ensure that you run the Chef client from a node that is part of the cluster on which you are configuring security.

  • logpath

Dimension: Scalar

Type: Optional

Description: Specifies the location where the log files are to be stored. If no value is defined, the default location is /opt/VRTS/install/logs.

  • tmppath

Dimension: Scalar

Type: Optional

Description: Specifies the location where temporary files and dependent RPMs are stored during the install. The default location is /var/tmp.

  • keyfile

Dimension: Scalar

Type: Optional

Description: Location of the SSH key file that is used by the Chef node to communicate with other nodes in the cluster.

  • rsh

Dimension: Scalar

Type: Optional

Description: Specifies whether the RSH protocol is used instead of the SSH protocol for communication between systems (1: use RSH; 0: use SSH). If no value is provided, SSH is used by default.

  • secusrgrps

Dimension: List

Type: Optional

Description: Specifies the user groups that receive read access to the cluster.

  • defaultaccess

Dimension: Scalar

Type: Optional

Description: Specifies whether to grant read access to all users. By default, this value is set to 0.

  • eat_security_fips

Dimension: Scalar

Type: Optional

Description: Specifies whether to enable or disable security with FIPS mode on a running VCS cluster. By default, this value is set to 0, which means that security with FIPS mode will be disabled in the cluster.

  • reconfigure_security

Dimension: Scalar

Type: Optional

Description: Specifies whether you want to re-configure security. Note that if security is re-configured, any customizations made to the configuration are deleted and the default configuration is applied.
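
A hedged sketch of a role supplying these security attributes follows. The role name, the run_list entry, the group name, and the flat attribute layout are assumptions for illustration; verify the exact keys against the cookbook's recipes before use.

```json
{
  "name": "infoscale_security_role",
  "description": "Illustrative role for configuring security (names and nesting are assumptions)",
  "run_list": ["recipe[my_wrapper_cookbook::security]"],
  "default_attributes": {
    "secusrgrps": ["vcsadmins"],
    "defaultaccess": "0",
    "eat_security_fips": "0",
    "reconfigure_security": "0",
    "logpath": "/opt/VRTS/install/logs",
    "tmppath": "/var/tmp",
    "rsh": "0"
  }
}
```

Because all security attributes are optional, any key omitted from the role falls back to its pre-defined default value.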

Fencing

The following attributes are used while configuring fencing in Veritas InfoScale. Note that only disk-based, majority-based, and disabled-mode fencing can be configured using Chef. Ensure that you specify values for all mandatory attributes in the role you create for configuring fencing. Optional attributes that are not specified use pre-defined default values.

  • fencing_option

Dimension: Scalar

Type: Mandatory

Description: Specifies the I/O fencing configuration mode.
  • 1: Disk-based fencing
  • 2: Majority-based fencing
  • 3: Disabled-mode I/O fencing

  • fencing_newdg_disks

Dimension: List

Type: Optional

Description: Specifies the disks to use to create a new disk group for I/O fencing. Note: You must define the fencing_dgname attribute to use an existing disk group. If you want to create a new disk group, you must use both the fencing_dgname attribute and the fencing_newdg_disks attribute.

  • fencing_dgname

Dimension: Scalar

Type: Optional

Description: Specifies the disk group for I/O fencing. Note: You must define the fencing_dgname attribute to use an existing disk group. If you want to create a new disk group, you must use both the fencing_dgname attribute and the fencing_newdg_disks attribute.

  • fencing_config_cpagent

Dimension: Scalar

Type: Optional

Description: Specifies whether you want to configure the Coordination Point agent using the installer (1: configure the agent; 0: do not configure the agent).

  • fencing_cpagentgrp

Dimension: Scalar

Type: Optional

Description: Name of the service group that contains the Coordination Point agent resource. Note: This attribute is ignored if the fencing_config_cpagent attribute is set to '0'.

  • fencing_cpagent_monitor_freq

Dimension: Scalar

Type: Optional

Description: Specifies the frequency at which the Coordination Point agent monitors for changes to the Coordinator Disk Group constitution, such as a disk being accidentally deleted from the Coordinator Disk Group. The frequency of this detailed monitoring is tuned with the LevelTwoMonitorFreq attribute. For example, if you set this attribute to 5, the agent monitors the Coordinator Disk Group constitution every five monitor cycles. If the attribute is set to 0 or not set, the agent does not monitor changes to the Coordinator Disk Group.

  • fencing_auto_refresh_reg

Dimension: Scalar

Type: Optional

Description: Enables automatic refresh of the coordination point registrations if registration keys are missing on any of the CP servers.

  • donotreconfigurefencing

Dimension: Scalar

Type: Optional

Description: Specifies whether to skip re-configuring fencing. Note that if fencing is re-configured, any customizations made to the configuration are deleted and the default configuration is applied. By default, this value is set to 1 (fencing is not re-configured).

  • keyfile

Dimension: Scalar

Type: Optional

Description: Location of the SSH key file that is used by the Chef node to communicate with other nodes in the cluster.

  • rsh

Dimension: Scalar

Type: Optional

Description: Specifies whether the RSH protocol is used instead of the SSH protocol for communication between systems (1: use RSH; 0: use SSH). If no value is provided, SSH is used by default.

  • logpath

Dimension: Scalar

Type: Optional

Description: Specifies the location where the log files are to be stored. If no value is defined, the default location is /opt/VRTS/install/logs.

  • tmppath

Dimension: Scalar

Type: Optional

Description: Specifies the location where temporary files and dependent RPMs are stored during the install. The default location is /var/tmp.
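
The fencing attributes above can likewise be sketched as a role. This example assumes disk-based fencing with a new disk group; the role name, run_list entry, disk names, and service group name are all hypothetical placeholders, and the flat attribute layout is an assumption to verify against the cookbook's recipes.

```json
{
  "name": "infoscale_fencing_role",
  "description": "Illustrative role for disk-based fencing (names and nesting are assumptions)",
  "run_list": ["recipe[my_wrapper_cookbook::fencing]"],
  "default_attributes": {
    "fencing_option": "1",
    "fencing_dgname": "fendg",
    "fencing_newdg_disks": ["disk_0", "disk_1", "disk_2"],
    "fencing_config_cpagent": "1",
    "fencing_cpagentgrp": "vxfen_grp",
    "fencing_cpagent_monitor_freq": "5",
    "donotreconfigurefencing": "1",
    "rsh": "0"
  }
}
```

Here fencing_dgname and fencing_newdg_disks are supplied together to create a new disk group; to reuse an existing disk group, fencing_dgname alone would be defined, per the attribute descriptions above.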

Step 4: Running the Chef-client on the Chef node

A Chef node is a node that is managed by Chef and on which the Chef-client runs. We recommend that you run the Chef-client from a node that is part of the cluster on which you are performing the operation. Use the following command to run the Chef-client on the Chef node:

chef-client -r "role[<name_of_the_role>]"

<name_of_the_role> is the name of the role that you have created in Step 3: Creating roles and defining attributes.

Note: Ensure that you have added a dependency on the veritas_infoscale cookbook in your cookbook's metadata.rb, as follows:

depends 'veritas_infoscale'

Dependent cookbooks

This cookbook has no specified dependencies.

Contingent cookbooks

There are no cookbooks that are contingent upon this one.

Collaborator Number Metric
            

0.1.0 passed this metric

Contributing File Metric
            

0.1.0 failed this metric

Failure: To pass this metric, your cookbook metadata must include a source url, the source url must be in the form of https://github.com/user/repo, and your repo must contain a CONTRIBUTING.md file

Foodcritic Metric
            

0.1.0 failed this metric

FC064: Ensure issues_url is set in metadata: veritas_infoscale/metadata.rb:1
FC065: Ensure source_url is set in metadata: veritas_infoscale/metadata.rb:1
Run with Foodcritic Version 14.0.0 with tags metadata,correctness ~FC031 ~FC045 and failure tags any

No Binaries Metric
            

0.1.0 passed this metric

Publish Metric
            

0.1.0 passed this metric

Supported Platforms Metric
            

0.1.0 passed this metric

Testing File Metric
            

0.1.0 failed this metric

Failure: To pass this metric, your cookbook metadata must include a source url, the source url must be in the form of https://github.com/user/repo, and your repo must contain a TESTING.md file

Version Tag Metric
            

0.1.0 failed this metric

Failure: To pass this metric, your cookbook metadata must include a source url, the source url must be in the form of https://github.com/user/repo, and your repo must include a tag that matches this cookbook version number