std::unique_ptr::operator=
unique_ptr& operator=( unique_ptr&& r ) noexcept; | (1) | (constexpr since C++23) |
template< class U, class E > unique_ptr& operator=( unique_ptr<U, E>&& r ) noexcept; | (2) | (constexpr since C++23) |
unique_ptr& operator=( std::nullptr_t ) noexcept; | (3) | (constexpr since C++23) |
unique_ptr& operator=( const unique_ptr& ) = delete; | (4) | |
Parameters
r | - | smart pointer from which ownership will be transferred |
Notes
As a move-only type, unique_ptr's assignment operator only accepts rvalue arguments (e.g. the result of std::make_unique or a std::move'd unique_ptr variable).
Defect reports
The following behavior-changing defect reports were applied retroactively to previously published C++ standards.
DR | Applied to | Behavior as published | Correct behavior |
---|---|---|---|
| C++11 | for overload (2), was assigned from std::forward<Deleter>(r.get_deleter()) | corrected to std::forward<E>(r.get_deleter()) |
| C++11 | rejected qualification conversions | accepts them |
| C++11 | the converting assignment operator was not constrained | constrained |
| C++11 | the move assignment operator was not constrained | constrained |
Vishal Chovatiya
Smart pointers are a good mechanism for managing dynamically allocated resources. In this article, we will look at unique_ptr with examples in C++11. Rather than discussing the standard smart pointers from the library, we will implement our own equivalent smart pointer, which gives an idea of how smart pointers work inside.
Prior to C++11, the standard provided std::auto_ptr, which had some limitations. From C++11 onwards, the standard provides several smart pointer classes. Understanding unique_ptr with examples in C++ requires an understanding of move semantics, which I have discussed here & here .
But before all of that, let us ask: why do we need smart pointers in the first place?
The Assisted Installer for Red Hat OpenShift Container Platform is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console . The Assisted Installer supports various deployment platforms with a focus on bare metal, Nutanix, vSphere, and Oracle Cloud Infrastructure.
You can install OpenShift Container Platform on premises in a connected environment, with an optional HTTP/S proxy, for the following platforms:
The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following features:
Before installing, the Assisted Installer checks the following configurations:
You can customize your installation by selecting one or more options.
These options are installed as Operators, which are used to package, deploy, and manage services and applications on the control plane. See the Operators documentation for details.
You can deploy these Operators after the installation if you require advanced configuration options.
You can deploy OpenShift Virtualization to perform the following tasks:
See the OpenShift Virtualization documentation for details.
You can deploy the multicluster engine for Kubernetes to perform the following tasks in a large, multi-cluster environment:
Use GitOps Zero Touch Provisioning to manage remote edge sites at scale. See Edge computing for details.
You can deploy the multicluster engine with Red Hat OpenShift Data Foundation on all OpenShift Container Platform clusters.
Multicluster engine and storage configurations
Deploying multicluster engine without OpenShift Data Foundation results in the following scenarios:
Assisted Installer APIs are supported for a minimum of three months from the announcement of deprecation.
The Assisted Installer validates the following prerequisites to ensure successful installation.
If you use a firewall, you must configure it so that Assisted Installer can access the resources it requires to function.
The Assisted Installer is supported on the following CPU architectures:
This section describes the resource requirements for different clusters and installation options.
The multicluster engine for Kubernetes requires additional resources.
If you deploy the multicluster engine with storage, such as OpenShift Data Foundation or LVM Storage, you must also allocate additional resources to each node.
The resource requirements of a multi-node cluster depend on the installation options.
Control plane nodes:
The disks must be reasonably fast, with an etcd wal_fsync_duration_seconds p99 duration that is less than 10 ms. For more information, see the Red Hat Knowledgebase solution How to Use 'fio' to Check Etcd Disk Performance in OCP .
Compute nodes:
Additional 16 GB RAM
If you deploy multicluster engine without OpenShift Data Foundation, no storage is configured. You configure the storage after the installation.
The resource requirements for single-node OpenShift depend on the installation options.
Additional 32 GB RAM
If you deploy multicluster engine without OpenShift Data Foundation, LVM Storage is enabled.
The network must meet the following requirements:
A base domain name. You must ensure that the following requirements are met:
The OpenShift Container Platform cluster’s network must also meet the following requirements:
This section provides A and PTR record configuration examples that meet the DNS requirements for deploying OpenShift Container Platform using the Assisted Installer. The examples are not meant to provide advice for choosing one DNS solution over another.
In the examples, the cluster name is ocp4 and the base domain is example.com .
The following example is a BIND zone file that shows sample A records for name resolution in a cluster installed using the Assisted Installer.
Example DNS zone database
In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
The following example is a BIND zone file that shows sample PTR records for reverse name resolution in a cluster installed using the Assisted Installer.
Example DNS zone database for reverse records
A PTR record is not required for the OpenShift Container Platform application wildcard.
The Assisted Installer ensures that the cluster meets the prerequisites before installation, which eliminates complex postinstallation troubleshooting and saves significant time and effort. Before installing software on the nodes, the Assisted Installer conducts the following validations:
If the Assisted Installer does not successfully validate the foregoing requirements, installation will not proceed.
After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster.
Before installing OpenShift Container Platform with the Assisted Installer, you must consider the following configuration choices:
To create a cluster with the Assisted Installer web user interface, use the following procedure.
Enter a base domain for the cluster in the Base domain field. All subdomains for the cluster will use this base domain.
The base domain must be a valid DNS name. You must not have a wild card domain set up for the base domain.
Select the version of OpenShift Container Platform to install.
Optional: Select Install single node Openshift (SNO) if you want to install OpenShift Container Platform on a single node.
Currently, SNO is not supported on IBM zSystems and IBM Power platforms.
Optional: If you are installing OpenShift Container Platform on a third-party platform, select the platform from the Integrate with external partner platforms list. Valid values are Nutanix , vSphere , or Oracle Cloud Infrastructure . The Assisted Installer defaults to having no platform integration.
For details on each of the external partner integrations, see Additional Resources .
Assisted Installer supports Oracle Cloud Infrastructure (OCI) integration from OpenShift Container Platform 4.14 and later. For OpenShift Container Platform 4.14, the OCI integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features - Scope of Support .
Optional: The Assisted Installer defaults to the x86_64 CPU architecture. If you are installing OpenShift Container Platform on a different architecture, select the architecture to use. Valid values are arm64 , ppc64le , and s390x . Keep in mind that some features are not available with the arm64 , ppc64le , and s390x CPU architectures.
For a mixed-architecture cluster installation, use the default x86_64 architecture. For instructions on installing a mixed-architecture cluster, see Additional resources .
Optional: Select Include custom manifests if you have at least one custom manifest to include in the installation. A custom manifest contains additional configurations not currently supported in the Assisted Installer. Selecting the checkbox adds the Custom manifests page to the wizard, where you upload the manifests.
Optional: The Assisted Installer defaults to DHCP networking. If you are using a static IP configuration, bridges or bonds for the cluster nodes instead of DHCP reservations, select Static IP, bridges, and bonds .
A static IP configuration is not supported for OpenShift Container Platform installations on Oracle Cloud Infrastructure.
You cannot change the base domain, the SNO checkbox, the CPU architecture, the host’s network configuration, or the disk encryption after installation begins.
Additional resources
The Assisted Installer supports IPv4 networking with SDN (up to OpenShift Container Platform 4.14) and OVN, and supports IPv6 and dual-stack networking with OVN only. The Assisted Installer supports configuring the network with static network interfaces using IP address/MAC address mapping. The Assisted Installer also supports configuring host network interfaces with the NMState library, a declarative network manager API for hosts. You can use NMState to deploy hosts with static IP addressing, bonds, VLANs, and other advanced networking features. First, you must set network-wide configurations. Then, you must create a host-specific configuration for each host.
For installations on IBM Z with z/VM, ensure that the z/VM nodes and vSwitches are properly configured for static networks and NMState. Also, the z/VM nodes must have a fixed MAC address assigned as the pool MAC addresses might cause issues with NMState.
Enter the network-wide IP addresses. If you selected Dual stack networking, you must enter both IPv4 and IPv6 addresses.
Enter the host-specific configuration.
This step is optional.
See the product documentation for prerequisites and configuration options:
If you require advanced options, install the Operators after you have installed the cluster.
Select one or more from the following options:
Install multicluster engine
You can deploy the multicluster engine with OpenShift Data Foundation on all OpenShift Container Platform clusters.
Deploying the multicluster engine without OpenShift Data Foundation results in the following storage configurations:
You must add one or more hosts to the cluster. Adding a host to the cluster involves generating a discovery ISO. The discovery ISO runs Red Hat Enterprise Linux CoreOS (RHCOS) in-memory with an agent.
Perform the following procedure for each host on the cluster.
Click the Add hosts button and select the provisioning type.
Select iPXE: Provision from your network server to boot the hosts using iPXE. This is the recommended method for IBM Z with z/VM nodes. ISO boot is the recommended method for RHEL KVM installations.
Optional: Activate the Run workloads on control plane nodes switch to schedule workloads to run on control plane nodes, in addition to the default worker nodes.
This option is available for clusters of five or more nodes. For clusters of under five nodes, the system runs workloads on the control plane nodes only, by default. For more details, see Configuring schedulable control plane nodes in Additional Resources .
Optional: Add an SSH public key so that you can connect to the cluster nodes as the core user. Having a login to the cluster nodes can provide you with debugging information during the installation.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
After booting the hosts with the discovery ISO, the hosts will appear in the table at the bottom of the page. You can optionally configure the hostname and role for each host. You can also delete a host if necessary.
From the Options (⋮) menu for a host, select Change hostname . If necessary, enter a new name for the host and click Change . You must ensure that each host has a valid and unique hostname.
Alternatively, from the Actions list, select Change hostname to rename multiple selected hosts. In the Change Hostname dialog, type the new name and include {{n}} to make each hostname unique. Then click Change .
You can see the new names appearing in the Preview pane as you type. The name will be identical for all selected hosts, with the exception of a single-digit increment per host.
From the Options (⋮) menu, you can select Delete host to delete a host. Click Delete to confirm the deletion.
Alternatively, from the Actions list, select Delete to delete multiple selected hosts at the same time. Then click Delete hosts .
In a regular deployment, a cluster has three or more hosts, and three of these must be control plane hosts. If you delete a control plane host, or if only two hosts remain, you will get a message saying that the system is not ready. To restore a host, you will need to reboot it from the discovery ISO.
For multi-host clusters, in the Role column next to the host name, you can click on the menu to change the role of the host.
If you do not select a role, the Assisted Installer will assign the role automatically. The minimum hardware requirements for control plane nodes exceed those of worker nodes. If you assign a role to a host, ensure that you assign the control plane role to hosts that meet the minimum hardware requirements.
Once all cluster hosts appear with a status of Ready , proceed to the next step.
Each of the hosts retrieved during host discovery can have multiple storage disks. The storage disks are listed for the host on the Storage page of the Assisted Installer wizard.
You can optionally modify the default configurations for each disk.
The Assisted Installer randomly assigns an installation disk by default. If there are multiple storage disks for a host, you can select a different disk to be the installation disk. This automatically unassigns the previous disk.
The Assisted Installer marks all bootable disks for formatting during the installation process by default, regardless of whether or not they have been defined as the installation disk. Formatting causes data loss.
You can choose to disable the formatting of a specific disk. This should be performed with caution, as bootable disks may interfere with the installation process, mainly in terms of boot order.
You cannot disable formatting for the installation disk.
Before installing OpenShift Container Platform, you must configure the cluster network.
In the Networking page, select one of the following if it is not already selected for you:
Cluster-Managed Networking: Selecting cluster-managed networking means that the Assisted Installer will configure a standard network topology, including keepalived and Virtual Router Redundancy Protocol (VRRP) for managing the API and Ingress VIP addresses.
For cluster-managed networking, configure the following settings:
For user-managed networking, configure the following settings:
Select your Networking stack type :
Optional: Select Use advanced networking to configure the following advanced networking properties:
A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party.
You can upload a custom manifest from your file system to either the openshift folder or the manifests folder. There is no limit to the number of custom manifest files permitted.
Only one file can be uploaded at a time. However, each uploaded YAML file can contain multiple custom manifests. Uploading a multi-document YAML manifest is faster than adding the YAML files individually.
For a file containing a single custom manifest, accepted file extensions include .yaml , .yml , or .json .
Single custom manifest example
For a file containing multiple custom manifests, accepted file types include .yaml or .yml .
Multiple custom manifest example
When uploading a custom manifest, enter the manifest filename and select a destination folder.
Prerequisites
You can change the folder and file name of an uploaded custom manifest. You can also copy the content of an existing manifest, or download it to the folder defined in the Chrome download settings.
It is not possible to modify the content of an uploaded manifest. However, you can overwrite the file.
You can remove uploaded custom manifests before installation in one of two ways:
Once you have removed a manifest you cannot undo the action. The workaround is to upload the manifest again.
You can delete one manifest at a time. This option does not allow you to delete the last remaining manifest.
You can remove all custom manifests at once. This also hides the Custom manifest page.
The Assisted Installer ensures that the cluster meets the prerequisites before installation, which eliminates complex postinstallation troubleshooting and saves significant time and effort. Before installing the cluster, ensure that the cluster and each host pass preinstallation validation.
After you have completed the configuration and all the nodes are Ready , you can begin installation. The installation process takes a considerable amount of time, and you can monitor the installation from the Assisted Installer web console. Nodes will reboot during the installation, and they will initialize after installation.
After the cluster is installed and initialized, the Assisted Installer indicates that the installation is finished. The Assisted Installer provides the console URL, the kubeadmin username and password, and the kubeconfig file. Additionally, the Assisted Installer provides cluster details including the OpenShift Container Platform version, base domain, CPU architecture, API and Ingress IP addresses, and the cluster and service network IP addresses.
Download the kubeconfig file and copy it to the auth directory under your working directory:
The kubeconfig file is available for download for 24 hours after completing the installation.
Add the kubeconfig file to your environment:
Login with the oc CLI tool:
Replace <password> with the password of the kubeadmin user.
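A combined sketch of these steps, assuming the kubeconfig was downloaded to your working directory; the directory and password placeholders are illustrative:

```bash
# Copy the downloaded kubeconfig into an auth directory under the working directory.
mkdir -p <working_dir>/auth
cp kubeconfig <working_dir>/auth/

# Point the CLI at the new cluster.
export KUBECONFIG=<working_dir>/auth/kubeconfig

# Log in as the kubeadmin user reported by the Assisted Installer.
oc login -u kubeadmin -p <password>
```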
After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster using the Assisted Installer API. To use the API, you must perform the following procedures:
Once you perform these steps, you can modify the cluster definition, create discovery ISOs, add hosts to the cluster, and install the cluster. This document does not cover every endpoint of the Assisted Installer API , but you can review all of the endpoints in the API viewer or the swagger.yaml file.
Download the offline token from the Assisted Installer web console. You will use the offline token to set the API token.
Click Load Token .
Disable pop-up blockers.
In your terminal, set the offline token to the OFFLINE_TOKEN variable:
To make the offline token permanent, add it to your profile.
(Optional) Confirm the OFFLINE_TOKEN variable definition.
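A minimal sketch of these steps; the token value and profile file are illustrative:

```bash
# Store the offline token copied from the web console.
export OFFLINE_TOKEN=<copied_offline_token>

# Optionally persist it in your shell profile, then confirm the variable is set.
echo "export OFFLINE_TOKEN=<copied_offline_token>" >> ~/.bash_profile
echo ${OFFLINE_TOKEN}
```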
API calls require authentication with the API token. Assuming you use API_TOKEN as a variable name, add -H "Authorization: Bearer ${API_TOKEN}" to API calls to authenticate with the REST API.
The API token expires after 15 minutes.
On the command line terminal, set the API_TOKEN variable using the OFFLINE_TOKEN to validate the user.
Confirm the API_TOKEN variable definition:
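A sketch of generating and confirming the API token, assuming the standard Red Hat SSO token endpoint (verify the URL for your environment):

```bash
# Exchange the offline token for a short-lived API token.
export API_TOKEN=$( \
  curl --silent \
    --data-urlencode "grant_type=refresh_token" \
    --data-urlencode "client_id=cloud-services" \
    --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
    "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
  | jq --raw-output ".access_token" \
)

# Confirm the variable definition.
echo ${API_TOKEN}
```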
Create a script in your path for one of the token generating methods. For example:
Then, save the file.
Change the file mode to make it executable:
Refresh the API token:
Verify that you can access the API by running the following command:
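A hedged example of such a verification call, assuming the SaaS endpoint https://api.openshift.com/api/assisted-install and its component-versions resource:

```bash
# A successful, authenticated response returns the service component versions.
curl -s "https://api.openshift.com/api/assisted-install/v2/component-versions" \
  -H "Authorization: Bearer ${API_TOKEN}" | jq
```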
Example output
Many of the Assisted Installer API calls require the pull secret. Download the pull secret to a file so that you can reference it in API calls. The pull secret is a JSON object that will be included as a value within the request’s JSON object. The pull secret JSON must be formatted to escape the quotes. For example:
To use the pull secret from a shell variable, execute the following command:
To slurp the pull secret file using jq , reference it in the pull_secret variable, piping the value to tojson to ensure that it is properly formatted as escaped JSON. For example:
Confirm the PULL_SECRET variable definition:
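A minimal sketch of loading the pull secret into a shell variable as escaped JSON; the download path is illustrative:

```bash
# Read the pull secret file and quote it as a single escaped JSON string.
export PULL_SECRET=$(cat ~/Downloads/pull-secret.txt | jq -R .)

# Confirm the variable definition.
echo ${PULL_SECRET}
```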
During the installation of OpenShift Container Platform, you can optionally provide an SSH public key to the installation program. This is useful for initiating an SSH connection to a remote node when troubleshooting an installation error.
If you do not have an existing SSH key pair on your local machine to use for the authentication, create one now.
From the root user in your terminal, get the SSH public key:
Set the SSH public key to the CLUSTER_SSHKEY variable:
Confirm the CLUSTER_SSHKEY variable definition:
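A sketch of these steps, assuming an ed25519 key pair stored at the default path (adjust the key type and path as needed):

```bash
# Create a key pair if one does not already exist.
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Capture the public key and confirm the variable.
export CLUSTER_SSHKEY=$(cat ~/.ssh/id_ed25519.pub)
echo ${CLUSTER_SSHKEY}
```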
To register a new cluster definition with the API, use the /v2/clusters endpoint. Registering a new cluster requires the following settings:
See the cluster-create-params model in the API viewer for details on the fields you can set when registering a new cluster. When setting the olm_operators field, see Additional Resources for details on installing Operators.
After you create the cluster definition, you can modify the cluster definition and provide values for additional settings.
Register a new cluster.
Optional: You can register a new cluster by slurping the pull secret file in the request:
Optional: You can register a new cluster by writing the configuration to a JSON file and then referencing it in the request:
Assign the returned cluster_id to the CLUSTER_ID variable and export it:
If you close your terminal session, you need to export the CLUSTER_ID variable again in a new terminal session.
Check the status of the new cluster:
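A hedged sketch of registering a cluster and checking its status, assuming the SaaS endpoint https://api.openshift.com/api/assisted-install; the cluster name, version, base domain, and pull secret path are illustrative values to replace:

```bash
# Register the cluster, slurping the pull secret file into the request body.
curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "$(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt '{
        "name": "testcluster",
        "openshift_version": "4.15",
        "base_dns_domain": "example.com",
        "ssh_public_key": $ENV.CLUSTER_SSHKEY,
        "pull_secret": ($pull_secret[0] | tojson)
      }')" | jq '.id'

# Save the returned cluster ID for later calls.
export CLUSTER_ID=<returned_cluster_id>

# Check the status of the new cluster definition.
curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" | jq '.status'
```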
Once you register a new cluster definition, create the infrastructure environment for the cluster.
You cannot see the cluster configuration settings in the Assisted Installer user interface until you create the infrastructure environment.
You can install the following Operators when you register a new cluster:
OpenShift Virtualization Operator
Currently, OpenShift Virtualization is not supported on IBM zSystems and IBM Power.
Run the following command:
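A sketch of registering a cluster with Operators selected via the olm_operators field; the operator identifiers shown ("cnv" for OpenShift Virtualization, "mce" for multicluster engine) are assumptions to verify against the cluster-create-params model in the API viewer:

```bash
curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "$(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt '{
        "name": "testcluster",
        "openshift_version": "4.15",
        "base_dns_domain": "example.com",
        "olm_operators": [ { "name": "cnv" }, { "name": "mce" } ],
        "pull_secret": ($pull_secret[0] | tojson)
      }')" | jq '.id'
```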
To modify a cluster definition with the API, use the /v2/clusters/{cluster_id} endpoint. Modifying a cluster resource is a common operation for adding settings such as changing the network type or enabling user-managed networking. See the v2-cluster-update-params model in the API viewer for details on the fields you can set when modifying a cluster definition.
You can add or remove Operators from a cluster resource that has already been registered.
To create partitions on nodes, see Configuring storage on nodes in the OpenShift Container Platform documentation.
Modify the cluster. For example, change the SSH key:
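A minimal sketch of such a modification, assuming the same SaaS endpoint; the key value is illustrative:

```bash
# Replace the SSH public key on the registered cluster definition.
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"ssh_public_key": "ssh-ed25519 AAAA... user@example.com"}' | jq '.ssh_public_key'
```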
You can add or remove Operators from a cluster resource that has already been registered as part of a previous installation. This is only possible before you start the OpenShift Container Platform installation.
You set the required Operator definition by using the PATCH method for the /v2/clusters/{cluster_id} endpoint.
Run the following command to modify the Operators:
Sample output
The output is the description of the new cluster state. The monitored_operators property in the output contains Operators of two types:
Once you register a new cluster definition with the Assisted Installer API, create an infrastructure environment using the v2/infra-envs endpoint. Registering a new infrastructure environment requires the following settings:
See the infra-env-create-params model in the API viewer for details on the fields you can set when registering a new infrastructure environment. You can modify an infrastructure environment after you create it. As a best practice, consider including the cluster_id when creating a new infrastructure environment. The cluster_id will associate the infrastructure environment with a cluster definition. When creating the new infrastructure environment, the Assisted Installer will also generate a discovery ISO.
Register a new infrastructure environment. Provide a name, preferably something including the cluster name. This example provides the cluster ID to associate the infrastructure environment with the cluster resource. The following example specifies the image_type . You can specify either full-iso or minimal-iso . The default value is minimal-iso .
Optional: You can register a new infrastructure environment by slurping the pull secret file in the request:
Optional: You can register a new infrastructure environment by writing the configuration to a JSON file and then referencing it in the request:
Assign the returned id to the INFRA_ENV_ID variable and export it:
Once you create an infrastructure environment and associate it to a cluster definition via the cluster_id , you can see the cluster settings in the Assisted Installer web user interface. If you close your terminal session, you need to re-export the id in a new terminal session.
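A hedged sketch of registering an infrastructure environment associated with the cluster, with illustrative name, image type, and pull secret path:

```bash
# Register the infrastructure environment; image_type can be full-iso or minimal-iso.
curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/infra-envs" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "$(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt \
        --arg cluster_id "${CLUSTER_ID}" '{
        "name": "testcluster-infra-env",
        "image_type": "full-iso",
        "cluster_id": $cluster_id,
        "cpu_architecture": "x86_64",
        "pull_secret": ($pull_secret[0] | tojson)
      }')" | jq '.id'

# Save the returned infrastructure environment ID for later calls.
export INFRA_ENV_ID=<returned_infra_env_id>
```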
You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. Modifying an infrastructure environment is a common operation for adding settings such as networking, SSH keys, or ignition configuration overrides.
See the infra-env-update-params model in the API viewer for details on the fields you can set when modifying an infrastructure environment. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.
Modify the infrastructure environment:
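A minimal sketch of such a modification; replacing the SSH authorized key is shown as an illustrative change:

```bash
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"ssh_authorized_key": "ssh-ed25519 AAAA... user@example.com"}' | jq '.ssh_authorized_key'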
Providing kernel arguments to the Red Hat Enterprise Linux CoreOS (RHCOS) kernel via the Assisted Installer means passing specific parameters or options to the kernel at boot time, particularly when you cannot customize the kernel parameters of the discovery ISO. Kernel parameters can control various aspects of the kernel’s behavior and the operating system’s configuration, affecting hardware interaction, system performance, and functionality. Kernel arguments are used to customize or inform the node’s RHCOS kernel about the hardware configuration, debugging preferences, system services, and other low-level settings.
The RHCOS installer kargs modify command supports the append , delete , and replace options.
You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.
Modify the kernel arguments:
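A hedged sketch of patching kernel arguments; the argument value is illustrative, and the kernel_arguments field shape should be verified against the infra-env-update-params model:

```bash
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "kernel_arguments": [
          { "operation": "append", "value": "rd.net.timeout.carrier=60" }
        ]
      }' | jq '.kernel_arguments'
```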
After configuring the cluster resource and infrastructure environment, download the discovery ISO image. You can choose from two images:
Currently, ISO images are not supported for installations on IBM Z ( s390x ) with z/VM. For details, see Booting hosts using iPXE .
You can boot hosts with the discovery image using three methods. For details, see Booting hosts with the discovery image .
Get the download URL:
Download the discovery image:
Replace <url> with the download URL from the previous step.
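A sketch of both steps, assuming the infrastructure environment exposes a downloads/image-url resource (verify against the API viewer):

```bash
# Get a pre-signed download URL for the discovery ISO.
curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url" \
  -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.url'

# Download the discovery image, replacing <url> with the value returned above.
curl -L -o discovery.iso '<url>'
```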
After adding hosts, modify the hosts as needed. The most common modifications are to the host_name and the host_role parameters.
You can modify a host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. See the host-update-params model in the API viewer for details on the fields you can set when modifying a host.
A host might be one of two roles:
By default, the Assisted Installer sets a host to auto-assign , which means the installation program determines whether the host takes the master or worker role automatically. Use the following procedure to set the host’s role:
Get the host IDs:
Modify the host:
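A minimal sketch of these two steps; the hostname and role values are illustrative:

```bash
# List the host IDs discovered in the infrastructure environment.
curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts" \
  -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.[].id'

# Set the hostname and role for one host.
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id>" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"host_name": "master-0.ocp4.example.com", "host_role": "master"}' \
  | jq '{id, requested_hostname, role}'
```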
Each host retrieved during host discovery can have multiple storage disks. You can optionally modify the default configurations for each disk.
You can view the hosts in your cluster, and the disks on each host. This enables you to perform actions on a specific disk.
Get the host IDs for the cluster:
This is the ID of a single host. Multiple host IDs are separated by commas.
Get the disks for a specific host:
This is the output for one disk. It contains the disk_id and installation_eligibility properties for the disk.
You can select any disk whose installation_eligibility property is eligible: true to be the installation disk.
Optional: Identify the current installation disk:
Assign a new installation disk:
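A hedged sketch of these steps, assuming the host's inventory and validations are returned as JSON-encoded strings and that the disks_selected_config field accepts an "install" role for the chosen disk (verify against the host-update-params model):

```bash
# Show the disks reported for a host, with their IDs and eligibility.
curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id>" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq '.inventory | fromjson | .disks[] | {id, name, installation_eligibility}'

# Identify the current installation disk.
curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id>" \
  -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.installation_disk_id'

# Assign a different eligible disk as the installation disk.
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id>" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"disks_selected_config": [ { "id": "<disk_id>", "role": "install" } ]}' \
  | jq '.installation_disk_id'
```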
A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party. To create a custom manifest with the API, use the /v2/clusters/$CLUSTER_ID/manifests endpoint.
You can upload a base64-encoded custom manifest to either the openshift folder or the manifests folder with the Assisted Installer API. There is no limit to the number of custom manifests permitted.
Only one base64-encoded JSON manifest can be uploaded at a time. However, each uploaded base64-encoded YAML file can contain multiple custom manifests. Uploading a multi-document YAML manifest is faster than adding the YAML files individually.
Add the custom manifest to the cluster by executing the following command:
Replace manifest.json with the name of your manifest file. The second instance of manifest.json is the path to the file. Ensure the path is correct.
The base64 -w 0 command base64-encodes the manifest as a string and omits carriage returns. Encoding with carriage returns will generate an exception.
Verify that the Assisted Installer added the manifest:
Replace manifest.json with the name of your manifest file.
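A sketch of uploading and verifying a manifest through the manifests endpoint named above; the file name and folder are illustrative:

```bash
# Upload manifest.json to the manifests folder, base64-encoding its content without line breaks.
curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}/manifests" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "$(jq --null-input --arg content "$(base64 -w 0 manifest.json)" '{
        "file_name": "manifest.json",
        "folder": "manifests",
        "content": $content
      }')"

# List the manifests registered for the cluster to verify the upload.
curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}/manifests" \
  -H "Authorization: Bearer ${API_TOKEN}" | jq
```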
Once the cluster hosts pass validation, you can install the cluster.
Install the cluster:
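A minimal sketch of triggering the installation, assuming the cluster's actions/install endpoint:

```bash
curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}/actions/install" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" | jq '.status'
```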
You can enable encryption of installation disks using either the TPM v2 or Tang encryption modes.
In some situations, when you enable TPM disk encryption in the firmware for a bare-metal host and then boot it from an ISO that you generate with the Assisted Installer, the cluster deployment can get stuck. This can happen if there are left-over TPM encryption keys from a previous installation on the host. For more information, see BZ#2011634 . If you experience this problem, contact Red Hat support.
Verify that a TPM v2 encryption chip is installed on each node and enabled in the firmware.
Optional: Using the API, follow the "Modifying hosts" procedure. Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tpmv2 .
Enable TPM v2 encryption:
Valid settings for enable_on are all , master , worker , or none .
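A hedged sketch of enabling TPM v2 encryption, assuming the disk_encryption settings are applied to the cluster definition as in the cluster modification step above:

```bash
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "disk_encryption": {
          "enable_on": "all",
          "mode": "tpmv2"
        }
      }' | jq '.disk_encryption'
```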
On the Tang server, retrieve the thumbprint for the Tang server using tang-show-keys :
Optional: Replace <port> with the port number. The default port number is 80 .
Example thumbprint
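A sketch of the command and a representative (illustrative) thumbprint:

```bash
# Run on the Tang server; replace 7500 with your port if it is not the default 80.
tang-show-keys 7500

# Example (illustrative) thumbprint output:
#   PLjNyRdGw03zlRoGjQYMahSZGu9
```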
Optional: Retrieve the thumbprint for the Tang server using jose .
Ensure jose is installed on the Tang server:
On the Tang server, retrieve the thumbprint using jose :
Replace <public_key> with the public exchange key for the Tang server.
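A minimal sketch of these steps, assuming the Tang keys live in the default /var/db/tang directory:

```bash
# Ensure jose is installed on the Tang server.
sudo dnf install jose

# Compute the thumbprint from the server's public exchange key.
sudo jose jwk thp -i /var/db/tang/<public_key>.jwk
```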
Optional: Using the API, follow the "Modifying hosts" procedure.
Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tang . Set disk_encryption.tang_servers to provide the URL and thumbprint details about one or more Tang servers:
Valid settings for enable_on are all , master , worker , or none . Within the tang_servers value, comment out the quotes within the object(s).
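A hedged sketch of enabling Tang encryption on the cluster definition; the Tang server URL and thumbprint are illustrative, and tang_servers is passed as a JSON-encoded string with escaped quotes:

```bash
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "disk_encryption": {
          "enable_on": "all",
          "mode": "tang",
          "tang_servers": "[{\"url\":\"http://tang.example.com:7500\",\"thumbprint\":\"PLjNyRdGw03zlRoGjQYMahSZGu9\"}]"
        }
      }' | jq '.disk_encryption'
```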
In a high availability deployment, three or more nodes comprise the control plane. The control plane nodes are used for managing OpenShift Container Platform and for running the OpenShift containers. The remaining nodes are workers, used to run the customer containers and workloads. There can be anywhere from one to thousands of worker nodes.
For a single-node OpenShift cluster or for a cluster that comprises up to four nodes, the system automatically schedules the workloads to run on the control plane nodes.
For clusters of between five to ten nodes, you can choose to schedule workloads to run on the control plane nodes in addition to the worker nodes. This option is recommended for enhancing efficiency and preventing underutilized resources. You can select this option either during the installation setup, or as part of the post-installation steps.
For larger clusters of more than ten nodes, this option is not recommended.
This section explains how to schedule workloads to run on control plane nodes using the Assisted Installer web console and API, as part of the installation setup.
For instructions on how to configure schedulable control plane nodes following an installation, see Configuring control plane nodes as schedulable in the OpenShift Container Platform documentation.
When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes.
Set Run workloads on control plane nodes to on.
For four nodes or fewer, this switch is activated automatically and cannot be changed.
Use the schedulable_masters attribute to enable workloads to run on control plane nodes.
When you reach the step for registering a new cluster, set the schedulable_masters attribute as follows:
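A sketch of including schedulable_masters in the registration payload, reusing the illustrative values from the earlier registration example (the attribute can also be patched later on the /v2/clusters/{cluster_id} endpoint):

```bash
curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "$(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt '{
        "name": "testcluster",
        "openshift_version": "4.15",
        "base_dns_domain": "example.com",
        "schedulable_masters": true,
        "pull_secret": ($pull_secret[0] | tojson)
      }')" | jq '.schedulable_masters'
```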
The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can use Ignition to customize the discovery image.
Modifications to the discovery image will not persist in the system.
Ignition is a low-level system configuration utility, which is part of the temporary initial root filesystem, the initramfs . When Ignition runs on the first boot, it finds configuration data in the Ignition configuration file and applies it to the host before switch_root is called to pivot to the host’s root filesystem.
Ignition uses a JSON configuration specification file to represent the set of changes that occur on the first boot.
Ignition versions newer than 3.2 are not supported, and will raise an error.
Create an Ignition file and specify the configuration specification version:
Add configuration data to the Ignition file. For example, add a password to the core user.
Generate a password hash:
Add the generated password hash to the core user:
Save the Ignition file and export it to the IGNITION_FILE variable:
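A combined sketch of these steps, using Ignition specification version 3.1.0; the file path is illustrative and the password hash placeholder must be replaced with the mkpasswd output:

```bash
# Generate a password hash for the core user.
mkpasswd --method=SHA-512 --rounds=4096

# Write an Ignition config that sets the core user's password hash.
cat > /tmp/ignition.conf <<'EOF'
{
  "ignition": { "version": "3.1.0" },
  "passwd": {
    "users": [
      { "name": "core", "passwordHash": "<generated_password_hash>" }
    ]
  }
}
EOF

# Export the file path for the next step.
export IGNITION_FILE=/tmp/ignition.conf
```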
Once you create an Ignition configuration file, you can modify the discovery image by patching the infrastructure environment using the Assisted Installer API.
Create an ignition_config_override JSON object and redirect it to a file:
Patch the infrastructure environment:
The ignition_config_override object references the Ignition file.
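A hedged sketch of these steps, embedding the Ignition file as an escaped JSON string in the ignition_config_override field:

```bash
# Build the request body from the Ignition file.
jq --null-input --arg ignition "$(cat ${IGNITION_FILE})" \
  '{ "ignition_config_override": $ignition }' > /tmp/request-body.txt

# Patch the infrastructure environment so the discovery ISO embeds the override.
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d @/tmp/request-body.txt | jq '.ignition_config_override'
```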
The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can boot hosts with the discovery image using three methods:
You can install the Assisted Installer agent using a USB drive that contains the discovery ISO image. Starting the host with the USB drive prepares the host for the software installation.
Copy the ISO image to the USB drive, for example:
Here, the target device is the location of the connected USB drive, for example, /dev/sdb .
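A minimal sketch of copying the image; the device path is illustrative and all data on the target device is destroyed:

```bash
# Write the discovery ISO to the USB drive.
sudo dd if=./discovery.iso of=/dev/sdb bs=4M status=progress conv=fsync
```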
After the ISO is copied to the USB drive, you can use the USB drive to install the Assisted Installer agent on the cluster host.
To register nodes with the Assisted Installer using a bootable USB drive, use the following procedure.
Wait for the host to boot up.
For API installations, refresh the token, check the enabled host count, and gather the host IDs:
You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API.
Boot the host from the hosted ISO file, for example:
Call the redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command:
Set the host to boot from the VirtualMedia device by running the following command:
Reboot the host:
Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command:
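A heavily hedged sketch of these Redfish calls; the resource paths shown assume an iDRAC-style BMC and vary by vendor, and the BMC address, credentials, and ISO URL are illustrative:

```bash
BMC_ADDRESS=<bmc_address>
BMC_CREDENTIALS=<user>:<password>
ISO_URL=http://<http_server>/discovery.iso

# Attach the hosted ISO as the VirtualMedia boot media.
curl -k -u "${BMC_CREDENTIALS}" -X POST \
  -H "Content-Type: application/json" \
  -d "{\"Image\": \"${ISO_URL}\", \"Inserted\": true}" \
  "https://${BMC_ADDRESS}/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia"

# Boot once from the VirtualMedia device.
curl -k -u "${BMC_CREDENTIALS}" -X PATCH \
  -H "Content-Type: application/json" \
  -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' \
  "https://${BMC_ADDRESS}/redfish/v1/Systems/System.Embedded.1"

# Reboot the host; use {"ResetType": "On"} instead if the host is powered off.
curl -k -u "${BMC_CREDENTIALS}" -X POST \
  -H "Content-Type: application/json" \
  -d '{"ResetType": "ForceRestart"}' \
  "https://${BMC_ADDRESS}/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset"
```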
The Assisted Installer provides an iPXE script including all the artifacts needed to boot the discovery image for an infrastructure environment. Although iPXE supports the HTTPS protocol, the cipher algorithms it supports are old and not recommended, so the current recommendation is to download the needed artifacts and expose them on an HTTP server.
The full list of supported ciphers is in https://ipxe.org/crypto .
If you configure iPXE by using the web console, the $INFRA_ENV_ID and $API_TOKEN variables are preset.
IBM Power only supports PXE, which also requires the following: you have installed grub2 at /var/lib/tftpboot, and you have installed DHCP and TFTP for PXE.
Download the iPXE script directly from the web console, or get the iPXE script from the Assisted Installer:
Download the required artifacts by extracting URLs from the ipxe-script .
Download the initial RAM disk:
Download the linux kernel:
Download the root filesystem:
Change the URLs of the different artifacts in the ipxe-script to match your local HTTP server. For example:
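A hedged sketch of the preceding steps, assuming the infrastructure environment exposes a downloads/files resource with a file_name=ipxe-script parameter (verify against the API viewer); artifact URLs and the local server name are placeholders:

```bash
# Fetch the iPXE script for the infrastructure environment.
curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/files?file_name=ipxe-script" \
  -H "Authorization: Bearer ${API_TOKEN}" -o ipxe-script

# Download the initial RAM disk, kernel, and root filesystem using the URLs found in ipxe-script.
curl -L -o initrd.img '<initrd_url_from_ipxe_script>'
curl -L -o kernel '<kernel_url_from_ipxe_script>'
curl -L -o rootfs.img '<rootfs_url_from_ipxe_script>'

# Then edit ipxe-script so the initrd, kernel, and rootfs URLs point at your local HTTP server,
# for example http://<local_http_server>/initrd.img, /kernel, and /rootfs.img.
```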
Optional: When installing with RHEL KVM on IBM zSystems, you must boot the host by specifying additional kernel arguments.
If you install with iPXE on RHEL KVM, in some circumstances, the VMs on the VM host are not rebooted on first boot and need to be started manually.
Optional: When installing on IBM Power, you must download the initramfs, kernel, and root filesystem as follows:
Add the following entry to /var/lib/tftpboot/boot/grub2/grub.cfg:
You can assign roles to your discovered hosts. These roles define the function of the host within the cluster. The roles can be one of the standard Kubernetes types: control plane (master) or worker .
The host must meet the minimum requirements for the role you selected. You can find the hardware requirements by referring to the Prerequisites section of this document or using the preflight requirement API.
If you do not select a role, the system selects one for you. You can change the role at any time before installation starts.
You can select a role after the host finishes its discovery.
You can select a role for the host using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. A host may be one of two roles:
By default, the Assisted Installer sets a host to auto-assign , which means the installer will determine whether the host takes the master or worker role automatically. Use this procedure to set the host’s role.
Modify the host_role setting:
Replace <host_id> with the ID of the host.
The Assisted Installer selects a role automatically for hosts if you do not assign one yourself. The role selection mechanism factors in the host’s memory, CPU, and disk space. It aims to assign the control plane role to the three weakest hosts that meet the minimum requirements for control plane nodes. All other hosts default to worker nodes. The goal is to provide enough resources to run the control plane while reserving the more capable hosts for running the actual workloads.
You can override the auto-assign decision at any time before installation.
The validations ensure that the automatic selection is valid.
Chapter 10. Preinstallation validations
10.1. Definition of preinstallation validations
The Assisted Installer aims to make cluster installation as simple, efficient, and error-free as possible. The Assisted Installer performs validation checks on the configuration and the gathered telemetry before starting an installation.
The Assisted Installer uses the information provided prior to installation, such as the control plane topology, network configuration, and hostnames. It also uses real-time telemetry from the hosts you are attempting to install.
When a host boots the discovery ISO, an agent will start on the host. The agent will send information about the state of the host to the Assisted Installer.
The Assisted Installer uses all of this information to compute real time preinstallation validations. All validations are either blocking or non-blocking to the installation.
A blocking validation will prevent progress of the installation, meaning that you will need to resolve the issue and pass the blocking validation before you can proceed.
A non-blocking validation is a warning and will tell you of things that might cause you a problem.
The Assisted Installer performs two types of validation:
Host validations ensure that the configuration of a given host is valid for installation.
Cluster validations ensure that the configuration of the whole cluster is valid for installation.
10.4.1. Getting host validations by using the REST API
If you use the web console, many of these validations will not show up by name. To get a list of validations consistent with the labels, use the following procedure.
Get all validations for all hosts:
Get non-passing validations for all hosts:
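A hedged sketch of both queries, assuming each host's validations_info is returned as a JSON-encoded string (as for the host inventory):

```bash
# All validations for all hosts in the infrastructure environment.
curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq '.[] | {host: .requested_hostname, validations: (.validations_info | fromjson)}'

# Only the validations that are not passing.
curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq -r '.[] | .validations_info | fromjson | [.[]] | flatten | .[]
           | select(.status != "success") | .id + ": " + .message'
```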
Parameter | Validation type | Description |
---|---|---|
| non-blocking | Checks that the host has recently communicated with the Assisted Installer. |
| non-blocking | Checks that the Assisted Installer received the inventory from the host. |
| non-blocking | Checks that the number of CPU cores meets the minimum requirements. |
| non-blocking | Checks that the amount of memory meets the minimum requirements. |
| non-blocking | Checks that at least one available disk meets the eligibility criteria. |
| blocking | Checks that the number of cores meets the minimum requirements for the host role. |
| blocking | Checks that the amount of memory meets the minimum requirements for the host role. |
| blocking | For day 2 hosts, checks that the host can download ignition configuration from the day 1 cluster. |
| blocking | The majority group is the largest full-mesh connectivity group on the cluster, where all members can communicate with all other members. This validation checks that hosts in a multi-node, day 1 cluster are in the majority group. |
| blocking | Checks that the platform is valid for the network settings. |
| non-blocking | Checks if an NTP server has been successfully used to synchronize time on the host. |
| non-blocking | Checks if container images have been successfully pulled from the image registry. |
| blocking | Checks that disk speed metrics from an earlier installation meet requirements, if they exist. |
| blocking | Checks that the average network latency between hosts in the cluster meets the requirements. |
| blocking | Checks that the network packet loss between hosts in the cluster meets the requirements. |
| blocking | Checks that the host has a default route configured. |
| blocking | For a multi-node cluster with user-managed networking. Checks that the host is able to resolve the API domain name for the cluster. |
| blocking | For a multi-node cluster with user-managed networking. Checks that the host is able to resolve the internal API domain name for the cluster. |
| blocking | For a multi-node cluster with user-managed networking. Checks that the host is able to resolve the internal apps domain name for the cluster. |
| non-blocking | Checks that the host is compatible with the cluster platform. |
| blocking | Checks that the wildcard DNS *.<cluster_name>.<base_domain> is not configured, because this causes known problems for OpenShift. |
| non-blocking | Checks that the type of host and disk encryption configured meet the requirements. |
| blocking | Checks that this host does not have any overlapping subnets. |
| blocking | Checks that the hostname is unique in the cluster. |
| blocking | Checks the validity of the hostname, meaning that it matches the general form of hostnames and is not forbidden. |
| blocking | Checks that the host IP is in the address range of the machine CIDR. |
| blocking | Validates that the cluster meets the requirements of the Local Storage Operator. |
| blocking | Validates that the cluster meets the requirements of the OpenShift Data Foundation Operator. |
| blocking | Validates that the cluster meets the requirements of Container Native Virtualization. |
| blocking | Validates that the cluster meets the requirements of the Logical Volume Manager Operator. |
| non-blocking | Verifies that each valid disk sets disk.EnableUUID to true. In vSphere this will result in each disk having a UUID. |
| blocking | Checks that the discovery agent version is compatible with the agent docker image version. |
| blocking | Checks that installation disk is not skipping disk formatting. |
| blocking | Checks that all disks marked to skip formatting are in the inventory. A disk ID can change on reboot, and this validation prevents issues caused by that. |
| blocking | Checks the connection of the installation media to the host. |
| non-blocking | Checks that the machine network definition exists for the cluster. |
| blocking | Checks that the platform is compatible with the network settings. Some platforms are only permitted when installing single-node OpenShift or when using user-managed networking. |
10.5.1. Getting cluster validations by using the REST API
If you use the web console, many of these validations will not show up by name. To obtain a list of validations consistent with the labels, use the following procedure.
Get all cluster validations:
Get non-passing cluster validations:
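A hedged sketch of both queries, assuming the cluster's validations_info is returned as a JSON-encoded string:

```bash
# All validations for the cluster.
curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq '.validations_info | fromjson'

# Only the validations that are not passing.
curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq -r '.validations_info | fromjson | [.[]] | flatten | .[]
           | select(.status != "success") | .id + ": " + .message'
```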
Parameter | Validation type | Description |
---|---|---|
| non-blocking | Checks that the machine network definition exists for the cluster. |
| non-blocking | Checks that the cluster network definition exists for the cluster. |
| non-blocking | Checks that the service network definition exists for the cluster. |
| blocking | Checks that the defined networks do not overlap. |
| blocking | Checks that the defined networks share the same address families (valid address families are IPv4 and IPv6). |
| blocking | Checks the cluster network prefix to ensure that it is valid and allows enough address space for all hosts. |
| blocking | For a non user managed networking cluster. Checks that or are members of the machine CIDR if they exist. |
| non-blocking | For a non user managed networking cluster. Checks that exist. |
| blocking | For a non user managed networking cluster. Checks if the belong to the machine CIDR and are not in use. |
| blocking | For a non user managed networking cluster. Checks that exist. |
| non-blocking | For a non user managed networking cluster. Checks if the belong to the machine CIDR and are not in use. |
| blocking | Checks that all hosts in the cluster are in the "ready to install" status. |
| blocking | This validation only applies to multi-node clusters. |
| non-blocking | Checks that the base DNS domain exists for the cluster. |
| non-blocking | Checks that the pull secret exists. Does not check that the pull secret is valid or authorized. |
| blocking | Checks that each of the host clocks is no more than 4 minutes out of sync with the others. |
| blocking | Validates that the cluster meets the requirements of the Local Storage Operator. |
| blocking | Validates that the cluster meets the requirements of the OpenShift Data Foundation Operator. |
| blocking | Validates that the cluster meets the requirements of Container Native Virtualization. |
| blocking | Validates that the cluster meets the requirements of the Logical Volume Manager Operator. |
| blocking | Checks the validity of the network type if it exists. |
This section describes the basics of network configuration using the Assisted Installer.
There are various network types and addresses used by OpenShift and listed in the table below.
Type | DNS | Description |
---|---|---|
clusterNetwork | | The IP address pools from which Pod IP addresses are allocated. |
serviceNetwork | | The IP address pool for services. |
machineNetwork | | The IP address blocks for machines forming the cluster. |
apiVIP | api.<cluster_name>.<base_domain> | The VIP to use for API communication. This setting must either be provided or preconfigured in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address. |
apiVIPs | api.<cluster_name>.<base_domain> | The VIPs to use for API communication. This setting must either be provided or preconfigured in the DNS so that the default name resolves correctly. If using dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting. |
ingressVIP | *.apps.<cluster_name>.<base_domain> | The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address. |
ingressVIPs | *.apps.<cluster_name>.<base_domain> | The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting. |
OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept multiple IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings will replace apiVIP and IngressVIP , but you must set both the new and old settings when modifying the configuration using the API.
Depending on the desired network stack, you can choose different network controllers. Currently, the Assisted Service can deploy OpenShift Container Platform clusters using one of the following configurations:
Supported network controllers depend on the selected stack and are summarized in the table below. For a detailed Container Network Interface (CNI) network provider feature comparison, refer to the OCP Networking documentation .
Stack | SDN | OVN |
---|---|---|
IPv4 | Yes | Yes |
IPv6 | No | Yes |
Dual-stack | No | Yes |
OVN is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. SDN is supported up to OpenShift Container Platform 4.14, but not for OpenShift Container Platform 4.15 and later releases.
11.1.1.1. SDN
Please see the OVN-Kubernetes limitations section in the OCP documentation .
The cluster network is a network from which every Pod deployed in the cluster gets its IP address. Given that the workload may live across many nodes forming the cluster, it’s important for the network provider to be able to easily find an individual node based on the Pod’s IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix .
The host prefix specifies a length of the subnet assigned to each individual node in the cluster. An example of how a cluster may assign addresses for the multi-node cluster:
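A hedged sketch of setting these values through the API, assuming the cluster_networks field of the cluster definition; the CIDR and host prefix are illustrative. With a cluster network of 10.128.0.0/14 and a host prefix of 23, each node receives a /23 subnet (510 usable Pod addresses):

```bash
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "cluster_networks": [
          { "cidr": "10.128.0.0/14", "host_prefix": 23 }
        ]
      }' | jq '.cluster_networks'
```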
Creating a 3-node cluster using the snippet above may create the following network topology: each node receives its own /23 subnet from the cluster network, for example 10.128.0.0/23, 10.128.2.0/23, and 10.128.4.0/23.
Explaining OVN-K8s internals is out of scope for this document, but the pattern described above provides a way to route Pod-to-Pod traffic between different nodes without keeping a big list of mapping between Pods and their corresponding nodes.
The machine network is a network used by all the hosts forming the cluster to communicate with each other. This is also the subnet that must include the API and Ingress VIPs.
Depending on whether you are deploying a Single Node OpenShift or a multi-node cluster, different values are mandatory. The table below explains this in more detail.
Parameter | SNO | Multi-Node Cluster with DHCP mode | Multi-Node Cluster without DHCP mode |
---|---|---|---|
| Required | Required | Required |
| Required | Required | Required |
| Auto-assign possible (*) | Auto-assign possible (*) | Auto-assign possible (*) |
| Forbidden | Forbidden | Required |
| Forbidden | Forbidden | Required in 4.12 and later releases |
| Forbidden | Forbidden | Required |
| Forbidden | Forbidden | Required in 4.12 and later releases |
(*) Auto-assignment of the machine network CIDR happens if there is only a single host network. Otherwise, you need to specify it explicitly.
The workflow for deploying a cluster without Internet access has some prerequisites which are out of scope of this document. You may consult the Zero Touch Provisioning the hard way Git repository for some insights.
The VIP DHCP allocation is a feature allowing users to skip the requirement of manually providing virtual IPs for API and Ingress by leveraging the ability of a service to automatically assign those IP addresses from the DHCP server.
If you enable the feature, then instead of using api_vips and ingress_vips from the cluster configuration, the service sends a lease allocation request and, based on the reply, uses the resulting VIPs. The service allocates the IP addresses from the machine network.
Please note this is not an OpenShift Container Platform feature and it has been implemented in the Assisted Service to make the configuration easier.
VIP DHCP allocation is currently limited to the OpenShift Container Platform SDN network type. SDN is not supported from OpenShift Container Platform version 4.15 and later. Therefore, support for VIP DHCP allocation is also ending from OpenShift Container Platform 4.15 and later.
11.2.2. Example payload to disable autoallocation
11.3. Additional resources
User managed networking is a feature in the Assisted Installer that allows customers with non-standard network topologies to deploy OpenShift Container Platform clusters. Examples include:
There are various network validations happening in the Assisted Installer before it allows the installation to start. When you enable User Managed Networking, the following validations change:
You may use static network configurations when generating or updating the discovery ISO.
The NMState file in YAML format specifies the desired network configuration for the host. It contains the logical names of the interfaces, which are replaced with the actual interface names at discovery time.
11.5.3. MAC interface mapping
The MAC interface map is an attribute that maps the logical interfaces defined in the NMState configuration to the actual interfaces present on the host.
The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should only contain an entry for parent interfaces.
11.5.4. Additional NMState configuration examples
The examples below are only meant to show a partial configuration. They are not meant to be used as-is, and you should always adjust to the environment where they will be used. If used incorrectly, they may leave your machines with no network connectivity.
11.5.4.2. Network bond
11.6. Applying a static network configuration with the API
You can apply a static network configuration using the Assisted Installer API.
Create a temporary file /tmp/request-body.txt with the API request:
Send the request to the Assisted Service API endpoint:
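A hedged sketch of both steps; the NMState file name and the MAC address/interface values are illustrative, and the static_network_config field shape should be verified against the infra-env-update-params model:

```bash
# Build the request body from an NMState YAML file and a MAC interface mapping.
jq --null-input --arg nmstate "$(cat server-a.yaml)" '{
  "static_network_config": [
    {
      "network_yaml": $nmstate,
      "mac_interface_map": [
        { "mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0" }
      ]
    }
  ]
}' > /tmp/request-body.txt

# Send the request to the Assisted Service API endpoint.
curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d @/tmp/request-body.txt | jq '.id'
```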
Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets.
11.8.3. Example payload for an OpenShift Container Platform cluster consisting of many nodes
11.8.4. Limitations
The api_vips IP address and ingress_vips IP address settings must be of the primary IP address family when using dual-stack networking, which must be IPv4 addresses. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values.
You can expand a cluster installed with the Assisted Installer by adding hosts using the user interface or the API.
You must check that your cluster can support multiple architectures before you add a node with a different architecture.
Check that your cluster uses the architecture payload by running the following command:
Verification
If you see the following output, your cluster supports multiple architectures:
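As a hedged sketch, the check can be run with oc against the installed cluster; the jq filter and the exact annotation value shown in the comment are assumptions based on how multi-architecture release payloads are typically labeled.

```bash
# Sketch: inspect the release payload metadata of the running cluster.
oc adm release info -o json | jq -r '.metadata.metadata'
# A multi-architecture payload typically reports:
#   { "release.openshift.io/architecture": "multi" }
```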
A cluster with an x86_64 control plane can support worker nodes that have two different CPU architectures. Mixed-architecture clusters combine the strengths of each architecture and support a variety of workloads.
For example, you can add arm64, IBM Power, or IBM zSystems worker nodes to an existing OpenShift Container Platform cluster with an x86_64 control plane.
The main steps of the installation are as follows:
Supported platforms
The table below lists the platforms that support a mixed-architecture cluster for each OpenShift Container Platform version. Use the appropriate platforms for the version you are installing.
OpenShift Container Platform version | Supported platforms | Day 1 control plane architecture | Day 2 node architecture |
---|---|---|---|
4.12.0 | |||
4.13.0 | |||
4.14.0 |
Technology Preview (TP) features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
When you reach the "Registering a new cluster" step of the installation, register the cluster as a multi-architecture cluster:
When you reach the "Registering a new infrastructure environment" step of the installation, set cpu_architecture to x86_64 :
When you reach the "Adding hosts" step of the installation, set host_role to master :
For more information, see Assigning Roles to Hosts in Additional Resources .
Repeat the "Registering a new infrastructure environment" step of the installation. This time, set cpu_architecture to one of the following: ppc64le (for IBM Power), s390x (for IBM Z), or arm64 . For example:
Repeat the "Adding hosts" step of the installation. This time, set host_role to worker :
For more details, see Assigning Roles to Hosts in Additional Resources .
View the arm64 , ppc64le or s390x worker nodes in the cluster by running the following command:
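One way to do this, sketched here as an assumption about the preferred output format, is to list nodes with the architecture label as an extra column:

```bash
# Sketch: list nodes together with their CPU architecture label.
oc get nodes -L kubernetes.io/arch
```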
You can add hosts to clusters that were created using the Assisted Installer .
Adding hosts to Assisted Installer clusters is only supported for clusters running OpenShift Container Platform version 4.11 and later.
As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. When prompted, approve the pending CSRs to complete the installation.
When the host is successfully installed, it is listed as a host in the cluster web console.
New hosts will be encrypted using the same method as the original cluster.
You can add hosts to clusters using the Assisted Installer REST API.
Set the $API_URL variable by running the following command:
Import the cluster by running the following commands:
Set the $CLUSTER_ID variable. Log in to the cluster and run the following command:
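A hedged sketch of this step, assuming the cluster ID is read from the ClusterVersion resource of the running cluster:

```bash
# Sketch: read the cluster ID from the running cluster.
export CLUSTER_ID=$(oc get clusterversion version -o jsonpath='{.spec.clusterID}')
echo "$CLUSTER_ID"
```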
Set the $CLUSTER_REQUEST variable that is used to import the cluster:
Import the cluster and set the $CLUSTER_ID variable. Run the following command:
Generate the InfraEnv resource for the cluster and set the $INFRA_ENV_ID variable by running the following commands:
Set the $INFRA_ENV_REQUEST variable:
Post the $INFRA_ENV_REQUEST to the /v2/infra-envs API and set the $INFRA_ENV_ID variable:
Get the URL of the discovery ISO for the cluster host by running the following command:
Download the ISO:
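A minimal sketch, assuming the URL from the previous step was stored in a hypothetical $ISO_URL variable:

```bash
# Sketch: download the discovery ISO locally.
curl -L "$ISO_URL" -o discovery.iso
```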
Get the list of hosts in the cluster that are not installed. Keep running the following command until the new host shows up:
Set the $HOST_ID variable for the new host, for example:
Check that the host is ready to install by running the following command:
Ensure that you copy the entire command including the complete jq expression.
When the previous command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command:
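A hedged sketch of that call, using the endpoint named above; authentication handling via $API_TOKEN is an assumption:

```bash
# Sketch: trigger installation of the discovered host.
curl -s -X POST \
  "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts/$HOST_ID/actions/install" \
  -H "Authorization: Bearer $API_TOKEN"
```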
As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host.
You must approve the CSRs to complete the installation.
Keep running the following API call to monitor the cluster installation:
Optional: Run the following command to see all the events for the cluster:
Check that the new host was successfully added to the cluster with a status of Ready :
This procedure describes how to install a primary control plane node on a healthy OpenShift Container Platform cluster.
If the cluster is unhealthy, additional operations are required before you can manage it. See Additional Resources for more information.
Retrieve pending CertificateSigningRequests (CSRs):
Approve pending CSRs:
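A sketch of the usual pattern for these two steps follows; the go-template filter for pending (statusless) CSRs is a common convention rather than the only option.

```bash
# Sketch: list pending CSRs, then approve them.
oc get csr | grep Pending
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve
```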
Confirm the primary node is in Ready status:
The etcd-operator requires a Machine Custom Resource (CR) referencing the new node when the cluster runs with a functional Machine API.
Link the Machine CR with BareMetalHost and Node :
Create the BareMetalHost CR with a unique .metadata.name value:
Apply the BareMetalHost CR:
Create the Machine CR using the unique .machine.name value:
Apply the Machine CR:
Link BareMetalHost , Machine , and Node using the link-machine-and-node.sh script:
Confirm etcd members:
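As a sketch, the member list can be read from one of the etcd pods; the pod name below is a placeholder to be replaced with an actual pod from the openshift-etcd namespace.

```bash
# Sketch: list etcd members from a running etcd pod.
oc get pods -n openshift-etcd -l app=etcd
oc rsh -n openshift-etcd etcd-<control_plane_node_name> etcdctl member list -w table
```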
Confirm the etcd-operator configuration applies to all nodes:
Confirm etcd-operator health:
Confirm node health:
Confirm the ClusterOperators health:
Confirm the ClusterVersion :
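A short sketch of the basic health checks behind the three preceding confirmation steps:

```bash
# Sketch: node, cluster operator, and cluster version health checks.
oc get nodes               # all nodes should report Ready
oc get clusteroperators    # AVAILABLE=True, DEGRADED=False
oc get clusterversion      # shows the installed version and update status
```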
Remove the old control plane node:
Delete the BareMetalHost CR:
Confirm the Machine is unhealthy:
Delete the Machine CR:
Confirm removal of the Node CR:
Check etcd-operator logs to confirm status of the etcd cluster:
Remove the physical machine to allow etcd-operator to reconcile the cluster members:
This procedure describes how to install a primary control plane node on an unhealthy OpenShift Container Platform cluster.
Confirm initial state of the cluster:
Confirm the etcd-operator detects the cluster as unhealthy:
Confirm the etcdctl members:
Confirm that etcdctl reports an unhealthy member of the cluster:
Remove the unhealthy control plane by deleting the Machine Custom Resource:
The Machine and Node Custom Resources (CRs) will not be deleted if the unhealthy cluster cannot run successfully.
Confirm that etcd-operator has not removed the unhealthy machine:
Remove the unhealthy etcdctl member manually:
Remove the unhealthy cluster by deleting the etcdctl member Custom Resource:
Confirm members of etcdctl by running the following command:
Review and approve Certificate Signing Requests
Review the Certificate Signing Requests (CSRs):
Approve all pending CSRs:
Confirm ready status of the control plane node:
Validate the Machine , Node and BareMetalHost Custom Resources.
The etcd-operator requires Machine CRs to be present if the cluster is running with the functional Machine API. Machine CRs are displayed during the Running phase when present.
Create Machine Custom Resource linked with BareMetalHost and Node .
Make sure there is a Machine CR referencing the newly added node.
Boot-it-yourself will not create BareMetalHost and Machine CRs, so you must create them. Failure to create the BareMetalHost and Machine CRs will generate errors when running etcd-operator .
Add BareMetalHost Custom Resource:
Add Machine Custom Resource:
Link BareMetalHost , Machine , and Node by running the link-machine-and-node.sh script:
Confirm the etcd operator has configured all nodes:
Confirm health of etcdctl :
Confirm the health of the nodes:
Confirm the health of the ClusterOperators :
If you install OpenShift Container Platform on Nutanix, the Assisted Installer can integrate the OpenShift Container Platform cluster with the Nutanix platform, which exposes the Machine API to Nutanix and enables autoscaling and dynamically provisioning storage containers with the Nutanix Container Storage Interface (CSI).
To deploy an OpenShift Container Platform cluster and maintain its daily operation, you need access to a Nutanix account with the necessary environment requirements. For details, see Environment requirements .
To add hosts on Nutanix with the user interface (UI), generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.
After this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.
Optional: Add an SSH public key so that you can connect to the Nutanix VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation.
Select the desired provisioning type.
Minimal image file: Provision with virtual media downloads a smaller image that will fetch the data needed to boot.
In Networking , select Cluster-managed networking . Nutanix does not support User-managed networking .
In the Nutanix Prism UI, create the control plane (master) VMs through Prism Central .
In the Nutanix Prism UI, create the worker VMs through Prism Central .
To add hosts on Nutanix with the API, generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.
Once this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.
Create a Nutanix cluster configuration file to hold the environment variables:
If you have to start a new terminal session, you can reload the environment variables easily. For example:
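A minimal sketch; the file name is an assumption, so use whatever path you chose when creating the configuration file:

```bash
# Sketch: reload the Nutanix environment variables in a new terminal session.
source ~/nutanix-cluster-env.sh
```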
Assign the Nutanix cluster’s name to the NTX_CLUSTER_NAME environment variable in the configuration file:
Replace <cluster_name> with the name of the Nutanix cluster.
Assign the Nutanix cluster’s subnet name to the NTX_SUBNET_NAME environment variable in the configuration file:
Replace <subnet_name> with the name of the Nutanix cluster’s subnet.
Create the Nutanix image configuration file:
Replace <image_url> with the image URL downloaded from the previous step.
Create the Nutanix image:
Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440.
Assign the returned UUID to the NTX_IMAGE_UUID environment variable in the configuration file:
Get the Nutanix cluster UUID:
Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <nutanix_cluster_name> with the name of the Nutanix cluster.
Assign the returned Nutanix cluster UUID to the NTX_CLUSTER_UUID environment variable in the configuration file:
Replace <uuid> with the returned UUID of the Nutanix cluster.
Get the Nutanix cluster’s subnet UUID:
Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <subnet_name> with the name of the cluster’s subnet.
Assign the returned Nutanix subnet UUID to the NTX_SUBNET_UUID environment variable in the configuration file:
Replace <uuid> with the returned UUID of the cluster subnet.
Ensure the Nutanix environment variables are set:
Create a VM configuration file for each Nutanix host. Create three control plane (master) VMs and at least two worker VMs. For example:
Replace <host_name> with the name of the host.
Boot each Nutanix virtual machine:
Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <vm_config_file_name> with the name of the VM configuration file.
Assign the returned VM UUID to a unique environment variable in the configuration file:
Replace <uuid> with the returned UUID of the VM.
The environment variable must have a unique name for each VM.
Wait until the Assisted Installer has discovered each VM and they have passed validation.
Modify the cluster definition to enable integration with Nutanix:
Follow the steps below to complete and validate the OpenShift Container Platform integration with the Nutanix cloud provider.
After installing OpenShift Container Platform on the Nutanix platform using the Assisted Installer, you must update the following Nutanix configuration settings manually:
In the OpenShift Container Platform command line interface, update the Nutanix cluster configuration settings:
For additional details, see Creating a machine set on Nutanix .
Create the Nutanix secret:
When installing OpenShift Container Platform version 4.13 or later, update the Nutanix cloud provider configuration:
Get the Nutanix cloud provider configuration YAML file:
Create a backup of the configuration file:
Open the configuration YAML file:
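A hedged sketch of the fetch, backup, and open steps; the ConfigMap name and namespace follow the standard cloud-provider-config location but should be treated as assumptions.

```bash
# Sketch: fetch, back up, and open the Nutanix cloud provider configuration.
oc get cm cloud-provider-config -n openshift-config -o yaml > cloud-provider-config.yaml
cp cloud-provider-config.yaml cloud-provider-config.yaml.bak
vi cloud-provider-config.yaml
```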
Edit the configuration YAML file as follows:
Apply the configuration updates:
Create an Operator group for the Nutanix CSI Operator.
For a description of operator groups and related concepts, see Common Operator Framework Terms in Additional Resources .
Open the Nutanix CSI Operator Group YAML file:
Edit the YAML file as follows:
Create the Operator Group:
The Nutanix Container Storage Interface (CSI) Operator for Kubernetes deploys and manages the Nutanix CSI Driver.
For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the Operator section of the Nutanix CSI Operator document in Additional Resources .
Get the parameter values for the Nutanix CSI Operator YAML file:
Check that the Nutanix CSI Operator exists:
Assign the default channel for the Operator to a BASH variable:
Assign the starting cluster service version (CSV) for the Operator to a BASH variable:
Assign the catalog source for the subscription to a BASH variable:
Assign the Nutanix CSI Operator source namespace to a BASH variable:
Create the Nutanix CSI Operator YAML file using the BASH variables:
Create the CSI Nutanix Operator:
Run the following command until the Operator subscription state changes to AtLatestKnown, which indicates that the Operator subscription has been created. The state change may take some time.
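A minimal sketch of the polling command; the subscription name and namespace are placeholders taken from your subscription YAML.

```bash
# Sketch: check the Operator subscription state until it reports AtLatestKnown.
oc get subscription <subscription_name> -n <namespace> -o jsonpath='{.status.state}{"\n"}'
```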
The Nutanix Container Storage Interface (CSI) Driver for Kubernetes provides scalable and persistent storage for stateful applications.
For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the CSI Driver using the Operator section of the Nutanix CSI Operator document in Additional Resources .
Create a NutanixCsiStorage resource to deploy the driver:
Create a Nutanix secret YAML file for the CSI storage driver:
Run the following steps to validate the configuration.
Verify that you can create a storage class:
Verify that you can create the Nutanix persistent volume claim (PVC):
Create the persistent volume claim (PVC):
Validate that the persistent volume claim (PVC) status is Bound:
The Assisted Installer integrates the OpenShift Container Platform cluster with the vSphere platform, which exposes the Machine API to vSphere and enables autoscaling.
You can add hosts to the Assisted Installer cluster using the online vSphere client or the govc vSphere CLI tool. The following procedure demonstrates adding hosts with the govc CLI tool. To use the online vSphere Client, refer to the documentation for vSphere.
To add hosts on vSphere with the vSphere govc CLI, generate the discovery image ISO from the Assisted Installer. The minimal discovery image ISO is the default setting. This image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.
After this is complete, you must create an image for the vSphere platform and create the vSphere virtual machines.
Add an SSH public key so that you can connect to the vSphere VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation.
Select the desired discovery image ISO.
In Networking , select Cluster-managed networking or User-managed networking :
Download the discovery ISO:
Replace <discovery_url> with the Discovery ISO URL from the preceding step.
On the command line, power down and destroy any preexisting virtual machines:
Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder.
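A hedged sketch of the cleanup step, assuming the govc connection variables (GOVC_URL, GOVC_USERNAME, GOVC_PASSWORD) are already exported and that the datacenter and folder names are substituted:

```bash
# Sketch: power off and destroy any leftover VMs in the inventory folder.
for vm in $(govc ls "/<datacenter>/vm/<folder_name>"); do
  govc vm.power -off=true "$vm"
  govc vm.destroy "$vm"
done
```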
Remove preexisting ISO images from the data store, if there are any:
Replace <iso_datastore> with the name of the data store. Replace image with the name of the ISO image.
Upload the Assisted Installer discovery ISO:
Replace <iso_datastore> with the name of the data store.
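As a sketch, assuming the ISO downloaded earlier is named discovery.iso in the current directory:

```bash
# Sketch: upload the discovery ISO to the datastore.
govc datastore.upload -ds <iso_datastore> ./discovery.iso discovery.iso
```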
All nodes in the cluster must boot from the discovery image.
Boot three control plane (master) nodes:
See vm.create for details.
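The following sketch boots a single control plane node from the uploaded ISO. The VM name, network, datastore, and folder values are placeholders, and the CPU, memory, and disk sizes reflect typical control plane minimums; check the resource requirements for the version you are installing.

```bash
# Sketch: create and power on one control plane node from the discovery ISO.
govc vm.create \
  -c 4 -m 16384 -disk 120GB \
  -net "<network_name>" -net.adapter vmxnet3 \
  -iso discovery.iso -iso-datastore "<iso_datastore>" \
  -folder "<folder_name>" \
  -on=true \
  control-plane-1
```

Repeat the command with unique names for the remaining control plane nodes; worker nodes follow the same pattern with smaller CPU and memory values.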
The foregoing example illustrates the minimum required resources for control plane nodes.
Boot at least two worker nodes:
The foregoing example illustrates the minimum required resources for worker nodes.
Ensure the VMs are running:
After 2 minutes, shut down the VMs:
Set the disk.enableUUID setting to TRUE :
You must set disk.enableUUID to TRUE on all of the nodes to enable autoscaling with vSphere.
Restart the VMs:
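A sketch of the power-cycle and reconfiguration loop; the VM names are placeholders for the names of your cluster VMs.

```bash
# Sketch: power off each VM, enable disk.enableUUID, and power it back on.
for vm in control-plane-1 control-plane-2 control-plane-3 worker-1 worker-2; do
  govc vm.power -off=true "$vm"
  govc vm.change -vm "$vm" -e disk.enableUUID=TRUE
  govc vm.power -on=true "$vm"
done
```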
After installing an OpenShift Container Platform cluster using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually:
Generate a base64-encoded username and password for vCenter:
Replace <vcenter_username> with your vCenter username.
Replace <vcenter_password> with your vCenter password.
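For example:

```bash
# Sketch: base64-encode the vCenter username and password for the credentials secret.
echo -n "<vcenter_username>" | base64 -w0
echo -n "<vcenter_password>" | base64 -w0
```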
Backup the vSphere credentials:
Edit the vSphere credentials:
Replace <vcenter_address> with the vCenter address. Replace <vcenter_username_encoded> with the base64-encoded version of your vSphere username. Replace <vcenter_password_encoded> with the base64-encoded version of your vSphere password.
Replace the vSphere credentials:
Redeploy the kube-controller-manager pods:
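A hedged sketch of these two steps. The edited file name vsphere-creds.yaml is an assumption from the previous editing step, and the forceRedeploymentReason patch reflects the usual vSphere post-installation flow; verify both against the product documentation.

```bash
# Sketch: replace the edited credentials secret, then force a kube-controller-manager rollout.
oc replace -f vsphere-creds.yaml
oc patch kubecontrollermanager cluster \
  --type=merge \
  -p "{\"spec\": {\"forceRedeploymentReason\": \"recovery-$(date --rfc-3339=ns)\"}}"
```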
Backup the vSphere cloud provider configuration:
Edit the cloud provider configuration:
Replace <vcenter_address> with the vCenter address. Replace <datacenter> with the name of the data center. Replace <datastore> with the name of the data store. Replace <folder> with the folder containing the cluster VMs.
Apply the cloud provider configuration:
Taint the nodes with the uninitialized taint:
Follow steps 9 through 12 if you are installing OpenShift Container Platform 4.13 or later.
Identify the nodes to taint:
Run the following command for each node:
Replace <node_name> with the name of the node.
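A sketch of the taint command, using the standard cloud provider initialization taint key:

```bash
# Sketch: list nodes, then taint one node so the cloud provider re-initializes it.
oc get nodes
oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
```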
Back up the infrastructures configuration:
Edit the infrastructures configuration:
Replace <vcenter_address> with your vCenter address. Replace <datacenter> with the name of your vCenter data center. Replace <datastore> with the name of your vCenter data store. Replace <folder> with the folder containing the cluster VMs. Replace <vcenter_cluster> with the vSphere vCenter cluster where OpenShift Container Platform is installed.
Apply the infrastructures configuration:
In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed.
This step is mandatory if you installed OpenShift Container Platform 4.13 or later.
In the Password field, enter your vSphere vCenter password.
The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable.
In the Default data store field, enter the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename .
Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes .
The connection configuration process updates operator statuses and control plane nodes. It takes approximately an hour to complete. During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected.
Follow the steps below to monitor the configuration process.
Check that the configuration process completed successfully:
A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again.
Check that you are able to bind PersistentVolumeClaims objects by performing the following steps:
Create a StorageClass object using the following YAML:
Create a PersistentVolumeClaims object using the following YAML:
For instructions, see Dynamic provisioning in the OpenShift Container Platform documentation. To troubleshoot a PersistentVolumeClaims object, navigate to Storage → PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform web console.
From OpenShift Container Platform 4.14 and later versions, you can use the Assisted Installer to install a cluster on Oracle Cloud Infrastructure by using infrastructure that you provide. Oracle Cloud Infrastructure provides services that can meet your needs for regulatory compliance, performance, and cost-effectiveness. You can access OCI Resource Manager configurations to provision and configure OCI resources.
For OpenShift Container Platform 4.14 and 4.15, the OCI integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
This section is a summary of the steps required in the Assisted Installer web console to support the integration with Oracle Cloud Infrastructure. It does not document the steps to be performed in Oracle Cloud Infrastructure, nor does it cover the integration between the two platforms. For a complete and comprehensive procedure, see Using the Assisted Installer to install a cluster on OCI .
Generate the discovery ISO image in Assisted Installer by completing the required steps. You must upload the image to the Oracle Cloud Infrastructure before you install OpenShift Container Platform on Oracle Cloud Infrastructure.
On the Cluster Details page, complete the following fields:
On the Host Discovery page, perform the following actions:
After you provision Oracle Cloud Infrastructure (OCI) resources and upload OpenShift Container Platform custom manifest configuration files to OCI, you must complete the remaining cluster installation steps in the Assisted Installer before you can create an instance on OCI.
Click Add another manifest . Repeat the same steps for the following manifests provided by Oracle:
There are cases where the Assisted Installer cannot begin the installation or the cluster fails to install properly. In these events, it is helpful to understand the likely failure modes as well as how to troubleshoot the failure.
The Assisted Installer uses an ISO image to run an agent that registers the host to the cluster and performs hardware and network validations before attempting to install OpenShift. You can follow these procedures to troubleshoot problems related to the host discovery.
Once you start the host with the discovery ISO image, the Assisted Installer discovers the host and presents it in the Assisted Service web console. See Configuring the discovery image for additional details.
Verify that you can access your host machine using SSH, a console such as the BMC, or a virtual machine console:
You can specify the private key file with the -i parameter if it is not stored in the default directory.
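For example, as a sketch with a placeholder host address and key path:

```bash
# Sketch: connect to the discovered host as the core user.
ssh -i ~/.ssh/id_rsa core@<host_ip>
```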
If you cannot SSH to the host, the host either failed during boot or failed to configure the network.
Upon login you should see this message:
Example login
Check the agent service logs:
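A hedged sketch of this check; the agent.service unit name is an assumption for the Assisted Installer discovery agent running on the live ISO.

```bash
# Sketch: inspect the discovery agent logs on the host.
sudo journalctl -u agent.service
```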
In the following example, the errors indicate there is a network issue:
Example agent service log
If there is an error pulling the agent image, check the proxy settings. Verify that the host is connected to the network. You can use nmcli to get additional information about your network configuration.
Check the agent logs to verify the agent can access the Assisted Service:
The errors in the following example indicate that the agent failed to access the Assisted Service.
Example agent log
Check the proxy settings you configured for the cluster. If configured, the proxy must allow access to the Assisted Service URL.
The minimal ISO image should be used when bandwidth over the virtual media connection is limited. It includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The resulting ISO image is about 100MB in size compared to 1GB for the full ISO image.
If your environment requires static network configuration to access the Assisted Installer service, any issues with that configuration might prevent the minimal ISO from booting properly. If the boot screen shows that the host has failed to download the root file system image, the network might not be configured correctly.
You can interrupt the kernel boot early in the bootstrap process, before the root file system image is downloaded. This allows you to access the root console and review the network configurations.
Example rootfs download failure
Add the .spec.kernelArguments stanza to the infraEnv object of the cluster you are deploying:
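A hedged sketch of that change; the InfraEnv name and namespace are placeholders, and both the kernelArguments field structure and the rd.break breakpoint argument are assumptions to be verified against the InfraEnv CRD and dracut documentation for your release.

```bash
# Sketch: append a kernel argument that interrupts boot before the rootfs download,
# allowing the network configuration to be inspected from the emergency shell.
oc patch infraenv <infraenv_name> -n <namespace> --type merge \
  -p '{"spec": {"kernelArguments": [{"operation": "append", "value": "rd.break"}]}}'
```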
For details on modifying an infrastructure environment, see Additional Resources .
Identify and change the incorrect network configurations. Here are some useful diagnostic commands:
View system logs by using journalctl , for example:
View network connection information by using nmcli , as follows:
Check the configuration files for incorrect network connections, for example:
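The following sketch collects example commands for the three items above; the NetworkManager keyfile path is an assumption based on the usual RHCOS layout.

```bash
# Sketch: diagnostic commands for the items listed above.
journalctl -b -u NetworkManager                              # system logs for the current boot
nmcli connection show                                        # overview of network connections
cat /etc/NetworkManager/system-connections/*.nmconnection    # static network configuration files
```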
Once the installation that runs as part of the Discovery Image completes, the Assisted Installer reboots the host. The host must boot from its installation disk to continue forming the cluster. If you have not correctly configured the host’s boot order, it will boot from another disk instead, interrupting the installation.
If the host boots the discovery image again, the Assisted Installer will immediately detect this event and set the host’s status to Installing Pending User Action . Alternatively, if the Assisted Installer does not detect that the host has booted the correct disk within the allotted time, it will also set this host status.
There are cases where the Assisted Installer declares an installation to be successful even though it encountered errors:
When you add a node to an existing cluster as part of day 2 operations, the node downloads the ignition configuration file from the day 1 cluster. If the download fails and the node is unable to connect to the cluster, the status of the host in the Host discovery step changes to Insufficient . Clicking this status displays the following error message:
There are a number of possible reasons for the connectivity failure. Here are some recommended actions.
Check the IP address and domain name of the cluster:
Check the agent logs of the host to verify that the agent can access the Assisted Service via SSH:
For more details, see Verify the agent can access the Assisted Service .
In this answer to this question , it was noted that:
If you're going to 'clear' the pointer in the dtor, a different idiom would be better - set the pointer to a known bad pointer value.
and also that the destructor should be:
~Foo() { delete bar; if (DEBUG) bar = (bar_type*)(long_ptr)(0xDEADBEEF); }
I have two question about these parts of the answer.
Firstly, how can you set the pointer to a known bad pointer value? How can you set a pointer to an address which you can assure won't be allocated?
Secondly, what does: if (DEBUG) bar = (bar_type*)(long_ptr)(0xDEADBEEF); even do? What's DEBUG ? I couldn't find a macro named so. Also, what's long_ptr ? What does it do?
0xDEADBEEF is a pointer value like any other. Strictly speaking, there is no "known bad pointer value".
However, as a very rough simplification we can treat pointer values as random. The chance of seeing 0xDEADBEEF as a 32-bit value is 2^-32, that is, about 2.33e-10. You are far more likely to be struck by lightning, or to find two four-leaf clovers in a row, than to see exactly that value in any given pointer. In other words, for all practical purposes we can assume that a 0xDEADBEEF value came from our assignment when we see it.
DEBUG is not a standard macro. The author just wanted to illustrate that doing the assignment can be enabled in debug builds and disabled in non-debug builds (by employing a dedicated flag).
Unless you have system-level access, pointer values near NULL are all bad. Like 1, 2, 3, etc.
I disagree with the premise of the linked question. If you think that testing should find memory leaks using magic numbers for pointer values, then there is a serious design failure in your application and testing strategies.
Write good code. Don’t write bad code to find bad code.
how can you set the pointer to a known bad pointer value? How can you set a pointer to an address which you can assure won't be allocated?
First, let me be clear that the example code you refer to is a dirty hack and is intended to provide some guidance to assist debugging. It is not intended as a production quality memory management tool; it isn't even intended to be "drop in" code - it's an example of a debugging technique.
Setting a pointer to a hardcoded value isn't guaranteed to be a "bad pointer" unless you know something about the target environment. 0xDEADBEEF is a value that is likely to be an invalid pointer on many environments just out of luck. The value was chosen in because I had seen it used as a marker for "invalid data" in other code and it is easily spotted when viewing memory dumps. I believe it is (or was, maybe not anymore - that answer was from 14 years ago!) commonly used to indicate memory areas that are invalid/unused. Similar to some of the values Microsoft used in their debug library versions of some memory management routines (see https://stackoverflow.com/a/370362 )
what does: if (DEBUG) bar = (bar_type*)(long_ptr)(0xDEADBEEF); even do? What's DEBUG ? I couldn't find a macro named so.
I might not have said explicitly in the answer you refer to, but the example code might more properly be called a pseudo-code example. if (DEBUG) is used to indicate a bit of code that is conditionally executed "if this is a debug build".
For example, DEBUG could be a macro (or variable) that is defined as non-zero during a debug build of the program, possibly on the compiler's command line (maybe something like -DDEBUG=1 ). A DEBUG macro is something that I have commonly seen used for code that is enabled in debug builds only.
Also, what's long_ptr ? What does it do?
To aid in the transition from 32-bit to 64-bit systems, MS added types that are the size of a pointer. LONG_PTR is an integer type that has the size of a pointer (32 or 64 bits as appropriate). I probably should have used LONG_PTR instead of long_ptr . I believe the cast is technically unnecessary, but I think it's still useful as a notation that makes clear that an integer is being 'converted' to a pointer - a coding idiom that uses dirty-looking casts to call out a dirty hack.