22.5 — std::unique_ptr

The compiler is given a lot of flexibility in terms of how it handles a call like the one sketched below. It could create a new T, then call function_that_can_throw_exception(), and only then create the std::unique_ptr that manages the dynamically allocated T. If function_that_can_throw_exception() throws an exception, the T that was allocated will not be deallocated, because the smart pointer that would do the deallocation hasn't been created yet. This leads to T being leaked.
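The call under discussion is not shown above; a minimal sketch of the pattern (T, some_function(), and caller() are illustrative placeholders, while function_that_can_throw_exception() is the name used in the text) might look like this:

#include <memory>

struct T {};

int function_that_can_throw_exception();          // may throw
void some_function(std::unique_ptr<T>, int);

void caller()
{
    // Risky before C++17: the compiler may evaluate "new T", then call
    // function_that_can_throw_exception(), and only then construct the
    // std::unique_ptr.  If the call throws in between, the new T leaks.
    some_function(std::unique_ptr<T>(new T), function_that_can_throw_exception());

    // Safer: std::make_unique (C++14) allocates and takes ownership in a
    // single step, so there is no window in which the T is unowned.
    some_function(std::make_unique<T>(), function_that_can_throw_exception());
}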

In the example sketched below, createResource() returns a std::unique_ptr by value. If this return value is not assigned to anything, the temporary will go out of scope and the Resource will be cleaned up. If it is assigned (as in main() below), then in C++14 or earlier move semantics will be employed to transfer the Resource from the return value to the object being assigned to (ptr in the example), and in C++17 or newer the return will be elided. This makes returning a resource by std::unique_ptr much safer than returning raw pointers!
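The example being referred to is not reproduced above; a sketch along those lines, with a minimal stand-in Resource class, could be:

#include <iostream>
#include <memory>

struct Resource
{
    Resource()  { std::cout << "Resource acquired\n"; }
    ~Resource() { std::cout << "Resource destroyed\n"; }
};

std::unique_ptr<Resource> createResource()
{
    return std::make_unique<Resource>();
}

int main()
{
    auto ptr{ createResource() };   // ownership transferred (or elided) into ptr
    return 0;
} // ptr goes out of scope here, and the Resource is destroyed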

C library:

  • <cassert> (assert.h)
  • <cctype> (ctype.h)
  • <cerrno> (errno.h)
  • C++11 <cfenv> (fenv.h)
  • <cfloat> (float.h)
  • C++11 <cinttypes> (inttypes.h)
  • <ciso646> (iso646.h)
  • <climits> (limits.h)
  • <clocale> (locale.h)
  • <cmath> (math.h)
  • <csetjmp> (setjmp.h)
  • <csignal> (signal.h)
  • <cstdarg> (stdarg.h)
  • C++11 <cstdbool> (stdbool.h)
  • <cstddef> (stddef.h)
  • C++11 <cstdint> (stdint.h)
  • <cstdio> (stdio.h)
  • <cstdlib> (stdlib.h)
  • <cstring> (string.h)
  • C++11 <ctgmath> (tgmath.h)
  • <ctime> (time.h)
  • C++11 <cuchar> (uchar.h)
  • <cwchar> (wchar.h)
  • <cwctype> (wctype.h)

Containers:

  • C++11 <array>
  • <deque>
  • C++11 <forward_list>
  • <list>
  • <map>
  • <queue>
  • <set>
  • <stack>
  • C++11 <unordered_map>
  • C++11 <unordered_set>
  • <vector>

Input/Output:

  • <fstream>
  • <iomanip>
  • <ios>
  • <iosfwd>
  • <iostream>
  • <istream>
  • <ostream>
  • <sstream>
  • <streambuf>

Multi-threading:

  • C++11 <atomic>
  • C++11 <condition_variable>
  • C++11 <future>
  • C++11 <mutex>
  • C++11 <thread>

Other:

  • <algorithm>
  • <bitset>
  • C++11 <chrono>
  • C++11 <codecvt>
  • <complex>
  • <exception>
  • <functional>
  • C++11 <initializer_list>
  • <iterator>
  • <limits>
  • <locale>
  • <memory>
  • <new>
  • <numeric>
  • C++11 <random>
  • C++11 <ratio>
  • C++11 <regex>
  • <stdexcept>
  • <string>
  • C++11 <system_error>
  • C++11 <tuple>
  • C++11 <type_traits>
  • C++11 <typeindex>
  • <typeinfo>
  • <utility>
  • <valarray>

classes

  • C++11 allocator_arg_t
  • C++11 allocator_traits
  • auto_ptr_ref
  • C++11 bad_weak_ptr
  • C++11 default_delete
  • C++11 enable_shared_from_this
  • C++11 owner_less
  • C++11 pointer_traits
  • raw_storage_iterator
  • C++11 shared_ptr
  • C++11 unique_ptr
  • C++11 uses_allocator
  • C++11 weak_ptr

enum classes

  • C++11 pointer_safety

functions

  • C++11 addressof
  • C++11 align
  • C++11 allocate_shared
  • C++11 const_pointer_cast
  • C++11 declare_no_pointers
  • C++11 declare_reachable
  • C++11 dynamic_pointer_cast
  • C++11 get_deleter
  • C++11 get_pointer_safety
  • get_temporary_buffer
  • C++11 make_shared
  • return_temporary_buffer
  • C++11 static_pointer_cast
  • C++11 undeclare_no_pointers
  • C++11 undeclare_reachable
  • uninitialized_copy
  • C++11 uninitialized_copy_n
  • uninitialized_fill
  • uninitialized_fill_n
  • C++11 allocator_arg

member functions

  • C++11 unique_ptr::unique_ptr
  • C++11 unique_ptr::~unique_ptr
  • C++11 unique_ptr::get
  • C++11 unique_ptr::get_deleter
  • C++11 unique_ptr::operator bool
  • C++11 unique_ptr::operator->
  • C++11 unique_ptr::operator[]
  • C++11 unique_ptr::operator*
  • C++11 unique_ptr::operator=
  • C++11 unique_ptr::release
  • C++11 unique_ptr::reset
  • C++11 unique_ptr::swap

non-member overloads

  • C++11 relational operators (unique_ptr)
  • C++11 swap (unique_ptr)

std::unique_ptr::unique_ptr

default (1)
from null pointer (2)
from pointer (3)
from pointer + lvalue deleter (4)
from pointer + rvalue deleter (5)
move (6)
move-cast (7)
move from auto_ptr (8)
copy (deleted!) (9)
// unique_ptr constructor example; the element type int is used for illustration
#include <iostream>
#include <memory>

int main () {
  std::default_delete<int> d;
  std::unique_ptr<int> u1;                                        // default (1)
  std::unique_ptr<int> u2 (nullptr);                              // from null pointer (2)
  std::unique_ptr<int> u3 (new int);                              // from pointer (3)
  std::unique_ptr<int> u4 (new int, d);                           // from pointer + lvalue deleter (4)
  std::unique_ptr<int> u5 (new int, std::default_delete<int>());  // from pointer + rvalue deleter (5)
  std::unique_ptr<int> u6 (std::move(u5));                        // move (6)
  std::unique_ptr<int> u7 (std::move(u6));                        // move-cast (7)
  std::unique_ptr<int> u8 (std::auto_ptr<int>(new int));          // move from auto_ptr (8)

  std::cout << "u1: " << (u1 ? "not null" : "null") << '\n';
  std::cout << "u2: " << (u2 ? "not null" : "null") << '\n';
  std::cout << "u3: " << (u3 ? "not null" : "null") << '\n';
  std::cout << "u4: " << (u4 ? "not null" : "null") << '\n';
  std::cout << "u5: " << (u5 ? "not null" : "null") << '\n';
  std::cout << "u6: " << (u6 ? "not null" : "null") << '\n';
  std::cout << "u7: " << (u7 ? "not null" : "null") << '\n';
  std::cout << "u8: " << (u8 ? "not null" : "null") << '\n';

  return 0;
}

std::unique_ptr

Defined in header <memory>.

Declarations

template< class T, class Deleter = std::default_delete<T> > class unique_ptr;      (1)  (since C++11)
template< class T, class Deleter > class unique_ptr<T[], Deleter>;                 (2)  (since C++11)

Description

std::unique_ptr is a smart pointer that owns and manages another object through a pointer and disposes of that object when the unique_ptr goes out of scope.

The object is disposed of, using the associated deleter, when either of the following happens:

  • the managing unique_ptr object is destroyed, or
  • the managing unique_ptr object is assigned another pointer via operator= or reset().

The object is disposed of, using a potentially user-supplied deleter, by calling get_deleter()(ptr). The default deleter uses the delete operator, which destroys the object and deallocates the memory.

A unique_ptr may alternatively own no object, in which case it is called empty .

There are two versions of std::unique_ptr :

  • Manages a single object (e.g. allocated with new );
  • Manages a dynamically-allocated array of objects (e.g. allocated with new[] ).

The class satisfies the requirements of MoveConstructible and MoveAssignable, but of neither CopyConstructible nor CopyAssignable.

Type requirements

Deleter must be FunctionObject or lvalue reference to a FunctionObject or lvalue reference to function, callable with an argument of type unique_ptr<T, Deleter>::pointer .

Only non-const unique_ptr can transfer the ownership of the managed object to another unique_ptr . If an object's lifetime is managed by a const std::unique_ptr , it is limited to the scope in which the pointer was created.

std::unique_ptr is commonly used to manage the lifetime of objects, including:

  • Providing exception safety to classes and functions that handle objects with dynamic lifetime, by guaranteeing deletion on both normal exit and exit through exception;
  • Passing ownership of uniquely-owned objects with dynamic lifetime into functions;
  • Acquiring ownership of uniquely-owned objects with dynamic lifetime from functions;
  • As the element type in move-aware containers, such as std::vector , which hold pointers to dynamically-allocated objects (e.g. if polymorphic behavior is desired).

std::unique_ptr may be constructed for an incomplete type T , such as to facilitate the use as a handle in the pImpl idiom. If the default deleter is used, T must be complete at the point in code where the deleter is invoked, which happens in the destructor, move assignment operator, and reset member function of std::unique_ptr . (Conversely, std::shared_ptr can't be constructed from a raw pointer to incomplete type, but can be destroyed where T is incomplete). Note that if T is a class template specialization, use of unique_ptr as an operand, e.g. !p requires T 's parameters to be complete due to ADL.

If T is a derived class of some base B , then std::unique_ptr<T> is implicitly convertible to std::unique_ptr<B> . The default deleter of the resulting std::unique_ptr<B> will use operator delete for B , leading to undefined behavior unless the destructor of B is virtual. Note that std::shared_ptr behaves differently: std::shared_ptr<B> will use the operator delete for the type T and the owned object will be deleted correctly even if the destructor of B is not virtual.

Unlike std::shared_ptr , std::unique_ptr may manage an object through any custom handle type that satisfies NullablePointer. This allows, for example, managing objects located in shared memory, by supplying a Deleter that defines typedef boost::offset_ptr pointer ; or another fancy pointer.
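As a concrete illustration of a user-supplied deleter (this sketch is not taken from the reference page; the FileCloser type and the file name are illustrative), a unique_ptr can manage a C FILE handle:

#include <cstdio>
#include <memory>

struct FileCloser
{
    void operator()(std::FILE* f) const noexcept
    {
        if (f)
            std::fclose(f);   // close the handle instead of calling delete
    }
};

int main()
{
    std::unique_ptr<std::FILE, FileCloser> file(std::fopen("data.txt", "r"));
    if (file)
        std::fputs("file opened\n", stdout);
    return 0;
}   // FileCloser::operator() runs here if the file was opened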

Feature-test macro __cpp_lib_constexpr_memory, value 202202L (C++23): constexpr std::unique_ptr.

Member types

  • pointer: std::remove_reference<Deleter>::type::pointer if that type exists, otherwise T*. Must satisfy NullablePointer.
  • element_type: T, the type of the object managed by this unique_ptr.
  • deleter_type: Deleter, the function object or lvalue reference to function or to function object, to be called from the destructor.

Member functions

  • (constructor): constructs a new unique_ptr
  • (destructor): destructs the managed object if such is present
  • operator=: assigns the unique_ptr

Modifiers

  • release: returns a pointer to the managed object and releases the ownership
  • reset: replaces the managed object
  • swap: swaps the managed objects

Observers

  • get: returns a pointer to the managed object
  • get_deleter: returns the deleter that is used for destruction of the managed object
  • operator bool: checks if there is an associated managed object

Single-object version, unique_ptr<T>

  • operator*, operator->: dereferences pointer to the managed object

Array version, unique_ptr<T[]>

  • operator[]: provides indexed access to the managed array

Non-member functions

  • make_unique, make_unique_for_overwrite (C++14)(C++20): creates a unique pointer that manages a new object
  • operator==, operator!=, operator<, operator<=, operator>, operator>=, operator<=>: compares to another unique_ptr or with nullptr
  • operator<<(std::unique_ptr) (C++20): outputs the value of the managed pointer to an output stream
  • std::swap(std::unique_ptr): specializes the std::swap algorithm

Helper Classes

  • std::hash<std::unique_ptr>: hash support for std::unique_ptr

Unique_ptr in C++

std::unique_ptr is a smart pointer introduced in C++11. It automatically manages dynamically allocated resources on the heap. Smart pointers are wrappers around regular pointers that help you prevent common bugs, namely forgetting to delete a pointer (causing a memory leak) or accidentally deleting a pointer twice or in the wrong way. They can be used much like standard pointers while automating some of the manual steps that cause these bugs.

Prerequisites: Pointer in C++ , Smart Pointers in C++.

For a declaration like unique_ptr<A> ptr1(new A), the parts are:

  • unique_ptr<A>: It specifies the type of the std::unique_ptr. In this case, an object of type A.
  • new A : An object of type A is dynamically allocated on the heap using the new operator.
  • ptr1 : This is the name of the std::unique_ptr variable.

What happens when unique_ptr is used?

When we write unique_ptr<A> ptr1 (new A), memory is allocated on the heap for an instance of type A. ptr1 is initialized and points to the newly created A object. Here, ptr1 is the only owner of the newly created object and it manages this object’s lifetime. This means that when ptr1 is reset or goes out of scope, the memory is automatically deallocated and the A object is destroyed.

When to use unique_ptr?

When single or exclusive ownership of a resource is required, we should go for unique pointers. Only one unique pointer can point to one resource, so one unique pointer cannot be copied to another. It also facilitates automatic cleanup when dynamically allocated objects go out of scope and helps prevent memory leaks.

Note: We need to use the <memory> header file for using these smart pointers.

Examples of Unique_ptr

Let's create a structure A with a method named printA to display some text. Then, in main(), let's create a unique pointer that points to an instance of structure A. At this point we have an instance of structure A, and p1 holds the pointer to it, as shown below.
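A sketch of that code (the body of A and the printed text are illustrative):

#include <iostream>
#include <memory>

struct A
{
    void printA() { std::cout << "A::printA()\n"; }
};

int main()
{
    std::unique_ptr<A> p1(new A);   // p1 owns the A instance
    p1->printA();
    return 0;
}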

     

Now let’s create another pointer p2 and we will try to copy the pointer p1 using the assignment operator(=).

     

The above code will give a compile-time error, as we cannot assign p1 to p2 in the case of unique pointers; a unique_ptr cannot be copied. We have to use move semantics for this purpose, as shown below.

Managing object of type A using move semantics.
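A sketch of the move (the printed addresses are only for demonstration):

#include <iostream>
#include <memory>

struct A {};

int main()
{
    std::unique_ptr<A> p1(new A);
    std::cout << "p1 before move: " << p1.get() << '\n';

    std::unique_ptr<A> p2 = std::move(p1);   // ownership moves to p2

    std::cout << "p1 after move:  " << p1.get() << '\n';   // null after the move
    std::cout << "p2 after move:  " << p2.get() << '\n';   // former address held by p1
    return 0;
}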

       

Note that once the address in pointer p1 is transferred to pointer p2, p1 becomes NULL (0) and p2 now stores the address previously stored by p1, showing that ownership of the address has been transferred from p1 to p2 using move semantics.


unique_ptr Class


Stores a pointer to an owned object or array. The object/array is owned by no other unique_ptr . The object/array is destroyed when the unique_ptr is destroyed.

Parameters

  • Right: A unique_ptr.
  • Nptr: An rvalue of type std::nullptr_t.
  • Ptr: A pointer.
  • Deleter: A deleter function that is bound to a unique_ptr.

No exceptions are generated by unique_ptr .

The unique_ptr class supersedes auto_ptr , and can be used as an element of C++ Standard Library containers.

Use the make_unique helper function to efficiently create new instances of unique_ptr .

unique_ptr uniquely manages a resource. Each unique_ptr object stores a pointer to the object that it owns or stores a null pointer. A resource can be owned by no more than one unique_ptr object; when a unique_ptr object that owns a particular resource is destroyed, the resource is freed. A unique_ptr object may be moved, but not copied; for more information, see Rvalue Reference Declarator: && .

The resource is freed by calling a stored deleter object of type Del that knows how resources are allocated for a particular unique_ptr . The default deleter default_delete<T> assumes that the resource pointed to by ptr is allocated with new , and that it can be freed by calling delete ptr . (A partial specialization unique_ptr<T[]> manages array objects allocated with new[] , and has the default deleter default_delete<T[]> , specialized to call delete[] ptr .)
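A brief illustrative sketch of the array specialization (not taken from the original page):

#include <memory>

int main()
{
    std::unique_ptr<int[]> arr(new int[5]);   // managed with default_delete<int[]>
    arr[0] = 42;                              // operator[] is provided for arrays
    return 0;
}   // delete[] is called here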

The stored pointer to an owned resource, stored_ptr has type pointer . It's Del::pointer if defined, and T * if not. The stored deleter object stored_deleter occupies no space in the object if the deleter is stateless. Note that Del can be a reference type.

Constructors

  • unique_ptr: There are seven constructors for unique_ptr.

Typedefs

  • deleter_type: A synonym for the template parameter Del.
  • element_type: A synonym for the template parameter Type.
  • pointer: A synonym for Del::pointer if defined, otherwise Type*.

Member functions

  • get: Returns stored_ptr.
  • get_deleter: Returns a reference to stored_deleter.
  • release: Stores a null pointer value in stored_ptr and returns its previous contents.
  • reset: Releases the currently owned resource and accepts a new resource.
  • swap: Exchanges resource and deleter with the provided unique_ptr.

Operators

  • operator bool: The operator returns a value of a type that is convertible to bool. The result of the conversion to bool is true when get() != pointer(), otherwise false.
  • operator->: The member function returns get().
  • operator*: The member function returns *get().
  • operator=: Assigns the value of a moved-from unique_ptr to the current unique_ptr.

deleter_type

The type is a synonym for the template parameter Del .

element_type

The type is a synonym for the template parameter Type .


get

Returns stored_ptr.

The member function returns stored_ptr .

get_deleter

Returns a reference to stored_deleter .

The member function returns a reference to stored_deleter .

operator=

Assigns the address of the provided unique_ptr to the current one.

right: A unique_ptr reference used to assign its value to the current unique_ptr .

The member functions call reset(right.release()) and move right.stored_deleter to stored_deleter , then return *this .

pointer

A synonym for Del::pointer if defined, otherwise Type* .

The type is a synonym for Del::pointer if defined, otherwise Type * .

release

Releases ownership of the returned stored pointer to the caller and sets the stored pointer value to nullptr .

Use release to take over ownership of the raw pointer stored by the unique_ptr . The caller is responsible for deletion of the returned pointer. The unique_ptr is set to the empty default-constructed state. You can assign another pointer of compatible type to the unique_ptr after the call to release .

This example shows how the caller of release is responsible for the object returned:
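The original example is not reproduced here; a sketch of the idea (the Sample type is illustrative) could be:

#include <memory>

struct Sample { int value = 3; };

int main()
{
    std::unique_ptr<Sample> up(new Sample);
    Sample* raw = up.release();   // up is now empty (nullptr)
    // ... use raw ...
    delete raw;                   // the caller is responsible for deletion
    return 0;
}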

reset

Takes ownership of the pointer parameter, and then deletes the original stored pointer. If the new pointer is the same as the original stored pointer, reset deletes the pointer and sets the stored pointer to nullptr .

ptr: A pointer to the resource to take ownership of.

Use reset to change the stored pointer owned by the unique_ptr to ptr and then delete the original stored pointer. If the unique_ptr wasn't empty, reset invokes the deleter function returned by get_deleter on the original stored pointer.

Because reset first stores the new pointer ptr , and then deletes the original stored pointer, it's possible for reset to immediately delete ptr if it's the same as the original stored pointer.
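A short illustrative sketch of reset (the Sample type is again a placeholder):

#include <memory>

struct Sample {};

int main()
{
    std::unique_ptr<Sample> up(new Sample);
    up.reset(new Sample);   // deletes the first Sample, takes ownership of the second
    up.reset();             // deletes the second Sample, up becomes empty
    return 0;
}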

swap

Exchanges pointers between two unique_ptr objects.

right: A unique_ptr used to swap pointers.

The member function swaps stored_ptr with right.stored_ptr and stored_deleter with right.stored_deleter .

unique_ptr (constructors)

There are seven constructors for unique_ptr .

ptr: A pointer to the resource to be assigned to a unique_ptr .

deleter: A deleter to be assigned to a unique_ptr .

right: An rvalue reference to a unique_ptr from which unique_ptr fields are move assigned to the newly constructed unique_ptr .

The first two constructors construct an object that manages no resource. The third constructor stores ptr in stored_ptr . The fourth constructor stores ptr in stored_ptr and deleter in stored_deleter .

The fifth constructor stores ptr in stored_ptr and moves deleter into stored_deleter . The sixth and seventh constructors store right.release() in stored_ptr and moves right.get_deleter() into stored_deleter .

~unique_ptr

The destructor for unique_ptr destroys a unique_ptr object.

The destructor calls get_deleter()(stored_ptr) .



What is unique_ptr in C++?

In this article, we will look into a Smart Pointer unique_ptr in C++. It provides an easier way of memory management. Smart pointers are an integral part of modern C++ as they help prevent memory leaks and ensure that memory is managed automatically.


What is a unique_ptr?

A unique_ptr is a type of smart pointer provided by the C++ Standard Library that is designed to manage the memory of a dynamically allocated object. It holds exclusive ownership of the memory it points to, meaning there can be no other unique_ptr pointing to the same memory at the same time. This exclusive ownership is the first way in which unique_ptr simplifies memory management.

How does unique_ptr help?

The benefits of using unique_ptr are as follows,

  • Exclusive Ownership : At any given time, there can only be one unique_ptr object managing a specific block of memory. This eliminates the complexities of handling multiple pointers for the same memory resource, reducing the chances of programming errors.
  • Automatic Memory Deallocation : When a unique_ptr goes out of scope, the memory it manages is automatically deallocated. This is extremely beneficial as it removes the burden from programmers to manually delete memory, which is error-prone and can lead to memory leaks if forgotten.

Avoiding the delete Operator

Traditionally, dynamic memory in C++ is allocated using the new operator and deallocated using delete . However, with unique_ptr , the need to directly use these operators can be avoided, thus allowing the C++ runtime system to manage memory for us. This can be achieved by using the std::make_unique function introduced in C++14, which creates a unique_ptr that manages a new object.
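A sketch of that approach (the variable names are illustrative):

#include <memory>
#include <string>

int main()
{
    auto name  = std::make_unique<std::string>("unique_ptr demo");
    auto count = std::make_unique<int>(42);
    return 0;
}   // both objects are deleted automatically here, with no explicit delete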

Traditional Memory Management

Traditional approaches of memory management in C++ are as follows,

  • Stack Variables:

Initially, managing memory via stack variables is straightforward. You create a variable, use it, and once it goes out of scope, the system cleans it up. There’s no need for active management, which is ideal and error-free.

  • Raw Pointers and Heap Memory:

Alternatively, raw pointers have been used for dynamic memory allocation on the heap. With raw pointers, developers need to be meticulous about deallocating memory with the delete operator once an object is no longer needed. Failure to do so results in memory leaks, a common issue in C++ development.


Smart pointers, like unique_ptr , are designed to mimic the ease of stack variables while providing the flexibility of heap allocation. When you employ a smart pointer, it assumes the role of a caretaker for the allocated memory, ensuring that once its job is done, it cleans up after itself, leaving no memory leaks in its wake.

How to use unique_ptr in C++ ?

To utilize unique_ptr , you must include the <memory> header file at the beginning of your C++ program:

unique_ptr can also be created and assigned heap memory directly upon its declaration:
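The code itself is not shown above; a sketch matching the description (ptrObj is the name used later in the text) could be:

#include <memory>

int main()
{
    std::unique_ptr<int> ptrObj(new int(5));   // ptrObj owns the heap-allocated int
    return 0;
}   // the int is deleted automatically when ptrObj goes out of scope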

Here we have dynamically allocated memory on the heap to store an integer value and passed the pointer pointing to this memory to a unique_ptr . Now, the unique_ptr object is responsible for this memory, and you don’t need to manually delete this allocated memory.

When the unique_ptr object ptrObj goes out of scope, it will automatically delete the memory linked with the unique_ptr object.

Initializing unique_ptr with nullptr

If you’re not ready to assign memory to unique_ptr immediately, you can initialize it with nullptr and assign it later:
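A sketch of that pattern (assignment via std::make_unique here is one possible way to assign it later):

#include <memory>

int main()
{
    std::unique_ptr<int> ptrObj(nullptr);   // empty for now
    ptrObj = std::make_unique<int>(10);     // assign managed memory later
    return 0;
}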

Utilizing unique_ptr Like Raw Pointers

Using unique_ptr feels familiar as you can utilize it similarly to raw pointers. Here’s how you can manipulate the underlying object or primitive:
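A sketch of such usage (the managed types are illustrative):

#include <iostream>
#include <memory>
#include <string>

int main()
{
    auto num  = std::make_unique<int>(7);
    auto text = std::make_unique<std::string>("hello");

    *num += 1;                                            // dereference the managed int
    std::cout << *num << ' ' << text->size() << '\n';     // -> reaches members
    return 0;
}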

And if you need to obtain the raw pointer for some operations, especially when interacting with APIs that require raw pointers, unique_ptr provides a get() method:
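A sketch of get() in use (legacy_print is a hypothetical raw-pointer API):

#include <cstdio>
#include <memory>

void legacy_print(const int* p) { std::printf("%d\n", *p); }

int main()
{
    auto num = std::make_unique<int>(99);
    legacy_print(num.get());   // num still owns (and will delete) the int
    return 0;
}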

Remember, the use of get() should be limited and never for managing the memory that unique_ptr is responsible for.

RAII and unique_ptr in C++

RAII, which stands for Resource Acquisition Is Initialization, is a core concept in C++ that ensures resources are properly released when they are no longer needed. Smart pointers, like unique_ptr , are a perfect example of RAII in action. They manage the lifecycle of dynamically allocated memory, ensuring automatic deallocation when the smart pointer goes out of scope. This pattern helps prevent memory leaks and dangling pointers, common issues in manual memory management.

Let’s look at a practical example using unique_ptr :
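The original listing is not reproduced here; a sketch consistent with the walkthrough below (the Tweet body is illustrative, while tweetPtr and rawTweet are the names used in the text) might be:

#include <iostream>
#include <memory>
#include <string>
#include <utility>

class Tweet
{
public:
    explicit Tweet(std::string text) : text_(std::move(text)) {}
    void print() const { std::cout << text_ << '\n'; }
private:
    std::string text_;
};

int main()
{
    Tweet* rawTweet = new Tweet("RAII in action");   // resource acquisition...
    std::unique_ptr<Tweet> tweetPtr(rawTweet);       // ...is initialization
    rawTweet = nullptr;   // tweetPtr is now the sole manager of the object

    tweetPtr->print();
    return 0;
}   // tweetPtr's destructor frees the Tweet here (resource release)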

In the code above, tweetPtr is an instance of unique_ptr managing the lifecycle of a Tweet object. Here’s how unique_ptr upholds the principles of RAII:

  • Resource Acquisition : The Tweet object is dynamically allocated with new , and its pointer is immediately passed to tweetPtr . The acquisition of the resource and its initialization with a managing entity are simultaneous.
  • Resource Management : As soon as unique_ptr takes control, it becomes the sole manager of the Tweet object’s memory. The original raw pointer ( rawTweet ) is set to nullptr to prevent accidental deletion or access, reinforcing that tweetPtr now has exclusive management over the object.
  • Resource Release : When tweetPtr goes out of scope, which would be at the end of the main function in this case, its destructor is automatically invoked. This destructor frees the associated heap memory, destroying the Tweet object. This automatic deallocation is the cornerstone of RAII—resources are cleaned up without explicit instructions from the developer.

By adhering to RAII principles through unique_ptr , C++ developers can write more robust applications. Smart pointers automate memory management, which not only simplifies code but also dramatically reduces the risk of resource leaks and errors. With unique_ptr , you have a powerful tool that aligns with modern C++ best practices, ensuring that resources are managed safely and efficiently.

By using unique_ptr , you can write safer programs with automatic memory management, reducing the risk of memory leaks and pointer errors.



My implementation for std::unique_ptr

I just finished learning about move semantics and realized that a nice practical example for this concept is unique_ptr (it cannot be copied, only moved).

For learning purposes, and as a personal experiment, I proceed to try to create my implementation for a smart unique pointer:

For a small set of test cases, this is working like the real unique_ptr .

However, it just seems too simple.

I have two questions regarding this code:

  • Is it well-formed? i.e., does it follow common C++ standards and patterns (for example, should private members be declared before public ones)?
  • Am I missing something regarding functionality? Is there maybe a bug in my code that I'm not seeing?

  • Did you mean to omit the members that allow you to actually use a unique pointer? I mean, operator*() and operator->()? – Toby Speight, May 22, 2017

2 Answers

Is it well-formed?

It compiles, so yes.

i.e. does it follow common C++ standard and patterns (for example, should private members be declared before public ones?

Personally I think so.

When reading the code I want to know the members so I can verify that the constructors initialize them all, as a result I usually put them first. But other people prefer to put all private stuff at the bottom.

Am I missing something regarding functionality?

Yes. Quite a lot.

Is there maybe a bug in my code that I'm not seeing?

Yes. It potentially leaks on assignment.

Code Review

Constructing from object.

That's exceedingly dangerous:

Use member initializing list.

You should always attempt to use the member initializer list for initializing members. Any non-trivial member will have its constructor called before the constructor body runs, so it is inefficient to then re-initialize it in the body.

Member variable Names

Prefer not to use _ as the first character in an identifier name.

Even if you know all the rules of when to use them most people don't so they are best avoided. If you must have a prefix to identify members use m_ - but if you name your member variables well then there is no need for any prefix (in my opinion prefixes makes the code worse not better, because you are relying on unwritten rules. If you have good well-defined names (see self-documenting code) then members should be obvious).

The move operators should be marked as noexcept .

When used with standard containers this will enable certain optimizations. This is because if the move is noexcept then certain operations can be guaranteed to work and thus provide the strong exception guarantee.

Leak in assignment

Note: Your current assignment potentially leaks. If this currently has a pointer assigned then you overwrite it without freeing.

Checking for this pessimization

Yes you do need to make it work when there is self assignment. But in real code the self assignment happens so infrequently that this test becomes a pessimization on the normal case (same applies for copy operation). There have been studies on this (please somebody post a link; I have lost mine and would like to add it back to my notes).

The standard way of implementing move is via swap. Just like Copy is normally implemented by Copy and Swap.

Using the swap technique also delays the calling of the destructor on the pointer for the current object. Which means that it can potentially be re-used. But if it is going out of scope the unique_ptr destructor will correctly destroy it.

Good first try but still lots of issues.

Please read the article I wrote on unique_ptr and shared_ptr for lots more things you should implement.

  • Smart-Pointer - Unique Pointer
  • Smart-Pointer - Shared Pointer
  • Smart-Pointer - Constructors

Some things you missed:

  • Constructor with nullptr
  • Constructor from derived type
  • Casting to bool
  • Checking for empty
  • Guaranteeing delete on construction failure.
  • Implicit construction issues
  • Dereferencing

When you have read all three articles then the bare bones unique_ptr looks like this:
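The answer's full listing is not reproduced here; a bare-bones sketch along the lines it describes (move-only, noexcept moves implemented via swap, deleted copy operations, dereference and observer members) could look like this:

#include <utility>

template <typename T>
class unique_ptr
{
    T* data;                                   // members first, so constructors are easy to check
public:
    unique_ptr() : data(nullptr) {}
    explicit unique_ptr(T* p) : data(p) {}
    ~unique_ptr() { delete data; }             // deleting nullptr is harmless

    // Move-only: copying is explicitly disabled.
    unique_ptr(const unique_ptr&)            = delete;
    unique_ptr& operator=(const unique_ptr&) = delete;

    unique_ptr(unique_ptr&& other) noexcept : data(nullptr)
    {
        swap(other);                           // steal the pointer via swap
    }
    unique_ptr& operator=(unique_ptr&& other) noexcept
    {
        swap(other);                           // no self-assignment test, no leak:
        return *this;                          // the old pointer dies with 'other'
    }

    T* operator->() const { return data; }
    T& operator*()  const { return *data; }

    T* get() const { return data; }
    explicit operator bool() const { return data != nullptr; }

    T* release() { T* result = nullptr; std::swap(result, data); return result; }
    void swap(unique_ptr& other) noexcept { std::swap(data, other.data); }
};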

Test to make sure it compiles:
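Continuing the sketch above, a small compile test (the Frame type is illustrative) might be:

struct Frame { int id = 0; };

int main()
{
    unique_ptr<Frame> a(new Frame);
    unique_ptr<Frame> b(std::move(a));   // move construction
    a = std::move(b);                    // move assignment
    return a ? a->id : 0;
}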

  • What would be a safer way of constructing the unique_ptr from a pointer of type T? – dav, Jul 20, 2018
  • @DavidTran Please look at the standard version and its interface. – Loki Astari, Jul 22, 2018
  • @Mashpoe there is no need to check if a pointer is null before calling delete. – Loki Astari, Jan 14, 2019
  • @U62 Compiles fine for me. In both cases they are converted to the correct type before the swap. It will only fail to compile if the class U is not derived from the class T, which is exactly what it is supposed to do. – Loki Astari, Apr 5, 2019
  • @user4893106 "much of that constructor ridiculousness...": For an average class, zero. In most situations the default implementation works out of the box with no need to do anything. For complex classes like this one there is usually a standard implementation already available in std::. – Loki Astari, Sep 3, 2020

Yes, have good formatting

I have the same thought present in Martin York's answer:

Yes, you forgot to add T, and T[], and the following features:

  • type* release();
  • void reset(type* item);
  • void swap(unique_ptr& other);
  • type* get();
  • operator->
  • operator[]

Yes, you must restrict the types received strictly to pointers.

Copy assignment operator

This is redundant; this part is not necessary, as this operator will never be called, given that the move constructor exists and the copy constructor is disabled (= delete).

Template only typename T

You must accept both T and T[], i.e., array or not.

Constructing from object

Verify in move assignment operator.

Before assigning to _ptr, you need to check whether this->_ptr is already initialized; if it is, delete it before assigning.

The above code is incorrect: this is a pointer and uptr is a reference, so you need to use &other for the check to succeed.

Note: std::move in uptr._ptr is irrelevant.

Example code



std::unique_ptr<T,Deleter>::operator=

unique_ptr& operator=( unique_ptr&& r ) noexcept;                    (1) (constexpr since C++23)

template< class U, class E >
unique_ptr& operator=( unique_ptr<U, E>&& r ) noexcept;              (2) (constexpr since C++23)

unique_ptr& operator=( std::nullptr_t ) noexcept;                    (3) (constexpr since C++23)

unique_ptr& operator=( const unique_ptr& ) = delete;                 (4)
For overload (1), if Deleter is not a reference type, the behavior is undefined if either of the following is true:

  • Deleter is not MoveAssignable, or
  • assigning get_deleter() from an rvalue of type Deleter would throw an exception.

If Deleter is a reference type, the behavior is undefined if either of the following is true:

  • std::remove_reference<Deleter>::type is not CopyAssignable, or
  • assigning get_deleter() from an lvalue of type Deleter would throw an exception.

Overload (2), for the primary template, participates in overload resolution only if:

  • U is not an array type,
  • unique_ptr<U, E>::pointer is implicitly convertible to pointer , and
  • std::is_assignable< Deleter&, E&& >::value is true .

For the array specialization unique_ptr<T[]>, overload (2) participates in overload resolution only if:

  • U is an array type,
  • pointer is the same type as element_type* ,
  • unique_ptr<U, E>::pointer is the same type as unique_ptr<U, E>::element_type* ,
  • unique_ptr<U, E>::element_type(*)[] is convertible to element_type(*)[] , and
  • std::is_assignable< Deleter&, E&& >::value is true .

Parameters

r - smart pointer from which ownership will be transferred

Return value

*this.

Notes

As a move-only type, unique_ptr 's assignment operator only accepts rvalues arguments (e.g. the result of std::make_unique or a std::move 'd unique_ptr variable).

Example
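The page's example is not reproduced here; a small sketch of move assignment and nullptr assignment (Foo is an illustrative type) could be:

#include <iostream>
#include <memory>

struct Foo
{
    Foo()  { std::cout << "Foo\n";  }
    ~Foo() { std::cout << "~Foo\n"; }
};

int main()
{
    std::unique_ptr<Foo> p1;
    {
        std::unique_ptr<Foo> p2 = std::make_unique<Foo>();
        p1 = std::move(p2);      // overload (1): ownership moves from p2 to p1
    }                            // p2 is empty, so nothing is destroyed here
    std::cout << (p1 ? "p1 owns a Foo\n" : "p1 is empty\n");

    p1 = nullptr;                // overload (3): equivalent to p1.reset()
    return 0;
}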

Defect reports

The following behavior-changing defect reports were applied retroactively to previously published C++ standards.

  • Applied to C++11: for overload (2), the deleter was assigned from std::forward<Deleter>(r.get_deleter()); corrected to std::forward<E>(r.get_deleter()).
  • Applied to C++11: qualification conversions were rejected; corrected to accept them.
  • Applied to C++11: the converting assignment operator was not constrained; corrected to be constrained.
  • Applied to C++11: the move assignment operator was not constrained; corrected to be constrained.

Vishal Chovatiya

Understanding unique_ptr with Example in C++11


Smart pointers are a really good mechanism to manage dynamically allocated resources. In this article, we will look at unique_ptr with an example in C++11. But rather than discussing the standard smart pointers from the library, we will implement our own smart pointer equivalent to them. This will give us an idea of the inner workings of smart pointers.


Prior to C++11, the standard provided std::auto_ptr, which had some limitations. But from C++11, the standard provides many smart pointer classes. Understanding unique_ptr with an example in C++ requires an understanding of move semantics, which I have discussed here & here .

But before all these nuances, we will see why we need smart pointers in the first place:

Why do we need smart pointers?
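The func() example is not reproduced here; a sketch of the kind of function the bullets below describe (the names are illustrative) might be:

#include <iostream>

class Resource {};

void func()
{
    Resource* ptr = new Resource();

    int x;
    std::cout << "Enter an integer: ";
    std::cin >> x;

    if (x == 0)
        return;          // early return (or a throw): ptr is never deleted, so it leaks

    // do something with ptr...
    delete ptr;          // only reached on the normal path
}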

  • In the above code, an early return or throw statement can cause the function to terminate without the variable ptr being deleted.
  • Consequently, the memory allocated for variable ptr is now leaked (and leaked again every time this function is called and returns early).
  • These kinds of issues occur because pointer variables have no inherent mechanism to clean up after themselves.
  • The following class cleans up automatically when its resource is no longer in use:
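A sketch of such a class, in the spirit of the article (not the author's exact listing):

template <typename T>
class smart_ptr
{
    T* m_ptr;
public:
    explicit smart_ptr(T* ptr = nullptr) : m_ptr(ptr) {}
    ~smart_ptr() { delete m_ptr; }          // cleans up when the smart_ptr goes out of scope

    T& operator*()  const { return *m_ptr; }
    T* operator->() const { return m_ptr; }
};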

smart_ptr aka std::auto_ptr from C++98

  • Now, let’s go back to our func() example above, and show how a smart pointer class can solve our challenge:
  • Note that even in the case where the user enters zero and the function terminates early, the Resource is still properly deallocated.
  • Because ptr is a local variable, it is destroyed when the function terminates (regardless of how it terminates). And because the smart_ptr destructor will clean up the Resource, we are assured that the Resource will be properly cleaned up.
  • There are still some problems with our code. For example:
  • In this case, the destructor of our Resource object will be called twice, which can crash the program.
  • What if, instead of having our copy constructor and assignment operator copy the pointer ("copy semantics"), we instead transfer/move ownership of the pointer from the source to the destination object? This is the core idea behind move semantics. Move semantics means the class will transfer ownership of the object rather than making a copy.
  • Let's update our smart_ptr class to show how this can be done:
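A sketch of that updated, move-aware smart_ptr (again, not the author's exact listing):

template <typename T>
class smart_ptr
{
    T* m_ptr;
public:
    explicit smart_ptr(T* ptr = nullptr) : m_ptr(ptr) {}
    ~smart_ptr() { delete m_ptr; }

    // "Copy" operations transfer ownership instead of copying the pointer.
    smart_ptr(smart_ptr& other)             // note: non-const, like std::auto_ptr
    {
        m_ptr = other.m_ptr;
        other.m_ptr = nullptr;              // the source gives up its pointer
    }
    smart_ptr& operator=(smart_ptr& other)
    {
        if (this != &other)
        {
            delete m_ptr;                   // free any resource we already hold
            m_ptr = other.m_ptr;
            other.m_ptr = nullptr;
        }
        return *this;
    }

    T& operator*()  const { return *m_ptr; }
    T* operator->() const { return m_ptr; }
};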

std::auto_ptr , and why to avoid it

  • What we have seen above as smart_ptr is basically an std::auto_ptr which was introduced in C++98, was C++’s first attempt at a standardized smart pointer.
  • However, std::auto_ptr (and our smart_ptr class) has a number of problems that make using it dangerous.
  • Because std::auto_ptr implements move semantics through the copy constructor and assignment operator, passing an std::auto_ptr by value to a function will cause your resource to get moved to the function parameter (and be destroyed at the end of the function when the function parameters go out of scope). Then when you go to access your std::auto_ptr argument from the caller (not realizing it was transferred and deleted), you’re suddenly dereferencing a null pointer. Crash!
  • std::auto_ptr always deletes its contents using non-array delete. This means std::auto_ptr won’t work correctly with dynamically allocated arrays, because it uses the wrong kind of deallocation. Worse, it won’t prevent you from passing it a dynamic array, which it will then mismanage, leading to memory leaks.
  • Because of the above-mentioned shortcomings, std::auto_ptr was deprecated in C++11 and should not be used. In fact, std::auto_ptr was slated for complete removal from the standard library as part of C++17!
  • Overriding the copy semantics to implement move semantics leads to weird edge cases and inadvertent bugs. Because of this, in C++11 the concept of "move" was formally defined, and "move semantics" was added to the language to properly differentiate copying from moving. In C++11, std::auto_ptr has been replaced by a bunch of other types of "move-aware" smart pointers: std::unique_ptr, std::weak_ptr, and std::shared_ptr.
  • We’ll also explore the two most popular of these: std::unique_ptr (which is a direct replacement for std::auto_ptr ) and std::shared_ptr .

std::unique_ptr with example in C++11

  • std::unique_ptr is the C++11 replacement for std::auto_ptr. It is used to manage any dynamically allocated object that is not shared by multiple objects. That is, std::unique_ptr should completely own the object it manages, not share that ownership with other classes.
  • We can convert our smart_ptr we designed above into std::unique_ptr . And for that one thing, we can do is delete the copy constructor & assignment operator so that no one can copy smart pointer.
  • As we are not allowing copies of the smart pointer, we can't pass it to any function by value or return it by value, and this is not a good design.
  • To pass or return by value, we can add move constructor & move assignment operator, so that while passing or returning by value, we would have to transfer ownership through move semantics. This way we can also ensure single ownership throughout the lifetime of the object.
  • This is not the exact implementation of std::unique_ptr as there is deleter, implicit cast to bool & other security features included in an actual implementation, but this gives you a bigger picture of how std::unique_ptr is implemented.
References

  • https://www.learncpp.com/cpp-tutorial/15-1-intro-to-smart-pointers-move-semantics/
  • https://stackoverflow.com/questions/106508/what-is-a-smart-pointer-and-when-should-i-use-one
  • https://docs.microsoft.com/en-us/cpp/cpp/smart-pointers-modern-cpp?view=vs-2017

Related Articles

2-wrong-way-to-learn-copy-assignment-operator-in-c

2 Wrong Way to Learn Copy Assignment Operator in C++ With Example

What exactly nullptr is in C++ vishal chovatiya

What Exactly nullptr Is in C++?

MEMORY LAYOUT OF C++ OBJECT, virtual function works internally

Memory Layout of C++ Object in Different Scenarios

Installing OpenShift Container Platform with the Assisted Installer

Making open source more inclusive.

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. Because of the enormity of this endeavor, these changes are being updated gradually and where possible. For more details, see our CTO Chris Wright’s message .

Providing feedback on Red Hat documentation

You can provide feedback or report an error by submitting the Create Issue form in Jira. The Jira issue will be created in the Red Hat Hybrid Cloud Infrastructure Jira project, where you can track the progress of your feedback.

  • Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback.

  • Click Create Issue .

  • Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
  • Click Create .

We appreciate your feedback on our documentation.

Chapter 1. About the Assisted Installer

The Assisted Installer for Red Hat OpenShift Container Platform is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console . The Assisted Installer supports various deployment platforms with a focus on bare metal, Nutanix, vSphere, and Oracle Cloud Infrastructure.

You can install OpenShift Container Platform on premises in a connected environment, with an optional HTTP/S proxy, for the following platforms:

  • Highly available OpenShift Container Platform or single-node OpenShift cluster
  • OpenShift Container Platform on bare metal or vSphere with full platform integration, or other virtualization platforms without integration
  • Optionally, OpenShift Virtualization and Red Hat OpenShift Data Foundation

1.1. Features

The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following features:

  • You can install your cluster by using the Hybrid Cloud Console instead of creating installation configuration files manually.
  • You do not need a bootstrap node because the bootstrapping process runs on a node within the cluster.
  • You do not need in-depth knowledge of OpenShift Container Platform to deploy a cluster. The Assisted Installer provides reasonable default configurations.
  • You do not need to run the OpenShift Container Platform installer locally.
  • You have access to the latest Assisted Installer for the latest tested z-stream releases.
  • The Assisted Installer supports IPv4 networking with SDN and OVN, IPv6 and dual stack networking with OVN only, NMState-based static IP addressing, and an HTTP/S proxy.
  • OVN is the default Container Network Interface (CNI) for OpenShift Container Platform 4.12 and later.
  • SDN is supported up to OpenShift Container Platform 4.14 and deprecated in OpenShift Container Platform 4.15.

Before installing, the Assisted Installer checks the following configurations:

  • Network connectivity
  • Network bandwidth
  • Connectivity to the registry
  • Upstream DNS resolution of the domain name
  • Time synchronization between cluster nodes
  • Cluster node hardware
  • Installation configuration parameters
  • You can automate the installation process by using the Assisted Installer REST API.

1.2. Customizing your installation

You can customize your installation by selecting one or more options.

These options are installed as Operators, which are used to package, deploy, and manage services and applications on the control plane. See the Operators documentation for details.

You can deploy these Operators after the installation if you require advanced configuration options.

You can deploy OpenShift Virtualization to perform the following tasks:

  • Create and manage Linux and Windows virtual machines (VMs).
  • Run pod and VM workloads alongside each other in a cluster.
  • Connect to VMs through a variety of consoles and CLI tools.
  • Import and clone existing VMs.
  • Manage network interface controllers and storage disks attached to VMs.
  • Live migrate VMs between nodes.

See the OpenShift Virtualization documentation for details.

You can deploy the multicluster engine for Kubernetes to perform the following tasks in a large, multi-cluster environment:

  • Provision and manage additional Kubernetes clusters from your initial cluster.
  • Use hosted control planes to reduce management costs and optimize cluster deployment by decoupling the control and data planes. See Introduction to hosted control planes for details.

Use GitOps Zero Touch Provisioning to manage remote edge sites at scale. See Edge computing for details.

You can deploy the multicluster engine with Red Hat OpenShift Data Foundation on all OpenShift Container Platform clusters.

Multicluster engine and storage configurations

Deploying multicluster engine without OpenShift Data Foundation results in the following scenarios:

  • Multi-node cluster: No storage is configured. You must configure storage after the installation process.
  • Single-node OpenShift: LVM Storage is installed.

1.3. API support policy

Assisted Installer APIs are supported for a minimum of three months from the announcement of deprecation.

Chapter 2. Prerequisites

The Assisted Installer validates the following prerequisites to ensure successful installation.

If you use a firewall, you must configure it so that Assisted Installer can access the resources it requires to function.

2.1. Supported CPU architectures

The Assisted Installer is supported on the following CPU architectures:

2.2. Resource requirements

This section describes the resource requirements for different clusters and installation options.

The multicluster engine for Kubernetes requires additional resources.

If you deploy the multicluster engine with storage, such as OpenShift Data Foundation or LVM Storage, you must also allocate additional resources to each node.

2.2.1. Multi-node cluster resource requirements

The resource requirements of a multi-node cluster depend on the installation options.

Control plane nodes:

  • 4 CPU cores
  • 100 GB storage

The disks must be reasonably fast, with an etcd wal_fsync_duration_seconds p99 duration that is less than 10 ms. For more information, see the Red Hat Knowledgebase solution How to Use 'fio' to Check Etcd Disk Performance in OCP .

Compute nodes:

  • 2 CPU cores
  • Additional 4 CPU cores

Additional 16 GB RAM

If you deploy multicluster engine without OpenShift Data Foundation, no storage is configured. You configure the storage after the installation.

  • Additional 75 GB storage

2.2.2. Single-node OpenShift resource requirements

The resource requirements for single-node OpenShift depend on the installation options.

  • 8 CPU cores
  • Additional 8 CPU cores

Additional 32 GB RAM

If you deploy multicluster engine without OpenShift Data Foundation, LVM Storage is enabled.

  • Additional 95 GB storage

2.3. Networking requirements

The network must meet the following requirements:

  • A DHCP server unless using static IP addressing.

A base domain name. You must ensure that the following requirements are met:

  • There is no wildcard, such as *.<cluster_name>.<base_domain> , or the installation will not proceed.
  • A DNS A/AAAA record for api.<cluster_name>.<base_domain> .
  • A DNS A/AAAA record with a wildcard for *.apps.<cluster_name>.<base_domain> .
  • Port 6443 is open for the API URL if you intend to allow users outside the firewall to access the cluster via the oc CLI tool.
  • Port 443 is open for the console if you intend to allow users outside the firewall to access the console.
  • A DNS A/AAAA record for each node in the cluster when using User Managed Networking, or the installation will not proceed. DNS A/AAAA records are required for each node in the cluster when using Cluster Managed Networking after installation is complete in order to connect to the cluster, but installation can proceed without the A/AAAA records when using Cluster Managed Networking.
  • A DNS PTR record for each node in the cluster if you want to boot with the preset hostname when using static IP addressing. Otherwise, the Assisted Installer has an automatic node renaming feature when using static IP addressing that will rename the nodes to their network interface MAC address.
  • DNS A/AAAA record settings at top-level domain registrars can take significant time to update. Ensure the A/AAAA record DNS settings are working before installation to prevent installation delays.
  • For DNS record examples, see Example DNS configuration in this chapter.

The OpenShift Container Platform cluster’s network must also meet the following requirements:

  • Connectivity between all cluster nodes
  • Connectivity for each node to the internet
  • Access to an NTP server for time synchronization between the cluster nodes

2.4. Example DNS configuration

This section provides A and PTR record configuration examples that meet the DNS requirements for deploying OpenShift Container Platform using the Assisted Installer. The examples are not meant to provide advice for choosing one DNS solution over another.

In the examples, the cluster name is ocp4 and the base domain is example.com .

2.4.1. Example DNS A record configuration

The following example is a BIND zone file that shows sample A records for name resolution in a cluster installed using the Assisted Installer.

Example DNS zone database

In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

2.4.2. Example DNS PTR record configuration

The following example is a BIND zone file that shows sample PTR records for reverse name resolution in a cluster installed using the Assisted Installer.

Example DNS zone database for reverse records

A PTR record is not required for the OpenShift Container Platform application wildcard.

2.5. Preflight validations

The Assisted Installer ensures the cluster meets the prerequisites before installation, because it eliminates complex postinstallation troubleshooting, thereby saving significant amounts of time and effort. Before installing software on the nodes, the Assisted Installer conducts the following validations:

  • Ensures network connectivity
  • Ensures sufficient network bandwidth
  • Ensures connectivity to the registry
  • Ensures that any upstream DNS can resolve the required domain name
  • Ensures time synchronization between cluster nodes
  • Verifies that the cluster nodes meet the minimum hardware requirements
  • Validates the installation configuration parameters

If the Assisted Installer does not successfully validate the foregoing requirements, installation will not proceed.

Chapter 3. Installing with the Assisted Installer web console

After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster.

3.1. Preinstallation considerations

Before installing OpenShift Container Platform with the Assisted Installer, you must consider the following configuration choices:

  • Which base domain to use
  • Which OpenShift Container Platform product version to install
  • Whether to install a full cluster or single-node OpenShift
  • Whether to use a DHCP server or a static network configuration
  • Whether to use IPv4 or dual-stack networking
  • Whether to install OpenShift Virtualization
  • Whether to install Red Hat OpenShift Data Foundation
  • Whether to install multicluster engine for Kubernetes
  • Whether to integrate with the platform when installing on vSphere or Nutanix
  • Whether to install a mixed-cluster architecture

3.2. Setting the cluster details

To create a cluster with the Assisted Installer web user interface, use the following procedure.

  • Log in to the Red Hat Hybrid Cloud Console .
  • In the Red Hat OpenShift tile, click Scale your applications .
  • In the menu, click Clusters .
  • Click Create cluster .
  • Click the Datacenter tab.
  • Under Assisted Installer , click Create cluster .
  • Enter a name for the cluster in the Cluster name field.

Enter a base domain for the cluster in the Base domain field. All subdomains for the cluster will use this base domain.

The base domain must be a valid DNS name. You must not have a wild card domain set up for the base domain.

Select the version of OpenShift Container Platform to install.

  • For IBM Power and IBM zSystems platforms, only OpenShift Container Platform 4.13 and later is supported.
  • For a mixed-architecture cluster installation, select OpenShift Container Platform 4.12 or later, and use the -multi option. For instructions on installing a mixed-architecture cluster, see Additional resources .

Optional: Select Install single node OpenShift (SNO) if you want to install OpenShift Container Platform on a single node.

Currently, SNO is not supported on IBM zSystems and IBM Power platforms.

  • Optional: The Assisted Installer already has the pull secret associated to your account. If you want to use a different pull secret, select Edit pull secret .

Optional: If you are installing OpenShift Container Platform on a third-party platform, select the platform from the Integrate with external partner platforms list. Valid values are Nutanix , vSphere , or Oracle Cloud Infrastructure . The Assisted Installer defaults to having no platform integration.

For details on each of the external partner integrations, see Additional Resources .

Assisted Installer supports Oracle Cloud Infrastructure (OCI) integration from OpenShift Container Platform 4.14 and later. For OpenShift Container Platform 4.14, the OCI integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features - Scope of Support .

Optional: The Assisted Installer defaults to using the x86_64 CPU architecture. If you are installing OpenShift Container Platform on a different architecture, select the architecture to use. Valid values are arm64 , ppc64le , and s390x . Keep in mind that some features are not available with the arm64 , ppc64le , and s390x CPU architectures.

For a mixed-architecture cluster installation, use the default x86_64 architecture. For instructions on installing a mixed-architecture cluster, see Additional resources .

Optional: Select Include custom manifests if you have at least one custom manifest to include in the installation. A custom manifest contains additional configurations not currently supported in the Assisted Installer. Selecting the checkbox adds the Custom manifests page to the wizard, where you upload the manifests.

  • If you are installing OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) third-party platform, it is mandatory to add the custom manifests provided by Oracle.
  • If you have already added custom manifests, unchecking the Include custom manifests box automatically deletes them all. You will be asked to confirm the deletion.

Optional: The Assisted Installer defaults to DHCP networking. If you are using static IP configurations, bridges, or bonds for the cluster nodes instead of DHCP reservations, select Static IP, bridges, and bonds .

A static IP configuration is not supported for OpenShift Container Platform installations on Oracle Cloud Infrastructure.

  • Optional: To enable encryption of the installation disks, under Enable encryption of installation disks , select Control plane node, worker for single-node OpenShift. For multi-node clusters, select Control plane nodes to encrypt the control plane node installation disks and select Workers to encrypt the worker node installation disks.

You cannot change the base domain, the SNO checkbox, the CPU architecture, the host network configuration, or the disk encryption settings after installation begins.

Additional resources

  • Optional: Installing on Nutanix
  • Optional: Installing on vSphere
  • Optional: Installing on Oracle Cloud Infrastructure (OCI)

3.3. Optional: Configuring static networks

The Assisted Installer supports IPv4 networking with SDN (up to OpenShift Container Platform 4.14) and OVN, and supports IPv6 and dual-stack networking with OVN only. The Assisted Installer supports configuring static network interfaces with IP address to MAC address mapping. The Assisted Installer also supports configuring host network interfaces with the NMState library, a declarative network manager API for hosts. You can use NMState to deploy hosts with static IP addressing, bonds, VLANs, and other advanced networking features. First, you must set network-wide configurations. Then, you must create a host-specific configuration for each host.

For installations on IBM Z with z/VM, ensure that the z/VM nodes and vSwitches are properly configured for static networks and NMState. Also, the z/VM nodes must have a fixed MAC address assigned as the pool MAC addresses might cause issues with NMState.

  • Select the internet protocol version. Valid options are IPv4 and Dual stack .
  • If the cluster hosts are on a shared VLAN, enter the VLAN ID.

Enter the network-wide IP addresses. If you selected Dual stack networking, you must enter both IPv4 and IPv6 addresses.

  • Enter the cluster network’s IP address range in CIDR notation.
  • Enter the default gateway IP address.
  • Enter the DNS server IP address.

Enter the host-specific configuration.

  • If you are only setting a static IP address that uses a single network interface, use the form view to enter the IP address and the MAC address for each host.
  • If you use multiple interfaces, bonding, or other advanced networking features, use the YAML view and enter the desired network state for each host that uses NMState syntax. Then, add the MAC address and interface name for each host interface used in your network configuration.
  • NMState version 2.1.4

3.4. Optional: Installing Operators

This step is optional.

See the product documentation for prerequisites and configuration options:

  • OpenShift Virtualization
  • Multicluster Engine for Kubernetes
  • Red Hat OpenShift Data Foundation
  • Logical Volume Manager Storage

If you require advanced options, install the Operators after you have installed the cluster.

Select one or more from the following options:

  • Install OpenShift Virtualization

Install multicluster engine

You can deploy the multicluster engine with OpenShift Data Foundation on all OpenShift Container Platform clusters.

Deploying the multicluster engine without OpenShift Data Foundation results in the following storage configurations:

  • Multi-node cluster: No storage is configured. You must configure storage after the installation.
  • Install Logical Volume Manager Storage
  • Install OpenShift Data Foundation
  • Click Next .

3.5. Adding hosts to the cluster

You must add one or more hosts to the cluster. Adding a host to the cluster involves generating a discovery ISO. The discovery ISO runs Red Hat Enterprise Linux CoreOS (RHCOS) in-memory with an agent.

Perform the following procedure for each host on the cluster.

Click the Add hosts button and select the provisioning type.

  • Select Minimal image file: Provision with virtual media to download a smaller image that will fetch the data needed to boot. The nodes must have virtual media capability. This is the recommended method for x86_64 and arm64 architectures.
  • Select Full image file: Provision with physical media to download the larger full image. This is the recommended method for the ppc64le architecture and for the s390x architecture when installing with RHEL KVM.

  • Select iPXE: Provision from your network server to boot the hosts using iPXE. This is the recommended method for IBM Z with z/VM nodes. For RHEL KVM installations, ISO boot is the recommended method.

  • If you install on RHEL KVM, in some circumstances, the VMs on the KVM host are not rebooted on first boot and need to be restarted manually.
  • If you install OpenShift Container Platform on Oracle Cloud Infrastructure, select Minimal image file: Provision with virtual media only.

Optional: Activate the Run workloads on control plane nodes switch to schedule workloads to run on control plane nodes, in addition to the default worker nodes.

This option is available for clusters of five or more nodes. For clusters of under five nodes, the system runs workloads on the control plane nodes only, by default. For more details, see Configuring schedulable control plane nodes in Additional Resources .

  • Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings . Enter the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server.

Optional: Add an SSH public key so that you can connect to the cluster nodes as the core user. Having a login to the cluster nodes can provide you with debugging information during the installation.

Do not skip this procedure in production environments, where disaster recovery and debugging are required.

  • If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access .
  • In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu.
  • Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy, or if the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates . Add additional certificates in X.509 format.
  • Configure the discovery image if needed.
  • Optional: If you are installing on a platform and want to integrate with the platform, select Integrate with your virtualization platform . You must boot all hosts and ensure they appear in the host inventory. All the hosts must be on the same platform.
  • Click Generate Discovery ISO or Generate Script File .
  • Download the discovery ISO or iPXE script.
  • Boot the host(s) with the discovery image or iPXE script.
  • Configuring the discovery image for additional details.
  • Booting hosts with the discovery image for additional details.
  • Red Hat Enterprise Linux 9 - Configuring and managing virtualization for additional details.
  • How to configure a VIOS Media Repository/Virtual Media Library for additional details.
  • Adding hosts on Nutanix with the web console
  • Adding hosts on vSphere
  • Configuring schedulable control plane nodes

3.6. Configuring hosts

After booting the hosts with the discovery ISO, the hosts will appear in the table at the bottom of the page. You can optionally configure the hostname and role for each host. You can also delete a host if necessary.

From the Options (⋮) menu for a host, select Change hostname . If necessary, enter a new name for the host and click Change . You must ensure that each host has a valid and unique hostname.

Alternatively, from the Actions list, select Change hostname to rename multiple selected hosts. In the Change Hostname dialog, type the new name and include {{n}} to make each hostname unique. Then click Change .

You can see the new names appearing in the Preview pane as you type. The name will be identical for all selected hosts, with the exception of a single-digit increment per host.

From the Options (⋮) menu, you can select Delete host to delete a host. Click Delete to confirm the deletion.

Alternatively, from the Actions list, select Delete to delete multiple selected hosts at the same time. Then click Delete hosts .

In a regular deployment, a cluster can have three or more hosts, and three of these must be control plane hosts. If you delete a host that is also a control plane host, or if you are left with only two hosts, you will get a message saying that the system is not ready. To restore a host, you will need to reboot it from the discovery ISO.

  • From the Options (⋮) menu for the host, optionally select View host events . The events in the list are presented chronologically.

For multi-host clusters, in the Role column next to the host name, you can click on the menu to change the role of the host.

If you do not select a role, the Assisted Installer will assign the role automatically. The minimum hardware requirements for control plane nodes exceed those of worker nodes. If you assign a role to a host, ensure that you assign the control plane role to hosts that meet the minimum hardware requirements.

  • Click the Status link to view hardware, network and operator validations for the host.
  • Click the arrow to the left of a host name to expand the host details.

Once all cluster hosts appear with a status of Ready , proceed to the next step.

3.7. Configuring storage disks

Each of the hosts retrieved during host discovery can have multiple storage disks. The storage disks are listed for the host on the Storage page of the Assisted Installer wizard.

You can optionally modify the default configurations for each disk.

Changing the installation disk

The Assisted Installer randomly assigns an installation disk by default. If there are multiple storage disks for a host, you can select a different disk to be the installation disk. This automatically unassigns the previous disk.

  • Navigate to the Storage page of the wizard.
  • Expand a host to display the associated storage disks.
  • Select Installation disk from the Role list.
  • When all storage disks return to Ready status, proceed to the next step.

Disabling disk formatting

The Assisted Installer marks all bootable disks for formatting during the installation process by default, regardless of whether or not they have been defined as the installation disk. Formatting causes data loss.

You can choose to disable the formatting of a specific disk. This should be performed with caution, as bootable disks may interfere with the installation process, mainly in terms of boot order.

You cannot disable formatting for the installation disk.

  • Clear Format for a disk.
  • Configuring hosts

3.8. Configuring networking

Before installing OpenShift Container Platform, you must configure the cluster network.

In the Networking page, select one of the following if it is not already selected for you:

Cluster-Managed Networking: Selecting cluster-managed networking means that the Assisted Installer will configure a standard network topology, including keepalived and Virtual Router Redundancy Protocol (VRRP) for managing the API and Ingress VIP addresses.

  • Currently, Cluster-Managed Networking is not supported on IBM zSystems and IBM Power in OpenShift Container Platform version 4.13.
  • Oracle Cloud Infrastructure (OCI) is available for OpenShift Container Platform 4.14 with a user-managed networking configuration only.
  • User-Managed Networking : Selecting user-managed networking allows you to deploy OpenShift Container Platform with a non-standard network topology. For example, if you want to deploy with an external load balancer instead of keepalived and VRRP, or if you intend to deploy the cluster nodes across many distinct L2 network segments.

For cluster-managed networking, configure the following settings:

  • Define the Machine network . You can use the default network or select a subnet.
  • Define an API virtual IP . An API virtual IP provides an endpoint for all users to interact with, and configure the platform.
  • Define an Ingress virtual IP . An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster.

For user-managed networking, configure the following settings:

Select your Networking stack type :

  • IPv4 : Select this type when your hosts are only using IPv4.
  • Dual-stack : You can select dual-stack when your hosts are using IPv4 together with IPv6.
  • Optional: You can select Allocate IPs via DHCP server to automatically allocate the API IP and Ingress IP using the DHCP server.

Optional: Select Use advanced networking to configure the following advanced networking properties:

  • Cluster network CIDR : Define an IP address block from which Pod IP addresses are allocated.
  • Cluster network host prefix : Define a subnet prefix length to assign to each node.
  • Service network CIDR : Define an IP address block to use for service IP addresses.
  • Network type : Select either Software-Defined Networking (SDN) for standard networking or Open Virtual Networking (OVN) for IPv6, dual-stack networking, and telco features. In OpenShift Container Platform 4.12 and later releases, OVN is the default Container Network Interface (CNI). In OpenShift Container Platform 4.15 and later releases, Software-Defined Networking (SDN) is not supported.
  • Network configuration

3.9. Adding custom manifests

A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party.

You can upload a custom manifest from your file system to either the openshift folder or the manifests folder. There is no limit to the number of custom manifest files permitted.

Only one file can be uploaded at a time. However, each uploaded YAML file can contain multiple custom manifests. Uploading a multi-document YAML manifest is faster than adding the YAML files individually.

For a file containing a single custom manifest, accepted file extensions include .yaml , .yml , or .json .

Single custom manifest example

For a file containing multiple custom manifests, accepted file types include .yaml or .yml .

Multiple custom manifest example

  • When you install OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) external platform, you must add the custom manifests provided by Oracle. For additional external partner integrations such as vSphere or Nutanix, this step is optional.
  • For more information about custom manifests, see Additional Resources .

Uploading a custom manifest in the Assisted Installer user interface

When uploading a custom manifest, enter the manifest filename and select a destination folder.

Prerequisites

  • You have at least one custom manifest file saved in your file system.
  • On the Cluster details page of the wizard, select the Include custom manifests checkbox.
  • On the Custom manifest page, in the folder field, select the Assisted Installer folder where you want to save the custom manifest file. Options include openshift or manifests .
  • In the Filename field, enter a name for the manifest file, including the extension. For example, manifest1.json or multiple1.yaml .
  • Under Content , click the Upload icon or Browse button to upload a file. Alternatively, drag the file into the Content field from your file system.
  • To upload another manifest, click Add another manifest and repeat the process. This saves the previously uploaded manifest.
  • Click Next to save all manifests and proceed to the Review and create page. The uploaded custom manifests are listed under Custom manifests .

Modifying a custom manifest in the Assisted Installer user interface

You can change the folder and file name of an uploaded custom manifest. You can also copy the content of an existing manifest, or download it to the folder defined in the Chrome download settings.

It is not possible to modify the content of an uploaded manifest. However, you can overwrite the file.

  • You have uploaded at least one custom manifest file.
  • To change the folder, select a different folder for the manifest from the Folder list.
  • To modify the file name, type the new name for the manifest in the File name field.
  • To overwrite a manifest, save the new manifest in the same folder with the same file name.
  • To save a manifest as a file in your file system, click the Download icon.
  • To copy the manifest, click the Copy to clipboard icon.
  • To apply the changes, click either Add another manifest or Next .

Removing custom manifests in the Assisted Installer user interface

You can remove uploaded custom manifests before installation in one of two ways:

  • Removing one or more manifests individually.
  • Removing all manifests at once.

Once you have removed a manifest you cannot undo the action. The workaround is to upload the manifest again.

Removing a single manifest

You can delete one manifest at a time. This option does not allow you to delete the last remaining manifest.

  • You have uploaded at least two custom manifest files.
  • Navigate to the Custom manifests page.
  • Hover over the manifest name to display the Delete (minus) icon.
  • Click the icon and then click Delete in the dialog box.

Removing all manifests

You can remove all custom manifests at once. This also hides the Custom manifest page.

  • Navigate to the Cluster details page of the wizard.
  • Clear the Include custom manifests checkbox.
  • In the Remove custom manifests dialog box, click Remove .
  • Manifest configuration files
  • Multi-document YAML files

3.10. Preinstallation validations

The Assisted Installer checks that the cluster meets the prerequisites before installation, which eliminates complex postinstallation troubleshooting and saves significant time and effort. Before installing the cluster, ensure the cluster and each host pass preinstallation validation.

  • Preinstallation validation

3.11. Installing the cluster

After you have completed the configuration and all the nodes are Ready , you can begin installation. The installation process takes a considerable amount of time, and you can monitor the installation from the Assisted Installer web console. Nodes will reboot during the installation, and they will initialize after installation.

  • Click Begin installation .
  • Click the link in the Status column of the Host Inventory list to see the installation status of a particular host.

3.12. Completing the installation

After the cluster is installed and initialized, the Assisted Installer indicates that the installation is finished. The Assisted Installer provides the console URL, the kubeadmin username and password, and the kubeconfig file. Additionally, the Assisted Installer provides cluster details including the OpenShift Container Platform version, base domain, CPU architecture, API and Ingress IP addresses, and the cluster and service network IP addresses.

  • You have installed the oc CLI tool.
  • Make a copy of the kubeadmin username and password.

Download the kubeconfig file and copy it to the auth directory under your working directory:

The kubeconfig file is available for download for 24 hours after completing the installation.

Add the kubeconfig file to your environment:
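For example, assuming the kubeconfig file was copied to the auth directory under the current working directory, a minimal sketch is:

$ export KUBECONFIG=./auth/kubeconfig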

Log in with the oc CLI tool:
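For example, using the kubeadmin credentials reported by the Assisted Installer:

$ oc login -u kubeadmin -p <password>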

Replace <password> with the password of the kubeadmin user.

  • Click the web console URL or click Launch OpenShift Console to open the console.
  • Enter the kubeadmin username and password. Follow the instructions in the OpenShift Container Platform console to configure an identity provider and configure alert receivers.
  • Add a bookmark of the OpenShift Container Platform console.
  • Complete any postinstallation platform integration steps.
  • Nutanix postinstallation configuration
  • vSphere postinstallation configuration

Chapter 4. Installing with the Assisted Installer API

After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster using the Assisted Installer API. To use the API, you must perform the following procedures:

  • Set up the API authentication.
  • Configure the pull secret.
  • Register a new cluster definition.
  • Create an infrastructure environment for the cluster.

Once you perform these steps, you can modify the cluster definition, create discovery ISOs, add hosts to the cluster, and install the cluster. This document does not cover every endpoint of the Assisted Installer API , but you can review all of the endpoints in the API viewer or the swagger.yaml file.

4.1. Generating the offline token

Download the offline token from the Assisted Installer web console. You will use the offline token to set the API token.

  • Install jq .
  • Log in to the OpenShift Cluster Manager as a user with cluster creation privileges.
  • In the menu, click Downloads .
  • In the Tokens section under OpenShift Cluster Manager API Token , click View API Token .

Click Load Token .

Disable pop-up blockers.

  • In the Your API token section, copy the offline token.

In your terminal, set the offline token to the OFFLINE_TOKEN variable:
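For example, with the copied token shown as a placeholder:

$ export OFFLINE_TOKEN=<copied_offline_token>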

To make the offline token permanent, add it to your profile.

Optional: Confirm the OFFLINE_TOKEN variable definition.

4.2. Authenticating with the REST API

API calls require authentication with the API token. Assuming you use API_TOKEN as a variable name, add -H "Authorization: Bearer ${API_TOKEN}" to API calls to authenticate with the REST API.

The API token expires after 15 minutes.

  • You have generated the OFFLINE_TOKEN variable.

On the command line terminal, set the API_TOKEN variable using the OFFLINE_TOKEN to validate the user.
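The following sketch assumes the standard Red Hat SSO token endpoint and the cloud-services client ID, and uses jq to extract the access token:

$ export API_TOKEN=$( \
    curl --silent \
      --data-urlencode "grant_type=refresh_token" \
      --data-urlencode "client_id=cloud-services" \
      --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
      "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
    | jq --raw-output ".access_token" \
  )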

Confirm the API_TOKEN variable definition:

Create a script in your path for one of the token generating methods. For example:
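For example, a minimal script that wraps the previous command; the path ~/.local/bin/refresh-token is an arbitrary choice:

$ cat << "EOF" > ~/.local/bin/refresh-token
export API_TOKEN=$( \
  curl --silent \
    --data-urlencode "grant_type=refresh_token" \
    --data-urlencode "client_id=cloud-services" \
    --data-urlencode "refresh_token=${OFFLINE_TOKEN}" \
    "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
  | jq --raw-output ".access_token" \
)
EOF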

Then, save the file.

Change the file mode to make it executable:
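For example, assuming the script path used above:

$ chmod +x ~/.local/bin/refresh-token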

Refresh the API token:
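For example, by sourcing the script created above:

$ source ~/.local/bin/refresh-token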

Verify that you can access the API by running the following command:
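For example, by listing the clusters associated with your account; any authenticated endpoint works for this check:

$ curl -s "https://api.openshift.com/api/assisted-install/v2/clusters" \
    -H "Authorization: Bearer ${API_TOKEN}" | jq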

Example output

4.3. Configuring the pull secret

Many of the Assisted Installer API calls require the pull secret. Download the pull secret to a file so that you can reference it in API calls. The pull secret is a JSON object that will be included as a value within the request’s JSON object. The pull secret JSON must be formatted to escape the quotes. For example:

  • In the menu, click OpenShift .
  • In the submenu, click Downloads .
  • In the Tokens section under Pull secret , click Download .

To use the pull secret from a shell variable, execute the following command:
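For example, assuming the pull secret was downloaded to pull-secret.txt as a single line of JSON, jq -R converts it to an escaped JSON string:

$ export PULL_SECRET=$(cat pull-secret.txt | jq -R .)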

To slurp the pull secret file using jq , reference it in the pull_secret variable, piping the value to tojson to ensure that it is properly formatted as escaped JSON. For example:
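A minimal sketch of that jq pattern, producing an object with a properly escaped pull_secret value:

$ jq --null-input --slurpfile pull_secret pull-secret.txt \
    '{"pull_secret": $pull_secret[0] | tojson}'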

Confirm the PULL_SECRET variable definition:

4.4. Optional: Generating the SSH public key

During the installation of OpenShift Container Platform, you can optionally provide an SSH public key to the installation program. This is useful for initiating an SSH connection to a remote node when troubleshooting an installation error.

If you do not have an existing SSH key pair on your local machine to use for the authentication, create one now.

  • Generate the OFFLINE_TOKEN and API_TOKEN variables.

From the root user in your terminal, get the SSH public key:

Set the SSH public key to the CLUSTER_SSHKEY variable:
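For example, assuming the public key is at ~/.ssh/id_rsa.pub:

$ export CLUSTER_SSHKEY=$(cat ~/.ssh/id_rsa.pub)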

Confirm the CLUSTER_SSHKEY variable definition:

4.5. Registering a new cluster

To register a new cluster definition with the API, use the /v2/clusters endpoint. Registering a new cluster requires the following settings:

  • openshift-version
  • pull_secret
  • cpu_architecture

See the cluster-create-params model in the API viewer for details on the fields you can set when registering a new cluster. When setting the olm_operators field, see Additional Resources for details on installing Operators.

After you create the cluster definition, you can modify the cluster definition and provide values for additional settings.

  • For certain installation platforms and OpenShift Container Platform versions, you can also create a mixed-architecture cluster by combining two different architectures on the same cluster. For details, see Additional Resources .
  • If you are installing OpenShift Container Platform on a third-party platform, see Additional Resources for the relevant instructions.
  • For clusters of five to ten nodes, you can choose to schedule workloads to run on control plane nodes in addition to the worker nodes, while registering a cluster. For details, see Configuring schedulable control plane nodes in Additional resources .
  • You have generated a valid API_TOKEN . Tokens expire every 15 minutes.
  • You have downloaded the pull secret.
  • Optional: You have assigned the pull secret to the $PULL_SECRET variable.

Register a new cluster.
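The following sketch registers a cluster with the $PULL_SECRET variable; the name, version, base domain, and CPU architecture values are example placeholders to adapt:

$ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{
      \"name\": \"testcluster\",
      \"openshift_version\": \"4.14\",
      \"cpu_architecture\": \"x86_64\",
      \"base_dns_domain\": \"example.com\",
      \"pull_secret\": $PULL_SECRET
    }" | jq '.id'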

Optional: You can register a new cluster by slurping the pull secret file in the request:

Optional: You can register a new cluster by writing the configuration to a JSON file and then referencing it in the request:

Assign the returned cluster_id to the CLUSTER_ID variable and export it:

If you close your terminal session, you need to export the CLUSTER_ID variable again in a new terminal session.

Check the status of the new cluster:
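For example:

$ curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" | jq '.status'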

Once you register a new cluster definition, create the infrastructure environment for the cluster.

You cannot see the cluster configuration settings in the Assisted Installer user interface until you create the infrastructure environment.

  • Modifying a cluster
  • Installing a mixed-architecture cluster
  • Optional: Installing on Oracle Cloud Infrastructure

4.5.1. Optional: Installing Operators

You can install the following Operators when you register a new cluster:

OpenShift Virtualization Operator

Currently, OpenShift Virtualization is not supported on IBM zSystems and IBM Power.

  • Multicluster engine Operator
  • OpenShift Data Foundation Operator
  • LVM Storage Operator

Run the following command:

  • OpenShift Virtualization documentation
  • Red Hat OpenShift Cluster Manager documentation
  • Red Hat OpenShift Data Foundation documentation
  • Logical Volume Manager Storage documentation

4.6. Modifying a cluster

To modify a cluster definition with the API, use the /v2/clusters/{cluster_id} endpoint. Modifying a cluster resource is a common operation for adding settings such as changing the network type or enabling user-managed networking. See the v2-cluster-update-params model in the API viewer for details on the fields you can set when modifying a cluster definition.

You can add or remove Operators from a cluster resource that has already been registered.

To create partitions on nodes, see Configuring storage on nodes in the OpenShift Container Platform documentation.

  • You have created a new cluster resource.

Modify the cluster. For example, change the SSH key:
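A sketch of the patch request; it reuses the CLUSTER_SSHKEY variable set earlier:

$ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"ssh_public_key\": \"$CLUSTER_SSHKEY\"}" | jq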

4.6.1. Modifying Operators

You can add or remove Operators from a cluster resource that has already been registered as part of a previous installation. This is only possible before you start the OpenShift Container Platform installation.

You set the required Operator definition by using the PATCH method for the /v2/clusters/{cluster_id} endpoint.

  • You have refreshed the API token.
  • You have exported the CLUSTER_ID as an environment variable.

Run the following command to modify the Operators:
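For example, the following sketch requests OpenShift Virtualization; cnv is the operator name used here and should be checked against the API documentation for your version:

$ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"olm_operators": [{"name": "cnv"}]}' | jq '.monitored_operators'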

Sample output

The output is the description of the new cluster state. The monitored_operators property in the output contains Operators of two types:

  • "operator_type": "builtin" : Operators of this type are an integral part of OpenShift Container Platform.
  • "operator_type": "olm" : Operators of this type are added manually by a user or automatically, as a dependency. In this example, the LVM Storage Operator is added automatically as a dependency of OpenShift Virtualization.

4.7. Registering a new infrastructure environment

Once you register a new cluster definition with the Assisted Installer API, create an infrastructure environment using the v2/infra-envs endpoint. Registering a new infrastructure environment requires the following settings:

See the infra-env-create-params model in the API viewer for details on the fields you can set when registering a new infrastructure environment. You can modify an infrastructure environment after you create it. As a best practice, consider including the cluster_id when creating a new infrastructure environment. The cluster_id will associate the infrastructure environment with a cluster definition. When creating the new infrastructure environment, the Assisted Installer will also generate a discovery ISO.

  • Optional: You have registered a new cluster definition and exported the cluster_id .

Register a new infrastructure environment. Provide a name, preferably something including the cluster name. This example provides the cluster ID to associate the infrastructure environment with the cluster resource. The following example specifies the image_type . You can specify either full-iso or minimal-iso . The default value is minimal-iso .
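A sketch of the request; the name is an example and the pull secret is slurped from pull-secret.txt:

$ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/infra-envs" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input --slurpfile pull_secret pull-secret.txt \
        --arg cluster_id ${CLUSTER_ID} '{
          "name": "testcluster-infra-env",
          "image_type": "full-iso",
          "cluster_id": $cluster_id,
          "pull_secret": $pull_secret[0] | tojson
        }')" | jq '.id'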

Optional: You can register a new infrastructure environment by slurping the pull secret file in the request:

Optional: You can register a new infrastructure environment by writing the configuration to a JSON file and then referencing it in the request:

Assign the returned id to the INFRA_ENV_ID variable and export it:

Once you create an infrastructure environment and associate it to a cluster definition via the cluster_id , you can see the cluster settings in the Assisted Installer web user interface. If you close your terminal session, you need to re-export the id in a new terminal session.

4.8. Modifying an infrastructure environment

You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. Modifying an infrastructure environment is a common operation for adding settings such as networking, SSH keys, or ignition configuration overrides.

See the infra-env-update-params model in the API viewer for details on the fields you can set when modifying an infrastructure environment. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.

  • You have created a new infrastructure environment.

Modify the infrastructure environment:
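For example, to update the SSH key embedded in the discovery ISO (ssh_authorized_key is the relevant infrastructure environment field):

$ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"ssh_authorized_key\": \"$CLUSTER_SSHKEY\"}" | jq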

4.8.1. Optional: Adding kernel arguments

Providing kernel arguments to the Red Hat Enterprise Linux CoreOS (RHCOS) kernel via the Assisted Installer means passing specific parameters or options to the kernel at boot time, particularly when you cannot customize the kernel parameters of the discovery ISO. Kernel parameters can control various aspects of the kernel’s behavior and the operating system’s configuration, affecting hardware interaction, system performance, and functionality. Kernel arguments are used to customize or inform the node’s RHCOS kernel about the hardware configuration, debugging preferences, system services, and other low-level settings.

The RHCOS installer kargs modify command supports the append , delete , and replace options.

You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO.

Modify the kernel arguments:
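A sketch assuming the kernel_arguments field of the infrastructure environment update parameters; the argument value is only an example:

$ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"kernel_arguments": [{"operation": "append", "value": "rd.net.timeout.carrier=60"}]}' | jq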

4.9. Adding hosts

After configuring the cluster resource and infrastructure environment, download the discovery ISO image. You can choose from two images:

  • Full ISO image: Use the full ISO image when booting must be self-contained. The image includes everything needed to boot and start the Assisted Installer agent. The ISO image is about 1GB in size. This is the recommended method for the s390x architecture when installing with RHEL KVM.
  • Minimal ISO image: Use the minimal ISO image when bandwidth over the virtual media connection is limited. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.

Currently, ISO images are not supported for installations on IBM Z ( s390x ) with z/VM. For details, see Booting hosts using iPXE .

You can boot hosts with the discovery image using three methods. For details, see Booting hosts with the discovery image .

  • You have created a cluster.
  • You have created an infrastructure environment.
  • You have completed the configuration.
  • If the cluster hosts are behind a firewall that requires the use of a proxy, you have configured the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server.
  • You have selected an image type or will use the default minimal-iso .
  • Configure the discovery image if needed. For details, see Configuring the discovery image .

Get the download URL:
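For example, assuming the downloads/image-url endpoint of the infrastructure environment:

$ curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/downloads/image-url" \
    -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.url'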

Download the discovery image:
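For example, using wget with the URL shown as a placeholder:

$ wget -O discovery.iso "<url>"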

Replace <url> with the download URL from the previous step.

  • Boot the host(s) with the discovery image.
  • Assign a role to host(s).
  • Configuring the discovery image
  • Booting hosts with the discovery image
  • Adding hosts on Nutanix with the API
  • Assigning roles to hosts
  • Booting hosts using iPXE

4.10. Modifying hosts

After adding hosts, modify the hosts as needed. The most common modifications are to the host_name and the host_role parameters.

You can modify a host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. See the host-update-params model in the API viewer for details on the fields you can set when modifying a host.

A host might be one of two roles:

  • master : A host with the master role will operate as a control plane host.
  • worker : A host with the worker role will operate as a worker host.

By default, the Assisted Installer sets a host to auto-assign , which means the installation program automatically determines whether the host has the master or the worker role. Use the following procedure to set the host's role:

  • You have added hosts to the cluster.

Get the host IDs:
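For example, by listing the hosts in the infrastructure environment:

$ curl -s "https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts" \
    -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.[].id'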

Modify the host:
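A sketch that sets the host name and role; <host_id> is one of the IDs returned in the previous step, and the host name is an example:

$ curl -s -X PATCH \
    "https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts/<host_id>" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"host_name": "master-0.ocp4.example.com", "host_role": "master"}' | jq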

4.10.1. Modifying storage disk configuration

Each host retrieved during host discovery can have multiple storage disks. You can optionally modify the default configurations for each disk.

  • Configure the cluster and discover the hosts. For details, see Additional resources .

Viewing the storage disks

You can view the hosts in your cluster, and the disks on each host. This enables you to perform actions on a specific disk.

Get the host IDs for the cluster:

This is the ID of a single host. Multiple host IDs are separated by commas.

Get the disks for a specific host:

This is the output for one disk. It contains the disk_id and installation_eligibility properties for the disk.

You can select any disk whose installation_eligibility property is eligible: true to be the installation disk.

  • Get the host and storage disk IDs. For details, see Viewing the storage disks .

Optional: Identify the current installation disk:

Assign a new installation disk:

4.11. Adding custom manifests

A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party. To create a custom manifest with the API, use the /v2/clusters/$CLUSTER_ID/manifests endpoint.

You can upload a base64-encoded custom manifest to either the openshift folder or the manifests folder with the Assisted Installer API. There is no limit to the number of custom manifests permitted.

Only one base64-encoded JSON manifest can be uploaded at a time. However, each uploaded base64-encoded YAML file can contain multiple custom manifests. Uploading a multi-document YAML manifest is faster than adding the YAML files individually.

  • You have registered a new cluster definition and exported the cluster_id to the $CLUSTER_ID BASH variable.
  • Create a custom manifest file.
  • Save the custom manifest file using the appropriate extension for the file format.

Add the custom manifest to the cluster by executing the following command:
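A sketch of the request; manifest.json is used both as the uploaded file name and as the local path, and the folder can be manifests or openshift:

$ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input --arg content "$(base64 -w 0 manifest.json)" '{
          "file_name": "manifest.json",
          "folder": "manifests",
          "content": $content
        }')" | jq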

Replace manifest.json with the name of your manifest file. The second instance of manifest.json is the path to the file. Ensure the path is correct.

The base64 -w 0 command base64-encodes the manifest as a string and omits carriage returns. Encoding with carriage returns will generate an exception.

Verify that the Assisted Installer added the manifest:
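For example:

$ curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests" \
    -H "Authorization: Bearer ${API_TOKEN}" | jq '.[] | select(.file_name == "manifest.json")'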

Replace manifest.json with the name of your manifest file.

4.12. Preinstallation validations

  • Preinstallation validations

4.13. Installing the cluster

Once the cluster hosts pass validation, you can install the cluster.

  • You have created a cluster and infrastructure environment.
  • You have added hosts to the infrastructure environment.
  • The hosts have passed validation.

Install the cluster:
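For example, using the actions/install endpoint:

$ curl -s -X POST \
    "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/actions/install" \
    -H "Authorization: Bearer ${API_TOKEN}" | jq '.status'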

Chapter 5. Optional: Enabling disk encryption

You can enable encryption of installation disks using either the TPM v2 or Tang encryption modes.

In some situations, when you enable TPM disk encryption in the firmware for a bare-metal host and then boot it from an ISO that you generate with the Assisted Installer, the cluster deployment can get stuck. This can happen if there are left-over TPM encryption keys from a previous installation on the host. For more information, see BZ#2011634 . If you experience this problem, contact Red Hat support.

5.1. Enabling TPM v2 encryption

  • Check to see if TPM v2 encryption is enabled in the BIOS on each host. Most Dell systems require this. Check the manual for your computer. The Assisted Installer will also validate that TPM is enabled in the firmware. See the disk-encryption model in the Assisted Installer API for additional details.

Verify that a TPM v2 encryption chip is installed on each node and enabled in the firmware.

  • Optional: Using the web console, in the Cluster details step of the user interface wizard, choose to enable TPM v2 encryption on either the control plane nodes, workers, or both.

Optional: Using the API, follow the "Modifying hosts" procedure. Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tpmv2 .

Enable TPM v2 encryption:
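A sketch that patches the cluster definition with the disk_encryption settings:

$ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"disk_encryption": {"enable_on": "all", "mode": "tpmv2"}}' | jq '.disk_encryption'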

Valid settings for enable_on are all , master , worker , or none .

5.2. Enabling Tang encryption

  • You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key.
  • Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. You can set multiple Tang servers, but the Assisted Installer must be able to connect to all of them during installation.

On the Tang server, retrieve the thumbprint for the Tang server using tang-show-keys :
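For example:

$ tang-show-keys <port>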

Optional: Replace <port> with the port number. The default port number is 80 .

Example thumbprint

Optional: Retrieve the thumbprint for the Tang server using jose .

Ensure jose is installed on the Tang server:

On the Tang server, retrieve the thumbprint using jose :

Replace <public_key> with the public exchange key for the Tang server.

  • Optional: In the Cluster details step of the user interface wizard, choose to enable Tang encryption on either the control plane nodes, workers, or both. You will be required to enter URLs and thumbprints for the Tang servers.

Optional: Using the API, follow the "Modifying hosts" procedure.

Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tang . Set disk_encryption.tang_servers to provide the URL and thumbprint details about one or more Tang servers:
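A sketch of the corresponding cluster patch; the Tang URL and thumbprint are placeholders, and the inner quotes of the tang_servers string are escaped:

$ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"disk_encryption": {"enable_on": "all", "mode": "tang",
        "tang_servers": "[{\"url\":\"http://tang.example.com:7500\",\"thumbprint\":\"<thumbprint>\"}]"}}' \
    | jq '.disk_encryption'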

Valid settings for enable_on are all , master , worker , or none . The tang_servers value is a string, so escape the quotes within the object(s) it contains.

5.3. Additional resources

  • Modifying hosts

Chapter 6. Optional: Configuring schedulable control plane nodes

In a high availability deployment, three or more nodes comprise the control plane. The control plane nodes are used for managing OpenShift Container Platform and for running the OpenShift containers. The remaining nodes are workers, used to run the customer containers and workloads. There can be anywhere from one to thousands of worker nodes.

For a single-node OpenShift cluster or for a cluster that comprises up to four nodes, the system automatically schedules the workloads to run on the control plane nodes.

For clusters of five to ten nodes, you can choose to schedule workloads to run on the control plane nodes in addition to the worker nodes. This option is recommended for enhancing efficiency and preventing underutilized resources. You can select this option either during the installation setup, or as part of the postinstallation steps.

For larger clusters of more than ten nodes, this option is not recommended.

This section explains how to schedule workloads to run on control plane nodes using the Assisted Installer web console and API, as part of the installation setup.

For instructions on how to configure schedulable control plane nodes following an installation, see Configuring control plane nodes as schedulable in the OpenShift Container Platform documentation.

When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes.

6.1. Configuring schedulable control planes using the web console

  • You have set the cluster details.
  • You are installing OpenShift Container Platform 4.14 or later.
  • Log in to the Red Hat Hybrid Cloud Console and follow the instructions for installing OpenShift Container Platform using the Assisted Installer web console. For details, see Installing with the Assisted Installer web console in Additional Resources .
  • When you reach the Host discovery page, click Add hosts .
  • Optionally change the Provisioning type and additional settings as required. All options are compatible with schedulable control planes.
  • Click Generate Discovery ISO to download the ISO.

Set Run workloads on control plane nodes to on.

For clusters of four nodes or fewer, this switch is activated automatically and cannot be changed.

6.2. Configuring schedulable control planes using the API

Use the schedulable_masters attribute to enable workloads to run on control plane nodes.

  • You have created a $PULL_SECRET variable.
  • Follow the instructions for installing Assisted Installer using the Assisted Installer API. For details, see Installing with the Assisted Installer API in Additional Resources .

When you reach the step for registering a new cluster, set the schedulable_masters attribute as follows:
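For example, a registration request along these lines; the other field values are placeholders to adapt:

$ curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{
      \"name\": \"testcluster\",
      \"openshift_version\": \"4.14\",
      \"base_dns_domain\": \"example.com\",
      \"schedulable_masters\": true,
      \"pull_secret\": $PULL_SECRET
    }" | jq '.schedulable_masters'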

6.3. Additional resources

  • Installing with the Assisted Installer web console
  • Installing with the Assisted Installer API

Chapter 7. Configuring the discovery image

The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can use Ignition to customize the discovery image.

Modifications to the discovery image will not persist in the system.

7.1. Creating an Ignition configuration file

Ignition is a low-level system configuration utility, which is part of the temporary initial root filesystem, the initramfs . When Ignition runs on the first boot, it finds configuration data in the Ignition configuration file and applies it to the host before switch_root is called to pivot to the host’s root filesystem.

Ignition uses a JSON configuration specification file to represent the set of changes that occur on the first boot.

Ignition versions newer than 3.2 are not supported, and will raise an error.

Create an Ignition file and specify the configuration specification version:
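For example, a minimal file declaring specification version 3.1.0; the file name ignition.conf is arbitrary:

$ cat << EOF > ignition.conf
{
  "ignition": {
    "version": "3.1.0"
  }
}
EOF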

Add configuration data to the Ignition file. For example, add a password to the core user.

Generate a password hash:
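For example, openssl can produce a SHA-512 crypt hash and prompt for the password:

$ openssl passwd -6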

Add the generated password hash to the core user:
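A sketch of the resulting file; <password_hash> stands for the hash generated in the previous step:

$ cat << EOF > ignition.conf
{
  "ignition": {
    "version": "3.1.0"
  },
  "passwd": {
    "users": [
      {
        "name": "core",
        "passwordHash": "<password_hash>"
      }
    ]
  }
}
EOF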

Save the Ignition file and export it to the IGNITION_FILE variable:
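For example:

$ export IGNITION_FILE=./ignition.conf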

7.2. Modifying the discovery image with Ignition

Once you create an Ignition configuration file, you can modify the discovery image by patching the infrastructure environment using the Assisted Installer API.

  • If you used the web console to create the cluster, you have set up the API authentication.
  • You have an infrastructure environment and you have exported the infrastructure environment id to the INFRA_ENV_ID variable.
  • You have a valid Ignition file and have exported the file name as $IGNITION_FILE .

Create an ignition_config_override JSON object and redirect it to a file:
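A sketch using jq; the output file name discovery_ignition.json is arbitrary:

$ jq --null-input --arg ignition "$(cat $IGNITION_FILE)" \
    '{"ignition_config_override": $ignition}' > discovery_ignition.json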

Patch the infrastructure environment:
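For example:

$ curl -s -X PATCH "https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d @discovery_ignition.json | jq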

The ignition_config_override object references the Ignition file.

  • Download the updated discovery image.

Chapter 8. Booting hosts with the discovery image

The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can boot hosts with the discovery image using three methods:

  • Redfish virtual media

8.1. Creating an ISO image on a USB drive

You can install the Assisted Installer agent using a USB drive that contains the discovery ISO image. Starting the host with the USB drive prepares the host for the software installation.

  • On the administration host, insert a USB drive into a USB port.

Copy the ISO image to the USB drive, for example:
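A sketch using dd; the ISO file name and the <drive_path> device are placeholders:

$ dd if=./discovery_image.iso of=<drive_path> bs=4M status=progress conv=fsync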

In the command, <drive_path> is the location of the connected USB drive, for example, /dev/sdb .

After the ISO is copied to the USB drive, you can use the USB drive to install the Assisted Installer agent on the cluster host.

8.2. Booting with a USB drive

To register nodes with the Assisted Installer using a bootable USB drive, use the following procedure.

  • Insert the RHCOS discovery ISO USB drive into the target host.
  • Configure the boot drive order in the server firmware settings to boot from the attached discovery ISO, and then reboot the server.

Wait for the host to boot up.

  • For web console installations, on the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts.

For API installations, refresh the token, check the enabled host count, and gather the host IDs:

8.3. Booting from an HTTP-hosted ISO image using the Redfish API

You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API.

  • Download the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO.
  • Copy the ISO file to an HTTP server accessible in your network.

Boot the host from the hosted ISO file, for example:

Call the redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command:
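A sketch of a typical Redfish call; the BMC address, credentials, manager ID, and virtual media path vary by vendor and are shown here as placeholders:

$ curl -k -u <bmc_username>:<bmc_password> -X POST \
    -H "Content-Type: application/json" \
    -d '{"Image": "http://<http_server>/discovery_image.iso"}' \
    "https://<bmc_address>/redfish/v1/Managers/1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia"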

Set the host to boot from the VirtualMedia device by running the following command:

Reboot the host:

Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command:

8.4. Booting hosts using iPXE

The Assisted Installer provides an iPXE script including all the artifacts needed to boot the discovery image for an infrastructure environment. Due to the limitations of the current iPXE HTTPS implementation, the recommendation is to download the needed artifacts and expose them on an HTTP server. Although iPXE supports the HTTPS protocol, the supported algorithms are old and not recommended.

The full list of supported ciphers is in https://ipxe.org/crypto .

  • You have created an infrastructure environment by using the API or you have created a cluster by using the web console.
  • You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID .
  • You have credentials to use when accessing the API and have exported a token as $API_TOKEN in your shell.

If you configure iPXE by using the web console, the $INFRA_ENV_ID and $API_TOKEN variables are preset.

  • You have an HTTP server to host the images.

IBM Power only supports PXE, which also requires the following:

  • You have installed grub2 at /var/lib/tftpboot
  • You have installed DHCP and TFTP for PXE

Download the iPXE script directly from the web console, or get the iPXE script from the Assisted Installer:
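For example, the script can be fetched from the infrastructure environment downloads endpoint, shown here assuming the file_name=ipxe-script query parameter:

$ curl -s \
    "https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/downloads/files?file_name=ipxe-script" \
    -H "Authorization: Bearer ${API_TOKEN}" > ipxe-script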

Download the required artifacts by extracting URLs from the ipxe-script .

Download the initial RAM disk:

Download the linux kernel:

Download the root filesystem:

Change the URLs of the different artifacts in the ipxe-script to match your local HTTP server. For example:

Optional: When installing with RHEL KVM on IBM zSystems, you must boot the host by specifying additional kernel arguments.

If you install with iPXE on RHEL KVM, in some circumstances, the VMs on the VM host are not rebooted on first boot and need to be started manually.

Optional: When installing on IBM Power, you must download the initramfs, kernel, and root filesystem as follows:

  • Copy initrd.img and kernel.img to the PXE directory /var/lib/tftpboot/rhcos
  • Copy rootfs.img to the HTTPD directory /var/www/html/install

Add the following entry to /var/lib/tftpboot/boot/grub2/grub.cfg :

Chapter 9. Assigning roles to hosts

You can assign roles to your discovered hosts. These roles define the function of the host within the cluster. The roles can be one of the standard Kubernetes types: control plane (master) or worker .

The host must meet the minimum requirements for the role you selected. You can find the hardware requirements by referring to the Prerequisites section of this document or using the preflight requirement API.

If you do not select a role, the system selects one for you. You can change the role at any time before installation starts.

9.1. Selecting a role by using the web console

You can select a role after the host finishes its discovery.

  • Go to the Host Discovery tab and scroll down to the Host Inventory table.
  • Select the Auto-assign drop-down for the required host.
  • Select Control plane node to assign this host a control plane role.
  • Select Worker to assign this host a worker role.
  • Check the validation status.

9.2. Selecting a role by using the API

You can select a role for the host using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. A host can have one of two roles: master or worker .

By default, the Assisted Installer sets a host to auto-assign , which means the installer automatically determines whether the host has the master or the worker role. Use this procedure to set the host's role:

Modify the host_role setting:
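For example:

$ curl -s -X PATCH \
    "https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts/<host_id>" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"host_role": "worker"}' | jq '.role'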

Replace <host_id> with the ID of the host.

9.3. Auto-assigning roles

Assisted Installer selects a role automatically for hosts if you do not assign a role yourself. The role selection mechanism factors in the host's memory, CPU, and disk space. It aims to assign a control plane role to the three weakest hosts that meet the minimum requirements for control plane nodes. All other hosts default to worker nodes. The goal is to provide enough resources to run the control plane and reserve the more capacity-intensive hosts for running the actual workloads.

You can override the auto-assign decision at any time before installation.

The validations make sure that the auto selection is a valid one.

9.4. Additional resources

Chapter 10. Preinstallation validations

10.1. Definition of preinstallation validations

The Assisted Installer aims to make cluster installation as simple, efficient, and error-free as possible. The Assisted Installer performs validation checks on the configuration and the gathered telemetry before starting an installation.

The Assisted Installer will use the information provided prior to installation, such as control plane topology, network configuration and hostnames. It will also use real time telemetry from the hosts you are attempting to install.

When a host boots the discovery ISO, an agent will start on the host. The agent will send information about the state of the host to the Assisted Installer.

The Assisted Installer uses all of this information to compute real time preinstallation validations. All validations are either blocking or non-blocking to the installation.

10.2. Blocking and non-blocking validations

A blocking validation will prevent progress of the installation, meaning that you will need to resolve the issue and pass the blocking validation before you can proceed.

A non-blocking validation is a warning that tells you about issues that might cause a problem.

10.3. Validation types

The Assisted Installer performs two types of validation:

Host validations ensure that the configuration of a given host is valid for installation.

Cluster validations ensure that the configuration of the whole cluster is valid for installation.

10.4. Host validations

10.4.1. Getting host validations by using the REST API

If you use the web console, many of these validations will not show up by name. To get a list of validations consistent with the labels, use the following procedure.

  • You have installed the jq utility.
  • You have created an Infrastructure Environment by using the API or have created a cluster by using the web console.
  • You have hosts booted with the discovery ISO.
  • You have your Cluster ID exported in your shell as CLUSTER_ID .
  • You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell.

Get all validations for all hosts:

Get non-passing validations for all hosts:
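As a sketch for both queries, assuming the hosted service at api.openshift.com, exported CLUSTER_ID and API_TOKEN variables, and the validations_info field being returned as a JSON-encoded string:

```bash
# All validations for all hosts in the cluster
curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq -r '.hosts[].validations_info | fromjson'

# Only the validations that are not passing
curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq -r '.hosts[].validations_info | fromjson | [.[][] | select(.status != "success")]'
```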

10.4.2. Host validations in detail

Parameter | Validation type | Description

non-blocking

Checks that the host has recently communicated with the Assisted Installer.

non-blocking

Checks that the Assisted Installer received the inventory from the host.

non-blocking

Checks that the number of CPU cores meets the minimum requirements.

non-blocking

Checks that the amount of memory meets the minimum requirements.

non-blocking

Checks that at least one available disk meets the eligibility criteria.

blocking

Checks that the number of cores meets the minimum requirements for the host role.

blocking

Checks that the amount of memory meets the minimum requirements for the host role.

blocking

For day 2 hosts, checks that the host can download ignition configuration from the day 1 cluster.

blocking

The majority group is the largest full-mesh connectivity group on the cluster, where all members can communicate with all other members. This validation checks that hosts in a multi-node, day 1 cluster are in the majority group.

blocking

Checks that the platform is valid for the network settings.

non-blocking

Checks if an NTP server has been successfully used to synchronize time on the host.

non-blocking

Checks if container images have been successfully pulled from the image registry.

blocking

Checks that disk speed metrics from an earlier installation meet requirements, if they exist.

blocking

Checks that the average network latency between hosts in the cluster meets the requirements.

blocking

Checks that the network packet loss between hosts in the cluster meets the requirements.

blocking

Checks that the host has a default route configured.

blocking

For a multi-node cluster with user-managed networking, checks that the host can resolve the API domain name for the cluster.

blocking

For a multi-node cluster with user-managed networking, checks that the host can resolve the internal API domain name for the cluster.

blocking

For a multi-node cluster with user-managed networking, checks that the host can resolve the internal apps domain name for the cluster.

non-blocking

Checks that the host is compatible with the cluster platform.

blocking

Checks that the wildcard DNS *.<cluster_name>.<base_domain> is not configured, because this causes known problems for OpenShift.

non-blocking

Checks that the type of host and disk encryption configured meet the requirements.

blocking

Checks that this host does not have any overlapping subnets.

blocking

Checks that the hostname is unique in the cluster.

blocking

Checks the validity of the hostname, meaning that it matches the general form of hostnames and is not forbidden.

blocking

Checks that the host IP is in the address range of the machine CIDR.

blocking

Validates that the cluster meets the requirements of the Local Storage Operator.

blocking

Validates that the cluster meets the requirements of the OpenShift Data Foundation Operator.

blocking

Validates that the cluster meets the requirements of Container Native Virtualization.

blocking

Validates that the cluster meets the requirements of the Logical Volume Manager Operator.

non-blocking

Verifies that each valid disk sets disk.enableUUID to true. In vSphere this results in each disk having a UUID.

blocking

Checks that the discovery agent version is compatible with the agent docker image version.

blocking

Checks that the installation disk is not skipping disk formatting.

blocking

Checks that all disks marked to skip formatting are in the inventory. A disk ID can change on reboot, and this validation prevents issues caused by that.

blocking

Checks the connection of the installation media to the host.

non-blocking

Checks that the machine network definition exists for the cluster.

blocking

Checks that the platform is compatible with the network settings. Some platforms are only permitted when installing Single Node OpenShift or when using User Managed Networking.

10.5. Cluster validations

10.5.1. Getting cluster validations by using the REST API

If you use the web console, many of these validations will not show up by name. To obtain a list of validations consistent with the labels, use the following procedure.

Get all cluster validations:

Get non-passing cluster validations:
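As a sketch for both queries, under the same assumptions as the host validation commands above (hosted service, exported CLUSTER_ID and API_TOKEN, validations_info as a JSON-encoded string):

```bash
# All cluster-level validations
curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq -r '.validations_info | fromjson'

# Only the validations that are not passing
curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq -r '.validations_info | fromjson | [.[][] | select(.status != "success")]'
```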

10.5.2. Cluster validations in detail

Parameter | Validation type | Description

non-blocking

Checks that the machine network definition exists for the cluster.

non-blocking

Checks that the cluster network definition exists for the cluster.

non-blocking

Checks that the service network definition exists for the cluster.

blocking

Checks that the defined networks do not overlap.

blocking

Checks that the defined networks share the same address families (valid address families are IPv4 and IPv6).

blocking

Checks the cluster network prefix to ensure that it is valid and allows enough address space for all hosts.

blocking

For a cluster that does not use user-managed networking, checks that the api_vips or the ingress_vips are members of the machine CIDR if they exist.

non-blocking

For a cluster that does not use user-managed networking, checks that the api_vips exist.

blocking

For a cluster that does not use user-managed networking, checks that the api_vips belong to the machine CIDR and are not in use.

blocking

For a cluster that does not use user-managed networking, checks that the ingress_vips exist.

non-blocking

For a cluster that does not use user-managed networking, checks that the ingress_vips belong to the machine CIDR and are not in use.

blocking

Checks that all hosts in the cluster are in the "ready to install" status.

blocking

This validation only applies to multi-node clusters.

non-blocking

Checks that the base DNS domain exists for the cluster.

non-blocking

Checks that the pull secret exists. Does not check that the pull secret is valid or authorized.

blocking

Checks that each of the host clocks are no more than 4 minutes out of sync with each other.

blocking

Validates that the cluster meets the requirements of the Local Storage Operator.

blocking

Validates that the cluster meets the requirements of the OpenShift Data Foundation Operator.

blocking

Validates that the cluster meets the requirements of Container Native Virtualization.

blocking

Validates that the cluster meets the requirements of the Logical Volume Manager Operator.

blocking

Checks the validity of the network type if it exists.

Chapter 11. Network configuration

This section describes the basics of network configuration using the Assisted Installer.

11.1. Cluster networking

OpenShift uses various network types and addresses, as listed below.

  • clusterNetwork: The IP address pools from which Pod IP addresses are allocated.
  • serviceNetwork: The IP address pool for services.
  • machineNetwork: The IP address blocks for machines forming the cluster.
  • apiVIP: The VIP to use for API communication. This setting must either be provided or preconfigured in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address.
  • apiVIPs: The VIPs to use for API communication. This setting must either be provided or preconfigured in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting.
  • ingressVIP: The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address.
  • ingressVIPs: The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting.

OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept multiple IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings replace apiVIP and ingressVIP, but you must set both the new and the old settings when modifying the configuration using the API.

Depending on the desired network stack, you can choose different network controllers. Currently, the Assisted Service can deploy OpenShift Container Platform clusters using one of the following configurations:

  • IPv4
  • IPv6
  • Dual-stack (IPv4 + IPv6)

Supported network controllers depend on the selected stack and are summarized in the table below. For a detailed Container Network Interface (CNI) network provider feature comparison, refer to the OCP Networking documentation .

Stack | SDN | OVN
IPv4 | Yes | Yes
IPv6 | No | Yes
Dual-stack | No | Yes

OVN is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. SDN is supported up to OpenShift Container Platform 4.14, but not for OpenShift Container Platform 4.15 and later releases.

11.1.1. Limitations

11.1.1.1. SDN

  • The SDN controller is not supported with single-node OpenShift.
  • The SDN controller does not support IPv6.
  • The SDN controller is not supported for OpenShift Container Platform 4.15 and later releases. For more information, see Deprecation of the OpenShift SDN network plugin in the OpenShift Container Platform release notes.

11.1.1.2. OVN-Kubernetes

Please see the OVN-Kubernetes limitations section in the OCP documentation .

11.1.2. Cluster network

The cluster network is a network from which every Pod deployed in the cluster gets its IP address. Given that the workload may live across many nodes forming the cluster, it’s important for the network provider to be able to easily find an individual node based on the Pod’s IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix .

The host prefix specifies the length of the subnet assigned to each individual node in the cluster. Below is an example of how a cluster might assign addresses in a multi-node cluster:
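As an illustration only, using the common OpenShift default values rather than values taken from this document, a snippet of this kind produces the per-node ranges listed below:

```yaml
clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
```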

Creating a 3-node cluster using the snippet above may create the following network topology:

  • Pods scheduled in node #1 get IPs from 10.128.0.0/23
  • Pods scheduled in node #2 get IPs from 10.128.2.0/23
  • Pods scheduled in node #3 get IPs from 10.128.4.0/23

Explaining OVN-K8s internals is out of scope for this document, but the pattern described above provides a way to route Pod-to-Pod traffic between different nodes without keeping a big list of mapping between Pods and their corresponding nodes.

11.1.3. Machine network

The machine network is a network used by all the hosts forming the cluster to communicate with each other. This is also the subnet that must include the API and Ingress VIPs.

11.1.4. SNO compared to multi-node cluster

Depending on whether you are deploying a Single Node OpenShift or a multi-node cluster, different values are mandatory. The table below explains this in more detail.

Parameter | SNO | Multi-Node Cluster with DHCP mode | Multi-Node Cluster without DHCP mode
clusterNetwork | Required | Required | Required
serviceNetwork | Required | Required | Required
machineNetwork | Auto-assign possible (*) | Auto-assign possible (*) | Auto-assign possible (*)
apiVIP | Forbidden | Forbidden | Required
apiVIPs | Forbidden | Forbidden | Required in 4.12 and later releases
ingressVIP | Forbidden | Forbidden | Required
ingressVIPs | Forbidden | Forbidden | Required in 4.12 and later releases

(*) Auto assignment of the machine network CIDR happens if there is only a single host network. Otherwise you need to specify it explicitly.

11.1.5. Air-gapped environments

The workflow for deploying a cluster without Internet access has some prerequisites which are out of scope of this document. You may consult the Zero Touch Provisioning the hard way Git repository for some insights.

11.2. VIP DHCP allocation

The VIP DHCP allocation is a feature allowing users to skip the requirement of manually providing virtual IPs for API and Ingress by leveraging the ability of a service to automatically assign those IP addresses from the DHCP server.

If you enable the feature, instead of using api_vips and ingress_vips from the cluster configuration, the service sends a lease allocation request and, based on the reply, uses the resulting VIPs. The service allocates the IP addresses from the machine network.

Please note this is not an OpenShift Container Platform feature and it has been implemented in the Assisted Service to make the configuration easier.

VIP DHCP allocation is currently limited to the OpenShift Container Platform SDN network type. SDN is not supported from OpenShift Container Platform version 4.15 and later. Therefore, support for VIP DHCP allocation is also ending from OpenShift Container Platform 4.15 and later.

11.2.1. Example payload to enable autoallocation
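A minimal sketch of such a payload, assuming it is sent as a PATCH to the /v2/clusters/{cluster_id} endpoint and that the field names follow the Assisted Service API; the CIDR is a placeholder:

```json
{
  "vip_dhcp_allocation": true,
  "network_type": "OpenShiftSDN",
  "machine_networks": [
    { "cidr": "192.168.127.0/24" }
  ]
}
```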

11.2.2. Example payload to disable autoallocation

11.3. Additional resources

  • Bare metal IPI documentation provides additional explanation of the syntax for the VIP addresses.

11.4. Understanding differences between user- and cluster-managed networking

User managed networking is a feature in the Assisted Installer that allows customers with non-standard network topologies to deploy OpenShift Container Platform clusters. Examples include:

  • Customers with an external load balancer who do not want to use keepalived and VRRP for handling VIP addresses.
  • Deployments with cluster nodes distributed across many distinct L2 network segments.

11.4.1. Validations

There are various network validations happening in the Assisted Installer before it allows the installation to start. When you enable User Managed Networking, the following validations change:

  • L3 connectivity check (ICMP) is performed instead of L2 check (ARP)

11.5. Static network configuration

You may use static network configurations when generating or updating the discovery ISO.

11.5.1. Prerequisites

  • You are familiar with NMState .

11.5.2. NMState configuration

The NMState file in YAML format specifies the desired network configuration for the host. It has the logical names of the interfaces that will be replaced with the actual name of the interface at discovery time.

11.5.2.1. Example of NMState configuration
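A partial sketch only, with placeholder addresses and the logical interface name eth0; adjust it to your environment:

```yaml
dns-resolver:
  config:
    server:
      - 192.168.127.1
interfaces:
  - name: eth0
    type: ethernet
    state: up
    ipv4:
      enabled: true
      dhcp: false
      address:
        - ip: 192.168.127.10
          prefix-length: 24
routes:
  config:
    - destination: 0.0.0.0/0
      next-hop-address: 192.168.127.1
      next-hop-interface: eth0
```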

11.5.3. MAC interface mapping

The MAC interface map is an attribute that maps the logical interfaces defined in the NMState configuration to the actual interfaces present on the host.

The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should only contain an entry for parent interfaces.

11.5.3.1. Example of MAC interface mapping
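A sketch with a placeholder MAC address, following the mac_interface_map structure used by the Assisted Service API:

```json
"mac_interface_map": [
  { "mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0" }
]
```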

11.5.4. Additional NMState configuration examples

The examples below show only partial configurations. They are not meant to be used as-is; always adjust them to the environment where they will be used. If used incorrectly, they can leave your machines with no network connectivity.

11.5.4.1. Tagged VLAN
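A partial NMState sketch for a tagged VLAN, assuming a parent interface named eth1 and VLAN ID 404 as placeholders:

```yaml
interfaces:
  - name: eth1.404
    type: vlan
    state: up
    vlan:
      base-iface: eth1
      id: 404
    ipv4:
      enabled: true
      dhcp: true
```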

11.5.4.2. Network bond

11.6. Applying a static network configuration with the API

You can apply a static network configuration using the Assisted Installer API.

  • You have created an infrastructure environment using the API or have created a cluster using the web console.
  • You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml .

Create a temporary file /tmp/request-body.txt with the API request:
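A sketch of one way to build the request body with jq, assuming the static_network_config, network_yaml, and mac_interface_map field names of the Assisted Service API; the MAC addresses are placeholders:

```bash
jq -n --arg YAML_A "$(cat server-a.yaml)" --arg YAML_B "$(cat server-b.yaml)" \
'{
  "static_network_config": [
    { "network_yaml": $YAML_A,
      "mac_interface_map": [ { "mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0" } ] },
    { "network_yaml": $YAML_B,
      "mac_interface_map": [ { "mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth0" } ] }
  ]
}' > /tmp/request-body.txt
```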

Send the request to the Assisted Service API endpoint:
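A sketch, assuming the hosted service at api.openshift.com and exported INFRA_ENV_ID and API_TOKEN variables:

```bash
curl -X PATCH \
  "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d @/tmp/request-body.txt
```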

11.7. Additional resources

  • Applying a static network configuration with the web console

11.8. Converting to dual-stack networking

Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets.

11.8.1. Prerequisites

  • You are familiar with OVN-K8s documentation

11.8.2. Example payload for Single Node OpenShift
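A sketch only, using documentation-range and default-style addresses as placeholders and assuming the cluster_networks, service_networks, and machine_networks field names of the Assisted Service API; substitute the CIDRs for your environment and keep IPv4 entries first:

```json
{
  "network_type": "OVNKubernetes",
  "cluster_networks": [
    { "cidr": "10.128.0.0/14", "host_prefix": 23 },
    { "cidr": "fd01::/48", "host_prefix": 64 }
  ],
  "service_networks": [
    { "cidr": "172.30.0.0/16" },
    { "cidr": "fd02::/112" }
  ],
  "machine_networks": [
    { "cidr": "192.168.127.0/24" },
    { "cidr": "2001:db8::/120" }
  ]
}
```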

11.8.3. Example payload for an OpenShift Container Platform cluster consisting of many nodes

11.8.4. Limitations

The api_vips IP address and ingress_vips IP address settings must be of the primary IP address family when using dual-stack networking, which must be IPv4 addresses. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values.

11.9. Additional resources

  • Understanding OpenShift networking
  • OpenShift SDN - CNI network provider
  • OVN-Kubernetes - CNI network provider
  • Dual-stack Service configuration scenarios
  • Installing on bare metal OCP .
  • Cluster Network Operator configuration .

Chapter 12. Expanding the cluster

You can expand a cluster installed with the Assisted Installer by adding hosts using the user interface or the API.

  • API connectivity failure when adding nodes to a cluster
  • Configuring multi-architecture compute machines on an OpenShift cluster

12.1. Checking for multi-architecture support

You must check that your cluster can support multiple architectures before you add a node with a different architecture.

  • Log in to the cluster using the CLI.

Check that your cluster uses the multi-architecture payload by running the following command:
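For instance, you can inspect the release payload metadata:

```bash
oc adm release info -o jsonpath="{ .metadata.metadata}"
```

A multi-architecture payload reports the release.openshift.io/architecture annotation with the value multi in this output.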

Verification

If you see the following output, your cluster supports multiple architectures:

12.2. Installing a multi-architecture cluster

A cluster with an x86_64 control plane can support worker nodes that have two different CPU architectures. Mixed-architecture clusters combine the strengths of each architecture and support a variety of workloads.

For example, you can add arm64, IBM Power, or IBM zSystems worker nodes to an existing OpenShift Container Platform cluster with an x86_64 control plane.

The main steps of the installation are as follows:

  • Create and register a multi-architecture cluster.
  • Create an x86_64 infrastructure environment, download the ISO discovery image for x86_64 , and add the control plane. The control plane must have the x86_64 architecture.
  • Create an arm64 , IBM Power , or IBM zSystems infrastructure environment, download the ISO discovery images for arm64 , IBM Power or IBM zSystems , and add the worker nodes.

Supported platforms

The table below lists the platforms that support a mixed-architecture cluster for each OpenShift Container Platform version. Use the appropriate platforms for the version you are installing.

OpenShift Container Platform version | Supported platforms | Day 1 control plane architecture | Day 2 node architecture

4.12.0

4.13.0

4.14.0

Technology Preview (TP) features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .

  • Start the procedure for installing OpenShift Container Platform using the API. For details, see Installing with the Assisted Installer API in the Additional Resources section.

When you reach the "Registering a new cluster" step of the installation, register the cluster as a multi-architecture cluster:
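As a sketch, the registration payload differs from a standard registration mainly in the cpu_architecture field; the other values shown are placeholders:

```json
{
  "name": "<cluster_name>",
  "openshift_version": "<openshift_version>",
  "cpu_architecture": "multi",
  "pull_secret": "<pull_secret>"
}
```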

When you reach the "Registering a new infrastructure environment" step of the installation, set cpu_architecture to x86_64 :
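For example, as a sketch with placeholder values:

```json
{
  "name": "<infraenv_name>",
  "cluster_id": "<cluster_id>",
  "cpu_architecture": "x86_64",
  "pull_secret": "<pull_secret>"
}
```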

When you reach the "Adding hosts" step of the installation, set host_role to master :
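For example, the relevant fragment of the host update payload might be:

```json
{ "host_role": "master" }
```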

For more information, see Assigning Roles to Hosts in Additional Resources .

  • Download the discovery image for the x86_64 architecture.
  • Boot the x86_64 architecture hosts using the generated discovery image.
  • Start the installation and wait for the cluster to be fully installed.

Repeat the "Registering a new infrastructure environment" step of the installation. This time, set cpu_architecture to one of the following: ppc64le (for IBM Power), s390x (for IBM Z), or arm64 . For example:
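As a sketch for an arm64 infrastructure environment, with placeholder values:

```json
{
  "name": "<infraenv_name>",
  "cluster_id": "<cluster_id>",
  "cpu_architecture": "arm64",
  "pull_secret": "<pull_secret>"
}
```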

Repeat the "Adding hosts" step of the installation. This time, set host_role to worker :

For more details, see Assigning Roles to Hosts in Additional Resources .

  • Download the discovery image for the arm64 , ppc64le or s390x architecture.
  • Boot the architecture hosts using the generated discovery image.

View the arm64 , ppc64le or s390x worker nodes in the cluster by running the following command:
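One way to do this is to display the standard kubernetes.io/arch node label as a column:

```bash
oc get nodes -L kubernetes.io/arch
```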

12.3. Adding hosts with the web console

You can add hosts to clusters that were created using the Assisted Installer .

Adding hosts to Assisted Installer clusters is only supported for clusters running OpenShift Container Platform version 4.11 and up.

  • Log in to OpenShift Cluster Manager and click the cluster that you want to expand.
  • Click Add hosts and download the discovery ISO for the new host, adding an SSH public key and configuring cluster-wide proxy settings as needed.
  • Optional: Modify ignition files as needed.
  • Boot the target host using the discovery ISO, and wait for the host to be discovered in the console.
  • Select the host role. It can be either a worker or a control plane host.
  • Start the installation.

As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. When prompted, approve the pending CSRs to complete the installation.

When the host is successfully installed, it is listed as a host in the cluster web console.

New hosts will be encrypted using the same method as the original cluster.

12.4. Adding hosts with the API

You can add hosts to clusters using the Assisted Installer REST API.

  • Install the OpenShift Cluster Manager CLI ( ocm ).
  • Log in to OpenShift Cluster Manager as a user with cluster creation privileges.
  • Ensure that all the required DNS records exist for the cluster that you want to expand.
  • Authenticate against the Assisted Installer REST API and generate an API token for your session. The generated token is valid for 15 minutes only.

Set the $API_URL variable by running the following command:
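For example, when using the hosted Assisted Installer service:

```bash
export API_URL=https://api.openshift.com
```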

Import the cluster by running the following commands:

Set the $CLUSTER_ID variable. Log in to the cluster and run the following command:
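One way to read the OpenShift cluster ID, shown here as a sketch:

```bash
export CLUSTER_ID=$(oc get clusterversion version -o jsonpath='{.spec.clusterID}')
```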

Set the $CLUSTER_REQUEST variable that is used to import the cluster:

Import the cluster and set the $CLUSTER_ID variable. Run the following command:

Generate the InfraEnv resource for the cluster and set the $INFRA_ENV_ID variable by running the following commands:

  • Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com .

Set the $INFRA_ENV_REQUEST variable:

Post the $INFRA_ENV_REQUEST to the /v2/infra-envs API and set the $INFRA_ENV_ID variable:

Get the URL of the discovery ISO for the cluster host by running the following command:

Download the ISO:
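A sketch covering both steps, assuming the download URL is exposed as the download_url field of the infrastructure environment:

```bash
# Read the ISO download URL from the infra-env, then fetch the image
ISO_URL=$(curl -s "${API_URL}/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.download_url')
curl -L -o rhcos-live-minimal.iso "${ISO_URL}"
```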

  • Boot the new worker host from the downloaded rhcos-live-minimal.iso .

Get the list of hosts in the cluster that are not installed. Keep running the following command until the new host shows up:

Set the $HOST_ID variable for the new host, for example:

Check that the host is ready to install by running the following command:

Ensure that you copy the entire command including the complete jq expression.

When the previous command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command:
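As a sketch, using the endpoint named above with the exported variables from earlier steps:

```bash
curl -X POST \
  "${API_URL}/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/${HOST_ID}/actions/install" \
  -H "Authorization: Bearer ${API_TOKEN}"
```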

As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host.

You must approve the CSRs to complete the installation.

Keep running the following API call to monitor the cluster installation:

Optional: Run the following command to see all the events for the cluster:

  • Log in to the cluster and approve the pending CSRs to complete the installation.

Check that the new host was successfully added to the cluster with a status of Ready :
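For example:

```bash
oc get nodes
```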

12.5. Installing a primary control plane node on a healthy cluster

This procedure describes how to install a primary control plane node on a healthy OpenShift Container Platform cluster.

If the cluster is unhealthy, additional operations are required before it can be managed. See Additional Resources for more information.

  • You have installed a healthy cluster with a minimum of three nodes.
  • You have assigned role: master to a single node.

Retrieve pending CertificateSigningRequests (CSRs):

Approve pending CSRs:
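For example, using standard oc commands:

```bash
# List CSRs, then approve each request that is still Pending
oc get csr
oc adm certificate approve <csr_name>
```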

Confirm the primary node is in Ready status:

The etcd-operator requires a Machine Custom Resource (CR) referencing the new node when the cluster runs with a functional Machine API.

Link the Machine CR with BareMetalHost and Node :

Create the BareMetalHost CR with a unique .metadata.name value:

Apply the BareMetalHost CR:

Create the Machine CR using the unique .machine.name value:

Apply the Machine CR:

Link BareMetalHost , Machine , and Node using the link-machine-and-node.sh script:

Confirm etcd members:

Confirm the etcd-operator configuration applies to all nodes:

Confirm etcd-operator health:

Confirm node health:

Confirm the ClusterOperators health:

Confirm the ClusterVersion :

Remove the old control plane node:

Delete the BareMetalHost CR:

Confirm the Machine is unhealthy:

Delete the Machine CR:

Confirm removal of the Node CR:

Check etcd-operator logs to confirm status of the etcd cluster:

Remove the physical machine to allow etcd-operator to reconcile the cluster members:

  • Installing a primary control plane node on an unhealthy cluster

12.6. Installing a primary control plane node on an unhealthy cluster

This procedure describes how to install a primary control plane node on an unhealthy OpenShift Container Platform cluster.

  • You have created a control plane.

Confirm initial state of the cluster:

Confirm the etcd-operator detects the cluster as unhealthy:

Confirm the etcdctl members:

Confirm that etcdctl reports an unhealthy member of the cluster:

Remove the unhealthy control plane by deleting the Machine Custom Resource:

The Machine and Node Custom Resources (CRs) will not be deleted if the unhealthy cluster cannot run successfully.

Confirm that etcd-operator has not removed the unhealthy machine:

Remove the unhealthy etcdctl member manually:

Remove the unhealthy cluster by deleting the etcdctl member Custom Resource:

Confirm members of etcdctl by running the following command:

Review and approve Certificate Signing Requests

Review the Certificate Signing Requests (CSRs):

Approve all pending CSRs:

Confirm ready status of the control plane node:

Validate the Machine , Node and BareMetalHost Custom Resources.

The etcd-operator requires Machine CRs to be present if the cluster is running with the functional Machine API. Machine CRs are displayed during the Running phase when present.

Create Machine Custom Resource linked with BareMetalHost and Node .

Make sure there is a Machine CR referencing the newly added node.

Boot-it-yourself will not create BareMetalHost and Machine CRs, so you must create them. Failure to create the BareMetalHost and Machine CRs will generate errors when running etcd-operator .

Add BareMetalHost Custom Resource:

Add Machine Custom Resource:

Link BareMetalHost , Machine , and Node by running the link-machine-and-node.sh script:

Confirm the etcd operator has configured all nodes:

Confirm health of etcdctl :

Confirm the health of the nodes:

Confirm the health of the ClusterOperators :

12.7. Additional resources

  • Installing a primary control plane node on a healthy cluster
  • Authenticating with the REST API

Chapter 13. Optional: Installing on Nutanix

If you install OpenShift Container Platform on Nutanix, the Assisted Installer can integrate the OpenShift Container Platform cluster with the Nutanix platform, which exposes the Machine API to Nutanix and enables autoscaling and dynamically provisioning storage containers with the Nutanix Container Storage Interface (CSI).

To deploy an OpenShift Container Platform cluster and maintain its daily operation, you need access to a Nutanix account with the necessary environment requirements. For details, see Environment requirements .

13.1. Adding hosts on Nutanix with the UI

To add hosts on Nutanix with the user interface (UI), generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.

After this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.

  • You have created a cluster profile in the Assisted Installer UI.
  • You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name.
  • In Cluster details , select Nutanix from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional.
  • In Host discovery, click the Add hosts button.

Optional: Add an SSH public key so that you can connect to the Nutanix VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation.

Select the desired provisioning type.

Minimal image file: Provision with virtual media downloads a smaller image that will fetch the data needed to boot.

In Networking , select Cluster-managed networking . Nutanix does not support User-managed networking .

  • Optional: Configure the discovery image if you want to boot it with an ignition file. See Configuring the discovery image for additional details.
  • Click Generate Discovery ISO .
  • Copy the Discovery ISO URL .
  • In the Nutanix Prism UI, follow the directions to upload the discovery image from the Assisted Installer .

In the Nutanix Prism UI, create the control plane (master) VMs through Prism Central .

  • Enter the Name . For example, control-plane or master .
  • Enter the Number of VMs . This should be 3 for the control plane.
  • Ensure the remaining settings meet the minimum requirements for control plane hosts.

In the Nutanix Prism UI, create the worker VMs through Prism Central .

  • Enter the Name . For example, worker .
  • Enter the Number of VMs . You should create at least 2 worker nodes.
  • Ensure the remaining settings meet the minimum requirements for worker hosts.
  • Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each of them have a Ready status.
  • Continue with the installation procedure.

13.2. Adding hosts on Nutanix with the API

To add hosts on Nutanix with the API, generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.

Once this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines.

  • You have set up the Assisted Installer API authentication.
  • You have created an Assisted Installer cluster profile.
  • You have created an Assisted Installer infrastructure environment.
  • You have completed the Assisted Installer cluster configuration.
  • Configure the discovery image if you want it to boot with an ignition file.

Create a Nutanix cluster configuration file to hold the environment variables:
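As a sketch, assuming a file named ~/nutanix-cluster-env.sh:

```bash
touch ~/nutanix-cluster-env.sh
```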

If you have to start a new terminal session, you can reload the environment variables easily. For example:
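For example, assuming the file name used above:

```bash
source ~/nutanix-cluster-env.sh
```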

Assign the Nutanix cluster’s name to the NTX_CLUSTER_NAME environment variable in the configuration file:
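For example:

```bash
cat << EOF >> ~/nutanix-cluster-env.sh
export NTX_CLUSTER_NAME=<cluster_name>
EOF
```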

Replace <cluster_name> with the name of the Nutanix cluster.

Assign the Nutanix cluster’s subnet name to the NTX_SUBNET_NAME environment variable in the configuration file:

Replace <subnet_name> with the name of the Nutanix cluster’s subnet.

Create the Nutanix image configuration file:

Replace <image_url> with the image URL downloaded from the previous step.

Create the Nutanix image:

Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440.

Assign the returned UUID to the NTX_IMAGE_UUID environment variable in the configuration file:

Get the Nutanix cluster UUID:

Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <nutanix_cluster_name> with the name of the Nutanix cluster.

Assign the returned Nutanix cluster UUID to the NTX_CLUSTER_UUID environment variable in the configuration file:

Replace <uuid> with the returned UUID of the Nutanix cluster.

Get the Nutanix cluster’s subnet UUID:

Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <subnet_name> with the name of the cluster’s subnet.

Assign the returned Nutanix subnet UUID to the NTX_SUBNET_UUID environment variable in the configuration file:

Replace <uuid> with the returned UUID of the cluster subnet.

Ensure the Nutanix environment variables are set:

Create a VM configuration file for each Nutanix host. Create three control plane (master) VMs and at least two worker VMs. For example:

Replace <host_name> with the name of the host.

Boot each Nutanix virtual machine:

Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440. Replace <vm_config_file_name> with the name of the VM configuration file.

Assign the returned VM UUID to a unique environment variable in the configuration file:

Replace <uuid> with the returned UUID of the VM.

The environment variable must have a unique name for each VM.

Wait until the Assisted Installer has discovered each VM and they have passed validation.

Modify the cluster definition to enable integration with Nutanix:

13.3. Nutanix postinstallation configuration

Follow the steps below to complete and validate the OpenShift Container Platform integration with the Nutanix cloud provider.

  • The Assisted Installer has finished installing the cluster successfully.
  • The cluster is connected to console.redhat.com .
  • You have access to the Red Hat OpenShift Container Platform command line interface.

13.3.1. Updating the Nutanix configuration settings

After installing OpenShift Container Platform on the Nutanix platform using the Assisted Installer, you must update the following Nutanix configuration settings manually:

  • <prismcentral_username> : The Nutanix Prism Central username.
  • <prismcentral_password> : The Nutanix Prism Central password.
  • <prismcentral_address> : The Nutanix Prism Central address.
  • <prismcentral_port> : The Nutanix Prism Central port.
  • <prismelement_username> : The Nutanix Prism Element username.
  • <prismelement_password> : The Nutanix Prism Element password.
  • <prismelement_address> : The Nutanix Prism Element address.
  • <prismelement_port> : The Nutanix Prism Element port.
  • <prismelement_clustername> : The Nutanix Prism Element cluster name.
  • <nutanix_storage_container> : The Nutanix Prism storage container.

In the OpenShift Container Platform command line interface, update the Nutanix cluster configuration settings:

For additional details, see Creating a machine set on Nutanix .

Create the Nutanix secret:

When installing OpenShift Container Platform version 4.13 or later, update the Nutanix cloud provider configuration:

Get the Nutanix cloud provider configuration YAML file:

Create a backup of the configuration file:

Open the configuration YAML file:

Edit the configuration YAML file as follows:

Apply the configuration updates:

13.3.2. Creating the Nutanix CSI Operator group

Create an Operator group for the Nutanix CSI Operator.

For a description of operator groups and related concepts, see Common Operator Framework Terms in Additional Resources .

Open the Nutanix CSI Operator Group YAML file:

Edit the YAML file as follows:

Create the Operator Group:

13.3.3. Installing the Nutanix CSI Operator

The Nutanix Container Storage Interface (CSI) Operator for Kubernetes deploys and manages the Nutanix CSI Driver.

For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the Operator section of the Nutanix CSI Operator document in Additional Resources .

Get the parameter values for the Nutanix CSI Operator YAML file:

Check that the Nutanix CSI Operator exists:

Assign the default channel for the Operator to a BASH variable:

Assign the starting cluster service version (CSV) for the Operator to a BASH variable:

Assign the catalog source for the subscription to a BASH variable:

Assign the Nutanix CSI Operator source namespace to a BASH variable:

Create the Nutanix CSI Operator YAML file using the BASH variables:

Create the CSI Nutanix Operator:

Run the following command until the Operator subscription state changes to AtLatestKnown . This indicates that the Operator subscription has been created, and may take some time.

13.3.4. Deploying the Nutanix CSI storage driver

The Nutanix Container Storage Interface (CSI) Driver for Kubernetes provides scalable and persistent storage for stateful applications.

For instructions on performing this step through the OpenShift Container Platform web console, see the Installing the CSI Driver using the Operator section of the Nutanix CSI Operator document in Additional Resources .

Create a NutanixCsiStorage resource to deploy the driver:

Create a Nutanix secret YAML file for the CSI storage driver:

13.3.5. Validating the postinstallation configurations

Run the following steps to validate the configuration.

Verify that you can create a storage class:

Verify that you can create the Nutanix persistent volume claim (PVC):

Create the persistent volume claim (PVC):

Validate that the persistent volume claim (PVC) status is Bound:

  • Creating a machine set on Nutanix .
  • Nutanix CSI Operator
  • Storage Management
  • Common Operator Framework Terms

Chapter 14. Optional: Installing on vSphere

The Assisted Installer integrates the OpenShift Container Platform cluster with the vSphere platform, which exposes the Machine API to vSphere and enables autoscaling.

14.1. Adding hosts on vSphere

You can add hosts to the Assisted Installer cluster using the online vSphere client or the govc vSphere CLI tool. The following procedure demonstrates adding hosts with the govc CLI tool. To use the online vSphere Client, refer to the documentation for vSphere.

To add hosts on vSphere with the vSphere govc CLI, generate the discovery image ISO from the Assisted Installer. The minimal discovery image ISO is the default setting. This image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size.

After this is complete, you must create an image for the vSphere platform and create the vSphere virtual machines.

  • You are using vSphere 7.0.2 or higher.
  • You have the vSphere govc CLI tool installed and configured.
  • You have set clusterSet disk.enableUUID to true in vSphere.
  • You have created a cluster in the Assisted Installer web console, or
  • You have created an Assisted Installer cluster profile and infrastructure environment with the API.
  • You have exported your infrastructure environment ID in your shell as $INFRA_ENV_ID .
  • In Cluster details , select vSphere from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional.
  • In Host discovery , click the Add hosts button and select the provisioning type.

Add an SSH public key so that you can connect to the vSphere VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation.

Select the desired discovery image ISO.

In Networking , select Cluster-managed networking or User-managed networking :

  • Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy or the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates and add the additional certificates.

Download the discovery ISO:
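For example:

```bash
curl -L -o discovery.iso "<discovery_url>"
```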

Replace <discovery_url> with the Discovery ISO URL from the preceding step.

On the command line, power down and destroy any preexisting virtual machines:

Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder.

Remove preexisting ISO images from the data store, if there are any:

Replace <iso_datastore> with the name of the data store. Replace image with the name of the ISO image.

Upload the Assisted Installer discovery ISO:

Replace <iso_datastore> with the name of the data store.

All nodes in the cluster must boot from the discovery image.

Boot three control plane (master) nodes:
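A sketch using govc vm.create; the resource values are illustrative placeholders, so check the current minimum requirements for control plane nodes before using them:

```bash
for n in 1 2 3; do
  govc vm.create \
    -net.adapter=vmxnet3 \
    -disk.controller=pvscsi \
    -ds=<datastore> \
    -folder=<folder_name> \
    -c=4 -m=16384 -disk=120GB \
    -iso-datastore=<iso_datastore> \
    -iso=discovery.iso \
    -on=false \
    control-plane-${n}
done
```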

See vm.create for details.

The foregoing example illustrates the minimum required resources for control plane nodes.

Boot at least two worker nodes:
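A similar sketch for worker nodes, again with illustrative resource values:

```bash
for n in 1 2; do
  govc vm.create \
    -net.adapter=vmxnet3 \
    -disk.controller=pvscsi \
    -ds=<datastore> \
    -folder=<folder_name> \
    -c=2 -m=8192 -disk=120GB \
    -iso-datastore=<iso_datastore> \
    -iso=discovery.iso \
    -on=false \
    worker-${n}
done
```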

The foregoing example illustrates the minimum required resources for worker nodes.

Ensure the VMs are running:

After 2 minutes, shut down the VMs:

Set the disk.enableUUID setting to TRUE :
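For example, repeating the command for every VM in the cluster:

```bash
govc vm.change -vm <vm_name> -e disk.enableUUID=TRUE
```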

You must set disk.enableUUID to TRUE on all of the nodes to enable autoscaling with vSphere.

Restart the VMs:

  • Select roles if needed.
  • In Networking , uncheck Allocate IPs via DHCP server .
  • Set the API VIP address.
  • Set the Ingress VIP address.

14.2. vSphere postinstallation configuration using the CLI

After installing an OpenShift Container Platform cluster using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually:

  • vCenter username
  • vCenter password
  • vCenter address
  • vCenter cluster

Generate a base64-encoded username and password for vCenter:
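For example:

```bash
echo -n "<vcenter_username>" | base64 -w0
echo -n "<vcenter_password>" | base64 -w0
```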

Replace <vcenter_username> with your vCenter username.

Replace <vcenter_password> with your vCenter password.

Backup the vSphere credentials:

Edit the vSphere credentials:

Replace <vcenter_address> with the vCenter address. Replace <vcenter_username_encoded> with the base64-encoded version of your vSphere username. Replace <vcenter_password_encoded> with the base64-encoded version of your vSphere password.

Replace the vSphere credentials:

Redeploy the kube-controller-manager pods:

Backup the vSphere cloud provider configuration:

Edit the cloud provider configuration:

Replace <vcenter_address> with the vCenter address. Replace <datacenter> with the name of the data center. Replace <datastore> with the name of the data store. Replace <folder> with the folder containing the cluster VMs.

Apply the cloud provider configuration:

Taint the nodes with the uninitialized taint:

Follow steps 9 through 12 if you are installing OpenShift Container Platform 4.13 or later.

Identify the nodes to taint:

Run the following command for each node:
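A sketch using the standard uninitialized cloud-provider taint:

```bash
oc adm taint nodes <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
```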

Replace <node_name> with the name of the node.

Back up the infrastructures configuration:

Edit the infrastructures configuration:

Replace <vcenter_address> with your vCenter address. Replace <datacenter> with the name of your vCenter data center. Replace <datastore> with the name of your vCenter data store. Replace <folder> with the folder containing the cluster VMs. Replace <vcenter_cluster> with the vSphere vCenter cluster where OpenShift Container Platform is installed.

Apply the infrastructures configuration:

14.3. vSphere postinstallation configuration using the web console

  • Default data store
  • Virtual machine folder
  • In the Administrator perspective, navigate to Home → Overview .
  • Under Status , click vSphere connection to open the vSphere connection configuration wizard.
  • In the vCenter field, enter the network address of the vSphere vCenter server. This can be either a domain name or an IP address. It appears in the vSphere web client URL; for example https://[your_vCenter_address]/ui .

In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed.

This step is mandatory if you installed OpenShift Container Platform 4.13 or later.

  • In the Username field, enter your vSphere vCenter username.

In the Password field, enter your vSphere vCenter password.

The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable.

  • In the Datacenter field, enter the name of the vSphere data center that contains the virtual machines used to host the cluster; for example, SDDC-Datacenter .

In the Default data store field, enter the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename .

Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes .

  • In the Virtual Machine Folder field, enter the data center folder that contains the virtual machine of the cluster; for example, /SDDC-Datacenter/vm/ci-ln-hjg4vg2-c61657-t2gzr . For the OpenShift Container Platform installation to succeed, all virtual machines comprising the cluster must be located in a single data center folder.
  • Click Save Configuration . This updates the cloud-provider-config file in the openshift-config namespace, and starts the configuration process.
  • Reopen the vSphere connection configuration wizard and expand the Monitored operators panel. Check that the status of the operators is either Progressing or Healthy .

The connection configuration process updates operator statuses and control plane nodes. It takes approximately an hour to complete. During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected.

Follow the steps below to monitor the configuration process.

Check that the configuration process completed successfully:

  • In the OpenShift Container Platform Administrator perspective, navigate to Home → Overview .
  • Under Status click Operators . Wait for all operator statuses to change from Progressing to All succeeded . A Failed status indicates that the configuration failed.
  • Under Status , click Control Plane . Wait for the response rate of all Control Plane components to return to 100%. A Failed control plane component indicates that the configuration failed.

A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again.

Check that you are able to bind PersistentVolumeClaims objects by performing the following steps:

Create a StorageClass object using the following YAML:

Create a PersistentVolumeClaims object using the following YAML:

For instructions, see Dynamic provisioning in the OpenShift Container Platform documentation. To troubleshoot a PersistentVolumeClaims object, navigate to Storage → PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform web console.

Chapter 15. Optional: Installing on Oracle Cloud Infrastructure (OCI)

From OpenShift Container Platform 4.14 and later versions, you can use the Assisted Installer to install a cluster on Oracle Cloud Infrastructure by using infrastructure that you provide. Oracle Cloud Infrastructure provides services that can meet your needs for regulatory compliance, performance, and cost-effectiveness. You can access OCI Resource Manager configurations to provision and configure OCI resources.

For OpenShift Container Platform 4.14 and 4.15, the OCI integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

This section is a summary of the steps required in the Assisted Installer web console to support the integration with Oracle Cloud Infrastructure. It does not document the steps to be performed in Oracle Cloud Infrastructure, nor does it cover the integration between the two platforms. For a complete and comprehensive procedure, see Using the Assisted Installer to install a cluster on OCI .

15.1. Generating an OCI-compatible discovery ISO image

Generate the discovery ISO image in Assisted Installer by completing the required steps. You must upload the image to the Oracle Cloud Infrastructure before you install OpenShift Container Platform on Oracle Cloud Infrastructure.

  • You created a child compartment and an object storage bucket on Oracle Cloud Infrastructure. See Creating OCI resources and services in the OpenShift Container Platform documentation.
  • You meet the requirements necessary for installing a cluster. For details, see Prerequisites .
  • On the Cluster type page, click the Datacenter tab.
  • In the Assisted Installer section, click Create cluster .

On the Cluster Details page, complete the following fields:

  • In the Cluster name field, specify the name of your cluster, such as ocidemo .
  • In the Base domain field, specify the base domain of the cluster, such as splat-oci.devcluster.openshift.com . Take the base domain from OCI after creating a compartment and a zone.
  • In the OpenShift version field, specify OpenShift 4.15 or a later version.
  • In the CPU architecture field, specify x86_64 or Arm64 .
  • From the Integrate with external partner platforms list, select Oracle Cloud Infrastructure . The Include custom manifests checkbox is automatically selected.
  • On the Operators page, click Next .

On the Host Discovery page, perform the following actions:

  • Click Add host to display a dialog box.
  • For the SSH public key field, upload a public SSH key from your local system. You can generate an SSH key pair with ssh-keygen .
  • Click Generate Discovery ISO to generate the discovery image ISO file.
  • Download the file to your local system. You will then upload the file to the bucket in Oracle Cloud Infrastructure as an Object.

15.2. Assigning node roles and custom manifests

After you provision Oracle Cloud Infrastructure (OCI) resources and upload OpenShift Container Platform custom manifest configuration files to OCI, you must complete the remaining cluster installation steps on the Assisted Installer before you can create an instance OCI.

  • You created a resource stack on OCI, and the stack includes the custom manifest configuration files and OCI Resource Manager configuration resources. For details, see Downloading manifest files and deployment resources in the OpenShift Container Platform documentation.
  • From the Red Hat Hybrid Cloud Console , go to the Host discovery page.
  • Under the Role column, assign a node role, either Control plane node or Worker , for each targeted hostname. Click Next .
  • Accept the default settings for the Storage and Networking pages.
  • Click Next to go to the Custom manifests page.
  • In the Folder field, select manifests .
  • In the File name field, enter a value such as oci-ccm.yml .
  • In the Content section, click Browse . Select the CCM manifest located in custom_manifest/manifests/oci-ccm.yml .

Click Add another manifest . Repeat the same steps for the following manifests provided by Oracle:

  • CSI driver manifest: custom_manifest/manifests/oci-csi.yml .
  • CCM machine configuration: custom_manifest/openshift/machineconfig-ccm.yml .
  • CSI driver machine configuration: custom_manifest/openshift/machineconfig-csi.yml .
  • Complete the Review and create step to create your OpenShift Container Platform cluster on OCI.
  • Click Install cluster to finalize the cluster installation.

Chapter 16. Troubleshooting

There are cases where the Assisted Installer cannot begin the installation or the cluster fails to install properly. In these events, it is helpful to understand the likely failure modes as well as how to troubleshoot the failure.

16.1. Troubleshooting discovery ISO issues

The Assisted Installer uses an ISO image to run an agent that registers the host to the cluster and performs hardware and network validations before attempting to install OpenShift. You can follow these procedures to troubleshoot problems related to the host discovery.

Once you start the host with the discovery ISO image, the Assisted Installer discovers the host and presents it in the Assisted Service web console. See Configuring the discovery image for additional details.

16.1.1. Verify the discovery agent is running

  • You have created an infrastructure environment by using the API or have created a cluster by using the web console.
  • You booted a host with the Infrastructure Environment discovery ISO and the host failed to register.
  • You have SSH access to the host.
  • You provided an SSH public key in the "Add hosts" dialog before generating the Discovery ISO so that you can SSH into your machine without a password.
  • Verify that your host machine is powered on.
  • If you selected DHCP networking , check that the DHCP server is enabled.
  • If you selected Static IP, bridges and bonds networking, check that your configurations are correct.

Verify that you can access your host machine using SSH, a console such as the BMC, or a virtual machine console:
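For example, over SSH as the core user:

```bash
ssh core@<host_ip_address>
```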

You can specify the private key file by using the -i parameter if it is not stored in the default directory.

If you cannot SSH to the host, the host failed during boot or failed to configure the network.

Upon login you should see this message:

Example login

Check the agent service logs:
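For example, assuming the agent runs as the agent.service systemd unit:

```bash
sudo journalctl -u agent.service
```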

In the following example, the errors indicate there is a network issue:

Example agent service log

If there is an error pulling the agent image, check the proxy settings. Verify that the host is connected to the network. You can use nmcli to get additional information about your network configuration.

16.1.2. Verify the agent can access the assisted-service

  • You verified the discovery agent is running.

Check the agent logs to verify the agent can access the Assisted Service:

The errors in the following example indicate that the agent failed to access the Assisted Service.

Example agent log

Check the proxy settings you configured for the cluster. If configured, the proxy must allow access to the Assisted Service URL.

16.2. Troubleshooting minimal discovery ISO issues

The minimal ISO image should be used when bandwidth over the virtual media connection is limited. It includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The resulting ISO image is about 100MB in size compared to 1GB for the full ISO image.

16.2.1. Troubleshooting minimal ISO boot failure by interrupting the boot process

If your environment requires static network configuration to access the Assisted Installer service, any issues with that configuration might prevent the minimal ISO from booting properly. If the boot screen shows that the host has failed to download the root file system image, the network might not be configured correctly.

You can interrupt the kernel boot early in the bootstrap process, before the root file system image is downloaded. This allows you to access the root console and review the network configurations.

Example rootfs download failure (screenshot: failed root file system image download)

Add the .spec.kernelArguments stanza to the infraEnv object of the cluster you are deploying:
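
A minimal sketch of such a stanza is shown below. It assumes the InfraEnv custom resource from the agent-install.openshift.io/v1beta1 API and uses the dracut argument rd.break=initqueue to stop the boot before the root file system is downloaded; the bracketed names are placeholders:

  apiVersion: agent-install.openshift.io/v1beta1
  kind: InfraEnv
  metadata:
    name: <infraenv_name>
    namespace: <namespace>
  spec:
    kernelArguments:
      - operation: append    # add the argument to the discovery kernel command line
        value: rd.break=initqueue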

For details on modifying an infrastructure environment, see Additional Resources.

  • Wait for the related nodes to reboot automatically and for the boot to abort at the initqueue stage, before rootfs is downloaded. You will be redirected to the root console.

Identify and change the incorrect network configurations. Here are some useful diagnostic commands:

View system logs by using journalctl, for example:
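
The following shows error-level messages from the current boot:

  $ journalctl -b -p err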

View network connection information by using nmcli, as follows:
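
The following lists the configured connections and the state of each network device:

  $ nmcli connection show
  $ nmcli device show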

Check the configuration files for incorrect network connections, for example:
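
For example, NetworkManager keyfiles are typically stored under /etc/NetworkManager/system-connections/ (file names vary by environment):

  $ cat /etc/NetworkManager/system-connections/*.nmconnection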

  • Press Ctrl+D to resume the bootstrap process. The server downloads rootfs and completes the process.
  • Reopen the infraEnv object and remove the .spec.kernelArguments stanza.

Additional resources

  • Modifying an infrastructure environment

16.3. Correcting a host’s boot order

Once the installation that runs as part of the Discovery Image completes, the Assisted Installer reboots the host. The host must boot from its installation disk to continue forming the cluster. If you have not correctly configured the host’s boot order, it will boot from another disk instead, interrupting the installation.

If the host boots the discovery image again, the Assisted Installer will immediately detect this event and set the host’s status to Installing Pending User Action. Alternatively, if the Assisted Installer does not detect that the host has booted from the correct disk within the allotted time, it will also set this host status.

  • Reboot the host and set its boot order to boot from the installation disk. If you didn’t select an installation disk, the Assisted Installer selected one for you. To view the selected installation disk, click to expand the host’s information in the host inventory, and check which disk has the “Installation disk” role.

16.4. Rectifying partially-successful installations

There are cases where the Assisted Installer declares an installation to be successful even though it encountered errors:

  • If you requested to install OLM operators and one or more failed to install, log into the cluster’s console to remediate the failures.
  • If you requested to install more than two worker nodes and at least one failed to install, but at least two succeeded, add the failed workers to the installed cluster.

16.5. API connectivity failure when adding nodes to a cluster

When you add a node to an existing cluster as part of day 2 operations, the node downloads the ignition configuration file from the day 1 cluster. If the download fails and the node is unable to connect to the cluster, the status of the host in the Host discovery step changes to Insufficient. Clicking this status displays an error message describing the connectivity failure.

There are a number of possible reasons for the connectivity failure. Here are some recommended actions.

Check the IP address and domain name of the cluster:

  • Click the set the IP or domain used to reach the cluster hyperlink.
  • In the Update cluster hostname window, enter the correct IP address or domain name for the cluster.
  • Check your DNS settings to ensure that the DNS can resolve the domain that you provided.
  • Ensure that port 22624 is open in all firewalls; one way to check this is shown below.
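
For example, you can test whether the port is reachable from the host with curl; the connection phase of the verbose output shows whether the port is open (the host names are placeholders, and the exact URL path is not important for this check):

  $ curl -kv https://api.<cluster_name>.<base_domain>:22624/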

Check the agent logs of the host to verify that the agent can access the Assisted Service via SSH:
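
For example (assuming SSH access as the core user and the agent.service unit name used earlier):

  $ ssh core@<host_ip>
  $ sudo journalctl -u agent.service | grep -i assisted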

For more details, see Verify the agent can access the Assisted Service.

What is a "known bad pointer value"?

In this answer to this question, it was noted that:

If you're going to 'clear' the pointer in the dtor, a different idiom would be better - set the pointer to a known bad pointer value.

and also that the destructor should be:

~Foo() { delete bar; if (DEBUG) bar = (bar_type*)(long_ptr)(0xDEADBEEF); }

I have two question about these parts of the answer.

Firstly, how can you set the pointer to a known bad pointer value? How can you set a pointer to an address which you can assure won't be allocated?

Secondly, what does: if (DEBUG) bar = (bar_type*)(long_ptr)(0xDEADBEEF); even do? What's DEBUG? I couldn't find a macro named so. Also, what's long_ptr? What does it do?


  • DEBUG is not a standard macro, it's one that you would define to work with this code. –  Barmar Commented Jun 24 at 19:34
  • 3 The reason to prefer 0xDEADBEEF over NULL is that it will be more obvious in debugging output. NULL pointers are often used for initial values, so you can't distinguish a pointer that hasn't been updated from one that has been cleared. –  Barmar Commented Jun 24 at 19:39
  • 1 @CSStudent The whole point is to make it more likely for a bug to show up in a recognizable way while you are testing. If the rest of the program doesn't have undefined behavior, then there is absolutely zero effect of writing anything to bar . When the destructor exits reading from it will be undefined behavior. But if you made a mistake somewhere and accidentally try to read it after it was destructed, then you want your program to fail as quickly and hard as possible while testing. A random address gives a better chance than zero or a previous value that has been valid earlier. –  user17732522 Commented Jun 24 at 19:53
  • 3 Because of alignment, an odd-address pointer (like 0xDEADBEEF) has a very good chance of not being a valid pointer on many architectures. (Or even if valid, not likely to be malloc 'd by the heap manager.) –  Eljay Commented Jun 24 at 20:02
  • 2 It's worth noting that answer was written in 2011. You should be aware that a decade is a long time in software, and the current recommendations like asan just didn't exist then. –  Useless Commented Jun 25 at 7:36

3 Answers

0xDEADBEEF is a pointer value like any other. Strictly speaking, there is no "known bad pointer value".

However, as a very rough simplification we can assume pointer values to be random. The chance of seeing 0xDEADBEEF as a 32-bit value is 2^-32, that is, about 2.33e-10. You are much more likely to be struck by lightning or to find two four-leaf clovers in a row than to see exactly that value in any given pointer. In other words, for all practical purposes we can assume that a 0xDEADBEEF is from our assignment when we see it.

DEBUG is not a standard macro. The author just wanted to illustrate that doing the assignment can be enabled in debug builds and disabled in non-debug builds (by employing a dedicated flag).


  • Memory is allocated in pages and all addresses in the same page have the same validity. –  Barmar Commented Jun 24 at 20:22
  • 5 There is the additional factor that 0xDEADBEEF is odd, so the only object it can point to is an object of alignment 1. It could only point to an object of size 1 which further restricts its appearance as a pointer value. –  François Andrieux Commented Jun 24 at 20:23
  • So you think it's a trap, you are expecting the program to crash and then look in the debugger and seeing DEADBEEF you realize you are accessing the object that you deleted and then you are trying to get through the program to find where it could possibly come from. I don't see how it's different from null pointer, you have to do the same. But actually with DEADBEEF you may not see access violation/alignment or segfault, because it's an undefined behavior and compiler could optimize away your assignment. –  Gene Commented Jun 24 at 23:58
  • @Gene -- Hence using a "magic number" or using nullptr wouldn't matter? Btw, then what would you advise? Assigning to nullptr for possibe-crash and hence bug detection, or leaving the pointer as is after destructor's call? –  CS Student Commented Jun 25 at 1:45
  • 1 @CSStudent The better solution is to avoid manual memory management. If you absolutely need a pointer, then use unique_ptr which manages the lifetime for you. If everything is an RAII type then no memory errors can occur unless there is UB in the code somewhere. –  NathanOliver Commented Jun 25 at 2:17

Unless you have system-level access, pointer values near NULL are all bad. Like 1, 2, 3, etc.

I disagree with the premise of the linked question. If you think that testing should find memory leaks using magic numbers for pointer values, then there is a serious design failure in your application and testing strategies.

Write good code. Don’t write bad code to find bad code.

  • Then how would you replicate the "magic numbers" method? –  CS Student Commented Jun 24 at 22:46
  • 1 It is unclear what you are asking, but as I understand it my answer is don’t use magic numbers. –  Dúthomhas Commented Jun 24 at 22:48
  • @CSStudent the problem is the use of raw pointers in general. You're trying to prevent a "use after free"-type of situation, but in most cases, that's not going to help much. You're even doing this in a destructor, so that "bar" pointer will be gone anyway. "Use after free" is much more likely to show up in other objects or threads that happen to have their own copy of that pointer, which is not "DEADBEEF"... –  Christian Stieber Commented Jun 25 at 1:01
  • @ChristianStieber -- and how would you solve such a problem then? –  CS Student Commented Jun 25 at 1:52
  • @CSStudent I don't have this kind of problem. "Use after free" is rarely an issue these days, since using automatic resource management through destructors and smartpointers mostly prevents it by also forcing us to be more aware of pointer lifetimes: if I never store a "raw pointer" anywhere, and now I suddenly do, there's bound to be some thought behind it how that interacts with the unique_ptr or smart_ptr that it came from. And while valgrind is too bugged for me to be useful right now, I also keep an eye open for crashes. Also, runtime libraries might assist by trashing freed memory. –  Christian Stieber Commented Jun 25 at 10:25

how can you set the pointer to a known bad pointer value? How can you set a pointer to an address which you can assure won't be allocated?

First, let me be clear that the example code you refer to is a dirty hack and is intended to provide some guidance to assist debugging. It is not intended as a production quality memory management tool; it isn't even intended to be "drop in" code - it's an example of a debugging technique.

Setting a pointer to a hardcoded value isn't guaranteed to be a "bad pointer" unless you know something about the target environment. 0xDEADBEEF is a value that is likely to be an invalid pointer on many environments just out of luck. The value was chosen because I had seen it used as a marker for "invalid data" in other code and it is easily spotted when viewing memory dumps. I believe it is (or was, maybe not anymore - that answer was from 14 years ago!) commonly used to indicate memory areas that are invalid/unused. It is similar to some of the values Microsoft used in their debug library versions of some memory management routines (see https://stackoverflow.com/a/370362 ).

what does: if (DEBUG) bar = (bar_type*)(long_ptr)(0xDEADBEEF); even do? What's DEBUG ? I couldn't find a macro named so.

I might not have said so explicitly in the answer you refer to, but the example code might more properly be called a pseudo-code example. if (DEBUG) is used to indicate a bit of code that is conditionally executed "if this is a debug build".

For example, DEBUG could be a macro (or variable) that is defined as non-zero during a debug build of the program, possibly on the compiler's command line (maybe something like -DDEBUG=1). A DEBUG macro is something that I found commonly used for code that is enabled in debug builds only.

Also, what's long_ptr ? What does it do?

To aid in the transition from 32-bit to 64-bit systems, MS added types that are the size of a pointer. LONG_PTR is an integer type that has the size of a pointer (32 or 64 bits as appropriate). I probably should have used LONG_PTR instead of long_ptr. I believe the cast is technically unnecessary, but I think it's still useful as a notation that makes clear that an integer is being 'converted' to a pointer - a coding idiom that uses dirty-looking casts to call out a dirty hack.
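
As a minimal sketch of the idiom described here, the following variant uses the standard std::uintptr_t in place of the Windows-specific LONG_PTR and gates the poisoning on the standard NDEBUG macro rather than a custom DEBUG flag (the Foo and Bar names are illustrative, not from the original question):

  #include <cstdint>

  struct Bar { int value = 0; };

  struct Foo {
      Bar* bar = new Bar{};

      ~Foo() {
          delete bar;
  #ifndef NDEBUG
          // In debug builds (NDEBUG not defined), poison the dangling pointer so an
          // accidental use after destruction shows up as an obviously bogus address
          // in a debugger or crash dump.
          bar = reinterpret_cast<Bar*>(static_cast<std::uintptr_t>(0xDEADBEEF));
  #endif
      }
  };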


  • So you used long_ptr for more clarity? If you did do it only so it'd be more apparent that "an integer is being 'converted' to a pointer", then I think that just seeing the asterisk would've suffice. Also, excuse me for the hassle, but in your answer, why did you write: Now if anything has a dangling reference to the Foo object that's been deleted, any use of bar will not avoid referencing it due to a NULL check - it'll happily try to use the pointer and you'll get a crash that you can fix: ? Because from what I've checked, it doesn't crash –  CS Student Commented Jun 27 at 17:11
  • I honestly can't remember nuances of what I was thinking 14 years ago. I don't think it's a crucial part of the answer. Today's C++ provides smart pointer types such as unique_pointer and shared_pointer which are likely to be better tools to deal with the problem that was being discussed back then. –  Michael Burr Commented yesterday
