Tuesday, July 20, 2021

Interview With Oracle FPP head Ludovico Caldara

I want to thank Ludovico Caldara [FPP & Cloud MAA Product Manager @Oracle] for agreeing to the publication of this interview, which is based on a conversation we had some time ago. It focuses mainly on the Oracle Fleet Patching and Provisioning (FPP) "FUNDAMENTALS", and I hope it helps the community get a clearer picture of what each component is and does within the FPP architecture before trying the labs.

Note: if you want to check the hottest news about FPP, please jump to the 4th section, What’s cooking for FPP.

Main Topics

 1. Storage options for provisioned software
 2. Client/Server relationship
 3. Upgrade in FPP
 4. What’s cooking for FPP in 2021
      ○ Helpful resources

-- ⚜ “Italian Greetings” ⚜ --

Hi Ludovico, how are you? Thanks for agreeing to this interview.

Hi BrokeDBA, all good, thanks. And thank you for the invitation!


I. Storage Options for Provisioned Software

First, I recently read a section in the FPP technical brief that states the following:

[Image: image-5.png — excerpt from the FPP technical brief]

My question is: what does “FPP managed” storage actually change for provisioning?
Does it mean that if the storage is FPP managed, it can’t store and provision grid images?

The option to provision software as “LOCAL” or “FPP_MANAGED” relates to the possibility of using ACFS on the client to store a copy of the gold image locally and to add the working copy as an ACFS snapshot.

So from there, any working copy that you want to provision out of the same gold image (or image series) will be provisioned as an ACFS snapshot of the corresponding image (see link).

  • What changes on the client is that if you provision to a “LOCAL” filesystem, you have to take care of it yourself (its existence, size, etc.), and every working copy based on the same image will be a full copy occupying space.
  • If you provision to FPP_MANAGED, you just need to provide a diskgroup with enough capacity, and the ACFS filesystems and snapshots are created and managed automatically by the FPP client.
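As a rough sketch of the difference (the image, working copy, cluster, and path names below are hypothetical placeholders; note that in the 19c rhpctl syntax the managed storage type is spelled RHP_MANAGED):

```sh
# LOCAL: you provide and manage the destination path yourself, and
# each working copy is a full copy of the gold image.
rhpctl add workingcopy -workingcopy wc_db19_local -image db19_gold \
  -storagetype LOCAL -path /u01/app/oracle/product/19.0.0/dbhome_1 \
  -client cluster01

# RHP_MANAGED (the "FPP_MANAGED" option discussed above): FPP creates
# the ACFS file system in a diskgroup and provisions the working copy
# as a space-efficient ACFS snapshot of the image copy.
rhpctl add workingcopy -workingcopy wc_db19_mgd -image db19_gold \
  -storagetype RHP_MANAGED -client cluster01
```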

The dependency on ACFS makes it impossible to have working copies of Grid Infrastructure using RHP/FPP_MANAGED storage.

The image management on the FPP server does not change: you can import and manage GI images or DB images on the FPP server, and they will always go into the ACFS filesystem.
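For instance, importing a gold image into the server repository can be sketched as follows (the image name and staging path are hypothetical; the image type shown is the one for database software):

```sh
# Run on the FPP server: imports the software home staged at -path
# into the server's ACFS-backed image repository.
rhpctl import image -image db19_gold -imagetype ORACLEDBSOFTWARE \
  -path /u01/stage/db19_home
```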

-- Follow-up

So it’s not a provisioning limitation for simple target servers but only for FPP clients? We could still provision working copies of GI gold images stored on the FPP server, just not onto FPP_MANAGED storage if the destination is an FPP server or client, correct?

[Ludovico] Correct.

Does that mean I can only add DB home working copies to FPP_MANAGED storage in the FPP server’s ACFS filesystem?

I was referring to images, not working copies. I’ll try to schematize it here:
[Image: image-1.png — schema of image and working copy storage options]

(*) When provisioning DB working copies on FPP_MANAGED, the base ACFS file system contains a copy of the image, but you cannot “add image” to a client.

So all GI working copies need to be LOCAL, hence the mandatory -path option?

[Ludovico] Correct.
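So a GI working copy would always carry an explicit destination path, along these lines (names are placeholders, and a real GI provisioning also needs cluster configuration details, e.g. a response file, omitted here for brevity):

```sh
# Grid Infrastructure working copies must use LOCAL storage,
# so -path is mandatory.
rhpctl add workingcopy -workingcopy wc_gi19 -image gi19_gold \
  -storagetype LOCAL -path /u01/app/19.0.0/grid -client cluster01
```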

II. Client/Server Relationship

Now back to FPP clients: what kind of relationship is there between an FPP client and the FPP server in terms of role and content?

  • From the documentation, I read the following:

[Image: image-8.png — excerpt from the FPP documentation]

  • And a bit further on:

Is HA the reason behind the client/server architecture, or could you clarify this relationship a bit more?

These two statements are a bit unrelated. The first says that to promote a cluster to an FPP client (and not just a target), you need at least GI 12.2 if the server is 19c. If you have GI 12.1 on the client, it cannot become a client and will stay an unmanaged target. The difference is that a client is “registered”, so further operations on it no longer require the root password.
The client/server relationship is established once and for all, with credential wallets, when doing “add client / add rhpclient”. Also, once a cluster becomes an FPP client, it can trigger actions on its own (if the local user has the correct roles).
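The registration flow can be sketched like this (the cluster name and path are hypothetical; “add client” runs on the server and “add rhpclient” on the client):

```sh
# On the FPP server: register the client cluster and export its
# credential data (a wallet) to an XML file.
rhpctl add client -client cluster01 -toclientdata /tmp

# On the client cluster: create and start the FPP client resource
# using the credential file generated above.
srvctl add rhpclient -clientdata /tmp/cluster01.xml
srvctl start rhpclient
```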

The second statement just suggests that the FPP server should be highly available, so that it stays available in case of node failure. As it says, this is not mandatory but recommended.

Does the client store images like the FPP server does, or does it just use the FPP server’s repository for creating working copies?

The latter: clients do not store their own images; they always get them from the server.

What does the FPP_MANAGED storage option mean in practice for FPP client environment provisioning?

  • Option 1
    Add a DB working copy snapshot after importing the image from the FPP server to a local ACFS (total size = image + snapshot)?

  • Option 2
    Add a DB working copy as a snapshot directly from the FPP server’s ACFS image without having to import it first (total size = snapshot)?


  • Option 1: Exactly. The import of the image and the creation of the snapshot are implicit in the “add workingcopy” command.

  • Option 2: No. This was possible in 12.1 (NFS working copy provisioning), but it was dropped because NFS availability was more of a concern than a solution.

III. Upgrade in FPP


Let’s say we have an existing 12c non-CDB (not a working copy) installed and wish to migrate it to a 19c CDB.
In a normal (non-FPP) world, we usually have the options below (see the 19c migration white paper):

[Image: image-4.png — migration options from the 19c migration white paper]

Which scenarios are available in an FPP environment (i.e., FPP server + FPP target hosting a 12c non-CDB) that allow us to do the same migration (non-CDB 12c to 19c CDB)?
The documentation wasn’t very clear to me on that scenario; is multitenant conversion supported too?

You can use Fleet Patching and Provisioning to upgrade CDBs, but 19c Fleet Patching and Provisioning does not support converting a non-CDB to a CDB during the upgrade.

Up to 19c, FPP uses DBUA and DBCA in the backend for database upgrade and creation. If you need special templates for DBCA, you can create the templates in the Oracle Home and create the gold image from that. From then on, the working copies provisioned from that image will have the template that you need.

I realized that a local database on an FPP target doesn’t have to be a working copy to be upgraded with FPP; we can have a new working copy created on the fly during the upgrade using the “-image” option.
Could you tell us more about this feature?
Syntax: rhpctl upgrade database … [-image 19c_image_name [-path where_path]]

Correct, that’s also in the 19c documentation:

“ …. If the destination working copy does not exist, then specify the gold image from which to create it, and optionally, the path to where to provision the working copy.”
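As a hedged sketch of that on-the-fly flow (the database, working copy, image, and path names are placeholders):

```sh
# Upgrade a 12c database whose home is NOT a working copy: FPP first
# provisions a new 19c working copy from the gold image, then
# upgrades the database into it.
rhpctl upgrade database -dbname ORCL \
  -sourcehome /u01/app/oracle/product/12.1.0/dbhome_1 \
  -destwc wc_db19_upg -image db19_gold \
  -path /u01/app/oracle/product/19.0.0/dbhome_1
```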

IV. What’s Cooking for FPP

I have an idea about what’s hot lately for FPP, but could you elaborate on the exciting news that is in store for FPP in 2021 and beyond?


Fleet Patching and Provisioning 21c comes with support for the AutoUpgrade tool. This simplifies the preparation, execution, and troubleshooting of upgrade campaigns. Hopefully, the feature will be backported to 19c.

As more and more customers migrate their fleets to Exadata Cloud Service and Exadata Cloud at Customer, the next big development will be integrating fleet patching capabilities into the OCI service portfolio. Today, patching cloud services with the on-premises version of FPP is not supported.

For the long-term vision, FPP will be the core of all database fleet patching operations within Oracle. It will be critical to make it easier to implement and maintain, so expect improvements in this direction in future releases. Sorry, I cannot tell more :-)

Helpful resources
