Anshul Rastogi
6 min read · Jun 13, 2021


CP4I Part 3 — Learn & Share

Today is a bright, sunny day in Gothenburg, and I have been meaning for a long time to share my experiences with CP4I, so it's a good day to start.

Target audience: Integration Architects aiming to deploy CP4I

Disclaimer: these are my own individual opinions. For final information, always check with product support or the official documentation.

CP4I is booming and will be a key part of integration stories in the coming years: deploying new projects, cloud migrations and expansion, modernizing legacy infrastructure, and becoming more containerized. I can see demand is now increasing, which shows it is moving forward. Really proud!

The aim now is to move to the next level beyond a PoC.

Sample deployment in a three-site topology (could be public cloud or on-premises).

Most of the points in today's blog come from my experience deploying CP4I on Azure.

CP4I quick updates: version 2021.1 is the latest release, with lots of new features and product changes:

  • IBM Common Services is now called IBM Foundation Services
  • IBM Platform Navigator is now powered by IBM Automation (the Zen operator)
  • The logging operator has been removed from Foundation (formerly Common) Services, so CP4I now natively uses the ELK setup (Red Hat Logging operator) at the OpenShift level
  • Monitoring can also be leveraged natively at the OpenShift level
  • Access is now separated between Foundation Services and Platform Navigator (in 2020 they were the same)
  • MQ Native HA is now introduced (not GA yet); quite interesting to watch in the next releases

Considerations for OpenShift

  • Both ARO and IaaS-based OCP deployments are fine, on OCP 4.6 or 4.7
  • In Azure, per various discussions, the recommendation seems to be the F v2 series (16 or 32 vCPUs) for worker nodes. The reason is that the F v2 series is compute-optimized, with better vCPUs compared to the D series.
  • Since the masters don't run any CP4I workloads as such, a D8 or D16 should be fine (I prefer D8 for non-prod and D16 for prod, just to be on the safer side)
  • The user deploying CP4I will require the cluster-admin privilege. This is a must.
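As a minimal sketch, the cluster-admin requirement above can be satisfied with a ClusterRoleBinding like the one below; the user name `cp4i-installer` is just a placeholder for the actual identity in your cluster.

```yaml
# Hypothetical example: bind the cluster-admin role to the user who will install CP4I.
# Replace "cp4i-installer" with the real user identity as known to OpenShift.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cp4i-installer-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: cp4i-installer
```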

Considerations for Storage

  • Platform Navigator requires RWX, file-based storage. This operator uses a component that will not work with Azure Files, so it's best to use OCS.
  • In Azure, if the storage class for Managed Premium uses LRS (locally redundant storage), then a stateful set (like an MQ queue manager) may not fail over from Site 1 to Site 2 in case of node failures. This is due to limitations of Azure Disk LRS (I think Azure Disk ZRS is still in limited preview as of now).
  • So, various indications point to OCS as the storage solution that seems best for CP4I at this moment.
  • OCS on Azure OpenShift is still set up via the OCS operator, which in turn relies on the Azure Disk storage class. It requires a minimum of 3 nodes and replicates storage across all 3 sites. Please ensure the 3 worker nodes chosen for OCS are in different zones/sites (not all in the same one, which would limit HA/DR across zones).
  • So, for a 1.5 TB OCS cluster, the actual usable volume is only 500 GB. But it is possible to scale the cluster 2x, 3x or more. It's not a must, but it is advisable to know the OCS cluster capacity (overall and usable) as per the needs of your CP4I workloads. Try making some calculations based on the PV requirements of the different CP4I services.
  • One should read the OCS documentation for Azure for a good setup. The operators are built to ease management and operation, but you should still know a bit of OCS to cover aspects like capacity, scaling out, monitoring and administration.
  • OCS should run on 3 dedicated worker nodes that are not shared with CP4I workloads. Ideally we can start with D16, but if workloads become heavier, it is possible, if needed, to replace them with new storage-optimized nodes. Contact your Red Hat representative for good information on OCS, as this is quite an important setup before starting the CP4I install and runtime.
  • Please do ensure you run the needed performance tests for CP4I workloads with OCS (or in fact any storage provider)
  • After the OCS cluster setup, 3 new storage classes (for block, file and object) will be created. Just verify by creating test PVCs and checking that they become Bound.
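The verification step above can be sketched as a small test PVC against the OCS file storage class; the class name `ocs-storagecluster-cephfs` is the default created by the OCS operator, but verify the exact names in your cluster.

```yaml
# Hypothetical smoke test: request a small RWX volume from the OCS file storage class,
# then check that the claim reaches the Bound state (oc get pvc ocs-test-pvc).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocs-test-pvc
spec:
  accessModes:
    - ReadWriteMany        # RWX, as required by Platform Navigator
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs   # file storage class created by OCS
```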

Logging

  • The good news in 2021.1 is that CP4I now uses the ELK stack natively available in OpenShift.
  • So, please install ELK following the OCP documentation (basically, install the Red Hat Logging and Elasticsearch operators, followed by the ELK cluster setup)
  • This sets up Kibana in the openshift-logging namespace, and it is common to both OCP and CP4I
  • And if it is required to forward CP4I logs outside OpenShift, for example to a central ELK, please use the cluster log forwarder (seen in a demo)
  • For CP4I logs, create an index pattern for app* in Kibana
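The ELK cluster setup step can be sketched with a ClusterLogging instance like the one below. The node counts and storage sizes are illustrative assumptions only; consult the OCP logging documentation for production sizing.

```yaml
# Minimal sketch of a ClusterLogging instance in the openshift-logging namespace.
# Sizes, storage class and node counts are placeholders, not recommendations.
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      storage:
        storageClassName: ocs-storagecluster-ceph-rbd   # block storage from OCS
        size: 200G
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}
```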

Platform Navigator

  • This is the first step in setting up CP4I. Just install the operator and create a Navigator instance; this will automatically install various dependencies like IBM Foundation Services (the common stack), IBM Automation, etc.
  • It takes approximately 30-45 minutes for a complete deployment of the Navigator instance
  • In 2021.1 (compared to 2020.4) you will observe a lot of new pods and PVCs, which are related to IBM Automation. (Tip: don't use Azure Files for Platform Navigator; it will fail)
  • There are some cool features like attaching a company logo and adding custom user pages, and it will improve further
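Creating the Navigator instance can be sketched roughly as below. The namespace, storage class and exact apiVersion are assumptions that can change between releases, so check them against the CP4I 2021.1 documentation before use.

```yaml
# Sketch of a PlatformNavigator instance; fields and apiVersion vary per release.
apiVersion: integration.ibm.com/v1beta1
kind: PlatformNavigator
metadata:
  name: cp4i-navigator
  namespace: cp4i                          # placeholder namespace
spec:
  license:
    accept: true
  replicas: 1
  version: 2021.1.1
  storage:
    class: ocs-storagecluster-cephfs       # RWX file storage; not Azure Files
```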

User access and roles (considering we want to use Azure AD for end-user access)

  • The CP4I foundation service does support various LDAP providers and user/group lookups. If the need is to use Azure AD, below is one of the ways I learned to do it.
  • Integrate Azure AD with OpenShift. This is well documented and supported at the OCP layer; with it, users can authenticate via Azure AD to log in to the OpenShift console. The tricky part is that a new user's Azure ID must first be added to an OCP group (with the respective role bindings), and once that user logs in to OCP for the first time, their user identity is created in OCP. This is very important: to look up users in Foundation Services or the Navigator, their identity must already exist in OCP. (This applies only if you wish users to use Azure AD.)
  • Now, at the OpenShift level, a few CP4I admins should have the cluster-admin role; for the others we can start with a viewer/read-only role at the cluster level or on specific namespaces.
  • Next, add the user in the Foundation Services console. This is required so that a user's roles at the OpenShift level can be managed more smartly. For example, in the earlier step we added the user in OCP with viewer access only, but in this step we can grant more permissions, like admin on a namespace.
  • Next, add the user in the Platform Navigator instance. This is not related to OCP permissions but is more about roles and access to CP4I capabilities like MQ, ACE, APIC, etc.
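The Azure AD integration at the OCP layer is configured on the cluster OAuth resource as an OpenID Connect provider. A rough sketch is below; the client ID, secret name and tenant ID are placeholders you must supply from your Azure AD app registration.

```yaml
# Sketch: Azure AD as an OpenID Connect identity provider on the OCP OAuth resource.
# <azure-app-client-id> and <tenant-id> are placeholders; the client secret must
# exist as a Secret in the openshift-config namespace.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: azure-ad
      mappingMethod: claim
      type: OpenID
      openID:
        clientID: <azure-app-client-id>
        clientSecret:
          name: azure-ad-client-secret
        claims:
          preferredUsername:
            - preferred_username
          email:
            - email
        issuer: https://login.microsoftonline.com/<tenant-id>/v2.0
```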

CP4I Architect responsibilities (per my experience, the below is very important)

  • CP4I, being a hybrid PaaS platform, requires some good skills and responsibilities: knowledge of the data center/cloud, hosting (VMware/bare metal/cloud), storage (OCS), OpenShift, DevOps tools (deployment of operator/instance YAMLs), CP4I layers like Foundation Services and Platform Navigator, and capabilities like MQ and ACE.
  • Basically, the product owner or lead architect of CP4I will need an "umbrella" knowledge of various aspects. This is a bit of a change compared to running MQ on a VM or physical server earlier.
  • And this really requires good thinking about upskilling the integration team on cloud, OpenShift/Kubernetes, SRE ways of working, and understanding cross-dependencies.
  • Obviously, different organizations will make the best decisions on their CP4I journey as per their team structure.

MQ

  • A single-instance queue manager runs as a stateful set. During a node failure, the queue manager instance should fail over to the next available node. However, if using Azure Disk with an LRS storage class, it may not fail over to a node running in a different site/zone, because of how LRS works. For this, one should use supported storage options like OCS, which replicates data across all 3 sites/zones.
  • As always, create the queue manager and MQSC configuration with a YAML-based approach and integrate it with your CI/CD ecosystem
  • For logging, it is expected that queue manager pod logs will automatically flow to the ELK setup at the OpenShift level
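The YAML-based approach above can be sketched as a QueueManager custom resource that pulls its MQSC from a ConfigMap. Names, the license ID and the version are placeholders; check the MQ operator documentation for the values valid in your release.

```yaml
# Sketch: single-instance queue manager with MQSC supplied from a ConfigMap
# (assumes a ConfigMap "qm1-mqsc" with a "config.mqsc" key exists in the namespace).
apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: qm1
  namespace: cp4i                      # placeholder namespace
spec:
  license:
    accept: true
    license: L-RJON-BYRMYW             # license ID varies per release; verify
    use: NonProduction
  queueManager:
    name: QM1
    mqsc:
      - configMap:
          name: qm1-mqsc
          items:
            - config.mqsc
    storage:
      defaultClass: ocs-storagecluster-ceph-rbd   # OCS block storage
  version: 9.2.2.0-r1                  # placeholder MQ version
```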

Thanks Anshul

( these are my individual opinions )

Sincere thanks to everyone I work with on a daily basis in and around CP4I.
