Anshul Rastogi
7 min read · Mar 28, 2021


CP4I Part 2: Proof of Concept (POC) .. Learn & Share

Hi all! Today is a completely grey day here in Sweden, so, being at home, I decided to write Part 2 (Learn & Share on CP4I).

Target Audience: Integration Platform SMEs/Architects.

Purpose: CP4I platform POC (today we cover the platform provider side; the consumer side comes in later parts).

Disclaimer: Views are my own and reflect my personal understanding.

So, let's start. In Part 1, I tried to share my understanding of CP4I at a very high level. But as Architects or SMEs, we are also very curious to get our hands on something. For me, "Seeing is Believing" is key with any new tech, and I am sure it's the same for most of us; slides, PDFs and videos alone feel incomplete.

So, around Q4 2020 I set out an agenda to play with CP4I in our own mini lab, and to do it from scratch.

Recipe for a CP4I POC @ On-premises

(For AWS: please follow https://aws.amazon.com/quickstart/architecture/ibm-cloud-pak-for-integration/ ; everything is captured there and it's quite great.)

(For Azure: for the purpose of a POC, I think one can start with ARO, i.e. Azure Red Hat OpenShift.)

Components: (it was much simpler when we did this on ARO; more on that in later parts)

  1. VMware: for the sake of the POC, we chose VMware as the hosting platform for running OpenShift. (Obviously, there are other options like Red Hat Virtualization (RHV) or bare metal.)
  2. Storage: CP4I requires both block and file storage. Either use storage that CP4I supports directly, or, as one recommended way, use OCS (OpenShift Container Storage) and let it deal with the underlying storage. In my case, the underlying storage was NFS.
  3. Load Balancer: we used a Layer 7 load balancer with multiple VIPs (for various purposes like application and API traffic).
  4. Software: for the purpose of the POC, we used trial licenses from both Red Hat and IBM CP4I (https://cloud.ibm.com/docs/cloud-pak-integration). As part of this, one should sign up with IBM and get the entitlement key, which we will need later.
  5. Outbound connection to the Internet (this is to connect to the IBM registry for operators and container images). In case this is not possible, follow the CP4I Knowledge Center for an air-gapped installation.
  6. Skills: this is crucial. We need either one person or a team with OpenShift installation skills (IPI or UPI) plus an Integration SME (who understands MQ, ACE, DataPower). Going forward, we expect this Integration SME to pick up OCP skills too; that is where SRE roles come in.
  7. Sizing the cluster: for each CP4I component, the recommended CPU and memory for a dev environment are listed in the Knowledge Center. But to get started, think of a small OCP cluster with 3 masters (16 CPU + 32 GB each) and 3 CP4I workers (16 CPU + 32 GB each); if using OCS, the recommendation is to set up another 3 worker nodes with sizable storage. Obviously, it's possible to start with an even smaller cluster, but I did that and then had to scale it up twice; CP4I components have specific requirements.
  8. DNS aliases, A records and CNAMEs: these are the standard requirements for any OCP cluster. (There are no CP4I-specific DNS alias requirements as such; CP4I adapts to OCP.)

Let's start assembling it now.

9. First and foremost, our aim is to get a working OpenShift cluster with the sizing we decided on for this POC. In this blog we will not discuss the OCP setup details; I am sure most organisations have skilled OCP admins who can build a cluster. (Obviously, if someone plans to run on managed OpenShift in Azure, AWS or elsewhere, it's much easier to launch an OCP cluster.) Either way, it is worth involving a skilled OCP admin to prepare this cluster.

Note: be sure to check version compatibility between CP4I and OCP, as both products release new versions/patches frequently. At the moment of writing, I have used OCP 4.6 with CP4I 2020.4.1.

10. Assuming the OCP cluster is launched, we have access to oc (the OpenShift client CLI) and the CLI is connected to the cluster. At this stage we have the kubeadmin credentials, oc connected to the cluster, and we can open the OCP console in a browser. It is very important to make sure the "oc get nodes" command shows all expected nodes as "Ready"; a quick sanity check could look like the sketch below.
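A minimal sanity check at this stage, assuming a kubeadmin login (the API URL and password are placeholders):

```bash
# log in with the kubeadmin credentials (URL and password are placeholders)
oc login https://api.mycluster.example.com:6443 -u kubeadmin -p '<kubeadmin-password>'

# every node should report STATUS "Ready"
oc get nodes

# all cluster operators should be Available and not Degraded
oc get clusteroperators
```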

11. Now we let our Integration SME/Architect take control, so this person needs the cluster-admin role (either the kubeadmin credentials or an equivalent role assignment, as sketched below).
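If you prefer not to hand out the kubeadmin credentials, a one-line sketch of assigning an equivalent role (the user name is hypothetical):

```bash
# bind the built-in cluster-admin role to the integration SME's own user
# ("integration-sme" is a hypothetical user name)
oc adm policy add-cluster-role-to-user cluster-admin integration-sme
```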

12. Set up an OpenShift Container Storage (OCS) instance. Follow https://www.openshift.com/blog/deploying-openshift-container-storage-using-local-devices . The purpose of this task is to end up with 3 storage classes, for block, file and object respectively (see the check below). One can of course skip this step if your environment already has CP4I-supported block and file storage. Per my discussions with CP4I SMEs, OCS is highly recommended, as it is supported for almost all CP4I components and on most deployment targets (on-premise, cloud).
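A quick way to confirm this; the class names below are the typical OCS 4.x defaults, so verify against your own install:

```bash
oc get storageclass
# typical OCS storage classes:
#   ocs-storagecluster-ceph-rbd    block storage
#   ocs-storagecluster-cephfs      file storage
#   openshift-storage.noobaa.io    object storage
```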

13. At this point, we have a healthy OpenShift cluster and the required storage classes.

14. Follow the CP4I installation instructions (https://www.ibm.com/support/knowledgecenter/SSGT7J_21.1/install/install.html), but let me write down how I did it.

  • Enable the IBM operator catalog; this is required to see the CP4I operators in OperatorHub (see the first sketch after this list).
  • Create a few namespaces/projects, one each for Platform Navigator, MQ, ACE, DataPower and the Operations Dashboard (call them by your choice of logical names).
  • For each namespace, add a secret holding the IBM entitlement key (which you got either as part of the trial license or from IBM Passport Advantage).
  • Again, ensure the OCS-backed storage classes are available for block and file.
  • Open OperatorHub and, first and foremost, install the operator for "IBM Cloud Pak for Integration Platform Navigator". You can choose to install it in all namespaces or a specific one; it should work either way.
  • In principle, for CP4I to start we need IBM Common Services (now called foundational services). Since this is mapped as a dependent operator and operand of Platform Navigator, the step above will kick it off automatically. Just to note, IBM Common Services will be installed in the namespace "ibm-common-services", which the operator will create.
  • Keep a good watch on the pods inside ibm-common-services; it takes a few extra minutes, as it underpins the whole Cloud Pak in terms of IAM, logging, monitoring etc. (Check for the admin password per https://www.ibm.com/support/knowledgecenter/SSGT7J_21.1/install/initial_admin_password.html .)
  • Now switch to the namespace created for Platform Navigator, open the Navigator operator and create a new instance (check License Accepted = true). This should launch pods; watch until they are in the Running state. Once the instance status is Succeeded, open the instance, find the Navigator URL, open it, and use the admin password captured in the step above (see the second sketch after this list).
  • At this point, it is very important to establish that we can log in to Platform Navigator and navigate to, say, the Common Services URL.
  • This one is optional: IBM Common Services allows integration with your LDAP server, and you can then create Teams to achieve RBAC. For the sake of a POC, do this per your preference.
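Here is a minimal sketch of the catalog, namespace and entitlement-key steps above. The namespace names are my own choices, and the catalog image is the one IBM has documented; double-check both against the Knowledge Center for your CP4I version:

```bash
# 1) enable the IBM operator catalog so the CP4I operators appear in OperatorHub
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-operator-catalog
  namespace: openshift-marketplace
spec:
  displayName: IBM Operator Catalog
  publisher: IBM
  sourceType: grpc
  image: icr.io/cpopen/ibm-operator-catalog:latest
  updateStrategy:
    registryPoll:
      interval: 45m
EOF

# 2) one namespace per capability (logical names of your own choice)
# 3) the IBM entitlement key as a pull secret in each of them
for ns in cp4i-navigator cp4i-mq cp4i-ace cp4i-datapower cp4i-ops; do
  oc new-project "$ns"
  oc create secret docker-registry ibm-entitlement-key \
    --docker-server=cp.icr.io \
    --docker-username=cp \
    --docker-password="$IBM_ENTITLEMENT_KEY" \
    -n "$ns"
done
```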
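And a sketch of creating the Platform Navigator instance from the CLI, as an alternative to the operator form, plus fetching the initial admin password. The license ID is a placeholder you must look up in the Knowledge Center, and the storage class assumes the OCS file class from earlier:

```bash
# create a Platform Navigator instance (license ID is a placeholder)
cat <<EOF | oc apply -f -
apiVersion: integration.ibm.com/v1beta1
kind: PlatformNavigator
metadata:
  name: navigator
  namespace: cp4i-navigator
spec:
  license:
    accept: true
    license: <L-xxxx-license-ID>
  replicas: 1
  version: 2020.4.1
  storage:
    class: ocs-storagecluster-cephfs
EOF

# watch until the instance reports a Ready/Succeeded status
oc get platformnavigator -n cp4i-navigator

# initial admin password for Common Services / Platform Navigator
oc extract secret/platform-auth-idp-credentials \
  -n ibm-common-services --to=-
```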

(While writing, I noticed this blog has become quite lengthy, but I hope that's OK, as I wanted to capture all the key points. Let's continue.)

  • Navigate to the MQ namespace and install the IBM MQ operator in this namespace. For this POC and this part, let's just do MQ.
  • Now there are 2 ways of creating an MQ queue manager (refer to https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.2.0/com.ibm.mq.ctr.doc/ctr.html ).
  • 1) Open Platform Navigator, look for "Create capability or runtime", and you will see a nice UI to create a queue manager. Here you see options for license, persistence, storage class, certificates and more. For a POC, just choose minimal settings.
  • 2) Programmatically via the OpenShift CLI. Here we use a YAML file to create an instance of the "QueueManager" custom resource (see the sketch after this list). This is the recommended approach for a real environment, where DevOps and SRE ways of working are required, to maintain the queue manager and its configuration as code.
  • Let's assume a queue manager instance has been created, by either of the above ways. Ensure you have also seen the pods, and log in to the pod to check logs and run basic MQ admin tests from the terminal (like dspmq and runmqsc commands; see the verification sketch after this list).
  • Let's open Platform Navigator. We should now see 1 MQ queue manager instance; click on it, and this opens the queue manager's web management console (which is standard for MQ, be it software or container based). This is enabled with SSO; in case it's needed, provide the admin password as above.
  • On the MQ web management console, please create 1 queue (if persistence is required, ensure you chose the storage classes, as it will need them). Put/browse a message on the queue. In case you chose a persistent queue, try deleting the pod of this queue manager; it should come back up automatically (courtesy of the controller managing it) and the message should still be intact in the queue.
  • It's good to observe that, as part of the queue manager creation, the operator has also created a route and a service in OCP. These are required both for in-cluster references to this queue manager pod and when connecting from outside.
  • Finally, connecting to this CP4I MQ from your own MQ Explorer (from a laptop) is a bit of an extra mile: for CP4I-based queue managers we have to set up a TLS keystore (as traffic goes over the OCP ingress) and an SNI-based route for the channel definition. (Follow https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.2.0/com.ibm.mq.ctr.doc/cc_conn_qm_openshift.html ; I will try to cover this in upcoming parts.) But for a well-skilled MQ technician, this should be a piece of cake.
  • Finally, finally: for a very basic, minimal POC, we have CP4I up & running on OpenShift with 1 MQ queue manager.
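For reference, a minimal QueueManager custom resource along the lines of option 2 above. The license ID is again a placeholder from the docs, and the names, version and storage class are assumptions from my setup:

```bash
cat <<EOF | oc apply -f -
apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: quickstart-cp4i
  namespace: cp4i-mq
spec:
  license:
    accept: true
    license: <L-xxxx-license-ID>   # look up the right ID in the Knowledge Center
    use: NonProduction
  version: 9.2.0.0-r1
  web:
    enabled: true                  # enables the MQ web management console
  queueManager:
    name: QUICKSTART
    storage:
      queueManager:
        type: persistent-claim     # persistent messages survive pod restarts
        class: ocs-storagecluster-ceph-rbd
EOF
```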
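And a few verification steps matching the checks above. The pod name follows the MQ operator's usual <queue-manager-name>-ibm-mq-0 pattern, but confirm it with oc get pods first:

```bash
# watch the queue manager pod come up
oc get pods -n cp4i-mq

# basic MQ admin checks inside the container
oc exec -it quickstart-cp4i-ibm-mq-0 -n cp4i-mq -- dspmq
oc exec -it quickstart-cp4i-ibm-mq-0 -n cp4i-mq -- \
  bash -c 'echo "DISPLAY QMGR" | runmqsc QUICKSTART'

# the operator also created a service and a route for this queue manager
oc get svc,route -n cp4i-mq

# persistence test: delete the pod and confirm the message survives
oc delete pod quickstart-cp4i-ibm-mq-0 -n cp4i-mq
oc get pods -n cp4i-mq -w
```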

Thanks Note:

  • Dineshwar (I would call him an OpenShift surgeon)
  • CPAT, aka the Garage team
  • Leadership team at HCL.
