Thursday, October 14, 2021

IBMCLOUD


Index:

  1. Basics
  2. Pre-Req
  3. Free CommandLine Tool
  4. Create Free Application
  5. API Keys
  6. Getting OAuth Tokens
    1. Standalone
    2. IBM CLI tool
  7. Create AI application
  8. Cloudant Database
    1. Fetch the Cloudant Document from the API
  9. Functions
  10. API Gateway
  11. Simple ETL from COS to DB2
  12. Copy ETL using REST
  13. Run Spark Job on COS Data

Basics

  • IAM = Identity and Access Management (shared account access)
  • Provisioning = creating an app
  • Helm Charts = add add-ons to the provisioned app
  • There are 3 types of apps:
    • Classic Infrastructure - for individuals
    • IAM Managed Services - for enterprises / resource groups
    • Cloud Foundry - open source

Pre-Req

  • Open cloud.ibm.com
  • Create a free account
  • Log in as directed

Free CommandLine Tool (python3.8+)

  • Log in to ibmcloud
  • On the toolbar of the landing page, click on IBM Cloud Shell
  • $python3

Create Free Application

  • Log in to ibmcloud
  • Click on Catalog
  • Search for Cloud Foundry
  • Click on Cloud Foundry Application > Click on Create
  • Add details: Resource, App Name, etc.
  • Click on Create
  • Go to Homepage > Resource List > Cloud Foundry Apps > Click on the app
  • Click on the link "Visit App URL"

API Keys

Getting OAuth Tokens


1) Standalone installer (https://cloud.ibm.com/docs/cli?topic=cli-getting-started)

  • Run $curl -fsSL https://clis.cloud.ibm.com/install/linux | sh   #Linux
  • ibmcloud login   #or ibmcloud login --sso for federated IDs
  • ibmcloud iam oauth-tokens
  • Copy the result
  • export IAM_TOKEN=<paste here>
  • Use "Authorization: Bearer $IAM_TOKEN"

2) IBMCLOUD CLI

  • Log in to IBM Cloud
  • Select Manage > Security > Platform API Keys
  • Create an API key for your own personal identity
  • Copy the value
  • Run the below
    $curl -X POST 'https://iam.cloud.ibm.com/identity/token' \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    -d 'grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=<MY_APIKEY>'

Response:
{
        "access_token": "eyJraWQiOiIyMDExxxxxxxxxxx
  • Copy the access token and use it as below
  • Syntax:
    • Authorization: Bearer <access_token_value_here>
  • Example:
    • Authorization: Bearer eyJraWQiOiIyMDE3MDgwOS0wMDoxxxxxxxxx
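The same token exchange can be scripted. A minimal sketch using only the Python standard library (the API key value is a placeholder you must supply; `build_token_request` is a helper name chosen here, not an IBM API):

```python
import urllib.parse
import urllib.request

IAM_TOKEN_URL = "https://iam.cloud.ibm.com/identity/token"

def build_token_request(api_key):
    """Build the POST request that trades an IBM Cloud API key for an IAM access token."""
    body = urllib.parse.urlencode({
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": api_key,
    }).encode()
    return urllib.request.Request(
        IAM_TOKEN_URL,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

# To actually fetch the token:
# import json
# with urllib.request.urlopen(build_token_request("<MY_APIKEY>")) as resp:
#     access_token = json.load(resp)["access_token"]
```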

Create an AI Application - Language Translator

  • Log in to ibmcloud
  • Go to the Catalog
  • Filter: Pricing plan = Lite
  • Category: select AI / Machine Learning
  • Click on Language Translator
  • Create
  • Check the consent-to-agreement box
  • Create
  • Copy the apikey and url under: Language Translator > Service Credentials
  • Replace apikey and url below (more REST calls: Language Translator > Getting Started)
curl -X POST --user "apikey:{apikey}" \
  --header "Content-Type: text/plain" \
  --data "Language Translator translates text from one language to another" \
  "{url}/v3/identify?version=2018-05-01"
  • Open IBM Cloud Shell from the ibmcloud toolbar
  • Run the command
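The curl call above can also be made from Python. A stdlib sketch (`url` and `apikey` are the values copied from Service Credentials; `build_identify_request` is a helper name chosen here):

```python
import base64
import urllib.request

def build_identify_request(url, apikey, text):
    """POST plain text to the Language Translator /v3/identify endpoint with basic auth."""
    credentials = base64.b64encode(f"apikey:{apikey}".encode()).decode()
    return urllib.request.Request(
        f"{url}/v3/identify?version=2018-05-01",
        data=text.encode(),
        headers={
            "Content-Type": "text/plain",
            "Authorization": f"Basic {credentials}",
        },
        method="POST",
    )

# req = build_identify_request(url, apikey,
#     "Language Translator translates text from one language to another")
# urllib.request.urlopen(req) returns the language-identification JSON
```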

Cloudant Database 

  • Log in to IBMCloud
  • Go to the Catalog
  • Select and create a Cloudant instance
  • Open the provisioned Cloudant instance from Resource List > Services and Software > Cloudant
  • Click on Manage > Launch Dashboard
  • Create Database > test > Click on Create
  • Open the test DB > Design Document > New Doc > add a new JSON key-value pair
eg:
{
  "_id": "ce9575de70477c932e222bf5b6bd7fea",
  "name": "deepak"
}
  • Click on Create Document

Let's fetch this document from the API

  • Under the Cloudant page > Service Credentials > Create New Role > Manager > Add
  • Open the new Service Credentials created; note down the apikey and url
  • Open IBM Cloud Shell from the ibmcloud toolbar (https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey&interface=api)
  • $curl -X POST 'https://iam.cloud.ibm.com/identity/token' -H 'Content-Type: application/x-www-form-urlencoded' -d 'grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=<MY_APIKEY>'
  • Copy the token generated
  • Run the below commands
API_BEARER_TOKEN=<paste token here>
curl -H "Authorization: Bearer $API_BEARER_TOKEN" -X GET "{url}/test/{_id from cloudant}"

Other APIs:

curl -H "Authorization: Bearer $API_BEARER_TOKEN" -X PUT "{url}/{db}" #Create DB
curl -H "Authorization: Bearer $API_BEARER_TOKEN" -X PUT "{url}/{db}/{doc_id}" -d '{document json}' #Create Document
curl -H "Authorization: Bearer $API_BEARER_TOKEN" -X GET "{url}/test/{_id from cloudant}" #Read Document
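The same create/read calls, sketched as a small Python helper (stdlib only; `base_url` and `token` stand for the url and bearer token noted above, and `cloudant_request` is a name chosen here):

```python
import json
import urllib.request

def cloudant_request(base_url, token, method, path, doc=None):
    """Build an authenticated Cloudant request; pass doc to PUT a JSON document."""
    body = json.dumps(doc).encode() if doc is not None else None
    headers = {"Authorization": f"Bearer {token}"}
    if body is not None:
        headers["Content-Type"] = "application/json"
    return urllib.request.Request(f"{base_url}/{path}", data=body,
                                  headers=headers, method=method)

# cloudant_request(url, token, "PUT", "test")                       # create DB
# cloudant_request(url, token, "PUT", "test/doc1", {"name": "x"})   # create document
# cloudant_request(url, token, "GET", "test/doc1")                  # read document
```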

Ref : 

https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey&interface=api
https://cloud.ibm.com/docs/Cloudant
https://cloud.ibm.com/apidocs/cloudant#getdocument

Functions

  • Login to IBMCloud
  • Catalog > search for and click on Functions
  • Click on Start Creating
  • Select Quickstart templates > Hello World
  • Select python3 > click Deploy
Note:
To modify the python code: Functions > Actions > hello-world
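For reference, the deployed Hello World template is a small Python action along these lines (a sketch matching the invoke results in the tests below; the exact template text may differ):

```python
def main(args):
    """IBM Cloud Functions action: receives the invoke parameters as a dict,
    returns a JSON-serializable dict."""
    name = args.get("name", "stranger")
    return {"greeting": "Hello " + name + "!"}
```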

Test1:

  • Click Invoke: Result - {"greeting": "Hello stranger!"}
  • Click Invoke with parameters: {"name":"deepak"}
  • Click Invoke: Result - {"greeting": "Hello deepak!"}

Test2

  • Open IBM Cloud Shell
  • Run:
curl -u xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx \
-X POST https://eu-gb.functions.cloud.ibm.com/api/v1/namespaces/j.thepac%40gmail.com_dev/actions/hello-world/helloworld?blocking=true

Test3

Open IBM Cloud Shell
$python3    #open a python shell
import requests
url="https://eu-gb.functions.cloud.ibm.com/api/v1/namespaces/j.thepac%40gmail.com_dev/actions/hello-world/helloworld?blocking=true"
auth=("xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx","xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")
data={"name":"deepak"}
r=requests.post(url,json=data,auth=auth,verify=False)
r.content

API Gateway (Proxy):

You can create a proxy link for "https://eu-gb.functions.cloud.ibm.com/api/v1/namespaces/j.thepac%40gmail.com_dev/actions/hello-world/helloworld?blocking=true" by creating an API Gateway and providing the above URL.

Simple ETL from COS to DB2

Pre-Req:

DB2:

  • Make sure you have created a DB2 instance in IBMCloud
  • Create a table in DB2 (do not insert any records)
  • CREATE TABLE table_name (col1 int, col2 varchar(255)); -- successfully created
  • In the Db2 UI > Data icon > Tables
  • Click on the schema
  • Check if the table is created
    • Test it
      • Syntax: Select * from schema.table;
      • Example: Select * from DXC02390.table_name;
  • Note down the schema name and table name
  • Click on the About icon in the Db2 UI
  • Note down the "<crn ..........::>"

Cloud Object Storage (COS):

  • Create a Cloud Object Storage (COS) instance in IBM Cloud
  • Create a bucket
  • Add a parquet file with a schema similar to the table created above (use Apache Spark to create the file locally, then drag and drop it)
  • Select the uploaded parquet file > Object Details > copy the Object SQL URL

Steps:

  • Create an SQL Query instance in ibmcloud
  • Run the below command to copy the data from COS to DB2
Syntax:
SELECT * FROM <Object SQL URL> STORED AS PARQUET INTO crn:xxxxxxx:/schema.table PARALLELISM 2

Example:
SELECT * FROM cos://jp-tok/cloud-object-storage-7d-cos-standard-gsi/test2Cols.parquet STORED AS PARQUET
INTO 
crn:v1:bluemix:public:dashdb-for-transactions:eu-gb:a/e31b7085afca4ab8b6ac9b1077cd8af9:9257e5bc-49f0-43a1-b776-f7a0ff41b2b6::/DXC02390.MONOREPO_POC PARALLELISM 2
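If you assemble that statement in code, it is plain string formatting over the pieces noted in the pre-req steps (a sketch; `cos_to_db2_statement` is a helper name chosen here, and the CRN/schema/table values are the placeholders from above):

```python
def cos_to_db2_statement(cos_url, db2_crn, schema, table, parallelism=2):
    """Format the SQL Query statement that copies a Parquet object in COS
    into a DB2 table identified by its CRN and schema.table."""
    return (
        f"SELECT * FROM {cos_url} STORED AS PARQUET "
        f"INTO {db2_crn}/{schema}.{table} PARALLELISM {parallelism}"
    )

# cos_to_db2_statement(
#     "cos://jp-tok/cloud-object-storage-7d-cos-standard-gsi/test2Cols.parquet",
#     "<DB2 crn from the About icon>", "DXC02390", "MONOREPO_POC")
```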

Copy ETL using REST 

Pre-Req:  Simple ETL from COS to DB2

curl -X POST 'https://iam.cloud.ibm.com/identity/token' \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    -d 'grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey={Create API key from Manage > Access > API keys}'

Copy the response token and save it:
API_TOKEN="xxxxxx"      #bash
or
SET API_TOKEN="xxxxxx"  #Windows

Get Current Jobs

curl -XGET   \
--url "https://api.sql-query.cloud.ibm.com/v3/sql_jobs?type=batch&instance_crn=crn:v1:bluemix:public:sql-query:in-che:a/e31b7085afca4ab8b6ac9b1077cd8af9:29b693b9-b195-4549-a2b0-03c93a26e3d1::"  \
 -H "Accept: application/json"  \
 -H "Authorization: Bearer <API_TOKEN>" 

#type=batch or type=stream

#Copy from one parquet file to another
curl -XPOST  \
--url "https://api.sql-query.cloud.ibm.com/v3/sql_jobs?instance_crn=crn:v1:bluemix:public:sql-query:in-che:a/e31b7085afca4ab8b6ac9b1077cd8af9:29b693b9-b195-4549-a2b0-03c93a26e3d1::"  \
-H "Accept: application/json"  \
-H "Authorization:Bearer <API_TOKEN>"  \
-H "Content-Type: application/json"   \
-d '{"statement":"SELECT * FROM cos://jp-tok/cloud-object-storage-7d-cos-standard-gsi/test2Cols.parquet STORED AS PARQUET INTO cos://jp-tok/cloud-object-storage-7d-cos-standard-gsi/test2Cols_result"  }'
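The job-submission POST can be assembled the same way from Python (stdlib sketch; the instance CRN and token are the placeholders used in the curl calls, and `build_sql_job_request` is a name chosen here):

```python
import json
import urllib.parse
import urllib.request

SQL_JOBS_URL = "https://api.sql-query.cloud.ibm.com/v3/sql_jobs"

def build_sql_job_request(instance_crn, token, statement):
    """Build the POST that submits a statement to the SQL Query v3 jobs endpoint."""
    query = urllib.parse.urlencode({"instance_crn": instance_crn})
    return urllib.request.Request(
        f"{SQL_JOBS_URL}?{query}",
        data=json.dumps({"statement": statement}).encode(),
        headers={
            "Accept": "application/json",
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```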

Run Spark Job on COS Data

  • Log in to IBMCLOUD
  • Go to Catalog > search for Watson Studio
  • Agree to the terms and conditions > Click on Create
  • Click on Next > Next > click Create Watson Studio
  • Click on Projects > New Project > Empty Project
  • Add to Project > Notebook
  • Select Runtime > python (smallest configuration)
!pip -q install ibmcloudsql
import ibmcloudsql

cloud_api_key="Create api key from Manage"
sql_crn="crn of SQL Query Instance"
sql_cos_endpoint="cosendpoint of bucket/result_prefix"
query="right click on the COS parq file and click on SQL Query"

sqlClient = ibmcloudsql.SQLQuery(cloud_api_key, sql_crn, sql_cos_endpoint) 
#sqlClient = ibmcloudsql.SQLQuery(my_ibmcloud_apikey, my_instance_crn)

res=sqlClient.run_sql(query)
  • You can create a job and run the notebook at a specific time and results can be seen in the Jobs tab.

Note:

  1. Any file you drag and drop into the Notebook will automatically get saved into COS.
  2. Click on "Insert code" to add Spark code to work on the DataFrame.


Ref:
  1. https://cloud.ibm.com/docs/sql-query
  2. https://medium.com/codait/analyzing-data-with-ibm-cloud-sql-query-bc53566a59f5
  3. https://cloud.ibm.com/docs/sql-query?topic=sql-query-data-transport-automation-to-db2-on-cloud
  4. https://www.ibm.com/cloud/blog/announcements/automate-serverless-data-pipelines-for-your-data-warehouse-or-data-lakes
  5. https://dataplatform.cloud.ibm.com/exchange/public/entry/view/4a9bb1c816fb1e0f31fec5d580e4e14d
  6. https://cloud.ibm.com/docs/sql-query?topic=sql-query-sql-reference
  7. https://video.ibm.com/playlist/633112 #https://www.youtube.com/watch?v=s-FznfHJpoU
  8. https://cloud.ibm.com/apidocs/sql-query-v3#introduction #REST
  9. https://cloud.ibm.com/apidocs/db2-on-cloud/db2-on-cloud-v4
  10. https://video.ibm.com/playlist/633075 #jupyter notebook
  11. https://cloud.ibm.com/docs/AnalyticsEngine?topic=AnalyticsEngine-working-with-sql#running-spark-sql-with-scala
  12. https://github.com/IBM-Cloud/sql-query-clients
  13. https://github.com/IBM-Cloud/sql-query-clients/tree/master/Python
