The Talend Community Knowledge Base (KB) article, Managing Talend microservices with Istio service mesh on Kubernetes, shows you how to connect, secure, control, and observe Talend microservices leveraging Istio.
This article describes the installation and platform-specific steps for managing Talend microservices using Istio service mesh on Azure Kubernetes Service (AKS).
For more information on installing Talend microservices using Istio on other cloud providers, see the following Talend Community KB articles:
Managing Talend microservices with Istio service mesh on Google Kubernetes Engine
Managing Talend microservices with Istio service mesh on Amazon Elastic Kubernetes Service
Create an Azure Kubernetes Service (AKS) cluster by using the Azure CLI or the Azure portal.
Set the Azure subscription ID.
az account set -s REPLACE_WITH_SUBSCRIPTION_ID # for example, az account set -s d93xxxxxxxxxxxxxx
Create a resource group.
az group create --name AKS_RESOURCE_GROUP --location REPLACE_WITH_AZURE_LOCATION # for example, az group create --name rchinta_csa_resource_group --location "West Europe"
Create an AKS Cluster.
az aks create --resource-group REPLACE_WITH_AKS_RESOURCE_GROUP --name AKS_CLUSTER_NAME --node-count 2 --kubernetes-version 1.14.8 --node-vm-size DS2_v2 #for example, az aks create --resource-group rchinta_csa_resource_group --name talend-bonn-Az-aks-cluster --node-count 2 --kubernetes-version 1.14.8 --node-vm-size DS2_v2
From the Azure portal, search for Kubernetes services, then click the Add button.
In the Create Kubernetes cluster > Basic settings, enter the details for Resource group, Kubernetes cluster name, Region, Kubernetes version, and Node size. Click Next.
In the Scale settings, enable the VM scale sets option to allow autoscaling of the nodes. Click Next.
In the Authentication settings, set Service principal to use the default service principal, then select Yes for Enable RBAC.
Note: If you want to create/assign a service principal, click the Configure service principal link.
Click Next until Review + Create, validate the settings, then click Create.
Note: The cluster creation might take 5 to 10 minutes.
In the Azure portal, click the Cloud Shell icon, initialize storage if required, then launch the bash shell.
In the cloud shell, create a kubeconfig file in the ~/.kube folder by executing the following command:
az aks get-credentials --resource-group REPLACE_WITH_AKS_RESOURCE_GROUP --name REPLACE_WITH_AKS_CLUSTER_NAME # for example, az aks get-credentials --resource-group rchinta_aks_resource_group --name talend-bonn-Az-aks-cluster
Verify that the worker nodes have joined the cluster by executing the following command:
kubectl get nodes
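If you prefer to script this check, the following sketch blocks until every node reports Ready before listing them. The function name is my own, and `kubectl wait` requires kubectl 1.11 or later:

```shell
# Convenience sketch: wait until all worker nodes report the Ready condition,
# then list them (function name is hypothetical)
wait_for_nodes() {
  kubectl wait --for=condition=Ready nodes --all --timeout=300s
  kubectl get nodes
}
```

Call `wait_for_nodes` from the cloud shell once the cluster is created.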
Helm is a package manager for Kubernetes. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Istio provides customizable Helm templates for installation on a Kubernetes cluster. For more information on other installation options, see the Istio Installation Guides.
Install the Helm client using one of these three approaches: Azure Cloud Shell, Linux, or Windows.
The Helm client is preinstalled on Azure Cloud Shell. Verify the installed version:
helm version
Skip this step if you use Azure Cloud Shell. The Helm client can be installed on Linux using shell commands.
Connect to a shell, then install the Helm client:
curl -LO https://git.io/get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
Skip this step if you use Azure Cloud Shell. The Helm package can be installed on Windows using Chocolatey.
Launch a command prompt and execute the following command:
choco
Note: If you get a command not found error for choco, follow the instructions in this link to install Chocolatey on Windows.
Install Helm using the choco install command:
# Install the Helm client
choco install kubernetes-helm
From the cloud shell or command prompt, create an istio_install directory.
# Create the istio_install directory (Windows / Linux)
mkdir istio_install
# Move into istio_install
cd istio_install
Download an Istio release and then execute the steps in the guide Customizable Install with Helm.
# Download the installation file using the curl command
curl -Ls -O https://github.com/istio/istio/releases/download/1.3.4/istio-1.3.4-linux.tar.gz   # Linux
curl -Ls -O https://github.com/istio/istio/releases/download/1.3.4/istio-1.3.4-win.zip        # Windows
# Gunzip and untar the file
gunzip istio-1.3.4-linux.tar.gz
tar -xvf istio-1.3.4-linux.tar
# Move into the istio-1.3.4 directory
cd istio-1.3.4
# Configure the PATH environment variable
export PATH=$PWD/bin:$PATH    # Linux
set PATH=%CD%/bin;%PATH%      # Windows
# Initialize Helm
helm init
# Add Istio to the Helm repository
helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.3.4/charts/
# Create the istio-system namespace
kubectl create namespace istio-system
# Install all the Istio CRDs
helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
Install Istio choosing one of the configuration profiles.
Note: This article uses the demo-auth profile.
# Execute the helm template with the predefined settings in the demo-auth profile
helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
  --values install/kubernetes/helm/istio/values-istio-demo-auth.yaml | kubectl apply -f -
Verify that all of the Istio components in the demo-auth configuration profile have the status Running or Completed by executing the following command:
kubectl get pods -n istio-system
Verify that an external IP is assigned to the istio-ingressgateway service:
kubectl get svc -n istio-system
Note:
Ingress traffic to application pods or microservices is only possible through the external (public) IP assigned to the istio-ingressgateway service.
An external load balancer is launched on AKS automatically, and its IP is assigned to the istio-ingressgateway service.
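If you want to capture that external IP in a variable for later use, a minimal sketch (function name is my own) reads it from the service's standard LoadBalancer status via jsonpath:

```shell
# Sketch: extract the external IP of the istio-ingressgateway service
# (assumes the standard .status.loadBalancer.ingress layout of a LoadBalancer service)
get_ingress_ip() {
  kubectl get svc istio-ingressgateway -n istio-system \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
}
```

On a live cluster you could then run, for example, `INGRESS_HOST=$(get_ingress_ip)`.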
With Istio demo-auth profile, add-ons for dashboards like Prometheus, Grafana, Kiali, and Jaeger are enabled and installed along with Istio.
Prometheus provides a web-based graphical user interface for querying Istio metric values.
Grafana is a web-based graphical user interface that provides a global view of the mesh along with services and their workloads.
Kiali is a web-based graphical user interface to view service graphs of the mesh and Istio configuration objects. Different graph types such as App, Versioned App, Workload, Service are available to view the services in the mesh.
Open a Windows command prompt in Administrator mode, then execute the following commands, which create the kubeconfig file in the Windows USER_HOME/.kube folder.
# Log in to Azure from the command prompt
az login
az aks get-credentials --resource-group REPLACE_WITH_AKS_RESOURCE_GROUP --name REPLACE_WITH_AKS_CLUSTER_NAME
# for example:
az aks get-credentials --resource-group rchinta_aks_resource_group --name talend-bonn-Az-aks-cluster
Launch the dashboards using local port forwarding.
# Access Prometheus using local port forwarding
kubectl port-forward svc/prometheus 9090:9090 -n istio-system
# Access Grafana using local port forwarding
kubectl port-forward svc/grafana 3000:3000 -n istio-system
# Access Kiali using local port forwarding
kubectl port-forward svc/kiali 20001:20001 -n istio-system
# Access Jaeger using local port forwarding
kubectl port-forward svc/jaeger-query 16686:16686 -n istio-system
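Each port-forward command blocks its terminal, so opening all four dashboards normally takes four terminals. As a convenience, the sketch below (the function and loop are my own, built from the same service/port pairs) launches all four forwards in the background from one shell:

```shell
# Convenience sketch: launch all four dashboard port-forwards in the background
# (function name is hypothetical; service/port pairs match the commands above)
start_dashboards() {
  for svc_port in prometheus:9090 grafana:3000 kiali:20001 jaeger-query:16686; do
    svc=${svc_port%%:*}
    port=${svc_port##*:}
    # Forward each dashboard's service port to the same local port
    kubectl port-forward "svc/$svc" "$port:$port" -n istio-system &
  done
  wait  # keep the forwards alive until interrupted
}
```

Press Ctrl+C to terminate all forwards at once.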
From the Azure portal, search for container registries, then click Add.
Enter the registry details, such as Registry name and Resource group, then click Create.
After the registry is created, open the registry, then click Access keys.
Obtain the ACR credentials from the Username and password fields.
From Talend Studio, publish the Customers microservice to ACR, then click Finish.
Note: In the Publish window, enter the ACR credentials in the Username and Password fields. For more information, follow the steps in the Creating an Azure Container Registry (ACR) section.
Repeat Step 1 and publish the Orders microservice. Make a note of the URLs of the microservice images published to ACR.
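To confirm both images actually landed in the registry, you can list its repositories with the Azure CLI. A small sketch (the helper name is my own; `az acr repository list` is a standard Azure CLI command):

```shell
# Sketch: list the repositories in an ACR to confirm the published images
# (helper name is hypothetical)
list_acr_images() {
  acr_name=$1
  az acr repository list --name "$acr_name" --output table
}
```

For example, `list_acr_images rchinta` should show the Customers and Orders repositories.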
Skip this section if you are not using Azure Container Registry as a container registry.
An AKS cluster, by default, doesn't have read access to the images in ACR. The application images can be pulled from the registry only after you authenticate AKS with ACR, as described in Authenticate with Azure Container Registry from Azure Kubernetes Service.
When you create an AKS cluster, a service principal is created by default in the Authentication settings and assigned to the AKS cluster.
Obtain the Client ID of the default service principal by executing the following command:
az aks show --name REPLACE_WITH_AKS_CLUSTER_NAME --resource-group REPLACE_WITH_AKS_CLUSTER_RESOURCE_GROUP --query servicePrincipalProfile.clientId -o tsv
# for example:
az aks show --name talend-bonn-Az-aks-cluster --resource-group rchinta_aks_resource_group --query servicePrincipalProfile.clientId -o tsv
# The command outputs the clientId: 197bdxxx-xxxx............
Obtain the ACR resource ID by executing the following command:
az acr show --name REPLACE_WITH_AZURE_CONTAINER_REGISTRY_NAME --query id --output tsv
# for example:
az acr show --name rchinta --query id --output tsv
# The command outputs the ACR resource ID: /subscriptions/xxxxxx/resourceGroups/rchinta_aks_resource_group/providers/Microsoft.ContainerRegistry/registries/rchinta
Assign the acrpull role to the AKS service principal by executing the following command:
az role assignment create --assignee REPLACE_WITH_AKS_SERVICE_PRINCIPAL_ID --scope REPLACE_WITH_ACR_RESOURCE_ID --role acrpull
# for example, with the ACR resource ID as the scope:
# /subscriptions/xxxxxx/resourceGroups/rchinta_aks_resource_group/providers/Microsoft.ContainerRegistry/registries/rchinta
Note: If you get an Insufficient privileges to complete the operation error, contact your Azure administrator.
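The three steps above can be collected into one script so the IDs never have to be copied by hand. This is a sketch under the same assumptions as the steps above; the function and variable names are my own:

```shell
# Sketch: grant the AKS service principal pull access to an ACR
# (function and variable names are hypothetical)
grant_acr_pull() {
  aks_name=$1; aks_rg=$2; acr_name=$3
  # Client ID of the AKS default service principal
  client_id=$(az aks show --name "$aks_name" --resource-group "$aks_rg" \
    --query servicePrincipalProfile.clientId -o tsv)
  # Resource ID of the container registry
  acr_id=$(az acr show --name "$acr_name" --query id --output tsv)
  # Assign the acrpull role scoped to that registry
  az role assignment create --assignee "$client_id" --scope "$acr_id" --role acrpull
}
```

For example: `grant_acr_pull talend-bonn-Az-aks-cluster rchinta_aks_resource_group rchinta`.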
Create a Kubernetes secret that contains the AKS service principal's Client ID and Client Secret.
Using the Azure CLI or Cloud Shell, execute the following command:
# Fill in your registry details in the below command before executing.
kubectl create secret docker-registry acr-auth --docker-server REPLACE_WITH_YOUR_AZURE_REGISTRY_URL --docker-username REPLACE_WITH_YOUR_SERVICE_PRINCIPAL_CLIENT_ID --docker-password REPLACE_WITH_YOUR_SERVICE_PRINCIPAL_CLIENT_SECRET --docker-email REPLACE_WITH_YOUR_EMAIL_ID
# for example:
kubectl create secret docker-registry acr-auth --docker-server rchinta.azurecr.io --docker-username ff420.... --docker-password 5CQHrss.... --docker-email rchinta@talend.com
Azure Kubernetes Service authenticates with the Azure Container Registry using the Secret configuration in the resource files and downloads the application images.
In the demo Kubernetes resource files for AKS attached to this article, observe the usage of the imagePullSecrets property with the name acr-auth.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: order-service-v2
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: order-service
        version: v2
    spec:
      containers:
      - name: order-service
        image: rchinta.azurecr.io/k8s_microservice/customer/microservices/orders:0.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        command: ["/bin/sh", "/maven/Orders/Orders_run.sh"]
        args: ["--spring.config.location=classpath:config/contexts/PROD.properties"]
      imagePullSecrets:
      - name: acr-auth
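Deploying a manifest like this into the mesh is typically paired with Istio's automatic sidecar injection, which is enabled per namespace with the istio-injection=enabled label. The sketch below is a hedged example (function name, namespace, and manifest filename are placeholders, not from the article):

```shell
# Sketch: enable automatic Istio sidecar injection on a namespace,
# then apply a deployment manifest into it (names are hypothetical)
deploy_with_injection() {
  ns=$1; manifest=$2
  kubectl label namespace "$ns" istio-injection=enabled --overwrite
  kubectl apply -n "$ns" -f "$manifest"
}
```

For example, if the deployment above were saved locally, you might run `deploy_with_injection default order-service-v2.yaml` and then check the pods with `kubectl get pods -l app=order-service`.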
This article showed you how to launch an Azure Kubernetes Service cluster, install Istio manually using Helm, and publish Talend microservices to Azure Container Registry.