Qlik Community

Qlik Sense Multi-Cloud

Discussion board where members can learn more about Multi-Cloud deployments for Qlik Sense Enterprise.

korsikov
Valued Contributor II

Internal server error

Hi everyone.

I want to deploy QSEfE following this instruction: https://help.qlik.com/en-US/sense/November2018/Subsystems/PlanningQlikSenseDeployments/Content/Sense...

I want to note that the documentation does not mention that your LEF file must contain the line Elastics:YES;

Anyway, my question: I have a Google Kubernetes Engine (GKE) cluster.

s@cloudshell:~ (qsefe-228310)$ cat values.yaml
# This setting enables dev mode to include a local MongoDB install
devMode:
  enabled: true

# These settings accept the EULA for the product
# and specify the storage for the engine
engine:
  acceptEULA: "yes"
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    existingClaim: qs-claim

## These settings are to accommodate if RBAC is enabled.
#mira:
#  rbac:
#    create: true
#  serviceAccount:
#    create: true
#
#elastic-infra:
#  traefik:
#    rbac:
#      enabled: true
#  nginx-ingress:
#    rbac:
#      create: true

# These settings specify the storage for the resource-library.
resource-library:
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    existingClaim: qs-claim
s@cloudshell:~ (qsefe-228310)$

All pods run without errors:

s@cloudshell:~ (qsefe-228310)$ kubectl get pv,pvc,pod
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                     STORAGECLASS   REASON    AGE
persistentvolume/pvc-332d5539-15b5-11e9-be5f-42010a800200   8Gi        RWO            Delete           Bound     default/redis-data-qsefe-redis-master-0   standard                 4h
persistentvolume/pvc-dc2bdbad-15b4-11e9-be5f-42010a800200   30Gi       RWO            Delete           Bound     default/qs-claim                          standard                 4h
persistentvolume/pvc-e76cf09c-15d8-11e9-be5f-42010a800200   5Gi        RWO            Delete           Bound     default/qsefe-reporting                   standard                 30m

NAME                                                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/qs-claim                          Bound     pvc-dc2bdbad-15b4-11e9-be5f-42010a800200   30Gi       RWO            standard       4h
persistentvolumeclaim/qsefe-reporting                   Bound     pvc-e76cf09c-15d8-11e9-be5f-42010a800200   5Gi        RWO            standard       30m
persistentvolumeclaim/redis-data-qsefe-redis-master-0   Bound     pvc-332d5539-15b5-11e9-be5f-42010a800200   8Gi        RWO            standard       4h

NAME                                                       READY     STATUS    RESTARTS   AGE
pod/qsefe-collections-bc757df9-ktltr                       1/1       Running   0          19m
pod/qsefe-data-prep-747b7f9ff8-wtt2l                       1/1       Running   0          30m
pod/qsefe-edge-auth-cffb66f66-6b8j4                        2/2       Running   0          19m
pod/qsefe-engine-746b88f9b6-2ls5k                          1/1       Running   0          30m
pod/qsefe-feature-flags-64748d5cb8-fkrf2                   1/1       Running   0          30m
pod/qsefe-hub-84494ffb5b-8kmcx                             1/1       Running   0          30m
pod/qsefe-identity-providers-6d4bf4b4c8-69kfl              1/1       Running   0          30m
pod/qsefe-insights-85dbbc6f97-2jmgh                        1/1       Running   3          30m
pod/qsefe-licenses-66cc8446d8-99dg6                        1/1       Running   3          30m
pod/qsefe-locale-6784fc78cf-69pnt                          1/1       Running   0          30m
pod/qsefe-mira-5588cc988b-lc6jg                            1/1       Running   0          30m
pod/qsefe-mongodb-7c76b78d69-jglcl                         1/1       Running   1          30m
pod/qsefe-nginx-ingress-controller-ff94f6df4-r7q2z         1/1       Running   0          30m
pod/qsefe-nginx-ingress-default-backend-86dcc84bb5-7l99n   1/1       Running   0          30m
pod/qsefe-policy-decisions-668d8dd6c6-6knf6                1/1       Running   0          30m
pod/qsefe-qix-sessions-84ddc9fc96-dtp54                    1/1       Running   0          30m
pod/qsefe-redis-master-0                                   1/1       Running   0          19m
pod/qsefe-redis-metrics-8469db7486-jf779                   1/1       Running   0          30m
pod/qsefe-redis-slave-5c8f79dfd5-wlpc2                     1/1       Running   0          19m
pod/qsefe-reporting-7799d7dd75-6xtn7                       2/2       Running   0          30m
pod/qsefe-resource-library-6fc4d6fbd7-7s7wv                1/1       Running   2          30m
pod/qsefe-sense-client-77667c999b-rbx22                    1/1       Running   0          30m
pod/qsefe-tenants-6849966f4d-6smtb                         1/1       Running   2          30m
pod/qsefe-traefik-59c676887b-64gfk                         1/1       Running   0          30m
pod/qsefe-users-95d7d96bb-sn87q                            1/1       Running   4          30m
s@cloudshell:~ (qsefe-228310)$

But when I try to open the elastic hub at https://Public-ip, I see the error "500 Internal Server Error".

@Michael_Tarallo I have already read all the available information about multi-cloud in the support KB and on the partner technical advisor site, and I can't find an answer. Thanks for any advice.

3 Replies
korsikov
Valued Contributor II

Re: Internal server error

I found that the RBAC section is needed. I found the error "neet auth unexpected status: 500 while sending to client, client: 10.4.0.1".
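Enabling RBAC here means uncommenting the RBAC section of the values.yaml shown above, roughly like this (a sketch based on that commented block; the exact keys depend on the chart version, so verify against your chart):

```yaml
# RBAC settings, uncommented from the values.yaml above
mira:
  rbac:
    create: true
  serviceAccount:
    create: true

elastic-infra:
  traefik:
    rbac:
      enabled: true
  nginx-ingress:
    rbac:
      create: true
```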

After enabling RBAC and restarting the deployment, I got this error:

{"errors":[{"title":"No authentication configured for this hostname","code":"LOGIN-2","status":"401"}]}

In dev mode there should be a MongoDB install and test authentication configured:

----

 

In this simple deployment an example Identity Provider is automatically configured. This allows you to log in to the hub with some sample accounts. When you browse to the hub you will be asked to log in, and you can use the sample account harley@qlik.example with the password Password1!.

---

For unclear reasons, it doesn't work.

 

UPDATE

I added this line to my hosts file:

PUBLICIP elastic.example

and after that I can open the link.

Only the name elastic.example works; anything else returns the "No authentication configured for this hostname" error.

korsikov
Valued Contributor II

Re: Internal server error

Next update:
When I open the link http://elastic.example, I am redirected to https://elastic.example by a 308, and then to
http://elastic.example:32123/auth?client_id=foo&scope=
Why http after https?
I found the service with port 32123:
qsefe-edge-auth NodePort 10.7.251.127 <none> 8080:32317/TCP,32123:32123/TCP

I think the problem is that this service has no external IP and is not reachable from my browser.

strelokr@cloudshell:~ (qsefe-228310)$ kubectl describe service qsefe-edge-auth
Name:                     qsefe-edge-auth
Namespace:                default
Labels:                   app=edge-auth
                          chart=edge-auth-2.2.7
                          heritage=Tiller
                          release=qsefe
Annotations:              prometheus.io/port=8080
                          prometheus.io/scrape=true
Selector:                 app=edge-auth,release=qsefe
Type:                     NodePort
IP:                       10.7.251.127
Port:                     edge-auth  8080/TCP
TargetPort:               8080/TCP
NodePort:                 edge-auth  32317/TCP
Endpoints:                10.4.0.44:8080
Port:                     oidc  32123/TCP
TargetPort:               32123/TCP
NodePort:                 oidc  32123/TCP
Endpoints:                10.4.0.44:32123
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

 

But I found a load balancer for this service on a different external IP.
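If the chart supports it, one way to get the edge-auth service reachable from the browser might be to override its service type in values.yaml. This is a hypothetical sketch; the `edge-auth.service.type` key is an assumption and needs to be checked against the edge-auth subchart's values:

```yaml
# Hypothetical override — assumes the edge-auth subchart exposes a
# service.type setting; verify against the chart before using
edge-auth:
  service:
    type: LoadBalancer
```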

korsikov
Valued Contributor II

Re: Internal server error

I found this error:

Error during sync: error while evaluating the ingress spec: service "default/qsefe-traefik-dashboard" is type "ClusterIP", expected "NodePort" or "LoadBalancer"

 

Maybe that's the problem.
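Reading the message literally, the ingress controller wants the backing service to be of type NodePort or LoadBalancer, so one possible workaround sketch (untested assumption) is to patch the service type:

```yaml
# Hypothetical patch fragment for the qsefe-traefik-dashboard Service:
# the sync error says the backend must be NodePort or LoadBalancer
spec:
  type: NodePort
```

This could be applied, for example, with `kubectl patch service qsefe-traefik-dashboard -p '{"spec":{"type":"NodePort"}}'` — again, whether this resolves the 500 is an assumption, not a confirmed fix.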