This follows on from yesterday's thread: -
Today, after booting up my ICP VMs ( Master/Boot, Worker and Proxy ), I see a series of errors in the GUI, all of which makes me think that Helm API isn't happy again.
Digging into the Helm API pod, I dived onto the command-line: -
docker ps -a|grep -i helm-api
f9ee8a0e5d20 b72c1d4155b8 "npm start" 2 minutes ago Exited (0) About a minute ago k8s_helmapi_helm-api-5874f9d746-9qcjg_kube-system_7122aa29-1bcb-11e8-ab0b-000c290f4d7f_60
de1a7b2d1bf2 ibmcom/pause:3.0 "/pause" 2 hours ago Up 2 hours k8s_POD_helm-api-5874f9d746-9qcjg_kube-system_7122aa29-1bcb-11e8-ab0b-000c290f4d7f_8
which makes me think that the first container is having a bad day.
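Before bouncing it, it's worth asking Docker how that last run actually ended; a minimal check, using the container ID from the listing above, looks like this: -
# Show the exit code and start/finish timestamps for the most recent run
docker inspect f9ee8a0e5d20 --format 'ExitCode={{.State.ExitCode}} Started={{.State.StartedAt}} Finished={{.State.FinishedAt}}'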
I bounced it: -
docker restart f9ee8a0e5d20
and watched the logs: -
docker logs f9ee8a0e5d20 -f
> helmApi@0.0.0 start /usr/src/app
> node ./bin/www
2018-03-02T11:15:31.070Z 'FINE' 'HELM_REPOS helm_repos'
2018-03-02T11:15:31.074Z 'FINE' 'process.env.DBHOST https://cloudantdb:6984'
2018-03-02T11:15:31.075Z 'FINE' 'db_host https://cloudantdb:6984'
2018-03-02T11:15:31.318Z 'INFO' 'dbutils Authenticating with the database host \'https://cloudantdb:6984\'...'
D0302 11:15:31.459818780 15 env_linux.c:66] Warning: insecure environment read function 'getenv' used
2018-03-02T11:15:31.503Z 'FINE' 'Tiller url tiller-deploy.kube-system:44134'
2018-03-02T11:15:31.503Z 'FINE' 'eval ISICP true'
2018-03-02T11:15:31.507Z 'INFO' 'dbutils createMultipleDb(helm_repos)'
2018-03-02T11:15:31.509Z 'FINE' 'ABOUT TO START SYNCH'
2018-03-02T11:15:31.509Z 'FINE' 'startInitialSynch'
2018-03-02T11:15:33.509Z 'INFO' 'dbutils getDbConnection(\'helm_repos\') Initializing connection to database \'helm_repos\''
2018-03-02T11:15:36.549Z 'WARN' 'dbutils Using db with no auth'
2018-03-02T11:15:36.550Z 'INFO' 'dbutilsPOST Headers using db with no auth'
2018-03-02T11:15:36.551Z 'INFO' 'dbutils Initial DB connections tested.'
2018-03-02T11:15:37.513Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'
2018-03-02T11:15:37.513Z 'INFO' 'dbutils createDb helm_repos'
2018-03-02T11:15:42.536Z 'INFO' 'dbutils addViewsToDb'
2018-03-02T11:15:52.579Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'
> helmApi@0.0.0 start /usr/src/app
> node ./bin/www
2018-03-02T11:18:54.589Z 'FINE' 'HELM_REPOS helm_repos'
2018-03-02T11:18:54.593Z 'FINE' 'process.env.DBHOST https://cloudantdb:6984'
2018-03-02T11:18:54.595Z 'FINE' 'db_host https://cloudantdb:6984'
2018-03-02T11:18:55.378Z 'INFO' 'dbutils Authenticating with the database host \'https://cloudantdb:6984\'...'
D0302 11:18:55.643869950 15 env_linux.c:66] Warning: insecure environment read function 'getenv' used
2018-03-02T11:18:55.730Z 'FINE' 'Tiller url tiller-deploy.kube-system:44134'
2018-03-02T11:18:55.730Z 'FINE' 'eval ISICP true'
2018-03-02T11:18:55.744Z 'INFO' 'dbutils createMultipleDb(helm_repos)'
2018-03-02T11:18:55.745Z 'FINE' 'ABOUT TO START SYNCH'
2018-03-02T11:18:55.745Z 'FINE' 'startInitialSynch'
2018-03-02T11:18:57.745Z 'INFO' 'dbutils getDbConnection(\'helm_repos\') Initializing connection to database \'helm_repos\''
2018-03-02T11:19:00.809Z 'WARN' 'dbutils Using db with no auth'
2018-03-02T11:19:00.814Z 'INFO' 'dbutilsPOST Headers using db with no auth'
2018-03-02T11:19:00.815Z 'INFO' 'dbutils Initial DB connections tested.'
2018-03-02T11:19:01.745Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'
2018-03-02T11:19:01.746Z 'INFO' 'dbutils createDb helm_repos'
2018-03-02T11:19:06.761Z 'INFO' 'dbutils addViewsToDb'
2018-03-02T11:19:16.781Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'
but things don't appear to be any better: -
docker ps -a|grep -i helm-api
a6616b26719e b72c1d4155b8 "npm start" 2 minutes ago Exited (0) About a minute ago k8s_helmapi_helm-api-5874f9d746-9qcjg_kube-system_7122aa29-1bcb-11e8-ab0b-000c290f4d7f_61
de1a7b2d1bf2 ibmcom/pause:3.0 "/pause" 2 hours ago Up 2 hours k8s_POD_helm-api-5874f9d746-9qcjg_kube-system_7122aa29-1bcb-11e8-ab0b-000c290f4d7f_8
( Note that the container ID has changed )
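That's expected - the kubelet spins up a brand-new container each time the old one exits, so restarting it by hand with docker only treats the symptom. Had I wanted Kubernetes to do the bouncing for me, the more orthodox route would've been to delete the pod and let its Deployment recreate it - a sketch: -
# Delete the crashing pod; the Deployment / ReplicaSet behind it will schedule a replacement
kubectl delete pod helm-api-5874f9d746-9qcjg --namespace kube-system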
I checked the logs for this new container: -
docker logs a6616b26719e -f
> helmApi@0.0.0 start /usr/src/app
> node ./bin/www
2018-03-02T11:19:24.518Z 'FINE' 'HELM_REPOS helm_repos'
2018-03-02T11:19:24.529Z 'FINE' 'process.env.DBHOST https://cloudantdb:6984'
2018-03-02T11:19:24.530Z 'FINE' 'db_host https://cloudantdb:6984'
2018-03-02T11:19:24.981Z 'INFO' 'dbutils Authenticating with the database host \'https://cloudantdb:6984\'...'
D0302 11:19:25.206140919 15 env_linux.c:66] Warning: insecure environment read function 'getenv' used
2018-03-02T11:19:25.287Z 'FINE' 'Tiller url tiller-deploy.kube-system:44134'
2018-03-02T11:19:25.287Z 'FINE' 'eval ISICP true'
2018-03-02T11:19:25.291Z 'INFO' 'dbutils createMultipleDb(helm_repos)'
2018-03-02T11:19:25.292Z 'FINE' 'ABOUT TO START SYNCH'
2018-03-02T11:19:25.293Z 'FINE' 'startInitialSynch'
2018-03-02T11:19:27.294Z 'INFO' 'dbutils getDbConnection(\'helm_repos\') Initializing connection to database \'helm_repos\''
2018-03-02T11:19:30.322Z 'WARN' 'dbutils Using db with no auth'
2018-03-02T11:19:30.322Z 'INFO' 'dbutilsPOST Headers using db with no auth'
2018-03-02T11:19:30.323Z 'INFO' 'dbutils Initial DB connections tested.'
2018-03-02T11:19:31.296Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'
2018-03-02T11:19:31.296Z 'INFO' 'dbutils createDb helm_repos'
2018-03-02T11:19:36.314Z 'INFO' 'dbutils addViewsToDb'
2018-03-02T11:19:46.335Z 'INFO' 'dbutils createMultipleDb(dbNameArr, arrIndex)'
so no obvious exception …..
And yet …..
I dug further, looking specifically at the Kubernetes pods in the kube-system namespace ( rather than the default namespace, where my "user" workloads reside ): -
kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
auth-apikeys-fxlxs 1/1 Running 5 2d
auth-idp-wwhlw 3/3 Running 16 2d
auth-pap-2x8fs 1/1 Running 5 2d
auth-pdp-bpk7h 1/1 Running 5 2d
calico-node-amd64-ls5pg 2/2 Running 18 2d
calico-node-amd64-pk49d 2/2 Running 16 2d
calico-node-amd64-qwt42 2/2 Running 17 2d
calico-policy-controller-5997c6c956-m9nmd 1/1 Running 6 2d
catalog-catalog-apiserver-6mgbq 1/1 Running 17 2d
catalog-catalog-controller-manager-bd9f49c8c-f625b 1/1 Running 23 2d
catalog-ui-dgcq2 1/1 Running 6 2d
default-http-backend-8448fbc655-fnqv2 1/1 Running 1 1d
elasticsearch-client-6c9fc8b5b6-h87wr 2/2 Running 12 2d
elasticsearch-data-0 1/1 Running 0 2h
elasticsearch-master-667485dfc5-bppms 1/1 Running 6 2d
filebeat-ds-amd64-2h55z 1/1 Running 7 2d
filebeat-ds-amd64-dz2dz 1/1 Running 7 2d
filebeat-ds-amd64-pnjzf 1/1 Running 7 2d
heapster-5fd94775d5-8tjb9 2/2 Running 13 2d
helm-api-5874f9d746-9qcjg 0/1 CrashLoopBackOff 66 2d
helmrepo-77dccffb66-9xwgd 0/1 Running 32 2d
icp-ds-0 1/1 Running 6 2d
icp-router-86p4z 1/1 Running 25 2d
image-manager-0 2/2 Running 13 2d
k8s-etcd-192.168.1.100 1/1 Running 7 2d
k8s-mariadb-192.168.1.100 1/1 Running 7 2d
k8s-master-192.168.1.100 3/3 Running 22 2d
k8s-proxy-192.168.1.100 1/1 Running 7 2d
k8s-proxy-192.168.1.101 1/1 Running 8 2d
k8s-proxy-192.168.1.102 1/1 Running 7 2d
kube-dns-9494dc977-8tkmv 3/3 Running 20 2d
logstash-5ccb9849d6-z9ntw 1/1 Running 7 2d
metering-dm-8587b865b4-ng6rc 1/1 Running 8 2d
metering-reader-amd64-wtk6s 1/1 Running 9 2d
metering-reader-amd64-xp5h2 1/1 Running 10 2d
metering-reader-amd64-z2s9j 1/1 Running 11 2d
metering-server-748d8f8f5b-x57fs 1/1 Running 7 2d
metering-ui-6c56c5778f-xnnfr 1/1 Running 11 2d
monitoring-exporter-76b94fdd94-djr96 1/1 Running 8 2d
monitoring-grafana-5c49f54dd-w498k 2/2 Running 15 2d
monitoring-prometheus-77d4df9dd6-zqtt5 3/3 Running 22 2d
monitoring-prometheus-alertmanager-564496655f-np9hn 3/3 Running 22 2d
monitoring-prometheus-kubestatemetrics-776b5dcb86-r6jmg 1/1 Running 8 2d
monitoring-prometheus-nodeexporter-amd64-9vhn6 1/1 Running 9 2d
monitoring-prometheus-nodeexporter-amd64-lr8x5 1/1 Running 7 2d
monitoring-prometheus-nodeexporter-amd64-pbzk8 1/1 Running 8 2d
nginx-ingress-lb-amd64-8jjtz 1/1 Running 15 2d
platform-api-bkgs8 1/1 Running 6 2d
platform-ui-qmtg7 1/1 Running 12 2d
rescheduler-xn7jf 1/1 Running 6 2d
tiller-deploy-55fb4d8dcc-2b75v 1/1 Running 7 2d
unified-router-np97k 1/1 Running 13 2d
and then looked at the logs for that particular offending pod: -
kubectl logs helm-api-5874f9d746-9qcjg -f --namespace kube-system
…
2018-03-02T11:37:29.661Z 'FINE' 'GET /healthcheck'
2018-03-02T11:37:29.662Z 'FINE' 'dbHealthcheck \nrepoName: ibm-charts\n'
2018-03-02T11:37:29.908Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:37:29.914Z 'FINE' 'loadMessages en'
GET /healthcheck 200 252.498 ms - 16
2018-03-02T11:37:36.088Z 'FINE' 'GET /healthcheck'
2018-03-02T11:37:36.089Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:37:36.089Z 'FINE' 'loadMessages en'
GET /healthcheck 200 1.314 ms - 16
2018-03-02T11:37:46.088Z 'FINE' 'GET /healthcheck'
2018-03-02T11:37:46.089Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:37:46.089Z 'FINE' 'loadMessages en'
GET /healthcheck 200 0.439 ms - 16
2018-03-02T11:37:56.088Z 'FINE' 'GET /healthcheck'
2018-03-02T11:37:56.089Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:37:56.090Z 'FINE' 'loadMessages en'
GET /healthcheck 200 3.081 ms - 16
2018-03-02T11:37:58.656Z 'FINE' 'GET /healthcheck'
2018-03-02T11:37:58.656Z 'FINE' 'dbHealthcheck \nrepoName: ibm-charts\n'
2018-03-02T11:37:58.669Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:37:58.670Z 'FINE' 'loadMessages en'
GET /healthcheck 200 14.629 ms - 16
2018-03-02T11:38:06.107Z 'FINE' 'GET /healthcheck'
2018-03-02T11:38:06.107Z 'FINE' 'getMessage ["statusCode",200] en '
2018-03-02T11:38:06.108Z 'FINE' 'loadMessages en'
GET /healthcheck 200 0.393 ms - 16
…
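For the record, when the logs look this clean but the pod is still flapping, the other place worth checking is kubectl describe, which shows the liveness/readiness probe settings, the container's last termination state and the recent events - a sketch: -
kubectl describe pod helm-api-5874f9d746-9qcjg --namespace kube-system
# Look at the Liveness / Readiness lines, the helmapi container's 'Last State' / 'Exit Code',
# and the Events section at the bottom, for clues such as failed probes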
However, after a bit of time ( and patience ), we have this: -
kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
auth-apikeys-fxlxs 1/1 Running 5 2d
auth-idp-wwhlw 3/3 Running 16 2d
auth-pap-2x8fs 1/1 Running 5 2d
auth-pdp-bpk7h 1/1 Running 5 2d
calico-node-amd64-ls5pg 2/2 Running 18 2d
calico-node-amd64-pk49d 2/2 Running 16 2d
calico-node-amd64-qwt42 2/2 Running 17 2d
calico-policy-controller-5997c6c956-m9nmd 1/1 Running 6 2d
catalog-catalog-apiserver-6mgbq 1/1 Running 17 2d
catalog-catalog-controller-manager-bd9f49c8c-f625b 1/1 Running 23 2d
catalog-ui-dgcq2 1/1 Running 6 2d
default-http-backend-8448fbc655-fnqv2 1/1 Running 1 1d
elasticsearch-client-6c9fc8b5b6-h87wr 2/2 Running 12 2d
elasticsearch-data-0 1/1 Running 0 2h
elasticsearch-master-667485dfc5-bppms 1/1 Running 6 2d
filebeat-ds-amd64-2h55z 1/1 Running 7 2d
filebeat-ds-amd64-dz2dz 1/1 Running 7 2d
filebeat-ds-amd64-pnjzf 1/1 Running 7 2d
heapster-5fd94775d5-8tjb9 2/2 Running 13 2d
helm-api-5874f9d746-9qcjg 1/1 Running 67 2d
helmrepo-77dccffb66-9xwgd 1/1 Running 33 2d
icp-ds-0 1/1 Running 6 2d
icp-router-86p4z 1/1 Running 25 2d
image-manager-0 2/2 Running 13 2d
k8s-etcd-192.168.1.100 1/1 Running 7 2d
k8s-mariadb-192.168.1.100 1/1 Running 7 2d
k8s-master-192.168.1.100 3/3 Running 22 2d
k8s-proxy-192.168.1.100 1/1 Running 7 2d
k8s-proxy-192.168.1.101 1/1 Running 8 2d
k8s-proxy-192.168.1.102 1/1 Running 7 2d
kube-dns-9494dc977-8tkmv 3/3 Running 20 2d
logstash-5ccb9849d6-z9ntw 1/1 Running 7 2d
metering-dm-8587b865b4-ng6rc 1/1 Running 8 2d
metering-reader-amd64-wtk6s 1/1 Running 9 2d
metering-reader-amd64-xp5h2 1/1 Running 10 2d
metering-reader-amd64-z2s9j 1/1 Running 11 2d
metering-server-748d8f8f5b-x57fs 1/1 Running 7 2d
metering-ui-6c56c5778f-xnnfr 1/1 Running 11 2d
monitoring-exporter-76b94fdd94-djr96 1/1 Running 8 2d
monitoring-grafana-5c49f54dd-w498k 2/2 Running 15 2d
monitoring-prometheus-77d4df9dd6-zqtt5 3/3 Running 22 2d
monitoring-prometheus-alertmanager-564496655f-np9hn 3/3 Running 22 2d
monitoring-prometheus-kubestatemetrics-776b5dcb86-r6jmg 1/1 Running 8 2d
monitoring-prometheus-nodeexporter-amd64-9vhn6 1/1 Running 9 2d
monitoring-prometheus-nodeexporter-amd64-lr8x5 1/1 Running 7 2d
monitoring-prometheus-nodeexporter-amd64-pbzk8 1/1 Running 8 2d
nginx-ingress-lb-amd64-8jjtz 1/1 Running 15 2d
platform-api-bkgs8 1/1 Running 6 2d
platform-ui-qmtg7 1/1 Running 12 2d
rescheduler-xn7jf 1/1 Running 6 2d
tiller-deploy-55fb4d8dcc-2b75v 1/1 Running 7 2d
unified-router-np97k 1/1 Running 13 2d
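As an aside, with this many pods, a quick filter saves some squinting; something like this just hides everything that's happily Running: -
# List only the kube-system pods that are NOT in the Running state ( the header line will also appear )
kubectl get pods --namespace kube-system | grep -v Running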
I still have one unhealthy deployment, but that's my DataPower pod: -
kubectl get pods --namespace default
NAME READY STATUS RESTARTS AGE
davesdatapower-ibm-datapower-dev-57f6cf4c95-bl7sk 0/1 CrashLoopBackOff 39 20h
virtuous-joey-ibm-open-l-6db4566d6d-zbgf9 1/1 Running 1 17h
kubectl logs davesdatapower-ibm-datapower-dev-57f6cf4c95-bl7sk -f
20180302T113711.926Z [0x8040006b][system][notice] logging target(default-log): Logging started.
20180302T113712.167Z [0x804000fb][system][error] : Incorrect number of CPUs. Expected minimum is 2, but have 1.
20180302T113712.167Z [0x804000fe][system][notice] : Container instance UUID: 807dada3-37e8-4dea-938d-443e351cd96e, Cores: 1, vCPUs: 1, CPU model: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz, Memory: 7954.3MB, Platform: docker, OS: dpos, Edition: developers-limited, Up time: 0 minutes
20180302T113712.171Z [0x8040001c][system][notice] : DataPower IDG is on-line.
20180302T113712.172Z [0x8100006f][system][notice] : Executing default startup configuration.
20180302T113712.464Z [0x8100006d][system][notice] : Executing system configuration.
20180302T113712.465Z [0x8100006b][mgmt][notice] domain(default): tid(8175): Domain operational state is up.
davesdatapower-ibm-datapower-dev-57f6cf4c95-bl7sk
Unauthorized access prohibited.
20180302T113715.531Z [0x806000dd][system][notice] cert-monitor(Certificate Monitor): tid(399): Enabling Certificate Monitor to scan once every 1 days for soon to expire certificates
20180302T113716.388Z [0x8100006e][system][notice] : Executing startup configuration.
20180302T113716.409Z [0x8040009f][system][notice] throttle(Throttler): tid(1391): Disabling throttle.
20180302T113716.447Z [0x00350015][mgmt][notice] b2b-persistence(B2BPersistence): tid(111): Operational state down
20180302T113716.663Z [0x0034000d][mgmt][warn] ssh(SSH Service): tid(111): Object is disabled
20180302T113717.447Z [0x00350015][mgmt][notice] smtp-server-connection(default): tid(7055): Operational state down
login: 20180302T113717.447Z [0x00350014][mgmt][notice] smtp-server-connection(default): tid(7055): Operational state up
20180302T113717.546Z [0x0035008f][mgmt][notice] quota-enforcement-server(QuotaEnforcementServer): tid(687): Operational state down pending
20180302T113717.605Z [0x00350014][mgmt][notice] web-mgmt(WebGUI-Settings): tid(303): Operational state up
20180302T113717.671Z [0x8100006b][mgmt][notice] domain(webApplicationProxy): tid(29615): Domain operational state is up.
20180302T113717.683Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'local:'
20180302T113717.683Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'logtemp:'
20180302T113717.683Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'logstore:'
20180302T113717.683Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'temporary:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'export:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'chkpoints:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'policyframework:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'dpnfsstatic:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'dpnfsauto:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'ftp-response:'
20180302T113717.684Z [0x8100001f][mgmt][notice] domain(webApplicationProxy): tid(29615): Created domain folder 'xm70store:'
20180302T113717.712Z [0x8100003b][mgmt][notice] domain(default): Domain configured successfully.
20180302T113718.795Z [webApplicationProxy][0x8040006b][system][notice] logging target(default-log): tid(111): Logging started.
20180302T113721.374Z [webApplicationProxy][0x00330019][mgmt][error] source-https(webApplicationProxy_Web_HTTPS): tid(111): Operation state transition to up failed
20180302T113721.387Z [webApplicationProxy][0x00350015][mgmt][notice] smtp-server-connection(default): tid(35151): Operational state down
20180302T113721.387Z [webApplicationProxy][0x00350014][mgmt][notice] smtp-server-connection(default): tid(35151): Operational state up
20180302T113721.476Z [webApplicationProxy][0x00350016][mgmt][notice] source-https(webApplicationProxy_Web_HTTPS): tid(111): Service installed on port
20180302T113721.476Z [webApplicationProxy][0x00350014][mgmt][notice] source-https(webApplicationProxy_Web_HTTPS): tid(111): Operational state up
20180302T113721.476Z [webApplicationProxy][0x00350014][mgmt][notice] mpgw(webApplicationProxy): tid(111): Operational state up
20180302T113721.531Z [0x8100003b][mgmt][notice] domain(webApplicationProxy): Domain configured successfully.
20180302T113800.626Z [0x80e0047a][system][error] : tid(176): DataPower QuotaEnforcement task is not responding, restart in progress
20180302T113804.626Z [0x00350014][mgmt][notice] quota-enforcement-server(QuotaEnforcementServer): tid(687): Operational state up
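Note the 'Incorrect number of CPUs. Expected minimum is 2, but have 1.' error near the top of that log - the container only sees a single vCPU ( 'Cores: 1, vCPUs: 1' ), which may well be part of the problem. A quick way to confirm what the worker node is actually offering ( the node name below is an assumption, taken from the k8s-proxy pod names earlier ): -
# On the worker VM itself - how many CPUs does the guest see ?
nproc
# ... and what Kubernetes thinks that node has to offer
kubectl describe node 192.168.1.101 | grep -A 3 'Capacity:'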
This pod was deployed using a Helm chart: -
helm list
NAME REVISION UPDATED STATUS CHART NAMESPACE
davesdatapower 1 Thu Mar 1 14:54:21 2018 DEPLOYED ibm-datapower-dev-1.0.4 default
virtuous-joey 1 Thu Mar 1 18:03:30 2018 DEPLOYED ibm-open-liberty-1.0.0 default
so I removed the release: -
helm delete --purge davesdatapower
release "davesdatapower" deleted
helm list
NAME REVISION UPDATED STATUS CHART NAMESPACE
virtuous-joey 1 Thu Mar 1 18:03:30 2018 DEPLOYED ibm-open-liberty-1.0.0 default
and now all looks good: -
kubectl get pods --namespace default
NAME READY STATUS RESTARTS AGE
virtuous-joey-ibm-open-l-6db4566d6d-zbgf9 1/1 Running 1 17h
I've since redeployed my DataPower pod: -
helm install --name davesdatapower -f dp.yaml ibm-charts/ibm-datapower-dev
NAME: davesdatapower
LAST DEPLOYED: Fri Mar 2 12:07:41 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
davesdatapower-ibm-datapower-dev 10.0.0.96 <nodes> 8443:31954/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
davesdatapower-ibm-datapower-dev 1 1 1 0 1s
==> v1/Secret
NAME TYPE DATA AGE
davesdatapower-ibm-datapower-dev-secret Opaque 2 1s
==> v1/ConfigMap
NAME DATA AGE
davesdatapower-ibm-datapower-dev-config 3 1s
NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services davesdatapower-ibm-datapower-dev)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo https://$NODE_IP:$NODE_PORT
and, as suggested, grabbed the endpoint URL: -
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services davesdatapower-ibm-datapower-dev)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
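and, for completeness, echoed the resulting URL, as per the chart's NOTES: -
echo https://$NODE_IP:$NODE_PORT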
and checked that the pod containing the DataPower instance was up and running: -
kubectl get pods
NAME READY STATUS RESTARTS AGE
davesdatapower-ibm-datapower-dev-57f6cf4c95-9dqwp 1/1 Running 0 2m
virtuous-joey-ibm-open-l-6db4566d6d-zbgf9 1/1 Running 1 18h
and, finally, hit the DP endpoint: -
For the record, DP is acting as a Web Application Proxy (WAP) against IBM.COM :-)
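If you'd rather test from the command line than a browser, a quick smoke test against the NodePort might look like this ( the -k is there because I'm assuming the chart's generated certificate is self-signed ): -
# HEAD request against the DataPower NodePort; -k skips TLS certificate verification
curl -k -I https://$NODE_IP:$NODE_PORT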
I'm following this recipe: -
on IBM developerWorks.
2 comments:
Hello Dave, same problem here, and the same "by patience" resolution.
It seems that helm-api receives many timeout errors like this:
2018-04-08T09:56:52.779Z 'ERROR' 'downloadAndStore Problem getting response from URL https://kubernetes-charts.storage.googleapis.com/kubernetes-dashboard-0.4.0.tgz {"message":"read ETIMEDOUT","stack":"Error: read ETIMEDOUT\\n at exports._errnoException (util.js:1020:11)\\n at TLSWrap.onread (net.js:568:26)","code":"ETIMEDOUT","errno":"ETIMEDOUT","syscall":"read"}'
I have added some other, non-IBM, repositories, so I think there is some latency when a repository refresh occurs.
Claudio, thanks for the comments and the feedback. Cheers, Dave