left).
fatal: [localhost]: FAILED! => changed=true
attempts: 100
cmd: kubectl -n kube-system get daemonset auth-pdp -o=custom-columns=A:.status.numberAvailable,B:.status.desiredNumberScheduled --no-headers=true | tr -s " " | awk '$1 == $2 {print "READY"}'
delta: '0:00:01.308879'
end: '2018-11-22 17:00:56.092611'
rc: 0
start: '2018-11-22 17:00:54.783732'
stderr: ''
stderr_lines:
stdout: ''
stdout_lines:
along with loads of these: -
…
FAILED - RETRYING: Waiting for auth-pdp to start (100 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (99 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (98 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (97 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (96 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (95 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (94 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (93 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (92 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (91 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (90 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (89 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (88 retries left).
…
FAILED - RETRYING: Waiting for auth-pdp to start (9 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (8 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (7 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (6 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (5 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (4 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (3 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (2 retries left).
FAILED - RETRYING: Waiting for auth-pdp to start (1 retries left).
...
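For context, the task that keeps retrying is just shelling out to kubectl and comparing the DaemonSet's available count with its desired count; it only succeeds once the two match. Reproducing the check by hand (this is the same cmd shown in the failure above) looks like this: -

# Prints READY only when numberAvailable == desiredNumberScheduled for auth-pdp
kubectl -n kube-system get daemonset auth-pdp \
  -o=custom-columns=A:.status.numberAvailable,B:.status.desiredNumberScheduled \
  --no-headers=true | tr -s " " | awk '$1 == $2 {print "READY"}'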
I suspected that I was hitting a resource constraint, in terms of CPU or RAM.
Looking here: -
My wonder team suggested a number of debugging commands, all of which I ran from the boot/master node: -
docker ps -a | grep pdp
- This showed NOTHING running
kubectl get ds -n kube-system
kubectl describe node 9.20.194.53
kubectl describe ds auth-pdp -n kube-system
the last of which threw this up: -
FailedPlacement - Failed to place pod on 9.20.194.53: Node didn't have enough resource
which did confirm that it WAS a resource constraint.
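That makes sense given what auth-pdp asks for: between its two containers the DaemonSet requests 600m of CPU and 768Mi of memory (as the describe output further down shows), on a master node that was already carrying a lot of other CPU-hungry pods. A couple of extra checks that would have surfaced this sooner, nothing ICP-specific, just stock kubectl run against my master node's IP: -

# Compare what the master node has with what's already been requested on it
kubectl describe node 9.20.194.53 | grep -A 7 "Allocated resources"
# Recent scheduler/controller events mentioning the auth-pdp DaemonSet
kubectl get events -n kube-system --sort-by=.lastTimestamp | grep -i auth-pdp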
My ICP cluster has four nodes: -
- Boot/Master
- Management
- Worker
- Proxy
as it’s just a test environment.
The Boot/Master node ONLY had 2 CPU cores and 16 GB RAM.
I dynamically increased the CPU cores from 2 to 8, which is the recommended minimum number, as per this: -
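Before kicking off the reinstall, it's worth confirming that the guest OS actually sees the extra cores (and the existing RAM); something like: -

# Confirm the hypervisor change is visible inside the VM
nproc      # should now report 8
free -g    # total memory in GiB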
and uninstalled: -
cd /opt/ibm-cloud-private-3.1.1/cluster
docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.1-ee uninstall
and then reinstalled: -
docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.1-ee install
After an hour or so, this finished A-OK, with: -
PLAY [Uploading images and charts of archive addons] ***************************
TASK [archive-addon : include_tasks] *******************************************
PLAY RECAP *********************************************************************
9.20.194.53 : ok=157 changed=95 unreachable=0 failed=0
9.20.194.58 : ok=102 changed=57 unreachable=0 failed=0
9.20.194.61 : ok=167 changed=107 unreachable=0 failed=0
9.20.194.95 : ok=101 changed=56 unreachable=0 failed=0
localhost : ok=248 changed=155 unreachable=0 failed=0
POST DEPLOY MESSAGE ************************************************************
The Dashboard URL: https://9.20.194.53:8443, default username/password is admin/admin
Playbook run took 0 days, 0 hours, 54 minutes, 15 seconds
For reference, the logs are located here: -
ls -altrc /opt/ibm-cloud-private-3.1.1/cluster/logs
total 368
-rw-r--r-- 1 root root 180108 Nov 23 16:58 install.log.20181123141759
-rw-r--r-- 1 root root 18130 Nov 26 14:34 uninstall.log.20181126143338
drwxr-xr-x 3 root root 125 Nov 26 14:45 .
drwxr-xr-x 2 root root 80 Nov 26 14:46 .detail
drwxr-xr-x 9 root root 184 Nov 26 15:35 ..
-rw-r--r-- 1 root root 175228 Nov 26 15:40 install.log.20181126144552
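If you'd rather watch a run as it happens than wait for the recap, tailing the newest install log works; a rough one-liner (assuming the same cluster directory as above): -

tail -f $(ls -t /opt/ibm-cloud-private-3.1.1/cluster/logs/install.log.* | head -1)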
Now that things are working, the debug commands are also looking good: -
docker ps -a | grep pdp
5474a1ae3020 5a7e7a8abb4b "bash -c ./startiam.…" About an hour ago Up About an hour k8s_auth-pdp_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
85dcec4493eb 769824455743 "audit-entrypoint.sh" About an hour ago Up About an hour k8s_icp-audit-service_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
d79e794efbf1 493b365fcc13 "sh -c 'until curl -…" About an hour ago Exited (0) About an hour ago k8s_init-pap_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
00e86d7fcc18 493b365fcc13 "sh -c 'until curl -…" About an hour ago Exited (0) About an hour ago k8s_init-token-service_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
1295d90a2fca 493b365fcc13 "sh -c 'until curl -…" About an hour ago Exited (0) About an hour ago k8s_init-identity-manager_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
7609629cd307 493b365fcc13 "sh -c 'until curl -…" About an hour ago Exited (0) About an hour ago k8s_init-identity-provider_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
44523d1ef769 493b365fcc13 "sh -c 'until curl -…" About an hour ago Exited (0) About an hour ago k8s_init-auth-service_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
e45b13e73bed mycluster.icp:8500/ibmcom/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_auth-pdp-jlb6f_kube-system_fc825fe7-f18d-11e8-b6fe-00000914c235_0
kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
audit-logging-fluentd-ds 4 4 4 4 4
auth-apikeys 1 1 1 1 1 master=true 51m
auth-idp 1 1 1 1 1 master=true 51m
auth-pap 1 1 1 1 1 master=true 51m
auth-pdp 1 1 1 1 1 master=true 51m
calico-node 4 4 4 4 4
catalog-ui 1 1 1 1 1 master=true 41m
icp-management-ingress 1 1 1 1 1 master=true 51m
kube-dns 1 1 1 1 1 master=true 54m
logging-elk-filebeat-ds 4 4 4 4 4
metering-reader 4 4 4 4 4
monitoring-prometheus-nodeexporter 4 4 4 4 4
nginx-ingress-controller 1 1 1 1 1 proxy=true 53m
nvidia-device-plugin 4 4 4 4 4
platform-ui 1 1 1 1 1 master=true 40m
service-catalog-apiserver 1 1 1 1 1 master=true 53m
unified-router 1 1 1 1 1 master=true 40m
kubectl describe ds auth-pdp -n kube-system
Name: auth-pdp
Selector: component=auth-pdp,k8s-app=auth-pdp,release=auth-pdp
Node-Selector: master=true
Labels: app=auth-pdp
chart=auth-pdp-3.1.1
component=auth-pdp
heritage=Tiller
release=auth-pdp
Annotations:
Desired Number of Nodes Scheduled: 1
Current Number of Nodes Scheduled: 1
Number of Nodes Scheduled with Up-to-date Pods: 1
Number of Nodes Scheduled with Available Pods: 1
Number of Nodes Misscheduled: 0
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: component=auth-pdp
k8s-app=auth-pdp
release=auth-pdp
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Init Containers:
init-auth-service:
Image: mycluster.icp:8500/ibmcom/icp-platform-auth:3.1.1
Port:
Host Port:
Command:
sh
-c
until curl -k -i -fsS https://platform-auth-service:9443/oidc/endpoint/OP/.well-known/openid-configuration | grep "200 OK"; do sleep 3; done;
Environment:
Mounts:
init-identity-provider:
Image: mycluster.icp:8500/ibmcom/icp-platform-auth:3.1.1
Port:
Host Port:
Command:
sh
-c
until curl --cacert /certs/ca.crt -i -fsS https://platform-identity-provider:4300 | grep "200 OK"; do sleep 3; done;
Environment:
Mounts:
/certs from cluster-ca (rw)
init-identity-manager:
Image: mycluster.icp:8500/ibmcom/icp-platform-auth:3.1.1
Port:
Host Port:
Command:
sh
-c
until curl --cacert /certs/ca.crt -i -fsS https://platform-identity-management:4500 | grep "200 OK"; do sleep 3; done;
Environment:
Mounts:
/certs from cluster-ca (rw)
init-token-service:
Image: mycluster.icp:8500/ibmcom/icp-platform-auth:3.1.1
Port:
Host Port:
Command:
sh
-c
until curl -k -i -fsS https://iam-token-service:10443/oidc/keys | grep "200 OK"; do sleep 3; done;
Environment:
Mounts:
init-pap:
Image: mycluster.icp:8500/ibmcom/icp-platform-auth:3.1.1
Port:
Host Port:
Command:
sh
-c
until curl --cacert /certs/ca.crt -i -fsS https://iam-pap:39001/v1/health | grep "200 OK"; do sleep 3; done;
Environment:
Mounts:
/certs from cluster-ca (rw)
Containers:
icp-audit-service:
Image: mycluster.icp:8500/ibmcom/icp-audit-service:3.1.1
Port:
Host Port:
Limits:
cpu: 200m
memory: 512Mi
Requests:
cpu: 100m
memory: 256Mi
Environment:
AUDIT_DIR: /app/logs/audit
Mounts:
/app/logs/audit from shared (rw)
/etc/logrotate.conf from logrotate-conf (rw)
/etc/logrotate.d/audit from logrotate (rw)
/run/systemd/journal from journal (rw)
auth-pdp:
Image: mycluster.icp:8500/ibmcom/iam-policy-decision:3.1.1
Port:
Host Port:
Requests:
cpu: 500m
memory: 512Mi
Readiness: http-get http://:7998/v1/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
DEFAULT_ADMIN_USER: Optional: false
AUDIT_ENABLED: Optional: false
DEFAULT_ADMIN_PASSWORD: Optional: false
POD_NAME: (v1:metadata.name)
POD_NAMESPACE: (v1:metadata.namespace)
CLUSTER_NAME: Optional: false
MONGO_DB: platform-db
MONGO_COLLECTION: iam
MONGO_USERNAME: Optional: false
MONGO_PASSWORD: Optional: false
MONGO_HOST: mongodb
MONGO_PORT: 27017
MONGO_AUTHSOURCE: admin
CF_DB_NAME: security-data
DB_NAME: platform-db
CAMS_PDP_URL: http://iam-pdp:7998
IAM_TOKEN_SERVICE_URL: https://iam-token-service:10443
IDENTITY_PROVIDER_URL: https://platform-identity-provider:4300
IAM_PAP_URL: https://iam-pap:39001
DEFAULT_TTL: Optional: false
Mounts:
/app/logs/audit from shared (rw)
/certs from cluster-ca (rw)
/certs/mongodb-ca from mongodb-ca-cert (rw)
/certs/mongodb-client from mongodb-client-cert (rw)
Volumes:
mongodb-ca-cert:
Type: Secret (a volume populated by a Secret)
SecretName: cluster-ca-cert
Optional: false
cluster-ca:
Type: Secret (a volume populated by a Secret)
SecretName: cluster-ca-cert
Optional: false
journal:
Type: HostPath (bare host directory volume)
Path: /run/systemd/journal
HostPathType:
shared:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
logrotate:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: auth-pdp
Optional: false
logrotate-conf:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: auth-pdp
Optional: false
mongodb-client-cert:
Type: Secret (a volume populated by a Secret)
SecretName: icp-mongodb-client-cert
Optional: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 52m daemonset-controller Created pod: auth-pdp-jlb6f
kubectl describe node 9.20.194.53
Name: 9.20.194.53
Roles: etcd,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
etcd=true
kubernetes.io/hostname=9.20.194.53
master=true
node-role.kubernetes.io/etcd=true
node-role.kubernetes.io/master=true
role=master
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Mon, 26 Nov 2018 14:48:44 +0000
Taints: dedicated=infra:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Mon, 26 Nov 2018 16:07:34 +0000 Mon, 26 Nov 2018 14:48:44 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Mon, 26 Nov 2018 16:07:34 +0000 Mon, 26 Nov 2018 14:48:44 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 26 Nov 2018 16:07:34 +0000 Mon, 26 Nov 2018 14:48:44 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 26 Nov 2018 16:07:34 +0000 Mon, 26 Nov 2018 14:48:44 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 26 Nov 2018 16:07:34 +0000 Mon, 26 Nov 2018 15:11:48 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 9.20.194.53
Hostname: 9.20.194.53
Capacity:
cpu: 8
ephemeral-storage: 249436164Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16424812Ki
pods: 80
Allocatable:
cpu: 8
ephemeral-storage: 249333764Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16322412Ki
pods: 80
System Info:
Machine ID: 428e44fb1ec74efba5d4e3ca11fa2ac9
System UUID: 9E82ABA4-CABA-4645-B285-409E35FDF986
Boot ID: 321104d1-77ca-48ee-af9d-8f8311a749a5
Kernel Version: 4.15.0-38-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.3.1
Kubelet Version: v1.11.3+icp-ee
Kube-Proxy Version: v1.11.3+icp-ee
Non-terminated Pods: (35 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
cert-manager ibm-cert-manager-cert-manager-7d656f5dd5-c7lqt 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system audit-logging-fluentd-ds-lsjs2 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system auth-apikeys-qbg4k 200m (2%) 1 (12%) 300Mi (1%) 1Gi (6%)
kube-system auth-idp-rl8dj 300m (3%) 3200m (40%) 768Mi (4%) 3584Mi (22%)
kube-system auth-pap-s76p7 150m (1%) 1200m (15%) 456Mi (2%) 1536Mi (9%)
kube-system auth-pdp-jlb6f 600m (7%) 200m (2%) 768Mi (4%) 512Mi (3%)
kube-system calico-kube-controllers-d775694f-pzph9 250m (3%) 0 (0%) 100Mi (0%) 0 (0%)
kube-system calico-node-5xwb6 300m (3%) 0 (0%) 150Mi (0%) 0 (0%)
kube-system catalog-ui-vjj45 300m (3%) 300m (3%) 300Mi (1%) 300Mi (1%)
kube-system heapster-569fdfd65-ndvxh 20m (0%) 0 (0%) 64Mi (0%) 0 (0%)
kube-system helm-api-6c9756484f-ql4vl 350m (4%) 550m (6%) 556Mi (3%) 656Mi (4%)
kube-system helm-repo-5c8fcc8899-kd87g 150m (1%) 200m (2%) 640Mi (4%) 640Mi (4%)
kube-system ibmcloud-image-enforcement-c558c6c95-xxfbx 128m (1%) 256m (3%) 128Mi (0%) 256Mi (1%)
kube-system icp-management-ingress-bdgt7 200m (2%) 0 (0%) 256Mi (1%) 0 (0%)
kube-system icp-mongodb-0 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system image-manager-0 110m (1%) 0 (0%) 192Mi (1%) 0 (0%)
kube-system k8s-etcd-9.20.194.53 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system k8s-master-9.20.194.53 5m (0%) 0 (0%) 10Mi (0%) 0 (0%)
kube-system k8s-proxy-9.20.194.53 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-dns-n5qfk 100m (1%) 0 (0%) 70Mi (0%) 0 (0%)
kube-system logging-elk-filebeat-ds-z6xpm 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system mariadb-0 500m (6%) 1 (12%) 128Mi (0%) 512Mi (3%)
kube-system metering-reader-vslkf 250m (3%) 0 (0%) 512Mi (3%) 0 (0%)
kube-system mgmt-repo-5cb9f9dc7b-thc28 150m (1%) 200m (2%) 640Mi (4%) 640Mi (4%)
kube-system monitoring-prometheus-nodeexporter-xh787 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system nvidia-device-plugin-vm9m8 150m (1%) 0 (0%) 0 (0%) 0 (0%)
kube-system platform-api-86dff555db-llbz2 100m (1%) 100m (1%) 128Mi (0%) 512Mi (3%)
kube-system platform-deploy-749fc56fb7-cmjql 100m (1%) 100m (1%) 128Mi (0%) 512Mi (3%)
kube-system platform-ui-qtvwm 300m (3%) 300m (3%) 256Mi (1%) 256Mi (1%)
kube-system secret-watcher-7994f75f9b-l4ffh 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system service-catalog-apiserver-79nmn 100m (1%) 100m (1%) 20Mi (0%) 200Mi (1%)
kube-system service-catalog-controller-manager-9c7bcf586-6kp2c 100m (1%) 100m (1%) 20Mi (0%) 200Mi (1%)
kube-system tiller-deploy-5677cc5dfb-m5k9h 100m (1%) 0 (0%) 128Mi (0%) 0 (0%)
kube-system unified-router-zf4fs 20m (0%) 0 (0%) 64Mi (0%) 0 (0%)
kube-system web-terminal-55c549d48d-bn98q 10m (0%) 100m (1%) 64Mi (0%) 512Mi (3%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 5043m (63%) 8906m (111%)
memory 6846Mi (42%) 11852Mi (74%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeReady 55m kubelet, 9.20.194.53 Node 9.20.194.53 status is now: NodeReady