You could just use another RHEL box or VM as your gateway. I'm pretty sure there's some automation to set that up with the OKD stuff. I remember it vaguely from a YouTube video…
so I redid some networking to work around the DNS forwarding not behaving consistently (for the rest of the home LAN; forwarder issues). DNS is working better now.
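For what it's worth, a quick way to sanity-check the forwarding from a LAN client (the okd.example.lab names and the forwarder address are placeholders for my actual setup):

```shell
# resolve the cluster records through the normal LAN resolver
# (okd.example.lab is a placeholder -- substitute your cluster domain)
dig +short api.okd.example.lab
dig +short console-openshift-console.apps.okd.example.lab

# query the forwarder directly and compare the answers
# (192.168.7.1 is a guess based on the subnet here)
dig +short api.okd.example.lab @192.168.7.1
```

If the two answers differ or the first pair times out, the forwarder rule is the problem, not the zone itself.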
also put the 10Gb SFP+ card in, so I had to redo the networking mostly because of that… and moved everything over to VLAN IDs. seems happy now.
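The VLAN side of that looks roughly like this via nmcli (the interface name, VLAN ID, and addresses below are made up for illustration):

```shell
# create a tagged sub-interface on the SFP+ NIC
# (ens1f0 and VLAN 20 are examples -- check your device with 'nmcli device')
nmcli con add type vlan con-name ens1f0.20 ifname ens1f0.20 dev ens1f0 id 20

# give it a static address on the VLAN's subnet (addresses are placeholders)
nmcli con mod ens1f0.20 ipv4.method manual \
  ipv4.addresses 192.168.7.2/24 ipv4.gateway 192.168.7.1

# bring it up
nmcli con up ens1f0.20
```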
now I can get back to building the internal mirror next chance I get.
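For the mirror, the usual starting point is `oc adm release mirror` pointed at the OKD payload on quay.io; the registry host and pull-secret path below are placeholders for whatever the mirror setup ends up being:

```shell
# mirror the OKD 4.10 release payload into a local registry
# (mirror.lab.local:5000 and pull-secret.json are placeholders)
LOCAL_REGISTRY=mirror.lab.local:5000

oc adm release mirror \
  -a pull-secret.json \
  --from=quay.io/openshift/okd:4.10.0-0.okd-2022-03-07-131213 \
  --to=${LOCAL_REGISTRY}/okd \
  --to-release-image=${LOCAL_REGISTRY}/okd:4.10.0-0.okd-2022-03-07-131213
```

The command prints `imageContentSources` snippets at the end that go into install-config.yaml so the nodes pull from the mirror instead of quay.io.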
so the API service was not coming up on the control plane nodes, which meant the bootstrap couldn't go away… not sure why.
but I rebooted master0 and the API service finally started on it… and then the bootstrap went red…
so I tried the same tactic on master1, and after a few minutes the API service went green for it too…
trying the same on master2… but it's still red at the moment.
I think I have to get an account with Red Hat to pull some missing components… because the master nodes appear to be trying to reach the Red Hat registry… and that might explain some of the errors:
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
authentication 4.10.0-0.okd-2022-03-07-131213 False True True 58m WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.7.210:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)
baremetal 4.10.0-0.okd-2022-03-07-131213 True False False 57m
cloud-controller-manager 4.10.0-0.okd-2022-03-07-131213 True False False 62m
cloud-credential 4.10.0-0.okd-2022-03-07-131213 True False False 62m
cluster-autoscaler 4.10.0-0.okd-2022-03-07-131213 True False False 57m
config-operator 4.10.0-0.okd-2022-03-07-131213 True False False 58m
console 4.10.0-0.okd-2022-03-07-131213 False True False 20m DeploymentAvailable: 0 replicas available for console deployment...
csi-snapshot-controller 4.10.0-0.okd-2022-03-07-131213 True True False 57m Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods
dns 4.10.0-0.okd-2022-03-07-131213 True True False 57m DNS "default" reports Progressing=True: "Have 4 available DNS pods, want 5."
etcd 4.10.0-0.okd-2022-03-07-131213 True True True 56m InstallerPodContainerWaitingDegraded: Pod "installer-6-master2" on node "master2" container "installer" is waiting since 2022-04-09 19:06:23 +0000 UTC because ContainerCreating...
image-registry 4.10.0-0.okd-2022-03-07-131213 True False False 18m
ingress 4.10.0-0.okd-2022-03-07-131213 True False False 15m
insights 4.10.0-0.okd-2022-03-07-131213 True False False 52m
kube-apiserver 4.10.0-0.okd-2022-03-07-131213 True True True 21m GuardControllerDegraded: Missing operand on node master2...
kube-controller-manager 4.10.0-0.okd-2022-03-07-131213 True True True 54m InstallerPodContainerWaitingDegraded: Pod "installer-8-master1" on node "master1" container "installer" is waiting since 2022-04-09 19:04:53 +0000 UTC because ContainerCreating...
kube-scheduler 4.10.0-0.okd-2022-03-07-131213 True False False 54m
kube-storage-version-migrator 4.10.0-0.okd-2022-03-07-131213 False True False 6m28s KubeStorageVersionMigratorAvailable: Waiting for Deployment
machine-api 4.10.0-0.okd-2022-03-07-131213 True False False 57m
machine-approver 4.10.0-0.okd-2022-03-07-131213 True False False 57m
machine-config True True True 47m Unable to apply 4.10.0-0.okd-2022-03-07-131213: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)
marketplace 4.10.0-0.okd-2022-03-07-131213 True False False 57m
monitoring False True True 42m Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error.
network 4.10.0-0.okd-2022-03-07-131213 True True True 58m DaemonSet "openshift-ovn-kubernetes/ovnkube-master" rollout is not making progress - last change 2022-04-09T19:05:21Z
node-tuning 4.10.0-0.okd-2022-03-07-131213 True False False 57m
openshift-apiserver 4.10.0-0.okd-2022-03-07-131213 True False False 18m
openshift-controller-manager 4.10.0-0.okd-2022-03-07-131213 True False False 53m
openshift-samples 4.10.0-0.okd-2022-03-07-131213 True False False 18m
operator-lifecycle-manager 4.10.0-0.okd-2022-03-07-131213 True False False 57m
operator-lifecycle-manager-catalog 4.10.0-0.okd-2022-03-07-131213 True False False 57m
operator-lifecycle-manager-packageserver 4.10.0-0.okd-2022-03-07-131213 True False False 18m
service-ca 4.10.0-0.okd-2022-03-07-131213 True False False 58m
storage 4.10.0-0.okd-2022-03-07-131213 True False False 58m
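If the masters really are being blocked on pulls from the Red Hat registry, one thing worth trying is merging a real pull secret (downloadable from console.redhat.com after creating an account) into the cluster's existing one; the file names here are hypothetical:

```shell
# dump the current cluster pull secret to a file
oc get secret/pull-secret -n openshift-config \
  --template='{{index .data ".dockerconfigjson" | base64decode}}' \
  > current-pull-secret.json

# deep-merge in the pull secret downloaded from console.redhat.com
jq -s '.[0] * .[1]' current-pull-secret.json redhat-pull-secret.json \
  > merged-pull-secret.json

# push the merged secret back into the cluster
oc set data secret/pull-secret -n openshift-config \
  --from-file=.dockerconfigjson=merged-pull-secret.json
```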
so I guess I'll have to reset again and do some more reading on these errors/issues.
I think if all the errors don't clear up this time… I'll throw my Optane drive in the host and configure it as swap… that might let me give the master nodes more RAM.
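The swap part would be something like this on the host (assuming the Optane shows up as /dev/nvme0n1; the device name is a guess, check `lsblk` first):

```shell
# format the Optane device as swap (DESTROYS any data on it)
mkswap /dev/nvme0n1

# enable it now and confirm
swapon /dev/nvme0n1
swapon --show

# persist across reboots (safer to use the UUID that mkswap prints)
echo '/dev/nvme0n1 none swap defaults 0 0' >> /etc/fstab
```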
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
authentication 4.10.0-0.okd-2022-03-07-131213 False True False 18m WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.7.212:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)
baremetal 4.10.0-0.okd-2022-03-07-131213 True False False 17m
cloud-controller-manager 4.10.0-0.okd-2022-03-07-131213 True False False 23m
cloud-credential 4.10.0-0.okd-2022-03-07-131213 True False False 23m
cluster-autoscaler 4.10.0-0.okd-2022-03-07-131213 True False False 18m
config-operator 4.10.0-0.okd-2022-03-07-131213 True False False 18m
console 4.10.0-0.okd-2022-03-07-131213 True False False 2m31s
csi-snapshot-controller 4.10.0-0.okd-2022-03-07-131213 True False False 18m
dns 4.10.0-0.okd-2022-03-07-131213 True False False 17m
etcd 4.10.0-0.okd-2022-03-07-131213 True False False 16m
image-registry 4.10.0-0.okd-2022-03-07-131213 True False False 7m58s
ingress 4.10.0-0.okd-2022-03-07-131213 True False False 6m27s
insights 4.10.0-0.okd-2022-03-07-131213 True False False 12m
kube-apiserver 4.10.0-0.okd-2022-03-07-131213 True True False 10m NodeInstallerProgressing: 1 nodes are at revision 7; 2 nodes are at revision 9
kube-controller-manager 4.10.0-0.okd-2022-03-07-131213 True False False 15m
kube-scheduler 4.10.0-0.okd-2022-03-07-131213 True False False 14m
kube-storage-version-migrator 4.10.0-0.okd-2022-03-07-131213 True False False 18m
machine-api 4.10.0-0.okd-2022-03-07-131213 True False False 17m
machine-approver 4.10.0-0.okd-2022-03-07-131213 True False False 17m
machine-config True True True 7m47s Unable to apply 4.10.0-0.okd-2022-03-07-131213: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)
marketplace 4.10.0-0.okd-2022-03-07-131213 True False False 17m
monitoring 4.10.0-0.okd-2022-03-07-131213 True False False 4m30s
network 4.10.0-0.okd-2022-03-07-131213 True False False 19m
node-tuning 4.10.0-0.okd-2022-03-07-131213 True False False 17m
openshift-apiserver 4.10.0-0.okd-2022-03-07-131213 True False False 10m
openshift-controller-manager 4.10.0-0.okd-2022-03-07-131213 True False False 16m
openshift-samples 4.10.0-0.okd-2022-03-07-131213 True False False 11m
operator-lifecycle-manager 4.10.0-0.okd-2022-03-07-131213 True False False 18m
operator-lifecycle-manager-catalog 4.10.0-0.okd-2022-03-07-131213 True False False 18m
operator-lifecycle-manager-packageserver 4.10.0-0.okd-2022-03-07-131213 True False False 12m
service-ca 4.10.0-0.okd-2022-03-07-131213 True False False 18m
storage 4.10.0-0.okd-2022-03-07-131213 True False False 18m
well, the only one that didn't resolve itself was:
machine-config True True True 11m Unable to apply 4.10.0-0.okd-2022-03-07-131213: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)
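Before resetting again, the machine-config-daemon logs are usually where the real reason for a degraded master pool shows up; these are stock oc commands against the cluster:

```shell
# show the pool status and its condition messages
oc get mcp master
oc describe mcp master | grep -A5 'Conditions'

# the per-node machine-config-daemon logs carry the actual error
oc get pods -n openshift-machine-config-operator -o wide
oc logs -n openshift-machine-config-operator \
  -l k8s-app=machine-config-daemon -c machine-config-daemon --tail=50
```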