(Demo) Datacenter-in-a-Box: EPYC vs. TR-Pro

Do you have it set for write cache as well?

Working on this today :slight_smile:

I can also just do the base install and you do stuff if you want? :smiley:


I do the same at home with kolla-ansible and cephadm. It’s really quite insane how much you can get done on one system, though what doesn’t work is upgrading Ceph without another node, since it doesn’t like upgrading a single manager. I also ran into a problem with kops: it just didn’t want to create the K8S cluster, since there is an anti-affinity rule in place and that obviously doesn’t work on a single node. So it might be worthwhile running something like 3 VMs with OpenStack in them, with nesting enabled. Ceph can still be run on the host, with one small VM running another manager.
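For anyone who does go the nested-VM route, a rough sketch of checking and enabling nested KVM on an AMD host (this assumes the kvm_amd module and a modprobe.d file name of my choosing; Intel hosts would use kvm_intel instead):

```bash
# Check whether nested virtualization is currently enabled on an AMD host.
cat /sys/module/kvm_amd/parameters/nested    # "1" or "Y" means enabled

# Enable it persistently (hypothetical file name under modprobe.d).
echo "options kvm_amd nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf

# Reload the module (only safe with no VMs running), or just reboot.
sudo modprobe -r kvm_amd && sudo modprobe kvm_amd
```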

Edit: I just saw that you’re going for performance and load testing, so running OpenStack in a VM would of course be stupid…

@modzilla
From a Ceph POV, I was actually thinking of revising my storage strategy and not having it stood up on the host directly. The rationale here is:

  1. Ceph I/O performance is directly related to available CPU. Rather than juggle things at the host layer for the purposes of demonstrations, I could define flavors and Cinder QoS policies that provide deterministic performance to a virtual Ceph cluster (see the sketch after this list). That way, if I need to set up one or more Ceph clusters for my demos (e.g. to show replication, etc.), I can do so in a predictable way.

  2. Cinder supports multiple backends, and you can put those backends into maintenance as required.

  3. Using instances for the cluster lets Ceph maintenance behave more or less normally.
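A rough sketch of what that flavor/QoS setup could look like with the openstack CLI; the flavor sizes, IOPS limits, and names below are made up for illustration, not taken from the actual DCIB build:

```bash
# Dedicated flavor for the virtual Ceph OSD nodes (sizes are illustrative).
openstack flavor create --vcpus 8 --ram 32768 --disk 50 ceph-osd-node

# Front-end QoS spec capping IOPS so the virtual Ceph cluster sees predictable I/O.
openstack volume qos create --consumer front-end \
  --property read_iops_sec=20000 \
  --property write_iops_sec=10000 \
  ceph-osd-qos

# Tie the QoS spec to a volume type used by the Ceph cluster instances.
openstack volume type create ceph-osd-disk
openstack volume qos associate ceph-osd-qos ceph-osd-disk
```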

This is a circular storage strategy: cinder-lvm maps the raw storage, and cinder-volume provides that storage to the primary Ceph cluster instances. Cinder-ceph then consumes the Ceph cluster’s RBD service to provide volume storage for instances that need fault-tolerant storage. Normally this wouldn’t make sense, but DCIB makes certain assumptions about what is real and what isn’t.
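To make that circular arrangement a bit more concrete, here is roughly how the two backends could be exposed as volume types, and how one of them gets pulled for maintenance; the backend names and the host@backend string are placeholders, not the real DCIB configuration:

```bash
# Volume types pinned to each backend via volume_backend_name
# (these must match the backend names defined in cinder.conf).
openstack volume type create --property volume_backend_name=lvm-local lvm-local
openstack volume type create --property volume_backend_name=ceph-virtual ceph-rbd

# The virtual Ceph cluster instances attach lvm-local volumes, while tenant
# instances that need fault-tolerant storage use the ceph-rbd type.

# Take a backend's volume service out of scheduling for maintenance.
openstack volume service set --disable --disable-reason "maintenance" \
  host1@lvm-local cinder-volume
```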

Regarding K8S, you should be able to modify the anti-affinity hint.

I know that with OpenShift or OKD (if using upstream), you can specify a Nova server group policy of ‘soft-anti-affinity’, build the manifests for deployment, and pass in the server group you created.
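A rough sketch of the Nova side of that; the group and instance names are placeholders, and older openstackclient releases spell the flag --policies instead of --policy:

```bash
# Server group whose members prefer, but don't require, separate hosts.
openstack server group create --policy soft-anti-affinity okd-masters

# Pass the group to each instance via a scheduler hint (use the group's UUID).
openstack server create --flavor m1.large --image fedora-coreos \
  --hint group=<server-group-uuid> okd-master-0
```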

If you are interested:

