Help with distributed systems calculations

So I am working on a distributed system called Dusk, a network management system for OpenWRT devices. I have thrown out the idea of setting a single node as the coordinator and instead gone with a decentralized architecture based around TCP. The idea is that the network shouldn't need a centralized controller: each node talks to the nodes around it, so data and configs are passed through the network in a distributed manner.

In the network, each node has the following behavior:

  • When a node receives a change from a peer that causes it to make a write, it pushes the change out to k of its peers. If it receives a change it already has (i.e. it is up to date), it does not push the change on to its neighbors. This is done to prevent infinite chatter and to keep things simple.
  • Periodically, the node picks n random nodes from the network and measures latency to every node in the n sample and in its current k peers. It then keeps the k lowest-latency nodes as its peer set (replacement).
  • When one of a node's known peers is unavailable, the node caches the changes locally and sends them out when the peer comes back up. (A rough sketch of this loop is below.)
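
To make that behavior concrete, here is a minimal Python sketch of the per-node loop. It is only an illustration of the rules above, not the real Dusk code, and `push_to_peer` / `measure_latency` are placeholder stubs I made up:

```python
import random

def push_to_peer(peer, key, version, payload):
    """Placeholder for Dusk's real TCP send; may raise ConnectionError."""

def measure_latency(peer):
    """Placeholder for a real RTT probe; random here just so the sketch runs."""
    return random.random()

class DuskNode:
    def __init__(self, node_id, k, all_nodes):
        self.node_id = node_id
        self.k = k                                  # gossip fanout
        self.all_nodes = [n for n in all_nodes if n != node_id]
        self.peers = random.sample(self.all_nodes, k)
        self.versions = {}                          # key -> last version written
        self.outbox = {}                            # down peer -> queued changes

    def on_change(self, key, version, payload):
        """Apply a change from a peer; forward to k peers only if it caused a write."""
        if self.versions.get(key, -1) >= version:
            return                                  # already up to date: stay quiet
        self.versions[key] = version                # the local write
        for peer in self.peers:
            try:
                push_to_peer(peer, key, version, payload)
            except ConnectionError:
                # peer is unreachable: cache and replay when it comes back
                self.outbox.setdefault(peer, []).append((key, version, payload))

    def refresh_peers(self, n):
        """Sample n random nodes, then keep the k lowest-latency nodes out of
        the sample plus the current peers (replacement)."""
        candidates = set(random.sample(self.all_nodes, n)) | set(self.peers)
        self.peers = sorted(candidates, key=measure_latency)[: self.k]
```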

Here is where I am struggling: I want to find a sane value for k that keeps the network reliable. My understanding is that it should be at least ln(N), where N is the number of nodes, but I don't know how correct that is. I also know that a higher k means data propagates faster. Since I want the network to be reliable, I think it would be smart to target a higher value of k.
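
For context, this is the back-of-the-envelope model I have been trying to reason from (the assumptions are mine and may be wrong): if every node ends up pushing a change once to k uniformly random peers, a given node is missed by any single forward with probability 1 - 1/N, so across roughly N·k forwards it is never picked with probability about (1 - 1/N)^(N·k) ≈ e^(-k), and the expected number of nodes that never hear about the change is about N·e^(-k). Setting that to a target ε gives k ≈ ln(N/ε), which seems to be where the ln(N) rule of thumb comes from:

```python
import math

def fanout_for_target(num_nodes: int, expected_missed: float = 0.01) -> int:
    """Back-of-the-envelope fanout (my assumption, not an established rule):
    with push gossip to k uniformly random peers, the expected number of
    nodes that never receive a change is roughly N * e^-k, so solving
    N * e^-k <= expected_missed gives k >= ln(N / expected_missed)."""
    return math.ceil(math.log(num_nodes / expected_missed))

print(fanout_for_target(200))        # ln(200 / 0.01) ~= 9.9 -> 10
print(fanout_for_target(200, 1.0))   # tolerate ~1 missed node on average -> 6
```

That lands at ln(N) plus a small margin, but I don't know whether the model still holds once the latency-based peer selection makes the peer graph non-random.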

How would I calculate the ideal value of k? I would rather not find it through experimentation, but I have been searching and I can't find much information about the actual calculations. What is the simplest way to model this network in math?
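
For what it's worth, this is the crude abstraction I have in mind when I say "model the network": a single change starts at one node and every informed node pushes it once to k uniformly random peers (ignoring the latency-based peer selection and churn). I'm hoping there is a closed-form way to reason about coverage rather than just running something like this and eyeballing the output:

```python
import random

def simulate_push_gossip(num_nodes, k, trials=1000):
    """Fraction of trials in which one change starting at node 0 reaches
    every node, assuming each informed node pushes once to k uniformly
    random peers. A deliberately crude model of the propagation."""
    full_coverage = 0
    for _ in range(trials):
        informed = {0}
        frontier = [0]
        while frontier:
            nxt = []
            for node in frontier:
                for peer in random.sample(range(num_nodes), k):
                    if peer not in informed:
                        informed.add(peer)
                        nxt.append(peer)
            frontier = nxt
        if len(informed) == num_nodes:
            full_coverage += 1
    return full_coverage / trials

for k in (2, 4, 6, 8, 10):
    print(k, simulate_push_gossip(200, k))
```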