Thank you for your patience and also for summarizing the state of our discussion from your point of view so far - very helpful!
I didn’t really admit to the “complicated” bit. The formula may look unwieldy, but the principles are simple: rewards are roughly proportional to stake, capped at 1/k, with pledge influence for Sybil protection.
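To make those three principles concrete, here is a rough sketch in Haskell. This is only an illustration of the shape - not the exact formula from our paper - and the pledge-influence parameter a is purely illustrative:

```haskell
-- Illustrative sketch only (not the exact formula from the paper): rewards
-- grow with pool stake, are capped at the saturation point z0 = 1/k, and
-- get an extra boost from the operator's pledge.
rewardSketch
  :: Double  -- R, total rewards available
  -> Int     -- k, desired number of pools
  -> Double  -- a, pledge influence factor (illustrative parameter)
  -> Double  -- sigma, relative stake delegated to the pool
  -> Double  -- lambda, relative stake pledged by the operator
  -> Double
rewardSketch r k a sigma lambda =
  let z0      = 1 / fromIntegral k  -- saturation point
      sigma'  = min sigma z0        -- stake beyond saturation earns nothing extra
      lambda' = min lambda z0
  in  r * (sigma' + a * lambda') / (1 + a)  -- pledge term rewards skin in the game
```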
You will probably reply that your proposal has the same properties while being simpler, but I doubt the “same properties” part - we’ll get to that.
As for “unfair”, I still maintain that this is a necessary evil, the price to pay for Sybil protection, at least until it can be replaced by something else like a “reputation system”.
The indicator people see is in their best interest to follow: it is the utility they can expect if everybody acts rationally. We have also run simulations of the dynamics of our system, in which players simply tried to maximize their rewards (not some “indicator”), and those simulations ended in the desired configuration of k equally sized pools.
Furthermore, coming back to your “cartel attack” more directly, you assume that it would take a while for a small, yet attractive new pool to fill up. This is not the case, I think. There are a lot of “whales” in the ecosystem that can saturate a pool by themselves (IOHK being one of them). Such whales could jump on a new, better opportunity immediately.
People still have an incentive to delegate to operators they know and trust, because pool performance is also very important for rewards. Furthermore, if they really trust a pool, they can become co-owners of that pool and help with the pledge.
I had missed your “DDoS attack” angle until now. But in the scenario you describe, where an attacker with a lot of stake oversaturates a pool on purpose and delegators then leave that pool, this is actually the desired behavior: You must keep in mind that the most important goal of the incentive scheme is to guarantee decentralization. If people did not leave the attacked pool, that pool would become too large and gain too much influence over the system, which is exactly what we want to avoid. Your “protection” would encourage people to stay in that pool, allowing it to become oversaturated.
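To spell out the mechanism in a deliberately simplified sketch (only the saturation cap, ignoring pledge, costs and margins): the reward per unit of delegated stake is flat up to the saturation point and falls once the pool grows beyond it, which is exactly what pushes rational delegators out of an oversaturated pool.

```haskell
-- Illustrative only: reward per unit of delegated stake under a pure
-- saturation cap, ignoring pledge, costs and margins. Once sigma exceeds
-- z0 = 1/k, the per-unit reward falls, so rational delegators leave.
perUnitReward :: Double -> Int -> Double -> Double
perUnitReward r k sigma = r * min sigma z0 / sigma
  where z0 = 1 / fromIntegral k

-- With R = 1 and k = 2:
--   perUnitReward 1 2 0.4  ->  1.0    (below saturation)
--   perUnitReward 1 2 0.5  ->  1.0    (exactly saturated)
--   perUnitReward 1 2 0.7  -> ~0.71   (oversaturated: rewards are diluted)
```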
You are right, I think, that such an attacker could harm the attacked pool, but I think that is the lesser of two evils. And - as you pointed out yourself - such an attack would be really expensive: orders of magnitude more expensive than a Sybil attack, and orders of magnitude less efficient.
Now let’s look at the scheme you propose. Let me first emphasize, as I did before, that “Nash equilibrium with k pools” is not enough. We need k pools of equal size. I claim that your scheme does not have that property.
(Being pedantic, I could also point out that the proof obligation lies with you. We have proven that our scheme has the desired properties. If you legitimately want to replace our scheme with a different one, you have to prove that your scheme works. We don’t have to prove that it does not.)
But instead of being nasty, let me try to explain why I believe your scheme does not have the desired properties. Let’s look at a very simple case where only two players A and B have enough stake to make running a pool feasible (because costs are so very high), and let’s assume that k=2. I realize, of course, that those are extremely unrealistic assumptions. But if your scheme doesn’t work under those conditions, then it is up to you to clarify (and prove) precisely under which conditions your claims hold - something we did for our scheme.
In my unrealistic scenario, let’s further assume that A has stake 0.1, B has stake 0.2, and b_0 is 3. Then for A, we have
f(sigma, 0.1) = R * min(sigma, 0.5, 0.3) = R * min(sigma, 0.3),
and for B we have
f(sigma, 0.2) = R * min(sigma, 0.5, 0.6) = R * min(sigma, 0.5).
If sigma=0.5 were a Nash equilibrium, as you claim, then the rewards for delegators in Pool A and the rewards for delegators in Pool B would be in the ratio 0.3 : 0.5 = 3 : 5 (ignoring operational costs), so people in Pool A would have an incentive to move to Pool B. So the situation is not a Nash equilibrium. People in that scenario would migrate to Pool B until rewards equal out, leaving Pool B oversaturated and Pool A undersaturated.
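For anyone who wants to check the numbers, here is a tiny Haskell sketch of this example. The function proposedReward is my reading of your scheme, R * min(sigma, 1/k, b_0 * pledge) - please correct me if I have misread your proposal:

```haskell
-- My reading of the proposed scheme: pool rewards are R * min(sigma, 1/k, b0 * pledge).
-- Please correct me if that is not what you have in mind.
proposedReward :: Double -> Int -> Double -> Double -> Double -> Double
proposedReward r k b0 pledge sigma =
  r * minimum [sigma, 1 / fromIntegral k, b0 * pledge]

main :: IO ()
main = do
  let r     = 1.0   -- normalize total rewards to 1
      k     = 2
      b0    = 3.0
      sigma = 0.5   -- both pools exactly at the claimed equilibrium size
      rewardA = proposedReward r k b0 0.1 sigma   -- A's pledge is 0.1
      rewardB = proposedReward r k b0 0.2 sigma   -- B's pledge is 0.2
  -- Both pools hold the same total stake (0.5), so per-delegator rewards
  -- are in the ratio rewardA : rewardB = 0.3 : 0.5, and delegators in A
  -- are better off moving to B.
  putStrLn $ "Pool A reward: " ++ show rewardA   -- ~0.3 * R
  putStrLn $ "Pool B reward: " ++ show rewardB   --  0.5 * R
```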
Again, this is totally unrealistic, but my gut feeling tells me that something similar would happen in more realistic scenarios. It’s up to you to prove me wrong. 
Let me summarize my point on a “meta” level: We have proven that all Nash equilibria in our scheme (under conditions that have been clearly explained and written down) are of the desired form, k pools of equal size. We have also proven that our scheme is Sybil-resilient in a precisely formulated sense.
It is of course legitimate to point out attacks we have not considered (like your “cartel” and “DDoS” attacks). I believe I have explained why our system works against those two as well, even though they were not part of our core analysis.
And you are free to propose an alternative scheme at any time. I am not saying that our scheme is the best possible one.
But it is your obligation to prove that your scheme is at least as good as ours (by proving it has the desired Nash equilibria), and only then does it make sense to discuss possible advantages it might have over ours.
This is not to discourage you. Again, I am not saying that our scheme is the best possible one. I am just pointing out that the bar is quite high: It’s not enough for an alternative scheme to sound good. It must withstand the same rigorous scrutiny that we applied to our own scheme - at least before we can seriously consider adopting it.
I am also not saying that you have to publish a paper in a mathematical journal before I am willing to listen to you. Of course not - otherwise I wouldn’t have spent so much time discussing this with you. Informal discussion like this is fine. But ultimately, even when a scheme looks good informally (and I think yours does not, for the reasons explained above), there must be a formal proof.