If growth continues unabated, we will reach the point where DNS is so complex that no one can grasp it in its entirety. This would, in effect, make it impossible to write any new DNS implementations.
The presentation also explored the reasons behind this growth, focusing on DNS-specific mechanisms. In short, we have a very active open source implementation community that provides little pushback on new features. We dutifully implement most new drafts, even when doing so is very hard work. Yet force and counterforce are required for any system to be in equilibrium.
The pressure not to implement new protocols is traditionally supplied by vendors wary of the cost of implementing them, a counterforce that is far weaker among open source implementations.
Similarly, operational requirements usually provide a "pull" for new features. But the operational community is also typically aware of how much work it would be to deploy and maintain all those features in production, so it can push back as well. Oddly enough, the (sizeable) non-root DNS operational community is largely absent from IETF discussions, so it neither supplies many requirements for standardization efforts to consider nor pushes back on specific initiatives.
So what is the force behind the huge growth in RFCs? It turns out to be the standardization community itself! Most of the activity behind Internet-Drafts, which go on to become RFCs, is not driven by any operational requirement but rather stems from standardizers’ feeling that things should be “improved”.