Lost in the debate surrounding net neutrality is the assumption that all end hosts play nice and ``fair share'' the limited network bandwidth. Traditionally, fair sharing has been enforced by a combination of forces: there is broad consensus on the TCP congestion control algorithm; operating system vendors implement that algorithm in their kernels, where ordinary users cannot change it; applications use TCP, and open only a small number of TCP connections; and, perhaps most importantly, network operators can identify abusive applications and traffic-shape them. Unfortunately, these protective forces are weakening today. The networking research community has proposed at least nine different congestion control algorithms, some more aggressive than others. Popular kernels such as Linux implement all of these algorithms and, furthermore, allow users with sufficient privileges to replace the congestion control algorithm arbitrarily via loadable kernel modules. Many P2P applications open four or more connections between each host pair to improve their performance. Finally, most net-neutrality legislative proposals prohibit network operators from traffic-shaping applications, for good reasons.

Therefore, if net-neutrality laws are not to cause congestion collapse on the Internet, they must also define proper end-host behavior, i.e., ``fair share,'' and allow network operators to detect violations of fair share and punish them. This presents three immediate difficulties. First, TCP provides only per-flow fairness, so to achieve fair share, one application should not use more flows than another; but a law limiting the number of TCP connections an application may open is draconian, to say the least. Second, the TCP congestion control algorithm is the key to per-flow fairness, yet the algorithm itself is under active research, and legislation in this area could stymie innovation. Finally, it is difficult for routers to detect fair-share violations both cheaply and accurately, and any approximation technique risks penalizing the innocent.

This paper sketches a set of potential ways out of this dilemma. A central theme of the proposal is that there should be a code of conduct for fair sharing, covering both the number of TCP connections used and the TCP congestion control algorithms used, and that hosts and applications that follow the code of conduct should be treated differently from those that do not. Those that follow the code of conduct receive a guarantee from network providers that their traffic always receives its fair share. Those that do not wish to be constrained by the code of conduct accept that their traffic will be shaped from time to time, as the network provider deems appropriate. New technologies need to be developed to monitor the TCP implementations of end hosts and applications, so that those who pledge to follow the code of conduct can be verified.
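As a concrete illustration of how easily the kernel-enforced consensus mentioned above is bypassed today, the sketch below uses the standard Linux per-socket interface for selecting a congestion control algorithm. It is a minimal sketch, not part of our proposal: the algorithm name \texttt{vegas} is illustrative, and the call succeeds for any non-restricted algorithm already loaded into the kernel, while a privileged user can load additional ones as modules.

\begin{verbatim}
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Request a specific congestion control algorithm for this socket.
       "vegas" is illustrative; any algorithm listed in
       /proc/sys/net/ipv4/tcp_available_congestion_control is accepted,
       and no special privilege is required for non-restricted ones. */
    const char *algo = "vegas";
    if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION,
                   algo, strlen(algo)) < 0) {
        perror("setsockopt(TCP_CONGESTION)");
        return 1;
    }

    /* Read back the algorithm actually in effect. */
    char buf[16];
    socklen_t len = sizeof(buf);
    if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, buf, &len) == 0)
        printf("congestion control: %.*s\n", (int)len, buf);

    close(fd);
    return 0;
}
\end{verbatim}

Note that nothing in this interface ties the choice of algorithm to any notion of fair share; the kernel simply honors the request.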
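To see why per-flow fairness alone fails to bound an application's share, consider the idealized case, assumed here purely for illustration, in which TCP divides a bottleneck of capacity $C$ equally among competing flows with similar round-trip times. If application $A$ opens $k$ flows while application $B$ opens one, then in steady state
\[
\mathrm{share}(A) \;\approx\; \frac{k}{k+1}\,C,
\qquad
\mathrm{share}(B) \;\approx\; \frac{1}{k+1}\,C .
\]
With $k = 4$, as in the P2P example above, $A$ captures roughly 80\% of the bottleneck even though every one of its flows remains individually TCP-fair.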