Years ago, I read an article about how Microsoft planned to start punishing the assholes on Xbox Live by "forcing them to play with each other". I never heard anything more about it, or any further details, but it gave me an idea for how the problem of bad actors in gaming communities could be solved elegantly. Since the subject has come up again in light of the TF2 Competitive Matchmaking system, I thought I'd lay out the idea in detail.

So, imagine this. When you look at a player's Steam profile, next to the button to add them as a friend, there are also Like and Dislike buttons. Clicking one of these buttons doesn't affect some kind of overall Reputation score. This is important. Instead, it affects something I'll call "the Affinity Matrix": when a user clicks either of those buttons on another player's profile, it sets the Affinity value that represents their relationship to one another.

Suppose you clicked Like on someone's profile and they haven't interacted with yours at all. Your Affinity would be set to 1. Now suppose you both clicked Like on each other. That's a lot more significant, so now the Affinity is maybe 4 or 5. (These values are stored separately from the Like and Dislike flags, so they could be recalculated at some future date if Valve decides to tweak the system.) And for everyone in your Steam Friends list, the Affinity is automatically set to 5, or maybe even higher.

And here's the important part: if either of you clicks Dislike, regardless of what the other has done, that's automatically a -5 or something. This way, trolls can't game the system by Like-ing their victims.

What's the practical upshot of this? Nothing at first. The community will need some time to populate the matrix with data. But eventually, matchmaking and quickplay systems in Steamworks games could be retooled to weight match-ups towards people with positive Affinity values.
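To make the rules concrete, here's a minimal sketch of how the Affinity Matrix could be stored and scored. Everything in it is hypothetical illustration: the class and method names are invented, and the specific scores (1 for a one-sided Like, 4 for a mutual Like, 5 for friends, -5 for any Dislike) are just the example numbers from above, not anything Valve has built.

```python
LIKE, DISLIKE = "like", "dislike"

class AffinityMatrix:
    """Hypothetical sketch of the Affinity Matrix described above."""

    def __init__(self):
        # Raw Like/Dislike flags, keyed by the unordered pair of players:
        # {frozenset({a, b}): {a: LIKE or DISLIKE, ...}}
        self.flags = {}
        # Pairs that appear in each other's Steam Friends lists.
        self.friends = set()

    def set_flag(self, who, target, flag):
        pair = frozenset((who, target))
        self.flags.setdefault(pair, {})[who] = flag

    def add_friends(self, a, b):
        self.friends.add(frozenset((a, b)))

    def affinity(self, a, b):
        # Recomputed from the stored flags every time, so the scoring
        # could be retuned later without losing any data.
        pair = frozenset((a, b))
        entry = self.flags.get(pair, {})
        fa, fb = entry.get(a), entry.get(b)
        # A Dislike from either side overrides everything else,
        # so trolls can't game the system by Like-ing their victims.
        if DISLIKE in (fa, fb):
            return -5
        if pair in self.friends:
            return 5   # Steam Friends start with a high Affinity
        if fa == LIKE and fb == LIKE:
            return 4   # a mutual Like is a lot more significant
        if LIKE in (fa, fb):
            return 1   # a one-sided Like
        return 0       # no interaction yet
```

A matchmaker could then sum `affinity()` over every pair of candidates in a prospective lobby and prefer the grouping with the highest total, which is all "weighting match-ups towards positive Affinity" really requires.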
Other factors could be set up to subtly affect the Affinity values, such as whether players have spent a significant amount of time together, or have chosen to leave servers shortly after joining. The long-term goal is for peer groups to form organically, and for people to be kept away from those they dislike. Unlike a Reputation score, this won't directly punish players for, say, trash talking as a form of friendly banter; it'll just give people who are put off by that sort of thing an easier way to avoid them, and to gravitate towards people who are more their style.

Even further down the line, programs could be developed to analyze the resulting data, chart trends, create visual representations, and potentially even zero in on players who have been especially toxic and deserve to be banned, without having to monitor every single server 24/7 or rely on user reports.

This might all sound needlessly complicated, but Valve loves setting up nerdy analytics systems like this. Remember when they used to privately track heatmaps in TF2? Or when they hired a professional economist to monitor activity in the Steam Marketplace? An experimental project like this seems right up their alley.