Now that low- or negative-quality content has gone from "too much" to "nearly ubiquitous," it's time to reconsider from scratch what a functional online information architecture looks like.
- All recommendation algorithms are bad. No matter how sophisticated (or hyped) the algorithm, economic incentives and informational asymmetries pretty much guarantee that any recommendation system operating at reasonable scale is, or soon will be, gamed. "Chosen for you" never means "in your best interests."
- Peer-to-peer diffusion is a terrible filter. It's very likely that somebody close to you in your social networks will find interesting something you would find interesting, but that only predicts your choices; it doesn't improve them.
- Experts are great filters but badly amplified. A domain expert will, almost by definition, be very good at finding and evaluating information. However, journalists and social media are usually very bad at identifying who the experts are and at adequately conveying what they're trying to say: no expert's analysis of a development fits in five sentences spread across two paragraphs, and in many places the famous person, or the author in a nearby field, will be sought out much more often than a true expert. Media outlets are often unable or unwilling to evaluate levels of expertise, particularly when the person is famous and the writer either has never heard an idea before (however stale and well-rebutted it might be to anybody familiar with the field) or can trust that most of their audience hasn't.
In this kind of informational environment, a social network timeline is almost the worst possible way to gather and filter information: you're reading guided by a combination of gameable engagement-maximizing algorithms, your hopefully well-meaning but not particularly expert peers, and a cloudton of bot-generated, optimized material.
A better architecture looks pretty much like its opposite: most of the time, read directly the output of individual experts regularly surveying their fields.
- directly, because algorithmic recommendations or editorial choices will reduce the informational content of your feed
- individual experts, because they have more knowledge than you about the field and concentrated reputational stakes
- regularly surveying their fields, because any specific development, even if well-explained, will be much less useful to you than a cumulative stream of contextualized analysis.
There are comparatively few people and sources matching that description, yet still more than any single person can reasonably keep up with; a minimal reader for such a hand-curated list is sketched below.
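As a concrete illustration, here's a minimal sketch of that architecture in Python: a hand-maintained list of expert feeds, merged into a single chronological stream, with no ranking or recommendation layer in between. The feed URLs are hypothetical placeholders, and the sketch assumes the third-party feedparser package (`pip install feedparser`).

```python
# A minimal sketch of the "direct from experts" architecture: a hand-curated
# list of feeds, merged chronologically, with no recommendation layer.
# The feed URLs below are placeholders, not endorsements.
import time

import feedparser  # third-party: pip install feedparser

# Hand-picked experts who regularly survey their fields (hypothetical URLs).
EXPERT_FEEDS = [
    "https://example.com/expert-a/feed.xml",
    "https://example.com/expert-b/rss",
]


def fetch_entries(feed_urls):
    """Fetch all entries and merge them into one chronological stream."""
    entries = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            # Fall back to the epoch when a feed omits timestamps.
            published = entry.get("published_parsed") or time.gmtime(0)
            entries.append((published, parsed.feed.get("title", url), entry))
    # Newest first; crucially, sorted by time, not by predicted engagement.
    entries.sort(key=lambda item: item[0], reverse=True)
    return entries


if __name__ == "__main__":
    for published, source, entry in fetch_entries(EXPERT_FEEDS):
        stamp = time.strftime("%Y-%m-%d", published)
        print(f"{stamp}  [{source}]  {entry.get('title', '(untitled)')}")
        print(f"            {entry.get('link', '')}")
```

The only "algorithm" here is a sort by publication date; everything upstream of it is a human judgment about who is worth reading.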
The more algorithmic processing there is between the original work (be it scientific, journalistic, or artistic) and what you see on your screen, the more important the work of identifying reliable expert sources becomes. Not because there's anything "bad" about algorithmic processing as such, but simply because none of the algorithms working in the background of your tools and sites was built to optimize the quality of your information diet.