12/21/2018, 5:24 PM
Yes, that makes total sense. I think detecting it algorithmically would just be a pretext in my case. The real point would be to nudge the reader to ask herself, frequently: is this thing I'm reading bullshit, and if so, why am I wasting my time on it?


12/21/2018, 5:30 PM
But if we can't get humans to agree on what is BS/not BS... or what is fake news/not fake news, a good political view/a bad political view, moral/not moral... how in the world can we expect machines to? Even on things we can somewhat universally agree on, like what counts as child-appropriate content, we're failing.
Some people (including scientists isolated from the real world) naively think machines will be more objective than humans, but machines will be just as biased as their human creators. They will only be dumb pattern-recognition machines reflecting those biases.