Vitaly Khudobakhshov [JB]
07/02/2018, 8:50 AM
hastebrot
07/06/2018, 7:44 PM
voddan
09/01/2018, 11:21 AM
Nikky
09/18/2018, 7:36 PM
thomasnield
09/21/2018, 1:57 AM
thomasnield
09/25/2018, 3:58 PM
thomasnield
09/26/2018, 2:12 AM
kz
10/01/2018, 9:15 PM
hudsonb
10/02/2018, 12:08 PM
bserem
10/08/2018, 12:53 PM
kyonifer
10/14/2018, 3:11 AM
user
10/17/2018, 12:59 PM
holgerbrandl
10/19/2018, 10:21 AM
Nikky
10/20/2018, 3:05 AM
thomasnield
10/22/2018, 3:03 AM
thomasnield
10/24/2018, 6:26 PM
thomasnield
10/27/2018, 10:31 PM
thomasnield
11/13/2018, 4:57 PM
ValV
11/14/2018, 3:28 AM
thomasnield
11/16/2018, 3:25 PM
thomasnield
11/16/2018, 6:46 PM
thomasnield
11/22/2018, 2:19 PM
thomasnield
12/01/2018, 5:18 PM
Thomas Legrand
12/06/2018, 1:21 PM
thomasnield
12/09/2018, 4:06 PM
jmfayard
12/13/2018, 11:25 AM
Frankfurt: "One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. Each of us contributes his share. But we tend to take the situation for granted."
Hello, after listening to the talks from @thomasnield and others, I wonder if data science could help protect us from bullshit. The basic idea would be to build a bullshit/non-bullshit classifier, much like we do for spam. In practice we would build a browser extension, because that's where most bullshit comes to us. Initially, all articles with a large number of buzzwords from Gartner's "Peak of Inflated Expectations" would be marked as "bullshit". If the content was actually a good take-down of the bullshit, we would say so to the browser extension, which would learn from it by virtue of Bayesian inference. Could that work? Why wouldn't it? It's a genuine question, I'm new to data science.
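The spam-filter analogy above can be sketched as a tiny naive Bayes text classifier. Everything here is hypothetical and for illustration only: the class names, the toy training sentences, and the whitespace tokenization are all assumptions, not anything from the thread.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesClassifier:
    """Minimal naive Bayes text classifier with Laplace smoothing,
    in the spirit of classic Bayesian spam filters."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # per-label word frequencies
        self.label_counts = Counter()            # number of documents per label
        self.vocab = set()

    def train(self, text, label):
        # Count one document toward the label's prior, and tally its words.
        self.label_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1
            self.vocab.add(word)

    def classify(self, text):
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + sum of Laplace-smoothed log likelihoods
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in words:
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical toy training data: buzzword-heavy vs. substantive text.
clf = NaiveBayesClassifier()
clf.train("blockchain ai disruption synergy revolutionary paradigm", "bullshit")
clf.train("revolutionary ai blockchain hype synergy", "bullshit")
clf.train("benchmark results show a measurable latency reduction", "ok")
clf.train("the paper reports reproducible results with open source code", "ok")

print(clf.classify("revolutionary blockchain synergy"))  # prints "bullshit"
print(clf.classify("reproducible benchmark results"))    # prints "ok"
```

The "say so to the browser extension" feedback loop in the message maps directly onto calling `train` with the user's corrected label, so the word counts shift over time.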
thomasnield
12/19/2018, 2:37 AM
mccorby
12/19/2018, 3:34 PM
jmfayard
12/21/2018, 5:24 PM