At this point I somewhat agree with you. It's the same way emails are filtered because there is SO much spam, or worse, floating around. But in such a forum I think it should be done on a user basis, not a message basis. So if someone constantly posts things and never actually reacts to the replies (no discussion, just dumping), then that should be stopped, regardless of what the actual message is, what the intentions are, or whether it is a real person or a bot. Randomly dumping messages hardly has value. But how to detect that is indeed the issue, even with an LLM, since LLMs can be used to counter themselves; still, they would help against the low-effort posts.
Yeah. That was more or less the conclusion I came to – it's too hard for the LLM to follow the flow of conversation well enough to really determine whether someone's "acting in good faith" or whatever, and it's way too easy for it to interpret someone making a joke as being serious, that kind of thing. (Or maybe GPT-4 can do it, if you want to pay for API access for every user that wants to post, but I don't want to do that.)
But it seems even a cheap LLM is pretty capable of distilling down what people are claiming (or implying as an assumption), whether someone challenged them on it, and then whether they responded substantively, responded combatively, changed the subject, or never responded. That seems to work, and you can do it pretty cheaply. And it means that someone who puts out 50 messages a day (which isn't hard to do) would then have to respond to 50 messages a day coming back asking questions, which is a lot more demanding, and it creates a lot more room for an opinion that doesn't hold up under examination to get exposed as such. But it wouldn't really weigh in on what the "right answer" is, and it wouldn't censor anyone who wanted to be there and participate in the discussion.
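Roughly, the shape I mean is something like this. This is just a sketch, not my actual code: it assumes OpenAI's Python SDK with gpt-4o-mini standing in as the "cheap LLM", and the Message structure and the prompts are made up for illustration.

```python
# Sketch: per-user tally of how someone handles challenges to their claims.
# Assumptions (not from the actual project): OpenAI SDK, gpt-4o-mini as the
# cheap model, a flat thread of Message objects with reply_to links.
import json
from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

@dataclass
class Message:
    msg_id: int
    author: str
    text: str
    reply_to: int | None = None  # id of the message this one replies to

def extract_claims(msg: Message) -> list[str]:
    """Ask the cheap model for the claims / implied assumptions in a post."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "List the factual claims or implied assumptions in the "
                        "post as a JSON array of short strings. JSON only."},
            {"role": "user", "content": msg.text},
        ],
    )
    try:
        return json.loads(resp.choices[0].message.content)
    except (json.JSONDecodeError, TypeError):
        return []

def classify_followup(claims: str, challenge: Message, followup: Message | None) -> str:
    """Label how the claimant handled a challenge: substantive, combative,
    changed_subject, or never_responded."""
    if followup is None:
        return "never_responded"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Given a claim, a challenge to it, and the claimant's reply, "
                        "answer with exactly one word: substantive, combative, or changed_subject."},
            {"role": "user",
             "content": f"Claim: {claims}\nChallenge: {challenge.text}\nReply: {followup.text}"},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

def score_user(author: str, thread: list[Message]) -> dict[str, int]:
    """Tally how `author` responded to challenges across a thread."""
    tally: dict[str, int] = {}
    for msg in thread:
        if msg.author != author:
            continue
        claims = extract_claims(msg)
        if not claims:
            continue
        # Challenges: replies to this message from other users.
        for ch in (m for m in thread if m.reply_to == msg.msg_id and m.author != author):
            # The author's reply to the challenge, if any.
            followup = next((m for m in thread
                             if m.reply_to == ch.msg_id and m.author == author), None)
            label = classify_followup("; ".join(claims), ch, followup)
            tally[label] = tally.get(label, 0) + 1
    return tally
```

The point being that the model never has to judge who's "right" or follow the whole conversation; it only has to answer narrow questions per exchange, and the per-user tally (lots of "never_responded" or "changed_subject") is what flags the dump-and-run accounts.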
IDK. Because you asked, I dusted off the code I had from before just now, and I was reminded of how not-happy-with-it-yet I was 🙂. I think there's a good idea somewhere in there, though.