It's an interesting suggestion, and one that has come up here before, though it was only discussed briefly. There's a lot of great information in this Slack workspace, and I can certainly see why an LLM interface to all of it sounds tempting. But it's important to weigh that against the very real concerns many people have about generative AI. Would people be less inclined to contribute here if they knew their contributions were being used to train an AI model? Personally, I'm much happier occasionally answering the same question twice, or helping someone find where their question was answered previously, than I am with my words being mangled and misquoted by an AI model while someone else extracts "shareholder value." I would be interested to hear opinions from other contributors, though!