When using ChatGPT, Google Gemini, Microsoft Copilot, etc., we have all dealt with inaccurate responses. To address this, researchers from MIT have developed a tool called SymGen, which has an LLM generate responses with citations that point to exactly where in the source data each piece of information came from.
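As I understand it, the core idea is that the model emits symbolic references into the source data instead of free-floating claims, and those references are then resolved and shown as citations. A toy Python sketch of what that resolution step might look like (the placeholder syntax, function names, and data layout here are my own illustration, not SymGen's actual API):

```python
import re

def resolve_citations(response: str, source: dict) -> str:
    """Replace symbolic references like {paper.title} with values looked up
    in the source data, so every resolved claim is traceable to its origin."""
    def lookup(match):
        value = source
        # Walk the dotted path (e.g. "paper.title") through nested dicts.
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{([\w.]+)\}", lookup, response)

# Hypothetical source data and model output with symbolic references.
source = {"paper": {"title": "SymGen", "venue": "MIT"}}
template = "The tool {paper.title} was developed at {paper.venue}."
print(resolve_citations(template, source))
# → The tool SymGen was developed at MIT.
```

Because each substituted value carries a path into the source data, a verifier (or a reader) can click through from the claim to the exact field it came from, rather than trusting the model's free text.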
How do you think providing citations for AI-generated responses will impact user trust and the overall reliability of AI models in the future?
I noticed that ChatGPT would sometimes make up "fake" academic findings or sources that did not actually exist. This tool would be quite useful and convenient!
Does this tool also highlight any potential biases in the cited sources, or is it more focused on just showing where the information comes from? That would be a big deal for ensuring that AI-generated content isn't only accurate but also balanced!
I think it’s so important to be able to verify the info AI gives us, especially with how much we rely on it now.