Hallucination: Uncorrected AI Brains
I have tried training the AI on specific unstructured data sets to extend its capabilities for my particular use case. Even after tuning the GenAI with specific configurations and adding vectorised databases, results on the platform are very fickle, with extreme hallucinations. Preparing structured databases for huge data sets is costly, time-consuming, and labour-intensive. The bigger challenge with GenAI is hallucination, and it interacts with the top-p value and other creativity settings: the more creative the configuration, the more likely the AI is to hallucinate over the additional data. This is especially evident with GPT from OpenAI. The problem can be mitigated to an extent by lowering the top-p value and other creativity-inducing configurations, but the results still aren't satisfactory. Optimising the prompt is always an option for grounding the model against hallucinations, but in my specific case grounding would have to be done at the scale of converting whole ...
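To make the top-p ("nucleus sampling") trade-off concrete, here is a minimal sketch of how the filter works conceptually: the model's candidate tokens are ranked by probability, and only the smallest set whose cumulative probability reaches top-p is kept before sampling. A lower top-p shrinks the candidate pool, which reduces "creative" low-probability picks (and, often, hallucination) at the cost of variety. This is an illustrative standalone function, not any vendor's actual implementation; the token names and probabilities are made up.

```python
def nucleus_filter(probs: dict, top_p: float) -> dict:
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize so the kept probabilities sum to 1.

    A lower top_p keeps fewer candidates, so sampling becomes more
    conservative; a higher top_p admits rarer tokens and more variety.
    """
    # Rank candidate tokens from most to least probable.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break  # smallest set covering top_p found
    # Renormalize the surviving candidates.
    total = sum(kept.values())
    return {token: p / total for token, p in kept.items()}

# Hypothetical next-token distribution for illustration only.
probs = {"Paris": 0.5, "London": 0.3, "Atlantis": 0.2}
print(nucleus_filter(probs, 0.5))  # only the top token survives
print(nucleus_filter(probs, 0.8))  # the unlikely "Atlantis" is cut
```

With top_p = 0.5 only "Paris" survives; with top_p = 0.8 the plausible runner-up is kept but the least likely candidate is dropped, which is the intuition behind lowering top-p to curb hallucination.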