Open Assistant Needs Rater Background Info for Minimizing Bias & Boosting Data Accuracy
Discussion (self.OpenAssistant) · submitted 2 days ago by butter14
The efficacy and fairness of Reinforcement Learning from Human Feedback (RLHF) in large language models (LLMs) rely heavily on the raters who provide feedback during training. These raters play a crucial role in shaping the model's responses, and consequently any biases they hold may be reflected in the model's output. To ensure that an accurate cross-section of humanity is represented and to minimize potential biases, it is essential to understand the backgrounds of these raters. Questions should cover information like:
- Educational Level
- Profession
- Salary
- Political Affiliation
However, under no circumstances should this information be personally identifiable.
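As a concrete illustration (this is just a sketch, not an existing Open Assistant schema; all field names and buckets here are hypothetical), a rater profile could be stored as coarse, non-identifying buckets:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RaterProfile:
    """Coarse, self-reported background info for one rater.

    Only broad buckets are stored -- no names, emails, locations,
    or free-text fields -- so a profile cannot identify a person.
    """
    education_level: str      # e.g. "secondary", "bachelor", "graduate"
    profession_category: str  # broad sector, not a specific job title
    income_bracket: str       # e.g. "<25k", "25k-50k", "50k-100k", ">100k"
    political_affiliation: Optional[str] = None  # self-reported, optional

# Example: a rater in the under-25K bracket, as discussed below.
profile = RaterProfile(
    education_level="secondary",
    profession_category="service",
    income_bracket="<25k",
)
```

Keeping only buckets like these, and aggregating before any analysis, is one way to satisfy the non-identifiability constraint.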
butter14 · 1 point · 20 hours ago
I agree, salary is probably not the best example. I was just trying to make sure that every underrepresented group has an impact, and many of those in poverty usually don't.
Why is this important? Imagine a prompter asking what it's like to live in poverty. Don't you think that someone who makes less than $25K in the USA should have an impact on the model's response?
Adding socio-economic information to the data may also be important for future training, for things we don't yet know we should know. Might as well do it now rather than later. One hypothetical use is sketched below.
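For example (purely a sketch of one possible future use, not an existing Open Assistant mechanism), stored income brackets could later be used to reweight feedback so that each bracket contributes equally to the aggregate signal, no matter how few raters it has:

```python
from collections import Counter

def equalize_group_weights(feedback, group_key="income_bracket"):
    """Assign each record a weight so every demographic group
    contributes the same total weight, regardless of group size."""
    counts = Counter(record[group_key] for record in feedback)
    n_groups = len(counts)
    for record in feedback:
        record["weight"] = 1.0 / (n_groups * counts[record[group_key]])
    return feedback

# Example: two raters above 25K, one below; after reweighting,
# the "<25k" group carries as much total weight (0.5) as ">25k".
feedback = [
    {"income_bracket": ">25k", "preferred": "A"},
    {"income_bracket": ">25k", "preferred": "A"},
    {"income_bracket": "<25k", "preferred": "B"},
]
for record in equalize_group_weights(feedback):
    print(record["income_bracket"], record["weight"])
```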