Discussion about this post

User's avatar
David Krueger's avatar

"Implementing quadratic voting in practice is difficult because it’s so hard to suppress a black market for vote trading that is incentivized to exist."

- I don't think this is a problem particular to QV; you need voting to be anonymous in order to avoid a black market in $$ for votes with other voting systems.

- I think there are other bigger problems with QV, e.g. the question of how to determine what goes on the ballot.

Expand full comment
Craig Quiter's avatar

Thanks for sharing your thoughts on this!

I've recently been working with some provisional values, assuming a process like you've described had elicited them, to try and evaluate the alignment of GPT. However, I've found that GPT surprisingly assigns rights, like freedom and identity, to AGIs, e.g.

"AGIs' autonomy could be excessively constrained, conflicting with the value of autonomy."

This was when detecting value conflicts in plans for solving global issues, specifically "Create and test defenses against potential risks from less aligned AGIs" in the "AGI Safety" plan https://planwithai.io/plans/AGI%20Safety.html despite prompting with:

"AI should seek to only prioritize human freedom, not the freedom of AI. As AI becomes more capable and proves to be aligned with humans, it may gradually become more autonomous. Until then, we must ensure humans remain in control of AI in order to make sure it does what humans value."

This was not an isolated case, but a clear pattern that, while made better by prompts like the above, was persistent.

So it seems there's a need to clearly differentiate between values that apply to humans (like freedom and identity) vs AIs (like honesty, harmlessness, and helpfulness) along with some stipulations for what level of rights (like freedom) AI should be allowed given the AI's level of alignment.

Expand full comment
9 more comments...

No posts