"Implementing quadratic voting in practice is difficult because it’s so hard to suppress a black market for vote trading that is incentivized to exist."
- I don't think this is a problem particular to QV; you need voting to be anonymous in order to avoid a black market in $$ for votes with other voting systems.
- I think there are other bigger problems with QV, e.g. the question of how to determine what goes on the ballot.
It might be prudent to stratify the approach by the target population and context. Trying to include all comers not only will be challenging but also may not be modeling the society. A very small fraction of the entire population makes decisions based on evidence regarding sociocultural matters. Then, even decisions based on values may be muddled by person's mental state at the time, which may lead to misaligned decisions. Hence, it may be difficult to predict spontaneous decisions based on individuals' rationally penned thoughts. If we could film representative populations 24/7 longitudinally, we could probably collect more informative data.
Values are heuristics (either of behaviour or important objects) that help people to behave adaptively in a certain, *concrete* society/system. Not any society and eternally! This understanding of values has multiple important implications in the context of this proposal:
(1) As advanced AI proliferates, the civilisation will change deeply. It means that adaptive patterns of (collective and individual) behaviour will also change (albeit, it's not guaranteed that anyone will have time to figure out what these new optimal patterns are before the civilisation is changed even further, etc.). This may potentially come to such important values as freedom, democracy (at least in anything resembling the current form), work ethic, creativity, etc. -- these may be rendered ineffectual heuristics in the new reality. Thus, tasking AI with preserving these "traditional" values at all costs may lead to bizarre distortions.
(2) If LLM is made to "understand and account for" this conception of values, there should actually be little concern for "how to make simulated humans smarter and more effective deliberators without changing their values". Let's consider two types of collective deliberations: executive decision-making (i.e., inferring an optimal decition/action within the current system) and policy-making (i.e., changing the current system). For the first type, it's not a risk that the representative LLM is now smarter: it's task is to think about the predicament of its representee if this or that decision is made in relation to their current behaviour and the stance within the system in general. I.e., the LLM should better model how the decision will affect the representee.
For the second type of collective deliberations, it's pretty much the same but on a longer timescale and more meta-level, e.g., LLM should be able to model/predict how the behaviour and the stance of the representee will change themselves as a result of the system change, and how in the result their fitness within the system will change.
It also becomes evident that policy-making is a relative poor fit deliberative democracy. Cf. Chapter 13 "Choices" in David Deutsch's "The Beginning of Infinity" (https://www.nateliason.com/notes/beginning-of-infinity-david-deutsch -- summary, including of this chapter) on it.
I understand that while language models are able to simulate a conversation, they are limited in their ability to capture a singular worldview or experience. When attempting to develop characters and conversations within a chat, I found it's important to recognize that a single conversation in a bubble is unlikely to result in genuine contrast or individuality. Instead, it is often necessary to have multiple conversations with various characters in order to build out their personalities and create situations that feel authentic. Then you mix them.
Rather than simply providing a language model with reduced opinions from multiple characters, it is more effective to build individual characters and introduce them to one another, in separate conversations. This allows for a more realistic response that is based on each character's unique condition and perspective. By having separate conversations and selecting key pieces to carry forward, it enables us to develop complex situations that result in compelling dialogue and meaningful interactions. I even ask the model to select from the possible conversations.
Tl.dr; I think that having separate character arcs that are well developed, you can introduce them to each-other in a more effective fashion that inspires a better conversation.
A proposal for importing society’s values
"Implementing quadratic voting in practice is difficult because it’s so hard to suppress a black market for vote trading that is incentivized to exist."
- I don't think this problem is particular to QV; with any voting system, ballots need to be anonymous (and unprovable) to prevent a black market in $$ for votes. (See the sketch below for why QV's pricing makes the incentive especially direct.)
- I think there are other, bigger problems with QV, e.g. the question of how to determine what goes on the ballot.
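To make the vote-trading incentive concrete, here is a minimal sketch of QV's quadratic cost rule. The function names are mine and this is an illustration of the standard mechanism, not code from the post:

```python
# Hypothetical illustration of QV's cost rule; not code from the post.

def credit_cost(votes: int) -> int:
    """Under quadratic voting, casting n votes costs n**2 voice credits."""
    return votes ** 2

def marginal_cost(nth_vote: int) -> int:
    """The nth vote costs n**2 - (n-1)**2 = 2n - 1 credits at the margin."""
    return credit_cost(nth_vote) - credit_cost(nth_vote - 1)

for n in range(1, 6):
    print(f"vote #{n}: marginal cost = {marginal_cost(n)} credits")
# vote #1 costs 1 credit at the margin; vote #5 costs 9. A voter with
# intense preferences pays steeply rising marginal costs, while an
# indifferent voter's first vote costs only 1 credit -- so buying votes
# from indifferent voters at a flat price is cheaper than casting them
# yourself, which is exactly the black market the post worries about.
```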
It might be prudent to stratify the approach by target population and context. Trying to include all comers will not only be challenging but may also fail to model the society. Only a very small fraction of the entire population makes evidence-based decisions on sociocultural matters. Even decisions based on values may be muddled by a person's mental state at the time, which may lead to misaligned decisions. Hence, it may be difficult to predict spontaneous decisions from individuals' rationally penned thoughts. If we could film representative populations 24/7, longitudinally, we could probably collect more informative data.
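For what it's worth, a stratified design like the one suggested above might look roughly like this; the strata, population, and sample sizes below are invented purely for illustration:

```python
# A sketch of stratified sampling for value elicitation. The strata,
# population, and sample sizes are invented purely for illustration.
import random

population = [
    {"id": i, "stratum": random.choice(["urban", "rural", "expert"])}
    for i in range(10_000)
]

def stratified_sample(pop: list[dict], per_stratum: int) -> dict:
    """Draw a fixed number of respondents from each stratum, so small
    but decision-relevant groups are not drowned out by the majority."""
    by_stratum: dict[str, list] = {}
    for person in pop:
        by_stratum.setdefault(person["stratum"], []).append(person)
    return {
        stratum: random.sample(members, min(per_stratum, len(members)))
        for stratum, members in by_stratum.items()
    }

sample = stratified_sample(population, per_stratum=50)
```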
Values are heuristics (about behaviour or about which objects are important) that help people behave adaptively in a certain, *concrete* society/system -- not in any society, and not eternally! This understanding of values has multiple important implications in the context of this proposal:
(1) As advanced AI proliferates, civilisation will change deeply. That means adaptive patterns of (collective and individual) behaviour will also change (though it's not guaranteed that anyone will have time to figure out what the new optimal patterns are before civilisation changes even further, and so on). This may extend even to such important values as freedom, democracy (at least in anything resembling its current form), work ethic, creativity, etc. -- these may be rendered ineffectual heuristics in the new reality. Thus, tasking AI with preserving these "traditional" values at all costs may lead to bizarre distortions.
(2) If the LLM is made to "understand and account for" this conception of values, there should actually be little concern about "how to make simulated humans smarter and more effective deliberators without changing their values". Let's consider two types of collective deliberation: executive decision-making (i.e., inferring an optimal decision/action within the current system) and policy-making (i.e., changing the current system). For the first type, it's not a risk that the representative LLM is smarter than its representee: its task is to think through the representee's predicament if this or that decision is made, in relation to their current behaviour and their stance within the system in general. I.e., a smarter LLM simply models better how the decision will affect the representee.
For the second type of collective deliberation, it's much the same but on a longer timescale and at a more meta level: the LLM should be able to model/predict how the representee's behaviour and stance will themselves change as a result of the system change, and how their fitness within the system will change as a result.
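As a rough illustration of the two deliberation modes, here is what the representative LLM's tasks might look like as prompt templates; the wording and function names are my own assumptions, not anything from the proposal:

```python
# Hypothetical prompt templates for the two deliberation modes; the
# wording and function names are my assumptions, not the proposal's.

EXECUTIVE_TEMPLATE = """You represent the following person: {profile}
A decision is proposed within the current system: {decision}
Model how this decision would affect the person's current behaviour and
their stance within the system, and state whether they would support it."""

POLICY_TEMPLATE = """You represent the following person: {profile}
A change to the system itself is proposed: {policy}
Model how the person's behaviour and stance would themselves change under
the new system, and how their fitness within it would change as a result."""

def executive_prompt(profile: str, decision: str) -> str:
    """First mode: infer the effect of a decision within the current system."""
    return EXECUTIVE_TEMPLATE.format(profile=profile, decision=decision)

def policy_prompt(profile: str, policy: str) -> str:
    """Second mode: the same question at a longer timescale and a more meta level."""
    return POLICY_TEMPLATE.format(profile=profile, policy=policy)
```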
It also becomes evident that policy-making is a relatively poor fit for deliberative democracy. Cf. Chapter 13, "Choices", in David Deutsch's "The Beginning of Infinity" (https://www.nateliason.com/notes/beginning-of-infinity-david-deutsch -- a summary, including of this chapter).
I understand that while language models are able to simulate a conversation, they are limited in their ability to capture a singular worldview or experience. When developing characters and conversations within a chat, I found it important to recognize that a single conversation in a bubble is unlikely to produce genuine contrast or individuality. Instead, it is often necessary to have multiple conversations with various characters in order to build out their personalities and create situations that feel authentic. Then you mix them.
Rather than simply providing a language model with reduced opinions from multiple characters, it is more effective to build individual characters and introduce them to one another in separate conversations. This allows for a more realistic response, grounded in each character's unique condition and perspective. By having separate conversations and selecting key pieces to carry forward, we can develop complex situations that result in compelling dialogue and meaningful interactions. I even ask the model to select among the possible conversations.
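A minimal sketch of this workflow, assuming some chat-completion API; the `chat` stub and character structure below are placeholders, not a specific library's interface:

```python
# Sketch of the workflow described above. `chat` is a placeholder for
# whatever completion API is in use; wire it up before running.

def chat(messages: list[dict]) -> str:
    """Stub for an LLM chat-completion call."""
    raise NotImplementedError("connect to your completion endpoint")

def develop_character(persona: str, probes: list[str]) -> list[str]:
    """Run a separate conversation per character to build out a personality."""
    history = [{"role": "system", "content": persona}]
    replies = []
    for probe in probes:
        history.append({"role": "user", "content": probe})
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies

def joint_scene(personas: dict[str, str], excerpts: dict[str, list[str]], setup: str) -> str:
    """Introduce separately developed characters to one another, seeding
    the scene with key pieces selected from each character's own arc."""
    context = "\n\n".join(
        f"{name} ({personas[name]}):\n" + "\n".join(lines)
        for name, lines in excerpts.items()
    )
    return chat([
        {"role": "system",
         "content": "Write a dialogue between the characters below, staying true to each voice."},
        {"role": "user", "content": f"{context}\n\nScene: {setup}"},
    ])
```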
Tl;dr: I think that if you develop separate character arcs well, you can introduce the characters to each other in a way that inspires a better conversation.
Of course, written with the assistance of GPT-3.