7 Comments

Thanks so much, Jan, for the work you and your team are undertaking. Hopefully, in a decade or two, AI alignment researchers like you will be considered heroes the way the astronauts of the space race were. Three questions for you:

1. What do you make of the following paper and the general argument that in the end, we cannot control/align an intelligence that is superior to humans: (https://journals.riverpublishers.com/index.php/JCSANDM/article/view/16219)?

2. There is a lot of interest from billionaire funders and the effective altruist movement in dramatically increasing the funding and resourcing for AI safety/alignment. I've gathered that funding is no longer the rate limiter and that AI alignment researchers themselves are the bottleneck. Is that your view? What can be done to re-skill or re-orient PhDs and academics?

3. Related to #2, how much would we have to scale up AI alignment research personnel for you to feel you can keep pace with progress towards AGI? For example, would a 2x, 5x, or 10x scale-up make you feel AI alignment is no longer the bottleneck?

Thank you!

Author · Sep 27, 2022 (edited)

Thanks for your questions! I'm glad to hear that you appreciate our work.

1. I'm not sure I find that argument particularly convincing, but maybe the author just wants to point out that the problem is hard. Until there is a formal proof that it's impossible (along the lines of part 1) I don't think we should give up.

2. I agree that the field is a lot more talent-bottlenecked than funding-bottlenecked right now, and has been since forever. What's more, since the field is so new, basically everyone re-skilled in the last few years. What the best path for re-skilling is depends on the individual's background. For most technically minded people it might be getting into ML engineering.

3. I don't know. There is certainly enough work for 10x or 100x as many people, but it would be hard to quickly add that many people to the existing institutions.


Thank you. If you have any thoughts or suggestions on how non-technical folks or broader society can help, please consider writing about them.


Thank you for this informative and motivating post! There are a few points on which I would like to comment:

#2: “One possible path to achieve the outcome of an idealized process with significantly less effort than actually running it is to build a sufficiently capable and aligned AI system and have it figure out what the outcome would be. However, I expect that most people would not regard this substitute process as legitimate.”

In my opinion, what makes this approach dangerous is that the answer such an AI would give to the alignment problem influences how we treat *this very* AI (and all other AIs) going forward. As soon as the AI figures out that we will use its output in this way, its behavior becomes strategic, adding a strong incentive to break free from its alignment and pursue its own objectives (maybe that’s simply an instrumental goal like survival to start with).

#2: I’m somewhat unsatisfied with the entire “emulating human values in AI models” approach. Apart from the difficulties you describe, I see the much more fundamental problem that human preferences might just not be very “good” compared to what’s possible. Two quite straightforward aspects are: (a) human preferences about specific situations might not perfectly capture abstract human values, due to various biases, and (b) human values might be systematically flawed, due to the fact that we’re, well, humans.

Therefore, I would extend your argument that “with our automated alignment researcher we don’t need to restrict the search space to alignment techniques humans could devise” to the search space of consistent moral value systems, such that we’re no longer restricted to what *we* can conceive (of course, this would instead require some higher level description of desiderata for such value systems).

#4: “If we want to prove something about a GPT-3-sized 175 billion parameter model, our theorem’s size is going to be at least 175GB.”

Is your assumption that 175B parameters are *necessary* to capture the capabilities of GPT-3? It seems non-trivial to me to show that the same capabilities cannot be obtained by a much smaller model for *some* combination of initial configuration and training data. If this were possible, we could potentially describe (and make provable claims about) such a system in a much more compact form.
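To make that concrete, here is a minimal sketch of the standard knowledge-distillation objective one could use to try to compress a large teacher's capabilities into a much smaller student model. The temperature, batch size, and teacher/student split are illustrative assumptions only; whether any student small enough to reason about formally could actually match GPT-3's capabilities is exactly the open question.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student predictions
    (the standard knowledge-distillation objective)."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), scaled by T^2 as in Hinton et al. (2015).
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage with random logits standing in for a large teacher and a
# much smaller student over the same vocabulary (GPT-3 uses ~50k tokens).
teacher_logits = torch.randn(8, 50257)
student_logits = torch.randn(8, 50257)
loss = distillation_loss(student_logits, teacher_logits)
```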

I would be excited to hear your opinion!

Author

Thanks a lot for your thoughtful comments, Michael!

* I'd argue there is still an important difference between the tool AI systems that you use to make some of the value-elicitation processes more efficient and the next generation of larger and more capable AI systems that you ultimately want to align. It's certainly not out of the question that they'll all try to collude in a subtle way, but I don't think that's the most likely failure mode here. Importantly, if you have a bunch of AI systems that are figuring out what humans would want in a given situation, they don't need to be much smarter than humans for the situations we face today. I agree that you'd still need to make these tool AI systems sufficiently aligned that they don't pursue strategic objectives or try to break out.

* We are on the same page on this. I definitely agree that human values as commonly expressed or lived fall short of the ideals that we subscribe to. Using AI to make moral progress would be great, but I expect that this is harder than technical alignment research. For example, I'm not sure that "evaluation is easier than generation" holds in this case.

* Yes, you could probably make more abstract statements much more compactly. The point I was trying to make here is a lot more basic: if you want to make any formal statement about the specific 175-billion-parameter GPT-3 language model as it currently exists, the model's parameters need to be part of your theorem.
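For reference, the arithmetic behind that size estimate, assuming at least one byte per parameter as a loose lower bound (16-bit floats, as such models are typically stored, would double it):

```python
# Back-of-envelope size of any theorem statement that has to embed the
# full parameter vector of a 175-billion-parameter model.
num_params = 175e9

lower_bound_gb = num_params * 1 / 1e9   # at least one byte per parameter
fp16_gb = num_params * 2 / 1e9          # 16-bit floats, as typically stored

print(f"lower bound: {lower_bound_gb:.0f} GB")   # 175 GB
print(f"fp16 encoding: {fp16_gb:.0f} GB")        # 350 GB
```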


Thank you, Jan. This is a great piece of thinking on alignment research, inspiring and informative. Here are my thoughts:

My hypothesis for #1 is that studying the alignment of ontological structures and contexts between humans and machines is a practical way to clarify the theoretical foundations of alignment. There are lots of investigations of ontology, both philosophical and technical, ranging from the individual to the group scale.

Game-theoretic analysis of alignment dynamics could be another perspective for investigating all the desiderata in #2, since they could be treated as solution concepts in different games.

For #4, I think there has already been some work on using proof assistants like Coq or automated theorem provers to verify the convergence of RL algorithms, but it needs to be extended or composed to deal with large-scale problems. Bisimulation-style research could also be helpful for verification.
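To make the bisimulation point concrete, here is a toy sketch (my own illustration, not from any of the cited work) of partition refinement computing the coarsest bisimulation on a small deterministic transition system; scaling this kind of check to learned policies is of course the hard part.

```python
def bisimulation_classes(states, actions, step, obs):
    """Coarsest bisimulation of a finite deterministic transition system.

    step(s, a) returns the successor of state s under action a;
    obs(s) is the label (e.g. a reward) attached to s. Two states end up
    in the same class iff they carry the same label and, for every
    action, have bisimilar successors.
    """
    # Initial partition: group states by their label.
    block = {s: obs(s) for s in states}
    while True:
        # Signature of a state: its own block plus the blocks of all successors.
        signature = {
            s: (block[s],) + tuple(block[step(s, a)] for a in actions)
            for s in states
        }
        ids = {}
        new_block = {s: ids.setdefault(signature[s], len(ids)) for s in states}
        # Refinement only ever splits blocks, so stop once the count is stable.
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block
        block = new_block


# Toy example: states 0 and 1 are bisimilar, as are 2 and 3.
states = [0, 1, 2, 3]
actions = ["a"]
step = lambda s, a: {0: 2, 1: 3, 2: 2, 3: 3}[s]
obs = lambda s: 0 if s in (0, 1) else 1
print(bisimulation_classes(states, actions, step, obs))  # {0: 0, 1: 0, 2: 1, 3: 1}
```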

Author

Thanks a lot!

Good point on game theory; that should probably be part of the list for #1. But I'm not sure that this is actually the hard part.

For #4, using a theorem prover would make sense here, but there really hasn't been anything at scale.
