7 Comments

Thanks so much, Jan, for the work you and your team are undertaking. Hopefully, in a decade or two, AI alignment researchers like you will be considered heroes, much as the astronauts of the space race were. Three questions for you:

1. What do you make of the following paper and the general argument that in the end, we cannot control/align an intelligence that is superior to humans: (https://journals.riverpublishers.com/index.php/JCSANDM/article/view/16219)?

2. There is a lot of interest from billionaire funders and the effective altruism movement in dramatically increasing the funding and resourcing of AI safety/alignment. I gather that funding is no longer the rate limiter; rather, the supply of AI alignment researchers is the bottleneck. Is that your view? What can be done to re-skill or re-orient PhDs and academics?

3. Related to #2, how much would we have to scale up AI alignment research personnel for you to feel able to keep pace with progress towards AGI? For example, would a 2x, 5x, or 10x scale-up mean that AI alignment is no longer the bottleneck?

Thank you!


Thank you for this informative and motivating post! There are a few points on which I would like to comment:

#2: “One possible path to achieve the outcome of an idealized process with significantly less effort than actually running it is to build a sufficiently capable and aligned AI system and have it figure out what the outcome would be. However, I expect that most people would not regard this substitute process as legitimate.”

In my opinion, what makes this approach dangerous is that the answer such an AI would give to the alignment problem influences how we treat *this very* AI (and all other AIs) going forward. As soon as the AI figures out that we will use its output in this way, its behavior becomes strategic, which adds a strong incentive to break free from its alignment and pursue its own objectives (perhaps starting simply with an instrumental goal like survival).

#2: I’m somewhat unsatisfied with the entire “emulating human values in AI models” approach. Apart from the difficulties you describe, I see the much more fundamental problem that human preferences might just not be very “good” compared to what’s possible. Two quite straightforward aspects are: (a) human preferences about specific situations might not perfectly capture abstract human values, due to various biases, and (b) human values themselves might be systematically flawed, due to the fact that we’re, well, humans.

Therefore, I would extend your argument that “with our automated alignment researcher we don’t need to restrict the search space to alignment techniques humans could devise” to the search space of consistent moral value systems, so that we’re no longer restricted to what *we* can conceive (of course, this would instead require some higher-level description of desiderata for such value systems).

#4: “If we want to prove something about a GPT-3-sized 175 billion parameter model, our theorem’s size is going to be at least 175GB.”

Is your assumption that 175B parameters are *necessary* to capture the capabilities of GPT-3? It seems non-trivial to me to show that the same capabilities cannot be obtained by a much smaller model for *some* combination of initial configuration and training data. If this were possible, we could potentially describe (and make provable claims about) such a system in a much more compact form.
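To make this concrete, here is a toy knowledge-distillation sketch of my own (PyTorch-style; the architectures, sizes, and data are purely illustrative, not anything from the post) showing one way a much smaller “student” model can be trained to imitate a larger “teacher”, which is the kind of compression that would shrink the object we need to prove things about:

# Toy knowledge-distillation sketch (PyTorch; all names, sizes, and data are illustrative).
# A small "student" is trained to match the softened output distribution of a
# frozen, larger "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softening both distributions preserves more of the teacher's output structure

for step in range(1000):
    x = torch.randn(32, 128)         # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)  # teacher is frozen
    student_logits = student(x)
    # Standard distillation loss: KL divergence between softened distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Of course, whether such compression preserves the *specific* capabilities one wants to reason about is precisely the open question.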

I would be excited to hear your opinion!


Thank you, Jan. This is a great piece of thinking on alignment research, inspiring and informative. Here are my thoughts:

My hypothesis for #1 is that studying how ontological structures and their contexts align between humans and machines is a practical way to clarify the theoretical foundations of alignment. There is already a lot of work on ontology, both philosophical and technical, ranging from the individual to the group scale.

Game-theoretic analysis of alignment dynamics could be another perspective for investigating the desiderata in #2, since they could be treated as solution concepts in different games.

For #4, I think there has already been some work on using proof assistants like Coq, or automated theorem provers, to verify the convergence of RL algorithms, but it would need to be extended and composed to handle large-scale problems. Bisimulation-style research could also be helpful for verification.
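To give a toy example of the kind of statement such work formalizes (this is the standard textbook argument, not something from the post): for tabular value iteration with discount factor \gamma < 1, the Bellman optimality operator T is a \gamma-contraction in the sup norm,

\[ \| T V_1 - T V_2 \|_\infty \le \gamma \, \| V_1 - V_2 \|_\infty \quad \text{for all value functions } V_1, V_2, \]

so by the Banach fixed-point theorem the iterates V_{k+1} = T V_k converge geometrically to the unique fixed point V^*. Mechanizing a statement like this seems tractable; the hard part, as noted above, is composing such results up to the scale of real learned systems.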
