AI safety
Does the UK’s liver transplant matching algorithm systematically exclude younger patients?
Seemingly minor technical decisions can have life-or-death effects
Nov 11, 2024 • Arvind Narayanan and Sayash Kapoor
AI existential risk probabilities are too unreliable to inform policy
How speculation gets laundered through pseudo-quantification
Jul 26, 2024 • Arvind Narayanan and Sayash Kapoor
AI safety is not a model property
Trying to make an AI model that can’t be misused is like trying to make a computer that can’t be used for bad things
Mar 12, 2024 • Arvind Narayanan and Sayash Kapoor
A safe harbor for AI evaluation and red teaming
An argument for legal and technical safe harbors for AI safety and trustworthiness research
Mar 5, 2024 • Sayash Kapoor and Arvind Narayanan
On the Societal Impact of Open Foundation Models
Adding precision to the debate on openness in AI
Feb 27, 2024 • Sayash Kapoor and Arvind Narayanan
Are open foundation models actually more risky than closed ones?
A policy brief on open foundation models
Dec 15, 2023 • Sayash Kapoor and Arvind Narayanan
Model alignment protects against accidental harms, not intentional ones
The hand-wringing about failures of model alignment is misguided
Dec 1, 2023 • Arvind Narayanan and Sayash Kapoor
Is AI-generated disinformation a threat to democracy?
An essay on the future of generative AI on social media
Jun 19, 2023 • Sayash Kapoor and Arvind Narayanan
Is Avoiding Extinction from AI Really an Urgent Priority?
The history of technology suggests that the greatest risks come not from the tech, but from the people who control it
May 31, 2023 • Arvind Narayanan
A misleading open letter about sci-fi AI dangers ignores the real risks
Misinformation, labor impact, and safety are all risks. But not in the way the letter implies.
Mar 29, 2023 • Sayash Kapoor and Arvind Narayanan
The LLaMA is out of the bag. Should we expect a tidal wave of disinformation?
The bottleneck isn’t the cost of producing disinfo, which is already very low.
Mar 6, 2023 • Arvind Narayanan and Sayash Kapoor
Students are acing their homework by turning in machine-generated essays. Good.
Teachers adapted to the calculator. They can certainly adapt to language models.
Oct 21, 2022 • Arvind Narayanan