DPhil the Future: AI for bad?

[Banner image: "DPhil the Future – our students are 100% part of our success. DPhil the Future is our way of giving our students a platform to share their insights and views on all things computer science."]

With the AI Safety Summit underway at Bletchley Park, we asked DPhil student Vincent Wang-Maścianica for his insights into AI for the next generation, and whether it is all good.

In an increasingly information-rich future, AI systems can be guardians and navigators for our children, or they can be wardens and instruments of advertising and surveillance that devalue human agency and flourishing. We don’t know what a path towards good outcomes looks like in detail, so we talk instead about what to avoid doing.

Let us go a step further and think about actions we should condemn, not just avoid: What would it look like if someone or something were trying to prevent good outcomes for AI? To answer that question, here is an imaginary list of brief recommendations for saboteurs.

Concentrate control over AI. Aim for a single point of failure.

Machine learning turns data into practical computation, just like a steam engine turns chemical energy in fuel into movement. Both drive carriages, commerce, and culture. Pick someone to put in charge of all that.

Make theory complex. Keep it inaccessible.

Revolutions are about context, not depth. No amount of technical understanding of a steam engine can predict how rural workers will move to cities. To understand, specialists must communicate, so don't let them.

Eliminate generalists. Make it hard for them to operate between fields.

Maintain mutually alien values and perspectives between groups by preventing bonds. Academia will seem useless, arcane, guildlike. Builders in the arena of commerce will only heed the blind logic of progress. Governance will continue to be glacial and reactive.

Let pessimists dominate the discourse. Encourage polarisation.

Define everyone else in opposition to those who believe AI will kill us all. Alienate, sideline, and repel optimists and the highly agentic. Encourage black-and-white thinking: AI must be good or bad, big or small, slow or fast, and so on. Make people take sides, force debates, and paralyse everyone with abundant information and choice.

Define initiatives narrowly. Leave nothing to chance.

Gatekeep resources by demanding concrete outcomes in advance. Survival in revolutionary times requires flexibility and openness to opportunity, so ensure a rigid bureaucracy. Remember that serendipity is worthless because it is not quantifiable, so don't waste time establishing new spaces for culture formation between different groups.

Embrace stasis. Reject motion.

The extent to which we are in crisis is also the extent to which our circumstances are novel and our institutional expertise is inadequate. Assume that credentialed expertise is a direct indicator of good judgment in a new zeitgeist. Convince others that keeping everything the same as it was before will work.

Think in terms of institutions, not people.

Let corporations aggressively replace workers with AI in the name of profit. Let schools ban students from using language models on homework in the name of educational standards, denying the next generation guided exploration of the technology as a tool to structure information. Ask not what institutions can do for people but what people can do for institutions.

Take it all very seriously.

Wonder and awe are childish emotions. To explore and to play are the enemy of understanding. Since these are often the drivers of technological, scientific, and cultural advancement, we must be on guard against such embarrassing states of mind. Remember that it is more important to do things correctly than it is to do things right.