I haven’t written in long-form about tech industry trends in years. Largely because I’m lazy and it’s easier to hot-take (you can find my thoughts on social media, usually), but also partially because a lot of the time I feel like the trend doesn’t deserve long-form writing.
I’ve found recent trends to be very exhausting.
But it occurs to me that if I write my thoughts up at a stable URL I own, I can direct recruiters to them when explaining why I don’t want to work at the company they’re hiring for. And that seems useful enough, so I figured I’d start with AI.
I don’t want to work on AI. I don’t even want to work at a company that’s enthusiastic about using AI, if I’m being honest.
This probably deserves a bit more nuance.
It’s hard to say “I don’t want to work on AI” as a blanket statement because “AI” is, as trends have been lately, very loosely defined. It’s more of a brand than a defined thing, and I have my suspicions that’s intentional.
But I have a lot of experience working on automation and having machines make decisions without a human in the loop, which some would call “AI” if you squint or need to raise a new round of funding. I’m happy to work on that again. So let’s try and set parameters on what I have no interest in working on.
Generative AI
The most common AI people want me to work on these days is “generative AI”, where a machine tries to create something that looks like something a human would create. The ChatGPTs and image generators and Copilots and so on.
I do not want to work on these things. I do not want to do work that involves these things. I am completely uninterested in them as tools.
If I’m being very honest with myself, the primary reason is that I don’t connect with any of the problems they’re trying to solve, or don’t believe they’re the right tool to solve them. I am suspicious that the problem they’re usually trying to solve is “getting investor attention”.
Beyond that, I have four main complaints about generative AI.
The datasets it’s trained on.
Generative AI companies tend not to have the informed consent of the creators of the data in the datasets their models are trained on, because those models take a lot of data and it’s hard and expensive to build that large a dataset while getting informed consent.
In a different world, with a flourishing public domain and a robust solution to compensating the people that contribute to it, I’m actually really okay with this. In the world we have, where copyright is our answer to how creative people don’t starve, I am not okay with this.
I do find it slightly ironic (but no less predictable) that the capital class that murdered the public domain in cold blood through an expansive copyright regime now believes that an expansive public domain has value. Public domain for thee, copyright for me, as it were.
The illusion of intent.
I fundamentally believe if you want people to treat something as distinct from another thing, you must make it appear distinct from that other thing. Things that look similar will be treated as similar.
More concretely, if you make a machine look like it’s answering questions (or writing code, or making art), people will assume it’s answering questions (or writing code, or making art) with some intention and rationale, even if what it’s doing is coming up with something that is statistically similar in shape to an answer to that question (or to the code or art you asked for).
I think that confusion (and its ramifications) is on you, for making it appear as though a machine had intention and reason, when really what it had was pattern recognition.
The waste.
Look, training and running these models is a power-hungry process. My state catches fire every summer and my air is unbreathable for at least a few weeks a year. I am disinclined to work on power-hungry things unless their benefits can justify the power spend. Generative AI is on the wrong side of that discriminator, for me.
The impact on workers.
I am generally pro-automating jobs away. I think every time we train a computer to do something that used to require a person, humanity gets a little upgrade. Replacing the need for human effort with some sand is really cool!
I think capitalism makes this a tough viewpoint to hold, because capitalism wants humans to always be doing work, and will get human-murdery if humans are not doing work. I am pretty opposed to allowing capitalism to get human-murdery.
So there’s some nuance there. It’s probably too nuanced to get into in this post.
But I generally try to make sure that the work I’m automating away will make it easier for people without capital to do things, and I won’t help you automate a profession into being untenable until there’s a clearly-articulated and already-underway plan in place for how those displaced workers won’t be eaten by capitalism.
Where generative AI runs afoul of this is that it generally makes it easier for people with capital to gain more power, without making it easier for people without capital to challenge them. It’s generally turning entire professions—writer, artist, etc.—into professional AI-correctors, because AI-correctors are cheaper than artists and writers.
That’s not the role I want tech to play in my future. I want tech to help people do things; I don’t want people to help tech do things.
Classification and Recognition Algorithms
I’m generally very happy to work on your algorithm startup. I like that my mail server classifies my incoming mail for me. I like that my photo server can detect my family’s faces in our pictures and sort them for me. Those are very useful things that technology does for me.
This is pattern-matching in its appropriate context: you’re not pretending a computer can think, you’re openly admitting you’re just detecting similar things.
I am less happy to work on these things when we pretend they’re some source of truth or can make decisions. Even IBM (famed for their understanding of ethics in computing) understood that a machine can never be held accountable, and therefore must never be entrusted to make a management decision.
This gets gray sometimes.
It is arguably a decision to route an email to a spam folder. But the consequences of that are usually relatively minor, and mistakes are easily corrected. I am generally okay with doing that thing. Using algorithms to decide where police officers should patrol, or who is likely to commit a crime, has much higher consequences. I will not do that thing.
Automation
I am pretty happy working on your automation startup. I’m defining automation as a human encoding a decision, in the form of “when this happens, do this”, so that the human-made decision can be applied repeatedly to new inputs that match it. The key characteristics of this approach are that a human is making the decision and that the process is deterministic. Because it is deterministic, it can be debugged, explained, and evolved with intent. A handy way to tell whether I consider something automation: there are only two possible answers to the question “why did it do X?”, and they are either “it’s a bug” or “[someone’s name] decided it should do that specific thing.”
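As a small sketch of what I mean (the rules, folder names, and people here are entirely hypothetical), automation in this sense looks like a list of human-authored rules, each recording who decided it:

```python
# Deterministic "when this happens, do this" automation: every rule is a
# human-made decision with the decider's name attached, so "why did it
# do X?" is always answerable as either "it's a bug" or "<name> decided
# it should do that".

def route_email(subject: str, sender: str) -> tuple[str, str]:
    """Deterministically pick a folder for a message; returns (folder, why)."""
    rules = [
        # (predicate, folder, the human decision behind the rule)
        (lambda subj, frm: "unsubscribe" in subj.lower(),
         "newsletters", "Sam decided bulk mail goes to newsletters"),
        (lambda subj, frm: frm.endswith("@work.example"),
         "work", "Alex decided work mail gets its own folder"),
    ]
    for predicate, folder, why in rules:
        if predicate(subject, sender):
            return folder, why
    return "inbox", "no rule matched, so the default applies"
```

The point of the `why` string is the debuggability described above: the system can always explain its behavior in terms of a specific person’s specific decision, which a statistical model cannot.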
I will find some things that you’re trying to automate objectionable, and will not do them, but it is not the automation itself I find objectionable.
I am generally a fan of reclaiming human time in this way.
Putting Labor Abuses Behind an API
Perhaps the most common form of AI is hiding low-cost workers behind a machine interface. This usually looks like paying workers subsistence wages to make decisions or massage computer-generated content, while making sure clients never have to see or interact with them.
I will not work on this thing, and I will think less of you for asking me to. Asking me to work on a thing like this makes it likely I will decline to work on anything else you want me to work on, at any point in the future. This represents such a mismatch of ethical standards and vision for how technology should be integrated into our world that we are not and will never be a good fit to work together.
That’s my attempt at disambiguating this whole silly trend. I look forward to the day when people stop showing up in my inbox asking me to help them with it. But people are still showing up asking me to work on their cryptocurrency startup, so I think it’s going to be a hot minute.