AI, creativity, and self-worth

AI-generated cyborg concept art

I’ve been messing around with the Stable Diffusion AI image generator via Dream Studio. It’s quite mind-blowing. The image above was generated with the keywords, “One red eye cyborg concept art.” It creeps me out a bit, and I don’t actually like using it here that much. For better sci-fi pieces, check out this algorithm-generated art. Anyway, after spending half an hour feeding keywords into this beast, I started to view my own impulses and thought patterns more objectively. I also began to think differently about my kids’ development, as they are clearly building experiential datasets and learning how to process them.

Seeing visual experience generated to order, in a matter of seconds, raises the question: what is authentically real? On one level, we know that the brain takes raw data and extrapolates it to create a world. Now that we see machines doing something similar, the constructed nature of experience hits closer to home. On another level, detail is no longer the yardstick of reality. Machines can, in theory, dream up experiences of incredible detail and, possibly one day, coherence. The upshot of all this, I think, is to undermine our whole concept of reality. Nothing in experience comes with a telltale signature that says, “this is real.” Anything that can be constructed can be reconstructed. Perhaps there is no “reality”, only experience.

What’s most surprising, though, is how creative tasks might cede territory to machines sooner than we thought. That might put the value of creative roles into perspective. We laud artists, actors, and writers; we aspire to be like them. I’ve put a lot of self-worth into creativity over the years, but aren’t we all just algorithmically sifting datasets and synthesising ideas to produce works? How long will it be until something like the GPT-3 language model inhales the world’s literature and craps out a genuinely great novel? We are clearly not there yet, but let’s suppose it happens. At first, human editors (like yours truly) will be needed to apply a lot of quality control, but perhaps not forever. Maybe we’ll prefer books written by humans, with human intent and human experiences. But how will we be able to tell? We already have a machine-generated summary of recent research on lithium-ion batteries. BuddhaNexus is already mapping Buddhist suttas, and I’m quite keen to see it used as a tool for understanding this vast corpus of ancient literature. And if we can generate images today, why not whole films tomorrow?

Part of me finds a kind of relief in thinking, “Well, Shakespeare wrote all of those important plays, but in X number of years an AI might outdo him.” We could get upset about that, or we could let go of our ideas about what is really important. After all, wasn’t Shakespeare himself sifting ideas from other plays and poems? Controversially, some sections of his plays might even have been written by other poets: an example of distributed intelligence, perhaps. In any case, we place an awful lot of self-worth in our ability to create and produce, and perhaps not enough in simply being.

I think the rise of the machines shows us that our values and priorities are somewhat askew. So let’s shift the goalposts on our silicon overlords. If it isn’t our creativity, our art, or our productivity, what really gives human life value? Where should we place our self-worth? Love is one thing, and it would still give our lives meaning even if, one day, “artificial” intelligences are also able to love, and with greater capacity. Another thing we can only do for ourselves is personal development. All the other achievements are fine, but our great project, I think, should be to become more aware and compassionate.

To learn more about the state of AI image generation, check out this Digital Foundry article.
