OPINION: The rise of artificial intelligence (AI) as a potential threat or benign force is a topic of lively debate. Respected mainstream voices, like Stephen Hawking, Bill Gates and Elon Musk, are pointing to the dangers of AI.
On the other end of the scale, IBM’s “learning program” Watson is benefiting universities and life insurers, and the discussion has broadened beyond Skynet scenarios, serious though they are. A host of utopian and dystopian scenarios are coming into view as our understanding grows.
A prominent dystopian scenario is already unfolding in the concentration of power in the hands of elites, afforded by advanced automation technology. AI will accelerate this process by replacing still more jobs through successive waves of cognitive automation. Alongside the debate on autonomous weapons, this adds to the growing clamour to plan for a radically different post-work or post-capitalist economic reality.
But beyond eradicating taxi drivers and sales staff there are other profound effects afoot. AI promises to challenge fundamental notions of humanity: our biological needs, our social relations, our morality and also our aesthetics.
A lesson in humility
Carl Sagan described modern science as:
a voyage into the unknown with a lesson in humility waiting at every stop.
The discoveries that the Sun and Earth are not at the centre of the Universe, and that we are just evolved animals somewhere along the Tree of Life, take centre stage in this story, but Sagan anticipates another great “lesson in humility” in the understanding that whatever we do can be automated in machines.
Consider what effect automation might have on an activity like art-making, an activity that seems fundamental to the dynamics and character of our society.
There are two threads to this question.
The first is that our relationship to art is in constant flux. 20th century art interrogated ideas of authorship, identity and social function, showing these all as moving targets. The broader socioeconomic changes driven by AI will continue to influence this evolution.
In a post-work future, with most necessary work automated under proper common ownership, we may live out the oft-quoted vision of 18th century US president John Adams as a reality:
I must study politics and war that my sons may have liberty to study mathematics and philosophy […] in order to give their children a right to study painting, poetry, music, architecture …
Adams is typical in viewing creative arts as higher pursuits: virtuous, noble, and “useless” in the best possible sense, the sense intended by Oscar Wilde when he wrote:
We can forgive a man for making a useful thing as long as he does not admire it. The only excuse for making a useless thing is that one admires it intensely.
Add to this the second thread, that even artistic creativity itself can be automated, and the future becomes even harder to imagine. Enter computational creativity.
In the AI subfield of computational creativity we study the mechanisms by which computing technology can perform creative tasks, often in the arts. How can software create things of novelty, beauty, value and meaning?
This niche of AI can easily be sidelined as immaterial: flippantly, because there is no economic value in robots that make art; more pragmatically, because it deals with “ill-defined” problems that have no clear measures of success, let alone clear solutions. Indeed, it is reasonable to object that viewing art in terms of problems and solutions is itself problematic.
Computational creativity can also, instead, be revered as an epic frontier for AI. In the sci-fi cliché of the robot that can do everything except feel emotions and appreciate art we recognise a common presumption that artistic behaviour sits at the pinnacle of human intelligence and achievement: a sacred domain, virtuous, bettering, sophisticated, and in some sense definitively human.
Ray Kurzweil, the godfather of AI futurology, predicted in his 1999 book The Age of Spiritual Machines that by 2020 autonomous machine art would be prevalent, and that soon after, robot artists would “exceed” human artists in ability.
An often-heard complaint is that such predictions oversimplify art to resemble a well-behaved engineering problem, devoid of its social complexity. But unperturbed, a steady surge of work that is at least computationally creative in intention, if nowhere near human-like in its capability, is creeping into the mainstream, with AI characters influencing game narratives, artificial improvising musicians performing with real musicians, poetry-writing Twitter bots, and neural nets that “dream”.
A common distinction is made in the community between systems that exhibit some kind of reflective awareness about what they are doing, and the rest, referred to as “mere generation”. Although the former is the true goal of computational creativity research, the mass of work creating “merely” generative systems is building a formidable scaffold upon which new discoveries might be made.
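To make “mere generation” concrete, here is a minimal illustrative sketch (no real system is this simple, and the function names are my own) of a bigram Markov chain: it learns which words follow which in a corpus, then walks those statistics at random, with no goals, no self-assessment, and no awareness of what it is producing.

```python
import random

def train_bigrams(corpus):
    """Build a bigram table: each word maps to the list of words that follow it."""
    words = corpus.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, rng):
    """Walk the table, picking each next word at random from the observed successors.

    This is 'mere generation': fluent-looking output with no reflective awareness.
    """
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = ("the sun is not at the centre of the universe "
          "and the earth is not at the centre")
table = train_bigrams(corpus)
print(generate(table, "the", 8, random.Random(0)))
```

The output can look plausible, yet the system cannot evaluate, revise or explain it; that reflective layer is precisely what computational creativity research is after.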
Even though Kurzweil was some way off, not just quantitatively but also in the way he imagined change taking place, he is right in principle that there is no existential force-field separating artistic behaviour from any other kind of human activity that would place it out of reach of AI.
The artless robot is a myth begging to be disproved. Our evolved psychologies, grounded in adaptive social behaviour, can be unpicked, studied and modelled to reveal the ingredients that make up an artistic mind. That does not mean this is a simple task, achievable any time soon.
Even if it were simple, art is such a socially embedded thing that there is questionable power in framing the challenge purely in terms of individual minds. Cultural dynamics count for a great deal, and these are poorly understood; computational creativity is indeed an ill-defined and scarily multidisciplinary subject with a monumental task ahead of it.
With this in mind, practical computational creativity is not only about building artificial artists, but also about understanding the potential relationships between automated creativity systems and people. Researchers are looking increasingly at the usability issues associated with putting automated creativity systems into the hands of creative practitioners, from designing useful tools for supporting creative search, to getting art-producing machines to explain what they are “thinking”.
The future is therefore probably not one of human-like machines, but extraordinary machine-human hybrids with new approaches to creativity. As we explore this space we also have the potential not only to observe but to probe the phenomenon of creativity using the automated creativity systems we build.
Streamlining creative labour
Another popular expectation of computational creativity is that it might make art production available to all, not just a trained “elite”, by reducing the barrier to entry to zero. This has been the stated goal of various systems.
This may be a meaningful objective in some sense, particularly in the case of assistive technologies for the disabled, or rapid access to free content in a commercial world that seeks to push every efficiency to its limit (as any screen composer will know).
In the general sense that it makes art-making more accessible, this is perhaps an elusive goal: it has always been the case that we can all enjoy amateur art-making. Gaining social recognition, as well as drawing genuine satisfaction from one’s achievements, are more substantial challenges, as these outcomes are inherently established relative to what other people are thinking and doing.
But the more pragmatic aim to drive efficiencies in commercial creative production may well be big business. Programs like Melomics use AI techniques to mass produce copyright-free music on demand for commercial applications.
AI techniques are also being used to support, for example, the adapting of musical content to on-screen or in-game scenarios, and in the context of video games and interactive TV such automation can even be seen as a necessity, not a luxury, if the experience is generative and cannot be fully prepared in advance.
This inserts into our speculative future a host of interactive, immersive and self-generating artforms that look nothing like those we have today, and that are fundamentally founded on a creative partnership between human and computational intelligence, just as successive dominant forms are grounded in their era’s technology: from perspective painting to rock’n’roll to 3D animated movies. The next great generational shift might revolve around such technology’s implications for authorship and the social currency of creative effort.
Ambiguity at the heart of art
Much energy in this field has been devoted to trying to evaluate creative systems, or even trying to define what meaningful evaluation would entail. For the time being, news stories covering the field seem caught up in an overly simplistic “Turing Test view” of computational creativity.
Articles ask you to spot the machine art, and if you are unable to tell the difference, then to contemplate quite how sophisticated such systems are. This is questionable. It relocates Turing’s original test – which was based on linguistic interrogation – to a context for which it is not suited.
One problem is that the process of evaluation is rarely interactive, which means you can’t probe the behaviour of the system. Another problem is that art exists in a domain in which ambiguity is very often celebrated rather than avoided. Language can be ambiguous, but it can also be used with great precision to convey complex concepts, which is what makes the Turing Test meaningful.
The AI pioneer Marvin Minsky, speaking disparagingly of a recent linguistic Turing Test that he thought was poorly executed, suggested: “Ask the program if you can push a car with a string. And, if not, then, why not?” Language reveals complex understanding.
By contrast it has often been said that ambiguity – a sort of floating state of interpretability – is a functionally important part of art, music and poetry. How do we dig into creative intelligence in a language-free environment?
To confuse matters further, claims are frequently made that such-and-such a system is the first ever to create art, music or poetry “on its own”. Unpacking the “on its own” clause in such claims involves complex forensics. What knowledge has been put into the system by its maker? If it is a learning-based system, then what is the difference between mere regurgitation-with-variation, and something creative?
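One simple forensic check for the “regurgitation” end of that spectrum, sketched here purely for illustration (the function name and data are my own invention, not a standard tool), is to measure the longest run of words a system’s output copies verbatim from its training material:

```python
def longest_shared_run(training, generated):
    """Length of the longest run of words the output copies verbatim from training.

    A brute-force comparison: for every pair of starting positions, count how
    many consecutive words match, and keep the best count found.
    """
    t, g = training.split(), generated.split()
    best = 0
    for i in range(len(g)):
        for j in range(len(t)):
            k = 0
            while i + k < len(g) and j + k < len(t) and g[i + k] == t[j + k]:
                k += 1
            best = max(best, k)
    return best

training = "shall i compare thee to a summers day thou art more lovely"
print(longest_shared_run(training, "compare thee to a winters night"))  # → 4
```

A long shared run suggests recombination of memorised fragments rather than invention, though of course no single number can settle the question of where variation ends and creativity begins.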
Without dismissing the genuine and remarkable innovation that may be involved in such work, we should question such claims, especially when they are not supported by detailed disclosure of the system design. Where strong claims are being made, the standards of academic transparency should be no lower here than in any other domain.
For the empirical researcher – and the science or technology journalist, for that matter – an even more basic challenge to the Turing Test mentality is simply this: why ask a simple yes/no question, as the Turing Test does? Does that sufficiently reflect the rich complexity of the subject of study? Instead, conduct an expansive study of the creative and interactive affordances of the system. Use it to better understand the dynamics of creativity and aesthetic behaviour in humans.
For now, then, possibly the best take-home message for thinking about the future of computational creativity is to do as the artist does, and be prepared to be challenged: to rethink your assumptions and your understanding of what is fundamental to human nature.
Oliver Bown is a Postdoctoral Fellow in the Faculty of Art and Design at UNSW.
This opinion piece was first published in The Conversation.