Everything is affected by technology. How will the Living With 21st Century Technology Grand Challenge help us deal with some of that change?

Robert Brooks: We want to understand the change and the unanticipated ways in which we might have to change. We think often about the “tech” side of technology, but the social consequences are [also] part of what we’re interested in. For example, how has nearly everyone having a smartphone influenced the ways in which we relate to each other? Everybody has a strong opinion about those influences, but is it the same as our individual experience?

Lyria Bennett Moses: Part of it is understanding the impact of technology, but it is also the fact that we make the change. This University is a leader in bringing new technologies into being. So, just as we can talk about the impact of technology on other sectors, I think we can also think about what other disciplinary perspectives might offer to technology. 

What does the future of work look like?

LBM: The expectation in our society is that everyone works. Should that be the case if, in the future, everyone isn’t working? If that happens, how do we create value in lives? How do we give people roles that, in a sense, could potentially be infinite? There’s no reason to have a finite number of mathematicians or philosophers, for example. Should we put people to work in those areas?

That’s a big question – for many people, their identity is very tied up with the job that they do.

LBM: There are choices. Either we find work, and perhaps have far greater numbers of philosophers, mathematicians, musicians or actors than we have now, or we rethink leisure time. We might have a 20-hour work week and share work around, or societal wealth could support people whether or not they work.

This is where the Grand Challenges come in. To answer the question about work, for example, you need people who are expert in artificial intelligence (AI) who can talk about where the technologies are going [but] you also need sociologists, philosophers, experts in education. One thing I know: we can’t just keep things going as they are and not think about it. 

These conversations can be difficult, though, because politics often intrudes. 

Maurice Pagnucco: There is little dialogue in Australia, and other countries have taken the lead. I think this is the point of the Grand Challenges: to really step up and have these discussions that we aren’t having now.

RB: Some of the discussions can be had in a way that doesn’t polarise along traditional lines. There are some very strong, financially conservative arguments for a Universal Basic Income, for example. The chance to air those at UNSW, to explore them and pull them apart, is really exciting.

How do we make sure an ethical framework is a part of technological change?

LBM: Ethicists will tell engineers not to build a bridge that’s going to fall down. But I think the ethical discussion that needs to take place around AI is more complex and links into discussions around the future of humanity … how we create AI that makes the world a better place for humans. That requires asking different and much broader questions, rather than simply categorising streams of AI research as ethical or unethical.

RB: And this Grand Challenge isn’t just about AI, it’s about all types of technology. We have CRISPR gene editing, allowing us to make potent changes to an organism’s genome. Suddenly, all the things we thought were whacky science fiction are possible. This is technology we must use because it can deliver great good, but at the same time there are lots of nuanced decisions to be made.  

And who makes those decisions? 

LBM: It will vary by field. There are some decisions embedded in law, some are made by an individual when they’re choosing how to design something or which parts of research to pursue, and some are made institutionally. In some places there are mechanisms that bring the public into the debate. 

Maurice Pagnucco, Lyria Bennett Moses and Rob Brooks. Photo: Matteo G

How important is it for the community to be more informed and involved in decision-making?

LBM: Both are important but also challenging. Technology can enable engagement. It’s not just about how many people you can fit in a room – we can have platforms through which people can comment. A multi-pronged approach is necessary to tease out what people care about, but recognising the diversity of those views is also important.

MP: I think education needs to be part of it too. You want people to be engaged in the decisions being made, but they need to be educated in what technology does and the social issues that need to be considered. Which is where the Grand Challenges come in.

RB: One of the things about the Grand Challenges is having discussions we think the world needs to have rather than those the media might want because they get lots of clicks. If you don’t have the sensible discussion and educate [the] public about a big issue like this, sooner or later the public will get involved and they’ll have pitchforks. 

One of the other Grand Challenges is Inequality. In a time of technological change, how do we ensure people don’t get left behind? 

LBM: We often talk about the digital divide or the genetic divide – who can afford to make the best genome – so we worry about inequality in terms of technology. But technology is only one strand. Conversations about the future of work bring up questions about the potential for this to turn into a dystopia of vast inequality. But there’s not much point just looking at technological divides without seeing them in the context of how they arose from educational divides or wealth inequality. 

RB: Another type of inequality we haven’t yet considered in our Grand Challenge is what’s neurotypical. That’s a polite way of saying people who have different mathematical abilities, for example. The stereotypical mathematical introvert may well be the plutocrat of the next generation. Two thousand years ago it was the big and brawny and aggressive who were able to corral all the resources and build lineages. Now it’s a completely different type of person. Where does that leave us?

Where does the human factor fit into this technological future?

MP: There are some key researchers who believe we’ll never have autonomous vehicles on the road because there are just too many possibilities for errors. We tolerate humans having accidents and killing one another but we wouldn’t tolerate that with technology. I don’t think we’ll have autonomous vehicles; I think we’ll have humans augmented with machines making the decisions.

LBM: If we’re going to talk about how we live with 21st century technology, we have to ask, what can technology do? What role do we want to have in a world where humans can manipulate their own genetic makeup and can automate processes including simulating intelligence and simulating various forms of creativity? What room is there for something that hasn’t been programmed or designed and what role do we want to maintain for that thing being us?