Artificial Intelligence Report
Between June and November 2010, Robots and Avatars hosted a series of Lunch Debates bringing together diverse and specialised groups of professionals and experts to deepen the research and further the conversation around the Robots and Avatars programme and ask, ‘What sort of world are we educating our young people for?’ The lunch debates were designed to extend the understanding of young people’s needs for the future world of work and envision the skill-sets, aptitudes, resources and methodologies that will be required by today’s young people at work in 2020 onwards, given that many of the jobs they will do have not yet been invented.
The Lunch Debates invited experts from a variety of backgrounds, including academics, creative practitioners, industry professionals, artists and the public sector. The Artificial Intelligence Lunch Debate was attended by:
- Heath Bunting (Artist),
- Professor Kerstin Dautenhahn (Research Professor in the School of Computing, University of Hertfordshire),
- Constance Fleuriot (Research Associate at the UWE’s Digital Cultures Research Centre),
- Paul Granjon (Visual and Performance Artist),
- Hazel Grian (Writer and Digital Artist),
- Gary Hall (Professor of Media and Performing Arts, Coventry University),
- Paul Harter (Artist and Designer),
- Jody Hudson-Powell (Designer),
- Dr. David Levy (Author of ‘Love and Sex with Robots’),
- Katy Lindermann (Robotics Expert),
- Fiddian Warman (Director/Artist, Soda),
- Rich Walker (Shadow Robot Company)
The debate was moderated by Ghislaine Boddington (Director, body>data>space) and Benedict Arora (NESTA, Programme Director, Education).
Ghislaine Boddington began the debate by explaining that the group had been brought together to create a ‘visionary edge with tangible results’, underpinning Robots and Avatars’ commitment to deep, yet practical and accessible research.
The Artificial Intelligence (AI) Lunch Debate focused on the potential evolution of AI in robots and avatars. The aim was to envision the involvement of a mix of robots, avatars, tele-presence and real-time presence in the workplace and examine the consequences of integrating AI into future teams and working spaces.
Initially some of the themes of Robots and Avatars were outlined and the group was asked to consider ‘how the education system needs to evolve’ and ‘what the jobs and careers of the future would be’ in the context of a world where AI would take on an increasing presence in our daily lives. The group were encouraged to share their ideas of the skills, resources and aptitudes that could be integrated into the curriculum now, to help young people today prepare for their future careers. As the debate progressed a series of themes began to emerge, which included: sentience and emotion through AI, service and domestic use of AI, the influence of science fiction on AI and new pedagogical needs.
Views on Artificial Intelligence
Professor Noel Sharkey, the lively provocateur for the group, began by getting us to think forward to future workspaces as far ahead as 2066. He went on to outline his position on AI, describing himself as ‘an agnostic on the sceptical side’ and going on to say that ‘there is no evidence of sentience after working in the field for 30 years’. As such he called for a reassessment of how we commonly understand AI, defining it as ‘the science of making machines do things that require the intelligence of humans’. This is as opposed to attempting to make machines as intelligent as humans, which is now widely understood to be a futile avenue for exploration with our current technologies. Referencing Newton’s dictum and emphasising the importance of empirical evidence in scientific progress, Professor Sharkey quipped that ‘people can argue that there are tea cups orbiting the sun, but until it is demonstrated you can’t prove it!’.
Professor Sharkey’s view on AI is in stark contrast to writings from the 1950s, such as Isaac Asimov’s positronic robot stories—many of which were collected in I, Robot (1950) and promulgated a set of rules of ethics for robots and intelligent machines that greatly influenced other writers and thinkers in their treatment of the subject. With regards to classic sci-fi notions of AI and robotics, which imagine robots carrying out multiple tasks simultaneously, Professor Sharkey suggested that we should think of AI as a far more pragmatic concept. He pointed to the increase in specific-use service and domestic robots as a way of conceptualising how the field might develop. The robot vacuum cleaner, although by no means cutting edge, might be a more helpful way to understand the scope of current developments in domestic robotics.
Looking at the widespread use of robots in industry and the military which perform very specific tasks, it does not take too much of a leap to envision the integration of AI and robotics into domestic and work spaces. Some current future visions include robots as receptionists and robots that care for children, the elderly and disabled people. This is something that is already being pioneered in Japan due to its ageing population crisis. There are currently fourteen companies in Japan making robots to look after the elderly in care homes, and the “RAPUDA” robotic arm, which can be attached to wheelchairs, tables and other objects and extended to up to one metre to pick up objects, is already helping improve quality of life for disabled and elderly people there.
Professor Sharkey delved into the possibilities of synthetic biology, a mixture of organic and inorganic matter to create a possible new ‘cyborg race’. Citing sci-fi visions such as the film Surrogates and the Cyborg Beetle developed from a Pentagon-sponsored project at the University of California, Berkeley, Professor Sharkey shared his concept of the Robotar – a blend between a robot and an avatar – as a future vision which he thinks would be more likely than an autonomous, sentient robot. NASA’s Robonaut is one current example of the development of this hybrid technology, where sensory feedback technology, in particular, plays an important part in thinking about robotic avatars and led Professor Sharkey to coin the phrase, ‘I feel therefore I am’.
Relationships with Robots and Avatars
Professor Masaki Fujihata (Keio University, Tokyo, Japan) provides us with a view from Asia which approaches sentience from a different angle. He explains that:
“A robot is a machine which expresses interior information. It’s hard to argue that a robot is anything but an expression of its creator. A different creator will make a different robot. They won’t be the same. Of course its purpose and solutions will also be different.
So you can see that what I mean by “expressing interior information” is that a machine interprets and expresses its internal information to its exterior. That’s the difference between a robot and other machines. The algorithms that the robot uses to make its internal information abstract are simply one decision of the robot’s designer. In order to be distinguished as a unique “expression”, the robot must be able to go beyond the level of our current thinking about machines; such as learning how to cheat, or something…
So to clarify, robots, which are expressions of their creators, are devices to interpret information from their machinic interior into exterior expressions readily apparent to observers.”
Dr. David Levy, author of “Love and Sex with Robots”, provided another viewpoint and vision of AI, asserting that ‘we have lots of the technologies that are necessary for creating AI and things are developing very fast’. He optimistically suggested that ‘young people will get to witness high quality robots getting married, having relationships and doing all the tasks that humans can perform today’. In his book he goes further explaining that:
“By around 2050…Robots will be hugely attractive to humans as companions because of their many talents, senses and capabilities…Robots will transform human notions of love and sexuality. I am not suggesting that most people will eschew love and sex with humans in favour of relationships with robots, though some undoubtedly will. But what does seem to me to be entirely reasonable and extremely likely – nay, inevitable – is that many humans will expand their horizons of love and sex, learning, experimenting and enjoying new forms of relationship that will be made possible, pleasurable, and satisfying through the development of highly sophisticated humanoid robots.”
As the discussion continued the potential issues of ‘false’ relationships with robots arose as a central ethical and moral question when thinking about AI. Professor Noel Sharkey explained that you could be ‘loving an artefact that can’t love you back’ and questioned the ethics of a relationship that was based on illusion. For young people in the early stages of understanding how to form relationships this sort of ‘deception’ could be problematic in terms of social development.
Paul Harter suggested that ‘we already have these sorts of relationships with our computers’ and pointed towards Sherry Turkle’s book The Second Self: Computers and the Human Spirit, as well as ‘relational artefacts’ as an area of study examining this new form of companionship. It was acknowledged in the group that these concepts are regarded differently in different cultures, for example in Japan, where any object, including technology, is imbued with ‘spirit’ and is therefore potentially more invested in and perhaps more meaningful from the outset.
Paul Harter emphasised the importance of design in this context, explaining that ‘designers give them [computers, phones etc.] all the right clues to make sure that people relate to them’. The group went on to think about ‘Anthony’, an MIT student referenced in Turkle’s book, who ‘tried having girlfriends but found that he preferred relationships with computers…as he feels more confident…as he knows how the computer will react to him…’. Another example of our relationships with robots came from the military, where soldiers working in Afghanistan with bomb disposal robots find themselves putting the same robot back together if it gets blown up, because of the extent to which they invest in the ‘seeming presence’ of this inanimate object.
Suggesting that AI and robots are unlikely to replace humans, Professor Kerstin Dautenhahn explained that ‘robots are just another technology’ and that ultimately ‘people are interested in other people’. She explained that the desire within the robotics research community is to find out what the ‘useful applications’ are for robots, as opposed to how they can be human-like. It was generally accepted that, given robots are not good at being human-like, it would be more useful to ask ‘how robots and AI can help people?’.
Raising awareness about the ethics of AI
In terms of avatars, questions of authenticity and illusion were understood differently to robots by the group. Dr. Constance Fleuriot suggested that avatars afford anonymity and hence enable confidence. She shared the experience of her 11-year-old daughter, who found herself having no choice but to create a male avatar online because the clothes were more to her taste. Dr Fleuriot went on to explain that this sort of playful cyber-exploration enables young people to think progressively about ‘the person as opposed to the gender’. However this is not always the case; for example, the online youth social network Stardoll, which is aimed at girls, appears to operate using very strongly defined and traditionally constructed gender roles. In this context the group went on to debate and discuss what they saw as ‘hidden assumptions in software’ which are written into code by programmers and software designers – perhaps control over the ethics starts here?
This part of the debate had three main points: firstly, the group emphasised how important it is for young people to be aware of these limitations and boundaries inherent in software; secondly, how important open source software is in this context; and thirdly, that young people need to be equipped with the skills to understand and influence the architecture of the software and online resources that they might be using. Fiddian Warman emphasised the importance of ‘having open frameworks that allow you to input and build upon’ as a way to allow young people to explore, experiment and learn.
Towards the end of the debate the participants were asked to think about what would be useful for preparing young people for the fluid future that the debate suggested we are heading towards. Thinking about the future world of work, future careers in the context of behaviours and ethics with robots and avatars, they were asked ‘what [they] would take to the government or education sectors?’
David Levy suggested teaching computer programming and electronics in schools from a young age. Constance Fleuriot identified problems with the fact that it is impossible for young people to easily access or influence the architecture of games consoles and other handheld devices. The problem being, as David Levy said, that without access to the back end ‘you can’t get to the soul of the programme’.
New Pedagogies and Collaboration
The group also pointed out that conventional pedagogies are not sufficient to support new forms of learning, in particular with regards to robotics and avatars. Professor Gary Hall explained that our increased engagement with digital and virtual technologies is ‘changing the way that people learn’ and suggested that this was having neurological effects, meaning that ‘you can’t get young people to concentrate’ in the same ways as before. Challenging the ‘literacy model’, he called for new pedagogies which allow these new forms of processing – he suggested that ‘maybe it’s not about holding knowledge in your head?’
Katy Lindermann suggested that education does not really reward collaboration, despite the fact that this is how we tend to work in the real world, and Ghislaine Boddington went on to say that many current modes of assessment do not give room for collaboration or interdisciplinarity. Rich Walker told the group about the Opening Minds project, which placed the emphasis on ‘how to learn as opposed to know’.
Paul Harter talked about ‘learning through a goal driven process’ in order to help young people explore the idea that technological tools are there for their use. He suggested that if young people can imagine things and empower creativity in their learning, then there will be a technological solution for them. His emphasis was on giving young people ways to imagine possibilities through an open relationship with technology. This sort of interaction encourages the attitude that ‘these tools are there to be used’ and it’s only through their use that we will discover and innovate. Linked to this was Professor Kerstin Dautenhahn’s comment about how important it is for young people ‘to identify things that they are really passionate about’ and ‘that it really does not matter what it is’ because with passion comes exploration, creativity and innovation.
New forms of representation, such as robots and avatars, open out new models, new spaces and new modes of thinking for everyone who interacts with them. It was clear from the group that the ways in which we understand and conceptualise artificial intelligence in these contexts will have major implications for how today’s young people will shape and experience the future world.