Robots

All posts relating to robots.

 

‘The rise of the robot’ on BBC Business Daily

‘The rise of the robot’ was explored on BBC Business News / Business Daily on 31 December 2012.

“Justin Rowlatt meets Ghislaine Boddington, creative director of body>data>space, a company which specialises in how new technology can improve the way we communicate.”

Download here: http://www.bbc.co.uk/podcasts/series/bizdaily/all

 

‘America’s mindless killer robots must be stopped’ by Noel Sharkey

Read Robots and Avatars’ project champion Noel Sharkey’s opinion column on the mindless use of robots in war in this week’s edition of the Guardian (3 December 2012).

‘The rational approach to the inhumanity of automating death by machines beyond the control of human handlers is to prohibit it

Are we losing our humanity by automating death?

Human Rights Watch (HRW) thinks so. In a new report, co-published with Harvard Law School’s International Human Rights Clinic, they argue the “case against killer robots”. This is not the stuff of science fiction. The killer robots they refer to are not Terminator-style cyborgs hellbent on destroying the human race. There is not even a whiff of Skynet.

These are the mindless robots I first warned Guardian readers about in 2007 – robots programmed to independently select targets and kill them. Five years on from that call for legislation, there is still no international discussion among state actors, and the proliferation of precursor technologies continues unchecked.’

Read the article.

 

EXHIBITION: Robots & Avatars, our colleagues & playmates of the future

Visions of Our Communal Dreams (work-in-process) by Michael Takeo Magruder


Join us in a near future where robots, avatars and telepresence form part of an exciting new reality.
From pervasive networked gaming to robots that teach, touch, care or scare, robots and avatars already cohabit the world in which we work and play. Robots and Avatars is an intercultural, intergenerational and interdisciplinary exploration of a near-future world of collaboration between robots, avatars, virtual worlds, telepresence and real-time presence within creative places, work spaces, cultural environments, interactive entertainment and play spaces.

“I love the way Robots and Avatars is bringing together a beautiful diversity of people, exploring new paradigms, with unexpected and inspiring results. Exciting! Brave! Fun!”

Pear Urushima, Marketing Guru, Apple Inc.

The Robots and Avatars Exhibition will feature cutting-edge art pieces, including three commissions with the National Theatre: “Sociable Asymmetry”, “The Blind Robot” and “Visions of Our Communal Dreams”. It will present a mixture of robotics, online and wearable projects, immersive installations, performances and films. Attached to the Exhibition are high-quality debates and workshops, playfully didactic experiences that will enable visitors of all ages and levels to interact and engage with digital technologies. The Robots and Avatars Jury selected the works that make up the project’s three exhibitions through a high-level, widely publicised Call for Proposals process. The exhibition will premiere at FACT, Liverpool (UK) from 16 March to 27 May 2012, before travelling to AltArt, Cluj-Napoca (Romania) in June/July 2012 and KIBLA (Slovenia) in 2013. Read more about the exhibition at FACT here.

“The Robots & Avatars project has been one of the first to bring robotics and avatar professionals together and provide a creative atmosphere with artists and designers. This is such a wonderfully interdisciplinary grouping that I feel compelled and proud to support it.”

Noel Sharkey, Professor of AI and Robotics at the University of Sheffield (Department of Computer Science) and EPSRC Senior Media Fellow (2004–2009).

The Robots and Avatars exhibition in the UK is co-produced by body>data>space (London) and FACT (Liverpool) in collaboration with the National Theatre (London). European co-organisers are KIBLA (Maribor, Slovenia) and AltArt (Cluj-Napoca, Romania). The project was conceived by lead producer body>data>space in association with NESTA, with the support of the Culture programme of the European Union. Find the Selected Projects list here.

Visions of Our Communal Dreams (work-in-process) by Michael Takeo Magruder with Drew Baker, Erik Fleming and David Steele, 2012. Image © 2012 Takeo.

Cynthia Breazeal: The rise of personal robots

In this great TED talk, Cynthia Breazeal expands on one of the recurring themes of Robots and Avatars: the rise of personal and domestic robots and the implications this may have for young people in particular. As a grad student, Breazeal wondered why we were using robots on Mars but not in our living rooms. The key, she realized, was training robots to interact with people. Now she dreams up and builds robots that teach, learn and play. Watch for amazing demo footage of a new interactive game for kids.

Cynthia Breazeal founded and directs the Personal Robots Group at MIT’s Media Lab. Her research focuses on developing the principles and technologies for building personal robots that are socially intelligent—that interact and communicate with people in human-centric terms, work with humans as peers, and learn from people as an apprentice.

She has developed some of the world’s most famous robotic creatures, ranging from small hexapod robots to highly expressive humanoids, including the social robot Kismet and the expressive robot Leonardo. Her recent work investigates the impact of social robots on helping people of all ages to achieve personal goals that contribute to quality of life, in domains such as physical performance, learning and education, health, and family communication and play over distance.

 

Virtual/Physical Play Robot

Here is an interesting article from Gizmag about the Playtime Computing system developed by MIT Media Laboratory’s Personal Robots Group. The system blends robotic and virtual interfaces and is currently designed for children between the ages of 4 and 6. This blend also allows the device to be used for telepresence, enabling play and learning to occur in real time across continents.

Children playing with the Playtime Computing system

As Alphabot passes through a hole in the display panel, it appears to continue its journey through the virtual world projected onto the panels. Image Source: Gizmag/MIT

In an increasingly tech-centric world, keeping kids interested in learning can be an uphill battle. With teaching that involves play recently attracting some powerful supportive voices, students from MIT’s Media Lab have developed a system which merges technology and play to stimulate young minds. The Playtime Computing system uses infrared emitters and tracking cameras to monitor the position of a special robot within a play area. As the bot disappears into a hole in a panel, it appears to continue its journey into a virtual world projected onto the walls.

The Playtime Computing system developed by MIT Media Laboratory’s Personal Robots Group is aimed at children between 4 and 6 years old and allows them to get up and about instead of sitting around and getting bored, a hot topic at the moment given Michelle Obama’s Let’s Move campaign. It also allows for early experimentation in such things as symbolic reasoning and social roles.

The system is made up of three panels with projectors behind them, and a set of four ceiling projectors for sending images to the play area floor. Alphabot, a cube-shaped robot with infrared emitters at its corners, is tracked by ceiling-mounted cameras. A virtual landscape is projected onto the panels and floor to blur the barriers between reality and the artificially-created world. To further add to the illusion, as Alphabot disappears into a hole in the panel and some robotic foliage closes behind, the image projected onto the panel appears to show it continuing its journey into the virtual world.

A set of RFID-tagged wooden alphabet letters or symbols such as musical notes was also created so that the children can stick them onto Alphabot’s face. Placing letters onto the bot results in its face changing color to match, with musical notes causing music to be played through its onboard speakers. As the robot disappears into the virtual world beyond the panel, the symbol placed by the kids will also continue through to the animated version.
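To make the mechanics above concrete, here is a minimal Python sketch of the kind of control loop the article describes: ceiling cameras report Alphabot’s position via its corner-mounted infrared emitters, RFID tags on its face trigger colours and sounds, and crossing into the portal region hands the robot off to the projected world. All of the interfaces (tracker, robot, projection) are hypothetical stand-ins for illustration, not MIT’s actual code.

```python
import time

# Hypothetical region of the panel hole, in floor coordinates (metres).
PORTAL = {"x_min": 2.8, "x_max": 3.2, "y_min": 0.0, "y_max": 0.4}

# RFID symbols -> (face colour, optional sound), per the article's description.
SYMBOL_ACTIONS = {
    "A": ("red", None),
    "B": ("green", None),
    "note_C": ("blue", "c_note.wav"),
}

def in_portal(x, y):
    return (PORTAL["x_min"] <= x <= PORTAL["x_max"]
            and PORTAL["y_min"] <= y <= PORTAL["y_max"])

def play_loop(tracker, robot, projection):
    """tracker: IR blob positions from ceiling cameras; robot: physical Alphabot;
    projection: renderer for the wall panels and floor. All assumed interfaces."""
    physical = True
    symbol = None
    while physical:
        x, y = tracker.read_position()        # Alphabot's corner-mounted IR emitters
        tag = robot.read_rfid()               # wooden letter or note on its face
        if tag in SYMBOL_ACTIONS:
            symbol = tag
            colour, sound = SYMBOL_ACTIONS[tag]
            robot.set_face_colour(colour)     # letters recolour the face...
            if sound:
                robot.play_sound(sound)       # ...musical notes also play audio
        if in_portal(x, y):
            # The robot has driven through the hole: its journey continues as an
            # animated avatar, carrying whatever symbol the children attached.
            projection.spawn_avatar(position=(x, y), symbol=symbol)
            physical = False
        time.sleep(1 / 30)                    # ~30 Hz tracking update
```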

International playtime

The fun needn’t stop with just one play room, however. “One of the things we’re really excited about is having two of these spaces, one here and maybe one in Japan, and when the robot goes into the virtual word here, it comes out of the virtual world in Japan,” explained the group’s Adam Setapen. “So that kind of fits in with that one-reality concept, that there’s one robot, and whether it’s physical or virtual is based on the state of the robot in the Playtime Computing system.”
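The “one reality” rule Setapen describes can be summarised in a few lines of state. The sketch below is purely illustrative (site names and methods are invented): a single robot identity is physical at exactly one site at a time, and entering a portal flips which playroom’s twin robot is live.

```python
class OneReality:
    """One robot identity shared by twin Alphabots at two (or more) sites."""

    def __init__(self, sites):
        self.sites = list(sites)        # e.g. ["Cambridge", "Tokyo"]
        self.physical_at = sites[0]     # where the identity is currently a real robot

    def entered_portal(self, site):
        """The robot at `site` drove into the panel hole and went virtual."""
        if site != self.physical_at:
            raise ValueError("only the physical robot can enter a portal")
        # The identity re-emerges from the virtual world at the partner site,
        # where the twin robot there drives out of its own portal.
        others = [s for s in self.sites if s != site]
        self.physical_at = others[0]
        return self.physical_at

robot = OneReality(["Cambridge", "Tokyo"])
print(robot.entered_portal("Cambridge"))    # -> "Tokyo"
```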

Of course, kids being kids, the young prototype testers crammed lots of different symbols onto the bot, which it wasn’t developed to handle. They also expected other objects placed in the hole to appear on the screen. Future developments of the system may well take such things in stride, with children perhaps being able to send a favorite toy into the virtual world.

Maybe it would also be interesting to see how they would deal with a digital twin!

Another aspect of the system is the Creation Station, a table-top computer where youngsters can arrange objects or draw pictures. Whatever is on the Station is recreated on the panels via the projectors.

The researchers also kitted out the playful system testers with baseball caps sporting infrared emitters. This allowed the system to keep track of the kids as well as Alphabot, and could enable interaction with the computer-animated robot in future versions. If the team can develop the system to operate using something like Microsoft’s Kinect gaming technology, then players could be tracked without having to rely on infrared clothing.

The team says that the current prototype was put together from off-the-shelf parts at a cost of just a few hundred dollars, and believes that mass production for home use is a viable possibility.

Source: Gizmag

 

Outrace – Robots in Trafalgar Square

Outrace robots projecting light messages into the air in Trafalgar Square

Credit: Outrace

This year, as part of the London Design Festival, the public were invited to take control of eight industrial robots on loan from Audi’s production line. OUTRACE is an installation by Clemens Weisshaar and Reed Kram that consists of six independent systems coordinated by one KWTC CONTROLLER. Messages were sent in by the public via a website and then processed by the system every 60 seconds.

A powerful LED light source positioned at the tool head of each robot traced people’s messages into the public space of Trafalgar Square. Long-exposure cameras captured the interactive light paintings and relayed them to the project website and social media platforms to be shared.
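As a rough illustration of that pipeline, the sketch below (all names are hypothetical; this is not the installation’s actual controller code) dequeues one submitted message per minute, converts it to polyline strokes via a simple stroke font, and traces each stroke with the LED lit — exactly the behaviour a long-exposure camera would record as writing.

```python
import queue
import time

GLYPH_WIDTH = 0.5   # assumed horizontal spacing between letters, in metres

# A stroke font maps characters to polylines; only "H" is sketched here.
STROKE_FONT = {
    "H": [[(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)],        # left upright
          [(0.0, 0.5, 0.0), (0.4, 0.5, 0.0)],        # crossbar
          [(0.4, 0.0, 0.0), (0.4, 1.0, 0.0)]],       # right upright
}

def text_to_strokes(text):
    """Translate a message into robot-space polylines, one glyph per slot."""
    strokes = []
    for i, ch in enumerate(text.upper()):
        for line in STROKE_FONT.get(ch, []):
            strokes.append([(x + i * GLYPH_WIDTH, y, z) for (x, y, z) in line])
    return strokes

def run(messages, controller):
    """messages: queue.Queue of website submissions; controller: assumed robot
    interface offering led() and follow_path()."""
    while True:
        msg = messages.get()                  # next message from the website
        for stroke in text_to_strokes(msg):
            controller.led(True)              # light on: this segment is "ink"
            controller.follow_path(stroke)    # arm traces the polyline in the air
            controller.led(False)             # light off while repositioning
        time.sleep(60)                        # one message processed every 60 seconds
```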

Robots and Avatars sent in a message to OUTRACE which was shown at the rather unsociable time of 7.08am! Here is the video of the drawing – see if you can work out what we sent…

 

Robonaut Tweets

We have already blogged about Robotars – humanoid robots that are controlled virtually from a remote location – and NASA’s efforts in this field are developing further with their Robonaut 2.

At the recent Artificial Intelligence Lunch Debate, the diverse group of experts discussed the implications of this sort of blended reality, particularly in relation to sensory feedback technology, which gives users a more heightened, tactile experience and provides new, more tangible ways of behaving through and with new representational forms.

Commenting on the problems with traditional understandings of artificial intelligence at the Lunch Debate in June, Professor Noel Sharkey suggested that with robots and avatars we should not be saying “I think therefore I am” but instead, “I feel therefore I am”.

Daily Galaxy has a great article on Robonaut 2, reproduced below:

NASA’s Robonaut 2, or R2, is getting ready to work on the International Space Station in November, but it’s already tweeting about preparations under the account @AstroRobonaut.

The humanoid robot — complete with a head, arms and an upper torso — will be the first dexterous humanoid robot in space and it assures its followers in one of its first tweets alluding to 2001: A Space Odyssey that, “No, no relation to Hal. Don’t know if I’d want to admit to having him on my family tree if I was. [Definately] don’t condone his actions.” It also tweeted that it’s not related to Boba Fett.

Is this another vivid sign that we have entered the dawn of the age of post-biological intelligence?

Although there are already several robots in space — including the famous, now AI-enhanced, Mars Rovers, which have been zipping around the red planet for years — NASA and G.M. have created the first human-like robot to leave Earth.

The robot is called Robonaut 2, or R2 for short, and it weighs in at 300 pounds, with a head, torso and two fully functional arms. At first, R2 will be monitored in space to see how it performs in weightlessness, but NASA hopes to eventually use R2 to assist astronauts during space walks and to work alongside engineers in the space station.

In a joint news release, John Olson, director of NASA’s Exploration Systems Integration Office, said, “The partnership of humans and robots will be critical to opening up the solar system and will allow us to go farther and achieve more than we can probably even imagine today.”

According to researchers on the project, “Robonaut systems are the first humanoids specifically designed for space.”

Robonaut is a collaboration between the Robot Systems Technology Branch at the NASA Johnson Space Center and the US military’s Defense Advanced Research Projects Agency (DARPA) to build a robotic ‘astronaut equivalent’. Robonaut looks a bit like a human, with an upper torso, two arms and a head – all controlled by a human operator through telerobotic technologies. Robonaut was designed with the concept of creating a robot for tasks that ‘were not specifically designed for robots.’ In order for the Robonaut to complete these ‘human-like’ tasks, it is equipped with hands that are actually more dexterous than those of an astronaut in a pressurized spacesuit.

In 2004, the second generation of Robonaut gained mobility when engineers attached its body to a Segway Robotic Mobility Platform (RMP) commissioned by DARPA. Using virtual reality instruments, a human operator was immersed in the Robonaut’s actual environment and was able to perform remote operations.

According to researchers on Robonaut, “As the project matures with increased feedback to the human operator, the Robonaut system will approach the handling and manipulation capabilities of a suited astronaut.”

With more ‘haptic technology’, which uses sensory feedback to recreate the sense of touch, a user might wear gloves that allow them to ‘feel’ objects in a virtual world. You could examine the texture and weight of rocks, or even experience the crunch of icy Martian dirt.
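A bilateral teleoperation loop of the kind described — operator pose out, contact forces back — can be sketched in a dozen lines. This is an illustrative skeleton only (the glove and hand interfaces are invented), not NASA’s Robonaut software.

```python
import time

def teleoperate(glove, robot_hand, rate_hz=100):
    """Bilateral (two-way) teleoperation: pose commands out, forces back.
    `glove` and `robot_hand` are assumed hardware interfaces."""
    period = 1.0 / rate_hz
    while True:
        # Forward channel: the operator's finger joint angles drive the robot.
        robot_hand.command_joints(glove.read_joint_angles())

        # Return (haptic) channel: fingertip force readings are replayed on the
        # glove's actuators, so the operator "feels" what the robot touches.
        glove.apply_forces(robot_hand.read_fingertip_forces())

        time.sleep(period)   # fixed-rate loop; real systems compensate for latency
```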

Dr Grace Augustine’s Avatars on Pandora go well beyond current technologies. We’re not going to be growing any biological avatars for human explorers in the lab – but modern robotics is getting close to providing a ‘human’ experience through increased dexterity and mobility. Robotic avatars could allow humans to fully experience the environment of other worlds. Through the eyes of robotic avatars we could watch the sunrise over the rusty, red crater rims without having to “experience suffocation, the icy death of -200 degrees C on their skin or the sting of microscopic dust in their eyes.”

Even though NASA and others have come a long way in developing avatars, the technology still has a long way to go before we’re having adventures on Pandora-like planets. Perhaps more advanced civilizations on distant worlds have developed avatars just as good as those in the movie.

R2 will be a passenger on the Space Shuttle Discovery, which is scheduled to head to the space station in September.

Source Credits:

http://www.dailygalaxy.com/my_weblog/2010/08/robonaut-2-the-first-humanoid-tweeting-from-space.html

http://www.spacedaily.com/reports/Avatars_In_Space_999.html

http://bits.blogs.nytimes.com/2010/04/14/nasa-and-gm-robot-heading-to-space-station/?src=busln


 

Robot Teachers

Some of the key themes of the recent Robots and Avatars Lunch Debates on Artificial Intelligence and on Behaviours and Ethics concern how humans relate to robots and avatars. We have been asking how we might relate to these new representational forms in the future, and what the implications of this might be. Central to this debate have been questions of authenticity, credibility, trust and security, highlighted by the fact that, ultimately, it is currently impossible for an inanimate object or a graphical representation to ‘love you back’. In short, the relationships that we think we might be having with robots or avatars tend to be deceptive and illusory.

One area where this seems to have particular implications is the use of service or domestic robots, which is set to be a major growth area in the near future. Below is a great article by blogger Donald Clark about robot teachers, which surveys some of the developments and issues around this sort of future.

Robot teachers – endlessly patient, CPD updates in seconds

Andrea Thomaz, right, and Nick DePalma in 2009 with Simon, a robot being developed at Georgia Tech.

The article in the New York Times will no doubt drive traditionalists to apoplexy, so there must be something in this report about the rise of robots in schools. Imagine a teacher that is endlessly patient, always teaches the correct skill in the correct way, provides lots of constructive feedback, can receive CPD updates in seconds, never gets ill and costs less than one month of a teacher’s salary. That’s the long-term promise.

What makes all this possible are advances in AI, motion tracking and language recognition. We have already seen what Microsoft has done with Project Natal in terms of speech, gesture and motion recognition. Plonk this into a robot and we’re really getting somewhere. The point in the short term is not to replace teachers but to automate SOME TEACHING TASKS.

The focus, for the moment, is on early years education, playing to the ‘cute’ factor. This makes sense. We could see significant advances in early numeracy, literacy and second language skills in schools with software that provides guidance superior to that of many teachers. In addition, they can be updated, wirelessly, in seconds and can even learn as they teach.

The basic premise is sound and was shown convincingly by Nass and Reeves in The Media Equation: we treat technology as human if it has the right affective behaviours, such as good timing, politeness and co-operation. This is how computer games work. With the right movement, sounds and behaviour, avatars and robots seem real. Tens of millions of people use and experience this phenomenon every day. There’s even a touch of this in your ATM, where you’d rather deal with a hole in the wall than a live teller.

Robots don’t get hangovers, don’t take holidays, never discriminate on grounds of gender, race or accent. They’re patient, scalable and consistent. The ideal teacher!

Robots & language learning

The initial trials have been in learning tasks in autism and English as a second language. S Korea sees English as a key skill for growth, and its Institute of Science and Technology has developed Engkey to teach English. The goal is to have an effective robot that is better than the average teacher within 3-5 years. This is part of a general push in robotics to have robots do things humans do in areas such as production, the military, health and education. Hundreds of robots are already in S Korea’s 8,400 kindergartens, and the plan is to have one in every kindergarten by 2013.

The University of California has been running studies with a robot called RUBI, teaching Finnish. Initial studies show that on specific language tasks the children do as well in tests as children taught by real teachers, and retention was significantly better after 12 weeks, with a reduction in errors of over 25%. Another interesting finding is that the robots need not look like real people; in fact, high fidelity seems to be a little ‘creepy’ for kids. (Although for an amazingly lifelike robot, watch this.)

CES 2010 featured a wonderful talking robot that follows you around the house and teaches you languages. It was remarkably sophisticated, with voice recognition, face recognition and picture recognition (show it a picture and it will say the word in your chosen language).

Robots & autism

In a collaborative Japanese/US research project, children with autism have been shown to respond positively to synchronised behaviour from a robot, which is then used to move the child on to other types of social interaction. At the University of Connecticut, a French robot is being used with autistic children, employing mimicry to establish trust. Have a look at BeatBots’ Keepon robot, designed for kids with autism.
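A mimicry-based interaction of the kind used to establish trust can be reduced to a simple mirror-with-delay loop. The sketch below is a hypothetical illustration, not the Connecticut team’s software: the robot observes the child’s gesture and echoes it after a short, predictable delay.

```python
import time

def mimicry_session(gesture_sensor, robot, echo_delay=1.0):
    """Mirror the child's detected gestures back after a short delay.
    `gesture_sensor` and `robot` are assumed interfaces for this sketch."""
    while True:
        gesture = gesture_sensor.read()       # e.g. "wave", "nod", "clap"
        if gesture is not None:
            time.sleep(echo_delay)            # predictable lag keeps turn-taking clear
            robot.perform(gesture)            # robot repeats the child's own movement
        time.sleep(0.1)                       # poll the sensor at ~10 Hz
```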

Robots and personalised learning

Personalised learning can also be realised through one-on-one interaction, with the robot engaging in conversations and learning from the learner. Work of this kind has been going on at the Georgia Institute of Technology with a robot called Simon. Improvements in AI and natural language processing have produced results in the robotic world that promise one-to-one tuition in the future.
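To show what “learning from the learner” can mean in the simplest case, here is a toy adaptive-tutoring loop (illustrative only; this is not the Simon codebase): the robot tracks the learner’s recent success rate and moves the material up or down a difficulty level accordingly.

```python
import random

def tutor(items_by_level, ask, block_size=5):
    """items_by_level: list of levels, each a list of (question, answer) pairs.
    ask: callable that poses a question and returns the learner's answer."""
    level, correct, asked = 0, 0, 0
    while True:
        question, answer = random.choice(items_by_level[level])
        asked += 1
        if ask(question) == answer:
            correct += 1
        if asked == block_size:               # review each block of answers
            rate = correct / asked
            if rate > 0.8 and level < len(items_by_level) - 1:
                level += 1                    # cruising: move to harder items
            elif rate < 0.4 and level > 0:
                level -= 1                    # struggling: drop back a level
            correct = asked = 0
```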

Robots & physical tasks

There’s also the teaching of physical tasks, such as setting a table, where Honda Labs have taught older children to complete the task without the aid of teachers. Robots can already complete physical manufacturing tasks far beyond the physical capability, speed and accuracy of a human. We’ve had 25 years of robotic surgery, with robots being used to perform surgery at a distance, carry out unmanned surgery and minimise invasiveness. In May 2006, the first AI-conducted unassisted robotic surgery was performed on a 34-year-old male to correct heart arrhythmia. The results were rated as better than those of an above-average human surgeon. The machine had a database of 10,000 similar operations and so, in the words of its designers, was “more than qualified to operate on any patient.” The designers believe that robots could replace half of all surgeons within 15 years. In January 2009, the first all-robotic-assisted kidney transplant was performed in the US by Dr. Stuart Geffner. The same team performed eight more fully robotic-assisted kidney transplants over the next six months.

Conclusion

It is only natural that robots, which have taken over highly skilled tasks in manufacturing, should be considered for teaching. Automating repetitive, difficult and dangerous tasks has always been technology’s trump card. If we know one thing about teaching, it’s that it is difficult and demanding, leading to unnatural levels of stress and illness. If we can, at the very least, relieve the pressure on teachers, that is surely a noble aim. In their own way, simple robotic screen programmes like BBC Bitesize and e-learning have already automated a lot of education and training. Robots promise to personalise this process. Every passing month sees improvements in movement, gesture and language recognition, with the technology appearing in the games world by Christmas this year. I have no doubt that robo-teaching will be common in schools in my lifetime.

Source: http://donaldclarkplanb.blogspot.com/

 

Robot Resemblance

Little Island of Japan is a company that creates clone robots; to date its robotic dolls have borne a close resemblance to celebrities as well as politicians, and have been featured on TV shows and in worldwide news.

For those who want a robotic avatar of themselves, it will take around three months from order for the robot to be built and delivered to your doorstep. The robots come with built-in sensors to detect when people are nearby, and are fully capable of waving their hands and saying a simple “Hello”. Each robot stands 70cm tall and will set you back a cool $2,200 after currency conversion.

Source: Ubergizmo

 

Anybots – Work Anywhere

One of the central questions of Robots and Avatars is what it would be like to collaborate with a robot in the workplace. Further, we are exploring the implications this would have for how we present ourselves to our colleagues in both physical and virtual space.

We wonder how it will become possible to envisage robots as colleagues, and we are incredibly excited by the potential of a hybrid between robots and avatars – ‘Robotars’, as Prof. Noel Sharkey calls them – which we think will help push forward the possibilities for new and blended methods of work, play and collaboration in 10-15 years’ time.

Anybots, a California-based company that makes telepresence robots, has announced the launch of QB, the first professional-quality mobile proxy robot. QB is the first in a line of Anybots made to connect people and locations. Accessible from any web browser, QB represents you throughout the workplace from wherever you are.
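A “control it from any web browser” robot reduces, at minimum, to a small command relay. The sketch below is a speculative stand-in (not Anybots’ actual protocol): a WebSocket server turns JSON key events from a web page into drive commands for the robot base.

```python
# Speculative sketch: browser keystrokes -> JSON over a WebSocket -> drive commands.
# Requires the third-party `websockets` package (pip install websockets).
import asyncio
import json

import websockets

# Key -> (linear m/s, angular rad/s); the values here are invented for the example.
SPEEDS = {
    "forward": (0.5, 0.0), "back": (-0.5, 0.0),
    "left": (0.0, 0.6), "right": (0.0, -0.6), "stop": (0.0, 0.0),
}

def drive(linear, angular):
    # Stand-in for the robot base interface.
    print(f"drive linear={linear} angular={angular}")

async def handle(websocket):
    async for raw in websocket:
        key = json.loads(raw).get("key", "stop")   # e.g. {"key": "forward"}
        linear, angular = SPEEDS.get(key, (0.0, 0.0))
        drive(linear, angular)

async def main():
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()                     # serve until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```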

Trevor Blackwell, Founder & CEO, Anybots says:
“Remote-presence robots add a new layer to the engagement available for a distributed workforce. The global Internet is now fast enough for millions of people to be streaming live video and 4G cellular data will soon be deployed everywhere — so in very short order, web-based robotics will no longer be limited to facilities with Wi-Fi.”

Hyoun Park, Research Analyst, Aberdeen Group, says:
“By combining audiovisual telepresence with the freedom of robotic mobility and an easy-to-use remote control, Anybots has created a new level of remote presence. The QB telepresence robot provides the functionality needed for business processes without falling prey to the “uncanny valley” of discomfort associated with fully anthropomorphic robotic builds. QB could change the current model for remote interactions in research and development, corporate collaboration, retail, sales and customer service.”
