Expert comment: Genuine People Personalities - Encoding EQ

Dr David Reid, Wednesday 22 June 2016

Following Enfield Council’s decision to employ a virtual robot, Senior Lecturer in Computer Science Dr David Reid explores what replacing humans with virtual agents could mean for the future.

The great Douglas Adams wrote in The Hitchhiker's Guide to the Galaxy about a new breed of robot that was imbued - or in some cases inflicted - with Genuine People Personality (GPP). Not only did these robots have intelligence (IQ), they also had emotions (EQ). In the book, the fictional Sirius Cybernetics Corporation encoded robots of all types (including toasters, doors, elevators and, most famously, Marvin, a super-intelligent robot) with personalities, so that people could interact with them more closely. As is often the case, given enough time, sci-fi becomes sci-fa (science fact).

Recently, Enfield Council employed a virtual “robot” called Amelia to answer questions from members of the public about council services and to help them fill in standard forms. Amelia is an advanced cognitive computing platform that not only analyses natural language in many different contexts, but also learns from her mistakes and has been designed to detect the emotional state of the person making an enquiry.

If Amelia fails in her task of providing an adequate answer to a question, a human co-worker will teach Amelia the correct way to solve a problem. This is done not by reprogramming Amelia, but by showing her the correct way of doing things. Amelia also learns on the job by observing interactions between her human co-workers and customers, and independently builds her own cognitive map of what is happening. 
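As a rough illustration of this kind of human-in-the-loop learning, the Python sketch below shows an agent that answers when it is confident, escalates to a human co-worker when it is not, and records the human's resolution as a new example. The class, the confidence threshold and the escalate_to_human function are hypothetical illustrations of the pattern, not Amelia's actual interface.

    # Hypothetical sketch of the human-in-the-loop pattern described above.
    # The agent answers when confident, escalates when not, and learns from
    # the human's resolution rather than being reprogrammed.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualAgent:
        confidence_threshold: float = 0.75
        knowledge: dict = field(default_factory=dict)

        def classify(self, query: str):
            """Return (answer, confidence); unknown queries score 0.0."""
            if query in self.knowledge:
                return self.knowledge[query], 1.0
            return "", 0.0

        def handle(self, query: str) -> str:
            answer, confidence = self.classify(query)
            if confidence >= self.confidence_threshold:
                return answer
            # Escalate to a human co-worker and record the demonstration.
            resolution = escalate_to_human(query)
            self.knowledge[query] = resolution
            return resolution

    def escalate_to_human(query: str) -> str:
        # Stand-in for routing the conversation to a human co-worker.
        return input(f"Human co-worker, please resolve: {query!r}\n> ")

On this pattern, every escalation leaves the agent slightly more capable, which is exactly the on-the-job learning described above.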

Amelia has also been encoded with a normative emotional semantic network. This allows her to modify her behaviour if the customer gets angry with her, is upset or is joking. This modification in response is triggered by understanding the meaning and context of the specific query, and is aided by interpretation of the user's facial expressions as seen through Amelia’s camera.
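In spirit, this kind of emotional modulation can be sketched very simply. The toy classifier below keys its tone off a handful of marker words; a real system like Amelia combines semantic analysis of the whole query with facial-expression cues, so the marker lists and function names here are purely illustrative.

    # Toy illustration of modulating a response by detected emotional state.
    ANGRY_MARKERS = {"ridiculous", "unacceptable", "furious", "angry"}
    UPSET_MARKERS = {"worried", "upset", "confused", "struggling"}

    def detect_emotion(utterance: str) -> str:
        words = set(utterance.lower().split())
        if words & ANGRY_MARKERS:
            return "angry"
        if words & UPSET_MARKERS:
            return "upset"
        return "neutral"

    def respond(answer: str, utterance: str) -> str:
        # Prepend a tone suited to the customer's apparent emotional state.
        emotion = detect_emotion(utterance)
        if emotion == "angry":
            return "I'm sorry for the frustration. " + answer
        if emotion == "upset":
            return "Don't worry, we'll sort this out together. " + answer
        return answer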

Amelia can “speak” 20 languages and process thousands of enquiries simultaneously. She is an example of Artificial Narrow Intelligence (ANI): although Amelia can be trained for other roles - such as mortgage broker, IT service desk agent or invoice query agent - she specialises in only one domain at a time. This is unlike Artificial General Intelligence, the ‘holy grail’ of AI, which would match human ability across many domains at once.

Even though Amelia is only being trialled as a web-based virtual agent for the time being, this type of technology has massive ramifications for society. Just as the Industrial Revolution caused a massive reduction in the number of people employed in specific blue-collar, manual work - such as textile workers and farm labourers - ANI has the potential to do the same for white-collar, intellectual work.

Professionals such as lawyers and doctors will be at risk, or will have to retrain, but the heaviest burden might rest on the shoulders of those who are least skilled and perhaps least well-equipped to move into whatever new occupations are generated. Security guards, street cleaners, taxi drivers, lorry drivers, secretaries, accountants, counter clerks and sales-related jobs are all in the high-risk category. The Bank of England has recently warned that up to 15 million jobs in Britain are at risk of being lost to this kind of automation.

The Bank of England classified jobs into three categories – those with a high (greater than 66 per cent), medium (33-66 per cent) and low (less than 33 per cent) probability of automation, and made an adjustment for the proportion of employment those jobs represented. For the UK, roughly a third of jobs by employment fall into each category. This doesn’t mean that there will be 33 per cent fewer jobs around, but that people in those jobs will probably have to re-train or find new careers.
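The banding itself is simple arithmetic, as the short sketch below shows. The job names and probabilities in it are invented for illustration; only the three thresholds come from the Bank of England's classification.

    # Map an automation probability to the Bank of England's three bands:
    # high (> 66%), medium (33-66%), low (< 33%).
    def band(p: float) -> str:
        if p > 0.66:
            return "high"
        if p >= 0.33:
            return "medium"
        return "low"

    # Hypothetical figures, purely for illustration.
    jobs = {"counter clerk": 0.90, "taxi driver": 0.75,
            "secretary": 0.55, "doctor": 0.20}

    for job, p in jobs.items():
        print(f"{job}: {p:.0%} -> {band(p)} probability of automation")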

The other consideration is that people may not like interacting with robots. The discipline of cyber-psychology has shown that the way people interact with machines is very different to the way we interact with each other. We expect different values and actions from robots than we do from people. Accordingly, robots need to be aware of this if they are to communicate effectively with us.

This was recently illustrated by the first Google Self-Driving Car crash in March. Nobody was hurt, but the car was being too polite to other road users while trying to navigate around sandbags that were placed on a drain after a storm. It allowed traffic to overtake, eventually pulling out after assuming a bus behind would be polite and let it pass - it didn’t - and the Google car hit the side of the bus. This illustrates that the subtle everyday negotiations we make when doing something as routine as driving are exceedingly difficult to encode in ANI.

As we start to rely on ANI more and more, it is likely this type of thing will happen more frequently. The “messy” nature of the environment we want our robots to function in will produce problems that even the most cautious computer scientist didn’t anticipate.

If ANI makes a big blunder - as it inevitably will - who will be to blame: the programmer, the person who taught the ANI, or the ANI itself?
