Training a chatbot

I love trying out new things, and as part of research for a project I am working on, I was testing chatbots. I came across Bottr.me and have been answering questions on the bottr.me back-end to enable the bot to speak on my behalf on Twitter. I had only just started using it when I posted a message inviting people to ask the bot any questions. Sure enough, someone jumped at the chance to tweet-chat with my new bot, and it was immediately baffled by questions it didn't know the answers to.

The fun side was that it then triggered emails to me so I could answer the questions in real time. I thought that was useful. I also found it quite good fun to see how technology could represent me when I am busy. I like developments like these and feel they will become part of a lot of our tools.

Imagine getting stuck in a lesson and just letting the bot know that you don't understand; it can then guide you back on track. Tutorbots are in development, and some are on trial in third-level education, so it will be interesting to find out how they are received. I personally think that if a machine is faster and more efficient at doing something, we should maximise its capability. I also expect that in the short term we will need to nurture them until the machine learning improves.

From a gamification perspective, bots can add narrative to your designs and give an application, or even a business process for that matter, a more supportive feel. Asking basic questions may not be something you would do in public to a person, but asking a bot privately may feel much more discreet. I read recently that in knowledge-intensive sectors such as health care, bots that provide quick checklists or updates on best practice are helping health professionals make fewer mistakes. I can see how that is a seriously wise way to use chatbots.

What use do you see for chatbots?

Artificial intelligence in learning

Artificial intelligence (AI) is starting to appear in a lot of applications we use regularly, from Apple's Siri to Amazon's Alexa and a few others. The algorithms are consistently improving, and some interesting use cases in learning are appearing too. IBM's Watson is probably one of the better-known tools powering artificial intelligence.

Artificial Intelligence, by definition, is simply computers doing more than processing information. By learning human behaviours, machines can act and respond in a humanlike manner. All the examples given previously in a typical morning routine are already active processes using Artificial Intelligence that we see in our day-to-day lives.

We hear the words Machine Learning, algorithms, Artificial Intelligence and Deep Learning, but what do they really mean? Algorithms are the reason Artificial Intelligence works. An algorithm can be defined simply as a set of rules for solving a problem by following a sequence of actions in order. Think about those tired mornings where you're getting ready for the office and you try to put your shoes on before your work trousers (we've all been there). It just doesn't work. This is the way algorithms operate, and it is thanks to them that Artificial Intelligence exists.
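To make that concrete, here is a tiny sketch of the shoes-before-trousers morning as an algorithm (my own illustrative code, not from any product): a sequence of steps where the order matters.

```python
def get_dressed(steps):
    """Follow the morning steps in order; fail if shoes come before trousers."""
    done = []
    for step in steps:
        # The one rule of our little algorithm: trousers must precede shoes.
        if step == "shoes" and "trousers" not in done:
            return "Stuck: shoes before trousers just doesn't work!"
        done.append(step)
    return "Ready for the office: " + " -> ".join(done)

print(get_dressed(["trousers", "shoes"]))  # the right order works
print(get_dressed(["shoes", "trousers"]))  # the wrong order gets stuck
```

The same steps with a different ordering give a completely different outcome, which is exactly why the sequence is part of the algorithm's definition.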

If algorithms make up Artificial Intelligence, then Machine Learning is the next stage in the evolution process.

Machine Learning is the type of Artificial Intelligence that operates by looking for patterns using algorithms. In this way, machines can learn behaviour without the need for manual programming. The easiest example of Machine Learning in its purest form is your email inbox: most email programmes will now automatically move specific emails into social, junk or spam folders without being told to.
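As a toy illustration of learning patterns from examples rather than from hand-written rules (my own sketch, far simpler than what real email providers use), a classifier can count which words appear in each folder and file a new email under the best match:

```python
from collections import Counter

def train(examples):
    """Count word frequencies per folder from labelled example emails."""
    counts = {}
    for text, folder in examples:
        counts.setdefault(folder, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the folder whose training words best match the new email."""
    words = text.lower().split()
    return max(counts, key=lambda folder: sum(counts[folder][w] for w in words))

model = train([
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "inbox"),
    ("project update and agenda", "inbox"),
])
print(classify(model, "free prize money"))       # -> spam
print(classify(model, "monday project meeting")) # -> inbox
```

Nobody told the program which words are "spammy"; it picked up the pattern from the labelled examples, which is the essence of the idea.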

Further progression and technology breakthroughs saw the growth of Deep Learning. Where Machine Learning recognises patterns in past behaviour, Deep Learning uses advanced patterns and artificial neuron-like structures to learn a user's habits and predict future behaviour.

These are the processes behind interfaces such as Amazon, Netflix and YouTube and many other platforms that decide what you want to watch, buy or listen to next.

The easiest way to think of the relationship between Artificial Intelligence (AI), Machine Learning and Deep Learning is to visualise AI as the catch-all term for algorithms mimicking human behaviour: the first and largest idea. Machine Learning, developed later, works mainly on data analysis and pattern recognition to make predictions. Deep Learning, the driving factor behind today's AI popularity in business, creates learning algorithms that attempt human-like decision making through data, patterns and the links between them.

In learning we have seen only a few initiatives powered by artificial intelligence, yet a lot of people are thinking about this and working on it. Artificial intelligence professor Ashok Goel of Georgia Tech ran an experiment to see whether students would detect that a teaching assistant was powered by artificial intelligence. In his TED talk about "Jill Watson", the name given to their artificial teaching assistant, Ashok explains the challenges of the initial implementation and shares how they solved them.

[Video: https://www.youtube.com/embed/WbCguICyfTA]

In my view, artificial intelligence will allow learning to become more personal. At the moment, most artificial intelligence applications we see are chatbot-like tools that provide answers; here and there people are exploring their use for learning recommendations and for analysis. My thinking is that the use cases will expand further, excitingly giving us the opportunity to have learning our way, with our preferences taken into account from the outset. In games, we have had robots to play against for years, and a lot of games have adaptive algorithms built in to give you an enjoyable experience based on your skills. It is only a matter of time before we see this applied extensively in gamification for learning and in learning technology.

Humanising algorithms by giving them a name

I subscribe to a number of technology magazines and futurology newsletters, and what struck me when reading about Einstein, Salesforce's artificial intelligence platform, is that most tech companies seem to want to humanise their algorithms by giving them a name. Other technology companies have Alexa, Echo, Siri and Watson, to name a few that straight away jumped to mind.

For a parent, picking a name is a meaningful and significant life event, and I wonder whether giving conceptual, harder-to-visualise items like algorithms a name humanises the concept. Most people are a little bit suspicious about technology's ability to predict events, or even further, to replace us as humans in certain tasks. Is the very act of giving a name then a conscious choice to make it more acceptable as a member of the team, or even the family?

I hear of children talking to Alexa at home as if it were a friend from school. Siri in my case doesn't understand me very often, so my experience is more laughable than useful. But I guess the days of science-fiction movies with robots performing some of our tasks are coming ever closer to reality. Personally, I would be very happy to adopt the robot called Pepper into our home, just for fun. Although at the top of my priority wish list for home assistants are a hoovering bot and a grocery shopping bot.

Name giving is as old as most societies; we named objects to give them a sense of meaning, and I guess "algorithm" didn't carry enough meaning on its own. The creators at these companies may feel exactly like a proud parent when their baby, in this case an algorithm, is launched, so name giving may just make it identifiably theirs. I also wonder, though, if this lowers the barriers to acceptance of the technology. New technology acceptance is never instantly widespread, yet some of the named AI assistants seem to have seamlessly entered households, albeit still for a very much early-adopter audience.

In games, we are often asked to choose a gamer name so we are recognisable to our opponents. Some people get creative; others very much stick to their initials. Some of my gamer names are Chillyspice and Icicle, among others. Picking the name is fun; the challenge of finding an original one that wasn't already taken is a whole lot harder. Having a player name never created a sense of ownership for me; that is more linked to identity. However, products I have created and given a name, or even this business, I cherish a whole lot more. It is as if naming creates an emotional sense of attachment apart from the inherent sense of ownership as the creator.

Is allowing people to give projects, inventions and algorithms a name a way to create an emotional link, so that they will take more care of the "baby"? I couldn't actually find research to confirm or reject my hunch, but if you know of research findings where the connection between name giving and user acceptance is examined, I would be interested to hear about it. It may just mean all gamification design projects from now on get identifiable one-word names.

Are you having hilarious conversations with voice activated tools?

Speech recognition, as a tool both for augmented and virtual reality and for already available technology such as our mobile phones and devices like Alexa and Echo, is definitely improving and increasingly surrounding us. The technology enabling this service is getting better, and its ability to adapt to accents, and even to speech impediments, is also being fine-tuned.

So far my experience has been decidedly hit and miss. Siri on most occasions doesn't get my question right; even after training it to recognise my voice in a recent iOS update, he still cracks me up with the most weird and wonderful things. This morning I only wanted him to go back to his previous search results, and we ended up having a debate about whether it was me or him. Very funny, totally random, and I am glad I am not relying on speech alone to get my stuff done.

I haven't had the privilege yet of testing out Alexa and Echo. I did try out some voice-activated dictation software, which involved a lot more training of the system than in Siri's case. It reached reasonable accuracy, but again nothing I would be willing to drop written search for. With sound input and enunciation quality as factors, I wonder if it is possible to get to the point where a group of people can all use the same device with their own accents and find the answers they were looking for. Or better again, different languages all going into the one device, with everyone receiving their answers in the language they asked in.

The companies behind these technologies say it is possible, and a lot of their marketing leads you to believe it is accurate, can adapt to various accents, and so on. The reality, in my limited experience, is quite different. I had to turn Siri off on my iPad because it thought newsreaders and journalists with voices similar to mine were calling him into action every few minutes. Imagine watching BBC news and being interrupted every five minutes by "Hi, how can I help?". I have some friends who are absolute fans of Alexa and see their children having conversations with it.

In a virtual world project we are working on, we use technology developed by Edorble. In this scenario my voice and the voices of the other people in the virtual world are our own, just as you would hear them on Skype or via telephone. At this stage there is no robot or other artificial intelligence involved. However, for our 2-D version on the website for the same client, we have had to resort to an avatar character that asks you questions to help you navigate. Voice commands would make the project prohibitively expensive and possibly also a bad user experience for the end user.

I worked on translation-related software straight after finishing my first degree, and the project fell through because the organic way language evolves couldn't be accurately handled by machines. Now, a good 10+ years on, translation tools are more and more accurate. My guess is that the same will happen with speech-powered tools. Right now we will stick with voice recordings or real speech from one user to another when incorporating speech in our gamification design projects. I am hopeful that in a few years we won't have to limit ourselves to the written word and can expand our reality with voice-activated tools that respond to all voices, with different accents, speech impediments or even other languages.

Maybe the future holds a reality where we can just call out for a cloud-based tool to come to our assistance. The click-and-point style of augmented reality may be closer in reach, but the real shift will come when our voice, or even our mind, can activate tools and find answers. My guess is that this will take less than the 10 years it took for the translation ability of tools. I am looking forward to the design options this will give us.

Trend watch: Are robots going to do your resourcing?

This morning I attended an innovation update session by the wonderful people of Resource Solutions. One of the attractions was my favourite robot, Pepper, who had been trained to do a few poses for selfies. But imagine a robot as your secret weapon in the candidate resourcing, recruitment and career coaching field.

We were given a brief flavour of what is possible with the virtual sourcing assistant Arya, which trawls the net for matches to your open positions. It comes back with a best fit based on the skills and values listed in your role description, in the shape of a star rating, as well as an indication of the likelihood that the person is in the market to move to a new job and their predicted timeframe for doing so. Based on how you shortlist some candidates and reject others, the virtual sourcing assistant becomes more intelligent over time and learns your preferences and selection criteria. As with all machine-learning-powered systems, it gets better with use. Because the virtual sourcer is searching purely on a factual level, unconscious prejudice and bias can be avoided to some extent (until your humans start shortlisting, obviously). In any case, a level of inclusion and a better fit based on a larger number of criteria is made possible this way.
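To give a feel for the star-rating idea, here is a toy scoring sketch (my own illustration; Arya's actual algorithm is far more sophisticated and not public): rate candidates by how well their skills overlap with the role description.

```python
def star_rating(role_skills, candidate_skills, max_stars=5):
    """Score a candidate by the fraction of required skills they match."""
    required = set(role_skills)
    matched = len(required & set(candidate_skills))
    return max_stars * matched / len(required)

role = ["python", "sql", "communication", "teamwork"]
print(star_rating(role, ["python", "sql", "teamwork"]))  # -> 3.75 stars
```

A real system would of course weight skills, values and many other signals differently rather than counting them equally, but the principle of turning a multi-criteria match into one comparable score is the same.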

IBM showcased the Watson application Myca, a cognitive career advisor app: a chatbot that creates personalised learning at the point of need and matches employees to internal job opportunities. In layman's terms, this means it uses a smart algorithm to search for information based on the questions asked in the app. IBM Watson can be deployed to assist with recruitment, career coaching and talent development, and to create a virtual agent for HR and talent insight. Myca looks at internal demand for a role, previous successful candidates and their profiles, and how the individual using the app matches those successful candidates; if they are not a good match, it will look for a more appropriate fit or provide feedback on how to gain the necessary skills.

The final presentation was by HireVue, who, again in the shape of an app, shorten the recruitment process. Everything from interview to assessment and evaluation is carried out through the app. The machine learning hiding in the application analyses audio, video and language patterns to detect behavioural traits and give insights into the likelihood that a person is a good fit for a specific role. It uses micro-expression analysis, such as minor facial movements from smiles to frowns, to assess suitability. The number of data points it analyses is vastly larger than most of us humans can absorb and look for. The data models provide scoring and ranking for faster, smarter hiring and coaching decisions.

 

If you like this kind of content, explore our HR & L&D Technologies Updates, where we explain current trends and technology in words we all understand.

 

How often does a recommendation algorithm get it right?

With artificial intelligence, recommendation and reminder algorithms invading most web-based applications and services, I often wonder how often they actually get it right. As you probably know, I love technology and will always be trying out new things to find out how they can potentially impact our lives.

Algorithms that give us recommendations based on our previous searches or behaviour have been programmed to track past actions, typically anything trackable ranging from a search to a view, a click or a purchase. In my working world, research is often part of the game, so some of the recommendations I receive are very relevant to projects but a million miles away from my personal preferences. In the world of learning, I have often had to deep-dive into a topic and learn the basics in order to design something of value. So the search-based algorithms are typically off the mark for me personally.

Then when it comes to actual buying recommendations, the best example in my world is Amazon. It does show me relevant suggestions, because typically I will only buy what I want and not necessarily client-related materials. But I also often receive recommendations for items I have already bought, which I find mildly amusing. Surely the same algorithm should be able to trace my past purchases?
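Filtering out past purchases is, at least conceptually, trivial; a minimal sketch (my own, assuming the purchase history is available to the recommender):

```python
def recommend(candidates, purchase_history):
    """Drop items the customer already owns before recommending."""
    owned = set(purchase_history)
    return [item for item in candidates if item not in owned]

history = ["gamification book", "board game"]
suggestions = ["board game", "card game", "design book"]
print(recommend(suggestions, history))  # -> ['card game', 'design book']
```

Which suggests the gap is less about algorithmic difficulty and more about whether the purchase data is actually wired into the recommendation engine.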

It reminds me of my early adventures in the world of VB programming. One of the lecturers made the comment about GIGO, garbage in, garbage out, and in my mind this applies to today's recommendation algorithms. If you are not sure what will give the best results for your customers, ask them what they value and what they would actually buy for themselves. In my own experience, my buying behaviour is a much truer reflection of my real self than, for example, my search patterns, which can be vastly different. I personally wonder if the creators of these engines mostly started from what they wanted to track, rather than what the client would want to receive.

When you then enter multiple people into the equation, as in the case of our TiVo television show recommender, it becomes really funny. We actually watch for the red buttons to jump on, to find out what exactly he thinks we need to be watching today. Funnily enough, he can surprise us with great choices; most often, though, we have to throw out the majority of his great finds. We have a few series we have set him up to record, but really the accuracy is very hit and miss. My partner watches sports, while I watch news, technology and also very girly programmes and dance-related movies. We both like movies, but from quite different genres. So no matter what, TiVo has a difficult task ahead.

In our gamification design work, requests for some of this technology are slowly but surely entering the market. Some engines are working on integrating it out of the box; others are taking a wait-and-see approach. In my view the same design principle applies: you have to find out what is important to your customer to get the algorithm right.

 

Trendwatch: Big data and gamified learning

Big data is a buzzword that has entered the frame of enterprise applications in the last number of years. Only in recent months has it also been entering the realm of learning technology. That said, very few learning technology providers are able to provide the service just yet.

The places where you have most likely been exposed to big data are Netflix, which recommends the best content for you based on what you have viewed before, and Amazon, which adds information about what other buyers of the same products have also looked at. Now imagine your corporate learning system providing the same feedback: suggestions based on the courses you took before and on what your colleagues then went on to learn about. Let's not stop there, though; linking it to HR and other enterprise applications would then allow further suggestions based on the projects you work on and the competencies required for those, your personal interests, and your manager's and peers' feedback.
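A minimal sketch of such a colleague-based suggestion (my own illustration with made-up course names; real systems use far richer signals): score the courses taken by colleagues whose history overlaps with yours.

```python
from collections import Counter

# Hypothetical course histories for a handful of colleagues.
histories = [
    ["excel basics", "data analysis", "presenting"],
    ["excel basics", "data analysis"],
    ["presenting", "negotiation"],
]

def suggest(your_courses, all_histories, top=2):
    """Suggest courses that colleagues with overlapping histories went on to take."""
    yours = set(your_courses)
    scores = Counter()
    for courses in all_histories:
        if yours & set(courses):                 # shared course history
            scores.update(set(courses) - yours)  # their other courses
    return [course for course, _ in scores.most_common(top)]

print(suggest(["excel basics"], histories))  # -> ['data analysis', 'presenting']
```

The more colleagues and courses feed into the counts, the more meaningful the ranking becomes, which is exactly the big-data effect described above.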

Now how does gamified learning enter the picture here? Well, in gamification we track all events; some are actively encouraged, others discouraged, typically with a business outcome in mind. Each tracking event is a data entry point where analysis can take place. It typically requires a bit of data cleansing to get a real picture of users and of trends over time and across a number of users. The more active users there are, the better the data and its potential analysis.
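As a sketch of what a tracking event and a first trend analysis might look like (my own made-up event log and field names, not any particular engine's format):

```python
from collections import Counter

# Hypothetical event log: each tracked action is one data entry point.
events = [
    {"user": "amy", "action": "completed_quiz"},
    {"user": "amy", "action": "skipped_video"},
    {"user": "ben", "action": "completed_quiz"},
    {"user": "ben", "action": "completed_quiz"},
]

def action_trends(log):
    """Aggregate events per action; more active users give a better picture."""
    return Counter(event["action"] for event in log)

print(action_trends(events))  # completed_quiz outnumbers skipped_video 3 to 1
```

Real logs would also carry timestamps and context so that the cleansing and over-time trend analysis mentioned above becomes possible, but every analysis ultimately starts from simple aggregations like this one.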

So when you are working in the field of learning and development, knowledge of data analysis will become a required skill for people in the team. Looking at it from a wider enterprise viewpoint, you may need a data analysis team, or robots programmed to translate human questions into algorithms that can search for the data trends and habits users display in their use of enterprise applications.

Currently, what I find when working with HR and L&D teams is that either no data is gathered or analysed, and they merely provide a service which they hope is hitting the mark and somewhat effective; or, on the other side, data is being gathered but the analysis is not made, used or acted upon. Effectively, what big data can bring to the table is predictability about your people: whether they are likely to stay or are actively looking to leave, whether they will grow and turn out to be your next high-potential leader, whether they are a good fit for a team or project, whether the course you just wrote will hit the mark, and so on.

The possibilities for using the data are amazing. I personally find it very interesting to watch the data flows about my business; I do what I can to find ways to act on them, and in every case the data will tell me if my efforts are successful or not. We have found a few ways of providing some of these seemingly futuristic elements to business customers, and we hope to see more of the same coming our way.

If you like posts like these, you may love our Technologies Update specifically for HR & L&D professionals

 

Should we be afraid of artificial intelligence taking over our jobs?

In the HR world, new technology is often looked at with a level of scepticism. The emergence of artificial intelligence taking over our jobs is the current new kid causing some worries. The fact that both Stephen Hawking and Bill Gates have spoken about a revolution in which artificial intelligence will overhaul jobs as we know them is feeding some of these fears.

Artificial intelligence is a system that can understand, reason, learn and respond based on evaluating large amounts of data. It can be just a computer program or be installed in a fully functioning robot.

The days of robots walking around offices instead of people are probably a few decades away. But at the same time, some of the more menial tasks may be executed a lot more effectively by artificial-intelligence-led machines. According to Elon Musk, a computer can communicate at a trillion bits per second, while a human typing on a mobile device manages about 10 bits per second. So when it comes to productivity, they may well find a place in our workplace.

So far the areas that humans are still needed for include:

  • creativity
  • influencing others
  • understanding human emotions
  • ability to handle poorly described problems
  • subject matter expertise with understanding for contextual finesses and applications

This is by no means an exhaustive list, and I am also sure that some department somewhere is looking to build these abilities into algorithms that we will, in time, find in intelligent machines. At the same time, humans will have a place too in a world invaded by machines. Most vitally, we can shape how we integrate them and make the combination of resources work for the best positive outcomes.

Some of the questions for HR and management are "How will robots enhance our work?", "How do we want them to fit into our organisations?", "What will humans bring to the equation?" and "Where can each be most effective?"

Whilst it probably won't happen on a widespread scale for a decade or two, with the speed of change consistently increasing, it is something to start considering today. In some circles, it has triggered the discussion around a universal basic wage paid to all citizens by governments. My question to you is: how will you deal with artificial intelligence as a resource in the mix of managing human resources?

In gamification design, we can see benefits from artificial intelligence in the area of user research, because a lot of behavioural data exists and can be processed much more quickly and objectively than when humans analyse it. Equally, when going live with a digital gamification implementation, correction requirements and trends may be spotted much more quickly by a robot than by a person. So in my view it will make our work more interesting: mainly the creative side will remain for us, and the robotic side can do the data analytics to help us make better-informed, objective decisions.


Trend watch from SXSW

It is my first time visiting South by Southwest in Austin, Texas, and for me it is like nothing I have visited before. You have a combination of music, film, technology, education, creativity, design and more all melting together in one big pot. Across a few venues, the city is completely consumed by this festival, conference and exhibition.

My first impression is that anything goes as far as fashion, although the uniform is jeans and a t-shirt for the mostly young, male visitors. The ladies I have met so far are strong and interesting, with all sorts of viewpoints. Being part of an all-female discussion group made me feel that women globally experience similar issues, and at the same time that first-world issues differ depending on region and age.

So far I have only managed to get into a few talks and have mainly hung out with my fellow panellists, but the trends that are apparent are distinctly virtual reality and artificial intelligence. Most talks about AI are fully subscribed, and the number of VR and AR stands in the exhibition definitely gives some insight into the possibilities, or lack thereof for that matter. Augmented reality keeps disappointing in the way it is adopted, with a lack of creativity in my view. So, note to self: we may have to address this. Virtual reality is still all about the headsets, and content remains an issue because it is simply expensive to create right now.

Artificial intelligence in this company stands out like the sexy kid on the block, even if it has a whole range of issues attached, such as the question of ethics and how we will interact when our world can be largely run by robots and algorithms. The challenge with artificial intelligence is that it still needs to be originally programmed by humans, and the lack of diversity in the base programming may well cause further disconnect than we already see in society today. AI can learn over time and adapt, but only if it is left exposed to a variety of experiences from different viewpoints: male/female, young/old, and cultural, national or even regional differences. That said, it is exciting to see its potential. It will disrupt our world to a larger extent than anything before, in my view.

Imagine the low-end, process-driven jobs disappearing to robots, quite a few roles disappearing completely, and the world of work adapting to mainly creative and mind pursuits. Maybe the whole idea of giving all citizens a wage and allowing them to work or not on top of that would be a major paradigm shift. It would also require governments to actually employ people with the business experience to manage this as a not-for-profit, long-term sustainable system. I am not sure how or if it can work, but the one thing I am sure of is that artificial intelligence will disrupt much more than we currently think. Maybe not in my lifetime, but with the rapid change of technology I wouldn't be 100% sure of that; if anything, most humans' resistance to change will be what drives a slower take-up.

I finally found out what Facebook actually wants to do with their Oculus purchase, and it turns out it is for 360 social media creations, so your friends could feel in your presence even when they aren't. I guess that's why they are training us to watch live videos from everyone: to get us into the habit of wanting to attend when someone invites you into a 360 movie. Personally, I am not sure I would be all that interested, considering the intrusion of push live video already.

It is a brief post today, with a further update probably next week when I have absorbed more sessions, heard more people and had my own experience of being on a panel.

Feminine gamification viewpoint: language bias


Artificial intelligence driven by machine learning has thrown up some interesting challenges. Because of its adaptive nature, responding to how we deal with it, it can turn out differently from the way it was designed. The Microsoft chatbot Tay was an example: designed to be helpful, but thanks to people sending it abusive and racist tweets and comments, it started to respond in exactly the same way.

The challenge with coding responses is that there is no emotional filter or value filter (right/wrong, acceptable/offensive) built into most machine learning set-ups. Technically, most projects aim to resolve a specific problem, and they do exactly that: solve that specific problem. Currently, most AI developer teams are white, affluent and male; in fact, most artificial intelligence conferences have trouble attracting more than 15% female attendees. However, the industry does need that input to understand how men and women interact differently.

Textio, a tool that helps companies change job-posting language to increase the number and diversity of people who apply, analysed 1,700 AI employment ads and compared them to over 70,000 listings spread across six other typical IT roles. The analysis found that AI job ads tend to be written in a highly masculine way relative to other jobs. Some of the words used included "coding ninja", "relentlessly" and "fearlessly", which tend to lead to fewer women applying.
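A toy check in the spirit of such tools (my own illustrative word list, not Textio's actual model or methodology) might simply flag masculine-coded words in a posting:

```python
# Illustrative word list only; real tools use validated, research-based lexicons.
MASCULINE_CODED = {"ninja", "relentlessly", "fearlessly", "rockstar", "dominant"}

def flag_ad(text):
    """Return the masculine-coded words found in a job posting."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return sorted(words & MASCULINE_CODED)

ad = "We need a coding ninja who relentlessly ships features."
print(flag_ad(ad))  # -> ['ninja', 'relentlessly']
```

Even a crude flagging pass like this makes the bias visible so a human can rephrase; the real products go much further and suggest alternative wording.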

I often find the same applies when I see game-related roles, even when the company does its best to have a gender equality policy and approach. I think tools like Textio can help, just as much as role models and awareness of the fact that we all have a personal bias towards our own kind.

Taking this one step further: in our designs, the types of questions women and men ask are different, and how they engage with robots or gadgets also differs. I would love to see the results of male-trained robots versus female-trained robots and the difference in behaviour. It would show how much environment influences these factors, provided the experiment was isolated enough to be relevant.

What words have you come across that are indicative of one gender or another?