Bill Boorman joined jobpal as Advisor, and sat down to talk with us about the past, present, and future of chatbots. You can read and share the summary on our blog!
“We’re not in AI time, and that’s excellent!”
Kit Kuksenok (Senior R&D Engineer at jobpal): A few years ago, I could have asked you about how chatbots are the next big thing, but now, they are already here. From your perspective, having observed and been part of the development of chatbots in talent acquisition, what has been something that surprised you?
Bill Boorman (Advisor at jobpal): The first thing that surprised me, pleasantly, is the effectiveness of the chatbot as a user interface: taking static information and turning it into conversation.
The second thing is how much we'd underestimated how quickly people would move to communicating by chat, and wanting to do things in real time and one-to-one. I think this evolution has made chatbots central to the way that I see talent acquisition and HR, when it comes to transferring information through conversation.
There was a feeling in the beginning, which I shared, a concern about how people would feel communicating with a bot, communicating with a machine, and whether it was going to be just for very simple things. We understood it for a pizza delivery, but not necessarily for something complex, like an HR complaint. And what we found is the opposite of what we anticipated: job-seekers at the top of the funnel would far rather speak to a chatbot than a person.
People are showing a preference to talk to a bot first, and then get more details and filter down to the person-to-person conversations that they want to have. That's come as a benefit of chatbots, but not something we anticipated. We anticipated that there was going to be a bit of reticence on the candidate experience side. That just hasn't been the case, quite the opposite in fact.
I don't want to say “AI”, because that's the wrong term. Recruitment automation. There's an understandable fear around recruitment automation and it's something we need to overcome.
KK: Do you see any aspects that people are reluctant about or do you feel like there's just an overall surprising amount of acceptability around chatbots?
BB: At the moment, there's very little resistance. The resistance tends to come when people are evaluating when and where to bring in chatbots. I think there's a natural fear around what people label as AI, and around the whole automation of HR, which is really taking tasks that people used to do and automating them. People have a natural fear: "well, if we automate a lot of these tasks and take a lot of the time away, what's going to happen to me?" So the resistance is broader: we could say the same about chatbots, about matching technologies, about all kinds of recruitment automation.
I don't want to say "AI", because that's the wrong term. Recruitment automation. There's an understandable fear around recruitment automation and it's something we need to overcome. People say, "well, ok, if you automate many of the tasks that I do, from matching to screening to talking to candidates, all these kinds of functions really, is there going to be a point of me?" That's no different to the conversations, or the fear, happening in many areas of business as we look to automate work. Less automate work, and more automate tasks.
If we've got 80% of our time back, what could we do that we don't do now?
KK: Resistance and fear regarding "AI" includes the various concerns about replacing human labor, or reducing labor to relatively unskilled activities, like labelling data to make machine learning work. And even at this cost, the work done by machine learning, while more scalable and in some cases cheaper, struggles to compare in adequacy to the human labor it is arguably replacing. What do you see as the power and potential of conversational applications in talent acquisition, beyond replacing certain manual tasks with more scalable automation?
BB: I can give you a very real example. I was involved in a reasonably high-level strategy meeting for hiring with a global organization, and the particular team I looked at had about a hundred recruiters. We looked at workflow: what tasks could we automate, and what would be the benefit of doing that? We considered tasks we could use technology for today, never mind future capability, to automate 80% of the tasks involved in hiring: things like matching, shortlisting, communications, recruitment marketing decisions, interview scheduling, booking interview rooms. For example, diary management: in one of the examples I looked at, it took 14 emails to arrange a meeting, because of the back and forth between the people in the conversation. But if you put in a diary management system so everyone's diary was accessible and could sync up, you could drive that really simply. Here's three times, choose one, job done.
We estimated that we could automate 80% of the tasks that were getting done, and the big question, which took a few days of discussion, was: if we can automate 80% of the work, do we need 80% of the recruiters? In this case, it was 100 recruiters. One option on the table, as a realistic conversation that people are having in boardrooms, was: invest in automation, get the process right, reduce the team by 80, and make the saving; reduce the cost of hire by taking the people out of it, pay for the automation in the first 12 months, and then, having reduced our cost-per-hire significantly by 80 salaries a year, carry on with the 20 people.
So we flipped that and said, OK, coming at it from my perspective, if we do that, what are we left with? What we're left with is: we're doing exactly the same things, we're just doing them cheaper. So all we're bringing with automation is reducing the cost and increasing the scale. We could do it quicker and more consistently, because machines, apart from technical issues, don't have off days and high-performance days, and the decision-making is consistent and not driven by emotion.
There's a bunch of benefits in doing that, but you're not making anything better. And if you've got your foundations wrong, your datasets for machine learning and so on, your decision-making and matching software, you're actually increasing the volume of things like unconscious bias. So potentially we could do the same thing faster and at greater scale, but we're not doing anything any better, we're not changing anything, we're not providing what I call a "+1" solution; we're doing the same thing.
Or the other decision is, if we've got 80% of our time back, what could we do that we don't do now? How could we change things, how could we do things better in a more relevant and better way, get very different and better outcomes, because we've got time to do the things we need, because we now have time back?
There's a lot of time invested in meeting with strangers. And that's not how people are connecting with brands these days, people have already moved on from that.
KK: So what kinds of things can that time be used for? What are the possible improvements?
BB: A lot of things! Let's talk about talent acquisition, because that's my area, but we could apply it to any area of HR. Right now, recruiters are dealing with a high volume of requisitions and a high volume of candidates because the way in which the funnel was built requires that human system: you need to spread wide, there's not a lot of quality in it, so then you need to spend a lot of time doing things like reviewing CVs, resumes, talking to people, sending out tests and assessments, doing front-line interviews to try to figure out who's right and who isn't based on quite a limited dataset. Every time people are meeting, they're meeting as strangers, and both sides try to figure each other out by questioning. The candidate asks, "do I really want to work here? let's figure it out, let's ask certain questions, make some judgments" and the company asks, "does this person fit, are they right, what do we know about them?"
There's a lot of time invested in meeting with strangers. And that's not how people are connecting with brands these days, people have already moved on from that. When people apply for jobs, they're not strangers: there's already been some interaction, there's already been some decision-making to get to the point of getting in the room. If you visualize a recruiting funnel, at the moment, most of the time and investment in talent acquisition is spent at the top of the funnel, in filtering: "how do we turn 200 applicants into 50, turn 50 into 10, and put 5 in the pipeline for the interviews?"
What if we said, let's really shrink the funnel by spending more time on things like attraction, relationships, and understanding the people there: spending more time outside the top of the funnel, in the bulk of the funnel and at the bottom of it. If we just had 3 people in the pipeline instead of 50, we could dedicate time to each of those individuals. So it's really about how we shrink the funnel: fewer people in transaction, fewer applicants, more candidates. Which is a significantly different thing.
Candidates are attracted to a brand, a company, or a job; they're in the available funnel of people you want to consider. Applicants are being measured against the job; they put their hand up and said, "this job". We are measuring them in different ways. Candidates are measured as people: "let's see where they fit." For applicants, we're actually saying, "This is the job, now let's measure you against the job: what are your skills, capabilities, experience, fit." And the applicant is doing the same thing: "as an applicant, I'm measuring myself against this job to determine if it's something that's going to enhance my reputation, help me to work with the right people."
Applicants are really project management, which we can automate. We want even more people at the top of the funnel. Outside of the funnel, we have this networked community, people whom we can choose to invite to apply for jobs: existing employees, boomerangs (people who used to work for us), people who applied and have gone through the interview process or expressed interest before, or people who are following our brand on LinkedIn and Twitter and all the rest of it.
We believe that all organizations can reach the talent tipping point: "actually, we know all the people we need to know."
KK: On this topic of dedicating more time to fewer candidates, can you say more about that networked community outside the funnel, and how an organization comes to know the people it needs?
BB: This is something I've been thinking about for a number of years, and I've looked at a number of different tech solutions that work towards this. We believe that all organizations can reach the talent tipping point: "actually, we know all the people we need to know."
If we look at current talent attraction and acquisition, the bulk of the investment, in terms of budget and time, is spent on attraction in a reactive way: "here's a job, let's attract people." Every time you have a role or a requisition, it's a rinse-and-repeat process, and to some degree that's automated. The second phase of that is: what if, actually, we already know these people, and we already know something about them, we already have some of their data? We could match those people to opportunities and invite them to apply, because we already know who they are. With some branding, we're naturally topping up that pool of people. We can invite people with whom we already have a relationship: "how about this job?" The challenge then is, how do we retain the relationship? How do we stay relevant and interesting, so you want to stay connected with us? How do we give you enough value in that relationship?
There's a lot of opportunity in knowledge sharing and relationship building.
KK: What are some ways we can provide enough value in that relationship?
BB: There's all kinds of things! Currently, where I see attraction and connection and relationship going is to be built around learning. I think learning is going to become the new currency, and I'll tell you why I think that. I think people are fundamentally more focused on careers than ever before, when we look at individuals; however, they're not focused on careers with companies.
If we went back 5 or 10 years, the way in which you'd look for a career would be to choose an employer with whom you were expecting to stay, and you'd expect certain things to happen: every year you'd have an annual review, every couple of years you'd expect to get some training and development. In a vertical career, you'd know: "if I'm at this grade, if I'm at grade 4, I want to be a grade 5, or I want to be a supervisor, or whatever." Your career was mapped out in an obvious path, and the training and what you had to do was pretty clear, and you'd learn in traditional ways, maybe in a classroom: formal learning that might be qualification-based, and more informal learning that might be in-house training.
But that was the route by which you learned, and that's not what's happening now. What we see is people moving jobs much more frequently, and the reasons they're leaving are significantly different. When people used to change jobs, it was a major occurrence on one side or the other: either the company didn't like the person or the person didn't like the company, but there had been a fallout, or a breakdown in the relationship. Now, what we see is that people look for their next job on the basis of what their job is going to be after that: "is it an interesting project? is the work going to be right?" We're also seeing that people are moving on the basis of, "how is it going to enhance me?" But by not being in that vertical organization, they're losing their learning and development opportunities.
If you lose the learning aspects of that, where are you going to learn? You're going to have to go and learn for yourself. If we can provide learning and knowledge in our networks of people we're connected to, making the organization a virtual organization, we can maintain relationships. There's a lot of opportunity in knowledge sharing and relationship building; we don't quite know what that looks like yet, but those are the ways in which I see we can connect. People develop their skills and do all kinds of things, and you can sometimes invite some of those people to be employees when something is relevant.
KK: Embracing more of a skill-sharing model?
BB: Yes. How do we share our knowledge? How do we understand that? Part of it is changing our attitudes. Organizations used to be very much built on the idea that if we keep the knowledge in the organization, we can be better than everyone else: “it's all secret, it's all locked down.” People didn't change jobs much, so you could really have proprietary knowledge. That model is not really possible anymore even though many organizations still operate that way.
Now we want to say, right, the real differential between one company and another is not the available knowledge or information but the execution of it: who acts on that information and knowledge. Knowledge is free and open and public. This is a real challenge for the education system, the way in which people learn. People can learn the things that they need in a social, open-source way, and there's thousands of people in networks who want to build personal brands by educating. There's lots of people out there, whether it's blogging or making videos or coaching, who are enhancing that ecosystem by contributing to it because it's good for them; there's a mutual payoff. People are learning, and companies can contribute to that. There's no point in locking down your information: share it, but have a great plan for execution and assessment of knowledge when people don't have certificates anymore.
KK: If we imagined a world where we get over that organizational challenge, do you see this as an area where conversational applications can be helpful?
BB: I think conversation is related to anything, right? If we want to talk about how we use messaging, how we use chatbots, just think about learning. Let's go on a basic scale. I spend a lot of time pushing people towards Codecademy, which is a free basic platform for learning coding. It's actually built for kids, but it's good for anyone; my children have been through it at quite a high level. That's always been delivered in a conversational style: we give you a task, you do that task, we come back and tell you what's not quite working. It's a little bit easier to build a conversation flow on that, because in coding the errors are absolute, and we can tell you that it's wrong and you can try again. When they switched the platform to a more chat-like interface, so you did something and you got a response or a suggestion, and you could ask a question or ask for help, the time on that platform increased exponentially.
That isn't very different to learning in general. It's the difference between watching a one-dimensional video, or going to a website and reading a book: you could learn some real foundational skills, but you can't engage with that. So if I gave you a book and said, read this book, I think this is great, you would read it and learn some foundational knowledge, but you could only bring that to life by coming and having a conversation with me.
KK: Because in order to have the conversation, the reader or learner would have to synthesize the foundational knowledge encountered in the linear manner?
BB: Yeah, so what I see from the conversational delivery point of view is that this is going to engage you and take you to some new places. I don't believe there is an element of HR in which we can't enhance what we do by using chatbots. And I genuinely believe that; that isn't a JobPal pitch or anything. The more I've looked at process, the more I've thought, "if I put a bot there, it would do this." It comes back to what a chatbot is.
A chatbot is UI: a way of delivering information in a conversational style rather than a static, one-dimensional style. How do I deliver this webpage to you in pieces, filter you to the pieces that you want, and enable you to ask questions of it? I can do that through chatbots. And I can do that in every aspect, and we can switch from machine to person when we don't have the answers. We can improve that whole interaction.
Think about how we email now: if you and I tried to arrange a call by email, we'd exchange probably five or ten emails to do it. And in each of those emails you're just asking me one thing: "do you want to do this? when can you do this? when are you available? okay, shall we do it on Skype?" So we've effectively had a conversation, and I think everything we're doing is moving towards this. The way our brains are becoming wired through technology and mobile and all this stuff, I just have one question and I want it answered, and then I want the next question. I don't want you to send me a long letter saying, "this is what we're going to do, this is where we're going to do it." Just ask a question, I reply, you reply. I'm on a mobile device, so it's much more possible; I don't have to be at my desk to reply to you. It's no different than you sending me texts. You're just doing it in email, or messenger, or whatever it is.
What I was interested in was, what if we separate out those two things, the chat and the task? The relevance is really in the task, how well the task is done, and what we're really doing then through chat is creating filters.
KK: How did you first hear about jobpal? What was your first encounter?
BB: I first heard of JobPal at the HR Tech meetup in Berlin. I was very aware of chatbots and what the different chatbots were, and I was aware of JobPal as a player on that list. It wasn't anything more than that, and at that time, I was still trying to get my head around what a chatbot was; it was still quite early days for chatbots. I was lucky enough, when I was in Johannesburg three years ago, to go have a look at what Google were doing over chat and over tasks: how you offer information, how you could filter that and get very specific information back. So I was interested, initially, in the concept of task bots.
I'd separate out task bots from chatbots as two different things. We tend to think of them as the same thing. A chatbot is UI: "how do we deliver information in a conversational style, one piece at a time." That's a chatbot. A task bot does something.
The task is what happens behind it. So let's just talk about the chatbot. If you go into an application in a chatbot, the chat might ask a question, "what's your postal code or your zip code?" and I would give it an answer. The taskbot will then go into my existing database, ATS, whatever it is, and create a filter which says, “only show me jobs in this postcode.” So that is the task behind the conversation. I would separate those two things out.
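The postcode example above can be sketched as two separate components: a chat layer that asks one question at a time, and a task layer that turns the answer into a filter on a job store. This is a minimal illustrative sketch of the idea, not jobpal's implementation; all names here (`JOBS`, `task_filter_jobs`, and so on) are hypothetical, and a real taskbot would query an ATS rather than an in-memory list.

```python
# Sketch of the chatbot/taskbot split: chat collects an answer,
# the task behind it filters the job store. All names hypothetical.

# Tiny in-memory stand-in for an ATS or job database.
JOBS = [
    {"title": "Backend Engineer", "postcode": "10115"},
    {"title": "Recruiter", "postcode": "80331"},
    {"title": "Data Scientist", "postcode": "10115"},
]

def chat_ask_postcode():
    """Chat layer: delivers one question in a conversational style."""
    return "What's your postal code or zip code?"

def task_filter_jobs(postcode):
    """Task layer: 'only show me jobs in this postcode'."""
    return [job for job in JOBS if job["postcode"] == postcode]

def handle_turn(user_answer):
    """One conversational turn: chat collects, the task does the work."""
    matches = task_filter_jobs(user_answer.strip())
    if not matches:
        return "Sorry, no jobs in that area right now."
    titles = ", ".join(job["title"] for job in matches)
    return f"Here's what we have near you: {titles}"

print(chat_ask_postcode())
print(handle_turn("10115"))
# → Here's what we have near you: Backend Engineer, Data Scientist
```

The point of the separation is that the chat layer can be swapped (Messenger, web widget, SMS) without touching the task layer, and vice versa.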
KK: So, one automates the conversation and the other automates the task triggered by the conversation?
BB: Does the task. If we think of them as two people, it would be one person asking you the question, and the other person going away and physically getting the answer, and coming back and going, here is the answer. What I was interested in was, what if we separate out those two things, the chat and the task? The relevance is really in the task, how well the task is done, and what we're really doing then through chat is creating filters. I was interested in looking at the task capability, and looking at the chat capability, and considering where we fit in the process.
With chatbots, we've had uptake in the obvious places, like changing "apply": taking a static, one-dimensional application form and turning it into an interactive experience by asking the questions instead of giving you all the questions on a fixed form. That's an obvious use case, but it's just one; there are so many other use cases where chat should be the way we transfer information, which enables people to personalize that information. Chat should be the way we gain information; the task is what we do with it. Even if you think about "apply": the task could be simply data collection, so I could replace the form in my ATS, ask you the questions, pass the answers to the ATS, and then let the ATS do the work. Or I could build the selection criteria into that flow, so not only are we asking the questions, but we're making decisions and returning a decision: whether we're going to progress you to the ATS, give you a comfortable rejection in a different way, or put you in a different place.
I ended up with JobPal because I saw a really good use case in what Anna was doing with hub:raum which led me to Luc [Dudler, CEO], and to go and have a look at the product and see whether these are people I want to be involved with.
We're not plugging into Google and using what's existing, we're creating something new, which might create something unique for the organization.
KK: What did you think made JobPal intriguing or different?
BB: First of all, there's a decent product there, number one. There's not just a product actually doing things; we're getting real experiences and real data off the back of that. That's the beginning. It's real, it wasn't a hypothetical conversation about what might be. When I looked at the use cases, it was more than just the chat element, it was also the task element, so that is what I was interested in. I was really interested in the approach Luc [Dudler, CEO] was taking in the first conversation that we had. Most of the bots I'd been looking at were either using something super-sophisticated, say something like IBM Watson, where they still haven't quite figured out what that is, it's just a massive huge brain; or they were developing their own taxonomy, their own NLP. Luc was very clear that through machine learning he wanted to develop his own NLP, and I found that really exciting. We're not plugging into Google and using what exists; we're creating something new, which might create something unique for the organization. It's got a lot of challenges with it as well, but that made it really exciting. That made me think, actually, out of all of these bot companies, JobPal is the interesting one. And they've got quite a few customers.

I work with eight different startups in different areas, and it also comes down to the people: you meet the people and you think, yeah, we could actually work together, you're going to listen, and I'm going to listen, and we're going to be able to take this somewhere. That's what I really felt with JobPal. I wasn't actually looking to work with anybody else at that point, but I just got really excited by what you're up to, where it was going. So it just made sense to say, let's work together, let's try and shape this.
In our enthusiasm to market and hype, what Gartner would call the “hype curve”, we've moved ahead of reality. Right now we see a lot of products geared around AI, or Artificial Intelligence. The reality, when we get into it, is we don't see any artificial intelligence in HR and recruiting technology currently. There's no AI there.
KK: I'd also like to hear your perspective on a common narrative around automation in talent acquisition, which we've kind of touched on already: that automation and "AI" - of course, you've spoken about how it's not "real AI" - reduce or remove "human bias" and lead to diversity. What do you think is necessary to make this idea plausible? What do chatbot developers and designers need to do in order to use automation or partial automation in such a way as to improve diversity in hiring?
BB: First of all, what are we really interested in? What do we fix first? We can't fix everything, so what do we fix first: inclusion, or diversity? One might think they're the same thing, but actually they're not. Because I could improve process, I could use technology to say, "I'm going to screen out bias in my advertising, in my selection, I'm going to make sure I've got a balanced shortlist of genders, of race, whatever criteria I want to apply as my diversity" - but that doesn't mean those people are going to be hired when they meet a human in the process, so that's the first thing. And it also doesn't mean they're going to be happy.
I'll give you a direct example: if I hired you because you were a female engineer, but then I didn't have an ecosystem that recognized your work when you were there, or gave you the right opportunities, and you were aware you were being paid 20% less than the person you were sitting next to because they happened to be a man - you would have an issue with that. And rightfully so. Unless we fix those things in organizations, so that the selection is made in a balanced way so you get every opportunity, it's an inclusive environment that concentrates on similarity not difference - diversity is really irrelevant. In fact, I think we're making the situation worse by putting a more diverse target list into that.
What should we do about fixing inclusion and monitoring those things? Look at very hard things like salary levels, opportunities, promotions, how decisions are being made, how people think in this organization. It's very easy to be critical about male environments or male bias, when it comes to unconscious bias, but most people have been brought up in that environment, so it becomes their point of reference. So how are we going to re-educate the workforce to think differently? How are we going to challenge those things? We're going to do it with data, direct challenge, education, learning, exposure, and really monitoring, tracking, and challenging decisions.
Inclusion first - that's my first point. My second point is, in terms of technology, there's a mistake that we're seeing that we've seen every time we've changed the technology. In our enthusiasm to market and hype, what Gartner would call the “hype curve”, we've moved ahead of reality. Right now we see a lot of products geared around AI, or Artificial Intelligence. The reality, when we get into it, is we don't see any artificial intelligence in HR and recruiting technology currently. There's no AI there, right now, there's no intelligence, there's no independent thought or decision-making. That isn't being critical, that's what I expect!
The foundations of AI are two things. The first one is deep learning: are we collecting enough data, is the data clean, is there diversity in that data to properly understand the patterns of work? Then, once we're past the deep learning stage, we get to the machine learning bit, which is: how do our machines learn, and what is good as an outcome? It shouldn't be based on what "good" used to be, because that dictates the learning. Once we've got broad enough, clean enough, diverse enough data to work from: what does good look like, and what is it based on? What can we learn from the data that we have that gives us this good outcome? And what are we learning from user behavior and all the rest of it? But that is still machine learning and recommendation, not AI; there's no independent thought in that. What it's doing is learning the decisions of people, and automating that decision-making.
That's where we are now, at best, in most technologies. So for us to call it AI, first of all, creates an expectation, but also means we run a major risk that we won't do the deep learning bit right - we won't learn from the right data - and we won't improve anything; we'll just take our unconscious biases, put them in our algorithms, and put them on steroids. So we're not only biased, we're biased more often, more consistently, every time we make a decision instead of just some of the time, because the data we used in the first instance was wrong. And then there's the machine learning bit, which also says we need to challenge what the machine is learning. We saw that with Microsoft's Tay, a famous failure story: a chatbot in a Twitter feed built to mimic a human and create conversation around the human behavior around it, and it became a misogynist racist within about 15 minutes. It mimics the behavior: it sees it and copies it, "because that's going to make me popular." So imagine that, applied to decision-making.
So we might look at the data of the top 10 percent of performers in our organization, find the patterns, and only hire people who look like that. Which, in theory, sounds like a good decision. [But] if those high performers were selected using unconscious bias, we're actually justifying unconscious bias.
KK: So we want to challenge the definition of what is being optimized, what is the definition of “good” in the machine learning part.
BB: Yeah, and what's a good outcome. Let me give you a real example that always drives me mad. I was involved with matching technologies which were being developed, and the way in which companies bought or evaluated that matching technology was this: they ran a controlled experiment - which, in theory, you'd say, "yeah, that works" - so they took 1000 candidates and 100 jobs and gave them to their top 50 recruiters, who would then produce a shortlist. Then those same jobs and same people would be put through an algorithm, which would produce its own shortlist, and they would compare the two. If the two shortlists matched, they would consider the technology to be good, because it reproduced human decision-making, just adding scale and volume: "yeah, it's going to get the same result as we would get, we would just do it with a machine."
KK: Accepting the manual process as a gold standard?
BB: As a gold standard, yeah. Another big one I was excited about was high performers. So we might say, right, let's look at the data of the top 10 percent of performers in our organization, find the patterns, and only hire people who look like that. Which, in theory, sounds like a good decision. But I'm challenging both of those perceptions. The first one: if the shortlist you're producing with a machine is no different from the shortlist you're creating as people, what's the point of the machine? If the decision-making is the same, what's the point of the technology and data? Yes, you're getting scale and you're getting volume. But I want to see a matching technology that produces a list I wouldn't produce as a person, and challenges some perceptions: "yeah, this is your list; actually, from a diversity point of view, this is what your list should look like" - because the good outcome has factored in diversity and inclusion.
KK: Do you have ideas on how that kind of technology can be evaluated? If we cannot rely on manual gold standards?
BB: It will take test projects, and time. If I hire people from an automated matching that introduces people into the hiring funnel who are different from the people the humans would have selected, the only way I can know whether that actually works is over time: are these people more successful? If we understand the stakes, we will invest the time. If we think we're AI now, and we use our technology as AI when it's not, the potential to just mess this up is huge. But equally, so is the opportunity to do something different.
If those high performers were selected using unconscious bias, we're actually justifying unconscious bias. If most of the high performers were male, because most of the engineers were male, there is a point where I could say, well, to be successful in this job you have to be a man. And I could justify that logic with data, even though it would be nonsense.
KK: This comes back to your earlier point about inclusion and diversity: if the environment is not inclusive, it's difficult to be a high performer while not being in the majority group. The causality could be reversed.
BB: It's the classic data question. I could use data to justify anything. A few times, I've made exactly the opposite argument using the same data, and had people agree and disagree with me using exactly the same numbers. It depends which lens I put on it: is this a good thing or a bad thing?
I think [we have now] a real opportunity to make sure we have time, and build our technology the right way.
KK: I want to come back to what you were saying about the hype curve, and the risk of expecting AI. When you talked about both inclusion and diversity, and learning, you talked about broader organizational challenges and the difficult shifts that organizations face in terms of opening information or creating more diverse environments. So what do you think developers and designers of technologies, like chatbots, could do to support these difficult shifts?
BB: One thing is that businesses need things to happen, and results, now. So they don't allow things to evolve. And Machine Learning is a classic example. It's like taking a child to their first day of school and handing them a degree. That's what we're doing with our machine learning: we're not allowing our machines enough time to learn before we expect outcomes. We think we're at the end, that we're finished, rather than at the beginning. It's my first school; I need to learn to read and write first.
One is, as designers, challenging the data and making sure the data is right, but also making sure we give our machines enough time to learn and make the right decisions.
The other one is challenging the perception of good, and using data to do that. Challenging why we believe the things we believe, and do the things we do. Quite often the answer to that is "because we always have," but nobody has ever sat down and said, "is this right?"
I think it's a real opportunity to make sure we have time and build our technology the right way, and look at the data - is it clean, covering the right things, diverse enough; do we have all the things we need to make the right decisions - and then, off the back of that, are we giving it enough time to learn, to get the outcomes we need? And if we call it AI, we're going to be impatient with it, and we're going to expect it to have its own intelligence when it doesn't. We can't expect that.
As product companies, we have the knowledge, so we have to educate the market. We cannot be telling people "this is AI, we're in the AI time," because we're not. We should be saying "we're not in AI time, and that's excellent." This is what we're doing. We have Machine Learning and recommendation - and, by the way, that's much better than what we had before - so let's not get impatient and call everything AI.
There is a stage between ML and AI that we don't talk about enough, which I call augmented intelligence. That's the ML and recommendation based on human patterns; it's not AI. It tries to automate what a person would have done, which is really what most of our technology does now. It doesn't think for itself. Real AI would say: hang on, morally, this isn't where I want to be. The pattern says this, but I don't think the pattern's right, so I'm going to change it.
You can bet whatever problems we're thinking about now are not problems that are going to exist by the time we fix them.
KK: Where do you think automation and/or chatbots can bring talent acquisition in the next 5 years? How will the next 5 years be different from the previous 5 years?
BB: The technology will continue to change, and the channels will continue to evolve. Certain themes are naturally evolving. People's behavior is what drives the change, not the technology driving the people, although we sometimes think it is. We're continually pushing into this messenger and closed-group society, and I think privacy by design is going to be the key thing; we see that a bit with Facebook. Three or four years ago I gave a presentation called “Data for good, data for evil.” I think we've reached the point where our personal data has been sufficiently abused that enough people have woken up to say: we're not really comfortable with that. I think it's not about privacy; it's about the permission to use my data, and my confidence that you're not going to abuse my data.
KK: It's the power dynamic: privacy as part of consumers' sense of whether they understand what happens to their data and feel any degree of control over it?
BB: This is what happened with the Cambridge Analytica case: people had a visual example in front of them of something they'd never understood, and it spooked them, because what happened isn't actually any different from what other people have done. What Cambridge Analytica did - using questionnaires to collect data - was a technique that was completely legal up to 2014, and it was exactly the same as what, say, Glassdoor did to understand user behavior, or Monster through TalentBin. A whole bunch of organizations did the same thing. We weren't comfortable with the way Cambridge Analytica used it, but we might say: I'm okay with Glassdoor, because they didn't sell it to anybody else and used it to make my user experience better and to make better recommendations. I think it's also been a wake-up period for people on how the internet works. I don't think people had ever sat down and thought about content bubbles. Now we're at a point where people might pay for Facebook, but would want it to be private. People have realized that all these companies - Google, Facebook, etc. - didn't give us a network because they loved us.
So, boring things like backend security and protecting data, especially in HR, so people can't hack it and misuse it - that aspect will be really important. And, increasingly, we will look at how to personalize experiences and communication, and do transactions on a one-to-one basis rather than one-to-many.
But forget five years! Five years is too long; we should really be talking about one year only. We know that people are agile and change very quickly, because they don't need permission to change. The only person I need to ask for permission to change is myself - for example, to use Facebook in a different way, or not use Facebook at all. I can decide for myself. Organizations need to get permission. We need to make organizations agile. You can bet whatever problems we're thinking about now are not problems that are going to exist by the time we fix them.