IBM wants to keep its employees from quitting, and it’s using artificial intelligence to do it.
In a recent CNBC interview, chief executive Ginni Rometty said that thanks to AI, the tech and consulting giant can predict with 95 percent accuracy the employees who are likely to leave in the next six months. The “proactive retention” tool — which IBM uses internally but is also selling to clients — analyzes thousands of pieces of data and then nudges managers toward the employees who may be on their way out, telling them to “do something now so it never enters their mind,” Rometty said.
IBM’s effort to use AI to learn who might quit is one of the more high-profile recent examples of the way data science, “deep learning” and “predictive analytics” are increasingly infiltrating the traditionally low-tech human resources department, arming personnel chiefs with more rigorous tools and hard data around the tricky art of managing people.
Almost every Fortune 100 company, said Brian Kropp, group vice president for advisory firm Gartner’s HR practice, has a head of “talent analytics” and a team of data scientists in human resources.
“Compare that to three years ago, when there were maybe 10 to 15 percent that had a named and known head of talent analytics,” said Kropp, whose firm counts IBM as a client. “It’s the fastest-growing job in HR.”
Analysts say retention, in particular, is a critical area for the application of artificial intelligence. For one, there’s a clear event — someone quits and leaves the company, or threatens to — that gives data scientists a concrete outcome to seek patterns around, and a moment to intervene.
“The person was here, and then the person was not here,” Kropp said. “It is where the more sophisticated analytics work in HR is going.”
Meanwhile, especially in a labor market with an unemployment rate below 4 percent and a near-record rate of people quitting their jobs for new gigs, there are growing worries about the high cost of losing great employees. The cost of hiring a replacement, Kropp said, is about half that person’s salary.
IBM’s use of AI in HR, which began in 2014 and now includes 18 different deployments across the department, comes at a time when the 108-year-old company has been trying to shift its massive 350,000-person workforce to the most current tech skills. Diane Gherson, IBM’s chief human resources officer, said in an interview that using tech to predict who might leave — considering thousands of factors such as job tenure, internal and external pay comparisons, and recent promotions — was the first area the department focused on.
“It was an obvious issue,” she said. “We were going out and replacing people at a huge premium.”
IBM had already been using algorithms and testing hypotheses about who would leave and why. Simple factors, such as the length of an employee’s commute, were helpful but only so telling.
“You can’t possibly come up with every case,” Gherson said. “The value you get from AI is it doesn’t rely on hypotheses being developed in advance; it actually finds the patterns.”
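To make the hypothesis-versus-pattern distinction concrete, here is a toy flight-risk score in the older, hypothesis-driven style Gherson describes IBM moving away from. Everything here is invented for illustration — the feature names, weights and threshold are assumptions, not IBM’s model; a real system would learn such weights from data rather than hard-code them:

```python
import math

# Invented weights over a few hypothetical features; a real system
# would learn these from historical attrition data.
WEIGHTS = {
    "years_since_promotion": 0.6,
    "pay_gap_vs_market": 0.8,   # fraction below market rate
    "long_commute": 0.4,        # 1 if the commute is long, else 0
}
BIAS = -2.0

def flight_risk(employee):
    """Logistic score in (0, 1): higher means more likely to leave."""
    z = BIAS + sum(WEIGHTS[k] * employee[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

at_risk = {"years_since_promotion": 4, "pay_gap_vs_market": 0.3, "long_commute": 1}
settled = {"years_since_promotion": 1, "pay_gap_vs_market": 0.0, "long_commute": 0}

# The at-risk profile scores noticeably higher than the settled one.
print(round(flight_risk(at_risk), 2), round(flight_risk(settled), 2))
```

The limitation Gherson points to is visible in the sketch: the model only knows about the handful of factors someone thought to encode, whereas the case she describes next (peer comparison across the company) is exactly the kind of signal no one would have listed in advance.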
For instance, the system spotted one software engineer who hadn’t been promoted at the same rate as three female peers who all came from the same top university computer science program. The women had all been at IBM for four years but worked in different parts of the sprawling company. While her manager didn’t know she was comparing herself to these women, the engineer was all too aware her former classmates had been promoted and she hadn’t, Gherson said. After the risk was flagged, she was given more mentoring and stretch assignments, and she remains at IBM.
While the program urges managers to intervene for employees who have hard-to-find skills — offering them raises, public recognition or promotions — potential quitters that the system identifies as having less-valuable skills or who are low performers don’t necessarily get the same response.
“Our universe for doing this is not the whole IBM universe, and does not include low performers,” Gherson said. “The ones who are in high demand today and high demand tomorrow are going to be the ones we treat with a very high-touch” response.
IBM does not analyze or monitor employees’ email, external social media accounts or other internal message boards as part of its predictions on who has one foot out the door. But some start-ups have scraped publicly available LinkedIn data, for instance, to predict probable departures.
Meanwhile, other vendors have recently begun analyzing employee engagement scores, finding that declines can give companies a nine-month heads-up about the groups of workers that might be at risk of leaving. Josh Bersin, an industry analyst who focuses on HR technology, said some companies have taken a high-level look at email to make predictions.
He recently wrote about how some firms have studied email “metadata” and communication patterns, finding that people who quit were less engaged in their email for up to six months before leaving.
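One simple way such a metadata signal could be computed — purely illustrative, and not how any vendor cited here actually works — is to fit a least-squares slope to an employee’s monthly email volume and flag a sustained decline. The counts and threshold below are made up:

```python
def trend_slope(monthly_counts):
    """Least-squares slope of a sequence of monthly activity counts."""
    n = len(monthly_counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_counts))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical employee: email volume sliding over six months.
sent_per_month = [120, 112, 98, 90, 75, 60]
slope = trend_slope(sent_per_month)
if slope < -5:  # arbitrary threshold for "disengaging"
    print("flag for manager follow-up; slope =", round(slope, 1))
```

A steady sender would produce a slope near zero and never trip the threshold; the six-month window matches the lead time Bersin describes.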
“Predictive attrition” methods are becoming popular, he said, because “it’s so hard to hire people. Companies just want to know why people are leaving, and they want data about why people are leaving.”
How effective such systems really are at predicting who might leave — and whether the interventions suggested will always work to keep them — is still somewhat unknown, Kropp said. And some patterns the AI might turn up — for instance, that women of childbearing age tend to have higher turnover rates — might be tricky for managers to address.
But they may still offer an edge over the surprise office visit from an employee no one guessed was about to leave.
“There’s still always going to be a lot of art, and a lot of uncertainty,” Kropp said. “But it’s still better than a manager guessing.”
Researchers may still need a decade to make quantum computers powerful enough and reliable enough for groundbreaking industrial applications, but clearly quantum computing has gone from science fiction to nonfiction faster than most anyone expected.
“Whereas normal computers store information as either a 1 or a 0, quantum computers exploit two phenomena — entanglement and superposition — to process information.”
The result is computers that may one day “operate 100,000 times faster than they do today,” Wired magazine adds.
How many different ways can you seat 10 people? It turns out there are “3.6 million ways to arrange 10 people for dinner.”
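The dinner-party figure is just 10 factorial, rounded — a quick sanity check:

```python
import math

# Number of distinct orderings of 10 dinner guests: 10! = 10 x 9 x ... x 1
arrangements = math.factorial(10)
print(arrangements)  # 3628800, i.e. roughly 3.6 million
```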
Classical computers don’t solve “big versions of this problem very well at all.”
It’s just another reason China, the N.S.A., IBM, Intel, Microsoft and Google are now all racing — full of sweat — to build usable quantum systems.
“If I try to map a caffeine molecule problem on a normal computer, that computer would have to be one-tenth the volume of this planet in size,” said Arvind Krishna, head of research at IBM. “A quantum computer just three or four times the size of those we’ve built today should be able to solve that problem.”
Each time work gets outsourced or tasks get handed off to a machine, “we must reach up and learn a new skill or in some ways expand our capabilities.”
A reliable, large-scale quantum computer could transform industries from AI to chemistry, accelerating machine learning and engineering new materials, chemicals and drugs.
“People ask, ‘Well, is it a thousand times faster? Is it a million times faster?’ It all depends on the application. It could do things in a minute that we don’t know how to do classically in the age of the universe. For other types of tests, a quantum computer probably helps you only modestly or, in some cases, not at all.”
Qubits, on the other hand, are like coins spinning through the air in a coin toss, showing both sides at once.
The computing power of a data center stretching several city blocks could theoretically be achieved by a quantum chip the size of the period at the end of this sentence.
Unlike classical computers, quantum computers don’t test all possible solutions to a problem. Instead, they use algorithms to cancel out paths leading to wrong answers, leaving only paths to the right answer — and those algorithms work only for certain problems. This makes quantum computers unsuited for everyday tasks like surfing the web.
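The “canceling paths” idea is interference between amplitudes, and it can be seen with a single qubit. A minimal sketch in plain Python (no quantum library; a state is just a pair of amplitudes): one Hadamard gate puts |0⟩ into an equal superposition — the spinning coin — and a second makes the two paths to |1⟩ interfere destructively, returning the qubit to |0⟩ with certainty:

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state [a0, a1]."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    return [abs(a) ** 2 for a in state]

qubit = [1.0, 0.0]               # start in |0>
superposed = hadamard(qubit)     # equal superposition: both outcomes ~50/50
print(probabilities(superposed))

back = hadamard(superposed)      # the two paths to |1> cancel each other
print(probabilities(back))       # ~[1.0, 0.0]: back to |0> with certainty
```

Quantum algorithms such as Grover’s or Shor’s choreograph this same cancellation across many qubits so that only paths to the right answer survive — which is also why the trick works only for problems with the right structure.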
Quantum computers are also subject to high error rates, which has led some scientists and mathematicians to question their viability. Google and other companies say the solution is error-correction algorithms, but those algorithms require additional qubits to check the work of the qubits running computations. Some experts estimate that checking the work of a single qubit will require an additional 100.
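Under that estimate — roughly 100 checking qubits per working qubit, a figure the article attributes only to “some experts” — the overhead compounds quickly. A back-of-envelope calculation:

```python
# Hypothetical overhead from the article: ~100 error-correction qubits
# for every computational (logical) qubit.
OVERHEAD = 100

def physical_qubits(logical_qubits, overhead=OVERHEAD):
    """Physical qubits needed: each logical qubit plus its checkers."""
    return logical_qubits * (1 + overhead)

# Even a 49-qubit computation would need nearly 5,000 physical qubits.
print(physical_qubits(49))  # 4949
```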
Richard Feynman, a Nobel Prize-winning theoretical physicist, put it this way: “I think I can safely say that nobody understands quantum mechanics.”
Investment has surged, with projects under way at Google, Microsoft, IBM and Intel.
D-Wave’s $15 million 2000Q model is useful only for a narrow category of data analysis.
Companies and governments are scrambling to prepare for what some call Y2Q, the year a large-scale, accurate quantum computer arrives, which some experts peg at roughly 2026.
Documents leaked by former NSA contractor Edward Snowden in 2013 showed that the NSA is building its own quantum computer as part of an $80 million research program called Penetrating Hard Targets.
Experts believe quantum computers’ biggest near-term promise is to supercharge machine learning and AI, two rapidly growing fields — and businesses. Neven of Google says he expects all machine learning to be running on quantum computers within the decade.
In May, IBM unveiled a chip with 16 qubits.
John Martinis, Google’s head of quantum hardware, let slip that Google had a 22-qubit chip.
“If you were to vibrate this frame, you can actually see the temperature rise on the thermometer.”
Neven’s team in Southern California is racing to finish the 49-qubit chip.
Talking about Watson is a good way to trigger eye rolls from people in the machine learning and AI community. There’s widespread agreement that its triumph on the specific backward-question problem of Jeopardy! was notable. Making sense of language remains one of the biggest challenges in artificial intelligence. But IBM quickly turned Watson into an umbrella brand promising a bewildering variety of bold new applications, from understanding the emotional tone of tweets to scouring genomes for mutations. It bought startups and rebranded their wares as Watson, and touted cute but hardly lucrative projects like Watson-designed recipes and dresses. In one TV commercial, Watson chatted with Bob Dylan, confessing, “I have never known love.”
Critics say IBM executives overshot badly by allowing marketing messages to suggest that Watson’s Jeopardy! breakthrough meant it could break through on just about anything else. “The original system was a terrific achievement, there’s no question about that,” says Oren Etzioni, CEO of the Allen Institute for AI. “But they’ve really over-claimed what they can deliver in a big way; the only intelligent thing about Watson is their PR department.”
MD Anderson had walked away from more than $62 million and four years spent on contracts promising a Watson system to help oncologists treat patients. An internal audit reserved judgment on Watson’s intelligence but said the center had struggled to connect it with an upgraded medical records system. IBM maintains the system could have been deployed if MD Anderson had kept going; the center is now seeking a new partner to work with on applying AI to cancer care.