Red Hat CEO Says Acquisition by IBM Will Help Spur More Open-Source Innovation

‘This is two cultures working together, not coming together,’ says Jim Whitehurst

International Business Machines Corp.’s recent acquisition of Red Hat Inc. is aimed squarely at building up its cloud business—in part by making it easier for IBM customers to use competing cloud services.

Red Hat’s open-source software enables chief information officers and other enterprise IT managers to run applications both within their own data centers and across a range of third-party providers, from IBM’s own cloud to Amazon.com Inc.’s AWS, Microsoft Corp.’s Azure, or any other tech company that rents computer software and systems to businesses online.

“Public clouds are a big partner of ours,” says Red Hat Chief Executive Jim Whitehurst, “but they are basically saying ‘Come into my world and use all of my stuff.’”

Mr. Whitehurst says the ability to shift business applications between different cloud providers—known as a hybrid cloud strategy—is proving popular with CIOs as a way to minimize the risk of relying on a single tech service to handle a company’s entire information-technology system.

It also lets them shop around for a wider mix of cloud-based tools and emerging capabilities spread across an increasingly competitive market, Mr. Whitehurst says.

Revenue from IBM’s cloud business climbed 5% year-over-year in the second quarter, even as the tech giant posted its fourth straight quarter of declining overall revenue. That growth, however, remains far behind the pace of revenue growth set by Amazon and Microsoft.

IBM’s $34 billion deal to buy Red Hat, which closed in July, seeks to boost its standing in the market by drawing more businesses to hybrid cloud strategies.

Mr. Whitehurst spoke with CIO Journal on Tuesday about Red Hat’s role in that strategy, along with its place within IBM. Edited excerpts are below:

Some industry analysts worry about Red Hat’s independence under IBM. How is that working relationship shaking out?

I think the reason a lot of acquisitions fail is that there’s a reason somebody wants to sell. Generally, the reason they want to sell probably doesn’t mean they think there’s this wonderful, rosy future. Our view is that we’re at the beginning of a fundamental change in how technology is going to be built and consumed that is driven by open source.

Could we do it by ourselves? Maybe. But I think we thought the probability was small and being part of a bigger company was something we really wanted. Speaking for the senior team at Red Hat, we’re all still in and driving this. I think we have a real opportunity.

How is the cultural fit?

I’m the only dual-badged employee and that is absolutely intentional. We are a separate stand-alone subsidiary. This is two cultures working together, not coming together. There is a clear recognition from both organizations that we have very strong cultures and very different ways of operating that generate different capabilities.

The great news is that those capabilities are highly complementary. Typically if you jam things together you get the worst of both, not the best of both. And so we try to be really clear that we are separate companies. Let’s recognize the power of the other model and let’s work together.

Why is hybrid cloud compelling to CIOs and how does Red Hat help achieve it?

Not only does it reduce costs, it allows for a greater pace of innovation to occur, because you’re not hamstrung by all these incompatible vertical stacks. While there are other people talking about it, we are the only ones to deliver a hybrid platform.

This is why IBM bought us. OpenShift, which is our container platform, is the only way to have a consistent container platform that runs on any of the major clouds. You write code once and you can run it anywhere. That’s what’s compelling. I can innovate faster and protect myself from being locked in.

How does all this foster innovation?

The beauty of the Red Hat model is we get to sit and observe and dip our toe into a whole set of technologies and see what emerges. The majority of innovation is happening in open source, without a doubt. When something emerges as the next big thing, we’re already more deeply involved in it.

Writing software and giving it away is a really bad business model. That is not what we do. We get involved in existing open-source communities and then we help commercialize and make it consumable. Open source is user-driven innovation. It’s users that have an issue and they solve that issue themselves. Big data emerged not because someone sat back and said we need a way to access data at scale. It was iterative issues that Facebook had, and Google had, and Yahoo had, and slowly built over time.

Big Ocean Cargo Carriers Join Blockchain Initiative

The addition of France’s CMA CGM SA and Switzerland-based Mediterranean Shipping Company to the effort, called TradeLens, means the three carriers that control nearly half of all seaborne containerized cargo capacity will work to make the movement of freight in international supply chains more transparent, potentially generating substantial annual savings.

.. For ocean carriers, the blockchain technology allows trusted participants to share information as goods move through supply chains. The system also promises to reduce the cost of paperwork. Maersk said the cost of the documentation required to process and administer many of the goods shipped each year can reach roughly one-fifth of the actual physical transportation costs.

Widespread participation across the supply chain is key to making TradeLens work, however. Many companies, including transportation operators and freight forwarders that manage the flow of goods, have been reluctant to share data on common platforms.

.. “The fact that CMA CGM is now on two platforms means blockchain solutions in shipping won’t be a winner-take-all, but there may be room for a couple of competing platforms,” said Lars Jensen, chief executive of Copenhagen-based SeaIntelligence Consulting.

.. The blockchain pact comes as Maersk and IBM are trying to reinvent themselves.

IBM has been looking to new lines of business, including blockchain, as sales in its legacy business of selling hardware and software slow. Maersk has been trying to expand its business from port-to-port shipping into more of an integrated logistics provider like FedEx Corp.

The new way your boss can tell if you’re about to quit your job

IBM wants to keep its employees from quitting, and it’s using artificial intelligence to do it.

In a recent CNBC interview, chief executive Ginni Rometty said that thanks to AI, the tech and consulting giant can predict with 95 percent accuracy the employees who are likely to leave in the next six months. The “proactive retention” tool — which IBM uses internally but is also selling to clients — analyzes thousands of pieces of data and then nudges managers toward the employees who may be on their way out, telling them to “do something now so it never enters their mind,” Rometty said.

IBM’s effort to use AI to learn who might quit is one of the more high-profile recent examples of the way data science, “deep learning” and “predictive analytics” are increasingly infiltrating the traditionally low-tech human resources department, arming personnel chiefs with more rigorous tools and hard data around the tricky art of managing people.

.. Almost every Fortune 100 company, said Brian Kropp, group vice president for advisory firm Gartner’s HR practice, has a head of “talent analytics” and a team of data scientists in human resources.

“Compare that to three years ago, when there were maybe 10 to 15 percent that had a named and known head of talent analytics,” said Kropp, whose firm counts IBM as a client. “It’s the fastest-growing job in HR.”

.. Analysts say retention, in particular, is a critical area for the application of artificial intelligence. For one, there’s a clear event that happens — someone quits and leaves the company, or threatens to — that helps data scientists seek patterns for intervening.

“The person was here, and then the person was not here,” Kropp said. “It is where the more sophisticated analytics work in HR is going.”

Meanwhile, especially in a labor market with an unemployment rate below 4 percent and a near-record rate of people quitting their jobs for new gigs, there are increasing worries about the high cost of not keeping great employees. The cost of hiring a replacement, Kropp said, is about half that person’s salary.

IBM’s use of AI in HR, which began in 2014, comes at a time when the 108-year-old company has been trying to shift its massive 350,000-person workforce to the most current tech skills, and it includes 18 different AI deployments across the department. Diane Gherson, IBM’s chief human resources officer, said in an interview that using tech to predict who might leave — considering thousands of factors such as job tenure, internal and external pay comparisons, and recent promotions — was the first area the department focused on.

“It was an obvious issue,” she said. “We were going out and replacing people at a huge premium.”

IBM had already been using algorithms and testing hypotheses about who would leave and why. Simple factors, such as the length of an employee’s commute, were helpful but only so telling.

“You can’t possibly come up with every case,” Gherson said. “The value you get from AI is it doesn’t rely on hypotheses being developed in advance; it actually finds the patterns.”
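To see why a single hand-built hypothesis is "only so telling," consider a deliberately tiny, made-up illustration (nothing here reflects IBM's actual system or data): testing just the commute-length hypothesis on toy records catches some patterns while inevitably missing the many other factors a machine-learned model would weigh.

```python
# Toy illustration (hypothetical data, not IBM's system): testing one
# hand-built hypothesis -- that longer commutes correlate with quitting.
records = [
    {"commute_min": 10, "quit": False},
    {"commute_min": 15, "quit": False},
    {"commute_min": 20, "quit": False},
    {"commute_min": 55, "quit": True},
    {"commute_min": 70, "quit": True},
    {"commute_min": 75, "quit": False},
]

def quit_rate(rows):
    """Fraction of employees in this group who quit."""
    return sum(r["quit"] for r in rows) / len(rows)

short = [r for r in records if r["commute_min"] <= 30]
long_ = [r for r in records if r["commute_min"] > 30]
print(quit_rate(short))  # 0.0
print(quit_rate(long_))  # 2 of 3 quit
```

The hypothesis holds in this toy sample, but each such rule must be imagined and coded in advance, which is precisely the limitation Gherson describes: a pattern-finding model does not need the analyst to guess the rule first.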

For instance, the system spotted one software engineer who hadn’t been promoted at the same rate as three female peers who all came from the same top university computer science program. The women had all been at IBM for four years but worked in different parts of the sprawling company. While her manager didn’t know she was comparing herself to these women, the engineer was all too aware her former classmates had been promoted and she hadn’t, Gherson said. After the risk was flagged, she was given more mentoring and stretch assignments, and she remains at IBM.

While the program urges managers to intervene for employees who have hard-to-find skills — offering them raises, public recognition or promotions — potential quitters that the system identifies as having less-valuable skills or who are low performers don’t necessarily get the same response.

“Our universe for doing this is not the whole IBM universe, and does not include low performers,” Gherson said. “The ones who are in high demand today and high demand tomorrow are going to be the ones we treat with a very high-touch” response.

IBM does not analyze or monitor employees’ email, external social media accounts or other internal message boards as part of its predictions on who has one foot out the door. But some start-ups have scraped publicly available LinkedIn data, for instance, to predict probable departures.

Meanwhile, other vendors have recently begun analyzing employee engagement scores, saying lower scores can give companies a nine-month heads-up about groups of workers that might be at risk of leaving. Josh Bersin, an industry analyst who focuses on HR technology, said some companies have taken a high-level look at email to make predictions.

He recently wrote about how some firms have studied email “metadata” and communication patterns, finding that people who quit were less engaged in their email for up to six months before leaving.

“Predictive attrition” methods are becoming popular, he said, because “it’s so hard to hire people. Companies just want to know why people are leaving, and they want data about why people are leaving.”

How effective such systems really are at predicting who might leave — and whether the interventions suggested will always work to keep them — is still somewhat unknown, Kropp said. And some patterns the AI might turn up — for instance, that women of childbearing age tend to have higher turnover rates — might be tricky for managers to address.

But they may still offer an edge over the surprise office visit from an employee no one guessed was about to leave.

“There’s still always going to be a lot of art, and a lot of uncertainty,” Kropp said. “But it’s still better than a manager guessing.”

While You Were Sleeping

 They still may need a decade to make this computer powerful enough and reliable enough for groundbreaking industrial applications, but clearly quantum computing has gone from science fiction to nonfiction faster than most anyone expected.

.. “Whereas normal computers store information as either a 1 or a 0, quantum computers exploit two phenomena — entanglement and superposition — to process information,”
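The superposition idea can be made concrete with a minimal single-qubit simulation (an illustrative sketch on a classical machine, not tied to any vendor's hardware): a qubit's state is a pair of amplitudes, and a Hadamard gate turns a definite 0 into an equal mix of 0 and 1.

```python
import math

# Minimal single-qubit state-vector sketch (illustrative only).
# A qubit state is two amplitudes; measurement probabilities are
# the squared magnitudes of those amplitudes.
zero = [1.0, 0.0]  # the definite state |0>

def hadamard(state):
    """Apply a Hadamard gate, creating an equal superposition from |0>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    return [amp ** 2 for amp in state]

superposed = hadamard(zero)
print(probabilities(superposed))  # ~[0.5, 0.5]: "both at once" until measured
```

Entanglement, the second phenomenon in the quote, links the states of multiple qubits so they cannot be described independently; simulating that requires a state vector that doubles in size with each added qubit, which is exactly why classical machines fall behind.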

.. The result is computers that may one day “operate 100,000 times faster than they do today,” adds Wired magazine.

.. How many different ways can you seat 10 people? It turns out, she explained, there are “3.6 million ways to arrange 10 people for dinner.”
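The "3.6 million" figure is simply 10 factorial, which a one-liner confirms:

```python
import math

# Number of distinct orderings of 10 dinner guests: 10! = 3,628,800
arrangements = math.factorial(10)
print(arrangements)  # 3628800, i.e. roughly 3.6 million
```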

.. Classical computers don’t solve “big versions of this problem very well at all,”

.. It’s just another reason China, the N.S.A., IBM, Intel, Microsoft and Google are now all racing — full of sweat — to build usable quantum systems.

.. “If I try to map a caffeine molecule problem on a normal computer, that computer would have to be one-tenth the volume of this planet in size,” said Arvind Krishna, head of research at IBM. “A quantum computer just three or four times the size of those we’ve built today should be able to solve that problem.”

.. Each time work gets outsourced or tasks get handed off to a machine, “we must reach up and learn a new skill or in some ways expand our capabilities.”

How Google’s Quantum Computer Could Change the World

The ultra-powerful machine has the potential to disrupt everything from science and medicine to national security—assuming it works

 A reliable, large-scale quantum computer could transform industries from AI to chemistry, accelerating machine learning and engineering new materials, chemicals and drugs.

..  “People ask, ‘Well, is it a thousand times faster? Is it a million times faster?’ It all depends on the application. It could do things in a minute that we don’t know how to do classically in the age of the universe. For other types of tests, a quantum computer probably helps you only modestly or, in some cases, not at all.”

.. Qubits, on the other hand, are like coins spinning through the air in a coin toss, showing both sides at once.

.. The computing power of a data center stretching several city blocks could theoretically be achieved by a quantum chip the size of the period at the end of this sentence.

.. Unlike classical computers, quantum computers don’t test all possible solutions to a problem. Instead, they use algorithms to cancel out paths leading to wrong answers, leaving only paths to the right answer—and those algorithms work only for certain problems. This makes quantum computers unsuited for everyday tasks like surfing the web

.. Quantum computers are also subject to high error rates, which has led some scientists and mathematicians to question their viability. Google and other companies say the solution is error-correction algorithms, but those algorithms require additional qubits to check the work of the qubits running computations. Some experts estimate that checking the work of a single qubit will require an additional 100.
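That overhead compounds quickly. Taking the estimate quoted here at face value (roughly 100 checking qubits per computational qubit, which is an assumption, since real error-correction ratios vary by scheme), a back-of-the-envelope count looks like this:

```python
# Back-of-the-envelope qubit overhead, using the rough estimate quoted
# in the article of ~100 error-correction qubits per logical qubit.
CHECK_QUBITS_PER_LOGICAL = 100

def physical_qubits_needed(logical_qubits):
    """Logical (computational) qubits plus their error-checking overhead."""
    return logical_qubits * (1 + CHECK_QUBITS_PER_LOGICAL)

# Even a modest 50-logical-qubit machine would need thousands of
# physical qubits under this estimate.
print(physical_qubits_needed(50))  # 5050
```

Against the 16- and 22-qubit chips mentioned later in this digest, the gap between today's hardware and an error-corrected machine becomes clear.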

.. Richard Feynman, a Nobel Prize-winning theoretical physicist, put it this way: “I think I can safely say that nobody understands quantum mechanics.”

Feynman was one of the first to introduce the idea of a quantum computer. In a 1981 lecture

.. investment has surged, with projects under way at Google, Microsoft, IBM and Intel Corp

..  D-Wave ..  the company’s $15 million 2000Q model is useful only for a narrow category of data analysis

.. Companies and governments are scrambling to prepare for what some call Y2Q, the year a large-scale, accurate quantum computer arrives, which some experts peg at roughly 2026

.. Documents leaked by former NSA contractor Edward Snowden in 2013 showed that the NSA is building its own quantum computer as part of an $80 million research program called Penetrating Hard Targets

.. Experts believe their biggest near-term promise is to supercharge machine learning and AI, two rapidly growing fields—and businesses. Neven of Google says he expects all machine learning to be running on quantum computers within the decade.

.. In May, IBM unveiled a chip with 16 qubits

.. John Martinis, Google’s head of quantum hardware, let slip that Google had a 22-qubit chip.

.. “If you were to vibrate this frame, you can actually see the temperature rise on the thermometer.”

..

Google and its peers will likely sell quantum computing via the cloud, possibly charging by the second.

.. Neven’s team in Southern California is racing to finish the 49-qubit chip

Watson Won Jeopardy, but Is It Smart Enough to Spin Big Blue’s AI Into Green?

Talking about Watson is a good way to trigger eye rolls from people in the machine learning and AI community. There’s widespread agreement that its triumph on the specific backward-question problem of Jeopardy! was notable. Making sense of language remains one of the biggest challenges in artificial intelligence. But IBM quickly turned Watson into an umbrella brand promising a bewildering variety of bold new applications, from understanding the emotional tone of tweets to scouring genomes for mutations. It bought startups and rebranded their wares as Watson and touted cute but hardly lucrative projects like Watson-designed recipes and dresses. In one TV commercial, Watson chatted with Bob Dylan, confessing “I have never known love.”

Overhyped

Critics say IBM executives overshot badly by allowing marketing messages to suggest that Watson’s Jeopardy! breakthrough meant it could break through on just about anything else. “The original system was a terrific achievement, there’s no question about that,” says Oren Etzioni, CEO of the Allen Institute for AI. “But they’ve really over-claimed what they can deliver in a big way; the only intelligent thing about Watson is their PR department.”

.. In fact, like all the AI systems in use today, Watson needs to be carefully trained with example data to take on a new kind of problem. The work needed to curate and label the necessary data has been a drag on some projects using IBM’s system. Ashok Goel, a computer science professor at Georgia Institute of Technology, got written up in The Wall Street Journal and Backchannel after building a Watson bot to answer questions from students in his online course on artificial intelligence. But its performance was limited by the amount of manual labeling of data needed. “It had fairly high precision, but it did not answer a very large number of questions,” Goel says.
.. MD Anderson had walked away from more than $62 million and four years spent on contracts promising a Watson system to help oncologists treat patients. An internal audit reserved judgment on Watson’s intelligence but said the center had struggled to connect it with an upgraded medical records system. IBM maintains the system could have been deployed if MD Anderson had kept going; the center is now seeking a new partner to work with on applying AI to cancer care.