It’s no secret that IBM can help companies transform data analytics, build a better cloud and ladder up to AI. The IBM approach to data analytics incorporates three core principles: making data simple and accessible; building a trusted analytics foundation; and scaling insights on demand.
Rob Thomas, general manager of IBM Analytics, discusses these principles that guide the IBM Analytics business and have led to its latest major offering: IBM Cloud Private for Data, a platform for high-performance analytics that powers cloud-based applications so companies can be ready for AI.
What are your goals for IBM Analytics? It feels like 2018 is the year IBM Analytics is having its coming-out party.
We started talking to clients at the start of the year about the AI ladder, which is about the steps clients should take to get to an AI future. This is the best way to sum up our strategy in analytics, which provides the building blocks clients need to be ready for AI, so they can take advantage of AI both now and at scale in the future. That’s what we’re focused on for the rest of 2018 and for the foreseeable future.
What are your long-term bets over the next five to 10 years?
If you think broader on that time scale, we’ll be in a world of pervasive AI. One thing we plan to enable is putting AI and machine learning inside most products, applications and experiences.
We’re also thinking about SQL. In the future, the whole world will be a potential data source for SQL. You can capture data, whether it’s from IoT, a server, or any type of PC or smartphone. The idea of “SQL the world” is something I see taking hold.
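The “SQL the world” idea can be illustrated with a minimal sketch: once data from any feed is shaped into rows, plain SQL can query all of it at once. This example uses Python’s standard-library SQLite with hypothetical IoT and server-metric feeds; it is not an IBM product API, just an illustration of the concept.

```python
import sqlite3

# Hypothetical rows from two very different sources: an IoT sensor feed
# and a server metrics log. Once shaped into rows, both are SQL-queryable.
iot_readings = [("sensor-1", 21.5), ("sensor-2", 23.1)]
server_metrics = [("web-01", 0.72), ("web-02", 0.41)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (source TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", iot_readings)
conn.executemany("INSERT INTO readings VALUES (?, ?)", server_metrics)

# One SQL statement spans both feeds, regardless of where they came from.
rows = conn.execute(
    "SELECT source, value FROM readings WHERE value > 0.5 ORDER BY source"
).fetchall()
print(rows)
```

The same pattern generalizes: a federation layer translates one SQL dialect across many backing stores, so the query author never needs to know where each row physically lives.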
And “data as a utility” is a third. Our belief is that it should be easy to virtualize and provision all data instantly.
A fourth bet is the convergence between DevOps and data science whereby organizations can automate any task they want on demand.
What are the essential principles guiding IBM Analytics?
I see four essentials:
- Containers. I think containers will help revolutionize data and applications. The next generation of data and apps will emerge on the basis of containers.
- Enterprises will modernize their data architectures — but it will happen enterprise-out. It won’t be companies taking their data, moving it somewhere else and modernizing it. Instead you’ll be able to modernize your data from behind the firewall and move outward toward the cloud.
- One architecture across public cloud and private cloud. Clients shouldn’t have to choose one or the other.
- AI and machine learning for pervasive automation.
How do our essential principles set the stage for our long-term bets?
Our core mission is to make data simple and accessible. The belief is that everybody in every company should have access to all the data they need on a moment’s notice to make more informed decisions. That’s ultimately why we exist, and we give you the key steps to help you get there and on to AI. If you have pervasive AI, you realize data is simple and accessible.
It goes back to our belief in the power of containers. Everything we did with ICP for Data was built on containers. Containers and microservices are easy to deploy and you can more easily achieve value. The assumption with ICP for Data was that clients will modernize their data architectures. We’re essentially bringing the public cloud to data.
We’ll bring the public cloud to your data, because that’s where you’re most comfortable having your data today. As you’re ready to move to the public cloud, we’ll make that easy because we’ll be on the same architecture and you haven’t sacrificed anything for the future.
ICP for Data has all the components and building blocks you need for AI. We’re not talking about 12- to 36-month consulting projects here. The containers are designed to install in hours. Clients can then get their first experience within a day. It’s a different approach to talking about data.
What are some things that people might not know about ICP for Data?
Anytime somebody hears about a big data project behind the firewall, they assume it will take multiple years. But we’re talking hours. That’s one thing that’s unique.
Second, the “ah-ha moment” for most clients is when they connect it to a few data sources and suddenly have visibility into all their enterprise data. We give you a single view into any kind of data, no matter where it lives. I think that’s a surprise to people.
Thirdly, when you connect all your enterprise data you can build machine learning models to automate your business, and you can train them on all your data. So your models become that much more valuable as you put them into production.
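The point about training on all your data can be sketched with a toy model. In this hypothetical example, two business units each hold part of the training data; connecting both lets a simple least-squares fit learn from everything rather than one silo. The data values and unit names are invented for illustration.

```python
# Hypothetical (feature, target) pairs held in two separate silos.
unit_a = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
unit_b = [(4.0, 8.1), (5.0, 9.8), (6.0, 12.3)]

def fit_slope(points):
    """Least-squares slope through the origin: sum(x*y) / sum(x^2)."""
    return sum(x * y for x, y in points) / sum(x * x for x, _ in points)

partial_model = fit_slope(unit_a)          # trained on one silo only
full_model = fit_slope(unit_a + unit_b)    # trained on all connected data
print(partial_model, full_model)
```

The same principle holds for real machine learning models: a model fit on a single silo reflects only that slice of the business, while one trained across every connected source generalizes to the whole enterprise.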
What kinds of use cases are you hoping to see come out of this?
Since we announced this formally in March, five dominant use cases have emerged so far:
- Collecting and organizing all your data — seeing all your enterprise data.
- Accelerating the journey to AI using machine learning.
- Empowering teamwork. We make data a team sport. A Python developer can work with an R developer inside IBM Cloud Private for Data.
- Modernizing workloads, extending a Teradata environment with MongoDB or with PostgreSQL and using more modern tools for data.
- Supporting compliance readiness, including GDPR.
Why does ICP for Data matter today?
Everybody wants the flexibility and agility of the public cloud, but not everybody is ready to put their data there. This enables you to bring that capability to your data. So now we have ICP for Data and the IBM Data Science Experience, plus the Data Science Elite Team to bring it all together.
Many of our clients identify themselves as being between the stages of “data dashboards for performance” and “data used to drive decision making” while a much smaller percent of clients are using machine learning to create new business models. Are you expecting to see that balance shift?
Absolutely. The question is rate and pace, but I’m not sure anybody can really predict that. I will tell you that the biggest difference in client discussions I have now is that a year ago, everybody would say we’re focused on the left side: data warehousing and modernizing our data environment. Now everybody wants to talk about the right side: how do I get to self-service analytics, how do I get to machine learning and AI?
What we learned from the Data Science Elite Team is that aspirations are high, but most organizations’ ability to execute is low. The DS Elite Team augments that and brings a set of capabilities that says: look, if you want to use SPSS, we can use that; if you want to use Data Science Experience, we can use that; if you want to build a model and extend it with Watson APIs, we can do that. It requires a level of skill most organizations don’t have enough of. What we go in and do is say, “let’s get you to your first big win.”
I see very few companies that have actually staffed up a DS team of experts whose only job is to help clients be successful with data science. And, by the way, we don’t charge for it. It’s a pretty compelling offering. If you look at some of the early use cases we’ve done, clients have gotten tangible value in only a few weeks of time. I think that’s pretty unique in our approach.
With demand for data scientists growing, how is IBM poised to fulfill this demand?
We’re certainly doing more in universities, building out curriculum and training. I think that’s one role we play. A lot of data science tasks, about 90 percent, are about finding and preparing data. With ICP for Data, we can help automate the vast majority of that. Suddenly you’ve created a lot more capacity for data scientists to tackle their organization’s larger challenges.
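The kind of data preparation being automated might look like this minimal sketch: profiling raw records, rejecting incomplete ones, and normalizing types so data scientists start from clean input. The field names and values here are hypothetical, not drawn from any IBM tool.

```python
# Hypothetical raw records as they might arrive from a source system:
# inconsistent whitespace, numbers stored as strings, missing values.
raw_records = [
    {"customer": "acme", "revenue": "1200"},
    {"customer": "globex", "revenue": None},     # incomplete: dropped
    {"customer": " initech ", "revenue": "850.5"},
]

def prepare(records):
    """Drop incomplete rows and normalize field formats and types."""
    cleaned = []
    for rec in records:
        if rec["revenue"] is None:                 # reject incomplete rows
            continue
        cleaned.append({
            "customer": rec["customer"].strip(),   # normalize whitespace
            "revenue": float(rec["revenue"]),      # normalize types
        })
    return cleaned

clean = prepare(raw_records)
print(clean)
```

Automating rules like these across every incoming source is what frees up the roughly 90 percent of effort normally spent on finding and preparing data.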
CIOs looking to increase their data maturity have the ability with IBM to harness their existing on-premises data while conducting advanced analytics in the cloud. Schedule a complimentary 30-minute consultation with an IBM Cloud Private for Data Expert to find out how.