
IBM redesigned its business intelligence platform, now called IBM Cognos Analytics. Expected to be released by the end of 2015, the new version includes features to help end users model their own data without IT assistance while maintaining the centralized governance and security that the platform already has. Our benchmark research into information optimization shows that simplifying access to information is important to virtually all (97%) participating organizations, but it also finds that only one in four (25%) are satisfied with their current software for doing that. Simplification is a major theme of the IBM Cognos redesign.

The new IBM Cognos Analytics provides a completely Web-based environment that is consistent in user interface and security across multiple devices and browsers. The redesigned interface follows IBM’s internal cultural shift to base product development first on the user experience and second on features and functionality. This may be a wise move, as our research across multiple analytic software categories finds that usability is the buying criterion organizations most often rank as important.

The redesign is based on the same design and self-service principles as IBM Watson Analytics, to which we presented a 2015 Ventana Research Technology Innovation Award in business analytics. The redesign is most evident in the IBM Cognos Analytics authoring mode. The Report Studio and Cognos Workspace Advanced modules have been replaced with a simplified Web-based modeling environment. The extended capabilities of IBM Cognos 10.2.2 are still available, but they are now tucked away and arranged more logically so that users can reach them when needed. For example, the previous version of Cognos presented an intimidating display of tools for tasks such as fine-grained manipulation of reports; now these features are hidden but still easily accessible. If a user has difficulty finding a particular function, a “smart search” feature helps locate the correct menu to add it.

The new system indexes objects, including metadata, as they are created, providing a more robust search function suitable for nontechnical users in the lines of business. The search feature works with what IBM calls “intent-based modeling,” so users can search for words or phrases – for example, revenue by unit or product costs – and be presented with only relevant tables and columns. The system can then automatically build a model by inferring relationships in the data. The result is that the person building the report need not manually design a multidimensional model of the data, so less skilled end users can serve themselves to build the data models that underpin dashboards and reports. Previously, end users were limited to parameterized reporting, in which they could work only within the context of models previously designed by IT. Many analytics vendors have been late in exploiting the power of search and therefore may be missing a critical feature that customers desire. Ventana Research is a proponent of such capabilities; my colleague Mark Smith has written about them in the context of data discovery technology. Search is fundamental to user-friendly discovery systems, as the success of companies such as Google and Splunk reflects. As search becomes more sophisticated and incorporates machine-learning algorithms, we expect it to become a key requirement for new analytics and business intelligence systems.
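To make the idea concrete, here is a minimal sketch of keyword search over indexed metadata that narrows a phrase such as “revenue by unit” to candidate tables and columns. It is only an illustration of the general approach, not IBM’s implementation; the catalog, table names and column names are invented.

```python
# Hypothetical metadata catalog; in a real system this index would be built
# as objects and their metadata are created.
CATALOG = {
    "sales_fact": ["order_id", "product_id", "unit", "revenue", "order_date"],
    "product_dim": ["product_id", "product_name", "product_cost", "category"],
    "customer_dim": ["customer_id", "region", "segment"],
}

def intent_search(phrase):
    """Return only the tables and columns whose names match words in the phrase."""
    tokens = {token.lower() for token in phrase.split()}
    hits = {}
    for table, columns in CATALOG.items():
        matched = [col for col in columns if any(tok in col.lower() for tok in tokens)]
        if matched:
            hits[table] = matched
    return hits

print(intent_search("revenue by unit"))   # -> {'sales_fact': ['unit', 'revenue']}
print(intent_search("product costs"))     # narrows to the product-related columns
```

From the matched columns, a modeling layer could then infer join relationships on shared keys such as product_id to assemble the underlying data set automatically.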

Furthering the self-service aspect is the ability for end users to access and combine multiple data sets. The previous version of IBM Cognos (10.2.2) allowed users to work with “personal data sets” such as .csv files, but they needed an IBM DB2 back end to house the files. Now such data sets can be uploaded and managed directly on the IBM Cognos Analytics server and accessed with the new Web-based authoring tool. Once data sets are uploaded they can be accessed and modeled like any other object to which the user has access. In this way, IBM Cognos Analytics addresses the “bring your own data” challenge in which data sources such as personal spreadsheets must be integrated into enterprise analytics and business intelligence systems.

After modeling the data, users can lay out new dashboards using drag-and-drop capabilities like those found in IBM Watson Analytics. Dashboards can be previewed and put into service for one-time use or, if the user has the privileges, put into production. As is the case with IBM Watson Analytics, newly designed dashboard components such as tables, charts and maps are automatically linked so that changes in one part of the dashboard automatically carry through to the other parts. This makes dashboards easier to design. Some other tools in the market require widgets to be connected manually, which can be time-consuming and impedes rapid prototyping of dashboards.

The move to a more self-service orientation has long been in the works for IBM Cognos, so this release is an important one for IBM. The ability to automatically integrate and model data gives the IT department a more defensible position as other self-service tools are introduced into the organization and challenge the data access and preparation built into tools like IBM Cognos. This is becoming especially important as data sources grow in number and complexity and the business needs them more rapidly. Our research into information optimization shows that most organizations need to integrate at least six data sources and some have 20 or more sources they need to bring together. All of this confirms our data and analytics in the cloud benchmark research, which finds data preparation to be a top priority in over half (55%) of organizations.

Over time, IBM intends to integrate the capabilities of Cognos Analytics with those of Watson Analytics. This is an important plan because IBM Watson Analytics has capabilities beyond those of self-service tools in the market today. In particular, the ability to explore unknown data relationships and do advanced analysis is a key differentiator for IBM Watson Analytics, as I have written. IBM Watson Analytics enables users to explore relationships in data that otherwise would not be noticeable, whereas IBM Cognos Analytics enables them to explore and put into production information based on predefined assumptions.

Going forward, I will be watching how IBM aligns Cognos Analytics with Watson Analytics and, in particular, how Cognos Analytics will fit into the IBM cloud ecosystem. Currently IBM Cognos Analytics is offered both on-premises and in a hosted cloud, but here also IBM is working to align it more closely with IBM Watson Analytics. Bringing in data preparation, data quality and MDM capabilities from the IBM DataWorks product could also benefit IBM Cognos Analytics users. IBM should emphasize the breadth of its portfolio of products, including IBM Cognos TM1, IBM SPSS, IBM Watson Analytics and IBM DataWorks, as it faces stiff competition in enterprise analytics and business intelligence from a host of analytics companies, including new cloud-based ones. IBM is rated a Hot Vendor in our Ventana Research Analytics and Business Intelligence Value Index in part because of its overall portfolio.

For organizations already using IBM Cognos, the redesign addresses the need of end users to create their own dashboards while maintaining IT governance and control. The new interface may take some getting used to, but it is modern and more intuitive than its predecessor. For companies new to IBM Cognos, as well as departments wanting to evaluate the platform, the cloud options offer less risk. For those wanting early access to the new IBM Cognos Analytics, IBM has made it available at www.analyticszone.com. The changes I have noted move IBM Cognos Analytics closer to the advances in analytics as a whole, and I recommend that all these groups examine the new version.

Regards,

Ventana Research

Splunk’s annual user conference, this year called .conf 2015 and held in late September, hosted almost 4,000 Splunk customers, partners and employees. It is one of the fastest-growing user conferences in the technology industry. The area dedicated to Splunk partners has grown from a handful of booths a few years ago to a vast showroom floor many times larger. While the conference’s main announcement was the release of Splunk Enterprise 6.3, its flagship platform, the progress the company is making in the related areas of machine learning and the Internet of Things (IoT) most caught my attention.

Splunk’s strength is its ability to index, normalize, correlate and query data throughout the technology stack, including applications, servers, networks and sensors. It uses distributed search that enables correlation and analysis of events across local- and wide-area networks without moving vast amounts of data. Its architectural approach unifies cloud and on-premises implementations and provides extensibility for developers building applications. Originally, Splunk provided an innovative way to troubleshoot complex technology issues, but over time new uses for Splunk-based data have emerged, including digital marketing analytics, cyber security, fraud prevention and connecting digital devices in the emerging Internet of Things. Ventana Research has covered Splunk since its establishment in the market, most recently in this analysis of mine.

Splunk’s experience in dealing directly with distributed, time-series data and processes on a large scale puts it in position to address the Internet of Things from an industrial perspective. This sort of data is at the heart of large-scale industrial control systems, but it often arrives in different formats and over different protocols. For instance, sensor technology and control systems invented 10 to 20 years ago use very different technology than modern systems. Furthermore, as with computer technology, there are multiple layers in stack models that have to communicate. Splunk’s tools help engineers and systems analysts cross-reference these disparate systems in the same way that it queries computer system and network data, even though the underlying systems can be vastly different. To address this challenge, Splunk turns to its partners and its extensible platform. For example, Kepware has developed plug-ins that use its more than 150 communication drivers so users can stream real-time industrial sensor and machine data directly into the Splunk platform. Currently, the primary value drivers for organizations in the industrial IoT are operational efficiency, predictive maintenance and asset management. At the conference, Splunk showcased projects in these areas, including one with Target that uses Splunk to improve operations in robotics and manufacturing.

For its part, Splunk is taking a multipronged approach by acquiring companies, investing in internal development and enabling its partner ecosystem to build new products. One key enabler of its approach to IoT is machine learning algorithms built on the Splunk platform. In machine learning a model can use new data to continuously learn and adapt its answers to queries. This differs from conventional predictive analytics, in which users build models and validate them based on a particular sample; the model does not adapt over time. With machine learning, for instance, if a piece of equipment or an automobile shows a certain optimal pattern of operation over time, an algorithm can identify that pattern and build a model for how that system should behave. When the equipment begins to act in a less optimal or anomalous way, the system can alert a human operator that there may be a problem, or in a machine-to-machine situation, it can invoke a process to solve the problem or recalibrate the machine.
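As a simple illustration of this learn-the-pattern-then-alert idea – a minimal sketch with made-up sensor readings and an assumed three-standard-deviation threshold, not anything Splunk ships – consider the following:

```python
# Learn a "normal" operating pattern from historical readings, then flag values
# that deviate strongly from it. Data and thresholds are illustrative only.
import statistics

def learn_baseline(readings):
    """Learn a simple baseline (mean and standard deviation) from history."""
    return statistics.mean(readings), statistics.stdev(readings)

def is_anomalous(value, baseline, z_threshold=3.0):
    """Flag a reading that deviates more than z_threshold std devs from normal."""
    mean, stdev = baseline
    return abs(value - mean) > z_threshold * stdev

# Historical vibration readings from a pump operating normally (made-up numbers).
history = [0.51, 0.49, 0.50, 0.52, 0.48, 0.50, 0.51, 0.49]
baseline = learn_baseline(history)

for reading in [0.50, 0.53, 0.71]:   # the last value is clearly off-pattern
    if is_anomalous(reading, baseline):
        print(f"ALERT: reading {reading} is outside the learned operating pattern")
```

In a machine-to-machine setting, the alert branch would instead trigger a recalibration or maintenance workflow rather than notify a human operator.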

Machine learning algorithms allow event processes to be audited, analyzed and acted upon in real time. They enable predictive capabilities for maintenance, transportation and logistics, and asset management and can also be applied in more people-oriented domains such as fraud prevention, security, business process improvement and digital products. IoT potentially can have a major impact on business processes, but only if organizations can realign their systems toward discover-and-adapt rather than model-and-apply approaches. For instance, processes are often carried out unevenly, in ways that differ from how the model was conceived and communicated through complex process documentation and systems. As more process flows are directly instrumented and more processes are carried out by machines, the ability to model directly from the discovery of those event flows and to adapt to them (through human learning or machine learning) becomes key to improving organizational processes. Such realignment of business processes, however, often involves broad organizational transformation. Our benchmark research on operational intelligence shows that challenges associated with people and processes, rather than information and technology, most often hold back organizational improvement.

Two product announcements made at the conference illuminate the direction Splunk is taking with IoT and machine learning. The first is User Behavior Analytics (UBA), based on its acquisition of Caspida, which produces advanced algorithms that can detect anomalous behavior within a network. Such algorithms can model internal user behavior, and when behavior deviates from the established norm, they can generate an alert that can be addressed through investigative processes using Splunk Enterprise Security 4.0. Together, Splunk Enterprise Security 4.0 and UBA won the 2015 Ventana Research CIO Innovation Award. The acquisition of Caspida shows that Splunk is not afraid to acquire companies in niche areas where it can exploit its platform to deliver organizational value. I expect we will see more such acquisitions of companies with high-value machine learning algorithms as Splunk carves out specific positions in these emerging markets.

The other product announced is IT Service Intelligence (ITSI), which highlights machine learning algorithms alongside Splunk’s core capabilities. ITSI is an application in which end users deploy machine learning to see patterns in various IT service scenarios. It can inform and enable multiple business uses such as predictive maintenance, churn analysis, service-level agreements and chargebacks. Similar to UBA, it uses anomaly detection to point out issues and enables managers to view highly distributed processes, such as claims process data in insurance companies. At this point, however, use of ITSI (like other areas of IoT) may encounter cultural and political issues as organizations deal with changes in the roles of IT and operations management. Splunk’s direction with ITSI shows that the company is staying close to its IT operations knitting as it builds out application software, but such development also puts Splunk into new competitive scenarios where legacy technology and processes may still be considered good enough.

We note that ITSI is built using Splunk’s Machine Learning Toolkit and showcase, which currently is in preview mode. The platform is an important development for the company and fills one of the gaps that I pointed out in its portfolio last year. Addressing this gap enables Splunk and its partners to create services that apply advanced analytics to big data, a capability almost half (45%) of organizations find important. I consider the use of predictive and advanced analytics on big data a killer application for big data, and our benchmark research on big data analytics backs this claim: Predictive analytics is the type of analytics most (64%) organizations wish to pursue on big data.

Organizations currently looking at IoT use cases should consider Splunk’s strategy and tools in the context of the specific problems they need to address. Machine learning algorithms built for particular industries are key, so it is important to understand whether the problem can be addressed using prebuilt applications provided by Splunk or one of its partners, or whether the organization will need to build its own algorithms using the Splunk machine learning platform or alternatives. Evaluate both the platform capabilities and the instrumentation: the types of protocols and formats involved and how that data will be consumed into the system and related in a uniform manner. Most of all, be sure the skills and processes in the organization align with the technology from an end-user and business perspective.

Regards,

Ventana Research

One of the key findings in our latest benchmark research into predictive analytics is that companies are incorporating predictive analytics into their operational systems more often than was the case three years ago. The research found that companies are less inclined to purchase stand-alone predictive analytics tools (29% vs. 44% three years ago) and more inclined to purchase predictive analytics built into business intelligence systems (23% vs. 20%), applications (12% vs. 8%), databases (9% vs. 7%) and middleware (9% vs. 2%). This trend is not surprising since operationalizing predictive analytics – that is, building predictive analytics directly into business process workflows – improves companies’ ability to gain competitive advantage: Those that deploy predictive analytics within business processes are more likely to say they gain competitive advantage and improve revenue through predictive analytics than those that don’t.

In order to understand the shift that is underway, it is important to understand how predictive analytics has historically been executed within organizations. The marketing organization provides a useful example since it is the functional area where organizations most often deploy predictive analytics today. In a typical organization, those doing statistical analysis will export data from various sources into a flat file. (Often IT is responsible for pulling the data from the relational databases and passing it over to the statistician in a flat file format.) Data is cleansed, transformed, and merged so that the analytic data set is in a normalized format. It then is modeled with stand-alone tools and the model is applied to records to yield probability scores. In the case of a churn model, such a probability score represents how likely someone is to defect. For a marketing campaign, a probability score tells the marketer how likely someone is to respond to an offer. These scores are produced for marketers on a periodic basis – usually monthly. Marketers then work on the campaigns informed by these static models and scores until the cycle repeats itself.
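The traditional workflow described above can be sketched in a few lines. This is only an illustration under assumptions: the flat-file name, the feature columns and the choice of logistic regression are hypothetical, standing in for whatever stand-alone tool the statistician actually uses.

```python
# A minimal sketch of the traditional monthly batch workflow.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# 1. IT extracts data from source systems into a flat file.
df = pd.read_csv("customer_extract.csv")

# 2. Cleanse and normalize into an analytic data set.
features = ["tenure_months", "support_calls", "monthly_spend"]
df = df.dropna(subset=features + ["churned"])

# 3. Model with a stand-alone tool (here, a simple logistic regression).
model = LogisticRegression()
model.fit(df[features], df["churned"])

# 4. Score every customer; marketers work from these static scores until next month.
df["churn_score"] = model.predict_proba(df[features])[:, 1]
df[["customer_id", "churn_score"]].to_csv("monthly_churn_scores.csv", index=False)
```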

The challenge presented by this traditional model is that a lot can happen in a month and the heavy reliance on process and people can hinder the organization’s ability to respond quickly to opportunities and threats. This is particularly true in fast-moving consumer categories such as telecommunications or retail. For instance, if a person visits the company’s cancelation policy web page the instant before he or she picks up the phone to cancel the contract, this customer’s churn score will change dramatically and the action that the call center agent should take will need to change as well. Perhaps, for example, that score change should mean that the person is now routed directly to an agent trained to deal with possible defections. But such operational integration requires that the analytic software be integrated with the call agent software and web tracking software in near-real time.
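A hypothetical sketch of that near-real-time integration follows; the event name, the score adjustment and the routing threshold are invented for illustration.

```python
# When a high-risk web event arrives, the customer's churn score is updated
# immediately and the call-routing decision changes as a result.
RETENTION_THRESHOLD = 0.7

def on_web_event(customer, event):
    """Adjust the batch churn score when a high-risk signal arrives."""
    if event == "viewed_cancellation_policy":
        customer["churn_score"] = min(1.0, customer["churn_score"] + 0.3)
    return customer

def route_call(customer):
    """Route callers with elevated churn scores to a retention specialist."""
    if customer["churn_score"] >= RETENTION_THRESHOLD:
        return "retention_specialist"
    return "general_queue"

customer = {"id": 1001, "churn_score": 0.45}            # last month's batch score
customer = on_web_event(customer, "viewed_cancellation_policy")
print(route_call(customer))                              # -> retention_specialist
```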

Similarly, the models themselves need to be constantly updated to deal with the fast pace of change. For instance, if a telecommunications carrier competitor offers a large rebate to customers to switch service providers, an organization’s churn model can be rendered out of date and should be updated. Our research shows that organizations that constantly update their models gain competitive advantage more often than those that only update them periodically (86% vs 60% average), more often show significant improvement in organizational activities and processes (73% vs 44%), and are more often very satisfied with their predictive analytics (57% vs 23%).
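Keeping models current can be as simple in principle as monitoring how well the deployed model explains recent outcomes and retraining when it no longer does. The sketch below is illustrative only; the AUC threshold is an assumption, and a production system would add validation and approval steps.

```python
# Retrain the churn model when its accuracy on recent outcomes drifts too low.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

ACCEPTABLE_AUC = 0.70

def refresh_if_stale(model, recent_X, recent_y):
    """Retrain the model if it no longer explains recent churn outcomes well."""
    auc = roc_auc_score(recent_y, model.predict_proba(recent_X)[:, 1])
    if auc < ACCEPTABLE_AUC:
        model = LogisticRegression().fit(recent_X, recent_y)  # retrain on fresh data
    return model
```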

Building predictive analytics into business processes is more easily discussed than done; complex business and technical challenges must be addressed. The skills gap that I recently wrote about is a significant barrier to implementing predictive analytics. Making predictive analytics operational requires not only statistical and business skills but technical skills as well. From a technical perspective, one of the biggest challenges for operationalizing predictive analytics is accessing and preparing data, which I have also written about. Four out of ten companies say that this is the part of the predictive analytics process where they spend the most time. Choosing the right software is another challenge that I have written about. Making that choice includes identifying the specific integration points with business intelligence systems, applications, database systems and middleware. These decisions will depend on how people use the various systems and what areas of the organization are looking to operationalize predictive analytics processes.

For those that are willing to take on the challenges of operationalizing predictive analytics, the rewards can be significant, including significantly better competitive positioning and new revenue opportunities. Furthermore, once predictive analytics is initially deployed in the organization, it snowballs: More than nine in ten companies go on to increase their use of predictive analytics. Once companies reach that stage, one-third of them (32%) say predictive analytics has had a transformational impact and another half (49%) say it provides significant positive benefits.

Regards,

Ventana Research

Our recently completed benchmark research on data and analytics in the cloud shows that analytics deployed in cloud-based systems is gaining widespread adoption. Almost half (48%) of participating organizations are using cloud-based analytics, another 19 percent said they plan to begin using it within 12 months, and 31 percent said they will begin to use cloud-based analytics but do not know when. Participants in various areas of the organization said they use cloud-based analytics, but front-office functions such as marketing and sales rated it important more often than did finance, accounting and human resources. This front-office focus is underscored by the finding that the categories of information for which cloud-based analytics is most often deemed important are forecasting (mentioned by 51%), customer-related information (47%) and sales-related information (33%).

The research also shows that while adoption is high, organizations face challenges as they seek to realize full value from their cloud-based data and analytics initiatives. Our Performance Index analysis reveals that only one in seven organizations reach the highest Innovative level of the four levels of performance in their use of cloud-based analytics. Of the four dimensions we use to further analyze performance, organizations do better in Technology and Process than in Information and People. That is, the tools and analytic processes used for data and analytics in the cloud have advanced more rapidly than users’ abilities to work with their information. The weaker performance in People and Information is reflected in findings on the most common barriers to deployment of cloud-based analytics: lack of confidence about the security of data and analytics, mentioned by 56 percent of organizations, and not enough skills to use cloud-based analytics (42%).

Given the top barrier of perceived data security issues, it is not surprising that the research finds that the largest percentage of organizations (66%) use a private cloud, which by its nature ostensibly is more secure, to deploy analytics; fewer use a public cloud (38%) or a hybrid cloud (30%), although many use more than one type today. We know from tracking analytics and business intelligence software providers that operate in the public cloud that this is changing quite rapidly. Comparing deployment by industry sector, the research analysis shows that private and hybrid clouds are more prevalent in the regulated areas of finance, insurance and real estate and government than in services and manufacturing. The research suggests that private and hybrid cloud deployments are used more often for analytics where data privacy is a concern.

Furthermore, organizations said that access to data for analytics is easier with private and hybrid clouds (29% for public cloud vs. 58% for private cloud and 67% for hybrid cloud). In addition, organizations using private and hybrid cloud more often said they have improved communication and information sharing (56% public vs. 72% private and 70% hybrid). Thus, the research data makes clear that organizations feel more comfortable implementing analytics in a private or hybrid cloud in many areas.

Private and hybrid cloud implementations of data and analytics often coincide with large data integration efforts, which are necessary at some point to benefit from such deployments. Those who said that integration is very important also said more often than those giving it less importance that cloud-based analytics helps their customers, partners and employees in an array of ways, including improved presentation of data and analytics (62% vs. 43% of those who said integration is important or somewhat important), gaining access to many different data sources (57% vs. 49%) and improved data quality and data management (59% vs. 53%). We note that the focus on data integration efforts correlates more with private and hybrid cloud approaches than with public cloud approaches; thus the benefits cannot be directly attributed to either the cloud approach or the integration efforts.

Another key insight from the research is that data and analytics often are considered in conjunction with mobile and collaboration initiatives, whose priorities differ for business and IT and from those in consumer markets. Nine out of 10 organizations said they use or intend to use collaboration technology to support their cloud-based data and analytics, and 83 percent said they need to support data access and analytics on mobile devices. Two-thirds said they support both tablets and smartphones and multiple mobile operating systems, the most important of which are Apple iOS (ranked first by 60%), Google Android (ranked first by 26%) and Microsoft Windows Mobile (ranked first by 13%). We note that Microsoft has a higher percentage of importance here than its reported market share (approximately 2.5%) would suggest. Similarly, Google Android has greater penetration than Apple in the consumer market (51% vs. 41%), the reverse of the importance rankings here. We expect that the influence of mobile operating systems related to data and analytics in the cloud will continue to evolve and be affected by upcoming corporate technology refresh cycles, the consolidation of PCs and mobile devices, and the “bring your own device” (BYOD) trend.

The research finds that usability (63%) and reliability (57%) are the top technology buying criteria, which is consistent with our business technology innovation research conducted last year. What has changed is that manageability is cited as very important as often as functionality, by approximately half of respondents, a stronger showing than in our previous research. We think it likely that manageability is gaining prominence as cloud providers and organizations sort out who manages deployments, usage and licensing, and who actually owns your data in the cloud, which my colleague Robert Kugel has discussed.

As the research shows, the importance of cloud data and analytics is continuing to grow. That importance makes me eager to discuss further the attitudes, requirements and future plans of organizations that use data and analytics in the cloud and to identify the best practices of those that are most proficient in it. For more information on this topic, learn more about best practices for data and analytics in the cloud and download the executive summary of the report to improve your readiness.

Regards,

Ventana Research

Our research into next-generation predictive analytics shows that along with not having enough skilled resources, which I discussed in my previous analysis, the inability to readily access and integrate data is a primary reason for dissatisfaction with predictive analytics (in 62% of participating organizations). Furthermore, this area consumes the most time in the predictive analytics process: The research finds that preparing data for analysis (40%) and accessing data (22%) are the parts of the process that create the most challenges for organizations. To allow more time for actual analysis, organizations must work to improve their data-related processes.

Organizations apply predictive analytics to many categories of information. Our research shows that the most common categories are customer (used by 50%), marketing (44%), product (43%), financial (40%) and sales (38%). Such information often has to be combined from various systems and enriched with information from new sources. Before users can apply predictive analytics to these blended data sets, the information must be put into a common form and represented as a normalized analytic data set. Unlike in data warehouse systems, which provide a single data source with a common format, today data is often located in a variety of systems that have different formats and data models. Much of the current challenge in accessing and integrating data comes from the need to include not only a variety of relational data sources but also less structured forms of data. Data that varies in both structures and sizes is commonly called big data.
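As an illustration of that blending step – a minimal sketch with hypothetical file names, keys and fields, not a recommendation of any particular tool – relational and semi-structured sources can be reduced to a single normalized analytic data set like this:

```python
# Blend a relational extract with semi-structured web events into one analytic set.
import json
import pandas as pd

# Structured data from a relational extract.
crm = pd.read_csv("crm_customers.csv")              # customer_id, segment, region

# Semi-structured data, e.g. web clickstream exported as JSON lines.
with open("web_events.json") as f:
    events = pd.DataFrame([json.loads(line) for line in f])   # customer_id, page, ts

# Aggregate the less structured source to the same grain as the relational one...
visits = events.groupby("customer_id").size().rename("web_visits").reset_index()

# ...then merge into a single, normalized analytic data set keyed on customer_id.
analytic_set = crm.merge(visits, on="customer_id", how="left").fillna({"web_visits": 0})
```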

To deal with the challenge of storing and computing big data, organizations planning to use predictive analytics increasingly turn to big data technology. While flat files and relational databases on standard hardware, each cited by almost two-thirds (63%) of participants, are still the most commonly used tools for predictive analytics, more than half (52%) of organizations now use data warehouse appliances for predictive analytics, and 31 percent use in-memory databases, which the second-highest percentage (24%) plan to adopt in the next 12 to 24 months. Hadoop and NoSQL technologies lag in adoption, currently used by one in four organizations, but in the next 12 to 24 months an additional 29 percent intend to use Hadoop and 20 percent more will use other NoSQL approaches. Furthermore, more than one-quarter (26%) of organizations are evaluating Hadoop for use in predictive analytics, the most of any technology.

Some organizations are considering moving from on-premises to cloud-based storage of data for predictive analytics; the most common reasons for doing so are to improve access to data (49%) and preparation of data for analysis (43%). This trend speaks to the increasing importance of cloud-based data sources as well as cloud-based tools that provide access to many information sources and provide predictive analytics. As organizations accumulate more data and need to apply predictive analytics in a scalable manner, we expect the need to access and use big data and cloud-based systems to increase.

While big data systems can help handle the size and variety of data, they do not of themselves solve the challenges of data access and normalization. This is especially true for organizations that need to blend new data that resides in isolated systems. How to do this is critical for organizations to consider, especially in light of the people using the predictive analytics system and their skills. There are three key considerations here. One is the user interface, the most common of which are spreadsheets (used by 48%), graphical workflow modeling tools (44%), integrated development environments (37%) and menu-driven modeling tools (35%). The second is the number of data sources to deal with and which are supported by the system; our research shows that four out of five organizations need to access and integrate five or more data sources. The third is which analytic languages and libraries to use and which are supported by the system; the research finds that Microsoft Excel, SQL, R, Java and Python are the most widely used for predictive analytics. Weighing these three considerations against the resident skills, processes, current technology and the information sources that need to be accessed is crucial for delivering value to the organization with predictive analytics.

While there has been an exponential increase in data available to use in predictive analytics as well as advances in integration technology, our research shows that data access and preparation are still the most challenging and time-consuming tasks in the predictive analytics process. Although technology for these tasks has improved, complexity of the data has increased through the emergence of different data types, large-scale data and cloud-based data sources. Organizations must pay special attention to how they choose predictive analytics tools that can give easy access to multiple diverse data sources including big data stores and provide capabilities for data blending and provisioning of analytic data sets. Without these capabilities, predictive analytics tools will fall short of expectations.

Regards,

Ventana Research

The Performance Index analysis we performed as part of our next-generation predictive analytics benchmark research shows that only one in four organizations, those functioning at the highest Innovative level of performance, can use predictive analytics to compete effectively against others that use this technology less well. We analyze performance in detail in four dimensions (People, Process, Information and Technology), and for predictive analytics we find that organizations perform best in the Technology dimension, with 38 percent reaching the top Innovative level. This is often the case in our analyses, as organizations initially perform better in the details of selecting and managing new tools than in the other dimensions. Predictive analytics is not a new technology per se, but the difference is that it is becoming more common in business units, as I have written.

In contrast to organizations’ performance in the Technology dimension, only 10 percent reach the Innovative level in People and only 11 percent in Process. This disparity uncovered by the research analysis suggests there is value in focusing on the skills that are used to design and deploy predictive analytics. In particular, we found that one of the two most-often cited reasons why participants are not fully satisfied with the organization’s use of predictive analytics is that there are not enough skilled resources (cited by 62%). In addition, 29 percent said that the need for too much training or customized skills is a barrier to changing their predictive analytics.

The challenge for many organizations is to find the combination of domain knowledge, statistical and mathematical knowledge, and technical knowledge that they need to be able to integrate predictive analytics into other technology systems and into operations in the lines of business, which I also have discussed. The need for technical knowledge is evident in the research findings on the jobs held by individual participants: Three out of four require technical sophistication. More than one-third (35%) are data scientists who have a deep understanding of predictive analytics and its use as well as of data-related technology; one-fourth are data analysts who understand the organization’s data and systems but have limited knowledge of predictive analytics; and 16 percent described themselves as predictive analytics experts who have a deep understanding of this topic but not of technology in general. The research also finds that those most often primarily responsible for designing and deploying predictive analytics are data scientists (in 31% of organizations) or members of the business intelligence and data warehouse team (27%). This focus on business intelligence and data warehousing represents a shift toward integrating predictive analytics with other technologies and indicates a need to scale predictive analytics across the organization.

In only about half (52%) of organizations are the people who design and deploy predictive analytics the same people who utilize the output of these processes. The most common reasons research participants cited for users of predictive analytics not producing their own analyses are that they don’t have enough skills training (79%) and don’t understand the mathematics involved (66%). The research also finds evidence that skills training pays off: Fully half of those who said they received adequate training in applying predictive analytics to business problems also said they are very satisfied with their predictive analytics; the percentages dropped precipitously for those who said the training was somewhat adequate (8%) or inadequate (6%). It is clear that professionals trained in both business and technology are necessary for an organization to successfully understand, deploy and use predictive analytics.

To determine the technical skills and training necessary for predictive analytics, it is important to understand which languages and libraries are used. The research shows that the most common are SQL (used by 67% of organizations) and Microsoft Excel (64%), with which many people are familiar and which are relatively easy to use. The three next-most commonly used are much more sophisticated: the open source language R (by 58%), Java (42%) and Python (36%). Overall, many languages are in use: Three out of five organizations use four or more of them. This array reflects the diversity of approaches to predictive analytics. Organizations must assess what languages make sense for their uses, and vendors must support many languages for predictive analytics to meet the demands of all customers.

The research thus makes clear that organizations must pay attention to a variety of skills and how to combine them with technology to ensure success in using predictive analytics. Not all the skills necessary in an analytics-driven organization can be combined in one person, as I discussed in my analysis of analytic personas. We recommend that as organizations focus on the skills discussed above, they consider creating cross-functional teams from both business and technology groups.

Regards,

Ventana Research

To impact business success, Ventana Research recommends viewing predictive analytics as a business investment rather than an IT investment. Our recent benchmark research into next-generation predictive analytics reveals that since our previous research on the topic in 2012, funding has shifted from general business budgets (previously 44%) to line-of-business IT budgets (previously 19%). Now more than half of organizations fund such projects from business budgets: 29 percent from general business budgets and 27 percent from a line-of-business IT budget. This shift in buying reflects the mainstreaming of predictive analytics in organizations, which I recently wrote about.

This shift in funding of initiatives coincides with a change in the preferred format for predictive analytics. The research reveals that 15 percentage points fewer organizations prefer to purchase predictive analytics as stand-alone technology today than in the previous research (29% now vs. 44% then). Instead we find growing demand for predictive analytics tools that can be integrated with operational environments such as business intelligence or transaction applications. More than two in five (43%) organizations now prefer predictive analytics embedded in other technologies. This integration can help businesses respond faster to market opportunities and competitive threats without having to switch applications.

The features most often sought in predictive analytics products further confirm business interest. Usability (very important to 67%) and capability (59%) are the top buying criteria, followed by reliability (52%) and manageability (49%). This is consistent with the priorities of organizations three years ago, with one important exception: Manageability was one of the two least important criteria then (33%) but today is nearly tied with reliability for third place. This change makes sense in light of broader use of predictive analytics and the need to manage an increasing variety of models and input variables.

Further, as a business investment, predictive analytics is most often used in front-office functions, but the research shows that IT and operations are closely associated with these functions. The top four areas of predictive analytics use are marketing (48%), operations (44%), IT (40%) and sales (38%). In the previous research, operations ranked much lower on the list.

To select the most useful product, organizations must understand where IT and business buyers agree and disagree on what matters. The research shows that they agree closely on how to deploy the tools: Both expressed a greater preference to deploy on-premises (business 53%, IT 55%), and similar proportions prefer deployment on demand through cloud computing (business 22%, IT 23%). More than 90 percent on both sides said the organization plans to deploy more predictive analytics, and they also were in close agreement (business 32%, IT 33%) that doing so would have a transformational impact, enabling the organization to do things it couldn’t do before.

However, some distinctions are important to consider, especially when looking at the business case for predictive analytics. Business users more often focus on the benefit of achieving competitive advantage (60% vs. 50% of IT) and creating new revenue opportunities (55% vs. 41%), which are the two benefits most often cited overall. On the other hand, IT professionals more often focus on the benefits of increased upselling and cross-selling (53% vs. 32%), reduced risk (26% vs. 21%) and better compliance (26% vs. 19%); the last two reflect key responsibilities of the IT group.

Despite strong business involvement, when it comes to products, IT, technical and data experts are indispensable for the evaluation and use of predictive analytics. Data scientists or the head of data management are most often involved in recommending (52%) and evaluating (56%) predictive analytics technologies. Reflecting the need to deploy predictive analytics to business units, analysts and IT staff are the next-most influential roles for evaluating and recommending. This involvement of technically sophisticated individuals combined with the movement away from organizations buying stand-alone tools indicates an increasingly team-oriented approach.

Purchase of predictive analytics often requires approval from high up in the organization, which underscores the degree of enterprise-wide interest in this technology. The CEO or president is most likely to be involved in the final decision in small (87%) and midsize (76%) companies. In contrast, large companies rely most on IT management (40%), and very large companies rely most on the CIO or head of IT (60%). We again note the importance of IT in the predictive analytics decision-making process in larger organizations. In the previous research, IT management was involved in approval in only 9 percent of large companies and the CIO in only 40 percent.

As predictive analytics becomes more widely used, buyers should take a broad view of the design and deployment requirements of the organization and specific lines of business. They should consider which functional areas will use the tools and consider issues involving people, processes and information as well as technology when evaluating such systems. We urge business and IT buyers to work together during the buying process with the common goal of using predictive analytics to deliver value to the enterprise.

Regards,

Ventana Research

Our recently released benchmark research into next-generation predictive analytics shows that in this increasingly important area many organizations are moving forward in the dimensions of information and technology, but most are challenged to find people with the right skills and to align organizational processes to derive business value from predictive analytics.

For those that have done so, the rewards can be significant. One-third of organizations participating in the research said that using predictive analytics leads to transformational change – that is, it enables them to do things they couldn’t do before – and at least half said that it provides competitive advantage or creates new revenue opportunities. Reflecting the momentum behind predictive analytics today, virtually all participants (98%) that have engaged in predictive analytics said that they will be rolling out more of it.

Our research shows that predictive analytics is being used most often in the front offices of organizations, specifically in marketing (48%), operations (44%) and IT (40%). While operations and IT are not often considered front-office functions, we find that they are using predictive analytics in service to customers. For instance, the ability to manage and impact the customer experience by applying analytics to big data is an increasingly important approach that I recently wrote about. As conventional channels of communication give way to digital channels, the use of predictive analytics in operations and IT becomes more valuable for marketing and customer service.

However, the most widespread barrier to making changes in predictive analytics is lack of resources (cited by 52% of organizations), which includes finding the necessary skills to design and deploy programs. The research shows that currently consultants and data scientists are those most often needed. Half the time those designing the system are also the end users of it, which indicates that using predictive analytics still requires advanced skills. Lack of awareness (cited by 48%) is the second-most common barrier; many organizations fail to understand the value of predictive analytics in their business. Some of the reluctance to implement predictive analytics may be because doing so can require significant change. Predictive analytics often represents a new way of thinking and can necessitate revamping of key organizational processes.

From a technical perspective, the most common deployment challenge is difficulty in integrating predictive analytics into the information architecture, an issue cited by half of participants. This is not surprising given the diversity of tools and databases involved in big data. Problems with accessing source data (30%), inappropriate algorithms (26%) and inaccurate results (21%) also impede use. Accessing and normalizing data sources is a significant issue as many different types of data must be incorporated to use predictive analytics optimally. Blending this data and turning it into a clean analytic data set often takes significant effort. Confirming this is the finding that data preparation is the most challenging part of the analytic process for half of the organizations in the research.

Regarding interaction with other established systems, business intelligence is most often the integration point (for 56% of companies). However, it also is increasingly embedded in databases and middleware. The ability to perform modeling in databases is important since it enables analysts to work with large data sets and do more timely model updates and scoring. Embedding into middleware has grown fourfold since our previous research on predictive analytics in 2012; this has implications for the emerging Internet of Things (IoT), through which people will interact with an increasing array of devices.

Another sign of the broader adoption of predictive analytics is how and where buying decisions are made. Budgets for predictive analytics are shifting. Since the previous research, funding from general business budgets has declined 9 percent while funding from line-of-business IT budgets has increased 8 percent. This comports with a shift in the form in which organizations prefer to buy predictive analytics, which now is less as a stand-alone product and more embedded in other systems. Usability and functionality are still the top buying criteria, reflecting needs to simplify predictive analytics tools and address the skills gap while still being able to access a range of capabilities.

Overall the research shows that the application of predictive analytics to business processes sets high-performing organizations apart from others. Companies more often achieve competitive advantage with predictive analytics when they support the deployment of predictive analytics in business processes (66% vs. 57% overall), use business intelligence and data warehouse teams to design and deploy predictive analytics (71% vs. 58%) and fund predictive analytics as a shared service (73% vs. 58%). Similarly, those that train employees in the application of predictive analytics to business problems achieve more satisfaction and better outcomes.

Organizations looking to improve their business through predictive analytics should examine what others are doing. Since the time of our previous research, innovation has expanded and there are more peer organizations across industries and business functions that can be emulated. And the search for such innovation need not be limited to within one’s industry; cross-industry examples also can be enlightening. More concretely, the research finds that people and processes are where organizations can improve most in predictive analytics. We advise them to concentrate on streamlining processes, acquiring necessary skills and supporting both with technology available in the market. To begin, develop a practical predictive analytics strategy and enlist all stakeholders in the organization to support initiatives.

Regards,

Ventana Research

Our benchmark research into big data analytics shows that marketing in the form of cross-selling and upselling (38%) and customer understanding (32%) are the top use cases for big data analytics. Related to these uses, organizations today spend billions of dollars on programs seeking customer loyalty and satisfaction. A powerful metric that impacts this spending is net promoter score (NPS), which attempts to connect brand promotion with revenue. NPS has proven to be a popular metric among major brands and Fortune 500 companies. Today, however, the advent of big data systems brings the value and the accuracy of NPS into question. It and similar loyalty metrics face displacement by big data analytics capabilities that can replace stated behavior and survey-based attitudinal data with actual behavioral data (sometimes called revealed behavior) combined with unstructured data sources such as social media. Revealed behavior shows what people have actually done and thus is a better predictor of what they will do in the future than what they say they have done or intend to do. With interaction through various customer touch points (the omnichannel approach) it is possible to measure both attitudes and revealed behavior in a digital format and to analyze such data in an integrated fashion. Using innovative technology such as big data analytics can overcome three inherent drawbacks of NPS and similar customer loyalty and satisfaction metrics.

Such metrics have been part of the vernacular in boardrooms, organizational cultures and MBA programs since the 1980s, based on frameworks such as the Balanced Scorecard introduced by Kaplan and Norton. Net promoter score, a metric that informs the customer quadrant of such scorecards, is based on surveys in which participants are asked, on an 11-point (0-10) scale, how likely they are to promote a brand. The percentage of detractors (scores 0-6) is subtracted from the percentage of promoters (9-10) to produce the net promoter score. This score helps companies assess satisfaction with a brand and allows executives and managers to allocate resources. The underlying assumption is that attitude toward a brand is a leading indicator of intent and behavior. As such, NPS ostensibly can predict things such as churn behavior (the net number of new customers minus those leaving). By understanding attitudes and behavioral intent, marketers can intervene with actions such as timely offers and other steps intended to change behavior, such as keeping customers from leaving.
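The arithmetic is straightforward; here is a minimal calculation of NPS from a set of illustrative (made-up) survey responses:

```python
# Net promoter score from the 0-10 survey scale:
# percent promoters (9-10) minus percent detractors (0-6).
def net_promoter_score(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

responses = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]   # 4 promoters, 3 detractors, 3 passives
print(net_promoter_score(responses))            # -> 10.0
```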

Until recently, NPS and similar loyalty approaches have been among the most widely adopted methods to track attitudes and behaviors in customer interactions and to provide a logical way to impact and improve the customer experience. The prominence of such loyalty programs and metrics reflects an increasing focus on the customer. An indication of this increased focus is found in our next-generation customer analytics benchmark research, in which improving the customer experience (63%), improving customer service strategy (57%) and improving outcomes of interactions (51%) are the top drivers for adopting customer analytics. Nevertheless, while satisfaction and loyalty metrics such as NPS are entrenched in many organizations, there are three fundamental problems with them that can be overcome using big data analytics. Let’s look at each of these challenges and how big data analytics can overcome them.

It is prone to error. Current methods and metrics are vulnerable to errors, most deriving from one of three sources.

Coverage error results from measuring only a segment of a population and projecting the results onto the entire population. The problem here is clear if we imagine using data about California to draw conclusions about the entire United States. While researchers try to overcome such coverage error with stratified sampling methods, it necessitates significant investment usually not associated with business research. Additionally, nonresponse error, a subset of coverage error, results from people opting out of being measured.

Sample error is the statistical error associated with making conclusions about a population based on only a subset of a population. Researchers can overcome it by increasing sample sizes, but this, too, requires significant investment usually not associated with business research.

Measurement error is a complex topic that deserves an extended discussion beyond the scope here, but it presumes that analysts should start with a hypothesis and try to disprove it rather than to prove it. From there, iteration is needed to come as close to the truth as possible. In the case of NPS, measurement error can simply be the result of people not telling the truth or being unduly influenced by a recent experience that skews evaluations such as brand impression or likelihood to promote a brand. Another instance occurs when a proper response option is not represented and people are forced to give an incorrect response.
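To make the sample error point concrete, the small simulation below (illustrative numbers only, with an assumed “true” promoter rate of 30 percent) shows how much estimates swing at small sample sizes and how expensive it is to shrink that variation by sampling alone:

```python
# Estimate a known "true" promoter rate from samples of different sizes and
# observe how widely the estimates range.
import random

random.seed(7)
TRUE_PROMOTER_RATE = 0.30

def estimate_rate(sample_size):
    sample = [random.random() < TRUE_PROMOTER_RATE for _ in range(sample_size)]
    return sum(sample) / sample_size

for n in (50, 500, 5000):
    estimates = [estimate_rate(n) for _ in range(200)]
    spread = max(estimates) - min(estimates)
    print(f"n={n:>5}: estimates ranged over {spread:.2%}")
# Larger samples shrink the error but, as noted above, cost more to collect;
# a census of behavioral data sidesteps the trade-off entirely.
```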

Big data can address these error vulnerabilities because it takes a census approach to data collection. Today companies can capture data about nearly every customer interaction with the brand, including customer service calls, website experiences, social media posts and transactions. Because the data is collected across the entire population and consists more of revealed behavior than of attitudes and stated behavior, the error problems associated with NPS can be largely overcome.

It lacks causal linkage with financial metrics. The common claim that a higher NPS leads to increased revenue, like the presumed relationship between customer satisfaction and business outcomes, cannot be proven across all circumstances and industries. For instance, a pharmaceutical company trying to tie NPS to revenue might ask a doctor how likely he or she is to write a prescription for a certain drug; the doctor might see this as a compromising question and decline to answer honestly. Regarding satisfaction metrics, Microsoft in the 1990s had very low user satisfaction but high loyalty because it held a virtual monopoly. The airline industry today exhibits similar dynamics.

Big data analytics can show causal linkage between measurement of the customer experience and the organization's financial metrics. It can link systems of record such as enterprise resource planning and enterprise performance management with systems of engagement such as content management, social media, marketing and sales. Collecting large data sets of customer interactions over time enables systems to relate customer experiences to purchase behaviors such as recency, frequency and size of purchase, and to do so on an ongoing basis that can be tested with randomized experiments. With big data platforms that can reduce data to its lowest common denominator in the form of key-value pairs, the remaining obstacles are having the right skill sets, the right big data analytics software and enough data to isolate variables and repeat experiments over time. When there is enough data, causal patterns emerge that link customer attitudes and experiences directly with transactional outcomes. Such linkage can be revealed in any type of market, whether share-of-wallet categories such as consumer packaged goods or "winner take all" purchases such as automobiles.
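As a simplified sketch of what such linkage can look like in practice (the record layouts, field names and values below are hypothetical), per-customer interaction events can be joined to transaction histories and summarized into recency, frequency and monetary measures that are then analyzed against experience data:

```python
from collections import defaultdict
from datetime import date

# Hypothetical key-value style records keyed by customer ID.
interactions = [("C001", "support_call"), ("C001", "site_visit"), ("C002", "site_visit")]
transactions = [("C001", date(2015, 3, 2), 120.0),
                ("C001", date(2015, 5, 20), 80.0),
                ("C002", date(2014, 11, 5), 45.0)]
today = date(2015, 6, 30)

# Summarize revealed purchase behavior: recency (days since last purchase),
# frequency (number of purchases) and monetary value (total spend).
rfm = defaultdict(lambda: {"recency": None, "frequency": 0, "monetary": 0.0})
for cust, when, amount in transactions:
    rec = rfm[cust]
    rec["frequency"] += 1
    rec["monetary"] += amount
    days = (today - when).days
    rec["recency"] = days if rec["recency"] is None else min(rec["recency"], days)

# Join an experience measure (here, count of support calls) to the outcomes.
support_calls = defaultdict(int)
for cust, event in interactions:
    if event == "support_call":
        support_calls[cust] += 1

for cust, rec in rfm.items():
    print(cust, rec, "support_calls:", support_calls[cust])
```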

It lacks actionable data. Loyalty metrics such as NPS are often tied to employee compensation. Those employees have an incentive to understand the metric and what actions will improve the score, but doing so is not easy, for several reasons. Unlike hard quantitative metrics such as revenue or profitability, NPS and similar loyalty metrics are softer measures whose drivers are not easily understood. Furthermore, the measurement may happen only once or twice a year, and the composition of the sample can change over time. Often a customer satisfaction team and the consultants responsible for the research prepare trend and driver analyses and share them with various teams, along with suggested areas of improvement and actions to be taken. That information is disseminated as aggregated data broken out by important product and service segments and perhaps by customer journey timelines. The problem is that even if employees understand the metric and how to influence it, by the time the organization acts, the response is neither timely nor customized to the individual customer.

Big data analytics inherently has a streamlined capability to act upon data. Instead of the traditional process of reporting results and waiting months for action to be taken and for new results to show up in an NPS program, data can be acted upon immediately by all employees. A big reason for this is that data is now collected at a granular level for individual customers. For instance, if a customer with a high customer lifetime value (CLV) score shows signs that are precursors of switching companies, a report can be generated showing all interactions in that individual's customer journey and highlighting the most impactful events. An alert can then be sent and a personal interaction such as a phone call or a face-to-face meeting can be set up with the objective of preventing the customer's defection. Incentives such as a bank automatically waiving certain fees, an airline offering an upgrade to first class or a grocery store giving a gift certificate can be recommended by the system as a next best action (a minimal sketch of such an alerting rule appears below). The same approach can work on a more automated but still personalized basis, in which the individual customer is discreetly addressed to see how he or she can be made happy. Each of these actions can be measured against the value of the customer and contextualized for that customer. In this way, big data analytics platforms can bring together what used to be separate analytic models and action plans related to loyalty, churn, micromarketing campaigns and next best action. It is not surprising in this context that applying predictive analytics is the most important capability of big data analytics for nearly two-thirds (64%) of organizations participating in our research.

I wrote about these ideas a few years ago, but only recently have I seen information systems capable of disrupting this entire category. It will not happen overnight, since many NPS and satisfaction programs are tied to a component of employee compensation and to internal processes that are not easily changed. Furthermore, NPS can still have value as a metric for understanding word of mouth around a brand and in areas that lack data and better metrics. However, as attitudinal and behavioral big data continue to be collected and big data analytics technology continues to mature, revealed behavior will increasingly outperform attitudinal and stated-behavior data. Organizations that can challenge their conventional NPS wisdom and overcome internal political obstacles are likely to see superior returns from their customer experience management investments.
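The following is the alerting sketch referenced above: a minimal, hypothetical rule that flags a high-CLV customer showing churn precursors and suggests a next best action. The thresholds, scores and recommended incentives are illustrative assumptions, not a prescribed model.

```python
# Hypothetical per-customer records; CLV values, engagement signals and
# thresholds are illustrative only.
customers = [
    {"id": "C001", "clv": 8200, "logins_last_30d": 1, "support_escalations": 3},
    {"id": "C002", "clv": 950,  "logins_last_30d": 12, "support_escalations": 0},
]

def churn_risk(c):
    """Toy precursor score: low recent engagement plus unresolved escalations."""
    return (1 if c["logins_last_30d"] < 3 else 0) + min(c["support_escalations"], 3)

def next_best_action(c):
    """Size the intervention to the customer's value (assumed threshold)."""
    if c["clv"] > 5000:
        return "alert account owner: schedule a call; consider a fee waiver or upgrade"
    return "send a personalized retention offer"

for c in customers:
    if churn_risk(c) >= 3:  # assumed alert threshold
        print(c["id"], "->", next_best_action(c))
# C001 -> alert account owner: schedule a call; consider a fee waiver or upgrade
```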

Regards,

Ventana Research

Ventana Research recently completed the most comprehensive evaluation of analytics and business intelligence products and vendors available anywhere. As I discussed recently, such research is necessary and timely as analytics and business intelligence is now a fast-changing market. Our Value Index for Analytics and Business Intelligence in 2015 scrutinizes 15 top vendors and their product offerings in seven key categories: Usability, Manageability, Reliability, Capability, Adaptability, Vendor Validation and TCO/ROI. The analysis shows that the top supplier is Information Builders, which qualifies as a Hot vendor and is followed by 10 other Hot vendors, including SAP, IBM, MicroStrategy, Oracle, SAS, Qlik, Actuate (now part of OpenText) and Pentaho.
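For readers unfamiliar with how a composite index of this kind is typically assembled, the sketch below shows a weighted overall score combining per-category scores. The weights and scores are hypothetical placeholders, not the actual Value Index weighting or any vendor's results.

```python
# Hypothetical weights and scores for the seven evaluation categories;
# these do not reflect the actual Value Index methodology.
categories = ["Usability", "Manageability", "Reliability", "Capability",
              "Adaptability", "Vendor Validation", "TCO/ROI"]
weights = [0.20, 0.10, 0.15, 0.25, 0.10, 0.10, 0.10]  # assumed; sums to 1.0

def weighted_overall(scores):
    """Combine per-category scores (0-100) into a single weighted overall score."""
    return sum(w * s for w, s in zip(weights, scores))

print(round(weighted_overall([90, 80, 85, 88, 75, 82, 79]), 2))  # 84.35
```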

The evaluations drew on our research and analysis of vendors and their products, along with their responses to our detailed RFI/questionnaire, our own hands-on experience and the buyer-related findings from our benchmark research on next-generation business intelligence, information optimization and big data analytics. The benchmark research examines analytics and business intelligence from various perspectives to determine organizations' current and planned use of these technologies and the capabilities they require for successful deployments.

We find that the processes that constitute business intelligence today have expanded beyond standard query, reporting, analysis and publishing capabilities. They now include sourcing and integration of data and, at later stages, the use of analytics for planning and forecasting as well as capabilities that apply analytics and metrics to collaborative interaction and performance management. Our research on big data analytics finds that new technologies collectively known as big data are influencing the evolution of business intelligence as well; in-memory systems (used by 50% of participating organizations), Hadoop (42%) and data warehouse appliances (33%) are the most important of these innovations. In-memory computing in particular has changed BI because it enables rapid processing of even complex models with very large data sets. In-memory computing also can change how users access data through data visualization and can incorporate data mining, simulation and predictive analytics into business intelligence systems. Thus the ability of products to work with big data tools figured in our assessments.

In addition, the 2015 Value Index includes assessments of vendors' self-service tools and cloud deployment options. New self-service approaches can enable business users to reduce their reliance on IT to access and use data and analysis. However, our information optimization research shows that this change is slow to take hold: in four out of five organizations, IT currently is involved in making information available to end users and remains entrenched in the operations of business intelligence systems.

Similarly, our research, as well as the lack of maturity of the cloud-based products evaluated, shows that organizations are still in the early stages of cloud adoption for analytics and business intelligence; deployments are mostly departmental in scope. We are exploring these issues further in our benchmark research into data and analytics in the cloud, which will be released in the second quarter of 2015.

The products offered by the five top-rated companies in the Value Index provide exceptional functionality and a superior user experience. However, Information Builders stands out, providing an exceptional user experience and a completely integrated portfolio of data management, predictive analytics, visual discovery and operational intelligence capabilities in a single platform. SAP, in second place, is not far behind, having made significant progress by integrating its Lumira platform into its BusinessObjects Suite; it added predictive analytics capabilities, which led to higher Usability and Capability scores. IBM, MicroStrategy and Oracle, the next three, each provide a robust integrated platform of capabilities. The key differentiator between them and the top two is that they do not have superior scores in all seven categories.

In evaluating products for this Value Index we found some noteworthy innovations in business intelligence. One is Qlik Sense, which has a modern architecture that is cloud-ready and supports responsive design on mobile devices. Another is SAS Visual Analytics, which combines predictive analytics with visual discovery in ways that are a step ahead of others currently in the market. Pentaho's Automated Data Refinery concept adds its unique Pentaho Data Integration platform to business intelligence for a flexible, well-managed user experience. IBM Watson Analytics uses advanced analytics and natural language processing for an interactive experience beyond the traditional paradigm of business intelligence. Tableau, which led the field in the Usability category, continues to innovate in user experience and in aligning technology with people and process. MicroStrategy's innovative Usher technology addresses the need for identity management and security, especially in an era in which individuals use multiple devices to access information.

The Value Index analysis uncovered notable differences in how well products satisfy the business intelligence needs of employees working in a range of IT and business roles. Our analysis also found substantial variation in how products provide development, security and collaboration capabilities and role-based support for users. Thus, we caution that similar vendor scores should not be taken to imply that the packages evaluated are functionally identical or equally well suited for use by every organization or for a specific process.

To learn more about this research and to download a free executive summary, please visit the Ventana Research website.

Regards,

Ventana Research
