IBM redesigned its business intelligence platform, now called IBM Cognos Analytics. Expected to be released by the end of 2015, the new version includes features to help end users model their own data without IT assistance while maintaining the centralized governance and security the platform already provides. Our benchmark research into information optimization shows that simplifying access to information is important to virtually all (97%) participating organizations, but it also finds that only one in four (25%) are satisfied with their current software for doing so. Simplification is a major theme of the IBM Cognos redesign.

The new IBM Cognos Analytics provides a completely Web-based environment with a consistent user interface and security model across multiple devices and browsers. The redesigned interface follows IBM’s internal cultural shift to base product development first on the user experience and second on features and functionality. This may be a wise move, as our research across multiple analytic software categories finds usability to be the buying criterion organizations cite most often.

The redesign is based on the same design and self-service principles as IBM Watson Analytics, to which we awarded a 2015 Ventana Research Technology Innovation Award in business analytics. The redesign is most evident in the IBM Cognos Analytics authoring mode. The Report Studio and Cognos Workspace Advanced modules have been replaced with a simplified Web-based modeling environment. The extended capabilities of IBM Cognos 10.2.2 are still available, but they are now hidden and more logically arranged to provide easier access. For example, the previous version of Cognos presented an intimidating display of tools for tasks such as fine-grained manipulation of reports; now these features are hidden but still easily accessible. If a user has difficulty finding a particular function, a “smart search” feature helps locate the correct menu to add it.

The new system indexes objects, including metadata, as they are created, providing a more robust search function suitable for nontechnical users in the lines of business. The search feature works with what IBM calls “intent-based modeling”: users can search for words or phrases – for example, revenue by unit or product costs – and be presented with only relevant tables and columns. The system can then automatically build a model by inferring relationships in the data. As a result, the person building a report need not manually design a multidimensional model of the data, so less skilled end users can build the data models that underpin their dashboards and reports. Previously, end users were limited to parameterized reporting in which they could work only within the context of models designed by IT. Many analytics vendors have been late in exploiting the power of search and therefore may be missing a critical feature that customers desire. Ventana Research is a proponent of such capabilities; my colleague Mark Smith has written about them in the context of data discovery technology. Search is fundamental to user-friendly discovery systems, as is reflected in the success of companies such as Google and Splunk. As search becomes more sophisticated, increasingly based on machine-learning algorithms, we expect it to become a key requirement for new analytics and business intelligence systems.
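To make the idea concrete, here is a minimal Python sketch of how a search-driven modeler of this kind might work. It is illustrative only and not IBM’s implementation: the table names, columns and matching logic are hypothetical, and a real system would index far richer metadata.

```python
# A minimal sketch (not IBM's implementation) of matching a phrase such as
# "revenue by unit" against indexed metadata and inferring a join between
# tables that share a column name.
from collections import defaultdict

tables = {  # hypothetical metadata index: table -> columns
    "sales":    ["order_id", "unit_id", "revenue", "order_date"],
    "units":    ["unit_id", "unit_name", "region"],
    "products": ["product_id", "product_name", "cost"],
}

def search(phrase):
    """Return tables and columns whose names match terms in the phrase."""
    terms = phrase.lower().replace(" by ", " ").split()
    hits = defaultdict(list)
    for table, cols in tables.items():
        for col in cols:
            if any(term in col for term in terms):
                hits[table].append(col)
    return dict(hits)

def infer_joins(hit_tables):
    """Infer candidate joins from columns shared by the matched tables."""
    joins, names = [], list(hit_tables)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = set(tables[a]) & set(tables[b])
            joins += [(a, b, col) for col in shared]
    return joins

hits = search("revenue by unit")   # {'sales': [...], 'units': [...]}
print(hits, infer_joins(hits))     # suggests joining sales to units on unit_id
```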

Furthering the self-service aspect is the ability for end users to access and combine multiple data sets. The previous version of IBM Cognos (10.2.2) allowed users to work with “personal data sets” such as .csv files, but they needed an IBM DB2 back end to house the files. Now such data sets can be uploaded and managed directly on the IBM Cognos Analytics server and accessed with the new Web-based authoring tool. Once data sets are uploaded they can be accessed and modeled like any other object to which the user has access. In this way, IBM Cognos Analytics addresses the “bring your own data” challenge in which data sources such as personal spreadsheets must be integrated into enterprise analytics and business intelligence systems.

After modeling the data, users can lay out new dashboards using drag-and-drop capabilities like those found in IBM Watson Analytics. Dashboards can be previewed and put into service for one-time use or put into production mode if the user has such privileges. As is the case with IBM Watson Analytics, newly designed dashboard components such as tables, charts and maps are automatically linked so that changes in one part of the dashboard automatically relate to other parts. This feature facilitates ease of use in designing dashboards. Some other tools in the market require widgets to be connected manually, which can be time-consuming and is an impediment to prototyping of dashboards.

The move to a more self-service orientation has long been in the works for IBM Cognos, so this release is an important one for IBM. The ability to automatically integrate and model data gives the IT department a more defensible position as other self-service tools are introduced into the organization and challenge the data access and preparation built within tools like IBM Cognos. This is becoming especially important as data sources grow in number and complexity and the business needs them more rapidly. Our research into information optimization shows that most organizations need to integrate at least six data sources and some have 20 or more sources they need to bring together. This aligns with our data and analytics in the cloud benchmark research, which finds data preparation to be a top priority in over half (55%) of organizations.

Over time, IBM intends to integrate the capabilities of Cognos Analytics with those of Watson Analytics. This is an important plan because IBM Watson Analytics has capabilities beyond those of self-service tools in the market today. In particular, the ability to explore unknown data relationships and do advanced analysis is a key differentiator for IBM Watson Analytics, as I have written. IBM Watson Analytics enables users to explore relationships in data that otherwise would not be noticeable, whereas IBM Cognos Analytics enables them to explore and put into production information based on predefined assumptions.

Going forward, I will be watching how IBM aligns Cognos Analytics with Watson Analytics and, in particular, how Cognos Analytics will fit into the IBM cloud ecosystem. Currently IBM Cognos Analytics is offered both on-premises and in a hosted cloud, but here also IBM is working to align it more closely with IBM Watson Analytics. Bringing in data preparation, data quality and MDM capabilities from the IBM DataWorks product could also benefit IBM Cognos Analytics users. IBM should emphasize the breadth of its portfolio of products, including IBM Cognos TM1, IBM SPSS, IBM Watson Analytics and IBM DataWorks, as it faces stiff competition in enterprise analytics and business intelligence from a host of analytics companies, including new cloud-based ones. IBM is rated a Hot Vendor in our Ventana Research Analytics and Business Intelligence Value Index in part because of its overall portfolio.

For organizations already using IBM Cognos, the redesign addresses the need of end users to create their own dashboards while maintaining IT governance and control. The new interface may take some getting used to, but it is modern and more intuitive than the previous version. For companies new to IBM Cognos, as well as departments wanting to take a look at the platform, cloud options offer less risk. For those wanting early access to the new IBM Cognos Analytics, IBM has made it available on www.analyticszone.com. The changes I have noted move IBM Cognos Analytics closer to the advances in analytics as a whole, and I recommend that all these groups examine the new version.

Regards,

Ventana Research

Tableau Software’s annual conference, which company spokespeople reported had more than 10,000 attendees, filled the MGM Grand in Las Vegas. Various product announcements supported the company’s strategy to deliver value to analysts and users of visualization tools. Advances include new data preparation and integration features, advanced analytics and mapping. The company also announced the release of a stand-alone mobile application called Vizable. One key message the company aimed to promote is that Tableau is more than just a visualization company.

Over the last few years Tableau has made strides in the analytics and business intelligence market with a user-centric philosophy and the ability to engage younger analysts who work in the lines of business rather than in IT. Usability continues to rank as the top criterion for selecting analytics and business intelligence software in all of our business analytics benchmark research. In this area Tableau has introduced innovations such as VizQL, originally developed at Stanford University, which links the capabilities to query a database and to visualize data. This combination enables users not highly skilled in languages such as SQL or in proprietary business intelligence tools to create and share visually intuitive dashboards. The effect is to provide previously unavailable visibility into areas of their operations. Being able to see and compare performance across operations and people often increases communication and knowledge sharing.

Tableau 9, released in April 2015, which I discussed, introduced advances including analytic ease of use and performance, new APIs, data preparation, storyboarding and Project Elastic, the precursor to this year’s announcement of Vizable. Adoption of 9.x appears to be robust given both the number of conference attendees and increases in third-quarter revenue ($170 million) and new customers (3,100) reported to the financial markets.

As was the case last year, conference announcements included some developments already on the market as well as some still to come. Among the data preparation capabilities introduced are integration and automated spreadsheet cleanup. For the former, being able to join two data sets through a union function, which adds rows to form a single data set, and to integrate across databases by joining specific data fields gives users flexibility in combining, analyzing and visualizing multiple sets of data. For the latter, to automate the spreadsheet cleanup process Tableau examined usage patterns of Tableau Public to learn how users manually clean their spreadsheets, then used machine-learning algorithms to help users automate these tasks. Being able to automatically scan Excel files to find subtables and transform data without manual calculations and parsing will save time for analysts who otherwise would have to do these tasks by hand. Our benchmark research into information optimization shows that data preparation consumes the largest portion of time spent on analytics in nearly half (47%) of organizations, and the figure is even higher, 59 percent, in our latest data and analytics in the cloud benchmark research.
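For readers unfamiliar with these two integration patterns, the following hedged pandas sketch illustrates the general ideas of a union (appending rows from two extracts) and a cross-source join on a shared field. It is not Tableau’s implementation; the data and column names are invented.

```python
# A minimal pandas sketch of the two integration ideas described above:
# a union that appends rows from two extracts, and a cross-source join
# on a shared field. Illustrative only, not Tableau's implementation.
import pandas as pd

q1 = pd.DataFrame({"region": ["East", "West"], "revenue": [100, 120]})
q2 = pd.DataFrame({"region": ["East", "West"], "revenue": [110, 135]})

# Union: stack the two data sets into a single set of rows.
full_year = pd.concat([q1, q2], ignore_index=True)

# Cross-database join: combine fields from two different sources on a key.
targets = pd.DataFrame({"region": ["East", "West"], "target": [200, 240]})
combined = full_year.groupby("region", as_index=False)["revenue"].sum() \
                    .merge(targets, on="region")
print(combined)  # revenue vs. target per region
```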

Advanced analytics is another area of innovation for Tableau. The company demonstrated developments in outlier detection and clustering analysis natively integrated with the software. Use of these features is straightforward and visually oriented, replacing the need for statistical charts with drag-and-drop manipulation. The software does not enable users to identify numbers of segments or filter the degree of the outliers, but the basic capability can reduce data sets to more manageable analytic sets and facilitate exploration of anomalous data points within large sets. The skill necessary for these tasks, unlike the interpretation of box plots introduced at last year’s conference, is more intuitive and better suited for business users of information.

The company also demonstrated new mapping and geospatial features at the conference. Capabilities to analyze down to the zip code on a global basis, define custom territories, support geospatial files, integrate with the open source mapping platform MapBox and perform calculations within the context of a digital map are all useful features for location analytics, which is becoming more important in areas such as customer analytics and digital devices connected in the emerging Internet of Things (IoT). Tableau is adding capabilities that participants most often cited as important in our research on location analytics: to provide geographic representation (72%), visualize metrics associated with locations (65%) and directly select and analyze locations on maps (61%).

Tableau insists that its development of new capabilities is guided by customer requests. This provides a source of opportunities to address user needs, especially in the areas of data preparation, advanced analytics and location analytics. However, this strategy raises the question of whether it will ultimately put the company in conflict with the partners that have helped build the Tableau ecosystem and fed its momentum thus far. Tableau is positioning its product as a fully featured analytic platform of the sort that I have outlined, but to achieve that it eventually will have to encroach on the capabilities that partners such as Alteryx, Datawatch, Informatica, Lavastorm, Paxata and Trifacta offer today. Another question is whether Tableau will continue its internal development strategy or opt to acquire companies that can broaden its capabilities, the lack of which has hampered its overall value rating in our 2015 Analytics and Business Intelligence Value Index. In light of announcements at the conference, the path seems to be to develop these capabilities in-house. While there appears to be no immediate threat to the partnerships, continued development of some of these capabilities eventually will affect the partner business model in a more material way. Given that the majority of deals in its partner ecosystem flow through Tableau itself, many of the partners are vulnerable to these development efforts. In addition, I will be watching how aggressively Tableau helps to market Spark, the open source big data technology that I wrote about, as compared to some of the partner technologies that Spark threatens. Tableau has already built on Spark while some of its competitors have not, which may give Tableau a window of opportunity.

Going forward, integration with transactional systems and emerging cloud ecosystems is an area for Tableau that I will be watching. Given its architecture it’s not easy for Tableau to participate in the new generation of service-oriented architectures that characterize part of today’s cloud marketplace. For this reason, Tableau will need to continue to build out its own platform and the momentum of its ecosystem – which at this point does not appear to be a problem.

Finally, it will be interesting to see how Tableau eventually aligns its stand-alone data visualization application Vizable with its broader mobile strategy. We will be looking closely at the mobile market in our upcoming Mobile Analytics and Business Intelligence Value Index in the first half of 2016; in our last analysis Tableau was in the middle of the pack among providers, but it has made further investments since then.

We recommend that companies exploring analytics platforms, especially for on-premises and hosted cloud use, include Tableau on their short lists. Organizations that consider deploying Tableau on an enterprise basis should look closely at how it aligns with their broader user requirements and whether its cloud strategy will meet their future needs. Furthermore, while the company has made improvements in manageability and performance, these can still be a concern in some circumstances. Tableau should also be evaluated with specific business objectives in mind and in conjunction with its partner ecosystem.

Regards,

Ventana Research

PentahoWorld 2015, Pentaho’s second annual user conference, held in mid-October, centered on the general availability of release 6.0 of its data integration and analytics platform and its acquisition by Hitachi Data Systems (HDS) earlier this year. Company spokespeople detailed the development of the product in relation to the roadmap laid out in 2014 and outlined plans for its integration with those of HDS and its parent Hitachi. They also discussed Pentaho’s and HDS’s shared intentions regarding the Internet of Things (IoT), particularly in telecommunications, healthcare, public infrastructure and IT analytics.

Pentaho competes on the basis of what it calls a “streamlined data refinery” that enables a flexible way to access, transform and integrate data and embed and present analytic data sets in usable formats without writing new code. In addition, it integrates a visual analytic workflow interface with a business intelligence front end including customization extensions; this is a differentiator for the company since much of the self-serve analytics market in which it competes is still dominated by separate point products.

Pentaho 6 aims to provide manageable and scalable self-service analytics. A key advance in the new version is what Pentaho calls “virtualized data sets,” which logically aggregate multiple data sets according to transformations and integrations specified in the Pentaho Data Integration (PDI) analytic workflow interface. This virtual approach allows the physical processing to be executed close to the data in systems such as Hadoop or an RDBMS, which relieves users of the burden of continually moving data back and forth between the query and response systems. In this way, logical data sets can be served up for consumption in Pentaho Analytics as well as other front-end interfaces in a timely and flexible manner.
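A brief sketch may help illustrate the push-down idea behind virtualized data sets. This is not Pentaho’s implementation; it simply uses an in-memory SQLite database as a stand-in source system to show the aggregation executing close to the data, with only the small logical result set moving to the consumer.

```python
# A minimal sketch of push-down processing: rather than copying all rows out
# of the source, the aggregation runs close to the data and only the small
# logical result set is returned for consumption. Illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")            # stand-in for a source RDBMS
con.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
con.executemany("INSERT INTO readings VALUES (?, ?)",
                [("s1", 1.0), ("s1", 2.0), ("s2", 5.0)])

# Physical processing stays in the source system; only the summary moves.
virtual_data_set = con.execute(
    "SELECT sensor, AVG(value) AS avg_value FROM readings GROUP BY sensor"
).fetchall()
print(virtual_data_set)   # [('s1', 1.5), ('s2', 5.0)] served to the front end
```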

One challenge that emerges when accessing multiple integrated and transformed data sets is data lineage. Tracking lineage is important to establish trust in the data among users by enabling them to ascertain the origin of data prior to transformation and integration. This is particularly useful in regulated industries that may need access to and tracking of source data to prove compliance. The challenge becomes even more complicated with event data, given the difficulty of completely sourcing events and their sheer volume, an issue found in over a third of organizations in our operational intelligence benchmark research, which examined operations-centric analytics and business intelligence.

Similarly, Pentaho 6 uses Simple Network Management Protocol (SNMP) to deliver application programming interface (API) extensions so that third-party tools can help provide governance lower in the system stack to further ensure the reliability of data. Our benchmark research consistently shows that manageability of systems is important for user organizations, in particular for big data environments.

The flexibility introduced with virtual tables and the improvements in Pentaho 6.0 around in-line modeling (a concept I discussed after last year’s event) are two critical means to building self-service analytic environments. Marrying various data systems with different data models, sometimes referred to as big data integration, has proven to be a difficult challenge in such environments. Pentaho’s continued focus on big data integration and on providing an integration backbone to many business intelligence tools (in addition to its own) are potential competitive differentiators for the company. While analysts and users prefer integrated tool sets, today’s fragmented analytics market is increasingly dominated by separate tools that prepare data and surface data for consumption. Front-end tools alone cannot automate the big data integration process, which Pentaho PDI can do. Our research into big data integration shows the importance of eliminating manual tasks in this process: 78 percent of companies said it is important or very important to automate their big data integration processes. Pentaho’s ability to integrate with multiple visual analytics tools is important for the company, especially in light of the HDS accounts, which likely have a variety of front-end tools. In addition, the ability to provide an integrated front end can be attractive to independent software vendors, analytics services providers and certain end-user organizations that would like to embed both integration and visualization without having to license multiple products.

Going forward, Pentaho is focused on joint opportunities with HDS such as the emerging Internet of Things. Pentaho cites established industrial customers such as Halliburton, Intelligent Mechatronic Systems and Kirchoff Datensysteme Software as reference accounts for IoT. In addition, a conference participant from Caterpillar Marine Asset Intelligence shared how it embeds Pentaho to help analyze and predict equipment failure on maritime equipment. Pentaho’s ability to integrate and analyze multiple data sources is key to delivering business value in each of these environments, but the company also possesses a little-known asset in the Weka machine learning library, which is an integrated part of the product suite. Our research on next-generation predictive analytics finds that Weka is used by 5 percent of organizations, and many of the companies that use it are large or very large, which is Pentaho’s target market. Given the importance of machine learning in the IoT category, it will be interesting to see how Pentaho leverages this asset.

Also at the conference, an HDS spokesperson discussed its target markets for IoT, or what the company calls “social innovation.” These markets include telecommunications, healthcare, public infrastructure and IT analytics and reflect HDS’s customer base and the core businesses of its parent company Hitachi. Pentaho Data Integration is currently embedded within major customer environments such as Caterpillar, CERN, FINRA, Halliburton, NASDAQ, Sears and Staples, but not all of these companies fit directly into the IoT segments HDS outlined. While Hitachi’s core businesses provide fertile ground in which to grow its business, Pentaho will need to develop integration with the large industrial control systems already in place in those organizations.

The integration of Pentaho into HDS is a key priority. The 2,000-strong global sales force of HDS is now incented to sell Pentaho, and it will be important for the reps to include it as they discuss their accounts’ needs. While Pentaho’s portfolio can potentially broaden sales opportunities for HDS, big data software is a more consultative sale than the price-driven hardware and systems the sales force may be used to. Furthermore, the buying centers, which are shifting from IT to lines of business, can be significantly different depending on the type of organization and its objectives. Addressing this will require significant training within the HDS sales force and with partner consulting channels. The joint sales efforts will be well served by emphasizing the “big data blueprints” developed by Pentaho over the last couple of years and by developing new ones that speak to IoT and the combined capabilities of the two companies.

HDS says it will begin to embed Pentaho into its product portfolio but has promised to leave Pentaho’s roadmap intact. This is important because Pentaho has done a good job of listening to its customers and addressing the complexities that exist in big data and open source environments. As the next chapter unfolds, I will be looking at how the company integrates its platform with the HDS portfolio and expands it to deal with the complexities of IoT, which we will be investigating in an upcoming benchmark research study.

For organizations that need to use large-scale integrated data sets, Pentaho provides one of the most flexible yet mature tools in the market, and they should consider it. The analytics tool provides an integrated and embeddable front end that should be of particular interest to analytics services providers and independent software vendors seeking to make information management and data analytics core capabilities. For existing HDS customers, the Pentaho portfolio will open conversations in new areas of those organizations and potentially add considerable value within accounts.

Regards,

Ventana Research

The emerging Internet of Things (IoT) extends digital connectivity to devices and sensors in homes, businesses, vehicles and potentially almost anywhere. This innovation enables devices designed for it to generate and transmit data about their operations; analytics using this data can facilitate monitoring and a range of automatic functions.

To perform these functions IoT requires what Ventana Research calls Operational Intelligence (OI), a discipline that has evolved from the capture and analysis of instrumentation, networking and machine-to-machine interactions of many types. We define operational intelligence as a set of event-centered information and analytic processes operating across an organization that enable people to use that event information to take effective actions and make optimal decisions. Our benchmark research into Operational Intelligence shows that organizations most often want to use such event-centric architectures for defining metrics (37%) and assigning thresholds for alerts (35%) and for more action-oriented processes of sending notifications to users (33%) and linking events to activities (27%).
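As a simple illustration of this event-centric pattern, the following Python sketch defines a metric, assigns an alert threshold and sends a notification when an event crosses it. The device names, metric and threshold are hypothetical.

```python
# A minimal sketch of the event-centric pattern described above: define a
# metric, assign a threshold and notify users when an event crosses it.
THRESHOLD = 80.0   # alert threshold for the "temperature" metric

def notify(event):
    print(f"ALERT: {event['device']} temperature {event['temperature']} "
          f"exceeds {THRESHOLD}")

events = [
    {"device": "pump-1", "temperature": 72.5},
    {"device": "pump-2", "temperature": 91.3},   # should trigger an alert
]

for event in events:                 # in practice this would be a live stream
    if event["temperature"] > THRESHOLD:
        notify(event)                # link the event to an action
```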

In many industries, organizations can gain competitive advantage if they can reduce the elapsed time between an event occurring and actions taken or decisions made in response to it. Existing business intelligence (BI) tools provide useful analysis of and reporting on data drawn from previously recorded transactions, but to improve competitiveness and maximize efficiencies organizations are concluding that employees and processes – in IT, business operations and front-line customer sales, service and support – also need to be able to detect and respond to events as they happen. Our research into big data integration shows that nearly one in four companies currently integrate data into big data stores in real time. The challenge is to go further and act upon both the data that is stored and the data that is streaming in a timely manner.

The evolution of operational intelligence, especially in conjunction with IoT, is encouraging companies to revisit their priorities and spending for information technology and application management. However, sorting out the range of options poses a challenge for both business and IT leaders. Some see potential value in expanding their network infrastructure to support OI. Others are implementing event processing (EP) systems that employ new technology to detect meaningful patterns, anomalies and relationships among events. Increasingly, organizations are using dashboards, visualization and modeling to notify nontechnical people of events and enable them to understand their significance and take appropriate and immediate action.

As with any innovation, using OI for IoT may require substantial changes. These are among the challenges organizations face as they consider adopting operational intelligence:

  • They find it difficult to evaluate the business value of enabling real-time sensing of data and event streams using identification tags, agents and other systems embedded not only in physical locations like warehouses but also in business processes, networks, mobile devices, data appliances and other technologies.
  • They lack an IT architecture that can support and integrate these systems as the volume and frequency of information increase.
  • They are uncertain how to set reasonable business and IT expectations, priorities and implementation plans for important technologies that may conflict or overlap. These can include business intelligence, event processing, business process management, rules management, network upgrades and new or modified applications and databases.
  • They don’t understand how to create a personalized user experience that enables nontechnical employees in different roles to monitor data or event streams, identify significant changes, quickly understand the correlation between events and develop a context in which to determine the right decisions or actions to take.

Ventana Research has announced new benchmark research on The Internet of Things and Operational Intelligence that will identify trends and best practices associated with this technology and these processes. It will explore organizations’ experiences with initiatives related to events and data and with attempts to align IT projects, resources and spending with new business objectives that demand real-time intelligence and event-driven architectures. The research will investigate how organizations are increasing their responsiveness to events by rebalancing the roles of networks, applications and databases to reduce latency; it also will explore ways in which they are using sensor data and alerts to anticipate problematic events. We will benchmark the performance of organizations’ implementations, including IoT, event stream processing, event and activity monitoring, alerting, event modeling and workflow, and process and rules management.

As operational intelligence evolves as the core of IoT platforms, it is an important time to take a closer look at this emerging opportunity and challenge. For those interested in learning more or becoming involved in this upcoming research, please let me know.

Regards,

Ventana Research

Splunk’s annual gathering, this year called .conf 2015, in late September hosted almost 4,000 Splunk customers, partners and employees. It is one of the fastest-growing user conferences in the technology industry. The area dedicated to Splunk partners has grown from a handful of booths a few years ago to a vast showroom floor many times larger. While the conference’s main announcement was the release of Splunk Enterprise 6.3, its flagship platform, the progress the company is making in the related areas of machine learning and the Internet of Things (IoT) most caught my attention.

Splunk’s strength is its ability to index, normalize, correlate and query data throughout the technology stack, including applications, servers, networks and sensors. It uses distributed search that enables correlation and analysis of events across local- and wide-area networks without moving vast amounts of data. Its architectural approach unifies cloud and on-premises implementations and provides extensibility for developers building applications. Originally, Splunk provided an innovative way to troubleshoot complex technology issues, but over time new uses for Splunk-based data have emerged, including digital marketing analytics, cyber security, fraud prevention and connecting digital devices in the emerging Internet of Things. Ventana Research has covered Splunk since its establishment in the market, most recently in this analysis of mine.

Splunk’s experience in dealing directly with distributed, time-series data and processes on a large scale puts it in position to address the Internet of Things from an industrial perspective. This sort of data is at the heart of large-scale industrial control systems, but it often comes in different formats and is based on different protocols. For instance, sensor technology and control systems that were invented 10 to 20 years ago use very different technology than modern systems. Furthermore, as with computer technology, there are multiple layers in stack models that have to communicate. Splunk’s tools help engineers and systems analysts cross-reference these disparate systems in much the same way that Splunk queries computer system and network data, even though the underlying systems can be vastly different. To address this challenge, Splunk turns to its partners and its extensible platform. For example, Kepware has developed plug-ins that use its more than 150 communication drivers so users can stream real-time industrial sensor and machine data directly into the Splunk platform. Currently, the primary value drivers for organizations in the industrial IoT are operational efficiency, predictive maintenance and asset management. At the conference, Splunk showcased projects in these areas, including one with Target that uses Splunk to improve operations in robotics and manufacturing.

For its part, Splunk is taking a multipronged approach by acquiring companies, investing in internal development and enabling its partner ecosystem to build new products. One key enabler of its approach to IoT is machine learning algorithms built on the Splunk platform. In machine learning a model can use new data to continuously learn and adapt its answers to queries. This differs from conventional predictive analytics, in which users build models and validate them based on a particular sample; the model does not adapt over time. With machine learning, for instance, if a piece of equipment or an automobile shows a certain optimal pattern of operation over time, an algorithm can identify that pattern and build a model for how that system should behave. When the equipment begins to act in a less optimal or anomalous way, the system can alert a human operator that there may be a problem, or in a machine-to-machine situation, it can invoke a process to solve the problem or recalibrate the machine.
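The following is a minimal sketch of that learn-then-detect pattern, not Splunk’s implementation: it fits a statistical baseline from historical readings of a hypothetical sensor and flags new readings that deviate strongly from the learned behavior.

```python
# A minimal sketch of the learn-then-detect pattern described above: fit a
# baseline from historical sensor readings, then flag new readings that
# deviate strongly from that learned behavior. Values are hypothetical.
import numpy as np

history = np.array([20.1, 19.8, 20.3, 20.0, 19.9, 20.2])   # normal operation
mean, std = history.mean(), history.std()

def is_anomalous(reading, z_threshold=3.0):
    """Alert when a reading is far outside the learned operating pattern."""
    return abs(reading - mean) > z_threshold * std

for reading in [20.1, 20.4, 27.5]:
    if is_anomalous(reading):
        print(f"anomaly detected: {reading} -> alert an operator or trigger "
              f"a machine-to-machine recalibration process")
```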

Machine learning algorithms allow event processes to be audited, analyzed and acted upon in real time. They enable predictive capabilities for maintenance, transportation and logistics, and asset management and can also be applied in more people-oriented domains such as fraud prevention, security, business process improvement and digital products. IoT potentially can have a major impact on business processes, but only if organizations can realign their systems from model-and-apply to discover-and-adapt approaches. For instance, processes are often carried out in an uneven fashion, different from the way the model was conceived and communicated through complex process documentation and systems. As more process flows are directly instrumented and more processes are carried out by machines, the ability to model directly based on the discovery of those event flows and to adapt to them (through human learning or machine learning) becomes key to improving organizational processes. Such realignment of business processes, however, often involves broad organizational transformation. Our benchmark research on operational intelligence shows that challenges associated with people and processes, rather than information and technology, most often hold back organizational improvement.

Two product announcements made at the conference illuminate the direction Splunk is taking with IoT and machine learning. The first is User Behavior Analytics (UBA), based on its acquisition of Caspida, which produces advanced algorithms that can detect anomalous behavior within a network. Such algorithms can model internal user behavior, and when behavior deviates from the established norm, the system can generate an alert that can be addressed through investigative processes using Splunk Enterprise Security 4.0. Together, Splunk Enterprise Security 4.0 and UBA won the 2015 Ventana Research CIO Innovation Award. The acquisition of Caspida shows that Splunk is not afraid to acquire companies in niche areas where it can exploit its platform to deliver organizational value. I expect that we will see more such acquisitions of companies with high-value machine learning algorithms as Splunk carves out specific positions in emerging markets.

The other product announced is IT Service Intelligence (ITSI), which highlights machine learning algorithms alongside Splunk’s core capabilities. The IT Service Intelligence App is an application in which end users deploy machine learning to see patterns in various IT service scenarios. ITSI can inform and enable multiple business uses such as predictive maintenance, churn analysis, service-level agreements and chargebacks. Similar to UBA, it uses anomaly detection to point out issues and enables managers to view highly distributed processes such as claims process data in insurance companies. At this point, however, use of ITSI (like other areas of IoT) may encounter cultural and political issues as organizations deal with changes in the roles of IT and operations management. Splunk’s direction with ITSI shows that the company is staying close to its IT operations knitting as it builds out application software, but such development also puts Splunk into new competitive scenarios where legacy technology and processes may still be considered good enough.

We note that ITSI is built using Splunk’s Machine Learning Toolkit and showcase, which currently is in preview mode. The platform is an important development for the company and fills one of the gaps that I pointed out in its portfolio last year. Addressing this gap enables Splunk and its partners to create services that apply advanced analytics to big data, which almost half (45%) of organizations find important. I consider the use of predictive and advanced analytics on big data a killer application for big data; our benchmark research on big data analytics backs this claim: Predictive analytics is the type of analytics most (64%) organizations wish to pursue on big data.

Organizations currently looking at IoT use cases should consider Splunk’s strategy and tools in the context of the specific problems they need to address. Machine learning algorithms built for particular industries are key, so it is important to understand whether the problem can be addressed using prebuilt applications provided by Splunk or one of its partners, or whether the organization will need to build its own algorithms using the Splunk machine learning platform or alternatives. Evaluate both the platform capabilities and the instrumentation, including the types of protocols and formats involved and how that data will be consumed into the system and related in a uniform manner. Most of all, be sure the skills and processes in the organization align with the technology from an end-user and business perspective.

Regards,

Ventana Research

One of the key findings in our latest benchmark research into predictive analytics is that companies are incorporating predictive analytics into their operational systems more often than was the case three years ago. The research found that companies are less inclined to purchase stand-alone predictive analytics tools (29% vs 44% three years ago) and more inclined to purchase predictive analytics built into business intelligence systems (23% vs 20%), applications (12% vs 8%), databases (9% vs 7%) and middleware (9% vs 2%). This trend is not surprising since operationalizing predictive analytics – that is, building predictive analytics directly into business process workflows – improves companies’ ability to gain competitive advantage: those that deploy predictive analytics within business processes are more likely to say they gain competitive advantage and improve revenue through predictive analytics than those that don’t.

In order to understand the shift that is underway, it is important to understand how predictive analytics has historically been executed within organizations. The marketing organization provides a useful example since it is the functional area where organizations most often deploy predictive analytics today. In a typical organization, those doing statistical analysis will export data from various sources into a flat file. (Often IT is responsible for pulling the data from the relational databases and passing it over to the statistician in a flat file format.) Data is cleansed, transformed, and merged so that the analytic data set is in a normalized format. It then is modeled with stand-alone tools and the model is applied to records to yield probability scores. In the case of a churn model, such a probability score represents how likely someone is to defect. For a marketing campaign, a probability score tells the marketer how likely someone is to respond to an offer. These scores are produced for marketers on a periodic basis – usually monthly. Marketers then work on the campaigns informed by these static models and scores until the cycle repeats itself.
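To make this batch cycle concrete, here is a hedged scikit-learn sketch of a churn model trained on a small, hypothetical analytic data set and then used to produce periodic probability scores. The features, data and figures are invented for illustration.

```python
# A minimal sketch of the batch scoring cycle described above, using
# scikit-learn and hypothetical flat-file data: train a churn model on a
# cleansed analytic data set, then score customers with defection
# probabilities for the monthly campaign run.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Normalized analytic data set (in practice exported and merged from source systems).
train = pd.DataFrame({
    "tenure_months": [2, 30, 5, 48, 3, 60],
    "support_calls": [4, 0, 3, 1, 5, 0],
    "churned":       [1, 0, 1, 0, 1, 0],
})

model = LogisticRegression().fit(train[["tenure_months", "support_calls"]],
                                 train["churned"])

# Monthly scoring run: each customer gets a probability of defecting.
customers = pd.DataFrame({"tenure_months": [4, 55], "support_calls": [6, 0]})
customers["churn_score"] = model.predict_proba(customers)[:, 1]
print(customers)   # scores handed to marketing until the next cycle
```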

The challenge presented by this traditional model is that a lot can happen in a month and the heavy reliance on process and people can hinder the organization’s ability to respond quickly to opportunities and threats. This is particularly true in fast-moving consumer categories such as telecommunications or retail. For instance, if a person visits the company’s cancelation policy web page the instant before he or she picks up the phone to cancel the contract, this customer’s churn score will change dramatically and the action that the call center agent should take will need to change as well. Perhaps, for example, that score change should mean that the person is now routed directly to an agent trained to deal with possible defections. But such operational integration requires that the analytic software be integrated with the call agent software and web tracking software in near-real time.
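A minimal, hypothetical sketch of such event-driven integration follows: a web-tracking event immediately updates the customer’s churn score, and the call-routing logic reacts to the new score rather than last month’s batch value. The identifiers, scores and threshold are invented.

```python
# A hypothetical sketch of near-real-time operational integration: a web
# event rescores the customer immediately, and routing reacts to that score.
churn_scores = {"cust-42": 0.18}          # score from the last batch run

def on_page_view(customer_id, page):
    """Event handler wired to web tracking (illustrative only)."""
    if page == "/cancellation-policy":
        churn_scores[customer_id] = 0.85  # rescored as a likely defector

def route_call(customer_id):
    score = churn_scores.get(customer_id, 0.0)
    return "retention_specialist" if score > 0.7 else "standard_queue"

on_page_view("cust-42", "/cancellation-policy")
print(route_call("cust-42"))              # -> retention_specialist
```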

Similarly, the models themselves need to be constantly updated to deal with the fast pace of change. For instance, if a telecommunications carrier competitor offers a large rebate to customers to switch service providers, an organization’s churn model can be rendered out of date and should be updated. Our research shows that organizations that constantly update their models gain competitive advantage more often than those that only update them periodically (86% vs 60% average), more often show significant improvement in organizational activities and processes (73% vs 44%), and are more often very satisfied with their predictive analytics (57% vs 23%).

Building predictive analytics into business processes is more easily discussed than done; complex business and technical challenges must be addressed. The skills gap that I recently wrote about is a significant barrier to implementing predictive analytics. Making predictive analytics operational requires not only statistical and business skills but technical skills as well. From a technical perspective, one of the biggest challenges for operationalizing predictive analytics is accessing and preparing data, which I wrote about. Four out of ten companies say that this is the part of the predictive analytics process where they spend the most time. Choosing the right software is another challenge that I wrote about. Making that choice includes identifying the specific integration points with business intelligence systems, applications, database systems and middleware. These decisions will depend on how people use the various systems and what areas of the organization are looking to operationalize predictive analytics processes.

For those that are willing to take on the challenges of operationalizing predictive analytics, the rewards can be significant, including significantly better competitive positioning and new revenue opportunities. Furthermore, once predictive analytics is initially deployed in the organization its use snowballs, with more than nine in ten companies going on to increase their use of predictive analytics. Once companies reach that stage, one-third of them (32%) say predictive analytics has had a transformational impact and another half (49%) say it provides significant positive benefits.

Regards,

Ventana Research

Our benchmark research into predictive analytics shows that lack of resources, including budget and skills, is the number-one business barrier to the effective deployment and use of predictive analytics; awareness – that is, an understanding of how to apply predictive analytics to business problems – is second. In order to secure resources and address awareness problems a business case needs to be created and communicated clearly wherever appropriate across the organization. A business case presents the reasoning for initiating a project or task. A compelling business case communicates the nature of the proposed project and the arguments, both quantified and unquantifiable, for its deployment.

The first steps in creating a business case for predictive analytics are to understand the audience and to communicate with the experts who will be involved in leading the project. Predictive analytics can be transformational in nature, and therefore the audience potentially is broad, including many disciplines within the organization. Understand who should be involved in creating the business case, a list that may include business users, analytics users and IT. Those most often primarily responsible for designing and deploying predictive analytics are data scientists (in 31% of organizations), the business intelligence and data warehouse team (27%), those working in general IT (16%) and line of business analysts (13%), so be sure to involve these groups. Understand the specific value and challenges for each of the constituencies so the business case can represent the interests of these key stakeholders. I discuss the aspects of the business where these groups will see predictive analytics adding the most value here and here.

For the business case for a predictive analytics deployment to be persuasive, executives also must understand how specifically the deployment will impact their areas of responsibility and what the return on investment will be. For these stakeholders, the argument should be multifaceted. At a high level, the business case should explain why predictive analytics is important and how it fits with and enhances the organization’s overall business plan. Industry benchmark research and relevant case studies can be used to paint a picture of what predictive analytics can do for marketing (48%), operations (44%) and IT (40%), the functions where predictive analytics is used most.

A business case should show how predictive analytics relates to other relevant innovation and analytic initiatives in the company. For instance, companies have been spending money on big data, cloud and visualization initiatives whose software returns can be more difficult to quantify. Our research into big data analytics and data and analytics in the cloud shows that the top benefit of these initiatives is communication and knowledge sharing. Fortunately, the business case for predictive analytics can cite the tangible business benefits our research identified, the most often cited of which are achieving competitive advantage (57%), creating new revenue opportunities (50%) and increasing profitability (46%). But the business case can be made even stronger by noting that predictive analytics can add value when it is used to leverage other current technology investments. For instance, our big data analytics research shows that the most valuable type of analytics to apply to big data is predictive analytics.

To craft the specifics of the business case, concisely define the business issue that will be addressed. Assess the current environment and offer a gap analysis to show the difference between the current environment and the future one. Offer a recommended solution, but also offer alternatives. Detail the specific value propositions associated with the change. Create a financial analysis summarizing costs and benefits. Support the analysis with a timeline including roles and responsibilities. Finally, detail the major risk factors and opportunity costs associated with the project.

For complex initiatives, break the overall project into a series of shorter projects. If the business case is for a project that will involve substantial work, consider providing separate timelines and deliverables for each phase. Doing so will keep stakeholders both informed and engaged during the time it takes to complete the full project. For large predictive analytics projects, it is important to break out the due-diligence phase and try not to make any hard commitments until that phase is completed. After all, it is difficult to establish defensible budgets and timelines until one knows the complete scope of the project.

Ensure that the project timeline is realistic and addresses all the key components needed for a successful deployment. In particular with predictive analytics projects, make certain that it reflects a thoughtful approach to data access, data quality and data preparation. We note that four in 10 organizations say that the most time spent in the predictive analytics process is in data preparation, and another 22 percent say that they spend the most time accessing data sources. If data issues have not been well thought through, it is next to impossible for the predictive analytics initiative to be successful. Read my recent piece on operationalizing predictive analytics to see how predictive analytics will align with specific business processes.

If you are proposing the implementation of new predictive analytics software, highlight the multiple areas of return beyond competitive advantage and revenue benefits. Specifically, new software can have a lower total cost of ownership and generate direct cost savings from improved operating efficiencies. A software deployment also can yield benefits related to people (productivity, insight, fewer errors), management (creativity, speed of response), process (shorter time on task or time to complete) and information (easier access, more timely, accurate and consistent). Create a comprehensive list of the major benefits the software will provide compared to the existing approach, quantifying the impact wherever possible. Detail all major costs of ownership whether the implementation is on-premises or cloud-based: these will include licensing, maintenance, implementation consulting, internal deployment resources, training, hardware and other infrastructure costs. In other words, think broadly about both the costs and the sources of return in building the case for new technology. Also, read my recent piece on procuring predictive analytics software.
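As an illustration of the kind of quantification suggested here, the short Python sketch below totals hypothetical ownership costs and quantified benefits over three years and derives a simple return figure. All amounts are invented placeholders, not benchmarks.

```python
# A minimal cost/benefit sketch with entirely hypothetical figures: total the
# major ownership costs, total the quantified benefits, and derive a simple
# multi-year return figure for the business case.
costs = {                       # annual, hypothetical
    "licensing": 100_000, "maintenance": 20_000, "consulting": 60_000,
    "internal_resources": 40_000, "training": 15_000, "infrastructure": 25_000,
}
benefits = {                    # annual, hypothetical and quantified where possible
    "new_revenue": 220_000, "operating_efficiencies": 90_000, "fewer_errors": 30_000,
}

years = 3
total_cost = sum(costs.values()) * years
total_benefit = sum(benefits.values()) * years
roi = (total_benefit - total_cost) / total_cost
print(f"3-year cost: {total_cost:,}  benefit: {total_benefit:,}  ROI: {roi:.0%}")
```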

Understanding the audience, painting the vision, crafting the specific case, outlining areas of return, specifying software, noting risk factors, and being as comprehensive as possible are all part of a successful business plan process. Sometimes the initial phase is really just a pitch for project funding, and there won’t be any dollar allocation until people are convinced that the program will get them what they need. In such situations multiple documents may be required, including a short one- to two-page document that outlines the vision and makes a high-level argument for action by the organizational stakeholders. Once a cross-functional team and executive support are in place, a more formal assessment and design plan following the principles above will have to be built.

Predictive analytics offers significant returns for organizations willing to pursue it, but establishing a solid business case is the first step for any organization.

Regards,

Ventana Research

Our research into next-generation predictive analytics shows that along with not having enough skilled resources, which I discussed in my previous analysis, the inability to readily access and integrate data is a primary reason for dissatisfaction with predictive analytics (in 62% of participating organizations). Furthermore, this area consumes the most time in the predictive analytics process: The research finds that preparing data for analysis (40%) and accessing data (22%) are the parts of the predictive analytics process that create the most challenges for organizations. To allow more time for actual analysis, organizations must work to improve their data-related processes.

Organizations apply predictive analytics to many categories of information. Our research shows that the most common categories are customer (used by 50%), marketing (44%), product (43%), financial (40%) and sales (38%). Such information often has to be combined from various systems and enriched with information from new sources. Before users can apply predictive analytics to these blended data sets, the information must be put into a common form and represented as a normalized analytic data set. Unlike in data warehouse systems, which provide a single data source with a common format, today data is often located in a variety of systems that have different formats and data models. Much of the current challenge in accessing and integrating data comes from the need to include not only a variety of relational data sources but also less structured forms of data. Data that varies in both structures and sizes is commonly called big data.
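The following pandas sketch illustrates, with invented data, the blending step described above: records from hypothetical CRM, sales and marketing systems are joined on a common key and normalized into a single analytic data set ready for modeling.

```python
# A minimal pandas sketch of blending data from multiple systems into a
# normalized analytic data set. The systems, keys and values are hypothetical.
import pandas as pd

crm = pd.DataFrame({"customer_id": [1, 2], "segment": ["SMB", "Enterprise"]})
sales = pd.DataFrame({"customer_id": [1, 2], "revenue": [1200.0, 56000.0]})
marketing = pd.DataFrame({"customer_id": [1, 2], "campaign_responses": [3, 0]})

analytic_set = crm.merge(sales, on="customer_id").merge(marketing, on="customer_id")

# Normalize numeric features to a common scale before modeling.
for col in ["revenue", "campaign_responses"]:
    rng = analytic_set[col].max() - analytic_set[col].min()
    analytic_set[col] = (analytic_set[col] - analytic_set[col].min()) / (rng or 1)
print(analytic_set)
```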

To deal with the challenge of storing and computing big data, organizations planning to use predictive analytics increasingly turn to big data technology. While flat files and relational databases on standard hardware, each cited by almost two-thirds (63%) of participants, are still the most commonly used tools for predictive analytics, more than half (52%) of organizations now use data warehouse appliances for predictive analytics, and 31 percent use in-memory databases, which the second-highest percentage (24%) plan to adopt in the next 12 to 24 months. Hadoop and NoSQL technologies lag in adoption, currently used by one in four organizations, but in the next 12 to 24 months an additional 29 percent intend to use Hadoop and 20 percent more will use other NoSQL approaches. Furthermore, more than one-quarter (26%) of organizations are evaluating Hadoop for use in predictive analytics, which is the most of any technology.

Some organizations are considering moving from on-premises to cloud-based storage of data for predictive analytics; the most common reasons for doing so are to improve access to data (for 49%) and preparation of data for analysis (43%). This trend speaks to the increasing importance of cloud-based data sources as well as cloud-based tools that provide access to many information sources and provide predictive analytics. As organizations accumulate more data and need to apply predictive analytics in a scalable manner, we expect the need to access and use big data and cloud-based systems to increase.

While big data systems can help handle the size and variety of data, they do not by themselves solve the challenges of data access and normalization. This is especially true for organizations that need to blend new data that resides in isolated systems. How to do this is critical for organizations to consider, especially in light of the people who will use the predictive analytics system and their skills. There are three key considerations here. One is the user interface, the most common of which are spreadsheets (used by 48%), graphical workflow modeling tools (44%), integrated development environments (37%) and menu-driven modeling tools (35%). Second is the number of data sources to deal with and which are supported by the system; our research shows that four out of five organizations need to access and integrate five or more data sources. The third consideration is which analytic languages and libraries to use and which are supported by the system; the research finds that Microsoft Excel, SQL, R, Java and Python are the most widely used for predictive analytics. Considering these three priorities in terms of resident skills, processes, current technology and the information sources that need to be accessed is crucial for delivering value to the organization with predictive analytics.

While there has been an exponential increase in data available to use in predictive analytics as well as advances in integration technology, our research shows that data access and preparation are still the most challenging and time-consuming tasks in the predictive analytics process. Although technology for these tasks has improved, complexity of the data has increased through the emergence of different data types, large-scale data and cloud-based data sources. Organizations must pay special attention to how they choose predictive analytics tools that can give easy access to multiple diverse data sources including big data stores and provide capabilities for data blending and provisioning of analytic data sets. Without these capabilities, predictive analytics tools will fall short of expectations.

Regards,

Ventana Research

The Performance Index analysis we performed as part of our next-generation predictive analytics benchmark research shows that only one in four organizations, those functioning at the highest Innovative level of performance, can use predictive analytics to compete effectively against others that use this technology less well. We analyze performance in detail in four dimensions (People, Process, Information and Technology), and for predictive analytics we find that organizations perform best in the Technology dimension, with 38 percent reaching the top Innovative level. This is often the case in our analyses, as organizations initially perform better in the details of selecting and managing new tools than in the other dimensions. Predictive analytics is not a new technology per se; what has changed is that it is becoming more common in business units, as I have written.

In contrast to organizations’ performance in the Technology dimension, only 10 percent reach the Innovative level in People and only 11 percent in Process. This disparity uncovered by the research analysis suggests there is value in focusing on the skills that are used to design and deploy predictive analytics. In particular, we found that one of the two most-often cited reasons why participants are not fully satisfied with the organization’s use of predictive analytics is that there are not enough skilled resources (cited by 62%). In addition, 29 percent said that the need for too much training or customized skills is a barrier to changing their predictive analytics.

The challenge for many organizations is to find the combination of domain knowledge, statistical and mathematical knowledge, and technical knowledge they need to integrate predictive analytics into other technology systems and into operations in the lines of business, which I also have discussed. The need for technical knowledge is evident in the research findings on the jobs held by individual participants: Three out of four hold roles that require technical sophistication. More than one-third (35%) are data scientists who have a deep understanding of predictive analytics and its use as well as of data-related technology; one-fourth are data analysts who understand the organization's data and systems but have limited knowledge of predictive analytics; and 16 percent described themselves as predictive analytics experts who have a deep understanding of this topic but not of technology in general. The research also finds that those most often primarily responsible for designing and deploying predictive analytics are data scientists (in 31% of organizations) or members of the business intelligence and data warehouse team (27%). This focus on business intelligence and data warehousing represents a shift toward integrating predictive analytics with other technologies and indicates a need to scale predictive analytics across the organization.

In only about half (52%) of organizations are the people who design and deploy predictive analytics the same people who use the output of these processes. The most common reasons research participants cited for users of predictive analytics not producing their own analyses are that they don't have enough skills training (79%) and don't understand the mathematics involved (66%). The research also finds evidence that skills training pays off: Fully half of those who said they received adequate training in applying predictive analytics to business problems also said they are very satisfied with their predictive analytics; the percentage dropped precipitously among those who said the training was only somewhat adequate (8%) or inadequate (6%). It is clear that professionals trained in both business and technology are necessary for an organization to successfully understand, deploy and use predictive analytics.

To determine the technical skills and training necessary for predictive analytics, it is important to understand which languages and libraries are used. The research shows that the most common are SQL (used by 67% of organizations) and Microsoft Excel (64%), with which many people are familiar and which are relatively easy to use. The three next-most commonly used are much more sophisticated: the open source language R (by 58%), Java (42%) and Python (36%). Overall, many languages are in use: Three out of five organizations use four or more of them. This array reflects the diversity of approaches to predictive analytics. Organizations must assess what languages make sense for their uses, and vendors must support many languages for predictive analytics to meet the demands of all customers.
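As a hypothetical illustration of why several of these languages often coexist in one workflow, the sketch below uses SQL for the aggregation step and Python with scikit-learn for the model itself; the table, columns, values and the availability of scikit-learn are assumptions made for the example.

    # Illustrative sketch: SQL for data preparation, Python for the predictive model.
    # The table, columns and values are hypothetical; scikit-learn is assumed to be installed.
    import sqlite3
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Stand-in for a relational source: an in-memory SQLite database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer_id INT, amount REAL, churned INT)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [(1, 120.0, 0), (1, 80.0, 0), (2, 35.0, 1), (3, 210.0, 0), (3, 95.0, 0), (2, 20.0, 1)],
    )

    # SQL expresses the aggregation step that many analysts already know how to write.
    features = pd.read_sql_query(
        "SELECT customer_id, COUNT(*) AS order_count, AVG(amount) AS avg_amount, "
        "MAX(churned) AS churned FROM orders GROUP BY customer_id",
        conn,
    )

    # Python builds the predictive model on the prepared features.
    X = features[["order_count", "avg_amount"]]
    y = features["churned"]
    model = LogisticRegression().fit(X, y)
    print(model.predict_proba(X))

A split like this, with a familiar query language preparing the data and a statistical library building the model, is one common way the languages cited in the research end up being combined.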

The research thus makes clear that organizations must pay attention to a variety of skills and how to combine them with technology to ensure success in using predictive analytics. Not all the skills necessary in an analytics-driven organization can be combined in one person, as I discussed in my analysis of analytic personas. We recommend that as organizations focus on the skills discussed above, they consider creating cross-functional teams from both business and technology groups.

Regards,

Ventana Research

To have an impact on business success, Ventana Research recommends viewing predictive analytics as a business investment rather than an IT investment. Our recent benchmark research into next-generation predictive analytics reveals that since our previous research on the topic in 2012, funding has shifted from general business budgets (previously 44%) toward line of business IT budgets (previously 19%). Now more than half of organizations fund such projects from business budgets: 29 percent from general business budgets and 27 percent from a line of business IT budget. This shift in buying reflects the mainstreaming of predictive analytics in organizations, which I recently wrote about.

This shift in funding of initiatives coincides with a change in the preferred format for predictive analytics. The research reveals that 15 percentage points fewer organizations prefer to purchase predictive analytics as stand-alone technology today than did in the previous research (29% now vs. 44% then). Instead we find growing demand for predictive analytics tools that can be integrated with operational environments such as business intelligence or transaction applications. More than two in five (43%) organizations now prefer predictive analytics embedded in other technologies. This integration can help businesses respond faster to market opportunities and competitive threats without having to switch applications.
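As a hypothetical sketch of what embedding can look like in practice, the snippet below consults a previously trained model inside an operational code path rather than in a separate analytics application; the model, features, threshold and routing outcomes are illustrative assumptions, not any vendor's API.

    # Hypothetical sketch of embedding a predictive score in an operational workflow.
    # The model, features and threshold are illustrative, not a specific product's API.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Train a simple model offline (stand-in for the analytics team's work).
    X_train = np.array([[120.0, 24], [35.0, 6], [210.0, 48], [15.0, 3]])
    y_train = np.array([0, 1, 0, 1])  # 1 = churned
    model = LogisticRegression().fit(X_train, y_train)

    def handle_renewal_request(monthly_spend: float, tenure_months: int) -> str:
        """Operational code path that consults the embedded model before responding."""
        churn_risk = model.predict_proba([[monthly_spend, tenure_months]])[0][1]
        # The business rule is applied inline, without switching to a separate analytics tool.
        return "route_to_retention_team" if churn_risk > 0.5 else "standard_renewal"

    print(handle_renewal_request(30.0, 5))

The point of such a design is that the score is consumed where the decision is made, so the user never has to leave the operational application.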

The features most often sought in predictive analytics products further confirm business interest. Usability (very important to 67%) and capability (59%) are the top buying criteria, followed by reliability (52%) and manageability (49%). This is consistent with the priorities of organizations three years ago with one important exception: Manageability was one of the two least important criteria then (33%) but today is nearly tied with reliability for third place. This change makes sense in light of a broader use of predictive analytics and the need to manage an increasing variety of models and input variables.

Further, as a business investment, predictive analytics is used most often in front-office functions, but the research shows that IT and operations are closely associated with these functions. The top four areas of predictive analytics use are marketing (48%), operations (44%), IT (40%) and sales (38%). In the previous research, operations ranked much lower on the list.

To select the most useful product, organizations must understand where IT and business buyers agree and disagree on what matters. The research shows that they agree closely on how to deploy the tools: Both groups express a greater preference for deploying on-premises (business 53%, IT 55%), and similar proportions prefer deployment on demand through cloud computing (business 22%, IT 23%). More than 90 percent on both sides said the organization plans to deploy more predictive analytics, and they were also in close agreement (business 32%, IT 33%) that doing so would have a transformational impact, enabling the organization to do things it couldn't do before.

However, some distinctions are important to consider, especially when looking at the business case for predictive analytics. Business users more often focus on the benefit of achieving competitive advantage (60% vs. 50% of IT) and creating new revenue opportunities (55% vs. 41%), which are the two benefits most often cited overall. On the other hand, IT professionals more often focus on the benefits of increased upselling and cross-selling (53% vs. 32%), reduced risk (26% vs. 21%) and better compliance (26% vs. 19%); the last two reflect key responsibilities of the IT group.

Despite strong business involvement, when it comes to products, IT, technical and data experts are indispensable for the evaluation and use of predictive analytics. Data scientists or the head of data management are most often involved in recommending (52%) and evaluating (56%) predictive analytics technologies. Reflecting the need to deploy predictive analytics to business units, analysts and IT staff are the next-most influential roles for evaluating and recommending. This involvement of technically sophisticated individuals combined with the movement away from organizations buying stand-alone tools indicates an increasingly team-oriented approach.

Purchase of predictive analytics often requires approval from high up in the organization, which underscores the degree of enterprise-wide interest in this technology. The CEO or president is most likely to be involved in the final decision in small (87%) and midsize (76%) companies. In contrast, large companies rely most on IT management (40%), and very large companies rely most on the CIO or head of IT (60%). We again note the importance of IT in the predictive analytics decision-making process in larger organizations. In the previous research, IT management was involved in approval in only 9 percent of large companies, and the CIO in only 40 percent.

As predictive analytics becomes more widely used, buyers should take a broad view of the design and deployment requirements of the organization and of specific lines of business. They should determine which functional areas will use the tools and weigh issues involving people, processes and information as well as technology when evaluating such systems. We urge business and IT buyers to work together during the buying process with the common goal of using predictive analytics to deliver value to the enterprise.

Regards,

Ventana Research
