You are currently browsing the category archive for the ‘Cloud Computing’ category.

As I discussed in the state of data and analytics in the cloud recently, usability is a top evaluation criterion for organizations in selecting cloud-based analytics software. Access to data in both cloud and on-premises systems is an essential antecedent of usability: it helps business people perform analytic tasks themselves without having to rely on IT. Some tools allow data integration by business users on an ad hoc basis, but to provide an enterprise integration process and a governed information platform, IT involvement is often necessary. Once that is done, though, using cloud-based data for analytics can empower business users and improve communication and processes.

To be able to make the best decisions, organizations need access to multiple integrated data sources. The research finds that the most common data sources are predictable: business applications (51%), business intelligence applications (51%), data warehouses or operational data stores (50%), relational databases (41%) and flat files (33%). Increasingly, though, organizations also are including less structured sources such as semistructured documents (33%), social media (27%) and nonrelational database systems (19%). In addition, there are important external data sources, including business applications (for 61%), social media data (48%), Internet information (42%), government sources (33%) and market data (29%). Whether stored in the cloud or locally, data must be normalized and combined into a single data set so that analytics can be performed.
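As a minimal illustration of that normalization step, the pandas sketch below aligns a cloud CRM extract with an on-premises ERP extract before combining them; the data, column names and join key are hypothetical, invented for the example.

```python
import pandas as pd

# Two hypothetical extracts: one from a cloud CRM, one from an on-premises ERP.
crm = pd.DataFrame({"account_name": [" Acme Corp", "Beta LLC"],
                    "mrr": [12000, 8000]})
erp = pd.DataFrame({"cust_name": ["ACME CORP", "beta llc"],
                    "open_orders": [14, 3]})

# Normalize: align column names and standardize the join key.
crm = crm.rename(columns={"account_name": "customer"})
erp = erp.rename(columns={"cust_name": "customer"})
for df in (crm, erp):
    df["customer"] = df["customer"].str.strip().str.upper()

# Combine into the single data set the analytics will run against.
combined = crm.merge(erp, on="customer", how="outer")
print(combined)
```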

Given the distributed nature of data sources as well as the diversity of data types, information platforms and integration approaches are changing. While more than three in five companies (61%) still do integration primarily between on-premises systems, significant percentages are now doing integration from the cloud to on-premises (47%) and from on-premises to the cloud (39%). In the future, this trend will become more pronounced. According to our research, 85 percent of companies eventually will integrate cloud data with on-premises sources, and 84 percent will do the reverse. We expect that hybrid architectures, a mix of on-premises and cloud data infrastructures, will prevail in enterprise information architectures for years to come, slowly evolving toward parity in bidirectional data transfer between the two environments.

Further analysis shows that a focus on integrating data for cloud analytics can give organizations competitive advantage. Those who said it is very important to integrate data for cloud-based analytics (42% of participants) also said they are very confident in their ability to use the cloud for analytics (35%); that’s three times more often than those who said integrating data is important (10%) or somewhat important (9%). Those saying that integration is very important also said more often that cloud-based analytics helps their customers, partners and employees in an array of ways, including improved presentation of data and analytics (62% vs. 43% of those who said integration is important or somewhat important), gaining access to many different data sources (57% vs. 49%) and improved data quality and data management (59% vs. 53%). These numbers indicate that organizations that neglect the integration aspects of cloud analytics are likely to be at a disadvantage compared to their peers that make it a priority.

Integration for cloud analytics is typically a manual task. In particular, almost half (49%) of organizations in the research use spreadsheets to manage the integration and preparation of cloud-based data. Yet doing so poses serious challenges: 58 percent of those using spreadsheets said it hampers their ability to manage processes efficiently. While traditional methods may suffice for integrating relatively small and well-defined data sets in an on-premises environment, they have limits when dealing with the scale and complexity of cloud-based data. The research also finds that organizations utilizing newer integration tools are satisfied with them more often than those using older tools. More than three-fourths (78%) of those using tools provided by a cloud applications provider said they are satisfied or somewhat satisfied with them, as are even more (86%) of those using data integration tools designed for cloud computing; by comparison, fewer of those using spreadsheets (56%) or traditional enterprise data integration tools (71%) are satisfied.

This is not surprising. Modern cloud connectors are designed to connect via loosely coupled interfaces that allow cloud systems to share data in a flexible manner. The research thus suggests that for organizations needing to integrate data from cloud-based data sources, switching to modern integration tools can streamline the process.
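To make the idea of a loosely coupled connector concrete, here is a sketch of pulling records from a paged REST endpoint; the URL, token and response shape are our assumptions for illustration, not any particular vendor's API.

```python
import requests

# Hypothetical endpoint, token and response shape -- not any vendor's real API.
BASE_URL = "https://api.example-cloud-app.com/v1"
TOKEN = "YOUR_API_TOKEN"

def fetch_records(resource: str, page_size: int = 100):
    """Page through a REST resource and yield records as plain dicts."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    params = {"limit": page_size, "offset": 0}
    while True:
        resp = requests.get(f"{BASE_URL}/{resource}",
                            headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("records", [])
        if not batch:
            break
        # Each record is self-describing JSON, so the consumer can evolve
        # independently of the producer -- the loose coupling noted above.
        yield from batch
        params["offset"] += page_size
```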

Overall, three-quarters of companies in our research said that it is important or very important to access data from cloud-based sources for analysis. Cloud-based analytics isn’t useful unless the right data can be fed into the analytic process, but without capable tools this is not easy to do. A substantial impediment is that analysts spend the majority of their time accessing and preparing data rather than doing actual analysis. Complicating the task, each data source can represent a different, possibly complex, data model. Furthermore, the data sets may have varying data formats and interface requirements, which are not easily addressed with legacy integration tools.

Such complexity is the new reality, and new tools and approaches have come to market to address these complexities. For organizations looking to integrate their data for cloud-based analytics, we recommend exploring these new integration processes and technologies.

Regards,

Ventana Research

Our recently completed benchmark research on data and analytics in the cloud shows that analytics deployed in cloud-based systems is gaining widespread adoption. Almost half (48%) of participating organizations are using cloud-based analytics, another 19 percent said they plan to begin using it within 12 months, and 31 percent said they will begin to use cloud-based analytics but do not know when. Participants in various areas of the organization said they use cloud-based analytics, but front-office functions such as marketing and sales rated it important more often than did finance, accounting and human resources. This front-office focus is underscored by the finding that the categories of information for which cloud-based analytics is most often deemed important are forecasting (mentioned by 51%) and customer-related (47%) and sales-related (33%) information.

The research also shows that while adoption is high, organizations face challenges as they seek to realize full value from their cloud-based data and analytics initiatives. Our Performance Index analysis reveals that only one in seven organizations reach the highest Innovative level of the four levels of performance in their use of cloud-based analytics. Of the four dimensions we use to further analyze performance, organizations do better in Technology and Process than in Information and People. That is, the tools and analytic processes used for data and analytics in the cloud have advanced more rapidly than users’ abilities to work with their information. The weaker performance in People and Information is reflected in findings on the most common barriers to deployment of cloud-based analytics: lack of confidence about the security of data and analytics, mentioned by 56 percent of organizations, and not enough skills to use cloud-based analytics (42%).

Given the top barrier of perceived data security issues, it is not surprising that the research finds the largest percentage of organizations (66%) use a private cloud, which by its nature ostensibly is more secure, to deploy analytics; fewer use a public cloud (38%) or a hybrid cloud (30%), although many use more than one type today. We know from tracking analytics and business intelligence software providers that operate in the public cloud that this is changing quite rapidly. Comparing deployment by industry sector, the research analysis shows that private and hybrid clouds are more prevalent in the regulated areas of finance, insurance and real estate and government than in services and manufacturing. The research suggests that private and hybrid cloud deployments are used more often for analytics where data privacy is a concern.

Furthermore, organizations said that access to data for analytics is easier with private and hybrid clouds (29% for public cloud vs. 58% for private cloud and 67% for hybrid cloud). In addition, organizations using private and hybrid cloud more often said they have improved communication and information sharing (56% public vs. 72% private and 70% hybrid). Thus, the research data makes clear that organizations feel more comfortable implementing analytics in a private or hybrid cloud in many areas.

Private and hybrid cloud implementations of data and analytics often coincide with large data integration efforts, which are necessary at some point to benefit from such deployments. Those who said that integration is very important also said more often than those giving it less importance that cloud-based analytics helps their customers, partners and employees in an array of ways, including improved presentation of data and analytics (62% vs. 43% of those who said integration is important or somewhat important), gaining access to many different data sources (57% vs. 49%) and improved data quality and data management (59% vs. 53%). We note that a focus on data integration correlates more with private and hybrid cloud approaches than with public cloud approaches, so the benefits cannot be attributed cleanly to either the type of cloud or the integration effort.

Another key insight from the research is that data and analytics often are considered in conjunction with mobile and collaboration initiatives, which carry different priorities for business than for IT or consumer markets. Nine out of 10 organizations said they use or intend to use collaboration technology to support their cloud-based data and analytics, and 83 percent said they need to support data access and analytics on mobile devices. Two-thirds said they support both tablets and smartphones and multiple mobile operating systems, the most important of which are Apple iOS (ranked first by 60%), Google Android (ranked first by 26%) and Microsoft Windows Mobile (ranked first by 13%). We note that Microsoft rates higher in importance here than its reported market share (approximately 2.5%) would suggest. Similarly, Google Android has greater penetration than Apple in the consumer market (51% vs. 41%). We expect that the influence of mobile operating systems on data and analytics in the cloud will continue to evolve, shaped by upcoming corporate technology refresh cycles, the consolidation of PCs and mobile devices, and the “bring your own device” (BYOD) trend.

The research finds that usability (63%) and reliability (57%) are the top technology buying criteria, which is consistent with our business technology innovation research conducted last year. What has changed is that manageability is cited as very important as often as functionality, by approximately half of respondents, a stronger showing than in our previous research. We think it likely that manageability is gaining prominence as cloud providers and organizations sort out who manages deployments, usage and licensing, and who actually owns your data in the cloud, which my colleague Robert Kugel has discussed.

As the research shows, the importance of cloud data and analytics is continuing to grow. This makes me eager to discuss further the attitudes, requirements and future plans of organizations that use data and analytics in the cloud and to identify the best practices of those that are most proficient in it. For more information on this topic, learn more about best practices for data and analytics in the cloud and download the executive summary of the report to improve your readiness.

Regards,

Ventana Research

Ventana Research recently completed the most comprehensive evaluation of analytics and business intelligence products and vendors available anywhere. As I discussed recently, such research is necessary and timely as analytics and business intelligence is now a fast-changing market. Our Value Index for Analytics and Business Intelligence in 2015 scrutinizes 15 top vendors and their product offerings in seven key categories: Usability, Manageability, Reliability, Capability, Adaptability, Vendor Validation and TCO/ROI. The analysis shows that the top supplier is Information Builders, which qualifies as a Hot vendor and is followed by 10 other Hot vendors: SAP, IBM, MicroStrategy, Oracle, SAS, Qlik, Actuate (now part of OpenText) and Pentaho.

The evaluations drew on our research and analysis of vendors and products along with their responses to our detailed RFI or questionnaire, our own hands-on experience and the buyer-related findings from our benchmark research on next-generation business intelligence, information optimization and big data analytics. The benchmark research examines analytics and business intelligence from various perspectives to determine organizations’ current and planned use of these technologies and the capabilities they require for successful deployments.

We find that the processes that comprise business intelligence today have expanded beyond standard query, reporting, analysis and publishing capabilities. They now include sourcing and integration of data and, at later stages, the use of analytics for planning and forecasting and of capabilities utilizing analytics and metrics for collaborative interaction and performance management. Our research on big data analytics finds that new technologies collectively known as big data are influencing the evolution of business intelligence as well; here in-memory systems (used by 50% of participating organizations), Hadoop (42%) and data warehouse appliances (33%) are the most important innovations. In-memory computing in particular has changed BI because it enables rapid processing of even complex models with very large data sets. In-memory computing also can change how users access data through data visualization and incorporate data mining, simulation and predictive analytics into business intelligence systems. Thus the ability of products to work with big data tools figured in our assessments.

In addition, the 2015 Value Index includes assessments of vendors’ self-service tools and cloud deployment options. New self-service approaches can enable business users to reduce their reliance on IT to access and use data and analysis. However, our information optimization research shows that this change is slow to proliferate. In four out of five organizations, IT currently is involved in making information available to end users and remains entrenched in the operations of business intelligence systems.

Similarly, our research, as well as the lack of maturity of the cloud-based products evaluated, shows that organizations are still in the early stages of cloud adoption for analytics and business intelligence; deployments are mostly departmental in scope. We are exploring these issues further in our benchmark research into data and analytics in the cloud, which will be released in the second quarter of 2015.

The products offered by the five top-rated companies in the Value Index provide exceptional functionality and a superior user experience. However, Information Builders stands out, providing an exceptional user experience and a completely integrated portfolio of data management, predictive analytics, visual discovery and operational intelligence capabilities in a single platform. SAP, in second place, is not far behind, having made significant progress by integrating its Lumira platform into its BusinessObjects Suite; it added predictive analytics capabilities, which led to higher Usability and Capability scores. IBM, MicroStrategy and Oracle, the next three, each provide a robust integrated platform of capabilities. The key differentiator between them and the top two is that they do not have superior scores in all of the seven categories.

In evaluating products for this Value Index we found some noteworthy innovations in business intelligence. One is Qlik Sense, which has a modern architecture that is cloud-ready and supports responsive design on mobile devices. Another is SAS Visual Analytics, which combines predictive analytics with visual discovery in ways that are a step ahead of others currently in the market. Pentaho’s Automated Data Refinery concept adds its unique Pentaho Data Integration platform to business intelligence for a flexible, well-managed user experience. IBM Watson Analytics uses advanced analytics and natural language processing for an interactive experience beyond the traditional paradigm of business intelligence. Tableau, which led the field in the category of Usability, continues to innovate in the area of user experience and aligning technology with people and process. MicroStrategy’s innovative Usher technology addresses the need for identity management and security, especially in an evolving era in which individuals utilize multiple devices to access information.

The Value Index analysis uncovered notable differences in how well products satisfy the business intelligence needs of employees working in a range of IT and business roles. Our analysis also found substantial variation in how products provide development, security and collaboration capabilities and role-based support for users. Thus, we caution that similar vendor scores should not be taken to imply that the packages evaluated are functionally identical or equally well suited for use by every organization or for a specific process.

To learn more about this research and to download a free executive summary, please visit our website.

Regards,

Ventana Research

Just a few years ago, the prevailing view in the software industry was that the category of business intelligence (BI) was mature and without room for innovation. Vendors competed in terms of feature parity and incremental advancements of their platforms. But since then business intelligence has grown to include analytics, data discovery tools and big data capabilities to process huge volumes and new types of data much faster. As is often the case with change, though, this one has created uncertainty. For example, only one in 11 participants in our benchmark research on big data analytics said that their organization fully agrees on the meaning of the term “big data analytics.”

There is little question that clear definitions of analytics and business intelligence as they are used in business today would be of value. But some IT analyst firms have tried to oversimplify the process of updating these definitions by merely combining a market basket of discovery capabilities under the label of analytics. In our estimation, this attempt is neither accurate nor useful. Discovery tools are only components of business intelligence, and their capabilities cannot accomplish all the tasks comprehensive BI systems can do. Some firms seem to want to reduce the field further by overemphasizing the visualization aspect of discovery. While visual discovery can help users solve basic business problems, other BI and analytic tools are available that can attack more sophisticated and technically challenging problems. In our view, visual discovery is one of four types of analytic discovery that can help organizations identify and understand the masses of data they accumulate today. But for many organizations visualization alone cannot provide them with the insights necessary to help make critical decisions, as interpreting the analysis requires expertise that mainstream business professionals lack.

In Ventana Research’s view, business intelligence is a technology managed by IT that is designed to produce information and reports from business data to inform business about the performance of activities, people and processes. It has provided and will continue to provide great value to business, but in itself basic BI will not meet the new generation of requirements that businesses face; they need not just information but guidance on how to take advantage of opportunities, address issues and mitigate the risks of subpar performance. Analytics is a component of BI that is applied to data to generate information, including metrics. It is a technology-based set of methodologies used by analysts as well as the information gained through the use of tools designed to help those professionals. These thoughtfully crafted definitions inform the evaluation criteria we apply in our new and comprehensive 2015 Analytics and Business Intelligence Value Index, which we will publish soon. As with all business tools, applications and systems we assess in this series of indexes, we evaluate the value of analytic and business intelligence tools in terms of five functional categories – usability, manageability, reliability, capability and adaptability – and two customer assurance categories – validation of the vendor and total cost of ownership and return on investment (TCO/ROI). We feature our findings in these seven areas of assessment in our Value Index research and reports. In the Analytics and Business Intelligence Value Index for 2015 we assess in depth the products of 15 of the leading vendors in today’s BI market.

The Capabilities category examines the breadth of functionality that products offer and assesses their ability to deliver the insights today’s enterprises need. For our analysis we divide this category into three subcategories for business intelligence: data, analytics and optimization. We explain each of them below.

The data subcategory of Capabilities examines data access and preparation along with supporting integration and modeling. New data sources are coming into being continually; for example, data now is generated in sensors in watches, smartphones, cars, airplanes, homes, utilities and an assortment of business, network, medical and military equipment. In addition, organizations increasingly are interested in behavioral and attitudinal data collected through various communication platforms. Examples include Web browser behavior, data mined from the Internet, social media and various survey and community polling data. The data access and integration process identifies each type of data, integrates it with all other relevant types, checks it all for quality issues, maps it back to the organization’s systems of record and master data, and manages its lineage. Master data management in particular, including newer approaches such as probabilistic matching, is a key component for creating a system that can combine data types across the organization and in the cloud to create a common organizational vernacular for the use of data.
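Probabilistic matching itself can be sketched in a few lines; the records, scoring function and acceptance threshold below are illustrative assumptions, not a production MDM algorithm.

```python
from difflib import SequenceMatcher

# Hypothetical master records and an incoming record to reconcile.
master = ["ACME CORPORATION", "GLOBEX INC", "INITECH LLC"]
incoming = "Acme Corp."

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.upper(), b.upper()).ratio()

# Score against every master record; accept the best candidate only
# above an arbitrary confidence threshold.
best = max(master, key=lambda m: similarity(m, incoming))
score = similarity(best, incoming)
if score >= 0.6:
    print(f"Matched '{incoming}' to '{best}' (score {score:.2f})")
else:
    print(f"No confident match for '{incoming}'")
```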

Ascertaining which systems must be accessed and how is a primary challenge for today’s business intelligence platforms. A key part of data access is the user interface. Whether it appears in an Internet browser, a laptop, a smartphone, a tablet or a wearable device, data must be presented in a manner optimized for the interface. Examining the user interface for business intelligence systems was a primary interest of our 2014 Mobile Business Intelligence Value Index. In that research, we learned that vendors are following divergent paths and that it may be hard for some to change course as they continue. Therefore, how a vendor manages mobile access and other new access methods affects its products’ value for particular organizations.

Once data is accessed, it must be modeled in a useful way. Data models in the form of OLAP cubes and predefined relationships of data sometimes grow overly complex, but there is value in premodeling data in ways that make sense to business people, most of whom are not up to modeling it for themselves. Defining data relationships and transforming data through complex manipulations is often needed, for instance, to define performance indicators that align with an organization’s business initiatives. These manipulations can include business rules or what-if analysis within the context of a model or external to it. Finally, models must be flexible so they do not hinder the work of organizational users. The value of premodeling data is that it provides a common view for business users so they need not redefine data relationships that have already been thoroughly considered.
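A toy example of what premodeling buys: the sketch below (hypothetical data and measure names) defines margin once as a central business rule, then runs a what-if manipulation against the same model.

```python
import pandas as pd

# Hypothetical fact data already conformed to a premodeled schema.
sales = pd.DataFrame({"region": ["East", "East", "West"],
                      "revenue": [100.0, 150.0, 200.0],
                      "cost": [80.0, 90.0, 170.0]})

def add_margin(df: pd.DataFrame) -> pd.DataFrame:
    """Business rule defined once, centrally: margin as a derived measure."""
    out = df.copy()
    out["margin_pct"] = (out["revenue"] - out["cost"]) / out["revenue"]
    return out

print(add_margin(sales).groupby("region")["margin_pct"].mean())

# What-if inside the same model: a hypothetical 5% cost reduction.
scenario = sales.assign(cost=sales["cost"] * 0.95)
print(add_margin(scenario).groupby("region")["margin_pct"].mean())
```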

The analytics subcategory includes analytic discovery, prediction and integration. Discovery and prediction roughly map to the ideas of exploratory and confirmatory analytics, which I have discussed. Analytic discovery includes calculation and visualization processes that enable users to move quickly and easily through data to create the types of information they need for business purposes. Complementing it is prediction, which typically follows discovery. Discovery facilitates root-cause and historical analysis, but to look ahead and make decisions that produce desired business outcomes, organizations need to track various metrics and make informed predictions. Analytic integration encompasses customization of both discovery and predictive analytics and embedding them in other systems such as applications and portals.
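The discovery-then-prediction sequence might look like this in miniature; the data is invented and the model deliberately simple.

```python
import numpy as np

# Invented series: marketing spend and the sales it may explain.
spend = np.array([10, 20, 30, 40, 50], dtype=float)
sales = np.array([120, 190, 310, 390, 510], dtype=float)

# Discovery (exploratory): a quick check of the candidate relationship.
print("correlation:", np.corrcoef(spend, sales)[0, 1])

# Prediction (confirmatory): fit a simple model, then score unseen input.
slope, intercept = np.polyfit(spend, sales, deg=1)
print("forecast at spend=60:", slope * 60.0 + intercept)
```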

The optimization subcategory includes collaboration, organizational management, information optimization, action and automation. Collaboration is a key consideration for today’s analytic platforms. It includes the ability to publish, share and coordinate various analytic and business intelligence functions. Notably, some recently developed collaboration platforms incorporate many of the characteristics of social platforms such as Facebook or LinkedIn. Organizational management attempts to manage to particular outcomes and sometimes provides performance indicators and scorecard frameworks. Action assesses how technology directly assists decision-making in an operational context. This includes gathering inputs and outputs for collaboration before and after a decision, predictive scoring that prescribes action and delivery of the information in the correct form to the decision-maker. Finally, automation fires alerts based on statistical triggers or rules and should be managed as part of a workflow. Agent technology takes automation to a level that is more proactive and autonomous.
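A statistical trigger of the kind described can be sketched briefly; the history, threshold and alert action below are illustrative assumptions.

```python
import statistics

# Hypothetical daily order counts; the last value is the one to test.
history = [102, 98, 105, 99, 101, 97, 100, 103]
latest = 62

# Statistical trigger: alert when the new value falls more than three
# standard deviations from the historical mean (threshold is arbitrary).
mean = statistics.mean(history)
stdev = statistics.stdev(history)
if abs(latest - mean) > 3 * stdev:
    print(f"ALERT: {latest} deviates from mean {mean:.1f} (sd {stdev:.1f})")

# A rule-based trigger would be a plain predicate (e.g., latest < 80)
# managed as one step in a larger workflow.
```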

This broad framework of data, analytics and optimization fits with a process orientation to business analytics that I have discussed. Our benchmark research on information optimization indicates that the people and process dimensions of performance are less well developed than the information and technology aspects, and thus a focus on these aspects of business intelligence and analytics will be beneficial.

In our view, it’s important to consider business intelligence software in a broad business context rather than in artificially separate categories that are designed for IT only. We advise organizations seeking to gain a competitive edge to adopt a multifaceted strategy that is business-driven, incorporates a complete view of BI and analytics, and uses the comprehensive evaluation criteria we apply.

Regards,

Ventana Research

Oracle is one of the world’s largest business intelligence and analytics software companies. Its products range from middleware, back-end databases and ETL tools to business intelligence applications and cloud platforms, and it is well established in many corporate and government accounts. A key to Oracle’s ongoing success is in transitioning its business intelligence and analytics portfolio to self-service, big data and cloud deployments. To that end, three areas in which the company has innovated are fast, scalable access for transaction data; exploratory data access for less structured data; and cloud-based business intelligence.

Providing users with access to structured data in an expedient and governed fashion continues to be a necessity for companies. Our benchmark research into information optimization finds drilling into information within applications (37%) and search (36%) to be the capabilities most needed for end users in business.

To provide them, Oracle enhanced its database in version Oracle 12c, which was released in 2013. The key innovation is to enable both transaction processing and analytic processing workloads on the same system. Using in-memory instruction sets on the processor, the system can run calculations quickly without changing the application data. The result is that end users can explore large amounts of information in the context of all data and applications running on the 12c platform. These applications include Oracle’s growing cadre of cloud-based applications. The value of this is evident in our big data analytics benchmark research, which finds that the number-one source of big data is transactional data from applications, mentioned by 60 percent of participants.

Search and interactive analysis of structured data are addressed by Oracle Business Intelligence Enterprise Edition (OBIEE) through a new visualization interface that applies assets Oracle acquired from Endeca in 2011. (Currently, this approach is available in Business Intelligence Cloud Service, which I discuss below.) To run fast queries of large data sets, columnar compression can be implemented by small code changes in the Oracle SQL Developer interface. These changes use the innovation in 12c discussed above and would be implemented by users familiar with SQL. Previously, IT professionals would have to spend significant time constructing aggregate data and tuning the database so users could quickly access data. Otherwise, transactional databases take a long time to query because they are row-oriented and the query literally must go through every row of data to return analytic results. With columnar compression, end users can explore and interact with data in a much faster, less limited fashion. With the new approach, users no longer need to walk down each hierarchy but can drag and drop or right-click to see the hierarchy definition. Drag-and-drop and brushing features enable exploration and uniform updates across all visualizations on the screen. Under the covers, the database is doing some heavy lifting, often joining five to 10 tables to compute the query in near real time. The ability to do correlations on large data sets in near real time is a critical enabler of data exploration since it allows questions to be asked and answered one after another rather than requiring users to predefine what those questions might be. This type of analytic discovery enables much faster time to value, especially when providing root-cause analysis for decision-making.
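The columnar advantage for aggregate queries can be seen even in a toy Python contrast; this illustrates the general principle only, not Oracle's dual-format implementation.

```python
import time

N = 200_000
# Row-oriented layout: one record per transaction row.
rows = [{"id": i, "region": i % 4, "amount": float(i % 100)} for i in range(N)]
# Columnar layout: the measure stored contiguously on its own.
amounts = [float(i % 100) for i in range(N)]

t0 = time.perf_counter()
total_rows = sum(r["amount"] for r in rows)   # must touch every whole row
t1 = time.perf_counter()
total_cols = sum(amounts)                     # touches only the one column
t2 = time.perf_counter()

print(f"row scan:    {t1 - t0:.4f}s  total={total_rows:.0f}")
print(f"column scan: {t2 - t1:.4f}s  total={total_cols:.0f}")
```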

Oracle also provides Big Data SQL, a query approach that enables analysis of unstructured data on systems such as Hadoop. The model uses what Oracle calls query franchising rather than query federation, in which processing is done in a native SQL dialect and the various dialects must be translated and combined into one. With franchising, Oracle SQL runs natively inside each of the systems. This approach applies Oracle SQL to big data systems and offloads queries to the compute nodes or storage servers of the big data system. It also maintains the security and speed needed to do exploration on less structured data sources such as JSON, which the 12c database supports natively. In this way Oracle provides security and manageability within the big data environment. Looking beyond structured data is key for organizations today. Our research shows that analyzing data from all sources is how three-fourths (76%) of organizations define big data analytics.
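Conceptually, the distinction might be sketched as follows; this is our toy illustration of pushing an identical predicate down to each source for native evaluation, not Oracle's actual mechanism.

```python
# Each "system" filters with the same predicate natively and returns only
# qualifying rows; the coordinator merely combines results.
def source_a_scan(predicate):
    data = [{"k": 1, "v": 10}, {"k": 2, "v": 20}]   # stands in for Hadoop
    return [r for r in data if predicate(r)]

def source_b_scan(predicate):
    data = [{"k": 3, "v": 30}, {"k": 4, "v": 5}]    # stands in for a database
    return [r for r in data if predicate(r)]

def keep(row):
    return row["v"] >= 10   # one query expression, evaluated at each source

print(source_a_scan(keep) + source_b_scan(keep))
```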

To visualize and explore big data, Oracle offers Big Data Discovery, which browses Hadoop and NoSQL stores, and samples and profiles data automatically to create catalogs. Users can explore important attributes through visualization as well as common search techniques. The system currently supports capabilities such as string transformations, variable grouping, geotagging and text enrichment that assist in data preparation. This is a good start for addressing exploration of big data sources, but to compete better in this space, Oracle should offer more usable interfaces and more capabilities for both data preparation and visualization. For example, visualizations such as decision trees and correlation matrices are important to help end users make sense of big data and do not appear to be included in the tool.

The third analytic focus, and the catalyst of the innovations discussed above, is Oracle’s move to the cloud. In September 2014, Oracle released BI Cloud Service (BICS), which helps business users access Oracle BI systems in a self-service manner with limited help from IT. Cloud computing has been a major priority for Oracle in the past few years, not just for its applications but for its entire stack of technology. With BICS, Oracle offers a stand-alone product with which a departmental workgroup can insert analytics directly into its cloud applications. When BICS is coupled with the Data-as-a-Service (DaaS) offering, which accesses internal data as well as third-party data sources in the cloud, Oracle is able to deliver cross-channel analysis and identity-as-data. Cross-channel analysis and identity management are important in cloud analytics from both business and privacy and security perspectives.

In particular, such tools can help tie together and thus simplify the complex task of managing multichannel marketing. Availability and simplicity in analytics tools are priorities for marketing organizations. Our research into next-generation customer analytics shows that for most organizations data not being readily available (63%) and difficulty in maintaining customer analytics systems (56%) are top challenges.

Oracle is not the first vendor to offer self-service discovery and flexible data preparation, but BICS begins its movement from the previous generation of BI technology to the next. BICS puts Oracle Transactional Business Intelligence (OTBI) in the cloud as a first step toward integration with vertical applications in the lines of business. It lays the groundwork for cross-functional analysis in the cloud.

We don’t expect BICS to compete immediately with more user-friendly analytic tools designed for business users or with well-established cloud computing BI players. Designers still must be trained in Oracle tools, and for this reason it appears that the tool, at least in its first iteration, is targeted only at Oracle’s OBIEE customers seeking a departmental solution that limits IT involvement. Oracle should continue to address usability for both end users and designers. BICS also should connect to more data sources, including Oracle Essbase. It currently comes bundled with Oracle Database Schema Service, which acts as the sole data source but does not directly connect with any other database. Furthermore, data movement is not streamlined in the first iteration, and replication of data is often necessary.

Overall, Oracle’s moves in business intelligence and analytics make sense because they use the same semantic models in the cloud as the analytic applications that many very large companies use today and won’t abandon soon. Furthermore, given Oracle’s growing portfolio of cloud applications and the integration of analytics into these transactional applications through OTBI, Oracle can leverage cloud application differentiation for companies not using Oracle. If Oracle can align its self-service discovery and big data tools with its current portfolio in reasonably timely fashion, current customers will not turn away from their Oracle investments. In particular, those with an Oracle-centric cloud roadmap will have no reason to switch. We note that the market for cloud-based business intelligence and analytics applications is still developing. Our previous research showed that business intelligence had been a laggard in the cloud in comparison to genres such as human capital management, marketing, sales and customer service. We are examining trends in our forthcoming data and analytics in the cloud benchmark research, which will evaluate both the current state of such software and where the industry likely is heading in 2015 and beyond. For organizations shifting to cloud platforms, Oracle has a very progressive cloud computing portfolio, which my colleague has assessed, and it has created a path by investing in its Platform-as-a-Service (PaaS) and DaaS offerings. Its goal is to provide uniform capabilities across mobility, collaboration, big data and analytics so that all Oracle applications are consistent for users and can be extended easily by developers. However, Oracle competes against many cloud computing heavyweights such as Amazon Web Services, IBM and Microsoft, so achieving success through significant growth has some challenges. Oracle customers generally and OBIEE customers especially should investigate the new innovations in the context of their own roadmaps for big data analytics, cloud computing and self-service access to analytics.

Regards,

Ventana Research

Our benchmark research into business technology innovation shows that analytics ranks first or second as a business technology innovation priority in 59 percent of organizations. Businesses are moving budgets and responsibilities for analytics closer to the sales operations, often in the form of so-called shadow IT organizations that report into decentralized and autonomous business units rather than a central IT organization. New technologies such as in-memory systems (50%), Hadoop (42%) and data warehouse appliances (33%) are top back-end technologies being used to acquire a new generation of analytic capabilities. They are enabling new possibilities including self-service analytics, mobile access, more collaborative interaction and real-time analytics. In 2014, Ventana Research helped lead the discussion around topics such as information optimization, data preparation, big data analytics and mobile business intelligence. In 2015, we will continue to cover these topics while adding new areas of innovation as they emerge.

Three key topics lead our 2015 business analytics research agenda. The first focuses on cloud-based analytics. In our benchmark research on information optimization, nearly all (97%) organizations said it is important or very important to simplify information access for both their business and their customers. Part of the challenge in optimizing an organization’s use of information is to integrate and analyze data that originates in the cloud or has been moved there. This issue has important implications for information presentation, where analytics are executed and whether business intelligence will continue to move to the cloud in more than a piecemeal fashion. We are currently exploring these topics in our new benchmark research called analytics and data in the cloud. Coupled with the issue of cloud use is the proliferation of embedded analytics and the imperative for organizations to provide scalable analytics within the workflow of applications. A key question we’ll try to answer this year is whether companies that have focused primarily on operational cloud applications at the expense of developing their analytics portfolio, or those that have focused more on analytics, will gain a competitive advantage.

The second research agenda item is advanced analytics. It may be useful to divide this category into machine learning and predictive analytics, which I have discussed and covered in our benchmark research on big data analytics. Predictive analytics has long been available in some sectors of the business world, and two-thirds (68%) of the organizations in our research that use it said it provides a competitive advantage. Programming languages such as R, the use of Predictive Model Markup Language (PMML), inclusion of social media data in prediction, massive-scale simulation, and right-time integration of scoring at the point of decision-making are all important advances in this area. Machine learning also has been around for a long time, but it wasn’t until the instrumentation of big data sources and advances in technology that it made sense to use it in more than academic environments. At the same time, the technology landscape is evolving, getting more fragmented and complex; to simplify it, software designers will need innovative uses of machine learning to mask the underlying complexity through layers of abstraction. A technology such as Spark, out of the AMPLab at Berkeley, is still immature, but it promises to enable increasing uses of machine learning on big data. Areas such as sourcing data and preparing data for analysis must be simplified so analysts are not overwhelmed by big data.
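Right-time scoring at the point of decision might look like this in miniature, here with scikit-learn and invented churn data; the features, labels and the incoming record are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented churn training data: [tenure_months, support_tickets].
X = np.array([[1, 5], [3, 4], [24, 0], [36, 1], [2, 6], [30, 0]])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = churned

model = LogisticRegression().fit(X, y)

# Right-time scoring at the point of decision: a live customer record
# arrives and receives a churn probability before the next interaction.
live_customer = np.array([[4, 3]])
print("churn probability:", model.predict_proba(live_customer)[0, 1])
```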

Our third area of focus is the user experience in business intelligence tools. Simplification and optimization of information in a context-sensitive manner are paramount. An intuitive user experience can advance the people and process dimensions of business, which have lagged technology innovation according to our research in multiple areas. New approaches coming from business end users, especially the tech-savvy millennial generation, are pushing the envelope here. In particular, mobility and collaboration are enabling new user experiences in both business organizations and society at large. Adding to this is data collected in more forms, such as location data (an area we have researched), individual and societal relationships, and information about popular brands. How business intelligence tools incorporate such information and make it easy to prepare, design and consume for different organizational personas is not just an agenda focus but also one focus of our 2015 Analytics and Business Intelligence Value Index, to be published in the first quarter of the year.

This shapes up as an exciting year. I welcome any feedback you have on this research agenda and look forward to providing research, collaboration and education in 2015.

Regards,

Ventana Research

In 2014, IBM announced Watson Analytics, which uses machine learning and natural language processing to unify and simplify the user experience in each step of the analytic process: data acquisition, data preparation, analysis, dashboarding and storytelling. After a relatively short beta-testing period involving more than 22,000 users, IBM released Watson Analytics for general availability in December. There are two editions: the “freemium” trial version allows 500MB of data storage and files of fewer than 100,000 rows and 50 columns; the personal edition is a monthly subscription that enables larger files and more storage.

Its initial release includes functions to explore, predict and assemble data. Many of the features are based on IBM’s SPSS Analytic Catalyst, which I wrote about and which won the 2013 Ventana Research Technology Innovation Award for business analytics. Once data is uploaded, the explore function enables users to analyze data in an iterative fashion using natural language processing and simple point-and-click actions. Algorithms decide the best fit for graphics based on the data, but users may choose other graphics as needed. An “insight bar” shows other relevant data that may contain insights such as potential market opportunities.

The ability to explore data through visualizations with minimal knowledge is a primary aim of modern analytics tools. With the explore function incorporating natural language processing, which other tools in the market lack, IBM makes analytics accessible to users without the need to drag and drop dimensions and measures across the screen. This feature should not be underestimated; usability is the buying criterion for analytics tools most widely cited in our benchmark research on next-generation business intelligence (by 63% of organizations).

The predict capability of Watson Analytics focuses on driver analysis, which is useful in a variety of circumstances such as sales win/loss, market lift, operations and churn analysis. In its simplest form, a driver analysis aims to understand causes and effects among multiple variables. This is a complex process that most organizations leave to their resident statistician or outsource to a professional analyst. By examining the underlying data characteristics, the predict function can address data sets, including what may be considered big data, with an appropriate algorithm. The benefit for nontechnical users is that Watson Analytics makes the decision on selecting the algorithm and presents results in a relatively nontechnical manner such as spiral diagrams or tree diagrams. Having absorbed the top-level information, users can drill down into the top key drivers. This ability enables users to see relative attribute influences and interactions between attributes. Understanding interaction is an important part of driver analysis since causal variables often move together (a challenge known as multicollinearity) and it is sometimes hard to distinguish what is actually causing a particular outcome. For instance, analysis may blame the customer service department for a product defect and point to it as the primary driver of customer defection. Accepting this result, a company may mistakenly try to fix customer service when a product issue needs to be addressed. This approach also overcomes the challenge of Simpson’s paradox, in which a trend that appears in different groups of data disappears or reverses when the groups are combined, a hindrance for some visualization tools in the market.
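Simpson’s paradox is easy to reproduce. The following minimal sketch in Python, with invented counts, shows defection rates that rise with poor service within each product line yet appear to reverse in the aggregate, because good service happens to be concentrated on a troubled product; an analysis that examines only the combined data would draw the wrong conclusion.

```python
import pandas as pd

# Hypothetical counts: poor service raises defection within *each*
# product line, but good service is concentrated on the troubled
# product B, so the aggregate view reverses the conclusion.
df = pd.DataFrame({
    "product":   ["A", "A", "B", "B"],
    "service":   ["good", "poor", "good", "poor"],
    "customers": [100, 900, 900, 100],
    "defected":  [10, 135, 360, 45],
})

# Within each product, defection is lower with good service.
df["rate"] = df["defected"] / df["customers"]
print(df.pivot(index="product", columns="service", values="rate"))

# Aggregated across products, good service *appears* worse (0.37 vs 0.18).
agg = df.groupby("service")[["defected", "customers"]].sum()
print(agg["defected"] / agg["customers"])
```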

Once users have analyzed the data sufficiently and want to create and share their analysis, the assemble function enables them to bring together various dashboard visualizations on a single screen. Currently, Watson Analytics does such sharing (as well as comments related to the visualizations) via email. In the future, it would be good to see capabilities such as annotation and cloud-based sharing in the product.

Full data preparation capabilities are not yet integrated into Watson Analytics. Currently, it includes a data quality report that gives confidence levels for the current data based on its cleanliness, and basic sort, transform and relabeling functions are incorporated as well. I assume that IBM has much more in the works here. For instance, its DataWorks cloud service offers APIs for some of the best data preparation and master data management available today. DataWorks can mask data at the source and do probabilistic matching across many sources, both cloud and on-premises. This addresses a major challenge organizations face when they need to conduct analytics across many data sets. For instance, in multichannel marketing, each individual customer may have many email addresses as well as different mailing addresses, phone numbers and identifiers for social media. A so-called “golden record” needs to be created so all such information can be linked together. Conceptually, the data becomes one long row related to that golden record, rather than multiple unassociated rows of shorter length. This data needs to be brought into a company’s own internal systems, and personally identifiable information must be stripped out before anything moves into a public domain. In a probabilistic matching system, data is matched not on a single field but through associations across fields, which yield levels of certainty that records should be merged. This differs from past approaches and is one of the reasons for significant innovation in the category. Multiple startups have been entering the data preparation space to address the need for a better user experience, a need documented as one of the foundational issues facing the world of big data. Our benchmark research into information optimization shows that data preparation (47%) and quality and consistency (45%) are the most time-consuming tasks for organizations in analytics.
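For illustration, here is a minimal sketch of probabilistic matching in Python; the records, field weights and merge threshold are hypothetical, and commercial systems such as DataWorks use far more sophisticated comparators and calibration.

```python
# Minimal sketch of probabilistic matching: no single field matches
# exactly, but weighted evidence across fields pushes confidence above
# a (tunable, here invented) merge threshold.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def digits(p: str) -> str:
    # Normalize phone numbers before comparing.
    return "".join(ch for ch in p if ch.isdigit())

WEIGHTS = {"name": 0.4, "email": 0.35, "phone": 0.25}
MERGE_THRESHOLD = 0.8  # real systems calibrate this against known matches

rec1 = {"name": "Jonathan Q. Smith", "email": "jsmith@example.com",
        "phone": "555-0134"}
rec2 = {"name": "Jon Smith", "email": "jon.smith@example.com",
        "phone": "(555) 0134"}

score = (WEIGHTS["name"] * similarity(rec1["name"], rec2["name"])
         + WEIGHTS["email"] * similarity(rec1["email"], rec2["email"])
         + WEIGHTS["phone"] * similarity(digits(rec1["phone"]),
                                         digits(rec2["phone"])))
print(f"match confidence: {score:.2f}")
if score >= MERGE_THRESHOLD:
    print("link both records under one golden record")
```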

Watson Analytics is deployed on IBM’s SoftLayer cloud technology and is part of a push to move the company’s analytics portfolio into the cloud. Early in 2015 the company plans to move its SPSS and Cognos products into the cloud via a managed service, thus offloading tasks such as setup, maintenance and disaster recovery management. Watson Analytics will be offered as a set of APIs, much as the broader Watson cognitive computing platform has been. Last year, IBM said it would move almost all of its software portfolio to the cloud via its Bluemix service platform. These cloud efforts, coupled with the company’s substantial investment in partner programs with developers and universities around the world, suggest that Watson may power many next-generation cognitive computing applications, a market estimated to grow into the tens of billions of dollars in the next several years.

Overall, I expect Watson Analytics to gain more attention and adoption in 2015 and beyond. Its design philosophy and user experience are innovative, but work must be done in some areas to make it a tool that professionals use in their daily work. Given the resources IBM is putting into the product and the massive amounts of product feedback it is receiving, I expect initial release issues to be worked out quickly through the continuous release cycle. Once they are, Watson Analytics will raise the bar on self-service analytics.

Regards,

Ventana Research

At a conference of more than 3,500 users, Splunk executives showed off their company’s latest tools. Splunk makes software for discovering, monitoring and analyzing machine data, which is often considered data exhaust since it is a by-product of computing processes and applications. But machine data is essential to a smoothly running technology infrastructure that supports business processes. One advantage is that because machine data is not recorded by end users, it is less subject to input error. Splunk has grown rapidly by solving fundamental problems associated with the complexities of information technology and by challenging assumptions in IT systems and network management, an approach increasingly referred to as big data analytics. The two main and related assumptions it challenges are that different types of IT systems should be managed separately and that data should be modeled prior to recording it. Clint Sharp, Splunk’s director of product marketing, pointed out that network and system data can come from several sources and argued that point-solution tools and a “model first” approach do not work for big data and a question-and-answer paradigm. Our research into operational intelligence finds that IT systems are the most important information source in almost two-thirds (62%) of organizations. Splunk used the conference to show how it is combining these data management innovations with the business trends of mobility, cloud deployment and security.
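The contrast with a “model first” approach can be sketched in a few lines of Python: raw events are kept verbatim, and fields are extracted only at read time, so a new question never requires re-ingesting the data. The log format and field names below are hypothetical.

```python
# Minimal sketch of "index first, model at read time": raw lines are
# stored as-is; a schema is applied only when a question is asked.
import re

raw_events = [
    "2015-01-12T08:31:02 host=web01 status=500 latency_ms=820",
    "2015-01-12T08:31:04 host=web02 status=200 latency_ms=45",
    "2015-01-12T08:31:07 host=web01 status=500 latency_ms=910",
]

# Schema-on-read: the pattern is applied at query time and can change
# per question without touching the stored data.
pattern = re.compile(r"host=(?P<host>\S+) status=(?P<status>\d+)")

errors_by_host = {}
for line in raw_events:
    m = pattern.search(line)
    if m and m.group("status") == "500":
        host = m.group("host")
        errors_by_host[host] = errors_by_host.get(host, 0) + 1
print(errors_by_host)  # {'web01': 2}
```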

Presenters from major customer companies demonstrated how they work with Splunk Enterprise. For example, according to Michael Connor, senior platform architect for Coca-Cola, bringing all the company’s data into Splunk allowed the IT department to reduce trouble tickets by 80 percent and operating costs by 40 percent. Beyond asserting the core value of streamlining IT operations and the ability to quickly provision system resources, Connor discussed other uses for data derived from the Splunk product. Coca-Cola IT used a free community add-on to deliver easy-to-use dashboards for the security team. He showed how channel managers compare different vending environments in ways they had never done before. They also can conduct online ethnographic studies to better understand behavior patterns and serve different groups. For Coca-Cola, the key to the application’s success was bringing data from various platforms in the organization into one data platform. This challenge, he said, has more to do with people and processes than technology, since many parts of an organization are protective of their data, in effect forming what he called “data cartels.” This situation is not uncommon. Our research into information optimization shows that organizations need these so-called softer disciplines to catch up with their capabilities in technology and information to realize full value from data and analytics initiatives.

In keeping up with trends, Splunk is making advances in mobility. One is MINT, for monitoring mobile devices. With the company’s acquisition of BugSense as a foundation, Splunk has built an extension of its core platform that consumes and indexes application and other machine data from mobile devices. The company is offering the MINT Express version to developers so they can build the operational service into their applications. Similar to the core product, MINT can track transactions, network latency and crashes throughout the IT stack. It can help application developers quickly solve user experience issues by understanding root causes and determining responsibility. For instance, MINT Express can answer questions such as these: Is it an application issue or a carrier issue? Is it a bad feature or a system problem? Applied well, it gives end-user customers a better digital experience, which results in more time spent with the application and increased customer loyalty in a mobile environment where the cost of switching is low. Splunk also offers MINT Enterprise, which allows users to link and cross-reference data in Splunk Enterprise. The ability to instrument data in a mobile environment, relate it to enterprise data and display key operational variables is critical to serving and satisfying consumers. By extending this capability into the mobile sphere, Splunk MINT delivers value for corporate IT operations as well as the new breed of cloud software providers. However, Splunk risks stepping on its partners’ toes as it takes advantage of opportunities such as mobility. In my estimation, the risk is worth taking given that mobility is a systemic change that represents enormous opportunity. Our research into business technology innovation shows mobility in a virtual tie with collaboration as the second-most important innovation priority for companies today.

Cloud computing is another major shift that the company is prioritizing. Praveen Rangnath, director of cloud product marketing, said that Splunk Cloud enables the company to deliver 100 percent on service level agreements through failover across AWS availability zones, redundant operations across indexers and search heads, and the use of Splunk on Splunk itself. Perhaps the most important capability of the cloud product is its integration of enterprise and on-demand systems, which allows a single view of, and queries across, multiple data sources no matter where they physically reside. Coupled with Splunk’s ability to ingest data from NoSQL systems such as MongoDB, Cassandra and Accumulo, from Amazon Elastic MapReduce and S3, and even from mainframes via Ironstream, its hybrid search capability is unique. The company’s significant push into the cloud is reflected in both a 33 percent price reduction and its continued investment in the platform. According to our research into information optimization, one of the biggest challenges with big data is simplification of data access; as data sources increase, easy access becomes more important. More than 92 percent of organizations that have 16 to 20 data sources rated information simplification very important. As data proliferates both on-premises and in the cloud, Splunk’s software abstracts users from the technical complexities of integrating and accessing the hybrid environment. (Exploring this and related issues, our upcoming benchmark research into data and analytics in the cloud will examine trends in business intelligence and analytics related to cloud computing.)

Usability is another key consideration: In our research on next-generation business intelligence, nearly two-thirds (63%) of organizations said it is an important evaluation criterion, more than any other. At the user conference, Divanny Lamas, senior manager of product management, discussed new features aimed at less sophisticated Splunk users. The Advanced Field Extractor enables users to extract fields in a streamlined fashion without writing regular expressions. Instant Pivot provides easy access to a library of panels and dashboards that lets end users pivot and visually explore data. Event Pattern Detection clusters patterns in the data, making product usage metrics and issues that impact downtime easier to investigate. Each of these advances represents progress in broader usability and organizational appeal. While Splunk continues to make its data accessible to business users, gaining broader adoption is still an uphill battle because much of Splunk’s data is technical in nature. The current capabilities address the technologically sophisticated knowledge worker or the data analyst, while a library of plug-ins allows more line-of-business end users to perform visualization. (For more on the analytic user personas that matter in the organization and what they need to be successful, please see my analysis.)

Splunk is building an impressive platform for collecting and analyzing data across the organization. The question from the business analytics perspective is whether the data can be modeled in ways that easily represent each organization’s unique business challenges. Splunk provides search capabilities for IT data by default, but when other data sources need to be brought in for more advanced reporting and correlation, the data must be normalized, categorized and parsed. Currently, business users apply various data models and frameworks from major IT vendors as well as various agencies and data brokers. This dispersion could provide an opportunity for Splunk to provide a unified platform; the more data businesses ingest, the more likely they are to rely on such a platform. Splunk’s Common Information Model provides a metadata framework using key-value pair representation, similar to what other providers of cloud analytic applications are doing. When we consider the programmable nature of the platform, including RESTful APIs and various SDKs, Hunk’s streamlined access to Hadoop and other NoSQL sources, Splunk DB Connect for relational sources, the Splunk Cloud hybrid access model and the instrumentation of mobile data in MINT, the expansive platform idea seems plausible.
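As a small illustration of that programmability, the following Python sketch submits a search through Splunk’s REST API; it assumes a reachable Splunk management port, valid credentials and a hypothetical index, so treat it as the shape of the interaction rather than production code.

```python
# Minimal sketch: run a Splunk search over the REST API and stream
# results. Host, credentials, index and field names are placeholders.
import requests

SPLUNK = "https://localhost:8089"   # default management port
AUTH = ("admin", "changeme")        # placeholder credentials

resp = requests.post(
    f"{SPLUNK}/services/search/jobs/export",
    auth=AUTH,
    data={
        "search": "search index=web_logs status=500 | stats count by host",
        "output_mode": "json",
    },
    stream=True,
    verify=False,  # self-signed certificates are common on dev instances
)
for line in resp.iter_lines():
    if line:
        print(line.decode())
```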

A complicating factor in whether Splunk will become such a platform for operational intelligence and big data analytics is the Internet of Things (IoT), which collects data from various devices. Massive amounts of sensor data already are moving through the Internet, but IoT approaches and service architectures are works in progress, and many of these architectures do not yet communicate with one another. Given Splunk’s focus on machine data, a key type of input for big data analytics in 42 percent of organizations according to our research, IoT appears to be a natural fit, as it generates event-centered data, a type of input for big data analytics in 48 percent of organizations. There is some debate about whether Splunk is a true event processing engine, but that depends on how the category is defined. Log messages, its specialty, are not events per se but rather data related to something that has happened in an IT infrastructure. Once correlated, this data points directly to something of significance, including events that can be acted upon. If such a correlation triggers a system action, and that action is taken in time to solve the problem, then the data provides value, and it should not matter whether the system is acting in real time or near real time. In this way, the data itself is Splunk’s advantage. To be successful in becoming a broader data platform, the company will need to advance its Common Information Model, continue to emphasize the unique value of machine data, build its developer and partner ecosystem, and encourage customers to push the envelope and develop new use cases.
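The correlate-then-act pattern described above can be sketched simply: count error events per host in a sliding window and trigger an action when a threshold is crossed. The window, threshold and alert action below are hypothetical stand-ins for a real monitoring pipeline.

```python
# Minimal sketch of correlate-then-act: near-real-time is enough if
# the action lands before the problem compounds.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 3
recent_errors = defaultdict(deque)  # host -> timestamps of recent errors

def on_event(host: str, status: int, now: float) -> None:
    if status < 500:
        return
    q = recent_errors[host]
    q.append(now)
    # Drop timestamps that have aged out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= THRESHOLD:
        print(f"ALERT: {len(q)} errors from {host} in {WINDOW_SECONDS}s")

t0 = time.time()
events = [("web01", 500), ("web01", 503), ("web02", 200), ("web01", 500)]
for i, (host, status) in enumerate(events):
    on_event(host, status, t0 + i)
```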

For organizations considering Splunk for the first time, IT operations, developer operations, security, fraud management and compliance management are obvious areas to evaluate. Splunk’s core value is that it simplifies administration, reduces IT costs and can reduce risk through pattern recognition and anomaly detection. Each of these areas can deliver value immediately. For those with a current Splunk implementation, we suggest examining use cases related to business analytics. Specifically, comparative analysis, root-cause analysis, online ethnography and feature optimization in the context of the user experience can all deliver value. As ever more data comes into their systems, companies also may find it reasonable to consider Splunk for broader uses such as big data analytics and operational intelligence.

Regards,

Ventana Research
