One of the key findings in our latest benchmark research into predictive analytics is that companies are incorporating predictive analytics into their operational systems more often than was the case three years ago. The research found that companies are less inclined to purchase stand-alone predictive analytics tools (29% vs 44% three years ago) and more inclined to purchase predictive analytics built into business intelligence systems (23% vs 20%), applications (12% vs 8%), databases (9% vs 7%) and middleware (9% vs 2%). This trend is not surprising since operationalizing predictive analytics – that is, building predictive analytics directly into business process workflows – improves companies’ ability to gain competitive advantage: those that deploy predictive analytics within business processes are more likely to say they gain competitive advantage and improve revenue through predictive analytics than those that don’t.

In order to understand the shift that is underway, it is important to understand how predictive analytics has historically been executed within organizations. The marketing organization provides a useful example since it is the functional area where organizations most often deploy predictive analytics today. In a typical organization, those doing statistical analysis export data from various sources into a flat file. (Often IT is responsible for pulling the data from the relational databases and passing it to the statistician in flat file format.) The data is cleansed, transformed and merged so that the analytic data set is in a normalized format. It is then modeled with stand-alone tools, and the model is applied to records to yield probability scores. In the case of a churn model, such a probability score represents how likely someone is to defect. For a marketing campaign, a probability score tells the marketer how likely someone is to respond to an offer. These scores are produced for marketers on a periodic basis – usually monthly. Marketers then work on the campaigns informed by these static models and scores until the cycle repeats itself.
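To make that traditional batch cycle concrete, here is a minimal sketch in Python. It assumes a hypothetical flat-file export with made-up column names ("customer_id", "tenure_months", "monthly_spend", "support_calls", "churned"); it trains a simple churn model and writes the kind of monthly probability scores described above. It is an illustration of the workflow, not any particular vendor's tooling.

```python
# Minimal sketch of the traditional batch scoring cycle described above.
# Assumes a hypothetical CSV export ("customers.csv") with made-up columns;
# a real deployment would use the organization's own data and features.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Load the flat file that IT exported from the relational sources.
data = pd.read_csv("customers.csv")

# 2. Cleanse/prepare: drop incomplete rows and pick illustrative features.
data = data.dropna(subset=["tenure_months", "monthly_spend", "support_calls", "churned"])
features = ["tenure_months", "monthly_spend", "support_calls"]

# 3. Fit a simple churn model on historical outcomes.
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["churned"], test_size=0.3, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 4. Score every customer with a churn probability and hand the file to marketing.
data["churn_score"] = model.predict_proba(data[features])[:, 1]
data[["customer_id", "churn_score"]].to_csv("monthly_churn_scores.csv", index=False)
# (In practice the model could be persisted for reuse, e.g.
#  joblib.dump(model, "churn_model.joblib").)
```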

The challenge presented by this traditional model is that a lot can happen in a month and the heavy reliance on process and people can hinder the organization’s ability to respond quickly to opportunities and threats. This is particularly true in fast-moving consumer categories such as telecommunications or retail. For instance, if a person visits the company’s cancelation policy web page the instant before he or she picks up the phone to cancel the contract, this customer’s churn score will change dramatically and the action that the call center agent should take will need to change as well. Perhaps, for example, that score change should mean that the person is now routed directly to an agent trained to deal with possible defections. But such operational integration requires that the analytic software be integrated with the call agent software and web tracking software in near-real time.
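As a rough illustration of that kind of operational integration, the sketch below exposes a churn model like the one in the earlier example as a small web service that a web-tracking or call-routing system could call in near real time. The endpoint name, payload fields, model file and the 0.8 routing threshold are all assumptions for illustration, not any vendor's actual API.

```python
# Minimal sketch of operationalizing a churn score as a service.
# The model file, field names and routing threshold are illustrative assumptions.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("churn_model.joblib")  # model saved from the batch sketch
FEATURES = ["tenure_months", "monthly_spend", "support_calls"]

@app.route("/churn-score", methods=["POST"])
def churn_score():
    # e.g. posted by web tracking when the cancellation policy page is viewed
    event = request.get_json()
    row = [[event[f] for f in FEATURES]]
    score = float(model.predict_proba(row)[0][1])
    # Route likely defectors to a retention-trained agent instead of the general queue.
    action = "route_to_retention_agent" if score > 0.8 else "standard_queue"
    return jsonify({"churn_score": score, "next_best_action": action})

if __name__ == "__main__":
    app.run(port=5000)
```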

Similarly, the models themselves need to be constantly updated to deal with the fast pace of change. For instance, if a telecommunications carrier competitor offers a large rebate to customers to switch service providers, an organization’s churn model can be rendered out of date and should be updated. Our research shows that organizations that constantly update their models gain competitive advantage more often than those that only update them periodically (86% vs 60% average), more often show significant improvement in organizational activities and processes (73% vs 44%), and are more often very satisfied with their predictive analytics (57% vs 23%).

Building predictive analytics into business processes is more easily discussed than done; complex business and technical challenges must be addressed. The skills gap that I recently wrote about is a significant barrier to implementing predictive analytics. Making predictive analytics operational requires not only statistical and business skills but technical skills as well. From a technical perspective, one of the biggest challenges for operationalizing predictive analytics is accessing and preparing data, which I wrote about. Four out of ten companies say that this is the part of the predictive analytics process where they spend the most time. Choosing the right software is another challenge that I wrote about. Making that choice includes identifying the specific integration points with business intelligence systems, applications, database systems and middleware. These decisions will depend on how people use the various systems and what areas of the organization are looking to operationalize predictive analytics processes.

For those willing to take on the challenges of operationalizing predictive analytics, the rewards can be significant, including better competitive positioning and new revenue opportunities. Furthermore, once predictive analytics is initially deployed in the organization it snowballs, with more than nine in ten companies going on to increase their use of predictive analytics. Once companies reach that stage, one-third of them (32%) say predictive analytics has had a transformational impact and another half (49%) say it provides significant positive benefits.

Regards,

Ventana Research

Our benchmark research into predictive analytics shows that lack of resources, including budget and skills, is the number-one business barrier to the effective deployment and use of predictive analytics; awareness – that is, an understanding of how to apply predictive analytics to business problems – is second. In order to secure resources and address awareness problems, a business case needs to be created and communicated clearly wherever appropriate across the organization. A business case presents the reasoning for initiating a project or task. A compelling business case communicates the nature of the proposed project and the arguments, both quantifiable and unquantifiable, for its deployment.

The first steps in creating a business case for predictive analytics are to understand the audience and to communicate with the experts who will be involved in leading the project. Predictive analytics can be transformational in nature, and therefore the audience potentially is broad, including many disciplines within the organization. Understand who should be involved in creating the business case, a list that may include business users, analytics users and IT. Those most often responsible for designing and deploying predictive analytics are data scientists (in 31% of organizations), the business intelligence and data warehouse team (27%), those working in general IT (16%) and line-of-business analysts (13%), so be sure to involve these groups. Understand the specific value and challenges for each of the constituencies so the business case can represent the interests of these key stakeholders. I discuss the aspects of the business where these groups will see predictive analytics most adding value here and here.

For the business case for a predictive analytics deployment to be persuasive, executives also must understand how specifically the deployment will impact their areas of responsibility and what the return on investment will be. For these stakeholders, the argument should be multifaceted. At a high level, the business case should explain why predictive analytics is important and how it fits with and enhances the organization’s overall business plan. Industry benchmark research and relevant case studies can be used to paint a picture of what predictive analytics can do for marketing (48%), operations (44%) and IT (40%), the functions where predictive analytics is used most.

A business case should show how predictive analytics relates to other relevant innovation and analytic initiatives in the company. For instance, companies have been spending money on big data, cloud and visualization initiatives where software returns can be more difficult to quantify. Our research into big data analytics and data and analytics in the cloud shows that the top benefit for these initiatives is communication and knowledge sharing. Fortunately, the business case for predictive analytics can cite the tangible business benefits our research identified, the most often identified of which are achieving competitive advantage (57%), creating new revenue opportunities (50%) and increasing profitability (46%). But the business case can be made even stronger by noting that predictive analytics can have added value when it is used to leverage other current technology investments. For instance, our big data analytics research shows that the most valuable type of analytics to be applied to big data is predictive analytics.

To craft the specifics of the business case, concisely define the business issue that will be addressed. Assess the current environment and offer a gap analysis to show the difference between the current environment and the future environment. Offer a recommended solution, but also offer alternatives. Detail the specific value propositions associated with the change. Create a financial analysis summarizing costs and benefits. Support the analysis with a timeline including roles and responsibilities. Finally, detail the major risk factors and opportunity costs associated with the project.

For complex initiatives, break the overall project into a series of shorter projects. If the business case is for a project that will involve substantial work, consider providing separate timelines and deliverables for each phase. Doing so will keep stakeholders both informed and engaged during the time it takes to complete the full project. For large predictive analytics projects, it is important to break out the due-diligence phase and try not to make any hard commitments until that phase is completed. After all, it is difficult to establish defensible budgets and timelines until one knows the complete scope of the project.

Ensure that the project timeline is realistic and addresses all the key components needed for a successful deployment. In particular with predictive analytics projects, make certain that it reflects a thoughtful approach to data access, data quality and data preparation. We note that four in 10 organizations say that the most time spent in the predictive analytics process is in data preparation, and another 22 percent say that they spend the most time accessing data sources. If data issues have not been well thought through, it is next to impossible for the predictive analytics initiative to be successful. Read my recent piece on operationalizing predictive analytics to see how predictive analytics will align with specific business processes.

If you are proposing the implementation of new predictive analytics software, highlight the multiple areas of return beyond competitive advantage and revenue benefits. Specifically, new software can have a lower total cost of ownership and generate direct cost savings from improved operating efficiencies. A software deployment also can yield benefits related to people (productivity, insight, fewer errors), management (creativity, speed of response), process (shorter time on task or time to complete) and information (easier access; more timely, accurate and consistent data). Create a comprehensive list of the major benefits the software will provide compared to the existing approach, quantifying the impact wherever possible. Detail all major costs of ownership whether the implementation is on-premises or cloud-based: these will include licensing, maintenance, implementation consulting, internal deployment resources, training, hardware and other infrastructure costs. In other words, think broadly about both the costs and the sources of return in building the case for new technology. Also, read my recent piece on procuring predictive analytics software.
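As a simple, hypothetical illustration of the kind of financial analysis such a business case might include, the sketch below compares rough costs and benefits over a three-year horizon. Every figure is a placeholder, not a benchmark or vendor price, and would be replaced with an organization's own estimates.

```python
# Illustrative-only cost/benefit sketch for a predictive analytics business case.
# All figures are placeholder assumptions, not benchmarks or vendor pricing.
YEARS = 3

costs = {
    "licensing_per_year": 100_000,
    "maintenance_per_year": 20_000,
    "implementation_consulting": 150_000,        # one-time
    "internal_deployment_and_training": 80_000,  # one-time
    "infrastructure_per_year": 30_000,
}

benefits_per_year = {
    "incremental_revenue_from_retention": 250_000,
    "operating_efficiency_savings": 90_000,
}

one_time_costs = costs["implementation_consulting"] + costs["internal_deployment_and_training"]
recurring_costs = (costs["licensing_per_year"] + costs["maintenance_per_year"]
                   + costs["infrastructure_per_year"]) * YEARS
total_cost = one_time_costs + recurring_costs
total_benefit = sum(benefits_per_year.values()) * YEARS

roi = (total_benefit - total_cost) / total_cost
print(f"3-year cost:    ${total_cost:,.0f}")
print(f"3-year benefit: ${total_benefit:,.0f}")
print(f"Simple ROI:     {roi:.0%}")
```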

Understanding the audience, painting the vision, crafting the specific case, outlining areas of return, specifying software, noting risk factors, and being as comprehensive as possible are all part of a successful business plan process. Sometimes the initial phase is really just a pitch for project funding, and there won’t be any dollar allocation until people are convinced that the program will get them what they need. In such situations multiple documents may be required, including a short one- to two-page document that outlines the vision and makes a high-level argument for action from the organizational stakeholders. Once a cross-functional team and executive support are in place, a more formal assessment and design plan following the principles above will have to be built.

Predictive analytics offers significant returns for organizations willing to pursue it, but establishing a solid business case is the first step for any organization.

Regards,

Ventana Research

Oracle is one of the world’s largest business intelligence and analytics software companies. Its products range from middleware, back-end databases and ETL tools to business intelligence applications and cloud platforms, and it is well established in many corporate and government accounts. A key to Oracle’s ongoing success is in transitioning its business intelligence and analytics portfolio to self-service, big data and cloud deployments. To that end, three areas in which the company has innovated are fast, scalable access for transaction data; exploratory data access for less structured data; and cloud-based business intelligence.

 Providing users with access to structured data in an expedient and governed fashion continues to be a necessity for companies. Our benchmark research into information optimization finds drilling into information within applications (37%) and search (36%) to be the capabilities most needed for end users in business.

To provide them, Oracle enhanced its database in version Oracle 12c, which was released in 2013. The key innovation is to enable both transaction processing and analytic processing workloads on the same system. Using in-memory instruction sets on the processor, the system can run calculations quickly without changing the application data. The result is that end users can explore large amounts of information in the context of all data and applications running on the 12c platform. These applications include Oracle’s growing cadre of cloud-based applications. The value of this is evident in our big data analytics benchmark research, which finds that the number-one source of big data is transactional data from applications, mentioned by 60 percent of participants.

Search and interactive analysis of structured data are addressed by Oracle Business Intelligence Enterprise Edition (OBIEE) through a new visualization interface that applies assets Oracle acquired from Endeca in 2011. (Currently, this approach is available in Business Intelligence Cloud Service, which I discuss below.) To run fast queries of large data sets, columnar compression can be implemented through small code changes in the Oracle SQL Developer interface. These changes use the innovation in 12c discussed above and would be implemented by users familiar with SQL. Previously, IT professionals had to spend significant time constructing aggregate data and tuning the database so users could access data quickly; otherwise, transactional databases take a long time to query since they are row-oriented and the query literally must go through every row of data to return analytic results. With columnar compression, end users can explore and interact with data in a much faster, less limited fashion. With the new approach, users no longer need to walk down each hierarchy but can drag and drop or right-click to see the hierarchy definition. Drag-and-drop and brushing features enable exploration and uniform updates across all visualizations on the screen.

Under the covers, the database is doing some heavy lifting, often joining five to 10 tables to compute the query in near real time. The ability to do correlations on large data sets in near real time is a critical enabler of data exploration since it allows questions to be asked and answered one after another rather than requiring users to predefine what those questions might be. This type of analytic discovery enables much faster time to value, especially when providing root-cause analysis for decision-making.
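The post does not reproduce the code changes it mentions, but as a hedged sketch, assuming the feature being described is the 12c In-Memory column store, enabling columnar processing on a table comes down to a short DDL statement. The connection details and the "sales" table below are placeholders, and the choice of feature is my assumption rather than something the post states.

```python
# Hedged sketch: enabling in-memory columnar processing on a 12c table,
# assuming the Database In-Memory column store is the capability described above.
# Connection details and the "sales" table are placeholders.
import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "dbhost:1521/orcl12c")
cur = conn.cursor()

# Populate the table into the in-memory column store with query-optimized compression.
cur.execute("ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY HIGH")

# Analytic queries such as this aggregation can now scan columnar data in memory
# while transactional workloads continue to update the same rows.
cur.execute("SELECT region, SUM(amount) FROM sales GROUP BY region")
for region, total in cur:
    print(region, total)

conn.close()
```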

Oracle also provides Big Data SQL, a query approach that enables analysis of unstructured data on systems such as Hadoop. The model uses what Oracle calls query franchising rather than query federation, in which processing is done in each system’s native SQL dialect and the various dialects must be translated and combined into one. With franchising, Oracle SQL runs natively inside each of the systems. This approach applies Oracle SQL to big data systems and offloads queries to the compute nodes or storage servers of the big data system. It also maintains the security and speed needed to do exploration on less structured data sources such as JSON, which the 12c database supports natively. In this way Oracle provides security and manageability within the big data environment. Looking beyond structured data is key for organizations today. Our research shows that analyzing data from all sources is how three-fourths (76%) of organizations define big data analytics.

 To visualize and explore big data, Oracle  offers Big Data Discovery , which browses Hadoop and NoSQL stores, and samples and profiles data automatically to create catalogs. Users can explore important attributes through visualization as well as using common search techniques. The system currently supports capabilities such as string transformations, variable grouping, geotagging and text enrichment that assist in data preparation. This is a good start to address exploration on big data sources, but to better compete in this space, Oracle should offer more usable interfaces and more capabilities for both data preparation and visualization. For example, visualizations such as decision trees and correlation matrices are important to help end users to make sense of big data and do not appear to be included in the tool.

The third analytic focus, and the catalyst of the innovations discussed above, is Oracle’s move to the cloud. In September 2014, Oracle released BI Cloud Service (BICS), which helps business users access Oracle BI systems in a self-service manner with limited help from IT. Cloud computing has been a major priority for Oracle in the past few years, not just for its applications but for its entire technology stack. With BICS, Oracle offers a stand-alone product with which a departmental workgroup can insert analytics directly into its cloud applications. When BICS is coupled with the Data-as-a-Service (DaaS) offering, which accesses internal data as well as third-party data sources in the cloud, Oracle is able to deliver cross-channel analysis and identity-as-data. Cross-channel analysis and identity management are important in cloud analytics from both a business perspective and a privacy and security perspective.

In particular, such tools can help tie together and thus simplify the complex task of managing multichannel marketing. Availability and simplicity in analytics tools are priorities for marketing organizations. Our research into next-generation customer analytics shows that for most organizations data not being readily available (63%) and difficulty in maintaining customer analytics systems (56%) are top challenges.

 Oracle is not the first vendor to offer self-service discovery and flexible data preparation, but BICS begins its movement from the previous generation of BI technology to the next. BICS puts Oracle Transactional Business Intelligence (OTBI) in the cloud as a first step toward integration with vertical applications in the lines of business. It lays the groundwork for cross-functional analysis in the cloud.

We don’t expect BICS to compete immediately with more user-friendly analytic tools designed for business users and analysts or with well-established cloud computing BI players. Designers still must be trained in Oracle tools, and for this reason it appears that the tool, at least in its first iteration, is targeted only at Oracle’s OBIEE customers seeking a departmental solution that limits IT involvement. Oracle should continue to address usability for both end users and designers. BICS also should connect to more data sources, including Oracle Essbase. It currently comes bundled with Oracle Database Schema Service, which acts as the sole data source but does not directly connect with any other database. Furthermore, data movement is not streamlined in the first iteration, and replication of data is often necessary.

Overall, Oracle’s moves in business intelligence and analytics make sense because they use the same semantic models in the cloud as the analytic applications that many very large companies use today and won’t abandon soon. Furthermore, given Oracle’s growing portfolio of cloud applications and the integration of analytics into these transactional applications through OTBI, Oracle can leverage cloud application differentiation even with companies not currently using Oracle. If Oracle can align its self-service discovery and big data tools with its current portfolio in a reasonably timely fashion, current customers will not turn away from their Oracle investments. In particular, those with an Oracle-centric cloud roadmap will have no reason to switch.

Cloud-based business intelligence and analytics is still a developing market. Our previous research showed that business intelligence had been a laggard in the cloud in comparison to genres such as human capital management, marketing, sales and customer service. We are examining trends in our forthcoming data and analytics in the cloud benchmark research, which will evaluate both the current state of such software and where the industry likely is heading in 2015 and beyond. For organizations shifting to cloud platforms, Oracle has a very progressive cloud computing portfolio that my colleague has assessed, and it has created a path forward by investing in its Platform-as-a-Service (PaaS) and DaaS offerings. Its goal is to provide uniform capabilities across mobility, collaboration, big data and analytics so that all Oracle applications are consistent for users and can be extended easily by developers. However, Oracle competes against many cloud computing heavyweights such as Amazon Web Services, IBM and Microsoft, so achieving success through significant growth has some challenges. Oracle customers generally and OBIEE customers especially should investigate these new innovations in the context of their own roadmaps for big data analytics, cloud computing and self-service access to analytics.

 Regards,

Ventana Research

Ventana Research recently completed the most comprehensive evaluation of mobile business intelligence products and vendors available anywhere today. The evaluation includes 16 technology vendors’ offerings on smartphones and tablets across Apple, Google Android, Microsoft Surface and RIM BlackBerry, assessed in seven key categories: usability, manageability, reliability, capability, adaptability, vendor validation, and TCO and ROI. The result is our Value Index for Mobile Business Intelligence in 2014. The analysis shows that the top supplier is MicroStrategy, which qualifies as a Hot vendor and is followed by 10 other Hot vendors: IBM, SAP, QlikTech, Information Builders, Yellowfin, Tableau Software, Roambi, SAS, Oracle and arcplan.

Our expertise, hands-on experience and the buyer research from our benchmark research on next-generation business intelligence and on information optimization informed our product evaluations in this new Value Index. The research examined business intelligence on mobile technology to determine organizations’ current and planned use and the capabilities required for successful deployment.

What we found was wide interest in mobile business intelligence and a desire to improve the use of information in 40 percent of organizations, though adoption is less pervasive than interest. Fewer than half of organizations currently access BI capabilities on mobile devices, but nearly three-quarters (71%) expect their mobile workforce to be able to access BI capabilities in the next 12 months. The research also shows strong executive support: Nearly half of executives said that mobility is very important to their BI processes.

Ease of access and use are important criteria in this Value Index because the largest percentage of organizations identified usability as an important factor in evaluations of mobile business intelligence applications. This is an emphasis that we find in most of our research, and in this case it also may reflect users’ experience with first-generation business intelligence on mobile devices; not all those applications were optimized for touch-screen interfaces and designed to support gestures. It is clear that today’s mobile workforce requires the ability to access and analyze data simply and in a straightforward manner, using an intuitive interface.

The top five companies’ products in our 2014 Mobile Business Intelligence Value Index all provide strong user experiences and functionality. MicroStrategy stood out across the board, finishing first in five categories and most notably in the areas of user experience, mobile application development and presentation of information. IBM, the second-place finisher, has made significant progress in mobile BI with six releases in the past year, adding support for Android, advanced security features and an extensible visualization library. SAP’s steady support for mobile access to the SAP BusinessObjects platform and to SAP Lumira, along with its integrated mobile device management software, helped produce high scores in various categories and put it in third place. QlikTech’s flexible offline deployment capabilities for the iPad and its high ranking in the assurance-related category of TCO and ROI secured it the fourth spot. Information Builders’ latest release of WebFOCUS renders content directly with HTML5, and with its Active Technologies and Mobile Faves the company delivers strong mobile capabilities, rounding out the top five. Other noteworthy innovations in mobile BI include Yellowfin’s collaboration technology and Roambi’s use of storyboarding in its Flow application.

Although there is some commonality in how vendors provide mobile access to data, there are many differences among their offerings that can make one a better fit than another for an organization’s particular needs. For example, companies that want their mobile workforce to be able to engage in root-cause discovery analysis may prefer tools from Tableau and QlikTech. For large companies looking for a custom application approach, MicroStrategy or Roambi may be good choices, while others looking for streamlined collaboration on mobile devices may prefer Yellowfin. Many companies may base the decision on mobile business intelligence on which vendor they currently have installed. Customers with large implementations from IBM, SAP or Information Builders will be reassured to find that these companies have made mobility a critical focus.

To learn more about this research and to download a free executive summary, please visit http://www.ventanaresearch.com/bivalueindex/.

Regards,

Tony Cosentino

Vice President and Research Director

A few months ago, I wrote an article on the four pillars of big data analytics. One of those pillars is what is called discovery analytics, where visual analytics and data discovery combine to meet the needs of the business and the analyst. My colleague Mark Smith subsequently clarified the four types of discovery analytics: visual discovery, data discovery, information discovery and event discovery. Now I want to follow up with a discussion of three trends that our research has uncovered in this space. (To reference how I’m using these four discovery terms, please refer to Mark’s post.)

The most prominent of these trends is that conversations about visual discovery are beginning to include data discovery, and vendors are developing and delivering such tool sets today. It is well known that while big data profiling and the ability to visualize data give us a broader capacity for understanding, there are limitations that can be addressed only through data mining and techniques such as clustering and anomaly detection. Such approaches are needed to overcome statistical interpretation challenges such as Simpson’s paradox. In this context, we see a number of tools with different architectural approaches tackling this obstacle. For example, Information Builders, Datameer, BIRT Analytics and IBM’s new SPSS Analytic Catalyst tool all incorporate user-driven data mining directly with visual analysis. That is, they combine data mining technology with visual discovery for enhanced capability and more usability. Our research on predictive analytics shows that integrating predictive analytics into the existing architecture is the most pressing challenge (for 55% of organizations). Integrating data mining directly into the visual discovery process is one way to overcome this challenge.
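To see why profiling and visualization alone can mislead, here is a tiny, self-contained illustration of Simpson’s paradox using made-up numbers: the pooled comparison points one way while every segment points the other, which is exactly the kind of trap that segmentation and data mining techniques help surface.

```python
# Tiny demonstration of Simpson's paradox with fabricated numbers:
# the aggregate trend reverses once the data is segmented.
import pandas as pd

df = pd.DataFrame({
    "segment":   ["A", "A", "B", "B"],
    "variant":   ["treatment", "control", "treatment", "control"],
    "successes": [81, 234, 192, 55],
    "trials":    [87, 270, 263, 80],
})

by_segment = df.assign(rate=df.successes / df.trials)
print(by_segment[["segment", "variant", "rate"]])   # treatment wins in both segments

overall = df.groupby("variant")[["successes", "trials"]].sum()
overall["rate"] = overall.successes / overall.trials
print(overall)   # yet control appears to win in the pooled data
```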

The second trend is renewed focus on information discovery (i.e., search), especially among large enterprises with widely distributed systems as well as the big data vendors serving this market. IBM acquired Vivisimo and has incorporated the technology into its PureSystems and big data platform. Microsoft recently previewed its big data information discovery tool, Data Explorer. Oracle acquired Endeca and has made it a key component of its big data strategy. SAP added search to its latest Lumira platform. LucidWorks, an independent information discovery vendor that provides enterprise support for open source Lucene/Solr, adds search as an API and has received significant adoption. There are different levels of search, from documents to social media data to machine data, but I won’t drill into these here. Regardless of the type of search, in today’s era of distributed computing, in which there’s a need to explore a variety of data sources, information discovery is increasingly important.

The third trend in discovery analytics is a move to more embeddable system architectures. In parallel with the move to the cloud, architectures are becoming more service-oriented, and the interfaces are hardened in such a way that they can integrate more readily with other systems. For example, the visual discovery market was born on the client desktop with Qlik and Tableau, quickly moved to server-based apps and is now moving to the cloud. Embeddable tools such as D3, an open source JavaScript visualization library, allow vendors such as Datameer to include a library of visualizations in their products. Lucene/Solr represents a similar embedded technology in the information discovery space. The broad trend we’re seeing is toward RESTful architectures that promote a looser coupling of applications and therefore require less custom integration. This move runs in parallel with the decline of Internet Explorer, the rise of new browsers and the ability to render content using JavaScript Object Notation (JSON). This trend suggests a future for discovery analysis embedded in application tools (including, but not limited to, business intelligence). The environment is still fragmented and in its early stage. Instead of one cloud, we have a lot of little clouds. For the vendor community, which is building more platform-oriented applications that can work in an embeddable manner, a tough question is whether to go after the on-premises market or the cloud market. I think that each vendor will have to make its own decision based on customer needs and its own business model constraints.
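As a generic illustration of that loose coupling (the endpoint URL, parameters and response fields below are hypothetical, not any particular vendor’s API), an application can embed analytics simply by calling a RESTful service and rendering the JSON it returns, rather than relying on heavy custom integration.

```python
# Generic sketch of REST-style loose coupling: an application pulls analytic
# results as JSON from a service endpoint instead of tightly coupled integration.
# The URL, parameters and response fields are hypothetical placeholders.
import requests

response = requests.get(
    "https://analytics.example.com/api/v1/charts/churn-by-region",
    params={"period": "last_30_days"},
    timeout=10,
)
response.raise_for_status()
chart = response.json()

# Hand the JSON payload to whatever embedded renderer the host application uses.
for point in chart.get("series", []):
    print(point.get("region"), point.get("churn_rate"))
```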

Regards,

Tony Cosentino

VP and Research Director

Organizations today must manage and understand a flood of information that continues to increase in volume and turn it into competitive advantage through better decision-making. To do that organizations need new tools but, more importantly, the analytical process knowledge to use them well. Our benchmark research into big data and business analytics found that skills and training are substantial obstacles to using big data (for 79% of organizations) and analytics (77%).

But proficiency around technology and even statistical knowledge are not the only capabilities needed to optimize an organization’s use of analytics. A framework that complements the traditional analytical modeling process helps ensure that analytics are used correctly and will deliver the best results. I propose the following five principles that are concerned less with technology than with people and processes. (For more detail on the final two, see my earlier perspective on business analytics.)

Ask the right questions. Without a process for getting to the right question, the one that is asked often is the wrong one, yielding results that cannot be used as intended. Getting to the right question is a matter of defining goals and terms; when this is done, the “noise” of differing meanings is reduced and people can work together efficiently. Companies talk about strategic alignment, brand loyalty, big data and analytics, to name a few, yet these terms can mean different things to different people. Take time to discuss what people really want to know; describing something in detail ensures that everyone is on the same page. Strategic listening is a critical skill, and done right it will enable the analyst to identify, craft and focus the questions that the organization needs answered through the analytic process.

Take a Bayesian perspective. Bayesian analysis, also called posterior probability analysis, starts with a prior probability and updates it as new evidence arrives to produce a posterior probability. In a practical sense, it’s about updating a hypothesis when given new information; it’s about taking all available information and seeing where it converges. Of course, the more you know about the category you’re dealing with, the easier it is to separate the wheat from the chaff in terms of valuable information. Category knowledge allows you to look at the data from a different perspective and bring complex existing knowledge to bear. This in and of itself is a Bayesian approach, and it allows the analyst to iteratively take the investigation in the right direction. Bayesian analysis has not only had a great impact on statistics and market insights in recent years but has also changed how we view important historical events. For those interested in how the Bayesian philosophy is taking hold in many different disciplines, there is an interesting book entitled The Theory That Would Not Die.
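As a small worked example of that updating, with made-up numbers: suppose 5 percent of customers churn in a given month, 60 percent of churners visit the cancellation policy page beforehand, and only 5 percent of non-churners do. Bayes’ rule turns a single page visit into a sharply higher churn estimate.

```python
# Worked Bayes' rule example with illustrative, made-up probabilities.
prior_churn = 0.05            # P(churn) before seeing any behavior
p_visit_given_churn = 0.60    # P(visits cancellation page | churn)
p_visit_given_stay = 0.05     # P(visits cancellation page | no churn)

# P(churn | visit) = P(visit | churn) * P(churn) / P(visit)
p_visit = (p_visit_given_churn * prior_churn
           + p_visit_given_stay * (1 - prior_churn))
posterior_churn = p_visit_given_churn * prior_churn / p_visit

print(f"Prior churn probability:    {prior_churn:.1%}")
print(f"Posterior after page visit: {posterior_churn:.1%}")  # roughly 39%
```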

Don’t try to prove what you already know. Let the data guide the analysis rather than allowing predetermined beliefs to guide it. Physicist Enrico Fermi pointed out that measurement is the reduction of uncertainty. Analysts start with a hypothesis and try to disprove it rather than to prove it. From there, iteration is needed to come as close to the truth as possible. If we start with a gut feel and try to prove that gut feel, we are invoking the wrong approach. The point is that when an analysis starts by trying to prove what we already believe to be true, the results are rarely surprising and the analysis is likely to add nothing new.

Think in terms of “so what.” Moving beyond the “what” (i.e., measurement) to the “so what” (i.e., insights) should be a goal of any analysis, yet many are still turning out analysis that does nothing more than state the facts. Maybe 54 percent of people in a study prefer white houses, but why does anyone care that 54 percent of people prefer white houses? Analyses must move beyond findings to answer critical business questions and provide informed insights, implications and even full recommendations.

Be sure to address the “now what.” The analytics professional should make sure that the findings, implications and recommendations of the analysis are heard. This is the final step in the analytic process, the “now what” – the actual business planning and implementation decisions that are driven by the analytic insights. If those insights do not lead to decision-making or action, then the effort has no value. There are a number of things the analyst can do to ensure the information is heard. A compelling story line that incorporates animation and dynamic presentation is a good start. Depending on the size of the initiative, professional videography, implementation of learning systems and change management tools may also be involved.

Just because our business technology innovation research finds analytics to be the top-ranked priority in 39 percent of organizations does not mean that adopting it will bring immediate success. In order to implement a successful framework such as the one described above, organizations should build this or a similar approach into their training programs and analytical processes. The benefits will be wide-ranging, including more targeted analysis, greater analytical depth and analytical initiatives that have a real impact on decision-making in the organization.

Regards,

Tony Cosentino

VP and Research Director

Our benchmark research into business technology innovation found that analytics is the most important new technology for improving organizations’ performance; participants ranked big data only fifth out of six choices. This and other findings indicate that the best way for big data to contribute value to today’s organizations is to be paired with analytics. Recently, I wrote about what I call the four pillars of big data analytics on which the technology must be built. These areas – information optimization, predictive analytics, right-time analytics, and the discovery and visualization of analytics – are the foundation of big data analytics. These components gave me a framework for looking at Teradata’s approach to big data analytics during the company’s analyst conference last week in La Jolla, Calif.

The essence of big data is to optimize the information used by the business for whatever type of need, which my colleague has identified as a key value of these investments. Data diversity presents a challenge to most enterprise data warehouse architectures. Teradata has been dealing with large, complex sets of data for years, but today’s different data types are forcing new modes of processing in enterprise data warehouses. Teradata is addressing this issue by focusing on a workload-specific architecture that aligns with MapReduce, statistics and SQL. Its Unified Data Architecture (UDA) incorporates the Hortonworks Hadoop distribution, the Aster Data platform and Teradata’s stalwart RDBMS EDW. The Big Data Analytics appliance that encompasses the UDA framework won our annual innovation award in 2012. The system is connected through InfiniBand and accesses Hadoop’s metadata layer directly through HCatalog. Bringing these pieces together represents the type of holistic thinking that is critical for handling big data analytics; at the same time there are some costs, as the system includes two MapReduce processing environments. For more on the UDA architecture, read my previous post on Teradata as well as my colleague Mark Smith’s piece.

Predictive analytics is another foundational piece of big data analytics and one of the top priorities in organizations. However, according to our big data research, it is not available in 41 percent of organizations today. Teradata is addressing it in a number of ways, and at the conference Stephen Brobst, Teradata’s CTO, likened big data analytics to a high-school chemistry classroom that has a chemical closet from which you pull out the chemicals needed to perform an experiment in a separate work area. In this analogy, Hadoop and the RDBMS EDW are the chemical closet, and Aster Data provides the sandbox where the experiment is conducted. With multiple algorithms currently written into the platform and many more promised over the coming months, this sandbox provides a promising big data lab environment. The approach is SQL-centric and as such has its pros and cons. The obvious advantage is that SQL is a declarative language that is easier to learn than procedural languages, and an established skills base exists within most organizations. The disadvantage is that SQL is not the native tongue of many business analysts and statisticians. While it may be easy to call a function within the context of a SQL statement, the same person who can write the statement may not know when and where to call the function. One way for Teradata to expediently address this need is through its existing partnerships with companies like Alteryx, which I wrote about recently. Alteryx provides a user-friendly analytical workflow environment and is establishing a solid presence on the business side of the house. Teradata already works with predictive analytics providers like SAS but should expand further with companies like Revolution Analytics, which I assessed and which uses R technology to support a new generation of tools.

Teradata is exploiting its advantage with algorithms such as nPath, which shows the path that a customer has taken to a particular outcome such as buying or not buying. According to our big data benchmark research, being able to conduct what-if analysis and predictive analytics are the two most desired capabilities not currently available with big data, as the chart shows. The algorithms that Teradata is building into Aster help address this challenge, but despite customer case studies shown at the conference, Teradata did not clearly demonstrate how this type of algorithm and others seamlessly integrate to address the overall customer experience or other business challenges. While presenters described it in terms of improving churn and fraud models, and we can imagine how the handoffs might occur, the presentations were more technical in nature. As Teradata gains traction with these types of analytical approaches, it will behoove the company to show not just how the algorithm and SQL work but how they are used by business people and analysts who are not as technically savvy.
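The underlying idea of a path analysis like nPath can be illustrated conceptually. The sketch below is plain pandas, not Aster’s SQL-MR syntax or any Teradata API, and the event data is made up; it simply orders each customer’s events by time and extracts the sequences that ended in a churn outcome.

```python
# Conceptual illustration of nPath-style path analysis in pandas; this is not
# Aster SQL-MR syntax, just the idea of ordering events per customer and
# examining the sequences that lead to an outcome. Data is fabricated.
import pandas as pd

events = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "timestamp": pd.to_datetime([
        "2013-04-01", "2013-04-03", "2013-04-05",
        "2013-04-02", "2013-04-04",
        "2013-04-01", "2013-04-02", "2013-04-06"]),
    "event": ["login", "viewed_cancellation_page", "churned",
              "login", "purchase",
              "support_call", "viewed_cancellation_page", "churned"],
})

paths = (events.sort_values(["customer_id", "timestamp"])
               .groupby("customer_id")["event"]
               .apply(lambda s: " -> ".join(s)))

# Count the most common paths that end in churn.
churn_paths = paths[paths.str.endswith("churned")]
print(churn_paths.value_counts())
```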

Another key principle behind big data analytics is timeliness of the analytics. Given the nature of business intelligence and traditional EDW architectures, until now timeliness of analytics has been associated with how quickly queries run. This has been a strength of the Teradata MPP shared-nothing architecture, but other appliance architectures, such as those of Netezza and Greenplum, now challenge Teradata’s hegemony in this area. Furthermore, trends in big data make the situation more complex. In particular, with very large data sets, many analytical environments have replaced traditional row-level access with column access. Column access is a more natural way for data to be accessed for analytics since it does not have to read through an entire row of data that may not be relevant to the task at hand. At the same time, column-level access has downsides, such as the reduced speed at which you can write to the system; also, as the data set used in the analysis expands to a high number of columns, it can become less efficient than row-level access. Teradata addresses this challenge by providing both row and column access through innovative proprietary access and computation techniques.

Exploratory analytics on large, diverse data sets also has a timeliness imperative. Hadoop promises the ability to conduct iterative analysis on such data sets, which is the reason that companies store big data in the first place according to our big data benchmark research. Iterative analysis is akin to the way the human brain naturally functions, as one question naturally leads to another question. However, methods such as Hive, which allows an SQL-like method to access Hadoop data, can be very slow, sometimes taking hours to return a query. Aster enables much faster access and therefore provides a more dynamic interface for iterative analytics on big data.

Timeliness also has to do with incorporating big data in a stream-oriented environment, and only 16 percent of organizations are very satisfied with the timeliness of events according to our operational intelligence benchmark research. In use cases such as fraud and security, rule-based systems work with complex algorithmic functions to uncover criminal activity. While Teradata itself does not provide the streaming or complex event processing (CEP) engines, it can provide the big data analytical sandbox and algorithmic firepower necessary to supply the appropriate algorithms for these systems. Teradata partners with major players in this space already but would be well served to partner further with CEP and other operational intelligence vendors to expand its footprint. By the way, these vendors will be covered in our upcoming Operational Intelligence Value Index, which is based on our operational intelligence benchmark research. This same research showed that analyzing business and IT events together was very important in 45 percent of organizations.

The visualization and discovery of analytics is the last foundational pillar, and here Teradata is still a work in progress. While some of the big data visualizations Aster generates show interesting charts, they lack a context to help people interpret the chart. Furthermore, the visualization is not very intuitive and requires writing and customizing SQL statements. To be fair, most visual and discovery tools today are relationally oriented, and Teradata is trying to visualize large and diverse sets of data. Furthermore, Teradata partners with companies including MicroStrategy and Tableau to provide more user-friendly interfaces. As Teradata pursues the big data analytics market, it will be important to demonstrate how it works with its partners to build a more robust and intuitive analytics workflow environment and visualization capability for the line-of-business user. Usability (63%) and functionality (49%) are the top two considerations when evaluating business intelligence systems according to our research on next-generation business intelligence.

Like other large industry technology players, Teradata is adjusting to the changes brought by business technology innovation in just the last few years. Given its highly scalable databases and data modeling – areas that still represent the heart of most companies’ information architectures – Teradata has the potential to pull everything together and leverage its current deployed base. Technologists looking at Teradata’s new and evolving capabilities will need to understand the business use cases and share them with the people in charge of such initiatives. For business users, it is important to realize that big data is more than just visualizing disparate data sets; the greater value lies in setting up an efficient back-end process that applies the right architecture and tools to the right business problem.

Regards,

Tony Cosentino
VP and Research Director

Responding to the trend that businesses now ask less sophisticated users to perform analysis and rely on software to help them, Oracle recently announced a new release of its flagship Oracle BI Foundational Suite (OBIFS 11.1.1.7) as well as updates to Endeca, the discovery platform that Oracle bought in 2011. Endeca is part of a new class of tools that bring new capabilities in information discovery, self-service access and interactivity. Such approaches represent an important part of the evolution of business intelligence to business analytics, as I have noted in my agenda for 2013.

Oracle Business Intelligence Foundational Suite includes many components, among them Oracle Business Intelligence Enterprise Edition (OBIEE), Oracle Essbase and a scorecard and strategy application. OBIEE is the enabling foundation that federates queries across data sources and enables reporting across multiple platforms. Oracle Essbase is an in-memory OLAP tool that enables forecasting and planning, including what-if scenarios embedded in a range of Oracle BI Applications, which are sold separately. The suite, along with the Endeca software, is integrated with Exalytics, Oracle’s appliance for BI and analytics. Oracle’s appliance strategy, which I wrote about after Oracle World last year, invests heavily in the Sun Microsystems hardware acquired in 2010.

These updates are far-ranging and numerous (including more than 200 changes to the software). I’d like to point out some important pieces that advance Oracle’s position in the BI market. A visualization recommendations engine offers guidance on the type of visualization that may be appropriate for a user’s particular data. This feature, already sold by others in the market, may be considered a subset of the broader capability of guided analysis. Advanced visualization techniques have become more important for companies because they make it easier for users to understand data, and they are critical for competing with the likes of Tableau, a player in this space that I wrote about last year.
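
As a rough illustration of what a visualization recommendations engine does, here is a small Python sketch that maps basic characteristics of a data set to a suggested chart type; the rules are my own simplified assumptions, not Oracle’s logic.

```python
# Conceptual sketch of a chart-type recommender: map simple properties of
# the data (measures, dimensions, time orientation) to a visualization.

def recommend_chart(n_measures, n_dimensions, is_time_series, n_categories=0):
    if is_time_series:
        return "line chart"
    if n_measures == 1 and n_dimensions == 1:
        return "pie chart" if n_categories <= 5 else "bar chart"
    if n_measures == 2 and n_dimensions <= 1:
        return "scatter plot"
    if n_dimensions >= 2:
        return "heat map"
    return "table"

# Example: one measure broken out by one dimension with many categories.
print(recommend_chart(n_measures=1, n_dimensions=1,
                      is_time_series=False, n_categories=12))  # -> bar chart
```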

Another user-focused update related to visualization is performance tiles, which enable important KPIs to be displayed prominently within the screen surface area. Performance tiles are a great way to start improving the static dashboards that my colleague Mark Smith has critiqued. From what I have seen, it is unclear to what degree the business user can define and change Oracle’s performance tile KPIs (for example, the red-flagged metrics assigned to a particular business user that appear within the scorecard function of the software) and how much the system can provide in a prescriptive analytic fashion. Other visualizations that have been added include waterfall charts, which enable dependency analysis; these are especially helpful for pricing analysis because they show users how changes in one dimension impact pricing as a whole. Another is MapViews for manipulation and design to support location analytics; our next-generation BI research finds that the capability to deploy geographic maps is important to BI in 47 percent of organizations and to visualize metrics associated with locations in 41 percent. Stack charts now provide auto-weighting for 100-percent sum analysis, which can be helpful for analytics such as attribution models. Breadcrumbs let users drill back through their navigation path, which helps show how a person arrived at a particular analytical conclusion. Finally, Trellis View actions provide contextual functionality to help turn data into action in an operational environment. These visualization advancements are critical for Oracle’s big data efforts: visualization is a top-three big data capability not available in 37 percent of organizations according to our big data research, and our latest technology innovation research on business analytics found presenting data visually to be the second most important capability, cited by 48 percent of organizations.
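
For readers who want the arithmetic behind two of these chart types, the short Python sketch below works through a waterfall decomposition of price and a 100-percent weighted attribution split; the figures are made up for illustration.

```python
# Illustrative arithmetic behind two chart types mentioned above;
# the price components and channel figures are invented for the example.

# Waterfall: how list price is eroded step by step to a net price.
steps = [("list price", 100.0), ("volume discount", -8.0),
         ("promotion", -5.0), ("freight", -2.5)]
running = 0.0
for label, delta in steps:
    running += delta
    print(f"{label:16s} {delta:+7.2f} -> cumulative {running:7.2f}")

# 100-percent stacked view: channel contributions auto-weighted to sum to 100,
# the kind of normalization used in simple attribution models.
conversions = {"email": 120, "search": 300, "display": 80}
total = sum(conversions.values())
shares = {k: round(100 * v / total, 1) for k, v in conversions.items()}
print(shares)  # {'email': 24.0, 'search': 60.0, 'display': 16.0}
```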

The update to Oracle Smart View for Office also puts more capability in the hands of users. It natively integrates Excel and other Microsoft Office applications with operational BI dashboards so users can perform analysis and prepare ad-hoc reports directly within these desktop environments. This is an important advance for Oracle, since our benchmark research on the use of spreadsheets across the enterprise found that the combination of BI and spreadsheets happens all the time or frequently in 74 percent of organizations. Collaboration around business intelligence is also essential, and tighter Office integration is a critical use case: our next-generation business intelligence research found that using Microsoft Office for collaboration with business intelligence is important to 36 percent of organizations.

Oracle’s efforts to evolve social collaboration through what it calls Oracle Social Network have advanced significantly, but there does not appear to be a short-term plan to integrate that capability and make it available through its business intelligence offering. Our research finds that more than two-thirds (67%) of organizations rank collaboration as important, and embedding it within BI is a top need in 38 percent of organizations. Much of what Oracle already provides could easily be integrated to meet business demand for a range of people-based interactions that most organizations are still struggling to manage through e-mail.

Oracle has extended the capabilities of OBIEE with Hadoop integration via a Hive connector that allows it to pull data into OBIEE from big data sources, while an MDX search function enabled by integration with the Endeca discovery tool allows OBIEE to do full-text search and data discovery. Connections to new data sources are critically important in today’s environment; our technology innovation research shows that retaining and analyzing more data is the number-one ranked use for big data in 29 percent of organizations. Federated data discovery is particularly important because most companies are often unaware of their information assets and therefore unknowingly limit their analysis.

Beyond the core BI product, Oracle made significant advances with Endeca 3.0. Users can now analyze Excel files; this is an existing capability for other vendors, so it was important for Oracle to gain parity here. Beyond that, Endeca now comes with a native JavaScript Object Notation (JSON) reader and support for authorization standards, furthering its ability to do contextual analysis and sentiment analysis on data in text and social media. Endeca also can now pull data from the Oracle BI server to marry with the analysis. Overall, the new version of Endeca enables business-driven information discovery, which is essential to relieve the pressure on analysts and IT to create and publish information and insights for the business.

Oracle continues to invest in BI applications that supply prebuilt analytics; these packaged analytics applications span the front office (sales and marketing), operations (procurement and supply chain) and the back office (finance and HR). Given this enterprise-wide support, Oracle’s BI can perform cross-functional analytics and deliver fast time to value, since users do not have to spend time building the dashboards. Through interoperation with the company’s enterprise applications, customers can execute actions directly in applications such as PeopleSoft, JD Edwards or Oracle Business Suite. Oracle has also begun to leverage more of its scorecarding function, which enables KPI relationships to be mapped and information aggregated and trended. Scorecards are important for analytic cultures because they provide a common communication platform for executive decision-makers and allow ownership of metrics to be assigned.

I was surprised not to find much advancement in Oracle’s business intelligence efforts on smartphones and tablets. Our research finds mobile business intelligence is important to 69 percent of organizations, yet 78 percent of organizations report that no or only some BI capabilities are available in their current BI deployments. For those that are using mobile business intelligence, only 28 percent are satisfied. For years IT has not placed a priority on mobile support for BI while business has been clamoring for it; business is now more readily leading these efforts, with 52 percent planning new or expanded deployments on tablets and 32 percent on smartphones. To capture more opportunity in this highly competitive market, Oracle will need to significantly advance its efforts and make its capabilities freely available without passwords, as other BI providers have already done. It also will need to recognize that business is more interested in alerts and events delivered as notifications to mobile devices than in having the entire suite of BI capabilities replicated on these technologies.

Oracle has foundational positions in enterprise applications and database technology and has used these positions to drive significant success in BI. The company’s proprietary “walled garden” approach worked well for years, but now technology changes, including movements toward open source and cloud computing, threaten that entrenched position. Surprisingly, the company has moved slowly off of its traditional messaging stance targeted at the CIO, IT and the data center. That position focuses the company too much on the technology-driven 3 V’s of big data and analytics and not enough on the business-driven 3 W’s that I advocate. As the industry moves into the age of analytics, where information is looked upon as a critical commodity and usability is the key to adoption (our research finds usability to be the top evaluation consideration in 63 percent of organizations), CIOs will need to move beyond an IT-centric approach to BI, as I have noted, and engage more deeply with the requirements of the business. Oracle’s business intelligence strategy and how it addresses these business outcomes and use by all business users is key to the company’s future, and organizations should examine these advancements to its BI offering closely to determine whether they can improve the value of information and big data.

Regards,

Tony Cosentino

VP and Research Director

Last week, IBM brought industry analysts to its famed Almaden Research Center, where the company outlined its big data analytics strategy and introduced a number of new innovations. Big data is no new topic to IBM, which has for decades helped organizations store and use data. But technology has changed over those decades, and IBM is working hard to ensure it is part of the future and not just the past. Our latest business technology innovation research into big data technology finds that retaining and analyzing more data is the first-ranked priority in 29 percent of organizations. From both an IT and a business perspective, big data is critical to IBM’s future success.

On the strategy side, there was much discussion at the event around use cases and the different patterns of deployment for big data analytics. Inhi Cho Suh, vice president of strategy, outlined five compelling use cases for big data analytics:

  1. Discovery and visualization. These types of exploratory analytics in a federated environment are a big part of big data analytics, since they can unlock patterns that can be useful in areas as diverse as determining a root cause of an airline issue or understanding relationships among buyers. IBM is working hard to ensure that products such as IBM Cognos Insight can evolve to support a new generation of visual discovery for big data.
  2. 360-degree view of the customer. By bringing together data sources and applying analytics to increase such things as customer loyalty and share-of-wallet, companies can gain more revenue and market share with fewer resources. IBM needs to ensure it can actually support a broad array of information about customers – not just transactional or social media data but also voice as well as mobile interactions that also use text.
  3. Security and intelligence. This includes fraud detection and real-time cyber security, where companies leverage big data to predict anomalies and contain risk. IBM has been enhancing its ability to process real-time streams and transactions across any network. This is an important area for the company as it works to drive competitive advantage.
  4. Operational analysis. This is the ability to leverage networks of instrumented data sources to enable proactive monitoring through baseline analysis and real-time feedback mechanisms. The need for better operational analytics continues to increase. Our latest research on operational intelligence finds that organizations that use dedicated tools to handle this need will be more satisfied and gain better outcomes than those that do not.
  5. Data warehouse augmentation.  Big data stores can replace some traditional data stores and archival systems to allow larger sets of data to be analyzed, providing better information and leading to more precise decision-making capabilities. It should be no surprise that IBM has customers with some of the larger data warehouse deployments. The company can help customers evaluate their technology and improve or replace existing investments.

Prior to Inhi taking the stage, Dave Laverty, vice president of marketing, went through the new technologies being introduced. The first announcement was the BLU Accelerator – dynamic in-memory technology that promises to improve both performance and manageability on DB2 10.5. In tests, IBM says it achieved better than 10,000x performance on queries. The secret sauce lies in the ability to do column-store data retrieval, maximize CPU processing, and skip data that is not needed for the particular analysis at hand. The benefits to the user are much faster performance across very large data sets and a reduction in manual SQL optimization. Our latest research into business technology innovation finds that in-memory technology is the technology most planned for use with big data in the next two years (22%), ahead of Hadoop (20%), data warehouse appliances (19%), specialized databases (19%) and RDBMSes (10%).
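
Data skipping is easier to grasp with a small example. The Python sketch below keeps min/max metadata for each block of a column and skips blocks that cannot satisfy a predicate; it mirrors the general idea rather than DB2 BLU’s internals.

```python
# Conceptual sketch of "data skipping": keep min/max metadata per block of a
# column so a scan can bypass blocks that cannot satisfy the predicate.

blocks = [
    {"min": 1,    "max": 999,  "values": list(range(1, 1000))},
    {"min": 1000, "max": 1999, "values": list(range(1000, 2000))},
    {"min": 2000, "max": 2999, "values": list(range(2000, 3000))},
]

def scan_greater_than(blocks, threshold):
    matches = []
    for block in blocks:
        if block["max"] <= threshold:
            continue  # whole block skipped without reading its values
        matches.extend(v for v in block["values"] if v > threshold)
    return matches

print(len(scan_greater_than(blocks, 2500)))  # only the last block is read -> 499
```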

An intriguing comment from one of IBM’s customers was “What is bad SQL in a world with BLU?” An important extension of that question might be “What is the future role for database administrators, given new advancements around databases, and how do we leverage that skill set to fill the big data analytics gap?” According to our business technology innovation research, staffing (79%) and training (77%) are the two biggest challenges to implementing big data analytics.

One of IBM’s answers to the question of the skills gap comes in the form of BigSQL. A newly announced feature of InfoSphere BigInsights 2.1, BigSQL layers on top of BigInsights to provide accessibility through industry-standard SQL and SQL-based applications. Providing access to Hadoop has been a sticking point for organizations, since they have traditionally needed to write procedural code to access Hadoop data. BigSQL is similar in function to Greenplum’s Pivotal, Teradata Aster and Cloudera’s Impala, all of which use SQL to mine data out of Hadoop. These products aim to provide access for SQL-trained users and for SQL-based applications, which represent the majority of BI tools currently deployed. The challenge for IBM, with a product portfolio that includes BigInsights and Cognos Insight, is to offer a clear message about which products meet which types of analytic needs for business and IT professionals. IBM should also provide further clarity on when to use big data analytics software partners such as Datameer, which was on an industry panel at the event and is part of the IBM global educational tour that I have also analyzed.
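
The gap BigSQL addresses can be seen by comparing the two styles directly. The Python sketch below expresses the same aggregation as hand-written map/reduce-style code and as a single SQL statement; the data and SQL text are illustrative and not tied to the BigSQL dialect.

```python
# Illustrative comparison: procedural map/reduce-style aggregation versus
# the declarative SQL that a BI tool or SQL-trained analyst would submit.
from collections import defaultdict

records = [("EMEA", 120.0), ("AMER", 340.0), ("EMEA", 75.5), ("APAC", 50.0)]

# Procedural style: explicit map (emit key/value pairs) and reduce (sum per key).
mapped = [(region, amount) for region, amount in records]
reduced = defaultdict(float)
for region, amount in mapped:
    reduced[region] += amount
print(dict(reduced))

# Declarative style: the same question in one statement.
sql = "SELECT region, SUM(amount) FROM sales GROUP BY region"
```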

Another IBM announcement was the PureData System for Hadoop. This appliance approach to Hadoop provides a turnkey solution that can be up and running in a matter of hours. As you would expect in an appliance approach, it allows for consistent administration, workflow, provisioning and security with BigInsights. It also allows access to Hadoop through BigSheets, which presents summary information about the unstructured data in Hadoop, and which was already part of the BigInsights platform. Phil Francisco, vice president of big data product management and strategy, pointed out use cases around archival capabilities and the ability to do cold storage analysis as well as the ability to bring many unstructured sources together. The PureData System for Hadoop, due out in the second half of the year, adds a third version to the BigInsights lineup, which also includes the free web-based version and the Enterprise version. Expanding to support Hadoop with its appliances is critical as more organizations look to exploit the processing power of Hadoop technology for their database and information management needs.

Other announcements included new versions of InfoSphere Streams and Informix TimeSeries for reporting and analytics using smart meter and sensor technology. They help with real-time analytics and big data, depending on the business and architectural needs of an organization. The integration of database and streaming analytics is a key area where IBM differentiates itself in the market.

Late in the day, Les Rechan, general manager for business analytics, told the crowd that he and Bob Picciano, general manager for information management, had recently promised the company $20 billion in revenue. That statement is important because in the age of big data, information management and analytics must be considered together, and the company needs a strong relationship between these two leaders to meet this ambitious objective. In an interview, Rechan told me that the teams realize this and are working hand-in-glove across strategy, product development and marketing. The camaraderie between the two was clear during the event and bodes well for the organization. Ultimately, IBM will need to articulate why it should be considered for big data, as our technology innovation research finds organizations today are less worried about validation of a vendor from a size perspective (23%) than about the usability of the technology (64%).

IBM’s big data platform seems to be less a specific offer and more of an ethos of how to think about big data and big data analytics in a common-sense way. The focus on five well-thought-out use cases provides customers a frame for thinking through the benefits of big data analytics and gives them a head start with their business cases. Given the confusion in the market around big data, that common-sense approach serves the market well, and it is very much aligned with our own philosophy of focusing on what we call the business-oriented Ws rather than the technology-oriented Vs.

Big data analytics, and in particular predictive analytics, is complex and difficult to integrate into current architectures. Our benchmark research into predictive analytics shows that architectural integration is the biggest inhibitor for 55 percent of companies, which should be a message IBM takes to heart about integrating its predictive analytics tools with its big data technology options. Predictive analytics is the most important capability (49%) for business analytics, according to our technology innovation research, and IBM needs to show more solutions that integrate predictive analytics with big data.

H.L. Mencken once said, “For every complex problem there is an answer that is clear, simple and wrong.” Big data analytics is a complex problem, and the market is still early. The latent benefit of IBM’s big data analytics strategy is that it allows IBM to continue to innovate and deliver without playing all of its chips at one time. In today’s environment, many supplier companies don’t have the same luxury.

As I pointed out in my blog post on the four pillars of big data analytics, our research and clients are moving toward addressing big data and analytics in a more holistic and integrated manner. The focus is shifting from how organizations store or process information to how they use it. Some may argue that IBM’s cadence reflects the company’s size and is actually a competitive disadvantage, but I would argue that size and innovation leadership are not mutually exclusive. As companies grapple with the onslaught of big data and analytics, no one should underestimate IBM’s outcomes-based and services-driven approach, but in order to succeed IBM also needs to ensure it can meet the needs of organizations at a price they can afford.

Regards,

Tony Cosentino

VP and Research Director
