
SAP recently presented its analytics and business intelligence roadmap and new innovations to about 1,700 customers and partners using SAP BusinessObjects at its SAP Insider event (#BI2014). SAP has one of the largest presences in business intelligence due to its installed base of SAP BusinessObjects customers. The company intends to defend its current position in the established business intelligence (BI) market while expanding in the areas of databases, discovery analytics and advanced analytics. As I discussed a year ago, SAP faces an innovator’s dilemma in parts of its portfolio, but it is working aggressively to get ahead of competitors.

One of the pressures that SAP faces is from a new class of software that is designed for business analytics and enables users to visualize and interact with data in new ways without relationships in the data being predefined. Our business technology innovation research shows that analytics is the top-ranked technology innovation in business today, rated first by 39 percent of organizations. In conventional BI systems, data is modeled in so-called cubes or other defined structures that allow users to slice and dice data quickly and easily. The cube structure abstracts away the complexity of the structured query language (SQL) of the database and slashes the amount of time it takes to read data from a row-oriented database. However, as the cost of memory decreases significantly, enabling the use of new column-oriented databases, these methods of BI are being challenged. For SAP and other established business intelligence providers, this situation represents both an opportunity and a challenge. In responding, almost all of these BI companies have introduced some sort of visual discovery capability. SAP introduced SAP Lumira, formerly known as Visual Intelligence, 18 months ago to compete in this emerging segment, and it has gained traction in terms of downloads, which the company estimated at 365,000 in the fourth quarter of 2013.

SAP and other large players in analytics are trying not just to catch up with visual discovery players such as Tableau but to make it a game of leapfrog. Toward that end, the capabilities of Lumira demonstrated at the Insider conference included information security and governance, advanced analytics, integrated data preparation, storyboarding and infographics; the aim is to create a differentiated position for the tool. For me, the storyboarding and infographics capabilities are about catching up, but being able to govern and secure today's analytic platforms is a critical concern for organizations, and SAP means to capitalize on that need. A major analytic announcement at the conference focused on the integration of Lumira with the BusinessObjects platform. Lumira users now can create content and save it to the BusinessObjects server, mash up data and deliver the results through a secure common interface.

Beyond the integration of security and governance with discovery analytics, the leapfrog approach centers on advanced analytics. SAP's acquisition last year of KXEN and its initial integration with Lumira provide an advanced analytics tool that does not require a data scientist to use it. My coverage of KXEN prior to the acquisition revealed that the tool was user-friendly and broadly applicable, especially in marketing analytics. Used with Lumira, KXEN will ultimately provide front-end integration for in-database analytic approaches and for more advanced techniques. Currently, for data scientists to run advanced analytics on large data sets, SAP provides its own Predictive Analysis Library (PAL), which runs natively on SAP HANA and offers commonly used algorithms for clustering, classification and time-series analysis. Integration with the R language is available through a wrapper approach, but the system overhead is greater than with the native PAL approach on HANA.

SAP said the broader vision for Lumira and the BusinessObjects analytics platform is "collective intelligence," which it described as "a Wikipedia for business" that provides a bidirectional analytic and communication platform. To achieve this lofty goal, SAP will need to continue to put resources into HANA and facilitate the integration of underlying data sources. Our recently released research on big data analytics shows that being able to analyze data from all data sources (selected by 75% of participants) is the most prevalent definition of big data analytics. To this end, SAP announced the idea of an "in-memory fabric" that allows virtual data access to multiple underlying data sources, including big data platforms such as Hadoop. The key feature of this data federation approach is what the company calls smart data access (SDA). Instead of loading all data into memory, the virtualized system sets a proxy that points to where specific data is held. Using machine learning algorithms, it can determine how important information is based on the query patterns of users and load the most important data into memory. The approach will enable analysis of data on a massive scale since it utilizes both HANA and the Sybase IQ columnar database, which the company says was just certified as holding the world record for the largest data warehouse, at more than 12 petabytes. Others such as eBay and Teradata may beg to differ with the result based on another implementation, but nevertheless it is an impressive achievement.
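To make the federation idea more concrete, here is a minimal Python sketch of the general pattern, not SAP's actual smart data access implementation: a hypothetical federation layer keeps proxies to remote tables, tracks how often each is queried, and promotes only the hottest tables into an in-memory cache. All names (FederationLayer, the table names, the slot count) are invented for illustration.

```python
# Illustrative sketch of query-pattern-driven caching in a data federation
# layer. This is a simplified, hypothetical model of the concept, not SAP SDA.
from collections import Counter

class FederationLayer:
    def __init__(self, remote_tables, memory_slots=2):
        self.remote = dict(remote_tables)   # proxies: name -> fetch callable
        self.memory = {}                    # tables promoted into memory
        self.hits = Counter()               # query-pattern statistics
        self.memory_slots = memory_slots

    def query(self, table):
        self.hits[table] += 1
        if table in self.memory:
            return self.memory[table]       # served from memory
        data = self.remote[table]()         # fetched from the source system
        self._maybe_promote(table, data)
        return data

    def _maybe_promote(self, table, data):
        hottest = [t for t, _ in self.hits.most_common(self.memory_slots)]
        if table in hottest:
            self.memory[table] = data
            for cold in list(self.memory):
                if cold not in hottest:
                    del self.memory[cold]   # evict tables that cooled off

# Usage: two sources are queried often, one rarely; only the hot ones
# end up resident in memory.
fed = FederationLayer({
    "sales": lambda: "rows from HANA",
    "weblogs": lambda: "rows from Hadoop",
    "archive": lambda: "rows from Sybase IQ",
})
for t in ["sales", "weblogs", "sales", "archive", "sales", "weblogs"]:
    fed.query(t)
print(sorted(fed.memory))   # ['sales', 'weblogs']
```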

Another key announcement was SAP Business Warehouse (BW) 7.4, which now runs on top of HANA. This combination is likely to be popular because it enables migration of the underlying database without impacting business users. Such users store many of their KPIs and complex calculations in BW, and to uproot this system is untenable for many organizations. SAP’s ability to continue support for these users is therefore something of an imperative. The upgrade to 7.4 also provides advances in capability and usability. The ability to do complex calculations at the database level without impacting the application layer enables much faster time-to-value for SAP analytic applications. Relative to the in-memory fabric and SDA discussed above, BW users no longer need intimate knowledge of HANA SDA. The complete data model is now exposed to HANA as an information cube object, and HANA data can be reflected back into BW. To back it up, the company offered testimony from users. Representatives of Molson Coors said their new system took only a weekend to move into production (after six weeks of sandbox experiments and six weeks of development) and enables users to perform right-time financial reporting, rapid prototyping and customer sentiment analysis.

SAP's advancements and portfolio expansion are necessary for it to continue in a leadership position, but the inherent risk is confusion among its customer and prospect base. SAP published its last statement of direction for analytic dashboards about this time last year, and according to company executives, it will be updated fairly soon, though they would not specify when. The many tools in the portfolio include Web Intelligence, Crystal Reports, Explorer, Xcelsius and now Lumira. SAP and its partners position the portfolio as a toolbox in which each tool is meant to solve a different organizational need. There is overlap among them, however, and the inherent complexity of the toolbox approach may not resonate well with business users who desire simplicity and timeliness.

SAP customers and others considering SAP should carefully examine how well these tools match the skills in their organizations. We encourage companies to look at the different organizational roles as analytic personas and try to understand which constituencies are served by which parts of the SAP portfolio. For instance, one of the most critical personas going forward is the Designer role, since usability is the top priority for organizational software according to our next-generation business intelligence research. Yet this role may become more difficult to fill over time as trends such as mobility continue to add to the job requirements. SAP's recent upgrade of Design Studio to address emerging needs such as mobility and mobile device management (MDM) may force some organizations to rebuild dashboards and upscale their designer skill sets to include JavaScript and Cascading Style Sheets, but the ability to deliver multifunctional analytics across devices in a secure manner is becoming paramount. I note that SAP's capabilities in this regard helped it score third overall in our 2014 Mobile Business Intelligence Value Index. Other key personas are the knowledge worker and the analyst. Our data analytics research shows that while SQL and Excel skills are abundant in organizations, statistical and mathematical skills are less common. SAP's integration of KXEN into Lumira can help organizations develop these personas.

SAP is pursuing an expansive analytic strategy that includes not just traditional business intelligence but databases, discovery analytics and advanced analytics. Any company that has SAP installed, especially those with BusinessObjects or an SAP ERP system, should consider the broader analytic portfolio and how it can meet business goals. Even for new prospects, the portfolio can be compelling, and as the roadmap centered on Lumira develops, SAP may be able to take that big leap in the analytics market.

Regards,

Tony Cosentino

VP and Research Director

Ventana Research recently completed the most comprehensive evaluation of mobile business intelligence products and vendors available anywhere today. The evaluation includes 16 technology vendors' offerings on smartphones and tablets across Apple, Google Android, Microsoft Surface and RIM BlackBerry, assessed in seven key categories: usability, manageability, reliability, capability, adaptability, vendor validation, and TCO and ROI. The result is our Value Index for Mobile Business Intelligence in 2014. The analysis shows that the top supplier is MicroStrategy, which qualifies as a Hot vendor and is followed by 10 other Hot vendors: IBM, SAP, QlikTech, Information Builders, Yellowfin, Tableau Software, Roambi, SAS, Oracle and arcplan.

Our expertise, hands-on experience and the buyer insights from our benchmark research on next-generation business intelligence and on information optimization informed our product evaluations in this new Value Index. The research examined business intelligence on mobile technology to determine organizations' current and planned use and the capabilities required for successful deployment.

What we found was wide interest in mobile business intelligence and a desire to improve the use of information in 40 percent of organizations, though adoption is less pervasive than interest. Fewer than half of organizations currently access BI capabilities on mobile devices, but nearly three-quarters (71%) expect their mobile workforce to be able to access BI capabilities in the next 12 months. The research also shows strong executive support: Nearly half of executives said that mobility is very important to their BI processes.

Ease of access and use are important criteria in this Value Index because the largest percentage of organizations identified usability as an important factor in evaluations of mobile business intelligence applications. This emphasis appears in most of our research, and in this case it also may reflect users' experience with first-generation business intelligence on mobile devices; not all those applications were optimized for touch-screen interfaces and designed to support gestures. It is clear that today's mobile workforce requires the ability to access and analyze data simply and in a straightforward manner, using an intuitive interface.

The top five companies' products in our 2014 Mobile Business Intelligence Value Index all provide strong user experiences and functionality. MicroStrategy stood out across the board, finishing first in five categories, most notably in user experience, mobile application development and presentation of information. IBM, the second-place finisher, has made significant progress in mobile BI with six releases in the past year, adding support for Android, advanced security features and an extensible visualization library. SAP's steady support for mobile access to the SAP BusinessObjects platform and to SAP Lumira, along with its integrated mobile device management software, helped produce high scores in various categories and put it in third place. QlikTech's flexible offline deployment capabilities for the iPad and its high ranking in the assurance-related category of TCO and ROI secured it the fourth spot. With its latest release of WebFOCUS, which renders content directly in HTML5, and its Active Technologies and Mobile Faves, Information Builders delivers strong mobile capabilities and rounds out the top five. Other noteworthy innovations in mobile BI include Yellowfin's collaboration technology and Roambi's use of storyboarding in its Flow application.

Although there is some commonality in how vendors provide mobile access to data, there are many differences among their offerings that can make one a better fit than another for an organization’s particular needs. For example, companies that want their mobile workforce to be able to engage in root-cause discovery analysis may prefer tools from Tableau and QlikTech. For large companies looking for a custom application approach, MicroStrategy or Roambi may be good choices, while others looking for streamlined collaboration on mobile devices may prefer Yellowfin. Many companies may base the decision on mobile business intelligence on which vendor they currently have installed. Customers with large implementations from IBM, SAP or Information Builders will be reassured to find that these companies have made mobility a critical focus.

To learn more about this research and to download a free executive summary, please visit http://www.ventanaresearch.com/bivalueindex/.

Regards,

Tony Cosentino

Vice President and Research Director

A few months ago, I wrote an article on the four pillars of big data analytics. One of those pillars is discovery analytics, where visual analytics and data discovery combine to meet business and analyst needs. My colleague Mark Smith subsequently clarified the four types of discovery analytics: visual discovery, data discovery, information discovery and event discovery. Now I want to follow up with a discussion of three trends that our research has uncovered in this space. (To reference how I'm using these four discovery terms, please refer to Mark's post.)

The most prominent of these trends is that conversations about visual discovery are beginning to include data discovery, and vendors are developing and delivering such tool sets today. It is well known that while big data profiling and the ability to visualize data give us a broader capacity for understanding, there are limitations that can be addressed only through data mining and techniques such as clustering and anomaly detection. Such approaches are needed to overcome statistical interpretation challenges such as Simpson's paradox. In this context, we see a number of tools with different architectural approaches tackling this obstacle. For example, Information Builders, Datameer, BIRT Analytics and IBM's new SPSS Analytic Catalyst tool all incorporate user-driven data mining directly with visual analysis. That is, they combine data mining technology with visual discovery for enhanced capability and more usability. Our research on predictive analytics shows that integrating predictive analytics into the existing architecture is the most pressing challenge (for 55% of organizations). Integrating data mining directly into the visual discovery process is one way to overcome this challenge.
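As a concrete illustration of why visual profiling alone can mislead and how a data mining pass helps, here is a minimal Python sketch with invented data, not tied to any of the products above: the pooled data suggest one relationship while each underlying segment shows the opposite, which is Simpson's paradox in miniature, and a quick clustering step surfaces the hidden segments before anyone trusts the chart.

```python
# Illustrative sketch: clustering as a companion to visual discovery.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Two hidden segments: within each, higher price means lower volume,
# but the premium segment has a much higher volume baseline.
price_a = rng.uniform(1, 5, 200)     # budget segment
price_b = rng.uniform(8, 12, 200)    # premium segment
df = pd.DataFrame({
    "price": np.concatenate([price_a, price_b]),
    "volume": np.concatenate([60 - 4 * price_a + rng.normal(0, 3, 200),
                              160 - 4 * price_b + rng.normal(0, 3, 200)]),
})

# A naive profile of the pooled data suggests higher prices drive volume up.
print("Pooled correlation:", round(df["price"].corr(df["volume"]), 2))

# A clustering pass recovers the segments, and the sign flips in each one.
df["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(df[["price", "volume"]])
for seg, grp in df.groupby("segment"):
    print(f"Segment {seg} correlation:", round(grp["price"].corr(grp["volume"]), 2))
```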

The second trend is renewed focus on information discovery (i.e., search), especially among large enterprises with widely distributed systems as well as the big data vendors serving this market. IBM acquired Vivisimo and has incorporated the technology into its PureSystems and big data platform. Microsoft recently previewed its big data information discovery tool, Data Explorer. Oracle acquired Endeca and has made it a key component of its big data strategy. SAP added search to its latest Lumira platform. LucidWorks, an independent information discovery vendor that provides enterprise support for open source Lucene/Solr, offers search as an API and has seen significant adoption. There are different levels of search, from documents to social media data to machine data, but I won't drill into these here. Regardless of the type of search, in today's era of distributed computing, in which there's a need to explore a variety of data sources, information discovery is increasingly important.

The third trend in discovery analytics is a move to more embeddable system architectures. In parallel with the move to the cloud, architectures are becoming more service-oriented, and the interfaces are hardened in such a way that they can integrate more readily with other systems. For example, the visual discovery market was born on the client desktop with Qlik and Tableau, quickly moved to server-based apps and is now moving to the cloud. Embeddable tools such as D3, an open source JavaScript visualization library, allow vendors such as Datameer to include a rich library of visualizations in their products. Lucene/Solr represents a similar embedded technology in the information discovery space. The broad trend we're seeing is toward RESTful architectures that promote a looser coupling of applications and therefore require less custom integration. This move runs in parallel with the decline of Internet Explorer, the rise of new browsers and the ability to render content using JavaScript Object Notation (JSON). This trend suggests a future for discovery analysis embedded in application tools (including, but not limited to, business intelligence). The environment is still fragmented and in its early stages. Instead of one cloud, we have a lot of little clouds. For the vendor community, which is building more platform-oriented applications that can work in an embeddable manner, a tough question is whether to go after the on-premises market or the cloud market. I think each vendor will have to make its own decision on how to support customer needs within its own business model constraints.

Regards,

Tony Cosentino

VP and Research Director

Organizations today must manage and understand a flood of information that continues to increase in volume and turn it into competitive advantage through better decision-making. To do that, organizations need new tools and, more importantly, the analytical process knowledge to use them well. Our benchmark research into big data and business analytics found that skills and training are substantial obstacles to using big data (for 79%) and analytics (77%) in organizations.

But proficiency around technology and even statistical knowledge are not the only capabilities needed to optimize an organization’s use of analytics. A framework that complements the traditional analytical modeling process helps ensure that analytics are used correctly and will deliver the best results. I propose the following five principles that are concerned less with technology than with people and processes. (For more detail on the final two, see my earlier perspective on business analytics.)

Ask the right questions. Without a process for getting to the right question, the one that is asked often is the wrong one, yielding results that cannot be used as intended. Getting to the right question is a matter of defining goals and terms; when this is done, the “noise” of differing meanings is reduced and people can work together efficiently. Companies talk about strategic alignment, brand loyalty, big data and analytics, to name a few, yet these terms can mean different things to different people. Take time to discuss what people really want to know; describing something in detail ensures that everyone is on the same page. Strategic listening is a critical skill, and done right it will enable the analyst to identify, craft and focus the questions that the organization needs answered through the analytic process.

Take a Bayesian perspective. Bayesian analysis centers on posterior probability: it starts with a prior assumption and updates that assumption as new information arrives to produce a posterior probability. In a practical sense, it's about updating a hypothesis when given new information; it's about taking all available information and seeing where it is convergent. Of course, the more you know about the category you're dealing with, the easier it is to separate the wheat from the chaff in terms of valuable information. Category knowledge allows you to look at the data from a different perspective and add complex existing knowledge. This, in and of itself, is a Bayesian approach, and it allows the analyst to iteratively take the investigation in the right direction. Bayesian analysis has not only had a great impact on statistics and market insights in recent years; it has also changed how we view important historical events. For those interested in how the Bayesian philosophy is taking hold in many different disciplines, there is an interesting book entitled The Theory That Would Not Die.
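As a concrete illustration of that updating loop, here is a minimal Python sketch using the beta-binomial model; the conversion-rate scenario and the numbers are invented, but the mechanics show how each batch of evidence simply shifts the belief.

```python
# Minimal sketch of Bayesian updating: start with a prior belief about a
# conversion rate, then revise it as each new batch of evidence arrives.
# The Beta distribution is conjugate to the binomial, so the update is
# just addition of observed successes and failures.
from dataclasses import dataclass

@dataclass
class BetaBelief:
    alpha: float = 1.0  # prior "successes" + 1
    beta: float = 1.0   # prior "failures" + 1

    def update(self, successes: int, failures: int) -> "BetaBelief":
        return BetaBelief(self.alpha + successes, self.beta + failures)

    @property
    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)

belief = BetaBelief()  # flat prior: no strong assumption going in
for week, (won, lost) in enumerate([(12, 88), (30, 70), (25, 75)], start=1):
    belief = belief.update(won, lost)
    print(f"After week {week}: estimated rate = {belief.mean:.3f}")
```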

Don't try to prove what you already know. Let the data guide the analysis rather than allowing predetermined beliefs to do so. Physicist Enrico Fermi pointed out that measurement is the reduction of uncertainty. Analysts start with a hypothesis and try to disprove it rather than prove it. From there, iteration is needed to come as close to the truth as possible. If we start with a gut feel and try to prove that gut feel, we are taking the wrong approach. The point is that when an analysis sets out to prove what we already believe to be true, the results are rarely surprising and the analysis is likely to add nothing new.
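To show what "try to disprove it" looks like in practice, here is a small illustrative Python sketch; the conversion data are simulated and the scenario is hypothetical. The claim is framed as a null hypothesis, and the test asks whether the observed difference is strong enough to reject it rather than hunting for confirmation of a gut feel.

```python
# Illustrative sketch: frame the claim as a null hypothesis and attempt to
# reject it. Here the "gut feel" is that a redesigned page converts better;
# the test asks whether the observed difference could plausibly be noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.binomial(1, 0.10, 1000)    # simulated conversions, old page
variant = rng.binomial(1, 0.11, 1000)    # simulated conversions, new page

t_stat, p_value = stats.ttest_ind(variant, control)
print(f"p-value = {p_value:.3f}")
if p_value < 0.05:
    print("Evidence against the null: the difference is unlikely to be noise.")
else:
    print("Cannot reject the null: the data do not yet support the gut feel.")
```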

Think in terms of "so what." Moving beyond the "what" (i.e., measurement) to the "so what" (i.e., insights) should be a goal of any analysis, yet many are still turning out analyses that do nothing more than state the facts. Maybe 54 percent of people in a study prefer white houses, but why does anyone care that 54 percent of people prefer white houses? Analyses must move beyond findings to answer critical business questions and provide informed insights, implications and even full recommendations.

Be sure to address the "now what." The analytics professional should make sure that the findings, implications and recommendations of the analysis are heard. This is the final step in the analytic process, the "now what" – the actual business planning and implementation decisions that are driven by the analytic insights. If those insights do not lead to decision-making or action, then the effort has no value. There are a number of things the analyst can do to ensure that the information is heard. A compelling story line that incorporates animation and dynamic presentation is a good start. Depending on the size of the initiative, professional videography, learning systems and change management tools may also be involved.

Just because our business technology innovation research finds analytics to be the top-ranked priority, rated first by 39 percent of organizations, does not mean that adopting it will bring immediate success. To implement a successful framework such as the one described above, organizations should build this or a similar approach into their training programs and analytical processes. The benefits will be wide-ranging, including more targeted analysis, greater analytical depth and analytical initiatives that have a real impact on decision-making in the organization.

Regards,

Tony Cosentino

VP and Research Director

Last week, IBM brought industry analysts to its famed Almaden Research Center, where the company outlined its big data analytics strategy and introduced a number of new innovations. Big data is no new topic to IBM, which has for decades helped organizations store and use data. But technology has changed over those decades, and IBM is working hard to ensure it is part of the future and not just the past. Our latest business technology innovation research into big data technology finds that retaining and analyzing more data is the first-ranked priority in 29 percent of organizations. From both an IT and a business perspective, big data is critical to IBM’s future success.

On the strategy side, there was much discussion at the event around use cases and the different patterns of deployment for big data analytics. Inhi Cho Suh, vice president of strategy, outlined five compelling use cases for big data analytics:

  1. Discovery and visualization. These types of exploratory analytics in a federated environment are a big part of big data analytics, since they can unlock patterns that can be useful in areas as diverse as determining a root cause of an airline issue or understanding relationships among buyers. IBM is working hard to ensure that products such as IBM Cognos Insight can evolve to support a new generation of visual discovery for big data.
  2. 360-degree view of the customer. By bringing together data sources and applying analytics to increase such things as customer loyalty and share-of-wallet, companies can gain more revenue and market share with fewer resources. IBM needs to ensure it can actually support a broad array of information about customers – not just transactional or social media data but also voice as well as mobile interactions that also use text.
  3. Security and intelligence. This includes fraud detection and real-time cyber security, where companies leverage big data to predict anomalies and contain risk. IBM has been enhancing its ability to process real-time streams and transactions across any network. This is an important area for the company as it works to drive competitive advantage.
  4. Operational analysis. This is the ability to leverage networks of instrumented data sources to enable proactive monitoring through baseline analysis and real-time feedback mechanisms. The need for better operational analytics continues to increase. Our latest research on operational intelligence finds that organizations that use dedicated tools to handle this need will be more satisfied and gain better outcomes than those that do not.
  5. Data warehouse augmentation.  Big data stores can replace some traditional data stores and archival systems to allow larger sets of data to be analyzed, providing better information and leading to more precise decision-making capabilities. It should be no surprise that IBM has customers with some of the larger data warehouse deployments. The company can help customers evaluate their technology and improve or replace existing investments.

Prior to Inhi taking the stage, Dave Laverty, vice president of marketing, went through the new technologies being introduced. The first announcement was the BLU Accelerator – dynamic in-memory technology that promises to improve both performance and manageability on DB2 10.5. In tests, IBM says it achieved better than 10,000x improvement in query performance. The secret sauce lies in the ability to do column-store data retrieval, maximize CPU processing and skip data that is not needed for the particular analysis at hand. The benefits to the user are much faster performance across very large data sets and a reduction in manual SQL optimization. Our latest research into business technology innovation finds that in-memory technology is the technology most planned for use with big data in the next two years (22%), ahead of RDBMS (10%), data warehouse appliances (19%), specialized databases (19%) and Hadoop (20%).
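To illustrate the data-skipping idea in general terms, here is a simplified Python sketch; it is a generic zone-map-style model, not IBM's actual BLU implementation. The store keeps min/max metadata per block of a column, so a range filter can prove that whole blocks cannot match and never read them.

```python
# Simplified sketch of data skipping in a column store: per-block min/max
# synopses let a filter skip blocks that provably contain no matches.
import numpy as np

BLOCK = 10_000
values = np.sort(np.random.default_rng(1).integers(0, 1_000_000, 1_000_000))

# Build a per-block synopsis: (min, max) for each block of the column.
blocks = values.reshape(-1, BLOCK)
synopsis = [(b.min(), b.max()) for b in blocks]

def count_matches(lo: int, hi: int) -> int:
    """Count values in [lo, hi], scanning only blocks that might qualify."""
    scanned = total = 0
    for block, (bmin, bmax) in zip(blocks, synopsis):
        if bmax < lo or bmin > hi:
            continue                       # skip: metadata proves no match
        scanned += 1
        total += int(((block >= lo) & (block <= hi)).sum())
    print(f"Scanned {scanned} of {len(blocks)} blocks")
    return total

print(count_matches(250_000, 260_000))
```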

An intriguing comment from one of IBM's customers was "What is bad SQL in a world with BLU?" An important extension of that question might be "What is the future role for database administrators, given new advancements around databases, and how do we leverage that skill set to fill the big data analytics gap?" According to our business technology innovation research, staffing (79%) and training (77%) are the two biggest challenges to implementing big data analytics.

One of IBM's answers to the question of the skills gap comes in the form of BigSQL. A newly announced feature of InfoSphere BigInsights 2.1, BigSQL layers on top of BigInsights to provide accessibility through industry-standard SQL and SQL-based applications. Providing access to Hadoop has been a sticking point for organizations, since they have traditionally needed to write procedural code to access Hadoop data. BigSQL is similar in function to Greenplum's Pivotal, Teradata Aster and Cloudera's Impala, all of which use SQL to mine data out of Hadoop. These products aim to provide access for SQL-trained users and for SQL-based applications, which represent the vast majority of BI tools currently deployed in industry. The challenge for IBM, with a product portfolio that includes BigInsights and Cognos Insight, is to offer a clear message about which products meet which analytic needs for which types of business and IT professionals. IBM could also offer further clarity on when to use big data analytics software partners such as Datameer, which was on an industry panel at the event and is part of the IBM global educational tour that I have also analyzed.

Another IBM announcement was the PureData System for Hadoop. This appliance approach to Hadoop provides a turnkey solution that can be up and running in a matter of hours. As you would expect in an appliance approach, it allows for consistent administration, workflow, provisioning and security with BigInsights. It also allows access to Hadoop through BigSheets, which presents summary information about the unstructured data in Hadoop, and which was already part of the BigInsights platform. Phil Francisco, vice president of big data product management and strategy, pointed out use cases around archival capabilities and the ability to do cold storage analysis as well as the ability to bring many unstructured sources together. The PureData System for Hadoop, due out in the second half of the year, adds a third version to the BigInsights lineup, which also includes the free web-based version and the Enterprise version. Expanding to support Hadoop with its appliances is critical as more organizations look to exploit the processing power of Hadoop technology for their database and information management needs.

Other announcements included new versions of InfoSphere Streams and Informix TimeSeries for reporting and analytics using smart meter and sensor technology. They help with real-time analytics and big data, depending on the business and architectural needs of an organization. The integration of database and streaming analytics is a key area where IBM differentiates itself in the market.

Late in the day, Les Rechan, general manager for business analytics, told the crowd that he and Bob Picciano, general manager for information management, had recently promised the company $20 billion in revenue. That statement is important because in the age of big data, information management and analytics must be considered together, and the company needs a strong relationship between these two leaders to meet this ambitious objective. In an interview, Rechan told me that the teams realize this and are working hand in glove across strategy, product development and marketing. The camaraderie between the two was clear during the event, and it bodes well for the organization. Ultimately, IBM will need to articulate why it should be considered for big data, as our technology innovation research finds organizations today are less worried about validation of a vendor from a size perspective (23%) than about usability of the technology (64%).

IBM’s big data platform seems to be less a specific offer and more of an ethos of how to think about big data and big data analytics in a common-sense way. The focus on five well-thought-out use cases provides customers a frame for thinking through the benefits of big data analytics and gives them a head start with their business cases. Given the confusion in the market around big data, that common-sense approach serves the market well, and it is very much aligned with our own philosophy of focusing on what we call the business-oriented Ws rather than the technology-oriented Vs.

Big data analytics, and in particular predictive analytics, is complex and difficult to integrate into current architectures. Our benchmark research into predictive analytics shows that architectural integration is the biggest inhibitor, cited by 55 percent of companies, which should be a message IBM takes to heart about integrating its predictive analytics tools with its big data technology options. Predictive analytics is the most important capability (49%) for business analytics, according to our technology innovation research, and IBM needs to show more solutions that integrate predictive analytics with big data.

H.L. Mencken once said, “For every complex problem there is an answer that is clear, simple and wrong.” Big data analytics is a complex problem, and the market is still early. The latent benefit of IBM’s big data analytics strategy is that it allows IBM to continue to innovate and deliver without playing all of its chips at one time. In today’s environment, many supplier companies don’t have the same luxury.

As I pointed out in my blog post on the four pillars of big data analytics, our research and clients are moving toward addressing big data and analytics in a more holistic and integrated manner. The focus is shifting less to how organizations store or process information and more to how they use it. Some may argue that IBM's cadence reflects the company's size and is actually a competitive disadvantage, but I would argue that size and innovation leadership are not mutually exclusive. As companies grapple with the onslaught of big data and analytics, no one should underestimate IBM's outcomes-based and services-driven approach, but in order to succeed IBM also needs to ensure it can meet the needs of organizations at a price they can afford.

Regards,

Tony Cosentino

VP and Research Director

Big data analytics is being offered as the key to addressing a wide array of management and operational needs across business and IT. But the label "big data analytics" is used in a variety of ways, confusing people about its usefulness and value and about how best to implement it to drive business value. The uncertainty this causes poses a challenge for organizations that want to take advantage of big data in order to gain competitive advantage, comply with regulations, manage risk and improve profitability.

Recently, I discussed a high-level framework for thinking about big data analytics that aligns with former Census Director Robert Groves’ ideas of designed data on the one hand and organic data on the other. This second article completes that picture by looking at four specific areas that constitute the practical aspects of big data analytics – topics that must be brought into any holistic discussion of big data analytics strategy. Today, these often represent point-oriented approaches, but architectures are now coming to market that promise more unified solutions.

Big Data and Information Optimization: the intersection of big data analytics and traditional approaches to analytics. Analytics performed by database professionals often differ significantly from analytics delivered by line-of-business staffers who work in more flat-file-oriented environments. Today, advancements in in-memory systems, in-database analytics and workload-specific appliances provide scalable architectures that bring processing to the data source and allow organizations to push analytics out to a broader audience, but how to bridge the divide between the two kinds of analytics is still a key question. Given the relative immaturity of new technologies and the dominance of relational databases for information delivery, it is critical to examine how all analytical assets will interact with core database systems. As we move to operationalizing analytics on an industrial scale, current advanced analytical approaches break down because they require pulling data into a separate analytic environment and do not leverage advances in parallel computing. Furthermore, organizations need to determine how they can apply existing skill sets and analytical access paradigms, such as business intelligence tools, SQL, spreadsheets and visual analysis, to big data analytics. Our recent big data benchmark research shows that the skills gap is the biggest issue facing analytics initiatives, with staffing and training cited as obstacles in over three-quarters of organizations.

Visual analytics and data discovery: Visualizing data is a hot topic, especially in big data analytics. Much of big data analysis is about finding patterns in data and visualizing them so that people can tell a story and give context to large and diverse sets of data. Exploratory analytics allows us to develop and investigate hypotheses, reduce data, do root-cause analysis and suggest modeling approaches for our predictive analytics. Until now the focus of these tools has been on descriptive statistics related to SQL or flat file environments, but now visual analytics vendors are bringing predictive capabilities into the market to drive usability, especially at the business user level. This is a difficult challenge because the inherent simplicity of these descriptive visual tools clashes with the inherent complexity that defines predictive analytics. In addition, companies are looking to apply visualization to the output of predictive models as well. Visual discovery players are opening up their APIs in order to directly export predictive model output.

New tools and techniques in visualization along with the proliferation of in-memory systems allow companies the means of sorting through and making sense of big data, but exactly how these tools work, the types of visualizations that are important to big data analytics and how they integrate into our current big data analytics architecture are still key questions, as is the issue of how search-based data discovery approaches fit into the architectural landscape.

Predictive analytics: Visual exploration of data cannot surface all patterns, especially the most complex ones. To make sense of enormous data sets, data mining and statistical techniques can find patterns, relationships and anomalies in the data and use them to predict future outcomes for individual cases. Companies need to investigate the use of advanced analytic approaches and algorithmic methods that can transform and analyze organic data for uses such as predicting security threats, uncovering fraud or targeting product offers to particular customers.

Commodity models (a.k.a. good-enough models) are allowing business users to drive the modeling process. How these models can be built and consumed at the front line of the organization with only basic oversight by a statistician or data scientist is a key area of focus as organizations endeavor to bring analytics into the fabric of the organization. The increased load on back-end systems is another key consideration if the modeling is a dynamic, software-driven approach. How these models are managed and tracked is yet another consideration. Our research on predictive analytics shows that companies that update their models more frequently have much higher satisfaction ratings than those that update less frequently. The research further shows that in over half of organizations, competitive advantage and revenue growth are the primary reasons predictive analytics is deployed.
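To make the commodity-model idea tangible, here is an illustrative Python sketch, not any vendor's implementation: fit an off-the-shelf model with default settings on hypothetical data and check that it clearly beats an uninformed baseline, which is exactly the kind of "good enough" lift a business user could act on with only basic statistical oversight.

```python
# Illustrative sketch of a "commodity" (good-enough) model: default
# hyperparameters, automated cross-validation, compared against a
# trivial baseline. The data here are synthetic.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5).mean()
commodity = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()

print(f"Uninformed baseline accuracy:    {baseline:.2f}")
print(f"Default-settings model accuracy: {commodity:.2f}")
# Good enough to act on if the lift is material; a statistician can refine later.
```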

Right-time and real-time analytics: It’s important to investigate the intersection of big data analytics with right-time and real-time systems and learn how participants are using big data analytics in production on an industrial scale. This usage guides the decisions that we make today around how to begin the task of big data analytics. Another choice organizations must make is whether to capture and store all of their data and analyze it on the back end, attempt to process it on the fly, or do both. In this context, event processing and decision management technologies represent a big part of big data analytics since they can help examine data streams for value and deliver information to the front lines of the organization immediately. How traditionally batch-oriented big data technologies such as Hadoop fit into the broader picture of right-time consumption still needs to be answered as well. Ultimately, as happens with many aspects of big data analytics, the discussion will need to center on the use case and how to address the time to value (TTV) equation.
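As a minimal illustration of the event-processing idea described above, here is a Python sketch with a simulated sensor feed and invented thresholds: examine values as they arrive and flag the moment one drifts outside its recent normal range, rather than waiting for a batch job to find it later.

```python
# Minimal sketch of stream monitoring: rolling statistics over a window,
# with an alert raised as soon as a value deviates sharply from recent history.
from collections import deque
from statistics import mean, stdev
import random

def monitor(stream, window=30, threshold=3.0):
    """Yield (value, is_anomaly) using a rolling mean/stdev over the window."""
    recent = deque(maxlen=window)
    for value in stream:
        if len(recent) >= window:
            mu, sigma = mean(recent), stdev(recent)
            yield value, sigma > 0 and abs(value - mu) > threshold * sigma
        else:
            yield value, False          # not enough history yet
        recent.append(value)

# Usage: a simulated sensor feed with one injected spike.
random.seed(3)
feed = [random.gauss(100, 2) for _ in range(200)]
feed[150] = 130  # anomaly to be caught in right time
alerts = [i for i, (_, flag) in enumerate(monitor(feed)) if flag]
print("Alerts at positions:", alerts)
```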

Organizations embarking on a big data strategy must not fail to consider the four areas above. Furthermore, their discussions cannot cover just the technological approaches, but must include people, processes and the entire information landscape. Often, this endeavor requires a fundamental rethinking of organizational processes and questioning of the status quo.  Only then can companies see the forest for the trees.

Regards,

Tony Cosentino
VP and Research Director

The challenge with discussing big data analytics is in cutting through the ambiguity that surrounds the term. People often focus on the 3 Vs of big data – volume, variety and velocity – which provides a good lens for big data technology, but only gets us part of the way to understanding big data analytics, and provides even less guidance on how to take advantage of big data analytics to unlock business value.

Part of the challenge of defining big data analytics is a lack of clarity around the big data analytics value chain – from data sources, to analytic scalability, to analytic processes and access methods. Our recent research on big data finds that many capabilities are still not available, from predictive analytics (41%) to visualization (37%). Moreover, organizations are unclear on how best to initiate changes in the way they approach data analysis to take advantage of big data and what processes and technologies they ought to be using. The growth in use of appliances, Hadoop and in-memory databases and the growing footprints of RDBMSes all add up to pressure to have more intelligent analytics, but the most direct and cost-effective path from here to there is unclear. What is certain is that as business analytics and big data increasingly merge, the potential for increased value is building expectations.

To understand the organizational chasm that exists with respect to big data analytics, it's important to understand two foundational analytic approaches that are used in organizations today. Former Census Bureau Director Robert Groves' ideas around designed data and organic data give us a great jumping-off point for this discussion, especially as it relates to big data analytics.

In Groves' estimation, the 20th century was about designed data, or what might be considered hypothesis-driven data. With designed data we engage in analytics by establishing a hypothesis and collecting data to prove or disprove it. Designed data is at the heart of confirmatory analytics, where we go out and collect data that are relevant to the assumptions we have already made. Designed data is often considered the domain of the statistician, but it is also at the heart of structured databases, since we assume that all of our data can fit into columns and rows and be modeled in a relational manner.

In contrast to the designed data approach of the 20th century, the 21st century is about organic data. Organic data is data that is not limited by a specific frame of reference that we apply to it, and because of this it grows without limits and without any structure other than that provided by randomness and probability. Organic data represents all data in the world, but for pragmatic reasons we may think of it as all the data we are able to instrument. RFID, GPS data, sensor data, sentiment data and various types of machine data are all organic data sources that may be characterized by context or by attributes such as data sparsity (also known as low-density data). Much like the interpretation of silence in a conversation, analyzing big data is as much about interpreting what exists between the lines as it is about what we can put on the line itself.

These two types of data and the analytics associated with them reveal the chasm that exists within organizations and shed light on the skills gap that our predictive analytics benchmark research shows to be the primary challenge for analytics in organizations today. This research finds inadequate support in many areas, including product training (26%) and how to apply analytics to business problems (23%).

On one side of the chasm are the business groups and the analysts who are aligned with Groves' idea of designed data. These groups may encompass domain experts in areas such as finance or marketing, advanced Excel users, and even Ph.D.-level statisticians. These analysts serve organizational decision-makers and are tied closely to actionable insights that lead to specific business outcomes. The primary way they get work done is through a flat-file environment, as was outlined in some detail last week by my colleague Mark Smith. In this environment, Excel is often the lowest common denominator.

On the other side of the chasm are the IT and database professionals, where a different analytical culture and mindset exist. The primary challenge for this group is dealing with the three Vs and simply organizing data into a legitimate enterprise data set. This group is often more comfortable with large data sets and machine learning approaches that are the hallmark of the organic data of the 21st century. Their analytical environment is different from that of their business counterparts; rather than Excel, it is SQL that is often the lowest common denominator.

As I wrote in a recent blog post, database professionals and business analytics practitioners have long lived in parallel universes. In technology, practitioners deal with tables, joins and the ETL process. In business analysis, practitioners deal with datasets, merges and data preparation. When you think about it, these are the same things. The subtle difference is that database professionals have had a data mining mindset, or, as Groves calls it, an organic data mindset, while the business analyst has had a designed data or statistics-driven mindset. The bigger differences revolve around the cultural mindset and the tools used to carry out the analytical objectives. These differences represent the current conundrum for organizations.
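To make that equivalence concrete, here is a small Python sketch with hypothetical tables and values: the same operation expressed once as the business analyst's merge and once as the database professional's join.

```python
# The analyst's "merge" and the DBA's "join" are the same operation.
import sqlite3
import pandas as pd

customers = pd.DataFrame({"customer_id": [1, 2, 3],
                          "region": ["East", "West", "East"]})
orders = pd.DataFrame({"customer_id": [1, 1, 2, 3],
                       "amount": [120, 80, 200, 50]})

# Business analysis vocabulary: merge two datasets.
merged = customers.merge(orders, on="customer_id")

# Database vocabulary: join two tables with SQL.
con = sqlite3.connect(":memory:")
customers.to_sql("customers", con, index=False)
orders.to_sql("orders", con, index=False)
joined = pd.read_sql(
    "SELECT c.customer_id, c.region, o.amount "
    "FROM customers c JOIN orders o ON c.customer_id = o.customer_id", con)

# Same rows come back either way; only the vocabulary differs.
key = ["customer_id", "amount"]
print(merged.sort_values(key).reset_index(drop=True)
      .equals(joined.sort_values(key).reset_index(drop=True)))  # True
```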

In a world of big data analytics, these two sides of the chasm are being pushed together in a shotgun wedding because the marriage of these groups is how competitive advantage is achieved. Both groups have critical contributions to make, but need to figure out how to work together before they can truly realize the benefits of big data analytics. The firms that understand that the merging of these different analytical cultures is the primary challenge facing the analytics organization, and that develop approaches that deal with this challenge, will take the lead in big data analytics. We already see this as a primary focus area for leading professional services organizations.

In my next analyst perspective on big data I will lay out some pragmatic approaches companies are using to address this big data analytics chasm; these also represent the focus of the benchmark research we’re currently designing to understand organizational best practices in big data analytics.

Regards,

Tony Cosentino

VP and Research Director

SAP just released solid preliminary quarterly and annual revenue numbers, which in many ways can be attributed to a strong strategic vision around the HANA in-memory platform and strong execution throughout the organization. Akin to flying an airplane while simultaneously fixing it, SAP’s bold move to HANA may at some point see the company continuing to fly when other companies are forced to ground parts of their fleets.

Stepping into 2012, the HANA strategy was still nascent, and SAP provided incentives both to customers and the channel to bring them along. At that point SAP also promoted John Schweitzer to senior vice president and general manager for business analytics.  Schweitzer, an industry veteran who spent his career with Hyperion and Oracle before moving to SAP in 2010, gave a compelling analytics talk at SAPPHIRE NOW in early 2012 and was at the helm for the significant product launches during the course of the year.

One of these products is SAP's Visual Intelligence, which was released in the spring of 2012. The product takes Business Objects Business Explorer and moves it to the desktop so that analysts can work without the need to involve IT. Because it runs on HANA, it allows real-time visual exploration of data akin to what we have been seeing with companies such as Tableau, which recently passed the $100 million mark in revenue. As with many of SAP's new offerings, the advantage of running on top of HANA is that it allows visual exploration and analysis of very large data sets. The same size data sets may quickly overwhelm certain "in-memory" competitor systems. To generate buzz and help develop "Visi," as Visual Intelligence is also called, the company ran a Data Geek Challenge in which SAP encouraged users to develop big data visualizations on top of HANA.

Tools such as SAP's Visual Intelligence begin to address the analytics usability challenge that I recently wrote about in The Brave New World of Business Intelligence, which focused on our recent research on next-generation business intelligence. One key takeaway from that research is that usability expectations for business intelligence are being set at the consumer level, but our information environments and processes in the enterprise are too fragmented to achieve the same level of usability, resulting in relatively high dissatisfaction with next-generation business tools. Despite that dissatisfaction, the high switching costs for enterprise BI mean that customers are essentially captive, and this gives incumbents time to adapt their approaches. There is no doubt that SAP looks to capitalize on this opportunity with agile development approaches and frequent iterations of the Visual Intelligence software.

Another key release in 2012 was SAP Predictive Analytics, which became generally available late in the year after SAP TechEd in Madrid. With this release, SAP moves away from its partnership with IBM SPSS. The move makes sense given that sticking with SPSS would have resulted in a critical dependency for SAP. According to our benchmark research on predictive analytics, 68% of organizations see predictive analytics as a source of competitive advantage. Once again, HANA may prove to be a strong differentiator for SAP in that the company will be able to leverage the in-memory system to visualize data and run predictive analytics on very large data sets and across different workloads. Furthermore, the SAP Predictive Analytics offering inherits data acquisition and data manipulation functionality from SAP Visual Intelligence. However, it cannot be installed on the same machine as Visi, according to a blog on the SAP Community Network.

SAP will need to work hard to build out its predictive analytics capabilities to compete with the likes of SAS, IBM SPSS and other providers that have years of custom-developed vertical and line-of-business solution sets. Beyond the customized analytical assets, IBM and SAS will likely promote the idea of commodity models as such non-optimized modeling approaches become more important. Commodity models are a breed of "good enough" models that allow for prediction and data reduction that is a step-function better than a purely random or uninformed decision. Where deep analytical skill sets are not available, sophisticated software can run through the data and match it to an appropriate model.

In 2012, SAP also continued to develop targeted analytical solutions that bundle SAP BI and the Sybase IQ server, a columnar database. Column-oriented databases such as Sybase IQ, Vertica, ParAccel and Infobright have gained momentum over the last few years by providing a platform that organizes data in a way that allows for much easier analytical access and data compression. Instead of writing to disk in a row-oriented fashion, columnar approaches write to disk in a column-oriented fashion, allowing for faster analytical cycles and reduced time-to-value.
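For readers who want intuition on why the column orientation favors analytics, here is an illustrative Python sketch, independent of Sybase IQ or any other vendor, with invented order data: an aggregate over one attribute touches a single contiguous column instead of walking every field of every row, and the homogeneous columns also compress far better.

```python
# Row layout vs. column layout for the same records (toy illustration).
rows = [  # row-oriented: each record stored together
    {"order_id": i, "region": "East" if i % 2 else "West", "amount": i * 1.5}
    for i in range(100_000)
]

# Row store: the scan visits every record and picks out one field.
total_row_store = sum(r["amount"] for r in rows)

# Column store: each attribute lives in its own array, so the same
# aggregate reads one dense column; long runs of repeated region values
# and numeric sequences are also much easier to compress.
columns = {
    "order_id": [r["order_id"] for r in rows],
    "region":   [r["region"] for r in rows],
    "amount":   [r["amount"] for r in rows],
}
total_column_store = sum(columns["amount"])

print(total_row_store == total_column_store)  # same answer, different layout
```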

The competitive advantage of columnar databases, however, may be mitigated as in-memory approaches gain favor in the marketplace. For this reason, continued development on the Sybase IQ platform may be an intermediate step until the HANA analytics portfolio is built out, after which HANA will likely cannibalize parts of SAP's own stack. SAP's dual approach to big data analytics, with both in-memory and columnar technologies, provides good visibility into the classic Innovator's Dilemma that faces many technology suppliers today and into how SAP is dealing with it. It should be noted, however, that SAP is also working on an integrated portfolio approach and that Sybase IQ may actually be a better fit as datasets move to petabyte scale.

Another aspect of that same Innovator's Dilemma is a fragmented choice environment as new technologies develop. Our research shows that the market is undecided on how it will roll out critical next-generation business intelligence capabilities such as collaboration. Just under two-fifths of participants in our next-generation business intelligence study (38%) prefer business intelligence applications as the primary access method for collaborative BI, but 36 percent prefer access through office productivity tools, and 34 percent prefer access through the applications themselves. (Not surprisingly, IT leans more toward providing tools within the already existing landscape, while business users are more likely to want this capability within the context of the application.) This fragmented choice scenario carries over to analytics as well, where spreadsheets are still the dominant analytical tool in most organizations. Here at Ventana Research, we are fielding more inquiries on application-embedded analytics and how these will play out in the organizational landscape. I anticipate this debate will continue through 2013, with different parts of the market providing solid arguments for each of the three camps. Since HANA uniquely provides both transactional processing and analytic processing in one engine, it will be interesting to look closer at the HANA and BusinessObjects roadmap in 2013 to see how they are positioned with respect to this debate. Furthermore, as I discuss in my 2013 research agenda blog post, disseminating insights within the organization is a big part of moving from insights to action, and business intelligence is still the primary vehicle for doing so. For this reason, the natural path for many organizations may indeed be through their business intelligence systems.

Clearly differentiating its strategic position, SAP looks to continue innovating across its entire portfolio, including both applications and analytics, based on the HANA database. In the most recent quarter, SAP took a big step forward in this regard by porting its entire business suite to run on HANA, as my colleague Robert Kugel discussed in a recent blog.

While some critical battles have yet to play out, one thing remains clear: SAP is one of the dominant players in business intelligence today. Our Value Index on Business Intelligence assesses SAP as a Hot Vendor and ranks it at the top. SAP aims to stay there by continuing to innovate around HANA and giving its customers and prospects a seamless transition to next-generation analytics and technologies.

Regards,

Tony Cosentino
VP and Research Director

At Oracle OpenWorld this week I focused on what the company is doing in business analytics, and in particular on what it is doing with its Exalytics In-Memory Machine. The Exalytics appliance is an impressive in-memory hardware approach to putting right-time analytics in the hands of end users by providing a full range of integrated analytic and visualization capabilities. Exalytics fits into the broader analytics portfolio by supporting the Oracle BI Foundational Suite, including OBIEE, Oracle’s formidable portfolio of business analytics and business performance applications, as well as interactive visualization and discovery capabilities.

Exalytics connects to an Exadata machine over a high-throughput InfiniBand link. The system leverages Oracle’s TimesTen In-Memory Database with columnar compression, originally built for online transaction processing in the telecommunications industry. Oracle’s Summary Advisor software provides a heuristic approach to making sure the right data is in memory at the right time, though customers at the conference mentioned that in their initial configurations they did not need to turn on Summary Advisor because they were able to load their entire datasets into memory. Oracle’s Essbase OLAP server is also integrated, but since it requires a MOLAP approach and computationally intensive pre-aggregation, one might question whether applications such as Oracle’s Hyperion planning and forecasting tools will truly provide analysis at the advertised “speed of thought.” To address this point, Oracle cites examples in the consumer packaged goods industry, where it has already demonstrated the ability to reduce complex planning scenario cycle times from 24 hours to four. Furthermore, I discussed with Paul Rodwick, Oracle’s vice president of product management for business intelligence, the potential for doing in-line planning, where integrated business planning and complex what-if modeling can be done on the fly. For more on this topic, please check out my colleague’s benchmark research on the fast, clean close.
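Purely as a conceptual analogy (Oracle has not published Summary Advisor’s internals here, so this is not its actual logic), the heuristic idea of keeping the most frequently requested summaries in memory can be sketched in a few lines of Python:

    # Conceptual sketch only: a frequency-based heuristic for deciding which
    # summaries to keep in memory. An analogy, not Summary Advisor itself.
    from collections import Counter

    class HotAggregateCache:
        def __init__(self, capacity=2):
            self.capacity = capacity      # how many summaries fit in memory
            self.hits = Counter()         # how often each summary is requested
            self.in_memory = {}           # summaries currently cached

        def request(self, name, compute):
            self.hits[name] += 1
            if name in self.in_memory:
                return self.in_memory[name]   # served from memory
            result = compute()                # fall back to the slower path
            # Keep only the most frequently requested summaries in memory.
            hottest = {n for n, _ in self.hits.most_common(self.capacity)}
            if name in hottest:
                self.in_memory = {n: v for n, v in self.in_memory.items()
                                  if n in hottest}
                self.in_memory[name] = result
            return result

    cache = HotAggregateCache(capacity=2)
    cache.request("sales_by_region", lambda: {"West": 400, "East": 200})
    cache.request("sales_by_region", lambda: {"West": 400, "East": 200})  # cached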

Another important part of the Exalytics appliance is the Endeca discovery tool. Endeca, which Oracle acquired just a year ago, provides an exploratory, interactive approach to root-cause analysis and problem-solving without the traditional struggle of complex data modeling. It does this through a search technology that leverages key-value pairings in unstructured data, deriving structure that is delivered in the form of descriptive statistics such as recency and frequency. This type of tool democratizes analytics in an organization and puts power into the hands of line-of-business managers. The ability to navigate across data was the top-ranked business capability in our business analytics benchmark research. However, while Endeca is a discovery and analysis tool for unstructured and semi-structured data, it does not provide full sentiment analysis or deeper text analytics, as do tools such as IBM SPSS Text Analytics and the SAS Social Conversation Center. For more advanced analytics and data mining, Oracle integrates its enterprise version of R on its Exadata Database Machine and on its Big Data Appliance, Oracle’s integrated Hadoop offering based on the Cloudera stack.
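To give a sense of the general technique rather than Endeca’s actual implementation, the sketch below (Python, with hypothetical shipping records) extracts key-value pairs from semi-structured text and derives frequency and recency statistics from them:

    # Illustrative sketch (not Endeca's implementation): derive simple structure
    # from semi-structured records by extracting key-value pairs, then compute
    # descriptive statistics such as frequency and recency per value.
    import re
    from collections import Counter
    from datetime import date

    records = [
        ("2012-09-28", "status: shipped carrier: UPS region: West"),
        ("2012-09-30", "status: delayed carrier: FedEx region: West"),
        ("2012-10-01", "status: shipped carrier: UPS region: East"),
    ]

    pair_pattern = re.compile(r"(\w+):\s*(\w+)")

    frequency = Counter()
    last_seen = {}

    for day, text in records:
        for key, value in pair_pattern.findall(text):
            frequency[(key, value)] += 1
            last_seen[(key, value)] = date.fromisoformat(day)

    for (key, value), count in frequency.most_common():
        print(f"{key}={value}: frequency {count}, last seen {last_seen[(key, value)]}")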

At the conference, Oracle announced a few updates for Exalytics. The highlights include a new release of Oracle Business Intelligence, 11.1.1.6.2 BP1, optimized for Exalytics. The release adds new mobility and visualization capabilities, including trellis visualizations that allow users to see a large number of charts in a single view; these advanced capabilities are enabled on mobile devices by the increased speed Exalytics provides. In addition, Endeca is now certified with Oracle Exalytics, and the TimesTen database has been certified with GoldenGate and Oracle Data Integrator, allowing the system and its users to engage in more event-driven analytics and closed-loop processes. Finally, Hyperion Planning is certified on Exalytics.
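For readers unfamiliar with the term, a trellis is simply a grid of small charts that share axes; the generic matplotlib sketch below, using synthetic sales data and in no way tied to Oracle’s product, shows the idea:

    # Generic small-multiples (trellis) illustration with synthetic data;
    # not tied to Oracle's implementation.
    import matplotlib.pyplot as plt
    import numpy as np

    regions = ["North", "South", "East", "West"]
    months = np.arange(1, 13)

    fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(8, 6))
    rng = np.random.default_rng(0)

    for ax, region in zip(axes.flat, regions):
        sales = rng.integers(50, 150, size=12)   # made-up monthly sales
        ax.plot(months, sales, marker="o")
        ax.set_title(region)

    fig.suptitle("Monthly sales by region (trellis of small charts)")
    fig.tight_layout()
    plt.show()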

On the upside, Oracle’s legacy platform support on Exalytics allows the company to leverage its entire portfolio and offer more than 80 prebuilt analytic applications ready to run on Exalytics without any changes. The platform also supports the full range of the Business Intelligence Foundational Suite and provides a common platform and a common dimensional model. In this way, it aligns with overall business processes, content management systems and transactional BI systems. This alignment is especially attractive for companies that already have a large installed base of Oracle software and for companies looking to operationalize business analytics through event-based, closed-loop decision-making at the front lines of the organization. The machine’s speed, its near-real-time queries and its ability to deliver complex visualizations to mobile devices let users create use cases and ROI scenarios they could not before. For example, the San Diego Unified School District was able to use Exalytics to push out performance dashboards across multiple device types to students and teachers, thereby increasing attendance and garnering more money from the state. The Endeca software lets users make qualitative and, to some degree, quantitative assessments from social data and overlay those with structured data. The fact that Exalytics comprises an integrated hardware and software stack makes it a turnkey solution that does not require expensive systems integration services. Each of these points makes Exalytics an interesting and even exciting investment possibility for companies.

On the downside, the integrated approach and vendor lock-in may discourage companies concerned about Oracle’s high switching costs. This may pose a risk for Oracle as maturing alternatives become available and the economics of switching begin to make more sense. However, it is no easy task to switch away from Oracle, especially for companies with large Oracle database and enterprise application rollouts. The economics of loyalty are such that when customers are dissatisfied but captive, they remain loyal; as soon as a viable alternative comes on the market, however, they quickly defect. For Oracle, this defection threat could come from dissatisfaction with configuration and ease-of-use issues combined with new offerings from large vendors such as SAP, with its HANA in-memory database appliance, and from hardware companies such as EMC that are moving up the stack into the software environment. To be fair, Oracle is addressing the usability issues by moving the user experience away from “one size fits all” toward a more persona- or role-based interface. Oracle will likely have time to do this, given the long tail of its existing database entrenchment and the breadth and depth of its analytics application portfolio.

With Exalytics, Oracle is addressing high market demand for right-time analytics and interactive visualization. I expect this device will continue to sell well, especially since it is plug-and-play and Oracle has such a large portfolio of analytic applications. For companies already running Oracle databases and Oracle applications, Exalytics is very likely a good investment. In particular, it makes sense for those that have not fully leveraged the power of analytics in their business and are therefore losing competitive position. Our benchmark research on analytics shows relatively low penetration of analytics software today, but investments are accelerating because of the large ROI companies are realizing from analytics. Where the payoffs are less obvious, or where lock-in is a concern, companies should take a hard look at exactly what they hope to accomplish with analytics and which tools best suit that need. The most successful companies start with the use case to justify the investment, then determine which technology makes the most sense.

In addition to this blog, please see Ventana Research’s broader coverage of Oracle OpenWorld.

Regards,

Tony Cosentino

VP and Research Director
