
We recently released our benchmark research on big data analytics, and it sheds light on many of the most important discussions occurring in business technology today. The study's structure was based on the big data analytics framework that I laid out last year as well as the framework that my colleague Mark Smith put forth on the four types of discovery technology. These frameworks view big data and analytics as part of a major change that includes a movement from designed data to organic data, the bringing together of analytics and data in a single system, and a corresponding move away from the technology-oriented three Vs of big data to the business-oriented three Ws of data. Our big data analytics research confirms these trends but also reveals some important subtleties and new findings about this emerging market. I want to share three of the most interesting and even surprising results and their implications for the big data analytics market.

First, we note that communication and knowledge sharing is a primary benefit of big data analytics initiatives, but it is a latent one. Among organizations planning to deploy big data analytics, the benefits most often anticipated are faster response to opportunities and threats (57%), improving efficiency (57%), improving the customer experience (48%) and gaining competitive advantage (43%). However, once a big data analytics system has moved into production, the benefits most often mentioned as achieved are better communication and knowledge sharing (51%), gaining competitive advantage (51%), improved efficiency in business processes (49%) and improved customer experience and satisfaction (46%). (The chart shows rankings of first choices as most important.) Although the last three of these benefits are predictable, it's noteworthy that the benefit of communication and knowledge sharing, while not a priority before deployment, becomes one of the two most often cited later.

As for the implications, in our view, one reason why communication and knowledge sharing are more often seen as a key benefit after deployment rather than before is that agreement on big data analytics terminology is often lacking within organizations. Participants from fewer than half (44%) of organizations said that the people making business technology decisions mostly agree or completely agree on the meaning of big data analytics, while the same number said there are many different opinions about its meaning. To address this particular challenge, companies should pay more attention to setting up internal communication structures prior to the launch of a big data analytics project, and we expect collaborative technologies to play a larger role in these initiatives going forward.

A second finding of our research is that integration of distributed data is the most important enabler of big data analytics. Asked the meaning of big data analytics in terms of capabilities, the largest percentage (76%) of participants said it involves analyzing data from all sources rather than just one, while for 55 percent it means analyzing all of the data rather than just a sample of it. (We allowed multiple responses.) More than half (56%) told us they view big data as finding patterns in large and diverse data sets in Hadoop, which indicates the continuing influence of this original big data technology. A second tier of percentages emphasizes timeliness as an aspect of big data: doing real-time processing on streams of data (44%), visualizing large structured data sets in seconds (40%) and doing real-time scoring against a database record (36%).

The implication here is that the defining characteristic of big data analytics technology is the ability to analyze data from many sources. Companies today are focused first on bringing together multiple information sources, secondarily on processing all of the data rather than just a sample, and then on applying techniques such as machine learning to especially large data sets. Fast processing and the ability to analyze streams of data rank third among these priorities. That suggests the so-called three Vs of big data confuse the discussion by prioritizing volume, velocity and variety all at once. For companies engaged in big data analytics today, expedient sourcing and integration of various data sources is the top priority, followed by data volume and then the speed at which data arrives.
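To make the integration point concrete, here is a minimal Python sketch of analyzing data from more than one source: customer records in a relational store joined with event data from a second system. The table, column and value names are hypothetical, and a production pipeline would read from live systems rather than the in-memory samples used here.

```python
# A minimal sketch of "analyzing data from all sources rather than just one".
# All table, column and value names are hypothetical.
import sqlite3
import pandas as pd

# Source 1: designed data in a relational system (simulated in memory here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id INTEGER, segment TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "enterprise"), (2, "smb"), (3, "enterprise")])
customers = pd.read_sql_query("SELECT * FROM customers", conn)

# Source 2: organic data such as a log export; a DataFrame stands in for a
# file that would be read with pd.read_csv or pd.read_json.
events = pd.DataFrame({"customer_id": [1, 1, 2, 3],
                       "purchase_amount": [120.0, 80.0, 35.0, 210.0]})

# Integrate the distributed sources, then analyze the combined set.
combined = customers.merge(events, on="customer_id", how="inner")
print(combined.groupby("segment")["purchase_amount"].sum())
```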

Third, we found that usage is not relegated to particular industries, certain types of companies or certain functional areas. Among the 25 uses for big data analytics with which participants are personally involved, three of the four most often mentioned involve customers and sales: enabling cross-selling and up-selling (38%), understanding the customer better (32%) and optimizing pricing (28%). Meanwhile, optimizing IT operations ranked fifth (24%), though it was most often chosen by those in IT roles (76%). What is particularly fascinating, however, is that 17 of the 25 use cases were named by more than 10 percent of participants, which indicates a broad range of uses for big data analytics.

The primary implication of this finding is that big data analytics is not following the famous technology adoption curves outlined in books such as Geoffrey Moore’s seminal work, “Crossing the Chasm.” That is, companies are not following a narrowly defined path that solves only one particular problem. Instead, they are creatively deploying technological innovations en route to a diverse set of outcomes. And this is occurring across organizational functions and industries, including conservative ones, which conflicts with conventional wisdom. For this reason, companies are more often looking across industries and functional disciplines as part of their due diligence on big data analytics to come up with unique applications that may yield competitive advantage or organizational efficiencies.

In summary, it has been difficult for companies to define what big data analytics actually means and to prioritize their investments accordingly. Research such as ours can help organizations address this issue. While the discussion above outlines a few of the interesting findings, the research yields many more insights, on aspects as diverse as big data in the cloud, sandbox environments, embedded predictive analytics, the most important data sources in use, and the challenges of choosing an architecture and deploying big data analytic products. For a copy of the executive summary, download it directly from the Ventana Research community.

Regards,

Ventana Research

Our recently released benchmark research on information optimization shows that 97 percent of organizations find it important or very important to make information available to the business and customers, yet only 25 percent are satisfied with the technology they use to provide that access. This wide gap between importance and satisfaction reflects the complexity of preparing and presenting information in a world where users need to access many forms of data that exist across distributed systems.

Information optimization is a new focus in the enterprise software market. It builds on existing investments in business applications, business intelligence and information management and also benefits from recent advances in business analytics and big data, lifting information to higher levels of use and greater value in organizations. Information optimization also builds on information management and information applications, areas Ventana Research has previously researched. For more on the background and definition of information optimization, please see my colleague Mark Smith’s foundational analysis.

The drive to improve information availability derives from a need for greater operational efficiency, according to two-thirds (67%) of organizations. The imperative is so strong that 43 percent of all organizations currently are making changes to how they design and deploy information, while another 37 percent plan to make changes in the next 12 months. The pressure for such change is being directed toward the IT group, which is tasked with optimizing information in more than four-fifths of organizations, with or without line-of-business support. IT, however, is in an untenable position, as demands are far outstripping the resources and technology available to deal with the problem, which leads to dissatisfaction with the IT department in two out of five organizations, according to our research. Many organizations try to optimize information using manual spreadsheet processes, and 73 percent are confident in their ability to get by internally this way. But when the focus turns to making information available to partners or customers, an increasingly important capability in today's information-driven economy, the confidence rate drops dramatically, to 62 percent and 55 percent, respectively.

A large part of the information optimization challenge is users' different requirements. For instance, the top needs of analysts are extracting information, designing and integrating metrics, and developing access policies. In contrast, the top needs of business users are drilling into information (37%), search capabilities (36%) and collaboration (27%). IT must also consider multiple points of integration, such as security frameworks and information modeling, as well as integration with operational and content management systems. This is complicated further by multiple new standards coming into play as customer and financial data – still the most important information in the organization – are appended with less structured sources of data that add context and value. SQL is still the dominant standard for information platforms, but less structured approaches such as XML and JSON are emerging fast. Furthermore, innovations in the collaborative and mobile workforce are driving standards such as HTML5, which must be considered carefully when optimizing information. Platform considerations are also affected by the increasing use of analytic databases, in-memory approaches and Hadoop. Traditional approaches, such as an RDBMS on standard hardware and flat files, are still the most common, but the fastest growth is in in-memory systems and Hadoop. This is interesting because these technologies enable new approaches to analysis, such as visual discovery and machine learning on large data sets. Adding to the impetus for change is that organizations using an RDBMS on standard hardware and flat files are less satisfied than those using the more innovative approaches to big data.
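As a simple illustration of appending a less structured JSON source to structured customer data, consider the minimal Python sketch below; the field names and values are hypothetical, and real feeds would arrive from applications or APIs rather than a literal string.

```python
# A minimal sketch of enriching structured data with less structured JSON.
# All field names and values are hypothetical.
import json
import pandas as pd

# Structured source: the customer master, typically held in an RDBMS.
customers = pd.DataFrame({"customer_id": [1, 2], "region": ["east", "west"]})

# Less structured source: JSON events, e.g., from a web or mobile application.
raw = ('[{"customer_id": 1, "device": "mobile", "page": "pricing"},'
       ' {"customer_id": 2, "device": "desktop", "page": "support"}]')
events = pd.json_normalize(json.loads(raw))

# The flattened JSON context can now be joined to the relational data.
enriched = customers.merge(events, on="customer_id", how="left")
print(enriched)
```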

Information optimization also encounters challenges in data preparation and data presentation. In our research, 47 percent of organizations said that they spend the largest portion of their time in data preparation, but fewer than half said they are satisfied with their process of creating information. Contributing to this dissatisfaction are lack of resources, lack of flexibility and slow integration. Lack of resources and speed of integration tend to move together: when more financial and human resources are dedicated to integration efforts, satisfaction is higher. Adding more human and financial resources does not necessarily increase flexibility, however. Flexibility is a function of both tools and processes, and we see it reflected in two divergent data preparation workflows occurring in organizations. One is a more structured approach that follows traditional ETL paths; it can deliver timely integration of data once everything is defined and the system is in place, but it is less flexible. The other is to merge internal and external information on the fly, in a sandbox environment or in response to sudden market challenges. These different information flows ultimately have to support specific forms of information presentation for users, whether that is an analytic data set created for a complex statistical procedure by a data scientist or a single number with qualitative context for an executive on a mobile device.
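The contrast between these two workflows can be sketched in a few lines of Python; the function, column and file names below are hypothetical, and a real pipeline would add validation, scheduling and governance.

```python
# A minimal sketch contrasting the two data preparation workflows described
# above. Function, column and file names are hypothetical.
import pandas as pd

def structured_etl(source_path: str) -> pd.DataFrame:
    """Traditional ETL path: a fixed schema and defined rules, applied the
    same way on every run; timely once in place, but less flexible."""
    df = pd.read_csv(source_path)
    df = df.dropna(subset=["customer_id"])             # enforce defined rules
    df["order_date"] = pd.to_datetime(df["order_date"])
    return df                                          # ready for the warehouse

def sandbox_merge(internal: pd.DataFrame,
                  external: pd.DataFrame) -> pd.DataFrame:
    """Exploratory path: merge internal and external data on the fly,
    accepting whatever columns arrive."""
    return internal.merge(external, on="customer_id", how="outer")

# Example of the exploratory path with hypothetical in-memory data.
internal = pd.DataFrame({"customer_id": [1, 2], "sales": [100, 200]})
external = pd.DataFrame({"customer_id": [2, 3], "sentiment": [0.8, 0.4]})
print(sandbox_merge(internal, external))
```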

Thus it is clear that information optimization is a critical focus for organizations; it is also an important area of study for Ventana Research in 2014. Our latest benchmark research shows that the challenges are complex and involve the entire organization. As new technologies come to market, information processes must be aligned with the needs of lines of business and functional roles within organizations; companies that simplify access to information and analytics through the optimization approaches discussed above will gain an edge on competitors.

Regards,

Tony Cosentino

VP & Research Director
