
IBM redesigned its business intelligence platform, now called IBM Cognos Analytics. Expected to be released by the end of 2015, the new version includes features to help end users model their own data without IT assistance while maintaining the centralized governance and security that the platform already has. Our benchmark research into information optimization shows that simplifying access to information is important to virtually all (97%) participating organizations, but it also finds that only one in four (25%) are satisfied with their current software for doing that. Simplification is a major theme of the IBM Cognos redesign.

The new IBM Cognos Analytics provides a completely Web-based environment with a consistent user interface and security model across devices and browsers. The redesigned interface follows IBM’s internal cultural shift to base product development first on the user experience and second on features and functionality. This may be a wise move, as our research across multiple analytic software categories finds usability to be the buying criterion organizations most often cite as important.

The redesign is based on the same design and self-service principles as IBM Watson Analytics, which received a 2015 Ventana Research Technology Innovation Award in business analytics. The redesign is most evident in the IBM Cognos Analytics authoring mode. The Report Studio and Cognos Workspace Advanced modules have been replaced with a simplified Web-based modeling environment. The extended capabilities of IBM Cognos 10.2.2 are still available, but they are now tucked away and more logically arranged for easier access. For example, the previous version of Cognos presented an intimidating array of tools for tasks such as fine-grained manipulation of reports; these features are now hidden but still easily accessible. If a user has difficulty finding a particular function, a “smart search” feature helps locate the correct menu to add it.

The new system indexes objects, including metadata, as they are created, providing a more robust search function suitable for nontechnical users in the lines of business. The search feature works with what IBM calls “intent-based modeling,” so users can search for words or phrases – for example, revenue by unit or product costs – and be presented with only relevant tables and columns. The system can then automatically build a model by inferring relationships in the data. The result is that the person building a report need not manually design a multidimensional model of the data, so less skilled end users can build for themselves the data models that underpin dashboards and reports. Previously, end users were limited to parameterized reporting, in which they could work only within the context of models previously designed by IT. Many analytics vendors have been late in exploiting the power of search and therefore may be missing a critical feature that customers desire. Ventana Research is a proponent of such capabilities; my colleague Mark Smith has written about them in the context of data discovery technology. Search is fundamental to user-friendly discovery systems, as is reflected in the success of companies such as Google and Splunk. As search becomes more sophisticated, increasingly based on machine-learning algorithms, we expect it to become a key requirement for new analytics and business intelligence systems.
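
A minimal sketch can make the intent-based modeling idea concrete: a search phrase is matched against indexed column metadata, and candidate joins are inferred from keys shared between the matched tables. The catalog, function names and matching rules below are hypothetical illustrations, not the IBM Cognos Analytics API.

```python
# Hypothetical sketch of intent-based modeling: match search terms against an
# indexed metadata catalog and infer join relationships from shared key columns.

catalog = {
    "sales": ["order_id", "product_id", "revenue", "order_date"],
    "products": ["product_id", "product_name", "unit_cost"],
    "regions": ["region_id", "region_name"],
}

def find_relevant_tables(query, catalog):
    """Return tables whose column names overlap the search terms."""
    terms = {t.strip().lower() for t in query.split()}
    hits = {}
    for table, columns in catalog.items():
        matched = [c for c in columns if any(t in c.lower() for t in terms)]
        if matched:
            hits[table] = matched
    return hits

def infer_joins(tables, catalog):
    """Propose join keys from columns shared between the matched tables."""
    names = list(tables)
    joins = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for key in set(catalog[a]) & set(catalog[b]):
                joins.append((a, b, key))
    return joins

relevant = find_relevant_tables("revenue by product", catalog)
print(relevant)
# {'sales': ['product_id', 'revenue'], 'products': ['product_id', 'product_name']}
print(infer_joins(relevant, catalog))
# [('sales', 'products', 'product_id')]
```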

Furthering the self-service aspect is the ability for end users to access and combine multiple data sets. The previous version of IBM Cognos (10.2.2) allowed users to work with “personal data sets” such as .csv files, but they needed an IBM DB2 back end to house the files. Now such data sets can be uploaded and managed directly on the IBM Cognos Analytics server and accessed with the new Web-based authoring tool. Once data sets are uploaded they can be accessed and modeled like any other object to which the user has access. In this way, IBM Cognos Analytics addresses the “bring your own data” challenge in which data sources such as personal spreadsheets must be integrated into enterprise analytics and business intelligence systems.

After modeling the data, users can lay out new dashboards using drag-and-drop capabilities like those found in IBM Watson Analytics. Dashboards can be previewed and used once, or put into production mode if the user has the necessary privileges. As is the case with IBM Watson Analytics, newly designed dashboard components such as tables, charts and maps are automatically linked, so changes in one part of the dashboard propagate to the others. This makes dashboards easier to design. Some other tools in the market require widgets to be connected manually, which can be time-consuming and impedes prototyping of dashboards.
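
As a rough architectural illustration of what “linked” components imply, the sketch below uses a simple observer pattern: widgets subscribe to shared filter state, so a selection made in one view refreshes the others. The class and widget names are hypothetical and do not represent IBM’s implementation.

```python
# Generic observer-style sketch of linked dashboard components.

class DashboardState:
    def __init__(self):
        self._widgets = []
        self.filters = {}

    def register(self, widget):
        self._widgets.append(widget)

    def set_filter(self, field, value):
        # Changing a filter in any one widget refreshes every linked widget.
        self.filters[field] = value
        for widget in self._widgets:
            widget.refresh(self.filters)

class Widget:
    def __init__(self, name):
        self.name = name

    def refresh(self, filters):
        print(f"{self.name} redrawn with filters {filters}")

state = DashboardState()
for w in (Widget("revenue chart"), Widget("region map"), Widget("orders table")):
    state.register(w)

# Selecting a region in one widget propagates to every linked component.
state.set_filter("region", "West")
```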

The move to a more self-service orientation has long been in the works for IBM Cognos, so this is an important release for IBM. The ability to automatically integrate and model data gives the IT department a more defensible position as other self-service tools that challenge the data access and preparation built into tools like IBM Cognos are introduced into the organization. This is becoming especially important as data sources grow in number and complexity and the business needs them more rapidly. Our research into information optimization shows that most organizations need to integrate at least six data sources and some have 20 or more sources to bring together. All of this is consistent with our data and analytics in the cloud benchmark research, which finds data preparation to be a top priority in over half (55%) of organizations.

Over time, IBM intends to integrate the capabilities of Cognos Analytics with those of Watson Analytics. This is an important plan because IBM Watson Analytics has capabilities beyond those of self-service tools in the market today. In particular, the ability to explore unknown data relationships and do advanced analysis is a key differentiator for IBM Watson Analytics, as I have written. IBM Watson Analytics enables users to explore relationships in data that otherwise would not be noticeable, whereas IBM Cognos Analytics enables them to explore and put into production information based on predefined assumptions.

Going forward, I will be watching how IBM aligns Cognos Analytics with Watson Analytics, and in particular, how Cognos Analytics will fit into the IBM cloud ecosystem. Currently IBM Cognos Analytics is offered both on-premises and in a hosted cloud, but here also IBM is working to align it more closely with IBM Watson Analytics. Bringing in data preparation, data quality and MDM capabilities from the IBM DataWorks product could also benefit IBM Cognos Analytics users. IBM should emphasize the breadth of its portfolio of products including IBM Cognos TM1, IBM SPSS, IBM Watson Analytics and IBM DataWorks as it faces stiff competition in enterprise analytics and business intelligence from a host of analytics companies including new cloud-based ones. IBM is rated a Hot Vendor in our Ventana Research Analytics and Business Intelligence Value Index in part because of its overall portfolio.

For organizations already using IBM Cognos, the redesign addresses the need of end users to create their own dashboards while maintaining IT governance and control. The new interface may take some getting used to, but it is modern and more intuitive than its predecessor. For companies new to IBM Cognos, as well as departments wanting to take a look at the platform, cloud options offer less risk. For those wanting early access to the new IBM Cognos Analytics, IBM has made it available at www.analyticszone.com. The changes I have noted move IBM Cognos Analytics closer to the advances in analytics as a whole, and I recommend that all these groups examine the new version.

Regards,

Ventana Research

Tableau Software’s annual conference, which company spokespeople reported had more than 10,000 attendees, filled the MGM Grand in Las Vegas. Various product announcements supported the company’s strategy to deliver value to analysts and users of visualization tools. Advances include new data preparation and integration features, advanced analytics and mapping. The company also announced the release of a stand-alone mobile application called Vizable. One key message from management was that Tableau is more than just a visualization company.

Over the last few years Tableau has made strides in the analytics and business intelligence market with a user-centric philosophy and the ability to engage younger analysts who work in the lines of business rather than in IT. Usability continues to rank as the top criterion for selecting analytic and business intelligence software in all of our business analytics benchmark research. In this area Tableau has introduced innovations such as VizQL, originally developed at Stanford University, which links capabilities to query a database and to visualize data. This combination enables users not highly skilled in languages such as SQL or in proprietary business intelligence tools to create and share visually intuitive dashboards, providing previously unavailable visibility into areas of their operations. Being able to see and compare performance across operations and people often increases communication and knowledge sharing.

Tableau 9, released in April 2015, which I discussed, introduced advances including analytic ease of use and performance, new APIs, data preparation, storyboarding and Project Elastic, the precursor to this year’s announcement of Vizable. Adoption of 9.x appears to be robust given both the number of conference attendees and increases in third-quarter revenue ($170 million) and new customers (3,100) reported to the financial markets.

As was the case last year, conference announcements included some developments already on the market as well as some still to come. Among the data preparation capabilities introduced are integration and automated spreadsheet cleanup. For the former, users can join two data sets through a union function, which adds rows to form a single data set, and integrate across databases by joining specific data fields, giving them flexibility in combining, analyzing and visualizing multiple sets of data. For the latter, to automate the spreadsheet cleanup process Tableau examined usage patterns in Tableau Public to learn how users manually clean their spreadsheets and then used machine-learning algorithms to help automate those tasks. Being able to automatically scan Excel files to find subtables and transform data without manual calculations and parsing will save time for analysts who otherwise would have to do these tasks by hand. Our benchmark research into information optimization shows that data preparation consumes the largest portion of time spent on analytics in nearly half (47%) of organizations; in our latest data and analytics in the cloud benchmark research that figure rises to 59 percent.
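
For readers unfamiliar with the two integration patterns, the short sketch below uses pandas as a stand-in to show the difference between a union, which appends rows from structurally similar extracts, and a cross-source join, which matches rows on a shared field. The tables and column names are invented; Tableau exposes these operations through its own interface rather than code.

```python
import pandas as pd

# Two quarterly extracts with the same structure.
q1 = pd.DataFrame({"region": ["East", "West"], "revenue": [120, 95]})
q2 = pd.DataFrame({"region": ["North", "South"], "revenue": [80, 110]})

# Union: stack the extracts into a single data set (adds rows).
full_year = pd.concat([q1, q2], ignore_index=True)

# Cross-source join: match on a shared field, e.g. sales from one source
# joined to quotas exported from another system.
quotas = pd.DataFrame({"region": ["East", "West", "North", "South"],
                       "quota": [100, 100, 90, 90]})
combined = full_year.merge(quotas, on="region", how="left")
print(combined)
```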

Advanced analytics is another area of innovation for Tableau. The company demonstrated outlier detection and clustering analysis natively integrated with the software. Use of these features is straightforward and visually oriented, replacing statistical charts with drag-and-drop manipulation. The software does not let users specify the number of segments or filter by the degree of the outliers, but the basic capability can reduce data sets to more manageable analytic sets and facilitate exploration of anomalous data points within large sets. These tasks, unlike interpreting the box plots introduced at last year’s conference, are more intuitive and better suited to business users of information.
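
The underlying techniques are standard, so a hedged approximation with scikit-learn can show what the two analyses do: clustering groups similar points, and outlier detection flags points that do not fit. This is a generic illustration, not Tableau’s native implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

# Generic illustration: cluster a small synthetic data set and flag anomalies.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(0, 1, (100, 2)),   # one cluster
    rng.normal(6, 1, (100, 2)),   # a second cluster
    [[20.0, 20.0]],               # an obvious outlier
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
anomalies = IsolationForest(random_state=0).fit_predict(points)  # -1 marks outliers

print("points per cluster:", np.bincount(clusters))
print("points flagged as outliers:", int((anomalies == -1).sum()))
```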

The company also demonstrated new mapping and geospatial features at the conference. Capabilities to analyze down to the zip code on a global basis, define custom territories, support geospatial files, integrate with the open source mapping platform Mapbox and perform calculations within the context of a digital map are all useful features for location analytics, which is becoming more important in areas such as customer analytics and digital devices connected in the emerging Internet of Things (IoT). Tableau is adding capabilities that participants most often cited as important in our research on location analytics: to provide geographic representation (72%), visualize metrics associated with locations (65%) and directly select and analyze locations on maps (61%).

Tableau insists that its development of new capabilities is guided by customer requests. This provides a steady source of opportunities to address user needs, especially in the areas of data preparation, advanced analytics and location analytics. However, this strategy raises the question of whether it will ultimately put the company in conflict with the partners that have helped build the Tableau ecosystem and feed the company’s momentum thus far. Tableau is positioning its product as a fully featured analytic platform of the sort that I have outlined, but to achieve that it eventually will have to encroach on the capabilities that partners such as Alteryx, Datawatch, Informatica, Lavastorm, Paxata and Trifacta offer today. Another question is whether Tableau will continue its internal development strategy or opt to acquire companies that can broaden its capabilities, the gaps in which have hampered its overall value rating as identified in our 2015 Analytics and Business Intelligence Value Index. In light of the announcements at the conference, the path seems to be to develop these capabilities in-house. While there appears to be no immediate threat to the partnerships, continued development of some of these capabilities eventually will affect the partner business model in a more material way. Given that the majority of deals in its partner ecosystem flow through Tableau itself, many of the partners are vulnerable to these development efforts. In addition, I will be watching how aggressively Tableau helps to market Spark, the open source big data technology that I wrote about, as compared to some of the partner technologies that Spark threatens. Tableau has already built on Spark while some of its competitors have not, which may give Tableau a window of opportunity.

Going forward, integration with transactional systems and emerging cloud ecosystems is an area for Tableau that I will be watching. Given its architecture it’s not easy for Tableau to participate in the new generation of service-oriented architectures that characterize part of today’s cloud marketplace. For this reason, Tableau will need to continue to build out its own platform and the momentum of its ecosystem – which at this point does not appear to be a problem.

Finally, it will be interesting to see how Tableau eventually aligns its stand-alone data visualization application Vizable with its broader mobile strategy. We will look closely at the mobile market in our upcoming Mobile Analytics and Business Intelligence Value Index, due in the first half of 2016; our last analysis found Tableau in the middle of the pack among providers, but the company has made further mobile investments since then.

We recommend that companies exploring analytics platforms, especially for on-premises and hosted cloud use, include Tableau on their short lists. Organizations considering deploying Tableau on an enterprise basis should look closely at how it aligns with their broader user requirements and whether its cloud strategy will meet their future needs. Furthermore, while the company has made improvements in manageability and performance, these can still be a concern in some circumstances. Tableau should also be evaluated with specific business objectives in mind and in conjunction with its partner ecosystem.

Regards,

Ventana Research

PentahoWorld 2015, Pentaho’s second annual user conference, held in mid-October, centered on the general availability of release 6.0 of its data integration and analytics platform and its acquisition by Hitachi Data Systems (HDS) earlier this year. Company spokespeople detailed the development of the product in relation to the roadmap laid out in 2014 and outlined plans for its integration with those of HDS and its parent Hitachi. They also discussed Pentaho’s and HDS’s shared intentions regarding the Internet of Things (IoT), particularly in telecommunications, healthcare, public infrastructure and IT analytics.

Pentaho competes on the basis of what it calls a “streamlined data refinery” that enables a flexible way to access, transform and integrate data and embed and present analytic data sets in usable formats without writing new code. In addition, it integrates a visual analytic workflow interface with a business intelligence front end including customization extensions; this is a differentiator for the company since much of the self-serve analytics market in which it competes is still dominated by separate point products.

Pentaho 6 aims to provide manageable and scalable self-service analytics. A key advance in the new version is what Pentaho calls “virtualized data sets,” which logically aggregate multiple data sets according to transformations and integration specified in the Pentaho Data Integration (PDI) analytic workflow interface. This virtual approach allows the physical processing to be executed close to the data in systems such as Hadoop or an RDBMS, which relieves users of the burden of continually moving data back and forth between the query and response systems. In this way, logical data sets can be served up for consumption in Pentaho Analytics as well as other front-end interfaces in a timely and flexible manner.
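
The pushdown idea behind virtualized data sets can be illustrated generically: the aggregation is expressed against the source system, the source does the heavy processing, and only the reduced logical result travels to the analytics layer. The SQLite schema and query below are hypothetical stand-ins, not Pentaho artifacts.

```python
import sqlite3
import pandas as pd

# Hypothetical source system standing in for Hadoop or an RDBMS.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (device_id TEXT, metric REAL);
    INSERT INTO readings VALUES ('a', 1.0), ('a', 2.0), ('b', 5.0);
""")

# Pushed-down query: the source computes the aggregate...
virtual_view = pd.read_sql(
    "SELECT device_id, AVG(metric) AS avg_metric FROM readings GROUP BY device_id",
    conn)

# ...and the front end consumes the already-reduced "virtual" data set.
print(virtual_view)
```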

One challenge that emerges when accessing multiple integrated and transformed data sets is data lineage. Tracking lineage is important to establish trust in the data among users by enabling them to ascertain the origin of data prior to transformation and integration. This is particularly useful in regulated industries that may need access to and tracking of source data to prove compliance. It becomes even more complicated with event data, which must be sourced completely and arrives in large volumes, a situation found in more than a third of organizations in our operational intelligence benchmark research, which examined operations-centric analytics and business intelligence.

Similarly, Pentaho 6 uses Simple Network Management Protocol (SNMP) to deliver application programming interface (API) extensions so that third-party tools can provide governance lower in the system stack, further improving the reliability of data. Our benchmark research consistently shows that manageability of systems is important to user organizations, particularly in big data environments.

The flexibility introduced with virtual tables and the improvements in Pentaho 6.0 around in-line modeling (a concept I discussed after last year’s event) are two critical means of building self-service analytic environments. Marrying various data systems with different data models, sometimes referred to as big data integration, has proven to be a difficult challenge in such environments. Pentaho’s continued focus on big data integration and on providing an integration backbone to many business intelligence tools (in addition to its own) are potential competitive differentiators for the company. While analysts and users prefer integrated tool sets, today’s fragmented analytics market is increasingly dominated by separate tools that prepare data and surface data for consumption. Front-end tools alone cannot automate the big data integration process, which Pentaho PDI can do. Our research into big data integration shows the importance of eliminating manual tasks in this process: 78 percent of companies said it is important or very important to automate their big data integration processes. Pentaho’s ability to integrate with multiple visual analytics tools is important for the company, especially in light of the HDS accounts, which likely have a variety of front-end tools. In addition, the ability to provide an integrated front end can be attractive to independent software vendors, analytics services providers and certain end-user organizations that would like to embed both integration and visualization without having to license multiple products.

Going forward, Pentaho is focused on joint opportunities with HDS such as the emerging Internet of Things. Pentaho cites established industrial customers such as Halliburton, Intelligent Mechatronic Systems and Kirchoff Datensysteme Software as reference accounts for IoT. In addition, a conference participant from Caterpillar Marine Asset Intelligence shared how it embeds Pentaho to help analyze and predict equipment failure on maritime equipment. Pentaho’s ability to integrate and analyze multiple data sources is key to delivering business value in each of these environments, but the company also possesses a little-known asset in the Weka machine learning library, which is an integrated part of the product suite. Our research on next-generation predictive analytics finds that Weka is used by 5 percent of organizations, and many of the companies that use it are large or very large, which is Pentaho’s target market. Given the importance of machine learning in the IoT category, it will be interesting to see how Pentaho leverages this asset.

Also at the conference, an HDS spokesperson discussed its target markets for IoT, or what the company calls “social innovation.” These markets include telecommunications, healthcare, public infrastructure and IT analytics and reflect HDS’s customer base and the core businesses of its parent company Hitachi. Pentaho Data Integration is currently embedded within major customer environments such as Caterpillar, CERN, FINRA, Halliburton, NASDAQ, Sears and Staples, but not all of these companies fit directly into the IoT segments HDS outlined. While Hitachi’s core businesses provide fertile ground in which to grow its business, Pentaho will need to develop integration with the large industrial control systems already in place in those organizations.

The integration of Pentaho into HDS is a key priority. The 2,000-strong global sales force of HDS is now incented to sell Pentaho, and it will be important for the reps to include it as they discuss their accounts’ needs. While Pentaho’s portfolio can potentially broaden sales opportunities for HDS, big data software is a more consultative sale than the price-driven hardware and systems the sales force may be used to. Furthermore, the buying centers, which are shifting from IT to lines of business, can differ significantly based on the type of organization and its objectives. Addressing this will require significant training within the HDS sales force and with partner consulting channels. The joint sales efforts will be well served by emphasizing the “big data blueprints” Pentaho has developed over the last couple of years and by developing new ones that speak to IoT and the combined capabilities of the two companies.

HDS says it will begin to embed Pentaho into its product portfolio but has promised to leave Pentaho’s roadmap intact. This is important because Pentaho has done a good job of listening to its customers and addressing the complexities that exist in big data and open source environments. As the next chapter unfolds, I will be looking at how the company integrates its platform with the HDS portfolio and expands it to deal with the complexities of IoT, which we will be investigating in an upcoming benchmark research study.

For organizations that need to use large-scale integrated data sets, Pentaho provides one of the most flexible yet mature tools in the market, and they should consider it. The analytics tool provides an integrated and embeddable front end that should be of particular interest to analytics services providers and independent software vendors seeking to make information management and data analytics core capabilities. For existing HDS customers, the Pentaho portfolio will open conversations in new areas of those organizations and potentially add considerable value within accounts.

Regards,

Ventana Research

Splunk’s annual gathering, this year called .conf 2015, in late September hosted almost 4,000 Splunk customers, partners and employees. It is one of the fastest-growing user conferences in the technology industry. The area dedicated to Splunk partners has grown from a handful of booths a few years ago to a vast showroom floor many times larger. While the conference’s main announcement was the release of Splunk Enterprise 6.3, its flagship platform, the progress the company is making in the related areas of machine learning and the Internet of Things (IoT) most caught my attention.

Splunk’s strength is its ability to index, normalize, correlate and query data throughout the technology stack, including applications, servers, networks and sensors. It uses distributed search that enables correlation and analysis of events across local- and wide-area networks without moving vast amounts of data. Its architectural approach unifies cloud and on-premises implementations and provides extensibility for developers building applications. Originally, Splunk provided an innovative way to troubleshoot complex technology issues, but over time new uses for Splunk-based data have emerged, including digital marketing analytics, cyber security, fraud prevention and connecting digital devices in the emerging Internet of Things. Ventana Research has covered Splunk since its establishment in the market, most recently in this analysis of mine.

Splunk’s experience in dealing directly with distributed, time-series data and processes on a large scale puts it in position to address the Internet of Things from an industrial perspective. This sort of data is at the heart of large-scale industrial control systems, but it often comes in different formats and from implementations based on different standards and protocols. For instance, sensor technology and control systems invented 10 to 20 years ago use very different technology than modern systems. Furthermore, as with computer technology, there are multiple layers in stack models that have to communicate. Splunk’s tools help engineers and systems analysts cross-reference these disparate systems in the same way that it queries computer system and network data; however, the systems can be vastly different. To address this challenge, Splunk turns to its partners and its extensible platform. For example, Kepware has developed plug-ins that use its more than 150 communication drivers so users can stream real-time industrial sensor and machine data directly into the Splunk platform. Currently, the primary value drivers for organizations in the industrial IoT are operational efficiency, predictive maintenance and asset management. At the conference, Splunk showcased projects in these areas, including one with Target that uses Splunk to improve operations in robotics and manufacturing.

For its part, Splunk is taking a multipronged approach by acquiring companies, investing in internal development and enabling its partner ecosystem to build new products. One key enabler of its approach to IoT is machine learning algorithms built on the Splunk platform. In machine learning a model can use new data to continuously learn and adapt its answers to queries. This differs from conventional predictive analytics, in which users build models and validate them based on a particular sample; the model does not adapt over time. With machine learning, for instance, if a piece of equipment or an automobile shows a certain optimal pattern of operation over time, an algorithm can identify that pattern and build a model for how that system should behave. When the equipment begins to act in a less optimal or anomalous way, the system can alert a human operator that there may be a problem, or in a machine-to-machine situation, it can invoke a process to solve the problem or recalibrate the machine.
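
A minimal sketch of that learn-and-adapt pattern follows: a model keeps a running estimate of “normal” sensor behavior, flags readings that deviate sharply from it and keeps updating its baseline as new data arrives. This is a generic illustration of adaptive anomaly detection, not Splunk’s algorithm.

```python
import numpy as np

class AdaptiveBaseline:
    """Flag readings that drift far from an exponentially updated baseline."""

    def __init__(self, alpha=0.05, threshold=3.0):
        self.alpha = alpha          # how quickly the baseline adapts
        self.threshold = threshold  # anomaly cutoff in standard deviations
        self.mean = None
        self.var = 1.0

    def observe(self, x):
        if self.mean is None:       # first reading seeds the baseline
            self.mean = x
            return False
        deviation = abs(x - self.mean) / (self.var ** 0.5 + 1e-9)
        anomaly = deviation > self.threshold
        # Exponentially weighted updates keep the model current as behavior shifts.
        self.mean = (1 - self.alpha) * self.mean + self.alpha * x
        self.var = (1 - self.alpha) * self.var + self.alpha * (x - self.mean) ** 2
        return anomaly

sensor = AdaptiveBaseline()
stream = list(np.random.normal(10, 1, 200)) + [25.0]   # last reading is anomalous
alerts = [i for i, x in enumerate(stream) if sensor.observe(x)]
print(alerts)  # expected to include the final reading
```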

Machine learning algorithms allow event processes to be audited, analyzed and acted upon in real time. They enable predictive capabilities for maintenance, transportation and logistics, and asset management and can also be applied in more people-oriented domains such as fraud prevention, security, business process improvement and digital products. IoT potentially can have a major impact on business processes, but only if organizations can realign their systems toward discover-and-adapt rather than model-and-apply approaches. For instance, processes are often carried out in an uneven fashion, different from the way the model was conceived and communicated through complex process documentation and systems. As more process flows are directly instrumented and more processes are carried out by machines, the ability to model directly based on the discovery of those event flows and to adapt to them (through human learning or machine learning) becomes key to improving organizational processes. Such realignment of business processes, however, often involves broad organizational transformation. Our benchmark research on operational intelligence shows that challenges associated with people and processes, rather than information and technology, most often hold back organizational improvement.

Two product announcements made at the conference illuminate the direction Splunk is taking with IoT and machine learning. The first is User Behavior Analytics (UBA), based on Splunk’s acquisition of Caspida, which produces advanced algorithms that can detect anomalous behavior within a network. Such algorithms can model internal user behavior, and when behavior deviates from the norm, the system can generate an alert that can be addressed through investigative processes using Splunk Enterprise Security 4.0. Together, Splunk Enterprise Security 4.0 and UBA won the 2015 Ventana Research CIO Innovation Award. The acquisition of Caspida shows that Splunk is not afraid to acquire companies in niche areas where it can exploit its platform to deliver organizational value. I expect we will see more such acquisitions of companies with high-value machine learning algorithms as Splunk carves out specific positions in these emerging markets.

The other product announced is IT Service Intelligence (ITSI), which highlights machine learning algorithms alongside Splunk’s core capabilities. ITSI is an application in which end users deploy machine learning to see patterns in various IT service scenarios. It can inform and enable multiple business uses such as predictive maintenance, churn analysis, service-level agreements and chargebacks. Similar to UBA, it uses anomaly detection to point out issues and enables managers to view highly distributed processes such as claims process data in insurance companies. At this point, however, use of ITSI (like other areas of IoT) may encounter cultural and political issues as organizations deal with changes in the roles of IT and operations management. Splunk’s direction with ITSI shows that the company is staying close to its IT operations knitting as it builds out application software, but such development also puts Splunk into new competitive scenarios where legacy technology and processes may still be considered good enough.

We note that ITSI is built using Splunk’s Machine Learning Toolkit and showcase, which currently is in preview mode. The platform is an important development for the company and fills one of the gaps that I pointed out in its portfolio last year. Addressing this gap enables Splunk and its partners to create services that apply advanced analytics to big data, a capability that almost half (45%) of organizations find important. I consider the use of predictive and advanced analytics on big data a killer application; our benchmark research on big data analytics backs this claim: predictive analytics is the type of analytics most (64%) organizations wish to pursue on big data.

Organizations currently looking at IoT use cases should consider Splunk’s strategy and tools in the context of the specific problems they need to address. Machine learning algorithms built for particular industries are key, so it is important to understand whether the problem can be addressed using prebuilt applications provided by Splunk or one of its partners, or whether the organization will need to build its own algorithms using the Splunk machine learning platform or alternatives. Evaluate both the platform capabilities and the instrumentation, the protocols and formats involved and how that data will be consumed into the system and related in a uniform manner. Most of all, be sure the skills and processes in the organization align with the technology from an end-user and business perspective.

Regards,

Ventana Research

The concept and implementation of what is called big data are no longer new, and many organizations, especially larger ones, view it as a way to manage and understand the flood of data they receive. Our benchmark research on big data analytics shows that business intelligence (BI) is the most common type of system to which organizations deliver big data. However, BI systems aren’t a good fit for analyzing big data. They were built to provide interactive analysis of structured data sources using Structured Query Language (SQL). Big data includes large volumes of data that does not fit into rows and columns, such as sensor data, text data and Web log data. Such data must be transformed and modeled before it can fit into paradigms such as SQL.
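
As a simple illustration of that transformation step, the sketch below parses a raw Web log line, which has no inherent rows-and-columns shape, into named fields that a SQL engine could then query. The sample line is invented and follows the common Apache log layout.

```python
import re

# Transform a semi-structured log line into named fields (rows and columns).
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

line = '203.0.113.7 - - [10/Oct/2015:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'
row = LOG_PATTERN.match(line).groupdict()
print(row)
# {'ip': '203.0.113.7', 'timestamp': '10/Oct/2015:13:55:36 +0000',
#  'method': 'GET', 'path': '/index.html', 'status': '200', 'bytes': '2326'}
```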

The result is that currently many organizations run separate systems for big data and business intelligence. On one system, conventional BI tools as well as new visual discovery tools act on structured data sources to do fast interactive analysis. In this area analytic databases can use column store approaches and visualization tools as a front end for fast interaction with the data. On other systems, big data is stored in distributed systems such as the Hadoop Distributed File System (HDFS). Tools that use it have been developed to access, process and analyze the data. Commercial distribution companies aligned with the open source Apache Foundation, such as Cloudera, Hortonworks and MapR, have built ecosystems around the MapReduce processing paradigm. MapReduce works well for search-based tasks but not so well for the interactive analytics for which business intelligence systems are known. This situation has created a divide between business technology users, who gravitate to visual discovery tools that provide easily accessible and interactive data exploration, and more technically skilled users of big data tools that require sophisticated access paradigms and elongated query cycles to explore data.

There are two challenges with the MapReduce approach. First, working with it is a highly technical endeavor that requires advanced skills. Our big data analytics research shows that lack of skills is the most widespread reason for dissatisfaction with big data analytics, mentioned by more than two-thirds of companies. To fill this gap, vendors of big data technologies should facilitate use of familiar interfaces, including query interfaces and programming language interfaces. For example, our research shows that standard SQL is the most important method for implementing analysis on Hadoop. To address this challenge, the distribution companies and others offer SQL abstraction layers on top of HDFS, such as Hive and Cloudera Impala. Companies that I have written about include Datameer and Platfora, whose systems help users interact with Hadoop data via familiar paradigms such as spreadsheets and multidimensional cubes. Such systems have helped increase adoption of Hadoop and enabled more than just a few experts to access big data systems.

The second challenge is latency. As a batch process, MapReduce must sort and aggregate all of the data before creating analytic output. Technologies such as Tez, developed by Hortonworks, and Cloudera Impala aim to address these speed limitations; the first leverages MapReduce, and the other circumvents MapReduce altogether. Adoption of these tools has moved the big data market forward, but challenges remain, such as the continuing fragmentation of the Hadoop ecosystem and a lack of standardization in approaches.

An emerging technology holds promise for bridging the gap between big data and BI in a way that can unify big data ecosystems rather than divide them. Apache Spark, under development since 2010 at the University of California, Berkeley’s AMPLab, addresses both usability and performance concerns for big data. It adds flexibility by running on multiple platforms in terms of both clustering (such as Hadoop YARN and Apache Mesos) and distributed storage (for example, HDFS, Cassandra, Amazon S3 and OpenStack’s Swift). Spark also expands the potential uses because the platform includes an SQL abstraction layer (Spark SQL), a machine learning library (MLlib), a graph library (GraphX) and a near-real-time engine (Spark Streaming). Furthermore, Spark can be programmed using modern languages such as Python and Scala. Having all of these components integrated is important because interactive business intelligence, advanced analytics and operational intelligence on big data can all run without the complexity of the separate proprietary systems previously required to do the same things.
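
A short, hedged PySpark sketch, using the Spark 1.x APIs current when this was written, shows why that integration matters: the same cluster serves a SQL aggregation and a machine-learning job against the same data without copying it into a separate system. The file path, schema and column names are assumptions for illustration.

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext, Row
from pyspark.mllib.clustering import KMeans

sc = SparkContext(appName="spark-integration-sketch")
sqlContext = SQLContext(sc)

# Hypothetical CSV of device readings stored in HDFS.
rows = (sc.textFile("hdfs:///data/sensor_readings.csv")
          .map(lambda line: line.split(","))
          .map(lambda f: Row(device=f[0], temperature=float(f[1]))))

df = sqlContext.createDataFrame(rows)
df.registerTempTable("readings")

# Spark SQL: interactive-style aggregation over the distributed data set.
sqlContext.sql("SELECT device, AVG(temperature) AS avg_temp "
               "FROM readings GROUP BY device").show()

# MLlib: cluster the same data in place, without exporting it elsewhere.
model = KMeans.train(df.rdd.map(lambda r: [r.temperature]), k=3)
print(model.clusterCenters)
```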

Because of this potential Spark is becoming a rallying point for providers of big data analytics. It has become the most active Apache project as key open source contributors moved their focus from other Hadoop projects to it. Out of the effort in Berkeley, Databricks was founded for commercial development of open source Apache Spark and has raised more than $46 million. Since the initial release in May 2014 the momentum for Spark has continued to build; major companies have made announcements around Apache Spark. IBM said it will dedicate 3,500 researchers and engineers to develop the platform and help customers deploy it. This is the largest dedicated Spark effort in the industry, akin to the move IBM made in the late 1990s with the Linux open source operating system. Oracle has built Spark into its Big Data Appliance. Microsoft has Spark as an option on its HDInsight big data approach but has also announced Prajna, an alternative approach to Spark. SAP has announced integration with its SAP HANA platform, although it represents “coopetition” for SAP’s in-memory platform. In addition, all the major business intelligence players have built or are building connectors to run on Spark. In time, Spark likely will serve as a data ingestion engine for connecting devices in the Internet of Things (IoT). For instance, Spark can integrate with technologies such as Apache Kafka or Amazon Kinesis to instantly process and analyze IoT data so that immediate action can be taken. In this way, as it is envisioned by its creators, Spark can serve as the nexus of multiple systems.
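
The IoT ingestion pattern can be sketched with Spark Streaming and the Kafka direct-stream connector available in the Spark 1.x line. The broker address, topic name and message format below are assumptions for illustration only.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="iot-ingest-sketch")
ssc = StreamingContext(sc, batchDuration=5)  # five-second micro-batches

# Hypothetical Kafka topic carrying comma-separated device events.
stream = KafkaUtils.createDirectStream(
    ssc, ["device-events"], {"metadata.broker.list": "broker:9092"})

# Each message is a (key, value) pair; count events per device in each batch.
counts = (stream.map(lambda kv: (kv[1].split(",")[0], 1))
                .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()
```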

Because it is a flexible in-memory technology for big data, Spark opens the door to many new opportunities, which in business use include interactive analysis, advanced customer analytics, fraud detection, and systems and network management. At the same time, it is not yet a mature technology, and for this reason organizations considering adoption should tread carefully. While Spark may offer better performance and usability, MapReduce is already widely deployed. For those users, it is likely best to maintain the current approach and not fix what is not broken. For future big data use, however, Spark should be carefully compared to other big data technologies. Here, as elsewhere, technical skills can be a concern: Scala, for instance, one of the key languages used with Spark, has little adoption, according to our recent research on next-generation predictive analytics. Manageability is an issue, as for any other nascent technology, and should be carefully addressed up front. While, as noted, vendor support for Spark is becoming apparent, frequent updates to the platform can mean disruption to systems and processes, so examine the processes for these updates. Be sure that vendor support is tied to meaningful business objectives and outcomes. Spark is an exciting new technology, and for early adopters that wish to move forward with it today, both big opportunities and challenges are in store.

Regards,

Ventana Research

Ventana Research recently completed the most comprehensive evaluation of analytics and business intelligence products and vendors available anywhere. As I discussed recently, such research is necessary and timely as analytics and business intelligence is now a fast-changing market. Our Value Index for Analytics and Business Intelligence in 2015 scrutinizes 15 top vendors and their product offerings in seven key categories: Usability, Manageability, Reliability, Capability, Adaptability, Vendor Validation and TCO/ROI. The analysis shows that the top supplier is Information Builders, which qualifies as a Hot vendor and is followed by 10 other Hot vendors: SAP, IBM, MicroStrategy, Oracle, SAS, Qlik, Actuate (now part of OpenText) and Pentaho.

The evaluations drew on our research and analysis of vendors and products along with their responses to our detailed RFI or questionnaire, our own hands-on experience and the buyer-related findings from our benchmark research on next-generation business intelligence, information optimization and big data analytics. The benchmark research examines analytics and business intelligence from various perspectives to determine organizations’ current and planned use of these technologies and the capabilities they require for successful deployments.

We find that the processes that comprise business intelligence today have expanded beyond standard query, reporting, analysis and publishing capabilities. They now include sourcing and integration of data and, at later stages, the use of analytics for planning and forecasting and of capabilities utilizing analytics and metrics for collaborative interaction and performance management. Our research on big data analytics finds that new technologies collectively known as big data are influencing the evolution of business intelligence as well; here in-memory systems (used by 50% of participating organizations), Hadoop (42%) and data warehouse appliances (33%) are the most important innovations. In-memory computing in particular has changed BI because it enables rapid processing of even complex models with very large data sets. In-memory computing also can change how users access data through data visualization and incorporate data mining, simulation and predictive analytics into business intelligence systems. Thus the ability of products to work with big data tools figured in our assessments.

In addition, the 2015 Value Index includes assessments of vendors’ self-service tools and cloud deployment options. New self-service approaches can enable business users to reduce their reliance on IT to access and use data and analysis. However, our information optimization research shows that this change is slow to proliferate. In four out of five organizations, IT currently is involved in making information available to end users and remains entrenched in the operations of business intelligence systems.

Similarly, our research, as well as the lack of maturity of the cloud-based products evaluated, shows that organizations are still in the early stages of cloud adoption for analytics and business intelligence; deployments are mostly departmental in scope. We are exploring these issues further in our benchmark research into data and analytics in the cloud, which will be released in the second quarter of 2015.

The products offered by the five top-rated companies in the Value Index provide exceptional functionality and a superior user experience. However, Information Builders stands out, providing an exceptional user experience and a completely integrated portfolio of data management, predictive analytics, visual discovery and operational intelligence capabilities in a single platform. SAP, in second place, is not far behind, having made significant progress by integrating its Lumira platform into its BusinessObjects Suite; it added predictive analytics capabilities, which led to higher Usability and Capability scores. IBM, MicroStrategy and Oracle, the next three, each provide a robust integrated platform of capabilities. The key differentiator between them and the top two is that they do not have superior scores in all seven categories.

In evaluating products for this Value Index we found some noteworthy innovations in business intelligence. One is Qlik Sense, which has a modern architecture that is cloud-ready and supports responsive design on mobile devices. Another is SAS Visual Analytics, which combines predictive analytics with visual discovery in ways that are a step ahead of others currently in the market. Pentaho’s Automated Data Refinery concept adds its unique Pentaho Data Integration platform to business intelligence for a flexible, well-managed user experience. IBM Watson Analytics uses advanced analytics and natural language processing for an interactive experience beyond the traditional paradigm of business intelligence. Tableau, which led the field in the category of Usability, continues to innovate in the area of user experience and aligning technology with people and process. MicroStrategy’s innovative Usher technology addresses the need for identity management and security, especially in an evolving era in which individuals utilize multiple devices to access information.

The Value Index analysis uncovered notable differences in how well products satisfy the business intelligence needs of employees working in a range of IT and business roles. Our analysis also found substantial variation in how products provide development, security and collaboration capabilities and role-based support for users. Thus, we caution that similar vendor scores should not be taken to imply that the packages evaluated are functionally identical or equally well suited for use by every organization or for a specific process.

To learn more about this research and to download a free executive summary, please visit the Ventana Research website.

Regards,

Ventana Research

Just a few years ago, the prevailing view in the software industry was that the category of business intelligence (BI) was mature and without room for innovation. Vendors competed in terms of feature parity and incremental advancements of their platforms. But since then business intelligence has grown to include analytics, data discovery tools and big data capabilities to process huge volumes and new types of data much faster. As is often the case with change, though, this one has created uncertainty. For example, only one in 11 participants in our benchmark research on big data analytics said that their organization fully agrees on the meaning of the term “big data analytics.”

There is little question that clear definitions of analytics and business intelligence as they are used in business today would be of value. But some IT analyst firms have tried to oversimplify the process of updating these definitions by merely combining a market basket of discovery capabilities under the label of analytics. In our estimation, this attempt is neither accurate nor useful. Discovery tools are only components of business intelligence, and their capabilities cannot accomplish all the tasks comprehensive BI systems can do. Some firms seem to want to reduce the field further by overemphasizing the visualization aspect of discovery. While visual discovery can help users solve basic business problems, other BI and analytic tools are available that can attack more sophisticated and technically challenging problems. In our view, visual discovery is one of four types of analytic discovery that can help organizations identify and understand the masses of data they accumulate today. But for many organizations visualization alone cannot provide them with the insights necessary to help make critical decisions, as interpreting the analysis requires expertise that mainstream business professionals lack.

In Ventana Research’s view, business intelligence is a technology managed by IT that is designed to produce information and reports from business data to inform business about the performance of activities, people and processes. It has provided and will continue to provide great value to business, but in itself basic BI will not meet the new generation of requirements that businesses face; they need not just information but guidance on how to take advantage of opportunities, address issues and mitigate the risks of subpar performance. Analytics is a component of BI that is applied to data to generate information, including metrics. It is a technology-based set of methodologies used by analysts as well as the information gained through the use of tools designed to help those professionals. These thoughtfully crafted definitions inform the evaluation criteria we apply in our new and comprehensive 2015 Analytics and Business Intelligence Value Index, which we will publish soon. As with all business tools, applications and systems we assess in this series of indexes, we evaluate the value of analytic and business intelligence tools in terms of five functional categories – usability, manageability, reliability, capability and adaptability – and two customer assurance categories – validation of the vendor and total cost of ownership and return on investment (TCO/ROI). We feature our findings in these seven areas of assessment in our Value Index research and reports. In the Analytics and Business Intelligence Value Index for 2015 we assess in depth the products of 15 of the leading vendors in today’s BI market.

The Capabilities category examines the breadth of functionality that products offer and assesses their ability to deliver the insights today’s enterprises need. For our analysis we divide this category into three subcategories for business intelligence: data, analytics and optimization. We explain each of them below.

The data subcategory of Capabilities examines data access and preparation along with supporting integration and modeling. New data sources are coming into being continually; for example, data now is generated by sensors in watches, smartphones, cars, airplanes, homes, utilities and an assortment of business, network, medical and military equipment. In addition, organizations increasingly are interested in behavioral and attitudinal data collected through various communication platforms. Examples include Web browser behavior, data mined from the Internet, social media and various survey and community polling data. The data access and integration process identifies each type of data, integrates it with all other relevant types, checks it all for quality issues, maps it back to the organization’s systems of record and master data, and manages its lineage. Master data management in particular, including newer approaches such as probabilistic matching, is a key component for creating a system that can combine data types across the organization and in the cloud to create a common organizational vernacular for the use of data.
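
To make the probabilistic-matching idea concrete, here is a minimal Python sketch that scores two hypothetical customer records, one from a CRM and one from an ERP system, on weighted field similarity and flags a probable duplicate. The field names, weights and threshold are illustrative assumptions, not a production master data management implementation.

```python
# Minimal illustration of probabilistic record matching for master data
# management: score two candidate records on weighted field similarity
# and flag a probable duplicate. Field names, weights and the threshold
# are hypothetical choices for illustration only.
from difflib import SequenceMatcher

WEIGHTS = {"name": 0.7, "city": 0.3}  # assumed relative importance of fields

def similarity(a: str, b: str) -> float:
    """Normalized similarity between two field values, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted similarity across fields; higher means more likely the same entity."""
    return sum(w * similarity(rec_a[f], rec_b[f]) for f, w in WEIGHTS.items())

crm_record = {"name": "Jon A. Smith", "city": "Boston"}
erp_record = {"name": "Jonathan Smith", "city": "Boston, MA"}

score = match_score(crm_record, erp_record)
verdict = "probable duplicate" if score > 0.75 else "treat as distinct"
print(f"match score {score:.2f}: {verdict}")
```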

Ascertaining which systems must be accessed, and how, is a primary challenge for today’s business intelligence platforms. A key part of data access is the user interface. Whether data appears in a Web browser or on a laptop, smartphone, tablet or wearable device, it must be presented in a manner optimized for that interface. Examining the user interface for business intelligence systems was a primary interest of our 2014 Mobile Business Intelligence Value Index. In that research, we learned that vendors are following divergent paths and that it may be hard for some to change course as they continue. Therefore, how a vendor handles mobile access and other emerging access methods affects its products’ value for particular organizations.

Once data is accessed, it must be modeled in a useful way. Data models in the form of OLAP cubes and predefined data relationships sometimes grow overly complex, but there is value in premodeling data in ways that make sense to business people, most of whom are not equipped to model it themselves. Defining data relationships and transforming data through complex manipulations are often needed, for instance, to define performance indicators that align with an organization’s business initiatives. These manipulations can include business rules or what-if analysis within the context of a model or external to it. Finally, models must be flexible so they do not hinder the work of organizational users. The value of premodeling data is that it provides a common view for business users so they need not redefine data relationships that have already been thoroughly considered.
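
As a simple illustration of a premodeled measure and a what-if manipulation, the hypothetical Python sketch below defines a revenue-per-unit KPI once, so every report shares the same calculation, and then recomputes it under an assumed discount rule. The data, KPI and discount rate are invented for illustration.

```python
# A tiny sketch of a premodeled measure plus a what-if manipulation.
# The revenue-per-unit KPI is defined once, so every report shares the
# same calculation; the 10% discount rule and the data are invented.
orders = [
    {"region": "East", "revenue": 1200.0, "units": 40},
    {"region": "West", "revenue": 900.0, "units": 25},
]

def revenue_per_unit(rows):
    """Shared KPI definition: total revenue divided by total units."""
    return sum(r["revenue"] for r in rows) / sum(r["units"] for r in rows)

baseline = revenue_per_unit(orders)

# What-if: apply a hypothetical 10% discount business rule and recompute.
discounted_orders = [{**r, "revenue": r["revenue"] * 0.9} for r in orders]
scenario = revenue_per_unit(discounted_orders)

print(f"baseline: {baseline:.2f}  with 10% discount: {scenario:.2f}")
```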

The analytics subcategory includes analytic discovery, prediction and integration. Discovery and prediction roughly map to the ideas of exploratory and confirmatory analytics, which I have discussed. Analytic discovery includes calculation and visualization processes that enable users to move quickly and easily through data to create the types of information they need for business purposes. Complementing it is prediction, which typically follows discovery. Discovery facilitates root-cause and historical analysis, but to look ahead and make decisions that produce desired business outcomes, organizations need to track various metrics and make informed predictions. Analytic integration encompasses customization of both discovery and predictive analytics and embedding them in other systems such as applications and portals.

The optimization subcategory includes collaboration, organizational management, information optimization, action and automation. Collaboration is a key consideration for today’s analytic platforms. It includes the ability to publish, share and coordinate various analytic and business intelligence functions. Notably, some recently developed collaboration platforms incorporate many of the characteristics of social platforms such as Facebook or LinkedIn. Organizational management addresses managing toward particular outcomes and sometimes provides performance indicators and scorecard frameworks. Action assesses how technology directly assists decision-making in an operational context. This includes gathering inputs and outputs for collaboration before and after a decision, predictive scoring that prescribes action and delivery of the information in the correct form to the decision-maker. Finally, automation triggers alerts based on statistical thresholds or business rules and should be managed as part of a workflow. Agent technology takes automation to a level that is more proactive and autonomous.
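
A minimal sketch of the alerting idea described above: the hypothetical check below raises an alert when the latest value of a metric breaches a fixed business rule or deviates several standard deviations from its recent history. The metric, thresholds and sample values are assumptions for illustration only.

```python
# A minimal sketch of automated alerting: raise an alert when the latest
# value of a metric breaches a fixed business rule or deviates more than
# k standard deviations from its recent history. The metric, thresholds
# and sample values are hypothetical.
from statistics import mean, stdev

def check_alerts(history, latest, rule_ceiling=10_000.0, k=3.0):
    """Return alert messages triggered by the latest observation."""
    alerts = []
    if latest > rule_ceiling:                       # business-rule trigger
        alerts.append(f"rule breach: {latest:,.0f} exceeds {rule_ceiling:,.0f}")
    if len(history) >= 2:                           # statistical trigger
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(latest - mu) > k * sigma:
            alerts.append(f"statistical outlier: {latest:,.0f} vs mean {mu:,.0f}")
    return alerts

daily_orders = [9200, 9400, 9100, 9300, 9250]
for message in check_alerts(daily_orders, latest=12_500):
    print("ALERT:", message)    # in practice this would feed a workflow or agent
```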

This broad framework of data, analytics and optimization fits with a process orientation to business analytics that I have discussed. Our benchmark research on information optimization indicates that the people and process dimensions of performance are less well developed than the information and technology aspects, and thus a focus on the people and process dimensions of business intelligence and analytics will be beneficial.

In our view, it’s important to consider business intelligence software in a broad business context rather than in artificially separate categories that are designed for IT only. We advise organizations seeking to gain a competitive edge to adopt a multifaceted strategy that is business-driven, incorporates a complete view of BI and analytics, and uses the comprehensive evaluation criteria we apply.

Regards,

Ventana Research

The idea of not focusing on innovation is heretical in today’s business culture and media. Yet a recent article in The New Yorker suggests that today’s society focuses too much on innovation and technology. The same may be true of technology within business organizations. Our research provides evidence for my claim.

My analysis of our benchmark research into information optimization shows that organizations perform better in the technology and information dimensions than in the people and process dimensions. They face a flood of information that continues to increase in volume and frequency and must use technology to manage and analyze it in the hope of improving their decision-making and competitiveness. It is understandable that many see this as foremost an IT issue. But proficiency in the use of technology and even statistical knowledge are not the only capabilities needed to optimize an organization’s use of information and analytics. Organizations also need a framework that complements the usual analytical modeling to ensure that analytics are used correctly and deliver the desired results. Without a process for getting to the right question, users can go off in the wrong direction, producing results that cannot solve the problem.

In terms of business analytics strategy, getting to the right question is a matter of defining goals and terms; when this is done properly, the “noise” of differing meanings is reduced and people can work together efficiently. As we all know, many terms, especially new ones, mean different things to different people, and this can be an impediment to teamwork and the achievement of business goals. Our research into big data analytics shows a significant gap in understanding here: Fewer than half of organizations have internal agreement on what big data analytics is. This lack of agreement is a barrier to building a strong analytic process. The best practice is to take time to discover what people really want to know; describing something in detail ensures that everyone is on the same page. Strategic listening is a critical skill, and done right it enables analysts to identify, craft and focus the questions that the organization needs answered through the analytic process.

To develop an effective process and create an adaptive mindset, organizations should instill a Bayesian sensibility. Bayesian analysis, also called posterior probability analysis, starts with a prior probability (an initial belief) and updates it as new evidence arrives to produce a posterior probability. In a practical sense, it’s about updating a hypothesis when given new information; it’s about taking all available information and finding where it converges. This is a flexible approach in which beliefs are revised as new information is presented; it values both data and intuition. This mindset also instills strategic listening into the team and into the organization.
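
As a worked illustration of this kind of updating, the short Python sketch below applies Bayes’ theorem to a hypothetical belief about a marketing campaign, revising the probability twice as new weekly evidence arrives. The prior and likelihood values are invented for the example.

```python
# A worked sketch of Bayesian updating with invented numbers: start from a
# prior belief that a marketing campaign lifts sales, then revise it twice
# as new weekly evidence arrives, using Bayes' theorem.
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]"""
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / evidence

belief = 0.30            # prior: initial belief that the campaign works
p_lift_if_true = 0.80    # chance of observing this week's lift if it does
p_lift_if_false = 0.20   # chance of observing the same lift by chance alone

belief = update(belief, p_lift_if_true, p_lift_if_false)
print(f"belief after week 1: {belief:.2f}")   # about 0.63

belief = update(belief, p_lift_if_true, p_lift_if_false)
print(f"belief after week 2: {belief:.2f}")   # about 0.87
```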

For business analytics, the more you know about the category you’re dealing with, the easier it is to separate valuable information and hypotheses from noise. Category knowledge allows you to look at the data from a different perspective and to bring existing domain knowledge to bear. This in and of itself is a Bayesian approach, and it allows the analyst to iteratively take the investigation in the right direction. This is not to say that intuition should be the analytic starting point. Data is the starting point, but a hypothesis is needed to make sense of the data. Physicist Enrico Fermi pointed out that measurement is the reduction of uncertainty. Analysts should start with a hypothesis and try to disprove it rather than prove it. From there, iteration is needed to come as close to the truth as possible. Starting with a gut feeling and trying to prove it is the wrong approach: the results are rarely surprising, and the analysis is likely to add nothing new. Let the data guide the analysis rather than predetermined beliefs. Technological innovations in exploratory analytics and machine learning support this idea and encourage a data-driven approach.

Bayesian analysis has had a great impact not only on statistics and market insights in recent years but also on how we view important historical events. It is consistent with modern thinking in the fields of technology and machine learning, as well as behavioral economics. For those interested in how the Bayesian philosophy is taking hold in many different disciplines, I recommend the book The Theory That Would Not Die by Sharon Bertsch McGrayne.

A good analytic process, however, needs more than a sensibility for how to derive and think about questions; it needs a tangible method to address the questions and derive business value from the answers. The method I propose can be framed in four steps: what, so what, now what and then what. Moving beyond the “what” (i.e., measurement and data) to the “so what” (i.e., insights) should be a goal of any analysis, yet many organizations are still turning out analysis that does nothing more than state the facts. Maybe 54 percent of people in a study prefer white houses, but why does anyone care? Analysis must move beyond mere findings to answer critical business questions and provide informed insights, implications and ideally full recommendations. That said, if organizations cannot get the instrumentation and the data right, findings and recommendations are subject to scrutiny.

The analytics professional should make sure that the findings, implications and recommendations of the analysis are heard by strategic and operational decision-makers. This is the “now what” step and includes business planning and implementation decisions that are driven by the analytic insights. If those insights do not lead to decision-making or action, the analytic effort has no value. There are a number of things that the analyst can do to make the information heard. A compelling story line that incorporates storytelling techniques, animation and dynamic presentation is a good start. Depending on the size of the initiative, professional videography, implementation of learning systems and change management tools also may be used.

The “then what” represents a closed-loop process in which insights and new data are fed back into the organization’s operational systems. This can take the form of institutional knowledge and learning in the usual human sense, which is an imperative for organizations. Our benchmark research into big data and business analytics shows a need for this: Skills and training are substantial obstacles to using big data (for 79% of organizations) and analytics (77%). The process also resembles machine learning: as new information is brought into the organization, the organization as a whole learns and adapts to current business conditions. This is the goal of the closed-loop analytic process.

Our business technology innovation research finds analytics among the top three priorities in three out of four (74%) organizations; collaboration is a top-three priority in 59 percent. Both analytics and collaboration have a process orientation that uses technology as an enabler of the process. The sooner organizations implement a process framework, the sooner they can achieve success in their analytic efforts. To implement a successful framework such as the one described above, organizations must realize that innovation is not the top priority; rather, they need the ability to use innovation to support an adaptable analytic process. The benefits will be wide-ranging, including better understanding of objectives, more targeted analysis, greater analytical depth and analytical initiatives that have a real impact on decision-making.

Regards,

Ventana Research

Our benchmark research into business technology innovation shows that analytics ranks first or second as a business technology innovation priority in 59 percent of organizations. Businesses are moving budgets and responsibilities for analytics closer to sales operations, often in the form of so-called shadow IT organizations that report into decentralized and autonomous business units rather than a central IT organization. New technologies such as in-memory systems (50%), Hadoop (42%) and data warehouse appliances (33%) are the top back-end technologies being used to acquire a new generation of analytic capabilities. They are enabling new possibilities including self-service analytics, mobile access, more collaborative interaction and real-time analytics. In 2014, Ventana Research helped lead the discussion around topics such as information optimization, data preparation, big data analytics and mobile business intelligence. In 2015, we will continue to cover these topics while adding new areas of innovation as they emerge.

Three key topics lead our 2015 business analytics research agenda. The first focuses on cloud-based analytics. In our benchmark research on information optimization, nearly all (97%) organizations said it is important or very important to simplify information access for both their business and their customers. Part of the challenge in optimizing an organization’s use of information is to integrate and analyze data that originates in the cloud or has been moved there. This issue has important implications for information presentation, where analytics are executed and whether business intelligence will continue to move to the cloud in more than a piecemeal fashion. We are currently exploring these topics in our new benchmark research on analytics and data in the cloud. Coupled with the issue of cloud use is the proliferation of embedded analytics and the imperative for organizations to provide scalable analytics within the workflow of applications. A key question we’ll try to answer this year is which will gain a competitive advantage: companies that have focused primarily on operational cloud applications at the expense of developing their analytics portfolio, or those that have focused more on analytics.

The second research agenda item is advanced analytics. It may be useful to divide this category into machine learning and predictive analytics, which I have discussed and covered in our benchmark research on big data analytics. Predictive analytics has long been available in some sectors of the business world, and in our research two-thirds (68%) of organizations that use it said it provides a competitive advantage. Programming languages such as R, the use of Predictive Model Markup Language (PMML), inclusion of social media data in prediction, massive-scale simulation, and right-time integration of scoring at the point of decision-making are all important advances in this area. Machine learning has also been around for a long time, but it wasn’t until the instrumentation of big data sources and advances in technology that it made sense to use it outside academic environments. Even as the technology landscape evolves, it is becoming more fragmented and complex; to simplify it, software designers will need innovative uses of machine learning to mask the underlying complexity through layers of abstraction. A technology such as Spark, out of the AMPLab at UC Berkeley, is still immature, but it promises to enable increasing uses of machine learning on big data. Areas such as sourcing data and preparing data for analysis must be simplified so analysts are not overwhelmed by big data.
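
To illustrate what right-time scoring at the point of decision can look like, the Python sketch below applies hard-coded logistic coefficients, standing in for a model exported from a training environment (for example via PMML), to an incoming customer record and routes it based on the score. The features, coefficients and threshold are hypothetical.

```python
# A minimal sketch of right-time scoring at the point of decision: apply the
# coefficients of a previously trained model (hard-coded here as a stand-in
# for one exported from a training environment, e.g. via PMML) to an incoming
# record and route it based on the score. Features, coefficients and the
# threshold are hypothetical.
import math

COEFFICIENTS = {"intercept": -2.0, "recent_visits": 0.45, "cart_value": 0.003}

def churn_risk(record):
    """Logistic score in [0, 1] computed from the pre-trained coefficients."""
    z = COEFFICIENTS["intercept"]
    z += COEFFICIENTS["recent_visits"] * record["recent_visits"]
    z += COEFFICIENTS["cart_value"] * record["cart_value"]
    return 1.0 / (1.0 + math.exp(-z))

incoming = {"customer_id": "C-1042", "recent_visits": 1, "cart_value": 120.0}
score = churn_risk(incoming)
action = "route to retention offer" if score > 0.5 else "no intervention"
print(f"{incoming['customer_id']}: risk {score:.2f} -> {action}")
```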

Our third area of focus is the user experience in business intelligence tools. Simplification and optimization of information in a context-sensitive manner are paramount. An intuitive user experience can advance the people and process dimensions of business, which have lagged technology innovation according to our research in multiple areas. New approaches coming from business end users, especially the tech-savvy millennial generation, are pushing the envelope here. In particular, mobility and collaboration are enabling new user experiences in both business organizations and society at large. Adding to this is data collected in more forms, such as location analytics (on which we have done research), individual and societal relationships, information and popular brands. How business intelligence tools incorporate such information and make it easy to prepare, design and consume for different organizational personas is not just an agenda focus but also a focus of our 2015 Analytics and Business Intelligence Value Index, to be published in the first quarter of the year.

This shapes up as an exciting year. I welcome any feedback you have on this research agenda and look forward to providing research, collaboration and education throughout 2015.

Regards,

Ventana Research
