
Our benchmark research into business technology innovation shows that analytics ranks first or second as a business technology innovation priority in 59 percent of organizations. Businesses are moving budgets and responsibilities for analytics closer to sales operations, often in the form of so-called shadow IT organizations that report into decentralized and autonomous business units rather than a central IT organization. New technologies such as in-memory systems (50%), Hadoop (42%) and data warehouse appliances (33%) are the top back-end technologies being used to acquire a new generation of analytic capabilities. They are enabling new possibilities including self-service analytics, mobile access, more collaborative interaction and real-time analytics. In 2014, Ventana Research helped lead the discussion around topics such as information optimization, data preparation, big data analytics and mobile business intelligence. In 2015, we will continue to cover these topics while adding new areas of innovation as they emerge.

Three key topics lead our 2015 business analytics research agenda. The first focuses on cloud-based analytics. In our benchmark research on information optimization, nearly all (97%) organizations said it is important or very important to simplify information access for both their business and their customers. Part of the challenge in optimizing an organization's use of information is to integrate and analyze data that originates in the cloud or has been moved there. This issue has important implications for information presentation, where analytics are executed, and whether business intelligence will continue to move to the cloud in more than a piecemeal fashion. We are currently exploring these topics in our new benchmark research on analytics and data in the cloud. Coupled with the issue of cloud use is the proliferation of embedded analytics and the imperative for organizations to provide scalable analytics within the workflow of applications. A key question we'll try to answer this year is whether companies that have focused primarily on operational cloud applications at the expense of developing their analytics portfolio, or those that have focused more on analytics, will gain a competitive advantage.

The second research agenda item is advanced analytics. It may be useful to divide this category into machine learning and predictive analytics, which I have discussed and covered in our benchmark research on big data analytics. Predictive analytics has long been available in some sectors of the business world, and two-thirds (68%) of the organizations in our research that use it said it provides a competitive advantage. Programming languages such as R, the use of Predictive Model Markup Language (PMML), inclusion of social media data in prediction, massive-scale simulation, and right-time integration of scoring at the point of decision-making are all important advances in this area. Machine learning has also been around for a long time, but it wasn't until the instrumentation of big data sources and advances in technology that it made sense to use it in more than academic environments. At the same time as the technology landscape is evolving, it is getting more fragmented and complex; to simplify it, software designers will need innovative uses of machine learning to mask the underlying complexity through layers of abstraction. A technology such as Spark, out of the AMPLab at Berkeley, is still immature, but it promises to enable increasing uses of machine learning on big data. Areas such as sourcing and preparing data for analysis must be simplified so analysts are not overwhelmed by big data.
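To make the machine-learning point more concrete, here is a minimal sketch of the kind of workload Spark's MLlib is designed for: clustering numeric records with k-means through Spark's Java API. The input file name, number of clusters and iteration count are assumptions made for the example, not details of any product discussed above.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

public class KMeansSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("KMeansSketch").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Assumed input: a CSV file of numeric columns, e.g. customer usage metrics.
        JavaRDD<Vector> points = sc.textFile("metrics.csv").map(line -> {
            String[] fields = line.split(",");
            double[] values = new double[fields.length];
            for (int i = 0; i < fields.length; i++) {
                values[i] = Double.parseDouble(fields[i]);
            }
            return Vectors.dense(values);
        });
        points.cache();

        // Cluster into 3 groups with up to 20 iterations; both numbers are arbitrary here.
        KMeansModel model = KMeans.train(points.rdd(), 3, 20);
        for (Vector center : model.clusterCenters()) {
            System.out.println("Cluster center: " + center);
        }

        sc.stop();
    }
}
```

The same few lines can run on a laptop or be pointed at a Hadoop cluster by changing the master URL and the input path, which is part of Spark's appeal for machine learning on big data.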

Our third area of focus is the user experience in business intelligence tools. Simplification and optimization of information in a context-sensitive manner are paramount. An intuitive user experience can advance the people and process dimensions of business, which have lagged technology innovation according to our research in multiple areas. New approaches coming from business end users, especially the tech-savvy millennial generation, are pushing the envelope here. In particular, mobility and collaboration are enabling new user experiences in both business organizations and society at large. Adding to this is data collected in more forms, such as location analytics (on which we have done research), individual and societal relationships, information and popular brands. How business intelligence tools incorporate such information and make it easy to prepare, design and consume for different organizational personas is not just an agenda focus but also one focus of our 2015 Analytics and Business Intelligence Value Index to be published in the first quarter of the year.

This shapes up as an exciting year. I welcome any feedback you have on this research agenda and look forward to providing research, collaborating and educating with you in 2015.

Regards,

Ventana Research

Actuate, a company known for powering BIRT, the open source business intelligence technology, has been delivering large-scale consumer and industrial applications for more than 20 years. In December the company announced it would be acquired by OpenText of Ontario, Canada. OpenText is Canada's largest software vendor, with more than 8,000 employees and a portfolio of enterprise information management products, and it serves primarily large companies. The attraction of Actuate for such a company lies in a number of its legacy assets, its more recent acquisitions and developments, and its existing customer base. Actuate was also awarded a 2014 Ventana Research Business Leadership Award.

Actuate's foundational asset is BIRT (Business Intelligence and Reporting Tools) and its developer community. With more than 3.5 million developers and 13.5 million downloads, the BIRT developer environment is used in a variety of companies on a global basis. The BIRT community includes Java developers as well as sophisticated business intelligence design professionals, which I discussed in my outline of analytics personas. BIRT is a key project of the Eclipse Foundation, whose open source integrated development environment is familiar to many developers. BIRT provides a graphical interface to build reports at a granular level, and being Java-based, it provides ways to grapple with data and build data connections in a virtually limitless fashion. While newer programming and scripting languages, such as Python and Ruby, are gaining favor, Java remains a primary coding language for large-scale applications. One of the critical capabilities for business intelligence tools is to provide information in a visually compelling and easily usable format. BIRT can provide pixel-perfect reporting and granular adjustments to visualization objects. This benefit is coupled with the advantage of the open source approach: availability of skilled technical resources on a global basis at relatively low cost.
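To illustrate how BIRT is typically embedded in a Java application, the sketch below uses the open source BIRT Report Engine API to open a report design and render it to HTML. The design file name, parameter name and output path are placeholders for this example.

```java
import org.eclipse.birt.core.framework.Platform;
import org.eclipse.birt.report.engine.api.EngineConfig;
import org.eclipse.birt.report.engine.api.HTMLRenderOption;
import org.eclipse.birt.report.engine.api.IReportEngine;
import org.eclipse.birt.report.engine.api.IReportEngineFactory;
import org.eclipse.birt.report.engine.api.IReportRunnable;
import org.eclipse.birt.report.engine.api.IRunAndRenderTask;

public class BirtRenderSketch {
    public static void main(String[] args) throws Exception {
        EngineConfig config = new EngineConfig();
        Platform.startup(config);

        IReportEngineFactory factory = (IReportEngineFactory) Platform
                .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
        IReportEngine engine = factory.createReportEngine(config);

        // Open a report design built in the BIRT designer (placeholder file name).
        IReportRunnable design = engine.openReportDesign("sales_summary.rptdesign");
        IRunAndRenderTask task = engine.createRunAndRenderTask(design);

        // Report parameters can be passed to the task before it runs (placeholder parameter).
        task.setParameterValue("Region", "EMEA");

        // Render to HTML; other output formats are configured with different render options.
        HTMLRenderOption options = new HTMLRenderOption();
        options.setOutputFileName("sales_summary.html");
        options.setOutputFormat("html");
        task.setRenderOption(options);

        task.run();
        task.close();
        engine.destroy();
        Platform.shutdown();
    }
}
```

This run-and-render pattern is how developers get BIRT's pixel-perfect output into their own large-scale applications without hand-coding the presentation layer.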

Last year Actuate introduced iHub 3.1, a deployment server that integrates data from multiple sources and distributes content to end users. iHub has connectors to most database systems, including modern approaches such as Hadoop. While Actuate provides the most common connectors out of the box, BIRT and the Java framework allow data from virtually any system to be brought into the fold. This approach to big data becomes particularly compelling for its ability to integrate both large-scale data and diverse data sources. The challenge is that the work sometimes requires customization, but for large-scale enterprise applications, developers often do this to deliver capabilities that would not otherwise be accessible to end users. Our benchmark research into big data analytics shows that organizations need to access many data sources for analysis, including transactional data (60%), external data (50%), content (49%) and event-centric data (48%).

In 2014, Actuate introduced iHub F-Type, which enables users to build reports, visualizations and applications and deploy them in the cloud. F-Type mitigates the need to build a separate deployment infrastructure and can act as both a "sandbox" for development and a broader production environment. Using REST-based interfaces, application developers can use F-Type to prototype and scale embedded reports for their custom applications. F-Type is delivered in the cloud, has full enterprise capabilities out of the box, and is free up to a metered output capacity of 50MB. The approach uses output metering rather than the input metering used by some technology vendors. Output metering encourages scaling of data and focuses organizations on which specific reports they should deploy to their employees and customers.
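To show the general pattern of embedding a hosted report through REST, here is a sketch that requests rendered output from a cloud report service over HTTP. The host, path, query parameters and token are hypothetical; they illustrate the embedding pattern rather than document F-Type's actual API.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class EmbeddedReportSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and query parameters; a real deployment would use
        // the vendor's documented REST API and a proper authentication token.
        URL url = new URL("https://reports.example.com/api/v1/reports/sales-summary"
                + "?format=html&region=EMEA");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Bearer <api-token>");

        if (conn.getResponseCode() == 200) {
            // Save the rendered report so it can be served inside a custom application.
            try (InputStream in = conn.getInputStream()) {
                Files.copy(in, Paths.get("sales-summary.html"),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        } else {
            System.err.println("Report request failed: " + conn.getResponseCode());
        }
        conn.disconnect();
    }
}
```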

Also in 2014, Actuate introduced BIRT Analytics 5.0, a self-service discovery platform that includes advanced analytic capabilities. In my review of BIRT Analytics, I noted its abilities to handle large data volumes and do intuitive predictive analytics. Organizations in our research said that predictive analytics provides advantages such as achieving competitive advantage (for 68%), new revenue opportunities (55%) and increased profitability (52%). Advances in BIRT Analytics 5.0 include integration with iHub 3.1, so developers can bring self-service discovery into their dashboards, and public APIs for use in custom applications.

The combination of iHub, the F-Type freemium model, BIRT Analytics and the granular controls that BIRT provides to developers and users presents a coherent strategy, especially in the context of embedded applications. Actuate CEO Pete Cittadini asserts that the company has the most APIs of any business intelligence vendor. That is a good position to be in, especially since embedded technology is becoming important for custom applications and the so-called Internet of Things. The ability to make a call into another application, instead of custom-coding the function within the workflow of an end-user application, cuts developer time significantly. Furthermore, the robustness of the Actuate platform enables applications to scale almost without limit.

OpenText and Actuate have similarities, such as the maturity of the organizations and the types of large clients they service. It will be interesting to see how Actuate's API strategy will impact the next generation of OpenText's analytic applications and to what degree Actuate remains an independent business unit in marketing to customers. As a company that has been built through acquisitions, OpenText has a mature onboarding process that usually keeps a new business unit operating separately. OpenText CEO Mark Barrenechea has outlined his perspective on the acquisition, which will bolster the company's portfolio for information optimization and analytics, or what it calls enterprise information management. In fact, our benchmark research on information optimization finds that analytics is the top driver for deploying information in two-thirds of organizations. The difference this time may be that today's enterprises are asking for more integrated information that embeds analytics, rather than different interfaces for each application or tool. The acquisition of Actuate by OpenText has now closed, and the changes that come to Actuate should be watched closely to determine its path forward and its potential to deliver higher value for customers within OpenText.

Regards,

Ventana Research

In 2014, IBM announced Watson Analytics, which uses machine learning and natural language processing to unify and simplify the user experience in each step of the analytic process: data acquisition, data preparation, analysis, dashboarding and storytelling. After a relatively short beta testing period involving more than 22,000 users, IBM released Watson Analytics for general availability in December. There are two editions: the "freemium" trial version allows 500MB of data storage and access to files of fewer than 100,000 rows and 50 columns; the personal edition is a monthly subscription that enables larger files and more storage.

Its initial release includes functions to explore, predict and assemble data. Many of the features are based on IBM’s SPSS Analytic Catalyst, which I wrote about and which won the 2013 Ventana Research Technology Innovation Award for business analytics. Once data is uploaded, the explore function enables users to analyze data in an iterative fashion using natural language processing and simple point-and-click actions. Algorithms decide the best fit for graphics based on the data, but users may choose other graphics as needed. An “insight bar” shows other relevant data that may contain insights such as potential market opportunities.

The ability to explore data through visualizations with minimal knowledge is a primary aim of modern analytics tools. Because the explore function incorporates natural language processing, which other tools in the market lack, IBM makes analytics accessible to users without requiring them to drag and drop dimensions and measures across the screen. This feature should not be underestimated; usability is the most widely cited buying criterion for analytics tools in our benchmark research on next-generation business intelligence, chosen by 63% of organizations.

The predict capability of Watson Analytics focuses on driver analysis, which is useful in a variety of circumstances such as sales win and loss, market lift analysis, operations and churn analysis. In its simplest form, a driver analysis aims to understand causes and effects among multiple variables. This is a complex process that most organizations leave to their resident statistician or outsource to a professional analyst. By examining the underlying data characteristics, the predict function can address data sets, including what may be considered big data, with an appropriate algorithm. The benefit for nontechnical users is that Watson Analytics makes the decision on selecting the algorithm and presents results in a relatively nontechnical manner such as spiral diagrams or tree diagrams. Having absorbed the top-level information, users can drill down into the top key drivers. This enables users to see the relative influence of attributes and the interactions between them. Understanding interactions is an important part of driver analysis since causal variables often move together (a challenge known as multicollinearity), and it is sometimes hard to distinguish what is actually causing a particular outcome. For instance, an analysis may blame the customer service department for a product defect and point to it as the primary driver of customer defection. Accepting this result, a company may mistakenly try to fix customer service when a product issue needs to be addressed. This approach also addresses the challenge of Simpson's paradox, in which a trend that appears in different groups of data disappears or reverses when the groups are combined; that paradox is a hindrance for some visualization tools in the market.
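To see why Simpson's paradox matters for driver analysis, here is a small worked example with invented retention numbers: a service policy looks better within each customer segment yet worse overall, because the two segments are mixed in very different proportions.

```java
public class SimpsonsParadoxSketch {
    // Retention counts for two service policies across two customer segments.
    // The numbers are invented purely to illustrate the arithmetic.
    public static void main(String[] args) {
        // Segment 1 (small accounts): policy A retained 81 of 87, policy B 234 of 270.
        // Segment 2 (large accounts): policy A retained 192 of 263, policy B 55 of 80.
        double[][] retainedA = {{81, 87}, {192, 263}};
        double[][] retainedB = {{234, 270}, {55, 80}};

        double totalA = 0, countA = 0, totalB = 0, countB = 0;
        for (int segment = 0; segment < 2; segment++) {
            double rateA = retainedA[segment][0] / retainedA[segment][1];
            double rateB = retainedB[segment][0] / retainedB[segment][1];
            System.out.printf("Segment %d: policy A %.1f%%, policy B %.1f%%%n",
                    segment + 1, 100 * rateA, 100 * rateB);
            totalA += retainedA[segment][0]; countA += retainedA[segment][1];
            totalB += retainedB[segment][0]; countB += retainedB[segment][1];
        }
        // Policy A wins within every segment yet loses once the segments are pooled,
        // because A was applied mostly to the harder-to-retain large accounts.
        System.out.printf("Combined:  policy A %.1f%%, policy B %.1f%%%n",
                100 * totalA / countA, 100 * totalB / countB);
    }
}
```

With these figures, policy A wins in both segments (93.1% vs. 86.7% and 73.0% vs. 68.8%) but loses in the combined view (78.0% vs. 82.6%), exactly the reversal that trips up tools showing only pooled trends.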

Once users have analyzed the data sufficiently and want to create and share their analysis, the assemble function enables them to bring together various dashboard visualizations on a single screen. Currently, Watson Analytics does such sharing (as well as comments related to the visualizations) via email. In the future, it would be good to see capabilities such as annotation and cloud-based sharing in the product.

Full data preparation capabilities are not yet integrated into Watson Analytics. Currently, it includes a data quality report that gives confidence levels for the current data based on its cleanliness, and basic sort, transform and relabeling functions are incorporated as well. I assume that IBM has much more in the works here. For instance, its DataWorks cloud service offers APIs for some of the best data preparation and master data management available today. DataWorks can mask data at the source and do probabilistic matching against many sources, both cloud and on-premises. This is a major challenge organizations face when they need to conduct analytics across many data sets. For instance, in multichannel marketing, each individual customer may have many email addresses as well as different mailing addresses, phone numbers and identifiers for social media. A so-called "golden record" needs to be created so all such information can be linked together. Conceptually, the data becomes one long row related to that golden record, rather than multiple unassociated shorter rows. This data needs to be brought into a company's own internal systems, and personally identifiable information must be stripped out before anything moves into a public domain. In a probabilistic matching system, data is matched not on one field but through associations of data that give levels of certainty that records should be merged. This differs from past approaches and is one reason for significant innovation in the category. Multiple startups have entered the data preparation space to address the need for a better user experience in data preparation. Such needs have been documented as one of the foundational issues facing the world of big data. Our benchmark research into information optimization shows that data preparation (47%) and quality and consistency (45%) are the most time-consuming tasks for organizations in analytics.
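As a minimal sketch of the probabilistic matching idea, the code below compares two customer records field by field, adds a weight for each agreeing field and merges the records into a golden record only when the combined score passes a threshold. The fields, weights and threshold are invented for illustration; a production system would use far more fields and much more sophisticated similarity measures.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ProbabilisticMatchSketch {
    // Weights reflect how strongly agreement on a field suggests the same person;
    // the values and the threshold are arbitrary for this sketch.
    private static final Map<String, Double> WEIGHTS = new LinkedHashMap<>();
    static {
        WEIGHTS.put("email", 0.45);
        WEIGHTS.put("phone", 0.30);
        WEIGHTS.put("lastName", 0.15);
        WEIGHTS.put("postalCode", 0.10);
    }
    private static final double MERGE_THRESHOLD = 0.50;

    static double matchScore(Map<String, String> a, Map<String, String> b) {
        double score = 0.0;
        for (Map.Entry<String, Double> weight : WEIGHTS.entrySet()) {
            String left = a.get(weight.getKey());
            String right = b.get(weight.getKey());
            if (left != null && left.equalsIgnoreCase(right)) {
                score += weight.getValue();
            }
        }
        return score;
    }

    public static void main(String[] args) {
        Map<String, String> record1 = new LinkedHashMap<>();
        record1.put("email", "j.smith@example.com");
        record1.put("phone", "555-0101");
        record1.put("lastName", "Smith");
        record1.put("postalCode", "94105");

        Map<String, String> record2 = new LinkedHashMap<>();
        record2.put("email", "jsmith@work.example.com"); // a second email for the same person
        record2.put("phone", "555-0101");
        record2.put("lastName", "Smith");
        record2.put("postalCode", "94105");

        // Phone, last name and postal code agree (0.55), so the records merge
        // even though the email addresses differ.
        double score = matchScore(record1, record2);
        System.out.printf("Match score: %.2f -> %s%n", score,
                score >= MERGE_THRESHOLD ? "merge into golden record" : "keep separate");
    }
}
```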

Watson Analytics is deployed on IBM's SoftLayer cloud technology and is part of a push to move its analytic portfolio into the cloud. Early in 2015 the company plans to move its SPSS and Cognos products into the cloud via a managed service, thus offloading tasks such as setup, maintenance and disaster recovery management. Watson Analytics will be offered as a set of APIs, much as the broader Watson cognitive computing platform has been. Last year, IBM said it would move almost all of its software portfolio to the cloud via its Bluemix service platform. These cloud efforts, coupled with the company's substantial investment in partner programs with developers and universities around the world, suggest that Watson may power many next-generation cognitive computing applications, a market estimated to grow into the tens of billions of dollars in the next several years.

Overall, I expect Watson Analytics to gain more attention and adoption in 2015 and beyond. Its design philosophy and user experience are innovative, but work must be done in some areas to make it a tool that professionals use in their daily work. Given the resources IBM is putting into the product and the massive amounts of product feedback it is receiving, I expect initial release issues to be worked out quickly through the continuous release cycle. Once they are, Watson Analytics will raise the bar on self-service analytics.

Regards,

Ventana Research
