Jaspersoft Business Intelligence Suite competes in the open source and broader BI market. Its customer base consists mostly of small and midsized businesses, along with OEMs and SaaS providers that can embed Jaspersoft code directly into their offerings. Earlier this summer, the company introduced Jaspersoft 4.7, which features advancements in interactive reporting, big data access and mobile business intelligence for Android. The 4.7 release brings interactive features such as segmenting and filtering, though these are not as user-friendly as those found in some of the other tools in the market. As many of our benchmark research reports show, usability is becoming more important in the tools environment as business users become more proactive in the selection and use of these tools. Interactive reporting is quickly becoming table stakes, but we’re still not seeing the advancements that will lead to mass business user adoption, as my colleague Mark Smith recently noted.

To handle more big data, Jaspersoft adopted MongoDB, a real-time-oriented NoSQL database. Users can now display log data directly on a dashboard, and developers can build reports against it. This addition augments Jaspersoft’s native access to other big-data NoSQL approaches such as Hadoop and Cassandra. Jaspersoft also partners with big-data companies including IBM Netezza, DataStax, HP Vertica, 10gen and Google BigQuery. The importance of connecting with a variety of data sources is highlighted by our recent big data benchmark research, in which companies say customer data (68%) and transactional data (60%) top the list of critical data sources.
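
For readers curious what pulling log data out of MongoDB looks like in general (this is a generic sketch using the open source pymongo driver, not Jaspersoft’s connector API), the example below counts recent errors by service so the result could feed a dashboard or report. The database, collection and field names are hypothetical.

```python
# Minimal sketch: aggregate the last 24 hours of error events from a MongoDB
# log collection. Names ("analytics", "web_logs", "service", etc.) are made up.
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["analytics"]["web_logs"]

since = datetime.utcnow() - timedelta(hours=24)
pipeline = [
    {"$match": {"timestamp": {"$gte": since}, "level": "ERROR"}},
    {"$group": {"_id": "$service", "errors": {"$sum": 1}}},
    {"$sort": {"errors": -1}},
]
for row in logs.aggregate(pipeline):
    print(row["_id"], row["errors"])
```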

Release 4.7 lets users build applications and view reports natively on Android mobile devices, adding to the native support already available for Apple iOS. An SDK for Android allows developers to embed BI functionality directly at the device level. I tried out the mobile version on my Apple iPhone 4S and found that it lacked the auto-sizing and interactivity found in other approaches today.

I also tested the trial of the cloud version of the software and was advised that it would take an hour to provision the resources. While I waited, I downloaded the 64-bit version of Jaspersoft 4.7 for Windows. The installation was relatively seamless; the only challenge was that no user name and password were clearly provided. Fortunately, there was a live operator I could call to resolve the issue immediately. The trial included sample data to play with, which was nice, but the tool lacked many of the features, such as search and collaboration, that we are seeing in other tools on the market. I can see how Jaspersoft may be a nice addition to a SaaS offering or for an OEM, but as a standalone BI tool, Jaspersoft has an uphill climb. When the cloud version arrived as promised, the demo was essentially the same as the downloaded version, apart from minor hang times.

While Jaspersoft 4.7 helps move the needle in the right direction, I’d like to see further development in ease of use (especially in the mobile area) as well as in areas such as search and collaboration. As our NextGen BI benchmark research will reveal, expectations for mobile and collaborative BI systems are high, but actual time-to-value is still wanting. Overall, however, the ability to embed Jaspersoft in OEM applications, along with the rise in cloud computing and SaaS, should give the company ample space for growth, and the advancements in the latest release in reporting, big data and mobile should help it take advantage of the hottest trends. Companies looking for a low-cost BI option or looking to embed basic BI functionality into their own applications should consider Jaspersoft.

Regards,

Tony Cosentino – VP and Research Director

I had a refreshing call this morning with a vendor that did not revolve around integration of systems, types of data, and the intricacies of NoSQL approaches. Instead, the discussion was about how its business users analyze an important and complex problem and how the company’s software enables that analysis. The topic of big data never came up, and it was not needed, because the conversation was business-driven and issue-specific.

By contrast, we get a lot of briefings that start with big data’s impact on business but devolve into details about how data is accessed and the technology architecture. Data access and integration are important, but when we talk about big data analytics, focusing on the business issues is even more critical. Our benchmark research into big data shows that companies widely use big data for storage (95%) and reporting (94%), but far fewer use it for data mining (55%) and what-if scenario modeling (49%). That must change. Descriptive analysis of big data is quickly turning into table stakes; the real competitive value of big data analytics lies in the latter two categories.

Not every big data vendor drowns its message in technospeak. IBM, for instance, stokes the imagination with analytical systems such as Watson and does a good job of bringing its business-focused story to a diverse audience through its Global Business Services arm. Some newer players paint compelling pictures as well. Companies such as PivotLink, PlanView and SuccessFactors (now part of SAP) deliver analytics stories from different organizational perspectives. Part of their advantage is that they start from a cloud and application perspective, but they also tell the analytics story in context of business, not in context of technology.

Providing that business perspective is a more difficult task for BI companies that have been pitching their software to IT departments for years, but even some of these have managed to buck this trend. Alteryx, for instance, differentiates itself by putting forward compelling industry-specific use cases, and espousing the concept of the data artisan. This right-brain/left-brain approach appeals to both the technical and business sides of the house. Datameer also does a good job of producing solid business use cases. Its recent advancements in visualization help the company paint the analytical picture from a business perspective. Unfortunately, other examples seem few and far between. Most companies are still caught pitching technology-centric solutions, despite the fact that, in the new world of analytics, it’s about business solutions, not features on a specification sheet.

This focus on business issues over technology is important because the business side of the house today controls more and more of the technology spending. While business managers understand business and often have a firm grasp of analytics, they don’t always understand or care about the intricacies of different processing techniques and data models. In our upcoming benchmark research on next-generation BI systems, whose data I’m currently analyzing, we see this power shift clearly. While IT still has veto power, decisions are being driven by business users and being ratified at the top of the organization.

The Ventana Research Maturity Model from our business analytics benchmark research shows that the analytics category is still immature, with only 15 percent of companies reaching the innovative level. So how do we begin to change this dialogue from a technology-driven discussion to a business-driven one? From the client perspective, it starts with a blue-sky approach, since the technological limitations that drove the old world of analytics no longer exist. This blank canvas may be framed by metrics such as revenue, profit and share of wallet, but the frame now extends into less tangible and forward-looking areas such as customer churn and brand equity. If these output metrics are the frame, then people, process, information and tools are the brushes with which we paint. The focal point of the piece is always the customer.

If a business has a hard time thinking in terms of a blank canvas, it can examine a number of existing cases that show the value of using big data analytics to illuminate customer behavior, web usage, security, location, fraud, regulation and compliance. Some of the bigger ones are briefly discussed in my recent blog entry on predictive analytics.

The big data industry, if we can call it that, is quickly moving from a focus on the technology stack to a focus on tangible business outcomes and time-to-value (TTV). The innovations of the last few years enable companies to take a blue-sky perspective and do things they had never thought possible. The key is to start with the business problem you are looking to solve; the technology will work itself out from there.

Regards,

Tony Cosentino

VP and Research Director

A study by the McKinsey Global Institute published earlier this year suggests a coming shortage of more than 140,000 workers with deep analytical skills and of more than 1.5 million data-literate managers. I’m not sure how the study defined these roles, but I’d guess that those with deep analytical skills are the people building the complex models, and the data-literate managers are the executives, middle managers and analysts who interpret the results and use the models to help drive business decisions. In other words, businesses are facing two skills gaps – one related to those producing the analytics, the other related to those using them for some type of discovery or review purpose.

The first skills gap is personified in the so-called data scientist – a creature that’s hard to find in the real world; according to LinkedIn, there are fewer than 825 in the United States, and most are in Silicon Valley working for technology or social media companies. Someone with the job title of data scientist should be able to bridge the divide between computer science, statistics and particular domain expertise and deliver an integrated systems approach to analytics. To give some context, the primary obstacle to the deployment and use of predictive analytics we identified in our predictive analytics benchmark research is architectural integration. The statistician who builds models rarely has the skill set to code them, which often leads to misalignment between the intent of the model designer and what the model actually does once it is deployed and used within business processes. Even worse, 83 percent of organizations do not have the skills training to produce their own predictive analytics.

As more BI vendors embed predictive analytics support in their portfolios, and with further adoption of the PMML standard, this architectural shortcoming should become less of an issue. But we are still left with understanding the mathematics, which is a challenge in 58 percent of organizations. In the world of big data and Hadoop, multiple companies are working to provide analytics platforms that leverage existing skill sets such as SQL, but the new skills needed for the supporting technologies around Hadoop remain a larger challenge. As the next generation of tools emerges, with more automation and integration of technology readily available, the bridge over this first skills gap will likely start to form.

The second skills gap relates to those using the models. With this group, we might expect a certain measure of understanding of data manipulation techniques, such as cross-tabulations, what-if scenarios and chart interpretation, but we shouldn’t expect a formal background in advanced statistics or probability theory. Our recent benchmark research into business analytics also points to a skills gap in this area. One interesting finding in that research, though perhaps not a surprise to anyone who reads our posts regularly, is that spreadsheets still dominate analytics today. Unfortunately, spreadsheets by themselves are not well equipped to handle the analytics of tomorrow’s organizations, which means managers will need to learn an entirely new set of tools. The new breed of analytics tools is visual and collaborative rather than tabular and siloed, but learning another skill set is seldom a simple task.

Today’s tool vendors and business leaders must work together to create self-guided, closed-loop systems that are intuitive and don’t require an advanced degree to learn. Where a process cannot be completely automated, systems should guide users on which data to pay attention to in order to make decisions. Our research shows that usability is becoming more and more important to business users, and this is where we expect to see work done to simplify tools. We still have a long way to go, as my colleague Mark Smith pointed out earlier this week.

Given the way people talk about the analytics skills shortage, we might advise all of our children to get a degree in mathematics or statistics. That may be a good idea, but it also seemed like a good idea to go to law school just a few years ago. While mathematics and statistics are important foundational skills, perhaps we should regard them as a means to an end rather than as ends in themselves. The McKinsey research shows a projected skills gap in these areas, but as with any model it rests on underlying assumptions. One assumption is that analytics tools vendors will not quickly fill in the gaps and allow analytics to be consumed at a lower skill level with better wizards, information and suggestions on applying analytics. The ability to monitor and contribute to this progress is a key reason Ventana Research continues its research and education in the area of business analytics.

Regards,

Tony Cosentino – Vice President and Research Director

Over the years Tibco has provided infrastructure for enterprise data integration and has built a substantial installed base. Now the company positions itself as supplying next-generation analytics for big data through service-oriented architecture (SOA). SOA has been around for a while; Ventana Research has been tracking it since 2006 and conducted benchmark research on SOA. But it remains a vaguely understood technology. Our research shows that SOA is not clearly defined in the market and that interpretations vary across the software industry. The basic function of an SOA is to provide common components and a common implementation that enable programmers to plug in and share applications through open application programming interfaces (APIs). In recent years, SOA has morphed into more of a general approach than a fixed set of standards. SOA architectures (though not always called SOA) are at the heart of modern platforms such as salesforce.com, Facebook and Amazon Web Services. In SOA Tibco competes with IBM and Oracle, among others.

The company’s promotion of SOA is unique; none of its competitors leads with it. Originally, SOA standards were based on the Simple Object Access Protocol (SOAP), but in the past few years standards based on Representational State Transfer (REST) have been gaining adoption. From an architectural and development perspective, however, SOA is still quite viable, and for this reason Tibco’s messaging strategy may succeed. Our research shows that as business use of mobile devices grows, so will adoption of distributed architectures such as SOA and cloud computing.

From this SOA orientation, Tibco introduced Tibco Silver for cloud computing in 2009. While Silver was originally compared by some to Amazon’s EC2, it is more of a development environment that needs a public cloud infrastructure to run its applications. In fact, Tibco Silver now uses Amazon Web Services as its infrastructure provider to offer services such as its Spotfire product (which I will discuss later). Beyond Spotfire, Tibco Silver offers private cloud as well as public cloud services, including grid computing resources and platform as a service (PaaS). Tibco differentiates its cloud products competitively in two ways. The first is dynamic provisioning, in which the system automatically provisions the resources needed to meet the demand of a growing customer base. The second is embedding the company’s complex event processing (CEP) software, BusinessEvents.

The BusinessEvents software is a key part of the company’s strategy. CEP, covered in our benchmark research on operational intelligence, is about processing and analyzing multiple data streams in real time. To illustrate, think of what a car’s computing system does at the moment of a collision. Reacting to a number of signals, the computer is able to process the input and command the airbag to deploy within a fraction of a second. This is CEP in a specific, contained environment. Tibco is one of the leaders in the CEP category and competes against companies such as IBM, Microsoft, Oracle and SAP. The company sells CEP to a range of industries; one prominent sector is travel, where it helps airlines and railroads make real-time adjustments to factors such as schedules and personnel in response to changing weather conditions.
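
To make the CEP idea a bit more concrete, here is a toy sketch in Python (not Tibco BusinessEvents) of correlating two event streams and firing an action when a rule across both of them matches. The event names and thresholds are invented purely for illustration.

```python
# Toy complex event processing rule: hold a short window of readings from two
# streams and fire an action when both conditions are met at once.
from collections import deque

WINDOW = 5  # keep only the last few readings per stream

class CollisionDetector:
    def __init__(self):
        self.decel = deque(maxlen=WINDOW)     # deceleration readings (g), hypothetical units
        self.pressure = deque(maxlen=WINDOW)  # crush-zone pressure readings, hypothetical units

    def on_event(self, stream, value):
        getattr(self, stream).append(value)
        # Rule across both streams: hard deceleration AND a pressure spike.
        if self.decel and max(self.decel) > 20 and self.pressure and max(self.pressure) > 3.0:
            return "DEPLOY_AIRBAG"
        return None

detector = CollisionDetector()
events = [("decel", 2.1), ("pressure", 0.4), ("decel", 25.0), ("pressure", 4.2)]
for stream, value in events:
    action = detector.on_event(stream, value)
    if action:
        print(action)
```

Real CEP engines do this across thousands of streams with declarative rules, but the pattern of "match events in a window, then act" is the same.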

The fastest-growing product in the Tibco portfolio is Tibco Spotfire, its visual discovery and analytics tool. Tibco positions Spotfire between what the company sees as standard BI reporting tools, such as IBM Cognos, Oracle OBIEE and SAP BusinessObjects, and the statistical “heavy lifting” tools such as SAS, SPSS and the R packages. Spotfire 4.5, released in May, provides robust visualization and iterative analysis capabilities through its associative discovery model and in-memory processing engine. Spotfire is one of a growing class of data discovery tools that employ either an interactive visual approach or a search-based approach; it falls in the former category. A demonstration of the software impressed me with its ease of use, intuitive qualities and graphing of embedded predictive analytic functions.

It’s important to note that other companies are not standing still in this area. Almost all of the major players have products in the visualization space including IBM Cognos Insight, MicroStrategy Visual Insights, Oracle’s integration of Endeca into the Exalytics platform and SAS Visual Analytics Explorer. Visualization features and functionality may have been a competitive advantage a year ago, but most companies are catching up and basic visualization aspects now are table stakes in a market where the leading edge is shifting toward collaboration and mobile access.

In short, Tibco’s strategy is to insist that SOA and CEP are essential to enable the near-real-time responses to changes in business conditions and customer demand that will convey competitive advantage in the future. These capabilities are part of the latest Spotfire 4.5 release, which also supports access to Hadoop data and to predictive analytics from providers like MathWorks and SAS. The market seems to approve: Tibco’s stock price has gone up 50 percent this year, and its retail revenue doubled year over year. Some of the strength in retail likely derives from Tibco providing Amazon’s next-best-offer (NBO) analytics, which it can use to pitch predictive analytics to other major retailers.

Tibco is transitioning itself as well as its customers from a 20th century enterprise integration model to a 21st century analytics model. Organizations considering both stand-alone visual discovery tools and tools that integrate CEP into the analytical mix should look to Tibco. It also offers a one-year trial version of its Spotfire software that enables companies to test-drive the product for an extended period.

Regards,

Tony Cosentino

Vice President and Research Director

In this, the second post in a blog series on business analytics, I focus on the increasingly important area of predictive analytics. Our benchmark research into predictive analytics shows that while the vast majority of companies see this technology as important or very important for the future of their organizations, most are not taking full advantage of it. This finding suggests that there is an opportunity for companies to gain competitive advantage by implementing predictive analytics in the near term.

Earlier this year I spoke at the Predictive Analytics Summit in San Diego as part of a panel entitled “Winning with Data Science: Transforming Complexity into Simplicity”. Listening to my fellow panelists and presenters, as well as speaking with vendors at their booths, confirmed that the category of predictive analytics is still being defined. In fact, the environment reminded me a bit of the dot-com era in its energy as well as its disorder. But the two are different: the dot-com era was built on a “field of dreams” in which companies built massive web properties but the consumers they expected never arrived; the value for the consumer was not obvious. The value of predictive analytics is much clearer and much more rooted in the realities of business. This is confirmed by our benchmark research, in which more than two-thirds of companies view the use of predictive analytics as conferring a competitive advantage.

Because the term predictive analytics is sometimes confused with others, let’s take a moment to define it. In its simplest sense, predictive analytics is about using existing data to predict future outcomes. For example, in database marketing it is often associated with scoring a customer record with the probability (or likelihood) of a desired behavior such as purchasing a particular product. Predictive analytics differs from descriptive analytics in that the latter is about describing existing data and examining how an existing dataset behaves. Descriptive analytics is exploratory in nature, and it is the basic approach used with legacy BI systems as well as with the new class of visual discovery tools. In descriptive analytics, the data is what it is; in predictive analytics, we use the existing data and the laws of probability to predict the future.
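
To make the distinction concrete, here is a minimal sketch of the customer-scoring example above using scikit-learn’s logistic regression. The features and data are entirely made up; the point is only to show descriptive summarization versus scoring a new record with a purchase probability.

```python
# Minimal predictive scoring sketch. Features (recency in days, prior purchases)
# and outcomes (bought the product: 1/0) are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[5, 8], [40, 1], [12, 4], [90, 0], [3, 10], [60, 2]])
y = np.array([1, 0, 1, 0, 1, 0])

# Descriptive analytics would summarize X and y as they are;
# predictive analytics fits a model and scores records it has never seen.
model = LogisticRegression().fit(X, y)

new_customer = np.array([[10, 5]])
print(model.predict_proba(new_customer)[0, 1])  # probability of purchase
```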

The market for predictive analytics is best understood by viewing it as divided into three subcategories: business operations and financial predictive analytics, industry-specific predictive analytics and customer behavior and marketing predictive analytics.

Operations and financial predictive analytics includes the use of predictive analytics in areas such as financial planning, workforce management, IT and supply chain operations. Financial forecasting (the domain of my colleague Robert Kugel) has been a part of the predictive analytics world for a long time. Financial predictive models utilize many different factors, including past company performance and leading economic indicators, to predict revenues and to budget more effectively.

More recently, in areas such as supply chain management, predictive analytics is allowing companies to match their stock with customer demand, thereby reducing inventory costs. In such a system, a manufacturer may collaborate with the retailer to look at run rates and predict stock-keeping unit (SKU) levels. Traditionally this was done with a store manager’s guess or by applying uniform assumptions across all inventories. By applying this type of predictive analysis, companies are able to reduce the inventory levels needed by their partners and segment the market to more efficiently align to an increasingly niched retail market environment.
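
As a rough illustration of the run-rate idea (my own sketch, not any vendor’s method), the snippet below projects next-period demand per SKU from recent history rather than applying a single uniform assumption across the whole inventory. All numbers are invented.

```python
# Project next-period SKU demand from recent weekly run rates.
weekly_units = {
    "SKU-1001": [120, 132, 128, 140],
    "SKU-1002": [45, 40, 38, 35],
}

def projected_demand(history, safety_factor=1.1):
    # Weight recent weeks more heavily, then add a small safety cushion.
    weights = range(1, len(history) + 1)
    run_rate = sum(w * u for w, u in zip(weights, history)) / sum(weights)
    return round(run_rate * safety_factor)

for sku, history in weekly_units.items():
    print(sku, projected_demand(history))
```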

Predictive analytics has applicability across an array of other areas as well. Workforce management systems, for example, use predictive analytics to understand the staying power of an employee based on his or her job history, or to plan capacity and rationalize new hires. IT is using predictive analytics to analyze log data and automate systems, thereby reducing the time it takes to manage the company IT infrastructure.

Industry-specific predictive analytics encompasses niche undertakings like fraud prevention, risk analysis and disease prediction. Police departments, for instance, do a better job of matching resources to threats when they use predictive models to determine when and where a violent crime might occur. Niche applications in sports analytics (think “Moneyball”) are changing how teams recruit players and even play their games. Healthcare companies and practitioners increasingly are predicting the occurrence of diseases and using these predictions to shape clinician behavior, drug production priorities and treatment protocols.

Big data approaches make possible interesting predictive analytics opportunities in specific areas such as Internet security. Predicting and preventing security threats, for example, is complex since it involves multiple variables that are constantly changing and new variables that are constantly being introduced. The ability to analyze the large volumes of network flow, log and new malware data to understand the different patterns and threat vectors now makes it possible to build predictive algorithms that can be used to recognize and score potential harm to the system.

Customer behavior and marketing predictive analytics is the area that likely hits closest to home for the many business managers who have been hearing that big data and predictive analytics are changing the world. In fact, according to our benchmark research, revenue-producing functions are the business areas where predictive analytics are being used most, with 65 percent of organizations using the technology in marketing and 59 percent in sales.

Loyalty and customer analytics are hot topics right now, and analytical CRM frameworks married with the right toolsets are providing sophisticated ways of not only predicting attrition but preventing it from happening. Companies are looking at individual-level behavior and wallet share across both online and offline environments. This individual-level view currently predominates, and it is proving to be a powerful tool when supported by the right data.

One area in particular where this sort of modeling is effectively being used is sales attribution, which is a major component of return on marketing investment (ROMI). The adage often attributed to John Wanamaker that “Half the money I spend on advertising is wasted; the trouble is I don’t know which half” has been applied to marketing spend as well, but it may not necessarily be true any longer in the era of big data and predictive analytics.
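
For context, one common formulation of ROMI (an assumption on my part, not something defined in this post) credits a campaign with the incremental profit an attribution model assigns to it, relative to what the campaign cost:

```python
# Illustrative ROMI arithmetic; all figures are hypothetical.
incremental_revenue = 500_000   # revenue the attribution model credits to the campaign
contribution_margin = 0.40      # share of that revenue that is profit
campaign_cost = 120_000

romi = (incremental_revenue * contribution_margin - campaign_cost) / campaign_cost
print(f"ROMI: {romi:.2f}")  # 0.67 here, i.e., 67 cents of profit returned per dollar spent
```

The better the attribution, the less of Wanamaker’s “half” stays unexplained.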

Implications and Recommendations

The adoption of predictive analytics, particularly in the important areas of marketing and sales, is forcing an uneasy partnership between CIOs and CMOs. This is because data quality and information management issues, traditionally the domain of the CIO, need to be resolved in order to realize the true value of predictive analytics. From the CMO’s perspective, predictive analytics has enormous power to predict things such as the next best customer offer, but if the product or customer data is incorrect, the value of the prediction is severely diminished.

On the flip side, some marketing services categories and approaches face disruption due to the emergence of predictive analytics. Media buying is an obvious one, but also impacted is the lesser-known cottage industry around market-mix modeling. This modeling technique uses multivariate regression techniques to predict the impact of various promotional channels (that is, of the market mix) on future sales. As companies are able to do predictive behavioral modeling on an individual basis, they can fine-tune how they tie together promotions and sales. This diminishes the need for less precise aggregate approaches such as market-mix models.
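
For readers unfamiliar with the technique, here is a bare-bones sketch of the aggregate regression at the heart of market-mix modeling: sales fit against spend by promotional channel with ordinary least squares. The figures are illustrative only, and real models add seasonality, lags and diminishing-returns terms.

```python
# Toy market-mix regression: estimate per-channel lift via ordinary least squares.
import numpy as np

# Columns: TV spend, digital spend, print spend (per period, $000s) - hypothetical
X = np.array([[100, 40, 20], [120, 50, 20], [90, 60, 15],
              [110, 80, 10], [130, 90, 10], [95, 70, 12]], dtype=float)
sales = np.array([540, 610, 575, 650, 700, 600], dtype=float)

A = np.column_stack([np.ones(len(X)), X])          # add an intercept column
coef, *_ = np.linalg.lstsq(A, sales, rcond=None)   # solve the normal equations
intercept, tv, digital, print_ = coef
print(f"estimated lift per $1k: TV={tv:.2f}, digital={digital:.2f}, print={print_:.2f}")
```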

Increased reliance on predictive analytics may also result in realignment of business processes and roles. As business decision makers are able to do their own exploratory analysis and predictive “what-if” modeling and take immediate action based on that, sophisticated BI tools used by those executives may begin to replace the traditional analyst. However, with executive level baby boomers extending their stay in corporate America and the first generation of “digital natives” just graduating from school, such a scenario isn’t likely for mainstream businesses anytime soon.

As organizations move forward with their predictive analytics initiatives, I recommend they think broadly about how the models will be integrated into their existing systems, what type of modeling is needed, and how complex the models need to be. And organizations should by all means explore making use of the support offered by the vendors of the applications and tools that are deployed. At the moment IBM’s SPSS and SAS are leaders in the predictive analytics space with a broad range of tools and models addressing a wide range of use cases. MicroStrategy takes a different approach with its 9.3 release, allowing R to be programmed inside the software so that the functions run in an embedded manner. Many providers including those mentioned above support Predictive Modeling Markup Language (PMML), an XML-based standard that allows predictive models to be shared across applications. For big data initiatives, there are some interesting offerings: Datameer has partnered with Zementis, for example, to develop a universal PMML plug-in that allows SPSS, SAS, and R to be integrated with their Hadoop-based engine.
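
Because PMML is plain XML, a model exported from SPSS, SAS or R can be inspected or consumed by entirely different software. Below is a minimal sketch using only the Python standard library; the file name is hypothetical, and the namespace varies by PMML version.

```python
# Inspect the input fields a PMML model expects, regardless of which tool built it.
import xml.etree.ElementTree as ET

NS = {"pmml": "http://www.dmg.org/PMML-4_1"}  # adjust to the version your tool exports

tree = ET.parse("churn_model.pmml")  # hypothetical exported model file
root = tree.getroot()

for field in root.findall(".//pmml:DataDictionary/pmml:DataField", NS):
    print(field.get("name"), field.get("dataType"))
```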

Approaches such as these help overcome the challenge of architectural integration, which was one of the key obstacles to the deployment and use of predictive analytics identified in our benchmark research. The integration of models is difficult because the statistician who builds the model rarely has the skill set to code the model, and so there is often much misalignment between the intent of the model designer and what the model actually does once it is implemented. With more and more vendors embedding analytic support in their portfolios and further adoption of the PMML standard, this should become less of an issue.

Companies should also pay close attention to the human factor when rolling out a predictive analytics initiative. Our Predictive Analytics Maturity Index shows that of the four dimensions (People, Process, Information, Technology) in terms of which we evaluate maturity, the People dimension is the least mature when it comes to predictive analytics. This issue, which is largely about available skill sets, can potentially be addressed by hiring recent graduates, as many schools are teaching the R language and graduates are coming out with an appreciation of its power. Open source R is an increasingly popular language that is in many ways the lingua franca of predictive analytics. Going one step further, IBM is working with schools such as Northwestern University to put SPSS and other of its advanced analytics tools in the hands of educators and students. SAS, meanwhile, has a very strong and loyal user base already resident in many of today’s corporations.

Predictive analytics initiatives should involve only organizational data sources in which managers have complete confidence. In the longer term, the right way for companies to do this is to address the adequacy of their information management before embarking on wide-ranging predictive analytics initiatives. Our recent benchmark research into Information Management shows that organizations continue to face information management and data quality challenges. These result from the heterogeneous environment of disparate systems in organizations, the lack of a common metadata layer and, most of all, a lack of attention and budget resources available to tackle the issue. My colleague Mark Smith offers a more in-depth look at the data quality and information management issues in his recent blog post. The net-net is that predictive models are no different from any other model: if garbage goes in the front end, garbage comes out the other side.

As organizations address these issues and respond to competitive market pressures, operational intelligence and predictive analytics inevitably will gain center-stage. Though businesses are still early in the maturity cycle with respect to predictive analytics, we at Ventana Research see companies capitalizing on the market advantage that predictive analytics provides. In some industries – financials, insurance, telecommunications, and retail, for example – things are moving quickly. Companies in these industries that are not currently taking advantage of predictive analytics or are not actively evaluating their options may be putting their businesses at risk.

What’s your thinking on the deployment and use of predictive analytics in your organization? Let me know!

Regards,

Tony Cosentino

VP and Research Director

Our benchmark research on business analytics suggests that it is counterproductive to take a general approach. A better approach is to focus on particular use cases and lines of business (LOB). For this reason, in a series of upcoming articles, I will look at our business analytics research in the context of different industries and different functional areas of an organization, and illustrate how analytics are being applied to solve real business problems.

Our benchmark research on business analytics reveals that 89 percent of organizations find that it is important or very important to make it simpler to provide analytics and metrics. To me, this says that today’s analytic environments are a Tower of Babel. We need more user-friendly tools, collaboration and most of all a common vernacular.

With this last point in mind, let’s start by defining business analytics. Here at Ventana Research, business analytics refers to the application of mathematical computation and models to generate relevant historical and predictive insights that can be used to optimize business- and IT-related processes and decisions. This definition helps us to focus on the technological underpinning of analytics but, more importantly, it focuses us on the outcomes of business and IT processes and decisions.

To provide more context, we might think of the what, the so what and the now what when it comes to information, analytics and decision-making. The what is data or information in its base form. In order to derive meaning, we apply different types of analytics and go through analytical processes. This addresses the so what, or the why should I care about the data. The now what involves decision-making and actions taken on the data; this is where ideas such as operational intelligence and predictive analytics play a big role. I will look to our benchmark research in these areas to help guide the discussion.

It’s important not to think about business analytics in a technological silo removed from the people, process, information and tools that make up the Ventana Maturity Index. In this broader sense, business analytics helps internal teams derive meaning from data and guides their decisions. Our next-generation business intelligence research focuses on collaboration and mobile application of analytics, two key components for making analytics actionable within the organization.

In addition, our research shows a lot of confusion about the terms surrounding analytics. Many users don’t understand scorecards and dashboards, and find discovery, iterative analysis, key performance metrics, root-cause analysis and predictive analytics to be ambiguous terms. We’ll be discussing all of these ideas in the context of business technology innovation, our 2012 business intelligence research agenda and of course our large body of research on technology and business analytics.

Organizations must care about analytics because analytics provides companies with a competitive advantage by showing what their customers want, when they want it and how they want it delivered. It can help reduce inventory carrying costs in manufacturing and retail, fraud in insurance and finance, churn in telecommunications and even violent crime on our streets. The better an organization can integrate data and utilize both internal and external information in a coherent fashion, the greater the value of their analytics.

I hope you enjoy this series and find it useful as you define your own analytics agenda within your organization.

Regards,

Tony Cosentino

VP & Research Director

Karmasphere has an interesting story to tell. Much like Datameer, which I recently blogged about, Karmasphere sits on top of the Hadoop distributed platform, where companies such as Cloudera, Hortonworks and MapR compete. Karmasphere provides a collaborative environment and an analytical workbench that help companies write applications and workflows that run on top of Hadoop. The company’s business model looks to leverage legacy skill sets, such as SQL, which are already resident in most organizations, in order to ingest, analyze and act on big data.

Karmasphere’s approach begins with the common assertion that business intelligence tools were built to analyze only structured data. They use descriptive statistics and provide historical views of data, but they are limited in their iterative discovery processes and in their ability to add new data in a timely, practical manner. Newer tools, such as in-memory databases and appliances, address the old technological limitations but have their own issues. While memory is becoming less expensive and speed is improved, proprietary hardware can lock businesses into a particular technology, and the way these tools manipulate data is often proprietary, too, in terms of how data is written back to disk and what types of data are stored on disk.

It is this gap in the market that Hadoop developers and companies like Karmasphere aim to capitalize on. Hadoop provides an ideal platform for companies exploring and analyzing big data because it is built to maximize disk I/O, run on commodity hardware and scale in a linear fashion. Given that commodity hardware is by definition fast, cheap and available, Hadoop clusters fit the bill for advancing big-data analytics.

The market is responding to this logic. Not only are we seeing venture capital pouring into the space, but our own benchmark research on Hadoop shows that it is being used in 33 percent of companies’ big-data environments and evaluated in another 20 percent. Very likely these numbers are increasing as I write this.

A big issue for Hadoop adoption is the skills gap. In our research on business analytics, 89 percent of participants said it is important to make it simpler to provide analytics and metrics to all users who need them. Last year big data was the domain of ninja data scientists creating MapReduce functions. Now, while big data is still a nebulous concept for many business people outside the data world, it is starting to come into focus, and the conversation is moving to the front lines of organizations. The discussion now is about how to match the power of Hadoop with the current skills of developers, data analysts, business analysts and end users.
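
For contrast, this is roughly what the “ninja” approach looks like: a hand-rolled Hadoop Streaming job, with a mapper and reducer written in Python that read standard input and emit tab-separated key/value pairs. It is a word-count-style sketch of the pattern, not production code, and it assumes the job is launched with Hadoop’s streaming jar.

```python
# Hadoop Streaming-style mapper and reducer in one file.
# Streaming sorts mapper output by key before the reduce phase,
# so identical keys arrive at the reducer as consecutive lines.
import sys
from itertools import groupby

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print(f"{word.lower()}\t1")

def reducer():
    pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

The gap Karmasphere and its peers address is exactly this: most analysts would rather write a SQL statement than a MapReduce job.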

Karmasphere addresses the skills issue. The company previewed its 2.0 release at the Hadoop Summit in June. The release focuses on the collaborative workspace and includes a shared repository for analytical assets in a role-based Web environment. With this team-oriented focus, Karmasphere is hitting a sweet spot in the market, as end users are becoming more involved in both usage and buying decisions. We’re looking forward to discussing this trend in our next-generation business intelligence benchmark research, due out soon.

As you might expect with an integrated Hadoop approach, Karmasphere 2.0 can ingest data from multiple sources, including data from Omniture and DoubleClick. Its wizard-based approach is helpful in identifying and consuming diverse data sources into the Hive tables native to Hadoop. The software enables users to do basic descriptive data exploration to determine the techniques and algorithms that they might want to apply. From there, analysts can conduct ad-hoc queries and iterative SQL analyses to test hypotheses and find insights from the data. Karmasphere also includes support for Hive UDFs and analytics packages such as SPSS and SAS for more complex analysis. Finally, Karmasphere helps users embed their insights in other systems through REST APIs and to publish through other BI applications such as Tableau and Spotfire.
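
As an illustration of the general SQL-on-Hadoop pattern described here (this uses the open source PyHive client, not Karmasphere’s own workbench or APIs), the sketch below runs an ad-hoc HiveQL query of the kind a SQL-literate analyst could iterate on. The host, table and column names are hypothetical.

```python
# Ad-hoc HiveQL query against a Hive table built from ingested clickstream data.
from pyhive import hive

conn = hive.connect(host="hadoop-gateway.example.com", port=10000)
cursor = conn.cursor()

cursor.execute("""
    SELECT campaign, COUNT(DISTINCT visitor_id) AS visitors
    FROM clickstream
    WHERE dt >= '2012-08-01'
    GROUP BY campaign
    ORDER BY visitors DESC
    LIMIT 20
""")
for campaign, visitors in cursor.fetchall():
    print(campaign, visitors)
```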

By positioning the software more toward collaboration and analytics and less toward visualization, Karmasphere appears to be trying to carve out a unique space and avoid direct comparisons with companies such as Datameer that have advanced analytics and visualization capabilities. Karmasphere’s approach to visualization seems to be to provide some basics and partner with firms such as Tableau to provide more robust front-end user tools. This strategy allows Karmasphere to focus both its messaging and development efforts on the core value of bridging the Hadoop skills gap with analytics and collaboration tools.

Karmasphere has a strong story, but broader questions remain about how the Hadoop ecosystem will evolve from early adopters to early majority. A lot of money is still flowing into the Hadoop community, but I anticipate a shakeout in the space at some point, and only vendors that provide strong time-to-value propositions will remain intact. Karmasphere provides a compelling proposition around team analytic processes, collaboration and bringing the value of Hadoop to the front lines of organizations. By competing to fill the space between the technical aspects of Hadoop and the existing skills in the market, it manages to address a critical challenge around adoption.

Regards,

Tony Cosentino

VP & Research Director
