
One of the key findings in our latest benchmark research into predictive analytics is that companies are incorporating predictive analytics into their operational systems more often than was the case three years ago. The research found that companies are less inclined to purchase stand-alone predictive analytics tools (29% vs 44% three years ago) and more inclined to purchase predictive analytics built into business intelligence systems (23% vs 20%), applications (12% vs 8%), databases (9% vs 7%) and middleware (9% vs 2%). This trend is not surprising since operationalizing predictive analytics – that is, building predictive analytics directly into business process workflows – improves companies’ ability to gain competitive advantage: those that deploy predictive analytics within business processes are more likely to say they gain competitive advantage and improve revenue through predictive analytics than those that don’t.

In order to understand the shift that is underway, it is important to understand how predictive analytics has historically been executed within organizations. The marketing organization provides a useful example since it is the functional area where organizations most often deploy predictive analytics today. In a typical organization, those doing statistical analysis will export data from various sources into a flat file. (Often IT is responsible for pulling the data from the relational databases and passing it over to the statistician in a flat file format.) Data is cleansed, transformed, and merged so that the analytic data set is in a normalized format. It then is modeled with stand-alone tools and the model is applied to records to yield probability scores. In the case of a churn model, such a probability score represents how likely someone is to defect. For a marketing campaign, a probability score tells the marketer how likely someone is to respond to an offer. These scores are produced for marketers on a periodic basis – usually monthly. Marketers then work on the campaigns informed by these static models and scores until the cycle repeats itself.
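The periodic batch-scoring step described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the coefficients stand in for a previously fitted logistic churn model, and the field names are hypothetical.

```python
import math

# Hypothetical coefficients from a previously fitted logistic churn model.
COEF = {"intercept": -2.0, "tenure_months": -0.05, "support_calls": 0.6}

def churn_score(record):
    """Return the probability that this customer defects."""
    z = (COEF["intercept"]
         + COEF["tenure_months"] * record["tenure_months"]
         + COEF["support_calls"] * record["support_calls"])
    return 1.0 / (1.0 + math.exp(-z))  # logistic link function

# Monthly batch run: score every record in the prepared analytic data set.
customers = [
    {"id": 1, "tenure_months": 36, "support_calls": 0},
    {"id": 2, "tenure_months": 3, "support_calls": 5},
]
scores = {c["id"]: churn_score(c) for c in customers}
```

In the traditional workflow, a table of such scores is handed to marketers once a month and remains static until the next cycle.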

The challenge presented by this traditional model is that a lot can happen in a month and the heavy reliance on process and people can hinder the organization’s ability to respond quickly to opportunities and threats. This is particularly true in fast-moving consumer categories such as telecommunications or retail. For instance, if a person visits the company’s cancellation policy web page the instant before he or she picks up the phone to cancel the contract, this customer’s churn score will change dramatically and the action that the call center agent should take will need to change as well. Perhaps, for example, that score change should mean that the person is now routed directly to an agent trained to deal with possible defections. But such operational integration requires that the analytic software be integrated with the call agent software and web tracking software in near-real time.
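A sketch of that event-driven integration: a web tracking event triggers rescoring, and the fresh score drives call routing. The threshold, page path and toy scoring function are all illustrative assumptions, not figures from the research.

```python
RETENTION_THRESHOLD = 0.7  # hypothetical cutoff for specialist routing

def toy_score(customer):
    # Stand-in for the real model: a cancel-page visit sharply raises risk.
    return 0.9 if customer.get("visited_cancel_page") else 0.2

def on_page_view(customer, page):
    """Re-score the customer the moment a tracked event arrives."""
    if page == "/cancellation-policy":
        customer["visited_cancel_page"] = True
    score = toy_score(customer)
    # Route the next inbound call on the fresh score, not last month's batch.
    customer["route"] = ("retention_specialist" if score >= RETENTION_THRESHOLD
                         else "standard_queue")
    return score

customer = {"id": 7}
on_page_view(customer, "/cancellation-policy")
```

The point of the sketch is the wiring, not the model: the score is recomputed per event rather than per monthly batch, so downstream systems such as call routing always act on current information.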

Similarly, the models themselves need to be constantly updated to deal with the fast pace of change. For instance, if a telecommunications carrier competitor offers a large rebate to customers to switch service providers, an organization’s churn model can be rendered out of date and should be updated. Our research shows that organizations that constantly update their models gain competitive advantage more often than those that only update them periodically (86% vs 60% average), more often show significant improvement in organizational activities and processes (73% vs 44%), and are more often very satisfied with their predictive analytics (57% vs 23%).
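One simple way to decide when a model has gone stale is to compare the churn rate the model predicts with the rate actually observed over a recent window. This is a sketch of the idea only; the tolerance value is a hypothetical choice, not a research-derived figure.

```python
def needs_refresh(recent_actuals, recent_scores, tolerance=0.15):
    """Flag a model for retraining when its calibration drifts."""
    predicted_rate = sum(recent_scores) / len(recent_scores)
    observed_rate = sum(recent_actuals) / len(recent_actuals)
    return abs(predicted_rate - observed_rate) > tolerance

# A competitor's rebate drives real churn far above the stale model's
# predictions, so the check signals that retraining is due.
scores = [0.10, 0.15, 0.20, 0.15]   # model expects ~15% churn
actuals = [1, 0, 1, 1]              # 75% actually churned
stale = needs_refresh(actuals, scores)
```

Running a check like this continuously, rather than retraining on a fixed calendar, is one way organizations can keep models current between scheduled updates.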

Building predictive analytics into business processes is more easily discussed than done; complex business and technical challenges must be addressed. The skills gap that I recently wrote about is a significant barrier to implementing predictive analytics. Making predictive analytics operational requires not only statistical and business skills but technical skills as well. From a technical perspective, one of the biggest challenges for operationalizing predictive analytics is accessing and preparing data, which I also wrote about. Four out of ten companies say that this is the part of the predictive analytics process where they spend the most time. Choosing the right software is another challenge that I wrote about. Making that choice includes identifying the specific integration points with business intelligence systems, applications, database systems, and middleware. These decisions will depend on how people use the various systems and what areas of the organization are looking to operationalize predictive analytics processes.

For those willing to take on the challenges of operationalizing predictive analytics the rewards can be significant, including significantly better competitive positioning and new revenue opportunities. Furthermore, once predictive analytics is initially deployed in the organization it snowballs, with more than nine in ten companies going on to increase their use of predictive analytics. Once companies reach that stage, one third of them (32%) say predictive analytics has had a transformational impact and another half (49%) say it provides significant positive benefits.

Regards,

Ventana Research

Our benchmark research into predictive analytics shows that lack of resources, including budget and skills, is the number-one business barrier to the effective deployment and use of predictive analytics; awareness – that is, an understanding of how to apply predictive analytics to business problems – is second. In order to secure resources and address awareness problems a business case needs to be created and communicated clearly wherever appropriate across the organization. A business case presents the reasoning for initiating a project or task. A compelling business case communicates the nature of the proposed project and the arguments, both quantified and unquantifiable, for its deployment.

The first steps in creating a business case for predictive analytics are to understand the audience and to communicate with the experts who will be involved in leading the project. Predictive analytics can be transformational in nature, and therefore the audience potentially is broad, including many disciplines within the organization. Understand who should be involved in business case creation, a list that may include business users, analytics users and IT. Those most often primarily responsible for designing and deploying predictive analytics are data scientists (in 31% of organizations), the business intelligence and data warehouse team (27%), those working in general IT (16%) and line of business analysts (13%), so be sure to involve these groups. Understand the specific value and challenges for each of the constituencies so the business case can represent the interests of these key stakeholders. I discuss the aspects of the business where these groups will see predictive analytics adding the most value here and here.

For the business case for a predictive analytics deployment to be persuasive, executives also must understand how specifically the deployment will impact their areas of responsibility and what the return on investment will be. For these stakeholders, the argument should be multifaceted. At a high level, the business case should explain why predictive analytics is important and how it fits with and enhances the organization’s overall business plan. Industry benchmark research and relevant case studies can be used to paint a picture of what predictive analytics can do for marketing (48%), operations (44%) and IT (40%), the functions where predictive analytics is used most.

A business case should show how predictive analytics relates to other relevant innovation and analytic initiatives in the company. For instance, companies have been spending money on big data, cloud and visualization initiatives where software returns can be more difficult to quantify. Our research into big data analytics and data and analytics in the cloud shows that the top benefits of these initiatives are communication and knowledge sharing. Fortunately, the business case for predictive analytics can cite the tangible business benefits our research identified, the most often identified of which are achieving competitive advantage (57%), creating new revenue opportunities (50%) and increasing profitability (46%). But the business case can be made even stronger by noting that predictive analytics can have added value when it is used to leverage other current technology investments. For instance, our big data analytics research shows that the most valuable type of analytics to be applied to big data is predictive analytics.

To craft the specifics of the business case, concisely define the business issue that will be addressed. Assess the current environment and offer a gap analysis to show the difference between the current environment and the future environment. Offer a recommended solution, but also offer alternatives. Detail the specific value propositions associated with the change. Create a financial analysis summarizing costs and benefits. Support the analysis with a timeline including roles and responsibilities. Finally, detail the major risk factors and opportunity costs associated with the project.

For complex initiatives, break the overall project into a series of shorter projects. If the business case is for a project that will involve substantial work, consider providing separate timelines and deliverables for each phase. Doing so will keep stakeholders both informed and engaged during the time it takes to complete the full project. For large predictive analytics projects, it is important to break out the due-diligence phase and try not to make any hard commitments until that phase is completed. After all, it is difficult to establish defensible budgets and timelines until one knows the complete scope of the project.

Ensure that the project timeline is realistic and addresses all the key components needed for a successful deployment. In particular with predictive analytics projects, make certain that it reflects a thoughtful approach to data access, data quality and data preparation. We note that four in 10 organizations say that the most time spent in the predictive analytics process is in data preparation and another 22 percent say that they spend the most time accessing data sources. If data issues have not been well thought through, it is next to impossible for the predictive analytics initiative to be successful. Read my recent piece on operationalizing predictive analytics to see how predictive analytics will align with specific business processes.

If you are proposing the implementation of new predictive analytics software, highlight the multiple areas of return beyond competitive advantage and revenue benefits. Specifically, new software can have a lower total cost of ownership and generate direct cost savings from improved operating efficiencies. A software deployment also can yield benefits related to people (productivity, insight, fewer errors), management (creativity, speed of response), process (shorter time on task or time to complete) and information (easier access, more timely, accurate and consistent). Create a comprehensive list of the major benefits the software will provide compared to the existing approach, quantifying the impact wherever possible. Detail all major costs of ownership whether the implementation is on-premises or cloud-based: these will include licensing, maintenance, implementation consulting, internal deployment resources, training, hardware and other infrastructure costs. In other words, think broadly about both the costs and the sources of return in building the case for new technology. Also, read my recent piece on procuring predictive analytics software.

Understanding the audience, painting the vision, crafting the specific case, outlining areas of return, specifying software, noting risk factors, and being as comprehensive as possible are all part of a successful business plan process. Sometimes the initial phase is really just a pitch for project funding, and there won’t be any dollar allocation until people are convinced that the program will get them what they need. In such situations multiple documents may be required, including a short one- to two-page document that outlines the vision and makes a high-level argument for action to the organizational stakeholders. Once a cross-functional team and executive support are in place, a more formal assessment and design plan following the principles above will have to be built.

Predictive analytics offers significant returns for organizations willing to pursue it, but establishing a solid business case is the first step for any organization.

Regards,

Ventana Research

Our research into next-generation predictive analytics shows that along with not having enough skilled resources, which I discussed in my previous analysis, the inability to readily access and integrate data is a primary reason for dissatisfaction with predictive analytics (in 62% of participating organizations). Furthermore, this area consumes the most time in the predictive analytics process: The research finds that preparing data for analysis (40%) and accessing data (22%) are the parts of the predictive analytics process that create the most challenges for organizations. To allow more time for actual analysis, organizations must work to improve their data-related processes.

Organizations apply predictive analytics to many categories of information. Our research shows that the most common categories are customer (used by 50%), marketing (44%), product (43%), financial (40%) and sales (38%). Such information often has to be combined from various systems and enriched with information from new sources. Before users can apply predictive analytics to these blended data sets, the information must be put into a common form and represented as a normalized analytic data set. Unlike in data warehouse systems, which provide a single data source with a common format, today data is often located in a variety of systems that have different formats and data models. Much of the current challenge in accessing and integrating data comes from the need to include not only a variety of relational data sources but also less structured forms of data. Data that varies in both structure and size is commonly called big data.
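The blending step above can be sketched concretely. In this illustration, records from two systems with different key types and formats are merged into one flat analytic data set; the system names and fields are hypothetical.

```python
# Records from two systems with different shapes: a CRM keyed by integer
# and a billing export with string keys and formatted currency fields.
crm = {101: {"name": "Acme Corp", "segment": "enterprise"}}
billing = [{"cust_id": "101", "monthly_spend": "1,250.00"}]

def build_analytic_dataset(crm, billing):
    """Merge per-customer records into one normalized, flat data set."""
    rows = []
    for b in billing:
        cid = int(b["cust_id"])                             # align key types
        spend = float(b["monthly_spend"].replace(",", ""))  # strip formatting
        row = {"cust_id": cid, "monthly_spend": spend}
        row.update(crm.get(cid, {}))                        # enrich from CRM
        rows.append(row)
    return rows

dataset = build_analytic_dataset(crm, billing)
```

Each resulting row carries fields from both systems in consistent types, which is the form a modeling tool expects.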

To deal with the challenge of storing and computing big data, organizations planning to use predictive analytics increasingly turn to big data technology. While flat files and relational databases on standard hardware, each cited by almost two-thirds (63%) of participants, are still the most commonly used tools for predictive analytics, more than half (52%) of organizations now use data warehouse appliances for predictive analytics, and 31 percent use in-memory databases, with another 24 percent, the second-highest proportion, planning to adopt them in the next 12 to 24 months. Hadoop and NoSQL technologies lag in adoption, currently used by one in four organizations, but in the next 12 to 24 months an additional 29 percent intend to use Hadoop and 20 percent more will use other NoSQL approaches. Furthermore, more than one-quarter (26%) of organizations are evaluating Hadoop for use in predictive analytics, the most of any technology.

Some organizations are considering moving from on-premises to cloud-based storage of data for predictive analytics; the most common reasons for doing so are to improve access to data (for 49%) and preparation of data for analysis (43%). This trend speaks to the increasing importance of cloud-based data sources as well as cloud-based tools that provide access to many information sources and provide predictive analytics. As organizations accumulate more data and need to apply predictive analytics in a scalable manner, we expect the need to access and use big data and cloud-based systems to increase.

While big data systems can help handle the size and variety of data, they do not of themselves solve the challenges of data access and normalization. This is especially true for organizations that need to blend new data that resides in isolated systems. How to do this is critical for organizations to consider, especially in light of the people using predictive analytics systems and their skills. There are three key considerations here. One is the user interface, the most common of which are spreadsheets (used by 48%), graphical workflow modeling tools (44%), integrated development environments (37%) and menu-driven modeling tools (35%). Second is the number of data sources to deal with and which are supported by the system; our research shows that four out of five organizations need to access and integrate five or more data sources. The third consideration is which analytic languages and libraries to use and which are supported by the system; the research finds that Microsoft Excel, SQL, R, Java and Python are the most widely used for predictive analytics. Weighing these three priorities in terms of resident skills, processes, current technology and the information sources that need to be accessed is crucial for delivering value to the organization with predictive analytics.

While there has been an exponential increase in data available to use in predictive analytics as well as advances in integration technology, our research shows that data access and preparation are still the most challenging and time-consuming tasks in the predictive analytics process. Although technology for these tasks has improved, complexity of the data has increased through the emergence of different data types, large-scale data and cloud-based data sources. Organizations must pay special attention to how they choose predictive analytics tools that can give easy access to multiple diverse data sources including big data stores and provide capabilities for data blending and provisioning of analytic data sets. Without these capabilities, predictive analytics tools will fall short of expectations.

Regards,

Ventana Research

The Performance Index analysis we performed as part of our next-generation predictive analytics benchmark research shows that only one in four organizations, those functioning at the highest Innovative level of performance, can use predictive analytics to compete effectively against others that use this technology less well. We analyze performance in detail in four dimensions (People, Process, Information and Technology), and for predictive analytics we find that organizations perform best in the Technology dimension, with 38 percent reaching the top Innovative level. This is often the case in our analyses, as organizations initially perform better in the details of selecting and managing new tools than in the other dimensions. Predictive analytics is not a new technology per se, but the difference is that it is becoming more common in business units, as I have written.

In contrast to organizations’ performance in the Technology dimension, only 10 percent reach the Innovative level in People and only 11 percent in Process. This disparity uncovered by the research analysis suggests there is value in focusing on the skills that are used to design and deploy predictive analytics. In particular, we found that one of the two most-often cited reasons why participants are not fully satisfied with the organization’s use of predictive analytics is that there are not enough skilled resources (cited by 62%). In addition, 29 percent said that the need for too much training or customized skills is a barrier to changing their predictive analytics.

The challenge for many organizations is to find the combination of domain knowledge, statistical and mathematical knowledge, and technical knowledge that they need to be able to integrate predictive analytics into other technology systems and into operations in the lines of business, which I also have discussed. The need for technical knowledge is evident in the research findings on the jobs held by individual participants: Three out of four require technical sophistication. More than one-third (35%) are data scientists who have a deep understanding of predictive analytics and its use as well as of data-related technology; one-fourth are data analysts who understand the organization’s data and systems but have limited knowledge of predictive analytics; and 16 percent described themselves as predictive analytics experts who have a deep understanding of this topic but not of technology in general. The research also finds that those most often primarily responsible for designing and deploying predictive analytics are data scientists (in 31% of organizations) or members of the business intelligence and data warehouse team (27%). This focus on business intelligence and data warehousing represents a shift toward integrating predictive analytics with other technologies and indicates a need to scale predictive analytics across the organization.

In only about half (52%) of organizations are the people who design and deploy predictive analytics the same people who utilize the output of these processes. The most common reasons cited by research participants that users of predictive analytics don’t produce their own analyses are that they don’t have enough skills training (79%) and don’t understand the mathematics involved (66%). The research also finds evidence that skills training pays off: Fully half of those who said they received adequate training in applying predictive analytics to business problems also said they are very satisfied with their predictive analytics; percentages dropped precipitously for those who said the training was somewhat adequate (8%) and inadequate (6%). It is clear that professionals trained in both business and technology are necessary for an organization to successfully understand, deploy and use predictive analytics.

To determine the technical skills and training necessary for predictive analytics, it is important to understand which languages and libraries are used. The research shows that the most common are SQL (used by 67% of organizations) and Microsoft Excel (64%), with which many people are familiar and which are relatively easy to use. The three next-most commonly used are much more sophisticated: the open source language R (by 58%), Java (42%) and Python (36%). Overall, many languages are in use: Three out of five organizations use four or more of them. This array reflects the diversity of approaches to predictive analytics. Organizations must assess what languages make sense for their uses, and vendors must support many languages for predictive analytics to meet the demands of all customers.

The research thus makes clear that organizations must pay attention to a variety of skills and how to combine them with technology to ensure success in using predictive analytics. Not all the skills necessary in an analytics-driven organization can be combined in one person, as I discussed in my analysis of analytic personas. We recommend that as organizations focus on the skills discussed above, they consider creating cross-functional teams from both business and technology groups.

Regards,

Ventana Research

To impact business success, Ventana Research recommends viewing predictive analytics as a business investment rather than an IT investment. Our recent benchmark research into next-generation predictive analytics reveals that since our previous research on the topic in 2012, funding has shifted from general business budgets (previously 44%) to line of business IT budgets (previously 19%). Now more than half of organizations fund such projects from business budgets: 29 percent from general business budgets and 27 percent from a line of business IT budget. This shift in buying reflects the mainstreaming of predictive analytics in organizations, which I recently wrote about.

This shift in funding of initiatives coincides with a change in the preferred format for predictive analytics. The research reveals that 15 percent fewer organizations prefer to purchase predictive analytics as stand-alone technology today than did in the previous research (29% now vs. 44% then). Instead we find growing demand for predictive analytics tools that can be integrated with operational environments such as business intelligence or transaction applications. More than two in five (43%) organizations now prefer predictive analytics embedded in other technologies. This integration can help businesses respond faster to market opportunities and competitive threats without having to switch applications.

The features most often sought in predictive analytics products further confirm business interest. Usability (very important to 67%) and capability (59%) are the top buying criteria, followed by reliability (52%) and manageability (49%). This is consistent with the priorities of organizations three years ago with one important exception: Manageability was one of the two least important criteria then (33%) but today is nearly tied with reliability for third place. This change makes sense in light of a broader use of predictive analytics and the need to manage an increasing variety of models and input variables.

Further, as a business investment predictive analytics is most often used in front-office functions, but the research shows that IT and operations are closely associated with these functions. The top four areas of predictive analytics use are marketing (48%), operations (44%), IT (40%) and sales (38%). In the previous research operations ranked much lower on the list.

To select the most useful product, organizations must understand where IT and business buyers agree and disagree on what matters. The research shows that they agree closely on how to deploy the tools: Both expressed a greater preference for deploying on-premises (business 53%, IT 55%), and similar numbers on each side prefer deployment on demand through cloud computing (business 22%, IT 23%). More than 90 percent on both sides said the organization plans to deploy more predictive analytics, and they also were in close agreement (business 32%, IT 33%) that doing so would have a transformational impact, enabling the organization to do things it couldn’t do before.

However, some distinctions are important to consider, especially when looking at the business case for predictive analytics. Business users more often focus on the benefit of achieving competitive advantage (60% vs. 50% of IT) and creating new revenue opportunities (55% vs. 41%), which are the two benefits most often cited overall. On the other hand, IT professionals more often focus on the benefits of increased upselling and cross-selling (53% vs. 32%), reduced risk (26% vs. 21%) and better compliance (26% vs. 19%); the last two reflect key responsibilities of the IT group.

Despite strong business involvement, when it comes to products, IT, technical and data experts are indispensable for the evaluation and use of predictive analytics. Data scientists or the head of data management are most often involved in recommending (52%) and evaluating (56%) predictive analytics technologies. Reflecting the need to deploy predictive analytics to business units, analysts and IT staff are the next-most influential roles for evaluating and recommending. This involvement of technically sophisticated individuals combined with the movement away from organizations buying stand-alone tools indicates an increasingly team-oriented approach.

Purchase of predictive analytics often requires approval from high up in the organization, which underscores the degree of enterprise-wide interest in this technology. The CEO or president is most likely to be involved in the final decision in small (87%) and midsize (76%) companies. In contrast, large companies rely most on IT management (40%), and very large companies rely most on the CIO or head of IT (60%). We again note the importance of IT in the predictive analytics decision-making process in larger organizations. In the previous research, IT management was involved in approval in only 9 percent of large companies, and the CIO was involved in only 40 percent.

As predictive analytics becomes more widely used, buyers should take a broad view of the design and deployment requirements of the organization and specific lines of business. They should consider which functional areas will use the tools and consider issues involving people, processes and information as well as technology when evaluating such systems. We urge business and IT buyers to work together during the buying process with the common goal of using predictive analytics to deliver value to the enterprise.

Regards,

Ventana Research

Microsoft has been steadily pouring money into big data and business intelligence. The company of course owns the most widely used analytical tool in the world, Microsoft Excel, which our benchmark research into Spreadsheets in the Enterprise shows is not going away soon. User resistance (cited by 56% of participants) and lack of a business case (50%) are the most common reasons that spreadsheets are not being replaced in the enterprise. The challenge is ensuring that spreadsheets are not just used personally but are connected to and secured within the enterprise to address consistency and a range of potential errors. These issues all add up to more work and maintenance, as my colleague has pointed out recently.

Along with Microsoft SQL Server and SharePoint, Excel is at the heart of the company’s BI strategy. In particular, PowerPivot, originally introduced as an add-on for Excel 2010 and built into Excel 2013, is a discovery tool that enables exploratory analytics and data mashups. PowerPivot uses an in-memory, column store approach similar to other tools in the market. Its ability to access multiple data sources, including third-party and government data through Microsoft’s Azure Marketplace, enables a robust analytical experience.

Ultimately, information sources are more important than the tool sets used on them. With the Azure Marketplace and access to other new data sources such as Hadoop through its partnership with Hortonworks, as my colleague assessed, Microsoft is advancing in the big data space. Microsoft has partnered with Hortonworks to bring Hadoop data into the fold through HDInsight, which enables familiar Excel environments to access HDFS via HCatalog. This approach is similar to access methods used by other companies, including Teradata, which I wrote about last week. Microsoft stresses the 100 percent open source nature of the Hortonworks approach as a standard alternative to the multiple, more proprietary Hadoop distributions emerging throughout the industry. An important benefit for enterprises with Microsoft deployments is that Microsoft Active Directory adds security to HDInsight.

As my colleague Mark Smith recently pointed out about data discovery methods, the analytic discovery category is broad and includes visualization approaches. On the visualization side, Microsoft markets PowerView, also part of Excel 2013, which provides visual analytics and navigation on top of Microsoft’s BI semantic model. Users also can annotate and highlight content and then embed it directly into PowerPoint presentations. This direct export feature is valuable because PowerPoint is still a critical communication vehicle in many organizations. Another visual tool, currently in preview, is the Excel add-in GeoFlow, which uses Bing Maps to render visually impressive temporal and geographic data in three dimensions. Such a 3-D visualization technique could be useful in many industries. Our research into next-generation business intelligence found that deploying geographic maps (47%) and visualizing metrics on them (41%) are becoming increasingly important, but Microsoft will need to further exploit location-based analytics and the need for interactivity.

Microsoft has a core advantage in being able to link its front-office tools such as Excel with its back-end systems such as SQL Server 2012 and SharePoint. In particular, by leveraging a common semantic model through Microsoft Analysis Services – what Microsoft calls its Business Intelligence Semantic Model – users can set up a dynamic exploratory environment in Excel. Once users or analysts have developed a BI work product, such as a report, they can publish it directly or through SharePoint. This integration enables business users to share data models and solutions and manage them in common, applying shared security controls and providing visibility into usage statistics that show when particular applications are gaining traction with organizational users.

Usability, which our benchmark research into next-generation business intelligence identifies as the number-one evaluation criterion in nearly two-thirds (64%) of organizations, is still a challenge for Microsoft. Excel power users will appreciate the solid capabilities of PowerPivot, but more casual users of Excel – the majority of business people – do not understand how to build pivot tables or formulas. Our research shows that only 11 percent of Excel users are power users and that most users' skill levels are merely adequate (49%) rather than above average or excellent. While PowerView does add some capability, a number of other vendors of visual discovery products, such as Tableau, have focused on user experience from the ground up, so it is clear that Microsoft needs to address this shortcoming in its design environment.

When we consider more advanced analytic strategies and the inclusion of advanced algorithms, Microsoft's direction is not clear. Its Data Analysis eXpressions (DAX) language can help create custom measures and calculated fields, but it is a scripting language akin to MDX. This is useful for IT professionals who are familiar with such tools, but here again business-oriented users will be challenged to use it effectively.
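To give a sense of the kind of logic DAX expresses, here is a minimal sketch of calculated measures in a tabular model. The table and column names (Sales[Amount], Dates[Date]) are hypothetical, invented for illustration; the functions shown (SUM, CALCULATE, SAMEPERIODLASTYEAR, DIVIDE) are standard DAX.

```dax
-- Hypothetical tabular model with tables Sales[Amount] and Dates[Date].
-- A base measure, a time-shifted measure built on it, and a ratio of the two.
Total Sales := SUM ( Sales[Amount] )

Prior Year Sales := CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( Dates[Date] ) )

Sales YoY % := DIVIDE ( [Total Sales] - [Prior Year Sales], [Prior Year Sales] )
```

Even this simple example shows the issue the text raises: the measure-reference and filter-context semantics are natural for IT professionals versed in MDX but opaque to most business users.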

A wild card in Microsoft's BI and analytics strategy is mobile technology. Currently, Microsoft is pursuing a build-once, deploy-anywhere model based on HTML5, and it is a key member of the World Wide Web Consortium (W3C), which is defining the standard. The HTML5 standard, which has just passed a big hurdle in reaching candidate recommendation status, is beginning to show value in the design of new applications that can be accessed through web browsers on smartphones and tablets. The HTML5 approach could be challenging, though: our technology innovation research into mobile technology finds that more organizations (39%) prefer native mobile applications from vendors' application stores, compared with 33 percent that prefer a browser-based method and a fifth with no preference. However, the success or failure of its Windows 8-based Surface tablet will be the real barometer of Microsoft's mobile BI success, since integration with the Office franchise is a key differentiator. Early adoption of the tablet has not been strong, but Microsoft is said to be doubling down with a new version to be announced shortly. Success would put Office into the hands of the mobile workforce on a widespread basis via Microsoft devices, which could have far-reaching impacts for the mobile BI market.

As it stands now, however, Microsoft faces an uphill battle in establishing its mobile platform in a market dominated by Android and Apple iOS devices such as the iPhone and iPad. If the Surface ultimately fails, Microsoft will likely have to open up Office to run on Android and iOS or risk losing its dominant position. My colleague is quite pessimistic about Microsoft's overall mobile technology efforts and its ability to overcome the realities of the existing market. Our technology innovation research into mobile technology finds that more than half of organizations have a platform preference for their smartphones and tablets: among smartphone priorities, Apple (50%), Android (27%) and RIM (17%) are the top platforms, with Microsoft a distant fourth (5%); for tablets the order is Apple (66%), Android (19%) and then Microsoft (8%). Based on these findings, Microsoft faces challenges both in establishing its own platform and in adapting its technology to support the platforms that businesses prefer today.

Ultimately, Microsoft is trying to pull together different initiatives across multiple internal business units that are known for being siloed and not well organized around customers. Historically, Microsoft has relied on its channel partners and customers not just to figure out how to make its products work together but also to imagine what is possible, since they are not always given clear guidance from Redmond. Recent efforts suggest that Microsoft is coming together to address the big data and business analytics challenge and the massive opportunity it represents. One area in which this is happening is Microsoft's cloud initiatives. Last year's announcement of Azure virtual machines gives Microsoft an infrastructure-as-a-service (IaaS) play and positions Windows Azure SQL Database as a service offering. This could make the back-end systems I've discussed available through a cloud-based offer, but currently they are available only through the client version of the software.

For organizations that have already installed Microsoft as their primary BI platform and are looking for tight integration with an Excel-based discovery environment, the decision to move forward is relatively simple. The trade-off is that this package is still somewhat IT-centric and may not attract the larger body of business users as a more user-friendly discovery product might, nor address the failings of business intelligence. Furthermore, since Microsoft is not as engaged in direct support and service as other players in this market, it will need to move its traditionally technology-focused channel to help customers become more business-savvy. For marketing and other business departments, especially in high-velocity industries where usability and time-to-value are at a premium and back-end integration is secondary, other tools will be worth a look. Microsoft has great potential, and with analytics the top-ranked technology innovation priority among its customers, I hope that the many divisions inside the global software giant can finally come together to deliver a comprehensive approach.

Regards,

Tony Cosentino

VP and Research Director

The challenge with discussing big data analytics is in cutting through the ambiguity that surrounds the term. People often focus on the three Vs of big data – volume, variety and velocity – which provide a good lens for big data technology but get us only part of the way to understanding big data analytics, and provide even less guidance on how to take advantage of big data analytics to unlock business value.

Part of the challenge of defining big data analytics is a lack of clarity around the big data analytics value chain – from data sources, to analytic scalability, to analytic processes and access methods. Our recent research on big data finds that many capabilities are still not available, from predictive analytics (41%) to visualization (37%). Moreover, organizations are unclear on how best to change the way they approach data analysis to take advantage of big data, and on which processes and technologies they ought to be using. The growth in use of appliances, Hadoop and in-memory databases and the growing footprints of RDBMSes all add up to pressure for more intelligent analytics, but the most direct and cost-effective path from here to there is unclear. What is certain is that as business analytics and big data increasingly merge, the potential for increased value is building expectations.

To understand the organizational chasm that exists with respect to big data analytics, it's important to understand two foundational analytic approaches used in organizations today. Former Census Bureau Director Robert Groves' ideas about designed data and organic data give us a great jumping-off point for this discussion, especially as it relates to big data analytics.

In Groves' estimation, the 20th century was about designed data, or what might be considered hypothesis-driven data. With designed data we engage in analytics by establishing a hypothesis and collecting data to prove or disprove it. Designed data is at the heart of confirmatory analytics, where we go out and collect data relevant to the assumptions we have already made. Designed data is often considered the domain of the statistician, but it is also at the heart of structured databases, since we assume that all of our data can fit into columns and rows and be modeled in a relational manner.

In contrast to the designed data approach of the 20th century, the 21st century is about organic data. Organic data is not limited by a specific frame of reference that we apply to it, and because of this it grows without limits and without any structure other than that provided by randomness and probability. Organic data represents all data in the world, but for pragmatic reasons we may think of it as all the data we are able to instrument. RFID, GPS data, sensor data, sentiment data and various types of machine data are all organic data sources that may be characterized by context or by attributes such as data sparsity (also known as low-density data). Much like the interpretation of silence in a conversation, analyzing big data is as much about interpreting what exists between the lines as it is about what we can put on the lines themselves.

These two types of data and the analytics associated with them reveal the chasm that exists within organizations and shed light on the skills gap that our predictive analytics benchmark research shows to be the primary challenge for analytics in organizations today. The research finds inadequate support in many areas, including product training (26%) and guidance on applying predictive analytics to business problems (23%).

On one side of the chasm are the business groups and the analysts who are aligned with Groves' idea of designed data. These groups may encompass domain experts in areas such as finance or marketing, advanced Excel users, and even Ph.D.-level statisticians. These analysts serve organizational decision-makers and are tied closely to actionable insights that lead to specific business outcomes. The primary way they get work done is in a flat-file environment, as my colleague Mark Smith outlined in some detail last week. In this environment, Excel is often the lowest common denominator.

On the other side of the chasm are the IT and database professionals, where a different analytical culture and mindset exist. The priority challenge for this group is dealing with the three Vs and simply organizing data into a legitimate enterprise data set. This group is often more comfortable with the large data sets and machine learning approaches that are the hallmark of the organic data of the 21st century. Their analytical environment is different from that of their business counterparts; rather than Excel, SQL is often the lowest common denominator.

As I wrote in a recent blog post, database professionals and business analytics practitioners have long lived in parallel universes. In technology, practitioners deal with tables, joins and the ETL process. In business analysis, practitioners deal with data sets, merges and data preparation. When you think about it, these are the same things. The subtle difference is that database professionals have had a data mining mindset – or, as Groves calls it, an organic data mindset – while business analysts have had a designed data, statistics-driven mindset. The bigger differences revolve around the cultural mindset and the tools used to carry out the analytical objectives. These differences represent the current conundrum for organizations.
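The equivalence is easy to demonstrate. Below is a minimal sketch in Python (the tables, keys and values are invented for illustration): the same customer-order data is combined once with a SQL JOIN, the database professional's idiom, and once with a key-based merge in plain Python, the analyst's idiom, and the results match.

```python
import sqlite3

# Tiny illustrative data set: customers keyed by id, orders keyed to customers.
customers = [(1, "Acme"), (2, "Globex")]
orders = [(10, 1, 250.0), (11, 2, 125.0), (12, 1, 75.0)]

# The database professional's view: tables and a JOIN.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
con.execute("CREATE TABLE orders (id INTEGER, cust_id INTEGER, amount REAL)")
con.executemany("INSERT INTO customers VALUES (?, ?)", customers)
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", orders)
joined = con.execute(
    "SELECT c.name, o.amount FROM orders o JOIN customers c ON o.cust_id = c.id"
).fetchall()

# The business analyst's view: two data sets merged on a shared key.
lookup = dict(customers)
merged = [(lookup[cust_id], amount) for _, cust_id, amount in orders]

# Same rows either way; only the idiom and tooling differ.
assert sorted(joined) == sorted(merged)
```

The operations are interchangeable; what differs, as the paragraph above argues, is the culture and tooling wrapped around them.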

In a world of big data analytics, these two sides of the chasm are being pushed together in a shotgun wedding because the marriage of these groups is how competitive advantage is achieved. Both groups have critical contributions to make, but need to figure out how to work together before they can truly realize the benefits of big data analytics. The firms that understand that the merging of these different analytical cultures is the primary challenge facing the analytics organization, and that develop approaches that deal with this challenge, will take the lead in big data analytics. We already see this as a primary focus area for leading professional services organizations.

In my next analyst perspective on big data I will lay out some pragmatic approaches companies are using to address this big data analytics chasm; these also represent the focus of the benchmark research we’re currently designing to understand organizational best practices in big data analytics.

Regards,

Tony Cosentino

VP and Research Director

On the heels of the release of his new book, The Mobile Wave, MicroStrategy CEO Michael Saylor delivered an interesting keynote at MicroStrategy World in Amsterdam this past week. Unlike other keynotes we've seen at various supplier conferences, the presentation was not a sales pitch. There was no reference to the fact that the company was simultaneously launching MicroStrategy 9.3, a major new release of its flagship offering. The presentation focused almost entirely on the rise of mobile computing and its ability to change the world. Saylor sees the Apple iPad at the heart of the mobile revolution and notes that BI capabilities delivered through the device are displacing paper and people within organizations. The iPad's 10-inch screen, which can display 90 percent of printed pages, is the key for companies to unlock the shackles of the physical office environment. Between the lines, it's easy to read that MicroStrategy is betting a lot on mobile and on the iPad.

Saylor’s argument against paper is relatively straightforward. For years we’ve been talking about the paperless office, but technology has not yet allowed us to get away from paper, and executives are still using it for all types of reports and data. Business intelligence before mobile was restricted to columnar reporting, and business intelligence before device interactivity was a manual, paper-based process in which an executive asked an analyst to run a report to answer a question, then looked at the report on paper. The results often inspired other questions, sending the executive back to the analyst to run yet another report – and so on.  Finally, once the executive’s questions were answered, he could ask an employee to take action based on his conclusions.

The iPad, Saylor argues, changes all of this, since iOS and the 10-inch screen allow us to look at standard-size documents and interact with company data. Given the revolutionary capabilities of mobile BI systems, an executive can interactively and visually query multiple data sources, get answers immediately, run his own scenarios, and take action, all from the sidelines of his kid’s soccer game. The executive, now doing the job of three people, is much more productive (if a bit lonelier).

How does MicroStrategy's iPad-focused BI strategy stack up in the new mobile world that also contains tablets such as Google's Nexus 7 and Microsoft's Surface? In his presentation and over the course of the conference, Saylor took aim at the mobile strategies of a number of industry stalwarts, including Google and Microsoft. Microsoft in particular, he suggested, alienated both its customers and its partners with its recent preannouncement of the Surface tablet computer.

The most obvious competitor currently in the enterprise environment is Google's Android, but the Android development community is focused on the smartphone, not the tablet. Google's Nexus 7 suggests that the company is not keen to take on the iPad directly in the enterprise market; its 7-inch screen suggests consumer ambitions. One argument Saylor gives against Android is that it lacks tight enough integration between hardware and software for delivery on a 10-inch device. I'm curious whether this argument will still hold as Google starts to produce larger form-factor devices with tighter hardware and software integration, and as improved content-parsing technologies allow more information to be consumed on devices of different sizes.

The more interesting enterprise play is Microsoft's Surface tablet running Windows 8 on Intel chips. When it is finally introduced, the Microsoft advantages will be hard to ignore. By moving away from ARM-based chips, Microsoft will be able to provide full access to entrenched office productivity software, tight integration with other Windows-based hardware and software, and backward compatibility. Hewlett-Packard, in signing with Intel earlier this year, signaled its own move into Windows 8 tablets. HP's global distribution power could make this an important milestone. The challenge is whether businesses will engage with and consider Microsoft tablets, and how many people will bring the technology into the business and expect business intelligence support for it.

Unlike Google today and Microsoft tomorrow, Apple takes a “walled garden” approach to its operating system and applications, and enterprise IT departments generally do not like this idea, especially as it relates to security. On the other hand, the developer community in this garden is huge, and the “bring your own device” (BYOD) trend is really helping drive iPhone and iPad into the corporate market. The most influential businesspeople and cultural icons in our society carry iPads, and corporations, much to the chagrin of IT departments all over the world, are being forced to deal with this phenomenon.

On a practical note, I had an opportunity to test-drive MicroStrategy's platform for mobile applications. I built a number of interactive mobile dashboards for the iPad, the iPhone and my own smartphone running Android. While things worked well with the iPad and the iPhone, the Android applications had a lot of issues. I'm not sure whether this was due to a lack of MicroStrategy focus on Android or to Android itself. What I do know is that MicroStrategy Mobile works well on the iPad; just about any user can create designs with minimal training, and not having to wait for coders is a huge advantage.

Nevertheless, an Apple-focused bet in the enterprise environment is a bit risky as new devices come onto the market. It will be interesting to look at MicroStrategy's tack in the context of our upcoming Next Generation Business Intelligence benchmark research, which focuses on mobile and collaboration technologies in the enterprise BI environment.

Regards,

Tony Cosentino – VP & Research Director
