Converged IoT systems: Bringing the data center to the edge of everything

The next BriefingsDirect thought leadership panel discussion explores the rapidly evolving architectural shift of moving advanced IT capabilities to the edge to support Internet of Things (IoT) requirements.

The demands of data processing, real-time analytics, and platform efficiency at the intercept of IoT and business benefits have forced new technology approaches. We'll now learn how converged systems and high-performance data analysis platforms are bringing the data center to the operational technology (OT) edge.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To hear more about the latest capabilities in gaining unprecedented measurements and operational insights where they're needed most, please join me in welcoming Phil McRell, General Manager of the IoT Consortia at PTC; Gavin Hill, IoT Marketing Engineer for Northern Europe at National Instruments (NI) in London; and Olivier Frank, Senior Director of Worldwide Business Development and Sales for Edgeline IoT Systems at Hewlett Packard Enterprise (HPE). The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's driving this need for a different approach to computing when we think about IoT and we think about the “edge” of organizations? Why is this becoming such a hot issue?

McRell: There are several drivers, but the most interesting one is economics. In the past, the costs that would have been required to take an operational site -- a mine, a refinery, or a factory -- and do serious predictive analysis, meant you would have to spend more money than you would get back.

For very high-value assets -- assets that are millions or tens of millions of dollars -- you probably do have some systems in place in these facilities. But once you get a little bit lower in the asset class, there really isn’t a return on investment (ROI) available. What we're seeing now is that's all changing based on the type of technology available.

Gardner: So, in essence, we have this whole untapped tier of technologies that we haven't been able to get a machine-to-machine (M2M) benefit from for gathering information -- or the next stage, which is analyzing that information. How big an opportunity is this? Is this a step change, or is this a minor incremental change? Why is this economically a big deal, Olivier?

Frank: We're talking about Industry 4.0, the fourth generation of change -- after steam, after the Internet, after the cloud, and now this application of IoT to the industrial world. It’s changing at multiple levels. It’s what's happening within the factories and within this ecosystem of suppliers to the manufacturers, and the interaction with consumers of those suppliers and customers. There's connectivity to those different parties that we can then put together.

While our customers have been doing process automation for 40 years, what we're doing together is unleashing IT standardization -- taking technologies that were in the data centers and applying them to the world of process automation, and opening it up.

The analogy is what happened when mainframes were challenged by mini computers and then by PCs. It's now open architecture in a world that has been closed.

Gardner: Phil mentioned ROI, Gavin. What is it about the technology price points and capabilities that have come down to the point where it makes sense now to go down to this lower tier of devices and start gathering information?


Hill: There are two pieces to that. The first is that we're seeing that understanding more about the IoT world is more valuable than we thought. McKinsey Global Institute did a study estimating that by about 2025, IoT in the factory space will be worth somewhere between $1.2 trillion and $3.7 trillion. That says a lot.

The second piece is that we're at a stage where we can make technology at a much lower price point. We can put that onto the assets that we have in these industrial environments quite cheaply.

Then, you deal with the real big value, the data. All three of us are quite good at getting the value from our own respective areas of expertise.

Look at someone that we've worked with, Jaguar Land Rover. In their production sites, in their power train facilities, they were at a stage where they created an awful lot of data but didn't do anything with it. About 90 percent of their data wasn't being used for anything. It doesn't matter how many sensors you put on something. If you can't do anything with the data, it's completely useless.

They have been using techniques similar to what we've been doing in our collaborative efforts to gain insight from that data. Now, they're at a stage where probably 90 percent of their data is usable, and that's the big change.

Collaboration is key

Gardner: Let's learn more about your organizations and how you're working collaboratively, as you mentioned, before we get back into understanding how to go about architecting properly for IoT benefits. Phil, tell us about PTC. I understand you won an award in Barcelona recently.

McRell: That was a collaboration that our three organizations did with a pump and valve manufacturer, Flowserve. As Gavin was explaining, there was a lot of learning that had to be done upfront about what kind of sensors you need and what kind of signals you need off those sensors to come up with accurate predictions.

When we collaborate, we rely heavily on NI for their scientists and engineers to provide their expertise. We really need to consume digital data. We can't do anything with analog signals and we don't have the expertise to understand what kind of signals we need. When we obtain that, then with HPE, we can economically crunch that data, provide those predictions, and provide that optimization, because of HPE's hardware that now can live happily in those production environments.

Gardner: Tell us about PTC specifically; what does your organization do?

McRell: For IoT, we have a complete end-to-end platform that allows everything from the data acquisition gateway with NI all the way up to machine learning, augmented reality, dashboards, and mashups, any sort of interface that might be needed for people or other systems to interact.

In an operational setting, there may be one, two, or dozens of different sources of information. You may have information coming from the programmable logic controllers (PLCs) in a factory and you may have things coming from a Manufacturing Execution System (MES) or an Enterprise Resource Planning (ERP) system. There are all kinds of possible sources. We take that, orchestrate the logic, and then we make that available for human decision-making or to feed into another system.

Gardner: So the applications that PTC is developing are relying upon platforms and the extension of the data center down to the edge. Olivier, tell us about Edgeline and how that fits into this?
Frank: We came up with this idea of leveraging the enterprise computing excellence that is our DNA within HPE. As our CEO said, we want to be the IT in the IoT.

According to IDC, 40 percent of the IoT computing will happen at the edge. Just to clarify, it’s not an opposition between the edge and the hybrid IT that we have in HPE; it’s actually a continuum. You need to bring some of the workloads to the edge. It's this notion of time of insight and time of action. The closer you are to what you're measuring, the more real-time you are.

We came up with this idea: what if we could bring the depth of computing we have in the data center into this sub-second environment, where I need to read the intelligent data created by my two partners here, but also act on it and do things with it?

Take the example of an electrical short circuit that starts a fire. You don't want to send the data to the cloud and wait; you want to take immediate action. This is the notion of real-time, immediate action.

We take the deep compute. We integrate the connectivity with NI. We're the first platform that has integrated an industry standard called PXI, which allows NI to integrate the great portfolio of sensors and acquisition and analog-to-digital conversion technologies into our systems.

Finally, we bring enterprise manageability. Since we have a proliferation of systems, system management at the edge becomes a problem. So we bring our award-winning Integrated Lights-Out (iLO) technology -- with millions of licenses sold across our ProLiant servers -- to the edge as well.

Gardner: We have the computing depth from HPE, we have insightful analytics and applications from PTC, what does NI bring to the table? Describe the company for us, Gavin?

Working smarter

Hill: NI is about a $1.2 billion company worldwide. We get involved in an awful lot of industries. But in the IoT space, where we see ourselves fitting within this collaboration with PTC and HPE is in our ability to make a lot of machines smarter.

There are already some sensors on assets, machines, pumps, whatever they may be on the factory floor, but for older or potentially even some newer devices, there are not natively all the sensors that you need to be able to make really good decisions based on that data. To be able to feed in to the PTC systems, the HPE systems, you need to have the right type of data to start off with.

We have the data acquisition and control units that allow us to take that data in, but then do something smart with it. Using something like our CompactRIO System, or as you described, using the PXI platform with the Edgeline products, we can add a certain level of understanding and just a smart nature to these potentially dumb devices. It allows us not only to take in signals, but also potentially control the systems as well.

We not only have some great information from PTC that lets us know when something is going to fail, but we could potentially use their data and their information to allow us to, let’s say, decide to run a pump at half load for a little bit longer. That means that we could get a maintenance engineer out to an oil rig in an appropriate time to fix it before it runs to failure. We have the ability to control as well as to read in.

The other piece is that sensor data is great -- we like to be as open as possible in taking from any sensor vendor or provider -- but you want to be able to find the needle in the haystack. We do feature extraction to make sure that we give the important pieces of digital data back to PTC, so they can be processed by the HPE Edgeline system as well.
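As a rough illustration of the feature extraction Hill describes -- reducing a raw waveform to a few summary values at the edge so only the meaningful pieces travel upstream -- here is a minimal Python sketch. The feature set, names, and sample values are illustrative assumptions, not NI's implementation:

import numpy as np

def extract_features(signal, sample_rate_hz):
    """Reduce a raw vibration waveform to a few summary features.

    Instead of streaming every raw sample upstream, edge code like this
    forwards only the values that matter for prediction.
    """
    signal = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(signal ** 2))          # overall vibration energy
    peak = np.max(np.abs(signal))                # largest excursion
    crest_factor = peak / rms if rms > 0 else 0  # spikiness of the waveform

    # Dominant frequency from the magnitude spectrum
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    dominant_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

    return {"rms": rms, "peak": peak,
            "crest_factor": crest_factor, "dominant_hz": dominant_hz}

# Example: a 50 Hz vibration with a little noise, sampled at 1 kHz
t = np.arange(0, 1, 1 / 1000)
waveform = np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(len(t))
print(extract_features(waveform, sample_rate_hz=1000))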
Frank: This is fundamental. Capturing the right data is an art and a science, and that's really what NI brings, because you don't want to capture noise -- that's just a proliferation of data. That's a unique expertise that we're very glad to integrate into the partnership.

Gardner: We certainly understand the big benefit of IoT extending what people have done with operational efficiency over the years. We now know that we have the technical capabilities to do this at an acceptable price point. But what are the obstacles, what are the challenges that organizations still have in creating a true data-driven edge, an IoT rich environment, Phil?

Economic expertise

McRell: That’s why we're together in this consortium. The biggest obstacle is that because there are so many different requirements for different types of technology and expertise, people can become overwhelmed. They'll spend months or years trying to figure this out. We come to the table with end-to-end capability from sensors and strategy and everything in between, pre-integrated at an economical price point.

Speed is important. Many of these organizations are seeing the future, where they have to be fast enough to change their business model. For instance, some OEM discrete manufacturers are going to have to move pretty quickly from just offering product to offering service. If somebody is charging $50 million for capital equipment, and their competitor is charging $10 million a year and the service level is actually better because they are much smarter about what those assets are doing, the $50 million guy is going to go out of business.

We come to the table with the ability to quickly get that factory and those assets smart and connected, and to make sure the right people, parts, and processes are brought to bear at exactly the right time. That drives all the things people are looking for -- the up-time, the safety, the yield, and the performance of that facility. It comes down to this: if you don't have all the right parties together with that technology and expertise, you can very easily get stuck on something that takes a very long time to unravel.

Gardner: That’s very interesting when you move from a Capital Expenditure (CAPEX) to an Operational Expenditure (OPEX) mentality. Every little bit of that margin goes to your bottom line and therefore you're highly incentivized to look for whole new categories of ways to improve process efficiency.

Any other hurdles, Olivier, that you're trying to combat effectively with the consortium?

Frank: The biggest hurdle is the level of complexity; our customers don't know where to start. So the promise of us working together is really to show the value of this kind of open architecture injected into a 40-year-old process automation infrastructure. As we demonstrated yesterday with our robot powered by HPE Edgeline, the idea is that I can show immediate value to the plant manager, the quality manager, and the operations manager using the data that already resides in that factory -- data of which 70 percent or more goes unused. That's the value.

So how do you get that quickly and simply? That’s what we're working to solve so that our customers can enjoy the benefit of the technology faster and faster.

Bridge between OT and IT

Gardner: Now, this is a technology implementation, but it’s done in a category of the organization that might not think of IT in the same way as the business side -- back office applications and data processing. Is the challenge for many organizations a cultural one, where the IT organization doesn't necessarily know and understand this operational efficiency equation and vice versa, and how are we bridging that?

Hill: I'm probably going to give you the high-level view from the operational technology (OT) side. These guys will definitely have more input from their own domains of expertise, and the fact that each of us owns the piece we know best is exactly why this collaboration works so well.

You have situations with the idea of the IoT where a lot of people stood up and said, "Yeah, I can provide a solution. I have the answer," but without having a plan -- never mind a solution. We've done a really good job of understanding that we can do one part of this system, this solution, really well, and if we partner with the people who are really good in the other aspects, we provide real solutions to customers. I don't think anyone can compete with us at this stage, and that is exactly why we're in this situation.

Frank: Actually, the biggest hurdle is more on the OT side, which doesn't really rely on the IT side of the company. For many of our customers, the factory is a silo. At HPE, we haven't been selling much into that environment. That's also why, when working as a consortium, it's important to get to the right audience, which is in the factory. We also bring our IT expertise, especially in the area of security, because the moment you put an IT device in an OT environment, you potentially have problems that you didn't have before.

We're living in a closed world, and now the value is to open up. Bringing our security expertise, our managed service, our services competencies to that problem is very important.

Speed and safety out in the open

Hill: There was a really interesting piece in the HPE Discover keynote in December, when HPE Aruba talked about an issue they had as they rolled out conferencing and technology and suddenly everything wanted to be wireless. They said, "Oh, there's a bit of a security issue here now, isn't there? Everything is out there."

We can see what HPE has contributed to helping them from that side. What we're talking about here on the OT side is a similar state from the security aspect, just a little bit further along in the timeline, and we are trying to work on that as well. Again, we have HPE here and they have a lot of experience in similar transformations.

Frank: At HPE, as you know, we have our Data Center and Hybrid Cloud Group and then we have our Aruba Group. When we do OT or our Industrial IoT, we bring the combination of those skills.

For example, in security, we have HPE Aruba ClearPass technology that secures the industrial equipment back to the network and then brings in wireless, which enables the augmented-reality use cases that we showed onstage yesterday. It's a phased approach, but you see the power of bringing ubiquitous connectivity into the factory, which is a challenge in itself, and then securely connecting the IT systems to this OT equipment -- and you understand better the phases and the challenges of bringing the technology to life for our customers.

McRell: It’s important to think about some of these operational environments. Imagine a refinery the size of a small city and having to make sure that you have the right kind of wireless signal that’s going to make it through all that piping and all those fluids, and everything is going to work properly. There's a lot of expertise, a lot of technology, that we rely on from HPE to make that possible. That’s just one slice of that stack where you can really get gummed up if you don’t have all the right capabilities at the table right from the beginning. 

Gardner: We've also put this in the context of IoT not at the edge isolated, but in the context of hybrid computing and taking advantage of what the cloud can offer. It seems to me that there's also a new role here for a constituency to be brought to the table, and that’s the data scientists in the organization, a new trove of data, elevated abstraction of analytics. How is that progressing? Are we seeing the beginnings of taking IoT data and integrating that, joining that, analyzing that, in the context of data from other aspects of the company or even external datasets?

McRell: There are a couple of levels. It’s important to understand that when we talk about the economics, one of the things that has changed quite a bit is that you can actually go in, get assets connected, and do what we call anomaly detection, pretty simplistic machine learning, but nonetheless, it’s a machine-learning capability.

In some cases, we can get that going in hours. That's a ground-zero type of capability. Over time, as you learn about a line with multiple assets and how they all function together, you learn how the entire facility functions, and then you compare that across multiple facilities. At some point, you're not going to be at the edge anymore; you're going to be doing systems-level analytics, and that's a different, combined kind of analysis.

At that point, you're talking about looking across weeks, months, years. You're going to go into a lot of your back-end and maybe some of your IT systems to do some of that analysis. There's a spectrum that goes back down to the original idea of simply looking for something to go wrong on a particular asset.

The distinction I'm making here is that, in the past, you would have to get a team of data scientists to figure out almost asset by asset how to create the models and iterate on that. That's a lengthy process in and of itself. Today, at that ground-zero level, that’s essentially automated. You don't need a data scientist to get that set up. At some point, as you go across many different systems and long spaces of time, you're going to pull in additional sources and you will get data scientists involved to do some pretty in-depth stuff, but you actually can get started fairly quickly without that work.
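As a rough sketch of the "ground zero" anomaly detection McRell describes -- learn an asset's nominal behavior during an initial run, then flag readings that drift outside it -- the following Python example shows the idea. The statistical band, threshold, and sample readings are assumptions for illustration, not the ThingWorx implementation:

from statistics import mean, stdev

class AnomalyDetector:
    """Learn a nominal operating band from an initial run, then flag outliers."""

    def __init__(self, n_sigma=3.0):
        self.n_sigma = n_sigma
        self.baseline = []        # readings collected during the learning phase
        self.mu = None
        self.sigma = None

    def learn(self, reading):
        """Call during the first few hours of normal operation."""
        self.baseline.append(reading)

    def finalize(self):
        self.mu = mean(self.baseline)
        self.sigma = stdev(self.baseline)

    def is_anomalous(self, reading):
        """True when a reading drifts outside the nominal band."""
        return abs(reading - self.mu) > self.n_sigma * self.sigma

# Learn nominal pump temperature for a while, then monitor
detector = AnomalyDetector()
for temp in [71.2, 70.8, 71.5, 70.9, 71.1, 71.3, 70.7, 71.0]:
    detector.learn(temp)
detector.finalize()

print(detector.is_anomalous(71.4))   # False: within the nominal band
print(detector.is_anomalous(78.9))   # True: likely worth an alert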

The power of partnership

Frank: To echo what Phil just said, in HPE we're talking about the tri-hybrid architecture -- the edge, so let’s say close to the things; the data center; and then the cloud, which would be a data center that you don’t know where it is. It's kind of these three dimensions.

The great thing partnering with PTC is that the ThingWorx platform, the same platform, can run in any of those three locations. That’s the beauty of our HPE Edgeline architecture. You don't need to modify anything. The same thing works, whether we're in the cloud, in the data center, or on the Edgeline.

To your point about the data scientists, it's time-to-insight. There are things you want to do immediately, and as Phil pointed out, the notion of anomaly detection that we're demonstrating on the show floor is understanding those nominal parameters after a few hours of running your thing, and simply detecting something going off normal. That doesn't require data scientists. That takes us into the ThingWorx platform.
But then, for the deeper industrial processes, we involve systems integration partners and bring our own knowledge to the mix along with our customers, because they own the intelligence of their data. That's where it creates a very powerful solution.

Gardner: I suppose another benefit that the IT organization can bring to this is process automation and extension. If you're able to understand what's going on in the device, not only would you need to think about how to fix that device at the right time -- not too soon, not too late -- but you might want to look into the inventory of the part, or you might want to extend it to the supply chain if that inventory is missing, or you might want to analyze the correct way to get that part at the lowest price or under the RFP process. Are we starting to also see IT as a systems integrator or in a process integrator role so that the efficiency can extend deeply into the entire business process?

McRell: It's interesting to see how this stuff plays out. Once you start to understand in your facility -- or maybe it’s not your facility, maybe you are servicing someone's facility -- what kind of inventory should you have on hand, what should you have globally in a multi-tier, multi-echelon system, it opens up a lot of possibilities.

Today, PTC provides a lot of network visibility and a lot of spare-parts inventory management systems, but there's a limit to what these algorithms can do. They're really the best that's possible at this point -- except when you now have everything connected. That feedback loop allows you to modify all your expectations in real time and get things on the move proactively, so the right person, parts, process, and kit all show up at the right time.

Then, you have augmented reality and other tools, so that maybe somebody hasn't done this service procedure before, maybe they've never seen these parts before, but they have a guided walk-through and have everything showing up all nice and neat the day of, without anybody having to actually figure that out. That's a big set of improvements that can really change the economics of how these facilities run.

Connecting the data

Gardner: Any other thoughts on process integration?

Frank: Again, the premise behind industrial IoT is indeed, as you're pointing out, connecting the consumer, the supplier, and the manufacturer. That’s why you have also the emergence of a low-power communication layer, like LoRa or Sigfox, that really can bring these millions of connected devices together and inject them into the systems that we're creating.

Hill: Just from the conversation, I know that we’re all really passionate about this. IoT and the industrial IoT is really just a great topic for us. It's so much bigger than what we're talking about. You've talked a little bit about security, you have asked us about the cloud, you have asked us about the integration of the inventory and to the production side, and it is so much bigger than what we are talking about now.

We probably could have twice this long of a conversation on any one of these topics and still never get halfway to the end of it. It's a really exciting place to be right now. And the really interesting thing that I think all of us are now realizing -- the way that we have made advancements as a partnership as well -- is that you don't know what you don't know. A lot of companies are waking up to that as well, and we're using our collaborations to allow us to know what we don't know.

Frank: Which is why speed is so important. We can theorize and spend a lot of time in R&D, but the reality is, bring those systems to our customers, and we learn new use cases and new ways to make the technology advance.

Hill: The way that technology has gone, no one releases a product anymore that's the finished piece and that is going to stay there for 20 or 30 years. That's not what happens. Products and services are being provided that get constantly updated. How many times a week does your phone update a piece of firmware or an app? You have to be able to change and take the data that you get to adjust everything that's going on. Otherwise you will not stay ahead of the market.

And that’s exactly what Phil described earlier when he was talking about whether you sell a product or a service that goes alongside a set of products. For me, one of the biggest things is that constant innovation -- where we are going. And we've changed. We were in kind of a linear motion of progression. In the last little while, we've seen a huge amount of exponential growth in these areas.

We had a video at the end of the London HPE Discover keynote, where it was one of HPE’s pieces of what the future could be. We looked at it and thought it was quite funny. There was an automated suitcase that would follow you after you left the airport. I started to laugh at that, but then I took a second and I realized that maybe that’s not as ridiculous as it sounds, because we as humans think linearly. That’s incumbent upon us. But if the technology is changing in an exponential way, that means that we physically cannot ignore some of the most ridiculous ideas that are out there, because that’s what’s going to change the industry.

And even by having that video there and by seeing what PTC is doing with the development that they have and what we ourselves are doing in trying out different industries and different applications, we see three companies that are constantly looking through what might happen next and are ready to pounce on that to take advantage of it, each with their own expertise.

Gardner: We're just about out of time, but I'd like to hear a couple of ridiculous examples -- pushing the envelope of what we can do with these sorts of technologies now. We don’t have much time, so less than a minute each, if you can each come up perhaps with one example, named or unnamed, that might have seemed ridiculous at the time, but in hindsight has proven to be quite beneficial and been productive. Phil?

McRell: You can do this in engineering with us, you can do this in service, but we've been talking a lot about manufacturing. In a manufacturing journey, the opportunity, as Gavin and Olivier are describing here, is at the level of what happened between pre- and post-electricity: how fast things will run, the quality at which they will produce products, and therefore the business model you can now have because of that capability. These are profound changes. You will see up-times in some of the largest factories in the world go up by double digits. You will see lines run multiple times faster over time.

These are things that, if you just walked in today and then walked in again in a couple of years to some of the facilities that run the hardest, it would be really hard to believe what your eyes are seeing -- just as somebody who was around before factories had electricity would be astounded by what they see today.

Back to the Future

Gardner: One of the biggest issues at the most macro level in economics is the fact that productivity has plateaued for the past 10 or 15 years. People want to get back to what productivity was -- 3 or 4 percent a year. This sounds like it might be a big part of getting there. Olivier, an example?

Frank: Well, an example would be more like the impact on mankind and the creation of wealth for humanity. Think about these technologies combined with 3D printing: you can have a new class of manufacturers anywhere in the world -- in Africa, for example. With real-time engineering, some of the concepts that we're demonstrating today, you can design anywhere.

Another part of PTC is Computer-Aided Design (CAD) systems and Product Lifecycle Management (PLM), and we're showing real-time engineering on the floor again. You design those products and you do quick prototyping with your 3D printing. That could be anywhere in the world. And you have your users testing the real thing, understanding whether your engineering choices were relevant and whether there are differences between the digital model and the physical model -- this digital-twin idea.

Then, you're back to the drawing board. So, a new class of manufacturers that we don't even know yet, serving customers across the world and creating wealth in areas that are not yet industrialized.

Gardner: It's interesting that if you have a 3D printer you might not need to worry about inventory or supply chain.

Hill: Just to add one point on that, the bit that really excites me about where we are with technology as a whole, not just within this collaboration, is that you have 3D printing and you have the availability of open software. We all provide very software-centric products, stuff that you can adjust yourself, and that is the way of the future.

That means that among the changes that we see in the manufacturing industry, the next great idea could come from someone who has been in the production plant for 20 years, or it could come from Phil who works in the bank down the road, because at a really good price point, he has the access to that technology, and that is one of the coolest things that I can think about right now.

Where we've seen this sort of development, and the use of these sorts of technologies and implementations, make a massive difference, look at someone like Duke Energy in the US. We worked with them before we realized where our capabilities were, never mind how we could implement a great solution with PTC and with HPE. Even there, based on our own technology, the team on the power-production side of things, with some legacy equipment, decided to try this sort of application -- predictive maintenance to be able to see what's going on in their assets, which are spread across the continent.

They began this at the start of 2013 and they have seen savings of an estimated $50 billion up to this point. That’s a number.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

IDOL-powered appliance delivers better decisions via comprehensive business information searches

The next BriefingsDirect digital transformation case study highlights how a Swiss engineering firm created an appliance that quickly deploys to index and deliver comprehensive business information.

By scouring thousands of formats and hundreds of languages, the approach then provides via a simple search interface unprecedented access to trends, leads, and the makings of highly informed business decisions.

We will now explore how SEC 1.01 AG delivers a truly intelligent services solution -- one that returns new information to ongoing queries and combines internal and external information on all sorts of resources to produce a 360-degree view of end users’ areas of intense interest.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how to access the best available information in about half the usual time, we're joined by David Meyer, Chief Technology Officer at SEC 1.01 AG in Switzerland. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are some of the trends that are driving the need for what you've developed -- it's called the i5 appliance?

Meyer: The most important thing is that we can provide instant access to company-relevant information. This is one of today’s biggest challenges that we address with our i5 appliance.

Decisions are only as good as the information bases they are made on. The i5 provides the ability to access more complete information bases to make substantiated decisions. Also, you don’t want to search all the time; you want to be proactively informed. We do that with our agents and our automated programs that are searching for new information that you're interested in.

Gardner: As an organization, you've been around for quite a while and have been involved with large packaged applications -- SAP R/3, for example -- but over time, more data sources and more ability to gather information came on board, and you saw a need in the market for this appliance. Tell us a little bit about what led you to create it.

Accelerating the journey

Meyer: We started to dive into big data about the time that HPE acquired Autonomy, December 2011, and we saw that it’s very hard for companies to start to become a data-driven organization. With the i5 appliance, we would like to help companies accelerate their journey to become such a company.

Gardner: Tell us what you mean by a 360-degree view? What does that really mean in terms of getting the right information to the right people at the right time?

Meyer: In a company's information scope, you don’t just talk about internal information, but you also have external information like news feeds, social media feeds, or even governmental or legal information that you need and don’t have to time to search for every day.

So, you need to have a search appliance that can proactively inform you about things that happen outside. For example, if there's a legal issue with your customer or if you're in a contract discussion and your partner loses his signature authority to sign that contract, how would you get this information if you don't have support from your search engine?
Gardner: And search has become such a popular paradigm for acquiring information, asking a question, and getting great results. Those results are only as good as the data and content they can access. Tell us a little bit about your company SEC 1.01 AG, your size and your scope or your market. Give us a little bit of background about your company.

Meyer: We've been an HPE partner for 26 years, and we build business-critical platforms based on HPE hardware and the HPE operating system, HP-UX. Since the Autonomy acquisition in 2011, we started to build solutions based on HPE's big-data software, particularly IDOL and Vertica.

Gardner: What was it about the environment that prevented people from doing this on their own? Why wouldn't you go and just do this yourself in your own IT shop?

Meyer: The HPE IDOL software ecosystem is really an ecosystem of different software components, and these parts need to be packaged together into something that can be installed very quickly and that can provide very quick results. That's what we did with the i5 appliance.

We put all this good software from HPE IDOL together into one simple appliance, which is simple to install. We want to accelerate the time that is needed to get started with big data, to get results from it, and to begin the analytical work of using your data and gaining value from it.

Multiple formats

Gardner: As we mentioned earlier, getting the best access to the best data is essential. There are a lot of APIs and a lot of tools that come with the IDOL ecosystem as you described it, but you're able to dive into a thousand or more file formats, support 150 languages, and connect to 400 data sources. That's very impressive. Tell us how that came about.

Meyer: When you start to work with unstructured data, you need some important functionality. For example, you need to have support for a lot of languages. Imagine all these social media feeds in different languages. How do you track them if you don't support sentiment analysis on those messages?

On the other hand, you also need to understand any unstructured format. For example, if you have video broadcasts or radio broadcasts and you want to search for the content inside these broadcasts, you need to have a tool to translate the speech to text. HPE IDOL brings all the functionality that is needed to work with unstructured data, and we packed that together in our i5 appliance.

Gardner: That includes digging into PDFs and using OCR. It's quite impressive how deep and comprehensive you can be in terms of all the types of content within your organization.
How do you physically do this? If it's an appliance, you're installing it on-premises, you're able to access data sources from outside your organization, if you choose to do that, but how do you actually implement this and then get at those data sources internally? How would an IT person think about deploying this?

Meyer: We've prepared installable packages. Mainly, you need connectors to connect to repositories and data sources. For example, if you have a Microsoft Exchange server, you have a connector that understands very well how the Exchange server communicates with that connector. So, you have the ability to connect to that data source and get any content, including the metadata.

Take the metadata of an e-mail, for example -- the "From," the "To," the "Subject," and so on. You have the ability to put all that content and metadata into a centralized index, and then you're able to search that information and refine it. Then, you have a reference back to your original document.

When you want to enrich the information that you have in your company with external information, we developed a so-called SECWebConnector that can capture any information from the Internet. For example, you just enter an RSS feed or a webpage, and then you can capture the content and the metadata you want to search for or that is important for your company.
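To illustrate the connector pattern Meyer describes -- pushing document content, metadata, and a back-reference into a central index that can then be searched -- here is a simplified Python sketch. The field names and the in-memory "index" are illustrative stand-ins for the real IDOL connectors and index:

# A toy central index: in the real system this is the IDOL index,
# fed by connectors for Exchange, file shares, RSS feeds, and so on.
index = []

def ingest_email(message_id, sender, recipients, subject, body, source_ref):
    """Store searchable content alongside its metadata and a back-reference."""
    index.append({
        "id": message_id,
        "content": body,
        "metadata": {
            "from": sender,
            "to": recipients,
            "subject": subject,
        },
        # Reference back to the original document in its source repository
        "source_ref": source_ref,
    })

def search(term):
    """Naive full-text search over content and subject."""
    term = term.lower()
    return [doc for doc in index
            if term in doc["content"].lower()
            or term in doc["metadata"]["subject"].lower()]

ingest_email("msg-001", "supplier@example.com", ["buyer@example.com"],
             "Updated contract terms", "Please review the revised clause 4.",
             source_ref="exchange://inbox/msg-001")

for hit in search("contract"):
    print(hit["metadata"]["subject"], "->", hit["source_ref"])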

Gardner: So, it’s actually quite easy to tailor this specifically to an industry focus, if you wish, to a geographic focus. It’s quite easy to develop an index that’s specific to your organization, your needs, and your people.

Informational scope

Meyer: Exactly. In the crowded information landscape we have with the Internet and everything else, it's important that companies can choose the information that matters to them. Do I need legal information, news information, social media information, broadcast information? It's very important to build your own informational scope -- the things you want to be informed about and the news you want to be able to search for.

Gardner: And because of the way you structured and engineered this appliance, you're not only able to proactively go out and request things, but you can have a programmatic benefit, where you can tell it to deliver to you results when they arise or when they're discovered. Tell us a little bit how that works.

Meyer: We call them agents. You define which topics you're interested in, and when new documents are found for that search or topic, you're informed with an e-mail or a push notification in the mobile app.
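Conceptually, an agent is a saved query that runs repeatedly and reports only new results. The following Python sketch shows that pattern, reusing the toy search function from the earlier sketch; the polling loop and notification hook are illustrative assumptions, since the i5's agents are configured in the product rather than hand-coded:

import time

def run_agent(topic_query, search_fn, notify_fn, poll_seconds=3600):
    """Re-run a saved query and notify only about documents not seen before."""
    seen_ids = set()
    while True:
        for doc in search_fn(topic_query):
            if doc["id"] not in seen_ids:
                seen_ids.add(doc["id"])
                notify_fn(f"New result for '{topic_query}': "
                          f"{doc['metadata']['subject']}")
        time.sleep(poll_seconds)

# Example wiring: reuse the toy search() above and print instead of
# sending an e-mail or mobile push notification.
# run_agent("contract", search, print, poll_seconds=60)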

Gardner: Let’s dig into a little bit of this concept of an appliance. You're using IDOL and you're using Vertica, the column-based or high-performance analytics engine, also part of HPE, but soon to be part of Micro Focus. You're also using 3PAR StoreServ and ProLiant DL380 servers. Tell us how that integration happened and why you actually call this an appliance, rather than some other name?

Meyer: Appliance means that all the software is packaged together. Every component can talk to the others, speaks the same language, and can be configured the same way. We preconfigure a lot, we standardize a lot, and that's the appliance part.

And it's not bound to particular hardware, so it doesn't need to be this DL380 or whatever; it also depends on how big your environment will be. It can be a c7000 Blade Chassis, for example.

When we install an appliance, it takes one or two days until it's installed, and then the initial indexing run starts. It takes a while until you have all the data in the index, so the initial load is big, but after two or three days, you're able to search for information.

You mentioned the HPE Vertica part. We use Vertica to log every action that happens on the appliance. On one hand, this is a security feature: you need to be able to prove that nobody has found the salary list, for example, and so you need to log it.

On the other hand, you can analyze what users are doing. For example, if they don’t find something and it’s always the same thing that people are searching in the company and can't find, perhaps there's some information you need to implement into the appliance.

Gardner: You mentioned security and privileges. How does the IT organization allow the right people to access the right information? Are you going to use some other policy engine? How does that work?

Mapped security

Meyer: It's included; it's called mapped security. The connector takes the security information along with the document and indexes that security information within the index. So, you will never be able to find a document that you don't have access to in your environment. It's important that this security is given by default.
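A simplified sketch of how mapped security can work: each document is indexed together with the access control list (ACL) from its source repository, and every query is filtered against the caller's identity before results are returned. The data model and field names below are illustrative, not IDOL's actual ACL format:

def allowed(doc, user, user_groups):
    """A document is visible only if the user or one of their groups is in its ACL."""
    acl = doc.get("acl", {"users": [], "groups": []})
    return user in acl["users"] or any(g in acl["groups"] for g in user_groups)

def secure_search(index, term, user, user_groups):
    """Apply the ACL filter before matching, so restricted documents
    (the salary list, say) never appear in the result set."""
    term = term.lower()
    return [doc for doc in index
            if allowed(doc, user, user_groups)
            and term in doc["content"].lower()]

index = [
    {"content": "Quarterly salary list",
     "acl": {"users": ["hr_lead"], "groups": ["hr"]}},
    {"content": "Public product brochure",
     "acl": {"users": [], "groups": ["everyone"]}},
]

print(secure_search(index, "salary", user="jdoe", user_groups=["everyone"]))  # []
print(secure_search(index, "salary", user="hr_lead", user_groups=["hr"]))     # one hit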

Gardner: It sounds to me, David, like we're, in a sense, democratizing big data. By gathering and indexing all the unstructured data that you can possibly want to point at and connect to, you're allowing anybody in a company to run queries without having to go through a data scientist or a SQL query author. It seems to me that you're really opening up the power of data analysis to many more people on their terms, which are basic search queries. What does that get an organization? Do you have any examples of the ways that people are benefiting from this democratization, this larger pool of people able to use these very powerful tools?

Meyer: Everything is more data-driven. The i5 appliance can give you access to all of that information. The appliance is here to simplify the beginning of becoming a data-driven organization and to find out what power is in the organization's data.
For example, we enabled a Swiss company called Smartinfo to become a proactive news provider. That means they put lots of public information, newspapers, online newspapers, TV broadcasts, radio broadcasts into that index. The customers can then define the topics they're interested in and they're proactively informed about new articles about their interests.

Gardner: In what other ways do you think this will become popular? I'm guessing that a marketing organization would really benefit from finding relationships within their internal organization, between product and service, go-to market, and research and development. The parts of a large distributed organization don't always know what the other part is doing, the unknown unknowns, if you will. Any other examples of how this is a business benefit?

Meyer: You mentioned the marketing organization. How could a marketing organization listen to what customers are saying? For example, they're communicating on social media, and when you have an engine like i5, you can capture these social media feeds, do sentiment analysis on them, and see an analyzed view of what's being said about your products, your company, or your competitors.

You can detect, for example, a shitstorm about your company, a shitstorm about your competitor, or whatever. You need to have an analytic platform to see that, to visualize that, and this is a big benefit.

On the other hand, it's also this proactive information you get from it, where you can see that your competitor has a new campaign and you get that information right now because you have an agent with the customer's name. You can see that there is something happening and you can act on that information.

Gardner: When you think about future capabilities, are there other aspects that you can add on? It seems extensible to me. What would we be talking about a year from now, for example?

Very extensible

Meyer: It's very extensible. Think about all the different verticals: you can expand it for the health sector, for the transportation sector, whatever. It doesn't really matter.

We do network analysis. That means that when you prepare yourself to visit a company, you can see a network picture: what relationships this company has, which employees work there, who is a shareholder of that company, and which other companies it has contracts with.

This is a new way to get a holistic image of a company, a person, or of something that you want to know. It's thinking how to visualize things, how to visualize information, and that's the main part we are focusing on. How can we visualize or bring new visualizations to the customer?

Gardner: In the marketplace, because it's an ecosystem, we're seeing new APIs coming online all the time. Many of them are very low cost and, in many cases, open source or free. We're also seeing the ability to connect more adequately to LinkedIn and Salesforce, if you have your license for that of course. So, this really seems to me a focal point, a single pane of glass to get a single view of a customer, a market, or a competitor, and at the same time, at an affordable price.

Let's focus on that for a moment. When you have an appliance approach, what we're talking about used to be only possible at very high cost, and many people would need to be involved -- labor, resources, customization. Now, we've eliminated a lot of the labor, a lot of the customization, and the component costs have come down.
We've talked about all the great qualitative benefits, but can we talk about the cost differential between what used to be possible five years ago with data analysis, unstructured data gathering, and indexing, and what you can do now with the i5?

Meyer: You mentioned the price. We have an OEM contract, and that's something that makes us competitive in the market. Companies can build their own intelligence service. It's affordable also for small and medium businesses; it doesn't need to be a huge company with its own engineering and IT staff. It's affordable, it's automated, it's packaged together, and it's simple to install.

Companies can increase workplace performance and shorten processes. Everybody has access to all the information they need in their daily work, and they can focus more on their core business. They don't lose time searching for information and not finding it.

Gardner: For those folks who have been listening or reading, are intrigued by this, and want to learn more, where would you point them? How can they get more information on the i5 appliance and some of the concepts we have been discussing?

Meyer: That's our company website, sec101.ch. There you can find any information you would like to have. And this is available now.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Sumo Logic CEO on how modern apps benefit from ‘continuous intelligence’ and DevOps insights

The next BriefingsDirect applications health monitoring interview explores how a new breed of continuous intelligence emerges by gaining data from systems infrastructure logs -- either on-premises or in the cloud -- and then cross-referencing that with intrinsic business metrics information.

We’ll now explore how these new levels of insight and intelligence into what really goes on underneath the covers of modern applications help ensure that apps are built, deployed, and operated properly.

Today, more than ever, how a company's applications perform equates with how the company itself performs and is perceived. From airlines to retail, from finding cabs to gaming, how the applications work deeply impacts how the business processes and business outcomes work, too.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’re joined by an executive from Sumo Logic to learn why modern applications are different, what's needed to make them robust and agile, and how the right mix of data, metrics and machine learning provides the means to make and keep apps operating better than ever.

To describe how to build and maintain the best applications, welcome Ramin Sayar, President and CEO of Sumo Logic. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There’s no doubt that the apps make the company, but what is it about modern applications that makes them so difficult to really know? How is that different from the applications we were using 10 years ago?

Sayar: You hit it on the head a little bit earlier. This notion of always-on, always-available, always-accessible types of applications -- delivered either through rich web and mobile interfaces or through traditional mechanisms served up through laptops, other access points, and point-of-sale systems -- is driving the next wave of technology architecture supporting these apps.

These modern apps are around a modern stack, and so they’re using new platform services that are created by public-cloud providers, they’re using new development processes such as agile or continuous delivery, and they’re expected to constantly be learning and iterating so they can improve not only the user experience -- but the business outcomes.

Gardner: Of course, developers and business leaders are under pressure, more than ever before, to put new apps out more quickly, and to then update and refine them on a continuous basis. So this is a never-ending process.

User experience

Sayar: You're spot on. The obvious benefit of always-on is centered on rich user interaction and user experience. So, while a lot of the conversation around modern apps tends to focus on the technology and the components, there are actually fundamental challenges in how these new apps are built and managed on an ongoing basis, and in what implications that has for security. A lot of times, those two aspects are left out when people are discussing modern apps.

Gardner: That's right. We’re now talking so much about DevOps these days, but in the same breath, we’re taking about SecOps -- security and operations. They’re really joined at the hip.

Sayar: Yes, they're starting to blend. You're seeing the technology decisions around public cloud, around Docker and containers, and around microservices and APIs being made not only by developers or DevOps teams. They're heavily influenced by, and made in partnership with, the SecOps and security teams and CISOs, because the data is distributed. Now there needs to be better visibility and instrumentation, not just for the access logs, but for the business process and a holistic view of the service and service-level agreements (SLAs).

Gardner: What’s different from say 10 years ago? Distributed used to mean that I had, under my own data-center roof, an application that would be drawing from a database, using an application server, perhaps a couple of services, but mostly all under my control. Now, it’s much more complex, with many more moving parts.

Sayar: We like to look at the evolution of these modern apps. For example, a lot of our customers have traditional monolithic apps that follow the more traditional waterfall approach for iterating and release. Often, those are run on bare-metal physical servers, or possibly virtual machines (VMs). They are simple, three-tier web apps.
We see one of two things happening. The first is a need to replace the front end of those apps; we refer to those as brownfield. They start to change from waterfall to agile and they start to have more of an N-tier feel. It's really mostly about the front end -- web properties are a good example of that. And they start to componentize pieces of their apps, either on VMs or in private clouds, and that's often good for existing types of workloads.

The other big trend is this new way of building apps, what we call greenfield workloads, versus the brownfield workloads, and those take a fundamentally different approach.

Often it's centered on new technology, a stack entirely using microservices, API-first development methodology, and using new modern containers like Docker, Mesosphere, CoreOS, and using public-cloud infrastructure and services from Amazon Web Services (AWS), or Microsoft Azure. As a result, what you’re seeing is the technology decisions that are made there require different skill sets and teams to come together to be able to deliver on the DevOps and SecOps processes that we just mentioned.

Gardner: Ramin, it’s important to point out that we’re not just talking about public-facing business-to-consumer (B2C) apps, not that those aren't important, but we’re also talking about all those very important business-to-business (B2B) and business-to-employee (B2E) apps. I can't tell you how frustrating it is when you get on the phone with somebody and they say, “Well, I’ll help you, but my app is down,” or the data isn’t available. So this is not just for the public facing apps, it's all apps, right?

It's a data problem

Sayar: Absolutely. Regardless of whether it's enterprise or consumer, if it's mid-market small and medium business (SMB) or enterprise that you are building these apps for, what we see from our customers is that they all have a similar challenge, and they’re really trying to deal with the volume, the velocity, and the variety of the data around these new architectures and how they grapple and get their hands around it. At the end of day, it becomes a data problem, not just a process or technology problem.

Gardner: Let's talk about the challenges then. If we have many moving parts, if we need to do things faster, if we need to consider the development lifecycle and processes as well as ongoing security, if we’re dealing with outside third-party cloud providers, where do we go to find the common thread of insight, even though we have more complexity across more organizational boundaries?

Sayar: From a Sumo Logic perspective, we’re trying to provide full-stack visibility, not only from code and your repositories like GitHub or Jenkins, but all the way through the components of your code, to API calls, to what your deployment tools are used for in terms of provisioning and performance.

We spend a lot of effort to integrate to the various DevOps tool chain vendors, as well as provide the holistic view of what users are doing in terms of access to those applications and services. We know who has checked in which code or which branch and which build created potential issues for the performance, latency, or outage. So we give you that 360-view by providing that full stack set of capabilities.

Gardner: So, the more information the better, no matter where in the process, no matter where in the lifecycle. But then, that adds its own level of complexity. I wonder is this a fire-hose approach or boiling-the-ocean approach? How do you make that manageable and then actionable?

Sayar: We've invested quite a bit of our intellectual property (IP) not only in providing integration with these various sources of data, but also in the machine learning and algorithms, so that we can take advantage of an architecture that is truly cloud-native, multitenant, fast, and simple.

So, unlike others that are out there and available for you, Sumo Logic's architecture is truly cloud native and multitenant, but it's centered on the principle of near real-time data streaming.

As the data is coming in, our data-streaming engine allows developers, IT ops administrators, sys admins, and security professionals to have their own view, coarse-grained or fine-grained, through the role-based access controls we have in the system, and to leverage the same data for different purposes -- versus having to wait for someone to create a dashboard, create a view, or grant access to a system when something breaks.

Gardner: That’s interesting. Having been in the industry long enough, I remember when logs basically meant batch. You'd get a log dump, and then you would do something with it. That would generate a report, many times with manual steps involved. So what's the big step to going to streaming? Why is that an essential part of making this so actionable?

Sayar: It’s driven based on the architectures and the applications. No longer is it acceptable to look at samples of data that span 5 or 15 minutes. You need the real-time data, sub-second, millisecond latency to be able to understand causality, and be able to understand when you’re having a potential threat, risk, or security concern, versus code-quality issues that are causing potential performance outages and therefore business impact.

The old way was to hope and pray that, when I deployed code, I would find problems before a user complained. That's no longer acceptable. You lose business and credibility, and at the end of the day, there's no real way to hold developers, operations folks, or security folks accountable because of the legacy tools and process approach.

Center of the business

Those expectations have changed, because of the consumerization of IT and the fact that apps are the center of the business, as we’ve talked about. What we really do is provide a simple way for us to analyze the metadata coming in and provide very simple access through APIs or through our user interfaces based on your role to be able to address issues proactively.

Conceptually, there’s this notion of wartime and peacetime as we’re building and delivering our service. We look at the problems that users -- customers of Sumo Logic and internally here at Sumo Logic -- are used to and then we break that down into this lifecycle -- centered on this concept of peacetime and wartime.

Peacetime is when nothing is wrong, but you want to stay ahead of issues and you want to be able to proactively assess the health of your service, your application, your operational level agreements, your SLAs, and be notified when something is trending the wrong way.

Then, there's this notion of wartime, and wartime is all hands on deck. Instead of being alerted 15 minutes or an hour after an outage has happened or security risk and threat implication has been discovered, the real-time data-streaming engine is notifying people instantly, and you're getting PagerDuty alerts, you're getting Slack notifications. It's no longer the traditional helpdesk notification process when people are getting on bridge lines.
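
To make the "wartime" flow concrete, here is a minimal sketch -- not Sumo Logic's implementation -- of how a streaming check might push an alert to a Slack incoming webhook the moment a threshold is crossed; the webhook URL, threshold, and event fields are placeholders.

```python
# Minimal sketch (not Sumo Logic code): watch a live error-rate metric and
# push a "wartime" notification to a Slack incoming webhook the moment a
# threshold is crossed, instead of waiting for a batch report.
# SLACK_WEBHOOK_URL, the threshold, and the event fields are placeholders.

import time
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
ERROR_RATE_THRESHOLD = 0.05  # 5% of requests failing

def current_error_rate(window):
    """Return errors / total for the most recent window of log events."""
    total = len(window)
    errors = sum(1 for event in window if event.get("status", 200) >= 500)
    return errors / total if total else 0.0

def notify_wartime(rate):
    # Slack incoming webhooks accept a simple JSON payload with a "text" field.
    requests.post(SLACK_WEBHOOK_URL, json={
        "text": f":rotating_light: Error rate {rate:.1%} exceeded threshold -- all hands on deck."
    }, timeout=5)

def monitor(stream):
    """`stream` yields lists of recent log events (a sliding window)."""
    for window in stream:
        rate = current_error_rate(window)
        if rate > ERROR_RATE_THRESHOLD:
            notify_wartime(rate)
        time.sleep(1)  # re-evaluate continuously rather than every 15 minutes
```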

Because the teams are often distributed and it’s shared responsibility and ownership for identifying an issue in wartime, we're enabling collaboration and new ways of collaboration by leveraging the integrations to things like Slack, PagerDuty notification systems through the real-time platform we've built.

So, the always-on application expectations that customers and consumers have, have now been transformed to always-on available development and security resources to be able to address problems proactively.

Gardner: It sounds like we're able to not only take the data and information in real time from the applications to understand what’s going on with the applications, but we can take that same information and start applying it to other business metrics, other business environmental impacts that then give us an even greater insight into how to manage the business and the processes. Am I overstating that or is that where we are heading here?

Sayar: That’s exactly right. The essence of what we provide in terms of the service is a platform that leverages the machine logs and time-series data from a single platform or service that eliminates a lot of the complexity that exists in traditional processes and tools. No longer do you need to do “swivel-chair” correlation, because we're looking at multiple UIs and tools and products. No longer do you have to wait for the helpdesk person to notify you. We're trying to provide that instant knowledge and collaboration through the real-time data-streaming platform we've built to bring teams together versus divided.

Gardner: That sounds terrific if I'm the IT guy or gal, but why should this be of interest to somebody higher up in the organization, at a business process, even at a C-table level? What is it about continuous intelligence that cannot only help apps run on time and well, but help my business run on time and well?

Need for agility

Sayar: We talked a little bit about the whole need for agility. From a business point of view, the line-of-business folks who are associated with any of these greenfield projects or apps want to improve the cycle times of application delivery. They want measurable results for application changes or web changes, so they can see whether their web properties have increased or decreased user satisfaction or, at the end of the day, business revenue.

So, we're able to help the developers, the DevOps teams, and ultimately, line of business deliver on the speed and agility needs for these new modes. We do that through a single comprehensive platform, as I mentioned.

At the same time, what’s interesting here is that no longer is security an afterthought. No longer is security in the back room trying to figure out when a threat or an attack has happened. Security has a seat at the table in a lot of boardrooms, and more importantly, in a lot of strategic initiatives for enterprise companies today.

At the same time we're helping with agility, we're also helping with prevention. And so a lot of our customers often start with the security teams that are looking for a new way to be able to inspect this volume of data that’s coming in -- not at the infrastructure level or only the end-user level -- but at the application and code level. What we're really able to do, as I mentioned earlier, is provide a unifying approach to bring these disparate teams together.
Gardner: And yet individuals can extract the intelligence view that best suits what their needs are in that moment.

Sayar: Yes. And ultimately what we're able to do is improve customer experience, increase revenue-generating services, increase efficiencies and agility of actually delivering code that’s quality and therefore the applications, and lastly, improve collaboration and communication.

Gardner: I’d really like to hear some real world examples of how this works, but before we go there, I’m still interested in the how. As to this idea of machine learning, we're hearing an awful lot today about bots, artificial intelligence (AI), and machine learning. Parse this out a bit for me. What is it that you're using machine learning  for when it comes to this volume and variety in understanding apps and making that useable in the context of a business metric of some kind?

Sayar: This is an interesting topic, because of a lot of noise in the market around big data, machine learning, and advanced analytics. Since Sumo Logic was started six years ago, we built this platform to ensure not only that we have best-in-class security and encryption capabilities, but that it is centered on the fundamental purpose of democratizing analytics -- making it simpler to allow more than just a subset of folks to get access to information for their roles and responsibilities, whether you're on security, ops, or development teams.

To answer your question a little more succinctly, our platform is predicated on multiple levels of machine-learning and analytics capabilities. Starting at the lowest level, something we refer to as LogReduce is meant to separate signal from noise. Ultimately, it helps a lot of our users and customers reduce mean time to identification by upwards of 90 percent, because they're not searching the irrelevant data. They're searching the relevant data -- the events that are infrequent or not really known -- versus what's constantly occurring in their environment.

In doing so, it’s not just about mean time to identification, but it’s also how quickly we're able to respond and repair. We've seen customers using LogReduce reduce the mean time to resolution by upwards of 50 percent.
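
Sumo Logic has not spelled out the LogReduce internals here, but the general idea of collapsing millions of raw log lines into a short list of signatures can be sketched roughly like this; the masking rules and sample lines are invented for illustration.

```python
# Illustrative only -- not the LogReduce algorithm itself. The general idea:
# mask the variable parts of each log line (numbers, hex ids, UUID-ish tokens)
# so that millions of raw lines collapse into a short list of signatures.

import re
from collections import Counter

MASKS = [
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
    (re.compile(r"\b[0-9a-fA-F-]{32,36}\b"), "<UUID>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def signature(line: str) -> str:
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line.strip()

def reduce_logs(lines):
    """Group raw log lines by signature and return (signature, count), rarest first."""
    counts = Counter(signature(line) for line in lines)
    return sorted(counts.items(), key=lambda item: item[1])

if __name__ == "__main__":
    sample = [
        "user 1001 login ok from 10.0.0.12",
        "user 1002 login ok from 10.0.0.99",
        "payment 7f3a9c21 failed: timeout after 3000 ms",
    ]
    for sig, count in reduce_logs(sample):
        print(count, sig)  # the rare "payment ... failed" template surfaces first
```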

Predictive capabilities

Our core analytics, at the lowest level, is helping solve operational metrics and value. Then, we start to become less reactive. When you've had an outage or a security threat, you start to leverage some of our other predictive capabilities in our stack.

For example, I mentioned this concept of peacetime and wartime. In the notion of peacetime, you're looking at changes over time when you've deployed code and/or applications to various geographies and locations. A lot of times, developers and ops folks that use Sumo want to use the LogCompare or outlier-predictor operators in our machine-learning capabilities to compare branches of code, and the quality of that code, against the performance and availability of the service and app.

We allow them, with a click of a button, to compare this window for these events and these metrics for the last hour, last day, last week, last month, and compare them to other time slices of data and show how much better or worse it is. This is before deploying to production. When they look at production, we're able to allow them to use predictive analytics to look at anomalies and abnormal behavior to get more proactive.

So, reactive, to proactive, all the way to predictive is the philosophy that we've been trying to build in terms of our analytics stack and capabilities.
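
A crude, illustrative stand-in for that window-comparison workflow (not the actual LogCompare or outlier operators) might look like the following; the smoothing constant and ratio threshold are arbitrary choices for the example.

```python
# A crude stand-in for the window-comparison idea described above (not Sumo
# Logic's operators): compare per-signature event counts in a baseline window
# (say, last week) against the current window and flag large swings.

from collections import Counter

def compare_windows(baseline_events, current_events, min_ratio=3.0):
    """Return signatures whose frequency changed by at least `min_ratio`."""
    baseline = Counter(baseline_events)
    current = Counter(current_events)
    flagged = []
    for sig in set(baseline) | set(current):
        before = baseline.get(sig, 0) + 1  # +1 smoothing to avoid divide-by-zero
        after = current.get(sig, 0) + 1
        ratio = after / before
        if ratio >= min_ratio or ratio <= 1.0 / min_ratio:
            flagged.append((sig, baseline.get(sig, 0), current.get(sig, 0)))
    return flagged

# Example: "db timeout" went from 2 occurrences last week to 40 this hour.
print(compare_windows(
    ["login ok"] * 100 + ["db timeout"] * 2,
    ["login ok"] * 95 + ["db timeout"] * 40,
))
```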

Gardner: How are some actual customers using this and what are they getting back for their investment?

Sayar: We have customers that span retail and e-commerce, high-tech, media, entertainment, travel, and insurance. We're well north of 1,200 unique paying customers, and they span anyone from Airbnb, Anheuser-Busch, Adobe, Metadata, Marriott, Twitter, Telstra, Xora -- modern companies as well as traditional companies.

What do they all have in common? Often, what we see is a digital transformation project or initiative. They either have to build greenfield or brownfield apps and they need a new approach and a new service, and that's where they start leveraging Sumo Logic.

Second, what we see is that it's not always a digital transformation; it's often a cost reduction and/or a consolidation project. Consolidation could be tools or infrastructure and data center, or it could be migration to co-los or public-cloud infrastructures.

The nice thing about Sumo Logic is that we can connect anything from your top of rack switch, to your discrete storage arrays, to network devices, to operating system, and middleware, through to your content-delivery network (CDN) providers and your public-cloud infrastructures.

As it’s a migration or consolidation project, we’re able to help them compare performance and availability, SLAs that they have associated with those, as well as differences in terms of delivery of infrastructure services to the developers or users.

So whether it's agility-driven or cost-driven, Sumo Logic is very relevant for all these customers that are spanning the data-center infrastructure consolidation to new workload projects that they may be building in private-cloud or public-cloud endpoints.

Gardner: Ramin, how about a couple of concrete examples of what you were just referring to.

Cloud migration

Sayar: One good example is in the media space or media and entertainment space, for example, Hearst Media. They, like a lot of our other customers, were undergoing a digital-transformation project and a cloud-migration project. They were moving about 36 apps to AWS and they needed a single platform that provided machine-learning analytics to be able to recognize and quickly identify performance issues prior to making the migration and updates to any of the apps rolling over to AWS. They were able to really improve cycle times, as well as efficiency, with respect to identifying and resolving issues fast.

Another example would be JetBlue. We do a lot in the travel space. JetBlue is also another AWS and cloud customer. They provide a lot of in-flight entertainment to their customers. They wanted to be able to look at the service quality for the revenue model for the in-flight entertainment system and be able to ascertain what movies are being watched, what’s the quality of service, whether that’s being degraded or having to charge customers more than once for any type of service outages. That’s how they're using Sumo Logic to better assess and manage customer experience. It's not too dissimilar from Alaska Airlines or others that are also providing in-flight notification and wireless type of services.

The last one is someone that we're all pretty familiar with and that’s Airbnb. We're seeing a fundamental disruption in the travel space and how we reserve hotels or apartments or homes, and Airbnb has led the charge, like Uber in the transportation space. In their case, they're taking a lot of credit-card and payment-processing information. They're using Sumo Logic for payment-card industry (PCI) audit and security, as well as operational visibility in terms of their websites and presence.

Gardner: It's interesting. Not only are you giving them benefits along insight lines, but it sounds to me like you're giving them a green light to go ahead and experiment and then learn very quickly whether that experiment worked or not, so that they can refine it. That's so important in our digital business and agility drive these days.

Sayar: Absolutely. And if I were to think of another interesting example, Anheuser-Busch is another one of our customers. In this case, the CISO wanted to have a new approach to security and not one that was centered on guarding the data and access to the data, but providing a single platform for all constituents within Anheuser-Busch, whether security teams, operations teams, developers, or support teams.

We did a pilot for them, and as they're modernizing a lot of their apps, as they start to look at the next generation of security analytics, the adoption of Sumo started to become instant inside AB InBev. Now, they're looking at not just their existing real estate of infrastructure and apps for all these teams, but they're going to connect it to future projects such as the Connected Path, so they can understand what the yield is from each pour in a particular keg in a location and figure out whether that’s optimized or when they can replace the keg.

So, you're going from a reactive approach for security and processes around deployment and operations to next-gen connected Internet of Things (IoT) and devices to understand business performance and yield. That's a great example of an innovative company doing something unique and different with Sumo Logic.

Gardner: So, what happens as these companies modernize and they start to avail themselves of more public-cloud infrastructure services, ultimately more-and-more of their apps are going to be of, by, and for somebody else’s public cloud? Where do you fit in that scenario?

Data source and location

Sayar: Whether you're running on-premises, in co-los, through CDN providers like Akamai, on AWS, Azure, or Heroku, or on SaaS platforms, you're renting a single platform that can manage and ingest all that data for you. Interestingly enough, about half our customers' workloads run on-premises and half of them run in the cloud.

We’re agnostic to where the data is or where their applications or workloads reside. The benefit we provide is the single ubiquitous platform for managing the data streams that are coming in from devices, from applications, from infrastructure, from mobile to you, in a simple, real-time way through a multitenant cloud service.

Gardner: This reminds me of what I heard, 10 or 15 years ago about business intelligence (BI), drawing data, analyzing it, making it close to being proactive in its ability to help the organization. How is continuous intelligence different, or even better, and something that would replace what we refer to as BI?

Sayar: The issue that we faced with the first generation of BI was that it was very rear-view-mirror centric, meaning that it was looking at data and things in the past. Where we're at today, with this need for speed and the necessity to be always on, always available, the expectation is sub-millisecond latency to understand what's going on, from a security, operational, or user-experience point of view.

I'd say that we're on V2, or the next generation, of what was traditionally called BI, and we refer to that as continuous intelligence, because you're continuously adapting and learning. It's not based only on what humans know and the rules and correlations they try to presuppose, creating alarms and filters around them. It's what machines and machine intelligence supplement that with to provide a best-in-class capability, which is what we refer to as continuous intelligence.

Gardner: We’re almost out of time, but I wanted to look to the future a little bit. Obviously, there's a lot of investing going on now around big data and analytics as it pertains to many different elements of many different businesses, depending on their verticals. Then, we're talking about some of the logic benefit and continuous intelligence as it applies to applications and their lifecycle.

Where do we start to see crossover between those? How do I leverage what I’m doing in big data generally in my organization and more specifically, what I can do with continuous intelligence from my systems, from my applications?

Business Insights

Sayar: We touched a little bit on that in terms of the types of data that we integrate and ingest. At the end of the day, when we talk about full-stack visibility, it's from everything with respect to providing business insights to operational insights, to security insights.

We have some customers that are in credit-card payment processing, and they actually use us to understand activations for credit cards, so they're extracting value from the data coming into Sumo Logic to understand and predict business impact and relevant revenue associated with these services that they're managing; in this case, a set of apps that run on a CDN.
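
As a hypothetical illustration of that point, the same log stream that feeds operational dashboards can be mined for a business metric such as activations per minute; the log format and field names below are invented.

```python
# Hypothetical illustration of the point above: the same log stream that feeds
# operational dashboards can also yield a business metric. The log format and
# field names here are invented for the example.

import re
from collections import Counter

ACTIVATION = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}):\d{2}Z .* event=card_activation status=success"
)

def activations_per_minute(log_lines):
    """Count successful card activations bucketed by minute."""
    buckets = Counter()
    for line in log_lines:
        match = ACTIVATION.match(line)
        if match:
            buckets[match.group("ts")] += 1
    return buckets

sample = [
    "2017-03-01T09:15:02Z svc=cards event=card_activation status=success id=123",
    "2017-03-01T09:15:41Z svc=cards event=card_activation status=success id=124",
    "2017-03-01T09:16:05Z svc=cards event=card_activation status=declined id=125",
]
print(activations_per_minute(sample))  # Counter({'2017-03-01T09:15': 2})
```
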
At the same time, the fraud and risk team are using us for threat and prevention. The operations team is using us for understanding identification of issues proactively to be able to address any application or infrastructure issues, and that’s what we refer to as full stack.

Full stack isn't just the technology; it's providing business visibility and insights to line-of-business users, or users looking at metrics around user experience and service quality; operational-level insights that help you become more proactive or, in some cases, reactive to wartime issues, as we've talked about; and lastly, helping the security team take a different security posture around reactive and proactive threat detection and risk.

In a nutshell, where we see these things starting to converge is what we refer to as full stack visibility around our strategy for continuous intelligence, and that is technology to business to users.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Sumo Logic.

You may also be interested in:

OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

The next BriefingsDirect digital transformation case study explores how UK IT consultancy OCSL has set its sights on the holy grail of hybrid IT -- helping its clients to find and attain the right mix of hybrid cloud.

We'll now explore how each enterprise -- and perhaps even units within each enterprise -- determines the path to a proper mix of public and private cloud. Closer to home, they're looking at the proper fit of converged infrastructure, hyper-converged infrastructure (HCI), and software-defined data center (SDDC) platforms.

Implementing such a services-attuned architecture may be the most viable means to dynamically apportion applications and data support among and between cloud and on-premises deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

To describe how to rationalize the right mix of hybrid cloud and hybrid IT services along with infrastructure choices on-premises, we are joined by Mark Skelton, Head of Consultancy at OCSL in London. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: People increasingly want to have some IT on premises, and they want public cloud -- with some available continuum between them. But deciding the right mix is difficult and probably something that’s going to change over time. What drivers are you seeing now as organizations make this determination?
Skelton: It’s a blend of lot of things. We've been working with enterprises for a long time on their hybrid and cloud messaging. Our clients have been struggling just to understand what hybrid really means, but also how we make hybrid a reality, and how to get started, because it really is a minefield. You look at what Microsoft is doing, what AWS is doing, and what HPE is doing in their technologies. There's so much out there. How do they get started?

We've been struggling in the last 18 months to get customers on that journey and get started. But now, because technology is advancing, we're seeing customers starting to embrace it and starting to evolve and transform into those things. And, we've matured our models and frameworks as well to help customer adoption.

Gardner: Do you see the rationale for hybrid IT shaking down to an economic equation? Is it to try to take advantage of technologies that are available? Is it about compliance and security? You're probably tempted to say all of the above, but I'm looking for what's driving the top-of-mind decision-making now.

Start with the economics

Skelton: The initial decision-making process begins with the economics. I think everyone has bought into the marketing messages from the public cloud providers saying, "We can reduce your costs, we can reduce your overhead -- and not just from a culture perspective, but from a management perspective, a personnel perspective, and a technology solutions perspective."

CIOs, and even financial officers, are seeing economics as the tipping point they need to go into a hybrid cloud, or even all into a public cloud. But it’s not always cheap to put everything into a public cloud. When we look at business cases with clients, it’s the long-term investment we look at. Over time, it’s not always cheap to put things into public cloud. That’s where hybrid started to come back into the front of people’s minds.

We can use public cloud for the right workloads, where they want to be flexible, burst, and be a bit more agile, or even gain global reach for large global businesses, but then keep the crown jewels back inside secured data centers, where they're known and trusted and closer to some of the key, critical systems.

So, it starts with the finance side of the things, but quickly evolves beyond that, and financial decisions aren't the only reasons why people are going to public or hybrid cloud.

Gardner: In a more perfect world, we'd be able to move things back and forth with ease and simplicity, where we could take the A/B testing-type of approach to a public and private cloud decision. We're not quite there yet, but do you see a day where that choice about public and private will be dynamic -- and perhaps among multiple clouds or multi-cloud hybrid environment?

Skelton: Absolutely. I think multi-cloud is the Nirvana for every organization, just because there isn't a one-size-fits-all for every type of workload. We've been talking about it for quite a long time. The technology hasn't really been there to underpin multi-cloud and truly make it easy to move from on-premises to public or vice versa. But I think now we're getting there with technology.

Are we there yet? No, there are still a few big releases coming, things that we're waiting to be released to market, which will help simplify that multi-cloud and the ability to migrate up and back, but we're just not there yet, in my opinion.

Gardner: We might be tempted to break this out between applications and data. Application workloads might be a bit more flexible across a continuum of hybrid cloud, but other considerations are brought to the data. That can be security, regulation, control, compliance, data sovereignty, GDPR, and so forth. Are you seeing your customers looking at this divide between applications and data, and how they are able to rationalize one versus the other?

Skelton: Applications, as you have just mentioned, are the simpler things to move into a cloud model, but the data is really the crown jewels of the business, and people are nervous about putting that into public cloud. So what we're seeing a lot of is putting applications into the public cloud for the agility, elasticity, and global reach, and trying to keep data on-premises, because they're nervous about breaches in the service providers' data centers.

That's what we're seeing, but we're also seeing a rise of things like object storage. We're working with Scality, for example, and they have a unique solution for blending public and on-premises storage, so we can pin things to certain platforms in a secure data center and then, where the data is not quite as critical, move it into a public cloud environment.
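
The placement policy Skelton describes might be sketched like this; this is not Scality's API, and the classification rules are invented for the example.

```python
# Not Scality's API -- just a sketch of the placement policy described above:
# keep the "crown jewels" pinned to the on-premises tier and let less critical
# objects flow out to a public-cloud tier. Classification rules are invented.

from dataclasses import dataclass

@dataclass
class StoredObject:
    name: str
    classification: str           # e.g. "pii", "financial", "public"
    subject_to_sovereignty: bool = False

ON_PREM_CLASSES = {"pii", "financial"}

def placement_tier(obj: StoredObject) -> str:
    """Decide which storage tier an object belongs on."""
    if obj.classification in ON_PREM_CLASSES or obj.subject_to_sovereignty:
        return "on-prem-secure"      # pinned to the secured data center
    return "public-cloud-archive"    # agile, elastic, cheaper at scale

print(placement_tier(StoredObject("customer-records.db", "pii")))      # on-prem-secure
print(placement_tier(StoredObject("marketing-assets.tar", "public")))  # public-cloud-archive
```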

Gardner: It sounds like you've been quite busy. Please tell us about OCSL, an overview of your company and where you're focusing most of your efforts in terms of hybrid computing.

Rebrand and refresh

Skelton: OCSL has been around for 26 years as a business. Recently, we've been through a rebrand and a refresh of what we're focusing on, and we're moving more toward being a services organization, leading with our people and our consultants.

We're focusing on transforming customers and clients into the cloud environment, whether that's applications, data center, cloud, or hybrid cloud. We're trying to get customers on that journey of transformation, engaging with business-level people and business requirements, and working out how we make cloud a reality, rather than just saying there's a product and you go and do whatever you want with it. We're finding out what those businesses want, what the key requirements are, and then finding the right cloud models to fit them.

Gardner: So many organizations are facing not just a retrofit or a rethinking around IT, but truly a digital transformation for the entire organization. There are many cases of sloughing off business lines, and other cases of acquiring. It's an interesting time in terms of a mass reconfiguration of businesses and how they identify themselves.

Skelton: What's changed for me is that when I go and speak to a customer, I'm no longer just speaking to the IT guys; I'm actually engaging with the finance officers, the marketing officers, the digital officers -- that's the common one that is creeping up now. And it's a very different conversation.
We're looking at business outcomes now, rather than focusing on, "I need this disk, this product." It's more: "I need to deliver this service back to the business." That's how we're changing as a business. It's doing that business consultancy, engaging with that, and then finding the right solutions to fit requirements and truly transform the business.

Gardner: Of course, HPE has been going through transformations itself for the past several years, and that doesn't seem to be slowing up much. Tell us about the alliance between OCSL and HPE. How do you come together as a whole greater than the sum of the parts?

Skelton: HPE is transforming and becoming a more agile organization, with some of the spinoffs that we've had recently aiding that agility. OCSL has worked in partnership with HPE for many years, and it's all about going to market together and working together to engage with the customers at right level and find the right solutions. We've had great success with that over many years.

Gardner: Now, let’s go to the "show rather than tell" part of our discussion. Are there some examples that you can look to, clients that you work with, that have progressed through a transition to hybrid computing, hybrid cloud, and enjoyed certain benefits or found unintended consequences that we can learn from?

Skelton: We've had a lot of successes in the last 12 months as I'm taking clients on the journey to hybrid cloud. One of the key ones that resonates with me is a legal firm that we've been working with. They were in a bit of a state. They had an infrastructure that was aging, was unstable, and wasn't delivering quality service back to the lawyers that were trying to embrace technology -- so mobile devices, dictation software, those kind of things.

We came in with a first prospectus on how we would actually address some of those problems. We challenged them, and said that we need to go through a stabilization phase. Public cloud is not going to be the immediate answer. They're being courted by the big vendors, as everyone is, about public cloud and they were saying it was the Nirvana for them.

We challenged that and we got them to a stable platform first, built on HPE hardware. We got instant stability for them. So, the business saw immediate returns and delivery of service. It’s all about getting that impactful thing back to the business, first and foremost.

Building cloud model

Now, we're working through each of their service lines, looking at how we can break them up and transform them into a cloud model. That involves breaking down those apps, deconstructing the apps, and thinking about how we can use pockets of public cloud alongside the hybrid, on-premises data-center infrastructure.

They've now started to see real innovative solutions taking that business forward, but they got instant stability.

Gardner: Were there any situations where organizations were very high-minded and fanciful about what they were going to get from cloud that may have led to some disappointment -- so unintended consequences. Maybe others might benefit from hindsight. What do you look out for, now that you have been doing this for a while in terms of hybrid cloud adoption?

Skelton: One of the things I've seen a lot of with cloud is that people have bought into the messaging from the big public cloud vendors about how they can just turn on services and keep consuming, consuming, consuming. A lot of people have gotten themselves into a state where bills have been rising and rising, and the economics are looking ridiculous. The finance officers are now coming back and saying they need to rein that back in. How do they put some control around that?

That's where hybrid is helping, because you can hook up an isolated data center and start to move some of those workloads back. But the key for me is that it comes down to putting some thought into what you're putting into cloud. Just think through how you can transform and use the services properly. Don't just turn everything on because it's there and a click of a button away; actually put some design and planning into adopting cloud.

Gardner: It also sounds like the IT people might need to go out and have a pint with the procurement people and learn a few basics about good contract writing, terms and conditions, and putting in clauses that allow you to back out, if needed. Is that something that we should be mindful of -- IT being in the procurement mode as well as specifying technology mode?

Skelton: Procurement definitely needs to be involved in the initial set-up with the cloud  whenever they're committing to a consumption number, but then once that’s done, it’s IT’s responsibility in terms of how they are consuming that. Procurement needs to be involved all the way through in keeping constant track of what’s going on; and that’s not happening.

The IT guys don't really care about the cost; they care about the widgets, turning things on, and playing around with that. I don't think they really realize how much this is going to cost. So yeah, there is a bit of a disconnect in lots of organizations: procurement handles the upfront piece, and then it goes away, and then IT comes in and spends all of the money.

Gardner: In the complex service delivery environment, that procurement function probably should be constant and vigilant.

Big change in procurement

Skelton: Procurement departments are going to change. We're starting to see that in some of the bigger organizations. They're closer to the IT departments. They need to understand that technology and what’s being used, but that’s quite rare at the moment. I think that probably over the next 12 months, that’s going to be a big change in the larger organizations.

Gardner: Before we close, let's take a look to the future. A year or two from now, if we sit down again, I imagine that more micro services will be involved and containerization will have an effect, where the complexity of services and what we even think of as an application could be quite different, more of an API-driven environment perhaps.

So the complexity about managing your cloud and hybrid cloud to find the right mix, and pricing that, and being vigilant about whether you're getting your money’s worth or not, seems to be something where we should start thinking about applying artificial intelligence (AI), machine learning, what I like to call BotOps, something that is going to be there for you automatically without human intervention.

Does that sound on track to you, and do you think that we need to start looking to advanced automation and even AI-driven automation to manage this complex divide between organizations and cloud providers?

Skelton: You hit a lot of key points there in terms of where the future is going. I think we're still in the phase of trying to build the right platforms to be ready for the future. We see the recent releases of HPE Synergy, for example, being able to support these modern platforms, and that's really allowing us to then embrace things like microservices. Docker and Mesosphere are two types of platforms that will disrupt organizations and the way we do things, but you need to find the right platform first.

Hopefully, in 12 months, we'll have those platforms and we can then start to embrace some of this great new technology and really rethink our applications. And it's a challenge to the ISVs. They've got to work out how they can take advantage of some of these technologies.
We're also seeing a lot of talk about serverless computing, where nothing is running until you need to spin up results as and when you need them. The classic use case for that is Uber; they have built a whole business on that serverless type of model. I think that in 12 months' time, we're going to see a lot more of that in enterprise-type organizations.

I don't think we have it quite clear in our minds how we're going to embrace that, but it's the ISV community that really needs to start driving it. Beyond that, it's absolutely AI and bots. We're all going to be talking to computers, and they're going to be responding with very human sorts of reactions. That's the next wave.

I'm bringing that into enterprise organizations to solve some business challenges. Service test management is one of the use cases where we're seeing, in some of our clients, whether they can get immediate responses from bots to common queries, so they don't need as many support staff. It's already starting to happen.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

The next BriefingsDirect Voice of the Customer digital transformation case study highlights how high-performing big-data analysis powers an innovative artificial intelligence (AI)-based investment opportunity and evaluation tool. We'll learn how LogitBot in New York identifies, manages, and contextually categorizes truly massive and diverse data sources.

By leveraging entity recognition APIs, LogitBot not only provides investment evaluations from across these data sets, it delivers the analysis as natural-language information directly into spreadsheets as the delivery endpoint. This is a prime example of how complex cloud-to-core-to-edge processes and benefits can be managed and exploited using the most responsive big-data APIs and services.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To describe how a virtual assistant for targeting investment opportunities is being supported by cloud-based big-data services, we're joined by Mutisya Ndunda, Founder and CEO of LogitBot, and Michael Bishop, CTO of LogitBot, in New York. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s look at some of the trends driving your need to do what you're doing with AI and bots, bringing together data, and then delivering it in the format that people want most. What’s the driver in the market for doing this?

Ndunda: LogitBot is all about trying to eliminate friction between people who have very high-value jobs and some of the more mundane things that could be automated by AI.

Today, in finance, the industry, in general, searches for investment opportunities using techniques that have been around for over 30 years. What tends to happen is that the people who are doing this should be spending more time on strategic thinking, ideation, and managing risk. But without AI tools, they tend to get bogged down in the data and in the day-to-day. So, we've decided to help them tackle that problem.

Gardner: Let the machines do what the machines do best. But how do we decide where the demarcation is between what the machines do well and what the people do well, Michael?

Bishop: We believe in empowering the user and not replacing the user. So, the machine is able to go in-depth and do what a high-performing analyst or researcher would do at scale, and it does that every day, instead of once a quarter, for instance, when research analysts would revisit an equity or a sector. We can do that constantly, react to events as they happen, and replicate what a high-performing analyst is able to do.

Gardner: It’s interesting to me that you're not only taking a vast amount of data and putting it into a useful format and qualitative type, but you're delivering it in a way that’s demanded in the market, that people want and use. Tell me about this core value and then the edge value and how you came to decide on doing it the way you do?

Evolutionary process

Ndunda: It's an evolutionary process that we've embarked on or are going through. The industry is very used to doing things in a very specific way, and AI isn't something that a lot of people are necessarily familiar with in financial services. We decided to wrap it around things that are extremely intuitive to an end user who doesn't have the time to learn technology.

So, we said that we'll try to leverage as many things as possible in the back via APIs and all kinds of other things, but the delivery mechanism in the front needs to be as simple or as friction-less as possible to the end-user. That’s our core principle.
Bishop: Finance professionals generally don't like black boxes and mystery, and obviously, when you're dealing with money, you don’t want to get an answer out of a machine you can’t understand. Even though we're crunching a lot of information and  making a lot of inferences, at the end of the day, they could unwind it themselves if they wanted to verify the inferences that we have made.

We're wrapping up an incredibly complicated amount of information, but it still makes sense at the end of the day. It’s still intuitive to someone. There's not a sense that this is voodoo under the covers.

Gardner: Well, let’s pause there. We'll go back to the data issues and the user-experience issues, but tell us about LogitBot. You're a startup, you're in New York, and you're focused on Wall Street. Tell us how you came to be and what you do, in a more general sense.

Ndunda: Our professional background has always been in financial services. Personally, I've spent over 15 years in financial services, and my career led me to what I'm doing today.

In the 2006-2007 timeframe, I left Merrill Lynch to join a large proprietary market-making business called Susquehanna International Group. They're one of the largest providers of liquidity around the world. Chances are whenever you buy or sell a stock, you're buying from or selling to Susquehanna or one of its competitors.

What had happened in that industry was that people were embracing technology, but it was algorithmic trading, what has become known today as high-frequency trading. At Susquehanna, we resisted that notion, because we said machines don't necessarily make decisions well, and this was before AI had been born.

Internally, we went through this period where we had a lot of discussions around, are we losing out to the competition, should we really go pure bot, more or less? Then, 2008 hit and our intuition of allowing our traders to focus on the risky things and then setting up machines to trade riskless or small orders paid off a lot for the firm; it was the best year the firm ever had, when everyone else was falling apart.

That was the first piece that got me to understand or to start thinking about how you can empower people and financial professionals to do what they really do well and then not get bogged down in the details.

Then, I joined Bloomberg and I spent five years there as the head of strategy and business development. The company has an amazing business, but it's built around the notion of static data. What had happened in that business was that, over a period of time, we began to see the marketplace valuing analytics more and more.

Make a distinction

Part of the role that I was brought in to do was to help them unwind that and decouple the two things -- to make a distinction within the company about static information versus analytical or valuable information. The trend that we saw was that hedge funds, especially the ones that were employing systematic investment strategies, were beginning to do two things: embrace AI or technology to empower their traders, and also look deeper into analytics versus static data.

That was what brought me to LogitBot. I thought we could do it really well, because the players themselves don't have the time to do it and some of the vendors are very stuck in their traditional business models.

Bishop: We're seeing a kind of renaissance here, or we're at a pivotal moment, where we're moving away from analytics in the sense of business reporting tools or understanding yesterday. We're now able to mine data, get insightful, actionable information out of it, and then move into predictive analytics. And it's not just statistical correlations. I don’t want to offend any quants, but a lot of technology [to further analyze information] has come online recently, and more is coming online every day.

For us, Google had released TensorFlow, and that made a substantial difference in our ability to reason about natural language. Had it not been for that, it would have been very difficult one year ago.

At the moment, technology is really taking off in a lot of areas at once. That enabled us to move from static analysis of what's happened in the past and move to insightful and actionable information.

Ndunda: What Michael kind of touched on there is really important. A lot of traditional ways of looking at financial investment opportunities is to say that historically, this has happened. So, history should repeat itself. We're in markets where nothing that's happening today has really happened in the past. So, relying on a backward-looking mechanism of trying to interpret the future is kind of really dangerous, versus having a more grounded approach that can actually incorporate things that are nontraditional in many different ways.

So, unstructured data, what investors are thinking, what central bankers are saying -- all of those are really important inputs that weren't part of any model 10 or 20 years ago. Without machine learning and some of the things that we are doing today, it's very difficult to incorporate any of that and make sense of it in a structured way.

Gardner: So, if the goal is to make outlier events your friend and not your enemy, what data do you go to to close the gap between what's happened and what the reaction should be, and how do you best get that data and make it manageable for your AI and machine-learning capabilities to exploit?

Ndunda: Michael can probably add to this as well. We do not discriminate as far as data goes. What we like to do is have no opinion on data ahead of time. We want to get as much information as possible and then let a scientific process lead us to decide what data is actually useful for the task that we want to deploy it on.

As an example, we're very opportunistic about acquiring information about who the most important people at companies are and how they're connected to each other. Does this person serve on a board with that one? How do they know each other? It may not have any application at that very moment, but over the course of time, you end up building models that are actually really interesting.

We scan over 70,000 financial news sources. We capture news information across the world. We don't necessarily use all of that information on a day-to-day basis, but at least we have it and we can decide how to use it in the future.

We also monitor anything that companies file and what management teams talk about at investor conferences or on phone conversations with investors.

Bishop: Conference calls, videos, interviews.

Audio to text

Ndunda: HPE has a really interesting technology that they have recently put out. You can transcribe audio to text, and then we can apply our text processing on top of that to understand what management is saying in a structured, machine-based way. Instead of 50 people listening to 50 conference calls, you can just have a machine do it for you.
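
The flow Ndunda describes -- transcribe a recording, then run text analysis on the transcript -- might look roughly like this; the endpoint URLs, parameter names, and API key are placeholders rather than the documented Haven OnDemand interface.

```python
# Sketch of the flow described above -- transcribe a call recording, then run
# text analysis on the transcript. The endpoint URLs, parameter names, and API
# key below are placeholders, not the documented Haven OnDemand API.

import requests

API_KEY = "YOUR_API_KEY"                                  # placeholder
SPEECH_URL = "https://api.example.com/speech-to-text"     # placeholder endpoint
SENTIMENT_URL = "https://api.example.com/sentiment"       # placeholder endpoint

def transcribe(audio_path: str) -> str:
    with open(audio_path, "rb") as audio:
        response = requests.post(SPEECH_URL,
                                 files={"file": audio},
                                 data={"apikey": API_KEY},
                                 timeout=300)
    response.raise_for_status()
    return response.json().get("transcript", "")

def analyze(text: str) -> dict:
    response = requests.post(SENTIMENT_URL,
                             data={"text": text, "apikey": API_KEY},
                             timeout=60)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    transcript = transcribe("q3_earnings_call.wav")   # hypothetical recording
    print(analyze(transcript))                        # e.g. sentiment per sentence
```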

Gardner: Something we can do there that we couldn't have done before is that you can also apply something like sentiment analysis, which you couldn’t have done if it was a document, and that can be very valuable.

Bishop: Yes, even tonal analysis. There are a few theories on that, that may or may not pan out, but there are studies around tone and cadence. We're looking at it and we will see if it actually pans out.

Gardner: And so do you put this all into your own on-premises data-center warehouse or do you take advantage of cloud in a variety of different means by which to corral and then analyze this data? How do you take this fire hose and make it manageable?

Bishop: We do take advantage of the cloud quite aggressively. We're split between SoftLayer and Google. At SoftLayer we have bare-metal hardware machines and some power machines with high-power GPUs.
On the Google side, we take advantage of Bigtable and BigQuery and some of their infrastructure tools. And we have good, old PostgreSQL in there, as well as DataStax, Cassandra, and their Graph as the graph engine. We make liberal use of HPE Haven APIs as well and TensorFlow, as I mentioned before. So, it’s a smorgasbord of things you need to corral in order to get the job done. We found it very hard to find all of that wrapped in a bow with one provider.

We're big proponents of Kubernetes and Docker as well, and we leverage that to avoid lock-in where we can. Our workload can migrate between Google and the SoftLayer Kubernetes cluster. So, we can migrate between hardware or virtual machines (VMs), depending on the horsepower that’s needed at the moment. That's how we handle it.

Gardner: So, maybe 10 years ago you would have been in a systems-integration capacity, but now you're in a services-integration capacity. You're doing some very powerful things at a clip and probably at a cost that would have been impossible before.

Bishop: I certainly remember placing an order for a server, waiting six months, and then setting up the RAID drives. It's amazing that you can just flick a switch and you get a very high-powered machine that would have taken six months to order previously. In Google, you spin up a VM in seconds. Again, that's of a horsepower that would have taken six months to get.

Gardner: So, unprecedented innovation is now at our fingertips when it comes to the IT side of things, unprecedented machine intelligence, now that the algorithms and APIs are driving the opportunity to take advantage of that data.

Let's go back to thinking about what you're outputting and who uses that. Is the investment result that you're generating something that goes to a retail type of investor? Is this something you're selling to investment houses or a still undetermined market? How do you bring this to market?

Natural language interface

Ndunda: Roboto, which is the natural-language interface into our analytical tools, can be custom tailored to respond, based on the user's level of financial sophistication.

At present, we're trying them out on a semiprofessional investment platform, where people are professional traders, but not part of a major brokerage house. They obviously want to get trade ideas, they want to do analytics, and they're a little bit more sophisticated than people who are looking at investments for their retirement account.  Rob can be tailored for that specific use case.

He can also respond to somebody who is managing a portfolio at a hedge fund. The level of depth that he needs to consider is the only differential between those two things.

In the back, he may do an extra five steps if the person asking the question worked at a hedge fund, versus if the person was just asking about why is Apple up today. If you're a retail investor, you don’t want to do a lot of in-depth analysis.

Bishop: You couldn’t take the app and do anything with it or understand it.

Ndunda: Rob is an interface, but the analytics are available via multiple venues. So, you can access the same analytics via an API, a chat interface, the web, or a feed that streams into you. It just depends on how your systems are set up within your organization. But, the data always will be available to you.

Gardner: Going out to that edge equation, that user experience, we've talked about how you deliver this to the endpoints, customary spreadsheets, cells, pivots, whatever. But it also sounds like you are going toward more natural language, so that you could query, rather than a deep SQL environment, like what we get with a Siri or the Amazon Echo. Is that where we're heading?

Bishop: When we started this, trying to parameterize everything that you could ask into enough checkboxes and forms would have polluted the screen. The system has access to an enormous amount of data that you can't create a parameterized screen for. We found it was a bit of a breakthrough when we were able to start using natural language.

TensorFlow made a huge difference here in natural language understanding, understanding the intent of the questioner, and being able to parameterize a query from that. If our initial findings here pan out or continue to pan out, it's going to be a very powerful interface.
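
LogitBot has not described its model, but a minimal bag-of-words intent classifier in the TensorFlow 1.x style of that era gives a feel for how a question can be mapped to an intent before parameterizing a query; the vocabulary, intents, and training examples are invented.

```python
# Minimal sketch, not LogitBot's model: a bag-of-words intent classifier in
# the TensorFlow 1.x style of that era. Vocabulary, intents, and training
# examples are invented for illustration.

import numpy as np
import tensorflow as tf   # written against the 1.x API

VOCAB = ["why", "is", "up", "down", "return", "compare", "apple", "next", "months"]
INTENTS = ["explain_move", "forecast_return", "compare_assets"]

def featurize(question: str) -> np.ndarray:
    tokens = question.lower().split()
    return np.array([[1.0 if word in tokens else 0.0 for word in VOCAB]])

# Toy training set: (question, intent index)
examples = [
    ("why is apple up", 0),
    ("why is apple down", 0),
    ("return of apple next 6 months", 1),
    ("compare apple return", 2),
]
X = np.vstack([featurize(q) for q, _ in examples])
Y = np.eye(len(INTENTS))[[label for _, label in examples]]

x = tf.placeholder(tf.float32, [None, len(VOCAB)])
y = tf.placeholder(tf.float32, [None, len(INTENTS)])
W = tf.Variable(tf.zeros([len(VOCAB), len(INTENTS)]))
b = tf.Variable(tf.zeros([len(INTENTS)]))
logits = tf.matmul(x, W) + b
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_step, feed_dict={x: X, y: Y})
    probs = sess.run(tf.nn.softmax(logits), feed_dict={x: featurize("why is apple up today")})
    print(INTENTS[int(np.argmax(probs))])   # expected: explain_move
```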

I can't imagine having to go back to a SQL query if you're able to do it natural language, and it really pans out this time, because we’ve had a few turns of the handle of alleged natural-language querying.

Gardner: And always a moving target. Tell us specifically about SentryWatch and Precog. How do these shake out in terms of your go-to-market strategy?

How everything relates

Ndunda: One of the things that we have to do to be able to answer a lot of questions that our customers may have is to monitor financial markets and what's impacting them on a continuous basis. SentryWatch is literally a byproduct of that process where, because we're monitoring over 70,000 financial news sources, we're analyzing the sentiment, we're doing deep text analysis on it, we're identifying entities and how they're related to each other, in all of these news events, and we're sticking that into a knowledge graph of how everything relates to everything else.

It ends up being a really valuable tool, not only for us, but for other people, because, while we're building models, there are also a lot of hedge funds that have proprietary models or proprietary processes that could benefit from that very same organized, relational data store of news. That's what SentryWatch is and that's how it has evolved. It started off as something we were doing as an input, and it's actually now a valuable output, or a standalone product.
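
The knowledge-graph idea can be sketched in a self-contained way as follows; LogitBot stores this in a graph database (DataStax Graph), so networkx stands in here only so the example runs anywhere, and the entities and articles are invented.

```python
# Self-contained sketch of the "knowledge graph of how everything relates"
# idea. networkx stands in for the production graph engine; entities,
# articles, and sentiment scores are invented for the example.

import networkx as nx

graph = nx.MultiDiGraph()

def ingest_article(article_id, entities, sentiment):
    """Link every pair of entities mentioned together in one news item."""
    for entity in entities:
        graph.add_node(entity)
    for i, a in enumerate(entities):
        for b in entities[i + 1:]:
            graph.add_edge(a, b, article=article_id, sentiment=sentiment)

ingest_article("news-001", ["Apple", "European Commission", "Tim Cook"], sentiment=-0.6)
ingest_article("news-002", ["Apple", "Foxconn"], sentiment=0.2)

# Who is Apple connected to, and through which stories?
for neighbor in graph.successors("Apple"):
    for _, data in graph.get_edge_data("Apple", neighbor).items():
        print("Apple ->", neighbor, data)
```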

Precog is a way for us to showcase the ability of a machine to be predictive and not backward-looking. Again, when people are making investment decisions or allocating capital across different investment opportunities, you really care about the forward return on your investments. If I invested a dollar today, am I likely to make 20 cents in profit tomorrow, or 30 cents?

We're using pretty sophisticated machine-learning models that can take into account unstructured data sources as part of the modeling process. That will give you these forward expectations about stock returns in a very easy-to-use format, where you don't need to have a PhD in physics or mathematics.

You just ask, "What is the likely return of Apple over the next six months," taking into account what's going on in the economy. For example, Apple was fined $14 billion. That can be quickly added into a model and reflected in a new view in a matter of seconds, versus sitting down in a spreadsheet and trying to figure out how it all works out.

Gardner: Even for Apple, that's a chunk of change.

Bishop: It's a lot of money, and you can imagine that there were quite a few analysts on Wall Street in Excel, updating their models around this so that they could have an answer by the end of the day, whereas we already had an answer.
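To illustrate the shape of that kind of model, here is a hedged sketch that trains a gradient-boosted regressor on structured features plus a news-sentiment score and predicts a forward return. The features, data, and coefficients are synthetic placeholders, not Precog's actual inputs or model.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 500
    features = np.column_stack([
        rng.normal(size=n),          # trailing one-month return
        rng.normal(size=n),          # valuation z-score
        rng.uniform(-1, 1, size=n),  # news sentiment from text analysis
    ])
    # Synthetic target: a forward six-month return loosely driven by sentiment.
    forward_return = 0.05 * features[:, 2] + 0.01 * rng.normal(size=n)

    model = GradientBoostingRegressor().fit(features[:400], forward_return[:400])
    print("predicted forward return:", model.predict(features[400:401])[0])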

Gardner: How do the HPE Haven OnDemand APIs help Precog when it comes to deciding on those sources and getting them into the right format, so that you can exploit them?

Ndunda: The beauty of the platform is that it simplifies a lot of development processes that an organization of our size would have to take on themselves.

The nice thing about it is that the drag-and-drop interface is really intuitive; you don't need to be specialized in Java, Python, or whatever it is. You can set up your intent in a graphical way, and then test it out, build it, and expand it as you go along. The Lego-block structure is really useful, because if you want to try things out, it's drag and drop, connect the dots, and then see what you get on the other end.

For us, that's an innovation that we haven't seen with anybody else in the marketplace and it cuts development time for us significantly.

Gardner: Michael, anything more to add on how this makes your life a little easier?

Lowering cost

Bishop: For us, lowering the cost and time to run an experiment is very important when you're running a lot of experiments, and the Combinations product enables us to run a lot of varied experiments using a variety of the HPE Haven APIs in different combinations very quickly. You're able to get your development time down from the week or two it would otherwise take to wire up an API.

In the same amount of time, you're able to wire up the initial connection, and then you have access to pretty much everything in Haven. You turn it over to either a business user, a data scientist, or a machine-learning person, and they can drag and drop the connectors themselves. It makes my life easier and it makes the developers' lives easier, because it gives us back time.
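Here is a minimal sketch of that wire-it-once-then-combine pattern. The host, endpoint names, and JSON fields are placeholders invented for illustration, not actual Haven OnDemand endpoints.

    import requests

    BASE = "https://analytics.example.com"   # placeholder host, not a real service

    def call(endpoint, payload):
        """POST a payload to one analytics endpoint and return its JSON result."""
        response = requests.post(f"{BASE}/{endpoint}", json=payload, timeout=30)
        response.raise_for_status()
        return response.json()

    # Chain two steps: extract entities from a document, then score sentiment
    # per entity -- the kind of pipeline a drag-and-drop tool would generate.
    doc = {"text": "Apple was fined $14 billion by the European Commission."}
    entities = call("extract-entities", doc).get("entities", [])
    scored = [call("score-sentiment", {"text": doc["text"], "entity": e})
              for e in entities]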

Gardner: So, not only have we been able to democratize the querying, moving from SQL to natural language, for example, but we're also democratizing the choice of sources and combinations of sources, more or less in real time, for different types of analyses. It's not just the query, but the actual source of the data.

Bishop: Correct.

Ndunda: Again, the power of a lot of this stuff is in the unstructured world, because valuable information typically tends to be hidden in documents. In the past, you'd have to have a team of people to scour through text, extract what they thought was valuable, and summarize it for you. You could miss out on 90 percent of the other valuable stuff that's in the document.

Having the ability now to drag and drop, and then go through a document in five different iterations just by tweaking a parameter, is really useful.

Gardner: So those will be IDOL-backed APIs that you are referring to.

Ndunda: Exactly.

Bishop: It's something that would have been hard for an investment bank to process, even a few years ago. Everyone is on the same playing field here, or starting from the same base, but dealing with unstructured data has traditionally been a very difficult problem. You have a lot of technologies coming online as APIs; at the same time, they're also coming out as traditional on-premises [software and appliance] solutions.
We're all starting from the same gate here. Some folks are a little ahead, but I'd say that Facebook is further ahead than an investment bank in its ability to reason over unstructured data. In our world, I feel like we're starting basically at the same place that Goldman or Morgan would be.

Gardner: It's a very interesting reset that we're going through. It's also interesting that we talked earlier about the divide between where the machine ends and the individual knowledge worker begins, and that's going to be a moving target. Do you have any sense of how that characterization changes, of what the right combination is of machine intelligence and the best of human intelligence?

Empowering humans

Ndunda: I don't foresee machines replacing humans, per se. I see them empowering humans. To the extent that your role is not completely based on a single task, but on something where you actually manage a process that goes from one end to another, those particular positions will be there, and the machines will free up people to focus on that.

But, in the case where you have somebody who is really responsible for something that can be automated, then obviously that will go away. Machines don't eat, they don’t need to take vacation, and if it’s a task where you don't need to reason about it, obviously you can have a computer do it.

What we're seeing now is that if you have a machine sitting side by side with a human, and the machine can pick up on how the human reasons with some of the new technologies, then the machine can do a lot of the grunt work, and I think that’s the future of all of this stuff.

Bishop: What we're delivering is that we distill a lot of information, so that a knowledge worker or decision-maker can make an informed decision, instead of watching CNBC and being a single-source reader. We can go out and scour the best of all the information, distill it down, and present it, and they can choose to act on it.

Our goal here is not to make the next jump and make the decision. Our job is to present the information to a decision-maker.

Gardner: It certainly seems to me that the organization, big or small, retail or commercial, that can make the best use of this technology, machine learning, will in the end win.

Ndunda: Absolutely. It is a transformational technology, because for the first time in a really long time, the reasoning piece of it is within grasp of machines. These machines can operate in the gray area, which is where the world lives.

Gardner: And that gray area can almost have unlimited variables applied to it.

Ndunda: Exactly. Correct.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How lastminute.com uses machine learning to improve travel bookings user experience

How lastminute.com uses machine learning to improve travel bookings user experience

The next BriefingsDirect Voice of the Customer digital transformation case study highlights how online travel and events pioneer lastminute.com leverages big-data analytics with speed at scale to provide business advantages to online travel services.

We'll explore how lastminute.com manages massive volumes of data to support cutting-edge machine-learning algorithms to allow for speed and automation in the rapidly evolving global online travel research and bookings business.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how a culture of IT innovation helps make highly dynamic customer interactions for online travel a major differentiator, we're joined by Filippo Onorato, Chief Information Officer at lastminute.com group in Chiasso, Switzerland. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Most people these days are trying to do more things more quickly amid higher complexity. What is it that you're trying to accomplish in terms of moving beyond disruption and being competitive in a highly complex area?
Onorato: The travel market -- and in particular the online travel market -- is a very fast-moving market, and the habits and behaviors of the customers are changing so rapidly that we have to move fast.

Disruption is coming every day from different actors ... [requiring] a different way of constructing the customer experience. In order to do that, you have to rely on very big amounts of data -- just to study the evolution of the customer and their behaviors.

Gardner: And customers are more savvy; they really know how to use data and look for deals. They're expecting real-time advantages. How is the sophistication of the end user impacting how you work at the core, in your data center, and in your data analysis, to improve your competitive position?

Onorato: Once again, customers are normally looking for information, and providing the right information at the right time is a key to our success. The brands we came from were Bravofly and, in Italy, Volagratis; that name means "free flight." The competitive advantage we have is to provide a comparison among all the different airline tickets, where the market is changing rapidly from the standard airline behavior to the low-cost ones. Customers are eager to find the best deal, the best price for their travel requirements.

So, the ability to construct their customer experience in order to find the right information at the right time, comparing hundreds of different airlines, was the competitive advantage we made our fortune on.

Gardner: Let's edify our listeners and readers a bit about lastminute.com. You're global. Tell us about the company and perhaps your size, employees, and the number of customers you deal with each day.

Most famous brand

Onorato: We are 1,200 employees worldwide. Lastminute.com, the most famous brand worldwide, was acquired by the Bravofly Rumbo Group two years ago from Sabre. We own Bravofly; that was the original brand. We own Rumbo; that is very popular in Spanish-speaking markets. We own Volagratis in Italy; that was the original brand. And we own Jetcost; that is very popular in France. That is actually a metasearch, a combination of search and competitive comparison between all the online travel agencies (OTAs) in the market.

We span across 40 countries, we support 17 languages, and we help almost 10 million people fly every year.

Gardner: Let’s dig into the data issues here, because this is a really compelling use-case. There's so much data changing so quickly, and sifting through it is an immense task, but you want to bring the best information to the right end user at the right time. Tell us a little about your big-data architecture, and then we'll talk a little bit about bots, algorithms, and artificial intelligence.

Onorato: The architecture of our system is pretty complex. On one side, we have to react almost instantly to the searches that the customers are doing. We have a real-time platform that's grabbing information from all the providers: airlines, other OTAs, hotel providers, bed banks, or whatever.

We concentrate all this information in a huge real-time database, using a lot of caching mechanisms, because the speed of the search, the speed of giving results to the customer, is a competitive advantage. That's the real-time part of our development that constitutes the core business of our industry.
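As a rough sketch of the caching idea, the snippet below serves repeated searches from a short-lived in-memory cache instead of re-querying every provider on each request. The provider call, TTL, and data shapes are assumptions for illustration, not lastminute.com's platform.

    import time

    CACHE_TTL_SECONDS = 60   # fares change quickly, so cache entries expire fast
    _cache = {}              # (origin, destination, date) -> (timestamp, results)

    def fetch_from_providers(origin, destination, date):
        """Placeholder for the real-time calls to airlines, OTAs, and bed banks."""
        return [{"provider": "example-air", "price": 99.0}]

    def search(origin, destination, date):
        key = (origin, destination, date)
        hit = _cache.get(key)
        if hit and time.time() - hit[0] < CACHE_TTL_SECONDS:
            return hit[1]                        # cache hit: answer instantly
        results = fetch_from_providers(origin, destination, date)
        _cache[key] = (time.time(), results)     # cache miss: refresh the entry
        return results

    print(search("LGW", "FCO", "2017-03-01"))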

Gardner: And this core of yours, these are your own data centers? How have you constructed them and how do you manage them in terms of on-premises, cloud, or hybrid?

Onorato: It's all on-premises, and this is our core infrastructure. On the other hand, all that data that is gathered from the interaction with the customer is partially captured. This is the big challenge for the future -- having all that data stored in a data warehouse. That data is captured in order to build our internal knowledge. That would be the sales funnel.

So, the behavior of the customer, and the percentage of conversion in each and every step that the customer takes, from the search to the actual booking, is gathered together in a data warehouse that is based on HPE Vertica, and then analyzed in order to find the best places to optimize the conversion. That's the main usage of the data warehouse.

On the other hand, what we're implementing on top of all this enormous amount of data is session-related data. You can imagine how much data a single interaction of a customer can generate. Right now, we're storing a short history of that data, but the goal is to have two years' worth of session data. That would be an enormous amount of data.
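The funnel analysis Onorato describes maps naturally onto a warehouse query. Below is a hedged sketch using the vertica_python client; the connection details, table, and column names are invented for illustration and are not lastminute.com's schema.

    import vertica_python

    conn_info = {"host": "warehouse.example.com", "port": 5433,
                 "user": "analyst", "password": "***", "database": "travel"}

    FUNNEL_SQL = """
    SELECT step,                          -- e.g. search, select, payment, booking
           COUNT(DISTINCT session_id) AS sessions
    FROM   session_events                 -- hypothetical fact table
    WHERE  event_date >= CURRENT_DATE - 7
    GROUP  BY step
    ORDER  BY sessions DESC;
    """

    connection = vertica_python.connect(**conn_info)
    cursor = connection.cursor()
    cursor.execute(FUNNEL_SQL)
    for step, sessions in cursor.fetchall():
        print(step, sessions)
    connection.close()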

Gardner: And when we talk about data, often we're concerned about velocity and volume. You've just addressed volume, but velocity must be a real issue, because any change in a weather issue in Europe, for example, or a glitch in a computer system at one airline in North America changes all of these travel data points instantly.

Unpredictable events

Onorato: That's also pretty typical in the tourism industry. It's a very delicate business, because we have to react to unpredictable events that are happening all over the world. In order to do a better optimization of margin, of search results, etc., we're also applying some machine-learning algorithms, because a human can't react so fast to the ever-changing market or situation.

In those cases, we use optimization algorithms to fine-tune our search results, to better deal with a customer request, and to propose the best deal at the right time. In very simple terms, that's our core business right now.

Gardner: And Filippo, only your organization can do this, because the people with the data on the back side can’t apply the algorithm; they have only their own data. It’s not something the end user can do on the edge, because they need to receive the results of the analysis and the machine learning. So you're in a unique, important position. You're the only one who can really apply the intelligence, the AI, and the bots to make this happen. Tell us a little bit about how you approached that problem and solved it.
Onorato: I perfectly agree. We are the collector of an enormous amount of product-related information on one side. On the other side, what we're collecting are the customer behaviors. Matching the two is unique for our industry. It's definitely a competitive advantage to have that data.

Then, what you do with all that data is something that is pushing us to do continuous innovation and continuous analysis. By the way, I don't think something like this can be implemented without a lot of training and a lot of understanding of the data.

Just to give you an example, the machine-learning algorithm that we're implementing, called a multi-armed bandit, is a kind of parallel testing of different configurations of parameters that are presented to the final user. This algorithm reacts to a specific set of conditions and proposes the best combination of order, visibility, pricing, and whatever to the customer in order to satisfy their research.

What we really do in that case is to grab information, build our experience into the algorithm, and then optimize this algorithm every day, by changing parameters, by also changing the type of data that we're inputting into the algorithm itself.
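A multi-armed bandit can be sketched in a few lines. The epsilon-greedy version below, with made-up page layouts and conversion rates, shows the explore-versus-exploit loop Onorato describes; it is an illustration, not lastminute.com's algorithm.

    import random

    arms = ["layout_a", "layout_b", "layout_c"]   # hypothetical result-page setups
    counts = {arm: 0 for arm in arms}
    values = {arm: 0.0 for arm in arms}           # running mean conversion rate
    EPSILON = 0.1                                 # fraction of traffic that explores

    def choose_arm():
        if random.random() < EPSILON:             # explore occasionally
            return random.choice(arms)
        return max(arms, key=lambda a: values[a]) # otherwise exploit the best so far

    def update(arm, reward):
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean

    true_rates = {"layout_a": 0.02, "layout_b": 0.05, "layout_c": 0.03}  # simulated
    for _ in range(10000):                        # simulated sessions
        arm = choose_arm()
        update(arm, 1.0 if random.random() < true_rates[arm] else 0.0)

    print(max(values, key=values.get))            # usually converges on layout_b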

So, it’s an ongoing experience; it’s an ongoing study. It's endless, because the market conditions are changing and the actors in the market are changing as well, coming from the two operators in the past, the airline and now the OTA. We're also a metasearch, aggregating products from different OTAs. So, there are new players coming in and they're always coming closer and closer to the customer in order to grab information on customer behavior.

Gardner: It sounds like you have a really intense culture of innovation, and that's super important these days, of course. As we were hearing at the HPE Big Data Conference 2016, the feedback-loop element of big data is now really taking precedence. We have the ability to manage the data, to find the data, to put the data in a useful form, and we're finding new ways to use it. It seems to me that the more people use your websites, the better that algorithm gets, the better the insight to the end user, and therefore the better the result and user experience. And it never ends; it always improves.

How does this extend? Do you take it to now beyond hotels, to events or transportation? It seems to me that this would be highly extensible and the data and insights would be very valuable.

Core business

Onorato: Correct. The core business was initially the flight business. We were born by selling flight tickets. Hotels and pre-packaged holidays was the second step. Then, we provided information about lifestyle. For example, in London we have an extensive offer of theater, events, shows, whatever, that are aggregated.

Also, we have a smaller brand regarding restaurants. We're offering car rental. We're also giving value-added services to the customer, because the journey of the customer doesn't end with the booking. It continues throughout the trip, and we're providing information regarding check-in; web check-in is a service that we provide. There are a lot of ancillary businesses that are making the overall travel experience better, and that's the goal for the future.

Gardner: I can even envision where you play a real-time concierge, where you're able to follow the person through the trip and be available to them as a bot or a chat. This edge-to-core capability is so important, and that big data feedback, analysis, and algorithms are all coming together very powerfully.

Tell us a bit about metrics of success. How can you measure this? Obviously a lot of it is going to be qualitative. If I'm a traveler and I get what I want, when I want it, at the right price, that's a success story, but you're also filling every seat on the aircraft or you're filling more rooms in the hotels. How do we measure the success of this across your ecosystem?

Onorato: In that sense, we're probably a little bit farther away from the real product, because we're an aggregator. We don’t have the risk of running a physical hotel, and that's where we're actually very flexible. We can jump from one location to another very easily, and that's one of the competitive advantages of being an OTA.

But the success overall right now is giving the best information at the right time to the final customer. What we're measuring right now is definitely the voice of the customer, the voice of the final customer, who is asking for more and more information, more and more flexibility, and the ability to live an experience in the best way possible.
So, we're also providing a brand that is associated with wonderful holidays, having fun, etc.

Gardner: The last question, for those who are still working on building out their big data infrastructure, trying to attain this cutting-edge capability and starting to take advantage of machine learning, artificial intelligence, and so forth: if you could do it all over again, what would you tell them? What would be your advice to somebody who is still in the early stages of their big data journey?

Onorato: It is definitely based on two factors -- having the best technology and not always trying to build your own technology, because there are a lot of products in the market that can speed up your development.

And also, it's having the best people. The best people are one of the competitive advantages of any company that is running this kind of business. You have to rely on fast learners, because market conditions are changing, technology is changing, and people need to train themselves very fast. So, you have to invest in people and invest in the best technology available.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

WWT took an enterprise Tower of Babel and delivered comprehensive intelligent search

WWT took an enterprise Tower of Babel and delivered comprehensive intelligent search

The next BriefingsDirect Voice of the Customer digital transformation case study highlights how World Wide Technology, known as WWT, in St. Louis, found itself with a very serious yet somehow very common problem -- users simply couldn’t find relevant company content.

We'll explore how WWT reached deep into its applications, data, and content to rapidly and efficiently create a powerful Google-like, pan-enterprise search capability. Not only does it search better and empower users, the powerful internal index sets the stage for expanded capabilities using advanced analytics to engender a more productive and proactive digital business culture.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how WWT took an enterprise Tower of Babel and delivered cross-applications intelligent search are James Nippert, Enterprise Search Project Manager, and Susan Crincoli, Manager of Enterprise Content, both at World Wide Technology in St. Louis. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems pretty evident that the better search you have in an organization, the better people are going to find what they need as they need it. What holds companies back from delivering results like people are used to getting on the web?

Nippert:  It’s the way things have always been. You just had to drill down from the top level. You go to your Exchange, your email, and start there. Did you save a file here? "No, I think I saved it on my SharePoint site," and so you try to find it there, or maybe it was in a file directory.

Those are the steps that people have been used to, because it's how they've been doing it their entire lives, and it's the nature of the beast as we bring more and more enterprise applications into the fold. You have enterprises with 100 or 200 applications, and each of those has its own unique data silos. So, users have to try to juggle all of these different content sources where stuff could be saved. They're just used to having to dig through each one of those to try to find whatever they're looking for.

Gardner: And we’ve all become accustomed to instant gratification. If we want something, we want it right away. So, if you have to tag something, or you have to jump through some hoops, it doesn’t seem to be part of what people want. Susan, are there any other behavioral parts of this?

Find the world

Crincoli: We, as consumers, are getting used to the Google-like searching. We want to go to one place and find the world. In the information age, we want to go to one place and be able to find whatever it is we’re looking for. That easily transfers into business problems. As we store data in myriad different places, the business user also wants the same kind of an interface.

Gardner: Certain tools that can only look at a certain format or can only deal with certain tags or taxonomy are strong, but we want to be comprehensive. We don’t want to leave any potentially powerful crumbs out there not brought to bear on a problem. What’s been the challenge when it comes to getting at all the data, structured, unstructured, in various formats?

Nippert: Traditional search tools are built off of document metadata. It’s those tags that go along with records, whether it’s the user who uploaded it, the title, or the date it was uploaded. Companies have tried for a long time to get users to tag with additional metadata that will make documents easier to search for. Maybe it’s by department, so you can look for everything in the HR Department.

At the same time, users don't want to spend half an hour tagging a document; they just want to load it and move on with their day. Take pictures, for example. Most enterprises have hundreds of thousands of pictures that are stored, but they're all named whatever number the camera gave them, like DC0001. If you have 1,000 pictures named that, you can't have a successful search, because no search engine will be able to tell just by that title -- and nothing else -- what they want to find.

Gardner: So, we have a situation where the need is large and the paybacks could be large, but the task and the challenge are daunting. Tell us about your journey. What did you do in order to find a solution?

Nippert: We originally recognized a problem with our on-premises Microsoft SharePoint environment. We were using an older version of SharePoint that was running mostly on metadata, and our users weren't uploading any metadata along with their intranet content.

We originally set out to solve that issue, but then, as we began interviewing business users, we understood very quickly that this is an enterprise-scale problem. Scaling out even further, we found out it’s been reported that as much as 10 percent of staffing costs can be lost directly to employees not being able to find what they're looking for. Your average employee can spend over an entire work week per year searching for information or documentation that they need to get their job done.

So it's a very real problem. WWT noticed it over the last couple of years, but as the velocity and volume of data increase, it's only going to become more apparent. With that in mind, we set out to start an RFI process with all the enterprise search leaders. We used the Gartner Magic Quadrants and started talks with all of the Magic Quadrant leaders. Then, through a down-selection process, we eventually landed on HPE.

We have a wonderful strategic partnership with them. We wound up going with the HPE IDOL tool, which has been one of the leaders in enterprise search, as well as big-data analytics, for well over a decade now, because it has a very extensible platform, something that you can really scale out, customize, and build on top of. It doesn't just do one thing.
Gardner: And it's one solution to let people find what they're looking for, but when you're comprehensive and you can get at all kinds of data in all sorts of apps, silos, and nooks and crannies, you can deliver results that the searching party didn't even know were there. The results can be perhaps more powerful than they were originally expecting.

Susan, any thoughts about a culture, a digital transformation benefit, when you can provide that democratization of search capability, but maybe extended into almost analytics or some larger big-data type of benefit?

Multiple departments

Crincoli: We're working across multiple departments and we have a lot of different internal customers that we need to serve. We have a sales team, business development practices, and professional services. We have all these different departments that are searching for different things to help them satisfy our customers’ needs.

With HPE being a partner, where their customers are our customers, we have this great relationship with them. It helps us to see the value across all the different things that we can bring to bear to get all this data, and then, as we move forward, we can help people build more relevant results.

If something is searched for one time, versus 100 times, then that’s going to bubble up to the top. That means that we're getting the best information to the right people in the right amount of time. I'm looking forward to extending this platform and to looking at analytics and into other platforms.

Gardner: That’s why they call it "intelligent search." It learns as you go.

Nippert: The concept behind intelligent search is really two-fold. It first focuses on business empowerment, which is letting your users find whatever it is specifically that they're looking for, but then, when you talk about business enablement, it’s also giving users the intelligent conceptual search experience to find information that they didn’t even know they should be looking for.

If I'm a sales representative and I'm searching for company "X," I need to find any of the Salesforce data on that, but maybe I also need to find the account manager, maybe I need to find professional services’ engineers who have worked on that, or maybe I'm looking for documentation on a past project. As Susan said, that Google-like experience is bringing that all under one roof for someone, so they don’t have to go around to all these different places; it's presented right to them.

Gardner: Tell us about World Wide Technology, so we understand why having this capability is going to be beneficial to your large, complex organization?
Crincoli: We're a $7-billion organization and we have strategic partnerships with Cisco, HPE, EMC, and NetApp, etc. We have a lot of solutions that we bring to market. We're a solution integrator and we're also a reseller. So, when you're an account manager and you're looking across all of the various solutions that we can provide to solve the customer’s problems, you need to be able to find all of the relevant information.

You probably need to find people as well. Not only do I need to find how we can solve this customer’s problem, but also who has helped us to solve this customer’s problem before. So, let me find the right person, the right pre-sales engineer or the right post-sales engineer. Or maybe there's somebody in professional services. Maybe I want the person who implemented it the last time. All these different people, as well as solutions that we can bring in help give that sales team the information they need right at their fingertips.

It’s very powerful for us to think about the struggles that a sales manager might have, because we have so many different ways that we can help our customer solve those problems. We're giving them that data at their fingertips, whether that’s from Salesforce, all the way through to SharePoint or something in an email that they can’t find from last year. They know they have talked to somebody about this before, or they want to know who helped me. Pulling all of that information together is so powerful.

We don’t want them to waste their time when they're sitting in front of a customer trying to remember what it was that they wanted to talk about.

Gardner: It really amounts to customer service benefits in a big way, but I'm also thinking this is a great example of how, when you architect and deploy and integrate properly on the core, on the back end, that you can get great benefits delivered to the edge. What is the interface that people tend to use? Is there anything we can discuss about ease of use in terms of that front-end query?

Simple and intelligent

Nippert: As far as ease of use goes, it’s simplicity. If you're a sales rep or an engineer in the field, you need to be able to pull something up quickly. You don’t want to have to go through layers and layers of filtering and drilling down to find what you're looking for. It needs to be intelligent enough that, even if you can’t remember the name of a document or the title of a document, you ought to be able to search for a string of text inside the document and it still comes back to the top. That’s part of the intelligent search; that’s one of the features of HPE IDOL.

Whenever you're talking about front-end, it should be something light and something fast. Again, it’s synonymous with what users are used to on the consumer edge, which is Google. There are very few search platforms out there that can do it better. Look at the  Google home page. It’s a search bar and two buttons; that’s all it is. When users are used to that at home and they come to work, they don’t want a cluttered, clumsy, heavy interface. They just need to be able to find what they're looking for as quickly and simply as possible. 

Gardner: Do you have any examples where you can qualify or quantify the benefit of this technology and this approach that will illustrate why it’s important?

Nippert: We actually did a couple of surveys, pre- and post-implementation. As I mentioned earlier, it was very well known that our search demands weren't being met. The feedback that we heard over and over again was "search sucks." People would say that all the time. So, we tried to get a little more quantification around that with some surveys before and after the implementation of IDOL search for the enterprise. We got a couple of really great numbers out of it. We saw that overall satisfaction with search went up by about 30 percent. Before, it was right in the middle; half of the users were happy, half of them weren't.

Now, we're well over 80 percent overall satisfaction with search. It's gotten better at finding everything from documents to records to web pages across the board; it's improving on all of those. As far as the specifics go, the thing we really cared about going into this was, "Can I find it on the first page?" How often do you ever go to the second page of search results?

With our pre-surveys, we found that under five percent of people were finding it on the first page. They had to go to the second or third page, or pages four through 10. Most users just gave up if it wasn't on the first page. Now, over 50 percent of users are able to find what they're looking for on the very first page, and if not, then definitely on the second or third page.

We've gone from a completely unsuccessful search experience to a valid, successful search experience that we can continue to enhance.

Crincoli: I agree with James. When I came to the company, I felt that way, too -- search sucks. I couldn’t find what I was looking for. What’s really cool with what we've been able to do is also review what people are searching for. Then, as we go back and look at those analytics, we can make those the best bets.

If we see hundreds of people are searching for the same thing or through different contexts, then we can make those the best bets. They're at the top and you can separate those things out. These are things like the handbook or PTO request forms that people are always searching for.

Gardner: I'm going to just imagine that if I were in the healthcare, pharma, or financial sectors, I'd want to give my employees this capability, but I'd also be concerned about proprietary information and protection of data assets. Maybe you're not doing this, but I wonder what you can tell us about allowing for the best of search, but also with protection, warnings, and some sort of governance and oversight.

Governance suite

Nippert: There is a full governance suite built in, and it comes through a couple of different features. One of the main ones is Eduction, where, as IDOL scans through every single line of a document, a PowerPoint slide, or a spreadsheet, whatever it is, it can recognize credit card numbers, Social Security numbers, anything that's personally identifiable information (PII), and either pull that out, delete it, send alerts, whatever.

You have that full governance suite built into anything that you've indexed. It also has a mapped-security engine built in, called Omni Group, so it can map the security of any content source. For example, in SharePoint, if you have access to a file and I don't, and we each ran a search, you would see it come back in the results and I wouldn't. So, it can honor any content source's security.
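As a generic illustration of that kind of scanning (not IDOL Eduction itself), the sketch below flags likely credit-card and US Social Security numbers in indexed text so they can be reported or redacted. The patterns are deliberately simplified for readability.

    import re

    PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scan(text):
        """Return (pii_type, match) pairs found in a piece of indexed text."""
        hits = []
        for pii_type, pattern in PATTERNS.items():
            hits.extend((pii_type, m.group()) for m in pattern.finditer(text))
        return hits

    def redact(text):
        """Replace anything that looks like PII before it reaches the index."""
        for pattern in PATTERNS.values():
            text = pattern.sub("[REDACTED]", text)
        return text

    print(scan("Card 4111 1111 1111 1111, SSN 123-45-6789"))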

Gardner: Your policies and your rules are what’s implemented, and that’s how it goes?

Nippert: Exactly. It is up to us as the search team, working with your compliance or governance team, to make sure that that does happen.

Gardner: As we think about the future and the availability of other datasets to perhaps be brought in, search is a great tool for access to more than just corporate data, enterprise data, and content. It could maybe also be the front end for some advanced querying, analytics, and business intelligence (BI). Has there been any talk about how to take what you're doing in enterprise search and munge that, for lack of a better word, with analytics, BI, and some of the other big-data capabilities?

Nippert: Absolutely. HPE has just recently released BI for Human Information (BIFHI), which is their new front end for IDOL, and that has a ton of analytics capabilities built into it that we're really excited to start looking at: a lot of rich text and rich media analytics that can pull the words right off an MP4 raw video and transcribe it at the same time. But more than that, it's going to be something that we can continue to build on top of, to come up with our own unique analytics solutions.

Gardner: So talk about empowering your employees. Everybody can become a data scientist eventually, right, Susan?

Crincoli: That's right. If you think about all of the various contexts, we started out with just a few sources, but we're also excited because we've built custom applications, both for our customers and for our internal work. We're taking that to the next level by building an API and pulling that data into the enterprise search, which makes it even more extensible for our enterprise.

Gardner: I suppose the next step might be the natural language audio request where you would talk to your PC, your handheld device, and say, "World Wide Technology feed me this," and it will come back, right?

Nippert: Absolutely. You won’t even have to lift a finger anymore.

Cool things

Crincoli: It would be interesting to loop in what they are doing with Cortana at Microsoft and some of the machine learning and some of the different analytics behind Cortana. I'd love to see how we could loop that together. But those are all really cool things that we would love to explore.

Gardner: But you can’t get there until you solve the initial blocking and tackling around content and unstructured data synthesized into a usable format and capability.
Nippert: Absolutely. The flip side of controlling your data sources, as we're learning, is that there are a lot of important data sources out there that aren’t good candidates for enterprise search whatsoever. When you look at a couple of terabytes or petabytes of MongoDB data that’s completely unstructured and it’s just binaries, that’s enterprise data, but it’s not something that anyone is looking for.

So even though our original knee-jerk reaction is to index everything and get everything into search, because you want to be able to search across everything, you also have to take it with a grain of salt. A new content source could add hundreds or thousands of results that could potentially clutter the accuracy of results. Sometimes, it's actually knowing when not to search something.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Meet George Jetson – your new AI-empowered chief procurement officer

Meet George Jetson – your new AI-empowered chief procurement officer

The next BriefingsDirect technology innovation thought leadership discussion explores how rapid advances in artificial intelligence (AI) and machine learning are poised to reshape procurement -- like a fast-forwarding to a once-fanciful vision of the future.

Whereas George Jetson of the 1960s cartoon portrayed a world of household robots, flying cars, and push-button corporate jobs -- the 2017 procurement landscape has its own impressive retinue of decision bots, automated processes, and data-driven insights.

We won’t need to wait long for this vision of futuristic business to arrive. As we enter 2017, applied intelligence derived from entirely new data analysis benefits has redefined productivity and provided business leaders with unprecedented tools for managing procurement, supply chains, and continuity risks.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the future of predictive -- and even proactive -- procurement technologies, please welcome Chris Haydon, Chief Strategy Officer at SAP Ariba. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems like only yesterday that we were content to gain a common view of the customer or develop an end-to-end bead on a single business process. These were our goals in refining business in general, but today we've leapfrogged to a future where we're using words like “predictive” and “proactive” to define what business function should do and be about. Chris, what's altered our reality to account for this rapid advancement from visibility into predictive -- and on to proactive?

Haydon: There are a couple of things. The acceleration of the smarts, the intelligence, or the artificial intelligence, whatever the terminology that you identify with, has really exploded. It’s a lot more real, and you see these use-cases on television all the time. The business world is just looking to go in and adopt that.

And then there's this notion of the Lego block, of being able to string multiple processes together via an API; that's really exciting, coupled with the ability to have insight. The last piece, the ability to make sense of big data, either from a visualization perspective or from a machine-learning perspective, has accelerated things.

These trends are starting to come together in the business-to-business (B2B) world, and today, we're seeing them manifest themselves in procurement.

Gardner: What is it about procurement as a function that’s especially ripe for taking advantage of these technologies?

Transaction intense

Haydon: Procurement is obviously very transaction-intense. Historically, what transaction intensity means is people, processing, exceptions. When we talk about these trends now, the ability to componentize services, the ability to look at big data or machine learning, and the input on top of this contextualizes intelligence. It's cognitive and predictive by its very nature, a bigger data set, and [improves] historically inefficient human-based processes. That’s why procurement is starting to be at the forefront.

Gardner: Procurement itself has changed from the days of when we were highly vertically integrated as corporations. We had long lead times on product cycles and fulfillment. Nowadays, it’s all about agility and compressing the time across the board. So, procurement has elevated its position. Anything more to add?

Haydon: Everyone needs to be closer to the customer, and you need live business. So, procurement is live now. This change in dynamic -- speed and responsiveness -- is closer to your point. It’s also these other dimensions of the consumer experience that now has to be the business-to-business experience. All that means same-day shipping, real-time visibility, and changing dynamically. That's what we have to deliver.

Gardner: If we go back to our George Jetson reference, what is it about this coming year, 2017? Do you think it's an important inception point when it comes to factoring things like the rising role of procurement, the rising role of analytics, and the fact that the Internet of Things (IoT) is going to bring more relevant data to bear? Why now?

Haydon: There are a couple of things. The procurement function is becoming more mature. Procurement leaders have extracted those first and second levels of savings from sourcing and the like. And they have control of their processes.

With cloud-based technologies and more of control of their processes, they're looking now to how they're going to serve their internal customers by being value-generators and risk-reducers.

How do you forward the business, how do you de-risk, how do you get supply continuity, how do you protect your brand? You do that by having better insight, real-time insight into your supply base, and that’s what’s driving this investment.

Gardner: We've been talking about Ariba being a 20-year-old company. Congratulations on your anniversary of 20 years.

Haydon: Thank you.

AI and bots

Gardner: You're also, of course, part of SAP. Not only have you been focused on procurement for 20 years, but you've got a large global player with lots of other technologies and platform of benefits to avail yourselves of. So, that brings me to the point of AI and bots.

It seems to me that right at the time when procurement needs help, when procurement is more important than ever, that we're also in a position technically to start doing some innovative things that get us into those words "predictive" and more "intelligent."

Set the stage for how these things come together.

Haydon: You allude to being part of SAP, and that's really a great strength and advantage for a domain-focused procurement expertise company.

The machine-learning capabilities that are part of a native SAP HANA platform, which we naturally adopt and get access to, put us on the forefront of not having to invest in that part of the platform, but to focus on how we take that platform and put it into the context of procurement.

There are a couple of pretty obvious areas. There's no doubt that when you've got the largest B2B network, billions in spend, and hundreds of millions of transactions on invoicing, you apply some machine learning to that. We can start doing a lot smarter matching and exception management on it; that's pretty straightforward. That's at one end of the chain.

On the other end of the chain, we have bots. Some people get a little bit wired about the word “bot,” “robotics,” or whatever, maybe it's a digital assistant or it's a smart app. But, it's this notion of helping with decisions, helping with real-time decisions, whether it's identifying a new source of supply because there's a problem, and the problem is identified because you’ve got a live network. It's saying that you have a risk or you have a continuity problem, and not just that it's happening, but here's an alternative, here are other sources of a qualified supply.

Gardner: So, it strikes me that 2017 is such a pivotal year in business. This is the year where we're going to start to really define what machines do well, and what people do well, and not to confuse them. What is it about an end-to-end process in procurement that the machine can do better that we can then elevate the value in the decision-making process of the people?

Haydon: Machines are better at identifying patterns -- clusters, if you want to use a more technical word. That transforms category management and enables procurement to be at the front of its internal customer set by looking not just at traditional total cost of ownership (TCO), but at total value and use. That's part of that real dynamic change.

What we call end-to-end, or even what SAP Ariba defined in a very loose way when we talked about upstream and downstream: upstream was about sourcing and contracting, and downstream was about procurement, purchasing, and invoicing. That's gone, Dana. It's not about upstream and downstream, it's about end-to-end process, and re-imagining and reinventing that.

The role of people

Gardner: When we give more power to a procurement professional by having highly elevated and intelligent tools, their role within the organization advances and the amount of improvement they can make financially advances. But I wonder where there's risk if we automate too much and whether companies might be thinking that they still want people in charge of these decisions. Where do we begin experimenting with how much automation to bring, now that we know how capable these machines have been, or is this going to be a period of exploration for the next few years?

Haydon: It will be a period of exploration, just because businesses have different risk tolerances and there are actually different parts of their life cycle. If you're in a hyper growth mode and you're pretty profitable, that's a little bit different than if you're under a very big margin pressure.

For example, if you're in high tech in Silicon Valley, as some big names that we could all talk about are, you're prepared to go at it and let it all come.

If you're in a natural-resource environment, every dollar is even more precious than it was a year ago.

That’s also the beauty, though, with technology. If you want to do it for this category, this supplier, this business unit, or this division you can do that a lot easier than ever before and so you go on a journey.

Gardner: That’s an important point that people might not appreciate, that there's a tolerance for your appetite for automation, intelligence, using machine learning, and AI. They might even change, given the context of the certain procurement activity you're doing within the same company. Maybe you could help people who are a little bit leery of this, thinking that they're losing control. It sounds to me like they're actually gaining more control.

Haydon: They gain more control, because they can do more and see more. To me, it’s layered. Does the first bot automatically requisition something -- yes or no? So, you put tolerances on it. I'm okay to do it if it is less than $50,000, $5,000, or whatever the limit is, and it's very simple. If the event is less than $5,000 and it’s within one percent of the last time I did it, go and do it. But tell me that you are going to do it or let’s have a cooling-off period.

If you don't tell me or if you don’t stop me, I'm going to do it, and that’s the little bit of this predictive as well. So you still control the gate, you just don’t have to be involved in all the sub-processes and all that stuff to get to the gate. That’s interesting.
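Haydon's tolerance example can be written down as a simple rule. The sketch below auto-approves a bot-generated requisition only when it is under a limit and within one percent of the last price; the limits and the function are illustrative assumptions, not SAP Ariba logic.

    AUTO_APPROVE_LIMIT = 5000.0        # dollars; illustrative threshold
    PRICE_DRIFT_TOLERANCE = 0.01       # within one percent of the last purchase

    def decide(amount, last_price, price):
        """Return what the bot should do with a proposed requisition."""
        within_limit = amount <= AUTO_APPROVE_LIMIT
        within_drift = (last_price > 0 and
                        abs(price - last_price) / last_price <= PRICE_DRIFT_TOLERANCE)
        if within_limit and within_drift:
            return "auto-approve"      # bot proceeds, then notifies the buyer
        return "hold-for-review"       # buyer keeps control of the gate

    print(decide(amount=4200, last_price=10.00, price=10.05))  # auto-approve
    print(decide(amount=4200, last_price=10.00, price=11.50))  # hold-for-review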

Gardner: What's interesting to me as well, Chris, is that because the data is such a core element of how successful this is, companies in a procurement intelligence drive will want more data, so they can make better decisions. Suppliers who want to be competitive in that environment will naturally be incentivized to provide more data, more quickly, with more openness. Tell us some of the implications of intelligence brought to procurement for the supplier. What should we expect suppliers to do differently as a result?

Notion of content

Haydon: There's no doubt that, at a couple of levels, suppliers will need to let the buyers know even more about themselves than they have ever known before.

That goes to the notion of content. There is unique content to be discovered, which is: who am I, what do I do well, and can I demonstrate that I do it well. That's being discovered. Then, there is the notion of being able to transact. What do I need to be able to do to transact with you efficiently, whether that's a payment, a bank account, or just the way in which I can consume this?

Then, there is also this last notion of the content. What content do I need to be able to provide to my customer, aka the end user, for them to be able to initiate the business with them?

There are three dimensions: being discovered, being able to be dynamically transacted with, and then actually providing the content of what you do, even as a material or service, to the end user via the channel. You have to have all of these dimensions right.

That's why we fundamentally believe that a network-based approach, when it's end to end, meaning a supplier can do it once for all of their customers, across the [Ariba] Discovery channel, across the transactional channel, and across the content channel, is really value-adding. In a digital economy, that's the only way to do it.

Gardner: So this idea of the business network, which is a virtual repository for all of this information isn't just quantity, but it's really about the quality of the relationship. We hear about different business networks vying for attention. It seems to me that understanding that quality aspect is something you shouldn't lose track of.

Haydon: It’s the quality. It’s also the context of the business process. If you don't have the context of the business process between a buyer and a seller and what they are trying to affect through the network, how does it add value? The leading-practice networks, and we're a leading-practice network, are thinking about Discovery. We're thinking about content; we're thinking about transactions.

Gardner: Again, going back to the George Jetson view of the future, for organizations that want to see the return on their energy and devotion to these concepts around AI, bots, and intelligence, what sort of low-hanging fruit do we look for to assure them that they're on the right path? I'm going to answer my own question, but I want you to illustrate it a bit better, and that's risk and compliance, and being able to adjust to unforeseen circumstances. That seems to me an immediate payoff for doing this.

Severance of pleadings

Haydon: The United Kingdom is enacting a law before the end of the year for severance of pleadings. It’s the law, and you have to comply. The real question is how you comply.

You eye your brand, you eye your supply chain, and having the supply-chain profile information at hand right now is top of mind. If you're a Chief Procurement Officer (CPO) and you walk into the CEO’s office, the CEO could ask, "Can you tell me that I don’t have any forced labor, I don’t have any denied parties, and I'm Office of Foreign Assets Control (OFAC) compliant? Can you tell me that now?"

You might be able to do it for your top 50 suppliers or top 100 suppliers, and that’s great, but unfortunately, a small, $2,000 supplier who uses some forced labor in any part of the world is potentially a problem in this extended supply chain. We've seen brands boycotted very quickly. These things roll.

So yes, I think that’s just right at the forefront. Then, it's applying intelligence to that to give that risk threshold and to think about where those challenges are. It's being smart and saying, "Here is a high risk category. Look at this category first and all the suppliers in the category. We're not saying that the suppliers are bad, but you better have a double or triple look at that, because you're at high risk just because of the nature of the category."
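That category-first screening can be sketched as a simple prioritization. The weights, flags, and supplier records below are made-up illustrations of the idea, not SAP Ariba's risk model.

    CATEGORY_RISK = {"raw-materials": 0.9, "it-services": 0.3, "office-supplies": 0.1}

    suppliers = [
        {"name": "Supplier A", "category": "raw-materials", "geo_risk": 0.7, "audit_overdue": True},
        {"name": "Supplier B", "category": "it-services", "geo_risk": 0.2, "audit_overdue": False},
    ]

    def review_priority(supplier):
        """Higher score means review this supplier sooner."""
        score = CATEGORY_RISK.get(supplier["category"], 0.5)
        score += 0.3 * supplier["geo_risk"]
        score += 0.2 if supplier["audit_overdue"] else 0.0
        return score

    for supplier in sorted(suppliers, key=review_priority, reverse=True):
        print(supplier["name"], round(review_priority(supplier), 2))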

Gardner: Technically, what should organizations be thinking about in terms of what they have in place in order for their systems and processes to take advantage of these business network intelligence values? If I'm intrigued by this concept, if I see the benefits in reducing risk and additional efficiency, what might I be thinking about in terms of my own architecture, my own technologies in order to be in the best position to take advantage of this?

Haydon: You have to question how much of that you think you can build yourself. If you think you're asking different questions than most of your competitors, you're probably not. I'm sure there are specific categories and specific areas on tight supplier relationships and co-innovation development, but when it comes to the core risk questions, more often, they're about an industry, a geography, or the intersection of both.

Our recommendation to corporations is never try and build it yourself. You might need to have some degree of privacy, but look to have it as more industry-based. Think larger than yourself in trying to solve that problem differently. Those cloud deployment models really help you.

Gardner: So it really is less about technical preparation than about being a digital organization: availing yourself of cloud models, being ready to act intelligently, and finding the right demarcation between what the machines do best and what the people do best.

More visible

Haydon: By making things digital, they are actually more visible. You have to be able to harness that pure visibility to get at the product; that's the first step. That’s why people are on a digital journey.

Gardner: Machines can’t help you with a paper-based process, right?

Haydon: Not as much. You have to scan it and throw it in; then you're digitizing it.

Gardner: We heard about Guided Buying last year from SAP Ariba. It sounds like we're going to be getting a sort of "Guided Buying-Plus" next year and we should keep an eye on that.

Haydon: We're very excited. We announced that earlier this year. We're trying to solve two problems quickly through Guided Buying.

One is the nature of the ad-hoc user. We're all ad-hoc users in the business today. I need to buy things, but I don't want to read the policy, and I don't want to open the PDF on some corporate portal about some threshold limit that, quite honestly, I really only need to know about once or twice a year.

So our Guided Buying has a beautiful consumer-based look and feel, but with embedded compliance. We hide the complexity. We just show the user what they need to know at the time, and the flow is very powerful.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: SAP Ariba.

You may also be interested in:

Strategic view across more data delivers digital business boost for AmeriPride

Strategic view across more data delivers digital business boost for AmeriPride

The next BriefingsDirect Voice of the Customer digital transformation case study explores how linen services industry leader AmeriPride Services uses big data to gain a competitive and comprehensive overview of its operations, finances and culture.

We’ll explore how improved data analytics allows for disparate company divisions and organizations to come under a single umbrella -- to become more aligned -- and to act as a whole greater than the sum of the parts. This is truly the path to a digital business.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how digital transformation has been supported by innovations at the big data core, we're joined by Steven John, CIO, and Tony Ordner, Information Team Manager, both at AmeriPride Services in Minnetonka, Minnesota. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s discuss your path to being a more digitally transformed organization. What were the requirements that led you to become more data-driven, more comprehensive, and more inclusive in managing your large, complex organization?

John


John: One of the key business drivers for us was that we're a company in transition -- from a very diverse organization to a very centralized organization. Before, it wasn't necessarily important for us to speak the same data language, but now it's critical. We’re developing the lexicon, the Rosetta Stone, that we can all rely on and use to make sure that we're aligned and heading in the same direction.

Gardner: And Tony, when we say “data,” are we talking about just databases and data within applications? Or are we being even more comprehensive -- across as many information types as we can?

Ordner: It’s across all of the different information types. When we embarked on this journey, we discovered that data itself is great to have, but you also have to have processes that are defined in a similar fashion. You really have to drive business change in order to be able to effectively utilize that data, analyze where you're going, and then use that to drive the business. We're trying to institute into this organization an iterative process of learning.

Gardner: For those who are not familiar with AmeriPride Services, tell us about the company. It’s been around for quite a while. What do you do, and how big of an umbrella organization are we talking about?

Long-term investments

John: The company is over 125 years old. It’s family-owned, which is nice, because we're not driven by the quarter. We can make longer-term investments through the family. We can have more of a future view and have ambition to drive change in different ways than a quarter-by-quarter corporation does.

We're in the laundry business. We're in the textiles and linen business. What that means is that for food and beverage, we handle tablecloths, napkins, chef coats, aprons, and those types of things. In oil and gas, we provide the safety garments that are required. We also provide the mats you cross as you walk in the door of various restaurants or retail stores. We're in healthcare facilities and meet the various needs of providing and cleansing the garments and linens coming out of those institutions. We're very diverse. We're the largest company of our kind in Canada, probably about fourth in the US, and growing.
Gardner: And this is a function that many companies don't view as core and they're very happy to outsource it. However, you need to remain competitive in a dynamic world. There's a lot of innovation going on. We've seen disruption in the taxicab industry and the hospitality industry. Many companies are saying, “We don’t want to be a deer in the headlights; we need to get out in front of this.”

Tony, how do you continue to get in front of this, not just at the data level, but also at the cultural level?

Ordner: Part of what we're doing is defining those standards across the company. And we're coming up with new programs and new ways to get in front and to partner with the customers.

Ordner
As part of our initiative, we're installing a lot of different technology pieces that we can use to be right there with the customers, to make changes with them as partners, and maybe better understand their business and the products that they aren't buying from us today that we can provide. We’re really trying to build that partnership with customers, provide them more ways to access our products, and devise other ways they might not have thought of for using our products and services.

With all of those data points, it allows us to do a much better job.

Gardner: And we have heard from Hewlett Packard Enterprise (HPE) the concept that it's the “analytics that are at the core of the organization,” that then drive innovation and drive better operations. Is that something you subscribe to, and is that part of your thinking?

John: For me, you have to extend it a little bit further. In the past, our company was driven by the experience and judgment of the leadership. But what we discovered is that we really wanted to be more data-driven in our decision-making.

Data creates a context for conversation. In the context of their judgment and experience, our leaders can leverage that data to make better decisions. The data, in and of itself, doesn’t drive the decisions -- it's that experience and judgment of the leadership that's that final filter.

We often forget the human element at the end of that and think that everything is being driven by analytics, when analytics is a tool and will remain a tool that helps leaders lead great companies.

Gardner: Steven, tell us about your background. You were at a startup, a very successful one, on the leading edge of how to do things differently when it comes to apps, data, and cloud delivery.

New ways to innovate

John: Yes, you're referring to Workday. I was actually Workday’s 33rd customer, the first to go global with their product. Then, I joined Workday in two roles: as their Strategic CIO, working very closely with the sales force, helping CIOs understand the cloud and how to manage software as a service (SaaS); and also as their VP of Mid-Market Services, where we were developing new ways to innovate, to implement in different ways and much more rapidly.

And it was a great experience. I've done two things in my life, startups and turnarounds, and I thought that I was kind of stepping back and taking a relaxing job with AmeriPride. But in many ways, it's both; AmeriPride’s both a turnaround and a startup, and I'm really enjoying the experience.

Gardner: Let’s hear about how you translate technology advancement into business advancement. And the reason I ask it in that fashion is that it seems a bit of a chicken-and-egg problem: they need to be done in parallel -- strategy, ops, culture, as well as technology. How are you balancing that difficult equation?

John: Let me give you an example. Again, it goes back to that idea of, if you just have the human element, they may not know what to ask, but when you add the analytics, then you suddenly create a set of questions that drive to a truth.
We're a route-based business. We have over 1,000 trucks out there delivering our products every day. When we started looking at margin, we discovered that our greatest margin was from those customers that were within a mile of another customer.

So factoring that in changes how we sell -- and how we don't sell, or how we might actually let some customers go -- and it helps drive up our margin. With that piece of data, we as leaders suddenly knew some different questions to ask and different ways to orchestrate programs to drive higher margin.
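
To make that concrete, here is a minimal SQL sketch of the kind of proximity analysis John describes. The customers table, its columns, and the one-mile great-circle cutoff are hypothetical stand-ins, not AmeriPride's actual schema; the query simply compares average margin for customers who sit within a mile of another customer against isolated ones.

```sql
-- Hypothetical table: customers(customer_id, latitude, longitude, annual_margin)
WITH nearby AS (
    SELECT DISTINCT a.customer_id
    FROM customers a
    JOIN customers b
      ON a.customer_id <> b.customer_id
     -- great-circle distance in miles (3959 = Earth's radius in miles)
     AND 3959 * ACOS(LEAST(1.0,
           COS(RADIANS(a.latitude)) * COS(RADIANS(b.latitude)) *
           COS(RADIANS(b.longitude) - RADIANS(a.longitude)) +
           SIN(RADIANS(a.latitude)) * SIN(RADIANS(b.latitude)))) <= 1.0
)
SELECT proximity_group,
       COUNT(*)           AS customer_count,
       AVG(annual_margin) AS avg_margin
FROM (
    SELECT c.customer_id,
           c.annual_margin,
           CASE WHEN n.customer_id IS NOT NULL
                THEN 'within 1 mile of another customer'
                ELSE 'isolated customer' END AS proximity_group
    FROM customers c
    LEFT JOIN nearby n ON n.customer_id = c.customer_id
) labeled
GROUP BY proximity_group;
```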

Gardner: Another trend we've seen is that putting data and analytics, very powerful tools, in the hands of more people can have unintended, often very positive, consequences. A knowledge worker isn't just in a cube and in front of a computer screen. They're often in the trenches doing the real physical work, and so can have real process insights. Has that kicked in yet at AmeriPride, and are you democratizing analytics?

Ordner: That’s a really great question. We've been trying to build a power-user base and bring some of these capabilities into the business segments to allow them to explore the data.

You always have to keep an eye on knowledge workers, because sometimes they can come to the wrong conclusions, as well as the right ones. So it's trying to make sure that we maintain that business layer, that final check. It's like, the data is telling me this, is that really where it is?

I liken it to having a flashlight in a dark room. That’s what we are really doing with visualizing this data and allowing them to eliminate certain things, and that's how they can raise the questions, what's in this room? Well, let me look over here, let me look over there. That’s how I see that.

Too much information

John: One of the things I worry about is that if you give people too much information or unstructured information, then they really get caught up in the academics of the information -- and it doesn’t necessarily drive a business process or drive a business result. It can cause people to get lost in the weeds of all that data.

You still have to orchestrate it, you still have to manage it, and you have to guide it. But you have to let people go off and play and innovate using the data. We actually have a competition among our power-users where they go out and create something, and there are judges and prizes. So we do try to encourage the innovation, but we also want to hold the reins in just a little bit.

Gardner: And that gets to the point of having a tight association between what goes on in the core and what goes on at the edge. Is that something that you're dabbling in as well?

John: It gets back to that idea of a common lexicon. If you think about evolution, you don't want a Madagascar or a Tasmania, where groups get cut off and then they develop their own truth, or a different truth, or they interpret data in a different way -- where they create their own definition of revenue, or they create their own definition of customer.

If you think about it as orbits, you have to have a balance. Maybe you only need to touch certain people in the outer orbit once a month, but you have to touch them once a month to make sure they're connected. The thing about orbits and keeping people in the proper orbits is that if you don't, then one of two things happens, based on gravity. They either spin out of orbit or they come crashing in. The idea is to figure out what's the right balance for the right groups to keep them aligned with where we are going, what the data means, and how we're using it, and how often.

Gardner: Let’s get back to the ability to pull together the data from disparate environments. I imagine, like many organizations, that you have SaaS apps. Maybe it's for human capital management or maybe it's for sales management. How does that data then get brought to bear with internal apps, some of which may even be on a mainframe still, or virtualized apps from older code bases and so forth? What's the hurdle, and what words of wisdom might you impart to others who are earlier in this journey about how to make all that data common and usable?

Ordner: That tends to be a hurdle. As to the data-acquisition piece, as you set these things up in the cloud, a lot of the time the business units themselves are doing these things or making the agreements, and they don't put into place the data access that we've always needed. That's been our biggest hurdle. They'll sign the contracts without getting us involved until they say, "Oh my gosh, now we need the data." We look at it and we say, "Well, it's not in our contracts, and now it's going to cost more to access the data." That's been our biggest hurdle for the cloud services that we've done.

Once you get past that, web services have been a great thing. Once you get the licensing and the contract in place, it becomes a very simple process, and it becomes a lot more seamless.

Gardner: So, maybe something to keep in mind is always think about the data before, during, and after your involvement with any acquisition, any contract, and any vendor?

Ordner: Absolutely.

You own three things

John: With SaaS, at the end of the day, you own three things: the process design, the data, and the integration points. When we construct a contract, one of the things I always insist upon is what I refer to as the “prenuptial agreement.”

What that simply means is, before the relationship begins, you understand how it can end. The key thing in how it ends is that you can take your data with you, that it has a migration path, and that they haven't created a stickiness that traps you there and you don't have the ability to migrate your data to somebody else, whether that’s somebody else in the cloud or on-premise.

Gardner: All right, let’s talk about lessons learned in infrastructure. Clearly, you've had an opportunity to look at a variety of different platforms, different requirements that you have had, that you have tested and required for your vendors. What is it about HPE Vertica, for example, that is appealing to you, and how does that factor into some of these digital transformation issues?

Ordner: There are two things that come to mind right away for me. One is there were some performance implications. We were struggling with our old world and certain processes that ran 36 hours. We did a proof of concept with HPE and Vertica and that ran in something like 17 minutes. So, right there, we were sold on performance changes.

As we got into it and negotiated with them, the other big advantage we discovered is the licensing model, which is based on the amount of data rather than the per-CPU-core model that everyone else runs. We're able to scale this and provide that service at high speed, so we can maintain that performance without taking penalties on licensing. Those are a couple of things I see. Anything from your end, Steven?

John: No, I think that was just brilliant.

Gardner: How about on that acquisition and integration of data. Is there an issue with that that you have been able to solve?

Ordner: With acquisition and integration, we're still early in that process. We're still learning about how to put data into HPE Vertica in the most effective manner. So, we're really at our first source of data and we're looking forward to those additional pieces. We have a number of different telematics pieces that we want to include; wash aisle telematics as well as in-vehicle telematics. We're looking forward to that.

There's also scan data that I think will soon be on the horizon. All of our garments and our mats have chips in them. We scan them in and out, so we can see the activity and where they flow through the system. Those are some of our next targets to bring that data in and take a look at that and analyze it, but we're still a little bit early in that process as far as multiple sources. We're looking forward to some of the different ways that Vertica will allow us to connect to those data sources.
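
As a rough illustration of the kind of question that scan data could answer once it's loaded -- the garment_scans table, its columns, and the 'in'/'out' convention below are hypothetical, not AmeriPride's actual schema -- a simple SQL sketch might count wash cycles and time in circulation per garment:

```sql
-- Hypothetical table: garment_scans(garment_id, scan_ts, direction)  -- direction is 'in' or 'out'
SELECT garment_id,
       SUM(CASE WHEN direction = 'in' THEN 1 ELSE 0 END) AS wash_cycles,
       MIN(scan_ts)                                      AS first_scanned,
       MAX(scan_ts)                                      AS last_scanned,
       MAX(scan_ts) - MIN(scan_ts)                       AS time_in_circulation
FROM garment_scans
GROUP BY garment_id
ORDER BY wash_cycles DESC;
```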

Gardner: I suppose another important consideration when you are picking and choosing systems and platforms is that extensibility. RFID tags are important now; we're expecting even more sensors, more data coming from the edge, the information from the Internet of Things (IoT). You need to feel that the systems you're putting in place now will scale out and up. Any thoughts about the IoT impact on what you're up to?

Overcoming past sins

John: We have had several conversations just this week with HPE and their teams, and they are coming out to visit with us on that exact topic. Being about a year into our journey, we've been doing two things. We've been forming the foundation with HPE Vertica and we've been getting our own house in order. So, there's a fair amount of cleanup and overcoming the sins of the past as we go through that process.

But Vertica is a platform; it's a platform where we have only tapped a small percentage of its capability. And in my personal opinion, even HPE is only aware of a portion of its capability. There are a whole set of things that it can do, and I don’t believe that we have discovered all of them.
With that said, we're going to do what you and Tony just described; we're going to use the telematics coming out of our trucks. We're going to track safety and seat belts. We're going to track green initiatives, routes, and the analytics around our routes and fuel consumption. We're going to make the place safer, we're going to make it more efficient, and we're going to get proactive about being able to tell when a machine is going to fail and when to bring in our vendor partners to get it fixed before it disrupts production.

Gardner: It really sounds like there is virtually no part of your business in the laundry services industry that won't be in some way beneficially impacted by more data, better analytics delivered to more people. Is that fair?

Ordner: I think that's a very fair statement. As I prepared for this conference, one of the things I learned -- and I have been with the company for 17 years -- is that we've made a lot of technology changes, and technology has taken on added significance within our company. When you think of laundry, you certainly don't think of technology, but we've been at the leading edge of implementing technology to get closer to our customers and closer to understanding our products.

[Data technology] has become really ingrained within the industry, at least at our company.

John: It is one of those few projects where everyone is united, everybody believes that success is possible, and everybody is willing to pay the price to make it happen.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Swift and massive data classification advances score a win for better securing sensitive information

Swift and massive data classification advances score a win for better securing sensitive information

The next BriefingsDirect Voice of the Customer digital transformation case study explores how -- in an era when cybersecurity attacks are on the rise and enterprises and governments are increasingly vulnerable -- new data intelligence capabilities are being brought to the edge to provide better data loss prevention (DLP).

We'll learn how Digital Guardian in Waltham, Massachusetts analyzes both structured and unstructured data to predict and prevent loss of data and intellectual property (IP) with increased accuracy.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.
 
To learn how data recognition technology supports network and endpoint forensic insights for enhanced security and control, we're joined by Marcus Brown, Vice President of Corporate Business Development for Digital Guardian. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are some of the major trends making DLP even more important, and even more effective?

Brown: Data protection has very much come to the forefront in the last couple of years. Unfortunately, we wake up every morning and read in the newspapers, see on television, and hear on the radio a lot about data breaches. It's pretty much every type of company and organization, including government organizations, that's being hit by this phenomenon at the moment.

Brown

So, awareness is very high, and apart from the frequency, a couple of key points are changing. First of all, you have a lot of very skilled adversaries coming into this: criminals, nation-state actors, hacktivists, and many others. All these people are well-trained and very well-resourced to come after your data. That means that companies have a pretty big challenge in front of them. The threat has never been bigger.

In terms of data protection, there are a couple of key trends at the cyber-security level. People have been aware of the so-called insider threat for a long time. This could be a disgruntled employee or it could be someone who has been recruited for monetary gain to help some organization get to your data. That’s a difficult one, because the insider has all the privilege and the visibility and knows where the data is. So, that’s not a good thing.

Then, you have well-meaning employees who just make mistakes. It happens to all of us. We click something in Outlook, pick a different email address than the one we intended, and it goes out. The well-meaning employees, as well, are part of the insider threat.

Outside threats

What’s really escalated over the last couple of years are the advanced external attackers or the outside threat, as we call it. These are well-resourced, well-trained people from nation-states or criminal organizations trying to break in from the outside. They do that with malware or phishing campaigns.

About 70 percent of attacks start with a phishing campaign, when someone clicks on something that looked normal. Then, there's just general hacking, with a lot of people getting in without malware at all, using different techniques that don't rely on malware.

People have become so good at developing malware and targeting malware at particular organizations, at particular types of data, that a lot of tools like antivirus and intrusion prevention just don’t work very well. The success rate is very low. So, there are new technologies that are better at detecting stuff at the perimeter and on the endpoint, but it’s a tough time.

There are internal and external attackers. A lot of people outside are ultimately after the two main types of data that companies have. One is customer data -- credit card numbers, healthcare information, and all that stuff. All of this can be sold on the black market for so many dollars per record. It's a billion-dollar business. People are very motivated to do this.
Most companies don’t want to lose their customers’ data. That’s seen as a pretty bad thing, a bad breach of trust, and people don’t like that. Then, obviously, for any company that has a product where you have IP, you spent lots of money developing that, whether it’s the new model of a car or some piece of electronics. It could be a movie, some new clothing, or whatever. It’s something that you have developed and it’s a secret IP. You don’t want that to get out, as well as all of your other internal information, whether it’s your financials, your plans, or your pricing. There are a lot of people going after both of those things, and that’s really the challenge.

In general, the world has become more mobile and spread out. There is no more perimeter to stop people from getting in. Everyone is everywhere, private life and work life is mixed, and you can access anything from anywhere. It’s a pretty big challenge.

Gardner: Even though there are so many different types of threats, internal, external, and so forth, one of the common things that we can do nowadays is get data to learn more about what we have as part of our inventory of important assets.

While we might not be able to seal off that perimeter, maybe we can limit the damage that takes place by early detection of problems. The earlier that an organization can detect that something is going on that shouldn’t be, the quicker they can come to the rescue. How does the instant analysis of data play a role in limiting negative outcomes?

Can't protect everything

Brown: If you want to protect something, you have to know that it's sensitive and that you want to protect it. You can't protect everything. You have to find out which data is sensitive, and we're able to do that on the fly, recognizing sensitive data and nonsensitive data. That's a key part of the DLP puzzle, the data-protection puzzle.

We work for some pretty large organizations, some of the largest companies and government organizations in the world, as well as lot of medium- and smaller-sized customers. Whatever it is we're trying to protect, personal information or indeed the IP, we need to be in the right place to see what people are doing with that data.

Our solution consists of two main types of agents. Some agents are on endpoint computers, which could be desktops or servers, Windows, Linux, and Macintosh. It’s a good place to be on the endpoint computer, because that’s where people, particularly the insider, come into play and start doing something with data. That’s where people work. That’s how they come into the network and it’s how they handle a business process.

So the challenge in DLP is to support the business process. Let people do with data what they need to do, but don’t let that data get out. The way to do that is to be in the right place. I already mentioned the endpoint agent, but we also have network agents, sensors, and appliances in the network that can look at data moving around.

The endpoint is really in the middle of the business process. Someone is working, they're working with different applications, getting data out of those applications, and they're doing whatever they need to do in their daily work. That's where we sit, right in the middle of that, and we can see who the user is and what application they're working with. It could be an engineer working with the computer-aided design (CAD) or the product lifecycle management (PLM) system developing some new automobile or whatever, and that's a great place to be.

We rely very heavily on the HPE IDOL technology for helping us classify data. We use it particularly for structured data, anything like a credit card number, or alphanumeric data. It could be also free text about healthcare, patient information, and all this sort of stuff.

We use IDOL to help us scan documents. We can recognize regular expressions -- the credit card number type of thing, or a Social Security number. We can also recognize terminology. We rely on the fact that IDOL supports hundreds of languages and many different subject areas. So, using IDOL, we're able to recognize just about anything that's written in textual language.
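
Purely as an illustration of that style of pattern recognition -- not IDOL's or Digital Guardian's actual interface -- a sketch using a Vertica-style REGEXP_LIKE predicate over a hypothetical table of extracted document text might look like this; the table, columns, and deliberately simplified patterns are assumptions:

```sql
-- Hypothetical table: extracted_text(doc_id, body)
SELECT doc_id,
       CASE
         WHEN REGEXP_LIKE(body, '\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}')
           THEN 'possible credit card number'
         WHEN REGEXP_LIKE(body, '\d{3}-\d{2}-\d{4}')
           THEN 'possible Social Security number'
         ELSE 'no structured identifiers detected'
       END AS classification_tag
FROM extracted_text;
```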

Our endpoint agent also has some of its own intelligence built in that we put on top of what we call contextual recognition or contextual classification. As I said, we see the customer list coming out of Salesforce.com or we see the jet fighter design coming out of the PLM system and we then tag that as well. We're using IDOL, we're using some of our technology, and we're using our vantage point on the endpoint being in the business process to figure out what the data is.

We call that data-in-use monitoring and, once we see something is sensitive, we put a tag on it, and that tag travels with the data no matter where it goes.

An interesting thing is that if you have a well-meaning employee making an unintentional mistake -- accidentally attaching the wrong document to something so that it goes out -- the system will warn the user of that.

We can stop that

If you have someone who is very, very malicious and is trying to obfuscate what they're doing, we can see that as well. For example, taking a screenshot of some top-secret diagram, embedding that in a PowerPoint and then encrypting the PowerPoint, we're tagging those docs. Anything that results from IP or top-secret information, we keep tagging that. When the guy then goes to put it on a thumb drive, put it on Dropbox, or whatever, we see that and stop that.

So those are the two parts of the problem: classify the data, which is where we rely on IDOL a lot, and then stop it from going out, which is what our agent is responsible for.

Gardner: Let’s talk a little bit about the results here, when behaviors, people and the organization are brought to bear together with technology, because it’s people, process and technology. When it becomes known in the organization that you can do this, I should think that that must be a fairly important step. How do we measure effectiveness when you start using a technology like Digital Guardian? Where does that become explained and known in the organization and what impact does that have?

Brown: Our whole approach is a risk-based approach and it’s based on visibility. You’ve got to be able to see the problem and then you can take steps and exercise control to stop the problems.
When you deploy our solution, you immediately gain a lot of visibility. I mentioned the endpoints and I mentioned the network. Basically, you get a snapshot without deploying any rules or configuring in any complex way. You just turn this on and you suddenly get this rich visibility, which is manifested in reports, trends, and all this stuff. What you get, after a very short period of time, is a set of reports that tell you what your risks are, and some of those risks may be that your HR information is being put on Dropbox.

You have engineers putting the source code onto thumb drives. It could all be well-meaning, they want to work on it at home or whatever, or it could be some bad guy.

One of the biggest points of risk in any company is when an employee resigns and decides to move on. A lot of our customers use the monitoring and reporting we have at that time to actually sit down with the employee and say, "We noticed that you downloaded 2,000 files and put them on a thumb drive. We'd like you to sign this saying that you're going to give us that data back."

That’s a typical use case, and that’s the visibility you get. You turn it on and you suddenly see all these risks, hopefully, not too many, but a certain number of risks and then you decide what you're going to do about it. In some areas you might want to be very draconian and say, "I'm not going to allow this. I'm going to completely block this. There is no reason why you should put the jet fighter design up on Dropbox."

Gardner: That’s where the epoxy in the USB drives comes in.

Warning people

Brown: Pretty much. On the other hand, you don’t want to stop people using USB, because it’s about their productivity, etc. So, you might want to warn people, if you're putting some financial data on to a thumb drive, we're going to encrypt that so nothing can happen to it, but do you really want to do this? Is this approach appropriate? People get a feeling that they're being monitored and that the way they are acting maybe isn't according to company policy. So, they'll back out of it.

In a nutshell, you look at the status quo, you put some controls in place, and after those controls are in place, within the space of a week, you suddenly see the risk posture changing, getting better, and the incidence of these dangerous actions dropping dramatically.

Very quickly, you can measure the security return on investment (ROI) in terms of people’s behavior and what’s happening. Our customers use that a lot internally to justify what they're doing.

Generally, you can get rid of a very large amount of the risk, say 90 percent, with an initial pass, or initial first two passes of rules to say, we don’t want this, we don’t want that. Then, you're monitoring the status, and suddenly, new things will happen. People discover new ways of doing things, and then you’ve got to put some controls in place, but you're pretty quickly up into the 90 percent and then you fine-tuning to get those last little bits of risk out.

Gardner: Because organizations are becoming increasingly data-driven, they're getting information and insight across their systems and their applications. Now, you're providing them with another data set that they could use. Is there some way that organizations are beginning to assimilate and analyze multiple data sets including what Digital Guardian’s agents are providing them in order to have even better analytics on what’s going on or how to prevent unpleasant activities?

Brown: In this security world, you have the security operations center (SOC), which is kind of the nerve center where everything to do with security comes into play. The main piece of technology in that area is the security information and event management (SIEM) technology. The market leader is HPE’s ArcSight, and that’s really where all of the many tools that security organizations use come together in one console, where all of that information can be looked at in a central place and can also be correlated.

We provide a lot of really interesting information for the SIEM for the SOC. I already mentioned we're on the endpoint and the network, particularly on the endpoint. That’s a bit of a blind spot for a lot of security organizations. They're traditionally looking at firewalls, other network devices, and this kind of stuff.

We provide rich information about the user, about the data, what’s going on with the data, and what’s going on with the system on the endpoint. That’s key for detecting malware, etc. We have all this rich visibility on the endpoint and also from the network. We actually pre-correlate that. We have our own correlation rules. On the endpoint computer in real time, we're correlating stuff. All of that gets populated into ArcSight.

At the recent HPE Protect Show in National Harbor in September we showed the latest generation of our integration, which we're very excited about. We have a lot of ArcSight content, which helps people in the SOC leverage our data, and we gave a couple of presentations at the show on that.

Gardner: And is there a way to make this even more protected? I believe encryption could be brought to bear and it plays a role in how the SIEM can react and behave.

Seamless experience

Brown: We actually have a new partnership, related to HPE's acquisition of Voltage, which is a real leader in the e-mail security space. It’s all about applying encryption to messages and managing the keys and making that user experience very seamless and easy to use.

Adding to that, we're bundling up some of the classification functionality that we have in our network sensors. What we have is a combination of Digital Guardian Network DLP and the HPE Data Security encryption solution, where an enterprise can define a whole bunch of rules based on templates.

We can say, "I need to comply with HIPAA," "I need to comply with PCI," or whatever standard it is. Digital Guardian on the network will automatically scan all the e-mail going out and automatically classify according to our rules which e-mails are sensitive and which attachments are sensitive. It then goes on to the HPE Data Security Solution where it gets encrypted automatically and then sent out.

It's basically allowing corporations to apply a standard set of policies -- not relying on the user to say they need to encrypt this, not leaving it to the user's judgment, but actually applying standard policies across the enterprise for all e-mail and making sure it gets encrypted. We are very excited about it.
Gardner: That sounds key -- using encryption to the best of its potential, being smart about it, not just across the waterfront, and then not depending on a voluntary encryption, but doing it based on need and intelligence.
 
Brown: Exactly.

Gardner: For those organizations that are increasingly trying to be data-driven, intelligent, taking advantage of the technologies and doing analysis in new interesting ways, what advice might you offer in the realm of security? Clearly, we’ve heard at various conferences and other places that security is, in a sense, the killer application of big-data analytics. If you're an organization seeking to be more data-driven, how can you best use that to improve your security posture?

Brown: The key, as far as we’re concerned, is that you have to watch your data, you have to understand your data, you need to collect information, and you need visibility of your data.

The other key point is that the security market has been shifting pretty dramatically from more of a network view much more toward the endpoint. I mentioned earlier that antivirus and some of these standard technologies on the endpoint aren't really cutting it anymore. So, it’s very important that you get visibility down at the endpoint and you need to see what users are doing, you need to understand what your systems are running, and you need to understand where your data is.

So collect that, get that visibility, and then leverage that visibility with analytics and tools so that you can profit from an automated kind of intelligence.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

2016 election campaigners look to big data analysis to gain an edge in intelligently reaching voters

2016 election campaigners look to big data analysis to gain an edge in intelligently reaching voters

The next BriefingsDirect Voice of the Customer digital transformation case study explores how data-analysis services startup BlueLabs in Washington, DC helps presidential election campaigns better know and engage with potential voters.

We'll learn how BlueLabs relies on high-performing analytics platforms that allow a democratization of querying, of opening the value of vast data resources to discretely identify more of those in the need to know.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how big data is being used creatively by contemporary political organizations for two-way voter engagement, we're joined by Erek Dyskant, Co-Founder and Vice President of Impact at BlueLabs Analytics in Washington. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Obviously, this is a busy season for the analytics people who are focused on politics and campaigns. What are some of the trends that are different in 2016 from just four years ago? It's a fast-changing technology set, and it's also a fast-changing methodology. And of course, the trends in how voters think, react, use social media, and engage are also dynamic. So what's different this cycle?

Dyskant: From a voter-engagement perspective, in 2012, we could reach most of our voters online through a relatively small set of social media channels -- Facebook, Twitter, and a little bit on the Instagram side. Moving into 2016, we see a fragmentation of the online and offline media consumption landscape and many more folks moving toward purpose-built social media platforms.

If I'm at the HPE Conference and I want my colleagues back in D.C. to see what I'm seeing, then maybe I'll use Periscope, maybe Facebook Live, but probably Periscope. If I see something that I think one of my friends will think is really funny, I'll send that to them on Snapchat.
Where political campaigns have traditionally broadcast messages out through the news-feed style social-media strategies, now we need to consider how it is that one-to-one social media is acting as a force multiplier for our events and for the ideas of our candidates, filtered through our campaign’s champions.

Gardner: So, perhaps a way to look at that is that you're no longer focused on precincts physically and you're no longer able to use broadcast through social media. It’s much more of an influence within communities and identifying those communities in a new way through these apps, perhaps more than platforms.

Social media

Dyskant: That's exactly right. Campaigns have always organized voters at the door and on the phone. Now, we think of one more way. If you want to be a champion for a candidate, you can be a champion by knocking on doors for us, by making phone calls, or by making phone calls through online platforms.

You can also use one-to-one social media channels to let your friends know why the election matters so much to you and why they should turn out and vote, or vote for the issues that really matter to you.

Gardner: So, we're talking about retail campaigning, but it's a bit more virtual. What’s interesting though is that you can get a lot more data through the interaction than you might if you were physically knocking on someone's door.

Dyskant: The data is different. We're starting to see a shift from demographic targeting. In 2000, we were targeting on precincts. A little bit later, we were targeting on combinations of demographics, on soccer moms, on single women, on single men, on rural, urban, or suburban communities separately.

Dyskant

Moving to 2012, we looked at everything that we knew about a person and built individual-level predictive models, so that we knew how each person's individual set of characteristics made that person more or less likely to be someone with whom our candidate could have an engaging conversation through a volunteer.

Now, what we're starting to see is behavioral characteristics trumping demographic or even consumer data. You can put whiskey drinkers in your model, you can put cat owners in your model, but isn't it a lot more interesting to put in your model the fact that this person has an online profile on our website and this is their clickstream? Isn't it much more interesting to put into a model that this person is likely to consume media via TV, is likely to be a cord-cutter, is likely to be a social media trendsetter, is likely to view multiple channels, or to use both Facebook and media on TV?

That lets us have a really broad reach or really broad set of interested voters, rather than just creating an echo chamber where we're talking to the same voters across different platforms.
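
As a hedged sketch of what folding behavioral signals like those into a modeling table might look like in SQL -- the tables, columns, 90-day lookback, and thresholds below are illustrative assumptions, not BlueLabs' schema:

```sql
-- Hypothetical tables: persons(person_id),
--                      clickstream(person_id, page, event_ts),
--                      media_profile(person_id, watches_tv, has_cable, social_posts_per_week)
SELECT p.person_id,
       COUNT(c.event_ts)                                              AS site_events_90d,
       MAX(CASE WHEN c.page = '/issues/energy' THEN 1 ELSE 0 END)     AS viewed_energy_page,
       MAX(CASE WHEN m.watches_tv THEN 1 ELSE 0 END)                  AS watches_tv,
       MAX(CASE WHEN NOT m.has_cable THEN 1 ELSE 0 END)               AS likely_cord_cutter,
       MAX(CASE WHEN m.social_posts_per_week >= 10 THEN 1 ELSE 0 END) AS social_trendsetter
FROM persons p
LEFT JOIN clickstream c
       ON c.person_id = p.person_id
      AND c.event_ts >= CURRENT_DATE - 90   -- 90-day behavioral lookback
LEFT JOIN media_profile m
       ON m.person_id = p.person_id
GROUP BY p.person_id;
```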

Gardner: So, over time, the analytics tools have gone from semi-blunt instruments to much more precise, and you're also able to better target what you think would be the right voter for you to get the right message out to.

One of the things you mentioned that struck me is the word "predictive." I suppose I think of campaigning as looking to influence people, and that polling then tries to predict what will happen as a result. Is there somewhat less daylight between these two than I am thinking, that being predictive and campaigning are much more closely associated, and how would that work?

Predictive modeling

Dyskant: When I think of predictive modeling, what I think of is predicting something that the campaign doesn't know. That may be something that will happen in the future or it may be something that already exists today, but that we don't have an observation for it.

In the case of the role of polling, what I really see is understanding what issues matter the most to voters and how we can craft messages that resonate with those issues. When I think of predictive analytics, I think of how we allocate our resources to persuade and activate voters.

Over the course of elections, what we've seen is an exponential trajectory in the amount of data that is considered by predictive models. Even more important than that is an exponential growth in the use cases for models. Today, every time a predictive model is used, it's used in a million and one ways, whereas in 2012 it might have been used in 50, 20, or 100 sessions about each voter contact.

Gardner: It’s a fascinating use case to see how analytics and data can be brought to bear on the democratic process and to help you get messages out, probably in a way that's better received by the voter or the prospective voter, like in a retail or commercial environment. You don’t want to hear things that aren’t relevant to you, and when people do make an effort to provide you with information that's useful or that helps you make a decision, you benefit and you respect and even admire and enjoy it.

Dyskant: What I really want is for the voter experience to be as transparent and easy as possible, that campaigns reach out to me around the same time that I'm seeking information about who I'm going to vote for in November. I know who I'm voting for in 2016, but in some local actions, I may not have made that decision yet. So, I want a steady stream of information to be reaching voters, as they're in those key decision points, with messaging that really is relevant to their lives.

I also want to listen to what voters tell me. If a voter has a conversation with a volunteer at the door, that should inform future communications. If somebody has told me that they're definitely voting for the candidate, then the next conversation should be different from someone who says, "I work in energy. I really want to know more about the Secretary’s energy policies."

Gardner: Just as if a salesperson is engaging with process, they use customer relationship management (CRM), and that data is captured, analyzed, and shared. That becomes a much better process for both the buyer and the seller. It's the same thing in a campaign, right? The better information you have, the more likely you're going to be able to serve that user, that voter.

Dyskant: There definitely are parallels to marketing, and that’s how we at BlueLabs decided to found the company and work across industries. We work with Fortune 100 retail organizations that are interested in how, once someone buys one item, we can bring them back into the store to buy the follow-on item or maybe to buy the follow-on item through that same store’s online portal. How it is that we can provide relevant messaging as users engage in complex processes online? All those things are driven from our lessons in politics.

Politics is fundamentally different from retail, though. It's a civic decision, rather than an individual-level decision. I always want to be mindful that I have a duty to voters to provide extremely relevant information to them, so that they can be engaged in the civic decision that they need to make.

Gardner: Suffice it to say that good quality comparison shopping is still good quality comparison decision-making.

Dyskant: Yes, I would agree with you.

Relevant and speedy

Gardner: Now that we've established how really relevant, important, and powerful this type of analysis can be in the context of the 2016 campaign, I'd like to learn more about how you go about getting that analysis and making it relevant and speedy across large variety of data sets and content sets. But first, let’s hear more about BlueLabs. Tell me about your company, how it started, why you started it, maybe a bit about yourself as well.

Dyskant: Of the four of us who started BlueLabs, some of us met in the 2008 elections and some of us met during the 2010 midterms working at the Democratic National Committee (DNC). Throughout that pre-2012 experience, we had the opportunity as practitioners to try a lot of things, sometimes just once or twice, sometimes things that we operationalized within those cycles.

Jumping forward to 2012, we had the opportunity to scale all of that research and development -- to say that we did this one thing that was a different way of building models, and it worked in this congressional race. We decided to make this three people's full-time jobs and scale it up.

Moving past 2012, we got to build potentially one of the fastest-growing startups, one of the most data-driven organizations, and we knew that we built a special team. We wanted to continue working together with ourselves and the folks who we worked with and who made all this possible. We also wanted to apply the same types of techniques to other areas of social impact and other areas of commerce. This individual-level approach to identifying conversations is something that we found unique in the marketplace. We wanted to expand on that.
Increasingly, what we're working on is this segmentation-of-media problem. It's this idea that some people watch only TV, and you can't ignore a TV. It has lots of eyeballs. Some people watch only digital and some people consume a mix of media. How is it that you can build media plans that are aware of people's cross-channel media preferences and reach the right audience with their preferred means of communications?

Gardner: That's fascinating. You start with the rigors and demands of a political campaign, but then you can apply it in so many ways, answering and anticipating the types of questions that more verticals, more sectors, and charitable organizations would want to be involved with. That's very cool.

Let's go back to the data science. You have this vast pool of data. You have a snappy analytics platform to work with. But one of the things that I'm interested in is how you get more people -- whether it's in your organization, in a campaign like the Hillary Clinton campaign, or at the DNC -- to be able to utilize that data to get to the inferences and insights that you want.

What is it that you look for and what is it that you've been able to do in that form of getting more people able to query and utilize the data?

Dyskant: Data science happens when individuals have direct access to ask complex questions of a large, gnarly, but well-integrated data set. If I have 30 terabytes of data across online contacts, offline contacts, and maybe a sample of clickstream data, I want to ask things like: of all the people who went to my online platform and clicked the password reset because they couldn't remember their password, and then never followed up with the e-mail, how many of them showed up at a retail location within the next five days? They tried to engage online, and it didn't work out for them. I want to know whether we're losing them or whether they're showing up in person.

That type of question might make it into a business-intelligence (BI) report a few months from now, but people who are thinking about what we do every day will say, "I wonder about this," turn it into a query, and say, "I think I found something. If we give these customers phone calls, maybe we can reset their passwords over the phone and re-engage them."
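
A sketch of that exact question in SQL might look like the following; the table and column names are hypothetical stand-ins rather than an actual schema:

```sql
-- Hypothetical tables: web_events(person_id, event_type, event_ts),
--                      reset_completions(person_id, completed_ts),
--                      store_visits(person_id, visit_ts)
WITH reset_clicks AS (
    SELECT person_id, MIN(event_ts) AS reset_ts
    FROM web_events
    WHERE event_type = 'password_reset_click'
    GROUP BY person_id
),
abandoned AS (   -- clicked reset but never followed up via the e-mail
    SELECT r.person_id, r.reset_ts
    FROM reset_clicks r
    LEFT JOIN reset_completions rc
           ON rc.person_id = r.person_id
          AND rc.completed_ts > r.reset_ts
    WHERE rc.person_id IS NULL
)
SELECT COUNT(DISTINCT a.person_id) AS visited_store_within_5_days
FROM abandoned a
JOIN store_visits s
  ON s.person_id = a.person_id
 AND s.visit_ts BETWEEN a.reset_ts AND a.reset_ts + INTERVAL '5 days';
```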

Human intensive

That's just one tiny, micro example, which is why data science is truly a human-intensive exercise. You get 50-100 people working at an enterprise solving problems like that, and what you ultimately get is a positive feedback loop of self-correcting systems. Every time there's a problem, somebody is thinking about how that problem is represented in the data, how to quantify it, and, if it's significant enough, how the organization can improve in this one specific area.

How much of that can be done with business logic is the interesting piece. You need very granular data that's accessible via query, and you need reasonably fast query times, because you can't ask questions like that if you have to go get coffee every time you run a query.

Layering predictive modeling allows you to understand the opportunity for impact if you fix that problem. That one hypothesis with those users who cannot reset their passwords is that maybe those users aren't that engaged in the first place. You fix their password but it doesn’t move the needle.

The other hypothesis is that it's people who are actively trying to engage with your service and are unsuccessful because of this one very specific barrier. If you have a model of user engagement at an individual level, you can say that these are really high-value users who are having this problem, or maybe they aren't. So you take data science, align it with really smart individual-level business analysis, and what you get is an organization that continues to improve without having to make each one of those decisions at the executive level.

Gardner: So a great deal of inquiry, experimentation, iterative improvement, and feedback loops can all come together very powerfully. I'm all for the data-scientist full-employment movement, but we need to do more than have people go through a data scientist to use, access, and develop these feedback insights. What is it about SQL, natural language, or APIs -- what is it that you like to see that allows more people to directly relate to and engage with these powerful data sets?

Dyskant: One of the things is the product management of data schemas. So whenever we build an analytics database for a large-scale organization I think a lot about an analyst who is 22, knows VLOOKUP, took some statistics classes in college, and has some personal stories about the industry that they're working in. They know, "My grandmother isn't a native English speaker, and this is how she would use this website."

So it's taking that hypothesis that’s driven from personal stories, and being able to, through a relatively simple query, translate that into a database query, and find out if that hypothesis proves true at scale.

Then, potentially take the results of that query, dump them into a statistical-analysis language, or use database analytics to answer the question in a more robust way. What that means is that we favor very wide schemas, because I want someone to be able to write a three-line SQL statement, with no joins, that answers a business question I wouldn't have thought to put in a report. So that's the first layer -- analyst-friendly schemas that are accessed via SQL.
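
As an illustration of that first layer, here is a minimal sketch of the kind of short, join-free query an analyst might run against a wide, denormalized contacts table. The table name, column names, and connection details are hypothetical, and the sketch assumes the open-source vertica-python client rather than any specific tooling described here.

    # Minimal sketch: a join-free, analyst-style query against a hypothetical
    # wide schema, using the open-source vertica-python client (assumed installed).
    import vertica_python

    QUERY = """
    SELECT contact_channel, COUNT(*) AS contacts
    FROM wide_contacts
    WHERE password_reset_failed = TRUE AND retail_visit_within_5_days = TRUE
    GROUP BY contact_channel;
    """

    conn_info = {"host": "vertica.example.com", "port": 5433,
                 "user": "analyst", "password": "example", "database": "analytics"}

    with vertica_python.connect(**conn_info) as conn:
        cur = conn.cursor()
        cur.execute(QUERY)          # the heavy lifting happens in the database
        for row in cur.fetchall():  # only the small aggregate comes back
            print(row)

Because every attribute the analyst needs lives on the same wide table, the question stays a three-line SELECT instead of a multi-join puzzle.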

The next layer is deep key performance indicators (KPIs). Once we step out of the analytics database, we drop into the wider organization, which consumes data at a different level. I always want reporting to report on opportunity for impact -- to report on whether we're reaching our most valuable customers, not on how many customers we're reaching.

"Are we reaching our most valuable customers" is much more easily addressable; you just talk to different people. Whereas, when you ask, "Are we reaching enough customers," I don’t know how find out. I can go over to the sales team and yell at them to work harder, but ultimately, I want our reporting to facilitate smarter working, which means incorporating model scores and predictive analytics into our KPIs.

Getting to the core

Gardner: Let’s step back from the edge, where we engage the analysts, to the core, where we need to provide the ability for them to do what they want and which gets them those great results.

It seems to me that when you're dealing in a campaign cycle that is very spiky, you have a short period of time where there's a need for a tremendous amount of data, but that could quickly go down between cycles of an election, or in a retail environment, be very intensive leading up to a holiday season.

Do you therefore take advantage of the cloud models for your analytics that make a fit-for-purpose approach to data and analytics pay as you go? Tell us a little bit about your strategy for the data and the analytics engine.

Dyskant: All of our customers have a cyclical nature to them. I think that almost every business is cyclical, just some more than others. Horizontal scaling is incredibly important to us. It would be very difficult for us to do what we do without using a cloud model such as Amazon Web Services (AWS).

Also, one of the things that works well for us with HPE Vertica is the licensing model, where we can add additional performance for only the cost of hardware, or of hardware provisioned through the cloud. That allows us to scale up capacity during the busy season, and we'll sometimes even scale it back down during slower periods. That way we can have those 150 analysts asking their own questions about the areas of the program they're responsible for during busy cycles, and then, during less busy cycles, scale down the footprint of the operation.

Gardner: Is there anything else about the HPE Vertica OnDemand platform that benefits your particular need for analysis? I'm thinking about the scale and the rows. You must have so many variables when it comes to a retail situation, a commercial situation, where you're trying to really understand that consumer?

Dyskant: I do everything I can to avoid aggregation. I want my analysts to be looking at the data at the interaction-by-interaction level. If it’s a website, I want them to be looking at clickstream data. If it's a retail organization, I want them to be looking at point-of-sale data. In order to do that, we build data sets that are very frequently in the billions of rows. They're also very frequently incredibly wide, because we don't just want to know every transaction with this dollar amount. We want to know things like what the variables were, and where that store was located.

Getting back to the idea that we want our queries to be dead-simple, that means that we very frequently append additional columns on to our transaction tables. We’re okay that the table is big, because in a columnar model, we can pick out just the columns that we want for that particular query.
Then, moving into some of the in-database machine-learning algorithms allows us to perform higher-order computation within the database, with less data shipping.
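
As a hedged sketch of what in-database scoring can look like, the statements below assume a Vertica release that ships the in-database machine-learning functions LOGISTIC_REG and PREDICT_LOGISTIC_REG; the model, table, and column names are illustrative only, not the actual models described here.

    # Train and score inside the database so only model output, not raw data,
    # leaves the cluster. Assumes Vertica's in-database ML functions are available.
    TRAIN_SQL = """
    SELECT LOGISTIC_REG('engagement_model', 'contacts_train',
                        'engaged', 'site_visits, emails_opened, tenure_days');
    """

    SCORE_SQL = """
    SELECT customer_id,
           PREDICT_LOGISTIC_REG(site_visits, emails_opened, tenure_days
                                USING PARAMETERS model_name='engagement_model') AS p_engaged
    FROM contacts;
    """

Because both statements run where the data lives, the data shipping is limited to the scores that come back.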

Gardner: We're almost out of time, but I wanted to do some predictive analysis ourselves. Thinking about the next election cycle, midterms, only two years away, what might change between now and then? We hear so much about machine learning, bots, and advanced algorithms. How do you predict, Erek, the way that big data will come to bear on the next election cycle?

Behavioral targeting

Dyskant: I think that a big piece of the next election will be moving even further away from demographic targeting, toward even more behavioral targeting. How do we reach every voter based on what they're telling us about themselves, what matters to them, and how it matters to them? That will increasingly drive our models.

Doing that probably involves another 10X scale-up in data, because that type of data is generally at the clickstream level, at the interaction-by-interaction level, and it incorporates things like Twitter feeds, which adds an additional layer of complexity and computational demand to the data.

Gardner: It almost sounds like you're shooting for sentiment analysis on an issue-by-issue basis, a very complex undertaking, but it could be very powerful.

Dyskant: I think that it's heading in that direction, yes.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

ServiceMaster’s path to an agile development twofer: Better security and DevOps business benefits

ServiceMaster’s path to an agile development twofer: Better security and DevOps business benefits

The next BriefingsDirect Voice of the Customer security transformation discussion explores how home-maintenance repair and services provider ServiceMaster develops applications with a security-minded focus as a DevOps benefit.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how security technology leads to posture maturity and DevOps business benefits, we're joined by Jennifer Cole, Chief Information Security Officer and Vice President of IT, Information Security, and Governance for ServiceMaster in Memphis, Tennessee, and Ashish Kuthiala, Senior Director of Marketing and Strategy at Hewlett Packard Enterprise DevOps. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jennifer, tell me, what are some of the top trends that drive your need for security improvements and that also spurred DevOps benefits?

Cole: When we started our DevOps journey, security was a little ahead of the curve on application security, and we were able to get in on the front end of our DevOps transformation.


The primary reason for our transformation as a company is that we are an 86-year-old company that has seven brands under one umbrella, and we needed to have one brand, one voice, and be able to talk to our customers in a way that they wanted us to talk to them.

That means enabling IT to get capabilities out there quickly, so that we can interact with our customers "digital first." As a result, we stepped up the way we looked at security education and process. We had normally been doing our penetration tests after a release. We were able to put tools in place to test prior to a release, and also to teach our developers along the way that security is everyone's responsibility.

ServiceMaster has been fortunate that we have a C-suite willing to invest in DevOps and an Agile methodology. We also had developers who were willing to learn, and with the right intent to deliver code that would protect our customers. Those things collided, and we have the perfect storm.

So, we're delivering quicker, but we also fail faster, which allows us to go back and fix things sooner. And we're seeing that what we're delivering is a lot more secure.

Gardner: Ashish, it seems obvious, having heard Jennifer describe it, DevOps and security hand-in-hand -- a whole greater than the sum of the parts. Are you seeing this more across various industries?

Stopping defects

Kuthiala: Absolutely. With the adoption of DevOps increasing more across enterprises, security is no different than any other quality-assurance (QA) testing that you do. You can't let a defect reach your customer base; and you cannot let a security flaw reach your customer base as well.

If you look at it from that perspective, and the teams are willing to work together, security is treated no differently than any other QA process. This boils down not just to the vulnerability of the software you're releasing into the marketplace; there are also so many different regulations and compliance [needs] -- internal, external, your own company policies -- that you have to look at. You don't want to go faster and compromise security. So, it's an essential part of DevOps.

Cole: DevOps allows for continuous improvement, too. Security now comes at the front of the software development lifecycle (SDLC), while in the old days, security came last. We found problems after they were in production or after something had been compromised. Now, we're at the beginning of the process, and we actually get to train the people at the beginning of the process on how and why to deliver things that are safe for our customers.

Gardner: Jennifer, why is security so important? Is this about your brand preservation? Is this about privacy and security of data? Is this about the ability for high performance to maintain its role in the organization? All the above? What did I miss? Why is this so important?

Cole: Depending on the lens that you are looking through, that answer may be different. For me, as a CISO, it's making sure that our data is secure and that our customers have trust in us to take care of their information. The rest of the C-suite, I am sure, feels the same, but they're also very focused on transformation to digital-first, making sure customers can work with us in any way that they want to and that their ServiceMaster experience is healthy.

Our leaders also want to ensure our customers return to do business with us and are happy in the process.  Our company helps customers in some of the most difficult times in their life, or helps them prevent a difficult time in the ownership of their home.

But for me and the rest of our leadership team, it's making sure that we're doing what's right. We're training our teams along the way to do what's right, to just make the overall ServiceMaster experience better and safe. As young people move into different companies, we want to make sure they have that foundation of thinking about security first -- and also the customer.
We tend to put IT people in a back room, and they never see the customer. This methodology allows IT to see what they could have released and correct it if it's wrong, and we get an opportunity to train for the future.
Through my lens, it’s about protecting our data and making sure our customers are getting service that doesn't have vulnerabilities in it and is safe.

Gardner: Now, Ashish, user experience is top of mind for organizations, particularly organizations that are customer focused like ServiceMaster. When we look at security and DevOps coming together, we can put in place the requirements to maintain that data, but it also means we can get at more data and use it more strategically, more tactically, for personalization and customization -- and at the same time, making sure that those customers are protected.

How important is user experience and data gathering now when it comes to QA and making applications as robust as they can be?

Million-dollar question

Kuthiala: It's a million-dollar question. I'll give you an example of a client I work with. I happen to use their app very, very frequently, and I happen to know the team that owns that app. They told me about 12 months ago that they had invested -- let's just make up this number -- $1 million in improving the user experience. They asked me how I liked it. I said, "Your app is good, but I only use about 20 percent of the features in your app. I really don't use the other 80 percent. It's not so useful to me."

That was an eye-opener to them, because the $1 million or so that they had invested in enriching the user experience -- if they had known exactly what I was doing as a user, what I used, what I did not use, where I had problems -- could have gone toward the 20 percent that I do use. They could have made it better than anybody else in the marketplace and also gathered information on what the market wants by monitoring the user experience of people like me.

It's not just the availability and health of the application; it's the user experience. It's having empathy for the user as an end user. HPE, of course, makes a lot of these tools, like HPE AppPulse, which is designed specifically to capture that mobile user experience and bring it back before you have a flood of calls and support people screaming at you about why the application isn't working.

Security is also one of those things. All is good until something goes wrong. You don't want to be in a situation when something has actually gone wrong and your brand is being dragged through mud in the press, your revenue starts to decline, and then you look at it. It’s one of those things that you can't look at after the fact.

Gardner: Jennifer, this strikes me as an under-appreciated force multiplier, that the better you maintain data integrity, security, and privacy, the more trust you are going to get to get more data about your customers that you can then apply back to a better experience for them. Is that something that you are banking on at ServiceMaster?
Cole: Absolutely. Trust is important, not only with our customers, but also our employees and leaders. We want people to feel like they're in a healthy environment, where they can give us feedback on that user experience. What I would say to what Ashish was saying is that DevOps actually gives us the ability to deliver what the business wants IT to deliver for our customers.

For the past 25 years, IT has decided what the customer would like to see. In this methodology, you're actually working with your business partners, who understand their products and their customers, and they're telling you the features that need to be delivered. Then, you're able to pick the minimum viable product and deliver it first, so that you can capture that 20 percent of functionality.

Also, if you're wrapping security in at the front of that, it means security isn't coming back to you later with penetration-test results and saying that you have all of these things to fix, which takes time away from delivering something new for our customers.

This methodology pays off, but the journey is hard. It’s tough because in most companies you have a legacy environment that you have to support. Then, you have this new application environment that you’re creating. There's a healthy balance that you have to find there, and it takes time. But we've seen quicker results and better revenue, our customers are happier, they're enjoying the ServiceMaster experience, instead of our individual brand families, and we've really embraced the methodology.

Gardner: Do you have any examples that you can recall where you've done development projects and you’ve been able to track that data around that particular application? What’s going on with the testing, and then how is that applied back to a DevOps benefit? Maybe you could just walk us through an example of where this has really worked well.

Digital first

Cole: About a year and a half ago, we started with one of our brands, American Home Shield, and looked at where the low-hanging fruit -- or minimum viable product -- was in that brand for digital first. Let me describe the business a little bit. Our customers reach out to us and purchase a policy for their house, and we maintain appliances and such in their home, but it is a contractor-based company. We send out a contractor who is not a ServiceMaster associate.

We have to make that work and make our customer feel like they've had a seamless experience with American Home Shield. We had some opportunity in that brand for digital first. We went after it and drastically changed the way that our customers did business with us. Now, it's caught on like wildfire, and we're really trying to focus on one brand and one voice. This is a top-down decision which does help us move faster.

All seven of our brands are home services. We're in 75,000 homes a day and we needed to identify the customers of all the brands, so that we could customize the way that we do business with them. DevOps allows us to move faster into the market and deliver that.

Gardner: Ashish, there aren't that many security vendors that do DevOps, or DevOps vendors that do security. At HPE, how have you made advances in terms of how these two areas come together?

Kuthiala: The strengths of HPE in helping its customers lies with the very fact that we have an end-to-end diverse portfolio. Jennifer talked about taking the security practices and not leaving it toward the end of the cycle, but moving it to the very beginning, which means that you have to get developers to start thinking like security experts and work with the security experts.

Given that we have a portfolio that spans the developers and the security teams, our best practices include building our own customer-facing software products that incorporate security practices, so that when developers are writing code, they can begin to see any immediate security threats as well as whether their code is compliant with any applicable policies or not. Even before code is checked in, the process runs the code through security checks and follows it all the way through the software development lifecycle.

These are security-focused feedback loops. At any point, if there is a problem, the changes are rejected and sent back or feedback is sent back to the developers immediately.
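
As an illustration of what such a gate can look like, here is a minimal, hypothetical sketch that runs a scanner, reads its report, and rejects the change when high-severity findings appear. The scanner command and report format are assumptions for illustration, not ServiceMaster's or HPE's actual tooling.

    # Hypothetical pre-check-in security gate: fail the pipeline on any
    # high-severity finding so feedback reaches the developer immediately.
    import json
    import subprocess
    import sys

    def security_gate(scan_cmd, report_path, max_high=0):
        subprocess.run(scan_cmd, check=True)        # run the team's scanner (assumed CLI)
        with open(report_path) as fh:
            findings = json.load(fh)                # assumed JSON list of findings
        high = [f for f in findings if f.get("severity") == "HIGH"]
        for f in high:
            print("HIGH: {} at {}:{}".format(f.get("rule"), f.get("file"), f.get("line")))
        return 1 if len(high) > max_high else 0     # non-zero exit rejects the change

    if __name__ == "__main__":
        sys.exit(security_gate(["sast-scan", "--out", "report.json"], "report.json"))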

If it makes it through the cycle and a known vulnerability is found before release to production, we have tools such as App Defender that can plug in to protect the code in production until developers can fix it, allowing you to go faster but remain protected.

Cole: It blocks it from the customer until you can fix it.

Kuthiala: Jennifer, can you describe a little bit how you use some of these products?

Strategic partnership

Cole: Sure. We've had a great strategic partnership with HPE in this particular space. Application security caught fire about two years ago at RSA, which is one of the main security conferences for anyone in our profession.

In my opinion, the topic of application security had not been a focus for CISOs. I was fortunate to have a great team member who came back and said that we had to get on board with this. We had some conversations with HPE and ended up in a great strategic partnership. They've really held our hands and helped us get through the process. In turn, that helped make them better, as well as making us better, and that's what a strategic partnership should be about.

Now, we're watching things as they are developed. So, we're teaching the developer in real-time. Then, if something happens to get through, we have App Defender, which will actually contain it until we can fix it before it releases to our customer. If all of those defenses don’t work, we still do the penetration test along with many other controls that are in place. We also try to go back to just grassroots, sit down with the developers, and help them understand why they would want to develop differently next time.

Someone from security is in every one of the development scrum meetings and on all the product teams. We also participate in Big Room Planning. We're trying to move out of that overall governing role and into a peer-to-peer type role, helping each other learn, and explaining to them why we want them to do things.

Gardner: It seems to me that, having gone at this at the methodological level with those collaboration issues solved, bringing people into the scrum who are security minded, puts you in a position to be able to scale this. I imagine that more and more applications are going to be of a mobile nature, where there's going to be continuous development. We're also going to start perhaps using micro-services for development and ultimately Internet of Things (IoT) if you start measuring more and more things in your homes with your contractors.

Cole: We reach 75,000 homes a day. So, you can imagine that all of those things are going to play a big part in our future.

Gardner: Before we sign off, perhaps you have projections as to where you'd like to see things go. How can DevOps and security work better for you as a tag team?
Cole: For me, the next step for ServiceMaster specifically is making solid plans to migrate off of our legacy systems, so that we can truly focus on maturing DevOps and delivering for our customer in a safer, quicker way, and so we're not always having to balance this legacy environment and this new environment.
If we could accelerate that, I think we will deliver to the customer quicker and also more securely.

Gardner: Ashish, last word, what should people who are on the security side of the house be thinking about DevOps that they might not have appreciated?

Higher quality

Kuthiala: The whole point of adopting DevOps -- delivering your software to your customers faster and with higher quality -- says it. DevOps is an opportunity for security teams to get deeply embedded in the mindset of the developers, the business planners, testers, and production teams -- essentially the whole software development lifecycle -- which they didn't have the opportunity to do earlier.

They would usually come in before code went to production and often would push back the production cycles by a few weeks because they had to do the right thing and ensure release of code that was secure. Now, they’re able to collaborate with and educate developers, sit down with them, tell them exactly what they need to design and therefore deliver secure code right from the design stage. It’s the opportunity to make this a lot better and more secure for their customers.

Cole: The key is security being a strategic partner with the business and the rest of IT, instead of just being a governing body.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Why government agencies could lead the way in demanding inter-public cloud interoperability and standardization

Why government agencies could lead the way in demanding inter-public cloud interoperability and standardization

The next BriefingsDirect thought leadership panel discussion explores how public-sector organizations can gain economic benefits from cloud interoperability and standardization.

Our panel comes to you in conjunction with The Open Group Paris Event and Member Meeting October 24 through 27, 2016 in France, with a focus on the latest developments in eGovernment.

As government agencies move to the public cloud computing model, the use of more than one public cloud provider can offer economic benefits through competition and choice. But are the public clouds standardized efficiently for true interoperability, and can the large government contracts in the offing for cloud providers have an impact on the level of maturity around standardization?

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how to best procure multiple cloud services as eGovernment services at low risk and high reward, we're joined by our panel, Dr. Chris Harding, Director for Interoperability at The Open Group; Dave Linthicum, Senior Vice President at Cloud Technology Partners, and Andras Szakal, Vice President and Chief Technology Officer at IBM U.S. Federal. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Andras, I've spoken to some people in the lead-up to this discussion about the level of government-sector adoption of cloud services, especially public cloud. They tell me that it’s lagging the private sector. Is that what you're encountering, that the public sector is lagging the private sector, or is it more complicated than that?

Szakal: It's a bit more complicated than that. Born-on-the-cloud adoption in the private sector is probably much greater than in the public sector, and you have to differentiate. The industry at large, from a born-on-the-cloud point of view, is very much ahead of the public-sector, government implementation of born-on-the-cloud applications.

What really drove that was innovations like the Internet of Things (IoT), gaming systems, and platforms, whereas the government environment was more about taking existing government and citizen-to-government shared services, and so on, and putting them into the cloud environment.

When you're talking about public cloud, you have to be very specific about the public sector and government, because most governments have their own industry instance of their cloud. In the federal government space, they're acutely aware of the FedRAMP-certified public-cloud environments. Those can go from FedRAMP Moderate, where you have access to the yummy goodness of the entire cloud industry, to FedRAMP High, which isolates these clouds into their own environments in order to increase the level of protection and lower the risk to the government.

So, the cloud service provider (CSP) created instances of these commercial clouds fit-for-purpose for the federal government. In that case, if we're talking about enterprise applications shifting to the cloud, we're seeing the public sector government side, at the national level, move very rapidly, compared to some of the commercial enterprises who are more leery about what the implications of that movement may be over a period of time. There isn't anybody that's mandating that they do that by law, whereas that is the case on the government side.

Attracting contracts

Gardner: Dave, it seems that if I were a public cloud provider, I couldn't think of a better customer, a better account in terms of size and longevity, than some major government agencies. What are we seeing from the cloud providers in trying to attract the government contracts and perhaps provide the level of interoperability and standardization that they require?

Linthicum: The big three -- Amazon, Google and Microsoft -- are really making an effort to get into that market. They all have federal sides to their house. People are selling into that space right now, and I think that they're seeing some progress. The FAA and certainly the DoD have been moving in that direction.

However, they do realize that they have to build a net new infrastructure, a net new way of doing procurement to get into that space. In the case where the US is building the world’s biggest private cloud at the CIA, they've had to change their technology around the needs of the government.

They see it as really the "Fortune 1." They see it as the largest opportunity that’s there, and they're willing to make huge investments in the billions of dollars to capture that market when it arrives.

Gardner: It seems to me, Chris, that we might be facing a situation where we have cloud providers offering a set of services to large government organizations, but perhaps a different set to the private sector. From an interoperability and standardization perspective, that doesn’t make much sense to me.

What’s your perspective on how public cloud services and standardization are shaping up? Where did you expect things to be at this point?

Harding: The government has an additional dimension beyond that of the private sector when it comes to procurement, in terms of the need to be transparent and to spend the money that's entrusted to it by the public in a wise manner. One of the issues with a lack of standardization is that it makes it more difficult for governments to show that they're visibly getting the best deals for the taxpayers when they come to procure cloud services.

In fact, The Open Group produced a guide to cloud computing for business a couple of years ago. One of the things that we argued in that was that, when procuring cloud services, the enterprise should model the use that it intends to make of the cloud services and therefore be able to understand the costs that they were likely to incur. This is perhaps more important for government, even more than it is for private enterprises. And you're right, the lack of standardization makes it more difficult for them to do this.

Gardner: Chris, do you think that interoperability is of a higher order of demand in public-sector cloud acquisition than in the private sector, or should there be any differentiation?

Need for interoperability

Harding: Both really have the need for interoperability. The public sector perhaps has a greater need, simply because it’s bigger than a small enterprise and it’s therefore more likely to want to use more cloud services in combination.

Gardner: We've certainly seen a lot of open-source platforms emerge in private cloud as well as hybrid cloud. Is that a driving force yet in the way that the public sector is looking at public cloud services acquisition? Is open source a guide to what we should expect in terms of interoperability and standardization in public-cloud services for eGovernment?

Szakal: Open source, from an application implementation point of view, is one of the questions you're asking, but are you also suggesting that somehow these cloud platforms will be reconsidered or implemented via open source? There's truth to both of those statements.

IBM is the number two cloud provider in the federal government space, if you look at hybrid and the commercial cloud for which we provide three major cloud environments. All of those cloud implementations are based on open source -- OpenStack and Cloud Foundry are key pieces of this -- as well as the entire DevOps lifecycle.

So, open source is important, and if you think of open source as a way to ensure interoperability -- what we call in The Open Group environment "Executable Standards" -- it is indeed a way to ensure interoperability.

That’s more important at the cloud-stack level than it is between cloud providers, because between cloud providers you're really going to be talking about API-driven interoperability, and we have that down pretty well.

So, the economy of APIs and the creation of these composite services are going to be very, very important elements. If they're closed and don't follow the normal RESTful approaches defined by the W3C and other industry consortia, then it's going to be difficult to create these composite clouds.
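
To make the API-level point concrete, here is a minimal, hypothetical sketch in which the same client code targets two providers through a common REST shape; the endpoints, authentication, and payloads are illustrative assumptions, not any vendor's actual API.

    # Sketch of API-driven interoperability: the caller picks a provider by
    # policy (cost, data sovereignty) without rewriting the integration code.
    import requests

    PROVIDERS = {
        "provider_a": "https://api.provider-a.example/v1/objects",
        "provider_b": "https://api.provider-b.example/v1/objects",
    }

    def put_object(provider, name, data, token):
        url = "{}/{}".format(PROVIDERS[provider], name)
        resp = requests.put(url, data=data,
                            headers={"Authorization": "Bearer " + token})
        return resp.status_code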

Gardner: We saw that OpenStack had its origins in a government agency, NASA. In that case, clearly a government organization, at least in the United States, was driving the desire for interoperability and standardization, a common platform approach. Has that been successful, Dave? Why wouldn’t the government continue to try to take that approach of a common, open-source platform for cloud interoperability?

Linthicum: OpenStack has had some fair success, but I wouldn't call it excellent success. One of the issues is that the government left it dangling out there, and while using some aspects of it, I really expected them to drive more adoption of that open standard, for lots of reasons.

So, they have to hack the operating systems and meet very specific needs around security, governance, compliance, and things like that. They have special use cases, such as DoD real-time weapons-control systems and some IoT work that the government would like to move into. So, that's out there as an opportunity.
In other words, the ability to work with some of the distros out there, and there are dozens of them, and get into a special government version of that operating system, which is supported openly by the government integrators and providers, is something they really should take advantage of. It hasn’t happened so far and it’s a bit disappointing.

Insight into Europe

Gardner: Do any of you have any insight into Europe and some of the government agencies there? They haven’t been shy in the past about mandating certain practices when it comes to public contracts for acquisition of IT services. I think cloud should follow the same path. Is there a big difference in what’s going on in Europe and in North America?

Szakal: I just got off the phone a few minutes ago with my counterpart in the UK. The nice thing about the way the UK government is approaching cloud computing is that they're trying to do so by taking the handcuffs off the vendors and making sure that they are standards-based. They're meeting a certain quality of services for them, but they're not mandating through policy and by law the structure of their cloud. So, it allows for us, at least within IBM, to take advantage of this incredible industry ecosystem you have on the commercial side, without having to consider that you might have to lift and shift all of this very expensive infrastructure over to these industry clouds.

The EU is, in similar ways, following a similar practice. Obviously, data sovereignty is really an important element for most governments. So, you see a lot of focus on data sovereignty and data portability, more so than we do around strict requirements in following a particular set of security controls or standards that would lock you in and make it more difficult for you to evolve over a period of time.

Gardner: Chris Harding, to Andras’ point about data interoperability, do you see that as a point on the arrow that perhaps other cloud interoperability standards would follow? Is that something that you're focused on more specifically than more general cloud infrastructure services?

Harding: Cloud is a huge spectrum, from the infrastructure services at the bottom, up to the business services, the application services, to software as a service (SaaS), and data interoperability sits on top of that stack.

I'm not sure that we're ready to get real data interoperability yet, but the work that's being done on trying to establish common frameworks for understanding data, for interpreting data, is very important as a basis for gaining interoperability at that level in the future.

We also need to bear in mind that the nature of data is changing. It’s no longer a case that all data comes from a SQL database. There are all sorts of ways in which data is represented, including human forms, such as text and speech, and interpreting those is becoming more possible and more important.

This is the exciting area, where you see the most interesting work on interoperability.

Gardner: Dave Linthicum, one of the things that some of us who have been proponents of cloud for a number of years now have looked to is the opportunity to get something that couldn’t have been done before, a whole greater than the sum of the parts.
It seems to me that if you have a common cloud fabric, with sufficient interoperability for data, applications, and infrastructure services, and that cuts across both the public and the private sector, then many of the difficulties we've had -- health-insurance payer and provider interoperability and communication, sharing of government services, and sharing data with the private sector -- things that have probably been blamed on bureaucracy and technical backwardness, could in some ways be solved by a common public-cloud approach adopted by the major public cloud providers. It seems to me a very significant benefit could be drawn when the public and private sectors have a commonality that the in-house data centers of the past just couldn't provide.

Am I chewing on too much pie in the sky here, Dave, or is there actually something to be said about the cloud model, not just between government to government agencies, but the public and private sectors?

Getting more savvy

Linthicum: The public-cloud providers out there, the big ones, are getting more savvy about providing interoperability, because they realized that it’s going to be multi-cloud. It’s going to be different private and public cloud instances, different kinds of technologies, that are there, and you have to work and play well with a number of different technologies.

However, to be a little bit more skeptical, over the years, I've found out that they're in it for their own selfish interests, and they should be, because they're corporations. They're going to basically try to play up their technology to get into a market and hold on to the market, and by doing that, they typically operate against interoperability. They want to make it as difficult as possible to integrate with the competitors and leverage their competitors’ services.

So, we have that kind of dynamic going on, and it's incredibly frustrating, because we can certainly stand up, have the discussion, and reveal the concepts. You just did a really good job of laying out that Nirvana, and we should start moving in this direction. You will typically get lots of head-nodding from the public-cloud providers and the private-cloud providers, but actions speak louder than words, and thus far it's been very counterproductive.

Interoperability is occurring but it’s in dribs and drabs and nothing holistic.

Gardner: Chris, it seems as if the earlier you try to instill interoperability and standardization both in technical terms, as well as methodological, that you're able to carry that into the future where we don't repave cow paths, but we have highly non-interoperable data centers replaced by them being in the cloud, rather than in some building that you control.

What do you think is going to be part of the discussion at The Open Group Paris Event, October 24, around some of these concepts of eGovernment? Shouldn’t they be talking about trying to make interoperability something that's in place from the start, rather than something that has to be imposed later in the process?

Harding: Certainly this will be an important topic at the forthcoming Paris event. My personal view is that the question of when you should standardize something to gain interoperability is a very difficult balancing act. If you do it too late, then you just get a mess of things that don’t interoperate, but equally, if you try to introduce standards before the market is ready for them, you generally end up with something that doesn’t work, and you get a mess for a different reason.

Part of the value of industry events, such as The Open Group events, is for people in different roles in different organizations to be able to discuss with each other and get a feel for the state of maturity and the directions in which it's possible to create a standard that will stick. We're seeing a standard paradigm, the API paradigm, that was mentioned earlier. We need to start building more specific standards on top of those, and certainly in Paris and at future Open Group events, those are the things we'll be discussing.

Gardner: Andras, you wear a couple of different hats. One is Chief Technology Officer at IBM US Federal, but you're also very much involved with The Open Group; I think you're on the Board of Directors. How do you see this progression of what The Open Group has been able to do in other spheres around standardization -- both methodological, such as the TOGAF® enterprise architecture framework, an Open Group standard, as well as the implementation and enforcement of standards? Is what The Open Group has done in the past something you expect to be applicable to these cloud issues?

Szakal: IBM has a unique history, being one of the only companies in the technology arena that is over 100 years old and has been able to retain great value for its customers over that long period of time. We shifted from a fairly closed computing environment to this idea of open interoperability and freedom of choice.

That's our approach for our cloud environment as well. What drives us in this direction is that our customers require it from IBM; we're a common infrastructure and a glue that binds together many of the largest enterprise, financial, banking, and healthcare institutions in the world, ensuring that they can interoperate with other vendors.
As such, we were one of the founders of The Open Group, which has been at the forefront of helping facilitate this discussion about open interoperability. I'm totally with Chris as to when you would approach that. As I said before, my concern is that you interoperate at the service level in the economy of APIs. That suggests there are some other elements to it -- not just the API itself, but the ability to effectively manage credentials, security, and other common services, like being able to manage object stores in the place where you would like to store your information, so that data sovereignty isn't an issue. These are all things that will occur over a period of time.

Early days

It's early, heady days in the cloud world, and we're going to see all of that goodness come to pass as we go forward. In reality, we talk about cloud as if it's a thing. Its true value isn't so much in the technology, but in creating these new disruptive business capabilities and business models. Openness of the cloud by itself doesn't create those new business models.

That's where we need to focus. Are we able to actually drive these new collaborative models with our cloud capabilities? You're going to be interoperating with many CSPs -- not just two, three, or four -- especially as you see different sectors grow into the cloud. It won't matter where they operate their cloud services from; it will matter how they actually interoperate at that API level.

Gardner: It certainly seems to me that the interoperability is the killer application of the cloud. It can really foster greater inter-department collaboration and synergy, government to government, state to federal, across the EU, for example as well, and then also to the private sector, where you have healthcare concerns and you've got monetary and banking and finance concerns all very deeply entrenched in both public and private sectors. So, we hope that that’s where the openness leads to.

Chris, before we wrap up, it seems to me that there's a precedent that has been set successfully with The Open Group, when it comes to security. We've been able to do some pretty good work over the past several years with cloud security using the adoption of standards around encryption or tokenization, for example. Doesn’t that sort of give us a path to greater interoperability at other levels of cloud services? Is security a harbinger of things to come?

Harding: Security certainly is a key aspect that needs to be incorporated in the standards we build on the API paradigm. But some people talk about the move to digital transformation, the digital enterprise. Cloud and other things, like IoT and big-data analysis, are all coming together, and a key underpinning requirement for that is platform integration. That's where the Open Platform 3.0™ Forum of The Open Group is focusing on the possibilities for platform interoperability to enable digital platform integration. Security is a key aspect of that, but there are other aspects too.

Gardner: I am afraid we will have to leave it there. We've been discussing the latest developments in eGovernment and cloud adoption with a panel of experts. Our focus on these issues comes in conjunction with The Open Group Paris Event and Member Meeting, October 24-27, 2016 in Paris, France, and there is still time to register at www.opengroup.org and find more information on that event, and many others coming in the near future.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

How propelling instant results to the Excel edge democratizes advanced analytics

How propelling instant results to the Excel edge democratizes advanced analytics

The next BriefingsDirect Voice of the Customer digital transformation case study explores how powerful and diverse financial information is newly and uniquely delivered to the ubiquitous Excel spreadsheet edge.

We'll explore how HTI Labs in London provides the means and governance with its Schematiq tool to bring critical data services to the interface users want. By leveraging the best of instant cloud-delivered data with spreadsheets, Schematiq democratizes end-user empowerment while providing powerful new ways to harness and access complex information.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how complex cloud core-to-edge processes and benefits can be better managed and exploited we're joined by Darren Harris, CEO and Co-Founder of HTI Labs, and Jonathan Glass, CTO and Co-Founder of HTI Labs, based in London. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let's put some context around this first. What major trends in the financial sector led you to create HTI Labs, and what are the problems you're seeking to solve?

Harris: Obviously, in finance, spreadsheets are widespread and are being used for a number of varying problems. A real issue started a number of years ago, when spreadsheets got out of control. People were using them everywhere, introducing lots of operational risk into processes. Firms wanted to get their hands around that for governance, and there were loads of Excel-type issues that we needed to eradicate.

That led to the creation of centralized teams that locked down rigid processes and effectively took away a lot of the innovation and discovery process that traders are using to spot opportunities and explore data.

Through this process, we're trying to help with governance to understand the tools to explore, and [deliver] the ability to put the data in the hands of people ... [with] the right balance.

So by taking the best of regulatory scrutiny around what a person needs, and some innovation that we put into Schematiq, we see an opportunity to take Excel to another level -- but not sacrifice the control that’s needed.

Gardner: Jonathan, are there technology trends that allowed you to be able to do this, whereas it may not have been feasible economically or technically before?

Upstream capabilities

Glass: There are a lot of really great back-end technologies available now, along with the ability to scale compute resources either internally or externally. Essentially, the desktop remains quite similar. Excel has remained much the same, but the upstream capabilities have really grown.

So there's a challenge. The data that people feel they should have access to is getting bigger, more complex, and less structured. So Excel, which is this great front-end for coming to grips with data, is becoming a bit of a bottleneck in terms of actually keeping up with the data that's out there that people want.

Gardner: So, we're going to keep Excel. We're not going to throw the baby out with the bathwater, so to speak, but we are going to do something a little bit different and interesting. What is it that we're now putting into Excel and how is that different from what was available in the past?

Harris: Schematiq extends Excel and allows it to access unstructured data. It also reduces the complexity and technical limitations that Excel has as an out-of-the-box product.

We have the notion of a data link that's effectively in a single cell that allows you to reference data that’s held externally on a back-end site. So, where people used to ingest data from another system directly into Excel, and effectively divorce it from the source, we can leave that data where it is.
It's a paradigm of take a question to the data; don’t pull the data to the question. That means we can leverage the power of the big-data platforms and how they process an analytic database on the back-end, but where you can effectively use Excel as the front screen. Ask questions from Excel, but push that query to the back-end. That's very different in terms of the model that most people are used to working with in Excel.
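
As a rough sketch of that paradigm, the class below keeps only a lightweight link in the front end and pushes each question to the back-end engine, returning just the small answer. The names here are illustrative; this is not Schematiq's actual API.

    # "Take the question to the data": the spreadsheet cell holds a link,
    # the heavy query runs on the back end, and only the result comes back.
    class DataLink:
        def __init__(self, backend, table):
            self.backend = backend      # e.g. a connection to an analytic database
            self.table = table          # the data itself never leaves the server

        def ask(self, question_sql):
            # Push the query to the back end; return only the (small) answer.
            return self.backend.execute(question_sql.format(table=self.table))

    # Usage: link = DataLink(conn, "trades"); link.ask("SELECT COUNT(*) FROM {table}")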

Gardner: This is a two-way street. It's a bit different. And you're also looking at the quality, compliance, and regulatory concerns over that data.

Harris: Absolutely. An end-user is able to break down or decompose any workflow process with data and debug it the same way they can in a spreadsheet. The transparency that we add on top of Excel’s use with Schematiq allows us to monitor what everybody is doing and the function they're using. So, you can give them agility, but still maintain the governance and the control.

In organizations, lots of teams have become disengaged. IT has tried to create some central core platform that’s quite restrictive, and it's not really serving the users. They have gotten disengaged and they've created what Gartner referred to as the Shadow BI Team, with databases under their desk, and stuff like that.

By bringing in Schematiq we add that transparency back, and we allow IT and the users to have an informed discussion -- a very analytic conversation -- around what they're using, how they are using it, where the bottlenecks are. And then, they can work out where the best value is. It's all about agility and control. You just can't give the self-service tools to an organization and not have the transparency for any oversight or governance.

To the edge

Gardner: So we have, in a sense, brought this core to the edge. We've managed it in terms of compliance and security. Now, we can start to think about how creative we can get with what's on that back-end that we deliver. Tell us a little bit about what you go after, what your users want to experiment with, and then how you enable that.

Glass: We try to be as agnostic to that as we can, because it's the creativity of the end-user that really drives value.

We have a variety of different data sources, traditional relational databases, object stores, OLAP cubes, APIs, web queries, and flat files. People want to bring that stuff together. They want some way that they can pull this stuff in from different sources and create something that's unique. This concept of putting together data that hasn't been put together before is where the sparks start to fly and where the value really comes from.

Gardner: And with Schematiq you're enabling that aggregation and cleansing ability to combine, as well as delivering it. Is that right?

Harris: Absolutely. It's that discovery process. It may be very early on in a long chain. This thing may progress to be something more classic, operational, and structured business intelligence (BI), but allowing end-users the ability to cleanse, explore data, and then hand over an artifact that someone in the core team can work with or use as an asset. The iteration curve is so much tighter and the cost of doing that is so much less. Users are able to innovate and put together the scenario of the business case for why this is a good idea.

The only thing I would add to the sources that Jon has just mentioned is with HPE Haven OnDemand, [you gain access to] the unstructured analytics, giving the users the ability to access and leverage all of the HPE IDOL capabilities. That capability is a really powerful and transformational thing for businesses.

They have such a set of unstructured data [services] available in voice and text, and when you allow business users access to that data, the things they come up with, their ideas, are just quite amazing.

Technologists always try to put themselves in the minds of the users, and we've all historically done a bad job of making the data more accessible for them. When you allow them the ability to analyze PDFs without structure, to share that, to analyze sentiment, to include concepts and entities, or even enrich a core proposition, you're really starting to create innovation. You've raised the awareness of all of these analytics that exist in the world today in the back-end, shown end-users what they can do, and then put their brains to work discovering and inventing.
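
As a hedged sketch of the kind of unstructured-text call being described, the function below posts text to a sentiment-analysis endpoint over REST. The URL and parameter names follow the pattern Haven OnDemand documented at the time, but they should be treated as assumptions rather than a current, working API.

    # Illustrative REST call for sentiment analysis, in the spirit of the
    # Haven OnDemand services discussed above. Endpoint and parameters assumed.
    import requests

    def analyze_sentiment(text, api_key):
        url = "https://api.havenondemand.com/1/api/sync/analyzesentiment/v1"
        resp = requests.post(url, data={"apikey": api_key, "text": text})
        resp.raise_for_status()
        return resp.json()   # e.g. an aggregate sentiment plus per-entity details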

Gardner: Many of these financial organizations are well-established, many of them for hundreds of years perhaps. All are thinking about digital transformation, the journey, and are looking to become more data-driven and to empower more people to take advantage of that. So, it seems to me you're almost an agent of digital transformation, even in a very technical and sophisticated sector like finance.

Making data accessible

Glass: There are a lot of stereotypes in terms of who the business analysts are and who the people are that come up with ideas and intervention. The true power of democratization is making data more accessible, lowering the technical barrier, and allowing people to explore and innovate. Things always come from where you least expect them.

Gardner: I imagine that Microsoft is pleased with this, because there are some people who are a bit down on Excel. They think that it's manual, that it's by rote, and that it's not the way to go. So, you, in a sense, are helping Excel get a new lease on life.

Glass: I don’t think we're the whole story in that space, but I love Excel. I've used it for years and years at work. I've seen the power of what it can do and what it can deliver, and I have a bit of an understanding of why that is. It’s the live nature of it, the fact that people can look at data in a spreadsheet, see where it’s come from, see where it’s going, they can trust it, and they can believe in it.
That's why what we're trying to do is create these live connections to these upstream data sources. There are manual steps -- download, copy/paste, move around the sheet -- which is where errors creep in. It's where the bloat, the slowness, and the unreliability can happen. By changing that into a live connection to the data source, it becomes instant, and it goes back to being trusted, reliable, and actionable.

Harris: There's something, as well, in the DNA of how people interact with data -- the way they can effectively lay out the algorithm or the process behind a calculation or a data flow. That's why you see a lot of other systems that are more web-based or web-centric replicate an Excel-type experience.

The user starts to use it and starts to think, "Wow, it’s just like Excel," and it isn’t. They hit a barrier, they hit a wall, and then they hit the "export" button. Then, they put it back [into Excel] and create their own way to work with it. So, there's something in the DNA of Excel and the way people lay things out. I think of [Excel] almost like a programming environment for non-programmers. Some people describe it as a functional language, very much like Haskell, and the Excel functions people write are effectively a way of working with and navigating through the data.


Gardner: No need to worry that if you build it, will they come; they're already there.

Harris: Absolutely.

Gardner: Tell us a bit about HTI Labs and how your company came about, and where you are on your evolution.

Cutting edge

Harris: HTI Labs was founded in 2012. The core backbone of the team actually worked for the same tier 1 investment bank, and we were building risk and trading systems for front-office teams. We were really, I suppose, at the cutting edge of all the big data technologies that were being used at the time -- real-time, distributed graphs and cubes, and everything.

As a core team, it was about taking that expertise and bringing it to other industries: using Monte Carlo farms in risk calculations, the ability to export data at speed, and real-time risk. These things were becoming more central to other organizations, which was an opportunity.

At the moment, we're focusing predominantly on energy trading. Our software is being used across a number of other sectors, and our largest client has installed Schematiq on 120 desktops, which is a great validation of what we're doing. We're also a member of the London Stock Exchange Elite Program for high-growth companies, based in London.

Glass: Darren and I met when we were working for the same company. I started out as a quant doing the modeling, the math behind pricing, but I found that my interest lay more in the engineering. Rather than doing it once, can I do it a million times, can I do these things reliably and scale them?

Because I started in a front-office environment, it was very spreadsheet-dominated, very VBA-dominated. There's good and bad in that, and we learned a lot of lessons from it. Darren and I met up and crossed the divide together -- from the top-down, big IT systems, and from the bottom-up, end-user-developed spreadsheets, and so on. We found a middle ground together, which we feel is quite a powerful combination.

Gardner: Back to where this leads. We're seeing more and more companies using data services like Haven OnDemand and starting to employ machine learning, artificial intelligence (AI), and bots to augment what the humans do so well. Is there an opportunity for that to play here -- or maybe it already does? The question basically is, how does AI come to bear on what you can deliver out to the Excel edge?

Harris: I think what you see is that, out of the box, you have a base unit of capability. The algorithms are built, but the key to improving them is the feedback loop between your domain users, your business users, and how they can enrich and effectively train these algorithms.

So, we see a future where the self-service BI tools that they use to interact with data and explore would almost become the same mechanism where people will see the results from the algorithms and give feedback to send back to the underlying algorithm.

Gardner: And Jonathan, where do you see the use of bots, particularly perhaps with an API model like Haven OnDemand?

The role of bots

Glass: The concept for bots is replicating an insight or a process that somebody might already be doing manually. People create these data flows and analyses that they maybe run only once, because they're quite time-consuming to run. The really exciting possibility is that you make these things run 24x7. So, rather than having to pull from the data source, you start receiving notifications in your own mailbox from the processes you have created. You look at those and you decide whether that's a good insight or a bad insight, and you can then start to train it and refine it.

The training and refining is that loop that potentially goes back to IT, gets back through a development loop, and it’s about closing that loop and tightening that loop. That's the thing that really adds value to those opportunities.

Gardner: Perhaps we should unpack Schematiq a bit to understand how one might go back and do that within the context of your tool. Are there several components of the tool, one of which might lend itself to going back and automating?

Glass: Absolutely. You can imagine the spreadsheet has some inputs and some outputs. One of the components within the Schematiq architecture is the ability to take a spreadsheet -- to take the logic and the process that's embedded in that spreadsheet -- and turn it into an executable module of code, which you can host on your server, you can schedule, you can run as often as you like, and you can trigger based on events.

It’s a way of emitting code from a spreadsheet. You take some of that insight -- without a business-analysis loop and a development loop -- and you take the exact thing that the user, the analyst, has programmed. You make it into something that you can run, commoditize, and scale. That's quite an important way in which we shorten that development loop. We create a cycle that's tight and rapid.
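
As a conceptual sketch of that pattern -- not Schematiq's actual generated output -- the snippet below shows a calculation that an analyst might have expressed as spreadsheet formulas captured as an ordinary function, so it can be hosted on a server, scheduled, or triggered by events. All names and formulas are hypothetical.

```python
# Conceptual sketch only: a spreadsheet-style calculation captured as a
# plain function so it can be hosted, scheduled, or triggered by events.
# This is not Schematiq's generated code; names and logic are hypothetical.

def margin_report(prices, costs):
    """Mirror of a spreadsheet calc: per-row margin plus a summary cell."""
    rows = []
    for price, cost in zip(prices, costs):
        margin = price - cost                          # like =A2-B2
        margin_pct = margin / price if price else 0.0  # like =C2/A2
        rows.append({"price": price, "cost": cost,
                     "margin": margin, "margin_pct": margin_pct})
    total_margin = sum(r["margin"] for r in rows)      # like =SUM(C2:C100)
    return {"rows": rows, "total_margin": total_margin}

if __name__ == "__main__":
    # Once the logic lives in a function, a scheduler or web framework can
    # call it repeatedly -- the "run, commoditize, and scale" step.
    print(margin_report([100.0, 250.0], [60.0, 200.0])["total_margin"])
```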

Gardner: Darren, would you like to explain the other components that make-up Schematiq?

Harris: There are four components to the Schematiq architecture. There's the workbench, which extends Excel and enables large, structured data analytics. We have the asset manager, which is really all about governance. You can think of it like source control for Excel, but with a lot more around metadata control, transparency, and analytics on what people are using and how they are using it.

There's a server component that allows you to off-load and scale analytics horizontally, and to build repeatable or overnight processes. The last part is the portal. This is really about allowing end-users to instantly share their insights with other people -- picking up on Jon’s point about the compound executable that's defined in Schematiq. That can be off-loaded to a server and exposed as another API to a computer, a mobile device, or even another function.

So, it’s very much all about empowering the end-user to connect, create, govern, and share instantly, and then allow consumption by anybody on any device.

Market for data services

Gardner: I imagine, given the sensitive nature of the financial markets and activities, that you have some boundaries that you can’t cross when it comes to examining what’s going on in between the core and the edge.

Tell me about how you, as an organization, can look at what’s going on with the Schematiq and the democratization, and whether that creates another market for data services when you see what the demand entails.

Harris: It’s definitely the case that people have internal datasets they create and that they look after. People are very precious about them because they are hugely valuable, and one of the things that we strive to help people do is to share those things.

Across the trading floor, you might effectively have a dozen or more different IT infrastructures, if you think of what exists on each desk as a miniature infrastructure that's been created. So, it's about making it easy for people to share these things, to create master datasets that they gain value from, and to see that they gain mutual value from that, rather than feeling closed in and not wanting to share with their neighbors.

If we work together and if we have the tools that enable us to collaborate effectively, then we can all get more done and we can all add more value.

Gardner: It's interesting to me that the more we look at the use of data, the more it opens up new markets and innovation capabilities that we hadn’t even considered before. And, as an analyst, I expect to see more of a marketplace of data services. You strike me as an accelerant to that.

Harris: Absolutely. As the analytics come online and are exposed by APIs, the underlying store that's used becomes a bit irrelevant. You look at what the analytics can do for you; that's how you consume the insight, and you can connect to other sources. You can connect to Twitter, you can connect to Facebook, you can connect to PDFs -- whether it's NoSQL, structured, columnar, or rows underneath doesn't really matter. You don't see that complexity. The fact that you can just create an API key, access it as a consumer, and start to work with it is really powerful.

There was the recent example in the UK of a report on the Iraq War. It’s 2.2 million words, it took seven years to write, and it’s available online, but there's no way any normal person could consume or analyze that. That’s three times the complete works of Shakespeare.

Using these APIs, you can start to pull out mentions, countries, and locations, and really start to get into the data, and provide anybody with Excel at home, in our case, or any other tool, the ability to analyze it, get in there, and share those insights. We're very used to media where we get just the headline, and then spin comes into play. People turn things on their head, and you really never get to delve into the underlying detail.
What’s really interesting is that when democratization and the sharing of insights and collaboration come, we can all be informed. We can all really dig deep, and all these people who work with the data, the great analysts, can start to collaborate, delve in, make new discoveries, and share that insight.

Gardner: All right, a little light bulb just went off in my head. Whereas before we would go to a headline and a news story and might have a hyperlink to a source, now I could get a headline and a news story, open up my Excel spreadsheet, get to the actual data source behind the entire story, and then probe, plumb, and analyze it any which way I wanted to.

Harris: Yes, exactly. I think the most savvy consumer now, the analyst, is starting to demand that transparency. We've seen it in the UK with election messages, quotes, and even financial stats, where people just don’t believe the headlines. They're demanding transparency in that process, and so governance can only be a good thing.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How JetBlue turns mobile applications quality assurance into improved user experience wins

How JetBlue turns mobile applications quality assurance into improved user experience wins

The next BriefingsDirect Voice of the Customer performance engineering case study discussion examines how JetBlue Airways in New York uses virtual environments to reduce software development costs, centralize performance testing, and create a climate for continuous integration and real-time monitoring of mobile applications.

We'll now hear how JetBlue cultivated a DevOps model by including advanced performance feedback in the continuous integration process to enable greater customer and workforce productivity.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To describe how efficient performance engineering has reduced testing, hardware and maintenance costs by as much as 60 percent, we're joined by Mohammed Mahmud, the Senior Software Performance Engineer at JetBlue Airways in New York. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why is mobile user experience so very important to your ability to serve customers well these days?

Mahmud: It's really very important for us to give the customer an option to do check-in, book flights, manage bookings, check flight status, and some other things. On flights, they have an option to watch TV, listen to music, and buy stuff using mobile devices as well. But on board, they have to use Fly-Fi [wireless networks]. This is one of the most important business drivers for JetBlue Airways.

Gardner: What sort of climate or environment have you had to put together in order to make sure that those mobile apps really work well, and that your brand and reputation don’t suffer?
Mahmud: I believe a real-time monitoring solution is the key to success. We use HPE Business Service Management (BSM), integrated with third-party applications for monitoring purposes. We created some synthetic transactions and put them out there on a real device to see ... how it impacts performance. If there are any issues with that, we can fix it before it happens in the production environment.

Also, we have a real-time monitoring solution in place. This solution uses real devices to get the real user experience and to identify potential performance bottlenecks in a live production environment. If anything goes wrong there, we can get alerts from the production environment, and we can mitigate that issue right away.
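
As a rough sketch of what a synthetic transaction looks like in its simplest form -- a generic stand-in, not HPE BSM itself -- the script below times a scripted request against a page and flags it when the response is slow or unhealthy. The URL and threshold are hypothetical placeholders.

```python
# Generic sketch of a synthetic transaction check; not HPE BSM, just the idea.
# URL and threshold are hypothetical placeholders.
import time
import requests

CHECK_URL = "https://www.example.com/checkin"  # placeholder transaction endpoint
THRESHOLD_SECONDS = 2.0                        # assumed alerting budget

def run_synthetic_check():
    """Time one scripted 'user' transaction and report pass/fail."""
    start = time.monotonic()
    try:
        resp = requests.get(CHECK_URL, timeout=10)
        elapsed = time.monotonic() - start
        healthy = resp.ok and elapsed <= THRESHOLD_SECONDS
    except requests.RequestException:
        elapsed, healthy = time.monotonic() - start, False
    print(f"check-in transaction: {elapsed:.2f}s, healthy={healthy}")
    return healthy

if __name__ == "__main__":
    # In practice a scheduler (or a real device farm) runs this continuously
    # and raises an alert whenever it returns False.
    run_synthetic_check()
```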

DevOps benefits

Gardner: How have you been able to connect the development process to the operational environment?

Mahmud: My area is strictly performance engineering, but we're in the process of putting the performance effort into our DevOps model. We're going to be part of the continuous integration (CI) process, so we can take part in the development process and give performance feedback early in the development phase.

In this model, an application module upgrade kicks off the functional test cases and gives feedback to the developers. Our plan is to take part in that CI process and include performance test cases, so we can provide performance feedback at a very early stage of the development process.
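
A minimal sketch of that kind of early feedback, under the assumption that a test run has already produced a list of response times, is a small gate script that a CI job can call and that fails the build when the 95th percentile exceeds an agreed budget. The file name and budget below are hypothetical.

```python
# Hedged sketch of a CI performance gate: read response times produced by a
# test run and fail the build if the 95th percentile exceeds the budget.
# The file name and budget value are hypothetical.
import json
import sys

BUDGET_P95_SECONDS = 1.5  # assumed service-level budget

def p95(samples):
    """Return the 95th-percentile value of a non-empty list of numbers."""
    if not samples:
        raise ValueError("no samples to evaluate")
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

def main(results_path="perf_results.json"):
    with open(results_path) as fh:
        response_times = json.load(fh)  # e.g. [0.42, 0.51, 0.39, ...]
    observed = p95(response_times)
    print(f"p95={observed:.2f}s budget={BUDGET_P95_SECONDS:.2f}s")
    # A non-zero exit code makes the CI stage fail, putting performance
    # feedback alongside the functional test results.
    sys.exit(0 if observed <= BUDGET_P95_SECONDS else 1)

if __name__ == "__main__":
    main(*sys.argv[1:])
```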

Gardner: How often are you updating these apps? Are you doing it monthly, quarterly, more frequently?

Mahmud: Most of them on a two- or three-week basis.

Gardner: How are you managing the virtual environment to create as close to the operating environment as you can? How do the virtualized services and networks benefit you?

Mahmud: We're maintaining a complete virtualized environment for our performance testing and performance engineering. Before our developers create any kind of a service, or put it out there, they do mock-ups using third-party applications. The virtual environment they're creating is similar to the production environment, so that when it’s being deployed out there in the actual environment, it works efficiently and perfectly without any issue.
Our developers recently started using the service virtualization technology. Also, we use network virtualization technology to measure the latency for various geographical locations.

Gardner: How has performance engineering changed over the past few years? We've seen a lot of changes in general, in development, mobile of course, DevOps, and the need for more rapid development. But, how have you seen it shift in the past few years in performance engineering?

Mahmud: When I came to JetBlue Airways, LoadRunner was the only product they had. The performance team was responsible for evaluating application performance by running a performance test and giving the test results, identifying pass/fail based on the requirements provided. It was strictly performance testing.

The statistics they used to provide were pretty straightforward -- maybe some transaction response times and some server statistics, but no other analysis or detailed information. But now, it's more than that. Now, we don't just test the application and determine pass/fail. We analyze the logs, the traffic flow, user behavior, and so on, in order to create and design an effective test. This is more performance engineering than performance testing.

Early in the cycle

We're getting engaged early in the development cycle to provide performance feedback. We're doing the performance testing, providing the response time in cases where multiple users are using that application or that module, finding out how this is going to impact the performance, and finding bottlenecks before it goes to the integration point.

So, it’s more of coming to the developers' table, sitting together, and figuring out any performance issue.

Gardner: Understanding the trajectory forward, it seems that we're going to be doing more with microservices, APIs, more points of contact, generating more data, trying to bring the analysis of that data back into the application. Where do you see it going now that you've told us where it has come from? What will be some of the next benefits that performance engineering can bring to the overall development process?

Mahmud: Well, as I mentioned earlier, we're planning to be part of the continuous integration; our goal is to become engaged earlier in the development process. That's sitting together with the developers on a one-to-one basis to see what they need to make sure that we have performance-efficient applications in our environment for our customers. Again, this is all about getting involved in the earlier stages. That's number one.

Number two, we're trying to mitigate any kind of volume-related issue. Sometimes, we have yearly sales. We don’t know when that's going to happen, but when it happens, it puts enormous pressure on the system. It's a big thing, and we need to make sure we're prepared for that kind of traffic on our site.

Our applications are mostly JetBlue.com and JetBlue mobile applications. It’s really crucial for us and for our business. We're trying to become engaged in the early stages and be part of the development process as well.

Gardner: Of course it’s important to be able to demonstrate value. Do you have any metrics of success, or can you point to ways in which getting in early, getting in deep, has been of significant value? How do you measure your effectiveness?

Mahmud: We did an assessment in our production environment to see how much it would cost us if JetBlue.com went down for an hour. I'm not authorized to discuss any numbers, but I can tell you that it was in the millions of dollars.
So, before it goes to production with any kind of performance-related issue, we make sure that we're solving it before it happens. Right there, we're saving millions of dollars. That’s the value we are adding.

Gardner: Of course more and more people identify the mobile app with the company. This is how they interact; it becomes the brand. So, it's super important for that as well.

Adding value

Mahmud: Of course, and I can add another thing. When I joined JetBlue three years ago, our position was at the bottom of the industry benchmark list. Now, we're within the top five on the benchmark list. So, we're adding value to our organization.

Gardner: It pays to get it done right the first time and get it early, almost in any activity these days.

What comes next? Where would you like to extend continuous integration processes, to more types of applications, developing more services? Where do you take the success and extend it?

Mahmud: Right now, we're more engaged with JetBlue.com and the mobile applications. Other teams are interested in doing performance testing for their systems as well. So, we're getting engaged with the SAP, DB, HR, and payroll teams as well. We're getting engaged more day by day. It’s getting bigger every day.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Seven secrets to highly effective procurement: How business networks fuel innovation and transformation

Seven secrets to highly effective procurement: How business networks fuel innovation and transformation

The next BriefingsDirect innovation discussion focuses on how technology, data analysis, and digital networks are transforming procurement and the source-to-pay process as we know it. We’ll also discuss what it takes to do procurement well in this new era of business networks. 

Far beyond just automating tasks and transactions, procurement today is a strategic function that demands an integrated, end-to-end approach built on deep insights and intelligence to drive informed source-to-pay decisions and actions that enable businesses to adopt a true business ecosystem-wide digital strategy.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

And according to the findings of a benchmarking survey conducted by SAP Ariba, there are seven essential traits of modern procurement organizations that are driving this innovation and business transformation.

To learn more about the survey results on procurement best practices, please join me in welcoming Kay Ree Lee, Director of Value Realization at SAP. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Procurement seems more complex than ever. Supply chains now stretch around the globe, regulation is on the rise, and risk is heightened on many fronts in terms of supply chain integrity.

Innovative companies, however, have figured out how to overcome these challenges, and so, at the value realization group you have uncovered some of these best practices through your annual benchmarking survey. Tell us about this survey and what you found.

Lee: We have an annual benchmarking program that covers purchasing operations, payables, sourcing, contract management, and working capital. What's unique about it, Dana, is that it combines a traditional survey with data from our procurement applications and business network.

This past year, we looked at more than 200 customers who participated, covering more than $350 billion in spend. We analyzed their quantitative and qualitative responses and identified the intersection between those responses for top performers compared to average performers. Then, we drew correlations between which top performers did well and the practices that drove those achievements.

Gardner: That intersection is an example of the power of business networks, because you're able to gather intelligence from your business network environment or ecosystem and then apply a survey back into it. It seems to me that there is a whole greater than the sum of the parts between what the Ariba Network can do and what market intelligence is demanding.

Universe of insights

Lee: That’s right. The data from the applications in the Ariba Network contain a universe of insights, intelligence, and transactional data that we've amassed over the last 20-plus years. By looking at the data, we've found that there are specific patterns and trends that can help a lot of companies improve their procurement performance -- either by processing transactions with fewer errors or processing them faster. They can source more effectively by collaborating with more suppliers, having suppliers bid on more events, and working collaboratively with suppliers.

Gardner: And across these 200 companies, you mentioned $350 billion of spend. Do you have any sense of what kind of companies these are, or do they cross a variety of different types of companies in different places doing different vertical industry activities?

Lee: They're actually cross-industry. We have a lot of companies in the services industry and in the manufacturing industry as well.

Gardner: This sounds like a unique, powerful dataset, indicative of what's going on not just in one or two places, but across industries. Before we dig into the detail, let’s look at the big picture, a 100,000-foot view. What would you say are some of the major high-level takeaways that define best-in-class procurement and the organizations that can produce it these days, based on your data?

Lee: There are four key takeaways that define what best-in-class procurement organizations do.

The first one is that a lot of these best-in-class organizations, when they look at source-to-pay or procure-to-pay, manage it as an end-to-end process. They don't just look at a set of discrete tasks; they look at it as a big, broad picture. More often than not, they have an assigned process expert or a process owner that's accountable for the entire end-to-end process. That's key takeaway number one.

Key takeaway number two is that a lot of these best-in-class organizations also have an integrated platform from which they manage all of their spend. And through this platform, procurement organizations provide their internal stakeholders with flexibility, based on what they're trying to purchase.

For example, a company may need to keep track of items that are critical to manufacturing, with inventory visibility and tracking. That's one requirement.

Another requirement is purchasing manufacturing or machine parts that are not stocked; those can be purchased through supplier catalogs with pre-negotiated part descriptions and item pricing.
   
Gardner: Are you saying that this same platform can be used in these companies across all the different types of procurement and source-to-pay activities -- internal services, even indirect, perhaps across different parts of a large company? That could be manufacturing or transportation. Is one common platform used for all types of purchasing?

Common platform

Lee: That's right. One common platform for different permutations of what you're trying to buy. This is important.

The third key takeaway was that best-in-class organizations leverage technology to fuel greater collaboration. They don't just automate tasks. One example of this is by providing self-service options.

Perhaps a lot of companies think that self-service options are dangerous, because you're letting the person who is requesting items select on their own, and they could make mistakes. But the way to think about a self-service option is that it's providing an alternative for stakeholders to buy and to have a guided buying experience that is both simple and compliant and that's available 24/7.

You don't need someone there supervising them. They can go on the platform and they can pick the items, because they know the items best -- and they can do this around the clock. That's another way of offering flexibility and fueling greater collaboration and ultimately, adoption.

Gardner: We have technologies like mobile these days that allow that democratization of involvement. That sounds like a powerful approach.

Lee: It is. And it ties to the fourth key takeaway, which is that best-in-class organizations connect to networks. Networks have become very prevalent these days, but best-in-class companies connect to networks to access intelligence, not just transact. They go out to the network, they collaborate, and they get intelligence. A network really offers scale that organizations would otherwise have to achieve by developing multiple point-to-point connections for transacting across thousands of different suppliers.

You now go on a network and you have access to thousands of suppliers. Years ago, you would have had to develop point-to-point connectivity, which costs money, takes a long time, and you have to test all those connections, etc.

Gardner: I'm old enough to remember Metcalfe's Law, which roughly says that the more participants in a network, the more valuable that network becomes, and I think that's probably the case here. Is there any indication from your data and research that the size and breadth and depth of the business network value works in this same fashion?

Lee: Absolutely. Those three words are key. The size -- you want a lot of suppliers transacting on there. And then the breadth -- you want your network to contain global suppliers, so some suppliers that can transact in remote parts of the world, even Nigeria or Angola.

Then, the depth of the network -- the types of suppliers that transact on there. You want to have suppliers that can transact across a plethora of different spend categories -- suppliers that offer services, suppliers that offer parts, and suppliers that offer more mundane items.

But you hit the nail on the head with the size and breadth of the network.
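
(For reference, Metcalfe's Law mentioned above is usually glossed as follows: with n participants, the number of possible pairwise connections is n(n-1)/2, which grows roughly as n squared over 2 -- so doubling the number of buyers and suppliers on a network roughly quadruples the potential connections rather than merely doubling them.)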

Pretty straightforward

Gardner: For industry analysts like myself, these seem pretty straightforward. I see where procurement and business networks are going, and I can certainly agree that these are major and important points.

But I wonder, because we're in such a dynamic world and because companies -- at least in many of the procurement organizations -- are still catching up in technology, how are these findings different than if you had done the survey four or five years ago? What's been a big shift in terms of how this journey is progressing for these large and important companies?

Lee: I don't think there has been a big shift. Over the last two to five years, perhaps priorities have changed, so there are some patterns that we see in the data for sure. For example, within sourcing, while sourcing savings continue to go up and down, sourcing continues to be very important to a lot of organizations for delivering cost savings.

The data tells us organizations need to be agile and they need to continue to do more with less.

One of the key takeaways from this is that the cost structure of procurement organizations has come down. They have fewer people operating certain processes, and that means it costs organizations less to operate those processes, because now they're leveraging technology even more. They're also able to deliver higher savings, because they're including more and different suppliers as they go to market for certain spend categories.

That's where we're seeing difference. It's not really a shift, but there are some patterns in the data.

Gardner: It seems to me, too, that because we're adding more data and insight through that technology, we can elevate procurement more prominently into the category of spend management. That allows companies to make decisions at a broad level -- across entire industries, maybe across the entire company -- based on these insights and best practices, and they can save a lot more money.

But then, it seems to me that that elevates procurement to a strategic level, not just a way to save money or to reduce costs, but to actually enable processes and agility, as you pointed out, that haven't been done before.

Before we go to the traits themselves, is there a sense that your findings illustrate this movement of procurement to a more strategic role?

Front and center

Lee: Absolutely. That's another one of the key traits that we have found from the study. Top performing organizations do not view procurement as a back-office function. Procurement is front and center. It plays a strategic role within the organization to manage the organization’s spend.

When you talk about managing spend, you could talk about it at the surface level. But we have a lot of organizations that manage spend to a depth that includes performing strategic supplier relationship management, supplier risk management, and deep spend analysis. The ability to manage at this depth distinguishes top performers from average performers.

Gardner: As we know, Kay Ree, many people most trust their cohorts, people in other companies doing the same function they are, for business acumen. So this information is great, because we're learning from the people that are doing it in the field and doing it well. What are some of the other traits that you uncovered in your research?

Lee: Let me go back to the first trait. The first one that we saw that drove top performing organizations was that top performers play a strategic role within the organization. They manage more spend and they manage that spend at a deep level.

One of the stats that I will share is that top performers see a 36 percent higher spend under management, compared to the average organization. And they do this by playing a strategic role in the organization. They're not just processing transactions. They have a seat at the leadership table. They're a part of the business in making decisions. They're part of the planning, budgeting, and financial process.

They also ensure that they're working collaboratively with their stakeholders to ensure that procurement is viewed as a trusted business adviser, not an administrator or a gatekeeper. That’s really the first trait that we saw that distinguishes top performers.

The second one is that top performers have an integrated platform for all procurement spend, and they conduct regular stakeholder spend reviews -- resulting in higher sourcing savings.

And this is key. They conduct quarterly – or even more frequent -- meetings with the businesses to review their spend. These reviews serve different purposes. They provide a forum for discussing various sourcing opportunities.

Imagine going to the business unit to talk to them about their spend from the previous year. "Here is who you have spent money with. What is your plan for the upcoming year? What spend categories can we help you source? What's your priority for the upcoming year? Are there any capital projects that we can help out with?"

Sourcing opportunities

It's understanding the business and the requirements from stakeholders that helps procurement identify additional sourcing opportunities. Procurement has to be proactive in collaborating with the businesses and with stakeholders, and ensuring that it is responsive and agile to their requirements. That's the second finding that we saw from the survey.

The third one is that top performers manage procure-to-pay as an end-to-end process with a single point of accountability, and this really drives higher purchase order (PO) and invoicing efficiency. This one is quite straightforward. Our quantitative and qualitative research tells us that having a single point of accountability drives a higher transactional efficiency.

Gardner: I can speak to that personally. In too many instances, I work with companies where one hand doesn’t know what the other is doing, and there is finger pointing. Any kind of exception management becomes bogged down, because there isn’t that point of accountability. I think that’s super important.

Lee: We see that as well. Top performers operationalize savings after they have sourced spend categories and captured negotiated savings. The question then becomes how to operationalize negotiated savings so that they become actual savings. The way top performers approach it is to manage compliance for those sourced categories by creating fit-for-purpose strategies for purchase. So, they drive more spend toward contracts and electronic catalogs through a guided buying experience.

You do that by making contracts and catalogs available to your stakeholders that guide them to the negotiated pricing, so that they don't have to enter pricing, which would dilute your savings. Top performers also look closely at working capital, with the ability to analyze historical payment trends and then optimize payment instruments, resulting in higher discounts.

Sometimes, working capital is not as important to procurement because it's left to the accounts payable (AP) function, but top-performing procurement organizations look at it holistically, as another lever that they manage within the sourcing and procure-to-pay process.

So, it's another negotiation point when they're sourcing: taking advantage of opportunities to standardize payment terms, taking discounts when they need to, and looking at historical data to build a strategy, and variations of that strategy, for how they're going to pay strategic suppliers. What’s the payment term for standard suppliers, when do we pay on terms versus discounts, and when do we pay on a P-Card? They look at working capital holistically as part of their entire procurement process.
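
(As a standard textbook illustration of that discount lever, not a figure from the benchmark data: under common "2/10, net 30" terms, paying 20 days early to capture a 2 percent discount is equivalent to earning roughly (2/98) x (365/20), or about 37 percent, on an annualized basis -- which is why top performers treat payment timing as a managed lever rather than an afterthought.)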

Gardner: It really shows where being agile and intelligent can have major benefits in terms of your ability to time and enforce delivery of goods and services -- and also get the best price in the market. That’s very cool.

Lee: And having all of that information and having the ability to transact efficiently is key. Let’s say you have all the information, but you can't transact efficiently. You're slow to make invoice payments, as an example. Then, while you have a strategy and approach, you can’t even make a change there (related to working capital). So, it's important to be able to do both, so that you have the options and the flexibility to be able to operationalize that strategy.

Top performers leverage technology and provide self-service to enable around-the-clock business. This really helps organizations drive down cycle time for PO processing.

Within the oil and gas sector, for example, it's critical for organizations to get the items out to the field, because if they don't, they may jeopardize operations on a large scale. Offering the ability to perform self-service and to enable that 24x7 gives organizations flexibility and offers the users the ability to maneuver themselves around the system quite easily. Systems nowadays are quite user-friendly. Let the users do their work, trust them in doing their work, so that they can purchase the items they need to, when they want to.

User experience

Gardner: Kay Ree, this really points out the importance of the user experience, and not just your end-user customers, but your internal employee users and how younger folks, millennials in particular, expect that self-service capability.

Lee: That’s right. Purchasing shouldn't be any different. We should follow the lead of other industries and other mobile apps and allow users to self-serve. If you want to buy something, you go out there and pick the item; the negotiated pricing is already there, and then off you go.

Gardner: That’s enabling a lot of productivity. That’s great. Okay, last one.

Lee: The last one is that top performers leverage technology to automate PO and invoice processing to increase administrative efficiency. What we see is best-in-class organizations leverage technology with various features and functionalities within the technology itself to increase administrative efficiency.

An example of this could be the ability to collaborate with suppliers on the requisitioning process. Perhaps you're doing three bids and a buy, and during that process it's no longer about picking up the phone. You list out the requirements for what you're trying to buy and send them out automatically to three suppliers; they provide responses back, you pick your response, and then the system converts the requirements into a PO.

So that flexibility by leveraging technology is key.
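
As a conceptual sketch of that "three bids and a buy" flow -- hypothetical data structures, not the SAP Ariba API -- the snippet below sends one set of requirements to three suppliers, picks the best response, and converts it into a purchase-order record.

```python
# Conceptual "three bids and a buy" flow: hypothetical structures only,
# not the SAP Ariba API.

def collect_bids(requirement, suppliers):
    """Ask each supplier for a quote; quotes are stubbed here for illustration."""
    return [{"supplier": s["name"], "price": s["quote"](requirement)} for s in suppliers]

def convert_to_po(requirement, winning_bid, po_number="PO-0001"):
    """Turn the chosen bid into a purchase-order record (the touch-less step)."""
    return {"po_number": po_number,
            "item": requirement["item"],
            "quantity": requirement["quantity"],
            "supplier": winning_bid["supplier"],
            "unit_price": winning_bid["price"]}

if __name__ == "__main__":
    requirement = {"item": "machine part X-100", "quantity": 50}
    suppliers = [
        {"name": "Supplier A", "quote": lambda r: 12.40},
        {"name": "Supplier B", "quote": lambda r: 11.95},
        {"name": "Supplier C", "quote": lambda r: 12.10},
    ]
    bids = collect_bids(requirement, suppliers)
    best = min(bids, key=lambda b: b["price"])  # simplest possible selection rule
    print(convert_to_po(requirement, best))
```

In a real guided-buying system the selection rule, approvals, and supplier messaging would be far richer; the point is only that the whole chain can run without anyone picking up the phone.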

Gardner: Of course, we expect to get even more technology involved with business processes. We hear things about the Internet of Things (IoT), more data, more measurement, more scientific data analysis being applied to what may have been more gut instinct types of business decision making, now it’s more empirical. So I think we should expect to see even more technology being brought to bear on many of these processes in the next several years. So that’s kind of important to see elevated to a top trait.

All right, what I really like about this, Kay Ree, is this information is not just from an academic or maybe a theory or prediction, but this is what organizations are actually doing. Do we have any way of demonstrating what you get in return? If these are best practices as the marketplace defines them, what is the marketplace seeing when they adopt these principles? What do they get for this innovation? Brass tacks, money, productivity and benefits -- what are the real paybacks?

Lee: I'll share stats for top performers. Top performers are able to achieve about 7.8 percent in savings per year as a percent of source spend. That’s a key monetary benefit that most organizations look to. It’s 7.8 percent in savings.

Gardner: And 7.8 percent to someone who's not familiar with what we're talking about might not seem large, but this is a huge amount of money for many companies.

Lee: That's right. Per billion dollars, that’s $78 million.

Efficient processing

They also manage more than 80 percent of their spend and they manage this spend to a greater depth by having the right tools to do it -- processing transactions efficiently, managing contracts, and managing compliance. And they have data that lets them run deeper spend analysis. That’s a key business benefit for organizations that are looking to transact over the network, looking to leverage more technology.

Top performers also transact and collaborate electronically with suppliers to achieve a 99 percent-plus electronic PO rate. Best-in-class organizations don't even attach a PDF to an email anymore. They create a requisition, it gets approved, it becomes a PO, and it is automatically sent to a supplier. No one is involved in it. So the entire process becomes touch-less.

Gardner: These traits promote that automation that then leads to better data, which allows for better process. And so on. It really is a virtuous cycle that you can get into when you do this.

Lee: That’s right. One leads to another.

Gardner: Are there other ways that we're seeing paybacks?

Lee: The proof of the pudding is in the eating. I'll share a couple of examples from my experience looking at data for specific companies. One organization utilizes the availability of collaboration and sourcing tools to source transportation lanes, to obtain better-negotiated rates, and drive higher sourcing savings.

A lot of organizations use collaboration and sourcing tools, but the reason this is interesting is that, when you think about transportation, there are different ways to source it. Doing it through an eSourcing tool and having the ability to generate a high percentage of savings through collaboration and sourcing tools -- that was an eye-opener for me. That's an example of an organization really using technology to its benefit by going out and sourcing an uncommon spend category.

For another example, I have a customer that was really struggling to get control of their operational costs related to transaction processing, while trying to manage and drive a high degree of compliance. What they were struggling with is that their cost structure was high. They wanted to keep the cost structure lower, but still drive a high degree of compliance.

When we looked at their benchmark data, it helped open the customer's eyes to how to drive improvements: directing transactions to catalogs and contracts where applicable, driving suppliers to create invoice-based contracts in the Ariba Network, and enabling more suppliers to invoice electronically. This helped increase administrative efficiency and reduced the invoice errors that were resulting in a lot of rework for the AP team.

So, these two examples, in addition to the quantitative benefits, show the tremendous opportunity organizations have to adopt and leverage some of these technologies.

Virtuous cycle

Gardner: So, we're seeing more technology become available, and more data and analytics become available as the business networks are built out in terms of size, breadth, and depth, and we've identified that the paybacks can lead to a virtuous cycle of improvement.

Where do you see things going now that you've had a chance to really dig into this data and see these best practices in actual daily occurrence? What would you see happening in the future? How can we extrapolate from what we've learned in the market to what we should expect to see in the market?

Lee: We're still only just scratching the surface with insights. We have a roadmap of advanced insights that we're planning for our customers that will allow us to further leverage the insights and intelligence embedded in our network to help our customers increase efficiency in operations and effectiveness of sourcing.

Gardner: It sounds very exciting, and I think we can also consider bringing artificial intelligence and machine learning capabilities into this as we use cloud computing. And so the information and insights are then shared through a sophisticated infrastructure and services delivery approach. Who knows where we might start seeing the ability to analyze these processes and add all sorts of new value-added benefits and transactional efficiency? It's going to be really exciting in the next several years.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in:

How flash storage provides competitive edge for Canadian music service provider SOCAN

How flash storage provides competitive edge for Canadian music service provider SOCAN

The next BriefingsDirect Voice of the Customer digital business transformation case study examines how Canadian nonprofit SOCAN faced digital disruption and fought back with a successful storage modernizing journey. We'll learn how adopting storage innovation allows for faster responses to end-user needs and opens the door to new business opportunities.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To describe how SOCAN gained a new competitive capability for its performance rights management business we're joined by Trevor Jackson, Director of IT Infrastructure for SOCAN, the Society of Composers, Authors and Music Publishers of Canada, based in Toronto. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The music business has changed a lot in the past five years or so. There are lots of interesting things going on with licensing models and people wanting to get access to music, but people also wanting to control their own art.

Tell us about some of the drivers for your organization, and then also about some of your technology decisions.
Jackson: We've traditionally been handling performances of music, which means radio stations, television, and movies. Over the last 10 or 15 years, with the advent of YouTube, Spotify, Netflix, and digital streaming services, we're seeing a huge increase in the volume of data that we have to digest and analyze as an organization.

Gardner: And what function do you serve? For those who might not be familiar with your organization or this type of organization, tell us the role you play in the music and content industries.

Play music ethically

Jackson: At a very high level, what we do is license the use of music in Canada. What that means is that we allow businesses through licensing to ethically play any type of music they want within their environment. Whether it's a bar, restaurant, television station, or a radio station, we collect the royalties on behalf of the creators of the music and then redistribute that to them.

We're a not-for-profit organization. Anything that we don't spend on running the business, which is the collecting, processing, and payment of those royalties, goes back to the creators or the publishers of the music.

Gardner: When you talk about data, tell us about the type of data you collect in order to accomplish that mission?

Jackson: It's all kinds of data. For the most part, it's unstructured. We collect it from many different sources, again radio and television stations, and of course, YouTube is another example.

There are some standards, but one of the challenges is that we have to do data transformation to ensure that, once we get the data, we can analyze it and it fits into our databases, so that we can do the processing on information.

Gardner: And what sort of data volumes are we talking about here?

Jackson: We're not talking about petabytes, but the thing about performance information is that it's very granular. For example, the files that YouTube sends to us may have billions of rows for all the performances that are played, as they're going through their cycle through the month; it's the same thing with radio stations.

We don't store any digital files or copies of music. It's all performance-related information -- the song that was played and when it was played. That's the type of information that we analyze.
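
As a hedged sketch of the kind of transformation involved -- invented column names and file layout, not SOCAN's actual feeds -- the snippet below streams a large usage report row by row and aggregates play counts per work so the result fits a downstream database.

```python
# Hedged sketch of ingesting a large usage report and aggregating play counts
# per work. Column names and the file layout are invented for illustration;
# they are not SOCAN's actual feed format.
from collections import Counter
import csv

def aggregate_plays(path):
    """Stream the report row by row and count plays per work identifier."""
    plays = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            plays[row["work_id"]] += int(row.get("play_count", 1))
    return plays

if __name__ == "__main__":
    totals = aggregate_plays("usage_report.csv")  # hypothetical file
    for work_id, count in totals.most_common(10):
        print(work_id, count)
```

Streaming the file rather than loading it whole is what keeps billions of granular rows manageable before the aggregated results are written into the royalty database.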

Gardner: So, it's metadata about what's been going on in terms of how these performances have been used and played. Where were you two years ago in this journey, and how have things changed for you in terms of what you can do with the data and how performance of your data is benefiting your business?

Jackson: We've been on flash for almost two years now. About two and a half years ago, we realized that the storage area network (SAN) that we did have, which was a traditional tiered-storage array, just didn't have the throughput or the input/output operations per second (IOPS) to handle the explosive amount of data that we were seeing.

With YouTube coming online, as well as Spotify, we knew we had to do something about that. We had to increase our throughput.

Performance requirements

Gardner: Are you generating reports from this data at a certain frequency or is there streaming? How is the output in terms of performance requirements?

Jackson: We ingest a lot of data from the data-source providers. We have to analyze what was played, who owns the works that were played, correlate that with our database, and then ensure that the monies are paid out accordingly.

Gardner: Are these reports for the generation of the money done by the hour, day, or week? How frequently do you have to make that analysis?

Jackson: We do what we call a distribution, which is a payment of royalties, once a quarter. When we're doing a payment on a distribution, it’s typically on performances that occurred nine months prior to the day of the distribution.
Gardner: What did you do two and a half years ago in terms of moving to flash and solid state disk (SSD) technologies? How did you integrate that into your existing infrastructure, or create the infrastructure to accommodate that, and then what did you get for it?

Jackson: When we started looking at another solution to improve our throughput, we actually started looking at another tiered-storage array. I came to the HPE Discover [conference] about two years ago and saw the presentation on the all-flash [3PAR Storage portfolio] that they were talking about, the benefits of all-flash for the price of spinning disk, which was to me very intriguing.

I met with some of the HPE engineers and had a deep-dive discussion on how they were doing this magic that they were claiming. We had a really good discussion, and when I went back to Toronto, I also met with some HPE engineers in the Toronto offices. I brought my technical team with me to do a bit of a deeper dive and just to kick the tires to understand fully what they were proposing.

We came away from that meeting very intrigued and very happy with what we saw. From then on, we made the leap to purchase the HPE storage. We've had it running for about [two years] now, and it’s been running very well for us.

Gardner: What sort of metrics do you have in terms of technology, speeds and feeds, but also metrics in terms of business value and economics?

Jackson: I don’t want to get into too much detail, but as an anecdote, we saw some processes that we were running going from days to hours just by putting it on all-flash. To us, that's a huge improvement.

Gardner: What other benefits have you gotten? Are there some analytics benefits, backup and recovery benefits, or data lifecycle management benefits?

OPEX perspective

Jackson: Looking at it from an OPEX perspective, because of the IOPS that we have available to us, planning maintenance windows has actually been a lot easier for the team to work with.

Before, we would have to plan something akin to landing the space shuttle. We had to make sure that we weren’t doing it during a certain time, because it could affect the batch processes. Then, we'd potentially be late on our payments, our distributions. Because we have so many IOPS on tap, we're able to do these maintenance windows within business hours. The guys are happier because they have a greater work-life balance.

The other benefit we saw was that all-flash uses less power than spinning disk. With less power, there's less heat and a need for less floor space. Of course, speed is the number one driving factor for a company to go all-flash.

Gardner: In terms of automation, integration, load-balancing, and some of those other benefits that come with flash storage media environments, were you able to use some of your IT folks for other innovation projects, rather than speeds and feeds projects?

Jackson: When you're freeing up resources from keeping the lights on, it's adding more value to the business. IT traditionally is a cost center, but now we can take those resources and take them off of the day-to-day mundane tasks and put them into projects, which is what we've been doing. We're able to add greater benefit to our members.

Gardner: And has your experience with flash in modernizing your storage prompted you to move toward other infrastructure modernization techniques, including virtualization, software-defined composable infrastructure, maybe hyperconverged? Is this an end point for you, or maybe a starting point?

Jackson: IT is always changing, always transforming, and we're definitely looking at other technologies.

Some of the big buzzwords out there, blockchain, machine learning, and whatnot are things that we’re looking at very closely as an organization. We know our business very well and we're hoping to leverage that knowledge with technology to further drive our business forward.

Gardner: We're hearing a lot of promising visions these days about how machine learning could be brought to bear on things like data transformation and making that analysis better, faster, and cheaper. So, that's pretty interesting stuff.
Are you now looking to extend what you do? Is the technology an enabler more than a cost center in some ways for your general SOCAN vision and mission?

Jackson: Absolutely. We're in the music business, but there is no way we can do what we do without technology; technically it’s impossible. We're constantly looking at ways that we can leverage what we have today, as well as what’s out in the marketplace or coming down the pipe, to ensure that we can definitely add the value to our members to ensure that they're paid and compensated for their hard work.

Gardner: And user experience and user quality of experience are top-of-mind for everybody these days.

Jackson: Absolutely, that’s very true.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Strategic DevOps—How advanced testing brings broad benefits to Independent Health

Strategic DevOps—How advanced testing brings broad benefits to Independent Health

The next BriefingsDirect Voice of the Customer digital business transformation case study highlights how Independent Health in Buffalo, New York has entered into a next phase of "strategic DevOps."

After a two-year drive to improve software development, speed to value, and the user experience of customer service applications, Independent Health has further extended advanced testing benefits to ongoing apps production and ongoing performance monitoring.

Learn here how the reuse of proven performance scripts and replaying of synthetic transactions that mimic user experience have cut costs and gained early warning and trending insights into app behaviors and system status.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how to attain such new strategic levels of DevOps benefits are Chris Trimper, Manager of Quality Assurance Engineering at Independent Health in Buffalo, New York, and Todd DeCapua, Senior Director of Technology and Product Innovation at CSC Digital Brand Services Division and former Chief Technology Evangelist at Hewlett Packard Enterprise (HPE). The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What were the major drivers that led you to increase the way in which you use DevOps, particularly when you're looking at user-experience issues in the field and in production?

Trimper: We were really hoping to get a better understanding of our users and their experiences. The way I always describe it to folks is that we wanted to have that opportunity to almost look over their shoulder and understand how the system was performing for them.

Whether your user is internal or external, if they don't have that good user experience, they're going to be very frustrated. Internally, time is money. So, if it takes longer for things to happen, you get frustration and potential turnover; it's an unfortunate barrier.

Gardner: What kind of applications are we talking about? Is this across the spectrum of different types of apps, or did you focus on one particular type of app to start out?

End users important

Trimper: Well, when we started, we knew that our end users, our members, were the most important thing to us, and we started off with the applications that our servicing center used, specifically our customer relationship management (CRM) tool.

If the member information doesn’t pop fast when a member calls, it can lead to poor call quality, queuing up calls, and it just slows down the whole business. We pride ourselves on our commitment to our members. That goes even as far as, when you call up, making sure that the person on the other end of the phone can service you well. Unfortunately, they can only service you as well as the data that’s provided to them to understand the member and their benefits.

Gardner: It’s one thing to look at user experience through performance, but it's a whole new dimension or additional dimension when you're looking at user experience in terms of how they utilize that application, how well it suits their particular work progress, or the processes for their business, their line of business. Are you able to take that additional step, or are you at the point where the feedback is about how users behave and react in a business setting in addition to just how the application performs?

Trimper: We're starting to get to that point. Before, we only had as much information as we were provided about how an application was used or what they were doing. Obviously, you can't stand there and watch what they're doing 24x7.

Lately, we've been consuming an immense amount of log data from our systems and understanding what they're doing, so that we can understand their problems and their woes, or make sure that what we're testing, whether it's in production monitoring or pre-production testing, is an accurate representation of our user. Again, whether it’s internal or external, they're both just as valuable to us.

Gardner: Before we go any further, Chris, tell us a little bit about Independent Health. What kind of organization is it, how big is it, and what sort of services do you provide in your communities?
Trimper: We're a healthcare company for the Western New York area. We're a smaller organization. We deliver what we call the red-shirt treatment, which stands for the best quality care that we can provide our members. We try to be very proactive in everything that we do for our members as well. We drive members to their providers for preventive care, that healthier lifestyle that everybody is trying to achieve.

Gardner: Todd, we're hearing this interesting progression toward a feedback loop of moving beyond performance monitoring into behaviors and use patterns and improving that user experience. How common is that, or is Independent Health on the bleeding edge?

Ahead of the curve

DeCapua: Independent Health is definitely moving with, or maybe a little bit ahead of, the curve in the way that they're leveraging some of these capabilities.

If we were to step back and look at where we've been from an industry perspective across many different markets, Agile was hot, and now, as you start to use Agile and break all the right internal systems for all the right reasons, you have to start adopting some of these DevOps practices.

Independent Health is moving a little bit ahead on some of those pieces, and they're probably focusing on a lot of the right things, when you look across other customers I work with. It's things like speed of time to value. That goes across technology teams, business teams, and they're really focused on their end customer, because they're talking about getting these new feature functions to benefit their end customers for all the right reasons.

You heard Chris talking about that improved end-user experience around their customer service applications. This is when people are calling in, and you're using tools to see what's going on and what your end users are doing.

There's another organization that actually recorded what their customers were doing when they were having issues. That was a production-monitoring type of thing, but now you're recording a video of it. If you called within 10 minutes of having that online issue, then as you were calling in and speaking with that customer service representative, they were able to watch the video and see exactly what you did to get that error online and cause that phone call. So, having these different types of user exceptions, and being able to do the type of production monitoring that Independent Health is doing, is fantastic.

Another area that Chris was telling me about is some of the social media aspects and being able to monitor that is another way of getting feedback. Now, I do think that Independent Health is hitting the bleeding edge on that piece. That’s what I've observed.

Gardner: Let’s hear some more about that social media aspect, getting additional input, additional data through all the available channels that you can.

Trimper: It would be foolish not to pay attention to all aspects of our members, and we're very careful to make sure that they're getting that quality that we try to aim for. Whether it happens to be Facebook, Twitter, or some other mechanism that they give us feedback on, we take all that feedback very seriously.

I remember an instance or two where there might have been some negative feedback. That went right to the product-management team to try to figure out how to make that person’s experience better. It’s interesting, from a healthcare perspective, thinking about that. Normally, you think about a member’s copay or their experience in the hospital. Now, it's their experience with this application or this web app, but those are all just as important to us.

Broadened out?

Gardner: You started this with those customer-care applications. Has this broadened out into other application development? How do you plan to take the benefits that you've enjoyed early and extend them into more and more aspects of your overall IT organization?

Trimper: We started off with the customer service applications and we've grown it to cover our provider portals, where a provider can come in and look at the benefits of a member, as well as the member portal that members actually log in to. So, we're now doing production monitoring of pretty much all of our key areas.

We also do pre-production monitoring of it. So, as we are doing a release, we don’t have to wait until it gets to production to understand how it went. We're going a little bit beyond normal performance testing. We're running the same exact types of continuous monitoring in both our pre-production region and our production regions to ensure that quality that we love to provide.

Gardner: And how are the operations people taking this? Has this been building bridges? Has this been something that struck them as a foreign entity in their domain? How has that gone?

Trimper: At first, it was a little interesting. It felt like to them it was just another thing that they had to check out and had to look at, but I took a unique approach with it. I sat down and talked to them personally and said, "You hear about all these problems that people have, and it’s impossible for you to be an expert on all these applications and understand how it works. Luckily, coming from the quality organization, we test them all the time and we know the business processes."
The way I sold it to them is, when you see an alert, when you look at the statistics, it’s for these key business processes that you hear about, but you may not necessarily want to know all the details about them or have the time to do that. So, we really gave them insight into the applications.

As far as the alerting, there was a little bit of an adoption curve for that, but overall we've noticed a decrease in the number of support tickets for applications, because we're allowing them to be more proactive, whether it's being proactive about an unfortunately blown service-level agreement (SLA) or about a degradation in performance quality. We can observe both of those, and then they can react appropriately.

Gardner: Todd, he actually sat down and talked to the production people. Is this something novel? Are we seeing more of that these days?

DeCapua: We're definitely seeing more of it, and I know it’s not unique for Chris. I know there was some push back at the beginning from the operations teams.

There was another thing that was interesting. I was waiting for Chris to hit on it, and maybe he can go into it a little bit more. It was the way that he rolled this out. When you're bringing a monitoring solution in, it’s often the ops team that’s bringing in this solution.

Making it visible

What’s changing now is that you have these application-development testing teams that are saying, "We also want to be able to get access to these types of monitoring, so that our teams can see it and we can improve what we are doing and improve the quality of what we deliver to you, the ops teams. We are going to do instrumenting and everything else that we want to get this type of detail to make it visible."

Chris was sharing with me how he made this available first to the directors, and not just one group of directors, but all the directors, making this very plain-sight visible, and helping to drive some of the support for the change that needed to happen across the entire organization.

As we think about that as a proven practice, maybe Chris is one of the people blazing the trail there. It was a big way of improving and helping to illuminate for all parties, this is what’s happening, and again, we want to work to deliver better quality.

Gardner: Anything to add to that, Chris?

Trimper: There were several folks in the development area that weren’t necessarily the happiest when they learned that the perception of what they originally thought was there and what was really there in terms of performance wasn’t that great.

One of the directors shared an experience with me. He would go into our utilities and look at the dashboards before he was heading to a meeting in our customer service center. He would understand what kind of looks he was going to be given when he walked in, because he was directly responsible for the functionality and performance of all this stuff.

He was pleased that, as they went through different releases and were able to continually make things better, he started seeing everything is green, everything is great today. So, when I walk in, it’s going to be sunshine and happiness, and it was sunshine and happiness, as opposed to potentially a little bit doomy and gloomy. It's been a really great experience for everyone to have. There's a little bit of pain going through it, but eventually, it has been seen as a very positive thing.

Gardner: What about the tools that you have in place? What allows you to provide these organizational and cultural benefits? It seems to me that you need to have data in your hands. You need to have some ability to execute once you have got that data. What’s the technology side of this; we've heard quite a bit about the people and the process?

Trimper: This whole thing came about because our CIO came to me and said, "We need to know more about our production systems. I know that your team is doing all the performance testing in pre-production. Some of the folks at HPE told me about this new tool called Performance Anywhere. Here it is, check it out, and get back to me."

We were doing all the pre-production testing, and we learned that the scripts we had built, which were already tried and true, running continuously, and updated as we get new releases, could simply be turned into these production monitors. Then we found, through the trial and now more than two years of working with the tool, that it was a fairly easy process.

Difficult point

The most difficult point was understanding how to get production data that we could work with, but you could literally take a tested VUGen script and turn it into a production monitor in 5-10 minutes, and that was pretty invaluable to us.

That means that every time we get a release, we don’t have to modify two sets of scripts and we don’t have two different teams working on everything. We have one team that is involved in the full life cycle of these releases and that can very knowledgeably make the change to those production monitors.
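To make the pattern Trimper describes concrete, here is a minimal sketch of reusing one scripted business transaction as both a pre-production check and a recurring production monitor. It is an editorial illustration only, not the VUGen or AppPulse Active API; the endpoint URL, SLA threshold, and alerting approach are hypothetical assumptions.

```python
# Illustrative sketch only: one scripted transaction replayed as a synthetic monitor.
# The URL, timing threshold, and alert hook are hypothetical placeholders.
import time
import urllib.request

MEMBER_LOOKUP_URL = "https://example.internal/crm/member-lookup"  # assumed endpoint
SLA_SECONDS = 4.0  # assumed service-level target for this transaction


def run_member_lookup() -> float:
    """Execute the scripted transaction once and return elapsed seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(MEMBER_LOOKUP_URL, timeout=30) as response:
        response.read()  # drain the body so timing covers the full response
    return time.monotonic() - start


def monitor(interval_seconds: int = 300) -> None:
    """Replay the same transaction on a schedule and flag SLA breaches."""
    while True:
        try:
            elapsed = run_member_lookup()
            status = "ALERT" if elapsed > SLA_SECONDS else "OK"
            print(f"{status}: member lookup took {elapsed:.1f}s (SLA {SLA_SECONDS}s)")
        except Exception as err:  # network or application failure
            print(f"ALERT: member lookup failed: {err}")
        time.sleep(interval_seconds)


if __name__ == "__main__":
    monitor()
```

The design point is the one Trimper makes: the same scripted step serves the test team before a release and the operations team after it, so only one artifact has to be maintained per release.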

Gardner: HPE Performance Anywhere. Todd, are a lot of people using it in the same fashion, where they're getting this dual benefit from pre-production and also in deployment and operations?

DeCapua: Yes, it's definitely something that people are becoming more and more aware of. It's a capability that's been around for a little while. You'll also hear about things like IT4IT, but I don't want to open up that whole can of worms unless we want to dive into it. But as that starts to happen, people like Chris, people like his CIO, want to be able to get better visibility into all the systems that are in production, and they want an easy way to do that. Being able to provide that easy way for all of your stakeholders and all of your customers is a capability that we're definitely seeing people adopt.

Gardner: Can you provide a bit more detail in terms of the actual products and services that made this possible for you, Chris?

Trimper: We started with our HPE LoadRunner scripts, specifically the VUGen scripts, that we were able to turn into the production monitors. Using the AppPulse Active tool from the AppPulse suite of tools, we were able to build our scripts using their SaaS infrastructure and have these monitors built for us and available to test our systems.

Gardner: So what do you see in your call center? Are you able to analyze in any way and say, "We can point to these improvements, these benefits, from the ability for us to tie the loop back on production and quality assurance across the production spectrum?"

Trimper: We can do a lot of trend analysis. To be perfectly honest, we didn’t think that the report would run, but we did a year-to-date trend analysis and it actually was able to compile all of our statistics. We saw really two neat things.

When you had open enrollment, we saw this little spike that shot up there, which we would expect to see, but hopefully we can be more prepared for it as time goes on. But we saw a gradual decrease and, I think, due to the ability to monitor and the ability to react and plan better for a better-performing system, through the course of the year, for this one key piece of pulling member data, we went from an average of about 12-14 seconds down to 4 seconds, and that trend is actually continuing to go down.

I don't know if it's now 3 or less today, but if you think about going from 12 or 14 down to about 4, that was a really big improvement. It spoke volumes about our capability to understand the whole picture, and being able to see all of that in one place was really helpful to us.

Where next?

Gardner: Looking to the future, now that you've made feedback loops demonstrate important business benefits and even move into a performance benefit for the business at large, where can you go next? Perhaps you're looking at security and privacy issues, given that you're dealing with compliance and regulatory requirements like most other healthcare organizations. Can you start to employ these methods and these tools to improve other aspects of your SLAs?

Trimper: Definitely, in terms of the SLAs and making sure that we're keeping everything alive and well. As for some of the security aspects, those are still things where we haven’t necessarily gone down the channels yet. But we've started to realize that there are an awful lot of places where we can either tie back or really start closing the gaps in our understanding of just all that is our systems.

Gardner: Todd, last word, what should people be thinking about when they look at their tooling for quality assurance and extending those benefits into full production and maybe doing some cultural bonding at the same time?

DeCapua: The culture is a huge piece. No matter what we talk about nowadays, it starts with that. When I look at somebody like Independent Health, the focus of that culture and the organization is on their end user, on their customer.

When you look at what Chris and his team have been able to do, at a minimum, it's reducing the number of production incidents. And while you're reducing production incidents, you're doing a number of things. There are actually hard costs there that you're saving. There are opportunity costs, now that you can have these resources working on other things to benefit that end customer.

We've talked a lot about DevOps, we've talked a lot about monitoring, we've mentioned now culture, but where is that focus for your organization? How is it that you can start small and incrementally show that value? Because now, what you're going to do is be able to illustrate that in maybe two or three slides, two or three pages.
But some of the things that Chris has been doing, and other organizations are also doing, is showing, "We did this, we made this investment, this is the return we got, and here's the value." For Independent Health, their customers have a choice, and if you're able to move their experience from 12-14 seconds to 4 seconds, that’s going to help. That’s going to be something that Independent Health wants to be able to share with their potential new customers.

As far as acquiring new customers and retaining their existing customers, this is the real value. That's probably my ending point. It's a culture, there are tools that are involved, but what is the value to the organization around that culture and how is it that you can then take that and use that to gain further support as you move forward?

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How always-available data forms the digital lifeblood for a university medical center

How always-available data forms the digital lifeblood for a university medical center

The next BriefingsDirect Voice of the Customer digital business transformation case study examines how the Nebraska Medical Center in Omaha consolidated and unified its data-protection capacities.

We'll explore how adopting storage innovation protects the state's largest hospital from data disruption and adds operational simplicity to complex data lifecycle management.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To describe how more than 150 terabytes of data remain safe and sound, we're joined by Jeff Bergholz, Manager of Technical Systems at The Nebraska Medical Center in Omaha. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about the major drivers that led you to seek a new backup strategy as a way to keep your data sound and available no matter what.

Bergholz: At Nebraska Medicine, we consist of three hospitals with multiple data centers. We try to keep an active-active data center going. Epic is our electronic medical record (EMR) system, and with that, we have a challenge of making sure that we protect patient data as well as keeping it highly available and redundant.

We were on HPE storage for that, and with it, we were really only able to do a clone-type process between data centers and keep retention of that data, but it was a very traditional approach.

A couple of years ago, we did a beta program with HPE on the P6200 platform, creating a tertiary replica of our patient data. With that, this past year, we augmented our data protection suite. We went from license-based to capacity-based, and we introduced some new D2D dedupe devices into that, StoreOnce as well. What that affords us is the ability to easily replicate that data over to another StoreOnce appliance with minimal disruption.

Part of our goal is to keep backup available for potential recovery solutions. With all the cyber threats that are going on in today's world, we've recently increased our retention cycle from 7 weeks to 52 weeks. We saw and heard from the analysts that the average vulnerability sits in your system for 205 to 210 days. So, we had to come up with a plan for what it would take to provide recovery in case something were to happen.

We came up with a long-term solution and we're enacting it now. Combining HPE 3PAR storage with the StoreOnce, we're able to more easily move data throughout our system. What's important there is that our backup windows have greatly been improved. What used to take us 24 hours now takes us 12 hours, and we're able to guarantee that we have multiple copies of the EMR in multiple locations.

We demonstrate it, because we're tested at least quarterly by Epic as to whether we can restore back to where we were before. Not only are we backing it up, we're also testing and ensuring that we're able to reproduce that data.
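The retention reasoning above can be checked with simple arithmetic. The sketch below is an editorial illustration, not Nebraska Medicine's tooling; it only restates the figures cited in the interview.

```python
# Back-of-the-envelope check of backup retention versus attacker dwell time.
# The figures come straight from the interview; the helper itself is illustrative.

def retention_covers_dwell(retention_weeks: int, dwell_days: int) -> bool:
    """Return True if the retention window is at least as long as the dwell time."""
    return retention_weeks * 7 >= dwell_days

for weeks in (7, 52):
    for dwell in (205, 210):
        covered = retention_covers_dwell(weeks, dwell)
        print(f"{weeks:>2} weeks of retention vs {dwell} days of dwell time: "
              f"{'covered' if covered else 'NOT covered'}")
# 7 weeks (49 days) falls far short of a 205-210 day dwell time;
# 52 weeks (364 days) leaves ample margin.
```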

More intelligent approach

Gardner: So it sounds like a much more intelligent approach to backup and recovery with the dedupe, a lower cost in storage, and the ability to do more with that data now that it’s parsed in such a way that it’s available for the right reason at the right time.

Bergholz: Resource-wise, we always have to do more with less. With our main EMR, we're looking at potentially 150 terabytes of data that dedupe shrinks down greatly, and our overall storage footprint for all other systems is approaching 4 petabytes.

We've seen some 30:1 dedupe compression ratios within that, which really has allowed my staff and other engineers to be more efficient and frees up some of their time to do other things, as opposed to having to manage the normal backup and retention of that.
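As a rough illustration of what a 30:1 reduction means for the 150 terabyte EMR data set cited above, here is the arithmetic (an editorial sketch only; real dedupe ratios vary by data type and change over time):

```python
# Simple arithmetic on the figures quoted above; actual ratios vary by workload.
EMR_LOGICAL_TB = 150   # logical EMR backup data cited in the interview
DEDUPE_RATIO = 30      # 30:1 reduction cited in the interview

physical_tb = EMR_LOGICAL_TB / DEDUPE_RATIO
print(f"{EMR_LOGICAL_TB} TB logical at {DEDUPE_RATIO}:1 is about {physical_tb:.0f} TB on disk")
# 150 TB of logical backups shrinks to roughly 5 TB of physical capacity,
# which is why retention could be extended without a matching hardware spend.
```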
We're always challenged to do more and more. We grow 20-30 percent annually, and by having appropriate resources, we're not going to get 20 to 30 percent more resources every year. So, we have to work smarter with less and leverage the technologies that we have.

Gardner: Many organizations these days are using hybrid media across their storage requirements. The old adage was that for backup and recovery, use the cheaper, slower media. Do you have a different approach to that and have you gone in a different direction?

Bergholz: We do, and backup is as important to us as the data that exists out there. Time and time again, we've had to demonstrate the ability to restore in different scenarios within the accepted time to restore and bring service back. They're not going to wait for that. When clinicians or caregivers are taking care of patients, they want that data as quickly as possible. While it may not be the EMR, it may be some ancillary documents that they need in order to provide better care.

We're able, upon request, to enact and restore in 5 to 10 minutes. In many cases, once we receive a ticket or a notification, we have full data restoration within 15 minutes.

Gardner: Is that to say that you're all flash, all SSD, or some combination? How did you accomplish that very impressive recovery rate?

Bergholz: We're pretty much all dedupe-type devices. It’s not necessarily SSD, but it's good spinning disk, and we have the technology in place to replicate that data and have it highly available on spinning disk, versus having to go to tape to do the restoration. We deal with bunches of restorations on a daily basis. It’s something we're accustomed to and our customers require quick restoration.

In a consolidated strategic approach, we put the technology behind it. We didn't go with the cheapest option, but with the right one, and having an active-active data center and backing up across both data centers enables us to do it. So, we did spend money on the backup portion because it's important to our organization.

Gardner: You mentioned capacity-based pricing. For those of our listeners and readers who might not be familiar with that, what is that and why was that a benefit to you?

Bit of a struggle

Bergholz: It was a little bit of a struggle for us. We were always traditionally client-based or application-based in the backup. If we needed to back up Microsoft Exchange mailboxes, we had to have an Exchange plug-in. If we had Oracle, we had to have an Oracle plug-in, a SQL plug-in.

While that was great and enabled us to do a lot, we were always having to get another plug-in to do it. When we saw the dedupe compression ratios we were getting, going to a capacity-based license allowed us to strategically and tactically plan for any increase in our environment. So now, we can buy in chunklets and keep ahead of the game, making sure that we're effective there.

We're in the throes of enacting an archive-type solution through a product called QStar, which I believe HPE is OEM-ing, and we're looking at that as a long-term archive-type process. That's going to a linear tape file system, utilizing the management tools that that product brings us to afford the long-term archive of patient information.

Our biggest challenge is that we never delete anything. It’s always hard with any application. Because of the age of the patient, many cases are required to be kept for 21 years; some, 7 years; some, 9 years. And we're a teaching hospital and research is done on some of that data. So we delete almost nothing.
In the case of our radiology system, we're approaching 250 terabytes right now. Trying to back up and restore that amount of data with traditional tools is very ineffective, but we need to keep it forever.

By going to a tertiary-type copy, which this technology brings us, we have our source array, our replicated array, plus now a tertiary array to take that to, which is our LTFS solution.

Gardner: And with your backup and recovery infrastructure in place and a sense of confidence that comes with that, has that translated back into how you do the larger data lifecycle management equation? That is to say, are there some benefits of knowledge of quality assurance in backup that then allows people to do things they may not have done or not worried about, and therefore have a better business transformation outcome for your patients and your clinicians?

Bergholz: From a leadership perspective, there's nothing real sexy about backup. It doesn’t get oohs and ahs out of people, but when you need data to be restored, you get the oohs and ahs and the thank-yous and the praise for doing that. Being able to demonstrate solutions time and time again buys confidence through leadership throughout the organization and it makes those people sleep safer at night.

Recently, we passed HIMSS Level 7. One of the remarks from that group was that a) we hadn't had any sort of production outage, and b) when they asked a physician on the floor what he does when things go down, and what he does when he loses something, he said the awesome part here is that we haven't gone down and, when we lose something, we're able to restore it in a very timely manner. That was noted on our award.

Gardner: Of course, many healthcare organizations have been using thin clients and keeping everything at the server level for a lot of reasons, an edge-to-core integration benefit. Would you feel more enabled to go into mobile and virtualization knowing that everything kept on the data-center side is secure and backed up, not worrying about the fact that you don't have any data on the client? Is that factored into any of your architectural decisions about how to do client decision-making?

Desktop virtualization

Bergholz: We have been in the throes of desktop virtualization. We do a lot of Citrix XenApp presentation of applications, which keeps the data in the data center, and a lot of our desktop devices connect to that environment.

The next natural progression for us is desktop virtualization (VDI), ensuring that we're keeping that data safe in the data center, backing it up, and protecting the patient information on it, and it's an interesting thought and philosophy. We tried to sell it as an ROI-type initiative to start with, but by the time you put all the pieces of the puzzle together, the ROI really doesn't pan out. At least that's what we've seen in two different iterations.

Although it can be somewhat cheaper, it's not significant enough to make a huge launch in that route. But the main play there, and the main support we have organizationally, is from a data-security perspective. There's also the ease of managing the virtual desktop environment. It frees up our desktop engineers from being feet on the ground, so to speak, to being application engineers who can layer in the applications to be provisioned through the virtual desktop environment.

And one important thing in the healthcare industry is that when you have a workstation that has an issue and requires replacement or re-imaging, that’s an invasive step. If it’s in a patient room or in a clinical-care area, you actually have to go in, disrupt that flow, put a different system in, re-image, make sure you get everything you need. It can be anywhere from an hour to a three-hour process.

We do have a smattering of thin devices out there. When there are issues, it's merely a matter of redeploying a gold image to the device. The great part about thin devices versus thick devices is that in a lot of cases, they're operating in a sterile environment. With traditional desktops, the fans are sucking air, with all the infection-control implications; there's noise; and perhaps they're blowing dust within a room if it's not entirely clean. SSD devices are a perfect play there. It's really a drop-off, unplug, and re-plug sort of technology.

We're excited about that for what it will bring to the overall experience. Our guiding principle is that you have the same experience no matter where you're working. Getting there from Step A to Step Z is a journey. So, you do that a little bit a time and you learn as you go along, but we're going to get there and we'll see the benefit of that.
Gardner: And ensuring the recovery and veracity of that data is a huge part of being able to make those other improvements.

Bergholz: Absolutely. What we've seen from time to time is that users, while they're fairly knowledgeable, save their documents wherever they choose. Policy is to make sure documents are kept within the data center, but that may or may not always be adhered to. By going to desktop virtualization, they won't have any other choice.

A thin client takes that a step further and ensures that nothing gets saved back to a device, where that device could potentially disappear and cause a situation.

We do encrypt all of our stuff. Any device that's out there is covered by encryption, but still there's information on there. It’s well-protected, but this just takes away that potential.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Loyalty management innovator Aimia’s transformation journey to modernized IT

Loyalty management innovator Aimia’s transformation journey to modernized IT

The next BriefingsDirect Voice of the Customer digital business transformation case study examines how loyalty management innovator Aimia is modernizing, consolidating, and standardizing its global IT infrastructure.

As a result of rapid growth and myriad acquisitions, Montreal-based Aimia is in a leapfrog mode -- modernizing applications, consolidating data centers, and adopting industry standard platforms. We'll now learn how improving end-user experiences and leveraging big data analytics helps IT organizations head off digital disruption and improve core operations and processes.
 
Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To describe how Aimia is entering a new era of strategic IT innovation, we're joined by André Hébert, Senior Vice President of Technology at Aimia in Montreal. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are some of the major drivers that have made you seek a common IT strategy?

Hébert: If you go back in time, Aimia grew through a bunch of acquisitions. We started as Aeroplan, Air Canada's frequent flyer program and decided to go in the loyalty space. That was the corporate strategy all along. We acquired two major companies, one in the UK and one that was US-based, which gave us a global footprint. As a result of these acquisitions, we ended up with quite a large IT footprint worldwide and wanted to look at ways of globalizing and also consolidating our IT footprint.

Gardner: For many people, when they think of a loyalty program, it's frequent flyer miles, perhaps points at a specific retail outlet, but this varies quite a bit market to market around the globe. How do you take something that's rather fractured as a business and make it a global enterprise?

Hébert: We've split the business into two different business units. The first one is around coalition loyalty. This is where Aimia actually runs the program. Good examples are Aeroplan in Canada or Nectar in the UK, where we own the currency, we operate the program, and basically manage all of the coalition partners. That's one side.

The other side is what we call our global loyalty solutions. This is where we run loyalty programs for other companies. Through our standard technology, we set up a technology footprint within the customer site or preferably in one of our data centers and we run the technology, but the program is often white-labeled, so Aimia's name doesn't appear anywhere. We run it for banks, retailers and many industry verticals.

Almost like money

Gardner: You mentioned the word currency, and as I think about it, loyalty points are almost like money -- it is currency -- it can be traded, and it can be put into other programs. Tell us about this idea. Are you operating almost like a bank or a virtual currency trader of some sort?

Hébert: You could say that the currency is like money; it is accumulated. If you look at our systems, they're very similar to bank-account systems. The debit and credit transactions mimic the accumulation and redemption transactions that our members do.
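Hébert's bank-account analogy can be pictured as a simple ledger in which accumulation credits points and redemption debits them. The class below is an editorial sketch of that data structure only; the field names, rules, and member identifiers are illustrative assumptions, not Aimia's actual loyalty platform.

```python
# Minimal sketch of a loyalty ledger modeled on bank-style debits and credits.
# Names and rules are illustrative assumptions, not Aimia's systems.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Transaction:
    kind: str      # "accumulation" (credit) or "redemption" (debit)
    points: int


@dataclass
class MemberAccount:
    member_id: str
    history: List[Transaction] = field(default_factory=list)

    def accumulate(self, points: int) -> None:
        """Credit points, e.g. for a flight or a qualifying purchase."""
        self.history.append(Transaction("accumulation", points))

    def redeem(self, points: int) -> None:
        """Debit points, rejecting redemptions that exceed the balance."""
        if points > self.balance():
            raise ValueError("insufficient points")
        self.history.append(Transaction("redemption", points))

    def balance(self) -> int:
        """Balance is credits minus debits, as in a bank account."""
        return sum(t.points if t.kind == "accumulation" else -t.points
                   for t in self.history)


account = MemberAccount("member-001")
account.accumulate(2500)
account.redeem(1000)
print(account.balance())  # 1500
```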
Gardner: What's been your challenge from an IT perspective to allow your company to thrive in this digital economy?

Hébert: Our biggest challenge was how large the technology footprint was. We still operate many dozens of data centers across the globe. The project with HPE is to consolidate all of our technology footprint into four Tier 3 data centers that are scattered across the globe to better serve our customers. Those will benefit from the best security standards and extremely robust data-center infrastructure. 

On the infrastructure side, it's all about simplifying, consolidating, virtualizing, using the cloud, leveraging the cloud, but in a virtual private way, so that we also keep our data very secured. That's on the infra side.

On the application side, we probably have more applications than we have customers. One of the big drivers there is that we have a global product strategy. Several loyalty products have now been developed. We're slowly migrating all of our customers over to our new loyalty systems that we've created to simplify our application portfolios. We have a large number of applications today, and the plan is to try to consolidate all these applications into key products that we've been developing over the last few years.

Gardner: That’s quite a challenge. You're modernizing and consolidating applications. At the same time, you're consolidating and modernizing your infrastructure. It reminds me of what HPE did just a few years ago when it decided to split and to consolidate many data centers. Was that something that attracted you to HPE, that they have themselves gone through a similar activity?

Hébert: Yes, that is one of the reasons. We've shopped around for a partner that can help us in that space and we thought that HPE had the best credentials, the best offer for us to go forward. 

Virtual Private Cloud (VPC), the solution that they offered, is innovative, and because it is both virtual and private, we feel that our customers' data will be significantly more secure than it would be just going to any public cloud.

Gardner: How is consolidating applications and modernizing infrastructure at the same time helping you to manage these compliance and data-protection issues?

Raising the bar

Hébert: The modernization and infrastructure consolidation is, in fact, helping greatly in continuing to secure data and meet ever more difficult security standards, such as PCI DSS 3.0. Through this process, we're going to raise the bar significantly on data privacy.

Gardner: André, a lot of organizations don't necessarily know how to start. There's so much to do when it comes to apps, data, infrastructure modernization and, in your case, moving to VPC. Do you have any thoughts about how to chunk that out, how to prioritize, or are you making this sort of a big bang approach, where you are going to do it all at once and try to do it as rapidly as possible? Do you have a philosophy about how to go about something so complex?

Hébert: We've actually scheduled the whole project. It's a three-year journey into the new HPE world. We decided to attack it by region, starting with Canada and the US, North America. Then, we move on to Asia-Pacific, and the last phase of the project is Europe. We decided to go geographically.
The program is run centrally from Canada, but we have boots on the ground in all of those regions. HPE has taken the lead into the actual technical work. Aimia does the support work, providing documentation, helping with all of the intricacies of our systems and the infrastructure, but it's a co-led project, with HPE doing the heavy lifting.

Gardner: Something about costs comes to mind when you go standard. Sometimes there are upfront costs, and you have to leapfrog that hurdle, but your long-term operating costs can be significantly lower. What is it about the cost structure? Is it the standardized infrastructure platforms, are you using cheaper hardware, is it open source software, or all of the above? How do you factor this as a return on investment (ROI) type of equation?

Hébert: It's all of the above. Because we're right in the middle of this project, it will allow us to standardize and evergreen a lot of our technology that was getting older. A lot of our servers were getting old. So, we're giving the infrastructure a shot in the arm as far as modernization.

From a VPC point of view, we're going to leverage this internal cloud much more significantly. From a CPU point of view, and from an infrastructure point of view, we're going to have significantly fewer physical servers than what we have today. It's all operated and run by HPE. So, all of the management, all of the ITO work is done by HPE, which means that we can focus on apps, because our secret sauce is in apps, not in infrastructure. Infrastructure is a necessary evil.

Gardner: That brings up another topic, DevOps. When you're developing, modernizing, or having a continuous-development process for your applications, if you have that cloud and infrastructure in place and it’s modern, that can allow you to do more with the development phase. Is that something you've been able to measure at all in terms of the ability to generate or update apps more rapidly?

Hébert: We're just dipping our toe into advanced DevOps, but definitely there are some benefits around that. We're currently focused on trying to get more value from that.

Gardner: When you think about ROI, there are, of course, those direct costs on infrastructure, but there are ancillary benefits in terms of agility, business innovation, and being able to come to market faster with new products and services. Is that something that is a big motivator for you and do you have anything to demonstrate yet in terms of how that could factor?

Relationship 2.0

Hébert: We're very much focused right now on what I would say is Relationship 1.0, but HPE was selected as a partner for their ability to innovate. They also are in a transition phase, as we all know, so while we're focused on getting the heavy lifting done, we're focusing on innovation and focusing on new projects with HPE. We actually call that Relationship 2.0.

Gardner: For others who are looking at similar issues -- consolidation, modernization, reducing costs over time, leveraging cloud models -- any words of advice now that you are into this journey as to how to best go about it or maybe things to avoid?
Hébert: When we first looked at this, we thought that we could do a lot of that consolidation work ourselves. Consolidating 42 data centers into 4 is a big job, and where HPE helps in that regard is that they bring the experience, they bring the teams, and they bring the focus to this. 

We probably could have done it ourselves. It probably would have cost more and it probably would have taken longer. One of the benefits that I also see is that HPE manages thousands and thousands of servers. With their ability to automate all of the server management, they've taken it to another level. As a small company, we couldn't afford to do all of the automation that they can afford to do on these thousands of servers.

Gardner: Before we close out, André, looking to the future -- two, three, four years out -- when you've gone through this process, when you've gotten those modern apps and they are running on virtual private clouds and you can take advantage of cloud models, where do you see this going next? 

Do you have some ideas about mobile applications, about different types of transactional capabilities, maybe getting more into the retail sector? How does this enable you to have even greater growth strategically as a company in a few years?

Hébert: If you start with the cloud, the world is about to see a very different cloud model. If you fast forward five years, there will be mega clouds, and everybody will be leveraging these clouds. Companies that actually purchase servers will be a thing of the past. 

When it comes to mobile, clearly Aimia’s strategy around mobile is very focused. The world is going mobile. Most apps will require mobile support. If you look at analytics, we have a whole other business that focuses on analytics. Clearly, loyalty is all about making all this data make sense, and there's a ton of data out there. We have got a business unit that specializes in big data, in advanced analytics, as it pertains to the consumers, and clearly for us it is a very strategic area that we're investing in significantly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Big data and cloud combo spark momentous genomic medicine advances at HudsonAlpha

Big data and cloud combo spark momentous genomic medicine advances at HudsonAlpha

The next BriefingsDirect Voice of the Customer IT innovation case study explores how the HudsonAlpha Institute for Biotechnology engages in digital transformation for genomic research and healthcare paybacks.

We'll learn how HudsonAlpha leverages modern IT infrastructure and big-data analytics to power a pioneering research project incubator and genomic medicine innovator.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To describe new possibilities for exploiting cutting-edge IT infrastructure and big data analytics for potentially unprecedented healthcare benefits, we're joined by Dr. Liz Worthey, Director of Software Development and Informatics at the HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems to me that genomics research and IT have a lot in common. There's not much daylight between them -- two different types of technology, but highly interdependent. Have I got that right?

Worthey: Absolutely. It used to be that the IT infrastructure was fairly far away from the clinic or the research, but now they're so deeply intertwined that it necessitates many meetings a week between the leadership of both in order to make sure we get it right.

Gardner: And you have background in both.

Worthey: My background is primarily on the biology side, although I'm Director of Informatics and I've spent about 20 years working in the software-development and informatics side. I'm not IT Director, but I'm pretty IT savvy, because I've had to develop that skill set over the years. My undergraduate degree was in immunology, and since then, my focus has really been on genetics informatics and bioinformatics.

Gardner: Please describe what genetic informatics or genomic informatics is for our audience.

Worthey: Since 2003, when we received the first version of a human reference genome, there's been a large field involved in the task of extracting knowledge that can be used for society and health from genomic data.

A [human] genome is 3.2 billion nucleotides in length, and in there, there's a lot of really useful information. There's information about which diseases that individual may be more likely to get and which diseases they will get.

There's also information about which drugs they should and shouldn't take, and about which types of surveillance procedures, such as colonoscopies, they should have. And so, the clinical aspects of genomics are really about developing the analytical capabilities to extract that data in real time so that we can use it to help an individual patient.

On top of that, there's also a lot of research. A lot of that is in large-scale studies across hundreds of thousands of individuals to look for signals that are more difficult to extract from a single genome. Genomics, clinical genomics, is all of that together.

Parallel trajectory

Gardner: Where is the societal change potential in terms of what we can do with this information and these technologies?

Worthey: Genomics has existed for maybe 20 years, but the vast majority of that was the first step. Over the last six years, we've taken maybe the second or third step in a journey that’s thousands of steps long.

We're right on the edge. We didn’t used to be able to do this, because we didn't have any data. We didn't have the capability to sequence a genome cheaply enough to sequence lots. We also didn't have the storage capabilities to store that data, even if we could produce it, and we certainly didn't have enough compute to do the analysis, infrastructure-wise. On top of that, we didn’t actually have the analytical know-how or capabilities either. All of that is really coalescing at the same time.
As genomics has come up on the technology and sequencing side, the compute and computing technologies have come up at the same time. They're feeding each other, and genomics is now driving IT to think about things in a very different way.

Gardner: Let's dive into that a little bit. What are the hurdles technologically for getting to where you want to be, and how do you customize that or need to customize that, for your particular requirements?

Worthey: There are a number of hurdles. Certainly, there are simpler hurdles that we have to get past, like storage, and storage tied with compression: How do you compress that data to where you can store millions of genomes at a price that's affordable?
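For a sense of scale on that storage hurdle: a bare genome sequence can in principle be encoded in about two bits per nucleotide, although real sequencing outputs with reads, quality scores, and alignments are far larger. The arithmetic below is an editorial lower-bound illustration only, not a description of HudsonAlpha's actual storage footprint.

```python
# Lower-bound storage arithmetic for genome sequences; an editorial illustration.
# Real sequencing files (FASTQ/BAM with quality scores) are much larger than this floor.
BASES_PER_GENOME = 3_200_000_000   # ~3.2 billion nucleotides, as cited above
BITS_PER_BASE = 2                  # A, C, G, T fit in two bits each

bytes_per_genome = BASES_PER_GENOME * BITS_PER_BASE / 8
gb_per_genome = bytes_per_genome / 1e9
print(f"One genome, 2-bit encoded: ~{gb_per_genome:.1f} GB")   # ~0.8 GB

total_gb = gb_per_genome * 1_000_000   # one million genomes
total_pb = total_gb / 1_000_000        # decimal GB -> PB
print(f"One million genomes at that floor: ~{total_pb:.1f} PB")
# Even the theoretical floor for a million genomes is close to a petabyte,
# and compressed real-world files multiply that several-fold.
```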

A bigger hurdle is the ability to query information at a lot of disparate sites. When we think about genomic medicine, one of the things that we really want do is share data between institutions that are geographically diverse. And the data that we want to share is millions of data points, each of which has hundreds or thousands of annotations or curations.

Those are fairly complex queries, even when you're doing it in one site, but in order to really change the practice of medicine, we have to be able to do that regionally, nationally, and globally. So, the analytics questions there are large.

We have 3.2 billion data points for each individual. The data is quite broad, but it’s also pretty deep. One of the big problems is that we don’t have all the data that we need to do genomic medicine. There's going to be data mining -- generate the data, form a hypothesis, look at the data, see what you get, come back with a new hypothesis, and so on.

Finally, one of the problems that we have is that a lot of the algorithms you might use exist only in the brains of MDs, other clinical folks, or researchers. There is really a lot of human-computer interaction work to be done so that we can extract that knowledge.

There are lots of problems. Another big problem is that we really want to put this knowledge in the hands of the doctor while they have seven minutes to see the patient. So, it’s also delivery of answers at that point in time, and the ability to query the data by the person who is doing the analysis, which ideally will be an MD.

Cloud technology

Gardner: Interestingly, the emergence of cloud methods and technology over the past five or 10 years would address some of those issues about distributing the data effectively -- and also perhaps getting actionable intelligence to a physician in an actual critical-care environment. How important is cloud to this process and what sort of infrastructure would be optimal for the types of tasks that you have in mind?

Worthey: If you had asked me that question two years ago, on the genomic medicine side, I would have said that cloud isn't really part of the picture. It wasn't part of the picture for anything other than business reasons. There were a lot of questions around privacy and sharing of healthcare information, and hospitals didn’t like the idea.

They're very reluctant to move to the cloud. Over the last two years, that has started to change. Enough of them had to decide to do it, before everybody would view it as something that was permissible.

Cloud is absolutely necessary in many ways, because we have periods where lots of data has to be computed and analytics have to be run. Then, we have periods where new information is coming off the sequencer. So, it's that perfect crest and trough.

If you don't have the ability to deal with that sort of fluctuation, if you buy a certain amount of hardware and you only have it available in-house, your pipeline becomes impacted by the crests and then often sits idle for a long time.
But it’s also important to have stuff in-house, because sometimes, you want to do things in a different way. Sometimes, you want to do things in a more secure manner.

It's kind of a poster child for many of the new technologies that are coming out that address both of those -- that allow you to run things in-house and also run the same jobs on the same data in the cloud. So, it's key.
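
As a hedged sketch of that crest-and-trough pattern, the snippet below decides whether a given analysis job runs on fixed in-house capacity or bursts to a cloud pool. The thresholds, the Job fields, and the run_on_premises/run_in_cloud helpers are assumptions made up for illustration, not a description of any particular pipeline.

```python
# Illustrative scheduler for bursty sequencing workloads: keep sensitive or
# routine jobs in-house, overflow the rest to cloud capacity during crests.
from dataclasses import dataclass

IN_HOUSE_SLOTS = 64          # assumed fixed on-premises capacity
BURST_QUEUE_THRESHOLD = 100  # assumed queue depth that triggers bursting

@dataclass
class Job:
    sample_id: str
    sensitive: bool  # e.g., data that policy says must stay in-house
    cpu_hours: int

def run_on_premises(job):
    print(f"[on-prem] {job.sample_id}: {job.cpu_hours} CPU-hours")

def run_in_cloud(job):
    print(f"[cloud]   {job.sample_id}: {job.cpu_hours} CPU-hours")

def dispatch(queue, busy_slots):
    """Send each job in-house when there is room or when policy requires it;
    otherwise burst it to the cloud during a crest in demand."""
    for job in queue:
        crest = len(queue) > BURST_QUEUE_THRESHOLD or busy_slots >= IN_HOUSE_SLOTS
        if job.sensitive or not crest:
            run_on_premises(job)
            busy_slots += 1
        else:
            run_in_cloud(job)

if __name__ == "__main__":
    demo = [Job("S001", sensitive=True, cpu_hours=48),
            Job("S002", sensitive=False, cpu_hours=120)]
    dispatch(demo, busy_slots=60)
```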

Gardner: That brings me to the next question about this concept of genomics as a service or a platform to support genomics as a service. How do you envision that and how might that come about?

Worthey: When we think about the infrastructure to support that, it has to be something flexible and it has to be provided by organizations that are able to move rapidly, because the field is moving really quickly.

It has to be infrastructure that supports this hypothesis-driven research, and it has to be infrastructure that can deal with these huge datasets. Much of the data is ordered, organized, and well-structured, but because it's healthcare, a lot of the information that we use as part of the interpretation phase of genomic medicine is completely unstructured. There needs to be support for extraction of data from silos.

My dream is that the people who provide these technologies will also help us deal with some of these boundaries, the policy boundaries, to sharing data, because that’s what we need to do for this to become routine.

Data and policy

Gardner: We've seen some of that when it comes to other forms of data, perhaps in the financial sector. More and more, we're seeing tokenization, authentication, and encryption, where data can exist for a period of time with a certain policy attached to it, and then something happens to that data as a result of the policy. Is that what you're referring to?

Worthey: Absolutely. It's really interesting to come to a meeting like HPE Discover because you get to see what everybody else is doing in different fields. Much of the things that people in my field have regarded as very difficult are actually not that hard at all; they happen all the time in other industries.

A lot of this -- the encryption, the encrypted data sharing, the ability to set access controls in a particular way that only lasts for a certain amount of time for a particular set of users -- seems complex, but it happens all the time in other fields. A big part of this is talking to people who have a lot of experience in a regulated environment, just not this regulated environment, learning the language they use to talk to the people who set policy there, transferring that to our policy makers, and ideally getting them together to talk to one another.

Gardner: Liz, you mentioned the interest in getting your requirements to the technology vendors, cloud providers, and network providers. Is that under way, or is that something that's yet to happen? Where is the synergy between the genomic research community and the technology vendor and platform provider community?

Worthey: This is happening fast. For genomics, there's been a shift in the volume of genomic data that we can produce with some new sequencing technology that's coming. If you're a provider of hardware, software, or services to deal with big data, you're looking at genomics, because the people here are probably going to overtake many of those other industries in terms of the volume and complexity of the data that we have.

The reason that's really interesting is that you then get invited to come and talk at forums where there are lots of technology companies; you make them aware of the work that has to be done in the field of medicine and in genomic research, and then you can start having those discussions.

A lot of the things that those companies are already doing, the use cases, are similar and maybe need some refinement, but a lot of that capability is already there.

Gardner: It's interesting that you’ve become sort of the “New York” of use cases. If you can make it there, you can make it anywhere. In other words, if we can solve this genomic data issue and use the cloud fruitfully to distribute and gather -- and then control and monitor the data as to where it should be under what circumstances -- we can do just about anything.

Correct me if I am wrong, though. We're using data in the genomic sense for population groups, and we're winnowing those groups down into particular diseases. How far-fetched is it to think about individuals having their own genomic database that would follow them, like an authenticated human design? Is that completely out of bounds? How far off might that be?

Technology is there

Worthey: I’ve had my genome sequenced, and it’s accessible. I could pick it up and look at it on the tools that I developed through my phone sitting here on the table. In terms of the ability to do that, a lot of that technology is already here.

The number of people that are being sequenced is increasing rapidly. We're already using genomics to make diagnosis in patients and to understand their drug interactions. So, we are here.

One of the things that we're talking about just now is at what point in a person's life you should sequence their genome. I, and a number of other people in the field, believe that it should be earlier rather than later -- before they get sick. Then, we have that information to use when those first symptoms appear; you're not waiting until they're really ill before you do that.

I can’t imagine a future where that's not what's going to happen, and I don’t think that future is too far away. We're going to see it in our lifetimes, and our children are definitely going to see it in theirs.

Gardner: The inhibitors, though, would be more of an ethical nature, not a technological nature.

Worthey: And policy, and society; the society impact of this is huge.

The data that we already have, clinical information, is really for that one person, but your genome is shared among your family, even distant relatives that you’ve never met. So, when we think about this, there are many very hard ethical questions that we have to think about. There are lots of experts that are working on that, but we can’t let that get in the way of progress. We have to do it. We just have to make sure we do it right.

Gardner: To come back down a little bit toward the technology side of things, seeing as so much progress has been made and that there is the tight relationship between information technology and some of the fantastic things that can happen with the proper knowledge around genomic information, can you describe the infrastructure you have in place? What’s working? What do you use for big-data infrastructure, and cloud or hybrid cloud as well?

Worthey: I'm not on the IT side, but I can tell you about the other side and I can talk a little bit on the IT side as well. In terms of the technologies that we use to store all of that varying information, we're currently using Hadoop and MongoDB. We finished our proof of concept with HPE, looking at their Vertica solution.

We have to work out what the next steps might be for our proof of concept. Certainly, we're very interested in looking at the solutions that they have here; they fit our needs. The issue being addressed on that side is lots of variants and complex queries that you need to answer really fast.
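
To give a flavor of "lots of variants and complex queries that you need to answer really fast," here is a minimal sketch of the kind of analytical query a columnar store such as Vertica is designed for, issued through a generic DB-API connection. The variants and annotations tables and their columns are invented for illustration.

```python
# Sketch of an analytical variant query against a columnar store.
# The schema (variants, annotations) is a hypothetical example.
QUERY = """
SELECT v.chrom, v.pos, v.ref, v.alt, a.gene, a.consequence,
       COUNT(DISTINCT v.sample_id) AS carriers
FROM variants v
JOIN annotations a
  ON a.chrom = v.chrom AND a.pos = v.pos AND a.alt = v.alt
WHERE a.gene = %s
  AND v.allele_frequency < %s
  AND a.consequence IN ('missense', 'stop_gained', 'frameshift')
GROUP BY v.chrom, v.pos, v.ref, v.alt, a.gene, a.consequence
ORDER BY carriers DESC
LIMIT 100
"""

def rare_damaging_variants(connection, gene, max_af=0.001):
    """Run the query on any DB-API 2.0 connection whose driver uses %s placeholders."""
    cur = connection.cursor()
    try:
        cur.execute(QUERY, (gene, max_af))
        return cur.fetchall()
    finally:
        cur.close()
```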

On the other side, one of the technological hurdles that we have to meet is the unstructured data. We have electronic health record (EHR) information coming in. We want to hook up to those EHRs and use systems to process that data and organize it, so that we can use it for the interpretation part.

In-house solution

We developed in-house solutions that we're using right now that allow humans to come in, look at that data, and select the terms from it. So, you'd select disease terms. And then, we have in-house solutions to map them to the genomic side. We're looking at things like HPE's IDOL as a proof-of-concept (POC) on that side. We're talking to some EHR companies about how to hook the EHRs up to those solutions and to our software, to make it a seamless product that would give us all of that.
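
As a loose illustration of that term-extraction step (not the institute's actual software), the sketch below scans free-text clinical notes for phrases from a small, invented phenotype dictionary and maps each hit to a code that downstream interpretation could consume. Real systems also need negation and context handling, which is omitted here.

```python
# Toy extraction of phenotype terms from unstructured clinical notes.
# The dictionary and codes are invented placeholders, not a real ontology.
import re

PHENOTYPE_DICTIONARY = {
    "seizure": "PHENO:0001250",
    "developmental delay": "PHENO:0001263",
    "hearing loss": "PHENO:0000365",
    "cardiomyopathy": "PHENO:0001638",
}

def extract_phenotype_codes(note_text):
    """Return the set of phenotype codes whose terms appear in the note."""
    found = set()
    lowered = note_text.lower()
    for term, code in PHENOTYPE_DICTIONARY.items():
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.add(code)
    return found

if __name__ == "__main__":
    note = "Patient presents with developmental delay and had a seizure at age two."
    print(extract_phenotype_codes(note))
```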

In terms of hardware, we do have HPE hardware in-house. I think we have 12 petabytes of their storage. We also have DataDirect Networks hardware, a general parallel file system solution. We even have things down to graphics processing units (GPUs) for some of the analysis that we do. We have a large deck of GPUs, because in some cases they're much faster for other types of problems that we have to solve. So we're pretty IT-rich, with a lot of heavy investment on the IT side.

Gardner: And cloud -- any preference to the topology that works for you architecturally for cloud, or is that still something you are toying with?

Worthey: We're currently looking at three different solutions that are all cloud solutions. We not only do the research and the clinical, but we also have a lab that produces lots of data for other customers, a lab that produces genomic data as a service.

They have a challenge of getting that amount of data returned to customers in a timely fashion. So, there are solutions that we're looking at there. There are also, as we talked about at the start, solutions to help us with the inflow of data coming off the sequencers and the compute -- and so we're looking at a number of different cloud-based solutions to solve some of those challenges.

Gardner: Before we close, we've talked about healthcare and population impacts, but I should think there's also a commercial aspect to this. That kind of information will lend itself to entrepreneurial activities -- products and services in great demand in the marketplace. Is that something you're involved with as well, and wouldn't that help foot the bill for some of these costly IT infrastructure investments?

Worthey: The HudsonAlpha Institute was set up with just that model. We have a not-for-profit research side, but we also have a number of affiliate companies that are for-profit, where intellectual property and ideas can go across to that side and be used to generate revenue that funds the research and keeps us moving and on the cutting edge.

We do have a services lab that does genomic sequencing and analytics; you can order that from them. We also serve a lot of people who have government contracts for this type of work. And then, we have an entity called Envision Genomics. For disclosure, I'm one of the founders of that entity. It's focused on empowering people to do genomic medicine and on working with lots of different solution providers to get genomic medicine done everywhere it's applicable.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Cybersecurity crosses the chasm: How IT now looks to the cloud for best security

Cybersecurity crosses the chasm: How IT now looks to the cloud for best security

The next BriefingsDirect cybersecurity innovation and transformation panel discussion explores how cloud security is rapidly advancing, and how enterprises can begin to innovate and prevail over digital disruption by increasingly using cloud-defined security.

We'll examine how a secure content collaboration services provider removes the notion of organizational boundaries so that businesses can better extend processes. And we'll hear how less boundaries and cloud-based security together support transformative business benefits.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To share how security technology leads to business innovations, we're joined by Daren Glenister, Chief Technology Officer at Intralinks in Houston, and Chris Steffen, Chief Evangelist for Cloud Security at HPE. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Daren, what are the top three trends driving your need to extend security and thereby preserve trust with your customers?

Glenister: The top thing for us is the speed of business -- people being able to do business beyond boundaries, and how they can enable the business rather than just protect it. In the past, security has always been about how we shut things down and stop data from moving. But now it's about how we do it all securely and how we do business outside of the organization. So, it's enabling business.

The second thing we've seen is compliance. Compliance is a huge issue for most of the major corporations. You have to be able to understand where the data is and who has access to it, and to know who's using it and make sure that they can be completely compliant.

The third thing is primarily the shift between security inside and outside of the organization. It's been a fundamental shift for us. We've seen security move from people trusting their own infrastructure to using a third party who can provide that security to a far higher standard, because that's what they do all day, every day. That security shift from on-premises to the cloud is the third big driver for us, and we've seen it in the market.

Gardner: You're in a unique position to be able to comment on this. Tell us about Intralinks, what the company does, and why security at the edge is part of your core competency.

Secure collaboration

Glenister: We're a software-as-a-service (SaaS) provider and we provide secure collaboration for data, wherever that data is, whether it’s inside a corporation or it’s shared outside. Typically, once people share data outside, whether it’s through e-mail or any other method, some of the commercial tools out there have lost control of that data.

We have the ability to actually lock that data down, control it, and put the governance and compliance around it to secure it -- to know where the high-value intellectual property (IP) is and who has access to it, and still be able to share. And, if you're at risk of losing data, you can revoke access for someone who has left the organization.

Gardner: And these are industries that have security as a paramount concern. So, we’re talking about finance and insurance. Give us a little bit more indication of the type of data we’re talking about.

Glenister: It's anybody with high-value IP or compliance requirements -- banking, finance, healthcare, life sciences, for example, and manufacturing. Even when you’re looking at manufacturing overseas and you have IP going over to China to manufacture your product, your plans are also being shared overseas. We've seen a lot of companies now asking how to protect those plans and therefore, protect IP.

Gardner: Chris, Intralinks seems to be ahead of the curve, recognizing how cloud can be an enabler for security. We're surely seeing a shift in the market, at least I certainly am. In the last six months or so, companies that were saying that security was a reason not to go to the cloud are now saying that security is a reason they're going to the cloud. They can attain security better. What's happened that has made that perspective flip?

Steffen: I don't know exactly what’s happened, but you're absolutely right; that flip is going on. We've done a lot of research recently and shown that when you’re looking at inherent barriers going to a cloud solution, security and compliance considerations are always right there at the top. We commissioned the study through 451 Research, and we kind of knew that’s what was going on, but they sure nailed it down, one and two, security and compliance, right there. [Get a copy of the report.]

Steffen: The reality, though, is that the C-table -- executives, IT managers, those types -- are starting to look at the massive burden of security and hoping to find help somewhere. They can look at a provider like Intralinks, they can look at a provider like HPE, and ask, "How can they help us meet our security requirements?"

They can’t just third-party their security requirements away. That’s not going to cut it with all the regulators that are out there, but we have solutions. HPE has a solution, Intralinks has solutions, a lot of third-party providers have solutions that will help the customer address some of those concerns, so those guys can actually sleep at night.

Gardner: We're hearing so much about digital disruption in so many industries, and we're hearing about why IT can't wait: IT needs to be agile and change the business model to appeal to customers and improve their user experience.

It seems that security concerns have been a governor on that. "We can't do this because 'blank' security issue arises." It seems to me that it's a huge benefit when you can come to them and say, "We're going to allow you to be agile. We're going to allow you to fight back against disruption because security can, in fact, be managed." How far are we from converting security concerns into an enabler when you go to the cloud?

Very difficult

Glenister: The biggest thing for most organizations is that they're large, and it's very difficult to transform the legacy systems and processes that are in place. It's very difficult for organizations to change quickly. To actually drive that, they have to look at alternatives, and that's why a lot of people move to the cloud. What's driving the move to the cloud is, "Can we quickly enable the business? Can we quickly provide those solutions, rather than having to spend 18 months trying to change our process and spend millions of dollars doing it?"

Enablement of the business is actually driving the need to go to the cloud, and obviously will drive security around that. To Chris’s point a few minutes ago, not all vendors are the same. Some vendors are in the cloud and they're not as secure as others. People are looking for trusted partners like HPE and Intralinks, and they are putting their trust and their crown jewels, in effect, with us because of that security. That’s why we work with HPE, because they have a similar philosophy around security as we do, and that’s important.

Steffen: The only thing I would add to that is that security is not only a concern of the big business or the small business; it’s everybody’s concern. It’s one of those things where you need to find a trusted provider. You need to find that provider that will not only understand the requirements that you're looking for, but the requirements that you have.

This is my opinion, but when you're kicking tires and looking at your overall compliance infrastructure, there's a pretty good chance you had to have that compliance for more than a day or two. It’s something that has been iterative; it may change, it may grow, whatever.

So, when you're looking at a partner, a lot of different providers will start to at least try to ensure that you don’t start at square-one again. You don’t want to migrate to a cloud solution and then have all the compliance work that you’ve done previously just wiped away. You want a partner that will map those controls and that really understands those controls.

Perfect examples are in the financial services industry. There are 10 or 11 regulatory bodies that some of the biggest banks in the world all have to be compliant with. It’s extremely complicated. You can’t really expect that Big Bank 123 is going to just throw away all that effort, move to whatever provider, and hope for the best. Obviously, they can’t be that way. So the key is to take a map of those controls, understand those controls, then map those controls to your new environment.
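
A minimal sketch of that control-mapping idea: record each internal control once, note which regulations it satisfies, and check the same mapping against what a candidate provider attests to. The control names, regulations chosen, and attestations below are illustrative assumptions only.

```python
# Toy control-mapping check: which regulatory obligations are not covered by
# controls the provider attests to? All identifiers here are illustrative.
INTERNAL_CONTROLS = {
    "encrypt-at-rest": {"regs": {"GLBA", "PCI-DSS"},
                        "provider_control": "AES-256 storage encryption"},
    "access-review":   {"regs": {"SOX", "GLBA"},
                        "provider_control": "quarterly access recertification"},
    "audit-logging":   {"regs": {"PCI-DSS", "SOX"},
                        "provider_control": "immutable audit log export"},
}

PROVIDER_ATTESTED = {"AES-256 storage encryption", "immutable audit log export"}

def coverage_gaps():
    """Return the regulations whose supporting control the provider does not attest to."""
    gaps = set()
    for control in INTERNAL_CONTROLS.values():
        if control["provider_control"] not in PROVIDER_ATTESTED:
            gaps.update(control["regs"])
    return gaps

if __name__ == "__main__":
    print("Regulations needing compensating controls:", coverage_gaps())
```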

Gardner: Let’s get into a little bit of the how ... How this happens. What is it that we can do with security technology, with methodologies, with organizations that allow us to go into cloud, remove this notion of a boundary around your organization and do it securely? What’s the secret sauce, Daren?

Glenister: One of the things for us, being a cloud vendor, is that we can protect data outside. We have the ability to actually embed the security into documents wherever documents go. Instead of just having the control of data at rest within the organization, we have the ability to actually control it in motion inside and outside the perimeter.

You have the ability to control that data, and if you think about sharing with third parties, quite often people say, "We can’t share with a third-party because we don’t have compliance, we don’t have a security around it." Now, they can share, they can guarantee that the information is secure at rest, and in motion.

Typically, if you look at most organizations, they have data at rest covered. Those systems and procedures are relatively child's play, and they've been in place for many years. The new challenge is data in motion: how do you extend working with third parties and with outside organizations?
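
To illustrate the general pattern being described (not Intralinks' actual product), the sketch below wraps a document in envelope encryption and keeps the key plus an access policy on a store the owner controls; letting the policy expire or removing a user effectively revokes access to every copy in motion. It assumes the widely used Python cryptography package, and the in-memory policy store stands in for a real key service.

```python
# Sketch of policy-controlled document sharing: the ciphertext can travel
# anywhere, but reading it requires the policy/key store to approve the
# request. Illustrative pattern only, not any vendor's implementation.
import time
from cryptography.fernet import Fernet

POLICY_STORE = {}  # doc_id -> {"key": bytes, "allowed": set, "expires": float}

def protect(doc_id, plaintext, allowed_users, ttl_seconds):
    """Encrypt a document and register its key plus an access policy."""
    key = Fernet.generate_key()
    POLICY_STORE[doc_id] = {
        "key": key,
        "allowed": set(allowed_users),
        "expires": time.time() + ttl_seconds,
    }
    return Fernet(key).encrypt(plaintext)

def open_document(doc_id, user, ciphertext):
    """Decrypt only if the policy still allows this user; otherwise refuse."""
    policy = POLICY_STORE.get(doc_id)
    if not policy or user not in policy["allowed"] or time.time() > policy["expires"]:
        raise PermissionError("access revoked, expired, or never granted")
    return Fernet(policy["key"]).decrypt(ciphertext)

def revoke_user(doc_id, user):
    """Immediately cut off someone who has left the organization."""
    POLICY_STORE[doc_id]["allowed"].discard(user)

if __name__ == "__main__":
    blob = protect("deal-123", b"term sheet draft", {"alice", "bob"}, ttl_seconds=3600)
    print(open_document("deal-123", "alice", blob))
    revoke_user("deal-123", "bob")
```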

Innovative activities

Gardner: It strikes me that we're looking at these capabilities through the lens of security, but isn't it also the case that this enables entirely new innovative activities? When you can control your data -- when you can extend where it goes, for how long, to certain people, under certain circumstances -- we're applying policies and bringing intelligence to a document, to a piece of data, not just securing it but getting control over it and extending its usefulness. So why would companies not recognize that security-first brings larger business benefits that extend for years?

Glenister: Historically, security has always been, "No, you can't do this, let's stop." If you look at a finance environment, it's stop using thumb drives, stop using email, stop using anything, rather than offering an easy solution. We've seen a transition over the last six months, where people are starting to say, "How do we enable? How do we give people control?" As a result of that, you see new solutions coming out of organizations and how they can impact the bottom line.

Gardner: Behavior modification has always been a big part of technology adoption. Chris, what is it that we can do in the industry to show people that being secure, and extending that security to wherever the data is going to go, gives us much more opportunity for innovation? To me this is a huge enticing carrot that I don't think people have fully grokked.

Steffen: Absolutely. And the reality of it is that it's an educational process. One of the things that I've been doing for quite some time now is trying to educate people. I can talk with a fellow CISSP about Diffie-Hellman key exchange, and I promise that your CEO does not care, and he shouldn't. He shouldn't ever have to care. That's not something that he needs to care about, but he does need to understand total cost of ownership (TCO) and return on investment (ROI). He needs to be able to go to bed at night understanding that his company is going to be okay when he wakes up in the morning and that his company is secure.

It’s an iterative process; it’s something that they have to understand. What is cloud security? What does it mean to have defense in depth? What does it mean to have a matured security policy vision? Those are things that really change the attitudinal barriers that you have at a C-table that you then have to get past.

Security practitioners, those tinfoil hat types -- I classify myself as one of those people, too -- truly believe that they understand how data security works and how the cloud can be secured, and they already sleep well at night. Unfortunately, they're not the ones who are writing the checks.

It's really about shifting that paradigm of education from the practitioner level, where they get it, up to the CIO and the CISO, who hopefully understand, and then up to the C-table and the CFO -- making certain that they can understand and write that check, confident that going to a cloud solution will allow them to sleep at night and allow the company to innovate. They'll then take security as an enabler to move the business forward.
Gardner: So, perhaps it’s incumbent upon IT and security personnel to start to evangelize inside their companies as to the business benefits of extended security, rather than the glass is always half empty.

Steffen: I couldn’t agree more. It’s a unique situation. Having your -- again, I'll use the term -- tinfoil hat people talking to your C-table about security -- they're big and scary, and so on. But the reality of it is that it really is critically important that they do understand the value that security brings to an organization.

Going back to our original conversations, in the last 6 to 12 months, you're starting to see that paradigm shift a little bit, where C-table executives aren't satisfied with check-box compliance. They want to understand what it takes to be secure, and so they have experts in-house and they want to understand that. If they don't have experts in-house, there are third-party partners out there that can provide that education.

Gardner: I think it’s important for us to establish that the more secure and expert you are at security the more of a differentiator you have against your competition. You're going to clean up in your market if you can do it better than they can.

Step back

Steffen: Absolutely, and even bring that a step further back. People have been talking for two decades now about technology as a differentiator and how you can make a technical decision or embrace and exploit technology to be the differentiator in your vertical, in your segment, so on.

The credit reporting agency that I worked for a long time ago was one of those innovators, and people thought we were nuts for doing some of the stuff that we were doing. Years later, everybody is doing the same thing.

It really can set up those things. Security is that new frontier. If you can prove that you're more secure than the next guy, that your customer data is more secured than the next guy, and that you're willing to protect your customers more than the next guy, maybe it’s not something you put on a billboard, but people know.

Would you go to retailer A, which has had a credit card breach, or do you decide to go to retailer B? It's not a straw man. Talk to Target, talk to Home Depot, talk to some of these big-box stores that have had breaches, and ask how their numbers looked after they had to announce a breach.

Gardner: Daren, let’s go to some examples. Can think of an example of IntraLinks and a security capability that became a business differentiator or enable?

Glenister: Think about banks at the moment, where they're working with customers. There's a drive for security. Security people have always known about security and how they can enable and protect the business.

But what’s happening is that the customers are now more demanding because the media is blowing up all of the cyber crimes, threats, and hacks. The consumer is now saying they need their data to be protected.

A perfect example is my daughter, who was applying for a credit card recently. She's going off to college. They asked her to send a copy of her passport, Social Security card, and driver’s license to them by email. She looked at me and said, "What do you think?" It's like, "No. Why would you?"

People have actually voted, saying they're not going to do business with that organization. If you look at finance organizations now, banks and credit-card companies are looking at how to engage with the customer and show that they're securing and protecting customer data, to enable new capabilities like loan or credit-card applications -- because customers can vote with their feet and choose not to do business with you.

So, it’s become a business-enabler to say we're protecting your data and we have your concerns at heart.

Gardner: And it’s not to say that that information shouldn’t be made available to a credit card or an agency that’s ascertaining credit, but you certainly wouldn’t do it through email.

Insecure tool

Glenister: Absolutely, because email is the biggest sharing tool on the planet, but it’s also one of the most insecure tools on the planet. So, why would you trust your data to it?

Steffen: We've talked about security awareness, the security awareness culture, and security awareness programs. If you have a vendor management program, or you're subject to vendor management from some other entity, one of the things they will also request is that you have a security awareness program.

Even five to seven years ago, people looked at that as drudgery. It was the same thing as all the other nonsensical HR training that you have to look at. Maybe, to some extent, it still is, but the reality is that when I've given those programs before, people are actually excited. It's not only because you get the opportunity to understand security from a business perspective, but a good security professional will then apply that to, "By the way, your email is not secured here, but your email is not secured at home, too. Don’t be stupid here, but don’t be stupid there either."

We're going to fix the router passwords here; you don't need to worry about that. But if you have a home router, change the default password. Those sound like very simple, straightforward things, but when you share that with your employees and you build that culture, not only do you have more secure employees, but the culture of your business and the culture of security change.

In effect, what’s happening is that you'll finally be getting to see that translate into stuff going on outside of corporate America. People are expecting to have information security parameters around the businesses that they do business with. Whether it's from the big-box store, to the banks, to the hospitals, to everybody, it really is starting to translate.

Glenister: Security is a culture. I look at a lot of companies for whom we do a once-a-year certification or attestation -- an online test. People click through it, some may have a test at the end, they answer the questions, and that's it, they're done. It's nice, but it has to be a year-round, day-to-day culture, with every organization understanding the implications of security and the risk associated with it.

If you don’t do that, if you don’t embed that culture, then it becomes a one-time entity and your security is secure once a year.

Steffen: We were talking about this before we started. I'm a firm believer in security awareness. One of the things that I've always done is take advantage of these pretend Hallmark holidays. The latest one was Star Wars Day. Nearly everybody has seen Star Wars or certainly heard of Star Wars at some point or another, and you can’t even go into a store these days without hearing about it.

For Star Wars Day, I created a blog to talk about how information-security failures led to the downfall of the Galactic Empire.
It was a fun blog. It wasn't supposed to be deadly serious, but the kicker is that we talked about key information security points. You use that holiday to get people engaged with what's going on and educate them on some key concepts of information security and accidentally, they're learning. That learning then comes to the next blog that you do, and maybe they pay a little bit more attention to it. Maybe they pay attention to simply piggybacking through the door and maybe they pay attention to not putting something in an e-mail and so on.

It's still a little iterative thing; it’s not going to happen overnight. It sounds silly talking about information security failures in Star Wars, but those are the kind of things that engage people and make people understand more about information security topics.

Looking to the future

Gardner: Before we sign off, let’s put on our little tinfoil hat with a crystal ball in front. If we've flipped in the last six months or so, people now see the cloud as inherently more secure, and they want to partner with their cloud provider to do security better. Let’s go out a year or two, how impactful will this flip be? What are the implications when we think about this, and we take into consideration what it really means when people think that cloud is the way to go to be secure on the internet?

Steffen: The one that immediately comes to mind for me -- Intralinks is actually starting to do some of this -- is you're going to see niche cloud. Here's what I mean by niche cloud. Let’s just take some random regulatory body that's applicable to a certain segment of business. Maybe they can’t go to a general public cloud because they're regulated in a way that it's not really possible.

What you're going to see is a cloud service that basically says, "We get it, we love your type, and we're going to create a cloud. Maybe it will cost you a little bit more to do it, but we understand from a compliance perspective the hell that you are going through. We want to help you, and our cloud is designed specifically to address your concerns."

When you have niche cloud, all of a sudden, it opens up your biggest inherent barriers. We’ve already talked about security. Compliance is another one, and compliance is a big fat ugly one. So, if you have a cloud provider that’s willing to maybe even assume some of the liability that comes with moving to their cloud, they're the winners. So let’s talk 24 months from now. I'm telling you that that’s going to be happening.

Gardner: All right, we'll check back on that. Daren, your prediction?

Glenister: You're going to see a shift that we're already seeing, and Chris will probably see this as well. It's a shift from discussions around security to discussions around transformation. You definitely see security now transforming business, enabling businesses to do things and interact with their customers in ways they've never done before.

You'll see that impact in two ways. One is going to be new business opportunities, so revenue coming in, but it's also going to streamline internal processes, making things easier to do internally. You'll see a transformation of the business inside and outside. That's going to drive a lot of new opportunities, new capabilities, and innovations we've never seen before.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How software-defined storage translates into just-in-time data center scaling and hybrid IT benefits

How software-defined storage translates into just-in-time data center scaling and hybrid IT benefits

The next BriefingsDirect Voice of the Customer case study examines how hosting provider Opus Interactive adopted a software-defined storage approach to better support its thousands of customers.

We'll learn how scaling of customized IT infrastructure for a hosting organization in a multi-tenant environment benefits from flexibility of modern storage, unified management, and elastic hardware licensing. The result is gaining the confidence that storage supply will always meet dynamic hybrid computing demand -- even in cutting-edge hosting environments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To describe how massive storage and data-center infrastructure needs can be met in a just-in-time manner, we're joined by Eric Hulbert, CEO at Opus Interactive in Portland, Oregon. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What were the major drivers when you decided to re-evaluate your storage, and what were the major requirements that you had?

Hulbert: Our biggest requirement was high-availability in multi-tenancy. That was number one, because we're service providers and we have to meet the needs of a lot of customers, not just a single enterprise or even enterprises with multiple business groups.

So we were looking for something that met those requirements. Cost was a concern as well. We wanted it to be affordable, but we needed it to be enterprise-grade with all the appropriate feature sets -- and, most importantly, a scale-out architecture.

We were tired of the monolithic controller-bound SANs, where we'd have to buy a specific bigger size. We'd start to get close to where the boundary would be and then we would have to do a lift-and-shift upgrade, which is not easy to do with almost a thousand customers.

Ultimately, we made the choice to go with one of the first software-defined storage architectures, from a company called LeftHand Networks, later acquired by Hewlett Packard Enterprise (HPE), and then some 3PAR equipment, also acquired by HPE. Those were, by far, the biggest factors when we made that selection on our storage platform.

Gardner: Give us a sense of the scale-out requirements.

Hulbert: We have three primary data centers in the Pacific Northwest and one in Dallas, Texas. We also have the ability for a little bit of space in New York, for some of our East Coast customers, and one in San Jose, California. So, we have five data centers in total.

Gardner: Is there a typical customer, or a wide range of customers?

Big range

Hulbert: We have a pretty big range. Our typical customers are in finance and travel and tourism, and the hospitality industries. There are quite a few in there. Healthcare is a growing vertical for us as well.

Then, we round that out with manufacturing and a little bit of retail. One of our actual verticals, if you can call it a vertical, is the MSPs, IT companies, and even some VARs that are moving into the cloud.

We enable them to do their managed services and be the "boots on the ground" for their customers. That spreads us into the tens of thousands of customers, because we have about 25 to 30 MSPs that work with us throughout the country, using our infrastructure. We just provide the infrastructure as a service, and that's been a growing vertical for us.

Gardner: And then, across that ecosystem, you're doing colocation, cloud hosting, managed services? What's the mix? What's the largest part of the pie chart in terms of the services you're providing in the market?

Hulbert: We're about 75 percent cloud hosting, specifically a VMware-based private cloud, a multi-tenant private cloud. It's considered public cloud, but we call it private cloud.

We do a lot of hybrid cloud, where we have customers that are bursting into Amazon or [Microsoft] Azure. So, we have the ability to get them either Amazon Direct Connect connections or Azure ExpressRoute connections into any of our data centers. Then, 20 percent is colocation, and about 5 percent for backup and disaster recovery (DR) rounds that out.

Gardner: Everyone, it seems, is concerned about digital disruption these days. For you, disruption is probably about not being able to meet demand. You're in a tight business, a competitive business. What’s the way that you're looking at this disruption in terms of your major needs as a business? What are your threats? What keeps you up at night?

Still redundant

Hulbert: Early on, we wanted a concurrently maintainable infrastructure, which also follows through to the data centers that we're in. So, we needed Tier 3-plus facilities that are concurrently maintainable, and we wanted the infrastructure to be the same. We're not kept up at night, because we can take an entire section of our solution offline for maintenance -- or it could be a failure -- and we're still redundant.

It's a little bit more expensive, but we're not trying to compete with the commodity hosting providers out there. We're very customized. We're looking for customers that need more of that high-touch level of service, and so we architect these big solutions for them -- and we host with a 100 percent up-time.

The infrastructure piece is scalable with scale-out architecture on the storage side. We use only HP blades, so that we just keep stacking in blades as we go. We try to stay a couple of blade chassis ahead, so that we can take pretty large bursts of that infrastructure as needed.

That's the architecture that I would recommend for other service providers looking for a way to make sure they can scale out and not have to do any lift-and-shift on their SAN, or even the stack and rack services, which take more time.

With rack servers, you have to cable all of them, versus one blade chassis where you can just slot in 16 blades quickly as you're scaling. That allows you to scale quite a bit faster.

Gardner: When it comes to making the choice for software-defined, what has that gotten you? I know people are thinking about that in many cases -- not just service providers, but enterprises. What did software-defined storage get for you, and are you extending your software-defined architecture to more parts of your infrastructure?

Hulbert: We wanted it to be software-defined because we have multiple locations and we wanted one pane of glass. We use HPE OneView to manage that, and it would be very similar for an enterprise. Say we have 30 remote offices where they want to put the equipment, and the business units need to provision some servers and storage. We want to be able to go to each individual appliance, chassis, or application from one place to provision it all.

Since we're dealing now with nearly a thousand customers -- and thousands and thousands of virtual servers, storage nodes, and all of that -- the chunklets of data are distributed across all of these. Being able to do that from one single pane of glass, from a management standpoint, is quite important for us.

So, it's that software-defined aspect, especially distributing the data into chunklets, that allows us to grow quicker and put a lot of automation on the back end.
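
As a rough sketch of that "single pane of glass" provisioning idea (not the actual HPE OneView API), the snippet below drives one hypothetical management endpoint to create a volume in whichever data center a customer needs, instead of logging in to each array individually. Every URL, path, and field is an assumption for illustration.

```python
# Illustrative provisioning call against a single management plane that
# fronts storage in several data centers. Endpoint, paths, and fields are
# hypothetical placeholders, not a vendor API.
from typing import Optional
import requests

MGMT_ENDPOINT = "https://mgmt.example.net/api/v1"
API_TOKEN = "REPLACE_ME"  # placeholder credential

def provision_volume(site: str, customer: str, size_gb: int,
                     replicate_to: Optional[str] = None) -> str:
    """Create a thin-provisioned volume at one site, optionally replicated to another."""
    payload = {
        "site": site,               # e.g., "portland" or "dallas"
        "customer": customer,
        "size_gb": size_gb,
        "thin_provisioned": True,
        "replicate_to": replicate_to,
    }
    resp = requests.post(
        f"{MGMT_ENDPOINT}/volumes",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["volume_id"]

if __name__ == "__main__":
    print("created", provision_volume("portland", "customer-042", 500, replicate_to="dallas"))
```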

We only have 11 system administrators and engineers on our team managing that many servers, which shows you that our density is pretty high. That only works well if we have really good management tools, and having it software-defined means fewer people walking to and from the data center.

Even though our data centers are manned facilities, our infrastructure is basically lights out. We do everything from remote terminals.

Gardner: And does this software-defined extend across networking as well? Are you hyper-converged, converged? How would you define where you're going or where you'd like to go?

Converged infrastructure

Hulbert: We're not hyper-converged. For our scale, we can’t get into the prepackaged hyper-converged product. For us, it would be more of a converged infrastructure approach.

As I said, we do use the c-Class blade chassis with Virtual Connect, which is software-defined networking. We do a lot of VLANs and things like that on the software side.

We still have some out-of-band networking outside of that -- the network stacks -- because we're not just a cloud provider. We also do colocation and a lot of hybrid computing, where people are connecting between them. So, we have to worry about Fibre Channel and iSCSI connections in the SAN.

That adds a couple of other layers and a few extra management steps, but at our scale, it's not like we're adding tens of thousands of servers a day, or even an hour, as I'm sure Amazon has to. So we can take that one small hit to pull that portion of the networking out, and it works pretty well for us.

Gardner: How do you see the evolution of your business in terms of moving past disruption, adopting these newer architectures? Are there types of services, for example, that you're going to be able to offer soon or in the foreseeable future, based on what you're hearing from some of the vendors?

Hulbert: Absolutely. One of the first ones I mentioned earlier was the ability for customers that want to burst into public cloud to be able to do Amazon Direct Connect. Even with the telecom providers, you're looking at 15 to 25 milliseconds of latency. For some of these applications, that's just too much latency, so it's not going to work.

Now, with the most recent announcement from Amazon, they put a physical Direct Connect node in Oregon, about a mile from our data-center facility. It's from EdgeConneX, who we partnered with.

Now, we can offer the lowest latency for both Amazon Direct Connect and Azure ExpressRoute in the Pacific Northwest, specifically in Oregon. That's really huge for our customers, because we have some that do a lot of public-cloud bursting on both platforms. So that's one new offering we're doing.

Disruption, as we've heard, is around containers. We're launching a new container-as-a-service platform later this year based on ContainerX. That will allow us to do containers for both Windows and *nix platforms, regardless of what the developers are looking for.

We're targeting developers, DevOps folks, who are looking to do microservices -- to take their application, old or new, and architect it into containers. That's going to be a very disruptive new offering. We've been working on the platform for a while now, because we have multiple locations and we can do the geographic dispersion for that.
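
For flavor, here is a minimal example of the kind of workflow a container platform exposes to DevOps teams: launching a microservice container through the Docker SDK for Python. The image name, port mapping, and restart policy are placeholders; a platform like the one described would layer multi-tenancy and geographic placement on top of calls like this.

```python
# Minimal illustration of launching a microservice container through the
# Docker SDK for Python; the image name and port mapping are placeholders.
import docker

def launch_microservice(image="example/catalog-service:latest", host_port=8080):
    client = docker.from_env()
    container = client.containers.run(
        image,
        detach=True,
        ports={"8080/tcp": host_port},          # expose the service on the host
        restart_policy={"Name": "on-failure"},  # restart if the service crashes
    )
    return container.short_id

if __name__ == "__main__":
    print("started container", launch_microservice())
```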

I think it’s going to take a little bit of the VMware market share over time. We're primarily a VMware shop, but I don’t think it’s going to be too much of an impact to us. It's another vertical we're going to be going after. Those are probably the two most important things we see as big disruptive factors for us.

Hybrid computing

Gardner: As an organization that's been deep into hybrid cloud and hybrid computing, is there anything out there in terms of the enterprises that you think they should better understand? Are there any sort of misconceptions about hybrid computing that you detect in the corporate space that you would like to set them straight on?

Hulbert: The hybrid that people typically hear about is more like having on-premises equipment. Let's say I'm a credit union, and at one of the bank branches we've decided to put three or four cabinets of our equipment in one of the vaults. Maybe they've added one UPS and one generator, but it's not at the enterprise level, and they're bursting to the public cloud for the things that make sense within their security requirements.

To me, that’s not really the best use of hybrid IT. Hybrid IT is where you're putting what used to be on-premises in an actual enterprise-level, Tier 3 or higher data center. Then, you're using either a form of bursting into private dedicated cloud from a provider in one of those data centers or into the public cloud, which is the most common definition of that hybrid cloud. That’s what I would typically define as hybrid cloud and hybrid IT.

Gardner: What I'm hearing is that you should get out of your own data center, use somebody else's, and then take advantage of the proximity in that data center, the other cloud services that you can avail yourself of.

Hulbert: Absolutely. The biggest benefit to them is at their individual locations or bank branches. This is the scenario where we use the credit union example. They're going to have maybe one or two telco providers, and those are going to be 100 or maybe 200 Mb-per-second circuits.

They're paying a premium for them, and when they get into one of these data centers, they're going to have the ability to have 10-gig, or even 40- or 100-gig, connected internet pipes, with a lot higher headroom for connectivity at a better price point.

On top of that, they'll have 10-gig connection options into the cloud -- all the different cloud providers. Maybe they have an Oracle stack that they want to put in an Oracle cloud someday, along with their own on-premises equipment. The hybrid piece gets more challenging, because now they're not going to get the connectivity they need. Maybe they want to get into software, they want to do Amazon or Azure, or maybe they want an Opus cloud.

They need faster connectivity for that, but they have equipment that still has usable life. Why not move that to an enterprise-grade data center and not worry about air conditioning challenges, electrical problems, or whether it's secure?

All of these facilities, including ours, check every box for the compliance and auditing that happens on an annual basis. Those things that used to be real headaches aren't core to their business; they don't have to deal with them anymore. They can focus on what's core -- the application and their customers.

Gardner: So proximity still counts, and probably will count for an awfully long time. You get benefits from taking advantage of proximity in these data centers, but you can still have, as you say, what you consider core under your control, under your tutelage and set up your requirements appropriately?

Mature model

Hulbert: It really comes down to the fact that the cloud model is very mature at this point. We’ve been doing it for over a decade. We started doing cloud before it was even called cloud. It was just virtualization. We launched our platform in late 2005 and it proved out, time and time again, with 100 percent up-time.

We have one example of a large customer, a travel and tourism operator, that brings visitors from outside the US to the US. They do over $1 billion a year in revenue, and we host their entire infrastructure.

It's a lot of infrastructure and it’s a very mature model. We've been doing it for a long time, and that helps them to not worry about what used to be on-premises for them. They moved it all. A portion of it is colocated, and the rest is all on our private cloud. They can just focus on the application, all the transactions, and ultimately on making their customers happy.

Gardner: Going back to the storage equation, Eric, do you have any examples of where the software-defined storage environment gave you the opportunity to satisfy customers or price points -- business or technical metrics that demonstrate how this new approach to storage fills out the cost equation?

Hulbert: In terms of the software-defined storage, the ability to easily provision the different sized data storage we need for the virtual servers that are running on that is absolutely paramount.

We need super-quick provisioning, so we can move things around. When you add in the layers of VMware, like storage vMotion, we can replicate volumes between data centers. Having that software-defined makes that very easy for us, especially with the built-in redundancy that we have and not being controller-bound like we mentioned earlier on.

Those are pretty key attributes, but on top of that, as customers grow, we can very easily add more volumes for them. Say they have a footprint in our Portland facility and want to add a footprint in our Dallas, Texas facility and do geographic load balancing. It makes it very easy for us to do the replication between the two facilities, slowly adding on those layers as customers need to grow. It makes that easy for them as well.

Gardner: One last question: what comes next in terms of containers? What we're seeing is that containers have a lot to do with developers and DevOps, but ultimately I'd think that the envelope gets pushed out into production, especially when you hear about things like composable infrastructure. If you've been composing infrastructure in the earlier part of the process, in development, it takes care of itself in production.

Do you see more of these trends accomplishing that, where production is lights-out like yours, and where more of the definition of infrastructure, applications, productivity, and capabilities happens in that development and DevOps stage?

Virtualization

Hulbert: Definitely. Over time, it is going to be very similar to what we saw when customers were moving from dedicated physical equipment into the cloud, which is really virtualization.

This is the next evolution, where we're moving into containers. At the end of the day, the developers, the product managers for the applications for whatever they're actually developing, don't really care what and how it all works. They just want it to work.

They want it to be a utility consumption-based model. They want the composable infrastructure. They want to be able to get all their microservices deployed at all these different locations on the edge, to be close to their customers.

Containers are going to be a great way to do that, because they take away the overhead of dealing with the operations side. So, they can just put these little APIs and the different things that they need where they need them. As we see more of that pushed to the edge to get close to the eyeball traffic, that's going to be a great way to do it. With the ability to burst even further into the bigger public clouds worldwide, I think we can get to a really large scale in a great way.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

 You may also be interested in:

How IT innovators turn digital disruption into a business productivity force multiplier

How IT innovators turn digital disruption into a business productivity force multiplier

The next BriefingsDirect business innovation thought leadership panel discussion examines how digital business transformation has been accomplished by several prominent enterprises. We'll explore how the convergence of cloud, mobility, and big-data analytics has prompted companies to innovate and produce new levels of award-winning productivity.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how these trend-setters create innovation value, we're joined by some finalists from the Citrix Synergy 2016 Innovation Awards Program: Olaf Romer, Head of Corporate IT and group CIO at Bâloise in Basel, Switzerland; Alan Crawford, CIO of Action for Children in London, and Craig Patterson, CEO of Patterson and Associates in San Antonio, Texas. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Olaf, what are the major trends that drove you to reexamine the workplace conceptually, and how did you arrive at your technology direction for innovating in that regard?

Romer: First of all, we're a traditional Swiss insurance company. So, our driver was to become a little bit more modern to attract the new generation of people to our company. In Switzerland, this is a little bit of a problem. We also have big companies in Zurich, for example. So, it's very important for us.

Romer
We did this in two directions. One direction is on the IT side, and the other direction is on the real-estate side. We changed from the traditional office boxes to a flex office with open space, like Google has. Nobody has their own desk, not even me. We can go anywhere in our office and sit with whomever we need to. The same applies on the IT side: we're moving in this direction for more mobility and an easier way of working in our company.

Gardner: And because you’re an insurance organization, you have a borderless type of enterprise, where you need to interact with field offices, other payers, suppliers, and customers, of course.

Was that ability to deal with many different types of end-point environments also a concern, and how did you solve that?

Romer: The first step was inside our company, and now, we want to go outside to our brokers and to our customers. The security aspect is very, very important. We're still working on being absolutely secure, because we're handling sensitive customer data. We're still in the process of opening our ecosystem outward to the brokers and customers, but also to other companies we work with. [See related post, Expert panel explores the new reality for cloud security and trusted mobile apps delivery.]

Gardner: Alan, tell us about Action for Children and what you’ve been doing in terms of increasing the mobile style of interactions in business.

Crawford: Action for Children is a UK charity. It helps 300,000 children, families, and young people every year. About 5,000 staff operate from between 300 and 500 branches; 300 are our own, and a couple of hundred locations are with our partner agencies.

Crawford
When I started there, the big driver was around security and mobility. A lot of the XP computers were running out of support, and the staff outside the office was working on paper.

There was a great opportunity in giving modern tablets to staff to improve productivity. Productivity in our case means that if you spend less time doing unnecessary visits, or do something in one visit instead of three, you can spend more quality time with the family to improve the outcomes for the children.

Gardner: And, of course, as a non-profit organization, costs are always a concern. We've heard an awful lot here at Citrix Synergy about lower-cost client and endpoint devices. Has that been good news to your ears? [Learn more about Citrix Synergy 2016.]

Productivity improvements

Crawford: It has. We started with security and productivity as the main drivers, but actually, as we've rolled out, we've seen those productivity improvements arise. Now, we're looking at the costs: the savings we can make on travel, print, and stationery. Our starting budget this year is £1.3 million ($1.7 million) less than it was the year before we introduced tablets for those things. We're trying to work out exactly how much of that we can attribute to the mobile technology and how much is due to other factors.

Gardner: Craig, you're working with a number of public sector organizations. Tell us about what they are facing and what mobility as a style of work means to them.

Patterson: Absolutely. I'm working with a lot of public housing authorities. One is Lucas Metropolitan, and the other is Hampton Redevelopment Agency. What they're facing is declining budgets and a need to do more with less.

Patterson
When we look at traditional housing-authority and government-service agencies that are paper-based, paper just continues to multiply. You put one piece in the copier and 20 pieces come out. So, being able to take documents that contain our clients' secure private information and connect them with the clients out in the field is why we need mobility, efficiency, and workflows.

And the cloud is what came to mind with that. With content management, we can capture data out in the field. We can move our staff out in the field. We don’t have to bring all of the clients into the office, which can sometimes pose a hardship, especially for elderly, disabled, and many of those in the greatest need. Mobility and efficiency with the cloud and the security have become paramount in how we perform our business.

Gardner: I suppose another aspect of mobility is the ability to bring data and analytics to the very edge. Have you been able to take advantage of that yet, or is it something you're going to be working toward?

Patterson: We know that it’s something we're working toward. We know from the analytics that we’ve been able to see so far that mobility is the key. For some time, people have thought that we can’t put online things like applications for affordable housing, because people don’t have access to the Internet.

Our analytics prove that entirely wrong. Age groups of 75 and 80 were accessing it on mobile devices faster than the younger group was. What it means is that they find a relative, a grandchild, or whoever they need to help them access the Internet. It's been our own mindset that has kept us from making the Internet and those mobile avenues into our systems available on a broader scale. So, we're moving in that direction so that self-service can be offered to that community in a broader context.

Measuring outcomes

Crawford: On the analytics and how that's helped by mobile working, we had a very similar result at Action for Children in the same year we brought out tablets. We started to do outcome measures with the children we work with. For each child, we do a baseline measure when we first meet the family, and then maybe three months later, whatever the period of the intervention, we do a further measure.

Doing that directly on a tablet with the family present has really enhanced the outcome measures. We now have measures on 50,000 children and we can aggregate that, see what the trends are, see what the patterns are geographically by types of service and types of intervention.

Gardner: So it’s that two-way street; the more data and analytics you can bring down to the edge, the more you can actually capture and reapply, and that creates a virtuous cycle of improvement in productivity.

Crawford: Absolutely. In this case, we're looking at the data and learning lessons about what works better to improve the outcomes for disadvantaged children, which is really what we're about.

Gardner: Olaf, user experience is a big topic these days, and insurance, going right to the very edge of where there might be a settlement event of some sort, back to the broker, back to the enterprise. User experience improvements at every step of that means ultimately a better productive outcome for your end-customers. [See related post, How the Citrix Technology Professionals Program produces user experience benefits from greater ecosystem collaboration.]

How does user experience factor into this mobility, data, and analytics equation?

Romer: First of all, the insurance business is a little bit different from the others here. The problem is that our customers normally don't want to touch us during the year. They get a one-time invoice from us and they have to pay the premium. Then, they hope, and we also hope, that they will not have a claim.

We have only one touch a year, and this is a little bit of a problem. We try to do everything to be more attractive for the customer and to get them to come to us, so that for them it's clear that if they have a problem or need new insurance, they go to Bâloise Insurance.

We're working on bringing in a little bit of consumerization. In former years, the insurance business was very difficult and it wasn't transparent. Customers have to answer 67 questions before they can take out insurance with us, and this is the point: to make it as simple as possible and to work with new technology, we have to be attractive for customers, like letting them take out insurance through an iPhone. That's not so easy.

If you talk with a core insurance person about calculating the premiums, they still want those 67 answers from the customers. So, it's not only the technology; it's also about working a little bit differently in the insurance business. The technology will also help us there. For me, the buzzword is big data, and now we have to bring out the value of the data we have in our business, so that we can go directly to the right customer area with the right user interface.

Gardner: Another concept that we have heard quite a bit at Synergy is the need to allow IT to say yes more often. Starting with you, Craig, what are you seeing in the trends and in the technology that is perhaps most impactful in allowing you to say yes to the requests and the need for agility in these businesses, in these public-sector organizations?

Device agnosticism

Patterson: It's the device agnosticism, where you bring your own device (BYOD). It's a device that individuals are already familiar with. I'm going to take it from two angles. It could be an employee who's delivering a service to a customer in the field and can bring their own device, or a partner or contractor, so that we can integrate and shrink-wrap certain data. We still have data security while they're deploying or doing something out in the field for us. It could be inspections, customer service, medical, etc.

But then, on the client end, they have their own device. We're able to deliver products through portals that don't care what device they have; it's based on mobile protocols and security. Those are the types of trends that are going to allow us to collect the big analytics, test what we think we know, find out whether we really know it or not, and get the facts.

The other piece of it though is to make it easy to access the services that we provide to the community, because now it’s a digital community; it’s not just the hardcore community. To see people in a waiting line now for applications hurts my feelings. We want to see them online, accessing it 24×7, when it makes sense for them. Those are the types of services that I see becoming the greater trends in our industry.

Gardner: Alan, what allows you to say “yes” more often?

Crawford: When I started, with the XP laptops, we were saying no. In comparison, within our centers now, they're using the tablets and the technology. You have closed Facebook groups with those families. There's now peer support outside hours, when children are going to bed, which is often when families have issues.

They use Eventbrite, the booking app. There are some standard off-the-shelf apps, but the really enterprising example is a service in a rural community that currently tells everybody in that community what services they're running through posters and flyers that were printed off. That has moved to developing our own app. The prototypes are already out there, and the full app will be out there in a few weeks' time. We're saying yes to all of those things. We want to support them. It's not just yes, but yes, and how can we help you do that?

Gardner: Olaf, of course, productivity is only as good as the metrics that we need to convince the higher-ups in the board room that we need more investment or that we're doing good work with our technology. Do you have any measurements, metrics, even anecdotes about how you measure productivity and what you've done to modernize your workspaces?

Romer: Yes, for us it's the feedback from the people. It's very difficult to measure purely at a technology level, but feedback from the people is very good and very important for us. With the BYOD we introduced one and a half years ago, you can see a strong cultural change in collaboration. We work together much more efficiently in the company and across the different departments.

In former times, we had closed file shares, and I couldn't see the files of the department next to me. Now, we're working in a completely modern, collaborative way. Still, in traditional insurance areas, let's say those working with the government, it's very hard for them to work in the new style.

In the beginning, there were very strong concerns about that, and now we're in a cultural shift on this. We get a lot of good feedback that in project teams, or in the case of some problems or issues, we can work much better and faster together.

Metrics of success

Gardner: Craig, of course it’s great to say yes to your constituents, but it’s also good to say that we're doing more with less to your higher-ups and those that control the budget. Any metrics of success that you can recall in some of the public-sector organizations you're working with?

Patterson: Absolutely. I'll talk about files in workflow. Before, when a document came into the organization, we mapped how much time and money it took to get it into a file folder, having been viewed by everyone who needed to view it. To give quick context: before, a document took a file folder, a label maker, and a copy machine, and every time a person needed to put a document in that folder, someone had to get it there. Now, the term "file clerk" is actually becoming obsolete.

When a document comes in, it gets scanned, it's instantaneously put in the correct order in the right electronic folder, and an electronic notification is sent to the person who needs to know. That happens in seconds. When you look at each month, it amounts to real savings; before, we were managing files rather than assisting people.

The metrics are in the neighborhood of about 75 percent paper reduction, because people aren't making copies. This means they're not going to the copy machine and, along the way, the water cooler and the conversation pits. That removes some of the inefficiency as well. We can now see how many file folders you looked at, how many documents you actually touched, read, and reviewed in comparison with somebody else.

We saw people touching as few as five documents in a month, in comparison with 1,700 for somebody else. That starts to tell you some things about where your workload is shifting. Not everyone likes that. They might consider it a little bit "big brother," but we need those analytics to know how best to change our workflows to serve our customer, and that's the community.

Gardner: I don’t know if this is a metric that’s easy to measure, but less bureaucracy would be something that I think just about everyone would be in favor of. Can you point to something that says we're able to reduce bureaucracy through technology?

Patterson: When you look at bureaucracy and unnecessary paper flows, there are certain yes-and-no questions that are part of bureaucracy. Somebody has it come to their desk and their job is to stamp yes or no on it. What decision do they have to make? Well, they really don't have one; they just have to stamp yes. To me, that's classic bureaucracy.

Well, if the document hits that person's desk and it meets certain criteria or a threshold, the computer automatically and instantaneously approves it, and there's a documented audit trail. That saves some of our clients in the housing-authority industry time when the auditors come and review things. And if you did have to make a decision, the system forces you to know how long it took you to make it. So, we can look at why it's taking so long, or whether there are questions that you don't need to be answering.
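
A minimal sketch of the kind of threshold-based auto-approval with an audit trail that Patterson describes might look like the following Python. The document fields, dollar threshold, and log format are hypothetical; the point is simply that routine items clear automatically while every decision, human or automatic, leaves a record for auditors.

```python
# Hypothetical sketch: auto-approve routine documents under a threshold,
# record every decision in an audit trail, and queue the rest for a human.

import json
from datetime import datetime, timezone

AUTO_APPROVE_LIMIT = 500.00   # illustrative dollar threshold

def process_document(doc: dict, audit_log: list) -> str:
    """Return 'approved' or 'needs_review' and append an audit record."""
    auto_ok = doc.get("type") == "routine" and doc.get("amount", 0) <= AUTO_APPROVE_LIMIT
    decision = "approved" if auto_ok else "needs_review"
    audit_log.append({
        "doc_id": doc["id"],
        "decision": decision,
        "decided_by": "system" if auto_ok else "pending-human",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return decision

if __name__ == "__main__":
    audit: list = []
    print(process_document({"id": "D-101", "type": "routine", "amount": 120.0}, audit))
    print(process_document({"id": "D-102", "type": "exception", "amount": 90.0}, audit))
    print(json.dumps(audit, indent=2))   # the documented audit trail for auditors
```
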

Gardner: So let the systems do what they do best and let the people do the exception management and the value-added activities. Alan, you had some thoughts about metrics of success of bureaucracy or both?

Proxy measure

Crawford: Yes, it’s the metrics. The Citrix CEO [Kirill Tatarinov] talked at Citrix Synergy about productivity actually going down in the last few years. We’ve put all these tablets out there and we have individual case studies where we know a particular family-support worker has driven 1,700 miles in the year with the tablet, and it was 3,400 miles in the year without. That’s a proxy measure of how much time they're spending on the road, and we have all the associated cost of fuel and wasted time and effort.

We've just installed an app -- actually I have rolled it out in the last month or so -- that measures how many tablets have been switched on in the month, how much they've been used in the day, and what they've been used for. We can break that down by geographical area and give that information back to the line managers, because they're the people to whom it will actually make sense.

I'm right at the stage where it's great information. It's really powerful, but the challenge is actually to understand how many hours a day they should be using that tablet. We're not quite sure, and it probably varies from one type of service to another.

We look at those trends over a period of months. We can tell managers that, yes, overall staff usage is 90 percent, but it's 85 percent in your area. All managers, I find, are fairly competitive.
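
The kind of usage reporting Crawford describes, overall tablet usage versus usage in a particular manager's area, could be computed along these lines. The Python below is a hypothetical sketch with invented field names and figures, not the charity's actual app.

```python
# Hypothetical sketch: roll up tablet-usage records by area and compare each
# area's usage rate to the overall rate, the way a line manager might see it.

from collections import defaultdict

records = [   # illustrative data: (area, tablet_id, switched_on_this_month)
    ("North", "t1", True), ("North", "t2", False),
    ("South", "t3", True), ("South", "t4", True), ("South", "t5", True),
]

def usage_by_area(rows):
    on = defaultdict(int)
    total = defaultdict(int)
    for area, _tablet, switched_on in rows:
        total[area] += 1
        on[area] += int(switched_on)
    overall = sum(on.values()) / sum(total.values())
    return {area: on[area] / total[area] for area in total}, overall

if __name__ == "__main__":
    per_area, overall = usage_by_area(records)
    for area, rate in per_area.items():
        print(f"{area}: {rate:.0%} (overall {overall:.0%})")
```
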

Gardner: Well, that may be a hallmark of business agility, when you can try things out, A/B testing. We’ll try this, we’ll try that, we don’t pay a penalty for doing that. We can simply learn from it and immediately apply our lesson back to the process.

Crawford: It's all about how we support those areas where we identify that they're not making the most of the technology they've been given. And it might be human factors. The staff or even the managers are very fearful. Or it might be technical factors. There are inhibitors around mobile network coverage and even broadband coverage in some rural areas. We just follow up on all of the user-experience information we get back and try to proactively improve things.

Gardner: Olaf, when we ask enterprises where they are in their digital transformation, many are saying they're just at the beginning. For you, who are obviously well into a digital transformation process, what lessons learned could you share; any words of advice for others as they embark on this journey?

Romer: The first digital transformation in the insurance business was in the middle of 1990s, when we started to go paperless and work with a digital system. Today, more than 90 percent of our new insurance contracts are completely paperless. In Germany, for example, you can give a digital signature. It’s not allowed for the moment in Switzerland, but from a technical perspective, we can do this.

My advice would be that digitalization gives you a good opportunity to think about making things simple. We built up great complexity over the years, and now we're able to bring this down and make it as simple as possible. We created the slogan, “Simply Safe,” to push us to rethink everything that we're doing and make it simple and safe. Again, for insurance, it's very important that digitalization not bring more complexity, but reduce it.

Gardner: Craig, digital transformation, lessons learned, what advice can you offer others as they embark?

Document and workflow

Patterson: In digital transformation, I'll just use document and workflow as an example. Start with the higher-end items; there's low-hanging fruit there. I don't know if we'll ever be totally paperless, which would really allow us to go mobile, but at the same time, know what not to scan. Know what to archive and what to just get rid of. And don't hang on to old technologies for too long. That's something else that's starting to happen. Technology lifecycles are getting shorter, and we need to plan our strategies along those lines.

Gardner: Alan, words of advice on those also interested in digital transformation?

Crawford: For us, it started with connecting to our cause. We've got social-care staff, and telling them we're going to do a digital transformation is not going to really enthuse them. However, if you explain that this is about actually improving the lives of children with technology, then they start to get interested. So, there is a bit about using your cause and relating the change to your cause.

A lot of our people factors are about how to engage and train. It's no longer IT saying, "Here's the solution, and we expect you to do ABC." It's working with those social-care workers: here are the options, what will work for you, and how should we approach that? But then it's never letting up.

Actually, you’ve got to follow through on all this change to get the real benefits out of it. You’ve got to be a bit tenacious with it to really see the benefits in the end.

Gardner: Tie your digital transformation and the organization's mission together so that there is no daylight between them.

Crawford: We've got a project on digitally enabling Action for Children, and that was to try to link the two together inextricably.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Citrix.

You may also be interested in:

Infrastructure as destiny — How Purdue builds an IT support fabric for big data-enabled IoT


The next BriefingsDirect Voice of the Customer IT infrastructure thought leadership case study explores how Purdue University has created a strategic IT environment to support dynamic workload requirements.

We'll now hear how Purdue extended a research and development IT support infrastructure to provide a common and "operational credibility" approach to support myriad types of compute demands by end users and departments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To describe how a public university is moving toward IT as a service, please join Gerry McCartney, Chief Information Officer at Purdue University in Indiana. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: When you're in the business of IT infrastructure, you need to predict the future. How do you close the gap between what you think will be demanded of your infrastructure in a few years and what you need to put in place now?

McCartney: A lot of the job that we do is based on trust and people believing that we can be responsive to situations. The most effective way to show that right now is to respond to people’s issues today. If you can do that effectively, then you can present a case that you can take a forward-looking perspective and satisfy what you and they anticipate to be their needs.

McCartney
I don’t think you can make forward-looking statements credibly, especially to a somewhat cynical group of users, if you're not able to satisfy today’s needs. We refer to that as operational credibility. I don’t like the term operational excellence, but are you credible in what you provide? Do people believe you when you speak?

Gardner: We hear an awful lot about digital disruption in other industries. We see big examples of it in taxi cabs, for example, or hospitality. Is there digital disruption going on at university campuses as well, and how would you describe that?

McCartney: You can think of a university as consisting of three main lines of business, two of which are our core activities: teaching and educating students, and producing new knowledge, or doing research. The third is the business of running that business, and how you do that. A very large infrastructure has been built up around that third leg, for a variety of reasons.

But if we look at the first two, research in particular, which is where we started, this concept of the third leg of science has been around for some time now. It used to be just experimentation and theory creation. You create a theory, then you do an experiment with some test tubes or something like this, or grow a crop in the field. Then, you would refine your theory, and you would continue in that kind of dyadic mode of just going backward and forward.

Third leg of science

That was all right until we wanted to crash lorries into walls or to fly a probe into the sun. You don’t get to do that a thousand times, because you can’t afford it, or it’s too big or too small. Simulation has now become what we refer to as the third leg of science.

Slightly more than 35 percent of our actual research now uses high-performance computing (HPC) in some key parts of it to produce results, then shape the theory formulation, and the actual experimentation, which obviously still goes on.

Around teaching, we've seen for-profit universities, and we've seen massive open online courses (MOOCs) more recently. There's a strong sense that the current mode of instructional delivery cannot stay the same as it has been for the last hundreds of years and that it’s ripe for reform.

Indeed, my boss at Purdue, Mitch Daniels, would be a clear and vibrant voice in that debate himself. To go back to my earlier comments, our job there is to be able to provide credible alternatives, credible solutions to ideas as they emerge. We still haven’t figured that out collectively as an industry, but that’s something that is in the forefront of a lot of peoples’ minds.

Gardner: Suffice to say that information technology will play a major role in that, whatever it is.

McCartney: It’s hard to imagine a solution that isn’t actually completely dependent upon information technology, for at least its delivery, and maybe for more than that.

Gardner: So, high-performance computing is a bedrock for the simulations needed in modern research. Has that provided you with a good stepping stone toward more cloud-based, distributed computing-based fabric, and ultimately composable infrastructure-based environments?

McCartney: Indeed it has. I can go back maybe seven or eight years at our place, and we had close to 70 data centers on our campus. And by a data center, I mean a room with at least 200-amp supply, and at least 30 tons of additional cooling, not just a room that happens to have some computers in it. I couldn't possibly count how many of them there are now. Those stand-alone data centers are almost all gone now, thanks to our community cluster program, and the long game is that we probably won't have much hardware on our campus at some point a few years from now.

Right now, our principal requirement is around research computing, because we have to put the storage close to the compute. That's just a requirement of the technology.

In fact, many of our administrative services right now are provided by cloud providers. Our users are completely oblivious to that, but we have no on-premises solution at all. We're not doing travel, expense reimbursement and a variety of back-office things on our campus at all.
That trend is going to continue, and the forcing function there is that I can't spend enough on security to protect all the assets I have. So, rather than spend even more on security and fail to provide that completely secure environment, it's better to go to somebody who can provide that environment.

Data-compute link

Gardner: What sort of an infrastructure software environment do you think will give you that opportunity to make the right choices when you decide on-prem versus cloud, even for those intensive workloads that require a tight data and compute link?

McCartney: The worry for any CIO is that the only thing I have that's mine is my business data. Anything else -- web services, network services -- I can buy from a vendor. What nobody else can provide me are my actual accounts, to use a business term, and that can be research information, instructional information, or just regular bookkeeping information.

When you come into the room with a new solution, you're immediately looking at the exit door. In other words, when I have to leave, how easy, difficult, or expensive is it going to be to extract my information back from the solution?

That drives a huge part of any consideration, whether it's cloud or on-prem or whether it's proprietary or open code solution. When this product dies, the company goes bust, we lose interest in it, or whatever -- how easy, expensive, difficult is it for me to extract my business data back from that environment, because I am going to need to do that?

Gardner: What, at this juncture, meets that requirement in your mind? We've heard a lot recently about container technology, standards for open-source platforms, industry accepted norms for cloud platforms. What do you think reduces your risk at this point?

McCartney: I don't think it's there yet for me. I'm happy to have, relatively speaking, small lines of business. Also, you're dependent then on your network availability and volume. So, I'm quite happy there, because I wasn't the first, and because that's not an important narrative for us as an institution.

I'm quite happy for everybody else to knock the bumps out of the road for me, and I'll be happy to drive along it when it’s a six-lane highway. Right now it's barely paved, and I'll allow other brave souls to go there ahead of me.

Gardner: You mentioned early on in our discussion the word "cynical." Tell me a little bit about the unique requirements in a university environment where you need to provide a common, centrally managed approach to IT for cost and security and manageability, but also see to the unique concerns and requirements of individual stakeholders?

McCartney: All universities are, as they should be, full of self-consciously very smart people who are all convinced they could do a job, any particular job, better than the incumbent is doing it. Having said that, the vast bulk of them have very little interest in anything to do with infrastructure.

The way this plays out is that the central IT group provides the core base that services the network -- the wireless services, base storage, base compute, things like that. As you move to the edge, the departments provide the things that make a difference at the edge.

Providing the service

In other words, if you have a unique electrical device that you want to plug in to a socket in the wall because you are in paleontology, cell biology, or organic chemistry, that's fine. You don't need your own electricity generating plants to do that. I can provide you with the electricity. You just need the cute device and you can do your business, and everybody is happy.

Whatever the IT equivalent to that is, I want to be the energy supplier. Then, you have your device at the edge that makes a difference for you. You don't have to worry about the electricity working; it's just there. I go back to that phrase "operational credibility." Are we genuinely surprised when the service doesn’t work? That’s what credibility means.

Gardner: So, to me, that really starts to mean IT as a service, not just electricity or compute or storage. It's really the function of IT. Is that in line with your thinking, and how would you best describe IT as a service?

McCartney: I think that's exactly right, Dana. There are two components to this. There's an operational component, which is, are you a credible provider of whatever the institution decides the services are that it needs, lighting, air-conditioning or the IT equivalence of that? They just work. They work at reasonable cost; it's all good. That’s the operational component.

The difference with IT, as opposed to other infrastructure components, is that IT has itself the capability to transform entire processes. That’s not true of other infrastructure things. I can take an IT process and completely reengineer something that's important to me, using advantages that the technology gives me.
For example, I might be concerned about student performance in particular programs. I can use geo-location data about their movement. I can use network activity. I can use a variety of other resources available to me to help in the guidance of those students on what’s good behavior and what’s helpful behavior to an outcome that they want. You can’t do that with an air-conditioning system.

IT has that capability to reinvent itself and reinvent entire processes. You mentioned some of them, the way that something like Uber has entirely disrupted the taxi industry. I'd say the same thing here.

There's one part of the CIO’s job that’s operational; does everything work? The second part is, if we're in transition period to a new business model, how involved are the IT leaders in your group in that discussion? It's not just can we do this with IT or not, but it’s more can a CIO and the CIO’s staff bring an imagination to the conversation, that is a different perspective than other voices in the organization? That's true of any industry or line of business.

Are you merely there as a handmaiden waiting to be told what to do, or are you an active partner in the conversation? Are you a business partner? I know that’s a phrase people like to use. There's a kind of a great divide there.

Gardner: I can see where IT is a disruptor -- and it’s also a solution to the disruptor, but that solution might further disrupt things. So, it's really an interesting period. Tell me a little bit more about this concept of student retention using new technologies -- geolocation for example -- as well as big data which has become more available at much lower cost. You might even think of analytics as a service as another component of IT as a service.

How impactful will that be on how you can manage your campus, not only for student retention, but perhaps for other aspects of a smarter intelligent campus opportunity? [See related post, Nottingham Trent University Elevates Big Data’s Role to Improving Student Retention in Higher Education.]

Personalized attention

McCartney: One of the great attractions of small educational institutions is that you get a lot of personalized attention. The constraint of a small institution is that you have very little choice. There's a small number of faculty, and they simply can’t offer the options and different concentrations that you get in a large institution.

In a large institution, you have the exact opposite problem. You have many, many choices, perhaps even too many subjects that, as a 19-year-old, you've never even heard of. Perhaps you get less individualized attention and you fill that gap by taking advice from students who went to your high school a year before, who are people in your residence hall, or people you bump into on the street. The knowledge that you acquire there is accidental, opportunistic, and not structured in any way around you as an individual, but it’s better than nothing.

There are advisors, of course, and there are people, but you don't know these individuals. You have to go and form relationships with them and they have to understand you and you have to understand them.

A big-data opportunity here is to be able to look at the students at some level of individuality. "Look, this is your past, this is what you have done, this is what you think, and this is the behavior that we are not sure you're engaging in right now. Have you thought about this path, have you thought about this kind of behavior for yourself?"

A well-established principle in student services is that the best indicator of student success is how engaged they are in the institution. There are many surrogate measures of that, like whether they participate in clubs. Do they go home every weekend, indicating they are not really engaged, that they haven’t made that transition?

Independent of your academic ability, your SAT scores, and your GPA that you got in high school, for students that engage, that behavior is highly correlated with success and good outcomes, the outcomes everybody wants.

As an institution, how do you advise or counsel? Students will say perhaps there's nothing here they're interested in, and that can be a problem with a small institution. It's very intimate. Everybody says, "Dana, we can see you're not having a great time. Would you like to join the chess club or the draughts club?" And you say, "Well, I was looking for the Legion of Doom Club, and you don't seem to have one here."

Well, you go to a large institution, and they probably have two of those things, but how would you find it, and how would you even know to look for it? How would you discover new things that you didn't even know you liked, because the high school you went to didn't teach applied engineering or a whole pile of other things, for that matter?

Gardner: It’s interesting when you look at it that way. The student retention equation is, in a business sense, the equivalent of user experience, personalization, engagement, share of wallet, those sorts of metrics.

We have the opportunity now, probably for the first time, to use big data, Internet of Things (IoT), and analytics to measure, predict, and intercede at a behavioral level. So in this case, to make somebody a productive member of society at a capacity they might miss and you only have one or two chances at that, seems like a rather monumental opportunity.

Effective path

McCartney: You’re exactly right, Dana. I'm not sure I like the equivalence with a customer, but I get the point that you're making there. What you're trying to do is to genuinely help students discover an effective path for themselves and learn that. You can learn it randomly, and that's nice. We don't want to create this kind of railroad track. Well, you're here; you’ve got to end up over there. That’s not helpful either.

My own experience, and I don’t know about other people listening to this, is that you have remarkably little information when you're making these choices at 19 and 20. Usually, if you were getting direction, it was from somebody who had a plan for you that was more based on their experience of life, some 20 or 30 years previously than on your experience of life.
So where big data can be a very effective play here is to say, "Look, here are people who look like you, and here are the choices they've made. You might find some of these choices interesting. If you do, then here's how you'd go about exploring that."

As you rightly say, and implicitly suggested, there is a concern with the high costs, especially of residential education, right now. The most wasteful expenditure is where you spend a year or two finding out you should never have been in this program; you have no love for it and no affinity for it.

The sooner you can find that out for yourself and make a conscious choice the better. We see big data having a very active role in that because one of the great advantages of being in a large institution is that we have tens of thousands of students over many years. We know what those outcomes look like, and we know different choices that different people have made. Yes, you can be the first person to make a brand new choice, and good for you if you are.
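
One simple way to operationalize the "here are people who look like you, and here are the choices they made" idea McCartney describes is a nearest-neighbor comparison over a few engagement and background features. The Python sketch below is purely illustrative, with invented features and data; it is not a description of Purdue's actual analytics.

```python
# Hypothetical sketch: suggest programs by looking at the most similar past
# students (nearest neighbors over a few normalized features).

import math
from collections import Counter

# Each past student: (feature vector, program they ultimately thrived in).
# Features might be things like engagement score, STEM interest, club activity,
# all invented here for illustration.
PAST_STUDENTS = [
    ((0.9, 0.2, 0.8), "Life Sciences"),
    ((0.8, 0.9, 0.4), "Applied Engineering"),
    ((0.3, 0.8, 0.2), "Computer Science"),
    ((0.7, 0.3, 0.9), "Education"),
]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def suggest_programs(new_student, k=3):
    """Return the programs most common among the k most similar past students."""
    nearest = sorted(PAST_STUDENTS, key=lambda s: euclidean(s[0], new_student))[:k]
    return Counter(program for _, program in nearest).most_common()

if __name__ == "__main__":
    print(suggest_programs((0.75, 0.4, 0.85)))
```
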

Gardner: Well it’s an interesting way of looking at big data that has a major societal benefit in the offing. It also provides predictability and tools for people in ways they hadn’t had before. So, I think it’s very commendable.

Before we sign off, what comes next -- high-performance computing (HPC), fabric cloud, IT as a service? Is there another chapter on this journey that perhaps you have a bead on that we're not aware of?

McCartney: Oh my goodness, yes. We have an event now that I started three years ago called "Dawn or Doom," which asks: if technology is a forcing function, and we're not even going to assert that it definitely is, are we reaching a point of a new nirvana, a new human paradise where we've resolved all major social and health problems? Or have we created some new seventh circle of hell, where it's actually an unmitigated disaster for almost everybody, if not everybody? Is this the end of life as we know it? Do we create robots that are superior to us in every way, so that we become just some intermediate form of life that has reached the end of its cycle?

This is an annual event that's free and open. Anybody who wants to come is very welcome to attend. You can Google "Dawn or Doom Purdue." We look at it from all different perspectives. So, we have obviously engineers and computer scientists, but we have psychologists, we have labor economists. What about the future of work? If nobody has a job, is that a blessing or a curse?

Psychologists, philosophers, what does it mean, what does artificial intelligence mean, what does a self-conscious machine mean? Currently, of course, we have things like food security we worry about. And the Zika virus -- are we spawning a whole new set of viruses we have no cure for? Have we reached the end of the effectiveness of antibiotics or not?

These are all incredibly interesting questions I would think any intelligent person would want to at least probe around, and we've had some significant success with that.

Next event

Gardner: When is the next Dawn or Doom event, and where will it be?

McCartney: It will be in West Lafayette, Indiana, on October 3 and 4. We have a number of external high-profile keynote speakers, and then we have a passel of Purdue faculty. So, you will find something to entertain even the most arcane of interests. [For more on Dawn or Doom, see the book, Dawn or Doom: The Risks and Rewards of Emerging Technologies.]

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How UPS automates supply chain management and gains greater insight for procurement efficiency


The next BriefingsDirect business innovation for procurement case study examines how UPS modernizes and streamlines its procure-to-pay processes.

Learn how UPS -- across billions of dollars of supplier spend per year -- automates supply-chain management and leverages new technologies to provide greater insight into procurement networks. This business process innovation exchange comes to you in conjunction with the Tradeshift Innovation Day held in New York on June 22, 2016.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To explore how procurement has become strategic for UPS, BriefingsDirect sat down with Jamie Dawson, Vice-President of UPS's Global Business Services Procurement Strategy in Atlanta. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the major trends that you are seeing in procurement, and how you're changing your strategy to adjust?

Dawson: We're seeing a lot of evolution in the marketplace in terms of both technology and new opportunities in ways to procure goods, and that really is true around the globe. We're adjusting our strategy and also challenging some of our business partners to come along with us.

We're a $60 billion company. Last year, our total expenses were somewhere in the $50 billion range, with lots of goods and services flowing around the globe.

Gardner: And so any new efficiency or spend-management benefit you can find turns into significant savings.

Dawson: Absolutely.

Gardner: Now that you're looking for new strategies and new solutions, what is it in procurement that’s of most interest to you and how are you using technology in ways you didn't before?

Collaboration and partnerships

Dawson: One of the new ways is a combination of partnerships, both with third parties and with our own internal business partners. We're collaborating with other functions, and procurement is not something we're doing to them; we're working with them to understand what their needs are, and working with their suppliers as well.

Dawson
Gardner: We're hearing some very interesting things these days about using machine learning and artificial intelligence, combining that with human agents who are specialized. It sounds like, in some ways, external procurement services can do the job better than anyone. Is that something that you're open to? Is procurement as a service something you're looking at? [See related post, How new modes of buying and evaluating goods and services disrupts business procurement — for the better.]


Dawson: Procurement-as-a-service has a certain niche play. There will always be basic buy-and-sell items, even as individuals. There are some things you don’t research, but you just go out and buy. There are other things for which you do a lot of research and you look into different solutions.

There are different things that will cause you to research more. Maybe it's a competitive advantage, maybe you're looking for an opportunity in a new space or a new corner of the globe. So, you'll do a lot more research, and your solutions need to be scalable. If you create and start in Europe, maybe you'll also want to use it in Asia. If you start in the US, maybe you want to use elsewhere.

Gardner: It sure sounds like, during this period of experimentation, the boundary between things you would buy by rote and things you would buy with a lot of expertise or research is shifting or changing. Are you experimenting as an organization, and what is interesting to you as you look at new opportunities from the people in the procurement network space?

Dawson: There will always be complex areas that require solution orientation more than just price. They need a deep understanding of industry, knowledge, and partnership. There are a lot of other areas where the opportunities are expanding every day. [See related post, ChainLink analyst on how cloud-enabled supply chain networks drive companies to better manage finances, procurement.]

Gardner: As you think about what you've done and been able to accomplish, do you have any advice for other organizations that are also starting to think about modernizing and strategizing, rather than just doing it in the traditional old way? What would you tell them?

Dawson: Two things. One would be within the procurement organizations to be open to new ideas. And second, get the rest of the organization behind you, because you're going to need their support.

Gardner: It seems that procurement as a function is just far more strategic than it used to be. Not only are you able to get more goods and services, but you can save significant amounts of money. Do you feel that your profile as an organization within UPS is rising or expanding in terms of the role you play in the larger organization? [See related post, CPO expert Joanna Martinez extols the virtues of redesigning procurement for strategic business agility.]


Don't have to sell

Dawson: I'm certainly aware that the knowledge of the capabilities and the demonstrated successes are now being recognized throughout the organization. And it becomes self-feeding. You actually get on a roll and can further expand the capabilities once that knowledge is out there; you don't have to sell.

Gardner: Last question, looking to the future, on a vision level, what’s really exciting to you? What are you thinking that might be more important to you in how you do business two or three years from now? It could be technology, suppliers, ecosystems, cloud enabled intelligence, that sort of thing.

Dawson: It's a very interesting question, because it's almost the same answer: your greatest fear is also the greatest benefit. I listened to what we just heard about the Tradeshift Go tool, and it's crazy how exciting this is. You heard all the questions in the room about how to adapt that to what you already have today. The world still exists as it exists today.

So, there's this huge transition period where we were bolting on these fantastic great ideas to our existing infrastructure. That transition into what's new and really embracing it is the most exciting of all.

Gardner: Disruption can be good and disruption can be bad.

Dawson: It will be a challenging journey.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Tradeshift.

You may also be interested in:

How the Citrix Technology Professionals Program produces user experience benefits from greater ecosystem collaboration


The next BriefingsDirect thought leadership panel discussion focuses on how expert user communities around technologies and solutions create powerful digital business improvements.
As an example, we will explore how the Citrix Technology Professionals Program, whose members are referred to as CTPs, gives participants a larger say in essential strategy initiatives such as enabling mobile work styles.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the CTP program and how an ongoing dialogue between vendors and experts provides the best end-user experiences, we're joined by Douglas Brown, Founder of DABCC.com in Sarasota, Florida; Rick Dehlinger, an Independent Technologist and Business Visionary in Sacramento, California; Jo Harder is the Cloud Architect at D+H and an Industry Analyst at Virtualization Practice in Fort Myers, Florida, and Steve Greenberg, President of Thin Client Computing in Scottsdale, Arizona. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We hear so much nowadays about user experience. You might say that you, as a community-based organization, are the original user experience provider. What is the CTP program as a user group and how ultimately does your user experience translate into improvements for the Citrix community and ecosystem?

Brown: I've been a CTP since the conception of the CTP Program, and within the Citrix Community since 1997.

Brown
What's neat about the CTP Program and the Citrix Community in general is that we're able to bring a bunch of great, talented people together, and then in return, take that combined experience and knowledge and share that with other people.

What was interesting and what got me into the community way back when, was the fact that there was just no information. You were just really out on your own trying to solve problems. And when we were able to then put that in the community, we all exponentially got better.

What I've found through the Citrix Community in general, the Citrix Users Group that Citrix has recently started, and the CTP Program is that you're always better together. That's the biggest takeaway for me from not just the 10 years of CTP, but 15 or 16 years of being in the greater Citrix Community itself.

Gardner: Steve, how well and effective does this advocacy role work? How much traction are you getting?

Greenberg: It's amazing how well it works. Doug referred to the old days. We had a 1997-to-2007 era, where you didn't have the feedback loop, and products evolved slowly. We'd see a new product release and ask why they did that. So, this passionate group, because of the Internet, because we're all kind of little freaks in our little neighborhoods somewhere around the world, all found each other and came together with such a passion.

Greenberg
We haven't calculated it, but it's in excess of 1,000 years of hands-on experience between this group of 50 or so people. It works, and Citrix has come to value it. Other companies are following the model and developing community programs. It's really invigorating to learn something from the true end user, the customer, and bring it back to headquarters and see the products evolve and change.

Brown: It's really a 360-degree type of program. It's not just for us; it also benefits Citrix, and then, of course, everyone, the customer and the end engineers, what have you.

Gardner: As was mentioned, we're in this era of social media, and people can be their own publisher and they can be an earphone and a megaphone at the same time. So Rick, do you feel like you're representing a large group, and how do they communicate to you what they're feeling?

Much broader audience

Dehlinger: I do feel like I represent a pretty large group, especially when you start wandering the halls of Citrix Synergy. It’s like a college or high school reunion that happens every year. I definitely feel like we represent a much broader audience.

We (the members of the CTP program) also have people who represent perspectives from various locations across the world, different industries, industry functions, different customer bases -- even different seats in the ecosystem -- the partner community, end user, customer, and other technology-provider companies.

Dehlinger
In terms of communication, some of the tools have evolved over the last 10 years. Steve made a good point. I hadn’t really thought about the fact that we have two different eras. The era of the last 10 years has really been one of greatly increased communication and transparency, and that's one of the things that the CTP program is fantastic about.

[Interesting editorial note: shortly after the inception of the CTP Program in 2006, a couple of the founding CTPs – Brian Madden and Rick Dehlinger – wrote blog articles essentially calling Citrix out for being closed off and not showing any thought leadership in the industry.

Then Citrix CEO Mark Templeton got the message loud and clear, and reversed the policy against Citrixites blogging. This was effectively the turning point between the eras Steve Greenberg mentioned, and the first big impact the CTPs had on Citrix and the industry.]

Steve had mentioned that a lot of the other vendors are starting to use this (CTP Program) as a model to build their community programs around. This group of people is very passionate about Citrix Technologies and passionate about touching the lives of others. This combines the two of those (passions) and puts us behind a closed door with the opportunity to have a very real conversation and communication with the leaders, developers, product managers at Citrix.

We have impacted some very substantial and positive change in Citrix -- helped them stop going down some roads that would have been disastrous, and recover from some decisions that started to become disastrous or were dead ends -- and they ultimately improved.

Greenberg: And to Doug's 360-degree comment, we continue to be inspired by what they bring out and put in front of us as a possible vision; it's incredible. Just so you understand, we're usually locked in a room for two full days, approximately 10 to 12 hours, a couple of times a year, and it gets deep. It's like a family having an inside discussion that gets real heated, but it's two-way.

Perhaps at first, it was us saying, "You have got to fix this stuff," but now it's inspiring to see what comes out, that they touch the community and say, "We're thinking about this; how would that work?" It's really, really cool.

Brown: I like the fact that Steve mentioned it's really two different eras. Prior to the CTP Program -- and I was around when they started this -- we really had to push for something like this at Citrix. A typical corporation back then was not about outside feedback per se. They did not blog; there was no social media. It was a very controlled message.

Nowadays, obviously they need to control the message, but it's just wide open. It's a wide-open world out there today.

Interactive, wide ranging

Gardner: Jo, you're like a focus group in a sense, but interactive and wide-ranging in terms of your impact and getting information from the field. So as a focus group, what did you accomplish recently at Citrix Synergy 2016?

Harder: Let me step back and say that we're under NDA with Citrix. These closed-door discussions that Steve mentioned are very private discussions. The product managers go into what's happening, what they're thinking about for future products, and that's really the basis for those discussions.

Harder
I never really thought about us as like a focus group, but we are. It's really great that we can give feedback to each other. Because we have such varied experiences and expertise, there are some products that I know really well that the person sitting next to me might touch once a year. So we have complete variety in the group. It's really great to be able to have those discussions as a focus group, if you will, and to be able to provide that feedback to the folks at Citrix and really to each other as well, because we do learn a lot from each other.

Gardner: Because Citrix has so many different lines and products -- some inherited through acquisition, some built organically -- no one user consumes them in the same way. What are you seeing in terms of adoption? What would you say is the most interesting part of Citrix's solutions in this particular day and age?

Dehlinger: The most interesting thing, for me and within our little focus group, is the community representation. I tend to be one of the ones who advocates very heavily for the cloud, and for increasing the pace of evolution, helping drag the traditional Citrix enterprise customer base further into the new world that we live in. For me, the most exciting stuff has definitely got to be the cloud.

The evolution of Citrix’s Cloud Services, now called Citrix Cloud, and all that stuff underneath it, is fantastic. It’s monumental, not just for the consumer base, but also for Citrix, because it gets them into the world of rapid prototyping and rapid evolution, consistent, evergreen products and services, and also starts to put them into a different world, where it's cloud-based consumption and pricing.

Every day, every week, every month, every year, you have to continue to prove your value, improve your value, and provide a high quality of service. If you don't, you're cut off; the customer has the opportunity to walk away.

One of the things that's most exciting about that for me is the opportunity for Citrix to evolve into the cloud first world alongside Microsoft. If you look at any of the traditional enterprise technology vendors that are out there, they've been selling based on a capital-expenditure model into the enterprise.
Customers spend all these big bucks up front, and the vendors' entire ecosystems -- their sales teams, even their product-development cycles -- are based on these big buys and long deployment processes. So much of the company revolves around up-front capital expenditure and long deployment cycles, and the entire ecosystem gets tied to that.

Then you look at the polar opposite of that, which is the cloud, where it's consumption-based pricing and the attributes that I mentioned a little bit earlier.

Adoption patterns

Gardner: So the adoption patterns could be quite interesting. We could be seeing all sorts of new models popping up, and that could be interesting for the companies as well as for end-user organizations.

Dehlinger: In my mind, it increases the transparency on both sides. Citrix knows and understands who is using what, and what they are not using also. The customer has an opportunity to vote with their dollars, not just once upfront when they are seeing all the stars of the sales pitch, but on a monthly or a yearly basis.

That's actually the most exciting part to me, because Microsoft has made that pivot now, with Office 365 and Azure and all that product family. They've brought their ecosystem around and they're showing the world now that it's possible to evolve from being a traditional enterprise software/ technology vendor to being a cloud service provider.

So, it's exciting for me. What I see as the future of Citrix and of the community is Citrix getting over that hump themselves and really getting into it. They have reinvented themselves many times over the years.

Gardner: Steve, thin-client computing has always been an interesting solution, but tying that to any device, any cloud -- what do you see as some of the most interesting developments?

Greenberg: To me, it's that push forward, and it's the new CEO Kirill Tatarinov making a strong statement that we're going to the cloud, as Rick says, taking it forward. But the most exciting thing for me -- because day in and day out I architect, design, and implement -- is to take this suite and fit it to the organization. Every organization is different, and the best part of my job is going in and learning a new organization, what it is they do and how they do it. Inevitably, something Citrix is doing makes that better.

Now, as Rick said, we just have more options. If cloud is the best delivery model for an organization -- perhaps they're distributed around the world, or there's some other factor -- now they can do it. They have Citrix behind them casting the vision.

So it’s the flexibility, it's the power and excitement that you get from moving at the speed of the business. It's not IT saying no, not IT saying, "Well, I can't do that new product line because our system is blah, blah, blah." If we need to move quick, throw it in Azure. Let’s get on to that new offering.

Harder: Say "yes."

Gardner: Jo, virtualization has never been as prominent as it is now. What do you see from the virtualization perspective with the new products and the new embrace of virtualization at multiple abstractions?

Tying in security

Harder: I'm looking at it from the banking sector, because that's what I live and breathe. I'm looking at it from security, compliance, everything that comes along with the finance industry. I look at that probably a little bit more cautiously than most, but what I find pretty interesting is that Citrix is really tying in security end-to-end.

Some of the sessions here at Synergy have talked about the whole security piece. You want to be progressive, but you have to do it very securely. That's one of the pieces that I'm really embracing from a virtualization standpoint.

From the standpoint of finance, there should be no data on the workstation. If somebody were to walk into a bank and steal that client device, they should not be able to walk off with any Social Security numbers, no non-public personal information (NPI), nothing of that sort. That's what excites me about virtualization and tying that together, the way that Citrix has all the moving parts.

In the future, the next step for the banks is getting into wireless, getting into mobility. Citrix is very well-poised for that. So, the future is bright.

Gardner: So, security was the original big use case for VDI -- nothing on the client. But now clients are everywhere. So it's really, "How do we get the data from the edge and to the edge securely?"

Douglas, what are some of the key points from your perspective in terms of the Citrix product line and how that impacts users that you represent?

Brown: That's a good question. I'm a XenApp baby. I see the cloud as the real, true information highway. It's the enabler to allow us to bring things to market quicker. XenApp is that ultimate tool to then give access to the applications anywhere, any time.

I don't care if it's 2016, with all the stuff that we do today, or if it's 1999, at the end of the day, I have never met an end user that comes into the company and says, "Gee, I can't wait to use Windows 10," or "Gee, I can't wait to use that new Cisco Core Router they just bought." They don’t come into work and say, "Oh no, I have to do a spreadsheet today." They don't even talk about Excel.

With all these different technologies we're bringing to bear -- be it the cloud, mobility, or whatever -- back to the user experience piece, Citrix is able to give the end user a better, faster time to market. At the end of the day, they're able to work better from any place, at any time.

I've been living a lot in Sarasota, but I also commute to Berlin, Germany. It's sort of an interesting commute, but it doesn't matter where I live, and this is the same story that we've told for 15 years.

It's not a new story; it's just about bringing in more components to fulfill that destiny of a better user experience. What's IT there for? It's to enable the users to do their jobs better, and ultimately, that's what Citrix is about. Everything else is just fluff. Everything else is just the machinery.

Network intelligence

Gardner: Rick, when we think about changes in Citrix over the past couple of years -- and there have been a lot of them -- one of the things that strikes me is that they seem to be much more interested in strutting their stuff as to what their network intelligence capabilities are.

There's a lot more discussion of NetScaler and how that integrates to mobility, security, big-data analytics, and cloud. Do you agree with me that the NetScaler and the intelligent networks component are more prominent, and how does that play into the future?

Dehlinger: NetScaler was, by anybody's measure, one of the best acquisitions Citrix ever made. They got some fantastic technology and brilliant talent. Some of the things that we've been able to do with NetScaler in our tool bag, as we're out solving problems and helping customers take things to the next level, is just mind-boggling.

I'm thrilled at the change. It seems like they finally started to figure out a better way to communicate both what NetScaler is and its role in this whole game. You asked me about the Microsoft-Citrix relationship a bit earlier. Some of the stuff that Citrix is doing now in that partnership -- incorporating and leveraging NetScaler and its unique layer of visibility between the user and the applications -- will enable some really amazing new capabilities.

I think it's fantastic that they finally found the language. NetScaler is starting to get its feet underneath it, although you could argue it already has its feet underneath it; it’s been a billion dollar-plus business for Citrix for a couple of years now.

Gardner: Jo, how about you? In terms of security, and in the banking sector in particular, are intelligent network services something really impressive and important?

Harder: Just to expand on what Rick said, I think what Citrix is doing with NetScaler is great. Some days I feel like I don't fully understand it all, even though I'm immersed in these technologies, and then you learn something else that NetScaler can do for you. There is more, there is more, there is more. It's in there, and it's a matter of finding out exactly how best to use it, and then going ahead and using the products. With NetScaler, I totally agree with Rick; the sky is the limit.

Dehlinger: Well, NetScaler used to be the realm of the packet trace junkies. Load balancing is the easiest thing that people can use to describe what NetScaler does, but that whole world was just fraught with massive acronyms, crazy technology, terminology, standards, and stuff that (for the normal human being or the business person in particular) was just mind-boggling and baffling.

It's great that Citrix is finally finding some language to demystify a little bit of that, and to show that underneath all that mysticism, and the support for all these crazy new fancy TLAs and acronyms, there is some really amazing, powerful business value just waiting to be unlocked and leveraged.

Gardner: Steve, mobile work styles as opposed to mobility or device or bring your own device (BYOD) -- how far do you feel that your community contacts have gotten in that direction of a mobility style change rather than simply doing something with a smaller device in more places?

Transforming organizations

Greenberg: That's a great question, because I think this particular group has been at the core of this for some time, and we have taken some very notable large organizations and completely transformed them.

People work from home. People work on a multitude of devices. I can be sitting at the desktop in the office, grab a laptop and go jump in a cab, take my phone, and there is that seamless experience. We really are there. At this point, it’s just a matter of getting it more widely infiltrated, getting people aware of what they can do.

To this day, although it seems old to us, I still go into new client sites and opportunities and say, "You could do this," and they say, "Really? I didn’t know I could do that." It’s there, but now the society is catching up, if that makes sense.

Gardner: It also seems that some of the file-share demonstrations and announcements show the benefit of the whole greater than the sum of the parts, when you can integrate with cloud, with devices. Any thoughts about the power of an integrated file share rather than just the plain vanilla one-size-fits-all type of cloud-based file share?

Greenberg: That's the final layer that makes this mobile work style a reality. Before, if you could remote in the XenApp style that Doug was referring to, you could get your job done. But now that you can transmit data securely, when it hits your phone, you're working on it natively.

I go into the subway and the signal drops. Well, that file is there and I can edit it, sign it, get my signal back, and go. It has taken that virtualization mobility to a level now where it can travel and it can be seamless.

Gardner: And that’s an intelligent container. So, if your requirements around privacy or security mean that you have to have control over what that session is and does, you can get that.

Douglas, how important is that intelligent container when put in the context of an intelligent network?

Brown: Extremely important. It's important from every aspect of the business. Nowadays, we're able to do things we have never been able to do in the past, at the level they're at now.

It can't be overstated how important those components are. It comes down to maturity. The technology and the vision have been there -- or rather, the vision has been there, and the technology is coming around. Now, with technologies such as these, it has matured, and we're able to achieve all of our goals, from the business through to the end users.

New capability

Greenberg: Citrix demonstrated at Synergy 2016 a new capability that wasn't there before. We're all familiar with the Dropbox model, where I can send a file, but once you send it, it’s out there in the wild. What they showed today was sending a file and then changing its status. So, even though that person had received the file and looked at it, when the status changed, they could no longer see it. That’s the home run. That’s the piece that was not part of this capability before.
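
To make the mechanics of that concrete, here is a minimal sketch of status-based, revocable sharing, assuming a hypothetical in-memory ShareService class. It illustrates the pattern Greenberg describes -- access follows a status the sender can flip at any time -- and is not Citrix ShareFile's actual API or storage model.

```python
import secrets

class ShareService:
    """Hypothetical in-memory share service: access follows a mutable status."""

    def __init__(self):
        self._shares = {}  # share_id -> {"blob": bytes, "active": bool}

    def share(self, blob: bytes) -> str:
        share_id = secrets.token_urlsafe(16)
        self._shares[share_id] = {"blob": blob, "active": True}
        return share_id  # this ID (or a link built from it) goes to the recipient

    def revoke(self, share_id: str) -> None:
        # The sender flips the status; the file itself never leaves the service.
        self._shares[share_id]["active"] = False

    def fetch(self, share_id: str) -> bytes:
        entry = self._shares.get(share_id)
        if entry is None or not entry["active"]:
            raise PermissionError("This share has been revoked or does not exist.")
        return entry["blob"]

if __name__ == "__main__":
    svc = ShareService()
    link = svc.share(b"contract-draft.pdf contents")
    print(svc.fetch(link)[:14])   # the recipient can read it ...
    svc.revoke(link)
    try:
        svc.fetch(link)           # ... until the sender changes the status
    except PermissionError as err:
        print(err)
```

Because the recipient only ever holds a reference, not an unmanaged copy, the "out in the wild" problem of the Dropbox-style model goes away.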

Harder: I tweeted this morning that this new capability really propelled Citrix ShareFile into being the file-sharing solution for business. There are a lot of other solutions out there, but they're really not suitable for business. They don't provide that level of security and the signature signing that it enables. Think about the security impacts of that, the legalities. They have it covered. There's a lot more coming. Once some of the states start to allow the digital signature to be incorporated as a notarized signature, wow.

Gardner: Many business processes really do get that mobile style of work as a result, and rather than just repaving cow paths, you're really doing something quite new and different.

Before we sign off, I would like to allow our listeners and readers to get more information on the Citrix Technology Professional Program. If they're interested in learning more, maybe taking some role themselves, where should they go?

Dehlinger: Definitely start with the CTP page on the Citrix website. That's a great place to find out more about this group and what they do. Also look at the Citrix User Group Communities out there. There are a lot of fantastic people in them. We (CTPs) are blessed by having the opportunity to represent a big base, but in a lot of localities around the world, the Citrix User Group Communities have been doing some fantastic things and making a difference locally.

Gardner: Sort of a federation of groups around the world.

Dehlinger: Absolutely.

Greenberg: I would add, blog, tweet, turn out for user groups, come out to Synergy, come out to Summit. If you're one of the reseller partners, make yourself known.

We're a community of almost-crazy enthusiasts. We have a ridiculous level of interest and passion. We have a tendency to find each other, and we're always amazed to see new people come from a place, a country, or a business we've never heard of, with new solutions.

A great event happening today is the Geek Speak tonight. We have done a GeekOvation program, where people submit their projects and their work and come up and get recognized for it and have a little contest. There are endless possibilities. Just get out there and start communicating.

Dehlinger: Participate!

Harder: And have fun.

Brown: In a couple of weeks, I'm going to Norway with Rick for one of the best and oldest Citrix User Groups in the world. But that kind of advocacy is only half of it; there are programs and other avenues for people looking to get into the CTP Program, or just for sharing knowledge in general.

Start up a blog, have some fun, share knowledge. I've always said, knowledge is not power; power is in dispersing that knowledge.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Citrix.

You may also be interested in:

Securing data provides Canadian online bank rapid path to new credit card business

Securing data provides Canadian online bank rapid path to new credit card business

The next BriefingsDirect data and security transformation use-case scenario describes how Tangerine Bank in Toronto has improved its speed to new business initiatives by gaining data-security agility.

We'll now learn how improving end-user experiences for online banking and making data more secure across its lifecycle has helped speed the delivery of a new credit card offering.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

Here to explore how compliance, data security technology, and banking innovation come together to support a digital business success story is  Billy Lo, Head of Enterprise Architecture at Tangerine Bank in Toronto. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: First, tell us a little bit about digital disruption in the banking industry. Obviously, there are lots of changes in many industries these days, but it seems that banking is particularly within the cross-hairs of disruption.
Lo: No doubt about that. Our bank used to be known as ING Direct. It started in Canada about 20 years ago. Our founders recognized the need for branchless, digital banking early on and started down that journey. Since then, we've been going full-speed ahead. We see the savings we get out of being branchless and pass them back to the clients. That message resonates very well with our client base, and so far, so good.

Gardner: When you say online banking, there are no branches, no brick-and-mortar buildings with the word "bank" on the front. It's all done via mobile and online. Am I missing anything?

Lo: On top of the fully digital experience, we also actually dive into a little bit of the physical as well, but not in a traditional way.

At Tangerine, we have a couple of other in-person kinds of channels. One is what we call a café. In an informal setting, you can get a coffee or orange juice at the café and get some advice. But most of the functionality is available through the digital channel, through a tablet onsite with someone guiding you along the way.

We've recently been exploring a concept called Mobile Pop-Ups, but not at malls. We refurbished containers and put them into different locations to introduce the banking concept to different geographies. We've also found that very rewarding, because you can reach many people online, but there are still some who need a little extra nudge to get comfortable with starting a banking relationship online.

User expectations

Gardner: That brings up an interesting topic. User expectations are also rapidly evolving in our world. Is there something about somebody who is attracted to online banking that you need to be aware of? Is there something about speed or agility? What is it about the banking customer who prefers online that you need to cater to?

Lo
Lo: Dana, you're right on point. In this case, both speed and agility are expected from a bank that highlights its services in terms of the online user experience. Clients are now used to the Gmail inbox, Facebook, instant messaging. The good old days of submitting a form and waiting for someone to come back to you are gone, really gone.

From an expectation point of view, we're heavily impacted by the consumerization of technology. All those things that you see on a smartphone, taking pictures and depositing a check, are almost, as we call it, table stakes. We have to work harder at inventing things that surprise and delight our clients.

Gardner: Of course a big part of being able to delight your customers is to know them and have data about them that you can use to allow services to be customized and personalized. So data is essential, but at the same time, you're in a highly regulated business where privacy issues and security are big concerns. How are you achieving the balance between data availability and data protection?

Lo: We in the banking business are in the business of trust. In everything that we do, trust has to be number one. We have to be ready for any kind of questions from our client base on how we handle the information. There's no doubt that transparency will help, and over time, with transparency, our clients learn that we're up-front in how we're using information. And it's not just transparency, but also putting the information in a way that's easily understandable up-front.

If you look at our registration process, one of the first things that we tell people is, "Here is our not-so-fine print." It's in big, bold fonts, and that's very important, because especially in a digital bank, the lion's share of the interactions are not face-to-face. If you invest the time in being transparent, invest the time in building up your security infrastructure to protect the information, and stay vigilant about everything that's currently happening, it can be done.

Gardner: Tell us a little bit about your journey toward this new credit-card offering and why putting the blocks of infrastructure investment in place in advance is so important for agility and for quality of service in a new offering.

Lo: Let's take this journey back a little bit as far as our credit-card offering is concerned. We started out as a savings bank and highlighted our high-interest offering at the beginning. That resonated well, and we quickly recognized the fact that we're going to need to expand our product offering. People actually wanted to use us as an everyday bank.

Unfortunately, at the time, we didn't have the complete suite of products that our clients would need. So, over time, we built up mutual funds, investments, and mortgages, and the last piece of the puzzle is credit cards. Once we have that, we can officially say to everybody that we're not just a peripheral bank, but offer real full-service functions that support your everyday life.

In our case, efficiency and speed of adoption are key. Every month that we wait for this offering to get out the door, we lose opportunities to turn a regular client into a full-service client. And we were starting from scratch -- we had zero infrastructure. We hired, we built up the technology behind it, and we partnered with a few of our trusted partners to build up the infrastructure, but the foundation does take time to do right.

Foundational effort

One thing that not a whole lot of people understand is the foundational effort. If you spend a month or two building up the right foundation, the savings going forward are actually exponential. With HPE, we adopted the tokenization solution to help protect [credit] card number information, and we were able to complete the whole journey very quickly. That saves us a lot of time, because everything revolves around the card number. If we don't get the foundation done right, and done quickly, at the beginning, the cost and schedule impact is exponential.

Gardner: So quality is important because you want to get it right the first time. It's not just doing it quickly; it’s also doing it correctly. If you have to go back and redo infrastructure, that can be a huge tax on your innovation and really put a cultural drag on how things proceed.

Lo: Right on, and I don’t even want to think about it. Seriously, on the adoption of these foundational components, speed is key and that saves us a lot of hassle going forward in conversion as well as data cleansing. Once the cat is out of the bag, if you will, it’s so much harder.

Gardner: Billy, I've heard from other organizations that recognize that moving data around in the old-fashioned way doesn't work. For staying PCI compliant and meeting privacy requirements, in fact, having less data and less detailed information about a customer is much more desirable. Is that the case with you, with the tokenization process and the use of encryption? How would you describe what data to keep, what data to transact with, and what the right balance is?
Lo: Just as any other security person would tell you, you have to know where the walls and the doors are for secured information. We made a conscientious effort to identify where we would actually need the card number available -- such as for collections or for certain operational processes -- to identify who needs it and where the door needs to be, and then to lock everything else up. Tokenization allows us to do that without too much overhead, and overall, the experience has definitely been well worth the time invested.

Now, I have one place to monitor, one door to monitor. As soon as I allow access to that information, I have an audit trail of who accessed what, when, and how. That gives me the comfort level I need. Our clients specifically demand it, both from the business side and from the front-end client point of view, and they appreciate that.

Gardner: For some of our audience, who are not security folks per se, describe what secured data and stateless tokenization means. How does that work -- just an idea architecturally of how this actually works?

Lo: Imagine your card number, or any kind of personally identifiable information (PII) that is important to you. Think of it as a piece of fruit, an apple, that you pass around to identify yourself. Tokenization -- and the Stateless Tokenization technology that HPE offers in particular -- gives you an exchange process. The middleman takes your apple and turns it into a pear through a specific algorithm. The reverse process can be applied when someone gives the middleman a pear and asks for the actual apple; the original comes back to you.

So every piece of information that is passed along in the message exchange goes through this process. The key term here is stateless, of course, so that we don't have a rack of this mapping information stored somewhere, which would become yet another vulnerability. That makes our operations a lot easier, especially in a multi-data-center environment.

Gardner: So, you get the use of that tokenized data, but you don’t have to store it. It’s not in the state in different places that then have to be protected. There are fewer spots where somebody could be liable to expose it or get access to it.
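
For readers who want to see the apple-to-pear idea in code, here is a minimal, illustrative sketch of key-derived (vault-less) tokenization, assuming a toy Feistel-style construction over the digits of a card number. This is not HPE SecureData's actual Stateless Tokenization algorithm and is not production-grade cryptography; it only demonstrates how a token can be derived from, and reversed back to, the original value using a secret key, so that no mapping table has to be stored anywhere.

```python
# Toy illustration of stateless (vault-less) tokenization.
# NOT HPE's algorithm and NOT production crypto -- for explanation only.
import hmac
import hashlib

ROUNDS = 8

def _prf(key: bytes, value: int, rnd: int, width: int) -> int:
    """Keyed pseudo-random function used for one Feistel round."""
    digest = hmac.new(key, f"{rnd}:{value}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (10 ** width)

def tokenize(pan: str, key: bytes) -> str:
    """Map an even-length digit string (e.g. a 16-digit PAN) to a same-length digit token."""
    digits = pan.replace(" ", "")
    w = len(digits) // 2
    mod = 10 ** w
    left, right = int(digits[:w]), int(digits[w:])
    for rnd in range(ROUNDS):
        left, right = right, (left + _prf(key, right, rnd, w)) % mod
    return f"{left:0{w}d}{right:0{w}d}"

def detokenize(token: str, key: bytes) -> str:
    """Reverse tokenize() for a caller holding the key -- the one 'door'."""
    w = len(token) // 2
    mod = 10 ** w
    left, right = int(token[:w]), int(token[w:])
    for rnd in reversed(range(ROUNDS)):
        left, right = (right - _prf(key, left, rnd, w)) % mod, left
    return f"{left:0{w}d}{right:0{w}d}"

if __name__ == "__main__":
    key = b"demo-secret-key"                  # in practice: a managed, rotated secret
    token = tokenize("4111 1111 1111 1111", key)
    print(token)                              # 16 digits, same format, not the real PAN
    print(detokenize(token, key))             # 4111111111111111
```

A real deployment would use a vetted format-preserving encryption mode and serious key management in place of this toy round function, but the operational property is the one Lo highlights: one key, one door, and no stored token vault to protect or replicate across data centers.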

Difficult to guarantee

Lo: No doubt. In fact, if you think about a larger-scale environment where pieces of information are stored in the cloud, in multiple data centers, in some cases, you may not even know physically where they are. It's very difficult to make that guarantee and say that we know where our information is, and that’s just online. There are the backups that are necessary to run a successful operation.

Gardner: We've heard now that you started from scratch with your credit-card activities. You put in the necessary infrastructure, recognizing that doing it right and fast is a great saver over time. Tell us a little bit about the actual credit card project. How has it come about, and where are you in its delivery to the market?

Lo: It's been very exciting for us. Our clients have been looking forward to this. We did the public launch in March of this year, after about six months of trials within the bank and with some selected clients. We're now going full force, and we've been running campaigns.

How do we do this? How do we attract our clients? First of all, by being transparent. Our product features are very specific, and we don't hide the interest rate. We're very upfront about fair fees. We're offering promotions right now in three categories. We have four percent cash back on our product, which is a very attractive offering that the market has been looking forward to. It's been working really well.

Gardner: And what’s the name of the card? Is it just Tangerine Bank card or is there a branded name to it?

Lo: There is nothing fancy about it right now. This is our only card; so, it can’t go wrong.

Gardner: And is it both debit and credit?

Lo: We've had a debit card for a while now. With the credit card, the technology behind it uses the typical chip-card infrastructure as well as MasterCard PayPass Tap and Go. And we're also venturing into mobile payments in the very near future.

Gardner: That was my very next question. Now that you’re a full service bank online, more and more people are wondering how to automate this payment process, particularly with a mobile device. We’ve seen other organizations attempt this, but it doesn’t seem to have gone mainstream yet. Tell us about what you foresee for mobile payments and how you think you might be a leader in that market?
Lo: In the Canadian marketplace, the merchant landscape is very different from most other geographies. Well over 80 percent of the merchants in the Canadian marketplace are already Tap and Go and chip ready.

With the adoption of mobile payments in big-vendor environments such as Apple Pay and Android Pay, we're very, very optimistic. Tap and Go is already a significant component of the payment process, especially for small amounts, and this is a natural extension, whether it's through the mobile phone or your watch. The impediments that other geographies have around merchant reluctance or infrastructure constraints don't really exist in the Canadian marketplace. So, we're ready.

Extra distance

Gardner: It seems to me that, given the emphasis on user experience and convenience, when organizations like yours go the extra distance and make that user experience simple, transparent, and worthwhile in terms of convenience and productivity, customers will put more and more of their transactions onto that card. It could become central to their lives. Is that part of your strategy?

Lo: Yes, in many ways. The Tap and Go payment process, once the merchant environment supports it, is very, very efficient. The more information we have around where our clients are spending their time, the more we can customize our offering to cater to their specific needs and personalize insights that support their everyday life. No doubt about that. In fact, speaking of the credit-card offering and its differentiation, one of the things that we made very clear is that convenience comes with a cost in terms of people's comfort level in using the product.

Now, if you lose your phone, what's going to happen? We made it a very high priority to enable our clients to freeze the card very easily. Let's say I leave my card in a restaurant: I just pick up my phone, or go to any Internet-connected device, and freeze my card -- don't cancel it, freeze it -- until I find it. So, we take quite a bit of time in exploring and making sure that people will feel comfortable using these new channels.
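
The freeze-versus-cancel distinction maps onto a simple, reversible state model. The sketch below is purely illustrative and hypothetical -- it is not Tangerine's implementation -- but it shows the design choice: "freeze" can be undone the moment the card turns up, while "cancel" cannot.

```python
# Hypothetical card-state model illustrating freeze (reversible) vs. cancel (terminal).
from enum import Enum

class CardState(Enum):
    ACTIVE = "active"
    FROZEN = "frozen"        # reversible: new authorizations decline
    CANCELLED = "cancelled"  # terminal: the card must be re-issued

class Card:
    def __init__(self, token: str):
        self.token = token               # tokenized card number, never the raw PAN
        self.state = CardState.ACTIVE

    def freeze(self) -> None:
        if self.state is CardState.ACTIVE:
            self.state = CardState.FROZEN

    def unfreeze(self) -> None:
        if self.state is CardState.FROZEN:
            self.state = CardState.ACTIVE

    def cancel(self) -> None:
        self.state = CardState.CANCELLED  # one-way transition

    def authorize(self, amount_cents: int) -> bool:
        # Only an active card approves a new transaction.
        return self.state is CardState.ACTIVE

if __name__ == "__main__":
    card = Card(token="5454958139411234")
    card.freeze()                         # "left it in a restaurant"
    print(card.authorize(1999))           # False -- declined while frozen
    card.unfreeze()                       # "found it"
    print(card.authorize(1999))           # True
```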

Gardner: It sounds like we're only just scratching the surface on these ancillary services that could be brought to bear when you have the underlying infrastructure in place, the security and data availability in place. It’s going to be interesting in the next several years how convenience can be even completely redefined.

Lo: Yes. We can't wait to continue to innovate for our clients, and in many ways, our clients are looking forward to all of these things as we progress. Banking is our everyday life.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

CPO expert Joanna Martinez extols the virtues of redesigning procurement for strategic business agility

CPO expert Joanna Martinez extols the virtues of redesigning procurement for strategic business agility

The next BriefingsDirect business innovation thought leadership discussion focuses on how companies are exploiting technology advances in procurement and finance services to produce new types of productivity benefits.

We'll now hear from a procurement expert on how companies can better manage their finances and have tighter control over procurement processes and their supply chain networks. This business process innovation exchange comes to you in conjunction with the Tradeshift Innovation Day held in New York on June 22, 2016.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

To learn more about how technology trends are driving innovation into invoicing and spend management, please welcome Joanna Martinez, Founder at Supply Chain Advisors and former Chief Procurement Officer at both Cushman and Wakefield and AllianceBernstein. She's based in New York. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's behind the need to redesign business procurement for agility?

Martinez: I speak to a lot of chief procurement officers and procurement execs, and people are caught up in this idea of, we’ve got to save money, we’ve got to save money. We have to deliver five times the cost of our group, 10 times, whatever their metric is. They've been focused on this, and their businesses have been focused on this, for a long time.

The reality is that the world really is changing. It's been a 25-year run of professional procurement and strategic sourcing focused on cost out, and even the most brilliant of sourcing executives, at some point, is going to encounter a well that's run dry.

Sometimes you work in a manufacturing company, where there is a constant influx of new products. You can move from one to another, but those of us who have worked in the services industries -- in real estate, in other kinds of businesses where a tangible good isn't made and where it's really a service -- don't always have that influx. It's a real conundrum, a real problem out there.

I believe, though, that events and these changes are forcing the good, smart procurement people to think about ways they can be more agile, accept the disruption, and figure out a way to continue to add value in spite of it.

Gardner: So perhaps cost-out is still important, but innovation-in is even more important?

Changing metrics

Martinez: That's it, exactly. In fact, I have seen some things written lately. Accenture did a piece on procurement, "The Future Procurement Organization of One," I think it was called. They talked about the metrics changing, and that procurement is evolving into an organization that's measured on the value it adds to the company's strategy.

Martinez
People talk a lot about changing the conversation. I don't think it's necessarily changing the conversation; it's adjusting the conversation. After you've been reviewing your cost savings for the last five years for your CFO, you don't walk in one day and say, "Now we're going to talk about something else." No, you get smart about it, you start to think about the other ways you're adding value, and you enhance the conversation with those.

So, you don't go from a hundred to zero on the cost savings part of it. There's always going to be some expectation, a value added in that piece, but you can show relatively quickly that there are a whole lot of other places. [See related post, How new modes of buying and evaluating goods and services disrupts business procurement — for the better.]

Gardner: While it might be intimidating to some, it seems to me that there are many more tools and technologies that have come to bear that the procurement professional can use. They have many more arrows in their quiver, if they're interested in shooting them. What do you think are some of the more important technological changes that benefit procurement?

Martinez: Well, there are all these services in the cloud. It's become a lot cheaper and a lot faster to move to something new. For years, you've had a large IT community managing the disruption of trying to put in a product that's integrated with every piece of data and every server.

It's not over, because a lot of those legacy systems are there and have to be dealt with as they age. But as new services are developed, people can learn about them and figure out ways to bring them to the company. They require a different kind of agility: it's OPEX, not capital expense. There's more transparency when a service is being provided in the cloud. So some new procurement skill sets are required.

I'm going to speak later tonight, and I have a picture of an automobile assembly line. It says, "This is yesterday's robot." When you talk about robotics, people think of Ford Motor Company. The reality is that robotics are being used in the insurance industry and in other industries that are processing a lot of repetitive information. It is the robotics of technology. The procurement organization knows these suppliers and sees what the rest of the world is doing. It's incumbent upon procurement to start to bring that new knowledge to companies.

Gardner: Joanna, we also hear a lot of these days about business networks whereby moving services and data to a cloud model, you can assimilate data that perhaps couldn't have been brought to bear before. You can create partner relationships that are automated and then create wholes greater than the sum of the parts. How do you come down on business networks as a powerful tool for procurement? [See related post, ChainLink analyst on how cloud-enabled supply chain networks drive companies to better manage finances, procurement.]

Martinez: Procurement has to get over the "not invented here" syndrome. By the way, over the years I have been as guilty of this as anyone else. You want to be in the center of things. You want to be the one at the meeting when the suppliers come in and the new-product-development people at your company are there.

The procurement organization has to understand and make friends with the product-development and revenue-generating side of the business. Then they have to turn 180 degrees and look to the outside world, understand how the supplier community can help to create those networks, move on to the next one, and then be smart enough in the contracting -- in things like the termination clauses -- to make sure that those networks can be decoupled when they need to be.

Redesigning procurement

Gardner: Do you have any examples of organizations that have really jumped on the bandwagon around redesigning procurement for agility? What was it like for them, and what did they get out of it? It's always important to be able to go and show some metrics of success when you're trying to reinvent something.

Martinez: If you're looking for an example, you’ve got Zara, the global retailing chain. Zara changes their product constantly. They're known for their efficient supply chains. They have some in-house manufacturing, and that in-house manufacturing gets done by them, but it's for the basic product, the high volume, where lean manufacturing is important, because the variability is low and the volume is high.

When you get to things like the trend of the minute, be it gold buttons, asymmetrical hemlines, or something like that, they're