CPO expert Joanna Martinez extols the virtues of redesigning procurement for strategic business agility

The next BriefingsDirect business innovation thought leadership discussion focuses on how companies are exploiting technology advances in procurement and finance services to produce new types of productivity benefits.

We'll now hear from a procurement expert on how companies can better manage their finances and have tighter control over procurement processes and their supply chain networks. This business process innovation exchange comes to you in conjunction with the Tradeshift Innovation Day held in New York on June 22, 2016.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how technology trends are driving innovation into invoicing and spend management, please welcome Joanna Martinez, Founder at Supply Chain Advisors and former Chief Procurement Officer at both Cushman & Wakefield and AllianceBernstein. She's based in New York. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's behind the need to redesign business procurement for agility?

Martinez: I speak to a lot of chief procurement officers and procurement execs, and people are caught up in this idea of, "We've got to save money, we've got to save money. We have to deliver five times the cost of our group, 10 times," whatever the metric is. They've been focused on this, and their businesses have been focused on this, for a long time.

The reality is that the world really is changing. It's been a 25-year run of professional procurement and strategic sourcing focused on cost out, and even the most brilliant of sourcing executives, at some point, is going to encounter a well that's run dry.

Sometimes you work in a manufacturing company, where there is a constant influx of new products. You can move from one to another, but those of us who have worked in the services industries -- in real estate, in other kinds of businesses where a tangible good isn't made and where it's really a service -- don't always have that influx. It's a real conundrum, a real problem out there.

I believe, though, that events and these changes are forcing the good, smart procurement people to think about ways they can be more agile, accept the disruption, and figure out a way to continue to add value despite it.

Gardner: So perhaps cost-out is still important, but innovation-in is even more important?

Changing metrics

Martinez: That's it, exactly. In fact, I have seen some things written lately. Accenture did a piece on procurement, "The Future Procurement Organization of One," I think it was called. They talked about the metrics changing, and that procurement is evolving into an organization that's measured on the value it adds to the company's strategy.

People talk a lot about changing the conversation. I don't think it's necessarily changing the conversation; it's adjusting the conversation. After you've been reviewing your cost savings for the last five years for your CFO, you don't walk in one day and say, "Now we're going to talk about something else." No, you get smart about it, you start to think about the other ways you're adding value, and you enhance the conversation with those.

So, you don't go from a hundred to zero on the cost savings part of it. There's always going to be some expectation, a value added in that piece, but you can show relatively quickly that there are a whole lot of other places. [See related post, How new modes of buying and evaluating goods and services disrupts business procurement — for the better.]

Gardner: While it might be intimidating to some, it seems to me that there are many more tools and technologies that have come to bear that the procurement professional can use. They have many more arrows in their quiver, if they're interested in shooting them. What do you think are some of the more important technological changes that benefit procurement?

Martinez: Well, there are all these services in the cloud. It's become a lot cheaper and a lot faster to move to something new. For years, you've had a large IT community managing the disruption of trying to put in a product that had to be integrated with every piece of data and every server.

It's not over, because a lot of those legacy systems are there and have to be dealt with as they age. But as new services are developed, people can learn about them and figure out ways to bring them to the company. They require a different kind of agility: It's OPEX, not capital expense. There is more transparency when a service is provided in the cloud. So some new procurement skill sets are required.

I'm going to speak later tonight, and I have a picture of an automobile assembly line. It says, "This is yesterday's robot." When you talk about robotics, people think of Ford Motor Company. The reality is that robotics are being used in the insurance industry and in other industries that process a lot of repetitive information. It's robotics applied to information rather than to the assembly line. The procurement organization knows these suppliers and sees what the rest of the world is doing. It's incumbent upon procurement to start to bring that new knowledge to companies.

Gardner: Joanna, we also hear a lot these days about business networks, where, by moving services and data to a cloud model, you can assimilate data that perhaps couldn't have been brought to bear before. You can create partner relationships that are automated, and then create wholes greater than the sum of the parts. How do you come down on business networks as a powerful tool for procurement? [See related post, ChainLink analyst on how cloud-enabled supply chain networks drive companies to better manage finances, procurement.]

Martinez: Procurement has to get over the “not invented here” syndrome. By the way, over the years I have been as guilty of this as anyone else. You want to be in the center of things. You want to be the one at the meeting with the suppliers coming in and the new product development people at your company.

The procurement organization has to understand and make friends with the product development and revenue-generating side of the business. Then they have to turn 180 degrees, look to the outside world, and understand how the supplier community can help to create those networks, then move on to the next one, and be smart enough in the contracting, in things like the termination clauses, to make sure that those networks can be decoupled when they need to be.

Redesigning procurement

Gardner: Do you have any examples of organizations that have really jumped on the bandwagon around redesigning procurement for agility? What was it like for them, and what did they get out of it? It's always important to be able to go and show some metrics of success when you're trying to reinvent something.

Martinez: If you're looking for an example, you’ve got Zara, the global retailing chain. Zara changes its product constantly and is known for efficient supply chains. Zara keeps some manufacturing in-house, but only for the basic, high-volume product, where lean manufacturing is important, because the variability is low and the volume is high.

When you get to things like the trend of the minute, be it gold buttons, asymmetrical hemlines, or something like that, they're using a network of third parties to do that. In those cases, the volume is low, the variability is high, and so they create and disassemble these networks.

Whether financial services companies realize it or not, there's a lot of agility built into that model. There are some firms, some third parties, that a financial services firm will use to get those shareholder reports out. The firm sends them the monthly reports, and those companies have very high volume and excellent quality controls. Post offices are on-site. They don't even truck the mailings to the post office; the post office is sitting right there, and the mailings go out.

When you need to do something, for example a special mailing on a particular fund or shareholder meetings that might only be held once every couple of years, you find yourself in a situation where those kinds of networks don't serve you very well, and you have to kind of assemble and disassemble temporary networks.

Gardner: We hear a lot these days, with services organizations in particular, that finding labor and skills is a big issue for them. It seems to me that when we look at some of the tools that procurement is using, and the role that procurement is playing, that perhaps there is some more synergy between procurement and human resources management than we have seen in the past.

Do you see that as a potential benefit when you're looking for agility in procurement, that they should be working hand-in-hand, perhaps using some of the same platforms and methods for procurement and human capital management (HCM)?

Martinez: HCM is an important organization for procurement to bond with. Often, in a company, there's a lot of technology and human resources (HR) spend, and not a lot of professional scrutiny of the third parties behind that spend.

There are consultants who can advise you on insurance policies, but they're not always using the best tools to go out and find those providers. Sometimes there are relationships, payments, rebates, and that sort of thing in play that the HR community might not be aware of or asking about.

In HR, legal, and some of the other parts of a company that often buy services, technology solutions are coming into place. So, if you’ve got a procurement specialist working with HR who knows a lot about recruiters and doing deals with recruiters, they had better be learning how to do a deal with LinkedIn. They had better understand that those traditional service providers are not going to be needed any longer.

Procurement advice

Gardner: What advice would you give procurement professionals who are interested in redesigning their procurement for agility? Maybe they haven’t begun that journey fully. What would you advise as important opening steps or ways of thinking?

Martinez: Two things. Number one, no one in your organization is going to call you up one day and say, "You can do this differently." You have to be self-motivated, you have to recognize that the change has to occur, and you have to do it yourself. I was going to say to ask forgiveness, not permission, but you're not going to have to ask forgiveness, because you're going to find lots of good things.

The other thing is that there are supply chains embedded all through organizations, even when no one in the organization has heard the term “supply chain.”

Procurement organizations have to think about making sure that someone in their group understands supply chain or understands that mentality of owning something from start to finish, because as long as you're looking at discrete little pieces, you're not going to extract the maximum value.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Tradeshift.

You may also be interested in:

Zyme: Emergence and Evolution of Channel Data Management Software
The Quest for Real Time Computing
Where is the Data Warrior Now?
How Big Data is Affecting Business Decisions
Top Reasons of Hadoop – Big Data Project Failures
Quote of the Day: Mark Twain
Counting Elephants – How to Solve Big Problems with Big Data
Why Big Data Fuels Significant Change in the Real Estate Market
Look Back Over TDWI 2016
Why The Internet of Things is Getting Real Now
How new modes of buying and evaluating goods and services disrupts business procurement — for the better

The next BriefingsDirect business innovation thought leadership discussion focuses on how new modes of buying and evaluating goods and services are disrupting business procurement.

We'll hear now from a leading industry analyst on how machine learning, cloud services, and artificial intelligence-enabled human agents are all combining to change the way that companies can order services, buy goods, and even hire employees and contractors. This business process innovation exchange comes to you in conjunction with the Tradeshift Innovation Day held in New York on June 22, 2016.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy

To learn more about how new trends are driving innovation into invoicing and spend management, please join me in welcoming Pierre Mitchell, Chief Research Officer and Managing Director at Azul Partners, where he leads the Spend Matters Procurement research activities. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We're seeing an awful lot of disruption in how companies can buy and sell goods and how suppliers can reach new markets. What is causing this disruption?

Mitchell: The technology is disruptive. In the old days, a lot of procurement executives would just say, "The technology is really just enabling our existing process, it’s really just a tool to automate the processes that we're looking to do."

That’s starting to change. Technology is fundamentally disrupting value chains. You see the disintermediation that’s happening in the business-to-consumer (B2C) world. Amazon, Uber, and Airbnb are having big impacts, and that’s not limited to the B2C world. Now add someone like Tradeshift. What’s going to be the impact on the business-to-business (B2B) travel process, on the supply-chain process, on freight forwarding, on logistics? It’s going to be a major impact.

So, you can say that technology is just automating, but it’s not. It’s enabling new, much more innovative value chains, and it's truly disruptive. I know it’s a buzzword out there, but it really is.

Go and Skills

Gardner: From what you’ve heard at Tradeshift’s recent announcements around Go and Skills, what are the factors that combine in a way that you think are quite new or something that we haven’t seen before? [See related post, ChainLink analyst on how cloud-enabled supply chain networks drive companies to better manage finances, procurement.]

Mitchell: The Skills terminology is interesting. When you look at Skills, they're really talking about a fairly atomic or higher-level kind of business process as a service. And if you're going to do business process as service, it’s not just having a bunch of cloud apps, because cloud apps are basically a more efficient machine tool, if you will.

Just taking an on-premises app and deploying it in the cloud is great in terms of making the deployment more efficient, but an empty app is an empty app. What really brings the app to deliver a business outcome, to deliver that business process, is intelligence. That intelligence is going to come from the bottom up, based on analytics that turn information into insight, but it’s also going to come from how we take information and knowledge out of our minds and put it into that software.

That’s truly disruptive, and probably the topic of another conversation: what we do with 30 percent unemployment as the robots come to take all our jobs. But certainly, in this kind of knowledge-based area, where there is some level of repetitive tasks, the game is starting to change from on-premises apps, to software-as-a-service (SaaS) apps, to moving toward the cognitive and using those apps to really deliver business outcomes.

Gardner: I agree that this has wide implications across many industries and across many facets of any particular business. Just to focus on what Tradeshift is doing with Go, what’s interesting to me is that they’re combining accessible, but pertinent, real-time streamed travel data, analyzing that in the context of a data environment. But they’re also adding human travel agents, empowering humans who are very skilled in order to present very rapid returns for fairly complex business problems.

What is it about this combination of machine and human that is pushing boundaries today?

Mitchell: I like how they went about this solution. First of all, they started with the business problem and the outcome, especially in mid-market organizations, but also for large enterprises. We want to focus on making the process of buying and traveling much easier and much more intuitive, but still obviously with some of the controls that you need to have in place.

The problem is that a lot of these processes have been very siloed across multiple places. You have your travel and expense reports, your purchasing cards (P-Cards), maybe an e-procurement system here and there, or maybe e-invoicing. You have all these different little channels dealing with bits and parts of the problem, but it hasn’t really come together as one seamless experience.

Seamless experience

The only way that you can make that experience seamless is to have this combination of domain expertise around the process, the software to support it, and then, more and more, the cognitive piece and the skills, being able to empower humans to do this process better.

Probably more of the repetitive tasks that those humans were previously doing will be more bot-enabled rather than human-enabled. That’s going to happen over time, but ultimately, that frees up the humans to do higher value-added activity, rather than just these rote tasks.

Gardner: My sense is that it will start with rote tasks, but it could very easily move up a value chain of intelligence. The other interesting thing to me is that they're using a messaging application, which people are very familiar with, and that brings a level of democratization, where almost anyone in the organization can take part.

Furthermore, what’s interesting is the ability to act on it very rapidly. So, when you create a virtual credit card, you're able to pay for something as rapidly as you're able to find it. It really brings decision-making and execution down to a fundamental level of whoever in the business needs to act can act, and it removes all those middle layers. To me, that’s a fairly impressive productivity benefit.

Mitchell: What’s nice about it is that if you look at the changing workforce now, Millennials are entering the workforce. They're highly messaging-based. So, it’s really accommodating a multichannel world. The new UI with the changing workforce is going to be messaging-based, but just because it’s quick, easy, and real-time, and it’s in a metaphor that they’re familiar with, doesn’t mean that your need for controls goes away.

The platform capabilities that Tradeshift is increasingly bringing to bear take these little atomic services -- doing a budget check in real time, or taking what you’re asking for and turning that information into a commodity code or a merchant code -- and translate all of that complexity on the back end.

That doesn’t go away. You're just shielding the end-users from it and allowing them to work in a style that’s familiar to them. Too often, it’s been a trade-off between ease of use and high controls. If you can bring those two together, especially for this changing workforce, that’s a huge win-win.
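To make the atomic-service idea concrete, here is a minimal sketch, in Python, of the kind of real-time budget check Mitchell describes. Everything in it (the function name, the budget table, the response fields) is a hypothetical illustration, not a Tradeshift API; the point is that a messaging bot can call one small, self-contained service and hand the requester an answer they can act on immediately.

```python
# Hypothetical "atomic" budget-check service that a messaging bot could
# call in real time. All names and figures are invented for illustration.

# Remaining budget by (cost center, spend category).
REMAINING_BUDGET = {
    ("marketing", "travel"): 6_800.00,
    ("marketing", "events"): 1_250.00,
}

def budget_check(cost_center: str, category: str, amount: float) -> dict:
    """Return an immediate decision for a purchase request."""
    remaining = REMAINING_BUDGET.get((cost_center, category))
    if remaining is None:
        return {"decision": "escalate", "reason": "unknown budget line"}
    if amount <= remaining:
        return {"decision": "approve", "remaining_after": remaining - amount}
    return {
        "decision": "needs_approval",
        "reason": f"exceeds remaining budget by {amount - remaining:.2f}",
    }

# A chat bot would surface this directly in the conversation:
print(budget_check("marketing", "travel", 1_200.00))
# {'decision': 'approve', 'remaining_after': 5600.0}
```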

Gardner: We hear a lot these days about the need for more productivity in our economy in general in order to create a better standard of living and increased wages and so forth. It seems to me that for many years, maybe generations, big businesses had an advantage over smaller business. They've been able to integrate processes, have efficiencies of scale, and buy and sell at scale.

But now, when you look at some of these technologies like Tradeshift has brought to bear, maybe mid-market and small companies will get an advantage. They can be fleet, agile, and use these services and cut their costs, while being innovative all along.

Do you share my sense that maybe this is a day and age where the smaller companies have an advantage?

Level of orchestration

Mitchell: Yes, and no. I would probably vote for the school of piranhas over the shark any day, but for those piranhas to win, they have to be able to assemble with each other at will. That requires a new level of orchestration and of standing up business processes quickly, beyond what’s been available in the past.

So, taking a traditional enterprise architecture and trying to stand up these cloud-enabled, API-driven services in the cloud that are getting increasingly intelligent isn't possible with the older technology.

I'm with you, and it does require a new class of technology to stand up these new value chains and these business networks.

Gardner: I suppose there's nothing really stopping even the largest companies from bringing some of these atomic services to bear inside their organizations. Yes, you have to change some processes, but it seems to me that they might not have a choice when their competition gets there first.

Mitchell: Absolutely. There is so much activity going on right now around digital supply chain and digital disruption. Look at what’s happening to the supply markets. They're getting digitized, and the supply chains are getting digitized.

So, who are the folks who are really responsible for helping the organization tap innovation from those supply markets? Hopefully, procurement is taking a leadership role in doing that. There's a real fork in the road here for procurement to say, "Look, it’s time to help educate our stakeholders about how these value chains are going digital. How can we tap that?"

By the way, procurement is a service provider, too, and you are only going to get so much budget. So, if you can figure out some disruptive ways to carve off stuff that makes absolutely no sense for you to be doing on an ongoing basis, you can really help automate that away, so that you can focus your time on really going deep in certain categories, in innovation projects, and really doing things are really going to make a difference.

The biggest cost in procurement is the opportunity cost of wasting your time on low-value activities, such as cost-center stuff, and not really doing the true profit-center innovative kinds of things. Ultimately, you have to evolve or you're going to die. "Stay above the API," some people say.

Gardner: It sure seems like we’re now in a period where procurement can rise and become an evangelist within organizations for innovation across many different dimensions of the business that could have vast savings, but also put them in a highly competitive position when they could otherwise be disrupted.

So, to the procurement people, "Go get them," right? [See related post, ChainLink analyst on how cloud-enabled supply chain networks drive companies to better manage finances, procurement.]

Can't do it alone

Mitchell: Absolutely. And you have to work with IT and everybody else and work with your suppliers, too. You can’t do it alone, but what’s nice is that you’re finally starting to see some better options out there -- a much bigger utility belt of tools that you can use to kind of make it happen, because otherwise, it’s just not possible.

Gardner: Last point, Pierre. It seems like it’s incumbent upon organizations to get a bit more experimental. There's such a wide variety of new services coming on board. They might not want to bite off the whole enchilada, but do you share my opinion that being experimental, doing pilot projects, and trying new things is extremely important these days?

Mitchell: Absolutely. This whole notion of self-funding has become part of the new normal. The idea is: what can you actually do in the short term that adds some new incremental value, demonstrates credibility, engages your stakeholders, and, in doing so, unlocks getting to the next level? Then you can build upon that, or, if it didn’t work, you redirect, but you need to work toward a long-term vision.

This is where platforms, architecture, and thinking some of this through are important, so that you can do things in the short term and get some business results, while working toward a more flexible and open architecture so that you have options. Because in procurement, and for the stakeholders, it’s all about having options and flexibility. That’s what enables agility: having those options.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Tradeshift.

You may also be interested in:

They Know What You’re Watching (And Why You’re Watching It)
How to Build the Internal Reputation of Your Insight Team
How Big Data Completely Transformed the Insurance Industry
Are CEOs Missing out on Big Data’s Big Picture?
BBBT to Host Webinar from GoodData on Analytics as a Profit Center
How Big Data Can Help You Recruit the Prime Candidate
How Allegiant Air solved its PCI problem and got a whole lot better security culture, too

The next BriefingsDirect security market transformation discussion explores how airline Allegiant Air solved its payment card industry (PCI) problem -- and got a whole lot better security culture to boot.

When Allegiant needed to quickly manage its compliance around the Payment Card Industry Data Security Standard, it embraced many technologies, including tokenization, but the company also adopted an improved position toward privacy methods in general.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to share how security technology can lead to posture maturity -- and then ultimately to cultural transformation with many business benefits -- we're joined by Chris Gullett, Director of Information Assurance at Allegiant Air in Las Vegas. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let's begin at a high level. What are the major trends that are driving a need for better privacy and security, particularly when it comes to customer information, and not just for your airline, but for the airline industry in general?

Gullett: The airline industry in general has quite a bit of personally identifiable information (PII). When you think about what you have to go through to get on the plane these days, everything from your full name, your date of birth, your address, and your phone number to your flight itinerary is going in the record.

There is a lot of information that you would rather not have in the public domain, and the airline has to protect it. In fact, there have been a couple of data breaches involving major airlines and things like frequent-flyer programs. So, we have to look carefully at how we interact with our customers and make sure that data is incredibly safe. We just don't want to take the brand hit that would occur if data leaked out.

Gardner: At the same time, we’re enjoying much better benefits by attaching more data to transactions and to processes; we're able to cross organizational boundaries. And so, the user-experience benefits of having more data are huge. We don't want to back off from that, but we do want to make sure that data is protected.

What are some of the major ways we can recognize the need for better data uses, but keep it protected? Can they be balanced?

Technology fronts

Gullett: The airline industry is moving forward on a lot of technology fronts. Some airlines, for example, are using mobile devices to welcome specific customers on board with a complete history of how good a customer they are to that particular airline, so they can provide additional services in the air.

Other airlines are using beaconing [location] technologies, which I think is kind of cool. If you have the airline's mobile app on your phone and you're transiting through the airport, how cool is it to know where you are and how long it's taking you to get through security? The airline might then adapt at the gate, knowing whether or not there are going to be problems in boarding that particular plane.
There are a lot of different data points that are being collected and used now with different airlines handling them in different ways. In any event, the need for privacy is important, especially in the European Union (EU), which has incredibly tight data-privacy protection laws.

Gardner: We've talked about that on this podcast series. Now, the answer isn’t just the old thinking around security, where we'll just wall it off, or we'll use as little data as possible. Instead, we need to have more data in more places -- even down at that mobile edge.

So, as we think about ways to accommodate our need for more data in more places, even everywhere, is there top-level thinking that goes along with being able to make the data private, but also usable?

Gullett: That's the balancing point. Everybody wants their data everywhere. Before, a data center protected data inside a tight, confined, hardened shell: a perimeter with a firewall and things like that. But we need data out to the edge where it's actually being consumed; that’s what has to happen these days.

Some airlines are putting consumer PII right in hands of the flight attendant on the plane. At Allegiant, for example, we're using mobile devices to accept credit cards on the plane. We're experimenting with a number of different technologies that fall into a category of Internet of Things (IoT), when you think about them. What they all have in common is that they're outside any possible perimeter.

So, you have to find a way to make every device have its own individual perimeter, and harden the data, harden the device, or some combination of the two.

Gardner: Let's hear more about your particular airline. Tell us about Allegiant Air and what makes it unique in the airline industry.

Regular profitability

Gullett: At Allegiant, we're up to 54 consecutive quarters of profit, which is unheard of in the airline industry. The famous phrase about the airline industry is, “How do you become a millionaire? You start with a billion dollars and you buy an airline.”

The profitability of airlines has been much in the news over the last couple of decades, because it's cyclical. Airlines fail, go into bankruptcy, or consolidate. There's been a lot of consolidation in the United States, with United taking on Continental, and Delta taking on Northwest as examples. Southwest taking on AirTran is another. Everybody has been in the game.

Allegiant is kind of off on its own. We've found an interesting niche that has very little direct competition on the routes that we serve, and that is taking vacationers to their favorite vacation destinations.

We connect small- and medium-sized markets -- markets like Kalispell, Montana or Indianapolis, Indiana, a medium-sized city. We'll take them to Florida, Las Vegas, or Los Angeles. We have about 19 vacation destinations now. We have about 115 cities overall. In fact, we serve more cities than Southwest, if you want to get a comparison on the size of the route map. And we're also taking the charter operators to three different countries in the Caribbean.

We have quite a different footprint. That adds up to about $1.3 billion in revenue a year, and from a profitability standpoint, Allegiant is regularly recognized as one of the most profitable airlines in the world.

Gardner: It sounds like most of your passengers, perhaps even all of them, are vacationers, not business travelers. Does that change anything when it comes to user experience, privacy, and data security?

Gullett: It doesn't change anything as far as the need to protect the data, but it creates a greater risk of brand damage from data breaches.

Consider the fact that our average customer flies with us once or twice a year. They are, in many cases, flying Allegiant, rather than driving to their vacation destination. Or maybe they're taking a vacation they wouldn't have otherwise because of Allegiant's low prices.

So what you have is “not-frequent travelers.” In fact, that would be kind of a name. If we were going to have a frequent-flyer program it would be the “not-frequent-flyer program,” because vacationing people just don't fly as frequently.

If I'm a business traveler, I am on so-and-so [airline], and they had a breach, I'm going to continue to fly them because I have marvelous status with their frequent-flyer program. Allegiant customers say, “Gee, I'm a little concerned about that and if they have a data breach, I think I'll drive instead.”

So the brand damage from a breach, I believe, is higher for our airline than some of the other airlines out there.

Everyone's responsibility

Gardner: Given how important it is to your business, to your brand, how do you rationalize these approaches to security to the larger organization? I know that's probably not as prominent a problem as it used to be, because we can see directly the business implications of security issues. But how do you make security everybody's responsibility? Is that something that you have been trying to do?

Gullett: First, we're very lucky at Allegiant to have incredibly broad support from the C-suite level and the board of directors for our security program. That's not a benefit that every company has, but we do, and it certainly makes life easier in developing the procedures and processes, and the technologies, necessary to protect our customer data.

We came into the business at Allegiant with the idea that we have the typical triad of people, process, and technology to deal with in the information security program -- the three legs on a stool. If you miss one of those, you are going to be on your butt on the ground because the stool isn't going to work very well.

We focused on technology and process early on, because those were the easy things. Those were the low-hanging fruit. We've really moved into more of a stage of being people-focused now. In fact, much of our budgetary spend is on security awareness for our people.

We really had to look at how we best introduce security awareness to the entire company, and to make the company more culturally sensitive to information security. That extends from the customer service agent who's checking you in at the ticket counter all the way up to the board of directors.

The [security leadership] has certainly chimed in and made our board more aware of problems concerning information security. Recently U.S. Senator Edward Markey (D-Massachusetts) has also introduced legislation that specifically targets cyber security in the United States domestic airline industry.

That need to protect the data has to be recognized, and the most important part of protecting the data is the people that are handling the data. Awareness is really a big part of our program now.

Gardner: How did PCI-compliance form a trigger for your organization? What did that change mean for you, and maybe you could explain how you have gone about it at the process, people, and technology levels?

Compliance requirements

Gullett: Well, God bless compliance, because I think I got my first information-security job thanks to an auditor telling someone that they needed an information security guy because of Sarbanes-Oxley. And I joined Allegiant because of PCI. These various compliance regulations have certainly done wonders for the job market in information security. I can only imagine what it’s like with data security and the EU General Data Protection Regulation (GDPR).

But, in regard to our travels into the world of PCI, Allegiant is also a unique airline in that the applications that run the airline are proprietary. We actually write them ourselves. We have a large development staff, and every aspect of the operation of the airline is run by custom software that we control and we write.

There are a lot of benefits to that because it allows us to be very agile and flexible if we want to make changes, but there is a downside. Some of the code dates back to the green screen days of the 1990s, and that code was going to be very difficult to bring into compliance from a PCI standpoint. It was just not written with security in mind, and while it wasn’t directly handling credit-card data, it was in the process scope.
A big concern was how we were ever going to bring a significantly non-compliant custom app into compliance, which would take a great number of application-developer hours, and still meet a relatively tight schedule for becoming PCI-compliant. At the time, we looked at a number of different products out there and thought, we can't solve every problem right now, so let's bite off small chunks and take care of those.

The first thing that looked like it would be fairly easy to do, or at least straightforward from a technology standpoint, was tokenization. So our search was for how we could tokenize the cards that we were storing, and that led us to stateless tokenization. We compared a number of different products, but we looked at HPE [Secure] Stateless Tokenization, and that was ultimately our choice for tokenization.
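Gullett doesn't describe how the product works internally, and HPE Secure Stateless Tokenization is built on a proprietary static random-number table rather than the keyed hash used below. Still, as a rough sketch of the vaultless idea under those stated assumptions: derive the token deterministically from the card number plus a secret, preserve the first six and last four digits so downstream systems keep working, and store nothing.

```python
import hashlib
import hmac

# Toy sketch of "vaultless" (stateless) tokenization. Real products such
# as HPE SST use a static random lookup table, not this HMAC trick, but
# the essential property is the same: the token is a deterministic
# function of the card number plus a secret, so no token database
# ("vault") has to be stored, replicated, or protected.
SECRET = b"demo-secret-do-not-use"  # stand-in for the protected table/key

def tokenize_pan(pan: str) -> str:
    """Replace the middle digits of a PAN with HMAC-derived digits,
    keeping the BIN (first 6) and last 4 so legacy apps still work."""
    middle_len = len(pan) - 10
    digest = hmac.new(SECRET, pan.encode(), hashlib.sha256).hexdigest()
    # Map hex characters to decimal digits (slightly biased; fine for a demo).
    digits = "".join(str(int(c, 16) % 10) for c in digest)
    return pan[:6] + digits[:middle_len] + pan[-4:]

print(tokenize_pan("4111111111111111"))  # same input always yields same token
```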

Interestingly enough, while we were on our search for what the best tokenization product was, I happened to read a press release on a website that talked about format-preserving encryption as a new technology that was going to become available -- and that actually became HPE SecureData Web. We found that by accident; it wasn’t even a product that was available at the time. It was going to be targeted at card acquirers, and we actually had a hard time convincing the sales folks to sell it to us as a different type of end-user.

That solved our application problem because it allowed us to encrypt the data that was passing through those legacy apps. Between the tokenization and the format-preserving encryption (FPE) SecureData Web product, we were able to dramatically reduce the overall scope of PCI data, and that finally led us to become compliant.
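The transcript doesn't say which FPE construction HPE SecureData used; NIST later standardized FF1, a Feistel network over the digit alphabet. The toy below is only meant to show the property that matters here, that ciphertext keeps the plaintext's length and character set, so legacy apps that expect an n-digit field keep working. It is a simplified Feistel sketch, not a secure or NIST-conformant implementation.

```python
import hashlib
import hmac

# Toy Feistel-based format-preserving encryption over decimal strings.
# Illustrative only: real FPE (NIST FF1/FF3-1, or products like HPE
# SecureData) is far more careful about security. The point here is the
# format: an n-digit plaintext encrypts to an n-digit ciphertext.

def _round_value(key: bytes, rnd: int, half: str, width: int) -> int:
    """Keyed round function: HMAC of (round, half), reduced to `width` digits."""
    mac = hmac.new(key, f"{rnd}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(mac, "big") % (10 ** width)

def fpe_encrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    a, b = digits[: len(digits) // 2], digits[len(digits) // 2:]
    for i in range(rounds):
        c = (int(a) + _round_value(key, i, b, len(a))) % (10 ** len(a))
        a, b = b, str(c).zfill(len(a))  # swap halves each round
    return a + b

def fpe_decrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    n = len(digits)
    # After an even number of rounds the halves are back to (n//2, n - n//2).
    split = n // 2 if rounds % 2 == 0 else n - n // 2
    a, b = digits[:split], digits[split:]
    for i in reversed(range(rounds)):
        prev = (int(b) - _round_value(key, i, a, len(b))) % (10 ** len(b))
        a, b = str(prev).zfill(len(b)), a  # undo the swap
    return a + b

key = b"demo-key-do-not-use"
token = fpe_encrypt(key, "4111111111111111")
assert fpe_decrypt(key, token) == "4111111111111111"
print(token)  # still 16 decimal digits
```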

Gardner: Now, this sounds like, with custom apps, it could take months, even quarters. How much time did it take you, and how important was that to you?

Gullett: The time to implement any application that is outside of what we develop ourselves is always a concern, because that takes our developers, who now have to serve as integrators, off of projects that might lead to higher revenues for the airline or to solve a problem or offer a feature that the airline would like to do. And we're very focused on improving the overall business.

We found that the overall implementation of the HPE products was very efficient. In fact, I think we had one-and-a-half full-time equivalent (FTE) application developers on the project. It took them about three months, and that was integrating with multiple payment-card interfaces. I think we started at the end of October and we went live at the end of January. So it was pretty lightweight from the standpoint of integrating significant products into our ecosystem.

Stateless tokenization

Gardner: Secure stateless tokenization can often take organizations like yours out of the business of storing credit card information at all. You're basically passing it through and using various technologies to avoid being in a position where you could have a privacy problem. Was that the case with you, and did you extend that to other types of data?

Gullett: That was one of the marvelous parts of bringing the system online: it took us from storing many, many millions of credit card numbers down to absolutely zero. We store no payment card numbers at this time. Everything is tokenized. The card data comes into our internal payment process, the system sends it off to the card acquirer to determine whether it should be approved or denied, and it's immediately tokenized. That has been a real win for the company -- just much less to worry about from the card standpoint.

Now, from the standpoint of how we can encrypt or protect other data, we're looking at a number of possible scenarios now that we've gotten past the PCI hurdle. For example, while we don't fly internationally with scheduled service, we do handle charters for other companies. At some point, the company may well fly to international locations, and we will be collecting passport numbers. That would be the kind of thing we would also look at protecting, in effect using some type of format-preserving encryption, so that we're not storing the actual data.
We store no payment card numbers at this time. Everything is tokenized.

We've gained a lot of experience with the product over the last three years, so that will be a fairly easy implementation that offers a great deal of protection. But we can also extend that out to customer names, birth dates, and all kinds of other things, and we are looking at that now.

Gardner: HPE SecureData Web and Page-Integrated Encryption are being used by a lot of folks for webpages, of course, and browser-based apps, but they can also provide a secure way to go to mobile. Many people are interested in the mobile web, not necessarily just native apps. Is that something you have been able to use as well -- SecureData Web as a way to get to the mobile edge securely?

Gullett: We do use SecureData Web in our mobile applications. We've been using it since we initially integrated the product several years ago. In fact, that was one of the data points we had to protect from Day One. So we have the app going out to the Internet, grabbing the one-time encryption key, encrypting the data in the application itself on the mobile device -- the Android device, the Apple device -- and then sending that encrypted data back to our payment-processing system, passing through any systems in the middle in encrypted form.
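
Conceptually, that page- or app-integrated pattern works like the sketch below: the device fetches a fresh public key, encrypts the card data locally, and every hop in the middle sees only ciphertext until the payment backend, which alone holds the private key, decrypts it. This sketch uses ordinary RSA-OAEP from the Python cryptography package purely to show the shape of the flow; the real SecureData SDKs run as JavaScript or native mobile code against HPE's own key service and formats.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Key service: the public half is what the mobile app fetches as its
# one-time encryption key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# On the device: encrypt the card number before it ever leaves the app.
ciphertext = public_key.encrypt(b"4111111111111111", OAEP)

# Web tiers, queues, and logs in the middle handle only ciphertext.

# Payment backend: the only place the private key, and thus the PAN, exists.
assert private_key.decrypt(ciphertext, OAEP) == b"4111111111111111"
```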

We also have a subsidiary that is not directly airline-related and is developing a payment-processing app for the business space it works within. Because they're developing a true native application for iOS, they're going to develop with the SecureData Web SDK that's been released for mobile devices, which will certainly be much easier.

Gardner: Chris, we hear a lot of times that security is a cost center, that people don't necessarily see it as a way of bolstering business value or growing revenue streams. It sounds like when you employ some of these technologies and create a better posture, it frees you up to innovate and transform. Has that been the case with you? Can you point to any ways in which you've actually been able to increase revenue? I know that for airlines it's a fairly tight margin on the travel itself, but some of those ancillary services can be make or break; is that the case here?

Unbundled travel

Gullett: Allegiant is a leader in what we call unbundled travel; we would rather sell you exactly what you want. When an airline says it offers free bags, for example, it's not really offering you free bags. It costs money to put those bags in the hold, to put them in the overhead, and to carry them on the plane with you. Bags add weight, and weight costs fuel. So there is an expense associated with every aspect of your travel on an airline today; that's just the way it is.

Allegiant’s unbundled services allow us to say to a traveler, “Well, sure, if you want to get on the plane and you want to bring something and put it under the seat, we'll sell you a seat on the plane. If you want to bring 40 pounds of baggage to put in the hold, we'll charge for that,” because not everybody wants to bring a 40-pound bag to put in the hold.

The thing about Allegiant, with its proprietary application that runs the airline, is that if we see an opportunity to offer a new service or a new ancillary service to the customer, we don't have to go to a third party and say, would you please add this so we can offer this feature to the customer. We can just do it.
We were able to implement the necessary controls with the HPE products in about three months, with about one-and-a-half FTEs.

At the time we were worrying about how we were going to accomplish PCI compliance, we also had a project to begin charging for carry-on bags, the bags that go up in the overhead. We could either spend a lot of time retrofitting the legacy app for PCI, or spend that time generating revenue by offering customers this new carry-on bag feature.

The seats on the plane, everything associated with the airline, have a very quick expiration date. When the plane takes off, an empty seat has no value and it will have no value ever again. When a seat takes off empty, we can’t sell that person a Coke, we can’t sell them a bag, we can’t sell them a [rental] car, we can’t sell them a hotel room; that's gone forever. So, speed to market is incredibly important for the airline industry and it may be more important for Allegiant.

In the case of our travails with PCI and how we were going to solve our PCI-compliance issue, we wanted to be able to add this feature to charge for carry-on bags. So now you have a choice. Do you spend a lot of time integrating and cleaning up legacy apps for PCI? Or do you move ahead with something that could bring in millions of dollars in revenue? The answer, of course, is that you have to be compliant with PCI. So, we have to do that first.

The fact that we were able to implement the necessary controls with the HPE products in about three months, with about one-and-a-half FTEs, meant that other application developers could spend time on that carry-on bag feature in our software, allowing us to go to market with that sooner than we would have otherwise.

Look at it this way: we went to market three months earlier than we would have if we had stopped everything to do nothing but PCI compliance. Because we were able to use that time to develop the carry-on bag charging service instead, that is millions of dollars that would never have been captured in any other way, because the inventory expires. Once the plane leaves the ground, you can't charge anymore.

So there was a real delivery to the bottom line, as far as a profitable feature was concerned, from being able to roll out that carry-on bag feature sooner. And using the HPE Security products made the integration much easier, quicker, and less resource-intensive.

Where next?

Gardner: So going back to our opening sentiment that you can't just wall off data: the more data, the better for your business, and the more places that data can safely get to, the better. You've demonstrated that this is also core to business innovation, such as growing revenue in new ways and being agile and adaptive in very competitive markets. That's a very interesting example.

Before we sign off, Chris, where do you go next? How do you think your security steps so far have enabled you to be more fleet, more agile, and perhaps find other business benefits?

Gullett: There is no substitute for delivering innovative solutions to problems that are well-known throughout the business; that is what builds your credibility with the executives and the board of directors. Certainly, solving our PCI-compliance issues quickly, and without an impact on the operations of the airline, got a lot of exposure with the company's executives and the board, and it brought information-security awareness to a level that we had not previously enjoyed at the airline.

If you talk to our executives and our board, they're going to tell you information security is very important, and I believe they believe that. But the fact that you can demonstrate you can deliver solutions that don't break the bank, and that do what they say they do, means a lot.

Going back to that three-legged stool, the technology, the HPE Security products that we implemented for PCI, is just one part. For example, if folks aren't handling the credit cards properly, or if they're not adequately protecting the data they have on their mobile devices out in the field, our risk is just as great as a credit-card data breach would have been before we implemented the tokenization. These are all things we continue to worry about.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How Retailers Can Use Right Time Marketing

With consumers more connected than ever, retailers must constantly shift strategies to keep up with consumers' ever-changing needs and preferences. Mobile seems to be the holy grail of marketing this year, social is more imperative than ever, and multi-channel and omni-channel strategies are expected of every retailer who wants to keep up with today's consumers.

Just as notable, where real-time and instantaneous messaging were once considered the answer to every marketing situation, retailers must now strive to take personalization a step further and implement strategies to market to a consumer at the RIGHT time, with the right message, and through the right channels – which is the concept behind Right Time Marketing. So while you may not have a crystal ball to determine the best strategy for each customer and prospect that interacts with your brand, the right blend of marketing data, technology, and analytical solutions can get you close to the right answers.

What is Right Time Marketing?

Right Time Marketing is about identifying the right audience (those who are in-market and most likely to convert) and using marketing data and technology processes to drive optimally timed contact for the best ROI. Right Time Marketing begins with ...


Read More on Datafloq
4 Examples of Big Data Implementation in Customer Micro-Segmentation

The digital age is rife with possibilities for organizations to further their business and improve engagement and conversion. One great improvement technology has allowed us to develop is big data: here, that simply means data that segments customers in a clear and organized way.

There are various forms of data at your disposal; it's just a matter of choosing the one that best fits your current purpose. Some of the available types include the following:

Activity-based – You can tap into a number of resources to obtain this user information. Some of the most common are website traffic, purchase history, call and mobile data, and response to incentives.

Social Network Profiling – While this is largely offline information, it is also a form of data you can retrieve and look into to better understand your customer's profile. A few examples would be work history or group membership.

Sentiment Data – Emotions are a large part of the customer's experience. Finding out what makes people tick is the first step to piquing their interest. Look into the products and companies they like or follow on social media, their comments and reviews, and customer-service records.

Plenty of companies are using the method of real-time micro-segmentation for more efficient advertising and ...


Read More on Datafloq
ChainLink analyst on how cloud-enabled supply chain networks drive companies to better manage finances, procurement

The next BriefingsDirect business innovation thought leadership discussion focuses on how companies are exploiting advances in procurement and finance services to produce new types of productivity benefits.

We'll now hear from a leading industry analyst on how more data, process integration, and analysis efficiencies of cloud computing are helping companies to better manage their finances in tighter collaboration with procurement and supply-chain networks. This business-process innovation exchange comes in conjunction with the Tradeshift Innovation Day held in New York on June 22, 2016.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. 

To learn more about how new trends are driving innovation into invoicing and spend management, we're joined by Bill McBeath, Chief Research Officer at ChainLink Research in Newton, Mass. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's going on in terms of disruption across organizations that are looking to do better things with their procurement-to-payment processes? What is it that's going on, that's focusing them more on this? Why is the status quo no longer acceptable?

McBeath: There are a couple of things. There is a longer-term trend toward digitization, moving away from paper and manual processes. That's nothing new but, when we do research, we always see a huge percentage of companies that are still either on paper or, even more commonly, using a mix. They have some portion of their stuff on paper and another portion that's automated. That work is foundational and still in process.

McBeath
A big part of that is getting the long tail of suppliers on board. The large suppliers have the internal resources to get hooked up with these networks and systems and get automated. Smaller suppliers -- think of companies with fewer than 100 people -- and even mid-sized suppliers may have no dedicated IT resources. They may have a very limited ability to do these things.

That's where the challenge is, and that's where we see some of the innovations in helping lower the barriers for them. It's helping a company that's trying to automate all of its invoices or other documents -- which can be a mix of paper, fax, e-mail, and EDI -- gradually move that base of trading partners over to some sort of automation, whether it's through a portal or starting to directly integrate their systems.

So that ability to get the long tail in, so that ultimately everything comes in digitally, is one of the things we're seeing.

Common denominator

Gardner: In order to get digital, as you put it, it seems like we need a common-denominator environment that all the players -- the suppliers, the buyers, the partners -- can play in. It can't be too confining, but it can't be too loosey-goosey and insecure either. Have we found that balance: a platform that's suitable for these processes, but that doesn't stifle innovation and doesn't push people away with rigid rules?

McBeath: I want to make a couple of points on that. One is about the network approach versus the portal approach. They are distinct approaches. In the portal approach, each buyer sets up their own portal, and that's how they try to get that long tail in. The problem for the suppliers is that if they have dozens or hundreds of customers, they now have dozens or hundreds of portals to deal with.

The network is supposed to solve that problem with a network of buyers and suppliers. If you have a supplier who has multiple buyers on the network, they just have to integrate once to the network. That's the theory, and it helps, but the problem there is that there are also lots of networks.

No one has cracked the nut yet, from the supplier's point of view, on how to avoid dealing with all these multiple technologies. There are a couple of companies out there trying to build a supplier capability to integrate once into one network, which then goes out and connects to all the other networks. So, people are trying to solve that problem.

Gardner: And we have seen this before with Salesforce.com, for example: an environment to develop on, providing services that people would use in the customer relationship management (CRM) space. We saw in June that Tradeshift has come out with an app store. Is this what you're getting at? Do you think the app-store model, with a development element to it, is an important step in the right direction?
The salesforce.com or Tradeshift approach is different. It's not just a set of APIs to integrate to their application; it's really a full development kit, so that you can build applications on top of that.

McBeath: I mentioned there were two points. The network point was one point, and the second one is exactly what you're talking about, which is that you may have a network, but it's still constrained to just that solution provider's functionality.

The Salesforce.com or Tradeshift approach is different. It's not just a set of APIs to integrate to their application; it's really a full development kit, so that you can build applications on top of that.

There's a bit of a fuzzy line there, but there are definitely things you can point to. Are there enough APIs that you can write an application from scratch? That's question number one. Does that include UI integration? That would be the second question I would ask, so that when you develop using their UI APIs and UI guidelines, it actually looks as fully integrated as if it were one application.

There's also a philosophy point of view. For more and more large solution providers, the light bulb is going on: they can't necessarily build it all. Everyone has had partners, so there's nothing new about partnering, having ISV partners, and integrating. But it's a wholesale shift to building a whole toolkit, promoting it, making it easy, and then trying to get others to build those pieces. That's a different kind of approach.

Gardner: So clearly, a critical mass is necessary to attract enough suppliers that then attracts the buyers, that then attracts more development, and so on. What's an important element to bring to that critical mass capability? I'm thinking about data analytics as one, mobile enablement, and security. What's the short list of critical factors that you think these network and platform approaches need to have in order to reach critical mass?

Critical mass

McBeath: I would separate it into technology and industry-focused things, and I'll cover the second one first. Supplier communities, especially for direct materials, tend to cluster around industries. What I see for these networks is that they can potentially reach critical mass within a specific industry by focusing on that industry. You get more buyers in the industry, more suppliers in the industry, and now it becomes almost the de facto way to do business within that industry.

Related to that, there are sometimes very industry-specific capabilities that are needed on the platform. It could be regulated industries like pharma or chemicals that have to do certain things differently from other industries. Or it could be aerospace and defense, which has super-high security requirements and may look for robust identity-management capabilities.

That would be one aspect of building up a critical mass within an industry. Indirect is a little more of a horizontal play; indirect suppliers tend to go more across industries. In that case, it can be just the aggregate size of the marketplace, but it can also be the capabilities that are built in.
Some companies are trying to provide more value to suppliers, not just in terms of how they market themselves, but then also outward-facing supply-chain and logistics capabilities.

One interesting part of this is the supplier's perspective. For some of these networks, what they offer suppliers is basically a platform to get noticed and to transact. But some companies are trying to provide more value to suppliers, not just in terms of how they market themselves, but also in outward-facing supply-chain and logistics capabilities. They're building rich capabilities that suppliers might actually be willing to pay for, instead of just paying for the honor of transacting on a platform.

Gardner: Suffice it to say, things are changing rapidly in the procure-to-pay space. What advice would you give both buyers and suppliers when it comes to looking at the landscape, making evaluations, and making good decisions about being on the leading edge of disruption -- taking advantage of it, rather than being injured or negatively impacted by it?

McBeath: That can be a challenging question. Eventually, the winners become quite obvious in the network space, because certain networks, as I mentioned, will dominate within an industry. Then it becomes a somewhat easy decision.

Before that happens, you're trying to figure out whether you're going to bet on the right horse. Part of that is looking at the kinds of capabilities on the platform. One that's important, going back to this API extensibility, is that it's very difficult for one platform to do it all.

So, you'd look at whether they can do 80 percent of what you need. But do they also provide the tools for the other 20 percent? Even though that 20 percent may be a small amount of functionality, it may be functionality that is critical for your business, that you really can't live without, or that you get high value from. If the platform gives you the ability to build that yourself, so that you can really get the value, that's always a good thing.

Gardner: It sounds like it would be a good idea to try a lot of things on, see what you can do in terms of that innovation at the platform level, look at the portal approach, and see what works best for you. We've heard many times that each company is, in fact, quite different, and each business grouping and ecosystem is different.

Getting the long tail

McBeath: There's a supplier perspective, and there is a buyer perspective. Besides your trading partners on the platform, from a buyer’s perspective, one of the things we talked about is getting that long tail.

Buyers should be looking at, and interested in, what level of effort it takes to onboard a new supplier, how automated that can be, and how attractive it is to the supplier. You can ask or tell your suppliers to get on board, but if it's really hard to do, if it's expensive for them, or if it takes a lot of time, then it's going to be like pulling teeth. Whereas if it's easy to do, there are benefits for the suppliers, and it actually helps them, then it becomes much easier to get that long tail of suppliers onboard.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Tradeshift.

You may also be interested in:

Data Theft Is A Problem And Needs Quick Redressal

Data theft is a serious problem at the consumer as well as the enterprise level. A study published by Pew Research in January 2014 showed that nearly 18 percent of American adults online have had confidential information, including Social Security numbers, credit-card data, and bank-account information, stolen. The numbers are a lot scarier among enterprise customers. Close to 70 percent of respondents in a recent Accenture study reported having experienced attempted or successful theft or corruption of data by company insiders during the prior 12 months.

Whether it is done by company insiders, competitors, or hackers, data theft can seriously damage the credibility of an organization and can have a direct impact on the bottom line. While the Accenture study points out that the threat is only likely to worsen from here, what's worse is that enterprise investment in data security remains abysmal due to insufficient budgets and resources. Nearly 36 percent of respondents in the study believe that management considers investment in cyber-security an unnecessary expense.

One of the main reasons why a lot of these businesses don't seem to be addressing the low investment in data security appears to be 'moral hazard': the practice of taking more risks ...


Read More on Datafloq
Why Your Big Data & IoT Security Are Vulnerable (And What to Do About It)

Big data has been around for a long time.

With the introduction of the Internet of Things, this technology has quickly evolved. As data continues to grow, IoT becomes more ingrained in our everyday lives. Gartner predicts that by 2020 there will be 21 billion connected devices globally. Moreover, IDC's Digital Universe study projects that the world's data will grow to ten times what it is now, and IoT will contribute at least 10 percent of this massive expansion.

Major industries will continue to grow, and they will be worth billions of dollars. At the same time, security failures and breaches will become more hazardous. In fact, Nomura Research found that CIOs are spending a massive amount of money on security.


Source: Nomura Research

Security is always an issue. But what role do big data and IoT play?

Sources of Vulnerability

Resolving security issues requires identifying potential vulnerabilities. Once they are discovered, a strategic plan can be created to prevent future problems.
 

Source: Allerin Tech

Security challenges are a side effect of our increasingly interconnected world. In the past, companies worried less about whether their data was protected. Most of the data was stored internally. But as more companies rely on cloud storage systems, it becomes difficult to adequately protect data.

Big Data

Data is no longer stored in physical locations that you control. ...


Read More on Datafloq
How Big Data Has Completely Transformed Wall Street

One of the biggest advances in big data in recent years has come from the financial sector. This should come as no surprise to most, as there are incredible amounts of money to be made in finance. World markets run on data, with investors, funds, and governments all trying to use the information at hand to value investments and risk. Interpreting the data correctly can lead to billions of dollars in instant profits, while incorrect interpretations can ruin entire companies. A good example of this is Delta Air Lines, which recently reported nearly half a billion dollars in losses after it misjudged oil data and made some bad hedges.

So how has big data transformed the industry, and where is it going?

The number one thing financial-services firms have taken to using big data for is risk management. Large financial firms are constantly balancing the need to make profits off of investments, loans, and other tools against the need to avoid risks that could threaten the future of the company. A good example of this is the housing crisis of 2008. Many investment banks and firms did not have an accurate picture of the risk they were taking on, and this hole in ...


Read More on Datafloq
What to Know to Avoid Hiring a Bad Data Scientist

One of the biggest challenges companies face today is the big data talent gap. Businesses want to use big data, but in order to do that, they need to have the right personnel on hand, capable of performing those tasks. This is a tall order for many organizations, mostly because good data scientists are hard to come by. The demand for talented data scientists is high, but sadly the supply is low. Universities and other educational institutions are working hard to teach a new generation of data scientists, but we're still years away from even coming close to having supply reach demand. That means businesses have tough decisions to make when it comes to hiring a data scientist. While they certainly want to make the best hire possible, it's just as easy to get that decision wrong. With this in mind, here are several things to remember to make sure you don't make the mistake of hiring a bad data scientist.

Perfection Doesn’t Exist

A data scientist that has mastered every single programming language, has become an expert in each big data platform, and is the top of the class at statistics, mathematics, and development is often called a unicorn. ...


Read More on Datafloq
How European GDPR compliance enables enterprises to both gain data privacy and improve their bottom lines

The next BriefingsDirect security market transformation discussion focuses on the implications of the European Parliament’s recent approval of the General Data Protection Regulation or GDPR.

This sweeping April 2016 law establishes a fundamental right to personal data protection for European Union (EU) citizens. It gives enterprises that hold personal data on any of these people just two years to reach privacy compliance -- or face stiff financial penalties.

But while organizations must work quickly to comply with GDPR, the strategic benefits of doing so could stretch far beyond data-privacy issues alone. Attaining a far stronger general security posture -- one that also provides a business competitive advantage -- may well be the more impactful implication.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We've assembled a panel of cybersecurity and legal experts to explore the new EU data privacy regulation and discuss ways that companies can begin to extend these needed compliance measures into essential business benefits.

Here to help us sort through the practical path of working within the requirements of a single digital market for the EU are: Tim Grieveson, Chief Cyber and Security Strategist, Enterprise Security Products EMEA, at Hewlett Packard Enterprise (HPE); David Kemp, EMEA Specialist Business Consultant at HPE, and Stewart Room, Global Head of Cybersecurity and Data Protection at PwC Legal. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, the GDPR could mean significant financial penalties in less than two years if organizations don’t protect all of their targeted data. But how can large organizations look at this under a larger umbrella, perhaps looking at this as a way of improving their own security posture?

Grieveson: It’s a great opportunity for organizations to take a step back and review the handling of personal information and security as a whole. Historically, security has been about locking things down and saying no.

Grieveson
We need to break that mold. This is an opportunity, because it's pan-European, to take a step back, look at the controls we have in place, look at the people and the technology holistically, and identify opportunities where we can help drive new revenues for the organization, while doing it in a safe and secure manner.

Gardner: David, is there much difference between privacy and security? If one has to comply with a regulation, doesn’t that also give them the ability to better master and control their own internal destiny when it comes to digital assets?

Kemp: Well, that’s precisely what a major European insurance company headquartered in London said to us the other day. They regard GDPR as a catalyst for their own organization to appreciate that the records management at the heart of their organization is chaotic. Furthermore, what they're looking at, hopefully with guidance from PwC Legal, is for us to provide them with an ability to enforce the policy of GDPR, but expand this out further into a major records-management facility.

Gardner: And Stewart, wouldn't your own legal requirements for any number of reasons be bolstered by having this better management and privacy capability?

Room: The GDPR obviously is a legal regime. So it’s going to make the legal focus much, much greater in organizations. The idea that the GDPR can be a catalyst for wider business-enabling change must be right. There are a lot of people we see on the client side who have been waiting for the big story, to get over the silos, to develop more holistic treatment for data and security. This is just going to be great -- regardless of the legal components -- for businesses that want to approach it with the right kind of mindset.

Kemp: Just to complement that is a recognition that I heard the other day, which was of a corporate client saying, "I get it. If we could install a facility that would help us with this particular regulation, to a certain extent relying once again on external counsel to assist us, we could almost feed any other regulation into the same engine."

Kemp
That is very material in terms of getting sponsorship, buy-in, and interest from the front of the business, because this isn't a facility simply for this one particular type of regulation. There's so much more that it could be engaged on.

Room: The important part, though, is that it’s a cultural shift, a mindset. It’s not a box-ticking exercise. It’s absolutely an opportunity, if you think of it in that mindset, of looking holistically. You can really maximize the opportunities that are out there.

Gardner: And because we have a global audience for our discussion, I think that this might be the point on the arrow for a much larger market than the EU. Let’s learn about what this entails, because not everyone is familiar with it yet. So in a nutshell, what does this new law require large companies to do? Tim, would you like to take that?

Protecting information

Grieveson: It’s ultimately about protecting European citizens' private and personal information. The legislation gives some guidance around how to protect data. It talks about encryption and anonymization of the information, should that inevitable breach happen, but it also talks about how to enable a quicker response for a breach.

To go back to David's point earlier, the key part of this is really records management: understanding what information you have and where, and classifying that information. Knowing what you need to do with it is key, ultimately because of the bad guys out there. In my world as an ex-CIO and ex-CISO, I was always looking to protect the organization from the bad guys, who kept changing their methods to monetize what they steal.

They're ultimately out to steal something, whether it be credit card information, personal information, or intellectual property (IP). Organizations often don’t understand what information they have where or who owns it, and quite often, they don’t actually value that data. So, this is a great approach to help them do that.

Gardner: And what happens if they don’t comply? This is a fairly stiff penalty.

Grieveson: It is. Up to four percent of the parent company’s annual revenue is exposed as part of a fine, but also there's a mandatory breach notification, where companies need to inform the authorities within 72 hours of a breach.
We're seeing that trend going in the wrong direction. We're seeing it getting more expensive. On average, a breach costs in excess of $7.7 million, but we are also seeing the time to remediate going up.

If we think of the Ponemon Report, the average time that the bad guy is inside an organization is 243 days, so clearly that's going to be a challenge for lots of organizations that don't know they've been breached. And the remediation afterwards, once that inevitable breach happens, takes on average, globally, anywhere from 40 to 47 days.

We're seeing that trend going in the wrong direction. We're seeing it getting more expensive. On average, a breach costs in excess of US$7.7 million, but we are also seeing the time to remediate going up.

This is why I talk about that cultural change in thinking. We need to get much smarter about understanding the data we have and, when we have that inevitable breach, protecting the data.

Gardner: Stewart, how does this affect companies that might not be based in EU countries but that deal with European customers, supply-chain partners, alliances, the wider ecosystem? Give us a sense of the concentric circles of impact, inside the EU and beyond.

Room: Yes, the law has global effect. It's not just about regulating European activities or protecting or controlling European data. The way it works is that any entity or data controller that's outside of Europe and that targets Europe for goods and services will be directly regulated. It doesn't need to have an establishment, a physical presence, in Europe; it targets the goods and services. Or, if that entity profiles and tracks the activity of European citizens on the web, it's regulated as well. So, there are regulated entities that are physically not in Europe.

Any entity outside of Europe that receives European data or data from Europe for data processing is regulated as well. Then, any entity that’s outside of Europe that exports data into Europe is going to be regulated as well.

So it has global effect. It’s not about the physical boundaries of Europe or the presence only of data in Europe. It’s whether there is an effect on Europe or an effect on European people’s data.

Fringes of the EU

Kemp: If I could add to that, the other point is about those on the fringes of the EU, because that is where a lot of this trade originates: places such as Norway and Switzerland, and even South Africa, with its POPI legislation. These countries are not part of the EU but, as Stewart was saying, because a lot of their trade goes through the EU, they're adopting local regulation that mirrors GDPR, to provide a level playing field for their corporates.

Gardner: And this notion of a fundamental right to personal data protection, is that something new? Is that a departure and does that vary greatly from country to country or region to region?

Room: This is not a new concept. The European data-protection law was first promulgated in the late 1960s. So, that’s when it was all invented. And the first European legislative instruments about data privacy were in 1973 and 1974.

Room
We've had international data-protection legislation in place since 1980, with the OECD guidelines, the Council of Europe convention in 1981, and the Data Protection Directive of 1995. So, we're talking about stuff that is almost two generations old in terms of priority and effect.

The idea that there is a fundamental right to data protection has been articulated expressly within the EU treaties for a while now. So, it’s important that entities don’t fall into the trap of feeling that they're dealing with something new. They're actually doing something with a huge amount of history, and because it has a huge amount of history, both the problems and the solutions are well understood.

If, the first time you deal with data protection, you feel that this is new, you're probably misaligned with the sophistication of the people who will scrutinize you and be critical of you. It's been around for a long time.

Grieveson: I think it's fair to say there is other legislation as well, in certain industries, that makes some organizations much better prepared for dealing with what's in the new legislation.

For example, in the finance industry, you have payment card industry (PCI) security requirements around credit-card data. So, some companies are going to be better prepared than others, but it still gives us an opportunity, as in an audit, to go back and look at what you have and where it fits.

Gardner: Let’s look at this through the solution lens. One of the ways that the law apparently makes it possible for this information to leave its protected environment is if it’s properly encrypted. Is there a silver bullet here where if everything is encrypted, that solves your problem, or does that oversimplify things?

No silver bullet

Grieveson: I don’t think there is a silver bullet. Encryption is about disruption, because ultimately, as I said earlier, the bad guys are out to steal data, if I come from a cyber-attack point of view, and even the most sophisticated technologies can at some point be bypassed.

But what it does do is reduce that impact, and potentially the bad guys will go elsewhere. But remember, this isn't just about the bad guys; it’s also about people who may have done something inadvertently in releasing the data.

Encryption has a part to play, but it's only one of the components. On top of the technology, you need the right people and the right processes: having the data-protection officer in place, and training your business users, your customers, and your suppliers.

The encryption part isn't the only component, but it’s one of the tools in your kit bag to help reduce the likelihood of the data actually being commoditized and monetized.
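
As one concrete example of a tool in that kit bag, keyed pseudonymization lets reporting and analytics keep running over personal data while a stolen copy reveals only opaque identifiers. A minimal sketch, assuming the secret key is held apart from the data store; the names are illustrative, not part of any HPE or PwC product:

```python
import hashlib
import hmac

# In practice, keep this key in an HSM or secrets manager, never beside
# the data it protects; it appears inline here only for illustration.
PSEUDONYM_KEY = b"rotate-me-and-store-me-elsewhere"

def pseudonymize(value: str) -> str:
    """Deterministic keyed pseudonym: stable enough to join datasets on,
    meaningless to anyone without the key."""
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"email": "citizen@example.eu", "passport": "X1234567"}
safe = {field: pseudonymize(v) for field, v in record.items()}
print(safe)  # opaque hex identifiers only
```

Note that the GDPR still treats pseudonymized data as personal data; a step like this reduces the impact of the inevitable breach, but it does not take the data out of scope.
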
Gardner: And this concept of personally identifiable information (PII), how does that play a role, and should companies that haven't been using it as an emphasis perhaps rethink the types of data, and the types of identification, that go with it?

Room: The idea of PII is known to US law. It lives inside the US legal environment, and it’s mainly constrained to a number of distinct datasets. My point is that the idea of PII is narrow.

The [EU] data-protection regime is concerned with something else, personal data. Personal data is any information relating to an identifiable living individual. When you look at how the legislation is built, it’s much, much more expansive than the idea of PII, which seems to be around name, address, Social Security number, credit-card information, things like that, into any online identifier that could be connected to an individual.

The human genome is an example of personal data. It's important that listeners around the world understand the expansiveness of the idea -- or rather, understand that the EU definition of personal data is intended to be highly expansive.

Gardner: And, David Kemp, when we're thinking about where we should focus our efforts first, is this primarily about business-to-consumer (B2C) data, about business-to-business (B2B) data, or even, internally, about business-to-employee (B2E) data? Is there a way for us to segment and prioritize among these groups as to which is most in peril of violating this new regulation?

Commercial view

Kemp: It's more a commercial view than a legal one. The obvious example would be B2C, where you're dealing with a supermarket like Walmart in the US, or Coop or Waitrose in Europe. That is very clearly my personal information as I go to the supermarket.

Two weeks ago, I was listening to the head of privacy at Statoil, the major Norwegian energy company, and they said: we have no B2C, but even just the employee information we hold is critical to us, and we're taking extremely seriously the way we manage it.

Of course, that means this applies to every single corporate, that it is both an internal and an external aggregation of information.

Grieveson: The interesting thing is that, as digital disruption comes to all organizations and we start to see the proliferation, the tsunami, of data being gathered, it becomes more of a challenge or an opportunity, depending on how you look at it. Literally, the new [business] perimeter is on your mobile phone, your cellphone, where people are accessing cloud services.
As digital disruption comes to all organizations and we start to see the proliferation and the tsunami of data being gathered, it becomes more of a challenge or an opportunity, depending on how you look at that.

If I use the British Airways app, for example, I'm literally accessing 18 cloud services through my mobile phone. That makes the data being gathered a target. Do I really understand what's being stored where? That's where this really helps: it forces you to formalize what information is stored where, and how it is being transacted and used.

Gardner: On another level of segmentation, is this very different for a government or public organization versus a private one? There might be some vertical industries, like finance or health, that have become accustomed to protecting data, but does this have implications for the public sector as well?

Room: Yes, the public sector is regulated by this. There's a separate directive that’s been adopted to cover policing and law enforcement, but the public sector has been in scope for a very long time now.

Gardner: How does one go about the solution on a bit more granular level? Someone mentioned the idea of the data-protection officer. Do we have any examples or methodologies that make for a good approach to this, both at the tactical level of compliance but also at the larger strategic level of a better total data and security posture? What do we do, what’s the idea of a data-protection officer or office, and is that a first step -- or how does one begin?

Compliance issue

Room: We're stressing that data-management view to entities. This is a compliance issue, and there are three legs to the stool. First, they need to understand the economic goals that they have through the use of data, or from the data itself. Economically, what are they trying to do?

The second issue is the question of risk, and where does our risk appetite lie in the context of the economic issues? And then, the third is obligation. So, compliance. It’s really important that these three things be dealt with or considered at the very beginning and at the same time.

Think about the idea simply of risk management. If we were to look at risk management in isolation of an economic goal, you could easily build a technology system that doesn’t actually deliver any gain. A good example would be personalization and customer insights. There is a huge amount of risk associated with that, and if you didn’t have the economic voice within the conversation, you could easily fail to build the right kind of insight or personalization engine. So, bringing this together is really important.

Once you've brought those things together in the conversation, the question is what is your vision, what’s your desired end-state, what is it that you're trying to achieve in light of those three things? Then, you build it out from there. What a lot of entities are doing is making tactical decisions absent the strategic decision. We know that, in a tactical sense, it’s incredibly important to do data mapping and data analysis.
Once you've brought those things together in the conversation, the question is what is your vision, what’s your desired end state, what is it that you're trying to achieve in light of those three things? Then, you build it out from there.

We feel at PwC that data mapping is a really critical step to take, but you want to be doing it in the context of a strategic view, because that affects the order of priority and how you tackle the work. Some non-obvious matters will become clearer, and perhaps more pressing than the data mapping itself, if you take the proper strategic view.

A specific example of that would be complaint handling. Not many people have complaint handling on the agenda -- how we operate inside the call center, for instance, when people are cross. It's probably a much more important strategic decision at the very beginning than some of the more obvious steps that you might take. Bringing those things forward, and having a vision for a desired end-state, will tell you the steps that you want to take and in what order.

Gardner: Tim, this isn't something you buy out of a box. The security implications of being able to establish that a breach has taken place in as little as 72 hours sound to me like they involve an awful lot more than a product or service. How should one approach this from the security-culture perspective, and how should one start?

Grieveson: You're absolutely right. This is not a single product or a point solution. You really have to bake it into the culture of your organization and focus not just on single solutions, but actually the end-to-end interactions between the user, the data, and the application of the data.

If you do that, what you're starting to look at is how to build things in a safe, secure manner, but also how to build them to enable your business to do something. There's no point in building a data lake, for example, and gathering all this data, unless you actually derive from that data some insight that is actionable and measured back to the business outcomes.

I actually don't use the word "security" often when I'm talking to customers. I'll talk about "protection," whether that's protection of revenue or growing new markets. I put it into business language, rather than technology language. That's the first thing, because technology language puts people off.

What are you protecting?

The second thing is to understand what it is that you're going to protect and why, and where it resides, and then to start to build the culture from the top down and also from the bottom up. It's not just the data-protection office's problem or issue to deal with. It's not just the CIO's or the CISO's. It's about building a culture in your organization where security becomes normal, everyday business. Good security is good business.

Once you've done that, remember this is not a project; it's not do-it-once-and-forget-it. It's really a journey, and an evolving one. It's not just a matter of getting to the point where you have that check box to say, yes, you are complying. It's absolutely about continuing to look at how you're doing your business and continuing to look at your data as new markets come on or new data comes on.

You have to reassess where you are in this structure. That’s really important, but the key thing for me is that if you focus on that data and those interactions, you have less of a conversation about the technology. The technology is an enabler, but you do need a good mix of people, process, and technology to deliver good security in a data-driven organization.
The technology is an enabler, but you do need a good mix of people, process, and technology to deliver good security in a data-driven organization.

Gardner: Given that this cuts across different groups within a large organization that may not have had very much interaction in the past -- given that this is not just technology but process and people, as Tim mentioned -- how does the relationship between HPE and PwC come together to help organization solve this? Perhaps, you can describe the alliance a bit for us.

Kemp: I'm a lawyer by profession, and I very much respect our ability to collaborate with PwC, which is a global alliance [partner] of ours. On that basis, I regard Stewart and his very considerable department as providing a translation of the regulation into deliverables. What is it that you want me to do; what does the regulation say? It may say that you have to safeguard information. What does that entail? There are three major steps here.

One is the external counsel's guidance translating what the regulation means into a set of deliverables.

Secondly, a privacy audit. This has been around, in terms of a cultural concept, since the 1960s: where are you already in terms of your management of PII? When that is complete, we can introduce the technology that you might need in order to make this work. That is really where HPE comes in. That's the sequence.

Then, if we just look very simply at the IT architecture, what's needed? As we said right at the beginning, my view is that this sits under the records-management coherence strategy of an organization. One of the first questions is, can you connect to the sources of data around your organization, given that most entities have grown by acquisition and not organically? Can you actually connect to and read the information where it is, wherever it is around the world, in whatever silo?

For example, Volkswagen had a little problem in relation to diesel emissions, and one of the features there is not so much how they defend themselves, but how they get to the basic information, across many countries, as to whether a particular sales director knew about the issue or not.

Capturing data

So, connectivity is one point. The second thing is being able to capture information without moving it across borders. That's where [data] technology that handles the metadata of the basic components of a particular piece of digital information applies: can the data be captured, whether it is structured or unstructured? Let's bear in mind that when we're talking about data, it could be audio, visual, or alphanumeric. Can we bring that together, and can we capture it?

Then, can we apply rules to it? If you had to say in a nutshell what HPE is doing in collaboration with PwC, we're doing policy enforcement. Whatever Stewart and his professional colleagues advise in relation to the deliverables, we seek to effect that and make it work across the organization.

That's an easy way to describe it, even to non-technical people. So General Counsel and heads of Compliance or Risk can appreciate the three steps of the legal interpretation, the privacy audit, and then the architecture. Second comes the building up of the acquisition of information, in order to be able to make sure that the standards set by PwC are actually being complied with.
If you had to say in a nutshell what is HPE doing as a collaboration with PwC, we're doing policy enforcement.

Gardner: We're coming up toward the end of our time, but I really wanted to get into some examples to describe what it looks like when an organization does this correctly, what the metrics of success are. How do you measure this state of compliance and attainment? Do any of you have an example of an organization that has gone through many of these paces, has acquired the right process, technology and culture, and what that looks like when you get there?

Room: There are various metrics that people have put in place, and it depends which principles you're talking about. We obviously have security, which we've spoken about quite a lot here, but there are other principles: accuracy, retention, deletion, transfers, and so on.

But one of the metrics that entities are putting in, which is not a security control, is the number of people who are successfully participating in training sessions and passing the little examination at the very end. The reason that key performance indicator (KPI) is important is that during enforcement cases, when things go wrong -- and there are lots and lots of these cases out there -- the same kinds of challenges are presented by the regulators and by litigants, and that's an example of one of them.

So, when you're building your metrics and your KPIs, it's important to think not just about the measures that would achieve operational privacy and operational security, but also think about the metrics that people who would be adverse to you would understand: judges, regulators, litigants, etc. There are essentially two kinds of metrics, operational results metrics, but also the judgment metrics that people may apply to you.

Gardner: At HPE, do you have any examples or perhaps you can describe why we think that doing this correctly could get you into a better competitive business position? What is it about doing this that not only allows you to be legally compliant, but also puts you in an advantageous position in a market and in terms of innovation and execution?

Biggest sanction

Kemp: If I could quote some of our clients, especially in the Nordic region, there are about six major reasons for paying strict and urgent attention to this particular subject. One of them, listening to my clients, has to do with compliance. That is the most obvious one; it is the one that carries the biggest sanction.

But there are another five arguments -- I won't go into all of them -- which have to do with advancement of the business. For example, a major media company in Finland said: if we could only say on our website that we were GDPR-compliant, that would materially increase our customers' belief in our respect for their information, and it would give us a market advantage. So it's actually advancing the business.

The second aspect, which I anticipated but have also heard from corporations, is that in due course, if it's not here already, governments might say that if you're not GDPR-compliant, you can't bid on their contracts.

The third might be, as Tim was referring to earlier: what if you wanted to make the best use of this information? There's even a possibility of corporations taking the PII, making sure it's fully anonymized or pseudonymized, and then mixing it with other freely available information, such as Facebook data, and saying to a customer: David, we would like to use your PII, fully anonymized. We can prove to you that we have followed the PwC legal guidance. And furthermore, if we do use this information for analytics, we might even pay you for it. What are you doing? You're increasing the bonding and loyalty with your customers.
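
To make that anonymization step concrete, here is a minimal sketch of one common technique, keyed hashing of direct identifiers; the key handling, field names, and data are assumptions for illustration, not anything HPE or PwC prescribes. Note that keyed pseudonymization is weaker than full anonymization under the GDPR, since the key holder can still re-identify records.

```python
# Illustrative pseudonymization sketch: replace a direct identifier with a
# keyed hash (HMAC) so records can still be joined for analytics without
# exposing the raw identifier. Key and field names are invented.
import hashlib
import hmac

SECRET_KEY = b"example-key--store-in-a-key-vault"  # assumption: managed secret

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"customer_id": "david@example.com", "spend": 123.45}
safe_record = {**record, "customer_id": pseudonymize(record["customer_id"])}
print(safe_record)  # customer_id is now an opaque, joinable token
```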

So we should think about the upside of business advancement which, ironically, comes out of a regulation and may not be so obvious.

Gardner: Let’s close out with some practical hints as to how to get started, where to find more resources, both on the GDPR, but also how to attain a better data privacy capability. Any thoughts about where we go to begin the process?

Kemp: I would say that, in the public domain, the EU is extremely good at promulgating information about the incoming regulation itself and at providing some basic interpretation. But then, I would hand it over to Stewart in terms of what PwC Legal is already providing in the public domain.

Room: We have two accelerators that we've built to help entities go forward. The first is our GDPR Readiness Assessment Tool (RAT), and lots of multinationals run the RAT at the very beginning of their GDPR programs.

What does it do? It asks 70 key questions across the two domains of operational and legal privacy. Privacy architecture and privacy principles are mapped into a maturity metric that assesses people's confidence about where they stand. All of that is then mapped into the articles and recitals of the GDPR. Lots of our clients use the RAT.
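
PwC's actual tool is proprietary, so as a purely illustrative sketch of the scoring shape, one might roll per-question maturity answers up by domain; the domains, scale, and answers below are assumptions.

```python
# Toy readiness-scoring sketch: average per-question maturity (0-5) by
# domain. Domains, question IDs, and scores are invented for illustration.
from collections import defaultdict
from statistics import mean

answers = [  # (domain, question_id, maturity score 0-5)
    ("operational privacy", "Q01", 3), ("operational privacy", "Q02", 4),
    ("legal privacy", "Q03", 2), ("legal privacy", "Q04", 5),
]

by_domain = defaultdict(list)
for domain, _qid, score in answers:
    by_domain[domain].append(score)

for domain, scores in sorted(by_domain.items()):
    print(f"{domain}: maturity {mean(scores):.1f} / 5")
```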

The second accelerator is the PwC Privacy and Security Enforcement Tracker. We've been tracking the results of regulatory cases and litigation in this area over many years. That gives us a very granular insight into the real priorities of regulators and litigants in general.

Using those two tools at the very beginning gives you a good insight into where you are and what your risk priorities are.

Gardner: Last word to you, Tim. Any thoughts on getting started -- resources, places to go to get on your journey or further along?

The whole organization

Grieveson: You need to involve the whole organization. As I said earlier on, it’s not just about passing it over to the data-protection officer. You need to have the buy-in from every part of the organization. Clearly, working with organizations who understand the GDPR and the legal implications, such as the collaboration between PwC and HPE, is where I would go.

When I was in the seat as a CISO -- and I'm not a legal expert -- one of the first things I did was go and get that expertise and bring it in. Probably the first place I would start is getting buy-in from the business and making sure that you have the right people around the table to help you on the journey.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Privacy Shield – Houston, We Still Have a Problem!

Imagine posting a picture on Facebook of yourself and your mates clubbing and the next thing you know you are getting a phone call from a United States (U.S.) government representative asking you about your night out.

Scary, isn't it? In this way, George Orwell's line "Big Brother is Watching You," from his book "1984," becomes fact rather than fiction.[1] To prevent this from happening, the European Commission (EC) has been working on an agreement with the U.S., called the Privacy Shield, to impose obligations on U.S. companies to protect the personal data of EU citizens when data is transferred between the EU and the U.S.

This agreement was formed after the previous arrangement between the two continents, Safe Harbour, was struck down; it was ruled invalid by the European Court of Justice in the Schrems case.

By discussing the differences between the two agreements, this article will make clear that the EC needs to take a closer look at the Privacy Shield agreement it is about to sign with the U.S. in order to protect its citizens from any 1984 scenario becoming reality.

Why was the Safe Harbour agreement struck down in Schrems?

As a matter ...


Read More on Datafloq
5 IoT Standardization and Implementation Challenges

The rapid evolution of the IoT market has caused an explosion in the number and variety of IoT solutions, and large amounts of funding are being deployed at IoT startups. Consequently, the focus of the industry has been on manufacturing and producing the right types of hardware to enable those solutions. In the current model, most IoT solution providers have been building all components of the stack, from the hardware devices to the relevant cloud services -- what they like to call "IoT solutions." As a result, there is a lack of consistency and standards across the cloud services used by different IoT solutions.

As the industry evolves, the need for a standard model to perform common IoT backend tasks, such as processing, storage, and firmware updates, is becoming more relevant. In that new model, we are likely to see different IoT solutions work with common backend services, which will guarantee levels of interoperability, portability and manageability that are almost impossible to achieve with the current generation of IoT solutions.
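
As a sketch of what coding against such a common backend model could look like -- the interface and method names below are purely illustrative, not any emerging standard:

```python
# Illustrative only: a shared interface for common IoT backend tasks
# (ingestion, storage, firmware updates). Names are invented.
from abc import ABC, abstractmethod

class IoTBackend(ABC):
    @abstractmethod
    def ingest(self, device_id: str, payload: dict) -> None: ...
    @abstractmethod
    def store(self, device_id: str, payload: dict) -> None: ...
    @abstractmethod
    def push_firmware(self, device_id: str, version: str) -> None: ...

class VendorABackend(IoTBackend):
    def ingest(self, device_id, payload):
        print(f"ingest {device_id}: {payload}")
    def store(self, device_id, payload):
        print(f"store {device_id}")
    def push_firmware(self, device_id, version):
        print(f"firmware {version} -> {device_id}")

# A solution coded against IoTBackend could swap vendors without rework,
# which is the interoperability the passage argues for.
backend: IoTBackend = VendorABackend()
backend.ingest("sensor-42", {"temp_c": 21.5})
```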

Creating that model will not be an easy task by any stretch of the imagination; there are hurdles and challenges facing the standardization and implementation of IoT solutions and ...


Read More on Datafloq
Introduction to Snowflake!

Heads up! I will be giving a webinar next week, called Enabling Cloud-Native Elastic Data Warehousing, to introduce folks to the Snowflake Elastic Data Warehouse. Sign up here and join me on July 12th! Special thanks to DAMA International for inviting me to do this! See you there! Kent, The Data Warrior. Filed under: Big Data, Data Warehouse, […]
What’s Damming Data Lakes? Insights from Tableau, Qlik and Logi Analytics

Data lakes are used to store data in its natural format. Data experts can then test data relations without committing to a structure. It's a flexible data storage strategy for combining structured and unstructured data, and is best used as a sandbox alongside a data warehouse.

So, What's Damming Data Lakes?

The promise of combining unstructured and structured data in one place is alluring, but this leads to one very serious dilemma. 

When too much data is dumped into a data lake, it risks becoming a data swamp instead. This term was coined by Michael Stonebraker to describe the murkiness of data curation. If the data isn't curated before analysis, it's impossible to gain valuable insights. Curation requires meticulous detail within these four steps (a toy sketch follows the list):


Ingestion
Transformation
Cleansing
Consolidation
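
As a toy sketch of those four steps applied to raw records before they land in a curated zone -- field names and data are made up for illustration:

```python
# Toy curation pipeline: ingest raw JSON lines, normalize a field, drop
# malformed rows, and deduplicate by id. All names and data are invented.
import json

raw = ['{"id": 1, "temp": " 21.5 "}', '{"id": 1, "temp": "21.5"}',
       '{"id": 2, "temp": "bad"}']

def ingest(lines):                      # 1. Ingestion
    return [json.loads(line) for line in lines]

def transform(rows):                    # 2. Transformation
    return [{**r, "temp": str(r["temp"]).strip()} for r in rows]

def cleanse(rows):                      # 3. Cleansing
    out = []
    for r in rows:
        try:
            out.append({**r, "temp": float(r["temp"])})
        except ValueError:
            pass                        # drop malformed readings
    return out

def consolidate(rows):                  # 4. Consolidation (dedupe by id)
    return list({r["id"]: r for r in rows}.values())

curated = consolidate(cleanse(transform(ingest(raw))))
print(curated)  # [{'id': 1, 'temp': 21.5}]
```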


While investigating the growth of the data lake phenomenon, we were able to hear from experts within Tableau, Logi Analytics and Qlik. We've also included an opinion from Gartner in light of this 2016 business intelligence trend.


...


Read More on Datafloq
BBBT to Host Webinar from WhereScape on Data Warehouse Automation

This Friday, the Boulder Business Intelligence Brain Trust (BBBT), the largest industry analyst consortium of its kind, will host a private webinar from WhereScape on automation software for profiling, prototyping, developing, loading, extending, and managing data warehouse, advanced analytic, and big data environments.

(PRWeb July 05, 2016)

Read the full story at http://www.prweb.com/releases/2016/07/prweb13526036.htm

How Data Management Platforms Offer Publishers New Revenue Sources

Data Management Platforms (DMPs), or unified DMPs, are centralized computing systems that collect, integrate, and manage huge sets of data, both structured and unstructured, generated by different sources.

These platforms provide reliable, precise, and timely access to data, helping marketers meet their business goals. Publishers are able to draw significant marketing insights from first- and third-party information, and can target fresh groups of audiences not only on their own websites but also across the web. They gain more command over demand-side and supply-side platforms, networks, ad exchanges, and other tools required for managing vital audience data assets. Many vendors create costly platforms that merge data management technologies with tools for data analytics.

Publishers look for a number of features when selecting a DMP, and only a few technology companies are able to provide them. Unified control over all advertisements and audience groups, and tools to study those groups in real time, are some of the many things an all-inclusive data management platform should provide. Following is an elaboration of the features any comprehensive DMP should contain:

Data Management Platform Features

1. Easy and Secure Ingestion of Data

An ideal DMP should bring ...


Read More on Datafloq
Ontology: A Tree or a Forest – What do You Need?

Having written about taxonomies last month, I thought I would spend some time understanding ontologies, as the word popped up frequently in association with taxonomies.

The challenge became how to explain the use of the word "ontology," which is mostly used in academic and data-scientist circles, whereas the term "taxonomy" is mostly used commercially in business.

Recap – taxonomy: the hierarchical classification of entities, including the principles that underlie such classification (according to Wikipedia).


A car is a subtype of vehicle: a car is a vehicle, but not every vehicle is a car.


Ontology

On the technical side, ontologies imply a broader scope of information. People often refer to a taxonomy as a “tree”, and extending that analogy, an Ontology is often more of a “forest”.  An ontology might encompass a number of taxonomies, with each taxonomy organizing a subject in a particular way.

An ontology is a formal way of organizing information. It includes putting things into categories and relating these categories with each other. They can have any type of relationship between categories, whereas in a taxonomy there can only be hierarchies.

A synonym for ontology would be model; a synonym for taxonomy would be tree.
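
A compact way to see the difference in code; the entities and relations below are illustrative:

```python
# A taxonomy has only "is-a" edges forming a hierarchy; an ontology allows
# arbitrary named relations between categories. Examples are invented.
taxonomy = {          # child -> parent ("is-a" only)
    "car": "vehicle",
    "truck": "vehicle",
}

ontology = [          # (subject, relation, object): any relation type
    ("car", "is-a", "vehicle"),
    ("car", "uses", "fuel"),
    ("driver", "operates", "car"),
]

print(taxonomy["car"])                          # vehicle (pure hierarchy)
print([t for t in ontology if t[0] == "car"])   # richer, graph-like relations
```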

A technical difference between taxonomies and ontologies deals with structure ...


Read More on Datafloq
Forecast: Partly Cloudy — Connecting #OBIEE to @SnowflakeDB

My good friends at RedPill Analytics have done it again! In their never-ending mission to #ChallengeEverything, they thought it would be cool to try to connect OBIEE (Oracle Business Intelligence Enterprise Edition) to the Snowflake Elastic Data Warehouse as a way to give OBIEE users access to a high-performance data warehouse cloud service. This […]
Why a Single Customer View is Essential for Modern Marketing

The traditional customer journey has changed. With ever-increasing innovations in technology, consumers now have access to an unprecedented array of channels and devices that give them control over exactly when and how they shop. As a result, influencing customer touchpoints and critical decisions along the path to purchase has become steadily more complex for businesses.

Despite this, a great number of customer engagement opportunities are presented by gaining an understanding of individual customer journeys and the various touchpoints involved.

The challenge for businesses is being able to track, store, and effectively interpret the enormous amounts of data associated with every transaction. "Big Data" can be daunting for many organisations, especially those who don't have a team of data scientists and analysts on staff. Not dealing with data adequately means that these businesses are missing out on actionable insights that can be essential for the development of successful customer experiences. This is where the holy grail of data marketing, the "Single Customer View," comes in.

The Single Customer View

A SCV is the result of collating, sorting and interrogating a myriad of customer data from a multitude of touchpoints. Giving marketers an accessible total view of customer activity, the SCV acts as an intelligent central ...


Read More on Datafloq
Digital Transformation – the Key to Customer Loyalty in the Insurance Industry

According to consumers, insurers are not keeping up with their expectations - and customers are not sticking around long enough to give them the opportunity to prove otherwise. Those in the insurance industry are well aware of the poor rankings in customer experience satisfaction scores and know that customer loyalty is at an all-time low.

Customer churn due to declining loyalty and poor customer experiences represents as much as $470 billion in life and P&C premiums globally. This is according to Accenture’s 2015 Global Consumer Pulse Research which analyzed responses from more than 13,000 P&C and life insurance customers in 33 countries. Other notable findings indicated that only 29% of insurance customers are satisfied with their current provider and that fewer than one in six respondents (16%) said that they would definitely buy more products from their current insurance provider.

What Are Insurance Customers Looking For?

As in most industries today, a good digital experience is appreciated by insurance customers. In the Accenture report, consumers rated digital and cross-channel engagement as very important and as an area in which they seek improvement from insurers. Nearly half (47%) of the survey's respondents said they want more online interactions with their insurers. In the past ...


Read More on Datafloq
5 Best Practices to Avoid Data Breaches in the Healthcare Industry

Data breaches are common and can occur at almost every type of organization or company, but they are particularly troublesome and widespread in the healthcare industry. Patients' sensitive medical records are constantly at risk, whether the organization is large or small, and breaches of every size affect individuals.

The U.S. Department of Health and Human Services maintains an online database of healthcare breaches affecting over 500 individuals, but many smaller breaches occur each year as well. According to Forbes, over 112 million records were compromised by data breaches in 2015 alone—and 90% of the top ten breaches were related to hacking or IT incidents.

The average cost of a breach continues to rise, and in 2014, that average stood at $5.9 million. With the high prevalence of cybercrime still rising, the healthcare industry must take steps to reduce the number and impact of data breaches, which lead to the compromise of sensitive data and financial consequences. Healthcare organizations should follow cyber security best practices to minimize the risk of a breach. These steps include:

1. Educating Employees on Security Risks

Healthcare organizations may have stellar employees, but human error can always lead to security issues. Proper training on regulations, security protocols—and support for ...


Read More on Datafloq
Survey Shows Enterprises Struggling with Bad Data

Last week we announced the results of a survey of over 300 enterprise data professionals conducted by Dimensional Research and sponsored by StreamSets. We were trying to understand the market’s state of play for managing their big data flows. What we discovered was that there is an alarming issue at hand: companies are struggling to detect and keep bad data out of their stores.

There is a Bad Data Problem Within Big Data

When we asked data pros about their challenges with big data flows, the most-cited issue was ensuring the quality of the data in terms of accuracy, completeness and consistency, getting votes from over ⅔ of respondents. Security and operations were also listed by more than half. The fact that quality was rated as a more common challenge than even security and compliance is quite telling, as you usually can count on security to be voted the #1 challenge for most IT domains.

The painful reality of this challenge was hammered home by the number of people who admitted to flowing bad data into their stores (87%) or to knowingly having bad data in their stores (74%). On an equally disturbing note, nearly one in eight respondents (12%) answered "I don't know" to the question ...


Read More on Datafloq
Best Scala channels for Data Scientists on Gitter

Scala is an object-oriented functional language that has gained wide acceptance in developer and data science communities for many of its merits, including runtime performance and stability, the capability to build robust systems, ease of learning for both static- and dynamic-language coders, and good libraries and language facilities for building concurrent and distributed applications. Learning Scala is particularly valuable to data scientists working with large data sets, as well as those applying machine learning at scale.

There are plenty of channels on Gitter dedicated to Scala — dive in & enjoy!


scala/scala — Channel dedicated to the Scala programming language. 
akka/akka — Channel for all Akka enthusiasts — newbies as well as gurus — for the exchange of knowledge and the coordination of efforts around Akka.
scala-js/scala-js — Channel dedicated to the Scala-to-JavaScript compiler.
playframework/playframework — Play Framework combines productivity and performance making it easy to build scalable web applications with Java and Scala. Play is developer friendly with a “just hit refresh” workflow and built-in testing support. With Play, applications scale predictably due to a stateless and non-blocking architecture.
typelevel/cats — Cats is a library which provides abstractions for functional programming in Scala. The name is a playful shortening of the word category. Cats is currently available for Scala 2.10 and 2.11.
milessabin/shapeless — Shapeless is a type class and dependent type based generic programming library ...


Read More on Datafloq
5 Ways How To Make Data Storage More Secure

Data storage security essentially means two things: protecting data sufficiently from unintended loss or corruption, and securing the data from unauthorized access. The most common ways to secure data storage involve encrypting the data (to prevent unauthorized access) and creating multiple layers of backup. While this is a necessary first step, it alone may not be sufficient to protect your data from a sophisticated attack or a multi-level corruption of the database. In this article, we will look at the other important strategies that businesses must invest in for a more sophisticated data storage security system.
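
As a minimal sketch of that first step -- encrypt before writing, and keep more than one backup copy -- using the third-party cryptography package; the paths and key handling are examples only:

```python
# Encrypt-then-back-up sketch. Requires: pip install cryptography.
# In practice the key would live in a key manager, not in the script.
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer ledger 2016"
token = cipher.encrypt(plaintext)

for backup_dir in ("backup_a", "backup_b"):   # multiple backup layers
    Path(backup_dir).mkdir(exist_ok=True)
    Path(backup_dir, "ledger.enc").write_bytes(token)

assert cipher.decrypt(token) == plaintext     # round-trip check
```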

Physical & Logical Authorization

Restricting data access to authorized users is good, but not enough. For maximum security, it is important not only to restrict access to authorized logged-in users, but also to ensure that these users access the system from within authorized physical spaces, including authorized IP addresses and devices. This way, it is possible to avoid data attacks even if the credentials of an authorized user are compromised.
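
A hypothetical sketch of such a combined check, where a login is honored only if the user, source network, and device all match an allow-list; all names and values are invented:

```python
# Combined logical + "physical" access check: user credentials alone are
# not enough; the source IP range and device ID must also be allowed.
import ipaddress

ALLOWED_NETWORK = ipaddress.ip_network("10.20.0.0/16")   # e.g. office LAN
ALLOWED_DEVICES = {"laptop-0042", "workstation-0007"}

def access_permitted(user_ok: bool, source_ip: str, device_id: str) -> bool:
    in_network = ipaddress.ip_address(source_ip) in ALLOWED_NETWORK
    return user_ok and in_network and device_id in ALLOWED_DEVICES

print(access_permitted(True, "10.20.3.14", "laptop-0042"))    # True
print(access_permitted(True, "203.0.113.9", "laptop-0042"))   # False: off-site
```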

Firewalls Integrated With Virus Detection Systems

Computer devices that are physically authorized to access confidential data systems must be securely integrated with firewalls and virus detection programs that prevent access to other third party websites and ...


Read More on Datafloq
A Starter Kit for Solving the Cyber Security Problem

Ownership of a company’s cybersecurity is akin to an issue like climate change or eco-preservation: It’s a concern that touches everyone. For cybersecurity, however, universal ownership may not be the best approach to ensure accountability.

Technology is an integral part of the corporate environment, but the responsibility of protecting that interconnectivity doesn’t always fit into neatly defined departments. It’s everyone’s—and no one’s—job until accountability is established. To do so, a company should select a leader with cybersecurity as her mission.

It’s a question that was raised in a recent webinar, “From Ashley Madison to the eBay Hack: Cybersecurity Best Practices”—when it comes to leading the charge on cybersecurity, who’s the right person for the job?

A Fearless Leader

The answer, not surprisingly, isn’t simple. It depends on the company’s size and structure. In a larger company, the role is often filled by a c-suite executive, such as chief privacy officer or chief information security officer (CISO), a board member, or general counsel.

It doesn’t necessarily matter which leader claims ownership—only that someone does, and that the individual responsible seeks appropriate expertise to understand the technical complexities and operational issues at a sufficient level to make sound decisions. What’s most important is that she has the ...


Read More on Datafloq
How Much is that Big Data Worth? — Big Data Decisions Impact Business Valuations

As both digital and more traditional companies become more and more dependent on data to compete in today’s information economy, data is starting to have an irrefutable impact on companies’ valuation and reputation. The decisions companies make about how to use data can have an enormous impact on the success of modern enterprises, as well as on their image, their public perception, their competitors, and regulators.

According to recent research, companies must recognize this new reality in which corporate reputations may be negatively impacted by decisions they make concerning data within their control. As companies are incurring significant costs to capitalize on the enormous amounts of data – so-called Big Data – constantly generated by the Internet of Things (IoT), social media platforms, websites, and other sources, they must appreciate that their use, misuse and governance of data can have a direct impact on their goodwill and ultimate valuation.

I.  How Modern Corporate Enterprises are Valued

The calculation of a modern company's valuation extends beyond the sum of its tangible assets. Goodwill, an important component of any company's valuation, has no physical form and qualifies as an intangible asset rather than a tangible brick-and-mortar asset. Goodwill is valued as the difference between the purchase ...


Read More on Datafloq
CA streamlines cloud and hybrid IT infrastructure adoption through better holistic services monitoring

New capabilities in CA Unified Infrastructure Management (CA UIM) are designed to help enterprises adopt cloud more rapidly and better manage hybrid IT infrastructure heterogeneity across several major cloud environments.

Enterprises and SMBs are now clamoring for hybrid cloud benefits, due to the ability to focus on apps and to gain speed for new business initiatives, says Stephen Orban, Global Head of Enterprise Strategy at AWS.

"Going cloud-first allows organizations to focus on the apps that make the business run, says Orban. Using hybrid computing, the burden of proof soon shifts to why should we use cloud for more of IT," he says.

As has been the case with legacy IT for decades, the better the overall management, the better the adoption success, productivity, and return on investment (ROI) for IT systems and the apps they support -- no matter their location or IT architecture. The same truth is now being applied to solve the cloud heterogeneity problem, just as it was applied to the legacy platform heterogeneity problem. The total visibility solution may be even more powerful in this new architectural era.

Cloud-first is business-first

The stakes are now even higher: as you migrate to the cloud, one weak link in a complex hybrid cloud deployment can ruin the end-user experience, says Ali Siddiqui, General Manager, Agile Operations at CA. "By providing insight into the performance of all of an organization's IT resources in a single and unified view, CA UIM gives users the power to choose the right mix of modern cloud enablement technologies."

CA UIM reduces the complexity of hybrid infrastructures by providing visibility across on-premises, private-cloud, and public-cloud infrastructures through a single console UI. Such insight enables users to adopt new technologies and expand monitoring configurations across existing and new IT resource elements. CA expects the solution to reduce the need for multiple monitoring tools. [Disclosure: CA is a sponsor of BriefingsDirect.]

"Keep your life simple from a monitoring and management perspective, regardless of your hybrid cloud [topology]," said Michael Morris, Senior Director Product Management, at CA Technologies in a recent webcast.

To grease the skids to hybrid cloud adoption, CA UIM now supports advanced performance monitoring of Docker containers, PureStorage arrays, Nutanix hyperconverged systems, and OpenStack cloud environments, as well as additional capabilities for Amazon Web Services (AWS) cloud infrastructures, CA Technologies announced last week.

CA is putting its IT systems management muscle behind the problem of migrating from data centers to the cloud, and then better supporting hybrid models, says Siddiqui. The "single pane of glass" monitoring approach that CA is delivering allows measurement and enforcement of service-level agreements (SLAs) before and after cloud migration. This way, continuity of service and IT value-add can be preserved and measured, he added.

Managing a cloud ecosystem

"Using advanced monitoring and management can significantly cut costs of moving to cloud," says Siddiqui.

Indeed, CA is working with several prominent cloud and IT infrastructure partners to make the growing diversity of cloud implementations a positive, not a drawback. For example, "Virtualization tools are too constrained to specific hypervisors, so you need total cloud visibility," says Steve Kaplan, Vice President of Client Strategy at Nutanix, of CA's new offerings.

And it's not all performance monitoring. Enhancements to CA UIM's coverage of AWS cloud infrastructures include billing metrics and support for additional services that provide deeper actionable insights on cloud brokering.

CA UIM now also provides:

  • Service-centric and unified analytics capabilities that rapidly identify the root cause of performance issues, resulting in a faster time to repair and better end-user experience
  • Out-of-the-box support for more than 140 on-premises and cloud technologies

  • Templates for easier configuration of monitors that can be applied to groups of disparate systems
What's more, to ensure the reliability of networks such as SDN/NFV that connect and scale hybrid environments, CA has also delivered CA Virtual Network Assurance, which provides a common view of dynamic changes across virtual and physical network stacks.

You may also be interested in:

Retail Data Sharing: Put Your Idle Data To Work And Boost Profits

Most retailers still lag when it comes to using data to transform their analytical capability and draw conclusive insights. More than collecting available and obscure data, it is the acumen to put that data to effective use that is costing retailers dearly. Many retailers still tread slowly because supplier collaboration is often slow, time-consuming, and complex, and delivers results slowly. Only those retailers with a keen eye on improving profit margins really put data to work.

The two primary aspects here are collecting possible data and putting it to use.

Data Collaboration

Do not just rely on point of sale data; rather look for details from all possible sources in and around the store to garner information that can truly help. Inventory movement trends, stock replenishment cycles, demand spikes, shopping trends, fast and slow moving products, product visibility impact, store-help induced sales, gauging impact of promotions, collecting shopper feedback, trend analysis, shopper demographics, periodic sale comparison, order-to-delivery time lag, ordering cycles and return to vendors are some key areas from where valuable data and insights can be extracted. Processes and technologies should be utilized for collecting, storing, and easily retrieving data when needed.

Analytics and Action

The retailer has to take ...


Read More on Datafloq
Here’s how two part-time DBAs maintain mobile app ad platform Tapjoy’s massive data needs

The next BriefingsDirect Voice of the Customer big data case study discussion examines how mobile app advertising platform Tapjoy handles fast and massive data -- some two dozen terabytes per day -- with just two part-time database administrators (DBAs).

Examine how Tapjoy's data-driven business of serving 500 million global mobile users -- more than 1.5 million ad engagements per day, on a data volume of 120 terabytes -- runs with extreme efficiency.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how high scale and complexity meet minimal labor for building user and advertiser loyalty, we're joined by David Abercrombie, Principal Data Analytics Engineer at Tapjoy in San Francisco. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mobile advertising has really been a major growth area, perhaps more than any other type of advertising. We hear a lot about advertising waning, but not mobile app advertising. How does Tapjoy and its platform help contribute to the success of what we're seeing in the mobile app ad space?

Abercrombie: The key to Tapjoy's success is engaging users and rewarding them for engaging with an ad. Our advertising model is that you engage with an ad and then typically get some sort of reward: virtual currency in the game you're playing, or some sort of discount.

We actually have the kind of ads that lead users to seek us out to engage with the ads and get their rewards.

Gardner: So this is quite a bit different than a static presented ad. This is something that has a two-way street, maybe multiple directions of information coming and going. Why the analysis? Why is that so important? And why the speed of analysis?

Abercrombie: We have basically three types of customers. We have the app publishers who want to monetize and get money from displaying ads. We have the advertisers who need to get their message out and pay for that. Then, of course, we have the users who want to engage with the ads and get their rewards.

The key to Tapjoy’s success is being able to balance the needs of all of these disparate uses. We can’t charge the advertisers too much for their ads, even though the monetizers would like that. It’s a delicate balancing act, and that can only be done through big-data analysis, careful optimization, and careful monitoring of the ad network assets and operation.

Gardner: Before we learn more about the analytics, tell us a bit more about what role Tapjoy plays specifically in what looks like an ecosystem play for placing, evaluating, and monetizing app ads? What is it specifically that you do in this bigger app ad function?

Ad engagement model

Abercrombie: Specifically what Tapjoy does is enable this rewarded ad engagement model, so that the advertisers know that people are going to be paying attention to their ads and so that the publishers know that the ads we're displaying are compatible with their app and are not going to produce a jarring experience. We want everybody to be happy -- the publishers, the advertisers, and the users. That’s a delicate compromise that’s Tapjoy’s strength.

Gardner: And when you get an end user to do something, to take an action, that’s very powerful, not only because you're getting them to do what you wanted, but you can evaluate what they did under what circumstances and so forth. Tell us about the model of the end user specifically. What is it about engaging with them that leads to the data -- which we will get to in a moment?

Abercrombie: In our model of the user, we talk about long-term value. So even though it may be a new user who has just started with us, maybe on their first engagement, we like to look at them in terms of their long-term value, both to the publishers and to the advertisers.

We don’t want people who are just engaging with the ad and going away, getting what they want and not really caring about it. Rather, we want good users who will continue their engagement and continue this process. Once again, that takes some fairly sophisticated machine-learning algorithms and very powerful inferences to be able to assess the long-term value.

As an example, we have our publishers who are also advertisers. They're advertising their app within our platform and for them the conversion event, what they are looking for, is a download. What we're trying to do is to offer them users who will not only download the game once to get that initial payoff reward, but will value the download and continue to use it again and again.

So all of our models are designed with that end in mind -- to look at the long-term value of the user, not just the immediate conversion at this instant in time.
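
As a hedged sketch of the kind of long-term-value model being described -- the features, sample data, and model choice below are assumptions for illustration, since Tapjoy's actual pipeline is not public:

```python
# Toy LTV model: predict a user's downstream value from early engagement
# signals. Requires scikit-learn; features and values are invented.
from sklearn.ensemble import GradientBoostingRegressor

# columns: [first-day ad engagements, rewarded videos watched, sessions]
X_train = [[1, 0, 1], [5, 2, 3], [12, 6, 9], [2, 1, 1]]
y_train = [0.10, 1.25, 6.40, 0.55]    # observed 90-day value in USD

model = GradientBoostingRegressor().fit(X_train, y_train)

new_user = [[4, 2, 2]]                # early signals for a fresh user
print(f"predicted long-term value: ${model.predict(new_user)[0]:.2f}")
```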

Gardner: So perhaps it’s a bit of a misnomer to talk about ads in apps. We're really talking about a value-add function in the app itself.

Abercrombie: Right. The people who are advertising don’t want people to just see their ads. They want people to follow up with whatever it is they're advertising. If it’s another app, they want good users for whom that app is relevant and useful.

That’s really the way we look at it. That’s the way to enhance the overall experience in the long-term. We're not just in it for the short-term. We're looking at developing a good solid user base, a good set of users who engage thoroughly.

Gardner: And as I said in my set-up, there's nothing hotter in all of advertising than mobile apps and how to do this right. It’s early innings, but clearly the stakes are very high.

A tough business

Abercrombie: And it’s a tough business. People are saturated. Many people don’t want ads. Some of the business models are difficult to master.

For instance, there may be a sequence of multiple ad units -- a video followed by another ad to download something. It becomes a very tricky thing to balance the financing here. If it were just a simple pass-through where we take a cut, that would be trivial, but that doesn't work in today's market. There are more sophisticated approaches, which do involve business risk.

If we reward the user, based on the fact that they're watching the video, but then they don't download the app, then we don't get money. So we have to look very carefully at the complexity of the whole interaction to make it as smooth and rewarding as possible, so that the thing works. That's difficult to do.

Gardner: So we're in a dynamic, fast-growing, fairly fresh, new industry. Knowing what's going to happen before it happens is always fun in almost any industry, but in this case, it seems with those high stakes and to make that monetization happen, it’s particularly important.
Tell me now about gathering such large amounts of data, being able to work with it, and then allowing analysis to happen very swiftly. How do you go about making that possible?

Abercrombie: Our data architecture is relatively standard for this type of clickstream operation. There is some data that can be put directly into a transactional database in real time, but typically that's only when you get to the very bottom of the funnel, the conversion stuff. All the clickstream data gets written as JSON-formatted log files, gets swept up by a queuing system, and is then put into our data systems.

Our legacy system involved a homegrown queuing system, dumping data into HDFS. From there, we would extract and load CSVs into Vertica. As with so many other organizations, we're moving to more real-time operations. Our queuing system has evolved from a couple of different homegrown applications, and now we're implementing Apache Kafka.

We use Spark as part of our infrastructure, as sort of a hub, if you will, where data is farmed out to other systems, including a real-time, in-memory SQL database, which is fairly new to us this year. Then, we're still putting data in HDFS, and that's where the machine learning occurs. From there, we're bringing it into Vertica.
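
A simplified sketch of the front of that flow -- JSON clickstream events arriving on Kafka and being fanned out to downstream stores. The topic name and sinks are stand-ins, the real pipeline also involves Spark, and the third-party kafka-python package is assumed:

```python
# Clickstream fan-out sketch (pip install kafka-python). Topic name,
# broker address, and event fields are assumptions for illustration.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                        # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value                        # e.g. {"ts": ..., "ad_id": ...}
    # ...write the raw event to HDFS for machine learning, and stage it
    # for loading into the Vertica operational data store...
    print(event.get("ts"), event.get("ad_id"))
```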

In Vertica -- and our Vertica cluster has two main purposes -- there is the operational data store, which has the raw, flat tables that are one row for every event, with the millisecond timestamps and the IDs of all the different entities involved.

From that operational data store, we do a pure SQL ETL extract into kind of an old-school star schema within Vertica, the same database.

Pure SQL

So our business intelligence (BI) ETL is pure SQL and goes into a full-fledged snowflake schema, moderately denormalized, with all the old-school bells and whistles: type 1 and type 2 slowly changing dimensions. With Vertica, we're able to denormalize that data warehouse to a large degree.
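
As a hedged sketch of what a pure-SQL, type 2 slowly-changing-dimension load can look like -- expire the current row, then insert the new version. The table, columns, and connection details are invented, and the vertica-python driver is assumed; this is not Tapjoy's actual schema.

```python
# Type 2 SCD load sketch (pip install vertica-python). All identifiers
# below are invented for illustration.
import vertica_python

conn_info = {"host": "vertica.example.internal", "port": 5433,
             "user": "etl", "password": "...", "database": "dw"}

conn = vertica_python.connect(**conn_info)
cur = conn.cursor()

# 1) Close out the currently active version of the dimension row.
cur.execute(
    "UPDATE dim_publisher"
    "   SET valid_to = CURRENT_TIMESTAMP, is_current = FALSE"
    " WHERE publisher_id = %s AND is_current = TRUE",
    ["pub-123"],
)

# 2) Insert the new version as the current row.
cur.execute(
    "INSERT INTO dim_publisher"
    " (publisher_id, publisher_name, valid_from, valid_to, is_current)"
    " VALUES (%s, %s, CURRENT_TIMESTAMP, NULL, TRUE)",
    ["pub-123", "Acme Games"],
)

conn.commit()
conn.close()
```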

Sitting on top of that we have a BI tool. We use MicroStrategy, for which we have defined our various metrics and our various attributes, and it’s very adept at knowing exactly which fact table and which dimensions to join.

So we have sort of a hybrid architecture. I'd say that we have all the way from real-time, in-memory SQL, Hadoop and all of its machine learning and our algorithmic pipelines, and then we have kind of the old-school data warehouse with the operational data store and the star schema.

Gardner: So a complex, innovative, custom architectural approach to this and yet I'm astonished that you are running and using Vertica in multiple ways with two part-time DBAs. How is it possible that you have minimal labor, given this topology that you just described?

Abercrombie: Well, we found Vertica very easy to manage. It has been very well-behaved, very stable.

For instance, we don’t even really use the Management Console, because there is not enough to manage. Our cluster is about 120 terabytes. It’s only on eight nodes and it’s pretty much trouble free.

One of the part-time DBAs deals with more operating-system-level stuff: patches, cluster recovery, those sorts of issues. And the other part-time DBA is me. I deal more with data structure design, SQL tuning, and Vertica training for our staff.

In terms of ad-hoc users of our Vertica database, we have well over 100 people who have the ability to run any query they want at any time into the Vertica database.

When we first started out, we tried running Vertica in Amazon EC2. Mind you, this was four or five years ago. Amazon EC2 was not where it is today. It failed. It was very difficult to manage. There were perplexing problems that we couldn’t solve. So we moved our Vertica and essentially all of our big-data data systems out of the cloud onto dedicated hardware, where they are much easier to manage and much easier to bring the proper resources.

Then, at one time in our history, when we built a dedicated hardware cluster for Vertica, we failed to properly heed the hardware planning guide and did not provision enough disk I/O bandwidth. In those situations, Vertica is unstable, and we had a lot of problems.

But once we got the proper disk I/O, it has been smooth sailing. I can’t even remember the last time we even had a node drop out. It has been rock solid. I was able to go on a vacation for three weeks recently and know that there would be no problem, and there was no problem.

Gardner: The ultimate key performance indicator (KPI), "I was able to go on vacation."

Fairly resilient

Abercrombie: Exactly. And with the proper hardware design, HPE Vertica is fairly resilient against out-of-control queries. There was a time when half my time was spent monitoring for slow queries, but again, with the proper hardware, it's smooth sailing. I don’t even bother with that stuff anymore.

Our MicroStrategy BI tool writes very good SQL. Part of the key to our success with this BI portion is designing the Vertica schema and the MicroStrategy metadata layer to take advantage of each other’s strengths and avoid each other’s weaknesses. So that really was key to the stable, exceptional performance we get. I basically get no complaints of slow queries from my BI tool. No problem.

Gardner: The right kind of problem to have.

Abercrombie: Yes.

Gardner: Okay, now that we have heard quite a bit about how you are doing this, I'd like to learn, if I could, about some of the paybacks when you do this properly, when it is running well, in terms of SQL queries, ETL load times reduction, the ability for you to monetize and help your customers create better advertising programs that are acceptable and popular. What are the paybacks technically and then in business terms?

Abercrombie: In order to get those paybacks, a key element was confidence in the data, the results that we were shipping out. The only way to get that confidence was by having highly accurate data and extensive quality control (QC) in the ETL.

What that also means is that while a product is under development and its instrumentation isn't ready yet, that data doesn't make it into our BI tool. You can only get at that data ad hoc.

So the benefit has been a very clear understanding of the day-to-day operations of our ad network, both for our internal monitoring to know when things are behaving properly, when the instrumentation is working as expected, and when the queues are running, but also for our customers.

Because of the flexibility we get from a traditional BI system with 500 metrics over a couple of dozen dimensions, our customers, the publishers and the advertisers, get incredible detail, customized exactly the way they need for ingestion into their systems or to help them understand how Tapjoy is serving them. Again, that comes from confidence in the data.

Gardner: When you have more data and better analytics, you can create better products. Where might we look next to where you take this? I don’t expect you to pre-announce anything, but where can you now take these capabilities as a business and maybe even expand into other activities on a mobile endpoint?

Flexibility in algorithms

Abercrombie: As we expand our business and move into new areas, what we really need is flexibility in our algorithms and the way we deal with some of our real-time decision making.

So one area that's new to us this year is an in-memory SQL database, MemSQL. Some of our old real-time ad optimization was based on pre-calculating data and serving it up through HBase key-value lookups, but now we can do real-time aggregation queries using SQL that are easy to understand, easy to modify, very expressive, and very transparent. That gives us more flexibility in terms of fine-tuning our real-time decision-making algorithms, which is absolutely necessary.
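
A sketch of the kind of on-demand aggregation this enables, such as spend per ad over the last minute computed at query time rather than pre-aggregated into a key-value store. MemSQL speaks the MySQL wire protocol, so a driver such as PyMySQL works; the table, columns, and connection details are invented:

```python
# Real-time aggregation sketch against an in-memory SQL store
# (pip install pymysql). All identifiers are assumptions.
import pymysql

conn = pymysql.connect(host="memsql.example.internal", user="rt",
                       password="...", database="ads")

with conn.cursor() as cur:
    cur.execute(
        """
        SELECT ad_id, SUM(spend_usd) AS spend_last_minute
          FROM ad_events
         WHERE event_ts >= NOW() - INTERVAL 1 MINUTE
         GROUP BY ad_id
         ORDER BY spend_last_minute DESC
         LIMIT 10
        """
    )
    for ad_id, spend in cur.fetchall():
        print(ad_id, spend)

conn.close()
```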

As an example, we acquired a company in Korea called 5Rocks that does app analytics: it tracks users within the app -- what level they're on, what activities they're doing, what they enjoy -- with an eye toward in-app purchase optimization.

And so we're blending the in-app purchase optimization along with traditional ad network optimization, and the two have different rules and different constraints. So we really need the flexibility and expressiveness of our real-time decision-making systems.

Gardner: One last question. You mentioned machine learning earlier. Do you see that becoming more prominent in what you do and how you're working with data scientists, and how might that expand in terms of where you employ it?

Abercrombie: Tapjoy started with machine learning. Our data scientists are machine learning. Our predictive algorithm team is about six times larger than our traditional Vertica BI team. Mostly what we do at Tapjoy is predictive analytics and various machine-learning tasks, so we wouldn't be alive without it. And we've expanded. We're not shifting in one direction or another; it's apples and oranges, and there's a place for both.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

High performance – Data Vault and Exasol

You may have received an e-mail invitation from EXASOL or from ITGAIN inviting you to our forthcoming webinar, such as this:

Do you have difficulty incorporating different data sources into your current database? Would you like an agile development environment? Or perhaps you are using Data Vault for data modeling and are facing performance issues?
If so, then attend our free webinar entitled “Data Vault Modeling with EXASOL: High performance and agile data warehousing.” The 60-minute webinar takes place on July 15 from 10:00 to 11:00 am CEST.
How Poor Data Prevents Retailers to Deliver Omni-Channel Experiences

In the digital era where consumers are just as likely to purchase online as in a brick-and-mortar location, delivering a seamless channel experience has become the new competitive imperative. Retailers are ramping up their investments in omnichannel and multichannel strategies to deliver exceptional experiences, wherever and whenever today’s consumers choose to interact with brands.

What is Omni-Channel Retail?

Omnichannel is an approach to marketing and retail that utilizes multiple communication channels to reach customers. The key is that all platforms need to be aware of each other to facilitate a seamless experience. The customer is the focus, and they need to be able to switch between channels quickly and efficiently, getting the same information and experience wherever they go. Unlike multi-channel marketing approaches, each channel in an omni-channel strategy intuitively knows how a customer interacted with another channel, and that knowledge is used to help guide and continue the customer experience.

As new technologies emerge and more consumers demand it, it is becoming increasingly important for retailers to extend the brick-and-mortar experience to their online channels. Having a presence online and offline has practically become a requirement for some shoppers to even consider buying or using your product. In fact, Forrester Research projects that online retail sales ...


Read More on Datafloq
How You Can Improve Customer Experience with Fast Data Analytics

In today’s constantly connected world, customers expect more than ever before from the companies they do business with. With the emergence of big data, businesses have been able to better meet and exceed customer expectations thanks to analytics and data science. However, the role of data in your business’ success doesn’t end with big data – now you can take your data mining and analytics to the next level to improve customer service and your business’ overall customer experience faster than you ever thought possible.

Fast data is basically the next step for analysis and application of large data sets (big data). With fast data, big data analytics can be applied to smaller data sets in real time to solve a number of problems for businesses across multiple industries. The goal of fast data analytics services is to mine raw data in real time and provide actionable information that businesses can use to improve their customer experience.


“Fast data analytics allows you to turn raw data into actionable insights instantly” - Albert Mavashev


Analyze Streaming Data with Ease

The Internet of Things (IoT) is growing at an incredible rate. People are using their phones and tablets to connect to their home thermostats, security systems, fitness ...


Read More on Datafloq
Which Brands Reign over London Instagram in 2016?

What we’ve learned about brands in London from 5 million Instagram posts.

Why London Instagram

As a modern fashion mecca and a large financial center, London is big on Instagram, so it's not surprising that it is the most-Instagrammed city in Great Britain and the second in the world after New York, followed by Paris.
What do Londoners and visitors to the city Instagram about? What places do they like the most? Where do they feel miserable? These were the questions the InData Labs team set out to answer.

How

Almost five million Instagram posts were collected to be visualized on a map of the city. All the posts were geo-tagged, which made it possible to place them on the map. Colors on the map show the density and sentiment of Instagram posts: positive, neutral, and negative. We've already described the technical features of our maps in one of our recent posts.
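
A toy sketch of the mapping step described, bucketing geo-tagged posts into a latitude/longitude grid and averaging a sentiment score per cell; the grid size, scores, and data are assumptions, as InData Labs' actual method is not public:

```python
# Geo-binned sentiment sketch: average sentiment per ~1 km grid cell.
# Coordinates and scores are invented for illustration.
from collections import defaultdict

posts = [  # (lat, lon, sentiment score in [-1, 1])
    (51.5074, -0.1278, 0.8), (51.5080, -0.1290, -0.2), (51.5500, -0.2000, 0.5),
]

GRID = 0.01  # cell size in degrees (~1 km at London's latitude)

cells = defaultdict(list)
for lat, lon, score in posts:
    cell = (round(lat / GRID), round(lon / GRID))
    cells[cell].append(score)

for cell, scores in cells.items():
    print(cell, f"avg sentiment {sum(scores) / len(scores):+.2f}")
```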

Brands on London Instagram 

InData Labs' primary focus was on brands in London, aiming to uncover which brands dominate London Instagram, who their audiences are, and which brand has established the best relationship with its audience.

The fact that Instagram is one of the most popular social networks for sharing visual content is the main reason ...


Read More on Datafloq
BBBT to Host Webinar from Sisense on Simplifying Business Analytics for Complex Data

This Friday, the Boulder Business Intelligence Brain Trust (BBBT), the largest industry analyst consortium of its kind, will host a private webinar from Sisense on its innovative Single Stack™ and In-Chip™ technologies.

(PRWeb June 23, 2016)

Read the full story at http://www.prweb.com/releases/2016/06/prweb13507009.htm

Barabási's New Book – My Jaw Dropped

We already knew that Albert-László Barabási is one of those rare researchers who can communicate in a way the broader public can enjoy. Those who know his earlier books (Behálózva [Linked], Villanások [Bursts]) surely perked up at the news of the new release. They will certainly be disappointed when they pick up the volume titled "A hálózatok tudománya" [The Science of Networks], because it was published not as popular science but as a university textbook. Only very few will chew their way through the many formulas and derivations, mainly those who earn credit points for passing an exam on the material.

Despite all this, I find the book's publication sensational; one only needs to leaf through its pages with slightly different eyes. If I look at it as a university instructor, every nerve in me lights up at once and I am captivated, much as when I held the first truly smart phone in my hand: I never thought the world needed such a thing, but once I experienced its power, every other mobile phone lost its meaning and faded into gray nothingness.

The same feeling came over me with Barabási's new book; I think it is the first real Hungarian-language university textbook I have ever held in my hands. It is not simply attractive and illustrative: gripping figures, excellent illustrations, colorful asides, and background information make it stand out. The chapters are well structured, with the heavier derivations set apart so the book can also be taught in programs that lack the mathematical background to prove every theorem. Practice exercises, homework, review questions. All this in a large-format hardcover of nearly 500 pages, printed on the finest paper.

I recommend that anyone visiting a bookstore find a copy and simply toy with the idea of what it would have been like to prepare for a university course from books like this. Holding such a book in your hands, it seems less likely that in the near future young people will attend Coursera courses instead of universities, or that everything is best learned only from the charismatic presenters on YouTube.

For interested students

I am sure that from September on there will be courses in Hungarian higher education built on the material of this book. If, as a student, you nevertheless feel that no such course will be available to you, and you would gladly learn what is in the book but fear you won't be persistent enough purely on your own initiative, we have an unconventional offer:

  • We will give you a copy of the book as a gift.
  • Come and take an exam with us on the book's material by October 1, 2016; we will gladly examine you on it.
  • If you do not pass the exam, or you end up not being able to come, we will ask for the book back.

We will work out the details in any case; the point is that we will help you commit to learning what the book contains. In the first round we are offering this opportunity to the 3 students who apply fastest. Apply at: gaspar@tmit.bme.hu.


What Makes Business Intelligence “Enterprise”?

I have an article in the Spring TDWI Journal. It has now been six months and the organization has been kind enough to provide me with a copy of my article to use on my site: TDWI_BIJV21N1_Teich.

If you like my article, and I know you will, check out the full journal.

 

The post What Makes Business Intelligence “Enterprise”? appeared first on Teich Communications.

Expert panel explores the new reality for cloud security and trusted mobile apps delivery

The next BriefingsDirect thought leadership panel discussion focuses on the heightened role of security in the age of global cloud and mobile delivery of apps and data.

As enterprises and small to medium-sized businesses (SMBs) alike weigh the balance of apps and convenience against security, a new dynamic is emerging. Security concerns increasingly dwarf other architecture considerations.

Yet advances in thin clients, desktop virtualization (VDI), cloud management services, and mobile delivery networks are allowing both increased security and edge applications performance gains.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn more about the new reality for end-to-end security for apps and data, please welcome our panel: Stan Black, Chief Security Officer at Citrix; Chad Wilson, Director of Information Security at Children's National Health System in Washington, DC; Whit Baker, IT Director at The Watershed in Delray Beach, Florida; Craig Patterson, CEO of Patterson and Associates in San Antonio, Texas, and Dan Kaminsky, Chief Scientist at White Ops in San Francisco. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Stan, a first major use case of VDI was the secure, stateless client. All the data and apps remain on the server, locked down, controlled. But now that data is increasingly mobile, and we're all mobile. So, how can we take security on the road, so to speak? How do we move past the safe state of VDI to full mobile, but not lose our security posture?

Black: Probably the largest challenge we all have is maintaining consistent connectivity. We're now able to keep data locally or make it highly extensible, whether it’s delivered through the cloud or a virtualized application. So, it’s a mix and a blend. But from a security lens, each one of those service capabilities has a certain nuance that we need to be cognizant of while we're trying to protect data at rest, in use, and in motion.

Gardner: I've heard you speak about bring your own device (BYOD), and for you, BYOD devices have ended up being more secure than company-provided devices. Why do you think that is?

Caring for assets

Black: Well, if you own the car, you tend to take care of it. When you have a BYOD asset, you tend to take care of it, because ultimately, you're going to own that, whether it’s purchased for you with a retainer or what have you.

Often, corporate-issued assets are like a car rental. You might not bring it back the same way you took it. So it has really changed quite a bit. But the containerization gives us the ability to provide as much, if not more, control in that BYOD asset.

Gardner: This also I think points out the importance of behaviors and end-user culture and thinking about security, acting in certain ways. Let's go to you, Craig. How do we get that benefit of behavior and culture as we think more about mobility and security?

Patterson: When we look at mobile, we've had people who would have a mobile device out in the field. They're accustomed to being able to take an email, and that email may have, in our situation, private information -- Social Security numbers, certain client IDs -- on it, things that we really don't want out in the public space. The culture has been, take a picture of the screen and text it to someone else. Now, it’s in another space, and that private information is out there.

You go from working in a home environment, where you text everything back and forth, to having secure information that needs to be containerized, shrink-wrapped, and not go outside a certain control parameter for security. Now, you're having a culture fight [over] utilization. People are accustomed to using their devices in one way and now, they have to learn a different way of using devices with a secure environment and wrapping. That’s what we're running into.

Gardner: We've also heard at the recent Citrix Synergy 2016 in Las Vegas that IT should be able to increasingly say "Yes," that it's an important part of getting to better business productivity.
Learn more about the Citrix Security Portfolio of Workspace-as-a-Service, Application Delivery, Virtualization, Mobility, Network Delivery, and File-Sharing Solutions.
Dan, how do we get people to behave well in secure terms, but not say "No"? Is there a carrot approach to this?

Kaminsky: Absolutely. At the end of the day, our users are going to go ahead and do stuff they need to get their jobs done. I always laugh when people say, "I can’t believe that person opened a PDF from the Internet." They work in HR. Their job is to open resumes. If they don’t open resumes, they're going to lose their job and be replaced by someone else.

The thing I see a lot is that these software-as-a-service (SaaS) providers are being pressed into service to provide the things that people need. It’s kind of like a rogue IT or an outsourced IT, with or without permission.

The unusual realization that I had is that all these random partners we're getting have random policies and are storing data. We hear a lot of stuff about the Internet of Things (IoT), but I don't know any toasters that have my Social Security number. I know lots of these DocuSign, HelloSign systems that are storing really sensitive documents.

Maybe the solution, if we want people to implement our security technologies, or at least our security policies, is to pay them. Tell them, "If you actually have attracted our users, follow these policies, and we'll give you this amount of money per day, per user, automatically through our authentication layer." It sounds ridiculous, but you have to look at the status quo. The status quo is on fire, and maybe we can pay people to put out their fires.

Quid pro quo

Gardner: Or perhaps there are other quid pro quos that don't involve money? Chad, you work at a large hospital organization and you mentioned that you're 100 percent digital. How did you encourage people with the carrot to adhere to the right policies in a challenging environment like a hospital?

Wilson: We threw out the carrot-and-stick philosophy and just built a new highway. If you're driving on a two-lane highway, and it's always congested, and you want somebody to get there faster, then build a new highway that can handle the capacity and the security. Build the right on- and off-ramps to it and then cut over.

We've had an electronic medical record (EMR) implementation for a while, and we just finished rolling out the EMR to all of our ambulatory spaces. It's all delivered through virtualization on that highway that we built. So, they have access to it wherever they need it.

Gardner: It almost sounds like you're looking at the beginning bowler’s approach, where you put rails up on the gutters, so you can't go too far afield, whether you wish to or not. Whit Baker, tell us a little bit about The Watershed and how you view security behavior. Is it rails on the gutters, carrots or sticks, how does it go?

Baker: I would say rails on the gutters for us. We've completely converted everything to a VDI environment. Whether they're connecting with a laptop, with broadband, or their own home computer or mobile device, that session is completely bifurcated from their own operating system.

So, we're not really worried. Your desktop machine can be completely loaded with malware and whatnot, but when you open that session, you're inside of our system. That's basically how we handle the security. It almost doesn't require the users to be conscious of security.

At the same time, we're still afraid of attachments and things like that. So, we do educational type things. When we see some phishing emails come in, I'll send out scam alerts and things like that to our employees, and they're starting to become self-aware. They are starting to ask, "Should I even open this?" -- those sort of things.

So, it's a little bit of containerization, giving them some rails that they can bounce off of, and education.

Gardner: Stan, thinking about other ways that we can encourage good security posture in the mobility era, authentication certainly comes to mind, multi-factor authentication (MFA). How does that play into this keeping people safe?

Behavior elements

Black: It’s a mix of how we're going to deliver the services, but it's also a mix of the behavior elements and the fact that now technology has progressed so much that you can provide a user an entire experience that they actually enjoy. It gives them what they need, inside of a secure session, inside of a secure socket layer, with the inability to go outside of those bowling lanes, if they're not authorized to do so.

Additionally, authentication technologies have come a long way from hard tokens that we used to wear. I've seen people with four, five, or six of them, all in one necklace. I think I might have been one of them.

Multi-factor authentication and the user interface all rely on pieces of information that aren't tied to the person's privacy as an individual, like their Social Security number; it’s their user experience that enables them to connect seamlessly. Often, when you have a help-desk environment, as an example, you put a time-out on their system. They go from one phone call to another phone call and then they have to log back in.

The interfaces that we have now and the MFA, the simple authentication, the simplified side on all of those, enable a person, depending upon what their role is, to connect into the environment they need to do their job quickly and easily.
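To make the MFA point concrete, here is a minimal sketch of how a time-based one-time password (TOTP), the mechanism behind most soft-token authenticator apps, can be computed in Python using only the standard library. The 30-second window and six-digit length are the common RFC 6238 defaults, and the Base32 secret is an arbitrary example, not anything the panel specified.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, step=30, digits=6):
        """Compute an RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // step           # current 30-second window
        msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Both the server and the user's authenticator derive the same short-lived
    # code from a shared secret, so nothing private crosses the wire.
    print(totp("JBSWY3DPEHPK3PXP"))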

Gardner: You mentioned user experience, and maybe that’s the quid pro quo. You get more user experience benefits if you take more precautions with how you behave using your devices.

Dan, any thoughts on where we go with authentication and being able to say, Yes, and encourage people to do the right thing?
Learn more about the Citrix Security Portfolio of Workspace-as-a-Service, Application Delivery, Virtualization, Mobility, Network Delivery, and File-Sharing Solutions.
Kaminsky: I can't emphasize enough how important usability is in getting security wins. We've had some major ones. We moved people from Telnet to SSH. Telnet was unencrypted and was a disaster. SSH is encrypted. It is actually the thing people use now, because if you jump through a few hoops, you stop having to type in a password.

You know what VPNs meant? VPNs meant you didn't have to drive into the office on a Sunday. You could be at home and fix the problem, and hours became minutes or seconds. Everything that we do that really works involves making things more useable and enabling people. Security is giving you permission to do this thing that used to be dangerous.

I actually have a lot of hope in the mobility space, because a lot of these mobile environments and operating systems are really quite secure. You hand someone an iPad, and in a year, that iPad is still going to work. There are other systems where you hand someone a device and that device is not doing so well a year from now.

So there are a lot more controls and stability from some of these mobile things that people actually like to use more, and they turn out to also be significantly more secure.

Gardner: Craig, as we're also thinking about ways of keeping people on the straight and narrow path, we're getting more intelligent networks. We're starting to get more data and analytics from those devices and we're able to see what goes on in that network in high detail.

Tell us about the ways in which we can segment and then make zones for certain purposes that may come and go based on policies. Basically, how are intelligent networks helping us provide that usability and security?

Access to data

Patterson: The example that comes to my mind is that in many of the industries, we have partners who come on site for a short period of time. They need access to data. They might be doing inspections for us and they'll be going into a private area, but we don't want them to take certain photos, documents and other information off site after a period of time.

Containerizing data and having zones allows a person to have access while they're on premises, within a certain "electronic wire fence," if you will, or electronic guardrails. Once they go outside of that area, that data is no longer accessible or they've been logged off the system and they no longer have access to those documents.

We had kind of an old-fashioned example where people think they are more secure, because they don't know what they're losing. We had people with file cabinets that were locked and they had the key around their neck. They said, "Why should we go to an electronic documents system where I can see when you viewed it, when you downloaded it, where you moved that document to?" That kind of scared some people.

Then, I walked in with half their file cabinet and I said, "You didn’t even know these were gone, but you felt secure the whole time. Wouldn’t you rather know that it was gone and have been able to institute some security protocols behind it?"

A lot of it goes to usability. We want to make things usable and we have to have access to it, but at the same time, those guardrails include not only where we can access it and at what time, but for how long and for what purposes.

We have mobile devices for which we need to be able to turn the camera functions off in certain parts of our facility. For mobile device management, that's helpful. For BYOD, that becomes a different challenge, and that's when we have to handle giving them a device that we can control, as opposed to BYOD.

Gardner: Stan, another major trend these days is the borderless enterprise. We have supply chains, alliances, ecosystems that provide solutions, an API-first mentality, and that requires us to be able to move outside and allow others to cross over. How does the network-intelligence factor play into making that possible so that we can say, Yes, and get a strong user experience regardless of which company we're actually dealing with?

Black: I agree with the borderless concept. The interesting part of it, though, is with networks knowing where they're connecting to physically. The mobile device has over 20 sensors in it. When you take all of that information and bring it together with whatever APIs are enabled in the applications, you start to have a very interesting set of capabilities that we never had before.

A simple example is, if you're a database administrator and you're administering something inside the European Union (EU), there are very stringent privacy laws that make it so you're not allowed to do that.

We don’t have to train the person or make the task more difficult for them; we simply disable the capability through geofencing. When one application is talking securely through a socket, all the way to the back end, from a mobile device, all the way into the data center, you have pretty darn good control. You can also separate duties; system administration being one function, whereas database administration is another very different thing. One set doesn't see the private data; one set has very clear access to it.
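As a concrete illustration of the geofencing and separation-of-duties ideas Black describes, here is a minimal Python sketch; the roles, region names, and policy table are hypothetical, invented for this example rather than drawn from any Citrix product.

    # Access requires both an allowed role AND a device inside the permitted
    # region; private data is further gated by separation of duties.
    POLICY = {
        "db_admin":  {"regions": {"EU"},       "private_data": True},
        "sys_admin": {"regions": {"EU", "US"}, "private_data": False},
    }

    def authorize(role, device_region, wants_private_data):
        rule = POLICY.get(role)
        if rule is None:
            return False                      # unknown role: deny by default
        if device_region not in rule["regions"]:
            return False                      # geofence: disable, don't retrain
        if wants_private_data and not rule["private_data"]:
            return False                      # separation of duties
        return True

    print(authorize("db_admin", "US", True))   # False: outside the geofence
    print(authorize("sys_admin", "EU", True))  # False: no private-data rights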

Getting visibility

Gardner: Chad, you mentioned how visibility is super important for you and your organization. Tell me a bit about moving beyond the user implications. What about the operators? How do you get that visibility and keep it, and how important is that to maintaining your security posture?

Wilson: If you can't see it, you can’t protect it. No matter how much visibility we get into the back end, if the end user doesn't adopt the application or the virtualization that we've put in place or the highway that we've built, then we're not going to see the end-to-end session. They're going to continue to do workarounds.

So, usability is very important to end-user adoption and adopting the new technologies and the new platforms. Systems have to be easy for them to access and to use. From the back-end, the visibility piece, we look at adopting technology strategically to achieve interoperability, not just point products here and there to bolt them on.

Strategic innovation and strategic procurement around technology and partnership, like we have with Citrix, allow us to have consistent delivery of the application and the end-user experience, no matter what device users go to and where they access from in the world. On the back side, that helps us, because we can have that end-to-end visibility of where our data is heading, the authentication right upfront, as well as all the pieces and parts of the network that come into play to deliver that experience.

So, instead of thinking about things from a device-to-device-to-device perspective, we're thinking about one holistic service-delivery platform, and that's the new highway that provides that visibility.

Gardner: Whit, we've heard a lot about the mentality that you should always assume someone unwanted is in your network. Monitoring and response is one way of limiting that. How does your organization acknowledge that bad things can happen, but that you can limit that, and how important is monitoring and response for you in reducing damage?

Baker: In our case, we have several layers of user experience. Through policy, we only allow certain users to do certain things. We're a healthcare system with various medical personnel: doctors, nurses, and therapists, versus people in our corporate billing area and our call center. All of those different roles are basically looking only at the data that they need to be accessing, and through policy, it’s fairly easy to do.

Gardner: Stan, on the same subject, monitoring and response, assuming that people are in, what is Citrix seeing in the field, and how are you giving that response time as low a latency as possible?

Standard protocol

Black: The standard incident-response protocol is identify, contain, control, and communicate. We're able to shrink what we need to identify. We're able to connect from end-to-end, so we're able to communicate effectively, and we've changed how much data we gather regarding transmissions and communications.

If you think about it, we've shrunk our attack surface; we've shrunk our vulnerable areas, the methods or vectors by which people can enter in. At the same time, we've gained incredibly high visibility and fidelity into what is supposed to be going over a wire or wireless, and what is not.

We're now able to shrink the identify, contain, control, and communicate spectrum to a much shorter area and focus our efforts with really smart threat intelligence and incident response people versus everyone in the IT organization and everyone in security. Everyone is looking at the needle in the haystack; now we just have a smaller stack of needles.

Patterson: I had a thought on that, because as we looked at a cloud-first strategy, one of the issues that we looked at was, "We have a voice-over-IP system in the cloud, we have Azure, we have Citrix, we have our NetScaler. What about our firewalls now, and how do we actually monitor intrusion?"

We have file attachments and emails coming through in ways that aren’t on our on-premises firewall and not with all our malware detection. So, those are questions that I think all of us are trying to answer, because now we're creating known unknowns and really unknown unknowns. When it happens, we're going to say, "We didn’t know that that part could happen."

That’s where part of the industry is, too. Citrix and Microsoft are helping us with that in our environments, but those are still open questions for us. We're not entirely satisfied with the answers yet.

Gardner: Dan, one of the other ways that we want to be able to say, Yes, to our users and increase their experiences as workers is to recognize the heterogeneity -- any cloud, any device, multiple browser types, multiple device types. How do you see the ability to say, Yes, to vast heterogeneity, perhaps at a scale we've never seen before, but at the same time, preserve that security and keep those users happy?

Kaminsky: The reason we have different departments and multiple teams is because different groups have different requirements. They have different needs that are satisfied in ways that we don't necessarily understand. It’s not the heterogeneity that bothers us; it’s the fact that a lot of systems have different risks. We can merge the risks, or simultaneously address them with consistent technologies, like containerization and virtualization, like the sort of centralization solutions out there.

People are sometimes afraid of putting all their eggs in one basket. I'll take one really well-built basket over 50,000 totally broken ones. What I see is, create environments in which users can use whatever makes their job work best, and go ahead and realize that it's not actually the fact that the risks are that distinct, that they are that unique. The risk patterns of the underlying software are less diverse than the software itself.
Learn more about the Citrix Security Portfolio of Workspace-as-a-Service, Application Delivery, Virtualization, Mobility, Network Delivery, and File-Sharing Solutions.
Gardner: Stan, most organizations that we speak to say they have at least six, perhaps more, clouds. They're using all sorts of new devices. Citrix has recently come out with a Raspberry Pi option at less than $100 to be a viable Windows 10 endpoint. How do we move forward and keep the options open for any cloud and any device?

Multitude of clouds

Black: When you look at the cloud, there is a multitude of public clouds. Many companies have internal clouds. We've seen all of this hyperconvergence, but what has blurred over time are the controls between whether it’s a cloud, whether it’s the enterprise, and whether it’s mobile.

Again, some of what you've seen has been how certain technologies can fulfill controls between the enterprise and the cloud, because cloud is nimble, it’s fast, and it's great.

At the same time, if you don't control it, don’t manage it, or don't know what you have in the cloud, which many companies struggle with, your risk starts to sprawl and you don't even know it's happened.

So it's not adding difficult controls, what I would call classic gates, but transparency, visibility, and thresholds. You're allowed to do this between here and here. An end user doesn't know those things are happening.

Also, weaving analytics into every connection, knowing what that wire is supposed to look like, what that packet is supposed to look like gives you a heck of a lot more control than we've had for decades.

Gardner: Chad, for you and your organization, how would you like to get security visibility in terms of an analytic dashboard, visualization, and alerts? What would you like to see happen in terms of that analytics benefit?

Wilson: It starts with population health and the concept behind it. Population health takes in all the healthcare data, puts it into a data warehouse, and leverages analytics to be able to show trends with, say, kids presenting with asthma or patients presenting with asthma across their lifespan and other triggers. That goes to quality of care.

The same concept should be applied to security. When we bring that data together, all the various logs, all the various threat vectors, and what we are seeing, not just signatures, we're able to identify trends and how the bad guys are operating. Are the bad guys single-vectored, or have they learned the concept of combined arms, like our militaries have? Are they able to put things together to have better impact? And where do we need to put things together to have better protection?

We need to change the paradigm, so when they show their hand once, it doesn't work anymore. The only way that we can do that is by being able to detect that one time when they show their hand. It's getting them to do one thing to show how they are going to attack us. To do that, we have to pull together all the logs, all of the data, and provide analytics and get down to behavior; what is good behavior, what is bad behavior.

That's not a signature that you're detecting for malware; that is a behavior pattern. Today I can do one thing, and tomorrow I can do it differently. That's what we need to be able to get to.
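To make the signature-versus-behavior distinction concrete, here is a toy Python sketch of behavior-based detection: rather than matching a fixed pattern, it flags activity that deviates sharply from a learned baseline. The event counts and the three-sigma threshold are invented for illustration.

    from statistics import mean, stdev

    # Historical logins per hour form the behavioral baseline.
    baseline = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4]
    observed = 40                      # logins seen in the current hour

    mu, sigma = mean(baseline), stdev(baseline)
    z = (observed - mu) / sigma        # how far from normal behavior?

    if z > 3:                          # more than three standard deviations out
        print("alert: login rate %d/h is anomalous (z=%.1f)" % (observed, z))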

Getting information

Patterson: I like the illustration that was just used. What we're hoping for with the cloud strategy is that, when there's an attack on one part of the cloud, even if it's someone else that’s in Citrix or another cloud provider, then that is shared, whereas before we have had all these silos that need to be independently secured.

Now, the windows that are open in these clouds that we're sharing are going to be ways that we can protect each one from the other. So, when one person attacks Citrix a certain way, Azure a certain way, or AWS a certain way, we can collectively close those windows.

What I like to see in terms of analytics is, and I'll use kind of a mechanical engineering approach, I want to know where the windows are open and where the heat loss went or where there was air intrusion. I would like to see, whether it went to an endpoint that wasn't secured or that I didn't know about. I'd like to know more about what I don't know in my analytics. That’s really what I want analytics for, because the things that I know I know well, but I want my analytics to tell me what I don't know yet.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Citrix.

You may also be interested in:

Great Courses Down Under

Great Courses Down Under

For all Australian readers, there are a couple of great courses coming to Australia. Firstly, Barry Devlin is running a 2-day course from 24th August (see here). For those that don’t know Barry, he is the unofficial father of data warehousing. He was there at the start and never got the media attention of others. What […]
What is Success?

What is Success?

Late last week I took a quick trip back to central New York (Syracuse area) to attend a family funeral for my brother’s father-in-law (whom I had known for 20+ years). While it was of course a sad occasion, it did allow me an unplanned visit to see my father (for an early Father’s Day, no less), […]
How Unlimited Computing Power, Swarms of Sensors and Algorithms Will Rock our World

How Unlimited Computing Power, Swarms of Sensors and Algorithms Will Rock our World

We have entered a world where accelerated change is the only constant. The speed at which technologies are currently developing is unlike any other in the existence of mankind. When we look at the past two hundred years, we have seen multiple inventions that changed society as we knew it. First we had the invention of the printing press, which made books available to the general public. Then we had the invention of the steam engine, which significantly altered almost every industry on earth and pushed mankind to the next level. And in the 20th century we saw the invention of the Internet and the computer. This latest ‘information revolution’ is of a different scale than the industrial revolution made possible by the steam engine.

When the Internet was invented, it did not yet look like something major was going on. But isn't that always the case with groundbreaking inventions? It takes some time before you know what has happened. Now, almost fifty years later, we can finally get a glimpse of the profundity of this invention as we slowly but surely enter the information era.

In the past decades, we have seen the creation of a digital infrastructure that is ...


Read More on Datafloq
How Algorithms Could Propel Us to Earth 2.0

How Algorithms Could Propel Us to Earth 2.0

We have entered a world where accelerated change is the only constant. The speed at which technologies are currently developing is unlike any other in the existence of mankind. When we look at the past two hundred years, we have seen multiple inventions that changed society as we knew it. First we had the invention of the printing press, which made books available to the general public. Then we had the invention of the steam engine, which significantly altered almost every industry on earth and pushed mankind to the next level. And in the 20th century we saw the invention of the Internet and the computer. This latest ‘information revolution’ is of a different scale than the industrial revolution made possible by the steam engine.

When the Internet was invented, it did not yet look like something major was going on. But isn't that always the case with groundbreaking inventions? It takes some time before you know what has happened. Now, almost fifty years later, we can finally get a glimpse of the profundity of this invention as we slowly but surely enter the information era.

In the past decades, we have seen the creation of a digital infrastructure that is ...


Read More on Datafloq
5 Security Vulnerabilities Looming for the Internet of Things

5 Security Vulnerabilities Looming for the Internet of Things

Almost three years ago, I wrote in my IoT blog the posts “Are you prepared to answer M2M/IoT security questions of your customers?” and “There is no consensus how best to implement security in IoT,” given the importance that security has in fulfilling the promise of the Internet of Things (IoT).

Now, I have been sharing my opinion about the key role of IoT security with other international experts in articles such as “What is the danger of taking M2M communications to the Internet of Things?” and at different events (CyCon, IoT Global Innovation Forum 2016).

Security Has Always Been a Trade-off Between Cost and Benefit

I am honest when I say that I do not know how McKinsey calculated the total impact of IoT on the world economy in 2025, even in one of the specific sectors, or whether they took into account the challenge of security, but it hardly matters: “The opportunities generated by IoT far outweigh the risks”.

With increased IoT opportunity comes increased security risk and a flourishing IoT security market (according to Zion Research, the IoT security market will grow to USD 464 million by 2020).

A Decade of Breaches and the Biggest Attack is Still Looming

We all ...


Read More on Datafloq
How to Make Business Processes More Efficient With Big Data

How to Make Business Processes More Efficient With Big Data

Adhering to standard business processes can sometimes cost a lot of money. Commercial flights, for instance, are mandated by law to adhere to a strict schedule of maintenance and inspection. The loss to airline companies due to these routine checks can be as high as $10,000 for each hour of non-operation. A recent study conducted by SAP showed that by using big data and in-memory technology to analyse airline engine data and by streamlining this maintenance process, unplanned downtime dropped by as much as 18%.

So what exactly does streamlining entail? In the case of the airline industry, operators set up sensors to measure more than 300,000 different parameters relating to the engine of the flight. This amounted to nearly 20 terabytes of information for each hour of the flight. Analysing these different parameters against benchmarked indices was sufficient to accurately predict any possible malfunctions. Doing such big data analysis not only prevents unplanned downtime, but also helps operators optimize their flight runs to account for future maintenance work, reduce the time it takes to inspect the symptoms (since the exact engine parameters are known) and thus avoid flight delays.
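As a rough illustration of how such benchmark-based checks might work, here is a minimal Python sketch; the parameter names and allowed ranges are invented for this example, not taken from the SAP study.

    # Each engine parameter has a benchmarked band of acceptable values;
    # readings outside the band are flagged as maintenance candidates.
    BENCHMARKS = {
        "exhaust_gas_temp_c": (300.0, 650.0),
        "oil_pressure_psi":   (25.0, 95.0),
        "fan_vibration_mm_s": (0.0, 7.1),
    }

    def flag_readings(readings):
        alerts = []
        for name, value in readings.items():
            lo, hi = BENCHMARKS[name]
            if not lo <= value <= hi:
                alerts.append("%s=%s outside [%s, %s]" % (name, value, lo, hi))
        return alerts

    print(flag_readings({"exhaust_gas_temp_c": 700.2,
                         "oil_pressure_psi": 60.0,
                         "fan_vibration_mm_s": 3.2}))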

Such data driven streamlining of business processes is today possible in every ...


Read More on Datafloq
451 analyst Berkholz on how DevOps, automation and orchestration combine for continuous apps delivery

451 analyst Berkholz on how DevOps, automation and orchestration combine for continuous apps delivery

The next BriefingsDirect Voice of the Customer thought leadership discussion focuses on the burgeoning trends around DevOps and how that’s translating into new types of IT infrastructure that both developers and operators can take advantage of.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn more about trends and developments in DevOps, microservices, containers, and the new direction for composable infrastructure, we’re joined by Donnie Berkholz, Research Director at 451 Research, who is based in Minneapolis. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why are things changing so much for apps deployment infrastructure? Why is DevOps newly key for software development? And why are we looking for “composable infrastructure?”

Berkholz: It’s a good question. There are a couple of big drivers behind it. One of them is cloud, probably the biggest one, because of the scale and transience that we have to deal with now, with virtual machines (VMs) appearing and disappearing on such a rapid basis.

We have to have software, processes, and cultures that support that kind of new approach, and IT is getting more and more demands from the line of business to scale and to do more. They're not getting more money or people, and they have to figure out the right approach to deal with this. How can we scale, how can we do more, and how can we be more agile?

DevOps is the approach that’s been settled on. One of the big reasons behind that is the automation. That’s one of what I think of as the three pillars of DevOps, which are culture, automation, and measurement.

Automation is what lets you move from this metaphor of cattle versus pets, moving from the pet side of it, where you carefully name and handcraft each server, to a cattle mindset, where you're thinking about fleets of servers and about services rather than individual servers, VMs, containers, or what have you. You can have systems administrators maintaining 10,000 VMs, rather than 100 or 150 servers by hand. That’s what automation gets you.

More with less

So you're doing more with less. Then, as I said, they're also getting demands from the business to be more agile and deliver it faster, because the businesses all want to compete with companies like Netflix or Zenefits, the Teslas of the world, the software-defined organizations. How can they be more agile, how can they become competitive, if they're a big insurance company or a big bank?

DevOps is one of the key approaches behind that. You get the automation, not just on the server side, but on the application-delivery pipeline, which is really a critical aspect of it. You're moving toward this continuous delivery approach, and being able to move a step beyond agile to bring agile all the way through to production and to deploy software, maybe even on every commit, which is the far end of DevOps. There are a lot of organizations that aren’t there yet, but they're taking steps toward that, toward moving from deployments every three months or six months to every few weeks.
Learn More about DevOps Solutions that Unify Development and Operations to Accelerate Business.
Gardner: So the vision of having that constant iterative process, continuous development, continuous test, continuous deployment -- at the same time being able to take advantage of these new cloud models -- it’s still kind of a tricky equation for people to work out.

What is it that we need to put in place that allows us to be agile as a development organization and to be automated and orchestrated as an operations organization? How can we make that happen practically?

Berkholz: It always goes back to three things -- people, process, and technology. From the people perspective, what I have run into is that there are a lot of organizations that have either development or operational groups, where some of them just can't make this transition.

They can't start thinking about the business impacts of what they're doing. They're focused on keeping the lights on, maintaining the servers, writing the code, and being able to make that transition to focusing on what the business needs. How am I helping the company is the critical step from an individual level, but also from an organizational level.

IT is going through this kind of existential crisis of moving from being a cost center to fighting shadow IT, fighting bring your own device (BYOD), trying to figure out how to bring that all into the fold. How they do so is this transition toward IT as a service is the way we think about it. IT becoming more like a service provider in their own right, pulling in all these external services and providing a better experience in house.

If you think about shadow IT, for example, you think about developers using a credit card to sign-up for some public cloud or another. That’s all well and good, but wouldn’t it be even nicer if they didn’t have to worry about the billing, the expensing, the payments, and all that stuff, because IT already provided that for them. That’s where things are going, because that’s the IT-as-a-service provider model.

Gardner: People, process, technology, and existential issues. The vendors are also facing existential issues; things are changing so fast, and they provide the technology, while the people and the process are up to the enterprise to figure out. What's happening on the technology side, and how are the vendors reacting to allow enterprises to then employ the people and put in place the processes that will bring us to this better DevOps automated reality? What can we put in place technically to make this possible?

Two approaches

Berkholz: It goes back to two approaches -- one coming in from the development side and one coming in from the operational side.

From a development side, we're talking about things like continuous-delivery pipelines --  what does the application delivery process look like? Typically, you'd start with something like continuous integration (CI).

Just moving toward an automated testing environment, every commit you make, you're testing the code base against it one way or another. This is a big transition for people to make, especially as you think about moving the next step to continuous delivery, which is not just testing the code base, but testing the full environment and being ready to deploy that to production with every commit, or perhaps on a daily basis.

So that's a continuous-integration, continuous-delivery approach using CI servers. There's a pretty well-known open-source one called Jenkins, and there are many other options, including as-a-service offerings. That tends to be step one, if you're coming in from the development side.
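For readers who have not used a CI server, here is a bare-bones Python sketch of the loop a tool like Jenkins automates on every commit: fetch the latest code, run the test suite, and report pass or fail. The repository path and the pytest command are placeholders, not details from the discussion.

    import subprocess

    def ci_build(repo_dir):
        # Pull the latest commit, then run the automated test suite against it.
        subprocess.run(["git", "pull"], cwd=repo_dir, check=True)
        result = subprocess.run(["python", "-m", "pytest"], cwd=repo_dir)
        return result.returncode == 0          # True means a green build

    if __name__ == "__main__":
        ok = ci_build("/path/to/checkout")     # placeholder working copy
        print("build passed" if ok else "build failed")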

Now, on the operational side, automation is much more about infrastructure as code. It's really the core tenet, and it's embodied by configuration-management software like Puppet, Chef, Ansible, Salt, and maybe CFEngine, and by the approach of defining server configuration as code and maintaining it in version control, just like you would maintain the software that you're building in version control. You can scale it easily because you know exactly how a server is created.

You can ask if that's one mail server or is it 20? It doesn’t really matter. I'm just running the same code again to deploy a new VM or to deploy onto a bare-metal environment or to deploy a new container. It’s all about that infrastructure-as-code approach using configuration-management tools. When you bring those two things together, that’s what enables you to really do continuous delivery.
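To show what infrastructure as code means mechanically, here is a toy Python sketch of the declare-then-converge model those tools share: desired state is described as data, and the code acts only when reality drifts from it. The file path and contents are arbitrary examples; real tools such as Puppet or Ansible use far richer resource models.

    import os

    # Desired state, declared as data and kept in version control.
    DESIRED_FILES = {
        "/tmp/motd": "managed by configuration tooling\n",
    }

    def converge():
        for path, content in DESIRED_FILES.items():
            current = None
            if os.path.exists(path):
                with open(path) as f:
                    current = f.read()
            if current != content:             # act only when state drifts
                with open(path, "w") as f:
                    f.write(content)
                print("converged", path)
            else:
                print(path, "already in desired state")

    # Running this once or a thousand times yields the same end state, which
    # is what makes maintaining 10,000 VMs tractable.
    converge()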

You’ve got the automated application delivery pipeline on the top and you've got the automated server environment on the bottom. Then, in the middle, you’ve got things like service virtualization, data virtualization, and continuous-integration servers all letting you have an extremely reliable and reproducible and scalable environment that is the same all the way from development to production.
Learn More about DevOps Solutions that Unify Development and Operations to Accelerate Business.
Gardner: And when we go to infrastructure as code, when we go to software-defined everything, there's a challenge getting there, but there are also some major paybacks. When you're set up to analyze your software, when you can replicate things rapidly, and when you can deploy to a cloud model that works for your economic or security requirements, you get a lot of benefits.

Are we seeing those yet, Donnie?

Berkholz: One of the challenges is that we know there are benefits, but they're very challenging to quantify. When you talk about the benefit of delivering a solution to market faster than your competitors, the benefit is that you're still in business. The benefit is that you’re Netflix and you're not Blockbuster. The benefit is that you’re Tesla and you’re not one of the big-three car manufacturers. Tesla, for example, can ship an update to its cars that let them self-drive on-the-fly for people who already purchased the car.

You can't really quantify the value of that easily. What you can quantify is natural selection and action. There's no mandatory requirement that any company survive or that any company can make the transition to software-defined. But, if you want to survive,  you’re going to have to take this DevOps mindset, so that you can be more agile, not just as a software group, but as a business.

Gardner: Perhaps one of the ways we can measure this is that we used to look at IT spend as a percentage of capital spend for an enterprise. Many organizations, over the past 20 or 30 years, found themselves spending 50 percent or more of their capital expenditures on IT.

I think they'd like to ratchet back. If we go to IT as a service, if we pay for things at an operations level, if we only pay for what we use, shouldn't we start to see a fairly significant decrease in the total IT spend, versus revenue or profit for most organizations?

Berkholz: The one underlying factor is how important software is to your company. If that importance is growing, you're probably going to spend more as a percentage. But you're going to be generating more margin as a result of that. That's one of the big transitions that are happening, the move from IT as a cost center to IT as a collaborator with the business.

The move is away from your traditional old CIO view of we're going to keep the lights on. A lot of companies are bringing in Chief Digital Officers, for example, because the CIO wasn't taking this collaborative business view. They're either making the transition or they're getting left behind.

Spending increase

I think we'll see IT spend increase as a percentage, because companies are all realizing that, in actuality, they're software companies or they're becoming software companies. But as I said, they are going to be generating a lot more value on top of that spend.

To your point about OPEX and buying things as a service, the piece of advice I always give to companies is to ask: "How many of these things that you're doing are significant differentiators for your company?" Is it really a differentiator for your company to be an expert at automating a delivery pipeline, to be an expert at automating your servers, to be an expert at setting up file sharing, to be an expert at setting up an internal chat server? None of those, right?

Why not outsource them to people who are experts, people for whom that is the core differentiator and core value creator, and focus on the things that your business cares about?

Gardner: Let's get back to this infrastructure equation. We're hearing about composable infrastructure, software-defined data center (SDDC), microservices, containers and, of course, hybrid cloud or hybrid computing. If I'm looking to improve my business agility, where do I look in terms of understanding my future infrastructure partners? Is my IT organization just a broker, and are they going to work with other brokers? Are we looking at a hierarchy of brokering with some sort of a baseline commoditized set of services underneath?

So, where do we go in terms of knowing who the preferred vendors are? I guess we're looking back at a time when no one got fired for buying IBM, for example. Everyone is saying Amazon is going to take over the world, but I've heard that about other vendors in the past, and it didn't pan out. This is a roundabout way of asking: when you want to compose infrastructure, how do you keep choice, how do you keep from getting locked in, and how do you find a way to be in a market at all times?

Berkholz: Composability is really key. We see a lot of IT organizations that, as you said, used to just buy Big Blue; they were IBM shops. That's no longer a thing in the way that it used to be. There's a lot more fragmentation in terms of technology, programming languages, hardware, JavaScript toolkits, and databases.

Everything is becoming polyglot or heterogeneous, and the only way to cope with that is to really focus on composability. Focus on multi-vendor solutions, focus on openness, opening APIs, and open-source as well, are incredibly important in this composable world, because everything has to be able to piece together.

But the problem is that when you give traditional enterprises a bunch of pieces, it's like having kids just create a huge mess on the floor. Where do you even get started? That's one of the challenges they need to have. The way I always think about it is what are enterprises looking for? They're looking for a Lego castle, right? They don’t want the Lego pieces, and they don't want like that scene in The Lego Movie where the father glues all the blocks together. They don't want to be stuck. That's the old monolithic world.

The new composable world is where you get that castle and you can take off the tower and put on a new tower if you want to. But you're not given just the pieces; you're given not just something that is composable, but something that is pre-composed for you, for your use case. That generates value, and it looks like what we used to think of as reference architectures, which were something sitting on PowerPoint slides with a fancy diagram.

It’s moving more toward reference architectures in the form of code, where it’s saying, "Here's a piece of code that’s ready to deploy and that’s enabled through things like infrastructure as code."

Gardner: Or a set of APIs.

Ready to go

Berkholz: Exactly. It’s enabled by having all of that stuff ready to go, ready to build in a way that wasn’t possible before. The best-case scenario before was, "Here’s a virtual appliance; have fun with that." Now, you can distribute the code and they can roll that up, customize it, take a piece out, put a piece in, however they want to.

Gardner: Before we close out, Donnie, any words of advice for organizations back to that cultural issue -- probably the more difficult one really? You have a lot of choices of technology, but how you actually change the way people think and behave among each other is always difficult. DevOps, leading to composable infrastructure, leading to this sort of services brokering economy, for lack of a better word, or marketplace perhaps.

What are you telling people about how to make that cultural shift? How do organizations change while still keeping the airplane flying so to speak?

Berkholz: You can’t do it as a big bang. That's absolutely the worst possible way to go about it. If you think about change management, it’s a pretty well-studied discipline at this point. There's an approach I prefer from a guy named John Kotter who has written books about change management. He lays out an eight- or nine-step process of how to make these changes happen. The funny thing about it is that actually doing the change is one of the last steps.

So much of it is about building buy-in, about generating small wins, about starting with an independent team and saying, "We're going to take the mobile apps team and we're going to try a continuous delivery over there. We're not going to stop doing everything for six months as we are trying to roll this out across the organization, because the business isn’t going to stand for that."
Learn More about DevOps Solutions that Unify Development and Operations to Accelerate Business.
They're going to say, "What are you doing over there? You're not even shipping anything. What are you messing around with?" So, you’ve got to go piece by piece. Let’s say you start by rolling out continuous integration and slowly adding more and more automated tests to it, while keeping the manual testers alongside, so that you're not dropping any of the quality that you had before. You're actually adding more quality by adding the automation and slowly converting those manual testers over to engineers in test.

That’s the key to it. Generate small wins, start small, and then gradually work your way up as you are able to prove the value to the organization. Make sure while you're doing so that you have executive buy-in. The tool side of things you can start at a pretty small level, but thinking about reorganization and cultural change, if you don’t have executive buy-in, is never going to fly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

4 Ways Big Data is Altering the SEO Landscape

4 Ways Big Data is Altering the SEO Landscape

Big data is changing so many industries and SEO is in no way excluded from this. In fact, big data is making notable inroads in this industry. So, let’s take a look at some of the ways it’s being used to optimise searches.

Big Data Offers Deep SEO Insights

Google and the other search engines focus on breaking apart content into quantifiable data. This makes it easier for a marketer to draw insight from the information users are searching for.

With big data, SEOs can track and analyze keywords, on-page optimization strategies, backlinks, and other search areas in order to optimize their efforts, which can include:

  • Quality guideline compliance
  • Local SEO
  • Global SEO
  • Content marketing
  • Mobile optimization
  • SEO ROI

This talk at the SEO conference UnGagged in London looks like it will showcase some of the ways big data can provide deeper insight.

Content Is Becoming Data at an Exponential Rate

Basically, content is just published information. However, when Google emerged as a curator of big data, content began to be seen as a quantifiable entity.

When content is turned into data, it can be easily analyzed by search engines, which can then turn data back into content and deliver that content to people looking for relevant information.

As a result of the content to data conversion, ...


Read More on Datafloq
How to Win your Customers for Life with Predictive Analytics

How to Win your Customers for Life with Predictive Analytics

Winning your customer for life is a challenging task for organizations. How can you connect with your customer and how can you ensure that they stay with your organization for a long time? Questions that many organizations face.  Fortunately, with the advance of big data and analytics, it has become a little bit easier for organizations. Last week, I spoke at the Retail, eCommerce, Payments and Cards conference in Dubai, one of the biggest in the Middle East, and I would like to share some of my keynote insights with you through this article.

These are challenging times for organizations. Organizations have to face disruptive innovations from many different angles and accelerated change in technological advances require organizations to constantly change and adapt. On the other hand, we have moved from descriptive and diagnostic analytics to the more advanced predictive analytics and we are moving towards prescriptive analytics. The more we use data to predict what will happen and what action should be taken, the more difficult it becomes, but also the more value that can be created.



[Chart omitted. Source: Gartner]

New organizations that disrupt multiple industries understand this very well. They use data in every possible way. At every possible touchpoint with customers ...


Read More on Datafloq
The Data-Driven Advantage for the Insurance Industry

The Data-Driven Advantage for the Insurance Industry

Advancements in technologies, bigger data sets and predictive analytics have changed the game for the insurance industry. For those who want to compete, data-driven marketing approaches must drive every customer engagement strategy.

According to research by Applied Systems, insurers understand the huge advantage data and technology bring to the table, with 50% of insurance executives prioritizing technology investments to capture new client insights over the next 3 years. IDC analyst Tomasz Sloniewski said it best with his statement: “Information is money. The ability to extract the right information at the right time holds an immense value and should be the goal of every executive, manager and employee.”

Analyzing customer data allows insurers to gain new insights into how to better serve clients and attract new ones. These enhanced data-driven insights result in increased revenue and policy retention and ultimately more profitable client relationships. According to McKinsey & Company, companies that use data analytics extensively are more than twice as likely to generate above average profits.

"Analyzing customer data allows insurers to gain new insights into how to better serve and attract clients."

While few insurers lack some kind of analytical capabilities, many are still challenged to make the transition into becoming a data-driven enterprise. ...


Read More on Datafloq
An Interview with Dataiku’s CEO: Florian Douetteau

An Interview with Dataiku’s CEO: Florian Douetteau

As an increasing number of organizations look for ways to take their analytics platforms to higher ground, many are seriously considering the incorporation of new advanced analytics disciplines; this includes hiring data science specialists and adopting solutions that can enable the delivery of improved data analysis and insights. As a consequence, this also triggers the emergence of new
How IT4IT helps turn IT into a transformational service for digital business innovation

How IT4IT helps turn IT into a transformational service for digital business innovation

The next BriefingsDirect expert panel discussion examines the value and direction of The Open Group IT4IT initiative, a new reference architecture for managing IT to help business become digitally innovative.

IT4IT was a hot topic at The Open Group San Francisco 2016 conference in January. This panel, conducted live at the event, explores how the reference architecture grew out of a need at some of the world's biggest organizations to make their IT departments more responsive, more agile.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll learn now how those IT departments within an enterprise and the vendors that support them have reshaped themselves, and how others can follow their lead. The expert panel consists of Michael Fulton, Principal Architect at CC&C Solutions; Philippe Geneste, a Partner at Accenture; Sue Desiderio, a Director at PricewaterhouseCoopers; Dwight David, Enterprise Architect at Hewlett Packard Enterprise (HPE); and Rob Akershoek, Solution Architect IT4IT at Shell IT International. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How do we bridge the divide between a cloud provider, or a series of providers, and have IT take on a brokering role within the organization? How do we get to that hybrid vision role?

Geneste: We'll get there step-by-step. There's a practical step that’s implementable today. My suggestion would be that every customer or company that selects an outsourcer, that selects a cloud vendor, that selects a product, uses the IT4IT Reference Architecture in the request for proposal (RFP), putting a strong emphasis on the integration.

We see a lot of RFPs that are still silo-based -- which one is the best product for project and portfolio management, which one is the best service management tool -- but it’s not very frequent that we see integration measured as the top-notch value in the RFP. That would be one point.

The discussions with the vendors -- again, cloud vendors, outsourcers, or consulting firms -- should start from this: use it as an integration architecture, and tell us how you would do things based on these standardized concepts. That’s a practical step that can be employed today.
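
As an illustration of that emphasis (my own sketch, not from the panel), an RFP scoring sheet might weight integration above individual tool features:

```python
# Illustrative RFP scoring: the weights are made up, but integration against
# the IT4IT Reference Architecture is deliberately the heaviest criterion.
criteria = {
    "it4it_integration": 0.40,  # alignment with the reference architecture
    "functional_fit":    0.25,
    "cost":              0.20,
    "vendor_viability":  0.15,
}

vendor_scores = {
    "vendor_a": {"it4it_integration": 8, "functional_fit": 9,
                 "cost": 6, "vendor_viability": 7},
    "vendor_b": {"it4it_integration": 5, "functional_fit": 9,
                 "cost": 9, "vendor_viability": 8},
}

for vendor, scores in vendor_scores.items():
    total = sum(scores[c] * w for c, w in criteria.items())
    print(f"{vendor}: weighted score {total:.2f}")
# vendor_a wins here only because integration carries the heaviest weight.
```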

In a second step, when we go further into vendor selection, there are vendors today whose products and cloud offerings, when you analyze them, are closer to the concepts we have in the reference architecture. They're maybe not certified, and maybe not using the same terminology, but the concepts are there, or the way to the concepts is closer.

And then ultimately, steps 3 and 3.5 will be product vendors certified and cloud service offerings certified, hopefully with full integration according to the reference architecture and, eventually, even plug-and-play. We're doing a little bit about plug-and-play, but at least integration.

Gardner: What sort of time frame would you put on those steps? Is this a two-year process, a four-year process, too soon to tell?

Achievable goals

Geneste: That’s a tough one. I suppose the vendors should be responding to this one. For the cloud service providers, it’s a little bit trickier, but for the consulting firms and the service providers, it should be what it takes to get the workforce trained and to get the concepts spread inside the organization. So within six to 12 months, the critical mass should be there in these organizations. It's tough, but project by project, customer by customer, it’s achievable.

Some vendors are on the way, and we've seen several vendors talk about IT4IT at this conference. I know that several have significant efforts under way and are preparing for vendor certification. It will probably be a multiyear process to get the full suite of products certified, because there is quite a lot to change in the underlying software, but progressively, we should get there.

So, we should see first levels of certification within one to two years, possibly even sooner. I would be interested to know what the vendor responses will be.

Gardner: Sue, along the same lines, what do you see needed in order to make the IT department able to exercise the responsibility of delivering IT across multiple players and multiple boundaries?

Desiderio
Desiderio: Again, it’s starting with the awareness and the open communication about IT4IT and, on a specific instance, where that fits in. Depending on the services we're getting from vendors, or whether it's even internal services that we are getting, where do they fit into the whole IT4IT framework, what functions are we getting, what are the key components, and where are our interface points?

Have those conversations upfront in the contract conversations, so that everyone is aware of what we're trying to accomplish and that we're trying to seek that seamless integration between those suppliers and us.

Gardner: Rob, this would appear to be a buyer’s market in terms of their ability to exercise some influence. If they go seeking RFPs, if there are fewer cloud providers than there were general vendors in a traditional IT environment, they should be able to dictate this, don’t you think?

Akershoek: In the cloud world, the consumer doesn't dictate at all. Dictating how an operator should provide us data is the traditional way, and that’s the problem with the cloud. We want to consume a standard service, so we can't tell the cloud vendor, send me your cost data in this format. That won't work, because we don’t want the cloud vendor to build something proprietary for us.

That’s the first challenge. The cloud vendors are out there and we don’t want to dictate; we want to consume a standard service. So if they set up a catalog in their way, we have to adopt that. If they do the billing their way, we have to adopt it or select another cloud vendor. That’s the only option you have, select another vendor or adopt the management practices of the cloud vendor. Otherwise, we will continuously have to update it according to our policy. That’s a key challenge.

Akershoek
That’s why managing your cloud vendor is really about the entire value chain. You start with making your portfolio, thinking about what cloud services you put in your offerings, or your portfolio. So for platform as a service we use vendor A, and for infrastructure as a service, vendor B. That’s where it starts: which vendors do I engage with?

And then, going down to Request to Fulfill, it’s more like: what are the products that we're allowed to order, and how do we provision those? Unfortunately, the cloud vendors don’t have IT4IT yet, meaning we have to do some work. Let’s say we want to provision a cloud environment. We make sure that all the cloud resources we provision are linked to that subscription, linked to that service, so at least we know the components a cloud vendor is managing, where they belong, and which service is consuming them.
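
What that linking looks like in practice will depend on your cloud SDK; the following is a hypothetical sketch of the idea, where tag_resource is an illustrative stand-in, not a real API:

```python
# Hypothetical sketch: every provisioned resource is tagged with the
# subscription and catalog service it fulfills, so consumption and cost
# can later be traced back. tag_resource() stands in for a real SDK call.
from dataclasses import dataclass

@dataclass
class Subscription:
    subscription_id: str   # who requested the service
    service_id: str        # which catalog service it fulfills

def tag_resource(resource_id: str, sub: Subscription) -> dict:
    """Return the tag set we would attach to a provisioned resource."""
    return {
        "resource": resource_id,
        "tags": {
            "subscription_id": sub.subscription_id,
            "service_id": sub.service_id,
        },
    }

# Usage: three resources provisioned under one subscription
sub = Subscription(subscription_id="SUB-042", service_id="SVC-web-hosting")
for resource in ["vm-001", "vm-002", "db-001"]:
    print(tag_resource(resource, sub))
```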

Different expectations

Fulton: Rob has a key point here around the expectations being different around cloud vendors, and that’s why IT4IT is actually so powerful. A cloud vendor is not going to customize their interfaces for every single individual company, but we can hold cloud vendors accountable to an open industry standard like IT4IT, if we have detailed out the right levels of interoperability.
Fulton

To me, the way this thing comes together long term is through this open standard, and then through that RFP process, customer organizations holding their vendors accountable to delivering inside that open standard. In the world of cloud, that’s actually to the benefit of the cloud providers as well.

Akershoek: That’s a key point you make there, indeed.

David: And just to piggyback on what we're saying, it goes back to the value proposition. Why am I doing this? If we have something that’s an open standard, it enables velocity. You can identify costs much more easily. It’s simpler, and it goes back again to the value proposition: showing these cloud vendors that because of a standard, I'm able to consume more of your services, I'm able to consume your services more easily, and I'm guaranteed, because it’s a standard, to get my value. Again, it's back to the value proposition that the open standard offers.

Gardner: Sue, how about this issue of automation? Is it essential to be largely automated to realize the full benefits of IT4IT or is that more of a nice-to-have goal? What's the relationship between a high degree of automation in your IT organization for the support of these activities and the standard and Reference Architecture?

Automation is key

Desiderio: I'm a believer that automation is key, so we definitely have to get automation throughout the whole end-to-end value chain no matter what. That’s really part of the whole transformation going into this new model.

You see that throughout the whole value chain. We talked about it individually on the different value streams and how it comes back.

I also want to touch on what’s the right size of company or firm to pick up IT4IT. I agree with where Philippe was coming from. Smaller shops can pick it up and start leveraging it more quickly, because they don't have that legacy IT, where nothing is built on composite services: everything in a system points at specific servers and networks, instead of being built on services, like a hosting service or a monitoring and response service.

For larger IT organizations, there's a lot more change, but it's critical for those larger IT shops in large organizations to start adopting and moving forward in order to survive and be viable in the future.

It's not a big bang. We, in a larger IT shop, are going to be running in a mixed mode for a long time to come. It's a matter of looking at where to start seeing that business value as you look at new initiatives within your organization. How do you start moving into the new model with the new things? How do you start transitioning your legacy systems into more of the new way of thinking, looking at that consumption model and what we're trying to do, which is to focus on that business outcome?

So it's much harder for the larger IT shops, but the concepts apply to all sizes.

Gardner: Rob, the subject of the moment is size and automation.

Akershoek: I think the principle we just discussed, automation, is a good principle, but if you look at the legacy, as you mentioned, you're not going to automate your legacy, unless you have a good business case for that. You need to standardize your services on many different layers, and that's what you see in the cloud.

Cloud vendors are standardizing to an extreme degree, defining standard component services. You have to do the same: define your standard services and then automate all of those. The legacy ones you can't automate, or probably don’t want to automate.

So it's more standardization, more standard configurations, and then you can automate development or Detect to Correct as well. You can't do that if you have a very complex configuration that changes all the time without any standards.

The size of the organization doesn’t matter. Both for large and smaller organizations you need to adopt standard cloud practices from the vendors and automate the delivery to make things repeatable.

Desire to grow

David: Small organizations don’t want to remain small all the time; they actually want to grow. Growth starts with a mindset. By applying the Reference Architecture, even though you don't apply every single point in a one-man or two-man shop, it helps me, it positions me, and it gives me the frame of reference, the thinking, to enable growth.
David

It grows organically. So, you don't end up with the legacy baggage that most large companies have. And small companies may get acquired, but at least they have good discipline, or they may acquire others as they grow. The application of the IT4IT Reference Architecture is not just for large companies; it’s also for small companies, and I'm saying that as a small-business owner myself.

Akershoek: Can I add to that? If you're starting out deploying to the cloud, maybe the best way is to start with automation first, or at least design for automation. If you have a few thousand servers running in the cloud and you didn't start with that concept, then you already have legacy after a few years of running in the cloud. So you should start thinking about automation from the start -- not with your legacy, of course, but if you're moving to the cloud now, design and build that in immediately.

Fulton: On this point, one of the directions we're heading is to figure out this very issue: what part of the reference architecture applies at what size and stage of evolution in a company’s growth.

As I mentioned, I think I made this comment earlier, the entire reference architecture applies from day one for companies of any size; it's just a question of whether it's explicit or implicit.

If it's implicit, it's in the head of the founder. You're still doing the elements, or you can be still doing the elements, of the reference architecture in your mind and your thought process, but there are pieces you need to make explicit even when you are, as Charlie likes to say, two people in a garage.

On the automation piece, the key thing that has been happening throughout our industry related to automation has been, at least from my perspective, automating within functional components. What the IT4IT Reference Architecture and its vision of value streams allow us to do is rethink automation along the lines of value streams, across functional components. That's where it starts to add considerable value, especially when we can start to put together interoperability between tooling. That’s where we're going to see automation take us to that next level as IT organizations.

Gardner: As IT4IT matures and becomes adopted, serving both consumers and providers of services, it seems to me that there will be a similar track with digital business: how you run your business becomes more of a brokering activity at the business level, where a business is really a constituency of different providers across supply chains and, increasingly, across service providers.

Is there a dual track for IT4IT, one on the IT side and one for business management of services through a portal or dashboard, something that your business analysts on up would be involved with? Should we let them happen separately? How can we make them more aligned, even highly integrated and synergistic?

Best practices

Geneste: We have best practices in IT4IT that the businesses themselves can replicate and use. I suppose certain companies do that a little bit today. The Ubers and the Airbnbs, with their disintermediation, connecting private individuals much of the time, effectively apply some of these service-oriented concepts today, even though they don’t use IT4IT.

Just as much, we see cases today where businesses, for their help desks or for their request management, turn to the likes of HPE for service-management software to help them with their business help desk. We're likely to see the same with best practices for individualization and specification of individual conceptual services, service catalogs, or subscription mechanisms. You're right; the concepts could very easily apply to businesses. As to how that would turn out, I would need to do a little bit more thinking, but from a concepts standpoint, it truly should be useful.

Desiderio: We're trying to move ourselves up the stack to help the business with the services it provides, so it’s very relevant as we look at IT4IT and how we manage the IT services. It’s also those business services; it’s concurrent. It’s evolving, training, and making the business aware of where we're trying to go and how they can leverage that in the services they provide outward.

When you look at adopting this, even when you go back down to your IT in your organization where you have your different typical organizational teams, there's a challenge for each IT team to look at the services they're providing and how they start looking at what they do in terms of services, instead of just the functions.

That goes all the way up the stack, including the business, the business services, and IT’s job. When we start talking about transformation, we must be aligned with the business so we understand their business processes and the services that they're trying to serve, and then how we are truly that business enabler.

Akershoek: I interpret your question as being about shadow IT -- or rather, that there is no shadow IT. Some IT management activity is performed by the business and, as you mentioned, the business needs to apply IT4IT practices as well. As soon as IT activities are done by the business -- say, they select and manage their own software-as-a-service (SaaS) application -- they need to perform the related IT4IT activities themselves. They're even starting to configure SaaS services themselves. The business can do the configuration, and they might even provide the end-user support. In these cases, those management activities fit in the IT4IT reference model as well.

Gardner: Dwight, we have a business scorecard, we have an IT scorecard, why shouldn’t they be the same scorecard?

David: I'm always reminded that IT is in place to help the business, right? The business is the function, and IT should be the visible enabler of business success. I would classify that as catching up to business expectations. Could some of the principles that we apply in IT be used for the business? Yes, they can be, but I see it more the other way around. If you look at the whole value chain, it came from a business perspective and is being applied to IT. The business is still the driver, but IT is becoming more seamless in enabling the business to achieve its goals.

Application of IT

Fulton: The whole concept of digital business is actually a complete misnomer. I hate it; I think it’s wrong. It’s all about the application of information technology. In the context of what we typically talk about with IT4IT, we're talking about the application of information technology to the management of the IT department.

We also talk about the application of information technology to the transformation of business processes. Most of the time, that happens inside companies, and we're using the principles of IT4IT to do that. When we talk about digital business, usually we're talking about the application of information technology into the transformation of business models of companies. Again, it’s still all about applying information technology to make the company work in a different way. For me, the IT4IT principles, the Reference Architecture, the value streams, will still hold for all of that.

Geneste: The two innovations that we have in the IT4IT Reference Architecture -- the Service Backbone and the Request to Fulfill (R2F) value stream -- are the two greatest novelties of the reference architecture.

Are they mature? They're mature enough, and they'll probably evolve in their level of maturity. There are a number of areas that are maturing, and some that we have in design. IT Financial Management, for instance, is one that I'm working on, along with the service costing within it, which I think we'll get ready as guidance by version 2.1.

The value streams by themselves are also mature and almost complete. There are a number of improvements we can make to all of them, but I think overall the reference architecture is usable today as an architecture to start with. It's not quite for vendor certification, although that’s upcoming, but there are a number of good things and a number of implementations that would benefit from using the current IT4IT Reference Architecture 2.0.

Gardner: Sue, where do you see the most traction and growth, and what would you like to see improved?

Desiderio: An easy entry point is Detect to Correct, because it’s one of the value streams that is better known and understood. So that’s an easier way into the whole IT4IT Value Chain, compared to some of the other value streams.

The service model, as we've stated all along, is definitely the backbone to the whole IT value chain. Although it's well-formed and in a good, mature state, there's still plenty of work to do to make that consumable to the IT organizations to understand all the different phases of the life cycle and all the different data objects that make up the Service Backbone. That's something that we're currently working on for the 2.1 version, so that we have better examples. We can show how it applies in a real IT organization, and it’s not just what’s in the documentation today.

More detail

Akershoek: I don’t think it’s about positive and negative in this case, but more about areas that we need to work on in more detail, like defining the service-broker role that you see in the new IT organization and how you interface with your external service providers. We've identified a number of areas where the IT organization has key touch points with these vendors. Take your service catalog: you need to synchronize catalog information with the external vendors and aggregate it into your own catalog.

But there's also the fulfillment API -- how do you communicate a request to your suppliers or different technology stacks and get the consumption and cost data back in? I think we define that today in the IT4IT standard, but we need to go to a lower level of detail -- how do we actually integrate with vendors and our service providers?

So interfacing with the vendors in the ecosystem happens on many different levels: the catalog level, request fulfillment where you actually provision, the cost and consumption data, and those kinds of aspects.
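
To make that lower level of detail concrete, here is a hedged sketch of what minimal fulfillment-request and consumption-record shapes might look like; the standard does not prescribe these field names, so they are assumptions for illustration only:

```python
# Assumed, illustrative shapes -- not defined by the IT4IT standard itself.
from dataclasses import dataclass

@dataclass
class FulfillmentRequest:
    request_id: str
    catalog_item: str      # item synchronized from the supplier's catalog
    requested_for: str     # consuming team or user
    parameters: dict       # e.g. sizing options

@dataclass
class ConsumptionRecord:
    request_id: str        # ties cost back to the originating request
    period: str            # billing period, e.g. "2016-06"
    quantity: float        # e.g. hours run
    unit_cost: float

    @property
    def cost(self) -> float:
        return self.quantity * self.unit_cost

req = FulfillmentRequest("REQ-1001", "medium-linux-vm", "team-analytics",
                         {"cpu": 4, "ram_gb": 16})
usage = ConsumptionRecord("REQ-1001", "2016-06", quantity=720, unit_cost=0.05)
print(f"{req.catalog_item}: {usage.cost:.2f} for {usage.period}")
```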

Another topic is linking in to security and identity and access management. It's an area we still need to clarify: how all the subscriptions in a service link in to that access-management capability, which is part of the subscription and, of course, the fulfillment. We didn’t identify it as a separate functional component.

Gardner: Dwight, where are you most optimistic and where would you put more emphasis?

David: I'll start with the latter. More emphasis needs to be on our approach to Detect to Correct. Oftentimes, I see people thinking about Detect to Correct in the traditional, reactive mode, as opposed to understanding that this model can be applied even to the new, fast-changing, user-friendly economy and within hybrid IT. A change in thinking in the application of the value streams would also help us.

Many of us have a lot of gray hairs, including myself, and we revert to the old way of thinking, as opposed to the way we should be moving forward. That’s the area where we can do the most.

What's really good, though, is that a lot of people understand Detect to Correct. So it’s an easy adoption in terms of understanding the Reference Architecture. It’s a good entry point to the IT4IT Reference Architecture. That’s where I see the actual benefit. I would encourage us to make it useful, use it, and try it. The most benefit happens then.

Gardner: And Michael, room for optimism and room for improvement?


Management Guide

Fulton: I want to build on Dwight’s point around trying it by sharing. The one thing I'm most excited about, particularly this week, is the Management Guide -- very specifically, chapter 5 of the Management Guide. I hope all of you got a chance to grab your copy of that. If you haven’t, I recommend downloading it from The Open Group website. That chapter is absolutely rich in content about how to actually implement IT4IT.

And I tip my hat to Rob, who did a great piece of work, along with several other people. If you want to pick up the standard and use it, start there, start with chapter 5 of the Management Guide. You may not need to go much further, because that’s just great content to work with. I'm very excited about that.

From the standpoint of where we need to continue to evolve and grow as a standard, we've referenced some of the individual pieces, but at a higher level. The supporting activities in general all still need to evolve and get to the level of detail that we have with the value streams. That’s a key area for me.

The next area that I would highlight, and I know we're actively starting work on this, is around getting down to that level of detail where we can do data interoperability, where we can start to outline the specifics that are needed to define APIs between the functional components in such a way that we can ultimately bring us back to that Open Group vision of a boundaryless information flow.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

It’s Finally Here… Internet of Things Security Certification

Despite all the excitement generated by the rise of the Internet of Things (IoT), the one major concern lurking in the background has been security. The frightening prospect of securing every single IoT device has led to more than a fair share of headaches for IT personnel. A quick glance at how the IoT is being developed shows that the security worries are more than justified. After all, the number of items and devices connected to the internet will likely reach the tens of billions before the end of the decade. That means a whole host of items looking to take advantage of the new technology while simultaneously dismissing security needs. The lack of any universal security standard for the IoT also remains a pressing problem. Luckily, a number of organizations are trying to remedy this issue with new IoT security certifications.

One of the more recent major announcements regarding security certification for IoT devices comes from Underwriters Laboratories (UL). The organization is calling it their Cybersecurity Assurance Program, or CAP for short. The basic idea is to test new IoT devices for security vulnerabilities, along with whether they include data encryption, software updates, and ...


Read More on Datafloq
Patterns Recur In Analytics Just Like In Nature

I have always loved science and math, and that’s why I got into statistics and focused on analytics for a career. One thing that has always fascinated me is how certain patterns show up again and again in different places across nature and mathematics. When looking at two seemingly unrelated topics, it suddenly becomes clear that there is actually quite a strong linkage between the two and that they are simply different examples of the same underlying concept.

One example of this is the Fibonacci sequence, which shows up regularly in nature in places such as the way seashell spirals grow and the pattern of seeds in a sunflower. I recently came across a terrific example of the concept of similar patterns at work within the realm of data and analytics.
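
For anyone who wants to see the sequence itself, a quick illustrative snippet; each term is the sum of the two before it:

```python
# The Fibonacci sequence: the growth pattern that seashell spirals and
# sunflower seed heads echo.
def fibonacci(n: int) -> list:
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

print(fibonacci(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```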

A Recurring Pattern in Analytics

I recently took part in an event (see a summary video here) where professor Eric Bradlow of Wharton gave a presentation about research he’s done on what he calls “clumpiness” in customer purchasing. Eric and I got excited about a tie between Eric’s formal work on customer clumpiness and some work my team had done a few years prior around store sales forecasts. My team had ...


Read More on Datafloq
How Do You Use Analytics Within Your Organization?

Thanks to everyone who has already completed our latest analytics poll, on real world usage.

If you haven’t yet recorded your scores, there’s still time (you can complete it here). But, with over 35 votes already in, I thought it worth sharing some interim results with you.

During the same week, I was reading a report from IBM on the ‘real world use of big data in financial services’. Although a number of sources in this report were from older surveys, circa 2012, their work with Saïd Business School in Oxford looks robust & relevant.

So, as an interesting backdrop to the results of our survey, here is a graph from IBM (comparing % of firms in Financial Markets using different analytics capabilities with a Global cross-industry selection):



A number of key differences are striking in this graph. I’d call out:


Higher use of Data Mining within FS (and over 75%)
Higher use of Data Visualisation within FS (and over 70%)
Higher use of Natural Language Text Analytics within FS (and over 50%)


So, compared to other sectors, whose use of analytics appears still more focussed on simpler query & reporting – will we find sophisticated application of analytics in our survey?

The answer is not quite. As I normally discover when talking honestly with clients, their routine use of analytics ...


Read More on Datafloq
Learn the Art of Data Science in Five Steps

The field of data science is one of the youngest and most exciting fields in the technology sector. In no other industry or field can you combine statistics, data analysis, research, and marketing to do jobs that help businesses make the digital transformation and come to full digital maturity.

In today’s business world, companies can no longer afford to look at their websites and social media presences as add-ons or afterthoughts. The success of the technological aspects of a company is as crucial to its overall success as that of any other department. To gain that kind of success online, businesses must embrace big data and analytics. They must learn about their customers’ behaviour online, and specifically on their sites, and they must use data to drive their marketing and production strategies.

Data Science is dedicated to analyzing large data sets, showing trends in customer and market behaviours, predicting future trends, and finding algorithms to help improve the customer experience and increase sales for the future. To learn and dive into this intriguing new field, you’ll really only need to take five simple steps…

1. Get Passionate About Big Data

As you set out to learn this field and become a data scientist, you’ll find that ...


Read More on Datafloq
Meetup – Data Vault Interest Group

I reactivated my Meetup Data Vault Interest Group this week. A long time ago I was thinking about a regulars' table to network with other, let’s call them, Data Vaulters. It should be a relaxed get-together: no business-driven presentations or, even worse, advertisements for XYZ tool, consulting, or any flavor of Data Vault. The feedback from many people was that they wanted something different from the existing Business Intelligence meetings. So, here it is!

Alation centralizes data knowledge by employing machine learning and crowdsourcing

The next BriefingsDirect Voice of the Customer big-data case study discussion focuses on the Tower of Babel problem for disparate data, and explores how Alation manages multiple data types by employing machine learning and crowdsourcing.

We'll explore how Alation makes data more actionable via such innovative means as combining human experts and technology systems.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how enterprises and small companies alike can access more data for better analytics, please join Stephanie McReynolds, Vice-President of Marketing at Alation in Redwood City, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: I've heard of crowdsourcing for many things, and machine learning is more-and-more prominent with big-data activities, but I haven't necessarily seen them together. How did that come about? How do you, and why do you need to, employ both machine learning and experts in crowdsourcing?

McReynolds: Traditionally, we've looked at data as a technology problem. At least over the last 5-10 years, we’ve been pretty focused on new systems like Hadoop for storing and processing larger volumes of data at a lower cost than databases could traditionally support. But what we’ve overlooked in the focus on technology is the real challenge of how to help organizations use the data that they have to make decisions. If you look at what happens when organizations go to apply data, there's often a gap between the data we have available and what decision-makers are actually using to make their decisions.

McReynolds
There was a study that came out within the last couple of years that showed that about 56 percent of managers have data available to them, but they're not using it. So, there's a human gap there. Data is available, but managers aren't successfully applying data to business decisions, and that’s where real return on investment (ROI) always comes from. Storing the data, that’s just an insurance policy for future use.

The concept of crowdsourcing data, or tapping into experts around the data, gives us an opportunity to bring humans into the equation of establishing trust in data. Machine-learning techniques can be used to find patterns and clean the data. But to really trust data as a foundation for decision making, human experts are needed to add business context and show how data can be used and applied to solving real business problems.

Gardner: Usually, when you're employing people like that, it can be expensive and doesn't scale very well. How do you manage the fit-for-purpose approach to crowdsourcing where you're doing a service for them in terms of getting the information that they need and you want to evaluate that sort of thing? How do you balance that?

Using human experts

McReynolds: The term "crowdsourcing" can be interpreted in many ways. The approach that we’ve taken at Alation is that machine learning actually provides a foundation for tapping into human experts.

We go out and look at all of the log data in an organization -- in particular, what queries are being used to access data in databases or Hadoop file structures. That creates a foundation of knowledge, so that the machine can learn to identify what data would be useful to catalog or to enrich with human experts in the organization. That's essentially a way to prioritize how to tap into the number of humans that you have available to help create context around that data.

That’s a great way to partner with machines, to use humans for what they're good for, which is establishing a lot of context and business perspective, and use machines for what they're good for, which is cataloging the raw bits and bytes and showing folks where to add value.
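
As a toy illustration of that idea (not Alation's actual pipeline), one could mine query logs for table references and rank tables by how many people touch them and how often, so curation effort goes where it pays off:

```python
# Toy sketch: extract table names from logged SQL, then rank tables by
# breadth (distinct users) and frequency (query count) of use.
import re
from collections import defaultdict

query_log = [  # (user, query) pairs -- made-up sample data
    ("alice", "SELECT * FROM sales.orders o JOIN sales.customers c ON ..."),
    ("bob",   "SELECT count(*) FROM sales.orders"),
    ("carol", "SELECT * FROM hr.payroll"),
]

usage = defaultdict(lambda: {"queries": 0, "users": set()})
for user, sql in query_log:
    # crude extraction of table names following FROM/JOIN keywords
    for table in re.findall(r"(?:FROM|JOIN)\s+([\w.]+)", sql, re.IGNORECASE):
        usage[table]["queries"] += 1
        usage[table]["users"].add(user)

# Tables touched by the most people, then queried most often, come first
ranked = sorted(usage.items(),
                key=lambda kv: (len(kv[1]["users"]), kv[1]["queries"]),
                reverse=True)
for table, stats in ranked:
    print(table, stats["queries"], sorted(stats["users"]))
```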

Gardner: What are some of the business trends that are driving your customers to seek you out to accomplish this? What's happening in their environments that requires this unique approach of the best of machine and crowdsourcing and experts?

McReynolds: There are two broader industry trends that have converged and created a space for a company like Alation. The first is just the immense volume and variety of data that we have in our organizations. If we weren't adding more and more data storage systems to our enterprises, there wouldn't be good groundwork laid for Alation. But perhaps more interesting is a second trend, around self-service business intelligence (BI).

So as we're increasing the number of systems that we're using to store and access data, we're also putting more weight on typical business users to find value in that data and trying to make that as self-service a process as possible. That’s created this perfect storm for a system like Alation, which helps catalog all the data in the organization and makes it more accessible for humans to interpret in accurate ways.

Gardner: And we often hear in the big data space the need to scale up to massive amounts, but it appears that Alation is able to scale down. You can apply these benefits to quite small companies. How does that work when you're able to help a very small organization with some typical use cases in that size organization?

McReynolds: Even smaller organizations, or younger organizations, are beginning to drive their business based on data. Take an organization like Square, which is a great brand name in the financial services industry, but it’s not a huge organization in and of itself, or Inflection or Invoice2go, which are also Alation customers.

We have many customers that have data analyst teams that maybe start with five people or 20 people. We also have customers like eBay that have closer to a thousand analysts on staff. What Alation provides to both of those very different sizes of organizations is a centralized place, where all of the information around their data is stored and made accessible.

Even if you're only collaborating with three to five analysts, you need that ability to share your queries, to communicate on which queries addressed which business problems, which tables from your HPE Vertica database were appropriate for that, and maybe what Hive tables on your Hadoop implementation you could easily join to those Vertica tables. That type of conversation is just as relevant in a 5-person analytics team as it is in a 1000-person analytics team.

Gardner: Stephanie, if I understand it correctly, you have a fairly horizontal capability that could apply to almost any company and almost any industry. Is that fair, or is there more specialization or customization that you apply to make it more valuable, given the type of company or type of industry?

Generalized technology

McReynolds: The technology itself is a generalized technology. Our founders come from backgrounds at Google and Apple, companies that have developed very generalized computing platforms to address big problems. So the way the technology is structured is general.

The organizations that are going to get the most value out of an Alation implementation are those that are data-driven organizations that have made a strategic investment to use analytics to make business decisions and incorporate that in the strategic vision for the company.

So even if we're working with very small organizations, they are organizations that make data and the analysis of data a priority. Today, it’s not every organization out there. Not every mom-and-pop shop is going to have an Alation instance in their IT organization.

Gardner: Fair enough. Given those organizations that are data-driven, have a real benefit to gain by doing this well, they also, as I understand it, want to get as much data involved as possible, regardless of its repository, its type, the silo, the platform, and so forth. What is it that you've had to do to be able to satisfy that need for disparity and variety across these data types? What was the challenge for being able to get to all the types of data that you can then apply your value to?

McReynolds: At Alation, we see the variety of data as a huge asset, rather than a challenge. If you're going to segment the customers in your organization, every event and every interaction with those customers becomes relevant to understanding who that individual is and how you might be able to personalize offerings, marketing campaigns, or product development to those individuals.

That does put some burden on our organization, as a technology organization, to be able to connect to lots of different types of databases, file structures, and places where data sits in an organization.

So we focus on being able to crawl those source systems, whether they're places where data is stored or whether they're BI applications that use that data to execute queries. A third important data source for us that may be a bit hidden in some organizations is all the human information that’s created, the metadata that’s often stored in Wiki pages, business glossaries, or other documents that describe the data that’s being stored in various locations.

We actually crawl all of those sources and provide an easy way for individuals to use that information on data within their daily interactions. Typically, our customers are analysts who are writing SQL queries. All of that context about how to use the data is surfaced to them automatically by Alation within their query-writing interface so that they can save anywhere from 20 percent to 50 percent of the time it takes them to write a new query during their day-to-day jobs.

Gardner: How is your solution architected? Do you take advantage of cloud when appropriate? Are you mostly on-premises, using your own data centers, some combination, and where might that head to in the future?

Agnostic system

McReynolds: We're a young company. We were founded about three years ago and we designed the system to be agnostic as to where you want to run Alation. We have customers who are running Alation in concert with Redshift in the public cloud. We have customers that are financial services organizations that have a lot of personally identifiable information (PII) data and privacy and security concerns, and they are typically running an on-premise Alation instance.

We architected the system to be able to operate in different environments and have an ability to catalog data that is both in the cloud and on-premise at the same time.

The way that we do that from an architectural perspective is that we don’t replicate or store data within Alation systems. We use metadata to point to the location of that data. For any analyst who's going to run a query from our recommendations, that query is getting pushed down to the source systems to run on-premise or on the cloud, wherever that data is stored.
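
A minimal sketch of that pointer-based design follows; the names are illustrative, and run_on_source stands in for a real database driver call:

```python
# Illustrative only: the catalog stores metadata and connection pointers,
# never the data itself; queries are dispatched to the source system.
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    name: str          # logical dataset name
    system: str        # e.g. "vertica-prod" or "hadoop-lake"
    location: str      # table or file path in the source system
    description: str   # human-added business context

catalog = {
    "customer_events": CatalogEntry(
        "customer_events", "hadoop-lake", "/data/events/customers",
        "Clickstream events, curated by the web analytics team"),
}

def run_on_source(entry: CatalogEntry, query: str) -> str:
    # A real implementation would open a connection to entry.system and
    # execute the query there; the catalog itself holds no rows.
    return f"[{entry.system}] executing against {entry.location}: {query}"

print(run_on_source(catalog["customer_events"], "SELECT count(*) ..."))
```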

Gardner: And how did HPE Vertica come to play in that architecture? Did it play a role in the ability to be agnostic as you describe it?

McReynolds: We use HPE Vertica in one portion of our product that allows us to provide essentially BI on the BI that’s happening. Vertica is a fundamental component of our reporting capability, called Alation Forensics, which is used by IT teams to find out how queries are actually being run on data source systems, which backend database tables are being hit most often, and what that says about the organization and those physical systems.

It gives the IT department insight. Day-to-day, Alation is typically more of a business person’s tool for interacting with data.

Gardner: We've heard from HPE that they expect a lot more of that IT department specific ops efficiency role and use case to grow. Do you have any sense of what some of the benefits have been from your IT organization to get that sort of analysis? What's the ROI?

McReynolds: The benefits of an approach like Alation include getting insight into the behaviors of individuals in the organization. What we’ve seen at some of our larger customers is that they may have dedicated themselves to a data-governance program where they want to document every database and every table in their system, hundreds of millions of data elements.

Using the Alation system, they were able to identify within days the rank-order priority list of what they actually need to document, versus what they thought they had to document. The cost savings comes from taking a very data-driven realistic look at which projects are going to produce value to a majority of the business audience, and which projects maybe we could hold off on or spend our resources more wisely.

One team that we were working with found that about 80 percent of their tables hadn't been used by more than one person in the last two years. In that case, if only one or two people are using those systems, you don't really need to document those systems. That individual or those two individuals probably know what's there. Spend your time documenting the 10 percent of the system that everybody's using and that everyone is going to receive value from.
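
A toy version of that audit, assuming you have already derived distinct-user counts per table from the query logs:

```python
# Made-up inputs: distinct users per table over the last two years.
table_users = {"orders": 14, "customers": 9, "legacy_tmp1": 1,
               "etl_scratch": 0, "audit_2013": 1}

document = [t for t, users in table_users.items() if users > 1]
defer = [t for t, users in table_users.items() if users <= 1]

print("Document now:", document)  # the slice everyone actually uses
print("Defer:", defer)            # known to at most one person anyway
```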

Where to go next

Gardner: Before we close out, any sense of where Alation could go next? Is there another use case or application for this combination of crowdsourcing and machine learning, tapping into all the disparate data that you can and information including the human and tribal knowledge? Where might you go next in terms of where this is applicable and useful?

McReynolds: If you look at what Alation is doing, it's very similar to what Google did for the Internet: cataloging all of the webpages that were available to individuals and serving them up in meaningful ways. That's a huge vision for Alation, and we're just in the early part of that journey, to be honest. We'll continue to move in that direction of being able to catalog data for an enterprise and make all of the information stored in that organization easily searchable, findable, and usable.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Where Hadoop Can Use a Tuneup

With Hadoop becoming more versatile and useful to businesses of nearly any type and size, more organizations than ever before have started using this helpful big data tool. Hadoop has been around for more than a decade now, but the rise of big data analytics has placed the magnifying lens squarely over the platform. Its open source nature has allowed Hadoop to evolve over that timeframe, giving it more capabilities and solving some of the issues that plagued it from its early years. Having said that, Hadoop is not a perfect tool. Far from it, in fact. There are still improvements to be made. These aren’t the type of improvements that require wholesale changes to Hadoop. Consider these adjustments to be more of a tuneup -- slight modifications that can make Hadoop a sleeker, faster-performing tool able to give businesses the results they’re looking for without some of the headaches they experience now.

One issue, often considered one of the biggest, is the need to improve Hadoop’s quality of service. Workloads in the Hadoop environment can often grow to be complex, especially as the amount of data businesses collect multiplies. The more jobs that are run in Hadoop, the more ...


Read More on Datafloq
BBBT to Host Webinar from TimelinePI on A New Approach to Process Intelligence

This Friday, the Boulder Business Intelligence Brain Trust (BBBT), the largest industry analyst consortium of its kind, will host a private webinar from TimelinePI on how it brings the power and flexibility of data discovery analysis to process-centric data.

(PRWeb June 08, 2016)

Read the full story at http://www.prweb.com/releases/2016/06/prweb13469576.htm

Storytelling as evolution, not revolution

My latest Tech Target column is on storytelling.

The post Storytelling as evolution, not revolution appeared first on Teich Communications.

KDnuggets 2016 software poll results in a Google motion chart

The final results of KDnuggets' 2016 (17th) data science software usage poll are out. On the official site you can browse the data and read the very detailed analysis (now two pages this year), which we have supplemented here on the blog with a Google motion chart. On this animated chart you can follow, across the whole interval from 2001 to 2016, how the popularity of the "top 50" tools among voters changed from year to year. If you're interested in the results, I think you'll end up watching the chart quite a few times...

[...] Read more!


Data Science 101: The Rise and Shine of Machine Learning

We are living in a digital era where the customer is king. Many businesses have adapted to this new reality and have started interacting with customers dynamically. Today, customers are free to navigate a merchant (eCommerce) website any way they fancy. Also, the merchant can display content and place offers dynamically based on how a given customer interacts with the site. To add to the complexity, purchase decisions are not necessarily made on the first visit. Internet-savvy customers now have all the information at their fingertips to land themselves the best deal.

When contemplating a purchase, customers go through something marketers call the AIDA journey:


A: Attention/Awareness – attract the attention of the customer
I: Interest – hold the interest of the customer
D: Desire - convince customers that they want and desire the product or service and that it will satisfy their needs
A: Action – lead customers towards purchase


In most scenarios, customer’s site navigation on the day of the purchase is mere execution of a decision that has been made even before the customer lands on the site – the customer has been on the site before; the customer is aware of what is on offer; the customer knows exactly how to get ...


Read More on Datafloq
Will Analytics and Technology Put an End to Credit Card Fraud?

If you haven’t noticed the change, you’re living in a cave. Now, businesses are charged with helping banks and credit card processors fight credit card fraud. The stakes are high and the battle against thieves is on a field that includes big data, machine learning, and hardware.

Around half of the world’s credit card fraud happens in the United States. Because of that stunning statistic, in October of 2015 the major card networks shifted fraud liability, pushing US merchants to adopt EMV (Europay, Mastercard, Visa) readers by the end of 2016. If merchants don’t get an EMV chip card reader (and many places I frequent still have not), they face the liability shift.

The banks are saying, "chip cards are safer than magnetic strip cards because they’re harder to counterfeit. We’ll issue the cards and it’s not our fault if you don’t get the technology necessary to run them." The shift means merchants will be liable for fraudulent transactions if the customer has a chip card but the merchant doesn’t have the reader.

For whatever reason, the US has been behind Europe and the rest of the world on this. By 2013, nearly 97% of transactions were EMV in Europe alone. Since half of the world’s ...


Read More on Datafloq
5 Ways How to Prevent Hackers From Accessing Your Router

The abundance of internet-connected devices in this high-tech age has no doubt made your life easier, but all of these new devices bring their share of headaches as well. After all, the more devices you’re connected to, the more opportunities you provide for hackers to breach your information.

It’s not just PCs, laptops and smartphones. Hackers can also target the “gateway” to your home or office’s internet connection — your wireless router.

Here is a look at why hackers are increasingly targeting routers in their exploits, and what you can do to limit the risk of having your router hacked.

Why Hackers Target Routers

A simple reason explains why routers are a growing source of activity for hackers: Many of them have security weaknesses.

In fact, The Wall Street Journal recently analyzed 20 of the best-selling routers on the market and found that half of them have documented security weaknesses. In addition, half of the routers analyzed failed to prompt users for potential software updates during installation.

Often, people set up their wireless internet at home or work and simply let the router sit until someone has an issue with the internet connection. However, you can easily stay on top of your router security game, and …

Read More on Datafloq

How To Make Job Costing More Accurate With Big Data

Big data plays a significant role in human resources, where data aggregated from millions of data points across the industry can be used to interpret and formulate work policies that are more attuned to what your employees want. A number of big data analytics reports published today analyze the various metrics that businesses must change in order to stay ahead of their competition. But before we get to the part about employee policies, there is one area of human resources where big data is really useful: job costing.

Job costing is the process of tracking all the expenses incurred against a job role and benchmarking them against the revenues the role earns, in order to measure the profitability of that role. It is a typically straightforward accounting task for consulting, sales, and marketing type roles, where the expenses incurred and the revenues earned against them are tangible. This is, however, not the case for departments considered “cost centers”, like human resources, finance, or even R&D.
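
The underlying arithmetic is simple; here is a minimal sketch with made-up figures:

```python
# Job costing in miniature: margin per role is attributed revenue minus
# the expenses tracked against that role.
roles = {
    # role: (attributed_revenue, tracked_expenses)
    "sales_rep":     (250_000, 90_000),
    "consultant":    (180_000, 110_000),
    "hr_generalist": (40_000, 75_000),  # cost-center roles are the hard part
}

for role, (revenue, expenses) in roles.items():
    margin = revenue - expenses
    print(f"{role}: margin {margin:+,} ({margin / expenses:+.0%} on cost)")
```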

Because of the complexity in determining the expenses and, more importantly, the revenues against cost center roles, job costing has traditionally not been used for such job ...


Read More on Datafloq
How Data Driven is your Organization? (Big Data Survey)

Over the past years, Big Data has become an expansive field. Becoming data driven is not just about centralizing data and finding the right technology. A data driven way of working also requires the right people with the right skills, intelligent use of data and algorithms, a supportive management, budget, an agile way of working, ample room for experimentation, and many more things.

In order to find out how data-driven organizations really are, and to learn how they compare to each other, we have launched a Big Data Survey. Do you have experience with data, or are you still in the early stages? Share your experiences and receive insights back from hundreds of others.

What is the Big Data Survey?

The Big Data Survey researches how organizations use Big Data and is an initiative of Big Data Expo, the largest Big Data tradeshow in the Benelux, the data consultants of GoDataDriven, and Datafloq.

Participation takes no more than 5 minutes of your time. We value your participation highly. 

Participants not only receive the report, but are also eligible to win some fantastic prizes, including an Apple Watch, Moleskine notepads, VIP access to the Big Data Expo, and Bol.com gift cards.

Click here to start the survey

How Data Driven is your Organization?

Last year's survey ...


Read More on Datafloq
Catbird CTO on why new security models are essential for highly virtualized data centers


The next BriefingsDirect Voice of the Customer discussion explores how increased virtualization across data centers translates into the need for new hybrid-computing approaches to security, compliance, and governance.

Just as next-generation data centers and private clouds are gaining traction, security threats are on the rise -- and attack techniques are becoming more sophisticated.

Are yesterday’s perimeter-based security infrastructure methods up to the task? Or are new approaches needed to gain policy-based control over all virtual assets at all times?

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To explore the future of security for virtual workloads, we're joined by Holland Barry, CTO at Catbird in Scotts Valley, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us why it’s a different picture nowadays when we look at data centers and private clouds. Oftentimes, people think similarly about security -- just wrap a firewall around it and you're okay. Why isn’t that the case? What’s new?

Barry: As we've introduced many layers of abstraction into the data center, it has become an issue to adapt physical appliances that don't move around as fluidly as the workloads they're protecting. As people virtualize more and we move toward this notion of a software-defined data center (SDDC), it has proven a challenge to keep up, and we know that that layer on the perimeter is probably not sufficient anymore.

Gardner: It also strikes me that it's a moving target: virtual workloads come and go. You want elasticity. You want to be able to have fit-for-purpose infrastructure, but that's also a challenge when you can't keep track of things and therefore secure them.

Barry: That's absolutely right. The transient nature of the workloads themselves makes any type of rigid enforcement from a single device pretty tough to deal with. So you need something that was built to be fluid alongside those dynamic workloads.

Gardner: And I suppose, too, that the enterprise architects who are putting more virtualization together across the data center, the SDDC, aren't always culturally aligned with the security folks. So you have more than just a technology issue here. Tell us what Catbird does that goes beyond just the technology and perhaps works toward a cultural and organizational benefit.

Greater skill set

Barry: Even just from an interface standpoint or trying to create a tool that can cater to those different administrative silos, you have people who have virtualization expertise, compute expertise, and then different security practice expertise. There are many slim lanes within that security category, and the next generation set of workloads in the hybrid IT environment is going to demand more of a skill set that can span all those domains. 

Gardner: We talk a lot about DevOps and SecOps combining. There's also this need for automation and orchestration. So a policy-based approach seems to be the only option that can keep up with the speed of security.
Barry: That’s exactly right. There has to be an application-centric approach to how you're applying security to your workloads. Ideally that would be something that could be templatized or defined up front. So as new workloads present themselves in the network, there's already a predetermined way that they're going to be secured and that security will take place right up against the edge of that workload.
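To make "templatized and defined up front" concrete, here is a minimal sketch of what an application-centric policy template could look like; the schema, names, and fields are hypothetical illustrations, not Catbird's actual format:

# Hypothetical sketch: a predefined, application-centric security template.
# Any new workload that presents itself as "web-tier" is secured the same
# predetermined way, right up against the edge of the workload.

WEB_TIER_TEMPLATE = {
    "default": "deny",
    "allow_ingress": [{"proto": "tcp", "port": 443}],
    "allow_egress": [{"proto": "tcp", "port": 5432, "to": "db-tier"}],
}

def secure_new_workload(workload_id, app_tier, templates):
    # The policy is predetermined by the workload's application tier,
    # not hand-configured after the workload appears on the network.
    return {"workload": workload_id, "policy": templates[app_tier]}

print(secure_new_workload("vm-0042", "web-tier", {"web-tier": WEB_TIER_TEMPLATE}))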

Gardner: Holland, tell us about Catbird, what you do, how you're deployed, and how you go about solving some of these challenges.

Barry: Catbird was born and raised in virtualized environments. We've been around for a number of years. It was this notion of bringing the perimeter and the control landscape closer to the workload, and that’s via hypervisor integration and also via the virtual data-path integration. So it's having a couple of different vantage points from within the fabric and applying security with a purpose-built solution that can span multiple platforms.

So that hybrid IT environment, which is becoming a reality, may have a little bit of OpenStack, it may have a little bit of VMware. Having that single point of policy definition and enforcement is going to be critical to people adopting and really taking the next leap to put a layer of defense in their data center.

Gardner: How are you deployed? Are you a software appliance yourself, virtualized software?

Barry: Exactly right. Our solution comprises two components in a very basic hub-and-spoke architecture. We have a policy enforcement point, a virtual machine (VM) appliance that installs out on each hypervisor, and we have a management node that we call the Control Center. That's another VM, and those two components talk to each other in a secure manner.
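In outline, that hub-and-spoke control plane might be modeled as follows; this is a sketch of the arrangement Barry describes, with the class and method names invented for illustration:

# Sketch: one Control Center (hub) pushes policy to an enforcement-point
# VM on each hypervisor (spoke). Purely illustrative structure.

class EnforcementPoint:
    def __init__(self, hypervisor):
        self.hypervisor = hypervisor
        self.policies = {}

    def apply(self, workload, policy):
        # Enforce at the edge of the workload, on this hypervisor.
        self.policies[workload] = policy

class ControlCenter:
    def __init__(self):
        self.spokes = []

    def register(self, spoke):
        self.spokes.append(spoke)

    def push_policy(self, workload, policy):
        # Single point of policy definition, enforced on every platform.
        for spoke in self.spokes:
            spoke.apply(workload, policy)

hub = ControlCenter()
for hv in ("esxi-01", "openstack-nova-07"):  # a hybrid VMware/OpenStack estate
    hub.register(EnforcementPoint(hv))
hub.push_policy("vm-0042", {"default": "deny"})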

Gardner: What's a typical scenario in this type of east-west-traffic virtualization environment where security works better, and how does it protect? Are there some examples that demonstrate where the perimeter approach would break down but your model got the task done?

Doing enforcement

Barry: I think that anytime you need the granularity of not only visibility but also enforcement -- I'm going to get a little technical here -- down to the UUID of the vNIC, the smallest unit of measure as it relates to a workload, that's really where we shine, because that's where we do our enforcement.
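To illustrate that granularity, here is a rough sketch of enforcement keyed to a vNIC's UUID; the data structures are hypothetical, since the interview doesn't describe the product's internals:

import uuid

# Sketch: rules attach to a vNIC's UUID -- the smallest unit of measure
# for a workload -- so enforcement follows the workload when it moves.

vnic_id = str(uuid.uuid4())  # stand-in for a hypervisor-assigned vNIC identity
rules = {vnic_id: [("tcp", 443, "allow"), ("*", "*", "deny")]}

def permits(vnic, proto, port):
    # First matching rule wins; unknown vNICs fall through to deny.
    for r_proto, r_port, action in rules.get(vnic, [("*", "*", "deny")]):
        if r_proto in ("*", proto) and r_port in ("*", port):
            return action == "allow"
    return False

print(permits(vnic_id, "tcp", 443))  # True
print(permits(vnic_id, "tcp", 22))   # False -- default deny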

Gardner: Okay. How about partnerships? Obviously you're working in an environment where there are a lot of different technologies, lots of moving parts. What’s going on with you and HPE in terms of deployment, working with private cloud, operating systems, and then perhaps even moving toward modeling and some of the HPE ArcSight technology?

Barry: We have a number of different integration points inside HPE’s portfolio. We're a Helion-ready certified partner. We just announced our support for the 2.0 Helion OpenStack release.
We're doing a lot of work with the ArcSight team in terms of getting very detailed event feeds and visibility into the virtualized workloads.

And we just announced some work that we are doing with HPE’s HPN team around their software-defined networking (SDN) VAN Controller as well, extending Catbird’s east-west visibility into the physical domain, leveraging the placement of the SDN controller and its command over the switches. So it’s pretty exciting work there.

Gardner: Let's dig into that a bit: the SDN advances that are going on, and how they're changing the way people think about deployment and management of infrastructure and data centers. Doesn't this give you a significant boost in the way that you can engage with security, and intercept and stop issues before they propagate? What is it about SDN that is good for security?

Barry: As the edges of what have traditionally been rigid network boundaries become fluid as well, knowing the state of the network and the state of the workload is going to be critical to applying those traditional security controls. So we're trying to tie all of this together -- not only with our integration with Helion, but also by utilizing the knowledge that the SDN Controller has of the data path. We can surface indications of compromise and maybe get you to a problem a little bit quicker than traditional methods.

Gardner: I always like to try to show and not just tell. Do you have any examples of organizations that are doing this, what it has done for them, and why it’s a path to even greater future benefits as they further virtualize and go to even larger hybrid environments?

Barry: Absolutely. I can't name names, but one of the largest US telcos is one of our customers. They came to us to solve the problem of consistency of policy definition and enforcement across hybrid platforms -- across VMware and OpenStack workloads.

That's not only for the application of the security controls and the visibility of the traffic, but also for the evidence and assurance of compliance -- being able to map back to regulatory frameworks and things like that.

Agentless fashion

There are a couple of different use cases in there, but it’s really that notion where I can do it in an agentless fashion, and I think that’s an important thing to differentiate and point out about our solution. You don’t have to install an agent within the workload. We don’t require a presence inside the OS.

We're doing it just outside of the workload, at the hypervisor level. It’s key that we have the specific tailored integrations to the different hypervisor platforms, so we can abstract away the complexity of applying the security controls where you just have a single pane of glass. You define the security policy and it doesn’t matter which platform you're on, it’s going to be able to do it in that agentless fashion.

Gardner: Of course, the march of technology continues, and we're not just dealing with virtualization. We're now talking about containers, micro-services, composable infrastructure. How will your solution, in conjunction with HPE, adapt to that, and is there more of a role as you get closer to the edge, even out into the Internet of Things (IoT), where we're talking about all sorts of more discrete devices really extending the network in all directions?

Barry: As the workload types proliferate and we get fancier about how we virtualize, whether it’s using a container or a virtualization platform, and then the vast amount of IoT devices that are going to present themselves, we're working closely with the HPE team in lockstep as mass adoption of these technologies happens.
We have plans in place to solve it platform by platform. We take an approach where we look at each specific problem and ask how we're going to attack it, while keeping the bigger vision: "We're still going to keep you in that same console, and the method in which you apply the security is going to be the same."

Containers are a great example -- something we know we need to tackle, and something that's being adopted faster than anything else I've seen. That's a pretty exciting one. But at the end of the day, it's a way of virtualizing a service or microservices. We're aware of it, and I think our method of applying security controls is going to be the one that wins.

Gardner: Pretty hard to secure a perimeter when there really isn’t a perimeter.

Barry: Perimeter is quickly fading, it seems.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

It WAS the #Best #DataVault Event Ever!


Last week I had the pleasure of spending a few days in lovely Stowe, Vermont, at the Stoweflake Mountain Resort and Spa, attending the 3rd Annual World Wide Data Vault Consortium (#WWDVC). The location was picturesque, the weather near perfect, the beer tasty, and the learning and networking outstanding. We had 75 […]
Should Startups Be Worried About Big Data Legal Hassles?


Startups should have some concern about legal hassles surrounding big data. There are many gray areas in the law, and that causes a lot of confusion for legal staff, business executives and startup owners. While knowing the law is one thing, practicing proper protocols is another. It is important to have access to a legal expert at all times to ensure that your startup is following big data rules and stays out of the courtroom.

Acquisition Situations

During mergers and acquisitions, startups tend to forget that a disclosure needs to be included in the terms of use and privacy policies stating that stored consumer data goes with the sale of the company. Companies are also supposed to inform their registered consumers of a sale of the company and what it means for their stored personal data.

Intended Use of Collected Data

Any data collected into a business's database must serve a specific, stated purpose. Those purposes must be spelled out in the terms of use and privacy policies as well. Consumers have a right to know what companies are doing with their personal information. Sensitive personal data such as social security numbers, credit/debit card numbers and banking information ...


Read More on Datafloq
A Marketer’s Guide to Using Data


Data is one of the most important assets in the marketing mix. Data-driven marketing delivers results in terms of customer loyalty, customer engagement and market growth.  According to a report by Forbes Insights and Turn, Data Driven and Digitally Savvy: The Rise of the New Marketing Organization, “Organizations that are ‘leaders’ in data-driven marketing report far higher levels of customer engagement and market growth than their ‘laggard’ counterparts. In fact, leaders are three times more likely than laggards to say they have achieved competitive advantage in customer engagement/loyalty (74% vs. 24%) and almost three times more likely to have increased revenues (55% vs. 20%).”

The Three Types of Data

You may have heard the terms 1st party, 2nd party and 3rd party data. If you aren't entirely familiar with what each type of data entails, here's a brief overview.

1st Party Data

This is data you have captured based on the actions someone takes when interacting with your business. For example, this may be data collected when a user fills out a form on your website (e.g., name, address, phone number); it could be purchases and other transactions, both offline and online, that your customers make (e.g., what types of products or services they ...


Read More on Datafloq
Why business apps design must better cater to consumer habits to improve user experience


The next BriefingsDirect technology innovation thought leadership discussion focuses on new user experience demands for applications, and the impact that self-service and consumer habits are having on the new user experience design.

As more emphasis is placed on user experiences and the application of consumer-like processes in business-to-business (B2B) commerce, a softer side of software seems to be emerging. We'll now explore a new approach to design that emphasizes simple and intuitive process flows.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how business apps design must better cater to consumer habits to improve user experience, we're joined by Michele Sarko, Chief Design Officer at SAP Ariba. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There seems to be a hand-off between the skills that are new to app user-interface design and the older skills that had a harder edge from technology-centric requirements. Are we seeing a shift in the way that software is designed, from a user-experience perspective, and how different is it from the past?

Sarko: It's more about understanding the end users first. It's more about empathy and universal design. What used to happen was that technology was so new that we as designers were challenging it to do things it didn't do before. Now, technology is the table stakes from which everything is measured, and designers -- and our users, for that matter -- expect it to just work.

The differentiator now is to bring the human element into enterprise products, and that’s why there's a shift happening in software. The softer side of this is happening because we're building these products more for the people who actually use them, and not just for the people who buy them.

Gardner: We've heard from some discussions at the SAP Ariba LIVE Conference recently about the need for greater and more rapid adoption and getting people more deeply into business networks and applications. It seems to me that this user experience and that adoption relationship are quite closely aligned.

Sarko: Yes, they absolutely are, because at the end of the day, it's about people. Whether we're selling consumer software, enterprise software, or any type of business software, if people don't use it or don't want to use it, you're not going to have adoption. You don't want it to become "shelfware," so to speak. You want to make a good business investment, but you also want your end users to be able to use it effectively. That's where adoption comes into play, and why it's key to our customers as well as to our own business.

Intuitive approach

Gardner: Another thing we heard was that people don't read the how-to manuals and they don't watch the videos. They simply want to dive in and be able to work and proceed with apps. There needs to be an intuitive approach to it.

I'm old enough to remember that when new software arrived in the office, we would all get a week of training and sit there for hours. But there's no training anymore these days. So how do people learn to use new software?

Sarko: First and foremost, we need to build it intuitively, so that you naturally apply the patterns you already have to that software. But we should also come at it in a different way, where training is in context, in product.

We're doing new things with overlays to take users through a tour, or step them through a new feature, giving them just the quick highlights of where things are. You see this sort of thing in mobile apps all the time after you install an update. In addition, we build in-context questions and answers right there at the point of need, where the user is likely to encounter something new or initially unknown in the product.

So it’s just-in-time and in little snippets. But underpinning all of it, the experience has to be very, very simple, so that you don't have to go through this overarching hurdle to understand it.

Gardner: I suppose, too, that there's an enterprise architectural change afoot. Before, when we had packaged software, the cycles for changing that would be sometimes years, if not more. Nowadays, when we go to cloud and software-as-a-service (SaaS) applications, where there’s multitenancy, and where the developer, the supplier of the software, can change things very rapidly, a whole new opportunity opens up. How does this new cloud architecture model benefit the user experience, as compared to the architecture of packaged software?

Sarko: The software and the capabilities that we're using now are definitely a step forward. With SAP Ariba, we've been able to decouple the application from the presentation layer in such a way that we can change the user experience more rapidly, do A/B testing, do a lot of in-product metrics and tracking, and still keep all of the deep underpinnings and the safety and security right there.

So we don't have to spend all of our time building it deep into the underpinnings. We can keep those two things separate, which lets us iterate a lot faster. That's enabling us to go quicker and to understand users' needs.
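The mechanics behind that kind of rapid A/B iteration are generic; here is a minimal sketch of deterministic bucketing at a presentation layer (my illustration under stated assumptions, not SAP Ariba's actual code; the experiment name is invented):

import hashlib

# Sketch: the presentation layer picks a variant per user, deterministically,
# while the application logic underneath stays untouched.

def variant(user_id, experiment, pct_b=50):
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable 0-99 bucket for this user
    return "B" if bucket < pct_b else "A"

print(variant("user-1234", "guided-buying-ui"))  # same answer on every visit

Because the assignment is a pure function of user and experiment, each user always sees the same interface while usage metrics accumulate behind the scenes.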

Gardner: The drive to include mobile devices with any software and services now plays a larger role. We saw some really interesting demos at the SAP Ariba LIVE conference around the ability to discover and onboard a vendor using a mobile device, in this case a smartphone. How is the drive for mobile-first impacting this?

Sarko: Well, the mobile-first mindset is something that we always employ now. This is the way that we should, and do, design a lot of things, because it imposes a different set of restraints, form factors, and demands for simplicity. On mobile, you only have so much real estate to work with. Approaching it from that mindset allows us to take the learnings from mobile and bring them back to all the other device options that we have.

Design philosophy

Gardner: Tell me a little bit about your philosophy about design. When you look at software that maybe has years of a legacy, the logic has been there for quite some time, but you want to get this early adoption, rapid adoption. You want a mobile-first mentality. How do you approach this from a design philosophy point of view?

Sarko: It has to be somewhat pragmatic, because you can't move the behemoth of a company that you are to something different all at once. The way that I approach it, and the way we're looking at it within SAP Ariba, is to consider new innovations and new ways to improve, and start there, with the mobile-first mindset, or really by just redesigning aspects of the product.

At the same time, pick the most important aspects or areas of your current product suite and reinvent those. It may take a little more time, or it may be on a different technology stack. It may be inconsistent for a while, but the improvements are going to be there and will outweigh that inconsistency. And then, as we go over time, we'll make that process change overall. But you can't do it all at once. You have to be very pragmatic and judicious about where you start.

Gardner: Of course, as we mentioned earlier, you can adjust as you go. You have more opportunity to fix things or adjust the apps and design.

You also said something interesting at SAP Ariba LIVE, that designers should, “Know your users better than they know themselves.” First, what did you mean by that in more detail; and then secondly, who are the users of SAP Ariba applications and services, and how are they different from users of the past?

Sarko: What I meant by “know the users better than they know themselves” is that we're observing them, we're listening to them, we're drawing patterns across them. The user may know who they are, but they often feel like they may be alone. What we end up seeing is that as a user, you’re never alone. We see countless other users facing the same challenges as you, with the same needs and expectations.

You may just be processing invoices all day, or you may be the IT professional who now has to order all of the equipment for your organization. We start to see you as a person and the issues that you face, but then we figure out how to help not only you with your specific need; we also learn from others about new features and requirements that you didn't even think you might need.

So, we're looking in aggregate to find solutions that fit many and give them to all, rather than solving needs one by one. That's what I mean by "know your users better than they know themselves."

And then, who are the users? There are different personas. Historically, SAP Ariba focused mostly on the customer -- the folks who made the purchasing decisions, who owned the business decisions. I'm trying to help the company understand that there is a shift, and that we also have to pay equal attention to the end users, the people who are in the product using it every day. As a company, SAP Ariba has to focus on the various roles and satisfy both sets of needs in order to be successful.

Gardner: It must be difficult to create software for multiple roles. You mentioned the importance of being role-based in this design process. Is it that difficult to create software that has a common underpinning in terms of logic, but then effectively caters to these different roles?

Design patterns

Sarko: The way that we approach it is through building blocks and systems. We have design patterns, which are building blocks, and these little elements then get manifested together to build the experience.

Where the roles come in is in what gets shown or not. Different modules built from those building blocks may be exposed to one group of people but not to another. Based on roles and permissions, we can hide and show what's needed. That's how we approach role-based design and make it right for you.
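A simple sketch of that idea, with the same underlying module set filtered per role; the role and module names are hypothetical, not SAP Ariba's actual configuration:

# Sketch: one set of building blocks, hidden or shown by role.
MODULES = {
    "catalog-search": {"shopper", "buyer"},
    "approve-requests": {"buyer"},
    "spend-analytics": {"procurement-admin"},
}

def visible_modules(role):
    # The experience each persona sees is just a filtered view.
    return sorted(m for m, roles in MODULES.items() if role in roles)

print(visible_modules("shopper"))  # ['catalog-search']
print(visible_modules("buyer"))    # ['approve-requests', 'catalog-search']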

Gardner: And I suppose, too, that one of the goals for SAP Ariba is to not just have the purchasing people do the purchasing, but to have more people using self-service. Tell me a bit more about self-service and this idea that people are shopping and not necessarily procuring.

Sarko: Yes, because this is really the shift that we're trying to design for. We come to work every day with the biases from our personal lives, and it really shouldn't be all that different when talking about procurement. I mentioned earlier that for end users this is not really about procurement; it's about shopping, because that's what you're doing when you buy things, whether you're buying them for work or for your personal life.

The terminology has to be consistent with what we know from our daily lives and not technical jargon. Bringing those things to bear and making that experience much more consumer-like will enable our customers to be more successful.

Gardner: We've already seen some fruits of these labors and ideas. We saw an example of Guided Buying, a really fresh, clean interface, very similar to a business-to-consumer (B2C) shopping experience. Tell me a little bit about some of the examples we have seen and how far we are along the spectrum to getting to where you want to go.

Sarko: We're very far down the path of building this out. We've been spending the past six months developing and iterating on ideas, and we'll be able to bring the first release to market relatively soon.

And through the process of exploration and working with customers, there have been all kinds of nuances about policy compliance and understanding what's allowed and what's not allowed -- and not just for the end user, but for the buyer in their specific areas, and for the procurement folks behind the scenes. All of these roles are now thought of as individual players in an orchestra, because they all have to work together. We're actually quite far along, and I'm really excited to see the product come to market pretty soon.

Gardner: Any other ideas about where we go when we start bringing more reactions to what users are doing in the software? We saw instances where people were procuring things, but then the policy issue would pop up -- the declaration of, "That's not within our rules; you can't do that."

It seems to me that if we take that a step further, we're going to start bringing in more analysis and say, "Well, you're going down this path, but we have information that could help you analyze and better make a decision." Is that something we should expect soon as well?

Better recommendations

Sarko: Yes, absolutely. We're trying to use the intelligence that we have to make better recommendations for the end users. Then, when policy compliance comes in, we're not preventing the end user from completing their task. We're just bringing in the policy person at the other end to handle that additional approval, so that users still accomplish what they started out to do.

Gardner: We really are on the cusp of an interesting age, where analysis from deep-data access and deep-penetrating business intelligence types of inserts can be made into process. We're at the crossroads of process and intelligence coming together.

Before we sign off, is there anything else we should expect in terms of user experience, enhancements in business applications, particularly in the procure-to-pay process?

Sarko: This is an ongoing evolutionary process. We learn from the users each day with multiple inputs: talking to them, watching analytics, listening to customer support. The product is only going to get better with the feedback that they give us.

Also, our release cycles now have gone from 12 to 18 months down to three months, or even shorter. We're listening, learning, reacting, much more quickly than we have before. I expect that you'll see many more product changes and from all of the feedback, we’ll make it better for everyone.

Gardner: Speaking of feedback, I was very impressed with the Feature Voting that you've instituted, allowing people to look at different requirements for the next iteration of the software and letting them vote for their favorites. Could you add a bit more about how that might impact user experience as well?

Sarko: By looking holistically at all the feedback we get, we start to see trends and patterns in the things that are getting a lot of traction or a lot of interest. That helps us prioritize what we call the backlog -- the feature list -- so that, based on user input, we attack the areas that are most important to users and work that way.
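Mechanically, turning votes into a backlog order is a simple aggregation; a quick sketch (the feature names are invented for illustration):

from collections import Counter

# Sketch: tally feature votes and rank the backlog by user interest.
votes = ["mobile-approvals", "guided-buying", "mobile-approvals",
         "invoice-search", "mobile-approvals", "guided-buying"]

backlog = [feature for feature, _ in Counter(votes).most_common()]
print(backlog)  # ['mobile-approvals', 'guided-buying', 'invoice-search']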

We listen to the input -- every single piece of it. Also, as you heard, last year we launched Visual Renewal. In the product, when you switch versions of the interface, you see a feedback form that you can fill out. We read every piece of that feedback, looking for trends in how to fix the product and make enhancements based on users. This is an ongoing process that we'll continue: listen, learn, and react.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

4 Reasons Why Open Source is Ready for the Enterprise


Recently, Mike Olson, the co-founder of Cloudera, took some time for an interview about the developments within the Hadoop ecosystem. In 2008, Cloudera was the first commercial distribution of Hadoop. A lot has happened since then, and for Mike, there is no doubt about the future of open source.

"If you go back in time 10 years you would see that a lot of CIO’s believed there was something bad about open source. In general, we now see a diminishing fear of open source in the market. I use that word intentionally. Executives would bring up that open source wasn’t professionally developed, that it was not developed for companies. That is flatly no longer true. Nowadays, open source software is regulation compliant, and allows CIO’s to fully take advantage of the pace of innovation", Mike Olson started. 

In this article we share four reasons why open source software will rule the enterprise software market.

1. Compliance

Enterprises will object that open source software is not regulatory compliant, not secure, and therefore not safe to use. Mike Olson: "We can point to very large-scale, very secure implementations of the open source platform in mission-critical applications, compliant with rigorous regulatory regime requirements. Cloudera is the only Hadoop platform ...


Read More on Datafloq
Are You Still in the Dark About the Quality of Your Data?


More and more businesses are waking up to the threat of poor data quality. We’re gradually seeing the risk being taken more seriously as the shockwaves of poor management are felt.

Yet for many businesses, data quality is seen as an abstract concept: difficult to understand and impossible to value.

When the business formulates its budgets for the year, data quality is often skipped over, because nobody really knows what’s wrong. Sure: they can see emails bouncing, and their customers are drifting away to competitors, but the root cause hasn’t been fully determined.

These businesses aren’t deliberately neglecting data. They just don’t realise how important it is. In fact, it’s the most critical asset that your business currently holds. As your competitors start to take action on data, your business is at risk of losing momentum.

Why Data Matters

As a society, we are now fully connected. We are reliant on the systems that bind us together. Collectively, humans are generating more data in a day than they have in many thousands of years.

It's widely accepted that data decays at a rate of about 2 percent per month, regardless of how it is stored. So, assuming you are not taking any action to prevent this, ...
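Taken at face value, that rate compounds month over month. A quick sketch of what the widely quoted 2 percent figure implies over time:

# If 2% of records go stale each month, the share still accurate
# after n months is 0.98 ** n (assuming the quoted rate holds).
for months in (6, 12, 24):
    remaining = 0.98 ** months
    print(f"{months:>2} months: {remaining:.1%} accurate, {1 - remaining:.1%} decayed")
# After a year, roughly 78.5% remains accurate -- about a fifth of
# the database has gone stale without anyone touching it.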


Read More on Datafloq
Snowflake DB Cool Features – Automatic Query Optimization


Learn how Snowflake automatically optimizes SQL queries. It is all handled “auto-magically” via a dynamic optimization engine in our cloud services layer.
How Strategy – Not Technology – Is the Real Driver for Digital Transformation


Business owners and executives today know the power of social media, mobile technology, cloud computing, and analytics. If you pay attention, however, you will notice that truly mature and successful digital businesses do not jump at every new technological tool or platform.

While they do not sit and wait for months or years to create social media pages or to take advantage of new analytical services, they do approach every piece of technology that they use with a solid strategy. Why? Marketing, production, and brand management require concrete planning to be effective and coherent. Implementing new technology without a set strategy is a recipe for failure – or, at the very least, for ineffective use of an otherwise powerful tool.

The Importance of Digital Strategy and Vision

To make the most use out of the technologies and tools available to your business today, you must have a coherent and cohesive digital strategy. Companies that have good digital strategies are said to be “digitally mature” and are more likely to embrace the most strategic technologies as they are developed, rather than casting about, trying everything, and failing to use most of it to their advantage.

A good digital strategy is born out of a vision ...


Read More on Datafloq
CloudMoyo helps KCS run trains in the Cloud


CloudMoyo, a partner of choice for cloud and analytics solutions, announced today that Kansas City Southern de Mexico has gone live with the CloudMoyo Public Transport Management (CPM) solution. Kansas City Southern Railway Company (KCS) is the third-oldest Class I railroad in North America.

The company has been in operation since 1887 and operates in ten central U.S. states, as well as in the northeastern states of Mexico and into Canada. In Mexico, it has over 150 trains running per day, with an average of 420 crew members on duty daily. KCS chose the CPM platform to automate, streamline, and optimize its railway scheduling and crew management operations.

Through its work on this project, and in close collaboration with the client, CloudMoyo was able to deliver a system that addresses the challenges of managing complex transit operations. The system developed "is cost effective and enables quicker time to deployment for the operators, thereby leading to quicker ROI, and takes advantage of mobility advancements." The benefits delivered are clearly visible in the form of a state-of-the-art user experience, improved visibility, reduced overtime, automation, and adherence to labor laws.

Buoyed by the success of this project, KCS plans to leverage CloudMoyo’s expertise in data analytics ...


Read More on Datafloq
Why Taxonomies Are Required to Find Information You Need


Taxonomy & Taxonomies

A disorganised system will be prone to stagnation, have limited user adoption, and dissolve into chaos! Do you have a taxonomy?

If a taxonomy is formed at the outset of an information management project, a foundation can be defined that will enable the organisation to expand and evolve their system as demands change.

Taxonomy – the hierarchical classification of entities, including the principles that underlie such classification (according to Wikipedia). Plural noun: taxonomies. Early 19th century: coined in French from Greek taxis 'arrangement' + -nomia 'distribution'.



[Figure: use over time of the term "taxonomy"]



Example taxonomy: a car is a subtype of vehicle, so a car is a vehicle, but not every vehicle is a car.

[Figure: an example taxonomy for food (source: www.digital-mr.com)]
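In data terms, a taxonomy is just a tree of "is-a" relationships. A tiny sketch built on the vehicle example above (the terms are chosen purely for illustration):

# Tiny taxonomy sketch: child -> parent ("is-a") relationships.
PARENT = {
    "car": "vehicle", "truck": "vehicle", "vehicle": None,
    "apple": "fruit", "fruit": "food", "food": None,
}

def is_a(term, ancestor):
    # Walk up the hierarchy from the term toward the root.
    while term is not None:
        if term == ancestor:
            return True
        term = PARENT.get(term)
    return False

print(is_a("car", "vehicle"))  # True: every car is a vehicle
print(is_a("vehicle", "car"))  # False: not every vehicle is a car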

The story goes that if Microsoft had made completion of the properties box mandatory in all Office documents, there would be no need for document management systems. But "we are where we are": we need to develop taxonomies -- a set of chosen terms used to retrieve online content -- to make the search and browse capabilities of content, document, or records management systems truly functional.

Be it a taxonomy designed for storage and management or one that supports better search, without one, any type of management system is near useless.

Organising information

A business taxonomy ...


Read More on Datafloq
BBBT to Host Webinar from ThoughtSpot on Bringing Analytics to the Masses Through the Power of Search


Next Wednesday, the Boulder Business Intelligence Brain Trust (BBBT), the largest industry analyst consortium of its kind, will host a webinar from ThoughtSpot on how some of the most successful Fortune 500 companies are giving front-line employees the ability to build their own reports and dashboards in seconds with search.

(PRWeb May 28, 2016)

Read the full story at http://www.prweb.com/releases/2016/05/prweb13449113.htm

How to Build Open Source Communities: Akka Project


Konrad Malawski from the Akka Project shared his experiences and approach to open source community building and management. Find out what he says and visit the Akka Project channel on Gitter.

Tell us a little bit about yourself and the Akka project community. What is the Akka project? How did it all begin?

Akka is a toolkit for building highly scalable, concurrent, and distributed applications on the JVM. The project was started by Jonas Bonér, who in 2009 decided to create a library to help him build concurrent, distributed apps, heavily inspired by Erlang and its "let it crash!" motto. Later, the project evolved into an important part of the Scala community and a crucial part of the platform supported by Typesafe (later known as Lightbend), the company behind the majority of Scala compiler work as well as multiple great open source projects, such as Akka and Play.

We have a fun infographic that we made to celebrate 5 years of Akka (back in 2014), that shows the evolution of the toolkit as well as people joining and contributing different modules etc. I joined around that time actually and the rest, as they say, is history.

What common goals do you have as a community?

I like ...


Read More on Datafloq
Next Generation Data Analytics Workflow Introduced in Datameer 6


Further democratizing big data analytics by making traditionally complex tasks easy, Datameer today unveiled Datameer 6 to enable a new class of data-driven business analysts. Datameer 6 introduces an elegant new front end that reinvents the entire user experience, making the previously linear steps of data integration, preparation, analytics, and visualization a single, fluid interaction. Shifts in context, tools, or teams are no longer required every time a data change is needed, saving both time and cost over traditional analytic workflows.

Datameer 6 also introduces the addition of Spark to its patent-pending Smart Execution™ technology, which intelligently selects the best processing framework for every single job while abstracting complexity from the end user. This addition ensures the fastest processing time, every time, and allows the user to focus on the business problem at hand instead of the underlying technology.

Next-Generation Analytics Workflow for Uninterrupted Data Discovery

As a modern business intelligence platform, Datameer is leveraging smart data  discovery to transform complex, technical processes with easy-to-use point-and-click functionality. Building on this goal of simplification, Datameer 6 gives fluidity to the entire self-service analytics workflow between data integration, preparation, analytics and visualization in a single screen. By not requiring technical skills or the need for ...


Read More on Datafloq
Let Your Data Guide Your Marketing – 5 Ways to Transform Your Business with Better Data


Marketing data lies at the foundation of every successful marketing strategy. Data tells us who our best customers and prospects are, how to target them with the right offers and through the right channels, which messages will drive the most conversions, and how to improve customer retention, among numerous other marketing initiatives. With the right mix of data, you can ensure that you are delivering optimal results for your business.

In order to create the perfect marketing strategy, you first need to fully understand who your customers and prospects are. This type of insight needs to go beyond data such as name, address, phone and email. Consumers expect you to know who they are, what they want, which channels they like to shop through, and the best time to communicate with them. This type of insight can only be achieved by utilizing your internal 1st party data and combining it with rich 3rd party data sets, both offline and online.

Here’s a look at 5 ways you can transform your business with better marketing data.

Pay Attention to Data Quality

Marketers talk about the importance of good data, but in reality, records often contain incomplete or wrong data. Records may be missing basic ...


Read More on Datafloq
#Kscope16 Blog Hop: #BigData and #AdvancedAnalytics Sessions Not to Miss


Need help choosing sessions for #KScope16? Here are my Top 5 for the #BigData track
