3 Ways AI Could Totally Change Healthcare

3 Ways AI Could Totally Change Healthcare

Most of the time, artificial intelligence (AI) is discussed with respect to how it will make our technology devices better, how it’ll usher in driverless cars, or even how dangerous it could be...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
5 Strong Steps Towards Digital Transformation

5 Strong Steps Towards Digital Transformation

Strategy, organizational structure, management vision, human resources, and many such factors form the building blocks of an enterprise. If an organization plans to introduce any new strategy or...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Slow data kills business

Slow data kills business

More and more businesses today are looking to extract insight from the data they have access to in as near real-time as possible. They are looking to rapidly gain the insight and intelligence they...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Tomorrow’s Factories Will Need Better Processes, Not Just Better Robots

Tomorrow’s Factories Will Need Better Processes, Not Just Better Robots

When people think of the automotive Factory of the Future, the first word that comes to mind is automation. They think of the “lights-out” factory that General Motors Chief Executive Roger Smith...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
The rise of autonomous systems will change the world

The rise of autonomous systems will change the world

Harald Sack is Professor for Information Services Engineering at two of the most renowned research institutions in Europe: FIZ Karlsruhe and AIFB. He is a part of SEMANTiCS’ research and innovation...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
The Disruptive Master Data Management Solutions List

The Disruptive Master Data Management Solutions List

Master Data Management (MDM) can take many forms. In the following I will briefly introduce 8 forms of MDM. A given MDM implementation will typically be focused on one of these forms with some...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Moving Your Data Infrastructure To The Cloud – Things You Should Know

Moving Your Data Infrastructure To The Cloud – Things You Should Know

Migrating to a cloud data warehouse can be a very successful endeavor for many organizations. One critical success factor ensuring a satisfactory conversion is the utilization of a data warehouse...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
IBM Thinks Blockchain Will Solve Facebook’s Data Problems

IBM Thinks Blockchain Will Solve Facebook’s Data Problems

Still reeling from its recent data leak scandal, Facebook this week announced it is forming a new team to focus on blockchain technology—but it didn’t explain why. IBM’s top blockchain expert,...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Dimensions of Data Management

Dimensions of Data Management

Printing started to become what we understand as “data driven” in 1887, the year when Tolbert Lanston demonstrated a hot-metal linecasting machine controlled by punched paper tape. Modern digitized...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
The Third World War of 1985

The Third World War of 1985

Hackett

[This article was originally posted on 5 August 2016]

The seeming military resurgence of Vladimir Putin’s Russia has renewed concerns about the military balance between East and West in Europe. These concerns have evoked memories of the decades-long Cold War confrontation between NATO and the Warsaw Pact along the inner-German frontier. One of the most popular expressions of this conflict came in the form of a book titled The Third World War: August 1985, by British General Sir John Hackett. The book, a hypothetical account of a war between the Soviet Union, the United States, and assorted allies set in the near future, became an international best-seller.

Jeffrey H Michaels, a Senior Lecturer in Defence Studies at the British Joint Services Command and Staff College, has published a detailed look at how Hackett and several senior NATO and diplomatic colleagues constructed the scenario portrayed in the book. Scenario construction is an important aspect of institutional war gaming. A war game will only be useful if the assumptions that underpin it are valid. As Michaels points out,

Regrettably, far too many scenarios and models, whether developed by military organizations, political scientists, or fiction writers, tend to focus their attention on the battlefield and the clash of armies, navies, air forces, and especially their weapons systems.  By contrast, the broader context of the war – the reasons why hostilities erupted, the political and military objectives, the limits placed on military action, and so on – are given much less serious attention, often because they are viewed by the script-writers as a distraction from the main activity that occurs on the battlefield.

Modelers and war gamers always need to keep in mind the fundamental importance of context in designing their simulations.

It is quite easy to project how one weapon system might fare against another, but taken out of a broader strategic context, such a projection is practically meaningless (apart from its marketing value), or worse, misleading.  In this sense, even if less entertaining or exciting, the degree of realism of the political aspects of the scenario, particularly policymakers’ rationality and cost-benefit calculus, and the key decisions that are taken about going to war, the objectives being sought, the limits placed on military action, and the willingness to incur the risks of escalation, should receive more critical attention than the purely battlefield dimensions of the future conflict.

These are crucially important points to consider when deciding how to assess the outcomes of hypothetical scenarios.

The success of any analytics team starts with earning trust

The success of any analytics team starts with earning trust

A wise speaker I heard recently said, “We are human, which means we make emotional decisions first, then justify them with logic.” Do not underestimate the value of people liking you before they will...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Six Reasons Why Enterprises Need a Modern Data Integration Architecture

Six Reasons Why Enterprises Need a Modern Data Integration Architecture

Data is instigating change and giving rise to a new data-driven economy that is still in its infancy. Organizations across industries increasingly recognize that monetizing data is crucial to...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Helping data scientists to map a Knowledge Graph future

Helping data scientists to map a Knowledge Graph future

The use of Knowledge Graph technology to uncover connections within and across data sets has major potential in financial markets. We met with senior data scientists to discuss how to encourage wider...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
10 Hot IoT startups to watch

10 Hot IoT startups to watch

The Internet of Things (IoT) promises to make machines smarter, industrial processes more efficient and consumer devices more responsive to our needs. According to research firm Gartner, there will...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
What Blockchain Could Mean for Marketing

What Blockchain Could Mean for Marketing

In recent years, a major pain point for brands and advertisers has been the lack of transparency and accountability in being able to ascertain how their ad dollars have been spent. Digital...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Today’s smart cities versus connected and sustainable cities of the future

Today’s smart cities versus connected and sustainable cities of the future

As a concept, the ‘smart city’ has been on the agenda for the last decade as a way to make cities more productive, more sustainable and ‘smarter.’ According to a survey, annual technology investments in...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
VOA II

VOA II

Another interview with Voice of America (VOA), this time it is on the current Russian Army: https://www.golos-ameriki.ru/a/syria-russia-interview/4397036.html

I may get around to translating it at some point. There will be a video later.

What is machine learning? Everything you need to know

What is machine learning? Everything you need to know

Machine learning is enabling computers to tackle tasks that have, until now, only been carried out by people. From driving cars to translating speech, machine learning is driving an explosion in the...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Artificial intelligence in content marketing: Three key areas of focus

Artificial intelligence in content marketing: Three key areas of focus

One of the most interesting qualities about AI technology is that the benefits are virtually limitless. Nearly every industry stands to gain in some way from the power of machines, and this...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
How Blockchain and GDPR Could Work Together

How Blockchain and GDPR Could Work Together

In an era where data privacy is an increasing concern, blockchain technology is moving toward a more transparent and more verifiable security model. Blockchain is a decentralised database in which data can be written and read but never edited. As such, any data stored is immutable, verifiable and traceable. That puts blockchain in direct opposition to the General Data Protection Regulation (GDPR), which goes into effect on May 25, 2018. With regulations pushing for more consumer control of personal data, can blockchain technology work within this new framework? Absolutely, but not as it currently processes data.
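To make the immutability point concrete, here is a minimal, illustrative sketch of a hash-chained ledger (an illustration only, not any production blockchain): because each block's hash covers the previous block's hash, editing an old record breaks every later link, so tampering can be detected but history cannot be quietly rewritten.

import hashlib, json, time

def block_hash(body):
    # Hash the block's contents, including the previous block's hash.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"data": data, "prev": prev, "ts": time.time()}
    chain.append({**body, "hash": block_hash(body)})

def verify(chain):
    # Recompute every hash; any edited block or broken link fails the check.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append(chain, {"user": "alice", "action": "consent given"})
append(chain, {"user": "alice", "action": "consent withdrawn"})
print(verify(chain))                   # True
chain[0]["data"]["action"] = "edited"
print(verify(chain))                   # False: the edit is detectable

That "write once, never edit" property is exactly what collides with GDPR's erasure rights, as described below.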

GDPR Data Rules Explained

GDPR regulation is designed to give individuals greater control over their personally identifiable information. Specific to the EU, the new rules require organisations that store personal data to enforce consumer rights such as:


  • Erasing personal data when the need for its storage expires, when you withdraw consent for the storage, or when it is no longer legal to continue processing that data.
  • Correcting inaccurate data.
  • Restricting processing while data is under contention, awaiting an amendment, or after you withdraw permission.


Erase is the word that butts directly up against the blockchain. Under these new rules, all companies storing personally identifiable data must offer the required levels of control and must have an ...


Read More on Datafloq
AI-Driven Test System Detects Bacteria In Water

AI-Driven Test System Detects Bacteria In Water

This advertorial is sponsored by Intel®.

“Clean water and health care and school and food and tin roofs and cement floors, all of these things should constitute a set of basics that people must have as birthrights.”1

 

– Paul Farmer, American Doctor, Anthropologist, Co-Founder,
Partners In Health


Challenge

Obtaining clean water is a critical problem for much of the world’s population. Testing and confirming a clean water source typically requires expensive test equipment and manual analysis of the results. For regions in the world in which access to clean water is a continuing problem, simpler test methods could dramatically help prevent disease and save lives.

Solution

To apply artificial intelligence (AI) techniques to evaluating the purity of water sources, Peter Ma, an Intel® Software Innovator, developed an effective system for identifying bacteria using pattern recognition and machine learning. This offline analysis is accomplished with a digital microscope connected to a laptop computer running the Ubuntu* operating system and the Intel® Movidius™ Neural Compute Stick. After analysis, contamination sites are marked on a map in real time.
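As a rough illustration of what such a pipeline can look like (a hypothetical sketch, not Peter Ma's actual code: the blob-counting stand-in below is a naive placeholder for the trained model, and the camera index and site coordinates are assumptions), the flow is to grab a frame from the microscope, classify it, and record any contamination together with the site's location so it can be mapped.

import cv2  # assumes the digital microscope appears as an ordinary USB camera (OpenCV 4.x)

def classify_frame(frame):
    # Naive stand-in for the trained classifier: count small dark blobs.
    # The real system runs a trained model on an inference accelerator instead.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if 5 < cv2.contourArea(c) < 500]
    return ("possible contamination", len(blobs)) if len(blobs) > 20 else ("clean", len(blobs))

def scan(camera_index=0, site_coords=(0.0, 0.0), max_frames=100):
    cap = cv2.VideoCapture(camera_index)
    detections = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        label, score = classify_frame(frame)
        if label != "clean":
            # In the deployed system this is where the site would be
            # marked on a map in near real time.
            detections.append({"site": site_coords, "label": label, "blobs": score})
    cap.release()
    return detections

if __name__ == "__main__":
    print(scan())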

Background and History

Peter Ma, a prolific contributor in the Intel® AI Academy program, regularly participates in hackathons and has won awards in a number of them. “I think everything started as a ...


Read More on Datafloq
Dupuy’s Verities: Offensive Action

Dupuy’s Verities: Offensive Action

Sheridan’s final charge at Winchester by Thune de Thulstrup (ca. 1886) [Library of Congress]

The first of Trevor Dupuy’s Timeless Verities of Combat is:

Offensive action is essential to positive combat results.

As he explained in Understanding War (1987):

This is like saying, “A team can’t score in football unless it has the ball.” Although subsequent verities stress the strength, value, and importance of defense, this should not obscure the essentiality of offensive action to ultimate combat success. Even in instances where a defensive strategy might conceivably assure a favorable war outcome—as was the case of the British against Napoleon, and as the Confederacy attempted in the American Civil War—selective employment of offensive tactics and operations is required if the strategic defender is to have any chance of final victory. [pp. 1-2]

The offensive has long been a staple element of the principles of war. From the 1954 edition of the U.S. Army Field Manual FM 100-5, Field Service Regulations, Operations:

71. Offensive

Only offensive action achieves decisive results. Offensive action permits the commander to exploit the initiative and impose his will on the enemy. The defensive may be forced on the commander, but it should be deliberately adopted only as a temporary expedient while awaiting an opportunity for offensive action or for the purpose of economizing forces on a front where a decision is not sought. Even on the defensive the commander seeks every opportunity to seize the initiative and achieve decisive results by offensive action. [Original emphasis]

Interestingly enough, the offensive no longer retains its primary place in current Army doctrinal thought. It is now placed on a par with the defensive and stability operations. As the 2017 edition of the capstone FM 3-0 Operations now lays it out:

Unified land operations are simultaneous offensive, defensive, and stability or defense support of civil authorities’ tasks to seize, retain, and exploit the initiative to shape the operational environment, prevent conflict, consolidate gains, and win our Nation’s wars as part of unified action (ADRP 3-0)…

At the heart of the Army’s operational concept is decisive action. Decisive action is the continuous, simultaneous combinations of offensive, defensive, and stability or defense support of civil authorities tasks (ADRP 3-0). During large-scale combat operations, commanders describe the combinations of offensive, defensive, and stability tasks in the concept of operations. As a single, unifying idea, decisive action provides direction for an entire operation. [p. I-16; original emphasis]

It is perhaps too easy to read too much into this change in emphasis. On the very next page, FM 3-0 describes offensive “tasks” thusly:

Offensive tasks are conducted to defeat and destroy enemy forces and seize terrain, resources, and population centers. Offensive tasks impose the commander’s will on the enemy. The offense is the most direct and sure means of seizing and exploiting the initiative to gain physical and cognitive advantages over an enemy. In the offense, the decisive operation is a sudden, shattering action that capitalizes on speed, surprise, and shock effect to achieve the operation’s purpose. If that operation does not destroy or defeat the enemy, operations continue until enemy forces disintegrate or retreat so they no longer pose a threat. Executing offensive tasks compels an enemy to react, creating or revealing additional weaknesses that an attacking force can exploit. [p. I-17]

The change in emphasis reflects recent U.S. military experience, where decisive action has not yielded much in the way of decisive outcomes, as FM 3-0’s introduction acknowledges. Joint force offensives in 2001 and 2003 “achieved rapid initial military success but no enduring political outcome, resulting in protracted counterinsurgency campaigns.” The Army now anticipates a future operating environment where joint forces can expect to “work together and with unified action partners to successfully prosecute operations short of conflict, prevail in large-scale combat operations, and consolidate gains to win enduring strategic outcomes” that are not necessarily predicated on offensive action alone. We may have to wait for the next edition of FM 3-0 to see if the Army has drawn valid conclusions from the recent past or not.

How AI Can Help Alleviate Poverty

How AI Can Help Alleviate Poverty

With the many, many uses of AI, we’re seeing an increase in researchers, scientists, organisations and start-ups of all kinds looking at ways we can leverage this technology for good. 

Whilst ‘high-technology’ has become synonymous with high wages, and high investment, there are loads of projects out there applying this technology to poverty reduction. Harnessing the power of AI to help the most desperate in our society is a fantastic way to use it. 

So, how is this being done? 

Recognising the causes of poverty, from natural disasters and conflict to a lack of affordable food, education and life skills, is key to working out how to tackle the problem with technology. AI can help to identify the regions most in need of help and to target efforts there: improving farmland and agriculture, expanding education, and helping inhabitants learn new skills to support their communities. AI can also help with aid distribution in poorer and war-torn areas, or where natural disasters have caused devastation.

Identifying poverty, and the regions that are most in need is a key component in being able to tackle the problem of poverty. Satellite imagery is helping researchers do just this. An abundance of images taken by satellites on a ...


Read More on Datafloq
What is the Internet of Things (IoT)? Meaning & Definition

What is the Internet of Things (IoT)? Meaning & Definition

You’ve likely heard the phrase “Internet of Things” — or IoT — at some point, but you might also be scratching your head figuring out what it is or what it means. The Internet of...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
How Machine Learning Can Help Businesses Leave Bad Customer Service In The Dust

How Machine Learning Can Help Businesses Leave Bad Customer Service In The Dust

Each time a customer support representative interacts with a customer, they create data that can be used to improve their products and services. Although that type of data is important, there’s another kind of data that’s just starting to be analyzed: immediate feedback on how a customer really feels. On the surface, they may appear calm and complacent, but they could be more upset than they let on.

Recent studies have revealed that people prefer to troubleshoot their own problems and only call customer service as a last resort. By the time a DIY consumer calls customer support, they’re more than likely upset. Customer service representatives understand that people who contact them for help might be frustrated or even angry, and they’re trained to do everything they can to resolve the problem and de-escalate a customer’s anger. That starts with reading the customer.
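As a loose illustration of what "reading" a customer computationally can look like, here is a small sketch using the off-the-shelf VADER sentiment model from NLTK (an assumption for illustration; the article does not name any particular tool, and the sample messages are invented): even an outwardly polite message can carry a negative score.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

messages = [
    "Sure, that's fine, I guess I'll just try reinstalling it again.",
    "This is the third time I've called about the same problem.",
]
for text in messages:
    scores = analyzer.polarity_scores(text)
    # 'compound' runs from -1 (very negative) to +1 (very positive).
    print(f"{scores['compound']:+.2f}  {text}")

A score like this is only a hint; as the article notes, the harder problem is the customer who hides how they really feel.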

Reading people is a skill naturally developed through daily interactions with others. Still, many customer service reps receive extended training to help them read people better.

This extended training is necessary because some customers hide their true feelings and do a good job of appearing satisfied when they’re actually upset. This is a problem for customer service reps because they ...


Read More on Datafloq
Oracle’s New Cloud Services: A New Big Push for Automation

Oracle’s New Cloud Services: A New Big Push for Automation

With a recent announcement, Oracle, the global software and hardware powerhouse, continues its effort to equip all the solutions on its Cloud Platform with autonomous capabilities.

As part of a push that started early this year with the announcement of the first set of autonomous services, including Oracle Autonomous Data Warehouse Cloud Service, and of Oracle 18c as Oracle’s first fully autonomous database, the company is now extending these capabilities with the launch of another set of cloud services.

This time it is the turn of three new services: Oracle Autonomous Analytics Cloud, Oracle Autonomous Integration Cloud, and Oracle Autonomous Visual Builder Cloud. According to Oracle, these will be followed later in the year by further autonomous services focused on mobile, chatbots, data integration, blockchain, security and management, as well as more traditional database workloads including OLTP.

Built from the ground up with advanced artificial intelligence (AI) and machine learning algorithms, Oracle Cloud Platform’s new set of autonomous services aims, according to Oracle, to automate or eliminate tasks so organizations can lower costs, reduce risk and accelerate innovation while also gaining predictive insights.

In this regard, Amit Zavery, executive vice president of development for Oracle Cloud Platform, said:
“Embedding AI and machine learning in these cloud services will help organizations innovate in revolutionary new ways. These new cloud services are the latest in a series of steps from Oracle to incorporate industry-first autonomous capabilities that will enable customers to significantly reduce operational costs, increase productivity, and decrease risk.”

Oracle’s new and existing autonomous services within the Oracle Cloud Platform all follow the company’s guidelines and fundamental autonomous capabilities, which can be summarized as:

  • Self-Driving capabilities that reduce or eliminate human labor across provisioning, securing, monitoring, storing, copying and troubleshooting.
  • Self-Securing capabilities that protect services from external attacks and malicious internal users, including automatic application of security updates, protection against cyberattacks, and automatic encryption of all data.
  • Self-Repairing capabilities that provide automated protection against planned and unplanned downtime, including maintenance.
The new autonomous services announced by Oracle are intended to touch different functional areas of an organization’s enterprise software stack, from analytics to software development. A brief description of each new service follows:

Oracle Autonomous Analytics Cloud

This service assembles a combination of technologies including machine learning, adaptive intelligence, and service automation within an analytics platform aiming to change the way users  analyze, understand, and act on information.

Oracle’s Autonomous Analytics Cloud service also includes functionality that enables business users to uncover insights by asking questions on their mobile devices. Natural language processing converts the questions into back-end queries, and the system then delivers visualizations to the device.

The service’s machine learning functionality can autonomously gain intelligence, proactively suggesting insights into data the user might not even have asked about and revealing hidden patterns.

The service is also designed to provide predictive analytics for IoT use cases by applying domain-specific machine learning algorithms to large volumes of sensor data and historical patterns.
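As a generic illustration of that kind of capability (a sketch using scikit-learn's IsolationForest on invented sensor readings; it says nothing about how Oracle actually implements the service), anomalies in a stream of temperature and vibration readings can be flagged against historical patterns like this:

import numpy as np
from sklearn.ensemble import IsolationForest

# Invented historical sensor readings: temperature (F) and vibration (g) per sample.
rng = np.random.default_rng(0)
history = rng.normal(loc=[70.0, 0.2], scale=[2.0, 0.05], size=(1000, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# New readings: the last one drifts well outside the historical pattern.
new_readings = np.array([[69.5, 0.21], [71.2, 0.18], [88.0, 0.90]])
print(model.predict(new_readings))   # -1 flags an anomaly, 1 means normal; expect [ 1  1 -1]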

Oracle Autonomous Integration Cloud

This service aims to speed an organization’s complex application integration process via automation.

Business processes that span both Oracle and non-Oracle applications, whether on-premises or SaaS, can be embedded and integrated through a best-practice, guided, autonomous application integration process using machine learning and pre-built integration techniques.

The Autonomous Integration Cloud service delivers an adaptive case management system through APIs backed by AI and machine learning frameworks, and also enables Robotic Process Automation (RPA) to bring process automation to systems that expose no APIs.

Autonomous Visual Builder Cloud

Oracle’s Autonomous Visual Builder is designed to help companies accelerate their mobile and web application development cycles by providing business users and developers a framework to build applications with no coding.

Using the latest industry-standard technologies, the service automates code generation and allows deployment with a single click.

Aside from enabling rapid application development, the service also automates the delivery of mobile applications across multiple mobile platforms, including iOS and Android, and supports development on standard open-source technologies including Oracle JET and Swagger.

So what?

Well, with this set of significant and continuing moves toward automation, Oracle aims to gain a real edge in a software industry that has become increasingly competitive.

Oracle is making clear it will extend autonomous capabilities throughout its entire Cloud Platform, committing to provide self-driving, self-securing, and self-repairing capabilities across all its PaaS services. Yet, in my view, even with all the potential advantages these moves might bring, the company is taking no small risk, one perhaps comparable with IBM and Watson, which for some time seemed to have been launched before most users were ready for all the goodness the AI system could provide.

That being said, it is of course hard not to be excited about Oracle’s promise of a new generation of fully autonomous software, able to achieve many of the end objectives that expert systems and artificial intelligence visionaries have long dreamed of.

In the meantime, I can hardly wait to see how the market responds to these new releases, both among users and, of course, among Oracle’s competitors, if any.



Optimizing an artificial intelligence architecture: The race is on

Optimizing an artificial intelligence architecture: The race is on

AI applications often benefit from fundamentally different architectures than those used by traditional enterprise apps. And vendors are turning somersaults to provide these new components....

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Ask the Data Governance Coach: Do We Have to Call It Data Governance?

Ask the Data Governance Coach: Do We Have to Call It Data Governance?

The question that I’m going to address in this column is one I get asked fairly regularly. Sadly however, there is no easy yes or no answer. There are a number of reasons for this: Naturally I am...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
What Are the PPC Trends to Know in 2018?

What Are the PPC Trends to Know in 2018?

With technology advancing so much and so fast, 2018 is set to be a year of big changes when it comes to PPC and SEO. The development of the Internet of Things (IoT) and the integration of Artificial Intelligence (AI) or automation in search engines means that the internet will never be the same again. In order to stay ahead of the game with optimisation and ads, it’s important to look at the changes to come. Here are some of the most important and relevant trends that will affect your PPC campaign in 2018.

Smart Bids

Last year, automated bids became a key focus for Google, and the ‘smart bidding’ system was created to help optimise conversions through AI and machine learning. In addition to smart bidding in AdWords in 2017, other developments powered by Artificial Intelligence included smart display campaigns and new ad rotations. Going forward, it will be important for PPC specialists to understand how machine learning works and how it will affect digital marketing in the future. Whilst some online marketers may find it difficult at first to relinquish control of their ads to a robot, it is likely that in just a few years’ time, manual ...


Read More on Datafloq
Forget Bitcoin for Now: Here’s the Real Reason You Should Embrace Blockchain

Forget Bitcoin for Now: Here’s the Real Reason You Should Embrace Blockchain

Ignore Bitcoin for the moment: Distributed-ledger technology is most useful for assuring quality within supply chains. Here’s how to incorporate it. The basis for all business is supply and...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Blockchain in the cloud: Microsoft and Amazon look to democratize the distributed ledger for developers

Blockchain in the cloud: Microsoft and Amazon look to democratize the distributed ledger for developers

It was good to be back at Microsoft’s annual developer conference, Build, for the second time. The event started off with Microsoft CEO Satya Nadella claiming, “the world is becoming a computer” and...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Artificial Intelligence can bring in many positives for workforce management

Artificial Intelligence can bring in many positives for workforce management

While Artificial Intelligence and automation technologies are still nascent, the building blocks exist to suggest that Machine Learning could ease the burden of complex analysis, surface insights and...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Trust: Are we on the verge of losing it?

Trust: Are we on the verge of losing it?

Google just announced a remarkable achievement – an AI assistant that can make phone calls on your behalf to live humans. For example, it called a salon to make an appointment. You can listen to the calls here.

The engineers who designed the system taught the assistant to use human-like cadences such as um, mm, hmmm to make the computer voices sound remarkably human. Even knowing it was a machine on one end of the discussion, I had to listen carefully to figure out which party was the computer.

The AI marketer in me is both excited and worried.

The ability to mimic human interaction is an incredible milestone in machine intelligence, but the lack of transparency is something we should fear.

In a digital world that is easily manipulated, just because we have the ability to pretend to be human doesn’t mean we should. We are on the threshold of remarkably swift change, and the burden of not sacrificing trust to save time must weigh heavily on the companies bringing about this future.

I applaud Google for following the announcement with a clear statement that their assistant will disclose it is a machine, not human, on the line. I hope they, and other leaders, continue to stay on the side of transparency. And so must each of us in our use of these remarkable achievements.

Trust is the foundation for all relationships, including the one between a brand and its customers. We must ensure that trust is bi-directional, flowing both to and from the end consumer.

Leaping Forward: The What And Why Of Edge Computing

Leaping Forward: The What And Why Of Edge Computing

Edge computing is proving itself to be a durable conversation starter among a particular set. So, it makes sense to understand what it is and why it’s important, particularly since a number of...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Why Artificial Intelligence Will Shift Human Innovation Into Overdrive

Why Artificial Intelligence Will Shift Human Innovation Into Overdrive

A visionary serial entrepreneur, Chad is the CEO of Veritone, developer of the world’s first artificial...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
General McInerney

General McInerney

Lt. General Thomas McInerney has been in the news lately, mostly for saying things that are getting him kicked off of news shows:

https://en.wikipedia.org/wiki/Thomas_McInerney

It is my understanding that he was the person responsible for making sure that the DACM (Dupuy Air Campaign Model) was funded by AFSC. He then retired from the Air Force in 1994. We completed the demonstration phase of the DACM and, quite simply, there was no one left in the Air Force who was interested in funding it. So, work stopped. I never met General McInerney and was not involved in the marketing of the initial effort.

The Dupuy Institute Air Model Historical Data Study

The Dupuy Air Campaign Model (DACM)

But this is typical of the problems with doing business with the Pentagon, where an officer will take an interest in your work and generate funding for it, but by the time the first steps are completed, that officer has moved on to another assignment. This has happened to us with other projects. One of these was a joint research project on casualty rates done by TDI and a former Army surgeon. It was for J-4 of the Joint Staff. The project officer there was extremely interested and involved in the work, but then moved to another assignment. By the time we got the original effort completed, the division was headed by an Air Force colonel who appeared to be interested only in things that flew. Therefore, the project died (except that parts of it were used for Chapter 15: Casualties, pages 193-198, in War by Numbers).

Our experience in dealing with the U.S. defense establishment is that research efforts that take longer than a few months sometimes die because the people interested in them have moved on. This tends to favor simple, short-term analysis over properly funded long-term projects.

10 Promising AI Applications in Health Care

10 Promising AI Applications in Health Care

There’s a lot of excitement right now about how artificial intelligence (AI) is going to change health care. Indeed, many AI technologies are cropping up to help people streamline administrative and...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Is Data Now More Differentiating Than Analytics?

Is Data Now More Differentiating Than Analytics?

As a person who has been involved with analytics for a long time, I have historically considered analytics to be a huge differentiator while data was more of a table-stakes enabler. Several trends have come together to make me realize that the equation has been reversed in many cases today.

Data as Enabler, Analytics as Differentiator

Before the era of big data, most enterprises captured similar data in similar ways. Every retailer captured the same transactional history, every bank had the same account activity history, and every telecommunications company had the same call detail records. The data itself was fairly standard and didn’t provide a competitive advantage on its own. By definition, if every competitor has the same data, then the data doesn’t differentiate them from one another.

Given the similarity of data assets, differentiation came from how different organizations analyzed the data that they possessed. In the retail space, for example, Tesco had a well-documented period where it got far ahead of the competition in the realm of customer analytics and reaped huge rewards as a result. Part of what drove the ability to differentiate with analytics was the fact that access to the algorithms needed for analytics was expensive and required specialized tools ...


Read More on Datafloq
How the IoT is keeping traffic moving and the streetlights shining

How the IoT is keeping traffic moving and the streetlights shining

The next time your car bottoms out on a nasty pothole, grit your teeth and try to spare a thought for the people trying to end that problem and smooth out your journey. Yotta helps local authorities...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
How Big Data Can Help Combat DDOS Attacks

How Big Data Can Help Combat DDOS Attacks

Ever since the web became an intricate and fundamental part of our daily lives, DDOS attacks have constantly threatened to disrupt businesses and enrage internet users. Time and time again, talented entrepreneurs and massive corporations alike have struggled to combat these attacks, regularly finding themselves one step behind the nefarious actors who bring down their networks.

These days, however, tech experts are beginning to harness the power of big data to help combat DDOS attacks, and are making serious inroads when it comes to developing a safer, more secure internet for everyone. Here’s how big data is reshaping data security as we know it when it comes to DDOS attacks.

The net seems less secure than ever before

Let’s face it; today’s internet is feeling less and less secure by the day, and there’s often little good news when it comes to such things as data security and privacy. Global DDOS attacks are on the rise, and it stands to reason that as the internet grows to become more important economically and politically, such attacks will only grow to become more and more commonplace. Whether they’re trying to extort massive companies, belittle and frustrate smaller websites, or merely anger individuals with ...


Read More on Datafloq
What is an API? Application programming interfaces explained

What is an API? Application programming interfaces explained

API, for application programming interface, is one of those acronyms that is used everywhere from command-line tools to enterprise Java code to Ruby on Rails web apps. Unless you write every single...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Stanley Cup Play-off Odds

Stanley Cup Play-off Odds

Last night, as I was watching the Capitals trash Tampa Bay in the third round of the Stanley Cup play-offs, the announcer mentioned that only twice in the last 41 years (or cases) has a team won the third round of the play-offs after losing the first two games. Tampa Bay has lost the first two games. So, historically, in only 4.878% (say 5%) of cases has a team come back to win after losing the first two games of a play-off round. Note that the Capitals did this against Columbus in the first round.

Now, there are seven games in a play-off round. So with five games left, Tampa Bay has to win at least 4 of the 5 remaining games. Assuming the teams are equal (a 50% chance of either team winning any given game), the probability of winning at least 4 of the next 5 games works out to 0.1875, or about 19%. So if the teams are equal, Tampa Bay should statistically have about a 19% chance of coming back and winning the round. Historically, it has only happened 5% of the time.

Suppose Tampa Bay is the better team. Let’s say their odds of winning each game are 60%; then their odds of winning at least 4 of the next 5 games rise to about 34%. They did win 2 out of 3 games against the Capitals in the regular season, so maybe their odds of winning any single game are really 67%. That gives about a 46% chance of coming back. Let us say they are really good and motivated and have a 75% chance of winning each game; then the odds are about 63%. On the other hand, to match the historical 5% comeback rate, the team that is behind would have to have only around a 34% chance of winning each game.
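For anyone who wants to check the arithmetic, here is a minimal sketch that computes the same figures, under the post's own simplifying assumptions that the games are independent and that the per-game win probability p is constant:

from math import comb

def comeback_probability(p, wins_needed=4, games_left=5):
    # Probability of winning at least `wins_needed` of `games_left`
    # independent games, each won with probability p.
    return sum(comb(games_left, k) * p**k * (1 - p)**(games_left - k)
               for k in range(wins_needed, games_left + 1))

for p in (0.50, 0.60, 2/3, 0.75, 0.343):
    print(f"p = {p:.3f}: {comeback_probability(p):.1%}")
# p = 0.500: 18.8%
# p = 0.600: 33.7%
# p = 0.667: 46.1%
# p = 0.750: 63.3%
# p = 0.343: 5.0%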

Anyhow…..not sure what it all means.

CMOs Can Construct A More Effective Strategy With Blockchain

CMOs Can Construct A More Effective Strategy With Blockchain

As the central technology for cryptocurrencies like Bitcoin, blockchain has been touted as a disruptive solution for the financial environment. However, as blockchain has developed, other non-financial...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Man Vs. Machine: Who’s Winning at PPC? | Simplilearn

Man Vs. Machine: Who’s Winning at PPC? | Simplilearn

Man Vs. Machine: Who’s Winning at PPC? | Simplilearn As machine learning grows in pay-per-click (PPC), so do feelings of anxiety about future job stability. According to the Recruiter Nation Report by Jobvite, 69 percent of job seekers admit to being at least somewhat worried about losing their career to job automation. Man vs. machine is not just a common science fiction trope; it is evi...Read More.
Blockchain Is Gaining Ground in Video-Streaming. Here’s Why.

Blockchain Is Gaining Ground in Video-Streaming. Here’s Why.

Blockchain is bringing about Internet 3.0. Are you going to be involved? The internet revolutionized the way the world operates, stays connected and communicates — that was Internet 1.0....

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
ITIL® – CSF & KPIs | Simplilearn

ITIL® – CSF & KPIs | Simplilearn

ITIL® – CSF & KPIs | Simplilearn When it comes to the actions that are critical to business success, organizations want to know which effects are the most important, along with how to measure these effects and successes. If you’ve ever heard references to CSFs or KPIs, these are a kind of shorthand to how those two questions are answered.  Sometimes the terms Critica...Read More.
Building a career in Mobile App Development | Simplilearn

Building a career in Mobile App Development | Simplilearn

Building a career in Mobile App Development | Simplilearn When it comes to the IT industry, there are plenty of career paths to take. But one field that has seen a tremendous rise in popularity, of late, is that of mobile App Development.  Mobile devices have become ubiquitous—two-thirds of the world’s population is connected with a mobile device. That’s more than 5 billion uniq...Read More.
Cloud Computing Architecture | Simplilearn

Cloud Computing Architecture | Simplilearn

Cloud Computing Architecture | Simplilearn Cloud computing has been trending in today’s technology-driven world for years now, and with good reason. Cloud computing offers many advantages, including flexibility, storage, sharing and easy accessibility, and is being used by companies of all sizes. Even at home, we use cloud technologies for various daily activities. From Google D...Read More.
3 Ways AI is Changing Market Research in 2018

3 Ways AI is Changing Market Research in 2018

No doubt you have been hearing all about artificial intelligence – “It’s going to take over the world” “We need to start using it in our business!” “I think Alexa knows too much…” It’s clear...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Neurala’s new neural network reduces AI training times from hours to seconds

Neurala’s new neural network reduces AI training times from hours to seconds

Artificial intelligence startup Neurala Inc. is claiming a major breakthrough with its deep learning platform, saying it has reduced the time it takes to train a deep neural network from 15 hours to...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
AI Redefines Performance Requirements At The Edge

AI Redefines Performance Requirements At The Edge

In a broad sense, the history of computing is the constant search for the ideal system architecture. Over the last few decades system architects have continually shifted back and forth from...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
The Life-Cycle of Live Data

The Life-Cycle of Live Data

With an eye on integration of embedded devices with the Cloud, Raima CTO Wayne Warren differentiates between live, actionable information and data with ongoing value, and argues that to realise the...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
What the IoT Needs Is a Data Layer

What the IoT Needs Is a Data Layer

When people talk about the IoT market, it is almost always in terms of the number of devices that are, or will be, connected to the internet. 20 Billion devices by 2020, of devices by 2025....

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Was Kursk the Largest Tank Battle in History?

Was Kursk the Largest Tank Battle in History?

[This post was originally published on 3 April 2017.]

Displayed across the top of my book is the phrase “Largest Tank Battle in History.” Apparently some people dispute that.

What they put forth as the largest tank battle in history is the Battle of Brody, 23-30 June 1941. This battle occurred right at the start of the German invasion of the Soviet Union and consisted of two German corps attacking five Soviet corps in what is now Ukraine. This rather confused affair pitted between 750 and 1,000 German tanks against 3,500 to 5,000 Soviet tanks. Only 3,000 Soviet tanks made it to the battlefield, according to Glantz (see video at 16:00). The Germans won, with losses of around 100 to 200 tanks. Sources vary on this, and I have not taken the time to sort it out (so many battles, so little time). So, total tanks involved range from 3,750 up to 6,000, with the lower figure appearing to be more correct.

Now, is this really a larger tank battle than the Battle of Kursk? My book covers only the southern part of the German attack that started on 4 July and ended 17 July. This offensive involved five German corps (including three Panzer corps consisting of nine panzer and panzer grenadier divisions) and they faced seven Soviet Armies (including two tank armies and a total of ten tank corps).

My tank count for the southern attack starting 4 July 1943 was 1,707 German tanks (or 1,709 if you count the two Panthers that caught fire on the move up there). As of 4 July, the Soviets had 2,775 tanks in all the formations that would eventually become involved, with 1,664 tanks in the Voronezh Front at the start of the battle. Our count of total committed tanks is slightly higher: 1,749 German and 2,978 Soviet. This includes tanks that were added during the two weeks of battle and mysterious adjustments to strength figures that we cannot otherwise explain. That is 4,482 or 4,727 tanks. So, depending on which Battle of Brody figures are used, and on whether all the Soviet tanks there were indeed ready for action and committed to the battle, the Battle of Brody might be larger than the attack in the southern part of the Kursk salient. On the other hand, it probably is not.

But, this was just one part of the Battle of Kursk. To the north was the German attack from the Orel salient that was about two-thirds the size of the attack in the south. It consisted of the Ninth Army with five corps and six German panzer divisions. This offensive fizzled at the Battle of Ponyiri on 12 July.

The third part of the Battle of Kursk began on 12 July, when the Western and Bryansk Fronts launched an offensive on the north side of the Orel salient. A Soviet Front is equivalent to an army group, and this attack initially consisted of five armies, including four Soviet tank corps. This was a major attack that added additional forces as it developed and went on until 23 August.

The final part of the Battle of Kursk was the counter-offensive in the south by Voronezh, Southwestern and Steppe Fronts that started on 3 August, took Kharkov and continued until 23 August. The Soviet forces involved here were larger than the forces involved in the original defensive effort, with the Voronezh Front now consisting of eight armies, the Steppe Front consisting of three armies, and there being one army contributed by the Southwestern Front to this attack.

The losses in these battles were certainly more significant for the Germans than at the Battle of Brody. For example, in the southern offensive, by our count the Germans lost 1,536 tanks destroyed, damaged or broken down. The Soviets lost 2,471 tanks destroyed, damaged or broken down. This compares to 100-200 German tanks lost at Brody; Soviet tank losses there are even more nebulous, but the figure of 2,648 has been thrown out there.

So, total tanks involved in the German offensive in the south were 4,482 or 4,727 and this was just one of four parts of the Battle of Kursk. Losses were higher than for Brody (and much higher for the Germans). Obviously, the Battle of Kursk was a larger tank battle than the Battle of Brody.

What some people are comparing the Battle of Brody to is the Battle of Prokhorovka. This was a one- to five-day event during the German offensive in the south that included the German SS Panzer Corps and, in some people’s reckoning, all of the III Panzer Corps and the 11th Panzer Division from the XLVIII Panzer Corps. So, the Battle of Brody may well be a larger tank battle than the Battle of Prokhorovka, but it was not a larger tank battle than the Battle of Kursk. I guess it all depends on how you define the battles.

Some links on Battle of Brody:

https://en.wikipedia.org/wiki/Battle_of_Brody_(1941)

http://warisboring.com/the-biggest-tank-battle-in-history-wasnt-at-kursk/

https://www.youtube.com/watch?v=5qkmO7tm8AU

Using Big Data Analytics To Improve Production

Using Big Data Analytics To Improve Production

Manufacturing remains a critically important part of the world’s economic engine, but the roles it plays in advanced and developing economies have shifted dramatically. In developing countries,...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Real-time analytics is becoming increasingly important for us

Real-time analytics is becoming increasingly important for us

I caught up with Dr. Volker Stümpflen, Head of Data Strategy and Operations at Mediengruppe RTL, to talk about the use of real-time analytics. He is primarily responsible for the development and...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
4 Ways How Blockchain Will Change the Retail Industry

4 Ways How Blockchain Will Change the Retail Industry

The retail industry has become increasingly complex in the past decades. Products are made in one part of the world, assembled in another and sold in a third part of the world, whether it is food,...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Putting customer data at the heart of your digital business

Putting customer data at the heart of your digital business

Ensuring that this is collected and used effectively benefits the bottom line and builds closer, more profitable relationships with customers. Respondents in a recent Royal Mail Data Services study...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Graphing the sensitive boundary between personally identifiable information and publicly inferable insights

Graphing the sensitive boundary between personally identifiable information and publicly inferable insights

Sleuthing is the art of making intelligent inferences from sparse, disconnected and seemingly random data. Detectives, like any skilled analyst, are adept at using these inferences to solve...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
5 Ways Digital Transformation Is Changing the People Strategy | Simplilearn

5 Ways Digital Transformation Is Changing the People Strategy | Simplilearn

5 Ways Digital Transformation Is Changing the People Strategy | Simplilearn As organizations prepare to move from a cost-arbitrage model to one of digital transformation, it is imperative that they rethink their hiring as well as learning and development strategies to build a digital workforce that can ensure a seamless transition. To understand the strategies organizations are using to build a digital workforce and the me...Read More.
Three Practical Uses for Graph Databases in MDM

Three Practical Uses for Graph Databases in MDM

Graph databases are not ready to replace relational database MDM platforms, but the use cases are growing in number. Here are three practical examples of when to supplement your MDM practice with a...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
The Basic Principles of Project Management | Simplilearn

The Basic Principles of Project Management | Simplilearn

The Basic Principles of Project Management | Simplilearn Project management is a composite activity with multiple dimensions. Depending on the type and class of project, this management activity can be very complex. In a nutshell, project management is the discipline of planning, organizing, securing, managing, leading, and controlling resources to achieve specific goals. What is Project Management Abou...Read More.
ITIL: Key Concepts and Summary | Simplilearn

ITIL: Key Concepts and Summary | Simplilearn

ITIL: Key Concepts and Summary | Simplilearn The ITIL (Information Technology Infrastructure Library) has become the de facto standard in IT Service Management. ITIL helps organizations across industries offer their services in a quality-driven and economical way. The framework’s most recent version, published in 2011, is only a progressive update that further refines an existing body o...Read More.
20 Frequently Asked Node.js Interview Questions and Answers | Simplilearn

20 Frequently Asked Node.js Interview Questions and Answers | Simplilearn

20 Frequently Asked Node.js Interview Questions and Answers | Simplilearn Node.js is a server-side technology that you have to be proficient in to become the ideal MEAN stack developer. It is one of the most rewarding web development tools to learn, not only because of the lucrative salary—averaging $100,000/year—but also the fast and continued growth of job postings requiring knowledge of the technology...Read More.
Cracking the case of unstructured data

Cracking the case of unstructured data

Today’s police officers have access to huge volumes of data from an enormous variety of sources. These include video from surveillance and body-mounted cameras, automatic number plate recognition...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Top 20 Scrum Master & Agile Scrum Interview Questions | Simplilearn

Top 20 Scrum Master & Agile Scrum Interview Questions | Simplilearn

Top 20 Scrum Master & Agile Scrum Interview Questions | Simplilearn Congratulations! You got the interview! Whether this is your first Scrum role or you are a seasoned pro, it’s always helpful to know how to prepare for an Agile interview. Some of the top companies that use Agile and Scrum include Apple, Google, Valve, Philips, and more—so you’ll definitely want to be ready to answer the kinds of ...Read More.
12 Great CISSP Books and Study Guides for the CISSP Certification | Simplilearn

12 Great CISSP Books and Study Guides for the CISSP Certification | Simplilearn

12 Great CISSP Books and Study Guides for the CISSP Certification | Simplilearn So you’ve decided to pursue a career in IT Security and are looking to get CISSP certified. Maybe you’re excited at the prospect of designing winning IT security architectures. Maybe you’re just looking to up-skill and improve your resume? Or maybe you’re drawn by the high-paying jobs offered to certified CISSP ...Read More.
Afghan Migration

Afghan Migration

Fascinating article from a British-based analyst, Dermot Rooney: http://www.wapentakes.com/wp-content/uploads/2016/11/2016-Newsbrief_May_Rooney.pdf

A few highlights:

  1. “…war alone cannot account for the vast number of Afghan migrants or the great distance they are travelling.”
    1. “Globally, up until 1960, the ratio of refugees to fatalities in conflict zones was below 5:1.”
    2. “…in 2015 there was an almost unprecedented 50 asylum applicants for every civilian killed.”
    3. “Whereas in 1979 over 90% of the Afghan refugees travelled less than 500 km and crossed one border, now more than 90% travel over 5,000 km to seek asylum…”
  2. “There are now 1.3 million internally displaced Afghans, with the total increasing by 400,000 a year.”
  3. “The pull of economic opportunity plays a large part in the decision to migrate.”
  4. “In 2015, the population of Afghanistan was 32 million.”
    1. “…it is nonetheless obliged to import enough wheat to feed 10 million people…”
  5. “…Afghanistan’s population will pass 40 million in ten years.”
    1. “the natural growth rate of 2.3% a year added 700,000 to the Afghan population in 2015.”
    2. “Unless there is a dramatic improvement in the economy and security in that time, 16 million will depend on food aid…”

 

How Blended Learning Boosts Digital Marketing Skills for WPP Agencies | Simplilearn webinar starts 13-06-2018 11:00

How Blended Learning Boosts Digital Marketing Skills for WPP Agencies | Simplilearn webinar starts 13-06-2018 11:00

Digital transformation has created a crisis for ad agencies, as technological advances have made it easier for clients to cut out the middleman. But this also presents a critical opportunity for agencies to bolster their own expertise in digital marketing, especially since there is a major talent shortage in these same technology skills. Join expe...Read More.
Man vs. Machine: How to Future-proof Your PPC Job | Simplilearn webinar starts 10-05-2018 21:30

Man vs. Machine: How to Future-proof Your PPC Job | Simplilearn webinar starts 10-05-2018 21:30

In today’s digital economy, new technologies like AI and Machine Learning pose a threat not just to offline jobs but to digital ones too. As PPC tools become more and more capable of automating recommendations, how safe is your PPC job? What skills will help you future-proof your PPC job in the era of AI and machine learning? Join Bra...Read More.
When Has a Data Analyst Succeeded?

When Has a Data Analyst Succeeded?

I read a thought-provoking blog post by Roger Peng of Simply Stats (and professor in the Department of Biostatistics at the Johns Hopkins Bloomberg School of Public Health) entitled “What is a...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Why the Future Will Never Be Slow Again

Why the Future Will Never Be Slow Again

The fast-changing, uncertain and ambiguous environments that organisations operate in today require them to re-think all their internal business processes and customer touchpoints. In addition, thanks to the availability of emerging information technologies such as big data analytics, blockchain and artificial intelligence, it has become easier for startups to compete with existing organisations. Often these startups are more flexible and agile than Fortune 1000 companies, and they can become a significant threat if not paid attention to. Therefore, focusing purely on day-to-day operations is simply not enough, and organisations have to become innovative and adaptive to change if they wish to remain competitive.

When big data, blockchain and AI are combined, they will change collaboration among individuals, organisations and things, moving from pure human-to-human collaboration to increasingly human-to-machine and machine-to-machine collaboration. All these technologies enable organisations to design smarter businesses, and incorporating them within your organisation has become easier than ever before. Last week, I was invited to attend IBM Think Australia, an event that discussed how emerging technologies are changing the way we live, work and interact with one another every day. The event was all about exploring the relationship between humanity and technology and showing ...


Read More on Datafloq
How artificial intelligence is reshaping jobs in banking

How artificial intelligence is reshaping jobs in banking

The idea of artificial intelligence tends to strike fear in the hearts of workers who suspect they’ll be replaced by robots. The reality is more nuanced. There is no question some jobs will be lost. But others...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Chess can teach us how to implement AI in healthcare

Chess can teach us how to implement AI in healthcare

It’s been more than 20 years since Garry Kasparov lost his famous chess match against IBM’s Deep Blue, which heralded much anxious commentary about how humanity was soon to be subjugated by the...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Data Quality Evolution with Big Data and Machine Learning

Data Quality Evolution with Big Data and Machine Learning

When big data is combined with machine learning, enterprises must be alert to new data quality issues. IT departments have been struggling with data quality issues for decades, and satisfactory...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Solidifying security analytics with artificial intelligence knowledge graphs

Solidifying security analytics with artificial intelligence knowledge graphs

Each successive instance of data compromises (and their escalating repercussions) is a veritable case study for the necessity of security analytics. With increasing regulations and new security...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Data Scientist versus Data Engineer: How are they different?

Data Scientist versus Data Engineer: How are they different?

Data Science is an approach to merge data analysis, business analytics, deep learning with other related methods. With the advent of digital technology, data has gained momentum in a variety of work...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Tackling Fake News, and Deep Fakes With Artificial Intelligence

Tackling Fake News, and Deep Fakes With Artificial Intelligence

Fake news. Two words you’ll have heard a lot over the past year or so. It’s been such a popular term that it even made Word of the Year for 2017. But there is much more to it than being a large part of Trump’s vocabulary.

This is a ‘uge problem that needs to be addressed and tackled.

So, what is it then? Isn’t ALL news in some way a bit fake - blown out of proportion, exaggerated, politically charged? Well, yes, but fake news is on a whole other level; it goes beyond the sensationalism of the tabloids we all loathe. It’s a type of propaganda that has garnered much attention in recent years, and this epidemic of ‘yellow journalism’ is extremely damaging and a massive cause for concern. Given the popularity of social media and online sources, completely bogus headlines and sensationalist content are only fueling political tensions and social divides the world over.

Interestingly, according to our salary report, this was also one of the most popular areas in which data scientists want to work this year: data science professionals want to work on detecting fake news and finding ways in which machine learning can help ...


Read More on Datafloq
AI and Machine Learning in Software Development: How it Can Benefit Developers

AI and Machine Learning in Software Development: How it Can Benefit Developers

Artificial Intelligence and Machine Learning have disrupted every industry, including retail, manufacturing, transportation and even customer support. Software development is no exception: machine learning and AI can enhance the traditional software development cycle.

A research report from September 2017 by MarketandMarkets predicts that the Machine Learning market will grow to $8.81 billion by 2022.

To give you a brief overview of Machine Learning: it is an application of Artificial Intelligence that gives systems the ability to learn and improve automatically from experience without being explicitly programmed. The core purpose of machine learning is to allow computers to learn automatically without human intervention.
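As a minimal sketch of that idea (hypothetical data, and a scikit-learn model chosen purely for illustration rather than anything referenced in the report above), the rule below is learned from labelled examples instead of being hand-coded by a developer:

```python
# Minimal sketch: a model "learns" a rule from labelled examples
# instead of a developer hard-coding it. The data is hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each example: [lines_changed, files_touched]; label: 1 = build broke, 0 = build passed
X = [[5, 1], [300, 12], [20, 2], [450, 20], [15, 1], [600, 25]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)   # learn the pattern from the examples
print(model.predict([[350, 15]]))            # predict the outcome for an unseen change
```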

As a software developer or computer programmer, you have to specify every detail to let the system know what to do, and then customize the features of your technology accordingly. Developing an app or software product that integrates machine learning can make a significant difference and help you build a unique name as a software development company.

In this article, we will look at how machine learning in software development can help developers write flawless code, deploy upgrades, and identify bugs.

Let’s get started.

Data Security becomes Easier

The amount of data being transferred from different networks makes it ...


Read More on Datafloq
Why All Big Data Encryptions Are Not Created Equally

Why All Big Data Encryptions Are Not Created Equally

Information security and data privacy are more important now than ever before, but countless companies and individuals alike are dropping the ball when it comes to encrypting their most important information. For years, encryption measures haven’t been sufficient to stop nefarious hackers from breaching company firewalls or making off with individuals’ personal data, and it seems like IT professionals have few solutions to these problems.

Luckily for consumers in the 21st century, big data-backed encryption practices are becoming more common, and stand to revolutionise how we understand information security practices. Not all big data encryption enterprises are created equally, however, and companies that back the wrong horse could pay the price. Here’s everything you need to know about sound big data encryption.
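As a small, vendor-neutral illustration of the baseline involved (a sketch using Python's cryptography package and its Fernet recipe, not any particular "big data encryption" product), encrypting a record before it ever leaves your systems looks like this:

```python
# Minimal sketch: encrypt a record at rest with a symmetric key (Fernet, AES-based).
# The cryptography package and the sample record are used purely for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice the key lives in a KMS/HSM, not in code
cipher = Fernet(key)

record = b'{"customer_id": 42, "email": "jane@example.com"}'
token = cipher.encrypt(record)       # ciphertext safe to store or ship to the cloud
print(cipher.decrypt(token))         # only holders of the key can recover the plaintext
```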

Investment is soaring upwards

It’s indisputable that big data-based encryption practices are no longer mere fads, but are indeed a driving force of today’s global IT market. The big data security market alone is expected to reach an astonishing $26.85 billion as soon as 2022, and it’s a matter of fact that individuals and companies of all shapes and sizes will continue to employ big data analytics operations when it comes to keeping their information private. But is everyone getting a ...


Read More on Datafloq
Oracle Big Data Cloud, Event Hub Cloud and Analytics Cloud Data Lake Edition pt.1

Oracle Big Data Cloud, Event Hub Cloud and Analytics Cloud Data Lake Edition pt.1

Oracle Big Data Cloud, Event Hub Cloud and Analytics Cloud Data Lake Edition pt.1 : Creating the Real-Time Data Pipeline

Some time ago I posted a blog on what analytics and big data development looked like on Google Cloud Platform using Google BigQuery as my data store and Looker as the BI tool, with data sourced from social media, wearable and IoT data sources routed through a Fluentd server running on Google Compute Engine. Overall, the project architecture looked like the diagram below…

… and I’ve got more-or-less the same setup running right now, with an additional GCE VM running Confluent Open Source to feed a subset of the event streams into a Druid cluster that I’m using to test out Looker, Superset and Imply for sub-second ad-hoc query analysis use-cases. More on that soon.

If you’re a regular reader of this blog you might recall a week or so ago I posted a blog on the new releases of Oracle Business Intelligence Enterprise Edition (OBIEE), Oracle Analytics Cloud (OAC) and Oracle Data Visualization Desktop (DVD) and mentioned a new packaging option for their cloud analytics product, Oracle Analytics Cloud Data Lake Edition. I’ve got a particular interest in what this product might be as I used to use the product it replaces, Oracle Big Data Discovery (BDD), fairly extensively in Oracle big data analytics and data lake projects a few years ago.

And Oracle Big Data Discovery was — technically at least — a great product. It combined the search and analytics features of Endeca Information Discovery with the scale and data transformation abilities enabled by Apache Spark and Hadoop, but suffered perhaps by being a bit ahead of the market and by not having any obvious integration with the rest of Oracle’s analytics and data management tools. By contrast Oracle Analytics Cloud Data Lake Edition is one of three packaging options for Oracle Analytics Cloud and includes all of the functionality of OAC Standard Edition (Oracle Data Visualization together with basic data preparation tools) as well as itself being a subset of the wider set of analysis, dashboarding and enterprise data modeling features in OAC Enterprise Edition.

An equivalent product architecture for ingesting, transforming and analyzing my IoT, wearables and social media data in Oracle Cloud would look something like the diagram below, with the following Oracle Cloud Platform-as-a-Service (PaaS) products used for ingest, storage and analysis:

  • Oracle Event Hub Cloud Service: Apache Kafka running either customer or Oracle-managed with full access to Kafka’s REST Proxy and Kafka Connect
  • Oracle Big Data Cloud: Oracle’s new elastically-scalable Hadoop platform running Apache Spark, Ambari and other Hortonworks Data Platform components
  • Oracle Analytics Cloud Data Lake Edition: Oracle Data Visualization combined with more extensive lightweight ETL (“data flow”) components, text analytics and machine learning model training and build capabilities

In this example I’m using Apache Hive and Parquet storage as my column-orientated data platform but of course I’ve now also got Oracle Autonomous Data Warehouse Cloud as an option; I’ll stick with Hive on Oracle Big Data Cloud for now though as this gives me the option to use Apache Spark to transform and wrangle my data and for building machine learning models using SparkML and, via pySpark, Python Pandas. In what’s the first of two posts in this short series I’ll be looking at how the data pipeline is set up, and then in the second post I’ll look at Oracle Analytics Cloud Data Lake Edition in detail focusing on the data transformation, data engineering and data science features it adds beyond OAC Standard Edition.
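As a rough sketch of the kind of workflow that choice enables (the database, table and column names below are hypothetical, not taken from the actual project), you can read the Parquet-backed Hive table with pySpark, fit a simple SparkML model, and pull a small sample down into Pandas for local wrangling:

```python
# Minimal pySpark sketch (hypothetical table/column names): Hive + Parquet as the
# data platform, SparkML for model training, Pandas for local inspection.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

events = spark.sql("SELECT steps, heart_rate, calories FROM default.wearables_events")

assembler = VectorAssembler(inputCols=["steps", "heart_rate"], outputCol="features")
model = LinearRegression(featuresCol="features", labelCol="calories") \
            .fit(assembler.transform(events))

sample_pdf = events.limit(1000).toPandas()   # wrangle a small sample locally with Pandas
print(model.coefficients, sample_pdf.describe())
```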

The development environment I put together for this scenario used the following Oracle Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) components:

  • Oracle Compute Classic and Storage Classic Services
  • Oracle Database Cloud Service, with the 11g database option
  • Oracle Event Hub Cloud Service Dedicated, with Kafka Connect and REST Proxy nodes
  • Oracle Big Data Cloud, single node with Hive, Spark 2.1, Tez, HDFS, Zookeeper, Zeppelin, Pig and Ambari
  • Oracle Analytics Cloud Data Lake Edition with Self Service Data Preparation, Visualisation and Smart Discovery (aka Oracle DV)

The screenshot below from the Compute Classic Service Console shows the various PaaS VMs running on the compute IaaS layer, with other tabs in this console showing network, storage and other infrastructure service usage.

The order in which you install the services is important if you want to associate the various products together correctly, and if like me you’re using a trial account you’ll need to plan carefully to ensure you keep within the various quota limits that Oracle Cloud imposes on trial identity domains; those limits stopped me, for example, from allocating the usual two OCPUs to the main Oracle Event Hub Kafka server if I wanted to run the rest of the stack at the same time.

Associating two services together, for example Oracle Event Hub and Oracle Big Data Cloud, connects them automatically within the identity domain network and makes using them together much simpler. But unless Event Hub Cloud is already provisioned and available when you come to install Big Data Cloud, you can’t go back and associate them afterwards. More annoyingly, if you decide you want to use Event Hub Cloud and associate it with Big Data Cloud after you’ve already provisioned Oracle Analytics Cloud Data Lake Edition and associated Big Data Cloud with that, you have to unwind the whole provisioning process and start again with Event Hub Cloud to connect them all together properly. And forget deleting that Database Cloud Service you associated with Oracle Big Data Cloud and then forgot about, as you can’t delete services that other services are associated with.

Provisioning each of the services involves giving the service instance a name, assigning storage buckets and OCPU counts to the various cluster nodes you request, and at key points selecting previously provisioned and now running services for association with the one you’re now provisioning. The screenshots below show the three-stage provisioning service for Event Hub Cloud Service — Dedicated:

Provisioning a new Event Hub Cloud Service — Dedicated cluster

and the order in which I provisioned my data lake services, and the important options I chose to make it all work together and within quota, were as follows:

  1. First ensure you have access to Oracle Cloud, and then the various IaaS services: Oracle Cloud Infrastructure Compute Classic, Oracle Cloud Infrastructure Object Storage Classic and Oracle Identity Cloud Service
  2. Provision Oracle Database Cloud Service to store the various Fusion Middleware RCU schemas; in my case I chose Oracle Database 11gR2 as the database type as it avoids the complexity around CDBs and PDBs you get with the 12c database release
  3. Then provision Oracle Event Hub Cloud — Dedicated with one OCPU for the main Kafka VM, one for the REST Proxy VM and one more for the Kafka Connect VM. For a real deployment you’d want at least two OCPUs for the Kafka service VM, but using just the one kept me within my overall OCPU quota limit when installing the rest of the stack.
  4. The next step is to provision Big Data Cloud with a single node with the minimum 2 OCPUs, the Full deployment profile and version 2.1 of Apache Spark, as that’s the version OAC Data Lake Edition insists on in the next step. When prompted, choose the option to associate Event Hub Cloud and Database Cloud with Big Data Cloud as you won’t get the option to do this again after the initial service provision; once provisioned, open up the TCP port for Ambari (8080) to the public internet so that OAC in the next step can associate with it — provisioning for OAC failed for me every time until I looked through the provisioning logs and spotted this as the issue.
  5. Finally, provision Oracle Analytics Cloud and choose Data Lake Edition as the package option, again in my case assigning a single OCPU and selecting Data Lake Edition as the software version

At that point if you then bring up the Cloud Services dashboard and review the services together for the first time, it’ll look something like this:

Oracle Data Lake stack within Cloud Services Dashboard

Now it’s time to ingest some data and land it into Oracle Big Data Cloud.

The streaming IoT, wearables and social media comms data that I’ll be ingesting into Big Data Cloud will be coming in from the public internet over TCP, and I’ll also want to connect to Event Hub Cloud from my desktop using tools such as Kafka Tool, so an additional configuration step I’ll do before setting up anything else is to open up Event Hub Cloud’s Kafka broker endpoint to the public internet using the Access Rules menu item in the Event Hub Cloud console.

Now I can see the Kafka service and the default topics that Event Hub Service created for me in Kafka tool.

I can then either use Kafka Tool to create a new topic to start receiving the first of my data streams, the IoT device event data coming out of Samsung SmartThings, or create the topic by defining a new Event Hub Cloud service from within the Event Hub Cloud Service — Dedicated console (confusing, but that’s how Kafka topics are named within Event Hub Cloud).

Then it’s just a case of directing the stream of IoT event data to the public Kafka broker endpoint exposed by Event Hub Cloud Service — Dedicated and then, after a short while, checking the metrics for the new Kafka topic that I setup to receive this incoming streaming data.
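As a rough illustration of what "directing the stream" amounts to (the host, port, topic and payload below are hypothetical, and the kafka-python client is used purely for the sketch), each device event is just a JSON message produced to the broker endpoint opened up earlier:

```python
# Minimal sketch: publish a JSON device event to the Kafka broker endpoint exposed
# by Event Hub Cloud. Host, port, topic name and payload are hypothetical.
import json
from kafka import KafkaProducer   # kafka-python client

producer = KafkaProducer(
    bootstrap_servers=["eventhub-kafka.example.oraclecloud.com:6667"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"device": "motion-sensor-1", "capability": "motion", "value": "active"}
producer.send("smartthings-events", value=event)   # topic created in the Event Hub console
producer.flush()
```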

Getting the data off the Kafka topic and into a Hive table on the Big Data Cloud instance involved the following steps, using Oracle Cloud Infrastructure Object Storage Classic as the intermediate staging layer together with Event Hub Kafka Connect’s OCS Sink Connector:

  1. Configure Event Hub Kafka Connect OCS Sink Connector to push topic events to Oracle Cloud Infrastructure Object Storage Classic (OCS)
  2. Using Zeppelin notebook provided by Big Data Cloud Console, create a CRON job that copies those events across to HDFS storage
  3. Create Hive external tables with location clauses that point to the directories I’ve copied the event files into from OCS
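Steps 2 and 3 can both be driven from the Big Data Cloud Zeppelin notebook. A minimal sketch is below; all paths, container names and the table definition are hypothetical, and the OCS-to-HDFS copy is shown as a plain distcp-style call rather than the actual scheduled job:

```python
# Minimal Zeppelin/pySpark sketch for steps 2 and 3. Paths, container names and
# the table layout are hypothetical; the OCS-to-HDFS copy is a distcp-style call.
import subprocess
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Step 2: copy the event files written by the Kafka Connect OCS sink into HDFS
subprocess.run(
    ["hadoop", "distcp",
     "swift://smartthings.default/topics/smartthings-events/",
     "hdfs:///user/hive/landing/smartthings_events/"],
    check=True,
)

# Step 3: external Hive table whose LOCATION points at the copied event files
spark.sql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS default.smartthings_events (
    device STRING, capability STRING, value STRING, event_ts STRING
  )
  ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
  LOCATION 'hdfs:///user/hive/landing/smartthings_events/'
""")
```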

Then, when I go and log into OAC Data Lake Edition and connect to the Hive Thrift Server on the Big Data Cloud instance I can see the Hive tables I’ve just created, and the data that’s now streaming through from my two initial sources via the Kafka service and Kafka Connect running on Event Hub Cloud Service — Dedicated.

In the second half of this two-post series I’ll go deeper into OAC Data Lake Edition and see how its additional transformation and analysis capabilities stack up against OAC Standard Edition, and also see how it compares to the Oracle Big Data Discovery tool it’s looking to eventually replace.


Oracle Big Data Cloud, Event Hub Cloud and Analytics Cloud Data Lake Edition pt.1 was originally published in Mark Rittman’s Personal Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Faster-than-expected cloud adoption has upped the ante for protection of sensitive data

Faster-than-expected cloud adoption has upped the ante for protection of sensitive data

Three weeks out from GDPR, businesses are still shoving sensitive data into the cloud without necessarily having appropriate security Massive cloud-based companies may be tweaking their data...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
11 Business Intelligence Trends To Watch

11 Business Intelligence Trends To Watch

Many successful companies today have found their own ways of connecting data, people, and ideas. What sets them apart is how they are taking advantage of an unstoppable force — the increased...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Toxic Data: A New Challenge for Data Governance and Security

Toxic Data: A New Challenge for Data Governance and Security

My early adopter friend said that one of the unintended consequences of the publication of many different forms of data, both from individual companies and across companies, is the ability this...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Battle of Kursk on VOA

Battle of Kursk on VOA

Zentralbild, World War II, 1939-45
The new “Panther” tank developed by the fascist German Wehrmacht during the war.
Photo shows: the loading of new “Panther” tanks for transport to the front (1943).

The Voice of America (VOA) interviewed me about Kursk and the current Russian Army for some articles they were working on. The interviewer, Alex Grigoryev, was a journalist in Russia before he immigrated to the United States. The first interview, on Kursk, is on video here, with me speaking in English with Russian subtitles: https://www.golos-ameriki.ru/a/ag-kursk-battle-book-of-cristopher-lawrance/4384650.html

A few things I would change, but I don’t think I completely embarrassed myself.

The Impact of MiFID II on Data Management

The Impact of MiFID II on Data Management

Recently, the revised Markets in Financial Instruments Directions (MiFID II) launched in the EU. The sweeping regulatory changes will impact transaction reporting on all financial instruments traded...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
How Big Data & Digitization are Transforming Business Schools

How Big Data & Digitization are Transforming Business Schools

With the demand for big data and analytics talent, several business schools are now offering Masters in Business Analytics. The growing popularity of the Analytics programs has triggered doubts in...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Change management for analytics success

Change management for analytics success

In a recent survey conducted by Forbes Insights and EY, 564 executives from around the world were asked about their challenges and successes in implementing and driving value from analytics across...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Comment: Can GDPR and blockchain co-exist?

Comment: Can GDPR and blockchain co-exist?

The EU’s General Data Protection Regulation (GDPR), due to be enforced on 25 May, implements new rights for people accessing the information companies hold about them and business obligations for...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
My Company Realized We Had No Idea How to Explain Our Product. So We Learned to Tell a Story.

My Company Realized We Had No Idea How to Explain Our Product. So We Learned to Tell a Story.

I work as a marketing manager for a business software called Weekdone and, up until a month ago, I probably couldn’t give you a straight answer on what exactly our product did. If you asked...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Keys to Data Monetization-Informing Employees-Part 1

Keys to Data Monetization-Informing Employees-Part 1

This article is in continuation from the last article I wrote on Data Monetization . You can read it here.. We begin with the first step of data monetization by educating employees and involving them...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
How Artificial Intelligence Will Shape The Email Marketing Landscape

How Artificial Intelligence Will Shape The Email Marketing Landscape

When we talk about artificial intelligence (AI), it is not only to reference a “what if” scenario like in an episode of Black Mirror but to talk about current tools that make our work as...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
The Growth of Autonomous Vehicles and What the Future Holds

The Growth of Autonomous Vehicles and What the Future Holds

The future is now!

Such are the times we live in that we are making strides and leaps in technology, moving from abstract ideas to reality. From replacing manual gears with automated transmissions to installing Global Positioning Systems (GPS) in cars for navigation, we are now talking about fully autonomous vehicles. What is an autonomous vehicle? Also known as a driverless vehicle, it is one in which driving is controlled by computers.

This kind of vehicle is already a concrete reality. In April 2016 a platoon of wirelessly linked self-driving trucks completed a European cross-border trip. Earlier this year Uber announced that it ordered 24,000 self-driving Volvos. Autonomous vehicles are yet to be adopted on a larger scale; however, the growing trend of treating transportation as a service rather than a commodity suggests a shorter time frame, as private companies and industries are becoming interested and investing. This technology looks to solve some major road safety issues by eliminating human error, which is the main cause of accidents. Beyond this, driverless vehicles will have a profound impact on society and the world as we know it.

A driverless future will have people not owning cars of their own. Instead transport ...


Read More on Datafloq
Moving Goods – Is Blockchain the Answer?

Moving Goods – Is Blockchain the Answer?

We have all heard of Bitcoin. And we have various levels of understanding of it and the myriad of other cryptocurrencies that have popped up since. We may have even less understanding of the technology behind the cryptocurrency craze – blockchain. It is complex, and we are just beginning to see skilled developers with the expertise to both understand it and to set up the technological architecture of blockchain, in any number of sectors – healthcare, education, insurance, travel, governments, and more. It is becoming a favoured technology and major disruptor.

The Value of Blockchain is Clear

There are four main features of the technology.

1. Decentralized Validation

Any transaction that occurs is recorded in a block, detailing the transaction, and the block in which it is recorded is added to a chain of blocks. This only occurs after consensus is achieved among participants in the transaction.

2. Redundancy

The chain is replicated on a group of networked nodes, so that no individual point of failure can corrupt the transaction.

3. Immutable Storage

Hackers cannot quietly corrupt the data in a block: each block records the hash of the block before it, so altering one block invalidates every block that follows it, and the time and date stamps would also have to be altered.
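A toy hash-chain sketch in Python (generic, not modelled on any particular blockchain platform) shows why: because each block commits to the hash of the previous one, silently editing an old transaction breaks every link that follows it.

```python
# Toy hash-chain sketch: each block commits to the previous block's hash,
# so altering an old transaction invalidates every later block.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64
for tx in ["Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dave 1"]:
    block = {"tx": tx, "prev_hash": prev}
    prev = block_hash(block)
    chain.append(block)

def is_valid(chain):
    expected = "0" * 64
    for block in chain:
        if block["prev_hash"] != expected:
            return False
        expected = block_hash(block)
    return True

print(is_valid(chain))                   # True
chain[0]["tx"] = "Alice pays Bob 500"    # tamper with an early block
print(is_valid(chain))                   # False: every later link no longer matches
```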

4. Encryption

Access to ...


Read More on Datafloq
Five Challenges to IoT Analytics Success

Five Challenges to IoT Analytics Success

The Internet of Things (IoT) is an ecosystem of ever-increasing complexity; it’s the next wave of innovation that will humanize every object in our life. IoT is bringing more and more devices...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Best Public Datasets for Machine Learning and Data Science: Sources and Advice on the Choice

Best Public Datasets for Machine Learning and Data Science: Sources and Advice on the Choice

While shaping the idea of your data science project, you probably dreamed of writing variants of algorithms, estimating model performance on training data, and discussing prediction results with colleagues . . . But before you live the dream, you not only have to get the right data, you also must check if it’s labeled according to your task. Even if you don’t need to collect specific data, you can spend a good chunk of time looking for a dataset that will work best for the project.

Thousands of public datasets on different topics — from top fitness trends and beer recipes to pesticide poisoning rates — are available online. To spend less time on the search for the right dataset, you must know where to look for it.

This article is aimed at helping you find the best publicly available dataset for your machine learning project. We’ve grouped the article sections according to dataset sources, types, and a number of topics:


Catalogs of data portals and aggregators
Government and official data
Scientific research data
Verified datasets from data science communities
Political and social datasets from media outlets
Finance and economic data
Healthcare data
Travel and transportation data
Other sources


So, let’s deep dive into this ocean of data.

Catalogs of data portals and aggregators

While you ...


Read More on Datafloq
How Deep Learning Will Change Customer Experience

How Deep Learning Will Change Customer Experience

Deep learning is a sub-category within machine learning and artificial intelligence. It is inspired by and based on the model of the human brain to create artificial neural networks for machines. Deep learning will allow machines and devices to function in some ways as humans do.

Dr. Rodrigo Agundez of GoDataDriven is a co-author of this article and very enthusiastic about the improvements that deep learning can offer. He’s been involved in the data science and analysis field for some time and is already working on implementing models for practical applications.

Rodrigo notes that the new generation of users wants to interact with devices and appliances in a human-like manner. Take the example of Apple’s Siri, which allows for voice command and voice recognition. Communicating with Siri is similar to interacting with a human.

The user interface for Siri seems simple enough. However, the A.I. algorithms that are designed on the back-end are quite complex. 

Designing this kind of interaction with a machine was not possible a few years ago. System designers now have access to complex deep learning algorithms that make it possible to integrate such behaviour into machines.
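For readers who have never seen what an "artificial neural network" actually computes, here is a deliberately tiny sketch (pure NumPy, random weights, nothing Siri-specific): layers of weighted sums passed through non-linearities. Deep learning stacks many such layers and learns the weights from data instead of leaving them random.

```python
# Tiny neural-network sketch: two layers of weighted sums plus non-linearities.
# Weights are random here; deep learning would learn them from training data.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.2, 0.7, 0.1])            # a 3-feature input (e.g. an encoded voice frame)

W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer: 3 inputs -> 4 units
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # output layer: 4 units -> 2 classes

hidden = np.maximum(0, W1 @ x + b1)      # ReLU non-linearity
logits = W2 @ hidden + b2
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the two classes
print(probs)
```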

Importance of Deep Learning

Artificial Intelligence will never truly come of age without giving machines the powerful capabilities of deep learning.

The idea of ...


Read More on Datafloq
Scoring Weapons And Aggregation In Trevor Dupuy’s Combat Models

Scoring Weapons And Aggregation In Trevor Dupuy’s Combat Models

[The article below is reprinted from the October 1997 edition of The International TNDM Newsletter.]

Consistent Scoring of Weapons and Aggregation of Forces:
The Cornerstone of Dupuy’s Quantitative Analysis of Historical Land Battles
by
James G. Taylor, PhD,
Dept. of Operations Research, Naval Postgraduate School

Introduction

Col. Trevor N. Dupuy was an American original, especially as regards the quantitative study of warfare. As with many prophets, he was not entirely appreciated in his own land, particularly its Military Operations Research (OR) community. However, after becoming rather familiar with the details of his mathematical modeling of ground combat based on historical data, I became aware of the basic scientific soundness of his approach. Unfortunately, his documentation of methodology was not always accepted by others, many of whom appeared to confuse lack of mathematical sophistication in his documentation with lack of scientific validity of his basic methodology.

The purpose of this brief paper is to review the salient points of Dupuy’s methodology from a system’s perspective, i.e., to view his methodology as a system, functioning as an organic whole to capture the essence of past combat experience (with an eye towards extrapolation into the future). The advantage of this perspective is that it immediately leads one to the conclusion that if one wants to use some functional relationship derived from Dupuy’s work, then one should use his methodologies for scoring weapons, aggregating forces, and adjusting for operational circumstances; since this consistency is the only guarantee of being able to reproduce historical results and to project them into the future.

Implications (of this system’s perspective on Dupuy’s work) for current DOD models will be discussed. In particular, the Military OR community has developed quantitative methods for imputing values to weapon systems based on their attrition capability against opposing forces and force interactions.[1] One such approach is the so-called antipotential-potential method[2] used in TACWAR[3] to score weapons. However, one should not expect such scores to provide valid casualty estimates when combined with historically derived functional relationships such as the so-called ATLAS casualty-rate curves[4] used in TACWAR, because a different “yard-stick” (i.e. measuring system for estimating the relative combat potential of opposing forces) was used to develop such a curve.

Overview of Dupuy’s Approach

This section briefly outlines the salient features of Dupuy’s approach to the quantitative analysis and modeling of ground combat as embodied in his Tactical Numerical Deterministic Model (TNDM) and its predecessor the Quantified Judgment Model (QJM). The interested reader can find details in Dupuy [1979] (see also Dupuy [1985][5], [1987], [1990]). Here we will view Dupuy’s methodology from a system approach, which seeks to discern its various components and their interactions and to view these components as an organic whole. Essentially Dupuy’s approach involves the development of functional relationships from historical combat data (see Fig. 1) and then using these functional relationships to model future combat (see Fig. 2).

At the heart of Dupuy’s method is the investigation of historical battles and comparing the relationship of inputs (as quantified by relative combat power, denoted as Pa/Pd for that of the attacker relative to that of the defender in Fig. 1) (e.g. see Dupuy [1979, pp. 59-64]) to outputs (as quantified by extent of mission accomplishment, casualty effectiveness, and territorial effectiveness; see Fig. 2) (e.g. see Dupuy [1979, pp. 47-50]). The salient point is that within this scheme, the main input[6] (i.e. relative combat power) to a historical battle is a derived quantity. It is computed from formulas that involve three essential aspects: (1) the scoring of weapons (e.g. see Dupuy [1979, Chapter 2 and also Appendix A]), (2) aggregation methodology for a force (e.g. see Dupuy [1979, pp. 43-46 and 202-203]), and (3) situational-adjustment methodology for determining the relative combat power of opposing forces (e.g. see Dupuy [1979, pp. 46-47 and 203-204]). In the force-aggregation step the effects on weapons of Dupuy’s environmental variables and one operational variable (air superiority) are considered[7], while in the situation-adjustment step the effects on forces of his behavioral variables[8] (aggregated into a single factor called the relative combat effectiveness value (CEV)) and also the other operational variables are considered (Dupuy [1987, pp. 86-89]).

Figure 1.

Moreover, any functional relationships developed by Dupuy depend (unless shown otherwise) on his computational system for derived quantities, namely OLIs, force strengths, and relative combat power. Thus, Dupuy’s results depend in an essential manner on his overall computational system described immediately above. Consequently, any such functional relationship (e.g. casualty-rate curve) directly or indirectly derivative from Dupuy’s work should still use his computational methodology for determination of independent-variable values.

Fig. 1 also reveals another important aspect of Dupuy’s work, the development of reliable data on historical battles. Military judgment plays an essential role in this development of such historical data for a variety of reasons. Dupuy was essentially the only source of new secondary historical data developed from primary sources (see McQuie [1970] for further details). These primary sources are well known to be both incomplete and inconsistent, so that military judgment must be used to fill in the many gaps and reconcile observed inconsistencies. Moreover, military judgment also generates the working hypotheses for model development (e.g. identification of significant variables).

At the heart of Dupuy’s quantitative investigation of historical battles and subsequent model development is his own weapons-scoring methodology, which slowly evolved out of study efforts by the Historical Evaluation Research Organization (HERO) and its successor organizations (cf. HERO [1967] and compare with Dupuy [1979]). Early HERO [1967, pp. 7-8] work revealed that what one would today call weapons scores developed by other organizations were so poorly documented that HERO had to create its own methodology for developing the relative lethality of weapons, which eventually evolved into Dupuy’s Operational Lethality Indices (OLIs). Dupuy realized that his method was arbitrary (as indeed is its counterpart, called the operational definition, in formal scientific work), but felt that this would be ameliorated if the weapons-scoring methodology were consistently applied to historical battles. Unfortunately, this point is not clearly stated in Dupuy’s formal writings, although it was clearly (and compellingly) made by him in numerous briefings that this author heard over the years.

Figure 2.

In other words, from a system’s perspective, the functional relationships developed by Colonel Dupuy are part of his analysis system that includes this weapons-scoring methodology consistently applied (see Fig. 1 again). The derived functional relationships do not stand alone (unless further empirical analysis shows them to hold for any weapons-scoring methodology), but function in concert with computational procedures. Another essential part of this system is Dupuy’s aggregation methodology, which combines numbers, environmental circumstances, and weapons scores to compute the strength (S) of a military force. A key innovation by Colonel Dupuy [1979, pp. 202-203] was to use a nonlinear (more precisely, a piecewise-linear) model for certain elements of force strength. This innovation precluded the occurrence of military absurdities such as air firepower being fully substitutable for ground firepower, antitank weapons being fully effective when armor targets are lacking, etc. The final part of this computational system is Dupuy’s situational-adjustment methodology, which combines the effects of operational circumstances with force strengths to determine relative combat power, e.g. Pa/Pd.

To recapitulate, the determination of an Operational Lethality Index (OLI) for a weapon involves the combination of weapon lethality, quantified in terms of a Theoretical Lethality Index (TLI) (e.g. see Dupuy [1987, p. 84]), and troop dispersion[9] (e.g. see Dupuy [1987, pp. 84-85]). Weapons scores (i.e. the OLIs) are then combined with numbers (own side and enemy) and combat-environment factors to yield force strength. Six[10] different categories of weapons are aggregated, with nonlinear (i.e. piecewise-linear) models being used for the following three categories of weapons: antitank, air defense, and air firepower (i.e. close-air support). Operational, e.g. mobility, posture, surprise, etc. (Dupuy [1987, p. 87]), and behavioral variables (quantified as a relative combat effectiveness value (CEV)) are then applied to force strength to determine a side’s combat-power potential.
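To make the flow of that computation easier to follow, here is a purely schematic Python sketch with made-up numbers; the OLI values, adjustment factors and constants below are not Dupuy's, and the piecewise-linear caps he applies to antitank, air defense and air firepower are omitted:

```python
# Purely schematic sketch of the computational flow described above.
# OLI values, factors and constants are made up; they are NOT Dupuy's.

def force_strength(weapons, env_factor):
    """Aggregate OLI x quantity across weapon categories, scaled by environmental factors."""
    return sum(oli * count for oli, count in weapons) * env_factor

def combat_power(strength, operational_factor, cev):
    """Apply operational circumstances and relative combat effectiveness (CEV) to force strength."""
    return strength * operational_factor * cev

# Hypothetical inventories: (OLI score, number fielded) per weapon category
attacker = [(10.0, 500), (120.0, 40)]
defender = [(10.0, 400), (120.0, 30)]

Pa = combat_power(force_strength(attacker, env_factor=0.9), operational_factor=1.1, cev=1.2)
Pd = combat_power(force_strength(defender, env_factor=1.0), operational_factor=1.0, cev=1.0)

print(round(Pa / Pd, 2))   # force ratio Pa/Pd, the derived input to the functional relationships
```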

Requirement for Consistent Scoring of Weapons, Force Aggregation, and Situational Adjustment for Operational Circumstances

The salient point to be gleaned from Fig. 1 and 2 is that the same (or at least consistent) weapons-scoring, aggregation, and situational-adjustment methodologies must be used both for developing functional relationships and then for playing them to model future combat. The corresponding computational methods function as a system (organic whole) for determining relative combat power, e.g. Pa/Pd. For the development of functional relationships from historical data, a force ratio (relative combat power of the two opposing sides, e.g. attacker’s combat power divided by that of the defender, Pa/Pd) is computed (i.e. it is a derived quantity) as the independent variable, with observed combat outcome being the dependent variable. Thus, as discussed above, this force ratio depends on the methodologies for scoring weapons, aggregating force strengths, and adjusting a force’s combat power for the operational circumstances of the engagement. It is a priori not clear that different scoring, aggregation, and situational-adjustment methodologies will lead to similar derived values. If such different computational procedures were to be used, these derived values should be recomputed and the corresponding functional relationships rederived and replotted.

However, users of the Tactical Numerical Deterministic Model (TNDM) (or for that matter, its predecessor, the Quantified Judgment Model (QJM)) need not worry about this point because it was apparently meticulously observed by Colonel Dupuy in all his work. However, portions of his work have found their way into a surprisingly large number of DOD models (usually not explicitly acknowledged), but the context and range of validity of historical results have been largely ignored by others. The need for recalibration of the historical data and corresponding functional relationships has not been considered in applying Dupuy’s results for some important current DOD models.

Implications for Current DOD Models

A number of important current DOD models (namely, TACWAR and JICM discussed below) make use of some of Dupuy’s historical results without recalibrating functional relationships such as loss rates and rates of advance as a function of some force ratio (e.g. Pa/Pd). As discussed above, it is not clear that such a procedure will capture the essence of past combat experience. Moreover, in calculating losses, Dupuy first determines personnel losses (expressed as a percent loss of personnel strength, i.e., number of combatants on a side) and then calculates equipment losses as a function of this casualty rate (e.g., see Dupuy [1971, pp. 219-223], also [1990, Chapters 5 through 7][11]). These latter functional relationships are apparently not observed in the models discussed below. In fact, only Dupuy (going back to Dupuy [1979][12]) takes personnel losses to depend on a force ratio and other pertinent variables, with materiel losses being taken as derivative from this casualty rate.

For example, TACWAR determines personnel losses[13] by computing a force ratio and then consulting an appropriate casualty-rate curve (referred to as empirical data), much in the same fashion as ATLAS did[14]. However, such a force ratio is computed using a linear model with weapon values determined by the so-called antipotential-potential method[15]. Unfortunately, this procedure may not be consistent with how the empirical data (i.e. the casualty-rate curves) was developed. Further research is required to demonstrate that valid casualty estimates are obtained when different weapon scoring, aggregation, and situational-adjustment methodologies are used to develop casualty-rate curves from historical data and to use them to assess losses in aggregated combat models. Furthermore, TACWAR does not use Dupuy’s model for equipment losses (see above), although it does purport, as just noted above, to use “historical data” (e.g., see Kerlin et al. [1975, p. 22]) to compute personnel losses as a function (among other things) of a force ratio (given by a linear relationship), involving close air support values in a way never used by Dupuy. Although their force-ratio determination methodology does have logical and mathematical merit, it is not the way that the historical data was developed.

Moreover, RAND (Allen [1992]) has more recently developed what is called the situational force scoring (SFS) methodology for calculating force ratios in large-scale, aggregated-force combat situations to determine loss and movement rates. Here, SFS refers essentially to a force-aggregation and situation-adjustment methodology, which has many conceptual elements in common with Dupuy’s methodology (except, most notably, extensive testing against historical data, especially documentation of such efforts). This SFS was originally developed for RSAS[16] and is today used in JICM[17]. It also apparently uses a weapon-scoring system developed at RAND[18]. It purports (no documentation given [citation of unpublished work]) to be consistent with historical data (including the ATLAS casualty-rate curves) (Allen [1992, p. 41]), but again no consideration is given to recalibration of historical results for different weapon-scoring, force-aggregation, and situational-adjustment methodologies. SFS emphasizes adjusting force strengths according to operational circumstances (the “situation”) of the engagement (including surprise), with many innovative ideas (but in some major ways has little connection with previous work of others[19]). The resulting model contains many more details than historical combat data would support. It is also a methodology that differs in many essential ways from that used previously by any investigator. In particular, it is doubtful that it develops force ratios in a manner consistent with Dupuy’s work.

Final Comments

Use of (sophisticated) mathematics for modeling past historical combat (and extrapolating it into the future for planning purposes) is no reason for ignoring Dupuy’s work. One would think that the current Military OR community would try to understand Dupuy’s work before trying to improve and extend it. In particular, Colonel Dupuy’s various computational procedures (including constants) must be considered as an organic whole (i.e. a system) supporting the development of functional relationships. If one ignores this computational system and simply tries to use some isolated aspect, the result may be interesting and even logically sound, but it probably lacks any scientific validity.

REFERENCES

P. Allen, “Situational Force Scoring: Accounting for Combined Arms Effects in Aggregate Combat Models,” N-3423-NA, The RAND Corporation, Santa Monica, CA, 1992.

L. B. Anderson, “A Briefing on Anti-Potential Potential (The Eigen-value Method for Computing Weapon Values),” WP-2, Project 23-31, Institute for Defense Analyses, Arlington, VA, March 1974.

B. W. Bennett, et al., “RSAS 4.6 Summary,” N-3534-NA, The RAND Corporation, Santa Monica, CA, 1992.

B. W. Bennett, A. M. Bullock, D. B. Fox, C. M. Jones, J. Schrader, R. Weissler, and B. A. Wilson, “JICM 1.0 Summary,” MR-383-NA, The RAND Corporation, Santa Monica, CA, 1994.

P. K. Davis and J. A. Winnefeld, “The RAND Strategic Assessment Center: An Overview and Interim Conclusions About Utility and Development Options,” R-2945-DNA, The RAND Corporation, Santa Monica, CA, March 1983.

T. N. Dupuy, Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles, The Bobbs-Merrill Company, Indianapolis/New York, 1979.

T. N. Dupuy, Numbers, Predictions and War, Revised Edition, HERO Books, Fairfax, VA, 1985.

T. N. Dupuy, Understanding War: History and Theory of Combat, Paragon House Publishers, New York, 1987.

T. N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War, HERO Books, Fairfax, VA, 1990.

General Research Corporation (GRC), “A Hierarchy of Combat Analysis Models,” McLean, VA, January 1973.

Historical Evaluation and Research Organization (HERO), “Average Casualty Rates for War Games, Based on Historical Data,” 3 Volumes in 1, Dunn Loring, VA, February 1967.

E. P. Kerlin and R. H. Cole, “ATLAS: A Tactical, Logistical, and Air Simulation: Documentation and User’s Guide,” RAC-TP-338, Research Analysis Corporation, McLean, VA, April 1969 (AD 850 355).

E. P. Kerlin, L. A. Schmidt, A. J. Rolfe, M. J. Hutzler, and D. L. Moody, “The IDA Tactical Warfare Model: A Theater-Level Model of Conventional, Nuclear, and Chemical Warfare, Volume II: Detailed Description,” R-211, Institute for Defense Analyses, Arlington, VA, October 1975 (AD B009 692L).

R. McQuie, “Military History and Mathematical Analysis,” Military Review 50, No. 5, 8-17 (1970).

S. M. Robinson, “Shadow Prices for Measures of Effectiveness, I: Linear Model,” Operations Research 41, 518-535 (1993).

J. G. Taylor, Lanchester Models of Warfare, Vols. I & II, Operations Research Society of America, Alexandria, VA, 1983. (a)

J. G. Taylor, “A Lanchester-Type Aggregated-Force Model of Conventional Ground Combat,” Naval Research Logistics Quarterly 30, 237-260 (1983). (b)

NOTES

[1] For example, see Taylor [1983a, Section 7.18], which contains a number of examples. The basic references given there may be more accessible through Robinson [1993].

[2] This term was apparently coined by L.B. Anderson [1974] (see also Kerlin et al. [1975, Chapter I, Section D.3]).

[3] The Tactical Warfare (TACWAR) model is a theater-level, joint-warfare, computer-based combat model that is currently used for decision support by the Joint Staff and essentially all CINC staffs. It was originally developed by the Institute for Defense Analyses in the mid-1970s (see Kerlin et al. [1975]), originally referred to as TACNUC, which has been continually upgraded until (and including) the present day.

[4] For example, see Kerlin and Cole [1969], GRC [1973, Fig. 6-6], or Taylor [1983b, Fig. 5] (also Taylor [1983a, Section 7.13]).

[5] The only apparent difference between Dupuy [1979] and Dupuy [1985] is the addition of an appendix (Appendix C, “Modified Quantified Judgment Analysis of the Bekaa Valley Battle”) to the end of the latter (pp. 241-251). Hence, the page content is apparently the same for these two books for pp. 1-239.

[6] Technically speaking, one also has the engagement type and possibly several other descriptors (denoted in Fig. 1 as reduced list of operational circumstances) as other inputs to a historical battle.

[7] In Dupuy [1979, e.g. pp. 43-46] only environmental variables are mentioned, although basically the same formulas underlie both Dupuy [1979] and Dupuy [1987]. For simplicity, Fig. 1 and 2 follow this usage and employ the term “environmental circumstances.”

[8] In Dupuy [1979, e.g. pp. 46-47] only operational variables are mentioned, although basically the same formulas underlie both Dupuy [1979] and Dupuy [1987]. For simplicity, Fig. 1 and 2 follow this usage and employ the term “operational circumstances.”

[9] Chris Lawrence has kindly brought to my attention that since the same value for troop dispersion from an historical period (e.g. see Dupuy [1987, p. 84]) is used for both the attacker and also the defender, troop dispersion does not actually affect the determination of relative combat power Pa/Pd.

[10] Eight different weapon types are considered, with three being classified as infantry weapons (e.g. see Dupuy [1979, pp. 43-44], [1981, pp. 85-86]).

[11] Chris Lawrence has kindly informed me that Dupuy’s work on relating equipment losses to personnel losses goes back to the early 1970s and even earlier (e.g. see HERO [1966]). Moreover, Dupuy’s [1992] book Future Wars gives some additional empirical evidence concerning the dependence of equipment losses on casualty rates.

[12] But actually going back much earlier as pointed out in the previous footnote.

[13] See Kerlin et al. [1975, Chapter I, Section D.l].

[14] See Footnote 4 above.

[15] See Kerlin et al. [1975, Chapter I, Section D.3]; see also Footnotes 1 and 2 above.

[16] The RAND Strategy Assessment System (RSAS) is a multi-theater aggregated combat model developed at RAND in the early l980s (for further details see Davis and Winnefeld [1983] and Bennett et al. [1992]). It evolved into the Joint Integrated Contingency Model (JICM), which is a post-Cold War redesign of the RSAS (starting in FY92).

[17] The Joint Integrated Contingency Model (JICM) is a game-structured computer-based combat model of major regional contingencies and higher-level conflicts, covering strategic mobility, regional conventional and nuclear warfare in multiple theaters, naval warfare, and strategic nuclear warfare (for further details, see Bennett et al. [1994]).

[18] RAND apparently replaced one weapon-scoring system by another (e.g. see Allen [1992, pp. 9, 15, and 87-89]) without making any other changes in their SFS System.

[19] For example, both Dupuy’s early HERO work (e.g. see Dupuy [1967]), reworks of these results by the Research Analysis Corporation (RAC) (e.g. see RAC [1973, Fig. 6-6]), and Dupuy’s later work (e.g. see Dupuy [1979]) considered daily fractional casualties for the attacker and also for the defender as basic casualty-outcome descriptors (see also Taylor [1983b]). However, RAND does not do this, but considers the defender’s loss rate and a casualty exchange ratio as being the basic casualty-production descriptors (Allen [1992, pp. 41-42]). The great value of using the former set of descriptors (i.e. attacker and defender fractional loss rates) is that not only is casualty assessment more straightforward (especially development of functional relationships from historical data) but also qualitative model behavior is readily deduced (see Taylor [1983b] for further details).

Digital Twin as a strategic technology trend: meaning, benefits & examples

Digital Twin as a strategic technology trend: meaning, benefits & examples

There is not yet so much information about the buzzword and technology – Digital Twin, even though, it is already listed in the next technology trends and used by many companies. Today, I would like...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Data protection for Azure Stack hybrid cloud environments using HPE Storage and Veritas NetBackup

Data protection for Azure Stack hybrid cloud environments using HPE Storage and Veritas NetBackup

HPE has introduced the much-awaited HPE ProLiant for Microsoft Azure Stack (Gen 10) last week. As you deploy Microsoft Azure Stack to host your applications, have you thought about how to protect...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]

Copyright © 2018 BBBT - All Rights Reserved