
Cybercrime is on the rise, and organizations across a wide variety of industries — from financial institutions to insurers, healthcare providers, and large e-retailers — are rightfully worried. In the first half of 2017 alone, over 2 billion records were compromised. With PII (personally identifiable information) stolen in these breaches, fraudsters can gain access to customer accounts, create synthetic identities, and even craft phony business profiles to commit various forms of fraud.
Naturally, companies are frantically looking to beef up their security teams. But there’s a problem: a large skills gap is causing hiring difficulties in the cybersecurity industry. So much so that the Information Systems Audit and Control Association (ISACA) found that fewer than one in four candidates who apply for cybersecurity jobs are qualified. ISACA predicts that this lack of qualified applicants will lead to a global shortage of two million cybersecurity professionals by 2019.
In response, many companies are turning to artificial intelligence to pick up the slack. This raises a very important and expensive question: Are robocops ready for the job?
Training & supervision are paramount
AI was seemingly built to alleviate the need for humans to handle authentication. Monitoring implicit data points — a user’s environment (geolocation), device characteristics (call metadata), biometrics (heartbeat), or behavior (typing speed and style) — to validate someone’s identity is faster and more effective with AI than with the human eye.
Companies are already seeing strong results from AI, as illustrated by FICO’s newest Falcon consortium models, which have improved card-not-present (CNP) fraud detection by 30 percent without increasing the false positive rate.
While AI’s ability to authenticate may outweigh that of a human, cybercrime is too intricate a problem for AI to solve without strategic human direction to overcome the cold-start problem. Given the complexity of a cybersecurity environment and the lack of a proper foundation for where to start, unsupervised cyber sleuthing from robocops gets us nowhere. Identifying patterns in big data is an impressive feat for AI, but those analyses alone are ill-equipped to fight the war on fraud or to fix an inefficient customer experience.
On the other hand, supervised machine learning techniques depend on human-labeled examples to train their algorithms. As an analogy, instead of trying to reinvent the wheel, a supervised algorithm is just figuring out the best tire circumference for a given car model and weather conditions. Supervised learning can find patterns in big data — but more than that, it can turn those patterns into actionable intelligence.
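To make the idea concrete, here is a minimal sketch of supervised fraud scoring. The features (distance from a user’s usual geolocation, deviation from their typical typing cadence) and the labeled examples are hypothetical illustrations, not from the article; the point is that humans supply the labels, and the algorithm learns the decision boundary.

```python
import math

# Hypothetical human-labeled examples:
# (geo_distance_km, typing_deviation, is_fraud)
LABELED = [
    (5, 0.10, 0), (12, 0.20, 0), (3, 0.05, 0), (8, 0.15, 0),
    (4000, 0.90, 1), (2500, 0.70, 1), (3800, 0.95, 1), (1500, 0.60, 1),
]

def train(examples, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression model by gradient descent."""
    w_geo = w_typ = bias = 0.0
    for _ in range(epochs):
        for geo, typ, label in examples:
            geo_n = geo / 5000.0  # scale distance to roughly [0, 1]
            p = 1.0 / (1.0 + math.exp(-(w_geo * geo_n + w_typ * typ + bias)))
            err = p - label       # gradient of log loss w.r.t. the logit
            w_geo -= lr * err * geo_n
            w_typ -= lr * err * typ
            bias  -= lr * err
    return w_geo, w_typ, bias

def fraud_probability(model, geo_distance_km, typing_deviation):
    """Score a new event: probability it is fraudulent."""
    w_geo, w_typ, bias = model
    z = w_geo * (geo_distance_km / 5000.0) + w_typ * typing_deviation + bias
    return 1.0 / (1.0 + math.exp(-z))

model = train(LABELED)
```

A login from 3,000 km away with an unfamiliar typing rhythm scores high; a routine local login scores low. The “supervision” is entirely in the labeled examples: change the labels and the same algorithm learns a different boundary.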
AI and machine learning can analyze massive quantities of data and identify patterns within it that humans could never distill. But human direction is still needed to lay the foundation for what the machine learns and to set AI off on the right foot in its pursuit of fraud prevention and great customer service.
Readying AI for first contact
When artificial intelligence comes across a new dataset instance that doesn’t fit its induction-based models, a human decision may be necessary to resolve…