As hacking techniques become more sophisticated, cybersecurity threats are now a daily occurrence for large organisations, and researchers are constantly developing new methods to keep data secure. The cybersecurity market was estimated to be worth $150bn in 2018 and is anticipated to grow to $230bn over the next four years. As household names from Wendy’s and Equifax to Uber and Yahoo have been attacked, investment in cybersecurity businesses has soared and cybersecurity stocks have risen.
One of the most recent developments is the use of artificial intelligence (AI). AI can be incorporated into security software to protect an organisation against unknown threats as well as known ones: the software identifies a hacker’s modus operandi and ‘learns’ to adapt itself so that it continues to block attacks.
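As a rough illustration of the idea rather than a real product, an unsupervised anomaly detector can flag activity that deviates from learned ‘normal’ behaviour, which is how software can catch attacks with no known signature. The sketch below assumes scikit-learn, and the traffic features and numbers are invented for the example.

```python
# Minimal sketch of anomaly-based threat detection using scikit-learn.
# The feature names and data are illustrative assumptions, not a real feed.
import numpy as np
from sklearn.ensemble import IsolationForest

# Pretend each row is one network flow: [bytes sent, duration (s), failed logins]
normal_traffic = np.array([
    [500, 1.2, 0],
    [620, 0.9, 0],
    [480, 1.5, 1],
    [550, 1.1, 0],
])

# Train on traffic assumed to be benign; the model learns what "normal" looks like.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# Score new events: -1 means the flow looks anomalous and should be blocked or reviewed.
new_events = np.array([
    [510, 1.0, 0],       # looks like ordinary traffic
    [90000, 45.0, 30],   # huge transfer with many failed logins: suspicious
])
print(detector.predict(new_events))  # e.g. [ 1 -1 ]
```

Because the model is trained on behaviour rather than on a list of known attack signatures, it can in principle raise an alert the first time a new technique is used, which is the appeal of AI-driven defences.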
One concern is that the same technology could be used by hackers themselves, applying machine learning to work out how to overcome security systems and modifying their own code accordingly. At the moment the risk is relatively low, partly because very few people have the skills needed to develop this technology. Those skills are in such demand that people who have them can make a very comfortable living working legitimately to protect businesses from crime.
Artificial intelligence has vast potential to benefit a wide range of sectors and businesses, including healthcare, financial services, climate change prevention and energy consumption reduction.
Data security was also highlighted at the recent G20 summit on June 8th, where the EU thanked Japan for raising important topics around the digital economy, such as artificial intelligence, improving security, and bridging the digital divide between countries.
However, there is still a long way to go given the shortage of people with the relevant skill set. Research by ESG found that two-thirds of cybersecurity staff felt under pressure, as the lack of expertise leaves those who do possess the relevant skills with unreasonable workloads. This leads to burnout and human error, and the difficulty of prioritising work means fixes for potential security issues are delayed. One way to take the pressure off could be to use artificial intelligence to investigate and prioritise security issues.
By combining automated data mining with reasoning processes and supervised learning, teams can replace error-prone manual research with accurate predictions about the severity of a potential threat.
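A minimal sketch of that idea, assuming scikit-learn and made-up alert features (the feature names, labels and figures are hypothetical, not taken from any particular product): a classifier trained on previously triaged alerts can suggest a severity for new ones, so analysts see the riskiest items first.

```python
# Sketch: supervised learning to predict the severity of a security alert.
# The features and labels below are hypothetical examples of past triage data.
from sklearn.ensemble import RandomForestClassifier

# Each alert: [affected hosts, CVSS score of related CVE, privileged account involved (0/1)]
past_alerts = [
    [1, 3.1, 0],
    [2, 5.0, 0],
    [10, 7.5, 1],
    [50, 9.8, 1],
]
# Severity assigned by analysts during earlier manual triage.
past_severity = ["low", "medium", "high", "critical"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_alerts, past_severity)

# New alerts are scored automatically, so the work queue can be ordered by predicted severity.
incoming = [[3, 6.4, 0], [40, 9.1, 1]]
print(model.predict(incoming))  # e.g. ['medium' 'critical']
```

In practice such a model would be trained on far more data and many more signals, but even this simple form shows how routine triage could be automated so that human experts spend their time on the highest-severity issues.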
Although artificial intelligence and machine learning (ML) are not new, it is only since the advent of cloud-based storage that running these systems has become practical. They rely on massive quantities of data, so it is only since the development of big data platforms that AI and ML have been used successfully.
Artificial intelligence is no longer science fiction. It is part of everyday life and growing rapidly in sophistication. Such development is a double-edged sword. In the right hands it can be used as a force for good, keeping us protected and improving the efficiency of systems. The danger is that, in the wrong hands, it could be used by cybercriminals to compromise our data in damaging and destructive ways.
As Professor Stephen Hawking said: “Success in creating AI would be the biggest event in human history. Unfortunately it might also be the last, unless we learn how to avoid the risks.”