Today, devices and networks are increasingly interconnected, multiplying the risk of malicious attacks. AI can help thwart those attacks, just as it can proactively identify potential vulnerabilities. But what about privacy? And how much can we trust AI? Trust in the digital world is still very much a work in progress.
Is it possible to build a hyperconnected world that is also secure? The digital revolution has created a vast and growing playground for sophisticated cybercrime. The attackers are not even people: they are machines running millions of tests per second, looking to exploit the slightest vulnerability. From major corporations to small businesses, everyone is at risk.

Companies need to build a strong security culture that embraces best practices, regular updates, and security audits. And they cannot forget that the weakest link is usually the unwitting employee who plugs a USB stick received from a third party into a PC, or who falls prey to a malicious email. In other words, human error. Experts insist that security needs to be considered from the earliest stages of a project’s design, not as an “add-on” addressed when it is too late. This advice is often overlooked because protecting a project increases its cost and brings no immediate financial benefit. But when an attack occurs, and succeeds, the cost of not planning for it can be disastrous for a company’s reputation and finances. In 2018, Accenture Security estimated that the average cost to a company from a cyberattack was $13 million.

And run-of-the-mill cybercriminals are not the only threat. Industrial espionage and much larger-scale strategic attacks are also major problems. It is now much easier, and certainly less risky, for one state to destabilize another by weakening its communication and distribution networks over the Internet than it is to attack it physically.

Cybersecurity can temper these emerging threats by using artificial intelligence to monitor the entire network, identify the signature of an attack, and develop real-time countermeasures to protect the code and the network. If necessary, systems can reset to the state they were in before the attack. CEA-List is developing a “cyber centaur” approach that combines human expertise and artificial intelligence.
It uses neural-network-based machine learning to assess threats, distill the vast amount of data available, and provide recommendations to the human expert as quickly as possible. CEA-List is also part of the European Sparta network, a group of 44 organizations from 14 European countries that share their diverse expertise in cybersecurity to find robust solutions.

More about CEA-List’s work on Cybersecurity
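The statistical side of this kind of monitoring can be sketched in miniature. The toy detector below (names, data, and threshold are all illustrative assumptions, not CEA-List's actual system) flags traffic samples that deviate sharply from the observed baseline, which is the simplest form of anomaly detection:

```python
import statistics

def find_anomalies(samples, threshold=2.5):
    """Flag indices whose value deviates from the mean by more than
    `threshold` standard deviations -- a crude stand-in for the far
    richer statistical models a real monitoring system would use."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

# Baseline request rates with one burst that could indicate an attack.
traffic = [120, 118, 125, 119, 122, 121, 950, 117, 123, 120]
print(find_anomalies(traffic))  # [6] -- the burst stands out
```

A production system would, of course, learn the baseline continuously and correlate many signals at once; the point here is only the principle of comparing live traffic against an expected profile.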
The Frama-C platform, developed by CEA-List, checks a program’s C code to identify any vulnerabilities that could be used as a doorway for an attack. This open source platform won the cybersecurity competition organized by the US National Institute of Standards and Technology (NIST), identifying all the errors hidden in thousands of lines of code. It was also used by Airbus to validate the code used in its A380 aircraft. Its companion platform, Binsec, checks for potential weaknesses in binary code.
- ransomware attacks processed by ANSSI (the French National IT Security Agency) in France in 2020
- of companies and organizations reported daily attacks on their application services (source: EU Cybersecurity Agency)
- of companies and organizations experienced malware activity that spread from one employee to another
- increase in phishing attempts in just one month during the Covid-19 pandemic
The GAFAMs—Google, Amazon, Facebook, Apple, and Microsoft—are everywhere. And their use of our personal data is creating an unprecedented threat. Is backpedaling even an option at this point? For businesses, the answer is no. They would be hard pressed to find solutions that work as well as the ones these tech giants deliver.
European regulations, most notably the GDPR (General Data Protection Regulation), do however place significant restrictions on how personal data can be used. All identifying information must be removed from medical data, for example. Today’s algorithms can analyze thousands—even millions—of medical records to identify correlations or suggest a diagnosis, treatment, or prognosis—but this must not come at the expense of doctor-patient confidentiality. CEA-List has developed new cryptographic technologies such as homomorphic encryption, which makes it possible to perform computations on encrypted data without ever decrypting it. Confidentiality is therefore never broken, from one end of the process to the other. And, because data protection is a concern for companies of all types, this technology is attracting attention far beyond the medical field.
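The principle can be illustrated with the Paillier cryptosystem, a well-known additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This is a didactic sketch with deliberately tiny primes, not CEA-List's technology or a secure implementation:

```python
import math
import random

# Toy Paillier cryptosystem -- additively homomorphic.
# Tiny primes for illustration only; real deployments use >=2048-bit moduli.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)                      # Carmichael function of n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)       # decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:                    # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Multiplying ciphertexts adds the underlying plaintexts:
a, b = encrypt(17), encrypt(25)
print(decrypt((a * b) % n2))  # 42 -- computed without decrypting a or b
```

A hospital could thus send encrypted values to an external compute service, receive an encrypted aggregate back, and decrypt only the final result, so confidentiality is preserved end to end.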
How do we strike the right balance between securing our systems and protecting individual civil liberties? Anyone involved in digital security has likely already engaged in a debate on this crucial topic, because the only barrier between an overly efficient algorithm and an intrusive Big Brother is the robustness of a piece of legislation. For example, automated facial recognition can be used both to secure a system and to monitor an entire population. It is up to digital technology providers to demonstrate social responsibility and ask probing questions about how the technologies they develop will really be used.
Despite all its promise, AI raises questions. The lack of transparency surrounding the most powerful AI technologies, based on complex neural networks, sows doubt among users, undermining the acceptance of AI by society at large. And uptake of AI by industrial companies has been slow due to AI’s failure to live up to industrial quality, robustness, safety, and cybersecurity requirements. CEA-List’s research addresses trust through programs on the development of explainable AI and software for the formal validation of neural networks. The institute is also engaged in national and international communities and European collaborative projects around trusted AI.
For more information, read the article Artificial intelligence: challenges to implementation
Algorithms have so far been about performance; for them to gain traction, we now need to build trust.
Popularized by Bitcoin, blockchain technologies can be used for far more than just cryptocurrency. Blockchains are an alternative to centralized databases. In a blockchain, the data is distributed among all interconnected participants via a cryptographic algorithm, guaranteeing, through a system of validation by all participants, the reliability of the information contained in the entire chain, from the first link to the last. There is therefore no need for a central authority to act as a trusted third party. Instead, the network’s ability to converge independently on the right information, and to guarantee that the transactions made between the various points in the chain are valid, provides the necessary transparency.
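The hash-chain mechanics behind this tamper-evidence can be sketched in a few lines. This is a minimal single-machine illustration (the distributed validation-by-consensus step is omitted), not a real ledger:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, which include the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev": prev_hash}

def chain_is_valid(chain):
    """Recompute every link: tampering with any block breaks all later links."""
    for prev, block in zip(chain, chain[1:]):
        if block["prev"] != block_hash(prev):
            return False
    return True

chain = [make_block("genesis", "0")]
for data in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    chain.append(make_block(data, block_hash(chain[-1])))

print(chain_is_valid(chain))            # True
chain[1]["data"] = "Alice pays Bob 500" # tamper with history
print(chain_is_valid(chain))            # False
```

Because each block embeds the hash of its predecessor, rewriting any past entry invalidates every subsequent link, which is exactly what makes the chain trustworthy without a central authority.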
These qualities make blockchain an ideal tool for guaranteeing the traceability of a product, particularly in the food industry, or for guaranteeing that the electricity supplied by a particular producer comes from a renewable source. The blockchain provides the necessary trust. One downside, however, is that building a blockchain is computing intensive. This is particularly true for blockchains, such as Bitcoin, that rely on “proof of work” protocols to validate transactions. The environmental costs are unsustainable.
New approaches that do not rely on proof-of-work protocols drastically reduce the computing resources required. CEA-List is currently developing such “green” blockchains, for example to aggregate medical data from multiple sources such as hospitals and other healthcare stakeholders. The blockchain guarantees where the information comes from and how reliable it is.