The Rise of AI in Cybersecurity: How to Keep Your Organization Secure in the Digital Age

by Ana Clarke 1/02/2023

Introduction

As cyber threats continue to evolve and become increasingly sophisticated, organizations are turning to the power of artificial intelligence (AI) to stay ahead of the game. From machine learning algorithms that detect and prevent cyber-attacks, to AI-powered defensive strategies that protect sensitive data, the role of AI in cybersecurity is becoming increasingly vital.

One of the key benefits of AI in cybersecurity is its ability to minimize security breaches by augmenting current security teams, processes, and tools. Organizations do not have to rip apart their existing security infrastructure and replace it with AI-powered tools. Instead, AI can work in harmony with existing security teams and processes, providing a more comprehensive and effective defence against cyber threats.

In this article, we will delve into the three main domains where AI is reshaping cybersecurity: defensive techniques, offensive techniques, and attacks on AI-based systems. We will explore applications of AI in cybersecurity, such as threat detection and prevention, and examine how AI systems themselves can be vulnerable to cyber-attacks. By the end of this article, you will have a comprehensive understanding of the role of AI in strengthening cybersecurity efforts and of its potential vulnerabilities.

Enhancing cyber defence strategies with AI

Artificial Intelligence can be utilized throughout the entire security process, from prevention to detection and response. Specifically, many of the AI techniques being applied in defensive security today are concentrated on identifying potential cyber-attacks, working primarily in the detection stage. To comprehend how AI improves the efficiency of these instruments, let's take a look at the example of a classic security detection mechanism, such as a spam filter.

A spam filter is a tool that aids in identifying whether an email is spam or not. The traditional structure for building a spam filter involves analysing the email's content, sender, and other characteristics.

In the traditional programming approach, a software engineer begins by analysing the problem and identifying the differences between legitimate and spam emails. Then, they create a set of static rules based on these differences. For example, an email written in more than two languages has a high probability of being spam, or if an email contains a specific list of words, it is likely to be spam.

The software engineer identifies all possible inputs and conditions that the system would be exposed to and creates a program that can handle those specific inputs and conditions. The expert then tests the solution and analyses any errors. If necessary, they would have to return to the original problem and understand where the differences between a good email and spam were incorrectly identified.
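The rule-based approach just described can be pictured as a short script. The blocklist, domain, and thresholds below are invented purely for illustration; a real filter would encode many more such guesses:

```python
# Hypothetical hand-crafted rules for a spam filter; the blocklist,
# free-provider domain, and thresholds are invented for illustration.
SPAM_WORDS = {"lottery", "winner", "prize", "urgent", "transfer"}
FREE_PROVIDERS = {"freemail.example"}

def is_spam(subject: str, body: str, sender: str) -> bool:
    text = f"{subject} {body}".lower()
    # Rule 1: two or more words from a fixed blocklist
    if sum(word in text for word in SPAM_WORDS) >= 2:
        return True
    # Rule 2: free-provider sender combined with an all-caps subject
    domain = sender.rsplit("@", 1)[-1]
    if domain in FREE_PROVIDERS and subject.isupper():
        return True
    # Rule 3: excessive exclamation marks
    if text.count("!") > 3:
        return True
    return False
```

Every rule here encodes a guess the engineer made in advance: an attacker who avoids the blocklisted words or tones down the punctuation slips straight through.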

This traditional programming approach in cybersecurity has its limitations. One major constraint is that the set of rules created depends heavily on the ability of the expert to foresee the possible combinations and their ranges.

Additionally, these static rules are vulnerable to changes in the tactics used by attackers. If the program receives an input that it is not designed for, it will fail to handle that situation, leading to a tedious and complex process of constantly reviewing and updating the rules.

To address these limitations, many companies are turning to AI and machine learning (ML) to enhance their cybersecurity efforts. One way that ML is being used is to replace the static set of rules defined in traditional programming approaches.

Instead of manually creating a set of rules, the data science team first identifies training data, such as source code, log files, past emails, or program execution context. This data is then fed into an ML model, which learns from it the patterns and features needed to classify new incoming emails as spam or not spam.

This approach shifts the task of creating classification rules away from the expert: the rules are instead inferred from data. If attackers change their methods, the algorithm can simply be retrained with new data, making the process faster and easy to tailor to specific industries.
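As a sketch of this data-driven alternative, here is a toy Naive Bayes spam classifier trained on a handful of made-up emails. A production system would use far more data and richer features, but the principle of inferring the rules from examples rather than writing them by hand is the same:

```python
from collections import Counter
import math

# Tiny labelled corpus standing in for "past emails": 1 = spam, 0 = legitimate.
# The texts are invented for illustration.
TRAIN = [
    ("win a free prize now", 1),
    ("urgent transfer your account details", 1),
    ("claim your lottery winnings today", 1),
    ("meeting agenda for monday", 0),
    ("quarterly report attached", 0),
    ("lunch with the team tomorrow", 0),
]

def train_naive_bayes(data):
    """Count word frequencies per class; these counts ARE the learned 'rules'."""
    counts = {0: Counter(), 1: Counter()}
    docs = Counter()
    for text, label in data:
        docs[label] += 1
        counts[label].update(text.split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, docs, vocab

def predict(model, text):
    """Return the most likely class using log prior + smoothed log likelihoods."""
    counts, docs, vocab = model
    total = sum(docs.values())
    scores = {}
    for label in (0, 1):
        score = math.log(docs[label] / total)
        denom = sum(counts[label].values()) + len(vocab)  # add-one smoothing
        for word in text.split():
            if word in vocab:
                score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

model = train_naive_bayes(TRAIN)
```

Retraining on attackers' new tactics is just a matter of extending `TRAIN` and calling `train_naive_bayes` again, which is what makes this approach easier to adapt than hand-maintained rules.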

While there are pre-built solutions available in the market, they often have a one-size-fits-all approach and may not be tailored to the specific needs of a particular industry. By using ML, companies can create a customized solution that adapts to the unique characteristics of their industry.

The role of AI in advancing offensive security strategies

In the previous section, we examined how implementing AI can improve the cybersecurity of an organization. Now, let's examine the other side of the coin: the ways in which AI and ML are enabling cybercriminals to carry out security attacks with greater speed, scope, and stealth. The same technologies and techniques you rely on to protect your organization are just as easily accessible to your adversaries.

AI is becoming a powerful tool in the hands of attackers. There are many tools available to automate common tasks such as scanning networks and discovering services, as well as a variety of offensive techniques that attackers use, such as social engineering.

Social engineering is a form of psychological manipulation that is often used in cybersecurity attacks. It involves tricking individuals into revealing sensitive information or performing actions that may compromise their own security or the security of their organization. Social engineering attacks can take many forms, including phishing scams, pretexting, baiting, and scareware.

One common example of social engineering is phishing, which involves sending fraudulent emails or messages that appear to be from a legitimate source and request sensitive information, such as login credentials or financial information.

With the development of large natural language processing models like GPT-3 and ChatGPT, it is becoming increasingly easy for cybercriminals to create sophisticated phishing emails.

ChatGPT is an NLP model recently launched by OpenAI that is capable of generating human-like text from a given prompt.

Attackers often rely on formats or templates when launching their campaigns. Defence systems that rely on static indicators, including text strings, would be impacted by a shift to more unique content in phishing emails.

As an example, we gave ChatGPT an email template commonly used in payroll diversion phishing attacks and asked it to write three new variations. ChatGPT generated the list in less than a minute. As you can see in Figure 3, they are all considerably different from one another.

ChatGPT was not developed for this kind of use, but as we can see in this simple example, it could allow scammers to craft unique content based on an initial format, making detection based on known malicious text string matches much more difficult.

ChatGPT: Write 3 new variations of the following paragraph: “I have moved last week, and I have changed my bank account as well. What details do I provide to update my direct deposit information on record? Also, can it be effective for my next payroll? Looking forward to hearing from you.”

If that were not enough, recent advances in Deep Learning have supercharged a further category of attacks, such as CEO fraud.

This type of attack is often designed to trick employees into divulging sensitive information or transferring money to the attacker's account. For example, an attacker may send an email claiming to be from the CEO and requesting that an employee transfer money to a specific account. The attacker may use fake invoices, purchase orders, or other documents to support their request and make it appear legitimate.

CEO fraud attacks can be difficult to detect because they often use logos and branding that appear legitimate, and they may use domain names that are like those of the real organization.

Because employees are now much more aware of it, this cybercrime had been greatly reduced. However, with AI techniques, it is back in a more sophisticated form. Attackers can now generate synthetic images, audio, and video; with such synthetic data, an attacker can impersonate another person or even fabricate a false record of events.

This AI technology has been used to enhance the CEO fraud technique. An attacker can collect videos of a senior manager, email an employee an invitation to an online meeting, emulate the manager's face and voice in real time, and ask the employee to perform an urgent task that usually involves a money transfer.

New breed of cyber-attacks: targeting ML algorithms and models

So far, we have reviewed how AI is being applied to leverage cybersecurity strategies. Now, let's delve deeper into the topic of cybersecurity in AI-deployed systems.

We are not just talking about the well-known attacks such as buffer overflow, denial of service, man-in-the-middle, or phishing attacks. Instead, we are discussing a new breed of attacks on the machine learning algorithms and models that are the building blocks of AI solutions.

By creating an AI-based system, we have also created a new attack surface and opened up new avenues for malicious actors to exploit. In the field of machine learning, there is a subfield called Adversarial Machine Learning that specifically studies these types of attacks and the defences against them.

Adversarial Machine Learning focuses on developing techniques to defend against malicious attacks on machine learning models. These attacks can take many forms, such as feeding a model maliciously crafted input data to cause it to make incorrect predictions or manipulating the model's parameters to cause it to behave in unintended ways.

In one of the most relevant papers on the topic, Goodfellow et al. explore the concept of adversarial examples and how they can be used to fool machine learning models. The authors explain that adversarial examples are inputs to a model that are deliberately constructed to cause the model to make a mistake. They show that these examples can be easily generated for many types of models and that they can be used to fool state-of-the-art image classifiers with high confidence.

In their work, the authors used GoogLeNet, a CNN with 22 layers trained on ImageNet to classify images into 1,000 object categories, including many animals. Working with a pretrained version of this network, they asked: how can we attack this system? How can we make it produce a wrong prediction through a change that is not visible to the human eye?

They took a picture of a panda bear and added an imperceptibly small vector, a layer of noise. After adding this noise, there is no visible change to the image. For the AI-based system, however, that layer of noise causes the internal rules learned by the CNN to break down: the system classified the image of the panda bear as another animal (a gibbon, in the original paper) with 99% confidence.
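The mechanism can be illustrated with a minimal sketch of the fast gradient sign method from that paper, applied here to a toy logistic-regression "classifier" rather than GoogLeNet. Every number and name below is illustrative, not taken from the original experiment:

```python
import math
import random

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Toy stand-in for an image classifier: logistic regression over 100 "pixels".
random.seed(0)
w = [random.gauss(0.0, 1.0) for _ in range(100)]

def predict_prob(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

# A clean input the model classifies as class 1 with high confidence.
x = [0.1 if wi > 0 else -0.1 for wi in w]
y = 1
clean_prob = predict_prob(x)

# FGSM: nudge each pixel by epsilon in the direction of the sign of the
# gradient of the loss with respect to the input. For logistic loss that
# gradient is (p - y) * w. Each pixel moves only a little, but every move
# systematically pushes the score the wrong way.
epsilon = 0.2
grad = [(clean_prob - y) * wi for wi in w]
x_adv = [xi + epsilon * (1.0 if gi > 0 else -1.0) for xi, gi in zip(x, grad)]
adv_prob = predict_prob(x_adv)
```

In Goodfellow et al.'s actual experiment the per-pixel perturbation was far smaller (an epsilon of 0.007), which is why the altered panda image looks unchanged to a human while the classifier's answer flips.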

Classifying the image of a panda bear incorrectly might not seem like a big deal, but AI systems based on computer vision are being deployed in many fields, such as traffic management in cities, object detection systems, and so on. Thus, a cyber-attack on these algorithms could have devastating consequences.

Getting your organization ready for AI in cybersecurity

As a leading consultancy firm in the field of AI and data, we understand the importance of preparing your organization for the integration of AI in cybersecurity. With the rapid advancements in technology, it's crucial for companies to stay ahead of the curve and protect their assets from cyber threats.

It is important to understand that implementing AI in cybersecurity is not a one-time task, but rather a continuous process of learning, testing, and improving.

So, what if your organization has limited or no prior experience with implementing AI? How do you even get started?

The first step in getting your security organization ready for AI is to assess your current cybersecurity infrastructure and identify any areas that can be improved with the integration of AI. This includes identifying potential vulnerabilities and areas where AI can help to automate and enhance your security capabilities.

Next, it's important to establish a clear plan and roadmap for implementation. This includes identifying specific use cases for AI in cybersecurity, such as automated threat detection and response, and identifying the necessary resources and budget for implementation.

It's also crucial to ensure that your team has the necessary skills and training to effectively implement and manage AI in cybersecurity. This may include training in data science and machine learning, as well as hands-on experience with AI platforms and tools.

In summary, getting your security organization ready for AI requires a clear understanding of the basics of AI, a thorough assessment of your current systems and processes, and a well-designed plan for integration. With the right guidance and preparation, your organization can harness the power of AI to improve your cybersecurity and protect against cyber-attacks.

At AC Consulting, we offer a wide range of services to help organizations prepare for AI in cybersecurity. From assessment and planning to implementation and training, we have the expertise and resources to help you navigate the complexities of AI and ensure that your organization is protected from cyber threats.

If you're ready to take the next step in preparing your organization for AI, contact us today to learn more about our services and how we can help you get started.


Implementing AI with a Copernican mindset shift

by Ana Clarke 19/11/2022

Introduction

Many industries are still figuring out how to get the most out of Artificial Intelligence (AI). If we can make its value clear to executives in their organization, we can play a big role in moving things forward.

In a study commissioned by Capital One, Forrester Consulting surveyed 150 data management decision-makers in North America about their organizations’ Machine Learning (ML) goals, challenges, and plans to operationalize ML. According to this study, 73% of the respondents said that they struggled to explain the business value of their ML applications to executives.

In my professional career in Data Science and AI, I have seen several projects fail. All of them had four things in common: there was no Proof of Concept; the business problem was not understood first; people became overexcited about the technology; and analysts and developers started to code too quickly.

This article discusses how to embrace a Copernican revolution, whether in an AI-based business or in an organization just starting this journey, in order to adapt and not merely survive the sixth wave of innovation, but succeed in it.

How we can add business value from the baseline

When products and services reach the market, the sales process begins. In this process, the traditional approach is to present a sales deck with our product's features, client logos, case studies, and pricing table. However, if our sales process revolves around us, we are probably not as effective as we could be.

From a Copernican revolution perspective, the sales process is not about us (our company, our product, or our pricing), it is about the client and solving a problem for them, making clear that we are there to be a partner in getting the problem solved.

It is about doing a lot of listening to understand the nature of their business, what problems they are trying to solve, and what their company’s needs are.

We need to become much more customer-centric, embracing a Copernican business mindset: we stop thinking of the firm as the centre of our universe and start thinking of the client as the centre, with us orbiting around them and delivering value to them.

We always start with the business problem and work backwards to a Data Science solution, not the other way around.

Minding the cultural lag

Most organizations have been hit by a cultural lag between technology and the people adopting it. Whilst technology advances at an exponential rate, people change at a linear rate.

Business leaders who run companies in general were trained to handle change in the downturn by focusing on controlling things. This affects how the culture is developed, what metrics are created and measured, and the skills and behaviours that are valued.

In the current upturn with an ever-changing environment, the measures, the skills and the culture need to change and adapt. Leadership struggles to manage their business in a world that basically no longer aligns to what they were taught.

AI and data analytics have played an instrumental role in helping organizations react to the speed of change, analysing data in real time and handling high volumes of data.

Implementing AI with a Copernican mindset

Successful AI transformation is not actually about technology. It is about transforming the mindset, culture and strategy of the organisation. The ‘what’ and ‘how’ of AI only makes sense when it is applied to a compelling ‘why’.

The problem with AI transformation in many organizations is never a lack of ideas; it is a lack of change in behaviour: analog mindsets in a digital world, weighed down by traditional thinking or existing infrastructure.

Developing a digital project does not make a company digital. A technological transformation based on data and AI is a journey, not a destination. It represents a paradigm shift in the organization’s mindset.

We can see that most businesses are not truly trying to transform; they are simply digitizing their current business.

The most common approaches to technological transformation are either replicating what we are already doing or imitating what other companies do.

Thinking about new paradigms, and not just about continuous improvements to old ways of working, is an opportunity for the organization to redefine what it stands for, how it operates, and what it rewards.

AI is here to enrich our lives, not to replace us.

Please get in touch if you are interested in Data and AI, or if you would like to bring your organization along on this journey. You can reach me via LinkedIn or via the contact form on this website.

Turn on Algorithms

by Ana Clarke 3/11/2022

State of AI

Today we all live with systems that use Artificial Intelligence (AI): platforms that recommend movies, apps that find the best way to get somewhere, and the places where we buy products. AI is becoming part of everything, from medical breakthroughs in cancer research to cutting-edge climate change research.

The most successful companies in the world today are digital and have machine and deep learning algorithms at the heart of their business. But not all these companies were born digital. They embraced the power of data science and AI technologies as part of their growth.

AI will be the new electricity. We are still discovering new uses for electricity today, and with AI it is the same. Every day we hear about a new device, app, or solution that adds value to a business or to society, built on an AI algorithm.

Also, because AI is becoming more commonplace, in the near future we will most probably take it for granted, as we do with electricity. We only realise the importance of electricity in our lives when we have a power cut.

Perhaps not everything will be based on AI in the future, but everything will contain, in some form or another, certain elements of artificial intelligence.

AI is going to impact all business models and jobs. According to the OECD, AI technology and automation will change up to 1.1 billion jobs over the coming decade. Organisations are full of jobs that nobody wants to do: routine, boring jobs that require no creativity. AI is going to replace the tasks of these jobs, not the people.

A mass replacement of jobs is not forthcoming, but rather a replacement of tasks within jobs. According to the World Economic Forum, an estimated 85 million jobs will indeed be lost, but 98 million new jobs will be created across 26 countries. Overall, more jobs are expected to be created than lost.

AI is not only impacting how we perform our work. It is becoming a powerful technology for enhancing decision-making processes across all levels of an organization, from tailoring promotions to customers according to their consumer behaviour to being integrated with digital twin environments.

Where to start

For any AI project to translate into concrete benefits for your shareholders, someone needs to lead it. Leadership is essential to carry out this transformation process and manage the expectations of the members of the organization in this regard.

Incorporating AI or ML technology is not always what we usually call plug and play. And the return on investment may not be immediate.

Before starting to work on any data science project, it is key to strike a good balance between feasibility, budget, time, and value to the business. Often we know that we want to start using these technologies, but we are not sure what we want to do with them, or how they will improve our business. It is key to invest time in analysing this first by performing an exploration and initial discovery project.

Building a Proof of Concept (PoC)

A PoC is a version of a product or solution with just enough features to satisfy the high-level goal of the project. It is not a final solution; improvements and enhancements are likely possible. The benefit of putting in place a PoC is that it can be built and implemented quickly to test and prove to the business that there is real potential in the area.

A simple PoC can be easily understood by all stakeholders, which gives us a higher probability of getting the final solution into production. It also provides an initial, visible impact that helps break down barriers of distrust or scepticism.

What does a PoC include?

· Use data that is already available and as clean as possible.

· Use a basic model.

· Only apply the solution to a small number of customers or end users.

AI with an ethical vision

Creating a framework to implement AI with an ethical vision is critical. The ethical stakes of creating new kinds of intelligence demand moral criteria from the people developing them. Just as people today care about the products they consume, they will care about the transparency of the algorithms they use.

When we create AI solutions, we need to:

  • Pay attention to vulnerable groups, such as minors or people with disabilities.

  • Respect fundamental rights and applicable regulations.

  • Operate with transparency.

  • Avoid restricting human freedom.

If you are thinking about implementing AI in your organization, I hope these ideas help you to think strategically about it and to understand the implications of lagging behind.

Please get in touch if you are interested in Data and AI, or if you would like to bring your organization along on this journey. You can reach me via LinkedIn or via the contact form on this website.

International Day of Mathematics (Pi Day)

by Ana Clarke 14/03/2022

In 1988, the physicist Larry Shaw, known as the "Prince of Pi", proposed establishing March 14 as Pi Day. Using the US month-day date notation and accepting 3.14 as an approximation of pi, we get 3-14.

Pi is a constant that establishes the relationship between a circumference and its diameter, regardless of which circumference or diameter it is. The constant of this relationship is the same for a wheel, a coin or even the rings of Saturn.

The number Pi is one of the most fascinating and famous numbers in mathematics. The interest that Pi unleashes among mathematicians and non-mathematicians goes beyond all limits. Let’s review some of the milestones behind Pi number’s fame.

Who discovered the number pi?

Establishing who was the first to deduce the value of Pi is not easy. 2,000 years before Christ, the Babylonians had calculated it at 3.125. Around the same time, the Egyptians said it was 3.16049. Even the Bible provides its version of the number Pi with an approximation to 3.

In the third century B.C., Archimedes of Syracuse approximated the number Pi using polygons inscribed inside and circumscribed outside a circle. He ended up with a 96-sided polygon, and his final approximation was 3.14163. The Greek letter π was later adopted for this constant because it is the first letter of the Greek words περιφέρεια and περίμετρος, meaning periphery and perimeter.

A century later, Claudius Ptolemy used a 720-sided polygon to arrive at another approximation of Pi: 3.141666. Liu Hui, in the third century A.D., worked with polygons of up to 3,072 sides to get an approximate value of 3.14159. Zu Chongzhi added two decimal places in the 5th century: 3.1415929. Similar results were later reached in India and Persia, and in Italy by Fibonacci.
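A modern rendering of this polygon-doubling idea fits in a few lines. Starting from a regular hexagon inscribed in a unit circle (whose side equals the radius, 1), each doubling of the number of sides brings half the polygon's perimeter closer to Pi from below:

```python
import math

def inscribed_pi(doublings: int) -> float:
    """Approximate Pi by doubling the sides of an inscribed polygon.

    Starts from a regular hexagon inscribed in a unit circle (side = 1).
    Each doubling uses the half-angle identity: if s is the side of the
    n-gon, the side of the 2n-gon is sqrt(2 - sqrt(4 - s*s)).
    Half the polygon's perimeter, n * s / 2, approaches Pi from below.
    """
    n, s = 6, 1.0
    for _ in range(doublings):
        s = math.sqrt(2 - math.sqrt(4 - s * s))
        n *= 2
    return n * s / 2

# Four doublings give the 96-sided polygon Archimedes ended with;
# a dozen doublings already agree with Pi to several decimal places.
```

With zero doublings this returns exactly 3 (the hexagon), echoing the crude ancient estimates, and each extra doubling mirrors the progression from Archimedes through Ptolemy, Liu Hui, and Zu Chongzhi.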

In 1615, Ludolph van Ceulen accurately found the first 35 digits of pi. It was such a milestone in the history of pi that Van Ceulen had these digits engraved on his own grave.

William Shanks dedicated 20 years of his life to the study of the number pi. In 1872 he published the first 707 decimal places of pi (it was later found that only the first 527 were correct). As a tribute to Shanks's achievement, pi with those 707 decimal places was written under the dome of the Palais de la Découverte in Paris.

This was the last milestone achieved by mathematicians calculating by hand, in the era before calculators. Once computers took over, the challenge became computing the most decimal places of Pi in a given amount of time.

Interesting properties of the number Pi

In the 1760s, Johann Heinrich Lambert was the first to prove that the number pi is irrational. This means that Pi cannot be expressed as the ratio of two integers. Pi has an infinite decimal expansion with no repeats and no patterns. Pi decimal numbers go on and on.

Pi is also a transcendental number, an exciting property proved by Ferdinand von Lindemann in 1882. This means that pi is not the root of any polynomial with integer coefficients; consequently, pi cannot be written as any finite combination of integers and/or their roots.

Transformed by Pi

The number pi has fascinated so many people that:

  • When a survey was conducted among subway passengers in my home city of Buenos Aires to find out which number they remember most, Pi came in first place, so 31416 was set up as the emergency number in the subway.

  • There is a Pi Bar in San Francisco, USA.

  • Pi fans have tattooed the Greek letter and several of its decimals.

  • People have entered into the Guinness Book of Records for remembering by heart thousands of the decimals of the number pi.

  • There are mnemonic rules for remembering the first decimal numbers of pi.

  • The cinema has also echoed the fame of the number Pi with films like Pi - Faith in Chaos.

The number Pi is not only mathematically interesting; it has many applications in our lives. At NASA, for example, Pi plays an essential role in the day-to-day work of its scientists, who use it to measure craters and to determine the size of planets that orbit stars other than our own (exoplanets).

Pi is also behind the Internet, mobile phones, GPS signals and the radio, because it appears throughout the mathematics of the periodic waves that carry their signals.