Beware: Generative AI Will Increase Cyber Attacks, According to New Google Report


[Image: Examples of cybersecurity threats. Getty Images/Malte Mueller]

As technology gets smarter with developments like generative AI, so do cyberattacks. Google's new cybersecurity forecast reveals that the rise of AI brings new threats you should be aware of.

On Wednesday, Google released its Google Cloud Cybersecurity Forecast 2024, a report put together in collaboration with numerous Google Cloud security teams that dives deep into the cyber landscape for the coming year.

Also: ChatGPT down for you yesterday? OpenAI says DDoS attacks were responsible

The report found that generative AI and large language models (LLMs) will be used in various cyberattacks, such as phishing, SMS scams, and other social engineering operations, to make content and elements such as voice and video appear more legitimate.

For example, classic phishing giveaways, such as misspellings, grammatical errors, and missing cultural context, will be harder to spot when generative AI is used, because it does a great job of mimicking natural language.
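As an illustration of why those surface signals stop working (this toy example is not from Google's report), a filter that scores messages on misspellings and urgency phrases catches a clumsy, human-written lure but lets a fluent, AI-polished rewrite of the same lure through:

```python
import re

# Toy phishing scorer based on classic "giveaway" signals: misspellings of
# common words and urgency phrases. Purely illustrative -- the word lists
# and scoring are assumptions, not from Google's report.
MISSPELLINGS = {"acount", "verfy", "recieve", "securty", "pasword"}
URGENCY = {"act now", "immediately", "suspended", "verify your"}

def giveaway_score(message: str) -> int:
    """Count surface-level phishing tells in a message."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    score = len(words & MISSPELLINGS)      # misspelled-word hits
    lower = message.lower()
    score += sum(1 for phrase in URGENCY if phrase in lower)  # urgency hits
    return score

# A clumsy, human-written phish trips the scorer...
clumsy = "Your acount is suspended. Verfy your pasword immediately."
# ...while a fluent, LLM-polished version of the same lure scores zero.
fluent = "We noticed unusual activity on your account and have paused access as a precaution."
```

Here `giveaway_score(clumsy)` is 5 while `giveaway_score(fluent)` is 0, even though both messages carry the same lure.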

In other instances, attackers can feed an LLM legitimate content and generate a modified version that suits their goals while keeping the style of the original input.

Also: Australia to investigate Optus outages that affected millions

The report also predicts that LLMs and other generative AI tools offered as paid services will increasingly be developed to help attackers deploy their attacks more efficiently and with less effort.

However, malicious AI or LLMs may not even be necessary. Using generative AI to generate content, such as drafting an invoice reminder, is not harmful in itself, but attackers can exploit that same output to target victims for their own goals.

For example, ZDNET has previously covered how scammers are using AI to impersonate the voices of family members or friends to defraud them of money.

Another potential generative AI threat involves information operations. Using only AI prompts, attackers can use generative AI models to create fake news, fake phone calls, and deepfake photos and videos.

Also: What is a passkey? Experience the life-changing magic of going passwordless

According to the report, these campaigns can enter the mainstream news cycle. The sheer scale of this activity could erode public trust in news and online information, to the point where people become more skeptical of, or stop believing, the news they receive.

“This could make it increasingly difficult for businesses and governments to engage with their audiences in the near future,” the report said.

While attackers are using AI to power their attacks, cyber defenders can leverage the same technology to build more advanced defenses.

Also: 3 Ways Microsoft’s New Secure Future Initiative Aims to Combat Growing Cyber Threats

“AI is already providing a tremendous benefit to our cyber defenders, enabling them to improve capabilities, reduce investment and better protect against threats,” said Phil Venables, CISO of Google Cloud. “We expect these capabilities and benefits to emerge in 2024 as defenders own the technology and thus direct its development for specific use cases.”

Some of the uses of generative AI for defenders include synthesizing large amounts of data, yielding actionable detections, and responding at a faster pace.
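A minimal sketch of what that workflow could look like in practice (the alert fields, severity scale, and triage heuristic here are assumptions, not Google's actual tooling): rank incoming alerts by severity, then condense the top items into a single prompt that can be handed to whatever LLM service a defender uses for summarization.

```python
from dataclasses import dataclass

# Illustrative alert-triage sketch: rank alerts, then build a prompt asking
# an LLM to summarize the highest-priority ones for a human analyst.
@dataclass
class Alert:
    source: str
    severity: int  # assumed scale: 1 (low) .. 5 (critical)
    message: str

def top_alerts(alerts: list[Alert], limit: int = 3) -> list[Alert]:
    """Return the highest-severity alerts first."""
    return sorted(alerts, key=lambda a: a.severity, reverse=True)[:limit]

def build_summary_prompt(alerts: list[Alert]) -> str:
    """Condense ranked alerts into one prompt for any LLM API."""
    lines = [f"[sev {a.severity}] {a.source}: {a.message}"
             for a in top_alerts(alerts)]
    return ("Summarize these security alerts for an on-call analyst:\n"
            + "\n".join(lines))

alerts = [
    Alert("ids", 2, "Port scan from 203.0.113.7"),
    Alert("edr", 5, "Credential dumping detected on host web-01"),
    Alert("waf", 3, "SQL injection attempt blocked"),
]
prompt = build_summary_prompt(alerts)
```

The heuristic ranking does the cheap, deterministic work up front, so the (comparatively expensive) LLM call only ever sees the few alerts that matter most.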
