BSI: AI enables “unprecedented quality” for phishing attacks

The Federal Office for Information Security warns of security threats from artificial intelligence. However, much is still up in the air.

The German Federal Office for Information Security (BSI) takes the influence of artificial intelligence (AI) on the cyber security situation very seriously, but does not yet see any reason for alarmism. “In our current assessment of the impact of AI on the cyber threat landscape, we assume that there will be no significant breakthroughs in the development of AI, especially large language models, in the near future,” says BSI President Claudia Plattner, assessing the situation.

In a research paper, which is exclusively available to heise online, the BSI looks at already known threat scenarios on the one hand and the expected developments, including through AI, on the other. Although the major threat has not yet materialized, developments should not be underestimated.

According to the cyber security experts from Bonn, large language models (LLMs) are already having an impact: “In addition to general productivity gains for malicious actors, we are currently seeing malicious use in two areas in particular: social engineering and the generation of malicious code.”

In social engineering, where technical security precautions are circumvented through human contact with employees or service providers, AI enables an “unprecedented level of quality” in phishing attempts, for example, warns the BSI. “Conventional methods of detecting fraudulent messages, such as checking for spelling mistakes and unconventional use of language, are therefore no longer sufficient.”

The BSI gives a partial all-clear on the question of the extent to which malicious code is already being created fully automatically. “LLMs can already write simple malware, but we have not found any AI that is independently capable of writing advanced, previously unknown malware,” says the IT security authority in its assessment of the situation.

AI that employs sophisticated obfuscation techniques or independently discovers and exploits zero-day vulnerabilities is not yet a reality. Even the automated adaptation of existing malware has so far mainly been the subject of research work.
