Risks from AI in software supply chains: red alert for companies!

In the fast-moving world of technology, security is a constant concern. Given the growing spread of artificial intelligence (AI) and the rapid pace of development in the software industry, managing the associated risks is becoming ever more challenging. An open letter from Patrick Opet, CISO of JPMorgan Chase, illustrates the industry's pressing security concerns. It highlights that 30% of security breaches in 2025 were caused by third-party components, a worrying doubling compared to the previous year. The figures come from the Verizon Data Breach Investigations Report 2025 and show how important it is to pursue security strategies that also cover external interfaces.

In this digital era, AI is evolving rapidly in software development. According to estimates by MarketsandMarkets, the AI coding sector will grow from around $4 billion today to almost $13 billion by 2028. At the same time, it is becoming clear that the efficiency gains AI can offer will not come without new risks. AI tools such as GitHub Copilot and Amazon Q Developer assist developers, but they lack human judgment and can reuse historical weaknesses found in code repositories. CyberScoop warns that these tools overwhelm classic security instruments such as SAST, DAST and SCA, which were not designed for AI-specific threats.
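To illustrate the kind of pattern-based checking that classic SAST tools perform, and why purely signature-driven scans can struggle with the volume of AI-generated code, here is a minimal sketch in Python. The patterns and file paths are illustrative assumptions, not rules from any specific product:

```python
import re
from pathlib import Path

# Illustrative insecure patterns a SAST-style scanner might flag in
# AI-generated code; real tools use far richer rule sets plus data flow analysis.
INSECURE_PATTERNS = {
    r"\beval\(": "use of eval() on dynamic input",
    r"\bmd5\(": "weak hash algorithm (MD5)",
    r"password\s*=\s*[\"'][^\"']+[\"']": "hardcoded credential",
    r"verify\s*=\s*False": "disabled TLS certificate verification",
}

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line number, finding) pairs for one source file."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern, message in INSECURE_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

if __name__ == "__main__":
    # Scan the current tree as an example; a CI job would scan the changed files.
    for source in Path(".").rglob("*.py"):
        for lineno, message in scan_file(source):
            print(f"{source}:{lineno}: {message}")
```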

Risks in the software supply chain

But what does the situation look like inside software supply chains? Organizations that use generative AI have to adapt their cybersecurity strategies to identify unexpected risks at an early stage, reports the Cloud Security Alliance. The problem is that many risks, such as malicious software packages or human error, often remain undetected. A current JFrog report found that of 25,229 exposed secrets in public repositories, over 6,790 were still actively usable, an increase of 64% compared to the previous year. This calls for a new level of vigilance toward security risks.
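As an illustration of how such exposed secrets are typically found, the following Python sketch scans a repository tree for a few common token formats. The signatures here are simplified assumptions; real scanners, such as those behind the JFrog findings, combine many more signatures with entropy analysis and live validation of the candidates:

```python
import re
from pathlib import Path

# Simplified signatures for well-known token formats (illustrative only).
SECRET_SIGNATURES = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic API key assignment": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]", re.IGNORECASE
    ),
}

def scan_repository(root: str) -> list[str]:
    """Walk every file under root and report lines matching a signature."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file, skip it
        for label, pattern in SECRET_SIGNATURES.items():
            if pattern.search(text):
                hits.append(f"{path}: possible {label}")
    return hits

if __name__ == "__main__":
    for hit in scan_repository("."):
        print(hit)
```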

With the integration of AI into software processes, the attack surface grows. Inadequately tested AI models can generate faulty outputs that turn into security risks. Security Insider emphasizes that malicious actors exploit exactly this, and that companies must secure their ML models like any other software component. Yet only about 43% of companies carry out security scans at the code and binary level, which not only leaves potential risks unnoticed but can even amplify them.
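One basic building block for securing ML models like any other software component is verifying the integrity of the model artifact before it is loaded. A minimal sketch, assuming the expected digest comes from a trusted source such as a signed manifest; the file path and digest in the usage comment are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large model artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_digest: str) -> None:
    """Refuse to proceed if the artifact does not match its pinned digest."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise RuntimeError(
            f"Model artifact {path} failed integrity check: "
            f"expected {expected_digest}, got {actual}"
        )

# Hypothetical usage: the expected digest should come from a trusted registry
# or signed manifest, never from the same location as the model file itself.
# verify_model(Path("models/classifier.bin"), "e3b0c44298fc1c14...")
```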

Strategies for risk reduction

What should be done? Organizations need to fundamentally revise their security strategies and adapt them to the new challenges. This includes checking the integrity of AI models, validating components proposed by AI, and monitoring processes that can detect potential data poisoning. Cybersecurity and software development must be linked more closely to meet the demands of a changing threat landscape.
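Validating components proposed by AI can start with something as simple as checking each suggested dependency against an internal allowlist and against the public package index, to catch hallucinated or typosquatted names before they are installed. A minimal sketch, assuming a PyPI-based Python project; the allowlist and the example suggestions are hypothetical:

```python
import urllib.request
from urllib.error import HTTPError

# Hypothetical internal allowlist of vetted dependencies; in practice this
# would live in a curated registry or lockfile reviewed by the security team.
APPROVED_PACKAGES = {"requests", "numpy", "cryptography"}

def exists_on_pypi(name: str) -> bool:
    """Check the public index so hallucinated names are caught before install."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10):
            return True
    except HTTPError as err:
        if err.code == 404:
            return False
        raise

def validate_suggestion(name: str) -> str:
    """Classify an AI-suggested dependency before it reaches the build."""
    if name in APPROVED_PACKAGES:
        return f"'{name}': approved"
    if not exists_on_pypi(name):
        return f"'{name}': not found on PyPI, likely hallucinated by the tool"
    return f"'{name}': exists but is unvetted, manual review required"

if __name__ == "__main__":
    for suggestion in ["requests", "reqeusts-helper", "super-ai-utils"]:
        print(validate_suggestion(suggestion))
```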

A cybersecurity program should anticipate future challenges, for example through multi-factor authentication and continuous threat exposure management (CTEM). Implementing a Zero Trust architecture could serve as an additional protective mechanism. Security teams must take responsibility for the entire software supply chain to prevent attacks on upstream assets and ensure the integrity of the data used.
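As a small example of what continuous exposure monitoring across the supply chain can look like in practice, the following sketch queries the public OSV vulnerability database for advisories affecting pinned dependencies. The dependency pins below are hypothetical; a real pipeline would parse them from the project's lockfile on every build:

```python
import json
import urllib.request

OSV_ENDPOINT = "https://api.osv.dev/v1/query"  # public vulnerability database

def known_vulnerabilities(name: str, version: str) -> list[str]:
    """Query OSV for published advisories affecting one pinned dependency."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": "PyPI"},
        "version": version,
    }).encode()
    request = urllib.request.Request(
        OSV_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        advisories = json.loads(response.read()).get("vulns", [])
    return [advisory["id"] for advisory in advisories]

if __name__ == "__main__":
    # Hypothetical pins; a real job would read these from the lockfile.
    for name, version in [("requests", "2.19.0"), ("urllib3", "1.24.1")]:
        ids = known_vulnerabilities(name, version)
        status = ", ".join(ids) if ids else "no known advisories"
        print(f"{name}=={version}: {status}")
```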

The integration of AI into the development process is inevitable. The challenge, however, is to ensure that these technologies are used safely and efficiently, because a technological revolution without security can quickly become a great danger.
