AI Adoption Soars, but Security Concerns Haunt DevOps and SecOps
A recent study by Sonatype, a software supply chain management firm, finds that generative AI is significantly influencing the software development life cycle and the roles of software engineers. The report highlights a surge in AI adoption that brings with it rising security concerns for developers and security leaders.
What the Survey Found
Nearly all (97%) of the 800 developers and application security leaders surveyed currently use generative AI, and 74% report feeling pressured to adopt it despite the security risks. A majority agree that security threats are their top concern, underscoring the urgent need for responsible AI deployment to improve both software and security.
While DevOps and SecOps respondents generally hold similar opinions on generative AI, there are disparities in adoption and productivity. Key findings include:
- SecOps leads are early adopters: 45% have already integrated generative AI into the software development process, compared to 31% of DevOps leads.
- SecOps teams save more time: 57% report that generative AI saves them at least 6 hours per week, while only 31% of DevOps respondents agree.
Advantages of AI Adoption in Software Development
There are varying views on the advantages:
DevOps leads cite faster software development (16%) and more secure software (15%) as the technology's greatest benefits, while SecOps leads point to increased productivity (21%) and faster issue resolution (16%).
Major Concerns Due to Lack of Regulation in AI
More than 75% of DevOps leads believe the use of generative AI will lead to more vulnerabilities in open-source code; surprisingly, only 58% of SecOps leads share that concern. Additionally, 40% of SecOps leads and 42% of DevOps respondents believe a lack of regulation may discourage developers from contributing to open-source projects.
When asked who should regulate the use of generative AI, 59% of DevOps leads and 78% of SecOps leads responded that both government and individual companies should share responsibility for regulation.
According to Brian Fox, co-founder and CTO at Sonatype, “The AI era feels like the early days of open source, where we’re building the plane as we fly it in terms of security, policy, and regulation.” The software development cycle is no exception to mainstream adoption. While the productivity benefits are apparent, Sonatype's research also reveals a concerning reality: the security risks posed by this still-emerging technology. Because every innovation cycle brings new dangers, developers and application security leaders must approach AI adoption with a focus on safety and security.
Both groups also weighed in on licensing and compensation, since without clear rules developers may face legal issues such as the plagiarism accusations already leveled against Large Language Models (LLMs). Notably, decisions denying copyright protection to AI-generated art have already sparked debate over how much human involvement current law requires for authorship.
In the absence of copyright legislation, 40% of respondents agree that the author should hold the copyright for AI-generated output, and both groups overwhelmingly agree that developers should be compensated for code they wrote if it is used in open-source artifacts within LLMs (DevOps 93% vs. SecOps 88%).