Developers, it appears, will not be replaced by artificial intelligence – at least not yet. What they will need to do is learn or sharpen their skills in providing templates for AI, become adept at fixing problems in AI-generated code, and learn where AI is genuinely useful in software development.
In its current state, AI has given users pause due to hallucinations, inaccuracies, and its tendency to simply make up an answer if it doesn’t know one. As Long Island music legend Billy Joel wrote, “it’s a matter of trust.”
To help developers gain confidence in AI, and to help organizations assess whether their developers have the requisite skills to ensure code is secure, Secure Code Warrior (SCW) will be discussing its new Trust Agents at the upcoming Black Hat conference, according to company co-founder and CTO Matias Madou. Trust Agents build on the Trust Score the company announced at the RSA Conference in April.
AI, he said, “doesn’t eradicate smart people. While a developer will be able to be more productive, if he or she doesn’t get more educated, they’ll only be creating bad code at rapid speeds. They will be faster, they will crank out more features, but not quality features, and not secure features.”
Many organizations have no idea whether their code is being written by secure developers or not. “Directors of AppSec, CISOs, find it’s really hard to know,” Madou said. “So what we’ve done is we can give you insights in your repositories, we can tell you if code was created by secure developers or insecure developers.”
The Trust Score is a way to determine how well-trained a developer is to write secure code, and their work can be compared to a benchmark. “We can give insight into how well are your developers in your organization creating secure code? How well-trained are they in creating secure code? And essentially, your Trust Score is an aggregate of all the skill scores of your developers, based on all their data as they work through the platform,” Madou explained. “So every individual developer that goes through our platform that takes training, that upskills himself or herself, gets a skill score, and the aggregate of the skill scores is a Trust Score.”
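Madou doesn’t spell out the aggregation formula, but the idea is easy to picture. The Python sketch below assumes a simple average of per-developer skill scores and an invented benchmark value; none of the names or numbers come from SCW’s platform.

```python
# Hypothetical per-developer skill scores (0-100) accumulated from training
# activity. Names, values, and the benchmark are invented for illustration;
# SCW has not published its actual aggregation formula.
skill_scores = {
    "dev_alice": 92,
    "dev_bob": 67,
    "dev_carol": 81,
}

# The Trust Score is described as an aggregate of individual skill scores;
# a plain average is one simple way to aggregate.
trust_score = sum(skill_scores.values()) / len(skill_scores)

BENCHMARK = 75  # assumed comparison benchmark
print(f"Org Trust Score: {trust_score:.1f} (benchmark: {BENCHMARK})")
print("above benchmark" if trust_score >= BENCHMARK else "below benchmark")
```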
“We sit on a mountain of data, of 250,000 active learners today, around 600 enterprise companies and 20 million data points,” Madou explained. “So we asked the group of data scientists, ‘Hey, if you look at the data here, can you figure out what a skilled developer looks like solely by looking at the data of how people go through our platform?’”
SCW’s Trust Agents, which integrate with GitLab, GitHub and Bitbucket – “all the Gits,” he said – don’t look at code, or check for errors. They will pick up metadata about a developer when he or she checks in code. Does that developer have a Trust Score? What level of secure coding is he or she at? Do they know what they’re doing? Based on that, they can say if a developer is secure or not.
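SCW hasn’t published how the Trust Agents hook into a check-in, so the following is only a rough sketch of the general pattern Madou describes: read the committer’s identity from push metadata, not from the code itself, and look up that developer’s score. The payload shape, lookup table, and threshold are all assumptions made for illustration.

```python
# Rough sketch of the pattern: inspect commit *metadata*, not code, and map
# the committer to a known score. All field names, values, and thresholds
# here are invented and do not reflect SCW's actual API or data model.

TRUST_SCORES = {                 # hypothetical lookup, e.g. synced from a training platform
    "alice@example.com": 92,
    "bob@example.com": 41,
}

TRUST_THRESHOLD = 70             # assumed cut-off for a "secure developer"

def assess_push(push_event: dict) -> list[str]:
    """Return an assessment for each commit author in a simplified push payload."""
    findings = []
    for commit in push_event.get("commits", []):
        author = commit.get("author", {}).get("email", "unknown")
        score = TRUST_SCORES.get(author)
        if score is None:
            findings.append(f"{author}: no Trust Score on record")
        elif score >= TRUST_THRESHOLD:
            findings.append(f"{author}: trusted (score {score})")
        else:
            findings.append(f"{author}: below threshold (score {score})")
    return findings

# Heavily simplified, invented push payload in the spirit of a Git webhook event
event = {"commits": [{"author": {"email": "alice@example.com"}},
                     {"author": {"email": "bob@example.com"}}]}
for line in assess_push(event):
    print(line)
```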
SCW found that some developers are very meticulous, with high accuracy, showing they know what they’re doing. Others click through the platform simply for compliance and aren’t learning anything, and that shows up in the data. “So out of the data, they were able to distill a pattern of what a secure developer looks like. And out of that, they get a score. If they do this, and do that, if they have high accuracy, and they touch on the OWASP Top 10, we can give them a high Trust Score, because they want to learn, and they understand that first they learn, then they prove.”
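Madou doesn’t reveal how those signals are weighted, but a toy example makes the idea concrete. The weighting below, combining training accuracy with OWASP Top 10 coverage, is invented purely for illustration and is not SCW’s scoring model.

```python
# Illustrative only: one way a per-developer skill score *could* combine the
# signals Madou mentions (accuracy and OWASP Top 10 coverage). The weights
# and scale are assumptions.

OWASP_TOP_10 = 10  # number of OWASP Top 10 categories

def skill_score(accuracy: float, owasp_categories_covered: int) -> float:
    """Blend training accuracy (0.0-1.0) and OWASP Top 10 coverage into a 0-100 score."""
    coverage = owasp_categories_covered / OWASP_TOP_10
    return round(100 * (0.7 * accuracy + 0.3 * coverage), 1)

print(skill_score(accuracy=0.95, owasp_categories_covered=9))  # meticulous learner
print(skill_score(accuracy=0.40, owasp_categories_covered=2))  # compliance click-through
```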
The Trust Agents, Madou said, can now see, “Oh, you’re doing something. Let me tell you about that developer. Let me tell you if that developer knows his or her stuff, or if they don’t.”