Making security and development co-owners of DevSecOps

In this Help Net Security interview, Galal Ibrahim Maghola, former Head of Cybersecurity at G42 Company, discusses strategic approaches to implementing DevSecOps at scale. Drawing on experience in regulated industries such as finance, telecom, and critical infrastructure, he offers tips on ownership models, automation, and compliance. His approach focuses on collaborative practices that balance speed, security, and developer productivity.


How do you recommend companies structure ownership of DevSecOps? Should security teams drive it, or is it more effective when it’s developer-led?

In my view, DevSecOps should be structured as a shared responsibility model, with clear ownership but no silos. Security teams must lead from a governance and risk perspective, defining the strategy, standards, and controls. However, true success happens when development teams take ownership of implementing those controls as part of their normal workflow.

In my career, especially while leading security operations across highly regulated industries, including finance, telecom, and energy, I’ve found this dual-ownership model most effective. For instance, during one engagement with a digital bank in the GCC, my team implemented secure coding standards, CI/CD hardening templates, and pre-commit scanning, all designed by the security team but embedded by developers. We also established “Security Champions” within engineering teams, which helped bridge the gap. These champions weren’t just trained; they were incentivized and empowered to make risk-based decisions aligned with business goals.
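As a rough sketch of the kind of pre-commit scanning described here (the patterns and file handling below are purely illustrative, not the bank’s actual tooling):

```python
#!/usr/bin/env python3
"""Minimal pre-commit secret check: blocks a commit if staged changes
appear to contain hard-coded credentials. Patterns are illustrative only."""
import re
import subprocess
import sys

# Illustrative patterns; real deployments use dedicated scanners with tuned rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                               # AWS access key ID format
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    re.compile(r"(?i)(password|api[_-]?key)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def staged_diff() -> str:
    """Return the diff of staged changes; only added lines are inspected."""
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> int:
    findings = []
    for line in staged_diff().splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue  # skip context lines and file headers
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(line.strip())
    if findings:
        print("Commit blocked: possible hard-coded secrets detected:")
        for f in findings:
            print(f"  {f}")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```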

If security tries to drive DevSecOps unilaterally, it risks becoming a bottleneck. But if developers lead without oversight, security debt accumulates fast. Think of security as the architect and developers as the builders. They both own the outcome.

How can organizations strike the right balance between automation and human oversight in DevSecOps? Are there specific phases of the pipeline where automation adds the most value, or the most risk?

Automation is critical in DevSecOps, but not everywhere, and not equally.

From my experience, I’ve seen the power of automation in the early stages of the pipeline. Static code analysis, dependency scanning, and secret detection during the pre-merge or CI phases can be automated with high confidence. These catch issues early, reducing the cost of fixing them by as much as 80%.
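A minimal sketch of how such an early-pipeline gate might look, assuming a hypothetical JSON report format from whatever SAST or dependency scanner the CI job runs:

```python
"""CI gate sketch: fail the pipeline only on high-confidence, high-severity findings.
Assumes a hypothetical report of the form
  {"findings": [{"id": "...", "severity": "critical", "confidence": "high"}, ...]}."""
import json
import sys

# (severity, confidence) pairs that should fail the build; everything else passes.
BLOCKING = {("critical", "high"), ("high", "high")}

def main(report_path: str) -> int:
    with open(report_path) as fh:
        report = json.load(fh)
    blocking = [
        f for f in report.get("findings", [])
        if (f.get("severity"), f.get("confidence")) in BLOCKING
    ]
    for f in blocking:
        print(f"BLOCKING: {f.get('id')} severity={f.get('severity')}")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```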

However, automation without context becomes dangerous, especially closer to deployment. I’ve led SOC teams that had to intervene because automated security policies blocked deployments over non-exploitable vulnerabilities in third-party libraries. That’s a classic example where automation caused friction without adding value.

So the balance is about maturity: automate where findings are high-confidence and easily fixable, but maintain oversight in phases where risk context matters, like release gates, production changes, or threat hunting. Human-led review, especially from seasoned analysts or red teamers, is vital in those areas.

How do you ensure security scanning tools don’t slow down developer productivity or trigger alert fatigue?

This is one of the most common points of failure I’ve seen in DevSecOps programs. Tools are often dropped into pipelines without tuning or context, overwhelming developers with irrelevant findings. The result? Fatigue, resistance, and workarounds.

What I’ve implemented across multiple client environments, particularly in financial services and critical infrastructure, is a risk-tiered alerting model. Scans are tuned to flag only actionable vulnerabilities. For example, CVEs that are exploitable and affect critical functions are escalated immediately, while low-risk findings are deferred for backlog remediation.
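A simplified illustration of such a risk-tiered triage rule; the field names and tier labels are assumptions, not any specific scanner’s schema:

```python
"""Risk-tiered triage sketch: only exploitable findings on critical assets page
the team immediately; everything else goes to the backlog."""
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    exploit_available: bool   # known exploit or actively exploited in the wild
    asset_criticality: str    # "critical", "high", "medium", "low"
    reachable: bool           # is the vulnerable code path actually used?

def tier(f: Finding) -> str:
    """Map a finding to an alerting tier."""
    if f.exploit_available and f.reachable and f.asset_criticality == "critical":
        return "page-now"          # immediate escalation
    if f.exploit_available or f.asset_criticality in ("critical", "high"):
        return "fix-this-sprint"
    return "backlog"               # tracked, but never interrupts developers

findings = [
    Finding("CVE-2024-0001", True, "critical", True),
    Finding("CVE-2024-0002", False, "low", False),
]
for f in findings:
    print(f.cve_id, "->", tier(f))
```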

In one major project involving ISO 27001 and NIST alignment across a multinational group, I integrated scanning into the developer’s IDE using pre-commit hooks and GitLab CI pipelines. We ensured results were contextual, with remediation tips built into the PR feedback loop. It made developers part of the solution, not the target.
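Pushing remediation tips into the merge request itself could look roughly like the sketch below, which posts a comment through GitLab’s merge request notes API; the bot token, finding fields, and remediation text are placeholders, not the project’s actual integration:

```python
"""Sketch of feeding scan results back into the merge request as a comment,
so remediation guidance lands where developers already work."""
import os
import requests

GITLAB_URL = os.environ.get("CI_SERVER_URL", "https://gitlab.example.com")
PROJECT_ID = os.environ["CI_PROJECT_ID"]        # provided by GitLab CI
MR_IID = os.environ["CI_MERGE_REQUEST_IID"]     # provided on merge request pipelines
TOKEN = os.environ["SCAN_BOT_TOKEN"]            # assumed bot token with api scope

def post_remediation_comment(finding: dict) -> None:
    """Post one finding with a remediation tip as an MR note."""
    body = (
        f"**{finding['title']}** ({finding['severity']})\n\n"
        f"File: `{finding['file']}`\n\n"
        f"Suggested fix: {finding['remediation']}"
    )
    resp = requests.post(
        f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/merge_requests/{MR_IID}/notes",
        headers={"PRIVATE-TOKEN": TOKEN},
        json={"body": body},
        timeout=10,
    )
    resp.raise_for_status()

post_remediation_comment({
    "title": "Hard-coded credential",
    "severity": "High",
    "file": "app/settings.py",
    "remediation": "Move the value to the secrets manager and rotate the key.",
})
```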

Also, establishing a feedback loop between engineering and security teams to continuously refine scanning rules is essential. Over time, we reduced false positives by 60% and improved scan acceptance by 70%.

How do you embed compliance requirements, such as those from NIST, ISO, or PCI DSS, into a DevSecOps pipeline without slowing it down?

Compliance shouldn’t slow down DevSecOps. It should be codified, automated, and baked into the process, not added at the end like a checklist.

For example, during one engagement where I helped a regional telecom provider align with both NIST and ISO 27001, we translated control requirements into technical policies. Infrastructure-as-Code (IaC) templates were embedded with hardening standards. Open Policy Agent (OPA) was used to enforce runtime configuration policies during Terraform deployments. CIS Docker benchmarks were applied automatically during container builds using open-source tools.
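That engagement enforced runtime policies with OPA and Rego; as an equivalent illustration of the same idea in Python, the sketch below reads the JSON output of `terraform show -json` and rejects unencrypted EBS volumes. The specific rule is an example, not the provider’s full hardening baseline:

```python
"""Policy-as-code sketch: deny a Terraform plan that creates unencrypted EBS volumes."""
import json
import sys

def violations(plan: dict) -> list[str]:
    """Return addresses of planned EBS volumes that are not encrypted."""
    bad = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_ebs_volume":
            continue
        after = (rc.get("change") or {}).get("after") or {}
        if not after.get("encrypted", False):
            bad.append(rc.get("address", "<unknown>"))
    return bad

if __name__ == "__main__":
    with open(sys.argv[1]) as fh:   # path to `terraform show -json plan.out` output
        plan = json.load(fh)
    bad = violations(plan)
    for addr in bad:
        print(f"DENY: {addr} must set encrypted = true")
    sys.exit(1 if bad else 0)
```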

We also built compliance-as-code dashboards that provided real-time evidence of conformance. That eliminated the painful manual audit prep cycles. When auditors requested proof of encryption, logging, or access controls, it was already documented via signed pipeline logs and automated scans.
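A compliance-as-code evidence record of that kind might be generated along these lines; the control IDs, check names, and hashing choice are illustrative assumptions rather than the group’s actual mapping:

```python
"""Compliance-as-code sketch: map controls to automated checks and emit a
tamper-evident evidence record per pipeline run."""
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical mapping of control IDs to the pipeline checks that evidence them.
CONTROL_CHECKS = {
    "ISO27001-A.8.24": "tls_and_at_rest_encryption_scan",
    "ISO27001-A.8.15": "central_logging_config_check",
    "PCI-DSS-7.2":     "iam_least_privilege_review",
}

def evidence_record(results: dict) -> dict:
    """Build one evidence entry per control from this run's check results."""
    record = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "controls": [
            {"control": c, "check": chk, "status": results.get(chk, "not-run")}
            for c, chk in CONTROL_CHECKS.items()
        ],
    }
    # Hash the record so later tampering with stored evidence is detectable.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

print(json.dumps(evidence_record({"tls_and_at_rest_encryption_scan": "pass"}), indent=2))
```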

This approach helped reduce audit preparation time by 40% while ensuring controls remained continuously enforced, not just point-in-time compliant.

How is the rise of AI and machine learning impacting DevSecOps practices and toolchains? Are there real use cases beyond hype, like prioritizing vulnerabilities or spotting malicious commits?

There’s a lot of noise in the AI space, but beneath it, there are some real, emerging capabilities that are starting to mature.

One of the most promising areas I’ve seen is AI-assisted vulnerability prioritization. During a recent initiative supporting cyber risk reduction for a national energy company, we used ML-based scoring models that incorporated exploitability trends, asset exposure, and historical patch delays. This allowed us to focus remediation on the fewer than 5% of issues that represented over 85% of the real risk. That level of focus dramatically improved MTTR and reduced risk exposure.
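In the spirit of that approach, a toy scoring model might look like the sketch below; the weights, features, and cut-off are illustrative, not the model used in the engagement:

```python
"""Prioritisation sketch: score each vulnerability from exploitability, asset
exposure, and historical patch delay, then work the small slice carrying most risk."""
from dataclasses import dataclass

@dataclass
class Vuln:
    vuln_id: str
    epss: float             # probability of exploitation (0-1), e.g. from EPSS
    internet_exposed: bool  # is the affected asset reachable from the internet?
    asset_value: float      # business criticality, normalised to 0-1
    patch_delay_days: int   # how long similar issues historically stayed open

def risk_score(v: Vuln) -> float:
    exposure = 1.0 if v.internet_exposed else 0.3
    delay_penalty = min(v.patch_delay_days / 90, 1.0)   # saturate at about a quarter
    return 0.5 * v.epss + 0.3 * exposure * v.asset_value + 0.2 * delay_penalty

vulns = [
    Vuln("CVE-2024-1111", epss=0.92, internet_exposed=True,  asset_value=0.9, patch_delay_days=60),
    Vuln("CVE-2024-2222", epss=0.02, internet_exposed=False, asset_value=0.2, patch_delay_days=10),
]
# Work the top slice first; in practice a cut-off like "top 5% of findings".
for v in sorted(vulns, key=risk_score, reverse=True):
    print(f"{v.vuln_id}: {risk_score(v):.2f}")
```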

Another area gaining traction is malicious code detection. Some AI-powered tools now analyze commit behavior, contributor reputation, and code patterns to flag anomalies, especially valuable in open-source-heavy environments. While not foolproof, these tools offer early warning systems that augment human review.
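A toy version of that commit-anomaly idea, with illustrative features and thresholds (production tools combine far more signals with learned models):

```python
"""Sketch of commit-anomaly flagging: route unusual commits to a human reviewer."""
from dataclasses import dataclass

@dataclass
class Commit:
    author: str
    files_touched: int
    adds_network_call: bool    # new outbound URL or socket use introduced
    off_hours: bool            # committed far outside the author's usual hours
    first_time_contributor: bool

def anomaly_score(c: Commit) -> float:
    score = 0.0
    score += 0.4 if c.adds_network_call else 0.0
    score += 0.2 if c.off_hours else 0.0
    score += 0.2 if c.first_time_contributor else 0.0
    score += 0.2 if c.files_touched > 30 else 0.0   # unusually large change
    return score

def needs_human_review(c: Commit, threshold: float = 0.5) -> bool:
    """Anomalous commits are escalated for review, not auto-blocked."""
    return anomaly_score(c) >= threshold

suspicious = Commit("new-maintainer", 42, True, True, True)
print(needs_human_review(suspicious))  # True -> escalate for manual review
```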

That said, we still need transparency. AI in DevSecOps must support explainable decisions, especially in regulated environments. I view AI as a decision-support layer, not a replacement for experienced engineers. Used correctly, it accelerates triage, highlights blind spots, and enables proactive defense.
