SaaS & Technology

Global CI/CD Unification — Platform Engineering at Scale

CloudForge consolidated 15+ fragmented CI/CD configurations (GitLab CI, Jenkins, custom scripts) into a unified template system serving 500+ developers across 4 continents, eliminating $100K/yr overhead and reducing CI setup time from 2–3 days to 30 minutes. CloudForge also deployed a hybrid identity platform (SSO/OIDC), cutting 12-second peak authentication latency to under 2 seconds for 50K+ users.

2–3 days → 30 min
CI setup time
$100K/yr eliminated
Maintenance overhead
12s → < 2s
Auth latency
85% reduction
Identity incidents
6 months
4 engineers
Multi-Cloud · Azure · CI/CD · Platform Engineering

A global enterprise technology company

The client is a global enterprise technology company with engineering offices in San Francisco, London, Warsaw, and Bangalore, employing over 500 developers across 40+ product teams. The company had grown through both organic expansion and acquisitions over the preceding 5 years, and each growth event brought new engineering teams with their own tooling, processes, and platform preferences. The result was a CI/CD landscape that reflected the company's organizational history rather than any intentional architecture: GitLab CI in the European offices (3 distinct configuration variants), Jenkins in the Bangalore office (4 variants with different plugin sets), custom Bash scripts in the San Francisco headquarters (5+ scripts of varying sophistication), and GitHub Actions in the London office (3 variants that had emerged after a recent tooling initiative). Across all offices, there were 15+ distinct pipeline configurations with zero shared libraries, templates, or conventions.

The fragmentation created measurable operational costs. Setting up CI/CD for a new microservice took 2–3 days on average: a developer would find the "closest" existing pipeline configuration in their office's preferred tool, copy it, and spend 1–2 days adapting it to the new service's language, build system, and deployment target. Because copied configurations drifted from their sources over time, copying from Project A and copying from Project B produced different pipeline behaviours even when the source language and deployment target were identical. Security patches — such as updating a base Docker image or changing a credentials provider — required 15+ separate updates across all configuration variants, a process that typically took a DevOps engineer two full weeks to complete across all offices.

The identity platform crisis was a separate but contemporaneous problem. The company's internal SSO gateway — serving 50,000+ users across all four offices for authentication to internal tools, code repositories, and deployment systems — was experiencing 12-second peak authentication latency during morning login surges (8:00–9:30 AM in each time zone). The gateway ran on a single-region Azure deployment with no auto-scaling, meaning that 4 regional morning surges hit the same static infrastructure each business day. Failed logins during peak periods generated an average of 200 support tickets per month, consuming helpdesk capacity and frustrating developers whose first interaction of the workday was a spinning login page. The identity platform and CI/CD unification were addressed in the same engagement because both were platform engineering problems requiring cross-regional coordination, and both contributed to the same organizational pain: developers spending time fighting tooling instead of building product.

Fragmented Tooling Across Four Continents with Cultural Resistance to Unification

The 15+ pipeline configurations were not merely different implementations of the same concept — they represented fundamentally different CI/CD philosophies. The San Francisco team's Bash scripts were minimalist: a build step, a test step, and a deploy step, with no artifact caching, no parallelism, and no quality gates. The Warsaw team's GitLab CI configurations were sophisticated: multi-stage pipelines with Docker-in-Docker builds, parallel test execution, SAST scanning, and per-environment deployment gates. The Bangalore team's Jenkins pipelines fell somewhere in between, with Groovy-scripted pipelines that were powerful but relied on Jenkins plugins that were maintained by individual team members. Unifying these approaches required understanding not just what each team had built, but why they had built it that way.

New service CI setup consuming 2–3 days was the most universal pain point, but the root causes differed by office. In San Francisco, the complexity was low but the time was spent debugging Bash scripts that had been copied from 3-year-old projects with dependencies on system packages no longer installed on the build runners. In Warsaw, the GitLab CI configurations were complex enough that adapting them to a new language or framework required understanding interactions between build stages, service containers, and cache policies — knowledge that typically resided with one or two senior engineers. In Bangalore, Jenkins pipeline setup required configuring the correct set of plugins, which varied between teams and Jenkins instances, and ensuring that the Groovy pipeline script was compatible with the installed plugin versions.

The $100K annual maintenance overhead was distributed across 4 regional DevOps teams that spent a significant portion of their time on pipeline maintenance. Each time a security vulnerability was discovered in a base image, a credentials rotation was required, or a new corporate compliance requirement was mandated, the change had to be propagated across all 15+ configuration variants. The Warsaw team estimated they spent 30% of their DevOps capacity on pipeline maintenance — time that could have been spent on infrastructure improvements, developer tooling, or reliability engineering. The cumulative cost across all four offices, including direct salary and opportunity cost, exceeded $100K annually.

Knowledge silos were perhaps the most damaging consequence of the fragmentation. When a developer moved from the London office to the Warsaw office — a common occurrence in a global company — they needed to learn an entirely different CI/CD system. Debugging a build failure in Warsaw required GitLab CI expertise; debugging the same build failure in a functionally identical service in Bangalore required Jenkins expertise. Cross-office collaboration on shared libraries was impractical because the CI/CD systems were incompatible. A reusable build component written as a GitLab CI include template was useless in a Jenkins pipeline, and vice versa.

The identity platform's 12-second peak latency was a technical problem with a straightforward cause: a single-region Azure App Service deployment handling authentication requests from 4 global offices, with no auto-scaling configuration and no CDN caching for token validation. During the morning login surge (8:00–9:30 AM in each time zone), the service received 4x its baseline request volume over a 90-minute window, and the fixed-capacity deployment could not handle the load. The service did not fail outright — it degraded gracefully into unacceptable latency, which presented as "login works but takes 12 seconds" rather than "login is broken." This gradual degradation made the problem harder to detect and diagnose than an outright outage.

Cultural resistance to unification was the most challenging obstacle, and the one that would determine the project's ultimate success or failure. Each regional office had invested significant effort in their CI/CD tooling. The Warsaw team was particularly attached to their GitLab CI configurations, which they regarded as the most mature and well-engineered pipelines in the company — a fair assessment. A unification initiative perceived as "headquarters replacing our tooling with theirs" would face active resistance from the teams whose work was being discarded. Past attempts to standardize tooling (a previous Git platform consolidation from Bitbucket to GitHub) had succeeded technically but failed culturally, leaving teams resentful and non-compliant. We needed an approach that respected regional contributions and provided genuine value rather than imposing uniformity.

Cross-Regional Discovery with Champions Model for Adoption

We began with cross-regional discovery workshops: four sessions (one per office), each a full day, conducted by different members of the CloudForge team to demonstrate respect for each office's perspective. The workshops had three objectives: (1) inventory every pipeline variant in use, including undocumented custom scripts; (2) map commonalities across all variants to identify the shared patterns that would form the foundation of a unified template; and (3) identify the unique requirements that each office had addressed in their custom configurations, which the unified system would need to accommodate. The workshops surfaced 15 pipeline variants, of which 12 were actively maintained and 3 were deprecated but still referenced by existing services.

The pattern analysis revealed that 85% of the pipeline logic was common across all 15+ variants. Every configuration, regardless of tool or office, performed the same core operations: checkout code, install dependencies, build artifacts, run tests, scan for security issues (in Warsaw and some London configs), build a container image, push to a registry, and deploy to a target environment. The differences — the remaining 15% — were concentrated in four areas: language-specific build tooling (Go vs. Node.js vs. Python vs. .NET), deployment target variations (AKS in some teams, Azure App Service in others, VM-based deployment in legacy services), security scanning tool preferences (SonarQube in Warsaw, Snyk in London, nothing in San Francisco or Bangalore), and notification integrations (Slack channels, Teams webhooks, email alerts). The 85/15 split was the key insight: a template system that covered the 85% common ground while providing extension points for the 15% regional variation would satisfy all offices without forcing any office to abandon its unique requirements.

The regional champions model was our strategy for overcoming cultural resistance. Rather than positioning unification as a top-down mandate, we recruited one senior engineer from each office as a "template champion" — a person who would participate in the template design, provide feedback from their office's perspective, serve as the first-line support for template adoption in their region, and have the authority to make regional customization decisions within the guardrails of the template system. The Warsaw champion, recognizing that much of the template system would be modelled on Warsaw's existing GitLab CI patterns (the most mature in the company), became the project's strongest advocate — their office's investment was not being discarded but elevated to a company-wide standard.

The identity platform was addressed as a parallel workstream with its own dedicated engineer. We designed a hybrid SSO/OIDC gateway deployed on AKS with Keycloak as the SSO/OIDC identity provider. The architecture was designed for multi-region deployment from the start: authentication request routing based on caller geography, auto-scaling from 3 to 30 pods based on request volume, CDN-cached JWT validation for frequently accessed services (reducing repeat validation latency to single-digit milliseconds), and Redis-based session storage for cross-region session persistence (enabling a developer who authenticated in the London office to seamlessly access resources served from the San Francisco infrastructure).

The pilot phase in month 3 was critical for building credibility. We selected three volunteer teams — one from Warsaw, one from Bangalore, and one from San Francisco — to migrate their CI/CD configurations to the new template system. Each pilot team was chosen for having a medium-complexity pipeline (not so simple that the template added no value, not so complex that migration would be slow) and a team lead who was open to change. The pilot would demonstrate that the template system worked for real-world services, reduced setup time, and preserved each team's ability to customize their pipeline — all without causing service disruptions or deployment failures.

Unified Template System with Regional Customization and Identity Platform

The unified CI/CD template system was built on GitHub Actions as the common orchestration platform (the company had already consolidated source code on GitHub, making Actions the natural choice for CI/CD). The system consisted of a central template repository containing composable workflow modules for each pipeline stage: checkout, build, test, scan, containerize, and deploy. Each module accepted standardized inputs (language, framework, deployment target, environment) and produced standardized outputs (build artifacts, test results, scan reports, deployment status). Teams composed their pipeline by referencing the modules they needed in a concise workflow YAML file — typically 30–50 lines — that specified their service's language, framework, and deployment target. The template system handled everything else: dependency caching, parallel test execution, security scanning, container image building, registry pushing, and environment-specific deployment.
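A team-side workflow in this model could look like the sketch below. The repository, module, and input names are illustrative assumptions, not the client's actual template system; the pattern shown — a single reusable-workflow call parameterized by language, framework, and deployment target — is standard GitHub Actions syntax.

```yaml
# Illustrative sketch only — org, template repo, and input names are hypothetical.
name: service-ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  pipeline:
    # One reusable workflow call covers checkout, build, test, scan,
    # containerize, and deploy; the inputs select the variant behaviour.
    uses: cloudforge-templates/ci/.github/workflows/service-pipeline.yml@v1
    with:
      language: go            # go | nodejs | python | dotnet
      framework: gin
      deploy-target: aks      # aks | app-service | vm
      environment: staging
    secrets: inherit          # registry and cloud credentials from org secrets
```

Versioning the template reference (`@v1`) is what later made the single-PR security patch cycle possible: teams pick up central fixes on their next run without editing their own workflow files.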

The customization model was the design element that made adoption possible. Each module supported YAML overlay files that teams could use to extend or modify the default behaviour. The Warsaw team, for example, added a custom SonarQube quality gate to their scan stage. The Bangalore team customized the deploy stage to support their legacy VM-based services that were not yet containerized. The San Francisco team added a custom notification integration that posted deployment status to their specific Slack channel format. These customizations were local to each team's workflow file and did not affect the shared template modules. The principle was: "defaults that work for 85% of teams, extension points for the remaining 15%." This preserved team autonomy while ensuring that the core pipeline logic — the part responsible for build correctness, security scanning, and deployment safety — was consistent and centrally maintained.
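A team-local overlay in this spirit might look like the following sketch. The overlay keys and file layout are assumptions for illustration (the source does not document the exact mechanism); the `sonar-scanner` invocation is the real SonarQube CLI, shown here as the kind of extra step a team could append to the shared scan stage.

```yaml
# Hypothetical overlay sketch — keys and structure are illustrative.
# Lives in the team's repository; extends the shared scan stage without
# touching the central template modules.
overlays:
  scan:
    extra-steps:
      - name: SonarQube quality gate (Warsaw team customization)
        run: |
          sonar-scanner \
            -Dsonar.host.url="$SONAR_HOST_URL" \
            -Dsonar.qualitygate.wait=true
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: https://sonarqube.internal.example.com
```

Because the overlay is additive and scoped to one stage, a central template upgrade cannot be blocked by a team customization — the "85% defaults, 15% extension points" boundary is enforced by construction.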

New service CI setup dropped from 2–3 days to 30 minutes. A developer creating a new microservice would run a scaffold command that generated the workflow YAML file based on prompts (language? framework? deployment target?) and committed it to the service repository. The first push would trigger the pipeline, and in most cases the service would be built, tested, scanned, and deployed within 30 minutes of the initial commit. The template champion in each office served as first-line support for any issues, and a shared Slack channel connected all four champions for cross-regional knowledge sharing. The roughly two days saved per new service, multiplied across the 40+ teams creating an average of 3 new services per quarter, represented a significant recovery of engineering productivity.

The identity platform was deployed on AKS in two Azure regions (US West and EU West) with Keycloak as the SSO/OIDC authentication backend. Keycloak realms were configured per office with federated OIDC clients for each internal service, and Azure Traffic Manager routed authentication requests to the nearest regional deployment based on DNS resolution latency. Each regional deployment ran 3 pods at baseline, scaling to 30 pods during morning login surges via Horizontal Pod Autoscaler triggered by request rate metrics. JWT validation for repeat API calls was cached at the CDN layer using Azure Front Door, reducing repeat validation latency from 200–300ms (round-trip to the identity service) to 5–15ms (CDN cache hit). Redis was deployed in each region for session storage, with cross-region replication ensuring that authenticated sessions were available globally — a developer who logged in via the EU deployment could access US-hosted services without re-authenticating.
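The 3-to-30-pod surge scaling described above maps directly onto a Kubernetes HorizontalPodAutoscaler driven by a request-rate metric. This is a minimal sketch under stated assumptions — the deployment name, namespace, custom metric name, and thresholds are illustrative, not the production values; the request-rate metric presumes a custom metrics adapter (e.g. Prometheus Adapter) is installed.

```yaml
# Sketch of the surge-scaling behaviour; names and thresholds are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: keycloak-gateway
  namespace: identity
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: keycloak-gateway
  minReplicas: 3          # baseline per region
  maxReplicas: 30         # headroom for the 90-minute morning login surge
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # exposed via a custom metrics adapter
        target:
          type: AverageValue
          averageValue: "100"              # target requests/sec per pod
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0        # react immediately when the surge starts
    scaleDown:
      stabilizationWindowSeconds: 300      # avoid flapping as the surge tails off
```

The asymmetric stabilization windows matter for this traffic shape: scale-up must be instant to keep login latency flat at 8:00 AM, while slow scale-down prevents thrashing during the uneven tail of the surge.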

The training programme spanned all four offices and was delivered in a 40-hour curriculum adapted for each region's starting point. Warsaw engineers, with their GitLab CI background, focused on GitHub Actions syntax differences and the template customization model. Bangalore engineers, with Jenkins experience, received additional coverage on declarative YAML workflows versus procedural Groovy scripts. San Francisco and London engineers focused on the transition from custom scripts to structured templates. The curriculum included hands-on workshops where each team migrated a real service to the template system, troubleshot intentional failure scenarios, and customized their pipeline using YAML overlays. Post-training assessment showed that 90% of participants could independently create and customize a pipeline from the template system without assistance.

The regional rollout proceeded in two waves over months 4 and 5. Wave 1 targeted teams with simpler pipelines and willing team leads — approximately 20 teams across all four offices. Wave 2 tackled the remaining 20+ teams, including those with complex legacy configurations and teams that had been initially resistant. The champions model proved its value during Wave 2: in several cases, the regional champion resolved adoption concerns by demonstrating that the template system could accommodate the team's unique requirements through the customization model, without requiring template changes or exceptions. By the end of month 5, all 40+ teams were on the unified template system, and the legacy GitLab CI, Jenkins, and Bash script configurations had been archived.

How We Delivered

1

Discovery & Workshops

Month 1

Cross-regional discovery workshops (one per office) inventorying all 15+ pipeline variants. Pattern analysis revealing 85% commonality. Recruited 4 regional champions. Designed template architecture and customization model.

2

Template Architecture

Month 2

Built composable GitHub Actions workflow modules for build, test, scan, containerize, and deploy stages. Created YAML overlay customization framework. Scaffold command for new service onboarding. Template repository with versioned releases.

3

Pilot Teams

Month 3

Three volunteer teams (Warsaw, Bangalore, San Francisco) migrated to template system. Validated 30-minute setup time, customization model, and pipeline correctness. 40-hour training curriculum delivered to pilot teams.

4

Regional Rollout Wave 1

Month 4

Approximately 20 teams across all four offices migrated to the unified template system. Regional champions provided first-line support and resolved adoption concerns. Archived legacy configurations for migrated teams.

5

Regional Rollout Wave 2

Month 5

Remaining 20+ teams migrated, including complex legacy configurations and initially resistant teams. Champions model resolved final adoption barriers through customization demonstrations. All 40+ teams on unified platform.

6

Identity Platform & Handover

Month 6

Deployed dual-region AKS identity platform with Keycloak SSO/OIDC, auto-scaling, CDN-cached JWT validation, and Redis session store. Training delivered across all offices. Formal handover to newly formed Platform Engineering team.

Unified Platform Serving 500+ Developers Across 4 Continents

2–3 days → 30 min
CI setup time
$100K/yr eliminated
Maintenance overhead
12s → < 2s
Auth latency
85% reduction
Identity incidents

CI setup time for new services dropped from 2–3 days to 30 minutes — a reduction that was validated across all four offices during the rollout. The 30-minute figure included running the scaffold command, reviewing the generated workflow file, making any team-specific customizations via YAML overlays, committing the workflow, and verifying that the first pipeline run completed successfully. For standard service types (Go API, Node.js frontend, Python ML service, .NET backend), the scaffold command generated a working pipeline with zero customization required. The cumulative time savings across 40+ teams — each previously spending 2+ days per new service, creating an average of 3 services per quarter — recovered thousands of engineering hours annually.

The $100K annual maintenance overhead was effectively eliminated. Security patches, base image updates, and compliance requirements were now applied to the central template modules — a single PR, reviewed by the four regional champions, deployed once, and automatically consumed by all 40+ teams on their next pipeline run. The quarterly security patch cycle, which had previously consumed two weeks of distributed DevOps effort across all offices, became a 2-day effort for one engineer. The regional DevOps teams redirected their freed capacity to infrastructure reliability, cost optimization, and developer experience improvements that had been perpetually deferred.

Identity platform latency dropped from 12 seconds at peak to under 2 seconds in all conditions, with CDN-cached JWT validation providing sub-15ms response times for repeat API authentications. The auto-scaling configuration handled morning login surges without degradation — the largest observed surge (London and Warsaw offices logging in simultaneously due to a time zone overlap period) peaked at 22 pods across the two regional deployments and fully served all authentication requests within 800ms. Identity-related support tickets dropped from 200 per month to 30 — an 85% reduction — and the remaining tickets were primarily password reset requests rather than authentication failures.

The cultural outcome was the result the executive team valued most. Cross-office collaboration, which had been impractical when each office used different CI/CD tooling, became routine. A Warsaw engineer who needed to contribute to a London team's repository could read and understand the pipeline immediately because it used the same template system. When the company acquired a startup in Tokyo (month 7), the Tokyo team was onboarded to the unified template system in 3 days — the Tokyo champion (recruited from the existing engineering team) attended a compressed training session and had the first 5 services running on templates within a week. This onboarding speed would have been impossible under the previous fragmented model.

The four regional champions became the nucleus of a new Platform Engineering team that the company formalized in month 8. The champions — having developed deep expertise in the template system, GitHub Actions, and cross-regional platform operations — transitioned into a permanent team responsible for CI/CD standards, developer tooling, and infrastructure reliability across all offices. This organisational outcome was not planned at the engagement's outset but emerged naturally from the champions' demonstrated expertise and the company's recognition that platform engineering required dedicated ownership. The Platform Engineering team continued to evolve the template system independently after the engagement, adding new modules for Terraform IaC validation, database migration orchestration, and feature flag management — all without CloudForge involvement.

Tools & Platforms

GitHub Actions

Unified CI/CD orchestration replacing GitLab CI, Jenkins, and Bash scripts

GitLab CI (Legacy)

European offices' previous CI/CD platform, archived after migration

Jenkins (Legacy)

Bangalore office's previous CI/CD platform, archived after migration

AKS

Dual-region Kubernetes hosting for identity platform with auto-scaling

Keycloak SSO/OIDC

Identity provider with realm-based multi-tenancy and federated OIDC clients

OIDC/SSO

Hybrid authentication gateway with CDN-cached JWT validation

Redis

Cross-region session storage for seamless global authentication

Terraform

Infrastructure-as-code for identity platform and template infrastructure

Helm

Identity platform deployment packaging and configuration management

ArgoCD

GitOps-based continuous delivery for identity platform updates

YAML Templates

Composable pipeline modules with overlay-based customization

Lessons Learned

1

Pipeline unification is 20% technical, 80% cultural. The template system itself was a 2-month engineering effort. The remaining 4 months were spent on discovery workshops, pilot programmes, regional rollouts, training, and cultural change management. The regional champions model was the critical success factor: by giving each office a voice in the design process and a local expert who owned adoption, we turned potential resisters into advocates. Every organisation attempting CI/CD unification should invest at least 4x more effort in adoption than in engineering.

2

Common patterns across fragmented tooling are always greater than 80%. The 85% commonality we discovered is consistent with every CI/CD audit we have performed. Teams believe their pipelines are unique, but the core operations — build, test, scan, deploy — are identical across languages, frameworks, and deployment targets. The real differences are in the 15% of configuration specific to a team's language, deployment target, or compliance requirements. A template system that nails the 85% and provides extension points for the 15% satisfies every team without requiring every team to change.

3

Template systems beat mandated platforms because teams retain a sense of ownership. The previous attempt to standardise tooling (a Git platform consolidation) had been perceived as "headquarters imposing their way," and compliance was grudging. The template system was perceived differently: teams still owned their pipeline configuration, they could customise it within guardrails, and they could see that their office's contributions had influenced the template design. The psychological difference between "use this pipeline we built for you" and "compose your pipeline from these modules we built together" was the difference between resistance and adoption.

4

Identity infrastructure is invisible until it breaks, and then it's the only thing anyone sees. The 12-second authentication latency was costing the company far more than the $100K CI/CD maintenance overhead — 500 developers experiencing degraded login every morning translates to thousands of hours of lost productivity and frustration annually. The fix (auto-scaling AKS deployment with CDN-cached JWT validation) was architecturally straightforward, but the problem had persisted for over a year because identity infrastructure does not generate visible failure metrics the way application outages do. Platform engineering teams should monitor authentication percentile latency with the same rigour they apply to application SLAs.

When we started this project, I expected the hardest part would be choosing between GitLab CI and GitHub Actions. It turned out that the technology decision was the easy part — the hard part was getting 500 developers across 4 offices to embrace a shared approach. CloudForge's regional champions model was brilliant: instead of a top-down mandate that would have triggered resistance, they gave each office a seat at the design table and a local expert who could translate the global template into regional value. Warsaw's team, who were initially the most resistant, became our strongest advocates once they saw their GitLab CI patterns elevated to company-wide templates. The unification didn't just consolidate our pipelines — it connected our engineering culture across 4 continents for the first time.
Mark Davidson
VP of Platform Engineering, Global Enterprise Technology Company

Ready to Achieve Similar Results?

Every engagement starts with a conversation about your infrastructure challenges. Let's discuss how CloudForge can help.

Schedule a Consultation