← All Case Studies

GitOps at Scale: Migrating 100+ Banking Applications

Enterprise banking infrastructure exists in a world of constraints that most engineers never encounter. Regulatory requirements dictate how code moves between environments. Audit trails must be immutable and complete. Access controls are not suggestions — they are contractual obligations to regulators. When Credit Suisse engaged me to help migrate their application portfolio to a modern GitOps deployment model, the technical challenges were real, but the governance challenges were formidable.

The context

Credit Suisse operated a large Kubernetes estate spanning multiple clusters across data centres in Zurich, London, New York, and Singapore. The platform team had built solid foundational infrastructure — clusters were well-provisioned, networking was sound, and the container runtime was stable. What lagged behind was the CI/CD tooling and deployment workflow.

The firm’s CI/CD was built on TeamCity. Over the years, TeamCity had become a bottleneck: build configurations were fragile, pipelines were difficult to version control, and the system struggled to scale with the growing number of application teams. Most teams deployed through a combination of TeamCity pipelines, manual Helm commands, and in some cases, direct kubectl apply against production clusters. There was no consistent deployment model, no universal audit trail, and no way to answer the question that regulators frequently asked: “Can you show us exactly what changed, when, who approved it, and how it was deployed?”

The brief was to migrate from TeamCity to GitLab, establish a GitOps model as the standard deployment mechanism, and scale it to over 100 applications across all global regions.

Why GitOps with GitLab

The move from TeamCity to GitLab was not just a CI system swap — it was an architectural shift. GitLab offered what TeamCity could not: pipelines defined as code in .gitlab-ci.yml, native merge request workflows for change control, and a unified platform for source hosting, CI, container registry, and deployment management.

GitOps on top of GitLab solved the specific problems that the bank’s regulatory environment demanded:

Auditability. Every deployment is a Git commit. Every commit has an author, a timestamp, a review trail, and a diff showing exactly what changed. This is precisely the audit trail that compliance teams need.

Reproducibility. The desired state of every environment is declared in a Git repository. If a cluster is lost, the entire application estate can be redeployed from the repository — no tribal knowledge required.

Separation of concerns. GitOps cleanly separates the CI pipeline (build, test, produce artefacts) from the CD pipeline (deploy to environment). This separation aligned with the bank’s existing change control processes, where different teams were responsible for building software versus promoting it to production.

The architecture

I designed the GitOps platform around the following components:

GitLab as the unified platform, replacing TeamCity entirely. Each application team maintained two repositories: one for source code and CI, and one for deployment manifests. The CI pipeline in the source repository built, tested, and pushed container images to GitLab’s integrated container registry. The deployment repository held Helm values and environment-specific configuration.
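The two-repository split can be sketched with a minimal CI configuration in the source repository. This is an illustrative shape only — stage names, the Maven test command, and image tags are assumptions, not the bank's actual pipeline; the CI_REGISTRY* variables are the ones GitLab provides for its integrated registry.

```yaml
# .gitlab-ci.yml in an application's source repository (illustrative)
stages:
  - test
  - build

test:
  stage: test
  image: maven:3-eclipse-temurin-17
  script:
    - mvn verify

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    # CI_REGISTRY, CI_REGISTRY_USER, CI_REGISTRY_PASSWORD and
    # CI_REGISTRY_IMAGE are predefined by GitLab for the built-in registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

The deployment repository then references the pushed image tag in its Helm values, keeping build and deploy concerns fully separated.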

Helm charts as the packaging format. I worked with the platform team to create a library of base Helm charts that encoded the bank’s standards — resource limits, security contexts, network policies, sidecar injection for logging and tracing — while allowing application teams to override service-specific parameters via values.yaml files. This standardisation meant that 100 applications deployed consistently, regardless of which team built them.
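A team's override file stayed small because the standards lived in the base chart. The sketch below shows the idea; the chart keys, registry path, and service name are hypothetical, not taken from the actual charts.

```yaml
# values.yaml in an application's deployment repository (illustrative),
# overriding only service-specific parameters of the base chart
replicaCount: 3

image:
  repository: registry.example.bank/payments/ledger-service
  tag: "1.14.2"

resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi

# Security contexts, network policies and logging/tracing sidecars
# come from the base chart and are deliberately not repeated here
```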

HashiCorp Vault for secrets management with deep GitLab integration. I configured Vault’s JWT authentication backend to trust GitLab CI job tokens, allowing pipelines to authenticate to Vault without storing any credentials. Secrets were injected at deployment time based on the pipeline’s project, branch, and environment — so a staging pipeline could only access staging secrets. This eliminated the shared service accounts that had previously been the norm.
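The pipeline side of that JWT exchange looks roughly like the job below. This is a sketch under assumptions: the Vault address, role name, and secret path are placeholders, and it uses the CI_JOB_JWT token that GitLab issued at the time (newer GitLab versions use id_tokens instead). The Vault role's bound claims restrict which project, branch, and environment the token is valid for.

```yaml
# Deployment job authenticating to Vault with the GitLab-signed job JWT
# (address, role and secret path are illustrative)
deploy-staging:
  stage: deploy
  image: hashicorp/vault:1.15
  script:
    # Exchange the JWT for a short-lived Vault token; the jwt auth role
    # binds the login to this project, branch and environment
    - export VAULT_ADDR="https://vault.example.bank"
    - export VAULT_TOKEN="$(vault write -field=token auth/jwt/login role=staging-deploy jwt=$CI_JOB_JWT)"
    # A staging role can only read staging paths
    - vault kv get -field=password secret/staging/ledger/db
  environment: staging
```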

Reusable Ansible Galaxy roles for infrastructure provisioning that sat outside the Kubernetes layer. Not everything at Credit Suisse ran on Kubernetes — there were VM-based middleware components, database servers, and network appliances that needed consistent configuration management. I published a set of internal Ansible Galaxy roles covering common provisioning patterns: JVM application deployment, certificate management, log agent configuration, and monitoring agent setup. Teams consumed these roles in their deployment pipelines, ensuring that VM-based workloads received the same level of automation and consistency as containerised ones.
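Consuming those roles followed the standard Galaxy pattern: pin versions in a requirements file, then compose them in a playbook. Everything named below — the internal Galaxy server, role names, and variables — is illustrative rather than the actual internal library.

```yaml
# requirements.yml: pinned internal roles from a private Galaxy server
roles:
  - name: bank.jvm_app
    src: https://galaxy.internal.example.bank/bank/jvm_app
    version: "2.3.0"
  - name: bank.log_agent
    src: https://galaxy.internal.example.bank/bank/log_agent
    version: "1.1.0"

# site.yml: a team's playbook composing the shared roles
# - hosts: middleware
#   roles:
#     - role: bank.jvm_app
#       vars:
#         app_name: settlement-gateway
#         jvm_heap: 2g
#     - role: bank.log_agent
```

Version pinning mattered here: teams upgraded roles deliberately through a merge request, which kept the same audit trail for VM changes as for container deployments.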

F5 Load Balancer management as part of the deployment pipeline. Credit Suisse’s network topology relied on F5 BIG-IP appliances for external and inter-data-centre traffic management. I integrated F5 configuration into the GitOps workflow using Ansible and the F5 modules, so that load balancer pool membership, health monitors, and virtual server configurations were updated automatically as part of each deployment. This removed a manual step that had previously required raising a ticket with the network team and waiting for implementation — often the slowest part of any release.
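A typical deployment step drained a node from its F5 pool before updating it, then re-added it afterwards. The tasks below sketch that with the f5networks.f5_modules collection; the pool name, partition, port, and credential variable are assumptions for illustration.

```yaml
# Illustrative Ansible tasks wrapping a rolling deployment with
# F5 pool membership changes (f5networks.f5_modules collection)
- name: Drain node from pool before deployment
  f5networks.f5_modules.bigip_pool_member:
    pool: payments_pool
    partition: Common
    host: "{{ inventory_hostname }}"
    port: 8443
    state: absent
    provider: "{{ f5_provider }}"

# ... application deployment tasks run here ...

- name: Re-add node once the new version is healthy
  f5networks.f5_modules.bigip_pool_member:
    pool: payments_pool
    partition: Common
    host: "{{ inventory_hostname }}"
    port: 8443
    state: present
    provider: "{{ f5_provider }}"
```

Because these tasks lived in the pipeline, the pool change was recorded in the same job log and Git history as the deployment itself.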

A promotion pipeline that managed environment progression across global regions. When an application team merged to their deployment repository’s staging branch, the pipeline deployed to staging. When the change was promoted — via a merge request requiring explicit approval from both the development team and a release manager — the pipeline deployed to production clusters in all applicable regions. The merge request itself served as the change control record.
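The promotion flow can be sketched as a deployment-repository pipeline keyed on branches. Branch names, the Helm release, and the region list below are illustrative; the approval gate itself lived in GitLab's merge request settings rather than in the pipeline file.

```yaml
# .gitlab-ci.yml in a deployment repository (illustrative)
deploy-staging:
  stage: deploy
  script:
    - helm upgrade --install ledger charts/ledger -f values/staging.yaml
  environment: staging
  rules:
    - if: '$CI_COMMIT_BRANCH == "staging"'

deploy-production:
  stage: deploy
  # Fan out one job per region
  parallel:
    matrix:
      - REGION: [zurich, london, newyork, singapore]
  script:
    - helm upgrade --install ledger charts/ledger
        -f values/production.yaml
        -f "values/production-$REGION.yaml"
  environment: "production/$REGION"
  rules:
    # Commits reach main only via an approved merge request,
    # which serves as the change control record
    - if: '$CI_COMMIT_BRANCH == "main"'
```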

Onboarding at scale

Migrating 100+ applications was a people challenge as much as a technical one. Each application team had their own TeamCity configurations, their own assumptions, and their own scripts that had evolved over years.

I developed an onboarding programme structured in three tiers:

Self-service onboarding for teams with straightforward deployments. I provided a scaffolding tool that generated the GitLab CI configuration, deployment repository structure, base Helm values, and Vault access policies from a short questionnaire. Teams could migrate from TeamCity and onboard to the new model in under two days.

Assisted onboarding for teams with more complex deployments — multiple interdependent services, database migrations, F5 configuration, or custom deployment orchestration. I worked directly with these teams to translate their existing TeamCity workflows into GitLab pipelines and the GitOps model.

Bespoke onboarding for legacy applications that could not be cleanly containerised or that had regulatory constraints preventing standard deployment patterns. These were rare but required creative solutions — in one case, wrapping a legacy deployment script as a Kubernetes Job triggered by the GitOps pipeline.

Results

Over eight months, the programme achieved:

  • 107 applications migrated from TeamCity to GitLab with a GitOps deployment model.
  • TeamCity fully decommissioned for the migrated portfolio, eliminating licensing costs and operational overhead.
  • Deployment frequency roughly tripled on average. Teams deployed more often because the process was safer and required less manual effort.
  • Rollback time reduced from hours to minutes. A bad deployment was reversed by reverting a Git commit and letting the pipeline reconcile.
  • F5 configuration changes automated, removing what had been a multi-day manual process from the critical path of every release.
  • Audit compliance achieved. The regulatory team confirmed that the GitOps workflow, combined with Vault’s audit logging, met their requirements for change traceability and access control.
  • Zero production incidents caused by the migration itself.

Reflections

The most rewarding aspect of this project was watching teams go from scepticism to advocacy. Engineers who initially resisted the change eventually became its strongest proponents, because the new model removed toil from their daily work. The deployment process became boring — and in infrastructure, boring is the highest compliment.

The key to adoption at this scale was not mandating compliance but demonstrating value. The self-service onboarding tools, the base Helm charts, the reusable Ansible roles, and the automated F5 management all contributed to making the right path the easy path. When you make the secure, auditable, reproducible approach also the fastest and simplest approach, adoption follows naturally.