
How Kitabisa Scales Unpredictable Donation Traffic Reliably with Kedify

The Challenge

Kitabisa brings Indonesia’s spirit of gotong royong, collective mutual aid, into the digital age. Through its donation-based crowdfunding platform, individuals, NGOs, hospitals, schools, communities, and corporate partners can raise and direct funds for medical needs, disaster relief, education, zakat, and other social causes.

That mission creates a scaling challenge unlike most digital platforms. Retail businesses can plan around sales events. Travel companies can anticipate holiday demand. Kitabisa cannot. A natural disaster, a campaign amplified by a public figure, or a story that spreads quickly on social media can trigger a sharp surge in donations within minutes. Donor behavior is also shaped by cultural and religious rhythms, including meaningful peaks during pre-dawn hours and other spiritually significant moments.

Before Kedify, Kitabisa relied on Kubernetes HPA and later moved toward KEDA and the KEDA HTTP add-on to get closer to event-driven autoscaling. But making that setup production-ready introduced friction. The team encountered breaking changes tied to the HTTP add-on’s beta status, limitations around running cron and HTTP scalers side by side, Gateway API compatibility issues, and broader stability concerns that made full production rollout harder than it should have been.
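To make the friction concrete, here is a sketch of what that pre-Kedify setup tends to look like in upstream KEDA. All names, hosts, schedules, and thresholds below are hypothetical, not Kitabisa's actual manifests: a cron trigger lives in a ScaledObject, while HTTP-based scaling requires the separate beta HTTP add-on CRD, whose field names have shifted across beta releases. Because both resources try to drive the same Deployment's replica count, they cannot simply be combined.

```yaml
# Hypothetical sketch of plain KEDA + HTTP add-on, side by side.
# A cron trigger is expressed in a ScaledObject...
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: donation-api-cron       # hypothetical name
spec:
  scaleTargetRef:
    name: donation-api
  minReplicaCount: 1
  triggers:
    - type: cron
      metadata:
        timezone: Asia/Jakarta
        start: 0 3 * * *        # e.g. a pre-dawn donation peak
        end: 0 6 * * *
        desiredReplicas: "10"
---
# ...while HTTP scaling needs the add-on's own beta CRD (field names
# have changed between beta versions). Both resources target the same
# Deployment, which is where the side-by-side limitation bites.
apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: donation-api-http       # hypothetical name
spec:
  hosts:
    - example.kitabisa.com      # hypothetical host
  scaleTargetRef:
    name: donation-api
    service: donation-api
    port: 8080
  replicas:
    min: 0
    max: 50
```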

For a non-profit platform, those constraints created a difficult tradeoff. Keeping a large infrastructure buffer in place was too expensive, but underprovisioning risked degraded donation flows at exactly the moments when people were trying to help.

Why Kedify

Kitabisa discovered Kedify while trying to push its KEDA-based autoscaling approach further in production. What stood out was how directly Kedify addressed the problems the team was already facing.

Kedify offered a more practical path to production-grade event-driven autoscaling, with support for combining multiple scaler types, stronger stability for HTTP-based scaling, and a smoother route to tuning scaling behavior around real demand signals. Just as importantly, it fit naturally into Kitabisa’s existing Kubernetes environment rather than forcing a major platform shift.

The team also considered Knative, but the migration overhead was significantly higher. Kedify offered the benefits of advanced autoscaling without requiring Kitabisa to rearchitect how it runs workloads.

The Solution

Roughly 90% of Kitabisa’s workloads run on Kubernetes across GKE clusters on Google Cloud. Kedify is now used primarily to scale the services most directly exposed to sudden demand: backend APIs, gRPC services, and RabbitMQ consumers.

Because Kitabisa had already adopted KEDA, the implementation was relatively straightforward overall. The main complexity came from carefully replacing the existing KEDA and HTTP add-on installation without introducing downtime. Once that transition was complete, Kedify gave the team a more production-ready way to manage both HTTP- and queue-based autoscaling from a single platform.
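Because Kedify builds on KEDA, queue-based scaling keeps the familiar ScaledObject shape. A minimal sketch of scaling a RabbitMQ consumer to zero, with hypothetical names and thresholds (the workload, queue, and target backlog below are illustrative, not Kitabisa's configuration):

```yaml
# Hypothetical sketch of queue-based scale-to-zero for a RabbitMQ consumer.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: donation-events-consumer    # hypothetical name
spec:
  scaleTargetRef:
    name: donation-events-consumer
  minReplicaCount: 0                # scale to zero when the queue is idle
  maxReplicaCount: 30
  triggers:
    - type: rabbitmq
      metadata:
        queueName: donation-events  # hypothetical queue
        mode: QueueLength           # scale on backlog size
        value: "20"                 # target ~20 messages per replica
        hostFromEnv: RABBITMQ_URL   # AMQP connection string from the pod env
```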

The rollout experience was also strengthened by Kedify’s support. Even across a significant time zone gap, the Kitabisa team received responsive help while working through technical issues. That hands-on collaboration helped them reach a stable setup faster and with less operational friction.

Kitabisa is still early in its Kedify journey and plans to explore more features and scaler types over time. But even at this stage, Kedify has already strengthened the autoscaling foundation behind some of the platform’s most critical services.


“Our traffic spikes are unpredictable by nature. Kedify gave us the confidence to scale to zero without gambling on user experience.”

Zackky Muhammad

Lead DevOps Engineer @ Kitabisa.com

The Impact

With Kedify, 100% of Kitabisa’s microservices can now scale to and from zero, something that had not been practical to achieve reliably before. For a mission-driven platform that needs to control infrastructure costs without compromising responsiveness, that is a meaningful step forward.

Kedify’s Scaling Policy and Resume Scaling Controller have been especially valuable, helping the team scale more aggressively while smoothing cold starts and reducing abrupt scaling transitions. That improves the experience for both donors using the platform and engineers responsible for keeping it stable under pressure.
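Scaling Policy and the Resume Scaling Controller are Kedify features with their own configuration; as a rough upstream analogue of the same idea, plain KEDA exposes HPA behavior rules that let a workload scale up fast while damping scale-down. A sketch under those assumptions, with hypothetical names and values:

```yaml
# Not Kedify's Scaling Policy itself; a plain-KEDA sketch of asymmetric
# scaling behavior via the embedded HPA behavior config.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: donation-api                   # hypothetical name
spec:
  scaleTargetRef:
    name: donation-api
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleUp:
          stabilizationWindowSeconds: 0    # react immediately to surges
          policies:
            - type: Percent
              value: 100                   # at most double replicas...
              periodSeconds: 30            # ...every 30 seconds
        scaleDown:
          stabilizationWindowSeconds: 300  # wait 5 minutes before shrinking
          policies:
            - type: Pods
              value: 2
              periodSeconds: 60            # drop at most 2 pods per minute
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "70"
```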

Most importantly, Kedify helps Kitabisa manage the central tension in its infrastructure strategy. Instead of relying as heavily on expensive idle capacity just to stay safe, the team can align infrastructure more closely to real demand, staying lean when traffic is quiet, while remaining ready for the sudden moments when people respond quickly and generously to urgent need.

Customer

Kitabisa

https://kitabisa.com

Industry

Donation-based crowdfunding platform supporting medical, disaster relief, education, zakat, and broader social causes

Size

  • Founded in 2013
  • 11.7M+ donors and 554,000+ campaigns supported
  • 90% of workloads run on Kubernetes across GKE clusters

Challenges

  • Unpredictable, socially driven donation surges with little warning
  • Need to keep infrastructure lean without risking donation flows
  • Production friction with KEDA HTTP add-on and multi-scaler limitations

Overview

Kedify gave Kitabisa a more production-ready autoscaling foundation for HTTP and queue workloads, enabling reliable scale-to-zero across microservices while improving confidence under sudden demand.

  • 100% of Microservices Scale to Zero

    Reliable scale-to-zero and resume behavior across the platform

  • Smoother Scaling Under Sudden Demand

    Scaling Policy and Resume Scaling Controller helped reduce cold-start and transition risk

  • Lean Infrastructure, Stronger Readiness

    The team can scale around real demand instead of holding large buffers just in case

Please reach out for more information, to try a demo, or to learn more:
www.kedify.io