IMO, the waiting room service shouldn't be responsible for determining when a new batch of users is let into the protected zone. That said, we'd need some sort of integration between the protected service and the waiting room so the leaky bucket can be updated somehow.
Since any integration typically takes a lot of time and effort on both sides, I was hoping to find some "creative" solution that would allow adding the waiting room as a service with little to no coding effort on the service provider's side.
Note: yes, we could definitely let N users in every 1/3/5 minutes, but that doesn't guarantee the protected service can withstand it, because once we let somebody in, we don't know whether they have already completed their purchase or are still in progress.
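To make the idea concrete, here is a minimal sketch of a completion-driven leaky bucket: instead of draining on a fixed timer, capacity is freed only when the protected service reports that a session has finished. All names here are illustrative, and the real integration would presumably be a webhook or queue message rather than an in-process callback:

```python
class WaitingRoomBucket:
    """Leaky bucket tracking in-flight sessions in the protected zone.

    Capacity is freed not on a timer, but when the protected service
    signals that a session is done (purchase completed or abandoned).
    """

    def __init__(self, capacity: int):
        self.capacity = capacity  # max concurrent sessions the service can handle
        self.in_flight = 0        # sessions currently inside the protected zone

    def admit_batch(self) -> int:
        """Admit as many waiting users as the free capacity allows."""
        batch = self.capacity - self.in_flight
        self.in_flight += batch
        return batch

    def on_session_done(self) -> None:
        """Hypothetical callback fired by the protected service,
        e.g. from a checkout-completed webhook or a session-timeout event."""
        if self.in_flight > 0:
            self.in_flight -= 1


bucket = WaitingRoomBucket(capacity=100)
print(bucket.admit_batch())   # first batch fills the whole capacity
bucket.on_session_done()      # one purchase completes...
print(bucket.admit_batch())   # ...so exactly one more user can be admitted
```

This only shifts the question, of course: the hard part is getting the protected service to emit that "session done" signal without custom coding on its side.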
In any case, the article is great, thank you for sharing!
Nice one! Thank you!
Really interesting! And an original approach to solving spikes :)
thank you
Thank you, Neo. The best part here is the references given in the post, so people willing to deep-dive can go through those as well.
you're welcome
Interesting solution, Neo.
I guess the serverless architecture serves well for the spikes, but I'm not sure about the price.
Thanks for sharing!
I wonder how the Exchange Lambda works