Problems with session when I create more than one WebAPI replica #2356
Comments
WebAPI does not support shared sessions. The workaround is to run a single pod.
We are trying to deploy Atlas in a highly available environment, but the session problem makes that impossible.
You can use sticky sessions on the LB side.
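A minimal sketch of what that could look like, assuming the cluster uses the NGINX ingress controller (other controllers expose session affinity differently); the service name, host, path, and port are placeholders, not taken from this issue:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ohdsi-webapi
  annotations:
    # Cookie-based affinity: the controller issues a cookie so each client
    # keeps being routed to the same WebAPI pod for the life of its session.
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "WEBAPI_ROUTE"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
    - host: atlas.example.org
      http:
        paths:
          - path: /WebAPI
            pathType: Prefix
            backend:
              service:
                name: ohdsi-webapi
                port:
                  number: 8080
```

This only pins a client to one pod; if that pod dies, the session is still lost, which is the limitation raised in the next comment.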
Using sticky sessions is only a patch. We are using spot instances in a k8s cluster, so it is very easy to have 10 instances and see one of them die. The clients logged into the WebAPI instance on the dead node must log in again.
Some changes at the core of WebAPI will be necessary in order to support a multi-instance / non-sticky hosting of WebAPI.

Main issue: running jobs are updated to Canceled on startup. If WebAPI gets killed while background jobs are running, it is assumed that WebAPI itself is running those jobs and therefore should mark 'running' jobs as 'canceled'. If multiple WebAPI hosts are running, then each instance that starts up will attempt to cancel jobs that could be managed by other nodes. So something needs to keep track of which node is running each job, either using a local filesystem for each node or some kind of unique identifier associated with each host, so that a host that starts up only cancels its own jobs. Alternatively, we are moving towards an 'external execution' service, such that this external service lives regardless of WebAPI status (and calls back to WebAPI when jobs are complete). That would allow WebAPI to not kill running jobs on startup, but it introduces other challenges (i.e. if the external service completes and calls back to WebAPI to notify of completion while WebAPI is down, we need a sort of 'retry' functionality).

Other issues: these include caching, where we want to share a cache across server processes, and sharing authentication information across processes.
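A hypothetical sketch of the "only cancel your own jobs" idea described above. The class, interface, and field names (StartupJobCleaner, JobStore, JobRecord, ownerHost) are illustrative and are not actual WebAPI types; the assumption is that each job row records the host that started it:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;

// Hypothetical: scope the "cancel running jobs on startup" logic to the current
// node so that one replica restarting does not cancel jobs owned by other replicas.
public class StartupJobCleaner {

  interface JobStore {
    List<JobRecord> findRunningJobs();
    void markCanceled(JobRecord job);
  }

  static class JobRecord {
    String id;
    String ownerHost; // host identifier stamped when the job was started
  }

  private final JobStore jobStore;

  public StartupJobCleaner(JobStore jobStore) {
    this.jobStore = jobStore;
  }

  public void cancelOrphanedJobs() {
    String thisHost = resolveHostId();
    for (JobRecord job : jobStore.findRunningJobs()) {
      // Only cancel jobs started by this host; jobs owned by other replicas
      // are left alone so their owners can finish or clean them up.
      if (thisHost.equals(job.ownerHost)) {
        jobStore.markCanceled(job);
      }
    }
  }

  private String resolveHostId() {
    try {
      // In Kubernetes the pod hostname is a reasonable per-replica identifier.
      return InetAddress.getLocalHost().getHostName();
    } catch (UnknownHostException e) {
      throw new IllegalStateException("Cannot determine host identifier", e);
    }
  }
}
```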
Expected behavior
We are deploying Atlas in a k8s cluster. When we scale the WebAPI pod to more than one replica, the login process crashes.
We have configured Google auth as a login provider, and it works fine with a single WebAPI replica.
Is it possible to keep the session in Redis? Where are the sessions stored? Any suggestions?
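For a plain Spring Boot application, externalizing HTTP sessions to Redis is usually done with Spring Session; a minimal sketch is below. Whether WebAPI's own session and authentication handling would pick this up without code changes is not confirmed here, and the Redis host name is a placeholder:

```properties
# Requires the org.springframework.session:spring-session-data-redis dependency
spring.session.store-type=redis
spring.redis.host=redis.example.svc.cluster.local
spring.redis.port=6379
```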