
Problems with sessions when I create more than one webapi replica #2356

Open
jmvizcainoio opened this issue Jan 31, 2024 · 5 comments

@jmvizcainoio

Expected behavior

We are deploying Atlas in a k8s cluster. When we scale the webapi pod to more than one replica, the login process crashes.

2024-01-31 17:34:37.126 ERROR http-nio-8080-exec-6 org.ohdsi.webapi.shiro.filters.ExceptionHandlerFilter - [] - Error during filtering
javax.servlet.ServletException: org.pac4j.core.exception.TechnicalException: State parameter is different from the one sent in authentication request. Session expired or possible threat of cross-site request forgery

We have configured Google auth as a login provider, and it works fine with one webapi replica.

Is it possible to keep the sessions in Redis? Where are the sessions stored? Any suggestions?

@konstjar
Contributor

WebAPI does not support shared sessions. The workaround is to run a single pod.
Technically, WebAPI uses pac4j, which has a flexible SessionStore abstraction. But I'm not aware of any implementation of external session storage for WebAPI.
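To illustrate the idea of an external session store, here is a minimal sketch. Note the interface below is a simplified stand-in, not pac4j's actual SessionStore API, and `SharedSessionStore` plus the Redis key scheme in the comments are hypothetical, not existing WebAPI code:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for a pluggable session-store contract
// (NOT pac4j's real interface; shown only to illustrate the shape).
interface ExternalSessionStore {
    Optional<Object> get(String sessionId, String key);
    void set(String sessionId, String key, Object value);
}

// In-memory demo implementation. In production, the map would be replaced
// by a Redis client (e.g. HSET/HGET against a key like "webapi:session:<id>")
// so that every WebAPI replica sees the same session state, including the
// OAuth state parameter that currently breaks when replicas don't share it.
class SharedSessionStore implements ExternalSessionStore {
    private final Map<String, Map<String, Object>> backend = new ConcurrentHashMap<>();

    @Override
    public Optional<Object> get(String sessionId, String key) {
        Map<String, Object> session = backend.get(sessionId);
        return session == null ? Optional.empty() : Optional.ofNullable(session.get(key));
    }

    @Override
    public void set(String sessionId, String key, Object value) {
        backend.computeIfAbsent(sessionId, id -> new ConcurrentHashMap<>()).put(key, value);
    }
}
```

With a shared backend, the state parameter written by one replica during the auth redirect would be readable by whichever replica handles the callback.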

@jmvizcainoio
Author

We are trying to deploy Atlas in a highly available environment, but the session problem makes that impossible.

@anthonysena anthonysena transferred this issue from OHDSI/Atlas Feb 29, 2024
@konstjar
Contributor

You can use sticky sessions on the LB side.
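For reference, with the NGINX ingress controller sticky sessions can be enabled via cookie affinity. A minimal sketch under assumed names (`webapi-ingress`, `ohdsi-webapi`, the host, and the cookie name are placeholders for your deployment):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapi-ingress
  annotations:
    # Pin each client to the same WebAPI pod via an affinity cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "WEBAPI_AFFINITY"
spec:
  rules:
    - host: atlas.example.org
      http:
        paths:
          - path: /WebAPI
            pathType: Prefix
            backend:
              service:
                name: ohdsi-webapi
                port:
                  number: 8080
```

This keeps a given browser's requests on one replica, which avoids the state-parameter mismatch, but as noted below it does not survive the pod that holds the session dying.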

@jmvizcainoio
Author

> You can use sticky sessions on LB side.

Using sticky sessions is a patch. We are using spot instances in a k8s cluster, so it is very easy to have 10 instances and one of them die. The clients logged into the webapi on the dead node must log in again.

@chrisknoll
Collaborator

chrisknoll commented Mar 4, 2024

Some changes at the core of WebAPI will be necessary in order to support a multi-instance / non-sticky hosting of WebAPI.

Main issue:

Running jobs are updated to Canceled on startup: if WebAPI is killed while background jobs are running, it is assumed that WebAPI itself was running those jobs, so on startup it marks 'running' jobs as 'canceled'. With multiple WebAPI hosts, each instance that starts up will attempt to cancel jobs that may be managed by other nodes. So something needs to keep track of which node is running each job, either using a local filesystem for each node or some kind of unique identifier associated with each host, so that a host that starts up only kills its own jobs. Alternatively, we are moving towards an 'external execution' service, such that this external service will live regardless of WebAPI status (and call back to WebAPI when jobs are complete). That would allow WebAPI to not kill running jobs on startup, but introduces other challenges (i.e. if the external service finishes and calls back to WebAPI to notify of completion while WebAPI is down, we need a sort of 'retry' functionality).
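The per-host ownership idea above could be sketched as follows. This is a hypothetical illustration, not existing WebAPI code: `JobRecord`, its `ownerNodeId` field, and `NodeJobGuard` are all invented names, assuming a new column records which instance started each job:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical row for a background job; "ownerNodeId" would be a new
// column recording which WebAPI instance started the job.
class JobRecord {
    final long jobId;
    final String ownerNodeId;
    final String status;

    JobRecord(long jobId, String ownerNodeId, String status) {
        this.jobId = jobId;
        this.ownerNodeId = ownerNodeId;
        this.status = status;
    }
}

// On startup, a node cancels only the RUNNING jobs it owns, leaving jobs
// managed by other replicas untouched.
class NodeJobGuard {
    private final String nodeId; // e.g. a stable per-pod UUID or hostname

    NodeJobGuard(String nodeId) {
        this.nodeId = nodeId;
    }

    List<JobRecord> jobsToCancel(List<JobRecord> jobs) {
        return jobs.stream()
                .filter(j -> "RUNNING".equals(j.status) && nodeId.equals(j.ownerNodeId))
                .collect(Collectors.toList());
    }
}
```

With this filter in the startup path, two replicas restarting at once would each reclaim only their own orphaned jobs instead of cancelling each other's.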

Other Issues

Other issues include caching, where we want to share a cache across server processes, and sharing authentication information across processes.
