I'm working on a GitHub Actions-based continuous delivery flow for deploying a Python web application to Opalstack whenever a merge lands on the main branch, and I'm wondering whether folks can recommend best practices for getting as close to a zero-downtime deployment as is possible on a non-containerized webhost. (Previously this app was auto-deployed to Render.)
It looks like I will probably want to write a shell script that lives alongside my `start` and `stop` scripts, and then execute it remotely via SSH from GitHub Actions. There are effectively four things that script will need to do (rough sketch after the list):
- Pull the latest code from the main branch
- Install/update Python dependencies based on a lock file
- Apply database migrations
- Restart the webserver
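
For concreteness, here's a minimal sketch of the script I have in mind. The paths, the venv location, and the Django-style migrate command are just guesses at my own setup, not anything Opalstack-prescribed:

```bash
#!/usr/bin/env bash
# deploy.sh -- rough sketch; APP_DIR, the venv location, and the Django
# migrate step are placeholders for my particular setup.
set -euo pipefail

APP_DIR="$HOME/apps/myapp"
cd "$APP_DIR"

# 1. Pull the latest code from the main branch
git fetch origin
git reset --hard origin/main

# 2. Install/update dependencies from the lock file into the app's venv
./env/bin/pip install -r requirements.txt

# 3. Apply database migrations (Django-style; adjust for your framework)
./env/bin/python manage.py migrate --noinput

# 4. Restart the webserver using the existing stop/start scripts
./stop && ./start
```

The GitHub Actions side would then just be a step that runs something like `ssh deploy@myserver 'bash ~/apps/myapp/deploy.sh'`, with the key stored as a repository secret.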
I can do all of that via shell script without a problem, but I'm wondering if anyone has crafted a flow like this and has recommendations for what worked and what didn't. For example, can I just do all of this within the application folder on disk? Or is it better to clone a separate copy, stop the server, then swap the folder names and start it again? I'm not a devops person, so any advice (particularly as it applies to Opalstack) is welcome!
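
In case it helps, the second option I'm picturing is a Capistrano-style releases layout: build each deploy in its own timestamped directory while the old code keeps serving, then flip a `current` symlink and restart. Something like the following, where the repo URL and directory names are made up:

```bash
#!/usr/bin/env bash
# Sketch of the "clone a separate copy, then swap" approach.
# The repo URL and directory layout are illustrative, not Opalstack conventions.
set -euo pipefail

BASE="$HOME/apps/myapp"
RELEASE="$BASE/releases/$(date +%Y%m%d%H%M%S)"

# Build the new release while the old code keeps serving requests
git clone --depth 1 --branch main git@github.com:example/myapp.git "$RELEASE"
python3 -m venv "$RELEASE/env"
"$RELEASE/env/bin/pip" install -r "$RELEASE/requirements.txt"
"$RELEASE/env/bin/python" "$RELEASE/manage.py" migrate --noinput

# Point the `current` symlink at the new release; mv -T renames over the
# old link in a single atomic step, so nothing ever sees a missing path
ln -s "$RELEASE" "$BASE/current.new"
mv -T "$BASE/current.new" "$BASE/current"

# Restart so the server process picks up the new code
"$BASE/stop" && "$BASE/start"
```

The appeal, as I understand it, is that the only downtime window is the restart itself, since everything slow (the clone, the pip install) happens before the swap. The obvious catch is that migrations run against the live database while the old code is still serving, so they'd need to be backward compatible.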