No More Posting New Topics!

If you have a question or an issue, please start a thread in our GitHub Discussions forum.
This forum is closed for new threads/topics.

    Quasar Framework
    Deploying with zero downtime

    beets:

      Hey guys,

      I’m trying to think of the best approach for deploying a Quasar app without downtime. To explain what I mean here, consider this example:

      1. A project’s dist files are committed into the deploy branch anytime there’s a new version. The webserver simply serves files from the dist folder.
      2. Some users are currently on the SPA at the time an update is ready to be deployed.
      3. The server admin ssh’s into the server and does a git pull.

      The problem is that git will delete the old webpack-versioned files. So if a user who was on the site before the git pull then navigates to a lazy-loaded route, Vue will try to load the now-nonexistent resource, which will 404.

      My current plan is to do git pull && cp -R dist/* /some/path/ and have nginx serve from /some/path/ instead. The upside is that all the old files will still exist. The downside is that they will continue to exist forever, unless there’s some smart way to keep old files around for a few weeks before purging them. That said, keeping old JS/CSS files isn’t going to take up much disk space in any case.

      I’m sure someone else has encountered this problem, so how do you solve it? Note that I’m using a regular old VPS, not Docker, Heroku, etc.

      Sfinx:

        You can always delete the old files using the command find /some/path -ctime +3 -type f -exec rm -vf {} \; (note that -exec needs the escaped \; terminator).

        beets:

          @Sfinx
          That would indeed be a good way to delete the old files. So something like git pull && cp -R dist/* /some/path/ && find /some/path -ctime +3 -type f -exec rm -vf {} \; as a deploy script. I may end up just doing that; leaving the old files there isn’t a huge issue either.

          I’m mostly interested if anyone has another approach besides copying files to /some/path/. The problem in my original post should affect any SPA, SSR, and even PWA (under the right circumstances) when an update is pulled, so I’m thinking some people must have thought of ways to mitigate it.

          On other Quasar apps I’ve deployed, I just did a git pull, but I’m planning deployment for a higher-traffic site where assets that 404 would be a problem.
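Putting the pieces from this thread together, the deploy step could be sketched as a small POSIX shell script. This is only an illustration: the deploy function, both paths, and the 3-day retention window are placeholders to adapt.

```shell
#!/bin/sh
# Hypothetical deploy sketch for the approach above: pull the new
# build, overlay it onto a backup tree that keeps every old hashed
# asset, then purge backup files untouched for ~3 days.
set -e

deploy() {
    repo=$1      # checkout containing the committed dist/ folder
    backup=$2    # directory nginx serves (or falls back to)

    # Update the working copy (skipped when $repo is not a git
    # checkout, e.g. during a dry run).
    if [ -d "$repo/.git" ]; then
        git -C "$repo" pull
    fi

    # Copy the fresh build ON TOP of the backup tree instead of
    # replacing it, so stale clients can still fetch old chunks.
    mkdir -p "$backup"
    cp -R "$repo/dist/pwa/." "$backup/"

    # Sfinx's purge; note the escaped \; that -exec requires.
    find "$backup" -ctime +3 -type f -exec rm -vf {} \;
}
```

It would be invoked as, e.g., `deploy /var/www/my-quasar-project /var/www/backup/my-quasar-project` from a cron job or a post-receive hook.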

          dobbel:

            @beets

            Maybe you could solve it on the nginx side. Here’s an article (not exactly the same, as it switches backends), but it has some nice concepts:

            To be able to execute the zero-downtime deployment you just need to switch the “down” flag on the upstream configuration to the old backend, which will not receive any new connections, but will finish processing the existing ones and remove the down flag from the new instance, which will start to receive all the new traffic.

            https://syshero.org/2016-06-09-zero-downtime-deployments-using-nginx/
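The switch the article describes could look roughly like this; the ports and the two-instance layout are invented for illustration, and `nginx -s reload` would apply the change.

```nginx
# Two releases of the app behind one upstream. Flipping the "down"
# flag and reloading nginx drains the old instance: it gets no new
# connections, while in-flight requests finish normally.
upstream app {
    server 127.0.0.1:8020 down;  # old release, draining
    server 127.0.0.1:8021;       # new release, takes all new traffic
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```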

            beets:

              @dobbel That’s a pretty neat concept. I’ve been working on nginx configuration all day and hadn’t thought of something like that. For the backend / SSR server I’m actually using pm2, which supports zero-downtime reloads, but this is an interesting concept nonetheless.

              Sfinx:

                With some nginx logic you can redirect users from /latest/index.html -> /x.y.z/index.html.

                This way all lazy routes will still work for older versions, and new users will get the latest one. Then just remove whole /x.y.z directories based on usage stats from the nginx logs, i.e. grep and remove from a cron job.

                As a redirect helper you can use generated scripts or njs.
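The versioned-directory idea might look something like this in nginx; the version number, paths, and the choice of a 302 are all illustrative.

```nginx
# Clients always request /latest/index.html; nginx redirects them
# to the current release. Hashed assets under /1.4.2/ etc. keep
# resolving for users still running an older index.html.
location = /latest/index.html {
    return 302 /1.4.2/index.html;   # hypothetical current version
}

location / {
    # One directory per release, e.g. /var/www/releases/1.4.2/...
    root /var/www/releases;
    try_files $uri =404;
}
```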

                beets:

                  @Sfinx & @dobbel Thanks for the responses. Here’s what I’m doing so far; this is for a PWA. I omitted some things like my CSP policy, etc.

                  # Upstream to our api
                  upstream api {
                      server 127.0.0.1:8020;
                      keepalive 8;
                  }
                  
                  server {
                      listen 443 ssl http2;
                      listen [::]:443 ssl http2;
                  
                      server_name example.com;
                  
                      location / {
                          # Max cache
                          expires max;
                          add_header Cache-Control public;
                          add_header Last-Modified $date_gmt;
                          if_modified_since off;
                          etag off;
                  
                          # Assets are precompressed
                          gzip_static on;
                          brotli_static on;
                  
                          root /var/www/my-quasar-project/dist/pwa/;
                          try_files $uri @backup;
                      }
                      
                      location @backup {
                          # Max cache
                          expires max;
                          add_header Cache-Control public;
                          add_header Last-Modified $date_gmt;
                          if_modified_since off;
                          etag off;
                  
                          # Assets are precompressed
                          gzip_static on;
                          brotli_static on;
                  
                          root /var/www/backup/my-quasar-project/;
                          try_files $uri @index;
                      }
                      
                      location @index {
                          # Never cache
                          expires off;
                          add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0";
                          add_header Last-Modified $date_gmt;
                          if_modified_since off;
                          etag off;
                  
                          gzip on;
                          brotli on;
                  
                          root /var/www/my-quasar-project/dist/pwa/;
                          try_files /index.html =404;
                      }
                  
                      location = /service-worker.js {
                          # Never cache
                          expires off;
                          add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0";
                          add_header Last-Modified $date_gmt;
                          if_modified_since off;
                          etag off;
                  
                          # Service worker is precompressed
                          gzip_static on;
                          brotli_static on;
                  
                          alias /var/www/my-quasar-project/dist/pwa/service-worker.js;
                      }
                      
                      location = /manifest.json {
                          expires 30d;
                          add_header Cache-Control public;
                  
                          gzip on;
                          brotli on;
                  
                          types {
                              application/manifest+json json;
                          }
                          alias /var/www/my-quasar-project/dist/pwa/manifest.json;
                      }
                  
                      location /api {
                  
                          # Remove the /api portion of the request
                          rewrite ^/api/(.*)$ /$1 break;
                          
                          gzip on;
                          brotli on;
                  
                          proxy_pass http://api/;
                          proxy_http_version 1.1;
                          proxy_set_header Connection "";
                          proxy_set_header Host $http_host;
                          proxy_set_header X-Real-IP $remote_addr;
                          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                          proxy_set_header X-Forwarded-Proto $scheme;
                      }
                  
                  }
                  

                  The path /var/www/my-quasar-project/ is the Quasar project repo with the dist folder committed to it, and /var/www/backup/my-quasar-project/ contains all old versions of the files. I like this because after a git pull the site is updated immediately. The only catch is that sometime before the next git pull, I must run cp -R /var/www/my-quasar-project/dist/pwa/* /var/www/backup/my-quasar-project/

                  beets:

                    @Sfinx said in Deploying with zero downtime:

                    njs

                    Oh that’s pretty cool, I did not know about this module. I would much rather use it than lua.

                    I did want to write a JWT interceptor at some point that would check for an expired token and issue a new one. Seems like this might be up to the task, but I’m not yet sure whether it would block nginx’s event loop. The docs say it’s non-blocking for file I/O, etc., but I’m not sure whether using the crypto module will block it or not.

                    The reason I want to do that is that I have several Node API endpoints that all read the auth token, so one unifying method for handling tokens would be nice instead of duplicated code.

                    Sfinx:

                      Nginx can do auth for you for any location; see ngx_http_auth_request_module:

                      …
                      location /someapilocation {
                          auth_request /auth;
                          …
                      }

                      location /auth {
                          proxy_pass http://myauthendpoint;
                      }
                      …

                      At your endpoint you can just check the token that comes in the header.

                      beets:

                        @Sfinx Thanks again for that, I’ll have to check it out. My JWT tokens are in HTTP-only cookies, so if I can return a Set-Cookie header from the auth proxy pass (I assume I can, like any other proxy pass), then that would help simplify things.
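The Set-Cookie pass-through might be sketched like this: auth_request discards the subrequest’s response body, but its headers can be captured with auth_request_set and re-emitted, so a refreshed cookie can still reach the client. The endpoint names and port are placeholders.

```nginx
location /api/ {
    auth_request /auth;
    # Capture a refreshed cookie from the auth subrequest, if any,
    # and forward it to the client. add_header is skipped when the
    # variable is empty, so non-refreshing requests are unaffected.
    auth_request_set $auth_cookie $upstream_http_set_cookie;
    add_header Set-Cookie $auth_cookie;
    proxy_pass http://api/;
}

location = /auth {
    internal;
    proxy_pass http://127.0.0.1:8021/verify;
    # auth_request subrequests carry no body; strip it explicitly.
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}
```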
