How to set up a CDN for SPA (got problems with vue-router)

  • Hi,

    we are building a Quasar SPA which has to use a CDN. This means that every asset (generated JS files, images, and so on) is deployed to an AWS S3 bucket.

    So, I configured the CDN in quasar.conf.js by setting the publicPath:

        build: {
          vueRouterMode: 'hash',
          publicPath: 'AWS_S3_URL',
          // ...
        },

    This adds a base tag:

         <base href="AWS_S3_URL">

    So far so good. The problem is that when reloading a page or using the browser's back button, the page is blank and JS errors appear in the console:

    SecurityError: Blocked attempt to use history.replaceState() to change session history URL from MY_URL to AWS_S3_URL. [...]


    TypeError: undefined is not an object (evaluating 'l.matched')

    It seems that the CDN URL is causing a couple of problems here with vue-router.

    So, my question is: is there another way to set up a CDN? Is setting a base tag the intended way to use one?

    Thanks for your help.

    Best regards,

  • @daniel You can try something like this, it will take care of all webpack assets, at least.

          extendWebpack (cfg) {
            if (cfg.mode === 'production') {
              cfg.output.publicPath = 'AWS_S3_URL' + '/'
            }
          }

    As for the things in your public/ dir, you’d have to do something else. One option is a boot file like this:

    export default ({ Vue }) => {
      let media_url = 'AWS_S3_URL' + '/'
      if (process.env.NODE_ENV === 'development') {
        media_url = '/' // for development just use /
      }
      Vue.prototype.$media_url = media_url
    }

    Then in components:

    <img :src="$media_url + 'placeholder.png'" />

  • @beets

    When you build the app, the ‘dist’ output will still contain the public folder, right?
    If you are using a CDN like this, do you have to copy the contents of the dist/public folder ‘manually’ to Amazon? Or can you create two outputs when you build your Quasar SPA?

  • @dobbel The content of the public folder (dist/spa) is copied to Amazon S3 during the pipeline build by GitLab.

  • @dobbel Your build would still contain the contents of public, as well as all your built css and js files. How to deploy will depend on how the CDN is set up. Typically I’ve just used a CDN that proxies (and caches, of course) your main domain.

    So you’d have your site’s main domain plus a CDN domain in front of it.

    They both contain the same files, but webpack will request all of its assets from the CDN domain (that’s what publicPath points to), which in turn gets the files from your origin. You could also have a post-build step that uploads all of the build dir to a s3 bucket or whatever.

  • @beets Thanks. The webpack “command” is perfect. Because the JS and CSS assets are the “biggest” assets, it is important that these are served by S3.

  • @daniel great, glad I could help.

    Also just for anyone interested who might read this thread, my use case is a bit different which I’ll explain below:

    • I use SSR, and the main domain is proxied to the server process.
    • That process only serves server-generated pages, not js, css or images. It will 404 anything not in the routes.js file.
    • I have a static domain pointed to the dist/ssr/www folder.
    • I like this better since node doesn’t need to waste time serving static assets, and additionally I pre-compress (.gz and .br) all the files, so I can just use nginx’s gzip_static and brotli_static
    • Most of my images come from some other process (Magento in particular), so I have a media domain serve those. I use something similar to the boot file I showed above, except a bit nicer, i.e. this.$media_url('somefile.png') returns the full URL on the media domain.
    • For favicons, I simply change index.template.html to point to the media domain, etc. And since some browsers will always try to look for /favicon.ico, I just do a simple location block for that in nginx to either respond 404, or serve the actual file in /var/www/media/htdocs/favicon.ico.

    Overall it makes me happy. The only thing that doesn’t work is Quasar’s automagic static asset handling, which, as I mentioned above, I just don’t use; instead I use explicit references to the media subdomain.
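    The nicer, function-style $media_url mentioned above could be sketched like this (the media domain here is a placeholder, not the real one):

```javascript
// Sketch of a function-style $media_url helper.
// 'https://media.example.com' stands in for the real media subdomain.
function makeMediaUrl (nodeEnv, mediaDomain) {
  // dev: serve from the local dev server root; prod: from the media subdomain
  const base = nodeEnv === 'development' ? '/' : mediaDomain + '/'
  return file => base + file
}

// In a boot file:
// export default ({ Vue }) => {
//   Vue.prototype.$media_url = makeMediaUrl(process.env.NODE_ENV, 'https://media.example.com')
// }
// Then this.$media_url('somefile.png') returns the full URL.
```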

  • @beets said in How to set up a CDN for SPA (got problems with vue-router):

    Overall it makes me happy, the only thing that doesn’t work is Quasar’s automagic static asset handling, which as I mentioned above I just don’t use, and instead use explicit references to the media subdomain.

    for that you can use a devServer feature - proxy:

    it’s documented in the webpack devServer docs.

    you can easily use just one configuration (production), and in the dev environment statics and media will just be proxied wherever you need. It is very powerful because you can not simply proxy, but also use changeOrigin, context, cookieDomainRewrite, cookiePathRewrite and pathRewrite, in such a way that even the hardest-to-maintain api/backend will properly work in the dev environment (which will be identical to production).
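    For example, a devServer proxy block along those lines might look like this in quasar.conf.js (a sketch; the domain and path prefix are placeholders, not from this thread):

```javascript
devServer: {
  proxy: {
    // During development, forward /media/* to a production-like media host,
    // so components can use the same URLs as in production.
    '/media': {
      target: 'https://media.example.com', // placeholder host
      changeOrigin: true,                  // rewrite the Host header to the target
      pathRewrite: { '^/media': '' },      // strip the local prefix
      cookieDomainRewrite: 'localhost',    // keep cookies usable locally
    },
  },
},
```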

  • @qyloxe Maybe I’m not understanding how that would work for static assets. In Quasar’s dev mode, the statics / public folder will work fine, since it serves both static assets from public/ and ssr pages on localhost:8080, so there’s no reason to proxy. On production however, I can’t serve any images from the main domain, since it only proxies the SSR node process, which doesn’t serve anything other than html.

    Instead what I have now is just a production and dev ENV, where the dev env has media_url point to https://media.example.local and production points to the real media domain.

  • @beets This sounds very interesting. Is it possible to show how you achieved that (i.e. by showing the quasar.conf.js)? Maybe we can use this approach for the next project.

  • @beets

    Again, with dev proxy rewrites and other local configurations, like a simple change in the hosts file, you can mimic any production configuration in development. It is highly useful if you have access to local/development/qa vms/docker/kubernetes with production external services.

    What I’m trying to say is that the dev environment should be a mimic of production instead of another conditional incarnation of it. A different way of thinking.

  • @daniel Here’s some snippets on how I have it set up. This is not the normal SSR setup, but was customized pretty heavily for my needs.


      "api_url": "",
      "api_url_local": "",
      "media_url": "",
      "static_url": "",
      "frontend_url": "",

    Above is the basic config file, set for the dev environment. The api_url_local is just because on SSR, we can access the loopback address instead of calling the full domain, which makes it a bit faster.
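    The selection between the two API URLs can be sketched as a tiny helper (hypothetical function name; the config keys match the config.json above):

```javascript
// Sketch: choose between api_url and api_url_local from config.json.
// During SSR we can hit the loopback address and skip the external hop;
// in the browser (or when no local URL is configured) use the public API URL.
function apiBase (isServer, cfg) {
  return (isServer && cfg.api_url_local) ? cfg.api_url_local : cfg.api_url
}
```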


    const fs = require('fs')
    const path = require('path')
    const zlib = require('zlib')
    const CompressionPlugin = require('compression-webpack-plugin')
    const config = require('./config.json')

    module.exports = function (ctx) {
      return {
        boot: [
          { path: 'a11y', server: false },
          { path: 'polyfills', server: false },
          { path: 'hydrate', server: false },
        ],
        css: [ /* ... */ ],
        extras: [ /* ... */ ],
        framework: {
          iconSet: 'svg-mdi-v5',
          lang: 'en-us',
          importStrategy: 'auto',
          plugins: [ /* ... */ ],
          config: {
            loadingBar: {
              color: 'secondary',
              size: '4px',
              skipHijack: true,
            },
          },
        },
        preFetch: true,
        build: {
          scopeHoisting: true,
          vueRouterMode: 'history',
          showProgress: true,
          gzip: false,
          // analyze: {
          //   analyzerPort: 9000,
          //   openAnalyzer: false
          // },
          appBase: false,
          beforeDev ({ quasarConf }) {
            // Hook our own meta system into Quasar
            quasarConf.__meta = true
          },
          extendWebpack (cfg) {
            cfg.resolve.alias['@'] = path.resolve(__dirname, 'src/components')
            cfg.resolve.alias['mixins'] = path.resolve(__dirname, 'src/mixins')
            cfg.resolve.alias['modules'] = path.resolve(__dirname, 'src/modules')
            cfg.resolve.alias['utils'] = path.resolve(__dirname, 'src/utils')
            if (cfg.mode === 'production') {
              cfg.output.publicPath = config.static_url + '/'
              cfg.plugins.push(
                new CompressionPlugin({
                  filename: '[path].gz[query]',
                  algorithm: 'gzip',
                  test: /\.(js|css|svg)$/,
                  compressionOptions: {
                    level: 9,
                  },
                  minRatio: 1,
                }),
                new CompressionPlugin({
                  filename: '[path].br[query]',
                  algorithm: 'brotliCompress',
                  test: /\.(js|css|svg)$/,
                  compressionOptions: {
                    [zlib.constants.BROTLI_PARAM_MODE]: zlib.constants.BROTLI_MODE_TEXT,
                    [zlib.constants.BROTLI_PARAM_QUALITY]: zlib.constants.BROTLI_MAX_QUALITY,
                  },
                  minRatio: 1,
                })
              )
            }
          },
        },
        devServer: {
          // https: {
          //   key: fs.readFileSync('certs/key.pem'),
          //   cert: fs.readFileSync('certs/cert.pem'),
          //   ca: fs.readFileSync('certs/minica.pem'),
          // },
          https: false,
          port: 8000,
          sockPort: 4443,
          open: false,
        },
        animations: [ /* ... */ ],
        ssr: {
          pwa: false,
          manualHydration: true,
          extendPackageJson (pkg) {
            // Default quasar SSR packages we don't use
            delete pkg.dependencies['compression']
            delete pkg.dependencies['express']
            delete pkg.dependencies['lru-cache']
          },
        },
      }
    }
    Main interesting things above are:

    • I use the webpack compress plugin (only on production) to gzip and brotli compress the assets. You can just do .gz without the plugin, but I wanted .br too.
    • I also set the publicPath there too, as shown in my other post
    • For SSR, I’m removing express and some other packages I don’t use
    • I also manually hydrate, which is pretty store-specific in my case with vuex. It’s because I freeze a lot of objects stored there.
    • I rolled my own meta plugin, for various reasons, so I had to mimic how the official one works with the beforeDev hook.

    Then finally, src-ssr/index.js

    const { createServer } = require('http')
    const { promisify } = require('util')
    const randomBytes = promisify(require('crypto').randomBytes)
    const { createBundleRenderer } = require('vue-server-renderer')
    const bundle = require('./quasar.server-manifest.json')
    const clientManifest = require('./quasar.client-manifest.json')
    const config = require('../config.json')

    const renderer = createBundleRenderer(bundle, {
      runInNewContext: false,
      clientManifest,
    })

    const port = 8000

    const server = createServer(async (req, res) => {
      // Stats
      const start_time =
      let ttfb = null, total = null
      // Generate CSP nonce
      const nonce = (await randomBytes(16)).toString('base64')
      const ctx = {
        url: req.url,
      }
      const stream = renderer.renderToStream(ctx)
      stream.once('data', () => {
        console.log('First chunk: ', - start_time)
        // Interesting note: if needed, we can access vuex with ctx.state
        // Custom Asset Prefetch
        // Instead of using ${ ctx.renderResourceHints() } in the template,
        // we are going to do the same here but remove preload links handled below
        // Todo: would be nice if vue exposed the render function for just these
        let prefetchLinks = ctx.renderResourceHints()
        prefetchLinks = prefetchLinks.substring(prefetchLinks.indexOf('<link rel="prefetch"'))
        // Custom Asset Preload
        // Instead of using ${ ctx.renderResourceHints() } in the template,
        // we are going to push the preload files in the HTTP header
        let preloadLinks = ctx.getPreloadFiles().map(f => {
          return {
            file: config.static_url + '/' + f.file,
            asType: f.asType,
            //extra: '; crossorigin',
            extra: '',
          }
        })
        preloadLinks.push(
          { file: config.media_url + '/fonts/roboto-v20-latin-300.woff2',     asType: 'font', extra: '; crossorigin; type="font/woff2"' },
          { file: config.media_url + '/fonts/roboto-v20-latin-regular.woff2', asType: 'font', extra: '; crossorigin; type="font/woff2"' },
          { file: config.media_url + '/fonts/roboto-v20-latin-500.woff2',     asType: 'font', extra: '; crossorigin; type="font/woff2"' },
        )
        preloadLinks = => `<${f.file}>; rel=preload; as=${f.asType}${f.extra}`).join(', ')
        const csp_template = {
          'default-src': [config.static_url],
          'prefetch-src': [config.static_url],
          'base-uri': ["'self'"],
          'script-src': [config.static_url, `'nonce-${nonce}'`, "'unsafe-inline'", /*"'strict-dynamic'",*/],
          // abbreviated
        }
        const csp = Object.entries(csp_template).reduce((acc, [key, values]) => {
          return acc += key + ' ' + values.join(' ') + '; '
        }, '')
        res.writeHead(ctx.httpCode || 200, {
          'Content-Type': 'text/html; charset=UTF-8',
          'Content-Security-Policy': csp,
          'Link': preloadLinks,
        })
        res.write(`<!DOCTYPE html>
    <html ${ctx.Q_HTML_ATTRS}>
      <head>
        <meta charset="utf-8">
        <meta name="format-detection" content="telephone=no">
        <meta name="msapplication-tap-highlight" content="no">
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <link rel="icon" type="image/png" sizes="128x128" href="${config.media_url}/favicon/favicon-128x128.png">
        <link rel="icon" type="image/png" sizes="96x96" href="${config.media_url}/favicon/favicon-96x96.png">
        <link rel="icon" type="image/png" sizes="32x32" href="${config.media_url}/favicon/favicon-32x32.png">
        <link rel="icon" type="image/png" sizes="16x16" href="${config.media_url}/favicon/favicon-16x16.png">
        <link rel="icon" type="image/ico" href="${config.media_url}/favicon/favicon.ico">
        ${ ctx.Q_HEAD_TAGS }
        ${ ctx.renderStyles() }
        ${ prefetchLinks }
      </head>
      <body class="${ctx.Q_BODY_CLASSES}">
        <noscript>Javascript disabled</noscript>`)
        ttfb = - start_time
      })
      stream.on('data', chunk => {
        res.write(chunk)
      })
      stream.on('end', () => {
        res.end(`
        ${ ctx.renderState() }
        ${ ctx.renderScripts() }
      </body>
    </html>`)
        total = - start_time
        const ttfb_pct = Math.floor(100 * ttfb / total)
        console.log(`${req.url} TTFB: ${ttfb} Total: ${total} - ${ttfb_pct}%`)
      })
      stream.on('error', error => {
        if (ctx.statusCode === 307 || ctx.statusCode === 308) {
          res.writeHead(ctx.statusCode, {
            'Location': ctx.location
          })
          res.end()
        } else {
          // Nginx will render this error page
          // Note, statusMessage doesn't seem to be
          // available to nginx
          res.statusCode = 502
          res.statusMessage = error.message
          res.end()
        }
      })
    })

    server.listen(port, '', error => {
      console.log(`Server listening at port ${port}`)
    })

    The above file is a bit of a doozy. I really didn’t need express for SSR mode, and I really wanted to get renderToStream to work (which it does, and saves about 50% TTFB.) Luckily, Quasar just gives you a template for the SSR server, but you don’t have to stick to it. The file generates the CSP, and also sends the preload links as HTTP headers instead of meta tags. It also completely ignores what’s in index.template.html and writes the HTML directly.
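    The Link-header assembly from that file can be isolated into a small helper, shown here as a sketch using the same preload-object shape ({ file, asType, extra }) as above:

```javascript
// Build an HTTP Link header value from preload descriptors, as the SSR
// server above does: <url>; rel=preload; as=type, one entry per file.
function preloadHeader (files) {
  return files
    .map(f => `<${f.file}>; rel=preload; as=${f.asType}${f.extra || ''}`)
    .join(', ')
}
```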

    All of the domains are set up through nginx. I’ll see if I can clean that up and post it here, but the basic idea is that the main domain goes to the SSR process (which, as you can see, doesn’t serve any static assets), and the static domain serves the dist/ssr/www folder, with gzip_static and brotli_static enabled (you have to manually compile the brotli plugin for that to work, but it’s worth it for me.)

    Finally, I also serve quasar’s dev mode through nginx as well, just so I can use SSL, test things like my CSP, etc. Basically if I run quasar dev, or build it and run the node process, I just access it through the nginx proxy.

    Edit: an example of the TTFB savings:

    / TTFB: 188 Total: 458 - 41%

    That is logged when I request my homepage. With renderToStream, I get a TTFB of 188ms (minus network latency), while renderToString would take 458ms.

  • @beets WOW - that’s incredible. I think I need some time to understand it. Thank you very much!!!

  • @daniel No problem. I’m not sure if you’re using SSR; it’s definitely a lot to wrap your head around at first, but I like how it’s set up for this project I’m working on. This project is still in development; perhaps once it’s done and I’ve organized my notes a bit more, I can make a sample project or post about it.

  • @beets That would be nice. Personally I have only created SPAs so far. But SPA has a few drawbacks, and I’ve been looking into SSR for some time.

  • @daniel Yeah, I had never used SSR before this; I always figured it was too complex. But this project is an e-commerce site, so SSR is mandatory to get meta tags to work, and for better SEO.

  • @beets nice work in deployment 🙂

    If you’re using nginx, and it is not an enterprise one, I strongly recommend using the openresty fork - you can process browser feedback like CSP reports directly in the lua part of nginx (and many, many more things).

    The second hint for nginx and even lower latencies is using custom-configured caches - caching dynamic pages even for 1-2 minutes gives a BIG win in some setups. Well, nginx/openresty is obviously awesome. Oh, and proper host configuration - files, handles, buffers etc. That is not an art - it is black magic haha. Careful host configuration can give you even lower latency and higher reliability.

    Anyway, I like your style 🙂

  • @beets Definitely. And what about the page speed and size? Our SPA has huge JS file(s), about 700K, which is necessary for the index page (vendor*.js). I tried to reduce its size, but without luck. Is the size of the JS files still the same with SSR?

  • @daniel Bundle size won’t change [with SSR], but is there anything in the vendor chunk that isn’t needed on every page? If so you can add vendor -> remove in quasar.conf.js, like this:

        vendor: {
          remove: [
            // e.g. 'howler', ...
          ]
        },

    Here you can see I remove braintree sdk, the smooth drag and drop module, pdfjs, and howler (an audio library.) Those aren’t needed on most pages, so I remove them from the vendor chunk and they get their own js file through webpack.

    Also, with SSR, you have other problems like component hydration taking a long time. To solve that, I use wherever possible. Then I delay hydrating some components until idle, or other conditions which helps a ton.

    Another problem is that I try to use @click events as little as possible. For example, I won’t use a click event that pushes a new route, since if the page isn’t fully hydrated, the button would do nothing. Instead I just use a router-link. So if a user clicks a link too fast after loading the initial page, they just make a normal http request, and hopefully by the next page they’ve waited a second for the SPA to kick in.

  • @qyloxe Yeah, nginx and host config is definitely black magic. I had considered openresty for other things, but haven’t used it yet; I just have a custom CSP reporting endpoint for that. As far as caching goes, it’s tricky because every user can see different prices, a “you’ve bought this item” icon, etc. I know I could figure out some hole-punching system, or just use ajax after the client loads the page, but I haven’t gone down that rabbit hole yet. I was more focused on the API speed (coming from Magento, now have a custom API with lots of redis caching.)
